Source: MIT Technology Review
In late October, News Corp filed a lawsuit against Perplexity AI, a popular AI search engine. At first glance, this may not seem unusual. After all, it is one of many suits demanding that AI developers provide attribution, obtain consent, or pay compensation for their use of data. This lawsuit, however, is different, and it may prove the most consequential of them all.
At the heart of the dispute is the future of AI search: chatbots capable of retrieving and summarizing information from across the web. As they grow in popularity, these AI "answer engines" may replace traditional search engines as our gateway to the internet. Unlike ordinary AI chatbots, which merely reproduce (often unreliably) what they absorbed in training, AI search tools such as Perplexity, Google's Gemini, or OpenAI's newly launched SearchGPT are designed to retrieve and repackage information from third-party websites, serving users short summaries with links to a handful of sources ranging from research papers to Wikipedia articles to YouTube transcripts. The AI system does the reading and writing, but the information comes from outside.
At its best, AI search can better infer a user's intent, amplify quality content, and synthesize information from multiple sources. But if AI search becomes our main gateway to the web, it threatens an already precarious digital economy. Today, the production of online content depends on fragile incentives tied to web traffic: advertising, subscriptions, donations, sales, or brand exposure. If AI search walls off web content behind "omniscient" chatbots, it could deprive creators of the traffic and attention they need to survive.
If AI search disrupts this ecosystem, existing law is unlikely to help. Governments, already convinced that content is falling through the cracks of the legal system, are beginning to reach for other tools to regulate how value flows online. In this narrow window, the AI industry should proactively build a smarter content marketplace, before governments impose interventions that are ineffective, benefit only a select few, or restrict the free flow of ideas across the web.
Copyright won’t solve the disruptive impact of AI search
News Corp argues that extracting information from its content for AI search amounts to copyright infringement, accusing Perplexity AI of "competing for readers" while free-riding on its journalism. The New York Times appears to share this view: it sent Perplexity AI a cease-and-desist letter in mid-October.
In some ways, the claims against AI search are stronger than those in cases over AI training. Training tends to extract generalized patterns from vast amounts of content, so any individual work contributes relatively little. In search, by contrast, a work's value lies in its novelty or uniqueness, or in the distinct authority of its creator. AI search is designed to reproduce specific features of the underlying works, invoke the credibility of their authors, and stand in as a substitute for the original content.
Even so, News Corp faces an uphill battle in proving that Perplexity AI infringes copyright when it processes and summarizes information. Copyright does not protect mere facts, or the creative, journalistic, and scholarly labor that goes into producing them. U.S. courts have historically sided with technology defendants who use content for sufficiently transformative purposes, and that trend seems likely to continue. And if News Corp were to win, the precedent would reach far beyond Perplexity AI. Restricting the use of fact-rich content for non-creative or non-expressive purposes could curtail access to abundant, diverse, high-quality data and hinder broader efforts to improve the safety and reliability of AI systems.
Governments are learning how to regulate the distribution of value online
If existing law cannot resolve these issues, governments may reach for new laws. Emboldened by earlier fights with traditional search and social media platforms, they may follow the example of the media bargaining laws enacted in Australia and Canada, or the similar proposals advanced in California and the U.S. Congress. These reforms compel certain platforms to pay certain media organizations for displaying their content, such as news snippets or knowledge panels. The EU has imposed comparable obligations through copyright reform, while the UK has introduced broad competition powers that could be used to force bargaining where necessary.
However, forced bargaining is a blunt instrument for a complex problem. These reforms favor a select group of news organizations and rest on the assumption that platforms like Google and Meta exploit publishers. In practice, it is hard to determine how much platform traffic actually stems from news: estimates range from 2% to 35% of search queries, and news accounts for as little as 3% of content on social media. At the same time, platforms deliver significant value back to publishers by amplifying their content, and there is no consensus on how this two-way exchange of value should be divided. More controversially, these bargaining rules restrict not only the reproduction of content but also indexing and linking, threatening the free linking on which the web is built. Moreover, bargaining laws aimed at traditional media benefit only around 1,400 publications in Canada, 1,500 in the EU, and 62 organizations in Australia, ignoring the countless everyday creators and users who drive platform traffic.
The industry should seize its narrow window to build fair reward systems
Even so, the mere threat of intervention may prove more influential than any actual reform. AI companies recognize that litigation can escalate into regulation: Perplexity AI, OpenAI, and Google have all begun striking deals with publishers and content platforms, some covering AI training, others focused on AI search. But like the earlier bargaining laws, these agreements benefit only a handful of companies, and some of those partners, such as Reddit, have yet to commit to passing the revenue on to their own creators.
This strategy of selective appeasement is unsustainable. It ignores the vast majority of online creators, who cannot easily opt out of AI search and lack the bargaining power of a legacy publisher. It blunts the urgency of reform by pacifying its loudest critics. And by legitimizing a few AI companies through convoluted commercial deals, it makes it difficult for new entrants to secure equal terms or equivalent immunity, helping to entrench a new wave of search giants. In the long run, it could also create perverse incentives for AI companies to favor cheap, low-quality sources over expensive, high-quality news or content, fostering a culture of increasingly indiscriminate information consumption.
Instead, the AI industry should invest in frameworks that let creators of every kind benefit from sharing quality content. From YouTube to TikTok to X, technology platforms have shown that they can administer novel reward mechanisms for creators in complex content markets. Indeed, monetizing everyday content more equitably is a core ambition of the venture-backed "web3" movement. The same logic applies to AI search: if a query generates substantial engagement but users never click through to the original sources, commercial AI search platforms should find ways to attribute that value to creators and share it back.
Of course, it may be that our digital economy was flawed from the start. Relying on sporadic advertising revenue may be unsustainable, and the attention economy has done real damage to privacy, information integrity, and online democracy. Supporting high-quality journalism and fresh content may require other forms of investment or incentive.
But we should not abandon the pursuit of a fairer digital economy. AI search makes negotiation over content more urgent, and more feasible, than ever. Pioneers in the AI industry should seize this opportunity to lay the groundwork for smart, equitable, and scalable reward systems. If they fail to act, governments now have both the tools and the confidence to impose their own vision of shared value.
Benjamin Brooks is a fellow at Harvard's Berkman Klein Center, where he focuses on regulatory and legislative responses to AI. He previously led public policy work at Stability AI, which develops open models for image, language, audio and video generation. His views do not represent the position of any affiliated organization.