As artificial intelligence becomes pervasive in social media and search, tech giants Google and Meta are using AI to generate summaries of, and replies to, user comments, a move that raises fresh legal risks. In defamation cases especially, it may expose the platforms to greater liability. Australian High Court jurisprudence supplies the legal basis for this risk: a platform is not merely a carrier of information but may also be treated as a publisher. This article analyzes the defamation risks Google and Meta face in using AI to generate content, and the measures they are taking to reduce them.
As AI technology evolves, companies such as Google and Meta are drawing on user reviews and comments to generate AI responses on their platforms, which could create new defamation exposure.
Australian legal experts note that when users post allegedly defamatory comments on Google or Facebook, it is ordinarily the users themselves who bear legal liability. However, in a significant 2021 ruling in the Dylan Voller case, the Australian High Court held that platforms hosting defamatory comments, such as social media pages, may also be liable for those comments as publishers.
These companies have already faced repeated defamation suits. In 2022, Google was ordered to pay more than $700,000 to former NSW Deputy Premier John Barilaro for hosting a defamatory video. In 2020, Google was also ordered to pay $40,000 over search results linking to a news article about a Melbourne lawyer, although that judgment was later overturned by the High Court.
Last week, Google began rolling out a Maps feature in the United States built on its new AI model Gemini, which lets users ask about places to go or things to do and summarizes user reviews of restaurants and locations. At the same time, Google launched an AI Overviews feature in search results for Australian users, giving them brief summaries of search results. Meta, for its part, has begun offering AI-generated summaries of comments on Facebook, particularly comments on posts by news organizations.
Legal expert Michael Douglas said that as these technologies roll out, some cases are likely to end up in court. In his view, if Meta ingests user comments and generates responses containing defamatory content, Meta would be regarded as a publisher and could face liability for defamation. He noted that while the company might raise a defence of "innocent dissemination", that defence would have limited prospects of success.
David Rolph, a senior lecturer in law at the University of Sydney, said AI repeating defamatory comments could cause problems for tech companies, although recent defamation law reforms may reduce the risk. He pointed out that those reforms were enacted before AI technology came into wide use, and therefore do not fully address the challenges the new technology poses.
Addressing these legal risks, Miriam Daniel, vice president of Google Maps, said her team works to remove fake reviews and that the AI is designed to present a "balanced perspective." Meta likewise said its AI is still being improved and may produce inaccurate or inappropriate output.
Key points:
Australian legal experts warn that AI-generated content from Google and Meta could face new defamation risks.
A 2021 Australian High Court ruling established that platforms hosting defamatory comments may also bear legal liability.
As they roll out AI features, Google and Meta are stepping up efforts to manage fake reviews in order to reduce legal risk.
In short, AI technology has brought technology companies convenience, but also new legal challenges. How to balance innovation against risk control will be a key question for these companies going forward. The refinement of laws and regulations, together with continued improvement of the technology, will jointly shape how AI develops from here.