With the widespread application of artificial intelligence on major technology platforms, the practice of using user comments to generate AI responses has become increasingly common, creating new legal risks for giants such as Google and Meta, particularly around defamation. Legal cases in Australia have shown that platforms may be held liable for hosting defamatory comments, leaving the AI-response model facing serious challenges. The editor of Downcodes offers an in-depth analysis of this issue below.
As artificial intelligence technology develops, technology companies such as Google and Meta are using user comments and reviews to generate AI responses on their platforms, a practice that may expose them to new defamation risks.
Australian legal experts point out that when users post allegedly defamatory comments on Google or Facebook, it is usually the users themselves who face legal liability. However, in a landmark 2021 ruling in the Dylan Voller case, the Australian High Court held that parties hosting defamatory comments, such as the operators of social media pages, may also be held legally responsible for them.
These technology companies have already been sued for defamation several times. In 2022, for example, Google was ordered to pay more than AU$700,000 to former New South Wales deputy premier John Barilaro after hosting a defamatory video. In 2020, Google was also ordered to pay $40,000 in damages after its search results linked to a news article about a Melbourne lawyer, although the High Court later overturned that judgment.
Last week, Google began rolling out a Maps feature in the United States powered by its new Gemini AI, which lets users ask about places or activities and summarizes user reviews of restaurants and other venues. Google has also launched its AI Overviews feature in search results for Australian users, giving them brief summaries of results. Meanwhile, Meta has begun offering AI-generated summaries of comments on its Facebook platform, particularly comments on posts published by news organizations.
Legal expert Michael Douglas said that as these technologies roll out, some cases are likely to end up in court. In his view, if Meta absorbs comments and generates responses that contain defamatory content, it would be regarded as a publisher and could face defamation liability. He noted that while companies might raise an "innocent dissemination" defense, such a defense would have little chance of success.
David Rolph, a senior lecturer in law at the University of Sydney, said that AI repeating defamatory comments could create problems for technology companies, although recent reforms to defamation law may have reduced the risk. He noted that those reforms were enacted before AI technology became widely available and therefore do not fully address the challenges the new technology poses.
In response to these legal risks, Miriam Daniel, vice president of Google Maps, said her team works hard to remove fake reviews and that the AI is designed to present a "balanced view." Meta likewise stated that its AI is still being improved and may produce inaccurate or inappropriate output.
The contest between AI technology and legal risk is intensifying. Even as technology companies enjoy the convenience AI brings, they must confront and actively manage the legal liabilities it can create. Balancing technological innovation with legal compliance will be a hard problem for these technology giants, and we will continue to follow developments in this area.