Apple's new AI news-summarization feature has drawn heavy criticism for repeatedly generating misinformation. Designed to give users convenient access to news, it has instead incorrectly summarized reports from the BBC and the New York Times. The international press-freedom organization Reporters Without Borders has publicly called on Apple to withdraw the feature immediately, arguing that it spreads false information, damages the credibility of the media, and threatens the public's right to reliable information. The episode highlights the risks emerging AI technologies pose in news distribution, and the difficulties media organizations face amid rapidly evolving technology.
The controversy came to a head after the feature incorrectly summarized a BBC report in a notification pushed to users, prompting Reporters Without Borders to demand its immediate removal.
Specifically, Apple's AI system incorrectly stated that Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson, had shot himself. The BBC's original report said no such thing. The BBC says it has contacted Apple about the issue and hopes to resolve it as soon as possible, but has not confirmed whether Apple responded.
Vincent Berthier, head of the technology and journalism desk at Reporters Without Borders, said Apple should take responsibility for the incident and withdraw the feature promptly. He noted that AI systems are probability machines, and that facts cannot be decided by chance. He stressed that automatically generated misinformation not only damages the credibility of the media but also threatens the public's right to reliable information.
The organization also expressed concern about the risks emerging AI tools pose to the media, arguing that the technology is too immature to be trusted with disseminating information to the public.
In response, the BBC said it is vital that the public can trust the information and news it publishes, and that everything it puts out, including push notifications, must be accurate and trustworthy.
Apple introduced the generative AI tools in the United States in June this year, touting their ability to summarize content in concise paragraphs, bullet points, or tables. The feature is meant to help users keep up with news more conveniently; users can opt in to receive such grouped notifications on iPhone, iPad, and Mac devices. However, since the feature officially rolled out in late October, users have found that it also incorrectly summarized a New York Times report as claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested, when in fact the International Criminal Court had merely issued an arrest warrant for him.
The core problem is that news organizations have no control over the summaries Apple's AI produces. Some publishers have chosen to use AI to assist with article writing, but that is their own decision. Apple's summaries, by contrast, appear under the publisher's name, which not only spreads potential misinformation but also undermines the media's credibility.
Apple has not responded to the incident. The controversy also reflects the broader challenges the news publishing industry faces in a rapidly changing technological environment. Since the launch of ChatGPT, several technology giants have released their own large language models, many of which have been accused of using copyrighted content, including news reports, in training. Some media organizations have taken legal action, while others, such as Axel Springer, have chosen to reach licensing agreements with the developers.
Highlights:
- Apple's new AI feature has been widely criticized for producing false information.
- Reporters Without Borders called on Apple to withdraw the feature, arguing that AI technology should not be used for news summarization.
- The BBC said it is vital to ensure the credibility of its information, and that it has contacted Apple about the issue.
The incident has triggered broad discussion of the ethics and safety of AI in news distribution. It also serves as a warning that technology companies must be cautious when developing and deploying AI, both to avoid spreading false information and to safeguard the public interest. How Apple responds and how it improves its AI features going forward will be closely watched.