While artificial intelligence (AI) expands the boundaries of creativity and improves the efficiency of communication, it also brings concerns such as the proliferation of false information and intellectual property infringement, posing new challenges for the international communication ecosystem. Some media outlets and international organizations have begun exploring how to apply new AI technologies against AI-generated disinformation, "fighting technology with technology and defeating magic with magic."
"The emergence of false information in wars is nothing new, but it has gained unprecedented power in the digital age." At the opening ceremony of the 6th World Media Summit held in Urumqi, Xinjiang on the 14th, he talked about the impact of false information on the real world. Ba Yuhua, head of the communications department of the International Committee of the Red Cross's regional delegation for East Asia, expressed concern about the damage caused.
The application of generative AI and large-model technology has brought the world into an era of digital information in which "what you see is not necessarily to be believed, and what you hear is not necessarily true." Multimodal AI "deepfake" content, including audio, video, and images, has thickened the fog of false information.
"Generative AI technologies will increase the risk of sophisticated misinformation and disinformation. In a world with so much content, the need for trusted news sources, robust fact-checking and transparency will only grow," said Yan Lingsi, vice president of Reuters Asia Pacific, at the summit.
A U.S. organization that assesses and researches news credibility has been tracking generative AI's capacity to create false information. A report it released late last year showed that the number of fake news websites created with AI rose from 49 to more than 600 in seven months.
Although the application of AI has reshaped the media ecosystem, making the information environment increasingly diverse and complex, the media's mission and responsibility to uphold facts and truth remain unchanged.
"Technology has opened up new opportunities for us, but it has also brought new challenges. Artificial intelligence will not only improve the efficiency of news dissemination, but also require us to rethink our ethical standards." Kovacs Tao, CEO of Hungary's ATV Media Group "I firmly believe that truth and facts remain at the core of our media and our responsibility in the digital age," Marsh said.
Confronted with the new ways false information is produced and spread in the digital age, media in many countries share two common concerns: how to strengthen regulation and guidance to curb false information at its source, and how to make good use of new technologies to ensure that content is traceable and credible.
At the summit, Xinhua News Agency's national high-end think tank released a report, "Responsibility and Mission of News Media in the Artificial Intelligence Era." Based on a survey of news media organizations in 53 countries and regions, the report found that 85.6% of respondents support some form of strengthened regulation and governance to address the potential negative effects of generative AI in the media industry.
Chinese and foreign guests at the summit discussed at length how to counter the spread of false information driven by the misuse of AI. "This summit has created an opportunity for global media to enhance the credibility of information in dealing with disinformation, misinformation and hate speech," Chang Qide, the United Nations Resident Coordinator in China, said in a video address to the summit.
United Nations agencies and media in many countries have begun to accelerate the building of a line of defense for "authenticity." Chang Qide said that the United Nations this year released the Global Principles for Information Integrity, urging governments, technology companies, advertisers, public relations firms and the media to work together to build a more ethical information ecosystem.
Globally, organizations such as Xinhua News Agency, Reuters, the British Broadcasting Corporation, and National Public Radio have drawn up AI codes of conduct and guidelines to guard against risks to authenticity arising from the use of AI in the media industry.
"For every threat created with the help of AI, the technology itself can provide an effective 'antidote'." Pavel Negoitsa, president of Rossiya Gazeta, said that AI can detect "deep forgeries" and stop fraudsters actions, etc.
Because the public readily takes AI-generated content at face value, many parties have called for labeling such content to help people distinguish the authentic from the false. In September this year, the Cyberspace Administration of China solicited public comments on the draft Measures for Labeling AI-Generated Synthetic Content, which state that network information service providers should apply labels in accordance with relevant mandatory national standards. In 2023, AFP and other major European media organizations issued a statement on AI regulation and industry initiatives, calling on generative AI models and their users to clearly, specifically, and consistently identify the AI-generated content contained in their output.
Some media organizations have begun cooperating with technology companies to build "authenticity firewalls." The British Broadcasting Corporation, together with American companies such as Adobe, Google, Intel, and Microsoft, established the Coalition for Content Provenance and Authenticity, which works to provide content authenticity labels and provenance-tracing services for digital media.
Looking ahead, the application of AI will undoubtedly make the information environment even more complex. The media should hold fast to the principle that "authenticity is the life of news," dispelling the fog of false information for the public and fostering a clear and healthy public opinion environment.