An MCN agency uses artificial intelligence programs to churn out thousands of fake news items a day, sending large volumes of false information viral across the Internet; an account on an online fiction platform relies on AI "creation" to update more than a dozen e-books every day, yet the writing is illogical and rhetorically hollow; medical papers have been retracted for using fabricated illustrations generated by AI... It is now increasingly common for AI to generate, forge or tamper with text, images, audio and video, and the resulting flood of shoddy "information garbage" that is hard to tell from the real thing has triggered discussions about "AI pollution".
Since last year, generative artificial intelligence has set off wave after wave of enthusiasm around the world, and its disruptive applications have benefited many industries and Internet users. However, every coin has two sides: generative artificial intelligence has also brought negative side effects, and the "AI pollution" caused by "information garbage" is becoming increasingly prominent.
A research report released in April this year by the New Media Research Center of the School of Journalism and Communication at Tsinghua University showed that the number of AI-generated rumors about the economy and businesses grew by 99.91% over the past year. The US news-rating organization NewsGuard reported that the number of websites publishing AI-generated false articles has surged by more than 1,000% since May 2023, spanning 15 languages. Some experts note that the "information garbage" produced by AI is enormous in volume, difficult to identify, and costly to screen.
The harm caused by "AI pollution" is obvious. First, "AI pollution" can trap netizens in cognitive illusions. On certain knowledge-sharing platforms, AI that appears to know everything from astronomy to geography generates content that is in fact hollow and stilted. In the absence of critical thinking, the "knowledge system" rapidly woven by AI may, on the one hand, erode people's capacity for critical thought, and on the other, draw them into cognitive illusions, sowing public confusion, distorting the collective understanding of reality and scientific consensus, and ultimately leaving people "led by the nose" by AI. For the younger generation that grew up with the Internet in particular, once their cognition is shaped by "information garbage", the consequences will be disastrous.
"AI pollution" also backfires on the development of the AI industry itself. As is well known, the accuracy of AI models depends largely on the quality of their training data. If false and spam content generated by AI flows back onto the Internet and becomes new training data for AI models, this "garbage in, garbage out" cycle may cause the quality of AI output to fall off a cliff, to the detriment of the entire AI industry. For example, one Internet company used search engine optimization to push AI-generated articles to the top of search results, making it difficult for users to retrieve high-quality information and drawing widespread criticism.
In addition, "AI pollution" touches on law, ethics and even social stability. False content produced with AI may infringe intellectual property rights and undermine copyright rules, violate personal privacy and leak identity information, and carries the risk of being abused to disrupt and manipulate public opinion. Illegal activities carried out with AI will also make social governance more difficult.
Clearly, cleaning up the Internet's "AI pollution" is imperative. According to the 54th Statistical Report on China's Internet Development released by the China Internet Network Information Center, China had nearly 1.1 billion Internet users as of June this year. The Internet has become a new home for human life, and the younger generation are its "digital natives". Rectifying "AI pollution" is therefore a necessary step toward a clean cyberspace and a good online life for every netizen.
First, strengthen governance at the source of AI learning and generation mechanisms. Clarify AI platforms' responsibility for controlling source materials and supervising generated content, improve the rules for AI content generation, require AI-generated content to carry prominent labels, and improve the transparency and explainability of AI technology.
Second, strengthen the screening and supervision of AI-generated content. Relevant departments and enterprises should focus their oversight on screening and reviewing such content: they can develop dedicated review algorithms, regulate how generated content flows into and spreads through the public opinion field, and promptly detect and remove low-quality or false material.
Third, improve users' ability to identify AI-generated content. Netizens should treat online information rationally and strengthen both their vigilance and their powers of discernment: they can use reverse search tools to check a piece of content's source and author, and analyze its language and structure, so as to spot what is fake and what is shoddy.
Cyberspace is not a dumping ground where garbage can be thrown at will. Combating "AI pollution" and building a clean, safe online home requires joint governance by all relevant departments and the participation of society as a whole.