Pictures are not necessarily the truth! How can AI-generated disaster rumors be debunked?
Author: Eve Cole
Update Time: 2024-11-22 10:36:01
Not long ago, many parts of the country were hit by extreme weather. After the main flood season began, some areas experienced widespread extreme heat, record-breaking rainfall, and unpredictable high winds. These extreme weather events triggered natural disasters such as floods and landslides, seriously affecting people's lives and livelihoods. What is infuriating is that some people took the opportunity to use the Internet and artificial intelligence (AI) technology to create and spread disaster-related rumors, further muddying an already tense information environment. Why do rumors about disasters and risks persist? How has the development of AI technology in recent years affected them? And how can their creation and spread be curbed?

AI has become a new trend in disaster-related rumors during flood season

Browse rumor-refuting platforms such as the China Internet Joint Rumor Refuting Platform in July and August of any year and you will find that rumors involving high temperatures, floods, and other disasters appear frequently. Since July this year, natural disasters such as floods, waterlogging, and mudslides caused by heavy rainfall have continued to escalate, causing significant damage to lives and property. Related rumors have also run rampant and interfered with emergency management. Recently, rumors such as "the flood peak passed through and Hongyadong in Chongqing was flooded", "heavy rains in Changsha, Hunan submerged many motorcycles", "floods in Dazhou, Sichuan caused many deaths", and "heavy rains in Dengzhou, Henan killed more than a thousand people" circulated online and had an extremely bad impact.

Reviewing online rumors involving disasters and risks in recent years, they fall roughly into three categories.

The first is fabricating disasters and casualties. Some netizens invent or exaggerate conditions in disaster-stricken areas, misreporting flood levels, the number of people affected, or the extent of property damage, and spreading unconfirmed casualty figures, including numbers of deaths and missing persons, causing public panic and anxiety. For example, on August 4, a netizen in Jiangsu posted a video claiming that a severe flash flood had occurred in Lushan County, Ya'an, Sichuan, and that more than 30 people had lost contact. The Internet Information Office of the Lushan County Party Committee later issued a statement clarifying that, after verification, no flash floods had occurred in Lushan County in recent days and no one had lost contact.

The second is publishing misleading videos or pictures. The most common method is to pass off the fake as real and misattribute the facts: photos or videos of past disasters or unrelated events are presented as scenes from the current disaster, falsely reporting the situation and misleading the public. For example, on June 19, a netizen posted on a short video platform that a severe mudslide had occurred in Jiuwu Town, Lingchuan County, Guilin City, Guangxi. After verification by the Guilin Internet Information Office, the video was in fact footage of the large-scale mudslide that struck the Izusan area of Atami City, Japan, on July 3, 2021.
The netizen misappropriated the video and seriously misled the public.

The third is fabricating false rescue information. Publishing fake rescue requests, donation accounts, or volunteer recruitment notices not only consumes precious rescue resources but may also delay real rescue operations and undermine disaster relief efforts. In 2023, for example, criminals used the account of the Red Cross Society of Jiaxing City, Zhejiang Province without permission and committed online fraud in the name of "donation rebates".

Not long ago, the WeChat public account of the Hunan Provincial Internet Reporting Center also announced a case: a netizen surnamed Yang used AI technology to splice together pictures of river water and bayberry trees, creating the illusion that heavy rain and rising water had submerged the fruit trees. The realistic pictures led some people to believe them and triggered panic. The AI technology mentioned in this case is one of the newer methods used to create rumors about disasters and risks. In recent years, with the development of technologies such as generative artificial intelligence and video synthesis, disaster-related rumors created with AI and other new technologies have appeared frequently across the country. Compared with traditional disaster-related rumors, they are cheaper to produce and harder to see through.

On April 12, the Ministry of Public Security announced 10 typical cases of combating online rumor-related crimes, including "Jiangxi public security authorities investigated a case of an MCN (multi-channel network) agency using artificial intelligence tools to spread rumors" and "Chongqing public security authorities investigated a case of using artificial intelligence tools to fabricate rumors about an 'explosion accident'". An MCN organization in Nanchang, Jiangxi was found to be using AI software that, given a few keywords, automatically collected online information and generated articles of several hundred to several thousand words. These articles appear information-rich, but most of their content is fabricated and is accompanied by false pictures that seem related to the event, which makes them all the more misleading. At its peak, the organization churned out 4,000 to 7,000 fake articles a day.

AI technology has now reached a relatively mature stage. Many AI tools and services have been commercialized and are easy to obtain, with free or low-cost versions available. No professional programming skills or advanced technical knowledge are required; ordinary people can use off-the-shelf software or online services to generate text, images, or videos, greatly reducing the cost and time needed to produce rumors. AI can also quickly generate rumor content keyed to trending events and automatically splice information into a seemingly plausible narrative, making the false information appear more authentic. For example, on January 23, 2024, in order to gain traffic and profit, the offender Yang Moumou used AI software, together with related information collected online, to generate the false news that "the landslide disaster in Yunnan has killed 8 people" and published it on online platforms, causing adverse social impact. The reporter reviewed the false news released by Yang Moumou.
This AI-written rumor had concise language, comprehensive information, and professional wording. At first glance it looked very much like a news item released by official media, and it could easily mislead readers.

Why do disaster-related rumors persist?

Looking at the motives behind creating and spreading disaster-related rumors, one of the main purposes of many publishers of false information is to exploit people's curiosity and concern, attract public attention with sensational content, boost the views and follower counts of social media accounts, and then convert that attention into commercial gain. In the case mentioned above, in which Jiangxi public security authorities investigated an MCN agency using artificial intelligence tools to spread rumors, the company's daily income was initially estimated to exceed 10,000 yuan. Not long ago, an Internet celebrity posted two flood videos on the Douyin platform showing vehicles being washed away, claiming that recent heavy rainfall had caused downpours and rising water levels in many parts of Yongzhou, Hunan. On June 26, police from the Xiaojiayuan Police Station of the Lengshuitan Public Security Bureau in Yongzhou City questioned the poster, surnamed Yang, in accordance with the law, and found that the video had been downloaded from the Internet and captioned regardless of the facts purely to gain attention and attract traffic; it did not depict local events. For this attention-seeking, traffic-above-all, profit-first behavior, Yang was ultimately given an administrative punishment.

Some rumormongers, out of dissatisfaction with reality or other emotional reasons, vent their attitudes or emotions by maliciously spreading false negative information. Worse still, some try to damage the government's image, sway policy decisions, or steer public opinion by creating chaos. There are also people who do not deliberately spread rumors but, out of ignorance rather than malice, forward and share information from unknown sources without verification.

Humanity has been threatened and struck by natural disasters since its beginnings, and throughout history, at home and abroad, rumors have often accompanied them. The miscellany "Caomuzi" records one such incident: in the 14th year of the Zhizheng era of the Yuan Dynasty, the southern region suffered a rare spell of heavy rain lasting more than 80 days, flooding the land and bringing flood and famine. Rumor had it that large numbers of dragons hidden underground had taken advantage of the heavy rain to emerge, and many people believed it and panicked. In 1978, after three earthquakes struck Thessaloniki, Greece, a rumor arose that "a major earthquake will occur in the city center"; nearly half of the city's 700,000 residents fled, selling fixed assets at low prices and rushing to buy food. In 2019, when super typhoon Lekima made landfall in China, rumors such as "crocodiles suddenly appeared on the streets" and "all manhole covers will be opened at 7 p.m. tonight to aid drainage" appeared on social media, triggering unnecessary public panic. In 2023, a magnitude-6.2 earthquake struck Jishishan, Gansu Province.
During that period, Gansu and Qinghai administratively punished 14 netizens for spreading rumors about the earthquake relief effort and, in accordance with the law, criticized and educated another 121. In particular, the rumor that "Canada Goose down jackets donated to the earthquake-stricken area in Gansu were resold" spread widely online and had a negative impact. There was also a "strange incident" in 2010: rumors appeared on the Internet that destructive earthquakes might strike Taiyuan and other cities in Shanxi. From 1 a.m. to 7 a.m. on the eighth day of the first lunar month, lights burned through the night in dozens of counties and cities, thousands of villages, and nearly ten million households in Jinzhong, Lüliang, Taiyuan and elsewhere in Shanxi, as people braved the cold wind and crowded into the streets to "wait for the earthquake".

Why do rumors related to natural disasters such as earthquakes and floods spread so readily? Because people have a natural fear of such disasters, and that fear makes them more sensitive to related news. Faced with the threat of a possible disaster, people tend to believe the worst-case scenario, which fuels the spread of panic. Moreover, before and after a disaster, official information often cannot reach everyone in a timely and accurate manner, leaving room for rumors to breed. In areas where information is scarce or networks are underdeveloped, people are even more likely to trust information from informal channels. In addition, some people lack the scientific knowledge needed to judge whether information is genuine and simply follow the crowd. These factors together drive the growth and spread of rumors.

"The damage caused by a widely circulated earthquake rumor is no less than that of a moderate earthquake." False information about disasters and risks not only creates unnecessary panic by exaggerating severe weather or disaster consequences; it also clogs already strained information channels and can even drown out the accurate information released by officials and the genuine calls for help that people in disaster-stricken areas post online. Such behavior harms the public interest, damages social credibility, and can disrupt rescue, disaster relief, and emergency response work.

Intelligent review and manual review must be combined to safeguard the authenticity of information during natural disasters

As the saying goes, a rumor takes one mouth to start, but debunking it can run your legs off. Although the rumormongers in the cases above have been punished and major platforms have actively refuted the relevant rumors, it cannot be ruled out that some netizens saw only the rumor and never the clarification, so the impact of a rumor can never be completely eliminated. The rapid spread of disaster-related rumors not only misleads the public but can also cause panic, misallocation of resources, and even secondary disasters. Improving the public's ability to identify AI-generated rumors and strengthening the supervision of online platforms have therefore become urgent tasks. To tackle the chaos of AI-enabled fakery and deepen governance of the online ecosystem, relevant departments and platforms have introduced a number of policies and measures in recent years.
As AI-related laws and regulations have gradually improved, the rights and responsibilities of all parties have become clearer. In December 2021 and November 2022, the Cyberspace Administration of China and other departments successively issued the "Regulations on the Management of Algorithm Recommendations for Internet Information Services" and the "Regulations on the Management of Deep Synthesis of Internet Information Services". Targeting problems such as the use of artificial intelligence algorithms to spread illegal and harmful information, infringe user rights, and manipulate public opinion, the regulations strengthen security management and promote the lawful, reasonable, and effective use of algorithm recommendation and deep synthesis technologies.

Among them, the deep synthesis regulations require that deep synthesis service providers and users must not use such services to produce, copy, publish, or disseminate false news, and stipulate that providers should establish and improve rumor-refuting mechanisms: where deep synthesis services are found to have been used to produce, copy, publish, or disseminate false information, providers must promptly take measures to refute the rumors, keep relevant records, and report to the cybersecurity and informatization department and other competent authorities.

The "Interim Measures for the Management of Generative Artificial Intelligence Services", issued by the Cyberspace Administration of China and other departments in July 2023, set requirements for both providers and regulators. Article 12 stipulates that providers should label generated images, videos, and other content in accordance with the deep synthesis regulations. Article 14 stipulates that when a provider discovers illegal content, it must promptly take disposal measures such as halting generation, halting transmission, and deletion, rectify the problem through measures such as model optimization training, and report to the relevant competent authorities; when a provider discovers that a user is using generative artificial intelligence services for illegal activities, it must, in accordance with the law and its contract, take measures such as issuing warnings, restricting functions, and suspending or terminating service, keep relevant records, and report to the relevant competent authorities. Article 16 stipulates that departments including cybersecurity and informatization, development and reform, education, science and technology, industry and information technology, public security, radio and television, and press and publication shall strengthen the management of generative artificial intelligence services in accordance with their respective responsibilities and the law.

In April this year, the Secretariat of the Central Cyberspace Affairs Commission issued a notice launching a special action to clean up and rectify "self-media" that pursue traffic without limits, requiring stronger labeling of information sources: content generated with technologies such as AI must be clearly marked as technology-generated, and content containing fiction or dramatization must be clearly labeled as fictional.

Take the Douyin short video platform as an example.
On May 10, 2023, Douyin released its "Platform Specifications and Industry Initiatives on Artificial Intelligence Generated Content", noting that AI-generated content is difficult to identify and has brought problems such as false information and infringement. Referring to laws and regulations such as the "Regulations on the Management of Deep Synthesis of Internet Information Services", Douyin put forward eleven platform specifications and industry initiatives, the fourth of which requires publishers to prominently mark AI-generated content, especially easily confused scenes, to help other users distinguish the virtual from the real; content suspected of being generated by AI must therefore be clearly marked. At present, for content suspected of using AI technology, Douyin displays the prompt "This content is suspected to be AI-generated; please screen it carefully" below the work. Some other platforms add fiction labels to content that involves invention or dramatization, and take measures such as banning accounts that break the rules.

An expert who has long followed the field of artificial intelligence said that research on identifying rumors in short videos and pictures actually began more than ten years ago. The relatively mature technology currently in use relies on retrieval and comparison. Because video and image rumors are usually spliced together from material shot at other times or in other places, artificial intelligence can identify a large share of them: by collecting data covering many countries and regions and reaching back years or even more than a decade, and storing enough material across time and space, most such rumors can be caught by retrieving similar pictures or video frames.

The expert also noted that rumors generated entirely from scratch with AI technology are much harder to identify. In recent years, multimodal large-model generation technology has made it easy to produce brand-new short videos or pictures, and the technical route for fabricating rumors has changed accordingly. These new technologies can generate content that has never circulated on the Internet, which greatly increases the difficulty of identification; screening techniques based on retrieval and comparison are of little use against it. How to identify false content produced by new generative models is therefore still being explored at the technical level.
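To make the retrieval-and-comparison idea concrete, here is a minimal, purely illustrative sketch. It assumes the open-source Pillow and imagehash Python libraries and uses hypothetical file names; real platform-scale screening systems are far more sophisticated (video frames, distributed indexes, multiple hash types), but the core step of matching a newly posted picture against an archive of earlier disaster imagery looks roughly like this:

    # Illustrative sketch only: flag a newly posted picture if it closely
    # matches previously archived disaster photos. Assumes the open-source
    # Pillow and imagehash libraries; directory and file names are hypothetical.
    from pathlib import Path

    import imagehash
    from PIL import Image

    ARCHIVE_DIR = Path("archive_images")   # previously collected disaster photos
    MATCH_THRESHOLD = 8                    # max Hamming distance counted as "similar"

    def build_index(archive_dir: Path) -> dict:
        """Compute a perceptual hash for every archived image."""
        index = {}
        for path in archive_dir.glob("*.jpg"):
            with Image.open(path) as img:
                index[path.name] = imagehash.phash(img)
        return index

    def find_matches(query_path: Path, index: dict) -> list:
        """Return archived images whose hash is close to the query image's hash."""
        with Image.open(query_path) as img:
            query_hash = imagehash.phash(img)
        return [
            (name, query_hash - archived_hash)   # '-' gives the Hamming distance
            for name, archived_hash in index.items()
            if query_hash - archived_hash <= MATCH_THRESHOLD
        ]

    if __name__ == "__main__":
        index = build_index(ARCHIVE_DIR)
        matches = find_matches(Path("suspect_post.jpg"), index)
        if matches:
            print("Possibly recycled footage; similar archived images:", matches)
        else:
            print("No close match found in the archive.")

A video can be screened in a similar way by sampling frames and comparing each frame against the archive, which is why recycled clips such as the Atami mudslide footage mentioned above can be flagged quickly, whereas freshly generated AI content has no earlier copy to match against.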
Zhang Ying, a specially appointed expert of the China Association for Science and Technology's rumor-refuting platform, believes that in the face of AI-generated rumors about disasters and risks, improving the public's ability to identify them and strengthening the supervision of online platforms are two mutually reinforcing tasks. Zhang Ying suggested focusing on four areas of work: first, publicize scientific and technological progress and give people hope; second, refute rumors, clear up misunderstandings, and help people through disasters; third, pay attention to less-developed regions and vulnerable groups; fourth, emphasize public participation and increase the public's sense of gain.

On the supervision of online platforms, Zhang Ying offered the following suggestions.

The first is to combine intelligent review with manual review. Online platforms should introduce advanced AI technology to assist content review and improve its efficiency and accuracy, while also establishing a manual review mechanism that gives suspected rumors a second look, ensuring that review results are fair and authoritative.

The second is to establish a rapid response and rumor-refuting mechanism. Once a rumor involving disasters or risks is discovered, a platform should immediately activate its rapid response mechanism, promptly remove the rumor, and release refuting information through official channels, while working through a linkage mechanism with government departments and authoritative media to jointly combat the spread of rumors.

The third is to strengthen the management and guidance of user behavior. Platforms should tighten supervision of user behavior, punishing and warning users who maliciously spread rumors, while reminding the public to obtain information through authoritative channels and guiding users to take part in information dissemination rationally through pop-up prompts, user education, and similar means, so as to jointly maintain a healthy online environment.

The fourth is to promote data sharing and cross-platform collaboration. Different platforms should be encouraged to establish data-sharing mechanisms for information sharing and coordinated action. Cross-platform collaboration strengthens public opinion monitoring, gives a more complete picture of how far and fast a rumor is spreading, and provides solid support for targeted countermeasures.

Zhang Ying also offered suggestions on how the public can identify AI-generated rumors.

The first is to strengthen information literacy education. The public should receive systematic information literacy education, learning how to judge the reliability of information sources and understanding the limitations and potential risks of AI-generated content. In ordinary times, industry authorities, mainstream media, and science popularization experts should also carry out popularization and education in various forms. For example, the China Association for Science and Technology has built a rumor-refuting platform that debunks rumors and popularizes science in areas of public concern such as life and health, food and medicine, and emergency safety, and the Cyberspace Administration of China publishes an annual list of refuted rumors, both with good results. At the same time, relevant departments should raise the public's alertness to rumors and ability to identify them through case analyses, simulation exercises, and the like.

The second is to verify information through multiple channels. When faced with disaster-related information, the public is encouraged to check it against multiple sources: besides information released by official media and government agencies, search engines, official accounts on social media, and third-party fact-checking organizations can all help in judging whether the information is genuine. Take earthquake forecasting as an example: short-term forecasting remains an unsolved problem worldwide.
Relevant laws and regulations stipulate that only governments at or above the provincial level have the authority to issue earthquake predictions, so earthquake rumors should not be readily believed.

The third is to cultivate critical thinking. The public should develop critical thinking and learn to analyze and evaluate information independently, treating information that is exaggerated, emotionally charged, or lacking concrete evidence with caution, and neither believing nor spreading it. As far as earthquakes are concerned, they are not terrifying in themselves: as long as buildings are fortified to standard and people are well prepared, earthquakes generally will not cause catastrophic consequences, so there is no need for excessive fear. As for emergency evacuation, there is no one-size-fits-all method; just as responses to fires differ with their location and scale, the public should think and respond scientifically according to the place, the time, and the people involved.