During this year's National Day holiday, a large number of spoof videos featuring "Lei Jun AI dubbing" appeared on Douyin, Bilibili and other platforms, causing considerable trouble for Lei Jun himself and for Xiaomi.
In these spoof videos, "Lei Jun" offered sharp comments on hot topics such as traffic jams and holidays, along with some crude insults. Some of the short parody clips racked up hundreds of millions of views within a few days. In response, Lei Jun released a short video at the end of October asking everyone to stop: "Some netizens complained that I was scolded for seven days straight during the National Day holiday. It really bothered me and made me very uncomfortable. This is not a good thing."
Commenting on the incident, Gao Ting, vice president of research at Gartner, told Jiemian News that false information is still false information in essence, but its harm has been amplified in the era of large models.
According to an information security practitioner, the surge in "Lei Jun AI dubbing" spoof videos mainly stems from an outside AI company promoting its voice-cloning feature. The public recognizes this type of meme as AI imitation, so the risk of spreading false information is limited. The governance difficulty for platforms is that they cannot tell whether a celebrity objects to such trolling or welcomes it for marketing purposes; until the person involved clearly expresses approval or objection, it is hard for a platform to formulate a governance policy.
By contrast, news-style AI rumors often cause greater social harm. A search by Jiemian News found that similar incidents have occurred many times this year.
Recently, an article titled "A Shandong aunt was fined RMB 160,000 for selling fruit at a stall; the regulatory bureau responded that she had not paid the RMB 1.45 million fine and the court would enforce it" circulated widely on WeChat public accounts and other platforms, sparking heated debate among netizens. After verification by the relevant authorities, however, the article turned out to be AI-generated content fabricated by an MCN company in Changsha to boost the public account's readership and the company's revenue. Before that, AI had also been used on several short-video platforms to fabricate news such as an "earthquake" or a "cash-transport truck robbery", causing a degree of panic among netizens who did not know the truth.
Content platforms have made clear that they will crack down on fake news produced by generative AI. A person in charge at the Douyin Security Center said the platform deals severely with false information once it is discovered, whether or not it is AI-generated. For AI-generated content that complies with community rules, Douyin requires publishers to label it prominently, helping other users distinguish the virtual from the real, especially in easily confused scenarios.
However, Jiemian News has learned that the difficulty in curbing AI-generated fake news lies in the high cost of verification. Platforms find it hard to judge the authenticity of information on their own, and in many cases regulators must step in; by the time verification is complete, the information has often already spread widely.
Beyond damaging corporate reputations and disrupting normal social order, false information also threatens corporate network security through attacks such as phishing and account takeover. In one industry incident, fraudsters used AI to imitate the voice of an energy company's CEO and defrauded the company's executives of millions of dollars; the AI-generated timbre, tone and accent were so lifelike that the executive never realized it was a fraud.
Gao Ting said that generative AI has enabled more destructive and realistic attacks that are harder for humans to identify and for traditional technologies to block, and that will cause greater losses to enterprises.
The cost of producing false information keeps falling while its negative impact grows more pronounced, and traditional manual verification alone can hardly solve the problem. The information security practitioner quoted above said that emerging technologies are needed for false-information governance: if some people use technical means to create false information, security experts should be able to identify its characteristics, counter it with technical means, and address the problem at its source.
In the field of information security, many technical experts have reached a preliminary consensus on false-information governance and regard "disinformation security" as a new direction of technical research.
Specifically, "disinformation security" covers a set of technologies that ensure integrity, assess authenticity, prevent impersonation, and track the spread of harmful information as it circulates. The underlying approach uses large language models to monitor content on social media, verify the integrity of real-time communications, confirm the authenticity of third-party media, and so on.
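One of the ingredients named above, verifying the integrity of communications, can be illustrated with a minimal sketch. The example below uses a standard HMAC-SHA256 tag: a trusted source signs a message, and any recipient holding the shared key can detect whether the content was altered in transit. This is a generic cryptographic illustration, not any vendor's actual implementation; the key and messages are placeholders.

```python
import hashlib
import hmac

# Placeholder key; a real deployment would provision this securely.
SECRET_KEY = b"example-shared-key"

def sign_message(message: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce an HMAC-SHA256 tag that travels alongside the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"Official statement issued at 10:00"
tag = sign_message(original)

print(verify_message(original, tag))              # True: content untouched
print(verify_message(b"Fabricated claim", tag))   # False: content was altered
```

Integrity checks like this only prove a message was not tampered with after signing; assessing whether the content itself is truthful is the harder problem that the LLM-based monitoring described above tries to address.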
Gartner lists "disinformation security" among its top ten strategic technology trends predicted for 2025, and believes the technology will deliver significant commercial benefits within the next one to three years.
Research practice has already shown some of these technologies to be effective. A study from the University of California found that contextual labels giving users details such as background context, explanations of errors, and links to relevant authoritative content can help curb the spread of false content. Overseas platforms such as Facebook and X are using artificial intelligence and other technical means to train systems that automatically detect and label massive volumes of information, overcoming the limits of manual verification. These studies and practices offer useful reference points for domestic content platforms.
For problems such as phishing scams driven by generative AI, enterprises and organizations can respond with complete technical solutions. Jiemian News has learned that many domestic Internet companies, including 360 and Ant Financial, are already developing solutions based on large models, trying to identify more risks through data access, analysis and judgment, and traceability investigation.