Artificial intelligence is developing rapidly, and deepfake technology, one of its applications, is profoundly affecting our lives. The technology can generate highly realistic fake videos and images; it brings convenience, but it also creates serious risks, such as fraud committed with celebrities' likenesses. This article discusses the challenges posed by deepfake technology and possible countermeasures, with the aim of raising public risk awareness and maintaining social order.
In recent years, AI has penetrated every aspect of life, from voice assistants to autonomous driving. Its widespread application, however, also carries potential risks, among which deepfake technology has drawn particular public attention.
Deepfake technology uses algorithms to generate highly realistic false content: by learning from large amounts of real data, it produces videos or images that closely resemble real people or scenes. Although the technology demonstrates the power of AI, it has also bred fraud. For example, there have recently been incidents in which Dr. Zhang Wenhong's image and voice were used without authorization in livestream sales. In the fake video, a synthetic figure promoted a product, and more than 1,200 units were sold. The incident drew strong objections from Dr. Zhang Wenhong and the public alike.
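To make the phrase "learning from real data to generate similar content" concrete, here is a minimal, illustrative sketch of the shared-encoder, per-identity-decoder autoencoder design that classic face-swap pipelines are built on. It is not the tooling used in the incidents described above; the data here is random tensors standing in for aligned face crops, and all sizes are assumptions chosen for brevity.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder for aligned face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder for aligned face crops of person B

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, then decode it with person B's decoder.
fake_b = decoder_b(encoder(faces_a))

The point of the sketch is only that the model never copies pixels; it learns statistical patterns from many real images, which is why the output can look convincing while belonging to no real footage.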
Image note: the accompanying picture was generated by AI; image licensing service provider: Midjourney.
Tang Jiansheng, deputy secretary-general of the Shanghai Consumer Rights Protection Commission, said that impersonating celebrities through AI technology in this way constitutes a serious infringement of consumers' rights. Similar cases include spoof videos made with Lei Jun's likeness during the National Day holiday and the impersonation of Andy Lau's voice to attract traffic; the companies and celebrities involved have spoken out, warning the public to stay vigilant.
Experts point out that current AI technology can easily clone another person's face and voice, and the generated content is extremely realistic. The technology is not without flaws, however: anomalies can still be spotted by looking closely at how the face blends with the background, or at whether the voice matches the mouth movements (a simple sketch of this second check follows below). In addition, such forgeries remain difficult to pull off convincingly in real-time live streams.
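As a rough illustration of the voice-versus-mouth check, the sketch below correlates two signals over small frame offsets: how open the mouth is in each video frame, and how loud the audio is at the same moment. Both inputs here are toy placeholder arrays; in practice they would come from a facial-landmark tracker and the audio track's per-frame energy, and the function name and threshold-free scoring are illustrative assumptions, not a production detector.

import numpy as np

def audio_visual_sync_score(mouth_openness: np.ndarray,
                            audio_energy: np.ndarray,
                            max_lag: int = 5) -> float:
    """Best normalized correlation between the two signals over small
    frame offsets; genuine footage usually scores higher."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = m[lag:], a[:len(a) - lag]
        else:
            x, y = m[:lag], a[-lag:]
        n = min(len(x), len(y))
        if n > 1:
            best = max(best, float(np.corrcoef(x[:n], y[:n])[0, 1]))
    return best

# Toy example: a clip whose mouth motion tracks the audio vs. one that ignores it.
rng = np.random.default_rng(0)
frames = 300
audio = np.abs(np.sin(np.linspace(0, 20, frames))) + 0.1 * rng.random(frames)
mouth_real = np.roll(audio, 2) + 0.1 * rng.random(frames)   # roughly in sync
mouth_fake = rng.random(frames)                              # unrelated motion

print("real-looking clip:", round(audio_visual_sync_score(mouth_real, audio), 2))
print("suspicious clip:  ", round(audio_visual_sync_score(mouth_fake, audio), 2))

A low score does not prove forgery on its own, but it is the kind of inconsistency that careful viewing, or simple tooling like this, can surface.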
Legal professionals have made it clear that unauthorized use of another person’s image or voice is illegal. Zhu Wei, associate professor at China University of Political Science and Law, emphasized that according to the Civil Code, this behavior infringes upon personality rights; according to the Cybersecurity Law, the relevant content is illegal information, and the publisher may even face criminal liability.
For consumers, if they buy goods because an AI-faked celebrity endorsed them, they can demand that the merchant "refund one and compensate three" under the Consumer Rights Protection Law, with a minimum compensation of 500 yuan (a small worked example follows below). At the same time, short-video platforms should also assume regulatory responsibility, strengthening the review and punishment of such content to prevent the wide spread of illegal information.
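The arithmetic of the rule as described in this article is simple: the consumer gets the purchase price back, plus three times the price as compensation, and if triple the price falls below 500 yuan the compensation is raised to 500. The small sketch below only illustrates that reading; it is not legal advice.

def refund_and_compensation(price_yuan: float) -> tuple[float, float]:
    refund = price_yuan                      # "refund one": the price itself
    compensation = max(3 * price_yuan, 500)  # "compensate three", with a 500-yuan floor
    return refund, compensation

for price in (99, 200, 1000):
    refund, comp = refund_and_compensation(price)
    print(f"price {price} yuan -> refund {refund}, compensation {comp}")

So a 99-yuan purchase would still carry the 500-yuan minimum compensation, while a 1,000-yuan purchase would carry 3,000 yuan.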
The rise of deepfake technology reminds the public that, while enjoying the convenience of AI, they must also stay alert to its potential risks.
Meeting the challenge of deepfake technology requires government, enterprises, and individuals to work together: strengthening legislation and supervision, improving technical detection capabilities, and jointly building a safe and reliable online environment. Only then can AI technology be put to better use, its malicious abuse be curbed, and social order and the public interest be preserved.