Microsoft recently released a white paper warning that malicious uses of generative AI are becoming increasingly widespread, spanning fraud, child sexual abuse material, election manipulation, and non-consensual intimate images. The report frames these not only as technical challenges but as serious threats to society, and offers UK policymakers a practical approach that combines legal, technical, and societal cooperation to confront this emerging problem. The white paper details the ways AI technology is being used to commit crimes, argues for stronger legislation, continued technological innovation, and greater public awareness, and calls on all parties to work together to build a safe and trustworthy environment for AI.
The white paper offers an in-depth analysis of the malicious use of generative artificial intelligence (AI), covering fraud, child sexual abuse material, election manipulation, and non-consensual intimate images. Microsoft stressed that these problems are not only technical challenges but also a major threat to society.
According to the white paper, criminals are increasingly harnessing generative AI to cause harm. These abuses include fraud built on AI-generated false information, the creation of child sexual abuse material, election manipulation through deepfakes, and the production of non-consensual intimate images that disproportionately target women. "We can never forget that abuse of AI has a profound impact on real people," said Hugh Millward, vice president of external affairs at Microsoft.
The white paper is aimed specifically at policymakers in the UK and proposes a comprehensive approach built on six core elements to address these issues: a strong safety architecture, durable media provenance and watermarking tools, modern laws to protect the public, strong collaboration among industry, government, and civil society, safeguards against the misuse of services, and public education.
In its specific recommendations to UK policymakers, Microsoft calls on AI system providers to inform users that content is AI-generated when they interact with an AI system. It also recommends deploying advanced provenance tools to label synthetic content, and suggests the government lead by example by verifying the authenticity of its own media content. Microsoft further stressed the need for new laws prohibiting AI-enabled fraud to protect the integrity of elections, and for stronger legal frameworks protecting children and women from online exploitation, including criminalizing the production of sexual deepfakes.
Microsoft also highlighted the importance of metadata technologies that indicate whether a piece of media was generated by AI. Similar projects have been pushed by companies such as Adobe and aim to help people identify the provenance of images. However, Microsoft believes that standards like “Content Credentials” need policy measures and public awareness to be effective.
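The white paper itself does not include implementation details, but the general idea behind provenance metadata can be illustrated with a few lines of code. The hedged Python sketch below, assuming only the Pillow imaging library, writes a simple "generated by AI" label into a PNG's text metadata and reads it back; the key names and tool name are hypothetical. Real Content Credentials embed cryptographically signed C2PA manifests rather than plain text fields.

```python
# Minimal sketch of provenance metadata, assuming the Pillow library.
# This is NOT the C2PA / Content Credentials standard, which uses
# cryptographically signed manifests; it only illustrates the idea of
# attaching machine-readable provenance labels to media.
from PIL import Image, PngImagePlugin

# Create a stand-in image and attach provenance text chunks.
img = Image.new("RGB", (64, 64), "white")
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")            # hypothetical label key
meta.add_text("generator", "example-model-v1")   # hypothetical tool name
img.save("labeled.png", pnginfo=meta)

# A downstream consumer can read the label back from the file's metadata.
with Image.open("labeled.png") as reopened:
    print(reopened.text.get("ai_generated"))  # -> "true"
    print(reopened.text.get("generator"))     # -> "example-model-v1"
```

Because plain metadata like this can be stripped or forged by anyone who re-saves the file, durable provenance schemes pair it with digital signatures and watermarks, and, as the white paper argues, with policy measures and public awareness.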
In addition, Microsoft has partnered with organizations such as StopNCII.org to develop tools that detect and remove abusive images. Victims can report abuse through Microsoft's centralized reporting portal, and the “Take It Down” service run by the National Center for Missing and Exploited Children provides additional support. "AI abuse is likely to be a long-term problem, so we need to redouble our efforts and collaborate creatively with tech companies, philanthropic partners, civil society and government to address this issue. We can't fight alone," Millward said.
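Services like StopNCII.org and “Take It Down” work by matching hashes of images rather than the images themselves, so victims never have to hand over the actual content. The exact perceptual hashing they use is not described here; as a rough, hedged illustration of the idea, the Python sketch below computes a simple "average hash" with Pillow and compares it against a list of known hashes. The function names, helpers, and matching threshold are illustrative assumptions, not any real StopNCII or Microsoft API.

```python
# Rough sketch of hash-based image matching, assuming the Pillow library.
# Real services use far more robust perceptual hashes; this "average hash"
# only illustrates why sharing a hash lets platforms detect near-duplicate
# images without ever receiving the image itself.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple 64-bit average hash of an image."""
    # Shrink to 8x8 grayscale so the hash reflects coarse structure,
    # not exact pixels; re-encoding or mild resizing barely changes it.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)  # 1 = brighter than mean
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")


# Hypothetical usage: a platform checks an uploaded file against hashes
# submitted by victims, without ever seeing the original images.
# known_hashes = load_submitted_hashes()           # hypothetical helper
# candidate = average_hash("uploaded_image.jpg")
# if any(hamming_distance(candidate, h) <= 5 for h in known_hashes):
#     route_to_human_review()                       # hypothetical helper
```

The threshold of 5 differing bits is a placeholder; production systems tune such thresholds carefully to balance false positives against missed matches.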
Key points:
Microsoft has released a white paper that reveals the many ways generative AI is used maliciously, including scams and election manipulation.
For UK policymakers, Microsoft has proposed a solution built on six core elements, calling for comprehensive legal and technological protections.
Emphasizing the importance of cooperation among all parties, Microsoft calls for a joint effort to address the challenges posed by the abuse of AI.
In short, Microsoft's white paper provides an important reference for responding to malicious applications of generative AI, emphasizes the combined role of technology, law, and social cooperation, and calls for a global effort to prevent the abuse of AI technology and maintain social safety and stability.