The editor of Downcodes learned that Microsoft recently released a white paper examining the malicious use of generative AI. The report reveals that AI is being used in serious criminal activity, including fraud, the production of child sexual abuse material, election manipulation, and the creation of non-consensual intimate images, and stresses that these problems are not merely technical challenges but a major threat to society. Microsoft calls for global cooperation to address the risks posed by AI.
Microsoft's white paper provides an in-depth analysis of the malicious uses of generative artificial intelligence (AI), including fraud, child sexual abuse material, election manipulation, and non-consensual intimate images. The company emphasizes that these problems are not only technical challenges but also a major threat to society.
According to the white paper, criminals are increasingly leveraging the power of generative AI to commit serious crimes. These include using AI-generated disinformation for fraud, creating child sexual abuse material, manipulating elections through deepfakes, and producing non-consensual intimate images that disproportionately target women. "We must never forget that misuse of AI has profound consequences for real people," said Hugh Millward, Microsoft's vice president of external affairs.
Aimed specifically at UK policymakers, the white paper proposes a comprehensive set of solutions built on six core elements: a strong safety architecture, durable media provenance and watermarking tools, modernized laws to protect the public, strong collaboration among industry, government, and civil society, protection of services against misuse, and public education.
In its specific recommendations to UK policymakers, Microsoft calls for AI system providers to be required to inform users when the content they are interacting with is AI-generated. Microsoft also recommends deploying advanced provenance tools to label synthetic content, and suggests that governments set an example by authenticating their own media content. Microsoft further argues that new laws are needed to prohibit fraud committed with AI tools and to protect the integrity of elections, and that legal frameworks protecting children and women from online exploitation should be strengthened, including by criminalizing the production of sexual deepfakes.
Microsoft also stressed the importance of embedding metadata in media files to indicate whether they were generated by AI. Similar projects are already underway at companies such as Adobe, aiming to help people identify the origin of images. However, Microsoft believes that standards like Content Credentials need policy measures and public awareness to be effective.
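To make the provenance idea concrete, the sketch below illustrates the general concept behind standards like Content Credentials: a signed manifest is attached to a media file, binding a claim ("this was generated by AI tool X") to the file's contents so that tampering with either the claim or the media is detectable. This is a minimal illustration only, not the actual C2PA/Content Credentials format; the key, manifest fields, and function names are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this demo only; real provenance standards
# use public-key certificates issued to the signing tool, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest binding a generator claim to the media content."""
    claim = {
        "generator": generator,  # hypothetical field, e.g. "ai-image-model"
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check that the media still matches the claim."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was forged or altered
    return manifest["claim"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()

media = b"...synthetic image bytes..."
manifest = make_manifest(media, "ai-image-model")
print(verify_manifest(media, manifest))            # True: provenance intact
print(verify_manifest(b"edited bytes", manifest))  # False: media was modified
```

The point the white paper makes is that this kind of machine-checkable labeling only helps if platforms preserve the metadata and the public knows to look for it, which is why Microsoft pairs the technical standard with policy and education measures.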
In addition, Microsoft is working with organizations such as StopNCII.org to develop tools that detect and remove abusive images. Victims can file reports through Microsoft's central reporting portal. For young people, additional support is available through the Take It Down service run by the National Center for Missing and Exploited Children. Millward said: "The misuse of AI is likely to be a long-term problem, so we need to redouble our efforts and work creatively with technology companies, charity partners, civil society and governments to tackle this problem. We cannot do it alone."
This white paper offers a valuable reference for countering malicious applications of generative AI, underscores the need for cooperation across technology, law, and society, and points toward an important direction for future AI governance. Only through multi-party collaboration can the risks posed by AI be effectively managed and social security and stability ensured.