Recently, AI model security has drawn increasing attention. Researchers at North Carolina State University achieved a notable breakthrough by developing a method to extract AI models through the electromagnetic signals computers emit, with an accuracy of more than 99%. The finding has raised industry concern about protecting the intellectual property in AI models, especially as companies such as OpenAI, Anthropic, and Google have invested heavily in developing proprietary models. This article explores the potential impact of the technique, how enterprises can respond to the growing risk of AI model theft, and the relevant security measures and future trends.
Researchers at North Carolina State University recently proposed a new method for extracting artificial intelligence (AI) models: by capturing the electromagnetic signals emitted by the computers running them, they reconstructed models with an accuracy of more than 99%. The discovery could pose a challenge to commercial AI development, especially as companies such as OpenAI, Anthropic, and Google have invested heavily in proprietary models. However, experts note that the technique's real-world impact, and the defenses against it, remain unclear.
Lars Nyman, chief marketing officer of CUDO Compute, said that AI model theft is not just the loss of the model itself; it can set off a chain reaction: competitors exploiting years of research and development, regulators investigating intellectual-property mismanagement, and even lawsuits from customers who discover that their AI's "uniqueness" was never unique at all. The situation may push the industry toward standardized audits, akin to SOC 2 or ISO certification, to distinguish secure companies from irresponsible ones.
In recent years, hacker attacks targeting AI models have become an increasingly serious threat, and the business world's reliance on AI makes the problem all the more acute. Recent reports show that thousands of malicious files were uploaded to Hugging Face, a key repository of AI tools, severely compromising models used in industries such as retail, logistics, and finance. National security experts have warned that weak security measures can leave proprietary systems exposed to theft, as the OpenAI security flaw demonstrated. A stolen AI model can be reverse-engineered or resold, undercutting a business's investment, undermining trust, and allowing competitors to catch up quickly.
The North Carolina State University team recovered key information about a model's structure by placing a probe near a Google Edge Tensor Processing Unit (TPU) and analyzing the signals it emitted. Because the attack requires no direct access to the system, it exposes AI intellectual property to serious security risk. Study co-author Aydin Aysu, an associate professor of electrical and computer engineering, emphasized that building an AI model is expensive and computationally intensive, which makes preventing model theft crucial.
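The paper's full method aside, this style of side-channel extraction broadly resembles template matching: an observed emission trace is compared against reference traces recorded for known layer configurations. The Python sketch below is a simplified, hypothetical illustration of that idea only, not the researchers' actual technique; the layer names and all signal data are synthetic stand-ins.

```python
# Illustrative sketch (not the NC State method): classify an observed
# electromagnetic (EM) trace against reference "templates" recorded for
# known layer configurations. All signal data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template traces, one per candidate layer configuration.
# In a real attack these would be EM recordings from a device running
# known models; here they are random stand-ins.
templates = {
    "conv3x3_64": rng.normal(size=2048),
    "conv1x1_128": rng.normal(size=2048),
    "dense_256": rng.normal(size=2048),
}

def classify_trace(trace: np.ndarray) -> str:
    """Return the candidate configuration whose template correlates
    most strongly with the observed trace (simple template matching)."""
    scores = {
        name: float(np.corrcoef(trace, tmpl)[0, 1])
        for name, tmpl in templates.items()
    }
    return max(scores, key=scores.get)

# Simulate an observed trace: the "conv3x3_64" template plus noise.
observed = templates["conv3x3_64"] + 0.5 * rng.normal(size=2048)
print(classify_trace(observed))  # -> conv3x3_64
```

In a real attack, templates would be built from repeated physical measurements, and matching could proceed layer by layer to reconstruct a model's hyperparameters.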
As AI is deployed more widely, companies need to re-examine the equipment they use for AI processing. Technology consultant Suriel Arellano believes companies may move toward more centralized, secure computing, or consider alternative technologies that are less susceptible to theft. Even as the risk of theft grows, AI is also strengthening cybersecurity: automated threat detection and data analysis improve response times, help identify potential threats, and adapt to new attacks.
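To make the defensive point concrete, the hypothetical sketch below uses scikit-learn's IsolationForest for the kind of automated anomaly detection described above, flagging unusual access patterns in synthetic traffic features. The feature choices and thresholds are illustrative assumptions, not a vetted detection pipeline.

```python
# A minimal sketch of automated anomaly detection on access logs,
# one common form of AI-assisted threat monitoring. Feature values
# below are synthetic; a real pipeline would extract them from logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-request features: [requests/minute, bytes transferred].
normal_traffic = rng.normal(loc=[60, 5_000], scale=[10, 1_000], size=(500, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst of unusually heavy requests is flagged as anomalous (-1).
suspicious = np.array([[600, 90_000]])
print(model.predict(suspicious))  # -> [-1]
```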
Highlights:
Researchers demonstrated a method of extracting AI models by capturing electromagnetic signals, achieving over 99% accuracy.
AI model theft could let competitors exploit years of a company's research and development, threatening business security.
Enterprises need to strengthen the security protections around their AI models to counter the growing threat of hacker attacks.
In short, AI model security has become a central concern for enterprises. Facing increasingly sophisticated cybersecurity threats, companies must take active measures to strengthen the protection of their AI models and explore more secure AI technologies to safeguard their intellectual property and commercial interests.