A new threat has recently emerged in network security: malicious machine learning models. Researchers discovered 100 such models on the Hugging Face platform that can inject malicious code onto user devices, abusing model-loading mechanisms such as PyTorch's pickle-based serialization to carry out attacker-controlled operations. The discovery highlights the serious challenges facing AI security and is a reminder to remain highly vigilant about the integrity of AI models. Malicious models threaten not only user data but potentially broader systems, and the industry needs to work together to strengthen auditing and protection mechanisms for AI models.
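To make the attack vector concrete: PyTorch's default torch.load relies on Python's pickle format, and pickle can invoke arbitrary callables during deserialization via an object's __reduce__ hook. The snippet below is a minimal, self-contained illustration of that general mechanism, not code taken from the models found on Hugging Face; the payload here deliberately only prints a message.

```python
import pickle


class MaliciousPayload:
    """Stand-in for a booby-trapped object inside a pickle-based
    model file; the payload below is intentionally harmless."""

    def __reduce__(self):
        # pickle records this callable and its arguments in the byte
        # stream; a real attack might return (os.system, ("<command>",)).
        return (print, ("arbitrary code executed during unpickling",))


data = pickle.dumps(MaliciousPayload())

# Merely *loading* the bytes runs the callable -- no method call on the
# object is needed. torch.load() on an untrusted model file drives this
# same pickle machinery under the hood.
pickle.loads(data)
```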
To reduce the risk, AI developers should adopt tools and loading practices that harden models against embedded payloads, and users should treat model files from unverified publishers with the same caution as executable code. More broadly, AI security issues are becoming increasingly prominent, and developers, platform providers, and users need to work together to establish more complete security mechanisms against malicious models. Only by strengthening the security review of published models, developing more effective detection and defense technologies, and collaborating across the ecosystem can user safety be ensured and the healthy development of AI technology be sustained. In the meantime, individual teams can lower their exposure with safer loading practices, sketched below.
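As a concrete starting point, two widely available options close off the pickle vector when loading third-party weights: restricted deserialization in recent PyTorch releases, and the pickle-free safetensors format. The sketch below assumes hypothetical local files model.bin and model.safetensors, PyTorch 1.13 or newer for the weights_only flag, and the safetensors package installed.

```python
import torch
from safetensors.torch import load_file

# Option 1: restrict torch.load to plain tensor data. With
# weights_only=True (PyTorch 1.13+), the loader refuses to unpickle
# arbitrary objects, blocking the code-execution vector shown earlier.
state_dict = torch.load("model.bin", weights_only=True)

# Option 2: use the safetensors format, which stores raw tensor data
# and involves no pickle step at all.
state_dict = load_file("model.safetensors")
```

Neither option replaces platform-level scanning and review, but both remove the most direct code-execution path when loading models from untrusted sources.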