The JFrog security team recently released a report stating that at least 100 malicious AI/ML models have been found on the Hugging Face platform. The risks posed by these models should not be underestimated: some can execute code on the victim's machine and establish a persistent backdoor, posing a serious threat to user data security. Researchers found malicious models built with the PyTorch and TensorFlow Keras frameworks, including one uploaded under the name "baller423" that can open a reverse shell to a target host for remote control. Although some of these models may have been uploaded for security research, with the aim of discovering vulnerabilities and collecting bug bounties, that intent does not reduce their potential for harm.
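The reason a downloaded model can run code at all is that PyTorch's traditional .pt/.bin checkpoints are pickle files, and Python's pickle format allows an object to specify a callable that runs during deserialization. The following is a minimal, deliberately harmless sketch of that mechanism (not the actual baller423 payload); a real attack would swap the echo command for something like a reverse shell:

```python
import os
import pickle

# A pickled object can define __reduce__, and whatever callable it returns
# is invoked during deserialization -- i.e., at model-load time.
class MaliciousPayload:
    def __reduce__(self):
        # Runs as soon as the blob is unpickled; no method call is needed.
        return (os.system, ("echo code executed during model load",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message, showing code ran on load
```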
The Hugging Face platform should strengthen its review mechanism for uploaded models to better guard against such security risks. Users, for their part, should raise their security awareness and exercise caution with AI models from unknown sources, as sketched below, to avoid malicious attacks. This incident is another reminder that as artificial intelligence technology develops, security issues are becoming increasingly prominent, and the industry needs to work together to build a safer and more reliable AI ecosystem.
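As a closing practical note, users who do need to load third-party checkpoints can reduce the risk with a couple of widely suggested precautions; the file names below are illustrative placeholders, not files referenced in the report:

```python
import torch
from safetensors.torch import load_file  # pip install safetensors

# 1. Refuse arbitrary pickled objects and accept plain tensors only.
state_dict = torch.load("downloaded_model.pt", weights_only=True)

# 2. Prefer the safetensors format, which stores raw tensor data and
#    cannot embed executable code the way pickle can.
safe_weights = load_file("downloaded_model.safetensors")
```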