The "Computer Use" function of Claude AI launched by Anthropic Company gives it the ability to control devices. However, less than two months after this function was launched, security researchers discovered serious security vulnerabilities. Research by security expert Johann Rehnberger shows that through simple prompt word injection, Claude can be induced to download and run malware, such as the Sliver open source command and control framework. This has raised concerns about the security of AI and highlighted the importance of security issues that cannot be ignored while AI technology is developing rapidly.
Less than two months after Anthropic launched Computer Use, a feature that lets Claude control devices, security researchers have discovered potential vulnerabilities. The latest research results disclosed by cybersecurity expert Johann Rehnberger are shocking: through simple prompt word injection, AI can be induced to download and run malware.
Rehnberger named this exploit "ZombAIs". In the demo, he successfully got Claude to download Sliver, an open source command and control framework originally used for red team testing but now widely used as a malware tool by hackers. What’s even more worrying is that this is just the tip of the iceberg. Researchers pointed out that AI can also be induced to write, compile and run malicious code, and the attack methods are difficult to prevent.
Picture source note: The picture is generated by AI, and the picture is authorized by the service provider Midjourney
It is worth noting that this type of security risk is not unique to Claude. Security experts have discovered that the DeepSeek AI chatbot also has a prompt word injection vulnerability, which may allow an attacker to take over the user's computer. In addition, the large language model may also output ANSI escape codes, triggering the so-called "Terminal DiLLMa" attack, thereby hijacking the system terminal.
In this regard, Anthropic has already reminded users in its beta statement: "The Computer Use function may not always run as expected. It is recommended to take precautions to isolate Claude from sensitive data and operations to avoid risks related to prompt word injection."
This incident once again reminds us that while AI technology is developing rapidly, security issues cannot be ignored. Developers need to find a balance between functionality and security, and users also need to increase security awareness and take necessary protective measures when using AI tools.
This incident once again emphasized the importance of AI security. Both developers and users need to be more vigilant and work together to build a safe and reliable AI application environment. Only in this way can we ensure that AI technology can better serve mankind and avoid potential risks.