Google recently issued a security warning that state-sponsored APT groups from multiple countries are using its AI assistant Gemini to enhance their cyberattack capabilities. Rather than launching attacks with Gemini directly, these groups use it for auxiliary tasks such as vulnerability research, target reconnaissance, and tool development, reducing attack preparation time and improving efficiency. This highlights the double-edged nature of generative AI in cybersecurity: the same technology that boosts productivity can also be exploited maliciously.
Google's Threat Intelligence Group (GTIG) found that APT groups from more than 20 countries are actively experimenting with Gemini, with activity from Iran and China the most prominent. The groups use Gemini to assist with tool and script development, research public vulnerabilities, translate technical documents, reconnoiter target organizations, and find ways to evade detection.
For example, Iranian hackers use Gemini for a wide range of activities, including reconnaissance of defense organizations and international experts, researching known vulnerabilities, developing phishing campaigns, and creating content for influence operations. They also use Gemini to translate and interpret military-technology material, covering areas such as drones and missile defense systems.
Meanwhile, China-backed hackers focus on reconnaissance of U.S. military and government agencies, using Gemini for vulnerability research, scripting, and privilege escalation. They have also explored how to access Microsoft Exchange using password hashes, and have even attempted to reverse engineer certain security tools.
North Korean APT groups leverage Gemini across multiple stages of the attack lifecycle: researching free hosting services, conducting target reconnaissance, and developing malware. They have also used Gemini to help North Korean IT workers draft job applications under false identities in order to obtain positions at Western companies.
By contrast, Russian hackers use Gemini sparingly, mainly for scripting assistance and translation. Their limited activity may reflect a preference for domestically developed AI models, or an avoidance of Western tools for operational-security reasons.
Notably, although some hackers tried publicly available jailbreak prompts against Gemini, none of these attempts succeeded. Even so, the episode illustrates how widely generative AI tools are being probed for abuse: as the AI market expands, the number of models lacking adequate safeguards is growing, bringing new challenges to cybersecurity.
Google's warning is a reminder that while AI brings convenience, it also introduces new cybersecurity risks. Stronger protection measures for AI models, along with corresponding security strategies, are needed to counter increasingly sophisticated cyber threats.