Gartner's latest report shows that AI-powered cyberattacks have become the biggest risk facing enterprises, ranking first for three consecutive quarters. The report is based on a survey of 286 senior risk and audit executives, 80% of whom expressed concern about AI-enhanced malicious attacks. Attackers use AI to write malware, craft realistic phishing emails, and even conduct large-scale distributed denial-of-service attacks. AI lowers the barrier to cybercrime, allowing less technically skilled attackers to carry out sophisticated attacks with ease, which poses a severe challenge to enterprise security.
According to the latest report released by Gartner, the use of artificial intelligence (AI) in cyberattacks has been the biggest risk facing enterprises for three consecutive quarters.
The consulting firm surveyed 286 senior risk and audit executives between July and September and found that 80% of respondents expressed deep concern about AI-enhanced malicious attacks. The trend is not surprising, as there is growing evidence that AI-assisted cyberattacks are on the rise.
The report also lists several other emerging risks, including AI-assisted misinformation, growing political polarization, and misaligned organizational talent allocation. Attackers are already using AI to write malware, craft phishing emails, and more. HP researchers, for example, intercepted an email campaign in June that spread malware and suspected its scripts had been written with the help of generative AI: the scripts were well structured, with a comment on every command, which is uncommon in hand-written code.
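To make that observation concrete, here is a minimal sketch of the kind of signal the HP researchers described: a script whose comment density is unusually high. The threshold and the idea of using comment density on its own are illustrative assumptions, not a production detection rule.

```python
# Naive heuristic sketch: flag scripts where nearly every command carries a
# comment, one trait HP's researchers associated with AI-generated scripts.
# The 0.4 threshold is an illustrative assumption, not a vetted rule.

def comment_density(script: str, comment_prefix: str = "#") -> float:
    """Return the fraction of non-empty lines that are comments."""
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith(comment_prefix))
    return comments / len(lines)

def looks_machine_written(script: str, threshold: float = 0.4) -> bool:
    # A very high density is a weak hint of generated code, never proof:
    # hand-written malware rarely documents each step this carefully.
    return comment_density(script) >= threshold

sample = """# download the payload
payload = fetch(url)
# decode it
data = decode(payload)
# run it
run(data)
"""
print(looks_machine_written(sample))  # True: 3 of 6 lines are comments
```

In practice a signal this weak would only be one feature among many in a classifier, since legitimate tutorial code is also heavily commented.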
According to data from the security company Vipre, business email compromise (BEC) attacks rose 20% in the second quarter of 2023 compared with the same period a year earlier, with nearly half of them generated by AI. CEOs, HR staff, and IT staff are the main targets. Usman Choudhary, Vipre's chief product and technology officer, said criminals are using sophisticated AI algorithms to craft convincing phishing emails that mimic the tone and style of legitimate communications.
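Because AI-polished wording can defeat filters that look for clumsy phrasing, defenders often fall back on identity checks rather than style checks. Below is a minimal sketch of one classic BEC heuristic: the display name claims to be an executive while the sender address sits outside the company domain. The executive directory and domain here are hypothetical placeholders.

```python
# Sketch of a display-name/domain mismatch check, a common BEC heuristic.
# KNOWN_EXECUTIVES and COMPANY_DOMAIN are hypothetical placeholders.
from email.utils import parseaddr

KNOWN_EXECUTIVES = {"jane doe", "john smith"}   # hypothetical directory
COMPANY_DOMAIN = "example.com"                  # hypothetical domain

def looks_like_bec(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    name_matches_exec = display_name.strip().lower() in KNOWN_EXECUTIVES
    external_sender = not address.lower().endswith("@" + COMPANY_DOMAIN)
    # AI can fake an executive's tone, but a spoofed identity still has
    # to be sent from somewhere: flag the name/domain mismatch.
    return name_matches_exec and external_sender

print(looks_like_bec('"Jane Doe" <jane.doe@freemail-example.net>'))  # True
print(looks_like_bec('"Jane Doe" <jane.doe@example.com>'))           # False
```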
In addition, retail sites suffered an average of 569,884 AI-driven attacks per day, according to an Imperva Threat Research report. The researchers noted that tools such as ChatGPT, Claude, and Gemini, along with bots that crawl website data to train large language models, are being used for activities such as distributed denial-of-service attacks and business logic abuse.
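Two cheap, first-line mitigations follow from those findings: turn away self-identified LLM crawlers by User-Agent, and rate-limit bursts per client IP. The sketch below assumes a small sample of publicly documented crawler names; real deployments pair this with dedicated bot-management tooling, since hostile bots can simply forge the header.

```python
# Sketch of two first-line bot mitigations: User-Agent filtering for
# self-identified LLM crawlers, plus a sliding-window rate limit per IP.
# The crawler list is a small illustrative sample, not exhaustive.
import time
from collections import defaultdict, deque

LLM_CRAWLER_AGENTS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")

_requests: dict[str, deque] = defaultdict(deque)

def allow_request(client_ip: str, user_agent: str,
                  max_per_minute: int = 120) -> bool:
    # Reject crawlers that announce themselves in the User-Agent header.
    if any(bot in user_agent for bot in LLM_CRAWLER_AGENTS):
        return False
    # Sliding one-minute window per IP to blunt simple request floods.
    now = time.monotonic()
    window = _requests[client_ip]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= max_per_minute:
        return False
    window.append(now)
    return True

print(allow_request("203.0.113.7", "GPTBot/1.0"))              # False
print(allow_request("203.0.113.7", "Mozilla/5.0 (browser)"))   # True
```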
Ethical hackers are also increasingly acknowledging that they use generative AI, with the share rising from 64% last year to 77%. The researchers say AI can assist with multi-channel attacks, fault injection attacks, and automated attacks that target multiple devices simultaneously. If the "good guys" find AI useful, the "bad guys" will use the technology too.
The rise of AI-driven attacks is not surprising, because AI lowers the barrier to cybercrime, allowing criminals with limited technical skill to generate deepfakes, scan network ports, conduct reconnaissance, and more. Researchers at the Swiss Federal Institute of Technology recently developed a model that can solve Google's reCAPTCHA v2 challenges with 100% success. Analysts at the security firm Radware predicted early this year that private GPT models would be put to malicious use, with the number of zero-day exploits and deepfake scams growing accordingly.
Gartner also noted that concerns about critical IT vendors appeared on the executives' risk list for the first time. Zachary Ginsburg, senior director of risk and audit practice at Gartner, said organizations that depend heavily on a single vendor may face elevated risk, as shown by the CrowdStrike incident in July, which crippled 8.5 million Windows devices worldwide and severely disrupted emergency services, airports, and law enforcement agencies.
In short, the double-edged nature of AI technology is becoming increasingly apparent. Enterprises need to respond proactively to the security challenges AI creates, strengthen their protective measures, and raise security awareness in order to withstand AI-driven cyberattacks and stay secure in the digital era.