Gartner’s latest report finds that AI-enhanced attacks have been the top emerging risk facing enterprises for three consecutive quarters, and 80% of the executives surveyed are deeply concerned about them. The editor of Downcodes interprets the report for you, analyzing how AI is being used in cyberattacks and how enterprises should respond to this increasingly severe challenge. This article examines the use of AI in malware writing, phishing email production, and distributed denial-of-service attacks, drawing on the latest findings from security companies and research institutions.
According to the latest report released by Gartner, the application of artificial intelligence (AI) in cyber attacks has become the biggest risk faced by enterprises for three consecutive quarters.
The consulting firm surveyed 286 senior risk and audit executives between July and September and found that 80% of respondents expressed deep concern about AI-enhanced malicious attacks. This trend is not surprising, as there is evidence that cyberattacks using AI are on the rise.
Image note: the picture was generated by AI; image licensing provider: Midjourney.
Other emerging risks listed in the report include AI-assisted misinformation, growing political polarization, and mismatched organizational talent allocation. Attackers are using AI to write malware, craft phishing emails, and more. HP researchers, for example, intercepted an email campaign spreading malware in June and suspected that its script had been written with the help of generative AI: the script was clearly structured and every command was commented, which is uncommon in human-written code.
According to data from security company Vipre, business email compromise (BEC) attacks in the second quarter of 2023 rose 20% over the same period last year, and nearly half of them were generated by AI. CEOs, HR, and IT staff were the prime targets. Usman Choudhary, chief product and technology officer at Vipre, said criminals are using sophisticated AI algorithms to craft convincing phishing emails that mimic the tone and style of legitimate communications.
In addition, a report from Imperva Threat Research found that retail websites suffered an average of 569,884 AI-driven attacks per day from April through September. Researchers noted that tools such as ChatGPT, Claude, and Gemini, along with bots that crawl website data to train large language models, are being used to carry out distributed denial-of-service attacks, business logic abuse, and similar activities.
A growing number of ethical hackers also admit to using generative AI, with the share rising from 64% last year to 77%. The researchers said AI can assist with multi-channel attacks, fault injection attacks, and automated attacks that target many devices simultaneously. In short, if the “good guys” find AI useful, the “bad guys” will exploit the technology as well.
The rise of AI-assisted attacks is not surprising, because AI has lowered the barrier to cybercrime, letting criminals with limited technical skills generate deepfakes, scan for network entry points, conduct reconnaissance, and more. Researchers at the Swiss Federal Institute of Technology recently developed a model that solves Google’s reCAPTCHA v2 challenges with a 100% success rate. Analysts at security firm Radware predicted at the beginning of the year that private GPT models would be put to malicious use and that zero-day exploits and deepfake scams would increase.
Gartner also noted that, for the first time, critical IT vendor risk has appeared on executives’ top-concern lists. Zachary Ginsburg, senior director in Gartner’s risk and audit practice, said customers who rely heavily on a single vendor may face elevated risk. The CrowdStrike incident in July, which knocked out 8.5 million Windows devices worldwide, illustrates the point: it severely disrupted emergency services, airports, and law enforcement agencies.
All in all, the double-edged nature of AI is nowhere more evident than in cybersecurity. Enterprises need to take proactive defensive measures, such as strengthening security training, implementing multi-factor authentication, and adopting advanced threat detection, to counter AI-driven cyberattacks effectively. Going forward, it will be crucial to keep watching AI security trends and to keep exploring response strategies.