The rapid development of artificial intelligence has brought unprecedented opportunities to enterprises, but it has also introduced new risks and challenges. The rise of generative AI has drawn particular attention to its potential security risks. This article analyzes a recently released risk report to explore the difficulties enterprises face in dealing with generative AI risks and the measures they can take to ensure their own security and compliance.
In recent years, the rapid development of artificial intelligence (AI) has created many opportunities for enterprises, but its potential threats have become increasingly apparent at the same time. According to the newly released 2024 "New Generation Risk Report", up to 80% of surveyed companies have not yet formulated dedicated response plans for generative AI risks, including security threats such as AI-driven online fraud.
The survey was conducted by Riskonnect, a risk management software company, and its respondents comprised 218 risk, compliance, and resilience professionals worldwide. The results show that 24% of respondents believe AI-driven cybersecurity threats such as ransomware, phishing, and deepfakes will have a significant impact on enterprises over the next 12 months. Meanwhile, 72% of respondents said cybersecurity risks have had a significant or severe impact on their organizations, up from 47% last year.
As concerns about AI ethics, privacy, and security intensify, the report notes that although companies' worries about AI have grown, their risk management strategies have not kept pace, leaving several key gaps. For example, 65% of companies have no policy governing the use of generative AI by partners and suppliers, even though third parties are a common intrusion channel for cyber scammers.
Internal threats should not be underestimated either. Take the use of generative AI to produce marketing content: marketing expert Anthony Miyazaki cautions that although generative AI writes copy well, the final text still needs human editing to ensure it is persuasive and accurate. Relying on AI to generate website content can also backfire. Google, for example, has made clear that using AI-generated content to manipulate search results will lower a site's ranking, dealing a serious blow to a company's search engine optimization (SEO).
To address these challenges, companies need to ensure full coverage of internal policies, protect sensitive data, and comply with relevant regulations. John Scimone, chief security officer of Dell Technologies, said the company formulated relevant principles before the generative AI boom to ensure that its AI applications are fair, transparent, and responsible.
At Empathy First Media, vice president Ryan Doser likewise emphasized the strict measures the company applies to employees' use of AI, including prohibiting the entry of sensitive customer data into generative AI tools and requiring human review of AI-generated content. These measures are designed to increase transparency and build customer trust; a simple guardrail of this kind can even be enforced in code, as sketched below.
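As an illustration only, and not Empathy First Media's actual tooling, the following minimal Python sketch shows how such a policy might be enforced before a prompt ever reaches a generative AI service: it scans the prompt for patterns that resemble sensitive customer data and blocks the request if any match. The pattern set and the placeholder submit function are assumptions made for this example.

```python
import re

# Hypothetical patterns for data that policy forbids sending to external AI tools.
# A real deployment would use a dedicated DLP tool with far broader rules.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[ -.]?\d{3}[ -.]?\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_to_ai(prompt: str) -> str:
    """Block prompts containing sensitive data; otherwise forward them onward."""
    violations = check_prompt(prompt)
    if violations:
        raise ValueError(f"Prompt blocked by policy, matched: {', '.join(violations)}")
    # Placeholder for the actual generative AI call (assumed, not a real API).
    return f"[sent to AI tool] {prompt[:40]}..."

if __name__ == "__main__":
    try:
        submit_to_ai("Draft a follow-up email to jane.doe@example.com about her order.")
    except ValueError as e:
        print(e)  # Prompt blocked by policy, matched: email
```

In practice, a regex filter like this is only a backstop for the human review the article describes, not a replacement for it.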
Key points:
80% of enterprises have not developed dedicated plans for generative AI risks, leaving them exposed to potential security threats.
72% of respondents say cybersecurity risks have had a significant or severe impact on their organizations, underscoring the need for stronger risk management.
Enterprises should take proactive measures to ensure the security and compliance of AI applications and guard against both internal and external threats.
In summary, enterprises need to respond proactively to the security risks posed by generative AI, develop comprehensive risk management strategies, and strengthen internal security controls in order to stay competitive and grow sustainably in the AI era. Ignoring AI risks is likely to lead to serious security incidents and financial losses.