In recent years, artificial intelligence (AI) technology has developed rapidly, bringing enormous opportunities to enterprises while also harboring many risks. The editor of Downcodes has compiled a report on how enterprises should respond to generative AI risks. The report points out that most enterprises have not yet formulated effective strategies for generative AI risks, and that threats such as AI-driven cybersecurity attacks and online fraud are growing increasingly serious and deserve enterprises' attention.
In recent years, the rapid development of artificial intelligence (AI) has brought many opportunities to enterprises, but its potential threats have also become increasingly apparent. According to the latest 2024 "New Generation Risk Report", as many as 80% of the companies surveyed have not yet developed a dedicated response plan for generative AI risks, including security risks such as AI-driven online fraud.
The survey, conducted by risk management software company Riskonnect, covered 218 risk, compliance, and resilience professionals worldwide. The results show that 24% of respondents believe AI-driven cybersecurity threats (such as ransomware, phishing, and deepfakes) will have a significant impact on enterprises over the next 12 months. Meanwhile, 72% of respondents said cybersecurity risks have had a significant or severe impact on their organizations, up from 47% last year.
As concerns over AI ethics, privacy, and security intensify, the report notes that although companies' worries about AI have grown, their risk management strategies have not kept pace, leaving several key gaps. For example, 65% of companies have no policy governing the use of generative AI by partners and vendors, even though third parties are a common entry point for cyber fraudsters.
Internal threats cannot be underestimated either. Take companies' use of generative AI to produce marketing content: marketing expert Anthony Miyazaki cautions that although generative AI excels at writing text, the final copy still needs human editing to ensure it is persuasive and accurate. Relying on AI to generate website content can also backfire. Google, for example, has made clear that using AI content to manipulate the search process will lower a site's rankings, seriously damaging a company's search engine optimization (SEO).
To address these challenges, companies need to ensure comprehensive coverage of internal policies, secure sensitive data, and comply with relevant regulations. John Scimone, chief security officer of Dell Technologies, said the company formulated relevant principles before the generative AI craze to ensure that AI applications are developed fairly, transparently, and responsibly.
At digital marketing agency Empathy First Media, Vice President Ryan Doser likewise described the strict measures the company applies to employees' use of AI, including prohibiting the input of sensitive customer data into generative AI tools and requiring manual review of AI-generated content. These measures are designed to increase transparency and build customer trust.
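A guardrail of this kind can be partly automated. The sketch below is a minimal illustration, not Empathy First Media's actual tooling: a hypothetical pre-submission filter that blocks prompts containing obvious customer identifiers (email addresses, phone numbers, card-like digit runs) before they reach a generative AI tool. The pattern set, the `guarded_submit` helper, and the stand-in AI client are all assumptions for illustration; a production deployment would rely on a dedicated data loss prevention (DLP) service with far broader coverage.

```python
import re

# Hypothetical regexes for obvious customer identifiers; a production
# deployment would rely on a dedicated DLP / PII-detection service.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the names of the PII patterns that match the prompt."""
    return [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]

def guarded_submit(prompt: str, send) -> str:
    """Forward the prompt to the AI client callable `send` only if
    no apparent customer PII is detected; otherwise raise an error."""
    hits = find_pii(prompt)
    if hits:
        raise ValueError(f"Blocked: possible PII in prompt ({', '.join(hits)})")
    return send(prompt)

if __name__ == "__main__":
    # Demo with a stand-in for the real generative AI client.
    echo = lambda p: f"[AI draft for] {p}"
    print(guarded_submit("Write a product blurb for our new CRM.", echo))
    try:
        guarded_submit("Email jane.doe@example.com a renewal offer.", echo)
    except ValueError as err:
        print(err)
```

A filter like this only catches obvious patterns, which is why the policies described above still pair it with manual review of AI-generated content rather than relying on automation alone.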
All in all, enterprises need to respond proactively to the security risks brought by generative AI, formulate comprehensive risk management strategies, and strengthen employee training in order to remain competitive and keep their businesses secure in the AI era. The editor of Downcodes recommends that enterprises refer to the report's suggestions and take active measures to nip problems in the bud.