From data manipulation to AI-driven threats: The emerging wave of cyber risks

Artificial intelligence (AI) is quickly becoming a common tool across many industries, helping organizations improve efficiency and gain insights. However, as more businesses adopt AI, they also face increased cyber risks.

A recent report from the World Economic Forum highlighted a staggering 223% rise in the use of generative AI on the dark web from 2023 to 2024. This surge indicates that while AI can enhance security, it is also being exploited by cybercriminals. A survey conducted by SoSafe in March revealed that 87% of security professionals had encountered AI-driven attacks targeting their organizations.

Greg Scoblete, a principal on Verisk’s emerging issues team, recently discussed the dual nature of AI in a webinar. He noted that AI technology can both increase risks and provide new ways to mitigate them. Scoblete identified two significant threats: adversarial machine learning and AI agents.

Adversarial machine learning involves attacks that target AI models themselves. Scoblete pointed out two main types: poisoning attacks, which strike during a model’s development, and privacy attacks, which target models already in use. Poisoning attacks aim to corrupt an AI model’s outputs by tampering with its training data. This can happen actively, where a hacker deliberately corrupts the data, or passively, where flawed data is unknowingly included in training sets. For instance, researchers demonstrated that embedding tiny amounts of corrupted data in digital artwork could degrade the performance of an AI model trained on it.
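To make the mechanism concrete, here is a toy sketch of a poisoning attack. A trivial nearest-centroid classifier is trained twice, once on clean data and once on data salted with deliberately mislabeled records; the injected records drag one class centroid toward normal-looking inputs, so the model starts misclassifying legitimate data. The classifier, data, and labels are all invented for illustration and have no connection to the research cited above.

```python
# Toy illustration of data poisoning: a nearest-centroid classifier
# trained on clean vs. deliberately tampered data. (Hypothetical
# example; not drawn from the studies mentioned in the article.)

def train(points):
    """Compute a per-class centroid from (value, label) pairs."""
    sums, counts = {}, {}
    for x, label in points:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

clean = [(-1.2, "benign"), (-0.9, "benign"), (-1.1, "benign"),
         (1.0, "malicious"), (1.3, "malicious"), (0.8, "malicious")]

# Active poisoning: an attacker injects benign-looking records that are
# mislabeled "malicious", pulling that centroid toward normal data.
poisoned = clean + [(-1.0, "malicious"), (-1.1, "malicious"),
                    (-0.9, "malicious")]

test = [(-0.2, "benign"), (1.1, "malicious")]

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    model = train(data)
    correct = sum(classify(model, x) == y for x, y in test)
    print(f"{name}: {correct}/{len(test)} test points correct")
# → clean: 2/2 test points correct
# → poisoned: 1/2 test points correct
```

After poisoning, the model flags the legitimate input at −0.2 as malicious, which is exactly the kind of degraded output the attack aims for.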

The cost of executing such attacks can be surprisingly low. Scoblete mentioned that researchers managed to poison just 0.01% of a popular dataset for only $60. The widespread vulnerability across different AI models and the reliance on a limited number of training datasets make these attacks particularly concerning.

Privacy attacks, on the other hand, target AI models that are already in use. These attacks can extract sensitive information or even replicate the model itself, posing serious risks as AI models often contain personal data and trade secrets. Scoblete highlighted incidents where AI systems mistakenly exposed confidential information, emphasizing the importance of proper governance in AI usage. Alarmingly, an IBM survey found that only 37% of organizations have any governance policies regarding AI.
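One flavor of privacy attack, model extraction, can be sketched in a few lines: if an exposed model happens to be a simple linear function, an attacker can reconstruct its secret parameters from just two black-box queries. The model, its parameters, and the "endpoint" here are all hypothetical, chosen only to show why query access alone can leak a model's internals.

```python
# Toy illustration of model extraction: recovering a deployed linear
# model's parameters purely from black-box queries. (Hypothetical
# model and values, for illustration only.)

def deployed_model(x):
    # "Secret" parameters the provider wants to keep private.
    w, b = 3.5, -1.25
    return w * x + b

# The attacker only calls the prediction endpoint:
b_stolen = deployed_model(0.0)             # f(0) = b
w_stolen = deployed_model(1.0) - b_stolen  # f(1) - b = w

print(f"stolen copy: f(x) = {w_stolen}*x + ({b_stolen})")
# → stolen copy: f(x) = 3.5*x + (-1.25)
```

Real extraction attacks need far more queries against far more complex models, but the principle is the same: every answer an exposed model gives away is information about the model itself.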

Another emerging risk comes from AI agents, which are advanced systems that operate autonomously. Scoblete described these agents as capable of performing tasks that go beyond simple automation. They can interact with various data sources, execute code, and even create sub-agents. However, this autonomy raises significant concerns, including the potential for misuse and data exposure.
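One basic governance control for such agents is a tool allowlist: the agent may only invoke pre-approved actions, and anything else is refused and logged. The sketch below is hypothetical; the tool names and the agent's "plan" are invented for illustration.

```python
# Toy illustration of an agent guardrail: a tool allowlist that blocks
# unapproved autonomous actions. (Hypothetical tools and plan.)

APPROVED_TOOLS = {"search_docs", "summarize"}

def run_agent(plan):
    """Dispatch each requested tool call, refusing anything off the allowlist."""
    log = []
    for tool, arg in plan:
        if tool in APPROVED_TOOLS:
            log.append(f"ran {tool}({arg!r})")
        else:
            log.append(f"blocked {tool}({arg!r})")
    return log

# An agent's plan may mix harmless steps with a risky autonomous one:
plan = [
    ("search_docs", "cyber policy limits"),
    ("execute_code", "os.remove('backups')"),  # the step governance should stop
    ("summarize", "findings"),
]

for entry in run_agent(plan):
    print(entry)
```

Blocking the `execute_code` step while letting the approved steps proceed is the kind of control-plus-audit-trail that governance policies are meant to enforce.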

As organizations grapple with these new risks, brokers are becoming crucial in guiding clients on how to manage AI-related threats. Experts suggest that brokers should help clients identify where AI is being used, emphasize the need for governance and controls, and carefully review cyber insurance policies to ensure they cover these emerging risks.

In conclusion, while AI offers exciting opportunities for growth and efficiency, it also brings significant challenges that organizations must address. As cyber threats evolve, understanding and mitigating these risks will be essential for businesses aiming to harness the full potential of AI.

Author

Sophia Langley runs real-life budget scenarios to recommend coverage mixes that protect households without sinking their monthly finances.