Organizations worldwide are being urged to implement strict guidelines for the use of artificial intelligence (AI) in the workplace. This comes in response to the increasing threat of "shadow AI," where employees use AI tools without the approval of their IT departments.
Shadow AI typically involves the use of generative AI tools, such as ChatGPT and Microsoft Copilot, through personal accounts outside IT oversight. A recent survey by TELUS Digital found that 68% of enterprise employees who use generative AI at work access these tools without oversight, and that 57% of those users admitted to entering sensitive company information into the platforms.
Menlo Security, a firm specializing in browser security, warns that the rise of shadow AI could lead to serious data breaches. They highlight that while data loss is concerning, data leakage—where sensitive information is unintentionally exposed—can be even more problematic, especially with generative AI. Users may not intend to share sensitive data, but it can happen during tasks like summarizing or rewording content.
The issue is compounded by a significant increase in web traffic to generative AI sites, which surged by 50% to reach over 10 billion visits in January 2025. A staggering 80% of this traffic came from browser access.
In light of these findings, Devin Ertel, Chief Information Security Officer at Menlo Security, emphasizes the importance of establishing clear governance for AI use in organizations. He advocates for providing employees with safe and responsible ways to utilize generative AI while protecting sensitive corporate data.
Ertel points out that simply informing employees about corporate policies on AI is not enough. To effectively combat shadow AI, companies need to adopt trusted AI systems and require their exclusive use. However, controlling AI tool usage becomes challenging when employees access tools from their personal devices.
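Prompt screening is one way such controls are commonly enforced inside sanctioned tools or network gateways. As a minimal sketch (the patterns and function names below are illustrative assumptions, not any specific vendor's product), a gateway could scan outbound prompts for sensitive-data patterns before they reach an external AI service:

```python
import re

# Illustrative patterns for common sensitive-data types; real DLP
# systems use far broader rule sets and contextual classifiers.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data types found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_prompt(text: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(text)
```

In practice, checks like these usually run in a secure browser or network proxy and are paired with logging and user warnings rather than silent blocking, so security teams gain visibility into shadow AI use instead of merely suppressing it.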
Menlo Security advises that if organizations cannot manage the AI tools used outside their networks, they must ensure strict controls for those used within their environments. The message is clear: organizations must act now to safeguard their data and create a secure framework for AI use in the workplace.