A recent study by Software AG revealed a concerning trend for compliance and security teams: many employees are using unauthorized AI tools, creating what some call "Shadow AI."
The results of the study are striking. John Sviokla, founder of research firm GAI, notes that this creates "a massive security problem if rogue IT users share data with models and providers without review or approval," and warns that "just about half your knowledge workers are not going to go back to old ways of working – no matter what you do."
This trend presents significant security and compliance risks that IT leaders must address with strategic approaches rather than blanket prohibitions.
Understanding the drivers behind AI usage is essential for developing effective responses. The data points to three key motivations: AI saves employees time (83%), makes their jobs easier (81%), and improves their productivity (71%).
Unsanctioned and unmonitored AI usage introduces significant security and compliance risks that should concern every IT leader.
Rather than implementing outright bans (which data shows are ineffective), forward-thinking organizations are adopting strategic approaches that balance security with productivity:
Establish a baseline by conducting a comprehensive audit to identify unauthorized AI usage through proxy analysis, network monitoring, and software inventory reviews. As one security expert told VentureBeat, "one security head of a New York financial firm believed fewer than 10 AI tools were in use. A 10-day audit uncovered 65 unauthorized solutions, most with no formal licensing."
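The proxy-analysis step described above can be sketched as a simple log scan. This is a minimal illustration, not the audit methodology from the article: the domain watchlist, log format, and function name are all illustrative assumptions, and a real audit would combine this with network monitoring and software inventory data.

```python
# Hypothetical sketch: flag requests to known AI services in a web proxy log.
# The watchlist and the "<timestamp> <user> <domain> <path>" log format are
# assumptions for illustration only.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def audit_proxy_log(lines):
    """Count requests per AI domain across proxy log lines."""
    hits = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits[parts[2]] += 1
    return hits

sample = [
    "2024-05-01T09:12:03 alice chat.openai.com /backend-api",
    "2024-05-01T09:12:41 bob intranet.example.com /wiki",
    "2024-05-01T09:13:05 alice claude.ai /api/chat",
]
print(audit_proxy_log(sample))
```

Even a rough scan like this gives the baseline the audit calls for; matching on a curated domain list tends to surface far more tools than leadership expects, as the VentureBeat anecdote illustrates.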
Develop clear policies that define acceptable AI use cases, approved tools, and data handling procedures. These policies should balance productivity needs with security requirements, acknowledging that employees will find ways to use these tools regardless of blanket prohibitions.
The most effective approach is to provide employees with secure, enterprise-grade AI tools that match the functionality of consumer options while maintaining proper security controls. Platforms like Userfront Workforce AI enable secure access to trusted AI assistants under the organization's own governance.
Implement training programs that explain the risks of unsanctioned AI use and demonstrate how to use approved tools effectively. As WinWire CTO Vineet Arora told VentureBeat, "The data confirms that once employees have sanctioned AI pathways and clear policies, they no longer feel compelled to use random tools in stealth."
The evidence is clear: attempting to ban AI tools outright will likely drive usage underground, exacerbating security risks rather than mitigating them. As Steve Ponting, Director at Software AG, observes: "While 75% of knowledge workers use AI today, that figure will rise to 90% in the near future because it helps to save time (83%), makes employees' jobs easier (81%) and improves productivity (71%)."
The most successful approach is to provide secure enterprise alternatives that satisfy both productivity needs and security requirements. By implementing proper governance structures, offering sanctioned AI tools, and educating employees on secure usage, organizations can harness the benefits of AI while minimizing potential risks.
Remember that employee AI usage isn't going away—but with the right strategy, you can bring it into the light, where it can be properly secured, monitored, and leveraged for good.
Experience smarter enterprise sign-on tools & reporting.