Like most information security managers right now, I'm figuring out how to handle the GenAI wave. The difference is that at Wiremind, AI isn't just a tool employees use. It's also at the core of what we build.
In this article, I’ll share what I've learned about maintaining enterprise security in the GenAI era. Less theory, more of what's actually working in practice. My approach to GenAI builds on the security philosophy I outlined in my previous article on how we approach corporate security at Wiremind.
Your employees are using generative AI tools, probably more than you know. And probably with data you'd rather they didn't share.
Research shows 55% of employees use unapproved generative AI tools at work. Employees aren't malicious; they're trying to work faster: summarize a contract, draft a response, or parse some data.
But every prompt is a potential data leak. And most GenAI tools have unclear data retention policies, training data reuse clauses, and third-party access that users never read about.
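To make the "every prompt is a potential data leak" point concrete, here is a minimal, illustrative sketch of a pre-send prompt scan. The pattern names and regexes are my own hypothetical examples, not Wiremind's actual tooling; a real DLP policy would cover far more.

```python
import re

# Illustrative patterns only -- a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern matches."""
    return not scan_prompt(prompt)
```

Even a crude filter like this makes the risk visible: a prompt that looks like "just a question" can carry customer emails or credentials straight to a third party.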
Simply blocking tools doesn't work. Block ChatGPT, people move to Claude. Block that, they find Gemini. Or some new tool you've never heard of. You're always behind.
At Wiremind, we moved from restriction to controlled enablement. What does that mean in practice?
With this approach, the goal is zero uncontrolled usage, not zero usage.
Over time I've developed a simple checklist for evaluating AI tool requests. It's not comprehensive, but it catches most of the critical risks.
1. Where does the data go?
Specific questions I ask:
If the vendor can't answer these clearly, it's a no.
2. What's the worst case scenario if data is misused or compromised?
I'm not asking what the tool is supposed to do. I'm asking what it could do.
The more powerful the tool, the more carefully we review it. Not everything passes cleanly: sometimes we approve with restrictions (no sensitive data, specific use cases only), sometimes we find alternatives, and sometimes it's a hard no.
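The three possible outcomes above can be sketched as a tiny decision function. The field names and thresholds are hypothetical simplifications for illustration, not our actual review form:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    APPROVED = "approved"
    RESTRICTED = "approved with restrictions"
    REJECTED = "rejected"

@dataclass
class ToolReview:
    vendor_explains_data_flow: bool   # 1. Where does the data go?
    worst_case_acceptable: bool       # 2. Worst case if data is misused?
    handles_sensitive_data: bool      # Does the use case touch sensitive data?

def evaluate(review: ToolReview) -> Verdict:
    # If the vendor can't answer the data-flow questions clearly, it's a no.
    if not review.vendor_explains_data_flow:
        return Verdict.REJECTED
    if not review.worst_case_acceptable:
        return Verdict.REJECTED
    # Passing, but touching sensitive data -> approve with restrictions.
    if review.handles_sensitive_data:
        return Verdict.RESTRICTED
    return Verdict.APPROVED
```

The point isn't the code, it's that the decision is explicit and repeatable rather than ad hoc.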
Having this framework in place has prevented several potential incidents. That said, controlling what tools come in is only half the picture. The threat landscape itself is changing.
While we're figuring out how to secure AI, attackers are figuring out how to use it. GenAI has lowered the barrier for a lot of attacks. Things that used to take time and skill are now faster and easier.
What we're seeing:
This doesn't mean panic. Most attacks still rely on basics: weak passwords, unpatched systems, social engineering. But the baseline is shifting and attacks that used to require expertise are becoming commoditized.
What does this mean for defense?
Traditional security awareness training needs updating. "Look for spelling mistakes" isn't reliable advice anymore. Verification processes need to assume that voice and video can be faked. And detection needs to account for attacks that look more legitimate than before.
Defending against these threats isn't just about better tools, it starts with how your teams think about AI.
Tools and controls only go so far. A lot of AI risk comes down to how people behave. The problem is that most employees trust AI outputs too much. They see a confident, well-written response and assume it's correct. They copy code without reviewing it. They make decisions based on AI summaries without checking the source.
Our best front-line defense is focusing on shaping the behavior of our teams. In training, we focus on:
This isn't a one-time training. The available tools are changing constantly, so we revisit our training policies regularly and update guidance as the landscape shifts.
Securing an organization in the GenAI era isn't about having perfect answers; technology moves too fast for that.
What does matter:
At Wiremind, we are proactively working to secure our use of generative AI, not only because we use it, but also because we build ML-powered products. But every organization is heading in the same direction, whether they planned to or not.
The question isn't whether AI will change your security posture; it's whether you're adapting fast enough.
To learn more about security at Wiremind, visit our Security page: https://wiremind.io/security