Here’s an uncomfortable truth: your employees are almost certainly using AI at work right now – and not just occasionally. They’re pasting customer queries into ChatGPT, running emails through Grammarly, and uploading spreadsheets into AI analysis tools – often without telling you and without realising there’s a risk.
It’s called shadow AI, and if you haven’t heard of it yet, you’re not alone. The UK’s National Cyber Security Centre (NCSC) has published guidance on the broader shadow IT problem, and AI tools are fast becoming its most significant frontier. But ignorance isn’t much of a defence when it comes to data protection.
The good news? Once you understand what’s happening, it’s entirely manageable.
What is Shadow AI?
Shadow AI describes any artificial intelligence tool that employees use without the knowledge or approval of their organisation. Think of it as the modern equivalent of shadow IT – staff downloading software onto company devices without permission – except the stakes are significantly higher, because AI tools actively process and often store the data you feed them.
The most common culprits include ChatGPT and other large language models, AI writing assistants like Grammarly or Jasper, AI-powered transcription and note-taking tools, image generators like DALL-E or Midjourney, and the growing number of browser extensions with AI features quietly built in.
These tools are freely available, often require nothing more than an email address, and many employees genuinely don’t see the harm. After all, they’re just trying to work more efficiently.
Why this should concern you
The problem isn’t that your team is using AI. The problem is that they’re probably feeding it sensitive business data without understanding where that data ends up.
When someone pastes a draft contract into ChatGPT to tidy up the language, that content may be used to train the model. When someone uploads a spreadsheet of customer details to an AI-powered analysis tool, those records leave your controlled environment entirely. Under the UK GDPR, your organisation is still responsible for that data, even if it was shared unknowingly or without your consent.
The risks are real and varied: confidential business information being exposed to third-party AI providers, personal data breaches that could trigger GDPR reporting obligations, intellectual property leaking into publicly accessible models, and inconsistent outputs creating quality or compliance issues downstream.
For businesses with 50 to 300 employees, the challenge is particularly sharp. You’re large enough to have significant data exposure, but you may not yet have the IT governance structures, or the managed cyber security measures, in place to catch what’s happening.
How to discover shadow AI in your organisation
The first step is simply understanding the scale of what’s going on. And the most effective place to start is also the simplest: ask your team.
A short, non-judgemental staff survey can reveal which tools people are using and why. Frame it as curiosity rather than an investigation; people give far more honest answers when they don’t feel they’re being caught out. As the NCSC puts it, blaming or punishing staff only pushes shadow IT further underground. Most employees using AI tools at work believe they’re being resourceful, not reckless, and that’s an important distinction to respect.
Beyond surveys, your IT team or provider can review network traffic to identify which AI platforms are being accessed from company devices and networks. A device audit can flag AI-powered browser extensions, one of the most overlooked sources of data leakage. And it’s worth having direct conversations with department heads, because different teams tend to develop different AI habits. Marketing might be generating content, finance might be experimenting with data analysis, and HR might be screening CVs. Each presents a unique set of risks.
Building an AI policy for your business
Once you understand what’s happening, the temptation might be to ban AI tools outright. Resist it. A blanket ban won’t stop people using AI; it will just push the activity further underground, where you have even less visibility.
Instead, bring it into the light with a clear, practical AI policy for your business. A good policy doesn’t need to be lengthy, but it should be specific. Start by defining which AI tools are approved for business use and which are not. Naming individual platforms removes ambiguity. Set clear boundaries on what types of data can and cannot be entered into AI tools; personal data, financial records, and anything commercially sensitive should be off-limits unless the tool is an approved, enterprise-grade platform with appropriate safeguards.
Your policy should also create a simple route for employees to request new AI tools. If someone finds a platform that could genuinely help their work, make it easy for them to come to you rather than working around you. Pair that with basic training requirements – not just on how to use approved tools, but on understanding the boundaries – and you’ve got a framework that protects the business without stifling the people in it.
Channel the enthusiasm; don’t suppress it
The truth is that your employees are using these tools because they’re genuinely useful. If your team has gone out and found their own AI solutions, they’ve already identified problems worth solving. Your job is to make sure those problems get solved safely.
The businesses getting this right are the ones channelling that energy rather than suppressing it. They’re identifying where AI adds real value – meeting transcription, document drafting, data analysis, repetitive admin – and providing approved tools that give employees what they need within a secure, managed environment. Paired with proper cyber security foundations, this approach turns a risk into a genuine competitive advantage.
That balance between innovation and security isn’t just good governance. It’s good business.
Book a Discovery Call
Shadow AI isn’t something to panic about, but it is something to act on. The longer it goes unmanaged, the greater the risk to your data, your compliance standing, and your reputation.
Not sure where your organisation stands? Cloud Geeni can help. As an ISO 27001-certified managed service provider with UK-based data centres, we understand the compliance challenges facing growing businesses across the North of England. Book a discovery call to assess your current AI exposure and start building a practical AI policy that safeguards your business.
Register for Our ‘Making AI Practical’ Event
Want a broader, hands-on conversation about getting your business AI ready? Register your interest in our Making AI Practical lunch and learn – a free, jargon-free session designed for business leaders looking for actionable guidance.
Frequently Asked Questions
Can my business be fined if an employee shares data through an AI tool?
Yes. Under UK GDPR, the organisation is the data controller – that responsibility doesn’t transfer to the employee who caused the breach. The ICO’s breach guidance confirms that breaches caused by human error still fall on the organisation, and fines for this type of breach can reach £8.7 million or 2% of global annual turnover, whichever is higher.
Are enterprise AI tools like Microsoft Copilot safer than free ones?
Generally, yes. Microsoft confirms that prompts and data accessed through Microsoft 365 Copilot aren’t used to train foundation models. Free consumer tools like the standard version of ChatGPT don’t offer those safeguards by default, which is why specifying approved platforms in your AI policy matters.
How do I know which AI tools my employees are using?
Start with your people, not your technology. The NCSC’s shadow IT guidance stresses that blaming staff makes the problem worse. A non-judgemental survey, combined with a network traffic review and device audit from your IT team or provider, will give you a much clearer picture.
Do we need an AI policy if we don’t officially use any AI tools?
That’s exactly when you need one most. The NCSC’s guidance makes the point that unmanaged tools exist whether organisations sanction them or not. If there’s no official position on AI, your team will make their own judgements, and those won’t always align with your data protection obligations.