Published by Dimitri Vedeneev, Executive Director, Secure AI Lead, and Henry Ma, Technical Director, Strategy & Consulting, on March 27, 2026
This blog was originally published as part of CyberCX’s C-Suite Cyber Newsletter series on LinkedIn
The age of artificial intelligence (AI) in cybercrime has arrived – but the more immediate risk may be internal.
Organisations are adopting AI tools and systems at breakneck speed. But without proper governance policies, staff education and authorised AI tools, organisations invite the emerging risk of shadow AI into their environments.
What happened?
In 2025, CyberCX began to see threat actors using generative AI to create bespoke scripts and payloads that reduce the time between initial access and achieving their objectives. While the quality of these scripts and payloads was at best dubious, the trajectory is clear: organisations need to brace themselves to confront AI-enabled cyber attacks in 2026 and beyond.
But as CyberCX’s 2026 Threat Report outlined, the more immediate risks to organisations might be internal. For the first time, CyberCX responded to incidents sparked by an organisation’s staff uploading sensitive information to public AI portals.
- AI data spills accounted for around 3% of all incidents CyberCX’s Digital Forensics and Incident Response team responded to in 2025.
This risk stems from shadow AI: the use of AI in a work environment that is not sanctioned, authorised or managed by the organisation. This could be anything from software and agents to solutions and products.
Why does this matter?
From meeting transcripts and inbox management to chatbots and virtual assistants, AI has become ubiquitous in the workplace – and the trend is only heading in one direction. New technologies, tools and use cases are emerging virtually every week.
AI tools can drive cost savings and efficiency by shrinking the time and investment required for administrative tasks and code development, shifting workers into higher-value roles. But as is the case with any new technology, breakneck adoption introduces new risks.
- Shadow AI becomes inevitable when technology teams do not release AI tooling to their workforce in a timely manner. Technology teams that cause undue friction in this space are fighting a losing battle.
Shadow AI is both a pull and a push risk for organisations.
- Pull risk: when employees and projects need or want to use AI for a certain use case.
- Push risk: when the enterprise provides AI tools but does not match their capability to the use case, or fails to educate users on how to use that capability fully.
The AI tool approval process must work faster than the business’ instinct to bypass it. If the technology team is seen to impede AI tool uptake, then shadow AI will spread throughout the organisation.
How could this impact your organisation?
Without the right safeguards, shadow AI is a significant risk to an organisation's secure AI journey, from data spills to regulatory non-compliance.
CyberCX is already responding to data spills where members of an organisation have inadvertently uploaded sensitive documents or information to public-facing large language models (LLMs). These data spills carry a range of risks:
- Once information is fed into an LLM, there is no mechanism to remove it, and your sensitive information could be used to train that model (whether an LLM trains on your data depends on your commercial agreements).
- Intellectual property can become lost or compromised as it may now be available to other users of that AI tool.
- The exposure of sensitive customer or personal information carries regulatory and legal risks, which could result in fines, penalties, lawsuits and accompanying reputational damage.
- Data loss and confidentiality risks come with any unmanaged software use, not just shadow AI. But the data exposure risk with current AI products may be wider, given they accept far more data inputs – from text and video to documents and audio – than traditional SaaS products.
- With the ever-present risk of hallucinations, there's no guarantee that public LLMs and other AI tools will generate accurate or unbiased outputs, which could lead to poor decision-making. A recent Anthropic survey of 80,000 Claude users across 159 countries found that their biggest concern was the propensity of AI to make mistakes and hallucinate.
To control the risks associated with AI, it is critical to reduce shadow AI to an absolute minimum.
What should you do?
Here are three steps your organisation can take to minimise shadow AI:
- Develop a business strategy that aligns AI use with business drivers and risk appetite, supported by a robust AI governance model focused on reducing AI-related risks. This allows for a pragmatic approach to reviewing and endorsing AI tools for enterprise adoption, so that AI can accelerate business objectives while risks are minimised. Consider implementing an AI Governance Committee that works directly with the business to identify and support the rollout of high-priority AI use cases and tools.
- Implement a data strategy to label and classify data, establishing clear parameters for who and what can access your data, and from where. This reduces the likelihood of shadow AI being used with commercially sensitive or personal information (see the first sketch after this list).
- Educate staff about the use of public AI tools in the workplace, setting clear, unambiguous expectations about which AI tools are allowed and for which tasks and activities, to minimise the risk of AI misuse. Organisations should provide mandatory Responsible Use of AI training to the business: users must be aware of endorsed solutions, how to maximise their capabilities, and how to use them responsibly (see the second sketch after this list).
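
As a minimal sketch of the data strategy step, the snippet below shows how classification labels could gate what is allowed to leave the environment via external AI tools. The label names and the `is_upload_permitted` helper are hypothetical illustrations under assumed policy rules, not a specific product's API:

```python
# Hypothetical sketch: enforcing data-classification rules before content
# reaches an external AI tool. Label names and policy are illustrative.

from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Assumed policy: only PUBLIC material may leave via external AI tools.
MAX_EXTERNAL_LABEL = Classification.PUBLIC

def is_upload_permitted(document_label: Classification) -> bool:
    """Return True only if the document's label allows external AI use."""
    return document_label.value <= MAX_EXTERNAL_LABEL.value

# Example: a draft contract labelled CONFIDENTIAL is blocked.
assert not is_upload_permitted(Classification.CONFIDENTIAL)
assert is_upload_permitted(Classification.PUBLIC)
```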
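
And a hedged sketch of the endorsement step: a simple allowlist check of the kind a web proxy or browser extension might apply, pairing each endorsed tool with the tasks it is approved for. The domains and the `ENDORSED_AI_TOOLS` mapping are placeholders, not a recommendation of any particular product:

```python
# Hypothetical sketch: checking outbound AI-tool usage against an endorsed
# allowlist. Unlisted destinations are treated as shadow AI.

from urllib.parse import urlparse

# Endorsed tools and the tasks they are approved for (placeholder values).
ENDORSED_AI_TOOLS = {
    "copilot.example.com": {"code assistance", "drafting"},
    "chat.internal.example.com": {"drafting", "summarisation"},
}

def check_ai_request(url: str, task: str) -> str:
    """Classify an outbound request as allowed, out of scope, or shadow AI."""
    host = urlparse(url).hostname or ""
    if host not in ENDORSED_AI_TOOLS:
        return "blocked: unsanctioned (shadow) AI tool"
    if task not in ENDORSED_AI_TOOLS[host]:
        return "blocked: endorsed tool, unapproved task"
    return "allowed"

print(check_ai_request("https://chat.internal.example.com/new", "drafting"))
print(check_ai_request("https://randomllm.example.net/", "drafting"))
```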


