
Shadow AI: Why your biggest AI threat may come from within

Threat Advisory

Published by Dimitri Vedeneev, Executive Director, Secure AI Lead, and Henry Ma, Technical Director, Strategy & Consulting, on 27 March 2026

This blog was originally published as part of CyberCX’s C-Suite Cyber Newsletter series on LinkedIn


The age of artificial intelligence (AI) in cybercrime has arrived – but the more immediate risk may be internal.

Organisations are adopting AI tools and systems at breakneck speed. But without proper governance policies, staff education and authorised AI tools, organisations invite the emerging risk of shadow AI into their environments.

 

What happened?

In 2025, CyberCX began to see threat actors using generative AI to create bespoke scripts and payloads that reduce the time between initial access and achieving their objectives. While the quality of these scripts and payloads was at best dubious, the trajectory is clear: organisations need to brace themselves to confront AI-enabled cyber attacks in 2026 and beyond.

But as CyberCX’s 2026 Threat Report outlined, the more immediate risks to organisations might be internal. For the first time, CyberCX responded to incidents sparked by an organisation’s staff uploading sensitive information to public AI portals.

This risk stems from shadow AI: the use of AI in a work environment that has not been sanctioned, authorised or approved by the appropriate personnel. Shadow AI can take the form of software, agents, solutions or products.

 

Why does this matter?

From meeting transcripts and inbox management to chatbots and virtual assistants, AI has become ubiquitous in the workplace – and the trend is only heading in one direction. New technologies, tools and use cases are emerging virtually every week.

AI tools can drive cost savings and efficiency by shrinking the time and investment required for administrative tasks and code development, shifting workers into higher value roles. But as is the case with any new technology, breakneck adoption introduces new risks.

Shadow AI is both a pull and a push risk for organisations: staff are pulled towards powerful public tools that make their work easier, and pushed towards unsanctioned alternatives when approved options are slow to arrive.

The AI tool approval process must work faster than the business’ instinct to bypass it. If the technology team is seen to impede AI tool uptake, shadow AI will spread throughout the organisation.

 

How could this impact your organisation?

Without the right safeguards, shadow AI poses a significant risk to an organisation’s secure AI journey, from data spills to falling short of regulatory compliance.

CyberCX is already responding to data spills where members of an organisation have inadvertently uploaded sensitive documents or information to public facing large language models (LLMs). These spills carry a range of risks, including loss of control over the information and potential regulatory exposure.
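As a purely illustrative sketch (not a CyberCX tool or recommendation of a specific product), some organisations screen outbound prompts for obviously sensitive patterns before they can reach a public LLM. The patterns and the `screen_prompt` helper below are hypothetical examples; a real control would sit in an egress proxy or DLP layer with far broader coverage:

```python
import re

# Hypothetical patterns for illustration only; real deployments need far
# broader detection (classification labels, named entities, full DLP rules).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A non-empty result would block or flag the upload before it leaves
# the corporate network.
hits = screen_prompt("Please summarise the invoice for card 4111 1111 1111 1111")
```

Pattern screening of this kind is a backstop, not a substitute for governance: it catches only what it has been taught to recognise.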

To control the risks associated with AI, it is critical to reduce shadow AI to an absolute minimum.

 

What should you do?

Here are three steps your organisation can take to minimise shadow AI:

  1. Develop a business strategy that aligns AI use with business drivers and risk appetite, alongside a robust AI governance model focused on reducing AI-related risks. This allows a pragmatic approach to reviewing and endorsing AI tools for enterprise adoption, so that AI can accelerate business objectives while risks are minimised. Consider establishing an AI Governance Committee that works directly with the business to identify and support the rollout of high-priority AI use cases and tools.
  2. Implement a data strategy to label and classify data, enabling clear parameters for who, where and what should have access to it. This reduces the likelihood that shadow AI is used to handle commercially sensitive or personal information.
  3. Educate staff about the use of public AI tools in the workplace, setting clear, unambiguous expectations about which AI tools are allowed and for what tasks. Organisations should provide mandatory Responsible Use of AI training to the business, so that users know which solutions are endorsed, how to maximise their capabilities, and how to use them responsibly.
