After the Mythos moment: The age of AI has transformed cyber readiness
This blog was originally published as part of CyberCX’s C-Suite Cyber Newsletter series on LinkedIn.
The launch of Claude Mythos Preview has fired the starting gun on a new era of artificial intelligence-enabled vulnerability detection. Other AI developers will be hot on their heels with tools that are just as capable (or better) at finding vulnerabilities. Some of these may even be open source or developed in authoritarian regimes.
The time for organisations to act is now.
What happened?
Earlier this month, frontier AI lab Anthropic revealed the creation of Claude Mythos Preview (Mythos) – a new and unreleased large language model (LLM) which Anthropic claims has advanced cyber security capabilities that can autonomously discover, chain, and exploit zero-day vulnerabilities at scale.
Anthropic says that Mythos is so powerful that it’s too dangerous to be released publicly. Instead, Anthropic has limited Mythos access to a coalition of over 50 major technology and infrastructure partners through a program dubbed Project Glasswing.
Mythos is claimed to have three main capabilities that make it vastly more advanced than other LLMs:
- Increased autonomy and reliability: Tests showed Mythos created 181 Firefox exploits, while Claude Opus 4.6 managed only two.
- Chained vulnerabilities: Mythos finds complex vulnerabilities that link multiple issues, like combining several memory bugs into one exploit.
- Single-prompt capability: Mythos gets more done with a single prompt, without the need for complex setup or adjustments.
(Source: SANS Institute, Cloud Security Alliance, [un]prompted, and OWASP GenAI Security Project)
Why it matters
Mythos looms as a gamechanger for the scale and speed at which AI can be used to detect, chain together, and potentially exploit cyber vulnerabilities.
While Anthropic hasn’t given a timeline for making Mythos public, saying it prefers to work with the US government and Project Glasswing partners to determine next steps, the company does anticipate that competitors and other groups will release AI models with similar capabilities more widely within the next 18 months.
- The New York Times reported that Mythos had set off “global alarms” and triggered emergency responses from central banks and intelligence agencies globally.
- Rival frontier AI lab OpenAI has recently released its competing model GPT-5.4-Cyber to a select group of users.
This estimated 18-month timeframe – and the fact Anthropic won’t release Mythos widely yet – gives organisations a crucial window to start acting now to prepare for a major shift in how quickly bugs can be found and exploited.
And there are no guarantees as to how quickly this window will close.
Last week, Bloomberg reported that a small group of unauthorised users had gained access to Mythos through third-party access and common “internet sleuthing tools”, and had been using it regularly ever since. While Anthropic has withheld Mythos’ public release, there is an ever-present risk that this capability – or something similar – could fall into the wrong hands.
How could this impact me and my organisation?
According to Anthropic, the Mythos model has discovered – with minimal oversight – thousands of high and critical severity zero-days across every major operating system and browser in the past several weeks – and suggested ways to exploit them.
- Some of these zero-days are decades old, with Mythos identifying one now-patched 27-year-old bug in OpenBSD, an operating system known primarily for its security.
This is a potential gamechanger for every organisation, and it naturally raises the question: what gaps have we missed – and how can we find them?
What should I do?
1. Zero Trust Readiness Assessment
Undertake a rapid review focused on critical assets and dependency mapping. This helps an organisation identify its most vital systems, data, and processes, and how these elements are interconnected. By understanding these dependencies, organisations can more effectively pinpoint vulnerabilities and strengthen defences, making it harder for tools like Mythos to exploit gaps and disrupt operations.
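To illustrate what dependency mapping surfaces, here is a minimal sketch in Python. The asset names and edges are hypothetical examples, not a real inventory; the idea is that the asset with the largest set of transitive dependents has the biggest blast radius and deserves attention first.

```python
from collections import defaultdict, deque

# Hypothetical asset inventory: each entry reads "X depends on Y, Z".
dependencies = {
    "customer-portal": ["auth-service", "orders-db"],
    "auth-service": ["identity-db"],
    "orders-db": ["storage-cluster"],
    "identity-db": ["storage-cluster"],
    "reporting": ["orders-db"],
}

def transitive_dependents(deps):
    """Count how many assets ultimately rely on each asset.

    A high count marks an asset as critical: one exploited
    vulnerability there cascades to everything above it."""
    # Invert the graph: asset -> assets that depend on it directly
    dependents = defaultdict(set)
    for asset, needs in deps.items():
        for need in needs:
            dependents[need].add(asset)
    counts = {}
    for asset in set(deps) | set(dependents):
        seen, queue = set(), deque(dependents[asset])
        while queue:
            d = queue.popleft()
            if d not in seen:
                seen.add(d)
                queue.extend(dependents[d])
        counts[asset] = len(seen)
    return counts

counts = transitive_dependents(dependencies)
# Rank assets by blast radius; review the top of the list first
for asset, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(asset, n)
```

In this toy inventory the storage cluster sits beneath everything else, so it ranks first even though nothing in the map names it as a "critical" asset directly – exactly the kind of hidden dependency a rapid review is meant to expose.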
2. Limit unauthorised access through network segmentation
Segmenting networks limits the impact of a security breach by restricting lateral movement between critical assets and reducing the attack surface. Organisations can start designing and implementing a strategy to segregate critical assets within secure networks now.
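The core of a segmentation policy can be sketched as a default-deny allow-list of zone-to-zone flows. The zone names, address prefixes, and permitted flows below are illustrative assumptions, not a recommended architecture:

```python
# Hypothetical zones keyed by IP prefix.
ZONES = {
    "10.0.1.": "corporate",
    "10.0.2.": "dmz",
    "10.0.9.": "critical",   # segregated crown-jewel assets
}

# Only these directed flows are permitted; everything else is denied.
ALLOWED_FLOWS = {
    ("corporate", "dmz"),
    ("dmz", "critical"),     # via a hardened gateway only
}

def zone_of(ip):
    for prefix, zone in ZONES.items():
        if ip.startswith(prefix):
            return zone
    return "unknown"

def is_allowed(src_ip, dst_ip):
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (zone_of(src_ip), zone_of(dst_ip)) in ALLOWED_FLOWS

# A hop from corporate straight into the critical zone is blocked,
# so an attacker must traverse (and be detected at) the DMZ first.
print(is_allowed("10.0.1.25", "10.0.9.5"))   # denied
print(is_allowed("10.0.1.25", "10.0.2.8"))   # allowed
```

In practice this logic lives in firewall or micro-segmentation tooling rather than application code; the point is that every permitted path between zones is explicit and reviewable.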
3. Find and address security weaknesses with AI-assisted remediation
While Mythos remains unreleased, organisations can use available AI tools and techniques to identify, prioritise, and address security weaknesses in systems and applications. By using AI to accelerate automated detection and to verify exploitability and reachability, organisations can prioritise and deploy fixes faster.
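The triage logic behind that prioritisation can be sketched simply: rank findings by combining base severity with whether an exploit has been verified and whether the asset is actually reachable. The field names, weights, and CVE identifiers below are illustrative assumptions, not a standard scoring scheme:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float              # base severity, 0-10
    exploit_verified: bool   # e.g. confirmed by automated exploit testing
    internet_reachable: bool # asset exposed beyond the internal network

def priority(f: Finding) -> float:
    """Weight severity up sharply when exploitability and
    reachability are both confirmed."""
    score = f.cvss
    if f.exploit_verified:
        score *= 1.5
    if f.internet_reachable:
        score *= 1.5
    return round(score, 2)

findings = [
    Finding("CVE-0000-0001", 9.8, exploit_verified=True, internet_reachable=True),
    Finding("CVE-0000-0002", 9.9, exploit_verified=False, internet_reachable=False),
    Finding("CVE-0000-0003", 7.5, exploit_verified=True, internet_reachable=False),
]

# Fix order: verified, reachable criticals jump ahead of raw CVSS.
queue = sorted(findings, key=priority, reverse=True)
print([f.cve for f in queue])
```

Note how the 9.9-severity finding drops to last place: with no verified exploit and no exposure, it matters less right now than a 7.5 with a working exploit – which is the judgment AI-assisted verification lets teams make at scale.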
Ready to get started?
Our team of Secure AI experts and technology partners helps organisations successfully adopt and deploy AI through industry-leading strategy, governance, access control and training.

