Unauthorized Users Have Already Gained Access to Anthropic’s Mythos

April 24, 2026
IANS News

Key Points

  • A small group of unauthorized users reportedly gained access to Anthropic’s highly restricted Mythos AI model the same day it was announced, despite efforts to limit availability to select partners via Project Glasswing.
  • The group relied on ordinary and well‑known techniques, highlighting persistent vendor and contractor risk.
  • IANS Faculty emphasize the need for heightened AI vendor scrutiny and preparation for accelerated vulnerability discovery.

 


A group of unauthorized users gained access to Anthropic’s new AI model, Mythos, on the same day Anthropic announced it had released the model to a small group of partners, Bloomberg reported.

The unauthorized users reportedly relied on multiple tactics to access Mythos, including leveraging the access one member held as a third-party contractor for Anthropic, using internet-sleuthing tools common among cybersecurity researchers, and drawing on information obtained from a private Discord channel dedicated to hunting for access to unreleased AI models.

The group claimed it was able to determine Mythos’ online location through an educated guess, drawing on prior knowledge of the URL format Anthropic has used for previous models and on details leaked in a data breach at Mercor, an AI training startup.

"The breach itself is almost disappointing in its ordinariness. No exotic prompt injection. No model extraction wizardry.  Just a third-party contractor, predictable URL conventions, and shared credentials. This is supply chain 101 wearing an AI costume."  Jeff Brown, IANS Faculty.

The unauthorized users claimed to have been using Mythos and other Anthropic models regularly, but reportedly not for cybersecurity purposes. Anthropic said there is no evidence that unauthorized access affected its systems or extended beyond its environment.

 

Big Picture

Although it was likely inevitable that unauthorized users would access Mythos, it's alarming that it happened so quickly. The attack vector used was predictable and well within the threat model Anthropic should have anticipated.

The incident also raises uncomfortable questions about whether Anthropic’s internal controls match the power it ascribes to Mythos.

"The fact that Anthropic isn't detecting this illicit access further reinforces to me that making Mythos generally available is more a capacity issue than a security issue. If Mythos were truly as dangerous as Anthropic marketing implies, not having world class anomaly detection would be wildly irresponsible."  Jake Williams, IANS Faculty.

This incident also fits a broader pattern of security shortfalls around Anthropic releases, where speed and capability repeatedly appear to outpace governance and control discipline.

"Is it possible for Anthropic to get anything right when it comes to security? Releasing MCP without an authentication interface, losing their source code, releasing Cowork skills without any policy enforcement, and now not being able to keep their super-secret, technology-destroying project under wraps. Should we be surprised?”  Aaron Turner, IANS Faculty.

Third-party vendor and contractor access continues to be a weak spot, even when core infrastructure seems secure. The incident underscores the importance of AI vendor scrutiny, particularly because powerful AI tools can accelerate cyber threats if they fall into the wrong hands.

"Your perimeter is only as strong as the least-governed partner in the chain. CISOs should treat external AI testing, hosting, and integration path like privileged production access now—tight vendor segmentation, short-lived credentials, and logging that tells you not just who got in, but what model they touched and why."  Summer Fowler, IANS Faculty.

 


IANS Faculty Recommendations

  • Map your Glasswing exposure: Identify which of your material vendors have Mythos or equivalent offensive-AI access. That list is your zero-day blast radius. 
  • Treat frontier-model previews as crown-jewel infrastructure: API keys, URLs, contractor offboarding, and access reviews for AI vendor environments need the same rigor as your production identity plane. Stop treating "preview" as lower criticality.
  • Rewrite the IR playbook assumption: Build a tabletop for "adversary has persistent access to a zero-day discovery engine." The old math that zero-days are scarce and expensive no longer holds if the discovery function is automated and leaked.
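Two of the controls named above, short-lived credentials and access logs that record which model was touched and why, can be sketched in a few lines. This is a minimal illustration only; every name here (`TokenIssuer`, `call_model`, `access_log`, the 15-minute TTL) is hypothetical and does not reflect any real Anthropic or vendor API.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 900  # illustrative 15-minute lifetime forces frequent re-issuance

class TokenIssuer:
    """Hypothetical issuer of short-lived, per-subject access tokens."""

    def __init__(self):
        self._tokens = {}  # token -> (subject, expiry timestamp)

    def issue(self, subject: str) -> str:
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (subject, time.time() + TOKEN_TTL_SECONDS)
        return token

    def validate(self, token: str):
        entry = self._tokens.get(token)
        if entry is None or time.time() > entry[1]:
            return None  # unknown or expired -> deny
        return entry[0]

access_log = []  # in a real deployment this would be an append-only audit store

def call_model(issuer: TokenIssuer, token: str, model: str, purpose: str) -> str:
    """Gateway sketch: log who touched which model and why, then allow or deny."""
    subject = issuer.validate(token)
    allowed = subject is not None
    # Record the full context, not just that a call happened.
    access_log.append({"subject": subject, "model": model,
                       "purpose": purpose, "allowed": allowed,
                       "ts": time.time()})
    if not allowed:
        raise PermissionError("expired or unknown token")
    return f"{model} invoked for {subject}"
```

The point of the sketch is the shape of the record: an expired or unknown token still produces a log entry, so reviewers can see denied attempts against preview models, not only successful ones.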

-- Jeff Brown, IANS Faculty

 

Authors & Contributors

Emily Dempsey, Author, IANS News

Jeff Brown, IANS Faculty

Summer Fowler, IANS Faculty

Aaron Turner, IANS Faculty

Jake Williams, IANS Faculty

 

Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our News & blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.
