Anthropic Investigates Unauthorized Access Claim to Claude Mythos Model
Anthropic is investigating a claim that a small group of individuals gained unauthorized access to its Claude Mythos model, a new frontier AI system designed for advanced cybersecurity tasks. The investigation comes amid heightened attention on the model’s capabilities following its April 2026 announcement as part of Project Glasswing, a collaborative initiative to secure critical software infrastructure.
Background on Claude Mythos and Project Glasswing
On April 7, 2026, Anthropic announced Claude Mythos Preview, a general-purpose language model exhibiting strong performance in computer security tasks. The model was introduced alongside Project Glasswing, a coalition of major technology companies including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The initiative aims to employ Mythos Preview to identify and mitigate vulnerabilities in critical software systems, with Anthropic committing up to $100 million in usage credits and $4 million in direct donations to open-source security organizations.
According to Anthropic’s internal testing, Mythos Preview demonstrated the ability to autonomously discover thousands of high-severity vulnerabilities across major operating systems and web browsers, including long-standing bugs in OpenBSD and FFmpeg. The model also showed proficiency in constructing sophisticated exploit chains, such as multi-vulnerability privilege escalation in the Linux kernel and remote code execution against FreeBSD.
Details of the Unauthorized Access Claim
The specific nature of the claim regarding unauthorized access to Claude Mythos has not been detailed in publicly available sources. Anthropic has confirmed it is investigating the allegation but has not confirmed whether any breach actually occurred, and it has not released further information about the individuals involved, the method of alleged access, or the potential scope of exposure. The company emphasized its commitment to security and responsible AI development in its public statements.

Industry Response and Implications
The investigation underscores ongoing challenges in securing advanced AI models, particularly those with potent cybersecurity capabilities. Industry experts note that as AI systems like Mythos Preview grow more capable of identifying and exploiting software vulnerabilities, the risk of misuse increases if such tools fall into the wrong hands. This has prompted calls for robust access controls, monitoring protocols, and ethical guidelines governing the deployment of frontier AI models.
Project Glasswing partners have stated they will continue to use Mythos Preview under strict supervision for defensive security purposes, such as scanning and patching vulnerabilities in critical infrastructure. Anthropic has pledged to share findings from its investigations to help the broader industry strengthen safeguards around powerful AI systems.
Conclusion
As Anthropic proceeds with its investigation into the alleged unauthorized access to Claude Mythos, the incident highlights the dual-use nature of advanced AI in cybersecurity. While models like Mythos Preview offer significant potential for strengthening digital defenses, they also necessitate heightened vigilance to prevent misuse. The outcome of the investigation may influence future policies governing access to and deployment of high-capability AI systems in sensitive domains.
For ongoing updates on this developing story and related advancements in AI security, readers are encouraged to follow official communications from Anthropic and the Project Glasswing initiative.