Global Alarm Over Secretive AI Hacking System: Mythos Model Sparks Cybersecurity Scramble

by Marcus Liu - Business Editor

Anthropic’s Mythos AI Model Leaked: Unauthorized Access Raises Cybersecurity Concerns

Anthropic’s unreleased AI model, Mythos, has been accessed by unauthorized users, according to multiple reports. The model, described by Anthropic as capable of identifying and exploiting zero-day vulnerabilities across major operating systems and web browsers, was intended for a limited group of partners under an initiative called Project Glasswing.

Bloomberg reported that a small group of users gained access to Mythos Preview through a combination of tactics. One member of the group was a third-party contractor for Anthropic, providing privileged access. The group then used naming conventions from a prior data breach at AI training startup Mercor to guess the model’s online location. Once located, they employed additional methods to maintain access.

Mashable confirmed the group accessed Mythos via a private Discord channel focused on hunting unreleased AI models. The users stated they were not using the model for malicious purposes but for tasks like building simple websites. However, they claimed to have ongoing access and hinted at accessing other unreleased Anthropic models.

Fortune noted that despite Anthropic’s efforts to limit Mythos to 40 elite companies—including Microsoft, Apple, and Google—for pre-release security testing, the broad distribution across these organizations made a leak nearly inevitable. David Lindner, chief information security officer at Contrast Security, stated that expanding access to such a model increases the risk of unauthorized exposure, saying, “It was bound to happen.”

Anthropic confirmed to Bloomberg that it is investigating reports of unauthorized access to Claude Mythos Preview through one of its third-party vendor environments. The company did not provide further comment to Fortune.
The incident has reignited concerns about the security of powerful AI systems and the risks posed by their premature exposure, even when access is restricted to trusted partners. As AI models grow more capable of automating cyber threat detection and exploitation, safeguarding them against leaks becomes increasingly critical to global digital security.
