EU AI Regulation May Hold Implications for Powerful New Anthropic Model

Anthropic jolted the tech and policy worlds this week with its announcement of Claude Mythos Preview, an artificial intelligence model that it will release only to tech vendors, so they can apply its strong bug-finding and exploitation capabilities to their wares before attackers get the chance.
This limited-exposure program, called Project Glasswing, so far includes companies such as Apple, Microsoft and Cisco, plus 40 other organizations that “build or maintain critical software infrastructure,” Anthropic said, adding that it had also talked to the U.S. government about the model. But Europe’s leaders – who recently passed legislation that affects Anthropic’s strategy with risky systems such as this – are also taking a keen interest.
“We are currently assessing possible implications in light of EU policies and legislation,” European Commission spokesman Thomas Regnier told ISMG in an emailed statement. “We are also monitoring the security implications of this rapidly evolving technology – for both increasing our cyber defenses and possible misuse.”
Mythos Preview was Anthropic’s first model announcement since the company overhauled its “responsible scaling policy” in February, dropping a pre-existing pledge to stop training and avoid releasing models if it can’t reliably mitigate the risks that they pose. At the time, chief scientific officer Jared Kaplan told Time that it no longer made sense to hold back unilaterally “if competitors are blazing ahead.”
Even with that policy shift and the lack of anything to fear from federal AI regulation in the United States, Europe’s new AI rules have plenty to say on the matter.
There are two particular documents that Anthropic and other “general purpose AI” vendors need to pay attention to when developing and releasing risky models. One is the AI Act, the relevant parts of which went into effect last August. The other is the AI code of practice, published in July, which gives the industry a steer as to AI Act compliance. Pledging adherence is voluntary, and Anthropic is one of the companies that did so.
Anthropic may say in its system card for Mythos Preview that “current risks remain low” – a judgment that’s largely based on its lack of prowess in aiding chemical and biological weapons production or making big strides in research and development automation – but it seems likely that the model poses a “systemic risk” under the wording of the AI Act, which says that label could apply in cases where there’s a risk of disruptions to critical sectors, or of “reasonably foreseeable negative effects on… public and economic security.”
Per the code of practice, that likely means Anthropic couldn’t legally give Mythos Preview a full European release without first implementing sufficient safety and security mitigations, to the point where the risk becomes acceptable.
“AI and cybersecurity are closely intertwined,” said Regnier. “And whilst it is clear that AI provides groundbreaking solutions for cybersecurity, such models require solid research and testing before they are placed on the market so as to ensure adequate checks and balances and avoid other potential security risks they may generate or misuse by malicious actors.”
The commission spokesman also pointed out that the AI Act and the soon-to-be-implemented Cyber Resilience Act require Anthropic to have a “strong level of cybersecurity protection” for the models themselves (see: Europe Girds for Looming IoT Security Regulations).
Europe’s AI code of practice obliges its signatories to draw up a safety and security framework for the models they are developing, using or making available, and to give the European AI Office – a new department of the European Commission – unredacted access within five working days of the framework being confirmed. The commission has not given any details about Anthropic’s compliance on this front.
At least one European government agency has also been talking to Anthropic about Mythos Preview and seems to have come away with more questions than answers.
“We are in active dialogue with Anthropic, the creators of Claude Mythos,” said Claudia Plattner, president of Germany’s Federal Office for Information Security, or BSI, in an emailed statement. “While we have not yet had the opportunity to test the tool directly, our conversations with the developers have given us meaningful insight into how it works. In short: we take these announcements very seriously and anticipate significant disruption – both in how security vulnerabilities are handled and in the broader threat landscape.
“Taken to its logical conclusion, we may reach a point in the medium term where unknown, classical software vulnerabilities simply cease to exist. This would trigger a fundamental shift in attack vectors and represent a paradigm change in the nature of cyberthreats. It also raises a pressing question: whether – and if so, for how long – tools of such extraordinary power will remain available on the open market. That question, in turn, has profound implications for national and European security and sovereignty.”
In its Glasswing announcement, Anthropic said it was having “ongoing discussions with U.S. government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities.” Multiple reports on Friday said that the U.S. government had convened urgent meetings with Wall Street leaders this week over the Mythos threat.
Anthropic’s announcement also noted that “securing critical infrastructure is a top national security priority for democratic countries,” adding that governments have “an essential role to play” in “both assessing and mitigating the national security risks associated with AI models.” Beyond that, it did not say anything concrete about its discussions with non-U.S. governments.
Sven Herpig, cybersecurity lead at the European tech policy think tank Interface, told ISMG on Friday that most European governments would likely reach out to Anthropic to better understand how powerful Mythos Preview is, and to verify the company’s claims. He said they were unlikely to ask to use it to test the security of their own systems at this point, as “governments are not really producers of source code” – and the biggest software creators whose products they use are already testing those products under the auspices of Project Glasswing.