Anthropic · Fortune Technology
When OpenAI published GPT-5.3-Codex in February, the company confirmed it was the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework.
Compiled by KHAO Editorial — aggregated from 1 outlet.
Hackers have already leveraged Anthropic’s tools to enable more sophisticated and autonomous attacks.
Key facts
- When OpenAI released GPT-5.3-Codex in February, the company said it was the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework.
- Anthropic is giving a group of Big Tech and cybersecurity firms access to a preview version of Claude Mythos, its unreleased and most advanced AI model; the potential consequences for economies, public safety, and national security could be severe.
- The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over the next few months
- While concerns about AI’s potential to automate large-scale cyberattacks have been building for a while, Anthropic’s newest model appears to represent a dangerous new level of AI performance in cyber tasks.
Summary
Anthropic is giving a group of Big Tech and cybersecurity firms access to a preview version of Claude Mythos, its unreleased and most advanced AI model. If such capabilities are misused, the consequences for economies, public safety, and national security could be severe. While concerns about AI’s potential to automate large-scale cyberattacks have been building for a while, Anthropic’s newest model appears to represent a dangerous new level of AI performance in cyber tasks. Previous models from OpenAI and Anthropic had already reached a new risk level for cyber threats. “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” Anthropic said in a statement.