OpenAI · Axios
Why it matters: Recent security testing suggests the GPT-5.5 model is nearly as good at finding and exploiting software flaws as Anthropic's Mythos Preview.
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
OpenAI is opening a limited preview of GPT-5.5-Cyber to vetted cyber defenders who are "responsible for securing critical infrastructure," per a press release.
Key facts
- Recent security testing suggests the GPT-5.5 model is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview
- OpenAI is rolling out a more permissive version of GPT-5.5, aka "Spud", to vetted cyber defenders, the company said Thursday
- OpenAI is opening a limited preview of GPT-5.5-Cyber to vetted cyber defenders who are "responsible for securing critical infrastructure," per a press release
- The new GPT-5.5-Cyber model is specifically designed to help defenders write proofs of concept for bugs they find or run simulations to test their organization's security posture
Summary
OpenAI is rolling out a more permissive version of GPT-5.5, aka "Spud", to vetted cyber defenders, the company said Thursday. Recent security testing suggests the GPT-5.5 model is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview. A source familiar with GPT-5.5-Cyber's abilities told Axios that they were roughly on par with Mythos. Cyber defenders who are vetted and approved for the highest tier of OpenAI's Trusted Access for Cyber program will receive a version of GPT-5.5 that has fewer guardrails than the publicly available model.