Google confirms it likely thwarted effort by hacker group to deploy AI for 'mass exploitation event'
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
Google's Threat Intelligence Group said in a report on Monday that it thwarted an effort by hackers to use artificial intelligence models to "plan a mass vulnerability exploitation operation."
Key facts
- Last week, OpenAI announced that GPT-5.5-Cyber, a variation of its latest model, is rolling out in a limited preview capacity to vetted cybersecurity teams
- The concerns sent shockwaves through the industry and led to White House meetings with technology and business leaders
- In Monday's report, Google highlighted several examples of how hackers are already using tools such as OpenClaw to find vulnerabilities, launch cyberattacks and develop malware
- GTIG said it has "high confidence" that it recorded hackers using an AI model to find and exploit a zero-day vulnerability, or a software flaw unknown to developers, creating a way to bypass two-factor authentication
Summary
GTIG said it has "high confidence" that it recorded hackers using an AI model to find and exploit a zero-day vulnerability, or a software flaw unknown to developers, creating a way to bypass two-factor authentication. "The criminal threat actor planned to use it in a mass exploitation event but their proactive counter discovery may have prevented its use," Google wrote in the post, without disclosing the name of the hacker group. The findings underscore how hackers are using available AI tools like OpenClaw to exploit software flaws in ways that can be particularly damaging to companies, government agencies and other organizations even as cybersecurity firms pump billions of dollars into bolstering their defenses.