AI-powered hacking has exploded into an industrial-scale threat, Google confirms
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
In three months, AI-powered hacking has gone from a nascent problem to an industrial-scale threat, according to a report from Google.
Key facts
- The Ada Lovelace Institute (ALI), an independent AI research body, has cautioned against assumptions of a multibillion-pound public sector productivity boost from AI
- Steven Murdoch, a professor of security engineering at University College London, said AI tools could help the defensive side in cybersecurity as well as the hackers
- There’s a misconception that the AI vulnerability race is imminent
Summary
The findings from Google’s threat intelligence group add to an intensifying global discussion about how the newest AI models, already highly adept at coding, are becoming powerful tools for exploiting vulnerabilities in a broad array of software systems. The report finds that criminal groups, as well as state-linked actors from China, North Korea and Russia, appear to be making wide use of commercial models, including Gemini, Claude and tools from OpenAI, to refine and scale up attacks. “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks.” Last month, the AI company Anthropic declined to release one of its newest models, Mythos, after asserting that its capabilities were so powerful that it would pose a threat to governments, financial institutions and the wider world if it fell into the wrong hands.