Google · Ars Technica
Spooked by Mythos, Trump suddenly realized AI safety testing might be good
This week, the Trump administration backpedaled and signed agreements with Google DeepMind, Microsoft, and xAI to run government safety checks on the firms’ frontier AI models before and after their release.
Key facts
- In January, Congress approved up to $10 million to expand CAISI, Fortune reported
- In a statement provided to Ars, Sarah Kreps, director of the Tech Policy Institute at Cornell University, said that AI firms should develop closer ties with the government as AI advances
- Rumman Chowdhury, an AI governance consultant and founder of Humane Intelligence, similarly criticized CAISI’s preparedness
- CAISI said it has completed about 40 evaluations to date, including evaluations of frontier models that have yet to be released
Summary
Previously, Donald Trump had stubbornly cast aside the Biden-era policy, dismissing voluntary safety checks as overregulation that blocked unbridled innovation. But after Anthropic announced that releasing its latest Claude Mythos model would be too risky, fearing that bad actors might exploit its advanced cybersecurity capabilities, Trump is suddenly concerned about AI safety. In its press release, CAISI acknowledges that the voluntary agreements signed by Google, Microsoft, and xAI "build on" Biden's policy. "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," Fall said.