

Spooked by Mythos, Trump suddenly realized AI safety testing might be good


Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.


By Ashley Belanger

This week, the Trump administration backpedaled and signed agreements with Google DeepMind, Microsoft, and xAI to run government safety checks on the firms’ frontier AI models before and after their release.


Summary

Previously, Donald Trump had cast aside the Biden-era policy, dismissing voluntary safety checks as overregulation that blocked innovation. But after Anthropic announced that releasing its latest Claude Mythos model would be too risky, fearing that bad actors might exploit its advanced cybersecurity capabilities, Trump is suddenly concerned about AI safety. In its press release, CAISI acknowledges that the voluntary agreements signed by Google, Microsoft, and xAI "build on" Biden's policy. "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," Fall said.

Read full article at Ars Technica →

#Google #Microsoft