Business · Tom's Hardware
Trump administration considers mandatory pre-release vetting of AI models
The Trump administration is said to be discussing an executive order that would establish a government review process for new AI models before they’re released to the public, The New York Times has reported, citing unnamed U.S. officials.
Key facts
- Officials told the New York Times that the NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review
- Anthropic's claims about its Mythos model naturally attracted unwanted government attention at a time when the Trump administration is already locking horns with the company over the collapsed $200 million Pentagon contract
Summary
The proposed order would create an "AI working group" of tech executives and government officials to develop oversight procedures; White House staff briefed leaders from Anthropic, Google, and OpenAI on the plans last week. The sudden reversal in the administration's hands-off posture coincides with a leadership vacuum in White House AI policy. The new approach resembles the UK's AI Security Institute model, in which government bodies evaluate frontier models against safety benchmarks before and after deployment. Perhaps unsurprisingly, the catalyst appears to have been Anthropic's Mythos model, which the company's marketing described as capable of finding thousands of critical software vulnerabilities and too dangerous for public release.