AI · Politico Technology
Common Sense Media rolled out an AI safety group Tuesday with the help of several tech companies
The Youth AI Safety Institute will conduct independent assessments of AI products.
Key facts
- The institute has a $20 million annual budget, with donors including OpenAI’s nonprofit, Anthropic and Pinterest
- “Frankly, it would create a false sense of security,” said Neil Chilson, a former acting chief technologist at the Federal Trade Commission who is currently the Abundance Institute’s head of AI policy
Summary
The prospect of the government vetting AI models is causing a minor panic in certain tech policy and industry circles. This week, the New York Times and POLITICO reported that the White House is considering an executive order that would, in part, require government testing and approval before AI companies can release their models. Kevin Hassett then said during a Wednesday Fox Business appearance that the executive order may include “a clear roadmap to everybody about … how future AIs that also potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe, like an FDA drug.” The news came as the administration met with tech companies about Mythos and other AI models that potentially pose a dire threat to cybersecurity.