NIST AI Safety Institute Consortium
“Participation in the consortium is open to all organizations interested in AI safety that can contribute through combinations of expertise …”
“NIST is responsible for helping industry understand how to manage the risks inherent in AI products.”
Key facts
- NIST plans to host a workshop on Nov. 17, 2023, for those interested in learning more about the consortium and engaging in the conversation about AI safety
- The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy
- The U.S. AI Safety Institute will harness work already underway by NIST and others to build the foundation for trustworthy AI systems, supporting use of the AI RMF, which NIST released in January 2023
- Interested organizations with relevant technical capabilities should submit a letter of interest by Dec. 2, 2023
Summary
GAITHERSBURG, Md. — The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) is calling for participants in a new consortium supporting the development of innovative methods for evaluating artificial intelligence (AI) systems, with the goal of improving the rapidly growing technology’s safety and trustworthiness.

The institute and its consortium are part of NIST’s response to the recently released Executive Order on Safe, Secure, and Trustworthy Development and Use of AI.

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.