ARIA expands on the AI Risk Management Framework, which NIST published in January 2023
Compiled by KHAO Editorial — aggregated from 2 outlets. See llms.txt for citation guidance.
“Measuring impacts is about more than how well a model functions in a laboratory setting,” said Reva Schwartz, NIST Information Technology Lab’s ARIA program lead.
Key facts
- ARIA expands on the AI Risk Management Framework, which NIST released in January 2023, and helps to operationalize the framework’s risk measurement function, which recommends that quantitative and qualitative techniques be used to analyze and monitor AI risks and impacts
- “Measuring impacts is about more than how well a model functions in a laboratory setting,” said Reva Schwartz, NIST Information Technology Lab’s ARIA program lead
- “The ARIA program is designed to meet real-world needs as the use of AI technology grows,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio
- Assessing Risks and Impacts of AI (ARIA) aims to help organizations and individuals determine whether a given AI technology will be valid, reliable, safe, secure, private and fair once deployed
Summary
The National Institute of Standards and Technology (NIST) is launching a new testing, evaluation, validation and verification (TEVV) program intended to help improve understanding of artificial intelligence’s capabilities and impacts. Assessing Risks and Impacts of AI (ARIA) aims to help organizations and individuals determine whether a given AI technology will be valid, reliable, safe, secure, private and fair once deployed. “To fully understand the impacts AI is having and will have on our society, we need to test how AI functions in realistic scenarios, and that’s exactly what we’re doing with this program,” said U.S. Commerce Secretary Gina Raimondo.