An AI system can malfunction if an adversary finds a way to confuse its decision making.
In this example, errant markings on the road mislead a driverless car, potentially making it veer into oncoming traffic.
Key facts
- The NIST publication, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI
- “Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University
- “We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors
Summary
Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction, and there is no foolproof defense that their developers can employ. The NIST publication, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice.
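Attacks like the road-marking example in the caption above fall under what the taxonomy calls evasion attacks: small, deliberate changes to an input that push a deployed model toward the wrong decision. As a rough, hypothetical illustration only, the sketch below mounts a fast-gradient-sign-style evasion attack against a toy logistic-regression model; the model, its weights, and the step size `eps` are assumptions made for this example and do not come from the NIST report.

```python
import numpy as np

# Minimal sketch of an evasion attack in the FGSM style, assuming a
# white-box attacker who knows the model weights. The toy logistic-
# regression model, its weights, and the step size `eps` are all
# hypothetical; nothing here is taken from NIST.AI.100-2.

rng = np.random.default_rng(0)
w = rng.normal(size=16)        # model weights (known to the attacker)
b = 0.1                        # model bias

def predict(x: np.ndarray) -> float:
    """Probability the model assigns to class 1."""
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

x = rng.normal(size=16)        # a clean input
y = float(predict(x) > 0.5)    # treat the clean decision as the label

# For the logistic loss, the gradient with respect to the input is
# (p - y) * w, so this attacker needs nothing beyond the weights.
grad = (predict(x) - y) * w

# One fixed-size step in the direction of the gradient's sign pushes
# the loss up -- the digital analogue of the errant road markings.
eps = 0.25
x_adv = x + eps * np.sign(grad)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

The step uses the sign of the gradient rather than the gradient itself so that no single input feature is perturbed by more than eps, which keeps the change small and uniform; when the perturbation is large enough relative to the model’s confidence, the clean and adversarial predictions land on opposite sides of the decision boundary.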