
Research

An AI system can malfunction if an adversary finds a way to confuse its decision making

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.

★ Tier-1 Source

An AI system can malfunction if an adversary finds a way to confuse its decision making. In this example, errant markings on the road mislead a driverless car, potentially making it veer into oncoming traffic. This “evasion” attack is one of numerous adversarial tactics described in a new NIST publication.



Summary

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction, and there is no foolproof defense that their developers can employ. The publication, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice.
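The evasion attack the article describes works by perturbing an input just enough to flip a model’s decision. As a minimal illustrative sketch (not taken from the NIST publication), the classic fast-gradient-sign method (FGSM) applied to a toy logistic-regression classifier; the weights, inputs, and epsilon here are all hypothetical:

```python
import numpy as np

# Hypothetical trained linear classifier:
# sigmoid(w @ x + b) > 0.5 means the input is classified as "safe".
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability the model assigns to the 'safe' class."""
    return 1 / (1 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y_true, eps=0.3):
    """Evasion via FGSM: nudge x in the direction that most increases
    the loss, bounded by eps so the change stays inconspicuous."""
    p = predict(x)
    # Gradient of binary cross-entropy loss w.r.t. the input x.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.2, -0.5])  # benign input, classified correctly
y = 1.0                          # true label: "safe"

x_adv = fgsm_perturb(x, y)
print(predict(x))      # above 0.5: correct classification
print(predict(x_adv))  # below 0.5: the small perturbation flips the decision
```

Each coordinate of the adversarial input differs from the original by at most eps, which is the analogue of road markings altered only slightly yet enough to mislead the car.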

Read full article at NIST AI →