Agent · Fortune Technology
What do you do when your AI agent hallucinates with your money?
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
Imagine you tell an AI agent to convert $10,000 in U.S. dollars to Canadian dollars by end of day.
Key facts
- Imagine you tell an AI agent to convert $10,000 in U.S. dollars to Canadian dollars by end of day
- "Most trustworthy AI research aims to reduce the probability of failure," said Wenyue Hua, senior researcher at Microsoft Research
- Model-level safety improvements, they argue, can reduce the probability of an AI failure, but cannot eliminate it
- ARS takes a complementary approach: instead of trying to make the model perfect, it formalizes what happens financially when it isn't
Summary
Right now, nobody has to. In a paper published on April 8, researchers from Microsoft Research, Columbia University, Google DeepMind, Virtuals Protocol, and AI startup T54 Labs have proposed a sweeping new financial protection framework called the Agentic Risk Standard (ARS), designed to do for AI agents what escrow, insurance, and clearinghouses do for traditional financial transactions. The team are talking about an entire “agentic economy” here, T54 founder Chandler Fang told Fortune in an emailed statement; “it is different from simply using AI agents for financial tasks.” He said there are two fundamental types of agentic transactions: human-in-the-loop financial transactions and agent-autonomous transactions. The core problem the team identifies is what they call a “guarantee gap,” which they define as a “disconnect between the probabilistic reliability that AI safety techniques provide and the enforceable guarantees users need before delegating high-stakes tasks.