
AI Agent · Microsoft · Google · Agentic AI

Model-level safety improvements, they argue, can reduce the probability of an AI failure, but cannot eliminate it

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.

◌ Single Source

By Nick Lichtenberg

Large language models are inherently stochastic, meaning that no matter how well trained or well tuned an AI agent is, it can still hallucinate and make mistakes.


Summary

Imagine you tell an AI agent to convert $10,000 in U.S. dollars to Canadian dollars by end of day. Right now, no one has to guarantee that the transaction actually goes through correctly.

In a paper published on April 8, researchers from Microsoft Research, Columbia University, Google DeepMind, Virtuals Protocol, and AI startup T54 Labs have proposed a sweeping new financial protection framework called the Agentic Risk Standard (ARS), designed to do for AI agents what escrow, insurance, and clearinghouses do for traditional financial transactions.

The team is talking about an entire "agentic economy" here, T54 founder Chandler Fang told Fortune in an emailed statement; "it is different from simply using AI agents for financial tasks." He said there are two fundamental types of agentic transactions: human-in-the-loop financial transactions and agent-autonomous transactions.

The core problem the team identifies is what they call a "guarantee gap," which they define as a "disconnect between the probabilistic reliability that AI safety techniques provide and the enforceable guarantees users need before delegating high-stakes tasks."
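The distinction Fang draws between the two transaction types can be sketched in a few lines of code. Everything below is hypothetical illustration, not the ARS paper's actual design: the class and field names are invented, and the only point made is that a human-in-the-loop transaction stays blocked until a person signs off, while an agent-autonomous one proceeds on its own.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    HUMAN_IN_THE_LOOP = auto()  # a person must approve before funds move
    AGENT_AUTONOMOUS = auto()   # the agent executes on its own authority

@dataclass
class Transaction:
    amount: float
    currency_from: str
    currency_to: str
    mode: Mode
    approved: bool = False

def execute(tx: Transaction, human_approval: bool = False) -> bool:
    """Return True if the transaction may proceed under its mode."""
    if tx.mode is Mode.HUMAN_IN_THE_LOOP:
        tx.approved = human_approval  # blocked until a person signs off
    else:
        tx.approved = True            # autonomous: no human gate
    return tx.approved

# The example from the article: $10,000 USD to CAD, human in the loop.
tx = Transaction(10_000, "USD", "CAD", Mode.HUMAN_IN_THE_LOOP)
execute(tx)  # no approval given, so this returns False
```

The "guarantee gap" lives in exactly this kind of branch: model-level safety can make the agent more likely to act correctly, but nothing in the code above (or in today's deployments) enforces what happens when it doesn't.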

Read full article at Fortune Technology →

#AI Agent #Microsoft #Google #Agentic AI