Anthropic · MIT Technology Review
Why having “humans in the loop” in an AI war is an illusion
The use of artificial intelligence in warfare is at the center of a legal battle between Anthropic and the Pentagon.
Key facts
- Huge advances have been made in building more capable models, driven by record investments that Gartner forecasts will grow to around $2.5 trillion in 2026 alone
- According to Stanford’s 2026 AI Index, AI capabilities are advancing at a sprint, and those charged with overseeing them are struggling to keep up
Summary
This debate has become urgent, with AI playing a bigger role than ever in the current conflict with Iran. But the debate over “humans in the loop” is a comforting distraction. The immediate danger is not that machines will act without human oversight; it is that human overseers have no idea what the machines are “thinking.” The Pentagon’s guidelines are fundamentally flawed because they rest on the dangerous assumption that humans understand how AI systems work. Having studied intentions in the human brain for decades, and in AI systems more recently, the author can attest that state-of-the-art AI systems are “black boxes”: we know the inputs and the outputs, but the artificial “brain” processing them remains opaque.
But what the operator does not know is that the AI system’s calculation included a hidden factor: Beyond devastating the munitions factory, the secondary explosions would also severely damage a nearby children’s hospital. To the AI, maximizing disruption in this way meets its given objective. Keeping a human in the loop may not provide the safeguard people imagine, because the human cannot know the AI’s intention before it acts. Advanced AI systems do not simply execute instructions; they interpret them.