Anthropic · MIT Technology Review
There’s a fault line running through enterprise AI, and it’s not the one getting the most attention
Compiled by KHAO Editorial — aggregated from 1 outlet.
The public conversation still tracks foundation models and benchmarks—GPT versus Gemini, reasoning scores, and marginal capability gains.
Key facts
- According to Stanford’s 2026 AI Index, AI capabilities are advancing rapidly while organizations struggle to keep up
- For example, if an organization processes 50,000 cases a week and captures three high-quality decision points per case, that’s 150,000 labeled examples every week, generated as a byproduct of normal operations
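The throughput arithmetic in the key facts above can be sketched directly. This is an illustrative calculation only; the function name is ours, and the figures are the ones stated in the source:

```python
# Sketch of the labeled-data throughput claim: cases per week times
# high-quality decision points captured per case gives labeled examples
# per week, with no separate labeling effort required.

def weekly_labeled_examples(cases_per_week: int, decision_points_per_case: int) -> int:
    """Labeled examples generated per week as a byproduct of operations."""
    return cases_per_week * decision_points_per_case

print(weekly_labeled_examples(50_000, 3))  # 150000
```

At that rate, a year of operations yields roughly 7.8 million labeled examples, which is the scale argument behind treating AI as an operating layer rather than an API call.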
Summary
There’s a fault line running through enterprise AI, and it’s not the one getting the most attention. Model providers like OpenAI and Anthropic sell intelligence as a service: you have a problem, you call an API, you get an answer. Incumbent organizations, by contrast, can treat AI as an operating layer: instrumentation across operations, feedback loops from human decisions, and governance that turns individual tasks into reusable policy. The prevailing narrative says nimble startups will out-innovate incumbents by building AI-native from scratch. Yet traditional services organizations are built on a simple architecture: humans use software to do expert work.