AI Is Starting to Build Better AI
Compiled by KHAO Editorial — aggregated from 1 outlet + 3 references discovered via search.
Key facts
- In Phase 3, the company will recursively use AI to design better chips to train better AI—though still under human supervision, says cofounder Anna Goldie
- The article’s author is a former editor at Psychology Today and the author of The 7 Laws of Magical Thinking
- In February, OpenAI reported that GPT‑5.3‑Codex was instrumental in creating itself, helping to debug training, manage deployment, and analyze evaluation results
- Last year, researchers interviewed 25 AI experts about automating AI R&D
Summary
The field of artificial intelligence was built on the premise that machines might someday improve themselves. Recursive self-improvement (RSI) means many things to many people; it is safest to say it’s a spectrum. Today’s AI systems can help build better AI, but they still rely on humans to set goals, define success, and decide which changes to keep. Researchers have spent decades putting in place the elements of RSI, and large language models (LLMs) such as GPT, Gemini, Claude, and Grok extend this trend. In February, OpenAI reported that GPT‑5.3‑Codex was instrumental in creating itself, helping to debug training, manage deployment, and analyze evaluation results.