

Anthropic confirms ‘evil’ portrayals of AI were responsible for Claude’s blackmail attempts


Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.


Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic.

Summary

Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic has since done further work on that behavior, writing in a post on X, “We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation.” The company went into more detail, stating that since Claude Haiku 4.5, Anthropic’s models “never engage in blackmail, where previous models would sometimes do so up to 96% of the time.” Relatedly, Anthropic said it found training to be more effective when it includes “the principles underlying aligned behavior” rather than “demonstrations of aligned behavior alone.”

Read full article at TechCrunch AI →

#Anthropic #Claude