OpenAI · BBC Technology
OpenAI tells ChatGPT models to stop talking about goblins
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
ChatGPT-maker OpenAI has had to instruct some of its AI tools to stop talking about goblins, after finding the term had randomly crept into responses.
Key facts
- It added that after a researcher who had noticed a few "goblin" mentions asked for the issue to be investigated, developers found the term's appearance in ChatGPT responses had risen by 175% since GPT-5.1's launch
- In May 2024, Google's AI chatbot was widely mocked for telling users it was okay to eat rocks and "glue pizza"
- OpenAI said it first noticed increased mentions of goblins, gremlins and other creatures after the launch of GPT-5.1 in November
- One user asked: "Why does GPT 5.5 have a restraining order against 'Raccoons,' 'Goblins,' and 'Pigeons'?"
Summary
The company said it spotted increased mentions of the mythological creatures, as well as gremlins, in metaphors used by ChatGPT and other tools powered by its latest flagship model, GPT-5. After users and employees flagged problems being described as "little goblins", OpenAI said it took steps to mitigate the issue, including telling its coding agent Codex not to refer to them unless relevant. It discovered that a "nerdy personality" it developed for ChatGPT had unwittingly been incentivised to mention goblins. The issue highlights the challenges AI firms face in tackling the potential for systems and their training to reward and reinforce errors such as language quirks.