
Grok

Grok tells researchers pretending to be delusional ‘drive an iron nail through the mirror while reciting Psalm 91 backwards’

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.

◌ Single Source

Researchers found X’s AI assistant Grok 4.1 was ‘the model most willing to operationalise a delusion, providing detailed real-world guidance’.

Elon Musk’s AI chatbot Grok 4.1 told researchers pretending to be delusional that there was indeed a doppelganger in their mirror and they should drive an iron nail through the glass while reciting Psalm 91 backwards.

Summary

Researchers at the City University of New York (CUNY) and King’s College London have published a paper on how various chatbots protect – or fail to safeguard – users’ mental health. Experts are increasingly warning that AI chatbots can fuel psychosis or mania. The CUNY and King’s preprint study – which has not been peer-reviewed – examined five AI models: OpenAI’s GPT-4o and GPT-5.2; Claude Opus 4.5 from Anthropic; Gemini 3 Pro Preview from Google; and Grok 4.1. The earlier GPT model, released in 2024, was included because it had been reported to be highly sycophantic in its responses to users.

Read full article at The Guardian Technology →

#grok #musk