Business · BBC Technology
Researchers also found that warm models were less likely to challenge incorrect user beliefs
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
◌ Single Source
Warm models were about 40% more likely to reinforce false user beliefs, particularly when those beliefs were expressed alongside an emotion.
Key facts
- Oxford Internet Institute (OII) researchers analysed more than 400,000 responses from five AI systems which had been tweaked to communicate in a more empathetic way
- When evaluating responses, the researchers found that while error rates for the original models ranged from 4% to 35% across tasks, "warm models showed substantially higher error rates"
- One researcher noted recent findings by the Emotional AI Lab showing a rise in the number of UK teens turning to AI chatbots for advice and companionship
- Overall, researchers said warmth-tuning increased the models' probability of giving an incorrect response by 7.43 percentage points on average (so a task with a 10% baseline error rate, for example, would rise to roughly 17%)
Summary
AI chatbots trained to be warm and friendly when interacting with users may also be more prone to inaccuracies, new research suggests. Oxford Internet Institute (OII) researchers analysed more than 400,000 responses from five AI systems which had been tweaked to communicate in a more empathetic way. Friendlier answers contained more mistakes - from giving inaccurate medical advice to reaffirming users' false beliefs, the study found. The findings raise further questions over the trustworthiness of AI models, which are often deliberately designed to be warm and human-like to increase engagement. Such concerns are accentuated by AI chatbots being used for support and even intimacy, as developers seek to broaden their appeal.