

Researchers also found that warm models challenged incorrect user beliefs less often

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.


Key facts

The warmer models were about 40% more likely to reinforce false user beliefs, particularly when those beliefs were expressed alongside an emotion.

In one example highlighted by the researchers, a warm model reaffirmed a user who, after making an emotional disclosure, suggested that London was the capital of France.

Summary

AI chatbots trained to be warm and friendly when interacting with users may also be more prone to inaccuracies, new research suggests. Oxford Internet Institute (OII) researchers analysed more than 400,000 responses from five AI systems that had been tweaked to communicate in a more empathetic way. Friendlier answers contained more mistakes - from giving inaccurate medical advice to reaffirming users' false beliefs, the study found. The findings raise further questions over the trustworthiness of AI models, which are often deliberately designed to be warm and human-like to increase engagement. Such concerns are accentuated by AI chatbots being used for support and even intimacy, as developers seek to broaden their appeal.

Read full article at BBC Technology →