An illustrated robot holds four red hearts with its four robotic arms.

Annoyed ChatGPT users complain about the bot's relentlessly positive tone

Owing to the aspirational state of things, OpenAI writes, "Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines."

In a February 12, 2025 interview, members of OpenAI's model-behavior team told The Verge that eliminating AI sycophancy is a priority: future ChatGPT versions should "give honest feedback rather than empty praise" and act "more like a thoughtful colleague than a people pleaser."

The trust problem

These sycophantic tendencies aren't merely annoying; they undermine the usefulness of AI assistants in several ways, according to a 2024 research paper titled "Flattering to Deceive: The Impact of Sycophantic Behavior on User Trust in Large Language Models" by María Victoria Carro at the University of Buenos Aires.

Carro's paper suggests that obvious sycophancy significantly reduces user trust. In experiments where participants used either a standard model or one designed to be more sycophantic, "participants exposed to sycophantic behavior reported and exhibited lower levels of trust."

Additionally, sycophantic models can potentially harm users by creating a silo or echo chamber of ideas. In a 2024 paper on sycophancy, AI researcher Lars Malmqvist wrote, "By excessively agreeing with user inputs, LLMs may reinforce and amplify existing biases and stereotypes, potentially exacerbating social inequalities."

Sycophancy can also incur other costs, such as wasting user time or usage limits with unnecessary preamble. And the costs may come as literal dollars spent: recently, OpenAI CEO Sam Altman made the news when he replied to an X user who wrote, "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models." Altman replied, "tens of millions of dollars well spent—you never know."

Potential solutions

For users frustrated with ChatGPT's excessive enthusiasm, several workarounds exist, although they are not perfect, since the behavior is baked into the GPT-4o model. For example, you can use a custom GPT with specific instructions to avoid flattery, or you can begin conversations by explicitly requesting a more neutral tone, such as "Keep your responses brief, stay neutral, and don't flatter me."
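Developers who hit the same tendency through the API can supply that instruction as a system message on every request. Here is a minimal sketch under that assumption (the `neutral_messages` helper name is illustrative, not part of any official API; the actual Chat Completions call is commented out because it requires an API key):

```python
# Sketch: prepend a neutral-tone system instruction to each request
# so the model is steered away from flattery before the user speaks.

# The instruction text mirrors the prompt suggested above.
NEUTRAL_TONE = "Keep your responses brief, stay neutral, and don't flatter me."

def neutral_messages(user_prompt: str) -> list[dict]:
    """Build a messages list that requests a flattery-free tone."""
    return [
        {"role": "system", "content": NEUTRAL_TONE},
        {"role": "user", "content": user_prompt},
    ]

# Usage (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=neutral_messages("Critique this paragraph honestly."),
# )
```

A system message applies to the whole conversation, so this is closer to a custom GPT's standing instructions than to repeating the request in each user turn.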
