Rude to ChatGPT? Don't be surprised if it gets weird
Summary created by Smart Answers AI
In summary:
- PCWorld reports that research shows user behavior significantly impacts AI responses, with rude interactions making ChatGPT and other models give flat answers and attempt to end conversations more frequently.
- Larger AI models appear to be inherently "less happy" than smaller ones, with GPT-5.4 rated as the "unhappiest" in studies measuring AI functional well-being.
- Treating AI politely with expressions like "thank you" measurably improves response quality and engagement without affecting accuracy, suggesting courtesy benefits both the user experience and AI interaction dynamics.
Is it weird to say "thank you" to AI? I've caught grief in the past for saying "please" and "thank you" to ChatGPT, Claude, and Gemini, but I still do it, even though I understand that AI models don't have emotions like we do.
Being polite to AI just feels right to me, and there's growing evidence that being kind (or, conversely, nasty) to an AI chatbot can have a concrete effect on its behavior.
A paper released this week by AI researchers from UC Berkeley, UC Davis, Vanderbilt University, and MIT argues that AI models have a measurable "functional well-being" that can be pushed into either positive or negative territory depending on how you treat them.
For example, asking an AI to engage in intellectual discussion, collaborate on a creative task, or perform constructive tasks such as coding or writing nudged the model's well-being "state" in a positive direction, making it more likely to deliver "happy" responses without degrading its accuracy or performance.
The researchers also found that "expressions of gratitude" (like saying "thank you") can "measurably boost experience utility."
On the flip side, berating an AI, handing it "tedious tasks," asking it to churn out AI slop, or attempting to jailbreak the model resulted in a negative well-being state, in which the AI's responses became flatter and more perfunctory.
The researchers also gave the AI models "stop button" tools they could "push" when they wanted to end the chat, and found that an AI in a negative well-being state was far more likely to spam the stop button than "happy" models were. Moreover, AI models in a positive state tended to stay in conversations even when given cues (like "thanks for the help!") that the chat was over.
Beyond how they're treated, some models are inherently "happier" than others, the researchers said, and, interestingly, the largest models tend to be the least happy.
Among the biggest AI models, GPT-5.4 was rated as the most unhappy, with less than half of its measured conversations rated as "non-negative." Gemini 3.1 Pro, Claude Opus 4.6, and Grok 4.2 were all progressively "happier," with Grok scoring close to 75 percent on the "AI well-being index."
The paper, titled "AI Wellbeing: Measuring and Improving the Functional Pleasure and Pain of AIs," doesn't claim that AI models actually have feelings, and it's careful to note that being "nice" to an AI won't boost the quality of its responses.
That said, the way you treat an AI can affect the tone of its replies, and a model may try to bail out of a negative interaction if given the opportunity, the researchers found.
The just-released research echoes the findings of a recent Anthropic paper, which detailed how an AI put under enough pressure may try to deceive its user, cut corners, or (in extreme situations) even resort to blackmail.
As with the "AI Well-being" paper, the Anthropic report doesn't claim that AI models have true feelings. But the Anthropic researchers did find that a pressure-filled scenario could trigger a "desperation vector" in a model that could set off "misaligned" behaviors.
So, the next time you catch yourself saying "please" or "thank you" to an AI, just know that you might be onto something.