ChatGPT quietly fixed its most annoying habit
Summary created by Smart Answers AI
In summary:
- PCWorld reports that ChatGPT has quietly replaced its pushy "Would you like me to…" follow-up prompts with gentler "If you're interested…" suggestions.
- This change eliminates the manipulative, engagement-boosting tactics that made interactions feel pressured and annoying to users.
- While competitors like Gemini still use the more demanding approach, ChatGPT's subtler method creates a more comfortable user experience.
I'm back working with ChatGPT after a months-long breakup, during which I had dalliances with Gemini and later Claude (I'm a serial AI subscriber, similar to how I juggle Netflix, HBO Max, and the like). Now that I've rekindled my relationship with ChatGPT (all business, I assure you), I've noticed something different about the chatbot… or rather, something it has stopped doing, to my great relief.
Back in the day, ChatGPT was all about the follow-ups: those questions it would pose at the bottom of its responses to invite further interaction. "Would you like to know more about how Tailscale might work with your Raspberry Pi setup?" was just one example, or "Would you like me to draw up a 10-week meal plan for your family?"
I don't have a problem with follow-up questions when they're relevant or flow naturally from a discussion. But with ChatGPT, and plenty of other popular AI chatbots, the follow-up prompts became persistent and borderline obsessive, with answers to almost all my questions arriving with "Would you like me to…" suggestions tacked onto the ends.
Of course, we all know the reason for these incessant follow-up prompts: the need for big AI providers to boost engagement, to keep us chatting with our AI companions for as long as possible, thus making us more likely to re-sub at the end of the month.
Personally, I found the continual "Would you like…" questions to be annoying, manipulative, and even a tad stressful. They made me feel compelled to respond, even when my answer was a simple "No, thanks." And if I did have a follow-up question, I felt the need to redirect the conversation ("Instead of the 10-week plan, could you help me with a simple vinaigrette recipe for tonight?"), meaning more typing in the chatbox.
ChatGPT's never-ending follow-up questions weren't the only reason I took a break, but they sure weren't enticing me to stick around. Yet when I finally did return a couple of weeks ago, I noticed something different almost immediately: those "Would you like…" questions were gone.
Now, instead of a question, you get more of an "If you'd like"-style suggestion. "If you're interested, I can show you 5 advanced Claude Cowork workflows that are really interesting," ChatGPT teased in a recent conversation. Here's another one: "If you want, I can tell you the one big NYC condo tax deduction many homeowners miss."
It's a subtle but key change. Instead of an in-your-face question that seems to demand an answer, the new follow-ups are simply there for the taking, and easily skippable if you wish.
At the same time, there's also some Jedi-level mind manipulation going on with these new follow-ups. More often than not, I find myself asking ChatGPT to tell me more about those gotta-know Claude workflows and hidden condo tax deductions. Whatever's in the secret sauce of these non-question follow-ups, it's working.
I've asked OpenAI for more details about its less-pushy follow-up prompts and will update this story once I hear back.
I also polled the three big AI giants (ChatGPT, Gemini, and Claude) about the new style of non-question follow-ups, and they all said roughly the same thing: while the naggy questions can lead to "assistant fatigue," the conditional "if you'd like" phrasing sets up a more "comfortable dynamic" that's actually more inviting.
ChatGPT, naturally, followed up with an "if you're interested" prompt about "a whole spectrum of 'pressure levels' in conversational prompts," while the reliably to-the-point Claude simply served up a typically thorough and academic answer without any kind of follow-up nag at the end.
Then there's Gemini, which wrote nearly a dozen paragraphs about how "passive availability" prompts can be more effective than "active prompting" questions, and then immediately hit me with a "Would you like me to…" question at the end. Ugh.