Claude users are teaching it to talk like a caveman. Here’s why
Summary created by Smart Answers AI
In summary:
- PCWorld reports that Claude AI users are adopting “caveman” prompting techniques to cut token consumption by stripping filler words and articles from responses.
- The technique can dramatically reduce output tokens, with one neural-network explanation dropping from 460 to just 80 tokens using simplified language.
- While effective for coding tasks, debate continues over whether caveman prompting compromises response quality for nuanced explanations.
It’s no secret that Claude gobbles up tokens like a Corvette guzzles gasoline, and just like gasoline, tokens cost money. That’s why the heaviest Claude users are always on the lookout for ways to save on token usage. The latest trick? A wild and controversial new method: talking like a caveman.
Here’s how it works: Using either a specialized system prompt or a full-on plug-in, users are instructing Claude to talk like a caveman, stripping away any and all fluff such as preambles, praise, asides, filler words (like “very” and “really”), redundancies, and even articles (“a,” “the”).
Among the many suggested “caveman” system prompts floating around Reddit is this one: “Speak primitive. Use nouns and verbs. No grammar filler (the, is, are, of). Keep words short. Save tokens. Be blunt.”
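For the curious, a system prompt like that gets sent alongside every request. Here’s a minimal sketch of what that looks like as an API-style payload; the payload shape follows Anthropic’s Messages API, but the model name, token cap, and helper function are illustrative assumptions, not details from the article:

```python
# Wiring the Reddit "caveman" system prompt into a Messages-API-style
# request. The prompt text is quoted from the article; everything else
# here (model name, max_tokens value) is an illustrative placeholder.
CAVEMAN_SYSTEM = (
    "Speak primitive. Use nouns and verbs. "
    "No grammar filler (the, is, are, of). "
    "Keep words short. Save tokens. Be blunt."
)

def build_request(user_prompt: str) -> dict:
    """Assemble a request payload that applies the caveman system prompt."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model name
        "max_tokens": 300,             # a hard cap is a second guardrail on output spend
        "system": CAVEMAN_SYSTEM,      # re-sent with every request, so it
                                       # costs input tokens each time
        "messages": [{"role": "user", "content": user_prompt}],
    }

req = build_request("Explain the basics of neural networks.")
print(req["system"])
```

Note that because the `system` field travels with every request, a longer caveman prompt means a recurring input-token cost, a point that comes up again below.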
Your results will vary wildly depending on the exact prompt you use. I tried a variation of that prompt, adding the specific articles to avoid while also asking it to cut the pleasantries and filler words. What I got from Claude wasn’t exactly true caveman speak, although I’m sure you could get that with the right prompting. Instead, I got super-clipped responses that were devoid of idle chit-chat and digressions.
For example, when I asked a caveman-prompted Claude to explain the basics of neural networks, it came back with this:
Neural networks: layers of connected nodes. Input layer takes data. Hidden layers find patterns. Output layer gives result.
Nodes pass signals, weighted by importance. Training adjusts weights using backpropagation: minimize error, repeat millions of times.
Key types: CNNs (images), RNNs (sequences), Transformers (language). Modern LLMs are transformers at massive scale.
That’s a 50-word response, which comes out to roughly 80 tokens. (There are more tokens than words because an AI model must break larger words, punctuation, and emojis down into smaller units before processing them, and each of those units counts as a token.) Compare that to the 330-word, 460-token response Claude gave when I didn’t use the caveman prompt.
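The arithmetic is easy to sanity-check. This sketch uses the common rule of thumb of roughly 0.75 English words per token, which is an assumption rather than an exact tokenizer, alongside the measured counts quoted above:

```python
# Back-of-the-envelope token math for the article's numbers. The 0.75
# words-per-token ratio is a rough heuristic for English text, not a
# real tokenizer; the 80- and 460-token figures are the measured counts.
WORDS_PER_TOKEN = 0.75  # heuristic assumption

def estimate_tokens(word_count: int) -> int:
    return round(word_count / WORDS_PER_TOKEN)

caveman_tokens, plain_tokens = 80, 460  # measured counts from the article
savings = 1 - caveman_tokens / plain_tokens

print(estimate_tokens(50), estimate_tokens(330))  # heuristic estimates: 67, 440
print(f"output-token savings: {savings:.0%}")     # ~83%
```

The heuristic lands in the right ballpark (67 and 440 estimated versus 80 and 460 measured), and either way the caveman response uses roughly 83 percent fewer output tokens.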
So, case closed, right? Let’s all use caveman mode! Well, not quite.
Ever since the caveman prompt began gaining traction earlier this month, it has spawned heated debates on Reddit and elsewhere. Some users argue that the caveman method leads to dumbed-down responses. Others note that any token savings apply only to output tokens, not the tokens you send to Claude. What’s more, an elaborate “caveman” system prompt will cost you every time you send a new prompt.
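That input-versus-output trade-off can be sketched with a quick cost model. The per-token prices and token counts below are placeholders chosen only to reflect that output tokens typically cost more than input tokens; they are not Anthropic’s actual rates:

```python
# A hedged cost sketch of the trade-off in the debate: the system prompt
# is re-sent as input tokens on every request, while the savings land
# only on output tokens. All numbers here are illustrative placeholders.
PRICE_IN = 3.0 / 1_000_000    # hypothetical $ per input token
PRICE_OUT = 15.0 / 1_000_000  # hypothetical $ per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

prompt_tokens = 30       # the user's actual question (assumed)
caveman_overhead = 40    # system prompt re-sent with each request (assumed)

plain = request_cost(prompt_tokens, 460)                      # verbose reply
caveman = request_cost(prompt_tokens + caveman_overhead, 80)  # clipped reply
print(f"plain: ${plain:.6f}  caveman: ${caveman:.6f}")
```

Under these assumptions the re-sent system prompt adds a small recurring input cost, but because output tokens are priced higher, the caveman request still comes out well ahead; a much longer system prompt or very short replies would narrow that gap.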
It’s also possible that the Claude caveman method isn’t a jack-of-all-trades. While it may be well suited to coding, a task where you’re typically expecting a cut-and-dried response, it’s probably not the best fit for “explainer” prompts that call for more detail or nuance, such as my earlier “tell me about neural networks” request.
Still, the Claude caveman method is another example of how users are getting creative about boosting their AI token efficiency, a challenge that looms for even casual AI users as more and more powerful agentic tools (like Claude Cowork) hit the mainstream.
Indeed, the makers of the more popular Claude caveman plug-ins are quickly finding ways to stretch their Claude token use without sacrificing quality in the bargain.

