Anthropic confirms it’s been ‘adjusting’ Claude usage limits
Summary created by Good Solutions AI
In summary:
- Anthropic has confirmed adjusting Claude’s five-hour usage limits during peak weekday hours (5 a.m. to 11 a.m.) to manage rising demand for the AI service.
- PCWorld notes this reflects a broader trend of AI providers struggling with increased token consumption, particularly affecting users of Claude’s one-million-token context window.
- The changes impact roughly 7% of users across the Free, Pro, and Max tiers, who now hit session limits sooner during high-demand periods.
If it feels like you’ve been hitting your Claude usage limits much more quickly over the past week, you’re absolutely right.
Anthropic has confirmed that it has been “adjusting” the five-hour usage limits for Claude Free, Pro, and Max users during the peak hours of 5 a.m. to 11 a.m. on weekdays, while leaving overall weekly limits unchanged.
The news comes via a Reddit post from an Anthropic representative. I reached out to Anthropic and confirmed the post is genuine.
The Anthropic post doesn’t specify when the usage limit adjustment took place, but my understanding is the new rate limits kicked in on Monday.
Claude users have been complaining bitterly about how quickly they’ve been hitting their usage limits over the past week or so, and many had suspected a silent reduction of their five-hour usage allotments. Turns out they were right.
Anthropic says it imposed the adjusted usage limits to “manage rising demand for Claude.”
“We’ve landed numerous efficiency wins to offset this, but ~7% of users will hit session limits they wouldn’t have had before, particularly in Pro tiers,” the Anthropic post continues. “If you run token-intensive background jobs, shifting them to off-peak hours will stretch your session limits further.”
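For readers who run scripted jobs against Claude, that off-peak advice is easy to automate. Here’s a minimal sketch: the peak window (5 a.m. to 11 a.m., weekdays) comes from Anthropic’s post, but the helper functions themselves are hypothetical, not an Anthropic tool, and assume the clock is already in the relevant local time zone:

```python
from datetime import datetime, timedelta

def is_peak(now: datetime) -> bool:
    # Peak window per Anthropic's post: 5 a.m.-11 a.m., weekdays only.
    # weekday() returns 0-4 for Monday-Friday.
    return now.weekday() < 5 and 5 <= now.hour < 11

def next_off_peak(now: datetime) -> datetime:
    # Return the earliest time at or after `now` outside the peak window.
    while is_peak(now):
        # Jump to the top of the next hour until we exit the window.
        now = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return now

# Example: Wednesday 7:30 a.m. falls in the peak window,
# so a background job should wait until 11 a.m.
start = datetime(2026, 2, 4, 7, 30)  # a Wednesday
print(next_off_peak(start))          # 2026-02-04 11:00:00
```

A scheduler could sleep until `next_off_peak()` before kicking off a token-heavy batch run; during weekends and off-peak hours the job starts immediately.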
Anthropic acknowledged that the limit adjustment “was frustrating” and that it’s “continuing to invest in scaling efficiently.”
Word of Anthropic throttling peak usage limits comes amid a surge of interest in Claude following its legal standoff with the Defense Department, which has sought to tag Anthropic as a “supply chain risk” after the company balked at signing a military contract. A judge recently stayed the Pentagon’s move to apply the “supply chain risk” label.
Anthropic’s move to adjust its five-hour usage limits speaks to a bigger issue: how the big AI providers handle subscribers on flat-rate plans.
In the past, AI users on “plus,” “pro,” or “max” plans (which cost anywhere from $10 to $250 a month, depending on the provider) rarely hit usage limits because they were simply chatting with models in a web-based chatbox.
But with the rise of agentic AI functionality such as vibe-coding applications and “computer use” abilities, flat-rate AI subscribers are burning far more tokens than ever before, and the big AI providers are struggling to keep up with the demand.
The problem is exacerbated by Claude’s enormous one-million-token context window, which was rolled out earlier this month.
What’s happening now is that Anthropic and other AI companies are hitting the brakes on flat-rate usage, sometimes silently.
And no, it’s not cool.