Mystery solved: Anthropic reveals changes to Claude’s harnesses and operating instructions likely caused degradation
For several weeks, a growing chorus of developers and AI power users claimed that Anthropic’s flagship models were losing their edge. Users across GitHub, X, and Reddit reported a phenomenon they described as “AI shrinkflation”: a perceived degradation in which Claude seemed less capable of sustained reasoning, more prone to hallucinations, and increasingly wasteful with tokens. Critics pointed to a measurable shift in behavior, alleging that the model had moved from a “research-first” approach to a lazier, “edit-first” style that could no longer be trusted with complex engineering work.

While the company initially pushed back against claims that it was “nerfing” the model to manage demand, mounting evidence from high-profile users and third-party benchmarks opened a significant trust gap.

Today, Anthropic addressed those concerns directly, publishing a technical post-mortem that identified three separate product-layer changes responsible for the reported quality issues.

“We take reports about degradation very seriously,” reads Anthropic’s blog post on the matter. “We never intentionally degrade our models, and we were able to immediately confirm that our API and inference layer were unaffected.”

Anthropic claims it …
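The distinction Anthropic is drawing matters in practice: the same model weights, served through the same API, can behave very differently depending on the operating instructions a harness wraps around each request. The sketch below, which assumes the official anthropic Python SDK (the model ID, system prompts, and test prompt are invented for illustration, not the actual harness changes Anthropic described), shows how a “research-first” versus “edit-first” contrast can be produced entirely at the product layer:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical coding question; any prompt that invites investigation works.
PROMPT = "My test suite started failing after a dependency upgrade. How should I fix it?"

# Hypothetical harness-level instructions, mimicking the two styles users described.
HARNESSES = {
    "research-first": (
        "Before proposing any change, read the relevant context, state your "
        "hypotheses, and explain your reasoning step by step."
    ),
    "edit-first": (
        "Be brief. Propose a concrete edit immediately and skip lengthy investigation."
    ),
}

for label, system_prompt in HARNESSES.items():
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        system=system_prompt,  # the only thing that differs between the two runs
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Same weights, same endpoint: the system prompt alone shifts both the style
    # of the answer and the token usage that users complained about.
    print(f"{label}: {response.usage.output_tokens} output tokens")
    print(response.content[0].text[:300], "\n")
```

Nothing in this sketch touches the inference layer; if the two runs diverge, the difference lives entirely in the instructions, which is the layer where Anthropic says its changes landed.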