When Claude Code performance gets inconsistent, the instinct is to add more tooling. That's backwards. Strip things down, and output quality climbs.

## Remove the Skill Sets

Stacking external skill sets (Superpowers-type plugins and the like) bloats the system prompt with noise. The model spends resources figuring out how to operate tools instead of reading and reasoning over code. Vanilla Claude is noticeably smarter; you feel it within a session or two.

## Force Full Reasoning with Three Settings

Add the following to `~/.claude/settings.json`:

```json
{
  "effortLevel": "max",
  "env": {
    "CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING": "1",
    "CLAUDE_CODE_DISABLE_1M_CONTEXT": "1"
  }
}
```

- `effortLevel: max` forces maximum reasoning on every turn. Opus 4.6 only.
- `DISABLE_ADAPTIVE_THINKING` blocks the model from deciding "this seems simple, I'll skip the thinking." Leave adaptive thinking on and hallucinations (fake SHAs, non-existent API versions) become a recurring problem.
- `DISABLE_1M_CONTEXT` caps the context at 200k tokens, which keeps reasoning focused. Without this, token consumption can run 5-7x higher than necessary.

## What Stays Relevant as Models Improve

The ability to transmit clear requirements logically and structurally doesn't go away. That's not a prompting skill, it's a thinking problem. No matter how capable models get, this part stays on the human side.

## Where to Put the Energy

Obsessing over prompt engineering or assembling skill sets is spending energy on what models are already handling. The real leverage is elsewhere: memory systems, ontologies, pipeline architecture, the structural layer that sits above the model. That's where investment compounds.

**Takeaways:**

- Stacking external skill sets bloats the system prompt and degrades the model's native reasoning.
- `effortLevel: max`, disabling adaptive thinking, and limiting the context window directly reduce hallucinations and token costs.
- System architecture and data structure design deliver more leverage than prompt engineering.
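The three settings above can be merged into an existing `~/.claude/settings.json` without clobbering keys you already have. A minimal Python sketch of that merge (the file path and keys come from the post; the merge-and-write logic is my own assumption, not an official tool):

```python
import json
from pathlib import Path

SETTINGS = Path.home() / ".claude" / "settings.json"

def merge_settings(path: Path = SETTINGS) -> dict:
    """Merge the reasoning settings into settings.json, preserving existing keys."""
    # Load whatever is already there; start fresh if the file doesn't exist.
    current = json.loads(path.read_text()) if path.exists() else {}
    current["effortLevel"] = "max"
    # setdefault keeps any env vars the user already configured.
    env = current.setdefault("env", {})
    env["CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING"] = "1"
    env["CLAUDE_CODE_DISABLE_1M_CONTEXT"] = "1"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(current, indent=2))
    return current
```

Running it once is idempotent: re-applying the merge leaves the file unchanged, and unrelated settings survive.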