The Minimum Viable Reasoning Prompt
By Karo
Stop chain-of-thought prompting reasoning models. We are making them worse.
Reasoning models (o3, GPT-5 thinking, Claude Sonnet 4.5 thinking, Gemini 3 Pro Thinking, DeepSeek-R1) do their thinking before they answer. When we tell them "think step by step," we overwrite the exact process the model was trained to run internally. Less scaffolding. More problem.
This prompt is the counter-move. Three lines. That is the whole thing. It forces us to state the problem, what success looks like, and which constraints are load-bearing. Then it gets out of the way.
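The three-line structure above can be sketched programmatically. A minimal sketch in Python, assuming the labels ("Problem", "Success", "Constraints") as illustrative stand-ins for the guide's exact wording:

```python
# Minimal sketch of the three-line structure: state the problem,
# define success, name the load-bearing constraints, then stop.
# The label names are illustrative assumptions, not the guide's exact text.

def minimal_reasoning_prompt(problem: str, success: str, constraints: str) -> str:
    """Build a three-line prompt for a reasoning model.

    No role-play, no 'think step by step', no worked examples --
    the model's own reasoning phase handles the rest.
    """
    return (
        f"Problem: {problem}\n"
        f"Success: {success}\n"
        f"Constraints: {constraints}"
    )

prompt = minimal_reasoning_prompt(
    problem="Migrate the billing service from Postgres 12 to 16.",
    success="A cutover plan a mid-level engineer can run in one maintenance window.",
    constraints="Zero downtime; no schema changes during migration.",
)
print(prompt)
```

The point of the function is what it leaves out: every line earns its place by constraining the answer, not the reasoning.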
Who it is for
Anyone still pasting 2023-era mega-prompts into 2026 reasoning models and wondering why the output feels flatter than the non-reasoning version.
Read the full guide
Works with: ChatGPT, Claude, Gemini, DeepSeek, Grok
Tags: Advanced Context Engineering, AI Fluency, System Prompts