Chain of Thought (CoT)
A prompting technique that instructs the model to reason step by step before giving a final answer, improving accuracy on complex tasks.
Chain of Thought (CoT) prompting encourages the model to "show its work" — to generate intermediate reasoning steps before arriving at a final answer. This mirrors how humans approach complex problems: rather than jumping to a conclusion, you think it through.
**Basic form**
Adding "Let's think step by step" to a prompt is the simplest CoT trigger. The model then generates a reasoning trace before the answer. For math or logic problems, this can dramatically improve accuracy.
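A minimal sketch of this trigger as a prompt-building helper (the function name and wording are illustrative, not from any particular library):

```python
def add_cot_trigger(question: str) -> str:
    """Append the zero-shot CoT trigger to a question prompt.

    The trigger phrase nudges the model to emit a reasoning trace
    before its final answer.
    """
    return f"{question}\n\nLet's think step by step."


# Example: the resulting string is what you'd send as the user prompt.
prompt = add_cot_trigger("A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?")
```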
**Zero-shot CoT vs few-shot CoT**
- *Zero-shot CoT*: just append "Let's think step by step." No examples needed.
- *Few-shot CoT*: provide 2–3 examples that include both the reasoning steps and the final answer. More powerful, but requires example curation.
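A few-shot CoT prompt can be assembled by interleaving worked examples (reasoning plus answer) before the target question. The example pair and the "The answer is …" convention below are illustrative assumptions, not a fixed format:

```python
# Hypothetical worked examples; each pairs a question with its
# reasoning trace and final answer.
COT_EXAMPLES = [
    {
        "question": "A pen costs $2 and a notebook costs $3. What do 2 pens and 1 notebook cost?",
        "reasoning": "Two pens cost 2 * $2 = $4. One notebook costs $3. $4 + $3 = $7.",
        "answer": "$7",
    },
]


def build_few_shot_cot_prompt(examples: list[dict], question: str) -> str:
    """Build a few-shot CoT prompt: worked Q/A pairs, then the new question."""
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}."
        )
    # End with the bare question so the model continues with its own reasoning.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

The trailing `A:` invites the model to imitate the demonstrated reasoning-then-answer pattern.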
**Why it works**
The model generates tokens sequentially. When it writes out reasoning steps, those tokens become context that the later tokens (the answer) condition on. Essentially, the model "thinks on paper" — the intermediate tokens serve as scratch space.
**Extended thinking / reasoning models**
Models like OpenAI's o1 and o3, and Claude's extended thinking mode, apply CoT internally, often without exposing the full scratchpad to users. They're trained specifically to reason through problems before answering, rather than relying on a prompt-level trigger.
**Pitfalls**
CoT can produce confident-sounding reasoning that reaches wrong conclusions — a logical-looking chain is not necessarily a correct one. It also increases output token count, raising cost. For simple tasks, CoT adds overhead without benefit.