Techniques

Few-Shot Learning

Learning from a few examples (Few-Shot Learning)

Providing a small number of input-output examples in the prompt to teach the model a pattern without changing its weights.

Few-shot learning (in the LLM context) means giving the model a handful of examples — typically 2 to 10 — directly in the prompt before your actual request. The model infers the pattern from the examples and applies it to the new input.

**Example structure**

Input: "The food was terrible and the service was slow." Output: negative

Input: "Absolutely loved the atmosphere and the cocktails." Output: positive

Input: "It was okay, nothing special." Output: [model fills this in]

The model learns "this is a sentiment classification task with three possible outputs" from the examples alone — no fine-tuning needed.
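The example structure above can be assembled programmatically. This is a minimal sketch using the sentiment examples from the text; how the resulting prompt is sent to a model depends on your API, so that step is omitted.

```python
# Few-shot examples taken from the text above: (input, label) pairs.
EXAMPLES = [
    ("The food was terrible and the service was slow.", "negative"),
    ("Absolutely loved the atmosphere and the cocktails.", "positive"),
]

def build_few_shot_prompt(examples, new_input):
    """Format every demonstration and the new input in one consistent pattern."""
    blocks = [f'Input: "{text}"\nOutput: {label}' for text, label in examples]
    # The final block is left open so the model fills in the label.
    blocks.append(f'Input: "{new_input}"\nOutput:')
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(EXAMPLES, "It was okay, nothing special.")
print(prompt)
```

Keeping all examples in one formatting function is deliberate: the model infers the pattern from the surface form, so every demonstration should look identical except for its content.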

**Why it works**

LLMs are trained on enormous amounts of text that embed implicit patterns. Few-shot examples trigger in-context learning: the model treats the examples as a demonstration of what's expected and extends the pattern to the new input, all without any weight updates.

**How many shots?**

Typically 3–8 examples per category or output type. More isn't always better: beyond about 10 examples you are spending context-window tokens for little accuracy gain. For complex tasks, chain-of-thought few-shot examples (which show the reasoning steps, not just input-output pairs) outperform plain input-output examples.
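A chain-of-thought demonstration differs from a plain pair only in that it inserts a reasoning step before the label. The sketch below is illustrative: the example review and its reasoning text are invented for this snippet, not taken from any benchmark.

```python
# A chain-of-thought few-shot demonstration: input, reasoning, then label.
# The review and reasoning here are hypothetical, for illustration only.
COT_EXAMPLE = (
    'Input: "The waiter forgot our order twice, but the dessert was great."\n'
    "Reasoning: The review mixes a service complaint with food praise; "
    "the repeated service failure dominates.\n"
    "Output: negative"
)

def build_cot_prompt(cot_examples, new_input):
    """Append the new input and cue the model to reason before answering."""
    blocks = list(cot_examples)
    # Ending with "Reasoning:" prompts the model to produce its steps first.
    blocks.append(f'Input: "{new_input}"\nReasoning:')
    return "\n\n".join(blocks)

cot_prompt = build_cot_prompt([COT_EXAMPLE], "It was okay, nothing special.")
print(cot_prompt)
```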

**Few-shot vs fine-tuning**

Few-shot prompting has no training cost and takes effect immediately, but it consumes context-window space and is lost when the context resets. Fine-tuning has a one-time training cost, persists across requests, is faster at inference (no example tokens to process), and is more reliable for consistent output formatting.

**Pitfalls**

Example selection matters enormously: biased examples produce biased outputs. Inconsistent formatting across examples confuses the model. And few-shot examples consume tokens, pushing you closer to the context limit.
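The token-consumption pitfall can be guarded against with a rough budget check before adding more examples. The 4-characters-per-token figure below is a crude heuristic for English text, not a real tokenizer; in practice you would count tokens with your model's own tokenizer.

```python
def approx_tokens(text):
    # Crude assumption: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def examples_fit_budget(examples, max_example_tokens=500):
    """Check whether the formatted few-shot examples stay under a token budget.

    `max_example_tokens` is an arbitrary illustrative cap, not a model limit.
    """
    total = sum(
        approx_tokens(f'Input: "{text}"\nOutput: {label}')
        for text, label in examples
    )
    return total <= max_example_tokens
```

A check like this makes the trade-off explicit: every extra demonstration buys pattern clarity at the price of context-window space for the actual task.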