System Prompt
System Prompt (lời nhắc hệ thống)
A privileged instruction block sent to an LLM before the conversation begins, used to set role, constraints, and behavior.
The system prompt is an instruction block that sits at the very beginning of a conversation, separated from user messages. Models are trained to treat it as authoritative context that frames everything that follows. It's the closest thing to "configuration" for an LLM conversation.
**What goes in a system prompt**
- *Role definition*: "You are a helpful customer support agent for Acme Inc."
- *Behavioral constraints*: "Always respond in Vietnamese. Never discuss competitor products."
- *Output format*: "Respond only with valid JSON matching the schema below."
- *Knowledge injection*: current date, user tier, product catalog snippets.
- *Safety guardrails*: "If the user asks for harmful content, decline politely."
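The parts above are usually assembled into a single string at request time, so dynamic values like the date stay fresh. A minimal sketch (the wording and the `build_system_prompt` / `user_tier` names are illustrative, not a recommended template):

```python
from datetime import date

def build_system_prompt(user_tier: str) -> str:
    """Assemble a system prompt from the component types listed above."""
    parts = [
        # Role definition
        "You are a helpful customer support agent for Acme Inc.",
        # Behavioral constraints
        "Always respond in Vietnamese. Never discuss competitor products.",
        # Knowledge injection: rendered fresh on every request
        f"Current date: {date.today().isoformat()}. User tier: {user_tier}.",
        # Safety guardrail
        "If the user asks for harmful content, decline politely.",
    ]
    return "\n\n".join(parts)

prompt = build_system_prompt("premium")
```

Building the prompt in code rather than storing one frozen string also makes the staleness problem (see Pitfalls below) easier to manage: each component can be reviewed and versioned on its own.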
**How models handle it**
Models trained with RLHF typically treat the system prompt as more "trusted" than user messages: when instructions conflict, the system prompt usually wins. However, a system prompt is not a security boundary — sophisticated prompt injection can still override it.
**API vs chat interface**
In the OpenAI and Anthropic APIs, the system prompt is a dedicated field: OpenAI's Chat Completions API takes it as a message with role `system`, while Anthropic's Messages API takes it as a top-level `system` parameter, separate from the message list. In chat UIs it's often called "custom instructions" or "personality," and some wrapper libraries simply prepend it to the first user message.
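The placement difference is easiest to see in the raw request payloads. A sketch of the two shapes (model names are placeholders; no network call is made here):

```python
SYSTEM = "You are a helpful customer support agent for Acme Inc."
USER = "Where is my order?"

# OpenAI Chat Completions: the system prompt travels inside the
# messages list, as the first message with role "system".
openai_request = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": USER},
    ],
}

# Anthropic Messages API: the system prompt is a top-level "system"
# field; the messages list holds only the conversation turns.
anthropic_request = {
    "model": "claude-sonnet-4-5",  # placeholder model name
    "max_tokens": 1024,
    "system": SYSTEM,
    "messages": [{"role": "user", "content": USER}],
}
```

Wrappers that "combine it with the first user message" are effectively flattening the OpenAI shape: they concatenate `SYSTEM` onto the first user turn, which loses the privileged separation the APIs provide.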
**Pitfalls**
System prompts become stale as the product evolves — outdated instructions cause confusing behavior. Very long system prompts eat into the context window, leaving less room for conversation history. And despite the "system" name, users can often extract system prompts through prompt injection attacks — treat them as lightly confidential, not secret.
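The context-window cost is worth checking in CI before a prompt ships. A rough sketch of such a guard, assuming a crude characters-per-token heuristic (a real tokenizer such as `tiktoken` gives accurate counts; the 4-chars-per-token ratio and the 2000-token budget are illustrative assumptions):

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    # Swap in a real tokenizer for production use.
    return len(text) // 4

def fits_budget(system_prompt: str, budget_tokens: int = 2000) -> bool:
    """Flag system prompts that would crowd out conversation history."""
    return estimate_tokens(system_prompt) <= budget_tokens

assert fits_budget("You are a helpful assistant.")
```

A check like this won't catch stale instructions, but it turns silent context-window erosion into a visible test failure when someone pastes an entire product catalog into the prompt.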