Principles and Techniques for Prompt Writing in Context Engineering


October 12, 2025 · AI · 5 min read
Tecker Yu
AI Native Cloud Engineer × Part-time Investor

A summary of Anthropic's post: Effective context engineering for AI agents

Introduction: From Single Prompts to Context State Management

When building complex AI agents, our focus has to shift beyond simply writing “magical” prompts. Steering a large language model (LLM) through a task is a multi-turn, dynamic state-management process. Therefore, we need to move from “prompt engineering” to “context engineering.”

Prompt engineering focuses on how to write and organize LLM instructions to achieve optimal results. Context engineering, as the natural evolution of prompt engineering, concerns itself with the complete set of strategies for curating and maintaining optimal token collections (information) throughout the LLM reasoning process. System prompts are the core component of an agent’s initial, static context, and their writing quality directly determines the agent’s initial utility and guidance direction.

Effective context engineering requires us to “think in context”: given a limited context window, find the smallest possible set of high-signal tokens that maximizes the likelihood of the desired outcome.

Core Principles for Prompt Writing: Clarity and Minimalism

Context is an agent’s limited resource, as it is constrained by architectural limitations such as the LLM’s attention budget and context decay. Therefore, system prompts must be efficient and compact.

Extremely Clear and Direct Language

System prompts should be extremely clear, using simple, direct language to convey the agent’s guidance.

Pursuing Minimal Information Collections

Regardless of how you organize your system prompts, you should strive for the minimal set of information that still adequately describes the expected agent behavior.

Note that “minimal” here does not necessarily mean brief. You still need to provide the agent with sufficient information upfront to ensure it can adhere to the expected behavior. Our overall guiding principle is to be thoughtful while keeping the context informative, yet tight.

Best Practice: It’s better to start testing with a minimal prompt and use the best available model to observe its performance on the task. Then, iteratively add clear instructions and examples to improve performance based on the failure patterns discovered in initial testing.
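The iterate-from-minimal workflow above can be sketched in code. This is a hypothetical illustration: the domain, the failure pattern, and the added instruction are all invented for the example.

```python
# Step 1: start testing with a minimal prompt on the best available model.
MINIMAL_PROMPT = (
    "You are a support agent for Acme's billing API. "
    "Answer user questions accurately."
)

# Step 2: suppose testing reveals a concrete failure pattern, e.g. the agent
# guesses refund amounts instead of checking records. Add one targeted
# instruction per observed failure, not a wall of speculative rules.
REVISED_PROMPT = MINIMAL_PROMPT + (
    "\nNever state a refund amount unless it appears in a tool result; "
    "if you have not looked it up, say so."
)
```

Each revision stays anchored to an observed failure, which keeps the prompt minimal yet informative.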

Key Technique: Finding “The Right Altitude”

“The right altitude” is the “Goldilocks zone” that agent developers need to master when writing prompts. It requires us to strike a balance between two common failure modes:

Failure Mode One: Too Rigid (Hard-coded Complex Logic)

  • Characteristics: Engineers attempt to hard-code complex, brittle logic in prompts to trigger precise agent behaviors.
  • Consequences: This approach increases system fragility and maintenance complexity over time.
  • Example (to avoid): Like “If tool A returns an HTTP 404 error, and the user hasn’t mentioned ‘retry’ in the past 5 minutes, then you must call tool B and set the query parameter to the first 10 words of the user message, unless the user message contains a URL”—this brittle If-Else logic should be avoided.

Failure Mode Two: Too Vague (Fuzzy Guidance)

  • Characteristics: Engineers provide vague, high-level guidance that fails to give the LLM specific signals to guide expected outputs, or incorrectly assumes the agent has shared context.
  • Consequences: The agent may deviate from goals and fail to make effective, expected actions.
  • Example (to avoid): Like “Be a good assistant and help users solve problems”—this prompt is too general to effectively guide behavior.

Prompts at “The Right Altitude”

The optimal altitude is: specific enough to guide behavior effectively, yet flexible enough to give the model strong heuristics rather than rigid rules.

Example (ideal system prompt fragment):

Agent Role and Goal: “You are a professional technical documentation analyst. Your primary goal is to accurately answer user queries about the document library. You must always use the search_documents tool to obtain the latest information. If information sources are uncertain, you must clearly inform the user of insufficient information.”

Constraints and Output Format: “Before answering, you must indicate in internal reasoning steps which tools you used and what information you retrieved. Final answers must be concise and contain only information from tool results.”
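As a minimal sketch, the two fragments above could be assembled into a single system prompt. The section labels and the `search_documents` tool name follow the example text; the helper function itself is an assumption, not a prescribed API.

```python
ROLE_AND_GOAL = (
    "You are a professional technical documentation analyst. "
    "Your primary goal is to accurately answer user queries about the document library. "
    "You must always use the search_documents tool to obtain the latest information. "
    "If information sources are uncertain, you must clearly inform the user of "
    "insufficient information."
)

CONSTRAINTS = (
    "Before answering, indicate in internal reasoning steps which tools you used "
    "and what information you retrieved. Final answers must be concise and contain "
    "only information from tool results."
)

def build_system_prompt() -> str:
    """Join the labeled sections with blank lines so each stays visually distinct."""
    return "\n\n".join([
        "## Role and goal\n" + ROLE_AND_GOAL,
        "## Constraints and output format\n" + CONSTRAINTS,
    ])
```

Note that the prompt names concrete tools and fallback behavior (strong heuristics) without hard-coding any branching logic.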

Structured Prompts: Leveraging Tags to Enhance Agent Readability

To avoid mixed prompt content and improve agent comprehension efficiency, it’s recommended to organize prompts into distinct sections.

Technique: Use XML tags or Markdown headings for division

It’s recommended to use techniques such as XML tags or Markdown headings to divide these sections. This structured approach enables LLMs to better identify different types of information, and this clear division remains good practice even as model capabilities continue to improve.

| Section Name | Tag Example | Purpose |
| --- | --- | --- |
| Background Information | `<background_information>` | Sets the agent’s professional domain or constraints. |
| Instructions | `<instructions>` | Clearly defines steps or action rules the agent must follow. |
| Tool Guidance | `## Tool guidance` | Describes how to use tools, when to use them, and their priorities. |
| Output Description | `## Output description` | Specifies format requirements for the final output (such as JSON, Markdown). |
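A short sketch of this tag-based structure, mixing XML tags and Markdown headings. The section names follow the table; the section contents and tool names are illustrative assumptions.

```python
def tag(name: str, body: str) -> str:
    """Wrap a prompt section in an XML-style tag so the model can tell sections apart."""
    return f"<{name}>\n{body}\n</{name}>"

system_prompt = "\n\n".join([
    tag("background_information",
        "You analyze internal engineering documents."),
    tag("instructions",
        "1. Search before answering.\n2. Cite the document title for every claim."),
    "## Tool guidance\n"
    "Prefer search_documents; fall back to list_documents when the query is broad.",
    "## Output description\n"
    "Respond in Markdown with a short summary followed by bullet points.",
])
```

The exact delimiters matter less than using them consistently, so the model can reliably separate background, rules, tool guidance, and output format.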

Carefully Curated Few-shot Prompting

Providing examples (i.e., few-shot prompting) is a consistently and strongly recommended best practice in agent development. Examples act as concrete blueprints of expected agent behavior, and they carry tremendous guiding power for LLMs.

Curation Points:

  • Avoid stacking edge cases: Don’t try to cram a large list of edge cases into prompts, attempting to cover every rule the agent might need to follow.
  • Provide normalized examples: You should strive to curate a diverse, normalized set of examples to effectively illustrate expected agent behavior. These examples should demonstrate core functionality, correct tool usage workflows, and expected response styles.
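A hypothetical few-shot setup in the common chat-messages shape: a small, diverse set of canonical examples rather than a pile of edge cases. All questions, answers, and document names here are invented for illustration.

```python
FEW_SHOT_EXAMPLES = [
    # Core functionality: a normal question answered from tool results.
    {"role": "user", "content": "What does the retry policy default to?"},
    {"role": "assistant",
     "content": "Per deployment-guide.md, retries default to 3 with exponential backoff."},
    # Expected response style when the information is missing.
    {"role": "user", "content": "What is the SLA for the beta tier?"},
    {"role": "assistant",
     "content": "I could not find an SLA for the beta tier in the document library."},
]

messages = [{"role": "system",
             "content": "You answer questions about the document library."}]
messages += FEW_SHOT_EXAMPLES
```

Two well-chosen examples demonstrating core behavior and the missing-information case teach more than a dozen overlapping edge-case rules would.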

Through these techniques, agent developers can ensure their system prompts provide effective guidance signals within the context engineering framework while respecting the LLM’s limited attention budget, thereby building more reliable and efficient agents.
