Agents are specialized AI configurations that run as sub-sessions for focused tasks.
Key insight: Agents ARE bundles. They use the same file format and are loaded via load_bundle(). The only difference is the frontmatter key (meta: vs bundle:).
→ For file format, tool/provider configuration, @mentions, and composition, see BUNDLE_GUIDE.md
→ For agent spawning and resolution patterns, see PATTERNS.md
This guide covers only what's unique to agents.
| Aspect | Bundle | Agent |
|---|---|---|
| Frontmatter key | bundle: | meta: |
| Required fields | name, version | name, description |
| Loaded via | load_bundle() | load_bundle() (same!) |
| Purpose | Session configuration | Sub-session with focused role |
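Since the only structural difference is the top-level frontmatter key, telling the two apart is a one-key check. The sketch below is a hypothetical helper (the real loader's internals aren't shown in this guide) that classifies a file by scanning its frontmatter:

```python
def frontmatter_kind(text: str) -> str:
    """Classify a file as 'agent' or 'bundle' by its top-level frontmatter key.

    Hypothetical helper -- illustrates the meta: vs bundle: distinction only;
    the actual load_bundle() implementation is not documented here.
    """
    # Frontmatter is the YAML between the leading '---' fences.
    _, raw, _ = text.split("---", 2)
    for line in raw.splitlines():
        key = line.split(":", 1)[0].strip()
        if key == "meta":
            return "agent"      # agents use meta:
        if key == "bundle":
            return "bundle"     # bundles use bundle:
    raise ValueError("no recognizable frontmatter key")
```

Both kinds then go through the same load_bundle() path; nothing else about loading changes.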
```yaml
# Bundle frontmatter          # Agent frontmatter
bundle:                       meta:
  name: my-bundle               name: my-agent
  version: 1.0.0                description: "..."
```

The description is THE critical field for agent discoverability. The coordinator and task tool see this description when deciding which agent to delegate to.
Answer three questions:
- WHEN should I use this agent? (Activation triggers)
- WHAT does it do? (Core capability)
- HOW do I invoke it? (Examples)
```yaml
meta:
  name: my-agent
  description: |
    [WHEN to use - activation triggers]. Use PROACTIVELY when [condition].
    [WHAT it does - core capability in 1-2 sentences].

    Examples:
    <example>
    user: '[Example user request]'
    assistant: 'I'll use my-agent to [action].'
    <commentary>[Why this agent is the right choice]</commentary>
    </example>
```

```yaml
meta:
  name: bug-hunter
  description: |
    Specialized debugging expert. Use PROACTIVELY when user reports errors,
    unexpected behavior, or test failures.

    Examples:
    <example>
    user: 'The pipeline is throwing a KeyError somewhere'
    assistant: 'I'll use bug-hunter to systematically track down this KeyError.'
    <commentary>Bug reports trigger bug-hunter delegation.</commentary>
    </example>
    <example>
    user: 'Tests are failing after the recent changes'
    assistant: 'Let me use bug-hunter to investigate the test failures.'
    <commentary>Test failures are a clear debugging task.</commentary>
    </example>
```

```yaml
# ❌ Too vague - when would you use this?
meta:
  description: "Helps with code stuff"

# ❌ No examples - callers have to guess
meta:
  description: "Analyzes code for quality issues"

# ✅ Clear triggers + capability + examples
meta:
  description: |
    Use PROACTIVELY when user reports errors or test failures.
    Systematic debugging with hypothesis-driven root cause analysis.

    <example>
    user: 'The build is failing'
    assistant: 'I'll use bug-hunter to investigate.'
    </example>
```

The meta.description field is the ONLY discovery mechanism for agents. When the task tool presents available agents to the LLM, this description is all it sees to decide which agent to use. Poor descriptions cause delegation failures; one-liner descriptions are unacceptable.
Every agent description MUST include:

1. **Value proposition** - What problem does this agent solve? What value does it provide?
2. **Trigger conditions** - Explicit conditions that should cause delegation to this agent. Use keywords: MUST, REQUIRED, ALWAYS, PROACTIVELY, "Use when..."
3. **Domain taxonomy** - Keywords and concepts this agent is authoritative on. Pattern: `**Authoritative on:** term1, term2, term3, "multi-word concept"`. This serves as the agent's "taxonomy" - terms that should trigger delegation.
4. **Examples** - Concrete examples showing user request → delegation rationale. Use <example> blocks with <commentary> tags.
```yaml
meta:
  name: my-agent
  description: |
    [ONE SENTENCE: What this agent does and why it matters]

    Use PROACTIVELY when [primary trigger condition].

    **Authoritative on:** [comma-separated domain terms/keywords]

    **MUST be used for:**
    - [Condition 1]
    - [Condition 2]

    <example>
    user: '[Example user request]'
    assistant: 'I'll delegate to [agent] because [reason].'
    <commentary>
    [Why this triggers the agent - helps LLMs learn the pattern]
    </commentary>
    </example>
```

- ❌ One-liner descriptions: "Helps with debugging"
- ❌ No trigger conditions: Missing WHEN to use
- ❌ No taxonomy terms: LLM can't match domain questions
- ❌ No examples: LLM doesn't learn delegation patterns
Check each agent's description against these criteria:
- >100 words (not a one-liner)
- Has explicit trigger conditions
- Lists domain terms ("Authoritative on:")
- Includes at least one example
- Explains the value proposition
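The checklist above can be mechanized as a rough lint pass. This is an illustrative sketch, not a tool shipped with the system; the heuristics (word count, keyword matching) simply mirror the criteria listed above:

```python
def lint_description(desc: str) -> list[str]:
    """Rough checks mirroring the description-quality checklist.

    Hypothetical helper: the system does not ship this linter;
    it only encodes the criteria from the checklist above.
    """
    problems = []
    if len(desc.split()) <= 100:
        problems.append("too short: aim for >100 words, not a one-liner")
    if not any(kw in desc for kw in ("Use PROACTIVELY", "MUST", "Use when")):
        problems.append("no explicit trigger conditions")
    if "Authoritative on:" not in desc:
        problems.append("no domain terms ('Authoritative on:')")
    if "<example>" not in desc:
        problems.append("no <example> blocks")
    return problems
```

Running it over `"Helps with code stuff"` flags all four problems; a description built from the full template passes clean.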
The markdown body after frontmatter becomes the agent's system prompt. Recommended structure:
```markdown
# Agent Name

[One-line role description]

**Execution model:** You run as a one-shot sub-session. Work with what
you're given and return complete results.

## Operating Principles

1. [Principle 1]
2. [Principle 2]

## Workflow

1. [Step 1]
2. [Step 2]

## Output Contract

Your response MUST include:
- [Required element 1]
- [Required element 2]

---

@foundation:context/shared/common-agent-base.md
```

Always end with the @mention to include shared base instructions (git guidelines, tone, security, tool policies).
Agents can declare what kind of model they need rather than pinning a specific provider or model. The routing matrix resolves the role to a concrete provider/model at session start, based on the active matrix and installed providers.
String shorthand — request a single role:

```yaml
meta:
  name: my-agent
  description: "..."
  model_role: coding
```

List form with fallback chain — try roles in order:

```yaml
meta:
  name: my-agent
  description: "..."
  model_role: [vision, coding, general]
```

With the list form, the system tries vision first. If no installed provider matches any candidate for that role, it falls back to coding, then general.
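The fallback walk amounts to a first-match scan over the active matrix. The sketch below assumes the matrix is a plain role → (provider, model) mapping; the real resolver's data structures aren't documented in this guide:

```python
def resolve_model_role(requested, matrix):
    """Return the first requested role the active matrix defines.

    `requested` is a role string or a list of roles (the fallback chain).
    `matrix` is assumed here to map role names to (provider, model) pairs.
    Every valid matrix must define 'general', so that is the final fallback.
    """
    roles = [requested] if isinstance(requested, str) else requested
    for role in roles:
        if role in matrix:          # first role the matrix defines wins
            return matrix[role]
    return matrix["general"]        # guaranteed present in a valid matrix
```

So `model_role: [vision, coding, general]` returns the vision entry when the matrix defines one, and otherwise falls through to coding, then general.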
| Role | Use for |
|---|---|
| coding | Code generation, implementation, debugging |
| ui-coding | Frontend/UI code — components, layouts, styling, spatial reasoning |
| security-audit | Vulnerability assessment, attack surface analysis, code auditing |
| reasoning | Deep architectural reasoning, system design, complex multi-step analysis |
| critique | Analytical evaluation — finding flaws in existing work, not generating solutions |
| creative | Design direction, aesthetic judgment, high-quality creative output |
| writing | Long-form content — documentation, marketing, case studies, storytelling |
| research | Deep investigation, information synthesis across multiple sources |
| vision | Understanding visual input — screenshots, diagrams, UI mockups |
| image-gen | Image generation, visual mockup creation, visual ideation |
| critical-ops | High-reliability operational tasks — infrastructure, orchestration, coordination |
| fast | Quick utility tasks — parsing, classification, file ops, bulk work |
| general | Versatile catch-all, no specialization needed |
Choosing the right role? See the routing-matrix bundle's context/role-definitions.md for detailed guidance on each role, including "when to use / when NOT to use" recommendations.
Every routing matrix must define general and fast. Other roles are optional — if a role isn't defined in the active matrix, the fallback chain skips it.
```markdown
---
meta:
  name: code-reviewer
  description: |
    Use PROACTIVELY when user asks for code review or quality analysis.
    Systematic review with actionable feedback.
  model_role: [coding, general]
---

# Code Reviewer

[Agent instructions...]
```

If you need to pin a specific provider and model (bypassing routing), provider_preferences in agent frontmatter still works:
```yaml
meta:
  name: my-agent
  description: "..."
  provider_preferences:
    - provider: anthropic
      model: claude-opus-4-6
```

When both model_role and provider_preferences are present, provider_preferences takes priority.
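The precedence rule can be sketched in a few lines. Field shapes are taken from the YAML examples above; the function itself is an assumption, not the system's actual resolver:

```python
def pick_model(frontmatter, matrix):
    """provider_preferences (explicit pin) beats model_role (routing).

    Hypothetical sketch of the precedence rule only; `frontmatter` is the
    parsed meta: mapping and `matrix` maps roles to (provider, model) pairs.
    """
    prefs = frontmatter.get("provider_preferences")
    if prefs:
        first = prefs[0]                    # explicit pin wins outright
        return (first["provider"], first["model"])
    role = frontmatter.get("model_role", "general")
    roles = [role] if isinstance(role, str) else role
    for r in roles:                         # otherwise walk the fallback chain
        if r in matrix:
            return matrix[r]
    return matrix["general"]                # every valid matrix defines it
```

An agent declaring both keys resolves to its pinned provider/model; drop provider_preferences and routing takes over again.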
Expert agents serve as context sinks - they carry heavy documentation that would bloat every session if always loaded.
- Token efficiency: Heavy docs load ONLY when agent spawns, not in every session
- Delegation pattern: Parent sessions stay lean; sub-sessions burn context doing work
- Longer session success: Critical strategy for sessions that run many turns
```markdown
---
meta:
  name: my-expert
  description: "Expert for X domain. Delegate when user needs..."
---

# My Expert

[Role description]

## Knowledge Base

@my-bundle:docs/FULL_GUIDE.md    # Heavy docs - loaded only when spawned
@my-bundle:docs/REFERENCE.md     # More heavy docs
@my-bundle:docs/PATTERNS.md      # Even more

---

@foundation:context/shared/common-agent-base.md
```

Pair your expert agent with a behavior that injects a thin awareness pointer:
```yaml
# behaviors/my-expert.yaml
bundle:
  name: behavior-my-expert
  version: 1.0.0
  agents:
    include:
      - my-bundle:my-expert                  # Heavy agent file
  context:
    include:
      - my-bundle:context/my-awareness.md    # Thin pointer (~30 lines)
```

The thin awareness file tells root sessions: "This domain exists. Delegate to my-bundle:my-expert." The agent file carries all the heavy @mentions that only load when the agent is actually spawned.
```yaml
# ❌ BAD: Heavy docs in behavior context (loads for everyone)
context:
  include:
    - my-bundle:docs/FULL_GUIDE.md       # 500 lines in every session!
    - my-bundle:docs/REFERENCE.md        # More bloat

# ✅ GOOD: Thin pointer in behavior, heavy docs in agent
context:
  include:
    - my-bundle:context/awareness.md     # 30 lines: "domain exists, delegate"
```

Common pitfalls:

- **Vague description:** Callers don't know when to use the agent. Add activation triggers and examples.
- **Missing common base:** Forgetting @foundation:context/shared/common-agent-base.md causes inconsistent behavior.
- **No output contract:** Callers don't know what to expect back. Define what the agent returns.
- **Reinventing bundle mechanics:** Agents ARE bundles. Don't reinvent - use the same patterns from BUNDLE_GUIDE.md.
- **Context bloat:** Put heavy @mentions in agent files (context sink), not in behavior context.include.
| Topic | Documentation |
|---|---|
| File format, YAML structure | BUNDLE_GUIDE.md |
| Tool/provider configuration | BUNDLE_GUIDE.md |
| @mention resolution | BUNDLE_GUIDE.md |
| Agent spawning patterns | PATTERNS.md |
| Agent resolution | PATTERNS.md |
| Bundle composition | CONCEPTS.md |