# Team Mode
Team mode lets you define multiple personas in a single YAML file. Personas run sequentially — each one receives the prior persona's output as context, building a chain of perspectives on the same task.
Unlike Compose (which wires separate agent services together), team mode keeps everything in one file with no delegate sinks, no depends_on, and no separate role YAMLs.
## Quick Start
```yaml
# team.yaml
apiVersion: initrunner/v1
kind: Team
metadata:
  name: code-review-team
  description: Multi-perspective code review
spec:
  model:
    provider: openai
    name: gpt-5-mini
  personas:
    architect: "review for design patterns, SOLID principles, and architecture issues"
    security: "find security vulnerabilities, injection risks, auth issues"
    maintainer: "check readability, naming, test coverage gaps, docs"
  tools:
    - type: filesystem
      root_path: .
      read_only: true
    - type: git
      repo_path: .
      read_only: true
```

```shell
initrunner validate team.yaml
initrunner run team.yaml --task "review the auth module"
```

A prompt (`--task` or `-p`) is required. Interactive (`-i`) and autonomous (`-a`) modes are not supported for teams.
## How It Works
- The runner loads the team file and validates it (`kind: Team`).
- For each persona (in insertion order), a temporary agent is created with the persona's prompt as its system role.
- The task prompt is sent to the first persona. Each subsequent persona receives the original task plus all prior outputs wrapped in `<prior-agent-output>` XML tags.
- Tools and guardrails are shared across all personas.
- The final persona's output is returned as the team result.
Prior outputs are wrapped in XML to mitigate prompt injection from earlier personas:

```xml
<prior-agent-output persona="architect">
...architect's review...
</prior-agent-output>
```

## Team Definition
| Field | Type | Required | Description |
|---|---|---|---|
| `apiVersion` | str | yes | `initrunner/v1` |
| `kind` | str | yes | Must be `"Team"` |
| `metadata.name` | str | yes | Team name (lowercase, hyphens) |
| `metadata.description` | str | no | Human-readable description |
| `spec.model` | object | yes | Model configuration (shared by all personas) |
| `spec.personas` | dict | yes | Ordered map of persona name to system prompt |
| `spec.tools` | list | no | Tools available to all personas |
| `spec.guardrails` | object | no | Per-run and team-level guardrails |
## Personas
Personas are defined as a YAML mapping where the key is the persona name and the value is the system prompt:
```yaml
personas:
  researcher: "gather comprehensive information about the topic, listing key facts, sources, and different perspectives"
  fact-checker: "verify claims from the research, flag unsupported statements, and note confidence levels"
  writer: "synthesize the verified research into a clear, well-structured summary"
```

Personas run in insertion order — YAML preserves key order, so the order you write them is the order they execute. Each persona is a lightweight agent with its own system prompt but shared model, tools, and guardrails.
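This order guarantee carries through to the runtime: once a YAML mapping is loaded into a Python dict (guaranteed insertion-ordered since Python 3.7), iterating it yields personas exactly as written. A minimal illustration, using a hand-built dict in place of a parsed file:

```python
# Stand-in for a loaded `spec.personas` mapping; a YAML loader that returns a
# plain dict preserves the order the keys appear in the file.
personas = {
    "researcher": "gather comprehensive information about the topic ...",
    "fact-checker": "verify claims from the research ...",
    "writer": "synthesize the verified research ...",
}

execution_order = list(personas)  # iteration order == written order
```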
## Guardrails
Team mode supports all standard per-run guardrails plus two team-specific limits:
| Field | Type | Default | Description |
|---|---|---|---|
| `team_token_budget` | int | null | Cumulative token budget across all personas. Pipeline stops if exceeded. |
| `team_timeout_seconds` | int | null | Wall-clock limit for the entire team run. Pipeline stops if exceeded. |
```yaml
guardrails:
  max_tokens_per_run: 50000
  max_tool_calls: 20
  timeout_seconds: 300
  team_token_budget: 150000
  team_timeout_seconds: 900
```

`max_tokens_per_run` and `timeout_seconds` apply to each persona individually. `team_token_budget` and `team_timeout_seconds` apply to the entire team run across all personas.
## Audit Logging

Team runs are logged with `trigger_type: "team"` in the audit database. Each persona's run is tracked individually with a shared `team_run_id` so you can correlate them:
```json
{
  "trigger_type": "team",
  "team_run_id": "abc123",
  "persona": "architect",
  "tokens_used": 4200
}
```

Use `initrunner audit export` to inspect team run logs.
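Exported records with the shape shown above can be grouped by `team_run_id` to total token usage per team run. This grouping is a post-processing sketch over that assumed JSON shape, not a built-in initrunner command:

```python
from collections import defaultdict


def tokens_by_team_run(records: list[dict]) -> dict[str, int]:
    """Sum tokens_used across all persona runs sharing a team_run_id."""
    totals: dict[str, int] = defaultdict(int)
    for rec in records:
        if rec.get("trigger_type") == "team":
            totals[rec["team_run_id"]] += rec.get("tokens_used", 0)
    return dict(totals)
```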
## Validation

`initrunner validate` supports `kind: Team` files:

```shell
initrunner validate team.yaml
```

It checks for valid persona names, model configuration, tool definitions, and guardrail values.
## Team vs Compose

| | Team | Compose |
|---|---|---|
| File count | One YAML | One compose YAML + one role YAML per service |
| Execution | Sequential personas | Parallel services with delegate sinks |
| Data flow | Automatic — prior output injected as context | Explicit — delegate sinks route between services |
| Model | Shared across all personas | Each service has its own model |
| Use case | Multiple perspectives on one task | Multi-service pipelines and workflows |
Use team mode when you want multiple viewpoints on the same input. Use Compose when you need independent services with different models, triggers, and routing.
## Examples

### Code Review Team
Three personas review code from different angles:
```yaml
apiVersion: initrunner/v1
kind: Team
metadata:
  name: code-review-team
  description: Multi-perspective code review
spec:
  model:
    provider: openai
    name: gpt-5-mini
  personas:
    architect: "review for design patterns, SOLID principles, and architecture issues"
    security: "find security vulnerabilities, injection risks, auth issues"
    maintainer: "check readability, naming, test coverage gaps, docs"
  tools:
    - type: filesystem
      root_path: .
      read_only: true
    - type: git
      repo_path: .
      read_only: true
  guardrails:
    max_tokens_per_run: 50000
    max_tool_calls: 20
    timeout_seconds: 300
    team_token_budget: 150000
```

```shell
initrunner run code-review-team.yaml --task "review the auth module"
```

### Research Team
Research a topic, verify claims, then produce a polished summary:
```yaml
apiVersion: initrunner/v1
kind: Team
metadata:
  name: research-team
  description: Research a topic and produce a polished summary
spec:
  model:
    provider: openai
    name: gpt-5-mini
  personas:
    researcher: "gather comprehensive information about the topic, listing key facts, sources, and different perspectives"
    fact-checker: "verify claims from the research, flag unsupported statements, and note confidence levels"
    writer: "synthesize the verified research into a clear, well-structured summary"
  tools:
    - type: web_reader
    - type: datetime
  guardrails:
    max_tokens_per_run: 50000
    timeout_seconds: 300
    team_token_budget: 150000
    team_timeout_seconds: 900
```

```shell
initrunner run research-team.yaml --task "summarize the state of WebAssembly adoption in 2026"
```