Compose
Agent Composer lets you define multiple agents as services in a single compose.yaml file, wire them together with delegate sinks, and run them all with one command.
Services start in tiers based on depends_on. Each service is a standalone agent connected to others via delegate sinks — in-memory queues that route output from one agent to the next.
Quick Start
```yaml
# compose.yaml
apiVersion: initrunner/v1
kind: Compose
metadata:
  name: my-pipeline
  description: Simple producer-consumer pipeline
spec:
  services:
    producer:
      role: roles/producer.yaml
      sink:
        type: delegate
        target: consumer
    consumer:
      role: roles/consumer.yaml
      depends_on:
        - producer
```

```bash
# Validate
initrunner compose validate compose.yaml

# Start (foreground, Ctrl+C to stop)
initrunner compose up compose.yaml
```

Compose Definition
The top-level structure follows the apiVersion/kind/metadata/spec pattern:
| Field | Type | Default | Description |
|---|---|---|---|
| apiVersion | str | (required) | e.g. initrunner/v1 |
| kind | str | (required) | Must be "Compose" |
| metadata.name | str | (required) | Compose definition name |
| metadata.description | str | "" | Human-readable description |
| spec.services | dict | (required) | Map of service name to configuration |
Service Configuration
```yaml
services:
  my-service:
    role: roles/my-role.yaml
    sink:
      type: delegate
      target: other-service
    depends_on:
      - dependency-service
    restart:
      condition: on-failure
      max_retries: 3
      delay_seconds: 5
    environment: {}
```

| Field | Type | Default | Description |
|---|---|---|---|
| role | str | (required) | Path to role YAML (relative to compose file) |
| sink | object \| null | null | Delegate sink for routing output |
| depends_on | list[str] | [] | Services that must start first |
| restart.condition | str | "none" | "none", "on-failure", or "always" |
| restart.max_retries | int | 3 | Maximum restart attempts |
| restart.delay_seconds | int | 5 | Seconds to wait before restarting |
| environment | dict | {} | Additional environment variables |
Delegate Sinks
Route a service's output to other services via in-memory queues.
```yaml
# Single target
sink:
  type: delegate
  target: consumer
  queue_size: 100
  timeout_seconds: 60
```

```yaml
# Fan-out to multiple targets
sink:
  type: delegate
  target:
    - researcher
    - responder
  keep_existing_sinks: true
```

| Field | Type | Default | Description |
|---|---|---|---|
| type | str | (required) | Must be "delegate" |
| target | str \| list[str] | (required) | Target service name(s) |
| keep_existing_sinks | bool | false | Also activate role-level sinks |
| queue_size | int | 100 | Max buffered events in target's inbox |
| timeout_seconds | int | 60 | Seconds to block when the queue is full before dropping the event |
Only successful runs are forwarded. Failed runs are silently skipped.
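These forwarding semantics can be sketched with Python's standard `queue` module. This is a conceptual model only, not initrunner's actual code; the `result` dict shape and the `forward` helper are invented for illustration:

```python
import queue

def forward(result, inbox, timeout_seconds=60):
    """Forward a successful run's output to a target's bounded inbox.

    Mirrors the documented semantics: failed runs are skipped, and if
    the inbox stays full past the timeout the event is dropped.
    """
    if not result.get("success"):        # failed runs are silently skipped
        return "skipped"
    try:
        inbox.put(result["output"], timeout=timeout_seconds)
        return "delivered"
    except queue.Full:                   # queue stayed full: drop the event
        return "dropped"

inbox = queue.Queue(maxsize=2)           # queue_size: 2 (small, for demo)
print(forward({"success": True, "output": "a"}, inbox))                  # delivered
print(forward({"success": False}, inbox, timeout_seconds=0.1))           # skipped
print(forward({"success": True, "output": "b"}, inbox))                  # delivered
print(forward({"success": True, "output": "c"}, inbox, timeout_seconds=0.1))  # dropped
```

The key trade-off is that a full inbox applies backpressure for `timeout_seconds` and then drops, so slow consumers lose events rather than stalling the whole pipeline indefinitely.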
Startup Order
Services start in topological order based on depends_on. Services without dependencies start first, forming tiers of parallel startup. Shutdown happens in reverse order.
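Conceptually, the tiering amounts to grouping a topological sort by depth: a service's tier is one more than the deepest tier among its dependencies. A minimal sketch in Python (illustrative only, not initrunner's implementation):

```python
def startup_tiers(depends_on):
    """Group services into startup tiers from a name -> [deps] mapping."""
    tiers = {}

    def tier_of(svc, seen=()):
        if svc in seen:
            raise ValueError(f"dependency cycle involving {svc}")
        if svc not in tiers:
            deps = depends_on.get(svc, [])
            # Tier = 1 + deepest dependency tier; no deps -> tier 0
            tiers[svc] = 1 + max((tier_of(d, seen + (svc,)) for d in deps),
                                 default=-1)
        return tiers[svc]

    for svc in depends_on:
        tier_of(svc)
    groups = {}
    for svc, t in tiers.items():
        groups.setdefault(t, []).append(svc)
    return [sorted(groups[t]) for t in sorted(groups)]

deps = {
    "inbox-watcher": [],
    "triager": ["inbox-watcher"],
    "researcher": ["triager"],
    "responder": ["triager"],
}
print(startup_tiers(deps))
# [['inbox-watcher'], ['triager'], ['researcher', 'responder']]
```

Services within one tier have no ordering constraints between them, which is what allows them to start in parallel.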
```yaml
services:
  inbox-watcher:
    role: roles/inbox-watcher.yaml
    sink: { type: delegate, target: triager }
  triager:
    role: roles/triager.yaml
    depends_on: [inbox-watcher]
    sink: { type: delegate, target: [researcher, responder] }
  researcher:
    role: roles/researcher.yaml
    depends_on: [triager]
  responder:
    role: roles/responder.yaml
    depends_on: [triager]
```

Tier 0: inbox-watcher (no dependencies)
Tier 1: triager (depends on inbox-watcher)
Tier 2: researcher, responder (both depend on triager)

Restart Policies
| Condition | Restart when... |
|---|---|
| none | Never restart |
| on-failure | Restart only if errors were recorded |
| always | Restart whenever the service thread exits |
A health monitor thread checks every 10 seconds and applies restart policies.
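The decision the monitor makes on each pass can be sketched as a small predicate. This is a model of the behavior described above; the dict shape and `apply_restart_policy` name are invented, not initrunner's API:

```python
def apply_restart_policy(service):
    """Decide whether a service should be restarted on this health check."""
    if service["alive"]:
        return False                      # still running: nothing to do
    if service["retries"] >= service.get("max_retries", 3):
        return False                      # retry budget exhausted
    cond = service.get("condition", "none")
    if cond == "always":
        return True                       # restart on any exit
    if cond == "on-failure":
        return service["errors"] > 0      # restart only if errors recorded
    return False                          # "none": never restart

svc = {"alive": False, "condition": "on-failure", "errors": 2, "retries": 0}
print(apply_restart_policy(svc))  # True
```

In the real monitor this check runs on the 10-second interval, and a positive decision is followed by the configured `delay_seconds` pause before the service thread is relaunched.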
Systemd Deployment
Install compose pipelines as systemd user services for production:
```bash
# Install the unit
initrunner compose install compose.yaml

# Start
initrunner compose start my-pipeline

# Enable on boot
systemctl --user enable initrunner-my-pipeline.service

# Monitor
initrunner compose status my-pipeline
initrunner compose logs my-pipeline -f
```

Environment Variables

Systemd services don't inherit shell exports. Provide secrets via environment files:

- {compose_dir}/.env — project-level secrets
- ~/.initrunner/.env — user-level defaults

Use --generate-env to create a template .env file:

```bash
initrunner compose install compose.yaml --generate-env
```

User Lingering

To keep services running after logout:

```bash
loginctl enable-linger $USER
```

Example: Email Pipeline
```
inbox-watcher ──> triager ──> researcher
                     │
                     └──────> responder
```

```yaml
apiVersion: initrunner/v1
kind: Compose
metadata:
  name: email-pipeline
  description: Multi-agent email processing pipeline
spec:
  services:
    inbox-watcher:
      role: roles/inbox-watcher.yaml
      sink:
        type: delegate
        target: triager
    triager:
      role: roles/triager.yaml
      depends_on: [inbox-watcher]
      sink:
        type: delegate
        target: [researcher, responder]
        circuit_breaker_threshold: 5
    researcher:
      role: roles/researcher.yaml
      depends_on: [triager]
    responder:
      role: roles/responder.yaml
      depends_on: [triager]
      restart: { condition: on-failure, max_retries: 3, delay_seconds: 5 }
```

Service Roles
Each service points to a standalone role YAML. Here are the two key roles in this pipeline:
roles/triager.yaml — routes emails to the right handler:
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: triager
  description: Routes emails to the right handler
spec:
  role: >
    You are an email triage agent. Analyze the email summary and
    determine if it needs research (technical questions, data requests)
    or a direct response (simple inquiries, acknowledgments).
    Output your decision and reasoning clearly.
  model:
    provider: openai
    name: gpt-4o-mini
    temperature: 0.1
  guardrails:
    max_tokens_per_run: 2000
    timeout_seconds: 30
```

roles/responder.yaml — drafts email responses:
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: responder
  description: Drafts email responses
spec:
  role: >
    You are an email response agent. Given a triaged email that needs
    a direct response, draft a professional, helpful reply. Keep the
    tone friendly and concise.
  model:
    provider: openai
    name: gpt-4o-mini
    temperature: 0.5
  guardrails:
    max_tokens_per_run: 3000
    timeout_seconds: 30
```

Service roles are minimal — they focus on a single task and don't need triggers or sinks (the compose file handles routing). This keeps each agent simple and independently testable.
Example: CI Pipeline
A webhook-driven pipeline that processes CI events, diagnoses build failures, and sends notifications.
```
webhook-receiver ──> build-analyzer ──> notifier
```

compose.yaml
```yaml
apiVersion: initrunner/v1
kind: Compose
metadata:
  name: ci-pipeline
  description: CI event processing pipeline
spec:
  services:
    webhook-receiver:
      role: roles/webhook-receiver.yaml
      sink:
        type: delegate
        target: build-analyzer
    build-analyzer:
      role: roles/build-analyzer.yaml
      depends_on: [webhook-receiver]
      sink:
        type: delegate
        target: notifier
    notifier:
      role: roles/notifier.yaml
      depends_on: [build-analyzer]
      restart: { condition: on-failure, max_retries: 3, delay_seconds: 5 }
```

roles/notifier.yaml
The most interesting service — it combines Slack messaging with the GitHub commit status API:
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: ci-notifier
  description: Sends Slack notifications and updates GitHub commit status
spec:
  role: |
    You are a CI notification agent. You receive analyzed build events and:
    1. Send a formatted Slack notification:
       - Success: "✅ Build passed — [repo] @ [branch] ([sha])"
       - Failure: "❌ Build failed — [repo] @ [branch] ([sha])\n
         Diagnosis: [diagnosis]\nCategory: [category]"
       - Include the build URL as a link
       - Add a timestamp via get_current_time
    2. Update the GitHub commit status using the create_commit_status API
       endpoint:
       - state: "success" or "failure"
       - description: brief status message
       - context: "ci-pipeline/initrunner"
    Always send both the Slack message and the GitHub status update.
  model:
    provider: openai
    name: gpt-4o-mini
    temperature: 0.0
  tools:
    - type: slack
      webhook_url: "${SLACK_WEBHOOK_URL}"
      default_channel: "#ci-alerts"
      username: CI Pipeline
      icon_emoji: ":construction_worker:"
    - type: api
      name: github-status
      description: GitHub commit status API
      base_url: https://api.github.com
      headers:
        Accept: application/vnd.github.v3+json
      auth:
        Authorization: "Bearer ${GITHUB_TOKEN}"
      endpoints:
        - name: create_commit_status
          method: POST
          path: "/repos/{owner}/{repo}/statuses/{sha}"
          description: Create a commit status check
          parameters:
            - name: owner
              type: string
              required: true
            - name: repo
              type: string
              required: true
            - name: sha
              type: string
              required: true
            - name: state
              type: string
              required: true
              description: "pending, success, failure, or error"
            - name: description
              type: string
              required: false
            - name: context
              type: string
              required: false
              default: "ci-pipeline/initrunner"
          body_template:
            state: "{state}"
            description: "{description}"
            context: "{context}"
      timeout: 15
    - type: datetime
  guardrails:
    max_tokens_per_run: 15000
    max_tool_calls: 10
    timeout_seconds: 60
```

Test the webhook
```bash
# Start the pipeline
initrunner compose up compose.yaml

# In another terminal, send a test event
curl -X POST http://localhost:9090/ci-webhook \
  -H "Content-Type: application/json" \
  -d '{
    "source": "github-actions",
    "repo": "myorg/myapp",
    "branch": "main",
    "sha": "abc12345",
    "status": "failure",
    "author": "dev@example.com",
    "message": "fix: update auth middleware",
    "url": "https://github.com/myorg/myapp/actions/runs/12345"
  }'
```

What to notice: The notifier combines two tool types — `slack` for human-readable alerts and `api` for machine-readable GitHub status updates. The webhook receiver uses a `webhook` trigger (port 9090), and the compose file wires all three services together with delegate sinks.
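For intuition, calling an `api` tool endpoint amounts to substituting the agent's arguments into the endpoint's path and body_template. A rough sketch of that substitution, using the create_commit_status definition above (the `build_request` helper is hypothetical, not initrunner's implementation):

```python
def build_request(endpoint, args):
    """Fill an endpoint's path and body_template with call arguments,
    falling back to declared parameter defaults."""
    defaults = {p["name"]: p.get("default") for p in endpoint["parameters"]}
    values = {**defaults, **args}
    path = endpoint["path"].format(**values)
    body = {k: v.format(**values) for k, v in endpoint["body_template"].items()}
    return endpoint["method"], path, body

endpoint = {
    "method": "POST",
    "path": "/repos/{owner}/{repo}/statuses/{sha}",
    "parameters": [
        {"name": "owner"}, {"name": "repo"}, {"name": "sha"},
        {"name": "state"}, {"name": "description"},
        {"name": "context", "default": "ci-pipeline/initrunner"},
    ],
    "body_template": {"state": "{state}", "description": "{description}",
                      "context": "{context}"},
}

method, path, body = build_request(endpoint, {
    "owner": "myorg", "repo": "myapp", "sha": "abc12345",
    "state": "failure", "description": "tests failed",
})
print(method, path)     # POST /repos/myorg/myapp/statuses/abc12345
print(body["context"])  # ci-pipeline/initrunner
```

Note how the omitted `context` argument falls back to its declared default, so the agent only has to supply the required parameters.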
Example: Content Pipeline
```
content-watcher ──> researcher ──> writer
                        │
                        └──────> reviewer
```

Uses `process_existing: true` on the file watch trigger to handle files already in the directory on startup. See Triggers for details.