InitRunner

Configuration

InitRunner agents are configured through YAML role files. Every role follows the apiVersion/kind/metadata/spec structure.

Full Schema

apiVersion: initrunner/v1        # Required — API version
kind: Agent                       # Required — must be "Agent"

metadata:
  name: my-agent                  # Required — unique agent identifier
  description: ""                 # Optional — human-readable description
  tags: []                        # Optional — categorization tags
  author: ""                      # Optional — author name
  version: ""                     # Optional — semantic version
  dependencies: []                # Optional — pip dependencies

spec:
  role: |                         # Required — system prompt
    You are a helpful assistant.

  model:                          # Model configuration
    provider: openai              # Provider name
    name: gpt-4o-mini             # Model identifier
    temperature: 0.1              # Sampling temperature (0.0-2.0)
    max_tokens: 4096              # Max tokens per response
    base_url: null                # Custom endpoint URL
    api_key_env: null             # Env var for API key

  tools: []                       # Tool configurations
  guardrails: {}                  # Resource limits
  ingest: null                    # Document ingestion / RAG
  memory: null                    # Memory system
  triggers: []                    # Trigger configurations
  sinks: []                       # Output sink configurations
  security: null                  # Security policy
  skills: []                      # Skill references

Metadata Fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | (required) | Unique agent identifier |
| description | str | "" | Human-readable description |
| tags | list[str] | [] | Categorization tags |
| author | str | "" | Author name |
| version | str | "" | Semantic version string |
| dependencies | list[str] | [] | pip dependencies for custom tools |
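
A filled-in metadata block might look like the sketch below; the agent name, tags, and dependency pin are illustrative values, not defaults:

metadata:
  name: release-notes-bot            # unique identifier for this agent
  description: Drafts weekly release notes
  tags:
    - docs
    - automation
  author: Platform Team
  version: 1.2.0                     # semantic version of this role file
  dependencies:
    - requests>=2.31                 # pip packages required by custom tools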

Model Configuration

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| provider | str | "openai" | Provider name (openai, anthropic, google, groq, mistral, ollama) |
| name | str | "gpt-4o-mini" | Model identifier |
| base_url | str \| null | null | Custom endpoint URL (enables OpenAI-compatible mode) |
| api_key_env | str \| null | null | Environment variable containing the API key |
| temperature | float | 0.1 | Sampling temperature (0.0-2.0) |
| max_tokens | int | 4096 | Maximum tokens per response (1-128000) |
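
Setting base_url switches the agent to an OpenAI-compatible endpoint. A minimal sketch, assuming a self-hosted OpenAI-compatible gateway (the URL and model name are placeholders, not defaults):

model:
  provider: openai                       # OpenAI-compatible mode once base_url is set
  name: llama-3.1-8b-instruct            # whatever model the gateway serves (placeholder)
  base_url: http://localhost:8000/v1     # assumed gateway endpoint
  temperature: 0.2
  max_tokens: 2048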

See Providers for provider-specific setup and Ollama/OpenRouter configuration.

Guardrails

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| max_tokens_per_run | int | 50000 | Maximum output tokens consumed per agent run |
| max_tool_calls | int | 20 | Maximum tool invocations per run |
| timeout_seconds | int | 300 | Wall-clock timeout per run |
| max_request_limit | int \| null | null | Maximum LLM API round-trips per run |
| input_tokens_limit | int \| null | null | Per-request input token limit |
| total_tokens_limit | int \| null | null | Per-request combined input+output token limit |
| session_token_budget | int \| null | null | Cumulative token budget for a REPL session (warns at 80%) |
| daemon_token_budget | int \| null | null | Lifetime token budget for the daemon process |
| daemon_daily_token_budget | int \| null | null | Daily token budget for the daemon (resets at UTC midnight) |
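
For example, a guardrails block that tightens the per-run defaults and adds daemon budgets could look like this; the numbers are illustrative, not recommendations:

guardrails:
  max_tokens_per_run: 20000          # cap output tokens per run
  max_tool_calls: 10                 # cap tool invocations per run
  timeout_seconds: 120               # wall-clock limit per run
  max_request_limit: 15              # cap LLM API round-trips per run
  session_token_budget: 200000       # cumulative REPL budget, warns at 80%
  daemon_daily_token_budget: 500000  # resets at UTC midnight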

See Guardrails for enforcement behavior, daemon budgets, and autonomous limits.

Spec Sections Overview

| Section | Description | Docs |
| --- | --- | --- |
| model | LLM provider and model settings | Providers |
| tools | Tool configurations (filesystem, HTTP, MCP, custom, etc.) | Tools |
| guardrails | Token limits, timeouts, tool call limits | Guardrails |
| autonomy | Autonomous plan-execute-adapt loops | Autonomy |
| ingest | Document ingestion and RAG pipeline | Ingestion |
| memory | Session persistence and semantic memory | Memory |
| triggers | Cron, file watch, and webhook triggers | Triggers |
| security | Content policies, rate limiting, tool sandboxing | Security |

Environment Variables

| Variable | Description |
| --- | --- |
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GOOGLE_API_KEY | Google AI API key |
| GROQ_API_KEY | Groq API key |
| MISTRAL_API_KEY | Mistral API key |
| INITRUNNER_HOME | Data directory (default: ~/.initrunner/) |

The data directory is resolved in this order: INITRUNNER_HOME > XDG_DATA_HOME/initrunner > ~/.initrunner.
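
If a deployment cannot use the standard variables above, api_key_env points a role at a different one. A short sketch, with an arbitrary variable name and an example model identifier:

model:
  provider: anthropic
  name: claude-3-5-haiku-latest            # example model identifier
  api_key_env: SUPPORT_TEAM_ANTHROPIC_KEY  # read instead of ANTHROPIC_API_KEY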

Full Annotated Example

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: support-agent
  description: Answers questions from the support knowledge base
  tags:
    - support
    - rag
spec:
  role: |
    You are a support agent. Use search_documents to find relevant
    articles before answering. Always cite your sources.
  model:
    provider: openai
    name: gpt-4o-mini
    temperature: 0.1
    max_tokens: 4096
  ingest:
    sources:
      - "./knowledge-base/**/*.md"
      - "./docs/**/*.pdf"
    chunking:
      strategy: fixed
      chunk_size: 512
      chunk_overlap: 50
  tools:
    - type: filesystem
      root_path: ./src
      read_only: true
    - type: mcp
      transport: stdio
      command: npx
      args: ["-y", "@anthropic/mcp-server-filesystem"]
  triggers:
    - type: file_watch
      paths: ["./knowledge-base"]
      extensions: [".html", ".md"]
      prompt_template: "Knowledge base updated: {path}. Re-index."
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate weekly support coverage report."
  guardrails:
    max_tokens_per_run: 50000
    max_tool_calls: 20
    timeout_seconds: 300
