Agents as Config, Not Code
Define your agent's role, tools, and behavior in a YAML file. No framework to learn, portable enough to check into git.
Open-source CLI that turns a YAML file into a running AI agent you can serve as an API. Built-in RAG, persistent memory, OpenAI-compatible endpoints, and a live dashboard. Define, run, observe.
pip install initrunner

kind: Agent
name: code-reviewer
role: >
  You review pull requests for bugs,
  style issues, and security flaws.
model:
  provider: openai
  name: gpt-5-mini
tools:
  - type: github
    permissions: [read]
  - type: filesystem
    root_path: ./repo

$ initrunner run agent.yaml
[init] Loading agent: code-reviewer
[init] Provider: openai / gpt-5-mini
[init] Tools: github, filesystem
[run] Agent ready. Awaiting input...
> Review PR #42 for security issues
[tool] github → fetched PR #42 (3 files changed)
[tool] read_file → src/auth.ts
[done] Review posted. 2 issues found.
Every input, tool call, and output is written to an immutable SQLite log. Full auditability out of the box.
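Because it is plain SQLite, you can inspect the trail with any SQLite client. Below is a minimal sketch in Python, with the caveat that the file name, table, and column names are hypothetical placeholders rather than InitRunner's documented schema; check the docs for the real layout.

# Illustrative only: "runs.db", the events table, and its columns are
# assumptions for this sketch, not InitRunner's actual audit-log schema.
import sqlite3

conn = sqlite3.connect("runs.db")  # assumed location of the audit log
rows = conn.execute(
    "SELECT timestamp, event_type, payload FROM events "
    "WHERE agent = ? ORDER BY timestamp",
    ("code-reviewer",),
)
for timestamp, event_type, payload in rows:
    print(timestamp, event_type, payload)
conn.close()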
Run on OpenAI today, switch to Anthropic tomorrow — change one line in your YAML. No vendor lock-in.
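For example, the model block from the config above only needs its two values changed. The Anthropic model name below is a placeholder; use whichever model id your account exposes.

model:
  provider: anthropic
  name: claude-sonnet-4-5   # placeholder model id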
Wire up cron schedules, file watchers, or webhooks. Your agents run unattended and notify you on completion.
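A trigger block might look roughly like the sketch below; the triggers key and the type names are illustrative guesses, not confirmed InitRunner schema.

triggers:
  - type: cron
    schedule: "0 7 * * *"   # hypothetical: run every morning at 07:00
  - type: file_watch        # hypothetical type name
    path: ./inbox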
Plug in filesystem access, HTTP calls, MCP servers, or any Python function. If you can write it, your agent can use it.
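Extending the tools list from the config above, a mixed toolbox could look like this sketch. Only github and filesystem appear verbatim on this page; the other type names and fields are assumptions for illustration.

tools:
  - type: filesystem
    root_path: ./workspace
  - type: http                          # assumed type name for HTTP calls
    allowed_domains: [api.example.com]
  - type: mcp                           # assumed type name for an MCP server
    command: "npx some-mcp-server"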
Enable autonomy and your agent plans multi-step tasks, executes each one, and adapts when something fails.
RAG, memory, reusable skills, an OpenAI-compatible API server, and a live dashboard — configure in YAML, no external services required.
Run initrunner serve and your agent becomes an OpenAI-compatible API — /v1/chat/completions with streaming, auth, and multi-turn conversations. Plug it into any OpenAI SDK, chat UI, or internal tool.
One command. OpenAI-compatible out of the box.
$ initrunner serve agent.yaml --port 8000
[serve] Agent: research-assistant
[serve] Endpoint: /v1/chat/completions
[serve] Auth: Bearer token ✓
[serve] Streaming: enabled ✓
[serve] Listening on http://0.0.0.0:8000
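Because the endpoint speaks the OpenAI wire format, any OpenAI client can point at it. Here is a minimal sketch with the official Python SDK, assuming the serve session above and treating the agent name as the model id (an assumption; the server may accept any model string).

# Minimal sketch: calling the locally served agent with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the serve endpoint shown above
    api_key="YOUR_BEARER_TOKEN",          # whatever token the server was configured with
)

response = client.chat.completions.create(
    model="research-assistant",  # assumption: agent name used as the model id
    messages=[{"role": "user", "content": "Summarize the latest findings."}],
)
print(response.choices[0].message.content)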
A terminal UI and web dashboard built for ops. Watch tool calls stream in real time, chat with running agents, and audit complete decision chains — every one queryable, no log-grepping needed.
Observe, audit, intervene — from your terminal or browser.



The guardrails block gives you per-run token caps, session budgets, and daily or lifetime daemon budgets — all in YAML. Agents stop automatically when they hit a limit and warn at 80%. No surprise bills from runaway loops, no manual kill switches. Set it once and forget it.
Per-run caps. Daily budgets. Automatic enforcement.
guardrails:
  max_tokens_per_run: 10000
  max_tool_calls: 15
  session_token_budget: 100000
  daemon_token_budget: 500000
  daemon_daily_token_budget: 100000
  # Agents stop at the limit. Warn at 80%.
  # No surprise bills from runaway loops.

Point at a folder of markdown, PDFs, or CSVs. InitRunner chunks, embeds, and indexes them automatically. Your agent gets search_documents() as a tool — semantic search over your own knowledge base, with source citations. No vector database to manage. No embedding pipeline to wire up.
Three lines of YAML. Semantic search over your own docs.
ingest:
  sources:
    - "./docs/**/*.md"
    - "./reports/*.pdf"
  chunking:
    strategy: paragraph
    chunk_size: 512

Session persistence picks up where you left off. Semantic memory lets agents remember() what worked and recall() it later — across sessions, across days. Your support agent remembers the customer's last issue. Your research agent builds on yesterday's findings.
No Redis. No external store. Just YAML.
memory:
  max_memories: 1000
  # Your agent gets these tools automatically:
  # → remember(content, category)
  # → recall(query, top_k)
  # → list_memories(category, limit)

Bundle tools and prompt instructions into a single SKILL.md file — YAML frontmatter for tools and requirements, Markdown body for the prompt. Reference the skill from any agent config and InitRunner auto-merges the tools, appends the prompts, and validates environment requirements before loading. No more duplicating tool blocks across twenty agent files.
One SKILL.md. Every agent gets it.
---
name: web-research
tools:
  - type: web_reader
  - type: filesystem
    root_path: ./output
requires:
  env: [SEARCH_API_KEY]
---
You are a web research specialist.
Search the web, extract key findings,
and save structured summaries to ./output.
Always cite your sources.
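Referencing the skill from an agent config might then be a single extra line. The skills key, the agent name, and the path layout below are illustrative assumptions; check the docs for the exact field.

kind: Agent
name: market-watcher                    # hypothetical agent
skills:
  - ./skills/web-research/SKILL.md      # assumed key and path layout
model:
  provider: openai
  name: gpt-5-mini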
Describe your agent in a YAML file — its model, role, tools, and memory. Or run initrunner init and answer three questions. Either way, you're done in under a minute.
Execute with initrunner run agent.yaml. Your agent loads, connects to its tools, and starts working. Every action is logged to a tamper-proof audit trail automatically.
Add a trigger — cron, file watcher, or webhook — and your agent runs itself. You get notified when it's done, not when it needs you.
Switch providers with a one-line config change. No code rewrites, no SDK migrations.