InitRunner

RAG in 5 Minutes

Get a document-search agent up and running in three commands.

Before you start: initrunner ingest needs an embedding model. The default is OpenAI text-embedding-3-small — set OPENAI_API_KEY to use it, or set embeddings.provider to switch providers (Google, Ollama, and more). No API keys? Jump to fully local setup.

The 3-Command Flow

initrunner setup --template rag   # scaffold a RAG-ready role file
initrunner ingest role.yaml       # embed and index your documents
initrunner run role.yaml          # chat with your knowledge base

What each command does

initrunner setup --template rag

Scaffolds a role YAML pre-configured with spec.ingest pointing at a ./docs/ directory, paragraph chunking, and search_documents usage instructions in the system prompt. A docs/ folder with a sample markdown file is created alongside the role file.

The scaffolded role file includes this embedding config by default:

spec:
  ingest:
    sources:
      - "./docs/**/*.md"
    embeddings:
      provider: openai
      model: text-embedding-3-small
      # api_key_env: OPENAI_API_KEY  # optional: override which env var holds the key

Change provider and model to switch embedding backends. See Providers for all options.

After the setup wizard finishes, it prints a reminder:

Next step: add your documents to ./docs/ then run:
  initrunner ingest role.yaml

initrunner ingest role.yaml

Reads every file matched by spec.ingest.sources, splits the text into chunks, generates embeddings, and stores everything in a local SQLite vector database (~/.initrunner/stores/<agent-name>.db). Re-running is safe — existing chunks are replaced.
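The ingest pipeline described above can be sketched in a few lines of Python. This is an illustrative toy, not InitRunner's actual internals: the `embed` function here is a hash-based stand-in for a real embedding model, and the table schema is invented for the example.

```python
import hashlib
import math
import sqlite3

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for a real embedding model: hash each word
    # into a fixed-size bag-of-words vector, then normalize.
    vec = [0.0] * dim
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk_paragraphs(text: str) -> list[str]:
    # Paragraph chunking: split on blank lines, drop empties.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def ingest(db: sqlite3.Connection, source: str, text: str) -> None:
    db.execute(
        "CREATE TABLE IF NOT EXISTS chunks (source TEXT, chunk TEXT, embedding TEXT)"
    )
    # Re-running replaces existing chunks for the same source file.
    db.execute("DELETE FROM chunks WHERE source = ?", (source,))
    for chunk in chunk_paragraphs(text):
        db.execute(
            "INSERT INTO chunks VALUES (?, ?, ?)",
            (source, chunk, ",".join(map(str, embed(chunk)))),
        )
    db.commit()

db = sqlite3.connect(":memory:")
ingest(db, "docs/intro.md", "InitRunner is a CLI.\n\nIt supports RAG.")
```

Because ingest deletes a source's old rows before inserting new ones, re-running it is idempotent, which mirrors the "existing chunks are replaced" behavior noted above.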

initrunner run role.yaml

Starts the agent. The search_documents tool is auto-registered. Ask any question and the agent will search your indexed documents before answering, citing the source files it used.
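Conceptually, the search step embeds the query and returns the stored chunks nearest to it by cosine similarity. A minimal sketch, with illustrative names and tiny hand-written vectors standing in for real embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def search_documents(query_vec, index, top_k=3):
    # index: (source, chunk_text, embedding) rows loaded from the vector store.
    ranked = sorted(index, key=lambda row: cosine(query_vec, row[2]), reverse=True)
    return ranked[:top_k]  # source (row[0]) is what the agent cites in its answer

index = [
    ("docs/setup.md", "Run initrunner setup to scaffold a role file.", [1.0, 0.0]),
    ("docs/ingest.md", "Run initrunner ingest to index documents.", [0.0, 1.0]),
]
hits = search_documents([0.9, 0.1], index, top_k=1)
```

The top-ranked rows, with their source paths, are what lets the agent ground its answer and cite the files it used.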

Embedding API Key

The embedding key is read from an environment variable. The default depends on your provider:

| Provider | Default env var | Notes |
|---|---|---|
| openai | OPENAI_API_KEY | |
| anthropic | OPENAI_API_KEY | Anthropic has no embeddings API; falls back to OpenAI by default. Set embeddings.provider to switch. |
| google | GOOGLE_API_KEY | |
| ollama | (none) | Runs locally |

Anthropic users: Anthropic has no embeddings API, so the default fallback is OpenAI. If you keep that default, set OPENAI_API_KEY (in your environment or ~/.initrunner/.env). To avoid needing an OpenAI key, set embeddings.provider: google or embeddings.provider: ollama instead.

Override the key name — if your key is stored under a different env var name, set api_key_env in the embedding config:

spec:
  ingest:
    embeddings:
      provider: openai
      model: text-embedding-3-small
      api_key_env: MY_EMBED_KEY   # read from MY_EMBED_KEY instead of OPENAI_API_KEY

Diagnose key issues with the doctor command:

initrunner doctor

The Embedding Providers section shows which keys are set and which are missing.

Fully Local — No API Keys

Swap both the LLM and the embedding model to Ollama for a completely local setup:

spec:
  model:
    provider: ollama
    name: llama3.2
  ingest:
    sources:
      - "./docs/**/*.md"
    embeddings:
      provider: ollama
      model: nomic-embed-text

Then run the same three commands, with no API keys required. If the models aren't available locally yet, pull them first with ollama pull llama3.2 and ollama pull nomic-embed-text.
