# RAG in 5 Minutes
Get a document-search agent up and running in three commands.
Before you start: `initrunner ingest` needs an embedding model. The default is OpenAI's `text-embedding-3-small`; set `OPENAI_API_KEY` to use it, or set `embeddings.provider` to switch providers (Google, Ollama, and more). No API keys? Jump to the fully local setup below.
## The 3-Command Flow
```shell
initrunner setup --template rag   # scaffold a RAG-ready role file
initrunner ingest role.yaml       # embed and index your documents
initrunner run role.yaml          # chat with your knowledge base
```

### What each command does

#### `initrunner setup --template rag`
Scaffolds a role YAML pre-configured with `spec.ingest` pointing at a `./docs/` directory, paragraph chunking, and `search_documents` usage instructions in the system prompt. A `docs/` folder with a sample markdown file is created alongside the role file.
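After scaffolding, the working directory looks roughly like this (names are illustrative; the exact sample file name may differ):

```
role.yaml    # the generated role file
docs/
  sample.md  # sample markdown document to get you started
```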
The scaffolded role file includes this embedding config by default:

```yaml
spec:
  ingest:
    sources:
      - "./docs/**/*.md"
    embeddings:
      provider: openai
      model: text-embedding-3-small
      # api_key_env: OPENAI_API_KEY  # optional: override which env var holds the key
```

Change `provider` and `model` to switch embedding backends. See Providers for all options.
After the setup wizard finishes, it prints a reminder:

```
Next step: add your documents to ./docs/ then run:
  initrunner ingest role.yaml
```

#### `initrunner ingest role.yaml`
Reads every file matched by `spec.ingest.sources`, splits the text into chunks, generates embeddings, and stores everything in a local SQLite vector database (`~/.initrunner/stores/<agent-name>.db`). Re-running is safe — existing chunks are replaced.
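Conceptually, ingestion is a chunk-embed-store loop. A minimal sketch in Python, assuming paragraph chunking and an embedding function passed in by the caller (the `chunks` table schema and all function names here are illustrative, not initrunner's actual internals):

```python
import sqlite3

def chunk_paragraphs(text: str) -> list[str]:
    # Paragraph chunking: split on blank lines, drop empty pieces.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def ingest(db_path: str, source: str, text: str, embed) -> None:
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS chunks "
        "(source TEXT, chunk TEXT, embedding BLOB)"
    )
    # Re-running is safe: existing chunks for this source are replaced.
    con.execute("DELETE FROM chunks WHERE source = ?", (source,))
    for chunk in chunk_paragraphs(text):
        con.execute(
            "INSERT INTO chunks VALUES (?, ?, ?)",
            (source, chunk, embed(chunk)),
        )
    con.commit()
    con.close()
```

Deleting a source's old rows before re-inserting is what makes re-ingestion idempotent: the index never accumulates stale copies of a document.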
#### `initrunner run role.yaml`

Starts the agent. The `search_documents` tool is auto-registered. Ask any question and the agent will search your indexed documents before answering, citing the source files it used.
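Under the hood, a document-search tool like this is a nearest-neighbor lookup over the stored chunk embeddings. A minimal sketch of that retrieval step in Python, using cosine similarity over in-memory vectors (the tuple layout and function names are illustrative, not initrunner's internals):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec: list[float], chunks, top_k: int = 3):
    # chunks: list of (source, text, embedding) tuples.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[2]), reverse=True)
    return ranked[:top_k]
```

The top-ranked chunk texts are then handed to the LLM as context, which is what lets the agent cite the source files its answer came from.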
## Embedding API Key
The embedding key is read from an environment variable. The default depends on your provider:
| Provider | Default env var | Notes |
|---|---|---|
| `openai` | `OPENAI_API_KEY` | |
| `anthropic` | `OPENAI_API_KEY` | No embeddings API; falls back to OpenAI by default. Set `embeddings.provider` to switch. |
| `google` | `GOOGLE_API_KEY` | |
| `ollama` | (none) | Runs locally |
Anthropic users: Anthropic has no embeddings API, so the default fallback is OpenAI. Set `OPENAI_API_KEY` (in your environment or `~/.initrunner/.env`) if you keep that default; to avoid needing an OpenAI key, set `embeddings.provider: google` or `embeddings.provider: ollama` instead.
Override the key name: if your key is stored under a different env var, set `api_key_env` in the embedding config:

```yaml
spec:
  ingest:
    embeddings:
      provider: openai
      model: text-embedding-3-small
      api_key_env: MY_EMBED_KEY  # read from MY_EMBED_KEY instead of OPENAI_API_KEY
```

Diagnose key issues with the doctor command:
```shell
initrunner doctor
```

The Embedding Providers section of the output shows which keys are set and which are missing.
## Fully Local — No API Keys
Swap both the LLM and the embedding model to Ollama for a completely local setup:
```yaml
spec:
  model:
    provider: ollama
    name: llama3.2
  ingest:
    sources:
      - "./docs/**/*.md"
    embeddings:
      provider: ollama
      model: nomic-embed-text
```

Then run the same three commands — no API keys required.
## Next Steps
- Ingestion reference — chunking strategies, embedding models, supported file formats
- RAG Patterns & Guide — common patterns, embedding model comparison, fully local RAG