# Memory
InitRunner's memory system gives agents two capabilities: short-term session persistence for resuming conversations, and long-term semantic memory for storing and recalling information across sessions.
Both are backed by a single SQLite database per agent using sqlite-vec for vector similarity search.
## Quick Start
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: memory-assistant
  description: A personal assistant that learns and remembers across sessions
spec:
  role: |
    You are a personal assistant with persistent memory. You learn from every
    conversation and use that knowledge to provide better, more personalized help
    over time.

    Memory guidelines:
    1. Proactively remember: whenever the user shares something worth retaining
       — a preference, a fact, a correction, a personal detail, an instruction —
       call remember() immediately. Choose a meaningful category based on content
       (e.g. "preference", "fact", "instruction", "personal", "project", "goal").
       Tell the user what you stored.
    2. Recall before answering: when a question could benefit from prior context,
       call recall() with a relevant query. Skip recall for purely generic or
       conversational messages that clearly don't need stored knowledge.
    3. Be transparent: briefly mention when you are recalling or storing
       information so the user understands how their knowledge base is being used.
    4. Use list_memories() when the user asks what you know about them or wants
       to review stored memories.

    You can help with anything: answering questions, brainstorming, tracking
    goals, managing preferences, or just chatting. The key differentiator is that
    you get better the more the user interacts with you.
  model:
    provider: openai
    name: gpt-4o-mini
    temperature: 0.1
  memory:
    max_sessions: 10
    max_memories: 1000
    max_resume_messages: 20
```

```bash
# Interactive session (auto-saves history)
initrunner run role.yaml -i
# Resume where you left off
initrunner run role.yaml -i --resume
# Manage memory
initrunner memory clear role.yaml
initrunner memory export role.yaml -o memories.json
```

## Session Walkthrough
Here's what memory looks like across two sessions. The agent stores information in session 1 and uses it in session 2.
Session 1 — the user shares preferences:
```
$ initrunner run memory-assistant.yaml -i
You: I prefer dark mode in all my apps and I use vim keybindings everywhere.
Agent: [calls remember("User prefers dark mode in all apps", "preference")]
[calls remember("User uses vim keybindings everywhere", "preference")]
Got it! I've saved two preferences:
- Dark mode in all apps
- Vim keybindings everywhere
I'll keep these in mind for future conversations.
You: I'm working on a project called Atlas — it's a Kubernetes operator in Go.
Agent: [calls remember("Working on project Atlas — a Kubernetes operator written in Go", "project")]
Noted! I've saved that you're working on Atlas, a Kubernetes operator
in Go. Feel free to ask me anything about it anytime.
```

Session 2 — resume and the agent recalls prior context:
```
$ initrunner run memory-assistant.yaml -i --resume
You: What editor settings would you recommend for me?
Agent: [calls recall("editor settings preferences")]
Based on what I know about you, here are my recommendations:
Since you prefer **dark mode** and **vim keybindings**, I'd suggest:
- VS Code with the Vim extension and a dark theme like "One Dark Pro"
- Or Neovim directly with a modern config (LazyVim is popular)
For your **Go/Kubernetes** work on Atlas, you'll also want:
- gopls for Go language server support
- The Go extension (or vim-go for Neovim)
```

The agent never asked "what do you like?" — it already knew from session 1. The `--resume` flag loads the conversation history, and `recall()` searches the semantic memory for relevant stored facts.
## Configuration
| Field | Type | Default | Description |
|---|---|---|---|
| `max_sessions` | int | `10` | Maximum sessions to keep (oldest pruned) |
| `max_memories` | int | `1000` | Maximum long-term memories to keep |
| `max_resume_messages` | int | `20` | Messages loaded on `--resume` |
| `embeddings.provider` | str | `""` | Embedding provider (empty = derives from `spec.model.provider`) |
| `embeddings.model` | str | `""` | Embedding model (empty = provider default) |
| `store_backend` | str | `"sqlite_vec"` | Memory store backend |
| `store_path` | str \| null | `null` | Custom database path (default: `~/.initrunner/memory/<agent-name>.db`) |
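For example, a fully specified `memory` block might look like the following; the fields come from the table above, but all values and the custom path are illustrative:

```yaml
spec:
  memory:
    max_sessions: 25          # keep more sessions before the oldest are pruned
    max_memories: 5000        # allow a larger long-term store
    max_resume_messages: 30   # load more history on --resume
    store_backend: sqlite_vec
    store_path: /data/agents/assistant.db   # illustrative custom location
```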
## Short-Term: Session Persistence
Session persistence saves REPL conversation history to SQLite after each turn, enabling the `--resume` flag.
- During an interactive session, the full message history is saved after every turn.
- Each session gets a unique ID (random 12-character hex).
- `--resume` loads the most recent session, capped at `max_resume_messages`.
- Active session history is trimmed to `max_resume_messages * 2` (default: 40) to prevent unbounded growth (see the sketch after this list).
- System prompts are stripped from saved sessions so the current `spec.role` is always used on resume.
- Old sessions beyond `max_sessions` are pruned automatically.
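The trimming and pruning rules are simple enough to sketch. The following is only an illustration of the policy described above, not InitRunner's actual implementation; the function names are invented:

```python
def trim_history(messages: list[dict], max_resume_messages: int = 20) -> list[dict]:
    # Keep the active session bounded at max_resume_messages * 2 entries,
    # dropping the oldest messages first.
    cap = max_resume_messages * 2
    return messages[-cap:]


def prune_sessions(session_ids: list[str], max_sessions: int = 10) -> list[str]:
    # Keep only the most recent max_sessions sessions (ids assumed oldest-first).
    return session_ids[-max_sessions:]
```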
## Long-Term: Semantic Memory
When `spec.memory` is configured, three tools are auto-registered:
### `remember(content, category)`
Stores information in long-term memory with a semantic embedding. Categories are sanitized to lowercase alphanumeric with underscores. After storing, memories beyond `max_memories` are pruned (oldest removed).
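A small sketch of what that sanitization rule implies; this is an assumption about the exact behavior, not InitRunner's code, and `sanitize_category` is a made-up name:

```python
import re


def sanitize_category(category: str) -> str:
    # Lowercase, then collapse runs of anything outside [a-z0-9] into "_".
    return re.sub(r"[^a-z0-9]+", "_", category.lower()).strip("_")


assert sanitize_category("Project Notes!") == "project_notes"
```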
### `recall(query, top_k)`
Searches long-term memory by semantic similarity. Returns the `top_k` most similar memories with category, similarity score, and timestamp.
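Under the hood, a `recall()` call amounts to a K-nearest-neighbor query against sqlite-vec. InitRunner's actual schema isn't documented here, so this is a generic sqlite-vec sketch; the table name, embedding dimension, and zeroed query vector are all placeholders:

```python
import sqlite3

import sqlite_vec  # pip install sqlite-vec

db = sqlite3.connect("memory-assistant.db")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)

# One vec0 row per memory; table name and dimension are illustrative.
db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS memory_vectors USING vec0(embedding float[1536])"
)

# In practice the query vector comes from the configured embedding model.
query_vec = [0.0] * 1536
rows = db.execute(
    "SELECT rowid, distance FROM memory_vectors "
    "WHERE embedding MATCH ? ORDER BY distance LIMIT 5",
    (sqlite_vec.serialize_float32(query_vec),),
).fetchall()
```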
### `list_memories(category, limit)`
Lists recent memories, optionally filtered by category.
## CLI Commands
### `memory clear`
```bash
initrunner memory clear role.yaml # clear all (prompts for confirmation)
initrunner memory clear role.yaml --force # skip confirmation
initrunner memory clear role.yaml --sessions-only # clear only sessions
initrunner memory clear role.yaml --memories-only # clear only long-term memories
```

### `memory export`

```bash
initrunner memory export role.yaml # exports to memories.json
initrunner memory export role.yaml -o my-export.json # custom output path
```
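The export format isn't specified in this document. Based on the fields the memory tools expose (content, category, timestamp), an exported entry plausibly looks something like this hypothetical record; check the actual output before relying on its shape:

```json
[
  {
    "content": "User prefers dark mode in all apps",
    "category": "preference",
    "created_at": "2025-01-15T10:24:03Z"
  }
]
```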
## Embedding Models

Memory uses the same embedding provider resolution as ingestion:
1. `memory.embeddings.model` — if set, used directly
2. `memory.embeddings.provider` — used to look up that provider's default model
3. `spec.model.provider` — falls back to the agent's model provider
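For example, to bypass the fallback chain and pin the embedding model explicitly (the model value shown is illustrative):

```yaml
spec:
  memory:
    embeddings:
      provider: openai
      model: text-embedding-3-large
```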
### Provider Defaults
| Provider | Default Embedding Model |
|---|---|
| `openai` | `openai:text-embedding-3-small` |
| `anthropic` | `openai:text-embedding-3-small` |
| `google` | `google:text-embedding-004` |
## Scaffold
```bash
initrunner init --name assistant --template memory
```