Import from PydanticAI

Already have a PydanticAI agent? InitRunner can convert it into a native role file automatically — mapping your model, system prompt, output type, and tools to InitRunner's YAML format. @agent.tool and @agent.tool_plain functions are extracted into a sidecar Python module with RunContext parameters stripped so they keep working without changes.

You can import via the dashboard (paste your code) or the CLI (point at a file).

Dashboard Import

1. Start a new agent and select Import

Open the dashboard, go to Agents → New Agent, type a name, and select Import under "Start From". Toggle the framework pill to PydanticAI, then paste your Python code into the source editor. Choose the model that will power the conversion (this is the builder model, not your agent's model — the agent's model is read from your code).

Paste your PydanticAI source code and select the PydanticAI framework pill

2. Review the converted agent

InitRunner parses your code and generates a complete role definition. Review the YAML — your model, system prompt, output schema, and tools are mapped automatically. If any PydanticAI features couldn't be converted (e.g. pydantic_graph, logfire, MCP servers), you'll see warnings at the top explaining what to configure manually.

Review the generated InitRunner agent YAML and any import warnings

3. Save

Click Save Agent to write the role file. Your imported agent is ready to run from the dashboard or CLI.

CLI Import

Point initrunner new at your PydanticAI Python file:

initrunner new --pydantic-ai weather_agent.py

InitRunner reads the file, extracts the agent configuration via AST parsing, and generates a role.yaml in the current directory. If your code has @agent.tool, @agent.tool_plain, or FunctionToolset functions, a sidecar module (e.g. role_tools.py) is created alongside.
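
The AST-based extraction can be illustrated with a minimal sketch using Python's stdlib ast module (a simplification; the real converter handles far more configuration, and the SOURCE string here is a hypothetical input):

```python
import ast

# Hypothetical input; mirrors the weather_agent.py example below.
SOURCE = '''
from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a weather assistant.",
)
'''

def extract_model(source):
    """Find the first Agent(...) call and split its model string into
    (provider, name) without importing or executing the file."""
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "Agent"
            and node.args
            and isinstance(node.args[0], ast.Constant)
        ):
            provider, _, name = node.args[0].value.partition(":")
            return provider, name
    return None

print(extract_model(SOURCE))  # ('openai', 'gpt-4o-mini')
```

Because the file is parsed rather than imported, the conversion works even when your PydanticAI dependencies aren't installed in the environment running the import.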

By default you enter an interactive refinement loop where you can tweak the generated YAML. Skip it with --no-refine:

# Import without interactive refinement
initrunner new --pydantic-ai weather_agent.py --no-refine

# Custom output path
initrunner new --pydantic-ai weather_agent.py --output weather-bot.yaml

# Use a specific builder model for conversion
initrunner new --pydantic-ai weather_agent.py --provider anthropic --model claude-sonnet-4-6

Flag                  Description
--pydantic-ai PATH    Path to the PydanticAI Python file
--output PATH         Output file path (default: role.yaml)
--provider TEXT       Builder model provider (auto-detected if omitted)
--model TEXT          Builder model name
--no-refine           Skip the interactive refinement loop
--force               Overwrite existing file without prompting

If the file contains multiple Agent() assignments, the converter imports the first one (in source order) and warns about skipped agents.
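
A sketch of that source-order discovery, again with stdlib ast (illustrative only; the file contents and variable names are hypothetical):

```python
import ast

# Hypothetical file with two agents.
MULTI_SOURCE = '''
from pydantic_ai import Agent

primary = Agent("openai:gpt-4o-mini")
helper = Agent("anthropic:claude-sonnet-4-6")
'''

def agent_assignments(source):
    """List Agent(...) assignments in source order; per the rule above,
    an importer would convert the first and warn about the rest."""
    names = []
    for node in ast.parse(source).body:
        if (
            isinstance(node, ast.Assign)
            and isinstance(node.value, ast.Call)
            and isinstance(node.value.func, ast.Name)
            and node.value.func.id == "Agent"
        ):
            names.append(node.targets[0].id)
    return names

names = agent_assignments(MULTI_SOURCE)
print(f"importing {names[0]!r}, skipping {names[1:]}")
```

If you want a different agent imported, move it above the others or split it into its own file before running the import.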

Before and After

Here's a concrete example. This PydanticAI agent has two tools, a structured output type, a system prompt, and model settings:

PydanticAI agent (weather_agent.py):

import httpx
from pydantic import BaseModel
from pydantic_ai import Agent, RunContext
from pydantic_ai.settings import ModelSettings


class WeatherReport(BaseModel):
    city: str
    temperature_f: float
    condition: str
    summary: str


agent = Agent(
    "openai:gpt-4o-mini",
    output_type=WeatherReport,
    system_prompt="You are a weather assistant. Use the provided tools to fetch real weather data, then return a structured report.",
    model_settings=ModelSettings(temperature=0.1),
)


@agent.tool
async def get_weather(ctx: RunContext[None], city: str) -> str:
    """Fetch current weather for a city from wttr.in."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"https://wttr.in/{city}?format=j1", timeout=10)
        resp.raise_for_status()
        data = resp.json()
        current = data["current_condition"][0]
        return (
            f"City: {city}, "
            f"Temp: {current['temp_F']}F, "
            f"Condition: {current['weatherDesc'][0]['value']}"
        )


@agent.tool_plain
def fahrenheit_to_celsius(temp_f: float) -> str:
    """Convert Fahrenheit to Celsius."""
    celsius = (temp_f - 32) * 5 / 9
    return f"{temp_f}F = {celsius:.1f}C"

Run the import:

initrunner new --pydantic-ai weather_agent.py --output weather-bot.yaml --no-refine

Generated weather-bot.yaml:

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: weather-assistant
  spec_version: 2
spec:
  role: >-
    You are a weather assistant. Use the provided tools to fetch real weather
    data, then return a structured report.
  model:
    provider: openai
    name: gpt-4o-mini
  output:
    type: json_schema
    schema:
      type: object
      additionalProperties: false
      properties:
        city:
          type: string
        temperature_f:
          type: number
        condition:
          type: string
        summary:
          type: string
      required:
        - city
        - temperature_f
        - condition
        - summary
  tools:
    - type: custom
      module: weather_bot_tools

Generated weather_bot_tools.py:

"""Custom tools extracted from PydanticAI agent."""

import httpx
from pydantic import BaseModel


async def get_weather(city: str) -> str:
    """Fetch current weather for a city from wttr.in."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"https://wttr.in/{city}?format=j1", timeout=10)
        resp.raise_for_status()
        data = resp.json()
        current = data["current_condition"][0]
        return (
            f"City: {city}, "
            f"Temp: {current['temp_F']}F, "
            f"Condition: {current['weatherDesc'][0]['value']}"
        )


def fahrenheit_to_celsius(temp_f: float) -> str:
    """Convert Fahrenheit to Celsius."""
    celsius = (temp_f - 32) * 5 / 9
    return f"{temp_f}F = {celsius:.1f}C"

What changed:

  • Agent("openai:gpt-4o-mini") became spec.model: {provider: openai, name: gpt-4o-mini}
  • system_prompt= became spec.role
  • ModelSettings(temperature=0.1) became spec.model.temperature (omitted since 0.1 is the default)
  • output_type=WeatherReport became spec.output with the full JSON schema
  • @agent.tool and @agent.tool_plain decorators were stripped
  • ctx: RunContext[None] was removed from the async tool signature
  • pydantic_ai imports were filtered out; httpx and pydantic imports were kept
  • The sidecar module name was derived from the output YAML filename

What Gets Converted

PydanticAI → InitRunner

Agent("openai:gpt-5") → spec.model: {provider: openai, name: gpt-5}
Agent(OpenAIModel("gpt-5")) → spec.model: {provider: openai, name: gpt-5}
system_prompt="..." → spec.role
instructions="..." → spec.role (combined with system_prompt)
@agent.system_prompt decorator → spec.role (static return extracted)
@agent.instructions decorator → spec.role (static return extracted)
ModelSettings(temperature=0.7) → spec.model.temperature: 0.7
ModelSettings(max_tokens=4096) → spec.model.max_tokens: 4096
output_type=MySchema → spec.output: {type: json_schema}
output_type=NativeOutput(MySchema) → spec.output: {type: json_schema}
@agent.tool / @agent.tool_plain → type: custom + sidecar module
FunctionToolset tools → type: custom + sidecar module
tools=[func] kwarg → type: custom + sidecar module
UsageLimits(request_limit=10) → spec.guardrails.max_request_limit: 10
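
Combining a few rows from the table, an agent configured with ModelSettings(temperature=0.7, max_tokens=4096) and UsageLimits(request_limit=10) would land in YAML roughly like this (provider and model name are placeholders):

```yaml
spec:
  model:
    provider: openai
    name: gpt-5
    temperature: 0.7
    max_tokens: 4096
  guardrails:
    max_request_limit: 10
```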

Tool Extraction and RunContext

PydanticAI tools often take a RunContext[Deps] first parameter for dependency injection. InitRunner manages tool context differently, so the converter:

  1. Strips the RunContext parameter from the function signature
  2. Checks if the parameter name is referenced in the body — if ctx.deps or similar is used, it inserts a # TODO comment and sets a warning

Tools that only use RunContext for typing (not in the body) convert cleanly. Tools that depend on ctx.deps need manual adjustment after import.
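
The two steps can be sketched with stdlib ast (a simplification: the tool sources here are hypothetical, and the real converter inserts a # TODO comment rather than just returning a flag):

```python
import ast

# Hypothetical tool sources. RunContext and Deps need not be importable
# because the code is only parsed, never executed.
CLEAN_TOOL = '''
async def get_weather(ctx: RunContext[None], city: str) -> str:
    return city
'''

DEPS_TOOL = '''
async def lookup(ctx: RunContext[Deps], key: str) -> str:
    return ctx.deps.db[key]
'''

def strip_run_context(source):
    """Drop the leading context parameter and report whether the body
    still references it (in which case manual adjustment is needed)."""
    tree = ast.parse(source)
    fn = tree.body[0]
    ctx_name = fn.args.args[0].arg
    fn.args.args = fn.args.args[1:]   # step 1: strip the parameter
    body_uses_ctx = any(              # step 2: scan for leftover references
        isinstance(n, ast.Name) and n.id == ctx_name
        for n in ast.walk(fn)
    )
    return ast.unparse(tree), body_uses_ctx
```

Run on the two sources above, get_weather converts cleanly (no ctx references remain after stripping), while lookup is flagged because ctx.deps survives in the body.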

Supported Model Classes

The converter recognizes these PydanticAI model classes and maps them to InitRunner providers:

Model Class → Provider

OpenAIModel, OpenAIChatModel, OpenAIResponsesModel → openai
AnthropicModel → anthropic
GeminiModel, GoogleModel → google
GroqModel → groq
MistralModel → mistral
BedrockConverseModel → bedrock
CohereModel → cohere
XAIModel → xai
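
As a sanity check, the table can be expressed as a simple lookup (a hedged sketch: provider_for is illustrative, not InitRunner's API, and its shortcut for the "provider:name" string form assumes no colon appears inside a class-form expression):

```python
# Lookup mirroring the table above; the converter's internals may differ.
MODEL_CLASS_TO_PROVIDER = {
    "OpenAIModel": "openai",
    "OpenAIChatModel": "openai",
    "OpenAIResponsesModel": "openai",
    "AnthropicModel": "anthropic",
    "GeminiModel": "google",
    "GoogleModel": "google",
    "GroqModel": "groq",
    "MistralModel": "mistral",
    "BedrockConverseModel": "bedrock",
    "CohereModel": "cohere",
    "XAIModel": "xai",
}

def provider_for(model_expr):
    """Map either form -- "openai:gpt-5" or OpenAIModel("gpt-5") --
    to an InitRunner provider name (None if unrecognized)."""
    if ":" in model_expr:
        return model_expr.split(":", 1)[0]
    return MODEL_CLASS_TO_PROVIDER.get(model_expr.split("(", 1)[0])

print(provider_for("openai:gpt-5"))              # openai
print(provider_for('AnthropicModel("claude")'))  # anthropic
```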

What to Configure Manually

Some PydanticAI features don't have a direct 1:1 mapping and need manual configuration after import. The importer warns you about these:

PydanticAI Feature → InitRunner Equivalent (Guide)

pydantic_graph state machines → flow.yaml multi-agent orchestration (Flow)
logfire / instrument= → spec.observability (Observability)
MCPServerStdio / MCPServerHTTP → type: mcp in tools (Tools)
builtin_tools=[...] → add equivalent InitRunner tools manually (Tools)
@agent.output_validator → not portable; validate in tool logic (Structured Output)
TextOutput / StructuredDict output types → not directly portable (Configuration)
Dynamic @agent.instructions with RunContext → describe logic in spec.role (Configuration)
