
Building AI Agents with Pydantic AI

Pydantic AI is a Python agent framework for building production-grade Generative AI applications. This skill provides patterns, architecture guidance, and tested code examples for building applications with Pydantic AI.

When to Use This Skill

Invoke this skill when:

  • User asks to build an AI agent, create an LLM-powered app, or mentions Pydantic AI
  • User wants to add tools, capabilities (thinking, web search), or structured output to an agent
  • User asks to define agents from YAML/JSON specs or use template strings
  • User wants to stream agent events, delegate between agents, or test agent behavior
  • Code imports pydantic_ai or references Pydantic AI classes (Agent, RunContext, Tool)
  • User asks about hooks, lifecycle interception, or agent observability with Logfire

Do not use this skill for:

  • The Pydantic validation library alone (pydantic/BaseModel without agents)
  • Other AI frameworks (LangChain, LlamaIndex, CrewAI, AutoGen)
  • General Python development unrelated to AI agents

Quick-Start Patterns

Create a Basic Agent

from pydantic_ai import Agent

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    instructions='Be concise, reply with one sentence.',
)

result = agent.run_sync('Where does "hello world" come from?')
print(result.output)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""

Add Tools to an Agent

import random

from pydantic_ai import Agent, RunContext

agent = Agent(
    'google-gla:gemini-3-flash-preview',
    deps_type=str,
    instructions=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)


@agent.tool_plain
def roll_dice() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


dice_result = agent.run_sync('My guess is 4', deps='Anne')
print(dice_result.output)
#> Congratulations Anne, you guessed correctly! You're a winner!

Structured Output with Pydantic Models

from pydantic import BaseModel

from pydantic_ai import Agent


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent('google-gla:gemini-3-flash-preview', output_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> RunUsage(input_tokens=57, output_tokens=8, requests=1)

Dependency Injection

from datetime import date

from pydantic_ai import Agent, RunContext

agent = Agent(
    'openai:gpt-5.2',
    deps_type=str,
    instructions="Use the customer's name while replying to them.",
)


@agent.instructions
def add_the_users_name(ctx: RunContext[str]) -> str:
    return f"The user's name is {ctx.deps}."


@agent.instructions
def add_the_date() -> str:
    return f'The date is {date.today()}.'


result = agent.run_sync('What is the date?', deps='Frank')
print(result.output)
#> Hello Frank, the date today is 2032-01-02.

Testing with TestModel

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

my_agent = Agent('openai:gpt-5.2', instructions='...')


async def test_my_agent():
    """Unit test for my_agent, to be run by pytest."""
    m = TestModel()
    with my_agent.override(model=m):
        result = await my_agent.run('Testing my agent...')
        assert result.output == 'success (no tool calls)'
    assert m.last_model_request_parameters.function_tools == []

Use Capabilities

Capabilities are reusable, composable units of agent behavior that bundle tools, hooks, instructions, and model settings.

from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking, WebSearch

agent = Agent(
    'anthropic:claude-opus-4-6',
    instructions='You are a research assistant. Be thorough and cite sources.',
    capabilities=[
        Thinking(effort='high'),
        WebSearch(),
    ],
)

Add Lifecycle Hooks

Use Hooks to intercept model requests, tool calls, and runs with decorators; no subclassing is needed.

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities.hooks import Hooks
from pydantic_ai.models import ModelRequestContext

hooks = Hooks()


@hooks.on.before_model_request
async def log_request(ctx: RunContext[None], request_context: ModelRequestContext) -> ModelRequestContext:
    print(f'Sending {len(request_context.messages)} messages')
    return request_context


agent = Agent('openai:gpt-5.2', capabilities=[hooks])

Define Agent from YAML Spec

Use Agent.from_file to load agents from YAML or JSON; no Python agent construction code is needed.

from pydantic_ai import Agent

# agent.yaml:
# model: anthropic:claude-opus-4-6
# instructions: You are a helpful research assistant.
# capabilities:
#   - WebSearch
#   - Thinking:
#       effort: high

agent = Agent.from_file('agent.yaml')

Task Routing Table

I want to... → Documentation
Create or configure agents → Agents
Bundle reusable behavior (tools, hooks, instructions) → Capabilities
Intercept model requests, tool calls, or runs → Hooks
Define agents in YAML/JSON without Python code → Agent Specs
Use template strings in agent instructions → Template Strings
Let my agent call external APIs or functions → Tools
Organize or restrict which tools an agent can use → Toolsets
Give my agent web search with automatic provider fallback → WebSearch Capability
Give my agent URL fetching with automatic provider fallback → WebFetch Capability
Give my agent web search or code execution (builtin tools) → Built-in Tools
Search with DuckDuckGo/Tavily/Exa → Common Tools
Ensure my agent returns data in a specific format → Structured Output
Pass database connections, API clients, or config to tools → Dependencies
Access usage stats, message history, or retry count in tools → RunContext
Choose or configure models → Models
Automatically switch to backup model when primary fails → Fallback Model
Show real-time progress as my agent works → Streaming Events and Final Output
Work with messages and multimedia → Message History
Reduce token costs by trimming or filtering conversation history → Processing Message History
Keep long conversations manageable without losing context → Summarize Old Messages
Use MCP servers → MCP
Build multi-step graphs → Graph
Debug a failed agent run or see what went wrong → Model Errors
Make my agent resilient to temporary failures → Retries
Understand why my agent made specific decisions → Using Logfire
Write deterministic tests for my agent → Unit testing with TestModel
Enable thinking/reasoning across any provider → Thinking · Thinking Capability
Systematically verify my agent works correctly → Evals
Use embeddings for RAG → Embeddings
Use durable execution → Durable Execution
Have one agent delegate tasks to another → Agent Delegation
Route requests to different agents based on intent → Programmatic Agent Hand-off
Require tool approval (human-in-the-loop) → Deferred Tools
Use images, audio, video, or documents → Input
Use advanced tool features → Advanced Tools
Validate or require approval before tool execution → Advanced Tools
Call the model without using an agent → Direct API
Expose agents as HTTP servers (A2A) → A2A
Handle network errors and rate limiting automatically → Retries
Use LangChain or ACI.dev tools → Third-Party Tools
Publish reusable agent extensions as packages → Extensibility
Build custom toolsets, models, or agents → Extensibility
Debug common issues → Troubleshooting
Migrate from deprecated APIs → Changelog
See advanced real-world examples → Examples
Look up an import path → API Reference

Architecture and Decisions

Load the Architecture and Decision Guide for detailed decision trees, comparison tables, and an architecture overview:

Topic → What it covers
Decision Trees → Tool registration, output modes, multi-agent patterns, capabilities, testing approaches, extensibility
Comparison Tables → Output modes, model provider prefixes, tool decorators, built-in capabilities, agent methods
Architecture Overview → Execution flow, generic types, construction patterns, lifecycle hooks, model string format

Quick reference — model string format: "provider:model-name" (e.g., "openai:gpt-5.2", "anthropic:claude-sonnet-4-6", "google-gla:gemini-3-pro-preview")

Quick reference — key agent methods: run(), run_sync(), run_stream(), run_stream_sync(), run_stream_events(), iter()

Key Practices

  • Python 3.10+ compatibility required
  • Observability: Pydantic AI has first-class integration with Logfire for tracing agent runs, tool calls, and model requests. Add it with logfire.instrument_pydantic_ai(). For deeper HTTP-level visibility, logfire.instrument_httpx(capture_all=True) captures the exact payloads sent to model providers.
  • Testing: Use TestModel for deterministic tests, FunctionModel for custom logic
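The Logfire setup mentioned above takes only a few lines. A setup sketch; `send_to_logfire='if-token-present'` keeps it runnable without a Logfire token, and instrument_httpx requires the httpx instrumentation extra:

```python
import logfire

from pydantic_ai import Agent

# Only export traces when a Logfire token is configured.
logfire.configure(send_to_logfire='if-token-present')

# Trace agent runs, tool calls, and model requests.
logfire.instrument_pydantic_ai()

# Optional: capture the exact HTTP payloads sent to model providers.
logfire.instrument_httpx(capture_all=True)

agent = Agent('openai:gpt-5.2', instructions='Be concise.')
```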

Common Gotchas

These are mistakes agents commonly make with Pydantic AI. Getting them wrong produces silent failures or confusing errors.

  • @agent.tool requires RunContext as first param; @agent.tool_plain must not have it. Mixing these up causes runtime errors. Use tool_plain when you don't need deps, usage, or messages.
  • Model strings need the provider prefix: 'openai:gpt-5.2' not 'gpt-5.2'. Without the prefix, Pydantic AI can't resolve the provider.
  • TestModel requires agent.override(): Don't set agent.model directly. Always use the context manager: with agent.override(model=TestModel()):.
  • str in output_type allows plain text to end the run: If your union includes str (or no output_type is set), the model can return plain text instead of structured output. Omit str from the union to force tool-based output.
  • Hook decorator names on .on don't repeat on_: Use hooks.on.run_error and hooks.on.model_request_error — not hooks.on.on_run_error.
  • history_processors is plural: The Agent parameter is history_processors=[...], not history_processor=.

Common Tasks

Load the Common Tasks Reference for detailed implementation guidance with code examples:

Task → Section
Add capabilities (Thinking, WebSearch, etc.) → Add Capabilities to an Agent
Intercept model requests and tool calls → Intercept Agent Lifecycle with Hooks
Define agents from YAML/JSON config files → Define Agents Declaratively with Specs
Enable thinking/reasoning across providers → Enable Thinking Across Providers
Trim or filter conversation history → Manage Context Size
Stream events and show real-time progress → Show Real-Time Progress
Auto-switch providers on failure → Handle Provider Failures
Write deterministic tests → Test Agent Behavior
Delegate tasks between agents → Coordinate Multiple Agents
Instrument with Logfire for debugging → Debug and Validate Agent Behavior

Repository: pydantic/skills