pydantic-ai
The integration provides automatic retries for LLM calls, full observability of agent decisions,
and durable execution semantics that make workflows idempotent and rerunnable.
The Scenario: AI Data Analyst
You need to analyze datasets programmatically, but writing custom analysis code for each dataset is time-consuming. Instead, you’ll build an AI agent that:

- Understands your dataset structure
- Decides which analyses are most valuable
- Uses Python tools to calculate statistics and detect anomalies
- Generates actionable insights
This example demonstrates:

- `PrefectAgent` – Wraps pydantic-ai agents for durable execution
- Agent Tools – Python functions the AI can call, automatically wrapped as Prefect tasks
- `TaskConfig` – Custom retry policies and timeouts for AI operations
- Durable Execution – Automatic idempotency and failure recovery

Setup
Setup
Install dependencies (if not already installed):

Agent Tools
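The tool names below (`get_column_info`, `calculate_statistics`, `detect_anomalies`) come from this example, but their signatures and the list-of-dicts data format are assumptions in this sketch; the functions are plain Python because Prefect wraps each tool invocation as a task automatically.

```python
import statistics

def get_column_info(rows: list[dict]) -> dict:
    """Describe the dataset's columns and row count."""
    if not rows:
        return {"row_count": 0, "columns": []}
    return {
        "row_count": len(rows),
        "columns": [
            {"name": col, "type": type(rows[0][col]).__name__}
            for col in rows[0]
        ],
    }

def calculate_statistics(rows: list[dict], column: str) -> dict:
    """Compute summary statistics for a numeric column."""
    values = [row[column] for row in rows]
    return {
        "mean": statistics.fmean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

def detect_anomalies(rows: list[dict], column: str, z_threshold: float = 3.0) -> list[dict]:
    """Flag rows whose value is more than z_threshold standard deviations from the mean."""
    values = [row[column] for row in rows]
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values) if len(values) > 1 else 0.0
    if stdev == 0.0:
        return []
    return [row for row in rows if abs(row[column] - mean) / stdev > z_threshold]
```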
These functions are “tools” that the AI agent can call to analyze data. Prefect automatically wraps each tool execution as a task for observability and retries.

Analysis Results Model
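One way the structured output could be modeled (a sketch; the field names here are assumptions, not the example's actual schema):

```python
from pydantic import BaseModel, Field

class Insight(BaseModel):
    title: str
    detail: str
    severity: str = Field(description="e.g. 'info', 'warning', 'critical'")

class AnalysisResults(BaseModel):
    dataset_summary: str
    anomaly_count: int
    insights: list[Insight]
```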
Structured output ensures the AI returns consistent, parseable results.

Creating the AI Agent
We configure the agent with tools and wrap it with `PrefectAgent` for durability.

Sample Dataset Generator
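A minimal generator sketch; the column names, value ranges, and anomaly-injection rate are assumptions made for illustration:

```python
import random

def generate_sales_dataset(days: int = 30, seed: int = 42) -> list[dict]:
    """Generate daily sales rows with occasional injected anomalies."""
    rng = random.Random(seed)  # seeded for reproducible demo runs
    rows = []
    for day in range(1, days + 1):
        amount = rng.gauss(1000.0, 100.0)
        if rng.random() < 0.05:  # ~5% of days get an anomalous spike
            amount *= 5
        rows.append({
            "day": day,
            "sales": round(amount, 2),
            "region": rng.choice(["north", "south", "east", "west"]),
        })
    return rows
```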
Create a realistic sales dataset for demonstration.

Main Analysis Flow
Orchestrate the entire AI analysis workflow with Prefect.

Serve the Flow
To get full durable execution with automatic idempotency, serve the flow to create a deployment. Deployed flows enable Prefect’s transactional semantics for agent operations.

Triggering Flow Runs
Once served, trigger runs via the Prefect UI:

- Navigate to http://localhost:4200
- Go to Deployments → “ai-data-analyst-deployment”
- Click “Run” → “Quick Run”
Local Testing
For quick local testing, you can call the flow function directly without creating a deployment.

What Just Happened?
When you serve and trigger this flow, Prefect and pydantic-ai work together to create a resilient AI pipeline:

- Deployment Creation – `serve()` creates a deployment and starts a worker to execute flow runs
- Durable AI Execution – The `PrefectAgent` wrapper makes all AI operations retryable:
  - LLM calls retry up to 3 times with exponential backoff (1s, 2s, 4s)
  - Tool calls retry up to 2 times
  - All operations respect a 60s timeout
- Tool Observability – Each time the AI calls a tool (`get_column_info`, `calculate_statistics`, `detect_anomalies`), the call is run as a Prefect task
- Structured Results – Pydantic validates the AI’s output, ensuring it matches the expected schema
- Automatic Idempotency – When a deployed flow run is retried, Prefect’s transactional semantics ensure that completed tasks are skipped and only failed operations are re-executed. This prevents duplicate API calls and wasted compute.
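The skip-completed behavior can be pictured with a plain-Python caching analogy (a deliberate simplification, not Prefect's implementation): completed work is keyed by its inputs, so a retried run reuses prior results instead of re-executing.

```python
import functools

calls = []

@functools.cache
def completed_task(step: str) -> str:
    calls.append(step)  # the real work runs only on the first call per input
    return f"{step}: done"

# First "run": every task executes.
first = [completed_task(s) for s in ("extract", "analyze", "report")]
# Retried "run": results come from the cache; no task re-executes.
second = [completed_task(s) for s in ("extract", "analyze", "report")]
```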
Key Takeaways
- Deploy for Durability – Use `flow.serve()` or `flow.deploy()` to unlock automatic idempotency and transactional semantics
- Retry Intelligence – Failed flow runs can be retried from the UI, skipping already-completed tasks
- Tool Observability – Every AI decision and tool call is tracked, logged, and independently retryable
- Zero Boilerplate – Just wrap your pydantic-ai agent with `PrefectAgent`
- Customizable Policies – Fine-tune retries, timeouts, and error handling per operation type
To run the example:

- Set your OpenAI API key: `export OPENAI_API_KEY='your-key'`
- Start the Prefect server: `prefect server start`
- Serve the flow: `uv run -s examples/ai_data_analyst_with_pydantic_ai.py`
- Trigger a run from the UI (http://localhost:4200) or CLI
- Watch all AI operations tracked in real time