pydantic-ai-agents

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • Data Exposure & Exfiltration (SAFE): The skill follows secure patterns for handling sensitive information. It retrieves API keys from environment variables using os.getenv rather than hardcoding them. Network activity is confined to standard API requests to example documentation domains.
  • Unverifiable Dependencies & Remote Code Execution (SAFE): The code relies on reputable, standard libraries such as pydantic-ai, httpx, and logfire. There are no instances of dynamic code execution (e.g., eval, exec) or suspicious remote script downloads.
  • Indirect Prompt Injection (SAFE): Although the agent pattern ingests data from external sources (e.g., transaction history), the skill explicitly demonstrates Pydantic models and validators that sanitize and enforce the structure of agent outputs, a primary defense against indirect injection attacks.
  • Persistence Mechanisms (SAFE): No code was found that attempts to modify system startup files, shell profiles, or create scheduled tasks.
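The two patterns the analysis credits — environment-based secret handling and validator-enforced output structure — can be sketched roughly as follows. This is an illustrative reconstruction, not code from the audited skill: the variable name `EXAMPLE_API_KEY`, the `TransactionSummary` model, and its fields are all assumptions, and it uses plain pydantic v2 rather than pydantic-ai so it runs standalone.

```python
import os

from pydantic import BaseModel, field_validator

# Secret handling: read the key from the environment via os.getenv rather
# than hardcoding it. Returns None if unset; nothing sensitive is committed.
# (EXAMPLE_API_KEY is a hypothetical variable name for illustration.)
API_KEY = os.getenv("EXAMPLE_API_KEY")


class TransactionSummary(BaseModel):
    """Hypothetical structured output the agent is forced to conform to."""

    account_id: str
    amount_cents: int

    @field_validator("account_id")
    @classmethod
    def account_id_is_alphanumeric(cls, value: str) -> str:
        # Reject values that smuggle in markup or embedded instructions,
        # a basic defense against indirect prompt injection via
        # externally sourced data such as transaction history.
        if not value.isalnum():
            raise ValueError("account_id must be alphanumeric")
        return value


# Well-formed output parses; malformed or injected output raises a
# pydantic ValidationError instead of flowing onward unchecked.
summary = TransactionSummary(account_id="acct42", amount_cents=1999)
```

Because the agent's output must parse into the model before anything downstream consumes it, free-text payloads like `"acct42; ignore previous instructions"` fail validation at the boundary rather than reaching later tool calls.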
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:38 PM