pydantic-ai-dependency-injection
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Prompt Injection] (SAFE): No instructions attempting to override agent behavior or bypass safety filters were detected.
- [Data Exposure & Exfiltration] (SAFE): The code snippets use placeholder values (e.g., 'secret') for API keys. No actual credentials or exfiltration logic were found.
- [Remote Code Execution] (SAFE): No evidence of unauthorized downloads, piped shell commands, or dynamic execution of untrusted code.
- [Indirect Prompt Injection] (LOW): The skill demonstrates patterns where data retrieved from external sources (like a database) is interpolated into system prompts and instructions (e.g.,
add_user_context). While this is a standard design pattern for this library, it creates an ingestion surface where attacker-controlled data could influence the agent's behavior if the underlying data source is compromised. - Ingestion points:
ctx.deps.db.get_user(ctx.deps.user_id)inSKILL.md. - Boundary markers: None explicitly shown in templates.
- Capability inventory: File system access and network operations are delegated to standard library clients (httpx).
- Sanitization: Not demonstrated in the templates, as they focus on architectural patterns.
Audit Metadata