pydantic-ai-model-integration
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- SAFE (SAFE): The skill consists of instructional text and Python code snippets for a known framework. No executable malicious scripts or binary components are present.
- CREDENTIALS_UNSAFE (SAFE): The code snippets use placeholders such as 'your-key' for API keys, and the documentation recommends loading keys from environment variables (e.g., ANTHROPIC_API_KEY), which is safe practice.
- Indirect Prompt Injection (LOW):
  1. Ingestion points: user inputs are passed to agent.run() and agent.run_stream() in SKILL.md.
  2. Boundary markers: none present in the simplified examples.
  3. Capability inventory: no tool definitions or dangerous subprocess/network operations are included in the provided snippets.
  4. Sanitization: input sanitization is not demonstrated in these basic configuration examples.
  This surface area is normal for an LLM integration skill.
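The credential practice flagged as safe above can be sketched as follows. This is a minimal illustration, not code from the audited skill: the helper name `get_anthropic_key` is hypothetical, and the commented agent usage assumes the pydantic-ai `Agent` API with a model string that may differ from the skill's actual configuration.

```python
import os

def get_anthropic_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    This mirrors the pattern the audit credits as safe: the source
    contains only a placeholder, and the real key is supplied via
    ANTHROPIC_API_KEY at runtime.
    """
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTHROPIC_API_KEY is not set; export it before running the agent"
        )
    return key

# Hypothetical downstream usage (pydantic-ai reads the env var itself):
#   from pydantic_ai import Agent
#   agent = Agent("anthropic:claude-sonnet-4-0")  # model name is an assumption
#   result = agent.run_sync(user_input)           # ingestion point noted in the audit
```

Because pydantic-ai's Anthropic client picks the key up from the environment on its own, the helper is only needed for an early, explicit failure when the variable is missing.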
Audit Metadata