cost-optimized-llm
Pass
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: SAFE
PROMPT_INJECTION
Full Analysis
- Indirect Prompt Injection (LOW): The skill accepts external user input via the 'prompt' variable and interpolates it directly into downstream LLM requests (Anthropic, OpenRouter, Google) without sanitization.
- Ingestion points: 'prompt' parameter in 'smart_complete', 'call_openrouter', 'call_anthropic', and 'model.generate_content' calls.
- Boundary markers: Absent. The prompt is passed directly as the message content.
- Capability inventory: Network requests to external LLM providers (Anthropic, OpenRouter, Google) and local file writing to track costs.
- Sanitization: Absent. No escaping or validation is performed on the user-provided prompt before routing.
- Data Exposure (LOW): The skill writes usage statistics to a hardcoded path, '~/.claude/llm_costs.jsonl'. While the logged content (token counts and model names) is not highly sensitive, hardcoded file writes outside the skill's own directory are a minor security concern.
Audit Metadata