llm-gateway-routing

Status: Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Findings: EXTERNAL_DOWNLOADS, PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
  • [EXTERNAL_DOWNLOADS] (LOW): The skill references external third-party packages and Docker images not on the trusted list.
      • Evidence: installation commands for 'litellm' in 'references/litellm-guide.md'.
      • Evidence: the Docker image 'ghcr.io/berriai/litellm:main-latest' referenced in 'references/litellm-guide.md'.
      • Context: while these are standard tools for LLM routing, they are external dependencies maintained by untrusted organizations.
  • [PROMPT_INJECTION] (LOW): The skill's templates ingest and process output from external LLM providers, creating a surface for indirect prompt injection.
      • Ingestion points: the 'LLMResponse' structure in 'templates/fallback-chain.ts', which holds content returned by API calls.
      • Boundary markers: the code templates contain no boundary markers or 'ignore' instructions to delimit untrusted LLM output.
      • Capability inventory: the templates can make network requests (fetch), which injected instructions could influence.
      • Sanitization: the templates do not sanitize or validate the 'content' field of LLM responses.
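To illustrate the missing mitigations, the sketch below shows one way boundary markers and basic sanitization could be applied to LLM output before it is processed further. This is a hypothetical example: the `LLMResponse` field names and the marker string are assumptions, not the skill's actual code from 'templates/fallback-chain.ts'.

```typescript
// Hypothetical sketch: delimit and sanitize untrusted LLM output.
// The LLMResponse shape is assumed from the audit's description of
// templates/fallback-chain.ts; field names are illustrative.
interface LLMResponse {
  content: string;
  model: string;
}

const BOUNDARY = "<<<UNTRUSTED_LLM_OUTPUT>>>";

// Strip control characters and any embedded copy of the boundary
// marker so the untrusted content cannot spoof the delimiters.
function sanitizeContent(raw: string): string {
  return raw
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, "")
    .split(BOUNDARY)
    .join("");
}

// Wrap the sanitized content so downstream prompts can instruct the
// model to treat everything between the markers as data, not commands.
function wrapUntrusted(resp: LLMResponse): string {
  const clean = sanitizeContent(resp.content);
  return `${BOUNDARY}\n${clean}\n${BOUNDARY}`;
}
```

Delimiting alone does not prevent injection, but combined with an explicit system-prompt instruction to ignore directives inside the markers it reduces the attack surface the finding describes.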
  • [DATA_EXFILTRATION] (LOW): The skill's configuration routes traffic through third-party LLM proxies (OpenRouter, Helicone).
      • Evidence: baseURL set to 'https://openrouter.ai/api/v1' and 'https://oai.helicone.ai/v1' in multiple configuration files.
      • Context: routing through these proxies is the skill's primary purpose, but users should be aware that prompt data and API keys flow through these third-party intermediaries.
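The data flow behind this finding can be sketched as follows: overriding baseURL in an OpenAI-compatible request sends both the API key (in the Authorization header) and the full prompt body to the proxy rather than to the upstream provider directly. This is a minimal illustration, not the skill's actual configuration; the model slug is a placeholder.

```typescript
// Illustrative sketch of why baseURL overrides route keys and prompts
// through the intermediary named in the configuration.
interface GatewayConfig {
  baseURL: string; // e.g. "https://openrouter.ai/api/v1"
  apiKey: string;  // transmitted to the proxy, not the upstream provider
}

function buildChatRequest(cfg: GatewayConfig, prompt: string): Request {
  return new Request(`${cfg.baseURL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${cfg.apiKey}`,
      "Content-Type": "application/json",
    },
    // The entire prompt body transits the proxy before any model sees it.
    body: JSON.stringify({
      model: "openai/gpt-4o-mini", // placeholder model slug
      messages: [{ role: "user", content: prompt }],
    }),
  });
}
```

Since both the credential and the prompt appear in this single request to the proxy host, the exposure is inherent to the gateway pattern rather than a defect in the templates.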
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:13 PM