ai-infrastructure-litellm

Pass

Audited by Gen Agent Trust Hub on Apr 7, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill loads all provider API keys and the proxy master key from environment variables (LiteLLM's os.environ/ reference syntax), keeping sensitive credentials out of configuration files.
  • [SAFE]: External resources, including the docker-compose.yml file and the LiteLLM container image, are retrieved from the project's official GitHub and GHCR repositories.
  • [SAFE]: The documentation encourages the use of virtual keys with specific model permissions and budget constraints, following the principle of least privilege for application access.
  • [INDIRECT_PROMPT_INJECTION]: The skill describes an infrastructure layer that processes untrusted external data:
    • Ingestion points: Requests to the proxy's chat/completions and embeddings endpoints in SKILL.md and examples/core.md.
    • Boundary markers: No specific prompt delimiters or instruction isolation markers are configured in the provided examples.
    • Capability inventory: The proxy has network access to route requests to external LLM providers and access to a PostgreSQL database for spend tracking, as described in SKILL.md.
    • Sanitization: Content is passed through to downstream providers without explicit sanitization by the proxy layer.
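
As a concrete illustration of the virtual-key pattern the audit credits as least-privilege, the sketch below builds a request for the LiteLLM proxy's /key/generate endpoint that scopes a key to specific models and a spend budget, reading the master key from the environment rather than hard-coding it. The proxy URL and model name are assumptions for illustration; the endpoint path and payload fields follow the LiteLLM proxy API, but treat this as a hedged sketch, not authoritative usage.

```python
import os

# Assumed local proxy address (illustrative only).
PROXY_URL = "http://localhost:4000"
# Master key comes from the environment, mirroring the os.environ/ pattern
# the audit highlights; the fallback value here is a placeholder for testing.
MASTER_KEY = os.environ.get("LITELLM_MASTER_KEY", "sk-placeholder")

def build_key_request(models, max_budget_usd):
    """Build headers and JSON body for a least-privilege virtual key."""
    headers = {"Authorization": f"Bearer {MASTER_KEY}"}
    payload = {
        "models": models,              # restrict the key to named models
        "max_budget": max_budget_usd,  # hard spend cap for this key (USD)
    }
    return headers, payload

headers, payload = build_key_request(["gpt-4o-mini"], 25.0)

# To actually mint the key against a running proxy (requires `requests`):
# resp = requests.post(f"{PROXY_URL}/key/generate", headers=headers, json=payload)
# virtual_key = resp.json()["key"]
```

The network call is left commented out so the construction of the scoped request can be inspected without a running proxy.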
Audit Metadata
Risk Level: SAFE
Analyzed: Apr 7, 2026, 01:31 AM