litellm

Pass

Audited by Gen Agent Trust Hub on Mar 3, 2026

Risk Level: SAFEEXTERNAL_DOWNLOADSPROMPT_INJECTIONCREDENTIALS_UNSAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill instructs users to install the 'litellm' Python package and run a Docker container from 'ghcr.io/berriai/litellm'. These are well-known and reputable sources for this specific technology.
  • [CREDENTIALS_UNSAFE]: The documentation contains placeholder credentials such as 'sk-1234', 'key1', and 'postgresql://user:pass@localhost/litellm'. These are used purely for illustrative purposes in configuration examples and do not represent actual exposed secrets.
  • [PROMPT_INJECTION]: The skill facilitates processing user input through LLM APIs, creating an indirect prompt injection surface.
  • Ingestion points: Data is passed to models via the 'messages' parameter in 'completion' and 'acompletion' functions in 'SKILL.md'.
  • Boundary markers: None are present in the provided code snippets.
  • Capability inventory: Network operations for LLM API requests and potential database connectivity.
  • Sanitization: No input validation or sanitization is shown in the basic usage examples.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 3, 2026, 12:51 PM