litellm
Pass
Audited by Gen Agent Trust Hub on Mar 2, 2026
Risk Level: SAFE · PROMPT_INJECTION · EXTERNAL_DOWNLOADS
Full Analysis
- [PROMPT_INJECTION]: The skill provides a pathway for LLM calls, creating a surface for indirect prompt injection when handling external data.
  - Ingestion points: The `prompt` and `system` arguments in `scripts/llm_call.py` accept arbitrary text input.
  - Boundary markers: The message list is constructed in `scripts/llm_call.py` without delimiters or instructions telling the model to ignore potential commands within the user input.
  - Capability inventory: The script executes network calls to various model provider endpoints using the `litellm` library.
  - Sanitization: No validation or sanitization is applied to the input strings before they are transmitted to the LLM API.
- [EXTERNAL_DOWNLOADS]: The skill documentation refers to the `litellm` library, a well-known package for LLM orchestration.
  - Evidence: `SKILL.md` contains instructions for installing the library via `pip install litellm`.
Audit Metadata