langfuse
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Data Exposure & Exfiltration] (SAFE): API keys in the provided code snippets use safe placeholders (e.g., 'pk-...' and 'sk-...'), and data transmission is limited to the intended observability host.
- [Indirect Prompt Injection] (SAFE): The skill has an ingestion surface, since it processes LLM inputs and outputs for tracing; however, it lacks exploitable capabilities.
  1. Ingestion points: trace() and callback handlers.
  2. Boundary markers: absent.
  3. Capability inventory: no subprocess execution, no dynamic code execution (eval/exec), and no file writing.
  4. Sanitization: absent.
- [Unverifiable Dependencies] (SAFE): All referenced Python packages (langfuse, openai, langchain) are well-known, standard industry libraries.
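The "capability inventory" step in the injection finding above can be sketched as a static scan over a code snippet. This is a hypothetical illustration of the idea (flagging subprocess imports, eval/exec calls, and open() file access), not the auditor's actual tooling; the function name capability_inventory is invented for this sketch.

```python
import ast

# Calls whose presence indicates dynamic code execution.
RISKY_CALLS = {"eval", "exec"}

def capability_inventory(source: str) -> set:
    """Return the set of risky capabilities found in a Python snippet."""
    findings = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.add("dynamic code execution")
            elif node.func.id == "open":
                findings.add("file access")
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            # Collect imported module names, including "from X import Y".
            mods = [alias.name for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                mods.append(node.module)
            if any(m.split(".")[0] == "subprocess" for m in mods):
                findings.add("subprocess execution")
    return findings

print(capability_inventory("import subprocess\nsubprocess.run(['ls'])"))
# A tracing-only snippet yields an empty inventory.
print(capability_inventory("trace = lambda **kw: kw"))
```

A snippet that only builds trace payloads produces an empty inventory, which is the condition the SAFE verdict above rests on.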
Audit Metadata