langfuse

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • PROMPT_INJECTION (SAFE): The instructions are strictly limited to technical documentation and role definition for LLM observability. No jailbreak or bypass patterns are present.
  • CREDENTIALS_UNSAFE (SAFE): Code examples use dummy placeholders like 'pk-...' and 'sk-...' for public and secret keys, adhering to best practices for documentation.
  • EXTERNAL_DOWNLOADS (SAFE): While the skill references external Python libraries, it does not include commands to download or install them from untrusted sources.
  • DATA_EXFILTRATION (SAFE): Network requests are directed only to the official Langfuse cloud host or user-defined self-hosted URLs for the purpose of observability telemetry.
  • INDIRECT_PROMPT_INJECTION (LOW): As an observability tool, the skill naturally processes LLM inputs and outputs. However, it does not include execution capabilities that would allow ingested data to perform unauthorized actions.
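The dummy-key convention flagged as safe in the CREDENTIALS_UNSAFE finding can be sketched as follows. The `is_placeholder` helper is a hypothetical illustration, not part of any Langfuse SDK; the `LANGFUSE_PUBLIC_KEY`/`LANGFUSE_SECRET_KEY` environment-variable names are assumed from common Langfuse documentation practice:

```python
import os

def is_placeholder(key: str) -> bool:
    """True if the key is a truncated documentation stub, not a real credential."""
    return key in ("pk-...", "sk-...") or key.endswith("...")

# Dummy values of the kind the audit found in the skill's code examples.
# These are documentation stubs and can never authenticate against Langfuse.
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-...")
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-...")

assert is_placeholder(os.environ["LANGFUSE_PUBLIC_KEY"])
assert is_placeholder(os.environ["LANGFUSE_SECRET_KEY"])
```

A documentation linter along these lines would reject any example whose keys do not end in the truncated `...` stub, which is the practice the audit credits the skill with following.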
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 04:59 PM