cache-cost-tracking
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONEXTERNAL_DOWNLOADSDATA_EXFILTRATION
Full Analysis
- Indirect Prompt Injection (HIGH): The skill possesses a high vulnerability surface for indirect prompt injection by ingesting external data and passing it to executable components. * Ingestion points: The call_llm_with_cache function in SKILL.md receives raw 'prompt' strings, and run_analysis receives 'url' inputs. * Boundary markers: Absent; untrusted content is not delimited or marked to prevent instruction override. * Capability inventory: The skill invokes llm.generate(prompt) and agent.analyze(content), providing a direct path for injected instructions to influence agent reasoning or trigger side effects. * Sanitization: Absent; there is no validation or filtering of external input before processing.
- Unverifiable Dependencies (MEDIUM): The skill relies on the third-party 'langfuse' library, which is not within the trusted source scope. * Evidence: Imports include 'from langfuse.decorators import observe' and 'from langfuse import Langfuse'.
- Data Exposure & Exfiltration (LOW): The skill transmits usage metrics, prompt metadata, and session IDs to the Langfuse platform, which is an external, non-whitelisted domain.
Recommendations
- AI detected serious security threats
Audit Metadata