exploring-llm-traces

Pass

Audited by Gen Agent Trust Hub on Apr 23, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: No security issues were identified. The skill follows best practices for AI observability and debugging.
  • [COMMAND_EXECUTION]: The skill instructs the agent to execute local Python scripts (print_summary.py, print_timeline.py, etc.) to process large trace results. These scripts use only standard Python libraries (json, os, sys) and do not perform any network operations or unauthorized file access. This is a legitimate pattern for handling data that exceeds the agent's context window.
  • [DATA_EXFILTRATION]: All data retrieval is performed through authorized PostHog MCP tools. No unauthorized data access or external transmission patterns were found.
  • [PROMPT_INJECTION]: The skill instructs the agent to use posthog:read-data-schema to discover real property names before querying, rather than guessing names from its training data. These are beneficial safety and accuracy instructions.
  • [INDIRECT_PROMPT_INJECTION]: The skill processes trace data that may contain untrusted input from previously recorded LLM interactions. While it lacks explicit prompt boundary markers for this data, the risk is inherent to the skill's primary debugging purpose, and the provided scripts include truncation logic to limit how much of that data is ingested at once.
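For illustration, a minimal sketch of the truncation pattern the findings above describe: a local script reads a large JSON trace dump and shortens long string fields so the result fits in an agent's context window. This is a hypothetical reconstruction, not the audited scripts themselves; the field-length limit and file handling are assumptions, though it uses only the standard libraries the audit names (json, sys).

```python
# Hypothetical sketch of the truncation pattern described in the audit:
# recursively shorten long strings in a decoded JSON trace so the
# printed summary stays within an agent's context window.
import json
import sys

MAX_FIELD_LEN = 200  # illustrative limit, not taken from the audited scripts


def truncate(value, limit=MAX_FIELD_LEN):
    """Recursively truncate long strings in a decoded JSON structure."""
    if isinstance(value, str) and len(value) > limit:
        return value[:limit] + f"... [{len(value) - limit} chars truncated]"
    if isinstance(value, list):
        return [truncate(v, limit) for v in value]
    if isinstance(value, dict):
        return {k: truncate(v, limit) for k, v in value.items()}
    return value


if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage (hypothetical): python print_summary.py trace.json
    with open(sys.argv[1]) as f:
        trace = json.load(f)
    print(json.dumps(truncate(trace), indent=2))
```

Note that the script performs no network operations and reads only the file it is given, which is the behavior the [COMMAND_EXECUTION] finding treats as a legitimate pattern for data that exceeds the context window.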
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 23, 2026, 04:19 PM