llms-dashboard

Verdict: Warn

Audited by Gen Agent Trust Hub on Mar 2, 2026

Risk Level: MEDIUM
Findings: DATA_EXFILTRATION, PROMPT_INJECTION, EXTERNAL_DOWNLOADS
Full Analysis
  • [DATA_EXFILTRATION]: The skill performs extensive reading of sensitive local configuration and history files associated with several AI tools.
  • Evidence: Files like ~/.claude.json (contains OAuth details), google_accounts.json (active accounts), and VS Code's storage.json (workspace project paths) are accessed.
  • This access is consistent with the skill's primary purpose but exposes personal and environmental metadata.
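The file reads described above can be sketched as a small Python helper, assuming a typical aggregator pattern of "load the JSON config if it exists, skip it otherwise." The function name and error handling are illustrative assumptions, not the skill's actual code; only the ~/.claude.json path comes from the findings.

```python
import json
from pathlib import Path

def read_tool_config(path_str: str) -> dict:
    """Load a tool's JSON config if present; return {} otherwise.

    Sketch of the aggregator-style access the audit describes. The
    function name and fallback behavior are assumptions for illustration.
    """
    path = Path(path_str).expanduser()
    if not path.is_file():
        return {}
    try:
        return json.loads(path.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        return {}

# A read like read_tool_config("~/.claude.json") can surface OAuth
# details, which is why the finding flags metadata exposure even though
# the access matches the skill's stated purpose.
```

Even in a benign skill, gating each read behind an existence check and catching parse errors keeps a missing or malformed config from crashing the aggregation step.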
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection leading to Cross-Site Scripting (XSS).
  • Ingestion points: Aggregator scripts in the scripts/ directory read raw chat logs. That content is attacker-controllable: an earlier indirect prompt injection can cause an LLM to write malicious markup into the logs the skill later renders.
  • Capability inventory: The skill generates HTML files that are intended to be opened in a browser.
  • Sanitization: None. The update_*.py scripts interpolate chat content directly into HTML templates via plain .replace() string substitution, with no escaping step.
  • Boundary markers: Absent. There are no delimiters or warnings in the templates to prevent the browser from executing scripts embedded in the data.
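The vulnerable pattern, and the one-line escaping fix, can be sketched as follows. The template placeholder and function names are hypothetical stand-ins for the skill's update_*.py substitution logic; only the .replace()-based interpolation reflects the finding.

```python
import html

# Stand-in for one of the skill's HTML templates.
TEMPLATE = "<td>{{MESSAGE}}</td>"

def render_unsafe(message: str) -> str:
    # Mirrors the reported pattern: raw string substitution, no escaping.
    return TEMPLATE.replace("{{MESSAGE}}", message)

def render_safe(message: str) -> str:
    # Escaping chat content before interpolation neutralizes embedded markup.
    return TEMPLATE.replace("{{MESSAGE}}", html.escape(message))

# A payload an attacker could have coaxed an LLM into writing to a chat log.
payload = '<img src=x onerror="alert(1)">'
print(render_unsafe(payload))  # live markup reaches the browser intact
print(render_safe(payload))   # rendered as inert text: &lt;img src=x ...
```

Because the dashboard HTML is opened directly in a browser, anything that survives interpolation unescaped executes with full access to the rendered page, so escaping at the interpolation boundary is the minimal fix.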
  • [EXTERNAL_DOWNLOADS]: The dashboard templates reference external JavaScript and CSS libraries.
  • Evidence: Scripts load TailwindCSS from cdn.tailwindcss.com and Chart.js from cdn.jsdelivr.net.
  • These references point to well-known CDNs and are required for the visualization functionality, but they still load remote code into the dashboard at render time.
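A common mitigation for CDN references like these is to pin each asset with a Subresource Integrity (SRI) attribute, so the browser rejects the file if its bytes change. The audit does not say the skill does this; the sketch below only shows how an SRI value is computed (sha384 digest, base64-encoded, per the SRI spec).

```python
import base64
import hashlib

def sri_hash(asset_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384 variant)."""
    digest = hashlib.sha384(asset_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The resulting value goes into the CDN tag's integrity attribute, e.g.:
# <script src="https://cdn.jsdelivr.net/..." integrity="sha384-..."
#         crossorigin="anonymous"></script>
integrity = sri_hash(b"console.log('chart bundle');")
print(integrity)
```

Note that cdn.tailwindcss.com serves a dynamically generated build, so SRI pinning generally requires switching to a fixed, versioned asset first.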
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 2, 2026, 03:29 AM