performance-profiling

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Finding Categories: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION] (LOW): The script scripts/lighthouse_audit.py uses subprocess.run to execute the Lighthouse CLI. Although arguments are passed as a list, which prevents shell injection, the url parameter is not validated, leaving the script open to argument injection (e.g., passing a flag such as --help in place of a URL).
  • [EXTERNAL_DOWNLOADS] (LOW): The skill requires the lighthouse npm package to be installed globally. While this is a well-known tool, the skill's reliance on external, unmanaged dependencies is a minor security concern.
  • [PROMPT_INJECTION] (LOW): Because the skill fetches and processes content from external URLs, it is susceptible to indirect prompt injection: if the target website embeds malicious instructions in its metadata, an AI agent processing the resulting Lighthouse report might follow them. Mandatory Evidence Chain:
    1. Ingestion points: scripts/lighthouse_audit.py via the url argument.
    2. Boundary markers: Absent; the Lighthouse output is returned as raw JSON data.
    3. Capability inventory: subprocess.run in scripts/lighthouse_audit.py, and tools such as Bash allowed in SKILL.md.
    4. Sanitization: Absent; no URL validation or content filtering is performed.
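The argument-injection finding above can be mitigated by validating the url parameter before it reaches subprocess.run. The helper below is a minimal sketch, not code from the audited skill; the function name validate_url and the error messages are assumptions for illustration.

```python
from urllib.parse import urlparse


def validate_url(url: str) -> str:
    """Reject values that are not plain http(s) URLs, so a caller cannot
    smuggle CLI flags like '--help' into the Lighthouse argument list."""
    # A bare flag such as "--help" parses with an empty scheme and netloc,
    # so this check rejects it before it ever reaches the subprocess call.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"not a valid http(s) URL: {url!r}")
    # Defense in depth: never allow a leading dash even if parsing passed.
    if url.startswith("-"):
        raise ValueError(f"URL must not begin with '-': {url!r}")
    return url
```

Where the CLI supports it, placing a literal "--" separator before positional arguments in the subprocess.run list adds a second layer of protection against flag injection.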
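The missing boundary markers noted in the evidence chain could be added by wrapping the raw Lighthouse JSON before it is handed to the agent. This is a sketch of one possible approach, not the skill's implementation; the marker string and the wrap_untrusted helper are hypothetical names.

```python
import json

# Hypothetical sentinel; any unambiguous marker the agent is told to
# treat as a data boundary would serve the same purpose.
BOUNDARY = "<<<UNTRUSTED_LIGHTHOUSE_OUTPUT>>>"


def wrap_untrusted(report_json: str) -> str:
    """Wrap raw tool output in explicit boundary markers so a downstream
    agent can treat everything inside as data, never as instructions."""
    # Round-tripping through json.loads/json.dumps ensures only valid
    # JSON (no stray text) ends up inside the boundary.
    data = json.loads(report_json)
    body = json.dumps(data, indent=2)
    return f"{BOUNDARY}\n{body}\n{BOUNDARY}"
```

Markers alone do not neutralize injected instructions; they only give the consuming agent a reliable way to distinguish fetched content from its own prompt, which is why the audit also flags the absence of content filtering.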
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 04:58 PM