stuck (skills/johnlindquist/claude/stuck)

Audit result: Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • PROMPT_INJECTION (SAFE): The prompt templates are designed to assist with debugging and do not attempt to override the AI's core instructions or safety filters. Indirect prompt injection surface: (1) ingestion points: the user pastes code and error output into placeholders; (2) boundary markers: triple backticks; (3) capability inventory: shell access (npm, git, gh, rm); (4) sanitization: none. This is considered safe because ingesting untrusted text is intrinsic to the tool's purpose, and the commands involved are standard for development troubleshooting. A hedged sketch of such a template appears after this list.
  • COMMAND_EXECUTION (SAFE): The skill uses common developer tools (npm, git, gh) for legitimate troubleshooting. Destructive commands such as 'rm -rf' are limited to cleaning local build artifacts (node_modules); see the dependency-reset sketch after this list.
  • DATA_EXFILTRATION (SAFE): No patterns were found indicating unauthorized access to sensitive files or external network transmissions beyond the intended LLM interaction.
  • EXTERNAL_DOWNLOADS (SAFE): The 'npm install' command is a standard developer action used to restore dependencies from local configurations and does not involve downloading unverified remote scripts.
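
The audit describes the template structure only at a high level. The following is a minimal, hypothetical sketch (in TypeScript) of how such a debugging prompt might wrap user-pasted material in triple-backtick boundary markers; the function and field names are assumptions for illustration, not the skill's actual code.

```ts
// Hypothetical sketch of a "stuck"-style debugging prompt template.
// Names (StuckInput, buildStuckPrompt) are illustrative assumptions,
// not identifiers taken from the audited skill.

const FENCE = "`".repeat(3); // triple-backtick boundary marker

interface StuckInput {
  errorOutput: string;  // pasted error text (ingestion point)
  relevantCode: string; // pasted source snippet (ingestion point)
}

function buildStuckPrompt({ errorOutput, relevantCode }: StuckInput): string {
  // The pasted content is wrapped in boundary markers but not otherwise
  // sanitized, matching the "sanitization: none" observation above.
  return [
    "I am stuck on the following error in this repository.",
    "Error output:",
    FENCE,
    errorOutput,
    FENCE,
    "Relevant code:",
    FENCE,
    relevantCode,
    FENCE,
    "Suggest debugging steps using standard tools (npm, git, gh).",
  ].join("\n");
}
```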
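
For the dependency cleanup referenced in the COMMAND_EXECUTION and EXTERNAL_DOWNLOADS findings, a minimal sketch of the reset pattern, assuming a Node.js helper with hypothetical names and paths, might look like this:

```ts
// Hypothetical dependency-reset step: remove local build artifacts only,
// then reinstall from the project's existing package.json / lockfile.
import { execSync } from "node:child_process";
import { rmSync } from "node:fs";
import { join } from "node:path";

function resetDependencies(projectDir: string): void {
  // Equivalent of `rm -rf node_modules`, scoped to the project directory.
  rmSync(join(projectDir, "node_modules"), { recursive: true, force: true });

  // `npm install` restores packages declared locally; no unverified
  // remote scripts are fetched outside the normal registry install.
  execSync("npm install", { cwd: projectDir, stdio: "inherit" });
}
```
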
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 06:06 PM