execute
Fail
Audited by Gen Agent Trust Hub on Mar 22, 2026
Risk Level: HIGHPROMPT_INJECTIONCOMMAND_EXECUTIONDATA_EXFILTRATION
Full Analysis
- [PROMPT_INJECTION]: The skill provides instructions to "Never ask the user" and "Don't stop until the work is fully complete," explicitly directing the agent to bypass human-in-the-loop safety checks and operate fully autonomously.- [PROMPT_INJECTION]: Under the "Team Execution" section, the skill instructs the agent to spawn teammates using
mode: "bypassPermissions". This is a direct attempt to circumvent permission-based security controls within the agent's task execution environment.- [COMMAND_EXECUTION]: The skill utilizes thenoodle worktree execcommand to run arbitrary commands within a worktree environment, which can be derived from untrusted plan files or user input.- [COMMAND_EXECUTION]: The methodology includes executing a local shell script (sh scripts/lint-arch.sh), creating a path for arbitrary code execution if the script is modified by an attacker.- [DATA_EXFILTRATION]: Thenoodle event emitcommand transmits implementation summaries and potentially sensitive progress data to an external backend service using a session identifier.- [COMMAND_EXECUTION]: The skill is susceptible to indirect prompt injection. - Ingestion points: Reads untrusted data from
brain/plans/andbrain/todos.md(SKILL.md). - Boundary markers: Absent; no delimiters or instructions to ignore embedded commands are present.
- Capability inventory: Includes
noodleCLI,pnpm,go,git,sh, andTaskcreation withbypassPermissions(SKILL.md). - Sanitization: Absent; the skill does not escape or validate the content of the ingested files before using them to drive implementation steps.
Recommendations
- AI detected serious security threats
Audit Metadata