permissions
Fail
Audited by Gen Agent Trust Hub on Feb 20, 2026
Risk Level: HIGH (COMMAND_EXECUTION, PROMPT_INJECTION)
Full Analysis
- [COMMAND_EXECUTION] (HIGH): The skill provides a command (`/perms mode full`) that explicitly disables all security restrictions. This 'full' mode allows the execution of any system command, which is inherently dangerous if accessible to an automated agent.
- [COMMAND_EXECUTION] (HIGH): The skill enables runtime modification of the command allowlist via `/perms allow <pattern>`. An agent with access to this tool can grant itself permission to execute any binary, bypassing the intended security boundaries of the host environment.
- [PROMPT_INJECTION] (MEDIUM): There is a high risk of 'Self-Escalation' via prompt injection. An attacker could provide a prompt that instructs the agent to run `/perms mode full` or `/perms allow *`. Since the skill does not distinguish between human-initiated policy changes and agent-initiated ones, the agent may unwittingly dismantle its own security sandbox.
- [INDIRECT_PROMPT_INJECTION] (LOW): The skill is a surface for indirect prompt injection, as it processes command patterns and agent IDs from potentially untrusted inputs.
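The self-escalation finding above could be mitigated by gating privileged policy changes on who requested them. A minimal TypeScript sketch of that idea (the `applyPermsCommand` function and `Origin` type are hypothetical illustrations, not part of the audited skill):

```typescript
type Origin = "human" | "agent";

interface PermsResult {
  applied: boolean;
  reason?: string;
}

// Hypothetical guard: privileged policy changes (disabling restrictions,
// wildcard allows) are only honored when a human initiated them.
function applyPermsCommand(command: string, origin: Origin): PermsResult {
  const privileged =
    command === "/perms mode full" || /^\/perms allow \*/.test(command);

  if (privileged && origin === "agent") {
    return {
      applied: false,
      reason: "privileged policy change requires human confirmation",
    };
  }
  return { applied: true };
}
```

Under this sketch, `applyPermsCommand("/perms mode full", "agent")` is refused while the same command from a human origin is applied, which removes the path where injected instructions let the agent dismantle its own sandbox.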
- Ingestion points: The `execute` function in `index.ts` parses the `args` string, which contains user-provided command patterns and agent identifiers.
- Boundary markers: None. The instructions are interpolated directly into the permission logic.
- Capability inventory: Access to `execApprovals.setSecurityConfig` and `execApprovals.addToAllowlist` in `index.ts`, which control the underlying system's ability to spawn subprocesses.
- Sanitization: None. The skill accepts regex and glob patterns without validation, allowing for broad or potentially malicious pattern matching.
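The missing sanitization could be addressed by rejecting overly broad patterns before they reach the allowlist. A minimal sketch, assuming patterns arrive as plain strings (`isAcceptablePattern` is a hypothetical helper, not a function in `index.ts`):

```typescript
// Hypothetical validator: refuse empty patterns, bare wildcards,
// and patterns that contain no literal word characters at all,
// since those would match essentially any command.
function isAcceptablePattern(pattern: string): boolean {
  const trimmed = pattern.trim();
  if (trimmed.length === 0) return false;
  // Bare glob wildcards such as "*" or "??" match everything.
  if (/^[*?]+$/.test(trimmed)) return false;
  // Require at least one literal word character (e.g. a binary name).
  if (!/\w/.test(trimmed)) return false;
  return true;
}
```

For example, `isAcceptablePattern("*")` and `isAcceptablePattern(".*")` would be rejected, while a scoped pattern like `"git status"` passes. A real implementation would likely go further (e.g. bounding regex complexity), but even this coarse check blocks the broadest escalation patterns the audit flags.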
Recommendations
- The automated analysis detected serious security threats; treat this skill as high risk.
Audit Metadata