copilot-cli

Audit Result: Fail

Audited by Gen Agent Trust Hub on Mar 4, 2026

Risk Level: HIGH
Tags: COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill constructs shell commands by inserting user-provided strings directly into a bash template (e.g., copilot -p "<english prompt>"). This pattern is highly susceptible to command injection, as a user can provide input containing shell metacharacters such as backticks, semicolons, or command substitution syntax to execute arbitrary code on the host system.
  • [COMMAND_EXECUTION]: The skill's documentation and examples promote the use of high-privilege flags like --allow-all-tools, --allow-all-paths, and --yolo. These flags grant the delegated Copilot process unrestricted permissions to execute shell commands and access the filesystem, which significantly increases the risk of system compromise if the delegated model is manipulated by a malicious prompt.
  • [PROMPT_INJECTION]: The skill exposes an indirect prompt injection surface by processing untrusted user data and passing it to an external LLM-based CLI tool.
    - Ingestion points: user-provided task descriptions are ingested during the prompt construction phase of SKILL.md.
    - Boundary markers: none are implemented; user input is simply wrapped in double quotes.
    - Capability inventory: the skill has Bash, Read, and Write permissions, and the delegated tool can be granted full system access via the --yolo flag.
    - Sanitization: none; the user-provided prompt is not escaped or sanitized before execution.
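The injection pattern described above can be illustrated with a short sketch. The function names below are hypothetical (not taken from the audited skill); the sketch contrasts interpolating a prompt into a shell string, as the skill does, with passing it as a single argv element so the shell never interprets it:

```python
import shlex
import subprocess  # noqa: F401  (shown for the safe invocation pattern)

def build_command_unsafe(prompt: str) -> str:
    # Vulnerable pattern: the prompt is interpolated directly into a
    # shell string, so metacharacters like ; ` $( ) are interpreted
    # by the shell when the string is executed.
    return f'copilot -p "{prompt}"'

def build_argv_safe(prompt: str) -> list[str]:
    # Safer pattern: pass the prompt as one argv element and invoke
    # without a shell (e.g. subprocess.run(argv), no shell=True),
    # so metacharacters remain literal text.
    return ["copilot", "-p", prompt]

malicious = 'summarize this"; rm -rf ~; echo "'
print(build_command_unsafe(malicious))  # shell would run the embedded rm -rf
print(build_argv_safe(malicious))       # prompt stays a single literal argument
print(shlex.quote(malicious))           # escaping option if a shell string is unavoidable
```

If the skill must build a shell string, `shlex.quote` (or equivalent escaping in bash) is the minimum mitigation; avoiding the shell entirely, as in `build_argv_safe`, removes the injection surface.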
Recommendations
  • The automated analysis detected serious security threats in this skill; review the findings above before enabling it.
Audit Metadata
Risk Level: HIGH
Analyzed: Mar 4, 2026, 07:59 PM