toml-command-builder

Pass

Audited by Gen Agent Trust Hub on Mar 18, 2026

Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [INDIRECT_PROMPT_INJECTION]: The skill provides templates and instructions for building prompts that incorporate untrusted data from the local environment into LLM contexts.
  • Ingestion points: The skill promotes the use of @{...} for file content injection, !{...} for shell command output (e.g., git diffs), and {{args}} for raw user input.
  • Boundary markers: Documentation examples use markdown code blocks (e.g., ~~~diff) to wrap injected content, but they lack explicit instructions or examples for the LLM to ignore potentially malicious instructions embedded within that data.
  • Capability inventory: The skill is authorized to use Read, Glob, Grep, and Bash tools, which are used to gather data and write the resulting TOML command files to the filesystem.
  • Sanitization: While the documentation notes that shell arguments are escaped for security, there is no mention of sanitizing or filtering data injected into the natural language prompt itself.
  • [COMMAND_EXECUTION]: The skill documents and provides templates for executing arbitrary shell commands via the !{...} syntax, which is a core feature of the Gemini CLI tool being documented.
  • Evidence: Examples include !{git diff --staged}, !{find . -type f -name "*.ts" | head -50}, and a validation check using python -c. The skill also explicitly warns against dangerous usage like !{rm -rf {{args}}}.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 18, 2026, 04:47 AM