writing-skills

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGH
Tags: PROMPT_INJECTION, COMMAND_EXECUTION, REMOTE_CODE_EXECUTION
Full Analysis
  • Prompt Injection (HIGH): The file persuasion-principles.md and examples/CLAUDE_MD_TESTING.md explicitly document and advocate for the use of behavioral manipulation techniques to bypass AI rationalization and internal reasoning.
  • It applies the 'Authority' and 'Commitment' principles with markers such as 'YOU MUST', 'No exceptions', and 'If you didn't use it, you failed'.
  • It references research specifically focused on persuading AI to comply with 'objectionable requests', providing a blueprint for bypassing alignment and safety filters through psychological pressure.
  • Command Execution (MEDIUM): The script render-graphs.js uses child_process.execSync to invoke the system dot (Graphviz) binary.
  • Although input is passed via stdin rather than shell interpolation, the script still hands untrusted data extracted from markdown files to an external binary for processing.
  • Indirect Prompt Injection (HIGH): The render-graphs.js script provides a significant attack surface (Category 8).
  • Ingestion points: The script reads raw content from SKILL.md (untrusted data).
  • Boundary markers: None. It uses regex to extract content between triple backticks.
  • Capability inventory: Uses execSync to run subprocesses and fs.writeFileSync to write files to the disk.
  • Sanitization: No validation or sanitization is performed on the dot content before it is passed to the system command. If the Graphviz installation is configured with features like gvpr or file inclusion enabled, an attacker-controlled SKILL.md could achieve arbitrary file read/write or code execution.
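A minimal mitigation consistent with this finding would be a pre-execution check on the extracted DOT content. The sketch below is hypothetical (not part of the audited script) and assumes a denylist of the file-access-capable Graphviz attributes relevant to the attack described above; a production fix would likely pair this with running Graphviz with such features disabled.

```javascript
// Sketch of a pre-execution check for untrusted DOT content (hypothetical).
// Rejects attributes that can make Graphviz read local files when those
// features are enabled in the installation.
const FORBIDDEN = [
  /\bimage\s*=/i,      // image attribute can load arbitrary files
  /\bimagepath\s*=/i,  // search path for image loading
  /\bshapefile\s*=/i,  // shapefile can reference local files
  /\bfontpath\s*=/i,   // font search path override
];

// Returns true only if none of the denylisted attributes appear.
function isSafeDot(dotSource) {
  return FORBIDDEN.every((pattern) => !pattern.test(dotSource));
}
```

A denylist like this is inherently incomplete; an allowlist grammar for the expected node/edge syntax would be stricter.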
Recommendations
  • AI detected serious security threats; the findings above suggest the following mitigations.
  • Validate or sanitize DOT content extracted from SKILL.md before passing it to the system dot binary, and run Graphviz with file-inclusion and gvpr features disabled.
  • Remove or reframe the coercive 'Authority' and 'Commitment' language ('YOU MUST', 'No exceptions') documented in persuasion-principles.md.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 16, 2026, 12:45 PM