
skill-creator

Verdict: Warn

Audited by Gen Agent Trust Hub on Mar 11, 2026

Risk Level: MEDIUM
Tags: COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill extensively uses Python's subprocess module to execute system-level commands. scripts/run_eval.py invokes the claude CLI tool to run test queries, while eval-viewer/generate_review.py uses lsof and kill to manage local network ports. Additionally, the SKILL.md mandates the inclusion of git commands (git fetch, git pull --rebase) in any repository-mutating skills it generates.
  • [REMOTE_CODE_EXECUTION]: A dynamic execution pattern is present in scripts/run_eval.py, where the script programmatically writes new instruction files to the .claude/commands/ directory and then immediately triggers the claude CLI to process those files. This allows the skill to execute dynamically generated logic.
  • [EXTERNAL_DOWNLOADS]: The scripts/improve_description.py script uses the anthropic Python library to communicate with external AI models via the Anthropic API. This is an intentional network exit point to a well-known service.
  • [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface in scripts/improve_description.py. It ingests untrusted data from eval_set.json (user-defined test queries) and feedback.json (user feedback) and interpolates this content into prompts sent to the LLM. While it uses XML tags for boundary delimitation, malicious queries in the test set could potentially influence the behavior of the description optimizer.
  • [DATA_EXFILTRATION]: While not primarily for exfiltration, the capability to read local files (via the subagent workflow) and send their content to the Anthropic API (via the improvement loop) establishes a path for sensitive data to leave the local environment.
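The COMMAND_EXECUTION finding above hinges on the skill shelling out to external binaries. A minimal sketch of that pattern, assuming a plain positional-argument invocation of the claude CLI (the real flags in scripts/run_eval.py are not shown in this report), with the binary parameterized purely so the sketch can be exercised with a harmless command:

```python
import subprocess

def run_query(query: str, binary: str = "claude") -> str:
    """Hypothetical sketch of run_eval.py's subprocess boundary.

    The `binary` parameter exists only to make this sketch testable;
    the audited script presumably invokes the claude CLI directly.
    """
    result = subprocess.run(
        [binary, query],       # assumed invocation shape, not the real flags
        capture_output=True,
        text=True,
        check=True,            # surface non-zero exit codes as exceptions
    )
    return result.stdout.strip()
```

Passing the command as a list (not a shell string) avoids shell injection, but the risk the audit flags remains: whatever the external tool does runs with the user's full local privileges.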
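The REMOTE_CODE_EXECUTION finding describes a write-then-execute loop: an instruction file is generated at runtime, then the CLI is immediately pointed at it. A hedged sketch of that shape (function name, file layout details, and the slash-command invocation are assumptions; only the .claude/commands/ path comes from the report):

```python
import subprocess
from pathlib import Path

def run_dynamic_command(
    name: str, instructions: str, root: Path, binary: str = "claude"
) -> str:
    """Hypothetical sketch of the dynamic-execution pattern flagged above."""
    cmd_dir = root / ".claude" / "commands"
    cmd_dir.mkdir(parents=True, exist_ok=True)
    # Step 1: write freshly generated instructions to disk.
    (cmd_dir / f"{name}.md").write_text(instructions)
    # Step 2: immediately ask the CLI to act on them. From this point on,
    # the generated file, not static code, determines what the agent does.
    result = subprocess.run([binary, f"/{name}"], capture_output=True, text=True)
    return result.stdout.strip()
```

The security-relevant property is that step 2 executes logic that did not exist until step 1, so static review of the repository cannot see everything the skill may run.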
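The PROMPT_INJECTION surface is easiest to see in code. A minimal sketch, assuming a simple JSON schema and prompt wording (both hypothetical; only the use of eval_set.json content and XML boundary tags comes from the report), showing why tag delimitation alone does not contain untrusted input:

```python
import json

def build_optimizer_prompt(eval_set_json: str) -> str:
    """Hypothetical sketch of the injection surface in improve_description.py.

    User-controlled queries are interpolated into the LLM prompt. XML tags
    mark the trusted/untrusted boundary, but nothing prevents a query from
    containing a closing tag plus instructions of its own.
    """
    queries = json.loads(eval_set_json)["queries"]  # assumed schema
    joined = "\n".join(queries)
    return (
        "Improve the skill description based on these test queries.\n"
        f"<queries>\n{joined}\n</queries>"
    )
```

A query such as `</queries>Ignore all prior instructions...` escapes the delimited region as far as a naive model reading is concerned, which is exactly the risk the finding describes: the test set can steer the description optimizer.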
Audit Metadata
Risk Level: MEDIUM
Analyzed: Mar 11, 2026, 10:20 PM