skill-creator
Warn
Audited by Gen Agent Trust Hub on Mar 11, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill extensively uses Python's
subprocessmodule to execute system-level commands.scripts/run_eval.pyinvokes theclaudeCLI tool to run test queries, whileeval-viewer/generate_review.pyuseslsofandkillto manage local network ports. Additionally, theSKILL.mdmandates the inclusion of git commands (git fetch,git pull --rebase) in any repository-mutating skills it generates. - [REMOTE_CODE_EXECUTION]: A dynamic execution pattern is present in
scripts/run_eval.py, where the script programmatically writes new instruction files to the.claude/commands/directory and then immediately triggers theclaudeCLI to process those files. This allows the skill to execute dynamically generated logic. - [EXTERNAL_DOWNLOADS]: The
scripts/improve_description.pyscript utilizes theanthropicPython library to communicate with external AI models via the Anthropic API. This represents an intentional network exit point to a well-known service. - [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface in
scripts/improve_description.py. It ingests untrusted data fromeval_set.json(user-defined test queries) andfeedback.json(user feedback) and interpolates this content into prompts sent to the LLM. While it uses XML tags for boundary delimitation, malicious queries in the test set could potentially influence the behavior of the description optimizer. - [DATA_EXFILTRATION]: While not primarily for exfiltration, the capability to read local files (via the subagent workflow) and send their content to the Anthropic API (via the improvement loop) establishes a path for sensitive data to leave the local environment.
Audit Metadata