skill-creator

Pass

Audited by Gen Agent Trust Hub on Mar 18, 2026

Risk Level: SAFECOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill uses Python's subprocess and os modules across several utility scripts (run_eval.py, run_loop.py, generate_review.py) to execute the claude CLI and local system commands (e.g., lsof for port management). This execution is the intended mechanism for running skill evaluations and managing the local viewer server.
  • [EXTERNAL_DOWNLOADS]: The README.md documentation provides manual installation steps that involve cloning a repository from GitHub (github.com/alenazaharovaux/share). This is documented as the vendor's own repository for distributing the skill content.
  • [PROMPT_INJECTION]: The skill contains an indirect prompt injection surface (Category 8) within its description optimization loop (scripts/run_loop.py). The tool ingests user-provided or agent-generated evaluation queries from evals/evals.json and interpolates them into a prompt used by a secondary LLM to refine the skill's description.
  • Ingestion points: evals/evals.json, feedback.json, and direct user input for evaluation queries.
  • Boundary markers: The prompt in improve_description.py uses XML-style tags (<current_description>, <scores_summary>, <skill_content>) to delimit untrusted data.
  • Capability inventory: The skill possesses file-write access, network access (via the Anthropic API client), and local command execution via subprocess.
  • Sanitization: While inputs are structured via JSON, the core natural language content of queries is processed by the LLM without specific sanitization against adversarial instructions.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 18, 2026, 04:17 AM