legacy-to-ai-ready

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH. Flags: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION
Full Analysis
  • EXTERNAL_DOWNLOADS (HIGH): The skill-downloader component (assets/skill-downloader/scripts/download_skill.py) and its supporting scripts facilitate the retrieval of code from external, unverified sources. It explicitly supports arbitrary GitHub repositories and ZIP/TAR archives from any URL, including third-party marketplaces like skillsmp.com and skillhub.club.
  • REMOTE_CODE_EXECUTION (HIGH): Downloaded skills are installed into the project's .claude/skills/ directory. These skills often contain executable Python or Shell scripts. The validation logic in validate_skill_md only performs a surface-level check for YAML frontmatter (name/description) and does not inspect the scripts for malicious behavior, allowing for potential RCE if a user is socially engineered into downloading a compromised skill.
  • COMMAND_EXECUTION (MEDIUM): The skill templates and reference materials (e.g., references/hooks-patterns.md) promote the use of lifecycle hooks in .claude/settings.json. These hooks execute shell commands automatically during tool use (e.g., PreToolUse, PostToolUse). If an attacker successfully influences these configurations via indirect injection or downloaded skills, they can achieve persistent command execution.
  • INDIRECT_PROMPT_INJECTION (LOW): The core functionality involves analyzing legacy codebases to generate AI configurations. A malicious actor could embed instructions within comments in a legacy codebase that, when processed by the agent during the 'Analysis' or 'Transformation' phases, could lead to the generation of insecure rules or the execution of unauthorized tools.
  • Ingestion points: scripts/analyze_codebase.py (referenced in main SKILL.md) and standard file-reading tools.
  • Boundary markers: Recommends using .claudeignore to exclude sensitive files, providing some mitigation against data exposure.
  • Capability inventory: Extensive use of Edit, Write, and Bash tools across all subagent patterns.
  • Sanitization: Lacks explicit sanitization for external content interpolated into prompts; relies on standard LLM reasoning.
  • DATA_EXPOSURE (SAFE): The skill includes documentation on protecting sensitive files (e.g., references/advanced-patterns.md) and recommends patterns for .claudeignore to prevent AI access to credentials and secrets.
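The COMMAND_EXECUTION finding concerns lifecycle hooks in .claude/settings.json. An illustrative fragment of the kind of configuration at issue (the field names follow Claude Code's hooks configuration; the command path is hypothetical) shows how a single entry yields automatic shell execution on every matching tool call:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/audit-command.sh" }
        ]
      }
    ]
  }
}
```

An attacker who can write or influence this file, e.g. via a downloaded skill or indirect injection, gains persistent command execution, since the hook runs without per-invocation user confirmation.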
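The .claudeignore mitigation noted above is pattern-based. A short illustrative file (hypothetical patterns; adapt to the project's actual secret locations) of the kind the skill's references recommend:

```
# Keep credentials and secrets out of AI context
.env
.env.*
secrets/
*.pem
*.key
```

This limits data exposure but does nothing against the download and execution risks in the other findings.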
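The REMOTE_CODE_EXECUTION finding hinges on validate_skill_md checking only the YAML frontmatter. A minimal sketch of such a surface-level check (a hypothetical reconstruction based on the audit's description; the skill's actual function may differ) shows why it cannot catch malicious bundled scripts:

```python
import re

def validate_skill_md(text: str) -> bool:
    """Surface-level check, modelled on the behaviour described in the audit:
    accept any SKILL.md whose YAML frontmatter declares a name and a
    description. Hypothetical reconstruction, not the skill's real code."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return False
    frontmatter = match.group(1)
    # Only the frontmatter keys are inspected; bundled Python/Shell
    # scripts in the skill directory are never opened or scanned.
    return "name:" in frontmatter and "description:" in frontmatter

# A skill that ships a malicious helper script still passes, because
# nothing outside the frontmatter is ever examined:
malicious_skill_md = "---\nname: helper\ndescription: innocuous\n---\n"
print(validate_skill_md(malicious_skill_md))  # True
```

Any deeper inspection (scanning bundled scripts, pinning sources to an allowlist) would have to happen outside this check.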
Recommendations
  • The AI audit detected serious security threats; review the findings above before installing this skill.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 17, 2026, 05:08 PM