legacy-modernizer

Pass

Audited by Gen Agent Trust Hub on Feb 23, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill processes external source code for assessment and refactoring, which creates a surface for indirect prompt injection. Malicious content within the analyzed files (such as comments or strings) could theoretically attempt to influence the agent's output or modernization recommendations.\n
  • Ingestion points: The LegacyCodeAnalyzer in references/system-assessment.md recursively reads and parses .py files using ast.parse and f.read().\n
  • Boundary markers: No specific boundary markers or instructions to the LLM to ignore instructions found within analyzed data are implemented in the code-reading utilities.\n
  • Capability inventory: The skill has extensive file system access (read/write via Path) and the ability to execute system commands via subprocess.\n
  • Sanitization: The tool performs static analysis on raw file content without sanitizing or filtering instructions that might be embedded in code comments.\n- [COMMAND_EXECUTION]: The skill includes functionality to execute system-level commands, primarily intended for development and assessment tasks.\n
  • Evidence: In references/system-assessment.md, the identify_hotspots method uses subprocess.run to call git log. While standard for code analysis, this provides a pathway for command execution on the host environment.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 23, 2026, 10:10 AM