code-validation
Pass
Audited by Gen Agent Trust Hub on Feb 18, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION] (LOW): The skill is susceptible to indirect prompt injection (Category 8) because it ingests untrusted source code and git diffs which are then evaluated by an LLM in the 'Heuristic Review' phase. Maliciously crafted code or comments could attempt to influence the LLM's validation verdict.
- Ingestion points: Untrusted data enters via
scripts/diff_analyzer.py(git diffs) andscripts/static_analyzer.py(source files). - Boundary markers: No explicit delimiters or instructions to ignore embedded commands are documented for the LLM review phase.
- Capability inventory: The skill executes local scripts and generates reports, but does not have direct network write access in the provided documentation.
- Sanitization: There is no mention of sanitizing or escaping the ingested code content before it is processed by the LLM.
- [COMMAND_EXECUTION] (SAFE): The skill executes local Python scripts (
scripts/diff_analyzer.pyandscripts/static_analyzer.py) that are part of the skill's own directory. This is expected behavior for a deterministic code scanning tool and does not involve executing remote or untrusted code strings.
Audit Metadata