debug
Warn
Audited by Gen Agent Trust Hub on Feb 15, 2026
Risk Level: MEDIUMEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS] (LOW): The skill recommends installing 'google-generativeai' and 'gh'. These are trusted tools from established organizations (Google and GitHub).
- [COMMAND_EXECUTION] (MEDIUM): The workflow relies on a 'gemini' executable that is not installed by the listed prerequisites. Using unverified binaries from unspecified sources is a security risk as they could perform arbitrary actions on the system.
- [PROMPT_INJECTION] (LOW): (Category 8: Indirect Prompt Injection) The skill ingests untrusted data from GitHub issues via 'gh search issues' and interpolates it into prompts.
- Ingestion points: Output from 'gh search issues' is passed directly into the LLM prompt.
- Boundary markers: None identified; the content is directly concatenated into the prompt string.
- Capability inventory: The skill provides text-based diagnostics and code fix recommendations but does not automate execution of those fixes.
- Sanitization: None; external content is used without filtering.
- Risk: A malicious GitHub issue could contain instructions designed to manipulate the agent's reasoning, although the impact is limited by the lack of autonomous execution capabilities in the provided scripts.
Audit Metadata