repo-research

Pass

Audited by Gen Agent Trust Hub on Feb 21, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • Prompt Injection (LOW): The generate_search_prompt function in scripts/qa.py interpolates raw user input (question) directly into a prompt template for the LLM. This provides a surface for direct prompt injection where a user could provide instructions to bypass the agent's intended logic.- Indirect Prompt Injection (LOW): The skill is designed to ingest and process data from external GitHub repositories which are inherently untrusted. Instructions embedded in these files could influence the agent's behavior during the analysis process.
  • Ingestion points: scripts/qa.py identifies README.md, package.json, requirements.txt, and source code files as context for answering questions.
  • Boundary markers: Absent. The prompts generated for the LLM do not utilize clear delimiters or specific instructions to disregard embedded commands in the analyzed content.
  • Capability inventory: The skill performs file reading, repository cloning (git clone), and uses grep for pattern matching.
  • Sanitization: No sanitization or escaping of external repository content is performed before it is included in the LLM context.- Command Execution (LOW): As noted in CHANGELOG.md, the skill utilizes the grep utility for code searching via scripts/search.py. While a standard utility, executing shell commands on untrusted file content can lead to argument injection if the implementation does not properly sanitize inputs.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 21, 2026, 04:28 PM