
council

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Findings: COMMAND_EXECUTION, PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
  • PROMPT_INJECTION (LOW): The skill is vulnerable to Indirect Prompt Injection as it grants sub-agents access to the full repository context and explicitly encourages reading codebase files.
  • Ingestion points: scripts/counsel.py (line 97) instructs personas to 'Read any files you need' from the current working directory.
  • Boundary markers: Absent. User-provided questions and codebase content are concatenated into prompts without delimiters or warnings to ignore embedded instructions.
  • Capability inventory: The script executes claude, gemini, and codex CLIs. Notably, codex is invoked with a --full-auto flag, which may enable autonomous tool use or execution loops.
  • Sanitization: No sanitization or validation is performed on the user question or the content of the files read from the repository.
  • COMMAND_EXECUTION (LOW): The skill executes external CLI tools via subprocess.run. Although it passes argument lists (mitigating shell injection), it unsets the CLAUDECODE environment variable to bypass a safety constraint intended to prevent recursive/nested agent sessions.
  • DATA_EXFILTRATION (LOW): The skill is designed to transmit repository context to external LLM providers (Anthropic, Google, etc.). While this is the intended primary purpose, it constitutes a data exposure surface if the codebase contains sensitive information or secrets.
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 06:34 PM