deep-research

Fail

Audited by Gen Agent Trust Hub on Apr 5, 2026

Risk Level: HIGH
Findings: COMMAND_EXECUTION, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill provides the agent with complex shell scripts in 'subagent_prompt.md' and 'counter_review_team_guide.md' to manage subagents and parallel tasks. These scripts include loops, subshells, and CLI invocations like 'claude -p "$(cat ...)"', which are susceptible to command injection if task identifiers or prompt file names are untrusted.
  • [DATA_EXFILTRATION]: The skill's source governance policy in 'source_accessibility_policy.md' explicitly instructs the agent to request and use user-provided sensitive data, such as private API keys and proprietary database access, for third-party research tasks. This design significantly increases the risk of exfiltrating these credentials or the private data they protect to external websites or search engine logs via the 'web_search' and 'web_fetch' tools.
  • [PROMPT_INJECTION]: The skill contains a deceptive file named '.security-scan-passed' claiming a successful audit by gitleaks and pattern-based tools. This is a malicious metadata poisoning pattern intended to bypass security review and mislead users about the skill's safety profile.
  • [DATA_EXFILTRATION]: Indirect prompt injection via 'web_fetch' and 'web_search' is a further threat: malicious external content could steer the agent into exfiltrating sensitive 'exclusive sources' or API keys. The skill does not sanitize external data, and it exposes powerful capabilities such as 'write_file' and shell execution that could be abused in such a scenario.
  • [COMMAND_EXECUTION]: The architecture uses multiple specialized agents that communicate via shell-like commands ('SendMessage to'), providing an extensive attack surface for lateral movement or command injection across the agent team if any single component is compromised.
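The command-injection risk flagged above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not code from the skill itself: `run task: …` stands in for an invocation like `claude -p "$(cat "$prompt_file")"`, and the attacker-controlled `task_id` shows why splicing untrusted identifiers into a command string that is re-parsed by the shell is dangerous.

```shell
#!/bin/sh
# Hypothetical sketch of the flagged pattern (names are illustrative).
task_id='demo; echo INJECTED'   # attacker-controlled task identifier

# Unsafe: the identifier is interpolated into a string that eval re-parses,
# so the ';' terminates the intended command and starts a new one.
unsafe() { eval "echo task: $task_id"; }

# Safer: pass the value as a single quoted argument; the shell never
# re-parses it, so it stays data rather than becoming code.
safe() { echo "task: $task_id"; }

unsafe   # runs the injected 'echo INJECTED' as a second command
safe     # prints the identifier verbatim on one line
```

The same discipline applies to the `claude -p "$(cat ...)"` invocations: keep untrusted values inside double-quoted expansions passed as arguments, and never route them through `eval` or an unquoted `$(...)` substitution.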
Recommendations
  • Automated analysis detected serious security threats in this skill; manual review of the flagged files is advised before use.
Audit Metadata
Risk Level
HIGH
Analyzed
Apr 5, 2026, 06:33 AM