multi-ai-research
Warn
Audited by Gen Agent Trust Hub on Apr 9, 2026
Risk Level: MEDIUM (EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION)
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill requires the installation of a third-party CLI tool, @jackwener/opencli, and an associated browser bridge extension from an unverified GitHub repository. This dependency is central to the skill's ability to communicate with external AI models.
- [COMMAND_EXECUTION]: The agent is instructed to execute multiple shell commands, including 'opencli' for research tasks and Python/SQLite scripts for data health checks. These commands involve environment variable manipulation and interactions with the local file system.
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it ingests data from external AI services that process live web content and X (Twitter) feeds. Malicious instructions hidden in those external data sources could potentially influence the research results or the agent's behavior.
- Ingestion points: Data retrieved via 'opencli grok' and 'opencli gemini' from the web/social media (SKILL.md, Phase 3).
- Boundary markers: The skill uses internal templates and structural requirements (Phase 2) but lacks strict cryptographic or non-textual boundary markers for external data.
- Capability inventory: Execution of bash commands, file system access via Python/SQLite, and dispatching multiple internal sub-agents.
- Sanitization: Includes a 'Phase 5' arbitration logic that cross-validates findings and checks for 'potential hallucinations', which serves as a functional mitigation against data poisoning.
Audit Metadata