autogpt-agents
Pass
Audited by Gen Agent Trust Hub on Mar 28, 2026
Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill documentation provides instructions to fetch the AutoGPT source code from its official repository on GitHub.
- [COMMAND_EXECUTION]: The setup and maintenance of the platform require executing various shell commands, including Docker Compose for container management, NPM for frontend assets, and Poetry for Python dependency handling.
- [PROMPT_INJECTION]: The platform architecture includes an attack surface for indirect prompt injection, as it is designed to ingest and process data from external webhooks and integrations.
- Ingestion points: The
WebhookHandlerinreferences/advanced-usage.mdand various integration blocks (GitHub, Notion, etc.) inSKILL.mdare points where untrusted data enters the agent context. - Boundary markers: The provided documentation does not explicitly demonstrate the use of delimiters or specific instructions to isolate untrusted input from the agent's core instructions.
- Capability inventory: The platform has extensive capabilities, including executing LLM calls, performing HTTP requests, and interacting with databases, which increases the potential impact of a successful injection.
- Sanitization: No explicit sanitization or payload validation logic is shown in the provided code examples for handling incoming webhook data.
Audit Metadata