autogpt-agents

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION] (LOW): Indirect Prompt Injection Surface. The platform is designed to ingest and process untrusted data from external sources such as webhooks and HTTP requests. This data is subsequently used in LLM prompts and custom Python abilities. 1. Ingestion points: SKILL.md (Webhook triggers and HTTP request blocks). 2. Boundary markers: None documented in the provided setup examples. 3. Capability inventory: SKILL.md and troubleshooting.md (LLM execution, custom Python abilities, HTTP requests, and shell execution via Forge). 4. Sanitization: No explicit sanitization or validation logic is shown for external inputs in the code snippets.
  • [EXTERNAL_DOWNLOADS] (LOW): Standard dependency management. The installation guide requires cloning from GitHub and installing packages via npm and poetry. Evidence: Commands such as git clone, npm install, and poetry run are documented in SKILL.md and troubleshooting.md.
  • [COMMAND_EXECUTION] (SAFE): Troubleshooting instructions provide standard commands for local process and service management. Evidence: troubleshooting.md (kill, lsof, systemctl).
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:25 PM