autonomous-agent-patterns
Warn
Audited by Gen Agent Trust Hub on Apr 14, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONDATA_EXFILTRATIONREMOTE_CODE_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: Section 3.3 implements a 'SandboxedExecution' pattern that uses 'subprocess.run(shell=True)'. Executing shell commands through a shell interpreter increases the risk of command injection if arguments are not properly sanitized.
- [REMOTE_CODE_EXECUTION]: Section 6.1 (MCPAgent) describes a pattern for dynamic tool creation where an LLM generates Python code that is written to a file and then executed ('hot-reloaded'). This self-modifying code behavior allows for arbitrary code execution driven by model outputs.
- [DATA_EXFILTRATION]: The 'ContextManager' class in Section 5.1 includes functionality to fetch content from arbitrary URLs using the 'requests' library. If an agent is tasked with processing untrusted URLs, this could be leveraged for Server-Side Request Forgery (SSRF) or exfiltrating local data.
- [PROMPT_INJECTION]: The skill documents an attack surface for indirect prompt injection by design. 1. Ingestion points: External data enters the context via 'add_file' and 'add_url' (SKILL.md). 2. Boundary markers: The prompt uses markdown code blocks (```) to delimit content, which can be bypassed by malicious input. 3. Capability inventory: The agent patterns include shell command execution ('execute_sandboxed'), file modification ('edit_file'), and dynamic code generation. 4. Sanitization: No sanitization or input validation logic is shown in the context ingestion snippets.
Audit Metadata