aws-strands-agents-agentcore
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [SAFE] (SAFE): The content is purely educational documentation and boilerplate code for building AI agents. No malicious scripts or exfiltration patterns were detected.
- [Indirect Prompt Injection] (LOW): The documentation describes agents that ingest untrusted user input (e.g., chat queries) and interact with external systems (e.g., databases). This creates a surface for indirect prompt injection, which is acknowledged in the 'Security Considerations' section with recommended mitigations like 'human-in-the-loop' and 'least privilege' IAM roles.
- Ingestion points: User input via `agent(payload["prompt"])` and API requests in `architecture.md`.
- Boundary markers: Not explicitly defined in the code snippets, though system prompts are used to frame execution.
- Capability inventory: Database queries, web searching, and tool execution via the `Agent` class are documented capabilities.
- Sanitization: Not detailed in the code snippets, though the 'Security Patterns' section suggests validation hooks as a primary defense.
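The validation-hook defense noted above can be sketched as a guard that screens the prompt before it reaches the ingestion point `agent(payload["prompt"])`. This is an illustrative assumption, not code from the audited project: the length limit, the pattern list, and the `validate_prompt`/`guarded_invoke` names are hypothetical, and `agent` stands in for any callable agent such as the documented `Agent` class.

```python
import re

# Hypothetical validation hook: the limit and patterns below are
# illustrative assumptions, not part of the audited repository.
MAX_PROMPT_LEN = 4000
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def validate_prompt(prompt: str) -> str:
    """Reject oversized or obviously injected input before agent execution."""
    if len(prompt) > MAX_PROMPT_LEN:
        raise ValueError("prompt exceeds maximum length")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"prompt matched suspicious pattern: {pattern.pattern}")
    return prompt

def guarded_invoke(agent, payload: dict) -> str:
    """Wrap the documented ingestion point agent(payload['prompt']) with validation."""
    prompt = validate_prompt(payload["prompt"])
    return agent(prompt)
```

Pattern matching alone cannot stop determined injection, which is why the report also points to human-in-the-loop review and least-privilege IAM roles as complementary mitigations.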
Audit Metadata