aws-agentcore
Fail
Audited by Gen Agent Trust Hub on Feb 15, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
- [Indirect Prompt Injection] (HIGH): The framework promotes an architecture that ingests untrusted data from multiple sources and passes it to an LLM with access to high-privilege tools.
- Ingestion points: Input parameters to
agent.invoke(), responses from thesearch_databasetool, and outputs from the mentioned Browser tool. - Boundary markers: Absent. The code examples do not demonstrate the use of XML tags, delimiters, or system instructions to mitigate adversarial content within processed data.
- Capability inventory: The skill explicitly supports a "Code Interpreter" (Category 10), "Browser Tool" (Category 2), and Lambda execution via the Bedrock runtime client.
- Sanitization: Absent. No evidence of input validation or output sanitization is provided in the orchestration or tool-use patterns.
- [Dynamic Execution] (HIGH): The integration of a "Code Interpreter" tool allows for runtime code execution. Without explicit sandboxing or restricted execution environments mentioned, this allows an LLM to execute arbitrary Python code generated from potentially malicious user prompts.
- [Unverifiable Dependencies] (MEDIUM): The skill encourages users to clone and deploy code from
https://github.com/awslabs/amazon-bedrock-agentcore-samples. While official AWS-affiliated labs, theawslabsorganization is not in the predefined Trusted GitHub Organizations list, requiring verification of the remote scripts before execution.
Recommendations
- AI detected serious security threats
Audit Metadata