Skill: guidance
Verdict: Warn
Audited by Gen Agent Trust Hub on Apr 4, 2026
Risk Level: MEDIUM
Tags: COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The documentation includes multiple examples of a 'ReAct Agent' pattern (in 'SKILL.md' and 'references/examples.md') that utilizes a 'calculator' tool implemented with Python's 'eval()' function. This function executes arbitrary string input as Python code.
- [REMOTE_CODE_EXECUTION]: Because the input to the 'eval()' call ('action_input') is generated by the language model from untrusted user questions, it is a potential remote code execution vector if the model is compromised or tricked by injected instructions.
- [PROMPT_INJECTION]: The skill exposes an indirect prompt injection attack surface: it takes untrusted user input and processes it through agentic loops without adequate protection.
- Ingestion points: Untrusted data enters the context via the 'question' parameter in the 'react_agent' function and the 'text' parameter in data extraction functions.
- Boundary markers: Absent. The provided patterns use no delimiters or instructions to guard against directives embedded within user data.
- Capability inventory: The skill defines high-risk capabilities including the use of 'eval()' within a tool-calling framework.
- Sanitization: Absent. No input validation, escaping, or output filtering is applied before the model's generated strings are executed by the Python interpreter.
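To make the [COMMAND_EXECUTION] finding concrete, the sketch below shows the risky shape described above (a model-supplied string passed to 'eval()') alongside a safer replacement that validates the expression's AST before evaluating it. The function names and the allowed-operator set are illustrative assumptions; the audited skill's actual 'calculator' implementation is not reproduced here.

```python
import ast
import operator

def calculator_unsafe(action_input: str):
    # The pattern flagged in the audit: any Python the model emits runs here.
    return eval(action_input)

# Safer alternative: parse the string and permit only arithmetic nodes.
_ALLOWED_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculator_safe(action_input: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.operand))
        # Anything else (calls, attribute access, names) is rejected outright.
        raise ValueError(f"Disallowed expression: {ast.dump(node)}")
    return _eval(ast.parse(action_input, mode="eval"))
```

With this guard, a payload such as `__import__('os').system(...)` is rejected at parse-time validation instead of being executed, while ordinary arithmetic like `2 + 3 * 4` still evaluates normally.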
Audit Metadata