symbolic-equation
Warn
Audited by Gen Agent Trust Hub on Apr 21, 2026
Risk Level: MEDIUM
Tags: COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill's primary function is to discover equations by generating and executing Python code at runtime. The evaluator.py logic described in references/llmsr-patterns.md shows that LLM-proposed code samples are compiled and run within a sandbox environment to determine their fitness scores. This dynamic code generation and execution poses an inherent risk if the LLM produces malicious logic.
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it processes the dataset description and physical context ($0), which are then used to guide the LLM's equation generation.
- Ingestion points: Untrusted data enters the agent context through the $0 dataset description parameter in SKILL.md.
- Boundary markers: There are no explicit boundary markers or instructions to ignore embedded commands in the LLM Instruction Prompt found in references/llmsr-patterns.md.
- Capability inventory: The skill possesses the capability to execute generated Python scripts using the evaluator.py component.
- Sanitization: The skill mitigates these risks by executing programs within a sandbox and applying a 30-second timeout, as specified in evaluator.py and config.py.
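The sandbox-plus-timeout mitigation described above can be sketched as follows. This is a minimal illustration, not the actual evaluator.py: it assumes only that LLM-proposed code is written to a file, run in a separate process, and killed after 30 seconds. The function name `evaluate_program` and the pass/fail return value are hypothetical stand-ins for the skill's real fitness scoring.

```python
import os
import subprocess
import sys
import tempfile

def evaluate_program(program_source: str, timeout_s: float = 30.0):
    """Run an LLM-proposed program in a child process, enforcing a timeout.

    Returns (ok, stdout): ok is False if the program crashed or was
    killed for exceeding the time budget. A real evaluator would parse
    stdout into a fitness score instead of returning it raw.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program_source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout_s,  # the 30-second budget noted in the audit
        )
        return result.returncode == 0, result.stdout
    except subprocess.TimeoutExpired:
        # Runaway or malicious code is terminated rather than blocking the agent.
        return False, ""
    finally:
        os.unlink(path)
```

Note that a timeout alone bounds only execution time; the audit's sandbox claim implies additional isolation (filesystem and network restrictions) that this sketch does not show.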
Audit Metadata