denario
Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUMEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONREMOTE_CODE_EXECUTIONPROMPT_INJECTION
Full Analysis
- EXTERNAL_DOWNLOADS (MEDIUM): The installation instructions require downloading code from an unverified GitHub repository (AstroPilot-AI/Denario) and pulling a Docker image from an unverified user (pablovd/denario). These sources are not listed as trusted, posing a potential supply chain risk.\n- COMMAND_EXECUTION (MEDIUM): The skill utilizes a
get_results()method to execute computational experiments. This core functionality implies the dynamic generation and execution of Python code by AI agents based on user-provided research descriptions. Without documented sandboxing, this creates a high-risk surface for arbitrary code execution on the local machine or within a container.\n- REMOTE_CODE_EXECUTION (LOW): The LLM configuration guide includes a piped shell installation command (curl | bash) for the Google Cloud SDK. While the source (google) is trusted, this pattern is inherently insecure. Per the [TRUST-SCOPE-RULE], the severity is downgraded due to the trusted status of the organization.\n- PROMPT_INJECTION (LOW): The skill is vulnerable to Indirect Prompt Injection (Category 8) because it ingests untrusted research data and methodology descriptions through methods likeset_data_description(). Malicious instructions within these inputs could influence agent behavior during experimental code generation. Mandatory Evidence: 1. Ingestion points:set_data_descriptionandset_methodinSKILL.md; 2. Boundary markers: Absent; 3. Capability inventory:get_resultsexecutes computational tools and scripts; 4. Sanitization: Absent.
Audit Metadata