
godot

Audit result: Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH

Findings: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • REMOTE_CODE_EXECUTION (HIGH): The skill instructs users to install the butler CLI by piping a remote script directly into the shell (curl -L https://itch.io/butler | sh in references/deployment.md). This pattern is highly susceptible to man-in-the-middle (MITM) attacks and provides no integrity verification. The severity is high because it is a direct instruction to execute unverified remote code, even though it serves the skill's deployment purpose.
  • EXTERNAL_DOWNLOADS (MEDIUM): Instructions in references/ci-integration.md and references/playgodot.md direct users to download pre-built Godot binaries from an untrusted third-party GitHub repository (Randroids-Dojo/godot). These binaries are used for game automation without cryptographic checksums or signature verification. Although this is part of the skill's primary functionality (PlayGodot automation), it introduces a risk of executing unverified external code.
  • COMMAND_EXECUTION (MEDIUM): Multiple Python scripts, including scripts/run_tests.py, scripts/export_build.py, and scripts/validate_project.py, execute external binaries via subprocess.run. While these scripts perform some path validation, they rely on environment variables (GODOT, GODOT4) or the system PATH to find executables, which could be subverted in a compromised environment.
  • PROMPT_INJECTION (LOW): The skill exposes a surface for indirect prompt injection through the parsing of external test results.
  • Ingestion points: scripts/parse_results.py parses JUnit XML files from potentially untrusted report directories.
  • Boundary markers: No specific boundary markers or instruction-ignore tags are used during data ingestion.
  • Capability inventory: The skill can execute Godot engine commands and shell scripts through multiple Python wrapper scripts.
  • Sanitization: The XML parsing logic in parse_results.py does not perform strict schema validation or sanitization of input data before processing.
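The COMMAND_EXECUTION finding above can be mitigated by resolving the executable before it reaches subprocess.run. A minimal sketch, assuming the GODOT/GODOT4 environment-variable lookup described in the finding (the function names here are illustrative, not the skill's actual code):

```python
import os
import shutil
import subprocess

def resolve_godot():
    """Resolve the Godot binary to a validated absolute path.

    Checks the GODOT/GODOT4 environment variables first, then falls back
    to PATH, and refuses anything that is not an existing executable file
    instead of passing the raw value straight to subprocess.run.
    """
    for var in ("GODOT", "GODOT4"):
        candidate = os.environ.get(var)
        if candidate:
            path = os.path.realpath(candidate)
            if os.path.isfile(path) and os.access(path, os.X_OK):
                return path
            raise ValueError(f"{var} does not point to an executable: {candidate}")
    found = shutil.which("godot")
    if found is None:
        raise FileNotFoundError("no Godot binary found on PATH")
    return os.path.realpath(found)

def run_godot(args):
    # shell=False (the default for an argument list) plus a fully resolved
    # absolute path avoids PATH-substitution tricks in a compromised environment.
    return subprocess.run([resolve_godot(), *args], check=True, capture_output=True)
```

This does not defend against an attacker who can already write to the environment variables themselves, but it rejects stale or non-executable values early and with a clear error.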
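The missing sanitization noted above could be addressed with a whitelist-based parser. A hypothetical hardening sketch for a parser like scripts/parse_results.py (the field names and limits are assumptions, not the skill's actual schema): only known attributes are read, and free-text fields are truncated and stripped of non-printable characters before they reach any downstream prompt or log.

```python
import xml.etree.ElementTree as ET

MAX_TEXT = 500  # assumed cap on any free-text field

def sanitize(text):
    """Strip non-printable characters and truncate untrusted text."""
    if text is None:
        return ""
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    return cleaned[:MAX_TEXT]

def parse_junit(path):
    """Extract a fixed set of fields from a JUnit XML report."""
    tree = ET.parse(path)
    results = []
    for case in tree.iter("testcase"):
        failure = case.find("failure")
        results.append({
            "name": sanitize(case.get("name")),
            "classname": sanitize(case.get("classname")),
            "failed": failure is not None,
            "message": sanitize(failure.get("message")) if failure is not None else "",
        })
    return results
```

Note that xml.etree.ElementTree itself is not hardened against maliciously constructed XML (entity-expansion attacks and the like); for genuinely untrusted report directories, a defensive parser such as the third-party defusedxml package would be a stronger choice.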
Recommendations
  • Automated analysis detected serious security threats in this skill; review the findings above before installing or running it.
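Both the curl-pipe-to-shell installer and the unverified binary downloads flagged above share one concrete mitigation: download the artifact to disk first and verify a pinned digest before executing it. A minimal sketch, assuming a known-good SHA-256 published out of band (EXPECTED_SHA256 is a placeholder, not a real checksum for butler or the Godot binaries):

```python
import hashlib

EXPECTED_SHA256 = "0" * 64  # placeholder: pin the real digest obtained out of band

def verify_download(path, expected=EXPECTED_SHA256):
    """Raise unless the file at `path` matches the pinned SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large binaries are not loaded into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected:
        raise RuntimeError(f"checksum mismatch for {path}: got {h.hexdigest()}")
    return path
```

Checksums pinned in the skill only help if they come from a channel the attacker cannot also modify; signature verification against a publisher key is stronger where available.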
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 17, 2026, 06:04 PM