analyzing-linux-elf-malware

Warn

Audited by Gen Agent Trust Hub on Mar 15, 2026

Risk Level: MEDIUM
COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The script scripts/agent.py invokes subprocess.run with shell=True and interpolates the unvalidated filepath variable directly into the command string. This pattern appears in the extract_strings, check_packing, and analyze_dynamic_linking functions. An attacker could execute arbitrary shell commands by inducing the agent to analyze a file whose name contains shell metacharacters (e.g., "malware.elf; rm -rf /").
  • [REMOTE_CODE_EXECUTION]: The command injection vulnerability in the analysis script translates directly into a remote code execution risk. If the agent analyzes a binary from an untrusted source, the attacker could use a crafted filename to execute code in the agent's environment.
  • [PROMPT_INJECTION]: The skill exhibits a significant surface for indirect prompt injection because it parses and presents untrusted binary strings to the agent.
  • Ingestion points: Binary files processed by the scripts/agent.py utility, or via the manual commands (e.g., strings) described in SKILL.md.
  • Boundary markers: Absent. There are no delimiters or instructions to help the agent distinguish between malware data and legitimate analysis commands.
  • Capability inventory: The skill documentation and scripts provide access to powerful tools like gdb, strace, and shell execution, which could be abused if an injected instruction is followed.
  • Sanitization: The agent.py script sanitizes neither the input file path nor the extracted strings before they are passed to the shell or surfaced to the agent.
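The unsafe pattern described above, together with two of the mitigations the findings imply (argv-list execution instead of shell=True, and boundary markers around untrusted output), can be sketched as follows. This is an illustrative reconstruction, not code from the audited scripts/agent.py; the function names and the present_strings helper are assumptions.

```python
import shlex
import subprocess


def extract_strings_unsafe(filepath: str) -> str:
    # VULNERABLE: shell=True plus f-string interpolation means a filename
    # such as "malware.elf; rm -rf /" is parsed by the shell, and the
    # trailing command after ";" is executed.
    result = subprocess.run(f"strings {filepath}", shell=True,
                            capture_output=True, text=True)
    return result.stdout


def extract_strings_safe(filepath: str) -> str:
    # SAFER: pass the command as an argv list with shell=False (the
    # default); the filename becomes a single argument and shell
    # metacharacters are never interpreted.
    result = subprocess.run(["strings", filepath],
                            capture_output=True, text=True)
    return result.stdout


def present_strings(raw_output: str) -> str:
    # Wrap untrusted extracted strings in explicit boundary markers so a
    # downstream agent can distinguish malware data from instructions.
    return ("<untrusted_binary_strings>\n"
            f"{raw_output}\n"
            "</untrusted_binary_strings>")


# If a shell really is required, shlex.quote neutralizes metacharacters
# by single-quoting the whole argument:
# shlex.quote("malware.elf; rm -rf /") -> "'malware.elf; rm -rf /'"
```

Note that the argv-list form is the primary fix; shlex.quote is a fallback for cases where shell features (pipes, globbing) are genuinely needed.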
Audit Metadata
Risk Level: MEDIUM
Analyzed: Mar 15, 2026, 12:28 AM