code-fix-assistant

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGHCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [Indirect Prompt Injection] (HIGH): The skill is highly vulnerable to indirect prompt injection. It is designed to ingest and process untrusted external content (source code files) and has high-privilege capabilities including modifying files and executing system commands.
  • Ingestion points: code_fixer.py uses read_file() to ingest content from the path specified in the file_path input.
  • Boundary markers: There are no boundary markers or instructions to the agent to ignore embedded commands within the files being analyzed.
  • Capability inventory: The skill can execute arbitrary system tools via subprocess.run (in formatters.py and validator.py) and can overwrite files on the local filesystem using write_file() in code_fixer.py when auto_apply is enabled.
  • Sanitization: No sanitization or filtering is performed on the file content before it is processed or returned to the agent's context.
  • [Command Execution] (HIGH): The skill relies heavily on subprocess.run() to invoke various compilers, formatters, and linters (black, prettier, eslint, javac, gofmt, rustfmt, flake8, isort).
  • Risk: While the skill uses list-style arguments to prevent basic shell injection, it invokes complex tools like javac (Java compiler) on code read from the filesystem. Compiling untrusted code can lead to resource exhaustion or exploit vulnerabilities in the compilers themselves.
  • [Data Exposure & Exfiltration] (MEDIUM): The skill allows the agent to read any file the user has permissions for by providing a file_path.
  • Evidence: code_fixer.py contains the read_file method which lacks path validation or sandboxing. An attacker could potentially use an agent equipped with this skill to read sensitive files (e.g., ~/.ssh/id_rsa, .env files) by claiming they are 'legacy code' that needs 'formatting'. The content of these files would then be exposed to the agent's context, where it could be exfiltrated.
  • [Dynamic Execution] (MEDIUM): The validator.py script performs runtime compilation using javac to check the validity of Java code. Compiling code generated or modified by an LLM at runtime is a form of dynamic execution that introduces security risks if the environment is not strictly isolated.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 09:44 AM