file_explorer

Audit Result: Fail

Audited by Gen Agent Trust Hub on Feb 15, 2026

Risk Level: HIGH
Tags: PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
  • [PROMPT_INJECTION] (HIGH): The skill is highly susceptible to Indirect Prompt Injection.
  • Ingestion points: The safe_read function in explorer.py reads content from any file in the local project directory.
  • Boundary markers: The output is printed directly to the agent's context without any delimiters or 'ignore instructions' warnings.
  • Capability inventory: While this script is read-only, AI agents using a 'file explorer' typically possess write or execute capabilities in the same environment, making injection highly dangerous.
  • Sanitization: None. The script does not escape or filter content before providing it to the agent.
  • [DATA_EXFILTRATION] (MEDIUM): Inconsistent security controls allow access to sensitive data.
  • Finding: While list_tree and search_files use is_ignored() to hide sensitive files like .env, .git/config, and .venv, the safe_read function does not check this list. Any file inside the working directory can be read if the path is guessed or known, leading to the exposure of hardcoded credentials or environment secrets.
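The inconsistency above can be closed by routing safe_read through the same is_ignored() deny-list that list_tree and search_files already use, plus a path-traversal check. A minimal sketch follows; the function names safe_read and is_ignored come from the audited explorer.py, but their bodies and the IGNORED set here are assumptions, not the skill's actual implementation.

```python
from pathlib import Path

# Assumed deny-list mirroring what list_tree/search_files reportedly hide.
IGNORED = {".env", ".git", ".venv"}

def is_ignored(path: Path) -> bool:
    """Return True if any component of the path is on the deny-list."""
    return any(part in IGNORED for part in path.parts)

def safe_read(path: str, root: str = ".") -> str:
    """Read a file only if it stays inside the project root and is not ignored."""
    root_dir = Path(root).resolve()
    target = (root_dir / path).resolve()
    # Reject paths that resolve outside the working directory (e.g. "../..").
    if root_dir not in target.parents and target != root_dir:
        raise PermissionError(f"Path escapes project root: {path}")
    # Apply the same ignore filter that list_tree/search_files enforce.
    if is_ignored(target.relative_to(root_dir)):
        raise PermissionError(f"Access to ignored path denied: {path}")
    return target.read_text(encoding="utf-8", errors="replace")
```

With this check in place, guessing a path such as .env no longer bypasses the deny-list, because every read goes through the same filter as directory listing and search.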
Recommendations
  • Apply the is_ignored() filter inside safe_read so every read enforces the same deny-list as list_tree and search_files.
  • Wrap file content in explicit boundary markers and instruct the agent to treat it as data, not instructions, before it enters the context.
  • Sanitize or escape file content before returning it to the agent, rather than printing it verbatim.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 15, 2026, 03:00 AM