mixseek-skills

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • PROMPT_INJECTION (LOW): The skill is susceptible to indirect prompt injection. User-provided prompts are interpolated into Jinja2 templates used to instruct LLM agents without adequate sanitization or boundary delimiters. Evidence Chain: 1. Ingestion points: The user_prompt variable is interpolated in mixseek-prompt-builder/assets/deep-research.toml, default.toml, and single-round.toml. 2. Boundary markers: Absent; the templates use only standard markdown headers which are easily subverted. 3. Capability inventory: The skill can execute local scripts via run-python.sh, create directories/configs via init-workspace.sh, and configure agents with tool access. 4. Sanitization: No escaping or validation is performed on the user-provided prompt before interpolation.
  • COMMAND_EXECUTION (SAFE): The skill utilizes several Bash scripts (detect-python.sh, run-python.sh, init-workspace.sh) for environment detection and file system setup. These scripts are functionally consistent with the skill's stated purpose of workspace management and do not contain obfuscated or malicious code.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 04:49 PM