tiling-tree

Pass

Audited by Gen Agent Trust Hub on Mar 8, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The script scripts/tiling_tree.py interpolates the user-supplied problem and criteria variables directly into the prompts used to instruct the LLM sub-agents. Because these inputs are not sanitized or wrapped in boundary markers, a malicious user could provide a problem statement containing instructions that override the agent's behavior.
  • Ingestion points: The problem and criteria arguments are ingested via command-line parameters in scripts/tiling_tree.py.
  • Boundary markers: No delimiters (like XML tags) or 'ignore embedded instructions' warnings are present in the prompt templates _splitter_prompt or _evaluator_prompt.
  • Capability inventory: The skill can invoke parallel LLM instances via invoke_parallel, perform single-agent evaluations via invoke_claude, and write markdown files to the local file system using the --output parameter.
  • Sanitization: There is no evidence of string escaping, validation, or filtering of user-supplied content before it is processed by the model.
  • [DYNAMIC_EXECUTION]: The script uses sys.path.insert to dynamically include /home/claude and /mnt/skills/user/orchestrating-agents/scripts at runtime to resolve dependencies such as claude_client and muninn_utils. While common for dependency management in specific agent environments, loading code from hardcoded absolute paths can be a security concern if the environment is not strictly managed.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 8, 2026, 02:48 PM