ui-ux-pro-max

Fail

Audited by Gen Agent Trust Hub on Feb 20, 2026

Risk Level: HIGHCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • COMMAND_EXECUTION (HIGH): Unsafe path construction in scripts/search.py for the persistence feature. The script uses user-provided --project-name and --page arguments to create directories and files. The sanitization logic only replaces spaces with hyphens, failing to block path traversal sequences like ../. This allows a malicious user or an influenced agent to write files to arbitrary locations on the filesystem.
  • Evidence: project_slug = args.project_name.lower().replace(' ', '-') and subsequent file creation logic for design-system/{project_slug}/MASTER.md.
  • COMMAND_EXECUTION (MEDIUM): Unverifiable core logic due to missing files. The script scripts/search.py imports its primary search and persistence functionality from core.py and design_system.py, neither of which are included in the skill. This prevents a complete audit of how data is queried or how the filesystem is modified.
  • Evidence: from core import ... and from design_system import ... in scripts/search.py.
  • PROMPT_INJECTION (LOW): Indirect prompt injection surface (Category 8). The skill reads and formats data from multiple CSV files to be consumed by an LLM. It lacks explicit boundary markers or instruction-isolation delimiters in its output formatting, creating a risk if the source data is ever modified to include malicious instructions.
  • Ingestion points: scripts/search.py reads charts.csv, colors.csv, web-interface.csv, and jetpack-compose.csv via the missing core.py module.
  • Boundary markers: Absent. The format_output function uses standard Markdown headers but no security-specific delimiters.
  • Capability inventory: File system write access via the --persist flag.
  • Sanitization: Minimal path sanitization and no content sanitization before outputting to the LLM.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 20, 2026, 06:05 AM