ui-ux-pro-max
Fail
Audited by Gen Agent Trust Hub on Feb 20, 2026
Risk Level: HIGHCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- COMMAND_EXECUTION (HIGH): Unsafe path construction in
scripts/search.pyfor the persistence feature. The script uses user-provided--project-nameand--pagearguments to create directories and files. The sanitization logic only replaces spaces with hyphens, failing to block path traversal sequences like../. This allows a malicious user or an influenced agent to write files to arbitrary locations on the filesystem. - Evidence:
project_slug = args.project_name.lower().replace(' ', '-')and subsequent file creation logic fordesign-system/{project_slug}/MASTER.md. - COMMAND_EXECUTION (MEDIUM): Unverifiable core logic due to missing files. The script
scripts/search.pyimports its primary search and persistence functionality fromcore.pyanddesign_system.py, neither of which are included in the skill. This prevents a complete audit of how data is queried or how the filesystem is modified. - Evidence:
from core import ...andfrom design_system import ...inscripts/search.py. - PROMPT_INJECTION (LOW): Indirect prompt injection surface (Category 8). The skill reads and formats data from multiple CSV files to be consumed by an LLM. It lacks explicit boundary markers or instruction-isolation delimiters in its output formatting, creating a risk if the source data is ever modified to include malicious instructions.
- Ingestion points:
scripts/search.pyreadscharts.csv,colors.csv,web-interface.csv, andjetpack-compose.csvvia the missingcore.pymodule. - Boundary markers: Absent. The
format_outputfunction uses standard Markdown headers but no security-specific delimiters. - Capability inventory: File system write access via the
--persistflag. - Sanitization: Minimal path sanitization and no content sanitization before outputting to the LLM.
Recommendations
- AI detected serious security threats
Audit Metadata