frontend-design

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • Prompt Injection (SAFE): Use of the term 'CRITICAL' is strictly instructional for the AI's design process and does not attempt to bypass safety filters or override core system instructions.
  • Data Exposure & Exfiltration (SAFE): The skill does not access sensitive files, environment variables, or perform network requests.
  • Unverifiable Dependencies & Remote Code Execution (SAFE): The skill invokes no package managers and contains no remote-script-execution patterns. It is hosted on a trusted source (anthropics/skills).
  • Privilege Escalation & Persistence (SAFE): No commands related to system permissions or persistent access were detected.
  • Indirect Prompt Injection (SAFE): While the skill processes user requirements to generate code, it performs no complex data ingestion and no automated execution of untrusted external content, so it poses no significant injection risk beyond standard LLM interaction.
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 17, 2026, 05:02 PM