frontend-design

Pass

Audited by Gen Agent Trust Hub on Feb 12, 2026

Risk Level: LOW · NO_CODE
Full Analysis

The skill consists of a standard Apache License 2.0 file and a Markdown file (SKILL.md) containing instructions for the AI. No executable scripts or external dependencies are present. The SKILL.md file defines the skill's purpose and provides extensive guidelines for frontend design, including aesthetic principles and technical considerations for generating code (HTML/CSS/JS, React, Vue, etc.).

  1. Prompt Injection: The keywords 'CRITICAL' and 'IMPORTANT' are used within the skill's instructions to emphasize design principles (e.g., 'CRITICAL: Choose a clear conceptual direction', 'IMPORTANT: Match implementation complexity'). These are not used to override the AI's safety guidelines or system prompt.
  2. Data Exfiltration: No commands or patterns indicative of data exfiltration (e.g., curl, wget with sensitive file paths) were found.
  3. Obfuscation: No Base64 encoding, zero-width characters, homoglyphs, or other obfuscation techniques were detected in either file.
  4. Unverifiable Dependencies: The skill does not specify any external package installations (npm install, pip install) or direct downloads from external URLs.
  5. Privilege Escalation: No commands like sudo or chmod for privilege escalation were found.
  6. Persistence Mechanisms: No attempts to establish persistence (e.g., modifying .bashrc, creating cron jobs) were detected.
  7. Metadata Poisoning: The name, description, and license fields in SKILL.md are benign and accurately reflect the skill's purpose.
  8. Indirect Prompt Injection: While any skill processing user input can theoretically be susceptible to indirect prompt injection, this skill itself does not contain any patterns that would facilitate it. The analysis focuses on the skill's own content.
  9. Time-Delayed / Conditional Attacks: No conditional logic based on time, usage, or environment variables was found.
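The static checks above can be sketched as a simple text scan. This is a hypothetical illustration, not the auditor's actual tooling: the pattern lists (`ZERO_WIDTH`, `BASE64_RUN`, `SUSPICIOUS`) are illustrative examples of the categories named in the findings, assuming a plain-text scan of the skill's files.

```python
import re

# Illustrative pattern sets; a real audit pipeline would use a broader rule base.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # zero-width chars / BOM
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")    # long Base64-like runs
SUSPICIOUS = re.compile(
    r"\b(curl|wget|sudo|chmod|crontab|npm install|pip install)\b"
)  # exfiltration, privilege-escalation, persistence, and dependency cues

def scan_skill_text(text: str) -> dict:
    """Return simple findings for obfuscation, exfiltration, and persistence cues."""
    return {
        "zero_width_chars": sorted({c for c in text if c in ZERO_WIDTH}),
        "base64_like_runs": BASE64_RUN.findall(text),
        "suspicious_commands": SUSPICIOUS.findall(text),
    }

# Benign instruction text from the skill yields no findings in any category.
findings = scan_skill_text("CRITICAL: Choose a clear conceptual direction")
print(findings)
```

For SKILL.md, every category in the returned dict comes back empty, which is consistent with the LOW risk rating above.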

Overall, the skill is purely descriptive and instructional, guiding the AI's behavior without introducing any direct security risks through its own content or execution.

Audit Metadata
Risk Level: LOW
Analyzed: Feb 12, 2026, 10:49 PM