figma
Pass
Audited by Gen Agent Trust Hub on Mar 12, 2026
Risk Levels: SAFE, PROMPT_INJECTION
Full Analysis
- [SAFE]: The skill follows security best practices by accessing sensitive credentials like the Figma API token and webhook secret through environment variables (process.env.FIGMA_TOKEN and process.env.FIGMA_WEBHOOK_SECRET) instead of hardcoding them.
- [SAFE]: All network requests within the provided examples target well-known and trusted services, specifically the official Figma API (api.figma.com) and Amazon S3 (s3.amazonaws.com) for asset hosting.
- [PROMPT_INJECTION]: The skill presents an indirect prompt-injection surface:
  1. Ingestion points: untrusted design data (node names, text content, and styles) is ingested from the Figma API (SKILL.md).
  2. Boundary markers: no delimiters or safety instructions separate design data from code templates.
  3. Capability inventory: the skill uses the Node.js file system module (fs.writeFile) to write generated content to the local disk (SKILL.md).
  4. Sanitization: node properties such as 'name' and 'characters' are interpolated directly into component names, filenames, and JSX content without validation or escaping, which could be exploited if a Figma file contains malicious strings.
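The first two findings describe a pattern that can be sketched as follows. This is an illustrative example, not the skill's actual code: `getFigmaCredentials` and `fetchFigmaFile` are hypothetical helper names, though the environment variable names (`FIGMA_TOKEN`, `FIGMA_WEBHOOK_SECRET`) and the API host (`api.figma.com`) are those cited in the audit.

```javascript
// Sketch of the audited pattern: credentials come from environment
// variables rather than hardcoded strings, and network requests target
// only the official Figma API.
function getFigmaCredentials() {
  const token = process.env.FIGMA_TOKEN;
  const webhookSecret = process.env.FIGMA_WEBHOOK_SECRET;
  if (!token || !webhookSecret) {
    throw new Error("FIGMA_TOKEN and FIGMA_WEBHOOK_SECRET must be set");
  }
  return { token, webhookSecret };
}

async function fetchFigmaFile(fileKey) {
  const { token } = getFigmaCredentials();
  // X-Figma-Token is the Figma REST API's personal-access-token header.
  const res = await fetch(`https://api.figma.com/v1/files/${fileKey}`, {
    headers: { "X-Figma-Token": token },
  });
  if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
  return res.json();
}
```

Failing fast when either variable is missing avoids silently sending unauthenticated requests.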
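The sanitization gap in finding 4 could be mitigated by validating node properties before interpolation. A minimal sketch, assuming nothing about the skill's actual templates; the helper names below are illustrative:

```javascript
// Hypothetical mitigation: constrain an untrusted Figma node name to a
// safe PascalCase identifier before using it as a component name or filename.
function toSafeComponentName(nodeName) {
  // Keep only alphanumerics; everything else becomes a word boundary.
  const words = nodeName.replace(/[^a-zA-Z0-9]+/g, " ").trim().split(" ");
  const pascal = words
    .filter(Boolean)
    .map((w) => w[0].toUpperCase() + w.slice(1))
    .join("");
  // Identifiers cannot start with a digit; fall back to a default if empty.
  return /^[0-9]/.test(pascal) ? "C" + pascal : pascal || "Component";
}

// Hypothetical mitigation: escape text so it cannot break out of JSX.
function escapeJsxText(characters) {
  return characters
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/{/g, "&#123;")
    .replace(/}/g, "&#125;");
}
```

Escaping `{` and `}` in addition to the usual HTML metacharacters matters in JSX, since braces open expression contexts that an attacker-controlled string could otherwise exploit.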
Audit Metadata