figma
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH
Findings: PROMPT_INJECTION, COMMAND_EXECUTION, CREDENTIALS_UNSAFE
Full Analysis
- PROMPT_INJECTION (HIGH): The skill is highly vulnerable to Indirect Prompt Injection through its core functionality of reading external Figma files.
  - Ingestion points: Multiple scripts (figma_client.py, style_auditor.py, accessibility_checker.py) ingest complete JSON representations of Figma files, including user-controlled layer names and text content.
  - Boundary markers: Absent. The documentation provides no instructions for the agent to treat data from the Figma API as untrusted or to use delimiters to prevent command confusion.
  - Capability inventory: The skill has network access (via requests and aiohttp) and file-write capabilities (exporting assets, generating JSON tokens, and creating HTML reports).
  - Sanitization: None detected. Content from Figma files is processed and used to generate reports and tokens without visible sanitization.
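The two missing controls called out above (boundary markers and sanitization) can be sketched in a few lines. This is an illustrative mitigation, not part of the audited skill; the marker strings and function names (wrap_untrusted, escape_for_report) are hypothetical.

```python
import html
import json

# Hypothetical boundary markers; any distinctive, attacker-strippable pair works.
UNTRUSTED_OPEN = "<<<UNTRUSTED_FIGMA_DATA>>>"
UNTRUSTED_CLOSE = "<<<END_UNTRUSTED_FIGMA_DATA>>>"

def wrap_untrusted(figma_payload: dict) -> str:
    """Serialize Figma API data and wrap it in explicit boundary markers
    so a downstream agent can treat the span as data, not instructions."""
    body = json.dumps(figma_payload, ensure_ascii=False)
    # Remove marker look-alikes an attacker may have embedded in layer names,
    # so the payload cannot fake an early close of the untrusted region.
    body = body.replace(UNTRUSTED_OPEN, "").replace(UNTRUSTED_CLOSE, "")
    return f"{UNTRUSTED_OPEN}\n{body}\n{UNTRUSTED_CLOSE}"

def escape_for_report(text: str) -> str:
    """HTML-escape user-controlled strings (layer names, text nodes)
    before writing them into a generated HTML report."""
    return html.escape(text, quote=True)
```

Wrapping alone does not make injected text safe, but it gives the agent a reliable signal for which spans must never be interpreted as instructions, and escaping closes the report-generation path.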
- COMMAND_EXECUTION (LOW): The skill's operational model relies on executing local Python scripts via the CLI. While these scripts are part of the skill package, they represent a standard command execution surface for local operations.
- CREDENTIALS_UNSAFE (MEDIUM): The skill handles a sensitive FIGMA_ACCESS_TOKEN. The documentation encourages users to store it in a .env file, which is a common vector for credential exposure if the file is accidentally committed or accessed by unauthorized processes.
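A safer pattern than reading a .env file directly is to take the token from the process environment and fail fast when it is absent, leaving secret storage to the shell or a secret manager. A minimal sketch (the helper name get_figma_token is illustrative, not from the skill):

```python
import os

def get_figma_token() -> str:
    """Read FIGMA_ACCESS_TOKEN from the process environment.

    Raises immediately if the variable is unset, so a misconfigured run
    cannot silently fall back to an embedded or committed credential.
    """
    token = os.environ.get("FIGMA_ACCESS_TOKEN")
    if not token:
        raise RuntimeError(
            "FIGMA_ACCESS_TOKEN is not set; export it in your shell or "
            "inject it via a secret manager. Do not commit it in a .env "
            "file tracked by version control."
        )
    return token
```

If a .env file is used for local development, it should be listed in .gitignore so it can never be committed.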
Recommendations
- Automated analysis detected serious security threats in this skill; review the findings above before installing or running it.