mermaid-github-safe-area
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- Prompt Injection (SAFE): The instructions focus exclusively on Mermaid diagram configuration and do not attempt to override AI safety filters or exfiltrate system prompts.
- Data Exposure & Exfiltration (SAFE): No commands for accessing sensitive files (e.g., SSH keys, credentials) or performing network exfiltration were found.
- Obfuscation (SAFE): The skill content is clear and readable with no signs of Base64, zero-width characters, or homoglyph-based evasion.
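The obfuscation check above looks for three concrete signals. As an illustrative sketch only (the patterns, thresholds, and function name are assumptions, not the auditor's actual rules), such a scan might look like:

```python
import base64
import re
import unicodedata

# Zero-width characters commonly used to hide payloads in plain text.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

# Long runs of Base64 alphabet characters; 40 is an illustrative threshold.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def scan_for_obfuscation(text: str) -> list[str]:
    """Flag zero-width characters, decodable Base64 runs, and
    non-ASCII letters that could be homoglyphs."""
    findings = []
    for ch in text:
        if ch in ZERO_WIDTH:
            findings.append(f"zero-width character U+{ord(ch):04X}")
        elif ch.isalpha() and not ch.isascii():
            name = unicodedata.name(ch, "UNKNOWN")
            findings.append(f"non-ASCII letter '{ch}' ({name}): possible homoglyph")
    for match in BASE64_RUN.finditer(text):
        try:
            # validate=True rejects runs that are not well-formed Base64.
            base64.b64decode(match.group(), validate=True)
            findings.append(f"decodable Base64 run at offset {match.start()}")
        except Exception:
            pass
    return findings
```

A clean skill file would produce an empty findings list; any hit would warrant manual review rather than an automatic verdict.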
- Unverifiable Dependencies & Remote Code Execution (SAFE): The skill does not install external packages or download/execute remote scripts.
- Privilege Escalation (SAFE): No use of sudo, chmod, or other privilege modification commands.
- Persistence Mechanisms (SAFE): The skill does not attempt to modify shell profiles or system startup services.
- Metadata Poisoning (SAFE): Name and description accurately match the provided functionality.
- Indirect Prompt Injection (SAFE): The skill treats existing Mermaid configuration blocks as data; since it has no execution capabilities (no shell access or network calls), the risk of injected content causing harm is negligible.

- Time-Delayed / Conditional Attacks (SAFE): No logic was found that triggers malicious behavior based on dates, times, or specific environment variables.
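The time-delayed-attack check above searches for logic that arms itself on a date, time, or environment variable. A minimal sketch of that kind of scan, assuming hypothetical patterns and names (these are not the auditor's actual rules), might be:

```python
import re

# Illustrative patterns for conditional-trigger logic: reading the clock,
# comparing against a date, or branching on environment variables.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"datetime\.(now|today)\s*\(\)"), "reads current date/time"),
    (re.compile(r"if\s+.*date.*[<>]=?"), "compares against a date"),
    (re.compile(r"os\.environ\s*(\[|\.get)"), "branches on an environment variable"),
]

def find_conditional_triggers(content: str) -> list[str]:
    """Return line-numbered hits for patterns that could gate
    behavior on time or environment."""
    hits = []
    for lineno, line in enumerate(content.splitlines(), start=1):
        for pattern, reason in SUSPICIOUS_PATTERNS:
            if pattern.search(line):
                hits.append(f"line {lineno}: {reason}")
    return hits
```

As with the obfuscation scan, a hit here is a prompt for human review, not proof of malice: legitimate code also reads clocks and environment variables.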
- Dynamic Execution (SAFE): The skill provides logic for the LLM to follow when generating text; it does not involve runtime code compilation or unsafe deserialization.