behavioral-modes
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- Prompt Injection (SAFE): The skill defines task-specific personas (modes) such as BRAINSTORM or IMPLEMENT. The mode instructions do not attempt to bypass safety filters or override system-level constraints.
- Data Exposure & Exfiltration (SAFE): No hardcoded credentials or sensitive file paths were detected. The skill does not use network-enabled tools (like curl or fetch) to send data externally.
- Obfuscation (SAFE): No encoded content, hidden characters, or homoglyph-based evasion techniques were identified in the instructions.
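For illustration only (this is not the audit's actual tooling), a scan for the kinds of obfuscation named above, such as hidden zero-width characters and non-ASCII look-alike letters, might be sketched as:

```python
import unicodedata

# Zero-width / invisible characters commonly used to hide instructions.
HIDDEN = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_suspicious_chars(text):
    """Return (index, char, unicode name) for each suspicious character.

    Flags invisible characters and any non-ASCII letter, which catches
    homoglyph substitutions (e.g., Cyrillic letters posing as Latin ones).
    """
    hits = []
    for i, ch in enumerate(text):
        if ch in HIDDEN or (not ch.isascii() and ch.isalpha()):
            hits.append((i, ch, unicodedata.name(ch, "UNKNOWN")))
    return hits

clean = "Switch to IMPLEMENT mode."
spoofed = "Switch to IMPLEMENT m\u200bode."  # hidden zero-width space

print(find_suspicious_chars(clean))    # []
print(find_suspicious_chars(spoofed))  # flags the zero-width space
```

A real audit would combine a character-level pass like this with decoding checks (base64, URL encoding) before concluding a skill is free of encoded content.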
- Unverifiable Dependencies & Remote Code Execution (SAFE): The skill does not define or install external packages. No remote scripts are downloaded or executed.
- Privilege Escalation (SAFE): The skill does not use commands like sudo or modify system configurations.
- Persistence Mechanisms (SAFE): No mechanisms for maintaining long-term access, such as cron jobs or shell profile modifications, are present.
- Metadata Poisoning (SAFE): The metadata accurately reflects the skill's purpose and does not contain deceptive instructions.
- Indirect Prompt Injection (SAFE): While the skill reacts to user-provided triggers (e.g., "review", "debug"), it lacks the high-privilege capabilities (like file writing or network requests) required to make such injections dangerous.
- Time-Delayed / Conditional Attacks (SAFE): No logic was found that gates actions based on time, environment, or specific execution counts.
- Dynamic Execution (SAFE): The skill does not generate or execute code at runtime; it only provides guidelines for the AI's natural language output.