moai-foundation-core
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Prompt Injection] (SAFE): No instructions found that attempt to override the AI agent's safety guidelines or system prompts. The skill defines its own command set (/moai:...) for documentation and workflow purposes.- [Data Exposure & Exfiltration] (SAFE): The code snippets demonstrate standard practices for handling environment variables and local file paths within the skill's own directory structure. No exfiltration patterns to external domains were identified.- [Indirect Prompt Injection] (SAFE): The framework is designed to process user input (e.g., 'issue_description') and pass it into agent prompts via the
Task()function. While this creates an attack surface, the provided code is for a development methodology and does not contain malicious behavior. Boundary markers and sanitization are encouraged by the framework's security documentation inmodules/execution-rules.md.- [External Downloads] (SAFE): CI/CD YAML examples reference standard development tools (pytest, bandit, etc.) from trusted package managers. These are instructional examples and not part of the skill's own runtime behavior.- [Command Execution] (SAFE): TheTRUST5Validatorclass includes the use ofsubprocess, but it is presented as a component of a quality validation tool for developers to use in their projects, adhering to the framework's guidelines for automated quality gates.
Audit Metadata