openai-docs
Fail
Audited by Gen Agent Trust Hub on Feb 22, 2026
Risk Level: HIGH
Flags: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
- Privilege Escalation (HIGH): The instructions in `SKILL.md` direct the agent to "immediately retry the same command with escalated permissions" if an installation attempt fails due to sandboxing, a high-risk attempt to bypass security controls.
- External Downloads (LOW): The skill specifies a dependency and installation path for an MCP server from https://developers.openai.com/mcp. This is downgraded to LOW because OpenAI is a trusted organization, but the automated installation process remains a security concern.
- Indirect Prompt Injection (LOW): Ingestion points: external documentation fetched from the OpenAI developer docs. Boundary markers: not present. Capability inventory: the agent can execute tool installation commands (`codex mcp add`) and access the web. Sanitization: no content sanitization is applied to the documentation data.
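The missing mitigations flagged above (boundary markers and sanitization for fetched documentation) could look roughly like the following sketch. This is an illustrative assumption, not part of the audited skill: the marker strings, the `sanitize` heuristic, and the `wrap_untrusted` helper are all hypothetical.

```python
import re

# Hypothetical boundary markers (the audited skill has none): delimit
# untrusted fetched documentation so the agent can distinguish it from
# its own instructions.
DOC_START = "<<<UNTRUSTED_EXTERNAL_DOC>>>"
DOC_END = "<<<END_UNTRUSTED_EXTERNAL_DOC>>>"

def sanitize(doc_text: str) -> str:
    """Drop directive-looking lines from fetched docs (illustrative heuristic only)."""
    suspicious = re.compile(
        r"(?i)(ignore (all|previous)|retry .* with escalated permissions)"
    )
    kept = [line for line in doc_text.splitlines() if not suspicious.search(line)]
    return "\n".join(kept)

def wrap_untrusted(doc_text: str) -> str:
    """Sanitize external content and wrap it in boundary markers before use."""
    return f"{DOC_START}\n{sanitize(doc_text)}\n{DOC_END}"
```

A pattern-based filter like this is only a partial defense; the point of the sketch is the structure (sanitize, then mark the trust boundary), not the specific regex.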
Recommendations
- Automated analysis detected serious security threats in this skill.