codex-plan-reviewer
Warn
Audited by Gen Agent Trust Hub on Mar 19, 2026
Risk Level: MEDIUM
Tags: COMMAND_EXECUTION, PROMPT_INJECTION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
- [COMMAND_EXECUTION]: The script `scripts/codex_review.py` executes the `codex` CLI via `subprocess.run` with the `--full-auto` flag enabled. This configuration allows the tool to automatically run commands or scripts suggested by the model, which presents a significant security risk if the model's output is manipulated by malicious input.
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection: it ingests untrusted markdown plans and prior context logs directly into LLM prompts without sanitization or robust boundary markers.
  - Ingestion points: The `plan_content` and `prior_context` variables in `scripts/codex_review.py` are populated from local files provided as arguments.
  - Boundary markers: Absent; external content is appended after simple text headers like `===== PLAN TO REVIEW =====`.
  - Capability inventory: The skill can execute shell commands via the subagent and the `codex` CLI, and it has write access to the workspace directory.
  - Sanitization: None; external file content is interpolated verbatim into the prompt strings.
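The ingestion and execution pattern described above can be sketched as follows. This is a hypothetical reconstruction for illustration only, assuming the prompt layout and flags named in the findings; the function names (`build_prompt`, `run_review`) are not taken from the actual `scripts/codex_review.py`.

```python
import subprocess

def build_prompt(plan_content: str, prior_context: str) -> str:
    # External file content is interpolated verbatim: no sanitization,
    # no robust boundary markers, only a plain-text header line.
    return (
        "Review the following plan.\n"
        "===== PLAN TO REVIEW =====\n"
        + plan_content
        + "\n===== PRIOR CONTEXT =====\n"
        + prior_context
    )

def run_review(prompt: str) -> str:
    # --full-auto allows commands suggested by the model to run
    # automatically, so instructions injected via plan_content can
    # ultimately reach the shell.
    result = subprocess.run(
        ["codex", "exec", "--full-auto", prompt],
        capture_output=True,
        text=True,
    )
    return result.stdout
```

Because the plan text is concatenated directly into the prompt, any instructions embedded in a malicious plan file arrive at the model with the same standing as the skill's own prompt text.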
- [REMOTE_CODE_EXECUTION]: The combination of processing untrusted external data (markdown plans) and a CLI tool with automatic execution capability (`codex exec --full-auto`) creates a potential path for remote code execution: a malicious plan that injects instructions the model then emits as executable output will have those commands run automatically.
- [EXTERNAL_DOWNLOADS]: The skill instructions recommend globally installing the `@openai/codex` NPM package. While this is an official package from a well-known provider, users should verify its integrity before granting it global execution privileges on their system.
Audit Metadata