skills/xiaolai/vmark/ai-coding-agents/Gen Agent Trust Hub

ai-coding-agents

Warn

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: MEDIUMPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [Indirect Prompt Injection] (MEDIUM): The documentation describes an attack surface where tools ingest untrusted external data while having the ability to perform side-effect operations. * Ingestion points: Git pull requests (via 'codex review') and web search results (via 'web_search' feature). * Boundary markers: No markers or 'ignore embedded instructions' warnings are documented for the processing of external content. * Capability inventory: Documentation describes shell execution via MCP servers and workspace modification ('workspace-write'). * Sanitization: No sanitization or input validation methods are described.
  • [Privilege Escalation] (MEDIUM): The skill documents configurations and flags that explicitly bypass security and sandbox constraints. * Evidence: Documentation for 'codex --yolo' (described as bypassing all safety and capable of destroying the system) and 'claude --dangerously-skip-permissions' in 'references/comparison-and-edge-cases.md'. * Evidence: Permission profile 'disk-full-read-access' allows reading any file on the system, including sensitive configuration files.
  • [Unverifiable Dependencies] (LOW): The documentation references the installation and execution of external Node.js packages. * Evidence: 'npm i -g @openai/codex' and 'npx @my-org/db-mcp-server'. While the OpenAI package is from a trusted organization, '@my-org' is an unknown source.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Feb 16, 2026, 12:11 AM