
configuring-agent-brain

Audit result: Fail

Audited by Gen Agent Trust Hub on Feb 18, 2026

Risk Level: HIGH
Findings: REMOTE_CODE_EXECUTION, CREDENTIALS_UNSAFE, PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
  • REMOTE_CODE_EXECUTION (HIGH): The file references/installation-guide.md includes commands that pipe a remote script directly into the shell: curl -LsSf https://astral.sh/uv/install.sh | sh. Although this pattern is common, it executes unreviewed remote code from a source not on the trusted whitelist; a download-and-verify sketch follows this list.
  • CREDENTIALS_UNSAFE (HIGH): The file references/troubleshooting-guide.md instructs users to append their OpenAI API key in plaintext to their shell profile: echo 'export OPENAI_API_KEY="sk-proj-..."' >> ~/.bashrc. This persists the credential unencrypted on disk and is an unsafe storage mechanism; a sketch of environment-based key handling follows this list.
  • PROMPT_INJECTION (LOW): The skill exposes a surface for indirect prompt injection via document indexing (agent-brain index). Evidence: (1) ingestion points: document files (.md, .txt, .pdf) via CLI indexing; (2) boundary markers: absent; (3) capability inventory: network requests (OpenAI API), file system access, and process management; (4) sanitization: not specified in the documentation. A boundary-marker sketch follows this list.
  • COMMAND_EXECUTION (LOW): Both files contain numerous commands for process termination (kill -9), system-level permission changes (chmod 755), and environment modification, which represent a significant attack surface if misused.
  • EXTERNAL_DOWNLOADS (LOW): The skill installs several third-party Python packages (FastAPI, ChromaDB, LlamaIndex, etc.) from external registries during the setup process.
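
For the REMOTE_CODE_EXECUTION finding, the sketch below shows a download-and-verify flow as a point of comparison with curl | sh. It is a minimal illustration, not the skill's installer: the EXPECTED_SHA256 value is a placeholder assumption and must be replaced with a checksum obtained from a trusted channel.

    # Sketch: fetch the installer, verify a pinned checksum, then run it.
    # EXPECTED_SHA256 is a placeholder assumption, not a published checksum
    # for the uv installer.
    import hashlib
    import subprocess
    import urllib.request

    INSTALLER_URL = "https://astral.sh/uv/install.sh"
    EXPECTED_SHA256 = "<pinned-checksum-from-a-trusted-source>"

    def install_uv() -> None:
        script = urllib.request.urlopen(INSTALLER_URL, timeout=30).read()
        digest = hashlib.sha256(script).hexdigest()
        if digest != EXPECTED_SHA256:
            raise RuntimeError(f"installer checksum mismatch: {digest}")
        # Only after the content is verified is it handed to a shell.
        subprocess.run(["sh"], input=script, check=True)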
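
For the CREDENTIALS_UNSAFE finding, the sketch below reads the key from the process environment at runtime instead of persisting it in ~/.bashrc. The get_openai_key helper is a hypothetical name for illustration and is not part of the audited skill.

    # Sketch: resolve the API key from the environment at runtime instead of
    # appending it in plaintext to a shell profile. get_openai_key is a
    # hypothetical helper, not part of the audited skill.
    import os

    def get_openai_key() -> str:
        key = os.environ.get("OPENAI_API_KEY")
        if not key:
            raise RuntimeError(
                "OPENAI_API_KEY is not set; export it for this session only "
                "or load it from a secrets manager, not from ~/.bashrc"
            )
        return key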
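
For the PROMPT_INJECTION finding, the sketch below shows one way to place boundary markers around indexed document text before it reaches a prompt. The wrap_untrusted function and its delimiter strings are illustrative assumptions; the audited skill documents no such sanitization.

    # Sketch: wrap untrusted document text in explicit boundary markers before
    # it is placed into a prompt. wrap_untrusted and the delimiters are
    # illustrative assumptions, not part of the audited skill.
    BEGIN = "<<<UNTRUSTED_DOCUMENT>>>"
    END = "<<<END_UNTRUSTED_DOCUMENT>>>"

    def wrap_untrusted(text: str) -> str:
        # Strip any occurrence of the markers from the content so indexed
        # documents cannot forge or close the boundary themselves.
        cleaned = text.replace(BEGIN, "").replace(END, "")
        return f"{BEGIN}\n{cleaned}\n{END}"
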
Recommendations
  • Automated analysis detected serious security threats; review the flagged files and the findings above before installing or running this skill.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 18, 2026, 03:01 PM