natural-language-postgres-presentation

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH
Findings: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, CREDENTIALS_UNSAFE
Full Analysis
  • External Downloads & Remote Code Execution (HIGH): The skill performs a git clone from an untrusted repository (Eng0AI/natural-language-postgres-presentation) followed by pnpm install and pnpm dev. This execution flow allows an untrusted third party to run arbitrary code on the host machine via npm/pnpm lifecycle scripts or the application code itself.
  • Credentials Unsafe (HIGH): Users are instructed to store a POSTGRES_URL (database connection string) and OPENAI_API_KEY in a .env file. Since the code being executed is untrusted, these sensitive credentials are at high risk of exfiltration to the repository owner's infrastructure.
  • Indirect Prompt Injection (LOW): The core functionality involves converting natural language to SQL. This presents an attack surface where maliciously crafted user prompts could lead to unauthorized database operations if the underlying LLM logic is not properly constrained.
  • Ingestion points: Natural language user input via the Next.js interface.
  • Boundary markers: None visible in the provided setup documentation.
  • Capability inventory: Full PostgreSQL database access via POSTGRES_URL.
  • Sanitization: No evidence of sanitization or SQL generation safety measures in the documentation.
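The sanitization gap above is the crux of the prompt-injection finding: nothing in the documentation constrains what SQL the LLM may emit. A minimal guardrail of the kind that is missing might look like the following TypeScript sketch; the function name and the denylist regex are illustrative assumptions, not code from the audited repository:

```typescript
// Illustrative read-only SQL guardrail (not taken from the audited skill).
// Rejects anything that is not a single SELECT statement and blocks common
// data-modifying keywords before the query reaches PostgreSQL.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|truncate|grant|create|copy)\b/i;

function isSafeSelect(sql: string): boolean {
  const stmt = sql.trim().replace(/;+\s*$/, ""); // drop trailing semicolons
  if (stmt.includes(";")) return false;          // no stacked statements
  if (!/^select\b/i.test(stmt)) return false;    // allow only SELECT
  return !FORBIDDEN.test(stmt);                  // block write/DDL keywords
}

console.log(isSafeSelect("SELECT name FROM users WHERE id = 1")); // true
console.log(isSafeSelect("SELECT 1; DROP TABLE users"));          // false
```

A denylist like this is a last-resort filter, not a substitute for a read-only database role; a skill with full POSTGRES_URL access should have both.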
Recommendations
  • The automated audit detected serious security threats. Do not run this skill with production credentials: review the repository source before installing, disable package lifecycle scripts during installation, run the app in a sandboxed environment, and supply only throwaway or least-privilege database credentials and API keys.
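Consistent with the recommendation above, a cautious evaluator could fetch and inspect the project without letting any install-time code run. The commands below are a hedged sketch (the GitHub host for the Eng0AI/natural-language-postgres-presentation repository is assumed, not stated in the audit):

```shell
# Clone for inspection only; do not run `pnpm dev` until the code is reviewed.
git clone https://github.com/Eng0AI/natural-language-postgres-presentation
cd natural-language-postgres-presentation

# Check for install-time hooks that would execute during `pnpm install`:
grep -E '"(pre|post)install"' package.json || echo "no lifecycle hooks declared"

# Install dependencies with lifecycle scripts disabled:
pnpm install --ignore-scripts
```

Disabling lifecycle scripts blocks one execution path; the application code itself still runs on `pnpm dev`, so source review remains necessary.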
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 17, 2026, 06:11 PM