natural-language-postgres
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH. Tags: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS] (HIGH): The skill requires cloning code from an untrusted repository (https://github.com/Eng0AI/natural-language-postgres.git) that does not belong to a trusted organization.
- [REMOTE_CODE_EXECUTION] (HIGH): Following the setup instructions (pnpm install and pnpm dev) executes unvetted code from an untrusted external source; both package install scripts and the dev server run arbitrary code from the cloned repository.
- [COMMAND_EXECUTION] (MEDIUM): The setup process involves significant shell command execution, including directory manipulation and script initialization.
- [PROMPT_INJECTION] (HIGH): The skill is highly vulnerable to Indirect Prompt Injection (Category 8) because its core functionality converts untrusted natural language into database queries.
  1. Ingestion points: plain-English user questions.
  2. Boundary markers: none documented.
  3. Capability inventory: PostgreSQL database access and querying.
  4. Sanitization: none documented.
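The prompt-injection finding above hinges on model-generated SQL reaching PostgreSQL without any documented sanitization. A minimal defensive sketch (hypothetical helper name, not part of the audited skill) would gate generated SQL behind a read-only allowlist check before execution:

```typescript
// Hypothetical guard: accept only a single read-only SELECT statement
// before forwarding model-generated SQL to PostgreSQL.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|create|grant|truncate|copy)\b/i;

function isSafeSelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;\s*$/, "");
  if (trimmed.includes(";")) return false;       // reject multi-statement input
  if (!/^select\b/i.test(trimmed)) return false; // must start with SELECT
  return !FORBIDDEN.test(trimmed);               // no write/DDL keywords
}

console.log(isSafeSelect("SELECT name FROM users WHERE id = 1")); // true
console.log(isSafeSelect("SELECT 1; DROP TABLE users"));          // false
console.log(isSafeSelect("DELETE FROM users"));                   // false
```

A keyword filter like this is coarse (it would also reject a legitimate column named "copy"); pairing it with a PostgreSQL role limited to SELECT on the relevant schema would reduce, not eliminate, the blast radius of an injected prompt.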
Recommendations
- Automated analysis detected serious security threats. Do not install this skill without manually reviewing the cloned repository and restricting the database credentials it can use.
Audit Metadata