pglr
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE · PROMPT_INJECTION · COMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION] (LOW): The skill is susceptible to indirect prompt injection. Malicious instructions stored within database records could be interpreted as commands by the AI agent when retrieved via `pglr query`.
  - Ingestion points: `src/query/index.ts` (the `execute`, `listTables`, and `describeTable` methods return database-controlled content to the agent context).
  - Boundary markers: Absent. The data is returned as raw JSON without explicit 'ignore' instructions or delimiters in the agent prompt.
  - Capability inventory: `Bash(pglr:*)` is allowed in `SKILL.md`, permitting the agent to execute any `pglr` command, including write operations if the agent is tricked into adding `--allow-writes`.
  - Sanitization: Absent. While error messages are sanitized (redacted), the actual row data returned from the database is not filtered for malicious prompt patterns.
- [COMMAND_EXECUTION] (SAFE): The agent executes the `pglr` binary. The skill is designed so that the agent never handles or sees the database connection string or credentials.
- [DATA_EXFILTRATION] (SAFE): Credentials are stored in `~/.pglr/connections.json` with `0o600` permissions. The skill includes a dedicated `sanitizer.ts` that redacts passwords, connection strings, and IP addresses from error messages to prevent accidental exposure.
- [DYNAMIC_EXECUTION] (SAFE): While the skill executes dynamic SQL, it implements a robust read-only validator (`src/security/read-only.ts`) that strips comments and string literals, then blocks write keywords (e.g., INSERT, DROP) and dangerous PostgreSQL functions (e.g., `pg_read_file`, `pg_sleep`) unless the `--allow-writes` flag is explicitly provided.
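To illustrate the validation pattern described for `src/security/read-only.ts`, here is a minimal TypeScript sketch, not the skill's actual code: it strips comments and string literals first (so keywords hidden inside literals do not trigger false positives), then scans the remainder for write keywords and dangerous functions. All names and the exact keyword lists are illustrative assumptions.

```typescript
// Illustrative write keywords and dangerous PostgreSQL functions.
const WRITE_KEYWORDS =
  /\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT|REVOKE|COPY)\b/i;
const DANGEROUS_FUNCTIONS =
  /\b(pg_read_file|pg_write_file|pg_sleep|lo_import|lo_export)\s*\(/i;

// Remove comments and string literals so their contents are not scanned.
function stripCommentsAndStrings(sql: string): string {
  return sql
    .replace(/--[^\n]*/g, " ")           // line comments
    .replace(/\/\*[\s\S]*?\*\//g, " ")   // block comments
    .replace(/'(?:[^']|'')*'/g, "''")    // single-quoted literals
    .replace(/\$\$[\s\S]*?\$\$/g, "''"); // dollar-quoted strings (simplified)
}

// Throw if the statement is not read-only, unless writes are allowed.
function assertReadOnly(sql: string, allowWrites = false): void {
  if (allowWrites) return; // --allow-writes bypasses the check
  const stripped = stripCommentsAndStrings(sql);
  if (WRITE_KEYWORDS.test(stripped)) {
    throw new Error("write statement blocked (use --allow-writes to override)");
  }
  if (DANGEROUS_FUNCTIONS.test(stripped)) {
    throw new Error("dangerous function blocked");
  }
}
```

Under this sketch, `assertReadOnly("DROP TABLE users")` throws, while a `SELECT` whose string literal merely contains the word `DROP` passes, which is why stripping literals before keyword matching matters. A production validator would also need to handle tagged dollar quotes, identifiers in double quotes, and multi-statement input.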
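The error-message redaction attributed to `sanitizer.ts` can be sketched in the same spirit. This is a hypothetical implementation under assumed patterns, not the skill's code: it rewrites connection strings, `password=` parameters, and IPv4 addresses before an error ever reaches the agent context.

```typescript
// Illustrative redaction rules: pattern -> replacement.
const REDACTIONS: Array<[RegExp, string]> = [
  [/postgres(?:ql)?:\/\/[^\s"']+/gi, "[REDACTED_CONNECTION_STRING]"],
  [/password=\S+/gi, "password=[REDACTED]"],
  [/\b\d{1,3}(?:\.\d{1,3}){3}\b/g, "[REDACTED_IP]"],
];

// Apply every redaction rule to an error message in order.
function sanitizeError(message: string): string {
  return REDACTIONS.reduce(
    (msg, [pattern, replacement]) => msg.replace(pattern, replacement),
    message,
  );
}
```

For example, `sanitizeError("connect to 10.0.0.5 failed: postgres://u:pw@db/app")` yields a message with both the address and the connection string replaced by placeholder tokens. Note that, as the audit observes, sanitizing error messages this way does nothing for row data, which is returned unfiltered.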
Audit Metadata