
pglr

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION] (LOW): The skill is susceptible to indirect prompt injection. Malicious instructions stored within database records could be interpreted as commands by the AI agent when retrieved via a pglr query.
  • Ingestion points: src/query/index.ts (the execute, listTables, and describeTable methods return database-controlled content to the agent context).
  • Boundary markers: Absent. Query results are returned as raw JSON, with no delimiters around database-controlled content and no instructions in the agent prompt to treat that content as untrusted data rather than commands.
  • Capability inventory: SKILL.md permits Bash(pglr:*), so the agent can execute any pglr command, including write operations if it is tricked into adding --allow-writes.
  • Sanitization: Absent. While error messages are sanitized (redacted), the actual row data returned from the database is not filtered for malicious prompt patterns.
  • [COMMAND_EXECUTION] (SAFE): The agent only executes the pglr binary; the skill is designed so that the agent never handles or sees the database connection string or credentials.
  • [DATA_EXFILTRATION] (SAFE): Credentials are stored in ~/.pglr/connections.json using 0o600 permissions. The skill includes a dedicated sanitizer.ts that redacts passwords, connection strings, and IP addresses from error messages to prevent accidental exposure.
  • [DYNAMIC_EXECUTION] (SAFE): While the skill executes dynamic SQL, it implements a robust read-only validator (src/security/read-only.ts) that strips comments/strings and blocks write keywords (e.g., INSERT, DROP) and dangerous PostgreSQL functions (e.g., pg_read_file, pg_sleep) unless the --allow-writes flag is explicitly provided.
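The read-only validator described above can be sketched in a few lines of TypeScript. This is an illustrative reconstruction, not pglr's actual src/security/read-only.ts: the names `isReadOnly`, `stripCommentsAndStrings`, `WRITE_KEYWORDS`, and `DANGEROUS_FUNCTIONS` are assumptions, and the keyword lists are abbreviated. The core idea matches the audit finding: strip comments and string literals first (so keywords hidden inside them cannot trigger false positives or be smuggled past a naive substring check), then reject any remaining write keyword or dangerous function token.

```typescript
// Hypothetical sketch of a read-only SQL validator in the style the
// audit describes. Keyword lists are illustrative and incomplete.
const WRITE_KEYWORDS = new Set([
  "INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "CREATE",
  "TRUNCATE", "GRANT", "REVOKE", "COPY",
]);

const DANGEROUS_FUNCTIONS = new Set([
  "PG_READ_FILE", "PG_SLEEP", "PG_WRITE_FILE",
]);

// Remove comments, string literals, and quoted identifiers so that
// keywords inside them are not (mis)matched by the token scan below.
function stripCommentsAndStrings(sql: string): string {
  return sql
    .replace(/--[^\n]*/g, " ")           // line comments
    .replace(/\/\*[\s\S]*?\*\//g, " ")   // block comments
    .replace(/'(?:[^']|'')*'/g, " ")     // single-quoted string literals
    .replace(/"(?:[^"]|"")*"/g, " ");    // double-quoted identifiers
}

// Returns true only if no write keyword or dangerous function remains
// after stripping. A real implementation would also handle dollar-quoted
// strings, multi-statement input, and CTE edge cases.
function isReadOnly(sql: string): boolean {
  const cleaned = stripCommentsAndStrings(sql).toUpperCase();
  const tokens = cleaned.match(/[A-Z_]+/g) ?? [];
  return tokens.every(
    (t) => !WRITE_KEYWORDS.has(t) && !DANGEROUS_FUNCTIONS.has(t),
  );
}
```

Because the scan tokenizes on whole identifier runs, a column named `last_update` does not trip the `UPDATE` check, while `SELECT pg_sleep(10)` is correctly rejected. The `--allow-writes` flag mentioned in the audit would simply bypass this check.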
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:34 PM