ai-querying-databases

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGHCREDENTIALS_UNSAFECOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [CREDENTIALS_UNSAFE] (HIGH): The file examples.md contains a hardcoded database connection string with a plaintext password: postgresql://ai_reader:pass@localhost:5432/ecommerce. Hardcoding credentials in examples can lead to accidental deployment in production environments.
  • [COMMAND_EXECUTION] (MEDIUM): The implementation uses execute_query(self.engine, sql) to run SQL statements generated by an LLM based on user-provided questions. This is a form of dynamic execution where the instructions (SQL) are created at runtime from untrusted input. While a validate_sql placeholder is mentioned, its logic is not defined, leaving the system vulnerable to SQL injection if the LLM is manipulated.
  • [DATA_EXFILTRATION] (MEDIUM): The HR assistant example explicitly includes schemas for sensitive data like salaries, performance_reviews, and employees. Because the agent can query any table in the schema, it presents a high risk of unauthorized exposure of sensitive personnel data through crafted natural language queries.
  • [PROMPT_INJECTION] (LOW): The skill is vulnerable to Indirect Prompt Injection (Category 8).
  • Ingestion points: The question parameter in the forward methods of EcommerceQA and HRQA in examples.md.
  • Boundary markers: Absent. The user input is directly passed to the generate_sql DSPy module without delimiters or 'ignore' instructions.
  • Capability inventory: The skill has the capability to execute database queries via execute_query (file: examples.md).
  • Sanitization: There is a call to validate_sql(sql), but no implementation details are provided to confirm its effectiveness against adversarial inputs.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 17, 2026, 06:52 PM