ai-querying-databases
Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGHCREDENTIALS_UNSAFECOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE] (HIGH): The file
examples.mdcontains a hardcoded database connection string with a plaintext password:postgresql://ai_reader:pass@localhost:5432/ecommerce. Hardcoding credentials in examples can lead to accidental deployment in production environments. - [COMMAND_EXECUTION] (MEDIUM): The implementation uses
execute_query(self.engine, sql)to run SQL statements generated by an LLM based on user-provided questions. This is a form of dynamic execution where the instructions (SQL) are created at runtime from untrusted input. While avalidate_sqlplaceholder is mentioned, its logic is not defined, leaving the system vulnerable to SQL injection if the LLM is manipulated. - [DATA_EXFILTRATION] (MEDIUM): The HR assistant example explicitly includes schemas for sensitive data like
salaries,performance_reviews, andemployees. Because the agent can query any table in the schema, it presents a high risk of unauthorized exposure of sensitive personnel data through crafted natural language queries. - [PROMPT_INJECTION] (LOW): The skill is vulnerable to Indirect Prompt Injection (Category 8).
- Ingestion points: The
questionparameter in theforwardmethods ofEcommerceQAandHRQAinexamples.md. - Boundary markers: Absent. The user input is directly passed to the
generate_sqlDSPy module without delimiters or 'ignore' instructions. - Capability inventory: The skill has the capability to execute database queries via
execute_query(file:examples.md). - Sanitization: There is a call to
validate_sql(sql), but no implementation details are provided to confirm its effectiveness against adversarial inputs.
Recommendations
- AI detected serious security threats
Audit Metadata