db-seeder
Fail
Audited by Gen Agent Trust Hub on Feb 20, 2026
Risk Level: HIGH
Flags: COMMAND_EXECUTION, CREDENTIALS_UNSAFE, DATA_EXFILTRATION, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION] (HIGH): The configuration template 'assets/seed-config-template.yaml' includes a 'hooks' section that allows arbitrary shell command execution (e.g., 'python scripts/update_search_index.py'). This design lets anyone who can influence the configuration file execute arbitrary code on the system.
- [CREDENTIALS_UNSAFE] (HIGH): The 'README.md' and 'references/database-configs.md' provide numerous examples of database connection strings with hardcoded credentials (e.g., 'postgresql://postgres:password@localhost:5432/elios_dev'). This encourages unsafe credential management and increases the likelihood of accidental secret exposure in version control or logs.
- [REMOTE_CODE_EXECUTION] (MEDIUM): The seeding configuration uses strings containing Python lambda expressions (e.g., 'lambda fake: fake.user_name()') to define data generation logic. This implies the underlying implementation uses eval() or exec(), posing a risk of code injection if configuration data is sourced from untrusted inputs.
- [DATA_EXFILTRATION] (MEDIUM): The 'detect_db_config.py' script is designed to scan environment variables, '.env' files, and project structures to extract database credentials. While intended for automation, this capability can be abused to discover and expose sensitive infrastructure secrets.
- [PROMPT_INJECTION] (LOW): This skill presents an indirect prompt injection surface. Because it ingests configuration files (YAML/JSON) that define executable hooks and dynamic code, an attacker who can influence the creation of these files (e.g., by tricking an AI agent into generating a malicious config) could achieve arbitrary command execution.
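The COMMAND_EXECUTION finding can be mitigated by never handing config-supplied hook strings to a shell. The following is a minimal sketch, not db-seeder's actual implementation; the allowlist contents and the `run_hook` name are illustrative assumptions:

```python
import shlex
import subprocess

# Illustrative allowlist of permitted hook executables; a real policy
# would likely pin full paths and arguments as well.
ALLOWED_EXECUTABLES = {"python", "python3"}

def run_hook(command: str) -> None:
    """Run a config-defined hook without shell interpolation."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_EXECUTABLES:
        raise ValueError(f"hook executable not allowed: {argv[:1]}")
    # shell=False blocks pipes, redirects, and command substitution
    subprocess.run(argv, check=True, shell=False)
```

With this pattern, a hook like 'python scripts/update_search_index.py' passes the allowlist check, while a smuggled 'curl ... | sh' is rejected before anything executes.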
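For the CREDENTIALS_UNSAFE finding, the standard remediation is to assemble the connection string from environment variables at runtime instead of hardcoding secrets in documentation or configs. A sketch under assumed variable names (DB_USER, DB_PASSWORD, etc. are illustrative, not defined by db-seeder):

```python
import os

def dsn_from_env() -> str:
    """Build a PostgreSQL DSN from the environment; the password is never hardcoded."""
    user = os.environ.get("DB_USER", "postgres")
    # Raise KeyError if the secret is missing rather than fall back to a default
    password = os.environ["DB_PASSWORD"]
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "elios_dev")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```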
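The REMOTE_CODE_EXECUTION finding follows from how lambda strings in config must be turned into callables. A sketch of the implied risky pattern; `build_generator` is a hypothetical name, not db-seeder's API:

```python
def build_generator(spec: str):
    # eval() executes the config string as arbitrary Python, not just a
    # harmless lambda; a spec like "__import__('os').system(...)" would
    # run with the seeder's privileges at config-load time.
    return eval(spec)

# A safer design maps config keys to pre-registered callables instead of
# evaluating code from the config (generator names here are illustrative):
SAFE_GENERATORS = {"user_name": lambda fake: fake.user_name()}
```

Looking up `spec` in a registry like `SAFE_GENERATORS` confines the config to choosing among vetted generators rather than supplying executable code.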
Recommendations
- Automated analysis detected serious security threats in this skill; review and remediate the findings above before installing or running it.
Audit Metadata