personize-governance
Pass
Audited by Gen Agent Trust Hub on Mar 14, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: Indirect Prompt Injection Surface. The
auto-learning-loop.tsanddocument-ingestion.tsrecipes ingest untrusted external data (git commit messages and local document files) and pass them into LLM prompts viaclient.ai.promptto extract learnings. - Ingestion points: Git log output (
recipes/auto-learning-loop.ts) and local folder content (recipes/document-ingestion.ts). - Boundary markers: The prompt templates use simple string interpolation without robust delimiters or explicit instructions to ignore embedded commands within the ingested text.
- Capability inventory: The skill has extensive capabilities to create, update, and delete organizational guidelines via the Personize API, as well as file system write access for local configuration.
- Sanitization: No sanitization or filtering of external content is performed before processing.
- [COMMAND_EXECUTION]: Local shell command execution. The
auto-learning-loop.tsscript executes thegitcommand usingchild_process.spawnSyncto retrieve commit logs. The use of an arguments array rather than a single command string prevents standard shell injection vulnerabilities. - [SAFE]: Secure path handling. The
recipes/ide-governance-bridge.tsscript includes logic to prevent path traversal when generating local documentation files by ensuring the output path resides within the current working directory. Additionally, all external network requests are directed to the vendor's verified API endpoints (agent.personize.ai).
Audit Metadata