personize-governance

Pass

Audited by Gen Agent Trust Hub on Mar 14, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: Indirect Prompt Injection Surface. The auto-learning-loop.ts and document-ingestion.ts recipes ingest untrusted external data (git commit messages and local document files) and pass them into LLM prompts via client.ai.prompt to extract learnings.
  • Ingestion points: Git log output (recipes/auto-learning-loop.ts) and local folder content (recipes/document-ingestion.ts).
  • Boundary markers: The prompt templates use simple string interpolation without robust delimiters or explicit instructions to ignore embedded commands within the ingested text.
  • Capability inventory: The skill has extensive capabilities to create, update, and delete organizational guidelines via the Personize API, as well as file system write access for local configuration.
  • Sanitization: No sanitization or filtering of external content is performed before processing.
  • [COMMAND_EXECUTION]: Local shell command execution. The auto-learning-loop.ts script executes the git command using child_process.spawnSync to retrieve commit logs. The use of an arguments array rather than a single command string prevents standard shell injection vulnerabilities.
  • [SAFE]: Secure path handling. The recipes/ide-governance-bridge.ts script includes logic to prevent path traversal when generating local documentation files by ensuring the output path resides within the current working directory. Additionally, all external network requests are directed to the vendor's verified API endpoints (agent.personize.ai).
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 14, 2026, 07:00 AM