personize-memory

Pass

Audited by Gen Agent Trust Hub on Mar 13, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill processes data from various external sources, creating a potential surface for indirect prompt injection where malicious instructions could be embedded in stored memories.
  • Ingestion points: Data is ingested from CRMs (HubSpot, Salesforce), databases (Postgres, MySQL), and rich text sources (transcripts, emails) via the memory.memorize() and memory.memorizeBatch() methods, as described in SKILL.md and reference/memorize.md.
  • Boundary markers: The provided context assembly recipes (e.g., recipes/context-assembly.ts) do not demonstrate the use of delimiters or instructions to the LLM to ignore potentially malicious content within the retrieved memories when building agent prompts.
  • Capability inventory: The skill facilitates wide-ranging data retrieval through methods like smartRecall and smartDigest, which are intended to provide context for agent generation pipelines.
  • Sanitization: While the SQL templates in templates/postgres.md include warnings against SQL injection, there is no explicit sanitization demonstrated for natural language instructions that might be contained within the ingested content.
  • [EXTERNAL_DOWNLOADS]: The skill utilizes several well-known Node.js libraries to facilitate integrations with third-party services.
  • Evidence: Integration templates and recipes reference packages such as @hubspot/api-client, jsforce, pg, and mysql2 for connecting to external CRM and database systems.
  • [COMMAND_EXECUTION]: The skill includes standard build and deployment configurations for Node.js environments.
  • Evidence: The Dockerfile and github-action.yml configuration files include standard commands such as npm ci and npm run build to prepare the execution environment and compile the source code.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 13, 2026, 03:49 PM