conversation-memory

Pass

Audited by Gen Agent Trust Hub on Apr 14, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill documents architectural patterns for ingesting untrusted user content into persistent memory stores (documented in the 'Tiered Memory System' and 'Entity Memory' sections of SKILL.md). This data is later retrieved and interpolated into LLM prompts (demonstrated in the 'promptWithMemory' function). This architecture is vulnerable to indirect prompt injection, as malicious instructions embedded in stored messages could influence the agent's behavior during future interactions. While the example implementation uses markdown headers as basic boundary markers, it lacks explicit sanitization or instruction-filtering for the retrieved context. The capability to perform LLM completions ('llm.complete') using this unvalidated context forms the primary attack surface.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 14, 2026, 07:26 PM