simplemem-skill

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
  • PROMPT_INJECTION (LOW): Direct Prompt Injection surface in src/core/answer_generator.py. The AnswerGenerator.generate_answer method interpolates the user's query directly into a prompt template (User Question: {query}) without sanitization, which could allow a user to influence the LLM's behavior via the query string.
  • PROMPT_INJECTION (LOW): Indirect Prompt Injection surface. The skill retrieves stored memories and includes them in the LLM context. If memories contain malicious instructions, they could influence the agent's actions.
  • Ingestion points: Data enters the system via the add and import commands in scripts/cli_persistent_memory.py.
  • Boundary markers: The prompt in src/core/answer_generator.py uses simple labels (Relevant Context:) but lacks distinct delimiters or explicit instructions to ignore commands within the context, increasing the risk of instruction leakage.
  • Capability inventory: The skill primarily performs text synthesis and database lookups; it does not expose dangerous capabilities like arbitrary shell execution or file system writes to the LLM's output.
  • Sanitization: No input sanitization or output validation is performed on the retrieved memory content before it is processed by the LLM.
  • PROMPT_INJECTION (LOW): The VectorStore.structured_search method in src/database/vector_store.py performs manual string formatting to build SQL-like where clauses for LanceDB (e.g., location LIKE '%{safe_location}%'). While it attempts basic single-quote escaping, this is an incomplete protection against query injection for the underlying database engine.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:38 PM