simplemem-skill
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
- PROMPT_INJECTION (LOW): Direct Prompt Injection surface in
src/core/answer_generator.py. TheAnswerGenerator.generate_answermethod interpolates the user'squerydirectly into a prompt template (User Question: {query}) without sanitization, which could allow a user to influence the LLM's behavior via the query string. - PROMPT_INJECTION (LOW): Indirect Prompt Injection surface. The skill retrieves stored memories and includes them in the LLM context. If memories contain malicious instructions, they could influence the agent's actions.
- Ingestion points: Data enters the system via the
addandimportcommands inscripts/cli_persistent_memory.py. - Boundary markers: The prompt in
src/core/answer_generator.pyuses simple labels (Relevant Context:) but lacks distinct delimiters or explicit instructions to ignore commands within the context, increasing the risk of instruction leakage. - Capability inventory: The skill primarily performs text synthesis and database lookups; it does not expose dangerous capabilities like arbitrary shell execution or file system writes to the LLM's output.
- Sanitization: No input sanitization or output validation is performed on the retrieved memory content before it is processed by the LLM.
- PROMPT_INJECTION (LOW): The
VectorStore.structured_searchmethod insrc/database/vector_store.pyperforms manual string formatting to build SQL-like where clauses for LanceDB (e.g.,location LIKE '%{safe_location}%'). While it attempts basic single-quote escaping, this is an incomplete protection against query injection for the underlying database engine.
Audit Metadata