llm-wiki
Pass
Audited by Gen Agent Trust Hub on Apr 12, 2026
Risk Level: SAFE
Risk Categories: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The skill's primary function is to ingest and synthesize untrusted data from the 'raw/' directory, creating an inherent attack surface for indirect prompt injection. A malicious source document could attempt to manipulate the agent's behavior during the ingestion or query phases. The skill mitigates this by requiring a 'Discuss with the user' step where the agent must present key claims and planned modifications for human approval before writing to the wiki.
- Ingestion points: External files (PDFs, Markdown, HTML) processed by 'scripts/ingest_source.py' and the 'wiki-ingestor' agent.
- Boundary markers: The system uses an 'index.md' catalog to define scope, but it does not wrap source content in strict delimiters within prompts.
- Capability inventory: The agents have 'Read', 'Write', 'Edit', and 'Bash' capabilities to manage vault contents.
- Sanitization: Content is not automatically sanitized; the workflow relies on user review of proposed updates.
- [COMMAND_EXECUTION]: The sub-agents and slash commands use 'Bash' to run the skill's internal Python scripts, which rely only on the Python standard library to perform indexing, BM25 searching, and graph analysis over the local markdown files (a minimal BM25 sketch follows this list). Execution is limited to the skill's included scripts and the local vault directory.
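
To make the described script footprint concrete, the following is a minimal sketch of a standard-library-only BM25 search over a local markdown vault. The 'vault/' path, the tokenizer, and the k1/b constants are illustrative assumptions, not the skill's actual implementation; the point is that this kind of indexing and scoring needs no network access, third-party packages, or shell calls beyond reading local files.

```python
"""Minimal BM25 sketch over a directory of Markdown files (stdlib only).

Illustrative assumptions: the "vault/" directory name, the regex tokenizer,
and the K1/B constants are not taken from the skill's scripts.
"""
import math
import re
from collections import Counter
from pathlib import Path

K1, B = 1.5, 0.75  # common BM25 defaults (assumed)


def tokenize(text: str) -> list[str]:
    """Lowercase alphanumeric tokens; a deliberately simple tokenizer."""
    return re.findall(r"[a-z0-9]+", text.lower())


def build_index(vault: Path):
    """Read every *.md file under the vault and count term frequencies."""
    docs = {
        p: Counter(tokenize(p.read_text(encoding="utf-8", errors="ignore")))
        for p in vault.rglob("*.md")
    }
    n_docs = len(docs)
    avgdl = sum(sum(c.values()) for c in docs.values()) / max(n_docs, 1)
    # Document frequency: number of files containing each term.
    df = Counter(term for counts in docs.values() for term in counts)
    return docs, df, n_docs, avgdl


def bm25_search(query: str, docs, df, n_docs, avgdl, top_k: int = 5):
    """Score every document against the query and return the top_k matches."""
    q_terms = tokenize(query)
    scores = {}
    for path, counts in docs.items():
        dl = sum(counts.values())
        score = 0.0
        for t in q_terms:
            tf = counts.get(t, 0)
            if tf == 0:
                continue
            idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf * (K1 + 1) / (tf + K1 * (1 - B + B * dl / avgdl))
        if score > 0:
            scores[path] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    docs, df, n_docs, avgdl = build_index(Path("vault"))
    for path, score in bm25_search("prompt injection mitigation", docs, df, n_docs, avgdl):
        print(f"{score:6.2f}  {path}")
```

A script of this shape only reads files under the vault directory and writes nothing, which is consistent with the limited execution surface this finding describes.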
Audit Metadata