llm-wiki
Pass
Audited by Gen Agent Trust Hub on Apr 13, 2026
Risk Level: SAFE | EXTERNAL_DOWNLOADS | DATA_EXFILTRATION | PROMPT_INJECTION
Full Analysis
- [SAFE]: This skill consists entirely of conceptual documentation and architectural guidelines in Markdown format. It does not include any executable scripts, binaries, or code files.
- [EXTERNAL_DOWNLOADS]: The documentation contains a reference link to a public GitHub Gist by a well-known researcher. This is used as an informational resource for the architecture and does not involve automated downloading or execution of remote content.
- [DATA_EXFILTRATION]: The skill suggests using environment variables (e.g., `OBSIDIAN_SOURCES_DIR`, `CLAUDE_HISTORY_PATH`) to configure local paths for knowledge ingestion. These are standard configuration practices for local file processing and do not indicate unauthorized data exfiltration.
- [PROMPT_INJECTION]: The described architecture involves the ingestion of untrusted local documents into an LLM context. While this creates a potential surface for indirect prompt injection, the skill incorporates mitigation strategies such as provenance markers and human curation. Evidence chain: ingestion occurs via the `OBSIDIAN_SOURCES_DIR` directory (as defined in SKILL.md); boundary markers are not explicitly defined in this architectural skill; the capability inventory includes the file reading and searching primitives (`Read`, `Grep`) mentioned in the Retrieval Primitives section; sanitization of input documents is not specified.
Audit Metadata