llm-wiki

Pass

Audited by Gen Agent Trust Hub on Apr 16, 2026

Risk Level: SAFE
Finding categories: PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
  • [COMMAND_EXECUTION]: The skill requires executing local Python scripts included in the repository, such as scripts/lint_wiki.py, scripts/audit_review.py, and scripts/scaffold.py. These scripts perform file-system operations, including creating directories and reading/writing Markdown files, within the user-specified wiki root.
  • [PROMPT_INJECTION]: The skill is designed to ingest untrusted data from external sources and human feedback, creating a surface for indirect prompt injection.
    1. Ingestion points: untrusted data enters the agent context through files placed in the raw/ directory (e.g., web articles, papers) and the audit/ directory (human feedback).
    2. Boundary markers: the instructions define no delimiters for ingested content and do not direct the agent to disregard commands embedded within it.
    3. Capability inventory: the agent can modify and overwrite files within the wiki/, log/, and outputs/ directories based on its processing of the ingested data.
    4. Sanitization: no sanitization or filtering logic is prescribed for content before it is processed and persisted into the wiki structure.
  • [EXTERNAL_DOWNLOADS]: The documentation encourages installing and using external tools such as qmd and various Obsidian plugins. It also references well-known services and trusted repositories for configuration and guidelines; these are documented as standard operational dependencies.
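The COMMAND_EXECUTION finding turns on whether the repository's scripts actually stay inside the user-specified wiki root. A minimal sketch of the kind of path-confinement check such scripts should perform (the helper name and behavior are hypothetical, not taken from the audited repository):

```python
from pathlib import Path


def resolve_within_root(wiki_root: str, candidate: str) -> Path:
    """Resolve a candidate path and confirm it stays inside wiki_root.

    Hypothetical guard: raises ValueError if the resolved path escapes
    the root (e.g., via '..' segments), otherwise returns it.
    """
    root = Path(wiki_root).resolve()
    target = (root / candidate).resolve()
    if target != root and root not in target.parents:
        raise ValueError(f"path escapes wiki root: {candidate}")
    return target
```

An auditor verifying the skill's scripts would look for an equivalent check before any directory creation or Markdown write.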
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 16, 2026, 07:43 AM