llm-context
Warn
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: MEDIUMPROMPT_INJECTION
Full Analysis
- Prompt Injection (MEDIUM): High risk of Indirect Prompt Injection due to the consumption of untrusted local data.
- Ingestion points: The skill explicitly directs the agent to read context from the
.llm/directory, specificallytodo.md, 'saved documentation', and 'entire git clones for tools'. - Boundary markers: The instructions lack any requirement for delimiters or specific instructions to ignore embedded prompts within the files found in
.llm/. - Capability inventory: While the skill itself defines guidelines, it references the
@markdown-tasks:tasksskill for execution and implies the agent will use tools cloned into the directory. It also involves modifying.git/info/exclude. - Sanitization: There is no mechanism described for sanitizing, validating, or escaping the content read from the untracked
.llm/directory before it is processed as context for the agent's reasoning or decision-making.
Audit Metadata