llm-tldr
Pass
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The
SKILL.mdfile includes a 'Prompt Architect Overlay' which uses role definition and persona instructions ('prompt-architect-enhanced specialist') to guide the agent's behavior. While this is a form of instruction override, it is intended to ensure deterministic execution of the skill's features. - [PROMPT_INJECTION]: The skill represents a surface for indirect prompt injection because it processes untrusted data (source code) and provides summaries to the AI. If a codebase contains malicious instructions within comments or strings, the agent could potentially be influenced when reading the structured output.
- Ingestion points:
tldr warm,tldr extract, andtldr contextcommands which read all files in a project. - Boundary markers: None explicitly specified in the tool output descriptions; content is delivered as structured markdown/text summaries.
- Capability inventory: The tool can read/write to a local cache directory (
.tldr/), execute CLI commands, and run a background daemon. - Sanitization: No explicit sanitization or filtering of codebase strings/comments is mentioned to prevent LLM instruction obedience.
- [EXTERNAL_DOWNLOADS]: The skill instructions require the installation of the
llm-tldrpackage from the official PyPI registry, which also pulls in dependencies such assentence-transformersandfaiss-cpufor its semantic search functionality. - [COMMAND_EXECUTION]: The workflow involves running various local commands via the
tldrCLI to warm indexes, generate call graphs, and perform semantic searches. It also includes starting and stopping a background daemon process.
Audit Metadata