local-llm-provider
Pass
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: SAFEREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The setup instructions in the documentation describe installing Ollama via a piped script:
curl -fsSL https://ollama.com/install.sh | sh. This is a standard installation pattern for the official Ollama service. - [EXTERNAL_DOWNLOADS]: The skill references external resources, including the Ollama installation script and the
vllmpackage available on PyPI, for local inference capabilities. - [COMMAND_EXECUTION]: The skill provides CLI-based entry points for interacting with the local-llm-provider logic, allowing users to run queries and manage model listing through Node.js commands.
- [PROMPT_INJECTION]: The skill processes untrusted user prompts by interpolating them into JSON payloads for LLM provider requests.
- Ingestion points: The
promptvariable in thecompletemethod ofprovider.jsand input viaqueryCLI. - Boundary markers: No delimiters or explicit safety instructions are used to wrap user input.
- Capability inventory: The skill makes network requests to local and remote API endpoints.
- Sanitization: No evidence of input validation or content sanitization before API transmission.
Audit Metadata