llm-integration
Pass
Audited by Gen Agent Trust Hub on Apr 17, 2026
Risk Level: SAFE
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill's documentation covers installing the Ollama local inference engine via a shell script downloaded from its official website (ollama.ai). Ollama is a well-known service in the AI ecosystem, and the documented download follows its standard installation procedure.
- [COMMAND_EXECUTION]: The skill uses dynamic context injection via shell commands to automatically detect local project configuration, such as model types in dependency files and model paths in environment variables. These commands perform read-only local discovery to support generating configuration files and do not carry out any sensitive or outbound operations.
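The local-discovery pattern described above can be sketched as follows. This is an illustrative sketch, not the skill's actual code: the dependency file checked (`package.json`), the library names, and the use of Ollama's `OLLAMA_MODELS` environment variable are assumptions for the example.

```python
import json
import os
from pathlib import Path


def discover_local_config(project_dir="."):
    """Read-only discovery of local LLM configuration; no network access."""
    config = {}

    # Hypothetical example: scan a dependency file for known LLM libraries.
    pkg = Path(project_dir) / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        config["llm_libraries"] = [
            d for d in deps if d in ("ollama", "openai", "@anthropic-ai/sdk")
        ]

    # Check environment variables for a local model path
    # (OLLAMA_MODELS is the variable Ollama uses for its model directory).
    model_path = os.environ.get("OLLAMA_MODELS")
    if model_path:
        config["model_path"] = model_path

    return config
```

Because the function only reads files and environment variables, it matches the audit's characterization: discovery without sensitive or outbound operations.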
- [SAFE]: The skill emphasizes security best practices for LLM integrations, such as implementing strict schemas for tool definitions, using iteration guards in tool execution loops to prevent infinite cycles, and applying quality gates for model outputs.
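As an illustration of the iteration-guard and strict-schema practices noted above, a minimal tool-execution loop might look like the following. The tool names, limit, and message shapes are hypothetical assumptions for the sketch, not details taken from the audited skill.

```python
MAX_TOOL_ITERATIONS = 10  # iteration guard against infinite tool-call cycles

# Strict allow-list acting as a minimal tool schema (hypothetical tools).
ALLOWED_TOOLS = ("search", "read_file")


def run_tool_loop(call_model, execute_tool, messages):
    """Drive a model/tool loop, stopping after MAX_TOOL_ITERATIONS rounds."""
    for _ in range(MAX_TOOL_ITERATIONS):
        reply = call_model(messages)
        if reply.get("tool_call") is None:
            return reply["content"]  # model produced a final answer

        # Validate the requested tool before executing anything.
        call = reply["tool_call"]
        if call["name"] not in ALLOWED_TOOLS:
            raise ValueError(f"unknown tool: {call['name']}")

        result = execute_tool(call["name"], call["arguments"])
        messages.append({"role": "tool", "content": result})

    # Quality gate: refuse to run forever if the model keeps requesting tools.
    raise RuntimeError("tool loop exceeded MAX_TOOL_ITERATIONS")
```

The hard cap turns a potentially unbounded model/tool cycle into a bounded one, and the allow-list check rejects any tool call that falls outside the declared schema.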
Audit Metadata