LLM Model Selection Skill
Pass
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill consists of documentation and guidelines for Large Language Model (LLM) selection. It provides strategic advice on matching models (such as Claude Opus, Sonnet, and Haiku) to specific task types, such as architecture design or simple documentation updates.
- [SAFE]: No executable code, scripts, or automated commands are present in the skill files.
- [SAFE]: External references are limited to official documentation for Anthropic and OpenAI models, which are trusted technology providers.
- [SAFE]: The skill defines a safety-oriented 'Warning Protocol' that alerts the user when a task may require a more capable model than the one currently active; this serves as both a utility and a safety feature.
Audit Metadata