ai-models
Pass
Audited by Gen Agent Trust Hub on Apr 8, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill serves as static documentation: a reference guide for AI model identifiers and capabilities.
- [SAFE]: Code snippets for Anthropic, OpenAI, Google, Eleven Labs, Replicate, Stability AI, Mistral, and Voyage AI use official SDKs or direct API calls to well-known services.
- [SAFE]: Credentials and API keys are handled securely via environment variables (e.g., process.env.OPENAI_API_KEY), and the provided .env.example template carries explicit warnings against hardcoding real keys or committing them to version control.
- [SAFE]: All external URLs point to official documentation and landing pages of recognized AI technology providers.
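The environment-variable pattern the audit describes can be sketched as follows. This is a minimal illustration, not code from the skill itself; the `requireEnv` helper name and its error message are assumptions introduced here.

```javascript
// Read a credential from the environment instead of hardcoding it.
// Fails fast with a clear error when the variable is absent, so a
// missing key surfaces at startup rather than mid-request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage (assumes OPENAI_API_KEY is set in the shell or loaded
// from a local .env file that is excluded from version control):
// const apiKey = requireEnv("OPENAI_API_KEY");
```

Keeping real values only in an untracked .env file, while committing a placeholder-only .env.example, is what prevents keys from landing in the repository history.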
Audit Metadata