# ruby-llm
<essential_principles>
## How RubyLLM Works

RubyLLM provides one beautiful Ruby API for all LLM providers: the same interface whether you use GPT, Claude, Gemini, or local Ollama models.
### 1. One API for Everything

```ruby
# Chat with any provider - same interface
chat = RubyLLM.chat(model: 'gpt-4.1')
chat = RubyLLM.chat(model: 'claude-sonnet-4-5')
chat = RubyLLM.chat(model: 'gemini-2.0-flash')

# All return the same RubyLLM::Message object
response = chat.ask("Hello!")
puts response.content
```
### 2. Configuration First

Always configure API keys before use:

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.request_timeout = 120
  config.max_retries = 3
end
```
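Because a missing key only surfaces at request time, it can help to fail fast at boot instead. A minimal sketch in plain Ruby (the `require_env` helper is hypothetical, not part of RubyLLM):

```ruby
# Hypothetical boot-time guard (plain Ruby, not a RubyLLM API).
# ENV.fetch's block runs only when the variable is missing, so a
# misconfigured deploy raises immediately instead of at the first request.
def require_env(name)
  ENV.fetch(name) { raise KeyError, "#{name} is not set - configure it before boot" }
end

ENV['OPENAI_API_KEY'] ||= 'sk-placeholder' # placeholder so the demo runs anywhere
puts require_env('OPENAI_API_KEY')
```

Call `require_env` for each provider key you actually use; keys for unused providers can stay unset.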
### 3. Tools Are Ruby Classes

Define tools as `RubyLLM::Tool` subclasses with `description`, `param`, and `execute`:

```ruby
class Weather < RubyLLM::Tool
  description "Get current weather for a location"
  param :latitude, type: 'number', desc: "Latitude"
  param :longitude, type: 'number', desc: "Longitude"

  def execute(latitude:, longitude:)
    # Return structured data, not exceptions
    { temperature: 22, conditions: "Sunny" }
  rescue => e
    { error: e.message } # Let the LLM handle errors gracefully
  end
end

chat.with_tool(Weather).ask("What's the weather in Berlin?")
```
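The return-data-not-exceptions contract can be exercised without any API call. A minimal standalone sketch, using a plain Ruby class (the `RubyLLM::Tool` base and its `description`/`param` DSL are deliberately omitted so this runs without the gem; the `execute` body follows the same pattern):

```ruby
# Standalone sketch of the execute error contract.
class WeatherSketch
  def execute(latitude:, longitude:)
    # Validate input; any raise below is converted into an error hash.
    raise ArgumentError, "latitude out of range" unless (-90..90).cover?(latitude)
    { temperature: 22, conditions: "Sunny" } # stub data, not a real lookup
  rescue => e
    { error: e.message } # the LLM sees the error text and can recover
  end
end

tool = WeatherSketch.new
p tool.execute(latitude: 52.52, longitude: 13.40) # data hash
p tool.execute(latitude: 999, longitude: 0)       # error hash, no exception escapes
```

Returning `{ error: ... }` keeps the tool-call loop alive: the model can apologize, retry with corrected arguments, or ask the user for clarification, whereas an uncaught exception aborts the whole request.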
### 4. Rails Integration with acts_as_chat

Persist conversations automatically:

```ruby
class Chat < ApplicationRecord
  acts_as_chat
end

chat = Chat.create!(model: 'gpt-4.1')
chat.ask("Hello!") # Automatically persists messages
```
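`acts_as_chat` persists into companion `Message` (and, when tools are used, `ToolCall`) models. A minimal sketch of those models, assuming the gem's default model names (adjust if your schema differs):

```ruby
# Companion models acts_as_chat writes into (default names assumed).
class Chat < ApplicationRecord
  acts_as_chat # has_many :messages; tracks which model the chat uses
end

class Message < ApplicationRecord
  acts_as_message # belongs_to :chat; stores role, content, token counts
end

class ToolCall < ApplicationRecord
  acts_as_tool_call # records tool invocations linked to messages
end
```

With these in place, `chat.messages` returns the persisted conversation history after each `ask`.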
### 5. Streaming with Blocks

Real-time responses via blocks:

```ruby
chat.ask("Tell me a story") do |chunk|
  print chunk.content # Print as it arrives
end
```
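The same block can also collect the full reply while rendering incrementally. A minimal sketch with the chunk stream stubbed (`Chunk` here is a stand-in Struct, not a RubyLLM class), so it runs without a provider:

```ruby
# Accumulate streamed chunks into the complete message.
# A real chat.ask(...) { |chunk| ... } yields objects responding to
# #content the same way; here the stream is stubbed for illustration.
Chunk = Struct.new(:content)

full = +""
[Chunk.new("Once "), Chunk.new("upon "), Chunk.new("a time")].each do |chunk|
  print chunk.content   # render incrementally as it "arrives"
  full << chunk.content # keep the whole message for later use
end
puts
puts full
```

Keeping the accumulated string is useful when you need the complete response afterwards, e.g. for logging or persistence, without waiting to stream it a second time.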
</essential_principles>
Ask the user what they want to do:

- Build a new AI feature (chat, embeddings, image generation)
- Add Rails chat integration (acts_as_chat, Turbo Streams)
- Implement tools/function calling
- Add streaming responses
- Debug an LLM interaction
- Optimize for production
- Something else

Wait for the response, then read the matching workflow and follow it exactly.
<verification_loop>
## After Every Change

```shell
# 1. Does it load?
bin/rails console -e test
> RubyLLM.chat.ask("Test")

# 2. Do tests pass?
bin/rails test test/models/chat_test.rb

# 3. Check for errors
bin/rails test 2>&1 | grep -E "(Error|Fail|exception)"
```

Report to user:

- "Config: API keys loaded"
- "Chat: Working with [model]"
- "Tests: X pass, Y fail"
</verification_loop>
<reference_index>
## Domain Knowledge

All in references/:

- Getting Started: getting-started.md
- Core Features: chat-api.md, tools.md, streaming.md, structured-output.md
- Rails: rails-integration.md
- Capabilities: embeddings.md, image-audio.md
- Infrastructure: providers.md, error-handling.md, mcp-integration.md
- Quality: anti-patterns.md
</reference_index>
<workflows_index>
## Workflows
All in workflows/:
| File | Purpose |
|---|---|
| build-new-feature.md | Create new AI feature from scratch |
| add-rails-chat.md | Add persistent chat to Rails app |
| implement-tools.md | Create custom tools/function calling |
| add-streaming.md | Add real-time streaming responses |
| debug-llm.md | Find and fix LLM issues |
| optimize-performance.md | Production optimization |
</workflows_index>