# inference.sh
Run 150+ AI apps in the cloud with a simple CLI. No GPU required.
## Install CLI

```bash
curl -fsSL https://cli.inference.sh | sh
infsh login
```
## Quick Examples

```bash
# Generate an image
infsh app run falai/flux-dev-lora --input '{"prompt": "a cat astronaut"}'

# Generate a video
infsh app run google/veo-3-1-fast --input '{"prompt": "drone over mountains"}'

# Call Claude
infsh app run openrouter/claude-sonnet-45 --input '{"prompt": "Explain quantum computing"}'

# Web search
infsh app run tavily/search-assistant --input '{"query": "latest AI news"}'

# Post to Twitter
infsh app run x/post-tweet --input '{"text": "Hello from AI!"}'

# Generate a 3D model
infsh app run infsh/rodin-3d-generator --input '{"prompt": "a wooden chair"}'
```
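Inputs can also be supplied from a JSON file instead of an inline string, as the Commands table below shows with `--input input.json`. A minimal sketch, assuming the app accepts the same `prompt` field used in the inline examples above:

```bash
# Write the input payload to a file, then pass the file path to --input.
cat > input.json <<'EOF'
{"prompt": "a cat astronaut"}
EOF

infsh app run falai/flux-dev-lora --input input.json
```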
## Commands

| Task | Command |
|---|---|
| List all apps | `infsh app list` |
| Search apps | `infsh app list --search "flux"` |
| Filter by category | `infsh app list --category image` |
| Get app details | `infsh app get google/veo-3-1-fast` |
| Generate sample input | `infsh app sample google/veo-3-1-fast --save input.json` |
| Run app | `infsh app run google/veo-3-1-fast --input input.json` |
| Run without waiting | `infsh app run <app> --input input.json --no-wait` |
| Check task status | `infsh task get <task-id>` |
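Put together, a typical non-blocking run looks like the sketch below: generate a sample input, submit the job with `--no-wait`, and poll the task later. `<task-id>` is a placeholder for the ID reported when the run is submitted.

```bash
# Generate a valid sample input for the app and save it to input.json.
infsh app sample google/veo-3-1-fast --save input.json

# Submit the run without waiting for it to finish.
infsh app run google/veo-3-1-fast --input input.json --no-wait

# Later, check on the task using the ID returned when the run was submitted.
infsh task get <task-id>
```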
## What's Available
| Category | Examples |
|---|---|
| Image | FLUX, Gemini 3 Pro, Grok Imagine, Seedream 4.5, Reve, Topaz Upscaler |
| Video | Veo 3.1, Seedance 1.5, Wan 2.5, OmniHuman, Fabric, HunyuanVideo Foley |
| LLMs | Claude Opus/Sonnet/Haiku, Gemini 3 Pro, Kimi K2, GLM-4, any OpenRouter model |
| Search | Tavily Search, Tavily Extract, Exa Search, Exa Answer, Exa Extract |
| 3D | Rodin 3D Generator |
| Twitter/X | post-tweet, post-create, dm-send, user-follow, post-like, post-retweet |
| Utilities | Media merger, caption videos, image stitching, audio extraction |
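To explore a category from this table, list it and then inspect a specific app, using the commands documented above:

```bash
# Browse video apps, then look at one app's details before running it.
infsh app list --category video
infsh app get google/veo-3-1-fast
```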
## Related Skills

```bash
# Image generation (FLUX, Gemini, Grok, Seedream)
npx skills add inference-sh/skills@ai-image-generation

# Video generation (Veo, Seedance, Wan, OmniHuman)
npx skills add inference-sh/skills@ai-video-generation

# LLMs (Claude, Gemini, Kimi, GLM via OpenRouter)
npx skills add inference-sh/skills@llm-models

# Web search (Tavily, Exa)
npx skills add inference-sh/skills@web-search

# AI avatars & lipsync (OmniHuman, Fabric, PixVerse)
npx skills add inference-sh/skills@ai-avatar-video

# Twitter/X automation
npx skills add inference-sh/skills@twitter-automation

# Model-specific
npx skills add inference-sh/skills@flux-image
npx skills add inference-sh/skills@google-veo

# Utilities
npx skills add inference-sh/skills@image-upscaling
npx skills add inference-sh/skills@background-removal
```
## Reference Files

### Documentation
- Agent Skills Overview - The open standard for AI capabilities
- Getting Started - Introduction to inference.sh
- What is inference.sh? - Platform overview
- Apps Overview - Understanding the app ecosystem
- CLI Setup - Installing the CLI
- Workflows vs Agents - When to use each
- Why Agent Runtimes Matter - Runtime benefits