# Skill Seekers
Convert documentation websites, GitHub repositories, and PDFs into Claude AI skills automatically.
## Quick Reference
### Installation

```bash
pip install skill-seekers
```
### Core Commands

```bash
# Scrape documentation website
skill-seekers scrape --config configs/react.json
skill-seekers scrape --url https://docs.example.com --name myskill

# Scrape GitHub repository
skill-seekers github --repo facebook/react

# Extract from PDF
skill-seekers pdf --pdf docs/manual.pdf --name myskill

# Combine multiple sources (docs + GitHub + PDF)
skill-seekers unified --config configs/react_unified.json

# Enhance the skill with AI
skill-seekers enhance output/myskill/

# Package into .zip for Claude
skill-seekers package output/myskill/
```
### Complete Workflow

```bash
# 1. Scrape documentation
skill-seekers scrape --url https://react.dev --name react

# 2. Enhance with AI (optional but recommended)
skill-seekers enhance output/react/

# 3. Package into zip
skill-seekers package output/react/

# 4. Upload output/react.zip to Claude at https://claude.ai/skills
```
## Key Features
| Feature | Description |
|---|---|
| Doc Scraping | Scrape any documentation website |
| GitHub Scraping | Extract code, APIs, issues from repos |
| PDF Extraction | Extract text, tables, images from PDFs |
| Unified Scraping | Combine docs + code + PDF in one skill |
| Conflict Detection | Find discrepancies between docs and code |
| AI Enhancement | Improve SKILL.md quality automatically |
| Async Mode | 2–3x faster scraping with the `--async` flag |
## Output Structure

```
output/
├── myskill_data/        # Raw scraped data (cached)
└── myskill/             # Built skill directory
    ├── SKILL.md         # Main skill file (required)
    └── references/      # Categorized documentation
        ├── index.md
        ├── api.md
        └── ...
```
## Available Presets

```bash
skill-seekers scrape --config configs/godot.json    # Godot Engine
skill-seekers scrape --config configs/react.json    # React
skill-seekers scrape --config configs/vue.json      # Vue.js
skill-seekers scrape --config configs/django.json   # Django
skill-seekers scrape --config configs/fastapi.json  # FastAPI
```
## Config File Structure

```json
{
  "name": "myframework",
  "description": "When to use this skill",
  "base_url": "https://docs.myframework.com/",
  "selectors": {
    "main_content": "article",
    "title": "h1",
    "code_blocks": "pre code"
  },
  "url_patterns": {
    "include": ["/docs", "/guide"],
    "exclude": ["/blog", "/about"]
  },
  "rate_limit": 0.5,
  "max_pages": 500
}
```
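The `url_patterns` block decides which discovered links get crawled. A hedged sketch of how include/exclude filtering like this could behave (illustrative code with made-up function names, not the tool's internals):

```python
import json

# Same shape as the config above, trimmed to the fields this sketch uses.
config = json.loads("""{
  "base_url": "https://docs.myframework.com/",
  "url_patterns": {
    "include": ["/docs", "/guide"],
    "exclude": ["/blog", "/about"]
  }
}""")

def should_crawl(url: str, cfg: dict) -> bool:
    """True if the URL is under base_url, matches an include pattern,
    and matches no exclude pattern."""
    if not url.startswith(cfg["base_url"]):
        return False
    path = url[len(cfg["base_url"]) - 1:]  # keep the leading slash
    patterns = cfg["url_patterns"]
    if any(p in path for p in patterns.get("exclude", [])):
        return False
    return any(p in path for p in patterns.get("include", []))

print(should_crawl("https://docs.myframework.com/docs/intro", config))  # True
print(should_crawl("https://docs.myframework.com/blog/2024", config))   # False
```

Exclude patterns win over include patterns here, which matches the usual intent: a `/docs/blog` page should stay out even though `/docs` is included.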
## Navigation

For detailed information, see:

- `references/readme.md` - Full documentation and examples
- `references/quickstart.md` - Getting started guide
- `references/usage.md` - Complete command reference