# huggingface-best: HuggingFace Best Model Finder

Finds the best models for a task by querying official HF benchmark leaderboards, enriching results with model size data, filtering for what fits on the user's device, and returning a comparison table with benchmark scores.
## Step 1: Parse the request
Extract from the user's message:
- Task: what they want the model to do (coding, math/reasoning, chat, OCR, RAG/retrieval, speech recognition, image classification, multimodal, agents, etc.)
- Device: hardware constraints (MacBook M-series 8/16/32/64GB unified memory, RTX GPU with VRAM amount, CPU-only, cloud/no constraint, etc.)
If device is not mentioned, skip filtering entirely and return the highest-performing models regardless of size. If the task is genuinely ambiguous, ask one clarifying question.
### Device → max parameter budget
When a device is specified, extract its available memory (unified RAM for Apple Silicon, VRAM for discrete GPUs) and apply:
- fp16 max params (B) ≈ memory (GB) ÷ 2
- Q4 max params (B) ≈ memory (GB) × 2
Examples:
- 16GB → 8B fp16 / 32B Q4
- 24GB VRAM → 12B fp16 / 48B Q4
- 8GB → 4B fp16 / 16B Q4
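The arithmetic is simple enough to verify inline in the shell; `mem_gb` below is a hypothetical variable holding the extracted memory figure:

```bash
# Rough parameter budgets from available memory (GB), per the rules above.
# mem_gb is a placeholder for the user's memory figure (here: the 16GB example).
mem_gb=16
echo "fp16 budget: ~$((mem_gb / 2))B params"
echo "Q4 budget:   ~$((mem_gb * 2))B params"
```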
## Step 2: Find relevant benchmark datasets
Fetch the full list of official HF benchmarks:
```bash
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/datasets?filter=benchmark:official&limit=500" | jq '[.[] | {id, tags, description}]'
```
Read the returned list and select the datasets most relevant to the user's task — match on dataset id, tags, and description. Use your judgment; don't limit yourself to 2-3. Aim for comprehensive coverage: if 5 benchmarks clearly cover the task, use all 5.
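As one way to pre-narrow the list before applying judgment, a keyword filter over id, tags, and description can be done in jq. This is a sketch, not part of the official API contract: the `code` keyword is an example, and it assumes `tags` is a string array and `description` may be null:

```bash
# Sketch: keep only benchmark ids whose id/tags/description mention the task keyword.
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/datasets?filter=benchmark:official&limit=500" \
  | jq '[.[] | select(
      ((.id // "") + " " + ((.tags // []) | join(" ")) + " " + (.description // ""))
      | test("code"; "i")
    ) | .id]'
```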
## Step 3: Fetch top models from leaderboards
For each selected benchmark dataset:
```bash
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/datasets/<namespace>/<repo>/leaderboard" | jq '[.[:15] | .[] | {rank, modelId, value, verified}]'
```
Collect model IDs and scores across all benchmarks. If a leaderboard returns an error (404, 401, etc.), skip it and note it in the output.
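A minimal collection loop might look like the following sketch; the dataset ids are placeholders for whatever Step 2 selected, and per-request error handling is omitted for brevity:

```bash
# Sketch: fetch each selected leaderboard and merge the top-15 rows into one array,
# tagging each row with its source benchmark. Dataset ids below are placeholders.
for ds in namespace/benchmark-a namespace/benchmark-b; do
  curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
    "https://huggingface.co/api/datasets/$ds/leaderboard" \
    | jq --arg ds "$ds" '[.[:15] | .[] | {benchmark: $ds, rank, modelId, value}]'
done | jq -s 'add' > scores.json
```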
## Step 4: Enrich with model metadata
For the top 10-15 candidate model IDs, fetch model metadata via the REST API or the CLI:
```bash
# REST API
curl -s -H "Authorization: Bearer $(cat ~/.cache/huggingface/token)" \
  "https://huggingface.co/api/models/org/model1" | jq '{safetensors, tags, cardData}'

# CLI (hf-cli)
hf models info org/model1 --json | jq '{safetensors, tags, cardData}'
```
Extract from each response:
- Parameters: `safetensors.total` → convert to B (e.g., 7_241_748_480 → "7.2B")
- License: from model card tags (look for `license:apache-2.0`, `license:mit`, etc.)
- If `safetensors` is absent, parse size from the model name (look for "7b", "8b", "13b", "70b", "72b", etc.)
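For illustration, both fields can be derived in one jq expression. This is a sketch that relies only on `safetensors.total` and the `license:*` tags described above; `org/model1` is a placeholder:

```bash
# Sketch: derive parameter count (in B, one decimal) and license from the metadata.
hf models info org/model1 --json | jq '{
  params_B: (if .safetensors.total then (.safetensors.total / 1e9 * 10 | round / 10) else null end),
  license: ((.tags // []) | map(select(startswith("license:")) | sub("license:"; "")) | first // "unknown")
}'
```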
## Step 5: Filter and rank
If a device was specified:
- Remove models exceeding the fp16 parameter budget for the device
- Flag models that fit only with Q4 quantization (the Q4 budget is ~4× the fp16 budget)
- If a highly-ranked model is slightly over budget, keep it with a "needs Q4" note — don't silently drop it
If no device was mentioned: skip all size filtering — just rank by benchmark score.
Then: rank by benchmark score (descending), keep top 5-8 models.
Include proprietary models (GPT-4, Claude, Gemini) if they appear on leaderboards, but flag them as "API only / not self-hostable". If the user explicitly asked for local/open models only, exclude them.
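Putting the fit classification in code form, here is a hedged sketch assuming the candidates were collected into a hypothetical `candidates.json` with `params_B` and `score` fields, using the 16GB budgets from Step 1 as an example:

```bash
# Sketch: classify each candidate against the device budget, then sort by score.
# candidates.json and its params_B/score fields are hypothetical intermediate data;
# 8 and 32 are the fp16/Q4 budgets for the 16GB example.
jq --argjson fp16 8 --argjson q4 32 '
  [.[] | . + {on_device: (
      if .params_B == null then "unknown"
      elif .params_B <= $fp16 then "Yes (fp16)"
      elif .params_B <= $q4 then "Q4 only"
      else "Too large"
      end)}]
  | sort_by(-.score) | .[:8]
' candidates.json
```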
## Step 6: Output

### Comparison table
| # | Model | Params | [Benchmark 1] | [Benchmark 2] | License | On device |
|---|-------|--------|--------------|--------------|---------|-----------|
| ⭐1 | [org/name](https://huggingface.co/org/name) | 7B | 85.2% | — | Apache 2.0 | Yes (fp16) |
| 2 | [org/name](https://huggingface.co/org/name) | 13B | 83.1% | 71.5% | MIT | Q4 only |
| 3 | [org/name](https://huggingface.co/org/name) | 70B | 90.0% | 81.0% | Llama | Too large |
- Link model names to `https://huggingface.co/<model_id>`
- Use `—` for benchmarks where the model wasn't evaluated
- Star the top recommended pick with ⭐
- "On device" values: `Yes (fp16)`, `Q4 only`, `Too large`, `API only`
### Follow-up
After presenting the table, ask the user: "Would you like to run [top recommended model]?"
If they say yes, ask whether they'd prefer to:
- Run locally — ask about their device if not already known, then give appropriate setup instructions
- Run on HF Jobs — point them to the HF Jobs guide: https://huggingface.co/docs/huggingface_hub/en/guides/jobs
## Error handling
- Leaderboard not found: skip, note "leaderboard unavailable" in output
- Model missing from `hub_repo_details`: fall back to parsing size from the model name
- No benchmarks found for task: use the curated fallback table above, or try `hub_repo_search` with `filters=["<task>"]` sorted by `trendingScore`
- All leaderboards fail: fall back to `hub_repo_search` for popular models tagged with the task, and note that results are ranked by popularity rather than benchmark score
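If the MCP search tools are unavailable as well, a plain Hub API query gives the same popularity-based fallback. `text-generation` is an example tag, and `sort=downloads` is a popularity proxy, not a benchmark rank:

```bash
# Fallback sketch: list popular models for the task straight from the Hub API.
curl -s "https://huggingface.co/api/models?filter=text-generation&sort=downloads&limit=10" \
  | jq '[.[] | {id, downloads, likes}]'
```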