huggingface-local-models
Hugging Face Local Models
Search the Hugging Face Hub for llama.cpp-compatible GGUF repos, choose the right quant, and launch the model with llama-cli or llama-server.
Default Workflow
- Search the Hub with `apps=llama.cpp`.
- Open `https://huggingface.co/<repo>?local-app=llama.cpp`.
- Prefer the exact HF local-app snippet and quant recommendation when it is visible.
- Confirm exact `.gguf` filenames with `https://huggingface.co/api/models/<repo>/tree/main?recursive=true` (see the sketch after this list).
- Launch with `llama-cli -hf <repo>:<QUANT>` or `llama-server -hf <repo>:<QUANT>`.
- Fall back to `--hf-repo` plus `--hf-file` when the repo uses custom file naming.
- Convert from Transformers weights only if the repo does not already expose GGUF files.
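The filename check above can be scripted against the tree API. A minimal sketch using `requests`, assuming the endpoint returns a JSON array of entries with a `path` field; the repo name is just the example used later on this page:

```python
# Confirm the exact .gguf filenames a repo exposes via the Hub tree API.
import requests

repo = "unsloth/Qwen3.6-35B-A3B-GGUF"  # example repo from this page
url = f"https://huggingface.co/api/models/{repo}/tree/main?recursive=true"

for entry in requests.get(url, timeout=30).json():
    if entry["path"].endswith(".gguf"):
        print(entry["path"])
```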
Quick Start
Install llama.cpp
brew install llama.cpp
winget install llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
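As a quick sanity check after installing, confirm the binaries are reachable on PATH. A small sketch using only the standard library; the binary names match the commands used throughout this page:

```python
# Verify the llama.cpp binaries are installed and on PATH.
import shutil

for tool in ("llama-cli", "llama-server", "llama-quantize"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'NOT FOUND'}")
```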
Authenticate for gated repos
hf auth login
Search the Hub
https://huggingface.co/models?apps=llama.cpp&sort=trending
https://huggingface.co/models?search=Qwen3.6&apps=llama.cpp&sort=trending
https://huggingface.co/models?search=<term>&apps=llama.cpp&num_parameters=min:0,max:24B&sort=trending
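The same search can be done programmatically with `huggingface_hub.HfApi.list_models`. This is a hedged sketch: the `gguf` filter tag and `downloads` sort are assumptions standing in for the web UI's `apps=llama.cpp` and trending parameters, which may not map one-to-one onto the API:

```python
# Rough API analogue of the web UI search for llama.cpp-compatible GGUF repos.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    search="Qwen",      # search term
    filter="gguf",      # repos tagged as GGUF (assumption for apps=llama.cpp)
    sort="downloads",   # stand-in for the UI's trending sort
    direction=-1,
    limit=10,
)
for model in models:
    print(model.id)
```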
Run directly from the Hub
llama-cli -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_M
llama-server -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_M
Run an exact GGUF file
llama-server \
--hf-repo unsloth/Qwen3.6-35B-A3B-GGUF \
--hf-file Qwen3.6-35B-A3B-UD-Q4_K_M.gguf \
-c 4096
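An alternative to letting llama-server download on launch is to pre-fetch the exact file and point the server at the local path. A sketch using `huggingface_hub.hf_hub_download` plus `subprocess`; the repo and filename are the same example used above:

```python
# Pre-download one exact GGUF file, then launch llama-server against the local path.
import subprocess
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="unsloth/Qwen3.6-35B-A3B-GGUF",
    filename="Qwen3.6-35B-A3B-UD-Q4_K_M.gguf",
)
subprocess.run(["llama-server", "-m", local_path, "-c", "4096"], check=True)
```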
Convert only when no GGUF is available
hf download <repo-without-gguf> --local-dir ./model-src
python convert_hf_to_gguf.py ./model-src \
--outfile model-f16.gguf \
--outtype f16
llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
Smoke test a local server
llama-server -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_M
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer no-key" \
-d '{
"messages": [
{"role": "user", "content": "Write a limerick about exception handling"}
]
}'
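The same smoke test from Python, hitting llama-server's OpenAI-compatible endpoint with `requests`; the port and dummy bearer token mirror the curl example above:

```python
# Minimal chat-completion request against a local llama-server instance.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer no-key"},
    json={
        "messages": [
            {"role": "user", "content": "Write a limerick about exception handling"}
        ]
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```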
Quant Choice
- Prefer the exact quant that HF marks as compatible on the `?local-app=llama.cpp` page.
- Keep repo-native labels such as `UD-Q4_K_M` instead of normalizing them.
- Default to `Q4_K_M` unless the repo page or hardware profile suggests otherwise.
- Prefer `Q5_K_M` or `Q6_K` for code or technical workloads when memory allows.
- Consider `Q3_K_M`, `Q4_K_S`, or repo-specific `IQ`/`UD-*` variants for tighter RAM or VRAM budgets (see the sizing sketch after this list).
- Treat `mmproj-*.gguf` files as projector weights, not the main checkpoint.
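Quant choice often comes down to what fits in memory. A rough sketch that ranks a repo's GGUF files by size against a budget, reusing the tree API; the headroom factor is an assumption (KV cache and runtime overhead), not a llama.cpp rule, and split multi-part quants would need their parts summed:

```python
# Rank GGUF quants in a repo by size and flag which fit a given memory budget.
import requests

repo = "unsloth/Qwen3.6-35B-A3B-GGUF"  # example repo from this page
budget_gb = 24.0                        # available RAM/VRAM
headroom = 1.2                          # assumed overhead multiplier

url = f"https://huggingface.co/api/models/{repo}/tree/main?recursive=true"
entries = requests.get(url, timeout=30).json()

ggufs = [e for e in entries
         if e["path"].endswith(".gguf") and "mmproj" not in e["path"]]
for entry in sorted(ggufs, key=lambda e: e["size"]):
    size_gb = entry["size"] / 1e9
    fits = "fits" if size_gb * headroom <= budget_gb else "too large"
    print(f"{entry['path']}: ~{size_gb:.1f} GB ({fits})")
```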
Load References
- Read hub-discovery.md for URL-first workflows, model search, tree API extraction, and command reconstruction.
- Read quantization.md for format tables, model scaling, quality tradeoffs, and `imatrix`.
- Read hardware.md for Metal, CUDA, ROCm, or CPU build and acceleration details.
Resources
- llama.cpp: https://github.com/ggml-org/llama.cpp
- Hugging Face GGUF + llama.cpp docs: https://huggingface.co/docs/hub/gguf-llamacpp
- Hugging Face Local Apps docs: https://huggingface.co/docs/hub/main/local-apps
- Hugging Face Local Agents docs: https://huggingface.co/docs/hub/agents-local
- GGUF converter Space: https://huggingface.co/spaces/ggml-org/gguf-my-repo