train-sentence-transformers

Train a sentence-transformers Model

This SKILL.md is a router, not a manual. It tells you which references and example scripts to load for your task. The actual content — recommended losses, evaluators, training-script structure, model selection, training-arg knobs, troubleshooting — lives in references/ and scripts/.

Do not synthesize a training script from this file alone. Open the per-type production template (scripts/train_<type>_example.py) and copy it as your starting point. The templates contain load-bearing scaffolding (autocast helper, model-card class, logger silencing list, force=True, seed, TF32, version-compatible imports, named-evaluator metric handling) that prior agent runs have repeatedly missed when rolling their own from a synthesized snippet.

1. Identify the model type

| Tag | Class | What it does | When to pick |
|---|---|---|---|
| [SentenceTransformer] | SentenceTransformer (bi-encoder) | Maps each input to a fixed-dim dense vector | Retrieval, similarity, clustering, classification, paraphrase mining, dedup |
| [CrossEncoder] | CrossEncoder (reranker) | Scores (query, passage) pairs jointly | Two-stage retrieval (rerank top-100 from a bi-encoder), pair classification |
| [SparseEncoder] | SparseEncoder (SPLADE) | Sparse vectors over the vocabulary | Learned-sparse retrieval, inverted-index backends (Elasticsearch / OpenSearch / Lucene) |

Tiebreakers when the request is ambiguous: "embedding model" / "vector search" / "similarity" → [SentenceTransformer]. "rerank" / "ranker" / "two-stage" → [CrossEncoder]. "SPLADE" / "sparse" / "inverted index" → [SparseEncoder]. If still unclear, ask.

2. Required reading

Read these in full before writing any code. Do not triage by perceived relevance.
