# search: Search Knowledge Base

Search the qmd index for relevant documents.
## Prerequisites

- qmd installed: `bun install -g @tobilu/qmd`
- Collection set up: use `/setup` first

Verify setup:

```bash
qmd status
```
## Workflow

### 1. Verify Knowledge Base

```bash
qmd status
```

This should show your collection(s) with document counts.
### 2. Run Search

```bash
qmd query "<query>" --json
```

Examples:

```bash
qmd query "authentication flow" --json
qmd query "API design patterns" --json
qmd query "deployment process" --json
```
### 3. Present Results

Parse the JSON output and present:

- Document path
- Relevance score
- Relevant excerpt
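The parse-and-present step can be sketched as follows. This is a minimal sketch, assuming the `results`, `path`, `score`, and `content` fields documented under Output Format; the listing layout itself is just one reasonable choice.

```python
import json

def present_results(raw_json: str, limit: int = 5) -> str:
    """Format qmd's --json output as a short, readable listing."""
    results = json.loads(raw_json).get("results", [])
    lines = []
    for r in results[:limit]:
        # Collapse the excerpt to a single trimmed line for display.
        excerpt = r.get("content", "").strip().replace("\n", " ")[:120]
        lines.append(f"{r.get('score', 0):.2f}  {r.get('path', '?')}\n      {excerpt}")
    return "\n".join(lines)

# Example with a captured qmd response (sample data, not real output):
sample = '{"results": [{"path": "docs/guide.md", "score": 0.89, "content": "Auth flow..."}]}'
print(present_results(sample))
```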
## Arguments

| Argument | Type | Default | Description |
|---|---|---|---|
| `query` | string | required | Search query |
| `mode` | string | `query` | Search mode: `query`, `vsearch`, or `search` |
| `limit` | int | 5 | Number of results |
| `collection` | string | all | Restrict to a specific collection |
## Search Modes

| Mode | Description |
|---|---|
| `query` | Semantic search (default) |
| `vsearch` | Vector search with scores |
| `search` | Hybrid search |
## Examples

```bash
# Basic search
qmd query "authentication" --json

# Limit results
qmd query "API design" --limit 10 --json

# Search a specific collection
qmd query "deployment" --collection api-docs --json

# Vector search with scores
qmd vsearch "configuration" --json
```
## Output Format

JSON output structure:

```json
{
  "results": [
    {
      "path": "docs/guide.md",
      "score": 0.89,
      "content": "..."
    }
  ]
}
```
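The `score` field can be used to drop weak matches before presenting. A minimal sketch, assuming the structure above; the `0.5` threshold is an arbitrary illustration, not a qmd default.

```python
import json

def filter_results(raw_json: str, min_score: float = 0.5) -> list:
    """Keep only results at or above a relevance threshold (threshold is illustrative)."""
    results = json.loads(raw_json).get("results", [])
    return [r for r in results if r.get("score", 0.0) >= min_score]

# Sample data shaped like the documented output:
sample = ('{"results": ['
          '{"path": "a.md", "score": 0.89, "content": "..."},'
          '{"path": "b.md", "score": 0.31, "content": "..."}]}')
print([r["path"] for r in filter_results(sample)])  # only the strong match remains
```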
## Integration with Agents

When using this skill:

1. Run the search query.
2. Parse the JSON results.
3. Present the top results with scores.
4. Optionally read full documents for deeper context.
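The loop above can be wrapped in a small helper. This is a sketch, assuming `qmd` is on `PATH` and that `--json` emits the structure shown under Output Format; the `run` parameter is a hypothetical injection point so the parsing can be exercised without qmd installed.

```python
import json
import subprocess
from typing import Callable, List, Optional

def search_kb(query: str, limit: int = 5,
              run: Optional[Callable[[List[str]], str]] = None) -> list:
    """Run `qmd query <query> --limit N --json` and return the parsed results list."""
    cmd = ["qmd", "query", query, "--limit", str(limit), "--json"]
    if run is None:
        # Real invocation; assumes qmd is installed and a collection is indexed.
        run = lambda c: subprocess.run(c, capture_output=True, text=True,
                                       check=True).stdout
    return json.loads(run(cmd)).get("results", [])
```

In an agent, each returned `path` can then be read in full when deeper context is needed.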
## Troubleshooting

If a search returns no results:

- Check that the collection exists: `qmd status`
- Verify embeddings were generated: `qmd embed`
- Try broader query terms
## Provider-Specific Notes

### qmd (current)

- Storage: local SQLite with the sqlite-vec extension
- Embeddings: local model (no API key required)
- Best for: small to medium corpora, offline usage

### pinecone (planned)

- Storage: Pinecone cloud
- Embeddings: OpenAI or custom embeddings
- Best for: large-scale production deployments

### weaviate (planned)

- Storage: Weaviate instance (self-hosted or cloud)
- Embeddings: configurable
- Best for: enterprise deployments with hybrid search