# DSPy: Declarative Language Model Programming

Stanford NLP's framework for programming—not prompting—language models.
## Quick Start

```python
import dspy

# 1. Configure the default LM (dspy.LM supersedes the older dspy.OpenAI client)
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# 2. Define a module from an inline signature
qa = dspy.ChainOfThought("question -> answer")

# 3. Run
response = qa(question="What is the capital of France?")
print(response.answer)
```
## Learning Path (DAG)

The DSPy framework follows a natural progression from core concepts through optimization to advanced applications. Use this directed acyclic graph to understand dependencies and navigate the skill components.
### Foundation Layer (Start Here)

- **LM Configuration** (`configuring-language-models.md`)
  - Prerequisites: None
  - Next: Signatures, Modules, Datasets
- **Signatures** (`designing-signatures.md`)
  - Prerequisites: LM Configuration
  - Next: Modules, Optimization
- **Modules** (`building-modules.md`)
  - Prerequisites: Signatures
  - Next: Optimization, Applications
- **Datasets**
  - Prerequisites: None
  - Next: Optimization
### Optimization Layer

- **Few-Shot Learning** (`few-shot-learning.md`)
  - Prerequisites: Modules, Datasets
  - Techniques: LabeledFewShot, BootstrapFewShot, KNNFewShot
  - Next: Applications
- **Instruction Optimization** (`instruction-optimization.md`)
  - Prerequisites: Modules, Datasets
  - Techniques: COPRO, MIPROv2, GEPA
  - Next: Applications
- **Finetuning**
  - Prerequisites: Modules, Datasets
  - Techniques: BootstrapFinetune
  - Next: Applications
- **Ensembles**
  - Prerequisites: Multiple trained modules
  - Next: Applications
### Application Layer

- **RAG Pipelines** (`building-rag-pipelines.md`)
  - Prerequisites: Modules, Optimization (recommended)
- **Evaluation** (`evaluating-programs.md`)
  - Prerequisites: Modules, Datasets
- **Haystack Integration**
  - Prerequisites: Modules, Haystack knowledge
### Advanced Features (Cross-Cutting)

- **Assertions & Validation** (`assertions-validation.md`)
  - Prerequisites: Modules
-
  - Prerequisites: Signatures
-
  - Prerequisites: ChainOfThought module
## Reference Documentation

- Modules Reference - Complete module catalog
- Optimizers Reference - All optimization techniques
- Examples Reference - Real-world implementations
## Common Workflows

### Workflow 1: Basic QA System

- Steps: Configure LM → Design Signature → Build Module
- Path: `configuring-language-models.md` → `designing-signatures.md` → `building-modules.md`

### Workflow 2: Optimized RAG System

- Steps: Configure LM → Build RAG Module → Optimize with Few-Shot → Evaluate
- Path: `configuring-language-models.md` → `building-rag-pipelines.md` → `few-shot-learning.md` → `evaluating-programs.md`

### Workflow 3: Production Agent

- Steps: Configure LM → Design Signature → Build ReAct Module → Add Assertions → Optimize Instructions → Evaluate
- Path: `configuring-language-models.md` → `designing-signatures.md` → `building-modules.md` → `assertions-validation.md` → `instruction-optimization.md` → `evaluating-programs.md`
## Installation

```bash
pip install dspy

# Or with specific providers
pip install dspy[anthropic]  # Claude
pip install dspy[openai]     # GPT
pip install dspy[all]        # All providers
```
## Additional Resources

- Official Docs: [dspy.ai](https://dspy.ai)
- GitHub: [github.com/stanfordnlp/dspy](https://github.com/stanfordnlp/dspy)