# ai-building-chatbots: Build a Conversational AI Chatbot
Guide the user through building a multi-turn chatbot that remembers context, follows conversation flows, and produces high-quality responses. Uses DSPy for optimizable response generation and LangGraph for conversation state, memory, and flow control.
## Step 1: Define the conversation
Ask the user:
- What does the bot do? (answer questions, resolve issues, qualify leads, guide onboarding?)
- What states can the conversation be in? (greeting, gathering info, resolving, escalating, closing?)
- When should the bot escalate to a human? (complex issues, angry users, sensitive topics?)
- What docs or data should it draw from? (help articles, product docs, FAQs, database?)
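The answers can be captured as a small config before any graph code is written. A minimal sketch — the state names, trigger topics, and thresholds below are placeholders for whatever the user decides, not fixed values:

```python
from enum import Enum

class ConvState(str, Enum):
    GREETING = "greeting"
    GATHERING = "gathering_info"
    RESOLVING = "resolving"
    ESCALATING = "escalating"
    CLOSING = "closing"

# Hypothetical escalation policy -- tune to your domain.
ESCALATION_TRIGGERS = {"refund dispute", "legal threat", "data breach"}
MAX_TURNS_BEFORE_ESCALATION = 8

def should_escalate(topic: str, turn_count: int, sentiment: float) -> bool:
    """Escalate on sensitive topics, runaway conversations, or very negative sentiment."""
    return (
        topic in ESCALATION_TRIGGERS
        or turn_count > MAX_TURNS_BEFORE_ESCALATION
        or sentiment < -0.7
    )
```

Writing the policy down as data keeps it auditable and easy to change without touching graph logic.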
## Step 2: Build the response module (DSPy)
The core of your chatbot is a DSPy module that generates responses given conversation history and context.
```python
import dspy

lm = dspy.LM("openai/gpt-4o-mini")  # or "anthropic/claude-sonnet-4-5-20250929", etc.
dspy.configure(lm=lm)
```
### Basic response module
```python
import dspy

class ChatResponse(dspy.Signature):
    """Generate a helpful, on-brand response to the user's message."""

    conversation_history: str = dspy.InputField(desc="Previous messages in the conversation")
    context: str = dspy.InputField(desc="Relevant information from docs or database")
    user_message: str = dspy.InputField(desc="The user's latest message")
    response: str = dspy.OutputField(desc="Helpful response to the user")

class ChatBot(dspy.Module):
    def __init__(self):
        super().__init__()
        self.respond = dspy.ChainOfThought(ChatResponse)

    def forward(self, conversation_history, context, user_message):
        return self.respond(
            conversation_history=conversation_history,
            context=context,
            user_message=user_message,
        )
```
### With intent classification
Route different intents to specialized handlers:
```python
from typing import Literal

class ClassifyIntent(dspy.Signature):
    """Classify the user's intent from their message."""

    conversation_history: str = dspy.InputField()
    user_message: str = dspy.InputField()
    intent: Literal["question", "complaint", "request", "greeting", "goodbye"] = dspy.OutputField()

class ChatBotWithRouting(dspy.Module):
    def __init__(self):
        super().__init__()
        # AnswerQuestion, HandleComplaint, HandleRequest, and Greeting are
        # intent-specific signatures you define, analogous to ChatResponse.
        self.classify = dspy.Predict(ClassifyIntent)
        self.respond_question = dspy.ChainOfThought(AnswerQuestion)
        self.respond_complaint = dspy.ChainOfThought(HandleComplaint)
        self.respond_request = dspy.ChainOfThought(HandleRequest)
        self.respond_greeting = dspy.Predict(Greeting)

    def forward(self, conversation_history, context, user_message):
        intent = self.classify(
            conversation_history=conversation_history,
            user_message=user_message,
        ).intent
        handler = {
            "question": self.respond_question,
            "complaint": self.respond_complaint,
            "request": self.respond_request,
            "greeting": self.respond_greeting,
        }.get(intent, self.respond_question)
        return handler(
            conversation_history=conversation_history,
            context=context,
            user_message=user_message,
        )
```
## Step 3: Add conversation state (LangGraph)
LangGraph manages the conversation flow — what state the bot is in, when to transition, and when to escalate.
### Define conversation state
```python
from typing import Annotated, TypedDict
import operator

from langgraph.graph import StateGraph, START, END

class ConversationState(TypedDict):
    messages: Annotated[list[dict], operator.add]  # full message history
    current_intent: str
    context: str     # retrieved docs/data for the current turn
    escalate: bool   # whether to hand off to a human
    resolved: bool   # whether the issue is resolved
    turn_count: int
```
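Every fresh conversation needs the same initial fields, so a tiny factory avoids repeating the boilerplate at each `invoke` call. A sketch — the field names and defaults mirror the `ConversationState` definition above:

```python
def new_conversation_state(first_user_message: str) -> dict:
    """Build the initial state dict for a brand-new conversation."""
    return {
        "messages": [{"role": "user", "content": first_user_message}],
        "current_intent": "",
        "context": "",
        "escalate": False,
        "resolved": False,
        "turn_count": 0,
    }
```

Then starting a conversation is just `app.invoke(new_conversation_state("How do I reset my password?"))`.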
### Build the conversation graph
```python
import dspy

def format_history(messages: list[dict]) -> str:
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages[-10:])

# Initialize DSPy modules
classifier = dspy.Predict(ClassifyIntent)
responder = dspy.ChainOfThought(ChatResponse)

def classify_node(state: ConversationState) -> dict:
    """Classify the user's intent."""
    history = format_history(state["messages"][:-1])
    user_msg = state["messages"][-1]["content"]
    result = classifier(conversation_history=history, user_message=user_msg)
    return {"current_intent": result.intent}

def retrieve_node(state: ConversationState) -> dict:
    """Retrieve relevant docs for the current message."""
    user_msg = state["messages"][-1]["content"]
    # Your retrieval logic here (see /ai-searching-docs)
    docs = retrieve_relevant_docs(user_msg)
    return {"context": "\n".join(docs)}

def respond_node(state: ConversationState) -> dict:
    """Generate a response using DSPy."""
    history = format_history(state["messages"][:-1])
    user_msg = state["messages"][-1]["content"]
    result = responder(
        conversation_history=history,
        context=state["context"],
        user_message=user_msg,
    )
    return {
        "messages": [{"role": "assistant", "content": result.response}],
        "turn_count": state["turn_count"] + 1,
    }

def check_escalation(state: ConversationState) -> dict:
    """Decide if this needs human handoff."""
    should_escalate = (
        state["current_intent"] == "complaint"
        and state["turn_count"] > 3
    )
    return {"escalate": should_escalate}

# Build the graph
graph = StateGraph(ConversationState)
graph.add_node("classify", classify_node)
graph.add_node("retrieve", retrieve_node)
graph.add_node("respond", respond_node)
graph.add_node("check_escalation", check_escalation)

graph.add_edge(START, "classify")
graph.add_edge("classify", "retrieve")
graph.add_edge("retrieve", "respond")
graph.add_edge("respond", "check_escalation")

def route_after_escalation_check(state: ConversationState) -> str:
    return "escalate" if state["escalate"] else "done"

graph.add_conditional_edges(
    "check_escalation",
    route_after_escalation_check,
    # Both routes end the turn here; in production, point "escalate"
    # at a human-handoff node instead of END.
    {"escalate": END, "done": END},
)

app = graph.compile()
```
### Run a conversation turn
```python
result = app.invoke({
    "messages": [{"role": "user", "content": "How do I reset my password?"}],
    "current_intent": "",
    "context": "",
    "escalate": False,
    "resolved": False,
    "turn_count": 0,
})
print(result["messages"][-1]["content"])
```
## Step 4: Add memory
### Session memory with checkpointing
LangGraph's checkpointer persists conversation state across requests:
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)

# Each user session gets a unique thread_id
config = {"configurable": {"thread_id": "user-abc-123"}}

# Turn 1
result = app.invoke(
    {"messages": [{"role": "user", "content": "Hi, I need help with billing"}],
     "current_intent": "", "context": "", "escalate": False, "resolved": False, "turn_count": 0},
    config=config,
)

# Turn 2 — state is preserved, the bot remembers the conversation
result = app.invoke(
    {"messages": [{"role": "user", "content": "I was charged twice last month"}]},
    config=config,
)
```
For production, use a persistent backend:
```python
from langgraph.checkpoint.postgres import PostgresSaver

with PostgresSaver.from_conn_string("postgresql://user:pass@localhost/chatbot") as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first run
    app = graph.compile(checkpointer=checkpointer)
```
### Conversation summary for long chats
When conversations get long, summarize older messages to stay within token limits:
```python
class SummarizeConversation(dspy.Signature):
    """Summarize the conversation so far, preserving key details."""

    conversation: str = dspy.InputField()
    summary: str = dspy.OutputField(desc="Concise summary of the conversation so far")

summarizer = dspy.Predict(SummarizeConversation)

def maybe_summarize(state: ConversationState) -> dict:
    """Summarize if the conversation is getting long."""
    if len(state["messages"]) > 20:
        history = format_history(state["messages"][:-5])
        summary = summarizer(conversation=history).summary
        # Keep summary + last 5 messages. Note: with the operator.add
        # reducer above, returned messages are appended, not replaced --
        # use a replace-style reducer on `messages` for this to truncate.
        return {
            "messages": [
                {"role": "system", "content": f"Summary of earlier conversation: {summary}"},
                *state["messages"][-5:],
            ]
        }
    return {}
```
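Message count is a rough proxy for context size; an estimated-token budget is often a better trigger. A sketch using the common "roughly 4 characters per token" heuristic for English text — the 3000-token budget is an assumption to tune per model:

```python
TOKEN_BUDGET = 3000  # assumed history budget; tune for your model's context window

def estimate_tokens(messages: list[dict]) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return sum(len(m["content"]) for m in messages) // 4

def needs_summary(messages: list[dict], budget: int = TOKEN_BUDGET) -> bool:
    """Trigger summarization when estimated history size exceeds the budget."""
    return estimate_tokens(messages) > budget
```

Swap `len(state["messages"]) > 20` for `needs_summary(state["messages"])` if your turns vary a lot in length.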
## Step 5: Ground responses in docs
Retrieve relevant documents each turn to keep responses factual.
```python
class DocGroundedResponse(dspy.Signature):
    """Answer the user's question based on the provided documentation.
    Only use information from the docs. If the docs don't cover it, say so."""

    conversation_history: str = dspy.InputField()
    docs: list[str] = dspy.InputField(desc="Relevant documentation passages")
    user_message: str = dspy.InputField()
    response: str = dspy.OutputField()

class GroundedChatBot(dspy.Module):
    def __init__(self, retriever):
        super().__init__()
        self.retriever = retriever
        self.respond = dspy.ChainOfThought(DocGroundedResponse)

    def forward(self, conversation_history, user_message):
        # Retrieve docs relevant to the current message
        docs = self.retriever(user_message).passages
        return self.respond(
            conversation_history=conversation_history,
            docs=docs,
            user_message=user_message,
        )
```
See /ai-searching-docs for setting up retrievers and vector stores, including loading data from PDFs, Notion, and other sources with LangChain document loaders.
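For local testing before wiring up a real vector store, a toy keyword-overlap retriever with the same call shape (a callable returning an object with a `.passages` attribute) is enough to exercise `GroundedChatBot`. A sketch, not a production retriever:

```python
from types import SimpleNamespace

class KeywordRetriever:
    """Toy retriever: ranks passages by word overlap with the query."""

    def __init__(self, passages: list[str], k: int = 3):
        self.passages = passages
        self.k = k

    def __call__(self, query: str):
        query_words = set(query.lower().split())
        scored = sorted(
            self.passages,
            key=lambda p: len(query_words & set(p.lower().split())),
            reverse=True,
        )
        # Mimic the .passages attribute DSPy retrievers expose
        return SimpleNamespace(passages=scored[: self.k])
```

Because it matches the interface, you can swap it for a real retriever later without touching `GroundedChatBot`.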
## Step 6: Add guardrails
### Response quality with dspy.Refine
Use dspy.Refine with a reward function to enforce guardrails on chatbot responses:
```python
class GroundedChatBotInner(dspy.Module):
    def __init__(self, retriever):
        super().__init__()
        self.retriever = retriever
        self.respond = dspy.ChainOfThought(DocGroundedResponse)

    def forward(self, conversation_history, user_message):
        docs = self.retriever(user_message).passages
        return self.respond(
            conversation_history=conversation_history,
            docs=docs,
            user_message=user_message,
        )

def chatbot_response_reward(args, pred):
    """Score chatbot response quality. Returns 0.0-1.0."""
    response = pred.response
    score = 1.0
    # Hard constraint -- don't break character
    if "I am an AI" in response:
        return 0.0
    # Soft penalties
    if len(response.split()) >= 200:
        score -= 0.2  # prefer concise responses
    condescending = ["obviously", "clearly", "simply"]
    if any(word in response.lower() for word in condescending):
        score -= 0.1  # avoid condescending language
    return max(score, 0.0)

def make_guarded_chatbot(retriever):
    return dspy.Refine(
        module=GroundedChatBotInner(retriever),
        N=3,
        reward_fn=chatbot_response_reward,
        threshold=0.8,
    )
```
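Reward checks compose: another common guardrail is failing responses that echo contact details back to the user. A sketch of such a check — the regexes here are simplified assumptions, not production-grade PII detection:

```python
import re

# Simplified patterns; real PII detection needs a dedicated library.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def pii_penalty(response: str) -> float:
    """Return 1.0 (hard fail) if the response leaks an email or phone number, else 0.0."""
    if EMAIL_RE.search(response) or PHONE_RE.search(response):
        return 1.0
    return 0.0
```

Subtracting this from the score inside `chatbot_response_reward` makes `dspy.Refine` retry any draft that leaks contact details.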
### Human-in-the-loop for sensitive actions
Use LangGraph's interrupt to pause before the bot takes real actions:
```python
app = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["execute_refund", "cancel_account"],  # pause here
)

# Bot runs until it reaches a sensitive action
result = app.invoke(input_state, config)

# Human agent reviews the proposed action
# If approved, resume:
result = app.invoke(None, config)  # continues from checkpoint
```
## Step 7: Optimize and evaluate
### Conversation-level metrics
```python
class JudgeTurn(dspy.Signature):
    """Judge if the chatbot response is helpful, accurate, and on-topic."""

    user_message: str = dspy.InputField()
    expected_response: str = dspy.InputField()
    actual_response: str = dspy.InputField()
    conversation_history: str = dspy.InputField()
    is_good: bool = dspy.OutputField()

def chatbot_metric(example, prediction, trace=None):
    """Score a single conversation turn."""
    judge = dspy.Predict(JudgeTurn)
    result = judge(
        user_message=example.user_message,
        expected_response=example.response,
        actual_response=prediction.response,
        conversation_history=example.conversation_history,
    )
    return result.is_good
```
### Build a training set from real conversations
```python
trainset = []
for convo in real_conversations:
    for turn in convo["turns"]:
        trainset.append(
            dspy.Example(
                conversation_history=turn["history"],
                user_message=turn["user_message"],
                context=turn["context"],
                response=turn["response"],
            ).with_inputs("conversation_history", "user_message", "context")
        )
```
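The turn-splitting itself is plain data wrangling: each assistant message becomes one training record whose history is everything before the preceding user message. A sketch on plain dicts — the field names are assumptions matching the loop above, and each record would then be wrapped in `dspy.Example(...).with_inputs(...)`:

```python
def conversation_to_turns(messages: list[dict]) -> list[dict]:
    """Split one conversation into per-turn training records.

    Each assistant message yields one record: the user message that
    prompted it, the history before that, and the reply as the target.
    """
    records = []
    for i, msg in enumerate(messages):
        if msg["role"] != "assistant" or i == 0:
            continue
        history = "\n".join(
            f"{m['role']}: {m['content']}" for m in messages[: i - 1]
        )
        records.append({
            "conversation_history": history,
            "user_message": messages[i - 1]["content"],
            "response": msg["content"],
        })
    return records
```

One conversation of N assistant turns thus yields N training records, which is what makes even a modest chat log a usable trainset.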
### Optimize
```python
optimizer = dspy.MIPROv2(metric=chatbot_metric, auto="medium")
optimized_bot = optimizer.compile(chatbot, trainset=trainset)

# Save optimized prompts
optimized_bot.save("chatbot_optimized.json")
```
## Key patterns
- DSPy for response generation, LangGraph for flow control — DSPy modules handle what the bot says; LangGraph handles conversation state and routing
- Checkpointing is your memory — use LangGraph's checkpointer so conversations persist across HTTP requests
- Retrieve every turn — don't assume context from earlier turns is still relevant; re-retrieve each time
- Summarize long conversations — once past ~20 messages, summarize older context to stay within token limits
- Classify intent early — knowing the user's intent lets you route to specialized handlers
- Interrupt before real actions — use LangGraph's `interrupt_before` so humans approve refunds, cancellations, etc.
- Optimize on real conversations — collect actual chat logs to build training data for DSPy optimization
## Gotchas
- Claude puts conversation flow logic inside DSPy modules. DSPy modules should only handle LM calls (classify, respond, summarize). State transitions, routing, and memory belong in LangGraph nodes and edges. If you catch yourself writing `if`/`else` chains inside `forward()` to manage conversation state, move that logic to LangGraph.
- Claude passes full message history every turn. This works for short conversations but blows up token usage on long ones. After ~20 messages, summarize older messages and keep only the summary + last 5 messages. Use the `maybe_summarize` pattern from Step 4.
- Claude forgets `with_inputs()` when building conversation training data. Every `dspy.Example` for chatbot training needs `.with_inputs("conversation_history", "user_message", "context")` — without it, the optimizer treats all fields as outputs and optimization silently produces garbage.
- Claude defines a single monolithic `ChatResponse` signature for all intents. Different intents need different handling — a complaint needs empathy and escalation logic, a question needs retrieval accuracy, a greeting needs brevity. Use `ClassifyIntent` + separate handler modules per intent rather than one signature trying to do everything.
- Claude skips the escalation check. Chatbots that can't hand off to humans are a liability. Always include an escalation path — at minimum, a turn-count threshold combined with intent detection for complaints or sensitive topics.
## Additional resources
- For worked examples (support bot, FAQ assistant), see examples.md
## Cross-references
Install any skill:

```bash
npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills --skill <name>
```
- Search docs for grounding — see /ai-searching-docs
- Bot takes actions (APIs, tools) — see /ai-taking-actions
- Multiple bots working together — see /ai-coordinating-agents
- Measure and improve chatbot accuracy — see /ai-improving-accuracy
- Multi-step AI pipeline design — see /ai-building-pipelines
- Composing DSPy modules — see /dspy-modules
- Iterative refinement with reward functions — see /dspy-refine
- ReAct for tool-using chatbots — see /dspy-react
- Install /ai-do if you do not have it — it routes any AI problem to the right skill and is the fastest way to work: `npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills --skill ai-do`