# LangChain Deploy Integration

## Overview
Deploy LangChain applications to production using LangServe, Docker, and cloud platforms. Covers containerization of chains and agents, LangServe API deployment, and integration with LangSmith for production observability.
## Prerequisites
- LangChain application with chains/agents defined
- Docker installed for containerization
- LangSmith API key for production tracing
- Platform CLI (gcloud, aws, or docker compose)
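
The prerequisites above can be checked up front. This is a minimal preflight sketch (the helper name and structure are illustrative, not part of the skill); the variable names match the compose and Cloud Run configuration used later in this document:

```python
# Preflight check (a sketch): confirm the credentials and tooling this
# deployment expects before building anything.
import os
import shutil

REQUIRED_VARS = ["OPENAI_API_KEY", "LANGCHAIN_API_KEY"]

def missing_env(env) -> list:
    """Return the names of required variables absent or empty in env."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    print("missing vars:", missing_env(os.environ))
    print("docker found:", shutil.which("docker") is not None)
```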
## Instructions

### Step 1: LangServe API Setup
```python
# serve.py
from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

app = FastAPI(title="LangChain API")

# Define your chain
prompt = ChatPromptTemplate.from_template("Answer: {question}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

# Add LangServe routes
add_routes(app, chain, path="/chat")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)  # 8000: API server port
```
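
`add_routes` registers `/chat/invoke`, `/chat/batch`, `/chat/stream`, and a `/chat/playground` UI under the given path. The invoke endpoint expects the chain's input wrapped under an `"input"` key; this sketch builds that request body (the question text is just an example):

```python
# Build the JSON body for POST /chat/invoke. LangServe wraps the chain
# input under "input" (an optional "config" key can sit alongside it).
import json

def invoke_payload(question: str) -> str:
    return json.dumps({"input": {"question": question}})

print(invoke_payload("What is LangServe?"))
```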
### Step 2: Dockerfile
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV LANGCHAIN_TRACING_V2=true
ENV LANGCHAIN_PROJECT=production
# 8000: API server port. Dockerfile comments must start the line:
# a trailing "#" on EXPOSE or an exec-form CMD becomes part of the
# instruction's arguments and breaks the build.
EXPOSE 8000
CMD ["uvicorn", "serve:app", "--host", "0.0.0.0", "--port", "8000"]
```
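
Because the Dockerfile copies the whole build context (`COPY . .`), a `.dockerignore` is worth adding so secrets and local clutter stay out of the image. A suggested sketch (entries are illustrative):

```
# .dockerignore — keep credentials and local artifacts out of the image
.env
.git
__pycache__/
*.pyc
.venv/
```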
### Step 3: Docker Compose for Development
```yaml
version: "3.8"
services:
  langchain-api:
    build: .
    ports:
      - "8000:8000"  # 8000: API server port
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - LANGCHAIN_TRACING_V2=true
    healthcheck:
      # Note: curl is not included in python:*-slim images; install it in
      # the Dockerfile or switch this to a Python-based check.
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
```
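
With the compose file in place, a typical local loop (commands are illustrative) looks like this — the curl probe hits the same `/health` endpoint the healthcheck uses:

```shell
# Build and start the stack in the background, then probe the API.
docker compose up --build -d
curl -f http://localhost:8000/health
docker compose logs -f langchain-api   # follow logs while testing
```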
### Step 4: Cloud Run Deployment
```shell
# Both secrets must go in a single --set-secrets flag: repeating the
# flag replaces the earlier value rather than appending to it.
gcloud run deploy langchain-api \
  --source . \
  --region us-central1 \
  --set-secrets=OPENAI_API_KEY=openai-key:latest,LANGCHAIN_API_KEY=langsmith-key:latest \
  --set-env-vars=LANGCHAIN_TRACING_V2=true \
  --min-instances=1 \
  --memory=1Gi
```
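
Once the deploy finishes, the service can be smoke-tested against its generated URL (a sketch; the payload shape matches the LangServe invoke endpoint):

```shell
# Fetch the service URL Cloud Run assigned, then hit the API once.
URL=$(gcloud run services describe langchain-api \
  --region us-central1 --format 'value(status.url)')
curl -s -X POST "$URL/chat/invoke" \
  -H "Content-Type: application/json" \
  -d '{"input": {"question": "ping"}}'
```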
### Step 5: Health Check with LangSmith
```python
from langsmith import Client

async def health_check():
    try:
        client = Client()
        # list_projects returns a lazy iterator; pull one item so the
        # LangSmith connection is actually exercised.
        next(client.list_projects(limit=1), None)
        return {"status": "healthy", "tracing": "enabled"}
    except Exception as e:
        return {"status": "degraded", "error": str(e)}
```
## Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Import errors | Missing dependencies | Pin versions in requirements.txt |
| LangSmith timeout | Network issue | Set LANGCHAIN_TRACING_V2=false as fallback |
| Memory exceeded | Large context | Increase container memory, use streaming |
| Cold start slow | Heavy imports | Use gunicorn with preload |
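
The cold-start fix from the table could look like this (a sketch; the worker count is illustrative). `--preload` imports the app once in the gunicorn master before workers fork, so heavy LangChain imports are paid a single time:

```shell
gunicorn serve:app \
  --worker-class uvicorn.workers.UvicornWorker \
  --workers 2 \
  --preload \
  --bind 0.0.0.0:8000
```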
## Examples

### Production Requirements
```
langchain>=0.3.0
langchain-openai>=0.2.0
langserve>=0.3.0
langsmith>=0.1.0
uvicorn>=0.30.0
fastapi>=0.115.0
```
## Resources
## Next Steps

For multi-environment setup, see `langchain-multi-env-setup`.
## Output
- Configuration files or code changes applied to the project
- Validation report confirming correct implementation
- Summary of changes made and their rationale