# LangChain Hello World
## Overview
Minimal working example demonstrating core LangChain functionality with chains and prompts.
## Prerequisites

- Completed `langchain-install-authsetup`
- Valid LLM provider API credentials configured
- Python 3.9+ or Node.js 18+ environment ready
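Before writing any code, you can confirm that the credential from the prerequisites is actually visible to Python. This is a minimal sketch that assumes the OpenAI provider's standard `OPENAI_API_KEY` variable; substitute your provider's variable name if it differs:

```python
import os

def credential_ready(var: str = "OPENAI_API_KEY") -> bool:
    """Return True if the named API-key variable is set and non-empty."""
    return bool(os.environ.get(var, "").strip())

if __name__ == "__main__":
    if credential_ready():
        print("API key found; you can run the examples below.")
    else:
        print("Set OPENAI_API_KEY before running the examples.")
```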
## Instructions
### Step 1: Create Entry File

Create a new file `hello_langchain.py` for your hello world example.
### Step 2: Import and Initialize

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
```
### Step 3: Create Your First Chain

```python
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm | StrOutputParser()
response = chain.invoke({"input": "Hello, LangChain!"})
print(response)
```
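The `|` operator in the chain above is LangChain's expression-language composition: each stage's output becomes the next stage's input. As an illustration only, here is a toy stand-in (not LangChain's actual classes) showing the idea:

```python
class Runnable:
    """Toy stand-in for LangChain's Runnable: supports `a | b` chaining."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed its output to `other`
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for a prompt, a model, and an output parser
prompt = Runnable(lambda d: f"User says: {d['input']}")
llm = Runnable(lambda s: s.upper())
parser = Runnable(lambda s: s.strip())

chain = prompt | llm | parser
print(chain.invoke({"input": "hello"}))  # USER SAYS: HELLO
```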
## Output

- Working Python file with a LangChain chain
- Successful LLM response confirming the connection
- Console output similar to:

```text
Hello! I'm your LangChain-powered assistant. How can I help you today?
```
## Error Handling

| Error | Cause | Solution |
|---|---|---|
| Import Error | SDK not installed | Run `pip install langchain langchain-openai` |
| Auth Error | Invalid credentials | Check that `OPENAI_API_KEY` (or your provider's variable) is set |
| Timeout | Network issues | Increase timeout or check connectivity |
| Rate Limit | Too many requests | Wait and retry with exponential backoff |
| Model Not Found | Invalid model name | Check available models in provider docs |
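The rate-limit row above suggests retrying with exponential backoff. Here is a minimal retry wrapper using only the standard library; it is a sketch, and the exception type you catch should be your provider SDK's rate-limit error class rather than the broad default shown:

```python
import random
import time

def with_backoff(fn, retries=5, base_delay=1.0, exc_types=(Exception,)):
    """Call fn(), retrying failed attempts with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except exc_types:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            # Delays grow as 1x, 2x, 4x ... of base_delay, with random jitter
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Usage sketch, with `chain` as defined earlier:
# result = with_backoff(lambda: chain.invoke({"input": "Hello"}))
```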
## Examples
### Simple Chain (Python)

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"topic": "programming"})
print(result)
```
### With Memory (Python)

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}")
])
chain = prompt | llm

history = []
response = chain.invoke({"input": "Hi!", "history": history})
print(response.content)

# Record the exchange so the next turn has context
history.append(HumanMessage(content="Hi!"))
history.append(AIMessage(content=response.content))
```
### TypeScript Example

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ modelName: "gpt-4o-mini" });
const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: "LangChain" });
console.log(result);
```
## Resources

### Next Steps

Proceed to `langchain-local-dev-loop` for development workflow setup.