# Chainlit Demo Builder

Build AI chat demos quickly for product demonstrations and proofs of concept.
## Use Cases

- Demonstrate AI product concepts to stakeholders
- Rapidly validate conversation interaction ideas
- Build internal POC demos
## Quick Start (3-minute demo)

### Step 1: Initialize Project

```bash
mkdir demo && cd demo
uv init && uv add chainlit openai
```
### Step 2: Create app.py

```python
import chainlit as cl
from openai import AsyncOpenAI

client = AsyncOpenAI()

@cl.on_message
async def main(message: cl.Message):
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message.content}
        ],
        stream=True
    )
    msg = cl.Message(content="")
    async for chunk in response:
        if chunk.choices[0].delta.content:
            await msg.stream_token(chunk.choices[0].delta.content)
    await msg.send()
```
### Step 3: Run

```bash
uv run chainlit run app.py -w
# Visit http://localhost:8000
```
## Demo Scenario Templates

### Scenario A: Multi-turn Conversation (with memory)

```python
import chainlit as cl
from openai import AsyncOpenAI

client = AsyncOpenAI()

@cl.on_chat_start
async def start():
    cl.user_session.set("history", [
        {"role": "system", "content": "You are the AI assistant for XX product."}
    ])

@cl.on_message
async def main(message: cl.Message):
    history = cl.user_session.get("history")
    history.append({"role": "user", "content": message.content})
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
        stream=True
    )
    msg = cl.Message(content="")
    full_response = ""
    async for chunk in response:
        if chunk.choices[0].delta.content:
            token = chunk.choices[0].delta.content
            full_response += token
            await msg.stream_token(token)
    await msg.send()
    history.append({"role": "assistant", "content": full_response})
```
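Unbounded history will eventually overflow the model's context window. A minimal trimming sketch that keeps the system prompt plus the most recent messages (the helper name and the `max_messages` cutoff are illustrative choices, not part of Chainlit or the scenario above):

```python
def trim_history(history: list[dict], max_messages: int = 20) -> list[dict]:
    """Keep the system prompt (first entry) plus the most recent messages.

    Assumes history[0] is the system message set in on_chat_start.
    """
    if len(history) <= max_messages:
        return history
    return [history[0]] + history[-(max_messages - 1):]
```

Calling `history = trim_history(history)` just before `chat.completions.create(...)` keeps each request bounded while preserving the persona prompt.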
### Scenario B: File Upload + Analysis

```python
import chainlit as cl
from openai import AsyncOpenAI

client = AsyncOpenAI()

@cl.on_chat_start
async def start():
    files = await cl.AskFileMessage(
        content="Please upload the file to analyze",
        accept=["text/plain", "application/pdf"],
        max_size_mb=10
    ).send()
    if files:
        file = files[0]
        # Read file content. Note: this naive text-mode read only works for
        # plain text; PDFs are binary and need a parser (e.g. pypdf) to
        # extract usable text.
        with open(file.path, "r", errors="ignore") as f:
            content = f.read()
        cl.user_session.set("file_content", content)
        await cl.Message(f"File loaded: {file.name}, you can start asking questions").send()

@cl.on_message
async def main(message: cl.Message):
    file_content = cl.user_session.get("file_content", "")
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer questions based on this document:\n\n{file_content[:8000]}"},
            {"role": "user", "content": message.content}
        ],
        stream=True
    )
    msg = cl.Message(content="")
    async for chunk in response:
        if chunk.choices[0].delta.content:
            await msg.stream_token(chunk.choices[0].delta.content)
    await msg.send()
```
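The `file_content[:8000]` slice silently drops everything past the start of a long document. A crude alternative is to pick the chunk most related to the question; the sketch below (a hypothetical helper, not a Chainlit API) uses simple keyword overlap, which is enough for a demo but no substitute for real embedding-based retrieval:

```python
def pick_relevant_chunk(text: str, question: str, chunk_size: int = 8000) -> str:
    """Split text into fixed-size chunks and return the one sharing the most
    words with the question (naive keyword-overlap retrieval)."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)] or [""]
    q_words = set(question.lower().split())
    # max() keeps the first chunk on ties, so empty questions fall back to the start
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))
```

In the handler above, replacing `file_content[:8000]` with `pick_relevant_chunk(file_content, message.content)` keeps the prompt the same size while letting questions reach later parts of the document.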
### Scenario C: Tool Calling Demo (Step Visualization)

```python
import chainlit as cl
from openai import AsyncOpenAI
import json

client = AsyncOpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_knowledge",
            "description": "Search the knowledge base",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"]
            }
        }
    }
]

@cl.step(type="tool")
async def search_knowledge(query: str):
    """Simulate knowledge base search"""
    return f"Found 3 relevant records for '{query}'..."

@cl.on_message
async def main(message: cl.Message):
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": message.content}],
        tools=tools
    )
    msg = response.choices[0].message
    if msg.tool_calls:
        for tool_call in msg.tool_calls:
            args = json.loads(tool_call.function.arguments)
            result = await search_knowledge(args["query"])
            # Continue conversation...
    await cl.Message(content=msg.content or "Processing complete").send()
```
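The `# Continue conversation...` step usually means appending the assistant's tool-call turn plus one `role: "tool"` result message per call, then making a second API request for the final answer. A sketch of building that follow-up message list, following the OpenAI chat-completions tool-calling message shape (the helper itself is illustrative, not part of either library):

```python
def build_followup_messages(history, assistant_msg, tool_results):
    """Append the assistant's tool-call turn and one `tool` message per call,
    so a second chat.completions.create(...) can produce the final answer.

    assistant_msg: the message object from the first API response
    tool_results: dict mapping tool_call id -> result string
    """
    messages = list(history)
    messages.append({
        "role": "assistant",
        "content": assistant_msg.content,
        "tool_calls": [
            {
                "id": tc.id,
                "type": "function",
                "function": {"name": tc.function.name,
                             "arguments": tc.function.arguments},
            }
            for tc in assistant_msg.tool_calls
        ],
    })
    for tc in assistant_msg.tool_calls:
        messages.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": tool_results[tc.id],
        })
    return messages
```

In `main` above you would collect `tool_results[tool_call.id] = result` inside the loop, call `client.chat.completions.create(...)` again with the built list, and send that response's `message.content` instead of the placeholder.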
## Styling Configuration (Make demos look professional)

Create `.chainlit/config.toml`:

```toml
[project]
name = "XX Product AI Assistant"

[UI]
name = "AI Assistant"
default_theme = "light"
# Optional: custom_css = "/public/style.css"

[features]
prompt_playground = false  # Disable for demos
```
Create `chainlit.md` (welcome page):

```markdown
# Welcome to XX AI Assistant

This is an AI-powered intelligent Q&A system that supports:

- Multi-turn conversation
- Document analysis
- Knowledge retrieval

Please enter your question below.
```
## Troubleshooting

### Proxy causing startup failure

```bash
NO_PROXY="*" uv run chainlit run app.py -w
```

### Python version issues

Chainlit requires Python < 3.14:

```toml
# pyproject.toml
requires-python = ">=3.10,<3.14"
```
## Advanced Topics

For more API details (authentication, custom components, deployment, etc.), use Context7 to query the official Chainlit documentation.