Prompt Chaining
Prompt Chaining is the practice of decomposing a complex task into a series of smaller, sequential sub-tasks. Each sub-task is handled by a dedicated LLM call, with the output of one step feeding into the next. This approach improves reliability and testability, and allows for intermediate processing (such as validation or formatting) between steps.
When to Use
- Complex Transformations: When a single prompt is too complex or prone to error (e.g., "Research topic X, then write an article, then translate it").
- Step-by-Step Logic: When the logic requires a strict sequence of operations (e.g., Extract Data -> Validate Data -> Summarize Data).
- Token Limits: When the input or intermediate context exceeds the context window of a single call.
- Debugging: To isolate failures in a complex workflow by inspecting intermediate outputs.
Use Cases
- Document Processing: Extract text -> Summarize -> Translate -> Format as JSON (sketched after this list).
- Code Generation: Write tests -> Write code to pass tests -> Refactor code.
- Content Creation: Generate outline -> Draft sections -> Polish tone -> Generate Title.
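A minimal sketch of the document-processing chain, assuming the same generic llm_call(prompt, input) placeholder used in the Implementation Pattern below; the specific prompts and the target_language parameter are illustrative:

def document_processing_chain(raw_text, target_language="Spanish"):
    # Step 1: Extract the body text from the raw input.
    extracted = llm_call(
        prompt="Extract the main body text from this document, ignoring boilerplate.",
        input=raw_text
    )
    # Step 2: Summarize the extracted text.
    summary = llm_call(
        prompt="Summarize this text in 3-5 sentences.",
        input=extracted
    )
    # Step 3: Translate the summary into the target language.
    translated = llm_call(
        prompt=f"Translate this summary into {target_language}.",
        input=summary
    )
    # Step 4: Format the translated summary as JSON for downstream systems.
    return llm_call(
        prompt='Format this summary as JSON with keys "language" and "summary".',
        input=translated
    )

Because each step is its own call, any stage can be inspected, tested, or swapped independently, which is the main payoff of chaining over a single monolithic prompt.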
Implementation Pattern
def prompt_chain_workflow(input_data):
    # llm_call and validate are placeholders; one way to implement them is sketched below.

    # Step 1: Extraction
    # Focuses solely on getting the right data out of the raw input.
    extracted_data = llm_call(
        prompt="Extract key entities from this text...",
        input=input_data
    )

    # Optional: Deterministic Validation
    # We can run a plain-code check here before proceeding to the next LLM call.
    if not validate(extracted_data):
        raise ValueError("Extraction failed")

    # Step 2: Transformation
    # Focuses on converting the data into the desired format/style.
    final_output = llm_call(
        prompt="Transform this extraction into a marketing summary...",
        input=extracted_data
    )
    return final_output
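The llm_call and validate helpers above are placeholders. One minimal way to fill them in, assuming the Anthropic Python SDK (any provider client works; the model id and the validation rule are illustrative assumptions):

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def llm_call(prompt, input):
    # Single LLM step: combine the instruction with the data from the previous step.
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{prompt}\n\n{input}"}],
    )
    return response.content[0].text

def validate(extracted_data):
    # Hypothetical deterministic gate: reject empty or implausibly short extractions.
    return bool(extracted_data and len(extracted_data.strip()) > 20)

# Example usage with illustrative input text.
result = prompt_chain_workflow("Acme Corp announced a new battery plant in Austin, creating 2,000 jobs...")
print(result)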