Reflection
Reflection (also known as Self-Correction or Self-Refinement) gives an agent the ability to look at its own work and ask, "Is this correct?" or "Can this be better?" Instead of accepting the first draft, the agent acts as its own critic, identifying flaws and generating an improved version. This significantly boosts performance on reasoning and coding tasks.
When to Use
- Quality Control: When high accuracy is required (e.g., generating code, writing legal text).
- Compliance: To ensure the output follows specific formatting or policy constraints.
- Iterative Improvement: When "good enough" isn't enough, and polish is needed.
- Hallucination Check: Asking the model to verify facts against its own knowledge or retrieved context.
Use Cases
- Code Repair: Creating code -> Running it -> Reading the error -> Fixing the code.
- Writing Polish: Drafting an email -> Critiquing tone and clarity -> Rewriting.
- Safety Check: Generating a response -> Checking for policy violations -> Regenerating if unsafe.
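The Code Repair loop above (create -> run -> read the error -> fix) can be sketched concretely. The snippet below is a minimal illustration: `run_snippet`, `repair_loop`, and the `toy_fix` stand-in for the LLM repair call are all hypothetical names introduced here, and a real system would replace `toy_fix` with a model call that receives the traceback.

    import traceback

    def run_snippet(code):
        """Execute a code snippet; return the traceback text, or None on success."""
        try:
            exec(compile(code, "<snippet>", "exec"), {})
            return None
        except Exception:
            return traceback.format_exc()

    def repair_loop(code, fix_fn, max_attempts=3):
        """Draft -> run -> read error -> fix, repeated until the code runs clean."""
        for _ in range(max_attempts):
            error = run_snippet(code)
            if error is None:
                return code             # the snippet executed without raising
            code = fix_fn(code, error)  # ask the fixer (an LLM in practice) for a repair
        return code

    # Stand-in for an LLM repair call: patches one known undefined-name bug.
    def toy_fix(code, error):
        if "NameError" in error and "pi" in error:
            return "import math\npi = math.pi\n" + code
        return code

    buggy = "area = pi * 2 ** 2"
    fixed = repair_loop(buggy, toy_fix)

The key design point is that the error message itself becomes the critique: the agent never guesses blindly, it reacts to concrete runtime feedback.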
Implementation Pattern
def reflection_workflow(task):
    # Step 1: Draft an initial answer.
    draft = llm_call(f"Write a draft for: {task}")

    # Step 2: Critique. The prompt explicitly asks the model to find
    # problems, not just "rewrite".
    critique = llm_call(
        "Critique this draft. List logical errors, tone issues, "
        f"or missing info.\n\nDraft:\n{draft}"
    )

    # Step 3: Refine. The final generation sees both the draft and
    # the specific feedback.
    final_version = llm_call(
        "Rewrite the draft to address the following critique.\n\n"
        f"Draft:\n{draft}\n\nCritique:\n{critique}"
    )
    return final_version
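The three-step workflow above can also be generalized to multiple rounds: critique, refine, and stop once the critic finds nothing to fix. The sketch below uses stubbed `critic` and `refiner` functions purely for illustration (a real agent would back both with LLM calls); the names and stopping convention ("OK" means approved) are assumptions of this example.

    def reflect_until_done(draft, critique_fn, refine_fn, max_rounds=3):
        """Loop: critique the draft; stop when the critic approves, else refine."""
        for _ in range(max_rounds):
            critique = critique_fn(draft)
            if critique == "OK":        # critic found nothing to fix
                return draft
            draft = refine_fn(draft, critique)
        return draft                    # give up after max_rounds, return best effort

    # Stubbed critic/refiner for illustration: demands a greeting, then approves.
    def critic(text):
        return "OK" if text.startswith("Hello") else "Missing greeting"

    def refiner(text, critique):
        return "Hello, " + text if "greeting" in critique else text

    result = reflect_until_done("thanks for your email.", critic, refiner)

The `max_rounds` cap matters in practice: without it, a critic that is never satisfied (or a refiner that oscillates) would loop forever and burn tokens.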