# Reflection
Reflection (also known as Self-Correction or Self-Refinement) gives an agent the ability to look at its work and ask, "Is this correct?" or "Can this be better?". Instead of accepting the first draft, the agent acts as its own critic, identifying flaws and generating an improved version. This significantly boosts performance on reasoning and coding tasks.
## When to Use
- Quality Control: When high accuracy is required (e.g., generating code, writing legal text).
- Compliance: To ensure the output follows specific formatting or policy constraints.
- Iterative Improvement: When "good enough" isn't enough, and polish is needed.
- Hallucination Check: Asking the model to verify facts against its own knowledge or retrieved context.
## Use Cases
- Code Repair: Creating code -> Running it -> Reading the error -> Fixing the code.
- Writing Polish: Drafting an email -> Critiquing tone and clarity -> Rewriting.
- Safety Check: Generating a response -> Checking for policy violations -> Regenerating if unsafe.
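The Code Repair loop above can be sketched as a small runnable program: execute a candidate snippet, capture the error, and feed it back to a repair step. Here `fix_code` is a hypothetical stand-in for an LLM call; a real agent would prompt a model with the code and the error message.

```python
# Minimal sketch of the Code Repair loop: run -> read the error -> fix -> retry.
import traceback

def run_candidate(code: str):
    """Execute code; return the error summary, or None on success."""
    try:
        exec(code, {})
        return None
    except Exception:
        # Last line of the traceback, e.g. "NameError: name 'x' is not defined"
        return traceback.format_exc().strip().splitlines()[-1]

def fix_code(code: str, error: str) -> str:
    # Hypothetical repair step: a real agent would send `code` and `error`
    # to an LLM. Here one known fix is hard-coded for the demo.
    if "NameError" in error:
        return "x = 1\n" + code
    return code

def repair_loop(code: str, max_rounds: int = 3):
    for _ in range(max_rounds):
        error = run_candidate(code)
        if error is None:
            return code, True          # success: code runs cleanly
        code = fix_code(code, error)   # reflect on the error and retry
    return code, False                 # give up after max_rounds

fixed, ok = repair_loop("print(x + 1)")
```

The loop is bounded by `max_rounds` so a repair step that never converges cannot spin forever, which is the usual guard in reflection loops.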
## Implementation Pattern
```python
def reflection_workflow(task):
    # Step 1: Draft
    draft = llm_call(f"Write a draft for: {task}")

    # Step 2: Critique
    # The prompt explicitly asks to find problems, not just "rewrite".
    critique = llm_call(
        prompt="Critique this draft. List logical errors, tone issues, or missing info.",
        input=draft,
    )

    # Step 3: Refine
    # The final generation sees both the draft and the specific feedback.
    final_version = llm_call(
        prompt="Rewrite the draft to address the following critique...",
        input={"draft": draft, "critique": critique},
    )
    return final_version
```
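The pattern above leaves `llm_call` undefined. A minimal runnable sketch stubs it out to show the data flow, and extends the single pass into a loop that stops once the critic reports no issues; both the stub's behavior and the `"No issues."` stopping convention are assumptions, not part of any real model API.

```python
# Runnable sketch of draft -> critique -> refine with a stubbed llm_call.
# The stub is hypothetical; a real implementation would call a model API.

def llm_call(prompt: str, input=None) -> str:
    # Canned responses standing in for a model, keyed on the prompt.
    if prompt.startswith("Write a draft"):
        return "teh first draft"
    if prompt.startswith("Critique"):
        return "Typo: 'teh' should be 'the'." if "teh" in input else "No issues."
    if prompt.startswith("Rewrite"):
        return input["draft"].replace("teh", "the")
    return ""

def reflection_workflow(task: str, max_rounds: int = 2) -> str:
    draft = llm_call(f"Write a draft for: {task}")
    for _ in range(max_rounds):
        critique = llm_call("Critique this draft. List problems.", input=draft)
        if critique == "No issues.":   # stopping criterion: the critic is satisfied
            break
        draft = llm_call("Rewrite the draft to address the critique.",
                         input={"draft": draft, "critique": critique})
    return draft

result = reflection_workflow("a greeting")
```

Capping the loop with `max_rounds` matters in practice: without it, a critic that always finds something to improve will refine indefinitely.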