craft-survey
Purpose
Study comparable prompts, skills, or repo assets, extract the best patterns, and turn them into actionable improvements.
A grounded survey matters because agents that invent from scratch often rediscover bad shapes the community has already outgrown. A focused pass through prior art surfaces the handful of patterns that genuinely carry their weight — and, just as importantly, identifies the ones that shouldn't be copied.
Use this when
- there are older assets worth learning from
- a new skill should be grounded in proven patterns
- the current artifact feels under-informed
- best practices should be incorporated without cargo-culting them
Inputs
- target artifact or problem
- local references if available
- optional public references if allowed
- constraints and intended audience
Steps
- Start with local or in-repo references first. They're closest to the current context and cheapest to verify.
- Identify comparable assets and group them by purpose. Grouping makes recurring shapes visible.
- Extract repeated patterns that seem genuinely useful. Prefer patterns that show up across multiple sources.
- Separate core patterns from optional stylistic choices. Confusing the two is how cargo-culting starts.
- Synthesize a small set of recommended changes. Synthesis beats quotation — don't just paste from sources.
- Apply only the changes that improve clarity, reuse, or quality. Novelty alone isn't a reason to adopt a pattern.
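The first two steps, gathering local references and making recurring shapes visible, can be sketched as a small script. Everything here is an assumption for illustration: the directory layout and the `SKILL.md` filename are conventions I'm positing, not something this skill prescribes.

```python
from collections import Counter
from pathlib import Path

def survey_headings(root: str) -> Counter:
    """Count recurring section headings across comparable skill docs.

    Headings that appear in several files are candidate core patterns;
    headings unique to one file are more likely stylistic choices.
    """
    counts = Counter()
    # Assumed convention: each skill lives in its own folder as SKILL.md.
    for doc in Path(root).rglob("SKILL.md"):
        for line in doc.read_text(encoding="utf-8").splitlines():
            if line.startswith("#"):
                counts[line.lstrip("#").strip().lower()] += 1
    return counts

# Usage: headings shared by two or more docs are the shapes worth studying first.
# shared = [h for h, n in survey_headings("skills").items() if n >= 2]
```

A frequency count like this is only a starting point; it surfaces recurring shapes, while judging which of them genuinely carry their weight remains the manual part of the pass.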
Output format
Survey target
One or two sentences naming the specific artifact being improved and the concrete survey question driving this pass. A reader seeing only this section should know what would count as a useful answer — not a generic "strengthen the artifact."
Reference patterns
Short list of patterns found in comparable assets. Every pattern must cite its provenance — either inline per item, or via a section-opening source map that binds each pattern to its source file + section. A pattern with no traceable source doesn't belong on the list.
Adopt
Patterns worth bringing into CraftKit. Each item is two parts: the pattern (short handle) AND a rationale clause naming a concrete benefit — what failure it prevents, what friction it removes, what quality it raises. Bare phrases don't satisfy the section; they model without grounding.
Avoid
Patterns that should not be copied. Each item is two parts: the pattern AND a rationale clause naming the specific harm — what it breaks, what coupling it introduces, what portability it costs. Bare phrases don't satisfy the section.
Recommended edits
Concrete file or section changes. Each item must name three things: (a) the target file (path or skill name), (b) the specific section/heading/location within that file, and (c) the edit verb (add / tighten / remove / replace / refactor). Scale the number of edits to the scope of the research ask — a narrow question deserves 1–3 edits, not a full rewrite list.
Risks
Survey-specific risks only. At least one risk must reference either the actual source set surveyed or a named constraint of the target artifact. Generic risks that could appear verbatim in any prior-art survey don't count — if the risk is portable across unrelated survey passes, it hasn't engaged this survey.
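The "bare phrases don't satisfy the section" rule for Adopt and Avoid items can be checked mechanically. A minimal sketch, assuming items follow a "pattern: rationale" or "pattern - rationale" convention; the separator set is my assumption, not part of the spec:

```python
import re

def bare_items(bullets: list[str]) -> list[str]:
    """Return Adopt/Avoid bullets that lack a rationale clause.

    A bullet counts as grounded when a pattern handle is followed by a
    separator (colon, dash, or em dash) and then a rationale clause.
    """
    separator = re.compile(r"\S\s*[:\u2014-]\s+\S")
    return [b for b in bullets if not separator.search(b)]

# Usage: the first item below is flagged as bare, the second passes.
# bare_items(["- minimal-diff editing",
#             "- failure-mode note: warns future editors"])
```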
Guardrails
- prefer first-principles synthesis over copy-paste
- do not import provider-specific quirks into core assets
- cite provenance in docs when relevant
- use a few strong patterns, not a giant laundry list
Failure modes
- treating one strong reference as a template and overfitting to its quirks
- turning the output into a literature review instead of actionable edits
- importing provider-specific phrasing that breaks portability
- skipping the "avoid" list and copying every pattern uncritically
Example
Input
Improve a tuning skill using older prompt-builder assets and meta-skills as references.
Output
Survey target
Strengthen craft-tune so its edits stay minimal-diff, consistent across runs, and portable across providers; the survey question is which patterns from the older prompt-builder assets and meta-skills support that.
Reference patterns
- explicit inputs and outputs (prompt-builder assets, input/output sections)
- minimal-diff editing (meta-skills, editing guidance)
- critique before revision (meta-skills, review workflow)
- examples embedded in docs (prompt-builder assets, usage sections)
Adopt
- revised artifact plus changelog structure: makes every edit auditable after the pass
- failure-mode note: steers future editors away from known regressions
- preservation of original intent: prevents rewrites that drift from the stated goal
Avoid
- overly vendor-specific phrases: they cost portability across providers
- giant all-in-one instructions: they bury the core workflow and resist minimal-diff edits
Recommended edits
- craft-tune, Output format section: tighten the output template
- craft-tune, after Guardrails: add a short failure-mode section
- craft-tune, Example section: add one compact example
Risks
- overfitting to the prompt-builder assets' naming conventions, which predate the current skill layout