Start Meta
Meta — Cross-stack orchestrator. The top-level entry point when you don't know where to start.
Core Job: read project state, parse your ask, route to the right stack-orchestrator OR the right meta-skill.
Core Question: "Is this a domain task (research / marketing / product) or a process task (scope / debate / decompose / review)?"
This skill does NOT execute work. It is a router. The actual work is done by the skill it routes you to (which may itself be a router, like /start-research).
When To Use
- You just installed the full agent-skills stack and don't know what to type.
- Your ask doesn't clearly belong to one domain ("I want to launch a new product feature" — could be research, product, marketing, or all three).
- You need a process skill: scope something with `discover`, debate a decision with `agents-panel`, decompose work with `task-breakdown`, review work with `fresh-eyes`.
- You want a quick read of "what's been done across the whole project."
When NOT To Use
- You already know your domain — go straight to `/start-research`, `/start-marketing`, or `/start-product`.
- You already know your skill — invoke it directly.
How It Works
- Cross-stack state detection — silently read `research/`, `brand/`, `architecture/`, `.agents/`, and `.agents/experience/*.md` to build a picture of the whole project.
- Domain classification — parse the user's ask. Classify as: research / marketing / product / cross-stack / process.
- Routing decision — either defer to a stack-orchestrator (`/start-X`) or propose a specific meta-skill.
- User confirmation — print the hand-off command. Never auto-invoke.
Step 1: Cross-Stack State Detection
Read .agents/manifest.json first — it's the canonical state index. Single file, all artifact metadata in one parse. If missing or clearly stale (check updated_at), regenerate it:
bun ${SKILLS_ROOT:-.claude/skills}/meta-skills/scripts/manifest-sync.ts
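To make that concrete, here is a minimal sketch of the read-and-refresh step, assuming the manifest carries an `updated_at` ISO timestamp (see the manifest spec referenced below) and using an illustrative one-week staleness threshold:

```ts
// read-manifest.ts: sketch of loading .agents/manifest.json and regenerating it when missing or stale
import { existsSync, readFileSync } from "node:fs";
import { execSync } from "node:child_process";

const MANIFEST = ".agents/manifest.json";
const SYNC = `${process.env.SKILLS_ROOT ?? ".claude/skills"}/meta-skills/scripts/manifest-sync.ts`;
const WEEK_MS = 7 * 24 * 60 * 60 * 1000; // illustrative staleness threshold, not part of the spec

function loadManifest(): Record<string, unknown> {
  const missing = !existsSync(MANIFEST);
  const stale =
    !missing &&
    Date.now() - Date.parse(JSON.parse(readFileSync(MANIFEST, "utf8")).updated_at) > WEEK_MS;
  if (missing || stale) execSync(`bun ${SYNC}`, { stdio: "inherit" }); // regenerate the canonical index
  return JSON.parse(readFileSync(MANIFEST, "utf8"));
}
```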
Status-aware lookup: for each artifact entry in `manifest.artifacts`, read `status` and `stale` to qualify the state map (a sketch of this mapping follows the table):

| Manifest signal | State map value |
|---|---|
| `status: done`, `stale: false` | ✅ done |
| `status: done_with_concerns` | ⚠️ done-with-concerns — surface the concern in routing output |
| `status: blocked` or `needs_context` | treat as missing |
| `stale: true` | ✅ done (stale) — propose refresh as an option, don't block |
| `frontmatter_present: false` | ✅ done (legacy, no frontmatter) — quality unknown, suggest refresh |
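A rough TypeScript sketch of that mapping: the field names follow the table, while the `concern` field and the exact artifact shape are assumptions; the real contract lives in the manifest spec.

```ts
// qualify.ts: sketch of turning one manifest artifact entry into a state-map value
type Artifact = {
  status: "done" | "done_with_concerns" | "blocked" | "needs_context";
  stale?: boolean;
  frontmatter_present?: boolean;
  concern?: string; // assumed field; the real shape is defined in the manifest spec
};

function qualify(a: Artifact): string {
  if (a.status === "blocked" || a.status === "needs_context") return "missing";
  if (a.status === "done_with_concerns")
    return `⚠️ done-with-concerns${a.concern ? ` (${a.concern})` : ""}`;
  if (a.frontmatter_present === false) return "✅ done (legacy, no frontmatter; suggest refresh)";
  if (a.stale) return "✅ done (stale; offer a refresh, don't block)";
  return "✅ done";
}
```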
Experience block: `manifest.experience` tracks `.agents/experience/{domain}.md` files separately. The entries count per domain is a heuristic for "how much context has been gathered" — a domain with 7 entries is well covered; one with a single entry is barely covered.
See ../../references/manifest-spec.md for the full contract.
Path reference / filesystem fallback — used only when `.agents/manifest.json` doesn't exist (fresh project) or sync hasn't been run (a sketch of this scan follows the table):

| Path | What it tells you |
|---|---|
| `CLAUDE.md` (project) | Project name, stack, conventions. |
| `research/product-context.md` | Cross-stack foundation exists. |
| `research/icp-research.md`, `research/market-research.md` | Research stack progress. |
| `brand/BRAND.md`, `brand/DESIGN.md` | Marketing stack foundation. |
| `architecture/system-architecture.md` | Product stack architecture done. |
| `.agents/skill-artifacts/product/flow/index.md` + flow files | Product flows mapped. |
| `.agents/skill-artifacts/meta/specs/*.md` | Spec exists from discover. |
| `.agents/skill-artifacts/meta/tasks.md` | Tasks decomposed from task-breakdown. |
| `.agents/skill-artifacts/meta/records/diagnose-*.md`, `.agents/skill-artifacts/meta/sketches/prioritize-*.md`, `.agents/skill-artifacts/meta/records/targets-*.md` | Research mid-pipeline outputs. |
| `.agents/skill-artifacts/mkt/campaign-plan.md` + `.agents/skill-artifacts/mkt/content/`, `.agents/skill-artifacts/mkt/lp-brief/`, etc. | Marketing artifacts. |
| `.agents/skill-artifacts/meta/records/cleanup-*.md`, `.agents/skill-artifacts/meta/records/machine-cleanup-*.md` | Cleanup audits. |
| `.agents/skill-artifacts/meta/decisions/[date]-*.md`, `.agents/skill-artifacts/meta/records/[date]-fresh-eyes-*.md` | Meta-skill artifacts (dated, immutable — lifecycle: decision / snapshot). |
| `.agents/experience/*.md` | All cold-start answers across stacks. |
| `.agents/experience/meta-workflow.md` | Prior /start-meta breadcrumb. |
| `.agents/skill-artifacts/meta/records/learned-rules.md` | Behavior corrections from prior sessions. |
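When the manifest is absent, the fallback scan might look like this sketch. The paths come from the table above; the done/missing heuristic (a plain existence check) is an assumption, and a real check could also validate required sections.

```ts
// fs-fallback.ts: sketch of deriving a coarse state map straight from the filesystem
import { existsSync, readdirSync } from "node:fs";

const has = (p: string) => existsSync(p);
const mdFiles = (dir: string) =>
  existsSync(dir) ? readdirSync(dir).filter((f) => f.endsWith(".md")) : [];

const stateMap = {
  research: {
    "product-context": has("research/product-context.md") ? "done" : "missing",
    icp: has("research/icp-research.md") ? "done" : "missing",
    market: has("research/market-research.md") ? "done" : "missing",
  },
  marketing: {
    brand: has("brand/BRAND.md") && has("brand/DESIGN.md") ? "done" : "missing",
    campaign: has(".agents/skill-artifacts/mkt/campaign-plan.md") ? "done" : "missing",
    content: mdFiles(".agents/skill-artifacts/mkt/content"),
  },
  product: {
    architecture: has("architecture/system-architecture.md") ? "done" : "missing",
    flows: mdFiles(".agents/skill-artifacts/product/flow"),
    tasks: has(".agents/skill-artifacts/meta/tasks.md") ? "done" : "missing",
  },
};

console.log(JSON.stringify(stateMap, null, 2));
```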
Build a cross-stack state map:
research:
  product-context: done | partial | missing
  icp: done | partial | missing
  market: done | partial | missing
  diagnose: done | not run
  prioritize: done | partial | missing
  targets: done | partial | missing
marketing:
  brand: done | partial | missing
  campaign: done | partial | missing
  content: [list of slugs]
  lp: [audit / brief / both / neither]
  seo: [list of modes]
  short-form: [list of slugs]
  outreach: [list of slugs]
product:
  spec: done | partial | missing
  flows: [list]
  architecture: done | partial | missing
  tasks: done | partial | missing
  code-cleanup: done | not run
  docs: [skim]
meta:
  panel-reports: [count, latest mtime]
  fresh-eyes: [count, latest mtime]
  learned-rules: [count]
Project-fit check: if CLAUDE.md describes a B2B SaaS but research/icp-research.md describes a consumer app, flag the mismatch. State may be from a different project.
Step 2: Domain Classification
Parse the user's argument. Classify into one of these:
| User says | Classification | Route to |
|---|---|---|
| "audience", "ICP", "competitors", "market", "diagnose", "prioritize", "targets", "funnel" | research | /start-research |
| "brand", "campaign", "copy", "headline", "landing page", "LP", "SEO", "video", "TikTok", "cold email", "outreach", "humanize", "VN tone" | marketing | /start-marketing |
| "user flow", "tech stack", "architecture", "schema", "API", "code", "refactor", "machine cleanup", "docs", "README" | product | /start-product |
| "scope this", "clarify", "what should we build", "requirements" | process | /discover |
| "debate this", "multiple perspectives", "poll", "consensus", "what do experts think" | process | /agents-panel |
| "decompose", "task list", "break down", "implementation order", "tasks" | process | /task-breakdown |
| "review my work", "second opinion", "did I miss anything", "post-implementation review" | process | /fresh-eyes |
| Ambiguous, multi-domain, or "I want to launch a new product" | cross-stack | propose 2–3 stack orchestrators in sequence |
| Empty | unknown | ask scoping question |
If empty, ask:
"What are you trying to do? Pick the closest match:
- Understand my customers / market / problem (research)
- Build brand / campaigns / content (marketing)
- Design a feature / system / flow (product)
- Scope something vague (`discover`)
- Debate a decision (`agents-panel`)
- Decompose work into tasks (`task-breakdown`)
- Review work I just did (`fresh-eyes`)
- I'm not sure — show me what's been done so far"
The last option ("I'm not sure") prints the cross-stack state map and asks again.
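A naive keyword-matching sketch of the classification above. The keyword lists are abbreviated from the table, and a real router would also weigh project state and conversation context rather than raw substring hits.

```ts
// classify.ts: sketch of keyword-based domain classification for the user's ask
type Domain = "research" | "marketing" | "product" | "process" | "cross-stack" | "unknown";

const KEYWORDS: Record<Exclude<Domain, "cross-stack" | "unknown">, string[]> = {
  research: ["audience", "icp", "competitors", "market", "diagnose", "prioritize", "funnel"],
  marketing: ["brand", "campaign", "copy", "headline", "landing page", "seo", "outreach"],
  product: ["user flow", "tech stack", "architecture", "schema", "api", "refactor", "docs"],
  process: ["scope this", "debate", "decompose", "task list", "review my work", "second opinion"],
};

function classify(ask: string): Domain {
  if (!ask.trim()) return "unknown"; // empty ask: fall through to the scoping question
  const text = ask.toLowerCase();
  const hits = (Object.keys(KEYWORDS) as (keyof typeof KEYWORDS)[]).filter((domain) =>
    KEYWORDS[domain].some((kw) => text.includes(kw)),
  );
  if (hits.length === 1) return hits[0];
  if (hits.length > 1) return "cross-stack"; // multi-domain ask: propose a 2-3 step path
  return "unknown";
}

console.log(classify("who is my target audience?")); // "research"
```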
Step 3: Routing Decision
Domain routing:
- Single-domain intent → defer to that stack's orchestrator. "Looks like a research task — run `/start-research`." Don't try to do `/start-research`'s job here.
- Process intent → propose the specific meta-skill (`/discover`, `/agents-panel`, `/task-breakdown`, `/fresh-eyes`) with rationale.
- Cross-stack intent ("launch a new product feature") → propose a 2-3 step path:
  - Step 1: `/start-research` (verify ICP exists; if not, run icp-research)
  - Step 2: `/start-product` (design the feature: flows + architecture)
  - Step 3: `/start-marketing` (positioning + LP + content for launch)
  - Optional terminal: `/fresh-eyes` after each.
Process-skill rules:
- `discover` — recommend when scope is genuinely unclear. Don't recommend if the user has clear intent — they'll find it patronizing.
- `agents-panel` — recommend when the user explicitly wants debate, OR when multiple equally-valid options surface in routing and the user wants help deciding.
- `task-breakdown` — recommend when spec.md OR system-architecture.md exists AND the user is about to build. Hard-gated on at least one upstream artifact.
- `fresh-eyes` — recommend when the user is finishing implementation OR has just produced a critical artifact (e.g., system-architecture, brand-system, lp-brief).
Wrap-around suggestions:
- After ANY recommendation that touches security-sensitive code, data-mutation code, or critical artifacts, mention `/fresh-eyes` as a terminal step.
- Before any non-trivial build, mention `/discover` as upstream if scope is unclear.
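Putting Steps 1-3 together, the routing decision reduces to something like this sketch. The `Domain` union matches the classification sketch, the state shape is an assumption carried over from Step 1, and the gating rules mirror the bullets above (the process branch is deliberately partial).

```ts
// route.ts: sketch of turning a classification + state map into a hand-off recommendation
type Domain = "research" | "marketing" | "product" | "process" | "cross-stack" | "unknown";
type Route = { command: string; rationale: string; followUps?: string[] };

interface StateMap {
  product?: { spec?: "done" | "partial" | "missing"; architecture?: "done" | "partial" | "missing" };
}

function route(domain: Domain, ask: string, state: StateMap): Route {
  switch (domain) {
    case "research":
    case "marketing":
    case "product":
      // single-domain: defer to the stack orchestrator, don't pick the specific skill here
      return { command: `/start-${domain}`, rationale: `${domain}-domain task` };
    case "cross-stack":
      // never more than 3 hops; fresh-eyes is offered as an optional terminal step
      return {
        command: "/start-research",
        rationale: "multi-domain launch, verify audience first",
        followUps: ["/start-product", "/start-marketing", "/fresh-eyes (optional)"],
      };
    case "process": {
      const wantsTasks = /decompose|task list|break down/i.test(ask);
      // hard gate: task-breakdown needs a spec or architecture upstream
      if (wantsTasks && state.product?.spec !== "done" && state.product?.architecture !== "done") {
        return { command: "/discover", rationale: "no spec or architecture yet, scope first" };
      }
      return { command: wantsTasks ? "/task-breakdown" : "/fresh-eyes", rationale: "process intent" };
    }
    default:
      return { command: "", rationale: "empty or unclear ask, print the scoping question" };
  }
}
```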
Step 4: Present + Confirm
Output format for single-domain:
## Where you are (cross-stack snapshot)
Research: icp ✅ · market ❌ · prioritize ❌
Marketing: brand ❌ · campaign ❌ · content (none)
Product: spec ❌ · flows (none) · architecture ❌
Meta: no reports yet
## What you asked
"I want to figure out who my customers are" → research domain.
## Recommended: route to /start-research
Why: this is a research-domain task. /start-research will read the
research-stack state and propose the next skill (likely icp-research).
→ /start-research
Output format for cross-stack:
## What you asked
"I want to launch a new product feature" → cross-stack (research + product + marketing).
## Recommended path
1. /start-research → verify audience clarity for the feature
2. /start-product → design flows + architecture
3. /start-marketing → positioning, LP, content for launch
(optional /fresh-eyes after each artifact)
Each /start-X is its own router; you'll get sub-recommendations from each.
→ Run /start-research first.
Output format for process skill:
## What you asked
"I just finished implementing the auth migration — can you review it?"
→ process intent: post-implementation review.
## Recommended: /fresh-eyes
Why: post-implementation independent review. Runs an independent
agent against your changes, returns issues + severity.
Cost: ~$0.15-0.50 · Duration: ~3 min · Produces: .agents/skill-artifacts/meta/records/[date]-fresh-eyes-<slug>.md
→ /fresh-eyes
Step 5: Persist + Hand Off
Append to .agents/experience/meta-workflow.md:
## Session 2026-05-06
- Read state: cross-stack snapshot
- User intent: research-domain (audience clarity)
- Recommended: /start-research
- User confirmed: yes
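A minimal sketch of that append (`appendFileSync` works under Bun as well as Node); the entry fields mirror the example above.

```ts
// log-session.ts: sketch of appending a routing breadcrumb to the meta-workflow experience file
import { appendFileSync, mkdirSync } from "node:fs";

function logSession(intent: string, recommended: string, confirmed: boolean) {
  mkdirSync(".agents/experience", { recursive: true });
  const entry = [
    `\n## Session ${new Date().toISOString().slice(0, 10)}`,
    "- Read state: cross-stack snapshot",
    `- User intent: ${intent}`,
    `- Recommended: ${recommended}`,
    `- User confirmed: ${confirmed ? "yes" : "no"}`,
    "",
  ].join("\n");
  appendFileSync(".agents/experience/meta-workflow.md", entry);
}

logSession("research-domain (audience clarity)", "/start-research", true);
```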
Print:
Run `/start-research` next. Re-run `/start-meta` if your task shifts to a different domain.
Exit.
Pipeline Reference
For the canonical cross-stack pipeline, decision rules, and per-skill catalog, see ./references/workflow-graph.md.
Anti-Patterns
- Don't ignore the manifest — always read `.agents/manifest.json` first; per-path filesystem scans are a fallback, not the default.
- Don't duplicate the work of /start-research, /start-marketing, /start-product. When intent is single-domain, route there. Don't pick the specific skill yourself.
- Don't lecture about all 24 skills. Show only what's relevant to the user's ask + state.
- Don't auto-invoke. Always print `/skill-name` for the user to type.
- Don't recommend `discover` defensively when the user has clear intent. That's patronizing.
- Don't recommend `task-breakdown` without a spec or architecture upstream. It's hard-gated.
- Don't recommend more than 3 hops in a cross-stack path. If it needs 5+, surface that the project is too vague and recommend `discover` first.
Output
- Inline only.
- Side effect: appends one entry to `.agents/experience/meta-workflow.md`.
Status
Ends with one of:
- `DONE` — recommendation given, hand-off printed.
- `BLOCKED` — couldn't read project state.
- `NEEDS_CONTEXT` — empty ask + state too sparse to infer. Ask scoping question.