seo-maker
@rules/seo-workflow.md @rules/validation.md
SEO Maker
Audit and improve a project's search visibility across traditional search engines and AI answer engines.
<output_language>
Default all user-facing deliverables, saved artifacts, reports, plans, generated docs, summaries, handoff notes, commit/message drafts, and validation notes to Korean, even when this canonical skill file is written in English.
Preserve source code identifiers, CLI commands, file paths, schema keys, JSON/YAML field names, API names, package names, proper nouns, and quoted source excerpts in their required or original language.
Use a different language only when the user explicitly requests it, an existing target artifact must stay in another language for consistency, or a machine-readable contract requires exact English tokens. If a localized template or reference exists (for example *.ko.md or *.ko.json), prefer it for user-facing artifacts.
</output_language>
- Audit website or project SEO in a systematic way.
- Cover on-page SEO, technical SEO, content SEO, and Core Web Vitals.
- Evaluate AEO readiness for featured snippets, voice search, and direct-answer surfaces.
- Evaluate GEO readiness for citation likelihood in generative AI responses.
- Evaluate LLMO readiness for AI crawler access, freshness, and model-readable context.
- Save prioritized recommendations and evidence under `.hypercore/seo-maker/[slug]/`.
- Update existing reports so SEO improvement history remains traceable.
- If the user asks for the highest score, max score, maximum score, perfect score, or continuous improvement, run an audit -> fix or recommend -> re-audit loop and keep the best result.
<routing_rule>
Use seo-maker when the main outcome is an SEO/AEO/GEO/LLMO audit, optimization report, or evidence-backed search visibility improvement plan.
Route neighboring work elsewhere:
- Page or product UI design: use `designer` or the relevant frontend design skill.
- Competitor or market research without site audit: use `research`.
- Pre-release build and deployment checks: use `pre-deploy`.
- Pure performance engineering without search context: use the relevant performance or optimization workflow.
- Broad AI search trend research without a target site or content set: use `research`.
</routing_rule>
<trigger_conditions>
Positive examples:
- "Audit this site's SEO."
- "Check metadata and structured data."
- "Create an SEO audit report."
- "Review search-engine optimization status and give improvement recommendations."
- "Summarize how to improve Core Web Vitals scores."
- "Optimize our content so AI search engines can cite it."
- "Check whether ChatGPT or Perplexity can surface our brand."
- "Analyze this site from AEO and GEO perspectives."
- "Keep iterating fixes until the SEO score is as high as possible."
- "Audit, fix, and re-verify until the search optimization score is close to perfect."
Negative examples:
- "Design this landing page." -> use `designer`.
- "Research competitor market positioning." -> use `research`.
- "Check the pre-deploy checklist." -> use `pre-deploy`.
Boundary examples:
- "Optimize this page's performance." Use `seo-maker` only when performance is evaluated through SEO/Core Web Vitals impact.
- "Research AI search trends." Use `seo-maker` only when the output is tied to a target site, page, or content inventory.
</trigger_conditions>
| Situation | Mode |
|---|---|
| Full SEO audit for a new project or site | create |
| On-page SEO review for a specific page | create |
| Add a new analysis to an existing SEO report | update |
| Focused Core Web Vitals or technical SEO analysis | create |
| Re-check after SEO improvements | update |
| Iterative improvement toward best or perfect score | optimize |
| AEO/GEO citation readiness analysis | create |
| Add AEO/GEO analysis to an existing report | update |
<supported_targets>
- Metadata and SEO elements in HTML pages and Next.js/React components.
- `robots.txt`, `sitemap.xml`, `llms.txt`, canonical tags, and structured data.
- Core Web Vitals signals such as LCP, INP, and CLS.
- `<head>` elements including title, meta description, Open Graph, and Twitter Card.
- Heading hierarchy from `h1` through `h6`.
- Image alt text and internal link structure.
- Schema.org JSON-LD markup, including AI trust signals.
- AEO elements such as Q&A formats, direct-answer structure, and featured-snippet optimization.
- GEO elements such as citable sentence structure, statistics with sources, and entity authority.
- LLMO elements such as `llms.txt`, AI crawler accessibility, and content freshness.
</supported_targets>
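As an illustration of the `<head>` elements listed above, a minimal audited head might look like the following sketch (all URLs and values are placeholders, not recommendations for any specific site):

```html
<head>
  <title>Example Product — Brand Name</title>
  <meta name="description" content="One-sentence summary of the page for search snippets.">
  <link rel="canonical" href="https://example.com/product">
  <!-- Open Graph -->
  <meta property="og:title" content="Example Product — Brand Name">
  <meta property="og:description" content="One-sentence summary for social previews.">
  <meta property="og:image" content="https://example.com/og.png">
  <!-- Twitter Card -->
  <meta name="twitter:card" content="summary_large_image">
</head>
```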
<complexity_routing>
| Complexity | Signals | Handling |
|---|---|---|
| Simple | Single-page review, one SEO element, quick metadata audit | Direct: write report.md immediately |
| Complex | Full-site audit, many pages, technical SEO plus content SEO plus Core Web Vitals, competitor comparison | Tracked: use flow.json for phase tracking |
Before starting, record:
Complexity: [simple/complex] — [one-line reason]
Mode: [create/update/optimize]
Target: [site/page/project path]
Proof surface: [commands, browser checks, web sources, or local files]
</complexity_routing>
<universal_intake>
Before scoring any project, classify the audit context so this skill works across stacks:
- `target_type`: `live-url`, `local-static`, `nextjs`, `react-spa`, `docs-site`, `ecommerce`, `blog`, or `app-with-marketing-pages`
- `access_level`: live URL, local files only, Search Console available, analytics available, field Core Web Vitals available, or AI citation probe available
- `allowed_action`: `audit-only`, `recommend`, `edit-code`, or `optimize-loop`
- `measurement_confidence`: lower confidence when live URL, Search Console, field Core Web Vitals, or AI citation probes are unavailable
Do not hide missing evidence. If a recommendation is based on static files, lab data, synthetic probes, or heuristics, label it that way in results.json.
</universal_intake>
<artifact_contract>
Create or update .hypercore/seo-maker/[slug]/.
Expected files:
.hypercore/seo-maker/[slug]/
├── dashboard.html # Browser-readable dashboard
├── results.json # Structured audit results
├── results.js # File URL fallback for browser rendering
├── report.md # Markdown report
├── sources.md # Source and evidence log
└── flow.json # Required for complex or optimize mode
For simple mode, report.md and sources.md are the minimum. For complex or optimize mode, all files are expected.
Follow references/artifact-spec.md for the file schema.
Render order:
- Gather evidence and write/update `results.json`.
- Generate `results.js` for direct local browser viewing.
- Render `dashboard.html` from the current results.
- Write `report.md` and `sources.md` with links or file references.
</artifact_contract>
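One possible shape for `results.json` is sketched below. This is illustrative only; `references/artifact-spec.md` remains the authoritative schema, and every field name here is an assumption:

```json
{
  "status": "complete",
  "mode": "create",
  "target": "https://example.com",
  "score": { "total": 72, "technical_seo": 15, "on_page_seo": 16 },
  "findings": [],
  "measurement_methods": ["static-file-scan", "lab-core-web-vitals"]
}
```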
| Phase | Task | Output |
|---|---|---|
| 0 | Determine target, mode, complexity, proof surface, and universal intake fields | Execution brief |
| 1 | Establish measurement methods and confidence limits | measurement_methods |
| 2 | Collect evidence from local code, pages, browser checks, and web sources | Evidence log |
| 3 | Audit technical SEO, platform policy, AEO, GEO, LLMO, Core Web Vitals, and structured data | Structured findings |
| 4 | Separate official requirements from field/tool/lab/synthetic/heuristic findings | Evidence-graded findings |
| 5 | Prioritize issues by impact, confidence, effort, and source tier | Recommendation set |
| 6 | Write artifacts and dashboard | .hypercore/seo-maker/[slug]/ |
| 7 | If optimize mode, apply or recommend fixes and re-audit | Best verified result |
| 8 | Summarize score, wins, confidence limits, risks, and next actions | Final report |
<audit_dimensions>
Check these dimensions when relevant to the target:
- Technical SEO: crawlability, indexability, canonicalization, sitemap, robots directives, response status, redirects, and duplicate pages.
- Platform policy: Googlebot, Google-Extended, OAI-SearchBot, GPTBot, ChatGPT-User, PerplexityBot/ClaudeBot when present, snippet controls, X-Robots-Tag, and optional `llms.txt`.
- On-page SEO: title, description, heading hierarchy, keyword alignment, URL readability, and internal links.
- Content SEO: intent match, depth, freshness, topical coverage, uniqueness, and readability.
- Core Web Vitals: LCP, INP, CLS, render-blocking resources, image sizing, and interaction latency.
- Structured data: JSON-LD validity, Schema.org fit, visible-content parity, entity identifiers, breadcrumbs, FAQs, products, articles, or organization markup. Do not imply structured data guarantees rich results or AI citations.
- AEO: concise visible answer blocks, Q&A structure, snippet-ready summaries, voice-search phrasing, and direct-answer clarity. Treat fixed answer lengths as heuristic.
- GEO: citable claims, statistics with sources, entity authority, author or brand trust signals, and content that AI systems can quote safely.
- LLMO: optional `llms.txt`, AI crawler access, clean markdown or semantic HTML, clear entity relationships, and updated canonical content. Missing `llms.txt` is not critical by default.
</audit_dimensions>
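As an illustration of the platform-policy checks, a `robots.txt` that explicitly addresses AI crawlers might look like this. The user-agent tokens are real crawler names; the allow/disallow choices are placeholders for what the audit inspects, not a recommended policy:

```
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Disallow: /

Sitemap: https://example.com/sitemap.xml
```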
Use a transparent 100-point score when enough evidence exists:
- Technical SEO: 20
- On-page SEO: 20
- Content SEO: 15
- Core Web Vitals: 15
- Structured data: 10
- AEO readiness: 10
- GEO/LLMO readiness: 10
If evidence is incomplete, mark affected categories as unknown instead of inventing certainty.
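The 100-point rubric above can be sketched as a small scoring helper. Excluding unknown categories from the measurable maximum, rather than guessing their scores, is one way to honor the "mark as unknown" rule; that normalization choice is an assumption for illustration, not part of this spec:

```python
# Maximum points per category, matching the rubric above.
WEIGHTS = {
    "technical_seo": 20,
    "on_page_seo": 20,
    "content_seo": 15,
    "core_web_vitals": 15,
    "structured_data": 10,
    "aeo_readiness": 10,
    "geo_llmo_readiness": 10,
}

def audit_score(earned: dict) -> dict:
    """Sum earned points; categories missing from `earned` are unknown.

    Returns the raw total, the points that were actually measurable,
    and the unknown categories so the report can surface them.
    """
    unknown = [cat for cat in WEIGHTS if cat not in earned]
    measurable_max = sum(WEIGHTS[cat] for cat in earned if cat in WEIGHTS)
    total = sum(
        min(points, WEIGHTS[cat])  # clamp to the category maximum
        for cat, points in earned.items()
        if cat in WEIGHTS
    )
    return {"total": total, "measurable_max": measurable_max, "unknown": unknown}
```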
Each finding should include:
- Severity: `critical`, `warning`, or `info` (use impact/effort fields for prioritization beyond severity).
- Confidence: high, medium, or low.
- `evidence_grade`: `official`, `field`, `tool`, `lab`, `synthetic`, or `heuristic`.
- `measurement_method`: scan, tool, probe, source, or command used.
- `source_tier`: `official-doc`, `observed-file`, `field-data`, `tool-output`, `synthetic-probe`, or `research-backed-heuristic`.
- Evidence: command output, URL, local file path, browser observation, or saved probe result.
- Recommendation: specific action and expected impact.
- Owner surface: code, content, infrastructure, analytics, or external platform.
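A single finding carrying the fields above might look like this (values are illustrative, not drawn from a real audit):

```json
{
  "severity": "warning",
  "confidence": "medium",
  "evidence_grade": "lab",
  "measurement_method": "local lab run of a page-speed tool",
  "source_tier": "tool-output",
  "evidence": "LCP 4.1s on /pricing under throttled lab conditions",
  "recommendation": "Preload the hero image and set explicit width/height to reduce LCP.",
  "owner_surface": "code",
  "impact": "high",
  "effort": "low"
}
```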
<optimize_loop>
Use optimize mode when the user requests a maximum score, perfect score, continuous iteration, or "keep fixing until it passes" behavior.
Loop rules:
- Run a baseline audit and write the score.
- Pick the highest-impact fix or recommendation with the best confidence/effort ratio.
- Apply safe local code/content fixes when they are in scope; otherwise record an actionable recommendation.
- Re-run the relevant audit checks.
- Keep the change only if the score or verified evidence improves without regression.
- Stop when the score target is met, no safe local fixes remain, or further work requires external credentials or business decisions.
Do not fake a perfect score. If external evidence is unavailable, report the unknowns and the best verified score.
</optimize_loop>
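The loop rules above can be sketched as follows. `run_audit`, `pick_fix`, `apply_fix`, and `revert_fix` stand in for the real audit and edit steps and are assumptions for illustration only:

```python
def optimize_loop(run_audit, pick_fix, apply_fix, revert_fix,
                  target=100, max_rounds=10):
    """Iterate fix -> re-audit, keeping only changes that improve the score."""
    best = run_audit()  # baseline audit; expected to return {"score": ...}
    for _ in range(max_rounds):
        if best["score"] >= target:
            break  # score target met
        fix = pick_fix(best)
        if fix is None:
            break  # no safe local fixes remain
        apply_fix(fix)
        result = run_audit()
        if result["score"] > best["score"]:
            best = result  # keep the verified improvement
        else:
            revert_fix(fix)  # no gain or regression: roll back
    return best  # best verified result, never a faked perfect score
```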
At completion, `.hypercore/seo-maker/[slug]/` should contain:
- `results.json` with structured audit results and status `complete` for complex or optimize mode.
- `dashboard.html` rendered from the latest results when dashboard output is expected.
- `results.js` for local browser fallback when dashboard output is expected.
- `report.md` with prioritized findings, score, and recommendations.
- `sources.md` with the evidence log.
Validate:
- Every critical or warning finding has evidence.
- Recommendations are specific enough for an engineer, marketer, or content owner to act on.
- Scores are derived from observed evidence, not assumptions.
- Google AI features are not described as requiring special schema, AI text files, or magic markup.
- FAQPage recommendations distinguish Google rich-result eligibility from answer-friendly visible FAQ content.
- Unknowns are explicitly marked.
- Optimize mode records baseline score, changes/recommendations, re-audit evidence, and the best verified result.