# seo-analysis
Audit a codebase for search visibility risks, then produce a fix-ready prompt another session can execute.
This skill is framework- and language-agnostic. Start from the live repository and rendered output, not from assumptions about React, Next.js, Rails, Laravel, Astro, WordPress, or any other stack.
## Decision Tree

What SEO problem are you solving?

- Need a full technical and on-page audit of a codebase
  - Read `references/methodology.md`
  - Then read `references/technical-audit.md`
  - Run `python3 scripts/build_fix_prompt.py --help`
- Need metadata, social preview, canonicals, or indexability checks
  - Read `references/metadata-and-previews.md`
- Need schema.org / JSON-LD / entity / rich result analysis
  - Read `references/structured-data-and-entities.md`
- Need content quality, information architecture, internal linking, or template-level page targeting analysis
  - Read `references/content-and-information-architecture.md`
- Need AI-era search guidance for crawl/render controls, preview controls, and answer-engine readiness
  - Read `references/agentic-search-and-ai-surfaces.md`
- Need the exact remediation handoff format for another session
  - Read `references/fix-prompt-spec.md`
  - Use `templates/fix-prompt-template.md`
  - Optionally generate a draft with `python3 scripts/build_fix_prompt.py --input findings.json`
- Need edge cases, policy traps, or common false positives
  - Read `references/gotchas.md`
## Quick Reference

| Task | Use | Outcome |
|---|---|---|
| Run a full repo audit | `references/methodology.md` | Ordered checklist and evidence collection flow |
| Check indexability and rendering | `references/technical-audit.md` | Crawl, render, canonical, robots, sitemap, and status-code findings |
| Check titles, meta descriptions, OG, X cards, favicons, site names | `references/metadata-and-previews.md` | SERP and social preview findings |
| Check structured data and entity signals | `references/structured-data-and-entities.md` | Rich-result and graph readiness findings |
| Check content and link architecture | `references/content-and-information-architecture.md` | Content gaps, duplication, orphan pages, weak anchors |
| Check AI-era search readiness | `references/agentic-search-and-ai-surfaces.md` | Preview controls, crawl access, citation readiness |
| Produce a fix session prompt | `references/fix-prompt-spec.md` + `templates/fix-prompt-template.md` | Copy-paste prompt for a second implementation session |
| Generate a prompt draft from findings JSON | `python3 scripts/build_fix_prompt.py --input findings.json --repo /abs/path` | Structured prompt with priorities, constraints, and acceptance criteria |
## Core Workflow
- Inspect the repository structure, routing model, page templates, layout files, and any head/metadata abstractions before drawing conclusions.
- Inspect representative URLs or templates for each page type: home, category, product/service, article/docs, auth/account, paginated/filter pages, and utility pages.
- Separate findings by severity and by layer:
  - Crawl/index controls
  - Render/discovery/canonicalization
  - Metadata/social preview
  - Structured data/entity signals
  - Content/internal linking/information architecture
  - Performance/page experience
  - AI-era search surface readiness
- For every finding, capture evidence from code, built HTML, or runtime behavior. Do not speculate when you can verify.
- Turn the findings into an implementation prompt for another session only after deduplicating root causes. One broken metadata abstraction can explain hundreds of bad pages.
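The evidence-collection step above can be sketched for a statically built site. This is a hedged illustration only: the `dist/` build directory name and the regex-based signal extraction are assumptions, and real audits should also check rendered output and HTTP headers.

```python
# Hedged sketch: pull basic index-control evidence out of built HTML files.
# The "dist" directory name is an assumption; the regexes are heuristics
# (they miss attribute-order variants) and are no substitute for checking
# rendered output and response headers.
import pathlib
import re

def page_evidence(html: str) -> dict:
    def first(pattern: str):
        m = re.search(pattern, html, re.I | re.S)
        return m.group(1).strip() if m else None

    return {
        "title": first(r"<title[^>]*>(.*?)</title>"),
        "meta_robots": first(r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)'),
        "canonical": first(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)'),
    }

build_dir = pathlib.Path("dist")  # assumed build output directory
if build_dir.is_dir():
    for path in sorted(build_dir.rglob("*.html")):
        print(path, page_evidence(path.read_text(encoding="utf-8")))
```

A missing title, an unexpected `noindex`, or an absent canonical in this output is evidence to record against the owning template, not just the single URL.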
## Audit Deliverables
Produce these artifacts in the response:
- Executive summary — what is blocking or suppressing search visibility right now.
- Findings table — severity, URL/template scope, evidence, impact, fix direction.
- Page-type coverage map — which templates or routes were checked and which were not.
- Remediation sequence — what to fix first, second, and later.
- Implementation prompt — a clean handoff for another session to make code changes safely.
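The findings table and remediation sequence benefit from root-cause grouping. The sketch below is a hypothetical illustration: the field names (`severity`, `root_cause`, `urls`) are not the input schema of `scripts/build_fix_prompt.py` — see `references/fix-prompt-spec.md` for the real contract.

```python
# Hedged sketch: collapse per-URL findings into root causes before handoff.
# Field names and example values are hypothetical, not the findings.json schema.
from collections import defaultdict

findings = [
    {"severity": "high", "root_cause": "shared layout omits <title>", "urls": ["/a", "/b"]},
    {"severity": "high", "root_cause": "shared layout omits <title>", "urls": ["/c"]},
    {"severity": "low", "root_cause": "blog template missing og:image", "urls": ["/blog"]},
]

order = {"high": 0, "medium": 1, "low": 2}
grouped = defaultdict(lambda: {"severity": "low", "urls": []})
for f in findings:
    g = grouped[f["root_cause"]]
    # Keep the most severe rating seen for this root cause.
    if order[f["severity"]] < order[g["severity"]]:
        g["severity"] = f["severity"]
    g["urls"].extend(f["urls"])

for cause, g in sorted(grouped.items(), key=lambda kv: order[kv[1]["severity"]]):
    print(f"[{g['severity']}] {cause}: {len(g['urls'])} URLs affected")
```

Grouping this way keeps the remediation sequence short: one shared-layout fix outranks dozens of per-page tweaks.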
## Analysis Rules
- Work from the rendered reality of the site, not only source files. SSR, SSG, CSR, hydration, and edge rendering change what crawlers actually receive.
- Treat crawlability, renderability, and canonicalization as prerequisites. Title tweaks do not matter if important pages are blocked, duplicated, or undiscoverable.
- Evaluate page types, not just single pages. SEO failures usually come from shared template logic.
- Distinguish intentional exclusions from mistakes. Login, cart, internal search, faceted combinations, and thin utility pages are often meant to be `noindex`.
- Check both search-result previews and social previews. Missing or conflicting Open Graph data is a distribution problem even when classic SEO looks acceptable.
- Prefer supported structured data aligned to page purpose. Do not recommend schema spam or irrelevant types.
- Treat AI-answer visibility as an extension of crawlability, metadata clarity, structured facts, and trustworthy content. Do not invent a separate magical “AI SEO” system.
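The "rendered reality" rule can be spot-checked with a crude heuristic: when the initial HTML payload has no title and almost no visible text, the page likely depends on client-side rendering. This is an assumption-laden sketch, not a substitute for fetching what crawlers actually render.

```python
# Hedged heuristic: flag pages whose initial payload looks like an empty
# client-rendered shell. The regexes and the interpretation are rough
# assumptions, not a rendering verdict.
import re

def initial_payload_signals(html: str) -> dict:
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    # Strip scripts, styles, then all remaining tags to estimate visible text.
    text = re.sub(r"<script.*?</script>|<style.*?</style>|<[^>]+>", " ", html, flags=re.I | re.S)
    return {
        "has_title": bool(title and title.group(1).strip()),
        "visible_words": len(text.split()),
    }

shell = '<html><head></head><body><div id="root"></div></body></html>'
print(initial_payload_signals(shell))  # → {'has_title': False, 'visible_words': 0}
```

A shell like this is not automatically a finding — SSR or prerendering may fill it in — but it marks a URL whose rendered output must be verified.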
## Reading Guide

| If the task is... | Read |
|---|---|
| Full audit from code to implementation handoff | `references/methodology.md`, then `references/fix-prompt-spec.md` |
| Diagnose a rendering, canonical, robots, sitemap, hreflang, or internal-link issue | `references/technical-audit.md` |
| Diagnose bad titles, snippets, link previews, or OG/X metadata | `references/metadata-and-previews.md` |
| Diagnose missing or invalid schema and weak entity markup | `references/structured-data-and-entities.md` |
| Diagnose weak topical targeting, duplication, orphan pages, or anchor text problems | `references/content-and-information-architecture.md` |
| Discuss AI Overviews, citation surfaces, or answer-engine readiness | `references/agentic-search-and-ai-surfaces.md` |
| Avoid overreaching or false positives | `references/gotchas.md` |
## Verified External Baseline
The guidance in this skill was grounded against current primary sources in April 2026, including:
- Google Search Central on SEO basics, helpful content, JavaScript SEO, robots meta directives, canonicalization, snippets, structured data, sitemaps, site names, favicons, and preferred sources.
- The Open Graph protocol specification for required OG fields and image metadata.
Use the references as the first source of truth, then verify live details when the target stack or search surface has materially changed.
## Gotchas
- Missing SEO is often a shared abstraction bug: a single layout, metadata helper, or head component can poison every route.
- Do not treat every `noindex` as wrong: many utility surfaces should stay out of the index.
- Do not recommend `robots.txt` for canonicalization: blocking a duplicate URL in `robots.txt` can prevent crawlers from seeing the canonical signal at all.
- Do not assume OG tags equal SEO tags: search titles, social titles, canonicals, and schema each serve different consumers.
- Do not confuse “AI SEO” with hidden hacks: the durable wins are still crawl access, strong facts, clear metadata, and useful original content.
- Do not hand off a fix prompt without evidence: the second session should receive concrete files, page types, and acceptance criteria, not generic SEO advice.
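The robots.txt-versus-canonicalization trap can be demonstrated with the standard library's `urllib.robotparser`. The rule and URLs below are made-up illustrations.

```python
# Demonstrates why robots.txt cannot do canonicalization work: once the
# duplicate URL is disallowed, crawlers never fetch that page, so any
# <link rel="canonical"> in its HTML goes unseen. Paths are illustrative.
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /widgets?color=
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The parameterized duplicate is now uncrawlable...
print(rp.can_fetch("*", "https://example.com/widgets?color=blue"))  # → False
# ...while the canonical page itself remains crawlable.
print(rp.can_fetch("*", "https://example.com/widgets"))  # → True
```

The durable pattern is the opposite: leave the duplicate crawlable and let its canonical tag (or a redirect) consolidate signals.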
## Helper Files

- `references/methodology.md` — end-to-end audit workflow and evidence model.
- `references/technical-audit.md` — crawl, rendering, canonicals, robots, sitemaps, hreflang, pagination, internal-link discovery.
- `references/metadata-and-previews.md` — titles, descriptions, OG, X cards, favicons, site names, image previews.
- `references/structured-data-and-entities.md` — JSON-LD strategy and validation priorities.
- `references/content-and-information-architecture.md` — content quality, duplication, template targeting, and link architecture.
- `references/agentic-search-and-ai-surfaces.md` — AI-era search interpretation without hype.
- `references/fix-prompt-spec.md` — exact handoff prompt contract.
- `references/gotchas.md` — high-value traps and anti-patterns.
- `templates/fix-prompt-template.md` — copy-ready handoff prompt shell.
- `scripts/build_fix_prompt.py` — deterministic prompt builder from findings JSON.
- `scripts/probe_seo_analysis.py` — local regression checks for issue normalization and prompt generation.
- `scripts/validate.py` — structural validator for this skill.
- `scripts/test_skill.py` — packaging and deterministic probe test runner.