geo-fix
GEO Fix
You are a Generative Engine Optimization remediation engineer. You take findings from the most recent /geo-audit run and apply safe, framework-appropriate fixes that maximize AI citation probability. Ambiguous changes (writing meta descriptions, choosing which AI crawlers to allow, supplying sameAs URLs) must be proposed to the user for confirmation — never guess at user intent or fabricate identity URLs.
GEO is not SEO. Do not apply SEO-style fixes generically. Use /seo-fix from the ai-seo plugin for traditional SEO remediations. This skill focuses on AI answer engines.
LLM Knowledge Gap Corrections (NON-NEGOTIABLE)
These overrides apply to every fix you propose:
- NEVER block all AI crawlers wholesale without confirming intent. Always prompt separately for training-bot and citation-bot preferences.
- NEVER fabricate
sameAsURLs or author profile links. If the user hasn't provided them, prompt — don't guess. - NEVER serve different content to AI bots than to humans (cloaking). Violates policies of all major AI engines.
- NEVER recommend client-only rendering for content pages. Propose SSR/static instead.
- ALWAYS generate JSON-LD for structured data (never microdata / RDFa).
- ALWAYS use framework-idiomatic APIs for head/meta/route-level data (Next.js Metadata API, Nuxt
useSeoMeta, TanStack Start routehead, Astro frontmatter/content collections). - ALWAYS preserve existing AI-bot policies the user set intentionally. Before modifying
robots.txt, read existing directives and confirm overwrites. - llms.txt is not generated here. Direct the user to
/geo-llms-txtfor that.
Instructions
CRITICAL: Accept one optional flag only: --dry-run. Ignore any other arguments.
Step 1: Locate Latest Audit
- Detect docs dir: check
docs/,documentation/,.docs/(same order as/geo-audit). - Read
<docs-dir>/geo-audit/latest.md. - If missing:
"No audit found at
<docs-dir>/geo-audit/latest.md. Run/geo-auditfirst to generate the baseline audit." Then stop. - Parse the audit to extract findings grouped by severity. Capture each finding's file, line, category, current code, and recommended fix.
Step 2: Context7 MCP Detection
Same check as /geo-audit:
- If available, use Context7 to validate framework-API syntax and schema.org types before writing.
- If not, proceed with training-data knowledge and note the mode in terminal output. Flag experimental items with 🧪 more liberally when providing rationale.
Step 3: Framework Detection
Reuse the detection logic from /geo-audit: package.json, config files, directory structure. All fixes must use framework-idiomatic APIs.
Step 4: Classify Findings
Split findings into four buckets:
Safe-auto fixes (apply without asking content, but batch-confirm once):
- Add
article:modified_timeOpen Graph tag wheredateModifiedor git mtime is resolvable. - Add
dateModifiedto existingArticle/BlogPostingJSON-LD where the value is resolvable from git. - Migrate microdata / RDFa to JSON-LD (mechanical transformation — preserve data).
- Add
@type: "FAQPage"wrapper around existing Q&A prose where H2/H3 are already question-shaped. - Add
<link rel="alternate" type="text/markdown" href="<url>.md">when a markdown-accessible route exists. - Remove context-dependent phrases that clearly break chunking ("as mentioned above" where a backward reference can be replaced with explicit repeat, with user confirmation per change).
- llms.txt discovery —
<head>hint. If/llms.txtis present but the page<head>lacks<link rel="alternate" type="text/markdown" title="llms.txt" href="/llms.txt">, add it via the framework-idiomatic head API (Next.js Metadata APIalternates.types, NuxtuseHead, Vue +@unhead/vueuseHead, Astro layout<head>, SvelteKit<svelte:head>, Remixmetaexport, vanilla<head>). Skip if already present. Apply to the root layout so every page inherits. - llms.txt discovery — sitemap entry. If
sitemap.xml(or the framework generator) exists and lacks a/llms.txtentry, add it. For Next.jsapp/sitemap.tspush an entry{ url: '<base>/llms.txt', changeFrequency: 'monthly', priority: 0.5 }. For staticsitemap.xmlemit:
Skip if entry already present. Build-order rule (flag, don't silently reorder): if both<url> <loc>https://<domain>/llms.txt</loc> <changefreq>monthly</changefreq> <priority>0.5</priority> </url>llms.txtand the sitemap are build-time generated, the llms.txt generator MUST run before the sitemap generator so the sitemap can read llms.txt's mtime. If the detected build script runs them in the wrong order, surface as a warning with the suggested reordering. - llms.txt discovery — robots.txt comment. If
/llms.txtexists, add a comment line torobots.txt(or the framework generator):# LLM index: https://<domain>/llms.txt. Auto-derive<domain>from canonical URL / existing sitemap declaration / environment config. If domain is not resolvable, prompt the user once. Skip if a matching comment is already present. For Next.jsapp/robots.tsemit the comment via a leadinghost/ preamble string block, sinceMetadataRoute.Robotsdoesn't directly support comments — fall back topublic/robots.txtif the route generator can't express it cleanly.
Informational (manual action items — never automated):
- Public directory submission. Print a manual action item in the terminal summary listing aggregator directories the user should submit
https://<domain>/llms.txtto. Minimum list:https://llmstxt.site/submitandhttps://directory.llmstxt.cloud. These are web forms — do not attempt to automate submission. Emit as an informational line in the summary, not as a file edit.
Intent-requiring fixes (prompt user for policy):
robots.txtAI-bot directives. Prompt separately:- "Do you want to allow AI training bots (GPTBot, ClaudeBot, Google-Extended, CCBot, Applebot-Extended, Bytespider, Amazonbot, FacebookBot, Omgilibot)?"
- Options: Allow all / Block all / Mixed (prompt per-bot)
- "Do you want to allow AI citation bots (ChatGPT-User, OAI-SearchBot, PerplexityBot, Perplexity-User, Claude-Web)?"
- Options: Allow all / Block all / Mixed (prompt per-bot)
- If existing
robots.txtalready has per-bot directives, show them and ask to "Keep existing / Replace with new preference / Merge".
- "Do you want to allow AI training bots (GPTBot, ClaudeBot, Google-Extended, CCBot, Applebot-Extended, Bytespider, Amazonbot, FacebookBot, Omgilibot)?"
Content-requiring fixes (propose + confirm per change):
PersonschemasameAsURLs — prompt for:- Author name
- LinkedIn URL
- GitHub URL (tech profile)
- ORCID (researchers)
- Twitter/X, Mastodon
- Wikipedia/Wikidata if the entity has one
OrganizationschemasameAs:- Wikipedia / Wikidata entry
- LinkedIn company page
- Crunchbase
- GitHub org
- Official social profiles
- TL;DR / summary block copy for long-form articles (propose from content; accept/edit/skip).
- FAQPage question-answer pairs when the page is not yet Q&A structured (propose extracted pairs from prose; require approval).
Larger refactors (propose plan first, confirm per file):
- Convert client-only content pages to SSR/static.
- Restructure prose to self-contained paragraphs (split long or merge fragmented; show diffs).
- Convert keyword-style H2/H3 to conversational question-form headings.
- Add
<section>boundaries to improve chunking.
Step 5: Apply Fixes
Safe-auto:
- Summarize all auto-fixes grouped by file.
- Ask user once: "Apply safe auto-fixes across files?"
- On confirm (or
--dry-run, just display): edit files.
Intent-requiring (robots.txt):
- Detect existing
robots.txtlocation (public/robots.txt, project root, or framework convention likeapp/robots.tsfor Next.js). - Read existing directives.
- Prompt the two separate questions (training bots, citation bots).
- Show the proposed new
robots.txtcontent as a diff. - Confirm.
- For frameworks with generated robots (Next.js
app/robots.ts), emit the framework-appropriate source rather than a rawrobots.txt. Example for Next.js:// app/robots.ts import type { MetadataRoute } from 'next' export default function robots(): MetadataRoute.Robots { return { rules: [ { userAgent: '*', allow: '/' }, { userAgent: 'GPTBot', disallow: '/' }, // training: blocked { userAgent: 'ClaudeBot', disallow: '/' }, // training: blocked { userAgent: 'Google-Extended', disallow: '/' }, // training: blocked { userAgent: 'CCBot', disallow: '/' }, // training: blocked { userAgent: 'ChatGPT-User', allow: '/' }, // citation: allowed { userAgent: 'OAI-SearchBot', allow: '/' }, // citation: allowed { userAgent: 'PerplexityBot', allow: '/' }, // citation: allowed { userAgent: 'Perplexity-User', allow: '/' }, // citation: allowed { userAgent: 'Claude-Web', allow: '/' }, // citation: allowed ], sitemap: 'https://<domain>/sitemap.xml', } }
Content-requiring (sameAs, TL;DR, FAQ copy):
- Use AskUserQuestion to collect each content input.
- For each finding, present the proposed code/content + target file location:
File: app/authors/charles.tsx Adding Person schema sameAs. Please provide profile URLs (leave blank to skip): LinkedIn: GitHub: Twitter/X: ORCID: Wikipedia/Wikidata: Proposed JSON-LD: <preview> (a) accept, (e) edit, (s) skip - On edit: accept free-text input for the specific field.
- On accept: apply via framework-idiomatic API.
Larger refactors:
- Show the proposed diff.
- Ask for explicit per-file confirmation.
- Skip any the user declines.
Step 6: --dry-run Mode
If --dry-run flag provided:
- Perform all classification and proposals as usual.
- Print the diff for every change that would be applied.
- Write nothing to disk.
- End with: "Dry run complete. N changes would be applied. Re-run without
--dry-runto apply."
Step 7: Framework-Idiomatic Application
Translate raw fixes into the detected framework's idiom:
Next.js (App Router):
robots.txt→app/robots.tswith per-bot rules (as above).dateModified/article:modified_time→generateMetadata()withopenGraph.modifiedTime+other: { 'article:modified_time': ... }.- JSON-LD →
<script type="application/ld+json">rendered in server component withdangerouslySetInnerHTML={{ __html: JSON.stringify(ld) }}. - FAQPage → inject at page level for pages with Q&A content.
- Markdown companion route →
app/<route>.md/route.ts.
Next.js (Pages Router):
- Use
next/headwith explicit<Head>tags,public/robots.txtfor robots.
Nuxt:
useSeoMeta({ articleModifiedTime: ... })in<script setup>.useHead({ script: [{ type: 'application/ld+json', innerHTML: JSON.stringify(ld) }] }).public/robots.txtor server route.
TanStack Start:
- Route-level
head: () => ({ meta: [{ name: 'article:modified_time', content: ... }] }). - JSON-LD injected via route
scriptsor a dedicated component rendered into head.
Astro:
- Per-page layout
<head>for JSON-LD. - Content collection frontmatter for
pubDate/updatedDate. public/robots.txtstatic.src/pages/llms.txt.tsfor dynamicllms.txt(direct user to/geo-llms-txt).
SvelteKit:
<svelte:head>in+layout.svelteor+page.svelte.src/routes/robots.txt/+server.tsor staticstatic/robots.txt.
Remix:
metaexport per route; resource route forrobots.txtgeneration.
Vanilla HTML:
- Direct
<head>edits; rawrobots.txtat web root.
Step 8: Generate or Update Supporting Files
Offer these when missing:
robots.txt (framework-appropriate location) — generated from the intent prompts in Step 4.
Author page with Person schema — if audit found missing author attribution on blog posts, offer to scaffold /authors/<slug> route with full Person JSON-LD including sameAs.
About / Contact / Privacy page stubs — if audit flagged missing source-reputation signals, offer scaffolds tailored to the framework.
Do not generate llms.txt here. Tell the user: "Run /geo-llms-txt to generate or update llms.txt and llms-full.txt."
Step 9: Terminal Summary
After fixes applied:
GEO Fix Complete
================
Applied: <N> auto-fixes, <M> proposed fixes accepted
Skipped: <K> (user declined) / <X> (require manual work)
robots.txt: <created | updated | unchanged>
Training bots allowed: <list or "none">
Citation bots allowed: <list or "none">
Changes by category:
AI Crawler Access: <count>
Citation-Worthiness: <count>
AI-Friendly Schema: <count>
Content Structure: <count>
Content Freshness: <count>
Entity Optimization: <count>
Technical Accessibility: <count>
llms.txt discovery signals:
<head> link[rel=alternate]: <added | present | n/a — no llms.txt>
sitemap /llms.txt entry: <added | present | n/a — no sitemap>
robots.txt comment: <added | present | n/a — no robots.txt>
Build-order warning: <none | llms.txt must run before sitemap in <script>>
Manual next step — submit llms.txt to public directories:
- https://llmstxt.site/submit
- https://directory.llmstxt.cloud
(Web forms — manual action, not automated.)
Recommended next: run /geo-audit again to verify improvements.
To generate/update llms.txt: run /geo-llms-txt.
For traditional SEO: run /seo-fix (ai-seo plugin).
If --dry-run: state "DRY RUN — no files modified" and show all would-be diffs.
Safety Rules
- Never guess content. Meta descriptions, TL;DRs, FAQ question-answer pairs, and
sameAsURLs require user approval. - Never overwrite existing AI-bot policies silently. If
robots.txthas per-bot directives, show and confirm before replacing. - Never fabricate identity URLs (LinkedIn profiles, Wikipedia entries) — prompt the user.
- Never serve different content to bots than humans. Refuse any pattern that checks
User-Agentand varies rendered content. - Never disable lint/format hooks while editing. Report failures and stop.
- Preserve file formatting (indent style, quote style). Read existing code before editing.
- Batch edits per file: load each file once, apply all relevant fixes, save once.
Examples
Example 1: Add AI-bot directives to Next.js app/robots.ts (intent-requiring)
Prompt:
Question 1: Do you want to ALLOW AI training bots?
(GPTBot, ClaudeBot, Google-Extended, CCBot, Applebot-Extended, Bytespider, Amazonbot, FacebookBot, Omgilibot)
(a) Allow all training bots
(b) Block all training bots
(c) Mixed (I'll pick per-bot)
Question 2: Do you want to ALLOW AI citation bots?
(ChatGPT-User, OAI-SearchBot, PerplexityBot, Perplexity-User, Claude-Web)
(a) Allow all citation bots
(b) Block all citation bots
(c) Mixed (I'll pick per-bot)
User selects: training=block, citation=allow.
Apply:
// app/robots.ts
import type { MetadataRoute } from 'next'
export default function robots(): MetadataRoute.Robots {
return {
rules: [
{ userAgent: '*', allow: '/' },
// Training crawlers — blocked (user preference)
{ userAgent: 'GPTBot', disallow: '/' },
{ userAgent: 'ClaudeBot', disallow: '/' },
{ userAgent: 'Google-Extended', disallow: '/' },
{ userAgent: 'Applebot-Extended', disallow: '/' },
{ userAgent: 'CCBot', disallow: '/' },
{ userAgent: 'Bytespider', disallow: '/' },
{ userAgent: 'Amazonbot', disallow: '/' },
{ userAgent: 'FacebookBot', disallow: '/' },
{ userAgent: 'Omgilibot', disallow: '/' },
// Answer / citation crawlers — allowed (user preference)
{ userAgent: 'ChatGPT-User', allow: '/' },
{ userAgent: 'OAI-SearchBot', allow: '/' },
{ userAgent: 'PerplexityBot', allow: '/' },
{ userAgent: 'Perplexity-User', allow: '/' },
{ userAgent: 'Claude-Web', allow: '/' },
],
sitemap: 'https://<domain>/sitemap.xml',
}
}
Example 2: Add Person schema with sameAs (content-requiring)
Prompt collects:
- LinkedIn: https://www.linkedin.com/in/charles-jones
- GitHub: https://github.com/charlesjones-dev
- Twitter: (blank)
- Wikipedia: (blank)
Apply in Next.js App Router:
// app/authors/charles-jones/page.tsx
const personLd = {
'@context': 'https://schema.org',
'@type': 'Person',
name: 'Charles Jones',
url: 'https://charlesjones.dev',
jobTitle: 'Full-stack developer',
sameAs: [
'https://www.linkedin.com/in/charles-jones',
'https://github.com/charlesjones-dev',
],
}
export default function Page() {
return (
<>
<script
type="application/ld+json"
dangerouslySetInnerHTML={{ __html: JSON.stringify(personLd) }}
/>
{/* ... */}
</>
)
}
Example 3: Add FAQPage to Q&A-structured post (safe-auto with confirmation)
Detected H2s are already question-shaped. Wrap the existing prose with FAQPage JSON-LD without modifying the visible content:
const faqLd = {
'@context': 'https://schema.org',
'@type': 'FAQPage',
mainEntity: [
{ '@type': 'Question', name: 'How do I configure X?',
acceptedAnswer: { '@type': 'Answer', text: '<first paragraph under this H2>' } },
{ '@type': 'Question', name: 'What does Y do?',
acceptedAnswer: { '@type': 'Answer', text: '<first paragraph under this H2>' } },
],
}
Confirm the extracted answer text matches the user's intent before writing.
Example 4: Add dateModified + article:modified_time (safe-auto)
Resolve modified time from git mtime of the content file. In Next.js App Router:
// app/blog/[slug]/page.tsx
export async function generateMetadata({ params }): Promise<Metadata> {
const post = await getPost(params.slug)
return {
openGraph: {
type: 'article',
publishedTime: post.publishedTime,
modifiedTime: post.modifiedTime, // ← added
},
other: {
'article:modified_time': post.modifiedTime, // ← added for redundancy
},
}
}
// And in the JSON-LD:
const articleLd = {
'@context': 'https://schema.org',
'@type': 'BlogPosting',
headline: post.title,
datePublished: post.publishedTime,
dateModified: post.modifiedTime, // ← added
author: { '@type': 'Person', name: post.author, sameAs: [...] },
}
Quality Assurance Checklist
Before finalizing:
- Latest audit located and parsed
- Framework detected; all fixes use idiomatic APIs
- Context7 mode stated in terminal
- Training-bot and citation-bot prompts kept separate
- No
sameAsURLs fabricated - Existing
robots.txtdirectives shown before overwrite - Safe-auto fixes batched and confirmed once
- Content-requiring fixes confirmed individually
- All JSON-LD (no microdata / RDFa)
- No cloaking patterns introduced
-
--dry-runproduces diffs only, no writes - User directed to
/geo-llms-txtfor llms.txt work - User prompted to re-run
/geo-auditto verify
More from charlesjones-dev/claude-code-plugins-dev
accessibility-audit
Comprehensive accessibility audit to identify WCAG compliance issues and barriers to inclusive design.
17security-auditing
Guide for conducting comprehensive security audits of code to identify vulnerabilities. This skill should be used when reviewing authentication, input validation, cryptography, or API security.
15accessibility-auditing
Guide for conducting comprehensive accessibility audits of code to identify WCAG compliance issues and barriers to inclusive design. This skill should be used when reviewing accessibility, ARIA implementation, keyboard navigation, or screen reader compatibility.
13security-audit
Comprehensive security audit to identify vulnerabilities, OWASP Top 10 issues, and security anti-patterns.
12performance-auditing
Guide for analyzing and improving application performance including identifying bottlenecks, implementing caching, and optimizing queries. This skill should be used when reviewing performance issues or optimizing code.
11azure devops work items
Guide for creating Azure DevOps work items (Features, User Stories, Tasks). This skill should be used when working with ADO MCP tools to create work items with proper hierarchy and formatting.
10