seo-ai-optimizer
SEO & AI Bot Optimizer
Audit and optimize website codebases for search engines and AI systems.
Repo Sync Before Edits (mandatory)
Before modifying any project files, sync the current branch with remote:
```
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin
git pull --rebase origin "$branch"
```
If the working tree is not clean, stash first, sync, then restore:
```
git stash push -u -m "pre-sync"
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin && git pull --rebase origin "$branch"
git stash pop
```
If origin is missing, pull is unavailable, or rebase/stash conflicts occur, stop and ask the user before continuing.
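The steps and guard conditions above can be consolidated into one routine. The sketch below is a minimal interpretation, assuming a standard remote named origin; the stash message comes from the text, while the exact error handling is an assumption.

```shell
#!/bin/sh
# Sketch of the mandatory pre-edit sync, with the guard conditions made
# explicit. "origin" and the "pre-sync" stash message come from the skill
# text; the error-handling details are assumptions.
sync_branch() {
  # No "origin" remote: stop and defer to the user.
  if ! git remote get-url origin >/dev/null 2>&1; then
    echo "no origin remote; ask the user before continuing" >&2
    return 1
  fi
  branch="$(git rev-parse --abbrev-ref HEAD)"
  stashed=0
  # Dirty working tree (tracked or untracked changes): stash before rebasing.
  if [ -n "$(git status --porcelain)" ]; then
    git stash push -u -m "pre-sync" || return 1
    stashed=1
  fi
  git fetch origin || return 1
  git pull --rebase origin "$branch" || return 1
  if [ "$stashed" -eq 1 ]; then
    git stash pop || return 1
  fi
  return 0
}
```

On any failure the function returns non-zero rather than continuing, matching the "stop and ask the user" rule above.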
Prerequisites
Before starting the SEO audit, ensure the following:
- Environment: The project must be managed by a git repository.
- Tools: Python 3.x must be installed and available on the PATH.
- Audit Script: scripts/audit_seo.py (shipped with this skill) is invoked against the audited project: python scripts/audit_seo.py <project-root>
- Access: You must have write access to the project files and permission to create new files (robots.txt, llms.txt, etc.)
Quick Reference
Consult these reference files as needed during the workflow:
- references/workflow-detail.md — Detailed checklists, templates, and implementation steps
- references/technical-seo.md — Full SEO checklist and best practices
- references/framework-configs.md — Framework-specific configuration
- references/ai-bot-guide.md — AI crawler directives, llms.txt format, JSON-LD templates
Environment Check
This skill has two modes of operation:
With Subagent Architecture (Recommended):
If the Agent tool is available in your environment, the audit runs via a 4-phase subagent workflow for maximum accuracy and depth. See references/subagent-architecture.md.
Without Subagent Tool (Fallback):
If the Agent tool is not available, the skill runs a complete audit in a single conversation. The end result (SEO audit report) is the same.
Important
- Audit first, present findings, then propose a plan — never modify files without user approval
- Safety First: Always show a diff and get explicit confirmation before writing any file change
- Fetch latest best practices via web search during each audit to supplement embedded knowledge
Workflow
- Detect -- Identify project framework and scan for relevant files
- Audit -- Run automated scan + manual review across 4 categories
- Research -- Web search for latest SEO/AI bot best practices
- Report -- Present findings grouped by severity
- Plan -- Propose prioritized improvements for user approval
- Implement -- Apply approved changes following the Safety Protocol
- Validate -- Re-check modified files
Step 1: Detect Project Type
Run the audit script to detect framework and scan files:
```
python scripts/audit_seo.py <project-root>
```
If the script reports "No HTML/template files found," inform the user: this skill is designed for web frontends with HTML output.
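The real detection logic lives in scripts/audit_seo.py. As a hypothetical sketch of what this step involves, the routine below checks common framework marker files and falls back to scanning for HTML; the marker filenames are conventional assumptions, not a guaranteed match for what the script actually checks.

```shell
#!/bin/sh
# Hypothetical framework-detection sketch. The marker files below are common
# conventions (Next.js, Nuxt, Gatsby, Angular), not necessarily the exact
# checks performed by scripts/audit_seo.py.
detect_framework() {
  root="$1"
  if [ -f "$root/next.config.js" ] || [ -f "$root/next.config.mjs" ]; then
    echo "nextjs"
  elif [ -f "$root/nuxt.config.js" ] || [ -f "$root/nuxt.config.ts" ]; then
    echo "nuxt"
  elif [ -f "$root/gatsby-config.js" ]; then
    echo "gatsby"
  elif [ -f "$root/angular.json" ]; then
    echo "angular"
  elif find "$root" -name '*.html' | grep -q .; then
    echo "static-html"
  else
    # Mirrors the "No HTML/template files found" outcome described above.
    echo "none"
  fi
}
```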
Step 2: Audit
The audit script checks per-file issues and project-level issues. After running the script, perform a manual review for items requiring human judgment (content quality, links, E-E-A-T).
For the full manual review checklist, see references/workflow-detail.md.
Step 3: Research Latest Best Practices
Use web search to check for updates (SEO best practices, AI bot directives, llms.txt spec, algorithm updates). Compare findings with embedded knowledge in references/.
Step 4: Report
Present the audit report grouping findings by severity (Critical, Warning, Info) and project-level findings (robots.txt, sitemap, llms.txt, JSON-LD).
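The severity ordering above (Critical before Warning before Info) can be sketched mechanically. The tab-separated "severity, file, message" input format below is an assumption made for illustration, not the audit script's actual output format.

```shell
#!/bin/sh
# Order findings Critical -> Warning -> Info for the report. Assumes an
# illustrative input format of "severity<TAB>file<TAB>message" per line.
sort_findings() {
  awk -F'\t' '{
    rank = ($1 == "Critical") ? 0 : ($1 == "Warning") ? 1 : 2
    print rank "\t" $0
  }' "$1" | sort -n -k1,1 | cut -f2-
}
```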
Step 5: Plan
Present a prioritized improvement plan using the template in references/workflow-detail.md.
Ask the user: "Which improvements should I implement? You can approve all, select specific items, or modify the plan."
Do NOT proceed without explicit approval.
Step 6: Implement
Apply approved changes following the Safety First protocol:
- Show Diff: For every file change, generate and show a clear diff or summary.
- Confirm: Request explicit user confirmation before writing each file (or batch).
For detailed implementation instructions per category (Technical SEO, robots.txt, llms.txt, JSON-LD, sitemaps), see references/workflow-detail.md.
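One possible shape for the diff-then-confirm protocol above: write the proposed content to a separate file, show a unified diff, and only overwrite the target after an explicit "y". The function and variable names here are illustrative, not part of the skill.

```shell
#!/bin/sh
# Diff-then-confirm sketch: never overwrite a file without showing the
# change and receiving explicit confirmation. Names are illustrative.
confirm_write() {
  target="$1"
  proposed="$2"
  # diff exits 1 when files differ; differing files are the expected case.
  diff -u "$target" "$proposed" || true
  printf 'Apply this change to %s? [y/N] ' "$target"
  read -r answer
  if [ "$answer" = "y" ]; then
    cp "$proposed" "$target"
    echo "written: $target"
  else
    echo "skipped: $target"
  fi
}
```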
Step 7: Validate
After implementing changes, re-run the audit script on modified files to verify critical issues are resolved and check for regressions.
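A simple pass/fail gate for this step might look like the sketch below. It assumes the audit report marks critical findings with a line starting "CRITICAL:"; that prefix is an assumption, so check the real output format of scripts/audit_seo.py before relying on it.

```shell
#!/bin/sh
# Post-fix validation gate. The "CRITICAL:" line prefix is an assumed
# report format, not the audit script's documented output.
validate_fixes() {
  report="$1"
  criticals=$(grep -c '^CRITICAL:' "$report" || true)
  if [ "$criticals" -eq 0 ]; then
    echo "pass: no critical issues remain"
  else
    echo "fail: $criticals critical issue(s) remain"
    return 1
  fi
}
```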
Step Completion Reports
After each step, emit a ◆ status block. For templates and per-step checklists, see references/step-reports.md.
Acceptance Criteria
A run passes when the audit report is complete, the improvement plan was user-approved, the Safety Protocol (diff + confirmation) was followed, and validation shows critical issues are resolved.
Expected Output
After a full run, the agent should produce:
- Audit Report: A structured markdown report grouping findings by severity.
- Implementation: Modified or new files (robots.txt, llms.txt, sitemap.xml, JSON-LD) with confirmed changes.
- Validation Report: A post-fix verification showing critical issues reduced to 0.
For a concrete example of the audit report output, see references/workflow-detail.md.