# isitagentready
Audit a live repository against Cloudflare's agent-readiness checks, then write a fix-ready markdown report packet grounded in runtime evidence and source inspection.
## Decision Tree

What do you need to do?

- Run the full audit and create the report packet — Read `references/methodology.md`, run `python3 scripts/create_report_packet.py --repo .`, then read `references/signal-map.md`.
- Understand the full Cloudflare signal inventory, score boundaries, and applicability rules — Read `references/signal-map.md`, then read `references/shared.md`.
- Verify the live production site before source inspection — Read `references/runtime-and-browser.md`; if a browser skill is available, load `{{ skill:agent-browser }}`.
- Search the repository surgically for likely implementations, gaps, or deployment clues — Read `references/repo-search-playbook.md`.
- Write the final markdown report in the expected format — Read `references/report-format.md`; use `templates/agent-readiness-report.md`.
- Avoid false positives, score inflation, or applicability mistakes — Read `references/gotchas.md`.
## Quick Reference

| Task | Use | Outcome |
|---|---|---|
| Create a report packet in the repo root | `python3 scripts/create_report_packet.py --repo . --url https://example.com` | Creates a timestamped folder with `agent-readiness-report.md`, `sources.md`, and `metadata.json` |
| Fetch the official isitagentready.com scan JSON for a deployed URL | `python3 scripts/scan_site.py --url https://example.com --output ./isitagentready-report/scan-results.json` | Saves the raw scan JSON for evidence and score context |
| Run the audit workflow in the right order | `references/methodology.md` | Browser/runtime check first, repo inspection second, report synthesis last |
| Map repo evidence to Cloudflare checks | `references/signal-map.md` + `references/repo-search-playbook.md` | Per-signal pass/fail/partial/not-applicable assessment |
| Write the final report | `references/report-format.md` + `templates/agent-readiness-report.md` | Detailed markdown report with evidence, coverage, and remediation order |
| Validate the packaged skill | `python3 scripts/validate.py skills/isitagentready` | Structural validation |
| Test the packaged skill | `python3 scripts/test_skill.py skills/isitagentready` | Cross-reference, eval, and helper-script checks |
## Core Workflow

- Resolve the repository root first. This skill is meant to run inside repositories, not against arbitrary URLs in isolation.
- Ask one direct question when the production URL is missing and live verification is possible: "What production URL should I audit for this repository?"
- If a production URL exists and `{{ skill:agent-browser }}` is available, load it and complete the browser pass before opening source files. Wait for that pass to finish; use it to anchor the later code review.
- Create the local report packet with `python3 scripts/create_report_packet.py --repo <repo> [--url <production-url>]`.
- If network access is available, fetch the official Cloudflare-style scan JSON with `python3 scripts/scan_site.py --url <production-url> --output <report-dir>/scan-results.json`.
- Inspect the repository for every scored and supporting signal. Search static files, route handlers, middleware, CDN config, edge config, and deployment transforms before concluding a signal is missing.
- Separate findings into four buckets:
  - Confirmed in production
  - Present in source but not yet proven deployed
  - Missing or contradicted by source/runtime evidence
  - Not applicable or currently neutral
- Write `agent-readiness-report.md` with concrete evidence, applicability reasoning, and a prioritized remediation order. Do not leave the report as a checklist dump.
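The four-bucket separation above can be sketched as a small classifier. This is an illustration of the decision order, not part of the skill's scripts; the parameter names are invented for the example:

```python
def bucket_finding(applicable: bool, in_source: bool, confirmed_live: bool) -> str:
    """Classify one audit signal into the four evidence buckets."""
    if not applicable:
        return "not applicable or currently neutral"
    if confirmed_live:
        return "confirmed in production"
    if in_source:
        return "present in source but not yet proven deployed"
    return "missing or contradicted by source/runtime evidence"
```

Note the order: applicability is decided before any evidence is weighed, which mirrors the rule that optional checks must never be scored as failures.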
## Audit Deliverables

Produce these artifacts in the report packet:

- Executive summary — what materially limits agent readiness right now.
- Evidence sources — browser pass, official scan JSON, repo inspection, and unresolved areas.
- Official scan snapshot — only when a live URL was scanned. Include `level`, `levelName`, and `nextLevel` if present.
- Findings by category — discoverability, content, bot access control, protocol discovery, and commerce/supporting signals.
- Applicability decisions — why a signal is scored, neutral, or not applicable for this repo.
- Repository coverage map — which files, routes, configs, and deployment layers were inspected.
- Prioritized remediation — what to fix first, second, and later.
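A minimal sketch of pulling the snapshot fields from a saved `scan-results.json`, assuming the scan JSON exposes top-level `level`, `levelName`, and `nextLevel` keys as listed above (the exact schema should be confirmed against a real scan):

```python
import json

def scan_snapshot(path: str) -> dict:
    """Return only the score-context fields, tolerating absent keys."""
    with open(path) as fh:
        scan = json.load(fh)
    return {key: scan.get(key) for key in ("level", "levelName", "nextLevel")}
```

Using `.get()` keeps the report honest when a field is absent: a missing `nextLevel` is reported as empty rather than guessed.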
## Analysis Rules

- Treat the Cloudflare runtime scan as authoritative for deployed behavior, but never let it replace repository inspection. Many failures are deployment drift, not missing code.
- Do not invent an official Cloudflare level from source code alone. Without a live scan, produce a repository assessment, not a claimed score.
- Ask for the production URL before the browser step. Do not silently guess from package metadata, DNS, or README files unless the user already stated the deployment target.
- Mark optional or neutral checks explicitly. A static content site should not be penalized for lacking commerce flows or OAuth protected resource metadata unless the product genuinely exposes those capabilities.
- Search deployment surfaces, not just app code. Headers and well-known routes are often emitted by CDN rules, edge middleware, or reverse proxies.
- Distinguish "missing in source", "present but unverified", and "failing in production". Those are different remediation paths.
- When the repository has separate web and API apps, treat the user-supplied production URL as the authoritative runtime surface. Source-only backend OpenAPI or MCP config is not a deployed pass unless the public site exposes or links it.
- Keep user-facing evidence repo-relative. Cite paths such as `apps/web/app/robots.ts`, not absolute workstation paths.
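The repo-relative evidence rule can be applied mechanically before citations land in the report. This helper is illustrative, not one of the skill's scripts:

```python
from pathlib import Path

def repo_relative(evidence_path: str, repo_root: str) -> str:
    """Rewrite an absolute workstation path as a repo-relative citation."""
    path, root = Path(evidence_path).resolve(), Path(repo_root).resolve()
    try:
        return path.relative_to(root).as_posix()
    except ValueError:
        # Outside the repo root: leave untouched and flag for manual review.
        return path.as_posix()
```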
## Reading Guide

| If the task is... | Read |
|---|---|
| Full audit from repo to report packet | `references/methodology.md`, then `references/report-format.md` |
| Understand Cloudflare's checks and how they map to code | `references/signal-map.md` |
| Run browser-first/live-site verification | `references/runtime-and-browser.md` |
| Search the repo for likely implementations or deployment clues | `references/repo-search-playbook.md` |
| Avoid mis-scoring optional or neutral checks | `references/shared.md` and `references/gotchas.md` |
## Verified External Baseline

This skill was grounded against current primary Cloudflare sources retrieved on April 19, 2026:

- https://isitagentready.com/
- https://blog.cloudflare.com/agent-readiness/ (published April 17, 2026)
- https://isitagentready.com/.well-known/agent-skills/index.json
- Representative `SKILL.md` documents published by `isitagentready.com` for robots.txt, sitemap, link headers, markdown negotiation, content signals, Web Bot Auth, API Catalog, OAuth discovery, OAuth Protected Resource metadata, MCP Server Card, A2A Agent Card, Agent Skills discovery, WebMCP, x402, UCP, and ACP

Use the references in this skill as the first source of truth, then verify live details when the target stack, deployment layer, or browser surface has changed.
## Gotchas

- Ask for the production URL before the browser pass: this skill is explicitly browser-first when a live site and headless browser are available.
- Do not claim an official Cloudflare score without a live scan: code inspection alone is not the same as the deployed score returned by `isitagentready.com`.
- Commerce does not currently count toward the score: the April 17, 2026 Cloudflare blog states that x402, UCP, and ACP are checked but do not currently contribute to the score.
- `llms.txt` is adjacent, not the same as markdown negotiation: Cloudflare's default score checks markdown negotiation; `llms.txt` and `llms-full.txt` are useful supporting signals and may appear in customized scans.
- Headers often live outside the app: missing `Link` or `Content-Type: text/markdown` behavior may be implemented in CDN, edge, or proxy config rather than route code.
- A wildcard robots rule is insufficient for AI-specific bot rules: Cloudflare's `ai-rules` skill explicitly expects named AI crawler blocks.
- WebMCP must be verified in a rendered page: source search helps, but the check is effectively a browser/runtime behavior.
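The two header gotchas can be spot-checked against a response captured with `Accept: text/markdown`. The header names follow the checks described above; how the official scan actually matches them is an assumption here:

```python
def markdown_negotiation_signals(headers: dict) -> dict:
    """Inspect captured response headers for markdown-negotiation evidence."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {
        "serves_markdown": lowered.get("content-type", "").startswith("text/markdown"),
        "has_link_header": "link" in lowered,
    }
```

Because these headers are often added by the CDN or edge layer, run the check against the live response, never against what the route code appears to emit.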
## Helper Files

- `references/shared.md` — shared terminology, scoring boundaries, and source baseline.
- `references/methodology.md` — end-to-end audit workflow and evidence model.
- `references/signal-map.md` — full Cloudflare signal inventory with applicability notes.
- `references/runtime-and-browser.md` — production URL handling, browser-first workflow, and scan API usage.
- `references/repo-search-playbook.md` — surgical search heuristics across frameworks and deployment layers.
- `references/report-format.md` — the exact output packet layout and report contract.
- `references/gotchas.md` — common traps and misreadings.
- `templates/agent-readiness-report.md` — starting template for the markdown report.
- `scripts/create_report_packet.py` — deterministic report-packet scaffolder.
- `scripts/scan_site.py` — helper to fetch the live `isitagentready.com` scan JSON.
- `scripts/validate.py` — structural validator for this skill.
- `scripts/test_skill.py` — packaging, eval, and helper-script test runner.