# finding
Create and query curated findings in Crawlio's observation log. Findings are agent-created insights backed by observation evidence.
## When to Use
Use this skill when the user wants to:
- Record an insight or issue discovered during analysis
- Create an evidence-backed finding that persists across sessions
- Review previously created findings for a site
## Creating Findings
Findings are the agent's judgment layer on top of raw observations. A good finding:
- Has a clear, descriptive title
- References specific observation IDs as evidence
- Includes a synthesis explaining the pattern or issue
### Workflow

1. Query observations to identify patterns:

   ```javascript
   get_observations({ host: "example.com", source: "extension", limit: 50 })
   ```

2. Identify the pattern — look for recurring issues, framework signals, error patterns, or notable behaviors.

3. Create the finding with evidence:

   ```javascript
   create_finding({
     title: "Mixed content: HTTP images on HTTPS page",
     url: "https://example.com",
     evidence: ["obs_a3f7b2c1", "obs_b4e8c3d2"],
     synthesis: "Homepage loads 3 images over HTTP despite serving over HTTPS. Network observations show requests to http://cdn.example.com/img/ which should use HTTPS. This triggers mixed content warnings in Chrome and may cause images to be blocked in strict mode.",
     confidence: "high",
     category: "security"
   })
   ```
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `title` | string | Yes | Short, descriptive title |
| `url` | string | No | URL this finding relates to |
| `evidence` | [string] | No | Array of observation IDs (`obs_xxx`) |
| `synthesis` | string | No | Detailed explanation |
| `confidence` | string | No | `high`, `medium`, `low`, or `none` |
| `category` | string | No | Dimension (e.g. `performance`, `security`, `framework`) |
## Finding Quality Checklist
- Title: Is it specific? "Mixed content on homepage" > "Issue found"
- Evidence: Do the observation IDs actually support the claim?
- Synthesis: Does it explain why this matters, not just what was observed?
- URL: Is it scoped to the right page or left empty for site-wide findings?
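The checklist above can also be applied programmatically before calling `create_finding`. A minimal sketch, assuming a local helper (`validateFinding` is hypothetical, not part of the Crawlio API; the thresholds are illustrative):

```javascript
// Hypothetical pre-flight check mirroring the quality checklist.
// Returns a list of warnings; an empty list means the finding passes.
function validateFinding(finding) {
  const warnings = [];
  // Title: specific titles usually name the issue, not just "Issue found".
  if (!finding.title || finding.title.trim().split(/\s+/).length < 3) {
    warnings.push("title: too vague — name the specific issue");
  }
  // Evidence: observation IDs should look like obs_xxx.
  const ids = finding.evidence ?? [];
  if (ids.length === 0) {
    warnings.push("evidence: no observation IDs referenced");
  } else if (!ids.every((id) => /^obs_[0-9a-f]+$/.test(id))) {
    warnings.push("evidence: malformed observation ID");
  }
  // Synthesis: should explain impact, so expect more than a fragment.
  if (!finding.synthesis || finding.synthesis.length < 40) {
    warnings.push("synthesis: explain why this matters, not just what was seen");
  }
  return warnings;
}
```

If `validateFinding` returns warnings, refine the finding before submitting it rather than creating a low-quality entry.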
## Querying Findings

### All Findings

```javascript
get_findings({})
```

### Findings for a Specific Host

```javascript
get_findings({ host: "example.com" })
```

### Recent Findings

```javascript
get_findings({ limit: 10 })
```
## Finding Categories
When creating findings, consider these common categories:
| Category | Example Title |
|---|---|
| Performance | "Render-blocking scripts delay FCP by 2.3s" |
| Security | "Mixed content: HTTP resources on HTTPS page" |
| SEO | "Missing meta descriptions on 12 pages" |
| Framework | "Next.js App Router with ISR detected" |
| Errors | "3 JavaScript errors on product pages" |
| Structure | "Orphaned pages not linked from navigation" |
| Accessibility | "Missing alt attributes on hero images" |
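When summarizing results from `get_findings`, grouping by category makes the report easier to scan. A minimal sketch, assuming findings are grouped locally (`groupByCategory` is a hypothetical helper, not part of the Crawlio API):

```javascript
// Hypothetical local helper: bucket findings by category for a summary.
// Findings without a category land in "uncategorized".
function groupByCategory(findings) {
  const groups = {};
  for (const f of findings) {
    const key = f.category ?? "uncategorized";
    (groups[key] ??= []).push(f);
  }
  return groups;
}
```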
## Evidence Chain

The full evidence chain workflow:

1. `analyze_page` → returns an `evidenceId`
2. `create_finding` → reference the `evidenceId` in the `evidence` array
3. `get_observation` → verify the evidence entry exists and supports the finding
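The verification step above can be sketched as a local check: given the IDs confirmed via `get_observation` lookups, report any evidence reference that has no backing entry (`findMissingEvidence` is a hypothetical helper, not a Crawlio tool):

```javascript
// Hypothetical final step of the chain: confirm every ID in a finding's
// evidence array exists in the observation log. `knownIds` stands in for
// the IDs confirmed via get_observation lookups.
function findMissingEvidence(finding, knownIds) {
  const known = new Set(knownIds);
  return (finding.evidence ?? []).filter((id) => !known.has(id));
}
```

A non-empty result means the finding cites evidence that cannot be verified and should be corrected.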
## Tips

- Create findings as you analyze, not all at the end — they persist across sessions
- Reference multiple observation IDs when a finding draws from several data points
- Use synthesis to explain the impact, not just restate the observation
- Findings with evidence chains are much more useful than findings without
- Use `confidence` to signal how strongly the evidence supports the claim
- Use `category` to enable filtering by dimension (performance, security, SEO, etc.)