# Researching on the Internet
## Overview
Gather accurate, current, well-sourced information from the internet to inform planning and design decisions. Test hypotheses, verify claims, and find authoritative sources for APIs, libraries, and best practices.
## When to Use
**Use for:**
- Finding current API documentation before integration design
- Testing hypotheses ("Is library X faster than Y?", "Does approach Z work with version N?")
- Verifying technical claims or assumptions
- Comparing libraries and evaluating alternatives
- Finding best practices and current community consensus
**Don't use for:**
- Information already in codebase (use codebase search)
- General knowledge within Claude's training (just answer directly)
- Project-specific conventions (check CLAUDE.md)
## Core Research Workflow
1. **Define the question clearly** - specific beats vague
2. **Search official sources first** - docs, release notes, changelogs
3. **Cross-reference** - verify claims across multiple sources
4. **Evaluate source quality** - tier sources (official → verified → community)
5. **Report concisely** - lead with the answer, provide links and evidence
## Hypothesis Testing
When given a hypothesis to test:
1. **Identify falsifiable claims** - break the hypothesis into testable parts
2. **Search for supporting evidence** - what confirms it?
3. **Search for disproving evidence** - what contradicts it?
4. **Evaluate source quality** - weight evidence by tier
5. **Report findings** - supported, contradicted, or inconclusive, with evidence
6. **Note the confidence level** - strong consensus vs. a single source vs. conflicting info
**Example:**
Hypothesis: "Library X is faster than Y for large datasets"
Search for:
✓ Benchmarks comparing X and Y
✓ Performance documentation for both
✓ GitHub issues mentioning performance
✓ Real-world case studies
Report:
- Supported: [evidence with links]
- Contradicted: [evidence with links]
- Conclusion: [supported/contradicted/mixed] with [confidence level]
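The report structure above can be sketched in code. This is an illustrative Python sketch, not a required format: the `Evidence` and `HypothesisReport` names and the tier weights are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    summary: str
    url: str
    tier: int       # 1 = official, 2 = verified, 3 = community
    supports: bool  # True if this evidence supports the hypothesis

@dataclass
class HypothesisReport:
    hypothesis: str
    evidence: list = field(default_factory=list)

    def conclusion(self) -> str:
        # Weight evidence by source tier: official sources count most.
        weights = {1: 3, 2: 2, 3: 1}
        pro = sum(weights[e.tier] for e in self.evidence if e.supports)
        con = sum(weights[e.tier] for e in self.evidence if not e.supports)
        if not self.evidence:
            return "inconclusive"
        if pro and con:
            return "mixed"
        return "supported" if pro >= con else "contradicted"

report = HypothesisReport("Library X is faster than Y for large datasets")
report.evidence.append(
    Evidence("Benchmark shows X ~2x faster on 1M rows",
             "https://example.com/benchmark", tier=2, supports=True))
print(report.conclusion())  # -> supported
```

Keeping supporting and contradicting evidence in one structure forces the "search both sides" step: a conclusion of `mixed` only appears when you actually collected evidence against the hypothesis.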
## Quick Reference
| Task | Strategy |
|---|---|
| API docs | Official docs → GitHub README → Recent tutorials |
| Library comparison | Official sites → npm/PyPI stats → GitHub activity |
| Best practices | Official guides → Recent posts → Stack Overflow |
| Troubleshooting | Error search → GitHub issues → Stack Overflow |
| Current state | Release notes → Changelog → Recent announcements |
| Hypothesis testing | Define claims → Search both sides → Weight evidence |
## Source Evaluation Tiers
| Tier | Sources | Usage |
|---|---|---|
| 1 - Most reliable | Official docs, release notes, changelogs | Primary evidence |
| 2 - Generally reliable | Verified tutorials, maintained examples, reputable blogs | Supporting evidence |
| 3 - Use with caution | Stack Overflow, forums, old tutorials | Check dates, cross-verify |
Always note the source tier in findings.
## Search Strategies
**Multiple approaches:**
- WebSearch for overview and current information
- WebFetch for specific documentation pages
- Check MCP servers (Context7, search tools) if available
- Follow links to authoritative sources
- Search official documentation before community resources
**Cross-reference:**
- Verify claims across multiple sources
- Check publication dates - prefer recent
- Flag breaking changes or deprecations
- Note when information might be outdated
- Distinguish stable APIs from experimental features
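The date check above can be mechanized. A minimal sketch, assuming a roughly two-year staleness cutoff (an arbitrary choice; adjust per topic; fast-moving ecosystems warrant a much shorter window):

```python
from datetime import date

STALE_AFTER_DAYS = 730  # assumption: ~2 years means "verify carefully"

def flag_outdated(sources, today):
    """Return URLs whose publication date is older than the cutoff."""
    return [url for url, published in sources
            if (today - published).days > STALE_AFTER_DAYS]

flagged = flag_outdated(
    [("https://example.com/tutorial-2019", date(2019, 5, 1)),
     ("https://example.com/release-notes", date(2025, 1, 10))],
    today=date(2025, 6, 1),
)
print(flagged)  # -> ['https://example.com/tutorial-2019']
```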
## Reporting Findings
**Lead with the answer:**
- Direct answer to question first
- Supporting details with source links second
- Code examples when relevant (with attribution)
**Include metadata:**
- Version numbers and compatibility requirements
- Publication dates for time-sensitive topics
- Security considerations or best practices
- Common gotchas or migration issues
- Confidence level based on source consensus
**Handle uncertainty clearly:**
- "No official documentation found for [topic]" is valid
- Explain what you searched and where you looked
- Distinguish "doesn't exist" from "couldn't find reliable information"
- Present what you found with appropriate caveats
- Suggest alternative search terms or approaches
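The answer-first shape described above can be sketched as a small formatter. The field names and layout here are illustrative assumptions, not a mandated report schema:

```python
def format_report(answer, sources, caveats=()):
    """Render a findings report that leads with the answer."""
    lines = [f"Answer: {answer}", "", "Evidence:"]
    for url, tier in sources:
        lines.append(f"- [tier {tier}] {url}")  # tier per the evaluation table
    for caveat in caveats:
        lines.append(f"Caveat: {caveat}")
    return "\n".join(lines)

print(format_report(
    "Yes, v3 supports streaming responses",
    [("https://example.com/docs/streaming", 1)],
    caveats=["Marked experimental as of the v3.2 release notes"],
))
```

The point of the structure is ordering: the direct answer comes first, evidence with source tiers second, and caveats last so uncertainty is visible but does not bury the result.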
## Common Mistakes
| Mistake | Fix |
|---|---|
| Searching only one source | Cross-reference minimum 2-3 sources |
| Ignoring publication dates | Check dates, flag outdated information |
| Treating all sources equally | Use tier system, weight accordingly |
| Reporting before verification | Verify claims across sources first |
| Vague hypothesis testing | Break into specific falsifiable claims |
| Skipping official docs | Always start with tier 1 sources |
| Over-confident with single source | Note source tier and look for consensus |