# researching-web

Web Research with Perplexity

Default: call the Perplexity MCP tool directly. Spawn an agent only when codebase context is explicitly needed.
## Best For
- Technology comparisons (X vs Y)
- Best practices, industry standards
- OWASP, security guidelines
- Documentation references
- Stable technical content
## Default Mode: Direct MCP Call

Use this for 90% of research requests. When the user says "ask Perplexity", "research", "look up", etc.:

```
mcp__perplexity-ask__perplexity_ask({
  "messages": [{ "role": "user", "content": "Your research question" }]
})
```

This is fast, reliable, and what users expect.
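For example, a request like "ask Perplexity about Go error wrapping" (topic chosen for illustration) maps directly to:

```
mcp__perplexity-ask__perplexity_ask({
  "messages": [{ "role": "user", "content": "Go error wrapping with errors.Is and errors.As: best practices" }]
})
```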
## Deep Mode: Agent (Rare)

Use the agent only when the user explicitly asks to compare research findings against their current code.

Trigger phrases that warrant the agent:
- "compare my code to best practices"
- "is my implementation following standards"
- "research and show how my code differs"

```
Task(subagent_type="perplexity-researcher", prompt="Research: <topic>", run_in_background=true)
```
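For instance, a hypothetical request such as "compare our session handling to OWASP guidance" might spawn:

```
Task(
  subagent_type="perplexity-researcher",
  prompt="Research: OWASP session management best practices; compare findings against the project's auth middleware",
  run_in_background=true
)
```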
DO NOT use the agent for:
- Simple "ask Perplexity about X" requests
- General research questions
- "What is the best way to do X" (unless they mention their code)
## Query Formulation Tips
- Be specific: "Go 1.25 error handling best practices 2025"
- Include context: "Redis vs Memcached for session storage in Go services"
- Ask comparisons: "Pros and cons of gRPC vs REST for microservices"
- Include year: "Claude Code context optimization 2025"
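Putting these tips together, a well-formed call (topic and wording invented for illustration) might look like:

```
mcp__perplexity-ask__perplexity_ask({
  "messages": [{ "role": "user", "content": "Redis vs Memcached for session storage in Go services: pros, cons, and 2025 recommendations" }]
})
```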
## Reference Following (Deep Research)
After Perplexity returns results with citations:
- Review all cited URLs in the response
- WebFetch top 2-3 most relevant sources for deeper context
- Synthesize comprehensive answer combining all sources

```
# After Perplexity response with citations
WebFetch(url="<cited-url-1>", prompt="Extract key details about <topic>")
WebFetch(url="<cited-url-2>", prompt="Extract implementation examples")
```
Use reference following when:
- Initial answer is high-level and needs specifics
- User asks "tell me more" or "dig deeper"
- Implementing something that needs detailed guidance
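End to end, a deep-research pass (query and prompts are illustrative) chains the two tools:

```
# 1. Initial research
mcp__perplexity-ask__perplexity_ask({
  "messages": [{ "role": "user", "content": "gRPC vs REST for internal microservices 2025" }]
})

# 2. Follow the top citations from the response
WebFetch(url="<cited-url>", prompt="Extract benchmarks and trade-offs")

# 3. Synthesize all sources using the Output Structure below
```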
## Output Structure

```markdown
## Summary
[Key findings - 2-3 sentences]

## Details
[Organized findings by topic]

## Recommendations
[Actionable items for the project]

## Sources
- [Source](url) - [what was learned]
```