# res-deep — Deep Research

Iterative multi-round research across Web, Reddit, X/Twitter, GitHub, Hacker News, Substack, Financial Media, LinkedIn, and more, with structured output frameworks (comparison, landscape, deep-dive, decision).
## Architecture
| Source | Tool | Cost |
|---|---|---|
| Web | Claude Code WebSearch + xAI web_search | Free + $0.005/call |
| Reddit | Claude Code WebSearch (site:reddit.com) + xAI web_search | Free + $0.005/call |
| X/Twitter | xAI x_search only | $0.005/call |
| GitHub | Claude Code WebSearch (site:github.com) + xAI web_search | Free + $0.005/call |
| Hacker News | Claude Code WebSearch (site:news.ycombinator.com) + xAI web_search | Free + $0.005/call |
| Substack | Claude Code WebSearch (site:substack.com) + xAI web_search | Free + $0.005/call |
| Financial Media | Claude Code WebSearch (site-specific) + xAI web_search | Free + $0.005/call |
| Wall Street Oasis | Claude Code WebSearch (site:wallstreetoasis.com) | Free |
| LinkedIn | Claude Code WebSearch (site:linkedin.com) + xAI web_search | Free + $0.005/call |
| Crunchbase | Claude Code WebSearch (site:crunchbase.com) | Free |
| YouTube | Claude Code WebSearch (site:youtube.com) | Free |
| Tech Blogs | Claude Code WebSearch (site-specific) | Free |
Results from multiple sources are merged and deduplicated for comprehensive coverage.
## Prerequisites

### Required Tools

| Tool | Purpose | Install |
|---|---|---|
| uv | Python package manager (handles dependencies) | `curl -LsSf https://astral.sh/uv/install.sh \| sh` |
### Optional Tools

| Tool | Purpose | Install |
|---|---|---|
| scrapling | Headless browser fallback for sites that block WebFetch (403, captcha, empty responses) | `uv tool install 'scrapling[all]'` |
### API Keys

| Service | Purpose | Required | Get Key |
|---|---|---|---|
| xAI | X/Twitter search + supplemental web/GitHub/HN search | Recommended | https://console.x.ai |
**Note:** The skill works without an xAI key (Web-Only mode via Claude Code), but X/Twitter data and broader coverage require xAI.
### Keychain Setup (One-Time, for xAI)

```bash
# 1. Create a dedicated keychain
security create-keychain -p 'YourPassword' ~/Library/Keychains/claude-keys.keychain-db

# 2. Add keychain to search list
security list-keychains -s ~/Library/Keychains/claude-keys.keychain-db ~/Library/Keychains/login.keychain-db /Library/Keychains/System.keychain

# 3. Store your xAI API key
echo -n "Enter xAI API key: " && read -s key && security add-generic-password -s "xai-api" -a "$USER" -w "$key" ~/Library/Keychains/claude-keys.keychain-db && unset key && echo
```

Before using: `security unlock-keychain ~/Library/Keychains/claude-keys.keychain-db`
## Workflow Overview
| Step | Action | Purpose |
|---|---|---|
| 0 | Detect xAI key | Determine Full vs Web-Only mode |
| 1 | Parse query | Extract TOPIC, FRAMEWORK, DEPTH |
| 2 | Round 1: Broad search | Discover entities, themes, initial findings |
| 3 | Gap analysis | Identify missing perspectives, unverified claims |
| 4 | Round 2: Targeted follow-up | Fill gaps, verify claims, deepen coverage |
| 5 | Round 3: Verification | (deep only) Primary source verification |
| 6 | Synthesis | Structure findings into framework template |
| 7 | Expert mode | Answer follow-ups from cached results |
## Step 0: Detect xAI Key

**MANDATORY** — run before every research session.

```bash
security find-generic-password -s "xai-api" -w ~/Library/Keychains/claude-keys.keychain-db 2>/dev/null && echo "XAI_AVAILABLE=true" || echo "XAI_AVAILABLE=false"
```
- `XAI_AVAILABLE=true`: Use Full mode — Claude WebSearch AND xAI scripts in parallel.
- `XAI_AVAILABLE=false`: Use Web-Only mode — Claude WebSearch only. Append a note suggesting xAI setup.
This step is NOT optional. Always check before starting research.
## Step 1: Parse Query

Extract from user input:

### 1a. TOPIC

The subject being researched. Strip framework indicators and depth modifiers.

### 1b. FRAMEWORK

Detect the output framework from query patterns:
| Framework | Detection Patterns | Example |
|---|---|---|
| COMPARISON | "X vs Y", "compare X and Y", "X or Y", "which is better" | "React vs Vue for enterprise apps" |
| LANDSCAPE | "landscape", "ecosystem", "market", "what's out there", "overview of" | "AI agent frameworks landscape" |
| DEEP_DIVE | "deep dive", "how does X work", "explain", "tell me about", "what is" | "Deep dive into WebAssembly" |
| DECISION | "should I/we", "evaluate", "which should we use", "recommend" | "Should we use Kafka or RabbitMQ?" |
Explicit override: User can force a framework with [comparison], [landscape], [deep-dive], or [decision] anywhere in query.
Default: If no framework detected, use DEEP_DIVE.
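The detection rules above can be sketched as a small ordered matcher. This is an illustrative sketch, not the skill's actual implementation; the pattern lists are abridged from the table, and DECISION is checked before COMPARISON so "Should we use X or Y" is not misread as a plain comparison (an ordering assumption):

```python
import re

# Ordered (framework, patterns) pairs abridged from the detection table.
FRAMEWORK_PATTERNS = [
    ("DECISION",   [r"\bshould (i|we)\b", r"\bevaluate\b", r"\bwhich should we use\b", r"\brecommend\b"]),
    ("COMPARISON", [r"\bvs\.?\b", r"\bcompare\b", r"\bwhich is better\b", r"\b\w+ or \w+\b"]),
    ("LANDSCAPE",  [r"\blandscape\b", r"\becosystem\b", r"\bmarket\b", r"\boverview of\b", r"what's out there"]),
    ("DEEP_DIVE",  [r"\bdeep dive\b", r"\bhow does \w+ work\b", r"\bexplain\b", r"\btell me about\b", r"\bwhat is\b"]),
]

def detect_framework(query: str) -> str:
    q = query.lower()
    # Explicit override anywhere in the query: [comparison], [landscape], ...
    override = re.search(r"\[(comparison|landscape|deep-dive|decision)\]", q)
    if override:
        return override.group(1).replace("-", "_").upper()
    for framework, patterns in FRAMEWORK_PATTERNS:
        if any(re.search(p, q) for p in patterns):
            return framework
    return "DEEP_DIVE"  # default when nothing matches
```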
### 1c. DEPTH
| Depth | Trigger | Rounds | Target Sources |
|---|---|---|---|
| quick | "quick", "brief", "overview" | 1 | 10-15 |
| default | (none) | 2 | 25-40 |
| deep | "deep", "comprehensive", "thorough" | 3 | 60-90 |
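The depth table maps trigger words to round and source budgets; a minimal sketch (the tie-break that "deep" wins over "quick" when both appear is an assumption, on the grounds that deeper coverage is the safer default):

```python
# Budgets copied from the depth table above.
DEPTH_TABLE = {
    "quick":   {"rounds": 1, "target_sources": (10, 15)},
    "default": {"rounds": 2, "target_sources": (25, 40)},
    "deep":    {"rounds": 3, "target_sources": (60, 90)},
}

def detect_depth(query: str) -> str:
    q = query.lower()
    if any(t in q for t in ("deep", "comprehensive", "thorough")):
        return "deep"
    if any(t in q for t in ("quick", "brief", "overview")):
        return "quick"
    return "default"
```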
## Step 2: Round 1 — Broad Search

### Query Generation
Generate 6-9 queries covering different angles of the TOPIC:

- Direct query: `"{TOPIC}"` — the topic as stated
- Temporal query: `"{TOPIC} 2026"` or `"{TOPIC} latest"`
- Reddit query: `site:reddit.com "{TOPIC}"`
- GitHub query: `site:github.com "{TOPIC}"`
- HN query: `site:news.ycombinator.com "{TOPIC}"`
- Framework-specific query:
  - COMPARISON: `"{Alt A} vs {Alt B}"`
  - LANDSCAPE: `"{TOPIC} ecosystem" OR "{TOPIC} landscape"`
  - DEEP_DIVE: `"how {TOPIC} works" OR "{TOPIC} explained"`
  - DECISION: `"{TOPIC}" experience OR recommendation`
- Substack query: `site:substack.com "{TOPIC}"`
- Financial media query: `site:tradingview.com OR site:benzinga.com OR site:seekingalpha.com "{TOPIC}"` (for finance/economics topics)
- LinkedIn query: `site:linkedin.com "{TOPIC}"` (when topic involves people or companies)
### Parallel Execution
Run searches simultaneously:
**Claude Code (free):**

- WebSearch: direct query
- WebSearch: temporal query
- WebSearch: Reddit-targeted query
- WebSearch: GitHub-targeted query
- WebSearch: HN-targeted query
- WebSearch: Substack-targeted query
- WebSearch: Financial media-targeted query (for finance/economics topics)
- WebSearch: LinkedIn-targeted query (when topic involves people/companies)
- WebSearch: YouTube-targeted query
- WebSearch: WSO-targeted query (for finance topics)
- WebSearch: Crunchbase-targeted query (for company/startup topics)
**xAI scripts (if available, run as background Bash tasks):**

```bash
uv run scripts/xai_search.py web "{TOPIC}" --json &
uv run scripts/xai_search.py reddit "{TOPIC}" --json &
uv run scripts/xai_search.py x "{TOPIC}" --json &
uv run scripts/xai_search.py github "{TOPIC}" --json &
uv run scripts/xai_search.py hn "{TOPIC}" --json &
uv run scripts/xai_search.py substack "{TOPIC}" --json &
uv run scripts/xai_search.py finance "{TOPIC}" --json &
uv run scripts/xai_search.py linkedin "{TOPIC}" --json &
```
### Merge and Deduplicate
```
MERGED_WEB = dedupe(claude_web + xai_web)
MERGED_REDDIT = dedupe(claude_reddit + xai_reddit)
MERGED_GITHUB = dedupe(claude_github + xai_github)
MERGED_HN = dedupe(claude_hn + xai_hn)
MERGED_SUBSTACK = dedupe(claude_substack + xai_substack)
MERGED_FINANCE = dedupe(claude_finance + xai_finance)
MERGED_LINKEDIN = dedupe(claude_linkedin + xai_linkedin)
X_RESULTS = xai_x_results (no Claude equivalent)
WSO_RESULTS = claude_wso (WebSearch only)
CRUNCHBASE_RESULTS = claude_crunchbase (WebSearch only)
YOUTUBE_RESULTS = claude_youtube (WebSearch only)
```
Deduplication by URL, keeping the entry with more metadata.
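The merge rule (dedupe by URL, keep the richer entry) can be sketched as follows. This is an illustrative sketch, not the skill's actual code; "more metadata" is interpreted here as more non-empty fields, and trailing-slash normalization is an assumption:

```python
def dedupe(*result_lists):
    """Merge search results by URL, keeping the entry with more metadata.

    Each result is a dict with at least a "url" key.
    """
    def richness(entry):
        # Count non-empty fields as a proxy for "more metadata".
        return sum(1 for v in entry.values() if v not in (None, "", [], {}))

    best = {}
    for results in result_lists:
        for entry in results:
            url = entry["url"].rstrip("/")  # normalize trailing slash
            if url not in best or richness(entry) > richness(best[url]):
                best[url] = entry
    return list(best.values())

# MERGED_REDDIT = dedupe(claude_reddit, xai_reddit), and so on per platform.
```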
### Round 1 Internal Notes

Record (internally, NOT in output):

```
KEY_ENTITIES: [specific tools, companies, people discovered]
THEMES: [recurring themes across sources]
GAPS: [what's missing — feed into Step 3]
CONTRADICTIONS: [conflicting claims]
LEADS: [URLs worth deep-reading via WebFetch (or scrapling fallback) in Round 2]
```
## Step 3: Gap Analysis

After Round 1, run gap analysis. See `references/iterative-research.md` for the full checklist.
### Gap Categories
| Gap | Check | Action |
|---|---|---|
| Missing perspective | Have developer, operator, and business views? | Target missing perspective |
| Unverified claims | Any claims from only 1 source? | Seek corroboration |
| Shallow coverage | Any entity mentioned but unexplained? | Deep-search that entity |
| Stale data | Key facts > 12 months old? | Search for recent updates |
| Missing source type | Missing Reddit / GitHub / HN / X / blogs? | Target that platform |
| Missing financial/business context | Missing Substack / financial media / LinkedIn / Crunchbase? | Target that platform |
### Plan Round 2

Select the top 4-6 gaps and generate targeted queries for each. See `references/search-patterns.md` for multi-round refinement patterns.
Skip to Step 6 if depth is quick (single round only).
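The gap checks can be sketched as predicates over the Round 1 internal notes. The field names (`perspectives`, `claims`, `platforms`, and so on) are illustrative assumptions, not the skill's actual schema:

```python
REQUIRED_PERSPECTIVES = {"developer", "operator", "business"}
CORE_PLATFORMS = {"reddit", "github", "hn", "x", "blogs"}

def find_gaps(notes):
    """Return gap labels; each maps to a Round 2 action from the table."""
    gaps = []
    if REQUIRED_PERSPECTIVES - set(notes.get("perspectives", [])):
        gaps.append("missing_perspective")     # target the missing view
    if any(len(c["sources"]) < 2 for c in notes.get("claims", [])):
        gaps.append("unverified_claims")       # seek corroboration
    if notes.get("unexplained_entities"):
        gaps.append("shallow_coverage")        # deep-search that entity
    if notes.get("oldest_key_fact_months", 0) > 12:
        gaps.append("stale_data")              # search for recent updates
    if CORE_PLATFORMS - set(notes.get("platforms", [])):
        gaps.append("missing_source_type")     # target that platform
    return gaps
```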
## Step 4: Round 2 — Targeted Follow-Up

### Query Rules
- Never repeat Round 1 queries
- Entity-specific queries — target names/tools discovered in Round 1
- Source-type specific — target platforms underrepresented in Round 1
- Framework-adapted — see the targeting table in `references/iterative-research.md`
### Execution
Same parallel pattern as Round 1, but with targeted queries.
Additionally, use WebFetch for high-value URLs discovered in Round 1:
- Official documentation pages
- Benchmark result pages
- Engineering blog posts
- Comparison articles with methodology
Maximum 4-6 WebFetch calls in Round 2.
**Scrapling fallback:** If WebFetch returns 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

1. `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
2. If content is thin (JS-only shell, no data, mostly nav links) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
3. If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
4. All tiers fail → skip URL and note "scrapling blocked"
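The escalation ladder can be wrapped in a small driver. This is a sketch only: the scrapling subcommands and flags are exactly the three tiers listed above, but the `looks_thin` heuristic (short output, mostly links) is an assumption, not part of cli-web-scrape:

```python
import subprocess
from pathlib import Path

OUT = "/tmp/scrapling-fallback.md"

# (subcommand, extra flags) — the three tiers from the protocol above.
TIERS = [
    ("get", []),
    ("fetch", ["--network-idle", "--disable-resources"]),
    ("stealthy-fetch", ["--solve-cloudflare"]),
]

def looks_thin(text):
    """Heuristic (an assumption): JS-only shells and nav pages are short
    and dominated by link lines."""
    lines = [l for l in text.splitlines() if l.strip()]
    link_lines = sum(1 for l in lines if "](" in l or l.strip().startswith("http"))
    return len(text) < 500 or (bool(lines) and link_lines / len(lines) > 0.7)

def fetch_with_fallback(url):
    for subcmd, flags in TIERS:
        cmd = ["scrapling", "extract", subcmd, url, OUT, *flags]
        if subprocess.run(cmd).returncode != 0:
            continue  # tier failed outright; escalate
        text = Path(OUT).read_text()
        if not looks_thin(text):
            return text
    return None  # all tiers failed: skip URL, note "scrapling blocked"
```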
### Confidence Update
After Round 2, re-assess all claims:
| Before | New Evidence | After |
|---|---|---|
| [LOW] | Second source found | [MEDIUM] |
| [MEDIUM] | Third source found | [HIGH] |
| Any | Contradicted | Note conflict, present both sides |
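The re-assessment table reduces to a small function. A sketch under one assumption: the `"CONFLICT"` marker stands in for "note conflict, present both sides", which the table describes but does not name:

```python
def update_confidence(level, independent_sources, contradicted=False):
    """Apply the Round 2 re-assessment: corroboration upgrades a claim,
    any contradiction flags it for both-sides presentation."""
    if contradicted:
        return "CONFLICT"  # note conflict, present both sides
    if independent_sources >= 3:
        return "HIGH"      # [MEDIUM] + third source
    if independent_sources == 2:
        return "MEDIUM"    # [LOW] + second source
    return level           # single source: unchanged
```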
Skip to Step 6 if depth is default.
## Step 5: Round 3 — Verification (Deep Only)
Round 3 is for verification only. No new discovery.
### Budget
Maximum 6-10 WebFetch lookups targeting:
| Target | Purpose | Max Calls |
|---|---|---|
| Primary sources for key claims | Verify accuracy | 3-4 |
| Independent benchmark sites | Validate performance claims | 1-2 |
| Both sides of contradictions | Resolve conflicts | 1-2 |
| Official sites for versions/dates | Confirm recency | 1-2 |
Scrapling fallback: Same auto-escalation protocol as Round 2 — try HTTP tier, validate content, escalate to Dynamic/Stealthy if thin or blocked.
### Rules
- Verify, don't discover — no new topic exploration
- Target highest-impact claims — those that would change the recommendation
- Check primary sources — go to the original, not summaries
- Update confidence — upgrade or downgrade based on findings
- Trust primary over secondary — if primary contradicts secondary, note it
## Step 6: Synthesis

### Framework Selection

Load `references/output-frameworks.md` and select the template matching the detected FRAMEWORK.

### Filling the Template
- **Header block** — Framework type, topic, depth, source count, date
- **Core content** — Fill framework sections with research findings
- **Confidence indicators** — Mark each claim: `[HIGH]`, `[MEDIUM]`, or `[LOW]`
- **Community perspective** — Synthesize Reddit/X/HN/GitHub sentiment
- **Statistics footer** — Source counts and engagement metrics
- **Sources section** — Organized by platform with URLs and metrics
### Engagement-Weighted Synthesis

Weight sources by signal strength. See `references/iterative-research.md` for the full weighting table.
| Signal | Threshold | Weight |
|---|---|---|
| Reddit upvotes | 100+ | High |
| X engagement | 50+ likes | High |
| GitHub stars | 1000+ | High |
| HN points | 100+ | High |
| Substack likes | 50+ | High |
| Multi-platform agreement | 3+ sources | High |
| Dual-engine match | Claude + xAI | High |
| Seeking Alpha comments | 20+ | Medium |
| WSO upvotes | 10+ | Medium |
| YouTube views | 10K+ | Medium |
| Recent (< 7 days) | Any | Medium |
| Single source only | Any | Low |
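The weighting table can be sketched as a threshold lookup. Metric key names are illustrative assumptions; the thresholds themselves are copied from the table above:

```python
HIGH_THRESHOLDS = {
    "reddit_upvotes": 100, "x_likes": 50, "github_stars": 1000,
    "hn_points": 100, "substack_likes": 50,
}
MEDIUM_THRESHOLDS = {
    "seekingalpha_comments": 20, "wso_upvotes": 10, "youtube_views": 10_000,
}

def signal_weight(metric, value, *, platforms_agreeing=1,
                  dual_engine=False, days_old=None):
    # Multi-platform agreement (3+) or a Claude + xAI dual-engine match
    # is High regardless of raw engagement.
    if platforms_agreeing >= 3 or dual_engine:
        return "High"
    if metric in HIGH_THRESHOLDS and value >= HIGH_THRESHOLDS[metric]:
        return "High"
    if metric in MEDIUM_THRESHOLDS and value >= MEDIUM_THRESHOLDS[metric]:
        return "Medium"
    if days_old is not None and days_old < 7:
        return "Medium"  # recency alone earns Medium
    return "Low"         # single source, below thresholds
```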
### Statistics Footer Format
```
Research Statistics
├─ Reddit: {n} threads │ {upvotes} upvotes
├─ X: {n} posts │ {likes} likes │ {reposts} reposts
├─ GitHub: {n} repos │ {stars} total stars
├─ HN: {n} threads │ {points} total points
├─ Substack: {n} articles │ {likes} likes
├─ Financial: {n} articles │ {sources}
├─ LinkedIn: {n} profiles/articles
├─ YouTube: {n} videos
├─ WSO: {n} threads
├─ Crunchbase: {n} profiles
├─ Web: {n} pages │ {domains}
├─ Merged: {n} from Claude + {n} from xAI
└─ Top voices: r/{sub1} │ @{handle1} │ {blog1}
```
**Web-Only Mode footer:**

```
Research Statistics
├─ Reddit: {n} threads (via Claude WebSearch)
├─ GitHub: {n} repos (via Claude WebSearch)
├─ HN: {n} threads (via Claude WebSearch)
├─ Substack: {n} articles (via Claude WebSearch)
├─ Financial: {n} articles (via Claude WebSearch)
├─ LinkedIn: {n} profiles/articles (via Claude WebSearch)
├─ YouTube: {n} videos (via Claude WebSearch)
├─ Web: {n} pages
└─ Top sources: {site1}, {site2}

Add X/Twitter + broader coverage: Set up xAI API key (see Prerequisites)
```
## Step 7: Expert Mode
After delivering research, enter Expert Mode:
- Answer follow-ups from cached results
- No new searches unless explicitly requested
- Cross-reference between sources
New search triggers (exit Expert Mode):
- "Search again for..."
- "Find more about..."
- "Update the research..."
- "Look deeper into..."
## Operational Modes
| Mode | Sources | When |
|---|---|---|
| Full | Claude WebSearch + xAI (web + X + Reddit + GitHub + HN + Substack + Finance + LinkedIn) | Step 0 returns XAI_AVAILABLE=true |
| Web-Only | Claude WebSearch only | Step 0 returns XAI_AVAILABLE=false |
Mode is determined by Step 0 — never skip it or assume Web-Only without checking.
## Depth Control
| Depth | Rounds | Sources | xAI Calls (Full) | Use Case |
|---|---|---|---|---|
| quick | 1 | 10-15 | 8 | Fast overview, time-sensitive |
| default | 2 | 25-40 | 13 | Balanced research |
| deep | 3 | 60-90 | 18 + 6-10 WebFetch | Comprehensive analysis, important decisions |
## Cost Awareness
| Action | Cost |
|---|---|
| Claude Code WebSearch | Free (subscription) |
| xAI search call (any type) | $0.005/call |
| WebFetch (built-in) | Free |
| scrapling fallback (optional) | Free |
Estimated cost per research session:
| Depth | Full Mode | Web-Only |
|---|---|---|
| quick | ~$0.04 (8 xAI calls) | Free |
| default | ~$0.065 (13 xAI calls) | Free |
| deep | ~$0.09 (18 xAI calls) | Free |
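The per-session estimates follow directly from the call counts and the $0.005/call rate; a minimal sketch (deep mode's 6-10 WebFetch calls are free and so do not affect the figure):

```python
XAI_CALL_COST = 0.005  # dollars per xAI search call
XAI_CALLS_BY_DEPTH = {"quick": 8, "default": 13, "deep": 18}

def estimated_cost(depth, xai_available):
    """Per-session cost estimate: Claude WebSearch and WebFetch are free,
    so Web-Only mode always costs $0."""
    if not xai_available:
        return 0.0
    return XAI_CALLS_BY_DEPTH[depth] * XAI_CALL_COST
```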
**Cost-Saving Strategy:**
- Claude WebSearch handles most needs (free)
- xAI adds X/Twitter (unique value) + broader coverage per platform
- WebFetch for deep-reading specific URLs (free), with scrapling fallback for blocked sites
## Critical Constraints
**DO:**

- Run Step 0 (xAI key detection) before every research session
- If xAI key exists: run Claude WebSearch AND xAI scripts in parallel (Full mode)
- If xAI key missing: use Claude WebSearch only (Web-Only mode)
- Run gap analysis between rounds — never skip it
- Merge and deduplicate results by URL
- Exclude archived GitHub repositories from results — they are unmaintained and read-only
- Mark every claim with confidence: `[HIGH]`, `[MEDIUM]`, or `[LOW]`
- Ground synthesis in actual research, not pre-existing knowledge
- Cite specific sources with URLs
- Use the `--json` flag when calling xAI scripts for programmatic parsing
- Load the framework template from `references/output-frameworks.md`
**DON'T:**
- Skip Claude Code WebSearch (it's free)
- Run sequential searches when parallel is possible
- Display duplicate results from different search engines
- Quote more than 125 characters from any single source
- Repeat queries across rounds — each round targets gaps from previous
- Add Round 3 for quick or default depth — it's deep-only
- Discover new topics in Round 3 — verification only
## Troubleshooting

**xAI key not found:**

```bash
security find-generic-password -s "xai-api" ~/Library/Keychains/claude-keys.keychain-db
```

If not found, run the keychain setup above.

**Keychain locked:**

```bash
security unlock-keychain ~/Library/Keychains/claude-keys.keychain-db
```

**No X/Twitter results:** Requires a valid xAI API key. Check at https://console.x.ai

**WebFetch blocked (403/captcha/empty):** Install scrapling and follow the auto-escalation protocol from cli-web-scrape (HTTP → validate → Dynamic → Stealthy):

```bash
uv tool install 'scrapling[all]'
scrapling install  # one-time: install browser engines for Dynamic/Stealthy tiers
```

**Script errors:** Ensure uv is installed: `which uv`. If missing: `curl -LsSf https://astral.sh/uv/install.sh | sh`
## References

- `references/output-frameworks.md` — Framework templates (comparison, landscape, deep-dive, decision)
- `references/search-patterns.md` — Search operators and multi-round query patterns
- `references/iterative-research.md` — Gap analysis, round procedures, cross-referencing methodology