unblocked-context-research
Unblocked Context Search
Unified retrieval for engineering context. Calls context_research with a natural-language query to search across code, PRs, docs, message threads, and issues — whether you need a focused single-entity lookup, a filtered data query, or a multi-source research synthesis. One tool replaces the need to choose between semantic search, structured retrieval, and deep investigation.
How to Invoke
Do not infer CLI availability from the MCP tool list — fine-grained tools are CLI-only, so the MCP surface tells you nothing. Run command -v unblocked once per session and cache the result. See unblocked-tools-guide for full routing rules.
CLI (preferred):
unblocked context-research --query "<your query>" [--instruction "<instruction>"] [--effort low|medium|high]
MCP fallback (only if CLI is confirmed unavailable): call context_research with equivalent arguments (query, instruction, effort). context_research is exposed on MCP in virtually all environments.
If neither is available: stop and tell the user Unblocked is not configured in this environment (see unblocked-tools-guide for the full message). Do not substitute with web search or other sources.
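Put together, a session's first call might look like this minimal sketch (the query text is illustrative; the guard mirrors the availability check above):

```shell
QUERY="How does AuthService.validateToken() handle expired JWTs?"

if command -v unblocked >/dev/null 2>&1; then
  # CLI is on PATH: preferred invocation path.
  unblocked context-research --query "$QUERY" --effort low
else
  # CLI confirmed unavailable: fall back to the context_research MCP tool
  # with equivalent arguments (query, effort).
  echo "unblocked CLI not on PATH; use the context_research MCP tool instead" >&2
fi
```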
When This Adds Value Over Grep/Read
Grep and Read show you what the code does now. This tool adds:
- Why it was built this way (PR discussions, design decisions)
- What was tried before (rejected approaches, prior incidents)
- What the team expects (conventions from team messages, docs, review comments)
- What's documented elsewhere (issue trackers, wiki pages, message threads)
- What happened (filtered activity — PRs merged, issues completed, message threads in a time range)
- What exists elsewhere (code in other repos, services, or systems not in the local workspace)
If your question is purely about current implementation and the code is local, Grep/Read is faster. If your question involves intent, history, conventions, activity across systems, or code outside the current repo, this tool surfaces context that isn't available locally.
Gotchas
- Keyword queries return noise — `auth rate limiting` scatters results across too many entities. Write a full natural-language question with concrete identifiers: `How does AuthService.validateToken() handle expired JWTs?`
- Not mining identifiers from results before re-querying — the first result contains stronger nouns (file paths, class names, PR numbers) than the original request. Extract them before forming follow-up queries.
- Treating returned code as current local state — results reflect the default branch, not the local workspace. Always verify against local files before acting.
- Asking questions the code can answer directly — if you only need the current implementation (not history or reasoning), use Grep/Glob/Read instead. The tool's value is organizational context, not code search.
- Confusing "completed" date semantics — "completed" uses resolved date (issue trackers) or merged status (PRs), not created date. "Issues I completed last week" = resolved by me in that range.
- Using time filters for current status — "what am I working on" = status filter (open/InProgress), no time range. Time filters are for activity windows ("last week", "since Monday").
- Not using "me" for self-references — when the user says "I"/"my"/"me", include `me` in the query. Use actual names only for other people.
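The date-semantics, status-filter, and self-reference gotchas above translate into query phrasing like this minimal sketch (query strings are illustrative; the guard skips execution when the CLI is absent):

```shell
run_query() {
  # Only invoke the CLI when it is actually installed.
  command -v unblocked >/dev/null 2>&1 || return 0
  unblocked context-research --query "$1" --effort low
}

# "completed" = resolved/merged in the range, and "me" stands in for the user:
run_query "Issues completed by me last week"

# Current status is a status filter, not a time range:
run_query "Open and in-progress issues assigned to me"
```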
Input
| Parameter | Required | Description |
|---|---|---|
| `query` | Yes | What to search for — the topic, entities, and any hard filters (date range, author, status). Write a complete phrase, not bare keywords. |
| `effort` | No | Search effort: low (default), medium, or high. Use low for targeted lookups anchored on one entity, URL, or file; medium for exploratory queries without a clear anchor; high for planning, architecture reviews, migrations, incident retros, and cross-system investigations. |
| `include_content` | No | String. If "true", return full content for each match. If omitted, return only title and URL. |
| `instruction` | No | Relevance criteria that shape which results surface and in what order, without changing what is searched. E.g., "Prefer architecture decision records over API reference docs". |
| `max_results` | No | String. Maximum number of documents to return. Defaults to the server's limit if omitted. |
effort selection: low (default) for targeted lookups anchored on one entity, URL, or file. medium for exploratory queries without a clear anchor. high for planning, architecture, migrations, incident retros, and cross-system investigations — use whenever you're about to produce a plan or design.
include_content selection: Use "true" when you need to read the actual content inline. Omit it for initial discovery passes where titles and URLs suffice — you can always follow up with include_content: "true" or resolve individual URLs.
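As a sketch of the effort guidance above, here is the same CLI at two effort levels, with an instruction shaping relevance (all query and instruction strings are illustrative, not real entities):

```shell
TARGETED_QUERY="How does AuthService.validateToken() handle expired JWTs?"
PLANNING_QUERY="What architecture decisions shaped auth in the API gateway, and what migrations are planned?"

if command -v unblocked >/dev/null 2>&1; then
  # Targeted lookup anchored on one method: low effort (the default).
  unblocked context-research --query "$TARGETED_QUERY" --effort low

  # Planning-grade investigation: high effort, plus relevance criteria.
  unblocked context-research --query "$PLANNING_QUERY" \
    --instruction "Prefer architecture decision records over API reference docs" \
    --effort high
fi
```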
Write each query as a complete question or directive. Include the most concrete details you have:
- Service, module, class, or method names
- File paths or endpoints
- Project keys, channel names, repo names
- Date ranges, statuses, assignees
- Decision topics or feature names
The tool routes your query to the right retrieval strategy internally — you don't need to specify whether you want semantic search, a filtered lookup, or a research synthesis.
Splitting Queries
Split distinct unknowns into separate context_research calls rather than cramming everything into one query. Each call should have one objective. Run them in parallel when the unknowns are independent.
One query, two unknowns (diluted results):
Investigate the authentication flow and the rate limiting conventions in the API gateway.
Two parallel queries (focused results):
Query 1: How does AuthService.validateToken() verify JWTs and handle expiration?
Query 2: What conventions does the team follow for rate limiting middleware in the API gateway?
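Since the two unknowns are independent, the calls can run concurrently; a minimal sketch (queries as above, guarded so it is a no-op without the CLI):

```shell
QUERY_1="How does AuthService.validateToken() verify JWTs and handle expiration?"
QUERY_2="What conventions does the team follow for rate limiting middleware in the API gateway?"

if command -v unblocked >/dev/null 2>&1; then
  # One objective per call; & backgrounds each, wait collects both.
  unblocked context-research --query "$QUERY_1" &
  unblocked context-research --query "$QUERY_2" &
  wait
fi
```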
For complex investigations that span many entities, write a detailed 2-5 sentence directive rather than a short keyword fragment. Include the specific entities, systems, and questions you want answered.
Data Sources
The tool can retrieve structured records in addition to semantic search results.
| Source | Lookup Types | Key Filters | Details |
|---|---|---|---|
| Issue trackers | Single issue, filtered lists, by epic/board/sprint/label | project, status, assignee/creator, date ranges, label | See references/issue-tracker-queries.md |
| Messaging | Channel summary/data, thread summary/data | channel name, date range, content criteria | See references/messaging-queries.md |
| PRs | Single PR, filtered lists | author, status, repository, time range, limit | See references/pr-and-code-queries.md |
For structured queries, include specific details: project keys, channel names, repo names, date ranges, and statuses. Concrete details produce better results than vague requests.
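For instance, a structured lookup packs the hard filters straight into the query text; in this sketch the project key (PAY) and channel name (#payments) are hypothetical placeholders, not real identifiers:

```shell
# Hypothetical project key and channel name -- swap in real ones.
STRUCTURED_QUERY="Issues in project PAY completed by me last week, plus related #payments channel threads"

if command -v unblocked >/dev/null 2>&1; then
  unblocked context-research --query "$STRUCTURED_QUERY" --effort medium
fi
```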
When to Skip
- You only need the current implementation, not history or reasoning, and Grep/Glob/Read can find the code locally. If local search can't find the referenced entity, use this tool instead.
- You already know exactly which file and line to look at — direct file reads are faster.
- The question is about syntax or structure with no organizational context plausibly relevant.
Interpreting Results
- Returned source code reflects the default branch, not the current local workspace — verify against local files before acting.
- Mine concrete identifiers (file paths, class names, PR numbers, owner names, channel names) from results and use them in follow-up queries.
- Cross-reference key claims against at least one primary artifact (source file, config, authoritative doc) before driving decisions.
- Say explicitly when the tool returned thin or conflicting context so the user knows the confidence level.
- For messaging results, distinguish summary vs data content — if you got the wrong mode, re-query with the other.
- If important gaps remain, make one targeted follow-up query with the identifiers you mined from the first result.
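The mine-then-follow-up loop can be sketched as below; the PR number and file path are hypothetical placeholders standing in for identifiers mined from a first result, not real artifacts:

```shell
# Identifiers mined from a first result -- hypothetical values for illustration.
MINED_PR="#1234"
MINED_FILE="services/auth/token_validator.go"

# One targeted follow-up built from the mined identifiers.
FOLLOWUP_QUERY="Why did PR $MINED_PR change the expiry handling in $MINED_FILE?"

if command -v unblocked >/dev/null 2>&1; then
  unblocked context-research --query "$FOLLOWUP_QUERY" --effort low
fi
```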
Going Deeper with Source-Specific Tools
context_research casts a wide net across all sources. When results point toward a specific source type and you want more depth from that source alone, use the fine-grained tool for it — they search the same data but let you focus and filter without noise from other sources.
| If results surface… | And you want more… | Use |
|---|---|---|
| Code snippets or implementations | Code from other repos, or semantic code search | context_search_code |
| PR descriptions or review discussions | More PR history, rejected approaches, change reasoning | context_search_prs |
| Issue reports or tickets | More issues, related bugs, in-progress work | context_search_issues |
| Wiki pages, runbooks, or design docs | More documentation across Confluence, Notion, Google Drive | context_search_documentation |
| Slack threads or team conversations | More messaging context, specific channels or time ranges | context_search_messages |
When to escalate to a fine-grained tool:
- The broad results surfaced one strong signal in a specific source type and you want to pull more from it
- You want to filter or shape results within a source (e.g., only merged PRs, only open issues, only a specific channel)
- The initial results mixed sources in a way that's noisy — a focused tool returns cleaner, ranked results for that source
Run fine-grained follow-ups in parallel when you're chasing independent threads across multiple source types.
Reference
- references/query-patterns.md — general query-writing guidance and filter semantics
- references/issue-tracker-queries.md — issue tracker lookups and filter semantics
- references/messaging-queries.md — messaging channel and thread queries
- references/pr-and-code-queries.md — PR lookups, bug investigation, architecture, conventions, and research queries