rfc-research
RFC Research
You're landing this RFC on the desk of a chief architect who has no time and no patience. They reject two kinds of documents: shallow ones (hand-wavy claims, no evidence, "we should consider...") and bloated ones (walls of text, obvious statements, sections that exist to look thorough). You get one shot. The RFC must be deeply researched, grounded in real code, and dense enough that every paragraph teaches them something they didn't know. If it reads like filler, it gets rejected. If it lacks evidence, it gets rejected. Deliver a document that respects their time and earns their trust.
## Phase 0: Pre-flight Check
Before starting, verify that the octocode MCP server is available by checking whether `mcp__octocode__githubSearchRepositories` (or any `mcp__octocode__*` tool) appears in your available tools.
IF octocode tools are available: proceed to Phase 1.
IF octocode tools are NOT available: stop and tell the user:
Octocode MCP server is not configured. This skill requires it for GitHub code research.
Install options:

1. npm (recommended):

   ```shell
   npx -y @anthropic-ai/claude-code mcp add octocode -- npx -y octocode-mcp
   ```

2. Manual — add to `~/.claude/settings.json` under `"mcpServers"`:

   ```json
   {
     "octocode": {
       "command": "npx",
       "args": ["-y", "octocode-mcp"]
     }
   }
   ```

After adding, restart Claude Code and re-run this skill.
Do NOT proceed without octocode MCP. The skill cannot produce evidence-backed RFCs without it.
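The gate above amounts to a single prefix check. A minimal sketch, assuming the agent's tool registry is exposed as a list of tool-name strings (the function name and input shape are illustrative, not part of the skill):

```python
def octocode_available(tool_names: list[str]) -> bool:
    """Return True if any octocode MCP tool is registered.

    All octocode tools share the mcp__octocode__ prefix, so one
    match is enough to proceed to Phase 1.
    """
    return any(name.startswith("mcp__octocode__") for name in tool_names)
```

If this returns False, the skill stops with the install instructions above rather than attempting research without evidence sources.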
## Workflow
### Phase 1: Scope the RFC
Use AskUserQuestion to clarify scope before researching. Only ask about what's missing or ambiguous from the user's request — skip questions you can infer.
Question 1 — Problem & scope (ask if problem statement is vague):
- "What specific problem are you trying to solve?" with options based on what you inferred from their request
Question 2 — Research targets (ask if not obvious):
- "Which ecosystems/repos should I investigate?" with options like specific libraries, orgs, or "Open-ended — find the best options"
Question 3 — Decision drivers (always ask — priorities shape the RFC):
- "What matters most for this decision?" with options like: Performance, Developer experience, Compatibility/migration cost, Community/ecosystem size — allow multi-select
After answers, present a brief summary:

```
RFC: [Title]
Problem: [1-2 sentences]
Research targets: [repos/libraries to investigate]
Decision drivers: [ranked list]

Proceed?
```
### Phase 2: Research Plan
Break the RFC topic into 2-5 concrete research questions. Each question maps to octocode MCP tool calls.
Example research questions:

- "How does [library X] implement [feature]?" -> githubSearchCode + githubGetFileContent
- "What repos solve [problem]?" -> githubSearchRepositories
- "What changed when [library] adopted [pattern]?" -> githubSearchPullRequests
- "What's the directory structure of [project]?" -> githubViewRepoStructure
Present the plan to the user before executing:

```markdown
## Research Plan

1. [Question] -> [tool] on [target repo/org]
2. [Question] -> [tool] on [target repo/org]
...

Proceed?
```
### Phase 3: Execute Research
Use octocode MCP tools via subagents for parallel investigation.
Rules:

- ALWAYS use the Agent tool with `subagent_type="Explore"` for octocode MCP calls (keeps main context clean)
- Independent research domains -> parallel agents
- Sequential dependencies -> sequential agents
- Every tool call MUST include `mainResearchGoal`, `researchGoal`, and `reasoning`
- Follow hints in tool responses
- Collect file:line references for every finding
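The required-context rule can be enforced before dispatching a call. A sketch, assuming tool parameters arrive as a plain dict; the helper name and validation logic are illustrative — only the three field names come from the rule above:

```python
REQUIRED_CONTEXT = ("mainResearchGoal", "researchGoal", "reasoning")

def missing_context(params: dict) -> list[str]:
    """Return the required context fields that are absent or empty.

    A non-empty result means the tool call should be rewritten
    before it is sent to octocode.
    """
    return [field for field in REQUIRED_CONTEXT if not params.get(field)]
```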
Tool selection guide:
| Research Need | Tool | When |
|---|---|---|
| Find repos | githubSearchRepositories | Discovering projects, comparing solutions |
| Find code patterns | githubSearchCode | Locating implementations, API usage |
| Read source | githubGetFileContent | Understanding implementation details |
| Explore structure | githubViewRepoStructure | Understanding project layout |
| Find PR history | githubSearchPullRequests | Understanding why decisions were made |
| Find packages | packageSearch | Looking up npm/pypi packages |
Research depth:
- For each research question, aim for 2-3 concrete code references
- Read actual implementations, not just READMEs
- Look at PRs for context on why patterns were adopted
- Compare at least 2 approaches when evaluating alternatives
### Phase 4: Synthesize RFC
Structure the output using the RFC template below. Every claim must link to evidence found in Phase 3.
RFC Document Structure:

```markdown
# RFC: [Title]

**Status:** Draft
**Date:** [today]
**Author:** [user or team]

## 1. Summary

[2-3 sentence overview of what this RFC proposes]

## 2. Problem

[What problem exists today? Why does it matter?]
[Include metrics, pain points, or user feedback if available]

## 3. Context & Prior Art

[What exists today in the ecosystem?]
[How do other projects/teams solve this?]

For each prior art finding:

- **[Project/Library]**: [How they solve it]
  - Evidence: [GitHub URL with line numbers]
  - Tradeoffs: [What they gain/lose]

## 4. Proposal

[Detailed description of the proposed solution]
[Include code examples, API sketches, or architecture diagrams]

### 4.1 Design Decisions

[Key decisions and their rationale, backed by research]

| Decision | Choice | Rationale | Evidence |
|----------|--------|-----------|----------|
| [What] | [Chosen approach] | [Why] | [link] |

### 4.2 Implementation Outline

[High-level steps to implement]

## 5. Alternatives Considered

For each alternative:

### 5.N [Alternative Name]

- **Description:** [What this approach does]
- **Pros:** [Advantages]
- **Cons:** [Disadvantages]
- **Evidence:** [Links to repos/code using this approach]
- **Why not:** [Specific reason for rejecting]

## 6. Risks & Mitigations

| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| [Risk] | Low/Med/High | Low/Med/High | [How to address] |

## 7. Open Questions

[Unresolved items that need further discussion or decision]

## 8. References

[All GitHub URLs, docs, and sources cited in this RFC]
```
### Phase 5: Roast & Distill (Subagent)
Why a subagent: The main agent is biased — it spent tokens researching, has sunk-cost attachment to findings, and sees every detail as important. A fresh subagent receives only the RFC text with zero research context. Its clean context window IS the objectivity. It reads the RFC the way a reviewer would: cold.
Spawn a subagent with the following prompt (pass the full RFC markdown as input):

```markdown
You are a senior staff engineer reviewing an RFC you've never seen before. You have 5 minutes. Your job: cut this RFC down to only what's needed to make a decision, then return the edited version.

## Kill on sight

- Obvious statements ("We need good performance", "Security is important")
- Generic risks that apply to any project ("Team needs to learn new tool", "Migration takes time")
- Filler prior art that doesn't inform the decision — if removing it doesn't change the recommendation, cut it
- Hedging language ("It might be worth considering", "One could argue") — take a position or delete
- Redundant alternatives where "Why not" is obvious from the proposal
- Open questions that are just rephrased risks

## Compress

- Prior Art: max 3-4 entries that directly shaped the proposal
- Alternatives: only 1-2 strongest contenders a reviewer might push back with
- Risks: max 3 rows. Low-likelihood AND low-impact = cut
- Implementation: bullet points only, max 5-7 steps
- Design decisions: every row needs an evidence link. No link = cut or flag

## Shorten without losing substance

- Rewrite paragraphs as single sentences
- Replace prose with tables or bullet lists
- Merge sections that say the same thing from different angles
- Inline tiny sections into their parent heading
- Code snippets over prose for behavior ("returns X when Y" → show code)
- Cut transitions ("Now let's look at...", "As mentioned above...")

## Targets

- Summary: exactly 2-3 sentences
- Problem: max 1 paragraph (3 sentences to feel the pain)
- Total: under 500 lines of markdown

## Output

Return the complete edited RFC markdown. Add a brief "## Roast Notes" section at the end listing what you cut and why (this section will be removed before delivery — it's for the main agent to review your cuts).

If any section makes you think "obviously" — that section shouldn't exist.
```
After the subagent returns:
- Review the roast notes — if any cut removed genuinely important context, restore it
- Remove the "Roast Notes" section
- The result is the final RFC
### Phase 6: Deliver
- Save the RFC to `docs/rfcs/NNNN-[slug].md`
- Present a summary in the conversation with key findings and the file path
## Research Quality Gates
Before completing each research question, verify:
- At least 2 concrete code references (file:line or GitHub URL)
- Actual source code was read, not just repo descriptions
- Both positive evidence (this works) and negative evidence (this doesn't) considered
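The "concrete code references" gate can be made mechanical: a GitHub blob URL only counts as file:line evidence if it carries a line anchor. A small illustrative check (the helper name is hypothetical):

```python
import re

def pins_lines(url: str) -> bool:
    """True if a GitHub URL ends with a line anchor like #L10 or #L10-L25.

    URLs without an anchor point at a whole file or repo and do not
    satisfy the file:line evidence requirement.
    """
    return re.search(r"#L\d+(?:-L\d+)?$", url) is not None
```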
Before completing the RFC, verify:
- Every claim in "Prior Art" has a GitHub link
- "Alternatives Considered" has real-world examples, not hypotheticals
- "Risks" are grounded in evidence, not speculation
- "Open Questions" are genuine unknowns, not lazy gaps
## Troubleshooting
No results from octocode:
- Broaden search terms, try synonyms
- Search by topic instead of keyword
- Try a different owner/repo combination
Too many results:
- Add `owner` and `repo` filters
- Use `path` filter to narrow to specific directories
- Filter by `stars` for quality signal
Can't find prior art:
- This is a valid finding - document it as "novel approach" in the RFC
- Search for the problem being solved, not the specific solution
- Look at adjacent ecosystems (e.g., if no React solution, check Vue/Angular)
## References
- For the full RFC template, see references/rfc-template.md