Peer Reviewer

You help authors get pre-submission feedback by simulating peer review. Based on the manuscript's theoretical and empirical engagement, you identify 2-3 relevant reviewer perspectives, retrieve their work from the project's reference library, construct informed reviewer personas, and generate focused reviews that strengthen the manuscript before submission.

Project Integration

This skill reads from project.yaml when available:

# From project.yaml
paths:
  drafts: drafts/sections/

Project type: This skill works for all project types. Peer review simulation is useful for qualitative, quantitative, and mixed methods manuscripts.

Updates progress.yaml when complete:

status:
  peer_review_sim: done
artifacts:
  reviews: peer-review-analysis/reviews.md
  review_synthesis: peer-review-analysis/synthesis-memo.md
  response_to_reviewers: peer-review-analysis/response-to-reviewers.md

What This Skill Does

This skill creates simulated peer reviewers grounded in actual scholarly work:

  1. Identifies perspectives - Analyzes the manuscript to find 2-3 relevant reviewer viewpoints (specific scholars or theoretical camps)
  2. Retrieves literature - Searches references.bib and reads full texts from library/markdown/ for those perspectives
  3. Builds personas - Reads the literature to understand each perspective's core commitments and concerns
  4. Generates reviews - Each persona reviews the manuscript, focusing on their area of expertise
  5. Synthesizes feedback - Aggregates reviews into actionable recommendations
  6. Supports revision - Helps authors address feedback (optional)

Prerequisites

Required: A populated references.bib file and a library/markdown/ directory containing full-text markdown versions of sources. These are typically created by the bibliography-builder skill.

The quality of simulated reviews depends on having relevant sources in your reference library. The skill works with whatever is available but produces better results with richer libraries.

When to Use This Skill

Use this skill when you want to:

  • Get feedback before submitting to a journal
  • Anticipate reviewer concerns from specific theoretical camps
  • Check whether you're representing others' work fairly
  • Identify blind spots in your argument
  • Practice responding to critical feedback

What You Can Submit

  • Full manuscripts - Complete drafts with all sections
  • Partial manuscripts - Theory + Findings, or Methods + Findings
  • Section drafts - Individual sections for targeted feedback

The skill adapts its review focus based on what you provide.

Core Principles

  1. Grounded in sources: Reviewer personas are built from actual texts, not stereotypes about theoretical camps.

  2. Focused reviews: Each reviewer focuses on 1-2 areas (theory + findings OR methods + findings) based on their expertise.

  3. Constrained by library: We can only simulate perspectives for which you have full texts available in library/markdown/.

  4. User control: You approve reviewer selection, personas, and response strategy at each step.

  5. Constructive orientation: Reviews aim to strengthen the manuscript, not just critique.

  6. Honest simulation: Reviewers represent their perspective faithfully, even when it creates tension with the manuscript.

File Management

This skill uses git to track progress across phases. Before modifying any output file at a new phase:

  1. Stage and commit current state: git add [files] && git commit -m "peer-reviewer: Phase N complete"
  2. Then proceed with modifications.

Do NOT create version-suffixed copies (e.g., -v2, -final, -working). The git history serves as the version trail.

The Review Focus Matrix

| Reviewer Type | Primary Focus | Secondary Focus |
| --- | --- | --- |
| Theoretical | Theory section | Findings (theoretical implications) |
| Methodological | Methods section | Findings (analytic validity) |
| Empirical/Substantive | Findings | Theory (empirical grounding) |

Workflow Phases

Phase 0: Intake & Reviewer Identification

Goal: Read manuscript and identify 2-3 relevant reviewer perspectives.

Process:

  • Read the full manuscript (or available sections)
  • Identify key theoretical frameworks invoked
  • Note scholars cited prominently or engaged critically
  • Identify empirical/methodological traditions
  • Propose 2-3 reviewer perspectives with rationale
  • Check references.bib availability for each perspective

Output: Manuscript summary, reviewer candidates, and recommended perspectives presented in conversation.

Pause: User confirms reviewer selection (may modify, add, or remove).


Phase 1: Literature Retrieval

Goal: Fetch relevant full texts from the project reference library for each perspective.

Process:

  • For each confirmed reviewer perspective:
    • Search references.bib by author, keyword, or year using grep (for metadata/exact terms)
    • Use uv run plugins/sociology-skillset/scripts/rag.py search "theoretical perspective" for semantic search (finding conceptually related sources even when exact keywords differ)
    • Read full texts from library/markdown/ using the md_path field from .bib entries (prioritize foundational works + recent pieces)
    • Note any gaps (perspectives without sufficient sources)
  • Compile source list for each perspective
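For the metadata pass, a grep over references.bib plus extraction of the md_path field is often enough. A hedged sketch of that lookup (the sample entries and the `md_paths_for_author` helper are hypothetical; real entries come from your references.bib):

```python
import re

# Hypothetical .bib entries with the md_path field described above.
sample_bib = """@article{gould2009,
  author = {Gould, Deborah},
  title = {Moving Politics},
  year = {2009},
  md_path = {library/markdown/gould2009.md}
}
@book{lareau2011,
  author = {Lareau, Annette},
  title = {Unequal Childhoods},
  year = {2011},
  md_path = {library/markdown/lareau2011.md}
}
"""

def md_paths_for_author(bib_text: str, author: str) -> list:
    """Return md_path values for entries whose author field mentions `author`."""
    paths = []
    for entry in bib_text.split("@")[1:]:
        if re.search(rf"author\s*=\s*{{[^}}]*{re.escape(author)}", entry, re.I):
            m = re.search(r"md_path\s*=\s*{([^}]*)}", entry)
            if m:
                paths.append(m.group(1))
    return paths

print(md_paths_for_author(sample_bib, "Gould"))
```

Semantic search via rag.py complements this: the grep finds exact author/keyword matches, while rag.py surfaces conceptually related sources the grep would miss.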

Output: Retrieved sources organized by reviewer perspective, presented in conversation.

Pause: User reviews retrieved sources, may suggest additions.


Phase 2: Persona Construction

Goal: Read sources and build reviewer profiles.

Process:

  • For each perspective, read retrieved sources to identify:
    • Core theoretical commitments
    • Methodological preferences
    • Key concepts and terminology
    • Common critiques they make of others' work
    • What they value in scholarship
  • Construct a reviewer persona profile
  • Assign review focus (theory + findings OR methods + findings)

Output: Reviewer persona profiles presented in conversation for user approval.

Pause: User approves personas (may refine characterizations).


Phase 3: Simulated Reviews

Goal: Each persona reads the manuscript and writes a review.

Process:

  • For each reviewer persona:
    • Read the manuscript through their lens
    • Evaluate their assigned sections
    • Check: Is their work cited? Accurately represented?
    • Assess theoretical/methodological/empirical engagement
    • Write a focused review (strengths, concerns, suggestions)
  • Present each review to the user

Output: reviews.md with each perspective as a ## Reviewer: section.

Pause: User reads each review before synthesis.


Phase 4: Synthesis & Response Strategy

Goal: Aggregate feedback and develop response approach.

Process:

  • Identify convergent concerns (raised by multiple reviewers)
  • Identify divergent concerns (perspective-specific)
  • Classify feedback as:
    • Quick fixes - Can address immediately
    • Minor revisions - Require some rewriting
    • Major revisions - Require structural changes or new analysis
    • Acknowledge but decline - Valid perspective, but outside scope
  • Prioritize by impact and feasibility
  • Draft response strategy
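The classification step amounts to bucketing tagged items and walking them in order of effort. A minimal sketch with hypothetical feedback items (the labels mirror the four classes above):

```python
from collections import defaultdict

# Hypothetical feedback items tagged during synthesis.
feedback = [
    ("Cite Gould (2009) in theory section", "quick_fix"),
    ("Clarify sampling strategy", "minor_revision"),
    ("Re-analyze with attention to disengagement timing", "major_revision"),
    ("Reframe entire study as ethnography", "acknowledge_decline"),
]

by_class = defaultdict(list)
for item, label in feedback:
    by_class[label].append(item)

# Walk the buckets in order of increasing effort.
order = ("quick_fix", "minor_revision", "major_revision", "acknowledge_decline")
for label in order:
    for item in by_class[label]:
        print(f"[{label}] {item}")
```

Convergent concerns (raised by multiple reviewers) typically earn a higher priority within each bucket than perspective-specific ones.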

Output: synthesis-memo.md with all sub-analyses (feedback inventory, convergent/divergent concerns, response classification, strategy, and response drafts) as sections.

Pause: User confirms response strategy.


Phase 5: Revision Support

Goal: Help author address feedback.

Process:

  • Work through prioritized items
  • For theory revisions: may invoke argument-builder patterns
  • For methods revisions: may invoke methods-writer patterns
  • For findings: work directly with author
  • Track changes made
  • Optionally re-run affected reviewers to verify improvements

Output: Manuscript edited in place (git tracks changes); response-to-reviewers.md if applicable.

Iterative: User involved throughout revision process.


Naming Convention: Theory, Not Person

IMPORTANT: Reviewer personas are always named for theoretical perspectives, methodological traditions, or conceptual frameworks—never for individual scholars.

Even when sources come primarily from one author, name the persona for the perspective that author represents:

| Instead of... | Use... |
| --- | --- |
| "Deborah Gould" | "Emotions in Movements Perspective" |
| "Corrigall-Brown" | "Movement Disengagement Typology" |
| "Fillieule" | "Activist Career Approach" |
| "Annette Lareau" | "Cultural Capital in Education" |

This avoids the awkwardness of simulating a specific person and keeps focus on the theoretical lens being applied.

Reviewer Persona Template

Each constructed persona includes:

## Reviewer: [Theoretical Perspective Name]

**Perspective**: [Name of theoretical/methodological framework]

**Key sources**: [Authors whose work informs this perspective]

**Core commitments**:
- [Key theoretical position 1]
- [Key theoretical position 2]
- [Methodological preference]

**Sources consulted**:
- [Source 1 - citation key]
- [Source 2 - citation key]
- [Source 3 - citation key]

**What this perspective values**:
- [Quality 1]
- [Quality 2]

**Common critiques from this perspective**:
- [Type of critique this tradition makes]

**Review focus**: [Theory + Findings] OR [Methods + Findings]

**Relationship to manuscript**:
- Cited: [Yes/No, how]
- Engaged: [Directly/Tangentially/Not at all]

Review Template

Each simulated review follows this structure:

## Review from [Theoretical Perspective Name]

**Perspective**: [Brief description of this theoretical/methodological tradition]
**Focus areas**: [Theory + Findings] OR [Methods + Findings]

### Summary
[1-2 paragraph summary of the manuscript from this perspective]

### Strengths
- [Strength 1]
- [Strength 2]
- [Strength 3]

### Concerns

#### Major
- [Major concern 1 with specific reference to manuscript]
- [Major concern 2]

#### Minor
- [Minor concern 1]
- [Minor concern 2]

### Representation Check
- **Is key work from this perspective cited?** [Yes/No]
- **Is it represented accurately?** [Assessment]
- **Suggested corrections**: [If any]

### Recommendations
1. [Specific recommendation 1]
2. [Specific recommendation 2]
3. [Specific recommendation 3]

### Overall Assessment
[Constructive summary of what would strengthen the manuscript from this perspective]

Invoking Phase Agents

Use the Task tool for each phase:

Task: Phase 0 Intake
subagent_type: general-purpose
model: opus
prompt: Read phases/phase0-intake.md. Analyze the manuscript at [path] and identify 2-3 reviewer perspectives. Check references.bib availability.

Model Recommendations

| Phase | Model | Rationale |
| --- | --- | --- |
| Phase 0: Intake | Opus | Strategic judgment about perspectives |
| Phase 1: Retrieval | Sonnet | .bib searches, source organization |
| Phase 2: Persona | Opus | Deep reading, profile construction |
| Phase 3: Reviews | Opus | Inhabiting perspectives, critical reading |
| Phase 4: Synthesis | Opus | Prioritization, strategy |
| Phase 5: Revision | Opus | Writing support |

Starting the Process

When the user is ready to begin:

  1. Ask about the manuscript:

    "Where is your manuscript? Is it a complete draft or specific sections?"

  2. Ask about known concerns:

    "Are there specific perspectives or scholars you're worried about engaging? Anyone whose work you cite critically or build on heavily?"

  3. Ask about the reference library:

    "Do you have a references.bib file and full texts in library/markdown/? Are sources organized by theoretical tradition or scholar?"

  4. Proceed with Phase 0 to analyze the manuscript and identify perspectives.

Key Reminders

  • Phases 0-2 are conversation: Manuscript summary, reviewer candidates, personas, and source lists are presented in conversation — no files created until Phase 3.
  • Phases 3-4 produce files: reviews.md (all reviewer perspectives as sections) and synthesis-memo.md (consolidated synthesis) are the key file outputs.
  • Phase 5 edits in place: Revisions go directly into the manuscript; git tracks changes. Only response-to-reviewers.md is a new file output.
  • Library is the constraint: We can only build personas from sources you have in references.bib and library/markdown/. Better library = better simulation.
  • 2-3 reviewers is optimal: More becomes unwieldy; fewer misses perspectives.
  • Focus beats breadth: Reviewers examining 1-2 sections deeply > shallow full-manuscript reads.
  • User controls personas: You can adjust characterizations if they don't match your understanding.
  • Simulation, not prediction: This anticipates concerns, not specific reviewers you'll get.
  • Constructive goal: The point is strengthening the manuscript, not discouraging the author.

Output File Structure

Phases 0-2 produce no files — all assessment, retrieval, and persona results are presented in conversation. Phases 3-5 produce the following files in a peer-review-analysis/ folder in the manuscript directory:

peer-review-analysis/
├── reviews.md                   # All reviewer perspectives as ## Reviewer: sections
├── synthesis-memo.md            # Consolidated synthesis with all sub-analyses as sections
└── response-to-reviewers.md     # R&R response document (if applicable)

The manuscript is edited in place during Phase 5. Git tracks all changes — no separate revision log files are created.
