paper-analyzer
Paper Analyzer
Overview
Perform deep analysis of a specific paper, generating structured notes that cover claims, methodology, experiment evaluation, strengths and limitations, and links to adjacent work.
Workflow
Step 1: Identify Paper
Accept any of the following as input: a bare arXiv ID (e.g., "2402.12345"), a prefixed ID ("arXiv:2402.12345"), a paper title, or a local file path.
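A minimal sketch (not part of the bundled scripts) of how these input forms could be normalized to a bare arXiv ID before fetching:

from typing import Optional
import re

def normalize_arxiv_id(raw: str) -> Optional[str]:
    """Return a bare new-style arXiv ID such as '2402.12345', or None if none is found."""
    # Accepts bare IDs, 'arXiv:'-prefixed IDs, and arxiv.org URLs; titles and
    # local file paths fall through and return None for separate handling.
    match = re.search(r"(\d{4}\.\d{4,5})(?:v\d+)?", raw)
    return match.group(1) if match else None

print(normalize_arxiv_id("arXiv:2402.12345"))  # 2402.12345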
Step 2: Fetch Paper Content
curl -L "https://arxiv.org/pdf/[PAPER_ID]" -o /tmp/paper_analysis/[PAPER_ID].pdf
curl -L "https://arxiv.org/e-print/[PAPER_ID]" -o /tmp/paper_analysis/[PAPER_ID].tar.gz
curl -s "https://arxiv.org/abs/[PAPER_ID]" > /tmp/paper_analysis/arxiv_page.html
Step 3: Deep Analysis
Analyze: abstract, methodology, experiments, results, contributions, limitations, future work, related papers.
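One possible way to collect the findings before writing the note; the field names simply mirror the aspects above plus the 0-10 score used in Step 5, and are not an interface required by the bundled scripts:

analysis = {
    "claims": [],          # main claims stated in the abstract and introduction
    "methodology": "",     # how the method works and its key design choices
    "experiments": "",     # datasets, baselines, and evaluation protocol
    "results": "",         # headline numbers and what they actually support
    "contributions": [],   # contributions in the authors' own words
    "limitations": [],     # acknowledged and unacknowledged weaknesses
    "future_work": [],     # directions the paper itself suggests
    "related_papers": [],  # adjacent work to link in the knowledge graph
    "score": 0,            # overall 0-10 evaluation passed to Step 5
}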
Step 4: Generate Note
python scripts/generate_note.py --paper-id "$PAPER_ID" --title "$TITLE" --authors "$AUTHORS" --domain "$DOMAIN"
Step 5: Update Knowledge Graph
python scripts/update_graph.py --paper-id "$PAPER_ID" --title "$TITLE" --domain "$DOMAIN" --score $SCORE
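The schema maintained by scripts/update_graph.py is not documented here; purely as an illustration, one node entry in a paper relationship graph might look like this (all values below are hypothetical):

import json

entry = {
    "paper_id": "2402.12345",       # hypothetical example ID
    "title": "Example Paper Title",
    "domain": "example-domain",
    "score": 8,
    "related": ["2401.00001"],      # IDs of adjacent papers found in Step 3
}
print(json.dumps(entry, indent=2))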
Scripts
- scripts/generate_note.py - Generate structured note template
- scripts/update_graph.py - Update paper relationship graph
Note Structure
The generated note includes: core info, abstract (EN/CN), research background, method overview with architecture figures, experiment results with tables, deep analysis, related paper comparison, tech roadmap positioning, future work, and comprehensive evaluation (0-10 scoring).
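A hypothetical sketch of that section order as the template script might encode it; the actual template shipped with the skill may differ:

NOTE_SECTIONS = [
    "Core Info",
    "Abstract (EN / CN)",
    "Research Background",
    "Method Overview",           # with architecture figures
    "Experiment Results",        # with tables
    "Deep Analysis",
    "Related Paper Comparison",
    "Tech Roadmap Positioning",
    "Future Work",
    "Comprehensive Evaluation",  # 0-10 scoring
]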
Dependencies
- Python 3.8+, PyYAML, requests
- Network access (arXiv)
Based on evil-read-arxiv — an automated paper reading workflow. MIT License.
More from boom5426/nature-paper-skills
- academic-presentations
- paper-bootstrap - Use when starting a new manuscript project or cleaning up an existing paper directory under `/data/boom/Papers` and you need a standard structure, active source files, project memory, and venue defaults before deeper writing begins.
- results-analysis - Use when analyzing experimental results, validating comparisons, generating paper-ready results text, or turning model-evaluation outputs into figures, tables, and defensible claims.
- submission-audit - Use when a manuscript is close to submission or resubmission and you need a preflight audit for claim support, figure-panel coverage, legend sync, methods references, terminology stability, and venue-facing risks.
- paper-reviewer - Use when acting as a journal or grant reviewer and writing formal reviewer-side evaluations focused on methodology, statistics, reporting standards, reproducibility, and constructive feedback.
- figure-planner - Use when designing, restructuring, or auditing manuscript figures and you need to define one main claim per figure, assign panel roles, align legends with the text, or decide what belongs in main figures versus supplement.