# Pre-Submission Report

Aggregates all quality checks into one dated report. Run before submitting to a journal/conference or sharing with collaborators.
## When to Use
- Before submitting a paper to a venue
- Before sharing a draft with supervisors or co-authors
- When the user says "pre-submission check", "is this ready?", "run everything"
## Input

- A `.tex` file path, or auto-detect `paper/main.tex` in the current project
## Critical Rule

Python: always use `uv run python` or `uv pip install`. Never bare `python`, `python3`, `pip`, or `pip3`. Include this rule in any sub-agent prompts.
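As a sketch, the rule amounts to prefixing every interpreter invocation with `uv run` (the script path below is hypothetical — any entry point works the same way):

```python
import shlex

def uv_command(script: str, *args: str) -> str:
    """Build a command line that honors the uv-only rule.

    A bare "python script.py" or "pip install ..." would violate the rule;
    everything goes through uv instead.
    """
    parts = ["uv", "run", "python", script, *args]
    return " ".join(shlex.quote(p) for p in parts)

print(uv_command("scripts/check_refs.py", "--strict"))
# → uv run python scripts/check_refs.py --strict
```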
## Steps
### 1. Locate the Paper

If no argument is provided, search for the main `.tex` file:

- Check `paper/main.tex`
- Check `paper/*.tex` for a file containing `\begin{document}`
- Ask the user if ambiguous
### 2. Run Quality Checks

Run these sequentially (each depends on a clean state):

- **Compilation** — invoke `/latex-autofix` on the main `.tex` file. Record pass/fail and any remaining warnings.
- **Citation audit** — invoke `/bib-validate`. Record missing, unused, and suspect keys.
- **Adversarial review** — launch the `paper-critic` agent (via the Task tool). Capture the CRITIC-REPORT.md score and findings.
### 3. Aggregate Report
Save to `audits/quality-reports/YYYY-MM-DD_<project-name>.md`:

```markdown
# Pre-Submission Quality Report

**Project:** <project name>
**Date:** YYYY-MM-DD
**File:** <path to main.tex>
**Target:** <venue from project CLAUDE.md, or "not specified">

---

## Overall Score: XX/100 — [Verdict]

Verdict uses the quality scoring framework:

- 90-100: Publication-ready
- 80-89: Minor revisions needed
- 70-79: Significant revisions needed
- Below 70: Not ready

---

## Compilation

- **Status:** PASS / FAIL
- **Warnings:** <count>
- **Details:** <brief summary of any issues>

## Citations

- **Missing keys:** <count> — <list>
- **Unused keys:** <count> — <list>
- **Suspect entries:** <count> — <list>

## Adversarial Review

- **Score:** XX/100
- **Key findings:**
  - <finding 1>
  - <finding 2>
  - ...

## Research Quality Score

Load `skills/shared/research-quality-rubric.md` and report the weighted aggregate (X.X / 5.0) with verdict.

## Remaining Issues

| # | Severity | Category | Issue |
|---|----------|----------|-------|
| 1 | High/Medium/Low | Compilation/Citation/Content | <description> |

## Recommendation

**[Submit / Revise / Not ready]**

<1-2 sentence summary of what needs to happen before submission>
```
### 4. Present Summary

Display the report path and the summary table to the user. If the recommendation is "Submit", congratulate the user. If "Revise", list the top three issues to fix first.
## Error Handling

- If compilation fails after `/latex-autofix`, still run the remaining checks. Mark compilation as FAIL in the report.
- If the `paper-critic` agent fails, note it in the report and base the overall score on compilation + citations only.
- Always produce the report file, even if some checks failed.
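The "always produce the report" guarantee can be sketched with a `finally` block (the check names and callables here are placeholders, not part of the skill):

```python
def run_and_report(checks, write_report):
    """Run all checks in order; write the report even if something raises.

    `checks` is a list of (name, callable) pairs and `write_report`
    receives the results dict — both are illustrative stand-ins for the
    slash-command and agent invocations described above.
    """
    results = {}
    try:
        for name, check in checks:
            try:
                check()
                results[name] = "PASS"
            except Exception as exc:
                results[name] = f"FAIL: {exc}"
    finally:
        write_report(results)  # the report file is produced no matter what
    return results
```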
## Integration

| Skill/Agent | Role in this workflow |
|---|---|
| `/latex-autofix` | Compilation + auto-fix |
| `/bib-validate` | Citation cross-reference |
| `paper-critic` agent | Adversarial content review |
| `quality-scoring.md` | Verdict thresholds |
---

Repository: flonat/claude-research