# Pre-Submission Report
Aggregates all quality checks into one dated report. Run before submitting to a journal/conference or sharing with collaborators.
## When to Use
- Before submitting a paper to a venue
- Before sharing a draft with supervisors or co-authors
- When the user says "pre-submission check", "is this ready?", "run everything"
## Input

- A `.tex` file path, or auto-detect `paper/main.tex` in the current project
## Critical Rule

Python: always use `uv run python` or `uv pip install`. Never bare `python`, `python3`, `pip`, or `pip3`. Include this rule in any sub-agent prompts.
## Steps

### 1. Locate the Paper

If no argument is provided, search for the main `.tex` file:

- Check `paper/main.tex`
- Check `paper/*.tex` for a file containing `\begin{document}`
- Ask the user if ambiguous
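The auto-detect logic above can be sketched as a small shell function. This is an illustrative sketch, not part of the skill itself; the function name `find_main_tex` and the `paper/` layout are assumptions taken from the conventions described above.

```shell
# Sketch of the auto-detect logic described above.
find_main_tex() {
  # Prefer the conventional location.
  if [ -f paper/main.tex ]; then
    printf '%s\n' paper/main.tex
    return 0
  fi
  # Otherwise take the first paper/*.tex that contains \begin{document}.
  grep -l -F '\begin{document}' paper/*.tex 2>/dev/null | head -n 1
}
```

If more than one file matches, the skill asks the user rather than guessing; the sketch simply returns the first match.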
### 2. Integrity Gate (hard gate — must pass before quality checks)
Run these checks first. If any fail, stop and report — do not proceed to quality checks.
- Placeholder scan — grep the `.tex` file(s) for `TODO`, `FIXME`, `XXX`, `TBD`, `[INSERT`, `PLACEHOLDER`, `Lorem ipsum`. Any match is a FAIL.
- Citation integrity — invoke `/bib-validate` in verify mode. Every `\cite{}` key must resolve to a `.bib` entry. Any missing key is a FAIL.
- Section completeness — check that all standard sections exist and are non-empty (Abstract, Introduction, and at least one body section before Conclusion/References). An empty or missing section is a FAIL.
- Broken references — grep for `??` in the compiled PDF output or `.log` file (undefined `\ref{}` or `\cite{}`). Any `??` in output is a FAIL.
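The two grep-based gates can be sketched as shell functions. The function names and `FAIL` messages are illustrative; the placeholder pattern and the literal `??` search come from the checks above.

```shell
# Placeholder scan: any match in the given .tex files is a FAIL (non-zero return).
placeholder_scan() {
  if grep -n -E 'TODO|FIXME|XXX|TBD|\[INSERT|PLACEHOLDER|Lorem ipsum' "$@"; then
    echo 'INTEGRITY GATE: FAIL (placeholders found)'
    return 1
  fi
  return 0
}

# Broken-reference scan: any literal ?? in the given output/log file is a FAIL.
broken_ref_scan() {
  if grep -n -F '??' "$1"; then
    echo 'INTEGRITY GATE: FAIL (undefined references)'
    return 1
  fi
  return 0
}
```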
If any check fails:
```
INTEGRITY GATE: FAIL

Blockers (must fix before quality checks):
- [ ] 3 TODO placeholders found (lines 47, 112, 289)
- [ ] 2 undefined references (\ref{fig:missing}, \cite{nonexistent2024})
- [ ] Abstract section is empty

Fix these and re-run /pre-submission-report.
```
If all pass: proceed to Step 3.
### 3. Run Quality Checks
Run these sequentially (each depends on a clean state):
- Compilation — invoke `/latex` on the main `.tex` file. Record pass/fail and any remaining warnings.
- Citation audit — invoke `/bib-validate` (full mode — deep verify). Record missing, unused, and suspect keys.
- Adversarial review — launch the `paper-critic` agent (via the Task tool). Capture the CRITIC-REPORT.md score and findings.
### 4. Aggregate Report
Save the report to `log/audits/quality-reports/YYYY-MM-DD_<project-name>.md`:
```markdown
# Pre-Submission Quality Report

**Project:** <project name>
**Date:** YYYY-MM-DD
**File:** <path to main.tex>
**Target:** <venue from project CLAUDE.md, or "not specified">

---

## Integrity Gate: PASS / FAIL

- **Placeholders:** 0 found
- **Citation integrity:** all keys resolved
- **Section completeness:** all sections present
- **Broken references:** none

---

## Overall Score: XX/100 — [Verdict]

Verdict uses the quality scoring framework:

- 90-100: Publication-ready
- 80-89: Minor revisions needed
- 70-79: Significant revisions needed
- Below 70: Not ready

---

## Compilation

- **Status:** PASS / FAIL
- **Warnings:** <count>
- **Details:** <brief summary of any issues>

## Citations

- **Missing keys:** <count> — <list>
- **Unused keys:** <count> — <list>
- **Suspect entries:** <count> — <list>

## Adversarial Review

- **Score:** XX/100
- **Key findings:**
  - <finding 1>
  - <finding 2>
  - ...

## Research Quality Score

Load `skills/shared/research-quality-rubric.md` and report the weighted aggregate (X.X / 5.0) with verdict.

## Remaining Issues

| # | Severity | Category | Issue |
|---|----------|----------|-------|
| 1 | High/Medium/Low | Compilation/Citation/Content | <description> |

## Recommendation

**[Submit / Revise / Not ready]**
<1-2 sentence summary of what needs to happen before submission>
```
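The dated report path can be derived in shell; `date +%F` emits `YYYY-MM-DD`. The `project` value here is a stand-in for the real project name.

```shell
# Build the dated report path; date +%F emits YYYY-MM-DD.
project="my-paper"   # stand-in; substitute the actual project name
report="log/audits/quality-reports/$(date +%F)_${project}.md"
mkdir -p "$(dirname "$report")"
echo "$report"
```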
### 5. Present Summary
Display the report path and the summary table to the user. If the recommendation is "Submit", congratulate the user. If "Revise", list the top 3 issues to fix first.
## Error Handling
- If compilation fails after `/latex`, still run the remaining checks. Mark compilation as FAIL in the report.
- If the `paper-critic` agent fails, note it in the report and base the overall score on compilation + citations only.
- Always produce the report file, even if some checks failed.
## Integration
| Skill/Agent | Role in this workflow |
|---|---|
| `/latex` | Compilation + auto-fix |
| `/bib-validate` | Citation cross-reference |
| `paper-critic` agent | Adversarial content review |
| `quality-scoring.md` | Verdict thresholds |