Obsidian Experiment Log
Use this skill whenever project work changes the experimental state.
Role in the workflow
This is a supporting skill under obsidian-project-memory.
It should help maintain canonical experiment and result notes, not create note sprawl.
Default outputs
- the relevant canonical note in Experiments/
- the relevant canonical note in Results/, if a durable finding exists
- links from today's Daily/ note
- relevant hub or plan references, only when project state materially changes
Main rules
- Prefer updating an existing experiment note over creating a sibling note for the same experiment line.
- Prefer updating an existing result note over creating a parallel result page for the same durable finding.
- Raw logs, metric dumps, and temporary analysis fragments should usually stay in Daily/ until they are interpreted.
- A result note should exist only when the outcome is stable enough to reference later.
Minimum experiment sections
- Goal / hypothesis
- Code or config entrypoint
- Dataset / split
- Metrics
- Status (planned, running, done, failed)
- Findings / notes
- Next step
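A minimal experiment note covering the sections above might look like the following sketch. The filename, entrypoint command, and field values are illustrative placeholders, not part of the skill's specification:

```markdown
# Experiments/exp-012-lr-sweep.md  <!-- illustrative filename -->

## Goal / hypothesis
<what this experiment line is supposed to show or falsify>

## Code or config entrypoint
`train.py --config configs/exp-012.yaml`  <!-- illustrative path -->

## Dataset / split
<dataset name; train/val/test split used>

## Metrics
<primary metric, plus any secondary metrics>

## Status
running  <!-- one of: planned, running, done, failed -->

## Findings / notes
<interpreted observations only; raw logs stay in Daily/>

## Next step
<one concrete next action>
```

Later runs in the same experiment line update this note (new status, appended findings) rather than spawning sibling notes.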
Minimum result sections
- Linked experiment
- Main observation
- Key numbers
- Evidence
- Interpretation
- Decision: keep / iterate / discard
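A matching result note, again as an illustrative sketch rather than a prescribed layout, could be:

```markdown
# Results/lr-sweep-finding.md  <!-- illustrative filename -->

## Linked experiment
[[exp-012-lr-sweep]]

## Main observation
<one-sentence statement of the durable finding>

## Key numbers
<the few numbers the finding rests on>

## Evidence
<pointer to the run, table, or figure backing the numbers>

## Interpretation
<why the numbers support the observation>

## Decision
keep  <!-- keep / iterate / discard -->
```

Because the note exists only once the outcome is stable, downstream notes can cite it without re-checking the raw logs.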
Linking rule
Link experiments and results directly to each other, and link both back to 00-Hub.md, 01-Plan.md, or Daily/ only when those references improve the main working surface.
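In practice the linking rule means a small, deliberate set of wikilinks; the note names below are hypothetical:

```markdown
<!-- In Experiments/exp-012-lr-sweep.md -->
Result: [[lr-sweep-finding]]

<!-- In Results/lr-sweep-finding.md -->
Source experiment: [[exp-012-lr-sweep]]
Back-link: [[00-Hub]]  <!-- only because this finding changed project state -->
```

The experiment-result pair is always linked both ways; the hub, plan, and daily links are added selectively so the hub stays a working surface rather than an index of everything.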
Research path handoff
Treat experiment notes as the bridge between Papers/ and Results/:
- paper-derived hypotheses, baselines, and ablations should land here,
- stable findings should be promoted from here into Results/,
- when a result becomes claim-worthy, update Writing/ rather than leaving the chain unfinished.