# /sg-visual-review — Interactive Screenshot Review
Generate and open a self-contained HTML page to visually review all Visual test screenshots, annotate problems, and export re-run manifests.
## Invocations
| Command | Behavior |
|---|---|
| `/sg-visual-review` | Build + start server + tell user to open http://localhost:8888 |

Always: build the review page, start the HTTP server, and give the user the URL. No flags, no options. To stop: `/sg-visual-review-stop`.
## Prerequisites
- `/sg-visual-discover` has been run (manifests exist in `visual-tests/`)
- `/sg-visual-run` has been run at least once (screenshots + report exist in `visual-tests/_results/`)
- No external npm dependencies: the build script uses a built-in YAML parser
## What It Does
### Step 1: Build the Review Page
Run the build script:
```bash
node visual-tests/build-review.mjs --serve
```
This script (see the sketch after this list):
- Reads all YAML test manifests from `visual-tests/`
- Reads `visual-tests/_results/report.md` for PASS/FAIL status per test
- Reads `visual-tests/_regressions.yaml` for failure reasons
- Matches screenshots from `visual-tests/_results/screenshots/`
- Generates a self-contained `visual-tests/_results/review.html` (inline CSS + JS, no dependencies)
- If `monitor-data.json` exists in `_results/`, a "Monitor" tab appears showing the Gantt timeline of the last audit
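For orientation, here is a simplified sketch of that flow in Node. It is not the actual build-review.mjs; the report line format and the template placeholder name are assumptions made for illustration.

```js
// Simplified sketch of the build flow, not the real build-review.mjs.
import { readFileSync, readdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const root = "visual-tests";
const results = join(root, "_results");

// 1. Collect the YAML test manifests (top level only, for brevity).
const manifests = readdirSync(root)
  .filter((f) => f.endsWith(".yaml") && !f.startsWith("_"))
  .map((f) => ({ id: f.replace(/\.yaml$/, ""), raw: readFileSync(join(root, f), "utf8") }));

// 2. Derive PASS/FAIL per test from the run report (line format assumed).
const report = readFileSync(join(results, "report.md"), "utf8");
const statusOf = (id) => (report.includes(`${id}: PASS`) ? "PASS" : "FAIL");

// 3. Inline the data into the template so review.html has no external assets
//    and works over the file:// protocol. Placeholder name assumed.
const template = readFileSync(join(root, "_review-template.html"), "utf8");
const data = manifests.map((m) => ({ ...m, status: statusOf(m.id) }));
writeFileSync(join(results, "review.html"),
  template.replace("/*__DATA__*/", JSON.stringify(data)));
```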
### Step 2: Open in Browser
```bash
open visual-tests/_results/review.html

# Or via agent-browser:
agent-browser open file://$(pwd)/visual-tests/_results/review.html
```
### Step 3: Human Review
The review page provides:
#### Visual Tests tab
- All tests displayed as cards with screenshot thumbnails
- Color-coded badges: PASS (green), FAIL (red), STALE (yellow)
- Priority badges (critical, high, medium, low)
- Sidebar with category filters
- Status filter bar (ALL / PASS / FAIL / STALE)
- Search by test name
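Conceptually, the category filter, status filter, and search box compose into one predicate over the card data. A minimal sketch of that logic; the field names are assumptions:

```js
// Sketch of the combined card filter; field names are assumptions.
function visibleTests(tests, { status = "ALL", category = null, query = "" }) {
  const q = query.trim().toLowerCase();
  return tests.filter((t) =>
    (status === "ALL" || t.status === status) &&  // PASS / FAIL / STALE bar
    (!category || t.category === category) &&     // sidebar category filter
    (!q || t.name.toLowerCase().includes(q))      // search by test name
  );
}
```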
#### Code Audit tab
- Shows bug cards from `audit-results.json` if present in `_results/`
- Filter by severity, category, fix status, and free-text search; CSV export available
#### Monitor tab
- Appears only when `monitor-data.json` exists in `_results/` or an audit is in progress
- Shows a Gantt timeline of the last audit run: per-agent duration, token usage, estimated cost, and bugs found per zone
#### Lightbox
- Click any card to open full screenshot + test details
- Shows: test name, status, URL, description, steps to reproduce
- For FAIL tests: shows failure reason
#### Annotation Pen
- In lightbox, click the pen icon to activate drawing mode
- Draw red rectangles on problem areas in the screenshot
- Annotations are stored per test and exported with re-run manifests
- Drawing an annotation auto-selects the test
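The exported annotation coordinates (see the JSON format below) are fractions of the screenshot size rather than pixels, so they survive any display scaling. A sketch of how a mouse drag could be normalized, assuming the lightbox image element and client-pixel start/end points:

```js
// Sketch: convert a drag over the lightbox image into a normalized
// rectangle matching the exported {x1, y1, x2, y2} shape. Names assumed.
function normalizeRect(img, start, end) {
  const box = img.getBoundingClientRect();
  const clamp = (v) => Math.min(1, Math.max(0, v));
  const fx = (x) => clamp((x - box.left) / box.width);
  const fy = (y) => clamp((y - box.top) / box.height);
  return {
    x1: Math.min(fx(start.x), fx(end.x)),
    y1: Math.min(fy(start.y), fy(end.y)),
    x2: Math.max(fx(start.x), fx(end.x)),
    y2: Math.max(fy(start.y), fy(end.y)),
  };
}
```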
#### Multi-Select + Re-run
- Click checkbox overlay on cards to select tests
- Floating action bar shows selection count
- "Re-run selected" → downloads JSON manifest with test IDs + annotations
- "Copy IDs" → copies test paths to clipboard
- JSON format:
```json
{
  "action": "rerun",
  "timestamp": "2026-04-09T...",
  "tests": [
    {
      "test": "auth/login",
      "annotations": [
        { "x1": 0.2, "y1": 0.3, "x2": 0.8, "y2": 0.6 }
      ]
    }
  ]
}
```
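If you want to script the hand-off instead of pasting IDs manually, a minimal sketch of reading the downloaded manifest (the script name and file argument are hypothetical; the file is whatever your browser saved):

```js
// Sketch: turn a downloaded re-run manifest into a /sg-visual-run command.
// Usage (hypothetical): node rerun-ids.mjs rerun-manifest.json
import { readFileSync } from "node:fs";

const manifest = JSON.parse(readFileSync(process.argv[2], "utf8"));
if (manifest.action !== "rerun") throw new Error("not a re-run manifest");

const ids = manifest.tests.map((t) => t.test);
console.log(`/sg-visual-run ${ids.join(" ")}`); // paste the printed command
```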
#### Validate & Generate Report workflow
- Select one or more failed tests (checkbox overlay on cards)
- Optionally annotate each test with the pen tool to mark the problem area
- Click "Validate & Generate Report" in the floating action bar
- The page POSTs `fix-manifest.json` to the server via `POST /save-manifest`
- The saved manifest is then consumed by `/sg-visual-fix` to implement fixes
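A sketch of how the server side of that POST might look, assuming a plain `node:http` server in `--serve` mode. The route and port come from this page; the output path and error handling are assumptions:

```js
// Sketch of the /save-manifest endpoint; not the real --serve implementation.
import { createServer } from "node:http";
import { writeFileSync } from "node:fs";

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/save-manifest") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      try {
        JSON.parse(body); // validate before persisting
        writeFileSync("visual-tests/_results/fix-manifest.json", body);
        res.writeHead(204);
      } catch {
        res.writeHead(400);
      }
      res.end();
    });
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8888);
```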
### Step 4: Re-run Failed/Annotated Tests
Take the exported JSON and feed it back:

```
/sg-visual-run <paste test IDs>
```

Or use the test paths directly:

```
/sg-visual-run auth/login dashboard/home settings/profile
```
## Build Script Location
The build script and template are installed to the project:
| File | Purpose |
|---|---|
| `visual-tests/build-review.mjs` | Node.js build script |
| `visual-tests/_review-template.html` | HTML template with inline CSS + JS |
| `visual-tests/_results/review.html` | Generated output (not committed) |
## Setup
If the build script is not yet in the project:
```bash
# Copy from plugin
cp ~/.claude/plugins/shipguard/skills/sg-visual-review/build-review.mjs visual-tests/
cp ~/.claude/plugins/shipguard/skills/sg-visual-review/_review-template.html visual-tests/

# Add npm script (optional)
# In package.json: "visual:review": "node visual-tests/build-review.mjs"
```
## Design
- Dark theme (slate-900 bg, copper accents)
- Responsive grid (4 columns desktop, 1 column mobile)
- No external dependencies (works with file:// protocol)
- Keyboard shortcuts: Escape to close lightbox/clear selection
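The Escape behavior fits in a single keydown listener; a sketch, with element class names as assumptions:

```js
// Sketch of the Escape shortcut: close the lightbox if open, otherwise
// clear the current card selection. Element classes are assumptions.
document.addEventListener("keydown", (e) => {
  if (e.key !== "Escape") return;
  const lightbox = document.querySelector("#lightbox.open");
  if (lightbox) {
    lightbox.classList.remove("open");
  } else {
    document.querySelectorAll(".card.selected")
      .forEach((card) => card.classList.remove("selected"));
  }
});
```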