debugging-and-error-recovery
Debugging and Error Recovery
Overview
Systematic debugging with structured triage. When something breaks, stop adding features, preserve evidence, and follow a structured process to find and fix the root cause. Guessing wastes time. The triage checklist works for test failures, build errors, runtime bugs, and production incidents.
When to Use
- Tests fail after a code change
- The build breaks
- Runtime behavior doesn't match expectations
- A bug report arrives
- An error appears in logs or console
- Something worked before and stopped working
The Stop-the-Line Rule
When anything unexpected happens:
1. STOP adding features or making changes
2. PRESERVE evidence (error output, logs, repro steps)
3. DIAGNOSE using the triage checklist
4. FIX the root cause
5. GUARD against recurrence
6. RESUME only after verification passes
Never push past a failing test or broken build to work on the next feature. Errors compound: an unfixed bug introduced at step 3 of a plan quietly makes everything built in steps 4-10 wrong.
The Triage Checklist
Work through these steps in order. Do not skip steps.
Step 1: Reproduce
Make the failure happen reliably. If you can't reproduce it, you can't fix it with confidence.
Can you reproduce the failure?
├── YES → Proceed to Step 2
└── NO
    ├── Gather more context (logs, environment details)
    ├── Try reproducing in a minimal environment
    └── If truly non-reproducible, document conditions and monitor
For test failures:
# Run the specific failing test (Jest syntax; Mocha uses --grep)
npm test -- -t "test name"
# Run with verbose output
npm test -- --verbose
# Run in isolation (rules out test pollution from shared state)
npm test -- --testPathPattern="specific-file" --runInBand
Step 2: Localize
Narrow down WHERE the failure happens:
Which layer is failing?
├── UI/Frontend → Check console, DOM, network tab
├── API/Backend → Check server logs, request/response
├── Database → Check queries, schema, data integrity
├── Build tooling → Check config, dependencies, environment
├── External service → Check connectivity, API changes, rate limits
└── Test itself → Check if the test is correct (a false positive)
Use bisection for regression bugs:
# Find which commit introduced the bug
git bisect start
git bisect bad # Current commit is broken
git bisect good <known-good-sha> # This commit worked
# Git will checkout midpoint commits; run your test at each
git bisect run npm test -- -t "failing test"
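# When bisect finishes (or you abandon it), return to your original commit
git bisect reset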
Step 3: Reduce
Create the minimal failing case:
- Remove unrelated code/config until only the bug remains
- Simplify the input to the smallest example that triggers the failure
- Strip the test to the bare minimum that reproduces the issue
A minimal reproduction makes the root cause obvious and prevents fixing symptoms instead of causes.
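For example, a vague report like "search sometimes returns nothing" might reduce to a single failing input (createTask and searchTasks are hypothetical app functions, as in the test example below):
// Minimal repro: search fails only when the title contains a double quote
import { createTask, searchTasks } from './tasks'; // hypothetical module

await createTask({ title: '"' });    // smallest input that still fails
console.log(await searchTasks('"')); // expected 1 result; throws instead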
Step 4: Fix the Root Cause
Fix the underlying issue, not the symptom:
Symptom: "The user list shows duplicate entries"

Symptom fix (bad):
  → Deduplicate in the UI component: [...new Set(users)]

Root cause fix (good):
  → The API endpoint has a JOIN that produces duplicates
  → Fix the query, add a DISTINCT, or fix the data model
Ask: "Why does this happen?" until you reach the actual cause, not just where it manifests.
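For the duplicate-users example, the fix belongs in the query, not the component. A sketch, with assumed table and column names:
// Before: the JOIN yields one row per (user, role) pair, so users with
// multiple roles come back more than once
// SELECT users.* FROM users JOIN roles ON roles.user_id = users.id

// After: deduplicate at the source
const USERS_QUERY = `
  SELECT DISTINCT users.*
  FROM users
  JOIN roles ON roles.user_id = users.id
`;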
Step 5: Guard Against Recurrence
Write a test that catches this specific failure:
// The bug: task titles with special characters broke the search
it('finds tasks with special characters in title', async () => {
  await createTask({ title: 'Fix "quotes" & <brackets>' });
  const results = await searchTasks('quotes');
  expect(results).toHaveLength(1);
  expect(results[0].title).toBe('Fix "quotes" & <brackets>');
});
This test will prevent the same bug from recurring. It should fail without the fix and pass with it.
Step 6: Verify End-to-End
After fixing, verify the complete scenario:
# Run the specific test
npm test -- -t "specific test"
# Run the full test suite (check for regressions)
npm test
# Build the project (check for type/compilation errors)
npm run build
# Manual spot check if applicable
npm run dev # Verify in browser
Error-Specific Patterns
Test Failure Triage
Test fails after code change:
├── Did you change code the test covers?
│   └── YES → Check if the test or the code is wrong
│       ├── Test is outdated → Update the test
│       └── Code has a bug → Fix the code
├── Did you change unrelated code?
│   └── YES → Likely a side effect → Check shared state, imports, globals
└── Test was already flaky?
    └── Check for timing issues, order dependence, external dependencies
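To confirm a flakiness claim rather than debate it, run the test repeatedly in isolation; intermittent failures usually surface within a few dozen runs:
# Run the suspect test 20 times; stop at the first failure
for i in $(seq 1 20); do
  npm test -- -t "flaky test" || { echo "failed on run $i"; break; }
done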
Build Failure Triage
Build fails:
├── Type error → Read the error, check the types at the cited location
├── Import error → Check the module exists, exports match, paths are correct
├── Config error → Check build config files for syntax/schema issues
├── Dependency error → Check package.json, run npm install
└── Environment error → Check Node version, OS compatibility
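For the dependency and environment branches, a few generic checks (npm assumed; adapt for yarn or pnpm, and treat the package name as a placeholder):
node --version                # matches .nvmrc / the engines field?
npm ls <package>              # which version actually resolved?
rm -rf node_modules && npm ci # clean reinstall, pinned to the lockfile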
Runtime Error Triage
Runtime error:
├── TypeError: Cannot read property 'x' of undefined
│   └── Something is null/undefined that shouldn't be
│       → Check data flow: where does this value come from?
├── Network error / CORS
│   └── Check URLs, headers, server CORS config
├── Render error / White screen
│   └── Check error boundary, console, component tree
└── Unexpected behavior (no error)
    └── Add logging at key points, verify data at each step
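For the no-error case, log the value at each hop to see where it goes wrong. A sketch with hypothetical names:
// Hypothetical API that may return a user without a profile, or no user at all
declare function fetchUser(id: string): Promise<{ profile?: object } | undefined>;

async function loadProfile(id: string) {
  const user = await fetchUser(id);
  console.log('[debug] after fetch:', user);     // undefined here → API/fetch layer
  const profile = user?.profile;
  console.log('[debug] after access:', profile); // undefined here → field missing from payload
  return profile;
}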
Safe Fallback Patterns
When under time pressure, use safe fallbacks:
// Safe default + warning (instead of crashing)
const DEFAULTS: Record<string, string> = {}; // fill with known-safe fallbacks

function getConfig(key: string): string {
  const value = process.env[key];
  if (!value) {
    console.warn(`Missing config: ${key}, using default`);
    return DEFAULTS[key] ?? '';
  }
  return value;
}
// Graceful degradation (instead of a broken feature)
// Note: this try/catch only catches errors thrown synchronously in this
// function; errors thrown while React renders <Chart> need an error
// boundary (see below)
function renderChart(data: ChartData[]) {
  if (data.length === 0) {
    return <EmptyState message="No data available for this period" />;
  }
  try {
    return <Chart data={data} />;
  } catch (error) {
    console.error('Chart render failed:', error);
    return <ErrorState message="Unable to display chart" />;
  }
}
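Because of that limitation, render failures ultimately need an error boundary. A minimal sketch (React requires a class component for this; ErrorState is the same placeholder component as above):
import React from 'react';

// Catches errors thrown while rendering children and swaps in a fallback
class ChartBoundary extends React.Component<
  { children: React.ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error: Error, info: React.ErrorInfo) {
    console.error('Chart render failed:', error, info.componentStack);
  }

  render() {
    if (this.state.hasError) {
      return <ErrorState message="Unable to display chart" />;
    }
    return this.props.children;
  }
}
Wrap the fallible subtree: <ChartBoundary><Chart data={data} /></ChartBoundary>.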
Instrumentation Guidelines
Add logging only when it helps. Remove it when done.
When to add instrumentation:
- You can't localize the failure to a specific line
- The issue is intermittent and needs monitoring
- The fix involves multiple interacting components
When to remove it:
- The bug is fixed and tests guard against recurrence
- The log is only useful during development (not in production)
- It contains sensitive data (always remove these)
Permanent instrumentation (keep):
- Error boundaries with error reporting
- API error logging with request context
- Performance metrics at key user flows
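One convention that makes the remove-when-done rule enforceable (a suggestion, not a requirement of this process): route temporary logs through a tagged helper so leftovers are greppable before commit.
// Temporary instrumentation only; grep for "[debug:" before committing
function debugLog(tag: string, ...args: unknown[]): void {
  console.debug(`[debug:${tag}]`, ...args);
}

debugLog('search', 'query:', 'quotes', 'hits:', 1);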
Common Rationalizations
| Rationalization | Reality |
|---|---|
| "I know what the bug is, I'll just fix it" | You might be right 70% of the time. The other 30% costs hours. Reproduce first. |
| "The failing test is probably wrong" | Verify that assumption. If the test is wrong, fix the test. Don't just skip it. |
| "It works on my machine" | Environments differ. Check CI, check config, check dependencies. |
| "I'll fix it in the next commit" | Fix it now. The next commit will introduce new bugs on top of this one. |
| "This is a flaky test, ignore it" | Flaky tests mask real bugs. Fix the flakiness or understand why it's intermittent. |
Red Flags
- Skipping a failing test to work on new features
- Guessing at fixes without reproducing the bug
- Fixing symptoms instead of root causes
- "It works now" without understanding what changed
- No regression test added after a bug fix
- Multiple unrelated changes made while debugging (contaminating the fix)
Verification
After fixing a bug:
- Root cause is identified and documented
- Fix addresses the root cause, not just symptoms
- A regression test exists that fails without the fix
- All existing tests pass
- Build succeeds
- The original bug scenario is verified end-to-end