test-coverage
Test Coverage Expander
Expand unit test coverage by targeting untested branches and edge cases.
When to Use
- User asks to "increase test coverage", "add more tests", "expand unit tests", or "cover edge cases"
- A CI pipeline reports low coverage and the user wants it improved
- A code review flags untested error paths or boundary conditions
- The user wants to identify and fill gaps in an existing test suite before a release
Instructions
- Sync the branch with remote (see Repo Sync section below)
- Create a feature branch for the new tests
- Run the project's coverage tool to get a baseline report
- Identify the lowest-coverage files and untested code paths
- Write tests for error paths, boundary values, and missing branches
- Re-run coverage to confirm improvement
- Commit the new tests with a descriptive message
Repo Sync Before Edits (mandatory)
Before creating/updating/deleting files in an existing repository, sync the current branch with remote:
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin
git pull --rebase origin "$branch"
If the working tree is not clean, stash first, sync, then restore:
git stash push -u -m "pre-sync"
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin && git pull --rebase origin "$branch"
git stash pop
If origin is missing, pull is unavailable, or rebase/stash conflicts occur, stop and ask the user before continuing.
Workflow
0. Create Feature Branch
Before making any changes:
- Check the current branch; if already on a feature branch for this task, skip this step
- Check the repo for branch naming conventions (e.g., `feat/`, `feature/`, etc.)
- Create and switch to a new branch following the repo's convention, or fall back to: `feat/test-coverage`
1. Analyze Coverage
Detect the project's test runner and run the coverage report:
- JavaScript/TypeScript: `npx jest --coverage` or `npx vitest --coverage`
- Python: `pytest --cov=. --cov-report=term-missing`
- Go: `go test -coverprofile=coverage.out ./...`
- Rust: `cargo tarpaulin` or `cargo llvm-cov`
From the report, identify (see the ranking sketch after this list):
- Untested branches and code paths
- Low-coverage files/functions (prioritize files below 60%)
- Missing error handling tests
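To make the prioritization concrete, here is a minimal sketch of ranking files from a coverage report, assuming a Python project where `pytest --cov=. --cov-report=json` has written a `coverage.json` in coverage.py's JSON layout; the 60% threshold and the output format are illustrative, not prescriptive:

```python
import json

# Load the JSON report written by pytest-cov / coverage.py
# (pytest --cov=. --cov-report=json produces coverage.json).
with open("coverage.json") as fh:
    report = json.load(fh)

# Each entry under "files" carries a per-file summary plus the exact
# line numbers that were never executed.
low_coverage = [
    (path, data["summary"]["percent_covered"], data.get("missing_lines", []))
    for path, data in report["files"].items()
    if data["summary"]["percent_covered"] < 60.0
]

# Worst files first: these are the first candidates for new tests.
for path, percent, missing in sorted(low_coverage, key=lambda item: item[1]):
    print(f"{path}: {percent:.0f}% covered, untested lines: {missing}")
```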
2. Identify Test Gaps
Review the code for (see the example after this list):
- Logical branches (if/else, switch)
- Error paths and exceptions
- Boundary values (min, max, zero, empty, null)
- Edge cases and corner cases
- State transitions and side effects
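As an illustration of where such gaps typically hide, consider a small hypothetical validator (the function and its rules are invented for this example); the comments mark the spots a coverage report usually flags:

```python
def parse_age(raw: str | None) -> int:
    # Boundary: None and empty/whitespace input are classic untested cases.
    if raw is None or raw.strip() == "":
        raise ValueError("age is required")
    # Error path: non-numeric input makes int() raise ValueError.
    value = int(raw)
    # Branches: the negative and upper boundaries are easy to miss.
    if value < 0:
        raise ValueError("age cannot be negative")
    if value > 130:
        raise ValueError("age is implausible")
    # Happy path: frequently the only case an existing suite covers.
    return value
```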
3. Write Tests
Use the project's testing framework:
- JavaScript/TypeScript: Jest, Vitest, Mocha
- Python: pytest, unittest
- Go: testing, testify
- Rust: built-in test framework
Target scenarios (sample tests follow this list):
- Error handling and exceptions
- Boundary conditions
- Null/undefined/empty inputs
- Concurrent/async edge cases
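A minimal pytest sketch of these targets, written against the hypothetical `parse_age` validator above (the module path `myproject.validation` and the test names are assumptions for this example, not part of any real project):

```python
import pytest

from myproject.validation import parse_age  # hypothetical module path


def test_rejects_none_and_empty_input():
    # Null / empty inputs
    with pytest.raises(ValueError):
        parse_age(None)
    with pytest.raises(ValueError):
        parse_age("   ")


def test_rejects_non_numeric_input():
    # Error handling: int() failure propagates as ValueError
    with pytest.raises(ValueError):
        parse_age("not-a-number")


@pytest.mark.parametrize("raw, expected", [("0", 0), ("130", 130)])
def test_accepts_boundary_values(raw, expected):
    # Boundary conditions: the extremes of the valid range
    assert parse_age(raw) == expected


@pytest.mark.parametrize("raw", ["-1", "131"])
def test_rejects_out_of_range_values(raw):
    # Branches just outside the boundaries
    with pytest.raises(ValueError):
        parse_age(raw)
```

Each test name states the scenario it covers, which keeps the later coverage report and commit message easy to read.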
4. Verify Improvement
Run coverage again and confirm a measurable increase. Report:
- Before/after coverage percentages
- Number of new test cases added
- Files with the biggest coverage gains
Expected Output
After a successful run on a Python project, the final verification report shows:
Coverage before: 61% (47/77 statements)
Coverage after: 84% (65/77 statements)
New tests added: 9
Files improved:
- src/parser.py 52% → 91% (+7 tests: null input, empty string, unicode overflow)
- src/auth.py 71% → 88% (+2 tests: expired token, missing header)
All 56 tests passing. No regressions.
Acceptance Criteria
A run passes when all of the following are true:
- Coverage report exists from a runnable command for the detected stack (e.g., `jest --coverage`, `pytest --cov`, `go test -cover`).
- Post-run total coverage is strictly higher than the pre-run baseline; no test additions that fail to move the metric.
- New tests target previously-untested branches, error paths, or boundary values, not duplicates of existing assertions.
- The full test suite passes locally before committing (`npm test`, `pytest`, `go test ./...`, etc.).
- All new tests live on a feature branch (e.g., `feat/test-coverage`), never on `main`/`master`.
- Commit message records the before/after coverage percentages and the files newly covered.
Edge Cases
- No test framework detected: The skill checks `package.json`, `pyproject.toml`, `Cargo.toml`, or `go.mod` for test dependencies; if none is found, it asks the user which framework to use before writing any tests.
- Coverage tool not installed: Installs the appropriate tool (`pytest-cov`, `nyc`, `cargo tarpaulin`, etc.) and retries rather than failing silently.
- Existing tests are already failing: Does not add new tests until existing failures are resolved; reports the failing tests to the user first.
- 100% coverage already reached: Reports this to the user and exits; no tests are added unnecessarily.
- Generated code or vendored files in the coverage report: Excludes auto-generated and third-party directories from analysis to avoid writing tests for code the project does not own.
- Async / concurrent code paths: Uses framework-appropriate async test utilities (e.g., `pytest-asyncio`, Jest fake timers) rather than bare sync wrappers; a minimal sketch follows this list.
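For the async case, a minimal sketch with `pytest-asyncio` (assuming the plugin is installed; the coroutine under test is invented for this example):

```python
import asyncio

import pytest


async def slow_fetch(delay: float) -> str:
    # Hypothetical coroutine standing in for real async code under test.
    await asyncio.sleep(delay)
    return "ok"


@pytest.mark.asyncio
async def test_fast_path_returns_result():
    assert await asyncio.wait_for(slow_fetch(0.01), timeout=1.0) == "ok"


@pytest.mark.asyncio
async def test_slow_path_hits_timeout():
    # Error path: the timeout branch, easy to miss with bare sync wrappers.
    with pytest.raises(asyncio.TimeoutError):
        await asyncio.wait_for(slow_fetch(5.0), timeout=0.05)
```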
Step Completion Reports
After completing each major step, output a status report in this format:
◆ [Step Name] ([step N of M] — [context])
··································································
[Check 1]: √ pass
[Check 2]: √ pass (note if relevant)
[Check 3]: × fail — [reason]
[Check 4]: √ pass
[Criteria]: √ N/M met
____________________________
Result: PASS | FAIL | PARTIAL
Adapt the check names to match what the step actually validates. Use √ for pass, × for fail, and — to add brief context. The "Criteria" line summarizes how many acceptance criteria were met. The "Result" line gives the overall verdict.
Branch Setup phase checks: Feature branch created, Base coverage measured
Analysis phase checks: Coverage report parsed, Gaps identified, Priority ranked
Test Writing phase checks: Tests written, Edge cases covered, Framework conventions followed
Verification phase checks: Tests pass, Coverage improved, No regressions
Error Handling
No test framework detected
Solution: Check `package.json`, `pyproject.toml`, `Cargo.toml`, or `go.mod` for test dependencies. If none is found, ask the user which framework to use and install it. A rough detection sketch appears at the end of this section.
Coverage tool not installed
Solution: Install the appropriate coverage tool (nyc, pytest-cov, etc.) and retry.
Existing tests failing
Solution: Do not add new tests until existing failures are resolved. Report failing tests to the user first.
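A rough detection sketch for the first case, in Python (the manifest files checked are the ones listed above; the dependency keywords are a best-effort guess, not an exhaustive list):

```python
import json
from pathlib import Path


def detect_test_stack(root: str = ".") -> str | None:
    """Best-effort guess at the project's test stack from common manifest files."""
    root_path = Path(root)

    pkg = root_path / "package.json"
    if pkg.exists():
        manifest = json.loads(pkg.read_text())
        deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
        for framework in ("jest", "vitest", "mocha"):
            if framework in deps:
                return framework

    pyproject = root_path / "pyproject.toml"
    if pyproject.exists() and "pytest" in pyproject.read_text():
        return "pytest"

    if (root_path / "go.mod").exists():
        return "go test"

    if (root_path / "Cargo.toml").exists():
        return "cargo test"

    return None  # nothing detected: ask the user before writing any tests
```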
Guidelines
- Follow existing test patterns and naming conventions
- Place test files alongside source or in the project's existing test directory
- Group related test cases logically
- Use descriptive test names that explain the scenario
- Do not mock what you do not own — prefer integration tests for external boundaries