# Test Coverage
Expand unit test coverage by targeting untested branches and edge cases.
## Repo Sync Before Edits (mandatory)

Before making any changes, sync with the remote to avoid conflicts:

```bash
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin
git pull --rebase origin "$branch"
```

If the working tree is dirty, stash first, sync, then pop. If `origin` is missing or conflicts occur, stop and ask the user before continuing.
## Workflow

### 0. Create Feature Branch

Before making any changes:

- Check the current branch; if you are already on a feature branch for this task, skip this step
- Check the repo for branch naming conventions (e.g., `feat/`, `feature/`)
- Create and switch to a new branch following the repo's convention, or fall back to `feat/test-coverage`
### 1. Analyze Coverage

Use sub-agents for parallel discovery. Launch multiple Agent tool calls concurrently to keep the main context clean:

- **Agent 1 — Stack detection:** Scan for `package.json`, `tsconfig.json`, `pyproject.toml`, `setup.py`, `Cargo.toml`, and `go.mod`; identify the primary language(s), testing framework, and coverage tool. Check for existing test configuration (`jest.config`, `vitest.config`, `pytest.ini`, `.coveragerc`). Return a structured summary.
- **Agent 2 — Test inventory:** List all existing test files and directories, and identify the testing patterns in use (file naming, directory structure, assertion style). Return a checklist of test locations and conventions.

Collect the results from both agents before proceeding.
Then run the appropriate coverage command for the project's stack:

```bash
# JavaScript/TypeScript (Jest)
npx jest --coverage --coverageReporters=text --coverageReporters=json-summary

# JavaScript/TypeScript (Vitest)
npx vitest run --coverage

# Python (pytest)
python -m pytest --cov=. --cov-report=term-missing

# Go
go test -coverprofile=coverage.out ./... && go tool cover -func=coverage.out

# Rust
cargo tarpaulin --out Stdout
```
From the report, identify:
- Untested branches and code paths (look for lines marked as uncovered)
- Low-coverage files/functions (below 80% line coverage)
- Missing error handling tests
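This triage step can be sketched in a few lines of Python, assuming Jest's `json-summary` reporter (which writes `coverage/coverage-summary.json` with a per-file `lines.pct` figure); other coverage tools expose similar per-file percentages under different keys:

```python
def low_coverage_files(summary, threshold=80.0):
    """Return (path, line-coverage pct) pairs below the threshold, worst first.

    `summary` is the parsed coverage-summary.json: a "total" entry plus one
    entry per file, each carrying a "lines": {"pct": ...} block.
    """
    gaps = [
        (path, data["lines"]["pct"])
        for path, data in summary.items()
        if path != "total" and data["lines"]["pct"] < threshold
    ]
    return sorted(gaps, key=lambda item: item[1])  # worst coverage first
```

For example, `low_coverage_files(json.load(open("coverage/coverage-summary.json")))` yields the file list to feed into Step 2.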
### 2. Identify Test Gaps
Use sub-agents for parallel file analysis. When multiple low-coverage files are identified, dispatch independent agents to analyze them concurrently:
- Agent per file group: For each low-coverage file (or small group of related files), launch a sub-agent to read the source and identify specific untested code paths. Each agent should return a list of gaps with line numbers and suggested test scenarios.
Each agent should look for:
- Logical branches: if/else, switch/match, ternary operators
- Error paths: try/catch, error returns, validation failures
- Boundary values: min, max, zero, empty string, null/undefined, off-by-one
- Edge cases: empty collections, single-element collections, duplicate values
- State transitions: before/after mutations, async race conditions
- Integration points: API calls, database queries, file I/O
Collect all agent results and prioritize gaps by risk: error paths and boundary values cause the most production bugs.
### 3. Write Tests
Use the project's existing testing framework and follow its conventions. Detect which framework is in use by checking config files and existing test files.
| Stack | Framework | Test Location Pattern |
|---|---|---|
| JS/TS | Jest | `__tests__/` or `*.test.ts` |
| JS/TS | Vitest | `*.test.ts` or `*.spec.ts` |
| Python | pytest | `tests/` or `test_*.py` |
| Go | testing | `*_test.go` in same package |
| Rust | built-in | `#[cfg(test)]` module in same file |
Use sub-agents for parallel test writing. When gaps span multiple independent files or modules, dispatch sub-agents concurrently to write tests in parallel:
- Agent per test file: For each source file (or module) that needs new tests, launch a sub-agent to write the test cases. Each agent receives the gap analysis from Step 2 and the project conventions from Step 1. Each agent should return the path of the test file it created or updated.
Collect all agent results, then verify no conflicts between test files (e.g., duplicate test names, shared fixtures).
For each gap, write focused test cases:
- One assertion per logical concept (a test can have multiple asserts if they test the same behavior)
- Use descriptive names: `test_parse_returns_error_on_empty_input`, not `test_parse_2`
- Group related tests logically (by function or behavior)
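As a concrete illustration of these conventions (and of the error-path and boundary-value gaps from Step 2), here is a pytest-style sketch for a hypothetical `parse_port` helper; both the function and the scenario are invented for the example:

```python
import pytest


def parse_port(value: str) -> int:
    """Hypothetical function under test: parse a TCP port from a string."""
    if not value:
        raise ValueError("empty input")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


# Error paths: each invalid input gets its own clearly named test.
def test_parse_port_rejects_empty_input():
    with pytest.raises(ValueError):
        parse_port("")


def test_parse_port_rejects_out_of_range_value():
    with pytest.raises(ValueError):
        parse_port("70000")


def test_parse_port_rejects_zero():
    with pytest.raises(ValueError):
        parse_port("0")


# Boundary values: both ends of the valid range in one grouped test.
@pytest.mark.parametrize("raw,expected", [("1", 1), ("65535", 65535)])
def test_parse_port_accepts_range_boundaries(raw, expected):
    assert parse_port(raw) == expected
```

Each test name states the scenario and the expected outcome, so a failure report reads as a specification of the broken behavior.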
### 4. Verify Improvement
Run coverage again with the same command from Step 1 and confirm:
- New tests pass
- Coverage percentage increased
- Previously uncovered lines are now covered
Report the before/after coverage numbers to the user.
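The before/after comparison can be sketched in Python, assuming coverage.py's JSON report (`coverage json`, or pytest-cov's `--cov-report=json`), where the overall figure lives under `totals.percent_covered`; the file names below are illustrative snapshots taken before and after Step 3:

```python
import json


def coverage_delta(before, after):
    """Compare two parsed coverage.py JSON reports.

    Returns (before pct, after pct, delta in points); the overall figure
    is read from totals.percent_covered in each report.
    """
    b = before["totals"]["percent_covered"]
    a = after["totals"]["percent_covered"]
    return b, a, round(a - b, 2)


def load_report(path):
    """Load one saved coverage.json snapshot."""
    with open(path) as fh:
        return json.load(fh)
```

For example, `coverage_delta(load_report("coverage-before.json"), load_report("coverage-after.json"))` yields the numbers to report to the user.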
## Guidelines
- Follow existing test patterns and naming conventions in the project
- Add tests to existing test files when appropriate (don't create new files unnecessarily)
- Focus on meaningful coverage — skip trivial getters/setters unless they contain logic
- Use descriptive test names that explain the scenario being tested
- Avoid mocking unless the project already uses mocks extensively