Test Fixing
Systematically identify and fix all failing tests using smart grouping strategies.
When to Use
- The user explicitly asks to fix tests ("fix these tests", "make tests pass")
- The user reports test failures ("tests are failing", "the test suite is broken")
- The user completes an implementation and wants the tests passing
- The user mentions CI/CD failures caused by tests
Systematic Approach
1. Initial Test Run
Run `make test` to identify all failing tests.
Analyze the output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
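The initial analysis can be partially automated. Here is a minimal sketch, assuming pytest's default "short test summary" format (the `summarize_failures` helper and the sample output are illustrative, not part of any real project):

```python
import re
from collections import Counter

def summarize_failures(pytest_output: str) -> Counter:
    """Count failing tests per file from pytest's short-summary lines."""
    counts = Counter()
    for line in pytest_output.splitlines():
        # Summary lines look like: FAILED tests/test_x.py::test_name - Error...
        m = re.match(r"FAILED\s+([^:]+)::", line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = """\
FAILED tests/test_api.py::test_login - ImportError: cannot import name 'auth'
FAILED tests/test_api.py::test_logout - ImportError: cannot import name 'auth'
FAILED tests/test_models.py::test_user - AssertionError: expected 3, got 2
"""
print(summarize_failures(sample).most_common())
```

The per-file counts immediately show which modules are most affected, which feeds directly into the grouping step below.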
2. Smart Error Grouping
Group similar failures by:
- Error type: ImportError, AttributeError, AssertionError, etc.
- Module/file: the same file causing multiple test failures
- Root cause: Missing dependencies, API changes, refactoring impacts
Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
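The grouping and prioritization above can be sketched in a few lines. This assumes failure lines shaped like `FAILED <nodeid> - <ErrorType>: <message>` (pytest's default); the `group_failures` helper is hypothetical:

```python
from collections import defaultdict

def group_failures(failure_lines):
    """Group pytest failure lines by exception type, largest group first."""
    groups = defaultdict(list)
    for line in failure_lines:
        # The text after ' - ' carries the exception, e.g. 'ImportError: ...'
        _, _, tail = line.partition(" - ")
        error_type = tail.split(":", 1)[0].strip() if tail else "Unknown"
        groups[error_type].append(line)
    # Highest-impact groups (most affected tests) come first
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

failures = [
    "FAILED tests/test_api.py::test_login - ImportError: cannot import name 'auth'",
    "FAILED tests/test_api.py::test_logout - ImportError: cannot import name 'auth'",
    "FAILED tests/test_models.py::test_user - AssertionError: expected 3, got 2",
]
for error_type, lines in group_failures(failures):
    print(f"{error_type}: {len(lines)} test(s)")
```

Count-based ordering is a reasonable default, but dependency order can override it: an `ImportError` group should usually be fixed first even when it is not the largest.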
3. Systematic Fixing Process
For each group (starting with highest impact):
1. Identify root cause
   - Read the relevant code
   - Check recent changes with `git diff`
   - Understand the error pattern
2. Implement fix
   - Use the Edit tool for code changes
   - Follow project conventions (see CLAUDE.md)
   - Make minimal, focused changes
3. Verify fix
   - Run the subset of tests for this group
   - Use pytest markers or file patterns:
     ```
     uv run pytest tests/path/to/test_file.py -v
     uv run pytest -k "pattern" -v
     ```
   - Ensure the group passes before moving on
4. Move to next group
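The verification step can be wrapped in a small helper that builds the focused pytest invocation for each group. This is a sketch: `focused_pytest_cmd` is hypothetical, and the `uv run` prefix matches the commands above but should be swapped for your project's runner if it differs:

```python
from typing import Optional

def focused_pytest_cmd(path: str, pattern: Optional[str] = None) -> list:
    """Build a focused pytest invocation for a single failure group."""
    cmd = ["uv", "run", "pytest", path, "-v"]
    if pattern:
        # -k selects tests whose names match the given expression
        cmd += ["-k", pattern]
    return cmd

# Between fixes, this would be executed with subprocess.run(cmd, check=True)
print(focused_pytest_cmd("tests/path/to/test_file.py"))
print(focused_pytest_cmd("tests/", "import_group"))
```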
4. Fix Order Strategy
Infrastructure first:
- Import errors
- Missing dependencies
- Configuration issues
Then API changes:
- Function signature changes
- Module reorganization
- Renamed variables/functions
Finally, logic issues:
- Assertion failures
- Business logic bugs
- Edge case handling
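The three-tier ordering above can be expressed as a simple lookup. The tier assignments below are illustrative, assuming common Python exception types; extend the map for your codebase:

```python
# Fix-order tiers: 0 = infrastructure, 1 = API changes, 2 = logic issues.
FIX_ORDER = {
    "ModuleNotFoundError": 0,
    "ImportError": 0,
    "AttributeError": 1,
    "TypeError": 1,
    "AssertionError": 2,
}

def order_error_groups(error_types):
    """Sort error-type groups so infrastructure fixes come first."""
    # Unknown types default to the last tier (logic issues)
    return sorted(error_types, key=lambda t: FIX_ORDER.get(t, 2))

print(order_error_groups(["AssertionError", "ImportError", "AttributeError"]))
```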
5. Final Verification
After all groups fixed:
- Run the complete test suite: `make test`
- Verify no regressions
- Check test coverage remains intact
Best Practices
- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused
Example Workflow
User: "The tests are failing after my refactor"
1. Run `make test` → 15 failures identified
2. Group errors:
   - 8 ImportErrors (module renamed)
   - 5 AttributeErrors (function signature changed)
   - 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓