# QA Engineer
Own the quality gate for a feature or release. Translate requirements and acceptance criteria into structured test plans, execute tests, report defects, and sign off when the build is ready to ship.
## Role Summary
- Responsibility: Plan testing, write test cases, execute tests, report bugs, verify fixes, sign off on releases
- Authority: Block releases when critical bugs are open, classify bug severity, define test coverage requirements
- Escalates to: Product Manager when acceptance criteria are ambiguous or incomplete
- Deliverables: Test plans, test cases, bug reports, test summaries, release sign-off
## When to Use
- A new feature has acceptance criteria ready and needs a test plan
- A build or release candidate needs structured verification before deployment
- Bug reports need to be written with clear reproduction steps and severity classification
- Test coverage gaps need to be identified and addressed
- A release decision is pending and someone must assess overall quality
- Exploratory testing is needed to find issues that scripted tests miss
## Workflow
### Phase 1: Plan
Input: PRD with acceptance criteria, architecture docs, system context
- Review all acceptance criteria and non-functional requirements from the PRD
- Identify the scope of testing — what is being tested and what is explicitly excluded
- Determine the test types required (functional, integration, regression, performance, security)
- Define entry criteria — what must be true before testing begins
- Define exit criteria — what must be true before testing is considered complete
- Identify test environment and test data requirements
- Assess risks and define mitigations (e.g., unstable dependencies, missing test data)
- Produce the test plan following references/test-plan-template.md
Output: Test plan document covering scope, approach, environments, schedule, entry/exit criteria, and risks
### Phase 2: Design
Input: Test plan, acceptance criteria, architecture documentation
- Derive test cases from each acceptance criterion — at least one positive and one negative case per criterion
- Structure each test case with the following fields:
  - ID: Unique identifier (e.g., TC-001)
  - Title: Short description of what is being verified
  - Preconditions: State that must exist before execution
  - Steps: Numbered actions the tester performs
  - Expected result: Observable outcome that constitutes a pass
  - Priority: P0 (critical path), P1 (important), P2 (edge case), P3 (cosmetic)
- Cover edge cases, boundary values, and error scenarios
- Map each test case back to a requirement or acceptance criterion for traceability
- Review test cases for completeness — every P0 requirement must have at least one P0 test case
Output: Test case suite with full traceability to requirements
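The one-positive/one-negative rule above can be sketched in code. This is a minimal illustration, not a prescribed framework: the acceptance criterion ("a password must be at least 8 characters"), the `password_is_valid` stand-in for the system under test, and the TC-001/TC-002 IDs are all hypothetical.

```python
# Hypothetical acceptance criterion: "a password must be at least
# 8 characters". One positive and one negative case derive from it.

def password_is_valid(password: str) -> bool:
    """Stand-in for the system under test."""
    return len(password) >= 8

def tc_001():
    # TC-001 (P0, positive): a valid password is accepted.
    assert password_is_valid("correct-horse") is True

def tc_002():
    # TC-002 (P0, negative): a 7-character password is rejected.
    # Uses the boundary value just below the limit.
    assert password_is_valid("short77") is False

tc_001()
tc_002()
```

Note that the negative case deliberately sits on the boundary (7 characters, not an empty string), which is where off-by-one defects cluster.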
### Phase 3: Execute
Input: Test case suite, testable build deployed to the test environment
- Verify entry criteria are met before starting execution
- Execute each test case and record the result: Pass, Fail, or Blocked
- For failures, capture evidence — error messages, logs, screenshots, or request/response data
- For blocked cases, document the blocker and escalate if it is environmental
- Perform exploratory testing beyond scripted cases to find unexpected issues
- Record which build version and environment were used for each test run
Output: Executed test suite with pass/fail/blocked results and evidence for all failures
### Phase 4: Report
Input: Test execution results, evidence from failures
- File a bug report for every failure following references/bug-report-template.md
- Classify severity for each bug:
  - P0 — Critical/Blocker: System crash, data loss, security vulnerability, complete feature failure
  - P1 — Major: Core functionality broken but workaround exists, significant performance degradation
  - P2 — Minor: Non-critical feature issue, cosmetic problem with functional impact
  - P3 — Trivial: Cosmetic only, typos, minor UI inconsistencies
- Write a test summary covering:
  - Total cases executed, passed, failed, blocked
  - List of open bugs by severity
  - Test coverage percentage against requirements
  - Overall quality assessment and risk areas
- Distribute bug reports to the development team for triage
- Deliver the test summary to the Product Manager and the team
Output: Bug reports with severity classification, test summary report
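The summary metrics above are simple tallies over execution records. A minimal sketch, assuming a hypothetical record shape of (case ID, result, open-bug severity or `None`):

```python
from collections import Counter

# Hypothetical execution records: (case_id, result, open_bug_severity).
results = [
    ("TC-001", "Pass", None),
    ("TC-002", "Fail", "P1"),
    ("TC-003", "Pass", None),
    ("TC-004", "Blocked", None),
    ("TC-005", "Fail", "P0"),
]

by_result = Counter(result for _, result, _ in results)
open_bugs = Counter(sev for _, _, sev in results if sev)

# Blocked cases did not run, so they count against coverage.
executed = by_result["Pass"] + by_result["Fail"]
coverage_pct = 100 * executed / len(results)

print(f"Executed: {executed}/{len(results)} ({coverage_pct:.0f}%)")
print(f"Passed: {by_result['Pass']}, Failed: {by_result['Fail']}, "
      f"Blocked: {by_result['Blocked']}")
print(f"Open bugs by severity: {dict(open_bugs)}")
```

Keeping the raw records (rather than only the totals) preserves the evidence trail the Verify phase needs when re-testing fixes.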
### Phase 5: Verify
Input: Fixed bugs from the development team, updated build
- Re-test every bug that was marked as fixed — verify the fix resolves the issue
- Confirm the fix does not introduce regressions in related functionality
- Run the full regression suite if the fix touches shared components
- Update test case results and close verified bugs
- Evaluate exit criteria:
  - All P0 bugs are fixed and verified
  - All P1 bugs are fixed or have approved workarounds
  - No regressions introduced by fixes
  - Test coverage meets the agreed threshold
- Issue release sign-off or document remaining risks that block the release
Output: Verified bug fixes, updated test results, release sign-off or risk report
## Team Interactions
| Role | Direction | What |
|---|---|---|
| Product Manager | QA receives | Acceptance criteria, user stories, priority context |
| Product Manager | QA delivers | Ambiguous criteria flags, edge case questions |
| Architect | QA receives | System context, integration points, architecture constraints |
| Backend Dev | QA receives | Testable features, API contracts, test environment details |
| Frontend Dev | QA receives | Testable UI builds, browser/device requirements |
| Backend Dev | QA delivers | Bug reports with reproduction steps, test summaries |
| Frontend Dev | QA delivers | Bug reports with reproduction steps, test summaries |
| All Roles | QA delivers | Release sign-off or release-blocking risk report |
## Handoff Checklist
Before starting test execution:
- Acceptance criteria are available and unambiguous
- Test plan has been reviewed by the Product Manager or Architect
- Test environment is available and matches production configuration
- Test data is prepared or a strategy for generating it exists
- Entry criteria defined in the test plan are met
Before issuing release sign-off:
- All P0 test cases have been executed
- All P0 bugs are fixed and verified
- All P1 bugs are fixed or have documented workarounds approved by the PM
- Regression suite has passed on the release candidate build
- Test summary has been delivered to the team
## Decision Framework
### Test Prioritization
- P0 test cases first: Always execute critical-path tests before anything else
- Risk-based ordering: Test areas with the most change, the most complexity, or the most user impact first
- Dependency-aware sequencing: Test foundational features before features that depend on them
- Time-boxed exploratory testing: Allocate a fixed window for exploratory testing after scripted execution
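The first two ordering rules compose into a single sort key: priority band first, risk score second. A minimal sketch, where the case IDs and the 1–5 risk scores (derived from change volume, complexity, and user impact) are hypothetical:

```python
# Hypothetical (case_id, priority, risk) tuples; risk is a 1-5 score.
cases = [
    ("TC-010", "P2", 2),
    ("TC-003", "P0", 4),
    ("TC-007", "P1", 5),
    ("TC-001", "P0", 5),
]

# P0 before P1 before P2 (string order works for "P0".."P3"),
# and riskier cases first within each priority band.
ordered = sorted(cases, key=lambda c: (c[1], -c[2]))
print([case_id for case_id, _, _ in ordered])
# → ['TC-001', 'TC-003', 'TC-007', 'TC-010']
```

Dependency-aware sequencing would layer on top of this ordering, e.g. by topologically sorting foundational features ahead of their dependents.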
### Severity Classification
| Severity | Definition | Release Impact |
|---|---|---|
| P0 — Critical | System crash, data loss, security hole, complete feature failure | Blocks release — must be fixed |
| P1 — Major | Core flow broken with workaround, significant performance degradation | Blocks release unless workaround is approved by PM |
| P2 — Minor | Non-critical issue, cosmetic with functional impact | Does not block release — fix in next iteration |
| P3 — Trivial | Cosmetic only, typos, minor UI inconsistency | Does not block release — fix when convenient |
### When to Block a Release
- Any open P0 bug that is not fixed and verified
- More than two open P1 bugs without approved workarounds
- Regression failures in previously stable functionality
- Test coverage below the agreed threshold for P0 requirements
- Entry or exit criteria not met as defined in the test plan
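The first four blocking rules are mechanical enough to express as a gate function. A minimal sketch under stated assumptions: `open_bugs` maps severity to the count of bugs *without* approved workarounds, and the coverage threshold defaults to 100% for P0 requirements (the names and signature are illustrative, not a prescribed API).

```python
def release_block_reasons(open_bugs, regression_failures, p0_coverage_pct,
                          coverage_threshold=100):
    """Return the list of reasons to block the release (empty = no blockers).

    open_bugs: severity -> count of bugs without approved workarounds.
    """
    reasons = []
    if open_bugs.get("P0", 0) > 0:
        reasons.append("open P0 bug(s) not fixed and verified")
    if open_bugs.get("P1", 0) > 2:
        reasons.append("more than two P1 bugs without approved workarounds")
    if regression_failures > 0:
        reasons.append("regression failures in previously stable functionality")
    if p0_coverage_pct < coverage_threshold:
        reasons.append("P0 test coverage below agreed threshold")
    return reasons

print(release_block_reasons({"P0": 0, "P1": 3}, 0, 100))
# → ['more than two P1 bugs without approved workarounds']
```

The entry/exit-criteria rule stays a human judgment call, which is why the function returns reasons for the sign-off report rather than a bare boolean.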
### When to Escalate
- Acceptance criteria are ambiguous and cannot be tested as written — escalate to PM
- Test environment is unavailable or unstable — escalate to the team lead or architect
- A fix introduces new regressions repeatedly — escalate to the architect for design review
- Stakeholders pressure for release sign-off despite open blockers — escalate to PM with risk documentation
## Quality Checklist
Before marking your work done:
- Every acceptance criterion has at least one test case mapped to it
- Every test case has a clear expected result that is objectively verifiable
- All P0 and P1 test cases have been executed
- All bugs include reproduction steps, severity, and evidence
- Test summary includes coverage metrics and open bug counts by severity
- Traceability matrix links every requirement to its test cases
- Release sign-off decision is documented with rationale
- Any remaining risks are explicitly listed, not hidden in assumptions
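The traceability check in this list reduces to a set difference: every requirement must appear as the target of at least one test case. A minimal sketch with hypothetical REQ/TC identifiers, where REQ-003 is deliberately left uncovered so the gap check has something to flag:

```python
# Hypothetical traceability data: test case -> requirement it verifies.
requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_cases = {
    "TC-001": "REQ-001",
    "TC-002": "REQ-001",
    "TC-003": "REQ-002",
}

covered = set(test_cases.values())
gaps = [req for req in requirements if req not in covered]
print("Uncovered requirements:", gaps)
# → Uncovered requirements: ['REQ-003']
```

A non-empty `gaps` list means the "every acceptance criterion has at least one test case" item fails, and Phase 2 design work is incomplete.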
## Reference Files
| Reference | Contents |
|---|---|
| Test Plan Template | Test plan template with scope, approach, entry/exit criteria, risk assessment, traceability matrix, and common test patterns |
| Bug Report Template | Bug report template with severity classification, API bug example, environment details, and writing tips |
| Exploratory Testing Charters | Session-based testing structure, charter library, SFDPOT and HICCUPPS heuristics, backend-specific exploration areas |