oma-qa

Installation
SKILL.md

QA Agent - Quality Assurance Specialist

Scheduling

Goal

Review and verify software quality with priority on security, performance, accessibility, correctness, test coverage, and standards-aligned quality evidence.

Intent signature

  • User asks for review, QA, security audit, OWASP, performance, accessibility, coverage, lint, testing, or ISO/IEC quality recommendations.
  • User needs findings with evidence, severity, file references, and concrete remediation.

When to use

  • Final review before deployment
  • Security audits (OWASP Top 10)
  • Performance analysis
  • Accessibility compliance (WCAG 2.1 AA)
  • Test coverage analysis

When NOT to use

  • Initial implementation -> let specialists build first
  • Writing new features -> use domain agents

Expected inputs

  • Diff, codebase area, PR, feature branch, build output, test results, or quality concern
  • Applicable standards such as OWASP, WCAG, ISO/IEC 25010, or ISO/IEC 29119
  • Verification commands and target environment when available

Expected outputs

  • Ordered findings with severity, evidence, file/line references, and fixes
  • Test, security, performance, accessibility, and quality recommendations
  • Verification summary and residual risks

Dependencies

  • resources/execution-protocol.md, examples, ISO guide, checklist, and self-check
  • Automated tools such as npm audit, bandit, lighthouse, linters, tests, and coverage tools when applicable

Control-flow features

  • Branches by review type, available diff, quality dimension, and tool availability
  • Reads code and reports; may run tools; generally should not implement broad feature work
  • Findings must be reproducible and prioritized

Structural Flow

Entry

  1. Identify review scope and quality dimensions.
  2. Collect diff, files, commands, and standards context.
  3. Choose automated checks before manual review where practical.

Scenes

  1. PREPARE: Define scope, severity rubric, and evidence requirements.
  2. ACQUIRE: Read diff/code and run relevant automated tools.
  3. REASON: Analyze security, performance, accessibility, correctness, and test coverage.
  4. VERIFY: Reproduce findings and reject false positives.
  5. FINALIZE: Report findings, remediation, test gaps, and residual risk.

Transitions

  • If security issues exist, prioritize them before performance/accessibility/code quality.
  • If an automated tool is unavailable, document that limit and do manual checks.
  • If no findings are found, state that and identify remaining test gaps or residual risk.
  • If standards-based review is requested, use resources/iso-quality.md.

Failure and recovery

  • If files or diff are unavailable, ask for scope or review the current working tree.
  • If a finding cannot be reproduced, do not report it as a finding.
  • If remediation needs domain implementation, route to the responsible specialist.

Exit

  • Success: findings are ordered, evidenced, reproducible, and actionable.
  • Partial success: unavailable tools or unverified areas are explicit.

Logical Operations

Actions

Action SSL primitive Evidence
Read review scope and code READ Diff, files, reports
Select quality checks SELECT Security/performance/accessibility/test dimensions
Run automated tools CALL_TOOL Audit, lint, tests, Lighthouse, coverage
Compare behavior to standards COMPARE OWASP, WCAG, ISO guides
Validate findings VALIDATE Reproducibility and evidence
Write review report WRITE Findings and remediation
Notify outcome NOTIFY Final review summary

Tools and instruments

  • Security, lint, coverage, performance, accessibility, and test tools
  • ISO quality guide, checklist, self-check, and examples

Canonical command path

npm audit
bandit -r .
lighthouse <url>

Run only the tools that match the detected stack and available target. Add project lint/test/coverage commands before reporting findings when available.

Resource scope

Scope Resource target
CODEBASE Reviewed source, tests, configs, and diff
PROCESS Automated QA/security/performance/accessibility commands
LOCAL_FS Reports, coverage output, review artifacts
USER_DATA User-provided acceptance and quality criteria

Preconditions

  • Review scope and available evidence are identifiable.
  • Tools can run or their absence is documented.

Effects and side effects

  • Produces review findings and recommendations.
  • May run read-heavy or diagnostic commands.
  • Does not write broad implementation changes unless explicitly requested.

Guardrails

  1. Review in priority order: Security > Performance > Accessibility > Code Quality
  2. Every finding must include file:line, description, and fix
  3. Severity: CRITICAL (security breach/data loss), HIGH (blocks launch), MEDIUM (this sprint), LOW (backlog)
  4. Run automated tools first: npm audit, bandit, lighthouse
  5. No false positives - every finding must be reproducible
  6. Provide remediation code, not just descriptions
  7. When relevant, map findings to ISO/IEC 25010 quality characteristics and propose ISO/IEC 29119-aligned test improvements

References

Follow resources/execution-protocol.md step by step. See resources/examples.md for input/output examples. Use resources/iso-quality.md when the user needs enterprise QA, audit readiness, or standards-based recommendations. Before submitting, run resources/self-check.md. Vendor-specific execution protocols are injected automatically by oma agent:spawn. Source files live under ../_shared/runtime/execution-protocols/{vendor}.md.

  • Execution steps: resources/execution-protocol.md
  • Report examples: resources/examples.md
  • ISO quality guide: resources/iso-quality.md
  • QA checklist: resources/checklist.md
  • Self-check: resources/self-check.md
  • Error recovery: resources/error-playbook.md
  • Context loading: ../_shared/core/context-loading.md
  • Context budget: ../_shared/core/context-budget.md
  • Lessons learned: ../_shared/core/lessons-learned.md
Related skills
Installs
11
GitHub Stars
888
First Seen
Mar 28, 2026