# Review Learnings
You are a QA engineering lead reviewing accumulated field observations from QA sessions. Your job is to read the learnings ledger, identify patterns, and produce a prioritized improvement plan for the qa-skills plugin. Every recommendation must name exact files and describe concrete edits — no vague suggestions.
## Phase 1: Load the Ledger
Read .qa-learnings/ledger.md from the current project directory.
If the file does not exist or has no entries (only the header), inform the user:
"No learnings recorded yet. Run QA sessions — each agent and skill automatically records observations to the ledger."
Then stop.
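The agent performs this gate itself by reading the file, but the logic can be sketched in Python. This is a minimal illustration, assuming the ledger's first non-blank line is its header — the real ledger format may differ:

```python
from pathlib import Path

# Ledger location used by the qa-skills plugin.
LEDGER = Path(".qa-learnings/ledger.md")

def ledger_has_entries(path: Path = LEDGER) -> bool:
    """True only if the ledger exists and holds lines beyond its header."""
    if not path.exists():
        return False
    lines = [line for line in path.read_text().splitlines() if line.strip()]
    # Treat the first non-blank line as the header; anything after it is an entry.
    return len(lines) > 1

if not ledger_has_entries():
    print("No learnings recorded yet. Run QA sessions — each agent and "
          "skill automatically records observations to the ledger.")
```

If the check fails, the review stops here; otherwise the entries feed into Phase 2.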
## Phase 2: Analyze and Prioritize
Read every entry. Group entries that describe the same underlying issue into clusters, even if they come from different agents or use different wording. Name each cluster with a short descriptive title.
Prioritize by real impact on plugin quality — issues that cause wrong QA results outrank additive improvements. For each cluster, identify the specific plugin files that need to change by reading them.
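Clustering here is a semantic judgment call made while reading, not an algorithm, but the shape of the grouping can be sketched. In this hypothetical sketch, `topic` stands for the underlying issue the reviewer assigns to each entry; the entry fields mirror the ledger's date/source/observation structure:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Entry:
    date: str    # when the observation was recorded
    source: str  # which agent or skill reported it
    topic: str   # underlying issue, assigned during review
    text: str    # the observation itself

def cluster(entries: list[Entry]) -> dict[str, list[Entry]]:
    """Group entries by underlying issue, regardless of reporting agent."""
    clusters: dict[str, list[Entry]] = defaultdict(list)
    for e in entries:
        clusters[e.topic].append(e)
    # Surface the most-reported issues first as a rough impact proxy;
    # the reviewer still reorders by real impact on QA correctness.
    return dict(sorted(clusters.items(), key=lambda kv: -len(kv[1])))
```

Observation count is only a starting point for ordering — a single entry describing wrong QA results still outranks a frequently reported cosmetic nit.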
## Phase 3: Present the Report
Output:
## QA Learnings Review
**Entries analyzed:** [N]
**Clusters identified:** [N]
**Date range:** [earliest] to [latest]
### 1. [Cluster Title]
**Entries:** [N] observations
**Sources:** [which agents/skills reported this]
**Summary:** [2-3 sentence synthesis]
**Proposed Change:**
- **File:** `[exact path]`
- **Edit:** [specific description of what to add, modify, or remove]
**Evidence:**
- [date] ([source]): "[observation quote]"
- [date] ([source]): "[observation quote]"
---
### 2. [Cluster Title]
[same format]
After the report, include:
To share these findings with the plugin maintainers, run `/submit-learnings`.
## Phase 4: Implement
After presenting the report, ask: "Want me to implement the top improvements?"
If yes: implement each improvement by reading the target file, making the edit, and committing. Use one commit per improvement, with the message `fix(qa): [description] — from learnings review`. After all edits are committed, remove the implemented entries from the ledger (leave the header intact).
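The one-commit-per-improvement step can be sketched as a small helper. The `commit_improvement` function and its signature are illustrative, not part of the plugin; only the commit message format comes from this document:

```python
import subprocess

def commit_message(description: str) -> str:
    """Build the agreed commit message for a learnings-review fix."""
    return f"fix(qa): {description} — from learnings review"

def commit_improvement(path: str, description: str) -> None:
    """Stage exactly one edited file and commit it on its own."""
    subprocess.run(["git", "add", path], check=True)
    subprocess.run(["git", "commit", "-m", commit_message(description)],
                   check=True)
```

Committing each improvement separately keeps the history reviewable and makes it easy to revert one change without losing the rest.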
## More from neonwatty/qa-skills
### playwright-runner
Executes workflow markdown files interactively via Playwright CLI, stepping through each workflow action in a real browser. Use when the user says "run workflows", "run playwright", "test workflows", "execute workflows", or wants to interactively test their app against workflow documentation. Supports desktop, mobile, and multi-user workflows with authentication.
### multi-user-workflow-generator
Generates multi-user workflow documentation by interviewing the user about personas, exploring the codebase for multi-user patterns, then walking through the live app with per-persona Playwright CLI named sessions to co-author interleaved, persona-tagged workflows. Use when the user says "generate multi-user workflows", "create multi-user workflows", or "generate concurrent user workflows". Produces persona-tagged workflow markdown that feeds into the multi-user converter and Playwright runner.
### keyword-wedge
Analyzes an app's codebase and cross-references Google Search Console, PostHog, and Google Keyword Planner to identify low-competition keyword footholds and track expansion into adjacent terms. This skill should be used when the user says "keyword wedge", "find keyword opportunities", "seo analysis", "keyword strategy", "find search wedges", "keyword research for my app", "grow organic traffic", "what keywords should I target", "SEO for my app", "organic search strategy", or "how to rank higher". Generates markdown and HTML reports and maintains state across runs for expansion tracking.
### submit-learnings
Filters and submits accumulated QA learnings as a GitHub issue (with optional PR) on the plugin repo. Use when the user says "submit learnings", "share learnings", "report learnings upstream", or "open issue for learnings".
### resilience-audit
Audits web apps for resilience against unexpected user behavior — accidental, edge-case, and chaotic. Use this when the user says "resilience audit", "chaos audit", "what could go wrong", "edge case audit", "idiot-proof this", "break this app", "stress test the UX", or "find UX dead ends". Explores the codebase to map user flows, then systematically identifies ways the app can break, get stuck, or behave unexpectedly when users do things the developer didn't anticipate. Covers navigation dead ends, double-submits, interrupted operations, cross-device issues, input edge cases, timing bugs, error recovery gaps, and unintended usage patterns. Produces a prioritized report with findings, code locations, and fix recommendations, then optionally verifies findings interactively in a browser.
### use-profiles
Load saved Playwright storageState authentication profiles before browser automation. Activates when `.playwright/profiles.json` exists and browser work begins on authenticated pages. Trigger phrases include "use profile", "load profile", "browser as [role]", "authenticated browser", "logged in browser session".