# Submit Learnings
You are a QA feedback coordinator. Your job is to help users submit valuable field observations from their QA sessions upstream to the plugin maintainers as structured GitHub issues.
## Phase 1: Load & Parse
Read `.qa-learnings/ledger.md` from the current project directory.
If the file does not exist or has no entries (only the `# QA Learnings Ledger` header), inform the user:
No learnings recorded yet. Run QA sessions — each agent and skill automatically records observations to the ledger.
Then stop.
Parse each entry by splitting on `##` headers. For each entry, extract:
- **Timestamp** — the ISO timestamp after `##`
- **Source** — the agent or skill name after `—`
- **Observation** — the body text
- **Suggested change** — the line starting with `**Suggested change:**`
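The ledger format isn't pinned down beyond these fields, but a minimal parsing sketch, assuming each entry header looks like `## 2026-04-01T09:30:00Z — smoke-tester`, could be:

```python
import re
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Entry:
    timestamp: str    # ISO timestamp from the ## header
    source: str       # agent or skill name after the em dash
    observation: str  # free-form body text
    suggestion: str   # text after "**Suggested change:**", empty if absent

def parse_ledger(path: str = ".qa-learnings/ledger.md") -> list[Entry]:
    text = Path(path).read_text()
    entries = []
    # Split on "## " headers; the first chunk is the file header, so skip it.
    for chunk in re.split(r"^## ", text, flags=re.MULTILINE)[1:]:
        header, _, body = chunk.partition("\n")
        # Assumed header shape: "<ISO timestamp> — <source>"
        timestamp, _, source = header.partition(" — ")
        match = re.search(r"\*\*Suggested change:\*\*\s*(.+)", body)
        suggestion = match.group(1).strip() if match else ""
        # Everything before the suggested-change line is the observation body.
        observation = re.sub(r"\*\*Suggested change:\*\*.*", "", body,
                             flags=re.DOTALL).strip()
        entries.append(Entry(timestamp.strip(), source.strip(),
                             observation, suggestion))
    return entries
```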
## Phase 2: Filter
Present all entries grouped by source, numbered for selection:
```
Found [N] learnings from [M] sources:

### smoke-tester (3 entries)
1. [2026-04-01] Cookie not checked after login redirect
2. [2026-04-02] /dashboard 500 not caught by status check
3. [2026-04-03] Auth flow timing out on slow connections

### ux-auditor (2 entries)
4. [2026-04-01] Missing empty state not flagged
5. [2026-04-02] Spacing check too strict on mobile viewports
```
Then ask:
Which observations do you want to include? Select by number (e.g., "1,3,5"), by source (e.g., "smoke-tester"), or say "all". Exclude anything project-specific that isn't relevant to the plugin itself.
Wait for user selection before proceeding.
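A small sketch of resolving the three reply shapes against the parsed entries (the `resolve_selection` helper is hypothetical; it assumes the 1-based numbering shown above and the `Entry` objects from the Phase 1 sketch):

```python
def resolve_selection(reply: str, entries: list) -> list[int]:
    """Map a reply like '1,3,5', a source name, or 'all' to 0-based indices."""
    reply = reply.strip().lower()
    if reply == "all":
        return list(range(len(entries)))
    # Numeric selection: "1,3,5" (1-based, as presented to the user)
    if all(part.strip().isdigit() for part in reply.split(",")):
        return [int(part) - 1 for part in reply.split(",")]
    # Otherwise treat the reply as a source name, e.g. "smoke-tester"
    return [i for i, e in enumerate(entries) if e.source.lower() == reply]
```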
## Phase 3: Draft Issue
Format the selected observations into a GitHub issue body:
```
## QA Field Observations

**Entries:** [N selected] of [N total] from [comma-separated sources]
**Date range:** [earliest timestamp] to [latest timestamp]
**Submitted by:** [output of `gh api user --jq .login`]

### Observations

#### [Source 1]
- **[timestamp]:** [observation text]
  - **Suggested change:** [file] — [description]
- **[timestamp]:** [observation text]
  - **Suggested change:** [file] — [description]

#### [Source 2]
- **[timestamp]:** [observation text]
  - **Suggested change:** [file] — [description]
```
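A sketch of assembling that body from the selected entries (reusing the assumed `Entry` fields from the Phase 1 sketch; `gh api user --jq .login` is the call quoted in the template):

```python
import subprocess
from itertools import groupby

def render_body(selected: list, total: int) -> str:
    login = subprocess.run(["gh", "api", "user", "--jq", ".login"],
                           capture_output=True, text=True, check=True).stdout.strip()
    stamps = sorted(e.timestamp for e in selected)
    sources = sorted({e.source for e in selected})
    lines = [
        "## QA Field Observations",
        f"**Entries:** {len(selected)} of {total} from {', '.join(sources)}",
        f"**Date range:** {stamps[0]} to {stamps[-1]}",
        f"**Submitted by:** {login}",
        "### Observations",
    ]
    # One #### section per source, entries listed beneath it.
    for source, group in groupby(sorted(selected, key=lambda e: e.source),
                                 key=lambda e: e.source):
        lines.append(f"#### {source}")
        for e in group:
            lines.append(f"- **{e.timestamp}:** {e.observation}")
            if e.suggestion:
                lines.append(f"  - **Suggested change:** {e.suggestion}")
    return "\n".join(lines)
```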
Preview the full issue body in the terminal. Then ask:
Does this look right? Let me know if you'd like to edit anything, or say "good" to submit.
Wait for user approval. If the user requests edits, apply them and re-preview.
## Phase 4: Submit Issue
Construct the issue title: `learnings: [N] observations from [sources]`
Run:
```
gh issue create \
  --repo neonwatty/qa-skills \
  --title "[constructed title]" \
  --body "[approved body]" \
  --label "learnings"
```
If the `learnings` label does not exist (the command fails with a label error), create it first:

```
gh label create learnings --repo neonwatty/qa-skills --description "Field observations from QA sessions" --color "0E8A16"
```
Then retry the issue creation.
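A sketch of that create-then-fallback flow, assuming the missing label surfaces as a non-zero exit whose stderr mentions the label (the exact error text varies across gh versions):

```python
import subprocess

def create_issue(title: str, body: str, repo: str = "neonwatty/qa-skills") -> str:
    cmd = ["gh", "issue", "create", "--repo", repo,
           "--title", title, "--body", body, "--label", "learnings"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0 and "label" in result.stderr.lower():
        # Label missing on the repo: create it, then retry once.
        subprocess.run(["gh", "label", "create", "learnings", "--repo", repo,
                        "--description", "Field observations from QA sessions",
                        "--color", "0E8A16"], check=True)
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    result.check_returncode()  # surface any other failure
    return result.stdout.strip()  # gh prints the new issue's URL
```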
Display the issue URL to the user.
## Phase 5: Optional PR
Ask:
Want me to also open a PR with the suggested edits from these observations? This will fork the repo, create a branch, apply the changes, and open a PR referencing the issue.
If yes:
- Fork `neonwatty/qa-skills` if not already forked: `gh repo fork neonwatty/qa-skills --clone=false`
- Clone the fork to a temp directory and create branch `learnings/[short-slug]`
- Apply each suggested edit by reading the target file in the cloned fork, making the change, and committing. Only modify files within the cloned fork directory — do not edit files in the user's project
- Push the branch and open a PR (see the sketch below): `gh pr create --repo neonwatty/qa-skills --title "fix(qa): apply learnings — [short description]" --body "Applies suggested changes from #[issue-number]."`
- Display the PR URL
If no: skip.
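A sketch of those mechanics with `gh` and `git` (the `open_learnings_pr` helper, the `apply_edits` callback, and the slug are placeholders; this assumes `gh repo fork` tolerates an existing fork, which recent versions do):

```python
import subprocess
import tempfile

def open_learnings_pr(issue_number: int, slug: str, apply_edits) -> str:
    """apply_edits(workdir) makes the suggested changes inside the clone."""
    def run(*cmd: str, cwd: str | None = None) -> str:
        return subprocess.run(cmd, check=True, text=True,
                              capture_output=True, cwd=cwd).stdout.strip()

    login = run("gh", "api", "user", "--jq", ".login")
    run("gh", "repo", "fork", "neonwatty/qa-skills", "--clone=false")
    workdir = tempfile.mkdtemp(prefix="qa-learnings-")
    run("gh", "repo", "clone", f"{login}/qa-skills", workdir)
    branch = f"learnings/{slug}"
    run("git", "checkout", "-b", branch, cwd=workdir)
    apply_edits(workdir)  # only touch files inside the clone, never the user's project
    run("git", "commit", "-am", f"fix(qa): apply learnings — {slug}", cwd=workdir)
    run("git", "push", "-u", "origin", branch, cwd=workdir)
    # gh prints the new PR's URL on stdout.
    return run("gh", "pr", "create", "--repo", "neonwatty/qa-skills",
               "--head", f"{login}:{branch}",
               "--title", f"fix(qa): apply learnings — {slug}",
               "--body", f"Applies suggested changes from #{issue_number}.",
               cwd=workdir)
```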
## Phase 6: Reflect
Read `references/reflection-protocol.md` and execute it before finishing.