# qa-analyzing-ux-flows

## Analyzing UX Flows
This skill evaluates user experience quality by detecting friction points, accessibility barriers, confusing navigation, and violations of established UX heuristics. It goes beyond functional correctness to assess whether the application is genuinely usable.
## When to Use
- After feature completion to assess UX quality before release
- When users report "it works but it's confusing"
- For accessibility compliance audits (WCAG 2.1 AA)
- To evaluate onboarding or sign-up flows
- When redesigning existing workflows
## Evaluation Framework

### Nielsen's 10 Usability Heuristics

Use these as your evaluation lens for every flow:
1. **Visibility of system status**: Does the user always know what's happening?
   - Loading indicators during async operations
   - Progress bars in multi-step flows
   - Success/error feedback after actions
2. **Match between system and real world**: Does the UI use familiar language?
   - Jargon-free labels and descriptions
   - Intuitive icons and metaphors
   - Logical information hierarchy
3. **User control and freedom**: Can users undo mistakes?
   - Back button works in all flows
   - Undo/cancel options for destructive actions
   - Clear exit paths from any state
4. **Consistency and standards**: Does the UI follow conventions?
   - Consistent button styles and placement
   - Standard form patterns (labels, validation)
   - Platform-appropriate interactions
5. **Error prevention**: Does the UI prevent mistakes before they happen?
   - Confirmation dialogs for irreversible actions
   - Input validation before submission
   - Disabled states for unavailable actions
6. **Recognition rather than recall**: Is information visible, not memorized?
   - Breadcrumbs showing location
   - Recent items / search history
   - Contextual help text
7. **Flexibility and efficiency**: Can experienced users move quickly?
   - Keyboard shortcuts
   - Bulk actions
   - Saved preferences / defaults
8. **Aesthetic and minimalist design**: Is the UI clutter-free?
   - No unnecessary elements
   - Clear visual hierarchy
   - Appropriate whitespace
9. **Help users recognize, diagnose, and recover from errors**:
   - Error messages in plain language
   - Specific guidance on how to fix the issue
   - Link to help documentation when relevant
10. **Help and documentation**:
    - Contextual tooltips
    - Onboarding tours for new features
    - Accessible FAQ or help center
### Flow Metrics

For each user flow analyzed, measure:
**Click depth**: number of clicks to complete the task
- Excellent: ≤3 clicks for common tasks
- Acceptable: 4-5 clicks
- Needs improvement: 6+ clicks

**Cognitive load**: number of decisions required per step
- Each form field is a decision
- Each navigation choice is a decision
- Fewer decisions = lower cognitive load

**Error recovery cost**: steps to recover from a mistake
- Best: 1 click (undo button)
- Acceptable: 2-3 clicks (back + re-enter)
- Poor: start over from scratch

**Time to complete**: expected time for the full flow
- Benchmark against similar applications
- Flag if significantly longer than expected
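The thresholds above can be sketched as a small scoring helper. The session-log schema used here (a flat list of recorded actions) is purely illustrative, not necessarily the format the exploration scripts produce:

```python
# Illustrative flow-metric scoring; the action schema is an assumption.

def rate_click_depth(clicks: int) -> str:
    """Map a click count onto the thresholds above."""
    if clicks <= 3:
        return "excellent"
    if clicks <= 5:
        return "acceptable"
    return "needs improvement"

def flow_metrics(actions: list[dict]) -> dict:
    """Count clicks and decisions in a recorded session."""
    clicks = sum(1 for a in actions if a["type"] == "click")
    # Treat every click and every form input as one decision.
    decisions = sum(1 for a in actions if a["type"] in ("click", "input"))
    return {
        "click_depth": clicks,
        "click_rating": rate_click_depth(clicks),
        "cognitive_load": decisions,
    }

actions = [
    {"type": "click", "target": "cart"},
    {"type": "input", "target": "email"},
    {"type": "click", "target": "checkout"},
]
print(flow_metrics(actions))
# {'click_depth': 2, 'click_rating': 'excellent', 'cognitive_load': 3}
```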
### Accessibility Audit (WCAG 2.1 AA)

Check these critical accessibility requirements:
#### Perceivable
- All images have alt text (or are marked decorative)
- Color is not the only means of conveying information
- Text contrast ratio ≥ 4.5:1 (normal text) or ≥ 3:1 (large text)
- Content is readable at 200% zoom
- Video has captions; audio has transcripts
#### Operable
- All functionality available via keyboard
- No keyboard traps (focus can move freely)
- Skip navigation link present
- Focus indicators visible on all interactive elements
- No content flashes more than 3 times per second
- Page titles are descriptive
#### Understandable
- Language attribute set on HTML element
- Form labels associated with inputs
- Error messages identify the field and describe the error
- Consistent navigation across pages
#### Robust
- Valid HTML (no duplicate IDs, proper nesting)
- ARIA attributes used correctly
- Custom components have appropriate roles
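The contrast requirement is the easiest item in this audit to verify programmatically. A minimal check implementing the WCAG 2.1 relative-luminance and contrast-ratio formulas:

```python
# Contrast check per the WCAG 2.1 definitions of relative luminance
# and contrast ratio; AA requires >= 4.5:1 for normal text.

def _linear(channel: int) -> float:
    """Convert an sRGB channel (0-255) to its linearized value."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Light grey (#AAAAAA) on white fails AA for normal text.
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)  # False
```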
## Using the Analysis Script

```bash
python skills/qa-analyzing-ux-flows/scripts/analyze_flow.py \
  --url https://staging.example.com \
  --flow checkout \
  --output ux-analysis/
```
Or analyze from an existing exploration session:
```bash
python skills/qa-analyzing-ux-flows/scripts/analyze_session.py \
  --session exploration-results/session-log.json \
  --output ux-analysis/
```
### Accessibility Checker

Run automated accessibility checks (axe-core based):

```bash
python skills/qa-analyzing-ux-flows/scripts/accessibility_check.py \
  --url https://staging.example.com \
  --pages /,/login,/dashboard \
  --output accessibility-results/
```
## Output Format

```
ux-analysis/
├── flow-analysis.json          ← structured flow metrics
├── heuristic-evaluation.json   ← per-heuristic scores and findings
├── accessibility-report.json   ← WCAG violations found
├── friction-map.json           ← identified friction points
└── ux-summary.md               ← human-readable UX assessment
```
### Friction Point Format

```json
{
  "friction_id": "UX-001",
  "severity": "high",
  "heuristic": "Error prevention",
  "location": "/checkout (payment step)",
  "description": "No confirmation dialog when user changes payment method, causing cart reset",
  "user_impact": "Users lose cart contents when exploring payment options",
  "recommendation": "Add 'Are you sure?' dialog, or preserve cart across payment method changes",
  "effort_estimate": "small"
}
```
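Records in this shape can be sanity-checked before reporting. In this hypothetical validator the required field names mirror the example above, but the allowed severity values are an assumption, not a documented schema:

```python
# Hypothetical friction-point validator; the SEVERITIES set is assumed.

REQUIRED = {"friction_id", "severity", "heuristic", "location",
            "description", "user_impact", "recommendation", "effort_estimate"}
SEVERITIES = {"critical", "high", "medium", "low"}

def validate_friction(point: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - point.keys())]
    if point.get("severity") not in SEVERITIES:
        problems.append(f"unknown severity: {point.get('severity')}")
    return problems

# An incomplete record with an unrecognized severity fails both checks.
print(validate_friction({"friction_id": "UX-001", "severity": "urgent"}))
```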
## UX Score Card

Generate an overall UX score card for the analyzed flows:

```json
{
  "flow": "checkout",
  "overall_score": 7.2,
  "heuristic_scores": {
    "visibility": 8,
    "match_real_world": 7,
    "user_control": 6,
    "consistency": 8,
    "error_prevention": 5,
    "recognition_not_recall": 7,
    "flexibility": 6,
    "minimalist_design": 9,
    "error_recovery": 5,
    "help_documentation": 7
  },
  "accessibility_score": "AA (partial)",
  "click_depth": 5,
  "friction_points": 3,
  "critical_issues": 1
}
```
Each heuristic is scored 1-10:
- 9-10: Excellent — exemplary UX
- 7-8: Good — minor improvements possible
- 5-6: Adequate — noticeable issues
- 3-4: Poor — significant usability problems
- 1-2: Critical — flow is nearly unusable
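One plausible way to derive `overall_score` is an unweighted mean of the ten heuristic scores; note that a plain mean of the example's scores gives 6.8 rather than the 7.2 shown, so the analysis script presumably applies its own weighting. As an illustrative sketch only:

```python
# Unweighted mean of heuristic scores -- an assumption, since the
# example score card implies some heuristics are weighted differently.

def overall_score(heuristic_scores: dict[str, int]) -> float:
    """Average the per-heuristic scores, rounded to one decimal."""
    return round(sum(heuristic_scores.values()) / len(heuristic_scores), 1)

scores = {
    "visibility": 8, "match_real_world": 7, "user_control": 6,
    "consistency": 8, "error_prevention": 5, "recognition_not_recall": 7,
    "flexibility": 6, "minimalist_design": 9, "error_recovery": 5,
    "help_documentation": 7,
}
print(overall_score(scores))  # 6.8
```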