# accelint-persona-review

## Persona-Based Design Review

Evaluate Figma designs from the perspective of specific operator personas. Generic UX advice ("make it more intuitive") misses the insights that emerge from a persona's documented profile: their responsibilities, pain points, systems they monitor, and operational context.
## Workflow
### 1. Load Persona Profile

Start by loading the persona index to find the available personas:

`Read references/personas/_index.md`

Then load the specific persona the user requested:

`Read references/personas/{persona-id}.md`

- Do NOT load multiple persona files; load only the one the user requested.
- Do NOT load `evaluation-examples.md` yet; wait until Step 4.

If the persona doesn't exist, list the available options from the index and ask the user to choose.
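The resolution rule above can be sketched in TypeScript. This is purely illustrative: the `PersonaIndex` shape and function name are assumptions, and the real index is a markdown file rather than a data structure.

```typescript
// Hypothetical sketch of the Step 1 persona-resolution rule.
type PersonaIndex = Record<string, string>; // persona id → display name

type Resolution =
  | { kind: "load"; path: string }       // load exactly one profile
  | { kind: "ask"; available: string[] } // unknown id: ask the user

function resolvePersona(requestedId: string, index: PersonaIndex): Resolution {
  if (requestedId in index) {
    // Load only the single requested profile, never all of them.
    return { kind: "load", path: `references/personas/${requestedId}.md` };
  }
  // Unknown persona: surface the available options and ask the user to choose.
  return { kind: "ask", available: Object.keys(index) };
}
```

The discriminated return type mirrors the skill's two outcomes: either one profile path to read, or a list of options to present.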
### 2. Gather Design Context

- **Figma URL provided**: Use the appropriate Figma MCP tool to fetch the design (e.g., `mcp__figma-desktop__get_design_context` with the node ID extracted from the URL pattern `node-id=1-2` → `1:2`).
- **No URL (default)**: Use the Figma MCP desktop tools to get the current file/selection. If nothing is selected, prompt the user to select a frame or component.
- **Figma MCP unavailable**: Ask the user to provide a screenshot of the design. Analyze the screenshot by visual inspection, but note that without full design context (component properties, layout constraints, interaction states) the review is limited to visual elements.
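The node-ID conversion above (`node-id=1-2` → `1:2`) can be sketched as a small helper. The function name is illustrative, and this assumes the common Figma share-URL shape; verify against the actual URLs you receive.

```typescript
// Hypothetical helper: extract a Figma node ID from a share URL and
// convert the "1-2" query form into the "1:2" form the MCP tools expect.
function extractNodeId(figmaUrl: string): string | null {
  const match = figmaUrl.match(/node-id=(\d+)-(\d+)/);
  if (!match) return null; // no node-id parameter present
  return `${match[1]}:${match[2]}`;
}
```

For example, a URL ending in `?node-id=12-34` yields `"12:34"`, and a URL without the parameter yields `null`, signaling the fallback path (current selection or screenshot).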
### 3. Search Supporting Documentation

Use the Outline MCP to find relevant context. Outline requires workspace selection, so start with:

`ListMcpResourcesTool(server: "outline")`

Search for documents covering:

- UI standards/guidelines for this operator role
- Previous design reviews or feedback
- System requirements or specifications
- Training materials or user guides

Prioritize documents that mention the persona's role, responsibilities, or the systems they interact with.

**Outline MCP unavailable**: Proceed with the review based solely on the persona profile and design context. Note in your review that supporting documentation wasn't available, and flag the areas where organizational standards should be consulted.
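The prioritization rule above can be pictured as a simple term-count ranking over search results. This is a sketch, not the Outline API: the `Doc` shape and the scoring function are assumptions for illustration.

```typescript
// Hypothetical ranking of search results by how many persona terms
// (role, responsibilities, systems) each document mentions.
interface Doc {
  title: string;
  text: string;
}

function rankDocs(docs: Doc[], personaTerms: string[]): Doc[] {
  const score = (d: Doc): number =>
    personaTerms.reduce(
      (n, term) => n + (d.text.toLowerCase().includes(term.toLowerCase()) ? 1 : 0),
      0,
    );
  // Highest-scoring documents first; copy to avoid mutating the input.
  return [...docs].sort((a, b) => score(b) - score(a));
}
```

In practice the "terms" would come straight from the persona profile, e.g. the role name and the systems listed under "Sees".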
### 4. Analyze & Critique

Load the evaluation examples to calibrate your approach:

`Read references/evaluation-examples.md`

Use the evaluation framework below, but adapt the structure to your findings; don't force insights into rigid sections.
## Evaluation Framework
### Cognitive Load Assessment
- Information density: Can they process all displayed data given their experience level and work tempo?
- Visual hierarchy: Does critical info for their role stand out immediately?
- Mental models: Does the interface match systems they already use (documented in "Sees")?
### Communication Pattern Alignment
- "Says & Does" support: Does the UI facilitate their typical actions and communications?
- Workflow integration: How well does this fit documented workflows?
- Error prevention: Does it prevent mistakes aligned with their documented pain points?
### Pain Point Mitigation
- Direct pain relief: Which documented pain points does this design address?
- Inadvertent pain creation: Does this introduce new friction or complexity?
- System consolidation: If they juggle multiple systems, does this reduce context switching?
### Context Awareness
- Experience calibration: Is complexity appropriate for their rank/experience (e.g., E4 vs E7)?
- Responsibility alignment: Does the design support their specific responsibilities?
- Schedule considerations: Can they use this effectively given their work schedule/tempo?
### System Visibility
- "Sees" coverage: Are the systems they monitor visible/accessible (e.g., BCS-F, RS-4, ERSA)?
- Integration gaps: What critical systems are missing?
- Redundancy: Is there unnecessary duplication of information they see elsewhere?
### Communication Support
- "Hears" integration: Does the design support their communication channels (e.g., Surveillance Net)?
- Information relay: Can they easily relay information as documented in "Says & Does"?
- Notification design: Are alerts/notifications appropriate for their attention budget?
## Output Structure

Provide the critique in this general format (adapt as needed):
## Persona Review: [Persona Name]
### Design Summary
[1-2 sentence summary of what you reviewed]
### Critical Findings
[2-3 most important insights specific to this persona]
### Detailed Evaluation
**Cognitive Load**: [Assessment with specific examples from persona profile]
**Communication Patterns**: [How well it supports their "Says & Does"]
**Pain Point Mitigation**: [Which pain points addressed/created]
**Context Awareness**: [Appropriate for their experience/responsibilities]
**System Visibility**: [Coverage of their "Sees" systems]
**Communication Support**: [Integration with their "Hears" channels]
### Recommendations
[Prioritized list of actionable improvements, grounded in persona profile]
### Supporting References
[Links to relevant Outline docs found during research]
This is an example structure, not a rigid template. Adapt based on:
- Depth of findings in specific areas
- Completeness of persona profile
- Design scope (component vs. full dashboard)
The critical elements are:
- Clear connection to persona's documented profile
- Specific, actionable recommendations
- Prioritization based on operational impact
- Evidence from supporting docs (when available)
## Evaluation Principles
**Be specific to the persona**: Generic UX advice helps no one. Ground every observation in the persona's documented profile (Profile, About, Hears, Sees, Says & Does, Pain Points).
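The profile sections named above can be summarized as a type, purely for orientation. Real profiles are markdown files; the field names below are assumptions derived from the section names this skill references, not a real schema.

```typescript
// Illustrative shape of a persona profile's documented sections.
interface PersonaProfile {
  profile: string;       // rank, role, experience (e.g., "E4 AST")
  about: string;         // operational context and responsibilities
  hears: string[];       // communication channels (e.g., "Surveillance Net")
  sees: string[];        // systems monitored (e.g., "BCS-F", "RS-4", "ERSA")
  saysAndDoes: string[]; // typical actions and communications
  painPoints: string[];  // documented friction the design should mitigate
}
```

Each evaluation-framework section maps onto one or more of these fields, which is why a complete profile matters: "System Visibility" draws on `sees`, "Communication Support" on `hears`, and so on.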
**Prioritize operational impact**: A minor UI inconsistency that breaks muscle memory for a high-tempo operator matters more than a major visual-polish issue. Consider the stakes of their work.

**Assume domain expertise**: These operators are experts in their field. Don't suggest "simplifications" that strip out complexity they need to do their jobs.

**Consider the full context**: Review the entire profile; insights often emerge from connections between sections. A pain point in one area may relate to the systems they monitor or the communication channels they use.

**Connect across profile sections**: The most valuable insights synthesize multiple parts of the profile (e.g., a pain point + systems they see + actions they take = an integrated solution opportunity).
## NEVER Do When Reviewing
- NEVER give generic UX advice like "make it more intuitive" or "improve the user experience" - these could apply to any interface. Ground every observation in the persona's specific profile.
- NEVER suggest simplifications that remove necessary complexity - these operators are domain experts. Complexity that serves their documented responsibilities is valuable.
- NEVER ignore operational context - a minor UI inconsistency that breaks muscle memory matters more than major visual polish issues for high-tempo operators.
- NEVER treat all personas as the same - an E4 AST review should differ from an O4 MCC review for the same interface.
- NEVER skip loading the persona profile - generic reviews without persona context miss the entire value of this skill.
## References

- Persona profiles: `references/personas/{persona-id}.md`
- Persona index: `references/personas/_index.md`
- Evaluation examples: `references/evaluation-examples.md`

Load these on-demand to minimize context usage.