Response Feedback
Feedback mechanisms for AI responses
What it solves
A Response Feedback pattern gives teams a reliable way to collect structured feedback on AI output quality while preserving the context needed to improve later responses. It is most useful for lightweight quality signals, such as thumbs up and down on generated responses. Like adjacent patterns, it should reduce friction without hiding the state, rules, or recovery paths people need to keep moving.
When to use
- Thumbs up and down on generated responses
- Quality review flows for internal copilots
- Regeneration loops after an unsatisfactory answer
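A minimal sketch of what "structured feedback with preserved context" could look like as a data shape. The field names are illustrative assumptions, not part of the pattern reference:

```typescript
// Hypothetical shape for a structured feedback event; field names are
// illustrative, not taken from references/pattern.md.
interface ResponseFeedback {
  responseId: string;
  rating: "up" | "down";
  comment?: string;
  // Context preserved so later responses can be tuned with evidence.
  model: string;
  promptTokens: number;
  latencyMs: number;
  submittedAt: string; // ISO 8601
}

function buildFeedback(
  responseId: string,
  rating: "up" | "down",
  context: { model: string; promptTokens: number; latencyMs: number },
  comment?: string,
): ResponseFeedback {
  return {
    responseId,
    rating,
    comment,
    ...context,
    submittedAt: new Date().toISOString(),
  };
}
```

Capturing model and latency context alongside the rating is what separates this pattern from a generic like button: the signal stays attributable to a specific response under specific conditions.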
When to avoid
- Avoid adding AI-specific UI when a standard non-AI workflow would be clearer and more reliable.
- Do not expose advanced controls unless users can actually benefit from them.
- Do not hide model uncertainty behind polished visuals alone.
Implementation workflow
- Confirm the pattern matches the problem and constraints before copying the example.
- Start from the anatomy and examples in references/pattern.md, then choose the smallest viable variation.
- Apply accessibility, performance, and interaction guardrails before layering visual polish.
- Use the testing guidance to verify behavior across keyboard, screen reader, responsive, and failure scenarios.
Accessibility guardrails
Keyboard Interaction
- Verify that response feedback can be completed using keyboard alone.
- Keep focus order logical when the pattern opens, updates, or reveals additional UI.
- Preserve a visible focus state that is still readable at high zoom.
Screen Reader Support
- Use semantic elements first, then add ARIA only where semantics alone are not enough.
- Announce state changes such as errors, loading, or completion in the right place and with the right politeness.
- Connect labels, hints, and status text with aria-describedby or structural headings when useful.
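The guardrails above can be sketched as plain attribute and announcement helpers. The id values, state names, and announcement copy are assumptions for illustration; in practice the buttons would be native `<button>` elements so keyboard and role semantics come for free:

```typescript
// Sketch of ARIA wiring for thumbs up/down controls; ids and copy are
// assumptions, not part of the pattern spec.
type FeedbackState = "idle" | "submitting" | "done" | "error";

function feedbackButtonAttrs(rating: "up" | "down", selected: boolean) {
  return {
    // aria-pressed exposes the toggle state to assistive technology.
    "aria-pressed": String(selected),
    "aria-label": rating === "up" ? "Good response" : "Bad response",
    // Hint text lives in a separate element referenced here.
    "aria-describedby": "feedback-hint",
  };
}

function statusAnnouncement(state: FeedbackState): { role: string; text: string } {
  // A polite live region (role="status") announces progress and completion;
  // failures use role="alert" so they interrupt.
  switch (state) {
    case "submitting":
      return { role: "status", text: "Sending feedback…" };
    case "done":
      return { role: "status", text: "Feedback sent. Thank you." };
    case "error":
      return { role: "alert", text: "Feedback failed to send. Try again." };
    default:
      return { role: "status", text: "" };
  }
}
```

Keeping the announcement in a live region near the feedback controls satisfies "announce state changes in the right place and with the right politeness" without custom focus management.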
Visual Accessibility
- Do not rely on color alone to convey severity, completion, or selection state.
Performance guardrails
- Budget for network latency, token usage, and client-side rendering of long responses together, not as separate concerns.
- Stream or chunk content when it improves time-to-first-value, but stabilize layout so reading does not become jittery.
- Track expensive states such as long prompts, model changes, and retries so you can tune the experience with evidence.
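One way to track those expensive states together, rather than as separate concerns, is to compare each response against a single combined budget. The metric and flag names below are illustrative assumptions:

```typescript
// Minimal sketch of flagging responses that exceed a combined budget, so
// tuning decisions rest on evidence. Names are illustrative assumptions.
interface ResponseMetrics {
  latencyMs: number;
  outputTokens: number;
  retries: number;
}

function flagExpensive(m: ResponseMetrics, budget: ResponseMetrics): string[] {
  const flags: string[] = [];
  if (m.latencyMs > budget.latencyMs) flags.push("over-latency-budget");
  if (m.outputTokens > budget.outputTokens) flags.push("over-token-budget");
  if (m.retries > budget.retries) flags.push("excessive-retries");
  return flags;
}
```

Attaching these flags to the same record as the user's thumbs up/down makes it possible to ask whether slow or truncated responses correlate with negative feedback.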
Common mistakes
Hiding the system state
The Problem: Users cannot tell whether the model is waiting, streaming, retrying, or done.
How to Fix It: Expose clear request lifecycle states and keep them visible near the content they affect.
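An explicit lifecycle can be sketched as a small state machine. The state names follow the problem statement above; the allowed transitions are an assumption for illustration:

```typescript
// Sketch of an explicit request lifecycle; transitions are an assumption.
type LifecycleState = "idle" | "waiting" | "streaming" | "retrying" | "done" | "error";

const transitions: Record<LifecycleState, LifecycleState[]> = {
  idle: ["waiting"],
  waiting: ["streaming", "retrying", "error"],
  streaming: ["done", "retrying", "error"],
  retrying: ["waiting", "error"],
  done: ["idle"],
  error: ["idle", "retrying"],
};

function advance(current: LifecycleState, next: LifecycleState): LifecycleState {
  if (!transitions[current].includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```

Rendering the UI purely from the current `LifecycleState` makes "waiting vs streaming vs retrying" impossible to hide, because there is no ambient boolean soup for the states to get lost in.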
Treating failures like standard form errors
The Problem: AI failures include safety blocks, context limits, model availability, and partial output, not just a failed request.
How to Fix It: Differentiate failure modes and give recovery actions that match each one.
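Differentiated failure modes fit naturally into a discriminated union, with one recovery action per mode. The labels mirror the list in the problem statement; the recovery copy is illustrative:

```typescript
// Sketch of differentiated AI failure modes with matching recovery actions;
// the recovery copy is illustrative, not prescribed by the pattern.
type AiFailure =
  | { kind: "safety-block"; reason: string }
  | { kind: "context-limit"; maxTokens: number }
  | { kind: "model-unavailable" }
  | { kind: "partial-output"; receivedTokens: number };

function recoveryAction(failure: AiFailure): string {
  switch (failure.kind) {
    case "safety-block":
      return "Explain the block and suggest rephrasing the request.";
    case "context-limit":
      return `Offer to shorten the conversation to fit ${failure.maxTokens} tokens.`;
    case "model-unavailable":
      return "Offer a fallback model or a retry with backoff.";
    case "partial-output":
      return "Keep the partial text visible and offer to continue generation.";
  }
}
```

The compiler-enforced exhaustive switch is the point: adding a new failure mode forces a decision about its recovery path instead of letting it fall through to a generic "Something went wrong."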
Ignoring token and latency budgets
The Problem: The experience feels unpredictable when responses get slower, shorter, or more expensive without explanation.
How to Fix It: Design token, latency, and provider constraints into the interface from the beginning.
Related patterns
- https://uxpatterns.dev/patterns/ai-intelligence/ai-error-states
- https://uxpatterns.dev/patterns/social/like-button
- https://uxpatterns.dev/patterns/user-feedback/notification
For full implementation detail, examples, and testing notes, see references/pattern.md.
Pattern page: https://uxpatterns.dev/patterns/ai-intelligence/response-feedback