review-api-design

# REST API Design Review
Review the following API design: $ARGUMENTS
If no design was provided above, ask for an API design to review (OpenAPI spec, endpoint list, or verbal description).
This skill vets API designs before implementation — during the planning phase. It reviews contracts, endpoint structures, and architectural decisions against proven best practices. Reviews are constructive but thorough — challenging the design, flagging gaps, and surfacing trade-offs.
## When This Skill Activates

Activate this skill when the user provides any of the following:
- An OpenAPI/Swagger specification (YAML or JSON)
- A list of endpoints with descriptions
- A verbal description of an API being planned
- A diagram or document describing API architecture
- Questions about how to design specific API aspects
## Workflow

### Step 1: Understand the Context
Before reviewing, gather context. Ask about anything not already clear:
- Domain — What business domain does this API serve?
- Consumers — Who will call this API? (frontend, mobile, third-party, internal services)
- Scale — Expected traffic volume and growth trajectory
- Auth requirements — What authentication/authorization is planned?
- Deployment — Where will this run? (cloud provider, on-prem, serverless)
- Existing systems — Does this integrate with legacy systems or other APIs?
- Team — How experienced is the team with REST API development?
Do not ask all of these mechanically. Read what was already provided and only ask what's missing and relevant. If given an OpenAPI spec, extract most of this from the spec itself.
When the input is a vague verbal description (no concrete endpoints, no spec, no endpoint list — just "I'm building an API for X"), asking clarifying questions is mandatory before producing any review. A vague description does not contain enough information to assign severity levels or make specific recommendations. Ask 3-5 targeted questions, wait for answers, then proceed to Step 2. Do not produce a full review from a verbal description alone.
### Step 2: Load Relevant References
Based on the design presented, read the reference files that are most relevant. Not all files need loading for every review.
Always load:
- `references/design-principles.md` — naming, versioning, CRUD, idempotency, health checks, tracing, parameters
- `references/payloads-errors.md` — response structure, pagination, error format, identifiers
Load based on context:
- `references/security-auth.md` — if the API handles auth, identity, tokens, or trust boundaries
- `references/security-defense.md` — if the API is public-facing, handles user input, or needs hardening (CSRF, CORS, enumeration, information disclosure)
- `references/design-extensibility.md` — if the design involves extensibility, metadata fields, backwards-compatibility strategy, or arity decisions
- `references/resilience.md` — if the API calls downstream services or needs high availability
- `references/api-gateways.md` — if there are multiple services or gateway architecture questions
- `references/api-communication-patterns.md` — if weighing REST vs GraphQL vs WebSockets vs SSE, or the design shows signs that a different pattern might fit better
- `references/human-aspect.md` — if adoption, documentation, or developer experience is a concern
- `references/pragmatism.md` — if there are technology-choice or build-vs-buy decisions
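The conditional loading above can be sketched as a simple trigger table. This is a minimal illustration, not the skill's actual routing logic: the keyword sets are assumptions invented for the example, and only the file paths come from the list above.

```python
# Reference files that are loaded for every review.
ALWAYS_LOAD = [
    "references/design-principles.md",
    "references/payloads-errors.md",
]

# Hypothetical keyword triggers for the context-dependent files.
CONTEXT_TRIGGERS = {
    "references/security-auth.md": {"auth", "identity", "token", "oauth"},
    "references/resilience.md": {"downstream", "retry", "availability"},
    "references/api-gateways.md": {"gateway", "microservice"},
}

def references_to_load(design_text: str) -> list[str]:
    """Return the reference files relevant to a design description."""
    words = set(design_text.lower().split())
    extra = [path for path, triggers in CONTEXT_TRIGGERS.items()
             if triggers & words]
    return ALWAYS_LOAD + extra
```

In practice the decision is a judgment call made while reading the design, not a keyword match; the sketch only shows the shape of the mapping.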
### Step 3: Conduct the Review
Systematically evaluate the design against each relevant domain. For each finding:
- Identify what's good (reinforce good decisions)
- Identify what's missing or could be improved
- Explain why it matters, not just what to change
- Assign a severity level
Severity levels:
| Level | Meaning |
|---|---|
| Critical | Will cause production incidents, security vulnerabilities, or major integration pain. Must fix before building. |
| Warning | Likely to cause problems at scale or create tech debt. Should fix before building. |
| Suggestion | Would improve the design but not blocking. Consider for this iteration or next. |
| Good | Something done well worth reinforcing. |
### Step 4: Produce the Review Document
Structure output as follows:
# API Design Review: {API Name}

**Review date:** {date}
**Input:** {what was provided — spec, endpoint list, description}
**Context:** {1-2 sentence summary of the API's purpose and audience}
## Summary of Findings
| # | Domain | Finding | Severity |
|---|---|---|---|
| 1 | Design | ... | Critical |
| 2 | Security | ... | Warning |
## Detailed Findings
For each finding:
- What: The specific issue or observation
- Why it matters: The consequence of not addressing it
- Recommendation: What to do about it, citing relevant standards or guides from
references/sources.mdso the user has a concrete next step (e.g., "See OWASP CSRF Prevention Cheat Sheet in sources.md")
Group findings by domain (Design Principles, Security, Resilience, etc.).
## What's Missing?
Flag areas the design didn't address. Common gaps:
- No error response format defined
- No versioning strategy
- No pagination approach for list endpoints
- No authentication/authorization model
- No rate limiting strategy
- No health check endpoints
- No idempotency strategy for writes
- No caching strategy
- No correlation ID / tracing strategy
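One of the gaps above, no idempotency strategy for writes, is often the least familiar, so here is a minimal sketch of idempotency-key handling. The function name, payload shape, and in-memory store are illustrative assumptions; a real service would persist keys in a shared cache with a TTL.

```python
# Hypothetical in-memory idempotency store: key -> stored response.
_seen: dict[str, dict] = {}

def create_order(idempotency_key: str, payload: dict) -> dict:
    """Create an order, replaying the stored response on retries."""
    if idempotency_key in _seen:
        return _seen[idempotency_key]  # retry with same key: no duplicate write
    response = {"id": len(_seen) + 1, **payload}
    _seen[idempotency_key] = response
    return response
```

The point the checklist item is making: without some equivalent of this, a client retrying a timed-out POST creates duplicate resources.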
## Readiness Assessment
- Ready to build — No critical or warning findings.
- Ready with changes — Has warnings. List the top 3 priorities.
- Needs more design work — Has critical findings. Summarize what must happen first.
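The three-tier verdict above reduces to a simple rule over the finding severities, sketched here for clarity (the function is illustrative, not part of the skill):

```python
def readiness(severities: list[str]) -> str:
    """Map a review's finding severities to the readiness verdict."""
    if "Critical" in severities:
        return "Needs more design work"
    if "Warning" in severities:
        return "Ready with changes"
    return "Ready to build"
```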
## Example Review Excerpt
Given input: `POST /users`, `GET /users/{id}`, `DELETE /users/{id}`, `GET /orders`
| # | Domain | Finding | Severity |
|---|---|---|---|
| 1 | Design | No versioning strategy — endpoints lack version prefix | Warning |
| 2 | Payloads | No error format defined — consider RFC 9457 Problem Details | Warning |
| 3 | Security | No auth model specified for a user-facing API | Critical |
| 4 | Design | DELETE /users/{id} is idempotent by nature | Good |
### Finding 1 — No versioning strategy

- What: Endpoints have no version prefix (`/users` instead of `/v1/users`).
- Why it matters: Without versioning, breaking changes will either break consumers or force awkward workarounds. Adding versioning retroactively is painful.
- Recommendation: Add URL-based versioning with major version only: `/v1/users`, `/v1/orders`. Plan Sunset headers (RFC 8594) for future deprecation.
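Finding 2's recommendation (RFC 9457 Problem Details) can be made concrete with a sample error body. The type URI, ids, and field values below are invented for illustration; only the field names come from RFC 9457.

```python
import json

# Example Problem Details body for a 404 on a versioned endpoint.
problem = {
    "type": "https://example.com/problems/user-not-found",
    "title": "User not found",
    "status": 404,
    "detail": "No user with id 42 exists.",
    "instance": "/v1/users/42",
}
body = json.dumps(problem)
# Served with Content-Type: application/problem+json
```

A single, documented error shape like this is what "no error format defined" is asking the design to commit to.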
## Behavioral Guidelines
- Planning, not coding. Review designs, not code. Only generate implementation examples if specifically asked.
- Pragmatic, not dogmatic. Best practices are guidelines. Flag deviations as conscious decisions, not oversights.
- Context-sensitive. Scale rigor to the context — internal microservice vs public API for thousands of developers.
- Constructive tone. Lead with what's good before what needs work.
- Ask before assuming. If something looks wrong but might be intentional, ask.
- Teach the why. Explain reasoning behind each finding.
- Prioritize. Make it clear what's critical vs. nice-to-have.