Tech Design Generation Skill

What Is a Technical Design Document?

A Technical Design Document (TDD) is the engineering blueprint that translates product requirements into a concrete, implementable architecture. It sits between the Software Requirements Specification (SRS) and the actual code, serving as the contract between the engineering team and the rest of the organization about how a system will be built. A well-crafted TDD reduces implementation risk, surfaces architectural trade-offs early, and provides a lasting record of why specific technical decisions were made.

The Google Design Doc tradition emphasizes that design documents are not just about documenting a decision after the fact -- they are a tool for thinking through a problem rigorously before committing to code. The RFC (Request for Comments) tradition adds the dimension of structured peer review, ensuring that designs benefit from collective expertise. Uber and Meta engineering standards contribute a focus on scalability, operational readiness, and production-grade thinking from day one.

This skill combines all three traditions. Every generated Technical Design Document presents at least two alternative solutions, evaluates them against explicit criteria, and documents the rationale behind the chosen approach. The document covers architecture, API design, data modeling, security, performance, observability, and deployment -- everything an engineering team needs to move from design to implementation with confidence.

Seven-Step Workflow

Every Technical Design Document generated by this skill follows a disciplined seven-step process. Each step must be completed before moving to the next.

Sub-agent boundary: Steps 1-3 are performed by the orchestrator (commands/tech-design.md) in the main context. Steps 4-7 are performed by a generation sub-agent (Task(subagent_type="general-purpose")) that reads references/generation-instructions.md, references/template.md, and references/checklist.md directly. The sub-agent operates in an isolated context and receives a structured prompt with all Step 1-3 outputs.

Step 1 -- Deep Scan Codebase

Before writing anything, perform a thorough scan of the current project to build deep technical understanding.

  1. Glob the project tree to discover the repository structure, module boundaries, service boundaries, and naming conventions. Use patterns such as **/*.md, **/package.json, **/go.mod, **/Cargo.toml, **/docker-compose.yml, **/Dockerfile, or language-specific manifests to map the landscape.
  2. Read the README and any architectural documentation to understand the project's purpose, existing design decisions, and conventions.
  3. Scan the docs/ directory for existing documents, paying special attention to architecture decision records (ADRs), prior design documents, and API documentation.
  4. Analyze the codebase with Grep to identify: frameworks and libraries in use, architectural patterns (MVC, microservices, monolith, hexagonal), API patterns (REST, GraphQL, gRPC), database technologies and ORM usage, testing frameworks, and CI/CD configuration.
  5. Identify infrastructure patterns by scanning for Kubernetes manifests, Terraform files, CloudFormation templates, or serverless configuration.

This automated scanning ensures the generated design document is grounded in the real architecture rather than generic assumptions.
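The manifest-mapping part of this scan can be sketched as a short script. This is an illustrative sketch only — the manifest names and the stack labels they map to are example assumptions, not a fixed list:

```python
from pathlib import Path

# Hypothetical manifest -> stack-label mapping; extend per project.
MANIFESTS = {
    "package.json": "Node.js",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "pyproject.toml": "Python",
    "Dockerfile": "Docker",
}

def scan_stack(root: str) -> dict[str, list[str]]:
    """Return {stack label: [relative paths]} for every manifest found under root."""
    found: dict[str, list[str]] = {}
    root_path = Path(root)
    for name, label in MANIFESTS.items():
        for hit in root_path.rglob(name):
            found.setdefault(label, []).append(str(hit.relative_to(root_path)))
    return found
```

The resulting map gives the generation step a grounded picture of which ecosystems and services actually exist in the repository.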

Step 2 -- Find Upstream Documents

Search for matching upstream documents that feed into this design. Determine the operating mode:

  • Upstream mode: PRD and/or SRS found → design will trace to formal requirement IDs
  • Idea-first mode: Idea draft found at ideas/<feature-name>/draft.md, no PRD/SRS → §3.5 User Scenarios, §3.6 Acceptance Criteria, and §3.7 Success Metrics are derived from the idea draft's problem statement, MVP scope, and demand validation results
  • Standalone mode: No upstream documents → these sections are populated from user clarification answers

Upstream mode search:

  1. Search for PRD files matching docs/*/prd.md related to the feature being designed. Read all found PRD documents to extract product goals, user stories, and success metrics.
  2. Search for SRS files matching docs/*/srs.md related to the feature. Read all found SRS documents to extract functional requirements (FR-XXX-NNN), non-functional requirements (NFR-XXX-NNN), data models, and interface definitions.
  3. Summarize upstream context including the requirement IDs that this design must address.

If no PRD/SRS found, check for idea draft at ideas/<feature-name>/draft.md.

Step 3 -- Ask Clarifying Questions

Present the user with targeted clarifying questions. These questions fill gaps that cannot be inferred from the codebase or upstream documents. Typical areas include:

  • Architecture preference and any mandated patterns to follow.
  • Technical constraints or forbidden technology choices.
  • Performance targets for latency, throughput, and scalability.
  • Data strategy including new databases, tables, or migration needs.
  • Integration points with external services, APIs, or message queues.
  • Security requirements including authentication method and data sensitivity.
  • Deployment strategy including cloud provider, orchestration, and environment topology.
  • Timeline constraints that might affect technical decisions.

Do not proceed to generation until the user has answered enough questions to inform the core design sections.

Step 4 -- Generate Design

Using the template at references/template.md, generate the complete Technical Design Document. Key requirements for this step:

  • Present at least two alternative solutions with a structured comparison matrix.
  • Recommend one solution and provide explicit rationale for the decision.
  • Use Mermaid syntax for all diagrams following the C4 model levels.
  • Design APIs with complete endpoint specifications, request/response schemas, and error codes.
  • Define database schemas with ER diagrams, index strategies, and migration plans.
  • Address security as a first-class concern with authentication, authorization, encryption, and audit logging.
  • Set specific, measurable performance targets with caching and optimization strategies.
  • Plan observability with logging, monitoring, and alerting.
  • Define deployment strategy with environments, CI/CD pipeline, and rollback procedures.

Step 5 -- Traceability

If upstream documents (PRD and SRS) were found:

  • Map each SRS functional requirement to the technical components that implement it.
  • Map each SRS non-functional requirement to the architecture decisions that satisfy it.
  • Verify that all FR and NFR items from the SRS are addressed somewhere in the design.
  • Document any requirements that are intentionally deferred with justification.

This traceability ensures no requirements fall through the cracks between specification and design.

Step 6 -- Quality Check

Validate the completed tech-design document AND all generated feature specs in docs/features/ against every item in references/checklist.md. Fix any issues before presenting the final document to the user. Summarize the checklist results so the user can see what passed and whether any items were intentionally skipped with justification.

Step 7 -- Feature Spec Generation

After the main Technical Design Document passes the quality check, automatically generate individual feature specs for each component identified in Section 8 (Detailed Design). This eliminates the need to run /spec-forge:feature separately — the tech-design now produces both the architecture document and the implementation-ready feature specs in a single pass.

7.1 Identify Components

Extract all components from the Component Overview table in §8.1. Each row in the table becomes one feature spec file.

7.2 Generate Feature Specs

For each component, create a feature spec at docs/features/{component-name}.md using the template below. The feature spec contains the implementation-level detail that was traditionally written in §8 of the tech-design — method signatures, logic steps, field mappings, state machines, and error handling specifics.

Feature Spec Template:

# {Component Name}

> Feature spec for code-forge implementation planning.
> Source: extracted from docs/{project}/tech-design.md §8
> Created: {date}

| Field | Value |
|-------|-------|
| Component | {component-slug} (same as filename without `.md`) |
| Priority | {P0 / P1 / P2, derived from requirement priorities} |
| SRS Refs | {mapped from traceability matrix, e.g., FR-XXX-001..005} |
| Tech Design | §8.1 — {component row reference} |
| Depends On | {component slugs this depends on, or "—" if none} |
| Blocks | {component slugs this blocks, or "—" if none} |

## Purpose

{One paragraph describing what this component does and why it exists. Derived from the Component Overview table.}

## Scope

**Included:**
- {responsibility 1}
- {responsibility 2}

**Excluded:**
- {explicitly out of scope}

## Core Responsibilities

1. **{Responsibility name}** — {brief description}
2. ...

## Interfaces

### Inputs
- **{input name}** ({source}) — {description}

### Outputs
- **{output name}** ({destination}) — {description}

### Dependencies
- **{module/system name}** — {what it provides}

## Data Flow

{Mermaid diagram if 3+ steps}

## Key Behaviors

{Implementation-level detail: method signatures, logic steps, field mapping tables, state machines, algorithms. This is where the depth of the traditional §8 detailed design lives.}

### {Behavior 1}
{Detailed description with logic steps, code signatures, data structure transformations}

### {Behavior 2}
{Detailed description}

## Constraints

- **{constraint type}**: {description}

## Acceptance Criteria

[Map back to AC-IDs from tech-design §3.6 where applicable. Add component-specific criteria not covered at the feature level.]

| AC-ID | Criterion | Verification Method |
|-------|-----------|---------------------|
| AC-{nnn} | {Specific, testable condition that must be true for this component to be done} | {Unit test / Integration test / Manual — be specific about what to call and what to assert} |

## Error Handling

{Component-specific error handling: which exceptions, error codes, sanitization rules}

## File Structure

{src-root}/
└── {module-path}/
    ├── {component-name}.{ext}        # Main implementation
    ├── {component-name}.test.{ext}   # Unit tests (co-located)
    └── {sub-module}/
        └── {file}.{ext}              # Sub-module files if applicable


{Use the actual project source root (e.g., `src/`, `lib/`, `app/`), the actual file extension, and the actual module path. Do NOT write a placeholder — look at the project structure and derive the real path.}

## Test Module

**Test file**: `{exact/path/to/component-name.test.ext}`

**Test scope**:
- **Unit**: {Specific functions/methods to unit test, e.g., `processPayment()`, `validateCard()`}
- **Integration**: {Integration points to test: API endpoints, DB operations, external services}
- **Fixtures / Mocks**: {What to mock or set up, e.g., "mock Stripe client", "seed user with active subscription"}

7.3 Generate Overview

Create or update docs/features/overview.md:

# Feature Overview

> Auto-generated from tech-design. See docs/{project}/tech-design.md for architecture context.
> Updated: {date}

## Features

| Feature | Description | Dependencies | Priority | Status |
|---------|-------------|--------------|----------|--------|
| [{name}](./{name}.md) | {one-line} | {deps or "—"} | {P0/P1/P2} | draft |

## Execution Order

{Ordered list based on dependency graph. Features with no dependencies come first.}

1. **{feature-name}** — {reason, e.g., "no dependencies"}
2. **{feature-name}** — {reason, e.g., "depends on auth-service"}

## Architecture Reference

For system-level concerns (solution design, API specifications, security, performance, deployment), see:
- **Tech Design**: `docs/{project}/tech-design.md`

7.4 Dependency Resolution

When filling Depends On / Blocks fields and building the execution order in overview.md:

  1. Parse component dependencies from the Component Overview table and from component interaction diagrams
  2. Use each component's slug (its filename without .md) as the stable cross-reference identifier — do NOT assign numeric IDs. Execution order lives exclusively in docs/features/overview.md.
  3. Cross-reference with SRS requirement IDs from the Traceability Matrix (§18, Appendix D — Requirements Traceability) to fill SRS Refs (leave empty if no upstream SRS exists)
  4. Derive priority from the requirement priorities (P0 requirements → P0 feature)
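The execution-order step above is a plain topological sort over the Depends On graph. A minimal sketch, using a hypothetical two-feature graph keyed by component slug:

```python
from graphlib import TopologicalSorter

def execution_order(depends_on: dict[str, set[str]]) -> list[str]:
    """Return feature slugs ordered so every dependency precedes its dependents.

    depends_on maps each slug to the set of slugs it depends on.
    Raises graphlib.CycleError if the dependency graph contains a cycle.
    """
    return list(TopologicalSorter(depends_on).static_order())
```

For example, `execution_order({"payment-service": {"auth-service"}, "auth-service": set()})` places `auth-service` first, matching the "no dependencies come first" rule for overview.md.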

7.5 Content Depth Guidance

Each feature spec should contain enough detail that an engineer (or code-forge:plan) can implement without referencing back to the main tech-design. Specifically:

  • Method signatures with parameter types and return types
  • Step-by-step logic for non-trivial methods (numbered steps with conditional branches)
  • Field mapping tables for data transformation operations
  • State transitions specific to this component (if applicable)
  • Default values, validation rules, and constants
  • Error conditions and what exception/error to raise for each

Do NOT include system-level concerns that belong in the main tech-design (API endpoint specifications, database schema, security design, deployment strategy, performance targets). Feature specs focus on component internals.

Architecture Diagram Standards -- C4 Model

All architecture diagrams follow the C4 model, which provides four levels of abstraction for communicating software architecture.

Level 1 -- Context Diagram. Shows the system as a single box surrounded by the people who use it and the external systems it interacts with. This is the highest-level view and should be understandable by non-technical stakeholders. Use a Mermaid flowchart TB diagram with clear labels for each actor and system.

Level 2 -- Container Diagram. Zooms into the system box and shows the high-level technology building blocks: web applications, APIs, databases, message queues, file storage, and other containers. Each container is labeled with its technology choice. Use a Mermaid flowchart TB diagram with subgraphs to group related containers.

Level 3 -- Component Diagram. Zooms into a single container and shows the major structural components inside it: controllers, services, repositories, domain models, and their relationships. Use a Mermaid flowchart LR or flowchart TB diagram.

Level 4 -- Code Diagram. Typically not included in the design document itself but may be referenced for particularly complex algorithms or data structures. When needed, use Mermaid classDiagram notation.

All Mermaid code blocks must use the ```mermaid fence so they render correctly in GitHub, GitLab, and most Markdown viewers. Every diagram must have a descriptive title, and every node must have a human-readable label.
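As a minimal illustration of a Level 1 Context diagram, the sketch below uses a hypothetical billing system and payment provider — substitute the real actors and systems discovered in Steps 1-2:

```mermaid
flowchart TB
    %% Level 1 -- Context diagram (hypothetical example)
    customer([Customer])
    system[Billing System]
    stripe[Payment Provider - External]
    customer -->|Pays invoices via| system
    system -->|Charges cards through| stripe
```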

Technology Stack Decision

Every technical design must begin with explicit technology choices. The Technology Stack section requires a table listing every layer of the system (programming language, runtime, framework, ORM, database, cache, message queue, frontend framework, testing framework, build tool, containerization) with the specific version and a rationale explaining why it was chosen. Rationales must be project-specific -- "it's popular" is not sufficient; "Go 1.22 was chosen because the team has 3 years of Go experience and its concurrency model fits our real-time event processing needs" is.

Naming Conventions

Inconsistent naming across code, APIs, and databases is one of the most common sources of confusion in engineering teams. The design document must define naming conventions at three levels:

Code Naming. Specify conventions for files/modules, classes/structs, interfaces/traits, functions/methods, variables, constants, enums, and test files. Follow the chosen language ecosystem's conventions (e.g., camelCase for JavaScript, snake_case for Python/Rust, PascalCase for Go exported names).

API Naming. Specify conventions for URL path segments (kebab-case plural nouns recommended for REST), query parameters, request/response body fields, custom headers, and error codes. The request and response field conventions must match.

Database Naming. Specify conventions for table names (snake_case plural recommended), column names, primary/foreign keys, index names, constraint names, and enum types. Use a consistent pattern like idx_<table>_<columns> for indexes and fk_<table>_<referenced_table> for foreign keys.

Parameter Validation & Input Parsing

Every parameter that crosses a trust boundary must have explicit validation rules. The design document must include a Validation Rules Matrix table where each row defines: parameter name, type, required/optional, minimum value, maximum value, pattern/format (regex or standard like RFC 5322 for email), default value, sanitization strategy, and the specific error message returned on failure.

Beyond the matrix, the document must define type coercion rules (how strings are parsed to integers, booleans, dates, enums), input sanitization strategy (HTML/XSS, SQL injection, path traversal, command injection, JSON depth limits), and the distinction between null, missing, and empty values.
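One row of the Validation Rules Matrix translates into code roughly as follows. This sketch is illustrative: the `username` parameter, its 3-32 character bounds, and the error messages are hypothetical examples, not prescribed values:

```python
def validate_username(value) -> tuple[bool, str]:
    """Validate a hypothetical 'username' parameter: required string, 3-32 chars.

    Returns (ok, error_message); error_message is empty when valid.
    """
    if value is None:
        # distinguishes null/missing from empty string, per the matrix
        return False, "username is required"
    if not isinstance(value, str):
        # reject rather than coerce; coercion rules are defined separately
        return False, "username must be a string"
    if not 3 <= len(value) <= 32:
        return False, "username must be between 3 and 32 characters"
    return True, ""
```

Each matrix row should be implementable this mechanically — if it is not, the row is underspecified.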

Boundary Values & Edge Cases

The design must document every system limit and what happens when it is exceeded. This includes: request body size, string field lengths, array sizes, concurrent connections, rate limits, file upload sizes, JSON nesting depth, pagination result caps, and bulk operation batch sizes. For each limit, specify the exact number, the behavior when exceeded (specific HTTP error code), and the rationale.

Edge cases must be documented in a table covering at minimum: empty string input, unicode/emoji handling, idempotent duplicate requests, behavior during database migration, concurrent update conflicts, referential integrity on delete, timezone handling, numeric overflow, null vs zero semantics, and long-running request timeouts.

Business Logic Rules

All business rules must be documented precisely enough that an engineer can implement them without ambiguity.

State Machines. If entities have lifecycle states, define the state machine using a Mermaid stateDiagram-v2 diagram. For every transition, document: from state, to state, trigger, guard conditions (what must be true), and side effects (what happens as a result).
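A minimal stateDiagram-v2 sketch, using a hypothetical order lifecycle (the states, triggers, and guard are illustrative only):

```mermaid
stateDiagram-v2
    [*] --> Draft
    Draft --> Submitted: submit() [cart not empty]
    Submitted --> Paid: payment captured / send receipt
    Submitted --> Cancelled: cancel()
    Paid --> [*]
    Cancelled --> [*]
```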

Computation Rules. For every calculation or derived value, document: a rule ID, description, formula/logic, inputs, output type, numeric precision and rounding strategy, and a worked example with real numbers.

Conditional Logic. For complex branching behavior, document each condition with what happens when true and when false, plus any relevant notes about configurability or thresholds.

Error Handling Strategy

Define a comprehensive error taxonomy covering every error category the system can produce. For each category, specify: HTTP status code, error code pattern, whether the client should retry, and the user-facing message. Additionally, define retry and circuit breaker configuration for every external dependency: retry count, backoff strategy, circuit breaker threshold, timeout, and fallback behavior.
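The retry portion of such a configuration can be sketched as bounded retries with exponential backoff and jitter. The retry count, base delay, and the choice of `ConnectionError` as the transient failure are hypothetical defaults for illustration:

```python
import random
import time

def call_with_retry(fn, retries: int = 3, base_delay: float = 0.5):
    """Call fn(), retrying transient failures with exponential backoff + jitter."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries:
                raise  # retry budget exhausted; surface the failure
            # delay doubles each attempt; jitter avoids thundering herds
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

The circuit breaker, timeout, and fallback for each dependency belong in the same table as these parameters so operators can reason about worst-case latency.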

API Design Conventions

The design document must specify API conventions appropriate to the chosen protocol.

RESTful APIs. Follow resource-oriented design. Endpoints use plural nouns (e.g., /api/v1/users, /api/v1/orders/{orderId}/items). Use standard HTTP methods: GET for retrieval, POST for creation, PUT/PATCH for updates, DELETE for removal. Version the API in the URL path (e.g., /api/v1/). Define standard error response shapes with error codes, messages, and request IDs.

GraphQL APIs. Define the schema with queries, mutations, and subscriptions. Document resolver responsibilities and data loader patterns for N+1 prevention. Specify error handling conventions within the GraphQL response structure.

gRPC APIs. Define service and message protobuf schemas. Document streaming patterns (unary, server-streaming, client-streaming, bidirectional). Specify deadline and retry policies.

Regardless of protocol, every API specification must include: endpoint or operation name, authentication requirements, request schema with field types and validation rules, response schema with example payloads, and a complete error code table.

Database Design Standards

Database design sections must include the following elements.

Schema Design. Define every table or collection with its columns, data types, constraints (primary key, foreign key, unique, not null, defaults), and purpose. Use a table format for clarity.

ER Diagram. Use Mermaid erDiagram syntax to visualize entity relationships. Label every relationship with its cardinality and nature.
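A minimal erDiagram sketch with hypothetical entities (the tables and columns are illustrative, not a recommended schema):

```mermaid
erDiagram
    USERS ||--o{ ORDERS : places
    ORDERS ||--|{ ORDER_ITEMS : contains
    USERS {
        uuid id PK
        text email "unique, not null"
    }
    ORDERS {
        uuid id PK
        uuid user_id FK
    }
```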

Index Strategy. For each table, define the indexes needed: primary indexes, unique indexes, composite indexes for common query patterns, and partial or conditional indexes where appropriate. Document the rationale for each index in terms of the queries it supports.

Migration Strategy. Plan how schema changes will be applied: migration tool selection, forward and backward compatibility, zero-downtime migration techniques (expand-contract pattern), and data backfill procedures.

Solution Comparison Methodology

Every design document must evaluate at least two alternative solutions. The comparison follows a structured methodology.

  1. Describe each solution with enough detail that a reader can understand its architecture, key technology choices, and implementation approach.
  2. List pros and cons for each solution, organized by technical merit, operational impact, and business alignment.
  3. Build a comparison matrix evaluating all solutions against consistent criteria: implementation complexity, performance characteristics, scalability ceiling, operational cost, team expertise alignment, time to delivery, and risk profile. Use a rating system (e.g., High / Medium / Low or numeric scores) for each criterion.
  4. Document the decision with explicit rationale explaining why the recommended solution was chosen and why the alternatives were rejected.

Security Design Principles

Security must be treated as a first-class architectural concern, not a bolt-on afterthought.

Authentication. Specify the authentication mechanism (OAuth 2.0, JWT, API keys, mTLS, SAML) and document the token lifecycle including issuance, validation, refresh, and revocation.

Authorization. Define the authorization model (RBAC, ABAC, or hybrid). Document roles, permissions, and access control rules. Specify how authorization is enforced at the API gateway, service, and data layers.

Data Encryption. Specify encryption at rest (algorithm, key management, rotation policy) and encryption in transit (TLS version, certificate management). Document handling of sensitive fields (PII, payment data) including tokenization or field-level encryption where applicable.

Audit Logging. Define what events are logged (authentication attempts, data access, configuration changes), the log format, retention policy, and how audit logs are protected from tampering.

Performance Design

Performance sections must be specific and measurable, not aspirational.

Target Metrics. Define concrete targets: API response time at p50, p95, and p99 percentiles; throughput in requests per second; error rate thresholds; and resource utilization limits.

Caching Strategy. Specify what is cached (query results, computed values, static assets), where caching happens (browser, CDN, application layer, database query cache), cache invalidation strategy (TTL, event-driven, manual), and cache warming procedures.
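For TTL-based invalidation at the application layer, the mechanics are simple enough to sketch. This is an illustrative single-process sketch, not a production cache; the TTL value is a hypothetical default:

```python
import time

class TTLCache:
    """Minimal application-layer cache with lazy TTL invalidation on read."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict lazily on read
            return None
        return value
```

The design document should state the chosen TTL per cached item and which writes (if any) trigger event-driven invalidation instead.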

Optimization Plan. Document specific optimization techniques: query optimization, connection pooling, lazy loading, pagination strategies, batch processing, and async processing for non-critical paths.

Observability

The design must plan for production observability from the start.

Logging. Define log levels, structured log format (JSON recommended), correlation ID propagation, and log aggregation destination. Specify what must be logged at each level.

Monitoring and Metrics. Define the key metrics to track: RED metrics (Rate, Errors, Duration) for services, USE metrics (Utilization, Saturation, Errors) for resources, and business metrics. Specify the monitoring tool and dashboard requirements.

Alerting. Define alerting rules with conditions, severity levels, notification channels, and escalation procedures. Include runbook references for each alert.

Deployment Strategy

The deployment section ensures the design is production-ready.

Environments. Define the environment topology (development, staging, production) with purpose, configuration differences, and access controls for each.

CI/CD Pipeline. Describe the pipeline stages: build, unit test, integration test, security scan, artifact creation, deployment, smoke test, and promotion. Specify any gates or approval steps.

Rollback Strategy. Define how to roll back a failed deployment: blue-green switching, canary percentage reduction, feature flag disabling, or database migration reversal. Specify the rollback decision criteria and the maximum time to rollback.

Reference Files

This skill relies on two reference files stored alongside it.

  • references/template.md -- The full Technical Design Document template with placeholder text for every section. The generated document is built by filling in this template. Note that §8 (Detailed Design) is an overview only — the detailed per-component content goes into feature specs.
  • references/checklist.md -- A quality checklist organized into four categories (Completeness, Quality, Consistency, Format) plus feature spec validation. The checklist is used during Step 6 to validate the finished document.

Always read both files before generating a document so that any updates to the template or checklist are picked up automatically.

Output Location

The tech-design skill produces two types of output:

1. Main Technical Design Document:

docs/<feature-name>/tech-design.md

where <feature-name> is a lowercase, hyphen-separated slug derived from the feature name (for example, docs/user-authentication/tech-design.md or docs/payment-processing/tech-design.md). If the docs/<feature-name>/ directory does not exist, create it. If a file with the same name already exists, confirm with the user before overwriting.

2. Feature Specs (auto-generated from §8 Detailed Design):

docs/features/{component-name}.md     — one per component
docs/features/overview.md             — feature index with dependencies and execution order

Feature specs contain the implementation-level detail (method signatures, logic steps, field mappings) that code-forge:plan consumes directly. This eliminates the need to run /spec-forge:feature as a separate step.

Automatic Upstream Document Scanning

Before generating any Technical Design Document, the skill automatically searches for upstream PRD and SRS documents to ensure the design is grounded in established requirements.

  1. Search for PRD files using the pattern docs/*/prd.md. Read matching documents to extract product goals, user personas, feature requirements, and success metrics.
  2. Search for SRS files using the pattern docs/*/srs.md. Read matching documents to extract functional requirements, non-functional requirements, data models, and interface specifications.
  3. Cross-reference upstream IDs so that the design document can map components and decisions back to specific PRD and SRS requirement identifiers, maintaining full traceability across the documentation lifecycle.

This scanning phase feeds directly into Steps 2 and 5 of the workflow and ensures every generated design reflects the actual requirements rather than assumptions.
