# Onboard OpenSpec (`accelint-onboard-openspec`)
Guide the user through a conversational interview to produce a complete,
project-specific openspec/config.yaml configured for the QRSPI methodology.
## Companion Skill
This skill produces the project DNA layer of the agent instruction stack:
structural facts about what the project is. It is the companion to the
accelint-onboard-agents skill, which produces the behavior layer (AGENTS.md /
CLAUDE.md): how the agent acts, communicates, and makes decisions.
If during this interview the user volunteers behavioral content (commit
conventions, workflow steps, decision heuristics, tool preferences), acknowledge
it and redirect: "That's behavioral — it belongs in AGENTS.md. I'll note it
here for reference, but the accelint-onboard-agents skill is the right place to
capture it." Do not write behavioral content into config.yaml.
AGENTS.md / CLAUDE.md → accelint-onboard-agents skill → HOW the agent behaves
openspec/config.yaml → this skill → WHAT the project is
## Mental Model
The config has two jobs:
- `context:` — Objective facts about the codebase injected into every AI artifact. Think of it as the "DNA" that makes AI suggestions feel native to the project. Facts only, no opinions.
- `rules:` — Per-artifact checkpoints (proposal / design / tasks / spec) that encode the team's quality bar.
## Phases
### Phase 0 — File State Detection
Before any interview question is asked, check whether openspec/config.yaml
exists and assess its state. Never silently pick a mode — always announce the
detected mode to the user and confirm before proceeding.
Detection logic:
```
Does openspec/config.yaml exist?
│
├── No  → MODE 1: Create
│         Full interview from scratch.
│
└── Yes → Read the file, then assess:
    │
    ├── Empty or near-blank (schema: line only, no context/rules)?
    │     → MODE 1: Create (with overwrite confirmation)
    │       Ask: "config.yaml exists but appears empty — should I
    │       populate it from scratch, or preserve any current content?"
    │
    ├── Contains recognised fields?
    │   (context: block present, rules: block with known artifact keys)
    │     → MODE 3: Refresh
    │       Abbreviated interview covering only detected drift and
    │       unresolved "# TODO: fill in" markers.
    │
    └── Contains real content in an unrecognised shape?
          → MODE 2: Import
            Present three options (a / b / c) before proceeding.
```

Recognised shape = file is valid YAML with at least a `context:` key whose value is a non-empty string, or a `rules:` key with at least one of the known artifact IDs (`proposal`, `specs`, `design`, `tasks`).
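The detection tree above can be sketched as a small classifier. This is a heuristic sketch only: the function name, return strings, and regex checks are illustrative, and a real implementation would parse the YAML properly rather than pattern-match.

```python
import re
from typing import Optional

KNOWN_ARTIFACTS = ("proposal", "specs", "design", "tasks")

def detect_mode(config_text: Optional[str]) -> str:
    """Heuristic classifier for the Phase 0 decision tree.

    Pass None when openspec/config.yaml does not exist, otherwise the
    file's full text.
    """
    if config_text is None:
        return "MODE 1: Create"  # no file: full interview from scratch
    body = config_text.strip()
    # Empty or near-blank: nothing at all, or only a `schema:` line.
    if not body or re.fullmatch(r"schema:\s*\S+", body):
        return "MODE 1: Create (confirm overwrite)"
    # Recognised shape: a context: key with content, or a rules: key
    # with at least one known artifact ID nested under it.
    has_context = bool(re.search(r"^context:\s*\S", config_text, re.M))
    has_rules = bool(re.search(r"^rules:", config_text, re.M)) and any(
        re.search(rf"^\s+{a}:", config_text, re.M) for a in KNOWN_ARTIFACTS
    )
    if has_context or has_rules:
        return "MODE 3: Refresh"
    return "MODE 2: Import"  # real content in an unrecognised shape
```

Whatever the implementation, the classification is only a proposal: always announce the detected mode and confirm with the user before proceeding.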
### Mode 1: Create

Run the full Phase 1 → Phase 2 → Phase 3 → Phase 4 interview. This is the happy path for a fresh repo.
### Mode 2: Import

The file has real content that was not generated by this skill. Present the user with three options before touching anything:

"This `config.yaml` has existing content with a structure I don't recognise. How would you like to proceed?

(a) Restructure — I'll import your existing content, map it onto the `context:`/`rules:` schema, flag any material that belongs in `AGENTS.md` instead (workflow steps, commit conventions, tool preferences), run a targeted interview to fill gaps, and produce a merged file ready to replace the current one.

(b) Append — I'll run the full interview and add the skill's `context:` and `rules:` sections alongside your existing content without modifying what's already there.

(c) Dry run — I'll run the full interview and show you exactly what I would have generated, with no changes to the filesystem. Use this to evaluate fit before committing."
If option (a) is chosen:

- Read the file in full.
- Map existing content onto `context:` sub-sections and `rules:` artifact keys where possible.
- Flag any content that violates the separation-of-concerns boundary (e.g., commit conventions, workflow steps, tool preferences, agent decision heuristics) — these belong in `AGENTS.md`. For each violation, ask: "This looks behavioral — it belongs in AGENTS.md. Should I move it there and remove it from config.yaml?"
- Run a targeted interview covering only the gaps (context sub-sections with no existing coverage; artifact keys with no rules).
- Show a merged preview before writing. Existing content is labelled `# from existing file`; new content is labelled `# new`.
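Labelling provenance in the merged preview is mechanical. A toy sketch (the function name is illustrative); real restructuring maps content onto schema sections rather than concatenating line lists:

```python
def label_merge_preview(existing_lines, new_lines):
    """Tag each preview line with its provenance for the option (a) review."""
    return (
        [f"{line}  # from existing file" for line in existing_lines]
        + [f"{line}  # new" for line in new_lines]
    )
```

The labels are review-only: they help the user audit the merge and should not survive into the written file.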
If option (b) is chosen:
Run the full Phase 1 → Phase 4 interview and write the generated `context:` and `rules:` blocks alongside existing content. Add a comment at the top: `# Sections below added by accelint-onboard-openspec skill.`
If option (c) is chosen: Run the full Phase 1 → Phase 4 interview and present the output in the conversation. Explicitly state: "No files were changed." Offer to re-run as (a) or (b) if the user is satisfied.
### Mode 3: Refresh

The file matches the skill's expected schema — it was likely produced by a previous run. Run an abbreviated interview covering only:
- Drift detection — scan the codebase for changes since the file was last updated:

  | Signal | Where to look |
  |---|---|
  | Runtime / Node version changed | `.nvmrc`, `.node-version`, `Dockerfile` |
  | New packages / frameworks added | `package.json` deps, workspace roots |
  | TypeScript config tightened | `tsconfig.json` — new `strict*` flags |
  | New packages in monorepo | `pnpm-workspace.yaml`, `turbo.json` |
  | Build tooling changed | `vite.config.*`, `tsup.config.*` |
  | CI/CD workflows added | `.github/workflows/` |
  | New domain concepts | New top-level directories, new entity types in source |
  | Anti-patterns deprecated | `@deprecated` tags, `// TODO: replace` comments added |

- Unresolved TODOs — find all `# TODO: fill in` markers left from the previous run and surface them as targeted questions.
- Announce findings before asking anything: "I found [N] context sections that may have drifted and [M] unresolved TODOs. I'll only ask about those — the rest looks current."
- After the targeted interview, show only the changed sections in the preview before writing. Do not re-emit unchanged sections.
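Scanning for unresolved markers is mechanical. A minimal sketch, assuming the marker text matches the `# TODO: fill in` string this skill emits (the function name is illustrative):

```python
def unresolved_todos(config_text, marker="# TODO: fill in"):
    """Return (line_number, stripped_line) pairs for each unresolved marker."""
    return [
        (n, line.strip())
        for n, line in enumerate(config_text.splitlines(), start=1)
        if marker in line
    ]
```

Each hit becomes one targeted question in the abbreviated interview; the count feeds the "[M] unresolved TODOs" announcement.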
### Phase 1 — Discovery Interview
Run the interview conversationally. Don't dump all questions at once. Group them into natural topic turns. If the user mentions a stack, infer related tooling and confirm rather than asking again.
#### Turn 1 — Project Identity
- What is the project name and its primary purpose?
- Monorepo, single package, or something else? If monorepo, what workspaces?
- Build system / task orchestration? (Turbo, Nx, Make, npm scripts, Makefile…)
- Package manager and any private registries? (npm, pnpm, yarn, bun…)
#### Turn 2 — Tech Stack (ask as a grouped block, not one by one)
- Runtime and version (Node.js 20, Bun 1.x, Python 3.12, etc.)
- Language + config (TypeScript strict? `exactOptionalPropertyTypes`? Python type hints?)
- Framework(s) and version (React 18, Next.js 14, Express, FastAPI, etc.)
- Key domain libraries (Deck.gl, Apache Arrow, Prisma, SQLAlchemy, etc.)
- Data layer (Postgres, MongoDB, DynamoDB, ORM/query builder, data formats)
- Testing setup (Vitest, Jest, Pytest, testing-library, Playwright, etc.)
- Linting / formatting (ESLint, Biome, Prettier, Black, Ruff, etc.)
- Build tools (Vite, tsup, esbuild, Webpack, etc.)
- CI/CD (GitHub Actions, CircleCI, etc.)
- Versioning approach (Changesets, standard-version, conventional commits, etc.)
#### Turn 3 — Architecture
- How is the codebase organised? (feature-based, layer-based, domain-driven?)
- Where does shared/utility code live?
- Any path aliases? (`@/`, `~/`, `src/`, `#lib/`, etc.)
- Design patterns commonly in use? (factory, repository, observer, CQRS, etc.)
#### Turn 4 — Domain Concepts
- What are the 3–5 most important domain entities? Example prompt: "For a mapping app this might be Layer, Source, Viewport, Feature, Style."
- Any domain-specific terminology the AI should know?
- Any specialised concepts with non-obvious meanings in this codebase? Example: "orchestration" means something specific to us — it's the runtime layer that merges style with data, not a general workflow term.
#### Turn 5 — Performance
- Any concrete performance targets? (p95 < 200 ms, 60 fps, < 50 MB heap, etc.)
- Known hot paths or performance-critical areas?
- Memory or bundle-size constraints?
#### Turn 6 — Code Patterns
- Export style: named exports, default exports, or mixed?
- Naming conventions: files, variables, functions, constants? Example: "kebab-case files, camelCase vars, SCREAMING_SNAKE_CASE for enums, PascalCase for types."
- Error handling: throw, `Result<T,E>`, error boundaries, something else?
- Testing structure: `describe/it`, `test/expect`, AAA pattern?
- Test file location: co-located with source or a separate `__tests__/` tree?
- Fixture / factory approach for test data?
Note: Commit message convention is a workflow procedure — it belongs in `AGENTS.md`, not here. If the user raises it now, capture it mentally and surface it in the `accelint-onboard-agents` skill. Do not add it to `config.yaml`.
#### Turn 7 — Anti-Patterns
- Any patterns explicitly banned in code review?
- Deprecated patterns still in the codebase that new code should NOT emulate?
- Known performance traps specific to this stack?
#### Turn 8 — Proposal Rules

What does YOUR team require in a proposal? Good prompts:
- "Do you need proposals to call out database migration impact?"
- "Do you need proposals to flag API breaking changes?"
- "Any security review checklist items?"
#### Turn 9 — Design Rules

Project-specific design concerns to encode? Good prompts:
- "Docker / Kubernetes resource changes to document?"
- "Performance implications section required?"
- "Specific architecture diagram style (ASCII, Mermaid)?"
#### Turn 10 — Task Rules
- How do you tag tasks by package or module?
  Example: `[PKG:auth]`, `[MODULE:pipeline]`, GitHub labels…
- Rollback plan required for database changes?
- Deployment-specific test gates (smoke tests, canary checks)?
### Phase 2 — Smart Defaults
After each stack answer, surface relevant conventions to confirm. Use these examples as a pattern; extend to other stacks as appropriate.
Next.js + TypeScript + Tailwind → suggest confirming:
- App Router vs Pages Router and which patterns apply
- Server Component vs Client Component boundary rules
- `"use client"` directive placement convention
- API route organisation (`app/api/` vs `pages/api/`)
React + Vitest + testing-library → suggest confirming:
- `userEvent` over `fireEvent` preference
- `screen` query priority (role > label > testid)
- `render` wrapper for providers
Python + FastAPI → suggest confirming:
- Pydantic v1 vs v2 (different field-validator syntax)
- Dependency injection for DB sessions (`Depends`)
- Alembic migration workflow
- `lifespan` vs `startup`/`shutdown` event hooks
Node.js + Prisma → suggest confirming:
- `prisma.$transaction` patterns
- Soft-delete vs hard-delete convention
- Migration naming convention
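The stack-to-conventions pattern above can be kept as a simple lookup. A sketch with a hypothetical `SMART_DEFAULTS` table; the entries mirror the examples in this section and are meant to be extended for other stacks:

```python
# Hypothetical lookup: a stack signature (all tags must be present)
# maps to the conventions worth confirming for that stack.
SMART_DEFAULTS = {
    ("next.js", "typescript", "tailwind"): [
        "App Router vs Pages Router and which patterns apply",
        "Server Component vs Client Component boundary rules",
        '"use client" directive placement convention',
        "API route organisation (app/api/ vs pages/api/)",
    ],
    ("react", "vitest", "testing-library"): [
        "userEvent over fireEvent preference",
        "screen query priority (role > label > testid)",
        "render wrapper for providers",
    ],
}

def conventions_to_confirm(stack):
    """Collect conventions whose stack signature is fully present."""
    detected = {s.lower() for s in stack}
    prompts = []
    for signature, conventions in SMART_DEFAULTS.items():
        if set(signature) <= detected:
            prompts.extend(conventions)
    return prompts
```

These are prompts to confirm with the user, never facts to write unverified: surface each one and let the user accept, amend, or reject it.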
### Phase 3 — Codebase Inference (fill gaps before generating)
After the interview, audit every config field that still has no answer. For each gap, attempt to derive the answer directly from the codebase before asking the user or leaving the field empty. All config sections are load-bearing — a missing field degrades every downstream AI artifact, so inference is always preferable to omission.
Inference targets and where to look:
| Gap | Files / signals to inspect |
|---|---|
| Runtime / Node version | .nvmrc, .node-version, package.json#engines, Dockerfile |
| TypeScript config | tsconfig.json (compilerOptions flags, paths aliases) |
| Package manager | package-lock.json, yarn.lock, pnpm-lock.yaml, bun.lockb |
| Monorepo workspaces | package.json#workspaces, pnpm-workspace.yaml, turbo.json, nx.json |
| Build tools | vite.config.*, webpack.config.*, tsup.config.*, esbuild scripts in package.json |
| Test framework | vitest.config.*, jest.config.*, pytest.ini, pyproject.toml#tool.pytest |
| Linting / formatting | .eslintrc*, biome.json, .prettierrc*, ruff.toml |
| CI/CD | .github/workflows/, .circleci/, Jenkinsfile |
| Versioning | .changeset/, CHANGELOG.md, commitlint.config.*, .releaserc* |
| Path aliases | tsconfig.json#compilerOptions.paths, vite.config#resolve.alias |
| Architecture organisation | Directory tree of src/ or workspace roots — infer feature-based vs layer-based |
| Design patterns | Sample source files — look for factory functions, repository objects, observer hooks |
| Export style | Sample 3–5 source files; tally named vs default exports |
| Naming conventions | Sample file names, exported identifiers; describe what you observe |
| Error handling | Grep for throw, Result, Either, tryCatch, error boundary components |
| Test structure | Sample test files — describe/it nesting depth, file location relative to source |
| Anti-patterns | eslint rule overrides marked off or warn, comments like // TODO: replace, @deprecated |
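As a concrete instance of one row in the table, package-manager inference from lockfiles might look like this. The priority order is an assumption (more specific lockfiles are checked before the ubiquitous `package-lock.json`), and the helper name is illustrative:

```python
from pathlib import Path

# Lockfile → package manager, most specific first.
LOCKFILES = [
    ("pnpm-lock.yaml", "pnpm"),
    ("bun.lockb", "bun"),
    ("yarn.lock", "yarn"),
    ("package-lock.json", "npm"),
]

def infer_package_manager(repo_root):
    """Return (manager, source_file) for the preview comment, or None."""
    root = Path(repo_root)
    for lockfile, manager in LOCKFILES:
        if (root / lockfile).exists():
            return manager, lockfile
    return None  # leave the field as `# TODO: fill in`
```

The returned source file is exactly what feeds the trailing `# inferred from …` comment in the preview.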
After inference, for each field resolved this way, note the source in the preview with a trailing comment, e.g.:

```yaml
- Runtime: Node.js 20 LTS  # inferred from .nvmrc
- Language: TypeScript 5.4, strict, exactOptionalPropertyTypes  # inferred from tsconfig.json
```
If a field genuinely cannot be inferred (e.g., performance targets, domain concepts, team-specific rules), mark it with `# TODO: fill in` rather than omitting it. The user can resolve these after reviewing the preview. Do not silently drop a section — an explicit TODO is a prompt to act; an absent section is an invisible gap.
### Phase 4 — Generation
- Show a labeled preview of the full config before writing anything. Inferred values carry their source comment; unresolved fields carry `# TODO: fill in`. This gives the user a complete picture of confidence level across every field.
- Ask: "Does this look right? Any sections to correct or expand before I write the file?"
- After confirmation, write to `openspec/config.yaml` (create directory if needed), stripping the inference source comments — they are for review only, not the final file.
- Print a brief summary of what was configured, what was inferred vs answered directly, and which `# TODO` fields still need human input.
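Stripping the review-only source comments while keeping `# TODO: fill in` markers can be a single regex pass. A sketch, assuming the comments follow the `# inferred from ...` form shown in Phase 3:

```python
import re

# Matches only the trailing review comment; `# TODO: fill in` is untouched.
INFERRED = re.compile(r"\s*# inferred from .*$")

def strip_inference_comments(preview_text):
    """Remove `# inferred from ...` annotations before writing the file."""
    return "\n".join(
        INFERRED.sub("", line) for line in preview_text.splitlines()
    )
```

Run this on the confirmed preview just before writing `openspec/config.yaml`, so the written file keeps TODOs but not the provenance notes.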
## Config Template
Use this exact structure. Fill every [placeholder] with content from the interview or codebase inference. If a field cannot be resolved by either means, replace its placeholder with `# TODO: fill in` — never omit the field. Every section is load-bearing for downstream AI artifact quality.
```yaml
schema: spec-driven

# Project Context
# Injected into every AI-generated artifact (proposal, design, spec, tasks).
# QRSPI principle: objective research layer — facts only, no opinions.
context: |
  # ═════════════════════════════════════════════════════════════════════════
  # STACK FACTS
  # ═════════════════════════════════════════════════════════════════════════

  ## Project Identity
  [project name and one-sentence purpose]
  [repo structure: monorepo / single-package / workspaces list]
  [build system and task orchestration]
  [package manager + registries]

  ## Tech Stack
  - Runtime: [e.g., Node.js 20 LTS]
  - Language: [e.g., TypeScript 5.4, strict mode, exactOptionalPropertyTypes]
  - Framework: [e.g., Next.js 14 App Router]
  - Key Libraries: [domain-specific dependencies with versions]
  - Data Layer: [databases, ORMs, data formats, query builders]
  - Testing: [framework, utilities, coverage tooling]
  - Linting/Formatting: [tools and config files in use]
  - Build Tools: [bundlers, compilers, transpilers]
  - CI/CD: [platform and key workflow names]
  - Versioning: [release strategy and changelog tooling]

  ## Architecture Patterns
  - Organisation: [feature-based / layer-based / domain-driven / other]
  - Shared code: [path to shared utilities / packages]
  - Path aliases: [list of aliases and their resolved paths]
  - Key patterns: [design patterns in common use]

  ## Domain Concepts
  - [Entity or concept]: [one-line definition]
  - [Entity or concept]: [one-line definition]
  - [Entity or concept]: [one-line definition]

  ## Performance Targets
  - [metric]: [target value and context]

  # ═════════════════════════════════════════════════════════════════════════
  # PATTERNS TO FOLLOW
  # ═════════════════════════════════════════════════════════════════════════

  ## Code Patterns
  - Exports: [named / default / mixed — and when each applies]
  - Naming: [files, variables, functions, constants, types]
  - Error handling: [throw / Result<T,E> / boundaries / other]
  - Validation: [approach and library]
  - Constants: [enum pattern or constant object pattern]

  ## Architecture Patterns
  - [pattern name]: [brief description of how it's used here]

  ## Testing Patterns
  - Structure: [describe/it nesting convention]
  - File location: [co-located / __tests__ / other]
  - Fixtures: [factory functions / fixture files / inline data]
  - Assertions: [preferred assertion style]
  - Benchmarks: [approach if any]

  # NOTE: Commit message convention, PR workflow, and tool preferences
  # are behavioral — they belong in AGENTS.md, not here.

  # ═════════════════════════════════════════════════════════════════════════
  # PATTERNS TO AVOID
  # ═════════════════════════════════════════════════════════════════════════
  - [anti-pattern]: [why it's banned or deprecated]
  - [anti-pattern]: [why it's banned or deprecated]

# ═══════════════════════════════════════════════════════════════════════════
# PER-ARTIFACT RULES
# ═══════════════════════════════════════════════════════════════════════════
rules:
  proposal:
    # QRSPI: Scope definition, not a plan.
    - State the requirement or ticket driving this change
    - Define scope boundaries — explicitly list what is OUT of scope
    - Keep under 100 lines (tight and focused)
    - [user-specific proposal rules]
  design:
    # QRSPI: The "brain surgery" checkpoint — reviewed before any code is written.
    # Target ~200 lines capturing current state, desired state, open questions.
    # Required sections (in this order):
    - Start with "Current State": what the code does today, key files, entry points, relevant data flows
    - "Desired End State": what changes after this work, what stays the same
    - "Patterns to Follow": ONLY if specific files/functions to reference exist for this change's domain
    - "Patterns to Avoid": ONLY if specific anti-patterns apply to this change
    - "Open Questions": genuine uncertainties requiring human input. If none, state explicitly "No unresolved questions."
    - "Resolved Decisions": numbered (Decision 1, Decision 2…) with Choice, Rationale, Alternatives Considered
    # Technical depth:
    - Use ASCII diagrams for data flows, state machines, architecture
    - Call out performance implications where relevant
    - [user-specific design rules]
    # Constraints:
    - Keep under 250 lines total
  tasks:
    # QRSPI: Vertical slicing for early failure detection.
    # Vertical slicing (strong preference):
    - Order as vertical slices — each task delivers a testable end-to-end path
    - Do NOT group by architectural layer unless explicitly justified
    - Horizontal (layer-by-layer) only for pure infrastructure; include justification in the task description when used
    - Each task MUST include an explicit "Test:" line describing what to verify before proceeding to the next task
    - Prefer 3–5 major slices; more than 5 suggests scope is too large
    # Granularity:
    - Max 2 hours per task; break larger work into subtasks
    - [user-specific task tagging, e.g., [PKG:name] or [MODULE:name]]
    - Call out inter-task dependencies explicitly
    - [user-specific rollback requirements]
    - [user-specific deployment test gates]
  spec:
    - Use Given/When/Then for behaviour specifications
    - Include concrete example data relevant to the domain
    - Document edge cases explicitly
    - [user-specific spec rules]
```
## Interaction Principles
- Conversational, not interrogative. Bundle related questions into a single turn. Use natural language, not bullet-dump forms.
- Infer and confirm. "You mentioned Vitest — I'll assume you're using `@testing-library/react` for component tests; correct?" is better than asking from scratch.
- Examples reduce ambiguity. When asking about naming conventions, give an example first so the user can pattern-match.
- Iterative. Let the user amend answers. Don't lock them into the first response.
- Preview before writing. Always show the full generated config and get explicit confirmation before touching the filesystem.
- Infer before asking, ask before omitting. Always attempt codebase inference for any unanswered field. If inference fails, surface a `# TODO` rather than dropping the section. A config with explicit TODOs is actionable; a config with missing sections silently degrades every artifact it drives.