fp-init

SKILL.md

Initialize E2E Specs

Bootstrap E2E_TESTS.md specification files for a project by analyzing its source code, release history, and architecture. This command is designed for projects that have no existing E2E specs and produces one or more spec files depending on the project's structure.

Additional instructions from the user: "$ARGUMENTS". Ignore if empty.

Phase 1: Understand the Project

  1. Read the project root to build a mental model:

    • README, CONTRIBUTING, and documentation files
    • Package manifest (package.json, Cargo.toml, go.mod, pyproject.toml, etc.)
    • Monorepo structure (workspaces, packages/, crates/, apps/)
    • Entry points, CLI definitions, exported APIs, route definitions
    • CI/CD configuration (to understand how tests are run)
  2. Check for existing E2E_TESTS.md files. If any exist, inform the user and suggest fp-update-spec or fp-add-spec instead. Stop here unless the user explicitly wants to proceed.

  3. Identify testable artifacts — each one will get its own E2E_TESTS.md:

    • In a monorepo: each package/app with user-facing behavior
    • In a single-package project: one spec file at the package root
    • Skip internal libraries, shared utilities, and type-only packages — they are tested indirectly through the artifacts that consume them
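The existing-spec check in step 2 can be sketched with standard shell tools. The fixture tree below is purely illustrative so the commands run anywhere; on a real project, run the `find` from the repository root instead.

```shell
# Build a throwaway fixture so the commands are runnable as-is.
tmp=$(mktemp -d)
mkdir -p "$tmp/packages/cli"
touch "$tmp/packages/cli/E2E_TESTS.md"

# Any hit here means fp-update-spec or fp-add-spec is the better command.
find "$tmp" -name 'E2E_TESTS.md' -not -path '*/node_modules/*'
```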

Phase 2: Mine Features from History and Code

For each testable artifact identified in Phase 1:

Release history (if available)

  1. Check for releases or tags:
    • git tag --list to find version tags
    • GitHub/GitLab releases if accessible
    • CHANGELOG.md or HISTORY.md files
  2. Walk through releases chronologically to build a feature timeline:
    • What capabilities were added in each release?
    • What breaking changes were introduced?
    • What bugs were fixed that suggest important invariants worth testing?
  3. Prioritize features that appear in multiple releases (iterated on) or were highlighted in release notes — these are the most important to users.
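The tag walk in steps 1–2 can be sketched as follows. The setup creates a throwaway repo with invented commit messages so the commands are runnable as-is; on a real project, skip the setup and run the last two commands in place.

```shell
# Throwaway repo for illustration only.
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.email=e@x -c user.name=n commit -q --allow-empty -m "feat: init command"
git tag v0.1.0
git -c user.email=e@x -c user.name=n commit -q --allow-empty -m "fix: restore cwd after init"
git tag v0.2.0

git tag --list --sort=v:refname   # version tags, oldest first
git log --oneline v0.1.0..v0.2.0  # what landed in v0.2.0
```

Reading the `fix:` commits between tags is what surfaces the "bugs fixed that suggest important invariants" from step 2.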

Source code analysis

  1. Read the code carefully to identify features worth testing end-to-end:

    • Commands and CLI interfaces: Each command or subcommand is typically a suite
    • API endpoints: Each resource or route group is typically a suite
    • Workflows: Multi-step user journeys (e.g., create → configure → deploy)
    • Integrations: Interactions with external services, databases, file systems
    • Configuration: Config file parsing, environment variable handling, defaults
    • Error handling: User-facing error messages, graceful degradation, validation
  2. For each feature, read the implementation to understand:

    • What inputs does it accept? (flags, arguments, config, environment)
    • What outputs does it produce? (files, stdout, exit codes, side effects)
    • What can go wrong? (missing deps, invalid input, permission errors)
    • What external dependencies does it have? (Docker, APIs, CLI tools)
  3. Cross-reference with tests that already exist (unit tests, integration tests):

    • What is already well-tested at lower levels? (less urgent for E2E)
    • What gaps exist where E2E coverage would add the most value?
    • What test utilities or fixtures already exist that E2E tests could reuse?
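One hypothetical way to start step 1 for a CLI: grep for command registrations to enumerate candidate suites. The `.command(` pattern below assumes a commander-style Node CLI and the `cli.js` fixture is invented for illustration; adjust the pattern to whatever framework the project actually uses.

```shell
# Fixture file standing in for real source; on a real project, grep src/.
src=$(mktemp -d)
cat > "$src/cli.js" <<'EOF'
program.command('init').description('scaffold a project');
program.command('push').description('publish artifacts');
EOF

# Each registration line is a candidate suite.
grep -rn -E "\.command\('[^']+'\)" "$src"
```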

Feature prioritization

Rank features for inclusion using these criteria:

  • User-facing: Does a real user interact with this? (prioritize over internal plumbing)
  • Breakage impact: If this breaks, how bad is it? (prioritize high-impact paths)
  • Complexity: Does it involve multiple components working together? (E2E adds most value here)
  • History of bugs: Has this area had fixes or regressions? (suggests it needs coverage)

Phase 3: Draft Specs

  1. Create the root-level spec (docs/E2E_TESTS.md or E2E_TESTS.md) with:

    • Project-wide testing philosophy and constraints
    • Isolation rules (temp directories, environment restoration)
    • Cleanup patterns
    • Mock strategies (what to mock, what to use for real)
    • Environment variable conventions for conditional tests
    • How to run the E2E tests
  2. For each testable artifact, draft a package-level E2E_TESTS.md:

    • Include an Index section at the top (after the title and description) with a markdown link to every suite (e.g., - [Suite Name](#suite-name))
    • Group features into suites by command, workflow, or component
    • For each suite, write these sections using heading levels appropriate to their nesting depth (e.g., if the suite is an H2, then these sections are H3; if the suite is an H3, they are H4; and so on):
      • Preconditions (heading with bullet list): Setup required for all features in the suite
      • Features (heading) containing individual features as subheadings one level deeper, each assigned one of these categories:
        • core: Happy-path scenarios covering the primary functionality
        • edge: Boundary conditions, unusual-but-valid inputs
        • error: Failure modes, validation errors, missing dependencies
        • side-effect: Observable interactions (hooks, notifications, file writes)
        • idempotency: Safe repetition of operations
      • Postconditions (heading with bullet list): Verifiable end states
    • Add <!-- skip: ... --> metadata for features requiring real external services, with clear instructions on how to unskip
    • Ensure every feature has a <!-- category: ... --> comment and at least one concrete, testable assertion
  3. Aim for practical coverage, not exhaustive coverage:

    • Every artifact should have its core happy paths covered
    • Add edge/error cases for the most critical or historically problematic features
    • Do not pad specs with low-value tests just to increase count
    • A focused spec with 15 high-value features is better than a sprawling one with 50 trivial features
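A minimal suite skeleton following the structure in step 2 might look like this (the suite, feature names, CLI invocations, and the `E2E_REAL_REGISTRY` variable are all invented for illustration):

```markdown
## Init

### Preconditions

- Empty temp directory as cwd
- `PATH` includes the built CLI binary

### Features

#### Creates a default config file
<!-- category: core -->
Run `cli init` in an empty directory. Assert exit code 0 and that
`config.json` exists with the documented defaults.

#### Re-running init leaves existing config untouched
<!-- category: idempotency -->
Run `cli init` twice. Assert the second run exits 0 and `config.json`
is byte-identical to the first run's output.

#### Publishes to the real registry
<!-- category: side-effect -->
<!-- skip: requires network access; set E2E_REAL_REGISTRY=1 to unskip -->
Run `cli push`. Assert the package appears in the registry listing.

### Postconditions

- Temp directory removed
- No environment variables leaked into the parent shell
```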

Phase 4: Review

Present a summary to the user:

Flightplanner Init Summary
==========================

Project type: <monorepo|single-package>
Specs to create: N files

Root spec: docs/E2E_TESTS.md
  - Project-wide testing constraints

Package specs:
  packages/cli/E2E_TESTS.md
    Suites: 4 (Init, Task, Push, Config)
    Features: 23 (14 core, 3 edge, 4 error, 1 side-effect, 1 idempotency)
    Skipped: 2 (requires-real-agent)

  packages/server/E2E_TESTS.md
    Suites: 3 (Auth, Projects, Webhooks)
    Features: 18 (10 core, 2 edge, 4 error, 2 side-effect)
    Skipped: 1 (requires-database)

Feature sources:
  From release history: 15 features
  From source code analysis: 26 features
  From bug fix history: 4 features

Total: N specs, N suites, N features

Show the full draft of each spec file. Ask the user for feedback and adjustments before writing.

Phase 5: Write

  1. Write the approved root-level spec.
  2. Write each approved package-level spec.
  3. Present a final summary of files created.
  4. Inform the user of next steps:
    • fp-review-spec to validate spec format and completeness
    • fp-generate to create test files from the specs
    • fp-audit to check coverage after generation