
Identify and Test Assumptions (Continuous Discovery Habits)

Goal

To extract explicit assumptions from insights and opportunities, categorize and prioritize them to identify "leap-of-faith" assumptions, and design a lightweight, iteratively scaled testing plan that reduces risk across desirability, usability, feasibility, viability, and ethical dimensions.


When to Use

Use this skill after a target opportunity has been prioritized and candidate solution ideas have been sketched, but before committing to building: it surfaces and de-risks the untested beliefs about user behavior, feasibility, and viability that an idea rests on.

Input

  • Primary Sources:
    • Prioritized opportunities from opportunities/
    • Early solution sketches from solutions/
    • Interview snapshots from user-interviews/snapshots/
    • Synthesis documents from user-interviews/synthesis/
  • Optional Sources:
    • Product analytics or behavioral data
  • Minimum Requirements:
    • 1 target opportunity with supporting evidence, and
    • 2–3 candidate solution ideas OR a single idea with key user journeys

Output

Format: Markdown (.md)
Location: assumptions/[topic]/
Filename: assumptions-[opportunity-name]-v[version].md

Semantic Naming Guidelines:

  • opportunity-name: kebab-case opportunity name from opportunity document (e.g., newsletter-creation, user-onboarding)
  • version: auto-incrementing version number (v1, v2, v3...)
  • Example: assumptions-newsletter-creation-v1.md

Version Management:

  • Check existing files with same opportunity-name pattern before creating new assumptions
  • Auto-increment version number (v1 → v2 → v3...)
  • Never overwrite existing assumption files
  • Preserve all assumption versions for comparison
  • No date dependency required

AI Instructions for Assumption Identification

When Receiving Research and Opportunity Data

  • Validate Inputs: Confirm target opportunity, evidence strength, and affected segments.
  • Clarify Scope: Define which ideas/journeys will be story-mapped.
  • Note Constraints: Technical, legal, GTM, success metrics, time/resource limits.
  • Topic Extraction: Analyze opportunity content to extract main topic for semantic filename.
  • File Existence Check: MANDATORY - Check for existing files with pattern assumptions-[topic]-v*.md before creating new file.
  • Version Management: MANDATORY - Auto-increment version number based on existing files. Never overwrite existing files.

Assumption Generation Guidelines

  • Use Five Categories: Desirability, Usability, Feasibility, Viability, Ethical.
  • Phrase Positively and Specifically: State what must be true, in concrete, testable language.
  • Tie to Behavior: Prefer assumptions about what users will do over what they say.
  • Attach Evidence: Link each assumption to quotes, behaviors, or data when available.
  • Normalize Granularity: Split vague, compound assumptions into specific, testable statements.

Evidence Classification (Binary)

  • Strong Evidence:
    • Direct user quotes or behaviors supporting assumption
    • Quantitative data from multiple sources
    • Previous successful tests of similar assumptions
  • Weak Evidence:
    • No direct evidence
    • Single source or anecdotal evidence
    • Theoretical or assumed knowledge only

Importance Evaluation (Binary)

  • More Important:
    • Core value proposition depends on this assumption
    • Assumption failure would kill the solution
    • Blocking other critical assumptions
  • Less Important:
    • Nice-to-have features or optimizations
    • Minor impact on solution success
    • Non-blocking assumptions

Prioritization Guidelines (Assumption Mapping)

  • Place each assumption on a 2D grid:
    • X-axis: Evidence known (left = strong evidence, right = weak evidence)
    • Y-axis: Importance to idea success (bottom = less important, top = more important)
  • Leap of Faith (LoFA): Select ONLY assumptions in the top-right quadrant:
    • Weak Evidence (right side of X-axis)
    • More Important (top half of Y-axis)
    • Maximum 3 LoFA per assumption document
    • Visual indicator: Mark with circle or highlight

Testing Guidelines

  • Simulate an Experience, Evaluate Behavior: Design minimal simulations that let users behave in line with or against the assumption.
  • Define Success Upfront: Use absolute counts (e.g., "≥ 3 of 10 participants choose X"), not percentages.
  • Recruit the Right Audience: Screen by target opportunity and segment; select for variation.
  • Start Small, Then Scale: Begin with quick signals; escalate to larger tests only if warranted.
  • Triangulate: Combine small, different methods to reduce false positives/negatives.

Semantic File Naming Guidelines

  • Topic Extraction: Analyze opportunity content to identify main topic/theme
  • Filename Format: Use semantic naming pattern assumptions-[opportunity-name]-v[version].md
  • Version Management: Auto-increment version number based on existing files
  • Folder Organization: Create topic-specific subfolders for better organization
  • No Date Dependency: Do not include dates in filenames

Topic Extraction Process:

  1. Analyze opportunity document for main theme and keywords
  2. Identify the most relevant topic/theme from opportunity content
  3. Convert to kebab-case format (e.g., "Newsletter Creation" → "newsletter-creation")
  4. Ensure topic uniqueness across different assumption files
  5. Use topic as primary identifier instead of date

Version Management Process:

  1. MANDATORY STEP 1: Check existing files with pattern assumptions-[opportunity-name]-v*.md
  2. MANDATORY STEP 2: Find the highest version number for the same opportunity
  3. MANDATORY STEP 3: Auto-increment version number (v1 → v2 → v3...)
  4. MANDATORY STEP 4: Generate new filename with incremented version
  5. MANDATORY STEP 5: Verify no file with new filename exists before creation
  6. CRITICAL: Never overwrite existing files - always create new version
  7. No manual version tracking required
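
The mandatory steps above can be sketched in Python. This is a minimal illustration, not part of the skill itself; the `next_assumption_filename` helper and the `base_dir` layout are assumptions:

```python
import re
from pathlib import Path

def next_assumption_filename(topic: str, base_dir: str = "assumptions") -> Path:
    """Steps 1-5: scan existing versions for a topic, return the next safe filename."""
    folder = Path(base_dir) / topic
    folder.mkdir(parents=True, exist_ok=True)
    pattern = re.compile(rf"assumptions-{re.escape(topic)}-v(\d+)\.md$")
    # Steps 1-2: collect existing version numbers for this topic
    versions = [int(m.group(1))
                for p in folder.glob(f"assumptions-{topic}-v*.md")
                if (m := pattern.match(p.name))]
    # Steps 3-4: auto-increment from the highest existing version (v1 if none)
    candidate = folder / f"assumptions-{topic}-v{max(versions, default=0) + 1}.md"
    # Steps 5-6: guard against overwriting
    assert not candidate.exists(), "refusing to overwrite an existing version"
    return candidate
```

With `assumptions-newsletter-creation-v1.md` and `-v2.md` already on disk, the helper returns `assumptions-newsletter-creation-v3.md`.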

Smart Topic Detection:

  • Content Analysis: Extract main themes from opportunity document
  • Keyword Frequency: Use most frequent relevant keywords as topic
  • Kebab-Case Format: Convert spaces to hyphens, lowercase (e.g., "User Onboarding" → "user-onboarding")
  • Uniqueness Check: Ensure topic doesn't conflict with existing files
  • Fallback: Use generic topic name if extraction fails
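
The kebab-case conversion and fallback described above might look like this in Python (the helper name and the fallback string are illustrative):

```python
import re

def to_kebab_case(topic: str) -> str:
    """Lowercase and replace runs of non-alphanumerics with single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    # Fallback: use a generic topic name if extraction yields nothing usable
    return slug or "general-topic"

print(to_kebab_case("User Onboarding"))      # user-onboarding
print(to_kebab_case("Newsletter Creation"))  # newsletter-creation
```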

Process

0) File Management (MANDATORY PRE-STEP)

  1. Extract topic from opportunity document content
  2. Check existing files with pattern assumptions-[topic]-v*.md in target directory
  3. Find highest version number for same topic (e.g., if v1, v2 exist, next is v3)
  4. Generate new filename with incremented version: assumptions-[topic]-v[version].md
  5. Verify no file exists with new filename before proceeding
  6. CRITICAL: Never overwrite existing files - always create new version

1) Prepare Context and Actors

  1. Confirm target opportunity and desired outcome(s).
  2. Identify key actors (end-user types, internal systems, third parties).

2) Story Map Candidate Ideas (or Key Journeys)

  1. Assume the solution exists; map what users do to get value.
  2. Sequence steps by actor; highlight moments critical to success.

3) Generate Assumptions (Five Categories)

For each pivotal step, enumerate assumptions across: Desirability, Usability, Feasibility, Viability, Ethical.

4) Pre-Mortem (Prospective Hindsight)

"It's six months later; launch failed. What went wrong?" Convert reasons into specific assumptions that must be true.

5) Walk OST Lines (Outcome ↔ Opportunity ↔ Solution)

Write why the solution addresses the opportunity and drives the outcome. Extract each inference as a testable assumption (esp. viability).

6) Normalize, Deduplicate, and Attach Evidence

Rewrite assumptions to be positive, specific, and single-concept. Link supporting quotes, behaviors, analytics.

7) Map and Prioritize (Assumption Mapping)

  1. Plot all assumptions on 2D grid using binary classification
  2. Identify top-right quadrant (Weak Evidence + More Important)
  3. Select maximum 3 assumptions from this quadrant
  4. If more than 3 in quadrant: Prioritize by impact, test complexity, dependencies
  5. Mark selected LoFA with visual indicator

8) Define Test Cards for LoFA Assumptions

For each of the 3 selected LoFA, design the smallest simulation with clear success criteria, sample size, method, audience, and time window.

9) Run Tests → Record Results → Update the Map

Move assumptions leftward as evidence grows; iterate simulation quality or move to next riskiest item.

10) Decide and Proceed

Use accumulated evidence to: evolve the idea, change the opportunity focus, or scale the solution test.


Output Structure (assumptions-[opportunity-name]-v[version].md)

# Assumptions — [Opportunity Name]

**Topic:** [Extracted topic name]  
**Version:** [v1, v2, v3...]  
**Target Opportunity:** [Opportunity statement]  
**Related Documents:** [Snapshots/Synthesis/Opportunities/Solutions]

---

## Story Map Snapshot
- **Actors:** [End-user types, systems, partners]
- **Key Steps:**
  1. [Actor] — [Step]
  2. [Actor] — [Step]

---

## Assumption Log
| ID | Category | Assumption (positive, specific) | Evidence (link/quote/data) | Importance | Evidence Known | LoFA |
|----|----------|----------------------------------|-----------------------------|------------|----------------|------|
| A-01 | Desirability | [What must be true] | [Quote/analytics/ref] | More/Less Important | Strong/Weak | Yes/No |

---

## Assumption Map (Summary)
- **Top-right (LoFA):** [A-01, A-07, A-12] (maximum 3 assumptions)
- **Notable clusters:** [e.g., viability assumptions lacking data]
- **Visual Grid:** Use 2D grid with binary classification (Strong/Weak Evidence, More/Less Important)

---

## Test Cards (LoFA)

### Test Card: [A-01] — [Short name]
- **Assumption:** [Assumption statement]
- **Simulation:** [Prototype/mock experience/data query/concept test]
- **Method:** [Unmoderated test | 1-question survey | Customer letter technique | Data analysis | Concierge test | Wizard of Oz | Usability test | Live-data prototype | Fake door test | Landing page demand test | Early adopters | Longitudinal user study | Qualitative value testing | Dogfood | Fishfood | Smoke tests]
- **Audience:** [Screening criteria; segment]
- **Sample Size & Window:** [e.g., n=10 over 2 days]
- **Success Criteria:** [e.g., ≥ 3/10 do X]
- **Risks & Biases:** [Key concerns and mitigations]
- **Next Step if Pass/Fail:** [Scale test / iterate assumption / Experiment (e.g., multivariate, A/B tests) / pivot idea]

*(Repeat per LoFA assumption)*

---

## Results and Decisions
- **Outcomes:** [Observed behaviors vs. criteria]
- **Map Update:** [Assumptions moved left; new LoFA]
- **Decisions:** [Proceed/iterate/stop; changes to opportunity or idea]

---

## Next Steps
- [ ] Run next LoFA test
- [ ] Evolve idea based on findings
- [ ] Share summary with stakeholders

Templates

Assumption Statement Pattern

  • Actor + Action + Context + Outcome expected
    Example: "Prospective subscribers will select a live game from our home screen when browsing evening entertainment options."

Pre-Mortem Prompt

  • "It's six months after launch and this failed. What happened?"
    Capture each reason → rewrite as a positive, specific assumption that must be true.

Story Map (Quick)

Actors: [User, Platform, Partner]
1) [User] does …
2) [Platform] shows …
3) [Partner] provides …

Assumption Mapping Template

                        Evidence Known
                  Strong              Weak
More       [A-01] [A-03]     [A-02] [A-04]  ← LoFA candidates
Important  [A-09] [A-10]     [A-05] [A-06]     (select max 3)

Less       [A-15] [A-14]     [A-18] [A-17]
Important  [A-12] [A-11]     [A-13] [A-16]

LoFA Selection Process

  1. Plot all assumptions on 2D grid using binary classification
  2. Identify top-right quadrant (Weak Evidence + More Important)
  3. Select maximum 3 assumptions from this quadrant
  4. If more than 3 in quadrant: Prioritize by:
    • Impact on solution success
    • Test complexity (easier first)
    • Dependencies (blocking first)
  5. Mark selected LoFA with visual indicator (circle or highlight)
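
One way to sketch this selection in Python (the `Assumption` fields, including the single `impact_rank` tiebreaker standing in for impact, test complexity, and dependencies, are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    id: str
    category: str          # Desirability / Usability / Feasibility / Viability / Ethical
    important: bool        # True = More Important (binary classification)
    strong_evidence: bool  # True = Strong Evidence (binary classification)
    impact_rank: int = 0   # lower = higher priority (impact, test ease, dependencies)

def select_lofa(assumptions: list[Assumption], limit: int = 3) -> list[Assumption]:
    """Top-right quadrant only (More Important + Weak Evidence), capped at `limit`."""
    candidates = [a for a in assumptions if a.important and not a.strong_evidence]
    return sorted(candidates, key=lambda a: a.impact_rank)[:limit]
```

Given five candidates in the quadrant, the cap keeps only the three highest-priority ones; everything outside the quadrant is excluded regardless of rank.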

Success Criteria Pattern

  • Define n participants and success threshold as an absolute number (not %).
    Example: "≥ 4 of 10 choose sports content on the prototype home screen."
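
A pass/fail check against such a criterion can be sketched as (helper name is hypothetical):

```python
def meets_success_criteria(successes: int, threshold: int, sample_size: int) -> bool:
    """True when at least `threshold` of `sample_size` participants showed the behavior."""
    assert 0 <= successes <= sample_size
    return successes >= threshold

# "≥ 4 of 10 choose sports content on the prototype home screen."
print(meets_success_criteria(successes=4, threshold=4, sample_size=10))  # True
```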

Quality Indicators

Strong

  • Specific & Positive: Testable statements tied to concrete behavior
  • Evidence-Linked: Quotes, analytics, or prior tests attached
  • Right LoFA: Maximum 3 from top-right quadrant (Weak Evidence + More Important)
  • Binary Classification: Consistent Strong/Weak, More/Less Important evaluation
  • Clear Criteria: Absolute numbers, audience defined, timeboxed
  • Iterative Cadence: Start small; scale only with positive signals

Weak

  • Too Many LoFA: More than 3 assumptions selected
  • Inconsistent Classification: Mixed evaluation criteria
  • Vague: Generic or compound statements
  • Opinion-Based: Future-intent questions; no behavior
  • Ambiguous Criteria: Percentages, no audience, no time window
  • Over-Building: Large experiments before early signals

Common Anti-Patterns and Guardrails

  • Too many LoFA → Select maximum 3 from top-right quadrant only
  • Inconsistent classification → Use binary classification (Strong/Weak, More/Less Important)
  • Not generating enough assumptions → Use story map + pre-mortem + OST lines
  • Negative phrasing → Rewrite as what must be true (positive)
  • Not specific enough → Add actor, context, behavior, and outcome
  • Favoring one category → Cover all five: Desirability/Usability/Feasibility/Viability/Ethical
  • Overly complex simulations → Design smallest viable simulation first
  • Using percentages for criteria → Use absolute counts (e.g., 3 of 10)
  • Missing evaluation details → Define audience, n, window, and behavior
  • Testing with wrong audience → Screen for the target opportunity
  • Designing for less than best-case → Start where passing is most likely; increase difficulty later

Error Handling

  • Insufficient Data: Request more snapshots/synthesis; reduce scope.
  • Weak Evidence: Mark evidence as "Weak" and prioritize accordingly.
  • Conflicting Results: Triangulate with another small method before decisions.
  • Ethical Concerns: Document potential harms; design mitigation and/or halt.

File Naming Issues

  • Topic Extraction Failure: Ensure clear topic identification from opportunity content
  • Version Conflicts: Always check existing files before creating new assumptions
  • Incorrect Format: Use kebab-case for opportunity-name, v[number] for version
  • Missing Topic: Extract main theme from opportunity document for filename
  • Overwriting Prevention: Never overwrite existing assumption files, always increment version
  • File Existence Check Failure: MANDATORY - Always verify file existence before creation
  • Version Detection Error: If version detection fails, start with v1 and add warning note

Process Flow

Individual Interviews → Create Snapshots → Synthesize Patterns → Create Opportunities → Generate Solutions → Identify & Test Assumptions
     ↓                    ↓                    ↓                    ↓                       ↓                        ↓
[Raw Data]        [Structured Stories]   [Shared Patterns]    [Problem Statements]    [Product Ideas]      [Risks & Tests]

Recommended Folder Structure

assumptions/
├── newsletter-creation/
│   ├── assumptions-newsletter-creation-v1.md
│   └── assumptions-newsletter-creation-v2.md
├── user-onboarding/
│   ├── assumptions-user-onboarding-v1.md
│   └── assumptions-user-onboarding-v2.md
└── payment-flow/
    └── assumptions-payment-flow-v1.md

Quality Assurance Checklist

Input Validation

  • Clear target opportunity statement with supporting evidence
  • Opportunity context and problem definition complete
  • Solution ideas or user journeys defined
  • Evidence strength assessed and documented

Assumption Quality

  • Assumptions cover all five categories (Desirability, Usability, Feasibility, Viability, Ethical)
  • Assumptions are positive, specific, and testable
  • Evidence linked to each assumption where available
  • Assumption mapping completed with LoFA identified

Testing Quality

  • Test cards designed for LoFA assumptions
  • Success criteria defined with absolute numbers
  • Audience screening criteria specified
  • Sample size and time window defined

Process Completion

  • All assumptions evaluated and mapped
  • Test results documented and analyzed
  • Decisions made based on evidence
  • Next steps are clear and actionable

File Naming Validation

  • MANDATORY: Extracted clear topic from opportunity content before creating filename
  • MANDATORY: Checked existing files with pattern assumptions-[topic]-v*.md before creation
  • MANDATORY: Verified no file with new filename exists before creation
  • Filename uses semantic naming format: assumptions-[opportunity-name]-v[version].md
  • Version number is correctly auto-incremented for same-opportunity assumptions
  • Opportunity name is descriptive and kebab-case formatted
  • CRITICAL: No overwriting of existing assumption files - always create new version
  • Topic extraction completed through content analysis before assumption creation
  • Version management process followed step-by-step

AI Implementation Checklist

Before Creating Any Assumption File:

  1. Topic Extraction

    • Analyze opportunity document for main theme
    • Extract kebab-case topic name (e.g., "newsletter-creation")
    • Verify topic is descriptive and unique
  2. File Existence Check

    • Search for existing files: assumptions-[topic]-v*.md
    • List all found files with their version numbers
    • Identify highest version number
  3. Version Management

    • Calculate next version number (highest + 1)
    • Generate new filename: assumptions-[topic]-v[version].md
    • Verify new filename doesn't already exist
  4. File Creation

    • Create file with new filename only
    • Never overwrite existing files
    • Add version number to document header

Error Handling:

  • If topic extraction fails: Use generic topic name and add warning
  • If version detection fails: Start with v1 and add warning note
  • If file already exists: Increment version and try again
  • If multiple topics found: Use most relevant one and document choice
