The Mom Test Framework
Framework for having useful customer conversations that won't lead you astray. Based on a fundamental truth: everyone is lying to you -- not because they're malicious, but because you're asking the wrong questions. Your mom will tell you your idea is great because she loves you. Investors, friends, and even potential customers will do the same. The Mom Test provides rules for asking questions so good that even your mom can't lie to you.
Core Principle
Good customer conversations are about their life, not your idea. The moment you mention what you're building, people switch from sharing truth to performing politeness. They tell you what you want to hear. The antidote is simple: talk about their problems, their lives, and their existing behavior instead of pitching your solution. Ask about specifics in the past, not hypotheticals about the future. And above all, talk less and listen more.
Scoring
Goal: 10/10. When reviewing or planning customer conversations, rate them 0-10 based on adherence to the principles below. A 10/10 means questions focus entirely on the customer's life and past behavior, with no leading, no pitching, and clear commitment signals; lower scores indicate gaps to address. Always provide the current score and specific improvements needed to reach 10/10.
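The rubric above can be sketched as a simple weighted checklist. This is an illustrative sketch, not part of the original framework: the criterion names and weights below are assumptions, chosen so the weights sum to 10.

```python
# Hypothetical 0-10 rubric for reviewing a customer conversation.
# Each criterion is pass/fail; the score is the weighted share of passes,
# scaled to 10. Criteria and weights are illustrative, not canonical.

CRITERIA = {
    "focused_on_their_life": 2,        # talked about their life, not your idea
    "asked_about_past_specifics": 2,   # concrete past behavior, not hypotheticals
    "they_talked_most": 2,             # they spoke ~80% of the time
    "no_pitching": 1,                  # solution never pitched mid-conversation
    "no_leading_questions": 1,         # no fishing for compliments
    "got_concrete_facts": 1,           # facts, not opinions or fluff
    "got_commitment_or_rejection": 1,  # clear advance or clear "no"
}

def score_conversation(passed: set[str]) -> float:
    """Return a 0-10 score from the set of criteria the conversation met."""
    total = sum(CRITERIA.values())
    earned = sum(w for name, w in CRITERIA.items() if name in passed)
    return round(10 * earned / total, 1)

def improvements(passed: set[str]) -> list[str]:
    """List the criteria still missing, i.e. what to fix to reach 10/10."""
    return [name for name in CRITERIA if name not in passed]
```

Calling `improvements()` alongside the score mirrors the instruction to always report both the current rating and the specific gaps to close.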
Framework Sections
1. The Mom Test Rules
Core concept: Three simple rules that, when followed, make it impossible for even your most supportive loved ones to give you false validation. The rules shift conversations from opinion-gathering to fact-finding.
Why it works: Opinions are worthless because people are unreliable predictors of their own future behavior. Past behavior is the only reliable data. By focusing on what people have actually done rather than what they say they would do, you extract facts that can genuinely inform product decisions.
Key insights:
- Rule 1: Talk about their life, not your idea -- never mention your solution until the end (if at all)
- Rule 2: Ask about specifics in the past, not generics or hypotheticals about the future
- Rule 3: Talk less, listen more -- aim for them to speak 80% of the time
- A question fails the Mom Test if the answer is always "yes" regardless of whether the business will succeed
- Good questions are ones that could potentially destroy your currently imagined business
- You want facts and commitments, not compliments and opinions
- The best learning happens when you shut up and let awkward silences do the work
Product applications:
| Context | Application | Example |
|---|---|---|
| Idea validation | Ask about the problem, never the solution | "Tell me about the last time you tried to [problem area]" instead of "Would you use an app that does X?" |
| Feature prioritization | Discover what people actually do vs. what they say | "Walk me through how you handled this last week" reveals real workflow |
| Pricing research | Anchor to existing spending behavior | "What are you currently paying to solve this?" instead of "Would you pay $X?" |
Copy patterns:
- "Tell me about the last time you..."
- "What happened next?"
- "How are you dealing with that currently?"
- "Can you walk me through your process?"
- "What else have you tried?"
Ethical boundary: Never weaponize someone's honest answers against them. The Mom Test earns trust by respecting people's time and honesty -- using vulnerability data to manipulate sales crosses the line.
See: references/question-patterns.md
2. Good vs Bad Questions
Core concept: Most customer interview questions are fundamentally broken because they ask people to predict the future, evaluate hypothetical products, or confirm your assumptions. Good questions anchor in observable past behavior and extract concrete facts.
Why it works: Humans are terrible at predicting their own behavior. Asking "would you buy this?" is like asking "will you go to the gym next week?" -- the answer is always yes, but the follow-through rarely materializes. Questions about what people have already done are reliable because the behavior has already happened and can't be rationalized away.
Key insights:
- Bad: "Do you think it's a good idea?" -- always gets a yes
- Bad: "Would you buy a product that does X?" -- hypothetical, meaningless
- Bad: "How much would you pay for X?" -- people anchor to what you want to hear
- Good: "How are you dealing with this problem today?" -- reveals actual behavior
- Good: "What have you tried before and why did you stop?" -- reveals past decisions
- Good: "Where does the money come from for solutions like this?" -- reveals real budgets
- The scariest questions (ones you're afraid to ask) usually produce the most useful data
- Ask questions that have the power to change what you're building
Product applications:
| Context | Application | Example |
|---|---|---|
| Problem validation | Confirm the problem exists and matters enough | "When did this last come up? What did you do? What didn't work?" |
| Market sizing | Understand if enough people have this problem | "Who else in your company/industry deals with this? How do they handle it?" |
| Competitive analysis | Discover real alternatives people already use | "What tools/processes do you currently use for this?" |
Copy patterns:
- "What's the hardest part about [doing this thing]?"
- "Why was that hard?"
- "How often does this come up?"
- "What does a perfect week look like for this workflow?"
- "Talk me through the last time this happened"
Ethical boundary: Never use leading or loaded questions that anchor the respondent toward your desired answer. Your job is to learn, not to sell.
See: references/question-patterns.md
3. Avoiding Compliments and Opinions
Core concept: There are three types of bad data that feel like progress but actively mislead you: compliments ("That's a great idea!"), fluff (hypothetical statements, maybes, future promises), and ideas (feature requests disconnected from real problems). Learning to deflect these and dig for truth is the core skill of customer conversations.
Why it works: Compliments are the fool's gold of customer development. They feel amazing -- "Everyone loves our idea!" -- but they contain zero information about whether anyone will actually pay for or use your product. Fluff and opinions give the illusion of validation without any concrete evidence. Only specifics about real past behavior and genuine commitments provide signal.
Key insights:
- Compliments: deflect immediately and get back to concrete facts ("Thanks -- but let me understand how you're actually handling this today")
- Fluff: generic claims ("I usually," "I always," "I would never") are worthless without a specific instance
- Ideas: when someone suggests a feature, dig into the motivation ("That's interesting -- what's driving that? Tell me about the last time you needed something like that")
- The "would you buy this?" trap: the answer is always yes because saying no feels rude
- Fishing for compliments: unconsciously seeking validation ("Don't you think this would be really useful?")
- Symptoms of a bad conversation: you walk away feeling great but have no concrete facts or commitments
Product applications:
| Context | Application | Example |
|---|---|---|
| Post-demo feedback | Deflect "this looks awesome" to get actionable data | "Thanks! What part of your current workflow would this actually replace?" |
| Feature requests | Dig for the underlying job behind the request | "Why do you want that? Can you show me the last time you needed it?" |
| Investor conversations | Separate encouragement from real interest | Ask for intros to customers, not just "great idea" feedback |
Copy patterns:
- "Thanks, but to make sure I'm not wasting your time -- what does your current process look like?"
- "Interesting. Can you tell me about a specific time that happened?"
- "When you say you'd 'definitely' use this, what would you stop using?"
- "That's a great feature idea -- what problem would it solve for you specifically?"
Ethical boundary: Do not manipulate people into false commitments. Deflecting compliments is about getting to truth, not about pressuring someone into a sale.
See: references/avoiding-bad-data.md
4. Commitment and Advancement
Core concept: The currency of a customer conversation is not compliments -- it's commitment. Real interest shows up as willingness to invest something of value: time, reputation, or money. Every conversation should end with a clear "advance" (moving toward a sale/adoption) or a clear "rejection" (which is also valuable data). The worst outcome is a "zombie lead" -- someone who is polite but never commits.
Why it works: Talk is cheap. When someone says "I'd definitely buy that," it costs them nothing. When someone offers to introduce you to their boss, puts a deposit down, or agrees to a pilot program, they're investing something real. The gap between what people say and what they do is the most dangerous trap in customer development. Commitment closes that gap.
Key insights:
- Commitment currencies: time (meeting, trial), reputation (intro, testimonial), money (deposit, pre-order, letter of intent)
- Advancing: the conversation moves the relationship closer to a sale or adoption
- Spinning wheels: pleasant conversations that never progress and produce zombie leads
- Always know your "ask" before the meeting -- what's the minimum commitment that proves this is real?
- A "no" is more valuable than a "maybe" -- at least you can learn from it and move on
- First meeting ask: "Would you be open to a 15-minute trial next week?"
- If they won't give you their time, they definitely won't give you their money
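The commitment currencies and the advancing-vs-spinning-wheels distinction above can be modeled as a small taxonomy. This is a rough sketch; the type names and labels are illustrative assumptions, not terms from the book.

```python
from enum import Enum

class Currency(Enum):
    """The three commitment currencies described above."""
    TIME = "time"              # a meeting, a trial
    REPUTATION = "reputation"  # an intro, a testimonial
    MONEY = "money"            # a deposit, a pre-order, a letter of intent

def classify_outcome(commitments: list[Currency]) -> str:
    """Rough triage of how a conversation ended. Labels are illustrative:
    any real commitment counts as an advance; none flags a possible zombie lead."""
    if commitments:
        return "advancing"
    return "zombie-lead risk"
```

The design choice here is deliberate: a clear rejection would also count as signal, so only the polite-but-uncommitted case gets flagged for follow-up.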
Product applications:
| Context | Application | Example |
|---|---|---|
| Early validation | Request a commitment that tests real interest | "Can I follow up with a prototype next week for 15 minutes of your time?" |
| B2B sales | Advance toward a decision-maker meeting | "Could you introduce me to the person who handles the budget for this?" |
| Pre-launch | Collect pre-orders or letters of intent | "We're launching in 8 weeks -- would you like to be in the first cohort at 40% off?" |
Copy patterns:
- "What's the next step here?"
- "Who else should I talk to about this?"
- "Would you be willing to try a prototype next week?"
- "Can I put you on the early access list?"
- "If I built this, would you be willing to pilot it for 30 days?"
Ethical boundary: Never pressure people into commitments they'll regret. The goal is to separate real interest from politeness, not to close a sale prematurely.
See: references/commitment-advancement.md
5. Finding Conversations
Core concept: You don't need a formal meeting to learn from customers. The best customer conversations happen casually -- at industry events, through warm intros, in online communities, or over coffee. Formal "customer interview" framing triggers performance mode where people tell you what they think you want to hear. Casual conversations produce more honest data.
Why it works: When you say "Can I interview you about your problems?", people put on armor. They become polished, guarded, and performative. When you say "I'm trying to learn about the industry -- can I buy you coffee?", people open up. The framing of the conversation determines the quality of the data you receive.
Key insights:
- Cold outreach: keep it short, lead with their expertise, don't pitch
- Warm intros: the best source -- one good advisor can open dozens of doors
- Industry events and meetups: go where your customers already gather
- Online communities: participate genuinely before asking questions
- Landing pages: use "learn more" signups to find engaged prospects
- Keep it casual: "I'm trying to learn" beats "I'm doing customer research"
- Vision/framing/weakness/pedestal/ask: a five-part structure for getting meetings (state your vision, frame where you're at, admit a weakness, put them on a pedestal, then make a clear ask)
- Advisors as a distribution channel: formalize relationships with well-connected people
Product applications:
| Context | Application | Example |
|---|---|---|
| Pre-idea exploration | Immerse yourself in the target community | Attend 3 industry events and have 20 casual conversations before writing a line of code |
| B2B prospecting | Use warm intros through advisors and investors | "Our advisor [Name] suggested I talk to you about how you handle [problem area]" |
| Consumer research | Intercept people at the point of behavior | Talk to people in line at the store, at the gym, at the coworking space |
Copy patterns:
- "I'm researching how [industry] handles [problem] -- could I learn from your experience over a 15-minute coffee?"
- "[Mutual contact] suggested I talk to you because you know a lot about [area]"
- "I'm not trying to sell anything -- I'm just trying to understand the space"
- "I'm thinking about starting something in [space] and want to make sure I'm not delusional"
Ethical boundary: Never disguise a sales call as a learning conversation. If you already have a product and are selling, be transparent. The Mom Test is for genuine learning, not for covert pitching.
See: references/finding-conversations.md
6. Processing and Learning
Core concept: Customer conversations are only useful if you process them properly. Raw notes must be distilled into beliefs, updated regularly, and shared with your team. Without a system, you'll cherry-pick quotes that confirm your biases and ignore signals that challenge your assumptions.
Why it works: Memory is unreliable and biased toward recent and emotionally charged information. Without structured note-taking and review, teams selectively remember the data that confirms what they already believe. Processing conversations as a team prevents any single person's bias from dominating the narrative.
Key insights:
- Take notes during or immediately after -- never rely on memory
- Separate facts (what they said and did) from interpretations (what you think it means)
- Share raw notes with your team, not filtered summaries
- Update your three key beliefs: the problem, the customer segment, and the solution
- Know when to stop talking and start building -- when conversations start repeating, you've learned enough
- Conversations are for learning, not for convincing yourself you're right
- Use a simple spreadsheet: who, date, key quotes, facts, commitments, and belief changes
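The spreadsheet schema in the last insight can be sketched as a simple record type. The field names mirror the columns listed above; everything else (the class name, the zombie-lead check) is an illustrative assumption.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationLog:
    """One row of the conversation spreadsheet described above."""
    who: str                 # person spoken to
    date: str                # ISO date of the conversation
    key_quotes: list[str] = field(default_factory=list)      # verbatim quotes
    facts: list[str] = field(default_factory=list)           # what they said and did
    commitments: list[str] = field(default_factory=list)     # time/reputation/money given
    belief_changes: list[str] = field(default_factory=list)  # updates to problem/segment/solution

    def has_commitment(self) -> bool:
        """Zombie-lead check: did they actually invest anything?"""
        return bool(self.commitments)
```

Keeping `facts` and `belief_changes` as separate fields enforces the "separate facts from interpretations" insight at the data level.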
Product applications:
| Context | Application | Example |
|---|---|---|
| Team alignment | Share notes in weekly standups to build shared understanding | Review 5 conversations per week as a team and update the belief board |
| Pivot decisions | Track when evidence contradicts your core beliefs | If 8 of 10 conversations reveal a different problem than expected, pivot |
| Feature validation | Count how many people mention a problem unprompted | A problem mentioned by 7 of 10 people is real; one mentioned by 1 of 10 might not be |
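The unprompted-mention count from the feature-validation row can be sketched as a small tally. The 70% threshold is an assumption loosely matching the "7 of 10" heuristic above, not a rule from the book.

```python
from collections import Counter

def unprompted_mentions(notes: list[list[str]]) -> Counter:
    """Count how many people mentioned each problem unprompted.
    Each inner list holds the problem tags logged for one conversation."""
    tally = Counter()
    for problems in notes:
        tally.update(set(problems))  # count each person at most once per problem
    return tally

def is_real(problem: str, tally: Counter, total_people: int,
            threshold: float = 0.7) -> bool:
    """Heuristic from the table above: a problem mentioned by roughly 70%
    of people is probably real. The exact threshold is an assumption."""
    return tally[problem] / total_people >= threshold
```

Deduplicating each conversation with `set(problems)` matters: one person repeating a complaint five times is still one person's signal.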
Copy patterns:
- "Here are the exact quotes from this week's conversations"
- "Our current belief is X -- here's what confirms it and what challenges it"
- "We've heard this from N of M people -- is that enough signal?"
- "Time to stop talking and build -- conversations are repeating"
Ethical boundary: Never misrepresent or selectively quote customer conversations to justify a predetermined conclusion. Honest processing means accepting uncomfortable truths.
See: references/processing-learning.md
Common Mistakes
| Mistake | Why It Fails | Fix |
|---|---|---|
| Pitching your idea instead of asking about their life | Triggers politeness, produces compliments instead of facts | Don't mention your idea until the very end, if at all |
| Asking "would you buy this?" | People always say yes to hypotheticals; it costs them nothing | Ask what they've already done: "How much are you spending on this now?" |
| Accepting compliments as validation | "Great idea!" contains zero information about future behavior | Deflect immediately: "Thanks -- but what are you doing about this today?" |
| Talking too much | You learn nothing while talking; you learn everything while listening | Set a timer: they should talk 80% of the time or more |
| Not having a clear ask at the end | Produces zombie leads -- pleasant conversations that go nowhere | Know your advance before the meeting: trial, intro, pre-order |
| Running formal "interview" sessions | Triggers performance mode where people filter their answers | Keep it casual: coffee, hallway conversations, Slack DMs |
| Not processing notes as a team | Individual bias filters raw data into confirmation of existing beliefs | Share raw notes weekly and update shared beliefs together |
Quick Diagnostic
| Question | If No | Action |
|---|---|---|
| Did the conversation focus on their life and past behavior, not your idea? | You ran a pitch, not a Mom Test conversation | Redo with zero mention of your solution |
| Did you get concrete facts about what they've already done? | You collected opinions and hypotheticals, which are meaningless | Ask about the last time they experienced the problem and what they did |
| Did they give you a commitment (time, reputation, or money)? | You may have a zombie lead -- polite but not interested | Ask for a specific next step: trial, intro, or pre-order |
| Did they do most of the talking? | You talked too much and learned too little | Practice silence; let awkward pauses work for you |
| Did you learn something that could change what you're building? | You asked safe questions that confirmed what you already believed | Ask the scary questions you've been avoiding |
| Did you update your beliefs based on the conversation? | You're collecting data but not learning from it | Review notes with your team and update your problem/segment/solution beliefs |
| Can you summarize the key facts (not opinions) from the conversation? | You didn't take good notes or you're confusing opinions for facts | Separate facts from interpretations in your notes immediately after |
Reference Files
- question-patterns.md: Good vs bad question examples, the three rules in depth, question formulation exercises
- commitment-advancement.md: Commitment currencies, advancing vs spinning wheels, how to push for commitment
- avoiding-bad-data.md: Compliments, fluff, ideas -- the three types of bad data and how to deflect them
- finding-conversations.md: Where to find people, cold vs warm approaches, keeping conversations casual
- processing-learning.md: Note-taking, team sharing, updating beliefs, knowing when to stop talking
- case-studies.md: Realistic scenarios showing Mom Test principles applied to SaaS, consumer, B2B, and marketplace contexts
Further Reading
This skill is based on The Mom Test methodology developed by Rob Fitzpatrick. For the complete framework, examples, and deeper insights, read the original book:
- "The Mom Test: How to Talk to Customers & Learn if Your Business is a Good Idea When Everyone is Lying to You" by Rob Fitzpatrick
About the Author
Rob Fitzpatrick is an entrepreneur, author, and educator who has founded multiple venture-backed startups and learned the hard way that most customer conversations are useless. After years of collecting misleading feedback and building products nobody wanted, he distilled the principles of effective customer conversations into The Mom Test (2013), which became one of the most recommended books in the startup ecosystem. The book has been translated into over 20 languages and is required reading at accelerators including Y Combinator, Techstars, and 500 Startups. Fitzpatrick has also written The Workshop Survival Guide and Write Useful Books, applying the same evidence-based approach to education and publishing. He teaches and advises startups across Europe and the US, and is known for his direct, practical style that prioritizes actionable frameworks over theory. He is based in the UK.