The first time a user pauses on a call and says, “I don’t know, that’s just how I work,” you can hear the interview slip a gear. You asked them why. They gave you an answer that isn’t quite an answer. Now you have a choice: ask why again, or back up and ask differently. Most founders ask why again. The interview ends with a polite shrug and a transcript that doesn’t tell you what to build next.
Learning how to ask why without asking why is the single craft skill that separates a useful user interview from a friendly chat. It is a practical, learnable thing. This piece is for the technical founder of a bootstrapped B2B SaaS product with somewhere between fifty and a thousand users — post-launch, pre-researcher, no background in qualitative work. It walks through why the literal “why?” question fails so often, and what the canon of user research recommends in its place.
What goes wrong when you ask why
The “why?” question feels like a power tool. You have an answer that is too thin, you ask why, you get the layer underneath. In an interview it rarely works that way. Two things break.
The first is defensive framing. Nielsen Norman Group describes the problem clearly: leading and direct questions can put participants on the back foot, and “why” in particular implies the person should justify their behaviour. As NN/G’s piece on leading questions puts it, asking “Why did you have difficulty with the navigation?” makes the interviewee feel they must explain or excuse the difficulty. The same dynamic applies outside usability tests. Ask a churned customer “Why did you cancel?” and you will get a defensible answer rather than a true one — pricing, time, “we just weren’t using it enough.” The honest reason might be that they never trusted your CSV import, but they will not say so unprompted, because the question framed the conversation as a defence.
The second is that people often do not know why they did what they did. They have post-hoc explanations, which is not the same thing. Indi Young, whose book Practical Empathy is one of the canonical texts on this kind of listening, makes the case repeatedly that asking people about their inner thinking — their guiding principles, their reactions — is a different skill from asking about events. People can describe what happened. They are unreliable witnesses to their own motivations. If you ask “why” and accept the first answer, you get the rationalisation, not the principle.
The 5 Whys technique, much-cited in product circles, is sometimes offered as a fix. ASQ describes it as a root-cause-analysis technique developed by Sakichi Toyoda: a way to peel away layers of symptoms by repeating the question. That context matters. It is a manufacturing and quality tool, built for problems where causality can often be tested against the process or the machine, not a conversational shortcut for understanding a person’s working life. Critics of the technique, including the TapRooT root-cause-analysis community, point out that even in manufacturing it tends to stop at symptoms, produces non-repeatable results, and cannot find causes the asker doesn’t already suspect. Carry it into a user interview and the failure mode is worse: each “why” digs you further into the participant’s confabulation rather than closer to the truth.
This is not a mystery. The whole canon of qualitative research has been pointing at it for thirty years. The fix is to stop hunting for “why” directly, and instead build the conditions in which a real reason becomes audible.
Replace why with what happened
The replacement is older than user research as a discipline. Reporters know it, therapists know it, ethnographers know it: ask about events, not abstractions. Specifically, ask about a recent, concrete event, and let the reasons fall out of the story.
Rob Fitzpatrick, in The Mom Test, boils this into one of three rules a founder should never break: “Ask about specifics in the past, not hypotheticals about the future.” The full set of rules, paraphrased, is: talk about their life, not your idea; ask about specifics in the past, not hypotheticals; talk less and listen more. The reasoning is that hypothetical questions (“would you pay for this?”, “would you use this if it had X?”) return optimism — the person describes who they want to be, not who they are. Past behaviour is real and verifiable. The question “the last time you faced this problem, what did you do?” can only be answered with a fact; the question “would you use a tool that did this?” can only be answered with a guess, and the guess skews polite and optimistic.
Teresa Torres builds the same insight into the foundation of her continuous-discovery methodology. In her piece on story-based customer interviews, she argues that traditional interview questions — “what features matter most to you?”, “how do you decide?” — produce abstractions, summaries, and the participant’s theory of themselves, not actual data. Her replacement is the prompt that has become a small piece of methodology canon in its own right: “Tell me about the last time you…”. Tell me about the last time you watched Netflix. Tell me about the last time you missed a release. Tell me about the last time you cancelled a tool. The story unfolds. The participant, as Torres puts it, does most of the talking, and the reasons are inside the story rather than in the answer to a direct question.
Why does this work? Because the moment a person reaches into a specific memory, they stop performing the answer and start describing the scene. They tell you what tab they had open, who they messaged, what they tried first, what they gave up on. Inside that narration sit the real causes — the friction point, the workaround, the thing they nearly switched to. You did not have to ask why. The story handed it over.
Use the alternatives to why deliberately
Replacing “why?” is not a single move. Steve Portigal, whose Interviewing Users is among the most thorough field guides for this work, gives the interviewer a small toolkit of probes that get behind a thin answer without ever using the word. A few worth memorising before your next call.
Comparison probes. “How was that different from the time before?” “Was that easier or harder than how you used to do it?” Comparisons make the participant evaluate, which forces the underlying criteria to surface. If they say “this version was easier,” you can ask what made it easier without asking why.
Exception probes. “Was there a time it didn’t work that way?” “Has it ever gone differently?” People remember exceptions vividly because exceptions broke the pattern. You learn the implicit rule from the moment it was violated.
Specific-example probes. “Can you walk me through the last time that happened?” “Give me an example of when that mattered.” This is the workhorse. It is the prompt that drags an interview back from theory to evidence whenever it drifts.
Quiet. Portigal makes the case for silence as a structural part of the method. After someone gives you a thin answer, count to four in your head before you fill the gap. The participant will often elaborate without prompting. Junior interviewers fill silence, and most founders are junior at interviewing. Trained ones leave it.
Recasting. “It sounds like you’re saying that X was the part that frustrated you. Have I got that right?” Reflecting back forces them to either confirm or correct. Both are useful. If they correct, you learn the nuance you missed; if they confirm, you have validation without having led them to it.
None of these probes use the word “why”. All of them get to a “why”-shaped answer, because they hand the participant a path into the story instead of demanding a justification.
Get behind opinions, not at them
There is a deeper layer that even good probing can miss. Indi Young’s framing of the work is that opinions, preferences and explanations sit on top of something more useful — what she calls inner thinking, described at length in her writing on listening sessions and Practical Empathy. The interviewer’s task is not to extract more opinions but to listen past them.
In practice, this looks like ignoring the first answer to a question and listening for what got the participant to that answer. If they say “I just like the way Linear feels,” that is an opinion, and you are not going to learn much by asking why it feels that way. You will learn more by getting them to describe a recent moment where they felt that — when did they last open Linear and feel something, what were they about to do, what does the alternative feel like by contrast. The opinion is the wrapper. What you are after is what was inside before they wrapped it.
This is a different posture from probing. Probing chases an answer. Listening, in Young’s sense, treats the conversation as a slow excavation in which the participant’s reasoning becomes visible only when they stop performing a summary of it. It is the harder skill. It is also why founders often describe their early interviews as exhausting — the work is not in talking, it is in catching what gets said when the participant stops trying to be useful.
A short script of rewrites for technical founders
Theory does not save an interview at 3pm on a Tuesday when your participant has fifteen minutes and you have forgotten everything you read. So here is the practical layer: the questions a founder reaches for instinctively, and the rewrites that work better. Steal these.
- Instinct: “Why did you sign up?” → Rewrite: “Walk me through the day you signed up. What was happening that week?” The story will tell you whether they signed up because of a tweet, an internal mandate, or a competitor’s price hike — three very different signals that “I needed it” hides.
- Instinct: “Why did you cancel?” → Rewrite: “Tell me about the last week you actively used it. What changed between then and when you cancelled?” The gap between the last active week and the cancellation is the actual story; the cancellation form was just where it ended.
- Instinct: “Why don’t you use feature X?” → Rewrite: “Talk me through the workflow that feature is meant to help with. Where does it currently fit in?” The honest answer is often that the feature does not fit the real workflow at all.
- Instinct: “Why is that important?” → Rewrite: “When did that last matter? What was at stake?” Importance is abstract; stakes are concrete. The story makes the difference visible.
- Instinct: “Would you pay for this?” → Rewrite: “What are you currently paying for, or budgeting for, to handle this?” Y Combinator’s “How to talk to users” guide makes the same point: existing spend is data; intention is theatre. The Mom Test puts it more bluntly — anything someone says about a future purchase is best treated as a polite fiction until you see the budget line.
- Instinct: “Why do you do it that way?” → Rewrite: “How did you arrive at that workflow? Is it different from the way you used to do it?” Process histories reveal the constraints that explain the present.
The pattern is consistent. The instinctive question asks for an opinion or a justification. The rewrite asks for a scene.
When the conversation gets thin
Even with the right questions, interviews go thin. The participant gives short answers, the silences feel awkward, you start filling them with leading prompts. Three rescues to keep in your back pocket.
The first is to stop and reconstruct a single decision. Bob Moesta, co-creator of Jobs-to-be-Done, has been arguing for years that the most useful single piece of data you can pull from a customer is the narrative of a recent purchase or switch. He calls it the four forces — what pushed them to look for something new, what pulled them toward the new choice, what anxieties held them back, and what habits kept them attached to the old way — and the way to surface it is to ask the participant to walk you back through the days and weeks before the decision, not the moment of it. If your interview is dying, “let’s go back to the week you decided to look for an alternative” almost always revives it.
The second is to honour a vague answer instead of fighting it. If somebody says “I don’t really know,” that is data. Ask what they mean by not knowing. Ask whether they have always felt that way or whether it changed. Ask what they would have to see to feel they did know. You can learn a lot about decision-making from the texture of a participant’s uncertainty.
The third is to recognise that weak planning often makes interviews feel thin. NN/G points out in Why User Interviews Fail that unplanned interviews meander, produce superficial findings, and waste the participant’s time. If the conversation has drifted late in the call, do not rescue it with leading prompts. Return to one concrete scene: “Can you walk me through the last time that happened?”
What to write down
Bad interview notes often look like a list of complaints with no shape. That is what happens when you transcribe answers but not stories. After each call, write down three things instead of a feature list: the scene the participant described in most detail, the moment they used a phrase you would never have used yourself, and the place in their workflow that you did not know existed. Those three things will lead you back to the real reasons. A bullet list of complaints will not.
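If you are the kind of founder who will only keep a habit that lives in a repo, the three-things structure is easy to make concrete. A minimal sketch in Python — the class, field names, and example values here are illustrative, not from any of the books cited:

```python
from dataclasses import dataclass

# Hypothetical structure: one note per call, capturing the three things
# worth writing down instead of a feature list.
@dataclass
class InterviewNote:
    participant: str
    scene: str          # the moment they described in most detail
    their_phrase: str   # a verbatim phrase you would never have used yourself
    surprise: str       # the part of their workflow you did not know existed

    def summary(self) -> str:
        return (
            f"{self.participant}: scene={self.scene!r}, "
            f"phrase={self.their_phrase!r}, surprise={self.surprise!r}"
        )

# Example entry after a call with a churned customer (invented for illustration).
note = InterviewNote(
    participant="churned-user-14",
    scene="rebuilt the CSV import by hand in Sheets every Monday",
    their_phrase="I babysit the import",
    surprise="a weekly manual reconciliation step before invoicing",
)
print(note.summary())
```

The point of the structure is what it refuses to hold: there is no field for feature requests or complaints, so a verdict has nowhere to land and a scene is the only thing you can record.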
This is also where the cost of running interviews badly becomes obvious. First Round Review notes that founders often report having “talked to dozens or hundreds” of customers and yet have nothing to show for it, because volume without rigour produces a dataset of justifications. A single interview with five real scenes in it is worth more than ten interviews with fifty justifications.
The discipline, in one paragraph
You stop asking why directly. You ask about specific past events. You probe with comparisons, exceptions, examples and silence. You listen past the opinions and explanations to the underlying reasoning. You write down the scenes, not the verdicts. None of this is a trick — it is an old craft, refined over decades by Fitzpatrick, Torres, Portigal, Young, Moesta, and the Nielsen Norman team — and it works because it respects the limits of what people can reliably tell you about themselves. They can tell you what happened. They can tell you what came before, after, and around it. They cannot reliably tell you why. So you stop asking that question, and let the answer arrive on its own.
When it does, you will recognise it. You will be in minute twenty-six of an interview you nearly skipped, listening to a participant describe the small workaround they built around your product. They will say, almost in passing, the thing you came to find out. You did not ask them why. You asked them what happened. They told you.