A common pricing-interview pattern goes like this: a founder runs sixteen interviews in three weeks. Her guide is solid until the closing item: the rest of the questions work, but the final prompt unravels the data. She asks: “So, would you pay £49 a month for this?” Most people say yes. A few are noncommittal. Nobody says no. She launches at £49 two weeks later, and real conversion refuses to match the calls.
The interviews were not the problem. The closing question was. She was using the one pricing question every credible practitioner in the field tells you not to ask, and she was getting the predictable answer: a polite, well-meaning, almost-certainly-fictional yes.
This is a template for pricing interview questions for the rest of us — bootstrapped or tiny-team B2B SaaS founders, post-launch, with somewhere between fifty and a thousand users. You don’t have a researcher on the team. You probably don’t have time to read five books on pricing. You do, however, need to set or change a price, and you would prefer not to do it on a coin flip. The good news is that the canon — Fitzpatrick, Torres, Ramanujam, Moesta, Campbell, Van Westendorp — agrees on the basic shape of the conversation, even when the details differ. This piece pulls that consensus into a script you can use this week.
Why “would you pay?” is the question that ruins your data
Start with the question itself. In The Mom Test, Rob Fitzpatrick is unambiguous: asking “would you buy a product that did X?” produces the same answer almost every time, regardless of what the product does or what the price is, because you’re asking for opinions and hypotheticals from people who want to make you happy. Adding a number — “would you pay £49 for this?” — does not save the question. It just narrows the kind of yes you’re going to get.
Teresa Torres puts it more bluntly: “What would you do?” answers are “garbage.” She is making a specific claim, not a rhetorical one: the part of the brain that answers a “what would you do” question is not the part that opens its wallet on a Tuesday morning.
The academic literature on this has a name for it: hypothetical bias. A meta-analysis by Jonas Schmidt and Tammo Bijmolt looking at decades of stated-preference studies puts the average hypothetical-bias gap at around 21 percent. People overstate, on average, what they would pay by roughly a fifth, and the gap is wider for higher-value purchases and for products people have never used. For a SaaS founder, that means a clean “would you pay £49?” study with twenty enthusiastic yeses is, per the meta-analysis, telling you that real willingness may sit closer to £39 — and that’s before you adjust for the fact that an admiring early-stage customer is more biased, not less, than a randomly recruited stranger.
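The arithmetic is simple enough to sketch. A minimal, illustrative deflator — the 21 percent figure is the meta-analytic average, and your own gap will vary, so treat this as a back-of-envelope check, not a formula:

```python
# Illustrative only: deflate a stated willingness-to-pay by the
# Schmidt-Bijmolt meta-analytic average gap of roughly 21 percent.
HYPOTHETICAL_BIAS = 0.21

def deflate_stated_wtp(stated_price: float, bias: float = HYPOTHETICAL_BIAS) -> float:
    """Discount a 'yes, I'd pay X' answer toward likely real willingness."""
    return stated_price * (1 - bias)

print(deflate_stated_wtp(49))  # the enthusiastic £49 shrinks to roughly £39
```

The point isn’t the precision; it’s the direction. Whatever number a hypothetical question returns, reality tends to sit below it.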
So the first rule of pricing interviews is the easiest to follow: don’t ask the question whose answer you can’t trust. Everything below is a way to get the same information without it.
What to ask instead: anchor in the past, not the future
The single most useful pivot in a pricing interview is to swap the future tense for the past tense. Instead of “would you pay X for this,” you ask what they’re currently doing — and currently spending — to deal with the problem your product is meant to solve.
Fitzpatrick’s framing in The Mom Test is that the answers you can trust are about specifics in the past. Sachin Rekhi’s summary of the book makes this concrete for pricing: ask how much the problem currently costs the customer. If they’re paying £100 a month for a workaround stitched together from spreadsheets, a contractor’s time, and a Zapier subscription, you know roughly the ceiling. If they’re paying nothing because the problem isn’t important enough to motivate any spend, you know that too — and you’ve just learned something far more useful than “would you pay £49?”
Teresa Torres recommends the same move for SaaS specifically. The “good” version of pricing research, in her telling, is asking what subscriptions a customer has today and what they pay for them. That gives you a concrete, observable fact: this person spends £79 a month on Notion, £29 on Linear, £49 on a competitor of yours. They’ve already shown you, with their cheque book, what a tool in your category is worth to them.
Giff Constable’s customer-development checklist makes the structural advice memorable: replace “how likely are you to” with “tell me about the last time you.” Pricing is no exception. “Tell me about the last time you bought a tool like this — what was the conversation with your co-founder, what made you say yes” is a question that opens up an entire decision in the participant’s memory. “Would you pay £49 for this?” closes it.
A starter set of past-tense pricing questions you can paste into your script:
- “Walk me through the last tool you bought for your team. What did it cost, and how did the decision get made?”
- “What are you currently doing to deal with [the problem]? How much time or money does that take you in a typical month?”
- “When you signed up for [a competing tool], who else was in the decision? What number did you have in your head before they showed you a price?”
- “Tell me about a tool you signed up for and then cancelled. What was the moment you decided it wasn’t worth the money?”
- “If you had to pull a number out of your last expenses report for tools that do something like this, what comes up?”
Note what’s missing: any question about the future. Any hypothetical. Any number you’ve supplied. The participant is doing the work, and the answers are anchored in things that actually happened.
The willingness-to-pay talk, reframed
Once you’ve established the past, you can move to the present. This is where Madhavan Ramanujam’s “willingness-to-pay talk” earns its keep — but it’s worth noticing what he is and is not asking. Ramanujam, who has run more than a hundred and twenty-five monetisation projects at Simon-Kucher and wrote Monetizing Innovation, doesn’t ask people what they would pay. He pitches the product they would actually be buying — the value, the outcome, the alternative — and then asks three questions in sequence. Strategyzer’s summary of his interview captures the structure:
- “What do you think is an acceptable price for this?”
- “What do you think is an expensive price for this?”
- “What do you think is a prohibitively expensive price for this?”
These three are deceptively similar to the question we said not to ask, and the difference matters. They aren’t asking the participant whether they would buy. They’re asking the participant to map a range. People are surprisingly good at the relative task — drawing the line between “fine,” “ouch but OK,” and “no chance” — and surprisingly bad at the absolute task of predicting their own commitment. Ramanujam’s framing exploits the first ability and avoids the second.
This is also the conceptual heart of the Van Westendorp Price Sensitivity Meter, introduced by Dutch economist Peter van Westendorp at the 1976 ESOMAR Congress. The four prompts, as restated by SurveyMonkey, are:
- At what price would this be so cheap you’d question the quality?
- At what price would it feel like a bargain?
- At what price would it feel expensive but still worth considering?
- At what price would it be so expensive you wouldn’t buy?
You can run these as written in a survey, plot the curves, and read off a band of plausible prices. But you can also run them as a conversation. Ask one, listen, ask what led them there, then ask the next. In a thirty-minute call you’ll often hear something more useful than the four numbers: the story of how the participant arrived at each one, which other tools they were comparing in their head, and which they’d cut from the budget if forced to choose.
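If you do run the four prompts as a survey, reading off the band takes only a few lines. A rough sketch, assuming one price per respondent per prompt — the function names and the grid-scan over quoted prices are my own shorthand, not a standard library:

```python
# A rough Van Westendorp reader. Each list holds one price per respondent
# for one of the four prompts; names are illustrative.

def share_at_or_below(prices, p):
    return sum(1 for x in prices if x <= p) / len(prices)

def share_at_or_above(prices, p):
    return sum(1 for x in prices if x >= p) / len(prices)

def acceptable_range(too_cheap, bargain, expensive, too_expensive):
    """Scan every quoted price for the two crossings that bound the
    plausible band: where 'too cheap' meets 'expensive' (lower bound)
    and where 'bargain' meets 'too expensive' (upper bound)."""
    grid = sorted(set(too_cheap + bargain + expensive + too_expensive))
    lower = min(grid, key=lambda p: abs(
        share_at_or_above(too_cheap, p) - share_at_or_below(expensive, p)))
    upper = min(grid, key=lambda p: abs(
        share_at_or_above(bargain, p) - share_at_or_below(too_expensive, p)))
    return lower, upper
```

With ten or twelve interviews the curves are noisy, so treat the output as a band to discuss, not a price to ship.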
The Gabor–Granger method, developed by economists André Gabor and Clive Granger in the 1960s, is the quantitative cousin: a yes/no ladder that adapts to the participant’s previous answer until you find the highest price they’ll commit to. As a stand-alone survey it suffers from exactly the hypothetical bias the Schmidt–Bijmolt meta-analysis warns about. But as a closing move at the end of an interview — after you’ve already mapped past spend, alternatives, and the participant’s own price language — it’s a useful sanity check. You’re not relying on the absolute number; you’re using it to see whether the qualitative story holds up.
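Mechanically, the ladder is a search over a sorted price grid. The classic version steps up and down in fixed increments; a binary search compresses the same stopping logic. A hedged sketch, with `ask` standing in for however you capture the participant’s live yes/no:

```python
# The Gabor-Granger ladder as a binary search over a sorted price grid.
# `ask` is a stand-in for the live yes/no exchange with the participant.

def gabor_granger(ask, prices):
    """Return the highest price in `prices` the participant accepts,
    or None if they decline every rung. 'Yes' moves the ladder up,
    'no' moves it down."""
    lo, hi = 0, len(prices) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if ask(prices[mid]):
            best = prices[mid]
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# A hypothetical participant who balks above £49:
print(gabor_granger(lambda p: p <= 49, [19, 29, 39, 49, 59, 79]))  # 49
```

Run it last, and read the result against everything else the interview gave you.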
Talk to the people who already chose, and the people who left
The most under-used pricing interview is the one that doesn’t feel like a pricing interview at all: the switch interview. Bob Moesta, co-creator of the Jobs-to-be-Done switch methodology, points teams toward the people who recently chose you and the people who recently left you or a competitor. Pricing data lives at both ends of that decision.
Two interviews, same question, opposite ends of the decision. From the new customer you learn what made the price feel worth it on the day they swiped a card — and crucially, what they nearly went with instead. From the recent churner you learn the price story that broke. The four forces framework — push from the current situation, pull from the new option, anxiety about the new, habit of the present — gives you a checklist for what to probe when those forces come up against money.
A switch interview script for a recent buyer:
- “Take me back to the day you bought. What had happened that week that made you actually do it?”
- “What else were you considering? Walk me through the comparison.”
- “When you saw our price, what were you comparing it to in your head?”
- “Was there a moment you nearly didn’t buy? What was that about?”
- “What did you tell your co-founder / boss / partner the price was for?”
A switch interview script for a churner:
- “Walk me through the moment you decided to cancel. What had happened in the days before?”
- “If the price had been half, would you still have left? What would have changed?”
- “What were you using us for in the last month? What stopped working — the tool, or the use case?”
- “Where did the money go instead?”
- “If you came back, what would have to be true?”
Notice what these scripts do not do. They don’t ask “would you pay?” They don’t ask “what’s the right price?” They reconstruct a real decision the participant has already made, with their own money, in the recent past — which, as Patrick Campbell argues on Acquired, is the kind of pricing data that survives contact with reality. Campbell’s broader argument is that serious pricing work often reveals underpricing, so if your interviews are telling you to lower the price, interrogate that signal before you act on it.
Use a pricing page, not a number
There is one more move worth adding to the script, and it comes from Torres again. When direct questions about future behaviour fail, the next-best thing is to simulate the buying experience as closely as you can. Mock up a pricing page. Show it to the participant in the interview. Ask them to talk through it — which plan they’d pick, why, what’s missing, what would make them go up a tier or down a tier.
This is closer to what Nielsen Norman Group’s research on pricing pages shows real users actually do: situate price in a usage scenario, not in the abstract. A participant looking at three columns labelled Solo, Pro, Research will give you a much richer reaction than the same participant asked “would you pay £79 a month?” — because the pricing page does the comparing for them. They can show you which features moved them up a tier; they can roll their eyes at a feature gated below where they’d want it; they can tell you which plan they’d actually pretend to be on for the trial month. None of that information is in a single yes/no answer.
For B2B specifically, Étienne Garbugli’s Lean B2B recommends pairing the mocked pricing page with a budget question — not “would you pay this,” but “where would the money come from, and who else would have to sign off?” In a B2B sale, “would you pay” is irrelevant unless the participant controls the line item. The buying-process question often surfaces the real ceiling: whatever number their CFO has marked as “doesn’t require approval.”
A 30-minute script: pricing interview questions, end to end
Here is the script in its working form. Thirty minutes, six sections, no “would you pay?” anywhere.
Open (3 minutes). “I’m trying to understand how teams like yours make decisions about tools in this category. There are no right answers. I’m not selling anything in this conversation.”
Past (8 minutes). “Walk me through the last tool in this category that you bought. What was the trigger? Who was in the decision? What did it cost, and how did you arrive at that being a number you said yes to?”
Present (5 minutes). “What are you doing right now to handle [the problem]? Roughly what does that cost you in money and time per month? What’s the most annoying part?”
Range (5 minutes). “If I were describing a tool that does [the value], in your head, what would be an acceptable price? What would feel expensive but still worth it? What price would put you off entirely?”
Pricing page (7 minutes). “I’m going to share a draft pricing page. Talk me through it as if you were considering signing up — which plan would you pick, what’s missing, what would make you move up or down a tier?”
Close (2 minutes). “If you were going to recommend this to one other person, who would it be — and what would you tell them the price was for?”
Run this with ten or twelve people in your ICP and you will have more useful pricing data than a polite yes/no survey can give you.
What good looks like, and what to do with it
The output of this kind of interview is not a single number. It is three things: a band of prices that the canon — Ramanujam, Van Westendorp, Campbell — agrees describes plausible willingness; a set of comparison anchors, which are the tools your customers are actually weighing you against; and a set of specific stories about why a particular price felt right or wrong on a particular day. The number is the easy part. The stories are what tell you whether to raise the price, change the unit (per seat, per workspace, per interview), or change the value proposition so that the price you want feels obvious.
First Round Review’s argument — drawing again on Ramanujam — is that pricing belongs in the discovery conversation, not three months after launch. For a tiny SaaS team, that’s both a constraint and a relief. You don’t have time to run a Van Westendorp study, plot the curves, and write a report. You do, however, have time to have ten conversations in the next two weeks, with the right questions, and learn enough to make a better pricing decision.
What you skip when you swap “would you pay?” for the script above is the one question whose answer you can’t use. What you gain is the rest of the conversation — the part where your customer accidentally tells you what they actually do, and what it actually costs them, and what they actually decided last Tuesday. Pricing isn’t really about numbers in the abstract. It’s about decisions in the past tense. Ask about those, and the right number tends to fall out.
If you’d rather not run those conversations yourself — or you’ve tried and the answers feel stage-managed — that’s the conversation Maren is built to have. She doesn’t ask “would you pay?” either.