INTERVIEWS

Choosing an interview type

Candor supports four interview types. The one you pick shapes how the guide gets generated, how the auto-interviewer behaves, and how the synthesis report frames the results. Pick the type that matches the decision you’re trying to make, not the topic you’re trying to cover.

Where you pick

The interview type is Step 1 of the guide-creation wizard. You pick once per guide, and a guide can only run interviews of its own type. If you want both problem validation and concept testing on the same audience, that’s two guides and two runs, not one.

Problem discovery

Pick this when you don’t yet know what problems matter most for your audience. You know who you want to serve, but not what to build for them or what their actual constraints look like.

  • Inputs you provide: learning goals describing what you want to understand. No hypotheses, no concepts, no prices.
  • What the guide does: open-ended sections that probe context, frustrations, current workarounds, and priorities. Lots of follow-up.
  • What synthesis returns: themes and tensions, but no opportunity framing. The point of discovery is understanding, not direction.
  • Use it when: you’re entering a new market, exploring a problem space you don’t live in, or questioning whether your assumptions about the audience are right.
  • Don’t use it when: you have specific hypotheses you want tested, or a concept you want evaluated. Discovery is too broad to give you those answers efficiently.

Problem validation

Pick this when you have a clear hypothesis about what problem matters and you want explicit evidence supporting or contesting it.

  • Inputs you provide: one or more problem hypotheses (statements about what’s painful for the audience), plus learning goals.
  • What the guide does: probes for behavioral evidence around each hypothesis. With five or more hypotheses, the guide automatically switches to a card-sort methodology where personas sort hypotheses into pain buckets and the interviewer deep-dives the top-ranked ones.
  • What synthesis returns: an explicit verdict per hypothesis (supported, contested, contradicted) with the supporting quotes attached.
  • Use it when: you have a hypothesis to kill or confirm. The verdict structure is what makes this type worth doing — it forces an answer.
  • Don’t use it when: your hypotheses are still vague. Problem validation needs sharp statements; vague ones get ambiguous verdicts.
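The methodology switch described above can be sketched as a one-line rule. This is an illustrative sketch only, not a Candor API: the function and the "per-hypothesis" label are ours, while the five-hypothesis threshold comes from the behavior described.

```python
def select_validation_methodology(num_hypotheses: int) -> str:
    """Hypothetical sketch: with five or more hypotheses the guide
    switches to card-sort; below that it probes each one directly."""
    return "card-sort" if num_hypotheses >= 5 else "per-hypothesis"
```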

Concept testing

Pick this when you have a specific product concept and you want to know whether it lands.

  • Inputs you provide: one or more concept statements (a paragraph each describing the concept), plus learning goals.
  • What the guide does: re-establishes the relevant pain from the persona’s memory, then presents the concept and probes comprehension, fit, objections, and willingness to engage. With multiple concepts the methodology adapts: comparative when there are a handful, card-sort when there are many.
  • What synthesis returns: what’s working, what needs rethinking, and who the concept is actually for. Structured to drive a build-or-pivot decision, not just generate themes.
  • Use it when: you have a real concept, not a vague direction. Concept testing rewards specificity in the input.
  • Don’t use it when: you don’t yet know whether the underlying problem is real. Run problem validation first, then concept testing on what survives.

Price testing

Pick this when you have a concept that’s already resonating and you need to understand what people will pay for it and how price interacts with perceived value.

  • Inputs you provide: the concept(s) being priced, candidate price points if you have them, and learning goals around willingness-to-pay or value perception.
  • What the guide does: probes how the persona thinks about cost, the alternatives they compare against, anchors they reach for, and where they’d draw price-sensitivity thresholds.
  • What synthesis returns: value perception gaps, willingness-to-pay ranges, and where price becomes a deal-breaker for which segments.
  • Use it when: you have validated demand and need pricing intelligence to commit to a number.
  • Don’t use it when: the concept hasn’t been validated for fit yet. Pricing feedback on a misfit concept is misleading.

A simple decision aid

Three questions, in order:

  • Do you know what problem matters for this audience? No → problem discovery. Yes, but want evidence → problem validation. Yes, with evidence → keep going.
  • Do you have a specific concept to test? No → don’t pick concept testing yet. Yes → concept testing.
  • Has the concept already been validated for fit? No → don’t pick price testing yet. Yes → price testing.
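The three questions above can be written as a small decision function. This is an illustrative sketch, not part of Candor: the function and flag names are ours, and `None` stands in for "don't pick that type yet."

```python
from typing import Optional

def choose_interview_type(knows_problem: bool,
                          has_evidence: bool,
                          has_concept: bool,
                          concept_validated: bool) -> Optional[str]:
    """Walk the three questions in order; return the interview type,
    or None when the honest answer is "not yet"."""
    if not knows_problem:
        return "problem discovery"      # don't know what problem matters
    if not has_evidence:
        return "problem validation"     # know it, but want explicit evidence
    if not has_concept:
        return None                     # no specific concept: too early for the next two
    if not concept_validated:
        return "concept testing"        # real concept, fit not yet validated
    return "price testing"              # validated fit: commit to a number
```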

A common stack for new ventures

Many product teams sequence these in order: problem discovery (orient), problem validation (sharpen), concept testing (test the solution), price testing (commit to a number). Each stage informs the next. You don’t always need all four — pick the ones that map to decisions you actually need to make.

Where to go next

Candor is in development.
