USE CASE
Take the assumptions baked into your roadmap, business plan, or pitch deck and stress-test them one by one. Layer it onto any study type. Find out which assumptions are load-bearing and which are wishful thinking, before the launch tells you which is which.
Assumption validation is the right call before any commitment big enough that being wrong about an underlying claim would change the work. A board meeting where a roadmap is locked. A launch where the targeting and pricing have been set. A funding round where the pitch hinges on specific market claims. A quarterly planning cycle where last quarter’s assumptions silently became this quarter’s plan.
Most strategic decisions rest on a small number of load-bearing assumptions and a long tail of supporting ones. The work fails when one of the load-bearing ones turns out to be wrong, and nobody noticed because the team had stopped questioning it. Assumption validation is a structured way to put those assumptions back in front of evidence.
You list the assumptions explicitly and pick the study format that fits each. Some assumptions test best as problem-validation hypotheses (“buyers in this segment actively struggle with X”). Some test best as concept tests (“this solution would change buyer behavior in this segment”). Some test best as price-testing probes (“buyers in this segment would pay this much for this”). Candor lets you layer assumption tracking onto any of those formats: each interview probes the assumptions you named up front, and synthesis returns per-assumption verdicts.
The critic agent validates each persona response against the persona’s prior statements and the underlying evidence, so a load-bearing assumption doesn’t sneak through on the back of an inconsistent answer. The synthesis pipeline tags each assumption’s evidence by the archetypes that confirmed and contested it, so you can see where an assumption holds for one segment but breaks for another.
The deliverable is a synthesis report with per-assumption verdicts (supported, contested, or not supported), each backed by the supporting and contradicting evidence and the persona quotes behind the call. Contested verdicts are tagged with the segments where the assumption held and where it broke. The output is shaped so a stakeholder review can move directly from verdict to decision: this assumption is solid, that one needs follow-up with real customers, the other one is wishful thinking and the plan that depends on it needs reworking.
The most useful output is usually the contested verdicts. They point at segmentation work that hasn’t happened yet (“this assumption holds for SaaS but not for CPG”) or at hidden tradeoffs (“this works for the buyer but not the user”). Killing wishful thinking saves cycles. Surfacing tradeoffs reshapes the work.
Assumption validation often layers onto another study type rather than running standalone. Pair it with problem validation when the assumptions are about pains. Pair it with concept testing when they’re about reactions to a candidate solution. Pair it with price testing when they’re about willingness-to-pay or buying authority. If you’re still in the open exploration phase and the assumptions are loose, start with problem discovery to surface what the assumptions should actually be.