USE CASE

Assumption validation

Take the assumptions baked into your roadmap, business plan, or pitch deck and stress-test them one by one. Layer it onto any study type. Find out which assumptions are load-bearing and which are wishful thinking, before the launch tells you which is which.

When to run assumption validation

Assumption validation is the right call before any commitment big enough that being wrong about an underlying claim would change the work. A board meeting where a roadmap is locked. A launch where the targeting and pricing have been set. A funding round where the pitch hinges on specific market claims. A quarterly planning cycle where last quarter’s assumptions silently became this quarter’s plan.

Most strategic decisions rest on a small number of load-bearing assumptions and a long tail of supporting ones. The work fails when one of the load-bearing ones turns out to be wrong, and nobody noticed because the team had stopped questioning it. Assumption validation is a structured way to put those assumptions back in front of evidence.

How Candor runs it

You list the assumptions explicitly and pick the study format that fits each. Some assumptions test best as problem-validation hypotheses (“buyers in this segment actively struggle with X”). Some test best as concept tests (“this solution would change buyer behavior in this segment”). Some test best as price-testing probes (“buyers in this segment would pay this much for this”). Candor lets you layer assumption tracking onto any of those formats: each interview probes the assumptions you named up front, and synthesis returns per-assumption verdicts.
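For illustration (the field names and study-format labels below are hypothetical, not Candor's actual interface), an assumption list for a study might pair each claim with the format that tests it best:

```python
# Hypothetical sketch: pairing each assumption with the study format that tests it best.
# The dataclass fields and StudyFormat values are illustrative, not Candor's API.
from dataclasses import dataclass
from enum import Enum

class StudyFormat(Enum):
    PROBLEM_VALIDATION = "problem validation"
    CONCEPT_TEST = "concept test"
    PRICE_TEST = "price test"

@dataclass
class Assumption:
    claim: str              # the specific, falsifiable statement to probe
    tested_as: StudyFormat  # the study format that fits this claim

assumptions = [
    Assumption("Buyers in this segment actively struggle with X",
               StudyFormat.PROBLEM_VALIDATION),
    Assumption("This solution would change buyer behavior in this segment",
               StudyFormat.CONCEPT_TEST),
    Assumption("Buyers in this segment would pay this much for this",
               StudyFormat.PRICE_TEST),
]
```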

The critic agent validates each persona response against the persona’s prior statements and the underlying evidence, so a load-bearing assumption doesn’t sneak through on the back of an inconsistent answer. The synthesis pipeline tags each assumption’s evidence by the archetypes that confirmed and contested it, so you can see where an assumption holds for one segment but breaks for another.
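To make that output concrete, here is a hypothetical shape for a per-assumption verdict (a sketch, not Candor's actual schema): the verdict, the archetypes that confirmed and contested it, and the quotes behind the call.

```python
# Hypothetical per-assumption verdict record; every field name here is illustrative.
from dataclasses import dataclass, field

@dataclass
class AssumptionVerdict:
    claim: str
    verdict: str  # "supported" | "contested" | "not supported"
    confirmed_by: list[str] = field(default_factory=list)  # archetypes that confirmed it
    contested_by: list[str] = field(default_factory=list)  # archetypes that contested it
    quotes: list[str] = field(default_factory=list)        # persona quotes behind the call

# An invented example of what a contested verdict might carry.
example = AssumptionVerdict(
    claim="Buyers will switch from incumbent X if our product saves them 30% of their time",
    verdict="contested",
    confirmed_by=["SaaS insights lead"],
    contested_by=["CPG brand manager"],
    quotes=["The time savings are real, but switching costs would eat them in the first quarter."],
)
```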

What you walk away with

A synthesis report with per-assumption verdicts (supported, contested, or not supported), with the supporting and contradicting evidence and the persona quotes behind each call. Contested verdicts are tagged with the segments where the assumption held and where it broke. The output is shaped so a stakeholder review can move directly from verdict to decision: this assumption is solid, that one needs follow-up with real customers, the other one is wishful thinking and the plan that depends on it needs reworking.

The most useful output is usually the contested verdicts. They point at segmentation work that hasn’t happened yet (“this assumption holds for SaaS but not for CPG”) or at hidden tradeoffs (“this works for the buyer but not the user”). Killing wishful thinking saves cycles. Surfacing tradeoffs reshapes the work.

Where to go next

Assumption validation often layers onto another study type rather than running standalone. Pair it with problem validation when the assumptions are about pains. Pair it with concept testing when they’re about reactions to a candidate solution. Pair it with price testing when they’re about willingness-to-pay or buying authority. If you’re still in the open exploration phase and the assumptions are loose, start with problem discovery to surface what the assumptions should actually be.

Common questions

How is assumption validation different from problem validation?

Problem validation tests whether a specific problem is real for a specific segment. Assumption validation is broader: it stress-tests any assumption baked into your work, including assumptions about the audience, the competition, the buying process, the willingness to switch, the regulatory landscape, the channel, and the value perception. A problem-validation study is one kind of assumption-validation study. Most teams use the broader frame when they need to pressure-test a strategic decision rather than a specific product hypothesis.

What makes an assumption testable?

A testable assumption is specific, falsifiable, and load-bearing. Specific means it names a concrete claim, not a vague belief. Falsifiable means evidence could plausibly contradict it. Load-bearing means if it turns out to be wrong, a meaningful decision has to change. “Senior insights managers at CPG companies treat concept-test budget as a quarterly fixed line item, not a per-launch variable spend” is testable. “CPG teams care about research” is not.

Can I test all of my assumptions?

Not all of them in one study, and you wouldn’t want to. Most roadmaps have a small number of load-bearing assumptions and a long tail of supporting ones. Pick the four to seven assumptions that, if wrong, would force you to change the roadmap. Test those. Re-run after major roadmap changes or quarterly, with a fresh batch of assumptions reflecting where the work has moved.

How granular should each assumption be?

Specific enough that the verdict is unambiguous, broad enough that the verdict matters. “Buyers in this segment will switch from incumbent X if our product saves them 30% of their time” is the right shape. “Buyers want efficiency” is too vague. “Buyers will switch if Tuesday afternoons feel different” is too narrow. The test is: if the verdict is “not supported”, would you change a real decision? If yes, the granularity is right.

Which assumptions should I validate first?

Two filters. First, criticality: which assumptions, if wrong, force the biggest decision change? Validate those first. Second, uncertainty: of the critical assumptions, which are you least confident in based on what you already know? Confidence isn't a moral judgment, it's an investment guide. High criticality plus high uncertainty equals validate now. High criticality but high confidence equals validate later (or skip if the team is aligned). Low criticality equals don't bother.
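A minimal sketch of the two-filter logic, with invented high/low labels standing in for however you actually score criticality and confidence:

```python
# Hypothetical triage of an assumption by the two filters described above.
def triage(criticality: str, confidence: str) -> str:
    """Map ("high"/"low") criticality and confidence to an action. Labels are illustrative."""
    if criticality == "low":
        return "don't bother"
    if confidence == "low":
        return "validate now"   # high criticality + high uncertainty
    return "validate later"     # high criticality + high confidence (or skip if aligned)

assert triage("high", "low") == "validate now"
assert triage("high", "high") == "validate later"
assert triage("low", "high") == "don't bother"
```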
