USE CASE

Problem validation

Pressure-test whether a specific problem is real, painful, and worth solving for a specific segment. Cut the problems your team is over-indexed on. Surface the ones you missed. Get explicit verdicts on each hypothesis, with the evidence behind them.

When to run problem validation

Problem validation is the right call when you have specific hypotheses about what your audience cares about and you need to know which to invest in. Hypotheses can come from anywhere: a prior discovery study, customer support tickets, sales calls, a leadership-team workshop, an executive’s strong intuition. Wherever they come from, the next question is the same: are these problems real, do the right segments feel them, and which ones are worth solving?

Without validation, teams often invest engineering effort in problems that turn out to be edge cases or executive pet theories. The cost shows up later, when a feature ships and the metrics don’t move. Early validation is the cheapest filter you have.

How Candor runs it

You state the hypotheses up front. Candor runs structured interviews probing each hypothesis against the persona’s actual situation, with follow-up questions that test the persona’s reaction in the context of their work. Hypothesis-introduction questions are delivered verbatim from your study guide so the framing stays consistent across every persona, and the interviewer agent then probes for behavioral evidence, frequency, severity, and counterfactuals.

The critic agent reviews every persona response against the persona’s prior statements and the underlying evidence. If a persona contradicts itself, or if a response drifts from what the source evidence supports, the response is regenerated before you see it. You get clean signal, not noise from inconsistencies accumulating across a long study.

What you walk away with

A synthesis report with explicit verdicts on each hypothesis: supported, contested, or not supported. Each verdict comes with the supporting and contradicting evidence, the archetypes that confirm or deny it, and the specific persona quotes behind the call. Contested hypotheses are tagged with which segments confirmed and which didn’t, so you can decide whether you’re looking at a real problem with a narrower audience than you thought, or a problem that doesn’t hold up.

The output is designed to be acted on. Killed hypotheses stop consuming roadmap time. Validated hypotheses move forward with evidence behind them. Contested hypotheses become segmentation decisions. The report links every claim back to the source quote, so a stakeholder review can audit any finding without a handoff document.

Where to go next

If you don’t yet have hypotheses to validate, start with problem discovery to surface them. Once you have validated problems, move to concept testing to pressure-test candidate solutions, or value-prop testing to refine how you talk about the problem and the solution. To stress-test the broader assumptions sitting behind the work, run assumption validation.

Common questions

What does "validated" actually mean?

Validated means the evidence across personas supports the hypothesis as stated, with consistency across multiple personas in the relevant archetypes. A problem can also come back contested (some segments confirm it, others contradict it) or not supported (the personas don't recognize the problem the way you framed it). The synthesis output makes the verdict explicit and links it to specific quotes, so a contested verdict isn't a failure. It usually means your problem statement is true for one segment and not another.

What makes a good problem hypothesis?

A good hypothesis names the segment, the situation, and the consequence. For example: 'Senior insights managers at CPG companies struggle to test concept variations quickly enough to keep up with brand-team launch timelines, and this delays go-to-market by weeks.' That's testable. A weak hypothesis like 'CPG teams want better research' isn't, because it doesn't name a specific situation or consequence the personas can react to.

What happens when segments disagree?

The synthesis report highlights the split: which archetypes confirm the problem, which contest it, and what the contradicting evidence looks like. Contested verdicts often point at segmentation work to do. A problem that's real for one buyer type and not another usually means you have two different positioning conversations on your hands, not one. Treat contested as 'narrow your audience' or 'change the framing', not 'kill the hypothesis'.

How is this different from concept testing?

Problem validation tests whether a problem is real and worth solving. Concept testing tests whether a specific solution resonates. The order matters: a great concept solving a non-existent problem won't ship. A real problem with no candidate solution still has value (it tells you where to point your roadmap). Validate problems before you put effort into testing concepts against them.

How confident can I be in the results?

As confident as the underlying evidence allows. Each persona's reaction is grounded in the research evidence that built them, and every quote in the synthesis links back to its source. Problems that pass validation across multiple archetypes with citation-backed reasoning are higher-confidence than problems that pass with thinner evidence. Treat synthetic validation as a strong filter, not a final answer. High-stakes decisions still benefit from a small follow-up with real customers on the validated problems.

More FAQs →

Candor is in development.

Be the first to know when it launches.

No spam. Just a note when Candor is ready. Powered by Highline Beta.