USE CASE

Concept testing

Pressure-test new products, features, or campaigns before you commit. Find out which ideas resonate, which fall flat, and why. Cut the unpromising ones in hours, not weeks, with every reaction grounded in real evidence and traceable to its source.

When to run concept testing

Concept testing is the right call when you have a candidate solution and you need to learn how the audience reacts before you invest engineering, design, or campaign budget. The goal is to find out which concepts resonate, which fall flat, and who the concept is actually for, with enough specificity that you can commit, kill, or refine.

The most common moments: pre-launch validation, stage-gate review, choosing between concept variants, refining a redesign before engineering scope is locked, deciding whether to greenlight a campaign or a feature. The pattern that fails most often: ship the concept that won the loudest internal vote and discover after launch that no segment actually wanted it.

How Candor runs it

You provide the concept (description, wireframes, copy, or a mix) and define the audience. Candor builds synthetic personas grounded in real evidence and runs the concept test against them. Three formats are supported: monadic (one concept per persona, depth focus), comparative (two or more concepts per persona, head to head), and card sort (multiple concepts ranked by each persona, useful when you have several variants).

Each interview probes resonance, comprehension, friction, and willingness to act. The persona’s prior conversation context stays loaded across questions, so reactions cohere instead of drifting. The critic agent validates each response against the persona’s prior statements before delivery, so a persona who praised something in turn three doesn’t silently contradict it in turn ten.

What you walk away with

A synthesis report that organizes concept-specific findings: resonance points by archetype, friction points by archetype, audience-fit alignment, and the tensions where two segments disagreed. For comparative or card-sort studies, the report includes ranking patterns and the reasoning behind each persona’s preference. Every finding links back to the specific quotes, the personas who said them, and the evidence sources behind those personas.

The output is shaped to drive the next decision. Concepts that won across most archetypes move forward. Concepts that won only with a narrow segment become segmentation decisions: ship to that segment, or refine for a broader fit. Concepts that lost across the board get killed before they consume more cycles.

Where to go next

Before concept testing, validate that the underlying problem is real with problem validation. After concept testing, refine how you describe the winning concept with value-prop testing, and pressure-test the price with price testing. If you want to stress-test the strategic assumptions sitting behind the concept itself, layer in assumption validation.

Common questions

How detailed does my concept need to be?

More detail produces sharper signal. A concept stated as one sentence will return reactions to that sentence; a concept with positioning, key features, target audience, and a price point will return reactions across each of those dimensions. The minimum useful concept is enough that a reader can answer: what does it do, who is it for, and why would I use it? Wireframes, screenshots, and example messaging are welcome inputs.

Can I test multiple concepts against each other?

Yes. The card-sort and comparative variants of concept testing are designed for this. You can show personas multiple concepts (or multiple variants of one concept) and observe how each archetype ranks them, which dimensions drive the ranking, and where personas split. The synthesis output highlights which concept won where, and the qualitative reasoning behind each persona's preference.

How is concept testing different from value-prop testing?

Concept testing evaluates the whole proposition: the product, the audience fit, the use case, and the value claim together. Value-prop testing isolates the message and tests how it lands. The same product can be concept-tested once and then value-prop-tested several times as you refine how you describe it. If you only have words on a page, you're closer to value-prop testing; if you have a working idea or a wireframe, you're closer to concept testing.

Does concept testing work for both B2B and B2C?

Yes. Candor models B2B and B2C as fundamentally different decision domains. A B2B concept test probes the buying-committee dynamics, the procurement constraints, the integration questions, and the use cases that justify the spend. A B2C concept test probes the impulse triggers, the substitution behavior, the price anchors, and the social context. The personas, biases, and decision frameworks are different in each. You pick the audience type when you create the study.

Can I test changes to an existing product?

Yes. Concept testing isn't only for new builds. Teams use it to pressure-test a redesign, a new feature, a packaging change, or a repositioning of an existing product. The framing shifts (you're often testing 'this version' against 'the current version' or 'a competitor's version'), but the methodology is the same: show the concept to personas, probe reactions, and synthesize what resonates and what doesn't.


Candor is in development.

Powered by Highline Beta.