INTERVIEWS

Running an auto-interview

An auto-interview runs a guide against your personas in parallel and produces a synthesis at the end. This article walks through the launch, the run page, the chips and badges, and how to read what comes back.

Launching a run

From an approved guide, the launch flow is one screen. By default, Launch all interviews is checked, which runs the guide against every persona in the project. Uncheck it to pick a subset — personas are grouped by archetype, with a tri-state checkbox per group.

Once you click Launch interviews, the run kicks off and you’re redirected to the run page. Up to five sessions execute in parallel; the rest queue.

The run page

Top of the page: a status card showing how many sessions are complete, how many failed, and a progress bar split between the interview phase and the synthesis phase that follows it. Below that is a list of session cards, one per persona.

Session cards

Each card shows the persona name, a status label (Queued, Interviewing (turn N), Interview complete, Failed, or Abandoned), and a turn count. Cards are clickable once they have at least one turn — click to expand the transcript inline.

The chips

On the right of each session card you’ll see chips summarising what happened. The set you see depends on the interview type.

  • Coverage — format like 6/7 deep or 4 deep · 2 placement · 1 none. Counts which learning goals were substantively covered (deep), partially covered (placement), or not reached (none). Amber if any goal got zero coverage.
  • Card-sort outcome (problem validation only) — labels like Healthy card sort, Promoted from somewhat, Overloaded — top 4, Low discrimination, No pain resonated, Audience mismatch, No concepts resonated, or Lukewarm — gap-diagnose top 2. Hover for the outcome rationale.
  • Concept comprehension (concept testing only) — like 3/4 comprehended, plus a count of concepts skipped because the persona didn’t understand them.
  • Persona-fidelity flags — the count of critic-detected drift events for this session, with the top one or two flag types named. 0 flags is clean. Hover for the per-flag breakdown with turn numbers and excerpts.
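
To make the coverage chip concrete, here is a small sketch of how such a label could be computed. The function name, input shape, and the rule for switching between the two label formats are illustrative assumptions, not Candor's actual code:

```python
from collections import Counter

def coverage_chip(goal_levels):
    """Format a coverage chip from per-goal levels ("deep", "placement", "none").

    Returns (label, amber): amber is True when any learning goal got
    zero coverage, matching the chip's amber styling.
    """
    counts = Counter(goal_levels)
    deep, placement, none = counts["deep"], counts["placement"], counts["none"]
    if placement and none:
        # Mixed outcome: spell out the full breakdown.
        label = f"{deep} deep · {placement} placement · {none} none"
    else:
        # Mostly-deep outcome: compact "covered/total" form.
        label = f"{deep}/{len(goal_levels)} deep"
    return label, none > 0
```

So seven goals with six covered deeply would render as 6/7 deep, while a mixed result renders the full breakdown and turns the chip amber.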

Reading a transcript

Click a session card to expand. You’ll see the full interviewer/persona turn-by-turn transcript with confidence and critic markers per turn. For problem-validation sessions, the card-sort outcome table appears at the top showing each hypothesis with its bucket, rank, whether it was deep-dived, and how many probes were spent on it.

Once a session completes, an auto-generated session summary sits above the transcript. Skim summaries first if you have many sessions — full transcripts are for the ones whose chips made you curious.

Exporting transcripts

Click Export transcripts in the run header to download every session as a single Markdown file. Inside an expanded session, Export transcript grabs just that one. Files contain the full Q/A history with no truncation, so you can feed them to another tool or LLM for offline analysis.
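
Because the export is plain Markdown, it is easy to post-process. A minimal sketch of the shape such a file could take (the session structure and field names here are assumptions for illustration, not the actual export schema):

```python
def render_transcripts(sessions):
    """Render sessions as one Markdown document: full Q/A history, no truncation."""
    parts = []
    for session in sessions:
        # One heading per persona, then alternating interviewer/persona turns.
        parts.append(f"## {session['persona']}")
        for turn in session["turns"]:
            parts.append(f"**Interviewer:** {turn['question']}")
            parts.append(f"**Persona:** {turn['answer']}")
    return "\n\n".join(parts)
```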

When sessions stop

Each session has stopping criteria built into the guide. Sessions end when learning goals are sufficiently covered, when the persona is going off-profile (degradation detection), or when section budgets are spent. You’ll see this reflected in the final coverage chip: a clean ending shows good coverage, while an early stop shows amber styling.
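
The three criteria can be summarised as a simple decision function. This sketch uses assumed names, a fixed check order, and an invented drift threshold, purely to illustrate the logic:

```python
def stop_reason(coverage, drift_flags, turns_in_section, section_budget,
                drift_limit=3):
    """Return why a session should end, or None to keep going.

    coverage: dict of learning goal -> "deep" | "placement" | "none".
    drift_limit is an assumed threshold for off-profile detection.
    """
    if all(level == "deep" for level in coverage.values()):
        return "goals covered"          # clean ending: good coverage chip
    if drift_flags >= drift_limit:
        return "persona off-profile"    # degradation detection fired
    if turns_in_section >= section_budget:
        return "section budget spent"   # guide capped turns per section
    return None
```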

After the run

Once every session is in a terminal state (complete, failed, or abandoned), Candor automatically kicks off synthesis. The progress bar shifts to the synthesis phase. When synthesis finishes, a green banner appears with a View report link — that’s where the structured findings live.

If synthesis fails, the run page surfaces an error and a link to the report page where you can read the failure reason and retry.
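
The trigger condition amounts to a check over session states. A minimal sketch, assuming string state labels:

```python
TERMINAL_STATES = {"complete", "failed", "abandoned"}

def ready_for_synthesis(session_states):
    """Synthesis starts only when every session is terminal.

    A single still-running session holds the whole run, even if
    other sessions have already failed.
    """
    return all(state in TERMINAL_STATES for state in session_states)
```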

A note on what happens to memory

Auto-interview sessions write to the same persona memory as live interviews. If you run an auto-interview against a persona you’ve already had a live conversation with, the auto-interviewer knows what you discussed and the persona will reference it. This is why session ordering matters. Running the same guide twice produces two distinct sessions, both retained.

Common questions

How many personas should a run include?

Most projects run between 8 and 16 personas per auto-interview. Below 8 and the synthesis loses cross-persona signal. Above 16 and you're spending budget for diminishing returns, since coverage tends to plateau. The right number depends on how heterogeneous your audience is. A narrow B2B audience with two archetypes works with 8. A broad B2C audience with five archetypes wants 16 or more. The launch flow lets you pick which personas to include, grouped by archetype.

What do the coverage and persona-fidelity chips mean?

Coverage shows how thoroughly the guide's learning goals got addressed: deep means substantive coverage, placement means partial, none means the goal wasn't reached. Persona-fidelity flags count critic-detected drift events in that session, with the top flag types named. A session with full coverage and zero fidelity flags is clean. A session with amber coverage or several fidelity flags is worth opening to read the transcript before trusting the synthesis. Use the chips to prioritise which sessions to actually read.

Why did a session end early or fail?

Three common reasons. First, the persona was going off-profile and degradation detection ended the session to protect transcript quality. Second, section budgets were exhausted (the guide capped how many turns can be spent in each section). Third, an LLM or infrastructure timeout occurred and the session was retried but couldn't complete. The run page surfaces the cause on each session card. Retry usually works for the third case. The first two cases are signal: the persona isn't a great fit for the guide.

Should I run auto-interviews against personas I've already spoken to live?

Yes, if the live conversations and the guide are about the same topic, since the personas will reference what you discussed before. If the live session covered different ground, also yes, since the auto-interviewer can extend coverage in directions you didn't go. The one case to avoid: don't run an auto-interview that asks the persona to "discover" something they already discussed with you. They'll skip it as known. Auto-interviews work best on fresh ground or on personas you haven't talked to yet.

When does synthesis start?

As soon as every session is in a terminal state (complete, failed, or abandoned). You don't have to launch synthesis manually. The progress bar shifts to the synthesis phase and a green "View report" banner appears when it's done. If a session is still running and others have failed, synthesis waits. If synthesis itself fails, the run page surfaces the error with a retry button. Most synthesis failures are transient and a retry succeeds.

