Petanque Life

Quality Control

F20.05 (4 features)

At a glance

Quality Control instruments every response with behavioral signals — straight-lining patterns, speed outliers, and drop-off coordinates — and exposes per-response quality flags that analysts can use to filter, weight, or exclude submissions. The result is research-grade fieldwork that meets academic and regulatory standards without manual data cleaning, and dashboards that surface design problems in the questionnaire while a survey is still in field.

How it works

As responses come in, a QC pipeline scores each submission against a battery of heuristics. Straight-lining detection inspects multi-item Likert and Matrix blocks and flags responses where a respondent picked the same value for every item, optionally with a tolerance for one or two deviations to avoid false positives on grids where a uniform answer is genuinely valid. Speed outlier detection captures the time spent on each question and on the survey as a whole; submissions completed in less than 10% of the median total duration, calibrated per survey rather than globally, are flagged as likely speeders.
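A minimal sketch of these two heuristics in Python. The function names, field shapes, and the default tolerance are illustrative assumptions, not the platform's actual API; only the 10%-of-median threshold comes from the description above.

```python
from statistics import median

def is_straight_lined(grid_answers, tolerance=1):
    """Flag a multi-item grid when at most `tolerance` items deviate
    from the modal answer. Very short grids are never flagged, to
    avoid false positives."""
    if len(grid_answers) < 3:
        return False
    modal = max(set(grid_answers), key=grid_answers.count)
    deviations = sum(1 for a in grid_answers if a != modal)
    return deviations <= tolerance

def is_speed_outlier(duration_s, survey_durations_s, fraction=0.10):
    """Flag a submission faster than `fraction` of this survey's
    median total duration (calibrated per survey, not globally)."""
    return duration_s < fraction * median(survey_durations_s)
```

Calibrating against the survey's own median rather than a global constant keeps the threshold meaningful across surveys of very different lengths.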

Drop-off analysis records the last interacted question and the action that preceded abandonment (closed tab, navigated away, idle timeout) so authors can see exactly where attention is lost. The dashboard renders a funnel from invitation to first answer to each subsequent question to completion, with conversion rates per step and side-by-side comparison across segments to surface design problems such as a confusing matrix or an over-long open-ended block. Every submission carries a quality_flags array combining these signals plus optional custom flags from validators (impossible answers, contradictory branches, suspected bot patterns when the Turnstile risk score is elevated).
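A sketch of how the quality_flags array might be assembled from these signals. All field names, the risk threshold, and the flag strings are assumptions for illustration; the source only specifies that behavioral and validator signals are combined into one per-response array.

```python
def build_quality_flags(response, survey_stats):
    """Combine behavioral signals and validator flags into one
    advisory quality_flags array. Nothing here deletes or mutates
    the underlying response."""
    flags = []
    if response.get("straight_lined"):
        flags.append("straight_lining")
    # Speed check against the per-survey median (10% threshold from the spec).
    if response.get("duration_s", 0) < 0.10 * survey_stats["median_duration_s"]:
        flags.append("speed_outlier")
    # Hypothetical bot signal when the Turnstile risk score is elevated.
    if response.get("turnstile_risk", 0.0) > survey_stats.get("bot_risk_threshold", 0.8):
        flags.append("suspected_bot")
    # Custom validator flags: impossible answers, contradictory branches, etc.
    flags.extend(response.get("validator_flags", []))
    return flags
```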

Analytics queries and exports accept a quality filter — strict (no flags), permissive (timing flags allowed), or raw — so the same dataset can serve a board-ready report and a sensitivity analysis without forking the data. Flags are advisory rather than destructive: no response is auto-deleted, and authors can review flagged submissions individually with the full timeline (timestamps, branches taken, edits) before deciding to keep, weight, or exclude them.
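The three quality modes can be sketched as a pure filter over the flagged dataset. The mode names come from the description above; the flag taxonomy and function signature are illustrative assumptions.

```python
# Timing-related flags that 'permissive' mode tolerates (assumed taxonomy).
TIMING_FLAGS = {"speed_outlier"}

def apply_quality_filter(responses, mode="strict"):
    """Filter responses by quality mode: 'strict' drops any flagged
    submission, 'permissive' allows timing-only flags, 'raw' keeps
    everything. Returns a new list; flags are advisory, so the
    underlying data is never modified."""
    if mode == "raw":
        return list(responses)
    kept = []
    for r in responses:
        flags = set(r.get("quality_flags", []))
        if mode == "strict" and flags:
            continue
        if mode == "permissive" and (flags - TIMING_FLAGS):
            continue
        kept.append(r)
    return kept
```

Because the filter is applied at query and export time, the same stored dataset can back both a strict board-ready report and a permissive sensitivity analysis without being forked.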

Key capabilities

  • Straight-lining detection on Likert and Matrix blocks with tolerance configuration
  • Speed outlier detection calibrated per survey median completion time
  • Drop-off analysis with funnel visualization across questions and segments
  • Per-response quality_flags array combining behavioral and validator signals
  • Quality filter (strict/permissive/raw) applied to queries and exports
  • Non-destructive review workflow with full response timeline and edit history

In practice

A research lead reviewing a closed satisfaction survey notices the dashboard reports 1,240 responses but the strict-quality count is 1,108. She opens the QC panel and sees 92 straight-liners and 47 speed outliers (with 7 flagged in both categories). The drop-off funnel reveals 18% abandonment on a 10-row matrix question, which she earmarks for a redesign.
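The counts in this scenario reconcile by inclusion-exclusion over the two flag sets, which is also how a per-response flags array naturally behaves, since a doubly flagged response is excluded only once:

```python
total, straight_liners, speeders, both = 1240, 92, 47, 7

# A response flagged in both categories is still just one excluded response.
flagged = straight_liners + speeders - both   # 132
strict_count = total - flagged                # 1108, matching the dashboard
```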

She inspects ten flagged responses to confirm they are not legitimate edge cases, then exports two datasets — strict for the board report and permissive for an internal sensitivity analysis — without altering the underlying data, and shares both together with the funnel chart in her methodology appendix.

Features in this subsystem

ID        | Status  | Feature                                                                  | Ticket
----------|---------|--------------------------------------------------------------------------|-----------
F20.05.01 | Shipped | Straight-lining detection — flags responses with identical Likert values | ✅ PL-T079
F20.05.02 | Shipped | Speed outlier detection — flags responses faster than 10% of median      | ✅ PL-T079
F20.05.03 | Shipped | Drop-off analysis — tracks where respondents abandon the survey          | ✅ PL-T079
F20.05.04 | Shipped | Quality flags — per-response quality signals for filtering               | ✅ PL-T079

Stakeholders who need this subsystem

Surfaces in 1 stakeholder analysis