Reduce Your Research Screener’s Drop-off Rate

An analysis of over 42,000 screeners reveals best practices for selecting the number and type of questions to increase completion rates.

“How long should my screener survey be?”

“Is there anything I can do to boost my completion rates?”

“I like open-ends. How many should I use in my screener?”

The “answers” to these questions might be based on experience, anecdotes from colleagues, or a gut feeling. But what about using a massive trove of screeners to surface trends? That might create a more confident and satisfying path forward.

This report does just that. Specifically, the User Interviews team analyzed over 42,000 screeners launched on its platform. These screener surveys were used by a wide variety of companies, team types, and industries. They were used for moderated, unmoderated, and combo-style research designs.

Read on for data to use the next time you or someone on your team asks one of those questions about screener surveys.

For more information about screener surveys, including best practices on question design and skip logic, check out our Field Guide on participant recruiting.

Method

For this report, we examined a single measure of screener performance: drop-off rate, the percentage of participants who start your screener but do not complete it. When drop-off rates are high, your pool of potential research candidates shrinks, reducing your flexibility in selecting for core personas, demographic groups, or geographic diversity. For qualitative research especially, sample quality is key, since studies are typically small and can be skewed by just one or two poor-fit participants.
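For readers who want to track this metric on their own screeners, here is a minimal sketch of the calculation in Python. The inputs (starts and completions) and the function name are illustrative assumptions, not part of the User Interviews platform.

```python
def drop_off_rate(started: int, completed: int) -> float:
    """Percentage of participants who started a screener but did not finish it."""
    if started == 0:
        return 0.0  # avoid dividing by zero when no one has started yet
    return (started - completed) / started * 100

# Example: 500 people start a screener and 470 finish it -> 6.0% drop-off
print(f"{drop_off_rate(500, 470):.1f}%")
```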

What causes a high drop-off rate in research screeners?

A high drop-off rate (sometimes called abandonment) may reflect poor project-participant fit (e.g., someone who is not a caregiver or parent starts a screener focused on daycare programs), but it can also point to core issues with the screener’s design. Length, question wording, and unclear or undefined research concepts are all potential culprits.

Average drop-off rates vary depending on modality (digital vs. in-person vs. phone), industry, and audience type—to name just a few. Moreover, most public benchmarks for drop-off rate are based on “surveys” and not screener surveys specifically (some screeners launched on User Interviews, for example, are recruiting participants for subsequent research surveys).

The 42,756 screeners in our sample were launched between November 2021 and November 2023 via the User Interviews Recruit platform. These screeners were used in a range of project types, from unmoderated surveys and usability testing to moderated interviews. All screeners were digital (i.e., taken on a computer).

Results

Across all User Interviews screeners, the average drop-off rate is 6%. Screener length varied widely (from 1 to 118 questions), but the average length is about 10 questions and the median is 7, with 4-question screeners being the most prevalent. The average 10-question screener uses 8 closed-ended and 2 open-ended questions (an 80/20 mix).

The impact of length and question type

But how do the number and type of questions affect your screener’s drop-off rate? The data suggest that for both B2C (consumer) and B2B (professional) audiences, open-ended questions influence drop-off more than closed-ended questions: the drop-off rate increases with each additional open-ended question.

Note: Charts were capped at 30 questions for data visualization purposes.

On the other hand, screener drop-off stays relatively low and stable as more closed-ended questions are added.

Differences between professional (B2B) and consumer (B2C) audiences

Comparing screeners by audience type yields useful information about the impact of screener length. Specifically, B2B screeners show higher drop-off rates overall, and their drop-off is more sensitive to screener length.

Put into action

How might you apply these findings to your next research screener survey?

  1. Consider the number of open-ended questions you use. The data suggest that open-ended questions have a stronger effect on drop-off than closed-ended questions.
  2. Professional (B2B) audiences are more sensitive to screener survey length than consumer (B2C) ones. If working with this population, strive for conciseness.
  3. If someone new to screener surveys asks for design advice, share the averages from our sample as a general benchmark: the typical screener is about 10 questions long, with an 80/20 mix of closed- to open-ended questions.