Taking just a little extra time to get your screener right can drastically improve your recruit's effectiveness.
Including too many demographic and geographic characteristics in your screener survey is a common mistake that can not only make your criteria too niche, but can also introduce bias and make your survey results less representative of the real world you’re trying to study.
Don’t include geographic questions in your screener unless you have a clear reason to target based on location (like doing an in-person study or, say, researching the recycling habits of people in Houston vs. Austin). Similarly, avoid targeting based on age, ethnicity, income, education level, etc. unless these characteristics are genuinely disqualifying factors for your study.
A good rule of thumb: Worry less about how people are categorized on a census and more about how they think, feel and behave.
Screening for behaviors and psychographics lets you group people based on how they live, what they value, and how they relate to your product—all of which is probably much more relevant to your research than whether they graduated from a four-year college.
Of course, sometimes demographic data is relevant. For instance, if you want to test for accessibility with a mix of racial/gender identities, age ranges, and educational backgrounds, adding demographic criteria will allow you to target a diverse audience.
Rather than automatically accepting or rejecting based on these criteria, you can use demographics to filter for a variety of participants as a final step.
💡 Psst: Some recruiting services (like User Interviews) will automatically provide basic demographic, geographic, and technographic information for you. That means you don’t need to include it in your screener, and can focus on the interesting behavioral questions instead.
As you can see, it’s important not to make your screener criteria too narrow. But it’s equally important to make sure that your recruit has focus. Otherwise, you might lose track of what you’re trying to accomplish.
Running a study about iOS mobile apps? You could include questions that filter out Android users, people with outdated iPhones, or perhaps people who don’t have a smartphone at all.
Take the time up front to define the audience that best fits your research needs. This will prevent an overly broad recruit and will give you a more refined candidate pool that you won’t have to comb through later.
A leading question prompts or encourages someone to give a desired answer. Oftentimes this is by design (which is annoying, right?), but it can also happen by mistake if you’re not careful about how you phrase a question.
This can be especially problematic in research. Leading questions can skew research by indirectly nudging users to answer a certain way. When this happens in screener surveys, it can leave you with a pool of participants who aren’t actually a good fit for your study.
A good way to identify whether a question might be leading is if it includes a hint, excludes possible answers, or influences responses with emotive language.
Example 1: This question assumes the respondent hated the season and uses emotive language that may influence the answer.
Leading: On a scale of 0 to “Still resentful”, how much did you absolutely despise the last season of Game of Thrones?
Not leading: On a scale from 1 to 10, where 1 is terrible and 10 is excellent, rate the last season of Game of Thrones.
Example 2: The “wasn’t it” prompts agreement.
Leading: Wonder Woman 1984 was godawful, wasn’t it?
Not leading: How did you feel about Wonder Woman 1984?
Example 3: The word “amazing” could lead to respondent bias.
Leading: Do you like our amazing Zoom integration? (😉)
Not leading: How would you rate our Zoom integration?
Example 4: The word “always” plus a binary option would almost invariably lead participants to say “no.”
Leading: Do you always eat ramen for dinner?
Not leading: How often do you eat ramen?
Another way to avoid leading questions is to provide a series of unrelated options as answers. For example, if you want to screen for users who have a high level of concern around internet privacy issues, rather than diving right into questions about internet privacy, you can create a question like this to get to the people who really care:
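In code terms, this technique amounts to burying your real criterion among unrelated distractor options and qualifying only those who select it unprompted. Here is a minimal sketch in Python; the question wording, option list, and `qualifies` helper are hypothetical, for illustration only:

```python
# Hypothetical screener question: the topic of interest (internet privacy)
# is mixed in with unrelated distractor options so respondents aren't
# tipped off about what the study is really screening for.
question = {
    "prompt": "Which of these issues, if any, do you care deeply about?",
    "options": [
        "Local public transit",
        "Internet privacy",
        "Urban gardening",
        "Youth sports programs",
    ],
    "multi_select": True,
}

def qualifies(selected):
    # Only respondents who picked the topic of interest move on.
    return "Internet privacy" in selected

print(qualifies(["Internet privacy", "Urban gardening"]))  # True
print(qualifies(["Local public transit"]))                 # False
```

Because the options look equally plausible, a respondent who selects “Internet privacy” is signaling genuine concern rather than guessing what you want to hear.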
Binary questions can only tell you so much. At User Interviews we offer a variety of question formats—we encourage you to use them!
Sure, you could list a bunch of questions with simple quick-click options, but why not add a little fun and nuance into the mix? Plus, adding some more interesting questions may provide you with insights you didn’t anticipate.
In one study, for example, a researcher was interested in speaking to people who proactively look for deals when shopping online for clothing. The original screener looked something like this:
This was such a missed opportunity to gain some valuable consumer insights! Our revisions, pictured below, would add some much-needed clarity to candidate responses.
A few minutes of thoughtful revisions can really improve data collection. And as you can see, screeners don’t need to be long to be insightful.
Another reason to reword your Y/N questions is that they can be leading, or give away too much about the intent of your study up front. When you give away the plot, you can devalue the screener process itself, which is designed to find participants who are a good fit for your study—not just folks who give the answers you want to hear. Let’s say your research is about country music. Rather than this:
… try something like this:
Related tip: Rather than calling this study “Country Music Lovers Only,” you might want to try something more generic, like “Music Study”. You’ll be able to pull in a wide audience and carefully sift out the true country music fans. In addition, you’ll gain more information about your participants’ interests if you find yourself wanting to dig into the data further.
Skip logic is a great way to customize your screener—so don’t skip it! You can use skip logic to customize which questions a participant sees, depending on their responses to a previous question. You can also use it to avoid leading people to certain answers, ensuring you’re getting honest and accurate responses.
Let’s say you are conducting a study on pet ownership. You might want to capture more information about the type of pets someone has, but you also don’t want to exclude people who don’t have pets.
In the example below, you’ll notice that the questions on page two are only relevant for respondents who do have pets. Skip logic allows you to set up the respondent journey so that people without pets will essentially jump over these questions. It creates a more personalized experience and avoids confusion for the respondent, all while gathering the information you need.
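Under the hood, skip logic is just conditional branching: each answer determines which question the respondent sees next. A minimal sketch in Python, using hypothetical question IDs for the pet example above:

```python
# Minimal sketch of skip logic as conditional branching.
# Question IDs ("has_pets", "pet_types", "final_page") are hypothetical.

def next_question(current_id, answer):
    """Return the ID of the next question to show, based on the answer."""
    if current_id == "has_pets":
        # Pet owners see the follow-up questions about their pets;
        # everyone else jumps straight to the final page.
        return "pet_types" if answer == "Yes" else "final_page"
    if current_id == "pet_types":
        return "final_page"
    return None  # end of screener

# A pet owner is routed through the follow-up questions...
print(next_question("has_pets", "Yes"))  # pet_types
# ...while a respondent without pets skips over them.
print(next_question("has_pets", "No"))   # final_page
```

Survey tools express the same branching through a point-and-click rules editor, but the respondent journey they produce is exactly this kind of decision tree.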
The purpose of a screener is to filter your candidate pool—but that doesn’t mean you can’t also use these questions to get a sense of a candidate’s personality, and even have a little fun!
We encourage researchers to include at least one articulation question at the end of the screener. Perhaps you’re looking for creative thinkers to join an interactive in-person focus group; use an open-ended question to capture more insight into how they approach situations and solve problems. Here are a few of our favorites:
Feeling ready to put this new knowledge into practice? If you already use User Interviews for your research, we hope this article will help you craft the perfect screener survey for your next project.
New to User Interviews? Start recruiting with us and we’ll give you 3 free participants to help kick things off!
Note: This article was originally published in 2018 by Melanie Albert, who helped manage thousands of successful research projects during her time as VP of Operations at User Interviews. It has been updated in 2021 with fresh content and insights.