Screener Surveys

Note to the reader:

This part of the field guide comes from our 2019 version of the UX Research Field Guide. Updated content for this chapter is coming soon!

Want to know when it's released?
Subscribe to our newsletter!

In the last chapter, How to Recruit Participants for User Research Studies, we introduced you to the topic of screener surveys. Well, we actually introduced that topic way back in the chapter about the UXR process, and again when we walked you through how to create a user research plan.


Why do we keep harping on about screeners?


Because they’re important! Screener surveys are what stand between you and hordes of unscrupulous, unqualified, uncommunicative participants. (That’s a bit dramatic, but you get the point.)


Screener surveys are essential tools for qualitative research. And while they sound simple, they’re actually very easy to get wrong. Which is why we’ve dedicated a whole chapter to them!


In this chapter

  • What is a screener survey and why do you need one?
  • How to create an effective screener survey
  • Examples of common screener questions and formats
  • When double-screening makes sense
  • Avoiding no-shows


What is a screener survey and why do you need one?

Screener surveys, or just ‘screeners,’ are surveys people take before participating in a research study. They’re made up of a few questions, designed to weed out the folks who aren’t your intended audience and capture the ones who are. 


You can think of a screener survey as a sieve that captures the people who hit all your ‘must have’ criteria and filters out the ones who don’t quite fit the bill.

How to create an effective screener survey

If you want the right participants, you’ve got to design smart screening questions.


That’s not always as straightforward as you might think. You have to ask questions in a somewhat roundabout way to avoid leading people to certain responses, but also in a clear way to make sure you’re universally understood.


The devil is in the details, but luckily there are some pro strategies that anyone can learn and put to use right away. 


The guiding principles explained in this chapter will help get you there.


Tip: If you haven’t already got your ideal participant profile nailed down, review the previous chapter on finding the right participants for your study before reading further.

1. Know your goals 

We covered this bit in previous chapters, and the advice here is the same.


Clearly defined goals and objectives are must-have requirements for any user research project. These goals—aka your reason for doing research—should be hammered out well before you start writing your screener surveys. (If they’re not, head on back to the chapter on planning research to get some clarity. Go on, we’ll wait...) 

2. Define specific target audience criteria

This is the part where you consider your research question, imagine the ideal participant who can give you the answers you need, and identify the targeting criteria (the things that must be true about them) needed to qualify for your study.


Your targeting criteria will typically be defined by using a mixture of:


  • Psychographics: Activities, hobbies, interests, and opinions
  • Behaviors: What they do (e.g. ‘regularly commutes by car’)
  • Demographics: Age, gender, education, income, marital status, etc.
  • Geographics: Country, city, region, or radius around an area


So… how do you decide what criteria to target by? 

To figure out who to recruit for UX research, ask yourself:

  • What is the goal of your research? Consider what insight would be most useful, then work backward to figure out who can best provide that insight.
  • Are you in the discovery, testing and validating, or post-launch phase of product development? In general, your audience should be broad in the early stages and get more targeted as development progresses.
  • What is your research question? A good research question like “What tools do 20-somethings use to manage their finances?” has much of the targeting criteria already baked in.
  • Who can answer that question? Think through the specific traits a potential participant would need to have. In the example above, you’d want to talk to people in their twenties who are interested in actively managing their money.
  • Who can’t answer that question? Don’t ask retirees how their grandchildren manage their finances. Likewise, if a 25-year-old says ‘nah, I’m not really interested in having a budget,’ they are the wrong person for your study.


We covered all of those points in finer detail in the last chapter. If you’re still struggling to pin down who to recruit, we recommend revisiting those recommendations before moving on to the next steps.


3. Screen for behaviors and psychographics over demographics

Take a closer look at your targeting criteria. Do they include demographic criteria like age, gender, race, income, etc? Do they need to?


Demographics are the low-hanging fruit of screener surveys, but these characteristics have their limitations.


For instance, where people live is important if you’re doing an in-person study, or if your app will only serve certain locations. But if you don’t have a clear reason to target based on geography… don’t. The same thing applies to demographics. In many cases, a person’s gender or how much they earn per year won’t determine how they interact with a product. 


Our assumptions about these characteristics are prone to bias, which can invalidate your study and do real harm to the people your research will ultimately impact.


Also, it’s just bad form to waste valuable screener questions on criteria that aren't absolutely essential.


Screening for psychographics and behaviors lets you group people based on how they live, what they value, and how they relate to your product or category. That’s the juicy stuff!


Rule of thumb: Worry less about how people are categorized on a census and more about how they think, feel, and behave.


When asking demographic questions makes sense

Let’s say you want to test for accessibility with a mix of gender identities, age ranges, and educational backgrounds. In this case, adding demographic criteria will allow you to target a diverse audience.


Not every question on your screener has to result in an automatic in or out; some questions can be used to filter for a variety of participants as a final step. For those, accept anyone who could be a fit based on any given answer.


Tip: Some recruiting services (User Interviews included) will automatically provide basic demographic, geographic, and technographic information for you, so you don’t need to include it in your screener survey. Win for you and your participants!


4. Write precise, carefully worded questions

Once you know the characteristics of your target participants and you’ve broken that down into specific criteria for how you’ll identify the people who qualify, it’s time to write the questions that will help you find the good’uns.


The language you use in your screener is important. When writing screener questions:

  • Avoid double negatives.
  • Keep the questions short and sweet.
  • Leave out industry jargon (unless knowledge of it is a requirement for participation).
  • Be specific.


The more clearly worded and specific your questions are, the less likely participants will be to get confused and answer inaccurately. Leave no room for misinterpretation!


Similarly, make sure the multiple choice options you provide are carefully worded. Being clear in your responses is just as important as being clear with your questions.


Have you ever taken a survey where, on a certain question, you were forced to choose between two or more answers that all applied to you?


To avoid putting your audience in that position, make sure your answers have clear borders without any overlap. For example, when asking for numerical values (age, size, frequency etc.), make sure your values are mutually exclusive:

  • Correct: 0-3, 4-7, 8-12
  • Incorrect: 0-3, 3-7, 7-12


For less definitive answers or for answers that can’t be made mutually exclusive, ask participants to select the answer that is most true or give them the option to select all that apply, rather than a single answer.
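If your survey tool doesn’t check range overlap for you, it’s easy to sanity-check by hand or with a few lines of code. Here’s a minimal sketch in Python (the function name and the tuple representation are our own invention, assuming integer buckets written as inclusive low–high pairs):

```python
def buckets_are_valid(buckets):
    """Check that integer answer buckets are mutually exclusive and gap-free.

    `buckets` is a list of inclusive (low, high) tuples, sorted ascending.
    """
    for (lo1, hi1), (lo2, hi2) in zip(buckets, buckets[1:]):
        if lo2 <= hi1:       # overlap, e.g. 0-3 followed by 3-7
            return False
        if lo2 != hi1 + 1:   # gap, e.g. 0-3 followed by 5-7
            return False
    return True

print(buckets_are_valid([(0, 3), (4, 7), (8, 12)]))  # correct example: True
print(buckets_are_valid([(0, 3), (3, 7), (7, 12)]))  # incorrect example: False
```

The same idea works on a napkin: each bucket should start exactly one unit after the previous one ends.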


5. Put your screener questions in the right order

Don’t make prospective participants complete your entire screener before finding out they don’t qualify. Eliminate unqualified people early.


Think of the process as a funnel: you’re refining your pool of participants step by step. Or, think of it like weeding an overgrown garden. The biggest, tallest, most obvious weeds come out first, simply because they’re the easiest to grab. Start with the questions that are most likely to weed people out.


The easiest way to do this is to write out your questions, rank them in order of importance, and look for any interdependencies. 


For example, if you’re doing an in-person study, ask about location right away. Location here is a must, and must-have criteria go first.


Before diving into questions about how people use apps on their smartphones, find out if they use a smartphone at all. Then, move on to the questions that tap into specific behaviors, interests, and preferences.  


If you’re not working with a recruiting service that gathers demographics for you, ask any demographic questions that you need to ensure a diverse recruit pool.


6. Avoid leading or loaded questions

You know how some folks add a “right?” at the end of every sentence, so that you have no choice but to nod or shrug in agreement? Right? 


That’s an example of ‘leading.’ Leading questions will influence people to answer in a certain way. 


It’s a handy conversational device if, say, you’re a dogged prosecutor in a courtroom TV series and the judge will allow it (for the drama, obviously). But leading questions have no place in user research—and definitely not in your screener survey. This is not the place to try to validate your assumptions. You’ll end up with skewed results or the wrong kind of participants.


Here’s an example:

  • Leading: Would you like it if there was a feature that did [X]?
  • Not leading: Are there any features that don’t currently exist in the product that would help you do [X]? If so, what are they?


A good way to identify whether a question might be leading is if it includes a hint or excludes possible answers.


Another way to avoid leading questions is to provide a series of unrelated options as answers. 


For example, if you want to screen users who have a high level of concern around internet privacy issues, rather than diving right into questions about internet privacy by asking:

  • Leading: Are you concerned about internet privacy?


… you can create a question like this (not leading): Which of the following topics is most concerning to you regarding internet use in your life?

  • How much time my kids spend online. - Reject
  • Data privacy issues. - Accept
  • False information appearing in search results. - Reject
  • I don’t have any concerns about the internet. - Reject 
  • I don’t know. / None of the above. - Reject
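Under the hood, most screener tools encode this kind of question as a per-option accept/reject mapping, with the qualifying answer buried among plausible decoys. A minimal sketch of the example above (the dictionary shape is ours, not any specific product’s format):

```python
# Per-option accept/reject mapping for the internet-privacy question above.
OPTIONS = {
    "How much time my kids spend online.": "reject",
    "Data privacy issues.": "accept",
    "False information appearing in search results.": "reject",
    "I don't have any concerns about the internet.": "reject",
    "I don't know. / None of the above.": "reject",
}

def qualifies(answer):
    """Unlisted answers are rejected by default."""
    return OPTIONS.get(answer, "reject") == "accept"
```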


Likewise, avoid yes/no or true/false questions, which tend to be leading. Users might answer in the way they believe will entitle them to participate in the study. Whenever possible, replace these questions with multiple choice options or provide a scale for degree of agreement with a given question. 


Exception: In cases where a black and white answer is required—for example, when asking if a person is willing or able to participate under the conditions of your study—a binary question will be your best bet.


‘Loaded questions’ are similar to leading questions (and the two are often conflated), in that they push the participant to answer a certain way. Loaded questions do this by making assumptions, which are implicit in the question itself.


Here’s an example:

  • Loaded: On a scale of 0-100, how much do you despise pistachio ice cream with every fiber of your being?
  • Not loaded: On a scale from 1-10 where 1 is ‘gross,’ 5 is ‘neutral,’ and 10 is ‘delicious,’ please rate how you regard pistachio ice cream.


A good way to identify whether a question might be loaded is if it includes strong language or excludes possible answers.

  • Loaded: What is your favorite thing about the new, improved app?
  • Not loaded: How does your experience with this version of the app compare to your experience with the previous version?


7. Provide a catchall alternative option

If you create multiple choice responses, don’t assume that you’ve presented the user with every possible option. Even the best survey designers have their limitations. As Gandalf once said, “even the very wise[st survey designers] cannot see all ends.” 🧙


Include a ‘none of the above,’ ‘I don’t know,’ or ‘other’ option to account for any outliers. 


Otherwise, you could end up with someone in your study who doesn’t belong there because they were forced to choose an answer that didn’t apply to them. Likewise, you might screen good participants out because they didn’t quite fit the answers you provided.


8. Include an open-ended question to screen for articulation

Spare yourself the pain of having to drag answers out of a reticent participant by screening uncommunicative people out of your study.


Screener surveys help you to get more value for your time and money on a per-participant basis. Sometimes that means excluding certain people who otherwise perfectly fit your ideal audience profile.


Screen for expressive participants by asking ‘articulation questions.’ These are open-ended questions designed to test a user’s capacity to communicate. If a person can express their ideas with depth of thought, they’re likely to be a helpful participant. 


Including open-ended questions also helps weed out “professional participants” who are just looking to make a quick buck by qualifying for any and every study.


9. Don’t reveal too much

A screener survey is meant to help you find the candidates who are a perfect fit for your study. 


Giving away too much information about the purpose of your study—by, say, revealing the name of your company to non-users or telling participants who you’re looking to interview (which is a real mistake that we’ve seen)—can devalue the screening process and make your research less effective.


And this advice doesn’t just apply to your screener. The title and description you give your study, the way you talk about it when you’re recruiting participants, and the things you reveal in the lead-up to the session itself—it all matters. 


For instance, let’s say you are doing research on (yet another) photo editor app for influencers and people who actively post photos on social media. You might tell participants it’s a study related to social media habits. That way they have some context (which can help them decide to click into the screener), but they don’t know what type of social media habits (editing photos) you’re looking for.


This will make it harder for professional testers to guess what you want, making it more likely you’ll get authentic responses.


To avoid tipping your hand, don’t:

  • Reveal the purpose of your research study.
  • Reveal the name of your company or product.
  • Ask leading questions.


Pro tip: If you’re struggling to write descriptive titles and copy that don’t give away too much information, see if there’s a friendly wordsmith on your marketing team who can lend a hand.

10. Manage the expectations of survey takers

Make sure your participants are clear about what they’re doing and what stage of the process they’re at.


The screener survey is a sort of dress rehearsal, and it will help the participant to know they’re not yet in the final round. Be sure the candidate knows what they’re in for if they do make it. 

If there are any possible deal-breakers (like NDAs, for example), let them know up front.


And of course, be clear that they won’t be paid until they make it through to complete the actual study.


11. Remember to keep it brief

Finally, keep your screener surveys short and sweet. We’ve seen some screener surveys get so long that participants mistake them for a (paid) research survey! If you’re looking for a rough guideline on length, try to keep your screener to fewer than 10 questions. 


Examples of common screener questions and formats

Remember, the point of a screener survey is to help you find the right participants for your research. This list of screener questions is meant to be used for inspiration, and to help you get a gut check on your own screener. It’s not a library of general use questions to copy and paste from in all circumstances. 


With that caveat out of the way, here are some sample screening questions to ask, depending on the type of criteria you’re filtering for.


Sample screening questions for different types of criteria

Industry or occupation 

Ask employment questions when you want to screen for people with a certain level of familiarity with a particular industry, or exclude those who work for competitors.


Example question: What industry do you work in?  

Answers: A list of industries (retail, IT, healthcare, education, etc.)

Format: Single select 


Example question: Which category best describes your job function?

Answers: A list of job functions (marketing, accounting, engineering, product design)

Format: Single select 


Familiarity with a product or service

If you need to test with novices, experienced users, or some combination of each, ask about familiarity with a given product.


Example question: Please rate your experience with {name of product}.

Answer: A range from expert to novice

Format: Scale rating or single select


Example question: Which of these tools do you use for work? (Select all that apply)

Answer: A list of software products

Format: Multiple select


Frequency of performing specific tasks

Asking about frequency of use or action is useful when you’re screening for users who regularly do a specific task, or who used to behave in a certain way and then stopped.


Consider defining terms like often (every day) and rarely (once a year) so there’s no guesswork.


Example question: Please tell us how often you {name the task}.

Answer: A range of time from often to rarely.

Format: Scale rating or single select


Example question: When was the last time you {name the task}?

Answer: A range of time from today to never.

Format: Scale rating or single select


Comfort with sharing personal information

Just because someone meets your screening criteria doesn’t mean they’re actually going to be willing to participate, especially if your study touches on sensitive topics like health, income, lifestyle, marital status, etc. Ask participants directly if they’re willing and able to answer personal questions.


Example question: This study will require you to share openly about {examples}. Do you agree to share honestly about these subjects?

Answer: Agree or disagree

Format: Single select.

A note on extra requirements for medical researchers

If you’re conducting medical research, the recruitment process and screening process are considered separate activities. Recruitment—in which you reach out to research candidates and tell them about the planned study—is a pre-screening activity that can be done without informed consent. But even your pre-screening process may have to be submitted to your Institutional Review Board (IRB) before you can proceed.


Before you gather protected health information or obtain medical records to determine study eligibility, you’ll need patients to sign a consent form to proceed with screening activities. Your screening script for interacting with possible participants and gathering information also has to be submitted for IRB review.


For more information about IRBs, refer to the FDA website. For your institution’s specific rules regarding screening and research procedures, refer to its specific IRB.

When double-screening makes sense

If you want, your screening procedures can include a phone call to potential candidates who seem most promising. Double-screening like this typically isn’t necessary, but it can be a good way to be absolutely sure that you’re getting the right people to talk to during your study.


We’ve seen researchers use double-screening when the study they’re doing is high-profile (visible to important stakeholders in their organizations) or when they have a highly specific research need. 


Avoiding no-shows

People who promise to show up for your study and don’t will cost you time, and likely a moment of discomfort with colleagues and bosses who are forced to sit around waiting. Save yourself some trouble and work to prepare your participants and yourself to avoid the dreaded no-show.


  • Get your users’ contact information, including email and cell phone.
  • Send reminder emails from an individual (not a generic group email).
  • Give participants your number or the number of the testing office so they can get in touch if they’ll be late.
  • Send great instructions for how to get where they’re going.
  • Most of all, stress the importance of their participation so that they can see their own value in the project and feel motivated to show up.


Also consider recruiting a few extra folks who you can call in at the last minute, if need be.

In summary

Long story short, envision your ideal participant—know who they are, know who they aren’t—and build a screener survey that filters in the right people to answer your research question.


The specifics of how to get there are outlined above. Just remember to keep an open mind as to who these study participants might be, and don’t limit yourself with prejudgements mired in demographics.


We’ll leave you with a few rules of thumb:

Screener survey best practices

  • Eliminate people early—get the big, must-have criteria out of the way first.
  • Don’t reveal what the study is about or who you’re trying to recruit.
  • Don’t screen for demographics unless strictly necessary.
  • Ask open-ended questions about behaviors, feelings, habits, and past actions.
  • Don’t ask leading questions that hint at what the ‘correct’ answer might be.
  • Avoid ‘yes’ or ‘no’ questions.
  • Provide an ‘other’ option on multiple choice questions.
  • Keep it brief—10 (or fewer) screener questions is typically plenty.


Oh, and did we mention that you can build and fully customize screener surveys with User Interviews? Launch a research project and easily build your screener—we'll give you 3 free participants to get started.
