Don't cut corners while recruiting for research — there are ways to speed up recruitment while maintaining the integrity of your project.
Jonathan Santiago
If you're like most user researchers, learning new things and talking to users are the best parts of the job. But usability test recruitment? Maybe not so much.
No researcher wants a study that has poor retention or yields less than stellar insights, so cutting corners (even if recruiting is a pain) isn’t a good option. But there are ways to speed up recruitment while maintaining the integrity of your project.
We’ve helped thousands of researchers quickly find the right participants for their projects, and in the process, we’ve learned a lot about what does and doesn’t work for participant recruiting. Here’s what you need to know about finding the right users for usability studies.
Worth noting: User Interviews’ median time to find the first qualified participant after you submit your request is just 2 hours. You can screen for participants based on demographics and behavior, then see their responses and approve final participants yourself. Try User Interviews risk-free, and schedule your first three participants for free.
What are the goals of your usability test? Before you start recruiting, answer that question.
To determine your objectives, have a clear understanding of the problem at hand. From there, assess what's currently happening and why it's the status quo. Finally, speak to the stakeholders involved to understand their perspective on the problem.
Sometimes, usability tests are simple to pin down (such as “Make sure sign-up flow is intuitive and fast for users”). Other times, it’s easy to make them too general (“Make sure the user experience is intuitive”). When in doubt, look for ways to make the question you’re answering more specific.
Once you’ve settled on your goals, decide what kind of test to run. Depending on the task, you can run the test yourself (moderated) or leave a set of instructions for the participant to complete on their own time (unmoderated).
In moderated usability tests, researchers hold more influence. Your presence during the study allows you to guide the conversation, regardless of whether you’re bringing participants into the office or conducting remote testing.
Moderated sessions work well if you plan to ask follow-up questions in real time. They’re also valuable for troubleshooting usability problems or technical difficulties your users run into in the moment.
The downside is that moderated sessions can be more difficult to schedule, especially if testers need to attend in person.
Unmoderated testing gives users the freedom to complete studies at their own pace and on their own schedule, which is a major benefit for participants. That autonomy can also lead to valuable insights you didn’t expect.
At the same time, participants may miss something important because you’re not around to guide them. And unlike moderated sessions, you can't ask follow-ups or address hurdles in real time.
We've seen slightly higher no-show rates for unmoderated tasks, perhaps because meeting face-to-face (whether in person or over video) feels like a firmer commitment. Either way, it's good to have backups ready for both types of studies in case someone drops out.
To find out if recruits are the right fit for your study, create an effective screening process. When developing yours, keep three elements in mind: your project description, your incentives, and your screening questions.
When you write the description of your project to encourage people to take your screener, give them enough information to pique their curiosity without revealing so much that they can game the system.
We recommend not revealing the name of your company to potential participants. It’s also better if they don’t know which experiences and demographics you’re looking for in a tester.
Why? Let’s be honest: there’s money involved. You want to make sure applicants aren’t giving you the answers they think you want rather than truthful ones. And sometimes, people don’t even realize they’re being led to an answer by how the question is worded. If you give away too much information in the survey description (or in the questions themselves), you may end up with participants who aren’t actually a good fit for your study.
For example, we once worked with a pharmaceutical company that needed access to pharmacy technicians. To help their recruitment efforts, we suggested that they pitch the general topic of pharmaceuticals.
Doing so generated interest from a variety of people whose lives touched the pharmaceutical process. Some people applied hoping that it was about shopping for medicine. Others applied because they worked in different parts of the pharmaceutical industry. Ultimately, enough of those applicants were pharmacy technicians, so the study was able to go forward.
For example, a yes/no question like “Are you a pharmacy tech?” is too revealing about what you want. On the other hand, you can ask a question like, “What is your job role?” or “Which of the following best describes the work you do?” (with multiple choice options following the question).
Usability test participants only need a general idea of what they’re getting into. If they’re interested in your project topic, they’re likely to be more committed to the task. And sometimes, they’re pleasantly surprised by how much they enjoy the research session. We’ve had participants tell us afterward that they were thrilled to learn which company they were providing feedback for during the test because they’re big fans of the brand.
In summary: share enough about your project to pique curiosity, but keep your company name and the specific criteria you’re screening for to yourself.
The incentives of your usability test play a huge role in recruiting. Base your incentives on the kind of test you design and the people you’re recruiting.
Between moderated and unmoderated usability tests, you can usually offer a lower incentive for unmoderated studies. That's because participants can complete them on their own schedule, and they usually take less time.
For moderated tests, you can choose to run them either on site or off site. For on-site tests, we’ve found that it takes more money to entice someone to volunteer their time (for specific numbers, see our incentive calculator).
Duration can influence the cost of your incentives, too. The longer the session, the more participants will want as compensation.
For instance, we’ve found that 30-minute, moderated test sessions generate interest at a minimum of $40. Meanwhile, some participants take as little as $15 for a 20-minute, unmoderated session. The former is more rigid and takes longer, while the latter is shorter and more flexible.
To figure out how to incentivize your next usability test, try using this incentive calculator we created specifically for user researchers.
Good screening questions go beyond demographics. While there are many kinds of questions you can ask, we’ll focus on two types today: open-ended questions and indirect questions.
Open-ended questions take some effort to answer. Participants have to use their own words and consider how much they really want to say. Because of this, open-ended questions can give you insight into your participants' commitment levels as well as a sense of how articulate or talkative your applicants are. Both qualities matter if you want to receive good feedback during your study.
For example, you might ask, “Tell me about the last time you purchased new technology.” This question requires them to describe what they’ve done, but it also serves as an indirect way to determine how tech-savvy they are.
That brings us to the importance of indirect questions. These kinds of questions protect you from revealing too much information about your study, and asking them is a good way to avoid biasing your participants.
Finally, rescreen your applicants if it’s appropriate. Rescreening gives you a chance to ask them any important follow-up questions. It’s also useful for double-checking if applicants are as expressive in person as they are on paper. With User Interviews, you can achieve this by using our Advanced Screening feature.
At this point, you’ve set your goals, determined the kind of test you’ll run, and designed your screening process. Now, it’s time to start recruiting participants.
Always look for participants with a few particular qualities. The best participants often exhibit the following three traits:
No matter what kind of user experience research you conduct, a willingness to speak up is a must-have quality. You don’t want participants who aren’t comfortable sharing their opinions.
It’s valuable to recruit participants who aren’t afraid to ask questions or explore. When they do, they help you uncover useful insights you don’t expect.
In any research study, forces outside of your control can decrease user participation. Maybe someone receives a product to test, but it arrives broken on delivery. Or perhaps technical issues, such as outdated browsers, cause hiccups in your website test.
But sometimes, hiccups happen because people lose interest or get distracted easily. You can lower your dropout rate by screening for committed participants. Look for people who stay resilient if your test becomes a little difficult. It’s equally important to recruit people who don’t become bored too easily. To improve your odds of working with committed people, be upfront about the total time commitment your study involves and avoid surprises, extra assignments, and the like.
If you find participants through User Interviews, keep in mind that all participants are rated by the researchers they’ve worked with previously. Participants with bad reviews (for poor quality feedback, lack of commitment, etc.) are deprioritized in the algorithm that matches them with your study. That way, you’re more likely to find someone who is willing to complete your whole study in the given time frame.
Commitment matters most when you’re running a usability test over a long period of time. You want participants who are willing to interact with your product in their daily routine.
That being said, it’s also important to keep your own expectations reasonable. If you want someone to use a physical product for two weeks and report in every day, you can’t ask them many questions during that time without risking burnout.
When you’re ready to receive applicants for your usability test, there are a few places you can turn to for participant recruitment.
Finding great participants is one thing. Setting them up for success is another. While designing your next usability test, ask yourself three questions: Are your instructions clear? Will participants receive them early enough? And is your timeline firm?
Users may fail to engage with your test for many reasons. Poorly worded instructions are one of them.
Clear instructions matter for both unmoderated and moderated tests. In moderated tests, you can correct for weak instructions as mistakes happen. But in unmoderated tests, you're stuck with whatever results you get.
New user researchers are often surprised by how much guidance a test needs; they usually underestimate how much structure they must provide. Your instructions must be direct about what you want users to do. For example, if you want users to send a help request from the pricing page of your website, tell them to do exactly that.
Make it obvious to your participants that they won’t get paid if they don’t complete the test by your deadline. Doing so in your confirmation message can reinforce their commitment.
Specific instructions give your participants the context they need to finish the task.
Many moderated tests don't need instructions sent ahead of time. But for unmoderated tests, instructions can make or break the testing session. Don’t send those instructions at the last minute.
People who sign up for your usability test are eager to get started. Given the option, they’d rather finish your study sooner rather than later. Sending instructions early gives them that flexibility, which makes them more likely to finish the tasks you send.
Giving your participants instructions early also mitigates any ambiguity about your usability test. When you send instructions too close to the deadline, your participants may feel left in the dark. They may wonder if they volunteered their time for an experiment that won’t actually happen.
When you send instructions soon after participants sign up, that leaves room for them to ask questions and budget enough time in their schedule to take the test.
We recommend preparing instructions ahead of time and including them with the confirmation message to newly approved participants. If you’re not able to do that, send details as soon as possible. Take advantage of the fact that you have their attention.
It’s best to run usability tests on a firm rather than a fluid timeline. In our experience, studies that are clear about the time commitment perform much better.
Say you’re running a test that includes a follow-up survey after it’s finished. You’re more likely to receive a high response rate if participants know when that survey will arrive. But if you send it out on a whim, your follow-up can get lost in the shuffle of their lives.
Much like clear instructions, a specific timeline gives your users certainty. They know what to expect, which reduces stress on their end.
When you’re ready to launch your research project, try User Interviews. You can use this link to find your first three usability testing participants for free. Signing up is risk-free: You only pay if the people we find for you actually participate in your test.
Author
Jon Santiago is a freelance writer from Northern California. You can learn more about him and see some more work by visiting his website.