October 30, 2020
Common snafus in recruitment include participants who aren't a good fit for your study and delays caused by no-shows. Here's what to do.
Before you start your recruitment process, have a strong understanding of what you’re trying to discover with your research. You’ll use that information to write screener survey questions, set appropriate incentives, and secure eligible participants. The first thing you need to do is:
Holding a stakeholder interview is critical pre-recruitment work if you’re collaborating with other teams on your project. Sometimes, a research request is well documented and straightforward. Other times, it may not have enough information, or the request may not match what you suspect the real need is.
Asking any research stakeholders questions that shed light on what the project is for and what you should know going into it will give you key information on who the study should focus on and how to achieve your goals.
Here is an example scenario:
Let’s say you work for a menswear company that sells men's suits at a discounted price, and the majority of your sales are online. In speaking with the marketing head, you learn there is a discrepancy between who they expected to be their best customers (young male professionals who need suits but don’t want to break the bank) and who is actually buying the suits (thanks to Google Analytics, you can see that it’s men 40-55 years old ordering via desktop rather than the app). Your job is to find out why.
This is a fairly vague request. By speaking with the stakeholders involved, you can decide which insights would be most valuable — for example, do you focus on finding out why the younger men who work at startups aren’t buying the suits? Or do you find out why the older men are buying the suits and double down on conversions with that demographic? Do you refocus on social media marketing and optimize the app purchasing experience, or do you look at how you can improve the desktop version of your site since that’s where most purchases are currently occurring?
Getting insights from research that the requesting team can actually act upon is critical. Chatting with stakeholders is the first step in making that happen. Once you’ve done that, it’s time to:
In screener surveys, questions should eliminate or select for potential participants as quickly as possible. That means you should include your most important criteria for disqualifying a candidate first, including both immediate disqualifiers (non-negotiables) and subject-to-approval disqualifiers (negotiables).
Continuing the example above, say you’ve found that men aged 20-29 often browse items via the company app but abandon their carts at a high rate. When you’re recruiting participants, their age (20-29 or so) and gender (male) are your non-negotiables because of what you want to discover.
Perhaps you’re hoping to speak mainly with young men who work at startups or in tech. But in this case, that could be negotiable since it’s not as critical as getting the right age and gender.
Non-negotiables can be demographics such as gender, age, location, and so forth. But they can just as often be tied to behavior. If you’re looking for marketing professionals who use InDesign, for example, then their experience using InDesign is non-negotiable. Meanwhile, their age or gender may not be as relevant.
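To make the distinction concrete, here's a minimal sketch (in Python, with hypothetical criteria borrowed from the suit-store example — not a real screener) of logic that applies non-negotiable disqualifiers first, then scores the negotiables:

```python
# Sketch of screener logic: hard (non-negotiable) disqualifiers run first,
# then negotiable criteria are scored. All criteria here are hypothetical,
# borrowed from the suit-store example above.

def screen(response):
    # Non-negotiables: fail any of these and the candidate is out immediately.
    if response["gender"] != "male":
        return "disqualified"
    if not 20 <= response["age"] <= 29:
        return "disqualified"

    # Negotiables: nice-to-haves that affect priority, not eligibility.
    score = 0
    if response["industry"] in {"startup", "tech"}:
        score += 1
    if response["shops_via_app"]:
        score += 1
    return "qualified" if score >= 1 else "subject to approval"

print(screen({"gender": "male", "age": 25,
              "industry": "tech", "shops_via_app": True}))  # qualified
```

Putting the cheap, decisive checks first is the same principle as ordering your survey: most respondents are filtered out before you spend their time (or yours) on the nuanced questions.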
Have your survey questions start with non-negotiables, then narrow your focus as you go. You’ll also want to avoid asking “leading” questions, such as, “How unhappy are you with your current banking system?”
The question implies that the participant is unhappy with their banking system and signals to the participant that this may be what you’re looking for. Instead, try something neutral like this: “On a scale of 1 to 5, 1 being not satisfied at all and 5 being very satisfied, please rate your satisfaction with your current banking system.”
Then, your next question could ask the participant to explain why they chose their rating. This helps you test the participant’s ability to engage critically with the questions, and brings us to our next section:
Qualitative researchers are looking for the reasons behind a specific action. If you’re testing a website, for example, you want to know your customer’s journey through your website, but you also want to know what motivated their taps and clicks.
That means you need participants who can narrate their choices and explain the logical or emotional processes that led them down one path and not another. These traits are the hallmarks of high-quality participants.
Knowing whether or not a participant is articulate is probably one of the hardest parts of successful recruitment. Focusing on “articulation questions” — questions designed to test study participants’ ability to describe what they are thinking and feeling — is your best bet for finding out if a participant can give you the depth of information you want.
Articulation questions don’t need to relate back to the study itself. What you’re looking for is whether the participant answers them with detail or with the bare minimum.
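If you collect a lot of screener responses, a rough first-pass heuristic can surface the bare-minimum answers for human review. This is our own sketch (not a User Interviews feature), using word count as a crude proxy for detail:

```python
# Rough first-pass heuristic (our sketch, not a User Interviews feature):
# flag open-ended answers that look like the bare minimum so a human can
# review the rest. Word count is a crude proxy for detail.

def flag_bare_minimum(answers, min_words=15):
    """Return answers that are probably too short to show articulation."""
    return [a for a in answers if len(a.split()) < min_words]

answers = [
    "Yes.",
    "I switched banks last year because the app kept logging me out, "
    "and the support team never followed up on my tickets.",
]
print(flag_bare_minimum(answers))  # ['Yes.']
```

A length cutoff will never judge quality on its own — a short answer can be sharp and a long one rambling — so treat it only as a triage step before reading the promising responses yourself.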
At User Interviews, we offer advanced screening, which is the ability to speak with a participant before the actual research interview. This process can help you decide if they can elaborate on answers enough to be helpful in your study. While advanced screening isn’t necessary for every study, some qualitative research methods demand more articulate users than others.
If you’re trying to put together a focus group for market research, for example, make sure everyone in the room can express themselves well. Or if you have a higher-up (or many!) sitting in on some of the sessions, it’s nice to know for sure that the person you’re talking to is as eloquent with the spoken word as they are in their written screener responses.
Some people volunteer for user testing as a side hustle: research participation is an easy way to make a little extra money. People who enjoy participating in research they’re a good fit for while making some spare change often make great participants.
But problems arise when testers try to game the system to get into more tests so they can make more money. Some testers use fake emails to set up multiple accounts within a platform or lie about their demographics and behaviors to get chosen for a wider variety of studies.
As a researcher, you can do some legwork to make sure participants are who they say they are. But that can be difficult and time-consuming.
User Interviews works to save you time by vetting participants via social media accounts. Plus, in the algorithm we use to recommend candidates, we penalize participants who were no-shows. Those who have good reviews are favored over those with bad reviews.
If a participant’s User Interviews account uses the same email as their social media accounts, a verification icon appears by their name.
Note: You can click on the LinkedIn icon and view the participant’s LinkedIn profile. However, due to privacy regulations, you are unable to see the participant’s Facebook profile.
Pulling in verified social media accounts is one of the steps we take to keep our participant pool accurate and honest.
Figuring out exactly what (and how much) to offer as an incentive can be difficult. Generally, the longer you’ll need a participant for a study, the higher the incentive will need to be. Whether the research is remote or face-to-face also affects how much you should offer. Finally, people in certain professions (or at certain income levels) often won’t participate unless the incentive is high enough.
For a more personalized number, fill out our User Research Incentive Calculator to see what you should offer participants for your next study.
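To show the shape of the reasoning, here's a toy estimator reflecting the three factors above — session length, remote vs. in-person, and specialist participants. Every number in it is invented for illustration; it is not User Interviews' calculator or their actual rates:

```python
# Toy incentive estimator. All rates and multipliers are made up for the
# sketch -- they are NOT User Interviews' actual figures. Use their
# User Research Incentive Calculator for real numbers.

def estimate_incentive(minutes, in_person=False, specialist=False):
    rate_per_hour = 60 if specialist else 30   # hypothetical base rates
    incentive = rate_per_hour * minutes / 60
    if in_person:
        incentive *= 1.5  # travel/time bump for face-to-face sessions
    return round(incentive)

print(estimate_incentive(60))                   # 30
print(estimate_incentive(45, specialist=True))  # 45
```

The point is the structure, not the figures: incentive scales with time, gets a bump for in-person logistics, and starts from a higher base for hard-to-recruit professionals.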
If you’re issuing Amazon gift cards as an incentive, User Interviews can handle this part of the process for you. That way, after finishing the study, you can focus on the results while we quickly compensate your participants.
If you’ve ever taken an Uber, you know you’re pinged with an alert to rate your driver the moment your ride ends. This rating tells Uber and the driver how they are doing, with the goal of keeping only high-quality drivers on the road.
We do something similar at User Interviews.
After each study, you can rate the effectiveness of the participant.
It’s a simple post-study survey that asks you to rate the participant as either:
By doing this, we can show our participants to you as rated and vetted by your fellow researchers. But even with processes like this, no-shows can happen.
No matter the reason, a no-show hinders your ability to complete your research. Because of this, we recommend keeping a list of backup participants you can contact to fill any open spots.
Remember above, when we were talking about writing your screener survey to have specifications that are a mix of non-negotiables and negotiables?
Another reason we recommend that mix is that it lets you prepare for a rainy day by widening the pool of participants who can take part in your project. Categorize your potential participants as best fit, potential fit, or poor fit.
Now, you can schedule your study with your “best fit” participants, and keep a list of “potential fits” as backups.
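The sorting step above can be sketched in a few lines. The field names and criteria are hypothetical, carried over from the suit-store example:

```python
# Sketch of the sorting step: bucket screener respondents into best fit,
# potential fit, and poor fit. Field names and criteria are hypothetical,
# reusing the suit-store example (men aged 20-29; tech/startup preferred).

def categorize(respondent):
    meets_non_negotiables = (
        respondent["gender"] == "male" and 20 <= respondent["age"] <= 29
    )
    if not meets_non_negotiables:
        return "poor fit"
    # Negotiable: working in tech or at a startup makes someone a best fit.
    if respondent["industry"] in {"startup", "tech"}:
        return "best fit"
    return "potential fit"

pool = [
    {"name": "A", "gender": "male", "age": 24, "industry": "tech"},
    {"name": "B", "gender": "male", "age": 27, "industry": "finance"},
    {"name": "C", "gender": "male", "age": 48, "industry": "tech"},
]
schedule = [p["name"] for p in pool if categorize(p) == "best fit"]
backups = [p["name"] for p in pool if categorize(p) == "potential fit"]
print(schedule, backups)  # ['A'] ['B']
```

Scheduling from the "best fit" bucket while holding "potential fits" in reserve is exactly the rainy-day preparation described above: a no-show costs you a phone call, not a rescheduled study.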
At User Interviews, we make this process intuitive and easy for you. Simply look through your participant pool, see the responses to your screener survey, and sort them into the appropriate category.
And if the worst does happen and a participant doesn’t show up for your study, mark them as a no-show. A screen will pop up immediately and ask if you want us to find a replacement on your behalf (reaching out to your already vetted participant pool). If you’ve already approved extra participants, then they will be automatically scheduled to make up for the cancellation. With this process, your study is more likely to start on time.
Our median time to find your first qualified participant is only 2 hours. Why? We have over 200,000 screened participants whose demographic (and some behavioral) information is already on file. When you launch a study, we’ll send it straight to the people most likely to qualify.
If your study is more niche, it may take extra time. But if we can’t find all the people you need, we have ways of recruiting more participants beyond our existing pool. When an applicant fits the criteria you created, you can qualify or disqualify the participant based on their answers and confirm their identity via their LinkedIn profile.
As a participant recruitment platform, we are focused on speed, ease of use, and the quality of our participants. Researchers rate each participant after testing, which helps us keep the best participants at the top of your list. Plus, we handle scheduling and incentive payments, giving you more time to focus on analyzing and implementing user feedback.
Give us a try and get your first three participants free. You won’t pay a cent until you actually run an interview or other test with the participants we find for you.