October 2, 2019
We’ll share how we recruit participants through Facebook advertising and notifications to our existing panel, plus screener survey tips.
What do you really want to learn, and who can best help you learn it?
Let’s begin with an example. Say you’re on a research team that undertakes a study evaluating the views of nurses working in eldercare. What you’re really hoping to get are nurses who travel to and administer care to homebound seniors. But if you failed to mention that in the screener, you’d end up with a large pool of candidates who work in nursing homes or assisted living facilities.
According to the screener, they fit all your criteria; but in reality, you end up wasting your time by recruiting and then rejecting nurses who work in the wrong locations.
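To make the point concrete, here's a minimal sketch (with hypothetical candidate data) of how one extra screener criterion changes the pool:

```python
# Hypothetical screener results for three eldercare nurses
candidates = [
    {"name": "A", "role": "nurse", "setting": "nursing home"},
    {"name": "B", "role": "nurse", "setting": "home care"},
    {"name": "C", "role": "nurse", "setting": "assisted living"},
]

# Loose screener: any eldercare nurse qualifies
loose = [c for c in candidates if c["role"] == "nurse"]

# Precise screener: only nurses who travel to homebound seniors
precise = [c for c in candidates
           if c["role"] == "nurse" and c["setting"] == "home care"]

print(len(loose), len(precise))  # prints 3 1
```

One unstated assumption in the screener (the work setting) is the difference between a pool of three candidates and the one candidate you actually wanted.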
In another scenario, let’s say a researcher is recruiting for a study on the attitudes of “average American” consumers. When the results are in, they end up rejecting most of the candidates identified because those candidates live in large urban centers along the East and West Coasts. What went wrong?
They neglected to specify that their real goal was to gather a sampling of people across what they consider to be “the heartland” in smaller or mid-sized communities. If they had clarified that goal earlier in the process and added that information to the screener, they could have filled their interview slots far more quickly.
Assumptions matter. While this may seem obvious, if you haven’t precisely defined everything you’re looking for — or not looking for — it will be harder to fill your study. Could you be unnecessarily eliminating your best respondents based on assumptions about their suitability due to educational level, employment status, or some other factor?
Whether you source your own participants or delegate the task to someone else, you need to make these kinds of nuances and distinctions clear.
There is nothing wrong with screening a candidate pool by common demographic filters, such as gender, race, geography, or income. In our experience, though, this isn’t enough. It’s easy to lean too heavily on demographics, even in broader, exploratory market research and focus groups.
Instead, focus on the core behaviors and characteristics of the people who are your target customers or users. This is true whether you are doing quantitative or qualitative research.
For example, “Midwestern women” isn’t as focused as “Mothers in Chicago who cook.” Likewise, “Millennials in Los Angeles who listen to Spotify” is more useful than “young adults in the United States.”
Here’s what that might look like if you were to translate behavioral details into Facebook ad targeting:
If you also have unique needs, such as a certain mix of participants, make sure you take that into account while planning. If you were using User Interviews to fill your interview slots, for example, it could be as simple as including in your request that you want an equal number of men and women.
We’ve had the most success by creating ads specifically targeting the kinds of participants we are seeking and then running those ads online — particularly on Facebook.
When we’re tasked with filling a research recruiting request, we use some combination of these ads and our own pool of potential candidates, who have joined through ads, organic search, referrals, and word of mouth. Whenever we employ new rounds of targeted advertising, it brings results.
When writing these kinds of ads, we aim for a very targeted message, getting as close to what the study needs as possible. Here’s an example of a free Craigslist post (not to be confused with their paid ads) that we’ve run:
On Facebook, we also try to accomplish this by targeting every possible criterion that Facebook allows us to target. Examples include:
As we input our targeting criteria, we monitor audience size based on those filters.
Obviously, Facebook and Craigslist ads cost money. There’s no point in spending that money on ads unless you can get as niche as needed in your targeting. Otherwise, you’ll get broad results with large numbers of respondents who don’t qualify for the study.
On the other hand, if you attempt to combine every criterion that Facebook allows, will it return enough results? There’s a risk that over-targeting narrows the numbers too drastically. Depending on the results you get back, you may need to adjust the filters and experiment.
Another potential obstacle: There are some criteria Facebook no longer allows you to target, such as income. That’s a lot more difficult to get around, unless you’re willing to proactively run ads to lookalike audiences (something we’ll explain in the section below).
Here are a few more things we’ve learned after years of research recruiting via ads.
LinkedIn has been more useful, but running ads there often costs over ten times as much as comparable ads on Facebook. It’s simply not the most cost-effective option.
Facebook and Instagram have been the most consistent and most economical places to find study participants, out of the four. Since Instagram is owned by Facebook, ads you create for FB can also be pushed to Instagram.
We run experiments every month to tweak our ads for optimum success. For example, we’ve run split tests on the ad creative and found that video tends to be more eye-catching than a static photo.
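If you want to check whether a split-test result is more than noise, a standard two-proportion z-test works. A minimal sketch, using hypothetical click and impression counts (not our actual campaign numbers):

```python
from math import sqrt

def ztest_two_proportions(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing the CTRs of two ad variants."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis (no CTR difference)
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical numbers: video creative (A) vs. static photo (B)
z = ztest_two_proportions(clicks_a=120, views_a=4000,
                          clicks_b=80, views_b=4000)
print(round(z, 2))  # prints 2.86
```

A |z| above roughly 1.96 suggests the difference is significant at the 95% level, so in this made-up example the video variant's higher click-through rate would be worth acting on.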
If you’re running an ad campaign to fill a survey, it often takes about a week to find enough people (depending, of course, on how many people you need). But for us, ads are a way to supplement the existing User Interviews database. If you’re starting from scratch, it could take longer.
When you’re running ads, keep an eye on how well they’re performing. Facebook makes it easy to keep spending more and more money on existing ads. But if an ad isn’t converting, it’s in your best interest to stop running it and focus on other methods. It’s best to decide what your cut-off point is ahead of time, then stick to it even if you haven’t filled your study yet.
One way to improve your ad conversion rate is to look at the incentive you’ve set for the study. Studies with lower-than-average incentives (those paying $25 or less) don’t get nearly the same traction as those paying average rates or more. As a rule of thumb, roughly $1 per minute of interview time works well for consumer studies. Occupation-based, in-home, or in-person studies often require more than $1 per minute to attract a good participant pool.
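The rule of thumb above can be sketched as a quick calculation. The 1.5x multiplier for occupation-based or in-person studies is an illustrative assumption, not a figure from our data; the only grounded part is the ~$1-per-minute baseline:

```python
def suggest_incentive(minutes, occupational_or_in_person=False):
    """Rough incentive estimate: ~$1/min for consumer studies,
    more for occupation-based or in-person studies
    (1.5x here is an illustrative assumption)."""
    rate = 1.5 if occupational_or_in_person else 1.0
    return round(minutes * rate)

print(suggest_incentive(30))   # consumer study: prints 30
print(suggest_incentive(30, occupational_or_in_person=True))  # prints 45
```

Treat the output as a starting point; as noted below, bumping the incentive by $10 or $15 per person is the usual fix when sign-ups are slow.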
Here are our recommendations for setting incentives based on participants’ profession/income bracket. If your initial incentive doesn’t seem to be attracting many participants, try increasing it by $10 or $15 per person, if you can. That will make a big difference in your response rate.
Also, keep in mind that really niche studies, very specific occupation studies, purchase behavior studies, studies outside of big cities, and so forth are just more difficult to recruit for. You may need to get creative in finding where those participants spend their time and reach out to them there.
If you’re having trouble targeting a specific demographic (for example, because you can’t target by income using Facebook ads), then lookalike audiences could be helpful. We primarily use them for proactive ads. When we identify a group that may be tough to recruit, we pull a list of participants from our database who meet those criteria and form a lookalike audience.
Continuing with income as an example, high-income participants can be tricky to target, so we have a proactive ad using a lookalike audience to recruit those participants on an ongoing basis.
We determine which ads to run by top need. For example, we’ll always need participants in NYC and Boston and have consistent demand for small business owners.
If you’re trying to use lookalike audiences for your own research, it’s more difficult. In order for that to be a viable strategy, you’ll need a source of data from which to create the lookalike audience on Facebook.
If you’re not finding qualified research participants, you may need to revisit the way you structure your survey questions. It’s possible you could be inadvertently eliminating candidates or even letting through problematic people.
Over time, and after trial and error, we’ve discovered what does and does not work and have developed best practices for writing effective screener surveys.
Here are some of our recommendations:
In our experience, targeted online advertising combined with tightly crafted screener surveys brings reliable results. If these kinds of practices are brand new to you, there will be a bit of a learning curve.
Writing a great screener is perhaps the easiest part of the process, if you take the time to define who you’re targeting, then mercilessly cut extra questions. But running ads is a bit trickier. It’s easy to make them too broad or too specific, and learning the technical aspects of Facebook ads will take time. Plus, Facebook is always changing the rules. You’ll have to keep an eye on your ad spend and really learn the ins and outs of targeting to find enough of the right people.
Alternatively, we’d be happy to source participants for you so you can focus on your actual research. We have an extensive, continually updated database of diverse candidates and have consistent success filling niche requests from researchers. Many times, you can have your first participants scheduled within the day.
Want to see what it’s like? You can create an account for free — and you only pay for completed sessions.
Greg is a freelance writer with a broad background writing for the web, print, and radio. A former longtime radio broadcaster, he’s also still active behind the mic as a professional voice talent.