
Research recruitment fundamentals haven't changed. You still need good screeners, clear communication, fair incentives, and quality verification to source qualified (and quality) participants.
What has changed is the context: AI has entered the fray, fraud has become industrial, and trust in participant quality is at a premium.
When faced with the new realities of recruiting, will you choose to spin your wheels or embrace the challenges—or more importantly, opportunities—in front of you?
Here, we’ll focus on what it will take to move forward. We’ll break down the participant recruitment landscape today vs yesterday, the new challenges and realities researchers face, and how to get on the path to modernize your participant recruitment efforts.
And since we know a thing or two about participant recruitment, we wanted to share the knowledge we’ve gained from our 6 million+ participant panel and diverse community of researchers. See the full report below, and be sure to also download our Participant Recruitment Tactical Guide for hands-on guidance.
What’s changed in participant recruitment in recent years? A lot! But for the sake of this guide, we’ll focus on three major shifts: AI’s role in recruitment, a rise in coordinated fraud, and the role that platforms can play in supporting better recruitment.
We’ve witnessed the rise of AI use among researchers firsthand—74% of researchers reported using ChatGPT to support their workflows, according to our most recent AI in UX Research Report.
But perhaps the bigger story is that participants are using AI as well, upending recruitment—and raising the risk and reality of fraud—in the process. Participants are increasingly using AI across three key categories: screener optimization, presenting “professionally”, and pattern matching.
AI might be enabling researchers to experiment with new methods—both in tool use and for developing AI tools themselves—which more than half of researchers (54%) said they did in 2025.
-2025 State of User Research

In practice, AI responses are pretty easy to spot (for now): they tend to feature perfect grammar with vague content, professional structure lacking specifics, generic examples applicable to anyone, or an overly formal tone. In contrast, human insights are grounded in more specific examples, natural language with imperfections, and idiosyncratic descriptions.
AI doesn’t need to upend your recruitment; it can supplement your strategy instead. Recruitment platforms can support these efforts with better fraud detection (more on this below) and screener optimization, among other key benefits.
The reality is that there are legitimate uses of AI in screener responses, but not all AI usage is created equal. When evaluating participant quality against AI usage, here are a few considerations to keep in mind:
1. Don't automatically disqualify AI assistance. AI has become a game-changer for translation, helping non-native speakers who might otherwise struggle to convey valuable insights. AI for language polishing doesn’t always equal fraud.
2. Verify depth, not polish. Follow up with specific detail requests during sessions. A pointed "tell me about the last time you..." question, rather than something more generic, can help in this regard.
3. Look for concrete examples. Keep an eye out for specific, recent, idiosyncratic details. These are harder to AI-generate convincingly today.
4. Leverage AI as a validation tool. Turn the tables! Use AI to review screener responses at scale, flag suspicious patterns, and identify overly generic language (a simple starting point is sketched below).
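To make that last point concrete, here is a minimal, purely illustrative sketch of the kind of triage you could run before a human (or an LLM) reviews responses more closely. The phrase list, the "specifics" heuristic, and the length threshold are assumptions you would tune against your own screener data; this is not a production fraud detector.

```python
import re

# Phrases that often show up in generic, AI-polished screener answers.
# Purely illustrative; build and tune your own list from real responses.
GENERIC_PHRASES = [
    "in today's fast-paced world",
    "as a passionate",
    "seamless experience",
    "i leverage",
    "well-versed in",
]

def flag_for_review(answer: str) -> dict:
    """Score one open-ended screener answer for 'generic' signals.

    A higher score means a human should take a closer look; it is not
    grounds for automatic rejection (see consideration #1 above).
    """
    text = answer.lower()
    words = text.split()

    has_generic_phrase = any(p in text for p in GENERIC_PHRASES)
    # Grounded answers tend to contain specifics: numbers, dates, prices, names.
    lacks_specifics = not re.search(r"\d|\$|last week|yesterday|this morning", text)
    # Long, flawless prose with no contractions reads more "written" than "spoken".
    polished_but_vague = len(words) > 80 and not re.search(r"\b\w+'(t|s|m|re|ve)\b", text)

    score = sum([has_generic_phrase, lacks_specifics, polished_but_vague])
    return {
        "generic_phrase": has_generic_phrase,
        "lacks_specifics": lacks_specifics,
        "polished_but_vague": polished_but_vague,
        "review_score": score,  # 0 = no flags, 3 = definitely review by hand
    }

print(flag_for_review(
    "In today's fast-paced world, I leverage budgeting apps for a seamless experience."
))
```

Treat a high score as a prompt for a follow-up question in the session, not as an automatic disqualifier.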
Participant quality and reliability was cited as a major recruiting challenge by 54% of respondents to the 2025 State of User Research, making it the third most-cited challenge behind finding the right participants and finding time to recruit.
To help address this pain point, User Interviews recently conducted an assessment of its 6 million+ participant pool, finding that only ~1% of sessions were reported for potential misrepresentation, and <0.3% of sessions were confirmed fraudulent.
So how can User Interviews (or any other platform) ensure it stays this way, particularly now that fraud has become a thriving industry rather than just a “thing that happens” in the participant recruitment process?
Quirk’s Media published an open letter last year on the topic, citing multiple sources on the growing prevalence of fraud:
“Kantar’s report, The State of Online Research Panels, reveals that “researchers are discarding on average up to 38% of the data they collect due to quality concerns and panel fraud...with one prospect citing they had to return 70% of data provided by a leading panel provider.” And this report is not an outlier. A study by Greenbook found that up to 30% of online survey responses are fraudulent and a LinkedIn Pulse article pegged the number at 40%.”

Before we talk tactics, let’s first unpack the fraud landscape.
Fraud networks are becoming more sophisticated, using communities like Discord, Telegram, and closed forums to share information, such as which studies are "easy", screener questions and "correct" answers, verification workarounds, and (eek!) platform vulnerabilities.
How can fraudsters do all of this at scale? Bad actors are increasingly enabled by the prevalence of new tools that facilitate fraud, such as:
There’s also a cost-benefit calculus at work: fraud networks weigh their expected return per hour by study type against the risk-reward profile of each platform. Plus, they know when to hit the eject button and quickly abandon burned accounts.
In all, we have to recognize that this is no longer a one-off phenomenon, but rather an organized effort that operates like other online fraud industries (e.g., click farms, review manipulation, coupon abuse).
Tier 1: Opportunistic. Individuals stretching the truth; minimal preparation; easy to catch with basic verification.
Tier 2: Professional. Experienced participants optimizing across studies; shared strategies but no automation; passes basic verification; affects data quality.
Tier 3: Industrial. Coordinated networks leveraging tools; sophisticated evasion; very difficult to detect; can invalidate studies.
On top of that, mitigating fraud before it happens has become increasingly challenging for researchers managing their own panel, as they can no longer solely rely on single-point verification.
Yes, the ultimate goal is to prevent all instances of fraud. However, that’s become increasingly difficult.
Unfortunately, fraud is not a problem you can solve overnight. However, with the right recruitment tools and best practices, you can mitigate and manage it.
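As an illustration of what moving beyond single-point verification can look like in practice, here is a minimal sketch that combines several weak signals into one risk score. The signals, weights, and threshold are hypothetical and do not describe User Interviews’ (or any platform’s) actual detection system; the point is simply that layered checks catch what any single check misses.

```python
from dataclasses import dataclass

@dataclass
class SignupSignals:
    """Hypothetical per-participant signals; real platforms combine many more."""
    duplicate_device_or_ip: bool       # device/IP already tied to another account
    screener_completed_seconds: int    # suspiciously fast completion
    answers_match_known_pattern: bool  # near-duplicate of answers seen elsewhere
    payment_details_reused: bool       # payout account shared across profiles

def fraud_risk_score(s: SignupSignals) -> float:
    """Combine weak signals into a 0-1 risk score.

    Illustrative weights only. No single check decides; anything above a
    chosen threshold goes to manual review rather than an automatic ban.
    """
    score = 0.0
    if s.duplicate_device_or_ip:
        score += 0.35
    if s.screener_completed_seconds < 60:
        score += 0.20
    if s.answers_match_known_pattern:
        score += 0.30
    if s.payment_details_reused:
        score += 0.35
    return round(min(score, 1.0), 2)

signals = SignupSignals(True, 45, False, False)
print(fraud_risk_score(signals))  # 0.55 -> flag for manual review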
Participant recruitment is still a major challenge for researchers: finding enough qualified participants, the time to recruit, and participant quality remain the biggest struggles, according to the 2025 State of User Research.
Modern times call for sophisticated tools, and recruitment platforms are no exception: they’ve shifted from a nice-to-have resource to a piece of essential research infrastructure. In fact, more than 70% of respondents to our State of User Research said that they use a tool for qualitative recruiting, while 56% said they rely on a tool for quantitative recruiting.
That said, all tools are not created equal. Let’s (re)set the stage on participant recruitment platforms, what they enable, and where they fit into a modern research toolstack.
1. Scale and speed: Platforms make recruitment dramatically faster and more accessible than manual methods. What used to take weeks now takes hours.
2. Fraud mitigation: Platforms have implemented layered defenses that individual researchers can't: automated fraud detection, participation history tracking, behavioral signals, and continuous monitoring.
3. Access to niche audiences: Platforms maintain pools that include hard-to-reach professionals and specialized demographics. Finding eight healthcare executives in Atlanta and Chicago has suddenly become feasible, not aspirational.
4. Operational efficiency: Automated screening, scheduling, incentive distribution, and communication reduce the operational burden on research teams, letting them focus on research instead of logistics.
Start talking to hard-to-reach professionals and qualified consumer participants in a matter of hours, not weeks. Launch a free project to get started.
Recruitment platforms like User Interviews are powered by matching algorithms that facilitate all of the features described above.
What happens behind the scenes:
This becomes an alignment question: what do researchers need to get their jobs done faster versus producing the best insights possible? Platforms should improve participant quality and recruitment efficiency, but also put the “humans” in the driver seat—researchers can ultimately decide who the best participants are for their study through screeners and other direct assessments.
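As a purely hypothetical sketch of that division of labor (not a description of User Interviews’ actual matching algorithm), the logic might look something like this: the researcher’s screener acts as a hard filter, while platform-tracked quality signals only influence ordering.

```python
from typing import TypedDict

class Participant(TypedDict):
    id: str
    passed_screener: bool        # researcher-controlled: screener answers decide this
    past_no_show_rate: float     # platform-tracked reliability signal
    studies_last_90_days: int    # platform-tracked frequency signal

def rank_candidates(pool: list[Participant]) -> list[Participant]:
    """Hard-filter on the researcher's screener, then soft-rank on quality signals.

    The researcher still makes the final call; this only orders who they see first.
    """
    eligible = [p for p in pool if p["passed_screener"]]
    # Lower no-show rate and fewer recent studies float to the top.
    return sorted(
        eligible,
        key=lambda p: (p["past_no_show_rate"], p["studies_last_90_days"]),
    )

pool: list[Participant] = [
    {"id": "a", "passed_screener": True, "past_no_show_rate": 0.10, "studies_last_90_days": 6},
    {"id": "b", "passed_screener": True, "past_no_show_rate": 0.02, "studies_last_90_days": 1},
    {"id": "c", "passed_screener": False, "past_no_show_rate": 0.00, "studies_last_90_days": 0},
]
print([p["id"] for p in rank_candidates(pool)])  # ['b', 'a']
```

The design choice worth noting: soft signals only reorder candidates, they never override your screener.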
Platforms need to optimize for:
Faster recruitment
Participant quality
Operational efficiency at scale
Helping users assess incentive costs
You need to optimize for:
Quality participants
Audience/study fit
Insights at scale
Fit within various research budgets
Platforms succeed when researchers get good participants quickly. But not always perfectly:
This creates valuable efficiency and consistency. It also means switching platforms has real costs—in time, retraining, process changes, and stakeholder confidence.
The question isn't whether to avoid platforms. It's how to use them strategically.
1. How you write screeners: Behavioral questions, open-ended responses for review, questions that reveal depth
2. What verification you add: Manual review of flagged responses, credential spot-checks, in-session verification, work samples
3. How you communicate: Context beyond platform defaults, rapport building, clear expectations
The best recruitment platforms want this feedback. They improve based on how researchers actually use their systems.
Two questions worth asking today when debating tools vs direct recruitment:
Factors to consider when answering these questions:
Like any tool, recruitment platforms work best when they support your needs: you understand what they're good at, where they have limitations, and how to use them within your larger strategy.
Easier said than done, we know. There are clear challenges in today’s recruitment landscape. We’ve identified five of them below.
So what are the participant recruitment challenges facing researchers today? Let’s unpack the five major hurdles we’ve seen across the research community—and how to address them right now.
In recent years, researchers started asking (whispering?) a new question: "What if we just... used AI participants?"
The idea isn't as farfetched as it sounds. AI systems can now simulate user behaviors, generate realistic responses that mimic specific personas, and provide feedback at scale. This isn't theoretical—it's already happening, and the research community is actively debating where the boundaries should be.
1. Internal ideation: Teams routinely use AI to model user personas, generate "what would X type of user say?" responses, and explore edge cases—as long as it's clearly labeled as simulation.
2. Supplemental research: Some teams now use real participants for primary research, then turn to AI to explore variations or scale their testing. This augmentation approach has seen some adoption, but there is ongoing debate (such as the case against synthetic users made by IDEO last year) about whether it dilutes the authenticity of findings.
3. Primary research: Treating AI responses as equivalent to user feedback and making product decisions based on synthetic data remains limited but is growing, despite ethical concerns from the research community.
Naturally, there are proponents and critics of synthetic users.
What the proponents say
What the critics say
Not to mention there's a fundamental ethical concern: using synthetic participants without the proper disclosures could be considered deceptive, at minimum.
We’ve outlined the consensus on a few of those use cases here:
Acceptable (but not widely adopted):
Debated (ongoing):
Not acceptable (yet):
Researchers have legitimate slippery slope concerns, reporting conversations to the tune of:
Take the FREE 2-week course that combines theoretical understanding with practical application, leveraging your expertise and existing frameworks.
OK, step four might fall more on the extreme end of the spectrum, but budget concerns are real: our 2025 Research Budget Report showed that headcount, tools, and participant recruitment eat up the majority (71%) of research budgets.
Let’s assume avoidance is not an option. At minimum, researchers should:
The question isn't whether synthetic participants will exist. They already do. The question is where to draw the line between acceptable and unacceptable use at your organization so that research quality remains high. As a result, it’s incumbent upon researchers to revisit and revise their guidelines on synthetic users on a regular basis.
Has research participation crossed over into gig work? It could be the case. Professional participants are on the rise. They're not fraudulent by definition, but they're fundamentally different from the casual participant who does a study or two. The professional participant is here (and not going anywhere).
So what makes them different? Professional participants approach research strategically. They monitor multiple platforms for opportunities, track their participation rates and earnings, optimize their qualification strategies, build reputations through researcher ratings, and calculate their ROI on time invested.
This poses a research dilemma: professional participants show up reliably, understand instructions quickly, provide articulate feedback, and make sessions run smoothly.
But they are arguably worse for research in other ways. They may provide answers that sound good but lack authenticity (i.e., telling researchers what they think they want to hear). Fresh perspectives erode into practiced performances. They become professional critics rather than authentic users.
The tenets of good research—professional participant or not—should still apply. Here are some steps you can take to gut-check your research:
1. Track participation frequency: Know how many studies participants have completed recently, both on your platform and, if possible, across others (see the sketch after this list).
2. Match to study type: Put in guardrails based on whether you’re conducting exploratory research, usability testing, or validation research.
3. Document participant behavior: Keep track of participants (e.g., "2 participants had completed 1 prior study 6+ months ago; 6 were first-time participants."). Platforms can be helpful in this regard, but this can also be managed individually.
4. Refresh your participant pool: Sometimes you want experienced participants who can articulate nuanced feedback quickly. Other times you need fresh eyes who haven't been trained by dozens of prior studies. Be intentional about what you need, what you're getting, and whether it matters for your specific goals.
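If you manage your own panel, steps 1 and 3 can start as a simple session log. Here is a minimal sketch under that assumption; the 90-day window and three-study threshold are illustrative defaults, not industry standards.

```python
from datetime import date, timedelta
from collections import defaultdict

# Hypothetical session log: (participant_id, session_date) pairs recorded per study.
SESSION_LOG = [
    ("p-101", date(2025, 1, 10)),
    ("p-101", date(2025, 2, 3)),
    ("p-101", date(2025, 3, 1)),
    ("p-212", date(2024, 6, 15)),
]

def participation_summary(log, as_of: date, window_days: int = 90, threshold: int = 3):
    """Count recent sessions per participant and flag likely 'professional' participants.

    Adjust the window and threshold to your study type (exploratory research
    may want fresher eyes than usability or validation work).
    """
    cutoff = as_of - timedelta(days=window_days)
    counts = defaultdict(int)
    for pid, session_date in log:
        if session_date >= cutoff:
            counts[pid] += 1
    return {pid: {"recent_sessions": n, "flag": n >= threshold} for pid, n in counts.items()}

print(participation_summary(SESSION_LOG, as_of=date(2025, 3, 15)))
# {'p-101': {'recent_sessions': 3, 'flag': True}}
```

A flag here is documentation for your readout, not a reason to exclude someone outright.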
Tech layoffs, persistent inflation, economic uncertainty, and market volatility didn't just affect research budgets—they fundamentally changed who participates in research.
The tech industry layoff wave brought more highly skilled participants into the market—laid-off product managers, designers, and engineers who were suddenly available during work hours and willing to participate for income they previously didn't need. They brought legitimate expertise and professional experience. But their motivation had shifted fundamentally from viewing research as an interesting side activity to treating it as necessary income.
This created a tension in quality that's hard to resolve: more qualified participants are a good thing, but professional participant patterns have grown (not so good) and economic desperation is on a similar trajectory (unknown impact).
Add inflation and incentive erosion into the equation, and the $75 that felt generous in 2020 covers roughly 20% less in 2025.
Does this exchange sound familiar?
The result? Quality concerns and increased challenges in recruitment.
Economic stress creates what some researchers call "desperate participants"—people who will say anything to qualify, embellish their experience, or participate even though they do not fit the criteria. This has raised questions such as:
The impact cuts both ways. On the quality improvement front:
On the quality concern end of the spectrum:
While this pattern doesn’t necessarily fit fraud in the traditional sense, addressing these challenges often requires similar tactics: stricter verification to ensure you’re getting the highest-quality participants for your study.
Privacy regulation is no longer a “future concern,” and with it come operational complexities that impact recruitment decisions.
Beginning with GDPR in Europe and continuing in the U.S. with regulations such as the CCPA in California, researchers are now tasked with navigating different laws by location, determining jurisdiction across participant location versus company location versus data storage, and assessing varying enforcement risks by region and company size.
Its impact can be felt most acutely across four categories: Participation tracking across studies, demographic collection, data retention, and third-party sharing, as our diagram above shows.
A simple checkbox approach will no longer do—research participant consent has taken on new (and complicated) forms. Recruitment platforms can help, but there are limitations.
Platforms typically handle
You're still responsible for
In the coming years, it’s expected that a combination of new U.S. privacy laws, stricter EU enforcement, and new AI-specific regulations will continue to impact research as we know it.
Recruiting will get more complex in this regard, so understanding privacy laws—and their implications for your research—is fundamental to your strategy.
In the past, recruiting B2B participants meant slow, manual outreach: LinkedIn messages, email chains, referrals through professional networks.
Today, B2B recruitment operates more like expert network infrastructure. Specialized platforms emerged to solve B2B recruitment specifically—not as a side feature of general panels, but as their core business. While this shift may have improved access to a coveted pool of participants, it also introduced trade-offs.
1. Speed: Sourcing, vetting, and scheduling C-suite executives used to take weeks (if not more). Now you can source B2B participants in hours or days. The advent of expert networks, which maintain pre-verified pools of professionals across industries, functions, and seniorities, had a lot to do with this advancement.
2. Professional verification: Rigorous verification is now normalized—LinkedIn profile matching, work email confirmation, and industry-specific vetting are just a few of the avenues for ensuring participant quality today. This also raises the bar: when one platform verifies thoroughly, researchers expect all platforms to do the same.
3. The higher incentive norm: Expert networks established market rates that reflect professional opportunity costs. A 60-minute interview with a senior-titled participant can run $200-400 or more, not $75. This reset expectations industry-wide.
4. Specialized matching: Platforms have become more sophisticated in matching on criteria that matter for B2B research, such as specific technologies used, company size managed, decision-making authority, years in role, or industry vertical.
1. From "hard to reach" to "available but expensive": The good news is that you can find senior professionals. The bad news is that you may not have the budget to afford them. This created a two-tier system: well-funded research uses expert networks and gets verified professionals quickly, while budget-constrained research still relies on manual outreach and takes longer.
2. Verification matured: In consumer research, participants self-report demographics and you verify selectively. In modern B2B research, credential verification is expected as a baseline. Not all titles are created equal in B2B, and neither are decision-making authority, tools used, responsibilities, and other criteria needed for high-quality research.
What platforms check: Profile exists and is complete, job title matches screening criteria, company matches claimed employer, connections and endorsements appear authentic, activity suggests real professional presence.
User Interviews, for one, has partnered with LinkedIn’s verification program. Read more about it here.
Why it works: Most professionals maintain accurate LinkedIn profiles for career reasons. Fabricating a convincing professional identity there requires sustained effort.
3. A shift from “any professional” to “the right professional”: Early B2B research often settled for a warm body. Need enterprise IT decision-makers? Talk to anyone in enterprise IT. Expert networks have helped connect wants to needs: finding product managers who've launched B2B SaaS products in the past 18 months is now feasible.
For all of the positives we’ve outlined that have accompanied more rigorous B2B verification, there’s an inclusionary trade-off that researchers need to be aware of—and mitigate. There are some high value groups that risk being excluded in B2B research:
How much verification is enough without excluding valuable perspectives? It ultimately depends on study type, stakes, and your research plan.
Benefits of built-in expert networks
Challenges expert networks help address
B2B recruitment had a glow-up—what was once highly manual became tool-enabled, on-demand infrastructure. Access improved dramatically, verification standards rose, speed increased, and specialization became more feasible than ever before.
The question is no longer whether B2B recruitment is easier or harder; it’s about weighing the trade-offs of leveraging expert networks versus alternative recruitment methods.
That’s a fair question! Below, we’ll summarize the questions you should stop and start asking as you consider all of today’s participant recruitment developments and challenges—and how it will influence the ways you modernize your research practice.
The rise of AI in user research means it’s time to reassess its value across study moderation, transcription, analysis, synthesis, and other elements of your research.
Stop asking: "How do I prevent AI use?"
Start asking: "How do I design for a world where both researchers and participants use AI?"
In practical terms, this could include:
Want more tactical recommendations and templates? Download our handy PDF.

Fraud is a new reality researchers face in the participant recruitment process. It’s critical to take a pragmatic approach to mitigating it.
Stop asking: "If I just had better verification, wouldn’t fraud go away?"
Start asking: "How do I manage acceptable fraud risk?"
To start, you can leverage participant platforms and improve your own practices, which includes:
Researchers need to determine which platforms are best positioned to support a modern recruitment environment.
Stop asking: "Which platform will work exactly the way I want?"
Start asking: "Which platforms offer the flexibility to recruit on my own terms?"
Criteria to look for could include:
For more support on selecting the best participant recruitment platform for your needs, check out our Participant Recruitment Platform Buyer’s Guide.

The use of synthetic participants is going mainstream, which means asking the right questions today will help you put the best guardrails in place for tomorrow.
Stop asking: "What is the value of synthetic participants?"
Start asking: "How can synthetic participants supplement my human insights?”
Start preparing for a conversation around synthetic users by:
Putting the right systems and infrastructure in place for your organization is the most weatherproof participant recruitment tool you can have.
Stop asking: "Isn’t quality all about who I recruited?"
Start asking: "Isn’t quality about whether I can clearly explain the system that selected them?"
To ensure quality, you can:
For more tactical tips and templates on how to approach modern, high-quality participant recruitment, download our companion PDF guide.

The good news is that the fundamentals of high quality recruitment still matter: Find the right, qualified people, treat and incentivize them fairly, and frequently revisit and document your processes as conditions inevitably change.
There’s no bad news, but there are changes to the context you’re now operating in. Fraudsters got smarter. Synthetic participants are a thing. Economic pressures shape behavior. Privacy regulations add constraints.
In light of these changes to the recruitment landscape, we do not recommend business as usual (i.e., ignoring these changes and expecting old approaches to work). But you also don't need to panic or rebuild everything from the ground up, either.
Start by identifying what has changed (hopefully we’ve given you a head start in this regard) and prepare for what will. Then adapt your practice strategically.
While this guide cannot claim to solve every recruitment challenge that arises, it can help you make better decisions in a landscape where trust is harder to build, easier to lose, and more important than ever.
For comprehensive foundational guidance, see our Field Guide chapter on recruiting participants.
1. User Interviews. (2025). 2025 State of User Research Report.
https://www.userinterviews.com/state-of-user-research-report
2. User Interviews. (2025). 2025 State of Research Strategy Report.
https://www.userinterviews.com/state-of-research-strategy
3. User Interviews. (2025). 2025 State of Research Operations Report.
https://www.userinterviews.com/state-of-research-operations
4. User Interviews. (2024). AI in UX Research Report.
https://www.userinterviews.com/ai-in-ux-research-report
5. User Interviews. (2025). 2025 Research Budget Report.
https://www.userinterviews.com/research-budget-report
6. User Interviews. (n.d.). How Does User Interviews Deter Fraud on the Platform? User Interviews Support Center.
https://www.userinterviews.com/support/how-does-user-interviews-deter-fraud-on-the-platform
7. User Interviews. (n.d.). Recognizing and Reporting Fraud. User Interviews Academy.
https://academy.userinterviews.com/lesson/recognizing-reporting-fraud
8. User Interviews. (n.d.). User Research Fraud Prevention. User Interviews Blog.
https://www.userinterviews.com/blog/user-research-fraud-prevention
9. Kantar. The State of Online Research Panels. Cited in: Quirk's Media. (2024). An Open Letter to the Research Industry.
https://www.quirks.com/articles/an-open-letter-to-the-research-industry
10. The Voice of User. (n.d.). Synthetic Personas & Stochastic Theater: Why LLM Outputs Aren't Qualitative Insight.
https://www.thevoiceofuser.com/synthetic-personas-stochastic-theater-why-llm-outputs-arent-qualitative-insight
11. IDEO. (n.d.). The Case Against AI-Generated Users. IDEO Journal.
https://www.ideo.com/journal/the-case-against-ai-generated-users

Download our participant recruitment toolkit, which includes our Participant Recruitment Tactical Guide, 2025 Panel Report, templates, and more.