The Modern Guide to Research Participant Recruitment

Research in the Age of AI, Bots, and Industrialized Fraud
Research recruitment fundamentals haven't changed. You still need good screeners, clear communication, fair incentives, and quality verification to source qualified (and quality) participants.

What has changed is the context: AI has entered the fray, fraud has become industrial, and trust in participant quality is at a premium.

When faced with the new realities of recruiting, will you choose to spin your wheels or embrace the challenges—or more importantly, opportunities—in front of you?

Here, we’ll focus on what it will take to move forward. We’ll break down the participant recruitment landscape today vs yesterday, the new challenges and realities researchers face, and how to get on the path to modernize your participant recruitment efforts.

And since we know a thing or two about participant recruitment, we wanted to share all of the knowledge we’ve gained from our 6 million+ panel and diverse community of researchers. See the full report below, and be sure to also download our Participant Recruitment Tactical Guide for hands-on guidance.

Participant Recruitment: The 3 Big Shifts

What’s changed in participant recruitment in recent years? A lot! But for the sake of this guide, we’ll focus on three major shifts: AI’s role in recruitment, a rise in coordinated fraud, and the role that platforms can play in supporting better recruitment.

1. AI’s Double-Sided Integration Into Participant Recruitment

We’ve witnessed the rise of AI use among researchers firsthand—74% of researchers reported using ChatGPT to support their workflows, according to our most recent AI in UX Research Report.

But perhaps the bigger story is that participants are using AI as well, upending recruitment—and raising the risk and reality of fraud—in the process. Participants are increasingly using AI across three key categories: screener optimization, presenting “professionally”, and pattern matching.

AI might be enabling researchers to experiment with new methods—both in tool use and for developing AI tools themselves—which more than half of researchers (54%) said they did in 2025.

— 2025 State of User Research

In practice, AI responses are pretty easy to spot (for now): they tend to feature perfect grammar with vague content, professional structure lacking specifics, generic examples applicable to anyone, or an overly formal tone. In contrast, human insights are grounded in more specific examples, natural language with imperfections, and idiosyncratic descriptions. 

How participants use AI in recruitment

AI doesn’t need to upend your recruitment; it can supplement your strategy. Recruitment platforms can support these efforts with better fraud detection (more on this below) and screener optimization, among other key benefits.

Making sense of legit AI use versus bad actors

The reality is that there are legitimate uses of AI in screener responses, but not all AI usage is created equal. When evaluating participant quality against AI usage, here are a few considerations to keep in mind:

1. Don't automatically disqualify AI assistance. AI has become a game-changer for translation, helping non-native speakers convey otherwise valuable insights. AI used for language polishing doesn’t always equal fraud.

2. Verify depth, not polish. Follow up with specific detail requests during sessions. A pointed "tell me about the last time you..." question, rather than something more generic, can help in this regard.

3. Look for concrete examples. Keep an eye out for specific, recent, idiosyncratic details. These are harder to AI-generate convincingly today.

4. Leverage AI as a validation tool. Turn the tables! Use AI to review screener responses at scale, flag suspicious patterns, and identify overly-generic language.
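To make point 4 concrete, here's a minimal sketch of what reviewing screener responses at scale could look like, using plain lexical heuristics before any LLM gets involved. The patterns, phrases, and thresholds are illustrative assumptions (not a validated fraud model), so tune them against responses you've already reviewed by hand:

```python
import re

# Illustrative signals only; calibrate against responses you've manually reviewed.
SPECIFICITY_PATTERNS = [
    r"\b(19|20)\d{2}\b",            # years ("in 2023 I switched...")
    r"\b\d+\b",                     # counts, prices, team sizes
    r"\blast (week|month|year)\b",  # recency markers
]

GENERIC_PHRASES = [  # boilerplate that rarely shows up in grounded answers
    "in today's fast-paced world",
    "streamline my workflow",
    "leverage cutting-edge",
    "seamless user experience",
]

def review_response(text: str) -> dict:
    """Flag one screener response for manual review, never auto-rejection."""
    specific_hits = sum(bool(re.search(p, text, re.I)) for p in SPECIFICITY_PATTERNS)
    generic_hits = sum(p in text.lower() for p in GENERIC_PHRASES)
    long_but_vague = specific_hits == 0 and len(text.split()) > 40
    return {
        "specific_hits": specific_hits,
        "generic_hits": generic_hits,
        "needs_review": generic_hits >= 2 or long_but_vague,
    }

print(review_response(
    "In today's fast-paced world, I leverage cutting-edge tools to "
    "streamline my workflow and deliver a seamless user experience."
))  # -> {'specific_hits': 0, 'generic_hits': 4, 'needs_review': True}
```

Anything this flags should go to a human reviewer; per consideration 1 above, legitimate AI-assisted polish isn't fraud.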

2. When Fraud Becomes Industrial, Not Episodic

Participant quality and reliability was cited as a major recruiting challenge by 54% of respondents to the 2025 State of User Research, making it the third most-cited challenge behind finding the right participants and having time to recruit.

To help address this pain point, User Interviews recently conducted an assessment of its 6 million+ participant pool, finding that only ~1% of sessions were reported for potential misrepresentation, and <0.3% of sessions were confirmed fraudulent.

So how can User Interviews (or any other platform) ensure it stays this way, particularly now that fraud has become a thriving industry rather than just a “thing that happens” in the participant recruitment process?

Quirk’s Media published an open letter last year on the topic, citing multiple sources on the growing prevalence of fraud:

“Kantar’s report, The State of Online Research Panels, reveals that “researchers are discarding on average up to 38% of the data they collect due to quality concerns and panel fraud...with one prospect citing they had to return 70% of data provided by a leading panel provider.” And this report is not an outlier. A study by Greenbook found that up to 30% of online survey responses are fraudulent and a LinkedIn Pulse article pegged the number at 40%.”

Before we talk tactics, let’s first unpack the fraud landscape.

How modern fraud operates

Fraud networks are becoming more sophisticated, using communities like Discord, Telegram, and closed forums to share information, such as which studies are "easy", screener questions and "correct" answers, verification workarounds, and (eek!) platform vulnerabilities.

How can fraudsters do all of this at scale? Bad actors are increasingly enabled by the prevalence of new tools that facilitate fraud, such as:

  • VPNs
  • AI-assisted response generation
  • Automated profile creation
  • Fake credential generators

There’s also a marginal utility calculation to it all: fraud networks optimize their return per hour by study type against the risk-reward profile of each platform. Plus, they know when to hit the eject button and quickly abandon burned accounts.

In all, we have to recognize that this is no longer a one-off phenomenon, but rather an organized effort that operates like other online fraud industries (e.g., click farms, review manipulation, coupon abuse).
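Countering that tooling starts at signup time. As a hedged illustration (the domain list and thresholds below are placeholders, and real platforms combine far more signals), here is the shape of two basic checks: disposable email domains and signup velocity from a single IP.

```python
from datetime import datetime, timedelta

# Placeholder list; production systems use maintained feeds of disposable domains.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def signup_flags(email: str, signups_from_same_ip: list) -> list:
    """Return human-readable risk flags for one new account."""
    flags = []
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable email domain")

    # Automated profile creation tends to cluster: many accounts, one IP, short window.
    one_hour_ago = datetime.now() - timedelta(hours=1)
    recent = [t for t in signups_from_same_ip if t > one_hour_ago]
    if len(recent) >= 5:
        flags.append(f"{len(recent)} signups from the same IP in the past hour")
    return flags

print(signup_flags("test@mailinator.com", [datetime.now()] * 6))
# -> ['disposable email domain', '6 signups from the same IP in the past hour']
```

No single flag is proof of fraud (plenty of honest people use VPNs); these flags are inputs to the layered triage described below.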

The three fraud tiers

Tier 1: Opportunistic

  • Individuals stretching the truth
  • Minimal preparation
  • Easy to catch with basic verification

Tier 2: Professional

  • Experienced participants optimizing across studies
  • Shared strategies, no automation
  • Passes basic verification
  • Affects data quality

Tier 3: Industrial

  • Coordinated networks leveraging tools
  • Sophisticated evasion
  • Very difficult to detect
  • Can invalidate studies

On top of that, mitigating fraud before it happens has become increasingly challenging for researchers managing their own panel, as they can no longer solely rely on single-point verification.

Why single-point verification fails

Yes, the ultimate goal is to prevent all instances of fraud. However, that’s become increasingly difficult: any single check (an email confirmation, an ID scan, a profile review) is a known quantity that organized networks learn to defeat once, then share the workaround. Signals layered together are far harder to spoof simultaneously.

Unfortunately, fraud is not a problem you can solve overnight. However, with the right recruitment tools and best practices, you can mitigate and manage it.
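This is the core idea behind layered verification: no single signal decides, and each one contributes to a composite score that routes a participant to allow, review, or block. A minimal sketch, with hypothetical signal names and illustrative weights rather than a production fraud model:

```python
# Hypothetical signals and illustrative weights; calibrate against your own data.
SIGNAL_WEIGHTS = {
    "email_verified": -1.0,               # trust signals subtract risk
    "phone_verified": -1.0,
    "vpn_detected": 2.0,                  # risk signals add to it
    "duplicate_device_fingerprint": 3.0,
    "generic_screener_response": 1.5,
    "geo_mismatch": 2.0,
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every signal that fired for this participant."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def triage(score: float) -> str:
    if score >= 4.0:
        return "block"
    if score >= 2.0:
        return "manual review"
    return "allow"

participant = {"email_verified": True, "vpn_detected": True,
               "generic_screener_response": True}
print(triage(risk_score(participant)))  # -1.0 + 2.0 + 1.5 = 2.5 -> "manual review"
```

The exact numbers matter less than the structure: a fraudster who defeats one check still trips the others.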

3. The Rise of Recruitment Platforms—And What It Means for You

Participant recruitment is still a major challenge for researchers: finding enough qualified participants, the time to recruit, and participant quality remain the biggest struggles, according to the 2025 State of User Research.

Modern times call for sophisticated tools, and recruitment platforms are no exception: they’ve shifted from a nice-to-have resource to a piece of essential research infrastructure. In fact, more than 70% of respondents to our State of User Research said that they use a tool for qualitative recruiting, while 56% said they rely on a tool for quantitative recruiting.

That said, not all tools are created equal. Let’s (re)set the stage on participant recruitment platforms, what they enable, and where they fit into a modern research toolstack.

What today’s best participant recruitment platforms enable

1. Scale and speed: Platforms make recruitment dramatically faster and more accessible than manual methods. What used to take weeks now takes hours.

2. Fraud mitigation: Platforms have implemented layered defenses that individual researchers can't build on their own: automated fraud detection, participation history tracking, behavioral signals, and continuous monitoring.

3. Access to niche audiences: Platforms maintain pools that include hard-to-reach professionals and specialized demographics. Finding eight healthcare executives in Atlanta and Chicago has suddenly become feasible, not aspirational.

4. Operational efficiency: Automated screening, scheduling, incentive distribution, and communication reduce the operational burden on research teams, letting them focus on research instead of logistics.

Incredibly fast and targeted recruiting for niche B2B audiences

Start talking to hard-to-reach professionals and qualified consumer participants in a matter of hours, not weeks. Launch a free project to get started.

Sign up now

How they do it

Recruitment platforms like User Interviews are powered by matching algorithms that facilitate all of the features described above.

What happens behind the scenes:

  • Hundreds of individuals may have been shown your study
  • Many started but didn't complete
  • Some qualified but weren't selected
  • Matching criteria operate on platform logic you may not fully see

This becomes an alignment question: how do researchers balance getting their jobs done faster with producing the best insights possible? Platforms should improve participant quality and recruitment efficiency, but also keep humans in the driver’s seat—researchers can ultimately decide who the best participants are for their study through screeners and other direct assessments.

Platforms need to optimize for:

  • Faster recruitment
  • Participant quality
  • Operational efficiency at scale
  • Helping users assess incentive costs

You need to optimize for:

  • Quality participants
  • Audience/study fit
  • Insights at scale
  • Working within various research budgets

Platforms succeed when researchers get good participants quickly. But the fit isn't always perfect:

  • You might want maximum verification; platforms have to balance security against participant friction
  • You might want to distinguish over-exposed repeat participants from experienced, high-quality ones
  • You might want complete transparency in matching, so it’s clear neither participants nor the platform are gaming the system

That infrastructure creates valuable efficiency and consistency. It also means switching platforms has real costs—in time, retraining, process changes, and stakeholder confidence.

The question isn't whether to avoid platforms. It's how to use them strategically.

How to balance direct panel oversight with the flexibility of recruitment platforms

1. How you write screeners
Behavioral questions, open-ended responses for review, questions that reveal depth (e.g., "Walk me through the last time you..." rather than a yes/no qualifier)

2. What verification you add
Manual review of flagged responses, credential spot-checks, in-session verification, work samples

3. How you communicate
Context beyond platform defaults, rapport building, clear expectations

The best recruitment platforms want this feedback. They improve based on how researchers actually use their systems.

Two questions worth asking today when debating tools vs direct recruitment:

  1. How do I work with this platform as infrastructure?
  2. What does this platform enable and where do I need to supplement?

Factors to consider when answering these questions:

  • Can I leverage fraud detection and participant pools I couldn't build myself?
  • Do I have automation capabilities to increase operational efficiency?
  • Is there manual review and additional verification when stakes are higher?
  • Can I share quality issues and improvement suggestions with specific examples?

Like any tool, recruitment platforms work best when they support your needs: you understand what they're good at, where they have limitations, and how to use them within your larger strategy.

Easier said than done, we know. There are clear challenges in today’s recruitment landscape. We’ve identified five of them below.

5 Emerging Recruitment Challenges

So what are the participant recruitment challenges facing researchers today? Let’s unpack the five major hurdles we’ve seen across the research community—and how to address them right now.

Challenge 1: The Synthetic Participant Question

In recent years, researchers started asking (whispering?) a new question: "What if we just... used AI participants?"

The idea isn't as farfetched as it sounds. AI systems can now simulate user behaviors, generate realistic responses that mimic specific personas, and provide feedback at scale. This isn't theoretical—it's already happening, and the research community is actively debating where the boundaries should be.

Where synthetic participants appear

1. Internal ideation: Teams routinely use AI to model user personas, generate "what would X type of user say?" responses, and explore edge cases—as long as it's clearly labeled as simulation.

2. Supplemental research: Some teams now use real participants for primary research, then turn to AI to explore variations or scale their testing. This augmentation approach has seen some adoption, but it has also drawn debate (such as the case against synthetic users made by IDEO last year) about whether it dilutes the authenticity of findings.

3. Primary research: Treating AI responses as equivalent to user feedback and making product decisions based on synthetic data remains limited but is growing, despite ethical concerns from the research community.

Naturally, there are proponents and critics of synthetic users.

What the proponents say

  • AI can generate 1,000 responses instantly versus weeks of recruitment, at roughly $0.10 per AI response versus $75-150 per human participant (about $100 for 1,000 synthetic responses, versus $75,000 or more for the human equivalent)
  • There are no no-shows, no fraud concerns, no quality variance
  • Synthetic participants can test scenarios that would be impossible or dangerous with real humans

What the critics say

  • AI pattern-matches text without authentic lived experience
  • Synthetic users amplify training data biases and miss the unexpected insights that real humans provide
  • The polished, coherent responses from AI can create false confidence in findings

Not to mention there's a fundamental ethical concern: using synthetic participants without the proper disclosures could be considered deceptive, at minimum.

We’ve outlined the consensus on a few of those use cases here:

Acceptable (but not widely adopted):

  • Internal ideation and brainstorming
  • Exploring edge cases
  • Generating test scenarios
  • Clearly labeled as synthetic

Debated (ongoing):

  • Supplementing small human samples
  • Testing variations after human research
  • Specific constrained tasks

Not acceptable (yet):

  • Replacing human research entirely
  • Undisclosed use
  • Product decisions based solely on AI
  • Presenting AI feedback as user feedback

Researchers have legitimate slippery slope concerns, reporting conversations to the tune of:

  • Step 1: "Let’s use AI to generate test scenarios."
  • Step 2: "Let’s supplement five interviews with 20 AI responses.”
  • Step 3: "Let’s use AI, which is consistent and cheaper, more frequently."
  • Step 4: "Why recruit at all?"

AI for User Research 101

Take the FREE 2-week course that combines theoretical understanding with practical application, leveraging your expertise and existing frameworks.

Sign up now

OK, step four might fall more on the extreme end of the spectrum, but budget concerns are real: our 2025 Research Budget Report showed that headcount, tools, and participant recruitment eat up the majority (71%) of research budgets.

Let’s assume avoidance is not an option. At minimum, researchers should:

  • Provide explicit disclosure in findings
  • Clearly label synthetic vs. human data
  • Provide documentation of AI model and prompts
  • Outline synthetic user limitations and caveats
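One lightweight way to operationalize that list is to attach provenance metadata to every finding, so synthetic and human evidence can never be silently conflated. A sketch with hypothetical field names and file paths:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Finding:
    """One research finding, labeled with where its evidence came from."""
    summary: str
    source: str                       # "human" or "synthetic"
    model: Optional[str] = None       # required when source == "synthetic"
    prompt_ref: Optional[str] = None  # where the prompts are documented
    caveats: List[str] = field(default_factory=list)

findings = [
    Finding(
        summary="Participants abandon setup at the billing step",
        source="human",
    ),
    Finding(
        summary="Trial users with expired cards may hit a dead end",
        source="synthetic",
        model="example-llm-v1",              # hypothetical model name
        prompt_ref="prompts/edge-cases.md",  # hypothetical path
        caveats=["not yet validated with real participants"],
    ),
]

# Any report can now disclose its mix explicitly:
synthetic = sum(f.source == "synthetic" for f in findings)
print(f"{synthetic} of {len(findings)} findings rely on synthetic data")
```

Even if your team never writes code, the same fields work as columns in a findings spreadsheet.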

The question isn't whether synthetic participants will exist. They already do. The question is where to draw the line between acceptable and unacceptable use at your organization so that research quality remains high. As a result, it’s incumbent upon researchers to revisit and revise their guidelines on synthetic users on a regular basis.

Challenge 2: The Professional Participant

Has research participation crossed over into gig work? It could be the case. Professional participants are on the rise. They're not fraudulent by definition, but they're fundamentally different from the casual participant who does a study or two. The professional participant is here (and not going anywhere).

So what makes them different? Professional participants approach research strategically. They monitor multiple platforms for opportunities, track their participation rates and earnings, optimize their qualification strategies, build reputations through researcher ratings, and calculate their ROI on time invested.

This poses a research dilemma. On the plus side, professional participants show up reliably, understand instructions quickly, provide articulate feedback, and make sessions run smoothly.

But they are arguably worse for research in other ways. They may provide answers that sound good but lack authenticity (aka telling researchers what they think they want to hear). Fresh perspectives erode into practiced performances. They become professional critics rather than authentic users.

Addressing the professional participant challenge

The tenets of good research—professional participant or not—still apply. Here are some steps you can take to gut-check your research:

1. Track participation frequency: Know how many studies participants have completed recently, on your platform and, if possible, across others (see the sketch after this list).

2. Match to study type: Put in guardrails based on whether you’re conducting exploratory research, usability testing, or validation research.

3. Document participant behavior: Keep track of participants (e.g., "2 participants had completed 1 prior study 6+ months ago; 6 were first-time participants."). Platforms can be helpful in this regard, but this can also be managed individually.

4. Refresh your participant pool: Sometimes you want experienced participants who can articulate nuanced feedback quickly. Other times you need fresh eyes who haven't been trained by dozens of prior studies. Be intentional about what you need, what you're getting, and whether it matters for your specific goals.
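For step 1, even a platform export and a few lines of code go a long way. A minimal sketch, where the 90-day window and 3-study cap are illustrative defaults to set per study type (see step 2):

```python
from datetime import date, timedelta

def participation_check(completions, window_days=90, cap=3):
    """Summarize how 'professional' a participant's recent history looks."""
    cutoff = date.today() - timedelta(days=window_days)
    recent = [d for d in completions if d >= cutoff]
    return {
        "recent_count": len(recent),
        "over_cap": len(recent) > cap,         # candidate for a cooldown period
        "first_timer": len(completions) == 0,  # fresh eyes
    }

# participant_id -> completion dates, from your own records or a platform export
history = {
    "p-101": [date(2025, 1, 8), date(2025, 2, 3), date(2025, 2, 17), date(2025, 3, 2)],
    "p-102": [],
}

for pid, completions in history.items():
    print(pid, participation_check(completions))
```

The output doubles as the documentation step 3 asks for (e.g., "6 were first-time participants").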

Challenge 3: Economic Volatility Shapes Participation

Tech layoffs, persistent inflation, economic uncertainty, and market volatility didn't just affect research budgets—they fundamentally changed who participates in research.

The tech industry layoff wave brought more highly skilled participants into the market—laid-off product managers, designers, and engineers who were suddenly available during work hours and willing to participate for income they previously didn't need. They brought legitimate expertise and professional experience. But their motivation had shifted fundamentally from viewing research as an interesting side activity to treating it as necessary income.

This created a tension in quality that's hard to resolve: more qualified participants are a good thing, but professional participant patterns have grown (not so good), and economic desperation is on a similar trajectory (unknown impact).

Add inflation and incentive erosion into the equation, and the $75 that felt generous in 2020 covers roughly 20% less in 2025.
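The arithmetic is worth making explicit. A quick back-of-envelope, assuming roughly 25% cumulative inflation between 2020 and 2025 (an illustrative figure; use actual CPI data for your market):

```python
incentive_2020 = 75.00
cumulative_inflation = 0.25  # illustrative assumption, not an official figure

# What the old incentive buys now, and what parity would cost
real_value_today = incentive_2020 / (1 + cumulative_inflation)  # ~$60
parity_incentive = incentive_2020 * (1 + cumulative_inflation)  # ~$94

print(f"${incentive_2020:.0f} from 2020 buys about ${real_value_today:.0f} today")
print(f"Matching 2020 purchasing power means offering about ${parity_incentive:.0f}")
```

That is the roughly 20% loss in purchasing power mentioned above, and it's why a flat incentive quietly becomes both a recruitment problem and a quality problem.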

Does this exchange sound familiar?

  • Researchers: "We can't increase budgets."
  • Participants: "This doesn't cover my time anymore."

The result? Quality concerns and increased challenges in recruitment.

Economic stress creates what some researchers call "desperate participants"—people who will say anything to qualify, embellish their experience, or participate even though they do not fit the criteria. This has raised questions such as:

  • Are responses authentic or optimized for qualification? 
  • Is their behavior genuine, or a performance shaped by financial need?

What this means for recruitment

The impact cuts both ways. On the quality improvement front:

  • More skilled people available and motivated to do well
  • Higher stakes mean participants care about ratings
  • More professional behavior in sessions

On the quality concern end of the spectrum:

  • Over-participation risk increases
  • Economic motivation trumps authentic experience
  • Income takes precedence over insight

While this pattern doesn’t necessarily fit fraud in the traditional sense, addressing these challenges often requires similar tactics: stricter verification to ensure you’re getting the highest-quality participants for your study.

Challenge 4: Privacy Regulation’s Impact on Recruitment

Privacy regulation is no longer a “future concern,” and with it come operational complexities that shape recruitment decisions.

Beginning with GDPR in Europe and continuing in the U.S. with regulations such as the CCPA in California, researchers are now tasked with navigating different laws by location, determining jurisdiction across participant location versus company location versus data storage, and assessing varying enforcement risks by region and company size.

Its impact is felt most acutely across four categories: participation tracking across studies, demographic collection, data retention, and third-party sharing.

How regulations impact recruiting platforms

A simple checkbox approach will no longer do—research participant consent has taken on new (and complicated) forms. Recruitment platforms can help, but there are limitations.

Platforms typically handle:

  • Basic consent flows
  • Standard data processing agreements
  • Common compliance requirements
  • Geographic restrictions

You're still responsible for:

  • Your specific research purposes
  • Your data retention policies
  • Your internal data usage practices
  • Your vendor relationships

In the coming years, it’s expected that a combination of new U.S. privacy laws, stricter EU enforcement, and new AI-specific regulations will continue to impact research as we know it. 

Recruiting will get more complex in this regard, so understanding privacy laws—and their implications for your research—is fundamental to your strategy.

Challenge 5: B2B Recruitment Improved, Albeit At A Premium

In the past, recruiting B2B participants meant slow, manual outreach: LinkedIn messages, email chains, referrals through professional networks.

Today, B2B recruitment operates more like expert network infrastructure. Specialized platforms emerged to solve B2B recruitment specifically—not as a side feature of general panels, but as their core business. While this shift may have improved access to a coveted pool of participants, it also introduced trade-offs.

What expert networks enable

1. Speed: Sourcing, vetting, and scheduling C-suite executives used to take weeks (if not more). Now you can source B2B participants in hours or days. The advent of expert networks, which maintain pre-verified pools of professionals across industries, functions, and seniorities, had a lot to do with this advancement.

2. Professional verification: Rigorous verification is now normalized—LinkedIn profile matching, work email confirmation, and industry-specific vetting are just a few of the avenues to ensuring participant quality today. This also raises the bar: when one platform verifies thoroughly, researchers expect all platforms to do the same.

3. The higher incentive norm: Expert networks established market rates that reflect professional opportunity costs. A 60-minute interview with a senior title can run $200-400 or more, not $75. This reset expectations industry-wide.

4. Specialized matching: Platforms have become more sophisticated in matching on the criteria that matter for B2B research, such as specific technologies used, company size managed, decision-making authority, years in role, or industry vertical.

How B2B recruitment evolved

1. From "hard to reach" to "available but expensive": The good news is that you can find senior professionals. The bad news is that you may not have the budget to afford them. This created a two-tier system: well-funded research uses expert networks and gets verified professionals quickly, while budget-constrained research still relies on manual outreach and takes longer.

2. Verification matured: In consumer research, participants self-report demographics and you verify selectively. In modern B2B research, credential verification is expected as a baseline. Not all titles are created equal in B2B; decision-making authority, tools used, responsibilities, and other criteria all matter for high-quality research.

How LinkedIn verification has become standard:

What platforms check: Profile exists and is complete, job title matches screening criteria, company matches claimed employer, connections and endorsements appear authentic, activity suggests real professional presence. 

User Interviews, for one, has partnered with LinkedIn’s verification program. Read more about it here.

Why it works: Most professionals maintain accurate LinkedIn profiles for career reasons. Fabricating a convincing professional identity there requires sustained effort.

3. A shift from “any professional” to “the right professional”: Early B2B research often settled for a warm body. Need enterprise IT decision-makers? Talk to anyone in enterprise IT. Expert networks have helped close the gap between who you can reach and who you actually need. Finding product managers who've launched B2B SaaS products in the past 18 months is now feasible.

The inclusion tradeoff

For all of the positives that have accompanied more rigorous B2B verification, there's an inclusion trade-off that researchers need to be aware of—and mitigate. Some high-value groups risk being excluded from B2B research:

  • Contractors and freelancers: May lack corporate email, LinkedIn shows multiple concurrent roles, don't fit "traditional employee" verification
  • Small company employees: Less formal email systems, smaller LinkedIn networks, fewer verifiable signals
  • International professionals: Different professional platform norms, work email practices vary by region, LinkedIn adoption varies globally
  • Career transitioners: Recently changed roles/companies, credentials don't reflect current work, learning new fields
  • Privacy-conscious professionals: Minimal LinkedIn presence by choice, reluctant to share work email, legitimate privacy concerns

How much verification is enough without excluding valuable perspectives? It ultimately depends on study type, stakes, and your research plan.

When built-in expert networks make sense

  • High-stakes decision research: Need to trust participant credentials completely, budget allows premium pricing
  • Specialized technical roles: Need specific expertise (cloud architects using Kubernetes, radiologists using AI diagnostic tools)
  • Senior decision-makers: Need people with budget authority, strategic responsibility, organizational influence
  • Time-sensitive research: Need participants fast and can pay for speed

Challenges expert networks don't solve

  • Budget constraints: Expert networks are expensive—if you’re looking at upwards of $200-400 per participant (or more), recruitment confidence is crucial
  • Non-traditional professionals: If your target includes freelancers, contractors, small company employees, verification requirements may exclude them
  • Ongoing relationships: Expert networks optimize for one-time consultations—building longitudinal panels or advisory councils requires different infrastructure

B2B recruitment had a glow-up—what was once highly manual became tool-enabled, on-demand infrastructure. Access improved dramatically, verification standards rose, speed increased, and specialization became more feasible than ever before.

The question is no longer whether B2B recruitment is easier or harder; it's about weighing the trade-offs of expert networks against alternative recruitment methods.

What Does All of This Mean For Me?

That’s a fair question! Below, we summarize the questions you should stop and start asking as you consider today’s participant recruitment developments and challenges—and how they should influence the way you modernize your research practice.

1. Embrace the Role of AI in Research Design

The rise of AI in user research means it’s time to reassess its value for study moderation, transcription, analysis, synthesis, and many other elements of your research.

Stop asking: "How do I prevent AI use?"
Start asking: "How do I design for a world where both researchers and participants use AI?"

In practical terms, this could include:

  • Focusing screeners on specificity and recent examples
  • Verifying depth in live sessions
  • Using AI yourself for pattern detection
  • Being mindful of AI as a participant accessibility tool

Want more tactical recommendations and templates? Download our handy PDF.

2. Approach Fraud as a Continuous, Not Episodic Occurrence

Fraud is a new reality researchers face in the participant recruitment process. It’s critical to take a pragmatic approach to mitigating it.

Stop asking: "If I just had better verification, wouldn’t fraud go away?"
Start asking: "How do I manage acceptable fraud risk?"

To start, you can leverage participant platforms and improve your own practices, including:

  • Layering verification by study
  • Documenting participant observations across studies to identify red flags (a minimal sketch follows this list)
  • Continuous monitoring and panel refresh considerations
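For the documentation point above, even a shared CSV beats memory. Here's a minimal sketch of querying such a log for repeat red flags; the filename and column names are an assumed convention, not a platform export format:

```python
import csv
from collections import defaultdict

# Assumed columns: participant_id, study_id, date, flag ("yes"/"no"), note
def repeat_offenders(path="participant_quality_log.csv", threshold=2):
    """Return participants flagged in at least `threshold` sessions."""
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["flag"].strip().lower() == "yes":
                counts[row["participant_id"]] += 1
    return {pid: n for pid, n in counts.items() if n >= threshold}

# e.g. {'p-204': 3} -> exclude, or re-verify manually before the next study
```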

3. With Great Platform Power Comes Great Responsibility

Researchers need to determine which platforms are best poised to support a modern recruitment environment.

Stop asking: "Which platform will work exactly the way I want?"
Start asking: "Which platforms offer the flexibility to recruit on my own terms?"

Criteria to look for could include:

  • Participant pool health (e.g., size, diversity, show rates)
  • Sophisticated, layered verification to help mitigate fraud
  • Support with regulatory needs

For more support on selecting the best participant recruitment platform for your needs, check out our Participant Recruitment Platform Buyer’s Guide.

4. Prepare for Synthetic Participants (Even if You Don’t Use Them Right Now)

The use of synthetic participants is going mainstream, which means asking the right questions today will help you put the best guardrails in place for tomorrow.

Stop asking: "What is the value of synthetic participants?"
Start asking: "How can synthetic participants supplement my human insights?”

Start preparing for a conversation around synthetic users by:

  • Understanding the current consensus on acceptable use at your company
  • Documenting potential synthetic participant use cases
  • Considering future stakeholder arguments for or against synthetic users, especially as they pertain to budget and scale

5. Build Quality You Can Defend

Putting the right systems and infrastructure in place for your organization is the most weatherproof participant recruitment tool you can have.

Stop asking: "Isn’t quality all about who I recruited?"
Start asking: "Isn’t quality about whether I can clearly explain the system that selected them?"

To ensure quality, you can:

  • Review and update your recruitment strategies as needed
  • Be transparent about any challenges or trade-offs you face
  • Document your verification efforts
  • Acknowledge limitations openly

For more tactical tips and templates on how to approach modern, high-quality participant recruitment, download our companion PDF guide.


What's next?

The good news is that the fundamentals of high quality recruitment still matter: Find the right, qualified people, treat and incentivize them fairly, and frequently revisit and document your processes as conditions inevitably change.

There’s no bad news, but there are changes to the context you’re now operating in. Fraudsters got smarter. Synthetic participants are a thing. Economic pressures shape behavior. Privacy regulations add constraints.

In light of these changes to the recruitment landscape, we do not recommend business as usual (i.e., ignoring these changes and expecting old approaches to work). But you also don't need to panic or rebuild everything from the ground up, either.

Start by identifying what has changed (hopefully we’ve given you a head start in this regard) and prepare for what will. Then adapt your practice strategically.

While this guide cannot claim to solve every recruitment challenge that arises, it can help you make better decisions in a landscape where trust is harder to build, easier to lose, and more important than ever.

For comprehensive foundational guidance, see our Field Guide chapter on recruiting participants.

Sources Cited & Appendix

Sources Cited

1. User Interviews. (2025). 2025 State of User Research Report.
https://www.userinterviews.com/state-of-user-research-report

2. User Interviews. (2025). 2025 State of Research Strategy Report.
https://www.userinterviews.com/state-of-research-strategy

3. User Interviews. (2025). 2025 State of Research Operations Report.
https://www.userinterviews.com/state-of-research-operations

4. User Interviews. (2024). AI in UX Research Report.
https://www.userinterviews.com/ai-in-ux-research-report

5. User Interviews. (2025). 2025 Research Budget Report.
https://www.userinterviews.com/research-budget-report

6. User Interviews. (n.d.). How Does User Interviews Deter Fraud on the Platform? User Interviews Support Center.
https://www.userinterviews.com/support/how-does-user-interviews-deter-fraud-on-the-platform

7. User Interviews. (n.d.). Recognizing and Reporting Fraud. User Interviews Academy.
https://academy.userinterviews.com/lesson/recognizing-reporting-fraud

8. User Interviews. (n.d.). User Research Fraud Prevention. User Interviews Blog.
https://www.userinterviews.com/blog/user-research-fraud-prevention

9. Kantar. (n.d.). The State of Online Research Panels. Cited in: Quirk's Media. (2024). An Open Letter to the Research Industry.
https://www.quirks.com/articles/an-open-letter-to-the-research-industry

10. The Voice of User. (n.d.). Synthetic Personas & Stochastic Theater: Why LLM Outputs Aren't Qualitative Insight.
https://www.thevoiceofuser.com/synthetic-personas-stochastic-theater-why-llm-outputs-arent-qualitative-insight

11. IDEO. (n.d.). The Case Against AI-Generated Users. IDEO Journal.
https://www.ideo.com/journal/the-case-against-ai-generated-users

Appendix: Terms

  • AI recruitment: AI influences recruitment indirectly (participants use AI tools) rather than through explicit "AI-powered recruiting"
  • Professional participant: Person who completes many studies regularly; may be appropriate or problematic depending on study needs
  • Synthetic participant/user: AI-generated persona; currently acceptable only for internal ideation/exploration, not primary research
  • Layered verification: Using multiple signals (behavioral responses + credentials + manual review) rather than single check
  • Participation frequency: How often someone completes studies; important to track for quality assessment
  • Expert network: Specialized B2B recruitment platforms with pre-verified professional pools
  • LinkedIn verification: Checking professional credentials through LinkedIn profile matching

Download the full guide

Download our participant recruitment toolkit, which includes our Participant Recruitment Tactical Guide, 2025 Panel Report, templates, and more.
