[Hero image: a lineup of unknown research participants, many of whom may be fraudulent]

The Next Decade of Fraud in User Research: A Guide to Staying Ahead

Synthetic users, real consequences

I’ll never forget the moment I realized just how real, and how sophisticated, fraud in user research had become. It was 2021, and I was teaching a cohort of Ask Like A Pro students, each running their own moderated studies. Nine of them were actively recruiting hard-to-reach participants. Three of them, without knowing it, were being targeted by a coordinated fraud ring operating halfway around the world. Let’s just say their screeners got more traffic than a Taylor Swift ticket drop.

The signs were subtle at first: identical answers showing up in screeners, suspicious booking patterns, participants claiming to be U.S.-based but calling in from unfamiliar time zones. Eventually, we discovered that fake respondents in Kenya and Nigeria were flooding the screeners, testing responses over and over to learn the “happy path” that would qualify them for participation and, ultimately, incentives. They stole LinkedIn identities and crafted fake U.S. driver's licenses. They used proxy IPs to fake U.S. locations. And they parroted industry jargon convincingly enough to make it through.

What started as a one-off incident quickly became a pattern, one that was hard to ignore and even harder to prevent. Fraud in research isn’t new, but what we’re seeing now is DIFFERENT. It's faster. Smarter. And oftentimes, indistinguishable from legitimate participation. Now, with easily accessible and free AI tools, better global coordination, and accessible incentive platforms, the risk is systemic.

Over the past nine months, I’ve been leading enterprise AI research. I’m focused on safety guardrails, ethical use, how models respond across different languages and cultures, and more. That work has sharpened my awareness of how easily AI-generated outputs can sound convincing while missing local nuance, industry domain savvy, and authenticity. Before that, I led research at a global payments company, focusing on a machine learning-powered fraud detection platform that used velocity tracking, anomaly detection, and behavioral heuristics to identify and prevent payment fraud. In both cases, the goal was the same: stay one step ahead of bad actors. The parallels between payment fraud and participant fraud in user research are becoming harder, and riskier, to ignore.

💡 The State of Research Operations report found that fraud is an increasingly urgent priority for 2025, with ReOps teams citing 'junk data' from AI bots as a primary challenge.

The fraud prevention methods we implement today will either become the bedrock of long-term research integrity or future vulnerabilities waiting to be exploited. The key is designing adaptive systems that evolve alongside AI, not static defenses that expire with the next generation of tools.

This article is not meant to spark panic. My goal is to inspire preparation.

So, how will fraud in UXR evolve in the next 10 years? And what can we do to get ready?

🛡️ Fraud in research is evolving—so should your defenses. Learn how in our 4-lesson Academy course on Preventing & Recognizing Fraud.

The evolution of research participant fraud: synthetic users and AI-generated participants

The most obvious and immediate shift will be an increase in AI-generated respondents. We’re already seeing participants use ChatGPT to answer surveys, diary prompts, and even live interview questions. In a recent study, one participant used AI to generate all of their responses and later “translated” them into personal anecdotes. Their intention wasn’t malicious. They were trying to save time. But the result? A research artifact shaped more by machine than human experience.

Over time, this behavior will scale. LLMs can be prompted to pass screeners, simulate user frustration, and generate realistic feedback. These synthetic identities may even cross studies, building a trail of activity that gives them credibility. Think synthetic “personas” with long-standing behavioral histories, making them harder to spot, not easier.

User Interviews flags copy-and-pasted responses to researchers and removes participants who exhibit this behavior frequently. Learn more.

We’ll also see teams internally using synthetic users as “test personas” to model behavior or ideate product ideas. I do this today to get a better understanding of the people I’m interviewing (e.g., an enterprise developer responsible for setting safety modes in an org that does X and is located in Y). These tools can be helpful when they’re clearly flagged as simulated and used as inputs to internal ideation workflows, not treated as sources of truth. The risk lies in teams treating them as stand-ins for real human feedback, creating false confidence that misguides product and other business decisions.

How AI is changing user research fraud: inauthentic human responses, incentive laundering, and identity fraud

Human deception is also evolving. It’s increasingly common for coordinated groups, sometimes fraud farms, to target paid research opportunities, using VPNs, spoofed IDs, and community scripts to pass as legitimate participants. These bad actors often:

  • Share eligibility criteria in closed forums (It must be like a boot camp for fake users, minus the team spirit!)
  • Coach each other on what to say during screeners
  • Use AI to polish written responses or mask errors

Even well-meaning respondents may default to AI responses when tired or uncertain, especially in screeners and asynchronous studies. This creates a gray zone: the answers “look good,” but lack real-world grounding. As models improve, it will become harder to distinguish coached or AI-assisted responses from genuine human reflection. 

This problem is magnified in global research, where incentive disparities are significant. In some regions, $50 for a 30-minute interview can equal a full day’s wage, or more, amplifying the incentive to game the system.

I once watched a client’s study calendar fill up overnight—20 booked sessions with participants who supposedly met a very niche set of criteria. At first, it felt like a recruiting miracle. A UX fairy godmother! Then we noticed some red flags: duplicate IPs, slightly altered email addresses, and copy-pasted screener responses. One person had even booked four different times using slight variations of the same name.

We ended up canceling every session. The team had to re-screen, rebook, and rebuild trust in the process. Fraud doesn’t just waste incentives, it erodes momentum, burns hours, and makes everyone second-guess the data. That experience stuck with me.
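If you want to automate this kind of triage, here’s a minimal sketch of how those red flags might be checked across an exported batch of screener submissions. It’s illustrative only: the CSV column names (email, ip_address, response) are assumptions rather than any particular tool’s export format, and the similarity threshold is a starting point to tune.

```python
# Minimal sketch: flag shared IPs, near-duplicate emails, and
# copy-pasted answers in an exported screener CSV. Column names are
# hypothetical; adapt them to whatever your tool actually exports.
import csv
from collections import defaultdict
from difflib import SequenceMatcher


def normalize_email(email: str) -> str:
    """Collapse common tricks like dots and +tags in the local part."""
    local, _, domain = email.lower().partition("@")
    local = local.split("+")[0].replace(".", "")
    return f"{local}@{domain}"


def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Rough text similarity for spotting copy-pasted screener answers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def flag_submissions(path: str) -> list[str]:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    flags = []
    by_ip, by_email = defaultdict(list), defaultdict(list)
    for i, row in enumerate(rows):
        by_ip[row["ip_address"]].append(i)
        by_email[normalize_email(row["email"])].append(i)

    for ip, idxs in by_ip.items():
        if len(idxs) > 1:
            flags.append(f"Shared IP {ip}: rows {idxs}")
    for email, idxs in by_email.items():
        if len(idxs) > 1:
            flags.append(f"Near-duplicate email {email}: rows {idxs}")

    # Pairwise comparison of free-text answers; fine for small batches.
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if similar(rows[i]["response"], rows[j]["response"]):
                flags.append(f"Very similar responses: rows {i} and {j}")
    return flags
```

None of these flags is proof of fraud on its own. In the story above, it was the combination of signals, not any single one, that told us something was wrong.
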

A User Interviews project coordinator guides every Recruit project, working directly with researchers and investigating and verifying accounts to source best-fit participants. Learn more.

As fraud grows more global, researchers will need regionally aware fraud detection methods that account for local infrastructure, device norms, and incentive economics.

Over the next decade, we may see research incentives abused in more sophisticated ways. These include:

  • Identity laundering, where fake or stolen identities are used to collect payments. I dread the day I receive a FaceTime call from my tween daughter that looks like her, sounds like her, uses our safe word, and then tells me she needs money. How will any parent know if their kiddo is really in trouble? Simulated video and audio is scary stuff!
  • Incentive flipping, where organized groups sell or trade research slots for real money. Kinda like eBay for research participation? 
  • Crypto-based reward systems, which may further obscure the trail of who, or what, is actually participating. ‘Nough said. 

What’s particularly concerning to me is the potential for aggregate-level fraud where the data looks clean in isolation but is heavily polluted in volume. This type of data contamination doesn’t just skew a finding. It reshapes roadmaps and misleads teams long after the study ends.

Preventing coordinated fraud in participant recruiting: platform vulnerabilities and recruiting vendor risk

Most fraud today happens before the main data collection even begins, at the participant recruitment stage. And this will continue to be the most vulnerable point in the process.

Recruiting vendors and platforms vary widely in how they vet participants. Some rely on device fingerprinting, IP detection, or screener logic. But those tactics are already outdated. AI can spoof all of these signals AND help real people pass as others.

The challenge is that vendors may not have incentives to report fraud, especially when volume and speed are prioritized. As researchers, we need to start evaluating vendors not just by cost and fill rate, but by fraud detection posture. This includes asking:

  • How do you detect repeat participants across studies?
  • Do you use behavioral signals (not just demographic ones)?
  • Can we audit or verify participant identity?

The strongest vendors will begin adopting multi-layered verification protocols, including government ID, phone/email validation, advanced biometrics, and behavioral consistency checks, to stay ahead.

User Interviews automates checks for digital identity overlap with known fraudulent accounts, requires re-verification of profile and contact details at certain levels, and flags risk scores above a threshold based on ongoing participant activity and signup factors. Learn more

Let me be clear. Vendors are not the enemy! They’re our partners. But if they don’t evolve, they may become a liability. If we, researchers and ReOps, don't evolve, we’ll also become a liability. Garbage in. Garbage out. 

What user researchers can learn from fintech

In fintech, we didn’t assume we could eliminate fraud; we aimed to mitigate it: make it expensive, detectable, and containable. The same principles apply here.

Fintech uses signal fusion, combining data from device behavior, transaction velocity, known fraud indicators, and behavioral anomalies. These principles could inspire checks like the following (a rough code sketch follows the list):

  • Participation velocity checks: flagging users who complete too many studies in a short period
  • Behavioral baselining: looking for patterns that deviate from normal engagement
  • Multi-signal validation: not just a screener score, but corroborated indicators (e.g., LinkedIn match, calendar metadata, or payment history)

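To make that concrete, here’s a rough sketch of what fused signals could look like in practice. The specific signals, weights, and thresholds are illustrative assumptions, not a production model, and the Participant fields are placeholders for whatever your recruiting stack actually tracks.

```python
# Rough sketch of fintech-style signal fusion applied to participant
# screening: several weak signals combined into one risk score.
# Weights and thresholds below are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Participant:
    id: str
    session_timestamps: list[datetime]  # completed studies
    ip_shared_with_others: bool         # same IP seen on other accounts
    screener_similarity: float          # 0-1 overlap with other responses
    identity_verified: bool             # e.g., ID check or LinkedIn match


def risk_score(p: Participant, window_days: int = 30) -> float:
    """Fuse several signals into a single 0-1 risk score."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    recent_sessions = sum(t >= cutoff for t in p.session_timestamps)

    score = 0.0
    if recent_sessions > 3:          # participation velocity check
        score += 0.35
    if p.ip_shared_with_others:      # device / network overlap
        score += 0.25
    if p.screener_similarity > 0.9:  # behavioral baseline: coached answers
        score += 0.25
    if not p.identity_verified:      # no corroborating identity signal
        score += 0.15
    return min(score, 1.0)


# Usage: anyone above a chosen threshold (say 0.5) gets a manual review
# before they're booked, rather than an automatic rejection.
```

The point isn’t the exact numbers. It’s that no single signal decides the outcome, which makes the system harder to game and easier to tune as fraud patterns shift.
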
Think of this as building a “research integrity infrastructure” that strengthens over time. Foundational systems that can scale with AI’s evolution, not collapse under it.

Building fraud-resistant research operations: practical recommendations

Here’s what we, the research community and research operations professionals, can do today to prepare for tomorrow:

  • Audit your vendors: Ask tough questions about how they detect and prevent fraud. (Let me save you some time. These are the proactive tactics User Interviews is taking.)
  • Use layered validation: Don’t rely solely on screener responses; triangulate with metadata when possible (a sketch follows this list)
  • Build internal trust: Explain fraud risks and safeguards to stakeholders, so insights are not blindly trusted or dismissed
  • Expect AI use: Recognize that not all AI use is malicious, but track patterns and anomalies
  • Flag and share fraud: If you detect fraudulent behavior, report it back to vendors and teams to improve collective awareness

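As one example of triangulating with metadata, here’s a tiny sketch that checks whether a claimed country is plausible given the UTC offset a participant’s browser reports, echoing the time-zone mismatches from the story earlier. The field names and offset table are rough assumptions for illustration, not a complete lookup.

```python
# Tiny sketch of metadata triangulation: does the browser's UTC offset
# plausibly match the claimed country? Offset ranges are rough and
# illustrative (DST and territories are glossed over).
PLAUSIBLE_UTC_OFFSETS = {
    "US": range(-10, -3),  # Hawaii through Eastern
    "UK": range(0, 2),
    "DE": range(1, 3),
}


def location_mismatch(claimed_country: str, browser_utc_offset: int) -> bool:
    """True if the reported offset is implausible for the claimed country."""
    plausible = PLAUSIBLE_UTC_OFFSETS.get(claimed_country)
    if plausible is None:
        return False  # no reference data; don't auto-flag
    return browser_utc_offset not in plausible


# A participant claiming to be U.S.-based whose browser reports UTC+3
# gets flagged for a closer look, not auto-rejected.
print(location_mismatch("US", 3))  # True
```
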
Yes, these steps take effort, but so does explaining to your stakeholders why 80% of your ‘users’ live at the same IP address!

Most importantly: Start thinking in systems, not just tactics. The verification methods we use today should be flexible enough to evolve, not expire.

What comes next: retrospective diaries and always-on data

Looking ahead, we may see research shift away from self-reported methods altogether. I bet we’ll soon use AI to analyze a participant’s past prompts, digital behaviors, or ambient data as a form of retrospective diary study. I’ve been thinking about this for ABOUT A YEAR.

Imagine screening participants based on their actual digital patterns, what they search, save, or ask. This could verify eligibility, reduce deception, and provide a window into how people actually think and feel, not just what they recall in the moment. Heck, we could conduct entire studies on past prompts! 

But, sure, this also raises new privacy and consent concerns, especially as wearables become more advanced. We’re not far from a world where always-on devices record audio and video by default. In that world, any offhand comment or behavior could be captured, analyzed, and used, knowingly or not. Tricky waters here, for sure. 

Future research methodologies will need to embed ethical guidelines that evolve alongside biometric and behavioral data collection, from typing cadence to voice patterns, as these become part of verification systems.

These data sources could unlock incredible insight, but they also carry the risk of surveillance, misinterpretation, and misuse. If taken out of context or used maliciously, this data could cause way more harm than good.

Final thought: this is a research opportunity

Fraud will never disappear, but how we respond will shape the next generation of research practices. The good news? We’re not powerless. We can redesign systems to be more resilient, define new norms of verification and trust, and build communities that reward transparency.

This is not just a threat. It’s an opportunity to lead.

And hey, in a world full of fake personas, being the real deal has never been more valuable.

Michele Ronsen
Researcher, educator, and founder of Curiosity Tank

Michele Ronsen is a researcher, educator, and founder of Curiosity Tank. Over the past year, she has led enterprise AI research focused on system transparency, model evolution, and culturally responsive design. She also writes Fuel Your Curiosity, a free newsletter for user researchers and curious minds alike.
