
The Early Adopter’s Guide to AI Moderation in UX Research

A one-stop resource for adding this powerful research method to your toolkit, including best practices and use cases for starting.
By Ben W

UX professionals use AI for a variety of research activities, from project ideation and question generation to top-line analysis and creating unique deliverables. AI can be a UXR’s “thought partner,” “sounding board,” “research assistant,” or all three.

AI is also helping research, product, and design professionals with fieldwork—specifically moderated studies. Improvements in language models and best-in-class platforms are creating a unique opportunity for moderated user research studies: making them faster and more accessible, showcasing the value of mixed-methods user research for stakeholders along the way.

We’ve taken extra care to ensure this guide has everything you need to get started with this research approach. It’s based on first-hand experience running AI-moderated studies and collaborating with best-in-class platforms.

We believe humans are required for human-centered research (no AI “participants” please), and that AI technology can step in and help today’s busy user researcher. (For example, we’re using AI to make recruiting real participants faster, easier, and more reliable.) AI isn’t going anywhere, so the UX professional who takes a curiosity-first approach is most likely to succeed in the ever-automated future.

What is AI moderation?

So, what even is AI moderation?

Aaron Cannon, co-founder of AI moderation platform Outset, defines it as “Using artificial intelligence to autonomously conduct a dynamic conversation with a participant.” 

In practice, this typically looks like either an unmoderated test, where the AI serves questions and the participant responds, or a chat-style messaging window with speaker roles, an input bar, and real-time message history. The AI moderator shares pre-programmed questions and prompts (including media such as concepts or closed-ended survey questions) that the participant responds to.

Depending on how it’s been set up, the AI will either move through a question guide sequentially or decide to follow a participant down a particular conversational thread—this is the LLM’s “dynamism” at work.
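To make that dynamism concrete, here’s a minimal sketch of what an AI moderator’s core loop might look like. It’s illustrative only: the guide format, the max_followups knob, and the llm_followup() helper are all hypothetical, not any specific platform’s API.

```python
# A minimal sketch of a dynamic AI moderator loop. Illustrative only:
# the guide format and llm_followup() helper are hypothetical, not any
# specific platform's API.

QUESTION_GUIDE = [
    {"question": "Walk me through the last time you changed your account settings.",
     "max_followups": 2},
    {"question": "What, if anything, felt confusing on that page?",
     "max_followups": 1},
]

def llm_followup(question: str, answer: str) -> str | None:
    """Decide whether to probe deeper.

    In a real tool, this would be a language-model call with instructions
    about tone, bias, and how rigidly to stick to the guide. Returning
    None means "move on to the next scripted question."
    """
    return None  # stubbed out for this sketch

def run_interview(ask):
    """`ask(prompt)` sends a prompt to the participant and returns their reply."""
    for item in QUESTION_GUIDE:
        answer = ask(item["question"])
        # The "dynamism": after each response, the model chooses between a
        # follow-up and the next scripted question, within limits you set.
        for _ in range(item["max_followups"]):
            followup = llm_followup(item["question"], answer)
            if followup is None:
                break
            answer = ask(followup)
```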

Read: Using AI in daily UX workflows

What AI moderation is not

  • A replacement for human researchers: The AI moderator will dutifully run through a conversation or interview guide, but it won’t know your company, stakeholders, and product like you do. That means AI can’t do it all. You’ll still need to contextualize, clarify, and combine the results into something meaningful for your stakeholders.
  • A chance to ask anything: No matter who is posing the questions—human or AI—they should still be designed carefully. This means reviewing them for bias and clarity so that when the AI is fielding, participants aren’t having a poor (or harmful) experience. The speed and scale of the method requires extra ethical consideration.
  • A foolproof path to “the truth”: Like any research method, even one powered by cutting-edge technology, it won’t get the whole story. Cross-checking results with things like product analytics, support feedback, and other research is critical to ensuring what customers share aligns with other experience signals.
  • An excuse to stay home: Although many product experiences have digital aspects, most are omnichannel, meaning there is some in-person touchpoint. Don’t let this method keep you from getting in the field occasionally, whether that’s observational site visits, intercepts, or other tried-and-true in-person research methodologies.

Steps in an AI moderation study

Although there are platform-specific differences in AI moderation (see below for tools), most projects share six core steps, from design to share-out.


Step 1: Add project details

Fill in details like project goals, deadlines, and any information participants will need to best respond to the AI moderator’s questions and prompts (e.g., “Make sure you have your account settings page open before starting the interview.”).


Step 2: Program the interview script

Add the questions and prompts you want the AI to ask participants. Some platforms (more on those shortly) offer multiple interview scripts within the same study. Others offer AI feedback on the questions you upload, with the option to select specific follow-ups.
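To picture what a programmed script might contain, here’s a hypothetical sketch of a study with two interview tracks and mixed question types. The field names are invented for illustration; each platform has its own schema and setup UI.

```python
# A hypothetical interview-script structure with two tracks and mixed
# question types. Field names are invented, not any platform's schema.

study = {
    "name": "Account settings usability",
    "tracks": [
        {
            "name": "Current customers",
            "questions": [
                {"text": "How often do you visit your settings page?",
                 "type": "closed",
                 "options": ["Weekly", "Monthly", "Rarely"]},
                {"text": "Tell me about the last change you made there.",
                 "type": "open",
                 "followups": "moderate"},  # how freely the AI may probe
            ],
        },
        {
            "name": "Churned customers",
            "questions": [
                {"text": "What led you to stop using the product?",
                 "type": "open",
                 "followups": "deep"},
            ],
        },
    ],
}
```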


Step 3: Organize your recruitment

AI moderation methods work best with human participants. As we’ll detail later, the benefits of this approach are realized with actual customers, not approximations or composites. Many AI moderation platforms include some screening capabilities; others directly integrate with best-in-class participant recruitment platforms. Make sure to communicate participation expectations and information about consent and incentives (if applicable).


Step 4: Field the interviews

As participants sign up for and complete the interviews, you’ll begin seeing information populate in the tool’s results section. Some tools allow in-flight adjustments to the script, so it’s important to monitor the first few interviews for confusion or unforeseen problems. Because interviews progress automatically, your time might be split between monitoring response quality and processing incentives for participants who have finished (assuming this workflow isn’t automated).


Step 5: “Analysynthesize” the data

AI moderation tools offer automatic transcription and analysis, producing high-level themes with supporting interview sections. Because some of the pre-analysis work is handled by the AI, your time might shift between refining the initial analysis and starting to synthesize: AKA “analysynthesis.” Depending on the tool used and your research needs, this step might include full interview exports, additional tagging, or downloading data visualizations.


Step 6: Share and socialize findings

Depending on your stakeholders, sharing the results might occur entirely outside the AI moderation platform (via decks, reports, etc.). Some platforms offer built-in querying of the data, allowing each project to become a self-serve repository. Giving colleagues a view-only project link with instructions for querying the data might unlock more action.

Read our Product x Research Collaboration Report, based on AI interviews with Andrew Warr and 150 other Product Leaders.

Benefits of using AI moderation

AI moderation offers benefits for researchers, participants, and even stakeholders.

Conduct more interviews faster

Research interviews usually involve a tradeoff: deep, rich data comes at the expense of time and sample size. Researchers can struggle to conduct enough interviews to uncover reliable themes before stakeholder deadlines.

AI moderation changes that tradeoff. After programming an interview guide and recruiting participants, the fieldwork is (largely) set-and-forget. Participants choose when to start the interview (no need to coordinate calendars) and multiple interviews can take place simultaneously (the AI moderator can be everywhere at once).

The result is more interviews completed in less time. A researcher’s role during fieldwork might involve: 

  • QA’ing early responses
  • Answering participant questions
  • Coordinating incentives. 

Better still, field time might be used for “analysynthesis” planning (see Step 5 above).

More flexible fieldwork

Because of the effort required for rigorous interview studies, in-flight changes are usually minimal. Consistency in interview scripts helps build reliability and reduce bias. But early interview insights often suggest a different avenue of exploration. Traditionally, these ideas are relegated to a holding tank or parking lot, such as a document or sticky note.

With AI moderation platforms, projects can evolve and iterate with customer insights. This usually takes three forms: launching multiple interview scripts, launching new studies concurrently, and launching new studies iteratively. Let’s take a look at each. 

1. Launch multiple interview scripts

Here, a researcher might have a lot that they want to ask. Instead of selecting a subset of questions, they might break the larger discussion guide into multiple tracks. In this way, they can test iterations of questions, address stray stakeholder needs, and even explore avenues of future research…all in the same study.

2. Launch new studies concurrently

In this form, a researcher might start with a single interview script, then build and launch separate, subsequent studies as early insights are assessed. The first study proceeds normally, with one or more separate studies launched to capitalize on learnings.

The result is adaptive and evolving fieldwork based on real-time interview results.

3. Launch new studies iteratively

Finally, the speed and scale of AI moderation might lead a researcher to string together multiple interview studies, with insights from the first rolling into the second, and so on. Rolling or iterative research programs are typically reserved for methods like in-app testing or surveys, where participants can self-pace and instruments are quickly launched. With AI, this design can be applied to moderated interviews.

Non-researchers get involved

Conducting effective research interviews takes training and practice. When teams adopt democratization programs, non-researchers are typically trained on unmoderated methods like usability tests or surveys. With AI moderating an interview, however, more colleagues can begin using this powerful method.

Instead of coaching on interview moderation best practices, researchers and operations specialists can focus on effective script and discussion guide design, tackling concepts like leading questions, effective follow-ups, and being mindful of participant fatigue. AI moderation tools can encourage more roles within a company to launch interview studies, informing their work with real customer insights (which we know lead to better decisions).

Chart: humans vs. AI in user research, exploring combinations of just humans, just AI, and a mix.

Reduces some biases

Compared to humans, AI moderators don’t come with as many biases. They don’t have an agenda or a subject position that might interact (and interfere) with the way questions are received. The AI simply follows a pre-programmed guide, following up when and where it deems appropriate (often at points you, the researcher, have indicated).

Moreover, the AI won’t “ask” questions differently depending on the participant’s characteristics (seen or unseen). Each interview’s questions will be delivered in the same way. This is not to say that AI in general is bias-free, but AI moderation platforms use this technology in a very controlled, narrow way, limiting the potential for biased interviewing.

Encourages participant sharing

Participation in studies—of any kind—can alter what a person shares and how they act. This is known as the Hawthorne Effect. Reliability and validity are crucial for confident recommendations—researchers need to know their data isn’t biased. That’s why researchers often start interviews by trying to allay participants’ concerns about feeling judged, using statements like “We want your honest opinion” or “There are no ‘right’ answers.”

Interestingly, it seems that many people feel more comfortable disclosing (even deep, personal information) to AI chatbots. This is beneficial not just to researchers generally, but to those who work on sensitive topics, where participants might be more likely to feel stigmatized or “judged” for their responses. With AI moderation, participants might feel less judged and share more freely, helping researchers make more accurate recommendations.

Projects become mini-repositories

Time-strapped researchers often struggle to organize, centralize, and socialize their research projects. And despite the proliferation of research repository tools, lots of projects end up hidden away, “untouched” after the delivery or share-out of results.

Many AI moderation tools offer query-based interaction, whereby questions about a project are fielded by a chatbot within the project. This has the effect of extending the shelf life of research work. Instead of meticulously migrating individual projects to a central repository, each project serves as its own—researchers simply centralize project links and encourage stakeholders to “ask what you want” of the data.

Researchers know that providing timely and relevant insights to stakeholders can be a bottleneck to effective collaboration. AI moderation tools offer a self-directed option for curious stakeholders who might not have the time to “learn” how to use a repository, but would happily type “What do we know about X?” into a chat box.
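As a rough mental model, here’s a minimal sketch of how such a chat-box query might be answered from a project’s own transcripts. The keyword-overlap retrieval and the llm_answer() stub are stand-ins (real tools likely use embeddings and a hosted model), not any platform’s actual implementation.

```python
# A minimal sketch of the "project as mini-repository" idea: answer a
# stakeholder question using only the project's own transcripts.

def llm_answer(prompt: str) -> str:
    """Stub: a real tool would call a language model here."""
    return "(model response grounded in the excerpts above)"

def answer_from_project(question: str, transcripts: list[str]) -> str:
    # 1. Rank transcript chunks by crude keyword overlap with the question
    #    (a stand-in for the embedding-based retrieval real tools likely use).
    words = set(question.lower().split())
    ranked = sorted(
        transcripts,
        key=lambda t: len(words & set(t.lower().split())),
        reverse=True,
    )
    # 2. Ask the model to answer using only the top excerpts, so every
    #    claim traces back to what participants actually said.
    context = "\n---\n".join(ranked[:3])
    prompt = (
        "Answer using only these interview excerpts:\n"
        f"{context}\n\nStakeholder question: {question}"
    )
    return llm_answer(prompt)
```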

Stakeholders start to see the value of user research and seek out customer insights earlier in their workflows, giving researchers an opportunity to build credibility and demonstrate their broader value.

Watch: AI UXRs answer burning questions.

When should you use AI moderation?

As with most UX methods, it’s important to consider factors like your research goals, timeline, stakeholder group, and resources. There are, however, some characteristics that make AI moderation a more suitable methodological choice.

When you have a standardized or simple interview guide

AI moderators are good at following directions. The AI progresses through interview questions in the order you set, asking follow-up questions at the moments you’ve designated. (Note: many platforms let you set how rigidly or flexibly the AI sticks to a guide or script.)

Generative AI models are competent at recognizing the right moment to ask follow-ups, but they rely on your questions (derived from your research goals) to propel the conversation forward.

If a research opportunity is at the “unknown-unknowns” stage (as with new or developing products), it might be better to use human moderators familiar with the market, industry, and product. If a research opportunity has a clearly-defined set of outcomes and questions (in something like a usability test), then an AI moderator is a good choice.

When trying to interview hard-to-reach participants

AI moderation puts customers in charge of their participation. They can schedule, start, and stop the engagement at their leisure. This is undoubtedly flexible for a researcher, but the convenience for participants is a boon for those who are harder to reach (e.g., customers). That might be because of busy schedules, being on the other side of the world, or simply a belief that research participation is invasive.

Whatever the reason, AI moderation’s flexible, DIY scheduling and familiar chat-based format might help teams capture more feedback from high-value participants they don’t often hear from.

For qualitative researchers with quantitative stakeholders

Mixed-methods and qualitatively minded researchers tend to value the richness and depth of data that methods like diary studies, ethnography, and interviews offer. When presented to stakeholders on teams like product or engineering, however, these methods can be discounted because of their smaller sample sizes. Time is often working against us.

AI moderation methods can offer a happy medium, pairing the nuance and texture of open-ended responses with the quantity of methods like surveys. Instead of sharing recommendations based on 10–15 interviews, the number can swell into the hundreds. Paired with the automatic analysis features of many platforms, AI moderation offers researchers the opportunity to present richly saturated thematic analyses (codes, quotes, and video) at sample sizes more likely to convince data-minded decision makers.


Getting the most from AI moderation

The best practices for running an AI-moderated study are similar to those for human-moderated ones, but there are some nuances and differences to keep in mind. Here are a few, based on our experience and conversations with senior researchers.

Bring some question variety into the mix

AI moderation tools often offer more than just open-ended questions. Closed-ended questions and rich media prompts (photo or video) diversify the data collected from participants and can boost confidence in conclusions derived from research results. If stakeholders use a certain metric in their planning, for example, consider including it in your discussion guide.

Prepare for pivots during fieldwork

As interviews begin, you might find another line of questioning (or another research question entirely) emerging. This might mean a revision of your interview script or a new study. Planning for pivots is important, especially because insights from AI-moderated conversations come in fast! Pilot your conversation with a few participants to assess how confident you are in the direction of early results. Most platforms allow edits to interview guides, so plan accordingly.

Consider personalization of the AI

That’s right—you can in fact make robots more human, at least in terms of AI moderation. Some platforms offer personal touches for your AI moderator, such as custom art or names. It might seem trivial, but adding brand touches, such as a logo or company name, can help reinforce the credibility of the study (compared to an unnamed, “stock” chatbot).

A look at how we personalized our AI moderator in HEARD.

Start small and iterate as you learn

Depending on the fieldwork, you may collect several dozen interviews in a matter of hours. When the insights cup overfloweth, will you have time to make sense of 10 interview questions (and their follow-ups)? More importantly, will your participants have the stamina to answer them?

A good practice: once your interview guide is drafted, cut two questions. Some questions will produce deeper, more complex responses than expected, and this reduction helps combat fatigue (both for the participant and for you during synthesis).

Have an “analysynthesis” plan going in

Analysis plans aren’t unique to AI moderation, but the fact that your interviews arrive transcribed and partially analyzed calls for a different one. Start with your research rationale: why did you launch the study? What data will help you make a recommendation?

Most platforms offer interpretation and analysis at multiple levels:

  • Project-wide themes across all interviews, participants, and even tracks
  • Question-specific summaries across all participants (with supporting quotes)
  • Persona-specific findings, based on groupings set before or after the study

Some research questions may require digging into all three levels; others may require only one. A plan helps combat the analysis paralysis that can accompany that first look at a mountain of open-ended data (and themes). Ask, “What do I need to learn?” and “What data will help me identify it?” There will be plenty of time to play around and discover.

Embrace the unexpected (insight discovery)

A unique benefit of many AI moderation platforms is the option to query the data like you would a chatbot or assistant. Using this feature in tandem with your “analysynthesis” plan can produce novel insights, illuminating quotes, and possibly more evidence to reinforce your recommendations. Think of it as an interactive double-check on your own work.

Listen: Balancing AI’s hype with its UX impact.

Tools for AI moderation studies

HEARD uses AI to design, moderate, and synthesize user interviews to gather customer feedback and test prototypes and live websites. Learn more.

Outset lets AI conduct and synthesize video, audio, and usability sessions with hundreds of participants at once. Learn more.

(These two tools integrate with User Interviews!)
AskMore uses AI to conduct your user interviews so you get more feedback, faster, and in any language. Learn more.

Genway’s AI interviewer schedules and conducts interviews and analyzes findings so you can gain a deeper understanding of your customers and product. Learn more.

Strella offers AI-moderated interviews and instant synthesis, powering smarter & faster decisions. Learn more.

UserCall helps you make better decisions with 10x+ deeper insights via AI-agent-moderated 1:1 voice interviews. Learn more.

Listen Labs offers qualitative research conducted and analyzed by AI, replacing manual research methods with AI-moderated customer interviews. Learn more.

Userology lets you conduct in-depth qualitative research 10x faster with unbiased AI moderation. Learn more.

Download your UXR AI toolkit, which includes:

  • Comparisons of over 25 AI tools for research
  • An on-demand moderation case study with Intuit
  • Original reporting on AI in UX and collaboration