We Surveyed 1,093 Researchers About How They Use AI—Here’s What We Learned

How are researchers using AI? Insights from a survey about the tools, use cases, ethical concerns, and guardrails for AI in UX research.

According to the 2023 State of User Research Report, 20% of researchers are currently using artificial intelligence for research, with another 38% planning to join them in the future.

With the majority of researchers either using or planning to use AI at work (as of May 2023, when the State of User Research survey was conducted), we wanted to learn more about the role of this new technology in our industry.

So in August 2023, we sent out a short survey to learn:

  • Which AI tools are researchers currently using?
  • Which aspects of the research process are researchers automating?
  • What ethical concerns do researchers have about AI?
  • What guardrails do researchers have in place to ensure ethical use of AI?
  • What benefits and shortcomings are researchers experiencing with AI?

In this article, you’ll find an overview of our discoveries about the role of AI in UX research, based on 1,000+ survey responses from User Researchers, ReOps Specialists, and people who do research (PwDRs).

Methodology

The AI in UX Research Report was created by Product Education Manager (former Content Marketing Manager) Lizzy Burnam, with strategic guidance and support from Lead UX Researcher Morgan Mullen, Senior Data Scientist Hayley Johnson, Career Coach and UX Research Leader Roberta Dombrowski, and Head of Creative Content & Special Projects Katryna Balboni.

✍️🎨 This report was authored by Lizzy and brought to life by Illustrator Olivia Whitworth.

We used SurveyMonkey to create the survey (with 3 screener questions, 10 single- or multi-select questions, and 2 open-response questions, for a fair mix of quant and qual data) and distributed it to User Interviews audiences via our social media pages, our weekly newsletter, and a banner on our website. We also shared the survey in relevant UX research communities on LinkedIn, Facebook, and Slack.

Between August 14th and 18th of 2023, we collected 1,093 qualified responses from User Researchers, Research Ops Specialists, and people who do research (PwDRs) as part of their jobs. (An additional 1,091 people took our screener but did not qualify for our survey based on their responses.)

In most of the questions, we gave participants the option to write in an “other” response. Where applicable, we omitted redundant answers (e.g. when the response would be covered by a future question) or merged them with existing categories. For the two open-response questions, we eliminated exact duplicates and comments without meaning (e.g. “asdfghjk”).
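For the curious, here’s a minimal sketch of what that open-response cleanup could look like in code. This is an assumed workflow for illustration, not the scripts we actually used—and the “meaningless comment” call ultimately required human judgment, so the sketch only flags suspects rather than deleting them.

```python
# Illustrative sketch only (assumed workflow, not our actual cleaning scripts).
def dedupe(responses):
    """Remove exact duplicate comments, preserving first occurrence."""
    seen, unique = set(), []
    for text in (r.strip() for r in responses):
        if text and text not in seen:
            seen.add(text)
            unique.append(text)
    return unique

def flag_for_review(text):
    """Crude triage: route very short or single-token comments (e.g.
    "asdfghjk") to a human reviewer instead of deleting them outright."""
    return len(text) < 5 or " " not in text

comments = dedupe(["Saves me hours each week", "Saves me hours each week", "asdfghjk"])
print(comments)                                     # ['Saves me hours each week', 'asdfghjk']
print([c for c in comments if flag_for_review(c)])  # ['asdfghjk'] -> human decides
```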

Audience

More than half of our audience self-identified as either User Researchers (30.4%) or Product/UX Designers (20.8%). Only 5.9% of our audience identified as Research Ops professionals—we were surprised by this relatively low representation, since ReOps is typically heavily involved in exploring the value of new technologies like AI for their team. But perhaps we just weren’t as successful in reaching this audience.

Write-ins for the “other” category included Founders/CEOs, Content Designers/Strategists, Developers, and CX/UX Strategists or Consultants.

Percentage of survey participants by job title:

  • UX/user researcher: 30.4% (333)
  • Product designer/UX designer: 20.8% (228)
  • Data scientist/analyst: 11.9% (131)
  • Product manager: 11.4% (125)
  • Customer experience/support specialist: 10.0% (110)
  • Marketer: 8.6% (94)
  • Research operations specialist: 5.9% (65)
  • Other: 1.0% (11)

Our findings

Our main takeaways from the survey include:

  • 📈 77.1% of the researchers in our audience are using AI in at least some of their work.
  • 🧰 The most-used UX research tools with AI-based features include HotJar, Dscout, Glassbox, and Dovetail.
  • 🤖 ChatGPT is the most widely used AI-specific tool, with about half (51.1%) of our audience saying they use it for their research.
  • 🧲 About a third of researchers are using AI for document signature collection (34.5%), scheduling (32.5%), and screening applicants (31.6%).
  • ✍️ Nearly half (47.8%) of our audience said they use AI for transcription.
  • 🧑‍💻 Qualitative coding is the most popular analysis use case for AI.
  • 📊 A plurality (45.5%) of our audience is using AI to help with writing reports.
  • ⏩ Efficiency is the most-cited benefit of AI—but researchers still have reservations.
  • 🛑 Despite the benefits, some researchers believe that AI’s current shortcomings may be too acute for the tools to be truly valuable.
  • 😓 Researchers’ biggest concern about using AI is the potential for inaccurate or incomplete analysis.
  • ❌ Nearly half (49%) of our audience is limiting the type of data they use with AI.
  • ⚠️ UX Researchers seem to be the most cautious segment regarding AI.

We’ll dive deeper into these takeaways below.

🔑 How to use this report

  • UX Researchers & people who do research (PwDRs): Learn which AI tools your peers trust for different research use cases.
  • Research Ops Specialists: Explore the AI tools and guardrails researchers are using to help you evaluate your team’s toolstack and policies.
  • Other curious readers: Stay up to date with AI trends in the UX research industry.

📈 77.1% of the researchers in our audience are using AI in at least some of their work.

Obviously, not everyone’s using AI in their research—but among those who are, how often are they using it?

Note: There is probably some self-selection bias in our data for this question; because our survey was focused on AI in UX research, we likely attracted a higher proportion of folks who are open to using AI in their research than would be present in a general population of researchers.

That said, here’s what our audience said about how often they use AI.

The majority (77.1%) of our audience said they’re using AI in at least some of their research projects, while a minority (22.9%) said they’ve tried AI but don’t regularly use it in their work.

An additional 8.0% said they have never used AI. These folks aren’t reflected in the percentages above, and since our goal with this survey was to learn more about how researchers are using AI, we also filtered them out for the rest of the questions.

How frequently researchers use AI in their research projects:

  • I use AI tools in almost all of my research projects (≥90% of projects): 14.6% (160)
  • I use AI tools in most of my research projects: 28.0% (306)
  • I use AI in some of my research projects: 34.5% (377)
  • I have tried AI tools but do not regularly use them in my research projects: 22.9% (250)
Percentage of researchers using AI features in UX research tools:

  • HotJar: 23.6% (258)
  • Dscout: 20.4% (223)
  • Glassbox: 20.0% (219)
  • Dovetail: 19.8% (216)
  • Notion: 19.7% (215)
  • Grain: 17.6% (192)
  • Indeemo: 16.3% (178)
  • UserTesting: 14.5% (158)
  • Lookback: 14.1% (154)
  • Optimal Workshop: 14.0% (153)
  • Qualtrics: 13.8% (151)
  • Maze: 13.2% (144)
  • Looppanel: 13.1% (143)
  • Marvin: 9.9% (108)
  • Userlytics: 8.8% (96)
  • UserZoom: 8.5% (93)
  • QuestionPro: 7.0% (77)
  • Phonic: 6.5% (71)
  • Usabilla: 5.8% (63)
  • SightX: 5.1% (56)
  • SessionStack: 5.0% (55)
  • Survicate: 4.9% (54)
  • Sprig: 4.8% (53)
  • Suzy: 4.8% (53)
  • Tetra Insights: 4.6% (50)
  • Upsiide: 3.0% (33)
  • WEVO: 2.9% (32)

Participants were asked to check all the tools that they use, so responses were not mutually exclusive.

🧰 The most-used UX research tools with AI-based features include HotJar, Dscout, Glassbox, and Dovetail.

If you’ve reviewed your tech stack recently, you may have noticed that many UX research tools have robust AI features built into their platforms. In our survey, we distinguished between these UXR tools and AI-specific tools because some folks may use the AI features in these popular UXR tools (either knowingly or unknowingly), and we wanted to capture this usage as well as the intentional use of AI-only tools.

More than 10% of our audience said they’re using the AI features in the following tools:

  • HotJar: AI-based survey builder and/or insights generation (23.6%)
  • Dscout: AI-powered analysis and insights generation (20.4%)
  • Glassbox: AI-driven visualization and/or analytics (20%)
  • Dovetail: Automated clustering, summarization, and/or sentiment analysis (19.8%)
  • Notion: Writing, editing, and/or brainstorming (19.7%)
  • Grain: AI-based meeting notes, summaries, and/or workflows (17.6%)
  • Indeemo: AI-powered prompts, character recognition, and/or analysis (16.3%)
  • UserTesting: AI-powered insights generation (14.5%)
  • Lookback: Automated analysis, note-taking, and/or insights generation (14.1%)
  • Optimal Workshop: AI-based analysis and/or data visualization (14%)
  • Qualtrics: AI-powered analytics (13.8%)
  • Maze (13.2%) — Note: We incorrectly listed Maze’s AI features as automated analysis and reports. Maze’s actual AI-based capabilities include asking follow-up questions, rephrasing questions, renaming tests, and summarizing interviews. This discrepancy may have affected the number of folks who reported using Maze.
  • Looppanel: AI-based transcription and/or note-taker tools (13.1%)

A small portion of our audience (4.2%) said they aren’t using any of these tools. About 2.8% of folks provided write-in responses for tools such as Miro, Zoom, Great Question, Google/Microsoft, Condens, Trint, Notably, and Grammarly.

✉️ P.S.—If you’re interested in UX research tools, then you should know we’ll be publishing a 2023 UX Research Software & Tooling Report very soon. Subscribe to our newsletter, Fresh Views, to be notified when that report drops!

🤖 ChatGPT is the most widely used AI-specific tool, with about half (51.1%) of our audience saying they use it for their research.

As one of the newest and most advanced AI tools, ChatGPT has garnered attention in almost every industry, from academia and advertising to entertainment and education (and even US courtrooms!).

Given this widespread buzz, we weren’t surprised to see that the majority (51.1%) of our audience said they use ChatGPT in their research. That’s more than 3x the number who said they use ClientZen, the second-most popular AI-specific tool in our survey, which is used by just under 15% of our audience.

A little over 5% of our audience wrote in additional tools. Common write-ins included Bard, CoNote, Claude, Azure, TL;DV, Fireflies, RewatchAI, and Read AI.

Notably, 5 write-ins said they’re using internally built/proprietary AI tools. In a later question, some people listed proprietary tools as a guardrail for safe, ethical use of AI in UX research, since they don’t allow public access to the tools or data.

4.1% of our audience said they aren’t using any of these tools. And despite the popularity of ChatGPT among our audience, some responses to our later question about AI’s limitations noted serious concerns about the role of ChatGPT (and large language models in general) in UX research:

“I am very skeptical about the role that AI should directly play in research because it violates the core epistemologies (especially reproducibility) that underlie qualitative methods[…] LLMs are probabilistic agents and the fact that the same dataset can get a different analysis each run through the model makes it an inappropriate tool for the job.”

📚 More reading: 20+ Powerful AI-Based Tools for UX Research Toolkits in 2023

Percentage of researchers using AI-only tools (chart; participants were asked to check all of the tools that they use, so responses are not mutually exclusive).
How researchers are using AI for recruiting:

  • Collecting signatures for consent forms, NDAs, etc.: 34.5% (377)
  • Scheduling sessions: 32.5% (355)
  • Screening applicants: 31.6% (345)
  • Outreach: 19.7% (215)
  • Incentives distribution: 16.8% (184)
  • I am not using AI for recruiting: 24.2% (265)

Participants were asked to check all the tasks they use AI for, so responses are not mutually exclusive.
Tasks where researchers avoid using AI, by percentage of respondents:

  • I am not using AI for recruiting: 24.2% (265)
  • I am not using AI for synthesis and reporting: 9.6% (105)
  • I am not using AI for research analysis: 8.8% (96)
  • I am not using AI for conducting research: 4.7% (51)

This is a comparison of responses from the questions on recruiting, synthesis & reporting, analysis, and conducting research, for which the responses were not mutually exclusive.

🧲 About a third of researchers are using AI for document signature collection (34.5%), scheduling (32.5%), and screening applicants (31.6%).

Recruiting is notoriously one of the most painful parts of a researcher’s job. In fact, 97% of researchers reported experiencing challenges during recruitment in the 2023 State of User Research Report.

(👋 That’s why User Interviews exists: to connect researchers with quality participants quickly, easily, and affordably).

Where there’s pain, there’s potential for the right technology to provide relief—but is AI the right tool for the job? Let’s look at what the data said about AI’s impact on recruiting.

According to our survey, about a third of researchers say they use AI for signature collection, scheduling, and screening applicants.

(⏸️ Speaking of scheduling—did you know that User Interviews just revamped our scheduling feature for moderated research?)

In general, though, it seems like AI hasn’t influenced recruitment quite as much as the other aspects of research; the percentage of folks who said they do not use AI for recruitment (24.2%) was significantly higher than those who said the same for conducting research (4.7%), analysis (8.8%), or synthesis (9.6%).

There are different potential reasons for this—maybe folks don’t trust AI for the delicate tasks of recruiting, screening, and approving applicants. Or maybe there just aren’t as many well-developed AI-powered solutions for recruiting.

Whatever the reason, we know from the 2023 State of User Research Report that researchers use multiple methods to recruit participants—on average, 3 methods for recruiting customers and 2 methods for recruiting external participants. Perhaps, in the future, AI might emerge as another top recruiting method, but for now, the most popular methods include email, intercept surveys, and self-serve recruiting tools like User Interviews.

We also got a handful (0.8% of our audience) of write-ins. Most of these mentioned using AI for writing/editing recruitment assets and different aspects of panel management, such as segmenting audiences and updating participant data.

📚 More reading: 17 Research Recruiting and Panel Management Tools


✍️ Nearly half (47.8%) of our audience said they use AI for transcription.

Effective note-taking during research sessions is tough, especially when you’re trying to be an attentive listener and moderator at the same time. It seems that this is one area where AI has offered a lot of help, with 47.8% of our audience saying they use AI for transcription and 40.8% saying they use it for note-taking.
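We didn’t ask which transcription tools people use, but it’s worth noting how low the barrier has become. As a hedged illustration (the audio file name below is a placeholder), an open-source speech-to-text model like OpenAI’s Whisper can be run locally in a few lines of Python—which also keeps recordings out of third-party hands, a concern that comes up later in this report:

```python
# pip install openai-whisper  (illustrative example; the audio file is hypothetical)
import whisper

model = whisper.load_model("base")                  # small, CPU-friendly model
result = model.transcribe("session_recording.mp3")  # placeholder file name

print(result["text"])                               # full transcript
for segment in result["segments"]:                  # timestamped chunks for note-taking
    print(f'[{segment["start"]:6.1f}s] {segment["text"]}')
```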

2.5% of our audience also wrote in other ways they use AI to conduct research.

How researchers are using AI to conduct research:

  • Transcription: 47.8% (523)
  • Note-taking: 40.8% (446)
  • Surveys: 37.7% (412)
  • User testing: 29.9% (327)
  • Recording sessions: 29.5% (322)
  • I am not using AI for conducting research: 4.7% (51)

Participants were asked to check all the tasks they use AI for, so responses are not mutually exclusive.
How researchers are using AI for analysis:

  • Qualitative coding: 40.4% (442)
  • Statistical analysis: 37.9% (414)
  • Sentiment analysis: 33.9% (371)
  • Competitive analysis: 32.5% (355)
  • Product analytics: 26.8% (293)
  • I am not using AI for research analysis: 8.8% (96)

Researchers were asked to select all of the analysis tasks they use AI for, so responses are not mutually exclusive.

🧑‍💻 Qualitative coding is the most popular analysis use case for AI.

Qualitative coding is one of the most complex and time-consuming types of UX research analysis—so it’s no surprise that a plurality (40.4%) of our audience is using AI to streamline the process.

However, it sounds like AI’s analysis capabilities still leave something to be desired. A handful (0.5%) of write-ins said they’ve experimented with AI for analysis but found it wasn’t very helpful, or that they could do it more quickly and accurately on their own. Although the number of such responses is small, these sentiments align with many of the responses we received about AI’s shortcomings in a later question.

For example, one respondent said:

“I tried qualitative coding and it was a disaster. Worse - it looked right for the first 10 minutes and then turned out to be very, very wrong.”
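For readers wondering what AI-assisted qualitative coding looks like mechanically, here’s a minimal sketch using the OpenAI Python SDK. The codebook, session note, and model choice are invented for illustration—this is not a recommended workflow. Setting temperature=0 reduces (but does not eliminate) the run-to-run variation behind the reproducibility critique quoted earlier, and as the respondent above learned, every label still needs human review:

```python
# pip install openai  (assumes the OPENAI_API_KEY environment variable is set)
from openai import OpenAI

client = OpenAI()

# Hypothetical codebook and session note, for illustration only
codebook = ["onboarding friction", "pricing confusion", "feature request"]
note = "P3 couldn't find the export button and gave up after two minutes."

response = client.chat.completions.create(
    model="gpt-4o-mini",   # example model name
    temperature=0,         # reduces, but doesn't eliminate, run-to-run variation
    messages=[
        {"role": "system",
         "content": f"Tag this user research note with codes from {codebook}. "
                    "Reply with a comma-separated list of codes only."},
        {"role": "user", "content": note},
    ],
)

# Treat the output as a starting point, not a finding: a human reviews every label.
print(response.choices[0].message.content)
```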

📊 A plurality (45.5%) of our audience is using AI to help with writing reports.

Given our audience’s high self-reported usage of ChatGPT—a large language model known for generating fluent, human-sounding text from prompts—it may come as no surprise that a large percentage of our audience (45.5%) reported using AI to help them write reports.

A small percentage of our audience (0.5%) used the “Other” field to tell us they are using AI for editing, data clustering/mapping, and summarization.

📚 More reading: 31 Creative UX Research Presentations and Reports – Templates and Examples

How researchers are using AI for synthesis & reporting:

  • Writing reports: 45.5% (497)
  • Presenting findings: 43.8% (479)
  • Data visualization: 43.2% (472)
  • Storing & organizing insights: 32.6% (356)
  • I am not using AI for synthesis and reporting: 9.6% (105)

Participants were asked to check all the tasks they use AI for, so responses are not mutually exclusive.

⏩ Efficiency is the most-cited benefit of AI—but researchers still have reservations.

We’ve learned that researchers are using AI in their projects, and we’ve explored the different areas of research that they’re using AI for. Now, let’s uncover why they’re using AI—what benefits does it offer?

Perceived benefits of AI for UX research:

  • Increased efficiency/productivity: 28.4% (310)
  • Improved analysis and synthesis: 20.6% (225)
  • Writing, editing, and proofreading research assets: 14.6% (160)
  • Automating manual/tedious tasks: 12.7% (139)
  • Supports creativity and overcoming mental blocks: 7.5% (82)
  • Improved user experience: 7.0% (76)
  • Improved accuracy and reduced human error: 4.8% (53)
  • Processing large/complex datasets: 4.7% (51)
  • Improved preliminary research and information access: 4.2% (46)
  • More logical decision-making and problem-solving: 3.9% (43)
  • More comfort and convenience in daily life: 2.8% (31)
  • Detecting bias and identifying risks: 2.3% (25)
  • Translation and multilingual support: 1.8% (20)
  • Easier data visualization and asset creation: 1.4% (15)
  • Predictive capabilities: 1.1% (12)
  • Increased income and career growth: 1.0% (11)

This was an open-response question. Themes that came up in fewer than 1% of responses have been omitted.

In open-response comments describing the benefits of AI, the terms “efficiency,” "time savings," "doing more with less," and related phrases came up most often (in 28.4% of comments). For example, one person described valuable time savings for their research team of one:

“As the only UX'er in my company, my research projects often span a longer timeline than I'd like (out of necessity). Plugging my session notes into ChatGPT to quickly generate commonalities or using Notion to summarize session transcripts inline gives me a good starting point to dig into my analysis.”

Folks also had a lot to say about AI’s creative support, with one researcher describing themselves as having a “thought partner relationship” with AI. AI seems to help many researchers “just get started” when tackling a blank page:

“If I'm putting off getting started on drafting a research plan, recruitment email, or coding data I'll let AI take a stab at it because it spits out stuff instantly! Then I'm editing instead of generating which feels easier to get started. Most of the time I have to do 95% of the work over, but getting that start is valuable. It also helps me check my initial ideas by [comparing them with] what AI thinks.”

Others said that AI helped them bridge various types of communication gaps, from communicating across cultures and languages to collaborating across teams:

“By far the most valuable part of AI tools for me is being able to translate terminology and requirements from stakeholder language to researcher language, and back again, to help bridge communication gaps.”

This same sentiment was echoed by another researcher, who also said that AI-based insights seem to improve their perceived credibility among stakeholders:

“Stakeholders also tend to believe [my analysis] more if the AI finds the trends over the humans, which is a bit annoying as a researcher, but is gratifying to see the work being accepted by the client.”

Despite these reported benefits, almost everyone seems to agree that AI is not perfect. Many people were careful to include qualifiers about the extent of these benefits and their potentially hidden costs. For example, some researchers mentioned the need to check AI outputs and the dangers of assuming that AI is correct:

“AI tools (LLMs mostly) are useful at processing large volumes of words for things like sentiment analysis and other forms of pattern detection. I've tested a couple of tools that purport to do this with research material but have not had a great deal of success. The tools I've tried have been incomplete and inaccurate to a degree that I question whether I'm saving any time or effort.”

Let’s dive deeper into the limitations of AI in the next section.

🛑 Despite the benefits, some researchers believe that AI’s current shortcomings may be too acute for the tools to be truly valuable (yet).

The researchers in our audience listed increased efficiency as the top benefit of AI. But does this efficiency come at too great a cost?

In an open-response question about the limitations of AI, many people said that AI’s outputs are often low quality: inaccurate, surface-level, or incomplete.

Here’s a handful of the words and phrases that participants used to describe AI’s outputs: “hallucinations,” “wonky,” “absurd,” “nonsense,” “embarrassing,” “really poor (and frankly, hilarious),” “like a junior assistant,” “like a seventh grader,” “like a glorified Google search,” and “basically useless.”


One respondent said:

“Accuracy is the number one issue. AI doesn't know the context and it can only get you 70% of the way there. My fear is researchers will fail to understand the why behind their craft if they lean too heavily on AI to do the work for them. Research is not always black and white and you often have to look beyond the face value of things.”

Many people also said that AI outputs need so much human review (described by one respondent as “hand holding”) to catch errors that the oversight undermines any efficiency gains. Others noted that, although the need for human review may be considered a limitation, it’s also par for the course; “the tools were never meant to function by themselves.”

Several comments also mentioned the danger of inexperienced researchers (such as junior UX researchers or people who do research outside of the dedicated UX research team) putting too much trust in AI, leading to false insights and misguided decision-making.

For example, one person said:

“It's too easy to accept what the AI has suggested as the right answer — especially working with junior colleagues and interns who may not disclose that they're using AI to generate insights and synthesize data. Everyone has to be more critical, and that ends up creating more friction on the team.”

Ethical concerns such as biased outputs and negative socioeconomic outcomes (e.g. discrimination and unemployment) were also a recurring theme in the responses. One person said:

“The inherent gender and racial bias we've encountered in anything more than summarizations or decisions made with strict parameters are absolutely a deterrent to using [AI] for anything more ‘cutting edge’.”

In general, the researchers in our audience seemed to have very low trust in AI, and many wished AI solution providers were more transparent about their data protection policies (or lack thereof). One researcher said:

“AI tools are not transparent enough about what is happening with the data. We need to know EXACTLY where our data is going (who owns the server that the data is being transferred to, ex AWS, Azure, GCP). This is not immediately transparent when interacting with the feature itself. We do not have the bandwidth to engage reps from each of these companies to gather these answers, and sometimes they don't know themselves [...] Making this transparent would greatly alleviate many of our issues.”

Others mentioned a desire for specific guidance on how to use AI effectively without risking user data, describing a “learning curve” that limits their ability to try new tools.

A very small portion (2%) of our audience felt that there were no limitations, or that they hadn't used AI enough to notice any.

Perceived shortcomings of AI for UX research:

  • Inaccurate or poor quality outputs: 27.3% (298)
  • Ethical concerns and socioeconomic implications: 9.6% (105)
  • Requires time for human review: 9.3% (102)
  • Lack of data privacy and security risks: 9.3% (102)
  • Limited by the quality of training data: 8.2% (90)
  • Inability to process nuance or context: 7.7% (84)
  • Low trust and lack of transparency: 7.3% (80)
  • Lack of creativity, empathy, and critical thinking: 6.0% (66)
  • Costly to learn, access, and implement: 4.6% (50)
  • Unable to process unstructured datasets: 4.5% (49)
  • Unpredictability and lack of control: 2.2% (24)
  • None/unsure: 2.0% (22)
  • Human dependence on machines: 1.7% (19)
  • Mechanical, vague, or inflexible writing style: 1.6% (18)
  • Clunky UI/usability issues: 1.5% (16)
  • Lack of citations/sources: 1.4% (15)
  • Limited effectiveness for non-native English speakers: 1.2% (13)
  • Company and/or local restrictions: 1.1% (12)

This was an open-response question. Themes that came up in fewer than 1% of responses have been omitted.
Researchers’ top ethical concerns regarding AI in UX research:

  • Inaccurate and/or incomplete analysis: 29.7% (325)
  • Lack of data privacy: 19.7% (215)
  • Lack of sources and citations: 14.3% (156)
  • Introducing bias into study findings: 14.1% (154)
  • Errors, poor performance, and other usability issues: 13.9% (152)
  • AI eliminating the need for human researchers: 6.4% (70)
  • I do not have concerns about the ethics of AI in UX research: 0.3% (3)

We wanted to get a sense of the most pressing concerns, so responses to this question were mutually exclusive. Even so, we still had a handful of researchers write that they were concerned about all of the above.

😓 Researchers’ biggest concern about using AI is the potential for inaccurate or incomplete analysis.

It’s safe to say that folks have concerns about AI.

Anecdotally, you don’t have to search for long on the internet to find opinion articles, social media posts, TikTok videos, and other examples of people raising the alarm about the potential implications of AI for how we work, live, and evolve. Indeed, only 0.3% of our audience (3 people!) said they had no concerns regarding AI in UX research.

When we asked researchers to select their primary concern about AI in UX research from a list of options, we found that for a plurality of researchers (29.7%), the #1 concern is the potential for inaccurate and/or incomplete analyses. After inaccuracy, the biggest concerns were a lack of data privacy (19.7%), a lack of sources and citations (14.3%), and the potential for introducing bias into study findings (14.1%).

Folks seem least concerned about the prospect of AI eliminating the need for human researchers (only 6.4% selected this as their biggest concern). This may be due to a lack of trust in AI’s current capabilities; in our open-response questions, many folks said they wouldn’t rely on AI without taking the time to manually review its outputs. As one respondent said:

“AI will never replace human centered research. The 'human' part has to go both ways. I don't think AI can truly catch all the tiny aspects that go into a user experience and synthesize them in a way that makes sense or is helpful/accurate. I see it as a tool to help us save time on tedious tasks, but not to be able to do research on its own.”

Note that in order to get a sense of people’s most pressing concerns regarding AI, we required participants to choose only one response to this question. However, 8 participants couldn’t choose, explaining in write-in responses that they were concerned about all of the above. As one response said:

“I can't pick a single primary concern. I have several. And they all matter.”

❌ Nearly half (49%) of our audience is limiting the type of data they use with AI.

Whether you’re worried about it or not, AI is now an established part of working life. If researchers are going to use AI at work—regardless of its existing limitations—it stands to reason that there should be guidelines and guardrails in place to ensure the safest, most effective use possible.

So, what kind of steps are researchers taking to mitigate their concerns about AI and its shortcomings?

Nearly half (49%) of our audience said they limit the type of data they're using with AI tools. A large segment (38.8%) also said they ensure human review of all AI outputs, while many others (35.5%) said they collaborate with their legal team to stay compliant with AI laws and regulations.

In some cases, researchers felt that these guardrails may be interfering with AI’s potential to provide true benefit to research teams. As one researcher noted in an open response to a later question about limitations:

“The AI Summaries feature could help our team generate high level insights for informal debriefs... However, we currently can't leverage this technology because of corporate guardrails in place limiting our use of AI.”

Of the write-in responses we received (1.8%), several said they avoid using AI in their research entirely, as it’s “the best guardrail there is.” A handful of other folks said they’re still working to determine the right guardrails with their team.  

Guardrails in place to address ethical concerns regarding AI in UX research:

  • Limits on the type of data used with AI tools (e.g. not sharing sensitive user data): 49.0% (536)
  • Human review of all AI outputs: 38.8% (424)
  • Collaboration with legal team to stay compliant with AI laws and regulations: 35.5% (388)
  • System for tracking decisions based on AI outputs: 31.6% (345)
  • Policies limiting the use of AI on your team: 30.0% (328)
  • Defined roles/dedicated managers of AI tools: 16.9% (185)
  • I do not have any guardrails in place: 3.3% (36)

Participants were asked to check all of the guardrails they’re currently using, so responses are not mutually exclusive.
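What might the most common guardrail—limiting the data that touches AI tools—look like in practice? Here’s one hedged sketch (our illustration, not a setup any respondent described): scrubbing direct identifiers from transcripts before they leave your environment. A real deployment would pair a vetted PII-detection library with human spot checks, since simple regexes miss plenty:

```python
import re

# Illustrative patterns only; production redaction needs a vetted PII library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

transcript = "Participant: email me at jane.doe@example.com or 555-867-5309."
print(redact(transcript))
# Participant: email me at [EMAIL REDACTED] or [PHONE REDACTED].
```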

⚠️ UX Researchers seem to be the most cautious segment regarding AI.

Of all the folks who participated in our survey, dedicated UX Researchers seem to be the most cautious about AI. This may not come as a surprise, since UX Researchers are typically trained to apply higher levels of rigor to their research than folks who conduct research as an adjacent part of their job.

When we segmented the data by role, Customer Experience/Support professionals were the most likely to say they use AI in almost all of their research projects (40%), followed by ReOps Specialists (19.4%) and Marketers (17%). UXRs were the most likely to say they’ve tried AI but don’t regularly use it in their research (44.3%).

Frequency of AI usage by role/title. Columns: almost all of my research projects / most / some / tried AI but don’t regularly use it. Percentages are within each role, with response counts in parentheses:

  • UX/User researcher: 6.8% (22) / 14.6% (47) / 34.4% (111) / 44.3% (143)
  • Research operations specialist: 19.4% (12) / 27.4% (17) / 30.7% (19) / 22.6% (14)
  • Product manager: 16.0% (20) / 28.8% (36) / 34.4% (43) / 20.8% (26)
  • Product designer/UX designer: 10.4% (23) / 33.3% (74) / 42.8% (95) / 13.5% (30)
  • Data scientist/analyst: 13.9% (18) / 40.8% (53) / 36.2% (47) / 9.2% (12)
  • Marketer: 17.0% (16) / 39.4% (37) / 36.2% (34) / 7.5% (7)
  • Customer experience/support specialist: 40.0% (44) / 36.4% (40) / 18.2% (20) / 5.5% (6)
Note that statistical significance was calculated using a standard 95% confidence level. With the exception of the 30.7% of Research Operations Specialists who said they use AI in some of their projects, all of these differences were found to be statistically significant.
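For readers who want to sanity-check figures like these, here’s a sketch of one standard approach—a two-proportion z-test—applied to the counts above. We’re not claiming this exact test was used for every comparison, and the pooled “all other roles” group is our aggregation for illustration:

```python
# pip install statsmodels
from statsmodels.stats.proportion import proportions_ztest

# "Tried AI but don't regularly use it," from the counts in the table above:
# UX researchers: 143 of 323; all other roles pooled: 95 of 743
counts = [143, 95]
totals = [323, 743]

z_stat, p_value = proportions_ztest(counts, totals)
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")  # p < 0.05 -> significant at 95% confidence
```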

UX Researchers were also the least likely to use AI in their projects across all aspects of research. For example, 11.5% of UX Researchers said they’re not using AI to conduct research, compared to less than 4% in each of the other segments.

The UXRs who said they are using AI to conduct research are mostly (60.7%) using it for transcription.

Percentage of participants who said they were not using AI to conduct research:

  • UX/User researcher: 11.5% (37)
  • Research operations specialist: 3.2% (2)
  • Product manager: 2.4% (3)
  • Product designer/UX designer: 1.8% (4)
  • Data scientist/analyst: 0.8% (1)
  • Marketer: 2.1% (2)
  • Customer experience/support specialist: 0.0% (0)

Responses are not mutually exclusive.

The UX Researchers in our audience were also the least likely to have tried AI tools and UXR tools with AI-based features. These results were consistent across all of the tool- and use-case-related questions, which may indicate higher rates of trepidation about AI among folks in the UXR segment.

Percentage of participants who said they hadn't tried any AI tools - UX/User researcher	10.2%	33 Product manager	3.2%	4 Research operations specialist	1.6%	1 Product designer/UX designer	0.9%	2 Data scientist/analyst	0.8%	1 Marketer 	0.0%	0 Customer experience/support specialist	0.0%	0 - In the 2023 AI in UX Research Survey by User Interviews, participants were asked to select from a list all of the AI tools they use in their research. The data in this chart represents folks who selected “none of the above.”
Responses are not mutually exclusive. Zoom in or open image in a new tab to view a larger version.

More questions than answers

All in all, our clearest takeaway from this survey is that researchers definitely have mixed feelings about the (mostly) uncharted territory of AI.

  • 🔎 AI can introduce bias into study findings, but it can also be used to detect bias in interview scripts and analyses.
  • 💬 AI can support multilingual communication, but it also falls short in its effectiveness for non-native speakers.
  • ⏳ AI can improve the efficiency of common research tasks, but the time required for review may ultimately undermine those efficiency gains.

These are just some of the complex issues and contradictions that researchers brought up in their survey responses.

At the end of the day, it seems that there’s no quick or clear solution for reaping AI’s benefits without risk. As one respondent said:

“AI that automates decisions can scale both benefits and harm.”

So, while we learned a lot from this survey, the results have also opened up a whole new wave of questions for us, including:

  • What motivates researchers to choose certain AI tools over others?
  • What signals and characteristics can researchers look for when assessing the accuracy, security, and reliability of different AI tools?
  • How much do researchers actually value efficiency/speed in research, and do they value this enough to see the efficiency gains from AI as worth it? Does it vary by role, team structure, or seniority?
  • Why are fewer researchers using AI for recruiting than for conducting research, analysis, and synthesis?
  • How concerned are researchers about AI replacing the demand for human participants (e.g. with tools like Synthetic Users)? Are they concerned at all?
  • Do the primary ethical concerns differ among researchers who’ve never tried or used AI tools?
  • What resources does the UX research community need to develop the skills and knowledge for effective decision-making about if, when, and how to use AI?
  • How are researchers tracking the impact of decisions made based on AI-generated insights?
  • Are researchers more or less likely to use AI in their projects in specific industries or company types?
  • What can AI tool providers do to overcome trust issues among researchers?

We’ll be chewing on these questions as the UXR industry and AI technology continue to evolve. In the meantime, we’d love to hear from you—what are your thoughts about this report? What questions do you still have? Tag us on LinkedIn or Twitter to share your feedback, or email the author directly at lizzy@userinterviews.com.

Keep an eye out for future insights about UX Research

If you found this report valuable, then you’ll probably like these resources too:

✉️ Sign up for our newsletter to be the first to know when we publish new research about the who, what, how, and why of UX research.
