The State of User Research 2023

Welcome to the fifth annual State of User Research report, brought to you by the curious folks at User Interviews.

This report unpacks the findings from our annual survey on the state of user research and the people who do it. 

In addition to our usual exploration of research methods, salaries, tools, and feelings, this year, we took a special look at the makeup of research teams and practices, the increasing prevalence of AI in research, recruiting pain points, and the impact of the economy on individual researchers and teams.

We are enormously grateful to all of our survey participants, our colleagues, and our partners for their contributions to this essential industry report. 

Happy reading and researching!

Key insights
  • Most US-based UXRs earn between $100k and $200k.
  • Over 50% of research practices are now decentralized.
  • A single ReOps Specialist supports the research efforts of 21 people on average.
  • Most people (87%) conduct a majority of their research remotely, regardless of their remote work status.
  • Fully remote work among researchers is on the decline, down from 89% in 2021 to just 51% in 2023.
  • 65% of researchers primarily rely on their own customers for research.
  • The average UXR toolkit includes 13 different tools for research.
  • A fifth of researchers are currently using AI in their research; an additional 38% plan to incorporate it in the future.
  • Half of researchers were directly or indirectly affected by layoffs in the last 12 months.

Methodology

A guide to commonly used terms
  • UXRs: People whose titles include UX/User Research (or similar terms)
  • PWDRs: People Who Do Research—in this report, folks who aren’t UXRs but who spend 10% or more of their time on user research
  • ReOps Specialists: People whose titles include Research Operations (or similar terms)
  • Researchers: In this report, a catchall term for anyone involved in user research (UXRs, PWDRs, and ReOps alike)
  • Research teams: When written with a lower-case “r,” research teams refers to all the folks who are involved in research at an organization as a group, regardless of whether they report to a Research department.

Methodology

The State of User Research survey was created by Katryna Balboni, Content Director, and Morgan Mullen, Senior UX Researcher, and built using SurveyMonkey. Analysis was done using Mode and Google Sheets/Excel. 

This report was authored by Katryna and brought to life by Holly Holden, Senior Visual Designer, and illustrator Olivia Whitworth.

Between May 4 and May 15, 2023, we collected 929 qualified responses from User Researchers, ReOps Specialists, and people who do research (PWDRs) as part of their jobs. (An additional 2,745 people took our screener but did not qualify for our survey based on their responses.)

The survey was distributed to User Interviews audiences via our LinkedIn and Twitter pages, our weekly newsletter (Fresh Views), and an in-product Appcues slideout. We also shared the survey in relevant groups on LinkedIn, Facebook, and Slack. Members of our team and friends within the UX research community also shared the survey with their professional networks. 

This year, we partnered with other companies in the User Research space to extend the reach of our survey and this report. Our partners shared the survey with their audiences via their own newsletters and social media channels. Those partners are: Lookback, Marvin, MeasuringU, the ReOps Community, and UXtweak.

Our Audience

We believe that research is for everyone. 

Whether they hold a PhD in Behavioral Anthropology or are a junior product designer for an ecommerce app (or, heck, both! We know people take all sorts of roads into research), we think that everyone should be empowered to ask questions and seek answers in a methodical way.

That’s why we’ve included not just dedicated UX Researchers (UXRs), but also people who regularly do user research as part of their jobs (PWDRs) and ReOps Specialists in our survey.

In this section, we break down our audience by job title, seniority, years of experience, company size, industry, and geographic location.


Job titles and seniority

The majority (69%, N=637) of our audience are UX/User Researchers (UXRs). People who do research (PWDRs)—including Designers, Product Managers, Marketers, and Data Analysts—accounted for 26% (N=243) of all responses. A small but meaningful 5% (N=49) of our audience are ReOps Specialists.

Most of these folks (63%, N=589) are individual contributors (ICs), meaning they do not manage a team. Of these 589 people, 19% are what we’d call Senior ICs (people with 10+ years of experience); 44% are “mid-career” (4-9 years); and 37% are “early career” or Junior ICs (0-3 years).

A fifth of responses (20%, N=186) came from Managers, followed by Directors and Senior Directors (9%, N=87), and freelancers/self-employed (5%, N=42). A small percentage of our audience are at the VP level or higher (3%, N=25).

Notably, our PWDR audience skews more toward the management side of things, with 21% at the Director level or above.

Geography

Responses came from 65 countries around the world, with people living in the United States representing a full 50% (N=466) of the total audience.

The next most represented countries are the United Kingdom (7%, N=61), Canada (6%, N=55), Poland (5%, N=43), Germany (3%, N=32), Brazil (2%, N=22), Australia (2%, N=21), and Spain (2%, N=20). 

The remaining 209 responses came from folks on every continent (well, minus Antarctica), from Finland to Egypt to the Philippines.

Company sizes and industries

These individuals represent companies of all sizes, from 2-person operations to multinational giants with over 10,000 employees. 

A fifth (21%, N=198) of the folks we surveyed work at agencies or consulting firms that are contracted to conduct projects on their clients’ behalf. 

Just over a third (35%, N=321) of our audience are User Interviews customers. This group includes users of both Recruit (our panel of over 3 million users) and Research Hub (our highly rated research CRM).

In terms of industry, a plurality (35%, N=327) of the researchers we surveyed work in tech and software companies. The next-most represented sectors are IT (10%) and finance/accounting/insurance (also 10%), followed by business consultancy or management (9%), and healthcare and education (5% each).

Confidence in research comes with time.

As we might expect, UXRs are the most likely to have a formal university education in user research or a closely related discipline, with 42% saying this was how they primarily acquired their research skills and knowledge. 

Other folks—especially ReOps Specialists—most commonly learn about user research on the job. (That is not to say they don’t have advanced formal training in other areas—in fact, the majority of our audience (69%) hold a Master's degree or higher.)

How researchers primarily acquire their UXR skills

59% (N=29) of ReOps Specialists primarily learned user research on the job, followed by 40% (N=97)  of PWDRs and 35% (N=223) of UXRs. PWDRs are the most likely to have attended a UX Research bootcamp (14%, N=34).

When we asked people to rate their own experience level with UX research on a scale from 1 (beginner) to 5 (expert), we found that folks with formal training in research rated their own expertise the highest on average (3.90), while those who primarily learned about UX research through a bootcamp (9% of our audience) gave their experience the lowest average rating (3.17). 

Self-assessments of research experience were also slightly higher among folks who reported that UXRs or ReOps Specialists were responsible for user research education at their company (3.79 for UXR-led and 3.72 for ReOps-led education), compared to those who said there was no one in charge of research education at their company (3.41).

But regardless of job title or education, it appears that the biggest factor in how highly a researcher rated their own level of expertise was simply time. Predictably, people’s sense of their own research expertise increases with years of experience.

Happening now

Researchers are back at the office (sometimes).

Fully remote work among researchers is on the decline. The percent of people who said they work exclusively from home has decreased from 89% in 2021 (when our survey went out at the height of COVID-19 precautions) to 77% in 2022, to just 51% in 2023.

Frequency of remote work over time

Responses to the question "How often do you work remotely?" in 2021-2023.

People are still spending the majority of their time out of the office—79% (N=738) work remotely 3 or more days out of the week. This percentage is down from 95% in both 2022 and 2021.

Even so, fewer than 1% of our audience said they were in the office full-time. Instead, hybrid workers (people who work remotely 1 to 4 days per week) represent a growing minority with 43% reporting hybrid work in 2023 (compared to 21% in 2022 and just 7% in 2021).

North American and Latin American researchers are the most likely to be fully remote (63% and 66%, respectively), while in-office work was most common among researchers in Asia, with 15% saying they never or rarely worked remotely.

Remote vs. in-person research

Nearly half (46%, N=431) of our audience said that all of their research happens remotely. Only 1% (N=7) said that all their research is conducted in-person. The remaining 491 folks (53%) fall somewhere in between, with most favoring remote methods.

But working remotely does not necessarily mean researching remotely. Among the 51% of people who said they work exclusively remotely, 39% said that a portion of their research happens in-person. Conversely, 96% of the people who never work remotely say that at least some (if not all) of their research is remote.

In fact, most people (regardless of their remote work status) conduct a majority of their research remotely. This suggests that remote research is a prevalent and essential aspect of the research process, even for those who primarily work in a physical office.

There is a positive correlation between remote work and overall fulfillment.

We asked our audience to rate both their overall fulfillment at work and their feelings about several job factors on a scale from 1 (very unfulfilled/very dissatisfied) to 5 (very fulfilled/very satisfied).

People who work remotely at least 1 day per week had an average fulfillment rating between 3.51 and 3.55, compared to those who rarely (3.37) or never work remotely (2.71).

People who are fully onsite also reported lower satisfaction with work-life balance on average (2.71) compared to their fully remote counterparts (3.76). Onsite workers were also the least satisfied with cross-functional collaboration and the level of bureaucracy in day-to-day decision-making. (Though frankly, no group is particularly satisfied with the latter).

User research teams

Thomas Edison didn’t actually invent the lightbulb. He improved upon existing technology to produce an affordable long-burning incandescent light bulb—and even then he had considerable help from the nearly 200 people who worked in his lab. And in fact, it was a Black inventor named Lewis Latimer who perfected Edison’s lightbulb, making it more durable and efficient to produce.

As we’ve been saying for a while now: Research is a team sport.

To better understand the key players, we looked at team sizes and structures at companies of different sizes. Here’s what we learned:

Larger companies have larger research teams. 

Shocking headline, we know.

In previous years, we asked about team sizes using buckets. This year, we used an open-text field to collect numerical data, which allowed us to calculate the average number of UXRs, PWDRs, and ReOps Specialists and better understand the actual sizes of research teams in different organizations.

(Remember, in this context, “research teams”  include all the UXRs, ReOps Specialists, and PWDRs involved in research at a company, regardless of which department they report to.)

And our first finding here was as unsurprising as the headline suggests: The average research team size scales with company size.

Research team sizes

46% of our audience work in large (1,000 to 9,000 employees) or very large (10,000+) companies, where the average research team sizes are 86 and 243 respectively. Yet the majority (60%, N=554) of people actually work on smaller research teams of 2 to 25 people, while another 13% (N=121) work on mid-sized teams of 26 to 50 researchers.

A plurality of our audience (37%) reported team sizes between 2 and 10, followed by 23% who said there are 11 to 25 people involved in research at their organization. Five percent (5%) of responses came from solo researchers—people who represent a research team of one.

It’s worth noting that when we analyzed how people rated their satisfaction with various job factors on a scale from 1 (very dissatisfied) to 5 (very satisfied) by team size, solo researchers appear to be the least satisfied when it comes to their tool stacks, budgets, buy-in from leadership, cross-functional collaboration, and how research is used to make decisions at their company.

The average ReOps Specialist supports the research of 21 people.

To understand the makeup of these different teams, we looked at the average ratios of different roles to one another. The number of PWDRs per every UXR ranges from 1 to 7, with an average ratio of 3 PWDRs: 1 UXR.

Meanwhile the ratio of people conducting research (UXRs + PWDRs) per every ReOps Specialist ranges from 4 to 32, with an average ratio of 21:1. In other words, the average ReOps Specialist supports the research efforts of 21 people.
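
(For the arithmetic-minded: here is a minimal sketch, in Python, of how these ratios are derived. The team counts are hypothetical, chosen only to illustrate the calculation; they are not survey data.)

```python
# Hypothetical team counts, for illustration only (not survey data)
uxrs = 4    # dedicated UX Researchers
pwdrs = 12  # people who do research as part of another role
reops = 1   # Research Operations Specialists

# PWDR-to-UXR ratio: how many PWDRs per dedicated researcher
pwdrs_per_uxr = pwdrs / uxrs  # 3.0 -> a 3:1 ratio

# Researchers supported per ReOps Specialist: everyone who conducts
# research (UXRs + PWDRs), divided by the ReOps headcount
researchers_per_reops = (uxrs + pwdrs) / reops  # 16.0 -> a 16:1 ratio

print(f"{pwdrs_per_uxr:g} PWDRs : 1 UXR")
print(f"{researchers_per_reops:g} researchers : 1 ReOps Specialist")
```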

Of course, not every company has a dedicated ReOps Specialist. In fact, our audience was evenly split between people who have a ReOps function at their company and those who don’t. Interestingly, 5% of people told us that their company outsources research operations work to an external agency or individual.

In any case, dedicated UXRs represent 39% of an average research team. Research Ops, when present, accounts for 16% of the average team, with PWDRs constituting the remaining 45%. When there is no Research Ops function, PWDRs make up 61% of all researchers.

Over 50% of research practices are decentralized.

A third of our audience (33%) work in companies with a centralized Research department, with this model being more common in agencies (43%).

Meanwhile, over half (52%) work in organizations where the research practice is decentralized—meaning it is either fully distributed (dedicated researchers are embedded within a non-research team) or a hybrid wherein some UXRs sit in a Research department, while others are embedded in a non-research team. 

Fully centralized practices seem to become less common as companies scale.

Happening now

The dark side of democratization?

You don’t have to look far to find someone within User Research with an opinion on democratization and the role it may have played in the recent wave of UXR layoffs. 

For anti-democratization folks, seeing fellow researchers laid off has only confirmed their belief that democratization poses an existential threat that must be challenged and resisted. One researcher wrote:

“It's not shocking that UXRs are being laid off in droves after the whole ‘democratization’ trend kicked off. If everyone thinks they can do research (and they can't), then there will be no jobs for dedicated researchers.”

For others, recent events are a sign that UXR is due for a reckoning.

“Research is about discovery, so how can it be centralized and stay unbiased, diverse and inclusive to all walks of life?”

In our survey, we had people rate their feelings about democratization on a scale from 1 (very concerned/dissatisfied) to 5 (very excited/satisfied). On average, our audience rated their feelings a 2.95 out of 5—just below neutral. Sentiment toward democratization was lowest among UXRs, who gave an average rating of 2.84. 

There does seem to be a “the water’s better once you’re in” scenario at play—folks on centralized teams (especially UXRs) took a dimmer view of democratization than people already working in distributed practices (2.84 vs. 3.05 on average).

We dug into qualitative responses on this topic to understand where our researchers were coming from:

Negative feelings about democratization

Of those who left a qualitative response on this subject (N=526), 45% shared more negative views about democratization. People seem primarily concerned that it reduces research quality (17% of open responses), reduces research impact (5%), puts UXR jobs at risk by giving leadership an excuse to ax research-specific departments (7%), and puts more work on researchers by asking them to become educators—and indeed, on PWDRs by asking them to become researchers—which may not align with their expected roles (3%). 

The latter point—that UXRs are expected to be educators, not that UXRs necessarily dislike this role—is supported by our data: Most (73%) of the people we surveyed said that the responsibility of teaching research best practices falls on the UXRs in their organization, even when there is a Research Ops function present.

Positive feelings about democratization

Some folks, on the other hand, welcome the shift. Roughly 28% of those who left a qualitative response expanded on their positive view of democratization, saying it enhances research by bringing more perspectives into the fold, reduces biases, and increases the amount of research that can be done.

“It’s fine and I welcome it, everyone has the right to be data-informed,“ wrote one person. “The more research, the better!” said another.

Mixed feelings about democratization

Another 11% expressed more balanced views about the subject, saying that execution is key. In the words of one survey participant: “It's necessary, we just have to get it right.”

And some folks are, frankly, just done with this conversation:

“I'm honestly just sick of talking about it. Why does our industry have to have a single view on this? Stop trying to make fetch happen and accept that ‘it depends on organizational context’ is the answer.”

More researchers are tracking the impact of their work.

This year, 82% of the folks in our survey said they track the impact of their research—a notable uptick from last year (when 68% said the same). This suggests an increasing awareness of the importance of measuring research outcomes.

Common methods for tracking research impact

[Bar graph: follow-up meetings with stakeholders top the list at 50% of responses]
We excluded people who work for agencies (N=198) from this analysis, given the difficulty of tracking impact in a client’s organization (a challenge that was called out by survey takers last year, when we asked about impact tracking as an open-response question). Responses are not mutually exclusive.

Follow-up meetings with stakeholders are the most common method of assessing research impact overall, followed by the use of Key Performance Indicators (KPIs) and manually tracking the number of decisions influenced. 

UXRs are more likely to use the latter method—43% say they manually track research-influenced decisions, compared to 29% of PWDRs and 24% of ReOps Specialists.

Meanwhile PWDRs are the most likely to use KPIs or other quantitative methods—47% compared to 39% of UXRs and 41% of ReOps Specialists. 

These folks may be onto something—when we looked at how people rated their satisfaction with the way research is used to make decisions at their company, people who use KPIs to track their impact tend to rate their satisfaction in this regard highly (4-5 out of 5), as do those who say their company built a custom tool for this purpose. 

On the flipside, nearly half (47%) of the folks who say they do not track research impact at all said they were dissatisfied or very dissatisfied (1-2 out of 5) with how their work is used in decision-making.

Even more interestingly, we found that there was a clear correlation between how successful researchers felt in their efforts to track research impact and their overall fulfillment at work. 

People who felt tracking efforts were very successful had an average fulfillment rating of 4.26. Comparatively, people who felt very unsuccessful in this regard had an average fulfillment rating of 3.09. (Perhaps confirming the old “ignorance is bliss” adage, people who make no effort to track their research outcomes rated their overall satisfaction somewhat more highly at 3.28 on average.)

illustration of a laptop with windows, papers, graphs, profile photos, cursors, and abstract shapes flowing out of the screen and up

Approaches to user research

Generative, evaluative, continuous. Qualitative, quantitative, mixed methods. Discovery, testing, go-to-market. Moderated, unmoderated. Longitudinal. Biometric. AI-driven.

There are many ways to conduct user research (and we’ve written about many of them in the User Experience Research Field Guide, by the way.)

As part of our survey, we asked people about how different types of research were handled in their organizations. An analysis of the data revealed some interesting patterns, but not many surprises.

UX Researchers favor a mixed methods approach.

UXRs tend to conduct both generative and evaluative research, and seem to favor a mixed methods approach (85% of people said the UXRs on their team use mixed methods, compared to less than a third who said the same of Designers and PMs). 

When PMs conduct research, they are most commonly focused on evaluative goals (according to 77% of our audience), using either qualitative or quantitative methods. Meanwhile, it seems that when Designers are involved in research, they typically focus on evaluative research (say 93% of our audience, compared to 40% who report Designer involvement in generative research) using qualitative methods (82% vs. 31% quantitative or mixed methods). 

Note that this data excludes answers from people who said “I don’t know” or reported that a role was not involved in any such research.

PMs are the most likely to conduct continuous research.

Overall, 38% of our audience said that their teams conduct continuous research, with PMs and ReOps being the most likely to respond affirmatively (65% and 54%, respectively, compared to 37% of UXRs). 

In fact, PMs are twice as likely as UXRs to regularly employ continuous discovery interviews in their research.

User research methods

We asked people how many moderated, unmoderated, and mixed methods studies they conducted in the last 6 months. All role segments reported that they conduct moderated studies most frequently, followed by unmoderated studies and then mixed methods.

The most commonly used methods are 1:1 interviews (which 88% of people said they use often or always), usability tests (80%), surveys (62%), concept tests (51%), and competitive analysis (44%).

PWDRs use a wider variety of methods than UXRs.

UXRs rely most heavily on interviews, usability tests, and surveys, while PWDRs—especially Product Managers—seem to use a wider variety of methods more frequently than UXRs.

illustrated figure looking left through large binoculars

Commonly used research methods

Percentage of researchers who "often" or "always" use each method

Designers and PMs appear more likely to use a wider range of methods on a regular basis (“often” or “always”) while UXRs are more likely to stick to a trifecta of interviews (91%), usability tests (83%), and surveys (65%).

When we drill down further into the methods that people say they “always” use, we find that both PMs and Designers are much more likely than UXRs to use quantitative methods (like A/B or multivariate tests and behavioral product analytics) and biometric methods (like eye-tracking), as well as heuristic analysis and competitive analysis. 

Designers are also 2x more likely to say they always conduct accessibility tests and preference tests as part of their studies. 

Meanwhile, PMs are over 2x more likely than UXRs to frequently conduct continuous discovery interviews, and 5-10x more likely to use focus groups, card sorts, participatory/co-design studies, and diary studies on a regular basis.

Most frequently used research methods

Comparing the percentage of UXRs, PMs, and Designers who "always" use these methods

The percentage of PMs (N=35) who say they “always” use a method was higher than the percentage of Designers (N=137) and UXRs (N=637) for 14 of the 22 methods we asked about in our survey.

When it comes to learning about customers through other methods, UXRs are more inclined than other groups to use data science/product analytics reports and online research. 

On the other hand, PWDRs (particularly PMs) rely more heavily on CS/support team notes or reports (80% vs 60% of UXRs) and ad hoc conversations with customers (77% vs. 52% of UXRs). 

ReOps Specialists and PMs are twice as likely as UXRs and Designers to utilize customer advisory boards (40-43% vs. 21-24%).

Commonly used methods for learning about users/customers

The majority (54-65%) of our researchers also use data science/product analytics reports, online articles or public information, CS/support team notes, market research studies, and/or ad hoc conversations with customers. Responses are not mutually exclusive.
Happening now

Researchers are adopting AI for research.

All in all, artificial intelligence (AI) in user research is a topic of both interest and caution. While a significant portion of researchers are currently using AI or planning to do so, factors like DEI considerations, data privacy concerns, and personal attitudes towards AI play a role in shaping researchers' decisions.

AI adoption among researchers

A plurality (38%, N=350) of researchers say they haven’t yet adopted AI for research, but plan to join the 20% (N=183) of their peers who have. Another 17% (N=155) are on the fence, while over a quarter (26%, N=241) plan to leave artificial intelligence out of their practice.

A fifth (20%) of our audience is currently using AI in their research, with an additional 38% planning to incorporate it in the future. 

PMs are the most likely to have adopted AI already, while our small sample of Marketers (N=16) seem the most eager to jump on the AI bandwagon, with 56% of them planning to utilize AI for research at some point.

ReOps Specialists and UXRs appear the most hesitant about this new tech—27% of these folks said they have no plans to use AI for research, the largest percentage among role segments. 

Novice and experienced researchers seem to be embracing or rejecting AI at similar rates, suggesting that readiness to adopt this particular new technology is not necessarily tied to one's level of experience in the field.

We analyzed qualitative responses on the topic to understand what most excites and worries researchers about the rise of AI.

Negative feelings about AI in research

Half (50%) of the qualitative responses we received on this subject (total N=582) trended negative—although average sentiment regarding AI was neutral (3.0 out of 5). 

Folks who take measures to ensure that their research is diverse, equitable, and inclusive are somewhat less likely to currently use AI (19% vs. 23%) and somewhat more inclined to say they have no plans to do so (27% vs. 22%), compared to people who take no measures regarding DEI in research. 

In their open-ended responses, some of these people (24%) expressed skepticism about data accuracy, the replacement of real people by AI participants, and rigor in research:

“[I] have concerns that people who don't understand UXR will think it's viable to replace us with an AI tool. I also think it will amplify our own biases and misinformation.”

There seems to be a correlation between concerns about data privacy and inclusion, and one's willingness to embrace AI. People who expressed positive feelings about the current state of data privacy and security were more likely to be current users of AI in their research (27% vs. 16% of those who felt negatively about this issue). 

In open responses, conversely, 16% said they worry about the lack of regulation and data privacy, and that we’re adopting this new technology too quickly. 

And 9% expressed fears about what AI might mean for their job security.

“Not sure if ChatGPT will help my job or eliminate it.”

Positive feelings about AI in research

On the other hand, 42% of open responses focused on the positive impacts of AI—namely that it offers new opportunities, streamlines research processes, reduces mundane tasks, and/or enhances their work (or has the potential to do so).

“The more sophisticated it becomes, the less I have to do!”

Mixed feelings about AI in research

Other folks are on the fence. Around 15% of qualitative responses reflected uncertain/mixed sentiments (or apathy). “Cautious” and “cautiously optimistic” were terms used to describe their feelings on the subject of AI.

Some people said they’re excited by this new technology, but worry that researchers are getting ahead of themselves:

“I have mixed feelings. I’m excited for certain productivity gains around rote processes, [but feel] skepticism about nuanced analysis [and] concern that there will be an overreliance on AI in UX before it's ready for prime time.”

Teams without ReOps take fewer measures to ensure their research is diverse and inclusive.

While the majority of our audience (86%) indicated that they take measures to ensure inclusivity in their research, this number is down slightly from 91% in 2022.

This dip could be partially attributed to the higher participation of global researchers in this year’s survey—data suggests that researchers outside of North America are less likely to take DEI into consideration in their studies. To quote one European researcher, “I live in a country where nobody cares about [inclusivity and diversity] a lot.”

How research teams ensure their work is inclusive and representative

14% (N=129) of our audience said that they take none of the listed measures, and declined to write an “other” response. Compared to North American researchers (10%, N=55), folks in Europe were twice as likely to say they made no such efforts (19%, N=52). Our small sample of Middle Eastern researchers (N=12) were the most likely to select “None of the above” (33%, N=4). Responses are not mutually exclusive.

But before we go pointing fingers at folks on the other side of the world, it’s worth considering the response of one US-based researcher who wrote: “Often projects have very limited user pools or available stakeholders, so inclusivity is a privilege.”

It's encouraging that a majority of researchers are taking steps to ensure that their work is representative and respectful of the needs and perspectives of a diversity of users. 

But there is room for improvement—both in regions where the adoption of these measures seems to be lower, and on teams where inclusive research and design is seen as a luxury.

Interestingly, people without a Research Operations function were almost twice as likely (18% vs. 10%) to say that they did not implement any inclusivity measures, suggesting that Research Ops can positively impact the adoption of inclusive practices within research.

Research recruiting

This is the part of the report where we—the leading research recruiting and panel management solution—talk about ourselves and shamelessly plug our products. 

Don’t worry, there’s plenty of data here, too. But we’d be remiss if we didn’t tell you (for the sake of both transparency and marketing) that at User Interviews, we are 100% focused on simplifying participant recruitment. That’s all we do, and we do it better than any alternative method. 

If you’re curious to learn more, Zoe Nagara, our Senior Product Marketing Director, wrote a great article called “Why We Exist” about what we do and why we do it, which you can read on our blog.

A typical moderated study includes 8 qualified participants.

The median number of participants in a moderated study is 8, according to our data.

We also found that our researchers conducted a median of 4 moderated studies in the last 6 months, indicating that a typical researcher recruits around 32 participants within that time. 

(That’s just for moderated research. If we include the median number of mixed methods studies (3) in our calculations, we find that a “typical” researcher needs to recruit around 56 qualified participants in a 6-month period.)
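
(For those following along with a calculator, here is a minimal sketch of that math in Python. The per-study participant count for mixed methods studies is our assumption—we apply the moderated median of 8 to reproduce the 56 figure above.)

```python
# Median figures reported in our survey
participants_per_moderated_study = 8
moderated_studies = 4      # per 6-month period
mixed_methods_studies = 3  # per 6-month period

# Moderated research alone: 4 studies x 8 participants each
moderated_recruits = moderated_studies * participants_per_moderated_study  # 32

# Assumption: a mixed methods study recruits the same median of 8 participants
mixed_recruits = mixed_methods_studies * participants_per_moderated_study  # 24

print(moderated_recruits + mixed_recruits)  # 56 qualified participants per 6 months
```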

So who are these participants, and how do researchers recruit them?

Customer vs. external user recruitment

The majority of researchers (65%, N=609) conduct most of their research with their own customers or users—17% (N=159) say they do so exclusively. It is less common for researchers to rely solely on external users (7%, N=69). These percentages were similar between agencies and non-agencies.

Most researchers recruit existing customers for research.

Most researchers (65%) primarily rely on their own customers for research, especially in non-agency settings. 

These participants are most commonly selected based on their patterns of product usage or professional criteria, except when a researcher primarily recruits external users—in which case they tend to select participants from the general population rather than by product or job experience.

Researchers use 2 to 3 recruiting methods to get the job done.

Our audience uses a blend of recruiting methods to source participants. On average, people use 3 different methods for recruiting their own customers, and 2 different methods for outside participants.

Email emerges as the most popular method for recruiting customers overall (used by 51% of our audience), followed by intercept surveys (43%). Among User Interviews customers (N=321), self-serve recruitment tools (like ours) are the most popular method for recruiting one’s own customers (49%), followed by email (48%).

Methods for recruiting customers/existing users

The most commonly used methods for recruiting customers include email (51%, N=438), intercept surveys (43%, N=371), CS/Sales (34%, N=295), built-in tester panels (26%, N=222), and self-serve recruitment tools like User Interviews (25%, N=218). Responses are not mutually exclusive.

When it comes to recruiting external users, User Interviews is the most popular method by far among our own customers (60%), followed by a built-in tester panel (such as those offered by UserTesting, UXtweak, and other platforms–34%). 

Non-UI customers are more likely to use solutions like recruiting agencies (45%) and built-in tester panels (43%).

Methods for recruiting external participants

The most commonly used methods for recruiting external users include built-in tester panels (39%, N=289), recruiting agencies (39%, N=283), self-serve recruitment tools like User Interviews (33%, N=238), and groups on LinkedIn/Slack, Facebook/etc (20%, N=148). Responses are not mutually exclusive.

PWDRs are more inclined than UXRs to post in LinkedIn/Facebook/Slack groups (17-29%, vs. 9-15%), company social channels (17-18% vs. 11-14%) and Customer Support or Sales teams (17-40% vs. 7-35%) for both internal and external participant recruitment. (This is perhaps unsurprising, given that this group includes Marketing, Sales, and Customer Support folks). 

Meanwhile ReOps Specialists tend to employ a wider variety of recruitment methods overall, but are less likely to rely on built-in survey panels or social media.

55% of UI customers fill interview studies in less than 1 week.

Predictably, it takes longer to recruit for moderated studies than it does for unmoderated ones.

A plurality of people (40-43%) said that it takes 1 to 2 weeks to recruit for moderated studies (1:1 interviews, diary studies, and focus groups) and 3 to 5 days for unmoderated usability tests and surveys (35-36%).

Over half (55%) of User Interviews customers (N=321) said they typically fill an interview study in under a week, whereas only 41% of non-customers (N=608) achieve the same outcome.

Common recruiting timelines for different study types

Recruiting timelines seem to be shortest for unmoderated usability studies—35% of researchers (N=253) said it takes less than a day to fill a study. Finding participants for diary studies takes the most time—29% of people who conduct this type of study (N=125) said it takes 3 weeks or more.

Gift cards are the most common incentive type.

Gift cards are used by a majority of our audience (66%), making them the most popular form of user research incentives, followed by cash or cash equivalents (39%).

Researchers in certain industries, such as healthcare (N=50), energy/utilities (N=11), and government (N=10), appear more likely not to offer any incentives.

We asked the folks who offer gift cards, cash, or cash equivalents how much they typically pay for different study types, and calculated the median and average incentive rates.

Keep in mind that the amounts in this table do not necessarily reflect the most effective incentive amount for your target participants.

22% of ReOps Specialists use our UX Research Incentives Calculator.

Almost half of the folks in our survey (46%) said they rely on predetermined guidelines provided by their company to set incentive rates, while 23% say they “just took an educated guess.” 

Research Ops professionals were the least likely to say they guessed (just 2%), instead favoring more data-backed approaches. They are the most likely (22%) to rely on the User Research Incentives Calculator from User Interviews, which is used by 15% of our audience overall.

UX Research Incentives Calculator
How much should you pay participants? Let us do the math.

What's the right incentive rate for a 60-minute interview with software engineers? A 15-minute unmoderated test of an ecommerce app? A 6-month diary study? Use our free, data-backed User Research Incentives Calculator to get a customized recommendation for your next study.

97% of researchers experience challenges during recruitment.

The vast majority of researchers in our audience (97%) experience some type of recruiting pain.

The most common challenge is finding enough participants who match their criteria (70%), followed by slow recruitment times (45%) and too many no-shows/unreliable participants (41%). 

User Interviews customers were less likely to say they experience these pain points than non-customers.

Pain points in research recruiting

Nearly 3/4 of researchers (70%, N=654) say that finding enough participants is a challenge. Other common pain points include slow recruiting timelines (45%, N=416), too many no-shows and unreliable participants (41%, N=379), bad fit between sample and recruiting criteria (37%, N=345), and managing a panel of their own participants (34%, N=317). Responses are not mutually exclusive.

User Interviews customers are also less likely to report administrative challenges, such as scheduling sessions, distributing incentives, and collecting NDAs.

Some of the most common pain points seem to be alleviated by Research Ops. This is especially true when it comes to managing a panel of participants: 41% of people without a ReOps Specialist found it to be a pain point, compared to 30% of those with Research Ops.

User research tools

If you’ve seen our UX Research Tools Map—an illustrated guide to the ever-changing user research software landscape—you’ll know that we spend a lot of time thinking about UXR tools around here. 

That’s because the tools we use can shape the work we do, and the way we do it. 

In our survey, we asked not only about the tools people use, but also how they use them in the course of their research. We collected a lot of data, so you can expect a separate report on UX research tools later this year (as well as the upcoming 5th edition of our UX Research Tools Map). Consider this section a precursor to future tools content!

If you haven't already, subscribe to our newsletter to be the first to know when future tools reports are published.

See Appendix.

Explore the audiences you can recruit with User Interviews

Check out our transparent, in-depth Panel Report for the low-down on our network of 3 million+ quality participants, including professionals in 73k occupations and 140 industries.

User Interviews is the most popular dedicated recruiting solution.

While we’re on the subject of research recruiting, let’s talk about recruiting and panel management tools.

User Interviews topped the list of recruiting tools among our audience (29% of people said they use our tools), followed by Respondent (12%), Salesforce (10%), HubSpot (6%), and TestingTime (6%).

While many recruiting tools now offer features for panel management, the most popular solutions for this are actually general-purpose tools like Google Workspace (22%) and Microsoft 365 (16%).

User Interviews is an exception—it’s the most popular dedicated panel management tool (10%) and 3rd most commonly used solution overall, followed by additional general-use tools like Airtable (5%), Notion (5%), Miro (4%), and Slack (3%).

Note: Our own customers make up 35% of our total survey audience (this includes both Recruit and Research Hub customers). We purposefully recruited outside our customer base and followers to expand our audience and reduce this bias.

Top panel management tools

User Interviews is the most popular purpose-made  solution for panel management among our survey audience—used by 10% of our total audience and 16% of people who use any panel management tool (N=92). Responses are not mutually exclusive.

Research runs on Dunkin’ (er, general-purpose) software.

General-purpose tools play a crucial role in all stages of user research, forming the backbone of most tool stacks. Figma tops the list—a whopping 81% of researchers use the tool for prototyping/design, while 31% use FigJam, the company’s flexible whiteboarding tool. Google Workspace is somewhat more popular than Microsoft 365 among our audience (57% vs. 51%), followed by Miro (51%).

General-purpose tools used for research

General-purpose tools form the backbone of most user research tool stacks. Office suites—like Google Workspace (57%, N=532) and Microsoft 365 (51%, N=476)—and flexible whiteboarding and prototyping tools—like Figma (81%, N=417; FigJam: 31%, N=292)—are the most widely used. Responses are not mutually exclusive. Sample size for Figma = 518 (for all others, N=929).

Of course, no 2023 tool stack would be complete without a solid video conferencing solution. Zoom remains the biggest player here, and is used by 68% of our audience. Google Meet is the second-most popular option (41%), followed by Microsoft Teams (35%), and trailed by WebEx and WhatsApp (11% each).

There are 13 tools in the average UXR toolkit.

Researchers use an average of 13 tools to conduct their research. While general-purpose tools form the foundation of most UXR tool stacks, there are plenty of research-specific tasks that require purpose-made solutions.

Luckily, there is no shortage of options out there. (Indeed, our most recent UX Research Tools Map included over 230 products.)

In our survey, we found that, in contrast to general-purpose tools (where popularity is generally concentrated in a handful of well-established product suites), usage is spread more widely across an array of made-for-UXR tools, with smaller percentages reporting use of any one product.

The most popular made-for-UXR tools overall are:

  1. UserTesting (39%)
  2. Dovetail (36%)
  3. Qualtrics (34%)
  4. SurveyMonkey (28%)
  5. Optimal Workshop (19%)

[Interactive dropdown: the most popular tools for different UXR use cases]

Be the first to know when the 2023 UX Research Tools Map drops.

Subscribe to Fresh Views, our weekly newsletter, to get notified about future data reports and UXR content. (We'll also send you a copy of the State of User Research 2023 survey data to explore.)


Money Matters

Let’s talk turkey. It doesn’t matter how much you love the work you do—unless you inherited vast amounts of generational wealth and are truly working for the sheer joy of it (looking at you, Julia Louis-Dreyfus), the money matters. We all have rents to pay, cats to feed, and well-deserved vacations to bankroll.

In this next section, we’ll be sharing our findings on median UXR income in the United States and elsewhere. But we’ll also be talking about some of the ways those salaries have been impacted in the last 12 months—from raises and bonuses to layoffs and pay cuts—and other changes that researchers have seen as a result of the current economy.

We promise it’s not all bad news, though there is plenty of that. 

(We’ll be digging into UX Researcher salaries more thoroughly in a separate report later this year, so consider this next section a preview of things to come.)

User Researcher salaries are 49-323% higher than national median incomes.

To analyze UXR salaries, we had to break things down by geography. That’s because North America—particularly the United States—offers average and median salaries for UXRs that are significantly higher than those elsewhere in the world.

We also had to use our judgment when interpreting some of the open-response answers we received. (You can read more about those judgment calls and assumptions in the Appendix—look for the 🝋 symbol.)

But on the whole, we found that researchers are a well-paid bunch. The median researcher salary was considerably higher than national benchmarks for every country we looked at.

Among US researchers in our audience, for example, the median salary is $141,500—or 149% higher than the median US salary of $56,940 (2023 data, Bureau of Labor Statistics). The median salary among our UK audience is £63,648 GBP ($81,000 USD)—91% higher than the median UK salary of £33,280 GBP or $42,368 USD (2022 data, Office for National Statistics).

The median salary for Brazilian researchers (converted to USD) is $29,797 (323% higher than typical yearly earnings of $7,044—April 2023 data, CEIC); our German researchers earn a median salary of $75,514 (57% higher than the national median salary of $48,210—2022 data, Gehaltsatlas); median income for Spanish researchers is 108% higher than typical earnings, for Australian researchers the difference is +49%... and well, you get the picture.
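
(If you want to check our math: the percent differences above are simple ratios of the researcher median to the national benchmark. A minimal sketch in Python, using the figures as cited in this section:)

```python
def percent_above(researcher_median: float, national_median: float) -> float:
    """Percent by which the researcher median exceeds the national median."""
    return (researcher_median / national_median - 1) * 100

# Figures as cited above (researcher median vs. national benchmark)
print(round(percent_above(141_500, 56_940)))  # 149 -> United States (USD)
print(round(percent_above(63_648, 33_280)))   # 91  -> United Kingdom (GBP)
print(round(percent_above(29_797, 7_044)))    # 323 -> Brazil (converted to USD)
print(round(percent_above(75_514, 48_210)))   # 57  -> Germany (converted to USD)
```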

It is worth bearing in mind that our sample sizes for regions outside North America and Europe were small, and therefore reflect likely trends rather than definitive ones. 

That said, it appears that African UXRs receive the lowest median and average salaries, followed by Asia (the second-lowest median), and Latin America and the Caribbean. Oceania has relatively high average and median salaries, especially for managers.

The majority of UXRs in the United States earn between $100–$200k.

A plurality (37%) of US-based UXRs at the individual contributor (IC) level earn between $100,000 and $149,000, with another 30% earning between $150,000 and $200,000 per year. 

Most (66%) Managers remain in these income brackets, with an additional 29% reporting salaries between $200,000 and $499,999.

Median UXR salaries by years of experience

United States vs. Europe

Median US salaries increase steadily with experience at an average rate of +15% from one bracket to the next. Growth in median salary for European UXRs fluctuates up until about 7-9 years of experience, after which point the median salary increases at a relatively steady rate, commensurate with experience.

UXRs in the US earn about 2.5x as much as their European counterparts.

European UXRs earn less than their American counterparts. In fact, the median UXR salary for both ICs and Managers is 155% higher in the United States than in Europe. (For those at the Director to VP level, the difference is 114%). 

By comparison with the US figures above, in Europe, the majority of UXRs at the IC level (67%) earn between $25,000 and $74,999. Most of those folks can expect to remain in that income bracket at the Manager level (45% of European UXR Managers report salaries within this range), while another 21% earn between $75,000 and $99,999 per year.

Happening now

Layoffs, hiring freezes, and cutbacks (oh my)!

We can’t talk about the state of User Research in 2023 without addressing the elephant in the room: namely, the widespread layoffs and cutbacks that have hit our industry and our colleagues so hard amid the current economic downturn.

Over a third (35%) of our researchers experienced a negative change in their compensation and/or benefits this past year. These changes range from reductions in benefits like home office stipends (reported by 16%) to cuts to actual base pay (reported by 5% of our audience).

Changes to researchers’ benefit packages | Q2 2022 - Q2 2023

A significant minority of researchers said their company reduced or eliminated benefits between May 2022 and May 2023. These include home office stipends or commuter benefits 16% (N=147), PTO or vacation time, (5%, N=44), health insurance (5%, N=43), and retirement plans 2% (N=23). Not shown: “None of the above” (N=380) and “I do not know” (N=52). Responses not mutually exclusive.

50% of researchers were directly or indirectly impacted by layoffs in the last 12 months.

Half of the people in our survey were affected by layoffs this past year.

The majority of those folks (77%) said that their organizations laid off non-researchers, while 43% lost fellow researchers as a result of personnel cuts. And a fifth of those affected (11% of our total audience) were actually laid off themselves.

Layoffs in User Research | Q2 2022 - Q2 2023

A bar chart showing layoffs in User Research over the last year. A full 50% (N=466) of the researchers in our survey said they were directly or indirectly affected by layoffs in the last 12 months (between May 2022 and May 2023). A fifth of those people (11% of our total audience) were laid off themselves. Responses not mutually exclusive.

In open-ended responses, researchers told us that seeing so many of their colleagues let go has been taking a toll:

“Every day, my LinkedIn feed is filled with more people being laid off, and I just hope I'm not next.”

Some people also reiterated their concerns about democratization, and its possible contribution to the recent spate of UXR layoffs:

“I worry that research isn't seen as a specialized function worth keeping around. Democratizing research is good for increasing insights and buy in, but I fear it makes it appear anyone can do our jobs.”

Widespread hiring freezes present a challenge to UXR jobseekers. 

Amid a climate of layoffs, and for the same reasons, many companies have stopped or drastically reduced hiring: 59% of researchers reported that their team experienced a hiring freeze in the last 12 months, and two-thirds of that group said the freeze was still in place, with their company yet to resume hiring as usual.

Hiring freezes were more common at larger companies: 55% of people at companies with 10,000+ employees said that a hiring freeze was still in place, compared to 24% of folks at companies with fewer than 50 employees.

Hiring freezes in User Research | Q2 2022 - Q2 2023

A pie chart showing hiring freezes in User Research over the last year. Over half of our audience (59%, N=545) reported that their team experienced a hiring freeze within the last 12 months (May 2022 to May 2023). The majority of this group (N=400) said that the freeze was still in effect at the time our survey was taken (May 2023). Conversely, 17% (N=156) reported that hiring on their team has actually picked up speed in that timeframe.

It’s not all doom and gloom.

As disheartening as these stats are, they don’t give us the full picture. 

In fact, 17% of the folks in our survey said that at their companies, hiring has actually accelerated over the past 12 months. Just as hiring freezes were more common in larger orgs, folks at smaller companies were more likely to report that hiring had picked up speed since May 2022.

Changes to researcher salaries | Q2 2022 - Q2 2023

A bar graph showing changes to user researchers' salaries over the last year. Despite the widespread layoffs and hiring freezes reported by our audience, 60% (N=561) also reported an increase in their base salary between May 2022 and May 2023. 404 people (43%) received a raise as a result of job performance, while 238 researchers (26%) reported a cost-of-living or other non-merit-based bump in pay.

There’s good news at the individual level, too: 60% of researchers said that they received a positive adjustment to their base salary. This includes the 43% of people who received a raise in the last 12 months as a result of job performance (nicely done, everyone!) and the 26% who received a non-performance-based bump (e.g. a cost-of-living adjustment or market-rate calibration). Some people received both; in fact, since those two groups sum to 69%, at least 9% of researchers must have gotten a raise of each kind.

And 11% said they received a larger-than-expected bonus in the last year, while 16% reported that their company expanded or introduced new employee benefits.

An illustration of figures parachuting among abstract web browser windows, clouds, emails, and icons.

Feelings

We probably don’t have to tell you that it’s not easy to quantify feelings. But we still tried. 

In our survey, we asked our audience to rate their overall fulfillment at work, their satisfaction with several job factors, and their feelings about current trends in the industry on scales from 1 to 5.

Some of our findings—like the correlation between remoteness and work-life balance, or between efforts to track research impact and overall satisfaction—have already been discussed in the sections above. 

In this last section, we’ll take a closer look at the relationship between different job factors and overall fulfillment, as well as researcher opinions regarding AI, data privacy, democratization, job security, and the level of diversity in the field.

hand-drawn icon of the world
Happening now

In this economy?

We asked people to rate their feelings about current topics in the industry on a scale from 1 (very concerned/dissatisfied) to 5 (very excited/satisfied).

On average, feelings are more or less neutral—except when it comes to the economy, a subject that has many researchers sweating, with an average score of 2.05/5.

When we asked folks to explain the score they gave, some just raised an eyebrow at the question: “Have you been outside lately?” quipped one person.

“Do I even need to put anything here?” asked another. “Every day there is a new headline about the state of our economy. We are in tatters.”

Overall job fulfillment by average satisfaction score

A bar graph showing overall job fulfillment by average satisfaction score. Researchers were asked to rate their feelings about current topics on a scale from 1 (very concerned/dissatisfied) to 5 (very excited/satisfied). Average sentiment scores were neutral or slightly negative (2.72 to 3.05) for most topics, except the economy—researchers rated their feelings on this subject as 2.03 on average.

Individual contributors seem the most anxious about the current economic climate (rating their feelings on the matter at 1.93/5 on average), as well as about job security and opportunities for growth in this field.

Freelancers and self-employed researchers (who, perhaps, feel somewhat more in control of their next paycheck) were the least concerned about the economy, with an average score of a still-low 2.57/5.

Analyzed along geographic lines, our 22 African researchers (who had the highest average sentiment score for nearly every topic) seem the least concerned about the economy; this group rated their feelings 3.77 out of 5, on average. 

Meanwhile, folks in Australia and New Zealand (N=24) are stressed—they gave the lowest scores in every category except diversity (for which they were the second-most negative group after North American researchers), rating their feelings about the economy 1.83 out of 5, on average. (Note that these sample sizes are small and these findings should be taken as potential trends.)

Perhaps most worryingly, our core audience of UXRs gave the lowest average sentiment scores for all topics except data privacy (PWDRs have the lowest rating there), trailing other groups by margins of 0.28 to 0.53 points. In open-ended responses, they expressed concerns about job security and opportunities for growth in their own field.

“The tech market [has not been] going well for close to a year now. A lot of the time when layoffs happen, researchers are let go. It makes me nervous to pursue challenges and goals in my career and I'm afraid I'll need to stay put at my job [...] I think it’s not a good time for big life decisions and career-wise it takes a toll.”

Indeed—as we’ll see in just a moment—career stagnation and a lack of opportunities for growth play a major role in overall job fulfillment.

Researchers (still) want a clear and fulfilling path for career growth.

We compared how researchers rated their overall fulfillment at work on a scale from 1 (very unfulfilled) to 5 (very fulfilled) to how they rated their feelings about several job factors on a scale from 1 (very dissatisfied) to 5 (very satisfied). 

Among the factors that appear to have an impact on overall fulfillment (based on large differences between the average fulfillment scores of folks who are “very dissatisfied” or “very satisfied”) are:

• Confidence in their company leadership/outlook (2.55 vs. 4.33)
• Buy-in from their peers about the importance of research (2.37 vs. 3.95)
• The level of bureaucracy involved in day-to-day decision making (2.75 vs. 4.23)

But of all the factors we looked at, how satisfied people are with their opportunities for career growth had the strongest correlation with overall job fulfillment.

People who are very dissatisfied in this regard had one of the lowest average fulfillment scores (2.38—1.12 points below the overall average), while people who are very satisfied had the highest average fulfillment score (4.50) of any segment that we analyzed (including job titles, geographic regions, research practice models, team size, etc.).

In other words, people who are happy with their path for growth are the most fulfilled at work, full stop. This finding is consistent with last year’s report, when we shared similar takeaways.
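If you'd like to replicate this kind of analysis on the downloadable dataset, the approach boils down to grouping respondents by their rating on one factor and averaging their fulfillment scores. Here's a rough sketch with toy data and invented column names, not our actual pipeline:

```python
import pandas as pd

# Toy responses: 1-5 satisfaction with career growth and 1-5 overall
# fulfillment. Column names are invented for illustration.
df = pd.DataFrame({
    "growth_satisfaction": [1, 1, 2, 3, 3, 4, 5, 5, 5],
    "overall_fulfillment": [2, 3, 3, 3, 4, 4, 4, 5, 5],
})

# Mean fulfillment at each satisfaction level; a wide gap between the
# means for ratings of 1 (very dissatisfied) and 5 (very satisfied)
# suggests the factor is strongly associated with overall fulfillment.
print(df.groupby("growth_satisfaction")["overall_fulfillment"].mean())
```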

The future state of User Research

In this report, we’ve talked about the current state of user research teams, the methods researchers use to recruit and research, UXR salaries, and researcher sentiment about their jobs and trends within the industry. But what about tomorrow? Where do we go from here?

User Research is not a monolith. That much is clear from the diversity of answers that we received. But if we had to summarize the findings of this report into a single takeaway, it would be this: User Research is changing. 

The UXR landscape looks very different at this point in 2023 than it did in 2022 (which in turn looked different than in 2021, and so on). 

Looking around at this landscape, our audience feels somewhat meh about the future of User Research overall (3.6 out of 5, on average). And look, the people who tell you that User Research as you know it is dying are half right; the future of this field will not look the same as it does today. That is, frankly, inevitable. But it does not have to be a bad thing.

Ours is still a relatively young industry, one that is coming of age in an era marked by a global pandemic, looming recession, protest, war, disruptive new tech, deepening societal divisions… the list goes on. User Research is experiencing some growing pains. There are challenges ahead, but there are opportunities, too.

As Gregg Bernstein, author of Research Practice and a recent guest on Awkward Silences, explained, there is no one way to do UX research:

“It’s a long journey from the place we’re hired to the place we think we should be, but we’re not without agency. We’re researchers—our superpower is to take stock of a complex scenario and spot the possible paths forward.”

Once again, we are enormously grateful to our partners, colleagues, readers, and—most of all—the 929 researchers who took our survey and made this report possible.

If you’d like to explore their answers in more depth, feel free to download the (anonymized) dataset and run your own analysis. And if you uncover any big ‘aha’ insights that we missed, let us know!

Download the survey results

Appendix

🝋 We asked for salaries in USD (and provided a link to a currency calculator), using an open-response field. We failed to specify that we were looking for annual salaries (rather than monthly or weekly); annual figures are the standard when discussing salaries in the United States, whereas monthly figures are more typical in other regions. As a result, 129 responses required interpretation. For transparency, the assumptions and changes we made were as follows:

1. If the input salary was much lower than expected (between $300 and $9,000 USD) for individuals located outside the United States and Canada, we presumed that the participant had entered their monthly salary, rather than annual. We multiplied this value by 12, and confirmed that the resulting amount was within a reasonable range for local salaries based on a quick Google search.

2. If a participant from the United States or Canada entered a 3-digit salary that appeared to be shorthand for a typical salary (based on that individual’s location, seniority, and years of experience), we multiplied the response by 1,000. For example, in the case of a VP/Senior VP in the US with 15+ years of experience, we interpreted the input “350” as $350,000.

3. Responses of $0 were excluded from median and average calculations. When a response was inscrutable (i.e. far outside the expected range, but did not meet either criterion above), we also excluded it from our calculations.
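Mechanized, those rules might look something like the sketch below. (Our actual cleaning was a manual, case-by-case review rather than a script; the thresholds here come straight from the notes above, and the sanity checks against local salary ranges are not codified.)

```python
def clean_salary(usd: float, in_us_or_canada: bool) -> float | None:
    """A rough codification of salary-cleaning notes 1-3 above."""
    if usd == 0:
        return None  # note 3: $0 responses excluded from medians/averages
    if not in_us_or_canada and 300 <= usd <= 9_000:
        return usd * 12  # note 1: likely a monthly figure; annualize it
    if in_us_or_canada and 100 <= usd <= 999:
        return usd * 1_000  # note 2: likely shorthand, e.g. "350" -> $350,000
    return usd
```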

◕ Due to an error in skip logic, the sample size for most tools-related questions was reduced from 929 to 518. The sample of 518 represents researchers who said they used any one of the research recruitment tools we asked about in Q58 (i.e. it excludes the 411 participants who selected “None of the above”).

Those tools were: ARCS, Askable, Disqo, Ethnio, Great Question, HubSpot, HubUX, PanelFox, RallyUXR, Respondent, Salesforce, TestingTime, User Interviews, or an open-response “Other” option.

We have used percentages when comparing tools with different Ns.

