Dive into the responses to understand how budgets change according to industry, company size, budget-setting department, self-assessed maturity, and more.
These days, budgets are a sore subject. Businesses have been bracing for a possible recession for years, and it seems like research teams are often the ones facing cuts as a result. In our 2024 State of User Research report, participants reported more layoffs and fewer dedicated Researchers and ReOps specialists. And yet, despite these cuts, something surprising also emerged: At most companies, buy-in for research overall remained strong–and has been steadily growing over the past few years.
“It’s a strange time where we are needed more than ever, and business leaders acknowledge the value of our work, yet there is resistance to invest,” wrote one State of User Research survey participant.
Money matters, but the word “investment” is far too vague. It’s usually based on a gut feeling about how research compares to other departments, not on solid facts. So how can teams know if they’re underfunded when there are no clear data or benchmarks showing what typical research spending looks like today?
Once we asked that question, we realized no one has really taken a broad look at research budgets. (A quick Google search shows that the most commonly cited numbers still come from our 2021 State of User Research report!)
Knowledge is power–and current, accurate knowledge is even better. So, we surveyed 180 people who either control or have insight into research budgets at organizations of all sizes and research maturities. We asked about how their budgets are set, how they’re used, how people feel about them, what’s changing now, and what they expect in the future.
We’ve aimed for a comprehensive view of research budgets, but we know your needs may be more specific. That’s why we’re sharing the full dataset, so you can explore how budgets vary by industry, company size, budget-setting department, self-assessed maturity level, and more.
Use this report as a benchmark to see how your team stacks up against others with similar budgets. It might help you make better decisions on how your research dollars are spent, spot areas for improvement, or even make the case for greater investment.
At first glance, research budgets don’t look as bad as the headlines suggest. Yes, some budgets decreased year over year, but most companies kept their budgets steady or even increased them a bit.
Still, a common challenge arose: the biggest budget hurdle for research is often deciding between growing the team and investing in improved efficiency. Headcount takes up nearly one third of the average research budget, and many teams find it hard to cover salaries, which participants said can run two or three times higher than other research expenses, while also paying for the tools and resources needed to do the work.
This sentiment matches what we found in our 2024 UX Research Salary Report: fewer ICs and managers are making over $150,000 than before. This trend is seen elsewhere too. In 2024, the BBC reported that salaries for new tech jobs in the U.S. are stagnating or even dropping. A recent Dice report showed entry-level tech roles are being hired at lower pay year after year. However, that same report found that companies are investing more in retraining skilled professionals who haven’t yet hit top pay.
With ongoing talks about AI and democratization, many teams are trying to do more research with fewer dedicated hires. They’re focusing on tools, different methods, and training to improve efficiency before adding new research roles. That shift suggests the challenge isn’t just temporary—it’s a long-term strategy. For many, the answer isn’t simply “hire more,” but “invest wisely” in the people or tools that best link research to business results.
So let’s take a closer look at the data to understand how we arrived at that big picture:
But those with low maturity are more likely to have smaller budgets.
We asked respondents about their total research budget last fiscal year.1 In this report, we will refer to budgets in four categories: low (<$25K), low-mid ($25K–$100K), high-mid ($100K–$500K), and high investment ($500K+). Across the sample, budgets were fairly evenly distributed. While the plurality of research budgets (29%) fell in the lowest investment group, the remaining tiers were similarly represented, even when looking at industry, maturity, and company size. One fifth each had low-mid or high-mid investment, and just under one fifth (17%) said their budget was in the highest investment group.
For the most part, budget size didn’t correlate with maturity. For the highest and middle maturity teams, budgets were equally distributed across the four budget categories (low, low-mid, high-mid, and high). The only exception to this was low maturity teams: 73% of those who rated themselves a 1 or 2 in this category fell into the lowest budget bucket—suggesting that less mature teams may still be building their case for funding.
1. Last fiscal year/this fiscal year: Not every budget resets in January. We based our baseline budgets on last fiscal year’s budget so it would be a static number for every participant. When we asked comparative forecast questions regarding the current fiscal year, we referred to an organization’s budget currently in use.
Click through the graphics to get a good sense of the companies, teams, and research strategies behind each budget size.
Only 17% of participants said that their research budget shrank.
We asked participants how last year’s budget compares to this fiscal year’s budget so far. If it was still being finalized, we asked them to estimate. The largest share (41%) of participants said their budget for this year was about the same as last year, indicating stability in funding for a large portion of teams. Another 35% of respondents said this year’s budget grew. On the other end of the spectrum, 17% said their budget shrank compared to last year.
Of the respondents who said their budgets grew significantly, 31% said they had achieved the highest level of maturity, aka “user-driven” status. Of those who said their budgets shrank significantly, none had achieved this status, suggesting a strong correlation between higher maturity and budget growth.
Teams pay for an average of 2-5 tools with their research budget.
On average, 71% of budgets were spent on three things: headcount, tools, and participant recruitment. Across the entire sample, the biggest cost was headcount, eating up nearly a third of budgets (32%). The next highest expenditure category was participant recruitment, with a fifth of budgets (20%) allocated to recruitment services and participant incentives. Rounding out the top three was tooling/platforms, with teams spending nearly one fifth (19%) of their budgets on this category.
We asked participants to estimate how many of their tools or platforms are paid with research funds. The majority (58%) of participants said that their budget accounts for 2-5 research tools.
We’ve converted these percentages into numbers by budget size and average allocation to help you compare with your own expenditures.
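If you’d like to run the same conversion against your own numbers, here is a minimal sketch of the arithmetic. It assumes the sample-wide average allocations cited above (32% headcount, 20% recruitment, 19% tools, with the remainder grouped as “other”); the $250K budget in the example is hypothetical, and your own allocation percentages may differ.

```python
# Sketch: convert average allocation percentages into dollar figures
# for a given total research budget. Percentages are the sample-wide
# averages from this report; "other" is the remainder (vendors,
# training, events, etc.).
AVERAGE_ALLOCATION = {
    "headcount": 0.32,
    "participant_recruitment": 0.20,
    "tools_platforms": 0.19,
    "other": 0.29,
}

def dollars_by_category(total_budget: float) -> dict[str, float]:
    """Split a total research budget using the average allocation."""
    return {
        category: round(total_budget * share, 2)
        for category, share in AVERAGE_ALLOCATION.items()
    }

# Example: a hypothetical $250K (high-mid investment) budget
print(dollars_by_category(250_000))
```

Swapping in your own budget total and allocation shares gives a quick comparison point against the averages reported here.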
But participants were more likely to be very satisfied than very dissatisfied.
We asked survey participants “How satisfied are you with the size of your research budget?” The majority of respondents (51%) were either satisfied or very satisfied with budgets, 27% were neutral, and roughly 21% said they were dissatisfied or very dissatisfied.
We also asked participants to estimate their company leadership’s satisfaction levels. Participants estimated that their leadership was more satisfied with the research budgets than they were themselves.
We asked participants to rate the relationship between research budget and research impact, with 0 being no relationship and 100 being a perfect relationship. We found participants rated a strong relationship (70 median) between budget amount and perceived impact.
When we broke down responses by budget size, a clear trend emerged: the bigger the budget, the greater the perceived impact of research within the organization. One exception stood out—teams with budgets between $100K and $500K reported a slight drop in perceived impact. This dip may reflect a transition phase: organizations at this funding level are likely hiring their first dedicated Researcher. As teams scale their capabilities, growing pains are likely to emerge.
But the C-Suite still holds the purse strings for most.
We asked participants which team or department sets the budget. For the highest percentage of respondents, the C-Suite set the budget (29%). However, Research or Research Operations (20%) or Product or Design (18%) also commonly set the budget.
Surprisingly, who set the budget had no real effect on budget size. It did, however, affect satisfaction. Respondents were most satisfied with their budget when Research or Research Operations set it, with 71% saying they were satisfied or very satisfied and only 17% saying they were dissatisfied. Satisfaction dropped 9 percentage points when the C-Suite set the budget, but dissatisfaction grew only 2 points. Respondents were most likely to be dissatisfied when Product or Design set the budget, with satisfaction dropping to 48% and dissatisfaction rising to 21%.
Given that 39% of our respondents sat on Research or Research Operations teams, this higher satisfaction when those functions set the budget may reflect a sense of ownership and alignment rather than purely objective outcomes. Looking only at responses from those who didn’t sit on these teams, sentiment becomes more apathetic: nearly a third (31%) of participants were neither satisfied nor dissatisfied with the budget.
We also asked which team or department controls the research budget. We found that Research or Research Operations teams were most likely to be in control of the research budget (36%), followed by C-Suite or Executive (23%), then Product or Design (15%).
When we asked respondents to rate the relationship between budget and research impact, we saw slight variances according to who controlled the budget. The strongest relationship was when Research or Research Operations controlled the budget (73), followed by the C-Suite/Executive control (70), and Product/Design (64). When removing Research/Research Operations team members from the sample, the pattern remained the same: The strongest relationship was with Research/Research Operations (74), followed by C-Suite/Executive (68) and Product/Design (66).
We asked participants how often their research budgets were reviewed and/or adjusted. A little over one third (38%) of participants said budgets are reviewed or adjusted annually. However, a quarter said budgets were set on an ad hoc basis, with adjustments made as needed or based on project scope.
Of those groups, those who set their budgets annually were slightly less likely to grow their budget year over year (-4%) and slightly more likely (+4%) to shrink it than those who set their budgets on an ad hoc basis.
In an open-response question, we asked participants which expense was hardest to justify spending research budget on, and why. The top three hardest expenses to justify were:
Representative quotes:
“It’s so difficult and time-consuming to determine and communicate ROI for UX tools. Often, executives don’t see the big picture until the software has been purchased and is being put to use, but it’s nearly impossible to get to that point without serious trust and deference to a UX team.”
“Management thinks that you can just do it by yourself. They’ve already paid you to be the expert. Why do you need tools? Why do you need software?”
Representative quotes:
“Foundational field research is most difficult to justify spending because of the high cost (recruitment, incentives, local moderation, translations, travel) relative to an in-lab usability study. Additionally, the research typically isn’t tied to a specific business decision or feature launch, so stakeholders don’t see the ‘tangible’ benefit.”
“The hardest thing to justify spending a research budget on is something that feels too far ahead of its time—the kind of moonshot idea that sounds cool but has no clear path to practical results. It might be fascinating, and maybe even groundbreaking one day, but when you’re sitting in front of a budget committee, it’s tough to explain how that’s going to help anyone within the next 5-10 years. It usually comes down to that tension between visionary research and measurable ROI.”
Representative quotes:
“We rely on product teams to forecast their needs for research but most are not mature enough in their understanding of the role of UXR to include us throughout the lifecycle. So they scramble at various points to find project funding to pull us in. If research was embedded in the lifecycle we could hire more people because the teams would include the resource in their estimates ahead of time.”
“A lot of UXR has been cut and those who leave are not backfilled. Instead, vendors and contractor workers are hired but they aren’t given proper tooling or background and the research suffers. It’s hard to justify a FTE because they are so expensive with benefits, etc. So we hire more and more contract people and overseas vendors.”
Quotes have been edited for length and clarity.
But navigating growing headcount across other opportunities is often a tricky balance.
The biggest gaps between what research budgets actually pay for and what participants would like them to pay for were in headcount, which participants wanted to spend an average of 3% less of their budgets on, and training, which they wanted to spend 2% more on.
But if we focus on how frequently participants chose to spend less versus spend more, a more nuanced story emerges. While respondents did want to trim the share of budget going to headcount, a slightly greater number said they wanted to spend more on it. Respondents also wished more of their budget went to training/development (39%), tools/software (36%), and participant recruitment/incentives (35%).
With double the budget, teams would try to increase research’s impact exponentially.
In another open-ended question, we asked participants to imagine that their research budget doubled overnight. How would they deploy those funds? Participants would increase investment in the following three areas:
Representative quotes:
“The more tools, the faster the research gets concluded.”
“Better data systems, automation, testing environments—things that make the whole team faster and smarter.”
Representative quotes:
“With another researcher, we could reduce the research load on our product managers, who are struggling under the weight of doing their own research alongside their other priorities.”
“Recruiting specialized talent would enhance our expertise and broaden project capabilities. This investment would ensure that we can scale operations and handle more complex projects without overburdening current staff.”
Representative quotes:
“I’d expand participant recruitment efforts—especially for hard-to-reach or underrepresented user groups—and increase incentive offerings to ensure better engagement and richer, more inclusive data.”
“I would invest in more participant recruitment and incentives to ensure diverse and high-quality data.”
Faced with budget cuts, teams would try to do more research with fewer resources.
Lastly, in an open-ended question, we asked participants to imagine what they’d prioritize if their research budget was cut in half overnight. Participants would manage by prioritizing:
“I would prioritize high-impact, tactical research that directly supports key product decisions and business goals. This means focusing on evaluative studies like usability testing, concept validation, and rapid feedback loops–research that helps teams make immediate, informed choices and avoid costly mistakes. These activities typically require fewer resources and deliver fast, actionable insights that keep product development aligned with user needs, even under tighter constraints.”
“Customer panel and in-depth interactions with profile customers.”
“I’d use tools and AI to focus on research efficiency.”
“We would prioritize tools that allowed the most people to complete studies. We would no longer experiment with new or cutting edge tools.”
“Personnel is the backbone of research. Preserving key internal headcount would be critical to ensure ongoing projects continue without disruption. Efficiencies could be sought through process improvements, but the expertise of our core team remains non-negotiable.”
“We can be scrappy if needed, but layoffs go beyond losing capacity – morale and culture shifts dramatically, and it’s difficult to recover.”
So yes, buy-in for research is growing each year, and often investment is too. (That said, the data in this report comes from people currently engaged in research work, which may shape the overall perspective presented.) But investment often comes in smaller increments that don’t always add up to new full-time headcount, and it’s not clear that it will, as tools become more impactful and democratized.
At the same time, teams know that to have research, you must have dedicated researchers. The question many budget holders and controllers are asking today is how many researchers are most effective.
Ultimately, Varun Murugesan, co-founder of Apple and Banana, said it best: being good at scrappy user research is like being the Jason Bourne of research—effective with what you have, even when you’re held back by a tight research budget or limited bandwidth. When budgets grow or shrink, the types of research and support change, but research is still happening within organizations and will continue to happen.
You’ve gotten the high-level picture; now dive into the responses to understand how budgets change according to industry, company size, budget-setting department, self-assessed maturity, and more.
research/Research teams: In this report, we will use “research” (with a lower-case r) as the catchall term for research that happens in an organization. We will use Research/Research Operations teams (with a capital R) to refer to those with a dedicated Research department.
Participant: The folks from Research, Design, Product, Engineering, etc. teams that responded to our survey.
Budget: The total amount allocated to spend on research, including headcount, tooling, recruitment/incentives, outside vendors, training/development, events, etc.
Research tools: Tools/software/equipment used for research and paid from the research budget. Does not include tools that come from other teams’ budgets.
Setting budget: Determining (or working with finance to determine) the spendable amount for the fiscal year.
Controlling budget: Approving purchases made against the research budget.
Research budget allocation: How budgets are divided up among the following categories: internal headcount/staffing, tools/software, participant recruitment/incentives, outside vendor support (excluding recruitment), training/professional development, retreats/events, and other (swag, appreciation, gifts, etc.). The allocation must total 100%.
From March 31 to April 8, we collected responses via SurveyMonkey after promoting the survey via our weekly newsletter (Fresh Views), a dedicated email send, social media, and an in-product slideout; we posted the survey in research-related groups on LinkedIn and Slack, and members of our team and friends within the UX research community shared the survey with their own professional networks. To be included in the survey, participants needed to either control the research budget (either alone or with others), or have visibility into the research budget. If a participant did not know an answer, we asked them to skip the question and adjusted the corresponding sample size for analysis. Each response was manually inspected for fraud. Qualified participants received a $5 incentive for their responses.
Our final data set consisted of 180 qualified respondents with the following demographic breakdown:
We first asked respondents “Which of the following best describes you?” Participants who were not sure or did not have control or visibility into the research budget were disqualified. Of the qualified participants, 52% controlled the budget (either alone or with others), and 48% had visibility without control.
Of this visibility/no control group, 41% knew how the budget was spent, 27% could roughly estimate spend, 22% could closely estimate spend, and 10% knew the general budget amounts.
Next, we asked participants to best describe their current role level. About a fourth of those surveyed were individual contributors such as strategists and specialists (27%), and about another fourth were managers or senior managers (26%). About a fifth each were staff or principal ICs, or held C-level, Director, or VP roles. And a tenth were either advisors/consultants (4%) or other (6%), including freelancers and cross-disciplinary roles.
We asked respondents what team they currently sit on. If they were on a combination team, we asked them to choose what team reflects their work most of the time.
Nearly two-fifths of respondents (39%) sat on a Research or Research Operations team. More than a quarter (26%) reported into either UX, Product Design, or Product Management. Another quarter reported into either Engineering, Data Science, or Insights (13%), or Sales, Marketing, or Customer Success (13%). Finally, a tenth of respondents sat on other teams (4%) or were the executive/founders of their organizations (6%).
We asked participants to roughly estimate how many employees their company has. We will refer to company size in three groups: small businesses (1-100 employees), medium-sized businesses (101-1000 employees), and large enterprises (1000+ employees). Two-fifths of respondents were from small businesses, a little over a third were from medium-sized businesses, and roughly a quarter were from the largest enterprises.
We asked participants to best describe their company’s industry, using Indeed’s 19 types of industries. The most represented industry was Computer and Technology, with 31% of respondents working in tech. Many respondents were also from advertising/marketing, education, healthcare, and finance.
We also had respondents rate their current team or department’s level of research maturity, in their own opinion. We used NN/g’s maturity model, paring the six UX maturity levels down into three tiers:
Roughly two thirds of respondents ranked themselves in middling maturity. A little less than one third ranked themselves as functioning at the highest maturity. Less than a tenth of respondents said they were at the most basic levels of maturity.