Congratulations, you’ve completed your study! Maybe you conducted surveys with existing users, or had conversations with potential users. Whatever type of study you did and however many participants you recruited, you’ve gathered a lot of information and data.
Whether that data is qualitative or quantitative, the next step after completing any study is analysis.
In this chapter, we’ll go over everything you need to know about analyzing your data, teasing out meaningful insights, and synthesizing research to share with your stakeholders.
Research analysis is the umbrella term used to define the process of classifying, organizing, and transforming raw data into valuable information, and eventually a conclusion. When performed correctly, your analysis will generate the building blocks you’ll need to construct your research deliverables.
As UX Researcher JJ Knowles said:
“It’s the act of taking raw data and turning it into something useful.”
Since data can be interpreted in an infinite number of ways, part of your job as a researcher is to decide how to analyze your data and use it to tell a compelling story. The methods you use to analyze your data will depend on the methods you used to gather it.
Qualitative interview analysis and quantitative survey analysis are two very different beasts! Yet the ultimate reason for doing research analysis is the same:
“Regardless of whether you’re dealing with a numeric dataset or a verbal interview, you’re always on the lookout for patterns and themes that can tell you something meaningful about the user, the product, or both.” – Career Foundry
Research analysis, as defined above, is the process of sorting and categorizing data.
Synthesis involves interpreting research data and pulling out insights and key findings that can be used to impact decisions.
Synthesis may follow once all the analysis is done, or the two processes might happen more or less in tandem, depending on the methods you use at this stage.
Together, research analysis and synthesis are key processes that create meaning from raw data. With this meaning, you can make better, more informed decisions.
This chapter comes late in our Field Guide, since you need to have data from user research in order to analyze it.
However, you should not be thinking about analysis for the first time after having already collected your data. Good research analysis starts at the very beginning of a project, before research even begins. As you’re creating your user research plan, you should be thinking about the types of analysis you want to do, what you want to learn, and how you’re going to use the data post-study.
Conducting some analysis and synthesis as you go will save you time at the end of a project, and gives you valuable snapshots to share with stakeholders.
In other words, analysis should occur before, during, and after your study.
Always be analyzing, folks.
As we’ve said time and time again throughout this Field Guide—the first thing you need to do is define your goals.
What are the objectives of this study? What do you want to learn? What are the key research questions?
Keep the scope of your research narrow, and then dig in. During the planning stage, think about how you will categorize and catalog your data. What kinds of themes do you expect to emerge?
Brainstorm tags that you can assign to data and session notes as you go—this will save you mountains of time at the end, and give everyone involved in the research a shared system for coding notes and insights.
You don’t have to wait until you’ve completed your study to begin your analysis. In fact, it’s often helpful to think about what your data might look like, and what it is starting to look like, as it’s being collected.
Periodic analysis may help you discover that you’re asking the wrong questions or even building the wrong product or feature. Analyzing your data, your variables for analysis, and the assumptions behind them means you’ll be able to catch mistakes and anomalies early on—which can end up saving the whole team a lot of time and money.
Analyzing and synthesizing as you go is also more efficient, and it helps ensure that you don’t miss important details that could end up shaping the quality of your final work.
The things you jot down during a session identify what was most important to you, the client, and other stakeholders in the moment. Use your predefined list of themes and tags to code your notes and data in real time. Then, give yourself a buffer of 15 minutes or more after each participant to review, analyze, and discuss the session.
By the time you sit down to complete the final deliverable, you will have already created much of the framework for your entire analysis.
To go beyond real-time coding, Roberta Dombrowski, VP of User Research at User Interviews, suggests the following workflow for analyzing, synthesizing, and sharing findings over the course of a research project:
Most research analysis happens at the end of a project. This is when you have all your raw data and (if you’ve followed our advice) tagged notes and initial analysis from individual sessions in hand.
At this stage, you’re looking for patterns and themes that exist between participants and data sets. If you’ve been synthesizing as you go, this will be a lot easier! Either way, you’ll still need to synthesize the results of your project, look for answers to your initial research questions, and provide stakeholders with actionable insights that enable further decision making.
Before you begin analyzing your data, you should be aware of some of the most common mistakes people make during analysis, including:
Some of these mistakes can be mitigated by developing your self-awareness through a reflexive practice.
Of course, analysis is challenging. With large quantities of rich, sometimes contradictory data, it’s easy to mishandle the analysis process unless you take an informed, systematic approach.
Let’s dive deeper into that analysis approach, and how it differs based on the type of data you’re working with.
Qualitative and quantitative data are two very different beasts—and for that reason, the analysis frameworks you apply to them will be quite different as well.
In quantitative UX analysis, you’re looking for patterns in the data you’ve collected that can generate insights into how, and why, people use a product.
Large datasets are often analyzed with tools like R, Python, or SPSS, or (for smaller datasets) in a spreadsheet. Common variables that are analyzed in quantitative UX data include:
In addition to answering questions about how people are using a product, you might also have the task of gauging the quality of the overall user experience through attitudinal surveys or behavioral UX data.
Demographic and geographic data are then layered into the analysis, in case they’re helpful in revealing patterns among certain groups of users.
It’s a lot of number crunching, but at heart, you’re trying to understand how people use a certain product, what problems they may be experiencing, and where improvements could be made.
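If you’re working in Python, one of the tools mentioned above, a few lines of pandas can handle a first pass at this kind of number crunching. The sketch below is a minimal, hypothetical example: the data, column names, and metrics (task completion rate, time on task, error counts) are invented for illustration, not a prescribed schema.

```python
# A minimal, hypothetical sketch of quantitative UX analysis with pandas.
# The rows, column names, and metrics are invented for illustration; a real
# project would load this from a CSV export or an analytics tool.
import pandas as pd

# Each row: one participant's result for a single usability task.
df = pd.DataFrame({
    "participant_id":   [1, 1, 2, 2, 3, 3],
    "age_group":        ["18-24", "18-24", "35-44", "35-44", "18-24", "18-24"],
    "task":             ["sign up", "invite", "sign up", "invite", "sign up", "invite"],
    "completed":        [1, 0, 1, 1, 0, 1],
    "time_on_task_sec": [42, 95, 38, 61, 120, 74],
    "error_count":      [0, 2, 1, 0, 3, 1],
})

# Task-level metrics: completion rate, median time on task, average errors.
task_summary = df.groupby("task").agg(
    completion_rate=("completed", "mean"),
    median_time_sec=("time_on_task_sec", "median"),
    avg_errors=("error_count", "mean"),
)
print(task_summary)

# Layer demographic data into the analysis to look for patterns among groups.
by_segment = df.groupby(["task", "age_group"])["completed"].mean().unstack()
print(by_segment)
```

From there, the same summary tables can be exported to a spreadsheet or charted for stakeholders.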
Because qualitative data can be wildly diverse in format and subjective in nature, there are very few hard-and-fast, agreed-upon rules as to how this data should be handled.
We’ll discuss some of the different data reduction methods in the “How to do data analysis in UX research” section below, along with a framework for doing qualitative analysis.
But regardless of which methods you use to collect and analyze qualitative data, there are some questions and practices that will make the process more focused and a lot less daunting.
These questions should be in the back of your mind the second you start collecting data. You may even want to make yourself a little printout or index card to keep on you as a reminder.
In this section, we’ll go over everything you need to know about analyzing your data and using it to tell a meaningful story. Specifically, we’ll discuss the following process for conducting UX research analysis:
The following steps and recommendations are geared more toward qualitative research than quantitative—for example, quantitative data may not require any kind of thematic coding. However, many of the basic principles (setting objectives, identifying significant trends, synthesizing, and writing recommendations) still apply. We’ll make note of essential differences where applicable.
Setting goals and analysis objectives ahead of time helps focus your study—and prevents you from collecting too much “noise.”
Product Designer Lucy Denton experienced this overload of “noisy” data while running a large-scale research project:
“We decided to interview 45 people…. I'm not sure in hindsight if speaking to so many people was the right approach. It made the analysis process really overwhelming and difficult.”
If she could do the project over again, she thinks she’d find ways to narrow the scope of the project—and much of that scoping could’ve taken place during the planning phase of the project.
By setting learning goals and carefully designing your study, you can collect just enough data to find meaningful insights without overwhelming yourself during analysis.
If you’re conducting data analysis for interviews, multiple rounds of focus groups, or ethnographic fieldwork, you can improve your efficiency by reviewing notes, videos, transcripts, or other materials from each session and jotting down initial impressions immediately after.
This process is called periodic analysis and its benefits include:
Ultimately, these benefits help you answer important research questions as efficiently and thoroughly as possible. Work smarter, not harder!
If you’re conducting user interviews with other team members, reconvene with your team after each conversation. Have a discussion about how your participants’ responses fit into your research questions. Does the whole team agree? Maybe you missed something that your teammate picked up on.
Periodic analysis is also useful in quantitative research, as going into a study with the wrong questions, metrics, or ranges can lead to big headaches down the line during analysis. By analyzing your variables, the assumptions behind them, and your data as you go, you’ll be able to catch mistakes and anomalies early on, and some of these may lead you to adjust your study.
For example, say your final data should follow a bimodal distribution: two peaks spread across a range of -4 to 4. Well, what if you assumed that the range you should be testing is -4 to 0? You would see just a single, roughly normal curve and miss one of your peaks entirely, because you limited your range from the get-go.
Considering the implications of certain anomalies and outliers early on in the process can save you lots of time and money. But the only way to catch these things early on, or sometimes at all, is if you are analyzing your data at each phase of the study.
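If you want to see this effect for yourself, here’s a quick, hypothetical simulation in Python. The distribution parameters are made up purely to illustrate how a truncated range can hide one of the peaks.

```python
# A toy simulation (with made-up numbers) showing how a too-narrow range can
# hide one peak of a bimodal distribution.
import numpy as np

rng = np.random.default_rng(0)

# "True" data: a mixture of two normal distributions with peaks near -2 and +2,
# spanning roughly -4 to 4.
data = np.concatenate([
    rng.normal(loc=-2.0, scale=0.7, size=1000),
    rng.normal(loc=2.0, scale=0.7, size=1000),
])

# Full range: the histogram shows both peaks.
full_counts, _ = np.histogram(data, bins=16, range=(-4, 4))
print("range -4 to 4:", full_counts)

# Truncated range: only the left peak remains, and the data looks like a
# single, roughly normal curve.
truncated_counts, _ = np.histogram(data, bins=8, range=(-4, 0))
print("range -4 to 0:", truncated_counts)
```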
Before tagging, organizing, or analyzing anything, scan through the entirety of your dataset to see what jumps out at you.
Just looking through your data ≠ analysis. But taking a moment to slow down and orient yourself to what’s there can make a massive difference in your ability to understand and apply analytical frameworks to the data.
Once you’ve gotten familiar with what’s there, you can start to sort it into an easier, more manageable form.
Qualitative data tends to yield a wealth of information, but not all of it is meaningful to your research goals. As the evaluator, it’s your job to sift through the raw data and find patterns, themes, and stories that are significant in the context of your research question.
This process of organizing your data is known as “qualitative data reduction.”
Data reduction is the process of thickening and intensifying the flavor of qualitative data by simmering or boiling.
Oh wait, no… that’s how you make a reduction sauce. Let’s try that again...
Data reduction is the process of transforming raw data into a simplified, ordered, and categorized form.
Basically, you’re reducing the volume of data into a summarized and more meaningful format… kind of like turning juice into a flavorful syrup by boiling away all the boring water.
There are several common ways to organize qualitative research data. The most common methods in a UX research context are thematic analysis, content analysis, and narrative analysis. Discourse analysis, framework analysis, and grounded theory, while less commonly used in user research, are three other methods worth noting.
Thematic analysis is a systematic approach to grouping data into themes that represent user needs, motivations, and behaviors. In some cases, these themes may be directly adapted from your learning goals and research questions, while in others, you may see these themes emerge after the data is collected.
Content analysis is a structured organization of large amounts of textual data using codes for certain words or themes. By assigning codes to different pieces of qualitative data (in a process called qualitative coding), you can begin to identify patterns and interpret their meanings.
For example, if you’re doing a study about how college students choose their majors, you might create codes for passion, salary considerations, and family ties, and identify how often these themes come up in participant responses.
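To make the mechanics concrete, here’s a deliberately simplified sketch of that coding-and-counting step in Python. The responses, codes, and keywords are invented for illustration; in practice, coding is usually done by a researcher reading each response (often in a dedicated analysis tool) rather than by keyword matching.

```python
# A minimal sketch of qualitative coding for content analysis.
# The responses and code keywords are invented, following the college-major
# example above; real coding relies on human judgment, not keyword matching.
from collections import Counter

responses = [
    "I picked biology because I love it, even if the salary isn't great.",
    "My parents are both engineers, so engineering felt like the family path.",
    "Honestly, I chose finance for the earning potential.",
]

# Codes mapped to keywords that signal each theme.
codes = {
    "passion": ["love", "passion", "enjoy"],
    "salary": ["salary", "earning", "pay"],
    "family": ["parents", "family"],
}

counts = Counter()
for response in responses:
    text = response.lower()
    for code, keywords in codes.items():
        if any(keyword in text for keyword in keywords):
            counts[code] += 1

print(counts)  # e.g. Counter({'salary': 2, 'passion': 1, 'family': 1})
```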
Narrative analysis is a framework for understanding the stories people tell and the ways in which they’re told. For example, you could use narrative analysis to understand the importance of specific content to a participant, their motivations behind certain actions, and their perspectives.
Discourse analysis is a method for drawing meaning from conversations, either in written or spoken language. As opposed to content analysis, which may involve analyzing textual data from a participant’s written survey responses, discourse analysis looks at text within its social context.
For example, analyzing the way an entry-level employee speaks to an executive (and vice versa) can tell you about the culture and power dynamics of the company.
Framework analysis is an advanced, systematic method which involves five stages: familiarizing, identifying themes, coding or ‘indexing’ themes, charting, and interpreting.
Because framework analysis is more systematic and prescriptive than other types of analysis, it’s popular in applied research and among researchers who need to adhere to strict quality standards.
Grounded theory is an analysis method which involves analyzing a single set of data to form a theory (or theories), and then analyzing additional sets of data to see if the theory holds up. Instead of approaching the data with an existing theory or hypothesis, grounded theory analysis allows the data to speak for itself—requiring the analyst to develop their theory from the “ground” up.
You can also organize your data using various synthesis frameworks—structured analysis methods that help you visualize your raw data to find common themes. Some common synthesis frameworks include affinity mapping, matrices, and spectrums.
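As a rough illustration of the affinity-mapping idea, the sketch below groups tagged observations into clusters by theme. The notes and tags are invented examples; in practice this step usually happens visually, with sticky notes or a digital whiteboard.

```python
# A lightweight, text-based take on affinity grouping: tagged observations
# are clustered by shared theme. The notes and tags are invented examples.
from collections import defaultdict

observations = [
    ("Couldn't find the export button", "navigation"),
    ("Wasn't sure what 'sync' would do to existing data", "trust"),
    ("Expected search to be at the top of the page", "navigation"),
    ("Hesitated before connecting a bank account", "trust"),
]

clusters = defaultdict(list)
for note, tag in observations:
    clusters[tag].append(note)

for theme, notes in clusters.items():
    print(f"{theme} ({len(notes)} notes)")
    for note in notes:
        print(f"  - {note}")
```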
Regardless of whether you are analyzing your data quantitatively, qualitatively, or both, you will be looking for trends and keeping a count of problems or themes that occurred across participants.
Each theme and finding should be prioritized by severity and importance. You should always go back to the original research objectives at this point—hopefully you took notes about what’s most important for this project to address. Use that overall understanding of project objectives as the backdrop for your data, as you rank the most meaningful patterns, themes, and stories you’ve found thus far.
Synthesis is the process of breaking down everything you learned into small, bite-sized insights (sometimes referred to as ‘atomic research nuggets’) that can be easily shared with your team.
Keep in mind the 3 parts of a key insight and how to communicate them:
For example, a key insight from a project about college students’ motivations for picking majors might be: Students who prioritize earning potential are worried about being able to pay off student loans. #salary #student-loans — with links and attachments to the data which supports this claim.
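If you want to keep your nuggets consistent and traceable, it can help to give them a simple, repeatable structure. The sketch below shows one possible shape in Python; the field names and file paths are illustrative assumptions, not a standard format.

```python
# A sketch of one way to structure an "atomic" research nugget so insights
# stay consistent and traceable back to their evidence. The fields and file
# paths are illustrative, not a standard schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InsightNugget:
    insight: str             # the one-sentence finding
    tags: List[str]          # shared coding tags, e.g. "#salary"
    evidence: List[str] = field(default_factory=list)  # links to notes, clips, transcripts

nugget = InsightNugget(
    insight=("Students who prioritize earning potential are worried about "
             "being able to pay off student loans."),
    tags=["#salary", "#student-loans"],
    evidence=["notes/participant-07.md", "clips/interview-07-12m30s.mp4"],
)
print(nugget)
```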
You can take several approaches to synthesizing data, including:
You’ve analyzed the data and boiled it down to several key insights—now, what do you actually do with those insights?
The final step of analysis is providing insight-backed recommendations for next steps. Final recommendations take your analysis one step further and allow your stakeholders to quickly understand the big takeaway from this project, and take the appropriate actions in response to those findings.
It would be a shame, after all, if stakeholders funded a whole study and then did nothing with it simply because they didn’t know what they were supposed to do with the information.
Your recommendations might look like these:
Analysis, whether qualitative or quantitative, takes time—and many busy researchers do not have the capacity to spend as much time as they’d like on analysis. Fortunately, there are special tools and programs built to help you accelerate and improve the analysis process.
These programs can help you do things like organize your data sets based on demographics or descriptors, code your data to help with organization and pattern recognition, and collaborate with other researchers on your team.
Some of the most popular analysis tools include:
Since every research team and project is unique, we recommend looking into the different software options to determine which program or combination of programs would best fit your research, team structure, and workflow.
Your approach to analysis can make or break your overall impact as a user researcher. From raw data to actionable insights, the process can be complicated and sometimes overwhelming. We hope you use this chapter of the Field Guide as a reference while you dig into your next dataset—and remember, always be analyzing!
Head to the next module to move onto the next step in your research journey: Creating effective research reports and deliverables.