Surveys

Note to the reader:

This part of the field guide comes from our 2019 version of the UX Research Field Guide. Updated content for this chapter is coming soon!

Erika Hall, co-founder of Mule Design, describes surveys as “the most dangerous research method.” 

Why? Because although surveys are one of the easiest research methods to conduct, they’re also one of the easiest methods to mess up. 

In this chapter:

  • What are surveys?
  • When, why, and how to use surveys for UX research
  • How to design an effective research survey
  • Examples of effective survey questions
  • Recruiting for UX research surveys
  • Analyzing survey results and reporting findings
  • Tools for conducting surveys
  • Hybrid research: Combining surveys with other methods

What are surveys?

A survey is a set of questions you use to collect data from your target audience. In the words of survey expert Caroline Jarrett, a survey is: 

“A process of asking questions that are answered by a sample of a defined group of people to get numbers that you can use to make decisions.”

Surveys are appealing because they’re cost-effective and have the potential to reach large numbers of people quickly and easily. You can ask almost anything, and the results are easy to tally. 

However, these are the exact same reasons why surveys are so dangerous; surveys are deceptively simple. Although they’re relatively easy and inexpensive to implement, you can’t let this ease fool you into cutting corners. Getting valid and valuable data from a survey still requires strict adherence to best practices. 

Surveys vs. questionnaires vs. polls

What’s the difference between a survey, a questionnaire, and a poll? The terms ‘survey’ and ‘poll’ can be used interchangeably, while a questionnaire refers to the content of the survey/poll you’re conducting: 

  • Survey (aka Poll): The task or process of collecting data in order to gather insights from a specific set of respondents.
  • Questionnaire: The content—a list of questions—that you are asking respondents to answer. 

Using surveys for UX research

In UX research, surveys are typically used as an evaluative research method, but they can also be used in generative research and as a continuous research method. In-product surveys, for example, are an easy and effective way to automate ongoing data collection. 

There are a variety of approaches you can take to designing and implementing a survey. Here’s an overview of the different types of survey methods in UX research.

Types of survey methods in user research

Quantitative vs. qualitative research surveys

Quantitative surveys collect a large number of responses to questions that can be answered using checkboxes or radio buttons. These surveys are designed to answer “how many” questions; like all quantitative research methods, they’re meant to deliver statistically meaningful results that are representative of the broader population. 

Quantitative survey types include descriptive and causal:

  • Descriptive surveys provide insight into behaviors, attitudes, or opinions. 
  • Causal surveys are designed to define the cause-and-effect relationship between two or more variables.

Qualitative surveys use open-ended, exploratory questions to collect more detailed comments, feedback, and suggestions. While these types of responses cannot be as quickly and easily tallied as the data from quantitative surveys, the insights they provide can be very valuable. Researchers often run a qualitative survey with a small group to gain a deeper understanding of the respondents and identify the best questions and answers to include in a quantitative survey aimed at a larger group.

Cross-sectional vs. longitudinal vs. retrospective surveys

Cross-sectional surveys look at a specific, isolated situation and solicit a single set of responses from a small population sample over a short period of time. The aim is to get quick answers to a standalone question. This method is purely observational and does not measure causation.

Longitudinal surveys collect a series of responses from a larger group of participants over a longer period of time (from weeks to decades). This observational method helps researchers study change over time, and includes three types of studies: trend, cohort, and panel. These long-term studies are most successful when designed around short, frequent sessions, instead of lengthy interviews that can be burdensome for participants.

Retrospective surveys combine longitudinal and cross-sectional elements by looking at change over a long period of time via a one-time survey. This approach helps reduce the time and money needed to perform the survey, but can introduce accuracy issues depending on participants’ memory recall. 

Ultimately, choosing the right research method for you will depend on your goals and the questions you’re hoping to answer. 

💫 Continuous user feedback surveys can help inform future UX research studies, improve product designs, and speed up iteration cycles. Explore best practices, tools, and examples for effective surveys in the UX Research Field Guide.

The advantages and disadvantages of surveys for user research

Surveys are a popular and advantageous research method for many reasons:

  • They are inexpensive. 
  • They are scalable. 
  • They are usually less susceptible to the Hawthorne effect (the phenomenon of study participants changing their behavior based on the knowledge that they are being observed and tailoring responses to meet assumed expectations).
  • They give your team confidence in making survey-informed design decisions. 
  • They provide statistically meaningful data that is reassuring to business stakeholders.

The biggest downside to surveys is how easy they are to run. 

As Erika Hall explains, 

“It is too easy to run a survey. That is why surveys are so dangerous. They are so easy to create and so easy to distribute, and the results are so easy to tally. And our poor human brains are such that information that is easier for us to process and comprehend feels more true. This is our cognitive bias. This ease makes survey results feel true and valid, no matter how false and misleading. And that ease is hard to argue with.

“It’s much, much harder to write a good survey than to conduct good qualitative user research. Given a decently representative research participant, you could sit down, shut up, turn on the recorder, and get good data just by letting them talk. … But if you write bad survey questions, you get bad data at scale with no chance of recovery.”

That’s why it’s critical you take the time to create a thorough and effective user research plan when designing a survey. 

So, when should you use surveys in UX research?

It can be tempting to use surveys for all kinds of quantitative research, but that’s not always the best use of the method. While surveys are undeniably a relatively fast, unmoderated, and low-cost way to compile a lot of data on a lot of people, there are many ways that data can be skewed to deliver biased or otherwise compromised results. 

A better use of a survey is as a qualitative tool that helps you explore and define the area of investigation. Using open-ended survey questions as a starting point can help you gain a much deeper understanding about the subject at hand and the end users you’re surveying. And that understanding is invaluable for mitigating the risk of designing subpar solutions as well as increasing your ability to design better products more efficiently.

How to design an effective research survey

Thorough and thoughtful planning and preparation are crucial to survey success. You need to be clear and explicit about your goals. One of the most common mistakes new researchers make is to cast a wide net, asking for responses to a wide range of questions, and then try to reverse-engineer a central goal from the resulting data.

A strong survey explores a very specific and clearly defined research question via well-written, bias-free survey questions and an optimized survey flow.

Defining the research question and scope

The research question is different from the individual questions that you eventually include in your questionnaire. A research question defines the topic you are exploring. It is the ‘big idea’ at the heart of your survey.

Identifying your research question typically starts with brainstorming to develop a list of all the possible suggestions. From there, you refine the question to its essence. Caroline Jarrett does this by running each possible question through a series of four challenges:

  • What do you want to know?
  • Why do you want to know?
  • What decision will you make based on the answers?
  • What number do you need to make the decision?

Look for your most crucial question (MCQ)

As you run your various questions through this 4-part challenge, you’re looking for your Most Crucial Question (MCQ), which Caroline defines as, “the one that makes a difference. It’s the one that will provide essential data for decision-making.” She suggests stating your question in two parts:

  • “We need to ask _______.”
  • “So that we can decide ___________.”

And don’t stop there. Before you settle on your MCQ, you want to really pick it apart by questioning what you mean by each word. You want to wind up with something that is incredibly clear and doesn’t require any interpretation. For more info on how to “attack” your MCQ in this way, check out this article by Effortmark.

Remember, in order for your research to make a meaningful impact, it needs to be designed to enable decisions for your company, product, or service. Read more about a framework for doing decision-driven research.

Writing great survey questions

Clarity and specificity are the two most important characteristics of effective survey questions. With that in mind, here are our top tips for creating questions that help you get the answers you need:

  • Use the first person to encourage people to focus on their experience rather than their opinions.
  • Replace UX jargon with simple, familiar words.
  • Use language that a child could understand. 
  • Use straightforward syntax. 
  • Choose words with clear, unambiguous meanings that don’t require any interpretation.
  • Be as specific as possible with your wording.
  • Eliminate any double-barreled questions that ask two questions at once.
  • Avoid the use of single or double negatives.

Types of survey questions

Mixing up the types of questions on your survey can help you collect the most insightful data. There are two main types of questions: open and closed.

Open-ended survey questions allow participants to respond in their own words. This type of question typically generates a great deal more qualitative detail and can uncover unexpected responses. Open-ended questions help you get at the “why” behind people’s answers. However, it can be difficult and time-consuming to analyze and draw conclusions from open-ended responses. 

Closed survey questions have pre-populated response options that participants choose using tools like checkboxes or radio buttons. They tend to have higher response rates since they require less effort from the respondent. Because the response options are standardized, closed questions deliver data that is easier to analyze and can yield statistically significant findings. 

There are several different types of response options for closed survey questions:

  • Multiple choice (radio buttons)
  • Checkbox (select as many as apply)
  • Rating scale
  • Ranking order
  • Dichotomous (Yes/No)
  • Matrix
  • Image choice
  • Likert scale (e.g., strongly disagree to strongly agree, never to always, very dissatisfied to very satisfied, etc.)
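
To make these response formats a little more concrete, here’s a minimal sketch, in Python and with entirely hypothetical questions and field names, of how a questionnaire mixing closed and open questions might be modeled:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    """One survey question; `options` stays empty for open-ended questions."""
    id: str
    text: str
    kind: str                      # "radio", "checkbox", "likert", "open", ...
    options: List[str] = field(default_factory=list)
    allow_other: bool = False      # adds an "Other" free-text option

LIKERT_5 = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

# A hypothetical questionnaire mixing closed and open questions.
questionnaire = [
    Question("q1", "How often do you use the mobile app?", "radio",
             ["Daily", "A few times a week", "A few times a month", "Rarely"]),
    Question("q2", "Which features have you used in the last month?", "checkbox",
             ["Search", "Payments", "Notifications"], allow_other=True),
    Question("q3", "The app is easy to navigate.", "likert", LIKERT_5),
    Question("q4", "Tell me about your experience using the app.", "open"),
]
```

Notice that the closed questions carry their answer options with them, which is exactly what makes their results easy to tally; the open-ended question deliberately has none.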

Avoid bias and leading questions

It is very easy to accidentally insert bias into your survey questions. And it’s also possible for participants to be biased in their responses. We already mentioned the Hawthorne effect, which causes people to change their behavior when they know they are being observed. There is also acquiescence or agreement bias, which causes survey respondents to agree with research statements in an effort to be more agreeable or to give the answer they think you want. 

Here are a few different kinds of bias that can sneak into your questions:

  • Confirmation bias happens when all your questions are geared to support your existing hypothesis. 
  • Implicit bias relates to subconscious and/or unspoken attitudes and associations about different stereotypes.
  • Framing effect has to do with how question presentation can influence responses.
  • Hindsight bias can distort memories by giving people the sense that they “knew it all along” and could have easily predicted events that have already happened.
  • Clustering bias is a cognitive bias that causes people to see patterns where none exist.
  • Serial position effect refers to how it’s easier to remember the first and last items in a list.
📚 Related Reading: How to Reduce Bias in UX Research

Examples of effective survey questions

Here are a few examples from Miro of poorly worded survey questions and how to fix them:

INSTEAD OF: "What do you like about the current banking app?" 

This is a bad question because it assumes a positive experience.

TRY: "Tell me about your experience using your current banking app."
INSTEAD OF: "Was using the app for the first time easy?" 

This is bad because it's a yes or no question and it assumes a positive experience. 

TRY:
"What were your impressions of the onboarding experience within the app?"
INSTEAD OF: "Was this feature confusing?" 

This example assumes a negative experience. 

TRY:
"What does this feature mean to you?"

The ‘right’ questions here are all better because they’re more open-ended and they don’t make assumptions about the respondent’s experience. Instead of leading respondents to the answer you’re looking for, let them lead you to the details and experiences that actually matter to them. 

Other survey tips to consider

  • Test your survey iteratively: have colleagues answer draft questions, revise, have a small group of participants answer the updated draft, revise again, and so forth. 
  • Keep the survey as short as possible (especially if you are not compensating respondents).
  • Consider allowing anonymous responses to encourage honesty.
  • Include a quick introduction that includes context, how to ask for help, and—if applicable—information on any research incentives you are offering.
  • Use logic functionality and filter questions to eliminate irrelevant questions. For example, if one of the questions asks about respondents’ employment status, anyone who responds that they’re unemployed or a student shouldn’t get a follow-up question asking about their role or seniority (a minimal sketch of this kind of skip logic follows this list). 
  • Place the most important questions at the start of the survey. 
  • Ensure that early questions are easy to answer. By making the first part of the survey as smooth an experience as possible, you help build rapport with the participant and encourage them to follow through with the rest. 
  • Group related questions together, ordered from most general to most specific. This allows for a smoother experience for the survey participants. 
  • Make your list of response options as comprehensive as possible so that every participant can answer honestly. 
  • Ensure that response options are mutually exclusive; e.g. you shouldn’t make “marketing” and “digital marketing” two separate options. 
  • Include an “other” option as often as possible.
  • If possible, include a progress indicator to manage participants' expectations about the time to completion. 
  • Allow people to comment on the survey itself to collect additional feedback about anything they think is relevant that wasn’t included in the survey, or any concerns with the survey design.  
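
To illustrate the skip-logic tip above, here’s a minimal sketch, using made-up question IDs rather than any particular survey tool’s logic builder, of how a follow-up question can be suppressed based on an earlier answer:

```python
# Hypothetical answers collected so far, keyed by question ID.
answers = {"employment_status": "Student"}

def should_ask(question_id: str, answers: dict) -> bool:
    """Return False for follow-ups that are irrelevant given earlier answers."""
    if question_id in ("current_role", "seniority"):
        # Only ask about role or seniority if the respondent is employed.
        return answers.get("employment_status") == "Employed"
    return True

remaining = ["current_role", "seniority", "biggest_frustration"]
to_show = [q for q in remaining if should_ask(q, answers)]
print(to_show)  # -> ['biggest_frustration'] for a student respondent
```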

Recruiting for UX research surveys

Since it’s impossible to survey every person who currently uses or may someday use your product, survey research relies on the statistical concept of sampling to select and survey certain individuals from a broader target population.

There are two main types of sampling:

  • Probability sampling uses randomization to select participants, and is usually used when you need a large group of respondents. However, it does not guarantee a truly representative selection, especially if you’re running a qualitative survey.
  • In non-probability sampling, survey participants are selected based on a specific set of criteria, typically in order to gain insight into a very specific persona. The risk with this type of sampling is that by sorting people into artificial categories—that don’t necessarily influence behavior—a researcher may unintentionally impose their own biases.

In addition to determining the type of sampling most appropriate for your research, you also need to identify the right number of participants. 

The right number of participants for you will depend on a number of factors, including the scope of your research, the type of study you’re running, and the available resources like budget and time. As a general rule, NN/g recommends 5–10 participants for qualitative studies and at least 40 participants for quantitative studies. 
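
As a simple illustration of probability sampling, here’s a sketch that draws a random sample of contacts to invite. The pool, addresses, and sample size below are made up, and in practice you’d invite many more people than the number of completed responses you need, since only a fraction will respond:

```python
import random

# Hypothetical pool of eligible contacts (in practice, your user base or panel).
population = [f"user{i}@example.com" for i in range(5000)]

random.seed(42)  # fixed seed only so the example is repeatable
invitees = random.sample(population, k=400)  # simple random sample: equal chance for everyone

print(len(invitees), invitees[:3])
```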

How to find participants

Your approach to recruitment will depend on the type of survey you’re running. 

  • For quantitative surveys that require a certain number of respondents to deliver statistical significance, you’ll usually use probability sampling to recruit a large, random, representative sample. 
  • For qualitative surveys, which require a smaller number of more specific respondents, non-probability sampling will provide the best solution. 

There are many sources that can help you find large numbers of respondents to participate in broad surveys designed to deliver quantitative data. Online survey tools such as SurveyMonkey and Pollfish, for example, offer such panels. It is also often possible to source respondents by sharing your survey link in a relevant group on Facebook, LinkedIn, Slack, or other social platforms. 

If you are looking for more targeted, qualitative responses from a more closely defined type of respondent, User Interviews can help you achieve the best results. Our Recruit tool allows you to source from a pool of more than 700,000 participants tailored to your exact specifications, and our Hub tool allows you to build and manage your own in-house panel.

Screener surveys

Your survey results won’t be useful if you fail to survey the right people. Screener surveys help ensure that you’re recruiting high-quality participants who can provide the information and insights you need to make relevant decisions. 

Here are a few quick tips for building effective screener surveys:

  • Get your participant criteria “just right.” Criteria that are too niche can severely limit your ability to find enough people for your research. Criteria that are too broad can leave you with a group that’s not focused enough to deliver the specific answers you need.
  • Steer clear of demographic and geographic restrictions. Unless you have a very specific reason (such as needing to establish availability to do a follow-up study in person), avoid focusing on things like geographic location, age, ethnicity, income, education level, etc. Instead, pay more attention to behavioral traits—how people think and feel. 
  • Keep your questions neutral. It’s really easy to accidentally ask a leading question—one that influences a person to answer in a particular way. Be careful to avoid language that might offer a hint about the answer you’re hoping to get, or inadvertently try to sway a respondent with framing.
  • Take advantage of skip logic. Skip logic allows you to make your survey more personal by customizing questions based on previous responses. 

Analyzing survey results and reporting findings

Analyzing and synthesizing your data is the final step in survey research. For larger surveys—more than 100 respondents—it makes sense to do quantitative analysis using mathematical and statistical methods. In addition to looking at which responses were most popular, you should also look for patterns in how different types of users answered various questions. 
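
A spreadsheet is often enough for this, but a few lines of pandas make the same tallies easy to repeat. Here’s a minimal sketch, assuming a CSV export with hypothetical column names (user_type, usage_frequency, ease_rating):

```python
import pandas as pd

# Hypothetical export of closed-question responses, one row per respondent.
df = pd.read_csv("survey_responses.csv")

# Which responses were most popular?
print(df["usage_frequency"].value_counts(normalize=True))

# Do different types of users answer the same question differently?
print(pd.crosstab(df["user_type"], df["ease_rating"], normalize="index"))

# A quick per-segment summary of a numeric rating.
print(df.groupby("user_type")["ease_rating"].mean())
```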

If your survey includes qualitative, open-ended questions, your review of the results will involve sentiment analysis. This approach looks at whether a response is positive or negative, as well as the specifics of the response. 

To effectively summarize sentiments so you can identify action steps, organize your results based on each question rather than each user. By grouping all the responses to each question together, you’ll easily be able to identify sentiment themes. You can then associate those themes with specific UX opportunities. 
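
Here’s a rough sketch of that grouping step; the responses are invented, and a simple keyword list stands in for whatever sentiment approach or tooling you’d actually use:

```python
from collections import defaultdict

# Hypothetical open-ended responses as (question_id, response_text) pairs.
responses = [
    ("onboarding_experience", "Setup was easy and the tips were helpful."),
    ("onboarding_experience", "I got confused by the permissions step."),
    ("feature_feedback", "Search is slow but the filters are great."),
]

POSITIVE = {"easy", "helpful", "great", "love", "clear"}
NEGATIVE = {"confused", "slow", "frustrating", "hard", "broken"}

def rough_sentiment(text: str) -> str:
    words = set(text.lower().replace(".", "").replace(",", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "mixed/neutral"

# Group by question (not by user), then tag each response with a rough sentiment.
by_question = defaultdict(list)
for question_id, text in responses:
    by_question[question_id].append((rough_sentiment(text), text))

for question_id, tagged in by_question.items():
    print(question_id, tagged)
```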

Tools for conducting surveys

There are lots of different survey tools available, many of which we have captured in our UX Research Tools Map. SurveyMonkey and Pollfish, mentioned above, are just two of the many popular options.

Hybrid research: Combining surveys with other methods

Surveys are an extremely flexible research method and particularly effective as part of a hybrid, mixed-methods approach to research. 

For instance, you can combine a quantitative survey with qualitative interviews. Or you could use a structured survey to complement observational field work. You can also use surveys to compare participants’ self-reported data to their actual behavior. The possibilities are nearly endless.

And there you have it—the “most dangerous research method,” made safer and simpler with intentional survey design and best practices.
