The difference between a great prototype test and a poor one is finding the right users (made easier by implementing user disqualification).
The first step in this process will be instantly familiar to anyone who has run a prototype test before. You start by gathering potential participants based on a relatively general set of qualification criteria. This can include demographics, stakeholder type, occupation, or behaviors.
If you’ve run a test before, you probably already know how to screen participants, so we won’t belabor the point. Just remember, you start with a target group of generally right-fit customers before filtering down to the participants you know will be a great fit for your product.
Once you have a handful of potential interviewees, it’s time to disqualify any false positives you may have in your current group of participants. This is where you dig into the behavior and reactions of your test users to find out which ones demonstrate true customer interest. Look for an emotional reaction to the problem you’re attempting to solve with your product in addition to your initial target qualification criteria.
The key to uncovering this real customer interest is to find evidence of three specific, powerful types of pain points:
Time, monetary, and emotional pain.
These are critical because they’re powerful enough to drive real, immediate action when encountered.
According to Christian, you uncover evidence of these pain points during a one-on-one conversation with your user, where you can ask a series of questions to clarify potential fit:
“If we start talking to somebody about a particular problem, and they say to us, ‘Yeah, I have that problem all the time,’ we need to ask them follow-up questions to confirm and ask them whether they’ve experienced this problem recently, and if they can actually pinpoint when it last happened,” he explains, “And if they can’t, we know that maybe this person jumped the gun and felt like they had to give us a response.”
In a similar manner, you can ask specific questions about monetary and emotional pain. These questions can be straightforward, or you can filter them into a conversation as they come up naturally. The key is to look for whether your users can provide you with some level of additional and specific context in each response.
For example, if you’re trying to qualify whether the user experiences any monetary pain points, you can literally ask, “How much money are you spending to solve this problem?” Then wait to see if you get a nebulous number in return, or something specific and concrete.
If the amount they’re spending to solve this problem isn’t something they can easily recall, it could be that the spend isn’t painful enough to inspire real customer interest and motivation. Or, it could mean you’re talking to the wrong stakeholder, and someone else would be a better subject for your user interviews.
Specificity is important because it points to an emotional impact. For example, if someone says, “We spent five thousand dollars to solve that problem last month, and three thousand the month before that,” and can produce those numbers in an instant, you know those numbers are noteworthy in their mind.
Contrast that with the last time you ordered something from Amazon. Can you name how much you spent without looking at your financial statement? Probably not, because the cost was relatively unimportant to your day-to-day life.
To get started with customer interest disqualification, check out this list of qualifiers to try or watch for. This list isn’t exhaustive, and you’re encouraged to come up with your own, but it should give you the boost you need to get started for your next test.
Filter for Emotional Pain Points
Ask: Tell me about a time you experienced this problem. How did you feel when this problem happened?
Look for: Situational experiences they can recall, like short personal stories. Useful responses are often subtle and tied to specific pain points, like office or personal drama when they’re stuck doing work that bleeds into their personal time or makes them look unproductive.
The direct cost of a business problem can be heavily outweighed by the perceived cost to the right stakeholder of losing out on their next promotion.
Filter for Monetary Pain Points
Ask: How much money do you currently spend trying to solve the problem? Is that cost currently preventing you from doing anything else? How much did you expect to spend?
Look for: A specific number or set of numbers. The response doesn’t have to be accurate down to the cent, but it should include a specific dollar amount, rather than a more nebulous statement like, “We spent thousands!”
Filter for Time-Based Pain Points
Ask: How long ago did you experience this problem? How much time do you currently invest in solving it?
Look for: Again, the key is to look for additional context. If you get a response that includes a specific number — of days, hours, or other units of time — you know you’ve struck gold.
When you’re asking questions, it’s important to realize that specificity isn’t a black-and-white concept but rather falls on a spectrum. The goal is to obtain a contextualized response, one that allows you to clarify and validate the pain points they’re experiencing. It needs to be detailed enough that your research will be useful in determining the solution to those issues.
As you’re going through this process, you only need to perform a handful of tests when you’re working with right-fit customers. In fact, when qualified correctly, you can even perform prototype testing with as few as three properly vetted individuals. So while the qualification process may be a bit more exhaustive than you’re used to, the effort pays off almost immediately by reducing the number of overall tests required.
One of the challenges of getting such deep insights on candidates’ pains is that you have to talk to each user before they participate in a test. You have to listen to tone of voice and judge the immediacy of a response, both of which are largely lost when answers arrive as text.
If you use a platform such as User Interviews to find test participants, you can use a process called “double-screening” to make this happen. Screen an initial set of participants based on typical demographic and behavioral criteria, then schedule a quick phone interview to qualify or disqualify participants from there.
When you use User Interviews specifically, you only pay for sessions you complete, and your first three potential test participants will be vetted for free (if you use this link).
If you’ve exhaustively qualified — and disqualified — potential test participants, it’s important to make the most of each interview opportunity. The key to making each test successful is setting them up to allow emotions and pain points to surface early and often.
This starts with creating something between a low- and high-fidelity prototype. Fidelity matters because you need to elicit an emotional response, something a low-fi prototype, such as a pen-and-paper mockup, simply can’t do, even for the simplest usability testing.
According to Christian, this is because paper prototypes force abstraction, which makes participant feedback both muted and unrealistic.
“I just don’t see paper prototypes giving users the emotional satisfaction and dissatisfaction that I think is necessary to use as evidence for real engineering work,” he explains. “They force the users to respond within the context of a weird and unnatural situation, and their responses are often thrown off as a result.”
On the other hand, a hi-fi user interface could cause problems as well. Users might focus too closely on design details, resulting in missed feedback for crucial business functionality. And when you do get important feedback, it’s a lot more work to change the prototypes.
For this reason, Christian recommends creating what he calls a “low-fidelity, fully-digital” mockup. Think wireframes designed and pieced together using prototyping tools like Sketch and InVision, Figma, or Adobe XD.
The key is to create a simple interface that the test participant can use to complete a target goal from start to finish, without moderator intervention.
Once you have a working prototype and a handful of quality participants who look like your end users, you’re ready to perform your tests. Christian recommends starting with a few disclaimers to make participants more comfortable providing valuable feedback.
He’ll start by reviewing the overall goal of the process, explaining that the most helpful feedback is often negative, and that it’s normal (and helpful) for participants to express frustration when they feel it. Without this step, people often feel uneasy providing negative feedback.
Once you feel the participant is reasonably comfortable, it’s time to dive in. Start by defining a task or goal for the participant and asking them to complete it to the best of their ability.
As the test gets underway, interviewing best practices apply. State the prompt clearly, don’t provide help unless absolutely necessary, and keep the participant talking and explaining their thoughts as they move through the process.
Note that you should avoid using a formal tone when prompting the participant. This may seem a little pedantic, but it’s important because when you slip into a tone of formality, your test participant will likely do so as well. And that can seriously hurt your ability to gather authentic emotional insights.
As people speak formally, they also tend to shift into a more professional state of mind. And because people are typically more risk-averse in professional settings, your test participant will be more likely to downplay negative emotions or avoid speaking up when they’re annoyed. This is the exact opposite of what you want.
On the other hand, it’s important not to “mirror” emotion by actively agreeing with a complaint or exaggerating a participant’s claim. This will cause the user to exaggerate claims as well, which skews feedback and can influence test results.
It’s a bit of a balancing act between the two extremes — don’t be too professional, or too agreeable — but finding a way to walk that line is important if you want to extract accurate, but emotionally-laden insights.
After the user testing goal has been completed, you can obtain further insights by asking follow-up questions about the tester’s experience. Again, it’s important to focus on gathering emotional insights when possible and dig deep into any friction points or frustration that arose during the test.
Avoid hypotheticals or anything related to new features or product development, as these prompt notoriously unhelpful feature suggestions. (Think about all the times you’ve heard “Hey, if you could just add a button here, that would be great.”)
Instead, Christian emphasizes the importance of sticking with interview best practices. Start with open-ended questions, and then ask clarifying questions to follow up. One method he finds especially useful is restating a user’s complaint or frustration back to them. Then he’ll wait to see if the user agrees or wants to correct him before moving on.
When working through this process, remember it’s important not to rush. Instead, ask for more clarification until you feel confident you’ve extracted needed contextual information.
When analyzing the results of your tests, take the time to clarify each problem before ideating potential solutions. Rather than saying something like “we need to add better labels to this form,” a more precise and helpful description would be, “In five of our tests, participants struggled to understand what information was required in this form.”
After initially defining the problem, revisit it several times with your team members. Then, as you restate and rethink the problem from more than one point of view, you can be more confident in the accuracy and precision of your understanding.
Restating the problem could also help you become more accurate. Continuing with our form example, the problem statement could evolve into, “Four users expressed difficulty reading the prompt for the form,” which gives you additional insight into how you can rework the design to solve the problem.
In this case, the difference between “We need to add better labels to this form” and “Four users expressed difficulty reading the prompt” gives you enough added context to know the instructions need to be more visible, not necessarily rewritten. That’s directly actionable feedback, and the solution might involve changing font size, color, or position, rather than writing new copy.
“A lot of work goes into right-sizing form complexity,” Christian commented. “The resulting checkout or form field design is not a direct result of evenly splitting the number of text fields between multiple pages, but instead the result of trying to maximize a user's perceived context towards achieving their user goal and avoiding reaching a negative emotional threshold point where users will likely abandon the process.”
The next time you’re preparing to test a prototype, remember that the most important part of testing is identifying your users’ pain points in the early stages of your process — before the test even begins.
When you do this, you know that your user’s feedback is anchored in real frustration, and you have the opportunity to remove that pain. This leads to feelings of relief and satisfaction when the user interacts with your final product, which is the key to creating “delightful,” or more precisely, “emotionally satisfying” user experiences.
Solving for your users’ pains also gives you the opportunity to drive market differentiation, just as Christian has for his clients.
“We had a product that was already a leader in its vertical, that had accomplished product-market fit,” he explains, “And yet we identified all these points of friction. It’s like we have this castle — which is the product — and when we solve these issues, we’re completing our competitive moat.”
This is the power of prototype testing. When performed correctly, you’re able to drive user success and business value: a true win-win scenario.
If you want help finding picture-perfect interviewees for your next test, give User Interviews a try. We offer a complete platform for finding and managing participants in the U.S., Canada, and abroad. Find your first three participants for free. Or, streamline research with your own users in Research Hub (forever free for up to 100 participants).
Josh is a conversion-focused content writer and strategist based in New York. When not reading or writing, you can find him exploring his home state, visiting new cities, or unwinding at a family barbecue.