
July 22, 2020 • Last Updated: July 24, 2020

How to Use Prototype Testing to Create Emotionally Satisfying User Experiences

The difference between a great prototype test and a poor one is finding the right users (made easier by implementing user disqualification).

Josh Piepmeier

UX pros put a ton of time and energy into testing prototypes, but too often, that effort doesn’t result in the kind of meaningful insights that drive growth and results. Typically, this problem doesn’t manifest itself as a complete lack of feedback but rather as a frustrating amount of feedback that’s ambiguous and hard to filter through when the tests are done. 

And poor feedback can have an insidious consequence: spending time, money, and human capital on something that doesn’t resonate with the market. Market timing and positioning, pricing, and finding product-market fit can be greatly impacted when product testing goes wrong.

If you’ve experienced a nagging lack of confidence in your test results, the solution to your problem may not be what you’d expect. It may not be about the questions you’re asking, or the fidelity of your prototype, or the way you’re sorting the information after the tests are complete. 

You might just be talking to the wrong people. 

More specifically, you’re likely talking to people who seem identical to your target customers, but who aren’t actually a good fit for your digital product, mobile app, or service. 

For example, let’s say your target test subject is a project manager who wants to lead a team more efficiently using a project management app. You can interview someone with that role, and that exact goal, but if they’re satisfied with their current solution, they’ll just tell you to make your product more like what they already use — not exactly helpful feedback. Or, they might not have an emotional response at all, in which case their feedback is clouding the overall research results.

The key to filtering these prospects out is a process called user disqualification. And just as you’d expect from the name, it’s similar to user qualification, and it often happens at the same time.

The difference is that the former helps you dig beyond demographics and high-level psychographics and filter users based on their emotional response. You do this by asking potential interviewees a series of questions meant to help uncover any frustrations they have with whatever app, product, or solution they’re currently using to reach their goals. 

This gives you the ability to only interview people who are frustrated with their current solution and use each prototype test as an opportunity to uncover specific pain points that your product can solve.  

When you find a pain, you have the opportunity to relieve it — in this case, by offering a more compelling solution. And doing so is the key to creating emotionally satisfying user experiences. 

Your job in customer testing and interviews is to reverse engineer what this user experience actually is.

In this article, we’ll show you step-by-step how to perform the entire prototype testing process in a way that surfaces only high-quality feedback. Specifically, we’ll cover:

  • How to zero in on perfect-fit customers that provide consistently useful feedback
  • What kind of prototype you should use to elicit a real emotional response
  • How to get into the right mindset and ask the right questions to surface pain points while testing
  • A method of synthesizing feedback into meaningful conclusions that can be used to justify making investments in development

By the end of the article, you’ll be equipped to run your tests in a manner that will dramatically increase the quality, impact, and business value of the feedback you receive. This process is brought to you thanks to Christian von Uffel: product leader and customer research extraordinaire at Perfecting Product and our expert resource for today’s post. 

Note: Looking for a specific target audience to participate in your user research? User Interviews offers a complete platform for finding and managing participants in the U.S., Canada, and abroad. Find your first three participants for free. Or, streamline research with your own users in Research Hub (forever free for up to 100 participants).


First Qualify, Then Disqualify

The first step in this process will be instantly familiar to anyone who has run a prototype test before. You start by gathering potential participants based on a relatively general set of qualification criteria. This can include demographics, stakeholder type, occupation, or behaviors. 

If you’ve run a test before, you probably already know how to screen participants, so we won’t belabor the point. Just remember, you start with a target group of generally right-fit customers before filtering down to the participants you know will be a great fit for your product. 

Once you have a handful of potential interviewees, it’s time to disqualify any false positives you may have in your current group of participants. This is where you dig into the behavior and reactions of your test users to find out which ones demonstrate true customer interest. Look for an emotional reaction to the problem you’re attempting to solve with your product in addition to your initial target qualification criteria. 

The key to uncovering this real customer interest is to find evidence of three specific, powerful types of mental pain points:

Time, monetary, and emotional pain. 

These are critical because they’re powerful enough to drive real, immediate action when encountered. 

According to Christian, you observe evidence of these pain points during a one-on-one conversation with your user, so you can ask a series of questions to clarify the potential fit: 

“If we start talking to somebody about a particular problem, and they say to us, ‘Yeah, I have that problem all the time,’ we need to ask them follow-up questions to confirm and ask them whether they’ve experienced this problem recently, and if they can actually pinpoint when it last happened,” he explains, “And if they can’t, we know that maybe this person jumped the gun and felt like they had to give us a response.” 

In a similar manner, you can ask specific questions about monetary and emotional pain. These questions can be straightforward, or you can filter them into a conversation as they come up naturally. The key is to look for whether your users can provide you with some level of additional and specific context in each response. 

For example, if you’re trying to qualify whether the user experiences any monetary pain points, you can literally ask, “How much money are you spending to solve this problem?” Then wait to see if you get a nebulous number in return, or something specific and concrete.

If the amount they’re spending to solve this problem isn’t something they can easily recall, it could be that spending this amount of money for them isn’t considered a problem worthy of inspiring real customer interest and motivation. Or, it could mean you're talking to the wrong stakeholder, and there’s merely someone else who would be the better subject of your user interviews. 

Specificity is important because it points to an emotional impact. For example, if someone says, “We spent five thousand dollars to solve that problem last month, and three thousand the month before that,” and can produce those numbers in an instant, you know those numbers are noteworthy in their mind. 

Contrast that with the last time you ordered something from Amazon. Can you name how much you spent without looking at your financial statement? Probably not, because the cost was relatively unimportant to your day-to-day life. 

Use Probing Questions to Uncover User Pain Points

To get started with customer interest disqualification, check out this list of qualifiers to try or watch for. This list isn’t exhaustive, and you’re encouraged to come up with your own, but it should give you the boost you need to get started for your next test. 

Filter for Emotional Pain Points

Ask: Tell me about a time you experienced this problem. How did you feel when this problem happened?

Look for: Situational experiences they can recall, like short personal stories. Useful responses are often subtle and relate to specific pain points — like office or personal drama when they’re stuck doing work that bleeds into their personal time or makes them seem unproductive.

The direct cost of a business problem can be heavily outweighed by the perceived cost of the right stakeholder losing access to their next promotion.

Filter for Monetary Pain Points 

Ask: How much money do you currently spend trying to solve the problem? Is that cost currently preventing you from doing anything else? How much did you expect to spend?

Look for: A specific number or set of numbers. The response doesn’t have to be accurate down to the cent, but it should include a specific dollar amount, rather than a more nebulous statement like, “We spent thousands!”

Filter for Time-Based Pain

Ask: How long ago did you experience this problem? How much time do you currently invest in solving it?

Look for: Again, the key is to look for additional context. If you get a response that includes a specific number — of days, hours, or other units of time — you know you’ve struck gold.

When you’re asking questions, it’s important to realize that specificity isn’t a black-and-white concept but rather falls on a spectrum. The goal is to obtain a contextualized response, one that allows you to clarify and validate the pain points they’re experiencing. It needs to be detailed enough that your research will be useful in determining the solution to those issues.
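To make the specificity spectrum concrete, here’s a minimal sketch of how you might tally screener answers. This is a hypothetical helper, not part of User Interviews or any tool mentioned in the article, and it uses one deliberately crude heuristic: answers containing concrete figures (dollars, hours, dates) lean toward the “specific” end of the spectrum. Real disqualification still depends on a researcher’s judgment in live conversation.

```python
import re

# Hypothetical screener helper — an assumption for illustration, not a
# real tool. The heuristic: a "specific" answer contains a concrete
# number, like "$5,000 last month" or "6 hours a week."

PAIN_TYPES = ("emotional", "monetary", "time")

def is_specific(answer: str) -> bool:
    """Treat an answer as specific if it contains a concrete figure."""
    return bool(re.search(r"\d", answer))

def qualify(responses: dict) -> bool:
    """Qualify a participant only if every pain-type answer is specific."""
    return all(is_specific(responses.get(p, "")) for p in PAIN_TYPES)

vague = {
    "monetary": "We spent thousands!",
    "time": "Ages ago",
    "emotional": "It was pretty bad",
}
specific = {
    "monetary": "We spent $5,000 last month and $3,000 the month before",
    "time": "About 6 hours a week, most recently last Tuesday",
    "emotional": "I missed 2 family dinners finishing those reports",
}

print(qualify(vague))     # vague answers -> disqualify (False)
print(qualify(specific))  # concrete figures -> qualify (True)
```

In practice you’d weigh each answer on a spectrum rather than pass/fail it, and emotional specificity in particular shows up as vivid detail, not digits — the sketch only illustrates the filtering idea.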

As you’re going through this process, you only need to perform a handful of tests when you’re working with right-fit customers. In fact, when qualified correctly, you can even perform prototype testing with as few as three properly vetted individuals. So while the qualification process may be a bit more exhaustive than you’re used to, the effort pays off almost immediately by reducing the number of overall tests required. 

Easily Filter Test Participants with Double-Screening from User Interviews

One of the challenges of getting such deep insights into candidates’ pains is that you have to talk to each user before they participate in a test. You need to hear tone of voice and judge the immediacy of a response — signals that are largely lost when screening happens over text. 

If you use a platform such as User Interviews to find test participants, you can use a process called “double-screening” to make this happen. Screen an initial set of participants based on typical demographic and behavioral criteria, then schedule a quick phone interview to qualify or disqualify participants from there. 

A preview of the User Interviews hub

When you use User Interviews specifically, you only pay for sessions you complete, and your first three potential test participants will be vetted for free (if you use this link). 

If you’d like to bring in existing users for a test, you can use User Interviews’ Research Hub to manage and track interactions with your customers. It’s free for up to 100 of your own users. 

Build Prototypes That Are Low Fidelity but Fully Digital


If you’ve exhaustively qualified — and disqualified — potential test participants, it’s important to make the most of each interview opportunity. The key to making each test successful is setting them up to allow emotions and pain points to surface early and often.

This starts with creating something between low- and high-fidelity prototypes. Fidelity is important because you need to elicit an emotional response — something a low-fi prototype, such as a pen-and-paper prototype, simply can’t do, even for the simplest usability testing.

According to Christian, this is because paper prototypes force abstraction, which mutes participant feedback and makes it less realistic.

“I just don’t see paper prototypes giving users the emotional satisfaction and dissatisfaction that I think is necessary to use as evidence for real engineering work,” he explains. “They force the users to respond within the context of a weird and unnatural situation, and their responses are often thrown off as a result.”

On the other hand, a hi-fi user interface can cause problems as well. Users might focus too closely on design details, crowding out feedback on crucial business functionality. And when you do get important feedback, it’s a lot more work to change the prototypes.

For this reason, Christian recommends creating what he calls a “low-fidelity, fully-digital” mockup. Think wireframes designed and pieced together using prototyping tools like Sketch and InVision, Figma, or Adobe XD.

The key is to create a simple interface that the test participant can use to complete a target goal from start to finish, without moderator intervention. 

Start Tests with a Disclaimer and the Right Mindset 

Once you have a working prototype and a handful of quality participants who look like your end users, you’re ready to perform your tests. Christian recommends starting with a few disclaimers to make participants more comfortable providing valuable feedback. 

He’ll start by reviewing the overall goal of the process, explaining that the most helpful feedback is often negative, and that it’s normal — and helpful — to express frustration if you have it. Without this step, people often feel uneasy providing negative feedback.  

Once you feel the participant is reasonably comfortable, it’s time to dive in. Start by defining a task or goal for the participant and asking them to complete it to the best of their ability.

As the test gets underway, interviewing best practices apply. State the prompt clearly, don’t provide help unless absolutely necessary, and keep the participant talking and explaining their thoughts as they move through the process.

Note that you should avoid using a formal tone when prompting the participant. This may seem a little pedantic, but it’s important because when you slip into a tone of formality, your test participant will likely do so as well. And that can seriously hurt your ability to gather authentic emotional insights. 

As people speak formally, they also tend to shift into a more professional state of mind. And because people are typically more risk-averse in professional settings, your test participant will be more likely to downplay negative emotions or avoid speaking up when they’re annoyed. This is the exact opposite of what you want. 

On the other hand, it’s important not to “mirror” emotion by actively agreeing with a complaint or exaggerating a participant’s claim. This will cause the user to exaggerate claims as well, which skews feedback and can influence test results. 

It’s a bit of a balancing act between the two extremes — don’t be too professional, or too agreeable — but finding a way to walk that line is important if you want to extract accurate but emotionally laden insights. 

Ask Open-Ended Questions, Then Get Clarification

After the user testing goal has been completed, you can obtain further insights by asking follow-up questions about the tester’s experience. Again, it’s important to focus on gathering emotional insights when possible and dig deep into any friction points or frustration that arose during the test. 

Avoid hypotheticals or anything related to new features or product development, as these prompt notoriously unhelpful feature suggestions. (Think about all the times you’ve heard “Hey, if you could just add a button here, that would be great.”)

Instead, Christian emphasizes the importance of sticking with interview best practices. Start with open-ended questions, and then ask clarifying questions to follow up. One method he finds especially useful is restating a user’s complaint or frustration back to them. Then he’ll wait to see if the user agrees or wants to correct him before moving on. 

When working through this process, remember it’s important not to rush. Instead, ask for more clarification until you feel confident you’ve extracted needed contextual information. 

Clarify Problems Before Coming Up with Solutions


When analyzing the results of your tests, take the time to clarify each problem before ideating potential solutions. Rather than saying something like “we need to add better labels to this form,” a more precise and helpful description would be, “In five of our tests, participants struggled to understand what information was required in this form.” 

After initially defining the problem, revisit it several times with your team members. Then, as you restate and rethink the problem from more than one point of view, you can be more confident in the accuracy and precision of your understanding. 

Restating the problem could also help you become more accurate. Continuing with our form example, the problem statement could evolve into, “Four users expressed difficulty reading the prompt for the form,” which gives you additional insight into how you can rework the design to solve the problem. 

In this case, the difference between “We need to add better labels to this form” and “Four users expressed difficulty reading the prompt” gives you enough added context to know the instructions need to be more visible, not necessarily rewritten. That’s directly actionable feedback, and the solution might involve changing font size, color, or position, rather than writing new copy. 

“A lot of work goes into right-sizing form complexity,” Christian commented. “The resulting checkout or form field design is not a direct result of evenly splitting the number of text fields between multiple pages, but instead the result of trying to maximize a user's perceived context towards achieving their user goal and avoiding reaching a negative emotional threshold point where users will likely abandon the process.”

Identify Pain Points Early, and Your Prototype Tests Will Become More Effective

The next time you’re preparing to test a prototype, remember that the most important part of testing is identifying your users’ pain points in the early stages of your process — before the test even begins. 

When you do this, you know that your user’s feedback is anchored in real frustration, and you have the opportunity to remove that pain. This leads to feelings of relief and satisfaction when the user interacts with your final product, which is the key to creating “delightful,” or more precisely, “emotionally satisfying” user experiences. 

Solving for your users’ pains also gives you the opportunity to drive market differentiation, just as Christian has for his clients. 

“We had a product that was already a leader in its vertical, that had accomplished product-market fit,” he explains, “And yet we identified all these points of friction. It’s like we have this castle — which is the product — and when we solve these issues, we’re completing our competitive moat.” 

This is the power of prototype testing. When performed correctly, you’re able to drive user success and business value: a true win-win scenario. 

If you want help finding picture-perfect interviewees for your next test, give User Interviews a try. We offer a complete platform for finding and managing participants in the U.S., Canada, and abroad. Find your first three participants for free. Or, streamline research with your own users in Research Hub (forever free for up to 100 participants).

Josh Piepmeier

Josh is a conversion-focused content writer and strategist based in New York. When not reading or writing, you can find him exploring his home state, visiting new cities, or unwinding at a family barbecue.
