March 3, 2020
We highlight best practices in website usability testing (from high-fidelity prototyping to selecting participants) and mistakes to avoid.
It may sound obvious, but it’s worth repeating: To conduct website usability testing, you must have something specific in mind you want to test. Website usability testing isn’t a process where you go in with a vague concept and let the method change with the test.
Maybe you want to test sign-up button placement on your front page. Maybe you want to test how user-friendly it is for customers to add products to their cart and then continue to shop. Whatever it is, make sure you’re able to clearly state what you’re testing before you start designing prototypes and sourcing participants.
For example, at User Interviews, we wanted to give researchers the ability to edit screener surveys after they had launched their projects (something originally only available to our project coordinators). So, as part of the design and product development process, we conducted usability testing to evaluate how intuitive it was for researchers to edit their screeners.
We knew that if a researcher edited their screener, hitting Save would automatically create a new version. That behavior shaped the specific questions we set out to answer.
When we were testing how researchers would respond to the edit feature of the screener, we knew what was and wasn’t a focus, so we knew exactly what screens were needed when it came time to prototype.
Who at your company will be affected by the changes you make to a certain page or feature? We recommend keeping them updated throughout any user research process. Facilitate collaboration and regular team check-ins. Failure to do so can result in otherwise avoidable mistakes.
In the past, one of our product design team members worked on usability and user testing for a real estate property management platform for independent landlords. The platform was testing whether changing the language on a specific part of the ledger balance made paying rent more effective.
The testing itself went well, but the testers weren't actually paying rent. When the platform rolled out an updated taxonomy based on the test results, it created a new set of problems: actual tenants were scheduling duplicate payments.
In retrospect, this unforeseen complication could have been averted by being more inclusive and collaborative throughout the entire testing process. If the engineers and customer support team members were shown the prototype, they might have realized submitted rent payments weren’t going to show up in the UI — because that job doesn’t run until eight hours later, creating a gap between what the user understands to be happening and what is actually happening.
Exploring a website’s usability problems can help lead to well-informed design decisions, but the results you get are directly tied to your usability testing methods and how well the test is designed to simulate a real-life experience as navigated by real people. Starting with a focused, well-defined goal and keeping the right people involved helps set your usability study up for success.
Whether you use high-fidelity or low-fidelity prototypes depends on what part of the process you’re in and what questions you’re trying to answer.
If you’re vetting a new idea or concept, low-fidelity prototypes can be enough to get you started. But for late-stage testing that yields actionable user feedback, we almost always recommend high-fidelity prototypes.
Low-fidelity prototypes simply don’t offer as many opportunities for distraction, and without built-in distractions, the test doesn’t accurately reflect how users will behave in real life. High-fidelity prototypes give your users the freedom to test against the intended design and to complete tasks (or fail to complete them) in ways you never expected, and that’s key.
If you're using a click-through prototype in InVision, Figma, or Sketch, testers can see which parts of the screen are clickable. If the only clickable elements are the ones in your task, it’s like lighting the way out of a maze for your participants, when what you want is for them to find their way through without your help. If your test doesn’t allow them to navigate naturally, the results may not be as insightful as they could be.
We address this concern by building prototypes that include secondary and tertiary features that are not necessarily part of the test, such as a Preview feature next to a Save button. These red herrings can help you see where users may get distracted or confused.
When making your prototypes, figure out what screens you need to facilitate the task as well as potential flows or breaking points in the process. Give the user more than one way to complete the task. Don’t simply map your task in steps in the prototype; what you want the user to do needs to be possible in the prototype, but it shouldn't be the only possible action.
For example, at User Interviews, we currently have one way for our customers to create a new project. They simply click the button in the top right corner of the navigation bar. If we wanted to test a new flow, we might test adding a button at the bottom of the page while keeping the one on the navigation bar the same.
With more than one way to complete the same task, you can see which process in the available information architecture is more intuitive.
The closer your usability testing scenario gets to real life, the better. High-fidelity prototypes with red herrings help create a life-like scenario. Similarly, when it comes to screening participants, it’s important to focus on user behavior above demographics.
When we wanted to let researchers edit their screeners, we knew we had to test this function with actual researchers. Testing this new feature with our project coordinators (who knew this feature already) would be pointless, as would testing the functionality of the ‘edit’ button with people who were never going to recruit research participants.
Luckily, we had easy access to our own existing customer base. We ran our usability test with five different researchers to understand their process and learn what issues were causing the researchers to edit their screeners in the first place.
But it isn’t always so simple to source participants. Perhaps there’s red tape preventing you from contacting your own users, or maybe your existing user base isn’t the target audience for your new product. Whatever the case may be, there are plenty of best practices for finding good research participants.
Plus, User Interviews allows you either to recruit participants for your testing (whose user behavior matches real users) or to centralize your communication with your own user base (with CRM and automation tools built in to help you keep track of who’s participated in what test and when).
You want your testers to behave as they normally would, but you also want insight into their thought process during the test. Whether the test is done in a usability lab or you’re conducting remote usability testing, the main focus is creating an environment where participants are comfortable describing their experience.
One of the challenges we’ve faced is getting our testers to talk aloud as they complete a task. Narrating every action doesn’t come naturally to many people, but if participants aren’t explaining their choices, you lose a lot of valuable information. Creating a natural environment where participants feel comfortable sharing their thought process in real time is a huge win for your usability test.
Just make sure you’re encouraging them to speak without asking leading questions that prompt them to say what you want to hear.
For example, say you want to know how someone will interact with a feature such as editing a screener survey. You might think to ask, “Do you have any issues with editing your screener?” This assumes, albeit subtly, that they either need to edit their screener survey or have issues doing so. In this scenario, you’re pushing the participant to say something you want to hear.
To avoid leading the participant, you could rephrase your question and say, “Tell me about your experiences with writing and using screener surveys in your research studies.” In this scenario, you’re now asking the participant to reflect on their own process and experiences. They might share what you want to hear — or maybe not — but at least you know it came from them rather than from an attempt to say what they think you want.
Some of the results from website usability testing will be pretty straightforward. Others will take dedicated analysis to surface meaningful insights.
In our usability test to let researchers at User Interviews edit screeners after launching a project, we learned immediately that the editing process was intuitive. That said, we still found a disconnect in the language we used to convey that the screener had been saved. This gave us a specific area where we could improve usability.
When you’re reviewing your session data, it can be helpful to start by grouping insights by category. It’s a good jumping-off point to identify problems and start exploring solutions.
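If your session notes live in a structured format (a spreadsheet export, for instance), even a few lines of scripting can handle the initial grouping. Here is a minimal Python sketch; the note fields and category names are illustrative assumptions, not part of any particular research tool:

```python
# Hypothetical sketch: bucketing raw usability-session notes by insight
# category so recurring problems stand out. Field names and categories
# are assumptions for illustration only.
from collections import defaultdict

notes = [
    {"participant": "P1", "category": "navigation", "note": "Missed the Save button at first"},
    {"participant": "P2", "category": "language", "note": "Unsure whether edits were saved"},
    {"participant": "P3", "category": "navigation", "note": "Used breadcrumb instead of back button"},
    {"participant": "P4", "category": "language", "note": "Expected 'Update' rather than 'Save'"},
]

def group_by_category(notes):
    """Group session notes by category; larger buckets suggest patterns."""
    groups = defaultdict(list)
    for note in notes:
        groups[note["category"]].append(note["note"])
    return dict(groups)

grouped = group_by_category(notes)
for category, items in grouped.items():
    print(f"{category} ({len(items)} notes)")
    for item in items:
        print(f"  - {item}")
```

The categories with the most notes are a natural place to start exploring solutions, though frequency alone isn’t severity; a single blocking issue can matter more than several cosmetic ones.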
All five of our steps revolve around the simple premise of making your website usability testing focused and nuanced, so the results you get will mirror (as closely as possible) the results you’ll get when changes are implemented.
Note: Looking for a specific audience to participate in your website usability testing? User Interviews offers a complete platform for finding and managing participants. Tell us who you want, and we’ll get them on your calendar. Find your first three participants for free.