Validation & Testing Methods
Chapter 3

First Click Testing

First click testing = testing to see where users click first.

The end. Thanks for reading!

Just kidding, of course. Although that is the long and short of it, there’s lots more to know: why the first click for each task is so essential, when it makes sense to explore where and why users are clicking, and how to conduct your own first click test.

The first click dictates overall session success

In 2006, Bob Bailey and Cari Wolfson conducted one of the most influential usability studies out there. Their findings are still relevant today, and will probably stay relevant for years to come. Their study revealed that when users had trouble executing the very first thing they wanted to do on a website, “they frequently had problems finding the overall correct answer for the entire task scenario.”

Meaning when the first click fails, the rest of the session tends to tank as well. More specifically, when the first click is incorrect, the chance of eventually getting the overall scenario correct is about 50/50. Participants are about twice as likely to succeed in the overall mission when they select the correct response on their first click.

How ‘bout them apples?
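You can check this relationship in your own data by splitting overall task success by whether the first click was correct. A minimal sketch in Python (the numbers below are illustrative, not Bailey and Wolfson’s):

```python
def success_by_first_click(sessions):
    """sessions: list of (first_click_correct: bool, task_succeeded: bool).
    Returns the overall success rate for each first-click outcome."""
    def rate(flag):
        group = [ok for first, ok in sessions if first is flag]
        return sum(group) / len(group) if group else 0.0
    return {"first_click_right": rate(True), "first_click_wrong": rate(False)}

# Illustrative sessions: success is far likelier after a correct first click
sessions = ([(True, True)] * 8 + [(True, False)] * 2
            + [(False, True)] * 5 + [(False, False)] * 5)
print(success_by_first_click(sessions))
# {'first_click_right': 0.8, 'first_click_wrong': 0.5}
```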

How first click testing works

First click testing is usually discussed in terms of testing web pages, but the technique can work equally well for any service with a user interface. The idea is simply to find out if users can figure out how the site/service/product works, and if they can access the information they want (or execute a given task) in a sequence that makes sense, in a timeframe that makes sense.

First click testing has a pretty simple order of operations.

1. Create your scenarios

Ex: A bank website wants to know where a user would click to find out when tellers are available.

2. Determine the optimal path toward accomplishing the task

Ex: From homepage, user should click “info → hours”

3. Observe where users click (info on how to do this is below)

Ex: User clicked services → bank → info → hours

4. Record how long it took them to click (info on how to do this is below)

Ex: It took 12 seconds for user to travel from the homepage to the hours page. Aka, too long!

5. Take note of how difficult it was for them to get there / how sure they were

Ex: They clicked on the right menu buttons after two tries, but “info” seemed like a best guess, not a confident choice
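The five steps above boil down to recording a handful of fields per task attempt. Here’s a minimal sketch in Python; the field names are illustrative, not from any particular testing tool:

```python
from dataclasses import dataclass

@dataclass
class FirstClickResult:
    """One participant's first-click attempt at one task scenario."""
    task: str                # e.g. "Find out when tellers are available"
    expected_target: str     # the optimal first click, e.g. "info"
    actual_target: str       # where the participant actually clicked first
    seconds_to_click: float  # time from page load to first click
    confidence: int          # self-reported certainty, e.g. 1 (guess) to 5 (sure)

    @property
    def correct(self) -> bool:
        return self.actual_target == self.expected_target

# Example: the bank scenario above, where the user's first click was "services"
r = FirstClickResult(
    task="Find teller hours",
    expected_target="info",
    actual_target="services",
    seconds_to_click=12.0,
    confidence=2,
)
print(r.correct)  # False: the first click missed the optimal path
```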

How to figure out what to track

Define your top 5-10 tasks. These will be either the top 5-10 things you want your users to do on your site or app, or the 5-10 things they’re already trying to do. Probably, it will be a mix of both.

Most site owners will have a goal in mind: we want users to put things in their cart, we want users to complete a purchase, we want users to submit a photo. Whatever it is, you likely already know.

In some cases, however, your users will tell you what they want. Over time, popular content will become clear in your analytics, and you might want to make navigation toward those pages clearer. In these cases, explore the search terms people have used. Look at pages where users hang out. Make use of your analytics to see how people are using the site, and if it makes sense, create new navigation to get them there faster. Then use first click testing to see if your new navigation works.

Other sample scenarios that make sense for first click testing:

  • A news site wants to know where users might click to share an article on social media.
  • A retail site wants to see how a user might select their shoe size while browsing for pumps.
  • A poetry app wants to know how you’d find a poem written by a certain poet.
  • A ride-sharing app wants to know how easy it is to lodge a complaint about another user.
  • A food-ordering app wants to see how users might switch from mobile to web in the middle of an order and continue their request.

When to use first click testing

An example of first click testing in action, from Chalkmark

In short, first click testing is inexpensive, and the information gleaned is usually relatively simple to take action on. You can use first click testing at almost every stage of development, and after launch to enhance and improve functionality.

You can test on a wireframe.

One advantage of first click testing is that your website doesn't actually have to exist yet at the time of testing—the "click" doesn't actually have to do anything. The user just has to demonstrate where they would click, if the button were active.

You can test every version of a page, from concept to completion.

While testing literally every step is probably not necessary, early tests can catch problems before they get expensive. You can take advantage of first click testing on initial designs.

Along the way, you can learn from prior tests…

Earlier feedback will inform how your users perceive website elements, and how they expect pages to function.

Once your site is up and running.

If analytics suggest that your conversion rate is poor or your bounce rate is high, or if users are spending more time on a certain page than makes sense, or asking your help-desk questions that should have been answered on the site, navigation problems could be to blame. You can use first click testing to check for the problem.

How to prep for first click testing

The logistics of first click testing are simple, especially if you already have functioning software.

The prep is simply about getting clear on what you want to know:

  • which pages are you concerned about
  • which tasks do you want to test
  • what are the important things you want your users to be able to accomplish

Clear research questions will help you determine what pages, or parts of pages, to include in your test.

It will also inform how many tasks to assign to your testers.

  • you can ask many questions about one page
  • or you can ask the same few questions about many different pages

You'll have to decide where you want users to click. Are there right answers? Ahead of time, you’ll determine an ideal path through the site or app in order to accomplish a given task. This amounts to your hypothesis. Users will either affirm or disprove your hypothesis as they click toward the conclusion. In the end, if more than one path works, you’ll decide whether to steer users towards your preferred path, or if both are equally valid.


Determining how to track clicks

The choice between in person testing or software-based testing depends on budget (services are more expensive, though not bank-breaking), how many responses you need (an online test is more convenient to gather more results) and how much time you have (both types of testing can be accomplished relatively quickly).

There are services out there that will track clicks for you.

They’ll show you how often users click on which buttons, and not only where users click, but where they mouse around in their search for the next thing to click. These services can also offer things like tree paths, which can show you the most common paths users take through the various buttons on your site. With some of these services, you can define your own tree, and allow them to recruit participants to attempt to accomplish the tasks you define.

You can use one of these more automated types of services, or you can recruit real-life participants to observe in real time through a guided study.

This can be as simple as asking a few well-chosen friends how they’d accomplish X task. That said, a minimum of 20 responses to each question will give you more meaningful results, and 50 to 100 responses will give you an even clearer picture—though sometimes as few as five people are sufficient.

Your testers should be representative of your target users.

If you have an established pool of users you can contact, that’s ideal. Otherwise, services can provide paid testers from whatever demographic categories you ask for, even if you run the test yourself.

Testing wireframes is possible in both contexts.

Even a concept sketch can be tested remotely/digitally. Software packages or services can display your test image—either a scanned copy of a hand rendered sketch or a screenshot of a wireframe or a finished page—together with the test question, and then record where testers clicked and how long they took. The results can be shown as a heat map with colors representing where users hovered.
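Under the hood, a heat map is just click coordinates binned into grid cells and counted. Here’s a toy version of that aggregation, assuming pixel coordinates (the cell size and data are made up for illustration):

```python
from collections import Counter

def click_heatmap(clicks, cell=50):
    """Bin (x, y) click coordinates into a grid of cell-by-cell pixel
    squares and count hits per square -- the raw data behind a heat map."""
    counts = Counter()
    for x, y in clicks:
        counts[(x // cell, y // cell)] += 1
    return counts

# Three clicks near the top-left nav, one stray click lower down the page
clicks = [(12, 30), (40, 45), (22, 10), (300, 400)]
print(click_heatmap(clicks))
# The (0, 0) cell gets 3 hits: a cluster suggesting that region draws attention
```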

When testing an older site...

If you're testing a redesign or an update to an existing site, be sure to test the original also. This will give you a baseline for comparison.

Choosing which questions to ask

Once you identify your primary research question (Ex: how to get users to travel from homepage → bank hours) it's time to write the questions you’ll use to guide your testers.

With in-person tests, the challenge is getting participants to behave naturally. People’s inclination will usually be to try to pass a test—to give you what they think you want—even when there’s no wrong answer. One way to encourage testers to think like users is to assign them goals, rather than simply asking questions. In real life, people usually have something to accomplish when they look at websites, even when they’re just browsing. For example: find out when the bank teller will be available, buy shoes for the prom, buy one of those lucite toilet seats with the fish in them. So, create a scenario for your study participant.

Instead of asking:

Where would you click to find the bank's lobby hours?

Try: 

You want to add someone as a signatory on your checking account, and you know this needs to be done in person because you must present ID. You want to find out whether the tellers will still be working when you get off work. Where would you click to get that information?

Instead of asking: 

Where would you click to choose your shoe size?

Try: 

Here’s a page full of formal shoes. How would you go about buying a pair for prom?

Instead of asking: 

How would you navigate to clear toilet seats with fish in them?

Try:

On this bathroom decor shop, how would you go about finding a specific color or design of toilet seat?

Instead of asking: 

Where would you click to find out which dates are available for concert tickets?

Try:

You want to buy Jane’s Addiction tickets for April 30th. How would you do that?

Other tips:

  • Avoid using words that could give away the answers. Don't ask "where would you click to get help" if the right answer is a button that says "HELP."
  • Avoid technical language or other terms your target users might not know
  • Don't assign more than ten tasks per test. You don't want your tester getting confused or tired.

After testers have clicked:

  • Consider asking why they clicked where they did. These qualitative answers can be just as valuable as the quantitative data of click location and timing.
  • Don't tell the tester whether the click location was the right one. Participants need to feel like there’s no right or wrong answer.

Testing multi-step tasks

For example, to find the bank hours when tellers are available, it might be necessary to follow a path like locations → bank information → hours. The test doesn't necessarily need to include every step, only the first click: Does the user recognize that "locations" is on the path that eventually leads to "hours"?

There are all kinds of problems that might prevent users from clicking on the right thing.

Some of the most common include:

  • counterintuitive menu labels
  • confusing navigation pathways
  • labels that look right but aren't ("Services" looks like it might be a logical place to find out when services are available, for example, as opposed to what services are offered)
  • buttons that are hard to find because of their color or size.

Analyzing Test Data

The first data point you’ll observe is: did they click where I wanted them to?

But good information might be hidden in observing where they clicked, even when they were wrong, so to speak.

Keep track of where users click. You might realize that instead of attempting to force users where you want them to go, it might just make more sense to rearrange your design to allow users to travel to wherever they’re naturally drawn.

If a heat map shows wrong answers clustered together, that suggests testers are being distracted by what looks like the right answer. Clusters inform you of where participants are attracted, and how they expect your website to work.

If the wrong answers are scattered, testers may be confused and choosing randomly.

Measure in percentages.

Figure out how many clicks are possible on the page. This way you’ll be able to measure clicks, and the areas that were clicked on, in percentages.

For example, if there are 10 buttons on a page, those 10 buttons make up 100% of the clickable possibilities. At the end of the test you’ll be able to see what percentage of all clicks each area got. Using percentages rather than just a count of clicks allows you to compare tests that collected different numbers of results.
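The conversion from raw counts to percentages is a one-liner. A quick sketch, with made-up numbers for the bank example:

```python
def click_percentages(click_counts):
    """Convert raw click counts per clickable area into percentages,
    so tests with different numbers of participants are comparable."""
    total = sum(click_counts.values())
    return {area: round(100 * n / total, 1) for area, n in click_counts.items()}

# 40 participants; only the areas that received clicks are shown
counts = {"info": 22, "services": 12, "locations": 6}
print(click_percentages(counts))
# {'info': 55.0, 'services': 30.0, 'locations': 15.0}
```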

Measure times, too.

If testers are taking excessively long to figure out the right answer, your design isn’t doing its job. On the other hand, if testers reach the wrong conclusion very quickly, that suggests something looks so much like the right option that they never stop to look for the real one.
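Both timing patterns can be flagged automatically. A simple sketch, assuming each result is a (correct, seconds) pair and using a made-up threshold of twice the median time:

```python
from statistics import median

def time_flags(results, slow_factor=2.0):
    """Flag patterns in first-click timing: correct-but-slow clicks hint at
    an unclear design; wrong-but-fast clicks hint at a decoy that looks right.
    `results` is a list of (correct: bool, seconds: float) pairs."""
    med = median(t for _, t in results)
    slow_correct = sum(1 for ok, t in results if ok and t > slow_factor * med)
    fast_wrong = sum(1 for ok, t in results if not ok and t < med)
    return {"median_seconds": med,
            "slow_correct": slow_correct,
            "fast_wrong": fast_wrong}

# Illustrative data: one slow success, two suspiciously fast failures
results = [(True, 3.1), (True, 14.0), (False, 1.2), (True, 4.0), (False, 2.0)]
print(time_flags(results))
# {'median_seconds': 3.1, 'slow_correct': 1, 'fast_wrong': 2}
```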

NEXT CHAPTER

Task Analysis

This chapter walks you through how to do a task analysis, and when you need one in your user research.
