Last Updated: September 9, 2020

Remote Interviews with Participants with Visual Disabilities

Remote testing is becoming more important to user research; how do we ensure it's accessible to participants with visual impairments?

Sydney Stewart

As remote testing becomes a more popular choice for user research, it is important to make sure accessibility is considered at all levels. This includes continuing to make an effort to include diverse populations in your study. Testing this segment of the population is, for the most part, no different from testing any other. However, there are a few things to keep in mind when conducting remote interviews with participants who are blind or have low vision that make the process easier for everyone.

1. Make sure the testing materials meet baseline accessibility criteria (and test them!).

It is good to check that whatever you are testing meets basic accessibility guidelines. Some examples of common accessibility issues include:

Font size is too small: Participants with visual impairments often use tools to enlarge text. Fonts should be no smaller than 12px, and you should make sure that text can be zoomed to 200%.

Missing alternative "alt" text: If your testing materials include images, they need descriptive alternative text. Participants with a visual impairment use screen readers that read the alt attribute aloud.

Inappropriate color contrast: All testing materials should have appropriate color contrast. Generally, visuals and text need a contrast ratio of at least 4.5:1. There are countless tools to test this, but I personally like WebAIM's contrast checker.

Not navigable using only the keyboard: Every component of your testing materials should be navigable by keyboard alone. An easy way to test this is to press the Tab key and check that you move through all the elements in the intended reading order.
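If you want to sanity-check the 4.5:1 contrast criterion yourself, the ratio comes from the WCAG relative-luminance formula. Here is a minimal sketch in Python; the formula and thresholds are from WCAG 2.1, while the function names are my own:

```python
# Sketch of a WCAG contrast-ratio check. The linearization constants,
# luminance weights, and 4.5:1 / 3:1 thresholds come from WCAG 2.1;
# the function names are illustrative, not from any library.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color given as '#rrggbb'."""
    hex_color = hex_color.lstrip("#")
    channels = []
    for i in (0, 2, 4):
        c = int(hex_color[i:i + 2], 16) / 255.0
        # Linearize each sRGB channel per the WCAG definition.
        channels.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = channels
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg: str, bg: str, large_text: bool = False) -> bool:
    """WCAG AA requires 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # black on white -> 21.0
print(passes_aa("#777777", "#ffffff"))  # this gray on white falls just short
```

A dedicated tool like WebAIM's contrast checker is still the practical choice; the sketch just shows that the 4.5:1 number is a concrete, computable property of your color choices.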

You should be pre-testing your materials anyway, but it is especially important to run manual accessibility checks before your actual session. I suggest testing manually because many "built-in" accessibility checkers can miss important things, and automated checkers also can't tell you if your alt text is sufficient.
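One way to support that manual review is to dump every image and its alt text so a human can judge whether each description is sufficient. A hypothetical helper using only Python's standard library (the class name and sample HTML are mine):

```python
# Hypothetical helper for a manual alt-text review: lists every <img>
# in an HTML snippet with its alt attribute, or flags it as missing.
# A human still decides whether each description is good enough.
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []  # list of (src, alt_or_None) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            self.findings.append((attr_map.get("src", "?"), attr_map.get("alt")))

auditor = AltTextAuditor()
auditor.feed('<img src="chart.png" alt="Sales by quarter"><img src="logo.png">')
for src, alt in auditor.findings:
    print(f"{src}: {alt!r}" if alt is not None else f"{src}: MISSING alt")
```

This only catches images with no alt attribute at all; "alt='image123.png'" would pass the scan and still fail your participants, which is exactly why the review itself has to stay manual.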

2. Allow participants to use their own assistive technology.

Participants customize and use assistive technology in different ways. The process runs more smoothly if you can send participants the testing materials ahead of the interview; then you can guide them through the materials as they use their own assistive technology.

Be prepared to be flexible! You never know when technology will decide not to cooperate. Additionally, not all blind or low vision participants use screen readers, so it is good to have some backup plans. For example, when concept testing with blind users, I offered to read the materials aloud during the interview before asking my questions.

3. Plan for additional time (and budget for higher incentive costs).

Blind or low vision participants might take more time than sighted participants to complete tasks, so you might have to plan for a longer interview period if you want to get through all your tasks. If you have longer tasks at the end of your script, consider moving them to the beginning to ensure they get completed. Be sure to adjust the incentive to account for a longer interview as well.

This is by no means a comprehensive list, and keep in mind that individuals with visual disabilities are not a homogeneous group. Each participant might require different accommodations, depending on their situation. If you are new to testing with blind or low vision individuals, I suggest recruiting a smaller sample and using fewer tasks to start.

Sydney Stewart

Sydney is an advocate for taking a user-first evaluative approach to solving problems and improving digital experiences. She's motivated to create innovative digital solutions that leave a lasting impact and is interested in opportunities to work on projects at the intersection of technology, politics, and culture. She has an M.S. in Museums and Digital Culture from Pratt Institute and several years of experience in digital media, user research, content strategy, digital analytics, and project management.
