How do you improve the user experience of UX testing? Nicholas Aramouni of Userlytics explains how to test the UX of your own testing.
The dual nature of research on research is exciting in itself, especially for Senior UX Researcher Nicholas Aramouni. In this episode, things get meta as we address the UX of UX research. Nicholas discusses the importance of testing everything, testing early, and testing often. He elaborates on his approach to UX research from different angles and describes the beauty (and absurdity) of what it’s like when UX researchers become participants.
Nicholas Aramouni is a Senior Communications Manager and UX Researcher at Userlytics who specializes in global UX practices. Nicholas has experience in various industries, including music, entertainment, media, and e-commerce. He is passionate about the humanities, holds a B.Ed. in Social Studies from Mount Royal University, and is the former co-host of Mindspark, a learning podcast focused on K-12 education.
[00:00:00] Nicholas Aramouni: I'm not anti-unmoderated because I feel it serves its purpose, but I'm certainly pro-moderated in the sense of, if we're running a new, let's say, feature like biorhythmic analysis, whatever, eye tracking, I want to be there experiencing what that participant is going through. Sometimes the participant is a researcher themselves. What a beautiful thing when you're testing your research with a researcher. That creates a beautiful environment for you to get the input that you need.
[00:00:29] Erin May: This is Erin May.
[00:00:31] John-Henry Forster: I'm John-Henry Forster, and this is Awkward Silences.
[00:00:36] Erin: Silences. Hello, everybody, and welcome back to Awkward Silences. Today, we're here with Nicholas Aramouni, who is the Senior UX Researcher at Userlytics. Thank you for joining us.
[00:00:54] Nicholas: Thank you for having me. Looking forward to being here and chatting today, for sure.
[00:00:58] Erin: Awesome. I've got JH, here too.
[00:01:00] JH: I feel like on some past episodes, we've talked about the importance of testing the test, and today, we get to talk about the UX of UX testing. We have another pithy saying in our quiver here. It's good.
[00:01:11] Erin: Yes. We're going to be talking about the user experience of user experience testing. Sort of amazing we haven't talked about this before I guess, because we talk about both of these things all the time, but never together. I think this will be really fun, a new spin on both of these things. Again, thanks for joining us.
[00:01:29] Nicholas: Looking forward to it. It's one of those things that's like, there's so many avenues you can explore with it too because it is so meta in a sense, but a lot of fun when you dive into it. This should be a good one, I'm sure.
[00:01:41] Erin: Let's start at the beginning. How do you build a great UX to test UX? How do you even think about that?
[00:01:52] Nicholas: The funny thing with that question, when I think about it, is that the idea of what good UX is doesn't change. What I mean by that is, the intention is to make something meaningful, relevant, and effortless, regardless of the asset. That's the whole idea: we want people to feel that it's meaningful, relevant, and effortless.
When we step into UX testing, though, the important part is that it has to be flawless. I say that specifically regardless of whether you're the researcher or somebody stepping in as a participant. The number one priority is the test. It's not everything outside of it that goes into programming or getting the invite; the priority is the test, which means everything has to be maximized. How do you do that? Making it flawless, but of course, making it logical, relevant, predictable. Great. That's general step one. That's how I think about it, that flawlessness.
The next part, and I think the important part, is simplicity taken to its absolute core, and simplicity and intentionality together. The reason I say that is, we're in a time where, if you talk about democratized research, which is a hot topic, more and more people are getting into this industry, maybe as, and I'm going to put this in quotations, "non-professional practitioners," which means there's an influx of participants, X, Y, Z. We know what that means.
We have to make the users and the testers feel as if they can do this no matter what. Even if they're new, they need to know that they can enter into this with simplicity and just focus on what we just said, the test at hand. Intentionality is there, important. The intentionality, not just of asking specific questions, which I think is the general approach people take: how do you write a good screener? How do you make a good test? Making it intentional, great.
It's also being intentional in the people that are there. Relevant people are able to create meaningful experiences, relevant experiences, effortless experiences to get that insight.
[00:03:49] JH: Just to jump in, what's an example of ways that people can make this unnecessarily complex? I feel like it may be a little hard to get an example of something that's really elegant and does this in a perfect way, but it feels easier to point to an example of really overworking it and making it much harder to parse than it needs to be.
[00:04:06] Nicholas: I think if we're speaking from the researcher's perspective and how they approach it, it's adding in too much detail that's not relevant. When I say not relevant, I don't mean you don't need to know it, but it's not directly tied to your research goal, your objective. We can talk about that a bit later too, but you don't want to be asking questions, outside, of course, of building rapport in a moderated session, that are not tied directly to a research objective.
You don't want to ask questions for the sake of asking questions. Or, if we're talking platform, you don't want to make the participant jump through hoops to get into the test. Remember, the test is the asset. The before and after have already been worked out, hopefully, fingers crossed. Again, that test is the asset; making sure that garnering that insight becomes the priority is how I would respond to that.
[00:04:56] Erin: There are a lot of vectors to this. There's the UX for whom: the UX for the researcher, the UX for the participant, and then you could probably go deeper, the UX for other people joining the research session on the researcher side, for the consumers of the insights, and it goes on and on. But really, you've got your researcher and you've got your participant.
Then you've got the UX of whatever tools that you're using, and it might be more than one tool that you have to think about as well as the design of the test and how that design fits together with whatever platforms or tools you're using. You mentioned the number one thing is it just has to be flawless. It just has to be perfect. No big deal. Talk a little bit more about that. What do you mean by that?
[00:05:42] Nicholas: I'm glad you asked that. The way you do it is essentially by becoming your own client. Testing your test, testing your platform. I say this because my favorite saying is: test everything, test early, test often. As an extension of that, what you're able to do is iterate and change to what people are demanding of your product and service. The things that we need as UX researchers in a remote sense have changed significantly over the last three years, even over the last year and a half. No matter the capabilities, those things are in need of iteration.
The most important way you do that is by mimicking what people experience in everyday life. We can dig into that when we talk about the research side or the participant side, but we don't want people to feel they're doing something out of the ordinary, whether you're a new researcher, a new tester, or experienced on both ends.
[00:06:33] JH: It sounds like there's a piece here where a big part of the UX for the researcher is that they need to be really mindful of the UX for participants. In the way that they're setting things up for themselves, they really have to go through it as the participant is going to experience it, and make sure that they've covered that and that it flows cleanly: the handoff, the way they're explaining things, and so on. Is that a fair way to think about it?
[00:06:53] Nicholas: 100%. As a researcher, one of the primary tools that you have is empathy, putting yourself in someone else's shoes. Especially, again, in a remote sense, when you're designing your test or designing what you need to do to get the insights you want, you need to have your participant in mind. Again, we mentioned that the test is the priority. Making sure that becomes what they're focusing on, and just getting the answers that you need, not more, not less.
[00:07:21] JH: I like that. I think we see this in a lot of things, you see it in product development too, the "while we have the hood up, let's do all this other stuff" mentality, and it's like, while we're talking to this person, let's jam in all these other questions or ancillary things. I think the idea of being really strict on that and keeping it focused and core to what's important to learn is a really good piece of advice.
On the participant side, that experience feels a little easier for me to wrap my head around. They need clear prompts and communications about what's next and what to do, and the right expectation setting. You need to make sure that whatever tool you're putting them through, whether it's a video call for a moderated session or some unmoderated testing tool, is going to provide them the guidance to be able to successfully do the thing that you need them to do. Are those the main ingredients, or when you think about the participant side, are there other things that you really want to pay attention to and get right?
[00:08:11] Nicholas: I think that's the overall general theme of the idea: taking into account that participants might be doing this for the first time, and this could be their first touchpoint ever being in a test. Even if it isn't, I promise you, the participants are nervous. I can almost guarantee that they're like, "Am I qualified to be here? What am I about to do? Do I have to share private information?" Especially when you talk about testing internationally, what's allowed, what isn't. These things all play a factor.
To encapsulate that, I always relate it to making their participation from the first touchpoint, the best possible user experience they could have. You do that by translating it into what I call making an online reservation. Let me explain what I mean by that. When you go to book an online reservation, you click a link, you end up at a page, it asks you a few simple, direct, intentional questions. What's your name? For how many people? Which location?
Great. Step one is done. Step two is, what time would you like to join the session? These are the times we have available, great. Step three, what's your email, contact information? Email confirmation sent. That's what your screener should do. It should ask those intentional questions. Keep it simple, direct. How do they get in touch with you?
When that confirmation email is sent in this reservation, it gives them the information they need. That well-informed part that you just mentioned. How do you join the session? How can you be prepared? Do you have your audio working? You need headphones to do X, Y, Z. Here's how you cancel, here's who you contact. Done. Of course, the prompts and alerts keep them there as well.
[00:09:48] Erin: When you think about the participant experience, who does the onus of creating a good one fall on? Is it the researcher or is it the tool? Or is it both?
[00:10:02] Nicholas: It's both. If you're using a tool, that, by and large, should be there to make life easier. Really, hopefully, you're using it because you're trying to make life easier, so the onus falls on the tool. This is where we talk about things like, what's a research tool and what isn't? Is Zoom a research tool by extension? Sure. Is it a UX testing platform? Maybe not. But the onus also falls, of course, on the researcher's design, to write intentional questions and be specific.
The tool, and the gold standard researchers expect from these tools, because they are the integrity of the industry, need to align, first and foremost. Again, the idea is that the test is the asset. Let's not have participants need to log on and share their screen and remote-click and type at length. No, the platform should provide the prompts, should provide where they need to go, give the instruction, and keep it at that. It should be click, drag, click, drop, copy, paste, X, Y, Z. Straightforward.
[00:11:06] Erin: Userlytics is, of course, a testing platform. When you mention the Zoom example, it seems like a purpose-built testing platform is hopefully going to take you further in making for an easy participant experience than something that's not purpose-built. How do you think about the right balance of building a really opinionated UX for participants and researchers, one that maybe makes it easier to create a good experience but takes some of the control out of the researcher's hands by being less flexible? How do you get that balance right, so that it's more turnkey and easy to use, but maybe not as customizable for the needs of the researcher?
[00:11:50] Nicholas: If I were to give an example of how you make something turnkey but also flexible, let's talk about how you, as a researcher, would set up a test. We know, Erin, that for researchers, setting up a test is an additional step in the process. They probably don't want to be doing this. They want to be getting to the real purpose, which is that insight. I know I've beaten that horse a lot, but that's really what it is.
How do you simplify the journey? Well, if a researcher comes on a platform, don't throw the kitchen sink at them. Please don't throw it. Keep it step by step. You land on a page. What's the name of your study? What kind of study do you want to run? How many people? Step one is done.
What's the next important part? Participants. Instead of making a researcher go in and copy a link, send a link, X, Y, Z, contact participants, if it's an all-in-one tool, you should lay out all the key elements that you'd want to know about a persona you're targeting: age, gender, region, whatever those are. We all know what they are. Make that accessible. Don't make them search it up and click it up. On, off, on, off. This is what I want to search for. Boom. Participants are done.
Now, let's go into a situation where you're creating your test. Plug, play. Have the standard questions people ask, perhaps on the left-hand side. Drag, drop, rating questions already set up. You don't have to fill in all the information. Keep it intuitive, rolling, simple. Of course, customization is there. If you want to change the words, you can.
Don't force the work to be creating the test. Give them the option to change that if they want. Be predictive. What's relevant to your UX research? The questions. Know what those questions are in this industry standard way, and plug and play from there.
[00:13:37] JH: What about when you think about the experience, how do you factor in the unexpected issues or glitches that inevitably come up? Obviously, the goal here is to make it flawless, as you said, but it is two humans trying to connect with some software in the middle. Things go sideways, someone's kid is sick, someone needs to reschedule, whatever. I think a lot of the time people think of UX as the screen or whatever, but it's much more holistic than that.
How you help a participant or researcher recover from an issue or something going off the rails a little bit is probably also very meaningful to how it's perceived. As you called out, these people are nervous. They want to do a good job. Is that something you try to plan for upfront? Or is it more of, just be really empathetic and help people as issues come up?
[00:14:16] Nicholas: Yes. When I used the word flawless originally, JH, I knew it was going to catch some ears, like, "What the heck is that? How do you do that? It's not possible." It isn't, but what a great target to have. We can't control what tech's always going to do. There are bugs, there are glitches, absolutely. But what's a simple way that we can create reliability and stability, for instance? I think we experience it in some platforms we use now.
In a sense, IT and dev are the unsung heroes of what it really means to keep things safe and almost flawless. An example of that is when you're going to log on to a test, let's say, as a participant. You've already done the right thing to inform them: tell them what they need to do, how they need to prepare, how they can reach out on that confirmation reservation email. If you need to change, just click here. Submit. Done. Keep them informed and let them know it's not the end of the world. Just let us know. We're here to help. That's one way you do it.
Another way, when you're actually getting into the test, that I see all the time now with the platforms I use, is that pre-connection test, which creates reliability. Is your connection good? It runs that little diagnostic. Is your connection secure? Let's not forget safety here, data safety. Is your audio working? Is it the right camera? Whatever these contexts are, giving them that checklist saying, "Hey, we've done this check for you." All you have to do now is step in. The same goes for the researcher.
"Hey, we've done the check for you. Your test is already ready to be launched. All you have to do is hit preview, submit, and focus." Again, I'm making it sound so simple and it's not that simple, but that's really what it should be. Again, the test asset, number one. Everything else outside of it is making people feel like what they're doing is meaningful, relevant, and effortless.
[00:16:04] JH: I like what you're getting at. It sounds like it's the simple-but-not-easy thing. There are some simple guiding principles or things to aim at that are easy to remember and maybe put on a checklist for yourself. Then what you have to do in any given test can be a little different depending on the tools, and you figure out how to actually strive towards those. Is that maybe a fair way of thinking about it? What you're aiming at is pretty simple; how you pull it off on any given test is a little bit more involved and nuanced.
[00:16:32] Nicholas: Never easy. The intention there, too, is to always start small as you get started on using platforms. The idea of simple, not easy, is definitely what it is. Think about heuristic analysis, let's say, in the field of UX research. Everything heuristic analysis tells you to do is simple. It's these 10 principles that you follow. Here are the 10 things you should do. Well, great.
Now, let's actually go do them. That alludes back to what I said before, JH: test everything, test early, test often. If you're on the back end creating a platform, you need to be able to test all the things that you're providing clients, providing people, and ensure that, again, it's iterative, it's trustworthy, and it's purposeful in what it's accomplishing.
[00:17:21] Erin: When it comes to stability and reliability, there's some other things to talk about here too. When you think of an all-in-one platform, you're talking about, obviously, some way to connect researchers and participants, different test modalities. There's the analytics component, the insight sharing. There's all these different points where things could fall apart. How do you think about creating reliability across all of these different components of the software?
[00:17:53] Nicholas: I think I alluded to that before, in the sense that IT and devs are the unsung heroes here. I also say that because research is very much a collaborative approach. Where I'm going with this is, IT and dev, when developing a platform, are what give you that strong connection that we just talked about, or the ability to pre-check tests, and stuff like that. That's great.
What matters even more is the collaboration between the researchers using the platform and the teams developing these tools. I know I've also alluded to that in terms of developing a good product, a purposeful product. That collaboration effort is going to make sure that if you're auto-generating metrics from rating questions that are programmed in, or automatically storing data that's recorded onto a platform, you're doing it in a way that people know is safe, by doing checks, talking to your customers, talking to your clients.
For instance, if you're dealing with legal requirements like GDPR, you know where the information needs to be, you know where to store it. These things make a massive difference, even on that first one.
[00:19:01] JH: Quick awkward interruption here. It's fun to talk about user research, but you know what's really fun? Is doing user research. We want to help you with that.
[00:19:10] Erin: We want to help you so much that we have created a special place, called userinterviews.com/awkward, for you to get your first three participants free.
[00:19:21] JH: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's going to be easy. It's going to be quick. You're going to love it. Get over there and check it out.
[00:19:30] Erin: Then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.
[00:19:38] JH: A question, maybe from a different angle, I have for you is: as you try to find the optimal, great experience for researchers and participants, does that mean you think researchers should be out there trying lots of tools, like go out and find the best one? Or is it, stick with what works, really learn how to use it, and be a little bit more conservative about jumping around? Because I feel like this is always a struggle on the product side; you get a little bit of shiny object syndrome, like, "Oh, this one looks cool," but there are advantages to keeping the thing that you have and know how to use.
[00:20:09] Nicholas: If we're talking about like researchers testing new platforms?
[00:20:13] JH: Yes. I really want my experience to be great and you have one researcher who's like, "I'm going to go out and look at all the tools and play with all of them." Another one is like, "This thing works pretty well. We're just going to optimize the hell out of it and make it great." Do you see one of those paths tend to work better than the other?
[00:20:27] Nicholas: I am a researcher that believes the more you can do in one place, the better. You don't want to be running around with-- well, maybe you do, maybe it's efficient for you. For me, personally, I think using a platform that does it all, or using tools that do as much as possible, is much more valuable than trying to find one tool that does something really well, another tool that does something else really well.
You're invariably creating more disconnects between the insight that you're trying to gather and the fluidity of your test. Make the user experience for yourself positive. Keep it all in one place and don't find yourself sprawling around trying to find this one, that one, this one. I think if you can master one platform, or maybe even two, and keep really tuned into what the capabilities are and whether it suits everything you need to do well, I believe you're much better off.
I guess what I relate this to is when we talk about doing translation in the study, what that means is you're adding an extra step in where the insight comes from. You're having someone translate what someone's thinking to a different language and then you're reporting on it. You're almost adding an unnecessary node in the connection line here. I feel like doing that with a whole bunch of platforms is the exact same thing. You're running a risk of missing something or failure on one end and it's not cohesive from my perspective.
[00:21:47] Erin: I was going to say, let's turn this into a debate. This will be fun. [laughs] There are definitely pros and cons to all-in-one and multiple platforms, of course. You prefer to work on all-in-one platforms, and you do a lot of research on different areas of your own platform. I'm curious if you could walk us through some of the testing methods that you rely on to test these different areas, both the researcher and the participant experience, different surface areas within the app, and so on. Do you have some go-tos or you--
[00:22:19] Nicholas: I guess it's hard for me to say in a second what we typically do to test a certain asset or a certain element type. I like to hear it from the person's mouth, really. When I'm working on my platform, my testing is always going to be: I want to get on a call with somebody, and I want to walk them through whatever that is in a flow. I'm very much, what's this flow and how does it relate to the experience you're having?
I'm not anti-unmoderated, because I feel like it serves its purpose, but I'm certainly pro-moderated in the sense of, if we're running a new, let's say, feature like biorhythmic analysis, whatever, eye tracking, I want to be there experiencing what that researcher, or sorry, what that participant is going through. Sometimes the participant-- well, most of the time, they are a researcher. What a beautiful thing, when you're testing your research with a researcher. That creates a beautiful environment for you to get the input that you need.
[00:23:17] Erin: Are there challenges while we're on that topic to researching researchers? Do they know too much? Do they make good participants? They make bad participants, challenging participants? Tell us about researching researchers.
[00:23:33] Nicholas: I can appreciate that, because I'm an optimist by nature, which maybe comes off in the way I speak and the things I talk about. Oh, God, yes. [laughs] There are some challenges there, certainly. You said it in the general sense of, do they know too much, and do they require too much? Do they think that everything they need can be solved in one second? Typically, yes, because, I think I've referred to this--
[00:23:57] Erin: They should know better. I'm sure all their insights have not turned into solutions immediately. [laughs]
[00:24:04] Nicholas: People don't listen to researchers or designers. That's crazy. What world are we living in? That's actually a good point too. I think there's that need to want to see and that's in the industry though too. When you're a researcher, you want to see the impact that you're making.
When you're testing out new things with researchers, they're saying, "This needs to happen now, because I run tests like this all the time and I always run into this issue. This has to change." You're like, "Ooh, that's not the way this works." Like what you said, it doesn't happen this way. We're in prototype phase. That's a good point that you made, Erin, for sure.
[00:24:38] JH: Something I'm thinking a little bit about, and we hear this from users a fair amount, is that people are often stretched thin, with limited bandwidth to put into the research compared to what they would do in an optimal situation. If we're saying you really need to test everything and optimize the experience, there are maybe three categories. There's the pretest stage, where you're getting the participant to sign up and setting the right expectations.
There's the actual test and facilitation, and then you have some wrap-up, conclusion stage. Where is the best ROI? If I'm tight on time, I know I need to test it all and I want the whole experience to be great, but I'm limited. Is it that you want to do it in order, like a funnel, making sure step one is good before you get into step two? Is it that the test is everything, so really focus there? If you had to 80/20 it, what advice would you give somebody on how to go about doing that?
[00:25:26] Nicholas: I would actually see it as a cycle. I wouldn't see it as one as more important than the other. What I mean by that too is, could you possibly say that the participants' experience on the back end is less important than the test that they just went through? I don't think you could justify saying that "Hey, it matters less." Everything matters. That's the idea here.
Where I think it all joins together is at the point of the test, but it starts with the planning. The ROI comes with your test plan. That's why I mentioned that intentionality, JH. When you're being intentional and specific about what you're trying to garner, whether it's the screener questions, where you're starting broad and getting tighter, or it's the actual test portion, where you're only testing after building rapport, of course, the asset at hand, they all tie in together.
One relies on the other, which relies on the other, because what if this tester is a great tester too? You maybe want to follow up with them, but if you gave them a poor experience, or they felt out of whack or insecure, you've almost missed that opportunity as well. I think it's all about that intentional planning, for sure, but there is no one part that's more important than another, if I can be honest.
[00:26:38] JH: So if you were in a time crunch, you'd still need to look at the whole journey and prioritize, think about the experience end-to-end, but maybe you can just find ways to more efficiently check on each step to make sure you don't have any glaring mistakes or experience gaps as you go through it?
[00:26:51] Nicholas: It's that start portion, for sure. I think it's the planning at the very beginning that can yield you all the ROI, but it ties back in at the end, after a participant has engaged with you, making it wholesome. I know time crunches are a pretty serious thing that we all experience, but by and large, knowing what you need to get done in a reasonable time, that's a whole other issue. I think that comes down to prioritizing what needs to be tested and what doesn't. Not something that I would want to speak on on a whim without diving into all the other factors that go into it.
[00:27:26] Erin: What are your top tips for researchers either using a new UX testing platform, because there are so many of them or even just sticking with an all-in-one that has new features and new UI, things are always changing, hopefully. What are your tips for researchers to make sure they're familiar with the UX in a way that they get what they need from it, including having a good experience for participants?
[00:27:52] Nicholas: I guess I'll answer this from the perspective of, if you're going to use a single platform, which, again, some people prefer to do, and that's great as well. I think the important thing a researcher needs to do is ask the right questions and understand the capabilities of where they're going with their test and of the platform at hand. What I mean is, don't just read about it. You have a due diligence as a researcher.
Well, your job is to probe. [laughs] Why not probe into the platform and find out what those maximum capabilities are, inquire into the things that are coming on the roadmap, maybe even elect yourself to be part of developing those tools. Sometimes what you find online isn't exactly what it does. You need to make sure. If it says you can conduct live moderated testing, great, but does that moderated testing allow you to have a backroom chat, annotations on the side, or time tags? Explore that for yourself.
If you don't need that, great, but the due diligence is on you to probe. Another idea here is to ask for help. [laughs] I feel like researchers, and I'm guilty of this too, think, "Oh, I can figure this out myself. I'm a researcher. I know how this tool works." Then you start playing around with it, and wow, what a difference it would've made if you had said, "Hey, can I get on a call with someone, and you can show me the capabilities, or how this streamlines my use of this product?" That's a huge difference.
The last one, and I use this analogy often: please don't just get in a Ferrari and hit the gas pedal if you've never driven a car before. Maybe try your friend's car, maybe try an old car. You don't need to jump in and start huge, Erin. You don't have to go in and start a 400-participant study with a card sort, tree testing, and, again, sentiment analysis. No, just start small and work yourself into the platform. Don't exhaust yourself trying to figure it out. Those would probably be the three things I'd say.
[00:29:48] JH: Do you have any strong beliefs, or almost hot takes here, of, "This is something that most people are probably doing poorly in their experience. Your confirmation email to participants probably stinks, and here's why." Do you think there are parts of the experience that people tend to drop the ball on most, that you see as you talk to researchers, and maybe give advice there?
[00:30:07] Nicholas: Yes, and actually, what's funny about this question, JH, is that I have to be self-reflective on this, because it was something I did wrong when I first started. That's right, researchers make mistakes, people. Researchers make mistakes. I think it's the screener. The reason I say that is because the screener is, well, I mentioned this, the first touchpoint participants go through, and why that matters is, again, you want participants to feel like they're comfortable.
That good UX experience starts with that specific thing. That intentionality point that we talk about getting the right people comes from that screener and making people feel like they can contribute. Again, if we're bringing in people, we want to make sure that they are capable, but they also feel like they can.
I think with screeners, the top thing that I see people almost forgetting is, they put in questions from their test that maybe don't relate to what the screener is actually for. You're not trying to get answers; you're trying to screen the people coming in. It's called a screener. You don't need to be asking people questions that don't relate to who they are.
You want to be broad to start. Ask those qualification questions as early as possible. You don't want someone to get to the end of your screener and get disqualified because they use ketchup fewer than four times a week. Of course, that's a terrible example, but you get where I'm going with this. That's where that start broad, be specific, be precise, and provide that positive, restaurant-reservation-style experience comes in, I think.
[00:31:39] Erin: Do you ever participate in studies? Have you workshopped that piece of the experience?
[00:31:45] Nicholas: Yes, I have. Internally, on products, I certainly have, but one of the really cool things about that is I get to see the back end of what our participants go through. A lot of that comes from wanting to be able to inform participants on what they might experience, or to be able to solve problems ahead of time. I've sat in on about a couple dozen with the researcher on the other side, which is a very strange situation to be in. Certainly odd, especially when you get--
[00:32:13] Erin: Leading question. Leading question.
[00:32:15] Nicholas: That's why you start calling people out and actually, that's what happens when you interview researchers. They're like, "Well, I don't know if I'd ask it that way."
[00:32:22] Erin: Yes. It's like--
[00:32:24] Nicholas: I'm like, "What? Are we switching seats here?" It gets actually even funnier, Erin, when you're in a focus group because you almost don't want to over-talk. I feel like you try and predict what someone's going to ask you, or you're like, "Oh, I bet you, they're asking this because--"
[00:32:41] Erin: What are they going to do with this? Let me sneak in my feature request here, one way or another. Totally.
[00:32:49] Nicholas: Which is horrible, but then you also don't want to talk over people, because, again, everyone else is a user too. It's this double-edged sword that presents itself.
[00:32:58] Erin: Nick, I always like to ask people, what got you into the user research game, and what do you love most about it?
[00:33:04] Nicholas: I didn't start with UX research at the top of my list. I was into music. I got into research, like music policy research, which is a real job. I promise you, if you Google it, it's a real job. What made me fall in love with it was when I became a teacher afterwards.
The empathy component that was required as a teacher, to understand the different students that walk into your room, whether they come from international backgrounds or have diverse learning needs, wanting to understand how to connect with those people. That social dynamic, engagement piece was like, whoa, this is super cool. I have to not only educate people, but I have to really connect with them one by one.
As I went on in my teaching career, I got an opportunity to do a UX research test on a one-off project. Somebody asked me if I would help them with it. My friend had started a business that was involved in UX testing, so I jumped in, and I was like, "This is it. This is where my passion is at." I smile when I do this. I smile when I'm running tests.
It's the need to connect to people, and it's almost like a puzzle. I want to figure people out, and this vocation allows me to do that. I can't even look at products around my room without being like, "How was that? How was that done?" It has to be that piece for me, Erin.
[00:34:23] Erin: Any parting words of wisdom or thoughts on UX for UX testing you want to leave us with?
[00:34:29] Nicholas: I'm going to say it one more time.
[00:34:30] Erin: Yes, let's hear it.
[00:34:31] Nicholas: Test early. Test everything. Test often. Please, make the world more user-friendly for yourself, for everyone else. That's the best advice I can give.
[00:34:40] Erin: All right.
[00:34:41] JH: Yes, I think there's a lot of truth in that, for sure.
[00:34:43] Erin: Thank you, Nick. Thanks for joining us.
[00:34:45] Nicholas: Thank you for having me. Appreciate it.
[00:34:49] Erin: Thanks for listening to Awkward Silences, brought to you by User Interviews.
[00:34:54] JH: Theme music by Fragile Gang.
[00:34:58] [END OF AUDIO]