How to Ask Great UX Research Questions with Amy Chess of Amazon

Great user research starts with a great research question. Amy explains how to get this important part of the UX research process right.

Your research question is where it all begins. It’s what drives your research forward, informs when you’re ready to wrap up your project, and it’s what gets everyone involved on the same page. But how do you know you’re asking the right research questions? 

This week on the podcast, Amy Chess, UX Researcher at Amazon, chatted with us about how she chooses which questions to ask in her user research projects.

Amy talked about…

  • The difference between research questions, research objectives, and interview questions
  • How to ask the right research questions
  • Why you can’t pick a methodology before you know your research question
  • How building trust with stakeholders can push your research questions further
⚠️ Warning: Poorly-designed research questions can introduce bias. Learn how to recognize and minimize bias in UX research.

Listen to the episode

Click the embedded player below to listen to the audio recording. Go to our podcast website for full episode details.

Highlights

[1:30] What's the difference between a research question and an interview question?

[7:38] What comes first? The research question or the methodology? 

[11:13] How not to ask bad research questions. 

[19:07] Go where the research leads you, even if it’s not where you planned to go. 

[25:53] Structuring user interviews to answer your research questions.

[36:09] Work with user research before you have it all figured out.

About our guest

Amy Chess is a UX Researcher for Talent Products at Amazon. She specializes in qualitative data collection techniques and the meaningful synthesis of qualitative and quantitative user data.

Amy is intensely invested in developing new methodologies to evaluate integration efforts from a UX perspective, promoting the value of UX research amongst stakeholders, and pioneering approaches for UX practitioners, technical teams, and product management to collaborate and partner together.

Transcript

Amy: [00:00:00] The beauty of research is that when it works really well you start to discover all of the stuff that you hadn't considered before, and it opens up a ton. You should come out of it with a ton more questions than you had going in. 

Erin: [00:00:30] Hello everybody, and welcome back to Awkward Silences. Today we're here with Amy Chess, a UX researcher on the Talent Products team at Amazon. Today, we're here to talk about generating good research questions. Something that I'm pretty sure is applicable to just about anybody who might be listening.

So, Amy, thanks so much for joining us to talk about this very interesting topic. 

Amy: [00:00:55] Thank you both for having me. I am super happy to be here. This is one of my favorite topics, and a really, really important one in the user research space.

Erin: [00:01:04] JH is here too. 

JH: [00:01:05] I feel like I could do a seminar on bad podcast questions. So I'll, uh, incorporate some 

Erin: [00:01:10] Stay for the deep cut, director's cut, after hours for JH's hot takes on very bad podcast questions. Alright, fantastic. So we're not here to talk about interview questions. We're talking about research questions. So what's the difference, just to set the stage here. 

Amy: [00:01:29] Yeah, absolutely. So a research question pertains to what you need to learn in order to make a decision. And the interview question is what you ask in order to get that information. So it's two pretty different things, but people get them mixed up commonly. And so, you know, for your research questions, you may come up with questions that you want to have answered in order to make decisions that you would never ever in a million years put in front of a user.

You may ask those questions in subtly different ways or sometimes in pretty substantially different ways. And the questions that you ask your users do tend to be a little bit more indirect. Sometimes just kind of going around the topic. I can give an example if that would be helpful for everyone.

Erin: [00:02:19] Yeah. We'd love examples. 

Amy: [00:02:20] Yeah. So do I. So, I was working on a user flow previously where the initial home screen had an entry point that would take you to a secondary screen. And on that secondary screen, there were a variety of tabs, and we needed to make a decision about the order of those tabs and what the default tab should be. 

So that was our research question, or one of our research questions: which should be the default tab? Now, we didn't ask that question directly of our users, because of course, when you ask a question like that, you're forcing your user to abandon themselves as the user. You're asking them to leave their shoes and enter the designer's shoes.

And that doesn't really help us as researchers. And it's kind of unfair to the user, right? So we need another way to get at that information besides asking them well, what needs to be at the default? So in that case, your interview question becomes, what do you expect to find when you click on that button, what information do you think you'll see there?

And then in that way, you kind of begin to enter the participant’s world and see what are their expectations, what are their needs, what do they expect based on what they've already seen in the flow. And those expectations can then form the basis for what the default becomes. So that's one really good distinction between what the research question would be and then what the actual interview question would be and how they are different.

JH: [00:03:52] I love that example. The way you described that, you mentioned a research question or research questions, and I guess I had already made the assumption that you only have a single research question when you go out to learn something. Is it the case that you can have multiple if they're similar enough, or is there a best practice?

Amy: [00:04:07] Yeah. So for any given research project, I typically have four to five, sometimes more, research questions. But for any given research question, you may have to ask multiple interview questions in order to get an answer to that individual research question. There is the case, when we are prioritizing with teams, that you may have more than four to five research questions. Maybe you have a very long list, and then you have to begin to prioritize, because it becomes too much to tackle in a single research session.

So prioritization becomes really important, but I've never seen a situation where you have only a single research question that you want to have answers to. Typically it's, you know, four to five, maybe a little more than that.

Erin: [00:04:52] Is a research question different from a learning objective, or are they related concepts? 

Amy: [00:04:57] I think they're related concepts. 'Cause to me, the point is all about exploration, right? I don't like approaching research as a validation exercise, or an exercise to confirm what we think we already know. We should have the objective of learning about our users, learning about their unmet needs, their challenges, what they're trying to achieve. So that's absolutely the purpose of the research. I would say that research questions and objectives are analogous concepts. 

Erin: [00:05:25] Okay. So, you have this list of like research questions, these four or five research questions that you want to get answered.

How do you think about, as your participant starts giving you information, whether a question has been answered or not? And adapting your script or your sub-questions to check that box: got that one answered, can move on. 

Amy: [00:05:48] Yeah. Yeah, that's a really good question. So I think there's a distinction between answering the question in the context of a single interview versus answering the questions in aggregate across your entire sample. So for example, I had an interview this morning where I had a certain set of questions, but because of the mental model that the participant had going into it, it was difficult for him to really address the questions that I had pre-prepared. He came into it with a really different way of visualizing the problem, which is also really important for us to know. In a sense, that opens up another set of questions. So I always say things like, research is never really done.

Right? So part of it is, yeah you're getting some answers, but you're also learning to ask questions that you didn't know you should be asking. And that's one of the greatest benefits of doing research. It's not about getting answers. It's really about understanding what you don't know. And so in this case this morning, when I was talking to somebody, I had a long list of pre-prepared questions. And then what he offered up to me was that, Hey, I understand this process in a way that's really different from what you've laid out here. So he wasn't able to answer the questions I had prepared, but what he gave me instead was actually much more valuable. It was a different perspective on the problem that we hadn't considered before.

JH: [00:07:08] Nice. Yeah. I feel like that's something you always hear from experienced researchers is that improvisation and pivoting to what you're hearing is like actually, where you get some of the most interesting stuff. And I think people who are earlier to research are a little bit more anxious and like, I need to stick to the things that I had defined ahead of time.

So that's a really good point. To zoom out for a second on, like, order of operations: I know we're going to get to research questions and interview questions, but is it actually research question first, then methodology to answer that question, and then you get into specifics? Or does the methodology come first?

And then you go into the question or where does that fit?

Amy: [00:07:38] That's a fantastic question. So I always tell the teams I work with don't come to me with a methodology ever. Always come to me with the question that you want to have answered first. And the reason for this is because I think people can get really hung up on like, is it a survey? Is it analytics? Is it an interview? 

And they can spin their wheels. And really like, the value that the rest of the team brings is their own expertise around the thing that they're trying to figure out. And the value that I bring is I can say, oh, you're asking this kind of question, then we need to be using this kind of technique.

So usually the way it goes is: identify the research questions, identify the appropriate technique, then identify how you want to actually pose those questions. You know, I've certainly had people come to me before asking things like, can you do interviews so we can see how many people use this feature?

And I'm like, no, not really. We should look at analytics for that. And then if you want to know why the people who are using it are using it, then we can do interviews. So it's really kind of helping direct what it is that people need. Really getting them to focus a lot more on what the questions need to be, rather than coming to me with a specific technique.

Erin: [00:08:47] Do people usually know what their questions are or do you have to kind of co-create those questions together? 

Amy: [00:08:54] Yeah. Yeah. So it's very, very collaborative. People generally need a lot of help coming up with the questions. And that's very common when people haven't worked with a researcher before, it's extremely common to not know where to start. And I try to be really encouraging because developing that skill of query formation is so essential to being able to work on a product team and being able to be effective and that skill will serve that person throughout the rest of their career.

So I really bring them into the process to get them to identify good questions. I keep it simple: I ask them really simple questions. I'm like, what keeps you up at night?

Right. Like, what are you really worried about? What is going to happen if we don't understand this? And get them to identify areas of risk because at the end of the day, user research is a risk mitigation strategy.

So if we can get people to hone in on those areas that feel slippery, that feel nuanced, that feel like, whoa, we could really mess this up. Or if you are up against a potentially irreversible decision, what we call a one-way door, then that's something that you probably need to direct research resources to.

So that's where I try to get people to dive in first. And then I also ask them, you know, what decisions do you need to make? What is outstanding that you feel like you don't have clarity on? What are you arguing with people on the team about? What are you arguing with other teams about? What are the unresolved issues?

And the research is not there to tell us who's right and who's wrong. It's to inspire a more informed debate. It's intended to give us sort of directionally correct information, kind of pull us in the right direction and help us resolve things enough where we feel like we can bring in tech teams and start the actual building process.

Yeah. 

Erin: [00:10:48] Yeah. 'Cause you hear that, like, help me settle a debate. And it's like, you're both wrong. 

Amy: [00:10:53] Yeah, you could both be wrong. Right. And that happens.

Erin: [00:10:57] yeah. Right. 

Amy: [00:10:59] That's right. 

JH: [00:11:00] Yeah.

I love the phrase "a one-way door." We talk about that on the product side where I work, about what's reversible and things like that, quite a bit. But that's a good way to put a point on it. 

Where can a research question go wrong? Like how do you end up with a bad research question?

Amy: [00:11:13] Yeah. So I think research questions go bad, one, if they're not anchored specifically to a decision that needs to be made, or two, if they're not oriented toward understanding what's maybe two to three years off in the future. A huge benefit of having research is being able to be forward-looking. I think another area where research questions can go wrong is if they become very solution-specific and not anchored to customer problems.

So a lot of times you'll see people asking things like you know, they'll come to me with the research question. Like, well, what features should we have? And I'm like, you know, this gets us so far from the actual issues that people are experiencing. 

And I think with user research, you want to stay very grounded in what are the person's goals? What are they trying to accomplish? What are the barriers and challenges? What are their compensatory strategies when they are not able to get what they need? How does that impact their other work? And in fact, I think it's really important in research when somebody is talking about a problem that they're having, because they don't have a product that meets their needs.

There are actually a bunch of downstream cascading problems that occur from that, that aren't always documented, but I think it's really important to understand what they are. So for example, let's say there is some sort of manual process that you have to do today as a user: you're building pivot tables in Excel, and you're spending tons of hours doing this manual stuff that could be done in an automated fashion with the tool. You know, asking questions around, well, what kind of problems does that present for you?

Okay. And then how do you resolve that? And who do you have to pull in to help with that? Oh, you had to build this other thing to compensate for that. Oh, it delays you. And then going through that whole downstream, you know, cascading network of bad things that happen. Addressing it with your product isn't just addressing that first primary issue, which is the manual work.

It is actually addressing all of the downstream crap that happens as a consequence of that initial thing not working well for them. And so even when we're thinking about success metrics, we can think about all of those downstream things that are being ameliorated by the product. And how cool is that?

Like the success that we have when we introduce a product that solves a problem. That success can ramify out to a lot of different things. So that's why I don't like to be solution-focused in research, and instead really concentrate on what the problems are. What problem does that cause? And then what happens? What are the compensatory strategies? 

And then my other thing that I absolutely love to ask people about is, well, if you had this automated and you get all that time back, how do you use it? What do you do with all that time? People's eyes just light up. They're like, oh, well, gosh, I get to do all the strategic work that I was hired for.

And, you know, I get to use the skills that I want to use to be effective, and I get to talk to more people. And so that is another form of success, right? It's not just about what your product is doing for people. It's not just about the feature. It's about all the other stuff that the product or service enables.

Erin: [00:14:28] Yeah, absolutely. I mean, shameless plug for user interviews. We hear that all the time. No one wants to spend their time doing recruiting logistics. Right, so yeah. Yeah. If you can automate stuff for people. That's always a huge win. 

Amy: [00:14:44] Huge win. Yep. 

Erin: [00:14:46] All right. So what does a good research question look like? You know, you talked about how sometimes you ask a question and you learn, like whoever you were talking to this morning, some other interesting stuff that maybe answers a different question.

So, yeah. What's a good research question?

Amy: [00:15:02] Yeah. So I think good research questions, for one, aim at uncovering how people think about fundamental concepts that we may be taking for granted. So, it could be, you know, I used to work at Amazon Web Services, and I would talk to people about how they define applications. What does the concept of an application mean to you?

Really basic question, but really important that we have a shared understanding of what that is at the beginning of an interview, so that we're talking about the same thing, right? Same in the human resources space, just making sure, Hey, we're talking about the concept of succession planning.

What does it mean to you? How do you define it? What are the steps? What is the sequence of those things? So getting an understanding of things that may seem really obvious is, I think, a great place to start, because a lot of people kind of blow right past them. And if you skip that, number one, you're making a lot of assumptions during the interview itself, thinking that you're talking about the same concepts in the same way.

But number two, you know, if you establish it at the outset, you have a nice foundation upon which to build for the rest of the interview. And that can be really valuable. So, number one: having that shared understanding of certain fundamental concepts. And then number two: getting a little bit of information around how different concepts, processes, events, et cetera, relate to each other, or don't.

So then you can start to build out a schema or a mental map of that individual user's perception of the world that they work in. Right? So: what does this particular process mean to you? And how does it relate to the thing over here? And then what do you do? And is it always that way, or just under certain circumstances?

And then from there you can see, do those models match up across your participants? Do they differ along specific demographic characteristics, like geography or level of expertise? Those are all really important things to know. So, I would say those are really good types of research questions because it gives you enduring insights.

It's not just kind of a one-off thing. You can return to it over and over again because generally speaking the way people conceptualize these kinds of concepts, the relationships to each other, the challenges and barriers in their work. Those things tend to be relatively long lasting and enduring, and you can return to that research over and over again.

It's much better than the notion of someone asking, like, which design do they prefer? That's not a research question, right? That doesn't give you enduring insights. It's "I prefer this" or "I prefer that"; it doesn't give you any sort of insight into the mechanism behind it, or the why.

JH: [00:17:45] Something you said there was about, you know, does this vary by segment or other kinds of demographic vectors? Is that one of the research questions, where you're upfront saying, we need to figure out whether this works here? Or is it that you're asking one question, you're just talking to a different cross section, and you're keeping an eye on whether it varies? Or is it both? How does that actually work out?

Amy: [00:18:04] Yeah. So sometimes we do have explicit research questions where we do want to dig into the differences between different demographic groups, because we have suspicions either through, you know, informal conversation. Or some other piece of information that there may be a difference between them. And we need to explore it in a more systematic way.

Sometimes it emerges during the research. Like we, we go into it, not expecting a difference and then we start to see something. So that's been really interesting. Some of my recent research I've been observing differences in tenure of people and how that impacts their perceptions of certain activities.

So that was not something that I expected at the outset, but it's definitely something that emerged as a consequence of the research.

JH: [00:18:50] Yeah. And so when you're actually in, you know, a research project and doing this, and you started out with a set of research questions: do you modify it and groom it as you go, because you're learning new things and you want to adjust it a little bit? Or do you factor that into future research and stick to the original set that you started with?

Amy: [00:19:07] Yeah, that's a really good question. I think it's a little bit of both. I would say most of the time when I'm modifying research questions as I go, it's because things aren't landing right, or there's a reason that we need to focus on a certain area a little bit more, based on information we've gotten during that set of research.

Sometimes it's the case that you open up a can of worms and you're like, oh wow. This is like a whole new world and we do actually need to dedicate a secondary set of research to it because you've simply learned more. And now you're aware of what you didn't know before, and there are what we call the known unknowns.

You guys have probably heard that phrase before. The beauty of research is that when it works really well you start to discover all of the stuff that you hadn't considered before, and it opens up a ton. You should come out of it with a ton more questions than you had going in.  

Erin: [00:20:02] Seems like it's a good way to keep yourself in the job, right? So I've

Amy: [00:20:06] Exactly.

Erin: [00:20:08] some more questions here, guys.

JH: [00:20:09] Yeah. 

Amy: [00:20:10] It's infinitely self-reinforcing. You always have more questions. I think that's actually something that can feel intimidating a little bit to people who aren't used to research because they want some finality, some conclusions like, oh we're good. But you don't really get that with research.

You know, you get enough information where you feel like you can make an informed choice. You make the choice and then you have a robust plan to continue to collect information after you've released it to people. It never really ends and so you're like constantly in this feedback loop of getting that information and then making changes and folding it back into the product or service.


Erin: [00:21:28] So you want to ask a good research question. We talked a little bit about what some of the characteristics are. And you want to chip away at something approximating an answer, right? But you're going to get new questions out of this, and it's never really over.

And so, zooming way out on the enterprise of user research: if you started out knowing a customer 0%, do you get closer to 100% over time? Or, you know, the product's always changing, the market's always changing, the world's always changing. So the goal is not to know every customer in their totality, 100%. But is the goal, over time, to feel that you really start to know your customers, their various segments, and their contexts better and better? 

Amy: [00:22:17] Definitely. Yep. And you can begin to anticipate their needs, perhaps. I've never stayed in a space long enough to observe things changing so substantially that the research was no longer valid. If you're doing good generative research, that information tends to be relatively enduring. But in a fast-paced place like cloud computing, I wasn't in that space for that long, but I would imagine, because it changes so quickly there, you would perhaps have to do a lot of work to catch up to all the changes that the customers have. 

Erin: [00:22:51] Yeah, so the questions keep piling on, but it's not this sort of Sisyphean thing where you're starting over. The questions keep coming, and you get closer to an answer. 

Amy: [00:23:02] Yeah, exactly. It's more additive than starting over from scratch and feeling like, oh, I don't understand any of this. It's more like, okay, now I'm filling in the nuance. In any research area, you start out going kind of broad and just understanding.

You know, what are the needs and challenges at a really high level? And then as you start to amass the information, you can start to get more detailed, more nuanced in those conversations and just kind of fill in the gaps.

JH: [00:23:33] Yeah, it feels like there's a tree, and you kind of keep adding branches to it. Like, you have one branch and you kind of know where you are, and then it gets sub-branches and more leaves, and you have more coverage. That's what's coming to mind for me. 

Amy: [00:23:45] Yeah. 

JH: [00:23:45] You mentioned, you know, that a common way of having a bad research question is being too solution-focused. And I would almost think of that as being too narrow, right? Like, we're really zoomed in on the thing we want to build, and we're asking or trying to learn about that. That, to me, feels like something where even if you're not an amazing practitioner of this, a group of people could probably identify: hey, this seems a little solution-y, maybe we should zoom out a little bit? 

But I'd imagine you can miss in the other direction, where you go way too broad, where your research question is like, what do you value in life, or something. Right? And it's, like, way too detached from what you're doing. And that almost feels harder to diagnose.

Like, when might we have overshot in the other direction? How do you make sure you don't end up too far removed from things?

Amy: [00:24:24] Yeah, that's a great question. I think the answer to that is to always make sure that your research questions are very tightly coupled to the decisions that you need to make. So, one exercise I've done in the past is I've kept an Excel spreadsheet where we list out the research questions. And then we have a column for the decision that question would help inform.

It could be an item on the roadmap. It could be a specific requirement. It could be some piece of customer feedback that came in where we needed a deeper dive. But I think that's really the solution to having it be too out there and not tethered to anything specific: just make sure that there's a nice relationship between the research question and the decision that you're trying to address with it.
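The spreadsheet Amy describes, with a column tying each research question to the decision it informs, can be sketched in a few lines of code. This is only an illustrative model (the field names, example rows, and helper function are our assumptions, not from the episode), but it shows how the mapping makes untethered questions easy to spot:

```python
# A sketch of the question-to-decision tracker Amy describes keeping in a
# spreadsheet: each research question is paired with the decision it informs.
# Field names and example rows are illustrative assumptions.

research_questions = [
    {"question": "Which tab do users expect to see by default?",
     "decision": "Default tab order on the secondary screen"},
    {"question": "What do users value in life?",
     "decision": None},  # not tethered to any decision -- too broad
]

def unanchored(questions):
    """Return questions with no decision attached: candidates to cut or rescope."""
    return [q["question"] for q in questions if not q["decision"]]

# Questions with an empty decision column are flagged for prioritization.
print(unanchored(research_questions))
```

Any blank cell in the decision column is then a prompt to either rescope the question or drop it before the research session.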

JH: [00:25:14] Nice. Yeah, that's good advice.

Erin: [00:25:18] Going back to the other end, we'll swing back and forth, right? So now on the more narrow side of things: you talked about good research questions having enduring insights, right? And not being like, which one do you like better, A or B? That's what A/B tests are for. But what makes a good research question for an evaluative test?

There's clearly a place for later-stage evaluative testing within the UX research toolkit. So how do you ask good research questions later in the game? 

Amy: [00:25:53] Yeah, I think that's a fantastic question. So my favorite technique for that is, at the beginning of an evaluative session, to spend a few minutes understanding their goals, what they're trying to achieve, and the work that they're trying to do. Then we'll explore the prototype, and they'll give me feedback on that, and I'll have some targeted questions. 

And then I'll ask them, how does this relate to what you told me you needed at the beginning? Does this align with what you said you needed? Is it off base? And in that way we can have a conversation that's very much rooted in what their needs are, because if you just give them a prototype without any kind of introductory conversation about what their needs are, it's a little decontextualized, and it becomes really difficult to tie it back to the needs that they have.

So typically, and this is true of both generative and evaluative research that I do, the way that I structure my interviews is I start out really general and then get increasingly specific about certain functionality, features, et cetera.

The reason I do that is because having it open-ended at the beginning allows them to offer up whatever's top of mind for them. It allows them an opportunity to put their priorities out there and to direct the conversation. It keeps them firmly rooted in their perspective as the user. And I think that's really important. Even when we're testing solutions, even when we're, you know, testing prototypes, having everything be anchored to the work that they're trying to do is extremely important.

And then bringing the conversation back to that and saying, okay, how does this align with what you need? So then it's not a question of preference. It's a question of, does it help you? You know, it's a question of does it have value?

Erin: [00:27:43] So a good evaluative research question is, you know, what is your need? What is your pain? What is your job to be done? Whatever. And does this solution do that? 

Amy: [00:27:53] Yeah exactly. Exactly 

Erin: [00:27:56] And when you're choosing people to talk to, do you already know what their pain is? Right, like, have you done your SQL to, you know, find the people with the right pain points? Or do you have a sense that they're a relevant user, but you're trying to zero in on exactly how they would describe their relationship to this kind of thing?

Amy: [00:28:16] I have a general sense sometimes of what the issue is, but I always have them go through it, because sometimes there are subtle differences depending upon where they are in the organization. And having them go through it also keeps it front of mind for them: if we've just talked about all of their pain points and what they're trying to achieve, it's salient for them.

They're kind of primed then. And so when they're going through the prototype, they can easily circle back to the things we discussed at the beginning. But yeah, the point is just to keep it very relevant to them.

JH: [00:28:50] Nice. Yeah. It's like the need sandwiches the prototype: you align on it upfront, then show them something, then circle back on it at the end.

Amy: [00:28:57] Right. I love that. I love that sandwich analogy. That's great. Yeah, 'cause I think a bad approach would be to do a prototype review first and then ask them if it meets their needs, because I think sometimes people feel a little bit of implicit pressure to be like, yeah, sure, I guess it does.

But if you have them just state their needs upfront, without any kind of biasing information, then you're going to get a better pulse on what it is they really need. They're not going to be feeling that, you know, social desirability pressure. I think sequencing it in that order is really important.

JH: [00:29:40] Yeah, it feels like there's a powerful psychology thing there. If you do it in the order of just showing them something and then saying, do you like it? They want to be pleasing, so they have some pressure to say yes. Whereas if you ask them upfront what they need or care about, then show them something and then ask them about it,

there's almost a cognitive dissonance to not disagree with themselves. You've kind of really flipped it there.

Amy: [00:29:59] Yeah. 

JH: [00:29:59] Cool. 

Erin: [00:30:00] Do you tell them upfront? Like, you always do some sort of intro: how's the weather, this is what we're going to talk about today. Do you kind of tell them the outline? Obviously you're not saying, I'm going to ask you about your pain, and then I'm going to see... Right.

But do you give them a sense of how the time's going to go? That we're just going to kind of chat for a bit, and then I'm going to show you some stuff? How much do you reveal so that they're oriented about how the time's going to go, without sort of revealing the act?

Amy: [00:30:32] Yeah, that's a really good question. So I give them an overview at a very high level. I'll tell them: I'm going to talk to you for the first few minutes to learn a little bit more about you and the work that you do, any challenges or, you know, barriers you experience in that work, things that you do today to accomplish it.

And then we'll do a review of the prototype, and you can give your insights on that. And that's generally all that I give them at that point. I keep it pretty high level. I certainly don't explain the rationale behind it.

Erin: [00:31:03] Right, right, right, right.

Amy: [00:31:04] I keep that to myself.

JH: [00:31:06] Yeah. To switch gears a little: it seems like the stakeholders you're working with don't often come to you with great research questions off the bat. They might come to you with a methodology in mind, or a business question, or a solution. And then you're trying to help them, you know, hey, let's unpack this a little bit and actually get it to a better research place.

Are people pretty receptive to that? And is that a pretty natural back and forth? Or do you get into a little bit of a headbutting thing of, no, I just want to run a survey. Like let's run the survey.

Amy: [00:31:30] No, actually, people are very receptive to the feedback, because it is a conversation. And I do want them to own part of that, and it should be a really nice collaboration. It should be fun, really, if you're doing it right. So I try to bring people in. They'll ask questions, and then I'll ask them questions back.

I'll be like, well, why is this important for us to know? What is this going to help us understand? And then if they realize that, well, maybe it's not going to help us understand anything that's important for what we're doing, then maybe we don't need it. And we go through that process, and people have been very receptive to it.

Very engaged. And I think it's important to, you know, meet people where they're at and bring them in. I mean, user research is pretty new for most teams, right? There's not a lot of precedent for it. Almost every team I've been on, it's been the first time they've worked with a researcher. So making it a positive experience, I think, is really important, and making it a good partnership is really important.

Making it fun is really important, because it really is a journey. It's an act of discovery. Through your research, you can learn and discover things that no one else in your company knows, and you're the first person to see it. And how cool is that? Right? So I love bringing people into that process.

I think it's awesome. So yeah, people have been extremely receptive. It's definitely a back and forth around what the questions should be. And giving people advice, like, you know, you don't need to come to the table with a technique in mind. People have been really receptive to that as well,

and they've gotten better about thinking about the precise question they need to have answered.

Erin: [00:33:13] I feel like there's so much trust there. That must be, you know, relationship building, EQ, things we associate with UX researchers, but with internal stakeholders too, right? I mean, you mentioned asking a good research question. Let's just start with, what keeps you up at night? No big deal.

It's like, oh God, how much time do you have? Right?

Amy: [00:33:35] right. Right. Exactly.

Erin: [00:33:36] You probably get to know people, you know, digging into that kind of existential stuff. Right? I mean, work stuff, but yeah.

It must put a lot of importance on building those relationships, right? To get to great research questions. 

Amy: [00:33:53] Yeah, it is like that. I think that relationship between the researcher and the rest of the team feels like a special one, because you are sometimes revealing things that are hard for people to hear, too. You know, maybe an idea that we thought would land really well doesn't. Maybe the need that we thought was there in the market isn't as there as we thought it was. So sometimes we come back with tough information. And I think it's all about not personalizing it. The research and the data, it's like a gift that's supposed to make our lives easier, prevent us from building things that don't get used or get used in the wrong way.

It's supposed to be a help, not a judgment. So I never want to couch things in terms of, this was good or this was bad, or they preferred this or didn't prefer that. I want to couch things in terms of an understanding of people, an understanding of human behavior and the mechanisms behind it, and really get people to learn and be curious about all of that, because there is so much to learn.

So, yeah, I think that relationship is very special, and I think trust building is really important. So I'm always really open: when I'm conducting interviews, if I've asked a question maybe not in the best way, I'll point it out to people. You know, I'll put my vulnerability out there, I'll put my mistakes out there so people can learn from them.

'Cause this is really all about learning, but it is a vulnerable space. And, you know, I think it can be intimidating for people who have not worked with research before.

JH: [00:35:28] Yeah. I always feel like stakeholders are juggling other considerations too, right? I think sometimes when they come in with a question that is a little loaded, like, I just want validation for this idea, it's probably not because they need to be right from an ego perspective. It's probably more like, well, I have to have a Q3 plan outlined soon, and all my thinking is around this idea so far, so I would like to kind of check the box and say

Amy: [00:35:53] Sure. Sure. Yeah. 

JH: [00:35:56] Yeah, so is it a matter of just getting with them early enough, so there's some runway that if you need to course correct because you learn something new, it's not catastrophic? Or how do you get them comfortable with the idea that we're maybe just not going to rubber stamp this, and we're actually going to do some real discovery here?

Amy: [00:36:09] Right. That is an excellent question. So I literally just gave a presentation on this, like, a week or two ago. And one of my slides said something along the lines of, work with UX research before you're ready to work with UX research. Because I think there's a tendency for people to want to have their idea more fully fleshed out first.

What is my product concept? Who is the market? To have all of this, you know, filled out and understood before the research. But the research is actually the method that you use to get to more clarity and to have those things more fleshed out. So I think the reluctance to come to research early enough is really misplaced. It feels really uncomfortable sometimes to go to somebody on your team and say, this is a big ambiguous mess.

And I really don't know where to start. It could go this way, it could go that way, I don't really know. And because that is uncomfortable and doesn't feel great when things are ambiguous, I think people will put off coming to research longer than they should. And then it becomes a real headache to try to get everything done in time.

So I tell people, it's okay if you feel uncomfortable, it's okay if it's ambiguous. I love ambiguity. Bring it on. Let's have a conversation about it. Again, it's not about getting definitive answers. It's about understanding the space enough to know: have we identified risks?

Have we identified questions we didn't know were there before? And so at that very early stage of research, you know, that's kind of more of what you're focusing on.

JH: [00:37:53] Yeah. I feel like it's such a superpower when people realize that sharing work early is to their benefit, in whatever you do, right? Whether it's an early draft of a marketing material, or a sketch of a design, or a product strategy. I think when you're earlier in your career and you don't have as much confidence, it's harder to do, because you want to show people that you're competent and present something that's good.

I think when you get a little bit more tenured and experienced, you start to have some of that confidence, and you're just like, eh, here's something off the cuff, what do you think about it? Because the input's valuable to me and I want it in real time. And it's a good point.

Amy: [00:38:22] Absolutely. Yep. 

Erin: [00:38:24] Two thumbs up. What do you like most about UX research?

Amy: [00:38:29] Hmm. The most? I would say I have, like, a top five, probably. I don't know if there's a single thing I like the most. I think I like teaching team members. That's probably my favorite thing.

Erin: [00:38:45] Other UX researchers, or stakeholders, or both?

Amy: [00:38:48] Stakeholders, yeah. I haven't had too many opportunities to work with other UX researchers, but in terms of bringing research to stakeholders, I absolutely love the process of opening people's eyes up to what this world looks like.

And having our team members become better at query formation, asking better questions, being curious. Those are lifelong skills that can be applied anywhere, and I just absolutely adore teaching all of that stuff. I have a teaching background, so that's probably where that comes from. I get my teaching fix on the team.

Which is great. Yeah. But seeing them ask questions and make their own discoveries, gain confidence around that, and then develop some autonomy around forming their own questions and doing their own research is really great to see. So I would say that's probably my favorite part.


Erin May
SVP, Marketing

Left brained, right brained. Customer and user advocate. Writer and editor. Lifelong learner. Strong opinions, weakly held.
