
January 7, 2019

Last Updated: May 11, 2020

7 Reasons Not To Do User Research with Michele Ronsen

We chatted with Michele Ronsen about when companies should pause before doing qualitative user research

Carrie Boyd

When Michele Ronsen posted her “7 reasons not to do user research” we knew we had to chat with her on the podcast. As user research’s biggest fans, we wanted to chat with Michele about why teams could possibly not want to spend time doing user research. After all, we’re all about using the right tools and methods for the right job—user research isn’t the right tool for every job (though we do think it is way underused!). Michele had a lot to say about the steps teams should take before engaging in user research and when your time could be better spent focusing on something other than qualitative research.


Erin and JH talked to Michele about some of the reasons a company may not want to do user research right now. She provided some really interesting insight, drawing from her experience running her own consulting company, Ronsen Consulting, and teaching new UXers at General Assembly and Berkeley. Below, 7 reasons not to do user research. 

1. When you can answer your research question with analytics, secondary research, or by talking to your team. 

User research isn’t a panacea. It’s not the best way to answer every single question your team could think of asking. In fact, Michele finds that many questions don’t need to be answered through new qualitative user research at all. 

When Michele consults with a new client, she likes to start with what they already have. Learning about the analytics they collect, the research they’ve already done, and the help tickets that come through their customer support helps Michele learn a lot about the research that needs to be done. It also helps her learn about the problems the company may be facing without spending extra time on research.

There are many different ways to answer research questions, and choosing the right method is the first step to conducting great research. Michele has created some templates to help people find the right method for their research. Research methods vary greatly, from quick first click tests to in-depth ethnographic studies. Choosing the one that best answers your question is part of ensuring you're using your time and budget wisely during the course of your research project.

2. When you don’t know who you should be talking to. 

Not everyone can answer every question. For example, I, a 25-year-old Content Creator from Atlanta, GA, probably can’t tell a business owner much about pool installation in Chicago. Since research budgets are tight, and timetables can be even tighter, it’s important to ensure you’re asking the right questions of the right people. This helps your team get the best, most helpful insights from participants. 

Michele told a story on the pod about how business owners sometimes don’t make the distinction between feedback in general and feedback from their target audience. 

“I had someone approach me and ask if I would be interested in collaborating on a project with my General Assembly students, and I asked them to tell me a little bit about the project. He was redesigning a website for a makeup provider, kind of like a Sephora-but-not type of retailer. And in conversation with him, he thought it would be a good fit because it would give my students exposure to a real world project, and it would be a good fit for him because he would be able to gather a ton of actual feedback. And I was curious about the request because, to me, that's not at all the people that he should be gathering feedback from.
Because first of all, half of my General Assembly students are male, and my hypothesis is that most people buying makeup online are female. Secondly, the demographics of my students are probably more educated and perhaps more tech savvy because of where we're located in the Bay Area, and that might not fit their core buyer.
But he was very surprised, and he kind of picked and poked at me and said, ‘But don't you think this would be a great experience?’
And I said, ‘I think this would be a great experience of what not to do.’
And it turned out to be a very fruitful conversation, and he thanked me very much, and he said, ‘I just, I never really thought about it like that.’
And I said, ‘Gathering the right information from the right people is kind of... That's foundational. We don't want to ask the wrong people questions because we're not going to gather meaningful feedback.’”

So how do you know who you should be asking your research questions to? If you already have a product out in the real world, it’s likely that you have some data on who your customers are. From there, you can break them out into personas, find your best customers, or settle on a target market. 

For example, if you want to learn more about how to reduce churn in your business, it’s probably best to talk to recently churned customers. If you’re trying to learn how to launch a new feature to your target market, you can talk to people who don’t use your product but fit the description of your target market, or to current customers who could benefit from your new features. You can work with your analytics, customer support, or even marketing departments to use the data you already have to figure out who you need to talk to next. 

3. When you don’t know what you want to learn, or how you’ll use those learnings.

Doing user research for the sake of doing user research isn’t the best use of your time. Michele sees this a lot with her consulting clients. Sometimes, a client will come to her for her help with research, but they’re not sure what exactly they want to learn or how they’ll implement their learnings. 

“If you don't know why you're doing the research, I don't think you're going to be able to build a great plan and ask the right questions. Knowing that end goal will really be informative, and this is also a key reason to get your stakeholders involved, too, because your stakeholders should all understand why you're doing it as well.
So, if we didn't want to do a general exploration, that's fine. Maybe we're doing it to become a little bit more informed about a product or service or a new profile or target that we might go after. So, maybe we're doing it sort of in a generative way so we can become a little bit more informed about what could be. That's a totally fine answer to why.
But if we're doing something a little more tactical, or we're doing something that requires any sort of task based something or evaluative based something or generative something, we want to know why we're doing it. If we can't answer the question why, I would suggest abandoning ship at that point. And also, if you can't agree on why, there may be more than one why, and that's fine, but if we can't agree on the why, then we're not going to be moving forward in lock step.”

It’s important to start research with a specific, actionable, and practical research question. This means it’s specific enough that you can answer it within the scope of one research study, actionable in that you can reasonably create a solution, and practical in that it is achievable in the time frame and with the resources you have available. Zeroing in on the right research question is the first step in doing research that helps your business grow. 

4. When you don’t have enough time to do qualitative user research.

Sometimes, projects have tight timelines, and there may not be enough time for your team to do comprehensive qualitative research. In these cases, Michele suggests that teams use analytics or customer support data they already have, conduct secondary research, or run quick, unmoderated quantitative tests.

Unfortunately, “we don’t have enough time,” is often a team’s excuse for not doing research in the first place. Michele suggests reserving this particular reason for really extreme situations, like one or two day turnarounds. She also noted that, even if a project doesn’t leave time for research, it’s important to take the opportunity to address how you’ll make time for research in the future. 

Michele suggests implementing an ongoing research program, with time allocated for research on a regular basis. Even blocking off an hour or two a week for research sessions can help teams keep users front and center. This also means your team won’t be able to fall into the “we never have time for research” trap. 

5. When you’re going through the motions to prove your solution is right. 

Sometimes, people on a team push for research just to validate their own ideas for the product. Many times, this isn’t malicious or even a conscious choice; it’s just something people do. But if you’re only doing research to validate your ideas, what’s the point of doing research at all? 

It’s pretty easy to determine whether or not you’re doing research for the good of the users or to validate your solution. Michele suggests asking everyone on your team why they want to complete this specific research this way. Using the 5 whys, you can uncover more about the goals of your teammates during research, and ensure you’re in it for the right reasons. 

6. When you don’t have stakeholder buy-in.

Without stakeholder buy-in, your research won’t get very far. Michele pointed out that, unfortunately, budgets are a thing and without buy-in from stakeholders, research is unlikely to get the resources it needs to do well. She also pointed out that, without stakeholder buy-in, research isn’t as likely to be heard and absorbed by the rest of the team. 

If your stakeholders aren’t on board with your research plan, take some time to sit down with them and learn how you can work together to make research valuable to your organization as a whole. Zach Lamm, UX researcher at SoFi, recently sat down with our team for an interview about stakeholder interviews (very meta). He suggested asking stakeholders three key questions about what they expect from research. Here they are:

  1. What motivated the stakeholder to come to research with this question in mind? 
  2. How does answering this question fit into the broader context of the business?
  3. Above all, what problem are we trying to solve for users?

These questions help you learn more about what stakeholders expect from research, and help you understand what you need to do to win their buy-in. Some stakeholders will have unrealistic expectations of research, which means you’ll need to do a bit of work setting clear boundaries and expectations about what research can and can’t do. Other stakeholders may be skeptical about the value of research, or be confused about what they can expect from it at all. In these cases, you as the researcher can lay out the benefits of each research project and present a clear plan for how other teams can use research to learn and grow. 

7. When you haven’t right-sized the question.

Some questions are just too big for user research to answer in the scope of a single project. Questions like, “What do our customers want from their real estate providers?” or “What’s the best way to build a product discovery app?” are too big to be answered in a single study. That’s why Michele suggests right-sizing your question before starting user research. That means making sure your research question is actually something you can answer within the scope of your project, with the resources you have available. 

We’ve found that the best way to ensure your research question is practical for your project is to make sure it is specific, actionable, and practical. This means, rather than asking “What’s the best way to build a product discovery app?”, you can ask questions like “How do people discover new products right now?”, or, “How often do people search for product discovery platforms online?”. These questions are specific enough for you to know when you’ve found the answer, practical in that they could be answered within the scope of a single study, and actionable in that you could do something with the answer you find. 

Right-sizing your research question means you’ve considered not only whether or not you can answer your question, but whether or not you can answer it in the time frame you have with the resources you have available. 

For example, to answer the question, “How do people discover new products right now?”, you would likely need to do at least 5 generative interviews with early adopters, which would take a day or two to complete. Factor in project preparation, participant recruitment (which, on average, takes about 3 hours with User Interviews 😄), the sessions themselves, and coding your notes, and the timeline adds up quickly. 

In contrast, “How often do people search for product discovery platforms online?” could be answered in an afternoon, with no outside help. You can use tools like Ahrefs to find the search volume for product discovery platforms, compile a list of the most relevant results, and have the information ready by the end of the day. 

Still struggling to understand when’s the right time for user research? Check out this little flowchart we made to help you decide when you should start research, and when you should take a step back. 👇

About our guest

Michele Ronsen is a UX and design researcher, founder of Ronsen Consulting, a faculty member at UC Berkeley, and an instructor at General Assembly. She loves digging deep into research, being people’s research buddy, and introducing teams to the power of research. 

Transcript

Erin: Hi, everybody and welcome back to Awkward Silences. We are here today with Michele Ronsen. She is a consultant and educator, all things design and UX research. She teaches at General Assembly and UC Berkeley, and has a thriving consulting business as well. Today, we're going to talk about something I'm real excited about, which is... Look, we love user research around here a lot, but it doesn't mean you should always do user research, and we're going to talk about when you should not do user research. So, this should be a fun one.

Erin: Thanks for joining us, Michele.

Michele: Thank you for having me here.

Erin: And we've got JH here too.

JH: Yeah, this topic seems a little blasphemous for us, but I'm nothing if not open minded, so I'm excited for it.

Erin: We'll rile the troops, here.

JH: I'm willing to be convinced, yeah.

Erin: All right, so Michele, get us started. What is one time, one reason one might not want to do user research?

Michele: That's a great question. First, maybe kind of set the context.

Erin: Sure.

Michele: No one is more delighted than me that... Or, a few people, many people share my delight at the recognition of how critical user research is in the product development and service development cycle, and more individuals and organizations are excited to learn more about user research and how to do it. I think we are bordering on becoming sort of a feedback-obsessed culture, and that's a totally different topic.

Erin: Wait, is that a good thing or a bad thing?

Michele: You know, I think it has its pros and cons. I think we could [inaudible 00:02:23] about each, right?

Erin: Yeah. I personally have mixed emotions. When I read obsessed, user obsessed, it's like, maybe you should chill out a little bit.

Michele: Right? I mean, how many times do you rate your Uber driver? How many times... Does anybody even look at that? I just hit five and go. I'm a Lyft person. I don't even think about it anymore. It's not meaningful.

JH: That's a good point, right? You can only hedge your uncertainty to a certain level. You can't eliminate it fully, and so at what point are you hitting diminishing returns is pretty important.

Michele: I think that's a great question, which is also a great segue into this list that I compiled: seven reasons not to conduct user research.

Michele: It came about because I field a lot of questions for my students and my clients about what's the best way to approach X? Or, what's the best method to use for Y? Or, how do we phrase this, XYZ? We're trying to learn about one, two, three, and not all of these questions are user research problems.

Michele: Many different questions can be answered by different means that don't have to do with interviewing or interacting with customers directly, and one of the first things I like to communicate in my classes and with new clients is, my personal preference is to let's look under our own hoods first. Let's find out what we could learn without talking to people.

Michele: There's a lot of experience in the room. If you're asking the question, there's some people who know something about the topic. The two people that I like to seek out first in an organization are the data analysts and the customer support people, people that are handling help tickets.

Michele: So, you have an existing product, or you have something that exists out there, or a competitor does, and there's so much to learn from what's already out there. So, I think that if you can answer a question better with analytics, use analytics. Or at least start there, and that will help you make a more informed decision about what to do later on.

JH: How do you know if you can better answer a question with analytics and if you can't?

Michele: I think the first question, I would ask myself, and how I like to teach this is, do you have something that exists? Do you have something that is out there in the wild in some sort of format? And if the answer is yes, and then the next question is, is it live somewhere?

Michele: Then you have some sort of analytics, whether it's just the Google analytics, and you can tell where these people are coming from or where they're clicking or where they're going or where they were abandoning or keyword searches or something a little bit more robust. If it's a product or service that's been out there for even a couple of months and you have any sort of tagging going on, you can learn a hell of a lot.

JH: On the customer support side, how do you go about making sense or picking up on trends of what they're hearing? I feel like it's such a rich source, because they're on the front lines, but often it's not really organized or necessarily prepared in a way to just dig through. So, how do you actually start to find insights there?

Michele: I should probably temper this conversation with saying that I'm not a formally trained researcher, I'm a recovering designer. So, my approach is I'm probably somewhat unorthodox.

Michele: So, the way I approach it is I like to introduce myself to the person that oversees the support in that group, and just say, very candidly, "Hey, I'm a researcher, and I'm working on this topic that I understand that you help oversee. I'm curious about, let's just make something up, I'm curious about the onboarding process. What do you think? What's going on here? What's working? What's not? And then, what sort of suggestions have you made, if any? What sort of improvements have occurred over time? How often are you interacting with your product manager or designer, if ever?"

Michele: And a couple times, I'm the first person to introduce that triangle, and it seems like these people are such a source of truth, and the same with the analysts.

Michele: So, I go to the analyst, I'm, "Hey, I'm new. I don't know anything. I have this beginner mindset. I want to be your friend. I bet you know a lot about what's going on here. Let's look under the hood. Tell me what do you see?"

Michele: And then the three of us, I mean, those two people can tell you what's happening, based on their records and based on their spikes and drops. They can't necessarily see why it's happening, and that's where user research is really, it's just a terrific complement. But I love having those two. Those are my buddies. Those are the first buddies I want in the company.

Erin: Right, right.

Erin: Okay. All right. You got a nice plug for user research in there. We knew it was going to happen, but let's talk about why user research is the wrong thing to do. What's another... So, if analytics can answer your question, use analytics. What's another reason you maybe shouldn't do user research?

Michele: So, let me complete that thought. If the analytics can answer your question in terms of what's happening. So, for example, where are people dropping off in the onboarding flow, or where's the friction in the onboarding flow? We can answer that question through analytics.

Michele: Then, use the analytics to inform some sort of user research and follow up to talk to people about why that particular section in the flow is difficult, if there's something out there in the wild.

Michele: Another example would be if time doesn't permit it. So, if we're looking to understand, for example, the college application process, and we're looking to understand what's working and what's not, but the team needs the answers in two days, we're not going to have time to do a proper diary study. We shouldn't force something within... because of a time constraint. We shouldn't truncate our... The appropriate methodology because of a time constraint.

Michele: If that's a real time constraint, well, let's figure out another way to approach it. And maybe another way to approach it is to do some secondary research and comb through some reviews, or some college application sites, or something like G2 Crowd and find out what people are saying about the process or about this tool. So, we can learn in other ways, but I'm not a big fan of shoving a square peg into a circle sort of hole, if you will.

JH: Yeah. This is super interesting one to me, because I agree with you. We shouldn't force it or compromise the research process so heavily that it's not actually valid or useful anymore. My concern about it, and I wonder if you have thoughts on this, is it also feels like something... There's a pretty common excuse people might use of why they're going to skip research and just follow their instincts or do what's in their gut.

JH: So, how do you make sure that this is coming up for the right reasons, and it's not something that people are leaning on as a lazy crutch to get out of having to do research?

Michele: So, I don't like to say no, but what I like to do is come back with options and say, "I don't think this is the best approach right now, but what we can do is start to put our heads together and do more of a rolling research series of studies."

Michele: Let's just commit to run a study every week, or let's commit to running every other week, and let's identify... Let's look at your roadmap and find out what kinds of questions and what kinds of people we think we're going to want to talk to at certain phases in the overall process. And then let me plan for that.

Michele: I'm happy to do that. It's probably going to be more meaningful to you anyway, because if we are a little bit more agile and we have a little bit more improv... I'm a huge believer... To me, user research is part art, part science, and part improv, and if we all kind of get a little jiggy with it, we're going to work together, and that's my goal. I want to get you the most meaningful input at the right time from the right people.

Michele: The time thing though, it's interesting to me, and one of the push backs that I have is, it's funny, you don't think you have the time to do research right now, but you think that you have the time to correct the mistakes that research might have prevented. Invest the time now, or invest the time later, but my suggestion is let's not stop anything. Let me ride sidecar along with you, and I want to be your friend. So just try to... Let's be buddies. I don't want to disrupt...

JH: I think we'll call this episode: Michele Ronsen Wants to Be Your Friend.

Michele: Your research buddy. Your research guru.

JH: What I love about that is I do think it's a very pragmatic perspective. I think sometimes it's easy to be idealistic and be like, "Stop the whole project, and we're going to make more time for research."

JH: And to your point, while that might be great or the right solution, it doesn't win you a lot of friends in some cases, and there are other ways to get ahead of future problems in a way that might be better received by the different stakeholders and so forth.

Michele: Absolutely. I mean, to me, the biggest predictor of success on a study or in a client relationship is how involved the stakeholders are. And if the stakeholders are involved from the onset, and they may not be believers to begin with, but if I can convert them, and if I have my stakeholders that are active, engaged, I'm 90% sure that I'm going to be able to move from insights into action and get that team to act and address the learnings along the way.

Michele: And that's how I measure my success. It's not just in providing the information that they're looking to learn about, it's about moving the teams to be able to act on that information quickly, and efficiently, and effectively. So, I want to make friends. I want to include them, and hey, they know more than I do about the product, and they have different expertise, and I want to learn from them. It will just be a whole, everything is just stronger and better if we work in concert.

JH: Totally. Well, another one on your list was, if you don't have stakeholder buy in, that's not a great situation to do research within, and this kind of feels related to that. How have you seen that one play out?

Michele: Totally. I mean, there have been times where I've been brought on in stealth mode, where the VP of something says, "Hey, I have a hypothesis. I want you to explore it, but you're going to be working in a vacuum, because we can't tell anyone about this."

Michele: But other than those situations, which are far and few between, again, I find that the biggest predictor of success is stakeholder involvement. And I'm a big believer, from an education standpoint, I understand people learn in different ways, and the different roles on a product development team, from your engineer to your product manager, to your designer, to your content strategist, everybody brings something different to the table, and I want to learn from them, and if we can all come together, and get actively engaged, and, ideally, tie the research goals, to their goals, to their individual performance goals or their team goals, that will just increase the chance of success exponentially.

Erin: You talked about stakeholder involvement, and another phrase you said was stakeholder buy-in. So those are different things, buy in and involvement? Potentially... Do you ever see a case where...

Erin: Is all stakeholder involvement good? Do you ever get any naysayers involved who are maybe not helpful or beginners who don't really know what's going on with the research and find a way to use it for harm, or is all stakeholder involvement just good, and just be buddies with everybody, and get everybody involved, and that's going to be a good thing?

Erin: You have any tips around kind of how to get stakeholders involved?

Michele: Yeah, that's a really... No, it's not all good. I like to just... First, I guess I suss out their UX maturity and understand their level of experience with user research in the past. And if they don't have that, then do some quick education right there on the spot in the moment.

Michele: So, for example, my research plan is not going to be 12 pages. It's not going to include every single question in the guide. For me, the research plan will be successful if we are identifying the umbrella questions and understand very, very clearly how the research will be applied or what will happen to those learnings and when, and we can go from there. So, to me, that's the first level. Let's just make sure that we're on the same page.

Michele: So, this is what we're trying to learn. This is why we're trying to learn it, and this is how we're going to apply those learnings. That's sort of the first step.

Michele: The next step is, okay, let's coordinate and dig a little deeper and explore some components of the plan, and I also want to make sure that each person in the room is tied to or will benefit from that exploration in some way.

Michele: But I'll author the document. I'll provide you with commenting rights. We'll get it to a point, but we have a deadline, so we'll get to a good enough point, and then move on. And these are living, breathing documents, and I want you to be involved in the documents.

Michele: At some point though, we have to either agree to disagree or agree to just keep progressing, because we've got to keep moving, and let's try it out. If it's not working, this is what pilot sessions are for, as well. If the questions aren't clear, or if we're not getting the types of responses we hoped for or the depth, or we have too many questions or too few questions, we do a series of pilot...at least one pilot session, if not a series, to test that.

JH: All right, a quick awkward interruption here. It's fun to talk about user research, but you know what's really fun is doing user research, and we want to help you with that.

Erin: We want to help you so much that we have created a special place, it's called user interviews.com/awkward, for you to get your first three participants free.

JH: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's going to be easy. It's going to be quick. You're going to love it, so get over there and check it out.

Erin: And then when, you're done with that, go on over to your favorite podcasting app and leave us a review, please.

Erin: All right, when else should we not do research? One of the ones you talked about was obviously something we're going to totally agree with, which is when you don't know who you should be talking to, when you don't know who's going to give you the insights you seek. Tell us a little more about that.

Michele: So, gathering feedback from the right people is really paramount. Here's an example, and this comes up a lot, mostly in discovery calls or new business calls.

Michele: I had someone approach me and ask if I would be interested in collaborating on a project with my general assembly students, and I asked them to tell me a little bit about the project, and he was redesigning a website for a makeup provider, kind of like a Sephora but not type of retailer. And in conversation with him, he thought it would be a good fit because it would give my students exposure to a real world project, and it would be a good fit for him because he would be able to gather a ton of actual feedback. And I was curious by the requests because, to me, that's not at all a good use of the people that he should be gathering feedback from.

Michele: Because first of all, half of my general assembly students are male, and my hypothesis is that most people buying makeup online are female. Secondly, the demographics of my students are probably more educated and perhaps more natural and more tech savvy because of where we're located in the Bay area, and that might not fit their core buyer.

Michele: But he was very surprised, and he kind of picked and poked at me and said, "But don't you think this would be a great experience?"

Michele: And I said, "I think this would be a great experience of what not to do."

Michele: And it turned out to be a very fruitful conversation, and he thanked me very much, and he said, "I just, I never really thought about it like that."

Michele: And I said, "Gathering the right information from the right people is kind of... That's foundational. We don't want to ask the wrong people questions because we're not going to gather meaningful feedback."

Erin: I say that all the time. Human beings are all wonderful flowers and everyone deserves to be heard, but you got to be smart about who you listen to for what problems you're trying to solve.

Michele: Exactly. I don't... Getting feedback from a male student who's 28 years old about buying eye shadow...

Erin: Hey, to be fair, could be into eyeshadow.

Michele: To be fair, but the majority, I mean.

Erin: Yes. Right, right. If you're doing just bulk targeting, yeah, absolutely.

Michele: Absolutely not the right people to be asking, but yeah, it would be a good experience, but a very different experience.

Erin: Absolutely. Absolutely. All right. What else we got here? If you don't know why you're doing the research, seems obvious, and how and when the learnings will be applied, important part of that. Pause right there. Tell us more about that. You got a why, a how, and a when. You need to know why. You need to know how and you need to know when the learnings will be applied.

Michele: Yeah. So, these three questions are pretty paramount. If you don't know why you're doing the research, I don't think you're going to be able to build a great plan and ask the right questions. Knowing that outcome or knowing that end goal will really be informative, and this is also a key reason to get your stakeholders involved, too, because your stakeholders should all understand why you're doing it as well.

Michele: So, if we didn't want to do a general exploration, that's fine. Maybe we're doing it to become a little bit more informed about a product or service or a new profile or target that we might go after. So, maybe we're doing it sort of in a generative way so we can become a little bit more informed about what could be. That's a totally fine answer to why.

Michele: But if we're doing something a little more tactical, or we're doing something that requires any sort of task-based something or evaluative something or generative something, we want to know why we're doing it. If we can't answer the question why, I would suggest abandoning ship at that point. And also, if you can't agree on why, there may be more than one why, and that's fine, but if we can't agree on the why, then we're not going to be moving forward in lockstep.

Erin: Right. And I'm guessing, this one sounds so obvious, but I'm guessing you've encountered this happening before?

Michele: Yes. So, and that's really a clear indicator to me of how mature the organization is in regards to user research, and sometimes this is about finding my buddies and collaborating in getting everyone to kind of sing the same tune.

Michele: It's like herding a team of feral cats. You can do it. If you can do it, it is so much more powerful and so much more successful. And again, you're able to move from insights into action just so much faster.

JH: Do you find that when most teams are pressed to state the why, it's usually... It's kind of floating around somewhere, they just haven't articulated it, and if they think about it for a little bit, they can narrow it down and articulate it? Or are there actually some people who truly, even when pressed, just cannot get there, and they're like, "We have no idea"?

Michele: Well, I think more of the, at least in my experience, it's we're doing it to find out which one's better, or we're doing it to find out which ones resonate or which ones are preferred. But then I'll dig deeper. It's like, but why do we want to know that? But why do we want to know that? Why?

Michele: So, using the five whys, or laddering, usually gets us there, but the engineer might want to know why for a different reason. He might want to know, or she might want to know why, because he's thinking about how can I repurpose some sort of code set. And the designer might want to know why because that's going to influence some sort of pattern library that's being developed by her partner team at the same time. And the product manager might want to know.

Michele: So, they each might be coming at this with slightly different angles, which is totally fine. I want to understand all those angles, too. Because, again, my goal is to make sure that whatever we're learning about is going to be meaningful and impactful to that whole team so we can move that much more quickly into action.

JH: Gotcha. So, it's like just don't let it go unsaid. Actually get it all out on the table. Make sure everyone's why is understood and out there, and then figure it out from there, sort of.

Michele: Yeah, and by hearing the disparate views of why and then understanding how it can be helpful, it actually brings us closer together as a team.

Erin: Right, right, right.

Erin: Oh, that's interesting. The feral cats kumbaya-ing's quite... [inaudible 00:25:50]

Michele: Right? And then the when is really important, too. So, if you have two months to explore this question, that will open up many, many different doors for how you might explore it versus if you have two days or two weeks. "When would be too late for you to have this information?" is a great question to ask, and why would it be too late at that date?

Erin: Wait, I wonder if when you start uncovering the whys, assuming that there are whys, there are some motivations there, does that ever relate to another one of your reasons not to do user research, which is if you're just trying to sell your design that you've already come up with. Does that come up, where it's, I don't want to say malicious, but let's say not pure intentions of just uncovering the truth? Does that ever come out when you kind of dig into these whys?

Michele: Not as explicitly with the people that I work with, but I think what you're talking about is research as a weapon, and it can definitely be weaponized. With the toolkit and the access, we can pretty much blow anybody up we want, but the ethics there prevent us from doing that.

Michele: But conducting user research, or wanting to conduct user research, as a way to prove a point or to validate one direction or another is just wrong. It's just wrong. I mean, as an industry, we work in service to that user, not to ourselves. And you know what, it's going to come back and bite you anyway...

JH: This feels...

Michele: ...because it's not going to be legitimate. It's not going to be legitimate feedback.

JH: This feels like a really hard one to detect. Some of the other ones, like we don't have enough time, are kind of obvious. Hey, why are you doing this research? And someone's like, "Uh..."

JH: Pretty obvious. But someone has bad intentions, and they're actually just trying to advance their own idea and sell their design, how do you actually discover that? How do you know... How do you put the brakes on it in this situation?

Michele: Well, as a researcher, you're kind of the maestro of the organization. So, I think what I would say is, "Are we looking to understand which concept resonates the most?"

Michele: Then when I'm building the script or the guide, and going through the planning process and conducting the sessions, I'm going to make sure, or do my absolute best to make sure, that any sort of bias is removed.

JH: Okay. That makes sense. Cool.

Michele: So, and that's another great activity to do while we're all kumbaya-ing, is let's get our biases and assumptions and hypotheses out on the table before we begin. It's not right or wrong to have biases or assumptions or hypotheses. We all have them. Let's just be honest about them.

Michele: And sometimes it's kind of fun. I really want concept A, because my mom likes it, or it's purple, or whatever. There's nothing wrong with that. Let's get it out. Again, it's going to bring us closer. "Oh, your mom likes purple. My mom likes purple, too."

Erin: All right, last one. Right-size the question. Is your question too big or too small? And this gets into the time again: do you have the right amount of time? Talk to us about the importance of having the right size question.

Michele: This is a great one, and this comes up a lot in first conversations or discovery calls. So, for example, I had a series of conversations with a commercial real estate company last month. And their original question was, we'd like to learn how to maximize our assets. And that's a great question, right? But that's not necessarily a user research question. It's just way too big.

Michele: A commercial real estate company can maximize their assets by... And I went through this. I don't know too much about commercial real estate, but I said, "I hypothesize you could raise your rents. You could cut your utility costs. You could purchase more space. You could convert... You could sell ice cream. I don't know. You could increase your services. There are so many ways to increase your assets. We want to focus. We want to... Let's right-size that question."

Michele: Okay. So let's find out: can we shorten the time it takes to apply to rent a property, so therefore we shorten the vacancy window? That would be one way we can... We don't want to boil the ocean. So, let's right-size it. Should we look at all your utilities, or should we look at the process to pay your utilities? Should we look at your rental rates and do the comparison, which by the way, wouldn't be user research either. That would be more market research.

Michele: But that's another point we can add here. When is it market research? I'll have to add that.

Michele: But that question, how do we maximize our assets? That's just way too big. It's way too big to explore. I mean, maybe we want to do a generative study to find out the three most viable ways and then dive in, but it's just too broad.

Michele: In the same respect, we don't want to ask something that is way too narrow, as well. So, for example, we don't want to spend a couple of weeks studying...

Erin: I was going to say button color. That's the go-to. What color should the button be? I want to see the case study of when the button color just changed the business...

JH: Oh, I have one.

Erin: ...itself so much. You have one? All right. Tell me about it.

JH: When I was working at Vistaprint, we had all these custom landing pages that were made, and they had product titles on them, and there was a Get Started button or something, and it was very bland and kind of got lost in the page, and somebody ran an A/B test to make it a bright orange, and it increased conversion enough to be worth over 100K a month in profit. It was crazy.

Erin: Amazing. That's a good one.

Michele: That is crazy.

JH: She was very junior, and she's like, "Hey, what if we just made this button a lot more obvious? Wouldn't more people click it?"

JH: And so, they ran the A/B test, and they're like, "It did very, very well."

JH: We're like, "Wow. Great idea."

Erin: Now I'm looking at our button color. All right. So, not too big, not too small, Goldilocks questions. What do you see the most commonly? Where do people get tripped up the most?

Michele: I would say that, in terms of frequency, not leveraging your analytics, not marrying your analytics and looking under your own hood first. Last year I had a year-long consulting engagement at a well-known firm, and on the product team that I was working with, the designer had never interacted with the analyst or the person in tech support, customer support. Didn't even know who it was, actually, and never really thought to make that connection. Nor did the researcher who they had worked with prior. So, it was just never a relationship that they had fostered. They just didn't think about it that way.

Erin: What about a similar flavor, or different flavor, similar thing: what about not looking at user research that's already been done? So, not analytics, necessarily, but insights you might already have.

Michele: That's a great one. Absolutely. What do we know about this? Have we explored it internally? Have we done some secondary research on it? How did we get here? How did the question get to this point? How did the question get to me? What happened to lead us here? And that ties, also, back to earlier conversations in terms of what do we know? What do we hypothesize? What do we assume? And that is generally based on...

JH: What I am starting to think as I look at this list holistically, and now having talked about it, is it does feel like a lot of things on here serve as an early warning system or like the canary, and if you catch these things early enough, there's a chance to right size it or adjust and actually go on to have successful research.

JH: And if you can't fix it, pull the plug, to your point, and the whole premise of this discussion. And then, the first thing that comes to mind is The Checklist Manifesto. It almost feels like this could be a cool thing for teams to go down and be like, "Can we answer this with analytics? Yes or no. Do we have enough time? Yes or no." And actually use it.

JH: Have you thought about using it that way at all?

Michele: Yeah, I have a number of checklists and resources that I've developed at Ronsonconsulting.com, and this is another great checklist. I have a couple of checklists up there. One is about how to evaluate bias, and one is, I think, 15 questions you should ask yourself before you launch a study.

JH: Nice.

Michele: And there's a great question-starters kit in there that breaks down the typical phases in an interview. And then, I developed a question bank, if you will, of pre-study or warm-up questions. Well, 15 of them, and if you choose three each from the pre-study, the digging deeper, and the wrap-up, you have a pretty good framework for a guide.

Michele: I think that one thing that's important to discuss, if I can just pivot slightly, is that there's no one single way to gather information, and there's no one right way for a team to go about it. There are better ways, but if there's one thing that I've taken away from consulting for seven years in the business, it's that every culture is different. Every question is different. There are similarities, but there's a lot of differences, and if we get out of the mindset of right or wrong or the best way to do something, there's more room for improvement, and the more you know, the better you get. So, it's okay to just start somewhere. It's better to just start somewhere than not start at all.

Erin: Good pep talk.

JH: Yeah, I love that nuance. I feel like, to pivot off that, even, I think what's fun about the whole podcast format is it is a format that actually allows for nuance and some of that fine tune stuff that you were just getting into in a way that a lot of our other online formats do not seem to allow for very well. There's no one way to gather information.

JH: And so, there's a lot of nuance and context and all of this stuff that you have to factor in to know how to go and do that in a given situation, and in this type of discussion, it allows us to explore that and appreciate that context and nuance and richness. And I feel like just when you see people on Twitter or other common places, it's usually much more absolute stances of the only way to answer a question is blah, instead of just...

JH: That's what I really enjoy about these conversations, is we actually get to get into some of that, well, it depends, and there's a lot of factors, and there's not an absolute way to do everything.

Erin: Right, and reading a long "it depends" article stops being enjoyable really fast. It's like, well, it depends, and then sub-point B, and now we're over here, and I'm like, "Geez, God."

Erin: But in a conversation, I think it feels, hopefully, engaging and natural and organic and dynamic, three-dimensional, things like that.

Michele: I think it's also authentic. One of the fascinating things about our industry and user research is that it's moving. It's constantly changing. It's evolving. It's dynamic. It's living. It's breathing.

JH: Totally.

Michele: It's amorphous.

Erin: Thanks for listening to Awkward Silences, brought to you by User Interviews.

JH: Theme music by Fragile Gang.

Erin: Editing and sound production by Carrie Boyd.

Carrie Boyd

Content Creator

Carrie Boyd is a Content Creator at User Interviews. She loves writing, traveling, and learning new things. You can typically find her hunched over her computer with a cup of coffee the size of her face.
