When Michele Ronsen posted her “7 reasons not to do user research” we knew we had to chat with her on the podcast. We wanted to learn from Michele about why and when research is not the right approach for every project and every problem. Michele had a lot to say about the steps teams should take before engaging in user research and when your time could be better spent focusing on something other than qualitative research.
We talked to Michele about some of the reasons a company may not want to do user research right now. She provided some really interesting insight, drawing from her experience running her own consulting company, Curiosity Tank (formerly Ronsen Consulting), and teaching new UXers at General Assembly. Below, 7 reasons not to do user research.
Listen to the episode
Click the embedded player below to listen to the audio recording. Go to our podcast website for full episode details.
About our guest
Michele Ronsen is a UX and design researcher, founder of Curiosity Tank (formerly Ronsen Consulting) and an instructor at General Assembly. She loves digging deep into research, being people’s research buddy, and introducing teams to the power of research.
1. When you can answer your research question with analytics, secondary research, or by talking to your team.
User research is not omnipotent. It’s not the best way to answer every single question your team could think of asking. In fact, Michele finds that many questions don’t need to be answered through new qualitative user research at all.
When Michele consults with a new client, she likes to start with what they already have. Learning about the analytics they collect, the research they’ve already done, and the help tickets that come through their customer support helps Michele learn a lot about the research that needs to be done. It also helps her learn about the problems the company may be facing without spending extra time on research.
There are many different ways to answer research questions, and choosing the right method is the first step to conducting great research. Michele has created some templates to help people find the right method for their research. Research methods vary greatly, from quick first-click tests to in-depth ethnographic studies. Choosing the one that best answers your question is part of ensuring you're using your time and budget wisely over the course of your research project.
2. When you don’t know who you should be talking to.
Not everyone can answer every question. For example, I, a 25-year-old Content Creator from Atlanta, GA, probably can’t tell a business owner much about pool installation in Chicago. Since research budgets are tight, and timetables can be even tighter, it’s important to ensure you’re asking the right questions of the right people. This helps your team get the best, most helpful insights from participants.
Michele told a story on the pod about how business owners sometimes don’t distinguish between feedback in general and feedback from their target audience.
“I had someone approach me and ask if I would be interested in collaborating on a project with my General Assembly students, and I asked them to tell me a little bit about the project. He was redesigning a website for a makeup provider, kind of like a Sephora-but-not type of retailer. And in conversation with him, he thought it would be a good fit because it would give my students exposure to a real world project, and it would be a good fit for him because he would be able to gather a ton of actual feedback. And I was curious by the requests because, to me, that's not at all the people that he should be gathering feedback from.
Because first of all, half of my General Assembly students are male, and my hypothesis is that most people buying makeup online are female. Secondly, the demographics of my students are probably more educated and perhaps more tech-savvy because of where we're located in the Bay Area, and that might not fit their core buyer.
But he was very surprised, and he kind of picked and poked at me and said, ‘But don't you think this would be a great experience?’
And I said, ‘I think this would be a great experience of what not to do.’
And it turned out to be a very fruitful conversation, and he thanked me very much, and he said, ‘I just, I never really thought about it like that.’
And I said, ‘Gathering the right information from the right people is kind of... That's foundational. We don't want to ask the wrong people questions because we're not going to gather meaningful feedback.’”
So how do you know who you should be asking your research questions to? If you already have a product out in the real world, it’s likely that you have some data on who your customers are. From there, you can break them out into personas, find your best customers, or settle on a target market.
For example, if you want to learn more about how to reduce churn in your business, it’s probably best to talk to recently churned customers. If you’re trying to learn about how to launch a new feature to your target market, you can talk to people who don’t use your project, but fit the same description of your target market, or your current customers who could benefit from your new features. You can work with your analytics, customer support, or even marketing departments to use the data you already have to figure out who you need to talk to next.
3. When you don’t know what you want to learn, or how you’ll use those learnings.
Doing user research for the sake of doing user research isn’t the best use of your time. Michele sees this a lot with her consulting clients. Sometimes, a client will come to her for her help with research, but they’re not sure what exactly they want to learn or how they’ll implement their learnings.
“If you don't know why you're doing the research, I don't think you're going to be able to build a great plan and ask the right questions. Knowing that end goal will really be informative, and this is also a key reason to get your stakeholders involved, too, because your stakeholders should all understand why you're doing it as well.
So, if we didn't want to do a general exploration, that's fine. Maybe we're doing it to become a little bit more informed about a product or service or a new profile or target that we might go after. So, maybe we're doing it sort of in a generative way so we can become a little bit more informed about what could be. That's a totally fine answer to why.
But if we're doing something a little more tactical, or we're doing something that requires any sort of task based something or evaluative based something or generative something, we want to know why we're doing it. If we can't answer the question why, I would suggest abandoning ship at that point. And also, if you can't agree on why, there may be more than one why, and that's fine, but if we can't agree on the why, then we're not going to be moving forward in lock step.”
It’s important to start research with a specific, actionable, and practical research question. This means it’s specific enough that you can answer it within the scope of one research study, actionable in that you can reasonably create a solution, and practical in that it is achievable in the time frame and with the resources you have available. Zeroing in on the right research question is the first step in doing research that helps your business grow.
4. When you don’t have enough time to do qualitative user research.
Sometimes, projects have tight timelines, and there may not be enough time for your team to do comprehensive qualitative research. In these cases, Michele suggests that teams use analytics or customer support data they already have, conduct secondary research, or conduct quick unmoderated quantitative tests.
Unfortunately, “we don’t have enough time” is often a team’s excuse for not doing research in the first place. Michele suggests reserving this particular reason for really extreme situations, like one- or two-day turnarounds. She also noted that, even if a project doesn’t leave time for research, it’s important to take the opportunity to address how you’ll make time for research in the future.
Michele suggests implementing an ongoing research program, with time allocated for research on a regular basis. Even blocking off an hour or two a week for research sessions can help teams keep users front and center. This also means your team won’t be able to fall into the “we never have time for research” trap.
5. When you’re going through the motions to prove your solution is right.
Sometimes, people on a team will push for research just to validate their own ideas for the product. Often this isn’t malicious or even a conscious choice; it’s just something people do. But if you’re only doing research to validate your ideas, what’s the point of doing research at all?
It’s pretty easy to determine whether or not you’re doing research for the good of the users or to validate your solution. Michele suggests asking everyone on your team why they want to complete this specific research this way. Using the 5 whys, you can uncover more about the goals of your teammates during research, and ensure you’re in it for the right reasons.
6. When you don’t have stakeholder buy-in.
Without stakeholder buy-in, your research won’t get very far. Michele pointed out that, unfortunately, budgets are a thing, and without buy-in from stakeholders, research is unlikely to get the resources it needs to be done well. She also pointed out that, without stakeholder buy-in, research isn’t as likely to be heard and absorbed by the rest of the team.
If your stakeholders aren’t on board with your research plan, take some time to sit down with them and learn how you can work together to make research valuable to your organization as a whole. Zach Lamm, UX researcher at SoFi, recently sat down with our team for an interview about stakeholder interviews (very meta). He suggested asking stakeholders three key questions about what they expect from research:
- What motivated the stakeholder to come to research with this question in mind?
- How does answering this question fit into the broader context of the business?
- Above all, what problem are we trying to solve for users?
These questions help you learn more about what stakeholders expect from research, and help you understand what you need to do to win their buy-in. Some stakeholders will have unrealistic expectations of research, which means you’ll need to do a bit of work setting clear boundaries and expectations about what research can and can’t do. Other stakeholders may be skeptical about the value of research, or be confused about what they can expect from it at all. In these cases, you as the researcher can lay out the benefits of each research project and present a clear plan for how other teams can use research to learn and grow.
7. When you haven’t right-sized the question.
Some questions are just too big for user research to answer in the scope of a single project. Questions like “What do our customers want from their real estate providers?” or “What’s the best way to build a product discovery app?” can’t be answered in a single study. That’s why Michele suggests right-sizing your question before starting user research, so that your research question is actually something you can answer within the scope of your project, with the resources you have available.
We’ve found that the best way to ensure your research question is practical for your project is to make sure it is specific, actionable, and practical. This means, rather than asking “What’s the best way to build a product discovery app?”, you can ask questions like “How do people discover new products right now?”, or, “How often do people search for product discovery platforms online?”. These questions are specific enough for you to know when you’ve found the answer, practical in that they could be answered within the scope of a single study, and actionable in that you could do something with the answer you find.
Right-sizing your research question means you’ve considered not only whether or not you can answer your question, but whether or not you can answer it in the time frame you have with the resources you have available.
For example, to answer the question, “How do people discover new products right now?”, you would likely need to do at least 5 generative interviews with early adopters, which would likely take a day or two to complete, plus time for project preparation, participant recruitment (which, on average, takes about 3 hours with User Interviews 😄), running the sessions, and coding your notes.
In contrast, “How often do people search for product discovery platforms online?” could be answered in an afternoon, with no outside help. You can use a tool like ahrefs to find the search volume for product discovery platforms, compile a list of the most relevant results, and have your findings together by the end of the day.
Still struggling to understand when’s the right time for user research? Check out this little flowchart we made to help you decide when you should start research, and when you should take a step back. 👇
This is Erin May.
I'm JH, John-Henry Forster. And this is Awkward Silences.
Hi everybody, and welcome to Awkward Silences. We are here today with Michele Ronsen. She is a consultant and educator of all things design and UX research. She teaches at General Assembly and UC Berkeley, and has a thriving consulting business as well. Today we're going to talk about something I'm really excited about, which is, look, we love user research around here a lot. But that doesn't mean you should always do user research, and we're going to talk about when you should not do user research. So this should be a fun one. Thanks for joining us, Michele.
Thank you for having me here.
We've got JH here too.
Yeah. This topic seems a little blasphemous for us, but I'm nothing if not open-minded.
All right. So Michele, get us started. What is one time, one reason, one might not want to do user research?
That's a great question. Firstly, maybe let me set the context. Many people share my delight at the recognition of how critical user research is in the product development and service development cycle, and more individuals and organizations are excited to learn more about user research and how to do it. I think we are bordering on becoming sort of a feedback-obsessed culture, and that's a totally different topic.
Is that a good thing?
You know, I think it has its pros and cons.
Yeah. I personally have mixed emotions when I read "user obsessed." It's like, well, maybe you should chill out a little bit.
Yeah. It's a good point. Right? Like you can only hedge your uncertainty to a certain level, like you can't eliminate it fully. And so at what point are you hitting diminishing returns is pretty important.
I think that's a great question, which is also a great segue into this list that I compiled: seven reasons not to conduct user research. It came about because I field a lot of questions from my students and my clients about what's the best way to approach X, or what's the best method to use for Y, or how we phrase XYZ, because we're trying to learn about 1, 2, 3. And not all of these questions are user research problems. Many, many different questions can be answered by different means that don't have to do with interviewing or interacting with customers directly. And one of the first things I like to communicate in my classes and with new clients is my personal preference: let's look under our own hoods first. Let's find out what we can learn without talking to people. There's a lot of experience in the room. If you're asking the question, there are some people who know something about the topic. The two people that I like to seek out first in an organization are the data analysts and the customer support people, the people that are handling help tickets. Say you have an existing product, or you have something that exists out there, or a competitor does. There's so much to learn from what's already out there. So I think that if you can answer a question better with analytics, use analytics, or at least start there, and that will help you make a more informed decision about what to do later on.
How do you know whether you can better answer a question with analytics, and when you can't?
I think the first question I would ask myself, and how I like to teach this, is: do you have something that exists? Do you have something that is out there in the wild, in some sort of format? If the answer is yes, then the next question is, is it live somewhere? Maybe you have some sort of analytics, whether it's just Google Analytics and you can tell where people are coming from, or where they're clicking, or where they're going, or where they're abandoning, or keyword searches, or something a little bit more robust. If it's a product or service that's been out there for even a couple of months and you have any sort of tagging going on, you can learn a hell of a lot.
On the customer support side, how do you go about making sense of or picking up on trends in what they're hearing? I feel like it's such a rich source because they're on the front lines, but often it's not really organized or prepared in a way you can just dig through. So how do you actually start to find insights there?
I should probably temper this conversation by saying that I am not a formally trained researcher. I'm a recovering designer, so my approach is probably somewhat unorthodox. The way I approach it is to introduce myself to the person that oversees the support group and just say, "Hey, I'm a researcher, and I'm working on this topic that I understand you help oversee. I'm curious about, let's just make something up, the onboarding process. What do you think is going on here? What's working, what's not? What sort of suggestions have you made, if any? What sort of improvements have occurred over time? How often are you interacting with your product manager or designer, if ever?" And a couple of times, I've been the first person to make that introduction. These people are such a source of truth, and the same with the analysts. So I go to the analyst: "Hey, I'm new. I don't know anything. I have this beginner mindset. I want to be your friend. I bet you know a lot about what's going on here. Let's look under the hood. Tell me, what do you see?" Those two people can tell you what's happening based on their records and based on their spikes and drops, but they can't necessarily see why it's happening. And that's where user research is just a terrific complement. I love having those two. Those are my buddies. Those are the first buddies I want in the company.
Okay. All right. You got a nice plug for user research in there. We knew it was going to happen, but let's talk about why user research is the wrong thing to do. If analytics can answer your question, use analytics. What's another reason you maybe shouldn't do user research?
So let me complete that thought. If analytics can answer the question in terms of what's happening, so for example, where are people dropping off in the onboarding flow, or where's the friction in the onboarding flow, we can answer that question through analytics. Then we use those analytics to inform some sort of user research follow-up, to talk to people about why that particular section of the flow is difficult, if there's something out there in the world. Another example would be if time doesn't permit it. So if we're looking to understand, for example, the college application process, and we're looking to understand what's working and what's not,
but the team needs the answers in two days, we're not going to have time to do a proper diary study. We shouldn't force something in because of a time constraint. We shouldn't truncate the appropriate methodology because of a time constraint. If that's a real time constraint, well, let's figure out another way to approach it. And maybe another way to approach it is to do some secondary research and comb through some reviews, or some college application sites, or something like G2 Crowd, and find out what people are saying about the problems or about this tool. So we can learn in other ways, but I'm not a big fan of shoving a square peg into a round hole, if you will.
Yeah. This is a super interesting one to me, because I agree with you. We shouldn't force it or compromise the research process so heavily that it's not actually valid or useful anymore. My concern about it, and I wonder if you have thoughts on this, is that it also feels like a pretty common excuse people might use for why they're going to skip research and just follow their instincts or go with their gut. So how do you make sure this is coming up for the right reasons? You know what I mean? And it's not something people are leaning on as a lazy crutch to get out of having to do research.
So if I'm understanding your question, you're asking: how do we get out of the "we don't have time" sort of excuse?
Yeah. Like I'm a, I'm a bad PM who likes to just do whatever he wants. And so someone's like, Hey, are you guys going to do research on this new feature? And I'm like, Nope, no time. Like that feels like a bad cycle to get into.
So I don't really like to say no. What I like to do is come back with options and say, I think this is the best approach right now, but what we can do is start to put our heads together and do more of a rolling research series of studies, where we answer specific questions, or we focus on a specific topic, and we get deeper and deeper and deeper as we go. So I can feed you information that will help you make more informed decisions, if we work together on a plan that can get you the information that will be helpful for you to progress. But what I don't think we should do, very much to your point, is try to shove something in. Let's right-size. Let's look at the actual time we have, and I'm sure we can find something to work within that timeframe. Better yet, let's be a little bit more strategic about it, step back, and plan something for the next six weeks, or plan something for the next six months.
Yeah, that's a good point. Because the underlying issue can often be, well, we never have time, or we're just not making research a priority in general. But I think your point here, Michele, is: well, you've made that mistake. Don't make it worse by pretending you're doing meaningful research when you haven't given yourself the time. Make the best use of the time you do have, with the right method, which might be analytics, to your point, or something other than a longitudinal study or whatever.
Let's look at your roadmap and find out what kinds of questions we'll have, and what kinds of people we think we're going to want to talk to, at certain phases in the overall process. And then let me plan for that. I'm happy to do that. It's probably going to be more meaningful to you anyway, because then we are a little bit more agile and we have a little bit more improv. I'm a huge believer that user research is part art, part science, and part improv. And if we can all kind of get a little jiggy with it, right? That's my goal: I want to get you the most meaningful input at the right time from the right people. The time thing, though, is interesting to me, and one of the pushbacks that I have is: it's funny, you don't think you have the time to do research right now, but you think you have the time to correct the mistakes that research might have prevented, you know? So you invest the time now, or they invest the time later. But my suggestion is, let's not stop anything. Let me ride sidecar along with you. I want to be your friend, so let's be buddies. I don't want to disrupt.
I think we'll call this episode "Michele Ronsen Wants to Be Your Friend."
What I love about that is it's a very pragmatic perspective, right? I think sometimes it's easy to be idealistic and say, stop the whole project, we're going to make more time for research. And to your point, while that might be great or the right solution, it doesn't win you a lot of friends in some cases. And there are other ways to get ahead of future problems in a way that might be better received by the different stakeholders and so forth.
Absolutely. I mean, to me, the biggest predictor of success in a study or in a client relationship is how involved the stakeholders are. If the stakeholders are involved from the onset, and they may not be believers to begin with, but if I can convert them, and if I have stakeholders that are active and engaged, I'm like 90% sure that I'm going to be able to move from insights into action and get that team to act on and address the learnings along the way. And that's how I measure my success. It's not just in providing the information they're looking to learn about; it's about moving the teams to be able to act on that information quickly and efficiently. So I'm out to make friends. I want to include them. And hey, they know more than I do about the product, and they have different expertise, and I want to learn from them. Everything is just stronger and better if we work in concert. Yeah, I sound like a Hallmark card.
Totally. Well, another one on your list was when you don't have stakeholder buy-in, that's not a great situation to do research in, and this kind of feels related to that. How have you seen that one play out?
Totally. I mean, there have been times where I've been brought on in stealth mode. You know, the VP of something says, "Hey, I have a hypothesis. I want you to explore it, but you're going to be working in a vacuum because we can't tell anyone about this." But other than those situations, which are few and far between, again, I find that the biggest predictor of success is stakeholder involvement. From an education standpoint, I understand people learn in different ways, and the different roles on a product development team, from your engineer to your product manager to your designer to your content strategist, everybody brings something different to the table, and I want to learn from them. If we can all come together, get actively engaged, and ideally tie the research goals to their goals, to their individual performance goals or their team goals, that will just increase the chance of success exponentially.
You talked about stakeholder involvement, and another phrase you used was stakeholder buy-in. Those are potentially different things, buy-in and involvement. Do you ever see a case where... is all stakeholder involvement good? Do you ever get naysayers involved who are maybe not helpful, or beginners who don't really know what's going on with the research and find a way to use it for harm? Or is all stakeholder involvement just good: be buddies with everybody, get everybody involved, and that's going to be a good thing? Do you have any tips around how to get stakeholders involved?
You know, that's really interesting. No, it's not all good. I like to first understand their level of experience with user research. If they don't have that, then I do some quick education right there on the spot, in the moment. So, for example, my research plan is not going to be 12 pages. It's not going to include every single question in the guide. For me, the research plan will be successful if we are identifying the umbrella questions and understand very, very clearly how the research will be applied, or what will happen to those learnings and when, and we can go from there. So to me, that's the first sort of level: let's just make sure we're on the same page. This is what we're trying to learn, this is why we're trying to learn it, and this is how we're going to apply those learnings. That's the first step. The next step is, okay, let's coordinate, dig a little deeper, and explore some components of the plan. And I also want to make sure that each person in the room is tied to, or will benefit from, that exploration in some way. I'll author the document, but I'll provide you with commenting rights. We'll get it to a good enough point, because we have a deadline, and then move on. These are living, breathing documents. At some point, though, we have to either agree to disagree or agree to just keep progressing, because we've got to keep moving. Let's try it out. If it's not working, this is what pilot sessions are for as well. If the questions aren't clear, if we're not getting the types of responses we hoped for, or the depth, or we have too many questions or too few questions, we do a series of pilots, at least one pilot session if not a series, to test that and iterate and tinker along the way. I'm a big tinkerer.
Hmm. Is a pilot session just the first session, or do you try to space it out from the other sessions so there's time to regroup and make changes?
Really good question. I think it depends on the culture, and it depends on what we're trying to learn and the maturity and the timeline of the overall project. I definitely find that the longer I work with a client team, the more symbiotic the process is, and we're able to move exponentially faster as that relationship grows, for a couple of reasons. One, they're more familiar with the process. Two, there's more trust that's been developed over time. And three, I've been able to demonstrate progress and results that have helped them progress and make more informed decisions with confidence. So everything just gets shorter. We develop a shorthand relationship, like with your partner or your roommate, where you can kind of shoot each other looks about who's doing the dishes. It's a similar relationship, but sort of in a different format.
All right, quick awkward interruption here. It's fun to talk about user research, but what's really fun is doing user research and we wanna help you with that.
We want to help you so much that we have created a special place, userinterviews.com/awkward, for you to get your first three participants free.
We all know we should be talking to users more. So we've gone ahead and removed as many barriers as possible. It's going to be easy. It's going to be quick. You're going to love it. So get over there.
And then when you're done with that, go on over to your favorite podcasting app and leave us a review. All right, when else should we not do research? One of the ones you talked about is obviously something we're going to totally agree with, which is when you don't know who you should be talking to, when you don't know who's going to give you the insights you seek. Tell us a little more about that.
So gathering feedback from the right people is really paramount. Here's an example, and this comes up a lot, mostly in discovery calls or new business calls. I had someone approach me and ask if I would be interested in collaborating on a project with my General Assembly students. I asked him to tell me a little bit about the project, and he was redesigning a website for a makeup provider — kind of like a Sephora-but-not type of retailer. In conversation, he thought it would be a good fit because it would give my students exposure to a real-world project, and it would be a good fit for him because he would be able to gather a ton of actual feedback. And I was curious about the request, because to me, those are not at all the right people for him to be gathering feedback from. First of all, half of my General Assembly students are male, and my hypothesis is that most people buying makeup online are female. Secondly, the demographics of my students are probably more educated and perhaps more tech-savvy, because of where we're located in the Bay Area, and that might not fit his core buyer. He was very surprised. He picked and poked at me and said, but don't you think this would be a great experience? And I said, I think this would be a great experience in what not to do. It turned out to be a very fruitful conversation, and he thanked me very much and said, I just never really thought about it like that. And I said, gathering the right information from the right people — that's foundational, right? We don't want to ask the wrong people questions, because we're not going to gather meaningful feedback.
I say that all the time. You know, human beings are all wonderful flowers and everyone deserves to be heard, but you've got to be smart about who you listen to for what problems you're trying to solve.
Getting feedback from a student who's 28 and male about buying eye shadow.
To be fair, he could be into eye shadow.
To be fair. But the majority, I mean—
Yes. Right. If you're doing bespoke targeting. Yeah, absolutely.
Absolutely not the right people to be asking. It wouldn't be a good experience — well, it would be a very different experience.
All right, next one: if you don't know why you're doing the research — which seems obvious — and how and when the learnings will be applied. That's an important part of it, so pause right there. Tell us more about that.
Yeah. So these three questions are pretty paramount. If you don't know why you're doing the research, I don't think you're going to be able to build a great plan and ask the right questions. Knowing that outcome, that end goal, will really be informative. And this is also a key reason to get your stakeholders involved, because your stakeholders should all understand why you're doing it as well. So if we wanted to do a general exploration, that's fine — maybe we're doing it to become a little bit more informed about a product or service, or a new profile or target that we might go after. Maybe we're doing it in a generative way, so we can become a little bit more informed. That's a totally fine answer to why. But if we're doing something a little more tactical — something task-based or evaluative or generative — we want to know why we're doing it. If we can't answer the question why, I would suggest stopping right there. And if you can't agree on the why — there may be more than one why, and that's fine — but if we can't agree on the why, then we're not going to be moving forward in lockstep.
And this one sounds so obvious, but I'm guessing you've encountered this happening before.
Yes. And that's really a clear indicator to me of how mature the organization is in regards to user research. Sometimes this is about finding my buddies and collaborating on getting everyone to sing the same tune. It's like herding a team of feral cats, right? But you can do it, and if you can, it is so much more powerful and so much more successful. And again, you're able to move from insights into action so much faster.
Do you find that when pressed to state the why, most teams can usually get there — like it's kind of floating around somewhere, they just haven't articulated it, and if they think about it for a minute they can narrow it down and articulate it? Or are there actually some people who, even when pressed, just cannot get there, and they're like, we have no idea?
Most of the time, at least in my experience, it's: we're doing it to find out which one's better, or which ones resonate, or which ones are preferred. But then I'll dig deeper: but why do we want to know that? And why do we want to know that? Using the five whys, or laddering, usually gets us there. But the engineer might want to know why for a different reason — he or she might be thinking about how to repurpose some sort of code. And the designer might want to know why because it's going to influence some sort of pattern library that's being developed by her partner team at the same time. And the product manager might want to know for another reason still. So they each might be coming at this from slightly different angles, which is totally fine. I want to understand all those angles, because again, my goal is to make sure that whatever we're learning is going to be meaningful and impactful to that whole team, so we can move that much more quickly into action.
Gotcha. Yeah. So it's like, don't let it go unsaid. Actually get it all out on the table, make sure everyone's why is understood and out there, and then figure it out from there.
Yeah. And by hearing the disparate views of why, and then understanding how it can be helpful, it actually brings us closer together as a team. And then the when is really important too. If you have two months to explore this question, that will open up many, many different doors for how you might explore it, versus if you have two days or two weeks. So, "when would it be too late for you to have this information?" is a great question to ask, and "why would it be too late at that date?"
Okay. I wonder, when you start uncovering the whys — assuming there's some motivation there — does that ever relate to another one of your reasons not to do user research, which is when you're just trying to sell a design that you've already come up with? Does that come up, where it's — I don't want to say malicious, but let's say not pure intentions of just uncovering the truth? Does that ever come out when you dig into these whys?
Not as explicitly with the people that I work with, but I think what you're talking about is research as a weapon. And it can definitely be a weapon — with the toolkit and the access we have, we can pretty much blow anybody up if we want. The ethics are there to prevent us from doing that. But conducting user research, or wanting to conduct user research, as a mask to prove a point or to validate one direction or another, is just wrong. It's just wrong. As an industry, user research works in service to the user, not to ourselves. And you know what, it's going to come back and bite you. So that concern is legitimate.
This feels like a really hard one to detect, unlike some of the other ones, right? If we don't have enough time, it's kind of obvious — hey, why are you doing this research? — and someone's answer makes it pretty obvious. But if someone has bad intentions and they're actually just trying to advance their own idea and sell their design, how do you actually discover that? How do you know? How do you put the brakes on it in this situation?
Well, as a researcher, you're kind of the maestro of the organization. So I think what I would say is: are we looking to understand which concept resonates the most? Then when I'm building the script or the guide, going through the planning process, and conducting the sessions, I'm going to make sure — or do my absolute best to make sure — that any sort of bias is removed.
Okay, that makes sense. Cool.
So that's another great activity to do while we're all kumbaya: let's get our biases and assumptions and hypotheses out on the table beforehand. We know it's not right or wrong to have biases or assumptions or hypotheses — we all have them, so let's just be honest about it. And sometimes it's kind of fun. You know, "I really want concept A because my mom likes purple," or whatever. There's nothing wrong with that. Let's get it out. Again, it's going to bring us closer. Oh, your mom likes purple? My mom likes purple too.
It sounds like you're working with a lot of very evolved people, because, I don't know, to me — let's say I want my design to win this research test because I spent a lot of time on it, or it's been my pet idea for two years. Am I going to admit that to myself, or to another group? I don't know. It sounds like you're having good luck extracting that from folks.
Well, I think it's about building that trust, right, and laying that groundwork. And I try to go in with a beginner's mindset. I don't remember if I said this, but I come from a design background, so I know how painful it is to kill your babies. I totally remember that. And I still have to do it now in research — sometimes I can't use the method I really want to use. But I think it's about building that trust, and about really trying to approach it as a partnership. The more we can build that rapport and trust with each other, the more we're going to open up.
All right, last one, right? The question is too big or too small — and this gets into time again. Do you have the right amount of time to talk to us about the importance of having a right-sized question?
Yeah, this is a great one, and this one comes up a lot in first conversations or discovery calls. So for example, I had a series of conversations with a commercial real estate company last month. Their original question was: we'd like to learn how to maximize our assets. And that's a great question, right? But that's not necessarily a user research question. There are many ways a commercial real estate company can maximize their assets, and I went through this. I don't know too much about commercial real estate, but I said, I hypothesize you could raise your rents. You could cut your utility costs. You could purchase more space. You could convert space. You could sell ice cream, you know. You could increase your services. There are so many ways to increase your assets. We want to focus — let's right-size that question. Okay, so let's find out: can we shorten the time it takes to apply to rent a property, so that we shorten the vacancy window? That would be one way. We don't want to boil the ocean, so let's right-size it. Should we look at all your utilities, or should we look at the process to pay your utilities? Should we look at your rental rates and do a comparison? Which, by the way, wouldn't be user research either — that would be more market research. That's another point we can add here: when is it market research? But that question, how do we maximize our assets — that's just way too big to explore. Maybe we want to do a generative study to find out the three most viable ways and then dive in, but it's just too broad. And in the same respect, we don't want to ask something that is way too narrow, either. For example, we don't want to spend a couple of weeks studying whether somebody can sign on to a financial app.
I was going to say button color. That's the go-to — what color should the button be? I want to see the case study where just changing the button color actually mattered.
That one comes up so much. Oh, I have one. All right, tell me about it.
Okay. Yeah. When I was working at Vistaprint, we had all these custom landing pages, and they had product titles on them and a get-started button or something. It was very bland and kind of got lost in the page. And somebody ran an A/B test to make it a bright orange, and it increased conversion enough to be worth over like a hundred K a month in profit. It was crazy.
That's a good one.
That is crazy.
Yeah. She was very junior, and she was like, hey, what if we just made this button a lot more obvious so more people click it? And so they ran the A/B test, and it did very, very well. And we were like, oh, great idea.
And now I'm looking at our button colors. All right. So, okay — not too big, not too small: Goldilocks questions. Obviously we can turn a lot of these nos into yeses by inverting them and right-sizing the question, and now we have permission to go and do some user research, right? What do you see most commonly? Where do people get tripped up the most?
In terms of frequency, I would say not leveraging your analytics — not marrying your analytics and looking under your own hood first. Last year I was on a year-long consulting engagement at a well-known firm, and the designer on the product team I was working with had never interacted with the analyst or the person in customer support — didn't even know who it was, actually. And he never really thought to make that connection, nor did the researcher they had worked with prior. It was just never a relationship that they had.
A different flavor of a similar thing: what about not looking at user research that's already been done? So not analytics necessarily, but insights you might already have.
Okay, what do we know about this? Have we explored it internally? Have we done some secondary research on it? How did we get here? How did the question get to this point? How did the question get to me? What happened to lead us here? And this ties back to earlier conversations too, in terms of what we know, what we hypothesize, and what we assume. That hopefully garners the conversation about what other work has been done in this space.
What I'm starting to think as I look at this list holistically, and now having talked about it, is that a lot of things on here serve as an early warning system — like a canary — and if you catch these things early enough, there's a chance to right-size or adjust and actually go on to have successful research, right? And if you can't fix it, pull the plug, to your point and the whole premise of this discussion. The first thing that comes to mind is The Checklist Manifesto. It almost feels like this could be a cool thing for teams to go down item by item: can we answer this with analytics, yes or no? Do we have enough time, yes or no? And actually use it that way. Have you thought about using it that way at all?
Yeah, I have a number of checklists and resources that I've developed. They're all on my website for download, ronsenconsulting.com, in the resources section. And this is another great checklist. I have a couple of checklists up there: one is about how to evaluate bias, and one is, I think, 15 questions you should ask yourself before you launch. And there's a great question-starters kit in there. It breaks down the typical phases in an interview, and then I developed a question bank, if you will — something like 15 pre-study or warm-up questions, and then digging-deeper questions and wrap-up questions. If you choose three from each — the pre, the digging deeper, and the wrap-up — you have a pretty good framework for a guide. One thing that's important to discuss, if I can pivot just slightly, is that there's no one single way to gather information, and there's no one right way for a team to go about it. There are better ways, but if there's one thing that I've taken away from seven years of consulting in this business, it's that every culture is different and every question is different. There are similarities, but there are a lot of differences. And if we get out of the mindset of right or wrong, or the best way to do something, there's more room for improvement, and the better you get. So it's better to just start somewhere than not start at all.
Yeah, I love that nuance. And to pivot off that: I think what's fun about the whole podcast format is that it actually allows for nuance and some of that fine-tuned stuff we're just getting into, in a way that a lot of our other online formats don't seem to allow for very well.
What do you mean, fine-tuned, in this format?
Sorry — just like what you were saying: there's no one way to gather information, right? So there's a lot of nuance and context and all of this stuff that you have to factor in to know how to do that in a given situation. And this type of discussion allows us to explore that and appreciate that context and nuance and richness. Whereas when you see people on Twitter or other common places, it's usually much more absolute stances: the only way to answer a question is blah, you know what I mean? That's what I really enjoy about these conversations — we actually get to get into some of that "well, it depends," and there are a lot of factors, and there's not an absolute way to do everything.
Right. And reading a long "it depends" article gets unenjoyable really fast, right? It's like, well, it depends — and then sub-point B, and now we're over here, and I'm like, geez. But in a conversation, I think it hopefully feels engaging and natural and organic and dynamic, three-dimensional, things like that.
I think it is also authentic, right? One of the fascinating things about our industry and user research is that it's constantly moving, constantly changing. It's evolving, it's dynamic, it's living, it's breathing, it's amorphous. But I also think that's where a lot of the improv comes in. The podcast allows for this improv to totally take place, which is so fun. I'm loving it.
We can't close with me just being like, how sweet is this podcast, guys? So maybe we should cut that part.
I think in improv we're supposed to say "yes, and." Yeah, no bad ideas.
I have a friend, actually, who's now doing some kind of product and design consulting type stuff that gets into research as well, and he does a lot of improv. So at a larger company, he actually just ran an improv training course with their user research team — thinking on your feet and that kind of thing. I don't actually know how it went, but it sounded really fun, and I was kind of jealous that he got to do that.
Thanks for listening to awkward silences brought to you by user interviews.
Theme music by Fragile Gang.