The best research plans leave room for evolution and consistent learning.
Some people think you have to choose between qual and quant research when you start your study. Not Cat Anderson. At AP Intego, she works with a fairly small and scrappy team (just like us!), but that doesn’t mean she’s not doing big things with her research. She lets qualitative and quantitative research flow into and inform each other, creating overall better research results.
[1:23] Cat walks us through how she found her way to a UX Writer position
[6:01] The importance of stakeholder buy-in for user research
[8:38] Cat talks about designing her cybersecurity research from the bottom up
[9:42] The Experian effect: A surprising insight that got baked into the research from qualitative to quantitative
[11:26] Going from quantitative to qualitative
[13:23] Planning for free-flowing research
[16:32] Deciding what kind of research to do when
[18:20] When are you finally done with your research?
[22:34] Cat’s final thoughts on doing great research
[26:04] Two thumbs up for remote user testing
Great news! You don’t really have to choose one or the other, and you don’t have to do every part of all research on your own either. When you’re looking for user insight, use all the tools available to you. Some things are better suited for qual, some for quant, and some need input from both sides to succeed. Cat’s advice? Match the method to the question you’re trying to answer. Do you need to gather enough data to get statistical significance? Do you just need to understand how users feel about a certain thing? Asking questions like this before you even start planning the research will help you decide what kind of research you need to be doing.
Cat was working on a study where she needed to know what users already understood about cybersecurity in the first place. Her goal was just to understand what knowledge her target market already had and how they felt about insuring their cyberselves. So she did a qualitative round of interviews where she just sat down with small business owners (the AP Intego target customer) and chatted with them about what they knew about cybersecurity. That led to an insight that changed her quantitative research dramatically: the "Experian effect". Her participants felt that there wasn't much of a point in insuring themselves against online breaches of security, since all of their data had probably already been leaked in the Experian breach. Knowing that many people felt this way, Cat was able to create better questions in her quantitative research and dig more into the breadth of the "Experian effect".
Research is all about leaving your ego at the door. That means going into your studies with no preconceived notions about what you'll learn. So planning for what happens after you learn something you couldn't have planned for gets a little bit tricky.
For Cat, it all circles back to tying your method to what you want to discover in the first place. If you don’t know a lot about the thing you’re researching, you may want to consider talking to some people to learn about your topic, then move into the more focused part of your research. If you’re already focused in, it’s time to gather some actionable insights. These typically come from quantitative data that can show statistical significance, but could also come from things like user research sessions and diary studies.
The first rule of doing a ton of research without a ton of money? Be scrappy! Blast your survey out on Twitter, send it out to your email list, and pare down your participants. Surprisingly, you only really need 5 people to conduct good qualitative research in many cases. That’s because, after around 5 people, you’ll start seeing a lot of the same answers again and again, meaning adding more people to your study doesn’t mean you’ll actually get more valuable insight.
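The "about 5 people" rule of thumb comes from the problem-discovery model popularized in the usability literature (often attributed to Nielsen and Landauer): the share of problems found by n participants is 1 − (1 − L)^n, where L is the chance a single participant surfaces a given problem (roughly 31% in the classic studies). A quick sketch, with the 0.31 value as an assumption drawn from that literature rather than from this article:

```python
# Problem-discovery model from the usability literature:
# fraction of usability problems found by n participants,
# assuming each participant independently surfaces a given
# problem with probability L (classic estimate: L ~ 0.31).
def problems_found(n: int, L: float = 0.31) -> float:
    """Expected fraction of problems discovered by n participants."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
```

With L = 0.31, five participants already surface roughly 84% of the problems, which is why adding more people to a qualitative study quickly hits diminishing returns.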
Even if you need more than 5 people to find trends, starting small will allow you to update your questions or test based on what you are learning (or not learning) from the initial sample set. This allows you to learn and grow from each new piece of research, creating a more agile process. This actually makes it easier for you, as a researcher, to flow from qual to quant as your studies evolve.
So how do you set yourself up for success? Build a habitual research practice. Research, like product development, is cyclical. When you have specific questions, look into information you already have, then set up studies with methods likely to get you the answers you need. No active burning questions? Great time to get ahead of the next one with some customer development or discovery work to better understand your target audiences. In this way you’ll build a practice of continued research, both qualitative and quantitative.
You can do this by setting up surveys within your product, or even just attaching one to the occasional email. You can use things like NPS scores to do this quickly and organically at key touchpoints in your users' journey. These are easy to implement and allow you to keep an eye on the general sentiment of your customers over time. Tools like YesInsights, Appcues, or even Google Forms could help you create constant customer surveys.
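For reference, the NPS arithmetic itself is simple: respondents answer "how likely are you to recommend us?" on a 0-10 scale, 9s and 10s count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch of that calculation:

```python
def nps(scores: list[int]) -> int:
    """Net Promoter Score from 0-10 survey responses:
    % promoters (9-10) minus % detractors (0-6), from -100 to 100."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 3 promoters, 2 passives (7-8), 2 detractors out of 7 responses
print(nps([10, 9, 9, 8, 7, 6, 3]))  # -> 14
```

Tracking this single number at the same touchpoints over time is what makes it useful as an organic, low-effort sentiment signal.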
Cat used FullStory to gather quantitative data that helped her understand the difficulties her users were facing. She then was able to build better qualitative studies that gave her the answers her team needed to create better customer experiences. You can use FullStory or other heat mapping tools, like Hotjar and CrazyEgg, to help you keep an eye on what your users are doing on your website. Bonus: they all have free versions.
You can also set yourself up for constant qualitative research by blocking off an hour or so a week to actually sit down and talk to users. No formal study or reason required, just to listen to their story about your product. When you sit down to create a formal study, you'll have more insight about your customers' journey right from the start. And it'll be fresh and up to date. This can also be a great way to learn about issues you never would have thought existed. You can do this by using a managed solution (hint hint, Research Hub) or by reaching out to users and sending them a link to a scheduling page, like Calendly or YouCanBookMe. Make sure to store what you learn in a way that you and others can access later too.
Cat Anderson is a UX Writer at AP Intego. She has a background in anthropology, research, and UX design. She’s relentlessly curious, occasionally funny, and perpetually snuggling as many dogs as she can.
Erin May: This is Erin May.
JH Forster: I'm John Henry Forster. This is Awkward Silences.
Erin: All right. Well, today welcome back to our podcast, which by the time you'll hear this, will have a name. I'm Erin. I'm here with...
JH: John Henry. I'm also here with Cat Anderson, who's our second guest ever on our, to-be-named podcast. She is a UX writer at AP Intego. They're a small business insurance company. I'm sure she'll tell us all about it. Yeah. UX Writer. Big job title these days and excited to talk about what UX research looks like and her role there. Welcome Cat.
Cat Anderson: Thanks so much. It's wonderful to be here.
Erin: Yes. Tell us a little bit about your kind of day-to-day at AP Intego and what being a UX writer's all about there.
JH: Yeah. How did you get the title UX Writer? I feel like that's Erin's dream role of writing and UX, like check, check. How did you find your way into that?
Cat: Yeah. It was pure luck actually. I had been chasing after UX researcher jobs. I'd always ... Writing had always been a huge part of, pretty much every job I'd ever done. Something that I felt really comfortable with. I knew Stephanie from before. She had joined as the first UX designer on the team at this company. I was like, "Wow. Aren't you scared to be the only one?" She's like, "Nah. I got this." She just didn't ... English is not her first language and she didn't feel super comfortable writing the copy for the stuff that she was designing.
Cat: A couple of months into her role, she said, "Hey you guys, can we bring somebody on who can be more responsible for the actual written content on these products and these apps and the website and stuff." That's how I got brought on. That's ... I really, kind of just lucked into it. I'd really been chasing user researcher roles. Because it was a new team and UX design was kind of, a new thing to the company and its totality, it was easy for me to kind of worm some user research into my role.
Cat: It's turned out to be something that I was not expecting which is kind of, a defining feature of how I envision myself as a person on my team, where I can't imagine doing UX writing without the research piece. I don't really have a sense of how common or unusual that is across the industry, or across from UX writer to UX writer. To me, it's been the biggest and most pleasant surprise of this role.
JH: Awesome. It makes sense, right? I think there's finally been this realization that, to do design well, you need to talk to people and understand how they're interacting with it and where they're coming from. I think, kind of piggybacking on that is ... I think we've started to acknowledge the copy in micro copy and all that stuff is a huge part of design and actually making sure people understand how to navigate through an app or an experience. The idea that you would need to do research to do that well, it fits perfectly when you actually think about it from end to end.
Cat: Yeah, it seems super, super logical. I think that there's a few different ways that you can go with UX writing. I mean, when you type in, when you Google UX writing, and a thousand million things pop up. They're all ... Almost all of them focus on the actual craft of UX writing. It's a lot of stuff about best practices, how to go about it, why it's important, that sort of, a thing.
Cat: What I don't see a lot of, is articles or information about, well how do you know what the right approach is, to get there? How do you know what your users need? How do you know what their mental models are? How do you know what they expect if you haven't actually done the research? I think in a lot of companies, there's a dedicated UX research team, who might supply that information, that the UX writer can then put into practice in the actual craft of writing interface designs or whatever, or micro copy.
Cat: In a team as small and scrappy as ours, I think it's worked extraordinarily well for me to just have my own hands in the research and sort of guide some of that research design, so that I can make sure that I'm getting what I need from the users to craft that content, that's gonna be the most helpful and useful for them.
Erin: Yeah, for sure. I think as someone who's had a variety of different kind of editing writing, managing editors and writers, sorts of roles, I think one of the things you see happen a lot with these, kind of UX copy kinds of requests is ... We have this email. We need to make it sound better, or, this page isn't converting good enough, let's make the copy better. It's like, "Okay, well, what's the user trying to accomplish?" It sounds like you're a smallish team.
Cat: I mean, we're growing. That's one of the nice ... As our company grows, our team is growing right along with it. We're really one of the lucky ones in that we have buy-in throughout the entire company for UX research and UX design. Our managing directors are extraordinarily supportive. They give us pretty wide latitude, which is awesome. They gave us permission to test a prototype that was so early in its stages, that it didn't even have a UI yet.
Cat: They're like, "Yeah. Go test that out." I was like, "Really?" They're like, "Yeah. We need to know if it works." I was like, "That's awesome you guys." It's a pretty flat hierarchy. We're not super siloed. We're allowed to kind of free range around the company and talk to whoever we wanna talk to and tap into subject matter experts on any of the other teams. We do that quite frequently.
Cat: Insurance, news flash, it's a pretty complex and irritating field to understand for most people. Most people don't want to understand. That's actually one of the hardest parts of my job is, understanding the fine line to walk between giving people enough information, so that they can make good choices and feel like they're well informed, but not overloading them with a bunch of stuff that they don't actually need to understand in order to make a good choice.
JH: That's so great. I feel like we hear this a lot still, that in organizations where they really support research and talking to users, it's still driven so much by somebody at the top, who believes in it and kind of pushes it through. I'm sure we'll have a future episode about how to do it more from the bottom up and evangelize. Nice to hear that you're in a good, open, supportive environment. It sounds like that freedom has led to some pretty creative approaches to how you go about research, right? I know one thing we wanted to dive into was kind of how you pair quantitative research and qualitative research together sometimes ...
JH: Is that something planned or is it something that happens organically?
Cat: It's both. I did come from social science, anthropology background. I had official training in research design and that sort of thing. I do bring that to bear sometimes, when we're tackling a major problem. For example, we, a few months ago finished up a giant research round on cyber security and cyber insurance. That is ... As far as insurance products go it's pretty new. There's not a lot out there that's known about it. There's not actually a lot of data to pull from.
Cat: As we're trying to develop products that actually fit user needs and user mental models, we were really starting from zero. They approached me to say, "How can we design some research to find out some of the answers to these questions? We don't even know what we don't know." What I did there, was I laid out the whole idea for the research from the outset. I started out with a qualitative round, that was just super open-ended. We didn't know what people knew. We just didn't have any idea where to even start. It was just tapping small business owners to talk about cyber security and cyber insurance for an hour. It was really as open-ended as that. I had questions that I wanted to ask them and questions that I asked each of them. It was really, really open-ended. From there we found some really interesting insights.
Cat: For example, something I call the Experian effect, which is this idea that because there have been so many massive data breaches, there's this mental model that people have, where it only happens to the big guys. Even if it does happen to me, it doesn't matter. Everybody already knows everything about everybody already. It's like, after the Experian hack or the Target hack, all my stuff's out there in the world anyway and it doesn't really matter what happens to my business. I don't have anything that hackers don't already have. That was a really interesting insight.
Cat: We were able to use that insight about the Experian effect to craft the questions on the survey, that would either validate that at a statistically significant level, or invalidate it as just a, "Hey. You talk to five or eight people and they said this. What does it actually mean?" That was an example of how we were able to use something that we were totally not ... We didn't set out to find out about that specifically. It was one of the things that came out as a pattern from our qualitative research, that we then baked into our quantitative round. Now, it's something that we kind of incorporate into a lot of other research initiatives that we have. We know that this is something that is a prevailing mental model out there, when it comes to cyber security. That's been pretty interesting.
Cat: There are times when it's the opposite. We use quantitative to inform our qualitative round. An example of that is when we draw from FullStory, the platform where you can record the screen as somebody's using your app or your website. We watched hundreds and hundreds of FullStory sessions on this one interaction. It was heartbreaking. You watch people struggle. It just breaks your heart.
Cat: After watching hundreds of people struggle on this one interaction, we were then able to design a qualitative research round that really put it into a more holistic perspective and context for us. Why? Why is this a difficult step for you? Oh, well, it's because of all these other things that are happening around it in a small business owner's life or in a small business owner's journey. That was an example of the qualitative informing the qual ... I'm sorry, the quantitative informing the qualitative. And then, there's just a whole lot of complex interplay once again, between those moving forward as we do new rounds of research.
Erin: Love both of those examples. One question I have is, given one kind of study flowing into another and how much they can kind of complement each other and give you a full story, a full picture of what's really going on, how do you plan for that, you know, from a budget perspective, resources, whatever you might kind of wanna know ahead of time? Do you just assume qual is gonna lead to quant and vice versa? How do you kind of plan your research, given that one thing might lead into another?
Cat: Yeah. It's pretty hard to plan it to be honest. I mean, it's definitely one of the challenges that we face here. I think it's always about matching the method to what you're trying to discover. If we really are starting from scratch and we don't know much at all about a certain phenomenon or a certain category or an industry or something, my inclination is always to start out with qualitative.
Cat: The nice thing about qualitative is that you can learn a lot in a short amount of time, talking to a small number of people. You can do it kind of on the cheap. If you recruit from your own book of business, or your own pool of users, it can be pretty cost effective to just talk to five people and start to see some patterns. I mean, I get a lot of questions, especially from my social science friends sometimes about, "Is five people really enough?" It's sort of amazing how quickly patterns can emerge when you're drilling down on a specific topic. It's pretty amazing how quickly you can come to see some patterns in how people respond or the problems that people face.
Cat: If you don't, that's fine too. That's what qualitative research can be really good at also, is teasing out the difference between a pattern or just a one-off, like an anecdotal problem that one person faced, or it's just not a widespread issue for people. Leaving it open-ended like that really opens you up to, you don't know what you're gonna find. That's what makes it so hard to plan out, is because with a qualitative round, you might discover something that you were not expecting at all.
Cat: And then, if you want to use that to go into your quantitative, to validate what you learned in your qualitative, that's fine. When you get into quantitative, you really start to have questions about whether you want it to be statistically significant, and if you're doing that in a survey format or an A/B test or something, that can get a little pricey. From a budgetary standpoint, it can be super tricky. You kind of have to be scrappy sometimes and just put out a survey on the UX Mastery Slack channel, or put it out on Twitter and see what you get back. It's not ideal, but it might still give you something that can move your team forward and then maybe use that as a stepping stone to something a little bit more robust.
JH: Keep it discovery style. Little more open-ended, to see what trends you find and then use those in the quantitative next step. If you're going the other way and starting with quantitative and you know you're likely gonna follow it up with a qualitative round, will you do anything differently in the quantitative step, knowing that you have qualitative coming later, or is it still kind of like, just run the survey and then maybe we'll cherry pick a few people to speak to or ... Does that make sense? Would you do it if it was in isolation, you do it one way but when it's gonna be paired with something else, you do it differently or is it kind of the same across the board?
Cat: It really depends. For example, the quantitative FullStory research, we didn't really know what we were gonna find. We just set the thing up and then just watched a bunch, to see what emerged as a pattern. It was like qualitative in that sense, that we really didn't know what we were gonna get.
Cat: Another example would be conducting research based off of NPS surveying, where it's such a cold hard number there, staring at you from the page, when you're looking at something like NPS scores that, when you use that to take a deeper dive in your qualitative, it still was sort of, a surprise. You still didn't know exactly what you were gonna get. The quantitative wasn't contextualized. If you're designing a survey, and starting with quantitative in that regard, then yeah, I would say that there is a sort of, a similar thing where you ...
Cat: With a quantitative survey that you're just putting out there, you start with a hypothesis, you check to see if it's validated and then if it is, or it isn't, or whatever result you get out of the quantitative, you can bring that into your qualitative instrument or you know, decide what qualitative method you wanna employ at that point. Is it a question about usability? In that case, you might wanna set up an un-moderated user test. If it's a question about mental models, or customer journey or anything like that, you're probably gonna wanna do a moderated interview.
Erin: When do you say the study is done? We've learned enough. We're gonna move onto the next thing.
Cat: Oh, Erin, that's so funny that you say that. That's a big question that we've had. In a certain sense, it's kind of, a cop out to say, "Okay. I've structured out my research. I've planned it all out," like I did for the cyber research for example. I'm doing some qualitative to get the ball rolling, then we're gonna move it into quantitative. There's gonna be some analysis. And then, we'll present the final results, and that's going to inform a bunch of different teams at our company as to, how to move forward. That's the beautiful plan in everybody's mind.
Cat: And then, in terms of how it actually plays out, I think if you wrap up a round of research that includes qualitative, quantitative and it's on a fairly large scale, and you've wrapped it all up neatly with a bow at the end, God love you. You're better at this than 99.9% of user researchers out there. I mean, I think there's always going to be questions left on the table. This is part of what's exciting about allowing qualitative to flow into quantitative and vice versa, because I see the way that the research that we did for cyber has influenced the research that we've done for other products as well.
Cat: Can I say that the cyber research is finished? Well, I did the final presentation. In a certain sense, yes, yes it is. It's all wrapped up. It's all good to go. I mean, did we have 10 million other questions at the end of that research? We sure did. What we've done is, we've kind of ... We had to close the door on that specific research round, because of time constraints and bandwidth and all kinds of other things. There was always, I think, this understanding that we were going to carry it forward in other ways. It's very amorphous to be honest.
Cat: It's not anything that I can definitively point to and say, "Oh yes, well, we've asked this question because of what we learned in this survey." It's not that neat and tidy. I think that there is tremendous value in allowing your team to have the kind of freedom to carry the research forward in other directions and in other research rounds, either formally or informally. It's never really done. It's never really done.
JH: The way I like to describe it is, you have long periods of humility, of we don't know, we don't know, we don't know. And then you pair it with moments of decisiveness of, just being like, "All right. Let's do this," and then you're right back to being like, "We don't know. We don't know." At some point, you do have to make decisions, but the majority of time is spent trying to figure stuff out and learn it as best you can. Every once in a while you put a stake in the ground, move forward and then come back to it and continue learning.
Cat: Absolutely. Absolutely. I think that, that's really ... I mean, I think that's the best that any of us can hope for really. At a certain point, you do have to take action. You can't just stay in the research phase forever. The research that you do has to be in service of some sort of progress, or at least some sort of forward movement in terms of what you're trying to accomplish. Yeah. I like the way you described it as sort of punctuated equilibrium. It's like, "Agh. Okay. Okay. Agh," and then, you just keep doing that pattern until something good comes out of it.
JH: Yeah. I mean, we pretty much make every decision in our lives based off of imperfect information. At some point you have to get comfortable with moving forward.
Cat: Yep. Truer words were never spoken.
Erin: Anything else you wanna get off your chest, or ...
Cat: The most important thing that you can do in user research is match the right ... Well, first of all you wanna make sure that you're asking good questions. This idea of the right question. Okay. Maybe in some cases, really specific cases, there is such a thing as a right question or a wrong question. Most importantly, you just wanna be asking questions. You wanna be able to match the method to the information that you wanna learn. And so, it's always got to be some sort of, a complex interplay, between qualitative and quantitative.
Cat: It's like nature and nurture. There's always gonna be ... It can tilt to one side or tilt to the other side. The end, if you really wanna know something, you kind of have to include the spectrum of ways of knowing about it. To me, the research is really about that. It's about learning, it's about figuring out, really drilling down and introspecting about what it is we wanna learn and then finding the right people and the right instrument to get at what it is that we're trying to know. That's really how I see qualitative and quantitative playing together. It's like, it's always gonna be a tension between them. It's always gonna sway to one side or the other side, depending on what we're trying to learn. At the end of the day, it's really gotta be a little of both at least.
JH: I like that.
Erin: Something we didn't talk about a lot was methods, right? We talked about interviews, we talked about surveys. There are a dozen more, at least. Talked about AB testing. How have you figured out over the course of your career, what method to match to what question?
Erin: Certainly there are resources out there that address that question, such as the field guide, which you can find at userinterviews.com.
JH: Always plug. Always plug.
Erin: No. Is it something you get a feel, from experience? How do you figure out what method to use for what thing?
Cat: Yeah. I mean, I think it boils down to, are you trying to learn something about people, or are you trying to validate an idea? I think that's ... I don't know, maybe that's too simplistic of a dichotomy. For me it often kind of comes down to that, where am I just really trying to learn about people's lives and how they think about things, in which case I'm probably gonna start at least with qualitative, or am I trying to validate something?
Cat: Am I trying to put something out into the world and see what people's reactions to it is, in which case I'll probably start with quantitative and of course there are going to be all kinds of scenarios in which that would not be the case, but I would say, generally speaking, you just kind of start off from there. Do we know anything about this already? Okay, what do we know?
Cat: From there you decide, "Do I really need to know more about the context around something, before I go forward, or do I already know enough about something, where it's time to just put something out into the world and see how people react to it, and gauge what I want to learn from that." I heard that you guys in your first podcast, talked about the desirability of remote testing. I wanna throw down and say two enthusiastic thumbs up for remote user testing. I think that it is a fantastic, super useful way to access populations that you would never have access to.
Cat: I mean, we're on the east coast here. I don't just wanna talk to east coast people. When I have a day where I can talk to Mark, the guy who sells janitorial supplies in the Ozarks, that is my best day. When would I ever, ever get a chance to talk to that guy under other circumstances? It's only because of this awesome job that I get to talk to that guy and learn about his life and learn about his pain points, when purchasing business insurance for his janitorial supplies company.
Cat: And then, my job is to take what I've learned from that guy and help him, and help it be better for him. That to me, is just the best thing. Using that method is really just about how the remoteness of the interview or the user test allows me access to that population. That's another whole ... That's probably just a whole other topic for a different podcast.
JH: For sure.
Cat: That's why I'm really a fan of the remote stuff. You can access so many more people. I don't think that you lose anything really. Under most circumstances, you don't really lose anything by testing somebody in their natural environment, as opposed to bringing them into a lab.
JH: For sure. I like that you proved that you actually listened to the first episode. That was pretty deep in there too. That's a good reference. I was just gonna say on the, what's the right tool for the right job question. I think there's a part of me that almost thinks that's an overly reductive question. Right? I wanna drive a nail into this piece of wood, so it's a hammer. I wanna cut the wood, so it's a saw. It's like, well, what if I wanna build a house? Then you need a bunch of tools.
JH: I think what you've been telling us and describing throughout the conversation is, how to combine tools to get to outcomes. I think that's actually the much more interesting piece of this. It's pretty freaking simple to learn what tools are good for what. The skill of using tools in a combined way, I think leads to much more powerful outcomes. Hopefully that's something that will kind of grow and people will think about more.
Cat: Totally agree. Totally. Could not agree more.
Erin: Thanks for listening to Awkward Silences, brought to you by User Interviews.
JH: Theme music by Fragile Gang.
Erin: Editing and sound production by Carrie Boyd.
Carrie Boyd is a Content Creator at User Interviews. She loves writing, traveling, and learning new things. You can typically find her hunched over her computer with a cup of coffee the size of her face.