How to create research reports that are engaging, informative, and actionable.
Caitria has three principles for great research reports. They must be enjoyable, informative, and actionable. These three tenets are flexible enough to apply to any report, and can help you make reports that are interesting to people at your organization.
First up, the fun stuff. Sure, it’s not the most important of the three tenets, but it has a big impact on the stickiness of your research. Make your research report enjoyable for your audience to interact with. Caitria does this by incorporating project-specific themes, fun extras like memes and emojis, and a narrative storytelling structure.
Themes make your research project easy to identify and organize, and if you pick a good one, they can also make your research more fun to engage with. When Caitria was working at Facebook, her team had a project code-named “Balto," so she created a wolves-in-space theme that she used on all her research reports.
Themes not only make your presentation more fun to sit through, they also help other team members find your research again later. When everything is sorted by theme, it’s easier to locate “that one study where we learned about churn,” because your audience is more likely to remember a fun theme like “Wolves in Space," and it’s easy to identify which presentations are “Wolves in Space” themed. Two birds, one slide deck.
Caitria doesn’t just use themes in her slides. At Airbnb, she titled every stakeholder email related to a date flexibility project “Weird Flex, But Ok.” This not only got more people to open and read her emails, it also made her findings easier for her colleagues to remember and refer to in meetings.
Using a narrative storytelling structure helps to keep your readers engaged until the very end of your report. Caitria does this by structuring her reports around a few main bullet points. This is especially important in written reports, which don’t have added design elements like title or section slides to keep the audience engaged.
“Bullet points frame the whole thing. What are the top three sections? That's the storytelling tool. If you're doing a report on the quality of the app, you could make it fat, slow and big, or something like that. That's a storytelling device. You're really emotionally affecting people with those three section titles. I wouldn't suggest using them that bluntly unless you have a really good relationship with your team, but you can still use those same kind of interesting, provocative tools no matter the type of tool you're using.”
Including fun extras like memes, emojis, and jokes helps people connect with and remember your research long after it’s done. When Caitria presented her “Wolves in Space” themed research at Facebook, people referred to different research projects by the image associated with each one, e.g. the “wolf with the sunglasses” research, or the “wolf eating space food” slide.
Caitria also said including emojis in her email subject lines typically resulted in higher open rates. Throwing in a quick 👾 for a space-themed project, or some 😎💯 emojis, helps remind recipients of the theme, adds a bit of emotion, and makes your message stand out in your stakeholders’ cluttered inboxes.
Caitria’s second tenet is that research reports should be informative. For her, that means they go beyond “two out of three users did this” or “73% of users navigated the site this way”. Informative reporting talks about what happened in research, but also digs into why and how users did those things.
“A lot of times I see reports come out, especially from researchers who are maybe dabbling for the first time in product research, and the goal was to figure out, does this button work? The report comes back and the report says no.
But, that doesn't give anyone more information on what to do about it. What didn't work? What about it didn't work? If it was changed in X, Y, or Z way, what would that mean for this whole design altogether?”
Informative research reporting helps keep the report engaging and provides the audience with valuable information they can use to actually do their jobs more effectively. Providing this kind of information starts before any research is actually done. Caitria told us about how she worked with the team at Airbnb to understand their perspectives and goals before research was conducted.
“Airbnb has several tiers of different types of merchandise now, so that means they have the lux version, which is really expensive, like $5,000 a night. There is a butler included. And then, they have a one-bedroom with a cat lady and you're sleeping on her couch. These are very different levels.
What I was trying to do with the team when I just joined was figure out what does a luxury accommodation page look like? It can't look like the one that we already had. It can't look like the cat lady's home. But, what is luxury? It's extremely hard for us to find people in the over $800,000 per year income range to talk to. You can't just do easy research on this.
So, the team had a very strong point of view that showing things like the amenities in photo form would feel luxurious, so showing a picture of a Keurig instead of listing coffee service, or something like that in text.
By working with them beforehand, I realized the goal was not to put photos on the page, even though that's how the product was framed. They were like, "We need these photos." I was like, "Why?" "To feel luxurious." "Okay, that I can work with." So, we took three different versions of two different photo design directions and a list version out to very affluent guests and had them just free-form describe and react and compare in a variety of different ways these three approaches.
We had people end up saying, "Ugh, this looks like clip art," or, "basic," or just the exact opposite of the team's stated goal. We found out the list was much more effective at creating that feeling. In a list you could do things like, oh, outdoor amenities, and that felt very fancy. It was not what we expected.
This was nixing a, I think it was like a $200,000 photo shoot, or something like that, at the last minute. And I was doing it on a new team. I was trying to be sensitive, so I was really trying to speak the language of, what is our goal, so a whole slide about this is what we're trying to do. And then a slide about what our hypothesis was, that photos might do this. And then the slide that actually had the user's voice, "basic, clip art, word art." There were a lot of great choice quotes in there.
And then, it doesn't feel so much like you as a new person, as an untrusted product visionary, or whatever. You're not coming in there with your own opinion. You're just saying, we achieved or did not achieve this team goal on this particular thing. And then, suggestions on things to do moving forward.
So in this case, the photos were not good for showing amenities, but they were very helpful for teaching people about services that they didn't know about. So for example, if you're going to book a once in a lifetime villa, you might never have experienced a butler, or transportation services, so having photos that described those services was actually helpful. And it was on goal because we needed people to understand these to value them. It wasn't about the photos. It was about how they were applied. And it's about teaching the team where to use that tool.”
The third tenet of great research reports is that they are actionable. This means your stakeholders can leave a research presentation knowing exactly what they need to do next, and what you’ll do next.
“This means you need to put in clear, near-term recommendations. That's sometimes hard for a researcher. We're supposed to be very objective, but that does not mean that you can't suggest the next steps in the process.
So, if the button was invisible, no one could find it, I don't have to tell them how to make it visible. I can give some suggestion if I'm feeling very confident, or if I did an audit. But, I do get to say things like, I'm going to test the next version of this and the button needs to be findable, or I can say things like, these are the top four things in order that make this design good or bad. It has to be a very clear recommendation and that push to make sure that people actually come back and act on it, even if they enjoyed the whole deck and your memes. It doesn't mean anything if they don't walk away with clear next steps to work with it.”
Caitria also suggests structuring your slides so they are clearly actionable. Instead of listing a simple “what happened” problem, she suggests listing why the problem happened and what the product team can revisit to fix it.
She also suggests starting your research report with a quick TL;DR, contrasted with next-step action items, to make your findings easier to digest, even for skimmers.
This cheatsheet not only makes it easier for skimmers to digest your findings in the moment, but makes it easier for your research report to become reference material later. Someone searching for research on a particular subject months or even years later can easily understand what your research covered.
So now that we know what makes a good research report, let’s look at the common pitfalls that can make a bad one. These missteps make it more difficult for your audience to enjoy and interact with your findings, which typically leads to forgotten research.
OK, this is like the first rule of slide presentations, I know, I know. But avoiding the Wall O’ Text isn’t just for slides; it applies to your written reports too. Reading a written research report is a huge time investment for the reader, and you, as the writer, need to keep them moving along.
Caitria suggests using simple headers to keep the reader engaged, and provide markers as they progress through the report. She also stressed that not everyone needs to know every single bit of information, so be judicious with what you include and what goes in the footnotes.
This is probably the most common mistake people make when creating presentations of any kind. Overloading one slide with everything you want to say makes it difficult for your audience to focus on what you’re trying to tell them. Imagine a slide packed with good information: what the researchers found, what they recommend going forward, quotes from participants, and a screenshot of the page in question. These things are all important to the presentation, but they don’t all need to be on the same slide.
Often, presentations seem to skip from topic to topic without much of a connecting thread, or an overarching storyline. According to Caitria, a good presentation or report should be like a good story.
Researchers are, in many ways, storytellers. They convey the story of the users to the rest of the team through research. Caitria suggests using narrative storytelling to help propel your readers along. Break up your presentation into clear parts and let the reader know where you’re going.
Caitria also suggests presenting things in order of importance, and making it clear to your audience that’s what you’re doing. Often, she sees people get caught on a small detail, or something they found especially interesting. It’s easier to keep things in focus and on track when everyone is clear about what’s important and how important that thing is.
Caitria says one of the easiest ways to stay on top of the ever-growing sea of research findings is to ensure she’s involved in the product planning process that happens before research. This helps her stay up to date on what the product team is hoping to learn from research, which helps her frame her sessions accordingly.
“For a typical study, let's say a usability study where the product is in flight and we're trying to figure out what to build. For one of those sessions, I will generally throw a meeting where I have the product manager, the designers, data scientists, and maybe one or two of the front end folks come and sit down, and then go through where the product is at, so I'll have the designer walk through.
And then, I take down everyone's questions, what they're worried about, what they hypothesize. And then, I add those to my own questions that I have about the product at that stage, so that I have some insight into what this study ideally needs to answer. I frame my questions around that. But, from that point on, I know what the report needs to say, so I'm already able to have a template for my notes before I start my sessions. So, I know for each session I need to answer these five different categories, I need details on this, so I can almost spreadsheet it out at that point.”
Talking to stakeholders not only makes it easier to do actionable research, it makes Caitria’s research reporting process much quicker. Armed with a template, she can take better notes during sessions, translate those notes more quickly into research findings, and put together a concise report in less time.
One of the biggest problems with research reports is that they get buried in a sea of information, making them hard to find or refer back to later. When Caitria starts at a new company or team, she takes some time to go through old research and take stock. Caitria outlined how she did this when she started working on a new team at Airbnb.
“As I was starting to work with that team, I just started collecting everything into two big research reports around how people search on Airbnb, mental models, the types of terms, their navigation through the site, where they land, the types of things they're looking for, the ways that they make decisions when they land on search results, all those sorts of things.
And then, another one around auto complete and types of queries people enter, the types of things that they click on when they're shown a set of results, all of that kind of information.
There was some research already done on it, so you can pull out and then cite basic principles. And then, you can also identify when you're doing this process what information we still don't know. So, if you're approaching a topic and you're like, okay, well, I don't know any of these things, so I have to find the reports that tell me that, or if they don't exist I have to do that, it's a good way to go through it and identify what the next big need might be for research as well.”
Since she was already starting on a new team and needed to catch up anyway, she used that time to create research reports that helped guide her future studies. Creating reports like this at the outset helps her and her team build off of existing research rather than wasting time looking for old findings or redoing old studies.
After that initial inventory, Caitria structures her research reports so that evergreen findings can be reused easily.
“Usually when I'm doing a deck, I'll separate out the information that's evergreen, so anything about the user's behaviors, motivations, barriers to changing, current method of solving a problem. And then also, I'll dig into their mental models in there, what they expect, why, what are their current app uses, the barriers and differentiation between those scenarios where you might want to use one or another. That kind of stuff, I want to be able to strip off the end of any deck and combine those together at the end and say, this is what you need to know about the user to design a Blue Sky version.”
Separating out this information in every report makes it easier for research and product teams to build upon their understanding of the user with every project. This way, each report adds to the team’s understanding, even if the research didn’t find what the team had hoped for. This method also makes it easy to go back to old reports and find information you’ll always need, separate out findings for quarterly reports, and build a clearer picture of your user.
Caitria O’Neill is a Senior UX Researcher at Google. She previously worked for companies like Airbnb and Facebook, helping them conduct great user research. Her favorite emoji is the scream 😱.
Erin: Hello, everybody. Welcome back to Awkward Silences. We are here today with Caitria O'Neill. She's a UX researcher, formerly of Airbnb and heading to Google in just a couple weeks here, so we caught her in between gigs. We're here to talk about a really important topic. It could seem boring, but it's not. It's critical to being successful and Caitria is going to tell us how to take something that could be really boring and make it not boring, so that you can be really impactful. And that thing is research reports, and research findings, and how to put them together and to share them with your teams to be very successful and impactful in your research work. So, thanks so much for joining us to school us on all things research reporting, Caitria.
Caitria: Thanks so much for having me.
Erin: JH is here too.
JH: I am. I feel like the pressure is on to make this episode not boring given the topic, so we'll do our best.
Erin: It's going to be really, really exciting, so let's get started. Let's talk about what makes a good research report?
Caitria: All right, so I first want to kick this off with what makes a bad research report just to give us a little perspective.
Erin: Perfect, yeah.
Caitria: I bet everybody has experienced finding some amazing nugget of information in a report that was made four years before you joined the organization, or finding that something that you just discovered isn't that new. The problem is often that the good information is buried in a bad report, and it might not even be that it was poorly framed, or wasn't explanatory enough. It might just be that it wasn't exciting enough for people to read, or the way that the person brought it to their stakeholders didn't encourage them to interact with it. So, the things that make a good research report... The big three for me are enjoyable, informative, and actionable. I can double tap on each of those if you guys are interested?
Erin: Yeah, yeah, please.
Caitria: Okay, so what's enjoyable? I put that first. It's not the most important thing, but this is one of the biggest ways to make impact. It's to get people to interact with your research. Making it enjoyable means making it the kind of thing that people want to read. If you write a report that people don't want to read and don't read, it doesn't have any impact, even if it had the best possible insights in it, so make it enjoyable. For me, that means using things like themes, memes on occasion, jokes, emojis. Basically, you're trying to come up with some kind of narrative tool that will help the reader, first of all, get engaged in the topic and then pull all the way through all of those possibly dry details.
Caitria: For some things, I find in companies like Facebook, they were always coming up with crazy code names for things. One of those names was Balto, and that turned into a wolves in space theme at some point on the team. I just went with it. Every single report that I put out had a wolf somewhere in space. I had to create some of the images on my own at some point. They looked terrible, but it got people pretty excited, and you get people reading reports and referring to them as, oh, you know, the one with the wolf in the sunglasses.
Caitria: I had another set of reports where I was working on date flexibility at Airbnb. This wasn't even an image. I just titled the emails to all of my stakeholders about the results as, "Weird Flex, But Okay." I mean, it summed up the whole thing. People were like, it was usable, but they were like, ah. It messed with their mental model. People in meetings that I wasn't even attending were referring to it as the "Weird Flex."
Caitria: It's using just any kind of memory trick. We're really, at the heart of our work, we're storytellers. It's our job to educate people. And if you think of any set of tricks that a high school teacher or a storyteller would use, we have those same ones available to us as UX researchers. And it's a big part of work because again, I've seen amazing findings crumble into dust in the Wikis just because no one got excited, or no one knew what thread to pull to find them.
JH: Does the delivery channel matter at all? Does it matter if it's a slide deck, or a written doc, or a Wiki, or can you do all the things in any of those formats?
Caitria: That's an excellent question. I think it's a little bit easier to use memes and images and stuff like that when you're using a deck. That's a great thing for both presentation and then also to keep the findings in and share with people in an evergreen way. But for a written report, it might need to be cleaned up a little bit. There is not as much space in the border and the margins for emojis and stuff. So, I do find written reports are a little bit drier.
Caitria: And then it's really when you're thinking of the storytelling tool, then it's the narrative arc that's really important. It's those bullet points that frame the whole thing. What are the top three sections? That's the storytelling tool. So, if you're doing a report on the quality of the app, you could make it fat, slow and big, or something like that. That's still a storytelling device. You're really emotionally affecting people with those three section titles. I wouldn't suggest using them that bluntly unless you have a really good relationship with your team, but you can still use those same kind of interesting, provocative tools no matter the type of tool you're using.
Caitria: The other parts that make a really good research report, so the informative and actionable side. Informative means that it needs to teach the team valuable information. So that means, a lot of times I see reports come out, especially from researchers who are maybe dabbling for the first time in product research, and the goal was to figure out, does this button work, or does this work for the user? The report comes back and the report says no. But, that doesn't give anyone more information on what to do about it. What didn't work? What about it didn't work? If it was changed in X, Y, or Z way, what would that mean for this whole design altogether?
Caitria: So, making a report informative means not just saying the what, so that's four of X users had trouble finding this, but the why. So, that digs into things like their reactions, they just didn't see it, or their mental models, it looks like this other thing, so that's why all of these people thought it behaved in a different way. You can really start teaching someone about what about the design was not sufficient. When you look into mental models you can start doing mini audits, like showing people, oh, if everyone thought it did work this way, or it should work a different way, here is how it works on these other apps and that's what informed them. So basically, your goal is to have the designer come away not saying, I did it, or I didn't do it and now I feel bad, but rather being a better designer because of all the information you gave them.
Caitria: And then the last bit is actionable, so you're really targeting this stuff. You can write an amazing report about all the ways that the button was invisible to people, but again, you need people to act on it in the short-term, so you're really trying to inform your stakeholders who should do what and also what you will do next. So, on the actionable side you'd need to put in clear, near-term recommendations. That's sometimes hard for a researcher. We're supposed to be very objective, but that does not mean that you can't suggest the next steps in the process.
Caitria: So, if the button was invisible, no one could find it, I don't have to tell them, the designer or the product manager, how to make it visible. I can give some suggestion if I'm feeling very confident, or if I did an audit. But, I do get to say things like, I'm going to test the next version of this and the button needs to be findable, or I can say things like, these are the top four things in order that make this design good or bad: the button has to be findable, the information has to be usable, whatever the product is I'm working on. It has to be a very clear recommendation and that push to make sure that people actually come back and act on it, even if they enjoyed the whole deck and your memes. It doesn't mean anything if they don't walk away with clear next steps to work with it.
Erin: Right. So, which of these is the most important?
Caitria: Oh, goodness.
Erin: And you can't say all three.
JH: It depends.
Erin: It depends is the other popular answer, yeah.
Caitria: Yeah, I think, well, if I'm not allowed to say it depends... that would be it, because I think it depends largely on the actions that you need people to take. So for example, if you're just having people understand the value of UX research and your value as a researcher, and understand the user and some of that stuff, enjoyable is a big deal. You're getting an engineer who's never touched research before excited.
Caitria: On the other hand, actionable: if you're up against conversion requirements from your company and it's the end of the year and someone is going to get fired, it's much more important to go into actionable details.
Caitria: But I think overall, to answer the question, informative is the most important thing you can do as a researcher, so teaching valuable information. I love to put out there an important distinction in user research that is helpful to bring into your decks and the other types of insights that you share. It's cutting things into, first, that short-term, product-focused research. Did it work? Did it not? Here is what worked, here is what didn't, and here are the next steps for the products that we're building. Separating that kind of information from evergreen learnings.
Caitria: So, in any kind of research session, for example, if I'm doing a one hour product-focused, usability-focused, task-based session at, let's say Facebook. And I start off the session with some questions. I usually ask things like, what types of things are important for people to find on Facebook? How do they search for things? These are warm-up questions that I asked across hundreds of participants over the years. So, you're getting, over time, a lot of foundational information even from that small usability-focused study that you can eventually start to share. And that kind of information, if the product succeeds or if the product fails, those are still important findings. Whatever can be stripped away from that individual effort, that individual guess that the designers and the product team have come up with, and taught, that's really important.
Caitria: So, usually when I'm doing a deck, most decks that I do, I'll separate out the information that's evergreen, so anything about the user's behaviors, motivations, barriers to changing, current method of solving a problem. And then also, I'll dig into their mental models in there, what they expect, why, what are their current app uses, the barriers and differentiation between those scenarios where you might want to use one or another. That kind of stuff, I want to be able to strip off the end of any deck and combine those together at the end and say, this is what you need to know about the user to design a Blue Sky version.
Caitria: And then, the rest of it is very much focused, sharing screenshots, showing the exact state and time that that product was at and then the information about how the user reacted.
JH: I want to come back to the evergreen thing.
JH: But, just a quick one. Since you mentioned informative being one of the most important, or the most important: as I was thinking about this and how I might deploy it myself if I was going to write a report, informative to me also seems like the hardest, or the most at odds with the enjoyable aspect, because the first thing that comes to mind for me is, okay, just throw everything in, and then you get this dense, hard to navigate report, which is probably boring. How do you do informative in a way that doesn't hurt the enjoyability?
Caitria: So, I think that as a UX researcher, you can take the position of being a product leader by helping define and redefine and share in your decks the brief of what the product ideally should do. So that means, if I'm looking at the product and it does X, Y, and Z, in my brief I can say, or sorry, in the deck I can say that this is the most important thing that this needs to do. The user needs to be able to find the booking button. This is the second most important thing, the user has to feel engaged. But, by ordering those two, I can take a position and say that it's more important that this is usable than that it is engaging. Basically, by taking a firm point of view on what the product should do, I can start to winnow down the amount of information that I should be sharing. I don't think I've worded that very well, but it's a-
Erin: No, no, you have.
Caitria: Yeah, please.
Erin: No, I think it's really interesting because you said something interesting also before, right, which was that researchers are supposed to be objective, right. But, that doesn't mean that you don't come out of a study without actionable stuff, whatever you want to call that.
Erin: Opinion or whatever, right. And so, you're talking a little bit about that again in the context of part of how you make your very informative deck approachable and not just gobs of information overload is taking a little bit of a stand and point of view. How do you thread that needle, right, between the coming to a point of view and to an opinion and shaping even the product vision, but also this objectivity. How does that all come together?
Caitria: It's a back flip for sure, but I think it's similar to the ones that we execute when trying to give really objective information without hurting the designer's, or the product team's, feelings, right. So, there was a POV and we're saying it did or didn't match up to it. It's almost like working with your product team to understand their POV and why they think this is the most important thing for the product. And then also, to educate them about what the user's needs are compared with those other product needs, for example.
Caitria: So, part one is having a dialogue with your team, so you're not just coming in at the end of the process saying, here is what we actually are trying to do. These are words they should have heard in meetings and stuff like that, or your other briefs. Two, you can start building it into other parts of the product process before your deck comes out, before your insights come out. That might look like writing a set of user needs or user goals, or building your statements into an experiment plan or a product description, working with the team on that. And then, you could point back at it and say, hey, these are things that it did or didn't do. And then at the end, it's a little bit more about framing.
Caitria: So, I'll use an example from Airbnb. Airbnb has several tiers of different types of merchandise now, so that means they have the lux version, which is really expensive, like $5,000 a night. There is a butler included. And then, they have a one-bedroom with a cat lady and you're sleeping on her couch in a spare room that her child left many years before. There are very different levels.
Caitria: What I was trying to do with the team when I just joined was figure out what does a luxury accommodation page look like? It can't look like the one that we already had. It can't look like the cat lady's home. But, what is luxury? It's extremely hard for us to find people in the over $800,000 per year income range to talk to. You can't just do easy research on this either. So, the team had a very strong point of view that showing things like the amenities in photo form would be luxurious feeling, so showing a picture of a Keurig instead of listing coffee service, or something like that in text.
Caitria: And by working with them beforehand, I got to realize the goal was not to put photos on the page, even though that's how the product was framed. They were like, "We need these photos." I was like, "Why?" "To feel luxurious." "Okay, that I can work with." So, we took three different versions, two different photo design directions and a list version, out to very affluent guests and had them free-form describe, react, and compare these three approaches in a variety of different ways. We had people end up saying, "Ugh, this looks like clip art," or, "basic," or just the exact opposite of the team's stated goal. We found out the list was much more effective for feeling luxurious, especially when using header titles in a list. You could do things like "outdoor amenities," and that felt very fancy. It was not what we expected.
Caitria: And because it was nixing, I think, a $200,000 photo shoot or something like that at the last minute, this was not well timed. And I was doing it on a new team. I was trying to be sensitive, so I was really trying to speak the language of, what is our goal? So, a whole slide about this is what we're trying to do. And then a slide about our hypothesis, that photos might do this. And then the slide that actually had the user's voice: "basic," "clip art," "word art." There were a lot of great choice quotes in there. And then, it doesn't feel so much like you as a new person, as an untrusted product visionary or whatever, coming in there with your own opinion. You're just saying, we achieved or did not achieve this team goal on this particular thing. And then, suggestions on things to do moving forward.
Caitria: So in this case, the photos were not good for showing amenities, but they were very helpful for teaching people about services that they didn't know. So for example, if you're going to book a once in a lifetime villa, you might never have experienced a butler, or something like that, or transportation services, so having photos that described those services was actually helpful. And it was on goal because we needed people to understand these to value them. It wasn't about the photos. It was about how they were applied. And it's about teaching the team where to use that tool.
Erin: Right. Yeah, the details matter there. Airbnb is famous for the photos being so important, right, and an early part of the experience.
Erin: So, the surprising insights are always the most satisfying. So, thanks for sharing that one.
JH: All right, a quick awkward interruption here. It's fun to talk about user research, but do you know what's really fun? Is doing user research and we want to help you with that.
Erin: We want to help you so much that we have created a special place. It's called userinterviews.com/awkward. It's for you to get your first three participants free.
JH: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's going to be easy. It's going to be quick. You're going to love it, so get over there and check it out.
Erin: And then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.
JH: Could we do a quick tangent back to the evergreen insights you mentioned? Do you actually have, or do you intend to build out reports for those, and do you employ the same tricks of making them, I don't want to call them tricks, but same tactics, I guess that's not even a better word, same approach of making them enjoyable and informative and stuff? Does it just grow forever? What does that actually look like because it seems really interesting?
Caitria: My method of making my decks, separating them into evergreen and then more actionable product insights, helps me do something else that is in my toolbox, which is research reviews. Whenever I join a team, or whenever I start working on a new product, I have to learn a lot immediately. I have to read through all of the research that's ever happened, look at the dashboards that exist for it, interview people about the product and attempts that happened in the past, look at experiment results, things like that. So, I'm already doing that human computer thing and crunching a whole bunch of information.
Caitria: I also realized that teams are not... that's the way I'll say this: teams change frequently in tech. So, you're going to have people who are joining and having that same experience of needing to get up to speed, know all of this information, and really understand that evergreen stuff as well as you. So, I find it's a great opportunity whenever I'm getting started with a topic or a team to create a research review that briefly summarizes what we know about that topic. So, I'll use an example from, let's say, Airbnb.
Caitria: So for example, the team had just put in an auto complete system for Airbnb. They hadn't had an auto complete, or an auto suggest system, in a while. I knew that we were going to be starting to do some experimentation moving forward. And we didn't have a lot of information collected about what auto complete should ideally do, or about how the search system works.
Caitria: So as I was learning about it and starting to work with that team, I just started collecting everything into two big research reports: one around how people search on Airbnb, mental models, the types of terms, their navigation through the site, where they land, the types of things they're looking for, the ways that they make decisions when they land on search results, all those sorts of things. And then, another one around auto complete: the types of queries people enter, the types of things that they click on when they're shown a set of results, all of that kind of information.
Caitria: There was some research already done on it, so you can pull out and then cite basic principles. And then, you can also identify, as you're doing this process, what information you still don't know. So, if you're approaching a topic and you're like, okay, well, I don't know any of these things, so I have to find the reports that tell me that, or if they don't exist I have to do that research, it's a good way to identify what the next big need for research might be as well.
Caitria: And then, when I'm creating these evergreen reports, I find that I just chop those sections off and add them on. And with a research review, you shouldn't have all of the information, or every slide. You should have the point of all of that research as one slide, and then a double click into everything else. Basically, if you can give people a way to self-service this stuff, you spend a lot less time in one-on-ones and explaining stuff to people.
Erin: Why do they call it evergreen? Because they're ever, okay, I get it, it grows year round. [crosstalk 00:23:49]. It's a forest, right? It's massive. I mean, you're there for a year and you put the slides together and now we have 100 slides, but we just go to the main point, so we're back down to 10 slides. But, how do you manage this over time? And obviously, this is a whole other topic in terms of insight management. How does it scale, and does the validity of the quote, unquote, "evergreen information" have an expiration date? Are you pruning the forest over time? Let's go with this metaphor as much as possible.
Caitria: This is so good, yeah.
Erin: This is related to the topic of just good reports in general. How do you keep this growing ecosystem, forest of information manageable as it expands?
Caitria: Wonderful question. I'm going to respond with an example from a company at scale. At Facebook, for example, one of their goals is getting to the next billion users, and so they've been doing research on India as a market forever. Every team has been doing research on the Indian market. For example, for the teams doing app performance and app size, it was a huge concern a couple of years ago that the app was so heavy they had to create a [inaudible 00:25:19] app. People are also looking into the cultural implications of Facebook or WhatsApp in India, or looking into the safety issues, for example. Like, many people might use one email for accounts. So, there is just an incredible amount of information pouring out of every team at Facebook.
Caitria: And when I was there, they had finally reached a point where the research reviews just weren't effective anymore. But, the best next step that they had gotten to before I left was having rolling research reviews committed to and created by a set of teams, I think either quarterly or by half. They were going through that research review, that document that should teach you all of the stuff you need to [inaudible 00:26:07] about this topic, at least every half year, adding new information and changing it out.
Caitria: That's the same kind of process I've seen at smaller scales as well. So for example, when I joined the search team at Airbnb there was a document with the biggest issues in search, the most common issues, and some stuff on mental models. But, we had already solved some of those issues, so it was time for another update that surfaced the biggest issues, rearranged them, and still kept that important look at mental models as well. Yeah, it is a great question because I don't think the work is ever done.
Caitria: That's where another communication step becomes really effective, which is the creation of self-service places to put this information where you can get into versioning and stuff like that, so creating wikis, websites, indexes where you list all the types of research done on one topic. That kind of stuff is super effective because you can send people there instead of-
Erin: Yes, please.
Caitria: ... Slacking them, or sending the message individually to answer their questions.
JH: Yeah. There's a, if a research report falls in an evergreen forest and nobody reads it, does it make an impact, or something like that, right. Cool.
JH: The question I had actually is a continuation of that a little bit. When you're putting together a set of slides to walk people through what was learned and everything, are you doing it from the perspective of making the slides in a way that you're planning to present these, like live in front of people and be there to annotate it with a voiceover, or are you trying to write them in a way that is stand-alone and people can digest on their own time and take it in independently?
Caitria: I try to make sure that the assets that I'm creating with insights and findings are usable both for presentation and for sharing out, so I do a couple of things. One, I try to make sure that each of the slides is self-explanatory: linking out to other stuff, making sure that there are arrows, current screenshots, all of that. But two, I try to make sure that they are readable from the back of the room if I am going to present in person. So, I do try to make sure that each of my slides is constrained to two or three different font sizes, a big one and then nothing smaller than about 30, if I can help it, in Keynote. And then, I also try to make sure that the screenshots that I'm showing, I basically...
Caitria: When you're presenting to people you really need to ground them in the visible. When you're working with UI you have to ground them in the visible issue that you're talking about to make it really hit. So, I space my information really widely across these slides, show a big close-up of what I'm talking about, an arrow pointing right at it, some annotations. The next slide might be a whole bunch of ways that other sites do it, actual screenshots. I try to stay super visual.
Erin: Do you include a video, or images of your participants at all?
Caitria: Yes, but only where it actually will make a difference. I find that people don't click on those play-
Caitria: ...button symbols in decks very often. And so, if I'm going to give a presentation and I think it will be a nice effect I might include it, but rarely otherwise. The notable difference is when those videos are extremely important to the conclusion. So for example, I was testing different search input interfaces, so those are the places where you would either type or input your filters for Airbnb. And I noticed that because of the way the site was loading we were having people pause, have another moment of cognitive load, then think, then take an action. And because cognitive load was so important to the way that this particular design stacked up against another one, it was really helpful to have the person go ah, with their finger hovering over it, and then eventually take the action. And then, I could show that side-by-side with a video of the alternative, where the person was very quickly able to identify what they needed and move on.
Caitria: I think for presentations especially, I love to have that stuff. It depends on personal preference, and on where you're sharing it. At the team level, so probably everyone up to director at the company, I'll use really fun stuff, really irreverent language. But, once you get up to sharing at maybe a C-level team meeting, or something like that, or a full company event, I do recommend polishing up and formalizing some of the tone and really focusing even less on the data except where it supports that overall narrative. The narrative, the story that someone takes away, the urgency behind it, is the big thing at that meta level, and then as you drill in, and you're sitting down one-on-one with the engineer, that's where all the detail can come up.
Erin: That's counterintuitive to me. You'd think the C-suite, right, give me all the charts, give me the data. You're saying the takeaway is the most important thing. How do you convince them the takeaway is right, or do they need to be convinced at that level? What's the goal?
Caitria: I think that'll really depend on the circumstance.
Caitria: The example I'm thinking of right now, I was engaged at Airbnb in a 2020 planning effort. A small group of researchers went through a lot of their information and data from across the whole company and tried to figure out, what are the big blind spots for 2020? What does the company need to focus on, or other people will eat their lunch in 2020, that kind of thing. There is so much information. I think we came up with a set of seven different meta themes and then combined that into a group of five that we dug deeper into for several months with more research. There is so much data. There are so many charts at that point that it's overwhelming. People shut down.
Caitria: So eventually, we worked with different, really talented storytellers across the company, which I definitely recommend as a researcher. We are not always the best storytellers, even though storytelling is an important tool for us. By working with really talented storytellers across the company, we were able to break it all down into a two-part message that really spoke to people.
Caitria: Instead of saying, my first draft might have been, the app is fat, it is slow and it's too big to download, according to everyone and all of our competitors are doing better. So, I'm already getting into some of the data and stuff. But, you could wrap that up in a much more positive way and say, we could be a better host or something. We could be better to the user. That kind of message, you don't have to have every single bullet point of what was wrong to take that on as the mindset of what we need to do better.
Caitria: So yeah, the whole company doesn't need to know the exact app size. The whole company doesn't need to know the exact rate of bugs, or a big issue between competitors, but they do need to know that the big focus moving forward is treating users in this particular way.
JH: Cool. I have a quick two-parter.
JH: One is, how do you make the time for it? I feel like, at least for me, whenever I finish research you're always just ready to move on. And so, wrapping up the report sometimes can be hard to actually spend time on. And the second one is, how much time do you make for it between maybe three categories of upfront alignment with the team, actually scheduling and facilitating all the sessions, and then wrapping up a report? What's the breakdown of your time spent across those buckets?
Caitria: Awesome. I think I move pretty fast on this stuff, but it's only because I use a ton of tricks and speed methods to make sure that I don't have to spend a ton of time on each. I think if I was sitting down and really giving it my all, any of these stages could take a ton of time. So in terms of upfront, I try to make sure that I am engaged with product planning before any research is done. It does take a little bit of time. It means, for example, having people send me their one-pagers for the product description, the experiment plan and timeline, experiment results and their hypotheses, those kinds of things. It's an open door policy where I just want the team to know that I should be engaged in interpreting and hypothesizing what the user is experiencing in the product.
Caitria: And then, that means that I have an eye on what people think the answers and the questions are, so that I can frame the report in a way that answers those questions, or validates, or invalidates those answers. So I'd say, upfront I don't know how to give a good time estimate, but I will describe for a typical study, let's say a usability study where the product is in flight and we're trying to figure out what to build. For one of those sessions, I will generally throw a meeting where I have the product manager, the designers, data scientists, and maybe one or two of the front end folks come and sit down, and then go through where the product is at, so I'll have the designer walk through.
Caitria: And then, I take down everyone's questions, what they're worried about, what they hypothesize. And then, I add those to my own questions that I have about the product at that stage, so that I have some insight into what this study ideally needs to answer. I frame my questions around that. But, I also know from that point on what the report needs to say, so I'm already able to have a template for my notes before I start my sessions. So, I know for each session I need to answer these five different categories, I need details on this, so I can almost spreadsheet it out at that point.
Caitria: So, the upfront time is one meeting, and then framing the questions and dealing with the prototypes and all of that, I'd say maybe half a day to a day of upfront time to prepare for the study. The studies themselves generally take a day to two days depending on the number of people that I'm actually going to interview. I'm doing more in the eight to 10 range recently because of the types of questions I've been asking, but generally eight to 10. And then, one of the speed-up methods I use at that time is, I have those five topics, or however many topics I know I need to get to for an interview, and I take notes almost in real time, jotting down time signatures when something touches any of those points. And then, I schedule about 15 to 30 minutes in between sessions where I fill out a rubric that answers, for each participant, what relevant information on each of these points we came up with.
Caitria: And then, I share that generally in a Slack channel. At Airbnb, I had a Slack channel called WTF Just Happened, and anyone who was involved in one of my studies could join and see the short notes on each of the five or so relevant topics from each of the sessions, and then they could jump into the recording, or ask questions for follow-ups. So, I'm trying to engage people and share those early findings as we go. If I'm doing field work, at the end of day one or the end of day two, I'll generally send notes from the field, or that same kind of what-happened quick notes.
Caitria: Then it's pretty easy. I have a lot of slide templates that I've built up over the years, but I generally start off by making section dividers for the slides first. You already know what the slide deck needs to show, so you can start filling out, these are going to be the five sections. What did you learn out of that? You go back to your sets of notes and you can really easily look across them and say, these are what seemed significant.
Caitria: I try to make sure that when I'm doing a deck it's always weighted from most important, most impactful thing down to least impactful. I try and make sure that people know that it is impactful or isn't because sometimes I've noticed people get excited about one detail at the expense of something more important.
Caitria: I use a whole bunch of clip art, or I go on Dribbble and I steal art. Don't tell anyone. Sorry, this is a podcast. You could use and then cite people's work from Dribbble, or any of those places. It's not our job to make it beautiful, but we do have a lot of tools at our disposal and templates that make it fast and easy to make something that's pretty attractive.
Erin: You used the word that I've been wondering about, which is template.
Erin: Do you have templates that you share with folks who are maybe newer at this? Certainly you have templates for yourself. How did you get to that template? Look, I can object to my own question, I don't know if you would, right. It has to be enjoyable and informative and actionable, and there is no template per se for that. It's based on the specific kind of research you're trying to do.
Erin: But, do you have any kind of shorthands, or outline? I mean, we talked about a lot of it, but I'm wondering if we can help folks out who...
Caitria: I do have a shorthand. So I mean, the short answer is I prefer to make my decks most of the time without templates because it lets me have a lot of freedom, and I've done so many at this point that it feels like a template. You just throw stuff up there. I have a set of old decks that maybe I'll go back and copy a ton of shit out of if I'm really pressed for time. And then, I share very widely with people. I even have Medium articles where I've shared some more general templates for people, because if right-aligning or left-aligning everything isn't your skill set, you should be doing data analysis. People who don't want to be messing with pixels shouldn't have to be.
Caitria: In terms of things, ways to help people out, I have a couple rules of thumb. The first is that cover slide, so important and rarely done. People need to put the dates, their names and contact information and what the study concerned because usually it's like, Usability Study Trial #17, or something like that and that doesn't help people find that information, so yeah. And then something really memorable, some kind of really memorable either a color set that helps them remember it, or an image that helps them know that this is that particular product, or even a screenshot of the screen in the particular thing that they're testing.
Caitria: Next, it's really helpful to highlight the method, so that's the sample size, what types of people it was. And then also, a note, especially if your team hasn't worked with user research that much, a note on what that means. So, that might mean either telling people, for example, you can use this to understand what is usable, or maybe not usable for the average person, but not preferences and desires, or something like that, or attitudes. You can help them know how to interpret this particular method that you've used.
Caitria: The other is a TL;DR. That's the up-front list of, if you don't have any other time, these are the things I really need you to know. And then, bold some of it. Help those parsers and skimmers. Bold the most important stuff.
Caitria: And then the same thing with a list of recommendations. I put those up front too, because I get paid or don't get paid based on my impact, so I want it right up there at the front of the deck: you should do this. Then, I think another really helpful thing is to break something down into thematic sections.
Caitria: It feels overwhelming to get all of the information at once. You want to help people, so let's say they're only interested in the date picker, or the button, or the thing, or a particular service. Break it down into the sections that you know your team members need, and then make clear section dividers that help them jump between. And then, if you're using something like Google Slides, or Keynote online, you might even be able to link those, so that they can jump and have an up front glossary of what's going to be in the deck.
Caitria: Once you get into the sections, you should have very clear screenshots of what was tested. The product changes so often. So, if you were like, it was hard to find. Which version? It's changed again already before the sessions were even done. So, having that what we tested up front and links to the prototype, links to video of the sessions, or your notes from the sessions and the script are really helpful.
Caitria: And then, having recommendations both in [inaudible 00:42:57], so when you're teaching them the thing, the button is not findable, give the recommendation right there. Make it purple or move it to the center of the screen. But then also, have that section at the end that recaps the recommendations.
Caitria: And then a last thing I like to put in place at the end of decks: the next steps, both for research and for the product team, just to get it out there and give people that kick in the butt to get moving. I'm like, okay, the next session needs to test these open questions. We still don't know X, Y, and Z. And then, I'll say, the next step for the product team is to give me mocks that help us test X, Y, and Z. You almost set up the product process at that point using research as the yardstick. You're saying, hey, we're not done yet. We haven't done this yet and we need to, so your job is to help us figure out what we need to do, to do so.
Erin: Love it, cliffhanger.
JH: That was a great look. [inaudible 00:43:50], amazing.
Erin: All right, well, we are at 4:04 and I think we...
JH: Hang on, I have a good question to close on.
JH: Favorite emoji?
Caitria: Oh, I'm going to say the scream emoji.
JH: That's a good one.
Caitria: Do I need to give a reason?
Erin: I think you do. This is a user research podcast. We need to know why. Right.
JH: When is the last time you used it [crosstalk 00:44:17]?
Erin: Will you use it again?
Caitria: Oh, definitely in a deck, yeah. Definitely in a deck, and I think it was reacting to a particular user's quote that just seemed kind of crazy. This hasn't happened recently. I haven't worked at Facebook in years, but I worked there during the election. You'd send a survey out about something that had nothing to do with anything, it's just about Messenger, and then you'd just get crazy, crazy responses back. So, I like sharing something that you shouldn't have learned, but it came out of the person's mouth. Share that with the scream emoji, and it helps bring humor out in some of the responses.
Erin: Well, JH, now you have to say what your favorite emoji is.
Caitria: Yeah, that's true.
JH: What did I say? We did this as an onboarding thing the other day. What did I say? I said I like the eyes that are glancing to the side. I think you can use that in a lot of different ways. I'm a big fan of the ghost emoji, just kind of playful.
Caitria: Ghost is good, yeah.
JH: Yeah. Erin, what was yours?
Erin: Well, now I'm trying to think. I should just let Slack tell me what it is, because it knows which one I use the most often, doesn't it?
JH: Yeah, that doesn't mean it's your favorite. You know what I mean?
Erin: Well, yeah, favorite.
JH: Like the plus symbol gets a lot of use, but that's nobody's favorite.
Erin: I do use the heavy plus sign quite a bit.
Caitria: Balloon is another great one. And last pro tip, when I'm sharing findings I put emojis in the email title and it gets people to click.
JH: Balloon, I don't think I've ever used balloon.
Erin: Bro, you know my favorite? I like the shrug and the face palm. Those are my two favorites.
JH: The shrug, yeah. Yeah, the shrug ones are good.
Carrie Boyd is a UXR content wiz, formerly at User Interviews. She loves writing, traveling, and learning new things. You can typically find her hunched over her computer with a cup of coffee the size of her face.