
Driving Actionable Decisions from Insights with Pejman Mirza-Babaei, UX Research Consultant

UX research consultant Pejman Mirza-Babaei discusses games, user research, and how to turn research insights into strong design decisions.

As a researcher, you know the value of using research to drive business decisions—but other stakeholders might not. In order to drive action from research, you need to create alignment between key decision-makers and your research. 

Pejman Mirza-Babaei joins the Awkward Silences podcast to discuss how to transform research insights into actionable design decisions by collaborating with stakeholders. With over fifteen years of experience doing UXR and interaction design on various applications, Pejman shares insights on the importance of effective communication and collaboration; the need for actionable and cost-effective research; and the challenges and biases that can arise in decision-driven research.


[00:01:53] Pejman’s extensive experience with games and user research

[00:04:40] Making good, directed, actionable decisions from research

[00:08:53] The inspiration behind The Game Designer’s Playbook

[00:13:14] Collaboration with stakeholders to determine research necessity

[00:16:53] Ensuring alignment between decision-makers and researchers

[00:23:30] When decision-centric research backfires

[00:27:31] Avoiding marketing your company to your participants

[00:30:07] Specificity and context

[00:32:28] Navigating difficult decisions stemming from research

[00:37:03] Successfully seeing research insights through to action

[00:43:46] Life motivations mirrored in games

Watch or listen to the episode

Click the embedded players below to listen to the audio recording or watch the video. Go to our podcast website for full episode details.

Sources mentioned in the episode

About our guest

Pejman Mirza-Babaei is a UX research consultant, author, and professor. His latest book, The Game Designer's Playbook, was published in September 2022. He also co-edited the Games User Research book (2018), a compendium of insights from over 40 experts on UX research in games. He has over 15 years of experience doing UXR and interaction design on a wide range of applications: mobile apps, VR games, streaming content, console and PC games, and even delivery robots! He worked as the UX Research Director at Execution Labs (Montréal, Canada) from 2015 to 2017 and was a UX Researcher at Vertical Slice and Player Research (UK) from 2009 to 2013.


Pejman - 00:00:00: As researchers, as decision-makers, especially if you are working on a cutting-edge product, as games projects usually are, we are trying to make a guess, and we are trying to make, I would say, an educated guess. And then the research, in most cases, won't tell us to only do this. It'll help you to make that educated guess with more education, let's say.

Erin - 00:00:19: This is Erin May.

JH - 00:00:21: I'm John Henry Forster and this is Awkward Silences.

Erin - 00:00:33: Hello, everybody, and welcome back to Awkward Silences. We're thrilled to have Pejman Mirza-Babaei here to talk about games user research and how to really turn research insights into strong design decisions. You've written a couple of books on the subject: The Game Designer's Playbook, as well as Games User Research. We're thrilled to have you here. Thanks for joining us.

Pejman - 00:00:56: Thank you so much. My pleasure to be here. I've been listening to your podcast, and I'm a fan, and I'm happy to have the chance to contribute. I hope the listeners will enjoy this talk.

Erin - 00:01:05: I'm sure that they will. Now we've got JH here too.

JH - 00:01:08: Yeah, anyone who's worked with me has probably seen me share an article called Making Good Decisions as a Product Manager by Brandon Chu. So I'm very into decision-making, so I like this theme already.

Erin - 00:01:17: Well, thanks again for being here. Why don't we just start with hearing a little bit about your background? Because I know you have some extensive experience with games user research, and you've written a couple of books and everything. So tell us a little bit about you.

Pejman - 00:01:27: Sure. So I did my undergrad in computer engineering, so nothing to do with UX. I worked as a network administrator for a year, and I found it very boring. So I decided to do a master's in commerce and UX. After that, I worked in web UX for a year and then decided to do a PhD focusing on UX and games. I think my thesis was probably one of the first on the topic of formative evaluation of games: doing research with the intent to improve products that are still in development, versus lots of past research that tried to prove a hypothesis, for example, that playing with a friend would give you more enjoyment than playing alone. So my focus was improving the product, and I think that was one of the first PhD theses on that topic. I worked as a games researcher. I started at a startup in the UK called Vertical Slice and later on Player Research, which was bought by Keywords Studios a couple of years ago. And then, in 2013, I was offered a research professor position in Canada. So I moved from the UK to here, and I'm a university professor, but I do most of my work in collaboration with game companies or other product companies. I worked as a UX Research Director for a company called Execution Labs for two years, from 2015 to 2017. And since 2018, I have had my own consulting firm, where I work with many clients on doing UX. I think of myself as a UX researcher who likes to do a bit of design. I talk to many people who are designers or producers, and they say they do research as well. I'm the opposite: a researcher who does a bit of design. So that's where I see myself situated at the moment.

Erin - 00:03:01: Very nice, very nice. And how long did it take you to get from that beginning where you were a developer and bored and looking for more, to where you are now?

Pejman - 00:03:08: Probably 15, 20 years, but it's life. You know, things happen, and sometimes when it happens, you don't really know what it's going to bring you in the future. But it turns out to be good. I'm really enjoying what I do these days.

Erin - 00:03:20: Yeah, I heard somewhere the other day that whoever's having the most fun is winning. I think, you know, someone came up with that a long time ago, but I like it. And if you're not having a fun time doing what you're doing, well, what's more fun than games user research, right? So I love that.

JH - 00:03:34: Yeah. So since you have, you know, familiarity with both the research side and the design side, how do you think about what is a design decision or like an actionable decision that can come out of research? It's easy to imagine lots of decisions or lots of ideas coming out of research. Like, what makes a good, directed, actionable one, in your opinion?

Pejman - 00:03:50: Sometimes, when I talk to people, they only think about the decision aspect of the research when it comes to reporting the research. To me, you should have that goal in mind from the very beginning: how are you going to report this in a way that makes, or supports, the decisions that will be made? That obviously influences everything you do, from the study design to conducting the research, and then, obviously, the reporting of the results of your research. The way I like to think of it is how that research will be used, basically what sort of decision people would make. Sometimes I work with designers or producers who are looking for ideation. So they have something in mind, but maybe it's not their focus, or they don't have experience in that topic. So they want to talk to some users just to get some new ideas from those users. I call that a more exploratory, ideation phase. Sometimes the designers have two, three, or four alternative designs, and they're looking for some form of research to help them select which alternative makes the most sense and which they should go for. And then sometimes they already have a very strong design intent and have developed it into a close-to-finished prototype, and they want to assess that. So they already know their design intent. They already know what they're designing. They come to do research to see if users experience the design intent in the same way they designed for. Having those quite broad categories in mind obviously helps us when we design our research and decide how we support the designers in making those decisions. If it's ideation, I usually try to have designers involved. They come to the session, they watch, and they may even interrupt and ask questions. And there is no strong validity in terms of proving anything. Even if one person says something and the designer likes it, that's a useful outcome. The designer may build on that and make some decisions. Versus if you are trying to prove something or assess between two or three alternatives, then you want a different set of methods. You probably want a larger number of users. And then, in your report, you probably need to clearly say which alternative works best, the reasons why, and how it could maybe be improved further. So knowing, from the beginning of your research process, the decision that you're going to make is super helpful in terms of the methods and how to conduct the research. That would be my take.

JH - 00:05:58: Just to play that back a little, it sounds like what you're saying is you want to go into research knowing what sort of decision you're trying to make, and you also want to know what sort of category of research you're doing. And what you get from that is: if you're doing something really specific and evaluative, like a usability test across a couple of concepts or whatever, you're probably making a pretty specific decision, like this or that. Whereas if you're doing something more generative or open-ended, an ideation exercise, it's okay that the decision might be a little fluffier or broader, but you know that, and it can still be actioned. Is that what you're getting at?

Pejman - 00:06:27: Exactly. And then the reporting of that may be a bit different. So if you are doing that ideation, you may run a workshop where you bring all the team members in, and everyone's going to ideate on something that maybe one or two users said. But if you are trying to validate or assess between alternatives, then you probably have some measures that you're measuring against, and you're looking for some level of significance comparing the different prototypes or alternatives, and you make a decision on that basis. And your reporting would be very different in those cases.

Erin - 00:06:54: I know we were inspired by this topic because you've written a book on it. So I just wanted to back up and ask why you wanted to write this book.

Pejman - 00:07:00: This book is very personal to me. It's done; it was published a few months ago. And I decided to write it for maybe two key reasons. One was that I did that book called Games User Research, which focused solely on doing user research and methods for doing user research, particularly for games. We started it in 2015, and it came out in 2018. After my work at Execution Labs, my view of doing research changed a little bit. I realized that when it comes to helping developers build better games or better products, there are other ways to do that than just doing research. There might be an easier and faster way to help them than doing primary research with real users. So that was what motivated me to work on The Game Designer's Playbook, because as we do research on products and games, we learn about what makes a good design. And I thought I could combine everything I learned from doing research on different games, as some form of recipe for good design, and put it in one book. So it's basically a collection of design features that work in existing games. The book talks about them and why each design works, from a UX perspective. And hopefully, that might be an easier way for people to assess their work or think about their game design than just jumping in and doing research. That's one of the downsides I've seen at many companies: when they have a question or a decision they want to make, they immediately think, let's go and do user research, interview 50 people. And usually, my job is to say, hey, there are other ways to get the answer to your question than just doing user research, because doing user research is expensive and can be quite time-consuming and hard.

Erin - 00:08:47: I think that's great, and it's super timely right now. We've done a couple of episodes on this, like the one on when not to do user research, right? And, of course, being the company that we are, one that supports recruiting for research and advocates for research, we like research. But it doesn't help anyone to do research when research isn't additive or helpful, right? Particularly in this market. It's about really using research where it's needed and where it can drive better decisions; not every decision needs research all the time.

Pejman - 00:09:16: When I say user research, usually I'm referring to bringing in actual users from the target audience. But again, sometimes you may still talk to experts. I actually used your product to interview some experts, because the question we had, I thought, was too complex to put in front of target users. So I thought if I just interviewed experts, they would tell me what I needed to know at that stage. And that's what I'm trying to highlight in the book: don't immediately think about doing your usability test or talking to the target audience. There might be other ways: you can look at comparable products, you can maybe interview product owners, you can maybe interview experts, and try to get to the answer and support the decision you are making, rather than just directly talking to target users.

JH - 00:10:02: So you're describing a situation where you might not need research to make the decision. There might be other ways to get a signal. There's probably the inverse, right, of decisions being made that probably really do need research, but nobody's doing it. So it sounds like part of this in terms of making good actionable design decisions based on research is also knowing when it's needed. So any thoughts on how you make sure you're interacting with stakeholders and having that collaboration to determine if we need research here or not?

Pejman - 00:10:26: It's actually interesting, because as a consultant, when I come in, someone feels that there needs to be an external body to come in and intervene in what's going on. So in most cases, there is some friction happening inside the company, and they need some extra opinions. So they contact me: hey, can you come and help us here? Sometimes, as you say, yes, there is a need for user input. But in some cases, I've seen that there is a lack of communication about the visions of different team members. That's sometimes the biggest challenge. So back to your question, John, I think before doing research, one step might be doing some form of UX research on your own company, to see if everyone is on board and shares the same vision for the product they are making. That's a common problem that I see: lack of communication between team members, lack of shared vision. The CEO may want a product and is thinking about one thing, and the engineers are thinking about building a different product. Obviously, user research or usability testing can help in those cases. You bring in users and say, hey, by the way, here is how your users actually react to what you created. That surfaces and highlights those internal disconnects. But arguably, there could also be other ways to figure those out. I think you had an episode on coaching that talked about that, and I thought it was super interesting, because a lot of the time it's about how people perceive what's going on and how they want to push their vision forward and things like that.

Erin - 00:11:59: Yeah, honestly, it's been one of my biggest learnings doing all of these episodes is that so much of the research you do is internal and internal stakeholder management and really figuring out what are the real questions, what are the real assumptions. That's at least half of the job.

Pejman - 00:12:14: So, especially as an external researcher coming in, I try to spend as much time as possible understanding the product and the vision behind it. In the game industry, it's very common for games to have a document called a design document. I haven't seen them much in productivity or web apps. But in games, they usually try to explain every feature clearly and what sort of impact they expect. Those are very helpful for people like me as external researchers, because I read that and think, okay, now I know what your intent is. Let's try to evaluate that intent. And that's something I haven't seen many app companies or productivity software companies do very often.

JH - 00:12:51: I think about the fact that the person doing the research is often not the person making the decision from the insights. So how do you make sure that you're doing good research and coming up with insights that are geared toward making decisions, when somebody else has to make them? How do you make sure there's good alignment and a good partnership there?

Pejman - 00:13:08: One side is obviously understanding the company, understanding the question, understanding the product. But then, when we get closer to doing the research, I would categorize what I pay attention to into four things. One is focus: both what you want to evaluate and how you report the evaluation need to be specific enough. Usually, the danger is that you talk to some companies and they say, just tell me if my product is good. That's actually quite a difficult thing to do without knowing the specific thing they want to learn. That's your key research question. And then, when it comes to reporting, if I do the work and just say, hey, this level of the game is not good, it's very hard to act on, because it's not specific. But if I say this level is not good because in the previous level you didn't teach this mechanic correctly, and you want the players to use it in this level, then I've given them a very specific issue to deal with. So that's specificity, which makes things actionable. The second one, which I've seen especially when working with junior researchers, is timeliness: what you need to do for the research to arrive in time to be actionable. One of the key questions I ask is about the team's development cycle and process and where they are in it. For example, when is your next sprint, where we could actually implement some of the changes? Because if my research takes, I don't know, two weeks to complete and I miss the sprint where they were going to work on their UI, then immediately it's not actionable, or it has to wait another month for the next UI sprint. So knowing that timing matters. Then there's cost, obviously: if I go in and say, hey, I'm going to do this, and it's going to cost too much, is there ROI on the research, relative to the decision or issue it's meant to solve?

And my favorite one is what I call motivation, and that's something people usually don't pay attention to. There's a great article I read a couple of years ago about the plausibility and persuasiveness of UX research, and I thought that was super interesting. Do the stakeholders trust or believe in your work, and can your report, or the way you deliver it, convince them to make changes? I think those are very, very important. I've seen research projects where you just do the research, send the report, and it's not exciting to read. No one's going to read a hundred pages of text. So how can we make the stakeholders motivated to take action? I think that's also really key.

Erin - 00:15:32: Yeah, and there's two parts to that, right? There's like, are you selling it? Which gets into some of the other things you were saying, like, is it timely? Have you proven the ROI? Were you specific? Did you give context? Are you selling it? But also, I imagine there's an element of, did you do the work to understand your stakeholders and what will motivate them? Like we all want to be objective, but we all also have our passion projects or areas we're really excited to improve or whatever emotional stuff is there. Can you tap into those biases, emotions?

Pejman - 00:16:04: I have an interesting story to tell. A couple of years ago, I was brought into a project by a publisher. Again, in the gaming world, usually you have a company that develops the game, and then you have a publisher who publishes the game and usually funds the project. The publisher contacted me saying they had this game and felt it wasn't good enough, but the developer, a different company, wasn't receptive to any of their feedback; the developer thought their game was great. So they brought me in: can you evaluate this game and let us know what you think? I worked on the game, I wrote my report, and then we decided to do an in-person presentation. We booked a meeting with the developers in their office, and the publisher asked to meet me in a cafe nearby before we went in. She said, I just wanted to let you know we don't have a very good relationship with them. So if they're very harsh or dismiss your feedback, keep that in mind and don't get upset, basically. And I was like, okay, don't worry; I've got this. I knew about that relationship from conversations we'd had in the past. Now, usually when you give a presentation, you start with high-level stuff: your research question, your key findings, and things like that. But my first two slides were just about myself. Here is who I am. I'm not just some random guy coming to your company. Here are 30 or 40 games that I've evaluated, and 10 of them are very similar to what you are creating. So I spent two slides just establishing, let's say, my credibility to do the work on their game. And then, obviously, I tried to make the presentation interesting. I had lots of support: videos, quotes from the players, prioritized findings, all the usual stuff you want to see in user research.

But it was those two slides, where I did the unusual thing of spending the first five to ten minutes just making sure the company trusted me and the work I'd done. And it was so cool, because as soon as the presentation finished, the company owner came to me and said, oh, we have two other games with different publishers. Can you evaluate them as well? That was such a validation for me that he took the feedback on board. And obviously, they made some changes to the game and things like that. It was very memorable work for me, and that's how it ended. In the end, I got a few other contracts out of that relationship.

JH - 00:18:10: Yeah, that's awesome. Yeah, there's so much interpersonal stuff in all of this that making sure that it's perceived in the right way and people are ready to hear it and consider it is so important. All right, a quick Awkward interruption here. It's fun to talk about user research, but you know what's really fun is doing user research, and we want to help you with that.

Erin - 00:18:28: We want to help you so much that we have created a special place. It's called for you to get your first three participants free.

JH - 00:18:39: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's going to be easy, it's going to be quick, you're going to love it. So get over there and check it out.

Erin - 00:18:47: And then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.

JH - 00:18:56: When you are really decision-centric in how you're approaching your research, can that ever backfire or have ill effects, where you do research and it's actually quite inconclusive? Like, there's not an obvious thing to do. Does it almost create pressure to say, well, we've got to make a call, we want to be actionable, we want to make this decision, and even though we can't really tell from these insights, we're just going to go with this so it looks like we're moving forward? Is there any backfiring that you've seen?

Pejman - 00:19:19: Yeah, I mean, the nature of research is usually that you don't know a clear answer to something, so you do some research, and in most cases, the outcome is: let's do more research. It's not a bad thing to be inconclusive. I mean, if you try to be specific, usually you end up with the answer you want. But there are a couple of things I want to differentiate. As a researcher, as a decision-maker, especially if you're working on a cutting-edge product, as games projects usually are, we are trying to make a guess, and we are trying to make, I would say, an educated guess. And the research, in most cases, won't tell us to only do this. It helps you make that educated guess with more education, let's say. So for the people who make decisions, there are obviously a number of factors they would consider, and that research input could be one factor in the process. I would say even if it's not completely conclusive, it probably provides some useful insight to the person making the decision, who can then make a better guess at the end. So that's one thing to keep in mind. The challenge, however, is that sometimes I've seen loads of biases in research that can lead to less accurate findings, and those can create dangerous situations. I'm actually currently working on a product. It's not a game; it's a wearable sensor that goes on your body. It's crazy. They sent me a report from a study that was conducted. And there was an element that I saw come up in lots of interviews: the participants commented on it as something problematic. But the report they sent me didn't talk about that problematic feature. And the reason it didn't was that no one asked about it in the questionnaire. The study was survey-based, and there were no survey questions saying, hey, we are thinking of doing this; give us some feedback. The absence of that feature in the survey meant the report didn't highlight that problematic issue.

And it went to the decision-makers. And I had to tell them, hey, guys, can you pay attention here? This is a problematic thing, and the reason it is absent from this report is that no one included that question in the survey. So if we talk about inconclusive research, I just want to differentiate that from biased research. Biased research can be quite dangerous when you are making decisions. There are obviously many things that can bias our data, and one is not asking about important stuff.

Erin - 00:21:33: Right. Well, that reminds me of one of our old episodes, Why Surveys [Almost Always] Suck, with Erika Hall. It seems like in that case, just adding a "what else?" question could have helped, to open things up to things you didn't think to ask about in the survey. So we know there are lots of biases that come up and influence research negatively. Are there others that you see come up quite a bit?

Pejman - 00:21:55: Oh, I have a few favorites. Most of them are from games, because games are a little bit different. But one of my favorite stories: as we know, we don't want to emotionally bias our participants. If you start an interview, let's say, and tell them, hey, I'm working at this amazing company, we have so many great games, and this is our new game, and we are so excited, then they are already emotionally biased toward it, especially if you are from a big company that already has loads of successful products or successful titles. In one of my projects, we wanted to do some in-house testing, and I tried to find a neutral space in the company. In game companies, the area where the developers sit usually has loads of posters and loads of cool stuff, and participants who walk through the development area get excited about a game or something like that. So we found a meeting room on the other side of the office, away from the dev sitting area, and it was neutral, like a normal boardroom. And I was like, wait, this is perfect. We're going to run our study here. On the day of the study, I came in two hours before we started, and I saw that the team had decorated the room: they had moved all their awards and all their posters into it. I was like, what's going on, guys? And they said, we wanted the participants to know how great we are. I said, we actually don't want them to know, because we don't want to bias their opinion about your game or your company. I made them remove all the awards and posters from the meeting room. That was another one of those memorable moments: unintentionally, obviously, they thought they needed to promote how good they are to the participants.

Erin - 00:23:22: That's a good one. You know, I hear about that one all the time: don't market your company to the participants you're trying to get unbiased feedback from. That's a good one. Awesome. Maybe we could drill a little more into some of these things you mentioned in terms of setting insights up for good decisions. The first one you talked about was specificity and context. Anything to add there?

Pejman - 00:23:42: There are two things you want to go deeper into there. When I report something, obviously, I want to be specific about what the problem is that I'm reporting. Again, a developer came to me and said, we have a problem because people are not buying anything in our game. A monetization problem: people were playing the game, but they were not spending money. They came with that problem, and then the question is, what's going on? Is the game not motivating enough for people to spend money? Maybe the shop UI is not clear enough. Maybe the language used there is not clear. Maybe people don't notice the different icons you have for the different items they could buy, or maybe they don't see them as exciting. There are multiple possible problems. Being specific when reporting back matters: okay, the reason people don't buy stuff in your game is that the pictures of those purchasable items didn't really communicate the value of each item and what it could do for the player. That could be one case. In another case, they had a problem because the purchase flow only accepted US dollars or something like that, and people in other countries were very uncomfortable spending USD. Being more specific about what exactly caused the problem is super important, and it also helps them act on it. If you say that the reason they didn't purchase is that those images didn't communicate the value, then it's a visual designer's job to come up with new icons, and that could fix the issue. If it's a backend issue with your credit card processing, that's obviously a different thing. That's why being specific is very important: it links to the solution that could solve the problem.

JH - 00:25:17: What about when the research and the insights you gather make you realize that you need to make a difficult decision? So you were going down this path, you're designing this level or whatever, right? And you're hoping that it's pretty good, and you're going to do some research on it, and you find out it's actually pretty flawed. And the right thing to do now is a big refactor, a big redesign, which is going to put the project or timelines at risk, right? I'd imagine that's difficult for people to process and accept in those situations.

Pejman - 00:25:39: Yeah, I had one experience with that many years ago, back in the UK. So developers came to us. They had this really cool idea that, again, this is from 2009, and there might be games now with that vision. But the idea was that it was a music and action game, so you could only attack on the beats of the music. As you play, you can't attack whenever you want. You just wait for a beat, and then you attack there. Or your attack would only cause damage when it's in sync with the music beats. So they had a prototype. They came to us, we did some work with them, and ran some research. And the outcome was that no one got it. They enjoyed fighting, but they didn't even realize that they had to do it on those beats. And obviously, again, that was one input for the developer. But in the end, they made a decision to cancel the project. And then we got a very nice email from the project owner, the CEO of the company, saying that it helped him make that difficult decision to cancel the project, and arguably saved him lots of time and money rather than going toward full development. So it's a rare type of case that they canceled the whole project. But there are also smaller things that you tell them. For example, on a more recent one that I worked on, after I did my work, I talked to the CEO and said, look, this is a deeper issue of your team not performing to the level that's expected, rather than something I can help you with. So go fix your team before doing more research. He already knew the issue. Our research confirmed it. The team who was supposed to execute just didn't have the skill set to do a better job. And that's obviously a hard decision, probably firing some people and rehiring others. But that was the outcome: the technical quality just wasn't at the level that was expected.
Again, our work usually supports those educated guesses, giving them more evidence to act on something. The outcome was probably hard, but that's why they bring on researchers.

Erin - 00:27:34: Well, and you're in a unique position as a consultant coming in too, where they're not necessarily asking, is the problem with our team? They're asking you a different question. You're actually saying the problem is with your team, but that's harder to do when you're in-house, I imagine.

Pejman - 00:27:47: Yeah, I'm in a very unusual situation because, again, I'm coming in as a consultant, and I make good money from my consulting job, but my real income that pays for my mortgage is from my university, my professor job. So sometimes, you know, obviously if you tell people, okay, go fire some people, they may cancel that contract with you: okay, let's go find a nicer consultant. But again, I'm in a situation where I'm not relying on consulting money. I can be very flexible, and I can tell my opinion without fearing too much about losing the client or something like that, because my mortgage is paid from a different kind of-

Erin - 00:28:23: All right, so that gets into, I mean, motivations and biases. You can't get away from them.

Pejman - 00:28:28: Exactly. You have those motivational biases yourself, you know when you go into the project.

Erin - 00:28:32: Right. And I'm sure that cuts both ways, right? You could make an argument if you've got to like work with these people six months from now, that may have some benefits too.

Pejman - 00:28:39: 100%, yes.

Erin - 00:28:41: Okay, so this gets me to, like, we wanted to talk about all the different ways you can do research, but you want to plan from the beginning, right? You're not going to do research that's not set up to be timely in terms of the insights you're going to find, right? So you've done this research, which could be primary or secondary, and you've found these insights that are set up to yield good decisions. In terms of handing them off, seeing them through to action, and following up on whether or not there's been action, how do you see success there? And I imagine what that looks like varies depending on whether you're in-house or not.

Pejman - 00:29:16: Yeah, so that definitely varies from my perspective in terms of my engagement with the company. Sometimes I'm working with them long-term, and I can try to support them as we go forward. Sometimes my engagement is: come do this, let us know what you think, and then see you later. So again, that depends. And maybe I'll tell an interesting story. I did one project from my university job, with permission from the clients. I thought I could do some form of long-term study on a project I worked on five years ago: look at how the product is doing now in the market, and then do some analysis to see if my report from five years ago highlighted the comments or issues that users actually bring up once the product is live. Again, the gaming world is a bit different, because once the product is launched, its life might be a few years. So that was my intention. I got permission from the companies I worked with, looked at the research report that I wrote, and looked at the reviewers' feedback on the product. In games, there is a website called Metacritic. It's a place for both players and professionals to write feedback. So I compared what the community said about this game with what my research report highlighted. I wanted to see if there was any relationship between the stuff that we found and the stuff that players or professionals later commented on. And that was, I think, a really unique study, because the challenge in academic work is that we usually design very short studies: okay, let's do this over the summer and then publish it. So it was unusual that this one took four or five years from beginning to end. But it was super interesting in terms of what I could identify based on the research methods we used versus what players commented on.
For example, I ran a local usability test on a game, and my report was: this game is really good, you're going to be successful. With detail, obviously, but it was a positive report. And then players commented a lot on network issues over the internet. That was something I didn't focus on because it was a local test. So there were disconnects, something I couldn't identify. And there was also loads of stuff that my report identified that players later commented on, either positively or negatively. So that was cool in terms of seeing the life of the project and what users commented on. But in terms of the time to implement those insights, it varied based on my involvement.

Erin - 00:31:30: Cool. So one thing we didn't talk about yet: there are always trade-offs between quality, quantity, and velocity. And, you know, we talked a little bit about when you call it and ship some insights versus doing more research. Research begets more research. But when you look back on the research you're doing, how many decisions got made? Is there a quantity or velocity component that you factor into the success of your research?

Pejman - 00:31:59: So for me, I guess similar to anyone else, there is a lot of reflection, especially coming in in a consulting capacity. How could I do better? How could I support my clients better? How would I change my approach to be more effective in what I do, and then find a spot where I could be useful and provide value, and that balances with my life and the other things I want to do? That's usually something going on in my mind. And part of that is, for example, do I want to stay with a client for a really long time, or do I want to do one project, then find another client and move on, and what works best for the situation I'm in. So I'm trying to figure that out, to be honest. I'm moving in between. I've had both types of clients: ones I worked with for two years, almost like an internal user researcher on their team, and ones where I take a call once or twice a month, answer a question, and then move on. And based on that, obviously, my approach is very different depending on the type of engagement we have. So I guess the usual metrics are quite similar to other work: looking at retention, whether they're coming back. But most importantly, whether I see any changes in the product we are working on, any improvement that I can link back to the research that we did.

Erin - 00:33:12: Yeah. All right. I've got one more really important question for you, which is: why does my husband love the new Zelda game so much?

Pejman - 00:33:21: It's actually quite an interesting question, and I'll give you some hints. Maybe you can explore that more on your...

Erin - 00:33:26: Yeah, on my own time, yeah.

Pejman - 00:33:28: So one thing that we know is that all of us have different motivations in our daily life. That's our personality, basically. You may be motivated to get the best job at work and have the highest salary. You may be motivated to go to work because you just like spending time with people and don't care about the salary. You may go because your boss is very nice, or, I don't know, you may just like the product that you're working on. So we have different motivations, and they impact our behavior in our daily life. Those same motivations translate when we are playing games. If you are the type that wants to be the best, and the game gives you an opportunity to do that, like getting the highest score or, I don't know, killing all the enemies, then you're going to enjoy it. If you are the type that just wants to socialize, and the game gives you an opportunity to talk to other players, or to use what's happening in the game to socialize with your colleagues, then you're going to keep playing that game. So what I would say is, think about your husband's behavior. And I bet there are going to be some similarities between the stuff that motivates him and what the new Zelda game offers. That's why he gets enjoyment from this game.

Erin - 00:34:40: So what motivates us in games is the same as what motivates us in life. We don't become some alter-ego version of ourselves.

Pejman - 00:34:48: Basically, what motivates us in life impacts our daily behavior. And those things won't change whether you're at work or at home. If you are playing a game, usually your motivation stays the same. Your behavior might change because of different contexts. We were running some studies. It was a husband-and-wife situation, actually. We looked at participants' motivations, and we were linking them to their behavior in the game. So again, among the motivations that I described, the one who wants to win all the time we call the killer motivation. They want to be the best. The husband identified through our questionnaire that he has a killer motivation personality. The wife was more the socializing type. So they were playing, and we were recording their physiological measures as well as their verbal comments. And from a killer, we expected loads of what we call trash talk. They would be saying, hey, I'm winning, I'm better than you. But the husband didn't show any of those behaviors. He was completely quiet as he was playing the game. It was a competitive game between the two, but his body was reacting like crazy. We were looking at what we call GSR, which shows the level of arousal, linked to anxiety, enjoyment, or frustration. So we could see loads of changes in his body, but he was completely quiet, with no visible behavior. In the post-session interview, I asked him, look at your body's reaction, and look at the behavior you showed, completely quiet. Why is that? And he was like, obviously, I was so happy that I was winning, but I didn't want to upset my wife. So he was completely quiet, but his body was still reacting to all those points that got him ahead in the game until he eventually won. I think that's very interesting, the difference between what we show and what we really feel. And that's really cool.

Erin - 00:36:26: Yeah, a lot of stuff in there. Cool. This has been really interesting and educational. We got to talk about games, but also, more generally, how to make your research impactful. So lots of good stuff in here for everybody.

Pejman - 00:36:38: Awesome. Thanks again for having me. I also really enjoyed this discussion.

Erin - 00:36:42: Yeah, thanks for joining us. Hey there, it's me, Erin.

JH - 00:36:45: And me, JH.

Erin - 00:36:46: We are the hosts of Awkward Silences, and today we would love to hear from you, our listeners.

JH - 00:36:51: So we're running a quick survey to find out what you like about the show, which episodes you like best, which subjects you'd like to hear more about, which stuff you're sick of, and more just about you, the fans that have kept us on the air for the past four years.

Erin - 00:37:02: Filling out the survey is just going to take you a couple of minutes. And despite what we say about surveys almost always sucking, this one's going to be fantastic. And thanks so much for doing that.

JH - 00:37:15: Thanks for listening.

Erin - 00:37:17: Thanks for listening to Awkward Silences, brought to you by User Interviews.

JH - 00:37:22: Theme music by Fragile Gang

Erin May
SVP, Marketing

Left brained, right brained. Customer and user advocate. Writer and editor. Lifelong learner. Strong opinions, weakly held.
