
On Running My First Card Sort and What I Learned

Step-by-step how I ran my first card sort and all the lessons and learnings along the way.

Erin May
January 17, 2018

One of my favorite things about “work” (the stuff someone pays you to do) is the reward of constantly pushing and learning: embracing the discomfort that comes with the unknown, learning to do things you didn’t know how to do before, in ways that have tangible impact (unlike much academic learning) but that, at least in the case of startups, stay low risk as long as you keep moving and learning and don’t mind failing a bit.

Anyway, I’ve been learning a lot in my first 3 months at User Interviews. As employee 11 (8 if you don’t count our co-founders), I’m learning to get things done in a whole new way from my previous roles, where I had a team of 3, 5, or more. And I’m doing that while working pretty normal, somewhat flexible hours out of my home and occasional coffee shops and local freelancer haunts. I’m learning about working remotely. I’m learning to build and orchestrate a marketing automation program directly, plus Mixpanel, Webflow, and other tools with decently steep learning curves. But mostly, I’m learning a ton about UX and, more specifically, UX research.

There are a lot of ways to learn, and one of my favored methods is reading. I would read all day if I could. But another pretty great method, one that combines well with reading, is learning by doing, and, basically by definition, screwing it up a bit in the process. Sorry and thank you to all of my participants!

The User Interviews Team, January, 2018

Why Card Sorting?

So as I set out to launch a robust reference resource on all things user experience research, I knew a basic structure, a taxonomy, and eventually a full list of topics would be an early priority. I have launched reference sites and experiences before. I’ve developed and executed content strategies. I’ve built taxonomies, using both internal data (stakeholder interviews and surveys) and public data (keyword search data, competitive analysis). What I had not done was launch a site taxonomy with the benefit of an initial open card sort. Now was my chance. We are all about the meta here.

Given that we operate in the domain of user experience research, the upside (learning more about our platform, learning how to run a card sort, and hopefully learning how to organize the content in our forthcoming user research resource in a way that makes sense for our intended audience) seemed to outweigh the downside of maybe not doing it perfectly. The choice was obvious.

Onward to card sorting.

How I Ran the Card Sort

As with so many things, the first step was to Google. I’ve been part of card sorts before, helping teammates update information architecture and working with brand teams (experts on this sort of thing) to launch new brands and websites. But this was my first time running one on my own. Like many researchers (and I am not a researcher, I just play the part sometimes, perhaps like some of you), I was short on time and resources. Still, I knew the value of validating my ideas with my intended audience.

Internal survey to gain insights on our customers

I read about open and closed card sorts and determined open would be better for my needs. I had a long, but not complete or final, list of topics I knew we wanted to cover. I built the list from an internal survey focused on our customers’ questions and needs, and from browsing respected websites in our domain to see what was popular and how they organized things. I did not group the topics or put them into any sort of hierarchy. I had ideas, but I really wanted to see what trends emerged from my group of guinea pig participants.

Finding Participants

I’m excited that we are launching a Bring Your Own Audience product soon. (Write JH@userinterviews.com if you’re interested in providing feedback!) This will allow me to communicate with prospective study participants through our platform. In the meantime, I used our marketing automation tool, Autopilot, to reach out to our audience of researchers who had launched a project in the last year. I sent them a quick text-based email:

Card sort participation invite email

I was thrilled with the response. It’s wonderful to have such an engaged audience eager to participate in user research on the “other side of the table.” 59 people started my card sort and 53 completed it; the average time to complete was 21:29. I can see all of that in the tool I used, Proven By Users.

High level participation stats in my card sort

Card Sort Tool of Choice: Proven By Users

Speaking of Proven By Users, I came across this tool while searching for things like “remote card sorting tool.” After quickly comparing a few options, I determined Proven By Users best met my criteria for this card sort: ease of use, quality and variety of analysis tools, and cost (free, thanks to their beta program!).

I definitely recommend it. Check out this dendrogram action. I confess, first timer on the dendrogram. Add that to the learning list. Score.

Dendrograms are crazy! Partial view of mine.
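For the curious: a dendrogram like this typically comes out of hierarchical clustering on a card-by-card similarity matrix, where similarity is the fraction of participants who sorted two cards into the same group. Here’s a minimal sketch in Python; the cards and sorts below are made-up stand-ins, and a tool like Proven By Users computes all of this for you.

```python
# Sketch: from open card sort results to a similarity matrix and dendrogram.
# Sort data here is hypothetical, just to show the mechanics.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Each participant's sort: a list of groups, each group a list of cards.
sorts = [
    [["surveys", "methods"], ["usability", "unmoderated"]],
    [["surveys", "methods", "usability"], ["unmoderated"]],
    [["surveys", "methods"], ["usability", "unmoderated"]],
]
cards = sorted({c for sort in sorts for group in sort for c in group})
idx = {c: i for i, c in enumerate(cards)}

# Similarity = fraction of participants who placed each pair together.
sim = np.zeros((len(cards), len(cards)))
for sort in sorts:
    for group in sort:
        for a, b in combinations(group, 2):
            sim[idx[a], idx[b]] += 1
            sim[idx[b], idx[a]] += 1
sim /= len(sorts)

# Cluster on distance (1 - similarity); dendrogram() draws the tree,
# or with no_plot=True just computes the leaf ordering.
dist = 1 - sim
np.fill_diagonal(dist, 0)
Z = linkage(squareform(dist), method="average")
tree = dendrogram(Z, labels=cards, no_plot=True)
```

Cards that most participants group together end up merging low in the tree, which is exactly the signal I was reading off the reports below.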

What I Learned from the Card Sort

After spending some time with the various reports—similarity matrix, maximum agreement, groups, cards, groups + cards—the loudest signals I arrived at were:

  • Surveys, types of research, methods: these all fit together.
  • Usability, testing, remote, unmoderated: these all fit together.
  • There’s a group around customer journeys, personas, and empathy maps.
  • IA, Design Thinking, etc. are too big or amorphous to fit well with some of the other topics. These will be good to include somewhere once the larger structure is complete.
  • There’s a group around recruiting and the logistics of setting up studies.
  • There’s a group around politics, internal communications, and “soft skills.”

The result of these and some internal kicking around is a structure that looks like this:

Modules > Chapters > Topics to cover in each chapter

The modules are:

  • Getting Started
  • Discovery Methods
  • Validation Methods
  • Post-Launch Methods
  • Recruiting
  • Research Deliverables
  • Tools & Logistics
  • Stakeholders, Politics & Soft Skills

We’ll end up with 30-40 lessons in our initial plan, and we’ll likely build and iterate from there. We’re looking forward to a February soft launch of the first few modules, and you’ll be able to sign up to get the full experience, or individual modules of interest, as an email series. We’re really excited about it and determined to make it a wonderful source of essential UX research content. Make sure to sign up for our newsletter (top or bottom nav form) to be notified when it launches!

Qualitative Feedback

Proven By Users lets you include a survey. I asked a question soliciting general feedback on the card sort itself, because learning. Some highlights:

I had no idea how to categorize the cards, so any sort of prompt would have been helpful (unless part of it was to see how we would sort the cards).

Yes and no! I certainly could have provided more context, even though the point was to see how folks would organize the cards on their own. I sometimes worry about providing too much context up front and losing folks, but I’ll definitely provide more in the future.

I wanted to group analysis with something else. But it's really its own thing.

Agreed. I would love to have done group, in-person sessions, but I was able to get a lot of feedback quickly this way. Next time!

Terrible! There was no (obvious) way to get context for sort?!?!?

More context. Got it!

What do you learn by forcing me to sort all cards? I should have been able to leave some cards in the list, especially the cards that have no value or meaning to me.
Setting options in Proven By Users

I probably could have better explained that not every card had to be sorted! I think I was relying on the app to handle that more clearly than it did.

Interesting worthwhile venture.

Phew!

Follow-up Email

Thanks guys!

Despite some useful but critical feedback on the card sort, I was happy to see an 80% open rate on my thank-you note, and 0 people opted out of ongoing communication. You’re a great bunch!

Next Steps

After I publish this blog post, I’ll share it out with the new segment of awesome participants who completed the card sort, to keep them (you) informed on our progress.

We’ve already begun designing the experience and crafting the initial content, and I’ll share the progress of those efforts soon.

Card Sort Evolving Segment and Communication

Final Tangent: On Vulnerability

Tactical and strategic skills are great, but a “soft skill” I’m also learning to be more comfortable with is vulnerability. User Interviews is building a strong culture that values it, through consistent 360 feedback, monthly personal growth team sessions, daily water coolers, and a general vibe of humanity.

Madeline Gins and Arakawa say that their house in East Hampton, N.Y., opposes death. Credit Eric Striffler for The New York Times

There’s this great New York Times article I always remember about how discomfort actually helps you live longer. It’s kind of kooky, but I love the idea. Fundamentally, learning is about embracing the discomfort of not knowing, working through that, and emerging bigger.

So in my new job and in this new year especially, I hope to keep embracing that discomfort and to continue learning and growing. Thanks for joining me on the journey.

Want to contribute to User Interviews content? Here’s how.

P.S. Can we email you?

Get "Fresh Views," our weekly newsletter, delivered to your inbox. We feature interviews and point-of-views from UXers, product managers, and research nerds of all stripes.
