Four steps to synthesis
On UXmatters, Lindsay Ellerby identifies four basic steps for analysis and synthesis. No matter which methods or strategies you use, these steps seem to always show up:
“1. Collect and organize the data
2. Mine the data
3. Sort and cluster the data
4. Identify insights.”
Starting analysis with data collection
Synthesis waits. Try not to jump ahead to conclusions and solutions while you’re in the throes of research (though if you go there, it’s okay, we all do! Just tuck that thought away for the moment).
Analysis, on the other hand, can start right away. After every user interaction, debrief with your fellow researchers about what stood out to each of you. Start scribbling on post-it notes, and maybe even begin an affinity map. Get a war room started and fill it with raw data as soon as research begins.
Another thing you can do as you go is clean up your notes. This may seem like a simple administrative task, but it can be really powerful.
You may have a gut instinct about what was really important in an interview, but you also come to the room with plenty of biases. Taking the time to review your notes after each interview gives you space to check yourself—to see what actually came up repeatedly in the conversation. Here’s some helpful advice for cleaning up your notes after user interviews.
Then, go back and review everything
After you’ve completed the research, it’s often worth returning to the beginning of your work (though not every research question warrants this effort, of course).
This step entails going back and reviewing everything with fresh eyes. This may be a good time to start an affinity map if you haven’t yet; to add to yours and move post-it notes around; or even take a picture, tear it down and start a new one to compare it with. IDEO’s Download Learnings activity is another good method you might use here.
Steve Portigal explains why this practice matters much more eloquently than I ever could in this interview:
“The next step is to go back to the data (videos, transcripts, artefacts, whatever you have) and look at it fresh. You’ll always see something different happened than what you think, and that’s where the deeper learning comes from.
It’s a big investment of time, and maybe not every research question merits it. But if you don’t go back to the data (and a lot of teams won’t do it, citing time pressure), you are leaving a lot of good stuff on the cutting room floor.”
Not only does reviewing your data a second time help you get to deeper learning; it also helps mitigate the inevitable bias we bring to our synthesis.
Jasmine Friedl has written a fantastic primer on managing bias when looking at evidence; it’s aimed at designers, but the advice applies more broadly. Friedl suggests we always “know the origin of our understanding” as one way to mitigate bias. Taking a second look at your data is a step toward that deep knowledge of your learnings.
Find and prioritize themes
You’ve reviewed your data a few times by now: right after the research happened, in your head throughout the research process, and again during the mining step. You’ve likely already started putting concepts into groups and identifying themes.
In this step of analysis, you want to take a critical look at your themes. Revisit your notes and ask whether a theme you have identified is truly represented in multiple spots, or just inflated by your bias.
One method used here is card sorting: grab all your post-its and put the data into groups. You may also just re-sort your affinity map, or use a spreadsheet to link themes directly to your notes.
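If your notes live in a spreadsheet rather than on a wall, the same card-sorting move can be sketched in a few lines of code. Here is a minimal illustration in Python; the theme tags and note text are invented for the example, and in practice the tags would come from your own coding of the interview notes:

```python
from collections import defaultdict

# Hypothetical tagged notes: (theme, verbatim note) pairs from interviews.
notes = [
    ("onboarding", "P1 couldn't find the signup button"),
    ("trust", "P2 hesitated to enter a credit card"),
    ("onboarding", "P3 abandoned the tutorial halfway"),
    ("trust", "P4 asked who could see their data"),
]

# Group notes under their theme, like clustering post-its on a wall.
clusters = defaultdict(list)
for theme, note in notes:
    clusters[theme].append(note)

for theme, grouped in clusters.items():
    print(f"{theme}: {len(grouped)} notes")
```

The payoff of keeping the raw note attached to each theme is that every cluster stays traceable back to its evidence, which is exactly what linking themes to your notes in a spreadsheet buys you.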
Distill themes into insights
Next, distill your findings into a manageable number of insight statements. Insights we uncover should come from multiple sources in our research. Depending on how much research you did, the number of insights you uncover may vary.
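One way to keep yourself honest about the multiple-sources rule is to count how many distinct participants support each candidate theme before promoting it to an insight. A rough sketch, where the observations, participant IDs, and the threshold are all assumptions for illustration:

```python
# Each observation pairs a participant with a theme their comments support.
observations = [
    ("P1", "unclear pricing"),
    ("P2", "unclear pricing"),
    ("P3", "unclear pricing"),
    ("P2", "slow search"),
    ("P4", "missing export"),
]

# Count distinct participants per theme (a set ignores repeat mentions).
support = {}
for participant, theme in observations:
    support.setdefault(theme, set()).add(participant)

MIN_SOURCES = 2  # assumed threshold: an insight needs at least two sources
candidates = [t for t, people in support.items() if len(people) >= MIN_SOURCES]
print(candidates)
```

Here only “unclear pricing” clears the bar; a theme mentioned by a single participant may still matter, but it deserves extra scrutiny before it becomes an insight statement.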
What’s an insight? I defer to my colleague Kiley to define that one for you. He says that an insight contains both an ideal state, and a tension that’s getting in the way of it.
Here is another place where bias almost always comes up. Jasmine Friedl offers a great suggestion: “argue the other side.” Build a case, from your research, against your key insights, to see if they still stand up.
Tell stories with your insights
(This is a whole other article I’ll eventually write, but I wanted to touch on it in context of synthesis.)
When it comes to sharing research insights, it’s key to contextualize them for the audience you are sharing them with and the customer problems they are thinking about. Erika Hall writes: “It doesn’t matter how much research you do if the people who have acquired the most knowledge write a report and move on.”
Tell a story about what you learned, using thoughtful tools, like maps, diagrams, or even just short Slack messages — whatever approach makes sense to get the message across to the people who need to hear it.
Rikke Dam and Teo Siang break down lots of great approaches to sense-making and sharing user research, like personas, scenarios, and “how might we” questions.