Why User Research isn’t a magical unicorn.
As user researchers, we spend a lot of time and effort convincing people of the value we bring. Because of this, we are sometimes hesitant to consider that our results may not hold the best and truest insights. This hesitancy doesn’t necessarily stem from an egotistical wish to always be right; it is more fear-based. We don’t want to lose the buy-in we fought so hard for. If our results aren’t pristine, actionable, innovative, “right,” then we could lose what we worked so hard for: the right to do research.
As I tell many clients and students, user research isn’t the answer. It isn’t a magical unicorn that farts perfect, rainbow-colored insights. User research gives us an understanding of our users so we can make informed decisions. I didn’t say the right decisions, but informed decisions. It gives us a direction (or several) based on our knowledge of our users. However, no matter how many times we say this, user research is often looked to as the one thing to solve them all (any LOTR fans out there?). When research fails to live up to this impossible standard, stakeholders can get frustrated and shut it down.
We need to ease up on our expectations of research and use it as guidance rather than the one and only true path. With lowered expectations, user researchers can analyze their results without fear and find where the research may be less valid.
The Concept of Validity
Validity is how well a test or research measures what it was intended to measure. It is also the extent to which the results can be generalized to a wider population.
Validity has always been controversial in qualitative research, and many user researchers have abandoned this concept completely, as it sits too much in the world of quantitative data and statistical analysis. As qualitative user researchers, we simply don’t have the large numbers to back up a concept like validity in the same way quantitative data does.
However, validity is important for understanding how you might be wrong about a result. This concept is called a validity threat: another possible explanation or interpretation of your data, something Huck and Sandler called a “rival hypothesis.” What validity threats boil down to are alternative ways of understanding your data.
How do these threats arise? For many reasons (you have to love confounding variables): participants not presenting their actual views (social desirability bias), missing data points that support or disprove your hypothesis, asking leading questions, or simply holding certain biases toward the people or ideas you are researching.
In short, a lot can go wrong, which is why, I believe, it is important to rethink the meaning of validity in terms of qualitative research, instead of ignoring it completely. What do we need to look into in order to redefine validity in terms of qualitative user research?
Two Most Common Threats: Researcher Bias & Participant Reactivity
As humans, we are inherently biased in any given situation. Even if you do everything in your power to remove these biases, they will continue to linger on an unconscious level. Bias can impact user research when conclusions are based on data that fit the researcher’s existing theories, goals, or perceptions, and on the selection of insights that “stand out” to a researcher. These insights or quotes may catch a researcher’s eye, but what matters is understanding why those pieces of data seem particularly important. To know this, you have to be aware of your biases and decide how you will deal with them. Will you write them down before you analyze your data, or have a few researchers look through the results to see how different people approach the insights?
Similar to bias, the user researcher will always have an influence on the participant, causing a degree of reactivity from the user. Eliminating the impact a researcher has on a participant is obviously impossible, so the goal is not to eliminate it, but to understand what the potential impact may have been and account for it when analyzing results. There are measures a researcher can take to decrease a participant’s level of reactivity, such as not asking leading questions, dressing similarly to the interviewee, sitting on the same level, and using open body language. As with bias, it is important to understand how you could be influencing the participant and how that may affect the results.
Write a Memo
In order to help ourselves better understand what threats we face, we can do a simple exercise that involves writing down the answers to the following questions:
- What do you think are the most serious threats to this research objective, and why are they the most serious? What are the main ways you might interpret the data incorrectly or be confused by the insights?
- What do you think other people (colleagues, stakeholders) would consider the most serious threats? Why do you feel this way?
- How do you think you can best assess and mitigate these different threats, for yourself and others? How might others be able to help you assess and mitigate the threats?
What Else Can Help?
There are several other ways qualitative researchers can explore and impact validity:
- Long-term studies: By continuously interviewing and observing participants, you are able to learn more about them, beyond what they tell you in a 60-minute window. You can understand users much more deeply, and start to grasp which insights or quotes may be shallow or random. This goes against many companies’ desire to do research quickly, but, if you have the buy-in and time, the more research you do, the more valid and generalizable your conclusions will be.
- Rich data: It is wonderful when someone can take notes for you during an interview, or when you can take notes yourself. However, people tend to drift in and out of focus during interviews, so it is difficult to capture everything a participant says. Note-taking can also be an early form of bias, since the note-taker writes down only what they deem important. A better alternative is to record the interview, get a transcript, and have several people take notes to compare later.
- Respondent validation: Humans do a great job of assuming…as in we do it a lot, but we aren’t actually very good at it. If a participant explained something to you, and you were a little lost or confused, don’t just assume you will figure it out later. Always ask participants to clarify, when it makes sense to. This is the easiest way of ruling out the possibility of misunderstanding what a participant meant. This can also help you understand your biases and misconceptions
- Don’t ignore data: I know we all want to get it right, especially when we are getting pressure from above and want to deliver the most favorable results. With this kind of tunnel vision, we can easily miss or ignore data that goes against the more flattering hypothesis. Even if the data tells you your company is doing everything wrong, it’s better to report that than to let the company waste time heading in the wrong direction. Also, make sure you have a few hypotheses lined up, not just the ones everyone wants you to validate. Write these down, as you did your biases, so that you look at the data from several different standpoints.
- Comparing data: Talk to your users at different times of day, in different settings and in different ways. This helps rule out confounding variables. For example, if you are a health company and speak to all of your users first thing in the morning, you may hear a very different story from them than those same users later at night, when they are tired and would much rather reach for potato chips than your juice cleanse. Ensure you aren’t replicating the exact settings for all your users, and you will get data you can more confidently generalize.
Even with these approaches, there is no way to know whether your qualitative results are perfectly valid (in fact, they never will be, and the same is true of quantitative data). But this is a step towards a more open-minded approach to validity in qualitative research. User research isn’t supposed to deliver certainty; nothing in life does. We need to give researchers a break and let them approach their data accepting that it may not hold the right answer. That way, we can work together towards more informed and less forced solutions for our users.