UX research, as a discipline, is growing rapidly. When I started my UX career, the few UX researchers I knew of were all based in Silicon Valley and almost always came from an HCI background. Things have changed for the better, with UX research evolving into a separate discipline and more people becoming UX researchers. However, as people from a variety of backgrounds enter research, the rigor that HCI students brought to the table because of their intensive research training can sometimes be missing.
Why do we do research? Simply put, we research to save money. Researching before making decisions makes it more likely that you'll make sound ones (and spares you the cost of the wrong ones). As a quote often attributed to Albert Einstein puts it, on the importance of nailing the correct problem (through research):
If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.
The thinking he refers to is research. Without delving into how to conduct sound research, let’s head straight to what the article is about: avoiding common mistakes while framing research questions.
Bias/Leading: Perhaps the most common mistake researchers make while writing a question is to unconsciously bake some hidden bias into it. We know from research that humans, researchers included, have a strong confirmation bias, so it is almost natural that we'll word questions in a way that leads to a specific response. Leading questions include or imply the desired answer in the phrasing of the question itself. The key for UX researchers is to recognize this human shortcoming and consciously weed out any biases that might seep into their questions.
For example, “You like using that notification feature, don’t you?” clearly leads the respondent into saying yes. Another one I have witnessed was “Why did you have difficulty with the navigation?”. Again, the user might not have had (or felt) a problem with the navigation, but once led in this direction, they will feel almost compelled to describe the perceived difficulty.
Both of these questions can be reworded to remove the bias. The former could become “Have you used the notification feature?”, potentially followed by “Describe your experience while using the feature”. The latter could be asked as “What was easy or difficult about getting to the content you wanted?”.
Embedded assumption: This pitfall is quite similar to the first in that it guides the user into answering in a certain way. For example, while conducting research about a website: “What are the problems you face while using our website?”. The user might not have any problems, but the question assumes they do (and worse, the user might think something is wrong with them if they haven’t faced any, so they’ll go looking for problems now!). Another example could be “How slow was the service while ordering your food?” The assumption is that the service was slow, and more importantly, that the user was bothered by the potential slowness.
The first question can easily be reworded as “Describe your experience while using our website” (or narrowed to a task/section of the website if that is too broad). The second could be changed to read “How was your experience while ordering and waiting for the food?” If the user, in response, talks about the speed (or lack thereof), the researcher can probe further into that.
With both of these pitfalls, it’s important to remember that:
In user research, the facilitator’s choice of words can affect the participants’ feedback or behavior.
Double-barreled questions: These are one of my favorites — favorites to weed out, that is — because I’ve seen them make it into questions written by experienced researchers. While any researcher worth their salt tends to focus on removing biases, double-barreled questions are hard to spot, mostly because most people don’t know what’s wrong with them. A double-barreled question touches upon multiple issues but allows only one answer. As an example: “Do you think the website should have more content and products?” A typical answer would be yes, but yes to what? To more content and more products, more content and the same products, or more products and the same content?
Double-barreled questions can skew research data without the researcher realizing it. If 80% of respondents answered yes, the researcher might summarize the results by claiming that “80% of respondents wanted more content”, which is simply not true. There are two ways to resolve this problem. The first is more mathematical (and impractical): provide every possible permutation of answers. This would technically be correct, but would just confuse the respondents. The better approach is to break the question into two: ask about content and products separately.
Closed-ended questions: Closed-ended questions can be answered with “Yes” or “No,” or they have a limited set of possible answers (such as A, B, C, or All of the Above). Open-ended questions allow someone to give a free-form answer.
Asking closed-ended questions is not a problem in itself — there are plenty of contexts where they are valuable and sensible — but asking them in the wrong context can ruin your research. Closed-ended questions can help with statistical data, but will rob you of the insights that UX researchers are usually after.
A relatable example would be “Are you satisfied with the shopping experience?”. Here, the respondent is almost compelled to say yes or no. And while it might be helpful to know whether they were or weren’t satisfied, what’s most useful for researchers is to know why. That objective could be achieved by asking “How satisfied or dissatisfied are you with the shopping experience?” instead.
The most important benefit of open-ended questions is that they allow you to find more than you anticipate: people may share motivations that you didn’t expect and mention behaviors and concerns that you knew nothing about.
Confusing questions: Self-explanatory in what it means — a confusing question is one where the respondent cannot accurately understand what it is you want to know. Confusion can be caused by incorrect grammar, sophisticated vocabulary, long questions (often double-barreled), industry jargon, and so on. Remember, the research interview is not an IQ test. If the respondent cannot understand the question, it is more a reflection of the researcher’s IQ than of the participant’s!
Examples of confusing questions are countless, but one could be “How does Facebook’s Ad Manager compare to the way ad campaigns are created on Google AdWords?”. Are you sure your user understands the terminology used? Instead, ask questions that lead step-by-step to what you want to learn, so you can figure out what your interviewee uses or understands. For example:
- When was the last time you advertised online?
- Show me what you used to run your advertising campaign.
- Did you ever try a different tool for online advertising?
- (If so) How would you compare these tools?