That’s the question: Whether ’tis nobler in the mind to make mistakes.
Most designers consider user involvement a “must have” element in the process of creating or improving a product. They bring users in to hear about their desires, their problems, their opinions on improving the product, and the functionality they need.
But, actually, this is quite dangerous, because the goal of user involvement is not to involve users, but to learn about and understand them. That requires deep preparation and careful analysis of qualitative data. And here we see the main problem: designers are not geared towards this, but rather towards making quick decisions and showing progress.
So what goes wrong?
User statements can easily confirm existing biases and ideas. Not through any malicious intent, but because organisations reward quick decisions. Instead of trying to understand the whole picture of the user’s life and circumstances, designers tend to grab the first user statements that confirm something they already believe to be true. Then they make a decision and move forward. This is a problem if we want user involvement to add value to research results.
Designers shouldn’t ask users what they want. Instead, they should have a vision of how the product could be done differently and, as a result, better. A vision of how that product can change somebody’s difficulties and living conditions. Users don’t really see the value or the need until they see the new product and start to use it. When they try it, they will love it.
Listening to users is a tricky thing. Users often don’t know what they want, and even if they did, the communication is likely to get garbled somewhere between them and you.
Feedback can be gathered in a structured or an unstructured way. In the first case, the designer plans the process of receiving feedback and prepares a script to elicit the right information. But here there is a big risk that the script will pre-program the user to give specific feedback that will not be objective. With unstructured feedback, it will be difficult later to process it properly and extract any structure from it.
Implementing more of the users’ desires can complicate and bloat your product so much that even the users who prompted you to add those features will not use the final product.
Usually, designers ask about and record the errors that happen while the product is being used, but nothing about what actually works. They tend to assume everything works if users don’t complain about the product.
Designers can ask the wrong questions, or too many questions, confusing a potential user so that they cannot be objective, rational, and honest.
Often designers listen to the loudest person in the room. This one is fairly obvious, but to some extent it happens in many teams: people discuss design drafts and prototypes, and designers simply overlook someone because someone else gets more attention.
Where misunderstandings and usability problems occur, the designer simply can’t rely on users to provide accurate feedback. No one is actively thinking about those details when the designer asks about them. Or users want to present the best possible face to the designer (who frequently intimidates them) when they’re asked.
Users say “add a button so I can do this.” But if the designer had dug a little deeper, maybe he would have realized that the action users want the button to perform should happen automatically, without the additional UI clutter.
So what should you do instead?
To discover which designs work best, watch users as they attempt to perform tasks with the user interface. This method is so simple that many people overlook it, assuming that there must be something more to usability testing.
It boils down to the basic rules of usability:
- Watch what people actually do.
- Do not believe what people say they do.
- Definitely don’t believe what people predict they may do in the future.
Any data can be relevant (user feedback and statistics included), but if you don’t collect the correct data (or enough of it), any action you take will be based on guesswork and personal experience. — Tweet it
So, to design the best user experience, the designer should mostly pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior. Users do not know what they want or how they will behave after meeting a new product. Usually, they predict behavior they would never exhibit in real life.
Anyway, don’t forget that getting feedback based on real interaction with a product is paramount, but that doesn’t mean usability specialists shouldn’t listen to people at all. Just listen to them only after they are done with their tasks.
The best way to get user feedback is not just to log users, or listen to them, or even to watch them use the product, but to watch them use it after asking them to speak their thoughts aloud as they do. You will learn ten times more about what is wrong with your product this way (and it will stagger you what’s going on in normal users’ heads — that sure ain’t your System Model that they’re working to). — Tweet it
That said — ethnographic analysis of users (both in the lab and in situ) should always be a major part of product design and revision. “Watching what they do” is a critical piece of the usability puzzle.
If you keep all of the above in mind, you won’t have to listen intently to your users, take notes, nod knowingly, promise to do everything the user requests, then go build the product the way you know it needs to be done, and later try to convince the user that’s what they asked for…
Do you know the story from World War II where the Allies wanted to address the loss rate of planes on bombing missions, which was in excess of 50%?
They studied the planes when they returned, to determine where the planes were being damaged most, and worked on reinforcing those areas.
However, the loss rate did not improve. The problem was that their data came only from the planes that did make it back; their damage was in non-critical areas to begin with. The data they needed was from the planes that were lost.
The point about data collection is good, but making sure you have the right, relevant data is key. — Tweet it!
Consider, if possible, implementing some features in your product to support this kind of usability observation:
1. An internal audit trail. Every change the user makes to any data in the database is recorded in an audit table that stores the following: the date and time, the original table that held the data, the column name of the field that was changed, the original value, the new value, the page it was changed from, and the user who made the change.
What this can do for you: it can reveal which features users are actually using, and which users are actually using your digital product at all.
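A minimal sketch of such an audit trail, using Python and SQLite. The table and column names (`audit_trail`, `source_table`, `page`, and so on) are assumptions for illustration and follow the description above, not any particular product:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema mirroring the columns described in the article.
AUDIT_DDL = """
CREATE TABLE IF NOT EXISTS audit_trail (
    changed_at   TEXT NOT NULL,   -- date and time of the change
    source_table TEXT NOT NULL,   -- table that held the data
    column_name  TEXT NOT NULL,   -- field that was changed
    old_value    TEXT,
    new_value    TEXT,
    page         TEXT,            -- page the change was made from
    user_id      TEXT NOT NULL    -- who made the change
)
"""

def record_change(conn, source_table, column_name,
                  old_value, new_value, page, user_id):
    """Append one row to the audit trail."""
    conn.execute(
        "INSERT INTO audit_trail VALUES (?, ?, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), source_table,
         column_name, old_value, new_value, page, user_id),
    )

def feature_usage(conn):
    """Which pages see real changes, and from how many distinct users?"""
    return conn.execute(
        "SELECT page, COUNT(*) AS changes, COUNT(DISTINCT user_id) AS users "
        "FROM audit_trail GROUP BY page ORDER BY changes DESC"
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(AUDIT_DDL)
record_change(conn, "profiles", "email", "a@x.com", "b@x.com", "settings", "u1")
record_change(conn, "profiles", "name", "Ann", "Anna", "settings", "u2")
record_change(conn, "orders", "status", "open", "paid", "checkout", "u1")
print(feature_usage(conn))  # [('settings', 2, 2), ('checkout', 1, 1)]
```

Grouping by `page` (or by `source_table`) is what turns the raw trail into the answer the article promises: which features are actually used, and by how many distinct users.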
2. An internal event log. Any time an exception is thrown, it’s recorded in this log. Any time a user logs in, it’s recorded in this log. Any time a user uses the Log Off button, it’s recorded in this log.
This can tell you which users are actually logging into the system, and how many log off correctly as opposed to just letting their sessions expire. Because each entry includes the ID of the user who was logged in and the date and time it was recorded, designers and programmers can find the related entries in the audit trail.
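The event log can be sketched the same way. This toy in-memory version is an illustration under assumptions (the event names `login`, `logoff`, and `exception` are invented here); it shows how such a log answers the explicit-logoff question above:

```python
from datetime import datetime, timezone

# In a real product this would be a database table next to the audit trail.
events = []

def log_event(user_id, event, detail=""):
    """Record one event with the user ID and a timestamp."""
    events.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "event": event,      # e.g. "login", "logoff", "exception"
        "detail": detail,
    })

def logoff_ratio():
    """Share of logins that ended with an explicit Log Off click,
    rather than a silently expired session."""
    logins = sum(1 for e in events if e["event"] == "login")
    logoffs = sum(1 for e in events if e["event"] == "logoff")
    return logoffs / logins if logins else 0.0

log_event("u1", "login")
log_event("u2", "login")
log_event("u1", "logoff")
log_event("u2", "exception", "TimeoutError on /reports")
print(f"{logoff_ratio():.0%}")  # prints "50%"
```

Because every entry carries both `user_id` and a timestamp, an exception here can be matched against the audit trail to see what the user was changing when it happened.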
So, sometimes there’s no need to directly watch users (although it never hurts) when you have detailed logs showing what they actually did. — Tweet it!
The main caveat is that logs only show what people have done. Watching users directly gives you an idea of how quickly they made decisions (is your UI bad or confusing?), and of what they were trying to do at the time, compared to what they actually did.
For example, logs that show the hit count for a website’s sitemap don’t tell you whether the user went there by mistake, or chose that route because the navigation bar wasn’t understood.
Don’t trust your users blindly! — Tweet it!
If you have a different opinion — write a comment. If you agree: