Today I will tell you about the experience of conducting user testing for a fantasy sports site.
Fantasy sport is a game in which participants compete with virtual teams of athletes whose real-world counterparts take part in actual competitions; the virtual teams gain fantasy points based on the actual statistics of those athletes' performances.
Among the most important interfaces on such a site are the fantasy team and tournament creation pages. Users interact with them frequently, and how much money the company earns from its users depends on them, so it's important to make them user-friendly, understandable, and credible.
We chose between the “Create tournament” and “Create team” pages. I had recently made a prototype for “Create tournament” that I wanted to test with real users, but we decided that feedback on a page already in production would be more valuable. So we started with testing the “Create team” page.
This page hadn’t changed in a long time and needed an update. We realized that we were working on improvements (I had previously conducted a heuristic evaluation of this page) but weren’t tracking how effective those improvements were. Quantitative data alone wasn’t enough, so we decided to gather qualitative data: a deeper understanding of how users interact with this page.
Hence the decision to conduct user testing: to find out what users think, confirm our hypotheses, and identify problems.
My role was to organize the user testing process: getting approval from stakeholders, creating tasks for respondents, setting up the equipment, and taking part in the sessions as facilitator and note-taker.
We wanted to get to know our users, so we chose moderated in-house testing. We asked an HR manager to recruit 15 potential respondents.
Based on our target audience research, we were looking for males aged 25–34 who are interested in sports and speak English or Russian, from Finland, Russia, or Ukraine.
We needed both experienced users and beginners, in roughly a 30% / 70% ratio.
From regular users we wanted to learn about their experience and insights; from beginners, how clear and user-friendly an unfamiliar interface is and how they perceive it.
We paid special attention to the beginner segment because a page that is understandable and convenient for beginners would be an important step for business growth.
After selecting the respondents, we started creating tasks and questions for them. The design team collected feedback about problems related to this page, and we formed hypotheses to verify and questions to check all user scenarios on the page.
We discussed the list of questions with the team and refined it. We conducted a dry run to assess how long the tasks take to perform and whether the wording was clear.
Some questions were rewritten, and we arrived at a more consistent task structure, divided into general and clarifying tasks.
Our team was split across two rooms: one for the respondent and the facilitator, the other for the observers. In the first room stood a laptop with the page under test open; it was connected to a Skype broadcast with screen sharing and a video feed of the respondent’s face, which the observers watched from the other room.
Observers tracked the time spent on each task and how respondents interacted with the interface as they carried out the tasks: where their pain points were, where everything was clear, and where ideas or questions arose.
To take notes as quickly and efficiently as possible, we divided all the questions into blocks related to interface interactions and made a large paper spreadsheet. To find the right block quickly, we labeled each one with printed screenshots of the interface.
During testing, observers attached sticky notes with their observations to the relevant questions: red for pain points, green for successful interactions, and yellow for questions and ideas.
We arranged a specific time on Saturday with each of the respondents. We wanted to win them over and put them at ease so that they would behave as naturally as they would in a familiar environment.
Therefore, we prepared an introduction: we first got acquainted, talked about general topics, showed them around the office, and offered drinks and snacks.
The facilitator told each respondent that we were testing the product, not them. Respondents were also told that it was important for us to hear any opinion, so they shouldn’t be afraid of offending anyone: we were not the developers, only researchers.
Once a respondent got used to the environment, we started asking questions about their experience. That helped us understand our users better, and we also wanted to design a persona based on this information in the future.
While respondents performed the tasks, the facilitator asked clarifying questions and asked them to think aloud about what they were doing. Meanwhile, the researchers in the other room observed and took notes; one of the observers rated each task’s completion time (fast/normal/long).
The challenge for the facilitator was to bring respondents back to the tasks, because they often got distracted wandering around the site. We also had to steer respondents toward describing their own experience rather than retelling stories from friends or the internet, since only first-hand experience is valuable for our product.
When the facilitator finished a session, he led the respondent to the room where the observers sat. We showed and explained how the test was organized and thanked them for their time and for the valuable insights we gained. This gave us a chance to get to know our users better, make communication more informal, and earn their loyalty.
Users reacted very positively; some of them stayed in the office after testing to talk with us and discuss the product. In general they were pleased with the page, but each had their own opinion about what could be improved.
For example, some did not trust the team autocomplete algorithms, and some missed expanded statistics on football players that would help them make the right choices while creating a fantasy team.
After testing was over, we devoted the next day to analyzing the notes and entering them into a spreadsheet. All the notes were summarized in one large document describing the user experience of this page, a generalized Customer Journey Map, which included positive moments, pain points, opportunities to focus on, and emotions.
We then analyzed the negative feedback: the highest priority for change went to issues where most users had a negative experience, plus those where tasks took users a long time to complete.
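The prioritization rule above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the note data, issue names, and weights are invented for the example, not taken from our actual spreadsheet): each red sticker counts as one negative experience, and a slow task completion adds extra weight.

```python
from collections import defaultdict

# Hypothetical observer notes: (issue, sticker_color, task_speed)
notes = [
    ("autocomplete distrust", "red", "long"),
    ("autocomplete distrust", "red", "normal"),
    ("player stats hidden", "red", "long"),
    ("player stats hidden", "red", "long"),
    ("player stats hidden", "yellow", "normal"),
    ("team field operations", "red", "fast"),
]

scores = defaultdict(int)
for issue, color, speed in notes:
    if color == "red":    # pain point: one negative experience
        scores[issue] += 2
    if speed == "long":   # slow task completion adds weight
        scores[issue] += 1

# Highest score first = highest priority to fix
priorities = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With this toy data, “player stats hidden” comes out on top: it hurt the most users and slowed them down the most, which matches how we ranked issues on the real spreadsheet.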
Feedback & implementation
An important discovery from this user testing was that users lacked statistical data on players (statuses, form, gossip, relations with the coach). To build the ideal fantasy team, they look for this information in other services.
It was a surprise for us that users didn’t even know the site had an advanced player statistics feature (almost no one could find it), which had been created precisely to help users analyze the data and create a team.
Another problematic place on the page was the block of operations with the team on the field. The functions invented by the developers turned out to be completely unwanted by users.
The site has features that help with forming a team: “Ask the coach” and “Autocomplete”. As we found out, users don’t want to use them because they don’t trust the algorithms (User: “Why should I trust algorithms to spend my money?”). They don’t like the teams the algorithm generates, or the fact that those teams then need reworking.
Results & next steps
As a result of the testing, we gained valuable insight into how our users interact with the product and what we needed to change to improve the user experience and grow the business. The testing helped the product stop losing money daily, eliminate the grossest mistakes we hadn’t even suspected, and earn user loyalty.
The next step will be to validate, with another round of user testing, that our implementation of the feedback gathered during testing is correct; for this I created a prototype, which will be tested again. The prototype can be viewed here.