1. Define our users and their key user journeys for our product or service

Clearly define our users using personas or need states to help make sure everyone is on the same page.

Next, map out our product or service’s key journeys, and identify important interactions or pain-points along the journey. These are your targets for measurement and improvement.

For example, at the Tax Office, the most important service is gathering tax from people each year — so our key user journey is submitting your annual tax return.

2. Design a set of key metrics to measure

Metrics are quantifiable measures used to track, monitor and assess key aspects of the customer or user experience.

A standard set of metrics can provide us with a solid baseline that enables us to analyse and measure improvements over time.

So what metrics do we use?

Many experience measurement systems fail to make the connection between different types of metrics.

As a product and service team, we may not have access to every metric we would like to measure, so we have to make some smart decisions about what we can measure.

For this example, we're going to use a combination of metrics that our team scores directly, and metrics that come straight from customer feedback.

Things that we score

  1. Impact — the severity of the issue, and the effect it will have on customers
  2. Reach — the scope of affected users, which could range from a few outliers to the entire population
  3. TCT — Task Completion Time: how long it took the customer to complete their task or achieve their outcome

Things that users score

  4. CSAT — Customer Satisfaction: how satisfied the customer is with their experience and outcome
  5. CES — Customer Effort Score: how much effort the customer felt it took to achieve their outcome
  6. NPS — Net Promoter Score: how likely the customer is to recommend the product or service to others
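
To make this split concrete, here's a minimal Python sketch of the six metrics as a simple data structure. The Metric class and its field names are our own illustration, not part of any particular tool:

```python
# Illustrative only: a simple structure for the six metrics above.
from dataclasses import dataclass

@dataclass
class Metric:
    code: str          # short name, e.g. "CSAT"
    description: str   # what the metric measures
    scored_by: str     # "team" or "user"

METRICS = [
    Metric("Impact", "Severity of the issue and its effect on customers", "team"),
    Metric("Reach", "Scope of affected users, from a few outliers to the entire population", "team"),
    Metric("TCT", "Task completion time: how long the task took to complete", "team"),
    Metric("CSAT", "How satisfied the customer is with their experience and outcome", "user"),
    Metric("CES", "How much effort the customer felt it took to achieve the outcome", "user"),
    Metric("NPS", "How likely the customer is to recommend the product or service", "user"),
]

# e.g. list the metrics our team is responsible for scoring
team_scored = [m.code for m in METRICS if m.scored_by == "team"]
print(team_scored)  # ['Impact', 'Reach', 'TCT']
```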

For more info on what metrics to track, check out this great UXPin article: Which UX Metrics Should You Be Tracking?

3. Establish an experience measurement approach

Measurement approaches are many and varied. Choose an approach that balances our team's skills and methods with communication tools that will help build empathy and drive understanding within our organisation.

We will be using a combination of:

  • Journey maps with key interactions and pain-points defined
  • Usability testing of key interactions and pain-points (both current interactions and improvement concepts)
  • User feedback from key interactions

4. Design a scoring system to assess interactions and pain-points

One of the problems with user pain-points is that they are often subjective and difficult to quantify.

We want to find a way to consistently quantify them — and we’ll do this using an anchored scale.

[Image: example of an anchored scale used for assessment]

Anchored scales

An anchored scale is a consistent, non-subjective way of rating each metric: every score on the scale is tied to a concrete description (an "anchor") that the team agrees on.

Our anchored scales will all be 1–5.

The team will need to workshop the scales to find a consistent set that everyone agrees on, and to ensure there's a shared understanding. But anchored scales should (we hope) also speak for themselves.
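
As an illustration, here's what one of those workshopped scales might look like in code. This is a hypothetical 1-to-5 scale for Impact; only the "2 = Very high" anchor matches the worked example later in this article, and the other anchor wordings are assumptions a real team would replace with its own:

```python
# A hypothetical 1-5 anchored scale for Impact, written as a plain dict.
# Anchor wordings (other than "Very high") are illustrative assumptions.
IMPACT_SCALE = {
    1: "Critical: customers cannot achieve their outcome at all",
    2: "Very high: the outcome is achievable only with serious workarounds",
    3: "Moderate: noticeable friction, but the outcome is achievable",
    4: "Low: minor irritation with little effect on the outcome",
    5: "None: no negative effect on the customer",
}

def describe_impact(score: int) -> str:
    """Translate a raw 1-5 Impact score into its agreed anchor wording."""
    return f"{score} ({IMPACT_SCALE[score]})"

print(describe_impact(2))  # 2 (Very high: ...)
```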

5. Measure!

Use a variety of user research, usability design & testing and user feedback to assess and measure your user journeys against the metrics you have established.

You will need to design your measurement activities carefully, so they give you usable data that aligns with the consistent set of metrics you have chosen.

Let’s try an example

Let’s say we have a pain-point that we think is a pretty big deal:

“I can only submit my tax return using the paper form.”

Let’s take this pain-point and assign some hypothetical scores on our imaginary anchored scales:

  1. Impact: scored at 2 (Very high)
  2. Reach: scored at 2 (Upwards of 75% of all personas)
  3. TCT: scored at 2 (1 week or less)
  4. CSAT: scored at 2 (Unsatisfied)
  5. CES: scored at 2 (High effort)
  6. NPS: scored at 1 (Extremely unlikely to recommend)

Now we have to do the math

We can calculate the overall score by multiplying the scores together within each group, adding the two groups, and dividing by the total possible score:

(Impact × Reach × TCT) + (CSAT × CES × NPS)
Divided by the total possible score (250: each group of three metrics maxes out at 5 × 5 × 5 = 125, and 125 + 125 = 250)
Multiplied by 100 to find the percentage

(2 × 2 × 2) + (2 × 2 × 1) = 12, then (12 / 250 = 0.048) × 100 = 4.8

Rounded, that leaves us with a total score of 5.
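
The same calculation, translated directly into a small Python function:

```python
# A direct translation of the formula above.
# All six inputs are 1-5 scores from the anchored scales.
def overall_score(impact, reach, tct, csat, ces, nps):
    """Combine the six metric scores into a 0-100 percentage."""
    raw = (impact * reach * tct) + (csat * ces * nps)
    total_possible = 250  # two groups, each maxing out at 5 * 5 * 5 = 125
    return raw / total_possible * 100

score = overall_score(impact=2, reach=2, tct=2, csat=2, ces=2, nps=1)
print(round(score, 1))  # 4.8
print(round(score))     # 5
```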

How bad is bad?

We now need to work out how to action our scores.

When is a score low enough to get immediate attention?
When is it high enough that we can leave that pain-point alone?

We need to start thinking about thresholds and tolerances.

So if our example scored 5…

“I can only submit my tax return using the paper form.”

According to our measurement framework, this pain-point scores below the minimum action threshold.

This means that this is a high priority pain-point, and we should focus attention on improving it.
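
Here's a sketch of how those thresholds might be applied in code. The cut-off values (25 and 60) are hypothetical; the framework leaves the exact thresholds and tolerances for each team to agree on:

```python
# Hypothetical thresholds: the cut-offs 25 and 60 are assumptions,
# not values prescribed by the framework.
def triage(score: float, action_threshold: float = 25.0,
           comfort_threshold: float = 60.0) -> str:
    """Classify a 0-100 experience score; lower scores are worse."""
    if score < action_threshold:
        return "High priority: focus attention on improving this now"
    if score < comfort_threshold:
        return "Monitor: schedule for improvement"
    return "Acceptable: leave this pain-point alone for now"

print(triage(4.8))  # High priority: focus attention on improving this now
```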

We still need to keep it in balance

Customer and User Experience improvements must always be balanced with business priorities and technical considerations.

This scoring system should be integrated with other prioritisation processes that weigh up business value against the cost and effort to resolve.

While many organisations have these other prioritisation processes in place, few have a systematic approach to prioritising user experience work.
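
Purely as a sketch of what that integration could look like, here's one hypothetical way to fold the experience score into a wider prioritisation alongside business value and delivery effort. The formula and the 1-to-5 estimates are illustrative assumptions, not part of the framework above:

```python
# A purely hypothetical prioritisation sketch: the inversion, the formula
# and the 1-5 estimates are illustrative assumptions.
def priority(experience_score: float, business_value: int, effort: int) -> float:
    """Higher result = more urgent. experience_score is 0-100 (low = worse);
    business_value and effort are rough 1-5 estimates from the team."""
    pain = (100 - experience_score) / 100  # invert: a low score means high pain
    return pain * business_value / effort

# Our paper-form pain-point, assuming high business value and moderate effort
print(round(priority(experience_score=4.8, business_value=4, effort=3), 2))  # 1.27
```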


