Usability test results analysis instructions

Instruction statement

These instructions describe best practice techniques for evaluating usability test results and statistical data in order to obtain actionable findings for website improvement. Usability tests should be conducted during the concept design phase for new websites to avoid expensive remedial work after the website is built. Tests can also be conducted on live websites to identify clearly where improvements should be made. These instructions aim to ensure the integrity of the analysis that follows testing.

Exclusions

This instruction does not apply to:

  • courseware, including scholarly work, student work and teaching and learning materials
  • websites that have no relationship to RMIT (for example, personal or private sites)
  • Google sites

Instruction steps and actions

Introduction

The value of a usability study relies not only on observing and capturing unbiased data but also on the ability to correctly interpret the data to glean insight. Major usability issues are obvious, but others are subtle or hypothetical and require some education and experience to diagnose. With smaller sample sizes it becomes even more critical for the analyst to interpret the results carefully in order to make sound recommendations for improvement.

Key principles

Data integrity

The usability testing that has generated the results must have integrity in order to deliver useful data. The data should include a range of facets that the analyst can study, including:

  • objective and quantitative data generated from the test tasks such as error rates, steps to completion, and completion rates
  • the participant’s expression and physical reactions during the session
  • participant comments
  • subjective feedback provided through questionnaires

Refer to the Usability Testing Instructions for guidance on how to design and deliver best practice usability tests.

Analyst responsibilities

Usability analysts diagnose usability issues through their understanding of:

With this knowledge, specific usability issues can be identified and recommendations for fixes made. These recommendations are normally presented as a formal report in which issues are ranked in terms of severity.

Be prepared to iterate

Recommendations derived from quality data and analysis lead to more effective websites that support the business. Be prepared to iterate, prioritising the most critical problems first.

Analysis best practices

Analysing assigned tasks

The test plan should have included clearly written tasks that define what success is. This makes it easy to identify when a participant has either completed or failed the task. You may also have timed each task from when the participant digested the instructions to when they finished or gave up. For each task you can then calculate:

  • completion rate - the proportion of participants who were able to finish the task
  • average completion time - the sum of completion times for a task divided by the number of participants who completed it

Aim to have high completion rates and low completion times.
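
As an illustration only (these instructions do not prescribe any particular tooling), the following Python sketch shows how the two figures could be calculated from a simple set of session records. The data structure, participant identifiers and timings are hypothetical.

    # Hypothetical session records: one entry per participant for a given task.
    # "completed" records whether the participant finished the task;
    # "seconds" is the time from digesting the instructions to finishing or giving up.
    results = [
        {"participant": "P01", "completed": True, "seconds": 95},
        {"participant": "P02", "completed": True, "seconds": 140},
        {"participant": "P03", "completed": False, "seconds": 210},
    ]

    finished = [r for r in results if r["completed"]]

    # Completion rate: proportion of participants who were able to finish the task.
    completion_rate = len(finished) / len(results)

    # Average completion time: sum of completion times divided by the number
    # of participants who completed the task.
    average_time = sum(r["seconds"] for r in finished) / len(finished)

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Average completion time: {average_time:.0f} seconds")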

Charting these metrics can:

  • alert you to the tasks with the most severe usability issues
  • track progress between design iterations
  • compare the performance of the same task on competing websites

You may also wish to note task efficiency, which measures whether a user finished the task quickly and without obstacles or took a meandering approach but got there in the end. This can be recorded on a scale where, for example, 1 is ‘failed to complete the task’ and 5 is ‘completed the task without any problems’. This provides an extra level of detail to identify where the major usability issues are.

Capturing other metrics such as steps to completion, error rates and error severity helps to flush out and rank the site’s usability issues.
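
If it helps, the per-task metrics can be brought together in a single summary and sorted so that the most problematic tasks surface first. The sketch below is illustrative only; the task names, scores and sort order are assumptions, with efficiency recorded on the 1 to 5 scale described above.

    # Hypothetical per-task summary combining the metrics discussed above.
    task_summary = [
        {"task": "Task A", "completion_rate": 0.50, "avg_efficiency": 2.1, "errors": 9},
        {"task": "Task B", "completion_rate": 0.90, "avg_efficiency": 4.3, "errors": 1},
        {"task": "Task C", "completion_rate": 0.70, "avg_efficiency": 3.0, "errors": 4},
    ]

    # Sort so that low completion, low efficiency and high error counts come first.
    ranked = sorted(
        task_summary,
        key=lambda t: (t["completion_rate"], t["avg_efficiency"], -t["errors"]),
    )

    for t in ranked:
        print(f'{t["task"]}: completion {t["completion_rate"]:.0%}, '
              f'efficiency {t["avg_efficiency"]}, errors {t["errors"]}')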

Analysing participant reactions

Participants’ body language and facial expressions reveal how they are feeling when using a website or application. These physical cues are important to acknowledge because they are difficult for participants to conceal. Watch for participants who try to be polite, persist with a task that is obviously a struggle, or state in verbal feedback or questionnaire responses that the experience was fine when it clearly was not. One-on-one access to a user is priceless because you can witness the non-verbal communication reinforcing or negating what they do and say.

There are Ten Emotion Heuristics you can use to evaluate a user’s emotional reactions. Familiarise yourself with these facial expressions so that you can recognise them during a session. They indicate states such as when a user needs to concentrate more, feels uncertain, lost or deceived, is getting frustrated, tired or is satisfied and enjoying the experience.

Brush up on basic body language cues too. Consider what the participant is doing with their hands, arms, torso, legs and feet. Are they leaning into the computer? Are they jiggling a foot? Is their hand at their forehead? Has their voice gone quiet? Have they become very animated? Are these behaviours in reaction to the subject matter or the moderator?

Write down the emotional response and body language in your session notes alongside the task results and verbatim comments, to give context to the participant’s performance. This way you know that even though a task may have been achieved successfully, it was done so with grimaces, sighs and head shaking, and therefore presents a usability issue nonetheless.
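
One way to keep that context together is to record each observation in a consistent structure alongside the task result. The field names and values below are simply a hypothetical example of such a note, not a prescribed format.

    # A hypothetical structure for a single session note entry.
    session_note = {
        "participant": "P03",
        "task": "Task A",
        "result": "completed",                     # completed / failed / gave up
        "efficiency": 3,                           # 1-5 scale described earlier
        "emotional_response": "frustration, then relief",
        "body_language": "leaning in, hand at forehead, sighing",
        "verbatim": "Oh, it was there the whole time.",
    }

    # Even though the result is "completed", the accompanying reactions flag a
    # usability issue worth reporting.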

Analysing participant comments

Although your focus will be on what participants do, what they say during a session is often very illuminating. Participants will talk about which aspects of the website they like and dislike, what they expect to happen, what they don’t understand, what they find funny and what design improvements they would suggest.

Rather than taking note of comments that are about solutions, the key is to pick up on comments that shed light on:

  • their mental model for how a task should be done
  • how the website should support them
  • the finer detail of the problem the website is trying to solve

Participants aren’t designers, so their suggested fixes can be taken with a grain of salt. However, since they’re a target user of the website, they are an expert on the tasks it should accommodate. Comments that help the design team to better understand the task or how the site does or doesn’t support the task are very instructive.

Some comments will be so powerful that they’ll be considered ‘lightbulb moments’ that reveal a great insight for the design team or stakeholders. Sound bites of such moments should be used in reports, presentations and highlight videos to drive home your design recommendations.

Analysing questionnaires

Usability questionnaires typically ask participants to rate perceived performance, emotional state or satisfaction at the end of a task or a set of tasks. Each published questionnaire denotes what is considered a successful score. The results are more reliable with larger sample sizes, such as 12 participants or more.
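
For example, the System Usability Scale (SUS) is one widely published questionnaire; the sketch below applies its standard scoring purely as an illustration, since these instructions do not mandate a particular instrument, and the responses shown are hypothetical.

    # SUS scoring sketch: 10 items, each answered on a 1-5 scale.
    responses = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]  # hypothetical participant answers

    total = 0
    for item, answer in enumerate(responses, start=1):
        if item % 2 == 1:
            total += answer - 1   # odd-numbered (positively worded) items
        else:
            total += 5 - answer   # even-numbered (negatively worded) items

    sus_score = total * 2.5       # scaled to 0-100; around 68 is commonly cited as average
    print(f"SUS score: {sus_score}")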

The questionnaires can be useful to benchmark a website, highlight problematic tasks, track improvements between design changes or compare experiences on different sites.

Consider how well the responses correlate with the objective and non-verbal data you have collected during your usability study. Also think about how easy it was for the participants to use the questionnaire. Avoid using questionnaire results in isolation to communicate the performance of a website or application, because on their own they do not offer the same insight into the interface’s strengths and weaknesses as the observed behaviour.
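
One simple way to check that correlation (an assumption about approach, not a requirement of these instructions) is to compare per-task questionnaire ratings against an objective measure such as the completion rate for the same tasks.

    from statistics import correlation  # Pearson's r; available from Python 3.10

    # Hypothetical per-task figures from the same study.
    completion_rates = [0.50, 0.90, 0.70, 0.80]      # objective measure per task
    questionnaire_ratings = [2.1, 4.5, 3.8, 2.9]     # average subjective rating per task

    r = correlation(completion_rates, questionnaire_ratings)
    print(f"Correlation between behaviour and reported satisfaction: {r:.2f}")

    # A weak or negative correlation prompts a closer look at tasks where what
    # participants did and what they said diverge.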

When you can’t work out what the issue is

  • Review video footage of participants performing the task. You may pick up comments, actions or patterns that you weren’t able to absorb during the sessions.
  • Discuss what you’re seeing, or not seeing, with an objective colleague. Someone with another perspective on the problem is a useful counterbalance when you’ve become overly familiar with the system.
  • Play around with alternative design solutions anyway. Sometimes the issue with the original will be more tangible when you’ve prototyped and experienced other options.
  • Test your changes again. Hone the task to ensure it is robust. Show alternative design solutions and probe the effectiveness of each. Repeat these tests until they reveal the issue.

Digital and Customer Experience Strategy can provide guidance and support in relation to analysing test results. Please contact our Senior User Experience Analyst.

[Next: Supporting documents and information]