Usability testing instructions

Instruction statement

These instructions provide best-practice techniques for running usability testing, including choosing appropriate testing modes, test planning, scenario planning, participant recruitment, ethical testing and avoiding tester bias. All new website concept designs should follow this instruction, as testing early and often helps to avoid expensive remediation work after launch. Existing websites due for an upgrade can also be evaluated using usability testing.


This instruction does not apply to:

  • courseware, including scholarly work, student work and teaching and learning materials
  • websites that have no relationship to RMIT (for example, personal or private sites)
  • Google Sites.

Instruction steps and actions


Usability testing evaluates the performance of a website or application against a set of criteria to identify problems. This kind of research is essential to creating a website or application that meets people’s needs and performs how they expect. Usability tests with a handful of users can unearth the majority of a site’s issues and provide invaluable insight into user behaviour. Tests must be conducted with an understanding of the ethical principles that support professional research conduct, as outlined by the User Experience Professionals Association.

Getting started

When producing new websites, the rule of thumb is to “test early, test often”. Usability testing involves time and resources but delivers a large return on investment in the longer term.

When planning usability testing, a key decision is whether to use in-person “lab” testing or remote testing. These best practices are written specifically for in-person lab studies, but the principles are common to both methods.

Benefits of lab testing

  • Ability to observe actual user interactions
  • Generates compelling evidence (e.g. video recordings) to support recommendations
  • Effective when testing within secure environments

Benefits of remote testing

  • Time and cost savings
  • Can reach large and geographically spread audiences
  • Effective when testing large, informational websites (due to scalability)

For information about recommended and RMIT licensed remote testing software, contact the Senior User Experience Analyst.

Step 1: Prepare for a kick-off meeting

  • Plan resources. Identify the people who will be performing the research (moderating, note taking and observing) and the study’s stakeholders, and invite the appropriate people. To gain benefits from the testing, it is critical that studies are planned and executed by experienced usability analysts. Consider whether a specialist consultant needs to be engaged. Digital and Customer Experience Strategy can provide advice in relation to trusted usability consultants. Contact the Senior User Experience Analyst.
  • Review the website. Familiarise yourself with the website or application, its purpose, and its perceived and actual problems. Collate any existing research.
  • Understand the target user groups. Review existing RMIT user research. Contact the Senior User Experience Analyst for guidance.

Step 2: Conduct the kick-off meeting

  • Identify recruiting criteria. Agree on the website or application’s user profile and participant recruiting criteria.
  • Identify key user tasks. Determine the tasks you’d like users to perform and the suspected usability issues and usability criteria associated with each. Collate these as research goals and questions.
  • Identify appropriate method(s). Decide the best activities and tests for participants to perform to answer the research questions. See table below.

Table: Widely used usability testing activities


Interview

Users are asked questions that establish their context of use and preferences, and build trust and rapport with the moderator.

Performance-based tasks

Users perform tasks and metrics such as the completion rate, completion time, task efficiency, error rates, error severity and steps to completion are measured.

Task walkthrough

Users perform tasks and think aloud to explain what they are doing and why, as they do it. This should not be done with performance-based tasks as talking aloud slows users down.

Card sort

Users group items according to their own or predefined categories, to indicate how they expect content and features within a site to be organised (information architecture).

Tree test

Users indicate where they would click on a skeleton navigation menu to find a certain piece of content or a feature.

Subjective rating

Users rate their experience of a website or application through a standardised satisfaction survey or similar instrument.

Eye tracking

Gaze sequence, hit rate and dwell time are captured by software that tracks the participant’s eye movements while using the website.

Desirability testing

Users select a set of product reaction cards to illustrate their emotional response to a website or application.
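One widely used subjective rating instrument is the System Usability Scale (SUS), a ten-item questionnaire scored on a 0–100 scale. The scoring rule below is the standard SUS formula; the sample responses are invented for illustration.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Example: one participant's (invented) responses
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Averaging the per-participant scores gives a single benchmark figure that can be compared across test rounds.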

  • Set the schedule and venue for the study.
  • Confirm team roles.
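The performance-based metrics listed in the table (completion rate, completion time, error rate) can be tabulated once the sessions are logged. The sketch below assumes one record per participant per task; the field names and data are invented examples.

```python
from statistics import mean

# Each record is one participant's attempt at one task (fields are illustrative).
sessions = [
    {"task": "find-fees", "completed": True,  "seconds": 74,  "errors": 0},
    {"task": "find-fees", "completed": True,  "seconds": 132, "errors": 2},
    {"task": "find-fees", "completed": False, "seconds": 300, "errors": 4},
    {"task": "enrol",     "completed": True,  "seconds": 95,  "errors": 1},
]

def summarise(records, task):
    """Completion rate, mean time of successful attempts, mean error count."""
    attempts = [r for r in records if r["task"] == task]
    done = [r for r in attempts if r["completed"]]
    return {
        "completion_rate": len(done) / len(attempts),
        "mean_time_success": mean(r["seconds"] for r in done) if done else None,
        "mean_errors": mean(r["errors"] for r in attempts),
    }

print(summarise(sessions, "find-fees"))
```

Timing only successful attempts (as here) is one common convention; agree the convention with the team before the pilot test so the note-taking tools capture the right fields.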

Step 3: Recruit participants

Internal recruitment

  • Digital and Customer Experience Strategy may be able to assist with access to current staff and students who have indicated their interest in participating in RMIT web usability studies. Contact the Senior User Experience Analyst for advice.
  • Any activity involving current students must follow the policy for communication with current students.
  • Your recruitment message should include:
    • session details such as one-on-one format, incentive, date, time, duration, location and public transport and parking availability
    • how the session will be recorded and where these recordings may be used

External recruitment

  • Develop a recruitment specification to provide to a market research firm, or to guide your own screener.
  • The recruitment specification should include:
    • the project name and contact person
    • session details such as one-on-one format, incentive, date, time, duration, location and public transport and parking availability
    • how the session will be recorded and where these recordings may be used
    • the number of people required and quotas for different criteria
    • demographic criteria (e.g. age, gender, education, employment status)
    • behavioural criteria (e.g. propensity to shop online)
    • technology use criteria (e.g. smart phone ownership, home broadband internet access)
    • industries, competitors or roles to be excluded due to commercial sensitivity or ineligibility
  • The screener should list the questions that a recruiter will ask a potential participant over the phone to determine whether they are suitable. The screener should not divulge the nature of the research, nor cue the interviewee as to what type of participant is required.
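The screener logic above amounts to a sequence of exclusion checks followed by a quota check. A minimal sketch, assuming invented criteria and quota groups, might look like this:

```python
# Illustrative screener logic: the criteria, quotas and answer keys below
# are invented examples, not RMIT's actual recruitment specification.
QUOTAS = {"student": 4, "staff": 2}      # participants needed per group
recruited = {"student": 3, "staff": 2}   # recruited so far

def screen(answers):
    """Return the quota group a candidate would fill, or None if ineligible.

    The phone questions behind `answers` should be worded so they do not
    reveal the study topic or cue the desired participant profile.
    """
    if answers.get("works_in_market_research"):   # exclusion criterion
        return None
    if not answers.get("uses_web_weekly"):        # behavioural criterion
        return None
    group = answers.get("affiliation")            # e.g. "student" or "staff"
    if group in QUOTAS and recruited[group] < QUOTAS[group]:
        return group
    return None

print(screen({"affiliation": "student", "uses_web_weekly": True,
              "works_in_market_research": False}))
```

Even when a market research firm handles recruitment, writing the screener in this if/else form makes the quotas and exclusions unambiguous for the recruiter.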

Step 4: Write the user tasks

  • Get access to relevant websites and applications or create a prototype.
  • Write the tasks that the participants will perform.
  • Write a brief, engaging scenario if necessary, to give the task context and help the user interpret what is required of them. Participants must be clear about what they need to do and why they are doing it without being told how to do it. Ensure that the scenario:
    • includes a trigger for performing the task and using the website
    • stipulates what task needs to be performed
    • mentions what outcome the person is seeking
    • is based on what you already know about the goals, demographics, behaviour, attitudes, preferences and technology use of your archetypal users.
  • Make the first task an easy one to ease the participant into the session.
  • Ensure that:
    • each user task is written so that it’s clear what the user’s goal is, where to start the task and what constitutes completion. Allow for the multiple correct paths users could take, or destinations where they could complete the task.
    • there are no clues in the wording as to how to accomplish the task (e.g. use of navigation menu labels or section headings). For example, ‘Find the About Us section’ should be worded ‘Find out what year the organisation was founded’.
    • the tasks are realistic rather than contrived or edge cases. Focus on what is probable rather than possible, unless it is a specific, controversial research question the team needs to resolve. Ideally, craft the task so that the participant can inject their own experience into it. For example, ‘Find the shipping costs for a book you want to buy or bought recently’ is preferable to ‘Find a Chinese history textbook for under $50’, which has an arbitrary goal.
    • the tasks are in a suitable sequence. Consider randomising the order if you’re concerned about bias.
    • there are not too many tasks nor too many tasks where the users are expected to fail, so that participants are not bored, overwhelmed, overly frustrated or demoralised.
  • Get team feedback on the scope, wording and order of the tasks.
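If you randomise the task order to reduce ordering bias, a simple rotation (each participant starts one task further along the list) ensures every task appears in every position roughly equally often. This is a basic Latin-square rotation, not a fully counterbalanced design; the task names are placeholders.

```python
def rotated_orders(tasks, n_participants):
    """Assign each participant a rotated task order so that each task
    appears in each position about equally often across the study."""
    n = len(tasks)
    return [[tasks[(p + i) % n] for i in range(n)]
            for p in range(n_participants)]

for order in rotated_orders(["T1", "T2", "T3", "T4"], 4):
    print(order)
```

To keep the recommended easy warm-up task first for everyone, hold that task out and rotate only the remaining tasks.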

Step 5: Prepare test materials

  • Prepare the protocols and props and get team feedback where applicable:
    • Consent form and/or non-disclosure agreement
    • Moderator’s script
    • Printed out task instructions (one per page, large type)
    • Questionnaires (use SurveyMonkey or Google Drive to create online forms)
    • Props (e.g. worksheets, dummy data, authorised credit card, login details)
    • Incentives in an envelope (e.g. cash payment, gift card)
  • Prepare the tools:
    • Note taking (including expected paths and correct destinations for performance-based tasks)
    • Audio, video, screen capture
    • Card sorting, tree testing, survey or other software
    • Stopwatch
  • Confirm the data synthesis and analysis approaches.
  • Conduct a pilot test to ensure the protocols, props and tools perform as intended. It is ideal to run the pilot test in the interview room, but not essential if timing or access prevents it. It is also not necessary to find a truly representative user for the pilot session, since you are interested in whether the test runs smoothly rather than the validity of the responses. However, this is an opportunity to check that the moderator’s script and tasks pose the right questions and are interpreted correctly, and that the session duration is appropriate. Revise the test materials as required.

Step 6: Organise the testing venue

  • Consider where participants will wait before the session. Inform reception staff to expect visitors and welcome them, if necessary. It may be efficient to ask participants to fill out paperwork in reception.
  • Set up the equipment, protocols, props and tools in the interview room.
  • Establish where the moderator, participant, note taker and observers will sit. Ideally, note takers and observers will watch from another room or remotely online, so as not to unsettle the participant. Otherwise, have the note taker sit behind the participant on the opposite side to the moderator, so that they remain out of the participant’s field of vision.
  • Ensure the room is comfortable and does not display any material that will influence or intimidate the participant.
  • Test the equipment.

Step 7: Conduct the test

  • Welcome the participant and introduce the nature of the session.
  • Ensure the consent form is understood and signed before proceeding to record the session.
  • Start the recording equipment.
  • Follow the script and run through tasks and activities.
  • Note what the participant does, verbatim comments, body language and facial gestures, and your observations.
  • Ask closing questions.
  • Thank the participant and provide their compensation.
  • Stop the recording equipment and farewell the participant.
  • Reset the interview room and equipment.
  • Debrief with notetakers and observers.

Step 8: Analyse results

  • Synthesise the findings by completing your notes, summarising findings and tabulating data. Analyse the data and identify usability problems with reference to Usability Test Results Analysis Instructions.
  • Rate the usability issues, for example:
    • Catastrophic problems - the user is unable to complete the task, refuses to complete the task, experiences strong dissatisfaction or solves the task incorrectly without noticing.
    • Serious problems - the user is delayed significantly but manages to complete the task.
    • Minor problems - the user is delayed briefly but recovers well.
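A first-pass classification of observed outcomes against the severity scale above can be automated before the team reviews each issue by hand. The delay threshold and field names below are invented for illustration; the catastrophic criteria of refusal and strong dissatisfaction still need a human judgement.

```python
# Map an observed task outcome to the severity scale above.
# The 60-second delay threshold is an illustrative assumption, not a standard.
def classify(outcome):
    """Rate a task outcome as catastrophic, serious or minor."""
    if not outcome["completed"] or outcome.get("wrong_without_noticing"):
        return "catastrophic"   # task failed, or solved incorrectly unnoticed
    if outcome["delay_seconds"] >= 60:
        return "serious"        # completed, but significantly delayed
    return "minor"              # brief delay, good recovery

print(classify({"completed": True, "delay_seconds": 90}))  # serious
```

Tabulating severities alongside how many participants hit each issue gives the priority ordering needed for Step 9.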

Step 9: Create recommendations

  • Describe each usability issue and its severity: what went wrong, its impact on users completing a task, the number of users who encountered it or how frequently it occurred, and how quickly users may learn to recover from it.
  • Generate suggestions for improvement. The suggested fix might be described, sketched or mocked up.
  • Write an executive summary that outlines how the more severe issues can be addressed.

Step 10: Document the findings and recommendations

  • Consider the intended audiences for the findings and devise a communication plan to share the results with them. This may entail creating reports, presentations or other artifacts in a variety of formats to ensure suitability.
  • If participants consented to their recording being used for review purposes, create a brief (e.g. 10 minute) highlights video of key issues and moments during the tests.
  • Share the results widely with the project team, business owner, technical owner and other stakeholders.
  • Work with stakeholders to create an action plan to implement the recommendations.
