Delivering insight into the expectations, mental models and behaviours of your users, usability testing is one of the most powerful ways to optimise your websites and applications for increased satisfaction, repeat visits and higher conversion rates.
Usability testing can be employed to add value at any stage of the development lifecycle, from optimising current products and services that may be underperforming to cost-effectively validating ideas during a project.
It’s vitally important, however, that you approach usability testing in the right way, to ensure the insight you gather drives targeted and measurable improvements.
The two main ways to test the usability of a website or application are formative and summative testing. Formative testing refers to testing that takes place as you’re building a product or service, while summative testing occurs once it’s already on the market. Each approach has its own benefits, with formative testing delivering early insight, and summative testing supporting ongoing understanding and improvement.
Generally however, the earlier you can begin testing the better, as applying fixes late in development is significantly more expensive than doing so at the earlier stages of a project. Paper prototypes, coded prototypes, outputs from a wide variety of tools including Figma and InVision – all of these are worthwhile to test, although you can of course test at any point and are encouraged to do so: any testing is better than no testing at all!
Once you know when and what you’re going to be testing, you need to consider how you’re going to conduct your testing activities. While testing in a dedicated laboratory space has traditionally been the gold standard to provide rich, qualitative insight, the pandemic has driven increased demand for remote testing activities (both of which we support here at Box UK, with a purpose-built testing suite as well as support for both moderated and unmoderated remote usability testing).
Even without the constraints of the pandemic, there are many reasons why you might require a remote approach, for example if you’re targeting specific geographical regions, or if you want to engage with a lot of participants for large volumes of quantitative insight. Quantitative insight can also be gained via A/B testing, which is ideal for micro changes to your design or copy that tie directly back to specific goals and actions (often as part of a programme of conversion rate optimisation).
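To make the quantitative side concrete, here is a minimal sketch of how you might check whether an A/B test result is statistically meaningful, using a standard two-proportion z-test. The conversion numbers are hypothetical, and in practice a dedicated experimentation platform would handle this for you:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test (illustrative sketch)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(200, 4000, 260, 4000)  # 5.0% vs 6.5% conversion
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the usual p < 0.05 level
```

The key point for planning is that small uplifts need large sample sizes to detect reliably, which is why A/B testing suits high-traffic pages and micro changes.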
For your usability testing to deliver the improvements and business impact you’re looking for, you must be clear on your goals both for the project and for the wider organisation, as well as the goals of your audience.
Usability testing is not generally an isolated activity that can be done on its own – it’s part of your wider user experience research. It’s important therefore to look at your end-to-end experience and user journeys – using analytical tools and techniques such as empathy mapping – to identify the key areas and interactions that matter most to your business (such as the checkout process, for ecommerce organisations) as well as specific user needs and pain points.
As you define your goals, you can start to build out the usability tests that will deliver against them, shaping the scope and scale of your project. For example:
There’s all sorts of guidance around how many participants deliver the optimum level of insight, but generally for three tasks (in sessions that tend to be around an hour long) you’d want to test with between three and seven people. The ‘80/20’ rule can apply here: a relatively small number of tests highlights common patterns in behaviour and feedback, revealing the majority of issues across a site or application.
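The ‘small numbers go a long way’ point can be sketched with the commonly cited Nielsen/Landauer model, which estimates the proportion of usability problems found with n participants as 1 − (1 − L)^n, where L is the probability that a single user surfaces a given problem (around 31% in their studies – treat this as a rough heuristic, not a guarantee):

```python
# Proportion of usability problems found with n participants, per the
# commonly cited Nielsen/Landauer model: 1 - (1 - L)^n, where L is the
# chance one user surfaces a given problem (~0.31 in their studies).
def problems_found(n, discovery_rate=0.31):
    return 1 - (1 - discovery_rate) ** n

for n in (3, 5, 7):
    print(f"{n} participants -> ~{problems_found(n):.0%} of problems found")
```

Under these assumptions, five participants already surface the large majority of issues, which is why small, repeated rounds of testing tend to beat one large study.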
When recruiting participants for your testing project, refer back to your goals and segment in line with what you know of your target audience/core user groups – personas and analytics can be helpful here to refine your demographic split. You may also want to recruit across current and prospective customers so you’re not only getting insight from those familiar with your brand, and could even consider speaking to those who have previously had a bad experience with you, to capture some truly honest feedback!
While you can (and likely should) mine your own database for usability testing participants, working with a dedicated recruitment company or usability testing specialist gives you access to a wider pool of testers. These partners will also manage the end-to-end process for you, from booking session slots to providing incentives for taking part.
Your goals will additionally shape the tasks your participants complete, which you’ll need to outline in test scripts for your testing facilitator to follow; these keep sessions consistent and help ensure they deliver the focused insight needed.
Test scripts typically include repeatable instructions that can be used across testing sessions. For example, if you are testing a purchase journey for an ecommerce site, tasks may include navigating to a particular product, adding it to the basket, setting payment and delivery details, and completing the purchase.
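To illustrate, the purchase journey above could be captured as structured data that your facilitator works through – the task wording, prompts and success criteria here are hypothetical examples, not a prescribed format:

```python
# Illustrative sketch of a test script as structured data; every field
# value here is a hypothetical example for an ecommerce purchase journey.
purchase_journey_script = [
    {
        "task": "Find the product",
        "instruction": "Find a pair of running shoes in your size.",
        "open_prompt": "What would you try next?",  # open, non-leading
        "success": "Product detail page reached",
    },
    {
        "task": "Add to basket",
        "instruction": "Add the shoes to your basket.",
        "open_prompt": "What did you expect to happen there?",
        "success": "Basket shows one item",
    },
    {
        "task": "Complete the purchase",
        "instruction": "Set delivery and payment details and check out.",
        "open_prompt": "Talk me through what you're looking at.",
        "success": "Order confirmation displayed",
    },
]

for step in purchase_journey_script:
    print(f"{step['task']}: {step['instruction']}")
```

Keeping the script in one repeatable structure makes it easy to run identical sessions across participants and compare results task by task.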
Your test script can also cover nudges and prompts to support participants should they get stuck; however, it’s important to leave these as open as possible to avoid leading participants and influencing results. So rather than asking a closed question such as ‘did you click on that button because it was green?’ – which invites a one-word, affirmative answer – try asking a more open question such as ‘what was the reason you clicked on that button?’.
In addition to thinking about how you’ll write your usability tests, you’ll need to prepare diaries for the user testing sessions to record feedback and additional observations (more on which shortly), and ensure you have assets for your participants to test with. Whether this is a paper prototype, coded prototype or developed website / application, it’s important that all your documentation and materials are aligned, so be sure to review them – and conduct a practice run if possible – in advance of your sessions.
When preparing for your testing sessions, make sure you have someone in place to act as the facilitator – taking participants through the tasks and making them feel relaxed and comfortable – and someone to act as the observer throughout the session. The observer is responsible for recording participant feedback and monitoring nuances around expressions, body language and other non-verbal feedback, and will generally sit outside the room, observing via a remote video / audio link to maintain a natural environment for the participant.
It’s also important to ensure you have all the equipment you need in advance of the session, which may include laptops / tablets / smartphone devices, camera and audio equipment, and eye tracking software, as well as access to the site or prototype you’ll be testing with. Check all equipment well in advance, as well as after you complete your first session – it’s better to find issues at this point rather than once you’ve completed your testing activities!
Another crucial element of your session prep is your communication with participants, so that they are aware of where they need to be and when, and what will be required from them. Think about whether you’ll be sending digital or physical invitations and how you’ll manage other documentation, such as consent forms and pre-session questionnaires.
If you take the time to plan and prepare your testing programme, running your sessions should be relatively straightforward – though certainly not without its own set of skills.
Common techniques for testing include:
A good facilitator will be experienced in encouraging participants to explore freely, and responding to their actions with relevant and insightful questions to understand why they’re doing what they’re doing – linking back to the need for open rather than closed questions.
An additional tool to capture participant feedback is the System Usability Scale (SUS) survey. Comprising ten statements about various aspects of the overall experience, each rated on a five-point scale, this is a great way to quickly capture valuable insight, and provides a clear score out of 100 that’s perfect for benchmarking and for communicating what you’re doing to senior stakeholders.
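SUS scoring follows a fixed formula: odd-numbered (positively worded) items contribute the response minus one, even-numbered (negatively worded) items contribute five minus the response, and the total is multiplied by 2.5 to give a 0–100 score. A minimal sketch, with a hypothetical participant’s responses:

```python
# Standard SUS scoring: 10 statements rated 1-5. Odd-numbered (positive)
# items contribute (response - 1); even-numbered (negative) items
# contribute (5 - response). The sum is multiplied by 2.5 -> 0-100.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical participant's responses to statements 1-10
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```

A score of around 68 is often cited as the average benchmark, which gives stakeholders a ready-made reference point when you report results.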
One of the best things about usability testing is that you can start learning from your sessions instantly.
Whole team analysis activities such as clustering notes on observations and participant feedback – whether using physical post-it notes, or tools such as Miro – will help reveal patterns in behaviour and common pain points where you can focus your efforts. You may also find outliers that require further investigation, for example where participants aren’t as familiar with digital technologies and may be coming to your product or service for the first time.
There are many frameworks out there to help quantify the issues you uncover, such as Rose, Thorn and Bud:
These techniques will help you build a prioritised and actionable backlog of work, covering fixes to broken elements of your site or application along with new features to enhance the customer experience.
I’ve taken you through a general workflow pattern for usability testing, but within this you will need to tailor the approach, activities and tools you use – driven always by your wider strategic and tactical goals.
If you want to learn more about how to approach usability testing effectively, here are some books and articles that will give you a solid grounding:
We’ve also written a free white paper on common testing mistakes to avoid, drawing on our team’s experience conducting hundreds, if not thousands, of hours of usability testing for clients across a wide range of industries. Download your copy here, and if you’re ready to start shaping your testing project, get in touch by emailing firstname.lastname@example.org or calling +44 (0) 20 7439 1900 to find out how we can help you.