Usability Testing: the most important thing you can do for your company.
To break it down like James Brown, within the digital world, usability is a necessary condition for survival. If an app is difficult to use, people don't use it. If the homepage fails to clearly state what a company offers and what users can do on the site, people leave. If users get lost on a website, they leave. If a website's information is hard to read or doesn't answer users' key questions, they leave. Notice a pattern here? There are plenty of other websites available; leaving is the first line of defense when users encounter a difficulty.
When do we develop and conduct Usability Tests?
Usability testing can play a role in each stage of the development process. The resulting need for multiple studies is one reason for making individual studies fast and cheap. Here are the main stages where you could conduct testing. Here I'm talking about a website, but the same applies to any user-facing product or experience:
- Before starting the new design, test the old design to identify the good parts that you should keep or emphasize, and the bad parts that give users trouble.
- Test your competitors' designs to get cheap data on a range of alternative interfaces that have similar features to your own.
- Conduct a field study to see how users behave in their natural habitat.
- Conduct information architecture testing to give you the insights you need to design or refine a great information architecture.
- Make wireframe prototypes of one or more new design ideas and test them. The less time you invest in these design ideas the better, because you'll need to change them all based on the test results.
- Refine the design ideas that test best through multiple iterations, gradually moving from low-fidelity prototyping to high-fidelity representations. Test each iteration.
- Conduct accessibility testing to ensure your website can be used by users with a range of assistive technologies, such as screen readers or speech recognition software.
- Once you decide on and implement the final design, test it. Subtle usability problems always creep in during implementation.
Quantitative vs. Qualitative
The difference between quantitative and qualitative research is often explained using contrasting terminology, like “hard vs. soft”, “numeric vs. non-numeric”, “statistics vs. insights”, “measure vs. explore”, “what vs. why”.
Broadly speaking, quantitative research can provide path and performance analyses by capturing the “what” of user behavior and typically involves larger pools of testers. It enables you to study your users’ experience by the numbers (e.g. how many people were able to buy a product successfully on my website?). Quantitative tests usually come in the form of A/B tests, surveys, click tests, eye tracking, card sorts, and metrics such as the number of complaints or issues you have with a particular feature or page.
In contrast, the goal of qualitative research is to gain valuable insight into the thought processes – the “why” – behind users’ actions, and it is typically done with smaller groups of participants. Qualitative tests can be diary studies, participatory design workshops, interviews, focus groups or usability tests.
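To make the “by the numbers” point concrete, here is a minimal sketch of how you might report a quantitative result like “how many people were able to buy a product successfully?”. With the small samples typical of usability studies, a raw percentage is misleading on its own, so this example also computes a Wilson score confidence interval; the function name and the sample numbers (18 of 25 participants) are illustrative, not from any particular tool.

```python
from math import sqrt

def success_rate_ci(successes, n, z=1.96):
    """Task success rate with a Wilson score interval (95% by default).

    Small usability samples make a plain percentage misleading, so the
    interval shows how much the true rate could plausibly vary.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - margin, center + margin

# Hypothetical result: 18 of 25 participants completed the purchase task.
rate, low, high = success_rate_ci(18, 25)
print(f"success rate {rate:.0%}, 95% CI {low:.0%}-{high:.0%}")
```

Note how wide the interval is at this sample size: the point estimate of 72% is compatible with a true success rate anywhere from roughly the low 50s to the mid 80s, which is exactly why quantitative studies need larger pools of testers than qualitative ones.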
What do I test?
Usability testing is about watching users do tasks which should simulate actual usage of your product, app or website. An initial usability test should focus on core functionality – areas that will see the most usage. If you are tasked with testing a specific piece of functionality, then naturally your test would need to focus on that area.
For example, if you were testing a rental car website, the core tasks could be: renting a car, renting a car with additional options, finding the address of a rental location and finding the opening hours of a rental location on a Tuesday morning.
Sometimes you have to test more than the core tasks, even in an initial usability test. As a balance, test the core tasks first and, if there is still time during a session, test a set of peripheral tasks that covers the fringe functionality.
Ok, I'm done testing... how do I assimilate the data?
The moment a participant completes an evaluation, the responses can be viewed and reports generated. There is an art to this, and remaining acutely aware of your own bias is a top priority; the importance of deep empathy for your users cannot be overstated here. There are many ways to evaluate the data you receive, depending on the unique nature of the study and of the company. This is by no means a comprehensive list:
- Understand what percentage of participants have difficulty navigating your website or digital product
- See where participants are failing your most important tasks
- Dive into individual responses to view the details of a particular participant or to read the comments on open-ended questions
- View heatmaps showing you where participants click on an individual page
- Generate clickstream data to see how participants really navigate your website.
- on and on and on...
Designing something because it's pretty doesn't cut it anymore. Usability testing is the only way to build anything these days, and it gives you documentable evidence that something will work... that the investment will pay off.