Conducting effective usability studies


To get the most value from your tests, I've found that spending a little time preparing and planning, not only what you're testing but also why, will help you come out of each test with the most insight possible.

There isn't exactly a "wrong" way to go about testing per se. What I mean by a wrong way is that you can waste the user's time and your budget on a test that fails to gather the most valuable insight. I highly encourage testing, no doubt, but teams should also have a plan beforehand and an anticipated outcome that provides value and insight to move forward with.


So, without getting too deep into the subject, I wanted to share a clear, somewhat non-exhaustive guide to running a cheap yet effective user test on your app.

The Basics

First things first, it would be a good idea to understand what a usability test is for, how it can help and when it can’t.

A usability test allows you to observe behaviors. This is where it excels, because it shows you where the frustration points are within your interface. Observing how a user flows through the app provides much more actionable insight than asking questions such as whether they like it or not.

A mistake designers often make is testing how users "feel" about the design, so you'll hear questions like "would you use this?". Don't assume you're going in the right direction simply because a user says they would use it. I've conducted tests in the past where I've witnessed users fail to complete a single task and yet still say that they loved and would use the application. It's hard to know what to change with those sorts of results.

And on another quick note: when an internal product team gathers in a room to give feedback and talk about a design, that isn't a user test; it falls into the realm of a critique. Focus groups aren't user tests either. Designers (or anyone, for that matter) should learn the distinction, as these are different activities that usually produce different outcomes.

Planning

Before going out and recruiting users, you should have a plan. Having a clear understanding of what you want to get out of the test beforehand is critical to its success. This doesn't have to be an intricate plan; just try to determine what you want to get out of it to help you move forward in your design. There's a higher chance of missing out on valuable insight if you don't ask the right questions or have a plan to figure out what you're trying to validate. Here's a quick list of things you can do to prepare prior to a test (one way to capture the whole plan is sketched after the list).

  1. List everything you want to test. By everything, I'm referring to things such as the discoverability of primary features and actions, the efficiency and completion rate of task-based applications, or something as simple as various navigational patterns and how effective they are when navigating throughout the application.
  2. Determine the metrics that would make what you're testing a success or a failure.
  3. Based on everything above, put together a test scenario script around the things you need to validate. This is an instruction that the user will read and act on within the app. It should put them into an imagined circumstance and should have an end goal. Let's say you're building an app that helps dog owners locate and connect with dog-sitters nearby, and you want to test the flow of locating a dog-sitter in the area with a good rating for a specific date and time. An example scenario could be, "You need to find a highly rated dog-sitter in your area tomorrow night from 7–8pm. Please use the app to do that".
  4. Put together a quick general questionnaire to ask the user prior to the test. This should only take 2–3 minutes, and the questions should be general. An example set of questions that I usually ask revolves around their occupation, their familiarity with technology, how often they use technology (mobile, desktop, etc.) in any given day, and the apps they frequently use. Obviously, the questions should pertain to the type of app you are testing. Overall, you'll want to get to know the user a bit before having them run through your test. Ideally, most of this should have been covered during the screening phase. Ethn.io is a great tool for finding and screening both user research and testing candidates.
  5. If you're recording the user during the test, cover your ass (from a legal standpoint) and put together a one-page consent form for the user to sign that clearly states what you're recording and why.

Recruiting

Try to recruit users who fall within the target demographic. For example, if you're testing a mobile e-commerce app where the user needs to search for a product, add it to a cart and check out, you don't want to test with 50-year-old Billy who despises technology and only makes purchases in person using cash. Not a good recruit.
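As a toy illustration of turning screening criteria into explicit checks, here's a sketch for the e-commerce example above. The criteria and field names are made-up assumptions; your screener would use whatever your tool (such as Ethn.io) collects:

```python
# Hypothetical screener check: keep only candidates who match the
# target demographic for a mobile e-commerce test.
def matches_screener(candidate: dict) -> bool:
    return bool(
        candidate.get("shops_online")        # makes purchases online
        and candidate.get("owns_smartphone")  # comfortable on mobile
    )

candidates = [
    {"name": "Billy", "shops_online": False, "owns_smartphone": False},
    {"name": "Dana", "shops_online": True, "owns_smartphone": True},
]
recruits = [c["name"] for c in candidates if matches_screener(c)]
print(recruits)  # ['Dana'] -- Billy is screened out
```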

How many users?

The best results come from testing no more than 5 users and running as many small tests as you can afford.

The Nielsen Norman Group has published great insight into the studies behind the ideal number of people you should test, so I won't get into the details here.
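For the curious, the math behind the five-user recommendation is simple. A quick sketch of the problem-discovery model from Nielsen and Landauer's studies, where a single user uncovers a given problem with probability L (about 0.31 in their data; your own value may differ):

```python
# Nielsen/Landauer problem-discovery model:
# proportion of problems found by n users = 1 - (1 - L)**n
L = 0.31  # average per-user discovery rate from their published studies

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n} users: ~{found:.0%} of problems found")
# With L = 0.31, 5 users already surface roughly 85% of the problems,
# which is why several small tests beat one big one.
```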

Location

The location where you do your testing may vary. You don't need an elaborate testing lab; I honestly don't prefer them, as they can seem so "controlled" and inorganic. There really isn't a hard rule or standard for where you do your testing. You just want to make sure that the test participants are comfortable and that you're not distracting others around you.


Conducting the test

Once you've set a plan, recruited your participants and found a nice, comfortable location, you'll be ready to conduct your test.

Introduction and Overview

You'll want to make sure the user is comfortable, and be sure to thank them for taking the time to be there! Numerous times I've witnessed moderators go right into the test without setting expectations or easing the initial awkwardness that can precede a test. Also, explain that you're testing the application, not them. This is very important, as you want to make sure the user doesn't feel like they need to hold back any useful, unsolicited feedback. Emphasize that any feedback they provide will not hurt anyone's feelings and that they should be as critical as possible.

In addition, remind the user to speak their thoughts aloud as they navigate through the application, and let them know they can ask questions during the test, though you may not be able to answer them until after it has ended.

After the initial intro, remember to go through the short questionnaire you put together during the planning stage. This high-level demographic data will be useful when putting together the brief.

Number of people in the room

Have only one or two people (not including the test participant) in the room conducting the test. Try to avoid having your whole product team surrounding the user, as it can make them feel pressured and stressed, which could "taint" the results. Record the session to share with the team afterwards.

During the test

Aside from reading out the test script and answering questions, try to leave most of the talking to the test participant. If you ask a question, make sure it's open-ended, and avoid leading the user as they try to navigate through the app.

Assuming not every element is active or wired up in the app or prototype, if a user asks a question such as "What does that icon do?", don't answer it right away; instead, ask them what they think would happen if it were active. A follow-up question like this does two things for you: one, it gives you insight into the user's mindset and expectations and tests the effectiveness of the icon you're using; and two, it tells you whether the placement matches the user's expectations (i.e. platform conventions they're familiar with). You can take that as a hint that you may need to rethink that custom icon you spent a few hours working on.

Post Test Analysis

After the test has concluded, set aside some time (at least an hour or so) to analyze the results with your team. You can also go over the videos if you recorded the sessions. During this phase, you want to pinpoint any common usability issues or trends, such as an important element being overlooked or an issue shared across a majority of the test participants.

Once you've compiled a list of the common usability issues, you can group them into "buckets", which provides a more visual way of seeing where you can focus your efforts moving forward.
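If you tag the issues you observe per session, even a few lines of scripting can surface the buckets. A minimal sketch, with issue labels invented for illustration:

```python
# Bucketing observed usability issues by how many participants hit them.
from collections import Counter

# One list of observed issues per participant, tagged during analysis.
sessions = [
    ["missed filter icon", "confused by date picker"],
    ["missed filter icon", "expected map view"],
    ["missed filter icon", "confused by date picker"],
    ["confused by date picker"],
    ["missed filter icon"],
]

issue_counts = Counter(issue for session in sessions for issue in session)
for issue, count in issue_counts.most_common():
    print(f"{issue}: {count}/{len(sessions)} participants")
```

Issues hit by a majority of participants bubble to the top, which is usually where your next design iteration should focus.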

A task pass/fail chart used during a comparative usability test.
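Since the chart itself doesn't reproduce well in text, here's a minimal sketch of the same idea as a plain-text grid. The participant and task names are placeholders:

```python
# A plain-text task pass/fail chart for a comparative usability test.
results = {
    "P1": {"Find sitter": True,  "Book slot": False},
    "P2": {"Find sitter": True,  "Book slot": True},
    "P3": {"Find sitter": False, "Book slot": False},
}

tasks = list(next(iter(results.values())))
print(f"{'':<4}" + "".join(f"{t:<14}" for t in tasks))
for participant, outcome in results.items():
    row = "".join(f"{'pass' if outcome[t] else 'FAIL':<14}" for t in tasks)
    print(f"{participant:<4}{row}")
```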

A Successful Test

You'll know your test was successful when you leave with enough insight to move forward and make better-informed decisions.

Keep in mind that user tests are good for finding issues, not for determining solutions, so the results won't show you which solution would be optimal. What you will get from user testing (if performed correctly) is where the problem areas are and why they are problems. To determine which solution is optimal, you'll need to test each one and see which performs best. It's an iterative process.

On the Flip Side, a Failed Test

Gathering the wrong insight, missing opportunities and not running the user through the right scenarios can negatively affect the design moving forward. As Jared Spool has laid out:

“There are two outcomes from poor decisions: either the user experience is worsened because of a change that just shouldn’t have happened; or a valuable opportunity is missed to improve the design’s user experience. Either way, when usability tests work, these results are significantly less likely.”

To Conclude

Testing is an absolute necessity; every product team needs to do it if they want to create a better product. I hope this article gives product teams some insight into how they can make good use of their time when validating their designs moving forward.

How does your team go about conducting tests? Feel free to reach out and provide any comments or suggestions, I’d love to hear your thoughts!