10 tips for better usability tests
It’s not enough to have a piece of software that meets the original functional requirements; you also need insight into how usable it is for your audience. Can they pick it up and use it without a steep learning curve? Can they achieve what they want with it? Does it meet, or ideally surpass, their expectations? Usability testing is how you find out.
The importance of usability testing has gained recognition in recent years, and it has also become far easier to conduct, thanks to the Internet and the rise of crowdsourced testing. But if you want genuinely useful insight, you must design your tests carefully. Here are some tips for conducting successful usability tests.
Outsource your testing
There are plenty of reasons, not least a lack of budget, that you might use friends and family to test your app or game, but any volunteer you have a personal relationship with is unlikely to be critical enough. You need honest opinions from people who don’t care about your feelings.
Recruit your target audience
Another advantage of outsourcing your usability testing is that you can set specific criteria. You want testers who are as similar as possible to your target audience, so they can genuinely emulate the end-user experience. If age, gender, geographical region, device type, or other factors matter, use them to filter your tester pool.
Stay impersonal
Even with strangers, if testers know you designed the software they’re testing, they will often be reluctant to criticize it. If you’re conducting a usability test in person, discuss the app impersonally and give the impression that you are simply running the test and were not involved in the development. Try to maintain the same distance when you review the results, and keep an open mind: if you’re too defensive, you may miss opportunities to make real improvements.
Watch your testers
Wherever possible it is best to watch the testers and see what they do, as opposed to focusing on what they say. Their actions and expressions will often offer real insight that isn’t captured by your questions. If you’re in the room, you can observe and take notes. If you aren’t there, then consider recording the session in some way, but bear in mind that this is only worthwhile if you’re going to take the time to review the recordings.
Keep it short
It’s better to keep tests short so that your testers don’t get fatigued or confused. You’ll get more useful data from a large number of short tests than from a few long ones.
Ask neutral questions
People often write leading questions without realizing it. Keep your tone neutral: if you hint at the answer you want, you will influence the tester.
Use rating scales
You won’t derive much insight from questions with yes-or-no answers. Set up a five- or seven-point Likert scale and you can score the answers to find an average. This also lets you compare future tests with past ones and see improvements clearly. You can factor in other measurements here to build a clear picture of each test.
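As a rough illustration, scoring on a scale makes round-over-round comparison trivial. The sketch below is a minimal example, not part of any particular tool; the question, the 1–5 responses, and the two rounds are all hypothetical:

```python
from statistics import mean

# Hypothetical responses to one question on a 5-point Likert scale
# (1 = strongly disagree, 5 = strongly agree), one list per test round.
round_1 = [3, 4, 2, 5, 3, 4]
round_2 = [4, 4, 3, 5, 4, 4]

# Average each round so the two can be compared directly.
score_1 = mean(round_1)
score_2 = mean(round_2)

print(f"Round 1 average: {score_1:.2f}")  # 3.50
print(f"Round 2 average: {score_2:.2f}")  # 4.00
print(f"Change: {score_2 - score_1:+.2f}")  # +0.50
```

A rising average between rounds suggests your changes helped; the same arithmetic works per question, per task, or across the whole test.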
Measure and ask
In addition to your questions, you should actually measure what your testers do. How long did they take for each task? What areas of the interface did they interact with? What errors did they make? Build a click-by-click or tap-by-tap flow of each tester’s actions to see how they really used the software, and cross-reference this data with your question results.
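One simple way to build that flow is to timestamp every interaction and look at the gaps between them. This is a minimal sketch under assumed data: the element names and timestamps are invented, and a real app would capture these events through its own analytics layer:

```python
from datetime import datetime

# Hypothetical tap-by-tap event log for one tester:
# (timestamp, UI element that was tapped).
events = [
    (datetime(2024, 1, 1, 10, 0, 0), "home_screen"),
    (datetime(2024, 1, 1, 10, 0, 4), "search_field"),
    (datetime(2024, 1, 1, 10, 0, 21), "search_button"),
    (datetime(2024, 1, 1, 10, 0, 35), "checkout_button"),
]

# Seconds spent before each interaction: the gap between consecutive taps.
durations = [
    (later[1], (later[0] - earlier[0]).total_seconds())
    for earlier, later in zip(events, events[1:])
]

for element, seconds in durations:
    print(f"{element}: reached after {seconds:.0f}s")

total = (events[-1][0] - events[0][0]).total_seconds()
print(f"Task completed in {total:.0f}s")
```

An unusually long gap before one element (here, 17 seconds before "search_button") is exactly the kind of signal to cross-reference with your questions: did the tester hesitate because the control was hard to find?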
Ask for explanations
Collecting yes-or-no answers, or even answers on a scale, doesn’t necessarily reveal your testers’ thought process, so ask them to expand. Why did they choose a certain answer? Why were they happy with this or unhappy with that? You’ll gather actionable insights by asking for explanations from time to time.
Rinse and repeat
Don’t forget that the purpose of usability testing is to make your software more usable. That won’t happen unless you analyze and act on the data that comes in after the first round of testing and then measure again in round two. Make sure the changes you made had a positive impact. The more you rinse and repeat, the better your software will get.