In previous lectures, we've talked about Nielsen's 10 heuristics for good user interface design. In this lecture, I'm going to introduce a way to use these heuristics right now to improve a user interface that you're working on: a technique called heuristic evaluation. This is a discount user experience research method, so called because it's cheaper and faster than usability testing, which is usually considered the gold standard for finding problems in a user interface design. It's cheaper and faster because you don't need users. Instead, it's an inspection method: you perform a systematic close read of a user interface and apply the heuristics we've been discussing to find and explain problems with the interface.

The way it works is this. First, you choose a set of screens or interactions to focus on. This could be an entire system, or it could be a subset of the system that you particularly want to examine. You then step through those screens and interactions, applying the heuristics to find potential problems. Remember to test error cases, because some heuristics look specifically at preventing errors or helping users recover from them. Also be sure to look at the help system and how it's accessed, if there is one.

Write down every violation of the heuristics you come across, no matter how big or small; don't worry about filtering in the first pass. For each problem, note which heuristics it violates. You're also going to assess the severity of each problem: is this going to prevent users from completing their tasks, or is it just going to annoy them? Finally, create a prioritized list of the most important problems that need to be fixed.

So, I mentioned that you need to assess the severity.
Here's a rating scale that's often used in heuristic evaluation. It's a four-point scale, ranging from one, a cosmetic problem with no real usability impact (something that's perhaps just annoying), to four, a usability catastrophe that is imperative to fix, because users will not be able to make successful use of the system if the problem isn't addressed.

Here's an example of the kind of problem you can find with a heuristic evaluation, and how you might communicate it to the designer of a system so they know what needs to be fixed and how to fix it. The finding is that the total duration, or time remaining, is not displayed in the video player embedded in this particular site. The severity of this issue has been rated three out of four, indicating a pretty serious problem, but not necessarily a showstopper. The heuristic violated is visibility of system status, because the user cannot tell how much of the video they have seen and how much is left. A more detailed description states that the videos in the courses do not display their total duration and explains how this violates the heuristic: while there is some indication of progress through the video, it's not enough for a user to plan their work and make sense of how much time is left. A screenshot indicates precisely where in the interface the problem occurs. Finally, a recommendation is offered about how it might be fixed. In this case, it's very simple: add the total time remaining so that users can understand what's going on.

When performing a heuristic evaluation, it's not uncommon to come up with dozens of violations in even a simple interface. However, a really long list of problems can be less useful, when you're deciding what to fix, than a short list of the most important ones.
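One way to keep findings consistent across a report is to record each one in a fixed structure. Here's a minimal Python sketch of that idea; the field names and the `Severity` labels are my own shorthand for the four-point scale described above, not a standard format. It encodes the video-player example as one finding:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """The four-point scale from the lecture (labels paraphrased)."""
    COSMETIC = 1      # cosmetic only; no real usability impact
    MINOR = 2         # minor problem; low priority
    MAJOR = 3         # serious, but not necessarily a showstopper
    CATASTROPHE = 4   # usability catastrophe; imperative to fix

@dataclass
class Finding:
    """One heuristic violation, with the fields used in the example report."""
    title: str            # short statement of the problem
    heuristic: str        # which of Nielsen's heuristics is violated
    severity: Severity
    description: str      # how and why the heuristic is violated
    recommendation: str   # suggested fix
    screenshot: str = ""  # pointer to where the problem appears

# The video-player example from the lecture, encoded as a Finding:
example = Finding(
    title="Total duration / time remaining not shown in video player",
    heuristic="Visibility of system status",
    severity=Severity.MAJOR,
    description=("Videos do not display their total duration, so users "
                 "cannot plan their work or judge how much time is left."),
    recommendation="Add the total time remaining to the player controls.",
)
```

Keeping severity as a number (rather than free text) makes the later prioritization step a simple sort.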
So, the next thing you want to do after assessing severity is prioritize. Pick out the top 5 to 10 problems, highlight them, and rank them in decreasing order of severity, with the most severe problem at the top and less severe problems further down. Also make sure you use the heuristics to explain why the problems matter, especially if you're working in a team: you're trying to communicate to decision makers or designers not only what the problems are, but why they're problems and why it might be important to fix them.

Whenever possible, when performing a heuristic evaluation, it's better to have multiple eyes looking at the same thing rather than just one person. Research on heuristic evaluation has shown that, on average, one evaluator will find about 35 percent, roughly a third, of the true problems that would be found with a full-fledged usability test. If you add more evaluators, you find more of the problems. With five evaluators, you're able to find, on average, about 75 percent of the problems you would find from a usability test. Once you go past five, you don't get much more bang for your buck: 10 evaluators find only about 85 percent. You've doubled the number of evaluators from five to 10, but you've only found 10 percent more of the problems. So the sweet spot is really between three and five evaluators. If you can get a team of three to five people to each perform an independent heuristic evaluation of an interactive system, then pool the problems they find and agree on the severity and the priority, you can do a very good job of approximating what you would learn from conducting a full-fledged usability test. However, solo evaluation can still be very valuable: you'll still find about a third of the problems you would have found from a usability test, and usually you're going to find the most important ones.
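The pooling-and-prioritizing step above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the lecture: it assumes each evaluator's findings are recorded as (problem, severity) pairs on the four-point scale, and it resolves duplicate reports of the same problem by keeping the highest severity any evaluator assigned.

```python
def prioritize(evaluations, top_n=10):
    """Pool findings from independent evaluators, merge duplicates,
    and return the top_n problems in decreasing order of severity.

    `evaluations` is a list (one entry per evaluator) of lists of
    (problem_id, severity) pairs, with severity rated 1-4.
    """
    merged = {}
    for findings in evaluations:
        for problem_id, severity in findings:
            # Duplicate reports keep the highest severity assigned.
            merged[problem_id] = max(merged.get(problem_id, 0), severity)
    ranked = sorted(merged.items(), key=lambda item: item[1], reverse=True)
    return ranked[:top_n]

# Three evaluators report overlapping problems (hypothetical data):
evaluator_a = [("no-duration-shown", 3), ("tiny-close-button", 2)]
evaluator_b = [("no-duration-shown", 4), ("no-undo", 4)]
evaluator_c = [("tiny-close-button", 1)]

print(prioritize([evaluator_a, evaluator_b, evaluator_c], top_n=5))
# [('no-duration-shown', 4), ('no-undo', 4), ('tiny-close-button', 2)]
```

The mechanical part (merging and sorting) is trivial; the real work in a team evaluation is the discussion where evaluators agree on what counts as the same problem and on its severity.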
Heuristic evaluation is often used as a proxy for user testing. The advantages are that heuristic evaluation is generally cheaper and faster, and you don't have to use up potential users. Recruiting can be difficult, and you may want to save your users for when the system is closer to being ready, or for when you need the kind of information that only a usability test can give you. User testing, on the other hand, is more realistic: you generally find more problems than you do through heuristic evaluation, and it allows you to assess user experience qualities beyond usability, such as usefulness and desirability.

In fact, heuristic evaluation and user testing, along with other UX methods, are typically combined in an iterative design and evaluation process. For example, you might perform user testing early in the process to make sure that you're on the right track and have the right functionality covered. Then, as you get a more refined prototype, you might perform a heuristic evaluation to shake out the bugs in the design you've produced. After you fix those bugs, to see whether you really got it right, you might perform more user testing, and you might go around like this, using multiple methods to zero in on the problems that are most important to fix.

To sum up, heuristic evaluation is a quick and inexpensive method for finding significant flaws in a user interface. To perform a heuristic evaluation, you use Nielsen's heuristics to inspect each screen or state of the user interface, including erroneous inputs and the help system; you document each violation and assess its severity. Then you prioritize the biggest problems you found and document them in a way that will help guide further development and improvement of the user interface.