Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics (Interactive Technologies)

Thomas Tullis, William Albert

Language: English

Pages: 336

ISBN: 0123735580

Format: PDF / Kindle (mobi) / ePub

Effectively measuring the usability of any product requires choosing the right metric, applying it, and making good use of the information it reveals. Measuring the User Experience provides the first single source of practical information to enable usability professionals and product developers to do just that. Authors Tullis and Albert organize dozens of metrics into six categories: performance, issues-based, self-reported, web navigation, derived, and behavioral/physiological. They explore each metric, considering the best methods for collecting, analyzing, and presenting the data, and provide step-by-step guidance for measuring the usability of any type of product using any type of technology.

• Presents criteria for selecting the most appropriate metric for every case
• Takes a product- and technology-neutral approach
• Presents in-depth case studies to show how organizations have successfully used the metrics and the information they revealed

Research
7.4.3 Other Measures
7.5 Summary

CHAPTER 8 Combined and Comparative Metrics
8.1 Single Usability Scores
8.1.1 Combining Metrics Based on Target Goals
8.1.2 Combining Metrics Based on Percentages
8.1.3 Combining Metrics Based on Z Scores
8.1.4 Using the Single Usability Metric
8.2 Usability Scorecards
8.3 Comparison to Goals and Expert Performance
8.3.1 Comparison to Goals
8.3.2 Comparison to Expert Performance
8.4 Summary
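The z-score approach listed under 8.1.3 can be sketched roughly as follows. The metric values are invented and the scoring scheme is a simplified assumption for illustration, not the book's exact procedure:

```python
# Hypothetical illustration of combining usability metrics via z-scores.
# All data are invented; the averaging scheme is a simplifying assumption.
from statistics import mean, pstdev

def z_scores(values):
    """Standardize a list of raw scores to mean 0, standard deviation 1."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Invented raw metrics for four design alternatives.
task_times = [42.0, 55.0, 38.0, 61.0]      # seconds (lower is better)
success_rates = [0.90, 0.75, 0.95, 0.70]   # proportion (higher is better)

# Negate time z-scores so that higher always means better, then average
# each design's z-scores into a single combined score.
z_time = [-z for z in z_scores(task_times)]
z_success = z_scores(success_rates)
combined = [mean(pair) for pair in zip(z_time, z_success)]

best = combined.index(max(combined))
print(f"Design {best + 1} has the highest combined score")
```

Standardizing first puts metrics with very different units (seconds, proportions) on a common scale before they are averaged.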

mobile phones, web applications used as part of your job, and even the software program we used to write this book. These products need to be both easy to use and highly efficient. The amount of effort required to send a text message or download an application needs to be kept to a minimum. Most of us have very little time or patience for products that are difficult and inefficient to use. The first metric we would recommend is task time. Measuring the amount of time required to complete a set of
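The task-time metric recommended above can be summarized in a few lines; the completion times below are invented for illustration:

```python
# A minimal sketch of summarizing task-time data; the times are invented.
from statistics import mean, median

# Hypothetical completion times (in seconds) for one task across participants.
times = [34.2, 41.5, 28.9, 55.0, 37.7, 120.4]

print(f"mean:   {mean(times):.1f} s")
print(f"median: {median(times):.1f} s")  # less sensitive to the 120.4 s outlier
```

Reporting the median alongside the mean is a common choice because a single slow participant can inflate the mean considerably.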

mystery novels. Would it be appropriate to include a task that involves trying to find a book that the store doesn’t carry, such as a science-fiction novel? If one of the goals of the study is to assess how well users can determine what the store does not carry, we think it could make sense. In the real world, when you come to a new website, you don’t automatically know everything that can or can’t be done using the site. A well-designed site not only makes it clear what is available on the

usability problems. Team A used only 6 participants, whereas Team H used 12. At first glance, this might be seen as evidence for the magic number 5, as a team that tested only 6 participants uncovered as many problems as a team that tested 12. But a more detailed analysis reveals a different conclusion. In looking specifically at the overlap of usability issues between just these two reports, they found only 28% in common. More than 70% of the problems were uncovered by only one of the two teams,
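The 28% figure is an overlap calculation of the kind sketched below; the issue identifiers are invented, not taken from the study:

```python
# A sketch of measuring issue overlap between two teams' reports.
# Issue identifiers are invented for illustration.
team_a = {"nav-1", "form-2", "label-3", "search-4", "error-5"}
team_h = {"nav-1", "form-2", "home-6", "cart-7", "login-8"}

common = team_a & team_h       # issues found by both teams
all_issues = team_a | team_h   # every distinct issue found by either team

overlap = len(common) / len(all_issues)
print(f"{overlap:.0%} of issues were found by both teams")
```

With these invented sets, 2 of 8 distinct issues are shared, so the overlap is 25%; the remaining issues were each found by only one team.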

example of getting more reliable ratings of a feature of a website by asking for several different ratings of the feature and then averaging them together. Tullis (1998) conducted a study that focused on possible homepage designs for a website. (In fact, the designs were really just templates containing “placeholder” or “Lorem Ipsum” text.) One of the techniques used for comparing the designs was to ask participants in the study to rate the designs on three rating scales: page format,
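The averaging technique described can be sketched as follows. The excerpt names only the "page format" scale, so the other scale names and all rating values below are invented:

```python
# A sketch of averaging several ratings per design for a more reliable score.
# Only "page format" appears in the excerpt; the other scales and all
# values are invented for illustration.
from statistics import mean

# One participant's 1-7 ratings of two homepage designs on three scales.
ratings = {
    "Design A": {"page format": 6, "attractiveness": 5, "use of color": 6},
    "Design B": {"page format": 4, "attractiveness": 3, "use of color": 5},
}

averages = {design: mean(scores.values()) for design, scores in ratings.items()}
for design, avg in averages.items():
    print(f"{design}: {avg:.2f}")
```

Averaging several related scales tends to smooth out the noise in any single rating, which is the reliability gain the excerpt describes.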
