Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, Second Edition

Book description

Whether it's software, a cell phone, or a refrigerator, your customer not only wants but expects your product to be easy to use. This fully revised handbook provides clear, step-by-step guidelines to help you test your product for usability. Completely updated with current industry best practices, it can give you that all-important marketplace advantage: products that perform the way users expect. You'll learn to recognize factors that limit usability, decide where testing should occur, set up a test plan to assess goals for your product's usability, and more.

Table of contents

  1. Copyright
  2. About the Authors
  3. Credits
  4. Acknowledgments
  5. Foreword
  6. Preface to the Second Edition
  7. 1. Usability Testing: An Overview
    1. 1. What Makes Something Usable?
      1. 1.1. What Do We Mean by "Usable"?
      2. 1.2. What Makes Something Less Usable?
        1. 1.2.1. Five Reasons Why Products Are Hard to Use
          1. 1.2.1.1. Reason 1: Development Focuses on the Machine or System
          2. 1.2.1.2. Reason 2: Target Audiences Expand and Adapt
          3. 1.2.1.3. Reason 3: Designing Usable Products Is Difficult
          4. 1.2.1.4. Reason 4: Team Specialists Don't Always Work in Integrated Ways
          5. 1.2.1.5. Reason 5: Design and Implementation Don't Always Match
      3. 1.3. What Makes Products More Usable?
          1. 1.3.1.1. An Early Focus on Users and Tasks
          2. 1.3.1.2. Evaluation and Measurement of Product Usage
          3. 1.3.1.3. Iterative Design and Testing
        2. 1.3.2. Attributes of Organizations That Practice UCD
          1. 1.3.2.1. Phases That Include User Input
          2. 1.3.2.2. A Multidisciplinary Team Approach
          3. 1.3.2.3. Concerned, Enlightened Management
          4. 1.3.2.4. A "Learn as You Go" Perspective
          5. 1.3.2.5. Defined Usability Goals and Objectives
      4. 1.4. What Are Techniques for Building in Usability?
        1. 1.4.1. Ethnographic Research
        2. 1.4.2. Participatory Design
        3. 1.4.3. Focus Group Research
        4. 1.4.4. Surveys
        5. 1.4.5. Walk-Throughs
        6. 1.4.6. Open and Closed Card Sorting
        7. 1.4.7. Paper Prototyping
        8. 1.4.8. Expert or Heuristic Evaluations
        9. 1.4.9. Usability Testing
        10. 1.4.10. Follow-Up Studies
    2. 2. What Is Usability Testing?
      1. 2.1. Why Test? Goals of Testing
        1. 2.1.1. Informing Design
        2. 2.1.2. Eliminating Design Problems and Frustration
        3. 2.1.3. Improving Profitability
      2. 2.2. Basics of the Methodology
        1. 2.2.1. Basic Elements of Usability Testing
        2. 2.2.2. Limitations of Testing
    3. 3. When Should You Test?
      1. 3.1. Our Types of Tests: An Overview
      2. 3.2. Exploratory or Formative Study
        1. 3.2.1. When
        2. 3.2.2. Objective
        3. 3.2.3. Overview of the Methodology
        4. 3.2.4. Example of Exploratory Study
      3. 3.3. Assessment or Summative Test
        1. 3.3.1. When
        2. 3.3.2. Objective
        3. 3.3.3. Overview of the Methodology
      4. 3.4. Validation or Verification Test
        1. 3.4.1. When
        2. 3.4.2. Objective
        3. 3.4.3. Overview of the Methodology
      5. 3.5. Comparison Test
        1. 3.5.1. When
        2. 3.5.2. Objective
        3. 3.5.3. Overview of the Methodology
      6. 3.6. Iterative Testing: Test Types through the Lifecycle
        1. 3.6.1. Test 1: Exploratory/Comparison Test
          1. 3.6.1.1. The situation
          2. 3.6.1.2. Main Research Questions
          3. 3.6.1.3. Brief Summary of Outcome
        2. 3.6.2. Test 2: Assessment Test
          1. 3.6.2.1. The Situation
          2. 3.6.2.2. Main Test Objectives
          3. 3.6.2.3. Brief Summary of Test Outcome
        3. 3.6.3. Test 3: Verification Test
          1. 3.6.3.1. The Situation
          2. 3.6.3.2. Test Objectives
          3. 3.6.3.3. Brief Summary of Test Outcome
    4. 4. Skills for Test Moderators
        1. 4.1.1. Who Should Moderate?
        2. 4.1.2. Human Factors Specialist
        3. 4.1.3. Marketing Specialist
        4. 4.1.4. Technical Communicator
        5. 4.1.5. Rotating Team Members
        6. 4.1.6. External Consultant
      2. 4.2. Characteristics of a Good Test Moderator
        1. 4.2.1. Grounding in the Basics of User-Centered Design
        2. 4.2.2. Quick Learner
        3. 4.2.3. Instant Rapport with Participants
        4. 4.2.4. Excellent Memory
        5. 4.2.5. Good Listener
        6. 4.2.6. Comfortable with Ambiguity
        7. 4.2.7. Flexibility
        8. 4.2.8. Long Attention Span
        9. 4.2.9. Empathic "People Person"
        10. 4.2.10. "Big Picture" Thinker
        11. 4.2.11. Good Communicator
        12. 4.2.12. Good Organizer and Coordinator
      3. 4.3. Getting the Most out of Your Participants
        1. 4.3.1. Choose the Right Format
          1. 4.3.1.1. Sit-By Sessions versus Observing from Elsewhere
          2. 4.3.1.2. "Think-Aloud" Advantages and Disadvantages
          3. 4.3.1.3. Retrospective Review
        2. 4.3.2. Give Participants Time to Work through Hindrances
        3. 4.3.3. Offer Appropriate Encouragement
      4. 4.4. Troubleshooting Typical Moderating Problems
        1. 4.4.1. Leading Rather Than Enabling
        2. 4.4.2. Too Involved with the Act of Data Collection
        3. 4.4.3. Acting Too Knowledgeable
        4. 4.4.4. Too Rigid with the Test Plan
        5. 4.4.5. Not Relating Well to Each Participant
        6. 4.4.6. Jumping to Conclusions
      5. 4.5. How to Improve Your Session-Moderating Skills
        1. 4.5.1. Learn the Basic Principles of Human Factors/Ergonomics
        2. 4.5.2. Learn from Watching Others
        3. 4.5.3. Watch Yourself on Tape
        4. 4.5.4. Work with a Mentor
        5. 4.5.5. Practice Moderating Sessions
        6. 4.5.6. Learn to Meditate
        7. 4.5.7. Practice "Bare Attention"
  8. 2. The Process for Conducting a Test
    1. 5. Develop the Test Plan
      1. 5.1. Why Create a Test Plan?
        1. 5.1.1. It Serves as a Blueprint for the Test
        2. 5.1.2. It Serves as the Main Communication Vehicle
        3. 5.1.3. It Defines or Implies Required Resources
        4. 5.1.4. It Provides a Focal Point for the Test and a Milestone
      2. 5.2. The Parts of a Test Plan
        1. 5.2.1. Review the Purpose and Goals of the Test
          1. 5.2.1.1. When Not to Test
          2. 5.2.1.2. Good Reasons to Test
        2. 5.2.2. Communicate Research Questions
        3. 5.2.3. Summarize Participant Characteristics
        4. 5.2.4. Describe the Method
          1. 5.2.4.1. Independent Groups Design or Between-Subjects Design
          2. 5.2.4.2. Within-Subjects Design
          3. 5.2.4.3. Testing Multiple Product Versions
          4. 5.2.4.4. Testing Multiple User Groups
        5. 5.2.5. List the Tasks
          1. 5.2.5.1. Parts of a Task for the Test Plan
          2. 5.2.5.2. Tips for Developing the Task List
          3. 5.2.5.3. Example Task: Navigation Tab on a Web Site
          4. 5.2.5.4. Ways to Prioritize Tasks
        6. 5.2.6. Describe the Test Environment, Equipment, and Logistics
        7. 5.2.7. Explain What the Moderator Will Do
        8. 5.2.8. List the Data You Will Collect
          1. 5.2.8.1. Sample Performance Measures
          2. 5.2.8.2. Qualitative Data
          3. 5.2.8.3. Sample Preference Measures
        9. 5.2.9. Describe How the Results Will Be Reported
      3. 5.3. Sample Test Plan
    2. 6. Set Up a Testing Environment
      1. 6.1. Decide on a Location and Space
        1. 6.1.1. In a Lab or at the User's Site?
        2. 6.1.2. Test in Multiple Geographic Locations?
        3. 6.1.3. Arranging Sessions at a User's Site
          1. 6.1.3.1. Minimalist Portable Test Lab
        4. 6.1.4. Setting up a Permanent or Fixed Test Lab
          1. 6.1.4.1. Simple Single-Room Setup
          2. 6.1.4.2. Modified Single-Room Setup
          3. 6.1.4.3. Large Single-Room Setup
          4. 6.1.4.4. Electronic Observation Room Setup
          5. 6.1.4.5. Classic Testing Laboratory Setup
      2. 6.2. Recommended Testing Environment: Minimalist Portable Lab
      3. 6.3. Gather and Check Equipment, Artifacts, and Tools
        1. 6.3.1. Basic Equipment, Tools, and Props
        2. 6.3.2. Gathering Biometric Data
      4. 6.4. Identify Co-Researchers, Assistants, and Observers
        1. 6.4.1. Data Gatherer/Note Taker
        2. 6.4.2. Timekeeper
        3. 6.4.3. Product/Technical Expert(s)
        4. 6.4.4. Additional Testing Roles
        5. 6.4.5. Test Observers
    3. 7. Find and Select Participants
      1. 7.1. Characterize Users
        1. 7.1.1. Visualize the Test Participant
        2. 7.1.2. Differentiate between Purchaser and End User
        3. 7.1.3. Look for Information about Users
          1. 7.1.3.1. Requirements and Specification Documents
          2. 7.1.3.2. Structured Analyses or Marketing Studies
          3. 7.1.3.3. Product Manager (R&D)
          4. 7.1.3.4. Product Manager (Marketing)
          5. 7.1.3.5. Competitive Benchmarking and Analysis Group
      2. 7.2. Define the Criteria for Each User Group
        1. 7.2.1. Define Expertise
        2. 7.2.2. Specify Requirements and Classifiers for Selection
        3. 7.2.3. Document the User Profile
        4. 7.2.4. Divide the User Profile into Distinct Categories
        5. 7.2.5. Consider a Matrix Test Design
      3. 7.3. Determine the Number of Participants to Test
      4. 7.4. Write the Screening Questionnaire
        1. 7.4.1. Review the Profile to Understand Users' Backgrounds
        2. 7.4.2. Identify Specific Selection Criteria
        3. 7.4.3. Formulate Screening Questions
        4. 7.4.4. Organize the Questions in a Specific Order
        5. 7.4.5. Develop a Format for Easy Flow through the Questionnaire
        6. 7.4.6. Test the Questionnaire on Colleagues and Revise It
        7. 7.4.7. Consider Creating an "Answer Sheet"
      5. 7.5. Find Sources of Participants
        1. 7.5.1. Internal Participants
        2. 7.5.2. Qualified Friends and Family
        3. 7.5.3. Web Site Sign-Up
        4. 7.5.4. Existing Customers from In-House Lists
        5. 7.5.5. Existing Customers through Sales Representatives
        6. 7.5.6. User Groups or Clubs, Churches, or Other Community Groups
        7. 7.5.7. Societies and Associations
        8. 7.5.8. Referrals from Personal Networks, Coworkers, and Other Participants
        9. 7.5.9. Craigslist
        10. 7.5.10. College Campuses
        11. 7.5.11. Market Research Firms or Recruiting Specialists
        12. 7.5.12. Employment Agencies
        13. 7.5.13. Newspaper Advertisements
      6. 7.6. Screen and Select Participants
        1. 7.6.1. Screening Considerations
          1. 7.6.1.1. Use the Questionnaire or Open-Ended Interview Questions?
          2. 7.6.1.2. Complete the Screener Always, or Only When Fully Qualified?
        2. 7.6.2. Conduct Screening Interviews
          1. 7.6.2.1. Inform the Potential Participant Who You Are
          2. 7.6.2.2. Explain Why You Are Calling and How You Got the Contact Information
          3. 7.6.2.3. Go through the Questions in the Questionnaire
          4. 7.6.2.4. As You Eliminate or Accept People, Mark Them Off on Your List
        3. 7.6.3. Include a Few Least Competent Users in Every Testing Sample
        4. 7.6.4. Beware of Inadvertently Testing Only the "Best" People
        5. 7.6.5. Expect to Make Tradeoffs
      7. 7.7. Schedule and Confirm Participants
        1. 7.7.1. Compensate Participants
        2. 7.7.2. Protect Participants' Privacy and Personal Information
    4. 8. Prepare Test Materials
      1. 8.1. Guidelines for Observers
      2. 8.2. Orientation Script
        1. 8.2.1. Keep the Tone of the Script Professional, but Friendly
        2. 8.2.2. Keep the Speech Short
        3. 8.2.3. Plan to Read the Script to Each Participant Verbatim
        4. 8.2.4. Write the Orientation Script Out
          1. 8.2.4.1. Make Introductions
          2. 8.2.4.2. Offer Refreshments
          3. 8.2.4.3. Explain Why the Participant Is Here
          4. 8.2.4.4. Describe the Testing Setup
          5. 8.2.4.5. Explain What Is Expected of the Participant
          6. 8.2.4.6. Assure the Participant That He or She Is Not Being Tested
          7. 8.2.4.7. Explain Any Unusual Requirements
          8. 8.2.4.8. Mention That It Is Okay to Ask Questions at Any Time
          9. 8.2.4.9. Ask for Any Questions
          10. 8.2.4.10. Refer to Any Forms That Need to Be Completed and Pass Them Out
      3. 8.3. Background Questionnaire
        1. 8.3.1. Focus on Characteristics That May Influence Performance
        2. 8.3.2. Make the Questionnaire Easy to Fill Out and Compile
        3. 8.3.3. Test the Questionnaire
        4. 8.3.4. Decide How to Administer the Questionnaire
      4. 8.4. Data Collection Tools
        1. 8.4.1. Review the Research Question(s) Outlined in Your Test Plan
        2. 8.4.2. Decide What Type of Information to Collect
        3. 8.4.3. Select a Data Collection Method
          1. 8.4.3.1. Fully Automated Data Loggers
          2. 8.4.3.2. Online Data Collection
          3. 8.4.3.3. User-Generated Data Collection
          4. 8.4.3.4. Manual Data Collection
          5. 8.4.3.5. Other Data Collection Methods
      5. 8.5. Nondisclosures, Consent Forms, and Recording Waivers
      6. 8.6. Pre-Test Questionnaires and Interviews
        1. 8.6.1. Discover Attitudes and First Impressions
        2. 8.6.2. Learn about Whether Participants Value the Product
        3. 8.6.3. Qualify Participants for Inclusion into One Test Group or Another
        4. 8.6.4. Establish the Participant's Prerequisite Knowledge Prior to Using the Product
      7. 8.7. Prototypes or Products to Test
      8. 8.8. Task Scenarios
        1. 8.8.1. Provide Realistic Scenarios, Complete with Motivations to Perform
        2. 8.8.2. Sequence the Task Scenarios in Order
        3. 8.8.3. Match the Task Scenarios to the Experience of the Participants
        4. 8.8.4. Avoid Using Jargon and Cues
        5. 8.8.5. Try to Provide a Substantial Amount of Work in Each Scenario
        6. 8.8.6. Give Participants the Tasks to Do
          1. 8.8.6.1. Reading Task Scenarios to the Participants
          2. 8.8.6.2. Letting the Participants Read Task Scenarios Themselves
      9. 8.9. Optional Training Materials
        1. 8.9.1. Ensure Minimum Expertise
        2. 8.9.2. Get a View of the User after Experiencing the Product
        3. 8.9.3. You Want to Test Features for Advanced Users
        4. 8.9.4. What Are the Benefits of Prerequisite Training?
          1. 8.9.4.1. You Can Conduct a More Comprehensive, Challenging Usability Test
          2. 8.9.4.2. You Can Test Functionality That Might Otherwise Get Overlooked During a Test
          3. 8.9.4.3. Developing the Training Forces You to Understand How Someone Learns to Use Your Product
        5. 8.9.5. Some Common Questions about Prerequisite Training
      10. 8.10. Post-Test Questionnaire
        1. 8.10.1. Use the Research Question(s) from the Test Plan as the Basis for Your Content
        2. 8.10.2. Develop Questionnaires That Will Be Distributed Either during or after a Session
        3. 8.10.3. Ask Questions Related to That Which You Cannot Directly Observe
        4. 8.10.4. Develop the Basic Areas and Topics You Want to Cover
        5. 8.10.5. Design the Questions and Responses for Simplicity and Brevity
        6. 8.10.6. Use the Pilot Test to Refine the Questionnaire
      11. 8.11. Common Question Formats
        1. 8.11.1. Likert Scales
        2. 8.11.2. Semantic Differentials
        3. 8.11.3. Fill-In Questions
        4. 8.11.4. Checkbox Questions
        5. 8.11.5. Branching Questions
      12. 8.12. Debriefing Guide
    5. 9. Conduct the Test Sessions
      1. 9.1. Guidelines for Moderating Test Sessions
        1. 9.1.1. Moderate the Session Impartially
        2. 9.1.2. Be Aware of the Effects of Your Voice and Body Language
        3. 9.1.3. Treat Each New Participant as an Individual
        4. 9.1.4. If Appropriate, Use the "Thinking Aloud" Technique
          1. 9.1.4.1. Advantages of the "Thinking Aloud" Technique
          2. 9.1.4.2. Disadvantages of the "Thinking Aloud" Technique
          3. 9.1.4.3. How to Enhance the "Thinking Aloud" Technique
        5. 9.1.5. Probe and Interact with the Participant as Appropriate
        6. 9.1.6. Stay Objective, But Keep the Tone Relaxed
        7. 9.1.7. Don't "Rescue" Participants When They Struggle
        8. 9.1.8. If You Make a Mistake, Continue On
        9. 9.1.9. Ensure That Participants Are Finished Before Going On
        10. 9.1.10. Assist the Participants Only as a Last Resort
          1. 9.1.10.1. When to Assist
          2. 9.1.10.2. How to Assist
      2. 9.2. Checklists for Getting Ready
        1. 9.2.1. Checklist 1: A Week or So before the Test
          1. 9.2.1.1. Take the Test Yourself
          2. 9.2.1.2. Conduct a Pilot Test
          3. 9.2.1.3. Revise the Product
          4. 9.2.1.4. Check Out All the Equipment and the Testing Environment
          5. 9.2.1.5. Request a Temporary "Freeze" on Development
        2. 9.2.2. Checklist 2: One Day before the Test
          1. 9.2.2.1. Check That the Video Equipment Is Set Up and Ready
          2. 9.2.2.2. Check That the Product, if Software or Hardware, Is Working
          3. 9.2.2.3. Assemble All Written Test Materials
          4. 9.2.2.4. Check on the Status of Your Participants
          5. 9.2.2.5. Double-Check the Test Environment and Equipment
        3. 9.2.3. Checklist 3: The Day of the Test
          1. 9.2.3.1. Prepare Yourself Mentally
          2. 9.2.3.2. Greet the Participant
          3. 9.2.3.3. Have the Participant Fill Out and Sign Any Preliminary Documents
          4. 9.2.3.4. Read the Orientation Script and Set the Stage
          5. 9.2.3.5. Have the Participant Fill Out Any Pre-Test Questionnaires
          6. 9.2.3.6. Move to the Testing Area and Prepare to Test
          7. 9.2.3.7. Start Recordings
          8. 9.2.3.8. Set Decorum for Observers in the Room
          9. 9.2.3.9. Provide Any Prerequisite Training If Your Test Plan Includes It
          10. 9.2.3.10. Either Distribute or Read the Written Task Scenario(s) to the Participant
          11. 9.2.3.11. Record Start Time, Observe the Participant, and Collect All Critical Data
          12. 9.2.3.12. Have the Participant Complete All Post-Test Questionnaires
          13. 9.2.3.13. Debrief the Participant
          14. 9.2.3.14. Close the Session
          15. 9.2.3.15. Organize Data Collection and Observation Sheets
          16. 9.2.3.16. Debrief with Observers
          17. 9.2.3.17. Provide Adequate Time between Test Sessions
          18. 9.2.3.18. Prepare for the Next Participant
      3. 9.3. When to Intervene
        1. 9.3.1. When to Deviate from the Test Plan
      4. 9.4. What Not to Say to Participants
    6. 10. Debrief the Participant and Observers
      1. 10.1. Why Review with Participants and Observers?
      2. 10.2. Techniques for Reviewing with Participants
      3. 10.3. Where to Hold the Participant Debriefing Session
      4. 10.4. Basic Debriefing Guidelines
      5. 10.5. Advanced Debriefing Guidelines and Techniques
        1. 10.5.1. "Replay the Test" Technique
          1. 10.5.1.1. The Manual Method
          2. 10.5.1.2. The Video Method
        2. 10.5.2. Audio Record the Debriefing Session
        3. 10.5.3. Reviewing Alternate Designs
        4. 10.5.4. "What Did You Remember?" Technique
        5. 10.5.5. "Devil's Advocate" Technique
          1. 10.5.5.1. How to Implement the "Devil's Advocate" Technique
          2. 10.5.5.2. Example of the "Devil's Advocate" Technique
      6. 10.6. Reviewing and Reaching Consensus with Observers
        1. 10.6.1. Why Review with Observers?
        2. 10.6.2. Between Sessions
        3. 10.6.3. At the End of the Study
    7. 11. Analyze Data and Observations
      1. 11.1. Compile Data
        1. 11.1.1. Begin Compiling Data as You Test
        2. 11.1.2. Organize Raw Data
      2. 11.2. Summarize Data
        1. 11.2.1. Summarize Performance Data
          1. 11.2.1.1. Task Accuracy
          2. 11.2.1.2. Task Timings
        2. 11.2.2. Summarize Preference Data
        3. 11.2.3. Compile and Summarize Other Measures
        4. 11.2.4. Summarize Scores by Group or Version
      3. 11.3. Analyze Data
        1. 11.3.1. Identify Tasks That Did Not Meet the Success Criterion
        2. 11.3.2. Identify User Errors and Difficulties
        3. 11.3.3. Conduct a Source of Error Analysis
        4. 11.3.4. Prioritize Problems
        5. 11.3.5. Analyze Differences between Groups or Product Versions
        6. 11.3.6. Using Inferential Statistics
    8. 12. Report Findings and Recommendations
      1. 12.1. What Is a Finding?
      2. 12.2. Shape the Findings
      3. 12.3. Draft the Report
        1. 12.3.1. Why Write a Report?
        2. 12.3.2. Organize the Report
          1. 12.3.2.1. Executive Summary
          2. 12.3.2.2. Method
          3. 12.3.2.3. Results
          4. 12.3.2.4. Findings and Recommendations (Discussion)
      4. 12.4. Develop Recommendations
        1. 12.4.1. Focus on Solutions That Will Have the Widest Impact
        2. 12.4.2. Ignore Political Considerations for the First Draft
        3. 12.4.3. Provide Both Short-Term and Long-Term Recommendations
        4. 12.4.4. Indicate Areas Where Further Research Is Required
        5. 12.4.5. Be Thorough
        6. 12.4.6. Make Supporting Material Available to Reviewers
      5. 12.5. Refine the Report Format
      6. 12.6. Create a Highlights Video or Presentation
        1. 12.6.1. Cautions about Highlights
        2. 12.6.2. Steps for Producing a Highlights Video
          1. 12.6.2.1. Consider the Points You Want to Make
          2. 12.6.2.2. Set Up a Spreadsheet to Plan and Document the Video
          3. 12.6.2.3. Pick the Clips
          4. 12.6.2.4. Review Timing and Organization
          5. 12.6.2.5. Draft Titles and Captions
          6. 12.6.2.6. Review and Wrap
  9. 3. Advanced Techniques
    1. 13. Variations on the Basic Method
      1. 13.1. Who? Testing with Special Populations
        1. 13.1.1. People Who Have Disabilities
          1. 13.1.1.1. Scheduling and Reminding
          2. 13.1.1.2. During the Session
        2. 13.1.2. Older Adults
          1. 13.1.2.1. Scheduling and Reminding
          2. 13.1.2.2. During the Session
        3. 13.1.3. Children
          1. 13.1.3.1. Scheduling and Reminding
          2. 13.1.3.2. During the Session
      2. 13.2. What? Prototypes versus Real Products
          1. 13.2.1.1. Paper and Other Low-Fi Prototypes
          2. 13.2.1.2. Clickable or Usable Prototypes
      3. 13.3. How? Techniques for Monitored Tests
        1. 13.3.1. Flexible Scripting
          1. 13.3.1.1. What You Get
          2. 13.3.1.2. How to Use It
        2. 13.3.2. Gradual Disclosure or Graduated Prompting
          1. 13.3.2.1. What You Get
          2. 13.3.2.2. How to Use It
        3. 13.3.3. Co-Discovery (Two Participants at a Time)
          1. 13.3.3.1. What You Get
          2. 13.3.3.2. How to Use It
        4. 13.3.4. Alpha or Beta Testing with Favored Clients
          1. 13.3.4.1. What You Get
          2. 13.3.4.2. How to Use It
        5. 13.3.5. Play Tests
          1. 13.3.5.1. What You Get
          2. 13.3.5.2. How to Use It
      4. 13.4. Where? Testing Outside a Lab
        1. 13.4.1. Remote Testing
          1. 13.4.1.1. What You Get
          2. 13.4.1.2. How to Use It
        2. 13.4.2. Automated Testing
          1. 13.4.2.1. What You Get
          2. 13.4.2.2. How to Use It
        3. 13.4.3. Testing In-Home or On-Site
          1. 13.4.3.1. What You Get
          2. 13.4.3.2. How to Use It
      5. 13.5. Self-Reporting (Surveys, Diary Studies)
          1. 13.5.1.1. What You Get
          2. 13.5.1.2. How to Use It
    2. 14. Expanding from Usability Testing to Designing the User Experience
      1. 14.1. Stealth Mode: Establish Value
        1. 14.1.1. Choose the First Project Carefully
          1. 14.1.1.1. Begin Your Education
          2. 14.1.1.2. Start Slowly and Conservatively, Get Buy-In
          3. 14.1.1.3. Volunteer Your Services
          4. 14.1.1.4. Create a Strategy and Business Case
      2. 14.2. Build on Successes
        1. 14.2.1. Set Up Long-Term Relationships
        2. 14.2.2. Sell Yourself and What You Are Doing
        3. 14.2.3. Strategize: Choose Your Battles Carefully
      3. 14.3. Formalize Processes and Practices
        1. 14.3.1. Establish a Central Residency for User-Centered Design
        2. 14.3.2. Add Usability-Related Activities to the Product Life Cycle
        3. 14.3.3. Educate Others within Your Organization
        4. 14.3.4. Identify and Cultivate Champions
        5. 14.3.5. Publicize the Usability Success Stories
        6. 14.3.6. Link Usability to Economic Benefits
      4. 14.4. Expand UCD throughout the Organization
        1. 14.4.1. Pursue More Formal Educational Opportunities
        2. 14.4.2. Standardize Participant Recruitment Policies and Procedures
        3. 14.4.3. Align Closely with Market Research and Industrial Design
        4. 14.4.4. Evaluate Product Usability in the Field after Product Release
        5. 14.4.5. Evaluate the Value of Your Usability Engineering Efforts
        6. 14.4.6. Develop Design Standards
        7. 14.4.7. Focus Your Efforts Early in the Product Life Cycle
        8. 14.4.8. Create User Profiles, Personas, and Scenarios
    3. A. Afterword

Product information

  • Title: Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, Second Edition
  • Author(s): Jeffrey Rubin, Dana Chisnell
  • Release date: May 2008
  • Publisher(s): Wiley
  • ISBN: 9780470185483