
Book description

Human factors and usability issues have traditionally played a limited role in security research and secure systems development. Security experts have largely ignored usability issues--both because they often failed to recognize the importance of human factors and because they lacked the expertise to address them.

But there is a growing recognition that today's security problems can be solved only by addressing issues of usability and human factors. Increasingly, well-publicized security breaches are attributed to human errors that might have been prevented through more usable software. Indeed, the world's future cyber-security depends upon the deployment of security technology that can be broadly used by untrained computer users.

Still, many people believe there is an inherent tradeoff between computer security and usability. It's true that a computer without passwords is usable, but not very secure. A computer that makes you authenticate every five minutes with a password and a fresh drop of blood might be very secure, but nobody would use it. Clearly, people need computers, and if they can't use one that's secure, they'll use one that isn't. Unfortunately, unsecured systems aren't usable for long, either. They get hacked, compromised, and otherwise rendered useless.

There is increasing agreement that we need to design secure systems that people can actually use, but less agreement about how to reach this goal. Security & Usability is the first book-length work describing the current state of the art in this emerging field. Edited by security experts Dr. Lorrie Faith Cranor and Dr. Simson Garfinkel, and authored by cutting-edge security and human-computer interaction (HCI) researchers worldwide, this volume is expected to become both a classic reference and an inspiration for future research.

Security & Usability groups 34 essays into six parts:

  • Realigning Usability and Security--with careful attention to user-centered design principles, security and usability can be synergistic.

  • Authentication Mechanisms--techniques for identifying and authenticating computer users.

  • Secure Systems--how system software can deliver or destroy a secure user experience.

  • Privacy and Anonymity Systems--methods for allowing people to control the release of personal information.

  • Commercializing Usability: The Vendor Perspective--specific experiences of security and software vendors (e.g., IBM, Microsoft, Lotus, Firefox, and Zone Labs) in addressing usability.

  • The Classics--groundbreaking papers that sparked the field of security and usability.

This book is expected to start an avalanche of discussion, new ideas, and further advances in this important field.

Table of Contents

  1. Special Upgrade Offer
  2. Preface
    1. Goals of This Book
    2. Audience for This Book
    3. Structure of This Book
    4. Conventions Used in This Book
    5. Safari Enabled
    6. How to Contact Us
    7. Acknowledgments
  3. I. Realigning Usability and Security
    1. One. Psychological Acceptability Revisited
      1. 1.1. Passwords
      2. 1.2. Patching
      3. 1.3. Configuration
      4. 1.4. Conclusion
      5. 1.5. About the Author
    2. Two. Why Do We Need It? How Do We Get It?
      1. 2.1. Introduction
      2. 2.2. Product: Human Factors, Policies, and Security Mechanisms
        1. 2.2.1. Impossible Demands
        2. 2.2.2. Awkward Behaviors
        3. 2.2.3. Beyond the User Interface
      3. 2.3. Process: Applying Human Factors Knowledge and User-Centered Approaches to Security Design
        1. 2.3.1. Security Is a Supporting Task
        2. 2.3.2. A Process for Designing Usable Secure Systems
      4. 2.4. Panorama: Understanding the Importance of the Environment
        1. 2.4.1. The Role of Education, Training, Motivation, and Persuasion
        2. 2.4.2. Building a Security Culture
      5. 2.5. Conclusion
      6. 2.6. About the Authors
    3. Three. Design for Usability
      1. 3.1. Death by Security
      2. 3.2. Balance Security and Usability
        1. 3.2.1. Exploit Differences Between Users and Bad Guys
        2. 3.2.2. Exploit Differences in Physical Location
        3. 3.2.3. Vary Security with the Task
        4. 3.2.4. Increase Your Partnership with Users
          1. 3.2.4.1. Trust the user
          2. 3.2.4.2. Exploit the special skills of users
          3. 3.2.4.3. Remove or reduce the user’s burden
        5. 3.2.5. Achieve Balanced Authentication Design
          1. 3.2.5.1. Remove unnecessary password restrictions
          2. 3.2.5.2. The Doctor and password madness
        6. 3.2.6. Balance Resource Allocation
      3. 3.3. Balance Privacy and Security
      4. 3.4. Build a Secure Internet
        1. 3.4.1. Ringworld
          1. 3.4.1.1. Within the Castle Keep
          2. 3.4.1.2. Within the Ramparts
          3. 3.4.1.3. The Town Wall
          4. 3.4.1.4. Beyond the Town Wall
        2. 3.4.2. Ringworld Interface
      5. 3.5. Conclusion
      6. 3.6. About the Author
    4. Four. Usability Design and Evaluation for Privacy and Security Solutions
      1. 4.1. Usability in the Software and Hardware Life Cycle
        1. 4.1.1. Unique Aspects of HCI and Usability in the Privacy and Security Domain
        2. 4.1.2. Usability in Requirements
        3. 4.1.3. Usability in Design and Development
        4. 4.1.4. Usability in Postrelease
      2. 4.2. Case Study: Usability Involvement in a Security Application
        1. 4.2.1. The Field Study
        2. 4.2.2. The User Tests
          1. 4.2.2.1. Test 1
          2. 4.2.2.2. Test 2
          3. 4.2.2.3. Test 3
        3. 4.2.3. The Return on Investment (ROI) Analysis
      3. 4.3. Case Study: Usability Involvement in the Development of a Privacy Policy Management Tool
        1. 4.3.1. Step One: Identifying Privacy Needs
        2. 4.3.2. Step Two: Performing In-Depth Interview Research
        3. 4.3.3. Step Three: Designing and Evaluating a Privacy Policy Prototype
        4. 4.3.4. Step Four: Evaluating Policy Authoring
      4. 4.4. Conclusion
      5. 4.5. About the Authors
    5. Five. Designing Systems That People Will Trust
      1. 5.1. Introduction
        1. 5.1.1. Definitions of Trust
        2. 5.1.2. The Nature of Trust in the Digital Sphere
      2. 5.2. The Trust-Risk Relationship
        1. 5.2.1. Technology Factors
        2. 5.2.2. Trust and Credibility
      3. 5.3. The Time-Course of Trust
      4. 5.4. Models of Trust
        1. 5.4.1. Early Work on Modeling Trust
        2. 5.4.2. Bhattacherjee’s Model of Trust
        3. 5.4.3. Lee, Kim, and Moon’s Model of Trust
        4. 5.4.4. Corritore’s Model of Trust
        5. 5.4.5. Egger’s Model of Trust
        6. 5.4.6. McKnight’s Model of Trust
        7. 5.4.7. Riegelsberger’s Model of Trust
        8. 5.4.8. Looking at the Models
      5. 5.5. Trust Designs
      6. 5.6. Future Research Directions
      7. 5.7. About the Authors
  4. II. Authentication Mechanisms
    1. Six. Evaluating Authentication Mechanisms
      1. 6.1. Authentication
        1. 6.1.1. Accessibility Barriers
        2. 6.1.2. Human Factors
        3. 6.1.3. Security
        4. 6.1.4. Context and Environment
      2. 6.2. Authentication Mechanisms
        1. 6.2.1. What the User Is—Biometrics
        2. 6.2.2. What the User Knows—Memometrics
          1. 6.2.2.1. Random passwords (uncued recall)
          2. 6.2.2.2. Cultural passwords (cued recall)
        3. 6.2.3. What the User Recognizes—Cognometrics
          1. 6.2.3.1. Recognition-based systems
          2. 6.2.3.2. Position-based systems
        4. 6.2.4. What the User Holds
        5. 6.2.5. Two-Factor Authentication
      3. 6.3. Quality Criteria
        1. 6.3.1. Accessibility
        2. 6.3.2. Memorability
        3. 6.3.3. Security
        4. 6.3.4. Cost
      4. 6.4. Environmental Considerations
        1. 6.4.1.1. Accessibility
        2. 6.4.1.2. Memorability
        3. 6.4.1.3. Security
        4. 6.4.1.4. Cost
      5. 6.5. Choosing a Mechanism
        1. 6.5.1. An Online Banking Example
          1. 6.5.1.1. The critical criterion: accessibility
          2. 6.5.1.2. The vital criterion: security
          3. 6.5.1.3. The significant criteria: memorability and cost
          4. 6.5.1.4. The incidental criterion: nothing
      6. 6.6. Conclusion
      7. 6.7. About the Author
    2. Seven. The Memorability and Security of Passwords
      1. 7.1. Introduction
      2. 7.2. Existing Advice on Password Selection
      3. 7.3. Experimental Study
      4. 7.4. Method
      5. 7.5. Results
      6. 7.6. Discussion
      7. 7.7. Acknowledgments
      8. 7.8. About the Authors
    3. Eight. Designing Authentication Systems with Challenge Questions
      1. 8.1. Challenge Questions as a Form of Authentication
        1. 8.1.1. Using Challenge Questions for Credential Recovery
        2. 8.1.2. Using Challenge Questions for Routine Authentication
      2. 8.2. Criteria for Building and Evaluating a Challenge Question System
        1. 8.2.1. Privacy Criteria
        2. 8.2.2. Security Criteria
        3. 8.2.3. Usability Criteria
      3. 8.3. Types of Questions and Answers
        1. 8.3.1. Question Types
        2. 8.3.2. Answer Types
      4. 8.4. Designing a Challenge Question Authentication System
        1. 8.4.1. Determining the Number of Questions to Use
        2. 8.4.2. Determining the Types of Questions and Answers to Use
          1. 8.4.2.1. Determining the appropriate question type
          2. 8.4.2.2. Determining the appropriate answer type
        3. 8.4.3. Complementary Security Techniques
      5. 8.5. Some Examples of Current Practice
        1. 8.5.1. About the Author
    4. Nine. Graphical Passwords
      1. 9.1. Introduction
      2. 9.2. A Picture Is Worth a Thousand Words
        1. 9.2.1. Image Recognition
        2. 9.2.2. Tapping or Drawing
        3. 9.2.3. Image Interpretation
      3. 9.3. Picture Perfect?
        1. 9.3.1. Security
          1. 9.3.1.1. Key generation
          2. 9.3.1.2. Authentication
        2. 9.3.2. Usability
        3. 9.3.3. Discussion
      4. 9.4. Let’s Face It
      5. 9.5. About the Authors
    5. Ten. Usable Biometrics
      1. 10.1. Introduction
        1. 10.1.1. Biometrics Types
        2. 10.1.2. Issues of Biometrics Specificity
        3. 10.1.3. The Fingerprint Example
      2. 10.2. Where Are Biometrics Used?
        1. 10.2.1. Physical Access Control
        2. 10.2.2. Immigration and Border Control
        3. 10.2.3. Law and Order
        4. 10.2.4. Transaction Security
      3. 10.3. Biometrics and Public Technology: The ATM Example
        1. 10.3.1. ATM Fingerprint Verification
        2. 10.3.2. ATM Face Verification
        3. 10.3.3. ATM Iris Verification
        4. 10.3.4. ATM Retina Verification
        5. 10.3.5. ATM Hand Verification
        6. 10.3.6. ATM Speaker Verification
        7. 10.3.7. ATM Signature Verification
        8. 10.3.8. ATM Typing Verification
      4. 10.4. Evaluating Biometrics
        1. 10.4.1. Performance Metrics
      5. 10.5. Incorporating User Factors into Testing
        1. 10.5.1. Size of User Base
        2. 10.5.2. Designing a Biometrics Solution to Maximize the User Experience
        3. 10.5.3. Enrollment
        4. 10.5.4. Biometrics Capture
        5. 10.5.5. Outliers and Fallback Strategies
          1. 10.5.5.1. Exception handling of outliers
          2. 10.5.5.2. Exception handling of temporary exclusions
          3. 10.5.5.3. Exception handling of aging
        6. 10.5.6. User Acceptance
          1. 10.5.6.1. Promoting user acceptance
          2. 10.5.6.2. Privacy
      6. 10.6. Conclusion
      7. 10.7. About the Author
    6. Eleven. Identifying Users from Their Typing Patterns
      1. 11.1. Typing Pattern Biometrics
      2. 11.2. Applications
        1. 11.2.1. Authentication
        2. 11.2.2. Identification and Monitoring
        3. 11.2.3. Password Hardening
        4. 11.2.4. Beyond Keyboards
      3. 11.3. Overview of Previous Research
      4. 11.4. Evaluating Previous Research
        1. 11.4.1. Classifier Accuracy
        2. 11.4.2. Usability
        3. 11.4.3. Confidence in Reported Results
      5. 11.5. Privacy and Security Issues
      6. 11.6. Conclusion
      7. 11.7. About the Authors
    7. Twelve. The Usability of Security Devices
      1. 12.1. Introduction
      2. 12.2. Overview of Security Devices
        1. 12.2.1. OTP Tokens
        2. 12.2.2. Smart Cards
        3. 12.2.3. USB Tokens
        4. 12.2.4. Biometrics Devices
      3. 12.3. Usability Testing of Security Devices
        1. 12.3.1. Setting Up the Test
        2. 12.3.2. Related Work
        3. 12.3.3. Usability Testing Methodology
      4. 12.4. A Usability Study of Cryptographic Smart Cards
        1. 12.4.1. Aim and Scope
        2. 12.4.2. Context and Roles Definition
        3. 12.4.3. User Selection
        4. 12.4.4. Task Definition
        5. 12.4.5. Measurement Apparatus
        6. 12.4.6. Processing for Statistical Significance
        7. 12.4.7. Computation of the Quality Attributes Scores
        8. 12.4.8. Results and Interpretation
        9. 12.4.9. Some Initial Conclusions
      5. 12.5. Recommendations and Open Research Questions
      6. 12.6. Conclusion
      7. 12.7. Acknowledgments
      8. 12.8. About the Authors
  5. III. Secure Systems
    1. Thirteen. Guidelines and Strategies for Secure Interaction Design
      1. 13.1. Introduction
        1. 13.1.1. Mental Models
        2. 13.1.2. Sources of Conflict
        3. 13.1.3. Iterative Design
        4. 13.1.4. Permission and Authority
      2. 13.2. Design Guidelines
        1. 13.2.1. Authorization
          1. 13.2.1.1. 1. Match the most comfortable way to do tasks with the least granting of authority.
          2. 13.2.1.2. 2. Grant authority to others in accordance with user actions indicating consent.
          3. 13.2.1.3. 3. Offer the user ways to reduce others’ authority to access the user’s resources.
          4. 13.2.1.4. 4. Maintain accurate awareness of others’ authority as relevant to user decisions.
          5. 13.2.1.5. 5. Maintain accurate awareness of the user’s own authority to access resources.
        2. 13.2.2. Communication
          1. 13.2.2.1. 6. Protect the user’s channels to agents that manipulate authority on the user’s behalf.
          2. 13.2.2.2. 7. Enable the user to express safe security policies in terms that fit the user’s task.
          3. 13.2.2.3. 8. Draw distinctions among objects and actions along boundaries relevant to the task.
          4. 13.2.2.4. 9. Present objects and actions using distinguishable, truthful appearances.
          5. 13.2.2.5. 10. Indicate clearly the consequences of decisions that the user is expected to make.
      3. 13.3. Design Strategies
        1. 13.3.1. Security by Admonition and Security by Designation
          1. 13.3.1.1. Security by admonition
          2. 13.3.1.2. Security by designation
          3. 13.3.1.3. Advantages of designation
          4. 13.3.1.4. Implementing security by designation
          5. 13.3.1.5. Implementing security by admonition
        2. 13.3.2. User-Assigned Identifiers
        3. 13.3.3. Applying the Strategies to Everyday Security Problems
          1. 13.3.3.1. Email viruses
          2. 13.3.3.2. Other viruses and spyware
          3. 13.3.3.3. Securing file access
          4. 13.3.3.4. Securing email access
          5. 13.3.3.5. Cookie management
          6. 13.3.3.6. Phishing attacks
          7. 13.3.3.7. Real implementations
      4. 13.4. Conclusion
      5. 13.5. Acknowledgments
      6. 13.6. About the Author
    2. Fourteen. Fighting Phishing at the User Interface
      1. 14.1. Introduction
        1. 14.1.1. Anatomy of a Phishing Attack
        2. 14.1.2. Phishing as a Semantic Attack
      2. 14.2. Attack Techniques
      3. 14.3. Defenses
        1. 14.3.1. Message Retrieval
          1. 14.3.1.1. Identity of the sender
          2. 14.3.1.2. Textual content of the message
        2. 14.3.2. Presentation
        3. 14.3.3. Action
        4. 14.3.4. System Operation
        5. 14.3.5. Case Study: SpoofGuard
      4. 14.4. Looking Ahead
      5. 14.5. About the Authors
    3. Fifteen. Sanitization and Usability
      1. 15.1. Introduction
      2. 15.2. The Remembrance of Data Passed Study
        1. 15.2.1. Other Anecdotal Information
        2. 15.2.2. Study Methodology
        3. 15.2.3. FORMAT Doesn’t Format
        4. 15.2.4. DELETE Doesn’t Delete
        5. 15.2.5. A Taxonomy of Sanitized Recovered Data
      3. 15.3. Related Work: Sanitization Standards, Software, and Practices
        1. 15.3.1. DoD 5220.22-M
        2. 15.3.2. Add-On Software
        3. 15.3.3. Operating System Modifications
      4. 15.4. Moving Forward: A Plan for Clean Computing
      5. 15.5. Acknowledgments
      6. 15.6. About the Author
    4. Sixteen. Making the Impossible Easy: Usable PKI
      1. 16.1. Public Key Infrastructures
      2. 16.2. Problems with Public Key Infrastructures
      3. 16.3. Making PKI Usable
        1. 16.3.1. Case Study: “Network-in-a-Box”
        2. 16.3.2. Case Study: Casca
        3. 16.3.3. Case Study: Usable Access Control for the World Wide Web
        4. 16.3.4. Instant PKIs
        5. 16.3.5. What Makes a PKI Instant?
      4. 16.4. About the Authors
    5. Seventeen. Simple Desktop Security with Chameleon
      1. 17.1. Introduction
        1. 17.1.1. File Organization in Chameleon
        2. 17.1.2. Interrole Communication and Network Access
        3. 17.1.3. Advanced Role Features
      2. 17.2. Chameleon User Interface
      3. 17.3. Chameleon Interface Development
        1. 17.3.1. Study 1: Paper Prototype (Security in Context)
        2. 17.3.2. Study 2: Paper Prototype (Security Mechanisms)
        3. 17.3.3. Study 3: Visual Basic Prototype
      4. 17.4. Chameleon Implementation
        1. 17.4.1. Window System Partitioning
        2. 17.4.2. Filesystem Security
        3. 17.4.3. Network Security and Interprocess Communication
        4. 17.4.4. Software Architecture for Usability and Security
      5. 17.5. Conclusion
      6. 17.6. Acknowledgments
      7. 17.7. About the Authors
    6. Eighteen. Security Administration Tools and Practices
      1. 18.1. Introduction
      2. 18.2. Attacks, Detection, and Prevention
      3. 18.3. Security Administrators
        1. 18.3.1. Profile of a Security Manager—Joe
        2. 18.3.2. Profile of a Security Engineer—Aaron
      4. 18.4. Security Administration: Cases from the Field
        1. 18.4.1. Security Checkup
          1. 18.4.1.1. Case 1: MyDoom
          2. 18.4.1.2. Case 2: Intrusion alert—false alarm
          3. 18.4.1.3. Case 3: Real-time network monitoring
          4. 18.4.1.4. Case 4: Security scan
        2. 18.4.2. Attack Analysis
          1. 18.4.2.1. Case 5: Persistent hackers
        3. 18.4.3. The Need for Security Administration Tools
      5. 18.5. Conclusion
      6. 18.6. Acknowledgments
      7. 18.7. About the Authors
  6. IV. Privacy and Anonymity Systems
    1. Nineteen. Privacy Issues and Human-Computer Interaction
      1. 19.1. Introduction
      2. 19.2. Privacy and HCI
      3. 19.3. Relevant HCI Research Streams
        1. 19.3.1. Usability Engineering
        2. 19.3.2. Computer-Supported Cooperative Work
        3. 19.3.3. Individual Differences
        4. 19.3.4. Ubiquitous Computing (Ubicomp)
      4. 19.4. Conclusion
      5. 19.5. About the Authors
    2. Twenty. A User-Centric Privacy Space Framework
      1. 20.1. Introduction
        1. 20.1.1. Privacy
        2. 20.1.2. Exoinformation
      2. 20.2. Security and Privacy Frameworks
        1. 20.2.1. Codes of Fair Information Practice
        2. 20.2.2. The ISTPA Privacy Framework
        3. 20.2.3. Schneier’s Security Processes Framework
        4. 20.2.4. The Privacy Space Framework
      3. 20.3. Researching the Privacy Space
        1. 20.3.1. Feature Analysis
          1. 20.3.1.1. Example 1: PGP Freeware
          2. 20.3.1.2. Example 2: WebWasher
          3. 20.3.1.3. Example 3: ZoneAlarm
          4. 20.3.1.4. Phase one results
        2. 20.3.2. Validation
      4. 20.4. Privacy as a Process
      5. 20.5. Conclusion
      6. 20.6. About the Author
    3. Twenty One. Five Pitfalls in the Design for Privacy
      1. 21.1. Introduction
        1. 21.1.1. Understanding
        2. 21.1.2. Action
      2. 21.2. Faces: (Mis)Managing Ubicomp Privacy
        1. 21.2.1. Faces Design
        2. 21.2.2. Formative Evaluation
      3. 21.3. Five Pitfalls to Heed When Designing for Privacy
        1. 21.3.1. Concerning Understanding
          1. 21.3.1.1. Pitfall 1: Obscuring potential information flow
          2. 21.3.1.2. Evidence: Falling into the pitfall
          3. 21.3.1.3. Evidence: Avoiding the pitfall
          4. 21.3.1.4. Pitfall 2: Obscuring actual information flow
          5. 21.3.1.5. Evidence: Falling into the pitfall
          6. 21.3.1.6. Evidence: Avoiding the pitfall
        2. 21.3.2. Concerning Action
          1. 21.3.2.1. Pitfall 3: Emphasizing configuration over action
          2. 21.3.2.2. Evidence: Falling into the pitfall
          3. 21.3.2.3. Evidence: Avoiding the pitfall
          4. 21.3.2.4. Pitfall 4: Lacking coarse-grained control
          5. 21.3.2.5. Evidence: Falling into the pitfall
          6. 21.3.2.6. Evidence: Avoiding the pitfall
          7. 21.3.2.7. Pitfall 5: Inhibiting established practice
          8. 21.3.2.8. Evidence: Falling into the pitfall
          9. 21.3.2.9. Evidence: Avoiding the pitfall
      4. 21.4. Discussion
        1. 21.4.1. Mental Models of Information Flow
        2. 21.4.2. Opportunities for Understanding and Action
        3. 21.4.3. Negative Case Study: Faces
        4. 21.4.4. Positive Case Study: Instant Messaging and Mobile Telephony
      5. 21.5. Conclusion
      6. 21.6. Acknowledgments
      7. 21.7. About the Authors
    4. Twenty Two. Privacy Policies and Privacy Preferences
      1. 22.1. Introduction
      2. 22.2. The Platform for Privacy Preferences (P3P)
        1. 22.2.1. How P3P Works
        2. 22.2.2. P3P User Agents
      3. 22.3. Privacy Bird Design
        1. 22.3.1. Capturing User Privacy Preferences
        2. 22.3.2. Communicating with Users About Web Site Privacy Policies
        3. 22.3.3. Privacy Icons
      4. 22.4. Privacy Bird Evaluation
        1. 22.4.1. User Survey
        2. 22.4.2. Laboratory Study
      5. 22.5. Beyond the Browser
      6. 22.6. About the Author
    5. Twenty Three. Privacy Analysis for the Casual User with Bugnosis
      1. 23.1. Introduction
      2. 23.2. The Audience for Bugnosis
      3. 23.3. Cookies, Web Bugs, and User Tracking
        1. 23.3.1. Tracing Alice Through the Web
          1. 23.3.1.1. Visiting multiple sites
          2. 23.3.1.2. Unique identification with referrers and third-party cookies
        2. 23.3.2. Using Web Bugs to Enable Clickstream Tracking
          1. 23.3.2.1. The web bug: a definition
          2. 23.3.2.2. What about second-party transactions?
        3. 23.3.3. Bugnosis: Theory of Operation
          1. 23.3.3.1. One-sided errors
          2. 23.3.3.2. Detecting but not blocking web bugs
        4. 23.3.4. Presenting the Analysis
        5. 23.3.5. Alerting the User
      4. 23.4. The Graphic Identity
      5. 23.5. Making It Simple Is Complicated
        1. 23.5.1. Using Browser Helper Objects and the Document Object Model
        2. 23.5.2. The Event Model
          1. 23.5.2.1. Provisional analysis
          2. 23.5.2.2. Rescanning, refreshing, and Old Paint
        3. 23.5.3. The Analysis Pane and Toolbar
          1. 23.5.3.1. The discovery dance
          2. 23.5.3.2. Making the Analysis Pane feel natural
          3. 23.5.3.3. Analyzing pop-up windows
        4. 23.5.4. Installation and Uninstallation
          1. 23.5.4.1. Installation/uninstallation problems
          2. 23.5.4.2. Windows XP Service Pack 2
      6. 23.6. Looking Ahead
        1. 23.6.1. Exposing Email Tracking
        2. 23.6.2. Platform for Privacy Preferences Project
        3. 23.6.3. Further Privacy Awareness Tools and Research
      7. 23.7. Acknowledgments
      8. 23.8. About the Author
    6. Twenty Four. Informed Consent by Design
      1. 24.1. Introduction
      2. 24.2. A Model of Informed Consent for Information Systems
        1. 24.2.1. Disclosure
        2. 24.2.2. Comprehension
        3. 24.2.3. Voluntariness
        4. 24.2.4. Competence
        5. 24.2.5. Agreement
        6. 24.2.6. Minimal Distraction
      3. 24.3. Possibilities and Limitations for Informed Consent: Redesigning Cookie Handling in a Web Browser
        1. 24.3.1. What Are Cookies and How Are They Used?
        2. 24.3.2. Web Browser as Gatekeeper to Informed Consent
        3. 24.3.3. Web Browser Development and Progress for Informed Consent: 1995-1999
        4. 24.3.4. Redesigning the Browser
        5. 24.3.5. Technical Limitations to Redesigning for Informed Consent
        6. 24.3.6. Reflections
      4. 24.4. Informing Through Interaction Design: What Users Understand About Secure Connections Through Their Web Browsing
        1. 24.4.1. Participants
        2. 24.4.2. Users’ Conceptions of Secure Connections
          1. 24.4.2.1. Definition of a secure connection
          2. 24.4.2.2. Recognition of a connection as secure or not secure
          3. 24.4.2.3. Visual portrayal of a secure connection
        3. 24.4.3. Reflections
      5. 24.5. The Scope of Informed Consent: Questions Motivated by Gmail
        1. 24.5.1. What Is Gmail?
        2. 24.5.2. How Gmail Advertisements Work
        3. 24.5.3. Gmail and the Six Components of Informed Consent
          1. 24.5.3.1. Disclosure
          2. 24.5.3.2. Comprehension
          3. 24.5.3.3. Voluntariness
          4. 24.5.3.4. Competence
          5. 24.5.3.5. Agreement
          6. 24.5.3.6. Minimal distraction
        4. 24.5.4. Two Questions Related to Informed Consent
          1. 24.5.4.1. The question of machines reading personal content
          2. 24.5.4.2. The question of indirect stakeholders
        5. 24.5.5. Reflections
        6. 24.5.6. Design Principles for Informed Consent for Information Systems
      6. 24.6. Acknowledgments
      7. 24.7. About the Authors
    7. Twenty Five. Social Approaches to End-User Privacy Management
      1. 25.1. A Concrete Privacy Problem
      2. 25.2. Acumen: A Solution Using Social Processes
        1. 25.2.1. Acumen Overview
        2. 25.2.2. The Acumen User Interface
      3. 25.3. Supporting Privacy Management Activities with Social Processes
        1. 25.3.1. Awareness and Motivation
          1. 25.3.1.1. Awareness and motivation in Acumen
        2. 25.3.2. Learning and Education
          1. 25.3.2.1. Learning and education in Acumen
        3. 25.3.3. Decision Making
          1. 25.3.3.1. Decision making and herd behavior
      4. 25.4. Deployment, Adoption, and Evaluation
        1. 25.4.1. Deployment and Adoption
          1. 25.4.1.1. Deployment and adoption in Acumen
        2. 25.4.2. User Needs Evaluation
          1. 25.4.2.1. User needs evaluation in Acumen
        3. 25.4.3. Technological Evaluation
          1. 25.4.3.1. Technological evaluation in Acumen
      5. 25.5. Gaming and Anti-gaming
        1. 25.5.1. Anti-gaming Techniques in Acumen
      6. 25.6. Generalizing Our Approach
        1. 25.6.1. Four Key Questions for a Privacy Management System
        2. 25.6.2. Sketching a System Design
      7. 25.7. Conclusion
      8. 25.8. About the Authors
    8. Twenty Six. Anonymity Loves Company: Usability and the Network Effect
      1. 26.1. Usability for Others Impacts Your Security
      2. 26.2. Usability Is Even More Important for Privacy
        1. 26.2.1. Case Study: Usability Means Users, Users Mean Security
        2. 26.2.2. Case Study: Against Options
        3. 26.2.3. Case Study: Mixminion and MIME
        4. 26.2.4. Case Study: Tor Installation, Marketing, and GUI
        5. 26.2.5. Case Study: JAP and its Anonym-o-meter
      3. 26.3. Bootstrapping, Confidence, and Reputability
      4. 26.4. Technical Challenges to Guessing the Number of Users in a Network
      5. 26.5. Conclusion
      6. 26.6. About the Authors
  7. V. Commercializing Usability: The Vendor Perspective
    1. Twenty Seven. ZoneAlarm: Creating Usable Security Products for Consumers
      1. 27.1. About ZoneAlarm
      2. 27.2. Design Principles
        1. 27.2.1. Know Your Audience
        2. 27.2.2. Think Like Your Audience
        3. 27.2.3. Eliminate Clutter
        4. 27.2.4. Eliminate Complexity
        5. 27.2.5. Create Just Enough Feedback
        6. 27.2.6. Be a Customer Advocate When Usability and Competitive Pressure Collide
      3. 27.3. Efficient Production for a Fast Market
      4. 27.4. Conclusion
      5. 27.5. About the Author
    2. Twenty Eight. Firefox and the Worry-Free Web
      1. 28.1. Usability and Security: Bridging the Gap
      2. 28.2. The Five Golden Rules
        1. 28.2.1. Identifying “The User”
        2. 28.2.2. 1. Enforce the Officer/Citizen Model
        3. 28.2.3. 2. Don’t Overwhelm the User
        4. 28.2.4. 3. Earn Your Users’ Trust
        5. 28.2.5. 4. Put Out Fires Quickly and Responsibly
        6. 28.2.6. 5. Teach Your Users Simple Tricks
      3. 28.3. Conclusion
      4. 28.4. About the Author
    3. Twenty Nine. Users and Trust: A Microsoft Case Study
      1. 29.1. Users and Trust
        1. 29.1.1. Users’ Reactions to Trust Questions
        2. 29.1.2. Users’ Behavior in Trust Situations
        3. 29.1.3. Security Versus Convenience
        4. 29.1.4. Making Decisions Versus Supporting Decisions
      2. 29.2. Consent Dialogs
        1. 29.2.1. Consent Dialog Redesign
      3. 29.3. Windows XP Service Pack 2—A Case Study
        1. 29.3.1. ActiveX Dialogs
        2. 29.3.2. File Download Dialogs
      4. 29.4. Pop-Up Blocking
      5. 29.5. The Ideal
      6. 29.6. Conclusion
      7. 29.7. About the Author
    4. Thirty. IBM Lotus Notes/Domino: Embedding Security in Collaborative Applications
      1. 30.1. Usable Secure Collaboration
      2. 30.2. Embedding and Simplifying Public Key Security
        1. 30.2.1. Signing and Decrypting Email
        2. 30.2.2. Encrypting Email
      3. 30.3. Designing Security Displays
        1. 30.3.1. User Security Panel
          1. 30.3.1.1. Displaying public key certificates
          2. 30.3.1.2. Limitations and results
        2. 30.3.2. Database Access Control Information
          1. 30.3.2.1. Adding power and complexity
      4. 30.4. User Control of Active Content Security
        1. 30.4.1. Deployment Study
        2. 30.4.2. Solutions and Challenges
      5. 30.5. Conclusion
      6. 30.6. About the Author
    5. Thirty One. Achieving Usable Security in Groove Virtual Office
      1. 31.1. About Groove Virtual Office
      2. 31.2. Groove Virtual Office Design
        1. 31.2.1. The Weakest Link
        2. 31.2.2. Do the Right Thing
        3. 31.2.3. Is That You, Alice?
        4. 31.2.4. Colorful Security
      3. 31.3. Administrators’ Strengths and Weaknesses
      4. 31.4. Security and Usability
      5. 31.5. About the Authors
  8. VI. The Classics
    1. Thirty Two. Users Are Not the Enemy
      1. 32.1. The Study
      2. 32.2. Users Lack Security Knowledge
      3. 32.3. Security Needs User-Centered Design
      4. 32.4. Motivating Users
      5. 32.5. Users and Password Behavior
      6. 32.6. About the Authors
    2. Thirty Three. Usability and Privacy: A Study of KaZaA P2P File Sharing
      1. 33.1. Introduction
        1. 33.1.1. Abuses on KaZaA Today
        2. 33.1.2. Unintended File Sharing Among KaZaA Users
        3. 33.1.3. Users Downloading Others’ Private Files
      2. 33.2. Usability Guidelines
      3. 33.3. Results of the Cognitive Walkthrough
        1. 33.3.1. Changing the Download File Directory
        2. 33.3.2. Sharing Files
        3. 33.3.3. Adding Files to the My Media Folder
        4. 33.3.4. Uploading Files
        5. 33.3.5. Summary of Usability Guidelines
      4. 33.4. A Two-Part User Study
        1. 33.4.1. Parts of the Study
          1. 33.4.1.1. KaZaA sharing comprehension questions
          2. 33.4.1.2. Current sharing settings discovery task
        2. 33.4.2. Results
          1. 33.4.2.1. KaZaA sharing comprehension questions
          2. 33.4.2.2. Current sharing settings discovery task
        3. 33.4.3. Suggested Design Improvements
      5. 33.5. Conclusion
      6. 33.6. Acknowledgments
      7. 33.7. About the Authors
    3. Thirty Four. Why Johnny Can’t Encrypt
      1. 34.1. Introduction
      2. 34.2. Understanding the Problem
        1. 34.2.1. Defining Usability for Security
        2. 34.2.2. Problematic Properties of Security
        3. 34.2.3. A Usability Standard for PGP
      3. 34.3. Evaluation Methods
      4. 34.4. Cognitive Walkthrough
        1. 34.4.1. Visual Metaphors
        2. 34.4.2. Different Key Types
        3. 34.4.3. Key Server
        4. 34.4.4. Key Management Policy
        5. 34.4.5. Irreversible Actions
        6. 34.4.6. Consistency
        7. 34.4.7. Too Much Information
      5. 34.5. User Test
        1. 34.5.1. Purpose
        2. 34.5.2. Description
          1. 34.5.2.1. Test design
          2. 34.5.2.2. Participants
        3. 34.5.3. Results
          1. 34.5.3.1. Avoiding dangerous errors
          2. 34.5.3.2. Figuring out how to encrypt with any key
          3. 34.5.3.3. Figuring out the correct key to encrypt with
          4. 34.5.3.4. Decrypting an email message
          5. 34.5.3.5. Publishing the public key
          6. 34.5.3.6. Getting other people’s public keys
          7. 34.5.3.7. Handling the mixed key types problem
          8. 34.5.3.8. Signing an email message
          9. 34.5.3.9. Verifying a signature on an email message
          10. 34.5.3.10. Creating a backup revocation certificate
          11. 34.5.3.11. Deciding whether to trust keys from the key server
      6. 34.6. Conclusion
        1. 34.6.1. Failure of Standard Interface Design
        2. 34.6.2. Usability Evaluation for Security
        3. 34.6.3. Toward Better Design Strategies
      7. 34.7. Related Work
      8. 34.8. Acknowledgments
      9. 34.9. About the Authors
  9. Index
  10. About the Authors
  11. Colophon
  12. Special Upgrade Offer
  13. Copyright