Software Testing: Principles and Practices

Book Description

Software Testing: Principles and Practices is a comprehensive treatise on software testing. It provides a pragmatic view of testing, addressing emerging areas like extreme testing and ad hoc testing.

Table of Contents

  1. Cover
  2. Title Page
  3. Contents
  4. Dedication
  5. Preface
  6. Foreword
  7. Part I - Setting the Context
    1. 1. Principles of Testing
      1. 1.1 Context of Testing in Producing Software
      2. 1.2 About this Chapter
      3. 1.3 The Incomplete Car
      4. 1.4 Dijkstra's Doctrine
      5. 1.5 A Test in Time!
      6. 1.6 The Cat and the Saint
      7. 1.7 Test the Tests First!
      8. 1.8 The Pesticide Paradox
      9. 1.9 The Convoy and the Rags
      10. 1.10 The Policemen on the Bridge
      11. 1.11 The Ends of the Pendulum
      12. 1.12 Men in Black
      13. 1.13 Automation Syndrome
      14. 1.14 Putting it All Together
      15. References
      16. Problems and Exercises
    2. 2. Software Development Life Cycle Models
      1. 2.1 Phases of Software Project
        1. 2.1.1 Requirements Gathering and Analysis
        2. 2.1.2 Planning
        3. 2.1.3 Design
        4. 2.1.4 Development or Coding
        5. 2.1.5 Testing
        6. 2.1.6 Deployment and Maintenance
      2. 2.2 Quality, Quality Assurance, and Quality Control
      3. 2.3 Testing, Verification, and Validation
      4. 2.4 Process Model to Represent Different Phases
      5. 2.5 Life Cycle Models
        1. 2.5.1 Waterfall Model
        2. 2.5.2 Prototyping and Rapid Application Development Models
        3. 2.5.3 Spiral or Iterative Model
        4. 2.5.4 The V Model
        5. 2.5.5 Modified V Model
        6. 2.5.6 Comparison of Various Life Cycle Models
      6. References
      7. Problems and Exercises
  8. Part II - Types of Testing
    1. 3. White Box Testing
      1. 3.1 What is White Box Testing?
      2. 3.2 Static Testing
        1. 3.2.1 Static Testing by Humans
        2. 3.2.2 Static Analysis Tools
      3. 3.3 Structural Testing
        1. 3.3.1 Unit/Code Functional Testing
        2. 3.3.2 Code Coverage Testing
        3. 3.3.3 Code Complexity Testing
      4. 3.4 Challenges in White Box Testing
      5. References
      6. Problems and Exercises
    2. 4. Black Box Testing
      1. 4.1 What is Black Box Testing?
      2. 4.2 Why Black Box Testing?
      3. 4.3 When to do Black Box Testing?
      4. 4.4 How to do Black Box Testing?
        1. 4.4.1 Requirements Based Testing
        2. 4.4.2 Positive and Negative Testing
        3. 4.4.3 Boundary Value Analysis
        4. 4.4.4 Decision Tables
        5. 4.4.5 Equivalence Partitioning
        6. 4.4.6 State Based or Graph Based Testing
        7. 4.4.7 Compatibility Testing
        8. 4.4.8 User Documentation Testing
        9. 4.4.9 Domain Testing
      5. 4.5 Conclusion
      6. References
      7. Problems and Exercises
    3. 5. Integration Testing
      1. 5.1 What is Integration Testing?
      2. 5.2 Integration Testing as a Type of Testing
        1. 5.2.1 Top-Down Integration
        2. 5.2.2 Bottom-Up Integration
        3. 5.2.3 Bi-Directional Integration
        4. 5.2.4 System Integration
        5. 5.2.5 Choosing Integration Method
      3. 5.3 Integration Testing as a Phase of Testing
      4. 5.4 Scenario Testing
        1. 5.4.1 System Scenarios
        2. 5.4.2 Use Case Scenarios
      5. 5.5 Defect Bash
        1. 5.5.1 Choosing the Frequency and Duration of Defect Bash
        2. 5.5.2 Selecting the Right Product Build
        3. 5.5.3 Communicating the Objective of Defect Bash
        4. 5.5.4 Setting up and Monitoring the Lab
        5. 5.5.5 Taking Actions and Fixing Issues
        6. 5.5.6 Optimizing the Effort Involved in Defect Bash
      6. 5.6 Conclusion
      7. References
      8. Problems and Exercises
    4. 6. System and Acceptance Testing
      1. 6.1 System Testing Overview
      2. 6.2 Why is System Testing Done?
      3. 6.3 Functional Versus Non-Functional Testing
      4. 6.4 Functional System Testing
        1. 6.4.1 Design/Architecture Verification
        2. 6.4.2 Business Vertical Testing
        3. 6.4.3 Deployment Testing
        4. 6.4.4 Beta Testing
        5. 6.4.5 Certification, Standards and Testing for Compliance
      5. 6.5 Non-Functional Testing
        1. 6.5.1 Setting up the Configuration
        2. 6.5.2 Coming up with Entry/Exit Criteria
        3. 6.5.3 Balancing Key Resources
        4. 6.5.4 Scalability Testing
        5. 6.5.5 Reliability Testing
        6. 6.5.6 Stress Testing
        7. 6.5.7 Interoperability Testing
      6. 6.6 Acceptance Testing
        1. 6.6.1 Acceptance Criteria
        2. 6.6.2 Selecting Test Cases for Acceptance Testing
        3. 6.6.3 Executing Acceptance Tests
      7. 6.7 Summary of Testing Phases
        1. 6.7.1 Multiphase Testing Model
        2. 6.7.2 Working Across Multiple Releases
        3. 6.7.3 Who Does What and When
      8. References
      9. Problems and Exercises
    5. 7. Performance Testing
      1. 7.1 Introduction
      2. 7.2 Factors Governing Performance Testing
      3. 7.3 Methodology for Performance Testing
        1. 7.3.1 Collecting Requirements
        2. 7.3.2 Writing Test Cases
        3. 7.3.3 Automating Performance Test Cases
        4. 7.3.4 Executing Performance Test Cases
        5. 7.3.5 Analyzing the Performance Test Results
        6. 7.3.6 Performance Tuning
        7. 7.3.7 Performance Benchmarking
        8. 7.3.8 Capacity Planning
      4. 7.4 Tools for Performance Testing
      5. 7.5 Process for Performance Testing
      6. 7.6 Challenges
      7. References
      8. Problems and Exercises
    6. 8. Regression Testing
      1. 8.1 What is Regression Testing?
      2. 8.2 Types of Regression Testing
      3. 8.3 When to do Regression Testing?
      4. 8.4 How to do Regression Testing?
        1. 8.4.1 Performing an Initial "Smoke" or "Sanity" Test
        2. 8.4.2 Understanding the Criteria for Selecting the Test Cases
        3. 8.4.3 Classifying Test Cases
        4. 8.4.4 Methodology for Selecting Test Cases
        5. 8.4.5 Resetting the Test Cases for Regression Testing
        6. 8.4.6 Concluding the Results of Regression Testing
      5. 8.5 Best Practices in Regression Testing
      6. References
      7. Problems and Exercises
    7. 9. Internationalization (I18n) Testing
      1. 9.1 Introduction
      2. 9.2 Primer on Internationalization
        1. 9.2.1 Definition of Language
        2. 9.2.2 Character Set
        3. 9.2.3 Locale
        4. 9.2.4 Terms Used in This Chapter
      3. 9.3 Test Phases for Internationalization Testing
      4. 9.4 Enabling Testing
      5. 9.5 Locale Testing
      6. 9.6 Internationalization Validation
      7. 9.7 Fake Language Testing
      8. 9.8 Language Testing
      9. 9.9 Localization Testing
      10. 9.10 Tools Used for Internationalization
      11. 9.11 Challenges and Issues
      12. References
      13. Problems and Exercises
    8. 10. Ad hoc Testing
      1. 10.1 Overview of Ad Hoc Testing
      2. 10.2 Buddy Testing
      3. 10.3 Pair Testing
        1. 10.3.1 Situations When Pair Testing Becomes Ineffective
      4. 10.4 Exploratory Testing
        1. 10.4.1 Exploratory Testing Techniques
      5. 10.5 Iterative Testing
      6. 10.6 Agile and Extreme Testing
        1. 10.6.1 XP Work Flow
        2. 10.6.2 Summary with an Example
      7. 10.7 Defect Seeding
      8. 10.8 Conclusion
      9. References
      10. Problems and Exercises
  9. Part III - Select Topics in Specialized Testing
    1. 11. Testing of Object-Oriented Systems
      1. 11.1 Introduction
      2. 11.2 Primer on Object-Oriented Software
      3. 11.3 Differences in OO Testing
        1. 11.3.1 Unit Testing a set of Classes
        2. 11.3.2 Putting Classes to Work Together—Integration Testing
        3. 11.3.3 System Testing and Interoperability of OO Systems
        4. 11.3.4 Regression Testing of OO Systems
        5. 11.3.5 Tools for Testing of OO Systems
        6. 11.3.6 Summary
      4. References
      5. Problems and Exercises
    2. 12. Usability and Accessibility Testing
      1. 12.1 What is Usability Testing?
      2. 12.2 Approach to Usability
      3. 12.3 When to do Usability Testing?
      4. 12.4 How to Achieve Usability?
      5. 12.5 Quality Factors for Usability
      6. 12.6 Aesthetics Testing
      7. 12.7 Accessibility Testing
        1. 12.7.1 Basic Accessibility
        2. 12.7.2 Product Accessibility
      8. 12.8 Tools for Usability
      9. 12.9 Usability Lab Setup
      10. 12.10 Test Roles for Usability
      11. 12.11 Summary
      12. References
      13. Problems and Exercises
  10. Part IV - People and Organizational Issues in Testing
    1. 13. Common People Issues
      1. 13.1 Perceptions and Misconceptions About Testing
        1. 13.1.1 “Testing is not Technically Challenging”
        2. 13.1.2 “Testing Does Not Provide me a Career Path or Growth”
      3. 13.1.3 “I Am Put in Testing—What is Wrong With Me?!”
        4. 13.1.4 “These Folks Are My Adversaries”
        5. 13.1.5 “Testing is What I Can Do in the End if I Get Time”
        6. 13.1.6 “There is no Sense of Ownership in Testing”
        7. 13.1.7 “Testing is only Destructive”
      2. 13.2 Comparison between Testing and Development Functions
      3. 13.3 Providing Career Paths for Testing Professionals
      4. 13.4 The Role of the Ecosystem and a Call for Action
        1. 13.4.1 Role of Education System
        2. 13.4.2 Role of Senior Management
        3. 13.4.3 Role of the Community
      5. References
      6. Problems and Exercises
    2. 14. Organization Structures for Testing Teams
      1. 14.1 Dimensions of Organization Structures
      2. 14.2 Structures in Single-Product Companies
        1. 14.2.1 Testing Team Structures for Single-Product Companies
        2. 14.2.2 Component-Wise Testing Teams
      3. 14.3 Structures for Multi-Product Companies
        1. 14.3.1 Testing Teams as Part of “CTO's Office”
        2. 14.3.2 Single Test Team for All Products
        3. 14.3.3 Testing Teams Organized by Product
        4. 14.3.4 Separate Testing Teams for Different Phases of Testing
        5. 14.3.5 Hybrid Models
      4. 14.4 Effects of Globalization and Geographically Distributed Teams on Product Testing
        1. 14.4.1 Business Impact of Globalization
        2. 14.4.2 Round the Clock Development/Testing Model
        3. 14.4.3 Testing Competency Center Model
        4. 14.4.4 Challenges in Global Teams
      5. 14.5 Testing Services Organizations
        1. 14.5.1 Business Need for Testing Services
        2. 14.5.2 Differences between Testing-as-a-Service and Product-Testing Organizations
        3. 14.5.3 Typical Roles and Responsibilities of Testing Services Organization
        4. 14.5.4 Challenges and Issues in Testing Services Organizations
      6. 14.6 Success Factors for Testing Organizations
      7. References
      8. Problems and Exercises
  11. Part V - Test Management and Automation
    1. 15. Test Planning, Management, Execution, and Reporting
      1. 15.1 Introduction
      2. 15.2 Test Planning
        1. 15.2.1 Preparing a Test Plan
        2. 15.2.2 Scope Management: Deciding Features to be Tested/Not Tested
        3. 15.2.3 Deciding Test Approach/Strategy
        4. 15.2.4 Setting up Criteria for Testing
        5. 15.2.5 Identifying Responsibilities, Staffing, and Training Needs
        6. 15.2.6 Identifying Resource Requirements
        7. 15.2.7 Identifying Test Deliverables
        8. 15.2.8 Testing Tasks: Size and Effort Estimation
        9. 15.2.9 Activity Breakdown and Scheduling
        10. 15.2.10 Communications Management
        11. 15.2.11 Risk Management
      3. 15.3 Test Management
        1. 15.3.1 Choice of Standards
        2. 15.3.2 Test Infrastructure Management
        3. 15.3.3 Test People Management
        4. 15.3.4 Integrating with Product Release
      4. 15.4 Test Process
        1. 15.4.1 Putting Together and Baselining a Test Plan
        2. 15.4.2 Test Case Specification
        3. 15.4.3 Update of Traceability Matrix
        4. 15.4.4 Identifying Possible Candidates for Automation
        5. 15.4.5 Developing and Baselining Test Cases
        6. 15.4.6 Executing Test Cases and Keeping Traceability Matrix Current
        7. 15.4.7 Collecting and Analyzing Metrics
        8. 15.4.8 Preparing Test Summary Report
        9. 15.4.9 Recommending Product Release Criteria
      5. 15.5 Test Reporting
        1. 15.5.1 Recommending Product Release
      6. 15.6 Best Practices
        1. 15.6.1 Process Related Best Practices
        2. 15.6.2 People Related Best Practices
        3. 15.6.3 Technology Related Best Practices
      7. Appendix A: Test Planning Checklist
      8. Appendix B: Test Plan Template
      9. References
      10. Problems and Exercises
    2. 16. Software Test Automation
      1. 16.1 What is Test Automation?
      2. 16.2 Terms Used in Automation
      3. 16.3 Skills Needed for Automation
      4. 16.4 What to Automate, Scope of Automation
        1. 16.4.1 Identifying the Types of Testing Amenable to Automation
        2. 16.4.2 Automating Areas Less Prone to Change
        3. 16.4.3 Automate Tests that Pertain to Standards
        4. 16.4.4 Management Aspects in Automation
      5. 16.5 Design and Architecture for Automation
        1. 16.5.1 External Modules
        2. 16.5.2 Scenario and Configuration File Modules
        3. 16.5.3 Test Cases and Test Framework Modules
        4. 16.5.4 Tools and Results Modules
        5. 16.5.5 Report Generator and Reports/Metrics Modules
      6. 16.6 Generic Requirements for Test Tool/Framework
      7. 16.7 Process Model for Automation
      8. 16.8 Selecting a Test Tool
        1. 16.8.1 Criteria for Selecting Test Tools
        2. 16.8.2 Steps for Tool Selection and Deployment
      9. 16.9 Automation for Extreme Programming Model
      10. 16.10 Challenges in Automation
      11. 16.11 Summary
      12. References
      13. Problems and Exercises
    3. 17. Test Metrics and Measurements
      1. 17.1 What are Metrics and Measurements?
      2. 17.2 Why Metrics in Testing?
      3. 17.3 Types of Metrics
      4. 17.4 Project Metrics
        1. 17.4.1 Effort Variance (Planned vs Actual)
        2. 17.4.2 Schedule Variance (Planned vs Actual)
        3. 17.4.3 Effort Distribution Across Phases
      5. 17.5 Progress Metrics
        1. 17.5.1 Test Defect Metrics
        2. 17.5.2 Development Defect Metrics
      6. 17.6 Productivity Metrics
        1. 17.6.1 Defects per 100 Hours of Testing
        2. 17.6.2 Test Cases Executed per 100 Hours of Testing
        3. 17.6.3 Test Cases Developed per 100 Hours of Testing
        4. 17.6.4 Defects per 100 Test Cases
        5. 17.6.5 Defects per 100 Failed Test Cases
        6. 17.6.6 Test Phase Effectiveness
        7. 17.6.7 Closed Defect Distribution
      7. 17.7 Release Metrics
      8. 17.8 Summary
      9. References
      10. Problems and Exercises
  12. Illustrations
  13. Bibliography
  14. Acknowledgements
  15. Copyright