Foundations of Software Testing, 2nd Edition

Book Description

This edition of

Table of Contents

  1. Cover
  2. Title Page
  3. Contents
  4. Dedication
  5. Preface to the Second Edition
  6. Preface to the First Edition
  7. Part I: Preliminaries
    1. Chapter 1: Preliminaries: Software Testing
      1. 1.1. Humans, Errors, and Testing
        1. 1.1.1 Errors, faults, and failures
        2. 1.1.2 Test automation
        3. 1.1.3 Developer and tester as two roles
      2. 1.2. Software Quality
        1. 1.2.1 Quality attributes
        2. 1.2.2 Reliability
      3. 1.3. Requirements, Behavior and Correctness
        1. 1.3.1 Input domain
        2. 1.3.2 Specifying program behavior
        3. 1.3.3 Valid and invalid inputs
      4. 1.4. Correctness Versus Reliability
        1. 1.4.1 Correctness
        2. 1.4.2 Reliability
        3. 1.4.3 Operational profiles
      5. 1.5. Testing and Debugging
        1. 1.5.1 Preparing a test plan
        2. 1.5.2 Constructing test data
        3. 1.5.3 Executing the program
        4. 1.5.4 Assessing program correctness
        5. 1.5.5 Constructing an oracle
      6. 1.6. Test Metrics
        1. 1.6.1 Organizational metrics
        2. 1.6.2 Project metrics
        3. 1.6.3 Process metrics
        4. 1.6.4 Product metrics: generic
        5. 1.6.5 Product metrics: OO software
        6. 1.6.6 Progress monitoring and trends
        7. 1.6.7 Static and dynamic metrics
        8. 1.6.8 Testability
      7. 1.7. Software and Hardware Testing
      8. 1.8. Testing and Verification
      9. 1.9. Defect Management
      10. 1.10. Test Generation Strategies
      11. 1.11. Static Testing
        1. 1.11.1 Walkthroughs
        2. 1.11.2 Inspections
        3. 1.11.3 Software complexity and static testing
      12. 1.12. Model-Based Testing and Model Checking
      13. 1.13. Types of Testing
        1. 1.13.1 Classifier: C1: Source of test generation
        2. 1.13.2 Classifier: C2: Life cycle phase
        3. 1.13.3 Classifier: C3: Goal-directed testing
        4. 1.13.4 Classifier: C4: Artifact under test
        5. 1.13.5 Classifier: C5: Test process models
      14. 1.14. The Saturation Effect
        1. 1.14.1 Confidence and true reliability
        2. 1.14.2 Saturation region
        3. 1.14.3 False sense of confidence
        4. 1.14.4 Reducing ∆
        5. 1.14.5 Impact on test process
      15. 1.15. Principles of Testing
      16. 1.16. Tools
      17. Summary
      18. Exercises
    2. Chapter 2: Preliminaries: Mathematical
      1. 2.1. Predicates and Boolean Expressions
      2. 2.2. Control Flow Graph
        1. 2.2.1 Basic blocks
        2. 2.2.2 Flow graphs
        3. 2.2.3 Paths
        4. 2.2.4 Basis paths
        5. 2.2.5 Path conditions and domains
        6. 2.2.6 Domain and computation errors
        7. 2.2.7 Static code analysis tools and static testing
      3. 2.3. Execution History
      4. 2.4. Dominators and Post-dominators
      5. 2.5. Program Dependence Graph
        1. 2.5.1 Data dependence
        2. 2.5.2 Control dependence
        3. 2.5.3 Call graph
      6. 2.6. Strings, Languages, and Regular Expressions
      7. 2.7. Tools
      8. Summary
      9. Exercises
  8. Part II: Test Generation
    1. Chapter 3: Domain Partitioning
      1. 3.1. Introduction
      2. 3.2. The Test Selection Problem
      3. 3.3. Equivalence Partitioning
        1. 3.3.1 Faults targeted
        2. 3.3.2 Relations
        3. 3.3.3 Equivalence classes for variables
        4. 3.3.4 Unidimensional partitioning versus multidimensional partitioning
        5. 3.3.5 A systematic procedure
        6. 3.3.6 Test selection
        7. 3.3.7 Impact of GUI design
      4. 3.4. Boundary Value Analysis
      5. 3.5. Category-Partition Method
        1. 3.5.1 Steps in the category-partition method
      6. Summary
      7. Exercises
    2. Chapter 4: Predicate Analysis
      1. 4.1. Introduction
      2. 4.2. Domain Testing
        1. 4.2.1 Domain errors
        2. 4.2.2 Border shifts
        3. 4.2.3 ON-OFF points
        4. 4.2.4 Undetected errors
        5. 4.2.5 Coincidental correctness
        6. 4.2.6 Paths to be tested
      3. 4.3. Cause-Effect Graphing
        1. 4.3.1 Notation used in cause-effect graphing
        2. 4.3.2 Creating cause-effect graphs
        3. 4.3.3 Decision table from cause-effect graph
        4. 4.3.4 Heuristics to avoid combinatorial explosion
        5. 4.3.5 Test generation from a decision table
      4. 4.4. Tests Using Predicate Syntax
        1. 4.4.1 A fault model
        2. 4.4.2 Missing or extra Boolean variable faults
        3. 4.4.3 Predicate constraints
        4. 4.4.4 Predicate testing criteria
        5. 4.4.5 BOR, BRO, and BRE adequate tests
        6. 4.4.6 BOR constraints for non-singular expressions
        7. 4.4.7 Cause-effect graphs and predicate testing
        8. 4.4.8 Fault propagation
        9. 4.4.9 Predicate testing in practice
      5. 4.5. Tests Using Basis Paths
      6. 4.6. Scenarios and Tests
      7. Summary
      8. Exercises
    3. Chapter 5: Test Generation From Finite State Models
      1. 5.1. Software Design and Testing
      2. 5.2. Finite State Machines
        1. 5.2.1 Excitation using an input sequence
        2. 5.2.2 Tabular representation
        3. 5.2.3 Properties of FSM
      3. 5.3. Conformance Testing
        1. 5.3.1 Reset inputs
        2. 5.3.2 The testing problem
      4. 5.4. A Fault Model
        1. 5.4.1 Mutants of FSMs
        2. 5.4.2 Fault coverage
      5. 5.5. Characterization Set
        1. 5.5.1 Construction of the k-equivalence partitions
        2. 5.5.2 Deriving the characterization set
        3. 5.5.3 Identification sets
      6. 5.6. The W-Method
        1. 5.6.1 Assumptions
        2. 5.6.2 Maximum number of states
        3. 5.6.3 Computation of the transition cover set
        4. 5.6.4 Constructing Z
        5. 5.6.5 Deriving a test set
        6. 5.6.6 Testing using the W-method
        7. 5.6.7 The error detection process
      7. 5.7. The Partial W-Method
        1. 5.7.1 Testing using the Wp-method for m = n
        2. 5.7.2 Testing using the Wp-method for m > n
      8. 5.8. The UIO-Sequence Method
        1. 5.8.1 Assumptions
        2. 5.8.2 UIO sequences
        3. 5.8.3 Core and non-core behavior
        4. 5.8.4 Generation of UIO sequences
        5. 5.8.5 Explanation of gen-uio
        6. 5.8.6 Distinguishing signatures
        7. 5.8.7 Test generation
        8. 5.8.8 Test optimization
        9. 5.8.9 Fault detection
      9. 5.9. Automata Theoretic Versus Control-Flow Based Techniques
        1. 5.9.1 n-switch-cover
        2. 5.9.2 Comparing automata theoretic methods
      10. 5.10. Tools
      11. Summary
      12. Exercises
    4. Chapter 6: Test Generation From Combinatorial Designs
      1. 6.1. Combinatorial Designs
        1. 6.1.1 Test configuration and test set
        2. 6.1.2 Modeling the input and configuration spaces
      2. 6.2. A Combinatorial Test Design Process
      3. 6.3. Fault Model
        1. 6.3.1 Fault vectors
      4. 6.4. Latin Squares
      5. 6.5. Mutually Orthogonal Latin Squares
      6. 6.6. Pairwise Design: Binary Factors
      7. 6.7. Pairwise Design: Multi-valued Factors
        1. 6.7.1 Shortcomings of using MOLS for test design
      8. 6.8. Orthogonal Arrays
        1. 6.8.1 Mixed-level orthogonal arrays
      9. 6.9. Covering and Mixed-level Covering Arrays
        1. 6.9.1 Mixed-level covering arrays
      10. 6.10. Arrays of Strength > 2
      11. 6.11. Generating Covering Arrays
      12. 6.12. Tools
      13. Summary
      14. Exercises
  9. Part III: Test Adequacy Assessment and Enhancement
    1. Chapter 7: Test Adequacy Assessment Using Control Flow and Data Flow
      1. 7.1. Test Adequacy: Basics
        1. 7.1.1 What is test adequacy?
        2. 7.1.2 Measurement of test adequacy
        3. 7.1.3 Test enhancement using measurements of adequacy
        4. 7.1.4 Infeasibility and test adequacy
        5. 7.1.5 Error detection and test enhancement
        6. 7.1.6 Single and multiple executions
      2. 7.2. Adequacy Criteria Based on Control Flow
        1. 7.2.1 Statement and block coverage
        2. 7.2.2 Conditions and decisions
        3. 7.2.3 Decision coverage
        4. 7.2.4 Condition coverage
        5. 7.2.5 Condition/decision coverage
        6. 7.2.6 Multiple condition coverage
        7. 7.2.7 Linear code sequence and jump (LCSAJ) coverage
        8. 7.2.8 Modified condition/decision coverage
        9. 7.2.9 MC/DC adequate tests for compound conditions
        10. 7.2.10 Definition of MC/DC coverage
        11. 7.2.11 Minimal MC/DC tests
        12. 7.2.12 Error detection and MC/DC adequacy
        13. 7.2.13 Short-circuit evaluation and infeasibility
        14. 7.2.14 Basis path coverage
        15. 7.2.15 Tracing test cases to requirements
      3. 7.3. Concepts From Data Flow
        1. 7.3.1 Definitions and uses
        2. 7.3.2 C-use and p-use
        3. 7.3.3 Global and local definitions and uses
        4. 7.3.4 Dataflow graph
        5. 7.3.5 Def-clear paths
        6. 7.3.6 Def-use pairs
        7. 7.3.7 Def-use chains
        8. 7.3.8 A little optimization
        9. 7.3.9 Data contexts and ordered data contexts
      4. 7.4. Adequacy Criteria Based on Data Flow
        1. 7.4.1 c-use coverage
        2. 7.4.2 p-use coverage
        3. 7.4.3 All-uses coverage
        4. 7.4.4 k-dr chain coverage
        5. 7.4.5 Using the k-dr chain coverage
        6. 7.4.6 Infeasible c- and p-uses
        7. 7.4.7 Context coverage
      5. 7.5. Control Flow Versus Data Flow
      6. 7.6. The “Subsumes” Relation
      7. 7.7. Structural and Functional Testing
      8. 7.8. Scalability of Coverage Measurement
      9. 7.9. Tools
      10. Summary
      11. Exercises
    2. Chapter 8: Test Adequacy Assessment Using Program Mutation
      1. 8.1. Introduction
      2. 8.2. Mutation and Mutants
        1. 8.2.1 First-order and higher order mutants
        2. 8.2.2 Syntax and semantics of mutants
        3. 8.2.3 Strong and weak mutations
        4. 8.2.4 Why mutate?
      3. 8.3. Test Assessment Using Mutation
        1. 8.3.1 A procedure for test adequacy assessment
        2. 8.3.2 Alternate procedures for test adequacy assessment
        3. 8.3.3 “Distinguished” versus “killed” mutants
        4. 8.3.4 Conditions for distinguishing a mutant
      4. 8.4. Mutation Operators
        1. 8.4.1 Operator types
        2. 8.4.2 Language dependence of mutation operators
      5. 8.5. Design of Mutation Operators
        1. 8.5.1 Goodness criteria for mutation operators
        2. 8.5.2 Guidelines
      6. 8.6. Founding Principles of Mutation Testing
        1. 8.6.1 The competent programmer hypothesis
        2. 8.6.2 The coupling effect
      7. 8.7. Equivalent Mutants
      8. 8.8. Fault Detection Using Mutation
      9. 8.9. Types of Mutants
      10. 8.10. Mutation Operators For C
        1. 8.10.1 What is not mutated?
        2. 8.10.2 Linearization
        3. 8.10.3 Execution sequence
        4. 8.10.4 Effect of an execution sequence
        5. 8.10.5 Global and local identifier sets
        6. 8.10.6 Global and local reference sets
        7. 8.10.7 Mutating program constants
        8. 8.10.8 Mutating operators
        9. 8.10.9 Binary operator mutations
        10. 8.10.10 Mutating statements
        11. 8.10.11 Mutating program variables
        12. 8.10.12 Structure Reference Replacement
      11. 8.11. Mutation Operators For Java
        1. 8.11.1 Traditional mutation operators
        2. 8.11.2 Inheritance
        3. 8.11.3 Polymorphism and dynamic binding
        4. 8.11.4 Method overloading
        5. 8.11.5 Java specific mutation operators
      12. 8.12. Comparison of Mutation Operators
      13. 8.13. Mutation Testing Within Budget
        1. 8.13.1 Prioritizing functions to be mutated
        2. 8.13.2 Selecting a subset of mutation operators
      14. 8.14. CASE and Program Testing
      15. 8.15. Tools
      16. Summary
      17. Exercises
  10. Part IV: Phases of Testing
    1. Chapter 9: Test Selection, Minimization, and Prioritization for Regression Testing
      1. 9.1. What is Regression Testing?
      2. 9.2. Regression Test Process
        1. 9.2.1 Revalidation, selection, minimization, and prioritization
        2. 9.2.2 Test setup
        3. 9.2.3 Test sequencing
        4. 9.2.4 Test execution
        5. 9.2.5 Output comparison
      3. 9.3. Regression Test Selection: The Problem
      4. 9.4. Selecting Regression Tests
        1. 9.4.1 Test all
        2. 9.4.2 Random selection
        3. 9.4.3 Selecting modification traversing tests
        4. 9.4.4 Test minimization
        5. 9.4.5 Test prioritization
      5. 9.5. Test Selection Using Execution Trace
        1. 9.5.1 Obtaining the execution trace
        2. 9.5.2 Selecting regression tests
        3. 9.5.3 Handling function calls
        4. 9.5.4 Handling changes in declarations
      6. 9.6. Test Selection Using Dynamic Slicing
        1. 9.6.1 Dynamic slicing
        2. 9.6.2 Computation of dynamic slices
        3. 9.6.3 Selecting tests
        4. 9.6.4 Potential dependence
        5. 9.6.5 Computing the relevant slice
        6. 9.6.6 Addition and deletion of statements
        7. 9.6.7 Identifying variables for slicing
        8. 9.6.8 Reduced dynamic dependence graph
      7. 9.7. Scalability of Test Selection Algorithms
      8. 9.8. Test Minimization
        1. 9.8.1 The set cover problem
        2. 9.8.2 A procedure for test minimization
      9. 9.9. Test Prioritization
      10. 9.10. Tools
      11. Summary
      12. Exercises
    2. Chapter 10: Unit Testing
      1. 10.1. Introduction
      2. 10.2. Context
      3. 10.3. Test Design
      4. 10.4. Using JUnit
      5. 10.5. Stubs and Mocks
        1. 10.5.1 Using mock objects
      6. 10.6. Tools
      7. Summary
      8. Exercises
    3. Chapter 11: Integration Testing
      1. 11.1. Introduction
      2. 11.2. Integration Errors
      3. 11.3. Dependence
        1. 11.3.1 Class relationships: static
        2. 11.3.2 Class relationships: dynamic
        3. 11.3.3 Class firewalls
        4. 11.3.4 Precise and imprecise relationships
      4. 11.4. OO versus Non-OO Programs
      5. 11.5. Integration Hierarchy
        1. 11.5.1 Choosing an integration strategy
        2. 11.5.2 Comparing integration strategies
        3. 11.5.3 Specific stubs and retesting
      6. 11.6. Finding A Near-optimal Test Order
        1. 11.6.1 The TD method
        2. 11.6.2 The TJJM method
        3. 11.6.3 The BLW method
        4. 11.6.4 Comparison of TD, TJJM, and the BLW methods
        5. 11.6.5 Which test order algorithm to select?
      7. 11.7. Test Generation
        1. 11.7.1 Data variety
        2. 11.7.2 Data constraints
      8. 11.8. Test Assessment
      9. 11.9. Tools
      10. Summary
      11. Exercises
  11. Acknowledgements
  12. Copyright
  13. Back Cover