Software Testing, 4th Edition

Book Description

This updated and reorganized fourth edition of Software Testing: A Craftsman’s Approach applies the strong mathematics content of previous editions to a coherent treatment of Model-Based Testing for both code-based (structural) and specification-based (functional) testing. These techniques are extended from the usual unit testing discussions to full coverage of the less understood levels of integration and system testing.

The Fourth Edition:

  • Emphasizes technical inspections and is supplemented by an appendix with a full package of documents required for a sample Use Case technical inspection
  • Introduces an innovative approach that merges the Event-Driven Petri Nets from the earlier editions with the "Swim Lane" concept from the Unified Modeling Language (UML), permitting model-based testing for four levels of interaction among constituents in a System of Systems
  • Introduces model-based development and provides an explanation of how to conduct testing within model-based development environments
  • Presents a new section on methods for testing software in an Agile programming environment
  • Explores test-driven development, reexamines all-pairs testing, and explains the four contexts of software testing

Thoroughly revised and updated, Software Testing: A Craftsman’s Approach, Fourth Edition is sure to become a standard reference for those who need to stay up to date with evolving technologies in software testing. Carrying on the tradition of previous editions, it will continue to serve software testers, developers, and engineers.

Table of Contents

      1. 1.1 Basic Definitions
      2. 1.2 Test Cases
      3. 1.3 Insights from a Venn Diagram
      4. 1.4 Identifying Test Cases
        1. 1.4.1 Specification-Based Testing
        2. 1.4.2 Code-Based Testing
        3. 1.4.3 Specification-Based versus Code-Based Debate
      5. 1.5 Fault Taxonomies
      6. 1.6 Levels of Testing
      7. References
      1. 2.1 Generalized Pseudocode
      2. 2.2 The Triangle Problem
        1. 2.2.1 Problem Statement
        2. 2.2.2 Discussion
        3. 2.2.3 Traditional Implementation
        4. 2.2.4 Structured Implementations
      3. 2.3 The NextDate Function
        1. 2.3.1 Problem Statement
        2. 2.3.2 Discussion
        3. 2.3.3 Implementations
      4. 2.4 The Commission Problem
        1. 2.4.1 Problem Statement
        2. 2.4.2 Discussion
        3. 2.4.3 Implementation
      5. 2.5 The SATM System
        1. 2.5.1 Problem Statement
        2. 2.5.2 Discussion
      6. 2.6 The Currency Converter
      7. 2.7 Saturn Windshield Wiper Controller
      8. 2.8 Garage Door Opener
      9. References
      1. 3.1 Set Theory
        1. 3.1.1 Set Membership
        2. 3.1.2 Set Definition
        3. 3.1.3 The Empty Set
        4. 3.1.4 Venn Diagrams
        5. 3.1.5 Set Operations
        6. 3.1.6 Set Relations
        7. 3.1.7 Set Partitions
        8. 3.1.8 Set Identities
      2. 3.2 Functions
        1. 3.2.1 Domain and Range
        2. 3.2.2 Function Types
        3. 3.2.3 Function Composition
      3. 3.3 Relations
        1. 3.3.1 Relations among Sets
        2. 3.3.2 Relations on a Single Set
      4. 3.4 Propositional Logic
        1. 3.4.1 Logical Operators
        2. 3.4.2 Logical Expressions
        3. 3.4.3 Logical Equivalence
      5. 3.5 Probability Theory
      6. Reference
      1. 4.1 Graphs
        1. 4.1.1 Degree of a Node
        2. 4.1.2 Incidence Matrices
        3. 4.1.3 Adjacency Matrices
        4. 4.1.4 Paths
        5. 4.1.5 Connectedness
        6. 4.1.6 Condensation Graphs
        7. 4.1.7 Cyclomatic Number
      2. 4.2 Directed Graphs
        1. 4.2.1 Indegrees and Outdegrees
        2. 4.2.2 Types of Nodes
        3. 4.2.3 Adjacency Matrix of a Directed Graph
        4. 4.2.4 Paths and Semipaths
        5. 4.2.5 Reachability Matrix
        6. 4.2.6 n-Connectedness
        7. 4.2.7 Strong Components
      3. 4.3 Graphs for Testing
        1. 4.3.1 Program Graphs
        2. 4.3.2 Finite State Machines
        3. 4.3.3 Petri Nets
        4. 4.3.4 Event-Driven Petri Nets
        5. 4.3.5 StateCharts
      4. References
      1. 5.1 Normal Boundary Value Testing
        1. 5.1.1 Generalizing Boundary Value Analysis
        2. 5.1.2 Limitations of Boundary Value Analysis
      2. 5.2 Robust Boundary Value Testing
      3. 5.3 Worst-Case Boundary Value Testing
      4. 5.4 Special Value Testing
      5. 5.5 Examples
        1. 5.5.1 Test Cases for the Triangle Problem
        2. 5.5.2 Test Cases for the NextDate Function
        3. 5.5.3 Test Cases for the Commission Problem
      6. 5.6 Random Testing
      7. 5.7 Guidelines for Boundary Value Testing
      1. 6.1 Equivalence Classes
      2. 6.2 Traditional Equivalence Class Testing
      3. 6.3 Improved Equivalence Class Testing
        1. 6.3.1 Weak Normal Equivalence Class Testing
        2. 6.3.2 Strong Normal Equivalence Class Testing
        3. 6.3.3 Weak Robust Equivalence Class Testing
        4. 6.3.4 Strong Robust Equivalence Class Testing
      4. 6.4 Equivalence Class Test Cases for the Triangle Problem
      5. 6.5 Equivalence Class Test Cases for the NextDate Function
        1. 6.5.1 Equivalence Class Test Cases
      6. 6.6 Equivalence Class Test Cases for the Commission Problem
      7. 6.7 Edge Testing
      8. 6.8 Guidelines and Observations
      9. References
      1. 7.1 Decision Tables
      2. 7.2 Decision Table Techniques
      3. 7.3 Test Cases for the Triangle Problem
      4. 7.4 Test Cases for the NextDate Function
        1. 7.4.1 First Try
        2. 7.4.2 Second Try
        3. 7.4.3 Third Try
      5. 7.5 Test Cases for the Commission Problem
      6. 7.6 Cause-and-Effect Graphing
      7. 7.7 Guidelines and Observations
      8. References
      1. 8.1 Program Graphs
        1. 8.1.1 Style Choices for Program Graphs
      2. 8.2 DD-Paths
      3. 8.3 Test Coverage Metrics
        1. 8.3.1 Program Graph–Based Coverage Metrics
        2. 8.3.2 E.F. Miller’s Coverage Metrics
          1. 8.3.2.1 Statement Testing
          2. 8.3.2.2 DD-Path Testing
          3. 8.3.2.3 Simple Loop Coverage
          4. 8.3.2.4 Predicate Outcome Testing
          5. 8.3.2.5 Dependent Pairs of DD-Paths
          6. 8.3.2.6 Complex Loop Coverage
          7. 8.3.2.7 Multiple Condition Coverage
          8. 8.3.2.8 “Statistically Significant” Coverage
          9. 8.3.2.9 All Possible Paths Coverage
        3. 8.3.3 A Closer Look at Compound Conditions
          1. 8.3.3.1 Boolean Expression (per Chilenski)
          2. 8.3.3.2 Condition (per Chilenski)
          3. 8.3.3.3 Coupled Conditions (per Chilenski)
          4. 8.3.3.4 Masking Conditions (per Chilenski)
          5. 8.3.3.5 Modified Condition Decision Coverage
        4. 8.3.4 Examples
          1. 8.3.4.1 Condition with Two Simple Conditions
          2. 8.3.4.2 Compound Condition from NextDate
          3. 8.3.4.3 Compound Condition from the Triangle Program
        5. 8.3.5 Test Coverage Analyzers
      4. 8.4 Basis Path Testing
        1. 8.4.1 McCabe’s Basis Path Method
        2. 8.4.2 Observations on McCabe’s Basis Path Method
        3. 8.4.3 Essential Complexity
      5. 8.5 Guidelines and Observations
      6. References
      1. 9.1 Define/Use Testing
        1. 9.1.1 Example
        2. 9.1.2 Du-paths for Stocks
        3. 9.1.3 Du-paths for Locks
        4. 9.1.4 Du-paths for totalLocks
        5. 9.1.5 Du-paths for Sales
        6. 9.1.6 Du-paths for Commission
        7. 9.1.7 Define/Use Test Coverage Metrics
        8. 9.1.8 Define/Use Testing for Object-Oriented Code
      2. 9.2 Slice-Based Testing
        1. 9.2.1 Example
        2. 9.2.2 Style and Technique
        3. 9.2.3 Slice Splicing
      3. 9.3 Program Slicing Tools
      4. References
      1. 10.1 The Test Method Pendulum
      2. 10.2 Traversing the Pendulum
      3. 10.3 Evaluating Test Methods
      4. 10.4 Insurance Premium Case Study
        1. 10.4.1 Specification-Based Testing
        2. 10.4.2 Code-Based Testing
          1. 10.4.2.1 Path-Based Testing
          2. 10.4.2.2 Data Flow Testing
          3. 10.4.2.3 Slice Testing
      5. 10.5 Guidelines
      6. References
      1. 11.1 Traditional Waterfall Testing
        1. 11.1.1 Waterfall Testing
        2. 11.1.2 Pros and Cons of the Waterfall Model
      2. 11.2 Testing in Iterative Life Cycles
        1. 11.2.1 Waterfall Spin-Offs
        2. 11.2.2 Specification-Based Life Cycle Models
      3. 11.3 Agile Testing
        1. 11.3.1 Extreme Programming
        2. 11.3.2 Test-Driven Development
        3. 11.3.3 Scrum
      4. 11.4 Agile Model–Driven Development
        1. 11.4.1 Agile Model–Driven Development
        2. 11.4.2 Model–Driven Agile Development
      5. References
      1. 12.1 Testing Based on Models
      2. 12.2 Appropriate Models
        1. 12.2.1 Peterson’s Lattice
        2. 12.2.2 Expressive Capabilities of Mainline Models
        3. 12.2.3 Modeling Issues
        4. 12.2.4 Making Appropriate Choices
      3. 12.3 Commercial Tool Support for Model-Based Testing
      4. References
      1. 13.1 Decomposition-Based Integration
        1. 13.1.1 Top–Down Integration
        2. 13.1.2 Bottom–Up Integration
        3. 13.1.3 Sandwich Integration
        4. 13.1.4 Pros and Cons
      2. 13.2 Call Graph–Based Integration
        1. 13.2.1 Pairwise Integration
        2. 13.2.2 Neighborhood Integration
        3. 13.2.3 Pros and Cons
      3. 13.3 Path-Based Integration
        1. 13.3.1 New and Extended Concepts
        2. 13.3.2 MM-Path Complexity
        3. 13.3.3 Pros and Cons
      4. 13.4 Example: integrationNextDate
        1. 13.4.1 Decomposition-Based Integration
        2. 13.4.2 Call Graph–Based Integration
        3. 13.4.3 MM-Path-Based Integration
      5. 13.5 Conclusions and Recommendations
      6. References
      1. 14.1 Threads
        1. 14.1.1 Thread Possibilities
        2. 14.1.2 Thread Definitions
      2. 14.2 Basis Concepts for Requirements Specification
        1. 14.2.1 Data
        2. 14.2.2 Actions
        3. 14.2.3 Devices
        4. 14.2.4 Events
        5. 14.2.5 Threads
        6. 14.2.6 Relationships among Basis Concepts
      3. 14.3 Model-Based Threads
      4. 14.4 Use Case–Based Threads
        1. 14.4.1 Levels of Use Cases
        2. 14.4.2 An Industrial Test Execution System
        3. 14.4.3 System-Level Test Cases
        4. 14.4.4 Converting Use Cases to Event-Driven Petri Nets
        5. 14.4.5 Converting Finite State Machines to Event-Driven Petri Nets
        6. 14.4.6 Which View Best Serves System Testing?
      5. 14.5 Long versus Short Use Cases
      6. 14.6 How Many Use Cases?
        1. 14.6.1 Incidence with Input Events
        2. 14.6.2 Incidence with Output Events
        3. 14.6.3 Incidence with All Port Events
        4. 14.6.4 Incidence with Classes
      7. 14.7 Coverage Metrics for System Testing
        1. 14.7.1 Model-Based System Test Coverage
        2. 14.7.2 Specification-Based System Test Coverage
          1. 14.7.2.1 Event-Based Thread Testing
          2. 14.7.2.2 Port-Based Thread Testing
      8. 14.8 Supplemental Approaches to System Testing
        1. 14.8.1 Operational Profiles
        2. 14.8.2 Risk-Based Testing
      9. 14.9 Nonfunctional System Testing
        1. 14.9.1 Stress Testing Strategies
          1. 14.9.1.1 Compression
          2. 14.9.1.2 Replication
        2. 14.9.2 Mathematical Approaches
          1. 14.9.2.1 Queuing Theory
          2. 14.9.2.2 Reliability Models
          3. 14.9.2.3 Monte Carlo Testing
      10. 14.10 Atomic System Function Testing Example
        1. 14.10.1 Identifying Input and Output Events
        2. 14.10.2 Identifying Atomic System Functions
        3. 14.10.3 Revised Atomic System Functions
      11. References
      1. 15.1 Issues in Testing Object-Oriented Software
        1. 15.1.1 Units for Object-Oriented Testing
        2. 15.1.2 Implications of Composition and Encapsulation
        3. 15.1.3 Implications of Inheritance
        4. 15.1.4 Implications of Polymorphism
        5. 15.1.5 Levels of Object-Oriented Testing
        6. 15.1.6 Data Flow Testing for Object-Oriented Software
      2. 15.2 Example: ooNextDate
        1. 15.2.1 Class: CalendarUnit
        2. 15.2.2 Class: testIt
        3. 15.2.3 Class: Date
        4. 15.2.4 Class: Day
        5. 15.2.5 Class: Month
        6. 15.2.6 Class: Year
      3. 15.3 Object-Oriented Unit Testing
        1. 15.3.1 Methods as Units
        2. 15.3.2 Classes as Units
          1. 15.3.2.1 Pseudocode for Windshield Wiper Class
          2. 15.3.2.2 Unit Testing for Windshield Wiper Class
      4. 15.4 Object-Oriented Integration Testing
        1. 15.4.1 UML Support for Integration Testing
        2. 15.4.2 MM-Paths for Object-Oriented Software
        3. 15.4.3 A Framework for Object-Oriented Data Flow Testing
          1. 15.4.3.1 Event-/Message-Driven Petri Nets
          2. 15.4.3.2 Inheritance-Induced Data Flow
          3. 15.4.3.3 Message-Induced Data Flow
          4. 15.4.3.4 Slices?
      5. 15.5 Object-Oriented System Testing
        1. 15.5.1 Currency Converter UML Description
          1. 15.5.1.1 Problem Statement
          2. 15.5.1.2 System Functions
          3. 15.5.1.3 Presentation Layer
          4. 15.5.1.4 High-Level Use Cases
          5. 15.5.1.5 Essential Use Cases
          6. 15.5.1.6 Detailed GUI Definition
          7. 15.5.1.7 Expanded Essential Use Cases
          8. 15.5.1.8 Real Use Cases
        2. 15.5.2 UML-Based System Testing
        3. 15.5.3 StateChart-Based System Testing
      6. References
      1. 16.1 Unit-Level Complexity
        1. 16.1.1 Cyclomatic Complexity
          1. 16.1.1.1 “Cattle Pens” and Cyclomatic Complexity
          2. 16.1.1.2 Node Outdegrees and Cyclomatic Complexity
          3. 16.1.1.3 Decisional Complexity
        2. 16.1.2 Computational Complexity
          1. 16.1.2.1 Halstead’s Metrics
          2. 16.1.2.2 Example: Day of Week with Zeller’s Congruence
      2. 16.2 Integration-Level Complexity
        1. 16.2.1 Integration-Level Cyclomatic Complexity
        2. 16.2.2 Message Traffic Complexity
      3. 16.3 Software Complexity Example
        1. 16.3.1 Unit-Level Cyclomatic Complexity
        2. 16.3.2 Message Integration-Level Cyclomatic Complexity
      4. 16.4 Object-Oriented Complexity
        1. 16.4.1 WMC—Weighted Methods per Class
        2. 16.4.2 DIT—Depth of Inheritance Tree
        3. 16.4.3 NOC—Number of Child Classes
        4. 16.4.4 CBO—Coupling between Classes
        5. 16.4.5 RFC—Response for Class
        6. 16.4.6 LCOM—Lack of Cohesion on Methods
      5. 16.5 System-Level Complexity
      6. Reference
      1. 17.1 Characteristics of Systems of Systems
      2. 17.2 Sample Systems of Systems
        1. 17.2.1 The Garage Door Controller (Directed)
        2. 17.2.2 Air Traffic Management System (Acknowledged)
        3. 17.2.3 The GVSU Snow Emergency System (Collaborative)
        4. 17.2.4 The Rock Solid Federal Credit Union (Virtual)
      3. 17.3 Software Engineering for Systems of Systems
        1. 17.3.1 Requirements Elicitation
        2. 17.3.2 Specification with a Dialect of UML: SysML
          1. 17.3.2.1 Air Traffic Management System Classes
          2. 17.3.2.2 Air Traffic Management System Use Cases and Sequence Diagrams
        3. 17.3.3 Testing
      4. 17.4 Communication Primitives for Systems of Systems
        1. 17.4.1 ESML Prompts as Petri Nets
          1. 17.4.1.1 Petri Net Conflict
          2. 17.4.1.2 Petri Net Interlock
          3. 17.4.1.3 Enable, Disable, and Activate
          4. 17.4.1.4 Trigger
          5. 17.4.1.5 Suspend and Resume
        2. 17.4.2 New Prompts as Swim Lane Petri Nets
          1. 17.4.2.1 Request
          2. 17.4.2.2 Accept
          3. 17.4.2.3 Reject
          4. 17.4.2.4 Postpone
          5. 17.4.2.5 Swim Lane Description of the November 1993 Incident
      5. 17.5 Effect of Systems of Systems Levels on Prompts
        1. 17.5.1 Directed and Acknowledged Systems of Systems
        2. 17.5.2 Collaborative and Virtual Systems of Systems
      6. References
      1. 18.1 Exploratory Testing Explored
      2. 18.2 Exploring a Familiar Example
      3. 18.3 Observations and Conclusions
      4. References
      1. 19.1 Test-Then-Code Cycles
      2. 19.2 Automated Test Execution (Testing Frameworks)
      3. 19.3 Java and JUnit Example
        1. 19.3.1 Java Source Code
        2. 19.3.2 JUnit Test Code
      4. 19.4 Remaining Questions
        1. 19.4.1 Specification or Code Based?
        2. 19.4.2 Configuration Management?
        3. 19.4.3 Granularity?
      5. 19.5 Pros, Cons, and Open Questions of TDD
      6. 19.6 Retrospective on MDD versus TDD
      1. 20.1 The All Pairs Technique
        1. 20.1.1 Program Inputs
        2. 20.1.2 Independent Variables
        3. 20.1.3 Input Order
        4. 20.1.4 Failures Due Only to Pairs of Inputs
      2. 20.2 A Closer Look at the NIST Study
      3. 20.3 Appropriate Applications for All Pairs Testing
      4. 20.4 Recommendations for All Pairs Testing
      5. References
      1. 21.1 Mutation Testing
        1. 21.1.1 Formalizing Program Mutation
        2. 21.1.2 Mutation Operators
          1. 21.1.2.1 isLeap Mutation Testing
          2. 21.1.2.2 isTriangle Mutation Testing
          3. 21.1.2.3 Commission Mutation Testing
      2. 21.2 Fuzzing
      3. 21.3 Fishing Creel Counts and Fault Insertion
      4. References
      1. 22.1 Economics of Software Reviews
      2. 22.2 Roles in a Review
        1. 22.2.1 Producer
        2. 22.2.2 Review Leader
        3. 22.2.3 Recorder
        4. 22.2.4 Reviewer
        5. 22.2.5 Role Duplication
      3. 22.3 Types of Reviews
        1. 22.3.1 Walkthroughs
        2. 22.3.2 Technical Inspections
        3. 22.3.3 Audits
        4. 22.3.4 Comparison of Review Types
      4. 22.4 Contents of an Inspection Packet
        1. 22.4.1 Work Product Requirements
        2. 22.4.2 Frozen Work Product
        3. 22.4.3 Standards and Checklists
        4. 22.4.4 Review Issues Spreadsheet
        5. 22.4.5 Review Reporting Forms
        6. 22.4.6 Fault Severity Levels
        7. 22.4.7 Review Report Outline
      5. 22.5 An Industrial-Strength Inspection Process
        1. 22.5.1 Commitment Planning
        2. 22.5.2 Reviewer Introduction
        3. 22.5.3 Preparation
        4. 22.5.4 Review Meeting
        5. 22.5.5 Report Preparation
        6. 22.5.6 Disposition
      6. 22.6 Effective Review Culture
        1. 22.6.1 Etiquette
        2. 22.6.2 Management Participation in Review Meetings
        3. 22.6.3 A Tale of Two Reviews
          1. 22.6.3.1 A Pointy-Haired Supervisor Review
          2. 22.6.3.2 An Ideal Review
      7. 22.7 Inspection Case Study
      8. References
      1. 23.1 Craftsmanship
      2. 23.2 Best Practices of Software Testing
      3. 23.3 My Top 10 Best Practices for Software Testing Excellence
        1. 23.3.1 Model-Driven Agile Development
        2. 23.3.2 Careful Definition and Identification of Levels of Testing
        3. 23.3.3 System-Level Model-Based Testing
        4. 23.3.4 System Testing Extensions
        5. 23.3.5 Incidence Matrices to Guide Regression Testing
        6. 23.3.6 Use of MM-Paths for Integration Testing
        7. 23.3.7 Intelligent Combination of Specification-Based and Code-Based Unit-Level Testing
        8. 23.3.8 Code Coverage Metrics Based on the Nature of Individual Units
        9. 23.3.9 Exploratory Testing during Maintenance
        10. 23.3.10 Test-Driven Development
      4. 23.4 Mapping Best Practices to Diverse Projects
        1. 23.4.1 A Mission-Critical Project
        2. 23.4.2 A Time-Critical Project
        3. 23.4.3 Corrective Maintenance of Legacy Code
      5. References