Foundations of Software and System Performance Engineering: Process, Performance Modeling, Requirements, Testing, Scalability, and Practice

Book description

“If this book had been available to Healthcare.gov’s contractors, and they read and followed its life cycle performance processes, there would not have been the enormous problems apparent in that application. In my 40+ years of experience in building leading-edge products, poor performance is the single most frequent cause of the failure or cancellation of software-intensive projects. This book provides techniques and skills necessary to implement performance engineering at the beginning of a project and manage it throughout the product’s life cycle. I cannot recommend it highly enough.”

Don Shafer, CSDP, Technical Fellow, Athens Group, LLC

Poor performance is a frequent cause of software project failure. Performance engineering can be extremely challenging. In Foundations of Software and System Performance Engineering, leading software performance expert Dr. André Bondi helps you create effective performance requirements up front, and then architect, develop, test, and deliver systems that meet them.

Drawing on many years of experience at Siemens, AT&T Labs, Bell Laboratories, and two startups, Bondi offers practical guidance for every software stakeholder and development team participant. He shows you how to define and use metrics; plan for diverse workloads; evaluate scalability, capacity, and responsiveness; and test both individual components and entire systems. Throughout, Bondi helps you link performance engineering with everything else you do in the software life cycle, so you can achieve the right performance, now and in the future, at lower cost and with less pain.

This guide will help you

• Mitigate the business and engineering risk associated with poor system performance

• Specify system performance requirements in business and engineering terms

• Identify metrics for comparing performance requirements with actual performance

• Verify the accuracy of measurements

• Use simple mathematical models to make predictions, plan performance tests, and anticipate the impact of changes to the system or the load placed upon it

• Avoid common performance and scalability mistakes

• Clarify business and engineering needs to be satisfied by given levels of throughput and response time

• Incorporate performance engineering into agile processes

• Help stakeholders of a system make better performance-related decisions

• Manage stakeholders’ expectations about system performance throughout the software life cycle, and deliver a software product with quality performance
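The "simple mathematical models" mentioned above include the basic performance laws covered in Chapter 3, such as the Utilization Law and Little's Law. As a flavor of the kind of back-of-the-envelope prediction involved, here is a minimal sketch; the function names and numbers are illustrative, not taken from the book:

```python
# Back-of-the-envelope predictions using two basic performance laws
# (Utilization Law and Little's Law). Names and numbers are illustrative.

def utilization(throughput_per_sec: float, service_time_sec: float) -> float:
    """Utilization Law: U = X * S (fraction of time the resource is busy)."""
    return throughput_per_sec * service_time_sec

def mean_in_system(arrival_rate_per_sec: float, mean_response_time_sec: float) -> float:
    """Little's Law: N = lambda * R (average number of jobs in the system)."""
    return arrival_rate_per_sec * mean_response_time_sec

# Example: 50 transactions/sec, each needing 15 ms of CPU time,
# gives a CPU utilization of 0.75 (75% busy).
u = utilization(50, 0.015)

# If the mean response time is 0.2 s at 50 transactions/sec, an average
# of 10 transactions are concurrently in the system.
n = mean_in_system(50, 0.2)

print(u, n)
```

Such one-line calculations let a performance engineer sanity-check requirements and test results before any detailed modeling or measurement is done.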

André B. Bondi is a senior staff engineer at Siemens Corp., Corporate Technologies in Princeton, New Jersey. His specialties include performance requirements, performance analysis, modeling, simulation, and testing. Bondi has applied his industrial and academic experience to the solution of performance issues in many problem domains. In addition to holding a doctorate in computer science and a master’s in statistics, he is a Certified Scrum Master.

Table of contents

  1. About This eBook
  2. Title Page
  3. Copyright Page
  4. Praise for Foundations of Software and System Performance Engineering
  5. Dedication Page
  6. Contents
  7. Preface
    1. Scope and Purpose
    2. Audience
  8. Acknowledgments
  9. About the Author
  10. Chapter 1. Why Performance Engineering? Why Performance Engineers?
    1. 1.1 Overview
    2. 1.2 The Role of Performance Requirements in Performance Engineering
    3. 1.3 Examples of Issues Addressed by Performance Engineering Methods
    4. 1.4 Business and Process Aspects of Performance Engineering
    5. 1.5 Disciplines and Techniques Used in Performance Engineering
    6. 1.6 Performance Modeling, Measurement, and Testing
    7. 1.7 Roles and Activities of a Performance Engineer
    8. 1.8 Interactions and Dependencies between Performance Engineering and Other Activities
    9. 1.9 A Road Map through the Book
    10. 1.10 Summary
  11. Chapter 2. Performance Metrics
    1. 2.1 General
    2. 2.2 Examples of Performance Metrics
    3. 2.3 Useful Properties of Performance Metrics
    4. 2.4 Performance Metrics in Different Domains
      1. 2.4.1 Conveyor in a Warehouse
      2. 2.4.2 Fire Alarm Control Panel
      3. 2.4.3 Train Signaling and Departure Boards
      4. 2.4.4 Telephony
      5. 2.4.5 An Information Processing Example: Order Entry and Customer Relationship Management
    5. 2.5 Examples of Explicit and Implicit Metrics
    6. 2.6 Time Scale Granularity of Metrics
    7. 2.7 Performance Metrics for Systems with Transient, Bounded Loads
    8. 2.8 Summary
    9. 2.9 Exercises
  12. Chapter 3. Basic Performance Analysis
    1. 3.1 How Performance Models Inform Us about Systems
    2. 3.2 Queues in Computer Systems and in Daily Life
    3. 3.3 Causes of Queueing
    4. 3.4 Characterizing the Performance of a Queue
    5. 3.5 Basic Performance Laws: Utilization Law, Little’s Law
      1. 3.5.1 Utilization Law
      2. 3.5.2 Little’s Law
    6. 3.6 A Single-Server Queue
    7. 3.7 Networks of Queues: Introduction and Elementary Performance Properties
      1. 3.7.1 System Features Described by Simple Queueing Networks
      2. 3.7.2 Quantifying Device Loadings and Flow through a Computer System
      3. 3.7.3 Upper Bounds on System Throughput
      4. 3.7.4 Lower Bounds on System Response Times
    8. 3.8 Open and Closed Queueing Network Models
      1. 3.8.1 Simple Single-Class Open Queueing Network Models
      2. 3.8.2 Simple Single-Class Closed Queueing Network Model
      3. 3.8.3 Performance Measures and Queueing Network Representation: A Qualitative View
    9. 3.9 Bottleneck Analysis for Single-Class Closed Queueing Networks
      1. 3.9.1 Asymptotic Bounds on Throughput and Response Time
      2. 3.9.2 The Impact of Asynchronous Activity on Performance Bounds
    10. 3.10 Regularity Conditions for Computationally Tractable Queueing Network Models
    11. 3.11 Mean Value Analysis of Single-Class Closed Queueing Network Models
    12. 3.12 Multiple-Class Queueing Networks
    13. 3.13 Finite Pool Sizes, Lost Calls, and Other Lost Work
    14. 3.14 Using Models for Performance Prediction
    15. 3.15 Limitations and Applicability of Simple Queueing Network Models
    16. 3.16 Linkage between Performance Models, Performance Requirements, and Performance Test Results
    17. 3.17 Applications of Basic Performance Laws to Capacity Planning and Performance Testing
    18. 3.18 Summary
    19. 3.19 Exercises
  13. Chapter 4. Workload Identification and Characterization
    1. 4.1 Workload Identification
    2. 4.2 Reference Workloads for a System in Different Environments
    3. 4.3 Time-Varying Behavior
    4. 4.4 Mapping Application Domains to Computer System Workloads
      1. 4.4.1 Example: An Online Securities Trading System for Account Holders
      2. 4.4.2 Example: An Airport Conveyor System
      3. 4.4.3 Example: A Fire Alarm System
    5. 4.5 Numerical Specification of the Workloads
      1. 4.5.1 Example: An Online Securities Trading System for Account Holders
      2. 4.5.2 Example: An Airport Conveyor System
      3. 4.5.3 Example: A Fire Alarm System
    6. 4.6 Numerical Illustrations
      1. 4.6.1 Numerical Data for an Online Securities Trading System
      2. 4.6.2 Numerical Data for an Airport Conveyor System
      3. 4.6.3 Numerical Data for the Fire Alarm System
    7. 4.7 Summary
    8. 4.8 Exercises
  14. Chapter 5. From Workloads to Business Aspects of Performance Requirements
    1. 5.1 Overview
    2. 5.2 Performance Requirements and Product Management
      1. 5.2.1 Sizing for Different Market Segments: Linking Workloads to Performance Requirements
      2. 5.2.2 Performance Requirements to Meet Market, Engineering, and Regulatory Needs
      3. 5.2.3 Performance Requirements to Support Revenue Streams
    3. 5.3 Performance Requirements and the Software Lifecycle
    4. 5.4 Performance Requirements and the Mitigation of Business Risk
    5. 5.5 Commercial Considerations and Performance Requirements
      1. 5.5.1 Performance Requirements, Customer Expectations, and Contracts
      2. 5.5.2 System Performance and the Relationship between Buyer and Supplier
      3. 5.5.3 Confidentiality
      4. 5.5.4 Performance Requirements and the Outsourcing of Software Development
      5. 5.5.5 Performance Requirements and the Outsourcing of Computing Services
    6. 5.6 Guidelines for Specifying Performance Requirements
      1. 5.6.1 Performance Requirements and Functional Requirements
      2. 5.6.2 Unambiguousness
      3. 5.6.3 Measurability
      4. 5.6.4 Verifiability
      5. 5.6.5 Completeness
      6. 5.6.6 Correctness
      7. 5.6.7 Mathematical Consistency
      8. 5.6.8 Testability
      9. 5.6.9 Traceability
      10. 5.6.10 Granularity and Time Scale
    7. 5.7 Summary
    8. 5.8 Exercises
  15. Chapter 6. Qualitative and Quantitative Types of Performance Requirements
    1. 6.1 Qualitative Attributes Related to System Performance
    2. 6.2 The Concept of Sustainable Load
    3. 6.3 Formulation of Response Time Requirements
    4. 6.4 Formulation of Throughput Requirements
    5. 6.5 Derived and Implicit Performance Requirements
      1. 6.5.1 Derived Performance Requirements
      2. 6.5.2 Implicit Requirements
    6. 6.6 Performance Requirements Related to Transaction Failure Rates, Lost Calls, and Lost Packets
    7. 6.7 Performance Requirements Concerning Peak and Transient Loads
    8. 6.8 Summary
    9. 6.9 Exercises
  16. Chapter 7. Eliciting, Writing, and Managing Performance Requirements
    1. 7.1 Elicitation and Gathering of Performance Requirements
    2. 7.2 Ensuring That Performance Requirements Are Enforceable
    3. 7.3 Common Patterns and Antipatterns for Performance Requirements
      1. 7.3.1 Response Time Pattern and Antipattern
      2. 7.3.2 “... All the Time/... of the Time” Antipattern
      3. 7.3.3 Resource Utilization Antipattern
      4. 7.3.4 Number of Users to Be Supported Pattern/Antipattern
      5. 7.3.5 Pool Size Requirement Pattern
      6. 7.3.6 Scalability Antipattern
    4. 7.4 The Need for Mathematically Consistent Requirements: Ensuring That Requirements Conform to Basic Performance Laws
    5. 7.5 Expressing Performance Requirements in Terms of Parameters with Unknown Values
    6. 7.6 Avoidance of Circular Dependencies
    7. 7.7 External Performance Requirements and Their Implications for the Performance Requirements of Subsystems
    8. 7.8 Structuring Performance Requirements Documents
    9. 7.9 Layout of a Performance Requirement
    10. 7.10 Managing Performance Requirements: Responsibilities of the Performance Requirements Owner
    11. 7.11 Performance Requirements Pitfall: Transition from a Legacy System to a New System
    12. 7.12 Formulating Performance Requirements to Facilitate Performance Testing
    13. 7.13 Storage and Reporting of Performance Requirements
    14. 7.14 Summary
  17. Chapter 8. System Measurement Techniques and Instrumentation
    1. 8.1 General
    2. 8.2 Distinguishing between Measurement and Testing
    3. 8.3 Validate, Validate, Validate; Scrutinize, Scrutinize, Scrutinize
    4. 8.4 Resource Usage Measurements
      1. 8.4.1 Measuring Processor Usage
      2. 8.4.2 Processor Utilization by Individual Processes
      3. 8.4.3 Disk Utilization
      4. 8.4.4 Bandwidth Utilization
      5. 8.4.5 Queue Lengths
    5. 8.5 Utilizations and the Averaging Time Window
    6. 8.6 Measurement of Multicore or Multiprocessor Systems
    7. 8.7 Measuring Memory-Related Activity
      1. 8.7.1 Memory Occupancy
      2. 8.7.2 Paging Activity
    8. 8.8 Measurement in Production versus Measurement for Performance Testing and Scalability
    9. 8.9 Measuring Systems with One Host and with Multiple Hosts
      1. 8.9.1 Clock Synchronization of Multiple Hosts
      2. 8.9.2 Gathering Measurements from Multiple Hosts
    10. 8.10 Measurements from within the Application
    11. 8.11 Measurements in Middleware
    12. 8.12 Measurements of Commercial Databases
    13. 8.13 Response Time Measurements
    14. 8.14 Code Profiling
    15. 8.15 Validation of Measurements Using Basic Properties of Performance Metrics
    16. 8.16 Measurement Procedures and Data Organization
    17. 8.17 Organization of Performance Data, Data Reduction, and Presentation
    18. 8.18 Interpreting Measurements in a Virtualized Environment
    19. 8.19 Summary
    20. 8.20 Exercises
  18. Chapter 9. Performance Testing
    1. 9.1 Overview of Performance Testing
    2. 9.2 Special Challenges
    3. 9.3 Performance Test Planning and Performance Models
    4. 9.4 A Wrong Way to Evaluate Achievable System Throughput
    5. 9.5 Provocative Performance Testing
    6. 9.6 Preparing a Performance Test
      1. 9.6.1 Understanding the System
      2. 9.6.2 Pilot Testing, Playtime, and Performance Test Automation
      3. 9.6.3 Test Equipment and Test Software Must Be Tested, Too
      4. 9.6.4 Deployment of Load Drivers
      5. 9.6.5 Problems with Testing Financial Systems
    7. 9.7 Lab Discipline in Performance Testing
    8. 9.8 Performance Testing Challenges Posed by Systems with Multiple Hosts
    9. 9.9 Performance Testing Scripts and Checklists
    10. 9.10 Best Practices for Documenting Test Plans and Test Results
    11. 9.11 Linking the Performance Test Plan to Performance Requirements
    12. 9.12 The Role of Performance Tests in Detecting and Debugging Concurrency Issues
    13. 9.13 Planning Tests for System Stability
    14. 9.14 Prospective Testing When Requirements Are Unspecified
    15. 9.15 Structuring the Test Environment to Reflect the Scalability of the Architecture
    16. 9.16 Data Collection
    17. 9.17 Data Reduction and Presentation
    18. 9.18 Interpreting the Test Results
      1. 9.18.1 Preliminaries
      2. 9.18.2 Example: Services Use Cases
      3. 9.18.3 Example: Transaction System with High Failure Rate
      4. 9.18.4 Example: A System with Computationally Intense Transactions
      5. 9.18.5 Example: System Exhibiting Memory Leak and Deadlocks
    19. 9.19 Automating Performance Tests and the Analysis of the Outputs
    20. 9.20 Summary
    21. 9.21 Exercises
  19. Chapter 10. System Understanding, Model Choice, and Validation
    1. 10.1 Overview
    2. 10.2 Phases of a Modeling Study
    3. 10.3 Example: A Conveyor System
    4. 10.4 Example: Modeling Asynchronous I/O
    5. 10.5 Systems with Load-Dependent or Time-Varying Behavior
      1. 10.5.1 Paged Virtual Memory Systems That Thrash
      2. 10.5.2 Applications with Increasing Processing Time per Unit of Work
      3. 10.5.3 Scheduled Movement of Load, Periodic Loads, and Critical Peaks
    6. 10.6 Summary
    7. 10.7 Exercises
  20. Chapter 11. Scalability and Performance
    1. 11.1 What Is Scalability?
    2. 11.2 Scaling Methods
      1. 11.2.1 Scaling Up and Scaling Out
      2. 11.2.2 Vertical Scaling and Horizontal Scaling
    3. 11.3 Types of Scalability
      1. 11.3.1 Load Scalability
      2. 11.3.2 Space Scalability
      3. 11.3.3 Space-Time Scalability
      4. 11.3.4 Structural Scalability
      5. 11.3.5 Scalability over Long Distances and under Network Congestion
    4. 11.4 Interactions between Types of Scalability
    5. 11.5 Qualitative Analysis of Load Scalability and Examples
      1. 11.5.1 Serial Execution of Disjoint Transactions and the Inability to Exploit Parallel Resources
      2. 11.5.2 Busy Waiting on Locks
      3. 11.5.3 Coarse Granularity Locking
      4. 11.5.4 Ethernet and Token Ring: A Comparison
      5. 11.5.5 Museum Checkrooms
    6. 11.6 Scalability Limitations in a Development Environment
    7. 11.7 Improving Load Scalability
    8. 11.8 Some Mathematical Analyses
      1. 11.8.1 Comparison of Semaphores and Locks for Implementing Mutual Exclusion
      2. 11.8.2 Museum Checkroom
    9. 11.9 Avoiding Scalability Pitfalls
    10. 11.10 Performance Testing and Scalability
    11. 11.11 Summary
    12. 11.12 Exercises
  21. Chapter 12. Performance Engineering Pitfalls
    1. 12.1 Overview
    2. 12.2 Pitfalls in Priority Scheduling
    3. 12.3 Transient CPU Saturation Is Not Always a Bad Thing
    4. 12.4 Diminishing Returns with Multiprocessors or Multiple Cores
    5. 12.5 Garbage Collection Can Degrade Performance
    6. 12.6 Virtual Machines: Panacea or Complication?
    7. 12.7 Measurement Pitfall: Delayed Time Stamping and Monitoring in Real-Time Systems
    8. 12.8 Pitfalls in Performance Measurement
    9. 12.9 Eliminating a Bottleneck Could Unmask a New One
    10. 12.10 Pitfalls in Performance Requirements Engineering
    11. 12.11 Organizational Pitfalls in Performance Engineering
    12. 12.12 Summary
    13. 12.13 Exercises
  22. Chapter 13. Agile Processes and Performance Engineering
    1. 13.1 Overview
    2. 13.2 Performance Engineering under an Agile Development Process
      1. 13.2.1 Performance Requirements Engineering Considerations in an Agile Environment
      2. 13.2.2 Preparation and Alignment of Performance Testing with Sprints
      3. 13.2.3 Agile Interpretation and Application of Performance Test Results
      4. 13.2.4 Communicating Performance Test Results in an Agile Environment
    3. 13.3 Agile Methods in the Implementation and Execution of Performance Tests
      1. 13.3.1 Identification and Planning of Performance Tests and Instrumentation
      2. 13.3.2 Using Scrum When Implementing Performance Tests and Purpose-Built Instrumentation
      3. 13.3.3 Peculiar or Irregular Performance Test Results and Incorrect Functionality May Go Together
    4. 13.4 The Value of Playtime in an Agile Performance Testing Process
    5. 13.5 Summary
    6. 13.6 Exercises
  23. Chapter 14. Working with Stakeholders to Learn, Influence, and Tell the Performance Engineering Story
    1. 14.1 Determining What Aspect of Performance Matters to Whom
    2. 14.2 Where Does the Performance Story Begin?
    3. 14.3 Identification of Performance Concerns, Drivers, and Stakeholders
    4. 14.4 Influencing the Performance Story
      1. 14.4.1 Using Performance Engineering Concerns to Affect the Architecture and Choice of Technology
      2. 14.4.2 Understanding the Impact of Existing Architectures and Prior Decisions on System Performance
      3. 14.4.3 Explaining Performance Concerns and Sharing and Developing the Performance Story with Different Stakeholders
    5. 14.5 Reporting on Performance Status to Different Stakeholders
    6. 14.6 Examples
    7. 14.7 The Role of a Capacity Management Engineer
    8. 14.8 Example: Explaining the Role of Measurement Intervals When Interpreting Measurements
    9. 14.9 Ensuring Ownership of Performance Concerns and Explanations by Diverse Stakeholders
    10. 14.10 Negotiating Choices for Design Changes and Recommendations for System Improvement among Stakeholders
    11. 14.11 Summary
    12. 14.12 Exercises
  24. Chapter 15. Where to Learn More
    1. 15.1 Overview
    2. 15.2 Conferences and Journals
    3. 15.3 Texts on Performance Analysis
    4. 15.4 Queueing Theory
    5. 15.5 Discrete Event Simulation
    6. 15.6 Performance Evaluation of Specific Types of Systems
    7. 15.7 Statistical Methods
    8. 15.8 Performance Tuning
    9. 15.9 Summary
  25. References
  26. Index

Product information

  • Title: Foundations of Software and System Performance Engineering: Process, Performance Modeling, Requirements, Testing, Scalability, and Practice
  • Author(s): André B. Bondi
  • Release date: August 2014
  • Publisher(s): Addison-Wesley Professional
  • ISBN: 9780133038149