Book description
“If this book had been available to Healthcare.gov’s contractors, and they read and followed its life cycle performance processes, there would not have been the enormous problems apparent in that application. In my 40+ years of experience in building leading-edge products, poor performance is the single most frequent cause of the failure or cancellation of software-intensive projects. This book provides techniques and skills necessary to implement performance engineering at the beginning of a project and manage it throughout the product’s life cycle. I cannot recommend it highly enough.”
–Don Shafer, CSDP, Technical Fellow, Athens Group, LLC
Poor performance is a frequent cause of software project failure. Performance engineering can be extremely challenging. In Foundations of Software and System Performance Engineering, leading software performance expert Dr. André Bondi helps you create effective performance requirements up front, and then architect, develop, test, and deliver systems that meet them.
Drawing on many years of experience at Siemens, AT&T Labs, Bell Laboratories, and two startups, Bondi offers practical guidance for every software stakeholder and development team participant. He shows you how to define and use metrics; plan for diverse workloads; evaluate scalability, capacity, and responsiveness; and test both individual components and entire systems. Throughout, Bondi helps you link performance engineering with everything else you do in the software life cycle, so you can achieve the right performance, now and in the future, at lower cost and with less pain.
This guide will help you
• Mitigate the business and engineering risk associated with poor system performance
• Specify system performance requirements in business and engineering terms
• Identify metrics for comparing performance requirements with actual performance
• Verify the accuracy of measurements
• Use simple mathematical models to make predictions, plan performance tests, and anticipate the impact of changes to the system or the load placed upon it
• Avoid common performance and scalability mistakes
• Clarify business and engineering needs to be satisfied by given levels of throughput and response time
• Incorporate performance engineering into agile processes
• Help stakeholders of a system make better performance-related decisions
• Manage stakeholders’ expectations about system performance throughout the software life cycle, and deliver a software product with quality performance
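As a taste of the "simple mathematical models" the book teaches, the sketch below applies two of the basic performance laws covered in Chapter 3, the Utilization Law (U = X·S) and Little's Law (N = X·R). The code and the workload numbers are illustrative only, not taken from the book:

```python
# Illustrative use of two basic performance laws (see Chapter 3):
#   Utilization Law: U = X * S  (utilization = throughput x service demand)
#   Little's Law:    N = X * R  (jobs in system = throughput x response time)

def utilization(throughput_per_s: float, service_time_s: float) -> float:
    """Utilization Law: fraction of time the server is busy."""
    return throughput_per_s * service_time_s

def jobs_in_system(throughput_per_s: float, response_time_s: float) -> float:
    """Little's Law: average number of jobs resident in the system."""
    return throughput_per_s * response_time_s

# Hypothetical workload: 40 transactions/s, 20 ms CPU demand per
# transaction, 150 ms average response time.
X, S, R = 40.0, 0.020, 0.150
U = utilization(X, S)       # ~0.8: the CPU is about 80% busy
N = jobs_in_system(X, R)    # ~6: about six transactions in flight

# Sanity check of the kind the book advocates for requirements review:
# a requirement whose implied utilization exceeds 1 violates the
# Utilization Law and is unachievable on a single server.
assert U <= 1.0
print(f"U = {U:.2f}, N = {N:.1f}")
```

Even arithmetic this simple can expose a mathematically inconsistent performance requirement (Section 7.4) or size a performance test before any code is run.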
André B. Bondi is a senior staff engineer at Siemens Corp., Corporate Technologies in Princeton, New Jersey. His specialties include performance requirements, performance analysis, modeling, simulation, and testing. Bondi has applied his industrial and academic experience to the solution of performance issues in many problem domains. In addition to holding a doctorate in computer science and a master’s in statistics, he is a Certified Scrum Master.
Table of contents
- About This eBook
- Title Page
- Copyright Page
- Praise for Foundations of Software and System Performance Engineering
- Dedication Page
- Contents
- Preface
- Acknowledgments
- About the Author
- Chapter 1. Why Performance Engineering? Why Performance Engineers?
- 1.1 Overview
- 1.2 The Role of Performance Requirements in Performance Engineering
- 1.3 Examples of Issues Addressed by Performance Engineering Methods
- 1.4 Business and Process Aspects of Performance Engineering
- 1.5 Disciplines and Techniques Used in Performance Engineering
- 1.6 Performance Modeling, Measurement, and Testing
- 1.7 Roles and Activities of a Performance Engineer
- 1.8 Interactions and Dependencies between Performance Engineering and Other Activities
- 1.9 A Road Map through the Book
- 1.10 Summary
- Chapter 2. Performance Metrics
- 2.1 General
- 2.2 Examples of Performance Metrics
- 2.3 Useful Properties of Performance Metrics
- 2.4 Performance Metrics in Different Domains
- 2.5 Examples of Explicit and Implicit Metrics
- 2.6 Time Scale Granularity of Metrics
- 2.7 Performance Metrics for Systems with Transient, Bounded Loads
- 2.8 Summary
- 2.9 Exercises
- Chapter 3. Basic Performance Analysis
- 3.1 How Performance Models Inform Us about Systems
- 3.2 Queues in Computer Systems and in Daily Life
- 3.3 Causes of Queueing
- 3.4 Characterizing the Performance of a Queue
- 3.5 Basic Performance Laws: Utilization Law, Little’s Law
- 3.6 A Single-Server Queue
- 3.7 Networks of Queues: Introduction and Elementary Performance Properties
- 3.8 Open and Closed Queueing Network Models
- 3.9 Bottleneck Analysis for Single-Class Closed Queueing Networks
- 3.10 Regularity Conditions for Computationally Tractable Queueing Network Models
- 3.11 Mean Value Analysis of Single-Class Closed Queueing Network Models
- 3.12 Multiple-Class Queueing Networks
- 3.13 Finite Pool Sizes, Lost Calls, and Other Lost Work
- 3.14 Using Models for Performance Prediction
- 3.15 Limitations and Applicability of Simple Queueing Network Models
- 3.16 Linkage between Performance Models, Performance Requirements, and Performance Test Results
- 3.17 Applications of Basic Performance Laws to Capacity Planning and Performance Testing
- 3.18 Summary
- 3.19 Exercises
- Chapter 4. Workload Identification and Characterization
- Chapter 5. From Workloads to Business Aspects of Performance Requirements
- 5.1 Overview
- 5.2 Performance Requirements and Product Management
- 5.3 Performance Requirements and the Software Lifecycle
- 5.4 Performance Requirements and the Mitigation of Business Risk
- 5.5 Commercial Considerations and Performance Requirements
- 5.5.1 Performance Requirements, Customer Expectations, and Contracts
- 5.5.2 System Performance and the Relationship between Buyer and Supplier
- 5.5.3 Confidentiality
- 5.5.4 Performance Requirements and the Outsourcing of Software Development
- 5.5.5 Performance Requirements and the Outsourcing of Computing Services
- 5.6 Guidelines for Specifying Performance Requirements
- 5.7 Summary
- 5.8 Exercises
- Chapter 6. Qualitative and Quantitative Types of Performance Requirements
- 6.1 Qualitative Attributes Related to System Performance
- 6.2 The Concept of Sustainable Load
- 6.3 Formulation of Response Time Requirements
- 6.4 Formulation of Throughput Requirements
- 6.5 Derived and Implicit Performance Requirements
- 6.6 Performance Requirements Related to Transaction Failure Rates, Lost Calls, and Lost Packets
- 6.7 Performance Requirements Concerning Peak and Transient Loads
- 6.8 Summary
- 6.9 Exercises
- Chapter 7. Eliciting, Writing, and Managing Performance Requirements
- 7.1 Elicitation and Gathering of Performance Requirements
- 7.2 Ensuring That Performance Requirements Are Enforceable
- 7.3 Common Patterns and Antipatterns for Performance Requirements
- 7.4 The Need for Mathematically Consistent Requirements: Ensuring That Requirements Conform to Basic Performance Laws
- 7.5 Expressing Performance Requirements in Terms of Parameters with Unknown Values
- 7.6 Avoidance of Circular Dependencies
- 7.7 External Performance Requirements and Their Implications for the Performance Requirements of Subsystems
- 7.8 Structuring Performance Requirements Documents
- 7.9 Layout of a Performance Requirement
- 7.10 Managing Performance Requirements: Responsibilities of the Performance Requirements Owner
- 7.11 Performance Requirements Pitfall: Transition from a Legacy System to a New System
- 7.12 Formulating Performance Requirements to Facilitate Performance Testing
- 7.13 Storage and Reporting of Performance Requirements
- 7.14 Summary
- Chapter 8. System Measurement Techniques and Instrumentation
- 8.1 General
- 8.2 Distinguishing between Measurement and Testing
- 8.3 Validate, Validate, Validate; Scrutinize, Scrutinize, Scrutinize
- 8.4 Resource Usage Measurements
- 8.5 Utilizations and the Averaging Time Window
- 8.6 Measurement of Multicore or Multiprocessor Systems
- 8.7 Measuring Memory-Related Activity
- 8.8 Measurement in Production versus Measurement for Performance Testing and Scalability
- 8.9 Measuring Systems with One Host and with Multiple Hosts
- 8.10 Measurements from within the Application
- 8.11 Measurements in Middleware
- 8.12 Measurements of Commercial Databases
- 8.13 Response Time Measurements
- 8.14 Code Profiling
- 8.15 Validation of Measurements Using Basic Properties of Performance Metrics
- 8.16 Measurement Procedures and Data Organization
- 8.17 Organization of Performance Data, Data Reduction, and Presentation
- 8.18 Interpreting Measurements in a Virtualized Environment
- 8.19 Summary
- 8.20 Exercises
- Chapter 9. Performance Testing
- 9.1 Overview of Performance Testing
- 9.2 Special Challenges
- 9.3 Performance Test Planning and Performance Models
- 9.4 A Wrong Way to Evaluate Achievable System Throughput
- 9.5 Provocative Performance Testing
- 9.6 Preparing a Performance Test
- 9.7 Lab Discipline in Performance Testing
- 9.8 Performance Testing Challenges Posed by Systems with Multiple Hosts
- 9.9 Performance Testing Scripts and Checklists
- 9.10 Best Practices for Documenting Test Plans and Test Results
- 9.11 Linking the Performance Test Plan to Performance Requirements
- 9.12 The Role of Performance Tests in Detecting and Debugging Concurrency Issues
- 9.13 Planning Tests for System Stability
- 9.14 Prospective Testing When Requirements Are Unspecified
- 9.15 Structuring the Test Environment to Reflect the Scalability of the Architecture
- 9.16 Data Collection
- 9.17 Data Reduction and Presentation
- 9.18 Interpreting the Test Results
- 9.19 Automating Performance Tests and the Analysis of the Outputs
- 9.20 Summary
- 9.21 Exercises
- Chapter 10. System Understanding, Model Choice, and Validation
- Chapter 11. Scalability and Performance
- 11.1 What Is Scalability?
- 11.2 Scaling Methods
- 11.3 Types of Scalability
- 11.4 Interactions between Types of Scalability
- 11.5 Qualitative Analysis of Load Scalability and Examples
- 11.6 Scalability Limitations in a Development Environment
- 11.7 Improving Load Scalability
- 11.8 Some Mathematical Analyses
- 11.9 Avoiding Scalability Pitfalls
- 11.10 Performance Testing and Scalability
- 11.11 Summary
- 11.12 Exercises
- Chapter 12. Performance Engineering Pitfalls
- 12.1 Overview
- 12.2 Pitfalls in Priority Scheduling
- 12.3 Transient CPU Saturation Is Not Always a Bad Thing
- 12.4 Diminishing Returns with Multiprocessors or Multiple Cores
- 12.5 Garbage Collection Can Degrade Performance
- 12.6 Virtual Machines: Panacea or Complication?
- 12.7 Measurement Pitfall: Delayed Time Stamping and Monitoring in Real-Time Systems
- 12.8 Pitfalls in Performance Measurement
- 12.9 Eliminating a Bottleneck Could Unmask a New One
- 12.10 Pitfalls in Performance Requirements Engineering
- 12.11 Organizational Pitfalls in Performance Engineering
- 12.12 Summary
- 12.13 Exercises
- Chapter 13. Agile Processes and Performance Engineering
- Chapter 14. Working with Stakeholders to Learn, Influence, and Tell the Performance Engineering Story
- 14.1 Determining What Aspect of Performance Matters to Whom
- 14.2 Where Does the Performance Story Begin?
- 14.3 Identification of Performance Concerns, Drivers, and Stakeholders
- 14.4 Influencing the Performance Story
- 14.4.1 Using Performance Engineering Concerns to Affect the Architecture and Choice of Technology
- 14.4.2 Understanding the Impact of Existing Architectures and Prior Decisions on System Performance
- 14.4.3 Explaining Performance Concerns and Sharing and Developing the Performance Story with Different Stakeholders
- 14.5 Reporting on Performance Status to Different Stakeholders
- 14.6 Examples
- 14.7 The Role of a Capacity Management Engineer
- 14.8 Example: Explaining the Role of Measurement Intervals When Interpreting Measurements
- 14.9 Ensuring Ownership of Performance Concerns and Explanations by Diverse Stakeholders
- 14.10 Negotiating Choices for Design Changes and Recommendations for System Improvement among Stakeholders
- 14.11 Summary
- 14.12 Exercises
- Chapter 15. Where to Learn More
- References
- Index
Product information
- Title: Foundations of Software and System Performance Engineering: Process, Performance Modeling, Requirements, Testing, Scalability, and Practice
- Author(s): André B. Bondi
- Release date: August 2014
- Publisher(s): Addison-Wesley Professional
- ISBN: 9780133038149