Efficient C++: Performance Programming Techniques

Book Description

Far too many programmers and software designers consider efficient C++ to be an oxymoron. They regard C++ as inherently slow and inappropriate for performance-critical applications. Consequently, C++ has had little success penetrating domains such as networking, operating system kernels, and device drivers.

Efficient C++ explodes that myth. Written by two authors with first-hand experience wringing the last ounce of performance from commercial C++ applications, this book demonstrates the potential of C++ to produce highly efficient programs. The book reveals practical, everyday object-oriented design principles and C++ coding techniques that can yield large performance improvements. It points out common pitfalls in both design and code that generate hidden operating costs.

This book focuses on combining C++'s power and flexibility with high performance and scalability, resulting in the best of both worlds. Specific topics include temporary objects, memory management, templates, inheritance, virtual functions, inlining, reference-counting, STL, and much more.

With this book, you will have a valuable compendium of the best performance techniques at your fingertips.




Table of Contents

  1. Copyright
  2. Preface
  3. Introduction
  4. The Tracing War Story
    1. Our Initial Trace Implementation
    2. Key Points
  5. Constructors and Destructors
    1. Inheritance
    2. Composition
    3. Lazy Construction
    4. Redundant Construction
    5. Key Points
  6. Virtual Functions
    1. Virtual Function Mechanics
    2. Templates and Inheritance
    3. Key Points
  7. The Return Value Optimization
    1. The Mechanics of Return-by-Value
    2. The Return Value Optimization
    3. Computational Constructors
    4. Key Points
  8. Temporaries
    1. Object Definition
    2. Type Mismatch
    3. Pass by Value
    4. Return by Value
    5. Eliminate Temporaries with op=()
    6. Key Points
  9. Single-Threaded Memory Pooling
    1. Version 0: The Global new() and delete()
    2. Version 1: Specialized Rational Memory Manager
    3. Version 2: Fixed-Size Object Memory Pool
    4. Version 3: Single-Threaded Variable-Size Memory Manager
    5. Key Points
  10. Multithreaded Memory Pooling
    1. Version 4: Implementation
    2. Version 5: Faster Locking
    3. Key Points
  11. Inlining Basics
    1. What Is Inlining?
    2. Method Invocation Costs
    3. Why Inline?
    4. Inlining Details
    5. Inlining Virtual Methods
    6. Performance Gains from Inlining
    7. Key Points
  12. Inlining—Performance Considerations
    1. Cross-Call Optimization
    2. Why Not Inline?
    3. Development and Compile-Time Inlining Considerations
    4. Profile-Based Inlining
    5. Inlining Rules
    6. Key Points
  13. Inlining Tricks
    1. Conditional Inlining
    2. Selective Inlining
    3. Recursive Inlining
    4. Inlining with Static Local Variables
    5. Architectural Caveat: Multiple Register Sets
    6. Key Points
  14. Standard Template Library
    1. Asymptotic Complexity
    2. Insertion
    3. Deletion
    4. Traversal
    5. Find
    6. Function Objects
    7. Better than STL?
    8. Key Points
  15. Reference Counting
    1. Implementation Details
    2. Preexisting Classes
    3. Concurrent Reference Counting
    4. Key Points
  16. Coding Optimizations
    1. Caching
    2. Precompute
    3. Reduce Flexibility
    4. 80-20 Rule: Speed Up the Common Path
    5. Lazy Evaluation
    6. Useless Computations
    7. System Architecture
    8. Memory Management
    9. Library and System Calls
    10. Compiler Optimization
    11. Key Points
  17. Design Optimizations
    1. Design Flexibility
    2. Caching
    3. Efficient Data Structures
    4. Lazy Evaluation
    5. Useless Computations
    6. Obsolete Code
    7. Key Points
  18. Scalability
    1. The SMP Architecture
    2. Amdahl's Law
    3. Multithreaded and Synchronization Terminology
    4. Break Up a Task into Multiple Subtasks
    5. Cache Shared Data
    6. Share Nothing
    7. Partial Sharing
    8. Lock Granularity
    9. False Sharing
    10. Thundering Herd
    11. Reader/Writer Locks
    12. Key Points
  19. System Architecture Dependencies
    1. Memory Hierarchies
    2. Registers: Kings of Memory
    3. Disk and Memory Structures
    4. Cache Effects
    5. Cache Thrash
    6. Avoid Branching
    7. Prefer Simple Calculations to Small Branches
    8. Threading Effects
    9. Context Switching
    10. Kernel Crossing
    11. Threading Choices
    12. Key Points
  20. Bibliography