We compare algorithms by evaluating their performance on input data of size *n*. This methodology is the standard developed over the past half-century for comparing algorithms. By evaluating an algorithm's running time in relation to the size of its input, we can determine which algorithms scale to solve problems of nontrivial size. A secondary form of performance evaluation is to consider how much memory or storage an algorithm needs; we address these concerns within the individual algorithm chapters, as appropriate.

We use the following classifications exclusively in this book, and they are ordered by decreasing efficiency:

Constant

Logarithmic

Sublinear

Linear

*n* log(*n*)

Quadratic

Exponential
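To get a feel for how differently these classes grow, the following sketch tabulates an approximate operation count for a representative function from each class at a single input size. The specific functions chosen (for example, square root standing in for sublinear) are illustrative assumptions, not definitions from this book.

```python
import math

# Representative growth functions for each performance class.
# The sublinear stand-in of sqrt(n) is one common example, not the
# only possibility.
classes = {
    "Constant":    lambda n: 1,
    "Logarithmic": lambda n: math.log2(n),
    "Sublinear":   lambda n: n ** 0.5,
    "Linear":      lambda n: n,
    "n log n":     lambda n: n * math.log2(n),
    "Quadratic":   lambda n: n ** 2,
    "Exponential": lambda n: 2 ** n,
}

# n = 16 keeps even the exponential entry small enough to print.
for name, f in classes.items():
    print(f"{name:12s} n=16 -> {f(16):,.0f}")
```

Even at *n* = 16 the spread is dramatic: the logarithmic function yields 4, the quadratic 256, and the exponential 65,536, which is why the ordering above matters so much as inputs grow.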

We'll now present several discussions to illustrate some of these performance classifications.

**Discussion 0: Constant Behavior**

When analyzing the performance of the algorithms in this book, we frequently claim that some primitive operations provide constant performance. Clearly this claim does not determine the absolute performance of the operation, since we do not refer to specific hardware. For example, comparing whether two 32-bit numbers *x* and *y* hold the same value should take the same amount of time regardless of the actual values of *x* and *y*. A constant operation is defined to have O(1) performance.
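One rough way to check this claim empirically is to time the comparison with differently sized values. The sketch below uses Python's standard `timeit` module; the particular values and repetition count are arbitrary choices for illustration, and wall-clock results will vary by machine.

```python
import timeit

# Sketch: comparing two fixed-width integers should take roughly the
# same time whether the values are small or near the 32-bit maximum.
# The repetition count of one million is an arbitrary illustrative choice.
small = timeit.timeit("x == y", setup="x, y = 7, 7", number=1_000_000)
large = timeit.timeit("x == y", setup="x, y = 2**31 - 1, 2**31 - 1", number=1_000_000)

print(f"small values: {small:.4f}s, large values: {large:.4f}s")
```

On any one run the two timings should be of similar magnitude, which is the empirical signature of O(1) behavior: the cost depends on the fixed width of the representation, not on the values stored in it.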

What about the performance of comparing two 256-bit numbers? Or two 1,024-bit numbers? It turns out that for a predetermined fixed ...
