Principle: Decompose the Problem into Smaller Problems

When designing an efficient algorithm to solve a problem, it is helpful if the problem can be decomposed into two (or more) smaller subproblems. It is no mistake that Quicksort remains one of the most popular sorting algorithms. Even with the well-documented special cases that cause problems, Quicksort offers the best average-case behavior for sorting large collections of information. Indeed, the very concept of an O(n log n) algorithm is based on the ability to (a) decompose a problem of size n into two subproblems of about n/2 in size, and (b) recombine the solutions of the two subproblems into a solution for the original problem. To produce an O(n log n) algorithm, both of these steps must execute in O(n) time; the resulting recurrence T(n) = 2*T(n/2) + O(n) resolves to O(n log n).
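
Mergesort is the textbook instance of this pattern. The following sketch (illustrative, not drawn from the book; it assumes a list of mutually comparable items) shows step (a) as the split into halves and step (b) as an O(n) merge:

    def merge_sort(items):
        """Sort a list by splitting it in half (step a) and
        merging the sorted halves back together (step b)."""
        if len(items) <= 1:
            return items              # a problem of size 0 or 1 is already solved
        mid = len(items) // 2
        left = merge_sort(items[:mid])    # subproblem of size ~n/2
        right = merge_sort(items[mid:])   # subproblem of size ~n/2
        # Recombine in O(n): repeatedly take the smaller front element.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged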

Quicksort was the first in-place sorting algorithm to demonstrate O(n log n) average-case performance. It succeeds through a novel (almost counterintuitive) approach to dividing the problem: partition the elements around a chosen pivot into two smaller subproblems, each of which can be solved recursively by applying Quicksort again.
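
A minimal sketch of that partition-then-recurse structure (again illustrative rather than the book's implementation; it assumes comparable elements and, for simplicity, takes the last element as the pivot):

    def quicksort(a, lo=0, hi=None):
        """Sort a[lo..hi] in place by partitioning around a pivot."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[hi]                 # naive pivot choice; production code selects more carefully
        i = lo                        # boundary of the "less than pivot" region
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]     # place the pivot between the two regions
        quicksort(a, lo, i - 1)       # recursively sort the left subproblem
        quicksort(a, i + 1, hi)       # recursively sort the right subproblem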

Problems can often simply be cut in half, leading to impressive performance savings. Consider how Binary Search converts a problem of size n into a problem of size n/2, provided the collection being searched is already sorted. Binary Search takes advantage of the repetitive nature of the search task to develop a recursive solution to the problem.
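
The halving step is visible in this short sketch (an illustrative version, assuming a sorted list a):

    def binary_search(a, target, lo=0, hi=None):
        """Return an index of target in sorted list a, or -1 if absent."""
        if hi is None:
            hi = len(a) - 1
        if lo > hi:
            return -1                 # subproblem of size 0: target is absent
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            return binary_search(a, target, mid + 1, hi)   # search upper half
        return binary_search(a, target, lo, mid - 1)       # search lower half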

Sometimes a problem can be solved by dividing it into two subproblems without resorting to recursion. Convex Hull Scan produces the final convex hull by constructing two partial hulls (the upper and the lower) over the points sorted by x coordinate and merging them together.
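
The sketch below shows that two-subproblem structure in the monotone-chain formulation, which is an assumption here since the book's own code is not shown: the lower and upper partial hulls are built independently over the sorted points and then concatenated.

    def convex_hull(points):
        """Return the convex hull of 2-D points as two merged partial hulls."""
        def cross(o, a, b):
            # Positive when the turn o -> a -> b is counterclockwise.
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def half_hull(seq):
            hull = []
            for p in seq:
                while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                    hull.pop()        # drop points that would make a non-left turn
                hull.append(p)
            return hull

        lower = half_hull(pts)             # first subproblem: lower partial hull
        upper = half_hull(reversed(pts))   # second subproblem: upper partial hull
        # Merge: drop each half's final point, which duplicates the other's first.
        return lower[:-1] + upper[:-1]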
