
Algorithms: 24-part Lecture Series

Video Description

Algorithms, Deluxe Edition, Fourth Edition

These Algorithms Video Lectures cover the essential information that every serious programmer needs to know about algorithms and data structures, with emphasis on applications and scientific performance analysis of Java implementations.

Description

This collection of video lectures provides a comprehensive exploration of fundamental data types, algorithms, and data structures, with an emphasis on applications and scientific performance analysis of Java implementations. The instructors offer readings related to these lectures that you can find in Algorithms, Fourth Edition, the leading textbook on algorithms today. These lectures provide another perspective on the material presented in the book and generally cover the material in the same order, though some book topics have been combined, rearranged, or omitted in the lectures.

Don’t have the book? Purchase Algorithms, Deluxe Edition, which includes the print book and full access to the lecture videos.

You also can find related resources on the instructors’ web site, including the following:

  • Full Java implementations
  • Test data
  • Exercises and answers
  • Dynamic visualizations
  • Lecture slides
  • Programming assignments with checklists
  • Other links to related material

About the Instructors

Robert Sedgewick is the William O. Baker Professor of Computer Science at Princeton University. He is a Director of Adobe Systems and has served on the research staffs at Xerox PARC, IDA, and INRIA. He earned his PhD from Stanford University under Donald E. Knuth.

Kevin Wayne also teaches in the Department of Computer Science at Princeton University. His research focuses on theoretical computer science, especially optimization and the design, analysis, and implementation of computer algorithms. Wayne received his PhD from Cornell University.

Skill Level

All Levels

What You Will Learn

These videos survey the most important computer algorithms in use today. The algorithms described in these lectures represent a body of knowledge developed over the last 50 years that has become indispensable. These lectures present:

  • Implementations of useful algorithms
  • Detailed information on performance characteristics
  • Examples of clients and applications

The early lectures cover our fundamental approach to studying algorithms, including data types for stacks, queues, and other low-level abstractions. Then we cover these major topics:

  • Sorting algorithms, highlighting the classic Quicksort and Mergesort algorithms.
  • Searching algorithms, including search methods based on balanced search trees and hashing.
  • String-processing algorithms, from tries and substring search to regular expression search and data compression.
  • Graph algorithms, starting with graph search, shortest paths, and minimum spanning trees, and working up to maximum flow/minimum cut and applications.
  • Reductions, linear programming, and intractability.

Who Should Take This Course

The study of algorithms and data structures is fundamental to any computer-science curriculum, but it is not just for programmers and computer science students. These lectures are intended for:

  • Any student (from high school to graduate level) having a good introduction to programming, with an interest in majoring in any science or engineering discipline (including computer science).
  • Anyone using a computer to address large problems that require an understanding of efficient algorithms.
  • Anyone interested in preparing for a career in industry involving software or computer applications.

Course Requirements

  • Basic familiarity with Java
  • Some background in programming

Lecture Descriptions

Lecture 1: Union-Find. We illustrate our basic approach to developing and analyzing algorithms by considering the dynamic connectivity problem. We introduce the union-find data type and consider several implementations (quick find, quick union, weighted quick union, and weighted quick union with path compression). Finally, we apply the union-find data type to the percolation problem from physical chemistry.
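The weighted quick-union variant from Lecture 1 can be sketched as follows. This is a minimal illustration written from scratch (with path compression by halving); the lecture's version follows the same API but differs in details.

```java
// Weighted quick-union with path compression, as covered in Lecture 1.
public class QuickUnionUF {
    private final int[] parent; // parent[i] = parent of site i
    private final int[] size;   // size[i] = sites in the tree rooted at i

    public QuickUnionUF(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
    }

    private int root(int i) {
        while (parent[i] != i) {
            parent[i] = parent[parent[i]]; // path compression by halving
            i = parent[i];
        }
        return i;
    }

    public boolean connected(int p, int q) { return root(p) == root(q); }

    public void union(int p, int q) {
        int rp = root(p), rq = root(q);
        if (rp == rq) return;
        // weighting: link the root of the smaller tree to the larger
        if (size[rp] < size[rq]) { parent[rp] = rq; size[rq] += size[rp]; }
        else                     { parent[rq] = rp; size[rp] += size[rq]; }
    }
}
```

With both improvements, any sequence of M union-find operations on N sites takes time very nearly linear in M.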

Lecture 2: Analysis of Algorithms. The basis of our approach for analyzing the performance of algorithms is the scientific method. We begin by performing computational experiments to measure the running times of our programs. We use these measurements to develop hypotheses about performance. Next, we create mathematical models to explain their behavior. Finally, we consider analyzing the memory usage of our Java programs.

Lecture 3: Stacks and Queues. We consider two fundamental data types for storing collections of objects: the stack and the queue. We implement each using either a singly linked list or a resizing array. We introduce two advanced Java features, generics and iterators, that simplify client code. Finally, we consider various applications of stacks and queues, ranging from parsing arithmetic expressions to simulating queueing systems.
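The resizing-array approach from Lecture 3 can be sketched as a generic stack; the class and method names here are illustrative, not necessarily the book's.

```java
import java.util.Iterator;

// A generic stack backed by a resizing array, with a LIFO iterator.
public class ResizingArrayStack<Item> implements Iterable<Item> {
    @SuppressWarnings("unchecked")
    private Item[] a = (Item[]) new Object[1]; // underlying array
    private int n = 0;                         // number of items

    public boolean isEmpty() { return n == 0; }

    private void resize(int capacity) {
        a = java.util.Arrays.copyOf(a, capacity);
    }

    public void push(Item item) {
        if (n == a.length) resize(2 * a.length); // double when full
        a[n++] = item;
    }

    public Item pop() {
        Item item = a[--n];
        a[n] = null;                                          // avoid loitering
        if (n > 0 && n == a.length / 4) resize(a.length / 2); // halve when quarter full
        return item;
    }

    public Iterator<Item> iterator() { // iterate in LIFO order
        return new Iterator<Item>() {
            private int i = n;
            public boolean hasNext() { return i > 0; }
            public Item next() { return a[--i]; }
        };
    }
}
```

Doubling when full and halving when one-quarter full guarantees constant amortized time per operation while never wasting more than a constant factor of memory.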

Lecture 4: Elementary Sorts. We introduce the sorting problem and Java's Comparable interface. We study two elementary sorting methods (selection sort and insertion sort) and a variation of one of them (shellsort). We also consider two algorithms for uniformly shuffling an array. We conclude with an application of sorting to computing the convex hull via the Graham scan algorithm.
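Insertion sort, one of the two elementary methods from Lecture 4, can be sketched in a few lines. The lecture's version sorts Comparable[] with exch/less helpers; this sketch uses an int array for brevity.

```java
// Insertion sort: each item moves left until it meets a smaller item.
public class Insertion {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            // invariant: a[0..i-1] is sorted; insert a[i] into position
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
            }
        }
    }
}
```

The running time is quadratic in the worst case but linear for nearly sorted input, which is why insertion sort remains useful as a subroutine.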

Lecture 5: Mergesort. We study the mergesort algorithm and show that it is guaranteed to sort any array of N items with at most N lg N compares. We also consider a nonrecursive, bottom-up version. We prove that any compare-based sorting algorithm must make at least N lg N compares in the worst case. We discuss using different orderings for the objects that we are sorting and the related concept of stability.
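The recursive mergesort from Lecture 5 can be sketched as below, again on an int array for brevity (the lecture's version is generic and comparator-based).

```java
// Top-down mergesort: sort each half recursively, then merge.
public class Merge {
    // merge sorted subarrays a[lo..mid] and a[mid+1..hi]
    private static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if      (i > mid)         a[k] = aux[j++]; // left half exhausted
            else if (j > hi)          a[k] = aux[i++]; // right half exhausted
            else if (aux[j] < aux[i]) a[k] = aux[j++]; // take smaller item
            else                      a[k] = aux[i++]; // ties go left (stability)
        }
    }

    private static void sort(int[] a, int[] aux, int lo, int hi) {
        if (hi <= lo) return;
        int mid = lo + (hi - lo) / 2;
        sort(a, aux, lo, mid);
        sort(a, aux, mid + 1, hi);
        merge(a, aux, lo, mid, hi);
    }

    public static void sort(int[] a) {
        sort(a, new int[a.length], 0, a.length - 1);
    }
}
```

Taking equal keys from the left half in the merge is exactly what makes mergesort stable.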

Lecture 6: Quicksort. We introduce and implement the randomized quicksort algorithm and analyze its performance. We also consider randomized quickselect, a quicksort variant that finds the kth smallest item in linear time. Finally, we consider 3-way quicksort, a variant of quicksort that works especially well in the presence of duplicate keys.
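Quickselect from Lecture 6 can be sketched as follows: partition as in quicksort, but recur into only one side. The names here are illustrative; the partitioning scheme is the standard one from the lectures.

```java
import java.util.Random;

// Randomized quickselect: find the kth smallest item (k = 0 is the minimum)
// in expected linear time.
public class Quick {
    public static int select(int[] a, int k) {
        shuffle(a); // randomization guards against worst-case input
        int lo = 0, hi = a.length - 1;
        while (lo < hi) {
            int j = partition(a, lo, hi);
            if      (j < k) lo = j + 1; // kth smallest is to the right
            else if (j > k) hi = j - 1; // kth smallest is to the left
            else return a[k];
        }
        return a[k];
    }

    // partition a[lo..hi] around a[lo]; return its final position
    private static int partition(int[] a, int lo, int hi) {
        int v = a[lo], i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < v) if (i == hi) break; // scan right
            while (v < a[--j]) if (j == lo) break; // scan left
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, lo, j); // put partitioning item in place
        return j;
    }

    private static void shuffle(int[] a) { // Knuth shuffle
        Random rnd = new Random();
        for (int i = a.length - 1; i > 0; i--) swap(a, i, rnd.nextInt(i + 1));
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```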

Lecture 7: Priority Queues. We introduce the priority queue data type and an efficient implementation using the binary heap data structure. This implementation also leads to an efficient sorting algorithm known as heapsort. We conclude with an application of priority queues in which we simulate the motion of N particles subject to the laws of elastic collision.
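Heapsort from Lecture 7 can be sketched with a single sink operation: build the heap bottom-up, then repeatedly exchange the maximum into place. This sketch uses 0-based indexing, whereas the lecture's code is 1-based.

```java
// Heapsort: heap construction, then repeated sortdown exchanges.
public class Heap {
    public static void sort(int[] a) {
        int n = a.length;
        for (int k = n / 2 - 1; k >= 0; k--) sink(a, k, n); // heap construction
        while (n > 1) {
            swap(a, 0, --n); // move current max to the end
            sink(a, 0, n);   // restore heap order on the remainder
        }
    }

    // sink a[k] down until heap order is restored within a[0..n-1]
    private static void sink(int[] a, int k, int n) {
        while (2 * k + 1 < n) {
            int j = 2 * k + 1;                     // left child
            if (j + 1 < n && a[j] < a[j + 1]) j++; // pick the larger child
            if (a[k] >= a[j]) break;
            swap(a, k, j);
            k = j;
        }
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```

Heapsort is the only sorting algorithm in the course that is both in-place and guaranteed N lg N, though it is rarely the fastest in practice.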

Lecture 8: Elementary Symbol Tables. We define an API for symbol tables (also known as associative arrays) and describe two elementary implementations using a sorted array (binary search) and an unordered list (sequential search). When the keys are Comparable, we define an extended API that includes the additional methods min, max, floor, ceiling, rank, and select. To develop an efficient implementation of this API, we study the binary search tree data structure and analyze its performance.
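The core of the BST symbol table from Lecture 8 is the pair of get and put operations below. This sketch fixes the types to String keys and Integer values; the lecture's version is generic.

```java
// A symbol table implemented as an (unbalanced) binary search tree.
public class BST {
    private Node root;

    private static class Node {
        String key; Integer val;
        Node left, right;
        Node(String key, Integer val) { this.key = key; this.val = val; }
    }

    public Integer get(String key) {
        Node x = root;
        while (x != null) {
            int cmp = key.compareTo(x.key);
            if      (cmp < 0) x = x.left;
            else if (cmp > 0) x = x.right;
            else return x.val;
        }
        return null; // key not in table
    }

    public void put(String key, Integer val) { root = put(root, key, val); }

    private Node put(Node x, String key, Integer val) {
        if (x == null) return new Node(key, val);
        int cmp = key.compareTo(x.key);
        if      (cmp < 0) x.left  = put(x.left,  key, val);
        else if (cmp > 0) x.right = put(x.right, key, val);
        else x.val = val; // key already present: overwrite the value
        return x;
    }
}
```

Search and insert cost is proportional to tree height, which is logarithmic for random insertion orders but linear in the worst case, motivating the balanced trees of Lecture 9.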

Lecture 9: Balanced Search Trees. In this lecture, our goal is to develop a symbol table with guaranteed logarithmic performance for search and insert (and many other operations). We begin with 2-3 trees, which are easy to analyze but hard to implement. Next, we consider red-black binary search trees, which we view as a novel way to implement 2-3 trees as binary search trees. Finally, we introduce B-trees, a generalization of 2-3 trees that are widely used to implement file systems.

Lecture 10: Geometric Applications of BSTs. We start with 1d and 2d range searching, where the goal is to find all points in a given 1d or 2d interval. To accomplish this, we consider kd-trees, a natural generalization of BSTs when the keys are points in the plane (or higher dimensions). We also consider intersection problems, where the goal is to find all intersections among a set of line segments or rectangles.

Lecture 11: Hash Tables. We begin by describing the desirable properties of hash functions and how to implement them in Java, including a fundamental tenet known as the uniform hashing assumption that underlies the potential success of a hashing application. Then, we consider two strategies for implementing hash tables: separate chaining and linear probing. Both strategies yield constant-time performance for search and insert under the uniform hashing assumption. We conclude with applications of symbol tables including sets, dictionary clients, indexing clients, and sparse vectors.
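The linear-probing strategy from Lecture 11 can be sketched as below. For brevity this sketch uses a fixed table size; the lecture's version resizes the array to keep the table less than half full, which the performance guarantee depends on.

```java
// A linear-probing hash table with String keys and Integer values.
public class LinearProbingHashST {
    private final int m = 97; // table size (prime; fixed for this sketch)
    private final String[] keys = new String[m];
    private final Integer[] vals = new Integer[m];

    // map the key's hashCode to a table index in [0, m)
    private int hash(String key) { return (key.hashCode() & 0x7fffffff) % m; }

    public void put(String key, Integer val) {
        int i = hash(key);
        // probe forward until we find the key or an empty slot
        while (keys[i] != null && !keys[i].equals(key)) i = (i + 1) % m;
        keys[i] = key;
        vals[i] = val;
    }

    public Integer get(String key) {
        for (int i = hash(key); keys[i] != null; i = (i + 1) % m)
            if (keys[i].equals(key)) return vals[i];
        return null; // search stopped at an empty slot: key absent
    }
}
```

Masking off the sign bit before taking the remainder avoids a negative index when hashCode() is negative, a standard pitfall the lecture calls out.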

Lecture 12: Undirected Graphs. We define an undirected graph API and consider the adjacency-matrix and adjacency-lists representations. We introduce two classic algorithms for searching a graph: depth-first search and breadth-first search. We also consider the problem of computing connected components and conclude with related problems and applications.
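The adjacency-lists representation and recursive depth-first search from Lecture 12 can be sketched together; the names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// An undirected graph with adjacency lists, plus recursive DFS.
public class Graph {
    private final List<List<Integer>> adj;

    public Graph(int v) {
        adj = new ArrayList<>();
        for (int i = 0; i < v; i++) adj.add(new ArrayList<>());
    }

    public void addEdge(int v, int w) { // undirected: add in both lists
        adj.get(v).add(w);
        adj.get(w).add(v);
    }

    // return marked[], where marked[w] is true iff w is reachable from s
    public boolean[] dfs(int s) {
        boolean[] marked = new boolean[adj.size()];
        dfs(s, marked);
        return marked;
    }

    private void dfs(int v, boolean[] marked) {
        marked[v] = true;
        for (int w : adj.get(v))
            if (!marked[w]) dfs(w, marked); // visit each vertex once
    }
}
```

DFS marks all vertices connected to the source in time proportional to the sum of their degrees, the key fact behind the connected-components algorithm.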

Lecture 13: Directed Graphs. In this lecture we study directed graphs. We begin with depth-first search and breadth-first search in digraphs and describe applications ranging from garbage collection to web crawling. Next, we introduce a depth-first search based algorithm for computing the topological order of an acyclic digraph. Finally, we implement the Kosaraju-Sharir algorithm for computing the strong components of a digraph.
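The topological-sort idea from Lecture 13 is simply reverse DFS postorder. A sketch, assuming the digraph is given as adjacency lists and has no directed cycles:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Topological sort of an acyclic digraph via reverse DFS postorder.
public class Topological {
    public static Deque<Integer> order(List<List<Integer>> adj) {
        boolean[] marked = new boolean[adj.size()];
        Deque<Integer> reversePost = new ArrayDeque<>(); // push = prepend
        for (int v = 0; v < adj.size(); v++)
            if (!marked[v]) dfs(adj, v, marked, reversePost);
        return reversePost; // front-to-back is a topological order
    }

    private static void dfs(List<List<Integer>> adj, int v,
                            boolean[] marked, Deque<Integer> reversePost) {
        marked[v] = true;
        for (int w : adj.get(v))
            if (!marked[w]) dfs(adj, w, marked, reversePost);
        reversePost.push(v); // v is finished after all vertices it points to
    }
}
```

A vertex is pushed only after everything reachable from it, so reading the deque front to back puts every vertex before the vertices it points to.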

Lecture 14: Minimum Spanning Trees. In this lecture we study the minimum spanning tree problem. We begin by considering a generic greedy algorithm for the problem. Next, we consider and implement two classic algorithms for the problem: Kruskal's algorithm and Prim's algorithm. We conclude with some applications and open problems.

Lecture 15: Shortest Paths. In this lecture we study shortest-paths problems. We begin by analyzing some basic properties of shortest paths and a generic algorithm for the problem. We introduce and analyze Dijkstra's algorithm for shortest-paths problems with nonnegative weights. Next, we consider an even faster algorithm for DAGs, which works even if the weights are negative. We conclude with the Bellman-Ford-Moore algorithm for edge-weighted digraphs with no negative cycles. We also consider applications ranging from content-aware fill to arbitrage.

Lecture 16: Maximum Flow and Minimum Cut. In this lecture we introduce the maximum flow and minimum cut problems. We begin with the Ford-Fulkerson algorithm. To analyze its correctness, we establish the maxflow-mincut theorem. Next, we consider an efficient implementation of the Ford-Fulkerson algorithm, using the shortest augmenting path rule. Finally, we consider applications, including bipartite matching and baseball elimination.

Lecture 17: Radix Sorts. In this lecture we consider specialized sorting algorithms for strings and related objects. We begin with a subroutine to sort integers in a small range. We then consider two classic radix sorting algorithms: LSD and MSD radix sorts. Next, we consider an especially efficient variant, known as 3-way radix quicksort, which is a hybrid of MSD radix sort and quicksort. We conclude with suffix sorting and related applications.
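LSD radix sort from Lecture 17 is one pass of key-indexed counting per character position, working from the rightmost position to the leftmost. A sketch for equal-length strings:

```java
// LSD radix sort for an array of strings, each of length w.
public class LSD {
    public static void sort(String[] a, int w) {
        int R = 256; // extended-ASCII alphabet size
        String[] aux = new String[a.length];
        for (int d = w - 1; d >= 0; d--) {
            // key-indexed counting on character d
            int[] count = new int[R + 1];
            for (String s : a) count[s.charAt(d) + 1]++;          // frequencies
            for (int r = 0; r < R; r++) count[r + 1] += count[r]; // cumulates
            for (String s : a) aux[count[s.charAt(d)]++] = s;     // distribute (stable)
            System.arraycopy(aux, 0, a, 0, a.length);             // copy back
        }
    }
}
```

Stability of each counting pass is what makes the overall result correct: after pass d, the strings are sorted on characters d through w-1.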

Lecture 18: Tries. In this lecture we consider specialized algorithms for symbol tables with string keys. Our goal is a data structure that is as fast as hashing and even more flexible than binary search trees. We begin with multiway tries; next we consider ternary search tries. Finally, we consider character-based operations, including prefix match and longest prefix, and related applications.

Lecture 19: Substring Search. In this lecture we consider algorithms for searching for a substring in a piece of text. We begin with a brute-force algorithm, whose running time is quadratic in the worst case. Next, we consider the ingenious Knuth-Morris-Pratt algorithm whose running time is guaranteed to be linear in the worst case. Then, we introduce the Boyer-Moore algorithm, whose running time is sublinear on typical inputs. Finally, we consider the Rabin-Karp fingerprint algorithm, which uses hashing in a clever way to solve the substring search and related problems.
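The brute-force baseline from Lecture 19 is worth seeing in code, since KMP, Boyer-Moore, and Rabin-Karp are all refinements of it:

```java
// Brute-force substring search: try every start position in the text.
// Worst case is about M*N character compares (pattern length M, text length N).
public class BruteForceSearch {
    public static int search(String pat, String txt) {
        int m = pat.length(), n = txt.length();
        for (int i = 0; i <= n - m; i++) {
            int j;
            for (j = 0; j < m; j++)
                if (txt.charAt(i + j) != pat.charAt(j)) break; // mismatch: shift by one
            if (j == m) return i; // found: pattern begins at offset i
        }
        return n; // not found (the lecture's convention)
    }
}
```

KMP's insight is that after a mismatch, the characters already matched tell us how far the pattern can safely shift, eliminating the backup over the text that this version performs implicitly.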

Lecture 20: Regular Expressions. A regular expression is a method for specifying a set of strings. Our topic for this lecture is the famous grep algorithm that determines whether a given text contains any substring from the set. We examine an efficient implementation that makes use of our digraph reachability implementation from the lecture on directed graphs.

Lecture 21: Data Compression. We study and implement several classic data compression schemes, including run-length coding, Huffman compression, and LZW compression. We develop efficient implementations from first principles using a Java library for manipulating binary data that we developed for this purpose, based on priority queue and symbol table implementations from earlier lectures.
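Run-length coding, the simplest scheme in Lecture 21, can be illustrated at the character level. The lecture's version operates on raw bits using a binary I/O library; this sketch encodes a string as alternating run lengths and characters purely for illustration.

```java
// A simplified, character-level sketch of run-length coding.
public class RunLength {
    public static String encode(String s) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            char c = s.charAt(i);
            int run = 0;
            while (i < s.length() && s.charAt(i) == c) { run++; i++; } // count the run
            sb.append(run).append(c); // emit count, then the repeated character
        }
        return sb.toString();
    }
}
```

For example, encode("aaabbc") produces "3a2b1c". The scheme compresses only when runs are long, which is why the lecture applies it to bitmaps rather than general text.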

Lecture 22: Reductions. In this lecture our goal is to develop ways to classify problems according to their computational requirements. We introduce the concept of reduction as a technique for studying the relationship among problems. People use reductions to design algorithms, establish lower bounds, and classify problems in terms of their computational requirements.

Lecture 23: Linear Programming. The quintessential problem-solving model is known as linear programming, and the simplex method for solving it is one of the most widely used algorithms. In this lecture, we give an overview of this central topic in operations research and describe its relationship to algorithms that we have considered.

Lecture 24: Intractability. Is there a universal problem-solving model to which all problems that we would like to solve reduce and for which we know an efficient algorithm? You may be surprised to learn that we do not know the answer to this question. In this lecture we introduce the complexity classes P, NP, and NP-complete; pose the famous P = NP question; and consider implications in the context of algorithms that we have treated in this course.

Table of Contents

  1. Introduction
    1. Algorithms: Introduction 00:09:23
  2. Lecture 1: Union-Find
    1. Dynamic Connectivity 00:10:23
    2. Quick Find 00:10:19
    3. Quick Union 00:07:51
    4. Quick-Union Improvements 00:13:03
    5. Union-Find Applications 00:09:22
  3. Lecture 2: Analysis of Algorithms
    1. Introduction to Analysis of Algorithms 00:08:15
    2. Observations 00:10:06
    3. Mathematical Models 00:12:49
    4. Order-of-Growth Classifications 00:14:40
    5. Theory of Algorithms 00:11:36
    6. Memory 00:08:12
  4. Lecture 3: Stacks and Queues
    1. Stacks 00:16:24
    2. Resizing Arrays 00:09:57
    3. Queues 00:04:34
    4. Generics 00:09:27
    5. Iterators 00:07:17
    6. Stack and Queue Applications 00:13:25
  5. Lecture 4: Elementary Sorts
    1. Introduction to Sorting 00:14:43
    2. Selection Sort 00:06:59
    3. Insertion Sort 00:09:29
    4. Shellsort 00:10:49
    5. Shuffling 00:07:40
    6. Convex Hull 00:13:51
  6. Lecture 5: Mergesort
    1. Mergesort 00:23:55
    2. Bottom-up Mergesort 00:03:21
    3. Sorting Complexity 00:09:06
    4. Comparators 00:06:44
    5. Stability 00:05:40
  7. Lecture 6: Quicksort
    1. Quicksort 00:19:34
    2. Selection 00:07:09
    3. Duplicate Keys 00:11:26
    4. System Sorts 00:11:50
  8. Lecture 7: Priority Queues
    1. APIs and Elementary Implementations 00:12:53
    2. Binary Heaps 00:23:37
    3. Heapsort 00:14:30
    4. Event-Driven Simulation 00:22:39
  9. Lecture 8: Elementary Symbol Tables
    1. Symbol Table APIs 00:21:31
    2. Elementary Implementations 00:09:04
    3. Ordered Operations 00:06:26
    4. Binary Search Trees 00:19:57
    5. Ordered Operations in BSTs 00:10:32
    6. Deletion in BSTs 00:09:53
  10. Lecture 9: Balanced Search Trees
    1. Search Trees 00:16:56
    2. Red-Black BSTs 00:35:30
    3. B-Trees 00:10:37
  11. Lecture 10: Geometric Applications of BSTs
    1. Range Search 00:08:52
    2. Line Segment Intersection 00:05:47
    3. Kd-Trees 00:29:08
    4. Interval Search Trees 00:13:47
    5. Rectangle Intersection 00:08:11
  12. Lecture 11: Hash Tables
    1. Hash Functions 00:18:14
    2. Separate Chaining 00:07:28
    3. Linear Probing 00:14:38
    4. Context 00:10:10
    5. Sets 00:05:05
    6. Dictionary Clients 00:05:39
    7. Indexing Clients 00:07:53
    8. Sparse Vectors 00:07:42
  13. Lecture 12: Undirected Graphs
    1. Introduction to Graphs 00:09:33
    2. Graph API 00:14:47
    3. Depth-First Search 00:26:23
    4. Breadth-First Search 00:13:34
    5. Connected Components 00:18:56
    6. Graph Challenges 00:14:29
  14. Lecture 13: Directed Graphs
    1. Introduction to Digraphs 00:08:31
    2. Digraph API 00:04:56
    3. Digraph Search 00:20:57
    4. Topological Sort 00:12:54
    5. Strong Components 00:20:22
  15. Lecture 14: Minimum Spanning Trees
    1. Introduction to MSTs 00:04:04
    2. Greedy Algorithm 00:12:57
    3. Edge-Weighted Graph API 00:11:16
    4. Kruskal's Algorithm 00:12:29
    5. Prim's Algorithm 00:33:16
    6. MST Context 00:10:35
  16. Lecture 15: Shortest Paths
    1. Shortest Paths APIs 00:10:51
    2. Shortest Path Properties 00:14:46
    3. Dijkstra's Algorithm 00:18:58
    4. Edge-Weighted DAGs 00:19:24
    5. Negative Weights 00:21:02
  17. Lecture 16: Maximum Flow and Minimum Cut
    1. Introduction to Maxflow 00:10:34
    2. Ford-Fulkerson Algorithm 00:06:33
    3. Maxflow-Mincut Theorem 00:09:39
    4. Running Time Analysis 00:08:50
    5. Java Implementation 00:14:29
    6. Maxflow Applications 00:22:20
  18. Lecture 17: Radix Sorts
    1. Strings in Java 00:17:43
    2. Key-Indexed Counting 00:12:07
    3. LSD Radix Sort 00:15:01
    4. MSD Radix Sort 00:13:42
    5. 3-way Radix Quicksort 00:07:23
    6. Suffix Arrays 00:19:25
  19. Lecture 18: Tries
    1. R-way Tries 00:32:19
    2. Ternary Search Tries 00:22:43
    3. Character-Based Operations 00:20:04
  20. Lecture 19: Substring Search
    1. Introduction to Substring Search 00:06:42
    2. Brute-Force Substring Search 00:10:11
    3. Knuth-Morris-Pratt 00:33:15
    4. Boyer-Moore 00:08:37
    5. Rabin-Karp 00:16:14
  21. Lecture 20: Regular Expressions
    1. Regular Expressions 00:20:03
    2. REs and NFAs 00:13:14
    3. NFA Simulation 00:18:24
    4. NFA Construction 00:11:43
    5. Regular Expression Applications 00:20:06
  22. Lecture 21: Data Compression
    1. Introduction to Data Compression 00:22:27
    2. Run-Length Coding 00:05:59
    3. Huffman Compression 00:24:15
    4. LZW Compression 00:27:33
  23. Lecture 22: Reductions
    1. Introduction to Reductions 00:09:32
    2. Designing Algorithms 00:08:13
    3. Establishing Lower Bounds 00:09:23
    4. Classifying Problems 00:12:45
  24. Lecture 23: Linear Programming
    1. Brewer's Problem 00:21:16
    2. Simplex Algorithm 00:11:49
    3. Simplex Implementations 00:16:22
    4. Linear Programming Reductions 00:11:46
  25. Lecture 24: Intractability
    1. Introduction to Intractability 00:17:01
    2. Search Problems 00:10:57
    3. P vs. NP 00:16:30
    4. Classifying Problems 00:13:43
    5. NP-Completeness 00:12:38
    6. Coping with Intractability 00:14:02