
Algorithms, 4th edition

1. Fundamentals
1.1 Programming Model
1.2 Data Abstraction
1.3 Stacks and Queues
1.4 Analysis of Algorithms
1.5 Case Study: Union-Find
2. Sorting
2.1 Elementary Sorts
2.2 Mergesort
2.3 Quicksort
2.4 Priority Queues
2.5 Sorting Applications
3. Searching
3.1 Symbol Tables
3.2 Binary Search Trees
3.3 Balanced Search Trees
3.4 Hash Tables
3.5 Searching Applications
4. Graphs
4.1 Undirected Graphs
4.2 Directed Graphs
4.3 Minimum Spanning Trees
4.4 Shortest Paths
5. Strings
5.1 String Sorts
5.2 Tries
5.3 Substring Search
5.4 Regular Expressions
5.5 Data Compression
6. Context
6.1 Event-Driven Simulation
6.2 B-trees
6.3 Suffix Arrays
6.4 Maxflow
6.5 Reductions
6.6 Intractability

Algorithms and Data Structures Cheatsheet

We summarize the performance characteristics of classic algorithms and data structures for sorting, priority queues, symbol
tables, and graph processing.

We also summarize some of the mathematics useful in the analysis of algorithms, including commonly encountered
functions; useful formulas and approximations; properties of logarithms; asymptotic notations; and solutions to divide-and-
conquer recurrences.

Sorting.
The table below summarizes the number of compares for a variety of sorting algorithms, as implemented in this textbook. It
includes leading constants but ignores lower-order terms.

| ALGORITHM | CODE | IN PLACE | STABLE | BEST | AVERAGE | WORST | REMARKS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| selection sort | Selection.java | yes | no | ½ n² | ½ n² | ½ n² | n exchanges; quadratic in best case |
| insertion sort | Insertion.java | yes | yes | n | ¼ n² | ½ n² | use for small or partially-sorted arrays |
| bubble sort | Bubble.java | yes | yes | n | ½ n² | ½ n² | rarely useful; use insertion sort instead |
| shellsort | Shell.java | yes | no | n log₃ n | unknown | c n^(3/2) | tight code; subquadratic |
| mergesort | Merge.java | no | yes | ½ n lg n | n lg n | n lg n | n log n guarantee; stable |
| quicksort | Quick.java | yes | no | n lg n | 2 n ln n | ½ n² | n log n probabilistic guarantee; fastest in practice |
| heapsort | Heap.java | yes | no | n † | 2 n lg n | 2 n lg n | n log n guarantee; in place |

† n lg n if all keys are distinct
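For concreteness, here is a minimal insertion sort sketch (an illustrative version, not the textbook's Insertion.java): on an already-sorted array the inner loop exits immediately, giving the ~n best case in the table, while on random input about half of the preceding items are shifted, giving ~¼ n² compares on average.

```java
// Minimal insertion sort sketch (illustrative; not identical to the
// textbook's Insertion.java). Roughly n compares on a sorted array and
// about n^2/4 compares on average for random input.
public class InsertionSketch {
    public static void sort(int[] a) {
        int n = a.length;
        for (int i = 1; i < n; i++) {
            // shift a[i] left until it is no smaller than its left neighbor
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int swap = a[j];
                a[j] = a[j - 1];
                a[j - 1] = swap;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = { 5, 2, 4, 6, 1, 3 };
        sort(a);
        System.out.println(java.util.Arrays.toString(a));  // [1, 2, 3, 4, 5, 6]
    }
}
```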

Priority queues.
The table below summarizes the order of growth of the running time of operations for a variety of priority queues, as
implemented in this textbook. It ignores leading constants and lower-order terms. Except as noted, all running times are
worst-case running times.

| DATA STRUCTURE | CODE | INSERT | DEL-MIN | MIN | DEC-KEY | DELETE | MERGE |
| --- | --- | --- | --- | --- | --- | --- | --- |
| array | BruteIndexMinPQ.java | 1 | n | n | 1 | 1 | n |
| binary heap | IndexMinPQ.java | log n | log n | 1 | log n | log n | n |
| d-way heap | IndexMultiwayMinPQ.java | log_d n | d log_d n | 1 | log_d n | d log_d n | n |
| binomial heap | IndexBinomialMinPQ.java | 1 | log n | 1 | log n | log n | log n |
| Fibonacci heap | IndexFibonacciMinPQ.java | 1 | log n † | 1 | 1 † | log n † | 1 |

† amortized guarantee
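To see where the log n bounds for the binary heap come from, here is a minimal binary min-heap sketch (illustrative only; the textbook's IndexMinPQ.java is an indexed heap that also supports decrease-key and delete by index). Both insert and delete-the-minimum move a key along a single root-to-leaf path of ~lg n levels.

```java
import java.util.Arrays;

// Minimal binary min-heap sketch (illustrative; not the textbook's IndexMinPQ.java).
// insert() and delMin() each move along one root-to-leaf path, hence ~lg n compares.
public class MinHeapSketch {
    private int[] pq = new int[1];   // 1-based heap array
    private int n = 0;               // number of keys

    public void insert(int key) {
        if (n + 1 == pq.length) pq = Arrays.copyOf(pq, 2 * pq.length);
        pq[++n] = key;
        swim(n);                     // restore heap order bottom-up
    }

    public int delMin() {
        int min = pq[1];
        pq[1] = pq[n--];
        sink(1);                     // restore heap order top-down
        return min;
    }

    private void swim(int k) {
        while (k > 1 && pq[k / 2] > pq[k]) { swap(k / 2, k); k = k / 2; }
    }

    private void sink(int k) {
        while (2 * k <= n) {
            int j = 2 * k;
            if (j < n && pq[j + 1] < pq[j]) j++;   // pick the smaller child
            if (pq[k] <= pq[j]) break;
            swap(k, j);
            k = j;
        }
    }

    private void swap(int i, int j) { int t = pq[i]; pq[i] = pq[j]; pq[j] = t; }

    public static void main(String[] args) {
        MinHeapSketch heap = new MinHeapSketch();
        for (int key : new int[] { 7, 3, 9, 1 }) heap.insert(key);
        System.out.println(heap.delMin());   // 1
        System.out.println(heap.delMin());   // 3
    }
}
```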

Symbol tables.
The table below summarizes the order of growth of the running time of operations for a variety of symbol tables, as
implemented in this textbook. It ignores leading constants and lower-order terms.

| DATA STRUCTURE | CODE | SEARCH (worst) | INSERT (worst) | DELETE (worst) | SEARCH (average) | INSERT (average) | DELETE (average) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| sequential search (in an unordered list) | SequentialSearchST.java | n | n | n | n | n | n |
| binary search (in a sorted array) | BinarySearchST.java | log n | n | n | log n | n | n |
| binary search tree (unbalanced) | BST.java | n | n | n | log n | log n | sqrt(n) |
| red-black BST (left-leaning) | RedBlackBST.java | log n | log n | log n | log n | log n | log n |
| AVL tree | AVLTreeST.java | log n | log n | log n | log n | log n | log n |
| hash table (separate chaining) | SeparateChainingHashST.java | n | n | n | 1 † | 1 † | 1 † |
| hash table (linear probing) | LinearProbingHashST.java | n | n | n | 1 † | 1 † | 1 † |

† uniform hashing assumption
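The unbalanced binary search tree row follows from the fact that search and insert cost is proportional to the depth of the node reached: ~log n on average for keys inserted in random order, but n in the worst case (for example, keys inserted in sorted order). A minimal sketch of get and put, illustrative rather than the textbook's BST.java:

```java
// Minimal unbalanced BST sketch with int keys and String values
// (illustrative; not the textbook's BST.java). Search and insert cost is
// proportional to the depth reached: ~log n on average, n in the worst case.
public class BstSketch {
    private Node root;

    private static class Node {
        int key; String val; Node left, right;
        Node(int key, String val) { this.key = key; this.val = val; }
    }

    public String get(int key) {
        Node x = root;
        while (x != null) {
            if      (key < x.key) x = x.left;
            else if (key > x.key) x = x.right;
            else return x.val;
        }
        return null;                        // key not present
    }

    public void put(int key, String val) { root = put(root, key, val); }

    private Node put(Node x, int key, String val) {
        if (x == null) return new Node(key, val);
        if      (key < x.key) x.left  = put(x.left,  key, val);
        else if (key > x.key) x.right = put(x.right, key, val);
        else x.val = val;                   // overwrite existing key
        return x;
    }

    public static void main(String[] args) {
        BstSketch st = new BstSketch();
        st.put(3, "three"); st.put(1, "one"); st.put(4, "four");
        System.out.println(st.get(4));      // four
        System.out.println(st.get(2));      // null
    }
}
```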

Graph processing.
The table below summarizes the order of growth of the worst-case running time and memory usage (beyond the memory for
the graph itself) for a variety of graph-processing problems, as implemented in this textbook. It ignores leading constants
and lower-order terms. All running times are worst-case running times.

| PROBLEM | ALGORITHM | CODE | TIME | SPACE |
| --- | --- | --- | --- | --- |
| path | DFS | DepthFirstPaths.java | E + V | V |
| shortest path (fewest edges) | BFS | BreadthFirstPaths.java | E + V | V |
| cycle | DFS | Cycle.java | E + V | V |
| directed path | DFS | DepthFirstDirectedPaths.java | E + V | V |
| shortest directed path (fewest edges) | BFS | BreadthFirstDirectedPaths.java | E + V | V |
| directed cycle | DFS | DirectedCycle.java | E + V | V |
| topological sort | DFS | Topological.java | E + V | V |
| bipartiteness / odd cycle | DFS | Bipartite.java | E + V | V |
| connected components | DFS | CC.java | E + V | V |
| strong components | Kosaraju–Sharir | KosarajuSharirSCC.java | E + V | V |
| strong components | Tarjan | TarjanSCC.java | E + V | V |
| strong components | Gabow | GabowSCC.java | E + V | V |
| Eulerian cycle | DFS | EulerianCycle.java | E + V | E + V |
| directed Eulerian cycle | DFS | DirectedEulerianCycle.java | E + V | V |
| transitive closure | DFS | TransitiveClosure.java | V (E + V) | V² |
| minimum spanning tree | Kruskal | KruskalMST.java | E log E | E + V |
| minimum spanning tree | Prim | PrimMST.java | E log V | V |
| minimum spanning tree | Boruvka | BoruvkaMST.java | E log V | V |
| shortest paths (nonnegative weights) | Dijkstra | DijkstraSP.java | E log V | V |
| shortest paths (no negative cycles) | Bellman–Ford | BellmanFordSP.java | V (V + E) | V |
| shortest paths (no cycles) | topological sort | AcyclicSP.java | V + E | V |
| all-pairs shortest paths | Floyd–Warshall | FloydWarshall.java | V³ | V² |
| maxflow–mincut | Ford–Fulkerson | FordFulkerson.java | E V (E + V) | V |
| bipartite matching | Hopcroft–Karp | HopcroftKarp.java | V^(1/2) (E + V) | V |
| assignment problem | successive shortest paths | AssignmentProblem.java | n³ log n | n² |
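The E + V running time and V extra space shared by the DFS- and BFS-based rows come from marking each vertex once and scanning each adjacency list once. A minimal DFS reachability sketch (illustrative; the textbook's DepthFirstPaths.java additionally records an edgeTo[] array so that paths can be reconstructed):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal DFS reachability sketch on an adjacency-list graph (illustrative;
// not the textbook's DepthFirstPaths.java). Each vertex is marked once and
// each adjacency list scanned once: E + V time, V extra space.
public class DfsSketch {
    private final List<List<Integer>> adj;   // adjacency lists
    private final boolean[] marked;          // marked[v] = reachable from source?

    public DfsSketch(int vertices) {
        adj = new ArrayList<>();
        for (int v = 0; v < vertices; v++) adj.add(new ArrayList<>());
        marked = new boolean[vertices];
    }

    public void addEdge(int v, int w) {      // undirected edge v-w
        adj.get(v).add(w);
        adj.get(w).add(v);
    }

    public void dfs(int v) {
        marked[v] = true;
        for (int w : adj.get(v))
            if (!marked[w]) dfs(w);
    }

    public boolean hasPathTo(int v) { return marked[v]; }

    public static void main(String[] args) {
        DfsSketch g = new DfsSketch(5);
        g.addEdge(0, 1); g.addEdge(1, 2); g.addEdge(3, 4);
        g.dfs(0);
        System.out.println(g.hasPathTo(2));  // true
        System.out.println(g.hasPathTo(4));  // false
    }
}
```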

Commonly encountered functions.


Here are some functions that are commonly encountered when analyzing algorithms.

| FUNCTION | NOTATION | DEFINITION |
| --- | --- | --- |
| floor | ⌊x⌋ | greatest integer ≤ x |
| ceiling | ⌈x⌉ | smallest integer ≥ x |
| binary logarithm | lg x or log₂ x | y such that 2^y = x |
| natural logarithm | ln x or logₑ x | y such that e^y = x |
| common logarithm | log₁₀ x | y such that 10^y = x |
| iterated binary logarithm | lg* x | 0 if x ≤ 1; 1 + lg*(lg x) otherwise |
| harmonic number | H_n | 1 + 1/2 + 1/3 + … + 1/n |
| factorial | n! | 1 × 2 × 3 × … × n |
| binomial coefficient | (n choose k) | n! / (k! (n − k)!) |
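Most of these functions are one or two lines of Java; the sketch below uses hypothetical helper names and an integer-valued variant of the iterated binary logarithm, purely for illustration.

```java
// Small numeric helpers for the functions above (hypothetical names, for illustration).
public class FunctionsSketch {
    // binary logarithm: lg x = ln x / ln 2
    static double lg(double x) { return Math.log(x) / Math.log(2); }

    // iterated binary logarithm (integer variant): 0 if x <= 1, else 1 + lg*(floor(lg x))
    static int lgStar(long x) {
        return (x <= 1) ? 0 : 1 + lgStar(63 - Long.numberOfLeadingZeros(x));
    }

    // harmonic number H_n = 1 + 1/2 + ... + 1/n
    static double harmonic(int n) {
        double sum = 0.0;
        for (int i = 1; i <= n; i++) sum += 1.0 / i;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(Math.floor(2.7));   // 2.0
        System.out.println(Math.ceil(2.7));    // 3.0
        System.out.println(lg(1024));          // ~10.0
        System.out.println(lgStar(65536));     // 4  (iterated lg of 2^16)
        System.out.println(harmonic(10));      // 2.928968...
    }
}
```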

Useful formulas and approximations.


Here are some useful formulas and approximations that are widely used in the analysis of algorithms.

Harmonic sum: 1 + 1/2 + 1/3 + … + 1/n ∼ ln n
Triangular sum: 1 + 2 + 3 + … + n = n (n + 1) / 2 ∼ n² / 2
Sum of squares: 1² + 2² + 3² + … + n² ∼ n³ / 3
Geometric sum: if r ≠ 1, then 1 + r + r² + r³ + … + r^n = (r^(n+1) − 1) / (r − 1)
    r = 1/2: 1 + 1/2 + 1/4 + 1/8 + … + 1/2^n ∼ 2
    r = 2: 1 + 2 + 4 + 8 + … + n/2 + n = 2n − 1 ∼ 2n, when n is a power of 2
Stirling's approximation: lg(n!) = lg 1 + lg 2 + lg 3 + … + lg n ∼ n lg n
Exponential: (1 + 1/n)^n ∼ e; (1 − 1/n)^n ∼ 1/e
Binomial coefficients: (n choose k) ∼ n^k / k! when k is a small constant
Approximate sum by integral: if f(x) is a monotonically increasing function, then
    ∫₀ⁿ f(x) dx ≤ ∑_{i=1}^{n} f(i) ≤ ∫₁^(n+1) f(x) dx
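As a quick numerical sanity check of the harmonic-sum and Stirling approximations, the following illustrative experiment (not a booksite program) compares the exact sums with ln n and n lg n; both ratios tend to 1 as n grows.

```java
// Quick numerical check of two approximations above (illustrative experiment):
//   H_n = 1 + 1/2 + ... + 1/n          ~ ln n
//   lg(n!) = lg 1 + lg 2 + ... + lg n  ~ n lg n
public class ApproximationCheck {
    public static void main(String[] args) {
        for (int n = 1000; n <= 1000000; n *= 10) {
            double harmonic = 0.0, lgFactorial = 0.0;
            for (int i = 1; i <= n; i++) {
                harmonic += 1.0 / i;
                lgFactorial += Math.log(i) / Math.log(2);
            }
            double lnN = Math.log(n);
            double nLgN = n * (Math.log(n) / Math.log(2));
            System.out.printf("n = %7d   H_n / ln n = %.4f   lg(n!) / (n lg n) = %.4f%n",
                              n, harmonic / lnN, lgFactorial / nLgN);
        }
    }
}
```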
Properties of logarithms.

Definition: log_b a = c means b^c = a. We refer to b as the base of the logarithm.
Special cases: log_b b = 1, log_b 1 = 0
Inverse of exponential: b^(log_b x) = x
Product: log_b (x × y) = log_b x + log_b y
Division: log_b (x ÷ y) = log_b x − log_b y
Finite product: log_b (x₁ × x₂ × … × x_n) = log_b x₁ + log_b x₂ + … + log_b x_n
Changing bases: log_b x = log_c x / log_c b
Rearranging exponents: x^(log_b y) = y^(log_b x)
Exponentiation: log_b (x^y) = y log_b x
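Since Java's standard library provides only natural and base-10 logarithms, the change-of-base identity is the usual way to compute log_b x in code; a minimal illustrative helper:

```java
// Change of base in practice: log_b x = ln x / ln b (illustrative helper).
public class LogSketch {
    static double log(double base, double x) {
        return Math.log(x) / Math.log(base);
    }

    public static void main(String[] args) {
        System.out.println(log(2, 1024));              // ~10.0  (binary logarithm)
        System.out.println(log(10, 1e6));              // ~6.0   (common logarithm)
        // Rearranging exponents: x^(log_b y) = y^(log_b x); both values are ~32768 here
        System.out.println(Math.pow(8, log(2, 32)));   // ~32768.0
        System.out.println(Math.pow(32, log(2, 8)));   // ~32768.0
    }
}
```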

Asymptotic notations: definitions.


| NAME | NOTATION | DESCRIPTION | DEFINITION |
| --- | --- | --- | --- |
| Tilde | f(n) ∼ g(n) | f(n) is equal to g(n) asymptotically (including constant factors) | lim_{n→∞} f(n) / g(n) = 1 |
| Big Oh | f(n) is O(g(n)) | f(n) is bounded above by g(n) asymptotically (ignoring constant factors) | there exist constants c > 0 and n₀ ≥ 0 such that 0 ≤ f(n) ≤ c · g(n) for all n ≥ n₀ |
| Big Omega | f(n) is Ω(g(n)) | f(n) is bounded below by g(n) asymptotically (ignoring constant factors) | g(n) is O(f(n)) |
| Big Theta | f(n) is Θ(g(n)) | f(n) is bounded above and below by g(n) asymptotically (ignoring constant factors) | f(n) is both O(g(n)) and Ω(g(n)) |
| Little oh | f(n) is o(g(n)) | f(n) is dominated by g(n) asymptotically (ignoring constant factors) | lim_{n→∞} f(n) / g(n) = 0 |
| Little omega | f(n) is ω(g(n)) | f(n) dominates g(n) asymptotically (ignoring constant factors) | g(n) is o(f(n)) |

Common orders of growth.


| NAME | NOTATION | EXAMPLE | CODE FRAGMENT |
| --- | --- | --- | --- |
| Constant | O(1) | array access; arithmetic operation; function call | op(); |
| Logarithmic | O(log n) | binary search in a sorted array; insert in a binary heap; search in a red–black tree | for (int i = 1; i <= n; i = 2*i) op(); |
| Linear | O(n) | sequential search; grade-school addition; BFPRT median finding | for (int i = 0; i < n; i++) op(); |
| Linearithmic | O(n log n) | mergesort; heapsort; fast Fourier transform | for (int i = 1; i <= n; i++) for (int j = i; j <= n; j = 2*j) op(); |
| Quadratic | O(n²) | enumerate all pairs; insertion sort; grade-school multiplication | for (int i = 0; i < n; i++) for (int j = i+1; j < n; j++) op(); |
| Cubic | O(n³) | enumerate all triples; Floyd–Warshall; grade-school matrix multiplication | for (int i = 0; i < n; i++) for (int j = i+1; j < n; j++) for (int k = j+1; k < n; k++) op(); |
| Polynomial | O(n^c) | ellipsoid algorithm for LP; AKS primality algorithm; Edmonds' matching algorithm | |
| Exponential | 2^O(n^c) | enumerating all subsets; enumerating all permutations; backtracking search | |
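A practical way to identify which order of growth a program exhibits is a doubling experiment: time it on inputs of size n and 2n and look at the ratio of the running times. The sketch below is an illustrative (hypothetical) experiment that times a quadratic all-pairs loop; the ratio approaches 4, as expected for O(n²), whereas an O(n³) computation would approach 8 and an O(n log n) one roughly 2.

```java
// Illustrative doubling-ratio experiment (not a booksite program): time a
// quadratic all-pairs loop on inputs of size n, 2n, 4n, ... and print the
// ratio of consecutive running times.
public class DoublingSketch {
    static long sink;  // keeps the JIT from discarding the computation

    // count all pairs (i, j) with i < j; stands in for any quadratic computation
    static long countPairs(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                count++;
        return count;
    }

    static double timeTrial(int n) {
        long start = System.nanoTime();
        sink += countPairs(n);
        return (System.nanoTime() - start) / 1e9;   // elapsed seconds
    }

    public static void main(String[] args) {
        double previous = timeTrial(1000);
        for (int n = 2000; n <= 32000; n *= 2) {
            double current = timeTrial(n);
            System.out.printf("n = %6d   time = %7.3f s   ratio = %5.2f%n",
                              n, current, current / previous);
            previous = current;
        }
        System.out.println("checksum: " + sink);
    }
}
```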

Asymptotic notations: properties.

Reflexivity: f(n) is O(f(n)).
Constants: If f(n) is O(g(n)) and c > 0, then c · f(n) is O(g(n)).
Products: If f₁(n) is O(g₁(n)) and f₂(n) is O(g₂(n)), then f₁(n) · f₂(n) is O(g₁(n) · g₂(n)).
Sums: If f₁(n) is O(g₁(n)) and f₂(n) is O(g₂(n)), then f₁(n) + f₂(n) is O(max{g₁(n), g₂(n)}).
Transitivity: If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).
Polynomials: Let f(n) = a₀ + a₁ n + … + a_d n^d with a_d > 0. Then, f(n) is Θ(n^d).
Logarithms and polynomials: log_b n is O(n^d) for every b > 1 and every d > 0.
Exponentials and polynomials: n^d is O(r^n) for every r > 1 and every d > 0.
Factorials: n! is 2^Θ(n log n).
Limits: If lim_{n→∞} f(n) / g(n) = c for some constant 0 < c < ∞, then f(n) is Θ(g(n)).
Limits: If lim_{n→∞} f(n) / g(n) = 0, then f(n) is O(g(n)) but not Θ(g(n)).
Limits: If lim_{n→∞} f(n) / g(n) = ∞, then f(n) is Ω(g(n)) but not O(g(n)).
Here are some examples.

| FUNCTION | o(n²) | O(n²) | Θ(n²) | Ω(n²) | ω(n²) | ∼ 2n² | ∼ 4n² |
| --- | --- | --- | --- | --- | --- | --- | --- |
| log₂ n | ✔ | ✔ | | | | | |
| 10n + 45 | ✔ | ✔ | | | | | |
| 2n² + 45n + 12 | | ✔ | ✔ | ✔ | | ✔ | |
| 4n² − 2√n | | ✔ | ✔ | ✔ | | | ✔ |
| 3n³ | | | | ✔ | ✔ | | |
| 2ⁿ | | | | ✔ | ✔ | | |

Divide-and-conquer recurrences.

For each of the following recurrences we assume T(1) = 0 and that n / 2 means either ⌊n / 2⌋ or
⌈n / 2⌉ .
| RECURRENCE | T(n) | EXAMPLE |
| --- | --- | --- |
| T(n) = T(n/2) + 1 | ∼ lg n | binary search |
| T(n) = 2T(n/2) + n | ∼ n lg n | mergesort |
| T(n) = T(n−1) + n | ∼ ½ n² | insertion sort |
| T(n) = 2T(n/2) + 1 | ∼ n | tree traversal |
| T(n) = 2T(n−1) + 1 | ∼ 2ⁿ | towers of Hanoi |
| T(n) = 3T(n/2) + Θ(n) | Θ(n^(log₂ 3)) = Θ(n^1.58...) | Karatsuba multiplication |
| T(n) = 7T(n/2) + Θ(n²) | Θ(n^(log₂ 7)) = Θ(n^2.81...) | Strassen multiplication |
| T(n) = 2T(n/2) + Θ(n log n) | Θ(n log² n) | closest pair |
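For the mergesort recurrence, the ∼ n lg n solution can be checked directly; the sketch below (an illustrative check, restricted to powers of 2 so that n/2 is exact) evaluates T(n) = 2T(n/2) + n with T(1) = 0 and prints it next to n lg n, which it matches exactly for powers of 2.

```java
// Evaluate the mergesort recurrence T(n) = 2 T(n/2) + n with T(1) = 0 and
// compare with n lg n (illustrative check; n restricted to powers of 2).
public class RecurrenceCheck {
    static long T(long n) {
        if (n == 1) return 0;
        return 2 * T(n / 2) + n;
    }

    public static void main(String[] args) {
        for (long n = 2; n <= (1L << 20); n *= 2) {
            double nLgN = n * (Math.log(n) / Math.log(2));
            System.out.printf("n = %8d   T(n) = %10d   n lg n = %12.0f%n", n, T(n), nLgN);
        }
    }
}
```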

Master theorem.

Let a ≥ 1, b ≥ 2, and c > 0, and suppose that T(n) is a function on the nonnegative integers that satisfies the divide-and-conquer recurrence

T(n) = a T(n / b) + Θ(n^c)

with T(0) = 0 and T(1) = Θ(1), where n / b means either ⌊n / b⌋ or ⌈n / b⌉.

If c < log_b a, then T(n) = Θ(n^(log_b a)).
If c = log_b a, then T(n) = Θ(n^c log n).
If c > log_b a, then T(n) = Θ(n^c).

Remark: there are many different versions of the master theorem. The Akra–Bazzi theorem is among the most powerful.
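Applying the theorem amounts to comparing c with log_b a, which is easy to mechanize; the following is a small hypothetical helper (not part of the booksite code) that prints which case applies for the recurrences in the table above.

```java
// Classify a divide-and-conquer recurrence T(n) = a T(n/b) + Theta(n^c)
// by the master theorem (illustrative helper; the tie case is detected with
// a small tolerance because log_b a is computed in floating point).
public class MasterTheoremSketch {
    static String solve(double a, double b, double c) {
        double critical = Math.log(a) / Math.log(b);   // log_b a
        if (Math.abs(c - critical) < 1e-9) return "Theta(n^" + c + " log n)";
        if (c < critical)                  return "Theta(n^" + critical + ")";
        return "Theta(n^" + c + ")";
    }

    public static void main(String[] args) {
        System.out.println(solve(2, 2, 1));   // mergesort:  Theta(n^1.0 log n)
        System.out.println(solve(3, 2, 1));   // Karatsuba:  Theta(n^1.58...)
        System.out.println(solve(7, 2, 2));   // Strassen:   Theta(n^2.80...)
        System.out.println(solve(1, 2, 1));   // T(n) = T(n/2) + n: Theta(n^1.0)
    }
}
```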

Last modified on July 05, 2021.


Copyright © 2000–2019 Robert Sedgewick and Kevin Wayne. All rights reserved.