Absolute Beginner's Guide to Algorithms - Kirupa Chinnathambi
About This eBook
Kirupa Chinnathambi
Absolute Beginner’s Guide to Algorithms
Hoboken, NJ
ISBN-13: 978-0-13-822229-1
ISBN-10: 0-13-822229-0
Pearson’s Commitment to Diversity,
Equity, and Inclusion
Contents at a Glance
Table of Contents
I Data Structures
1 Introduction to Data Structures
Right Tool for the Right Job
Back to Data Structures
Conclusion
Some Additional Resources
2 Big-O Notation and Complexity Analysis
It’s Example Time
It’s Big-O Notation Time!
Conclusion
Some Additional Resources
3 Arrays
What Is an Array?
Adding an Item
Deleting an Item
Searching for an Item
Accessing an Item
Array Implementation / Use Cases
Arrays and Memory
Performance Considerations
Access
Insertion
Deletion
Searching
Conclusion
Some Additional Resources
4 Linked Lists
Meet the Linked List
Finding a Value
Adding Nodes
Deleting a Node
Linked List: Time and Space Complexity
Deeper Look at the Running Time
Space Complexity
Linked List Variations
Singly Linked List
Doubly Linked List
Circular Linked List
Skip List
Implementation
Conclusion
Some Additional Resources
5 Stacks
Meet the Stack
A JavaScript Implementation
Stacks: Time and Space Complexity
Runtime Performance
Memory Performance
Conclusion
Some Additional Resources
6 Queues
Meet the Queue
A JavaScript Implementation
Queues: Time and Space Complexity
Runtime Performance
Memory Performance
Conclusion
Some Additional Resources
7 Trees
Trees 101
Height and Depth
Conclusion
Some Additional Resources
8 Binary Trees
Meet the Binary Tree
Rules Explained
Binary Tree Variants
What about Adding, Removing, and Finding Nodes?
A Simple Binary Tree Implementation
Conclusion
Some Additional Resources
9 Binary Search Trees
It’s Just a Data Structure
Adding Nodes
Removing Nodes
Implementing a Binary Search Tree
Performance and Memory Characteristics
Conclusion
Some Additional Resources
10 Heaps
Meet the Heap
Common Heap Operations
Heap Implementation
Heaps as Arrays
The Code
Performance Characteristics
Removing the Root Node
Inserting an Item
Performance Summary
Conclusion
Some Additional Resources
11 Hashtable (aka Hashmap or Dictionary)
A Very Efficient Robot
From Robots to Hashing Functions
From Hashing Functions to Hashtables
Adding Items to Our Hashtable
Reading Items from Our Hashtable
JavaScript Implementation/Usage
Dealing with Collisions
Performance and Memory
Conclusion
Some Additional Resources
12 Trie (aka Prefix Tree)
What Is a Trie?
Inserting Words
Finding Items
Deleting Items
Diving Deeper into Tries
Many More Examples Abound!
Implementation Time
Performance
Conclusion
Some Additional Resources
13 Graphs
What Is a Graph?
Graph Implementation
Representing Nodes
The Code
Conclusion
Some Additional Resources
II Algorithms
14 Introduction to Recursion
Our Giant Cookie Problem
Recursion in Programming
Recursive Function Call
Terminating Condition
Conclusion
Some Additional Resources
15 Fibonacci and Going Beyond Recursion
Recursively Solving the Fibonacci Sequence
Recursion with Memoization
Taking an Iteration-Based Approach
Going Deeper on the Speed
Conclusion
Some Additional Resources
16 Towers of Hanoi
How Towers of Hanoi Is Played
The Single Disk Case
It’s Two Disk Time
Three Disks
The Algorithm
The Code Solution
Check Out the Recursiveness!
It’s Math Time
Conclusion
Some Additional Resources
17 Search Algorithms and Linear Search
Linear Search
Linear Search at Work
JavaScript Implementation
Runtime Characteristics
Conclusion
Some Additional Resources
18 Faster Searching with Binary Search
Binary Search in Action
Sorted Items Only, Please
Dealing with the Middle Element
Dividing FTW!
The JavaScript Implementation
Iterative Approach
Recursive Approach
Example of the Code at Work
Runtime Performance
Conclusion
Some Additional Resources
19 Binary Tree Traversal
Breadth-First Traversal
Depth-First Traversal
Implementing Our Traversal Approaches
Node Exploration in the Breadth-First Approach
Node Exploration in the Depth-First Approach
Looking at the Code
Performance of Our Traversal Approaches
Conclusion
Some Additional Resources
20 Depth-First Search (DFS) and Breadth-First Search (BFS)
A Tale of Two Exploration Approaches
Depth-First Search Overview
Breadth-First Search Overview
Yes, They Are Different!
It’s Example Time
Exploring with DFS
Exploring with BFS
When to Use DFS? When to Use BFS?
A JavaScript Implementation
Using the Code
Implementation Detail
Performance Details
Conclusion
Some Additional Resources
21 Quicksort
A Look at How Quicksort Works
A Simple Look
Another Simple Look
It’s Implementation Time
Performance Characteristics
Time Complexity
Space Complexity
Stability
Conclusion
Some Additional Resources
22 Bubblesort
How Bubblesort Works
Walkthrough
The Code
Conclusion
Some Additional Resources
23 Insertion Sort
How Insertion Sort Works
One More Example
Algorithm Overview and Implementation
Performance Analysis
Conclusion
Some Additional Resources
24 Selection Sort
Selection Sort Walkthrough
Algorithm Deep Dive
The JavaScript Implementation
Conclusion
Some Additional Resources
25 Mergesort
How Mergesort Works
Mergesort: The Algorithm Details
Looking at the Code
Conclusion
Some Additional Resources
26 Conclusion
Index
Acknowledgments
As I found out, getting a book like this out the door is no small
feat. It involves a bunch of people in front of (and behind) the
camera who work tirelessly to turn my ramblings into the
beautiful pages that you are about to see. To everyone at
Pearson who made this possible, thank you!
With that said, there are a few people I’d like to explicitly call
out. First, I’d like to thank Kim Spenceley for making this book
possible, Chris Zahn for meticulously ensuring everything is
human-readable, Carol Lallier for her excellent copyediting,
and Loretta Yates for helping make the connections that made
all of this happen years ago. The technical content of this book
has been reviewed in great detail by my long-time collaborators
Cheng Lou and Ashwin Raghav.
Dedication
To my wife, Meena!
About the Author
Tech Editors
Twitter / X: twitter.com/_chenglou
Twitter / X: twitter.com/ashwinraghav
Part I
Data Structures
1
Onward!
A rummager!
FIGURE 1-4
A toolbox is like the Marie Kondo of the DIY world, with its neat
compartments and organized bliss. Sure, it might take a smidge
more effort to stow things away initially, but that’s the price we
pay for future tool-hunting convenience. No more digging
through the toolbox like a raccoon on a midnight snack raid.
We have just seen two ways to solve our problem of storing our
tools. If we had to summarize both approaches, it would look as
follows:
What we can see is that both our cardboard box and toolbox
are good for some situations and bad for other situations. There
is no universally right answer. If all we care about is storing our
tools and never really looking at them again, stashing them in a
cardboard box is the right choice. If we will be frequently
accessing our tools, storing them in the toolbox is more
appropriate.
Conclusion
Over the next many chapters, we’ll learn more about what each
data structure is good at and, more important, what types of
operations each is not very good at. By the end of it, you and I
will have created a mental map connecting the right data
structure to the right programming situation we are trying to
address.
2
Any code you will ever write will have a specific set of inputs
that yields particular outputs. In an ideal world, we’d want your
code to run as fast as possible and take up as little memory as
possible in doing so.
However, the real world has its quirks, and your code might
decide to take a leisurely stroll instead, depending on the size
and characteristics of its input. While you can always glance at
your wall clock to time its performance for a specific input set,
what we truly need is a way to speak about how it performs with
any set of inputs. And that's where the Big-O notation strides
onto the stage.
Onward!
FIGURE 2-1
FIGURE 2-2
FIGURE 2-3
The larger the number we provide as the input, the more digits
we have to count through to get the final answer. The important
detail is that the number of steps in our calculation won’t grow
abnormally large (or small) with each additional digit in our
number. We can visualize this by plotting the size of our input
versus the number of steps required to get the count (Figure 2-4).
FIGURE 2-4
Let’s say that we have some additional code that lets us know
whether our input number is odd or even. The way we would
calculate the oddness or evenness of a number is by just looking
at the last digit and doing a quick calculation (Figure 2-5).
FIGURE 2-5
Notice that, in this graph of the steps required vs. the input size,
the amount of work doesn’t change based on the size of our
input. It stays the same. It stays . . . constant!
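The contrast between these two examples can be sketched in code. This is a hypothetical illustration (the function names countDigits and isEven are my own, not from the figures): counting digits requires one step per digit, so the work grows with the input, while the even/odd check does the same small amount of work no matter how large the number gets.

```javascript
// Hypothetical sketch: counting digits takes one loop iteration
// (one "step") per digit, so the work grows with input size: O(n)
function countDigits(number) {
  let steps = 0;
  let remaining = Math.abs(number);
  do {
    steps++;
    remaining = Math.floor(remaining / 10);
  } while (remaining > 0);
  return steps;
}

// Checking oddness/evenness only ever looks at the last digit,
// so the work stays the same regardless of input size: O(1)
function isEven(number) {
  return Math.abs(number) % 2 === 0;
}
```

Whether we pass in 7 or 7,000,000, isEven performs the same handful of operations, while countDigits has to loop once for every extra digit.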
FIGURE 2-7
FIGURE 2-8
When we zoom all the way out and talk about really large input
sizes, this difference will be trivial. This is especially true when
we look at the various other classes of values that n can take!
The best way to understand all of this is by looking at each
major value for n and what its input versus complexity graph
looks like (Figure 2-9).
FIGURE 2-9
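To make these growth classes concrete, here is a small sketch (my own illustration, not from the book's figures) that computes roughly how many steps each common complexity class would take for a given input size n:

```javascript
// Rough step counts for the common complexity classes at input size n
function growthSteps(n) {
  return {
    constant: 1,                               // O(1)
    logarithmic: Math.ceil(Math.log2(n)),      // O(log n)
    linear: n,                                 // O(n)
    linearithmic: n * Math.ceil(Math.log2(n)), // O(n log n)
    quadratic: n * n,                          // O(n^2)
  };
}
```

At n = 1024, the gap is already dramatic: 1 step vs. 10 vs. 1,024 vs. 10,240 vs. 1,048,576 steps.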
Note
Conclusion
Okay! It is time to wrap things up. The Big-O notation is a
mathematical notation used to describe the upper bound or
worst-case scenario of a code’s time or space complexity. To get
all mathy on us, it provides an asymptotic upper limit on our
code’s growth rate. By using the Big-O notation, we can talk
about code complexity in a universally understood and
consistent way. It allows us to analyze and compare the
efficiency of different coding approaches, helping us decide
what tradeoffs are worth making given the context our code
will be running in.
3
Arrays
Onward!
What Is an Array?
FIGURE 3-2
Adding an Item
We append a new item to the end. This new item gets the next
index value associated with it. Life is simple and good.
Deleting an Item
FIGURE 3-5
Deleting an item from the end
We need to ensure that all of our array items after the removed item are properly
positioned and numbered
For example, we removed the first item from our array. Every
other item in our array now has to shift and recount to account
for this change. Phew!
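In JavaScript, this shifting and renumbering happens for us automatically. A quick sketch of removing the first item (the tool values here are my own example):

```javascript
const tools = ["hammer", "wrench", "saw", "drill"];

// Remove the first item; every remaining item shifts down one index
tools.shift();

console.log(tools); // ["wrench", "saw", "drill"]
// "wrench" is now at index 0, "saw" at index 1, and so on
```

Convenient as this is, that shifting is exactly the hidden work that makes front-of-array deletions expensive.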
FIGURE 3-7
A linear search
We have talked about the index position a few times so far, but
it is time to go a bit deeper. The index position acts as an
identifier. If we want to access a particular array item (via a
search or otherwise!), we refer to it by its index position in the
form of array[index_position], as shown in Figure 3-9.
FIGURE 3-9
A few details to keep in mind: the first item will always
have an index position of 0, and the last item will always have an
index position that is one less than the total number of items in
our array. If we try to provide an invalid index position, we will
get an error!
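In JavaScript specifically, this looks like the following (the letter values are my own example). One wrinkle worth knowing: plain JavaScript arrays hand back undefined for an out-of-range index rather than throwing an error, though many other languages do throw:

```javascript
const letters = ["A", "B", "C", "D", "E"];

console.log(letters[0]);                  // "A" (first item: index 0)
console.log(letters[letters.length - 1]); // "E" (last item: length - 1)
console.log(letters[100]);                // undefined (out of range)
```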
Array Implementation / Use Cases
For a thorough deep dive into learning the ins and outs of
everything arrays do, check out my comprehensive arrays
guide at www.kirupa.com/javascript/learn_arrays.htm. If you
aren’t yet proficient with arrays, take a few moments and get
really familiar with them. Many of the subsequent data
structures and algorithms we’ll be learning about use arrays
extensively under the covers.
FIGURE 3-11
Memory at work
FIGURE 3-12
FIGURE 3-13
We keep adding data into our array until we fill up all of our
allocated space (Figure 3-14).
FIGURE 3-14
New data can now go into the memory locations freed up by the move
Access
Insertion
Deletion
Searching
Conclusion
4
Linked Lists
Onward!
Linked lists, just like arrays, are all about helping us store a
collection of data. In Figure 4-1, we have an example of a linked
list we are using to store the letters A through E.
FIGURE 4-1
A linked list
It goes without saying that the node is a big deal. We can zoom
in on a node and visualize it, as shown in Figure 4-2.
FIGURE 4-2
Finding a Value
We have a linked list with a bunch of data, and we want to find
something. This is one of the most common operations we’ll
perform. We find a value by starting with the first node (aka
head node) and traversing through each node as referenced by
the next pointer (Figure 4-3).
FIGURE 4-3
Traversing nodes
If you think this sounds a whole lot like a linear search, you
would be correct. It totally is . . . with all the good and bad
performance characteristics that implies. If you don't think
so, that is okay. We look into linear search in greater detail in
Chapter 17.
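A minimal sketch of that traversal, using bare objects for the nodes (the makeNode and find helpers are my own illustration; the full LinkedList class appears later in this chapter):

```javascript
// Each node stores its data and a pointer to the next node
function makeNode(data, next = null) {
  return { data, next };
}

// Walk the next pointers from the head until we find the value
// (or run out of nodes)
function find(head, target) {
  let current = head;
  while (current !== null) {
    if (current.data === target) {
      return current;
    }
    current = current.next;
  }
  return null; // not found
}

// Build a tiny list: A -> B -> C
const head = makeNode("A", makeNode("B", makeNode("C")));
```

Calling find(head, "C") walks through A and B before landing on C, which is the linear-search behavior in action.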
Adding Nodes
Now, let’s look at how to add nodes to our linked list. The whole
idea of adding nodes is less about adding and more about
creating a new node and updating a few next pointers. We’ll see
this play out as we look at a handful of examples. Let’s say that
we want to add a node F at the end (Figure 4-4).
FIGURE 4-4
FIGURE 4-6
FIGURE 4-7
Deleting a Node
FIGURE 4-8
FIGURE 4-9
We also clear the next pointer on the D node. All of this makes
node D unreachable via a traversal and removes any
connection this node has with the rest of the nodes in our
linked list. Unreachable does not mean deleted, though. When
does node D actually get deleted? The exact moment varies, but
it happens automatically as part of something known as
garbage collection, when our computer reclaims memory by
getting rid of unwanted things.
It’s time for some more fun! We started off our look at linked
lists by talking about how fast and efficient they are. For the
most common operations, Table 4-1 summarizes how our linked
list performs.
Table 4-1 glosses over some subtle (but very important)
details, so let's call out the relevant points:
Search
Searching for an element in a singly linked list takes O(n)
time because we have to traverse the list from the
beginning to find the element.
If what we are looking for happens to be the first item,
then we return the found node in O(1) time.
Add/Insert
Inserting an element at the beginning or end of a singly
linked list takes O(1) time, as we only need to update a
couple of pointers (the new node's next pointer and our
list's head or tail reference).
Inserting an element at a specific position in the list takes
O(n) time in the average and worst cases, for we have to
traverse through the list to find the position.
Delete
Similar to the adding case, deleting an element from the
beginning or end of a singly linked list takes O(1) time, as
we only need to update the reference of the first or last
node.
Deleting an element from a specific position in the list
takes O(n) time in the average and worst cases, for we
have to traverse the list to find the element and then delete
it.
Space Complexity
FIGURE 4-10
In a singly linked list, each node has exactly one pointer that
references the next node. For many situations, this one-way
behavior is perfectly adequate.
In a doubly linked list, each node has two pointers, one to the
previous node and one to the next node (Figure 4-11).
FIGURE 4-11
In a circular linked list, the last node’s next pointer points to the
first node, creating a circular structure (Figure 4-12).
FIGURE 4-12
FIGURE 4-13
We saw that linked lists are fast. Skip lists make things even
faster. A skip list is a linked list that includes additional “skip”
links that act like shortcuts to make jumping to points in the list
faster (Figure 4-14).
FIGURE 4-14
A skip list
Notice that each level of our skip list gives us faster access to
certain elements. Depending on what data we are looking for,
we will be traversing both horizontally as well as up and down
each level to minimize the number of nodes we need to
examine.
Skip lists are often used in situations where we need to perform
frequent lookups or searches on a large dataset. By adding skip
links to a linked list, we can reduce the amount of time it takes
to find a specific element while still maintaining the benefits of
a linked list (such as constant time insertion and deletion).
Implementation
class LinkedListNode {
  constructor(data, next = null) {
    this.data = data;
    this.next = next;
  }
}
class LinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
    this.size = 0;
  }
  addFirst(data) {
    const newNode = new LinkedListNode(data, this.head);
    this.head = newNode;
    if (!this.tail) {
      this.tail = newNode;
    }
    this.size++;
  }
  addLast(data) {
    const newNode = new LinkedListNode(data);
    if (!this.head) {
      this.head = newNode;
      this.tail = newNode;
    } else {
      this.tail.next = newNode;
      this.tail = newNode;
    }
    this.size++;
  }
  addBefore(beforeData, data) {
    const newNode = new LinkedListNode(data);
    if (this.size === 0) {
      this.head = newNode;
      this.tail = newNode;
      this.size++;
      return;
    }
    if (this.head.data === beforeData) {
      newNode.next = this.head;
      this.head = newNode;
      this.size++;
      return;
    }
    let prev = this.head;
    let current = this.head.next;
    while (current) {
      if (current.data === beforeData) {
        newNode.next = current;
        prev.next = newNode;
        this.size++;
        return;
      }
      prev = current;
      current = current.next;
    }
  }
  addAfter(afterData, data) {
    const newNode = new LinkedListNode(data);
    if (this.size === 0) {
      this.head = newNode;
      this.tail = newNode;
      this.size++;
      return;
    }
    let current = this.head;
    while (current) {
      if (current.data === afterData) {
        newNode.next = current.next;
        current.next = newNode;
        if (this.tail === current) {
          this.tail = newNode;
        }
        this.size++;
        return;
      }
      current = current.next;
    }
  }
  contains(data) {
    let current = this.head;
    while (current) {
      if (current.data === data) {
        return true;
      }
      current = current.next;
    }
    return false;
  }
  removeFirst() {
    if (!this.head) {
      throw new Error("List is empty");
    }
    this.head = this.head.next;
    if (!this.head) {
      this.tail = null;
    }
    this.size--;
  }
  removeLast() {
    if (!this.tail) {
      throw new Error("List is empty");
    }
    if (this.head === this.tail) {
      this.head = null;
      this.tail = null;
      this.size--;
      return;
    }
    let prev = this.head;
    let current = this.head.next;
    while (current.next) {
      prev = current;
      current = current.next;
    }
    prev.next = null;
    this.tail = prev;
    this.size--;
  }
  remove(data) {
    if (this.size === 0) {
      throw new Error("List is empty");
    }
    if (this.head.data === data) {
      this.removeFirst();
      return;
    }
    let current = this.head;
    while (current.next) {
      if (current.next.data === data) {
        if (current.next === this.tail) {
          this.tail = current;
        }
        current.next = current.next.next;
        this.size--;
        return;
      }
      current = current.next;
    }
  }
  toArray() {
    const arr = [];
    let current = this.head;
    while (current) {
      arr.push(current.data);
      current = current.next;
    }
    return arr;
  }
  get length() {
    return this.size;
  }
}
To see this code in action, here are some example statements. We
start with a list already containing the letters A through E:
const letters = new LinkedList();
["A", "B", "C", "D", "E"].forEach((l) => letters.addLast(l));
letters.addFirst("AA");
letters.addLast("Z");
letters.remove("C");
letters.removeFirst();
letters.removeLast();
letters.addAfter("D", "Q");
letters.addAfter("Q", "H");
letters.addBefore("A", "5");
console.log(letters.length); // 7
<script src="https://www.kirupa.com/js/linkedlist
As we’ll see shortly, the linked list plays a crucial role in how
several other data structures and algorithms are implemented.
Note
Conclusion
5
Stacks
FIGURE 5-1
Undo/Redo
Onward!
FIGURE 5-2
A stack
We remove the data from the end of our stack in the same
order we added them (Figure 5-5).
FIGURE 5-5
Data is removed from the end of the stack in the order it was added
A JavaScript Implementation
Now that we have an overview of what stacks are and how they
work, let’s go one level deeper. The following is an
implementation of a Stack in JavaScript:
class Stack {
constructor(...items) {
this.items = items;
}
clear() {
this.items.length = 0;
}
clone() {
return new Stack(...this.items);
}
contains(item) {
return this.items.includes(item);
}
peek() {
let itemsLength = this.items.length;
let item = this.items[itemsLength - 1];
return item;
}
pop() {
let removedItem = this.items.pop();
return removedItem;
}
push(item) {
this.items.push(item);
return item;
}
}
This code defines our Stack object and the various methods
that we can use to add items, remove items, peek at the last
item, and more. To use it, we can do something like the
following:
const myStack = new Stack();
// Add items
myStack.push("One");
myStack.push("Two");
myStack.push("Three!");
// Remove item
let lastItem = myStack.pop();
console.log(lastItem); // Three!
myStack.peek(); // Two
To add items to the stack, use the push method and pass in
whatever you wish to add. To remove an item, use the pop
method. If you want to preview what the last item is without
removing it, the peek method will help you out. The clone
method returns a copy of your stack, and the contains
method allows you to see if an item exists in the stack or not.
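To see where this LIFO behavior earns its keep, here is a classic stack application (my own example, using a plain array as the stack): checking whether brackets in a string are balanced. Each opener is pushed; each closer must match the most recently pushed opener.

```javascript
// Returns true if every ( [ { has a matching, properly nested closer
function isBalanced(text) {
  const stack = [];
  const closers = { ")": "(", "]": "[", "}": "{" };
  for (const char of text) {
    if (char === "(" || char === "[" || char === "{") {
      stack.push(char); // remember the opener
    } else if (char in closers) {
      // The most recent opener must match this closer
      if (stack.pop() !== closers[char]) {
        return false;
      }
    }
  }
  return stack.length === 0; // no unclosed openers left over
}
```

Because the most recently opened bracket is always the first one that must close, the problem maps directly onto push and pop.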
TABLE 5-1
Runtime Performance
Memory Performance
Conclusion
6
Queues
In Chapter 5, we saw that stacks are a last in, first out (LIFO)
data structure where items are added and removed from the
end. Contrasting that, we have the other popular data structure,
the queue. This is an interesting one that we’ll learn more
about in the following sections.
Onward!
FIGURE 6-1
People lining up
The person standing at the front of the line is the first one to
have shown up, and they are the first ones to leave as well. New
people show up and stand at the end of the line, and they don’t
leave until the person in front of them has reached the
beginning of the line and has left (Figure 6-2).
FIGURE 6-2
People leave the beginning of the queue, and they join at the end
Given that behavior, a queue follows a first in, first out policy,
more commonly shortened to FIFO. Except for the little big
detail about which items get removed first, queues and stacks
are pretty similar otherwise.
When adding items, the behavior with stacks is identical
(Figure 6-3).
FIGURE 6-3
Items are added to the end of the queue
When removing items, they are removed sequentially, starting
with the first item that populated the data structure in a
queue-based world (Figure 6-4).
FIGURE 6-4
A JavaScript Implementation
To turn all of those words and images into working code, take a
look at the following Queue implementation:
class Queue {
constructor() {
this.items = new LinkedList();
}
clear() {
this.items = new LinkedList();
}
contains(item) {
return this.items.contains(item);
}
peek() {
return this.items.head.data;
}
dequeue() {
let removedItem = this.items.head.data;
this.items.removeFirst();
return removedItem;
}
enqueue(item) {
this.items.addLast(item);
}
get length() {
return this.items.length;
}
}
const myQueue = new Queue();
// add item
myQueue.enqueue("Item");
// remove item
let removedItem = myQueue.dequeue(); // returns "Item"
TABLE 6-1
Runtime Performance
Memory Performance
Conclusion
Between what we saw earlier with stacks and what we saw just
now with queues, we covered two of the most popular data
structures for modeling how data enters and leaves a
collection. A queue is known as a FIFO data structure where items
get added to the end but removed from the beginning. This
“removed from the beginning” part is where our reliance on a
linked list data structure comes in. Arrays, as we have seen a
few times, are not very efficient when it comes to removing or
adding items at the front.
7
Trees
FIGURE 7-1
Visualizing the quirkiness of our programming lives nicely!
Onward!
Trees 101
Example of a tree
Now, just saying that our tree has a bunch of nodes connected
by edges isn’t very enlightening. To help give the tree more
clarity, we give the nodes additional labels, such as children,
parents, siblings, root, and leaves.
The easiest nodes to classify are the children. There are many of
them, for a child node is any node that is a direct extension of
another node. Except for the very first node at the very top, all
of the nodes we see in Figure 7-4 fit that description and are
considered to be children.
FIGURE 7-4
Child nodes
Parent nodes
Sibling nodes
We are almost done here. Earlier, we said that all nodes are
children except for the first node at the very top, which has no
parent. This node is better known to friends, family, and
computer scientists as the root (Figure 7-8).
FIGURE 7-8
While the root is a node that has no parent, on the other end
are the nodes that don’t have any children. These nodes are
commonly known as leaves (Figure 7-9).
FIGURE 7-9
Leaf nodes
Conclusion
8
Binary Trees
Onward!
Let’s dive a bit deeper into these rules, for they are important to
understand. They help explain why the binary tree works the
way it does, and they set us up for learning about other tree
variants, such as the binary search tree.
Rules Explained
The first rule is that each node in a binary tree can have only
zero, one, or two children. If a node happens to have more than
two children, that’s a problem (Figure 8-2).
FIGURE 8-2
The second rule is that a binary tree must have only a single
root node (Figure 8-3).
FIGURE 8-3
Now, we get to the last rule. The last rule is that there can be
only one path from the root to any node in the tree (Figure 8-4).
FIGURE 8-4
FIGURE 8-5
FIGURE 8-6
For this last row, there are some rules on how the nodes should
appear. If the last row has any nodes, those nodes need to be
filled continuously, starting from the left with no gaps. What
you see in Figure 8-7 wouldn’t be acceptable, for example.
FIGURE 8-7
There is a gap where the D node is missing its right child, yet
the I node is parented under the E node. This means we weren’t
continuously filling in the last row of nodes from the left. If the
I node were instead inserted as the D node’s right child, then
things would be good.
In other words, this means that the tree is not lopsided. All
nodes can be accessed efficiently.
class Node {
constructor(data) {
this.data = data;
this.left = null;
this.right = null;
}
}
We have a Node class, and it takes a data value as its
argument, which it stores as a property called data on itself.
Our node also stores two additional properties, left and
right.
FIGURE 8-11
class Node {
constructor(data) {
this.data = data;
this.left = null;
this.right = null;
}
}
const rootNodeA = new Node("A");
const nodeB = new Node("B");
const nodeC = new Node("C");
const nodeD = new Node("D");
const nodeE = new Node("E");
const nodeF = new Node("F");
const nodeG = new Node("G");
rootNodeA.left = nodeB;
rootNodeA.right = nodeC;
nodeB.left = nodeD;
nodeB.right = nodeE;
nodeE.left = nodeF;
nodeE.right = nodeG;
Notice that we are creating a new Node object for each node in
our tree, and the argument we pass in to the constructor is the
letter value of each node:
Once we have our nodes created, we set each node’s left and
right properties to the corresponding child node:
rootNodeA.left = nodeB;
rootNodeA.right = nodeC;
nodeB.left = nodeD;
nodeB.right = nodeE;
nodeE.left = nodeF;
nodeE.right = nodeG;
Conclusion
9
FIGURE 9-1
FIGURE 9-2
The child node to the left has a value less than the parent node's.
The child node to the right has a value greater than the parent
node's.
These two additional rules build on the three rules we saw for
plain binary trees to give us our blueprint for how to think
about binary search trees. What we are going to do next is dive
deeper into how binary search trees work by looking at how to
perform common add and remove operations.
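Those two ordering rules can be captured in a small check (my own sketch, not the book's implementation, which appears later in this chapter). The trick is that each node must fall within a range imposed by all of its ancestors, not just its immediate parent:

```javascript
// Returns true if every node's value sits within the allowed
// (min, max) range imposed by its ancestors
function isBST(node, min = -Infinity, max = Infinity) {
  if (node === null) {
    return true;
  }
  if (node.data <= min || node.data >= max) {
    return false;
  }
  return isBST(node.left, min, node.data) &&
         isBST(node.right, node.data, max);
}

// A tiny tree following the rules: 42 with 24 (left) and 99 (right)
const validTree = {
  data: 42,
  left: { data: 24, left: null, right: null },
  right: { data: 99, left: null, right: null },
};

// The same tree with the children swapped breaks the rules
const invalidTree = {
  data: 42,
  left: { data: 99, left: null, right: null },
  right: { data: 24, left: null, right: null },
};

console.log(isBST(validTree));   // true
console.log(isBST(invalidTree)); // false
```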
Onward!
FIGURE 9-3
Adding Nodes
FIGURE 9-4
Next, let’s add the number 24 (Figure 9-5). Every new node we
add from here on out has to be a child of another node. In our
case, we have only our root node of 42, so our 24 node will be a
child of it. The question is, will it go left, or will it go right?
FIGURE 9-5
Where will our first node go?
If the value we are adding is less than the parent node, the
value goes left.
If the value we are adding is greater than the parent node,
the value goes right.
We start at the root node and start looking around. In our tree,
we have only one node, the root node of 42. The number we are
trying to add is 24. Because 24 is less than 42, we add our node
as a left child (Figure 9-6).
FIGURE 9-6
FIGURE 9-7
We are not done with adding more numbers to our tree. Now
that we have a few extra nodes beyond our root node, things
get a bit more interesting. The next number we want to add is
15. We start at the root. The root value is 42, so we look left
because 15 is less than 42. Left of 42 is the 24 node. We now
check whether 15 is less than 24. It is, so we look left again.
There are no more nodes to the left of 24, so we can safely park
15 there (Figure 9-8).
FIGURE 9-8
We will go a bit faster now. The next value we want to add is 50.
We start with our root node of 42. Our 50 value is greater than
42, so we look right. On the right, we have our 99 node. 99 is
greater than 50, so we look left. There is no node to the left of
our 99 node, so we plop our 50 value there (Figure 9-9).
FIGURE 9-9
The next value we want to add is 120. Using the same steps
we’ve seen a bunch of times, this value will find itself to the
right of the 99 node (Figure 9-10).
FIGURE 9-10
Where 120 ends up
FIGURE 9-11
Removing Nodes
There will be times when we’ll be adding nodes. Then there will
be times when we will be removing nodes as well. Removing
nodes from a binary search tree is slightly more involved than
adding nodes, for the behavior varies depending on which node
we are removing. We walk through those cases next.
When we remove a node with a single child, that child takes the
place of the removed node. In our example, when we remove
the 24 node, the 15 node takes its place (Figure 9-15).
FIGURE 9-15
FIGURE 9-16
FIGURE 9-17
If we walk through all the nodes in the tree after this shift, we’ll
again see that the integrity of the tree is still maintained. No
node is out of place.
FIGURE 9-18
FIGURE 9-19
Which node in our subtree has the next highest value from 99?
To describe the same thing differently, when we look at all the
children to the right of our 99 node, which node has the
smallest value? The answer to both of these questions is the
node whose value is 104. What we do next is remove our 99
node and replace it with our 104 node (Figure 9-20).
FIGURE 9-20
When we look at our binary search tree after this removal and
swap, the integrity of all of the nodes is maintained. This isn’t
an accident, of course. The inorder successor node will always
have a value that ensures it can be safely plopped into the place
of the node we are removing. That was the case with our 104
node that took over for our 99 node. That will be the case for
other nodes we wish to remove as well.
1. If the tree is empty, create a new node and make it the root.
2. Compare the value of the new node with the value of the root
node.
3. If the value of the new node is less than the value of the root
node, repeat steps 2 and 3 for the left subtree of the root node.
4. If the value of the new node is greater than the value of the
root node, repeat steps 2 and 3 for the right subtree of the
root node.
5. If the value of the new node is equal to the value of an
existing node in the tree, return a message to indicate that
the node was not added.
6. Create a new node and add it as either the left or right child
of the parent node where the new node should be inserted.
7. Rebalance the tree if necessary to maintain the binary search
tree property.
Go through the above steps and make sure nothing sounds too
surprising. They are almost the TL;DR version of what we saw
in the previous sections. Our code is mostly going to mimic the
preceding steps. In fact, let’s look at our code now!
Our binary search tree implementation is made up of our
familiar Node class and the BinarySearchTree class:
class Node {
constructor(data) {
this.data = data;
this.left = null;
this.right = null;
}
}
class BinarySearchTree {
constructor() {
this.root = null;
}
insert(value) {
// Create a new node with the given value
const newNode = new Node(value);
// T th t t fi d th t i
// Traverse the tree to find the correct posi
let currentNode = this.root;
while (true) {
      if (value === currentNode.data) {
        // If the value already exists in the tree, do nothing
        return undefined;
      } else if (value < currentNode.data) {
        // If the value is less than the current node's value, go left
        if (currentNode.left === null) {
          // If the left child is null, the new node becomes
          // the left child
          currentNode.left = newNode;
          return this;
        }
        currentNode = currentNode.left;
      } else {
        // If the value is greater than the current node's value,
        // go right
        if (currentNode.right === null) {
          // If the right child is null, the new node becomes
          // the right child
          currentNode.right = newNode;
          return this;
        }
        currentNode = currentNode.right;
      }
}
}
remove(value) {
// Start at the root of the tree
let currentNode = this.root;
let parentNode = null;
    // Traverse the tree looking for the value to remove
    while (currentNode !== null) {
      if (value === currentNode.data) {
        if (currentNode.left === null &&
            currentNode.right === null) {
          // Case 1: Node has no children (it is a leaf)
          if (parentNode === null) {
            // If the node is the root of the tree, clear the root
            this.root = null;
          } else if (parentNode.left === currentNode) {
            parentNode.left = null;
          } else {
            parentNode.right = null;
          }
          return true;
        } else if (currentNode.left !== null &&
                   currentNode.right === null) {
          // Case 2: Node has one child (left child)
          if (parentNode === null) {
            // If the node is the root of the tree
            this.root = currentNode.left;
          } else {
            // If the node is not the root of the tree
            if (parentNode.left === currentNode) {
              parentNode.left = currentNode.left;
            } else {
              parentNode.right = currentNode.left;
            }
          }
return true;
        } else if (currentNode.left === null &&
                   currentNode.right !== null) {
          // Case 2: Node has one child (right child)
          if (parentNode === null) {
            // If the node is the root of the tree
            this.root = currentNode.right;
          } else {
            // If the node is not the root of the tree
            if (parentNode.left === currentNode) {
              parentNode.left = currentNode.right;
            } else {
              parentNode.right = currentNode.right;
            }
          }
          return true;
        } else {
          // Case 3: Node has two children
          // Find the inorder successor of the node
let successor = currentNode.right;
let successorParent = currentNode;
while (successor.left !== null) {
successorParent = successor;
successor = successor.left;
}
          // Copy the successor's value into the current node, then
          // remove the successor from its old position
          currentNode.data = successor.data;
          if (successorParent.left === successor) {
            successorParent.left = successor.right;
          } else {
            successorParent.right = successor.right;
          }
          return true;
}
      } else if (value < currentNode.data) {
        // If the value we're looking for is less than
        // the current node's value, go left
        parentNode = currentNode;
        currentNode = currentNode.left;
      } else {
        // If the value we're looking for is greater than
        // the current node's value, go right
        parentNode = currentNode;
        currentNode = currentNode.right;
      }
}
}
    // If we reach this point, the value was not found
return false;
}
}
const myBST = new BinarySearchTree();
myBST.insert(10);
myBST.insert(5);
myBST.insert(15);
myBST.insert(3);
myBST.insert(7);
myBST.insert(13);
myBST.insert(18);
myBST.insert(20);
myBST.insert(12);
myBST.insert(14);
myBST.insert(19);
myBST.insert(30);
FIGURE 9-21
FIGURE 9-22
The 15 node is gone, but the 18 node takes its place as the
rightful inorder successor. Feel free to play with more node
additions and removals to see how things will look. The easiest
way to see how all of the nodes are related to each other is to
inspect your binary search tree in the Console and expand each
left and right node until you have a good idea of how things
shape up (Figure 9-23).
FIGURE 9-23
FIGURE 9-24
Conclusion
Binary search trees are pretty sweet. They are a type of binary
tree with some added constraints to make them more suited for
heavy-duty data wrangling. The constraints ensure that the
left child is always smaller than the parent and the right child is
always greater. There are a few more rules around how nodes
should arrange and rearrange themselves when they get added
or removed.
10
Heaps
If you are anything like me, you probably have a bunch of ideas
and too little time to act on them. To help bring some order, we
may rely on a tool that is designed to help us prioritize tasks
(Figure 10-1).
FIGURE 10-1
Building our own tool that does all of this sounds like a fun
activity, but we are going to stay focused on the data structures
side of the house. There is a very efficient data structure that we
can use to represent all of the things we want to do, and that
data structure is the heap. We learn all about it in the following
sections.
Onward!
Example of a heap
Our heap is a binary tree where each node has at most two
children.
The value of each node is greater than or equal to the values
of its children.
Now, here is the kicker that makes heaps really sweet. What we
are dealing with isn’t just any binary tree. It is a complete
binary tree where all rows of the nodes are filled left to right
without any gaps. This leads to a very balanced-looking tree
(Figure 10-4).
FIGURE 10-4
Inserting a Node
Let’s start with inserting a node, which is also the place to start
when we have a blank slate and want to build our heap from
scratch. The first item we want to add is the item with the value
13 (Figure 10-5).
FIGURE 10-5
This is our first item, and it becomes our root node by default.
This is the easy case. For all subsequent items we wish to add,
we need to follow these rules:
1. We add the new node to the bottom level of the heap, on the
leftmost available spot, with no gaps. This ensures that the
tree remains complete.
2. We compare the value of the new node with the value of its
parent node. If the value of the new node is greater than the
value of its parent node, we swap the new node with its
parent node. We repeat this process until either the new
node’s value is not greater than its parent’s value or we have
reached the root node.
3. After swapping, we repeat step 2 with the new parent and its
parent until the heap property is restored.
FIGURE 10-6
FIGURE 10-7
Now, is 24 less than the parent value of 10? No. So, we swap the
parent and child to ensure the child is always less than the
value of the parent (Figure 10-9).
FIGURE 10-9
FIGURE 10-10
FIGURE 10-12
FIGURE 10-13
The next number we add is 36. Our 36 starts off as the right
child of our 15 node. That location is only temporary! To
maintain the heap property, our 36 node will swap with the 15
node and then swap with the 24 node as well (Figure 10-14).
FIGURE 10-14
We add it at the leftmost available spot on the bottom level, and our node
containing the 3 value is a child of the 10 node. This maintains
the heap property, so we don’t need to do anything additional.
Our heap is in a good spot, and we have just seen what
inserting nodes into a heap looks like and the role bubbling up
plays in ensuring our nodes are properly positioned.
FIGURE 10-16
Here are the steps to remove the root node from our heap:
1. We remove the root node from the heap and replace it with
the last node in the heap.
2. We compare the value of the new root node with the values
of its children. If the value of the new root node is less than
the value of either of its children, we swap the new root node
with the larger of its children. We repeat this process until
either the new root node’s value is greater than or equal to
the values of its children or it has no children. This process is
called bubbling down.
3. After swapping, we repeat step 2 with the new child node
and its children until the heap property is restored.
When we remove our 36 node and swap it with our 3 node, our
heap will look as shown in Figure 10-18.
FIGURE 10-18
The root has been replaced with our last node
FIGURE 10-19
Time to rebalance
Let’s go through the removal steps just one more time to make
sure we have all of our i’s dotted and t’s crossed. Our new root
node has a value of 24, and we want to remove it (Figure 10-21).
FIGURE 10-21
FIGURE 10-22
The last node takes the place of the removed root node
After we do this, we compare our 3 node with the values of its
children. It is less than both of them, so we swap it with the
largest of its children, the 15 node (Figure 10-23).
FIGURE 10-23
After this swap, we are not done yet. We now check whether
our 3 node happens to be less than any of its children. Its only
child is the 5 node, and 3 is indeed less than 5, so we do one more
swap (Figure 10-24).
FIGURE 10-24
Heap Implementation
Heaps as Arrays
Let’s look at a visual first (Figure 10-25), then talk about how
exactly this mapping works.
FIGURE 10-25
When we look at the items in our array (and their children and
parents), the calculations should track nicely.
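Those calculations boil down to three standard index formulas for array-backed heaps. Here is a quick sketch of them (the function names are my own):

```javascript
// For a heap stored in an array, the node at index i has
// a predictable parent and predictable children
function parentIndex(i) {
  return Math.floor((i - 1) / 2);
}

function leftChildIndex(i) {
  return 2 * i + 1;
}

function rightChildIndex(i) {
  return 2 * i + 2;
}

console.log(parentIndex(4)); // 1
console.log(leftChildIndex(1)); // 3
console.log(rightChildIndex(1)); // 4
```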
The Code
class Heap {
constructor() {
// The heap is stored as an array
this.heap = [];
}
}
  extractMax() {
    // If the heap is empty, there is nothing to extract
    if (this.heap.length === 0) {
      return undefined;
    }
    // If the heap has only one element, remove and return it
    if (this.heap.length === 1) {
      return this.heap.pop();
    }
    // Otherwise, remove the root element (maximum value) and replace it
    // with the last element in the array
    const max = this.heap[0];
    const end = this.heap.pop();
    this.heap[0] = end;
    // Restore the heap property by bubbling down
    this.#bubbleDown(0);
    return max;
  }
this.#bubbleUp(parentIndex);
}
}
      [this.heap[index], this.heap[largestIndex]] =
        [this.heap[largestIndex], this.heap[index]];
      this.#bubbleDown(largestIndex);
}
}
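The two bubbling helpers are only partially visible above. Here is a sketch of the same logic written as standalone functions against a plain array (the Heap class versions would operate on this.heap instead):

```javascript
// Sketches of the two bubbling helpers, written against a plain
// array so they can run on their own
function bubbleUp(heap, index) {
  if (index === 0) return;
  const parentIndex = Math.floor((index - 1) / 2);
  if (heap[index] > heap[parentIndex]) {
    // The child is larger, so swap it with its parent and keep going up
    [heap[index], heap[parentIndex]] = [heap[parentIndex], heap[index]];
    bubbleUp(heap, parentIndex);
  }
}

function bubbleDown(heap, index) {
  const left = 2 * index + 1;
  const right = 2 * index + 2;
  let largestIndex = index;
  if (left < heap.length && heap[left] > heap[largestIndex]) {
    largestIndex = left;
  }
  if (right < heap.length && heap[right] > heap[largestIndex]) {
    largestIndex = right;
  }
  if (largestIndex !== index) {
    // Swap with the larger child and keep going down
    [heap[index], heap[largestIndex]] = [heap[largestIndex], heap[index]];
    bubbleDown(heap, largestIndex);
  }
}

const demo = [36, 24, 15, 13, 3];
demo.push(50); // insert 50 at the end...
bubbleUp(demo, demo.length - 1); // ...and let it rise to the root
console.log(demo[0]); // 50
```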
console.log(myHeap.getMax()); // 50
console.log(myHeap.extractMax()); // 50
console.log(myHeap.extractMax()); // 18
console.log(myHeap.extractMax()); // 15
console.log(myHeap.extractMax()); // 14
Performance Characteristics
In a heap, we called out earlier that removing the root node and
inserting items into our heap are the two fundamental
operations we care about. Let’s look into how these fare from a
performance point of view.
The first step of swapping the root node with the last leaf node
takes constant time because we are just updating two array
elements. For example, Figure 10-26 represents what is
happening.
FIGURE 10-26
Inserting an Item
The first step of inserting the new item at the end of the heap
takes constant time because we are simply appending a new
element to the end of the array, like the 15 we are adding to the
heap (Figure 10-27).
FIGURE 10-27
Adding an item
Conclusion
11
Onward!
Here is the setup that will help us explain how hashtables work.
We have a bunch of food that we need to store (Figure 11-1).
FIGURE 11-1
We also have a bunch of boxes to store this food into (Figure 11-
2).
FIGURE 11-2
Boxes
Our goal is to take some of our food and store it in these boxes
for safekeeping. To help us here, we are going to rely on a
trusted robot helper (Figure 11-3).
FIGURE 11-3
FIGURE 11-4
Our robot analyzing our watermelon
This analysis tells our robot which box to put our watermelon
into. The exact logic our robot uses isn’t important for us to
focus on right now. The important part is that, at the end of this
analysis, our robot has a clear idea of where to store our
watermelon (Figure 11-5).
FIGURE 11-5
Storing an item
Next, we want to store the hamburger. The robot repeats the
same steps. It analyzes it, determines which box to store it in,
and then stores it in the appropriate box (Figure 11-6).
FIGURE 11-6
Once it has figured out where our fish is, it goes directly to the
right box and brings it back to us (Figure 11-9).
FIGURE 11-9
FIGURE 11-10
What exactly does our robot do? It analyzes the item we want to
store and maps it to a location to store it in. Let’s adjust our
visualization a little bit (Figure 11-11).
FIGURE 11-11
A hashing function
FIGURE 11-12
FIGURE 11-14
We somehow always end up with an array, don’t we?
What this means is that our hashtables can pull off constant-
time, aka O(1), lookup and insertion operations. This speedy
ability makes them perfect for the many data-caching and
indexing-related activities we perform frequently. We are going
to see how they work by looking at some common operations.
Let’s say that we want to add the following data in the form of a
[key, value] pair where the key is a person’s name and the
value is their phone number:
FIGURE 11-15
The input is both our keys and values. The key is sent to our
hashing function to determine the storage location. Once the
storage location is determined, the value is placed there.
Reading Items from Our Hashtable
FIGURE 11-16
Retrieving a phone number efficiently
JavaScript Implementation/Usage
// set values
characterInfo.set("Link", "(555) 123-4567");
characterInfo.set("Zelda", "(555) 987-6543");
characterInfo.set("Mario", "(555) 555-1212");
characterInfo.set("Mega Man", "(555) 867-5309");
characterInfo.set("Ryu", "(555) 246-8135");
characterInfo.set("Corvo", "(555) 369-1472");
// get values
console.log(characterInfo.get("Ryu")); // (555) 246-8135
console.log(characterInfo.get("Batman")); // undefined
// get size
console.log(characterInfo.size()); // 6
// delete item
console.log(characterInfo.delete("Corvo")); // true
console.log(characterInfo.size()); // 5
for (let i = 0; i < key.length; i++) {
  // Add the Unicode value of each character in the key
hashValue += key.charCodeAt(i);
}
Notice that the returned hash value for Yur is the same 20 as it
is for Ryu. This doesn’t seem desirable, so let’s discuss it next!
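Filling in the wrapper around the loop above, a complete sketch might look like this (the final modulo step and the table size of 100 are assumptions on my part):

```javascript
function hash(key) {
  let hashValue = 0;
  for (let i = 0; i < key.length; i++) {
    // Add the Unicode value of each character in the key
    hashValue += key.charCodeAt(i);
  }
  // Map the sum to one of 100 storage locations (assumed table size)
  return hashValue % 100;
}

console.log(hash("Ryu")); // 20
console.log(hash("Yur")); // 20 - a collision!
```

Because both words contain the exact same characters, their character codes sum to the same total, and our hashing function can't tell them apart.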
FIGURE 11-17
Example of collisions
FIGURE 11-18
Action   Average   Worst
Note
Is a Hashtable a Dictionary?
Conclusion
12
FIGURE 12-1
An example of autocomplete
As we keep typing, the partial results keep getting refined until
it nearly predicts the word or phrase we were trying to type
fully. This autocompletion-like interaction is one we take for
granted these days. Almost all of our user interfaces (aka UIs)
have some form of it. Why is this interesting for us at this very
moment?
Behind the scenes, there is a very good chance that the data
structure powering this autocomplete capability is the star of
this chapter, the trie (sometimes also called a prefix tree). In
the following sections, we learn more about it.
Onward!
What Is a Trie?
What is a trie?
Inserting Words
What we want to do is store the word apple inside a trie. The
first thing we do is break our word into individual characters:
a, p, p, l, and e. Next, it’s time to start building our trie tree
structure.
FIGURE 12-3
Our next step is to take the first letter (a) from the word (apple)
we are trying to store and add it to our trie as a child of our root
node (Figure 12-4).
FIGURE 12-4
Our child
We repeat this step for the next letter (p) and add it as a child of
our a node (Figure 12-5).
FIGURE 12-5
We’ll see later why marking the end is important. For now, let’s
go ahead and add a few more words to our trie. We are going to
add cat, dog, duck, and monkey. When we add cat and dog, our
trie will look like Figure 12-8.
FIGURE 12-8
The next word we are going to add is duck. Notice that the first
letter of our word is d, and we already have a d node at the top
as a child of our root. What we do is start from our existing d
node instead of creating a new d node. The next letter is u, but
we don’t have an existing child of d with the value of u. So, we
create a new child node whose value is u and continue on with
the remaining letters in our word.
The part to emphasize here is that our letter d is now a common
prefix for our dog and duck words (Figure 12-9).
FIGURE 12-9
The next two words we want to add are app and monk. Both of
these words are contained within the larger words of apple and
monkey respectively, so what we need to do is just designate the
last letters in app and monk as being the end of a word (Figure
12-12).
FIGURE 12-12
Ok. At this point, our trie contains apple, app, cat, dog, duck,
dune, monkey, and monk. We have enough items in our trie
now. Let’s look at some additional operations.
Finding Items
FIGURE 12-13
Our next task is to see if the word monk exists in our trie. The
process is the same. We check whether the first letter of the
word we are looking for (m) exists as the first letter in our trie.
The answer is yes (Figure 12-14).
FIGURE 12-14
Deleting Items
The last step we look at is how to delete an item from our trie.
Because we went into detail on how to add and find items in
our trie, how we delete items is more straightforward. There
are a few additional tricks we need to keep in mind. In our trie,
let’s say that we want to delete the word duck (Figure 12-17).
FIGURE 12-17
What we can’t do is just traverse this tree and delete all the
characters because:
FIGURE 12-18
For our example where we want to remove duck from our trie,
we start at the end with the letter k. This node is safe to delete,
so we delete it. We then move up to the letter c. This node is also
safe to delete, so our trie now looks like Figure 12-19.
FIGURE 12-19
Let’s start with the obvious one. Our trie is a tree-based data
structure. We can see that is the case (Figure 12-20).
FIGURE 12-20
Now, let us get to the really big elephant in the room: What
makes tries efficient for retrieving strings or sequences of
characters? The answer has a lot to do with what we are trying
to do. Where a trie is helpful is for a very particular set of use
cases. These cases involve searching for words given an
incomplete input. To go back to our example, we provide the
character d, and our trie can quickly return dog, duck, and dune
as possible destinations (Figure 12-21).
FIGURE 12-21
FIGURE 12-22
FIGURE 12-23
Spellcheck at work
FIGURE 12-24
A routing table
Each node in the trie represents a part of the IP address, and
the edges correspond to the possible values of that part. By
traversing the trie on the basis of the IP address, routers can
determine the next hop for routing packets in the network
efficiently.
Word games and puzzles: Tries can be handy for word
games like Scrabble or Wordle where players need to quickly
find valid words given a set of letters (Figure 12-25).
FIGURE 12-25
These are just a few examples of the many use cases where tries
can be super useful. The key idea is that tries allow us to
efficiently store, retrieve, and manipulate words or sequences
of characters, making them suitable for tasks that involve
matching, searching, or suggesting based on prefixes.
Note
Now that we can verbally describe how a trie works, let’s turn
all of the words and visuals into code. Our trie implementation
will support the following operations:
Inserting a word
Searching for whether a word exists
Checking whether words that match a given prefix exist
Returning all words that match a given prefix
class TrieNode {
  constructor() {
    // Each TrieNode has a map of children nodes,
    // where the key is the character and the value
    // is the child TrieNode
    this.children = new Map();
    // Marks whether this node represents the end of a word
    this.isEndOfWord = false;
  }
    // Return true if the last TrieNode represents the end of a word
    return current.isEndOfWord;
  }
if (current) {
      // If the node exists, traverse the Trie starting there
      // to find all words and add them to the 'words' array
this.#traverse(current, prefix, words);
}
return words;
}
delete(word) {
let current = this.root;
const stack = [];
    let index = 0;
    while (index < word.length) {
      const char = word[index];
      if (!current.children.get(char)) {
        // Word doesn't exist in the Trie, nothing to delete
        return;
}
current = current.children.get(char);
index++;
}
if (!current.isEndOfWord) {
      // Word doesn't exist in the Trie, nothing to delete
return;
}
#findNode(prefix) {
let current = this.root;
for (let i = 0; i < prefix.length; i++) {
const char = prefix[i];
const trie = new Trie();
trie.insert("apple");
trie.insert("app");
trie.insert("monkey");
trie.insert("monk");
trie.insert("cat");
trie.insert("dog");
trie.insert("duck");
trie.insert("dune");
console.log(trie.search("apple")); // true
console.log(trie.search("app")); // true
console.log(trie.search("monk")); // true
console.log(trie.search("elephant")); // false
console.log(trie.getAllWords("ap")); // ['apple', 'app']
console.log(trie.getAllWords("b")); // []
console.log(trie.getAllWords("c")); // ['cat']
console.log(trie.getAllWords("m")); // ['monk', 'monkey']
trie.delete("monkey");
console.log(trie.getAllWords("m")); // ['monk']
Our trie implementation performs all of the operations we
walked through in detail earlier, and it does it by using a
hashmap as its underlying data structure to help efficiently
map characters at each node to its children. Many trie
implementations may use arrays as well, and that is also a fine
data structure to use.
Performance
FIGURE 12-26
Note
Long story short, the elevator pitch is this: Tries are very
efficient data structures. That is something you can take to the
bank!
Conclusion
13
Graphs
Onward!
What Is a Graph?
FIGURE 13-1
FIGURE 13-3
Right now, the edges don’t have any direction to them. They are
considered to be bidirectional where the relationship between
the connected nodes is mutual. A graph made up of only
bidirectional edges is known as an undirected graph (Figure
13-4).
FIGURE 13-4
Newman!
We can now see that Jerry, Elaine, Kramer, and George are
mutual friends with each other. Newman is a mutual friend of
Kramer, and he has a one-way friendship with Jerry.
An acyclic graph
FIGURE 13-10
Graph Implementation
Now that we have a good overview of what graphs are and the
variations they come in, it’s time to shift gears and look at how
we can actually implement one. If we take many steps back, the
most common operations we’ll do with a graph are:
Add nodes
Define edges between nodes
Identify neighbors:
If our graph is directional, make sure we respect the
direction of the edge
If our graph is nondirectional, all immediate nodes
connected from a particular node will qualify as a
neighbor
Remove nodes
Representing Nodes
FIGURE 13-12
Another example of a graph
Getting back to our A node, its neighbors are the nodes B, C, and
D. Some nodes will have fewer neighbors, and some nodes can
have significantly more. It all boils down to both the type and
volume of data our graph represents. So, how would we
represent a node’s neighbors? One really popular way is by
using what is known as an adjacency list.
A: [B, C, D]
B: [A]
C: [A]
D: [A]
By using a map, we can have the key be a node. The value will
be a set data structure whose contents will be all of the
neighboring nodes. Sets are great because they don’t allow
duplicate values. This ensures we avoid a situation where we
are going in a loop and adding the same node repeatedly.
The Code
With the background out of the way, let’s dive right in and look
at our implementation for the graph data structure:
class Graph {
  constructor() {
    // Map to store nodes and their adjacent nodes
    this.nodes = new Map();
  }
  addNode(node) {
    // Each node's neighbors live in a set to avoid duplicates
    if (!this.nodes.has(node)) {
      this.nodes.set(node, new Set());
    }
  }
  addEdge(source, destination) {
    // Our edges are directed, going from source to destination
    this.nodes.get(source).add(destination);
  }
  removeNode(node) {
    // Remove the node itself as well as any edges pointing to it
    this.nodes.delete(node);
    this.nodes.forEach((neighbors) => {
      neighbors.delete(node);
    });
  }
  getAllNodes() {
    return [...this.nodes.keys()];
  }
}
const characters = new Graph();
// Add nodes
characters.addNode('Jerry');
characters.addNode('Elaine');
characters.addNode('Kramer');
characters.addNode('George');
characters.addNode('Newman');
// Add edges
characters.addEdge('Jerry', 'Elaine');
characters.addEdge('Jerry', 'George');
characters.addEdge('Jerry', 'Kramer');
characters.addEdge('Elaine', 'Jerry');
characters.addEdge('Elaine', 'George');
characters.addEdge('Elaine', 'Kramer');
characters.addEdge('George', 'Elaine');
characters.addEdge('George', 'Jerry');
characters.addEdge('George', 'Kramer');
characters.addEdge('Kramer', 'Elaine');
characters.addEdge('Kramer', 'George');
characters.addEdge('Kramer', 'Jerry');
characters.addEdge('Kramer', 'Newman');
characters.addEdge('Newman', 'Kramer');
characters.addEdge('Newman', 'Jerry');
// Remove a node
console.log("Remove the node, Newman: ")
characters.removeNode("Newman");
console.log(characters.getAllNodes());
// ['Jerry', 'Elaine', 'Kramer', 'George']
Conclusion
The graph data structure is one of those fundamental concepts
in computer science that you and I can’t avoid running into.
Because graphs provide a powerful way to model relationships
between things, their usefulness is through the roof. So many
activities we take for granted, such as navigating using an
online map, joining a multiplayer game, analyzing data,
navigating to anywhere on the Internet, following friends on
social media, and doing a billion other activities, all rely on the
core capabilities the graph data structure provides. We’ve only
scratched the surface of what graphs are capable of, so we are
going to cover more graph-related things in this book.
Part II
Algorithms
14
Introduction to Recursion
Onward!
A giant cookie!
Because of its size, most people will have no way to eat this
entire cookie in one bite. What we can do is break it into
smaller pieces (Figure 14-2).
FIGURE 14-2
As we can see, these smaller pieces are still too big to eat in one
bite. What we need to do is keep breaking our pieces down into
even smaller pieces. Eventually, we will have broken our cookie
down into a bite-sized piece that we can easily eat (Figure 14-3).
FIGURE 14-3
function hello() {
  console.log("I'm a little function, short and stout!");
  hello();
}
FIGURE 14-4
Terminating Condition
Turning all of those words into code, let’s say we want our
hello function to act as an accumulator where we pass in a
number, and it returns the sum of all the numbers leading up to
the number we passed in. For example, if we pass in the
number 5, our code will add up all numbers leading up to it,
where it will calculate 5 + 4 + 3 + 2 + 1 and return a final value
of 15. Following is what our revised hello function will look
like if we do all of this:
function hello(num) {
if (num <= 1) {
// terminating condition
return num;
} else {
// recursive function call
return num + hello(num - 1);
}
}
console.log(hello(5)); // 15
FIGURE 14-5
Visualizing our code
FIGURE 14-6
We keep repeating this until our num value hits 1. When this
happens, we hit our terminating condition ( num <= 1 ) and
return the value of num itself, which is just 1 in our case
(Figure 14-7).
FIGURE 14-7
Taking a step back, for just one more time, we can see how we
started with a large problem and, with each recursive call,
broke the problem down into much smaller steps. We
continued until we hit our terminating condition and were left
with an easily digestible nugget, a plain old number. Really cool,
right?
Conclusion
15
fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(2) = 1 // Sum of fibonacci(1) + fibonacci(0)
fibonacci(3) = 2 // Sum of fibonacci(2) + fibonacci(1)
fibonacci(4) = 3 // Sum of fibonacci(3) + fibonacci(2)
.
.
.
fibonacci(n) = fibonacci(n-1) + fibonacci(n-2)
This is cool . . . sort of. Why do we care about it? Besides its
many practical uses, the Fibonacci sequence is a great example
of an algorithm that can be solved recursively (Figure 15-1).
FIGURE 15-1
Onward!
function fibonacci(n) {
if (n == 0) {
return 0;
} else if (n == 1) {
return 1;
} else {
return fibonacci(n - 1) + fibonacci(n - 2);
}
}
FIGURE 15-3
FIGURE 15-4
Going deeper into Fibonacci
FIGURE 15-5
function fibonacci(n) {
if (n == 0) {
return 0;
} else if (n == 1) {
return 1;
} else {
return fibonacci(n - 1) + fibonacci(n - 2);
}
}
FIGURE 15-6
FIGURE 15-7
Visualizing the number of recursive calls
Note
FIGURE 15-8
function fibonacci(n) {
if (n == 0) {
return 0;
} else if (n == 1) {
return 1;
} else {
let a = 0;
let b = 1;
for (let i = 2; i <= n; i++) {
let c = a + b;
a = b;
b = c;
}
return b;
}
}
console.log('Result is', fibonacci(10));
There (really!) are three different lines shown in this graph. We only “see” two, for the
iteration and recursive + memoization values are nearly identical and overlapping.
What you are seeing here isn’t a glitch. The time for calculating
the Fibonacci sequence for the first 30 numbers is almost 0 in
the recursive + memoization and iteration-based approaches.
The purely recursive approach starts to take increasing
amounts of time at around the 17th Fibonacci number, and it
grows exponentially from there on out. There is a reason why
the chart includes only the first 30 numbers of the Fibonacci
sequence. The recursive-only approach couldn’t handle
larger numbers without massive delays and, ultimately, a
stack overflow error.
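For reference, here is one way to write the recursive + memoization approach measured in the chart (this exact code is my sketch; the idea is that previously computed values are cached so each one is calculated only once):

```javascript
function fibonacci(n, memo = {}) {
  if (n in memo) {
    // We already computed this value, so return the cached copy
    return memo[n];
  }
  if (n == 0) {
    return 0;
  } else if (n == 1) {
    return 1;
  }
  // Compute the value once and cache it for future calls
  memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
  return memo[n];
}

console.log(fibonacci(30)); // 832040
```

Because each value of n is computed at most once, this version runs in linear time instead of exponential time.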
FIGURE 15-10
Conclusion
16
Towers of Hanoi
As puzzles go, nobody really did it better than the monks who
came up with the one we are going to learn about, the Towers
of Hanoi. Besides being a really cool puzzle, it has a lot of
practical (and historical!) significance as we learn about
recursion.
Onward!
FIGURE 16-2
The easiest way to play the game is to use a single disk (Figure
16-3).
FIGURE 16-3
FIGURE 16-4
FIGURE 16-5
Our goal is still the same. We want to shift these disks to our
destination, the third tower, while maintaining the same
stacking order and ensuring that a smaller disk is always placed
on top of a larger disk at every move along the way.
The first thing we need to do is clear a path for our larger disk
to reach its destination. We do that by first shifting our topmost
disk to our temporary second tower (Figure 16-6).
FIGURE 16-6
Once we’ve made this move, our larger disk has a direct path to
the destination. Our next move is to shift that disk to tower 3
(Figure 16-7).
FIGURE 16-7
The final step is to move our smaller disk from the temporary
tower to the destination as well (Figure 16-8).
FIGURE 16-8
Game is over
At this point, we’ve successfully shifted all of the disks from our
starting point to the destination while respecting the various
conditions. Now, with two disks we can see a little bit more
about what makes this puzzle challenging. To see how
challenging the Towers of Hanoi can be, we look at one more
example in great detail. We are going to throw another disk into
the mix!
Three Disks
All right. With three disks, the training wheels come off and we
really see what the monks who inspired this puzzle were up
against. We start off with all of our disks at the starting point
(Figure 16-9).
FIGURE 16-9
FIGURE 16-10
Next, let’s move our second disk to the empty spot in our
temporary tower (Figure 16-11).
FIGURE 16-11
This leaves our third and largest disk almost ready to be moved
to the destination tower. Our smallest disk currently stands in
the way, but we can move that to our temporary tower (Figure
16-12).
FIGURE 16-12
Clearing a path for our largest disk
FIGURE 16-13
At this point, we have only two disks in play. They are both in
our temporary tower. What we do now is no different than
what we started out doing earlier. We need to move our largest
disk to the destination tower. This time around, that disk is our
second one because our third disk is already safe at home in the
destination tower. You may start to see a pattern emerging.
FIGURE 16-15
We are now done moving three disks from our starting tower to
the destination tower. We can repeat all of these steps for more
disks, but we’ve seen all the interesting details to note about
this puzzle by now. Now let’s look more formally at all that this
puzzle has going on and figure out how we can get our
computers to solve it.
The Algorithm
1. Move the top N-1 disks from the starting tower to the
temporary tower.
2. Move the bottommost (aka Nth) disk from the starting tower
to the destination tower.
3. Move the remaining N-1 disks from the temporary tower to
the destination tower.
Algorithm explained
In this case, for just this move, our starting tower is really the
temporary tower. The destination tower remains the same, but
you can imagine the destination might be our starting tower in
some intermediate step. It is this fluidity in our tower roles that
makes it difficult for us to mentally make sense of this puzzle!
But it is exactly this fluidity that makes solving it using code
much easier as well.
The Code Solution
var numberOfDisks = 3;
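The rest of the solution looks something like this sketch (the function and parameter names are my own):

```javascript
var numberOfDisks = 3;

function solveTowers(disks, start, temp, destination) {
  if (disks === 0) {
    return;
  }
  // Step 1: Move the top N-1 disks from the starting tower to the
  // temporary tower, using the destination as scratch space
  solveTowers(disks - 1, start, destination, temp);
  // Step 2: Move the bottommost (aka Nth) disk to the destination
  console.log("Move disk " + disks + " from " + start + " to " + destination);
  // Step 3: Move the N-1 disks from the temporary tower to the
  // destination, using the starting tower as scratch space
  solveTowers(disks - 1, temp, start, destination);
}

solveTowers(numberOfDisks, "Tower 1", "Tower 2", "Tower 3");
```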
If you run this code for the three disks, your console will look
like Figure 16-18.
FIGURE 16-18
If you follow the path the output displays, you’ll see what our
code does to ultimately solve the puzzle for three disks. If you
change the value for numberOfDisks from 3 to another
(larger) number, you’ll see a lot more stuff getting printed to
your console. If you plot the path shown in the console, you’ll
again see what the solution looks like and the path each disk
took in getting there. What we’ve just done is looked at the full
code needed to solve our monks’ Towers of Hanoi puzzle. We
aren’t done yet, though. Let’s look at this solution in greater
detail for a few moments.
If you take a quick glance at the code, you can tell that our
solution is a recursive one:
var numberOfDisks = 3;
I get that this doesn’t look very nice, but take a moment to
follow through with what is going on. Pay special attention to
how we swapped the values for where a disk needs to end up
by jumping between the starting, temporary, and destination
towers. The end result of all of this is still the same: our disks
move from their starting point to the destination without
breaking those annoying rules.
0 disks: 0 moves
1 disk: 1 move
2 disks: 3 moves
3 disks: 7 moves
4 disks: 15 moves
1. Move the top N-1 disks from the starting tower to the
temporary tower.
2. Move the bottommost (aka Nth) disk from the starting tower
to the destination tower.
3. Move the remaining N-1 disks from the temporary tower to
the destination tower.
Steps 1 and 3 take Tn-1 moves each. Step 2 takes just 1 move. We
can state all of this as:
Tn = 2Tn-1 + 1
T1 = 2T0 + 1 = 2(0) + 1 = 1
T2 = 2T1 + 1 = 2(1) + 1 = 3
T3 = 2T2 + 1 = 2(3) + 1 = 7
This seems to check out, so let’s prove that this form maps to the
Tn = 2^n − 1 equation figured out earlier. Let’s assume that this
formula holds for n − 1. This would mean that our equation
could be rewritten as Tn-1 = 2^(n-1) − 1.
Tn = 2Tn-1 + 1
Tn = 2(2^(n-1) − 1) + 1
Tn = 2(2^(n-1)) − 2 + 1
Tn = 2^(n-1+1) − 1
Tn = 2^n − 1
This shows that the answer we came up with earlier holds
for all values of n where n is 1 or greater. This is a less rigorous
form of an induction proof that doesn't dot all the i's and cross
the t's, so don't use it as the proof if you are asked to formally
prove it.
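We can also check the closed form mechanically. The sketch below (the function name is ours, not the book's) unrolls the T(n) = 2T(n-1) + 1 recurrence in a loop and prints each value next to 2^n − 1:

```javascript
// Unroll the recurrence T(n) = 2T(n-1) + 1 with T(0) = 0
// and compare each value against the closed form 2^n - 1.
function movesByRecurrence(n) {
  let t = 0; // T(0) = 0: zero disks take zero moves
  for (let i = 1; i <= n; i++) {
    t = 2 * t + 1;
  }
  return t;
}

for (let n = 0; n <= 10; n++) {
  console.log(n, movesByRecurrence(n), 2 ** n - 1);
}
```

The two columns agree for every n, which is the numerical version of the induction argument above.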
Conclusion
Do you know why the monks were moving 64 disks in the first
place? They believed that the world would end once the last
disk was placed in its rightful location. If that were true, how
long do we all have? Using the formula we have for the number
of moves, and knowing from legend that each move takes one
second, how long will our monks take to complete the puzzle?
Unfortunately for them, using the 2^64 − 1 formula, the amount
of time it will take them is somewhere around 585 billion years.
That’s good for us, though! To learn more about the history of
this puzzle and the French mathematician Édouard Lucas who
actually introduced it to everyone, visit
https://en.wikipedia.org/wiki/Towers_of_Hanoi.
17
FIGURE 17-1
Onward!
Linear Search
FIGURE 17-2
FIGURE 17-3
With linear search, we start at the beginning with the first item
(aka array index position 0) and ask ourselves this question: Is
the value at this location the same as what I am looking for?
For our example, we check whether our first item, 5, is the same
as 3—which we know isn’t the case (Figure 17-4).
FIGURE 17-4
FIGURE 17-6
JavaScript Implementation
If the item we are looking for is not found, our code returns a
-1 :
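As a reminder of what that looks like, here is a minimal linear search sketch consistent with that behavior (the book's exact listing may differ slightly):

```javascript
// Walk the array front to back; return the index of the
// first match, or -1 if the target never shows up.
function linearSearch(collection, target) {
  for (let i = 0; i < collection.length; i++) {
    if (collection[i] === target) {
      return i;
    }
  }
  return -1;
}

let data = [5, 8, 3, 9, 1, 7, 3, 2, 4];
console.log(linearSearch(data, 3)); // 2: the first match wins
console.log(linearSearch(data, 6)); // -1: not found
```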
Beware of Duplicates
Runtime Characteristics
Our linear search algorithm runs in O(n) linear time. The best-
case scenario is when the item we are looking for happens to be
the first item in our collection of data. In this case, we can just
stop after reaching the first item. The worst-case scenario
happens in one of two cases:
Note
function global_linear_search(collection, target) {
  let foundPositions = [];
  // Record every position where the target appears
  for (let i = 0; i < collection.length; i++) {
    if (collection[i] === target) {
      foundPositions.push(i);
    }
  }
  if (foundPositions.length > 0) {
    return foundPositions;
  } else {
    // No items found
    return -1;
  }
}
let data = [5, 8, 3, 9, 1, 7, 3, 2, 4,
Conclusion
18
Onward!
FIGURE 18-2
A sorted list of items
Later, we’ll look into the various ways we have to properly take
an unsorted collection and sort the items inside it. For now, let’s
keep it simple and ensure that the items we throw into our
binary search algorithm are already sorted.
With the paperwork out of the way, the first real thing we do is
find the middle element of our entire collection and check
whether its value is the target we are looking for. For our
collection, the middle element is 32 (Figure 18-3).
FIGURE 18-3
FIGURE 18-4
The middle element is easy to spot when we have an odd number of items
For even numbers of items, the middle item is left of the midpoint
Later on, we’ll look at a more formal way of finding the middle
element that takes our hand-wavy explanation into something
more concrete. For now, we are good!
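Concretely, the usual way to compute that middle position is to average the first and last index positions and round down, which is exactly what places the middle item left of center when the number of items is even. A tiny sketch:

```javascript
// Middle index of the range [start, end], rounding down.
// For an even number of items this lands left of center.
function middleIndex(start, end) {
  return Math.floor((start + end) / 2);
}

console.log(middleIndex(0, 6)); // 3: exact middle of 7 items
console.log(middleIndex(0, 7)); // 3: left of center for 8 items
```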
OK, where were we? Yes! We need to check whether our middle
element contains the target value of 60 that we are looking for.
We can tell that 32 is not equal to 60. What we do next is the
famed division operation that binary search is known for.
Dividing FTW!
With our middle element not matching our target value, our
next step is to figure out where to go and continue our search.
We put our finger on the middle element and mentally divide
our list into a left half and a right half (Figure 18-7).
FIGURE 18-7
For this next step, we ask ourselves whether the value we are
looking for (60) is greater than or less than the value in our
current middle element (32):
FIGURE 18-9
With only the right half of our collection in play, we repeat our
earlier steps. We find the middle element, check whether the
middle element’s value matches the target value we are looking
for, and if the value doesn’t match, divide and decide whether
to go deeper into the remaining left half or right half of our
collection.
FIGURE 18-10
We look for the middle element in the right half of the array
The middle element value is 71, and it isn’t the 60 value we are
looking for. Next, we check whether 71 is greater than or less
than our target 60 value. Because 71 is greater than 60, it means
the half of the collection we want to focus on is the left half
(Figure 18-11).
FIGURE 18-11
FIGURE 18-12
Our 40 value is not the same as 60. Because our current middle
point value of 40 is less than 60, we focus on the right half
(Figure 18-13).
FIGURE 18-13
We only have one item at this point, and this item will also be
our middle element for the next step (Figure 18-14).
FIGURE 18-14
If we take all of our many words and visuals from the previous
section and simplify how binary search works, we can land on
these steps:
Iterative Approach
// Iterative Approach
function binarySearch(arr, val) {
  let start = 0;
  let end = arr.length - 1;
  while (start <= end) {
    let middle = Math.floor((start + end) / 2);
    if (arr[middle] === val) return middle;
    if (arr[middle] < val) start = middle + 1;
    else end = middle - 1;
  }
  return -1;
}
Recursive Approach
The values for left and right are the corresponding 5 and 9
index positions of our array, so if we substitute in those values
and calculate the middle point, the earlier expression will look
like Figure 18-16.
FIGURE 18-16
Our middle index position for this region is 7, and the value
here is 71. A trippy detail to note is that, even though we are
examining only a subset of our collection, our index positions
are relative to the entire collection itself.
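Putting these pieces together, a recursive binary search can be sketched as follows, passing the left and right boundaries along with each call. The function name and the sample array are ours, chosen so the index positions mirror the walkthrough (32 in the middle, 71 at index 7, and so on):

```javascript
// Recursive binary search over a sorted array. The left and
// right indices are always relative to the full array.
function binarySearchRecursive(arr, val, left = 0, right = arr.length - 1) {
  if (left > right) {
    return -1; // The range is empty: the value isn't here
  }
  let middle = Math.floor((left + right) / 2);
  if (arr[middle] === val) {
    return middle;
  } else if (arr[middle] < val) {
    return binarySearchRecursive(arr, val, middle + 1, right);
  } else {
    return binarySearchRecursive(arr, val, left, middle - 1);
  }
}

let sorted = [5, 8, 14, 23, 32, 40, 60, 71, 84, 92];
console.log(binarySearchRecursive(sorted, 60)); // 6
console.log(binarySearchRecursive(sorted, 13)); // -1
```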
Runtime Performance
FIGURE 18-17
In the first step, we work with the full n items. At the next step,
we work with n/2 items (Figure 18-18).
FIGURE 18-18
Assuming we never find the value we are looking for (or the
value we are looking for is the very last item), this pattern will
keep going where each step discards another half of the items
from the previous step (Figure 18-19).
FIGURE 18-19
We keep going until we reach the last step, where all we are left
with is a single element. There is a pattern here. We can see this
pattern by observing the number of elements in play at each
step (Figure 18-20).
FIGURE 18-20
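Counting those halvings directly shows the logarithmic pattern: starting from n items, the number of steps until a single item remains is about log2(n). A quick sketch (the function name is ours):

```javascript
// Count how many times we can halve n items before
// a single item remains.
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

console.log(halvingSteps(8));    // 3
console.log(halvingSteps(1024)); // 10
```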
FIGURE 18-21
19
FIGURE 19-1
This nice and easy approach doesn’t work with trees (Figure 19-
2).
FIGURE 19-2
Onward!
Breadth-First Traversal
FIGURE 19-3
Tree levels
FIGURE 19-4
We continue exploring
We explore C next
We then move to the E node, the F node, and the G node and
explore them to see if they have any children. They don’t have
any children (Figure 19-9).
FIGURE 19-9
At this point, we are done exploring one more row. Now, it’s
time to go into the next row and see what is going on with the H
and I nodes. We find that neither H nor I contains children.
There are no more nodes to discover and explore, so we have
reached the end of the line and have fully explored (aka
traversed) our tree (Figure 19-10).
FIGURE 19-10
Depth-First Traversal
Let’s walk through what this looks like. At the top, we have our
root node (Figure 19-12).
FIGURE 19-12
FIGURE 19-14
Backtracking
FIGURE 19-22
FIGURE 19-24
FIGURE 19-25
With DFS, the order in which we explore is different and is reflected in our general
approach
Our D node goes into the explored collection at the end, and we
discover it has two child nodes: H and I. Because we add the
children from right to left, we add our I node to the end of our
discovered collection first. We next add the H node to the end of
our discovered collection, which ends this step of our traversal.
Our next step is to continue exploring, and (you guessed it) we
pick the last item in our discovered collection. This process
keeps repeating until we have no more nodes to discover.
If we had to summarize the behavior for our depth-first
approach, we would add newly discovered nodes to the end of
our discovered collection. The next node we explore will also
come from the end of our discovered collection. This is the
behavior of a stack. Items are removed from the back. Items
are added to the back as well.
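In JavaScript, a plain array gives us that stack behavior directly: push adds to the back, and pop removes from the back. A tiny sketch of the discovered collection behaving as a stack:

```javascript
// The discovered collection as a stack: newly found nodes go
// on the back, and the next node explored comes off the back.
let discovered = [];
discovered.push("I"); // right child discovered first
discovered.push("H"); // left child discovered second
console.log(discovered.pop()); // "H": last in, first out
console.log(discovered.pop()); // "I"
```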
class Node {
constructor(data) {
this.data = data;
this.left = null;
this.right = null;
}
}
rootNodeA.left = nodeB;
rootNodeA.right = nodeC;
nodeB.left = nodeD;
nodeB.right = nodeE;
nodeC.left = nodeF;
nodeC.right = nodeG;
nodeD.left = nodeH;
nodeD.right = nodeI;
We use this tree for both our examples when testing our
breadth-first and depth-first traversal implementations. The
most important thing to note is that the root node for our tree is
referenced by the rootNodeA variable. All of the child nodes
will follow from there.
function breadthFirstTraversal(root) {
  if (!root) {
    return;
  }
  let discovered = new Queue();
  let explored = [];
  discovered.enqueue(root);
  while (discovered.length > 0) {
    let current = discovered.dequeue();
    explored.push(current.data);
    if (current.left) {
      discovered.enqueue(current.left);
    }
    if (current.right) {
      discovered.enqueue(current.right);
    }
  }
  return explored;
}
<script src="https://www.kirupa.com/js/queue_v1.js"></script>
Can you guess what we'll see when we examine the output of
this code? It will be all of our tree's nodes listed in the order in
which they were explored using our breadth-first traversal (Figure 19-
26).
FIGURE 19-26
function depthFirstTraversal(root) {
  if (!root) {
    return;
  }
  let discovered = new Stack();
  let explored = [];
  discovered.push(root);
  while (discovered.length > 0) {
    let current = discovered.pop();
    explored.push(current.data);
    if (current.right) {
      discovered.push(current.right);
    }
    if (current.left) {
      discovered.push(current.left);
    }
  }
  return explored;
}
<script src="https://www.kirupa.com/js/stack_v1.js"></script>
FIGURE 19-27
Our DFS output
20
FIGURE 20-1
FIGURE 20-2
Onward!
Our map
Our goal is to start from our starting point and explore all of the
places in the graph. At first, this will look a bit like the traversal
examples we saw earlier. We’ll start here and shift to how it
applies to search as the chapter goes on. We’ll use both a DFS
approach and a BFS approach for our exploration. By the time
we’re done, we’ll be able to clearly see how these two
approaches differ!
Depth-First Search Overview
DFS is like exploring the map by picking one location and going
as far as possible along a single path before backtracking and
trying another path (Figure 20-4).
FIGURE 20-4
It’s like taking one road and following it until we can’t go any
further, then going back and trying a different road. We keep
doing this until we have explored all possible paths.
FIGURE 20-5
As you can see, the end result of using either DFS or BFS is that
we explore all the interesting landmarks. The gigantic detail lies
in how we do this exploration. Both DFS and BFS are quite
different here, and we go beyond the generalities and get more
specific about how they work in the next section.
To more deeply understand how DFS and BFS work, let’s work
with a more realistic graph example. Our example graph looks
like Figure 20-6.
FIGURE 20-6
FIGURE 20-7
Let’s take a quick timeout and call out two things here:
FIGURE 20-8
We discover unexplored children
FIGURE 20-9
FIGURE 20-10
FIGURE 20-11
Note that we didn’t add node E to the end of our discovered list.
We added it to the front, and this ensures that this is the node
we explore next. This is an important implementation detail of
the DFS approach that we should keep in mind.
FIGURE 20-12
FIGURE 20-13
FIGURE 20-14
FIGURE 20-15
FIGURE 20-17
FIGURE 20-18
Exploring neighbors
Exploring node B
Exploring node C
FIGURE 20-21
We are taking a different path here than what we did with DFS earlier
At this point, node D moves into the explored list, and our
discovered list now contains nodes E, F, and G.
The next node we explore is node E (Figure 20-22).
FIGURE 20-22
FIGURE 20-23
Node D has already been explored, but node H is new. Let’s add
it to the end of our discovered list and move on to node G
(Figure 20-24).
FIGURE 20-24
FIGURE 20-27
The path we took with BFS
A JavaScript Implementation
Now that we have seen in great detail how DFS and BFS work to
explore the nodes in a graph, let’s shift gears and look at how
both of these exploration approaches are implemented. We are
going to build on top of the Graph class we looked at earlier
when looking specifically at the graph data structure, so there is
a lot of code that is familiar. Some new code (which is
highlighted) implements what we need to have DFS and BFS
working:
class Graph {
constructor() {
// Map to store nodes and their adjacent node
this.nodes = new Map();
this.nodes.delete(node);
return false;
}
return this.isDirected;
}
getExploredNodes() {
return this.#explored;
}
//
// Depth First Search (DFS)
//
dfs(startingNode) {
// Reset to keep track of explored nodes
this.#explored = new Set();
#dfsHelper(node) {
// Mark the current node as explored
this.#explored.add(node);
//
// Breadth First Search (BFS)
//
bfs(startingNode) {
// Reset to keep track of explored nodes
this.#explored = new Set();
<script src="https://www.kirupa.com/js/queue_v1.js"></script>
graph.addNode("A");
graph.addNode("B");
graph.addNode("C");
graph.addNode("D");
graph.addNode("E");
graph.addNode("F");
graph.addNode("G");
graph.addNode("H");
graph.addEdge("A", "B");
graph.addEdge("A", "C");
graph.addEdge("A", "D");
graph.addEdge("C", "E");
graph.addEdge("D", "E");
graph.addEdge("D", "F");
graph.addEdge("D", "G");
graph.addEdge("F", "H");
console.log("DFS:");
graph.dfs("A"); // Perform DFS starting from node
console.log(graph.getExploredNodes());
console.log("BFS:");
graph.bfs("A"); // Perform BFS starting from node
console.log(graph.getExploredNodes());
When you run this code, pay attention to the console output
where we print the final explored node for both our DFS and
BFS approaches. Notice that the output matches what we
manually walked through in the previous sections.
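The class listing above is abbreviated, so here is a compact, self-contained sketch of the same two traversals over an adjacency-list graph. It isn't the book's exact Graph class, but it follows the same rules (DFS takes the next node from the back of the work list; BFS takes it from the front), and it wires up the same example edges. Note that the exact visit order can differ from the walkthrough depending on how ties between neighbors are broken:

```javascript
// Minimal adjacency-list graph with DFS and BFS exploration.
// Edges are undirected, matching the example in this chapter.
function buildGraph(edges) {
  const adjacency = new Map();
  for (const [a, b] of edges) {
    if (!adjacency.has(a)) adjacency.set(a, []);
    if (!adjacency.has(b)) adjacency.set(b, []);
    adjacency.get(a).push(b);
    adjacency.get(b).push(a);
  }
  return adjacency;
}

function dfs(adjacency, start) {
  const explored = [];
  const seen = new Set([start]);
  const stack = [start];
  while (stack.length > 0) {
    const node = stack.pop(); // take from the back: go deep
    explored.push(node);
    for (const neighbor of adjacency.get(node)) {
      if (!seen.has(neighbor)) {
        seen.add(neighbor);
        stack.push(neighbor);
      }
    }
  }
  return explored;
}

function bfs(adjacency, start) {
  const explored = [];
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift(); // take from the front: go wide
    explored.push(node);
    for (const neighbor of adjacency.get(node)) {
      if (!seen.has(neighbor)) {
        seen.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  return explored;
}

const graph = buildGraph([
  ["A", "B"], ["A", "C"], ["A", "D"],
  ["C", "E"], ["D", "E"], ["D", "F"],
  ["D", "G"], ["F", "H"],
]);
console.log("DFS:", dfs(graph, "A"));
console.log("BFS:", bfs(graph, "A"));
```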
Implementation Detail
Performance Details
There is one more thing before we wrap up here, and that has
to do with how efficient both DFS and BFS are when it comes to
exploring a graph.
DFS:
Runtime Complexity: The runtime complexity of DFS
depends on the representation of the graph and the
implementation. In the worst-case scenario, where every
node and edge is visited, DFS has a time complexity of
O(|N| + |E|), where |N| represents the number of nodes
and |E| represents the number of edges in the graph.
Memory Complexity: The memory complexity of DFS is
determined by the maximum depth of recursion, which is
the approach our implementation here takes. In the worst-
case scenario, where the graph forms a long path, DFS may
require O(|N|) space for the call stack.
BFS:
Runtime Complexity: The runtime complexity of BFS, just
like with DFS, is also influenced by the graph
representation and the implementation. In the worst-case
scenario, where every node and edge is explored, BFS has
a time complexity of O(|N| + |E|).
Memory Complexity: The memory complexity of BFS
primarily depends on the space required to store the
visited nodes and the queue used for traversal. In the
worst-case scenario, where the entire graph needs to be
explored, BFS may require O(|N|) space.
Conclusion
21
Quicksort
Onward!
A Simple Look
To start things off, imagine that the grid of squares in Figure 21-
1 represents the numbers we want to sort.
FIGURE 21-1
At first glance, how these three steps help us sort some data
may seem bizarre, but we see shortly how all of this ties
together.
Starting at the top, because this is our first step, the region of
values we are looking to sort is everything. The first thing we do
is pick our pivot, the value at the middle position, as shown in
Figure 21-2.
FIGURE 21-2
Choosing a pivot
We can pick our pivot from anywhere, but all the cool kids pick
(for various good reasons) the pivot from the midpoint. Since
we want to be cool as well, that’s what we’ll do. Quicksort uses
the pivot value to order items in a very crude and basic way.
From quicksort’s point of view, all items to the left of the pivot
value should be smaller, and all items to the right of the pivot
value should be larger, as highlighted by Figure 21-3.
FIGURE 21-3
Items less than our pivot and items greater than our pivot
There are a few things to note here. First, notice that all items to
the left of the pivot are smaller than the pivot. All items to the
right of the pivot are larger than the pivot. Second, these items
also aren’t ordered. They are just smaller or larger relative to
the pivot value, but they aren’t placed in any ordered fashion.
Once all of the values to the left and right of the pivot have been
properly placed, our pivot value is considered to be sorted.
What we just did is identify a single pivot and rearrange values
to the left or right of it. The end result is that we have one
sorted value. There are many more values to sort, so we repeat
the steps we just saw on the unsorted regions.
Two pivots!
In each unsorted section, we pick our pivot value first. This will
be the value at the midpoint of the values in the section. Once
we have picked our pivot, it is time to do some rearranging, as
shown in Figure 21-6.
FIGURE 21-6
Rearranging values
Notice that we moved values smaller than our pivot value to the
left. Values greater than our pivot were thrown over the fence
to the right. We now have a few more pivot values that are in
their final sorted section, and we have a few more unsorted
regions that need the good old quicksort treatment applied to
them. If we speed things up a bit, Figure 21-7 shows how each
step will ultimately play out.
FIGURE 21-7
FIGURE 21-8
If we take many steps back, what we did here was pick a pivot
and arrange items around it based on whether the item is less
than or greater than our current pivot value. We repeated this
process for every unsorted section that came up, and we didn’t
stop until we ran out of items to process.
FIGURE 21-9
Another example
FIGURE 21-10
Once the pivot has been picked, the next step is to move smaller
items to the left and larger items to the right of the pivot (Figure
21-11).
FIGURE 21-11
FIGURE 21-13
Rinse and repeat
The end result is that our left half is now semi-ordered and we
have a smaller range of values left to arrange. Let’s jump over
to the right half that we left alone after the first round of
reorderings and go mess with it (Figure 21-14).
FIGURE 21-14
Let’s rinse and repeat our pivot and reordering steps on this
side of our input (Figure 21-15).
FIGURE 21-15
All of these words and diagrams are only helpful for people like
you and me. Our computers have no idea what to do with all of
this, so that means we need to convert everything we know into
a form that computers understand. Before we go all out on that
quest, let’s meet everyone halfway by looking at some
pseudocode (not quite real code, not quite real English) first.
Take a few moments to walk through how this code might work
and how it might help you to sort an unsorted list of data.
Turning all of this pseudocode into real code, we have the
following:
function quickSortHelper(arrayInput, left, right) {
  let i = left;
  let j = right;
  // Pick the pivot from the midpoint
  let pivot = arrayInput[Math.floor((left + right) / 2)];
  // Loop
  while (i <= j) {
    while (arrayInput[i] < pivot) {
      i++;
    }
    while (arrayInput[j] > pivot) {
      j--;
    }
    // Swap
    if (i <= j) {
      let tempStore = arrayInput[i];
      arrayInput[i] = arrayInput[j];
      arrayInput[j] = tempStore;
      i++;
      j--;
    }
  }
  if (left < j) {
    quickSortHelper(arrayInput, left, j);
  }
  if (i < right) {
    quickSortHelper(arrayInput, i, right);
  }
  return arrayInput;
}

function quickSort(input) {
  return quickSortHelper(input, 0, input.length - 1);
}
The code we see here is largely identical to the pseudocode we
saw earlier. The biggest change is that we have a
quickSortHelper function to deal with specifying the array,
left, and right values. This makes the call to the quickSort
function very clean. You just specify the array.
alert(myData);
Performance Characteristics
Time Complexity
Space Complexity
Stability
Conclusion
Well, you have reached the end of this dive into one of the
fastest sort algorithms. Will all of this knowledge help you out
in real (nonacademic) life? I highly doubt it. Almost all popular
programming languages have their own built-in sort
mechanism that you can use. Many are already based on
quicksort (or a highly optimized and specialized version of it),
so the performance gains you will see by using your own
version of quicksort compared to using a built-in sort approach
will be zero.
TABLE 21-2 Sorting Algorithms and Their Performance and Memory Characteristics
Name            Best   Average   Worst   Memory
Selection sort  n²     n²        n²      1
Insertion sort  n      n²        n²      1
And with that, you are free to go and use your newfound
knowledge to sort all sorts of things really, REALLY quickly.
22
Bubblesort
Onward!
FIGURE 22-1
Our unsorted list of numbers
FIGURE 22-2
Comparison ftw!
The first two numbers are compared. Then the next two
numbers are compared. Then the next two, and the next two,
and so on. You get the picture. The comparison it performs is to
see if the first number is smaller than the second number. If the
first number happens to be bigger, then the first and second
numbers are swapped. Let’s walk through this briefly.
In our example, the first comparison will be between the 6 and
the 2 (Figure 22-3).
FIGURE 22-3
The first number is not smaller than the second one—that is, 6
is not less than 2. What you do in this situation is swap the
numbers so that your first number is always smaller than the
second number (Figure 22-4).
FIGURE 22-4
The 6 is not less than 0, so another swap takes place (Figure 22-
6).
FIGURE 22-6
When you reach the last number, you go back to the beginning
and repeat this whole process again. This is because, as you can
see, your numbers are still not fully sorted. You repeat this
painfully time-consuming process over and over and over again
until you get to the point where all of your numbers are sorted
perfectly (Figure 22-8).
FIGURE 22-8
Walkthrough
FIGURE 22-9
Now, we start at the beginning and do our old song and dance
again (Figure 22-11).
FIGURE 22-11
At this point, if you look at the results of the last step, our
numbers are fully sorted. To us humans, we would call it a
night and take a break. Bubblesort doesn’t know that the
numbers are sorted just yet. It needs to run through the
numbers one more time to realize, when no swaps take place,
that its job is done (Figure 22-12).
FIGURE 22-12
The Code
Now that you’ve seen how bubblesort operates, let’s take a look
at one implementation of it:
function bubbleSort(input) {
let swapSignal = true;
while (swapSignal) {
swapSignal = false;
for (let i = 0; i < input.length - 1; i++) {
if (input[i] > input[i + 1]) {
let temp = input[i];
input[i] = input[i + 1];
input[i + 1] = temp;
swapSignal = true;
}
}
}
}
let myData = [6, 2, 0, 9, 1, 7, 4, 4, 8, 5, 3];
bubbleSort(myData);
console.log(myData);
If you walk through this code, everything you see should map to
what we looked at in the previous sections. The main thing to
call out is the swapSignal variable that is used to indicate
whether bubblesort has gone through these numbers without
swapping any values. Besides that one thing of note, everything
else is just simple array and for loop tomfoolery.
Conclusion
As you’ve seen from the walkthrough, bubblesort is not very
efficient. For sorting four numbers, it took about 18 steps if you
count the shifting and comparison as two individual operations.
This was despite several numbers requiring no further sorting.
Name            Best   Average   Worst   Memory
Bubblesort      n      n²        n²      1
Selection sort  n²     n²        n²      1
Insertion sort  n      n²        n²      1
And with that, you are free to go and use your newfound
knowledge to sort all sorts of things really, REALLY slowly. If
you want to learn about a fast sorting algorithm that leaves
bubblesort behind in the dust, you should become friends with
quicksort.
Some Additional Resources
23
Insertion Sort
FIGURE 23-1
Onward!
The way insertion sort works is sorta kinda really cool. The best
way to understand it is by working through an example. We
formally describe the algorithm’s behavior a bit later. For our
example, as shown in Figure 23-2, our goal is to sort these bars
(aka our values) from shortest to tallest.
FIGURE 23-2
Our example
For the very first item in our collection, this question doesn’t
apply. There are no items that are already sorted, so we go
ahead and claim our first item as already being sorted (Figure
23-5).
FIGURE 23-5
It’s time for us to go to our next item, so move one item to the
right and mark it as active (Figure 23-6). This also happens to be
our first item in our unsorted region, so that’s another way to
refer to it.
FIGURE 23-6
We move on to our next item and repeat the steps we have been
following so far. We mark our third item (or the current first
unsorted item) as active (Figure 23-8).
FIGURE 23-8
In the case of this active item, we move it all the way to the
beginning of our sorted items region (Figure 23-10).
FIGURE 23-10
FIGURE 23-11
FIGURE 23-12
Moving on to our next active item and skipping a few steps, this
is a straightforward comparison where this active item is
already in the right spot given that it is now the largest sorted
item (Figure 23-13).
FIGURE 23-13
We are going to look at one more item before closing the book
on this example. Our next active item is shown in Figure 23-14.
FIGURE 23-14
FIGURE 23-17
Another example
FIGURE 23-18
Starting with our active number
FIGURE 23-19
Next, we move right and pick a new active number (Figure 23-
20).
FIGURE 23-20
FIGURE 23-21
If we turn this algorithm into code, what we’ll see will look as
follows:
function insertionSort(input) {
// Variable to store the current element being positioned
let activeNumber;
insertionSort(myinput);
alert(myinput);
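Since the listing above is abbreviated, here is a self-contained insertion sort sketch that follows the walkthrough: grow a sorted region on the left, and slide each new active number leftward until it finds its spot. The sample values are illustrative:

```javascript
// Insertion sort: everything to the left of i is already sorted.
function insertionSort(input) {
  for (let i = 1; i < input.length; i++) {
    // The current element being positioned
    let activeNumber = input[i];
    let j = i - 1;
    // Shift larger sorted items one slot to the right
    while (j >= 0 && input[j] > activeNumber) {
      input[j + 1] = input[j];
      j--;
    }
    input[j + 1] = activeNumber;
  }
  return input;
}

let myinput = [5, 1, 6, 12, 4, 3];
insertionSort(myinput);
console.log(myinput); // [1, 3, 4, 5, 6, 12]
```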
Performance Analysis
FIGURE 23-23
FIGURE 23-24
Now, it isn't all bad news for all you insertion sort aficionados,
though! It isn’t memory intensive at all. Insertion sort takes up
a constant amount of memory, so keep insertion sort at the top
of your pile if you need to sort numbers (slowly) but are
memory constrained.
Conclusion
Name            Best   Average   Worst   Memory
Bubblesort      n      n²        n²      1
Selection sort  n²     n²        n²      1
Insertion sort  n      n²        n²      1
Overall, there are better sorting algorithms to use. Unless you
are sorting a small quantity of numbers, or you really need to
take advantage of its sweet constant memory usage, it’s best to
stay as far away as possible from insertion sort.
24
Selection Sort
FIGURE 24-1
Onward!
The first item rarely stays the smallest item for long.
selection sort encounters an item that is smaller, this new item
becomes the new smallest item. As you can see, this happens
immediately in our example, for the next item is smaller than
the first item (Figure 24-4).
FIGURE 24-4
FIGURE 24-5
Selection sort goes through the entire list until it has selected
the smallest item. For this example, that’s the bar shown in
Figure 24-7.
FIGURE 24-7
Just like before, the new smallest number is the first item. As
selection sort goes through the unsorted items to find the
smallest item, that will change. To be more precise and
foreshadowy, it will change to the bar shown in Figure 24-11
after all of the unsorted items are examined.
FIGURE 24-11
The next step is for this item to be swapped with our first
unsorted item with the sorted region of our list getting one
more entry (Figure 24-12).
FIGURE 24-12
FIGURE 24-14
Two regions
For the most part, given how most list-like data types work in
many languages, placing the sorted items at the beginning is
straightforward to implement. Placing them at the end or
creating an entirely new sorted list requires a little extra effort
on your part. Pick whatever makes your life easier. The
performance and memory characteristics of all three
approaches are pretty similar, so you don’t have to factor those
in as part of your decision.
function selectionSort(input) {
for (let i = 0; i < input.length; i++) {
alert(myinput);
The JavaScript doesn’t veer too far from the English description
you saw in the previous two sections. The outer loop
represented by the i variable is responsible for going through
each item in the list, and its position marks the dividing line
between the sorted and unsorted regions of our input (Figure
24-17).
FIGURE 24-17
The i variable divides the sorted and unsorted regions of our input
if (smallestPosition != i) {
let temp = input[smallestPosition];
input[smallestPosition] = input[i];
input[i] = temp;
}
Just because I like to optimize some small details for easy wins,
I added a check to perform the swap only if our smallest item is indeed
different from the item we started off with. While that doesn't
happen often, it is worth adding the check to avoid some
unnecessary operations. You can safely skip that if statement
if you can sleep well at night without it.
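The full listing is abbreviated above, so here is a compact selection sort sketch matching the description: the outer loop's i divides the sorted region from the unsorted one, each pass selects the smallest unsorted item, and the swap happens only when it's actually needed. The sample values are illustrative:

```javascript
// Selection sort: each outer pass selects the smallest remaining
// item and swaps it to the front of the unsorted region.
function selectionSort(input) {
  for (let i = 0; i < input.length; i++) {
    let smallestPosition = i;
    for (let j = i + 1; j < input.length; j++) {
      if (input[j] < input[smallestPosition]) {
        smallestPosition = j;
      }
    }
    // Swap only if a smaller item was actually found
    if (smallestPosition != i) {
      let temp = input[smallestPosition];
      input[smallestPosition] = input[i];
      input[i] = temp;
    }
  }
  return input;
}

let myinput = [6, 2, 0, 9, 1, 7];
selectionSort(myinput);
console.log(myinput); // [0, 1, 2, 6, 7, 9]
```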
Conclusion
TABLE 24-1 Selection Sort versus the Other Types of Sort Algorithms by Speed and
Memory Characteristics

Name            Best   Average   Worst   Memory
Bubblesort      n      n²        n²      1
Selection sort  n²     n²        n²      1
Insertion sort  n      n²        n²      1
If I were you and looking for a slow sort algorithm that is easy
to implement, I would probably choose insertion sort over
selection sort any day of the week.
25
Mergesort
FIGURE 25-1
Dinosaurs doing dinosaur things. (Source: Winsor McCay, Gertie the Dinosaur,
animated short film, 1914. https://en.wikipedia.org/wiki/Gertie_the_Dinosaur#.)
Ever since then, mergesort (and variants of it!) can be seen
everywhere—ranging from sort implementations in the Perl,
Python, and Java languages to sorting data in tape drives. Okay,
maybe the tape drives bit isn’t relevant today, but mergesort
comes up in a lot of places due to its efficiency.
Onward!
FIGURE 25-2
FIGURE 25-3
We keep dividing these sections until we are left with just one
number in each section and can’t divide any further (Figure 25-
5).
FIGURE 25-5
FIGURE 25-6
We now repeat this process for the next two sections made up
of the numbers 4 and 1 (Figure 25-8).
FIGURE 25-8
Just like before, we combine the two sections into one section.
The sorting part is clearer this time around because the original
arrangement wasn’t already sorted. We start with a 4 and 1,
and the merged arrangement is 1 and 4. Pretty simple so far,
right?
The number 10 continues to be the odd one out and isn’t quite
in the right position to be sorted and merged, so we drag it
along for the next round (Figure 25-12).
FIGURE 25-12
This almost looks fully sorted! We have just one more round to
go, and to those of you deeply worried about the number 10 . . .
it makes the cut this time around (Figure 25-14).
FIGURE 25-14
When it comes to how much space it takes, things get a little less
rosy. Common mergesort implementations take up 2n space in
worst-case scenarios—which is not terrible, but it is something
to keep in mind if you are dealing with sorting within a fixed
region of limited memory.
The last detail is that mergesort is a stable sort. This means
that the relative order of items is maintained between the
original input and the sorted input. That’s a good thing if you
care about things like this.
function mergeSort(input) {
// Just a single lonely item
if (input.length < 2) {
return input;
}
// Divide
let mid = Math.ceil(input.length / 2);
let left = mergeSort(input.slice(0, mid));
let right = mergeSort(input.slice(mid));
// recursively sort and merge
return merge(left, right);
}
function merge(left, right) {
let result = [];
// Order the sublist as part of merging
while (left.length > 0 && right.length > 0) {
if (left[0] <= right[0]) {
result.push(left.shift());
} else {
result.push(right.shift());
}
}
// Add the remaining items to the result
while (left.length > 0) {
result.push(left.shift());
}
while (right.length > 0) {
result.push(right.shift());
}
// The sorted sublist
return result;
}
If you want to see this code in action, just call the mergeSort
function with an array of numbers as the argument:
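For example (repeating the two functions from the listing above so the snippet runs on its own; the sample values are illustrative):

```javascript
// Mergesort: recursively divide, then merge sorted sublists.
function mergeSort(input) {
  // Just a single lonely item
  if (input.length < 2) {
    return input;
  }
  // Divide
  let mid = Math.ceil(input.length / 2);
  let left = mergeSort(input.slice(0, mid));
  let right = mergeSort(input.slice(mid));
  // Recursively sort and merge
  return merge(left, right);
}

function merge(left, right) {
  let result = [];
  // Order the sublists as part of merging
  while (left.length > 0 && right.length > 0) {
    if (left[0] <= right[0]) {
      result.push(left.shift());
    } else {
      result.push(right.shift());
    }
  }
  // Add the remaining items to the result
  while (left.length > 0) {
    result.push(left.shift());
  }
  while (right.length > 0) {
    result.push(right.shift());
  }
  return result;
}

let myData = [10, 4, 1, 8, 3, 6];
console.log(mergeSort(myData)); // [1, 3, 4, 6, 8, 10]
```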
Conclusion
TABLE 25-1 Mergesort versus the Other Sorting Algorithms by Speed and Memory
Characteristics

Name            Best      Average   Worst     Memory
Mergesort       n log n   n log n   n log n   n
Bubblesort      n         n²        n²        1
Selection sort  n²        n²        n²        1
Insertion sort  n         n²        n²        1
26
Conclusion
If you are reading this, you have nearly reached the end of this
book! Congratulations. I hope the preceding chapters gave you
a better appreciation for how our computers represent data
and think (in their own 101010101 ways) through common
computer problems. Let’s wrap up all of the preceding content
by talking about how this book came about.
As was all the rage with the cool kids back in the early 2000s, I
majored in Computer Science. I probably shouldn’t have, but I
liked computers. I liked science. How difficult could this be?
Fast forward a few years into my undergrad program, and my
feelings about Computer and Science were at a low.
I hope this book hits the mark if you were looking for a
reimagined way of explaining very dry and boring
algorithms-related topics. In many ways, this is the book I wish
I had all those decades ago when I was learning about
algorithms and data structures.
Cheers,
Index
Numbers
JavaScript
arrays, implementing, 24–25
BFS, 300–306
binary searches, 250–253
binary search trees, 103–109
binary tree traversals, 273–274
breadth-first traversals, 274–276
depth-first traversals, 276–278
binary trees, 86–89
breadth-first traversals, 274–276
bubblesort, 333
depth-first traversals, 276–278
DFS, 300–306
Fibonacci sequences
calculating, 207–208
iteration, 215–216
recursive operations, 209–212
graphs, 192–196
hashtables (dictionaries), 148–150
heaps, 126–132
insertion sort, 349–351
linear search, 238–239
linked lists, 44–51
mergesort, 380–381
queues, 64–66
quicksort, 319–322
recursion
function calls, 202
terminating conditions, 203–205
selection sort, 366–369
stacks, 56–58
Towers of Hanoi, 229–232
tries (prefix trees), 173–179
K-L
maps
BFS, 281–283, 308
coding, 300–306
implementing, 306–307
JavaScript, 300–306
memory complexity, 307–308
overview, 284
performance, 307–308
runtime complexity, 307
walkthrough, 293–298
when to use, 298–300
DFS, 281–283, 308
coding, 300–306
implementing, 306–307
JavaScript, 300–306
memory complexity, 307
overview, 283
performance, 307
runtime complexity, 307
walkthrough, 285–291
when to use, 298–300
memoization, Fibonacci sequences, 213–215, 218
memory
arrays, 26–30
BFS, 307–308
binary search trees, 110–111
bubblesort, 324, 334, 353, 369, 382
DFS, 307
hashtables (dictionaries), 151–153
heapsort, 324, 334, 353, 369, 381
insertion sort, 324, 334, 353, 369, 382
memory overhead, queues, 67
mergesort, 324, 334, 353, 369, 381
queues, 67
quicksort, 324, 334, 353, 369, 381
selection sort, 324, 334, 353, 369, 382
stacks, 59
timsort, 324, 334, 353, 369, 382
mergesort, 371–372
2n space, 379
coding, 380–381
divide-and-conquer algorithms, 372
JavaScript, 380–381
memory, 324, 334, 353, 369, 381
operations, 379–380
performance, 324, 334
speed, 353, 369, 381
stable sorts, 379–380
tree depth, 371–372
walkthrough, 372–379
middle elements, binary search, 245–246, 252–253
mini-heaps, 115
O(1)—Constant Complexity, 13
O(2^n)—Exponential Complexity, 14
odd/even numbers, Big-O notation, 8–9
O(log n)—Logarithmic Complexity, 13
O(n log n)—Linearithmic Complexity, 14
O(n^2)—Quadratic Complexity, 14
O(n!)—Factorial Complexity, 14
O(n)—Linear Complexity, 13–14
searches
BFS, 281–283, 308
coding, 300–306
implementing, 306–307
JavaScript, 300–306
memory complexity, 307–308
overview, 284
performance, 307–308
runtime complexity, 307
walkthrough, 293–298
when to use, 298–300
binary searches, 32, 243, 257
coding, 250–253
dividing operations, 247–250
iteration, 250–251
JavaScript, 250–253
middle elements, 245–246, 252–253
operations, 244
recursive operations, 251
runtime, 254–257
sorted items, 244
walkthrough, 252–253
DFS, 281–283, 308
coding, 300–306
implementing, 306–307
JavaScript, 300–306
memory complexity, 307
overview, 283
performance, 307
runtime complexity, 307
walkthrough, 285–291
when to use, 298–300
global linear searches, 240–241
item searches in arrays, 22–23, 32
linear searches, 32, 235–236, 241
coding, 238–239
global linear searches, 240–241
implementing, 238–239
JavaScript, 238–239
runtime, 239
walkthrough, 236–238
queues, 66–67
search/contains operations, stacks, 59
selection sort, 355
coding, 366–369
implementing, 366–369
JavaScript, 366–369
memory, 324, 334, 353, 369, 382
performance, 324, 334
sorted regions, 355–356
speed, 353, 369, 382
walkthrough, 356–366
sibling nodes, 73
single child nodes, 99–100
singly linked lists, 42
skip lists, 44
sorted regions, selection sort, 355–356
sorting algorithms
bubblesort, 325
coding, 333
JavaScript, 333–334
memory, 324, 334, 353, 369, 382
performance, 324, 334
speed, 353, 369, 382
walkthrough, 326–332
heapsort
memory, 324, 334, 353, 369, 381
performance, 324, 334
speed, 353, 369, 381
insertion sort, 335–336
active numbers, 347
coding, 349–351
implementing, 349–351
JavaScript, 349–351
memory, 324, 334, 353, 369, 382
performance, 324, 334, 351–352
speed, 353, 369, 382
walkthrough, 336–348
mergesort, 371–372
2n space, 379
coding, 380–381
divide-and-conquer algorithms, 372
JavaScript, 380–381
memory, 324, 334, 353, 369, 381
operations, 379–380
performance, 324, 334
speed, 353, 369, 381
stable sorts, 379–380
tree depth, 371–372
walkthrough, 372–379
quicksort
coding, 319–322
divide-and-conquer algorithms, 309, 323–324
implementing, 319–322
JavaScript, 319–322
memory, 324, 334, 353, 369, 381
performance, 322, 324, 334
space complexity, 323
speed, 353, 369, 381
stability, 323
time complexity, 323
walkthrough, 310–319
selection sort, 355
coding, 366–369
implementing, 366–369
JavaScript, 366–369
memory, 324, 334, 353, 369, 382
performance, 324, 334
sorted regions, 355–356
speed, 353, 369, 382
walkthrough, 356–366
timsort
memory, 324, 334, 353, 369, 382
performance, 324, 334
speed, 353, 369, 382
space complexity
linked lists, 41
queues, 66–67
quicksort, 323
stacks, 58–59
space, performance, 30–31
speeds
bubblesort, 353, 369, 382
heapsort, 353, 369, 381
insertion sort, 353, 369, 382
mergesort, 353, 369, 381
quicksort, 353, 369, 381
selection sort, 353, 369, 382
timsort, 353, 369, 382
spell checking/correction, tries (prefix trees), 170
stability, quicksort, 323
stable sorts, mergesort, 379–380
stacks, 53–54, 59
behaviors, depth-first traversals, 273
coding, 56–58
defined, 54–55
implementing, 56–58
JavaScript, 56–58
LIFO, 55
peek operations, 59
performance, 58
memory, 59
runtime, 59
pop operations, 59
push operations, 59
search/contains operations, 59
space complexity, 58–59
time complexity, 58–59
Undo/Redo, 53
U-V
W-X-Y-Z
Code Snippets