Stacks, Queues and Linked Lists
WHAT IS AN ADT?
In computer science, an abstract data type (ADT) is a mathematical model for a certain class of data structures that have similar behavior.
It is a mathematically specified entity that defines a set of its instances, with:
i) a specific interface – a collection of signatures of operations that can be invoked on an instance.
ii) a set of axioms (pre- and post-conditions) that defines the semantics of the operations, i.e. what the operations do to the instances of the ADT.
What are the operations?
Construction of the instance
Access functions
Manipulation methods
ADTs
..allow us to talk at a higher level of abstraction
..they encapsulate the data and the algorithms that work on them.
Operations on Dynamic sets
Two types
i) queries
ii) modifying operations
List of operations
Search
Insert
Delete
Maximum
Minimum
Successor
Predecessor
Set operations
Dynamic if we can add or remove objects
What are the ‘methods’ in this ADT?
i) Create
ii) insert an element …. manipulation operation
iii) delete an element …. manipulation operation
iv) is in? – access (returns T or F)
Stacks
LIFO
Operations are push and pop
e.g. a stack of books
The four methods associated with stacks:
New()
Push(S: ADT, o: element) – pushes o onto S
Pop(S: ADT, o: element) – pops o from S
…what if S is empty?
Top(S: ADT) – returns the top element of the stack
…what if S is empty?
Some support methods:
size(S: ADT)
isEmpty(S: ADT)
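A minimal Python sketch of this stack interface (class and method names are illustrative, not from the slides); raising an exception is one possible answer to the "what if S is empty?" question:

```python
class Stack:
    """Array-backed stack: push/pop/top in O(1) amortized time."""

    def __init__(self):
        self._items = []          # New(): start with no elements

    def push(self, o):
        self._items.append(o)     # Push(S, o)

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()  # removes and returns the top element

    def top(self):
        if self.is_empty():
            raise IndexError("top of empty stack")
        return self._items[-1]    # returns the top without removing it

    def size(self):
        return len(self._items)

    def is_empty(self):
        return len(self._items) == 0
```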
Basic Operations
Construct an empty list
Determine whether or not empty
Insert an element into the list
Delete an element from the list
Traverse (iterate through) the list to:
modify
output
search for a specific value
copy or save
rearrange
Designing a List Class
Should contain at least the following function members
Isempty()
insert()
delete()
display()
Implementation involves
Defining data members
Defining function members from design phase
Array-Based Implementation of Lists
LINKED LISTS
A linked list is a data structure in which the objects are
arranged in a linear order.
Unlike an array, however, in which the linear order is
determined by the array indices, the order in a linked list is
determined by a pointer in each object.
Linked lists
The linked list uses dynamic memory allocation, that
is, it allocates memory for new list elements as needed.
A linked list is made up of a series of objects, called the nodes
of the list.
Because a list node is a distinct object (as opposed to
simply a cell in an array), it is good practice to make a
separate list node class.
[Figure: a singly linked list of nodes A → B → C → D.]
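A separate list-node class, as suggested above, might look like this in Python (a hypothetical minimal sketch):

```python
class Node:
    """One node of a singly linked list."""

    def __init__(self, element, next=None):
        self.element = element  # reference to the stored element
        self.next = next        # reference to the next node (None at the tail)
```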
Definition: A linked list is a collection of nodes that together form a linear ordering.
Each node stores a reference to an element and a reference to the next node; the list keeps a reference to the head node (and possibly the tail).
[Figure: nodes at memory addresses, each holding a pointer to an element (Toronto, Rome, Seattle, Baltimore) and a pointer to the next node; the last node’s next pointer is 0 (null) and head points to the first node.]
Inserting at the Head
1. Allocate a new node
2. Insert new element
3. Make the new node point to the old head
4. Update head to point to the new node
Removing at the Head
1. Update head to point to the next node in the list
2. Allow the garbage collector to reclaim the former first node
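A sketch of the two head operations just listed, using the Node class from the earlier sketch (the error handling for an empty list is an assumption, not from the slides):

```python
class SinglyLinkedList:
    def __init__(self):
        self.head = None
        self.size = 0

    def insert_at_head(self, element):
        # allocate a node holding the element, point it at the old head,
        # then update head to the new node
        self.head = Node(element, next=self.head)
        self.size += 1

    def remove_at_head(self):
        # advance head; the old first node becomes garbage
        if self.head is None:
            raise IndexError("remove from empty list")
        element = self.head.element
        self.head = self.head.next
        self.size -= 1
        return element
```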
Inserting at the Tail
1. Allocate a new node
2. Insert new element
3. Have the new node point to null
4. Have the old last node point to the new node
5. Update tail to point to the new node
Removing at the Tail
Removing at the tail of a singly linked list cannot be efficient!
There is no constant-time way to update the tail to point to the previous node.
Stack with a Singly Linked List
We can implement a stack with a singly linked list
The top element is stored at the first node of the list
The space used is O(n) and each operation of the Stack ADT takes O(1) time.
[Figure: the nodes hold the elements, with t referencing the top (first) node.]
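A possible linked-list stack along these lines, reusing the Node class from the earlier sketch:

```python
class LinkedStack:
    """Stack on a singly linked list: the top element sits at the first node."""

    def __init__(self):
        self._top = None
        self._size = 0

    def push(self, element):            # insert at the head: O(1)
        self._top = Node(element, next=self._top)
        self._size += 1

    def pop(self):                      # remove at the head: O(1)
        if self._top is None:
            raise IndexError("pop from empty stack")
        element = self._top.element
        self._top = self._top.next
        self._size -= 1
        return element
```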
Queue with a Singly Linked List
We can implement a queue with a singly linked list
The front element is stored at the first node
The rear element is stored at the last node
The space used is O(n) and each operation of the Queue ADT takes O(1) time.
[Figure: the nodes hold the elements, with f referencing the front node and r the rear node.]
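A matching linked-list queue sketch (again reusing the hypothetical Node class):

```python
class LinkedQueue:
    """Queue on a singly linked list: dequeue at the head, enqueue at the tail."""

    def __init__(self):
        self._front = None   # f in the figure
        self._rear = None    # r in the figure
        self._size = 0

    def enqueue(self, element):          # O(1): add at the rear
        node = Node(element)
        if self._rear is None:
            self._front = node
        else:
            self._rear.next = node
        self._rear = node
        self._size += 1

    def dequeue(self):                   # O(1): remove at the front
        if self._front is None:
            raise IndexError("dequeue from empty queue")
        element = self._front.element
        self._front = self._front.next
        if self._front is None:
            self._rear = None            # queue became empty
        self._size -= 1
        return element
```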
Doubly Linked List
A doubly linked list is often more convenient!
Nodes store:
an element
a link to the previous node (prev)
a link to the next node (next)
Special trailer and header nodes, which are sentinels or dummy nodes (they don’t store any element).
Insertion
We visualize operation insertAfter(p, X), which returns position q
[Figure: insertAfter(p, X) on the list A, B, C with p at B creates a new node holding X and returns its position q, giving A, B, X, C.]
Deletion
We visualize remove(p), where p == last()
[Figure: remove(p) with p == last() on the list A, B, C, D unlinks D, giving A, B, C.]
Sentinel Nodes
To simplify programming, two special nodes are added at both ends of the doubly-linked list.
Head and tail are dummy nodes, also called sentinels, and do not store any data elements.
Head: the header sentinel has a null prev reference (link).
Tail: the trailer sentinel has a null next reference (link).
What does a doubly-linked list object store?
1. A reference to the sentinel head-node;
2. A reference to the sentinel tail-node; and
3. A size counter that keeps track of the number of nodes in the list (excluding the two sentinels).
Empty Doubly-Linked List:
Using sentinels, we have no null links; instead, we have:
head.next = tail
tail.prev = head
Single Node List (size = 1):
This single node is both the first node and the last node:
the first node is head.next
the last node is tail.prev
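A doubly linked list with header/trailer sentinels and a size counter could be sketched as follows (names are illustrative; insert_after and remove mirror the operations visualized earlier):

```python
class DNode:
    def __init__(self, element=None, prev=None, next=None):
        self.element, self.prev, self.next = element, prev, next


class DoublyLinkedList:
    """Doubly linked list with header/trailer sentinels and a size counter."""

    def __init__(self):
        self.header = DNode()            # sentinel: stores no element
        self.trailer = DNode()           # sentinel: stores no element
        self.header.next = self.trailer  # empty list: head.next = tail
        self.trailer.prev = self.header  # and tail.prev = head
        self.size = 0                    # excludes the two sentinels

    def insert_after(self, p, element):
        """Insert element after node p and return the new node (position q)."""
        q = DNode(element, prev=p, next=p.next)
        p.next.prev = q
        p.next = q
        self.size += 1
        return q

    def remove(self, p):
        """Unlink node p and return its element; O(1) at either end."""
        p.prev.next = p.next
        p.next.prev = p.prev
        self.size -= 1
        return p.element
```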
Worst-case running time
In a doubly linked list:
+ insertion at head or tail is in O(1)
+ deletion at either end is in O(1)
– element access is still in O(n): we need to search for the element, i.e. the ‘key’.
Implement the following data types with a doubly linked list
Stack
Queue
Circular lists (no head or tail –just an entry point)
node
Parent
Child
Ancestor
Sibling
Tree
Nodes
Each node can have 0 or more children
A node can have at most one parent
Binary tree
Tree with 0–2 children per node
Leaf nodes
Height: the maximum level – so the tree in the previous slide has height ..
The degree of a node is the number of children it has.
Leaves have degree 0.
An ordered tree is a rooted tree in which the children of each node are ordered.
That is, if a node has k children, then there is a first child, a second child, …, and a kth child.
Organizational hierarchy
Chapters in a book
A binary tree is an ordered tree in which each node has at most two children, i.e. each node has degree at most 2.
For a binary tree of height h with all levels full:
number of internal nodes + number of leaves = (2^h − 1) + 2^h = 2^(h+1) − 1
How many internal nodes? Number of leaves − 1.
A binary tree has
..at most 2^i nodes at level i
..at most 1 + 2 + 2^2 + 2^3 + … + 2^(h−1) = 2^h − 1 nodes in levels 0 through h − 1
If a tree has n internal nodes, then n ≤ 2^h − 1.
Preorder: the root is visited first.
[Figure: a BST with keys J, E, T, and the example tree with root A, children B and C, their children D, E, F, G, and grandchildren H, I.]
Traversing a Tree: Preorder
[Animation frames omitted.] For the example tree the nodes are visited in the order A, B, D, E, H, C, F, G, I – Result: ABDEHCFGI.
Post-order
Visit the children and then the node.
Example: calculation of expenses (each total is computed from the children first).
Post-order algorithm: which child first? It is an ordered tree, so the left child comes first.
Binary tree post-order:
postOrder(v)
  if (v == null) then return
  else
    postOrder(v.leftChild())
    postOrder(v.rightChild())
    visit v
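The postorder pseudocode above, plus the matching preorder and inorder routines, as a Python sketch (the TreeNode class is an assumed minimal node, not from the slides):

```python
class TreeNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right


def postorder(v, visit):
    if v is None:
        return
    postorder(v.left, visit)   # left child first (it is an ordered tree)
    postorder(v.right, visit)
    visit(v.key)               # the node itself is visited last


def preorder(v, visit):
    if v is None:
        return
    visit(v.key)               # the node is visited first
    preorder(v.left, visit)
    preorder(v.right, visit)


def inorder(v, visit):
    if v is None:
        return
    inorder(v.left, visit)
    visit(v.key)               # the node is visited second, between its subtrees
    inorder(v.right, visit)
```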
Postorder traversal of the BST with keys J, E, T, A, H, M, Y (left child first): A H E M Y T J – the root is visited last.
[Figure: a second example tree with root a, children b and c; b has children d and e, c has child f; d has children g and h, e has child i, f has child j. Its postorder is g h d i e b j f c a.]
Traversing a Tree: Postorder
[Animation frames omitted.] For the example tree the result is DHEBFIGCA.
Binary tree – inorder traversal:
inOrder(v)
  if (v == null) then return
  else
    inOrder(v.leftChild())
    visit v
    inOrder(v.rightChild())
Inorder traversal of the BST with keys J, E, T, A, H, M, Y: A E H J M T Y – each node is visited second, between its left and right subtrees.
[Figure: example trees.]
Traversing a Tree: Inorder
[Animation frames omitted.] For the example tree the result is DBHEAFCGI.
Inorder Traversal
[Figure: for the tree with root a (children b, c; then d, e, f; then g, h, i, j) the inorder traversal is g d h b e i a f j c.]
Tree Traversals
Binary Tree Construction
Suppose that the elements in a binary tree are
distinct
Can you construct the binary tree from which a
given traversal sequence came?
When a traversal sequence has more than one
element, the binary tree is not uniquely defined
Therefore, the tree from which the sequence
was obtained cannot be reconstructed uniquely
Binary Tree Construction
[Figure: two different two-node trees can share the same postorder, and two different two-node trees can share the same level order.]
Preorder And Postorder
A tree with root a and a single child b has preorder = ab and postorder = ba whether b is the left child or the right child.
Preorder and postorder do not uniquely define a
binary tree.
Nor do preorder and level order (same example)
Nor do postorder and level order (same example)
Generating a tree
Pre-order : abcdfge
Post-order :cbfdgae
Inorder And Preorder
inorder = g d h b e i a f j c
preorder = a b d g h e i c f j
Scan the preorder left to right, using the inorder to separate left and right subtrees.
1. a is the root of the tree; gdhbei are in the left subtree; fjc are in the right subtree.
2. b is the next root; gdh are in its left subtree; ei are in its right subtree.
3. d is the next root; g is in its left subtree; h is in its right subtree.
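The scan just described can be coded directly; a sketch using the TreeNode class from the traversal example (all keys assumed distinct):

```python
def build_from_pre_in(preorder, inorder):
    """Rebuild a binary tree from its preorder and inorder sequences."""
    if not preorder:
        return None
    root_key = preorder[0]               # first preorder element is the root
    i = inorder.index(root_key)          # inorder splits left / right subtrees
    root = TreeNode(root_key)
    root.left = build_from_pre_in(preorder[1:i + 1], inorder[:i])
    root.right = build_from_pre_in(preorder[i + 1:], inorder[i + 1:])
    return root

# Example from the slides:
# build_from_pre_in(list("abdgheicfj"), list("gdhbeiafjc"))
# gives the tree whose postorder is g h d i e b j f c a.
```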
Question (coding): generate a tree from the given sequences and do a post-order traversal.
Special case: we can’t get a tree from pre-order and post-order alone.
But a special case is when every internal node has exactly two children.
Inorder And Postorder
Scan postorder from right to left using inorder to
separate left and right subtrees
inorder = g d h b e i a f j c
postorder = g h d i e b j f c a
Tree root is a;
gdhbei are in left subtree;
fjc are in right subtree
Traversal Applications
[Figure: the example tree a–j, and an expression tree with / at the root whose subtrees encode ((a + b) * (c − d)) and (e + f).]
Its postorder traversal is a b + c d − * e f + / – this gives the postfix form of the expression!
Do traversals
Draw a tree for ((2−1)−(3+(4×2))).
Do inorder, preorder and postorder traversals for the tree.
If we do a preorder traversal of the tree we get: − − 2 1 + 3 × 4 2
[Figure: the example tree a–j, whose inorder is g d h b e i a f j c.]
Inorder Of Expression Tree
[Figure: the expression tree for ((a + b) * (c − d)) / (e + f).]
Its inorder traversal is a + b * c − d / e + f – this gives the infix form of the expression (sans parentheses)!
Preorder Traversal-recursively
[Figure: preorder visits the node, then its left subtree, then its right subtree; for the example tree a–j the preorder is a b d g h e i c f j.]
Preorder Of Expression Tree
[Figure: the expression tree for ((a + b) * (c − d)) / (e + f).]
Its preorder traversal is / * + a b − c d + e f – this gives the prefix form of the expression!
Post order
Take a directory structure
And do the computation on the children first
Breadth-first traversal of a tree
[Animation frames omitted.] For the example tree the nodes are visited level by level – Result: ABCDEFGHI.
Level Order
Let t be the tree root;
while (t != null)
{
  visit t and put its children on a FIFO queue;
  remove a node from the FIFO queue and call it t;
  // remove returns null when the queue is empty
}
For the example tree a–j the level order is a b c d e f g h i j.
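The same level-order loop in Python, using a deque as the FIFO queue (a sketch, assuming the TreeNode class from earlier):

```python
from collections import deque


def level_order(root, visit):
    """Visit the nodes level by level using a FIFO queue."""
    if root is None:
        return
    queue = deque([root])
    while queue:
        t = queue.popleft()        # remove a node and call it t
        visit(t.key)               # visit t ...
        if t.left is not None:     # ... and put its children on the queue
            queue.append(t.left)
        if t.right is not None:
            queue.append(t.right)
```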
Inorder And Level Order
Scan level order from left to right using inorder to
separate left and right subtrees
inorder = g d h b e i a f j c
level order = a b c d e f g h i j
Tree root is a;
gdhbei are in left subtree;
fjc are in right subtree
Level order of a binary tree
Given a complete binary tree, level-by-level numbering gives the level-order numbering of all nodes.
[Figure: root Z is numbered 1; its children F and S are 2 and 3; G, J, H, Q are 4–7; X, C, P are 8–10.]
Prefix, Postfix, Infix
Notation
Infix Notation
To add A, B, we write
A+B
To multiply A, B, we write
A*B
The operators ('+' and '*') go in
between the operands ('A' and 'B')
This is "Infix" notation.
Prefix Notation
Instead of saying "A plus B", we
could say "add A,B " and write
+AB
"Multiply A,B" would be written
*AB
This is Prefix notation.
Postfix Notation
Another alternative is to put the
operators after the operands as in
AB+
and
AB*
This is Postfix notation.
Prefix = Polish notation
Postfix = reverse Polish notation
The terms infix, prefix, and postfix
tell us whether the operators go
between, before, or after the
operands.
The description "Polish" refers to the
nationality of logician Jan
Łukasiewicz, who invented (prefix)
Polish notation in the 1920s.
The reverse Polish scheme
was proposed in 1954 by Burks,
Warren, and Wright and was
independently reinvented by F. L.
Bauer and E. W. Dijkstra in the early
1960s to reduce computer memory
access and utilize the stack to
evaluate expressions.
In computer science, postfix notation
is often used in stack-
based and concatenative
programming languages
Reverse Polish notation
The algorithms and notation for this
scheme were extended
by Australian philosopher and
computer scientist Charles Hamblin in
the mid-1950s.
Parentheses
Evaluate 2+3*5.
+ First:
(2+3)*5 = 5*5 = 25
* First:
2+(3*5) = 2+15 = 17
Infix notation requires Parentheses.
What about Prefix Notation?
+ 2 * 3 5
= + 2 15 = 17
* + 2 3 5
= * 5 5 = 25
No parentheses needed!
Postfix Notation
2 3 5 * +
= 2 15 + = 17
2 3 + 5 *
= 5 5 * = 25
No parentheses needed here either!
Conclusion:
Infix is the only notation that
requires parentheses in order to
change the order in which the
operations are done.
Fully Parenthesized Expression
A FPE has exactly one set of
Parentheses enclosing each operator
and its operands.
Which is fully parenthesized?
(A+B)*C
( ( A + B) * C )
( ( A + B) * ( C ) )
Infix to Prefix Conversion
Move each operator to the left of its operands & remove the parentheses:
( ( A + B ) * ( C + D ) )
→ ( + A B * ( C + D ) )
→ * + A B ( C + D )
→ * + A B + C D
A B + C * D E + F / -
Operand order does not change!
Operators are in order of evaluation!
Infix to postfix
Stacks are widely used in the design
and implementation of compilers. For
example, they are
used to convert arithmetic
expressions from infix notation to
postfix notation
Notice that the operands
in a postfix expression occur in the
same order as in the corresponding
infix expression.
a + b * c – d.
equivalent to
(a + (b * c)) – d.
FPE Infix to Postfix
(((A+B)*(C-E))/(F+G))
stack:(
output: []
FPE Infix to Postfix
(A+B)*(C-E))/(F+G))
stack:((
output: []
FPE Infix to Postfix
A+B)*(C-E))/(F+G))
stack:(((
output: []
FPE Infix to Postfix
+B)*(C-E))/(F+G))
stack:(((
output: [A]
FPE Infix to Postfix
B)*(C-E))/(F+G))
stack:(((+
output: [A]
FPE Infix to Postfix
)*(C-E))/(F+G))
stack:(((+
output: [A B]
FPE Infix to Postfix
*(C-E))/(F+G))
stack:((
output: [A B + ]
FPE Infix to Postfix
(C-E))/(F+G))
stack:((*
output: [A B + ]
FPE Infix to Postfix
C-E))/(F+G))
stack:((*(
output: [A B + ]
FPE Infix to Postfix
-E))/(F+G))
stack:((*(
output: [A B + C ]
FPE Infix to Postfix
E))/(F+G))
stack:((*(-
output: [A B + C ]
FPE Infix to Postfix
))/(F+G))
stack:((*(-
output: [A B + C E ]
FPE Infix to Postfix
)/(F+G))
stack:((*
output: [A B + C E - ]
FPE Infix to Postfix
/(F+G))
stack:(
output: [A B + C E - * ]
FPE Infix to Postfix
(F+G))
stack:(/
output: [A B + C E - * ]
FPE Infix to Postfix
F+G))
stack:(/(
output: [A B + C E - * ]
FPE Infix to Postfix
+G))
stack:(/(
output: [A B + C E - * F ]
FPE Infix to Postfix
G))
stack:(/(+
output: [A B + C E - * F ]
FPE Infix to Postfix
))
stack:(/(+
output: [A B + C E - * F G ]
FPE Infix to Postfix
)
stack:(/
output: [A B + C E - * F G + ]
FPE Infix to Postfix
stack:<empty>
output: [A B + C E - * F G + / ]
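A sketch of the conversion just traced, for fully parenthesized infix expressions given as a token list (single-character operands assumed; no precedence handling is needed because every operator has its own parentheses):

```python
def fpe_infix_to_postfix(tokens):
    """Convert a fully parenthesized infix token list to postfix."""
    stack, output = [], []
    for tok in tokens:
        if tok == '(':
            stack.append(tok)               # remember the open group
        elif tok in ('+', '-', '*', '/'):
            stack.append(tok)               # operator waits on the stack
        elif tok == ')':
            while stack[-1] != '(':
                output.append(stack.pop())  # emit the group's operator
            stack.pop()                     # discard the matching '('
        else:
            output.append(tok)              # operands keep their order
    return output

# fpe_infix_to_postfix(list("(((A+B)*(C-E))/(F+G))"))
# -> ['A', 'B', '+', 'C', 'E', '-', '*', 'F', 'G', '+', '/']
```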
Problem with FPE
Too many parentheses. Establish precedence rules instead.
BST
Goals: fast searching, fast deletion.
A BST is a binary tree which has a search property.
Binary tree property: each node has at most 2 children.
Result: storage is small, operations are simple, average depth is small.
Binary Search Tree: a dictionary data structure.
Search tree property:
all keys in the left subtree are smaller than the root’s key
all keys in the right subtree are larger than the root’s key
Result: easy to find any given key; insert/delete by changing links.
[Figure: example BST with root 8, children 5 and 11, and further keys 2, 6, 10, 12, 4, 7, 9, 14, 13.]
Example and Counter-Example
[Figure: one tree whose keys violate the search property (NOT a binary search tree) and one that satisfies it (a binary search tree).]
Efficient and less efficient BSTs
The binary-search-tree
property allows us to print
out all the keys in a binary
search tree in sorted order by
a simple recursive algorithm,
called an inorder tree walk
Sorting
Tree sort is a sorting algorithm that is based on Binary
Search Tree data structure.
It first creates a binary search tree from the elements
of the input list or array and then performs an in-order
traversal on the created binary search tree to get the
elements in sorted order.
Sorting steps
Create a Binary search tree by inserting data items
from the array into the binary search tree.
Perform in-order traversal on the tree to get the
elements in sorted order.
An in-order tree walk gives the keys in sorted order.
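A tree-sort sketch following these steps, reusing the TreeNode class and inorder routine from the traversal example (duplicates are sent to the right subtree here, an arbitrary choice):

```python
def bst_insert(root, key):
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root


def tree_sort(items):
    root = None
    for x in items:                 # build the BST
        root = bst_insert(root, x)
    out = []
    inorder(root, out.append)       # in-order walk yields sorted order
    return out
```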
Searching in Binary Search Trees
Searching is the reason we have BSTs.
Like most operations on a BST, it is better described in a recursive fashion, but it can also be done iteratively (which may be more efficient).
search(10)
[Animation frames omitted: starting at the root 12, the search compares the key at each node and follows 12 → 9 → 11 → 10, where the key is found.]
How much time?
We started at level 0 and came down to ……………….
How many levels? The height of the tree.
Time for searching is O(h).
If the height of the tree is as large as the number of keys n, the time is O(n) in the worst case.
Recursive search:
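A recursive search sketch matching the O(h) analysis above (TreeNode as before):

```python
def bst_search(v, key):
    """Return the node holding key, or None; O(h) where h is the tree height."""
    if v is None or v.key == key:
        return v
    if key < v.key:
        return bst_search(v.left, key)   # key can only be in the left subtree
    return bst_search(v.right, key)      # otherwise only in the right subtree
```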
BST Bonus: FindMin, FindMax
[Figure: example BST with root 10, children 5 and 15, and further keys 2, 9, 20, 7, 17, 30.]
Min: start at the top and keep going left until reaching a null link.
Max: likewise, keep going right until reaching a null link.
Find Maximum
Find maximum is another common operation in binary
search trees and is helpful when implementing deletion.
Again the recursive thinking makes this implementation
much easier to understand
The idea is simple. If a node has a non-null right pointer, this node cannot be the maximum, as the elements in its right subtree must be greater.
Find Minimum
The idea of finding the minimum value in a binary
search tree will also help the delete operation
Its idea is very much like the maximum
Running time is
O(h)
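The two walks in Python, a minimal sketch:

```python
def tree_minimum(v):
    while v.left is not None:   # keep going left until a null link
        v = v.left
    return v


def tree_maximum(v):
    while v.right is not None:  # keep going right until a null link
        v = v.right
    return v
```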
Successor – Two cases
When the right subtree is not empty:
the successor is the leftmost node in the right subtree, i.e. the minimum of the right subtree.
It can be obtained by using the function TreeMinimum(right subtree).
….see the successor of 15
Successor node
Case 1
The node has a right subtree.
If the given node has a right subtree then by the BST
property the next larger key must be in the right
subtree.
Since all keys in a right subtree are larger than the key
of the given node, the successor must be the smallest
of all those keys in the right subtree.
Successor Node: the next larger node in this node’s subtree.
[Figure: example BST with root 10.]
Successor –case 2
The node does not have a
right subtree.
In this case we will have to look
up the tree since that's the only
place we might find the next
larger key.
There is no point looking at the
left subtree as all keys in the
left subtree are guaranteed to
be smaller than the key in the
given tree.
Successor –Case 2
When we look up from the given
node, there can be two cases:
first, the current node is the
left child of its parent.
In this case the parent is the
successor node. This is because
the parent always comes next in
inorder traversal if you are done
with left subtree (rooted at the
current node).
Second, the current node is the right child of its parent. In this case, keep going up the ancestor chain: the successor is the first ancestor reached by moving up from a left child (equivalently, the lowest ancestor whose left subtree contains the node). If no such ancestor exists, the node holds the largest key and has no successor.
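Both cases combined in one routine; this sketch assumes each node also keeps a parent reference, which the slides imply but the earlier TreeNode sketch does not have:

```python
def successor(x):
    """Node holding the next larger key, or None if x holds the maximum."""
    if x.right is not None:               # case 1: minimum of the right subtree
        return tree_minimum(x.right)
    y = x.parent                          # case 2: climb until we leave a left child
    while y is not None and x is y.right:
        x, y = y, y.parent
    return y
```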
Predecessor Node (a mirror problem): the next smaller node in this node’s subtree.
[Figure: example BSTs.]
insert(14)
[Animation frames omitted: starting at the root 12, the insertion follows 12 → 16 → 13 and attaches 14 as the right child of 13.]
Deletion
Three cases
z has no children
z has one child
z has two children
Deletion
Deleting node z, which may be the root or the left/right child of node q:
If z has no left child, we replace z by its right child.
If z has a left child but no right child, we replace z by its left child.
Otherwise (z has two children), we replace z by its successor y, updating z’s left subtree l to be the left child of y.
Delete z
Running time
O(h)..same as search
Building a tree with minimum
height
Insert n keys in the order 1 to
n
Case 1
Set the parent of the node to null
Sorting using BST
Insert the keys into the tree, then do an in-order traversal; the traversal takes O(n).
Insertion on a tree
…depends on the ‘level’ at which the key is to be inserted.
Suppose the sequence is in sorted order: 1, 2, 3, 4, …
Then we insert the keys one by one, and the number of accesses will be 1 + 2 + 3 + 4 + … = O(n^2) for the insertions.
Heap Types
Max-heaps (largest element at root) have the max-heap property:
for all nodes i, excluding the root: A[PARENT(i)] ≥ A[i]
Operations on Heaps
Maintain/Restore the max-heap property
MAX-HEAPIFY
Priority queues
Maintaining the Heap Property
Suppose a node is smaller than a child
Left and Right subtrees of i are max-heaps
To eliminate the violation:
Exchange with larger child
Move down the tree
Continue until the node is not smaller than its children
Example
MAX-HEAPIFY(A, 2, 10)
[Figure: A[2] violates the heap property and is exchanged with A[4]; then A[4] violates the property and is exchanged with A[9].]
MaxHeapify – Example
MaxHeapify(A, 2)
[Figure: the value at node 2 is sifted down by repeated exchanges with its larger child until the subtree is a max-heap.]
Maintaining the Heap Property
Assumptions: the left and right subtrees of i are max-heaps; A[i] may be smaller than its children.
Alg: MAX-HEAPIFY(A, i, n)
1. l ← LEFT(i)
2. r ← RIGHT(i)
3. if l ≤ n and A[l] > A[i]
4.   then largest ← l
5.   else largest ← i
6. if r ≤ n and A[r] > A[largest]
7.   then largest ← r
8. if largest ≠ i
9.   then exchange A[i] ↔ A[largest]
10.        MAX-HEAPIFY(A, largest, n)
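The same procedure over a 0-based Python list (the pseudocode above is 1-based, so LEFT and RIGHT become 2i+1 and 2i+2); a sketch:

```python
def max_heapify(a, i, n):
    """Sift a[i] down within a[0:n], assuming both subtrees are max-heaps."""
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]   # exchange with the larger child
        max_heapify(a, largest, n)            # continue down the tree: O(h)
```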
MAX-HEAPIFY Running Time
Intuitively: the violating value moves down one level per exchange, doing constant work per level, so a call on a node of height h costs O(h).
Building a Heap
Convert an array A[1 … n] into a max-heap (n = length[A]).
The elements in the subarray A[(⌊n/2⌋+1) .. n] are leaves.
Apply MAX-HEAPIFY on the elements between 1 and ⌊n/2⌋.
Alg: BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← ⌊n/2⌋ downto 1
3.   do MAX-HEAPIFY(A, i, n)
[Figure: the array A = 4 1 3 2 16 9 10 14 8 7 viewed as a complete binary tree.]
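The corresponding Python sketch, reusing max_heapify from above:

```python
def build_max_heap(a):
    n = len(a)
    # a[n//2 .. n-1] are leaves already; heapify the internal nodes
    # bottom-up, from the last internal node to the root.
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)

# build_max_heap on [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
# yields [16, 14, 10, 8, 7, 9, 3, 2, 4, 1].
```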
Example: A = 4 1 3 2 16 9 10 14 8 7
[Figure: MAX-HEAPIFY is applied bottom-up for i = 5, 4, 3, 2, 1; the resulting max-heap is 16 14 10 8 7 9 3 2 4 1.]
BuildMaxHeap – Example
Input array: 24 21 23 22 36 29 30 34 28 27
[Figure: the initial complete tree is not a max-heap; MaxHeapify is called for i = 5, 4, 3, 2, 1, and the resulting max-heap has 36 at the root.]
Running Time of BuildMaxHeap
Loose upper bound:
cost of a MaxHeapify call × no. of calls to MaxHeapify = O(lg n) × O(n) = O(n lg n)
Tighter bound:
The cost of a call to MaxHeapify at a node depends on the height, h, of the node – O(h).
The height of most nodes is smaller than lg n.
The height h of nodes ranges from 0 to lg n.
The no. of nodes of height h is ⌈n/2^(h+1)⌉.
Running Time of BuildMaxHeap – Tighter Bound for T(BuildMaxHeap):
T(BuildMaxHeap) = sum_{h=0}^{lg n} ⌈n/2^(h+1)⌉ · O(h)
                = O( n · sum_{h=0}^{lg n} h/2^h )
                ≤ O( n · sum_{h=0}^{∞} h/2^h )
Using sum_{h=0}^{∞} h·x^h = x/(1 − x)^2 with x = 1/2 (equation A.8):
sum_{h=0}^{∞} h/2^h = (1/2) / (1 − 1/2)^2 = 2
so T(BuildMaxHeap) = O(n).
Running Time of BUILD-MAX-HEAP
Alg: BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← ⌊n/2⌋ downto 1          — O(n) iterations
3.   do MAX-HEAPIFY(A, i, n)        — O(lg n) per call
Running Time of BUILD-MAX-HEAP
HEAPIFY takes O(h): the cost of HEAPIFY on a node is proportional to the height of that node in the tree.
T(n) = sum_{i=0}^{h} n_i · h_i = sum_{i=0}^{h} 2^i · (h − i) = O(n)
where n_i = 2^i is the no. of nodes at level i and h_i = h − i is the height of a node at level i.
Example with height h = 3 (= lg n):
level i = 0: height h_0 = 3, 2^0 nodes
level i = 1: height h_1 = 2, 2^1 nodes
level i = 2: height h_2 = 1, 2^2 nodes
level i = 3: height h_3 = 0, 2^3 nodes
Heapsort idea:
Build a max-heap from the array
Swap the root (the maximum element) with the last element in the array
“Discard” this last node by decreasing the heap size
Call MAX-HEAPIFY on the new root, and repeat
Alg: HEAPSORT(A)
1. BUILD-MAX-HEAP(A)                 — O(n)
2. for i ← length[A] downto 2        — n−1 times
3.   do exchange A[1] ↔ A[i]
4.      MAX-HEAPIFY(A, 1, i − 1)     — O(lg n)
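Heapsort in Python using the two helpers sketched above:

```python
def heapsort(a):
    build_max_heap(a)                       # O(n)
    for end in range(len(a) - 1, 0, -1):    # n-1 times
        a[0], a[end] = a[end], a[0]         # move the current maximum to the end
        max_heapify(a, 0, end)              # restore the heap on a[0:end]: O(lg n)

# heapsort([4, 1, 3, 2, 16, 9, 10, 14, 8, 7]) sorts the list in place.
```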
HEAP-MAXIMUM
Goal: Return the largest element of the heap.
Alg: HEAP-MAXIMUM(A)
1. return A[1]
[Figure: example heap A.] Heap-Maximum(A) returns 7.
HEAP-EXTRACT-MAX
Goal:
Extract the largest element of the heap (i.e., return the max value and also remove that element from the heap).
Idea:
Exchange the root element with the last
Decrease the size of the heap by 1 element
Call MAX-HEAPIFY on the new root, on a heap of size n−1
Example: HEAP-EXTRACT-MAX
[Figure: max = 16 is returned; the last element 1 replaces the root, the heap size decreases by 1, and MAX-HEAPIFY restores the max-heap with 14 at the root.]
HEAP-EXTRACT-MAX
Alg: HEAP-EXTRACT-MAX(A, n)
1. if n < 1
2.   then error “heap underflow”
3. max ← A[1]
4. A[1] ← A[n]
5. MAX-HEAPIFY(A, 1, n − 1)
6. return max
HEAP-INCREASE-KEY
Goal: Increase the key of an element i in the heap.
Idea: Increment the key of A[i] to its new value. If the max-heap property no longer holds, traverse a path toward the root to find the proper place for the newly increased key.
Example: HEAP-INCREASE-KEY
[Figure: in the heap 16, 14, 10, 8, 7, 9, 3, 2, 4, 1 the key of the node holding 4 is set to 15 (Key[i] ← 15); 15 is exchanged with its parent 8 and then with 14 until the max-heap property is restored.]
HEAP-INCREASE-KEY
Alg: HEAP-INCREASE-KEY(A, i, key)
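The body of HEAP-INCREASE-KEY is not on the slide; a sketch of the usual bubble-up version over a 0-based list, together with a MAX-HEAP-INSERT built on top of it (an assumed reconstruction, not the slides' code):

```python
def heap_increase_key(a, i, key):
    if key < a[i]:
        raise ValueError("new key is smaller than current key")
    a[i] = key
    while i > 0 and a[(i - 1) // 2] < a[i]:   # parent smaller: swap upward
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2                      # move toward the root


def max_heap_insert(a, key):
    a.append(float('-inf'))                   # new leaf with key -infinity
    heap_increase_key(a, len(a) - 1, key)     # raise it to its correct place
```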
MAX-HEAP-INSERT
Goal: Insert a new element into a max-heap.
Idea: Expand the max-heap with a new element whose key is −∞, then call HEAP-INCREASE-KEY to set the key of the new node to its correct value and maintain the max-heap property.
Example: MAX-HEAP-INSERT
Insert value 15: start by inserting −∞ as a new last leaf, then call HEAP-INCREASE-KEY on A[11] = 15.
[Figure: 15 bubbles up past 7 and 14, giving the heap 16, 15, 10, 8, 14, 9, 3, 2, 4, 1, 7.]
Alg: MAX-HEAP-INSERT(A, key, n)
1. heap-size[A] ← n + 1
2. A[n + 1] ← −∞
3. HEAP-INCREASE-KEY(A, n + 1, key)
Summary
We can perform the following operations on heaps:
MAX-HEAPIFY O(lgn)
BUILD-MAX-HEAP O(n)
HEAP-SORT O(nlgn)
MAX-HEAP-INSERT O(lgn)
HEAP-EXTRACT-MAX O(lgn)
HEAP-INCREASE-KEY O(lgn)
HEAP-MAXIMUM O(1)
Priority Queues
Operations on Priority Queues
Problems
Assuming the data in a max-heap are distinct, what are the possible locations of the second-largest element?
GRAPHS
Applications – Communication Network
vertex = router
edge = communication link
Applications - Driving Distance/Time Map
vertex = city
edge weight = driving distance/time
Applications - Street Map
G=(V,E)
V(G): a finite, nonempty set
of vertices
E(G): a set of edges
(pairs of vertices)
Directed vs. undirected graphs
When the edges in a graph have no direction, the graph is
called undirected
Directed vs. undirected graphs (cont.)
Degree of a vertex: the number of edges incident to it.
What is the degree of each of these vertices?
What is the sum of the degrees of all the vertices? It equals twice the number of edges, since each edge is counted twice.
The Handshaking Lemma
In any graph, the sum of all the vertex-degree is equal to twice the
number of edges.
Proof Since each edge has two ends, it must contribute
exactly 2 to the sum of the degrees. The result follows
immediately.
The Following are the consequences of the
Handshaking lemma.
In any graph, the sum of all the vertex-degree is an even
number.
In any graph, the number of vertices of odd degree is even.
Path: a sequence of vertices
that connect two nodes in a
graph
Paths
1,2,3 is a path
1,4,3 is not a path
Simple path
No vertex is repeated
A cycle is a simple path
with the same start and
end vertex
Identify a simple path ,
And a cycle
Connected graph
A graph is connected if any
two vertices are connected
by some path
Subgraph
A graph H is a
subgraph of graph G iff
its vertex and edge sets
are subsets of those of G
Forest: a collection of trees.
A (free) tree is a connected graph without cycles; it need not be rooted.
Free tree (not rooted)
Connectivity
Let n be the number of vertices and m the number of edges.
Definition: A complete graph is a graph with n vertices and an edge between every two vertices.
There are no loops.
Every two vertices share exactly one edge.
n = ?, m = ?
For a complete and directed graph, the number of edges = ?
Trees and graphs
n = vertices, m = edges
m = n(n−1)/2 for a complete (undirected) graph
Number of edges in a tree = n − 1, i.e. for a tree m = n − 1
If m < n − 1, G is not connected
Spanning tree
Representing Graphs
Assume V = {1, 2, …, n}
An adjacency matrix represents the graph as an n × n matrix A:
A[i, j] = 1 if edge (i, j) ∈ E (or the weight of the edge)
        = 0 if edge (i, j) ∉ E
Graphs: Adjacency Matrix
Example: [Figure: directed graph on vertices 1–4 with edges a, b, c, d.]
A | 1 2 3 4
1 | 0 1 1 0
2 | 0 0 1 0
3 | 0 0 0 0
4 | 0 0 1 0
Graphs: Adjacency Matrix
How much storage does the adjacency matrix require?
A: O(V^2)
What is the minimum amount of storage needed by an adjacency matrix representation of an undirected graph with 4 vertices?
Graphs: Adjacency Matrix
The adjacency matrix is a dense representation
Usually too much storage for large graphs
But can be very efficient for small graphs
Most large interesting graphs are sparse
E.g., planar graphs, in which no edges cross, have |E| = O(|V|) by Euler’s formula
For this reason the adjacency list is often a more appropriate representation
GRAPHS
Graphs: Adjacency List
• Adjacency list: for each vertex v ∈ V, store a list of vertices adjacent to v
• Example:
– Adj[1] = {2,3}
– Adj[2] = {3}
– Adj[3] = {}
– Adj[4] = {3}
• Variation: can also keep a list of edges coming into the vertex
[Figure: the digraph on vertices 1–4.]
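In Python the example adjacency list can simply be a dict of lists (a sketch):

```python
# Adjacency-list representation of the example digraph:
# Adj[1] = {2, 3}, Adj[2] = {3}, Adj[3] = {}, Adj[4] = {3}
adj = {
    1: [2, 3],
    2: [3],
    3: [],
    4: [3],
}

def out_neighbours(v):
    return adj[v]          # O(out-degree(v)) to list them
```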
Graphs: Adjacency List
• How much storage is required?
– The degree of a vertex v = the number of incident edges
• Directed graphs have in-degree and out-degree
– For directed graphs, the number of items in the adjacency lists is Σ out-degree(v) = |E|; this takes Θ(V + E) storage (Why?)
– For undirected graphs, the number of items in the adjacency lists is Σ degree(v) = 2|E| (handshaking lemma), also Θ(V + E) storage
• So: Adjacency lists take O(V+E) storage
Adjacency lists
Graph implementation
• Array-based implementation
– A 1D array is used to represent the vertices
– A 2D array (adjacency matrix) is used to
represent the edges
Array-based implementation
Linked-list implementation
Edge list
• ..stores the edges and vertices in two lists
• Easy to implement
• Finding the edges incident to a vertex is
inefficient ,since it requires examining the
entire edge sequence.
Edge lists
Breadth-First Search
• “Explore” a graph, turning it into a tree
– One vertex at a time
– Expand frontier of explored vertices across the
breadth of the frontier
• Builds a tree over the graph
– Pick a source vertex to be the root
– Find (“discover”) its children, then their children,
etc.
Graph Searching
• Given: a graph G = (V, E), directed or
undirected
• Goal: methodically explore every vertex and
every edge
• Ultimately: build a tree on the graph
– Pick a vertex as the root
– Choose certain edges to produce a tree
Breadth-First Search
• BFS follows the following rules:
1. Select an unvisited node x, visit it, have it be the root
in a BFS tree being formed. Its level is called the
current level.
2. From each node z in the current level, in the order in
which the level nodes were visited, visit all the
unvisited neighbors of z. The newly visited nodes from
this level form a new level that becomes the next
current level.
3. Repeat step 2 until no more nodes can be visited.
4. If there are still unvisited nodes, repeat from Step 1.
BFS and DFS
BFS vs DFS
BFS
• Like unrolling a string ..
Undirected Breadth First Search – example on the graph with vertices A–H.
[Animation frames omitted.] Starting from A (distance 0), BFS discovers A’s neighbours F, B, C and G at distance 1, then D and E at distance 2, and finally H at distance 3. A FIFO queue holds the fringe of discovered-but-unfinished vertices; each vertex is labelled with its distance from A, and a vertex is finished once all of its neighbours have been discovered.
BFS –pseudo code and analysis
• Given a graph G=(V,E) , and a source vertex s
,BFS…
• systematically explores the graph to discover
every vertex that is reachable from s
• It computes the distance (smallest no of
edges) from s to each reachable vertex
• Works on directed and undirected graphs
• To keep track of progress, breadth-first-search
colors each vertex. Each vertex of the graph is
in one of three states:
• 1. Undiscovered;
2. Discovered but not fully explored; and
3. Fully explored.
• The state of a vertex, u, is stored in a color
variable as follows:
• 1. color[u] = White - for the "undiscovered"
state,
2. color [u] = Gray - for the "discovered but
not fully explored" state, and
3. color [u] = Black - for the "fully explored"
state.
To keep track of discovered vertices
..colors are used
• All vertices start out white and then become
gray and then black
• A discovered vertex becomes gray
• When the search pertaining to a vertex is
’done with’ ,it is made black
• The BFS tree maintains the parent ,child
,ancestor …etc …relationship as in any tree..
• Since a vertex is discovered only once, it has
only one parent ..
• color of each vertex-u.color
• predecessor of u =u.pi
• When u has no predecessor, u.pi=NIL
• u.d holds the distance from the source
s to vertex u
• lines 1–4 paint every vertex white,
• And sets u.d=infinity ,u.pi =NIL
• Line 5 paints s gray Line 6 initializes s.d
to 0, and line 7 sets the predecessor to
NIL
• Lines 8–9 initialize Q to the queue
containing just the vertex s
• lines 10–18 iterates as long as there
remain gray vertices
• At test in line 10, the queue Q consists
of the set of gray vertices.
Analysis-Time
• Enqueuing and dequeuing takes O(1)
• Total queue operations are limited to O(V)
• The procedure scans the adjacency list only once
for each vertex (when the vertex is dequeued)
• The sum of the lengths of the adjacency lists is
Theta(E)
• The total time spent in scanning adjacency lists is
O(E).
• Initialization takes O(V)
• Running time of BFS is O(V+E)
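A BFS sketch following the colouring scheme and queue discipline described above (adjacency lists as a dict; names are illustrative):

```python
from collections import deque


def bfs(adj, s):
    """Distances (in edges) from s to every reachable vertex, plus predecessors."""
    color = {u: "white" for u in adj}         # undiscovered
    dist = {u: float("inf") for u in adj}
    parent = {u: None for u in adj}
    color[s], dist[s] = "gray", 0             # discovered but not fully explored
    queue = deque([s])
    while queue:                              # the queue holds the gray vertices
        u = queue.popleft()
        for v in adj[u]:                      # scan u's adjacency list once
            if color[v] == "white":
                color[v], dist[v], parent[v] = "gray", dist[u] + 1, u
                queue.append(v)
        color[u] = "black"                    # fully explored
    return dist, parent
```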
Breadth First Search – second example, on the graph with source s and vertices 2–9.
[Animation frames omitted.] Starting from s (distance 0), BFS discovers 2, 3 and 5 at distance 1, then 4 and 6 at distance 2, and finally 8, 7 and 9 at distance 3; vertices already discovered are not enqueued again. The distance labels form the level graph of shortest paths from s.
How do we search a graph?
◦ At a particular vertices, where shall we go
next?
Two common framework:
the depth-first search (DFS)
the breadth-first search (BFS) and
◦ In DFS, go as far as possible along a single
path until reach a dead end (a vertex with no
edge out or no neighbor unexplored) then
backtrack
◦ In BFS, one explore a graph level by level
away (explore all neighbors first and then
move on)
It is 1,2,4,3
Rule 1 − Visit the adjacent unvisited vertex.
Mark it as visited. Display it. Push it in a
stack.
Rule 2 − If no adjacent vertex is found, pop
up a vertex from the stack. (It will pop up all
the vertices from the stack, which do not
have adjacent vertices.)
Rule 3 − Repeat Rule 1 and Rule 2 until the
stack is empty.
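A sketch of these three rules with an explicit stack (rule 4, restarting from an unvisited node for disconnected graphs, is left out for brevity):

```python
def dfs(adj, start):
    """Iterative DFS following the rules above; returns the visit order."""
    visited, order, stack = {start}, [start], [start]
    while stack:
        u = stack[-1]                          # current top of the stack
        unvisited = next((v for v in adj[u] if v not in visited), None)
        if unvisited is None:
            stack.pop()                        # rule 2: dead end, backtrack
        else:
            visited.add(unvisited)             # rule 1: visit, display, push
            order.append(unvisited)
            stack.append(unvisited)
    return order
```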
Initialize the stack.
Mark S as visited and put it onto the stack.
Explore any unvisited adjacent node from S. So we visit C, mark it as visited and put it onto the stack.
As C does not have any unvisited adjacent node, we keep popping the stack until we find a node that has an unvisited adjacent node.
[Figure: example graph with vertices a–j.]
DFS: Color Scheme
[Figure omitted.]
Depth-First Search – example on a small graph with vertices A–G.
[Animation frames omitted.] Starting at A, vertices are marked visited as they are first reached and the search backtracks from dead ends; the visit order is A, B, D, E, F, C, G.
Electronic circuit designs often need to make
the pins of several components electrically
equivalent by wiring them together.
To interconnect a set of n pins, we can
use an arrangement of n -1 wires, each
connecting two pins.
Of all such arrangements, the one that uses
the least amount of wire is usually the most
desirable
Example
[Figure: a network of towns – Avonford, Brinleigh, Cornwell, Donster, Edan, Fingley – joined by edges with weights between 2 and 8.]
Each step of the algorithm makes a choice which is the best ‘at the moment’.
Globally optimum solutions are not guaranteed.
But… certain greedy strategies do yield a MST.
Minimum Connector Algorithms
We model the situation as a network; then the problem is to find the minimum connector for the network.
[Figure: the same network redrawn with vertices A–F and the edge weights.]
Kruskal’s Algorithm
[Animation frames omitted.] Edges are considered in increasing order of weight and added unless they would form a cycle. For the example network the edges chosen are ED 2, AB 3, CD 4 (or AE 4), AE 4 and EF 5; BC 5 is rejected because it forms a cycle. Total weight of tree: 18.
A tree is a connected graph with no cycles.
A forest is a bunch of trees.
In a tree, there's only one way to get from
one node to another, but this isn't true in
general graphs.
Tree Forest
…finds a safe edge to add to the growing
forest by finding, of all the edges that
connect any two trees in the forest, an edge
(u,v) of least weight.
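This least-weight-edge-between-trees rule is Kruskal's; a sketch with a simple union-find to detect cycles (the edge format and names are assumptions):

```python
def kruskal(vertices, edges):
    """edges: iterable of (weight, u, v) tuples; returns the MST edge list."""
    parent = {v: v for v in vertices}

    def find(x):                       # root of the tree containing x
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different trees: safe edge, no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# For the slides' network it picks ED 2, AB 3, CD 4, AE 4 and EF 5
# (total weight 18) and rejects BC 5 as a cycle.
```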
Prim’s Algorithm
[Animation frames omitted.] Starting from a single vertex, the tree grows by repeatedly adding the lightest edge joining a tree vertex to a vertex not yet in the tree, again producing a spanning tree of total weight 18.
Some points to note
•Both algorithms will always give solutions with the same length.
•They will usually select edges in a different order – you must show
this in your workings.
•Occasionally they will use different edges – this may happen when
you have to choose between edges with the same length. In this
case there is more than one minimum connector for the network.
The tree starts from an arbitrary root vertex r
and grows until the tree spans all the vertices
in V .
Each step adds to the tree A a light edge that
connects A to an isolated vertex—one on
which no edge
of A is incident.
When the algorithm terminates, the edges in
A form a minimum spanning tree.
Why is this a greedy algorithm ?
We need a fast way to select a new edge to
add to the tree formed by the edges in A.
We use a min-priority queue(heap) to store
the vertices which are not in the tree.
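A Prim sketch along these lines; note it uses the simpler "lazy" variant that keeps candidate edges in the min-priority queue rather than the non-tree vertices the text describes:

```python
import heapq


def prim(adj, r):
    """adj: dict vertex -> list of (weight, neighbour); grow the tree from root r."""
    in_tree, mst = {r}, []
    heap = [(w, r, v) for w, v in adj[r]]   # light edges leaving the root
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)       # lightest edge leaving the tree
        if v in in_tree:
            continue                        # both endpoints already in the tree
        in_tree.add(v)                      # v was isolated; (u, v) is a safe edge
        mst.append((u, v, w))
        for w2, x in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (w2, v, x))
    return mst
```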
Dijkstra’s Algorithm
The author: Edsger Wybe Dijkstra
http://www.cs.utexas.edu/~EWD/
Edsger Wybe Dijkstra
- May 11, 1930 – August 6, 2002
[Figure: example weighted graph with vertices a, b, c, i, r and edge weights 5, 9, 16, 19, 21, 25, 31, 36.]
Label a with 0 and all others with ∞: L0(a) = 0 and L0(v) = ∞ for every other vertex v.
Labels are shortest-path lengths from a to the vertices found so far.
Lk(a, v) = min{ Lk−1(a, v), Lk−1(a, u) + w(u, v) }
S1 = {a, i}: L1(i) = 9, the rest unchanged.
S2 = {a, i, b}: L2(b) = 19.
S3 = {a, i, b, r}: L3(r) = 24.
S4 = {a, i, b, r, c}: L4(c) = 45.
[Figure frames omitted.]
Dijkstra’s Algorithm
A single-source, multiple-destination shortest path algorithm.
Works with directed and undirected graphs.
Works with weighted and unweighted graphs.
A rare type of algorithm: a greedy algorithm that produces an optimal solution.
Walk-Through
[Animation frames omitted.] Dijkstra’s algorithm is run on the example graph with source G. The table keeps, for each vertex, a known flag K, the tentative distance dv and the predecessor pv; repeatedly the unknown vertex with minimum dv is selected and its unselected neighbours are updated. Final table:
A: dv = 7, pv = H
B: dv = 12, pv = H
C: dv = 16, pv = B
D: dv = 2, pv = G
E: dv = 19, pv = F
F: dv = 17, pv = A
G: dv = 0 (source)
H: dv = 3, pv = G
Order of Complexity
Analysis: findMin() takes O(V) time and the outer loop iterates (V−1) times, giving O(V^2) time.
Optimal for dense graphs, i.e., |E| = O(V^2); suboptimal for sparse graphs, i.e., |E| = O(V).
Order of Complexity
If the graph is sparse, i.e., |E| = O(V):
– maintain the distances in a priority queue
– insert each new (shorter) distance produced by line 10 of Figure 9.32
This gives O(|E| log |V|) complexity.
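A sketch of the priority-queue version (O(|E| log |V|)), with dist and prev playing the roles of the dv and pv columns from the walk-through; stale heap entries are simply skipped instead of decreased:

```python
import heapq


def dijkstra(adj, source):
    """adj: dict vertex -> list of (neighbour, weight); weights must be non-negative."""
    dist = {v: float("inf") for v in adj}
    prev = {v: None for v in adj}
    dist[source] = 0
    heap = [(0, source)]                     # priority queue keyed by distance
    done = set()                             # the "known" (K) vertices
    while heap:
        d, u = heapq.heappop(heap)           # select minimum distance
        if u in done:
            continue                         # stale entry
        done.add(u)
        for v, w in adj[u]:                  # update (relax) unselected neighbours
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev
```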
Remarks
Implementing this algorithm as a computer program, it uses O(n^2) operations (additions, comparisons).
Other algorithms exist that account for negative
weights
Dijkstra’s algorithm is a single source one. Floyd’s
algorithm solves for the shortest path among all
pairs of vertices.