CSE408 Dijkstra, Huffman Coding: Lecture # 27

Dijkstra's algorithm finds the shortest paths from a source node to all other nodes in a graph. It works by maintaining a priority queue of nodes ordered by distance from the source. At each step, it relaxes edges connected to the closest node not yet processed, updating distances in the queue. Huffman coding assigns variable-length codes to characters based on frequency to reduce file size. It builds a binary tree by repeatedly combining the least frequent nodes, then assigns codes by traversing the tree left or right for 0 and 1.


CSE408

Dijkstra, Huffman Coding

Lecture # 27
Dijkstra’s Algorithm
Edge Relaxation

• Consider an edge e = (u,z) such that
  – u is the vertex most recently added to the cloud
  – z is not in the cloud
• The relaxation of edge e updates distance d(z) as follows:

  d(z) ← min{d(z), d(u) + weight(e)}

[Figure: before relaxation, d(u) = 50 and d(z) = 75; after relaxation, d(z) drops to 60, since d(u) + weight(e) = 60 < 75.]
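As a one-line illustration of the update rule, using the numbers from the figure (weight(e) = 10 is inferred from the 75 → 60 change, not stated on the slide):

```java
// Minimal sketch of the edge-relaxation step above; plain ints stand in
// for the d(u), d(z) labels, and the example weight is inferred.
public class Relaxation {
    static int relax(int distU, int distZ, int weight) {
        // d(z) <- min{ d(z), d(u) + weight(e) }
        return Math.min(distZ, distU + weight);
    }

    public static void main(String[] args) {
        // Figure's example: d(u) = 50, d(z) = 75, weight(e) = 10
        System.out.println(relax(50, 75, 10)); // prints 60
    }
}
```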
Example

[Figure: four snapshots of Dijkstra’s algorithm running on a weighted graph with vertices A–F, starting from source A with d(A) = 0. At each step the closest vertex outside the cloud is added and its incident edges are relaxed, tightening labels such as B: 8 → 7 and D: 4 → 3.]

Example (cont.)

[Figure: the final two snapshots. The finished distance labels are A = 0, C = 2, D = 3, E = 5, B = 7, F = 8.]
Dijkstra’s Algorithm

• A priority queue stores the vertices outside the cloud
  – Key: distance
  – Element: vertex
• Locator-based methods
  – insert(k,e) returns a locator
  – replaceKey(l,k) changes the key of an item
• We store two labels with each vertex:
  – Distance (d(v) label)
  – locator in priority queue

Algorithm DijkstraDistances(G, s)
  Q ← new heap-based priority queue
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
    l ← Q.insert(getDistance(v), v)
    setLocator(v, l)
  while ¬Q.isEmpty()
    u ← Q.removeMin()
    for all e ∈ G.incidentEdges(u)
      { relax edge e }
      z ← G.opposite(u, e)
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
        Q.replaceKey(getLocator(z), r)
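The pseudocode above can be sketched in runnable Java. java.util.PriorityQueue has no locator/replaceKey operation, so this sketch uses the common lazy-deletion workaround instead: re-insert a vertex whenever its distance improves and skip stale queue entries. The array-based graph encoding and the edge weights (read from the example figures) are our assumptions, not part of the slides.

```java
import java.util.*;

// Runnable sketch of DijkstraDistances. Re-insertion plus a staleness
// check stands in for the slides' locator-based replaceKey.
public class Dijkstra {
    // graph[u] holds {v, weight} pairs; returns d(v) for every vertex.
    static int[] shortestDistances(int[][][] graph, int s) {
        int n = graph.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        // Entries are {distance, vertex}, ordered by distance (the key).
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(a -> a[0]));
        pq.add(new int[]{0, s});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[1];
            if (top[0] > dist[u]) continue;       // stale entry: skip
            for (int[] e : graph[u]) {            // relax each incident edge
                int z = e[0], r = dist[u] + e[1];
                if (r < dist[z]) {
                    dist[z] = r;
                    pq.add(new int[]{r, z});      // stands in for replaceKey
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // The six-vertex example graph from the slides (A=0 ... F=5),
        // with edge weights reconstructed from the figures.
        int[][][] g = {
            {{1, 8}, {2, 2}, {3, 4}},                  // A: B, C, D
            {{0, 8}, {2, 7}, {4, 2}},                  // B: A, C, E
            {{0, 2}, {1, 7}, {3, 1}, {4, 3}, {5, 9}},  // C: A, B, D, E, F
            {{0, 4}, {2, 1}, {5, 5}},                  // D: A, C, F
            {{1, 2}, {2, 3}},                          // E: B, C
            {{2, 9}, {3, 5}},                          // F: C, D
        };
        System.out.println(Arrays.toString(shortestDistances(g, 0))); // prints [0, 7, 2, 3, 5, 8]
    }
}
```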
Analysis

• Graph operations
  – Method incidentEdges is called once for each vertex
• Label operations
  – We set/get the distance and locator labels of vertex z O(deg(z)) times
  – Setting/getting a label takes O(1) time
• Priority queue operations
  – Each vertex is inserted once into and removed once from the priority queue, where each insertion or removal takes O(log n) time
  – The key of a vertex w in the priority queue is modified at most deg(w) times, where each key change takes O(log n) time
• Dijkstra’s algorithm runs in O((n + m) log n) time provided the graph is represented by the adjacency list structure
  – Recall that Σv deg(v) = 2m
• The running time can also be expressed as O(m log n) since the graph is connected (a connected graph has m ≥ n − 1, so n is O(m))
Extension

• Using the template method pattern, we can extend Dijkstra’s algorithm to return a tree of shortest paths from the start vertex to all other vertices
• We store with each vertex a third label:
  – parent edge in the shortest path tree
• In the edge relaxation step, we update the parent label

Algorithm DijkstraShortestPathsTree(G, s)
  …
  for all v ∈ G.vertices()
    …
    setParent(v, ∅)
  …
  for all e ∈ G.incidentEdges(u)
    { relax edge e }
    z ← G.opposite(u, e)
    r ← getDistance(u) + weight(e)
    if r < getDistance(z)
      setDistance(z, r)
      setParent(z, e)
      Q.replaceKey(getLocator(z), r)
Why Dijkstra’s Algorithm Works

• Dijkstra’s algorithm is based on the greedy method: it adds vertices by increasing distance.
• Suppose it didn’t find all shortest distances. Let F be the first wrong vertex the algorithm processed.
• When the previous node, D, on the true shortest path was considered, its distance was correct.
• But the edge (D,F) was relaxed at that time!
• Thus, so long as d(F) > d(D), F’s distance cannot be wrong. That is, there is no wrong vertex.

[Figure: the example graph with the vertices D and F highlighted.]
Purpose of Huffman Coding

• Proposed by Dr. David A. Huffman in 1952
  – “A Method for the Construction of Minimum-Redundancy Codes”
• Applicable to many forms of data transmission
  – Our example: text files
The Basic Algorithm

• Huffman coding is a form of statistical coding
• Not all characters occur with the same frequency!
• Yet all characters are allocated the same amount of space
  – 1 char = 1 byte, be it e or x
The Basic Algorithm

• Any savings in tailoring codes to frequency of character?
• Code word lengths are no longer fixed like ASCII.
• Code word lengths vary and will be shorter for the more frequently used characters.
The (Real) Basic Algorithm

1. Scan text to be compressed and tally occurrences of all characters.
2. Sort or prioritize characters based on number of occurrences in text.
3. Build Huffman code tree based on prioritized list.
4. Perform a traversal of tree to determine all code words.
5. Scan text again and create new file using the Huffman codes.
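Steps 1–4 above can be sketched in runnable Java. The class and field names are ours, and ties in the priority queue may produce a different (but equally optimal) tree than the one built on the following slides:

```java
import java.util.*;

// Sketch of steps 1-4: tally frequencies, prioritize in a min-heap,
// build the tree by merging the two least frequent nodes, and read off
// code words (0 = left, 1 = right).
public class Huffman {
    static class Node {
        char ch; int freq; Node left, right;
        Node(char c, int f) { ch = c; freq = f; }
        Node(Node l, Node r) { freq = l.freq + r.freq; left = l; right = r; }
        boolean isLeaf() { return left == null; }
    }

    static Map<Character, String> buildCodes(String text) {
        // Step 1: tally occurrences of all characters
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : text.toCharArray()) freq.merge(c, 1, Integer::sum);
        // Step 2: prioritize (lowest frequency = highest priority)
        PriorityQueue<Node> pq = new PriorityQueue<>(Comparator.comparingInt(n -> n.freq));
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            pq.add(new Node(e.getKey(), e.getValue()));
        // Step 3: merge the two least frequent nodes until one tree remains
        while (pq.size() > 1) pq.add(new Node(pq.poll(), pq.poll()));
        // Step 4: traverse the tree to collect the code words
        Map<Character, String> codes = new HashMap<>();
        collect(pq.poll(), "", codes);
        return codes;
    }

    static void collect(Node n, String path, Map<Character, String> codes) {
        if (n == null) return;
        if (n.isLeaf()) { codes.put(n.ch, path); return; }
        collect(n.left, path + "0", codes);
        collect(n.right, path + "1", codes);
    }

    public static void main(String[] args) {
        Map<Character, String> codes = buildCodes("Eerie eyes seen near lake.");
        // The most frequent character, e, always gets a 2-bit code here.
        System.out.println(codes.get('e').length()); // prints 2
    }
}
```

Whichever tree the tie-breaking yields, the total encoded length of the example text is the same, since all Huffman trees for a given frequency table are equally optimal.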
Building a Tree: Scan the Original Text

• Consider the following short text:

  Eerie eyes seen near lake.

• Count up the occurrences of all characters in the text
Building a Tree: Scan the Original Text

Eerie eyes seen near lake.

• What characters are present?

  E, e, r, i, space, y, s, n, a, l, k, .
Building a Tree: Scan the Original Text

Eerie eyes seen near lake.

• What is the frequency of each character in the text?

  Char   Freq.    Char   Freq.    Char   Freq.
  E      1        y      1        k      1
  e      8        s      2        .      1
  r      2        n      2
  i      1        a      2
  space  4        l      1
Building a Tree: Prioritize Characters

• Create binary tree nodes with character and frequency of each character
• Place nodes in a priority queue
  – The lower the occurrence, the higher the priority in the queue
Building a Tree: Prioritize Characters

• Uses binary tree nodes

public class HuffNode
{
    public char myChar;
    public int myFrequency;
    public HuffNode myLeft, myRight;
}

PriorityQueue<HuffNode> myQueue;
Building a Tree

• The queue after inserting all nodes

  E  i  y  l  k  .  r  s  n  a  sp  e
  1  1  1  1  1  1  2  2  2  2  4   8

• Null pointers are not shown
Building a Tree

• While the priority queue contains two or more nodes
  – Create new node
  – Dequeue node and make it left subtree
  – Dequeue next node and make it right subtree
  – Frequency of new node equals sum of frequency of left and right children
  – Enqueue new node back into queue
Building a Tree

[Figures: a sequence of snapshots showing the construction. Repeatedly, the two lowest-frequency nodes are dequeued, joined under a new parent node whose frequency is their sum, and the parent is enqueued. In order, the merges shown are: E+i → 2, y+l → 2, k+. → 2, r+s → 4, n+a → 4, then (E,i)+(y,l) → 4, (k,.)+sp → 6, (r,s)+(n,a) → 8, 4+6 → 10, e+8 → 16, and finally 10+16 → 26.]

What is happening to the characters with a low number of occurrences? They are merged first, so they sink deepest into the tree and will receive the longest code words.
Building a Tree

After enqueueing this node there is only one node left in the priority queue.

[Figure: the completed tree, rooted at 26.]
Building a Tree

Dequeue the single node left in the queue.

This tree contains the new code words for each character.

The frequency of the root node should equal the number of characters in the text: “Eerie eyes seen near lake.” has 26 characters.

[Figure: the final Huffman tree. Root 26 = 10 + 16; 10 = 4 + 6, where 4 joins (E 1, i 1) and (y 1, l 1), and 6 joins (k 1, . 1) and sp 4; 16 = e 8 + 8, where 8 joins (r 2, s 2) and (n 2, a 2).]
Encoding the File: Traverse Tree for Codes

• Perform a traversal of the tree to obtain new code words
• Going left is a 0, going right is a 1
• A code word is only completed when a leaf node is reached

[Figure: the final Huffman tree, rooted at 26.]
Encoding the File: Traverse Tree for Codes

  Char    Code
  E       0000
  i       0001
  y       0010
  l       0011
  k       0100
  .       0101
  space   011
  e       10
  r       1100
  s       1101
  n       1110
  a       1111

[Figure: the final Huffman tree with these code words at the leaves.]
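A minimal sketch of the rescan-and-encode step, with the code table above hard-coded (a real implementation would derive the table from the tree):

```java
import java.util.*;

// Encode the example text using the code table from this slide.
public class HuffmanEncode {
    static final Map<Character, String> CODES = new HashMap<>();
    static {
        CODES.put('E', "0000"); CODES.put('i', "0001"); CODES.put('y', "0010");
        CODES.put('l', "0011"); CODES.put('k', "0100"); CODES.put('.', "0101");
        CODES.put(' ', "011");  CODES.put('e', "10");   CODES.put('r', "1100");
        CODES.put('s', "1101"); CODES.put('n', "1110"); CODES.put('a', "1111");
    }

    static String encode(String text) {
        StringBuilder bits = new StringBuilder();
        for (char c : text.toCharArray()) bits.append(CODES.get(c));
        return bits.toString();
    }

    public static void main(String[] args) {
        String bits = encode("Eerie eyes seen near lake.");
        System.out.println(bits.length()); // prints 84
    }
}
```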
Encoding the File

• Rescan text and encode file using new code words

Eerie eyes seen near lake.

0000 10 1100 0001 10 011 10 0010 10 1101 011 1101 10 10 1110 011 1110 10 1111 1100 011 0011 1111 0100 10 0101

(Spaces are shown between code words for readability only; the actual encoded stream is 84 bits with no separators.)

• Why is there no need for a separator character?
Encoding the File

• Have we made things any better?
• 84 bits to encode the text
• ASCII would take 8 × 26 = 208 bits

If a fixed-length code of 4 bits per character were used instead (4 bits suffice for the 12 distinct characters), the total would be 4 × 26 = 104 bits; the savings over that code are not as great.
Decoding the File

• How does the receiver know what the codes are?
• Tree constructed for each text file.
  – Considers frequency for each file
  – Big hit on compression, especially for smaller files
• Tree predetermined
  – based on statistical analysis of text files or file types
• Data transmission is bit-based versus byte-based
Decoding the File

• Once the receiver has the tree it scans the incoming bit stream
• 0 → go left
• 1 → go right

Try it: 101000110111101111011 11110000110101

[Figure: the final Huffman tree.]
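The bit-by-bit walk can be sketched as follows. For a self-contained example, the tree is rebuilt here from the code-word table on the earlier slide rather than received from the sender, which is an assumption of this sketch:

```java
import java.util.*;

// Decode a bit stream by walking the tree: 0 = left, 1 = right, emit a
// character at each leaf, then restart at the root.
public class HuffmanDecode {
    static class Node { char ch; Node left, right; }

    // Rebuild the tree by inserting each code word as a root-to-leaf path.
    static Node buildTree(Map<Character, String> codes) {
        Node root = new Node();
        for (Map.Entry<Character, String> e : codes.entrySet()) {
            Node cur = root;
            for (char bit : e.getValue().toCharArray()) {
                if (bit == '0') { if (cur.left == null) cur.left = new Node(); cur = cur.left; }
                else            { if (cur.right == null) cur.right = new Node(); cur = cur.right; }
            }
            cur.ch = e.getKey();   // leaf reached: store the character
        }
        return root;
    }

    static String decode(Node root, String bits) {
        StringBuilder out = new StringBuilder();
        Node cur = root;
        for (char bit : bits.toCharArray()) {
            cur = (bit == '0') ? cur.left : cur.right;
            if (cur.left == null && cur.right == null) { // leaf: code word complete
                out.append(cur.ch);
                cur = root;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<Character, String> codes = new HashMap<>(Map.of(
            'E', "0000", 'i', "0001", 'y', "0010", 'l', "0011", 'k', "0100",
            '.', "0101", ' ', "011", 'e', "10", 'r', "1100", 's', "1101"));
        codes.put('n', "1110");    // Map.of tops out at 10 pairs, so the
        codes.put('a', "1111");    // last two entries are added here
        Node root = buildTree(codes);
        System.out.println(decode(root, "0000101100000110")); // prints Eerie
    }
}
```

No separator character is needed precisely because the walk only ever restarts at the root after reaching a leaf: no code word is a prefix of another.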
Summary

• Huffman coding is a technique used to compress files for transmission
• Uses statistical coding
  – more frequently used symbols have shorter code words
• Works well for text and fax transmissions
• An application that uses several data structures