
ADA Module-4 SM

Module-4
Dynamic Programming
4.1. General Method
Dynamic programming is a general method for optimizing multistage decision processes. Thus, the word
“programming” in the name of this technique stands for “planning” and does not refer to computer
programming.

Dynamic programming is a technique for solving problems with overlapping subproblems. Typically, these
subproblems arise from a recurrence relating a given problem’s solution to solutions of its smaller
subproblems.

Rather than solving overlapping subproblems again and again, dynamic programming suggests solving
each of the smaller subproblems only once and recording the results in a table from which a solution to the
original problem can then be obtained.

Since most dynamic programming applications deal with optimization problems, a general principle that underlies such applications is the principle of optimality. It says that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.

4.2. Three Basic Examples

Example 1 Coin Row Problem

There is a row of n coins whose values are some positive integers c1, c2, . . . , cn, not necessarily distinct.
The goal is to pick up the maximum amount of money subject to the constraint that no two coins
adjacent in the initial row can be picked up.

Let F (n) be the maximum amount that can be picked up from the row of n coins.

The recurrence relation is

F(n) = max{cn + F(n − 2), F(n − 1)} for n > 1,

F(0) = 0, F(1) = c1

ALGORITHM CoinRow(C[1..n])
//Applies the formula bottom up to find the maximum amount of money
//that can be picked up from a coin row without picking two adjacent coins
//Input: Array C[1..n] of positive integers indicating the coin values
//Output: The maximum amount of money that can be picked up
F[0]← 0; F[1]← C[1]
for i ← 2 to n do
    F[i]← max(C[i] + F[i − 2], F[i − 1])
return F[n]
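The pseudocode translates directly into Python; the following is a minimal sketch (the function name coin_row and the list-based table are our own choices, not part of the pseudocode):

```python
def coin_row(c):
    """Maximum money pickable from a coin row with no two adjacent coins taken."""
    f = [0] * (len(c) + 1)          # f[i] = best amount using the first i coins
    if c:
        f[1] = c[0]
    for i in range(2, len(c) + 1):
        # either take coin i (plus the best over the first i-2) or skip it
        f[i] = max(c[i - 1] + f[i - 2], f[i - 1])
    return f[len(c)]

print(coin_row([5, 1, 2, 10, 6, 2]))  # 17, from coins 5, 10, 2
```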

CSE@HKBKCE 1 2023-24

Example: Applying the algorithm to the coin row of denominations 5, 1, 2, 10, 6, 2 gives
F(0), . . . , F(6) = 0, 5, 5, 7, 15, 15, 17, so the maximum amount is F(6) = 17.

To find the coins with the maximum total value, we need to trace back the computations to see
which of the two possibilities, cn + F(n − 2) or F(n − 1), produced each maximum.

In the last application of the formula, it was the sum c6 + F (4), which means that the coin c6 = 2 is a
part of an optimal solution.
Moving to computing F (4), the maximum was produced by the sum c4 + F (2), which means that the
coin c4 = 10 is a part of an optimal solution as well.

Finally, the maximum in computing F(2) was produced by F(1), implying that the coin c2 is not a
part of an optimal solution and the coin c1 = 5 is.

Thus, the optimal solution is {c1, c4, c6}.


Time Complexity: Θ(n), Space Complexity: Θ(n)


EXAMPLE 2 Change-making problem


Give change for amount n using the minimum number of coins of denominations d1 < d2 < . . . < dm.
Here, a dynamic programming algorithm for the general case is considered, assuming unlimited
quantities of coins for each of the m denominations d1 < d2 < . . . < dm, where d1 = 1. Let F(n)
be the minimum number of coins whose values add up to n. Clearly, F(0) = 0 (when the amount
is 0, no coins are required).

The amount n can only be obtained by adding one coin of denomination dj to the amount n − dj for some
j = 1, 2, . . . , m such that n ≥ dj.
Therefore, we can consider all such denominations and select the one minimizing F(n − dj) + 1. Since 1
is a constant, we can find the smallest F(n − dj) first and then add 1 to it.
Thus, we have the following recurrence for F(n):
F(n) = min{F(n − dj) : j such that n ≥ dj} + 1 for n > 0,
F(0) = 0.
We can compute F(n) by filling a one-row table left to right, but computing a table entry here requires
finding the minimum of up to m numbers.

ALGORITHM ChangeMaking(D[1..m], n)
//Applies dynamic programming to find the minimum number of coins
//of denominations d1 < d2 < . . . < dm, where d1 = 1, that add up to a given amount n
//Input: Positive integer n and array D[1..m] of increasing positive integers indicating the coin
//denominations, where D[1] = 1
//Output: The minimum number of coins that add up to n
F[0]← 0
for i ← 1 to n do
    temp ← ∞; j ← 1
    while j ≤ m and i ≥ D[j] do
        temp ← min(F[i − D[j]], temp)
        j ← j + 1
    F[i]← temp + 1
return F[n]
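The same computation can be sketched in Python; here a generator expression over the usable denominations replaces the inner while loop (the function name change_making is ours):

```python
def change_making(d, n):
    """Minimum number of coins of denominations d (which must contain 1)
    whose values add up to n."""
    INF = float("inf")
    f = [0] + [INF] * n
    for i in range(1, n + 1):
        # try every denomination that fits, keep the smallest F(i - dj), add 1
        f[i] = min(f[i - dj] for dj in d if dj <= i) + 1
    return f[n]

print(change_making([1, 3, 4], 6))  # 2 (two 3's)
```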
Example: The application of the algorithm to amount n = 6 and denominations 1, 3, 4 produces
F(0), . . . , F(6) = 0, 1, 2, 1, 1, 2, 2.


To find the coins of an optimal solution, we need to trace back the computations to see which of the
denominations produced the minima.
For the instance considered, in the last application of the formula (for n = 6), the minimum was produced
by d2 = 3.
The second minimum (for n = 6 − 3 = 3) was also produced by a coin of that denomination.
Thus, the minimum-coin set for n = 6 is two 3's; the total number of coins is 2.
Time Complexity: Θ(nm)
Space Complexity: Θ(n)

Example 3 Coin-collecting problem


Several coins are placed in cells of an n × m board, no more than one coin per cell. A robot, located in
the upper left cell of the board, needs to collect as many of the coins as possible and bring them to the
bottom right cell. On each step, the robot can move either one cell to the right or one cell down from its
current location. When the robot visits a cell with a coin, it always picks up that coin. Design an
algorithm to find the maximum number of coins the robot can collect and a path it needs to follow to do
this.


Recurrence Relation
F (i, j ) = max{F (i − 1, j ), F (i, j − 1)} + cij for 1 ≤ i ≤ n, 1 ≤ j ≤ m
F (0, j) = 0 for 1 ≤ j ≤ m and F (i, 0) = 0 for 1 ≤ i ≤ n
where cij = 1 if there is a coin in cell (i, j ), and cij = 0 otherwise.

ALGORITHM RobotCoinCollection(C[1..n, 1..m])
//Applies dynamic programming to compute the largest number of coins
//a robot can bring to cell (n, m) by moving only right and down from cell (1, 1)
//Input: Matrix C[1..n, 1..m] whose elements equal 1 and 0 for cells with and without a coin, respectively
//Output: Largest number of coins the robot can bring to cell (n, m)
F[1, 1]← C[1, 1]
for j ← 2 to m do
    F[1, j]← F[1, j − 1] + C[1, j]
for i ← 2 to n do
    F[i, 1]← F[i − 1, 1] + C[i, 1]
    for j ← 2 to m do
        F[i, j]← max(F[i − 1, j], F[i, j − 1]) + C[i, j]
return F[n, m]
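A Python sketch of the recurrence; a padded row and column of zeros plays the role of the F(0, j) = F(i, 0) = 0 initial conditions, so no separate first-row and first-column loops are needed (an implementation choice of ours):

```python
def robot_coin_collection(c):
    """Maximum coins collectible moving only right/down on board c of 0/1 cells."""
    n, m = len(c), len(c[0])
    # padded (n+1) x (m+1) table: row 0 and column 0 stay 0
    f = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # best of arriving from above or from the left, plus this cell's coin
            f[i][j] = max(f[i - 1][j], f[i][j - 1]) + c[i - 1][j - 1]
    return f[n][m]

board = [[0, 1, 0],
         [1, 1, 0],
         [0, 1, 1]]
print(robot_coin_collection(board))  # 4
```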

4.3. Knapsack Problem and its Memory Functions

Given n items of known weights w1, . . . , wn and values v1, . . . , vn and a knapsack of capacity W, find
the most valuable subset of the items that fit into the knapsack.

We assume here that all the weights and the knapsack capacity are positive integers.

Recurrence relation
Consider an instance defined by the first i items, 1≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi ,
and knapsack capacity j, 1 ≤ j ≤ W. Let F(i, j) be the value of an optimal solution to this instance, i.e., the
value of the most valuable subset of the first i items that fit into the knapsack of capacity j. We can divide

all the subsets of the first i items that fit the knapsack of capacity j into two categories: those that do not
include the ith item and those that do.
• Among the subsets that do not include the ith item, the value of an optimal subset is, by
definition, F(i − 1, j).
• Among the subsets that do include the ith item (hence, j – wi ≥ 0), an optimal subset is made up of
this item and an optimal subset of the first i − 1 items that fits into the knapsack of capacity j – wi.
The value of such an optimal subset is vi + F(i − 1, j − wi).

Thus the recurrence relation is

F(i, j) = max{F(i − 1, j), vi + F(i − 1, j − wi)} if j − wi ≥ 0,
F(i, j) = F(i − 1, j) if j − wi < 0.

Initial conditions: F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.

The goal is to find F(n, W), the maximal value of a subset of the n given items that fits into the knapsack
of capacity W, and an optimal subset itself.

Table for solving the knapsack problem: an (n + 1) × (W + 1) table of F(i, j) values, filled row by row
using the recurrence above.

Example 1: Consider the following instance (weights and values consistent with the trace-back that follows):
item 1: w1 = 2, v1 = 12; item 2: w2 = 1, v2 = 10; item 3: w3 = 3, v3 = 20; item 4: w4 = 2, v4 = 15;
capacity W = 5.

Using the recurrence relation, the table can be computed as follows.

The maximal value is F(4, 5) = 37.


We can find the composition of an optimal subset by tracing back the computation of the entry F(4, 5) in
the table.
Since F(4, 5) > F(3, 5), item 4 has to be included in an optimal solution; 5 − 2 = 3 units of the knapsack
capacity remain.
Since F(3, 3) = F(2, 3), item 3 is not included in the optimal subset.
Since F(2, 3) > F(1, 3), item 2 is a part of an optimal selection; 3 − 1 = 2 units of the knapsack capacity
remain.
Since F(1, 2) > F(0, 2), item 1 is the final part of the optimal solution.
The items included are {item 1, item 2, item 4}.
The optimal subset is (x1, x2, x3, x4) = (1, 1, 0, 1).

Algorithm that computes the maximum profit using the bottom-up approach

Algorithm Knapsack(int n, int W)
{
// Implements the bottom-up approach for the knapsack problem
// Input: Weights of the items wt[1..n], values of the items val[1..n], capacity W
// Output: The value of an optimal subset of the n items
for i = 0 to n do
    for j = 0 to W do
    {
        if (i == 0 || j == 0)
            K[i][j] = 0;
        else if (j - wt[i] >= 0)
            K[i][j] = max(val[i] + K[i-1][j-wt[i]], K[i-1][j]);
        else
            K[i][j] = K[i-1][j];
    }
return K[n][W];
}
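A Python sketch of the bottom-up table computation; the instance in the call (weights 2, 1, 3, 2 and values 12, 10, 20, 15 with W = 5) is illustrative:

```python
def knapsack(wt, val, W):
    """Bottom-up 0/1 knapsack; returns the max value of items fitting capacity W."""
    n = len(wt)
    K = [[0] * (W + 1) for _ in range(n + 1)]   # K[0][j] = K[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - wt[i - 1] >= 0:   # item i fits: take it or leave it
                K[i][j] = max(val[i - 1] + K[i - 1][j - wt[i - 1]], K[i - 1][j])
            else:                    # item i does not fit
                K[i][j] = K[i - 1][j]
    return K[n][W]

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37 (items 1, 2, 4)
```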
4.4. Transitive closure
• The adjacency matrix A = {aij} of a directed graph is the boolean matrix that has 1 in its ith row and
jth column if and only if there is a directed edge from the ith vertex to the jth vertex.
• Definition: The transitive closure of a directed graph with n vertices can be defined as the n × n
boolean matrix T = {tij }, in which the element in the ith row and the jth column is 1 if there exists a
nontrivial path (i.e., directed path of a positive length) from the ith vertex to the jth vertex;
otherwise, tij is 0.

Applications
• For investigating data flow and control flow dependencies
• For inheritance testing of object-oriented software.
• In electronic engineering, it is used for redundancy identification and test generation for digital
circuits.

Example:


Figure 4.2 a) Digraph b) adjacency matrix c) transitive closure

• We can generate the transitive closure of a digraph with the help of depth-first search or breadth-first
search. Performing a traversal starting at the ith vertex gives information about the vertices reachable
from it and hence the columns that contain 1's in the ith row of the transitive closure.
• Thus, doing such a traversal for every vertex as a starting point yields the transitive closure in its
entirety.
• Since this method traverses the same digraph several times, we look for a better algorithm; such an
algorithm is Warshall's algorithm.

Warshall’s algorithm
Warshall’s algorithm constructs the transitive closure through a series of n × n boolean matrices

R(0), . . . , R(k−1), R(k), . . . , R(n).

The element rij(k) in the ith row and jth column of matrix R(k) is equal to 1 if and only if there exists a directed
path of a positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered
not higher than k.
Thus, we have the following formula for generating the elements of matrix R(k) from the elements of matrix
R(k−1):

rij(k) = rij(k−1) or (rik(k−1) and rkj(k−1))

This formula implies the following rules.


• If an element rij is 1 in R(k-1), it remains 1 in R(k).
• If an element rij is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i
and column k and the element in its row k and column j are both 1’s in R(k−1).

Pseudocode of Warshall’s algorithm

ALGORITHM Warshall(A[1..n, 1..n])
//Implements Warshall’s algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j]← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
return R(n)
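The triple loop translates directly to Python. A single matrix updated in place suffices: overwriting R(k−1) entries with R(k) values as we go does not change the result, since a 1 never reverts to 0:

```python
def warshall(a):
    """Transitive closure of a digraph given by a 0/1 adjacency matrix a."""
    n = len(a)
    r = [row[:] for row in a]   # R(0) = A; updated in place for each k
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # a path i -> j exists if it did before, or if it goes i -> k -> j
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

a = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(warshall(a))  # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```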

Time Efficiency is (n3)

Application of Warshall’s algorithm to find the transitive closure

4.5. Floyd’s Algorithm for the All-Pairs Shortest-Paths Problem

The all-pairs shortest-paths problem asks for the distances, i.e., the lengths of the shortest paths, from each
vertex to all other vertices.
Applications: communications, transportation networks, and operations research.

Distance matrix: an n × n matrix D whose element dij in the ith row and the jth column
indicates the length of the shortest path from the ith vertex to the jth vertex.

Example


Figure 4.3 a) Digraph b) Weight matrix c) Distance Matrix

Floyd’s algorithm

• It is applicable to both undirected and directed weighted graphs provided that they do not contain
a cycle of a negative length.
• Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a
series of n × n matrices D(0), . . . , D(k−1), D(k), . . . , D(n).
• The element dij(k) in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . ,
n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex
with each intermediate vertex, if any, numbered not higher than k.
• The series starts with D(0), which does not allow any intermediate vertices in its paths,hence,D(0)
is the weight matrix of the graph.
• The last matrix in the series, D(n), contains the lengths of the shortest paths among all paths that
can use all n vertices as intermediate and hence is the distance matrix.
• We can compute all the elements of each matrix D(k) from its immediate predecessor D(k−1) using
the following recurrence relation:
dij(k) = min{dij(k−1), dik(k−1) + dkj(k−1)} for k ≥ 1, dij(0) = wij

• The element in row i and column j of the current distance matrix D(k−1) is replaced by the sum of
the elements in the same row i and the column k and in the same row k and column j if and only if
the latter sum is smaller than its current value.

Pseudocode for Floyd’s algorithm

ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd’s algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths’ lengths
D ← W //is not necessary if W can be overwritten
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j]← min{D[i, j], D[i, k] + D[k, j]}
return D
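A direct Python sketch of Floyd’s algorithm, updating one matrix in place and using float("inf") for missing edges (the three-vertex weight matrix is an illustrative example of ours):

```python
INF = float("inf")

def floyd(w):
    """All-pairs shortest path lengths; w uses INF for absent edges."""
    n = len(w)
    d = [row[:] for row in w]   # D(0) = W; updated in place for each k
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # shorter of the current path and the path through vertex k
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

w = [[0,   3,   INF],
     [INF, 0,   2],
     [1,   INF, 0]]
print(floyd(w))  # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]
```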


Time Efficiency is (n3)

Illustration of the application of Floyd’s algorithm

Greedy Technique
The greedy approach suggests constructing a solution through a sequence of steps, each expanding a
partially constructed solution obtained so far, until a complete solution to the problem is reached.
The choice made at each step must be:
• Feasible, i.e., it has to satisfy the problem’s constraints
• Locally optimal, i.e., it has to be the best local choice among all feasible choices available at that
step
• Irrevocable, i.e., once a decision is made, it cannot be changed on subsequent steps of the algorithm

Prim’s Algorithm
• DEFINITION A spanning tree of an undirected connected graph is its connected acyclic subgraph
(i.e., a tree) that contains all the vertices of the graph.
• If such a graph has weights assigned to its edges, a minimum spanning tree is its spanning tree of
the smallest weight, where the weight of a tree is defined as the sum of the weights on all its
edges.
• The minimum spanning tree problem is the problem of finding a minimum spanning tree for a
given weighted connected graph.

Applications
• It has direct applications to the design of all kinds of networks, including communication, computer,
transportation, and electrical networks, by providing the cheapest way to achieve connectivity.
• It identifies clusters of points in data sets.
• It has been used for classification purposes in archeology, biology, sociology, and other sciences.
• It is also helpful for constructing approximate solutions to more difficult problems such as the
traveling salesman problem.

Prim’s algorithm
• Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding subtrees.
• The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V
of the graph’s vertices.
• On each iteration, the algorithm expands the current tree in the greedy manner by simply attaching
to it the nearest vertex (connected by an edge of smallest weight) not in that tree.
• The algorithm stops after all the graph’s vertices have been included in the tree being
constructed.
• Since the algorithm expands a tree by exactly one vertex on each of its iterations, the total number
of such iterations is n − 1, where n is the number of vertices in the graph

ALGORITHM Prim(G)
//Prim’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ←{v0} //the set of tree vertices can be initialized with any vertex
ET ←∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u) such that v is in VT and u is in V − VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
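A Python sketch of Prim’s algorithm using a min-heap of candidate edges; the dict-of-dicts graph representation and stale-entry skipping (instead of true priority updates) are implementation choices of ours:

```python
import heapq

def prim(graph):
    """Prim's MST for a connected undirected graph: dict vertex -> {neighbor: weight}.
    Returns a list of tree edges (v, u, weight)."""
    start = next(iter(graph))
    in_tree = {start}
    tree_edges = []
    # heap of candidate edges (weight, tree_vertex, fringe_vertex)
    heap = [(w, start, u) for u, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, v, u = heapq.heappop(heap)
        if u in in_tree:
            continue   # stale entry: u was already attached via a lighter edge
        in_tree.add(u)
        tree_edges.append((v, u, w))
        for x, wx in graph[u].items():
            if x not in in_tree:
                heapq.heappush(heap, (wx, u, x))
    return tree_edges

g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
print(sum(w for _, _, w in prim(g)))  # 3
```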

Example: Apply Prim’s algorithm to the following graph. Include in the priority queue all the vertices
not already in the tree.


Time complexity

If a graph is represented by its weight matrix and the priority queue is implemented as an unordered array,
the algorithm’s running time will be in Θ(|V|²). On each of the |V| − 1 iterations, the array implementing
the priority queue is traversed to find and delete the minimum and then to update, if necessary, the priorities
of the remaining vertices.

If a graph is represented by its adjacency lists and the priority queue is implemented as a min-heap, the
running time of the algorithm is in O(|E| log |V|).


Kruskal’s Algorithm


The algorithm constructs a minimum spanning tree as an expanding sequence of subgraphs that are always
acyclic but are not necessarily connected on the intermediate stages of the algorithm.
The algorithm begins by sorting the graph’s edges in nondecreasing order of their weights. Then, starting
with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if
such an inclusion does not create a cycle and simply skipping the edge otherwise.

ALGORITHM Kruskal(G)
//Kruskal’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights w(ei1) ≤ . . . ≤ w(ei|E|)
ET ←∅; ecounter ← 0 //initialize the set of tree edges and its size
k ← 0 //initialize the number of processed edges
while ecounter < |V| − 1 do
    k ← k + 1
    if ET ∪ {eik} is acyclic
        ET ← ET ∪ {eik}; ecounter ← ecounter + 1
return ET
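The acyclicity test in the pseudocode is usually implemented with a union-find structure; below is a minimal Python sketch (path-halving union-find without union by rank, chosen for brevity, not the only possible implementation):

```python
def kruskal(n, edges):
    """Kruskal's MST: vertices 0..n-1, edges given as (weight, u, v) tuples.
    Union-find detects whether adding an edge would create a cycle."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # acyclic: endpoints in different trees
            parent[ru] = rv
            mst.append((w, u, v))
            if len(mst) == n - 1:
                break
    return mst

print(kruskal(3, [(1, 0, 1), (2, 1, 2), (4, 0, 2)]))  # [(1, 0, 1), (2, 1, 2)]
```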

Time complexity

With an efficient sorting algorithm, the time efficiency of Kruskal’s algorithm will be in O(|E| log |E|).


Example

Dijkstra’s Algorithm
Single-source shortest-paths problem: for a given vertex called the source in a weighted connected
graph, find shortest paths to all its other vertices.

Applications
• Transportation planning and packet routing in communication networks, including the Internet.
• Finding shortest paths in social networks, speech recognition, document formatting, robotics,
compilers, and airline crew scheduling.


• In the world of entertainment, one can mention pathfinding in video games and finding best
solutions to puzzles using their state-space graphs.

Description of Dijkstra’s algorithm


• The best-known algorithm for the single-source shortest-paths problem is Dijkstra’s algorithm.
• This algorithm is applicable to undirected and directed graphs with nonnegative weights only.
• Dijkstra’s algorithm finds the shortest paths to a graph’s vertices in order of their distance from
a given source.
• First, it finds the shortest path from the source to a vertex nearest to it, then to a second nearest, and
so on.
• In general, before its ith iteration commences, the algorithm has already identified the shortest paths
to i − 1 other vertices nearest to the source.

The set of vertices adjacent to the vertices of the tree constructed so far can be referred to as “fringe
vertices”; they are the candidates from which Dijkstra’s algorithm selects the next vertex nearest to the
source.

To facilitate the algorithm’s operations, we label each vertex with two labels.
• The numeric label d indicates the length of the shortest path from the source to this vertex found by
the algorithm so far;
• The other label indicates the name of the next-to-last vertex on such a path, (penultimate) i.e., the
parent of the vertex in the tree being constructed.

After we have identified a vertex u* to be added to the tree, we need to perform two operations:
• Move u* from the fringe to the set of tree vertices.
• For each remaining fringe vertex u that is connected to u* by an edge of weight w(u*, u) such that
du* + w(u*, u) < du, update the labels of u by u* and du* + w(u*, u), respectively.

ALGORITHM Dijkstra(G, s)
//Dijkstra’s algorithm for single-source shortest paths
//Input: A weighted connected graph G = (V, E) with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv for every vertex v in V
Initialize(Q) //initialize priority queue to empty
for every vertex v in V
    dv ←∞; pv ← null
    Insert(Q, v, dv) //initialize vertex priority in the priority queue
ds ← 0; Decrease(Q, s, ds) //update priority of s with ds
VT ←∅
for i ← 0 to |V| − 1 do
    u*← DeleteMin(Q) //delete the minimum priority element
    VT ← VT ∪ {u*}
    for every vertex u in V − VT that is adjacent to u* do
        if du* + w(u*, u) < du
            du ← du* + w(u*, u); pu ← u*
            Decrease(Q, u, du)
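A compact Python sketch of the algorithm. Python’s heapq has no Decrease operation, so a fresh heap entry is pushed on every improvement and stale entries are skipped on pop, a common substitution; the dict-of-dicts graph is our own representation choice:

```python
import heapq

def dijkstra(graph, s):
    """Shortest distances from s; graph: dict vertex -> {neighbor: weight >= 0}."""
    d = {v: float("inf") for v in graph}
    d[s] = 0
    heap = [(0, s)]
    done = set()
    while heap:
        du, u = heapq.heappop(heap)
        if u in done:
            continue   # stale entry for an already-finalized vertex
        done.add(u)
        for v, w in graph[u].items():
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(heap, (d[v], v))
    return d

g = {"a": {"b": 3, "d": 7}, "b": {"a": 3, "c": 4},
     "c": {"b": 4, "d": 2}, "d": {"a": 7, "c": 2}}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 3, 'c': 7, 'd': 7}
```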

Time complexity
The time efficiency of Dijkstra’s algorithm depends on the data structures used for implementing the
priority queue and for representing an input graph itself.
It is (|V |2) for graphs represented by their weight matrix and the priority queue implemented as an
unordered array.
For graphs represented by their adjacency lists and the priority queue implemented as a min-heap, it is in
O(|E| log |V |).

Example:

Huffman Trees and Codes


Consider a text that comprises symbols selected from some n-symbol alphabet. We can encode it by
assigning to each of the text’s symbols some sequence of bits called the codeword.

Fixed-length encoding: assigns to each symbol a bit string of the same length m (m ≥ log2 n). Example:
the standard ASCII code.

Variable-length encoding



• A coding scheme that yields a shorter bit string on average by assigning shorter codewords to
more frequent symbols and longer codewords to less frequent symbols.
• Example: the telegraph (Morse) code, in which frequent letters such as e (.) and a (.−) are assigned
short sequences of dots and dashes while infrequent letters such as q (−−.−) and z (−−..) have
longer ones.

Problem: with variable-length codewords, it becomes difficult to identify how many bits of an encoded
text represent the first (or, more generally, the ith) symbol.
• To avoid this complication, we can limit ourselves to the so-called prefix-free (or simply prefix)
codes. In a prefix code, no codeword is a prefix of the codeword of another symbol.
• If we want to create a binary prefix code for some alphabet, associate the alphabet’s symbols with
leaves of a binary tree in which all the left edges are labeled by 0 and all the right edges are labeled
by 1. The codeword of a symbol can then be obtained by recording the labels on the simple path
from the root to the symbol’s leaf.
• Since there is no simple path to a leaf that continues to another leaf, no codeword can be a prefix
of another codeword; hence, any such tree yields a prefix code.

The following greedy algorithm, invented by David Huffman, assigns shorter bit strings to high-frequency
symbols and longer ones to low-frequency symbols.

Huffman’s algorithm
Step 1 Initialize n one-node trees and label them with the symbols of the alphabet given. Record the
frequency of each symbol in its tree’s root to indicate the tree’s weight.
Step 2 Repeat the following operation until a single tree is obtained.
Find two trees with the smallest weight (ties can be broken arbitrarily) Make them the left and right
subtree of a new tree and record the sum of their weights in the root of the new tree as its weight.

A tree constructed by the above algorithm is called a Huffman tree. It defines in the manner described
above a Huffman code.
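The two steps above can be sketched in Python with heapq. As an implementation shortcut of ours, each heap entry carries a partial code table instead of an explicit tree, and an integer counter breaks weight ties so dicts are never compared:

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman code; freq: dict symbol -> frequency.
    Returns dict symbol -> codeword (tie-breaking is arbitrary but deterministic)."""
    # each heap entry: (weight, tiebreaker, {symbol: partial codeword})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two smallest-weight trees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}    # left edges labeled 0
        merged.update({s: "1" + c for s, c in right.items()})  # right edges 1
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

freq = {"A": 0.35, "B": 0.1, "C": 0.2, "D": 0.2, "_": 0.15}
codes = huffman_codes(freq)
avg = sum(freq[s] * len(codes[s]) for s in freq)
print(round(avg, 2))  # 2.25
```

The exact codewords depend on how weight ties are broken, but the codeword lengths, and hence the average bits per symbol, do not.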

Example: Consider the five-symbol alphabet {A, B, C, D, _} with the following occurrence frequencies
in a text made up of these symbols:
A: 0.35, B: 0.1, C: 0.2, D: 0.2, _: 0.15
Solution


The resulting codewords are: A: 11, B: 100, C: 00, D: 01, _: 101.

Hence, DAD is encoded as 011101, and 10011011011101 is decoded as BAD_AD.

With the occurrence frequencies given and the codeword lengths obtained, the average number of bits per
symbol in this code is
2 · 0.35 + 3 · 0.1 + 2 · 0.2 + 2 · 0.2 + 3 · 0.15 = 2.25.

If we used a fixed-length encoding for the same alphabet, we would have to use at least 3 bits per
symbol. Thus, for this toy example, Huffman’s code achieves a compression ratio (a standard measure
of a compression algorithm’s effectiveness) of (3 − 2.25)/3 · 100% = 25%.

