Mod-4 Greedy Dynamic-24

DYNAMIC PROGRAMMING

• A design method that can be used when the solution to a problem can be
viewed as the result of a sequence of decisions.

• Greedy strategy
– Explores only one path (decision sequence), the one that seems
to be optimal.
– Dynamic programming, in effect, explores all the paths.
• Brute force
– Tries all possible decision sequences and picks the best.
– Its time and space requirements are very large.
• Dynamic programming is similar to divide-and-conquer in that an
instance of the problem is divided into smaller instances.

• In this approach, smaller instances are solved first and their solutions
are stored, instead of being recomputed each time they are required.
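The solve-once-and-store idea can be sketched in a few lines. The Fibonacci recurrence below is an illustrative example of our own choosing, not a problem from these slides:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # store each subproblem's answer the first time it is solved
def fib(n):
    if n < 2:
        return n
    # Without the cache, these two calls would recompute the same
    # smaller instances exponentially many times.
    return fib(n - 1) + fib(n - 2)
```

With the cache, each of the n distinct subproblems is solved exactly once, so the exponential recursion collapses to linear time.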


Dynamic Programming Principle
• Drastically reduces the amount of enumeration by avoiding the
enumeration of decision sequences that cannot possibly be optimal.
• Uses the principle of optimality to obtain an optimal decision sequence.

• Principle of Optimality

• An optimal sequence of decisions has the property that, whatever the
initial state and the first decision are, the remaining decisions must
constitute an optimal decision sequence with respect to the state
resulting from the first decision.
Knapsack problem
• Given some items, pack the knapsack to get the maximum total value.
Each item has some weight and some value. The total weight we can
carry is no more than some fixed capacity M, so we must consider the
weights of items as well as their values.

There are two versions of the problem:


1. “0-1 knapsack problem”
• Items are indivisible; you either take an item or you don't. Some special
instances can be solved with dynamic programming.

2. “Fractional knapsack problem”


• Items are divisible: you can take any fraction of an item.
0-1 Knapsack problem
• Given a knapsack with maximum capacity M, and a set S consisting of n
items
• Each item i has some weight wi and benefit value Pi (all wi and M are
integer values)
• Problem: How to pack the knapsack to achieve maximum total value of
packed items?
• The problem, in other words, is to find a subset T of S that

maximizes Σ(i∈T) Pi   subject to   Σ(i∈T) wi ≤ M

• The problem is called a “0-1” problem because each item must be
entirely accepted or rejected.
Solution by Brute force
Let's first solve this problem with a straightforward algorithm.
• Since there are n items, there are 2^n possible combinations of items.
• We go through all combinations and find the one with maximum
value whose total weight is less than or equal to M.
• The running time will be O(2^n).
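A sketch of this brute-force enumeration (function and variable names are our own):

```python
from itertools import combinations

def knapsack_brute_force(weights, values, M):
    """Check all 2^n subsets; keep the best one whose total weight fits in M."""
    n = len(weights)
    best = 0
    for r in range(n + 1):                      # subsets of every size r
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= M:
                best = max(best, sum(values[i] for i in subset))
    return best
```

Even for modest n (say, 40 items), the 2^n subsets make this approach impractical, which motivates the dynamic-programming formulation below.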
Using Dynamic Approach
• We can do better with an algorithm based on dynamic programming
• We need to carefully identify the subproblems
Recursive Formula

V[i, j] = V[i-1, j]                                 if j - wi < 0
V[i, j] = max{ V[i-1, j], V[i-1, j - wi] + Pi }     if j - wi ≥ 0

 The best subset of Si that has total weight ≤ j either contains
item i or it does not.
 First case: wi > j. Item i can't be part of the solution, since if it
were, the total weight would be > j, which is unacceptable.
 Second case: wi ≤ j. Then item i can be in the solution, and
we choose the case with the greater value.
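The recurrence above translates directly into a table-filling sketch (function and variable names are our own):

```python
def knapsack_dp(weights, values, M):
    """0-1 knapsack: V[i][j] = best value using the first i items with capacity j."""
    n = len(weights)
    V = [[0] * (M + 1) for _ in range(n + 1)]   # row 0 / column 0: empty knapsack
    for i in range(1, n + 1):
        wi, pi = weights[i - 1], values[i - 1]
        for j in range(M + 1):
            if j - wi < 0:                      # first case: item i cannot fit
                V[i][j] = V[i - 1][j]
            else:                               # second case: keep the better option
                V[i][j] = max(V[i - 1][j], V[i - 1][j - wi] + pi)
    return V[n][M]
```

Filling the (n+1) × (M+1) table takes O(nM) time, a large improvement over the O(2^n) brute force whenever M is modest.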
Warshall’s Algorithm
• Recall that the adjacency matrix A= {aij} of a directed graph is the boolean matrix
that has 1 in its ith row and jth column if and only if there is a directed edge from
the ith vertex to the jth vertex.

• Transitive Closure:
The transitive closure of a directed graph with n vertices is the n × n
boolean matrix T = {tij} in which tij = 1 if there is a nontrivial directed
path from the ith vertex to the jth vertex, and tij = 0 otherwise.
Rules for Warshall's Algorithm
Algorithm

Time complexity is O(n³)
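A compact sketch of Warshall's algorithm on a boolean adjacency matrix (names are our own):

```python
def warshall(A):
    """Transitive closure of a digraph from its boolean adjacency matrix A."""
    n = len(A)
    R = [row[:] for row in A]           # R starts as the adjacency matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # j is reachable from i if it already was, or if paths
                # i -> k and k -> j exist using intermediate vertices up to k.
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

The three nested loops over n vertices give the O(n³) running time stated above.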


Floyd's Algorithm for the All-Pairs Shortest-Paths Problem
• Given a weighted connected graph (undirected or directed), the all-
pairs shortest-paths problem asks us to find the distances (the lengths
of the shortest paths) from each vertex to all other vertices.
• It is convenient to record the lengths of the shortest paths in an
n-by-n matrix D called the distance matrix.
• We can generate the distance matrix with an algorithm that is very
similar to Warshall's algorithm. It is called Floyd's algorithm.
• It is applicable to both undirected and directed weighted graphs,
provided that they do not contain a cycle of negative length.
Time complexity of this algorithm: O(n³)
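Floyd's algorithm differs from Warshall's only in replacing the boolean or/and with min/+. A sketch, using float('inf') to represent a missing edge (names are our own):

```python
INF = float('inf')  # stands for "no edge"

def floyd(W):
    """All-pairs shortest path lengths from the weight matrix W."""
    n = len(W)
    D = [row[:] for row in W]           # D starts as the weight matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Allow vertex k as an intermediate point on the i -> j path.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

As with Warshall's algorithm, the three nested loops give the O(n³) bound.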
Greedy Technique

Constructs a solution to an optimization problem piece by
piece through a sequence of choices that are:

• Feasible
– Any choice that satisfies the given constraints.
• Locally optimal
– The feasible choice that either maximizes or minimizes the given
objective function.
• Irrevocable
– Once made, a choice is final; it is never reconsidered.
For some problems, the greedy technique yields an optimal solution for
every instance. For most, it does not, but it can be useful for fast
approximations.
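As an illustration of these three properties, the fractional knapsack problem mentioned earlier is solved optimally by the greedy choice of highest value-per-weight ratio. A sketch (names are our own):

```python
def fractional_knapsack(weights, values, M):
    """Greedy: repeatedly take as much as fits of the item with the highest
    value-per-weight ratio (a feasible, locally optimal, irrevocable choice)."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total, remaining = 0.0, M
    for w, v in items:
        if remaining <= 0:
            break
        take = min(w, remaining)        # whole item, or the fraction that fits
        total += v * (take / w)
        remaining -= take
    return total
```

Note that the same greedy choice does not solve the 0-1 knapsack problem, which is why that version needed dynamic programming.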
Dijkstra Algorithm: Finding shortest paths in order

• The single-source shortest-paths problem asks for a family of paths, each
leading from the source to a different vertex in the graph, though some paths
may, of course, have edges in common.

• This algorithm is applicable to graphs with nonnegative weights only. Since in
most applications this condition is satisfied, the limitation has not impaired
the popularity of Dijkstra's algorithm.

• First, it finds the shortest path from the source to the vertex nearest to it, then to
the second nearest, and so on.
Algorithm
• N: set of nodes for which the shortest path has already been found
• Initialization: (start with source node s)
• N = {s}, Ds = 0 ("s is at distance zero from itself")
• Dj = Csj for all j ≠ s, the distances of directly connected neighbors
• Step A: (find the next closest node i)
• Find i ∉ N such that
• Di = min Dj over all j ∉ N
• Add i to N
• If N contains all the nodes, stop
• Step B: (update minimum costs)
• For each node j ∉ N
• Dj = min(Dj, Di + Cij)
• Go to Step A
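The steps above can be sketched directly, with C given as a cost matrix using float('inf') where no edge exists (names are our own):

```python
def dijkstra(C, s):
    """Distances from source s, following Steps A and B above."""
    n = len(C)
    N = {s}                             # nodes whose shortest path is final
    D = [C[s][j] for j in range(n)]     # initialization: direct neighbors of s
    D[s] = 0
    while len(N) < n:
        # Step A: find the next closest node i not yet in N.
        i = min((j for j in range(n) if j not in N), key=lambda j: D[j])
        N.add(i)
        # Step B: update minimum costs through i.
        for j in range(n):
            if j not in N and D[i] + C[i][j] < D[j]:
                D[j] = D[i] + C[i][j]
    return D
```

This matrix-scan version runs in O(n²); with a priority queue on an adjacency list the same algorithm runs in O(|E| log |V|).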
(Worked example: Dijkstra's algorithm traced step by step on a six-vertex
weighted graph; the figures are not reproduced here.)
Huffman Trees

• We want to encode a text that comprises characters from some n-character
alphabet by assigning to each of the text's characters a sequence of bits
called its codeword.

• We can use a fixed-length encoding that assigns to each character a bit
string of the same length m.

• Example: ASCII codes for all characters, an 8-bit representation.

• Variable-length encoding, which assigns codewords of different lengths to
different characters, introduces a problem that fixed-length encoding does
not have: how do we tell where one codeword ends and the next begins?

• To avoid this complication, we can limit ourselves to prefix-free (or simply
prefix) codes. In a prefix code, no codeword is a prefix of another
character's codeword.
With the codewords A = 11, B = 100, C = 00, D = 01, _ = 101 (from the
Huffman tree for the probabilities 0.35, 0.1, 0.2, 0.2, and 0.15,
respectively), DAD is encoded as 011101, and 10011011011101 is
decoded as BAD_AD.

The expected number of bits per character is
2(0.35) + 3(0.1) + 2(0.2) + 2(0.2) + 3(0.15) = 2.25 bits per character.
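Huffman's greedy construction (repeatedly merging the two least-probable trees) can be sketched as follows; each partial tree is represented simply as a map from symbols to partially built codewords (names are our own):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build prefix codewords by repeatedly merging the two
    least-probable trees, each stored as {symbol: partial codeword}."""
    order = count()                     # tie-breaker; dicts are not comparable
    heap = [(p, next(order), {sym: ''}) for sym, p in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)   # the two least-probable trees
        p2, _, right = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in left.items()}
        merged.update({s: '1' + c for s, c in right.items()})
        heapq.heappush(heap, (p1 + p2, next(order), merged))
    return heap[0][2]
```

Running this on the probabilities above produces codeword lengths whose weighted average is the 2.25 bits per character computed in the example.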
