
Module 2

Advanced Design and Analysis Techniques

Dynamic Programming: Matrix Chain Multiplication Problem, Optimal Binary Search Tree, etc.; Greedy Algorithms; Amortized Analysis

Dynamic Programming:
Dynamic programming is a mathematical optimization method and algorithmic paradigm developed by Richard Bellman in the 1950s. It has found applications in numerous fields, from aerospace engineering to economics.
Dynamic programming solves a complex problem by breaking it into subproblems, typically via recursion, and storing the results of those subproblems so that the same results are never computed twice.
The following two properties of a problem suggest that it can be solved using dynamic programming:

1. Overlapping Subproblems
2. Optimal Substructure

1. Overlapping Subproblems Property: Like divide and conquer, dynamic programming combines solutions to subproblems. Dynamic programming is mainly used when solutions to the same subproblems are needed again and again: computed solutions are stored in a table so that they do not have to be recomputed. Consequently, dynamic programming is not useful when a problem has no common (overlapping) subproblems, because there is no point in storing solutions that will never be needed again.

2. Optimal Substructure Property: A problem has the optimal substructure property if an optimal solution to the problem can be constructed from optimal solutions to its subproblems, without trying every possible way of solving those subproblems.
Dynamic programming proceeds through the following steps (a minimal sketch follows the list):

1. Break the complex problem down into simpler subproblems.
2. Find the optimal solutions to these subproblems.
3. Store the results of the subproblems; this process is known as memoization.
4. Reuse the stored results so that the same subproblem is never computed more than once.
5. Finally, combine the stored results to solve the original complex problem.
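
As an illustration of these steps, here is a minimal memoization sketch using the Fibonacci numbers (an illustrative example of ours, not from the notes): the naive recursion solves the same subproblems exponentially many times, while the memoized version stores each result in a dictionary and computes it only once.

# Python sketch of memoization (step 3 above), using Fibonacci numbers
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        # Step 4: reuse a stored subproblem result
        return memo[n]
    if n <= 1:
        # Base cases of the recursion
        return n
    # Steps 1-3: break into subproblems, solve them, store the result
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

if __name__ == '__main__':
    print(fib(40))  # 102334155, computed in O(n) calls instead of O(2^n)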

Optimal Binary Search Tree

An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search Tree, is a binary search tree that minimizes the expected search cost. In a binary search tree, the search cost is the number of comparisons required to find a given key.
In an OBST, each node is assigned a weight that represents the probability of its key being searched for; the weights across the tree sum to 1.0. The expected search cost of the tree is the sum, over all nodes, of the node's level (with the root at level 1) multiplied by its weight.
To construct an OBST, we start with a sorted list of keys and their probabilities. We then build a table containing the minimum expected search cost of every possible subtree of the original list; dynamic programming lets us fill in this table efficiently. Finally, we use the table to construct the OBST.
The time complexity of constructing an OBST this way is O(n^3), where n is the number of keys; with Knuth's optimization this can be reduced to O(n^2). Once the OBST is constructed, searching for a key takes a number of comparisons equal to that key's level, as in any binary search tree; the OBST minimizes the expected value of this cost.
Examples:
Input: keys[] = {10, 12}, freq[] = {34, 50}
There are two possible BSTs:

  Tree I:    Tree II:
    10          12
      \        /
      12     10

The frequencies of searches for 10 and 12 are 34 and 50 respectively.
Cost of tree I  = 34*1 + 50*2 = 134
Cost of tree II = 50*1 + 34*2 = 118

Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
There are five possible BSTs:

  Tree I:    Tree II:    Tree III:    Tree IV:    Tree V:
    10         12           20          10           20
      \       /  \         /             \          /
      12    10    20     12              20       10
        \               /               /            \
        20            10              12              12

Among all possible BSTs, the cost of the fifth BST is minimum:
Cost of tree V = 1*50 + 2*34 + 3*8 = 142
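
The table-filling step described above can be sketched as follows; this is an illustrative O(n^3) implementation (the name optimal_bst_cost is ours, not from the notes). cost[i][j] holds the minimum expected search cost of a BST containing keys i..j, and every key r in the range is tried as the root, splitting the range into two smaller subproblems. Running it on freq = [34, 8, 50] reproduces the cost 142 computed above.

# Python sketch: minimum search cost of an optimal BST, O(n^3) DP
def optimal_bst_cost(freq):
    n = len(freq)
    # cost[i][j] = minimum search cost of a BST built from keys i..j
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]
    # L is the number of keys in the subproblem
    for L in range(2, n + 1):
        for i in range(n - L + 1):
            j = i + L - 1
            # Every key in the range moves one level deeper under the
            # chosen root, so the whole frequency sum is added once
            total = sum(freq[i:j + 1])
            best = float('inf')
            for r in range(i, j + 1):
                # Try key r as root; its left and right subtrees are
                # smaller subproblems already stored in the table
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                best = min(best, left + right)
            cost[i][j] = best + total
    return cost[0][n - 1]

if __name__ == '__main__':
    print(optimal_bst_cost([34, 50]))     # 118 (tree II above)
    print(optimal_bst_cost([34, 8, 50]))  # 142 (tree V above)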
Knapsack problem

Given a bag with a maximum weight capacity W and a set of items, each with an associated weight and value, decide how many of each item to take so that the total weight does not exceed the capacity and the total value is maximized.

The knapsack problem can be classified into the following types:

1. Fractional Knapsack Problem (greedy approach): an item may be put into the bag partly, broken into a fraction of its weight.
2. 0/1 Knapsack Problem (dynamic programming approach): each item is either put into the bag completely or not at all.
3. Bounded Knapsack Problem: each item is available in a limited number of copies.
4. Unbounded Knapsack Problem: each item is available in an unlimited number of copies.

0/1 Knapsack problem

# Python code to implement the above approach

def knapSack(W, wt, val, n):
    # dp[w] = best value achievable with capacity w
    dp = [0 for i in range(W + 1)]
    # Taking the first i elements
    for i in range(1, n + 1):
        # Starting from the back, so that dp[w - wt[i-1]] still holds
        # the result of the previous computation over i-1 items
        for w in range(W, 0, -1):
            if wt[i - 1] <= w:
                # Finding the maximum value: skip item i or take it
                dp[w] = max(dp[w], dp[w - wt[i - 1]] + val[i - 1])
    # Returning the maximum value of the knapsack
    return dp[W]

# Driver code
if __name__ == '__main__':
    profit = [60, 100, 120]
    weight = [10, 20, 30]
    W = 50
    n = len(profit)
    print(knapSack(W, weight, profit, n))  # 220

Time Complexity: O(N * W). Auxiliary Space: O(W)


Fractional knapsack problem

Input: arr[] = {{60, 10}, {100, 20}, {120, 30}}, W = 50 (each pair is {value, weight})
Output: 240
Explanation: Take the 10 kg and 20 kg items completely, plus a 2/3 fraction of the 30 kg item. The total value is 60 + 100 + (2/3)*120 = 240.

Input: arr[] = {{500, 30}}, W = 10
Output: 166.667
Explanation: Only a 1/3 fraction of the single item fits: (1/3)*500 = 166.667.

Time Complexity: O(N * logN)

Auxiliary Space: O(N)
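
The greedy approach behind these figures can be sketched as follows (the name fractional_knapsack and the (value, weight) tuple layout are our illustrative choices): sort the items by value-to-weight ratio, fill the bag with the best ratio first, and take a fraction of the last item if it does not fit whole. The O(N log N) time above comes from the sort.

# Python sketch of the greedy fractional knapsack
def fractional_knapsack(items, W):
    # items is a list of (value, weight) pairs
    # Sort by value-to-weight ratio, best ratio first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if W <= 0:
            break
        # Take the whole item if it fits, otherwise a fraction of it
        take = min(weight, W)
        total += value * (take / weight)
        W -= take
    return total

if __name__ == '__main__':
    print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
    print(fractional_knapsack([(500, 30)], 10))  # 166.666...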

Sr. No  0/1 Knapsack Problem                          Fractional Knapsack Problem

1.      Solved using a dynamic programming            Solved using a greedy approach.
        approach.

2.      A greedy strategy does not yield an           A greedy strategy yields an optimal
        optimal solution.                             solution.

3.      We are not allowed to break items.            We can break items to maximize the
                                                      total value of the knapsack.

4.      Finds a most valuable subset of items         Fills the knapsack to exactly its
        whose total weight is at most W.              weight capacity W.

5.      Items are taken in whole (integer,            Items can be taken in fractions
        0 or 1) quantities.                           (floating-point quantities).

Matrix Chain Multiplication Problem

Greedy Algorithms

Fractional Knapsack Problem

Job Sequencing Problem

Optimal File Merge Patterns

Amortized Analysis
