Greedy, Divide and Conquer, Dynamic Approach
Greedy Approach
As the name implies, this is a simple approach that tries to make the best choice at every step. It aims to find the local optimal solution at each step, so as to arrive at the global optimal solution for the entire problem.
The greedy algorithm has only one chance to compute the optimal solution and thus cannot go back and consider alternative solutions. However, in many problems, this strategy fails to produce a globally optimal solution. Let's consider the following binary tree to understand how a basic greedy algorithm works: the root node is 8, its children are 12 and 2, the largest child of 12 is 10, and one of the children of 2 is 89.
For this problem, the objective function is: maximize the sum of the node values along a root-to-leaf path.
Since we need to maximize the objective function, the greedy approach can be used. The following steps are followed to find the solution:
Step 1: Initialize sum = 0
Step 2: Select the root node, so its value is added to sum: sum = 0 + 8 = 8.
Step 3: The algorithm compares the nodes at the next level and selects the largest one, which is 12, making sum = 8 + 12 = 20.
Step 4: The algorithm compares the nodes at the next level and selects the largest one, which is 10, making sum = 20 + 10 = 30.
Thus, using the greedy algorithm, we get 8-12-10 as the path, with a sum of 30. But this is not the optimal solution, since the path 8-2-89 has the largest sum, i.e., 99.
This happens because the algorithm makes decisions based only on the information available at each step, without considering the overall problem.
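Below is a minimal Python sketch of this greedy walkthrough; the Node class and the sibling values 7 and 5 are illustrative assumptions, since only the nodes 8, 12, 2, 10, and 89 are given in the example above.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def greedy_max_path(root):
    # At each step, pick the child with the larger value (the local optimum).
    path, total = [root.value], root.value
    node = root
    while node.left or node.right:
        children = [c for c in (node.left, node.right) if c is not None]
        node = max(children, key=lambda c: c.value)
        path.append(node.value)
        total += node.value
    return path, total

# Tree from the example; 7 and 5 are placeholder siblings.
root = Node(8, Node(12, Node(7), Node(10)), Node(2, Node(5), Node(89)))
print(greedy_max_path(root))  # ([8, 12, 10], 30), while the optimal path 8-2-89 sums to 99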
This approach minimizes the time required to generate a solution. However, it does not guarantee a globally optimal solution, since it never looks back at the choices it made while finding local optimal solutions.
Although we have already covered which types of problems can, in general, be solved using the greedy approach, here are a few popular problems that use the greedy technique:
1. Knapsack Problem
2. Dijkstra's Algorithm
3. Huffman Coding
Algorithm (Activity Selection):
1. Sort the activities in ascending order of their finish times.
2. Select the first activity from the sorted list.
3. Select a new activity from the list if its start time is greater than or equal to the finish time of the previously selected activity.
4. Repeat the last step until all activities in the sorted list are checked.
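Here is a minimal Python sketch of the algorithm above, assuming each activity is represented as a (start, finish) pair; the sample intervals are illustrative.

def activity_selection(activities):
    # Step 1: sort the activities in ascending order of finish time.
    activities = sorted(activities, key=lambda a: a[1])
    # Step 2: the first activity is always selected.
    selected = [activities[0]]
    # Steps 3-4: select each activity whose start time is greater than
    # or equal to the finish time of the previously selected activity.
    for start, finish in activities[1:]:
        if start >= selected[-1][1]:
            selected.append((start, finish))
    return selected

print(activity_selection([(5, 9), (1, 2), (3, 4), (0, 6), (5, 7), (8, 9)]))
# [(1, 2), (3, 4), (5, 7), (8, 9)]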
Divide and Conquer
This approach divides a problem into independent subproblems, solves each subproblem (usually recursively), and combines their solutions to solve the original problem. Examples of problems solved using this technique:
1. Binary Search
2. Tower of Hanoi
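As an illustration of the divide and conquer idea, here is a minimal iterative binary search sketch in Python; at every step it halves the search interval, discarding the half that cannot contain the target.

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid            # target found
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # target not present

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4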
Dynamic Programming
Dynamic programming is a method for solving a complex problem by breaking it
down into simpler subproblems, solving each of those subproblems just once, and
storing their solutions, usually in an array.
Imagine you are given a box of coins and you have to count the total number of
coins in it. Once you have done this, you are provided with another box and now
you have to calculate the total number of coins in both boxes. Obviously, you are
not going to count the number of coins in the first box again. Instead, you would
just count the total number of coins in the second box and add it to the number
of coins in the first box you have already counted and stored in your mind. This is
the exact idea behind dynamic programming.
Dynamic programming can be implemented in two ways:
1. Memoization
2. Tabulation
Memoization
In this approach, we try to solve the bigger problem by recursively finding the
solution to smaller sub-problems. Whenever we solve a sub-problem, we cache
its result so that we don’t end up solving it repeatedly if it’s called multiple times.
Instead, we can just return the saved result. This technique of storing the results
of already solved subproblems is called Memoization.
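A minimal memoization sketch in Python, using the Fibonacci rule F(n) = F(n-1) + F(n-2) discussed later in this section; the memo dictionary holds the cached results.

def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 2:
        return 1                  # base cases: F(1) = F(2) = 1
    if n not in memo:             # solve each subproblem only once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                # reuse the cached result

print(fib(50))  # 12586269025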
Tabulation:
In this approach, we solve the problem bottom-up: we start with the smallest subproblems, store their results in a table, and use the stored entries to iteratively build up the solutions to bigger subproblems until the original problem is solved. This technique of filling a table of results in a bottom-up manner is called Tabulation.
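A minimal tabulation sketch of the same Fibonacci computation; the table is filled bottom-up, starting from the base cases.

def fib_tab(n):
    if n <= 2:
        return 1
    table = [0] * (n + 1)         # table[i] will hold F(i)
    table[1] = table[2] = 1       # base cases
    for i in range(3, n + 1):     # build up from smaller subproblems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(50))  # 12586269025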
The problems that can be solved using Dynamic Programming have the following two main properties:
1. Overlapping sub-problems
2. Optimal Substructure
1) Overlapping Subproblems:
A problem has overlapping subproblems when the same subproblems must be solved again and again while computing the overall solution; dynamic programming solves each of them once and reuses the stored result. For example, if the Fibonacci sequence is F(1), F(2), F(3), ..., F(50), it follows the rule F(n) = F(n-1) + F(n-2), so a naive recursive computation of F(50) solves the same subproblems, such as F(48), many times over, as the counting sketch below illustrates.
2) Optimal Substructure:
A problem has optimal substructure when an optimal solution to the overall problem can be constructed from optimal solutions to its subproblems.
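A small counting sketch makes the overlap concrete; the naive_fib function and its counter are illustrative, not part of the original text.

def naive_fib(n, counts):
    counts[n] = counts.get(n, 0) + 1   # record every (re)computation of F(n)
    if n <= 2:
        return 1
    return naive_fib(n - 1, counts) + naive_fib(n - 2, counts)

counts = {}
naive_fib(10, counts)
print(counts)  # F(3) alone is recomputed 21 times just for n = 10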
...