Greedy, Divide and Conquer, and Dynamic Programming Approaches


Greedy Approach

As the name implies, this is a simple approach that tries to find the best solution
at every step. It makes the locally optimal choice at each step in the hope of
reaching the globally optimal solution for the entire problem.

Consider that there is an objective function that has to be optimized (maximized/
minimized). This approach makes greedy choices at each step and makes sure
that the objective function is optimized.

A greedy algorithm works if a problem exhibits the following two properties:

1. Greedy Choice Property: A globally optimal solution can be arrived at by
making locally optimal choices. In other words, an optimal solution can
be obtained by making "greedy" choices.

2. Optimal substructure: Optimal solutions contain optimal subsolutions. In
other words, answers to subproblems of an optimal solution are optimal.

The greedy algorithm has only one chance to compute the optimal solution and
thus cannot go back and look at alternate solutions. However, in many
problems, this strategy fails to produce a globally optimal solution. Let's consider
a binary tree to understand how a basic greedy algorithm works: the root is 8,
its children are 12 and 2, the largest child of 12 is 10, and the largest child
of 2 is 89.

For this problem, the objective function is to find the root-to-leaf path with
the largest sum.

Since we need to maximize the objective function, the greedy approach can be used.
The following steps are followed to find the solution:

Step 1: Initialize sum = 0

Step 2: Select the root node and add its value to sum: sum = 0 + 8 = 8

Step 3: The algorithm compares the nodes at the next level and selects the largest
one, which is 12, making sum = 20.

Step 4: The algorithm compares the nodes at the next level and selects the largest
one, which is 10, making sum = 30.

Thus, using the greedy algorithm, we get 8-12-10 as the path. But this is not the
optimal solution, since the path 8-2-89 has the largest sum, i.e. 99.

This happens because the algorithm makes decisions based on the information
available at each step, without considering the overall problem.
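
A minimal Python sketch of this example follows. The nodes 8, 12, 2, 10, and 89
come from the tree above; the remaining children (7 and 5) are assumed filler
values added only to complete the tree.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

# Tree from the example; 7 and 5 are assumed fillers.
root = Node(8,
            Node(12, Node(7), Node(10)),
            Node(2, Node(5), Node(89)))

def greedy_path_sum(node):
    # Follow the larger child at every level (the greedy choice).
    total = 0
    while node is not None:
        total += node.value
        children = [c for c in (node.left, node.right) if c is not None]
        node = max(children, key=lambda c: c.value) if children else None
    return total

def best_path_sum(node):
    # Exhaustively evaluate every root-to-leaf path for comparison.
    if node is None:
        return 0
    return node.value + max(best_path_sum(node.left), best_path_sum(node.right))

print(greedy_path_sum(root))  # 30 (path 8-12-10)
print(best_path_sum(root))    # 99 (path 8-2-89)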

Advantages of Greedy Approach/Technique

 This technique is easy to formulate and implement.

 It works efficiently in many scenarios.

 This approach typically generates a solution quickly, since it never revisits earlier choices.

Disadvantages of Greedy Approach/Technique

 This approach does not guarantee a globally optimal solution, since it never
looks back at the choices made while finding the locally optimal solution.

Although we have already covered which types of problems can, in general, be
solved using the greedy approach, here are a few popular problems which use the
greedy technique:

1. Knapsack Problem

2. Activity Selection Problem

3. Dijkstra’s Shortest Path Algorithm

4. Prim’s Algorithm for finding a Minimum Spanning Tree

5. Kruskal’s Algorithm for finding a Minimum Spanning Tree

6. Huffman Coding

7. Travelling Salesman Problem

What is the activity selection problem?


The activity selection problem is an optimization problem used to find the
maximum number of activities a person can perform if they can only work on one
activity at a time. This problem is also known as the interval scheduling
maximization problem (ISMP).

The greedy algorithm provides a simple, well-designed method for selecting the
maximum number of non-conflicting activities.

Algorithm

We are provided with n activities; each activity has its own start and finish time.
In order to find the maximum number of non-conflicting activities, the following
steps need to be taken:
 Sort the activities in ascending order based on their finish times.

 Select the first activity from this sorted list.

 Select a new activity from the list if its start time is greater than or equal
to the finish time of the previously selected activity.

 Repeat the last step until all activities in the sorted list are checked.

Example:

Consider the following 6 activities:

start[] = {1, 3, 0, 5, 8, 5};

finish[] = {2, 4, 6, 7, 9, 9};

The maximum set of activities that can be executed by a single person is
{0, 1, 3, 4} (using 0-based indices into the arrays).
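
The following Python sketch implements the steps above and runs them on this
example (the function name max_activities is just illustrative):

def max_activities(start, finish):
    # Sort activity indices in ascending order of finish time.
    order = sorted(range(len(start)), key=lambda i: finish[i])
    selected = [order[0]]             # always take the earliest-finishing activity
    last_finish = finish[order[0]]
    for i in order[1:]:
        # Take an activity only if it starts at or after the last finish time.
        if start[i] >= last_finish:
            selected.append(i)
            last_finish = finish[i]
    return selected

start = [1, 3, 0, 5, 8, 5]
finish = [2, 4, 6, 7, 9, 9]
print(max_activities(start, finish))  # [0, 1, 3, 4]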

Divide and Conquer


If we can break a single big problem into smaller sub-problems, solve the smaller
sub-problems, and combine their solutions to find the solution for the original big
problem, the whole problem becomes easier to solve.

In Merge Sort, the given unsorted array with n elements is divided
into n subarrays, each having one element, because a single element is always
sorted in itself. Then it repeatedly merges these subarrays to produce new
sorted subarrays, and in the end one complete sorted array is produced.
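
A compact Python sketch of Merge Sort makes this concrete (the input array is an
arbitrary example):

def merge_sort(arr):
    if len(arr) <= 1:               # a single element is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # conquer the left half
    right = merge_sort(arr[mid:])   # conquer the right half
    return merge(left, right)       # combine the sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])         # append whatever remains
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]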
The concept of Divide and Conquer involves three steps:

1. Divide the problem into multiple small problems.

2. Conquer the subproblems by solving them. The idea is to break down the
problem into atomic subproblems, where they are actually solved.

3. Combine the solutions of the subproblems to find the solution of the
actual problem.

Examples: The following computer algorithms are based on the Divide and Conquer
approach:

1. Binary Search

2. Sorting (merge sort, quick sort)

3. Tower of Hanoi.

Dynamic Programming 
Dynamic programming is a method for solving a complex problem by breaking it
down into simpler subproblems, solving each of those subproblems just once, and
storing their solutions (usually in an array).

Now, every time the same subproblem occurs, instead of recomputing its
solution, the previously calculated solution is used, thereby saving computation
time at the expense of storage space.

Imagine you are given a box of coins and you have to count the total number of
coins in it. Once you have done this, you are provided with another box and now
you have to calculate the total number of coins in both boxes. Obviously, you are
not going to count the number of coins in the first box again. Instead, you would
just count the total number of coins in the second box and add it to the number
of coins in the first box you have already counted and stored in your mind. This is
the exact idea behind dynamic programming.

Dynamic programming can be implemented in two ways:

 Memoization

 Tabulation

Memoization

In this approach, we try to solve the bigger problem by recursively finding the
solution to smaller sub-problems. Whenever we solve a sub-problem, we cache
its result so that we don’t end up solving it repeatedly if it’s called multiple times.
Instead, we can just return the saved result. This technique of storing the results
of already solved subproblems is called Memoization.
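
As a short illustration, here is a memoized Fibonacci in Python; using
functools.lru_cache as the cache is one possible choice, not the only way to
memoize:

from functools import lru_cache

@lru_cache(maxsize=None)   # caches each fib(n) the first time it is computed
def fib(n):
    if n <= 2:             # base cases: F(1) = F(2) = 1
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025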

Tabulation

Tabulation is the opposite of the top-down approach and avoids recursion. It is a
bottom-up approach that starts by solving the lowest-level subproblem. That
solution then lets us solve the next subproblem, and so forth. We iteratively
work upward in this way until we have solved all subproblems, thus finding the
solution to the original problem. We save time whenever a subproblem needs the
answer to a subproblem that has been solved before, and thus has had its value
tabulated.
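
A matching bottom-up sketch, again using Fibonacci (here with the convention
F(0) = 0, F(1) = 1):

def fib(n):
    if n < 2:                       # base cases: F(0) = 0, F(1) = 1
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry is built from already-tabulated answers.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(50))  # 12586269025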

Applicability of Dynamic Programming

The problems that can be solved using Dynamic Programming have the
following two main properties:

1. Overlapping sub-problems

2. Optimal Substructure

1) Overlapping Subproblems

Overlapping subproblems is a property in which a problem can be broken down
into subproblems which are used multiple times.

Dynamic Programming is mainly used when the solutions of the same subproblems
are needed again and again. In dynamic programming, computed solutions to
subproblems are stored in an array so that they don't have to be recomputed.
Dynamic Programming is therefore not useful when there are no overlapping
subproblems, because there is no point in storing solutions that will never be
needed again.

2) Optimal Substructure

Optimal substructure is a property in which an optimal solution of the original
problem can be constructed efficiently from the optimal solutions of its
subproblems.

Applications of dynamic programming

 0/1 Knapsack Problem

 Rod Cutting Problem

 All-Pairs Shortest Path Problem

 Matrix Chain Multiplication Problem

 Shortest Common Supersequence Problem

 Bellman-Ford Algorithm

 Longest Common Subsequence Problem

Dynamic Programming Example

Take the case of generating the Fibonacci sequence.

If the sequence is F(1), F(2), F(3), ..., F(50), it follows the rule F(n) = F(n-1) + F(n-2):

F(50) = F(49) + F(48)

F(49) = F(48) + F(47)

F(48) = F(47) + F(46)

...

Notice that there are overlapping subproblems: we need to calculate F(48) in
order to calculate both F(50) and F(49). This is exactly the kind of problem
where Dynamic Programming shines.
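
A tiny experiment makes the overlap visible: counting calls in a naive recursive
Fibonacci (the counter and the choice of n = 20 are only illustrative) shows how
often the same value is recomputed.

from collections import Counter

calls = Counter()

def naive_fib(n):
    calls[n] += 1                   # record every request for F(n)
    if n <= 2:
        return 1
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(20)
print(calls[10])  # F(10) alone is computed 89 times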
