Module III
Greedy Method
In a greedy algorithm, the choice that looks best at the moment is made at every step.
A greedy algorithm is easy to design and apply, even to complex problems.
At each step it chooses the option that promises the most immediate benefit, without looking ahead.
The algorithm is called greedy because it commits to the locally optimal choice for each smaller instance and does not reconsider the problem as a whole.
Once a choice is made, a greedy algorithm never reconsiders it.
A greedy algorithm works recursively, creating a group of objects from the smallest possible component parts.
Recursion is a procedure for solving a problem in which the solution to a specific problem depends on the solution of a smaller instance of that problem.
Eg: Huffman coding, knapsack problem, minimum spanning tree, job scheduling with deadlines, single-source shortest path, travelling salesman problem.
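As a small illustration of the greedy choice, here is a hedged sketch (my own example, not from these notes): making change with the coin system 1, 5, 10, 25, where always taking the largest coin that still fits happens to be optimal.

    def greedy_coin_change(amount, coins=(25, 10, 5, 1)):
        # Greedy choice: at every step, take the largest coin that still fits.
        # This is optimal for canonical coin systems such as this one, but a
        # greedy strategy is not optimal for every coin system.
        result = []
        for coin in sorted(coins, reverse=True):
            while amount >= coin:
                amount -= coin
                result.append(coin)
        return result

    print(greedy_coin_change(63))  # [25, 25, 10, 1, 1, 1]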
Dynamic Programming
Dynamic programming is an optimization technique which divides the problem into smaller subproblems and, after solving each subproblem, combines all the solutions to obtain the ultimate solution.
Unlike the divide and conquer method, dynamic programming reuses the solutions to the subproblems many times.
Computing the Fibonacci series is a classic example: the naive recursive algorithm solves the same subproblems repeatedly, and dynamic programming stores their results so each is solved only once.
Eg: Matrix chain multiplication, 0/1 knapsack problem, all-pairs shortest path, Floyd-Warshall algorithm, travelling salesman problem.
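To see why this reuse matters, here is a minimal sketch (an illustration of the point above, not code from these notes): the naive recursive Fibonacci revisits the same subproblems over and over, which a simple call counter makes visible.

    def fib_naive(n, calls):
        # Plain recursion: fib_naive(n - 1) and fib_naive(n - 2) recompute
        # the same subproblems again and again.
        calls['count'] += 1
        if n < 2:
            return n
        return fib_naive(n - 1, calls) + fib_naive(n - 2, calls)

    counter = {'count': 0}
    print(fib_naive(20, counter))  # 6765
    print(counter['count'])        # 21891 calls for n = 20: exponential blow-up

The dynamic programming versions in the detailed section below bring this down to a single evaluation per subproblem.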
Backtracking Algorithm
Backtracking is an optimization technique used to solve combinatorial problems.
It is applied to both programmatic and real-life problems.
The eight queens problem, the Sudoku puzzle and navigating a maze are popular examples where the backtracking algorithm is used.
In backtracking, we start with a partial solution that satisfies all the required conditions. Then we move to the next level, and if that level does not produce a satisfactory solution, we return one level back and start again with a new option.
Detailed Discussion
Dynamic Programming
Dynamic Programming is one of the most powerful design techniques for solving optimization problems.
A divide and conquer algorithm partitions the problem into disjoint subproblems, solves the subproblems recursively, and then combines their solutions to solve the original problem.
Dynamic Programming is used when the subproblems are not independent, e.g. when they share sub-subproblems. In this case, divide and conquer may do more work than necessary, because it solves the same subproblem multiple times.
o Optimal substructure: a problem exhibits optimal substructure if an optimal solution contains optimal solutions to its subproblems.
o Overlapping subproblems: a problem has overlapping subproblems when a recursive algorithm would visit the same subproblems repeatedly. Basically, there are two ways of handling the overlapping subproblems:
a. Top-down approach
It is also termed the memoization technique. The problem is broken into subproblems, these subproblems are solved, and the solutions are remembered in case they need to be solved again in the future. This means the values are stored in a data structure that lets us reach them efficiently when the same subproblem occurs during program execution (see the sketch after this list).
b. Bottom-up approach
It is also termed the tabulation technique. All subproblems are solved in advance, starting from the smallest, and their solutions are then used to build up the solution to the larger problem (see the sketch after this list).
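The sketch below illustrates both approaches on the Fibonacci series (a minimal example of the two techniques; the function names are my own):

    from functools import lru_cache

    # Top-down (memoization): recurse as usual, but cache every result so
    # each subproblem is solved only once.
    @lru_cache(maxsize=None)
    def fib_memo(n):
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    # Bottom-up (tabulation): fill a table from the smallest subproblem up,
    # so each value is computed exactly once, with no recursion at all.
    def fib_tab(n):
        table = [0] * (n + 1)
        if n >= 1:
            table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_memo(50))  # 12586269025
    print(fib_tab(50))   # 12586269025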
Principle of Optimality
• Definition: A problem is said to satisfy the Principle of Optimality if the sub solutions of an optimal
solution of the problem are themselves optimal solutions for their subproblems.
• Dynamic programming design involves 4 major steps:
o Develop a mathematical notation that can express any solution and sub solution for the problem
at hand.
o Prove that the Principle of Optimality holds.
o Develop a recurrence relation that relates a solution to its sub solutions, using the math notation
of step 1. Indicate what the initial values are for that recurrence relation, and which term signifies
the final solution.
o Write an algorithm to compute the recurrence relation.
• Steps 1 and 2 need not be in that order. Do what makes sense in each problem.
• Step 3 is the heart of the design process. In high level algorithmic design situations, one can stop at step
3. In this course, however, we will carry out step 4 as well.
• Without the Principle of Optimality, it won't be possible to derive a sensible recurrence relation in step 3.
• When the Principle of Optimality holds, the 4 steps of DP are guaranteed to yield an optimal solution. No
proof of optimality is needed.
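As a worked illustration of the four steps (my own example, using the 0/1 knapsack problem mentioned earlier): let f(i, w) denote the best value achievable using the first i items within capacity w (step 1). The Principle of Optimality holds because an optimal packing of the first i items must contain an optimal packing of the first i-1 items in the remaining capacity (step 2). The recurrence is f(i, w) = max(f(i-1, w), f(i-1, w - wi) + vi) when item i fits, with f(0, w) = 0 as the initial values and f(n, W) as the final answer (step 3). A sketch of step 4:

    def knapsack(values, weights, capacity):
        # f[i][w] = best value using the first i items within capacity w
        # (the recurrence from step 3, computed bottom-up).
        n = len(values)
        f = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(capacity + 1):
                f[i][w] = f[i - 1][w]            # item i is skipped
                if weights[i - 1] <= w:          # item i is taken, if it fits
                    f[i][w] = max(f[i][w], f[i - 1][w - weights[i - 1]] + values[i - 1])
        return f[n][capacity]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220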
1. Stages: The problem can be divided into several subproblems, which are called stages. A stage is a small
portion of a given problem. For example, in the shortest path problem, they were defined by the
structure of the graph.
2. States: Each stage has several states associated with it. The states for the shortest path problem were the
nodes reached.
3. Decision: At each stage, there can be multiple choices, out of which the best decision should be taken. The decision taken at every stage should be optimal; this is called a stage decision.
4. Optimal policy: It is a rule which determines the decision at each stage; a policy is called an optimal policy if it is globally optimal. This is known as Bellman's principle of optimality.
5. Given the current state, the optimal choices for each of the remaining states do not depend on the previous states or decisions. In the shortest path problem, it was not necessary to know how we got to a node, only that we did.
6. There exists a recursive relationship that identifies the optimal decision for stage j, given that stage j+1 has already been solved.
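To make stages, states and the stage-j recurrence concrete, here is a hedged sketch (the graph and function names are my own illustration): the shortest path in a multistage graph, computed backwards so that solving stage j only requires stage j+1 to be solved already.

    def multistage_shortest_path(stages, edges):
        # stages[j] lists the states at stage j; edges[(u, v)] is the cost of
        # moving from a state u in stage j to a state v in stage j+1.
        INF = float('inf')
        dist = {stages[-1][0]: 0}                    # cost-to-go from each state
        for j in range(len(stages) - 2, -1, -1):     # solve stage j after stage j+1
            for u in stages[j]:
                dist[u] = min((edges.get((u, v), INF) + dist[v]
                               for v in stages[j + 1]), default=INF)
        return dist[stages[0][0]]

    stages = [['s'], ['a', 'b'], ['c', 'd'], ['t']]
    edges = {('s', 'a'): 1, ('s', 'b'): 2,
             ('a', 'c'): 4, ('a', 'd'): 1, ('b', 'c'): 2, ('b', 'd'): 3,
             ('c', 't'): 2, ('d', 't'): 3}
    print(multistage_shortest_path(stages, edges))  # 5, via s -> a -> d -> t

Note how the decision at each state depends only on the costs already computed for the next stage, never on how the state was reached.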
Backtracking is an algorithmic method that solves a problem by building candidates incrementally and abandoning a candidate as soon as it cannot lead to a valid solution. It uses a recursive approach to explore the problem. We can say that backtracking is used to examine all possible combinations in order to solve an optimization problem.
Backtracking is a systematic way of trying out different sequences of decisions until we find one that
"works."
A backtracking algorithm is a problem-solving algorithm that uses a brute force approach for finding the
desired output.
The brute force approach tries out all the possible solutions and chooses the desired/best solution.
The term backtracking suggests that if the current solution is not suitable, then backtrack and try other
solutions. Thus, recursion is used in this approach.
This approach is used to solve problems that have multiple solutions. If you want an optimal solution, you
must go for dynamic programming.
Backtracking is commonly applied to two kinds of problems:
• Optimisation problems, where it is used to find the best solution that can be applied.
• Enumeration problems, where it is used to find the set of all feasible solutions of the problem.
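As an illustration of the eight queens example mentioned above, here is a minimal backtracking sketch (my own code): it counts the solutions for an n x n board by placing one queen per row and returning one level back whenever a placement conflicts.

    def count_n_queens(n):
        cols, diag1, diag2 = set(), set(), set()

        def place(row):
            if row == n:                      # all rows filled: one full solution
                return 1
            total = 0
            for col in range(n):
                # Prune any square already attacked by an earlier queen.
                if col in cols or (row - col) in diag1 or (row + col) in diag2:
                    continue
                cols.add(col); diag1.add(row - col); diag2.add(row + col)
                total += place(row + 1)       # extend the partial solution
                cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)  # backtrack
            return total

        return place(0)

    print(count_n_queens(8))  # 92 solutions to the eight queens problem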
Branch and bound is an algorithm design paradigm which is generally used for solving combinatorial optimization problems. These problems are typically exponential in time complexity and may require exploring all possible permutations in the worst case. The branch and bound technique solves these problems relatively quickly by pruning parts of the solution space that cannot contain a better solution.
Branch and bound usually applies to problems that have a finite number of solutions, in which the solutions can be represented as a sequence of options.
The first part of branch and bound, branching, requires several choices to be made so that the choices branch out into the solution space.
In these methods, the solution space is organized as a treelike structure.
Branching out to all possible choices guarantees that no potential solutions will be left uncovered. But
because the target problem is usually NP-complete or even NP-hard, the solution space is often too vast
to traverse.
An important advantage of branch-and-bound algorithms is that we can control the quality of the solution
to be expected, even if it is not yet found.
What is the difference between FIFO Branch and Bound, LIFO Branch and Bound and LC Branch and
Bound?
Branch & Bound discovers branches within the complete search space by using estimated bounds to limit
the number of possible solutions. The different types (FIFO, LIFO, LC) define different 'strategies' to explore
the search space and generate branches.
FIFO (first in, first out): the oldest node in the queue is always used to extend the branch. This leads to a breadth-first search, where all nodes at depth d are visited first, before any nodes at depth d+1 are visited.
LIFO (last in, first out): the youngest node in the queue is always used to extend the branch. This leads to a depth-first search, where the branch is extended through the first child discovered at each depth, until a leaf node is reached.
LC (lowest cost): the branch is extended by the node which adds the lowest additional costs, according to a
given cost function. The strategy of traversing the search space is therefore defined by the cost function.
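The three strategies differ only in the data structure that holds the live nodes, as the hedged skeleton below shows (the problem and all names are my own illustration): a branch and bound search that picks one number from each list so that the total is minimised, with the frontier acting as a FIFO queue, a LIFO stack, or a min-heap keyed on a lower-bound cost function (LC).

    from collections import deque
    import heapq

    def branch_and_bound(root, children, cost, bound, is_complete, strategy='LC'):
        # Generic skeleton: children(node) expands a node, cost(node) is the
        # cost of a complete solution, bound(node) is an optimistic lower
        # bound for a partial one. Only the frontier changes per strategy.
        best, best_cost = None, float('inf')
        if strategy == 'LC':        # least cost: min-heap ordered by the bound
            frontier = [(bound(root), id(root), root)]
            push = lambda node: heapq.heappush(frontier, (bound(node), id(node), node))
            pop = lambda: heapq.heappop(frontier)[2]
        elif strategy == 'FIFO':    # oldest node first: breadth-first search
            frontier = deque([root]); push = frontier.append; pop = frontier.popleft
        else:                       # 'LIFO', youngest node first: depth-first search
            frontier = [root]; push = frontier.append; pop = frontier.pop
        while frontier:
            node = pop()
            if bound(node) >= best_cost:      # bounding: cannot beat the incumbent
                continue
            if is_complete(node):
                if cost(node) < best_cost:
                    best, best_cost = node, cost(node)
            else:
                for child in children(node):  # branching
                    push(child)
        return best, best_cost

    lists = [[4, 7], [2, 9], [5, 3]]
    mins = [min(l) for l in lists]
    children = lambda node: [node + (x,) for x in lists[len(node)]]
    cost = lambda node: sum(node)
    # Optimistic bound: current sum plus the cheapest option in each remaining list.
    bound = lambda node: sum(node) + sum(mins[len(node):])
    is_complete = lambda node: len(node) == len(lists)

    print(branch_and_bound((), children, cost, bound, is_complete, 'LC'))
    # ((4, 2, 3), 9) -- the same optimum is found with 'FIFO' or 'LIFO'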