
MODULE III

ALGORITHM DESIGN TECHNIQUES


 Selecting a proper design technique for a parallel algorithm is the most difficult and important task.
 Most parallel programming problems may have more than one solution.

Design Techniques for Parallel Algorithms


 Divide and conquer
 Greedy Method
 Dynamic Programming
 Backtracking
 Branch & Bound
 Linear Programming

Divide and Conquer Method


 In the divide and conquer approach, the problem is divided into
several small independent sub-problems which are easy to solve.
 Then the sub-problems are solved recursively and combined to
get the solution of the original problem.
 The division of the original problem into sub-problems takes
place until we reach a sub-problem that has a direct solution.
 The divide and conquer approach involves the following steps at each level −
• Divide − The original problem is divided into sub-problems.
• Conquer − The sub-problems are solved recursively.
• Combine − The solutions of the sub-problems are combined together to get the solution of the
original problem.
 The D & C approach follows the strategy of reducing a problem of size ‘n’ into some number ‘p’ of
independent, similar and smaller sub-problems, each of size roughly n/q, where p>=1 and q>1.
 These p sub-problems of approximate size n/q are solved recursively and their solutions are combined to
create the solution of the original problem.
 The time complexity of a D & C algorithm is given by an elegant mathematical concept called a recurrence.
 A recurrence relation is defined as an equality or inequality describing a function in terms of its behavior
on smaller inputs.
 The recurrence relation is derived for the algorithm and then solved to calculate the complexity.
 The general recurrence relation for D & C is:
T(n) = p T(n/q) + G(n)
Where,
T(n/q) := the time required to solve each sub-problem of size n/q.
G(n) := the time required to combine the solutions of all the sub-problems to create the
solution of the original problem.
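 Example: for merge sort, p = 2 and q = 2 (the list is split into two halves) and the combine step takes
G(n) = O(n) time, giving T(n) = 2T(n/2) + O(n), which solves to O(n log n).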
 The divide and conquer approach is applied in the following algorithms −
• Binary search
• Quick sort
• Merge sort
• Strassen’s Matrix multiplication
• Integer multiplication
• Matrix inversion
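
A minimal merge sort sketch (in Python; the input is assumed to be a list of comparable items) illustrating the
divide, conquer and combine steps described above:

def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items

    # Divide: split the problem into two sub-problems of size n/2.
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # Conquer: solve each half recursively.
    right = merge_sort(items[mid:])

    # Combine: merge the two sorted halves (the G(n) = O(n) step).
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example usage: merge_sort([5, 2, 9, 1]) returns [1, 2, 5, 9].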

Greedy Method
 In a greedy algorithm for an optimization problem, the best available choice is made at each moment.
 A greedy algorithm is very easy to apply to complex problems.
 At each step, it decides which choice appears to lead to the best solution at the next step.
 This algorithm is called greedy because, when the optimal solution to the smaller instance is provided,
the algorithm does not consider the total program as a whole.
 Once a choice is made, the greedy algorithm never reconsiders it.
 A greedy algorithm works recursively, creating a group of objects from the smallest possible component
parts.
 Recursion is a procedure to solve a problem in which the solution to a specific problem is dependent on
the solution of the smaller instance of that problem.
 Eg: Huffman coding, knapsack problem, minimum spanning tree, job scheduling with deadlines, single
source shortest path, travelling salesman problem.
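
A minimal sketch in Python of the greedy strategy for job scheduling with deadlines (one of the examples above).
The job data and the helper name schedule_jobs are illustrative, not from the original notes:

def schedule_jobs(jobs):
    # jobs: list of (job_id, deadline, profit) tuples (made-up data below).
    # Greedy choice: consider jobs in decreasing order of profit and place
    # each one in the latest free time slot before its deadline.
    max_deadline = max(deadline for _, deadline, _ in jobs)
    slots = [None] * (max_deadline + 1)        # slot t covers time (t-1, t]
    total_profit = 0
    for job_id, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        for t in range(deadline, 0, -1):       # latest free slot first
            if slots[t] is None:
                slots[t] = job_id              # this choice is never revisited
                total_profit += profit
                break
    return [j for j in slots if j is not None], total_profit

# Example usage:
# schedule_jobs([("a", 2, 100), ("b", 1, 19), ("c", 2, 27), ("d", 1, 25)])
# returns (['c', 'a'], 127).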

Dynamic Programming
 Dynamic programming is an optimization technique which divides the problem into smaller sub-
problems and, after solving each sub-problem, combines all the solutions to get
the final solution.
 Unlike the divide and conquer method, dynamic programming reuses the solutions to the sub-problems
many times.
 The memoized recursive algorithm for the Fibonacci series is an example of dynamic programming.
 Eg: Matrix chain multiplication, 0/1 knapsack problem, all-pairs shortest path, Floyd-Warshall algorithm,
travelling salesman problem.

Backtracking Algorithm
 Backtracking is an optimization technique to solve combinatorial problems.
 It is applied to both programmatic and real-life problems.
 The eight queens problem, Sudoku puzzles and going through a maze are popular examples where the
backtracking algorithm is used.
 In backtracking, we start with a partial solution that satisfies all the required conditions so far.
 Then we move to the next level, and if that level does not produce a satisfactory solution, we return one
level back and start with a new option.

Branch and Bound


 A branch and bound algorithm is an optimization technique to get an optimal solution to the problem.
 It looks for the best solution for a given problem in the entire space of solutions.
 Bounds on the function to be optimized are compared with the value of the latest best solution.
 This allows the algorithm to discard parts of the solution space completely.
 The purpose of a branch and bound search is to maintain the lowest-cost path to a target.
 Once a solution is found, it can keep improving the solution. Branch and bound search is implemented in
depth-bounded search and depth-first search.
Linear Programming
 Linear programming describes a wide class of optimization problems where both the optimization criterion and
the constraints are linear functions.
 It is a technique to get the best outcome, such as maximum profit, shortest path, or lowest cost.
 In linear programming, we have a set of variables and we have to assign values to them to satisfy
a set of linear constraints and to maximize or minimize a given linear objective function.
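
A small sketch of a linear program solved with SciPy's linprog routine (assuming SciPy is installed; the profit
and resource figures are made up). linprog minimizes, so the profit coefficients are negated to maximize:

from scipy.optimize import linprog

# Maximize profit 3x + 5y subject to (illustrative data):
#   2x + 4y <= 40   (machine hours)
#    x +  y <= 15   (labour hours)
#   x, y >= 0
result = linprog(
    c=[-3, -5],                      # negated for maximization
    A_ub=[[2, 4], [1, 1]],
    b_ub=[40, 15],
    bounds=[(0, None), (0, None)],
    method="highs",
)
print(result.x, -result.fun)         # optimal (x, y) and the maximum profit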

Detailed

Dynamic Programming

Dynamic Programming is the most powerful design technique for solving optimization problems.

Divide and conquer algorithms partition the problem into disjoint subproblems, solve the subproblems
recursively, and then combine their solutions to solve the original problem.

Dynamic programming is used when the subproblems are not independent, e.g. when they share the same
subsubproblems. In this case, divide and conquer may do more work than necessary, because it solves the same
subproblem multiple times.

Characteristics of Dynamic Programming:

o Optimal Substructure: If an optimal solution contains optimal sub solutions then a problem exhibits
optimal substructure.

o Overlapping subproblems: When a recursive algorithm would visit the same subproblems repeatedly,
then a problem has overlapping subproblems. Basically, there are two ways for handling the overlapping
subproblems:

a. Top-down approach
It is also termed the memoization technique. In this approach, the problem is broken into subproblems, these
subproblems are solved, and their solutions are remembered in case they need to be solved again in the
future. This means the computed values are stored in a data structure, which helps us retrieve them
efficiently when the same subproblem occurs again during program execution.

b. Bottom-up approach
It is also termed the tabulation technique. In this approach, all subproblems are solved in advance, starting
from the smallest, and their solutions are then used to build up the solution to the larger problem, as
sketched below.
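
A minimal sketch in Python of both ways of handling overlapping subproblems, using the Fibonacci sequence
(n is assumed to be a non-negative integer):

# (a) Top-down / memoization: solve subproblems on demand and remember them.
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:                          # solve each subproblem only once
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# (b) Bottom-up / tabulation: solve the smallest subproblems first.
def fib_tab(n):
    if n <= 1:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

# fib_memo(10) == fib_tab(10) == 55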

Principle of Optimality

• Definition: A problem is said to satisfy the Principle of Optimality if the sub solutions of an optimal
solution of the problem are themselves optimal solutions for their subproblems.
• Dynamic programming design involves 4 major steps:
o Develop a mathematical notation that can express any solution and sub solution for the problem
at hand.
o Prove that the Principle of Optimality holds.
o Develop a recurrence relation that relates a solution to its sub solutions, using the math notation
of step 1. Indicate what the initial values are for that recurrence relation, and which term signifies
the final solution.
o Write an algorithm to compute the recurrence relation.
• Steps 1 and 2 need not be in that order. Do what makes sense in each problem.
• Step 3 is the heart of the design process. In high level algorithmic design situations, one can stop at step
3. In this course, however, we will carry out step 4 as well.
• Without the Principle of Optimality, it won't be possible to derive a sensible recurrence relation in step 3.
• When the Principle of Optimality holds, the 4 steps of DP are guaranteed to yield an optimal solution. No
proof of optimality is needed.

Components of Dynamic programming

1. Stages: The problem can be divided into several subproblems, which are called stages. A stage is a small
portion of a given problem. For example, in the shortest path problem, the stages were defined by the
structure of the graph (see the sketch after this list).

2. States: Each stage has several states associated with it. The states for the shortest path problem were the
nodes reached.

3. Decision: At each stage, there can be multiple choices, out of which the best decision should be
taken. The decision taken at every stage should be optimal; this is called a stage decision.

4. Optimal policy: It is a rule which determines the decision at each stage; a policy is called an optimal
policy if it is globally optimal. This is known as Bellman's principle of optimality.

5. Given the current state, the optimal choices for each of the remaining states do not depend on the
previous states or decisions. In the shortest path problem, it was not necessary to know how we got to a
node, only that we did.

6. There exists a recursive relationship that identifies the optimal decision for stage j, given that stage j+1
has already been solved.

7. The final stage must be solved by itself.
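
A small sketch in Python (with a made-up, stage-structured graph) illustrating stages, states and decisions for
the shortest path problem mentioned above, solved backwards from the final stage:

# stages[j] maps each state (node) at stage j to its outgoing edges,
# written as {next_state: edge_cost}. The data below is illustrative.
stages = [
    {"A": {"B": 2, "C": 4}},                         # stage 0: start node
    {"B": {"D": 7, "E": 3}, "C": {"D": 1, "E": 5}},  # stage 1
    {"D": {"T": 6}, "E": {"T": 2}},                  # stage 2
    {"T": {}},                                       # final stage: target node
]

# cost[state] = cheapest cost from that state to the target.
cost = {"T": 0}                        # the final stage is solved by itself
for stage in reversed(stages[:-1]):
    for state, edges in stage.items():
        # Decision: pick the successor minimising edge cost + remaining cost.
        cost[state] = min(c + cost[nxt] for nxt, c in edges.items())

print(cost["A"])   # cheapest cost from start A to target T (7 for this data)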

Steps of Dynamic Programming Approach

A dynamic programming algorithm is designed using the following four steps (a knapsack sketch applying them appears after the list) −

• Characterize the structure of an optimal solution.

• Recursively define the value of an optimal solution.

• Compute the value of an optimal solution, typically in a bottom-up fashion.

• Construct an optimal solution from the computed information.
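
A minimal sketch in Python of the four steps above applied to the 0/1 knapsack problem (the weights, values
and capacity in the usage line are illustrative):

def knapsack(weights, values, capacity):
    # Step 1 (structure): best[w] = maximum value achievable with capacity w
    # using the items considered so far.
    # Step 2 (recursive definition): for each item i,
    #   best[w] = max(best[w], best[w - weights[i]] + values[i]).
    n = len(weights)
    best = [0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in range(n)]

    # Step 3: compute the values bottom-up.
    for i in range(n):
        for w in range(capacity, weights[i] - 1, -1):
            if best[w - weights[i]] + values[i] > best[w]:
                best[w] = best[w - weights[i]] + values[i]
                keep[i][w] = True

    # Step 4: construct an optimal solution from the computed information.
    chosen, w = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            chosen.append(i)
            w -= weights[i]
    return best[capacity], chosen

# knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7) returns (9, [2, 1]),
# i.e. maximum value 9 using the items of weight 4 and 3.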


Applications of dynamic programming
Dynamic programming is used in solving optimization problems in the areas of bioinformatics, control theory
(flight control, cruise control and robotics control), operating systems (time sharing and scheduling),
information theory, operations research, AI, inventory management, etc.
Floyd-Warshall's all-pairs shortest path algorithm, Bellman-Ford for shortest path routing in networks,
Viterbi for hidden Markov models, Cocke-Kasami-Younger for parsing context-free grammars, and Smith-
Waterman for sequence alignment are some of the famous dynamic programming algorithms.
Some well-known examples are:
1. Fibonacci sequence
2. Multistage graph
3. Matrix-chain multiplication
4. Longest common subsequence (LCS)
5. 0/1 knapsack problem
6. All-pairs shortest path problem
7. Travelling salesperson problem
8. Reliability design problem
9. Stagecoach problem
10. Optimal binary search tree construction
11. Mathematical optimization problem
Backtracking

Backtracking is an algorithmic method that solves a problem by building up candidates incrementally. It uses a
recursive approach to explore the candidates. We can say that backtracking is used to try all possible
combinations in order to solve an optimization problem.

Backtracking is a systematic way of trying out different sequences of decisions until we find one that
"works."

A backtracking algorithm is a problem-solving algorithm that uses a brute force approach for finding the
desired output.

The Brute force approach tries out all the possible solutions and chooses the desired/best solutions.

The term backtracking suggests that if the current solution is not suitable, then backtrack and try other
solutions. Thus, recursion is used in this approach.

This approach is used to solve problems that have multiple solutions. If you want an optimal solution, you
must go for dynamic programming.

The backtracking algorithm is applied to some specific types of problems (an N-queens sketch follows this list):

• Decision problems, where it is used to find a feasible solution of the problem.

• Optimisation problems, where it is used to find the best solution that can be applied.

• Enumeration problems, where it is used to find the set of all feasible solutions of the problem.
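
A minimal N-queens backtracking sketch in Python (the board size n is a parameter), showing how a partial
solution is extended level by level and abandoned when no option at the current level works:

def solve_n_queens(n, placed=None):
    # placed[i] = column of the queen in row i (a partial solution).
    if placed is None:
        placed = []
    if len(placed) == n:                   # every row filled: a full solution
        return list(placed)

    row = len(placed)
    for col in range(n):                   # try every option at this level
        safe = all(
            col != c and abs(col - c) != row - r    # no column/diagonal clash
            for r, c in enumerate(placed)
        )
        if safe:
            placed.append(col)             # extend the partial solution
            solution = solve_n_queens(n, placed)
            if solution:
                return solution
            placed.pop()                   # backtrack: undo, try the next option
    return None                            # no option worked at this level

# solve_n_queens(4) returns [1, 3, 0, 2] (queen columns for rows 0..3).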

Terms used in backtracking


Branch and Bound Algorithm

Branch and bound is an algorithm design paradigm which is generally used for solving combinatorial
optimization problems. These problems are typically exponential in terms of time complexity and may require
exploring all possible permutations in the worst case. The branch and bound technique solves these
problems relatively quickly.
 Branch-and-bound usually applies to those problems that have finite solutions, in which the solutions can
be represented as a sequence of options.
 The first part of branch-and-bound, branching, requires several choices to be made so that the choices
branch out into the solution space.
 In these methods, the solution space is organized as a treelike structure.
 Branching out to all possible choices guarantees that no potential solutions will be left uncovered. But
because the target problem is usually NP-complete or even NP-hard, the solution space is often too vast
to traverse.
 An important advantage of branch-and-bound algorithms is that we can control the quality of the solution
to be expected, even if it is not yet found.

What is the difference between FIFO Branch and Bound, LIFO Branch and Bound and LC Branch and
Bound?

Branch & Bound discovers branches within the complete search space by using estimated bounds to limit
the number of possible solutions. The different types (FIFO, LIFO, LC) define different 'strategies' to explore
the search space and generate branches.

FIFO (first in, first out): always the oldest node in the queue is used to extend the branch. This leads to
a breadth-first search, where all nodes at depth d are visited first, before any nodes at depth d+1 are visited.

LIFO (last in, first out): always the youngest node in the queue is used to extend the branch. This leads to
a depth-first search, where the branch is extended through every 1st child discovered at a certain depth,
until a leaf node is reached.

LC (lowest cost): the branch is extended by the node which adds the lowest additional costs, according to a
given cost function. The strategy of traversing the search space is therefore defined by the cost function.
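
A minimal sketch in Python of LC branch and bound for the 0/1 knapsack problem (the item data in the usage
line is illustrative). The container that holds the live nodes determines the strategy: a priority queue ordered
by the bound gives LC search, while a FIFO queue or a LIFO stack would give breadth-first or depth-first
branch and bound respectively:

import heapq

def knapsack_branch_and_bound(weights, values, capacity):
    # Sort items by value/weight ratio so that the greedy fractional fill
    # below is a valid optimistic bound (linear relaxation).
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    v = [values[i] for i in order]
    n = len(w)

    def bound(level, weight, value):
        # Optimistic estimate: fill the remaining capacity greedily,
        # allowing a fraction of the last item.
        remaining, est = capacity - weight, value
        for i in range(level, n):
            if w[i] <= remaining:
                remaining -= w[i]
                est += v[i]
            else:
                est += v[i] * remaining / w[i]
                break
        return est

    best = 0
    # LC search: max-heap on the bound (negated because heapq is a min-heap).
    live = [(-bound(0, 0, 0), 0, 0, 0)]      # (-bound, level, weight, value)
    while live:
        neg_b, level, weight, value = heapq.heappop(live)
        if -neg_b <= best or level == n:
            continue                         # prune: cannot beat the best so far
        # Branch 1: include item `level` if it fits.
        if weight + w[level] <= capacity:
            new_w, new_v = weight + w[level], value + v[level]
            best = max(best, new_v)
            heapq.heappush(live, (-bound(level + 1, new_w, new_v),
                                  level + 1, new_w, new_v))
        # Branch 2: exclude item `level`.
        heapq.heappush(live, (-bound(level + 1, weight, value),
                              level + 1, weight, value))
    return best

# knapsack_branch_and_bound([2, 3, 4, 5], [40, 50, 65, 95], 10) returns 185.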
