Data Structure and Algorithm

Unit 5 part b

List out the greedy algorithm techniques

The greedy technique uses an iterative/recursive approach to solve an optimization problem, selecting at each step the choice that looks best at that moment. Greedy approaches look for simple, easy-to-implement solutions to complex, multi-step problems by deciding which next step will provide the most immediate benefit. The greedy technique does not always produce an optimal solution, but it helps in producing an approximately optimal solution in a reasonable time; it generally works as a heuristic.

The greedy technique is one of the simplest approaches to optimization problems: we determine a local optimum of a given function by a sequence of steps, where at each stage we make a choice among a class of possible decisions. The choice of each decision is made using only the information at hand, without worrying about the effect these decisions may have in the future. We can say a greedy method arrives at a solution by making a sequence of choices, each of which simply looks best at the moment.

The greedy choice property and optimal substructure are the two ingredients in a problem that lend it to a greedy strategy:

- Greedy choice property: we can assemble a globally optimal solution by making locally optimal choices, i.e., the greedy choice is always part of some optimal solution (with no guarantee for arbitrary problems).
- Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems, i.e., the global optimal solution is constructed from local optimal solutions.

In optimization problems, there are two types of solutions:

- Feasible solutions: not clearly optimal, but close to an optimal solution (they can be regarded as approximate solutions).
- Optimal solutions: fully acceptable optimized solutions for the current optimization problem.

Any greedy algorithm has the following five components (a code sketch follows this list):

1. Candidate set: from which a solution is created.
2. Selection function: chooses the best candidate to be added to the solution.
3. Feasibility function: determines whether a candidate can be used to contribute to a solution.
4. Objective function: assigns a value to a solution or a partial solution.
5. Solution function: indicates when we have discovered a complete solution.

Greedy algorithms are easy to invent and implement, and most of the time they are efficient. However, there are many problems that cannot be solved correctly by this approach, and in many cases there is no guarantee that making locally optimal choices will produce the globally optimal solution.

The following algorithms make use of the greedy approach/technique:

- Knapsack problem
- Kruskal's algorithm
- Prim's algorithm
- Dijkstra's algorithm
- Huffman tree building
- Traveling salesman problem (heuristics)
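As an illustration of these five components, here is a minimal greedy sketch in Python for the fractional knapsack problem from the list above. The item values and weights are invented for the example, and the comments map each step to the components; this is a sketch, not a definitive implementation.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs (the candidate set);
    returns the maximum total value that fits in the knapsack."""
    # Selection function: prefer the highest value-per-weight ratio.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0  # objective function: accumulated value
    for value, weight in items:
        if capacity == 0:  # solution function: the knapsack is full
            break
        take = min(weight, capacity)  # feasibility: never exceed capacity
        total += value * take / weight
        capacity -= take
    return total

# Three candidate items as (value, weight), capacity 50: answer is 240.0.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))
```

Because items may be taken fractionally here, the locally optimal ratio-first choice happens to be globally optimal; for the 0/1 variant of the knapsack problem it is not.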
Explain Greedy Algorithm

What Does Greedy Algorithm Mean?

A greedy algorithm is an algorithmic strategy that makes the best optimal choice at each small stage, with the goal of this eventually leading to a globally optimal solution. This means that the algorithm picks the best solution at the moment without regard for consequences. It picks the best immediate output but does not consider the big picture, hence it is considered greedy.

A greedy algorithm works by choosing the best possible answer in each step and then moving on to the next step until it reaches the end, without regard for the overall solution. It only hopes that the path it takes is the globally optimal one, but as proven time and again, this method does not often come up with a globally optimal solution. In fact, it is entirely possible that the most optimal short-term solutions lead to the worst possible global outcome.

Think of it as taking a lot of shortcuts in a manufacturing business: in the short term, large amounts are saved in manufacturing cost, but this eventually leads to a downfall since quality is compromised, resulting in product returns and low sales as customers become acquainted with the "cheap" product.

But this is not always the case; there are many applications where the greedy algorithm works best to find or approximate the globally optimal solution, such as constructing a Huffman tree or a decision learning tree. For example, take the goal "follow the path with the largest sum overall" on a tree of numbers: a greedy algorithm takes the branch that looks best at each step, as a result of shortsightedness, rather than the path that actually yields the largest sum.

Explain dynamic programming and matrix chain multiplication

Dynamic Programming

Dynamic programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of subproblems to avoid computing the same results again. Dynamic programming is used when the subproblems are not independent. Dynamic programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems. Dynamic programming is often used in optimization problems (problems with many possible solutions for which we want to find an optimal one).

Dynamic programming works when a problem has the following two main properties:

- Overlapping subproblems
- Optimal substructure

Overlapping subproblems: A problem has overlapping subproblems when a recursive algorithm would visit the same subproblems repeatedly. Like divide and conquer, dynamic programming combines solutions to subproblems; it is mainly used when solutions to the same subproblems are needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they do not have to be recomputed. Dynamic programming is therefore not useful when there are no common (overlapping) subproblems, because there is no point in storing solutions that are never needed again. (A small memoization sketch follows below.)

Optimal substructure: A given problem has the optimal substructure property if an optimal solution to it can be obtained by using optimal solutions of its subproblems.
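To make the overlapping-subproblems idea concrete, here is a minimal memoization sketch (an illustration, not part of the original notes) using the Fibonacci recurrence, whose naive recursion revisits the same subproblems exponentially often.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the table: stores each computed subproblem
def fib(n):
    # Naively, fib(n) spawns two recursive calls that share subproblems;
    # with the cache, each of the n+1 distinct subproblems is solved once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed in linear rather than exponential time
```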
The Principle of Optimality

To use dynamic programming, the problem must observe the principle of optimality: whatever the initial state is, the remaining decisions must be optimal with regard to the state resulting from the first decision.

When developing a dynamic-programming algorithm, we follow a sequence of four steps:

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Chain-Matrix Multiplication Problem

We can multiply two matrices A and B only if they are compatible: the number of columns of A must equal the number of rows of B. If A is a p × q matrix and B is a q × r matrix, the resulting matrix C is a p × r matrix. There are p · r total entries in C, and each takes O(q) time to compute, so the total time to multiply these two matrices is dominated by the number of scalar multiplications, which is p · q · r. We shall express the cost of multiplying two matrices in terms of the number of scalar multiplications.

Matrix multiplication is an associative operation, but not a commutative one. By this we mean that we have to keep the matrices in the given order, but we are free to parenthesize the product depending upon our need. For example, if the chain of matrices is (A1, A2, A3, A4), the product A1 A2 A3 A4 can be fully parenthesized in five distinct ways:

(A1 (A2 (A3 A4)))
(A1 ((A2 A3) A4))
((A1 A2) (A3 A4))
((A1 (A2 A3)) A4)
(((A1 A2) A3) A4)

To illustrate the different costs incurred by different parenthesizations of a matrix product, consider a chain of three matrices whose dimensions are 10 × 100, 100 × 5, and 5 × 50, respectively. The possible orders of multiplication are:

a) If we multiply according to the parenthesization ((A1 A2) A3):
- to compute the 10 × 5 matrix product A1 A2, we perform 10 · 100 · 5 = 5000 scalar multiplications;
- to multiply this matrix product A1 A2 by matrix A3, we perform another 10 · 5 · 50 = 2500 scalar multiplications.
Hence, computing the product ((A1 A2) A3) takes a total of 7500 scalar multiplications.

b) If instead we multiply according to the parenthesization (A1 (A2 A3)):
- to compute the 100 × 50 matrix product A2 A3, we perform 100 · 5 · 50 = 25,000 scalar multiplications;
- to multiply matrix A1 by this matrix product A2 A3, we perform another 10 · 100 · 50 = 50,000 scalar multiplications.
Hence, computing the product (A1 (A2 A3)) takes a total of 75,000 scalar multiplications.

Thus, computing the product according to the first parenthesization is 10 times faster.

The matrix-chain multiplication problem can be stated as follows: given a sequence of n matrices A1, A2, ..., An and their dimensions p0, p1, p2, ..., pn, where for i = 1, ..., n matrix Ai has dimension p(i-1) × pi, determine the order of multiplication that minimizes the number of scalar multiplications. Note that our goal is not to actually multiply the matrices; it is only to determine an order for multiplying the matrices that has the lowest cost.
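The four steps above lead to the standard bottom-up procedure for this problem. The following Python sketch (an illustration; the function name and table layout are conventional choices, not from the original notes) computes the cost table m, where m[i][j] is the minimum cost of multiplying Ai through Aj.

```python
def matrix_chain_order(p):
    """p: dimensions p0..pn, so matrix Ai is p[i-1] x p[i] (1-indexed).
    Returns table m with m[i][j] = minimum scalar multiplications
    needed to compute the product Ai ... Aj."""
    n = len(p) - 1                       # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):       # solve chains of increasing length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):        # try every split point Ak | Ak+1
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m

# The three-matrix example above: dimensions 10x100, 100x5, 5x50.
m = matrix_chain_order([10, 100, 5, 50])
print(m[1][3])  # 7500, matching the cheaper parenthesization ((A1 A2) A3)
```

The recurrence in the inner loop is exactly the split described in Step 1 below: the best cost of Ai..Aj is the best over all ways of splitting the product between Ak and Ak+1.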
Step 1: The structure of an optimal parenthesization

Our first step in the dynamic-programming paradigm is to find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems. For the matrix-chain multiplication problem, we can perform this step as follows. For convenience, let us adopt the notation Ai..j, where i ≤ j, for the matrix that results from evaluating the product Ai Ai+1 ... Aj. Observe that if the problem is nontrivial, i.e., i < j, then any parenthesization of the product Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for some integer k in the range i ≤ k < j.

Cycles: A path (v1, v2, ..., vk) forms a cycle if v1 = vk, where for all i, (vi, vi+1) ∈ E, and all vertices in the cycle are distinct except the pair v1, vk.

Subgraphs and Spanning Trees:

Subgraphs: A graph G' = (V', E') is a subgraph of graph G = (V, E) iff V' ⊆ V and E' ⊆ E. The undirected graph G is connected if for every pair of vertices u, v there exists a path from u to v. If a graph is not connected, the vertices of the graph can be divided into connected components. Two vertices are in the same connected component iff they are connected by a path.

A tree is a connected acyclic graph. A spanning tree of a graph G = (V, E) is a tree that contains all vertices of V and is a subgraph of G. A single graph can have multiple spanning trees.

Lemma 1: Let T be a spanning tree of a graph G. Then:
1. Any two vertices in T are connected by a unique simple path.
2. If any edge is removed from T, then T becomes disconnected.
3. If we add any edge to T, then the new graph will contain a cycle.
4. The number of edges in T is n - 1.

Explain Prim's and Kruskal's minimum spanning tree algorithms with a clear example

Minimum Spanning Trees (MST): A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set of the given graph, and whose edge set is a subset of the edge set of the given graph; i.e., any connected graph will have a spanning tree. The weight of a spanning tree w(T) is the sum of the weights of all edges in T. The minimum spanning tree (MST) is a spanning tree with the smallest possible weight.

(Figure: three of the many possible spanning trees of a graph G, and the minimal spanning tree of a weighted graph G.)

To expand on the minimum spanning tree and what it applies to, let's consider a couple of real-world examples:

1. One practical application of an MST is in the design of a network. For instance, a group of individuals, who are separated by varying distances, wish to be connected together in a telephone network. Although an MST cannot do anything about the distance from one connection to another, it can be used to determine the least-cost paths with no cycles in this network, thereby connecting everyone at a minimum cost.

2. Another useful application of an MST is finding airline routes. The vertices of the graph represent cities, and the edges represent routes between the cities. Obviously, the further one has to travel, the more it will cost, so an MST can be applied to optimize airline routes by finding the least costly paths with no cycles.

To explain how to find a minimum spanning tree, we will look at two algorithms: the Kruskal algorithm and the Prim algorithm. (A sketch of Prim's algorithm follows below.)
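Here is a minimal sketch of Prim's algorithm using a binary heap. The adjacency-dict representation and the vertex names in the example graph are illustrative assumptions, not from the original notes.

```python
import heapq

def prim_mst(graph, start):
    """graph: {vertex: [(weight, neighbour), ...]} for an undirected,
    connected graph. Returns (total weight, list of MST edges)."""
    visited = {start}
    edges = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)   # cheapest edge leaving the tree
        if v in visited:
            continue                      # skipping it avoids a cycle
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for w2, nxt in graph[v]:          # new frontier edges from v
            if nxt not in visited:
                heapq.heappush(edges, (w2, v, nxt))
    return total, mst

# Small example graph (undirected, weights on edges).
g = {
    "a": [(1, "b"), (4, "c")],
    "b": [(1, "a"), (2, "c"), (5, "d")],
    "c": [(4, "a"), (2, "b"), (3, "d")],
    "d": [(5, "b"), (3, "c")],
}
print(prim_mst(g, "a"))  # (6, [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 3)])
```

Starting from vertex 'a', the algorithm repeatedly takes the cheapest edge leaving the current tree, yielding the MST of weight 6 in this example.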
Both algorithms differ in their methodology, but both eventually end up with the MST. Kruskal's algorithm uses edges, and Prim's algorithm uses vertex connections in determining the MST.

Explain Travelling salesman problem with diagram

The Traveling Salesman Problem

(Figure: a weighted graph of cities. The salesman must travel to all cities once before returning home; the distance between each pair of cities is given and is assumed to be the same in both directions. Objective: minimize the total distance to be travelled.)

DEFINITION: traveling salesman problem (TSP)

The traveling salesman problem (TSP) is an algorithmic problem tasked with finding the shortest route between a set of points and locations that must be visited. The salesman's goal is to keep both the travel costs and the distance traveled as low as possible. Focused on optimization, TSP is often used in computer science to find the most efficient route for data to travel between various nodes. Applications include identifying network or hardware optimization methods.

It was first described by Irish mathematician W.R. Hamilton and British mathematician Thomas Kirkman in the 1800s through the creation of a game that was solvable by finding a Hamiltonian cycle, which is a non-overlapping path between all nodes.

TSP has been studied for decades and several solutions have been theorized. The simplest solution is to try all possibilities, but this is also the most time-consuming and expensive method. Many solutions use heuristics, which provide probabilistic outcomes; however, the results are approximate and not always optimal. Other solutions include branch and bound, Monte Carlo and Las Vegas algorithms. Rather than focusing on finding the most effective route, TSP is often concerned with finding the cheapest solution. In TSPs, the large number of variables creates a challenge when finding the shortest route, which makes approximate, fast and cheap solutions all the more attractive. (A sketch of the simple nearest-neighbour heuristic follows below.)
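Since the text mentions heuristic solutions, here is a minimal sketch of the nearest-neighbour heuristic for TSP. The distance matrix is invented for the example, and the tour it returns is approximate, not guaranteed optimal.

```python
def nearest_neighbour_tour(dist, start=0):
    """dist: square symmetric matrix of pairwise distances.
    Returns (tour ending back at start, total tour length)."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # Greedy choice: always go to the closest unvisited city.
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
    return tour + [start], length + dist[tour[-1]][start]  # return home

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(nearest_neighbour_tour(dist))  # ([0, 1, 3, 2, 0], 18)
```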
