END TERM EXAMINATION [FEB. 2023] FIFTH SEMESTER [B.TECH] ALGORITHMS DESIGN AND ANALYSIS [ETCS-301] ‘Time: 3 Hrs. Max. Marks: 75 Note: Attempt any five questions in all including Q. No. 1 which is compulsory. Q.1. Attempt all questions: (5 x 5 = 25) Q.1 (a) Define time complexity and space complexity. Write an algorithm for adding n natural numbers and find the space required by that algorithm. Q.1 (b) Define Big ‘Oh’ notation, Formulate the order of growth. Compare the order of growth n! and 2n. Differentiate between Best, average and worst case efficiency. Q.1 (c) Differentiate divide and conquer and dynamic programming. Q.1 (a) Explain dynamic programming method of problem solving. What type of problems can be solved by dynamic programming? Q.1 (e) Determine an LCS of <1,0,0,1,0,1,0,1> and <0,1,0,1,1,0,1,1,0> Q.2 (a) Discuss the concepts of asymptotic notations and its properties. (4) Q.2 (b) Analyze the order of growth. (4) F(n) = 2n2 +5 and g(n) = 7n. Use the © (g(n)) notation. Q.2 (c) Evaluate the recurrence relations. (4.5) () . x(n) = x(n - 1) +5 for n> 1. (i) . X(n) = x(n/3) + 1 for n> 1, x (1) = 1. (Solve for n = 3k) Q3 (a) Which sorting algorithm is best if the list is already sorted? Why? @ Q3 (b) Prove that the average running time of Quick Sort is O(nlog(n)) where n is the number of elements. 4) Q.3 (c) What are stable algorithms? Which sorting algorithm is stable? Give one example and explain. (4.5) Q.4 (a) Implement UNION using linked list representation of disjoint sets. (4) Q.4 (b) Explain the characteristics of problems that can be solved using dynamic programming. (a) _ Q-4 (c) Give a control abstraction for Divide and Conquer method. Explain with an example. (4.5) Q5 (a) Explain the effect of negative weight edges and negative weight cycles on shortest paths. (4) Q.5 (b) Define strongly connected components. How DFS can be used to find strongly concerted components? (4) Q.5 (c) Find an optimal paranthe Sequence of dimensions is 4x10, 10: zation of a matrix-chain product whose 12, 12x20 (4.5) m and Analysis i 12-2021 Fifth Semester, Algorithms Desig h algorithm. Analyze th, Q6 (a) Write Dijkstra’s Single Source Shortest pat! 6) complexity. . h using Prim, Q6 (b) Find minimum spanning tree of the following grap a . ‘ind mini 5 algorithm and discuss complexity. a a oe vu ao 4\ = ~~ Soa |s | 1 \ Q.7 (a) Explain Rabin-karp string matching algorithm, in brief. (6) Q7 (b) Find longest common subsequence of following two strings X and Y using any algorithm: (6.5) X = 'aabdbacdcba’ Y ='aabddcbac' @8 (a) Differentiate between P, NP, NP. problems, “completeness and NP-Hard (4) @8 (b) How a problem is identified as NP five problems that can be classified as Ne ae blem? Give atleast lems, 4 @8 (©) With examples explain polynomial time reducibilit, 5 ity. (4.5) IMPORTANT QUESTIONS Q.1. Write note on the optimal binary search tree problems. Ans. Optimal binary search tree: We are given a sequence K = hy, kyy .. k, oft distinct keys in sorted order (so that k, 1 ~ 1, We also use @ table root{i, jJ, for recording the root of the subtree containing keys &,, ... &,. This table uses only the entries for which 1 sis j k?" We will reduce 3-SAT to Max-Clique. Specifically, given a 3-CNF formula F of m clauses over n variables, we construct a graph as follows. First, for each clause ¢ of F Heereate one node for every assignment to variables in e that satisfies e. 
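This node-and-edge construction can be made concrete in a few lines of code. The following is a minimal Python sketch (not from the book), assuming a clause is represented as a tuple of (variable, is_positive) literals; that encoding, and the tiny formula at the end, are illustrative assumptions. The consistency rule for edges and the book's worked example of the resulting nodes continue below.

from itertools import product, combinations

def sat_to_clique(clauses):
    # clauses: list of clauses; each clause is a tuple of literals,
    # where a literal is (variable_name, is_positive).  This encoding
    # is an illustrative assumption, not the book's notation.
    nodes = []            # (clause_index, satisfying partial assignment)
    for ci, clause in enumerate(clauses):
        vars_in_clause = sorted({v for v, _ in clause})
        for bits in product([0, 1], repeat=len(vars_in_clause)):
            asg = dict(zip(vars_in_clause, bits))
            # keep only assignments that satisfy this clause
            if any(asg[v] == int(pos) for v, pos in clause):
                nodes.append((ci, asg))
    edges = []
    # edge between two nodes iff they come from different clauses and
    # their partial assignments agree on every shared variable
    for a, b in combinations(range(len(nodes)), 2):
        (ci, pa), (cj, pb) = nodes[a], nodes[b]
        if ci != cj and all(pa[v] == pb[v] for v in pa.keys() & pb.keys()):
            edges.append((a, b))
    return nodes, edges, len(clauses)   # ask: is there a clique of size m?

# F = (x1 v x2 v ~x3) ^ (~x1 v x3)   -- tiny illustrative formula
F = [(("x1", True), ("x2", True), ("x3", False)),
     (("x1", False), ("x3", True))]
nodes, edges, m = sat_to_clique(F)

Under this construction, F is satisfiable exactly when the resulting graph has a clique of size m, which is the property the reduction argument below relies on.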
Eg, say we Fe Va V Fy) A Vv Bada (5 v%) A ‘Then in this case we would create nodes likes thi (x, =0,x, =0,x,=0) (x) =0,2,=0) (a, (20st. ,1,24=0) Oy =0,24= 1x, (e problems in sroblems whose ’ defined as the n-deterministic mm in NP can b orify solutions problem (P). called the P iy, then every es that every that is, it ean NP-complete toCcan be er or not it rithm (on solve all fal graph that all contain F of m cofF ay we (20x22 1 =D Oy =hy= 1) Oy=1xye0) (= ex = 0,0, 20 = Lx,=0,4,=D (= Lxy= Tn, =0) (y= 1a3= 1,20 We then put an edge between two nodea if the partial assignments are consistent Notice that the maximum possible clique size is m because there are noeise, betwee Fone eave eae carTespond to the same clause ©. Moreover, ifthe 3-SAT preblon see ine gecatist¥ing assignment, then in fact there is an meelique (just pick come proce thee thienment-and take the m nodes consistent with that assignment) So, te erring ascieeeaction (with k = my is correct we need to show that if there ent a Sidg assignment to F then the maximum elique in the graph has sizoc.ay We can {en this clique must contain one node per elause c. So, just read off the necionnnns fiven in the nodes of the elique: this by construction will satisfy all the elacsce So, we have shown this graph has a clique of size m iff F was salisflable, Alea: our veduct ge Polynomial time since the graph produced has total size at most quadraticin the coc of the formula F (O(m) nodes, O(m?) edges). Therefore Max-Clique is NP-complete Independent Set An Independent Set in a raph is a set of nodes no two of which have an edge. E.g., ina 7-cycle, the largest independent set has size 3, and in the graph coloring problem, the set of nodes colored red is an independent set. The Independent Set problem is: given a graph G and an integer k, does G have an independent set of size > k? We reduce from Max-Clique. Given an instance (G,k) of the Max-Clique problem, we output the instance (H,k) of the Independent Set problem where H is the complement of G. That is, H has edge (u,v) if G does not have edge (u,v). Then H has an independent set of size k if G has a k-clique. Vertex Cover A vertex cover in a graph is a set of nodes such that every edge is incident to at least one of them. For instance, if the graph represents rooms and corridors in a museum, then a vertex cover is a set of rooms we can put security guards in such that every corridor is observed by at least one guard. In this case we want the smallest cover possible. The Vertex Cover problem is: given a graph G and an integer k, does G havea vertex cover of size < k? If Cis a vertex cover in a graph G with vertex set V , then V - Cis an independent set, Also if S is an independent set, then V —S is a vertex cover. So, the reduction from Independent Set to Vertex Cover is very simple: goiven an instance (G,k) for Independent Set, produce the instance (Gn ~ k) for Vertex Cover, where n = IV. In other words, to solve the question "is there an independent set of size at least k" just solve the question “is there a vertex cover of size s n- k?" So, Vertex Cover is NP-Complete too. * ‘The traveling salesman problem In the traveling salesman problem (TSP) we are given n vertices 2 distances between them, as well as a budget b. We are ask cycle that passes through every vertex exactly once, of total cost b or less or to repo, that no such tour exists. That is, we seek a permutation 7(1),..., (1) of the vertices suri, that when they are toured in this order the total distance covered is at most b: . 
d(π(1),π(2)) + d(π(2),π(3)) + ... + d(π(n−1),π(n)) + d(π(n),π(1)) ≤ b
Notice how we have defined the TSP as a search problem: given an instance, find a tour within the budget (or report that none exists). But why are we expressing the traveling salesman problem in this way, when in reality it is an optimization problem, in which the shortest possible tour is sought? Why dress it up as something else? For a good reason. Our plan in this chapter is to compare and relate problems. The framework of search problems is helpful in this regard, because it encompasses optimization problems like the TSP in addition to true search problems like SAT.
Q.3. Discuss in detail any one algorithm for finding the minimum cost spanning tree.
Ans. Given a connected, undirected graph, a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together. A single graph can have many different spanning trees. We can also assign a weight to each edge, which is a number representing how unfavorable it is, and use this to assign a weight to a spanning tree by computing the sum of the weights of the edges in that spanning tree. A minimum spanning tree (MST), or minimum weight spanning tree, is then a spanning tree with weight less than or equal to the weight of every other spanning tree. More generally, any undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of minimum spanning trees for its connected components.
Possible multiplicity: There may be several minimum spanning trees of the same weight having a minimum number of edges; in particular, if all the edge weights of a given graph are the same, then every spanning tree of that graph is minimum. If there are n vertices in the graph, then each spanning tree has n − 1 edges.
Uniqueness: If each edge has a distinct weight then there will be only one, unique minimum spanning tree. This is true in many realistic situations, such as a telecommunications network, where it is unlikely that any two links have exactly the same cost. This generalizes to spanning forests as well. If the edge weights are not unique, only the (multi-)set of weights in minimum spanning trees is unique, that is, it is the same for all minimum spanning trees.
Proof:
1. Assume the contrary, that there are two different MSTs A and B.
2. Let e1 be the edge of least weight that is in one of the MSTs and not the other. Without loss of generality, assume e1 is in A but not in B.
3. As B is an MST, {e1} ∪ B must contain a cycle C.
4. Then C has an edge e2 whose weight is greater than the weight of e1, since all edges in B with less weight are in A by the choice of e1, and C must have at least one edge that is not in A, because otherwise A would contain a cycle, in contradiction with its being an MST.
5. Replacing e2 with e1 in B yields a spanning tree with a smaller weight.
6. This contradicts the assumption that B is an MST.
Minimum-cost subgraph: If the weights are positive, then a minimum spanning tree is in fact a minimum-cost subgraph connecting all vertices, since subgraphs containing cycles necessarily have more total weight.
Prim's algorithm: The algorithm may informally be described as performing the following steps:
1. Initialize a tree with a single vertex, chosen arbitrarily from the graph.
2. Grow the tree by one edge: of the edges that connect the tree to vertices not yet in the tree, find the minimum-weight edge, and transfer it to the tree.
3. Repeat step 2 until all vertices are in the tree.
In more detail, it may be implemented following this pseudocode:
1. Associate with each vertex v of the graph a number C[v] (the cheapest cost of a connection to v) and an edge E[v] (the edge providing that cheapest connection). To initialize these values, set every C[v] to +∞ (or to any number larger than the maximum edge weight) and set each E[v] to a special flag value indicating that there is no edge connecting v to earlier vertices.
2. Initialize an empty forest F and a set Q of vertices that have not yet been included in F (initially, all vertices).
3. Repeat the following steps until Q is empty:
   (a) Find and remove a vertex v from Q having the minimum possible value of C[v].
   (b) Add v to F and, if E[v] is not the special flag value, also add E[v] to F.
   (c) Loop over the edges vw connecting v to other vertices w. For each such edge, if w still belongs to Q and vw has smaller weight than C[w], perform the following steps:
       (i) Set C[w] to the cost of edge vw.
       (ii) Set E[w] to point to edge vw.
4. Return F.
The time complexity of Prim's algorithm depends on the data structures used for the graph and for ordering the edges by weight, which can be done using a priority queue. The following table shows the typical choices:
Minimum edge weight data structure | Time complexity (total)
Adjacency matrix, searching        | O(|V|²)
Binary heap and adjacency list     | O((|V| + |E|) log |V|) = O(|E| log |V|)
Fibonacci heap and adjacency list  | O(|E| + |V| log |V|)
Q.4. What is string matching? Explain the Knuth-Morris-Pratt algorithm along with its complexity.
Ans. In computer science, string searching algorithms, sometimes called string matching algorithms, are an important class of string algorithms that try to find a place where one or several strings (also called patterns) are found within a larger string or text.
Let Σ be an alphabet (a finite set). Formally, both the pattern and the searched text are vectors of elements of Σ. The alphabet may be a usual human alphabet (for example, the letters A through Z of the Latin alphabet); other applications may use a binary alphabet (Σ = {0, 1}) or a DNA alphabet (Σ = {A, C, G, T}) in bioinformatics.
In practice, how the string is encoded can affect the feasible string search algorithms. In particular, if a variable-width encoding is in use then it is slow (time proportional to N) to find the Nth character. This will significantly slow down many of the more advanced search algorithms. A possible solution is to search for the sequence of code units instead, but doing so may produce false matches unless the encoding is specifically designed to avoid it.
Knuth-Morris-Pratt Algorithm
KMP-MATCHER(T, P)
1. n ← length[T]
2. m ← length[P]
3. π ← COMPUTE-PREFIX-FUNCTION(P)
4. q ← 0
5. for i ← 1 to n
6.     do while q > 0 and P[q + 1] ≠ T[i]
7.         do q ← π[q]
8.     if P[q + 1] = T[i]
9.         then q ← q + 1
10.    if q = m
11.        then print "Pattern occurs with shift" i − m
12.        q ← π[q]
COMPUTE-PREFIX-FUNCTION(P)
1. m ← length[P]
2. π[1] ← 0
3. k ← 0
4. for q ← 2 to m
5.     do while k > 0 and P[k + 1] ≠ P[q]
6.         do k ← π[k]
7.     if P[k + 1] = P[q]
8.         then k ← k + 1
9.     π[q] ← k
10. return π
Performance of KMP: The running time of COMPUTE-PREFIX-FUNCTION is Θ(m), using the potential method of amortized analysis. We associate a potential of k with the current state k of the algorithm. This potential has an initial value of 0, by line 3. (A runnable sketch of both procedures is given below, after which the analysis continues.)
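Before the amortized analysis continues, here is a compact Python rendering of the two procedures above. It is only an illustrative sketch using 0-based indexing (so pi[q − 1] plays the role of π[q] in the 1-based pseudocode); the test string in the last line is an assumption, not an example from the book.

def compute_prefix_function(P):
    # pi[q] = length of the longest proper prefix of P that is also
    # a suffix of P[:q + 1]
    m = len(P)
    pi = [0] * m
    k = 0
    for q in range(1, m):
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]          # fall back, mirroring "k <- pi[k]"
        if P[k] == P[q]:
            k += 1
        pi[q] = k
    return pi

def kmp_matcher(T, P):
    # returns every shift s such that P occurs in T starting at index s
    n, m = len(T), len(P)
    pi = compute_prefix_function(P)
    shifts, q = [], 0              # q = number of characters matched so far
    for i in range(n):
        while q > 0 and P[q] != T[i]:
            q = pi[q - 1]
        if P[q] == T[i]:
            q += 1
        if q == m:                 # a full match ends at position i
            shifts.append(i - m + 1)
            q = pi[q - 1]
    return shifts

print(kmp_matcher("bacbababaabcbab", "ababa"))   # -> [4]

The preprocessing takes Θ(m) and the scan of the text takes Θ(n), giving the Θ(m + n) total claimed for KMP. The amortized-analysis argument for COMPUTE-PREFIX-FUNCTION continues below.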
Line 6 decreases k whenever itis executed, since n[t] 10 and ¢y = 1 18 holds forn 2 10.ande, = 1/5 > + Thus, ife, = 1/5, cy = 1, and ny = 10, then for all n 2 rip, forall n>, 1 2, ‘Thus we have show that 7” ~3n = @(n?) Q.1. (d) Differentiate Dynamic Programming and Divide and conque approach Ans, |__ Divide and conquer Approach | 1. The sub problems in divide and conquer approach are more are Jess independent. | 2. The subproblems are solved recursively until the instances are small enough to solve easily ‘Thus it does more work than required by repeatedly solving same sub problems. It may or may not ‘Provide an optimal solution. 4. It uses top down approach. 5. Binary search algorithm follows divide and conquer approach. Dynamic Programming 1. The sub problems in dynamic programming are dependent and thus overlap. 2. As sub problems are shared, it is solved just once and the solution is stored in table to use for solving higher level sub problems. 3. It guarantees an optimal solution, 4. It uses bottom-up approach, 5. Floyd-warshall algorithm uses dynamic programming. QL. (e) Prove following: @nt=O(n™ (i) 1h 42h 4 3h...ank = 0 (nel) Ans, (i) Jj) = nt = nn-1)(n—2)..1 An)s n™ where c = 1 and ny ii) lf) = O(n™)) fn) = Tey ky gk 4 S mkenkantanks Sant LP. University-(B/Toch)-Akash Books. 2015-3 fn) s nbs [fon = 0mm Q.2. (a) Solve the following recurrence relations: i (3x2) (#) Tem) = 27 (V7) +1 (using substitution method) ‘Ans. We guess that the solution i Tn) = Ov ion in T(n) = Ofnlgn) our method into prove that Tha (aign) for an appropriate choice ofthe constant es 0, We start by secenton Oat tia bound holds for Jn , that is all n =n, Tn) s Wn gn) Tims 2C Jn MgC) +1 $2CVn Ign! 41 d conquer 1 2¢( Jn 2 iginy+ s ( Sle) | = S Cynig(n)+1 g s Cnlogn 7 where the last step holds as long as ¢> 1. nic nt and (i) Tin) = 4T (|n/2]) +n (using iteration method) od, itis Ans. Tn) = AT(\n/2))+n lution is | we iterate it as follows: oIving Nn) = ar([3 [+= lution. = 447 {|S ||+2|an = ry 2 ‘The series Terminaters when Z = 1=pn=2!ori=log,n So, . M 2B 4 4A Nn) jbo? -1 ww w < < Bh (a-) n(n 1) +n? n(n-V+n jbo? n(n=1) + 200822) n(n-1) +n? n(n-D +n? 00h?) 5T'(2-3)+ O(n?) whenn>0 7 herwise (USing substract and conquer Tn) = 5T(n-3)+0(n2) a 5,b=3 nda nt aa d>1 Tin) = O(n4,a") O(n? am) Tn) = O(n2, 5n/3) Floyed Warshall Algorithm, « : the shortest path problem n which the objective: : distance as well as the Corresponding path for apie any given pair of nodes in a distat’ 'm can be solved using floyd’s algorithm. juer (4) est nce LP. University-(B-Tech)-Al sh Books 2015-5 ma from the final precedence matr Steps of Fld Atorith Stop 1 Sethe teaton number eos om iP Step 2: From the initial distance matrix [D°] and the inital distance Precedence ; = ~ pita the iteration number by 1(K =k +1) mn ae eee Hristo ia matrix [D"] for all its cells where j is not Dh = min(Db, Di + pf) Step 5: Obtain the values of the i Iues of the precedence matrix [pt it equal to using the following formula eee ees A i Pi = phy if Dj is not equal to Dict * pi otherwise. Step 6. If k =n go to step 7, otherwise k =k + 1 and go to step 4 step 7. For each source destinat nati ired ine en ne ene es ee ee re shortest path, from the final precedence matrix [p¥]. eae Floyd Warshall’s Algorithm: Floyd Warshalll (w) Ln crows (w) 2. DO —w 3. fork <1 ton do 4. fori 1tondo 5.forj< 1tondo 6. Di? < (Dit, Dg” + Df) 7. return D(n). oR Q.2. (6) Explain Strassen matrix multiplication with example. ‘Ans. 
Strassens Matrix Multiplication : By using divide-and-conquer technique, the overall complexity for multiplying two square matrices is reduced. This happens by decreasing the total number of multiplications performed at the expense of a slight increase in the number of additions. For providing optimality in multiplication of matries an algorithm was published by V, Strassen in 1969. which gives an overview how one can find the product C of two 2) dimension matrices A and B with just seven matliplications as opposed to eight required by the brute-force algorithm. Fifth Semester, Algorithm Analysis and Design rall procedure can be explained as below: where B= gy +441) * yy + Oy) Hy = (yy +0y)* bey 43 = Aq" (bp, — yy) % = ay, * GiB pp) 5 = (yy +0y))* by, Hq = (B19 +) * (By + bo) 4 = (Ay, +44))* iy +b,,) Thus, in order to multiply two 2 x 2 dimension matrices Strassen’s formula used Seven multiplications and eighteen additions/subtractions, whereas brute force guforithm requires eight multiplications and four additions. The utility of Strassevs formula is shown by its asymptotic superiority when order n of matrix reaches infinity. Let us consider two matrices Aand 6, n xn dimension, where nis a power of two, It can be observed that we can obtain four dimension submatrices from A,B and theis Product C. It can easily be verified by treating submatrices as number to get the correct, product, For example: C1 can be computed as Ayy * By, +Ay, *B,, or asx, +25~2, +5, wheres, %o can be found by using strassen’s formula, with the number corresponding submatrices. ‘We can have Strassen’s algorithm for matrix multiplication, yX.%q.and 8 replaced by the ifthe seven multiplication of "6x" matrices are computed recursively by the same method. OR Q3. (a) Explain Quicksort algorithm and explain worst case time complexity of the algorithm, (Sx Ans. Quick sort Algorithm QUICKSHORT (A, P, R) Lifpex : 3.1fk =r, then Alk] =x Figure shows the loop invari iti seenkif0re shows the loop invariant is tree for initialization, maintenance and Pi 2[e[7[*]*121¢l4 Bid z2)e)7]1]3a}s}]e]4 Bi Peel) i i 2 pt 7 2 P and 2]1)] 3 pa the > 1 ; i 2[1|3 6 | 4 Pp i Gi 2} 143 fee | 4 fe ep i r 1] 3 era jue |nee| (2) 2 4 Ree) See Performance of Quick sort: The running time of quick sort depends on whether the partitioning is balanced or unbalanced. Ifthe partitioningis balanced, the algorithm ae resymptotically as fast as merge sort and ifthe partitioning is unbalanced it runs asymptotically as slow as inserction sort. Worst Case: Worst case occures when the array is divided in two ‘unbalanced sub arrays where one subarray is empty. fe) ) Thus i 2 [34 [25] 4019 Fifth Semester, Algorithm Analysis and Design i al L - - —— [e] a [ 71 — i —> Tin) = Tin 1) + 10) +q(n) = Tin-1)+ qin) = O(n?) Q.3. (b) Sort the following numbers using Quicksort algorithm: 12 34 ion 40) = 15 10) 80 10 [30 (6) i [12 [34 [25 [40 19 [10] 30 [8] P j r © ifeysa [25] 40 [19 [10 30/8 :continues for all as for every j value is less than r Ali +1] OAlr) Ps IL. @)i [8 [34 [25 [40 [19 [70 [30 [12 Pij [8 [24] 25 [40719 10 [30 [12 Pi i © [aya [26 [4019 10 30 ]12 Pi J © [alsa[ae 40 19 [10 [30 [12 ij Pi [8 [34 [25 40[19]10[30]12 Pi i 10 LP. University-(B.Tech)-Akash Bor . 2015-9 oT 8 [10 [12] 40 [19 Js |a0 | a5 continues till: j = 2 and next exchange result 8 [10 [12] 25 [19 [34 [30 | a0] 8 ]10]32]19 |25 [30] 34 [40] Q-4. 
(a) Find the optimal sequence of dimensions are <2 zation of a matrix chain product whose a (5x2) Matrix dimesnsion PO = 2 A 2x3 Pi=3 A axd P2=4 A Axd P3=5 Ay 5x6 P4=6 Way of paranthesization s[1,4] = 3 (A1 A2 A3) (Ad) s{1,3] = 2 Optimal paranthesization |(Al A2)A3)A4)| Q.4. (6) Determine LCS of X = and ¥ = Ans4 (b) . Refer Q.No.5(b) of End Term Exam 2016 (Page No. 23-2016) SECOND TERM EXAMINATION [NOV. 2015] FIFTH SEMESTER [B.TECH] ALGORITHM ANALYSIS AND DESIGN [ETCS-302] MM. :3 Time :1.6 hrs. 0 Note: Attempt three question in total Q. No.1 is compulsory. Atlempt any two more questions from the remaining. (5x2) Q.1. (a) Define Matroid . Ans. Refer Q.No.1(h) End Term Exam 2016 (Page No.: 15-2016) Q.1. (b) What are string matching problems ? Ans. Refer Q.No.4 Important Questions (Page No.: 7) Q.1. (c) Explain prefix function used in Knuth Morris Pratt algorithm using suitable example. Ans, The perfix function x for a pattern encapsulates knowledge about how the pattern matches against shifts to itself. Give the pattern P[I...m], the prefix function for the pattern Pis the function x : (1, 2, ...,m)—> (0, 1, ...., m— 1} such that, x(q]=max (K:K (e[*Te[ey*[e[*]e]=]*]o]e]oTe I szse2_ [e]o]e[el@l sJolalela] ez a{ofa] a Figure shows the pattern P i oocuredatS=4,'The fe ore matches with text Tso that q = 5. The valid shift S+ 1 butits valid wheres" Pattern will not match t d = S+ 2. Wh 5 ‘ext T when the shift S’= array x can be represented as n{5] = 3. the pattern Pa is compared with Py we find the LP. University-C8-Tech)-Akash Books 2015-11 4 EL Mei down the worst case complexity of naive, Rabin-K and string matching using finite automata algorith ae Ans. Naive a xm) where mis the length of pattern and n is the len; 30 ength ot pattern and length text n hocmen equa algorithm rans i quate Lene none n runs in qua Same as Naive = O(mn) (quadratic in worst case) KMP: Running time for prefix function calculation = 0(m) and for KMP matcher = O(n) Hence total time: © (m +1) ng Finite Automata Iratic time Running time On) length of Text. fo Q.1. (e) Differentiate local and global optima. ‘Ans. Differentiate between global and local optima: When an algorithm finds ‘a solution to a linear optimization model it is the definitive best solution and we say it js the global optimum. A globally optimal solution has an objective value that js as good or better than all other feasible solutions. ‘A locally optimum solution is the one for which no better feasible solution can be found in the immediate neighbourhood of the given solution. Additional local optimal points may exist some distance away from the current solution. Q.2. (a) Find an optimal Huffman code for the following set of frequencies: Ads bi5 cS d:25 el 6) Ans. Optimal Huffman code. A:45 b: 15 C:5 d:t5 e: 10 (a) Huffman codes are widely used and popular technique for compressing data; savings upto 90% are typical, depending on the characters ofthe data being! ‘compressed. Huffman's greedy algorithm uses table of the’ frequencies of occurence of the characters to build an optimal way of representing each character as a binary string (a)e:5 e:10 6:15 d:25 a:45 (e) ¥ ‘ b:15 d:25 a:45 &5 e:10 (c) 4:25 e is and Design Fifth Semester, Algorithm Analysis an 12-2015 ifth S @ © Total bits required = (8x4543x15 4 3x5 + 3x25 + 3x10) 135 +45 4 15+75 +30 300 bits 0 " LP. University-(- Tech) Akash Bor 2015-13 characters long codes. This would require cree ore niceraat a-l x = 200 bits \ 2.(© Differentiate Prim’s and Kruskal’ algorithm. : Ans. 
Kruskul’s Algorithm: 1. Itis an Algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. . 2. Kruskal is where we order the nodes from smallest to largest and pick accordingly 3. Kruskal allows both new-new nodes and old-old nodes to get connected. 4, Kruskal’s algorithm builds a minimum spanning tree by adding one edge at a time,The next line is always the shortest only if it does not create a cycle. 5, Kruskal’s require us to sort the edge weight’s first. Prim’s Algorithm: 1 Itis the Algorithm that finds a minimum spanning tree for a connected weighted unidirected graph. 2. In Prim's algorithm we select an arbitrary node then correct the ones nearest to it 3. Prim’s always joins a new vertex to old vertex. 4, Prim’s builds a minimum spanning tree by adding one vertex at a time. The next vertex to be added is always the one nearest to a vertex already on a graph. 5. In Prim's algorithm we select the shortest edge when executing the algorithm. Q.3. (a) Write Rabin-Karp string matching algorithm. Consider working ‘spurious hits does the Rabin-Karp matcher algorithm module q= 11, how many finds in the text T=314159265348 when looking for the pattern P=26. (6) orn ‘Ans. Astring search algorithm which compares a string's hash values, rather than . the strings themselves. For efficiency. the hash value of the next position in the text is (2) easily computed from the hash value of the current position. 3 bits to How Rabin-Karp works: in radix-S notation. (S = (0,1,...9) Let characters in both array T and P be digits Let p be the value of the characters in P. Choose a prime number q such that fits within a computer word to speed computations. compute (p mod 4) ~The value of p mod q is what we will be using to find all matches of the pattern P in. Compute (T(s+1,...6+m) mod q) for s = 0..nm ) ‘Tost against P only those sequences in T having the same (mod q) value mentally computed by subtracting the high-order (T{s+1,..,8¢m) mod q) can be ine digit, shifting, adding the low-order bit, all in modulo q at ithmetie. is and Design rithm Analysis an Fifth Semester, Ale 14-2015 Algorithm: , RABIN-KARP-MATCHER (TP, ds) Ln length (7) 2.m « length [PI] 3.hed™! moda 4pe0 5.te0 ; 6.fori< 1tom Preprocessing. 7. dop «(dp + pli) mod q 8. ty © (dty + THE) mod g 9.fors <0 ton -m Matching. 10. doifp =t, 11, then ifp [1 to m] =T'[s + 1 tos +m) 12. then print "Pattern occurs with shift” s 13. ifs spurious hit [sh] [s [926s [ats 59 mod 11 = 4 equal to 4-> spurious hit BAR GOR GEE 92 mod 11=4 equal to 4-> an Spurious hit BEER sTepyeysyays 26 mod 11 Tied equal to 4 an exact match!! RTs Ts aye [says 65 mod 11 10 not equal to4 LP. University-(B-Tech)-Akash Books 2015-15 LETS [9216 [5|3]5) 13h 615] 3|5) 59 mod 11 =9 not engal to an ja] (sh }4h[s[o[2]6\5 [ays acwecansee, wnenamanen Spot tt=2 0b equal 4 was Azam see, when a match found, further testing done to inwure that amatch ‘Total spurious hits = 4 . Q.5. & Define the complexity classes: PNP and NPC. deaiAtts Define P and NP clans of problems: Informally th: . ccicton probleme solvable by some slgorithm within a nia Eee ae See re ven in ihe lngeh of the input. Tariog was aot contemned sith aa es Pic machines: Dut rather his concer was whether they can simulate arbitrary 35 Slgorithms given sufficient time. 
However it turns ee ee Slee Sivet acient computer modele for vammple machi alee simula(e more pounded random access memory) by at most squat eeccucete a fa ee hr cass of somputer models, Here we follow standard practice and define the class P in ‘Pom ally the ekm entsofthe casoP are languages. Let Zbe a finite alphat is, a finite nonempty set) with at least two sreneerandle =" tase ores cees isa fiona language over © is a subset L of £*, Each Turing machine M has : exer ated input alphabet 5. For each string w in E* there is a computation associated aa a arith input w. We say that M accepts w if this computation terminates in the accepting state. Note that M fails to accept jf the computation fails to terminate. The language accep! associated alphabet = and is defined by: L(M) = wet! M accepts w) 1 Turing machine M which runs in polynomial time] ‘The notation NP stands for “nondeterministc polynomial time”, since originally NP was defined in terms of nondeterministic machines (that is machines that have more Tan one possible move from a given configuration): Howsvit now it is customary to gye < vad Riw,y)) where Iw and ly! denote the Jengths of w andy, respectively. * 4 Soria NP-complete ‘tit isboth NP-hard and an element of NP (or 'NP-easy)- i finds a polynomial-time Tete problems are the hardest problems NP. Ifanyone i Mgorithm for even one NP-complete problem, then that would imply a polynomial-time gorithm for everyNP-complete problem. Literally thousands of problems have been aor TLtime algorithm for one of hem seems incredibly shown to be NP-complete, 50 @ unlikely, It is not imme NP-hardness is alrea tw either if this computation ends in the rejecting state, or ted by M, denoted L(M), has P=(L | L=LW) for some an equivalent de! relation Rc E* x 2," for som‘ relation R a language Lp, over ard or NP-complete cision problems are NP-h ee apn ‘at the problem also diately clear that an n ead @ 1 » of a problem; insisting th dy a lot to dema (penalli ysis and Design Fifth Semester, Algorithm Analy 16-2015 ea ith suitable example), tea ian cycle problem (with suita Rae eee proble: Hamiltonian path is a path that vi, oe Hamiltonian decomposition is an edge decomposition of a graph into Hamiltonian al-time algorithm seems almost comp, 1y two) Sia, le is end, ian [> Hamettarion graphs @ Non Hamiltonia OO-OO+O 4G) Q4. (6) Find the opi hs can als timal schedule for the followi ith gi i se) ana aay ¢ following ga8k with Siven weight A dead task dead] cant mat: patt belo indi aut 6, ¢: th stri For defi as | lat an ity-CB.Teeh)-Akash Books 1 Probleme: deadlines und penalties fora signle recess @y.-.0,) of unit tin (6) a set of'n intger deadlii @ ane 2a set om inter denies dy dey seh that each asin 1 dean SEO Integers or ponaltia wy, such tha ask a, is not finished by time d, and we incur'no penalty Wh task a yy time d, and we feu! 2015-17 problem of scheduli ‘oF has the following inputs, ee ue e Anputs ae, penalty oft ino penalty if task fontchos ea Given deadlines and penalties are 1j2 3 |4|5/ 6) @[2{2}ifsia{ir w 20 10/5125 Step 1. Arrange tasks in decreasing order of penalties 1|2/3|4(5|6 d,|1 (3 1 {sis w, | 26 | 20|15 mole 1 Step 2: Pairwise comparison. Hence tasks as a,, ay, and aj are acce ‘ . yy pted and can't be completed by their deadlines St re me cece eee ‘The final schedule (optimal) is [< 41,42, G5,43,04,0¢ >| and total penalty incurred Wy +W +e 15+10+1=26 \n Q.4. (c) String Matching with finite automata. (with suitable example) Ans. 
String matching algorithm using finite automata: There is a string- matching automaton for every pattern P; this automaton must be constructed from the pattern in a pre-processing step before it can be used to search the text string. Figure below illustrates this construction for the pattern P = ababaca. We shall assume that P is a given fixed pattern string; for brevity, we shall not indicate the dependence upon P in our notation. In order to specify the string-matching automaton corresponding to a given pattern. P{1...m), we first define an auxiliary function , called the suffix function corresponding to P. The function o is a mapping from 5 to (0, Tn) stich that o(x)is the length of the longest prefix of P that isa suffix of +: of2)= max tk: P, 21. ‘te suffix function o is well defined since the empty string Py =¢ i a suffix of every string, As examples, fr the pattern P=ab, we have o() = O,o(ceaca) = 1 and o(ccab) = 2. For pattern P of length m, we have a(x) = m if and only if P) x. 1b follows from the definition of the suffix function that ifJ y, then o(2) so). We define the string-miatching automaton that corresponds to given pattern P{L...m] as follows. , ; » The state set @ is (0, 1,...ml. The start state qa is state 0, and state m iS the only accepting state. + The transition function cis defined by the character a :8(q, a) =0(P,2). following equation, for any stateq and 5 Fifth Semester, Algorithm Analysis and Design 18-20 a as ~ 7 fy > a >, Se (6) ONTO O-+-Q+ So : 7 (a Input State a bc P o [4JoJo] @ 1 [s]2fo]» 2 [3fofo] a 3 [a[afo]» a 3 [tTfe] « i =120456705 6 /7/Olola Ti] -abababacaba 7 fz states) 012345 45 6MM2 3 © o ZB clarify the operation of a string-matching automaton, we now give a simple PMiicient program for simulating the behaviour of such an automaton (represented by it transition function 8) in finding occurrences of a pattern P of length m in an input tex T1...n]. FINITE-AUTOMATION-MATCHER (T, 8, m) 1.ne length (7) 29¢0 3. fori 1ton 4.doq —d(q, T1i)) 5.ifg=m 6. then print “Pattern occurs shift? i —m As for any string-matching. automaton for a pattern of length m, the state set Qis |i 1,...m), the start state is 0, and the only accepting state is state m. The simple log structure of FINITE-AUTOMATON-MATCHER implies that its matching time on text string of length n is (n), Q4. (d) Proof of, Correctness of Bellman-Ford algorithm, Ans. Proof of Correctness of Bellman Ford, Algorithm: Bellman Ford Algoritt: that computes shortest paths from a signle Source vertex to all of the other verticesiz weighted digraph, The Correctriess of algorithm can be shown, by induction, Lemma: Atter j Tepetition of for loop: * It distance (u) is not infinity, * Ifthere isa path from 5 length of the shortest path fro, . itis equal to the length of some path from sto tou with at most edges, then Distance (1) is at mos! mM s to u with at Most i edges, a simple, nted by its input text Qis (0, sle loop eon a orithm esina 10 Uy ot thé LP. University (ne . 8 u (BoTech)-Akash Rook 201 015-19 Proof: For the base ease of in oes f inductions consider {[i=0} | and the loop is executed for the first time fl Then, for the source vertex. ‘ource distance = 0] which is correct for other vertices 4 [uw dist fini Which is also correct because there no path from source to u with 0 edes For the inductive case, we first ase, We prove the first part. Consider a +3 vertex distance is updated by Hotpart Consider a moment whens ff \v Goal Kags Se [V distance = w. 
distances + wv wight] ee By inductive assumption [a, distance] is the length of the path from source to u Then [u. distance + uv. weight | is the length of the path from source to v that follows the path from source to u and then goes to v. : For the second part, consider the shortest path from soures to u with atmost i edges let v be the last. Vertex before u on this path. Then, the Part of the path from source to vis the shortest path from source to v with at most i-1 edges. By inductive assumption [vdistance] after i — 1 iterations is at most the length of this path. Therefore, luv. weight + v. distance] is at most the length of the path from s to u. In the ith iteration, fa. distance] gets compared with [uv. weight + v] [distance] and is set equal to it if [av. weight +v. distance] was smaller. Therefore after i iteration [u. distance] is at most the length of the shortest path gets from source to u that uses autmost i edges. Ifthere are no negative-weight cycles. Then every shortest path visits each vertex at most once. So at step 3 no futher improvement can be made conversely suppose no improvement can be made, Then for any cycle with vertices vi0} ..v(—1) v (i) distance < = v{(i- 1) mod K). distance + v(i —1) mod k] weight Summing around the cycle, the [i] distance terms and the v [(i-1) {mod k]] distance terms cancel leaving O's sum from 1 to k of u (é~ 1 (mod k)] v [i] weight i.e. every cycle has non-negative weight. END TERM EXAMINATION [DEC. 2015} FIFTH SEMESTER (B.TECH) ALGORITHM ANALYSIS AND DESIGN be Whi [ETCS-301] wh brs. MM. y, spt any five questions including Q.no.1 which is compulsory. vim Note: 1. Atte Q.1. (a) Define 0, 0 notations and expl Ans. 0 Notation: For'a given function g(n), we denote by O(g(n)) as : O(ein)) = (fn) : there exist positive constants ¢ and n, such that 01 Hence fir) # O(g{n)) or (3) # 02") Q.1. (©) Write an algorithm for merge sort. Find its worst case, best case and average case complexity. Ans. The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows. * Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each. * Conquer: Sort the two subsequences recursively using merge sort. + Combine: Merge the two sorted subsequences to produce the sorted answer. MERGE (A, p, g, 7) 1 neq-p+l 2 ner-a 3° create arrays L[1 ton, + 1Jand R[1 ton, +1]] 4 foriclton, 5 do Li] A [p +i-1) 6 forjelton, 7 doRUIAla +i 8 Linj+ileo 9 Rin+lleo Gel i jel 12 forkeptor 13 doif Lf] sR UI 4 then A [A] RU tor, Algorithm Analysis and Design subarray a ple , ns A[p to rl into two subarra, Ifp 2s the subar iq that partitions Al i 2 i ord ig + | tor}, containing [n/2] elements, 7 2y mply computes 80 veg oanta ig in /2] elements ORT (APY) tog], conta MERG Lifpsr . 2.theng [p+ rV2) 3. MERGE-SORT (A,p,9) 4. MERGE-SORT (A, q+ 1,1) 5. MERGE (A,p,a.r) Sorted sequence Initial sequence : The operation of merge sort on the array A = 5, 2,4, 7, 1, 3, 2, 6. The lengths of the sorted sequences being merged increase as the algorithm progresses from bottom to top Analysis of merge so: When we have n > 1 elements, Fi : Merge sort on just one element takes constant time. we break down the running time as follows. * Divide: The divide step just computes the middle of the subarray, constant time. Thus, Din) = @ (1). * Conquer: We recursively solve two subproblems, each of size n/2, which contributes 27 (n/2) to the running time. 
* Combine: We have already subarray takes time @ (n), so C(n) which takes noted that the MERGE Procedure on an n-element = (n). - ie itl 7 else Ak] © RUT . ti +1 7 tu - Jement and is therfore sorted. Otherwigg,y 2: Part expan¢ co LP. University-(B.Tech)-Akash Books 2015-25 We can solve the recurrence. For convenience that a pale gensolve the recurrence For convenience, we aan that i an exact power of ' "),schich in part (b) has been expanded in the __tree representing the recurrence. Part (c) shows this process ceesred a avalon be | Sar arried one step further by ; cn Tm) / \ oN eo) Thea) fo ‘ or wwe “8 / \ / \ Teva) Tnvay Tend) Tina © Part (d) shows the resulting tree /\ L\ i x ix k Nie nen Pordo dp Cf 4 ee ee eS % | (@) Total:en Ign +en Fig. The construction of a recursion tree for the recurrence T (n) = 27 (n/2) +en. ‘To compute the total cost represented by the recurrence (2.2), we simply add up the costs of all the levels. There are lg n + 1 levels, each costing cn, for a total cost of en (ign + 1) =en Ign + en. Ignoring the low-order term and the constant c gives the desired reautof0(n Ign). Q.1. (d) Explain 0-1 Knapsack problem and discuss its solution. ‘Ans. The 0-1 knapsack problem is posed as follows. A thief robbing a store finds n items; the ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack for some integer W. Which items should he take? (This is called the 0-1 knapsack problem because each item must either be taken or left behind; the thief cannot take a fractional amount of an item or take an item more than once.) Formal description: Given two n-tuples of positive numbers 24-2015 10>, and w> 0, we wish to determine the gy, 11.2. tn ot fies of store) that bi arto maximizes 2” Pea ws W : subject to 2." ford ‘oblem ic-Programming Solution to the 0-1 Knapsack Pr = ean ighest vimbered item in an eptiaal solution 8 for W pounds The, Sele Let be the eae ew. e solution Sig y relax =5“i)is an optimal solution for W-sv, pounds and the value to the solution Sis = the value of subproblem. We can express this fact in the following formula define Cli, W] to be the solutiog, items 1,2... and maximum weight w. Then 0 ifi =Oorw=0 eliw] =eli-10) if w, 20 Max [o,+e(i-1, w-wJe{i-1,w)} if> 0 and 2 w, This says that the values of the solution to items either include i item in wh, case itis v, plus a subproblem solution for (i-1) items and the weight excluding W, does not include i" item in which case it is a subproblem’s solution for (i -1) items &, the same weight. That ifthe thief picks item i thief takes v, value and thief can choo, from items w-w, and get C [i-1, w,-w, additional value. On other hand if theif deci. oto take item i, thief choose from item 1,2...i—1 upto the weight limit w, and getc, ng 4w] value. The better of these two choices should be made, Although the 0-1 knaspack problem the above formula for ¢ is similar to LQ Bel formula boundary values are 0, and other values are computed from the input ay neg jcarlier’ values of C. So the 0-1 knapsack algorithm is like the LCS-length algorith, alg given in CLR for finding a longest common subsequunce of two sequences The algorithm takes as input the maximum weight W. the number o} the two sequences v =<, UpynJ,> and w= < Wy, Wy~-st0,>. it stores the eli] valuesinth of table. that is a two dimensional array, e [0...n,0...w] whose entries are computed in g Tow-major order. 
That is, the first row of Cis filled in from left to right, then the secong we row, and s0 on. At the end 0. the computance c{[n,w] contains the maximum value that can be picked into the knapsack, Dynamic-0-1 Knapsack (v, w, n, W) Forw =0to W DO c{0,w)=0 FORi=1ton DOcli,0)=0 FORw=1TOW DOIFfw, R, the Bellman-Ford algorithm returns a boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source. If there is such a cycle, the algorithm indicates that no solution exists. Ifthere is no such cycle, the algorithm produces the shortest paths and their weights. ‘The algorithm uses relaxation, progressively decreasing an estimate d{v] on the weight ofa shortest path from the source s to each vertex v ¢ V until it achieves the actual shortest- path weight &(s, v). The algorithm returns TRUE if and only if the graph contains no negative- weight cycles that are reachable from the source. BELLMAN-FORD (G, w, s) 1. INITIALIZE-SINGLE-SOURCE (G, s) 2. foric1to IVIG}I-1 3. do for each edge (u,v) FIG] 4. do RELAX(u, », w) 5. foreach edge (u, v) € E{G] 6. do if d{u] > du] + wu, ») 7. then return FALSE 8. return TRUE lations Q.2. (a) What are recurrence re! relation. x(n) =.x(n -1) for n > 0 and x (0) = 0. ‘Ans. Arecurrence relation is an equal say a, with one of more preceding terms. "he "one the sequence. The following is a general re a, = May» An (6.2 ‘The recurrence relation is said to be obeyed e relation. sip yy) 2M s? Solve following using recurrence tion that relates a general term in the sequence, 5) by qa) = a diately preceding terms of the sey, Lets positve inter. Fe the value ofa, depends on all the prior with full history ce relations, me general recurrenc The following. are examples of 60 Man = 4p 1 Flag a tt Cn ym a. = ea, , +/(n)fin) isa function of n Gym = Sytem tnt md G, = CO, M4 Oy mt = ag +0,4, gto t Ay 19 Solution of ree relation: : The recurrence elaion we solve is given below. in) = Tn-1)+e nay ere, cis a small positive constant. Tiassinaisiosreponta tas algorithm that makes one pass over each one ot the n elements. It takes c time to examine an element. Termination Condition: -_ In terms of an equation, we can say the following. T(0) = 0. Instantiations Tn) = Tin-D +0 is the recurrence relation we are going to use again and again. Assume n is an integer, n 2 1. For example, if we want to instantiate the recurrence for an argumen: value of m1, we get Ta~1) = Tin-2)+e 'o T on the right hand side is one le: ifwe instantiate the formula for an Tk) = Thk-1)+e Or if we instantiate it for an argument value of 2, wet get T2)=T(1) +e The solution to the recurrence relation follows: Tn) = [Ta=+e] = (Pin-2)+e)+0 = (T(n-3) +c) +20 = Mn-(n-1) + (n-1)¢ Tin-n) + (nJe 10) +ne = O+ne ne Note that the argument t ss than the argument on the left hand side, Thus, argument k, we get i We know 111) = 9 Siocation produces”? terminating condition of the Tecurrence, Therefor met the The sub and (n) Th Ifne Trene, one of is an ament of ument get res LP P. University- 1 be constants, let f(n) be a. nonnegative integers by the nstants, le /(n) bea function, and let (n) be defined onthe Tin) =aTin/b) + fin), where we interpret n/b to mean either n/b or n/ asymptotically as follows. nib or n/b, Then T (n) can be bounded There are 3 cases: 1 Iff (n) = O(n *€, *-*) for some constant € > 0, then Tin) = © (n ®*,"). 2. Iff(n) = 0 (n**,2), then Tin) =0 (n* Ign) 3. 1ff(n) (nl, **©) with €, and fin) satisfies the regularity condition, then T\n) = Q (ftn)). 
Regularity condition: af (n/b) < ef (n) for some constant ¢ < 1 and all sufficiently large n. How does this work? ‘Master method is mainly derived from recurrence tree method. If we draw recurrence tree of Tin) = aTtn/b) + fin), we can sce that the work done at root is fin) and work done at all leaves is Q(n‘) where c is Log,a. And the height of recurrence tree is Log,n fio) => (0) {(evb) finb)----=-> af(ovb) f(nfe) ) (nid?) (00b") favo") ZI fled }=> afin") fies) f(ane?)..A(evo?) nt) w Ab A Ms A AID AAP ) aw (1) (4) 1) it (1) @(1) @1)--ro(h™) Py otoa da oft) @(1) (1) (1) 1) OCF or” th Semester, Algorithm Analysis and Design Pith Semester, aleulate t method, we cale cal work hen leaces are the iemeaves and root ia asymptoticn =f work done a os i asmptntica™ nee ee ortiphied by work done at any level (Case 2) 1¢ otal work done. If the work done ay bee rt, and our result bert, In recurrence F our result hecomes height en our resuilt becomes work done at root. done nt mptotically more, then = Gone at root is aevmp FXAMPLE: Consider the recurrence aTin/2)4 08. each dividing the input by are again a~4 subproblems, each dividing the input iy, eon titre? Moreover, fin /2)''skn? for k=1 2, 80 Case applies, Thus T1,, . EXAMPLI Consider the recurrence Tin) = 4T(n/2) +n? ain a=4 subproblems, each dividing the input by jx, prion a eeet eae ences itnen?. Again nlet,s is n?, and fin) is thus O/n?, . 2 applies. Thus Tin) is -(n? log n). Note that increasing the work on each recurs call irom linear to quadratic has increased the overall asymptotic running time oni», + logarithmic factor. Q.3. (a) Explain Strassen's algorithm and explain. Ans. Refer Q.2. (b) of First term 2015 Q.3. (6) Compute following using strassen's algorithm. (8.25) (6.25) 121) [2 43 0 3 2\x!4 8 9 05 1J/|3 89 121) [248 03 2\xl4 8 9 05 4) [3 8 9 B We can divide the matrix A and B in four r square parts if n i , But and BN is nota degree of 2 so, a Parts ifn is a degree of 2. 1210) f2 4309 932 01/4 8 9 9 954 0/3 8 9 9 200 o}l0 009 : B We divide the matrices A& B into submatrices MG —— of size N/2 x N/2 2015-29 neve ssen method the Bs es the Inst “trices of result are caleulated using following tet se LF won PL = ah), P2mtasnyn = ase oe Galea c ies Po = #064, Petit (gery eC PT = (a-eesp se toar wat mess ltted using above gee, llowing are values of four sub-matrix of result Gone es oe bY bag, fin) a @ 15] fe |p P5+P4-P24 pg PIsP2 (oy State] « PRR eet | A B © , and C are square matrices of N x N 4, b, ¢, d are submatrices of A, of size N/ 2x NiZe. fg hy are submatrices B, of size Nx We Pine 3, P4, P5, P6, PT are submatrices of size N/2 x N/2 So, ursive nly by (6.25) (6.25) Pl = a(f-h) So, FACIE (alle li (5 26 oll 4] 1 P2 sand Design Algorithm Analysis 2 inh Sens oe - jz 2),[9 |- - Lp 3\“{o o} [18 o| (cre Pa = fo 5) [4 oe 4 2 28 56 [o able els 0 0 of’|4 8 | PA = dig) 4 o1/[3 8). 2 ‘J = [o ofllo of [4 3) 4 0Lf1 4]_ [4 16] “lo of 4-8} lo 0 =|734 20 i x, “0 3}"lis 8 les eo So, now, calculating Matrix Cas following: 83 36) [4 16) [18 0) [a6 iu ee “le allo oe 0 at ; 13 28 * lis 40 Universi "B.Tech Akash Book: 2015-31 Prope . [12 0) a o © 127 of lig o [30 0 Poe la oll 16) [42 72) 0 ofte o|>[% o| ene P1+ P5 -P3 ~ pz. 6 34 20 a a He 2] ; 13 28/30 0 0 4272/81 40 0 Q-4. (a) Explain any one algorithm for minimum cost. Write code/algorithm neatly, Ans. 4(a) Refer Q.No.3 of Important Questions (Pe. No. 6) Q-4. (b) Using same algorithm, find minimum spanning tree for the following: (6.5) generating spanning tree with (6.25) Ans. 
In Prim's algorithm, first we initialize the priority queue Q to contain all the vertices and set the key of each vertex to ∞, except the root, whose key is set to 0. Let vertex (0) be the root. The edges are then selected as follows:
Step 1: edge (0, 1) is selected.
Step 2: edge (1, 2) is selected.
Step 3: edge (2, 3) is selected.
Step 4: edge (3, 6) is selected.
Q.5. (a) What is the Dynamic Programming Paradigm? Explain its characteristics. (6.25)
Ans. Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.
We are PrePo dt Tie proge, gorithm rhs we rn sori =2)the a return F(n—2) + Fin-1) sreatentaigorthn for comm made or Fy FQ) + FIA) iN F2) Ty Fa) + FO) MeN F(t) + F(2) 14 : is 4 three times, and Fis computed twice. Each recomputat, incur otra tecinive work that has already been done elsewhere. Note also the shape this diagram; it looks like a binary tree of calls. We can see the height of the tree: going from the root to the rightmost leaf. The height is © (). This exponential behavior is very inefficient. We can do much better with dynamic programming. Our problem satisfies t optimal substructure property: each solution is the sum of two other solutions. Using a dynamic programming technique called memoization, we can make th recursive algorithm much faster. We assume there is an array A of integers whose fir: tialized to 1, and there is an integer called unknour i ally two, that keeps track of the index of the least Fibonacci number whose value; not known: Fa) ifn 1, (i+ 1 e +1, and G+ 1 Grid example. a) afoo also 2 1 2|5{4[8 eleaypals Step 1. The first step in designing a dj : . oat bald errcincaneange Amami ramming grins dein vet of the cheapest (least dangerous) path from the bottom to the sell) Te fd the value of the best path to the top, we need to find the minimal value in the last row of the array, that is, 7 min, jeg Ani Step 2. This is the core of the solution, We start with the initialization. The simplest way is to set A(1,) = C(1,j) for 1 e for any : denote some very’ large mum that INE dar ska make INE the sum of al cost {for example initialization do ; forj=1tom | bi «<0 BON€0 : fori=Otondo BG, m+) © INF r fori = 1tondo forj=1tomdo . a Bid) — Cli, j) + min (BE-1,j- ),B GAY), BG -1,) + DI / finding the cost of the least dangerous path ; cost < INF forj= 1 tom do if(B (ng)
