ADA QB Sol

This document discusses various algorithm design concepts including algorithms, algorithm efficiency, growth functions, best/worst/average case efficiency, asymptotic notation (O, Ω, Θ), and specific algorithms like insertion sort, merge sort, quicksort, binary search, and the knapsack problem. It provides definitions and explanations of these terms and algorithms over 9 paragraphs. Key points covered include the definitions of algorithms and pseudocode, analyzing time and space efficiency, using asymptotic notation to analyze efficiency, and explaining divide and conquer algorithms like merge sort and quicksort.

Uploaded by

ankitaarya1207

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Analysis and Design of Algorithms


BE 5th CSE

Unit-1

1. What is an algorithm?

Ans:- An algorithm is a sequence of unambiguous instructions for solving a problem, i.e.,
for obtaining a required output for any legitimate input in a finite amount of time.

2. What is an algorithm design technique?

Ans:- An algorithm design technique is a general approach to solving problems
algorithmically that is applicable to a variety of problems from different areas of
computing.

3. What is pseudocode?

Ans:- A pseudocode is a mixture of natural language and programming language
constructs used to specify an algorithm. A pseudocode is more precise than a natural
language, and its usage often yields more concise algorithm descriptions.

4. What are the types of algorithm efficiencies?

Ans:- The two types of algorithm efficiencies are
a. Time efficiency: indicates how fast the algorithm runs.
b. Space efficiency: indicates how much extra memory the algorithm needs.

5. What are exponential growth functions?

Ans:- The functions 2^n and n! are exponential growth functions, because these two
functions grow so fast that their values become astronomically large even for rather
small values of n.

6. What is worst-case efficiency?

Ans:- The worst-case efficiency of an algorithm is its efficiency for the worst-case input
of size n, which is an input or inputs of size n for which the algorithm runs the longest
among all possible inputs of that size.

7. What is best-case efficiency?


Ans:- The best-case efficiency of an algorithm is its efficiency for the best-case input of
size n, which is an input or inputs for which the algorithm runs the fastest among all
possible inputs of that size.

8. What is average case efficiency?

Ans:- The average-case efficiency of an algorithm is its efficiency for an average-case
input of size n. It provides information about an algorithm's behavior on a "typical" or
"random" input.

9. Define O, Ω & Θ notation?

Ans:- A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded
above by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.

A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below
by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.

A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both
above and below by some constant multiples of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
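As a quick numeric illustration (an added sketch with illustrative constants, not part of the original definitions), the snippet below verifies that t(n) = 3n + 2 is in O(n) by exhibiting the constants c = 4 and n0 = 2:

```python
# Illustrative check that t(n) = 3n + 2 is in O(n):
# with c = 4 and n0 = 2, t(n) <= c*g(n) for all n >= n0, where g(n) = n.
def t(n):
    return 3 * n + 2

c, n0 = 4, 2
assert all(t(n) <= c * n for n in range(n0, 1000))
print("3n + 2 <= 4n for all n >= 2, so 3n + 2 is in O(n)")
```

Note that the bound fails below n0 (t(1) = 5 > 4·1), which is exactly why the definition only requires the inequality for all n ≥ n0.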


10. Write the insertion sort algorithm.

Ans:- Insertion sort is a good algorithm for sorting a small number of elements. It works
the way you might sort a hand of playing cards:

• Start with an empty left hand and the cards face down on the table.
• Then remove one card at a time from the table, and insert it into the correct
position in the left hand.
• To find the correct position for a card, compare it with each of the cards already
in the hand, from right to left.
• At all times, the cards held in the left hand are sorted, and these cards were
originally the top cards of the pile on the table.
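The card-sorting description above translates directly into code. A minimal Python sketch (using 0-based lists, added for illustration):

```python
def insertion_sort(a):
    """Sort list a in place, mirroring the card-sorting description above."""
    for j in range(1, len(a)):          # a[0..j-1] is the sorted "hand"
        key = a[j]                      # next card taken from the table
        i = j - 1
        while i >= 0 and a[i] > key:    # scan the hand from right to left
            a[i + 1] = a[i]             # shift larger cards one slot right
            i -= 1
        a[i + 1] = key                  # insert the card in its place

cards = [5, 2, 4, 6, 1, 3]
insertion_sort(cards)
print(cards)  # [1, 2, 3, 4, 5, 6]
```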

Unit-2

1. Give the general plan for divide-and-conquer algorithms.

Ans:- The general plan is as follows


1. A problem instance is divided into several smaller instances of the same problem,
ideally of about the same size.
2. The smaller instances are solved, typically recursively.
3. If necessary, the solutions obtained are combined to get the solution of the original
problem.

2. What is the difference between quicksort and mergesort?

Ans:- Both quicksort and mergesort use the divide-and-conquer technique, in which the
given array is partitioned into subarrays that are solved separately. The difference lies in
how the arrays are partitioned: in mergesort the arrays are partitioned according to
position, and in quicksort they are partitioned according to element values.

3. What is binary search?

Ans:- Binary search is a remarkably efficient algorithm for searching in a sorted array. It
works by comparing a search key K with the array's middle element A[m]. If they match,
the algorithm stops; otherwise the same operation is repeated recursively on the first half
of the array if K < A[m], and on the second half if K > A[m].

4. What is the knapsack problem?

Ans:- The knapsack problem is a problem in combinatorial optimization: given a set of items,
each with a weight and a value, determine the count of each item to include in a collection so
that the total weight is less than or equal to a given limit and the total value is as large as
possible. It derives its name from the problem faced by someone who is constrained by a
fixed-size knapsack and must fill it with the most useful items.

5. What is the greedy technique?

Ans:- The greedy technique suggests a greedy grab of the best alternative available, in the
hope that a sequence of locally optimal choices will yield a globally optimal solution to the
entire problem. Each choice must be:
Feasible: it has to satisfy the problem's constraints.
Locally optimal: it has to be the best local choice among all feasible choices available
at that step.
Irrevocable: once made, it cannot be changed at a subsequent step of the algorithm.

6. Write the algorithm for Iterative binary search?


Ans:-
Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in nondecreasing
// order, n > 0, determine whether x is present.
{
    low := 1;
    high := n;
    while (low <= high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}
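For comparison, a Python version of the same iterative search (a sketch using 0-based indexing and returning -1 when x is absent, rather than the 1-based convention of the pseudocode):

```python
def bin_search(a, x):
    """Iterative binary search on a sorted list; returns an index of x or -1.
    Mirrors BinSearch above, but 0-based (the pseudocode is 1-based)."""
    low, high = 0, len(a) - 1
    while low <= high:                 # note <=, so one-element ranges are checked
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return -1

print(bin_search([10, 20, 30, 40, 50], 40))  # 3
print(bin_search([10, 20, 30, 40, 50], 35))  # -1
```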

7. Explain Merge sort

Ans:- Merge sort is a sorting algorithm based on divide and conquer. Its worst-case running
time has a lower order of growth than insertion sort. Because we are dealing with
subproblems, we state each subproblem as sorting a subarray A[p . . r]. Initially, p = 1 and
r = n, but these values change as we recurse through subproblems.
To sort A[p . . r]:
Divide by splitting into two subarrays A[p . . q] and A[q + 1 . . r], where q is the
halfway point of A[p . . r].
Conquer by recursively sorting the two subarrays A[p . . q] and A[q + 1 . . r].
Combine by merging the two sorted subarrays A[p . . q] and A[q + 1 . . r] to produce
a single sorted subarray A[p . . r]. To accomplish this step, we define a
procedure MERGE(A, p, q, r). The recursion bottoms out when the subarray has just 1
element, so that it is trivially sorted.
MERGE-SORT(A, p, r)
    if p < r                        ▹ Check for base case
        then q ← ⌊(p + r)/2⌋        ▹ Divide
            MERGE-SORT(A, p, q)     ▹ Conquer
            MERGE-SORT(A, q + 1, r) ▹ Conquer
            MERGE(A, p, q, r)       ▹ Combine
Initial call: MERGE-SORT(A, 1, n)
The halfway point of p and r is simply their average, (p + r)/2; we take the floor to
ensure an integer index q.
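A Python sketch of MERGE-SORT and MERGE (0-based, inclusive indices; added for illustration):

```python
def merge_sort(a, p, r):
    """Sort a[p..r] in place (inclusive indices, as in the text)."""
    if p < r:                       # base case: one element is trivially sorted
        q = (p + r) // 2            # halfway point (floor of the average)
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)

def merge(a, p, q, r):
    """Merge the sorted subarrays a[p..q] and a[q+1..r]."""
    left, right = a[p:q + 1], a[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # take from left when right is exhausted or left's head is <= right's head
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 2, 3, 4, 5, 6, 7]
```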

8. Explain Quick sort

Ans:- Quicksort is based on the three-step process of divide-and-conquer.


• To sort the subarray A[p . . r]:
Divide: Partition A[p . . r] into two (possibly empty) subarrays A[p . . q − 1]
and A[q + 1 . . r], such that each element in the first subarray A[p . . q − 1]
is ≤ A[q], and A[q] is ≤ each element in the second subarray A[q + 1 . . r].
Conquer: Sort the two subarrays by recursive calls to QUICKSORT.
Combine: No work is needed to combine the subarrays, because they are sorted
in place.
• Perform the divide step by a procedure PARTITION, which returns the index q
that marks the position separating the subarrays.

QUICKSORT(A, p, r)
    if p < r
        then q ← PARTITION(A, p, r)
            QUICKSORT(A, p, q − 1)
            QUICKSORT(A, q + 1, r)
Initial call is QUICKSORT(A, 1, n).
Partitioning
Partition subarray A[p . . r] by the following procedure:
PARTITION(A, p, r)
    x ← A[r]
    i ← p − 1
    for j ← p to r − 1
        do if A[j] ≤ x
            then i ← i + 1
                exchange A[i] ↔ A[j]
    exchange A[i + 1] ↔ A[r]
    return i + 1
• PARTITION always selects the last element A[r] in the subarray A[p . . r] as the
pivot, the element around which to partition.

• As the procedure executes, the array is partitioned into four regions, some of
which may be empty:

Loop invariant:
1. All entries in A[p . . i] are ≤ pivot.
2. All entries in A[i + 1 . . j − 1] are > pivot.
3. A[r] = pivot.
It is not needed as part of the loop invariant, but the fourth region is A[j . . r − 1],
whose entries have not yet been examined, and so we do not know how they
compare to the pivot.
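A Python sketch of QUICKSORT with the last-element PARTITION described above (0-based indices, for illustration):

```python
def quicksort(a, p, r):
    """Sort a[p..r] in place using the PARTITION procedure above."""
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)

def partition(a, p, r):
    """Partition around pivot a[r]; returns the pivot's final index."""
    x = a[r]                          # pivot
    i = p - 1                         # boundary of the "<= pivot" region
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]   # put the pivot between the regions
    return i + 1

data = [2, 8, 7, 1, 3, 5, 6, 4]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 6, 7, 8]
```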

9. Explain knapsack problem

Ans 0-1 knapsack problem:


• n items.
• Item i is worth $vi and weighs wi pounds.
• Find a most valuable subset of items with total weight ≤ W.
• Have to either take an item or not take it; can't take part of it.

Fractional knapsack problem: Like the 0-1 knapsack problem, but we can take a fraction
of an item.

Both have optimal substructure.

But the fractional knapsack problem has the greedy-choice property, and the 0-1
knapsack problem does not.
To solve the fractional problem, rank items by value/weight: vi/wi.
Let vi/wi ≥ vi+1/wi+1 for all i.

FRACTIONAL-KNAPSACK(v, w, W)
    load ← 0
    i ← 1
    while load < W and i ≤ n
        do if wi ≤ W − load
            then take all of item i
            else take (W − load)/wi of item i
        add what was taken to load
        i ← i + 1

Time: O(n lg n) to sort, O(n) thereafter.

Greedy doesn't work for the 0-1 knapsack problem. It might leave empty space, which
lowers the average value per pound of the items taken.

i       1    2    3
vi      60   100  120
wi      10   20   30
vi/wi   6    5    4
W = 50.
Greedy solution:
• Take items 1 and 2.

• value = 160, weight = 30.

Have 20 pounds of capacity left over.


Optimal solution:
• Take items 2 and 3.

• value = 220, weight = 50.

No leftover capacity.
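The greedy FRACTIONAL-KNAPSACK procedure above can be sketched in Python. On the same data it achieves value 240 by taking items 1 and 2 whole plus two-thirds of item 3, which is why greedy is optimal for the fractional problem even though it yields only 160 on the 0-1 version:

```python
def fractional_knapsack(values, weights, W):
    """Greedy fractional knapsack: take items in order of value/weight."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    load = value = 0.0
    for v, w in items:
        if load >= W:
            break
        take = min(w, W - load)       # whole item if it fits, else a fraction
        value += v * take / w
        load += take
    return value

# The example above: values 60, 100, 120; weights 10, 20, 30; W = 50.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```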

10. Write any two characteristics of the greedy algorithm.

Ans:- To solve a problem in an optimal way, the greedy method constructs the solution from a
given set of candidates. As the algorithm proceeds, two other sets are accumulated: one
contains the candidates that have been considered and chosen, while the other contains the
candidates that have been considered but rejected.

Unit-3
1. What is dynamic programming?
Ans:- • Not a specific algorithm, but a technique (like divide-and-conquer).
• Developed back in the day when "programming" meant "tabular method" (like
linear programming); it doesn't really refer to computer programming.
• Used for optimization problems:
• Find a solution with the optimal value.
• Minimization or maximization (we'll see both).

Four-step method
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

2. Write the difference between the greedy method and dynamic programming.

Greedy method: only one sequence of decisions is generated; it does not guarantee an
optimal solution in every case.
Dynamic programming: many sequences of decisions are generated; it always gives an
optimal solution.

3. Define Multistage Graphs.

Ans:- A multistage graph G = (V, E) is a directed graph in which the vertices are
partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k.
It is used to find the minimum-cost path from a source s to a sink t, i.e., the destination.

4. What are the drawbacks of dynamic programming?

Ans:- Time and space requirements are high, since storage is needed at all levels.
Optimality should be checked at all levels.

5. Explain Floyd's algorithm

Ans:- It is convenient to record the lengths of shortest paths in an n-by-n matrix D called
the distance matrix: the element dij in the ith row and jth column of this matrix indicates
the length of the shortest path from the ith vertex to the jth vertex. We can generate the
distance matrix with an algorithm that is very similar to Warshall's algorithm; it is called
Floyd's algorithm.
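A Python sketch of Floyd's algorithm on a small illustrative digraph (the matrix W below is a made-up four-vertex example, not from the original text):

```python
INF = float('inf')

def floyd(D):
    """Floyd's algorithm: D is an n x n distance matrix (INF = no edge).
    Returns the matrix of shortest-path lengths between every pair."""
    n = len(D)
    D = [row[:] for row in D]         # work on a copy
    for k in range(n):                # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   3, INF, 7],
     [8,   0,  2, INF],
     [5, INF,  0,  1],
     [2, INF, INF, 0]]
print(floyd(W))  # [[0, 3, 5, 6], [5, 0, 2, 3], [3, 6, 0, 1], [2, 5, 7, 0]]
```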

6. Explain the traveling salesman problem.

Ans:- A salesman has to visit n cities, starting from any one of the cities, visiting the
remaining cities exactly once, and coming back to the city where he started his journey,
in such a manner that either the distance or the cost is minimum. This is known as the
traveling salesman problem.
7. Write some applications of travelling salesperson problem.

Ans:- Routing a postal van to pick up mail from boxes located at n different sites; using a
robot arm to tighten the nuts on some piece of machinery on an assembly line; and a
production environment in which several commodities are manufactured on the same set
of machines.

8. Write the matrix-chain multiplication algorithm.

The way we parenthesize a chain of matrices can have a dramatic impact on the cost of
evaluating the product. Consider first the cost of multiplying two matrices. The standard
algorithm is given by the following pseudocode. The attributes rows and columns are the
numbers of rows and columns in a matrix.
MATRIX-MULTIPLY(A, B)
1  if columns[A] ≠ rows[B]
2      then error "incompatible dimensions"
3      else for i ← 1 to rows[A]
4          do for j ← 1 to columns[B]
5              do C[i, j] ← 0
6                  for k ← 1 to columns[A]
7                      do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
8  return C

MATRIX-CHAIN-ORDER(p)
1  n ← length[p] - 1
2  for i ← 1 to n
3      do m[i, i] ← 0
4  for l ← 2 to n        ▹ l is the chain length.
5      do for i ← 1 to n - l + 1
6          do j ← i + l - 1
7              m[i, j] ← ∞
8              for k ← i to j - 1
9                  do q ← m[i, k] + m[k + 1, j] + p[i-1] · p[k] · p[j]
10                     if q < m[i, j]
11                         then m[i, j] ← q
12                             s[i, j] ← k
13 return m and s

PRINT-OPTIMAL-PARENS(s, i, j)
1 if i = j
2 then print "A"i
3 else print "("
4 PRINT-OPTIMAL-PARENS(s, i, s[i, j])
5 PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
6 print ")"

9. Explain All pair shortest path

Ans:- Unlike the single-source algorithms, which assume an adjacency-list representation of
the graph, most all-pairs algorithms use an adjacency-matrix representation. (Johnson's
algorithm for sparse graphs uses adjacency lists.) For convenience, we assume that the
vertices are numbered 1, 2, ..., |V|, so that the input is an n × n matrix W representing the
edge weights of an n-vertex directed graph G = (V, E). That is, W = (wij), where wij is 0 if
i = j, the weight of edge (i, j) if i ≠ j and (i, j) ∈ E, and ∞ if i ≠ j and (i, j) ∉ E.

PRINT-ALL-PAIRS-SHORTEST-PATH(Π, i, j)
1 if i = j
2 then print i
3 else if πij = NIL
4 then print "no path from" i "to" j "exists"
5 else PRINT-ALL-PAIRS-SHORTEST-PATH(Π, i, πij)
6 print j

Now, let lij^(m) be the minimum weight of any path from vertex i to vertex j that contains
at most m edges. When m = 0, there is a shortest path from i to j with no edges if and only
if i = j. Thus, lij^(0) = 0 if i = j, and ∞ otherwise.

For m ≥ 1, we compute lij^(m) as the minimum of lij^(m-1) (the weight of the shortest path
from i to j consisting of at most m - 1 edges) and the minimum weight of any path from i
to j consisting of at most m edges, obtained by looking at all possible predecessors k of j.
Thus, we recursively define
lij^(m) = min( lij^(m-1), min over 1 ≤ k ≤ n of { lik^(m-1) + wkj } ).

EXTEND-SHORTEST-PATHS(L, W)
1  n ← rows[L]
2  let L′ = (l′ij) be an n × n matrix
3  for i ← 1 to n
4      do for j ← 1 to n
5          do l′ij ← ∞
6              for k ← 1 to n
7                  do l′ij ← min(l′ij, lik + wkj)
8  return L′

10. Explain Topological sorting

Ans:- Depth-first search can be used to perform a topological sort of a directed acyclic
graph, or "dag" as it is sometimes called. A topological sort of a dag G = (V, E) is a linear
ordering of all its vertices such that if G contains an edge (u, v), then u appears before v in
the ordering.

We can view the topologically sorted dag as an ordering of its vertices along a horizontal
line such that all directed edges go from left to right.
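A DFS-based topological sort can be sketched in Python as follows (the getting-dressed dag is an illustrative example; any order the sketch prints respects all edges):

```python
def topological_sort(graph):
    """DFS-based topological sort of a dag given as {vertex: [successors]}.
    A vertex is prepended to the order as its DFS call finishes, so it ends
    up before every vertex reachable from it."""
    visited, order = set(), []

    def dfs(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs(v)
        order.insert(0, u)            # finished: place before its successors

    for u in graph:
        if u not in visited:
            dfs(u)
    return order

dag = {'undershorts': ['pants'], 'pants': ['shoes', 'belt'],
       'socks': ['shoes'], 'shirt': ['belt', 'tie'],
       'belt': ['jacket'], 'tie': ['jacket'],
       'shoes': [], 'jacket': [], 'watch': []}
print(topological_sort(dag))
```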

Unit – 4

1.Explain Backtracking

Ans:- The principal idea is to construct solutions one component at a time and evaluate such
partially constructed candidates as follows. If a partially constructed solution can be
developed further without violating the problem's constraints, this is done by taking the
first remaining legitimate option for the next component.

If there is no legitimate option for the next component, no alternatives for any remaining
component need to be considered. In this case, the algorithm backtracks to replace the
last component of the partially constructed solution with its next option.

2.What are the requirements that are needed for performing Backtracking?

Ans:- To solve any problem using backtracking, all the solutions must satisfy a set of
constraints, which fall into two categories:
i. Explicit constraints.
ii. Implicit constraints.

3.Define state space of the problem.

Ans:- All the paths from the root of the organization tree to all of its nodes constitute the
state space of the problem.

4.Explain n-Queens problem


Ans:- The problem is to place n queens on an n-by-n chessboard so that no two queens
attack each other by being in the same row, in the same column, or on the same diagonal.
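The n-queens problem is a standard showcase for backtracking. A minimal Python sketch (one queen per row; columns and diagonals are checked before each placement, and a dead-end placement is undone):

```python
def n_queens(n):
    """Backtracking n-queens: place one queen per row; col[r] is the
    column of the queen in row r. Returns one solution or None."""
    col = []

    def safe(c):
        r = len(col)
        return all(col[i] != c and abs(col[i] - c) != r - i
                   for i in range(r))    # no shared column or diagonal

    def place(r):
        if r == n:
            return True                  # all rows filled: solution found
        for c in range(n):
            if safe(c):
                col.append(c)
                if place(r + 1):
                    return True
                col.pop()                # backtrack: undo the last choice
        return False

    return col if place(0) else None

print(n_queens(4))  # [1, 3, 0, 2]: a valid placement for 4 queens
```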

5.Explain Subset-Sum Problem

Ans:- The subset-sum problem asks: find a subset of a given set S = {s1, s2, ..., sn} of
n positive integers whose sum is equal to a given positive integer d.
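A backtracking sketch for subset-sum in Python (each element is either included or excluded; a branch is abandoned as soon as its partial sum exceeds d, since all integers are positive):

```python
def subset_sum(S, d):
    """Backtracking subset-sum: find a subset of the positive integers in S
    summing to d, or None. Prunes branches whose partial sum exceeds d."""
    def extend(i, chosen, total):
        if total == d:
            return chosen
        if i == len(S) or total > d:     # dead end: constraint violated
            return None
        return (extend(i + 1, chosen + [S[i]], total + S[i])  # include S[i]
                or extend(i + 1, chosen, total))              # or exclude it
    return extend(0, [], 0)

print(subset_sum([3, 5, 6, 7], 15))  # [3, 5, 7]
```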

6.Explain Branch and Bound Technique

Ans:- Compared to backtracking, branch-and-bound requires the idea to be strengthened
further when we deal with an optimization problem, one that seeks to minimize or
maximize an objective function, usually subject to some constraints.

7. Explain the graph coloring problem.

The graph coloring problem asks us to assign the smallest number of colors to vertices of
a graph so that no two adjacent vertices are the same color.

8.State m – colorability decision problem.

Ans:- Let G be a graph and m a given positive integer. We want to discover whether the
nodes of G can be colored in such a way that no two adjacent nodes have the same color,
while only m colors are used.
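The m-colorability decision problem can be answered by backtracking over vertices. A Python sketch (the two small test graphs are illustrative examples):

```python
def m_coloring(adj, m):
    """Backtracking m-coloring: adj is an adjacency list {v: [neighbors]}.
    Returns a {vertex: color} map using colors 0..m-1, or None."""
    vertices = list(adj)
    color = {}

    def ok(v, c):
        # no already-colored neighbor may share color c
        return all(color.get(u) != c for u in adj[v])

    def assign(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(m):
            if ok(v, c):
                color[v] = c
                if assign(i + 1):
                    return True
                del color[v]             # backtrack
        return False

    return color if assign(0) else None

# A 4-cycle is 2-colorable; a triangle is not:
print(m_coloring({'a': ['b', 'd'], 'b': ['a', 'c'],
                  'c': ['b', 'd'], 'd': ['c', 'a']}, 2))
print(m_coloring({'x': ['y', 'z'], 'y': ['x', 'z'], 'z': ['x', 'y']}, 2))  # None
```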

9.Explain Knapsack Problem

Find the most valuable subset of n items of given positive integer weights and values that
fit into a knapsack of a given positive integer capacity.

10.Differentiate live node, dead node & E-node

Ans:- Live node: a node which has been generated and all of whose children have not yet
been generated is called a live node.
Dead node: a generated node which is not to be expanded further, or all of whose
children have already been generated.
E-node: any live node whose children are currently being generated is called an E-node.

Unit-5
1.Define Branch-and-Bound method.
The term Branch-and-Bound refers to all the state space methods in which all children of
the E-node are generated before any other live node can become the E- node.

2. What is a FIFO branch - and - bound algorithm?


Ans:- It is a BFS-like state space tree search in which the list of live nodes is maintained in
first-in-first-out order. The unexplored nodes are stored in a queue.

3. What is a LIFO branch - and - bound algorithm?

Ans:- It is a DFS-like state space tree search in which the list of live nodes is maintained in
last-in-first-out order. The unexplored nodes are stored in a stack.

4. What are NP- hard and Np-complete problems?

Ans:- A problem L is NP-hard if every problem in NP is polynomially reducible to L. A
problem L is NP-complete if it is NP-hard and also itself belongs to the class NP.

5. What is a decision problem?

Ans:- Any problem for which the answer is either yes or no (i.e., 1 or 0) is called a
decision problem.

6. Explain class P problems

Ans:- Class P is the class of decision problems that can be solved in polynomial time by
(deterministic) algorithms. This class of problems is called polynomial.

7. Explain class NP problems

Ans:- Class NP is the class of decision problems that can be solved by nondeterministic
polynomial algorithms. Most decision problems are in NP. First of all, this class includes
all the problems in P. This class of problems is called nondeterministic polynomial.

8. Explain NP-complete problems

Ans:- A decision problem D is said to be NP-complete if
1) it belongs to class NP, and
2) every problem in NP is polynomially reducible to D.

9.Explain NP-Hard problems

Ans:- The notion of an NP-hard problem can be defined more formally by extending the
notion of polynomial reducibility to problems that are not necessarily in class NP,
including optimization problems.

10. What is meant by least-cost search?

Ans:- The next E-node is selected on the basis of an intelligent ranking function. The ideal
way to assign ranks would be on the basis of the additional computational effort needed to
reach an answer node from the list of live nodes.

11. Define the 0/1 knapsack problem using branch and bound.

Ans:- There are n objects and the capacity of the knapsack is m.
Profit vector = {p1, p2, ..., pn}
Weight vector = {w1, w2, ..., wn}

Maximize Σ (i=1 to n) pi·xi
subject to
Σ (i=1 to n) wi·xi ≤ m
where xi = 1 or xi = 0, for 1 ≤ i ≤ n
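A compact Python sketch of branch-and-bound for this formulation, using the fractional-knapsack value as the upper bound on each node (items are assumed pre-sorted by profit/weight, and the data reuse the Unit-2 example; this is an illustrative depth-first variant, not the only possible search order):

```python
def knapsack_bb(p, w, m):
    """0/1 knapsack by depth-first branch-and-bound. Items are assumed
    sorted by p[i]/w[i] descending; the bound on a node is its profit
    plus the fractional-knapsack value of the remaining capacity."""
    n = len(p)
    best = 0

    def bound(i, profit, cap):
        # optimistic upper bound: fill remaining capacity greedily,
        # allowing a fraction of the first item that doesn't fit
        for j in range(i, n):
            if w[j] <= cap:
                profit += p[j]; cap -= w[j]
            else:
                return profit + p[j] * cap / w[j]   # fractional relaxation
        return profit

    def branch(i, profit, cap):
        nonlocal best
        best = max(best, profit)
        if i == n or bound(i, profit, cap) <= best:
            return                                    # prune: can't beat best
        if w[i] <= cap:
            branch(i + 1, profit + p[i], cap - w[i])  # take item i
        branch(i + 1, profit, cap)                    # skip item i

    branch(0, 0, m)
    return best

# Items already ordered by profit/weight: the Unit-2 example.
print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```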
