DAA 2marks With Answers

Unit I

Algorithm:
An algorithm is a sequence of unambiguous instructions for solving a problem in a finite
amount of time. An input to an algorithm specifies an instance of the problem the algorithm
solves.
How can you classify the algorithms?
Among several ways to classify algorithms, the two principal alternatives are:
1. to group algorithms according to the types of problems they solve
2. to group algorithms according to the underlying design techniques they are based upon
ADT
An abstract collection of objects with several operations that can be performed on them is called
an abstract data type (ADT). The list, the stack, the queue, the priority queue, and the dictionary
are important examples of abstract data types. Modern object-oriented languages support
implementation of ADTs by means of classes.
Algorithm Design Technique:
An algorithm design technique (or strategy or paradigm) is a general approach to solving
problems algorithmically that is applicable to a variety of problems from different areas of
computing.
Pseudocode:
Pseudocode is a mixture of a natural language and programming language-like constructs.
Pseudocode is usually more precise than natural language, and its usage often yields more
succinct algorithm descriptions.
Flow Chart:
A flowchart is a method of expressing an algorithm by a collection of connected geometric shapes
containing descriptions of the algorithm's steps.
What is performance analysis?
Performance analysis provides the criteria for judging an algorithm; it has a direct relationship to
performance. When we solve a problem, there may be more than one algorithm available; through
analysis we find the running time of each algorithm and choose the best one, the algorithm that
takes the least running time.
Explain recurrence relations. How do you analyze the time efficiency of recursive algorithms?
A recursive function is one that calls itself to solve a particular problem, and recurrence relations
are recursive definitions of (mathematical) functions. When an algorithm contains a recursive
call to itself, its running time can be described by a recurrence. A recurrence relation is an
equation that describes a function in terms of its value on smaller inputs. There are three ways
to solve a recurrence relation: the iteration method, the substitution method, and the Master Theorem method.

General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms


1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of
an input. If it also depends on some additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for
the count or, at the very least, establish its order of growth.
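For illustration, a minimal Python sketch of this plan applied to a hypothetical max-finding routine (the function name is illustrative):

```python
def max_element(a):
    """Return the largest element of a non-empty list a."""
    max_val = a[0]
    for i in range(1, len(a)):
        if a[i] > max_val:      # basic operation: the comparison in the innermost loop
            max_val = a[i]
    return max_val

# Input size: n = len(a). The comparison is executed exactly n - 1 times for
# every input of size n, so C(n) = n - 1 and the algorithm is in Theta(n).
```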
General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on different
inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be
investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the
basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
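For illustration, a small Python sketch applying this plan to the usual recursive factorial function:

```python
def factorial(n):
    """Compute n! recursively; the basic operation is the multiplication."""
    if n == 0:
        return 1
    return factorial(n - 1) * n

# Input size: n. The number of multiplications M(n) satisfies the recurrence
#   M(n) = M(n - 1) + 1 for n > 0,  with M(0) = 0,
# whose solution is M(n) = n, so the running time is in Theta(n).
```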
Two measures of efficiency:
There are two kinds of algorithm efficiency: time efficiency and space efficiency. Time efficiency
indicates how fast the algorithm runs; space efficiency deals with the extra space it requires.
Basic Operation:
An algorithm's time efficiency is principally measured as a function of its input size by counting
the number of times its basic operation is executed. A basic operation is the operation that
contributes the most to running time. Typically, it is the most time-consuming operation in the
algorithm's innermost loop.
Order of Growth:
The established framework for analyzing time efficiency is primarily grounded in the order of
growth of the algorithm's running time as its input size goes to infinity. The notations O, Θ, and Ω
are used to indicate and compare the asymptotic orders of growth of functions expressing
algorithm efficiencies.

Unit II
Brute Force:
Brute force is a straightforward approach to solving a problem, usually directly based on the
problem statement and definitions of the concepts involved.
Examples: selection sort, sequential search, and the brute-force string-matching algorithm.
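A minimal Python sketch of one such brute-force algorithm, selection sort (the function name is illustrative):

```python
def selection_sort(a):
    """Brute-force sorting: repeatedly select the smallest remaining element."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))  # [17, 29, 34, 45, 68, 89, 90]
```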
Master Theorem:
If f(n) ∈ Θ(n^d) where d ≥ 0 in the divide-and-conquer recurrence T(n) = aT(n/b) + f(n), then

T(n) ∈ Θ(n^d) if a < b^d,
T(n) ∈ Θ(n^d log n) if a = b^d,
T(n) ∈ Θ(n^(log_b a)) if a > b^d.

Analogous results hold for the O and Ω notations, too.
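For instance, mergesort's recurrence T(n) = 2T(n/2) + n has a = 2, b = 2, and f(n) ∈ Θ(n^1), so d = 1. Since a = b^d (2 = 2^1), the second case applies and T(n) ∈ Θ(n^d log n) = Θ(n log n).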


Closest-Pair Problem
The closest-pair problem calls for finding the two closest points in a set of n points; the brute-force
approach simply examines every pair. It is the simplest of a variety of problems in computational
geometry that deal with proximity of points in the plane or higher-dimensional spaces.
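A minimal brute-force sketch in Python, assuming points are given as (x, y) tuples (function name illustrative):

```python
from math import dist, inf

def closest_pair_brute_force(points):
    """Brute-force closest pair: examine all pairs of the n points (Theta(n^2) pairs)."""
    best = inf
    best_pair = None
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = dist(points[i], points[j])   # Euclidean distance between the two points
            if d < best:
                best, best_pair = d, (points[i], points[j])
    return best_pair, best

print(closest_pair_brute_force([(0, 0), (3, 4), (1, 1), (5, 5)]))
```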
Convex Hull Problem:
Finding the convex hull for a given set of points in the plane or a higher-dimensional space is one
of the most important (some people believe the most important) problems in computational
geometry. This prominence is due to a variety of applications in which this problem needs to be
solved, either by itself or as a part of a larger task.
Convex:
A set of points (finite or infinite) in the plane is called convex if for any two points p and q in the
set, the entire line segment with the endpoints at p and q belongs to the set.
Convex Hull:
The convex hull of a set S of points is the smallest convex set containing S. (The "smallest"
requirement means that the convex hull of S must be a subset of any convex set containing S.)
Exhaustive Search:
Exhaustive search is simply a brute-force approach to combinatorial problems. It suggests
generating each and every element of the problem domain, selecting those of them that satisfy all
the constraints, and then finding a desired element (e.g., the one that optimizes some objective
function).
Examples: the travelling salesman problem, the knapsack problem, and the assignment problem, solved by generating all permutations or subsets.
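A minimal exhaustive-search sketch in Python for the travelling salesman problem, assuming a distance matrix is given (names and the sample matrix are illustrative):

```python
from itertools import permutations

def tsp_exhaustive(dist):
    """Exhaustive search for TSP: dist is an n x n symmetric cost matrix.
    Generates every tour starting and ending at city 0 and keeps the shortest."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):          # every ordering of the other cities
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_exhaustive(d))   # ((0, 1, 3, 2, 0), 18)
```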
Hamiltonian Circuit:
A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph
exactly once; the circuit starts and ends at the same vertex.
General Plan for Divide and Conquer Method:
1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved (typically recursively, though sometimes a different algorithm is
employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the original
problem.
Examples: mergesort, quicksort, binary search, and binary tree traversals.

Binary Tree
A binary tree T is defined as a finite set of nodes that is either empty or consists of a root and
two disjoint binary trees TL and TR called, respectively, the left and right subtree of the root.
Example: a tree whose root a has left child b and right child c, where b in turn has children d and e (see the traversal sketch below).
Tree Traversals:
1. In the preorder traversal, the root is visited before the left and right subtrees are visited
(in that order).
2. In the inorder traversal, the root is visited after visiting its left subtree but before visiting
the right subtree.
3. In the postorder traversal, the root is visited after visiting the left and right subtrees (in
that order).
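A small Python sketch of the three traversals on an illustrative five-node tree (class and function names are hypothetical):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(t):   # root, then left subtree, then right subtree
    return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)

def inorder(t):    # left subtree, then root, then right subtree
    return [] if t is None else inorder(t.left) + [t.value] + inorder(t.right)

def postorder(t):  # left subtree, then right subtree, then root
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.value]

#        a
#       / \
#      b   c
#     / \
#    d   e
root = Node("a", Node("b", Node("d"), Node("e")), Node("c"))
print(preorder(root))   # ['a', 'b', 'd', 'e', 'c']
print(inorder(root))    # ['d', 'b', 'e', 'a', 'c']
print(postorder(root))  # ['d', 'e', 'b', 'c', 'a']
```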
Merge sort
Mergesort is a divide-and-conquer sorting algorithm. It works by dividing an input array into two
halves, sorting them recursively, and then merging the two sorted halves to get the original array
sorted. The algorithm's time efficiency is in Θ(n log n) in all cases.
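A minimal mergesort sketch in Python, returning a new sorted list (one of several possible implementations):

```python
def mergesort(a):
    """Divide-and-conquer sort: split, sort the halves recursively, merge. Theta(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]

print(mergesort([8, 3, 2, 9, 7, 1, 5, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```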
Quick Sort
Quicksort is a divide-and-conquer sorting algorithm that works by partitioning its input elements
according to their value relative to some preselected element (the pivot). Quicksort is noted for its
superior efficiency among n log n algorithms for sorting randomly ordered arrays, but also for its
quadratic worst-case efficiency.
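A minimal quicksort sketch in Python using the Lomuto partition with the last element as the pivot (one common choice among several):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort: partition around a pivot, then sort the two parts."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
    return a

def partition(a, lo, hi):
    """Lomuto partition: a[hi] is the pivot; smaller elements move to its left."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

print(quicksort([5, 3, 1, 9, 8, 2, 4, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```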
Difference between merge sort and quick sort:

Quicksort has a bad (quadratic) worst case, while mergesort is guaranteed O(n log n) in all cases,
but typical quicksort implementations are faster than mergesort in practice.
Also, mergesort requires additional storage proportional to the input size, which is a problem in
many cases (e.g., library routines); this is why quicksort is often preferred in library routines.
Quicksort uses a pivot and sorts the two parts with the pivot as the reference point, with the risk
that the pivot turns out to be the maximum or minimum of the array. If the wrong pivots keep
being chosen, the complexity degrades to O(n^2) (about n^2 comparisons).
Mergesort, as its name suggests, is based on recursively dividing the array into halves of the same
size and merging them back. The Wikipedia article gives a good explanation, especially the
picture with the tree-like breakdown.
Unit III
Dynamic Programming:
Dynamic programming is a technique for solving problems with overlapping subproblems.
Typically, these subproblems arise from a recurrence relating a solution to a given problem with
solutions to its smaller subproblems of the same type.
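A small Python sketch showing the idea on the Fibonacci numbers, whose recursive definition has overlapping subproblems (function name illustrative):

```python
def fib_dp(n):
    """Bottom-up dynamic programming: each Fibonacci number is computed once
    and stored, instead of recomputing the overlapping subproblems."""
    f = [0, 1] + [0] * (n - 1)
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]   # solution built from the two smaller subproblems
    return f[n]

print(fib_dp(10))  # 55
```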
OBST:
An optimal binary search tree (OBST), sometimes called a weight-balanced binary tree, is a
binary search tree whose nodes are arranged on levels so that the tree's cost is minimum: it
provides the smallest possible expected search time for a given sequence of accesses (or access
probabilities).
Transitive Closure:
The transitive closure of a directed graph with n vertices can be defined as the n × n boolean
matrix T = {tij}, in which the element in the ith row and the jth column is 1 if there exists a
nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex;
otherwise, tij is 0.
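A minimal sketch of Warshall's algorithm in Python for computing the transitive closure from a boolean adjacency matrix (names and the sample matrix are illustrative):

```python
def warshall(adj):
    """Warshall's algorithm: transitive closure of a digraph given by its
    boolean adjacency matrix adj (dynamic programming, Theta(n^3))."""
    n = len(adj)
    t = [row[:] for row in adj]                # R(0) is the adjacency matrix itself
    for k in range(n):                         # allow paths through vertices 0..k
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t

a = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(a):
    print(row)
```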
Digraph:
A digraph is short for directed graph, and it is a diagram composed of points
called vertices (nodes) and arrows called arcs going from a vertex to a vertex.
Example: vertices {1, 2, 3} with arcs 1 → 2, 2 → 3, and 3 → 1.
Adjacency Matrix:
An adjacency matrix is a square matrix used to represent a finite graph. The elements of
the matrix indicate whether pairs of vertices are adjacent or not in the graph.
Distance Matrix:
A distance matrix is a matrix (two-dimensional array) containing the distances, taken
pairwise, between the elements of a set. Depending upon the application involved,
the distance being used to define this matrix may or may not be a metric.
Greedy Technique:
The greedy technique suggests constructing a solution to an optimization problem through a
sequence of steps, each expanding a partially constructed solution obtained so far, until a
complete solution to the problem is reached. On each step, the choice made must be feasible,
locally optimal, and irrevocable.
Prim's Algorithm:
Prim's algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted
connected graph. It works by attaching to a previously constructed subtree a vertex closest to the
vertices already in the tree.
Kruskal's Algorithm:
Kruskal's algorithm is another greedy algorithm for the minimum spanning tree problem. It
constructs a minimum spanning tree by selecting edges in nondecreasing order of their weights,
provided that the inclusion does not create a cycle.
Dijkstra's Algorithm:
Dijkstra's algorithm solves the single-source shortest-path problem of finding shortest paths
from a given vertex (the source) to all the other vertices of a weighted graph or digraph. It works
like Prim's algorithm but compares path lengths rather than edge lengths. Dijkstra's algorithm
always yields a correct solution for a graph with nonnegative weights.
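A minimal Python sketch of Dijkstra's algorithm using a min-heap, assuming the graph is given as an adjacency list of (neighbour, weight) pairs (names and the sample graph are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm with a min-heap priority queue.
    graph: dict mapping vertex -> list of (neighbour, nonnegative weight)."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                 # stale queue entry, skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:         # shorter path to v found through u
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"a": [("b", 3), ("d", 7)],
     "b": [("a", 3), ("c", 4), ("d", 2)],
     "c": [("b", 4), ("d", 5), ("e", 6)],
     "d": [("a", 7), ("b", 2), ("c", 5), ("e", 4)],
     "e": [("c", 6), ("d", 4)]}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 3, 'c': 7, 'd': 5, 'e': 9}
```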
Huffman Tree:
A Huffman tree is a binary tree that minimizes the weighted path length from the root to the
leaves of predefined weights. The most important application of Huffman trees is Huffman
codes.
Huffman Code:
A Huffman code is an optimal prefix-free variable-length encoding scheme that assigns bit
strings to symbols based on their frequencies in a given text. This is accomplished by a greedy
construction of a binary tree whose leaves represent the alphabet symbols and whose edges are
labeled with 0s and 1s.
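A minimal Python sketch of the greedy Huffman-tree construction (names and the sample frequencies are illustrative; the exact codes produced depend on how ties are broken, but the code lengths are optimal):

```python
import heapq

def huffman_codes(freq):
    """Greedy Huffman coding: repeatedly merge the two least-frequent trees.
    freq: dict symbol -> frequency. Returns dict symbol -> prefix-free bit string."""
    # Heap items are (weight, tie_breaker, tree); a tree is a symbol or a (left, right) pair.
    heap = [(w, i, s) for i, (s, w) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def assign(tree, code):
        if isinstance(tree, tuple):          # internal node: 0 to the left, 1 to the right
            assign(tree[0], code + "0")
            assign(tree[1], code + "1")
        else:
            codes[tree] = code or "0"        # single-symbol alphabet edge case
    assign(heap[0][2], "")
    return codes

print(huffman_codes({"A": 0.35, "B": 0.1, "C": 0.2, "D": 0.2, "_": 0.15}))
```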
Difference between dynamic programming and the greedy approach

Greedy approach
A greedy algorithm is one that makes the locally optimal choice at each and every stage with the
hope of finding the global optimum at the end.
Dynamic programming
Dynamic programming is applicable to problems that exhibit the overlapping-subproblems
and optimal-substructure properties.
Difference
The main difference is that the choice made by a greedy algorithm may depend on choices
made so far, but not on future choices or on all the solutions to the subproblems. It iteratively makes
one greedy choice after another, reducing each given problem into a smaller one. In other words,
a greedy algorithm never reconsiders its choices. This is the main difference from dynamic
programming, which is exhaustive and is guaranteed to find the solution: at every stage,
dynamic programming makes decisions based on all the decisions made in the previous stage,
and may reconsider the previous stage's path to the solution.

Distinguish between dynamic programming and divide and conquer.
Both techniques solve a problem by combining solutions to subproblems. In divide and conquer the subproblems are essentially independent and each is solved separately, whereas dynamic programming applies when the subproblems overlap: each subproblem is solved only once and its solution is stored in a table for reuse.

Does Prim's algorithm always work correctly on graphs with negative edge weights?
Yes. Prim's algorithm only compares edge weights when choosing the cheapest edge crossing the cut, and its correctness argument does not require the weights to be nonnegative, so it still produces a minimum spanning tree.

Prove that any weighted connected graph with distinct weights has exactly one minimum
spanning tree.
Let us assume that the graph has two MSTs, MST1 and MST2. Let E be the set of edges present in
MST2 but not in MST1.
Consider MST1. Since it is a spanning tree, adding any edge to it creates a cycle. Add an edge e
from E to MST1; this creates a cycle, so the resulting graph (say T) is just one edge away from
being a spanning tree again. To make a minimum spanning tree out of it, we have to remove the
most expensive edge in the cycle, and because all the edge weights are distinct, that most
expensive edge is unique. If e is the most expensive edge in the cycle, removing it gives MST1
back, so we do not get multiple MSTs; if e is not the most expensive edge, then MST1 was not a
minimum spanning tree, a contradiction.
Unit IV
Iterative Improvement
The iterative-improvement technique involves finding a solution to an optimization problem by
generating a sequence of feasible solutions with improving values of the problem's objective
function. Each subsequent solution in such a sequence typically involves a small, localized
change in the previous feasible solution. When no such change improves the value of the
objective function, the algorithm returns the last feasible solution as optimal and stops.
Simplex Method:
The simplex method is the classic method for solving the general linear programming problem. It
works by generating a sequence of adjacent extreme points of the problem's feasible region with
improving values of the objective function.
Maximum Flow Problem:
The maximum-flow problem asks to find the maximum flow possible in a network, a weighted
directed graph with a source and a sink.
Ford-Fulkerson Method:
The Ford-Fulkerson method is a classic template for solving the maximum-flow problem by the
iterative-improvement approach. The shortest-augmenting-path method implements this idea by
labeling network vertices in the breadth-first search manner. The Ford-Fulkerson method also
finds a minimum cut in a given network.
Maximum cardinality matching
A maximum cardinality matching is the largest subset of edges in a graph such that no two edges
share the same vertex. For a bipartite graph, it can be found by a sequence of augmentations of
previously obtained matchings.

Stable Marriage Problem:
The stable marriage problem is to find a stable matching for elements of two n-element sets
based on given matching preferences. This problem always has a solution that can be found by
the Gale-Shapley algorithm.
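A minimal Python sketch of the Gale-Shapley algorithm with men proposing (all names and preference lists are illustrative):

```python
def gale_shapley(men_prefs, women_prefs):
    """Gale-Shapley algorithm (men proposing): returns a man-optimal stable matching.
    men_prefs / women_prefs: dict person -> list of partners in decreasing preference."""
    # rank[w][m] = position of m in w's list; lower means more preferred
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free_men = list(men_prefs)                  # every man starts free
    next_choice = {m: 0 for m in men_prefs}     # index of the next woman to propose to
    engaged_to = {}                             # woman -> man
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]        # highest-ranked woman not yet proposed to
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                   # w was free and accepts
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])      # w prefers m and dumps her current mate
            engaged_to[w] = m
        else:
            free_men.append(m)                  # w rejects m; he stays free
    return {m: w for w, m in engaged_to.items()}

men = {"Bob": ["Lea", "Ann", "Sue"], "Jim": ["Lea", "Sue", "Ann"], "Tom": ["Sue", "Lea", "Ann"]}
women = {"Ann": ["Jim", "Tom", "Bob"], "Lea": ["Tom", "Bob", "Jim"], "Sue": ["Jim", "Tom", "Bob"]}
print(gale_shapley(men, women))  # {'Jim': 'Sue', 'Tom': 'Lea', 'Bob': 'Ann'}
```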
Man-optimal
It assigns to each man the highest-ranked woman possible under any stable marriage.
Woman-optimal
It assigns to each woman the highest-ranked man possible under any stable marriage.
Blocking pair
A pair (m, w), where m ∈ Y, w ∈ X, is said to be a blocking pair for a marriage matching M if
man m and woman w are not matched in M but they prefer each other to their mates in M.
Maximum Matching Theorem:
A matching M is a maximum matching if and only if there exists no augmenting path with
respect to M.
Bipartite Graph:
In a bipartite graph, all the vertices can be partitioned into two disjoint sets V and U, not
necessarily of the same size, so that every edge connects a vertex in one of these sets to a vertex
in the other set.
Preflow:
A preflow is a flow that satisfies the capacity constraints but not the flow-conservation
requirement; any vertex is allowed to have more flow entering the vertex than leaving it. A
preflow-push algorithm moves the excess flow toward the sink until the flow-conservation
requirement is reestablished for all intermediate vertices of the network.
Max-Flow Min-Cut Theorem
The value of a maximum flow in a network is equal to the capacity of its minimum cut.
Augmenting Path Method:
On each iteration, we can try to find a path from source to sink along which some additional flow
can be sent. Such a path is called flow augmenting. If a flow-augmenting path is found, we
adjust the flow along the edges of this path to get a flow of an increased value and try to find an
augmenting path for the new flow. If no flow-augmenting path can be found, we conclude that
the current flow is optimal. This general template for solving the maximum-flow problem is
called the augmenting-path method; it is also known as the Ford-Fulkerson method.
Properties of the flow network (digraph):
1. It contains exactly one vertex with no entering edges; this vertex is called the source and is
assumed to be numbered 1.
2. It contains exactly one vertex with no leaving edges; this vertex is called the sink and is
assumed to be numbered n.
3. The weight uij of each directed edge (i, j) is a positive integer, called the edge capacity.
(This number represents the upper bound on the amount of the material that can be sent
from i to j through a link represented by this edge.)

Simplex tableau
Each extreme point can be represented by a simplex tableau, a table storing the information about
the basic feasible solution corresponding to the extreme point.
Requirements of the standard form in simplex method:
It must be a maximization problem.
All the constraints (except the nonnegativity constraints) must be in the form of linear
equations with nonnegative right-hand sides.
All the variables must be required to be nonnegative.
Extreme Point Theorem:
Any linear programming problem with a nonempty bounded feasible region has an optimal
solution; moreover, an optimal solution can always be found at an extreme point of the problem's
feasible region.

Unit V
Trivial Lower Bound
A trivial lower bound is based on counting the number of items in the problem's input that must
be processed and the number of output items that need to be produced.
Information-theoretic Lower Bound:
An information-theoretic lower bound is usually obtained through the mechanism of decision trees.
This technique is particularly useful for comparison-based algorithms for sorting and searching.
Decision Tree
An information-theoretic lower bound is usually obtained through the mechanism of decision trees.
Each internal node of a binary decision tree represents a key comparison indicated in the node,
e.g., k < k′. The node's left subtree contains the information about subsequent comparisons made
if k < k′, and its right subtree does the same for the case k > k′. Each leaf represents a possible
outcome of the algorithm's run on some input of size n.
Decision tree for finding a minimum of three numbers: (figure not reproduced here).
Adversary Method:
The adversary method for establishing lower bounds is based on following the logic of a
malevolent adversary who forces the algorithm into the most time-consuming path.
Class P
Class P is a class of decision problems that can be solved in polynomial time by (deterministic)
algorithms. This class of problems is called polynomial.
Examples: searching a sorted list, checking graph connectivity, and primality testing.
Class NP
Class NP is the class of decision problems that can be solved by nondeterministic polynomial
algorithms. This class of problems is called nondeterministic polynomial.
NP-Complete
A decision problem D is said to be NP-complete if:
1. it belongs to class NP
2. every problem in NP is polynomially reducible to D
CNF-satisfiability problem
The CNF-satisfiability problem deals with boolean expressions: each boolean expression can be
represented in conjunctive normal form (CNF), and the problem asks whether or not one can
assign the values true and false to the variables of a given boolean expression in its CNF form so
as to make the entire expression true. The CNF-satisfiability problem is NP-complete.
Difference between backtracking and branch-and-bound

Backtracking
[1] It is used to find all possible solutions available to the problem.
[2] It traverses the tree by DFS (depth-first search).
[3] It realizes that it has made a bad choice and undoes the last choice by backing up.
[4] It searches the state-space tree until it finds a solution.
[5] It involves a feasibility function.
Branch-and-Bound (BB)
[1] It is used to solve optimization problems.
[2] It may traverse the tree in any manner, DFS or BFS.
[3] It realizes that it already has a better solution than the one the current partial solution can lead
to, so it abandons that partial solution.
[4] It completely searches the state-space tree to get the optimal solution.
[5] It involves a bounding function.
Approximation Algorithms
Approximation algorithms are often used to find approximate solutions to difficult problems of
combinatorial optimization. The performance ratio is the principal metric for measuring the
accuracy of such approximation algorithms.

Graph Coloring Problem
For a given graph, find its chromatic number, which is the smallest number of colors that need to
be assigned to the graph's vertices so that no two adjacent vertices are assigned the same color.
What are the state-space algorithms?
Backtracking and branch-and-bound are called state-space algorithms because they generate a
state-space tree while solving the problem.
What are the differences between exhaustive search method and state-space algorithms?
State-space algorithms make it possible to solve some larger instances of difficult combinatorial
problems. Unlike exhaustive search, state-space algorithms generate candidate solutions one component
at a time and evaluate the partially constructed solutions. If no potential values of the remaining
components can lead to a solution, the remaining components are not generated at all.
What is a state-space tree?
The tree constructed to implement backtracking, with the choices for the components as its nodes, is
called the state-space tree. Its root represents the initial state before the search for a solution begins,
and the nodes at each level represent the choices made for the corresponding component of a solution.
What is a promising and a non-promising node?
A node in the state-space tree is said to be promising if it corresponds to a partially constructed
solution that may still lead to a complete solution; otherwise, the node is non-promising.
What is N-Queens problem?
The n-queens problem is to place n queens on an n × n chessboard so that no two queens attack
each other by being in the same row, in the same column, or on the same diagonal.
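A minimal backtracking sketch in Python, placing one queen per row (function name illustrative):

```python
def n_queens(n):
    """Backtracking for the n-queens problem: place one queen per row,
    backing up as soon as a partial placement violates a constraint."""
    solutions = []
    def place(queens):                       # queens[i] = column of the queen in row i
        row = len(queens)
        if row == n:
            solutions.append(list(queens))
            return
        for col in range(n):
            if all(col != c and abs(col - c) != row - r   # same column / same diagonal?
                   for r, c in enumerate(queens)):
                queens.append(col)           # promising: extend the partial solution
                place(queens)
                queens.pop()                 # backtrack
    place([])
    return solutions

print(n_queens(4))          # [[1, 3, 0, 2], [2, 0, 3, 1]]
print(len(n_queens(8)))     # 92 solutions for the 8-queens problem
```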
What is the subset-sum problem? Find the subsets of the set S = {1, 2, 3, 4} with the sum d = 7 using
backtracking.
The subset-sum problem finds a subset of a given set S = {s1, s2, ..., sn} of n positive integers whose
sum is equal to a given positive integer d.
Drawing the backtracking tree for the instance above (see the sketch below) gives the
result: {1, 2, 4} and {3, 4}.
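A minimal backtracking sketch in Python for this instance (function name illustrative):

```python
def subset_sum(s, d):
    """Backtracking for subset sum: at level i decide whether to include s[i].
    A node is non-promising if the current sum exceeds d or if even adding all
    remaining elements cannot reach d."""
    s = sorted(s)
    solutions = []
    def explore(i, chosen, current, remaining):
        if current == d:
            solutions.append(list(chosen))
            return
        if i == len(s) or current > d or current + remaining < d:
            return                            # non-promising node: terminate this branch
        explore(i + 1, chosen + [s[i]], current + s[i], remaining - s[i])  # include s[i]
        explore(i + 1, chosen, current, remaining - s[i])                  # exclude s[i]
    explore(0, [], 0, sum(s))
    return solutions

print(subset_sum([1, 2, 3, 4], 7))   # [[1, 2, 4], [3, 4]]
```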
What is a feasible solution and optimal solution?
A feasible solution is a point in the problem's state space that satisfies all of the problem's constraints.
An optimal solution is a feasible solution with the best value of the objective function.
What are the reasons for terminating a search path at the current node in the state-space tree?
1. The value of the node's bound is not better than the value of the best solution seen so far.
2. The node represents no feasible solutions, because the constraints of the problem are already
violated.
3. The subset of feasible solutions represented by the node consists of a single point; here we
compare the value of the objective function for this feasible solution with that of the best solution
seen so far and update the latter with the former if the new solution is better.

What is the Assignment problem?
The assignment problem assigns n people to n jobs so that the total cost of the assignment is as small as
possible. That is, there are n people who need to be assigned to execute n jobs, one person per job; the
cost of assigning the ith person to the jth job is C[i, j]. Find the assignment with the smallest total cost.
What is the Knapsack problem? How can it be solved by branch and bound?
There are n items of known weights w1, w2, ..., wn and values v1, v2, ..., vn, and the capacity of the
knapsack is W. Find the most valuable subset of items that fits into the knapsack.
To solve it by branch and bound:
1. Order the items in descending order of their value-to-weight ratios.
2. Construct the state-space tree as a binary tree. Each node on the ith level indicates inclusion of the
ith item in the set if it is a left child, and exclusion of the item otherwise.
What is the travelling salesman problem? How can it be solved using the branch-and-bound technique?
The travelling salesman problem is to find the shortest tour through a given set of n cities that visits each
city exactly once before returning to the city where it started.
Since branch and bound looks for the shortest tour, it computes a lower bound for every node of the
state-space tree. To compute the lower bound: find the sum si of the distances from city i to its two
nearest cities, compute the sum s of these n numbers, and divide s by 2 (rounding up). A constraint
that one chosen city must be visited before another avoids generating the same tour twice, which
reduces the work of generating all permutations.
Explain the 8-queens problem.
The 8-queens problem is the n-queens problem with n = 8: place eight queens on an 8 × 8 chessboard so that no two queens attack each other, i.e., no two queens share a row, a column, or a diagonal. It is solved by backtracking, placing one queen per row and backing up whenever a partial placement violates the constraints; the problem has 92 solutions.
