Vasvi Khullar MCA - IV (B) 06417704417
DAA
ASSIGNMENT – 2
UNIT – 4
2014
1. Define the following terms:
a. Branch & bound mechanism
b. Undecidable problems
c. Polynomial time verification
Ans 1.
a) Branch & bound mechanism
Branch and bound is a systematic method for solving optimization problems. It is a rather
general optimization technique that applies where the greedy method and dynamic
programming fail. However, it is much slower: it often leads to exponential time complexity in
the worst case. On the other hand, if applied carefully, it can lead to algorithms that run
reasonably fast on average. The general idea of B&B is a BFS-like search for the optimal
solution, but not all nodes get expanded (i.e., have their children generated). Rather, a carefully
selected criterion determines which node to expand and when, and another criterion tells the
algorithm when an optimal solution has been found.
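A minimal best-first sketch of the mechanism (the Python code and names are my own
illustration, assuming 0/1 knapsack as the optimization problem, not part of the original
answer): a greedy fractional-knapsack bound serves as the node-selection criterion, and nodes
whose bound cannot beat the best solution found so far are pruned rather than expanded.

import heapq

def knapsack_bb(values, weights, capacity):
    """Maximise total value subject to the capacity, via best-first branch and bound."""
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(level, value, weight):
        # Optimistic estimate: fill the remaining capacity greedily,
        # allowing a fraction of the last item (relaxation of 0/1).
        b, room = value, capacity - weight
        for i in range(level, n):
            if w[i] <= room:
                room -= w[i]
                b += v[i]
            else:
                b += v[i] * room / w[i]
                break
        return b

    best = 0
    # Max-heap keyed on the bound (negated, since heapq is a min-heap).
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, value, weight)
    while heap:
        neg_b, level, value, weight = heapq.heappop(heap)
        if -neg_b <= best or level == n:  # prune: cannot beat the incumbent
            continue
        # Branch 1: include item `level` if it fits.
        if weight + w[level] <= capacity:
            new_val = value + v[level]
            best = max(best, new_val)
            heapq.heappush(heap, (-bound(level + 1, new_val, weight + w[level]),
                                  level + 1, new_val, weight + w[level]))
        # Branch 2: exclude item `level`.
        heapq.heappush(heap, (-bound(level + 1, value, weight),
                              level + 1, value, weight))
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # expected: 220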
b) Undecidable Problems
A problem is undecidable if there is no Turing machine which will always halt in a finite amount
of time and give the answer 'yes' or 'no'. An undecidable problem has no algorithm to determine
the answer for a given input.
Examples
Ambiguity of context-free grammars: Given a context-free grammar, there is no Turing
machine which will always halt in a finite amount of time and answer whether the
grammar is ambiguous or not.
Equivalence of two context-free languages: Given two context-free languages, there is no
Turing machine which will always halt in a finite amount of time and answer whether the
two context-free languages are equal or not.
Everything or completeness of a CFG: Given a CFG and an input alphabet, whether the CFG
will generate all possible strings over the input alphabet (Σ*) is undecidable.
Regularity of CFL, CSL, REC and RE: Given a CFL, CSL, REC or RE language, determining
whether this language is regular is undecidable.
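c) Polynomial time verification
A problem is polynomial-time verifiable if, given a problem instance together with a proposed
solution (a certificate, sometimes called a witness), we can check in time polynomial in the
input size whether the certificate really solves the instance. NP is precisely the class of decision
problems that admit such polynomial-time verifiers: finding a solution may be hard, but
checking a claimed one is easy. A minimal verifier sketch for Subset Sum (the Python code and
names are my own illustration, not part of the original answer):

def verify_subset_sum(numbers, target, certificate):
    # The certificate is a list of distinct indices into `numbers`;
    # checking it takes time linear in the input size.
    if len(set(certificate)) != len(certificate):
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False
    return sum(numbers[i] for i in certificate) == target

# Finding a subset summing to the target may take exponential time,
# but checking a proposed one is fast: 4 + 5 == 9.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True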
2013
1. Differentiate between the complexity classes P, NP, NP-Hard and NP-Complete.
Ans 1.
P:
P is a complexity class that represents the set of all decision problems that can be solved in
polynomial time. That is, given an instance of the problem, the answer yes or no can be
decided in polynomial time.

NP:
NP is a complexity class that represents the set of all decision problems for which the
instances where the answer is "yes" have proofs that can be verified in polynomial time. This
means that if someone gives us an instance of the problem and a certificate (sometimes called
a witness) to the answer being yes, we can check that it is correct in polynomial time.

NP-Hard:
Intuitively, these are the problems that are at least as hard as the NP-complete problems. Note
that NP-hard problems do not have to be in NP, and they do not have to be decision problems.
The precise definition is that a problem X is NP-hard if there is an NP-complete problem Y such
that Y is reducible to X in polynomial time. And since any NP-complete problem can be
reduced to any other NP-complete problem in polynomial time, all NP-complete problems can
be reduced to any NP-hard problem in polynomial time.

NP-Complete (NPC):
NP-Complete is a complexity class which represents the set of all problems X in NP for which it
is possible to reduce any other NP problem Y to X in polynomial time. Intuitively this means
that we can solve Y quickly if we know how to solve X quickly. Precisely, Y is reducible to X if
there is a polynomial time algorithm f that transforms instances of Y into instances of X while
preserving the yes/no answer.
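As a hedged illustration of such a polynomial-time reduction (the Python code and names are
my own, not part of the original answer): the textbook map from Independent Set to Vertex
Cover. A set S is an independent set of size k in a graph G with n vertices exactly when V \ S is
a vertex cover of size n - k, so the transformation is trivially polynomial.

def reduce_independent_set_to_vertex_cover(n, edges, k):
    # Map the Independent-Set instance (G, k) to the Vertex-Cover
    # instance (G, n - k); the transformation itself is O(1).
    return n, edges, n - k

# A triangle: the maximum independent set has size 1, and correspondingly
# the minimum vertex cover has size 3 - 1 = 2.
print(reduce_independent_set_to_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], 1))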
2. What are the various techniques currently used for dealing with NP-complete problems? Are
they optimal? Differentiate between a decision problem and an optimization problem.
Ans 2.
A decision problem asks a yes/no question (e.g., is there a TSP tour of cost at most k?), while an
optimization problem asks for a best solution (e.g., find the minimum-cost tour). None of the
techniques below is optimal in the sense of an exact polynomial-time algorithm; each trades
away generality, accuracy, certainty, or raw running time.
1) Solve special cases
For instance, if we can't solve TSP on general graphs, we can try to solve it just for graphs
obeying a Euclidean distance metric.
2) Approximation Algorithms
Some NP-hard problems admit polynomial-time approximation algorithms. Sometimes these
give only a constant-factor approximation at best, as for MAX-CUT or Metric TSP, and
sometimes they yield a Polynomial Time Approximation Scheme (PTAS), whereby one can trade
additional running time for a better approximation. (It's polynomial time in n, with 1/epsilon in
the exponent.) In the case of Euclidean TSP, such a scheme has been known since Arora's PTAS
in the late 1990s.
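As a concrete instance of a constant-factor approximation (a Python sketch of my own, not
from the original answer), the classic 2-approximation for Vertex Cover: repeatedly take any
uncovered edge and add both of its endpoints. Every optimal cover must contain at least one
endpoint of each edge picked this way, so the result is at most twice the optimum.

def vertex_cover_2_approx(edges):
    # Greedy maximal-matching heuristic: for each edge not yet covered,
    # add both endpoints to the cover.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 4-cycle: the optimum cover has size 2; the heuristic returns size 4,
# within the guaranteed factor of 2.
print(vertex_cover_2_approx([(0, 1), (1, 2), (2, 3), (3, 0)]))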
3) Random Algorithms
This isn't really solely about solving NP-hard problems, as there are legitimate reasons to use
random algorithms to speed up solving problems in P as well. However, randomization does
offer an inroad to solving NP-hard problems more quickly. The idea is to give up the guarantee
that the algorithm works every time and instead have it succeed with some fixed probability,
say at least 2/3 of the time, depending on the luck of the draw. By re-running the algorithm
until it succeeds, or until our confidence reaches the desired level, we can achieve an arbitrarily
high probability of a correct answer. This tactic lets us solve, with high probability, some
NP-hard problems in robust control theory and matrix theory in polynomial time. The main
advantage of random algorithms is simplicity. These
algorithms are frequently very simple to describe and implement. Sometimes, they can be
transformed into complicated deterministic algorithms that maintain their approximation
guarantees. Frequently, they let us sample the problem space to create a solution on a smaller
input set, to try to extrapolate the solution to the larger space. They can also allow us to reduce
the dimensionality of a problem while maintaining some of its structure.
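A minimal sketch of this pattern (Python code of my own, not from the original answer):
Karger's randomized minimum cut. A single run of random edge contraction finds a fixed
minimum cut with probability at least 2/(n(n-1)); re-running it many times and keeping the best
result amplifies the success probability, exactly as described above.

import random

def karger_min_cut(edges, n, trials=None):
    # Monte Carlo min cut via random edge contraction (union-find version).
    trials = trials or n * n
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        components = n
        pool = edges[:]
        random.shuffle(pool)
        # Contract random edges until only two super-nodes remain.
        for u, v in pool:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        # The cut value is the number of original edges crossing the split.
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# Two triangles joined by a single bridge edge: the minimum cut is 1.
print(karger_min_cut([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)], 6))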
4) Better exponential-time algorithms
Just because a problem is NP-complete doesn't mean we have to brute-force it. Even if we must
use an exponential time algorithm, not all such algorithms are the same, so we're always on the
lookout for improvements in such algorithms. Consider the General Number Field Sieve for
factoring. (Yes, factoring hasn't been proved NP-hard; it's still a good example.) It takes quite a
bit of time to run, super-polynomial (though sub-exponential) in the number of digits of the
number to be factored, but it is leaps and bounds ahead of, say, trial division. Quite large
numbers can be factored on a home computer in a day or two.
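As an illustration of how much algorithm choice matters even inside exponential territory (a
Python sketch of my own, not from the original answer), meet-in-the-middle solves Subset Sum
in roughly O(2^(n/2) * n) time instead of the O(2^n) of naive enumeration, by enumerating the
subset sums of each half and matching them with binary search:

from bisect import bisect_left

def subset_sum_mitm(numbers, target):
    # Split the items into two halves and enumerate each half's subset sums.
    half = len(numbers) // 2
    left, right = numbers[:half], numbers[half:]

    def all_sums(items):
        sums = [0]
        for x in items:
            sums += [s + x for s in sums]
        return sums

    right_sums = sorted(all_sums(right))
    # For each left sum, binary-search the right sums for the complement.
    for s in all_sums(left):
        need = target - s
        i = bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 31))  # False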
Sometimes exponential time algorithms actually run pretty quickly in practice. For instance, the
simplex algorithm for linear programs is still used despite its exponential worst case running
time, since it is so simple to implement and blazingly fast on most real world examples.
5) Parallelize!
Machines are becoming massively parallel, and each parallel part is becoming faster on its own.
Millions of simultaneous operations can be done on the GPU or FPU (or arrays of the same) in
modern computers. When the computations for a problem can be done in parallel, the total
work is unchanged, but the wall-clock time can shrink enough to make moderately sized
instances practical, and in the real world it's the wall-clock time that matters. Of course, not all
problems admit easily parallelizable solutions...
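Where a search does parallelize, the pattern is simple. A hedged sketch (the Python code and
names are my own, not from the original answer) that splits a brute-force Subset-Sum search
across cores with multiprocessing, cutting the wall-clock time by roughly the core count while
the total work stays exponential:

from itertools import combinations
from multiprocessing import Pool

def check_chunk(args):
    # Brute-force one slice of the search space: all subsets of one fixed size.
    numbers, target, size = args
    return any(sum(c) == target for c in combinations(numbers, size))

def subset_sum_parallel(numbers, target):
    # One task per subset size; the pool spreads the slices over the cores.
    tasks = [(numbers, target, size) for size in range(len(numbers) + 1)]
    with Pool() as pool:
        return any(pool.map(check_chunk, tasks))

if __name__ == "__main__":
    print(subset_sum_parallel([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 == 9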
Even serial processors are getting faster and faster. Even if you have to use a slow exponential
time algorithm AND perform all the computations in sequence, at the very least, you can get a
blindingly fast modern supercomputer to do it for you. Again, it's the wall clock time that
matters, not the number of steps in the computation.