Lower Bound Theory
Lower Bound Theory is based on calculating the minimum time required to execute an algorithm; this minimum is known as the lower bound. Lower Bound Theory uses a number of methods/techniques to find out the lower bound.
Techniques:
The techniques which are used by lower Bound Theory are:
1. Comparison trees
2. Oracle and adversary argument
3. State Space Method
1. Comparison trees:
In a comparison sort, we use only comparisons between elements to gain order information about an input sequence (a1, a2, ..., an). To determine their relative order, assuming all elements are distinct, we only need comparisons of the form ai ≤ aj; '=' is excluded, and the comparisons ≥, ≤, >, < are all equivalent.
Consider sorting three numbers a1, a2, and a3. There are 3! = 6 possible orderings.
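As a quick check (a minimal Python sketch; the variable names are my own), each of the 3! = 6 orderings of three distinct elements forces a distinct pattern of comparison outcomes, so any decision tree sorting three elements needs at least 6 leaves:

```python
from itertools import permutations

# Record the outcomes of the three possible comparisons for every
# ordering of three distinct elements. Each ordering produces a
# distinct outcome pattern, so the decision tree needs 3! = 6 leaves.
outcomes = {}
for perm in permutations((1, 2, 3)):
    a1, a2, a3 = perm
    signature = (a1 < a2, a2 < a3, a1 < a3)
    outcomes[signature] = perm

print(len(outcomes))  # 6 distinct comparison-outcome patterns
```

Each signature identifies exactly one permutation, which is why no comparison sort can get away with fewer leaves than orderings.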
Decision Tree: A decision tree is a full binary tree that shows the comparisons between
elements that are executed by an appropriate sorting algorithm operating on an input of a
given size. Control, data movement, and all other conditions of the algorithm are ignored.
In a decision tree for an input of length n, there must be at least n! leaves, one for each possible ordering of the input, so for any comparison sort
n! ≤ 2^k
where k is the height of the tree.
Example (Binary Search): Consider a sorted array of n = 14 items:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
The midpoints at the last level of the comparison tree are:
2, 4, 6, 8, 10, 12, 14
Thus, we consider all the midpoints and build a tree from them by taking midpoints stepwise.
A full binary tree with k levels has 2^k − 1 nodes; for example, with k = 3 levels, 2^3 − 1 = 8 − 1 = 7. In general,
N ≤ 2^k − 1, where N = number of nodes.
Here the tree for 14 items has k = 4 levels, and indeed 14 < 15 = 2^4 − 1. The internal nodes will always be fewer than 2^k in Binary Search.
Step 5: n + 1 ≤ 2^k
Taking logarithms: log2(n + 1) ≤ k log2(2), i.e.
k ≥ log2(n + 1)
Step 6: T(n) = k
Step 7: T(n) ≥ log2(n + 1)
This is the minimum number of comparisons needed to search among n terms using Binary Search.
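This bound can be checked empirically. The sketch below (function names are my own, not from the text) counts the three-way comparisons binary search makes on the 14-element array of the example and compares the worst case against ⌈log2(n + 1)⌉:

```python
import math

def binary_search_comparisons(arr, target):
    """Count the three-way comparisons binary search makes on a sorted array."""
    low, high, count = 0, len(arr) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        count += 1                     # one comparison node of the decision tree
        if arr[mid] == target:
            return count
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

arr = list(range(1, 15))               # n = 14, as in the example above
worst = max(binary_search_comparisons(arr, t) for t in arr)
bound = math.ceil(math.log2(len(arr) + 1))
print(worst, bound)                    # worst case meets the lower bound: 4 4
```

For n = 14 the worst-case search takes 4 comparisons, matching ⌈log2 15⌉ = 4, so binary search achieves the information-theoretic lower bound.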
2. Oracle and adversary argument:
Another technique for obtaining lower bounds consists of making use of an "oracle." Given some model of computation, such as comparison trees, the oracle tells us the outcome of each comparison. In order to derive a good lower bound, the oracle tries its best to make the algorithm work as hard as possible. It does this by choosing, as the outcome of the next comparison, the result that forces the most work to be needed to determine the final answer. By keeping track of the work that is done, a worst-case lower bound for the problem can be derived.
Example (Merging Problem): Given the sets A(1: m) and B(1: n), where the elements in A and in B are sorted, consider lower bounds for algorithms that combine these two sets into a single sorted set.
Assume that all of the m + n elements are distinct, and that A(1) < A(2) < ... < A(m) and B(1) < B(2) < ... < B(n).
Elementary combinatorics tells us that there are C(m+n, n) ways that the A's and B's can merge together while still preserving the ordering within A and B.
Thus, if we use comparison trees as our model for merging algorithms, then there will be C(m+n, n) external nodes, and therefore at least log2 C(m+n, n) comparisons are needed by any comparison-based merging algorithm.
If we let MERGE(m, n) be the minimum number of comparisons needed to merge m items with n items, then we have the inequality
⌈log2 C(m+n, n)⌉ ≤ MERGE(m, n) ≤ m + n − 1.
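The information-theoretic lower bound ⌈log2 C(m+n, n)⌉ and the m + n − 1 comparisons used by the standard two-pointer merge can be tabulated with a short sketch (function names are illustrative; `math.comb` is assumed available, i.e. Python 3.8+):

```python
import math

def merge_lower_bound(m, n):
    """Information-theoretic lower bound: ceil(log2 C(m+n, n)) comparisons."""
    return math.ceil(math.log2(math.comb(m + n, n)))

def merge_upper_bound(m, n):
    """The standard two-pointer merge uses at most m + n - 1 comparisons."""
    return m + n - 1

for m, n in [(3, 3), (5, 5), (2, 8)]:
    print(m, n, merge_lower_bound(m, n), merge_upper_bound(m, n))
```

Note that for m = n = 3 the two bounds coincide at 5, which reflects the known fact that merging two equal-length sorted lists genuinely requires about 2m − 1 comparisons, while for lopsided sizes such as (2, 8) the information-theoretic bound is much smaller than m + n − 1.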
3. State Space Method:
1. The State Space Method is a set of rules that shows the possible states (n-tuples) that an algorithm can reach from a given state after a single comparison.
2. Once the state transitions are given, it is possible to derive lower bounds by arguing that the final state cannot be reached using any fewer transitions.
3. Aim: count the number of state changes; this is the goal of the State Space Method.
4. In this approach, we count the number of comparisons by counting the number of changes in state.
5. Example: analyse the problem of finding the smallest and the biggest items using the state space method, modelled as a knockout tournament among n = 8 teams, A through H.
6. For the largest item we need 7 comparisons. What about the second largest item? We count the teams that lost their match against team A; these teams are B, D, and E. So the total number of comparisons so far is 7.
7. Let n be the total number of items. Then:
Comparisons to find the biggest item = n − 1
Comparisons to find the 2nd biggest item = ⌈log2 n⌉ − 1
8. Here the number of comparisons equals the number of changes of state during the execution of the algorithm.
Phase 3: In this phase, the teams in the C-state (the losers) are considered, and matches are played between them to find the team that never wins at all, i.e. the smallest item. In this structure we move upward, recording who loses after each match.
Here H is the team that never wins at all. By this, we fulfil our second aim too.
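The tournament scheme described above (find the winner with n − 1 matches, then play off only the ⌈log2 n⌉ teams that lost directly to the winner) can be sketched as follows; the function and variable names are my own, and the items are assumed distinct:

```python
def max_and_second(items):
    """Tournament method: the largest item costs n - 1 comparisons; the second
    largest is then found among the <= ceil(log2 n) items that lost to it,
    for a total of n + ceil(log2 n) - 2 comparisons. Items must be distinct."""
    comparisons = 0
    losers_to = {x: [] for x in items}     # winner -> items it beat directly
    round_ = list(items)
    while len(round_) > 1:                 # Phase 1: knockout tournament
        nxt = []
        for i in range(0, len(round_) - 1, 2):
            a, b = round_[i], round_[i + 1]
            comparisons += 1
            winner, loser = (a, b) if a > b else (b, a)
            losers_to[winner].append(loser)
            nxt.append(winner)
        if len(round_) % 2 == 1:           # odd team gets a bye this round
            nxt.append(round_[-1])
        round_ = nxt
    champion = round_[0]
    # Phase 2: the second largest must have lost directly to the champion.
    candidates = losers_to[champion]
    second = candidates[0]
    for c in candidates[1:]:
        comparisons += 1
        if c > second:
            second = c
    return champion, second, comparisons

items = [3, 7, 1, 9, 4, 8, 2, 6]           # n = 8 "teams"
print(max_and_second(items))               # (9, 8, 9): 8 + log2(8) - 2 = 9
```

For n = 8 this uses 7 comparisons in the knockout phase and 2 more among the 3 teams that lost to the champion, i.e. n + ⌈log2 n⌉ − 2 = 9 in total.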
A lower bound L(n) is a property of the particular problem, i.e. of the sorting problem or of matrix multiplication, not of any particular algorithm solving that problem.
Lower bound theory says that no algorithm can carry out the task in fewer than L(n) time units for arbitrary inputs, i.e. every comparison-based sorting algorithm must take at least L(n) time in the worst case.
Trivial lower bounds are obtained by counting the number of items in the problem's input that must be examined and the number of output items that must be produced.
Lower bound theory is the method used to establish that a given algorithm is the most efficient possible. This is done by discovering a function g(n) that is a lower bound on the time that any algorithm must take to solve the given problem. If we have an algorithm whose computing time is of the same order as g(n), then we know that, asymptotically, we cannot do better.
If f(n) is the time for some algorithm, then we write f(n) = Ω(g(n)) to mean that g(n) is a lower bound of f(n). Formally, this holds if there exist positive constants c and n0 such that |f(n)| ≥ c|g(n)| for all n > n0. Besides lower bounds that hold within a constant factor, we also try to determine more exact bounds whenever possible.
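As an illustration of the definition, the comparison-sorting lower bound log2(n!) is Ω(n log2 n). The sketch below checks the defining inequality for the constants c = 1/4 and n0 = 2 over a finite range (these constants are my own choice for illustration; they are not the only valid pair):

```python
import math

def log2_factorial(n):
    """Compute log2(n!) as a sum of logs to avoid huge integers."""
    return sum(math.log2(k) for k in range(2, n + 1))

# Check |f(n)| >= c*|g(n)| with f(n) = log2(n!), g(n) = n*log2(n),
# c = 0.25 and n0 = 2, over a finite sample of n values.
c, n0 = 0.25, 2
ok = all(log2_factorial(n) >= c * n * math.log2(n) for n in range(n0, 200))
print(ok)  # True: the inequality holds on the sampled range
```

A finite check is of course not a proof; Stirling's approximation gives log2(n!) ≈ n log2 n − n/ln 2, which is why any constant c < 1 works for large enough n.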
Deriving good lower bounds is more challenging than designing efficient algorithms. This
happens because a lower bound states a fact about all possible algorithms for solving a
problem. Generally, we cannot enumerate and analyze all these algorithms, so lower bound
proofs are often hard to obtain.
A problem is in the class NPC if it is in NP and is as hard as any problem in NP. A problem
is NP-hard if all problems in NP are polynomial time reducible to it, even though it may not
be in NP itself.
If a polynomial time algorithm exists for any of these problems, all problems in NP would be
polynomial time solvable. These problems are called NP-complete. The phenomenon of NP-
completeness is important for both theoretical and practical reasons.
Definition of NP-Completeness
A language B is NP-complete if it satisfies two conditions:
1. B is in NP.
2. Every A in NP is polynomial time reducible to B.
If a language satisfies the second property, but not necessarily the first one, the language B is
known as NP-Hard. Informally, a search problem B is NP-Hard if there exists some NP-
Complete problem A that Turing reduces to B.
Problems in NP-Hard cannot be solved in polynomial time unless P = NP. If a problem is proved to be NP-complete, there is no need to waste time trying to find an efficient exact algorithm for it. Instead, we can focus on designing approximation algorithms.
NP-Complete Problems
Following are some NP-Complete problems, for which no polynomial time algorithm is known:
1. Determining whether a graph has a Hamiltonian cycle
2. Determining whether a Boolean formula is satisfiable, etc.
NP-Hard Problems
The following problems are NP-Hard:
1. The circuit-satisfiability problem
2. Set Cover
3. Vertex Cover
4. Travelling Salesman Problem
TSP is NP-Complete
The traveling salesman problem consists of a salesman and a set of cities. The salesman has
to visit each one of the cities starting from a certain one and returning to the same city. The
challenge of the problem is that the traveling salesman wants to minimize the total length of
the trip.
Proof
To prove TSP is NP-Complete, first we have to prove that TSP belongs to NP. For the decision version of TSP, given a candidate tour we check that the tour contains each vertex exactly once, calculate the total cost of the edges of the tour, and check whether the cost is at most a given bound. This can be completed in polynomial time. Thus TSP belongs to NP.
Secondly, we have to prove that TSP is NP-hard. One way to prove this is to show that Hamiltonian cycle ≤p TSP (we know that the Hamiltonian cycle problem is NP-complete).
Given an instance G = (V, E) of the Hamiltonian cycle problem, an instance of TSP is constructed as follows. We create the complete graph G' = (V, E'), where
E' = {(i, j) : i, j ∈ V and i ≠ j}
The cost function is defined as:
t(i, j) = 0 if (i, j) ∈ E, and 1 otherwise.
Now, suppose that a Hamiltonian cycle h exists in G. It is clear that the cost of each edge in h
is 0 in G' as each edge belongs to E. Therefore, h has a cost of 0 in G'. Thus, if graph G has a
Hamiltonian cycle, then graph G' has a tour of 0 cost.
Conversely, assume that G' has a tour h' of cost at most 0. By definition, the cost of each edge in E' is either 0 or 1; hence every edge of h' must have cost 0, since the cost of h' is 0. We therefore conclude that h' contains only edges in E.
We have thus proven that G has a Hamiltonian cycle if and only if G' has a tour of cost at most 0. Hence TSP is NP-complete.
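The reduction can be exercised on small graphs. In the sketch below (helper names are my own; the brute-force search is only for tiny instances), we build the cost function t of G' and find the cheapest tour:

```python
from itertools import permutations

def tsp_cost_function(n, edges):
    """Build the reduction's cost function t: 0 on edges of G, 1 otherwise."""
    edge_set = {frozenset(e) for e in edges}
    return lambda i, j: 0 if frozenset((i, j)) in edge_set else 1

def min_tour_cost(n, t):
    """Brute-force the cheapest tour in the complete graph G' (tiny n only)."""
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm             # fix vertex 0 as the starting city
        cost = sum(t(tour[k], tour[(k + 1) % n]) for k in range(n))
        best = cost if best is None else min(best, cost)
    return best

# A 4-cycle has a Hamiltonian cycle, so the reduced instance has a 0-cost tour.
print(min_tour_cost(4, tsp_cost_function(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))
# A star graph has none, so every tour must use at least one cost-1 edge.
print(min_tour_cost(4, tsp_cost_function(4, [(0, 1), (0, 2), (0, 3)])))
```

The first instance yields a tour of cost 0 and the second does not, mirroring the if-and-only-if argument above.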
NP-Hard vs NP-Complete:
1. An NP-Hard problem (say X) can be solved if and only if there is an NP-Complete problem (say Y) that is reducible to X in polynomial time, whereas an NP-Complete problem can be solved by a non-deterministic algorithm/Turing machine in polynomial time.
2. An NP-Hard problem does not have to be in NP, whereas an NP-Complete problem must be both in NP and NP-hard.
3. An NP-Hard problem does not have to be a decision problem, whereas an NP-Complete problem is exclusively a decision problem.
4. Examples of NP-Hard problems: the Halting problem, the Vertex cover problem, etc. Examples of NP-Complete problems: determining whether a graph has a Hamiltonian cycle, determining whether a Boolean formula is satisfiable, the circuit-satisfiability problem, etc.
NP Problem:
NP is the set of problems whose solutions are hard to find but easy to verify; they can be solved by a non-deterministic machine in polynomial time.
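The "easy to verify" half can be illustrated with a polynomial-time verifier for the Hamiltonian cycle problem (a sketch; the function and variable names are assumptions of mine, not from the text):

```python
def verify_hamiltonian_cycle(n, edges, certificate):
    """Polynomial-time verifier: check that the certificate is a permutation
    of all n vertices and that consecutive vertices (cyclically) are adjacent."""
    edge_set = {frozenset(e) for e in edges}
    if sorted(certificate) != list(range(n)):
        return False                   # not a permutation of the vertices
    return all(frozenset((certificate[i], certificate[(i + 1) % n])) in edge_set
               for i in range(n))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(verify_hamiltonian_cycle(4, edges, [0, 1, 2, 3]))   # True: a valid cycle
print(verify_hamiltonian_cycle(4, edges, [0, 1, 3, 2]))   # False: (1,3) missing
```

Checking a certificate takes O(n) set lookups, while no polynomial-time algorithm is known for finding such a cycle, which is exactly the hard-to-find, easy-to-verify asymmetry that defines NP.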
NP-Hard Problem:
A problem X is NP-Hard if there is an NP-Complete problem Y such that Y is reducible to X in polynomial time. NP-Hard problems are as hard as NP-Complete problems. An NP-Hard problem need not be in the class NP.
NP-Complete Problem: