What Is Complexity Theory?

Complexity theory analyzes how efficiently problems can be solved on a computer, based on the relationship between the input size and the number of steps needed to solve the problem. Problems with polynomial time complexity can be solved efficiently because the number of steps grows only polynomially with the input size, while problems requiring exponential time are intractable for large inputs. A key open question is whether problems solvable in nondeterministic polynomial time (NP problems) can also be solved in deterministic polynomial time (P problems), as the answer would have major implications for many important problems, such as code breaking.

 Alan Turing's vision was of a single machine whose operation could be rewritten as a program, allowing one machine to compute any computable sequence.

 It was the mathematician John von Neumann who first brought Alan Turing's vision to life with an actual universal machine. Von Neumann knew the machine would be mostly empty memory to store the program, plus a component to execute the program's instructions, known as the arithmetic and logic unit. The machine could then follow a sequence of operations to answer complex questions, such as missile or ballistic calculations for the military.

 Having enough space or memory turned out to be a smaller problem than the time needed to run through a tricky question's computation sequence. In 1955, the mathematician John Nash articulated this concern in a letter to the National Security Agency (NSA) about the computational requirements of specific problems. In his case, he cared about code-breaking.

 First, he suggested that we look not at the absolute time needed to calculate any specific example, such as how long it would take to add two 4-digit numbers, but instead at how the computation grows relative to the input size.

 As the measure, we use the number of machine operations needed to generate an answer, or simply the number of steps: the number of state transitions the machine goes through.

 We can then graph the growth of any operation sequence this way and get a curve representing how many steps a machine will demand as the question size, or input size, grows. The shape of this growth curve became a meaningful way to classify a problem. One key reason the growth increases more sharply for some problems is the loops in the algorithm being run by the machine. When the number of steps in a loop depends on the size of the input, a single loop leads to a straight line on a graph of number of operations against input size, known as linear time growth.
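
To make this concrete, here is a minimal sketch in Python (my example, not from the original notes) of a single loop whose step count grows linearly with the input size n:

    # One pass over the input: the step count equals the input size,
    # so plotting steps against input size gives a straight line.
    def sum_list(items):
        total = 0
        steps = 0
        for x in items:
            total += x
            steps += 1   # one machine operation per item
        return total, steps

    print(sum_list(list(range(10)))[1])   # 10 steps
    print(sum_list(list(range(20)))[1])   # 20 steps: doubling input doubles steps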

 More complex problems call for algorithms that have loops within loops, known as nested loops. For example, we perform a nested loop when checking whether a list is complete, that is, whether every item we expect appears somewhere on the list. The first loop goes through each expected item, and for each item in the first loop we perform a second loop, which scans the entire list until we find that item. For a list of n items, this can require n * n, or n^2, steps. If we graph this growth in the same way, we instead get a curved line; in our example it is described by the equation n^2, and if the loops are nested more than two deep, the exponent rises with each new depth, giving a growth curve of n^3 for three nested loops. It was suggested that we call this kind of problem polynomial time, because the curves which represent their growth can be described using one or more terms of the form n^k, where n is the input length and k is the depth of our loop nesting.
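
A minimal sketch of this nested-loop completeness check in Python (my example; the function and variable names are illustrative, not from the original notes). The inner loop scans the whole list for each expected item, so n items against a list of n items costs n * n = n^2 steps:

    # Does every expected item appear somewhere in the list?
    def is_complete(expected, items):
        steps = 0
        complete = True
        for e in expected:        # outer loop: n iterations
            found = False
            for x in items:       # inner loop: n iterations each
                steps += 1        # one comparison per step
                if x == e:
                    found = True
            if not found:
                complete = False
        return complete, steps

    print(is_complete([1, 2, 3], [3, 1, 2]))    # (True, 9): 3 * 3 steps
    print(is_complete([1, 2, 99], [3, 1, 2]))   # (False, 9)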

 Most problems we ask a computer to solve require only polynomial time.


 With polynomial-time algorithms, the number of nested loops in the algorithm doesn't change as the input size grows; increasing the input, or n, only adds to the number of steps within each loop.

 With exponential-time algorithms, the number of loops in our algorithm increases as the input grows, so the number of steps multiplies with every additional input element.
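
A small sketch of the contrast (my example, not from the original notes): enumerating every subset of n items takes 2^n steps, so each additional input item doubles the work, which is exponential rather than polynomial growth:

    from itertools import combinations

    # Count one step per subset examined: there are 2^n subsets of n items.
    def count_subset_steps(items):
        steps = 0
        for size in range(len(items) + 1):
            for subset in combinations(items, size):
                steps += 1
        return steps

    print(count_subset_steps(list(range(4))))   # 16 = 2^4
    print(count_subset_steps(list(range(5))))   # 32 = 2^5: one more item, double the steps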

 Nash suggested to the government that as long as you keep your encryption keys secret, you would end up with an encryption system that was, in his words, impractical for the enemy to break, simply by choosing n to be large enough.

 No matter what domain computers are applied to, the interesting or important problems we want our computers to solve often require exponential time with our best algorithms. But of course, people work hard to find algorithms that could solve problems we thought were exponential in polynomial time instead, and they do this by finding a clever shortcut to the solution. When this happens, we say they reduce the complexity of the problem from the exponential category to the polynomial category. Yet many problems seem irreducible.
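
A classic illustration of such a shortcut (my example; the notes don't name a specific problem here) is computing Fibonacci numbers: the naive recursion repeats work and takes roughly exponential time, while remembering earlier answers (memoization) cuts it down to a polynomial number of steps:

    from functools import lru_cache

    # Naive recursion: recomputes the same subproblems, roughly 2^n calls.
    def fib_slow(n):
        if n < 2:
            return n
        return fib_slow(n - 1) + fib_slow(n - 2)

    # Memoized: each subproblem is computed once, roughly n steps.
    @lru_cache(maxsize=None)
    def fib_fast(n):
        if n < 2:
            return n
        return fib_fast(n - 1) + fib_fast(n - 2)

    print(fib_fast(100))   # instant; fib_slow(100) would run for ages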

 For example, given a collection of numbers, can you identify a subset of them that adds up to zero? Solving this problem is hard, yet if you were given a proposed solution, you could verify very easily whether it is correct. This category of problems was given the bizarre name NP, and here's why: people considered how such problems could be solved in polynomial time in theory. In theory, you could run the fast verification procedure in parallel on all possibilities and check whether any of them is correct. Think of it as guessing in parallel. This would only be possible on a theoretical machine that can make multiple decisions simultaneously. We call this theoretical machine a nondeterministic machine, as opposed to our actual machines, which are called deterministic because they can only step from one state to another in a single linear sequence. And this is why we call this set of problems NP: it stands for problems solvable in polynomial time using a nondeterministic machine, or simply nondeterministic polynomial time. It contains hundreds of important problems we face every day in computing.
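
Here is a minimal sketch of the verification side of the zero-sum example in Python (the function names are illustrative, not from the original notes). Finding the subset may take exponential time, but checking a proposed answer needs only a polynomial number of steps:

    # Verify a proposed solution: the candidate must be drawn from the
    # collection (respecting repeats) and must sum to zero.
    def verify_zero_subset(numbers, candidate):
        pool = list(numbers)
        for x in candidate:
            if x in pool:
                pool.remove(x)
            else:
                return False      # candidate uses a number we don't have
        return len(candidate) > 0 and sum(candidate) == 0

    print(verify_zero_subset([3, -2, 7, -1, 5], [3, -2, -1]))   # True
    print(verify_zero_subset([3, -2, 7, -1, 5], [7, -1]))       # False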

 Since this nondeterministic machine doesn't exist, it is thought that problems in NP will always take exponential time to solve on our ordinary deterministic machines; that is, that P does not equal NP. There is one last twist to the story: people have noticed that many problems in NP can be reduced to each other, because they share structural similarities. This growing set of interconnected problems is what we call NP-complete. Solving any one of these NP-complete problems with a fast, polynomial-time algorithm would mean you also have a fast algorithm for every other problem in NP.

 If that ever happened, the difficulty of a huge percentage of what computers are doing today would simply vanish. Though most believe it's not going to happen, the other approach, proving that these NP-complete problems must remain hard forever, has also been out of reach. There is still a million-dollar prize for anyone who comes up with an answer. It remains the most famous open question in computer science today.
