Comp CH4-1

Chapter Four discusses computational complexity, focusing on algorithm analysis, time and space complexity, and the significance of Big O notation. It covers various methods for analyzing algorithms, including worst-case, best-case, and average-case scenarios, as well as techniques for proving algorithm correctness. The chapter emphasizes the importance of understanding algorithm efficiency and scalability in computer science.


CHAPTER FOUR

COMPUTATIONAL COMPLEXITY
Computational Complexity
 Basics of algorithm analysis: Big O-notation

 Polynomial time and space

 nondeterministic polynomial time

 P vs NP

 Undecidable problems

 Polynomial time reductions and NP-complete problems

 Cook’s theorem
Computational Complexity
 Algorithm analysis is an important part of computational complexity theory, which provides a theoretical estimate of the resources an algorithm requires to solve a specific computational problem.

 Most algorithms are designed to work with inputs of arbitrary length.

 Analysis of algorithms is the determination of the amount of time and space resources required to execute them.
Computational Complexity
 The efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps, known as time complexity,

 or to the volume of memory, known as space complexity.

 Analysis of algorithms is used to choose the better algorithm for a particular problem, since one computational problem can be solved by different algorithms.

 Analysis of algorithms is the process of analyzing the problem-solving capability of an algorithm in terms of the time and space required (the size of memory needed for storage during execution).
Computational Complexity
 The main concern of analysis of algorithms is the required time or performance. Generally, we perform the following types of analysis:
 Worst-case − The maximum number of steps taken on any instance of size n.

 Best-case − The minimum number of steps taken on any instance of size n.

 Average-case − The average number of steps taken over all instances of size n.

 Amortized − The cost of a sequence of operations applied to an input of size n, averaged over the sequence.
Computational Complexity
 To solve a problem, we need to consider space as well as time complexity, since the program may run on a system where memory is limited but ample time is available, or vice versa.

 Compare bubble sort and merge sort: bubble sort does not require additional memory, but merge sort requires additional space.

 Though the time complexity of bubble sort is higher than that of merge sort, we may need to apply bubble sort if the program has to run in an environment where memory is very limited.
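The trade-off above can be sketched in Python (an illustrative sketch, not code from the chapter): bubble sort works in place with O(1) extra space but O(n^2) time, while merge sort runs in O(n log n) time but allocates O(n) auxiliary space.

```python
def bubble_sort(a):
    # Sorts in place: O(n^2) time, O(1) extra space.
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(a):
    # O(n log n) time, but allocates O(n) auxiliary lists.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```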
Rate of Growth
 Rate of growth is defined as the rate at which the running time of the algorithm increases as the input size increases.

 The growth rate can be categorized into two types:

 Linear:- If the running time of the algorithm increases linearly with the input size, it has a linear growth rate.

 Exponential:- If the running time of the algorithm increases exponentially with the input size, it has an exponential growth rate.
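A quick Python sketch (illustrative, not from the chapter) makes the gap between the two growth rates concrete: a hypothetical linear algorithm takes n steps, while a hypothetical exponential one takes 2^n.

```python
# Compare step counts for a linear and an exponential algorithm
# at a few input sizes (e.g. one pass over the input vs.
# trying every subset of the input).
for n in (10, 20, 30):
    print(f"n={n:2d}  linear={n:2d}  exponential={2 ** n}")
```

Tripling n from 10 to 30 triples the linear count but multiplies the exponential count by a factor of about a million.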
Proving Correctness of an Algorithm
 Once an algorithm is designed to solve a problem, it becomes very important that the algorithm always returns the desired output for every given input. So, there is a need to prove the correctness of the designed algorithm. This can be done using various methods.
 Proof by Counterexample:- Try to identify an input for which the algorithm might fail and apply the algorithm to it. If the algorithm handles every candidate counterexample correctly, confidence in its correctness grows. If a counterexample is found, the algorithm must be redesigned to handle it.

 Proof by Induction:- Using mathematical induction, we can prove an algorithm is correct for all inputs by proving it is correct for a base-case input, say 1, assuming it is correct for an arbitrary input k, and then proving it is true for k+1.

 Proof by Loop Invariant:- Find a loop invariant, a property that holds before and after every iteration of the loop. Prove that it holds before the loop starts (the base case), then apply mathematical induction across iterations to prove the rest of the algorithm correct.
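As an illustration of the loop-invariant technique (the function and invariant below are a hypothetical example, not from the chapter), consider proving a running-sum loop correct:

```python
def array_sum(a):
    """Return the sum of the elements of a.

    Loop invariant: at the start of iteration i,
        total == a[0] + a[1] + ... + a[i-1].
    - Initialization: before the loop, i == 0 and total == 0 (the empty sum).
    - Maintenance: adding a[i] re-establishes the invariant for i + 1.
    - Termination: when i == len(a), the invariant says total is the
      sum of the whole array, which is what we wanted to prove.
    """
    total = 0
    for i in range(len(a)):
        total += a[i]
    return total
```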
Methodology of Analysis
Asymptotic Analysis:-
 The asymptotic behavior of a function f(n) refers to the growth of f(n) as n gets large.

 We typically ignore small values of n, since we are usually interested in estimating how slow the program will be on large inputs.

 A good rule of thumb is that the slower the asymptotic growth rate, the better the algorithm, though this is not always true.
Methodology of Analysis
 Solving Recurrence Equations:-

 A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs. Recurrences are generally used in the divide-and-conquer paradigm.

 A recurrence relation can be solved using the following methods:

 Substitution Method − In this method, we guess a bound and, using mathematical induction, prove that our guess was correct.

 Recursion Tree Method − In this method, a recursion tree is formed in which each node represents the cost of a subproblem.

 Master Theorem − This is another important technique for finding the complexity of a recurrence relation.
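As a worked illustration (not from the chapter), the Master Theorem applied to the merge-sort recurrence:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + n,
\qquad a = 2,\; b = 2,\; f(n) = n .

n^{\log_b a} = n^{\log_2 2} = n = \Theta(f(n))
\;\Longrightarrow\; \text{(Case 2)}\quad
T(n) = \Theta(n \log n).
```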
Methodology of Analysis
 Amortized Analysis:-

 Amortized analysis is generally used for algorithms in which a sequence of similar operations is performed.

 Amortized analysis provides a bound on the actual cost of the entire sequence, instead of bounding the cost of each operation in the sequence separately.

 Amortized analysis differs from average-case analysis; probability is not involved in amortized analysis. Amortized analysis guarantees the average performance of each operation in the worst case.
Methodology of Analysis
 Aggregate Method:-

 The aggregate method gives a global view of a problem. In this method, if n operations take worst-case time T(n) in total, then the amortized cost of each operation is T(n)/n. Though different operations may take different times, in this method the varying cost is neglected.

 Accounting Method:-

 In this method, different charges are assigned to different operations according to their actual cost. If the amortized cost of an operation exceeds its actual cost, the difference is assigned to the object as credit. This credit helps pay for later operations whose amortized cost is less than their actual cost.
Methodology of Analysis
 Potential Method:-

 This method represents the prepaid work as potential energy, instead of considering prepaid work as credit. This energy can be released to pay for future operations.

 Dynamic Table:-

 If the allocated space for the table is not enough, we must copy the table into a larger table. Similarly, if a large number of members are erased from the table, it is a good idea to reallocate the table with a smaller size.
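The dynamic table is the classic target of amortized analysis. A minimal Python sketch (hypothetical helper name, not code from the chapter) counts the element copies caused by doubling the table whenever it fills up:

```python
def append_with_doubling(table, capacity, value):
    # Hypothetical sketch: double the capacity when the table is full.
    # Over n appends the total copying cost stays below 2n, so one
    # append costs O(1) amortized even though a single resize is O(n).
    copies = 0
    if len(table) == capacity:
        capacity = max(1, 2 * capacity)
        copies = len(table)  # cost of copying into the larger table
    table.append(value)
    return capacity, copies

table, capacity, total_copies = [], 0, 0
for v in range(16):
    capacity, copies = append_with_doubling(table, capacity, v)
    total_copies += copies
# Copies happen at sizes 1, 2, 4, 8: total 1 + 2 + 4 + 8 = 15 < 2 * 16.
```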
Big-O Notation
 Big O notation is a mathematical notation that describes the limiting
behavior of a function when the argument tends towards a particular value
or infinity.

 In computer science, big O notation is used to classify algorithms according


to how their run time or space requirements grow as the input size grows.

 Is Big-O Useful?

 Big-O notation is most useful for large n.

 The suppression of low-order terms and leading constants can be misleading for small n.
Big-O Notation
 Big O notation characterizes functions according to their growth
rates: different functions with the same growth rate may be
represented using the same O notation.

 The letter O is used because the growth rate of a function is also


referred to as the order of the function.

 A description of a function in terms of big O notation usually


only provides an upper bound on the growth rate of the function.
Big-O Notation
 Big O notation is a powerful tool used in computer science to
describe the time complexity or space complexity of algorithms.

 It provides a standardized way to compare the efficiency of


different algorithms in terms of their worst-case performance.

 Understanding Big O notation is essential for analyzing and


designing efficient algorithms.
What is Big-O Notation?
 Big-O, commonly read as "Order of", is a way to express the upper bound of an algorithm's time complexity, since it analyzes the worst-case behavior of the algorithm.

 It provides an upper limit on the time taken by an algorithm in terms of the size of the input.

 It is denoted as O(f(n)), where f(n) is a function that represents the number of operations (steps) an algorithm performs to solve a problem of size n.

 Big O notation describes only the asymptotic behavior of a function, not its exact value.

 Big O notation can be used to compare the efficiency of different algorithms or data structures.
Definition of Big-O Notation:
 Given two functions f(n) and g(n),

 we say that f(n) is O(g(n)) if there exist constants c > 0 and n0 >= 0 such that f(n) <= c*g(n) for all n >= n0.

 In simpler terms, f(n) is O(g(n)) if f(n) grows no faster than c*g(n) for all n >= n0,

 where c and n0 are constants.
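The definition can be checked numerically. As an illustrative sketch (the function and witness constants are a hypothetical example, not from the chapter), f(n) = 3n^2 + 2n is O(n^2) with c = 5 and n0 = 1:

```python
# f(n) = 3n^2 + 2n is O(n^2): the witnesses c = 5 and n0 = 1 work,
# since 3n^2 + 2n <= 3n^2 + 2n^2 = 5n^2 for all n >= 1.
def f(n):
    return 3 * n * n + 2 * n

c, n0 = 5, 1
assert all(f(n) <= c * n * n for n in range(n0, 1000))
```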


Why is Big O Notation Important?
 Big O notation is important for several reasons:

 It helps analyze the efficiency of algorithms.

 It provides a way to describe how the runtime or space requirements of an algorithm grow as the input size increases.

 It allows programmers to compare different algorithms and choose the most efficient one for a specific problem.

 It helps in understanding the scalability of algorithms and predicting how they will perform as the input size grows.

 It enables developers to optimize code and improve overall performance.


Properties of Big O Notation:
 Reflexivity

 For any function f(n), f(n) = O(f(n)). Example:

 f(n) = n^2, then f(n) = O(n^2).

Transitivity
 If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).

Constant Factor
 For any constant c > 0 and functions f(n) and g(n), if f(n) = O(g(n)), then c*f(n) = O(g(n)).
Properties of Big O Notation:
 Sum Rule

 If f(n) = O(g(n)) and h(n) = O(g(n)), then f(n) + h(n) = O(g(n)).

 Product Rule

 If f(n) = O(g(n)) and h(n) = O(k(n)), then f(n) * h(n) = O(g(n) * k(n)).

 Composition Rule

 If f(n) = O(g(n)), then f(h(n)) = O(g(h(n))) for inputs h(n) that grow without bound.
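As a small worked instance of the sum rule (illustrative, not from the chapter):

```latex
f(n) = n^2 = O(n^2), \qquad h(n) = n = O(n^2)
\;\Longrightarrow\; f(n) + h(n) = n^2 + n = O(n^2).
```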


Common Big-O Notations:
 Big-O notation is a way to measure the time and space complexity of an algorithm. It describes the upper bound of the complexity in the worst-case scenario. Let's look into the different types of time complexities:
 Linear Time Complexity: Big O(n) Complexity

 e.g., traversing an array to find a specific element

 Logarithmic Time Complexity: Big O(log n) Complexity

 e.g., a binary search algorithm

 Quadratic Time Complexity: Big O(n^2) Complexity

 e.g., a simple bubble sort algorithm
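The first two bullets can be sketched in Python (illustrative implementations, not code from the chapter): linear search may inspect all n elements, while binary search halves a sorted range at each step.

```python
def linear_search(a, target):
    # O(n): may have to look at every element.
    for i, x in enumerate(a):
        if x == target:
            return i
    return -1

def binary_search(a, target):
    # O(log n): halves the sorted search range each step.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```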


Common Big-O Notations:
 Cubic Time Complexity: Big O(n^3) Complexity

 e.g., a naive matrix multiplication algorithm

 Polynomial Time Complexity: Big O(n^k) Complexity

 includes linear time complexity O(n), quadratic time complexity O(n^2), and cubic time complexity O(n^3)

 Exponential Time Complexity: Big O(2^n) Complexity

 e.g., the problem of generating all subsets of a set

 Factorial Time Complexity: Big O(n!) Complexity

 e.g., generating all permutations of a set
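The subset-generation example of exponential time can be sketched in Python (an illustrative sketch, not from the chapter); a set of n elements yields 2^n subsets, so merely producing the output already takes O(2^n) work.

```python
def all_subsets(items):
    # O(2^n): each element is either in or out of every subset,
    # so the number of subsets doubles with each new element.
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets
```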
