Chapter - 1 (Analysis of Algorithms) (Not Completed)
analysis of algorithms
topic: algorithms
consists of a finite set of steps/statements to solve a given problem in a finite amount of time.
a procedure that reduces the solution of some class of problems to a series of rote steps which, if followed to the
letter, and as far as may be necessary, is bound to:
• always give the right answer and never give a wrong answer.
• work for all instances of problems of the class: an algorithm must work for all valid instances of a given
problem. it should not fail or refuse to work for a specific subset of inputs.
- the algorithm should always provide a correct and meaningful output.
- if the input is invalid, the algorithm should gracefully handle errors instead of failing unexpectedly.
(i) each statement/step consists of one or more basic, fundamental operations.
- example: x=y+z;
normal variables: x, y and z
basic and fundamental operations (cannot be further divided):
(i) addition
(ii) assignment
trivial (not necessary)
- load
(a) input
- an algorithm may take zero or more inputs.
example:
- sorting algorithm takes a list as input.
- an algorithm like print("hello") takes zero external inputs; everything it needs is internal to the algorithm.
(b) output
- an algorithm must produce at least one output.
- without output, we cannot verify if the algorithm is correct or useful.
conclusion
- every fundamental operation in an algorithm must be definite and effective.
- an algorithm must always terminate in finite time.
- it may take zero or more inputs but must always produce at least one output to be meaningful.
(b) flow chart: avoids most (if not all) issues of ambiguity; difficult to modify without specialized tools;
largely standardized (graphical way)
(c) pseudo-code (SPARKS in Sahni): also avoids most issues of ambiguity; vaguely (weakly) resembles
common elements of programming languages; no particular agreement on syntax.
(d) programming language: tends to require expressing low-level details that are not necessary for a high-
level understanding.
(ii) computation
some means of performing arithmetic computations, comparisons, testing logical conditions, and so forth...
(iii) selection
some means of choosing among two or more possible courses of action, based upon initial data, user input and/or
computed results
(iv) iteration
some means of repeatedly executing a collection of instructions, for a fixed number of times or until some logical
condition holds
problems
decidable: a problem is decidable if there exists an algorithm that can solve it in finite time for every valid
input; it runs on a deterministic turing machine.
undecidable: a problem is undecidable if there is no algorithm that can solve it in finite time for every valid
input; proven using turing machines and the halting problem.
tractable: problems that can be solved efficiently (in polynomial time). these problems belong to the p
(polynomial time) complexity class (e.g., n²).
intractable: problems that can be solved but not efficiently (in exponential time or worse). these problems
belong to np-hard or exponential complexity classes (e.g., 2ⁿ).
doubt (i): when do i say a solution, with regard to an algorithm, is efficient and optimal?
- if it is bounded by a polynomial.
why?
to determine the resource consumption so that we can make performance comparisons.
doubt (ii) time and space are the only two resources that need to be analysed?
no, there are many other resources
for example:
(a) time
(b) space
(c) registers
(d) energy (power)
(e) bandwidth/channel capacity
conclusion:
- this definition mixed algorithmic efficiency with hardware performance, making it inconsistent across
different machines.
- researchers realized that efficiency should not depend on the machine but on the algorithm itself.
efficiency was defined based on exploring all possible solutions. this approach focused on brute-force methods
that checked every potential answer.
efficiency is determined by how the algorithm’s time and space requirements grow with input size n.
it should be independent of hardware and not rely on exhaustive search.
examples: 2ⁿ, 3ⁿ
the growth rate is very rapid and unmanageable, meaning it increases in an unpredictable way even for small
increases in n; for comparison, n² < 2ⁿ for large n.
(i) time has a polynomial rate of growth (4n²+8n+6) (example explained below)
(ii) algorithm time is bounded by a polynomial
it is a quadratic, i.e. a polynomial of degree 2.
in apriori analysis, we are going to derive the time by means of mathematical function with respect to input
size, which can come in form of polynomial and exponential.
our goal as algorithm developers is to develop algorithms with a polynomial rate of growth (they are efficient:
the function grows slowly, so the algorithm runs fast; if the function grows fast, like an exponential, the
algorithm runs slowly).
- algorithms with polynomial time grow slower as input size increases. the time complexity increases at a rate
that is more predictable and manageable.
- polynomial time algorithms can handle larger inputs without performance suffering drastically. in
contrast, exponential time algorithms quickly become unusable as inputs grow even slightly.
- polynomial time allows an algorithm to run efficiently even for moderate-sized inputs.
- exponential time algorithms are inefficient because the time grows so rapidly that they can't solve large
problems in any reasonable time.
[table residue: measured running times (19, 25, 31, 36, 41, 44, 51) for increasing input sizes (9, 11, 12, 13, 15, 16, 17)]
- in this case, the performance is analyzed after the algorithm has been implemented and tested on a specific
hardware platform, which could include factors like CPU, memory, operating system, and other system-
specific characteristics. so it is difficult to compare the efficiency of two algorithms unless experiments on
their running times have been performed in the same hardware and software environments.
- the time complexity of an algorithm using a posteriori analysis differs from system to system; if the time
taken by the program is less, the credit goes to the compiler and hardware.
- experiments can be done only on a limited set of test inputs and care must be taken to make sure these are
representative.
analysis:
example: x←y+z; what is the time taken by this statement?
to determine the time taken by this statement, i need a platform/environment.
doubt (i) why we don't prefer a posteriori analysis for algorithm efficiency? what are the advantages and
drawbacks?
advantages:
(i) gives the exact execution time
- since we are executing the algorithm in a real environment, we get the actual time values in seconds,
milliseconds or nanoseconds.
(ii) helps in real-world performance tuning
- useful for optimizing software for a particular machine or setup.
- identifies hardware bottlenecks.
drawbacks:
(i) not easy to analyse across platforms
- execution time is influenced by hardware.
- makes it difficult to derive general conclusions.
(ii) not applicable for all input classes.
- a posteriori analysis requires us to run the algorithm on specific test cases.
- but algorithms may behave differently for different inputs.
- we cannot test all possible inputs.
instead of running the algorithm on a specific system, we analyze its performance mathematically,
independent of hardware or software constraints. so it allows us to evaluate the efficiency of two algorithms
in a way that is independent from the hardware and software environment.
- the time complexity of an algorithm using a priori analysis is the same for every system; if the algorithm
runs faster, the credit goes to the programmer.
this method helps us understand the growth rate of time and space complexity based on the input size n,
using asymptotic notation (e.g., big-o notation).
- actual execution time varies across platforms due to hardware differences (cpu speed, memory, compilers,
etc.)
- to ensure a platform-independent analysis, we measure fundamental operations instead of real-time
execution.
- a mathematical performance measure helps predict efficiency before implementation (a priori analysis).
- example: if c=2, then the word size is 2·log₂n bits, allowing us to index up to n² elements.
this keeps the RAM model realistic while still allowing efficient computation (a small numeric check follows below).
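a small numeric check (a sketch in python; n = 1024 is just a hypothetical value) of the claim above: with c = 2 a word holds 2·log₂n bits, and such a word can address 2^(2·log₂n) = n² cells.

import math
n = 1024                                   # hypothetical input size
word_bits = 2 * int(math.log2(n))          # with c = 2: 2*log2(1024) = 20 bits
addressable = 2 ** word_bits               # cells one word can index
print(word_bits, addressable, n * n)       # 20 1048576 1048576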
- the ram model simplifies computations by assuming basic operations take constant time.
- in real-world computers, some complex instructions exist (e.g., exponentiation), which take more than
O(1).
- this creates gray areas where an operation might be assumed to be constant time but actually isn't.
however, there are special cases where exponentiation can be O(1), such as when n is an exact power of 2.
- bitwise left shift (<<) by n positions moves bits left, multiplying the number by 2ⁿ.
- since shift operations work on a fixed word size, they usually take constant time O(1).
- but if n is too large, overflow occurs, and extra operations may be needed.
- in theoretical analysis, we assume operations like computing 2ⁿ using bit shifting are O(1).
- but in real-world hardware, if the result is too large to fit in one word, extra computations occur, making
the time complexity non-O(1) (see the sketch below).
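a minimal sketch (python) of the point above: 1 << n computes 2ⁿ with a left shift, and masking to an assumed 64-bit word shows when the result no longer fits in a single word.

WORD_BITS = 64                                  # assumed machine word size
for n in (3, 10, 63, 64, 100):
    value = 1 << n                              # 2**n via left shift (python ints never overflow)
    low_word = value & ((1 << WORD_BITS) - 1)   # keep only the bits that fit in one word
    print(n, value == low_word)                 # True while 2**n still fits in a single word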
- comparison/relational operations:
in the RAM model, comparison and relational operations are assumed to take constant time O(1).
these operations include:
• greater than ( > )
• less than ( < )
• greater than or equal to ( ≥ )
• less than or equal to ( ≤ )
• equal to ( = )
- logical operations:
in the RAM(random access machine) model, the following logical operations are assumed to take
constant time O(1):
• logical AND (∧ or &&)
• logical OR (∨ or ||)
• logical NOT (¬ or !)
• bitwise AND (&)
• bitwise OR (|)
• bitwise XOR (^)
• bitwise NOT (~)
• bitwise shift left (<<) (when shifting within word size)
• bitwise shift right (>>) (when shifting within word size)
- transfer operations:
in the random access machine (RAM) model, the following data transfer operations are assumed to take
constant time O(1):
(a) loading and storing operations
• reading a value from memory:
example: x = A[i] (fetching an element from an array).
• storing a value into memory:
example: A[i] = x (writing an element into an array).
- i/o operations:
in the RAM model, the following input/output (i/o) operations are assumed to take constant time O(1):
(a) reading input
• reading a single integer, float, character, or word from standard input:
example: x = input() (reading a value from the user).
assumption
- in reality, i/o operations depend on hardware and are usually much slower than memory operations.
- in the ram model, we assume these operations take O(1) for theoretical simplicity in analyzing
algorithms.
[diagram: a memory unit connected to a cpu; all algorithmic steps/operations are stored in memory (assume memory is large enough to accommodate any algorithm), and we find out the total time in units.]
for example, we will write statements of the type "algorithm A runs in time proportional to n," meaning
that if we were to perform experiments, we would find that the actual running time of algorithm A on
any input of size n never exceeds cn, where c is a constant that depends on the
hardware and software environment used in the experiment.
given two algorithms A and B, where A runs in time proportional to n and B runs in time proportional to
n², we will prefer A to B, since the function n grows at a smaller rate than the function n².
method(i)
- take values of n=32
log₂n vs √n
log₂32 vs √32
5 < 5.66
algorithm test:
{
x←y+z;
for i←1 to n;
a←b*c;
for i←1 to n;
for j←1 to n;
k=k*5;
}
(i) x←y+z;
+(addition): takes one unit
← (storing into x): takes one unit
total: 2 units
i←1 to n;
- first we initialise i=1; takes one unit
- then the loop condition compares i with n: n times when it is true, plus 1 when it fails and the loop terminates;
takes n+1 units
- the loop increments i each time the condition holds, so i++ executes n times; takes n units
(1+(n+1)+n)
a←b*c;
- this statement is within the loop, so it runs n times, not n+1 times, because on the last check the
condition fails and the loop terminates; the multiplication and the assignment each execute n times, so it takes 2n units
total for this loop: 1+(n+1)+n+n+n = 4n+2
i←1 to n;
- first we initialise i=1; takes one unit
- then the loop condition compares i with n: n times when it is true, plus 1 when it fails and the loop terminates;
takes n+1 units
- the loop increments i each time the condition holds, so i++ executes n times; takes n units
(1+(n+1)+n)
for j←1 to n;
- the inner loop starts over for each of the n iterations of the outer loop
- initialising j=1 takes one unit each time; n units in total
- the condition compares j with n: (n+1) checks each time (n when it is true, plus 1 when it fails);
n(n+1) units in total
- the increment j++ runs n times each time; n·n units in total
k=k*5;
- this statement is within the loop of j, and j is within the loop of i, which means it executes n·n times
- the multiplication k*5 runs n·n times; takes n·n units
- the assignment into k runs n·n times; takes n·n units
final:
1+(n+1)+n+n+n(n+1)+n·n+n·n+n·n
= 1+n+1+n+n+n²+n+n²+n²+n²
= 4n²+4n+2
total time:
2+(4n+2)+(4n²+4n+2)
= 4n²+8n+6
n is input size.
as you increase the value, time increases.
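a minimal sketch in python (my own illustration, not part of the notes) that tallies the unit counts derived above for the test algorithm and confirms they add up to 4n²+8n+6.

def count_units(n):
    units = 2                            # x <- y + z : one addition + one assignment
    units += 1 + (n + 1) + n             # first loop: initialise i, n+1 comparisons, n increments
    units += 2 * n                       # body a <- b*c : one multiplication + one assignment, n times
    units += 1 + (n + 1) + n             # outer loop of the nested pair: init, comparisons, increments
    units += n + n * (n + 1) + n * n     # inner loop: n inits, n(n+1) comparisons, n*n increments
    units += 2 * n * n                   # body k = k*5 : one multiplication + one assignment, n*n times
    return units

for n in (1, 10, 100):
    assert count_units(n) == 4 * n * n + 8 * n + 6
    print(n, count_units(n))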
in coding:
x = y + z;
algorithm test:
{
x←y+z;
for i←1 to n;
a←b*c;
for i←1 to n;
for j←1 to n;
k=k*5;
}
x←y+z;
+(addition)
← (storing into x)
both have same frequency '1'
for i←1 to n;
a←b*c;
loop runs for n+1 times but the operations will execute for n times
so,
*(multiplication)
← (storing into a)
both have same frequency 'n'
highest frequency is 'n'
for i←1 to n;
for j←1 to n;
k=k*5;
loop runs for n(n+1) times but the operations will execute for n² times
so,
*(multiplication)
← (storing into k)
both have the same frequency 'n²'
highest frequency is 'n²'
final: n²+n+1
characterize the time in an elegant way, by limiting the bounds of the function.
(b) O(n²)
what does it mean?
it means that the algorithm may have one step, ten steps, or a hundred steps, but there is no step whose time
grows faster than n².
general rule
- drop constants → ignore coefficients (e.g., 4n² → n²).
- keep the dominant term → ignore lower-order terms (e.g., 8n and 6 are removed).
- result: big-O notation → focus on the upper bound on growth (a quick check follows below).
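the quick check (python): bounding every term of 4n²+8n+6 by n² gives 4n²+8n+6 ≤ 18n² for n ≥ 1, so only the dominant n² term matters.

def f(n):
    return 4 * n * n + 8 * n + 6

# 4n^2 + 8n + 6 <= (4 + 8 + 6) * n^2 = 18n^2 for every n >= 1
assert all(f(n) <= 18 * n * n for n in range(1, 10001))
print("4n^2 + 8n + 6 is O(n^2) with c = 18, k = 1")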
analogy: a seller's business keeps growing, and he upgrades his mode of transport each time:
cycle (cost: 5k) → bike (cost: 2 lakh) → car (cost: 25 lakh) → truck (cost: 50 lakh) → helicopter (cost: 50 crore) → airplane (cost: 400 crore)
he keeps a record of everything; if somebody asks him roughly how much he has invested in modes of transport,
he answers 400 crore, as the rest of the money is insignificant compared to 400 crore.
our example:
- time complexity: n²+n+6
n² > n > 6
- rate of growth: n and 6 are much smaller and insignificant compared to n², so the other terms become
insignificant.
n² has a higher rate of growth than n.
the rate of growth of running time can be described using different notations, including
big-theta and big-O notation (asymptotic notation)
therefore we extend asymptotic notation in analysis of algorithms to characterize the time or to get
the bounds of the functions representing the time
apriori analysis
types of analysis: determining the behaviour of the algorithm for different input classes
(i) best-case: the input class for which algorithm does least amount of work and takes minimum time.
- the best-case scenario depends on how the input is structured in a way that makes the algorithm work
the fastest.
- it is useful for analyzing optimal performance but does not give a complete picture of efficiency.
(ii) worst-case: the input class for which the algorithm does the most amount of work and takes maximum
time.
- the worst-case scenario helps us understand the upper bound on running time, ensuring the algorithm
never takes longer than this.
(iii) average-case: the expected running time of an algorithm over all possible inputs, assuming each input
of size n occurs with equal probability.
example:
<a1, a2, a3, ..., an>
what is the probability with which the algorithm takes an input from the iᵗʰ class?
average time = Σᵢ (probability of the iᵗʰ class) × (time for the iᵗʰ class); with equal probability for each class,
this becomes (Σᵢ time for the iᵗʰ class) / (number of classes).
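a small illustration (python; linear search is my own example, not taken from the notes) of best, worst and average case, counting comparisons as the basic operation.

def linear_search(a, key):
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == key:
            return i, comparisons        # index found, comparisons used
    return -1, comparisons               # not found after n comparisons

a = [19, 25, 31, 36, 41, 44, 51]
print(linear_search(a, 19))              # best case: key at the first position, 1 comparison
print(linear_search(a, 51))              # worst case: key at the last position, n comparisons
# average case: if the key is equally likely to be at each of the n positions,
# expected comparisons = (1 + 2 + ... + n) / n = (n + 1) / 2
n = len(a)
print(sum(linear_search(a, x)[1] for x in a) / n, (n + 1) / 2)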
interpretation: "this algorithm will always take this much time, no more and no less."
apart from big-o, big-omega, and theta, there are two more notations:
interpretation: "this function grows slower than f(n), but it is not equal to it."
interpretation: "this function grows faster than f(n), but it is not equal to it."
so we characterize it by means of
- upper bound: the time of the algorithm cannot be more than this / the algorithm will take at most this
much time.
- lower bound: time of algorithm cannot be lesser than this/this algorithm will take at least this
much time.
[figure: a multifloor building with floors 8-13; person 'A' stands on floor 10; floor 13 is marked as an upper bound for person 'A' and floor 8 as a lower bound.]
floor values
(i) > 10: upper bound; there can be many upper bounds
- closest upper bound: 11
let's try to understand upper bound and lower bound in more depth through hypothetical scenario
scenario: estimating the construction cost
a person wants to build a house and needs to estimate the construction cost. however, the exact cost
can only be known after the construction is complete.
- the budget is around 50 lakhs.
- the person asks two builders to provide estimates:
○ upper bound estimate (maximum possible cost)
○ lower bound estimate (minimum possible cost)
(b) builder 'b' gives a better upper bound estimate: "< 48 lakh"
○ this means the maximum cost required to construct a house is at most 48 lakh.
○ this is a tighter (better) upper bound because it is closer to our budget (50 lakh).
big-O takeaway:
- big-O notation represents the upper bound of a function's growth.
- we always try to find the closest possible valid upper bound.
(b) builder 'b' gives a better lower bound estimate: "> 48 lakh"
○ this means the cost will not be lower than 48 lakh.
○ this is a tighter (better) lower bound because it is closer to our budget (50 lakh).
big-omega takeaway:
- big-omega notation represents the lower bound of a function's growth.
- we always try to find the largest possible valid lower bound.
definition: let f(n) and g(n) be functions from the set of integers/reals to the set of real numbers.
concept: big-Oh (O), upper bound
(i) understand big-oh notation:
- it tells us how fast a function grows as the input n gets bigger.
- we ignore small details and focus only on big values of n.
example:
- if someone says O(n²), it means the function doesn’t grow faster than n² for large n.
- if someone says O(n), it means the function grows at most as fast as n.
breaking it down:
(i) f(n) is said to be O(g(n))
(ii) if there exists a constant c>0 such that when you multiply g(n) by c, the function f(n) is always
less than or equal to c⋅ g(n) for all sufficiently large n (i.e., for all n≥k), then we can say that f(n) is
O(g(n)).
in this context, the "set" refers to a collection or group of functions that satisfy a particular
condition.
O(g(n)) = { f(n) : there exist constants c > 0 and n₀(k) > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀(k) }
this means that the set O(g(n)) contains all functions f(n) that, for sufficiently large n (greater
than or equal to n₀(k)), grow no faster than g(n) multiplied by a constant c.
- O(g(n)) groups all functions that satisfy the given inequality condition with respect to g(n).
[diagram: the set O(n²) contains functions such as 2n·lg n and 5n²+2n.]
in simpler terms:
○ for small values of n, f(n) might be negative.
○ for large values of n, f(n) must be ≥0.
example:
f(n) = −100 + n²
since f(n) eventually stays nonnegative for all large n, it is asymptotically nonnegative.
- if f(n) were asymptotically negative, this would not make sense, because:
○ big-o is used to compare growth rates (negative growth would not fit the concept).
○ in complexity analysis, we talk about time or space complexity, which are always nonnegative.
- if g(n) were negative, then O(g(n)) would not make sense, because:
○ no function f(n) can be bounded above by a negative function for large n.
○ the set O(g(n)) would be empty (no valid functions would fit).
let's take an example and compare the graphs to get a better understanding:
let's check
1 ≤ n²
1 + n ≤ n² + n²
1 + n + n² ≤ n² + n² + n²
1 + n + n² ≤ 3n², for n > 1
point to remember:
smaller functions are in the order of bigger ones in big-O notation because big-O represents an upper
bound
analysis:
we have the function: f(n) = 1 + n + n²
now, the goal is to prove that this function grows at most as fast as n², and we are doing this using
big-Oh notation. the idea is to show that for sufficiently large n, f(n) is bounded above by some
constant multiple of n².
(b) the second term is n. again, for large n, n is smaller than n², so we can compare it to n².
now compare:
1 + n + n² ≤ n² + n² + n² = 3n²
this shows that for sufficiently large n, f(n) is always less than or equal to 3n².
conclusion:
• we are not comparing each term directly with n², but rather, we're bounding the smaller terms by n²
and showing that the entire function f(n) is less than or equal to a constant multiple of n².
• we chose c = 3 to show that for sufficiently large n, f(n) is bounded above by 3n².
thus, we proved that f(n) = O(n²).
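a quick numeric check (python) of the bound just proved: 1 + n + n² ≤ 3n² for every n ≥ 1.

assert all(1 + n + n * n <= 3 * n * n for n in range(1, 10001))
print("1 + n + n^2 is O(n^2) with c = 3, k = 1")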
[graph: c·g(n) lies above f(n) for all n ≥ k, i.e. f(n) is O(g(n)); here f(n) is O(n²).]
1 + n + n² ≤ 3n², n > 1
- for small values of n, yes, 1+n+n² might be larger than n², but we ignore small n in asymptotic
analysis.
- the key is that 1+n+n² grows at the same rate as n² for large n, meaning it can be bounded by some
constant multiple of n².
point to remember:
smaller functions are in the order of bigger ones in big-O notation because big-O represents an upper
bound
example (ii)
we are given:
f(n)=n+logn
such that:
f(n) ≤ c·g(n) for all n ≥ k
- in n+logn, the term n grows much faster than logn, especially for large n, so logn becomes negligible
as compared to n.
n + log n ≤ 2n, for n > 1
- smaller functions are in the order of bigger ones in big-O notation because big-O represents an upper
bound
n + log n ≤ 2n, n > 1, so f(n) = O(n)
f(n): n+logn
c: 2
g(n): n
O(n)
for example:
○ if n = 10⁶, then log n ≈ 20, which is tiny compared to 10⁶.
○ this means that n+logn is almost the same as n for large n.
thus, n+logn is asymptotically bounded by O(n).
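a quick check (python) that n + log₂n ≤ 2n for every n ≥ 1, i.e. c = 2 with g(n) = n works.

import math
assert all(n + math.log2(n) <= 2 * n for n in range(1, 10001))
print("n + log n is O(n) with c = 2, k = 1")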
why is f(n) = O(1)?
- f(n) = 2¹⁰⁰ is a constant value (a fixed number).
- it does not depend on n or grow as n increases.
- in big-o notation, a constant function is always considered O(1).
intuitive explanation
- imagine a machine that always takes the same amount of time to complete a task, no matter the input
size.
- whether n = 1 or n = 10⁶, the time taken is always the same: 2¹⁰⁰.
- this means the function has constant time complexity, i.e., O(1).
point to remember:
when i say f ≤ c·g, then f is O(g)
smaller functions are in the order of bigger functions
example (iv)
f(n) = n² (polynomial) and g(n) = 2ⁿ (exponential)
so, prove f(n) is O(g(n)), whenever n > k
- is f order of g (f is O(g)) or g order of f (g is O(f))?
(i) polynomial is a smaller function as compared to exponential so as we know smaller function are in
the order of bigger function.
n    f(n) = n²    g(n) = 2ⁿ
1        1            2
2        4            4
3        9            8
4       16           16
5       25           32
6       36           64
(the relationship before n = 4 is unpredictable / non-converging)
f(n) ≤ c·g(n), n ≥ k (take c = 1):
n = 1: 1 ≤ 2 (holds)
n = 2: 4 ≤ 4 (holds)
n = 3: 9 ≤ 8 (does not hold)
n = 4: 16 ≤ 16 (holds)
n = 5: 25 ≤ 32 (holds)
n = 6: 36 ≤ 64 (holds)
so, f(n) is O(g(n)) with c = 1, whenever n ≥ 4
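a quick check (python) of the table above: n² ≤ 2ⁿ fails at n = 3 but holds for every n ≥ 4, so c = 1 and k = 4 work.

assert 3 * 3 > 2 ** 3                                  # the inequality fails at n = 3
assert all(n * n <= 2 ** n for n in range(4, 1000))    # and holds from n = 4 onwards
print("n^2 is O(2^n) with c = 1, k = 4")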
graphical visualisation:
[graph: plots of n² and 2ⁿ; the curves meet at n = 2 and n = 4, and for n ≥ 4, 2ⁿ stays above n².]
definition of big-omega Ω :
a function f(n) is said to be Ω(g(n)) if there exist positive constants c and k (n₀)
such that:
f(n) ≥ c·g(n) for all n ≥ k (c, k > 0)
Ω(g(n)) = { f(n) : ∃ c > 0 and n₀(k) > 0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
the set Ω(g(n)) contains all functions f(n) that grow at least as fast as g(n), up to a
constant factor, for sufficiently large n.
[diagram: the set Ω(n²) contains functions such as 5n²+2n and 2n²+4n.]
graphical representation: [graph: the rate of growth of the function f(n).]
let's check: 1 + n + n² ≥ 1·n², for n ≥ 1
point to remember:
bigger functions are in the order of smaller ones in big-omega notation because big-omega represents
a lower bound
analysis:
we have the function:
f(n) = 1 + n + n²
now, the goal is to prove that this function grows at least as fast as n², and we are doing this
using big-omega notation. the idea is to show that for sufficiently large n, f(n) is bounded
below by some constant multiple of n².
○ the second term is n. similarly, for large n, n is smaller than n², but we need to prove that
the whole function grows at least as fast as some constant multiple of n².
○ we know that 1 and n are much smaller than n² as n grows large, so we bound the terms
to ensure the inequality holds.
○ when n becomes sufficiently large, both 1 and n are smaller than n², so the entire
expression is bounded below by n².
conclusion:
• we are not comparing each term directly with n², but rather, we're showing that the function
f(n) has a lower bound that is proportional to n² for large values of n.
• this shows that for sufficiently large n, f(n) is always greater than or equal to n², so f(n) is
Ω(n²).
• this approach is common in big-omega analysis because we’re focusing on the asymptotic
lower bound behavior for large n, not the exact details of the function for small n.
[graph: f(n) lies above c·g(n) for all n ≥ k, i.e. f(n) is Ω(g(n)); here f(n) is Ω(n²).]
takeaways:
- big-omega (Ω) ignores lower-order terms for large n.
- the function must be at least as large as a constant multiple of g(n).
- since n² dominates, 1+n+n² is at least Ω(n²) (see the quick check below).
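the quick check referred to above (python): 1 + n + n² ≥ 1·n² for every n ≥ 1, so c = 1 works for the lower bound.

assert all(1 + n + n * n >= 1 * n * n for n in range(1, 10001))
print("1 + n + n^2 is Omega(n^2) with c = 1, k = 1")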
my question is: can we choose constants arbitrarily (without a specific or fixed reason or without being
bound by any rule or limitation)? for big-omega, we chose c=1, so the lower bound became n². for big-oh,
we chose c = 3, making n² the upper bound. what's the catch here? am i missing something in my
understanding?
point to remember:
the actual work in proving big-oh (O) or big-omega (Ω) is to find a constant 'c' (along with a threshold
'k') that makes the inequality hold for sufficiently large n.
example (ii):
we are given
f(n)=n+logn
such that:
f(n) ≥ c⋅ g(n) for all n ≥ k
thus, we have:
f(n): n+logn
c: 1
g(n): n
Ω(n)
for example:
- if n = 10⁶, then log n ≈ 20, which is tiny compared to 10⁶.
- this means n+logn is practically the same as n for large n, confirming the lower bound Ω(n).
thus, n+logn is asymptotically lower-bounded by Ω(n).
why is f(n)=Ω(1)?
- f(n) = 2¹⁰⁰ is a constant value (a fixed number).
- it does not depend on n or grow as n increases.
- in big-omega notation, a constant function is always considered Ω(1).
- this inequality is true for any c ≤ 2¹⁰⁰, because 2¹⁰⁰ is a constant, and it will always be greater than
or equal to c·1 as long as c is a constant value less than or equal to 2¹⁰⁰.
point to remember:
when i say f ≥ c·g, then f is Ω(g)
bigger functions are in the order of smaller functions
some own doubts: summarising the concepts of why we use notations, understanding of analysis and
specially trying to gain the understanding of constants.
doubt(ii) can we choose constants arbitrarily (without a specific or fixed reason or without being bound by
any rule or limitation)?
constants like c are not completely arbitrary, but they are flexible within certain limits.
(b) in O(1):
- the constant c must satisfy:
2¹⁰⁰ ≤ c·1 (i.e. c ≥ 2¹⁰⁰)
- in O:
○ we aim to prove an upper bound.
the difference in c reflects the nature of the bounds. c is chosen based on whether we are proving a lower
bound (Ω) or an upper bound (O).
summary:
what’s the "catch"?
the catch is that constants must satisfy the inequality being proved.
- constants are not completely arbitrary, as they depend on the relationship between f(n) and g(n).
- we only need one valid constant c to prove the inequality. other valid constants might exist, but once we
find one, the proof is complete.
final example:
big-O: f(n) ≤ c·g(n) for all n ≥ k (c, k > 0), with f(n) = 1 + n + n²
big-Ω: f(n) ≥ c·g(n) for all n ≥ k (c, k > 0), with f(n) = 1 + n + n²
(b) removes ambiguity: theta tells us that g(n) perfectly captures f(n)'s growth, not faster or slower.
(c) asymptotic tightness: it focuses on the exact growth for large n, ignoring smaller terms.
in this context, the "set" refers to a collection or group of functions that satisfy a particular
condition.
Θ(g(n)) = { f(n) : ∃ c₁ > 0, c₂ > 0 and n₀(k) > 0 such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀(k) }
for Θ(g(n)), the set contains all functions f(n) that are asymptotically tightly bound by
g(n), meaning f(n) neither grows significantly faster nor slower than g(n) for large n.
f(n) is bounded both above and below by constant multiples of g(n) for sufficiently large n.
the set Θ(g(n)) includes all functions f(n) that satisfy the upper and lower bounds defined
by c1 and c2.
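a quick check (python) that the constants from the two earlier proofs sandwich the function: with c₁ = 1 and c₂ = 3 we get n² ≤ 1 + n + n² ≤ 3n² for all n ≥ 1, i.e. 1 + n + n² is Θ(n²).

assert all(n * n <= 1 + n + n * n <= 3 * n * n for n in range(1, 10001))
print("1 + n + n^2 is Theta(n^2) with c1 = 1, c2 = 3, k = 1")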