DAA 1st Unit Notes
UNIT 1: Introduction
[ What is an Algorithm? Fundamentals of Algorithmic problem solving,
Fundamentals of the Analysis of Algorithm Efficiency, Analysis Framework,
Measuring the input size, Units for measuring Running time, Orders of Growth,
Worst-case, Best-case and Average-case efficiencies.
Asymptotic Notations and Basic Efficiency classes, Informal Introduction, O
notation, Ω-notation, θ-notation, mathematical analysis of non-recursive
algorithms, mathematical analysis of recursive algorithms.]
What is an Algorithm?
• The term "algorithm" comes from the Persian mathematician al-Khwarizmi
(c. 825 AD). According to Webster's dictionary, an algorithm is a special
method for representing the procedure to solve a given problem.
Or
• An algorithm is any well-defined computational procedure that takes some
value or set of values as input and produces some value or set of values
as output. Thus an algorithm is a sequence of computational steps that
transforms the input into the output.
Fundamentals of Algorithmic problem solving
• Understand the problem: This is a very crucial phase. Any mistake in this
step makes the entire algorithm wrong, so the first task in designing an
algorithm is to understand the problem.
• Solution as an algorithm: Solve the problem exactly if possible. Some
problems are solvable by an exact method, but the exact method may be too
slow; in that situation we use an approximation method.
• Algorithm design techniques: Here we choose among different design
techniques such as i) Divide-and-conquer ii) Greedy method iii) Dynamic
programming iv) Backtracking v) Branch and bound, etc.
• Prove correctness: Once the algorithm has been specified, we next have to
prove its correctness. Usually testing is used for proving correctness. Any
algorithm must satisfy the following criteria: Input: zero or more inputs.
Output: at least one output. Finiteness: terminates after a finite number of
steps. Definiteness: every step is clear and unambiguous. Effectiveness:
every step can actually be carried out.
• Analyze an algorithm: Analyzing an algorithm means studying the
algorithm's behavior, i.e., calculating its time complexity and space
complexity. If the time complexity of the algorithm is too high, we try
another design technique so that the time complexity is minimized.
• Coding an algorithm: After all phases are completed successfully, we code
the algorithm. The specification should not depend on any programming
language; we use a general notation (pseudo-code) and English-language
statements. Ultimately algorithms are implemented as computer programs.
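The five criteria listed above can be illustrated with a short function. This is a sketch of my own (the name find_max and the sample list are not from the notes):

```python
def find_max(values):
    # Input: one input -- a non-empty list of numbers.
    largest = values[0]
    for v in values[1:]:      # Finiteness: the loop runs len(values)-1 times.
        if v > largest:       # Definiteness: each comparison is unambiguous.
            largest = v       # Effectiveness: assignment is a basic, feasible step.
    return largest            # Output: exactly one output.

print(find_max([3, 7, 2]))   # 7
```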
Space Complexity
The space complexity of an algorithm is the amount of memory it
needs to run to completion.
The space complexity of any algorithm P is given by S(P) = C + SP(I), where
C is a constant.
• Fixed space requirements (C)
Independent of the characteristics of the inputs and outputs.
Includes the instruction space and space for simple variables, fixed-size
structured variables, and constants.
• Variable space requirements (SP(I))
Depend on the instance characteristic I: the number, size, and values of the
inputs and outputs associated with I.
Also include recursive stack space, formal parameters, local variables, and
the return address.
Example:
Algorithm: iterative function to sum a list of numbers

Algorithm sum( list[ ], n)
{
    tempsum := 0;
    for i := 0 to n do
        tempsum := tempsum + list[i];
    return tempsum;
}
In the above example, list[ ] is dependent on n, hence SP(I) = n. The
remaining variables i, n, and tempsum each require one location; hence
S(P) = 3 + n.
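A Python rendering of the same function, as a sketch only: Python lists and integers are not single memory locations, so the counts in the comments are illustrative, mirroring the S(P) = 3 + n accounting above.

```python
def list_sum(values):
    # values contributes the variable part: SP(I) = n locations.
    tempsum = 0                     # one fixed location
    n = len(values)                 # one fixed location
    for i in range(n):              # i: one more fixed location
        tempsum += values[i]
    return tempsum                  # total: S(P) = 3 + n

print(list_sum([1, 2, 3, 4]))      # 10
```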
Time Complexity
The time complexity of an algorithm is the amount of computer time it needs to
run to completion.
The time T(P) taken by a program P is the sum of the compile time and the run
(or execution) time. The compile time does not depend on the instance
characteristics.
T(P) = C + TP(I)
It is a combination of:
- Compile time (C), independent of instance characteristics
- Run (execution) time TP(I), dependent on instance characteristics
Time complexity is calculated in terms of program steps, as it is difficult to
know the complexities of individual operations.
Tabular method for computing time complexity:
Complexity is determined by using a table which includes steps per execution
(s/e), i.e., the amount by which the count changes as a result of the
execution of the statement.
Frequency: the number of times a statement is executed.
Statement                            s/e   Frequency   Total steps
------------------------------------------------------------------
Algorithm sum( list[ ], n)            0        -            0
{                                     0        -            0
  tempsum := 0;                       1        1            1
  for i := 0 to n do                  1       n+1          n+1
    tempsum := tempsum + list[i];     1        n            n
  return tempsum;                     1        1            1
}                                     0        0            0
------------------------------------------------------------------
Total                                                      2n+3
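The table's total can be checked by adding up the step counts directly, exactly as the tabular method prescribes (a sketch; the function name is my own):

```python
def tabular_step_count(n):
    # Reproduce the table: 1 (tempsum := 0) + (n+1) loop tests
    # + n loop-body executions + 1 (return) = 2n + 3 steps.
    steps = 1          # tempsum := 0
    steps += n + 1     # the for-statement itself executes n+1 times
    steps += n         # the loop body executes n times
    steps += 1         # return tempsum
    return steps

print(tabular_step_count(5))   # 13, i.e. 2*5 + 3
```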
Analysis Framework
Measuring an Input’s Size
It is observed that almost all algorithms run longer on larger inputs. For example, it
takes longer to sort larger arrays, multiply larger matrices, and so on. Therefore, it
is logical to investigate an algorithm's efficiency as a function of some parameter n
indicating the algorithm's input size. There are situations where the choice
of a parameter indicating the input size does matter. The choice of an
appropriate size metric can be influenced by the operations of the algorithm
in question. For example,
how should we measure an input's size for a spell-checking algorithm? If the
algorithm examines individual characters of its input, then we should measure the
size by the number of characters; if it works by processing words, we should count
their number in the input.
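The two candidate size metrics for the spell-checking example can be computed like this (a sketch; the sample sentence is my own):

```python
text = "speling is hard"

# Character-based size metric: appropriate if the algorithm
# examines individual characters of its input.
size_by_chars = len(text)

# Word-based size metric: appropriate if the algorithm
# works by processing whole words.
size_by_words = len(text.split())

print(size_by_chars, size_by_words)   # 15 3
```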
Units for Measuring Running time
To measure an algorithm's efficiency, we would like to have a metric that does
not depend on extraneous factors such as the speed of a particular computer.
One possible approach is to count the number of times each of the algorithm's
operations is executed. This approach is both excessively difficult and, as
we shall see, usually unnecessary. The thing to do is to
identify the most important operation of the algorithm, called the basic operation,
the operation contributing the most to the total running time, and compute the
number of times the basic operation is executed. For example, most sorting
algorithms work by comparing elements (keys) of a list being sorted with each
other; for such algorithms, the basic operation is a key comparison. As another
example, algorithms for matrix multiplication and polynomial evaluation require
two arithmetic operations: multiplication and addition. Let cop be the execution
time of an algorithm's basic operation on a particular computer, and let C(n) be the
number of times this operation needs to be executed for this algorithm.
Then we can estimate the running time T(n) of a program implementing this
algorithm on that computer by the formula:
T(n) ≈ cop · C(n)
Unless n is extremely large or very small, the formula can give a reasonable
estimate of the algorithm's running time. It is for these reasons that the
efficiency analysis framework ignores multiplicative constants and
concentrates on the count's order of growth to within a constant multiple for
large-size inputs.
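To make the formula concrete, the sketch below counts the basic operation (key comparisons) of selection sort to obtain C(n), then estimates T(n) for an assumed cop; the value 1e-8 seconds per comparison is purely illustrative, not from the notes.

```python
def selection_sort_comparisons(a):
    # Sort a copy of the list while counting the basic operation:
    # comparisons between keys of the list.
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1           # one execution of the basic operation
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons

sorted_a, c_n = selection_sort_comparisons([5, 3, 8, 1])
c_op = 1e-8            # assumed time per comparison, in seconds
print(c_n)             # 6 comparisons for n = 4, i.e. n(n-1)/2
print(c_op * c_n)      # estimated running time via T(n) ≈ cop · C(n)
```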
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running
time complexity of an algorithm.
Ο Notation
Ω Notation
θ Notation
Big Oh notation: Ο
Definition
A function f(n) = O(g(n)) (read as "f of n is big oh of g of n")
if there exist positive constants C and n0 such that
f(n) ≤ C · g(n) for all n ≥ n0.
The value g(n) is an upper bound on f(n).
Example:
Consider the following f(n) and g(n)... f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then there must exist constants
C > 0 and n0 ≥ 1 such that f(n) ≤ C · g(n) for all n ≥ n0:
⇒ 3n + 2 ≤ C · n
The condition holds for C = 4 and n ≥ 2. By using Big-Oh notation we can
represent the time complexity as follows:
3n + 2 = O(n), since
3n + 2 ≤ 4n for all n ≥ 2
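The claimed constants can be checked numerically over a sample range (a quick sketch):

```python
# Verify 3n + 2 <= 4n for all n >= 2 over a sample range,
# witnessing 3n + 2 = O(n) with C = 4, n0 = 2.
C, n0 = 4, 2
holds = all(3 * n + 2 <= C * n for n in range(n0, 1000))
print(holds)                 # True

# The bound genuinely needs n0 = 2: it fails at n = 1 (5 > 4).
print(3 * 1 + 2 <= C * 1)    # False
```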
Omega notation: Ω
A function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there
exist positive constants C and n0 such that f(n) ≥ C · g(n) for all n ≥ n0.
The value g(n) is a lower bound on f(n).
Example:
3n+2=Ω (n) as
3n+2 ≥3n for all n≥1
Consider the following f(n) and g(n)... f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then there must exist constants
C > 0 and n0 ≥ 1 such that f(n) ≥ C · g(n) for all n ≥ n0:
⇒ 3n + 2 ≥ C · n
The condition holds for C = 3 and n ≥ 1.
By using Big-Omega notation we can represent the time complexity as follows:
3n + 2 = Ω(n)
Theta notation: θ
A function f(n) = θ(g(n)) (read as "f of n is theta of g of n") if there
exist positive constants c1, c2, and n0 such that
c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0.
Example:
3n + 2 = θ(n) as
3n + 2 ≥ 3n for all n ≥ 2
3n + 2 ≤ 4n for all n ≥ 2
Here c1 = 3, c2 = 4, and n0 = 2.
Consider the following f(n) and g(n)... f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), then there must exist constants
c1, c2 > 0 and n0 ≥ 1 such that c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0:
⇒ c1 · n ≤ 3n + 2 ≤ c2 · n
The condition holds for c1 = 3, c2 = 4, and n ≥ 2.
By using Big-Theta notation we can represent the time complexity as follows:
3n + 2 = Θ(n)
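As with Big-Oh, the two-sided Theta bound can be checked numerically over a sample range (a quick sketch):

```python
# Verify c1*n <= 3n + 2 <= c2*n for all n >= 2 over a sample range,
# witnessing 3n + 2 = Theta(n) with c1 = 3, c2 = 4, n0 = 2.
c1, c2, n0 = 3, 4, 2
holds = all(c1 * n <= 3 * n + 2 <= c2 * n for n in range(n0, 1000))
print(holds)   # True
```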