SOU - Lecture Handout - ADA - Unit-1
1.1. Algorithm
An algorithm is any well-defined computational procedure that takes some values or set of
values as input and produces some values or set of values as output.
An algorithm is a sequence of computational steps that transform the input into the output.
An algorithm is a set of rules for carrying out calculations either by hand or on a machine.
Every algorithm must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied to the algorithm as input.
2. Output: By using the inputs which are externally supplied, the algorithm produces at least
one quantity as output.
3. Definiteness: The instructions used in the algorithm specify one or more operations.
These operations must be clear and unambiguous. This implies that each of these
operations must be definite; clearly specifying what is to be done.
4. Finiteness: Algorithm must terminate after some finite number of steps for all cases.
5. Effectiveness: The instructions used to accomplish the task must be basic, i.e., a person
must be able to trace each instruction using only paper and pencil.
Generally, there is always more than one way to solve a problem in computer science with
different algorithms. Therefore, a method is needed to compare the solutions and judge which
one is better. The method must:
Be independent of the machine and its configuration on which the algorithm runs.
Show a direct correlation with the size of the input.
Distinguish two algorithms clearly, without ambiguity.
There are two such methods used, time complexity and space complexity which are discussed
below:
Time Complexity: The time complexity of an algorithm quantifies the amount of time taken
by an algorithm to run as a function of the length of the input. Note that the time to run is a
function of the length of the input and not the actual execution time on the machine on which
the algorithm runs.
Definition – A valid algorithm takes a finite amount of time to execute. The time required by the
algorithm to solve a given problem is called the time complexity of the algorithm. Time complexity
is a very useful measure in algorithm analysis.
It is the time needed for the completion of an algorithm. To estimate the time complexity, we
need to consider the cost of each fundamental instruction and the number of times the
instruction is executed.
C <- A + B
return C
The addition of two scalar numbers requires one addition operation, so the time complexity of this
algorithm is constant: T(n) = O(1).
Now consider the problem of checking whether an array a of n integers contains a pair of
elements whose sum equals a given value Z:
int a[n];
for (int i = 0; i < n; i++)            // outer loop: N * c
    for (int j = 0; j < n; j++)        // inner loop body: N * N * c
        if (i != j && a[i] + a[j] == Z)
            return true;
return false;                          // c
Assuming that each of the operations in the computer takes approximately constant time, let it
be c. The number of lines of code executed actually depends on the value of Z. During analysis
of the algorithm, mostly the worst-case scenario is considered, i.e., when there is no pair of
elements whose sum equals Z. In the worst case, the outer loop executes N times and the inner
loop executes N times for every iteration of the outer loop.
So the total execution time is N*c + N*N*c + c. Now ignore the lower order terms since the lower
order terms are relatively insignificant for large input, therefore only the highest order term is
taken (without constant) which is N*N in this case. Different notations are used to describe the
limiting behavior of a function; since the worst case is considered here, Big-O notation will be used
to represent the time complexity.
Hence, the time complexity is O(N²) for the above algorithm. Note that the time complexity
is based solely on the number of elements in the array, i.e., the input length, so if the length of the
array increases, the execution time also increases.
Order of growth is how the time of execution depends on the length of the input. In the above
example, it is clearly evident that the time of execution quadratically depends on the length of
the array. Order of growth will help to compute the running time with ease.
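The order of growth can also be observed empirically. The following short program is a minimal
sketch (the function name countComparisons is an illustrative choice, not part of the handout's
algorithm); it counts how many pair comparisons the quadratic check above performs for a few
input sizes:

#include <iostream>
using namespace std;

// Counts the pair comparisons made by the quadratic pair-sum check.
long long countComparisons(int n) {
    long long count = 0;
    for (int i = 0; i < n; i++)          // outer loop: runs N times
        for (int j = 0; j < n; j++)      // inner loop: runs N times per outer pass
            count++;                     // one comparison per (i, j) pair
    return count;
}

int main() {
    for (int n = 100; n <= 800; n *= 2)
        cout << "n = " << n << "  comparisons = " << countComparisons(n) << endl;
    return 0;
}

For n = 100, 200, 400 and 800 the counts are 10,000, 40,000, 160,000 and 640,000; every
doubling of the input multiplies the work by four, which is exactly the quadratic order of growth
described above.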
Space Complexity:
Definition – Problem-solving using a computer requires memory to hold temporary data or the final
result while the program is in execution. The amount of memory required by the algorithm to
solve a given problem is called the space complexity of the algorithm.
The space complexity of an algorithm quantifies the amount of space taken by an algorithm to
run as a function of the length of the input. Consider, as an example, the problem of finding
the frequency of array elements (analyzed below). The memory required by an algorithm has
two parts:
1) A fixed part: It is independent of the input size. It includes memory for instructions
(code), constants, variables, etc.
2) A variable part: It is dependent on the input size. It includes memory for recursion
stack, referenced variables, etc.
C <- A + B
return C
The addition of two scalar numbers requires one extra memory location to hold the result. Thus
the space complexity of this algorithm is constant: S(n) = O(1).
int freq[n];          // frequency counters, assumed initialized to 0
int a[n];
for (int i = 0; i < n; i++)
    cin >> a[i];
for (int i = 0; i < n; i++)
    freq[a[i]]++;     // assumes each a[i] lies in the range 0 .. n-1
Here two arrays of length N and a variable i are used in the algorithm, so the total space used is
N * c + N * c + 1 * c = 2N * c + c, where c is the space taken by one unit. For large inputs the constant c
is insignificant, and it can be said that the space complexity is O(N).
There is also auxiliary space, which is different from space complexity. The difference is that
while space complexity quantifies the total space used by the algorithm, auxiliary space
quantifies only the extra space used by the algorithm apart from the given input. In the above
example, the auxiliary space is the space used by the freq[] array and the variable i, because they
are not part of the given input. So the total auxiliary space is N * c + c, which is still O(N).
In computer science, the analysis of algorithms is the process of finding the computational
complexity of algorithms—the amount of time, storage, or other resources needed to execute
them. Usually, this involves determining a function that relates the size of an algorithm's input
to the number of steps it takes (its time complexity) or the number of storage locations it uses
(its space complexity). An algorithm is said to be efficient when this function's values are small,
or grow slowly compared to a growth in the size of the input. Different inputs of the same size
may cause the algorithm to have different behavior, so best, worst and average case
descriptions might all be of practical interest. When not otherwise specified, the function
describing the performance of an algorithm is usually an upper bound, determined from the
worst case inputs to the algorithm.
The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an
important part of a broader computational complexity theory, which provides theoretical
estimates for the resources needed by any algorithm which solves a given computational
problem. These estimates provide an insight into reasonable directions of search for efficient
algorithms.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually
require certain assumptions concerning the particular implementation of the algorithm, called a
model of computation. A model of computation may be defined in terms of an abstract
computer, e.g. Turing machine, and/or by postulating that certain operations are executed in
unit time. For example, if the sorted list to which we apply binary search has n elements, and
we can guarantee that each lookup of an element in the list can be done in unit time, then at
most log2(n) + 1 time units are needed to return an answer.
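As a sketch of the binary search referred to above (the function name binarySearch and the use of
a vector are illustrative assumptions, not prescribed by the text), each iteration halves the
remaining portion of the sorted list, so at most about log2(n) + 1 lookups are made:

#include <vector>
using namespace std;

// Returns the index of key in the sorted vector a, or -1 if it is absent.
// Each iteration halves the remaining range, so at most about log2(n) + 1
// comparisons are performed for a list of n elements.
int binarySearch(const vector<int>& a, int key) {
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}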
4.1. Set
The number of elements in a set is called cardinality or size of the set, denoted |S| or
sometimes n(S).
Two sets have the same cardinality if their elements can be put into a one-to-one
correspondence. The cardinality of the empty set is zero, i.e., |∅| = 0.
4.3. Multiset
If we do want to take the number of occurrences of members into account, we call the
group a multiset.
For example, {7} and {7, 7} are identical as sets, but {7} and {7, 7} are different as multisets.
A set that contains infinitely many elements is called an infinite set. For example, the set of
negative integers, the set of integers, etc.
4.6. Subset
For two sets A and B, we say that A is a subset of B, written A ⊆ B, if every member of set
A is also a member of set B.
Formally, A ⊆ B if x ∈ A implies x ∈ B.
4.7. Proper Subset
For two sets A and B, we say that A is a proper subset of B, written A ⊂ B, if A ⊆ B and A ≠ B.
The sets A and B are equal, written A = B, if each is a subset of the other.
Let A be a set. The power set of A, written P(A) or 2^A, is the set of all subsets of A. That is,
P(A) = {B : B ⊆ A}.
The union of A and B, written A∪B, is the set we get by combining all elements in A and
B into a single set.
The intersection of sets A and B, written A ∩ B, is the set of elements that are both in A
and in B. That is, A ∩ B = {x : x ∈ A and x ∈ B}.
All sets under consideration are subsets of some larger set U called the universal set.
Given a universal set U, the complement of A, written A', is the set of all elements under
consideration that are not in A.
4.16. Sequences
A sequence of objects is a list of objects in some order. For example, the sequence 7, 21,
57 would be written as (7, 21, 57). In a set the order does not matter but in a sequence it
does.
Repetition is not permitted in a set but repetition is permitted in a sequence. So, (7, 7, 21,
57) is different from {7, 21, 57}.
5.1. Relation
Let X and Y be two sets. Any subset ρ of their Cartesian product X × Y is a relation.
A relationship between two sets of numbers is known as a function; a function is a special kind
of relation. Note that a function maps each value to one value only: two values in one set may
map to the same value, but one value must never map to two values.
5.2. Function
A relation f ⊆ X × Y is called a function if for each x ∈ X there exists one and only one y ∈ Y such
that (x, y) ∈ f.
The set X is called the domain of the function, the set Y is its co-domain, and the set
f(X) = {f(x) | x ∈ X} is its range.
f(x) = x³
f(-1) = (-1)³ = -1
f(2) = (2)³ = 8
f(3) = (3)³ = 27
In general we can say that a relation is any subset of the Cartesian product of its domain
and co-domain.
A function maps each value of its domain to exactly one value of its co-domain, while a relation
may map one value of the domain to more than one value of the co-domain.
Hence every function is also a relation, but not every relation is a function.
A relation may satisfy some special properties, namely reflexive, symmetric, transitive, and
anti-symmetric.
Reflexive:
When x R x is true for all values of x, the relation R is reflexive. The equality (=) relation is
reflexive, since x = x for every x.
Symmetric:
When x R y → y R x is true for all values of x and y, the relation R is symmetric. The
equality (=) relation is also symmetric.
Transitive:
When, for all values of x, y and z, x R y and y R z imply x R z, the relation R is transitive.
For example, if x > y and y > z, then x > z; i.e., if x is greater than y and y is greater than z,
then x is also greater than z.
Anti-symmetric:
When, for all values of x and y, x R y and y R x imply x = y, the relation R is anti-symmetric.
The anti-symmetric and symmetric properties look similar, but they are different.
E.g. consider the relation greater than or equal to (≥): if x ≥ y and y ≥ x, then we can conclude
that y = x.
Formally, a relation R on a set X is anti-symmetric if and only if, for all x, y ∈ X, (x, y) ∈ R and
(y, x) ∈ R imply x = y.
Equivalence Relation:
A relation is called an equivalence relation only when it satisfies all of the following
properties: it must be reflexive, symmetric, and transitive.
E.g. the equality '=' relation is an equivalence relation because it satisfies the above
conditions, i.e., it is reflexive, symmetric and transitive.
Reflexive: x = x is true for all values of x, so '=' is reflexive.
Symmetric: if x = y then y = x for all values of x and y, so '=' is symmetric.
Transitive: if x = y and y = z, then x = z; thus '=' is transitive.
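These definitions can be checked mechanically for a relation on a small finite set. The sketch
below is illustrative (the chosen relation, congruence modulo 2, and the function names are
assumptions, not from the handout): it stores the relation as a boolean matrix R and tests the
three properties directly from their definitions.

#include <iostream>
using namespace std;

const int N = 4;                 // the underlying set is {0, 1, 2, 3}
bool R[N][N];                    // R[x][y] is true when x R y

bool isReflexive() {
    for (int x = 0; x < N; x++)
        if (!R[x][x]) return false;
    return true;
}
bool isSymmetric() {
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            if (R[x][y] && !R[y][x]) return false;
    return true;
}
bool isTransitive() {
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            for (int z = 0; z < N; z++)
                if (R[x][y] && R[y][z] && !R[x][z]) return false;
    return true;
}

int main() {
    // Congruence modulo 2: x R y when x and y have the same parity.
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            R[x][y] = (x % 2 == y % 2);
    cout << boolalpha
         << "reflexive: "  << isReflexive()  << "\n"
         << "symmetric: "  << isSymmetric()  << "\n"
         << "transitive: " << isTransitive() << "\n";
    return 0;
}

All three tests print true, so congruence modulo 2 is an equivalence relation.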
A vector u is an ordered list of numbers, written u = (u1, u2, . . . , un), where the ui are called
the components of u. If all the ui are zero, i.e., ui = 0 for every i, then u is called the zero vector.
Two vectors u and v are equal, i.e., u = v, if they have the same number of components
and the corresponding components are equal.
If two vectors, u and v, have the same number of components, their sum, u + v, is the vector
obtained by adding the corresponding components of u and v.
The product of a scalar k and a vector u, i.e., ku, is the vector obtained by multiplying each
component of u by k: ku = (ku1, ku2, . . . , kun).
It is not difficult to see that k(u + v) = ku + kv, where k is a scalar and u and v are vectors.
The dot product or inner product of vectors u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn) is
denoted by u·v and defined by
u·v = u1v1 + u2v2 + . . . + unvn
The norm or length of u, written ||u||, is defined by
||u|| = √(u·u) = √(u1² + u2² + . . . + un²)
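A minimal sketch of the dot product and norm in code (the function names dot and norm are
illustrative choices):

#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

// Dot product of two vectors with the same number of components.
double dot(const vector<double>& u, const vector<double>& v) {
    double sum = 0.0;
    for (size_t i = 0; i < u.size(); i++)
        sum += u[i] * v[i];
    return sum;
}

// Norm (length) of a vector: ||u|| = sqrt(u . u).
double norm(const vector<double>& u) {
    return sqrt(dot(u, u));
}

int main() {
    vector<double> u = {1, 2, 2};
    vector<double> v = {3, 0, 4};
    cout << "u.v = "   << dot(u, v) << endl;   // 1*3 + 2*0 + 2*4 = 11
    cout << "||u|| = " << norm(u)   << endl;   // sqrt(1 + 4 + 4) = 3
    return 0;
}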
6.4. Matrices
An m × n matrix A is a rectangular array of numbers arranged in m rows and n columns. The m
horizontal n-tuples are called the rows of A, and the n vertical m-tuples its columns.
Note that the element aij, called the ij-entry, appears in the ith row and the jth column.
A matrix whose entries are all zero is called a zero matrix and denoted by 0.
Let A and B be two matrices of the same size. The sum of A and B is written as A + B and
obtained by adding corresponding elements from A and B.
Let A, B, and C be matrices of the same size and let k and l be two scalars. Then
i. (A + B) + C = A + (B + C)
ii. A + B = B + A
v. k(A + B) = kA + kB
vi. (k + l)A = kA + lA
vii. (kl)A = k(lA)
viii. 1A = A
Suppose A and B are two matrices such that the number of columns of A is equal to the number
of rows of B. Say matrix A is an m×p matrix and matrix B is a p×n matrix. Then the
product of A and B is the m×n matrix whose ij-entry is obtained by multiplying the
elements of the ith row of A by the corresponding elements of the jth column of B and then
adding them.
It is important to note that if the number of columns of A is not equal to the number of
rows of B, then the product AB is not defined.
i. (AB)C = A(BC)
ii. A(B+C) = AB + AC
iii. (B+C) A = BA + CA
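The multiplication rule above translates directly into three nested loops. The sketch below is
illustrative (the Matrix typedef and the function name multiply are assumptions, and the check
that A has as many columns as B has rows is omitted for brevity):

#include <iostream>
#include <vector>
using namespace std;
typedef vector<vector<double>> Matrix;

// Multiply an m x p matrix A by a p x n matrix B, giving an m x n matrix.
// The ij-entry is the sum of products of the ith row of A and the jth column of B.
Matrix multiply(const Matrix& A, const Matrix& B) {
    size_t m = A.size(), p = B.size(), n = B[0].size();
    Matrix C(m, vector<double>(n, 0.0));
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < p; k++)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

int main() {
    Matrix A = {{1, 2, 3},
                {4, 5, 6}};          // 2 x 3
    Matrix B = {{7, 8},
                {9, 10},
                {11, 12}};           // 3 x 2
    Matrix C = multiply(A, B);       // 2 x 2 result: 58 64 / 139 154
    for (auto& row : C) {
        for (double x : row) cout << x << " ";
        cout << endl;
    }
    return 0;
}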
6.11. Transpose
The transpose of a matrix A is obtained by writing the rows of A, in order, as columns, and is
denoted by Aᵀ. In other words, if A = (aij), then B = (bij) is the transpose of A if bij = aji for
all i and j.
For example, if A is the 2×3 matrix with rows (1, 2, 3) and (4, 5, 6), then Aᵀ is the 3×2 matrix
with rows (1, 4), (2, 5) and (3, 6).
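The same example in code, as a minimal sketch (the transpose function shown is illustrative):

#include <iostream>
#include <vector>
using namespace std;
typedef vector<vector<double>> Matrix;

// Transpose: bij = aji, i.e. the rows of A become the columns of the result.
Matrix transpose(const Matrix& A) {
    size_t m = A.size(), n = A[0].size();
    Matrix B(n, vector<double>(m));
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++)
            B[j][i] = A[i][j];
    return B;
}

int main() {
    Matrix A = {{1, 2, 3}, {4, 5, 6}};   // 2 x 3
    Matrix T = transpose(A);             // 3 x 2: rows (1,4), (2,5), (3,6)
    for (auto& row : T) {
        for (double x : row) cout << x << " ";
        cout << endl;
    }
    return 0;
}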
If the number of rows and the number of columns of a matrix are the same, we say the matrix
is a square matrix, i.e., a square matrix has the same number of rows and columns. A square
matrix with n rows and n columns is said to be of order n and is called an n-square matrix.
The main diagonal, or simply diagonal, of an n-square matrix A = (aij) consists of the
elements a11, a22, a33, . . . , ann.
The n-square matrix with 1's along the main diagonal and 0's elsewhere is called the unit
matrix and usually denoted by I.
The unit matrix plays the same role in matrix multiplication as the number 1 does in the
usual multiplication of numbers.
7.1. Inequalities
The term inequality is applied to any statement involving one of the symbols <, >, ≤, ≥.
i. x≥1
ii. x + y + 2z > 16
iii. p² + q² ≤ 1/2
iv. a² + ab > 1
By a solution of the one-variable inequality 2x + 3 ≤ 7 we mean any number which, when
substituted for x, yields a true statement.
For example, 1 is a solution of 2x + 3 ≤ 7 since 2(1) + 3 = 5 and 5 is less than or equal to 7.
By a solution of the two-variable inequality x - y ≤ 5 we mean any ordered pair of numbers
which, when substituted for x and y respectively, yields a true statement.
A solution of an inequality is said to satisfy the inequality. For example, (2, 1) satisfies
x - y ≤ 5 since 2 - 1 = 1, and 1 ≤ 5.
One Unknown
A linear equation in one unknown can always be put into the standard form
ax = b
Where x is an unknown and a and b are constants. If a is not equal to zero, this
equation has a unique solution
x = b/a
Two Unknowns
A linear equation in two unknowns, x and y, can be put into the form
ax + by = c
Where x and y are the two unknowns and a, b, c are real numbers. Also, we assume that
a and b are not zero.
A solution of the equation consists of a pair of numbers, u = (k1, k2), which satisfies the
equation ax + by = c.
Mathematically speaking, a solution consists of u = (k1, k2) such that ak1 + bk2 = c.
Solution of the equation can be found by assigning arbitrary values to x and solving for y
or assigning arbitrary values to y and solving for x.
Now consider a system of two linear equations in the two unknowns x and y:
a1x + b1y = c1
a2x + b2y = c2
Where a1, a2, b1, b2 are not zero. A pair of numbers which satisfies both equations is
called a simultaneous solution of the given equations or a solution of the system of
equations.
1. If the system has exactly one solution, the graphs of the linear equations intersect in
one point.
2. If the system has no solutions, the graphs of the linear equations are parallel.
3. If the system has an infinite number of solutions, the graphs of the linear equations
coincide.
The special cases (2) and (3) can only occur when the coefficients of x and y in the two
linear equations are proportional.
The solution of the following system can be obtained by the elimination process, whereby we
reduce the system to a single equation in only one unknown:
a1x + b1y = c1
a2x + b2y = c2
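A minimal sketch of this elimination process in code (the function name solveSystem is an
illustrative choice; it also reports the special case where the coefficients are proportional, so
that no unique solution exists):

#include <iostream>
using namespace std;

// Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.
// Multiplying the first equation by b2 and the second by b1 and subtracting
// eliminates y:  (a1*b2 - a2*b1) * x = c1*b2 - c2*b1.
// Eliminating x the same way gives  (a1*b2 - a2*b1) * y = a1*c2 - a2*c1.
// Returns false when a1*b2 - a2*b1 = 0, i.e. the coefficients are proportional
// (the lines are parallel or coincide), so there is no unique solution.
bool solveSystem(double a1, double b1, double c1,
                 double a2, double b2, double c2,
                 double& x, double& y) {
    double det = a1 * b2 - a2 * b1;
    if (det == 0)
        return false;
    x = (c1 * b2 - c2 * b1) / det;
    y = (a1 * c2 - a2 * c1) / det;
    return true;
}

int main() {
    double x, y;
    // Example: x + y = 3 and 2x - y = 0 give x = 1, y = 2.
    if (solveSystem(1, 1, 3, 2, -1, 0, x, y))
        cout << "x = " << x << ", y = " << y << endl;
    else
        cout << "no unique solution" << endl;
    return 0;
}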