Module 1
ALGORITHM:
The word “Algorithm” comes from the name of the ninth-century Persian author
Abdullah Jafar Muhammad ibn Musa Al-Khowarizmi. An algorithm must satisfy the following criteria:
Input: there are zero or more quantities, which are externally supplied;
Output: at least one quantity is produced;
Definiteness: each instruction must be clear and unambiguous;
Finiteness: if we trace out the instructions of an algorithm, then for all cases the
algorithm will terminate after a finite number of steps;
Effectiveness: every instruction must be sufficiently basic that it can be carried out by
a person using only pencil and paper. It is not enough that each operation be definite,
but it must also be feasible.
DESIGN OF ALGORITHM: The study of algorithms includes many important and active
areas of research. The main steps in designing an algorithm are:
Understanding the problem:
o This is the very first step in designing an algorithm. First of all, the
problem statement must be understood completely by reading the problem
description carefully.
o After that, find out what inputs are necessary for solving the problem.
o An input to the algorithm is called an instance of the problem.
o It is very important to decide the range of inputs so that the boundary values of
the algorithm get fixed. The algorithm should work correctly for all valid inputs.
Decision Making:
o After finding the required input set for the given problem, we have to analyze the
input and decide certain issues such as:
o Capabilities of computational devices:
It is necessary to know the computational capabilities of the devices
on which the algorithm will be running, i.e., whether a sequential or
parallel algorithm is appropriate.
Complex problems require a huge amount of memory and more
execution time. For solving such problems it is essential to make a
proper choice of a computational device which is space and time
efficient.
o Choice for either exact or approximate problem solving method:
The next important decision is whether the problem is to
be solved exactly or approximately.
If the problem needs to be solved exactly then we need an exact
algorithm. Otherwise, if the problem is so complex that an exact
solution is impractical, we need an approximation algorithm.
(Example: Travelling Salesperson Problem)
o Data Structures:
Data structures and algorithms work together and are
interdependent.
Hence the choice of a proper data structure is required before
designing the actual algorithm.
o Algorithmic Strategies:
It is a general approach by which many problems can be solved
algorithmically.
Algorithmic strategies are also called algorithmic techniques
or algorithmic paradigms.
Algorithm Design Techniques:
Brute Force: This is a straightforward technique with a naïve
approach.
Divide-and-Conquer: The problem is divided into smaller
instances, which are solved and combined.
Dynamic Programming: The results of smaller, recurring
instances are stored and reused to solve the problem.
Greedy Technique: To solve the problem, locally optimal
decisions are made.
Backtracking: This method is based on trial and error.
Specification of Algorithm:
o An algorithm can be specified in natural language, as a flowchart, or in
pseudo code.
Algorithm Verification:
o Algorithm verification means checking the correctness of an algorithm, that is,
checking whether the algorithm gives the correct output in a finite amount of
time for every valid set of inputs.
Analysis of algorithm: The following factors should be considered while analyzing an
algorithm:
o Time Complexity: The amount of time taken by an algorithm to run.
o Space Complexity: The amount of space taken by an algorithm to store its
variables.
o Range of Input: The design of an algorithm should be such that it handles the
whole range of valid inputs.
o Simplicity: Simplicity of an algorithm means generating a sequence of
instructions which is easy to understand.
o Generality: Generality shows that it is sometimes easier to design an
algorithm in a more general way rather than for a particular set of
inputs. (Example: GCD)
Implementation of Algorithm: The implementation of an algorithm is done using a
suitable programming language.
Testing a Program: Testing a program is an activity carried out to expose as many
errors as possible and to correct them.
o There are two phases for testing a program:
o Debugging
o Profiling
Debugging is a technique in which a sample set of data is tested to see
whether faulty results occur or not. If any faulty results occur, they are
corrected.
But the debugging technique only points out the presence of errors; any hidden
error cannot be identified.
So, we cannot verify the correctness of the output on sample data. Hence, the
profiling concept is introduced.
Profiling or Performance Measurement is the process of executing a correct
program on a sample set of data. Then the time and space required by the
program to execute is measured.
1. Time Complexity
a. Measures the number of operations an algorithm performs relative to input
size.
b. Expressed using Big O notation (O), Theta (Θ), and Omega (Ω).
c. Example:
i. Linear Search: O(n).
ii. Binary Search: O(log n).
iii. Quick Sort: O(n log n) in the average case, O(n²) in the worst case.
2. Space Complexity
a. Measures the memory consumed by an algorithm.
b. Includes input storage, auxiliary space, and function call stack usage.
c. Example:
i. Recursive algorithms often have higher space complexity due to stack
usage.
3. Asymptotic Notations
a. Big O (Upper Bound): Worst-case growth rate.
b. Theta (Θ) (Tight Bound): Average-case behavior.
c. Omega (Ω) (Lower Bound): Best-case behavior.
d. Example:
i. Insertion Sort: Worst-case O(n²), Best-case Ω(n).
4. Empirical vs. Theoretical Analysis
a. Theoretical Analysis provides asymptotic complexity without running the
program.
b. Empirical Testing measures actual runtime on specific hardware.
c. Example:
i. Measuring sorting algorithms' execution time on large datasets.
Difference between Algorithm and Pseudo code:
Pseudo code does not use specific programming-language syntax and can therefore
be understood by programmers who are familiar with different
programming languages.
Transforming an algorithm presented in pseudo code into programming code
is much easier than converting an algorithm written in natural language.
The recursive Tower of Hanoi problem can be solved iteratively using a stack.
Algorithm:
1. Push the initial problem onto a stack.
2. Use an iterative approach to move disks based on rules.
3. Keep track of movements using a loop.
COMPLEXITY
Algorithm evaluation can be done in two ways: either before execution of a program
(a priori analysis) or after execution of a program (a posteriori analysis).
Space Complexity:
Let P be an algorithm; then the total space required for the algorithm is S(P) = C + Sp,
where C is a constant giving the fixed space and Sp is the variable space, which varies
depending on the problem instance.
When we analyze the space complexity of an algorithm, we concentrate on estimating
Sp (the variable space).
Time Complexity:
Let P be an algorithm; then the total time required for the algorithm is T(P) = C + Tp,
where C is the compile time and Tp is the run time.
In the first method, we introduce a new variable, count, into the program. This is a
global variable with initial value 0.
The second method to determine the step count of an algorithm is to build a table in
which we list the total number of steps contributed by each statement.
BEST, WORST AND AVERAGE CASE WITH SIMPLE EXAMPLES
Suppose M is an algorithm, and suppose n is the size of the input data. The time
and space used by the algorithm M are the two main measures for the efficiency of M.
The time is measured by counting the number of key operations, for example, in case
of sorting and searching algorithms, the number of comparisons is the number of key
operations.
That is because key operations are so defined that the time for the other operations is
much less than or at most proportional to the time for the key operations.
The space is measured by counting the maximum amount of memory needed by the
algorithm.
The complexity of an algorithm M is the function f(n), which gives the running time
and/or storage space requirement of the algorithm in terms of the size n of the input
data.
Frequently, the storage space required by an algorithm is simply a multiple of the
data size n.
In general, the term “complexity” simply refers to the running time of the
algorithm.
There are 3 cases, in general, to find the complexity function f(n):
o Best case: The minimum value of f(n) for any possible input.
o Worst case: The maximum value of f(n) for any possible input.
o Average case: The value of f(n) which lies between the maximum and the minimum
for any possible input. Generally, the average case implies the expected value
of f(n).
To understand the Best, Worst and Average cases of an algorithm, consider a linear
array A[1….n], where the array A contains n-elements. Suppose you want either to
find the location LOC of a given element (say x) in the given array A or to send
some message, such as LOC=0, to indicate that x does not appear in A. Here the linear
search algorithm solves this problem by comparing given x, one-by-one, with each
element in A. That is, we compare x with A[1], then A[2], and so on, until we find
LOC such that x=A[LOC].
Analysis of linear search algorithm
The complexity of the search algorithm is given by the number C of comparisons between x
and array elements A[K].
Best case: Clearly the best case occurs when x is the first element in the array A. That is,
LOC = 1. In this case C(n) = 1.
Worst case: Clearly the worst case occurs when x is the last element in the array A or x is not
present in the given array A (to ensure this we have to search the entire array A till the last
element). In this case, we have C(n) = n.
Average case: Here we assume that the searched element x appears in array A, and that it is
equally likely to occur at any position in the array. Here the number of comparisons can be
any of the numbers 1, 2, 3, ..., n, and each number occurs with probability p = 1/n. Then
C(n) = 1·(1/n) + 2·(1/n) + ... + n·(1/n) = (1 + 2 + ... + n)/n = (n + 1)/2
That is, the average number of comparisons needed is approximately half the number of
elements in the array.
BINARY SEARCH
Suppose we are given a sorted list of n elements, a[1] < a[2] < ... < a[n], and we wish
to determine whether a given element x is present in the list (a successful search) or
not (an un-successful search).
In binary search we jump into the middle of the file, where we find the key a[mid],
and compare x with a[mid]. If x < a[mid], then further search is only necessary in
that part of the file which precedes a[mid]. Similarly, if x > a[mid], then further
search is only necessary in that part of the file which follows a[mid].
If we use this recursive procedure of finding the middle key a[mid] of the un-searched
portion of the file, then every un-successful comparison of x with a[mid] eliminates
roughly half the un-searched portion from consideration.
Example 1:
Index:    1   2   3   4   5    6    7    8    9    10   11   12
Elements: 4   7   8   9   16   20   24   38   39   45   54   77

Finding 20 requires 1 comparison;
8 and 39 require 2 comparisons;
4, 9, 24 and 54 require 3 comparisons; and
7, 16, 38, 45 and 77 require 4 comparisons.
Summing the comparisons needed to find all twelve items and dividing by 12 yields
37/12, or approximately 3.08 comparisons per successful search on the average.
Example 2:
Index:    0    1    2    3    4    5    6    7    8
Elements: -15  -6   0    7    9    23   54   82   101

Solution:
The search begins by comparing x with the middle element a[4] = 9, so finding 9
requires 1 comparison. Continuing in this manner, the number of element comparisons
needed to find each of the nine elements is:

Elements:     -15  -6   0    7    9    23   54   82   101
Comparisons:  3    2    3    4    1    3    2    3    4

Summing the comparisons and dividing by 9 yields 25/9, or approximately 2.77
comparisons per successful search on the average.
There are ten possible ways that an un-successful search may terminate depending
upon the value of x.
If x < a(1), a(1) < x < a(2), a(2) < x < a(3), a(5) < x < a(6), a(6) < x < a(7) or
a(7) < x < a(8), the algorithm requires 3 element comparisons to determine that x is not
present. For all of the remaining possibilities BINSRCH requires 4 element comparisons.
Thus the average number of element comparisons for an unsuccessful search is:
(3 + 3 + 3 + 4 + 4 + 3 + 3 + 3 + 4 + 4) / 10 = 34/10 = 3.4
Time Complexity:
The time complexity of binary search in a successful search is O(log n) and for an
unsuccessful search is O(log n).
A non-recursive program for binary search:
# include <stdio.h>
# include <conio.h>
main()
{
int number[25], n, data, i, flag = 0, low, high, mid;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements in ascending order: ");
for(i = 0; i < n; i++)
scanf("%d", &number[i]);
printf("\n Enter the element to be searched: ");
scanf("%d", &data);
low = 0; high = n-1;
while(low <= high)
{
mid = (low + high)/2;
if(number[mid] == data)
{
flag = 1;
break;
}
else
{
if(data < number[mid])
high = mid - 1;
else
low = mid + 1;
}
}
if(flag == 1)
printf("\n Data found at location: %d", mid + 1);
else
printf("\n Data Not Found ");
}
Quick Sort:
Quick sort is one of the most efficient sorting algorithms. It is an example of the
divide-and-conquer class of algorithms.
The quick sort algorithm partitions the original array by rearranging it into two groups.
The first group contains those elements less than some arbitrary chosen value taken
from the set, and the second group contains those elements greater than or equal to
the chosen value. The chosen value is known as the pivot element. Once the array has
been rearranged in this way with respect to the pivot, the same partitioning procedure
is recursively applied to each of the two subsets. When all the subsets have been
partitioned and rearranged, the original array is sorted.
The function partition() makes use of two pointers, up and down, which are moved
toward each other in the following fashion:
1. Repeatedly increase the pointer up until a[up] >= pivot.
2. Repeatedly decrease the pointer down until a[down] <= pivot.
3. If up < down, interchange a[down] with a[up].
4. Repeat steps 1 to 3 until the pointers cross (down <= up); then the final
position of the pivot is found, and the pivot is placed there by interchanging
it with a[down].
The program uses a recursive function quicksort(). The quick sort function works as
follows:
1. It terminates when the condition low >= high is satisfied. This condition will
be satisfied only when the array is completely sorted.
2. Otherwise it calls the partition function to find the proper position j of the
element x[low], i.e. the pivot. Then we will have two sub-arrays x[low],
x[low+1], ..., x[j-1] and x[j+1], x[j+2], ..., x[high].
3. It calls itself recursively to sort the left sub-array x[low], x[low+1], ..., x[j-1]
between positions low and j-1.
4. It calls itself recursively to sort the right sub-array x[j+1], x[j+2], ..., x[high]
between positions j+1 and high.
Algorithm
Sorts the elements a[p], . . . , a[q], which reside in the global array a[n], into
ascending order. The element a[n + 1] is considered to be defined and must be greater
than all elements in a[n]; a[n + 1] = +∞ acts as a sentinel.
quicksort (p, q)
{
    if ( p < q ) then
    {
        call j = PARTITION(a, p, q+1);  // j is the position of the partitioning element
        call quicksort(p, j - 1);
        call quicksort(j + 1, q);
    }
}

partition(a, m, p)
{
    v = a[m]; up = m; down = p;
    do
    {
        repeat
            up = up + 1;
        until (a[up] >= v);
        repeat
            down = down - 1;
        until (a[down] <= v);
        if (up < down) then
            call interchange(a, up, down);
    } while (up < down);
    a[m] = a[down];
    a[down] = v;
    return (down);
}

interchange(a, up, down)
{
    p = a[up];
    a[up] = a[down];
    a[down] = p;
}
Example:
Let us consider an example with 13 elements to analyze quick sort. In each pass the
first element of the current sub-array is taken as the pivot; up is advanced until it
reaches an element >= pivot and down is retreated until it reaches an element <= pivot,
and if up < down the two elements are swapped. When the pointers cross, the pivot is
interchanged with a[down], and the two resulting sub-arrays are partitioned
recursively in the same way.
[Step-by-step partitioning trace table omitted.]
After all the passes, the array is fully sorted:
02 04 06 08 16 24 38 45 56 57 58 70 79
# include <stdio.h>
# include <conio.h>

int array[25];

void interchange(int up, int down)
{
    int temp = array[up];
    array[up] = array[down];
    array[down] = temp;
}

int partition(int low, int high)
{
    int pivot = array[low];
    int up = low, down = high + 1;
    do
    {
        do
            up = up + 1;
        while(up <= high && array[up] < pivot);
        do
            down = down - 1;
        while(array[down] > pivot);
        if(up < down)
            interchange(up, down);
    } while(up < down);
    array[low] = array[down];    /* place the pivot in its final position */
    array[down] = pivot;
    return down;
}

void quicksort(int low, int high)
{
    if(low < high)
    {
        int j = partition(low, high);   /* j is the position of the pivot */
        quicksort(low, j - 1);
        quicksort(j + 1, high);
    }
}

int main()
{
    int num, i = 0;
    clrscr();
    printf("Enter the number of elements: ");
    scanf("%d", &num);
    printf("Enter the elements: ");
    for(i = 0; i < num; i++)
        scanf("%d", &array[i]);
    quicksort(0, num - 1);
    printf("\nThe elements after sorting are: ");
    for(i = 0; i < num; i++)
        printf("%d ", array[i]);
    return 0;
}