
ALGORITHM

Algorithm

The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammed ibn Musa al-Khowarizmi.
Definition [Algorithm]: An algorithm is a finite set of instructions that, if followed, accomplishes a
particular task.
Algorithms must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: If we trace out the instructions of an algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
5. Effectiveness: Every instruction must be basic enough that it can, in principle, be carried
out by a person using pencil and paper, and it must also be feasible.

Algorithm and Problem Solving

The main steps for Problem Solving are:


1. Problem definition
2. Algorithm design / Algorithm specification
3. Algorithm analysis
4. Implementation
5. Testing
6. Maintenance
1. Problem Definition: What is the task to be accomplished?
Ex: Calculate the average of the grades for a given student
2. Algorithm Design / Specification: Describe the algorithm in natural language, pseudocode, or a diagram
3. Algorithm Analysis:
Space complexity - How much memory is required
Time complexity - How much time does it take to run the algorithm
4. Implementation: Decide on the programming language to use (e.g., C, C++, Lisp, or Java)
5. Testing: Fix bugs and verify that the program produces correct results
6. Maintenance: Release updates, fix bugs

Distinct areas of study of algorithms

1. Devise algorithms: (i) Divide and Conquer, (ii) Branch and Bound, (iii) Dynamic Programming.
2. Validate algorithms: Check that the algorithm computes the correct answer for all possible legal inputs.
3. Analyze algorithms: Determine how much computing time and storage an algorithm requires.
4. Test a program:
(i) Debugging: Determine whether faulty results occur and, if so, correct them.
(ii) Profiling (performance measurement): Measure the time and space the program takes to
compute its results.

Designed By Dr. P P e n c h a l || Unit-1 Page 1 of 22


Pseudocode

An algorithm can be represented in

• Text mode and
• Graphic mode

The graphical representation is called a Flowchart.
Text mode is called Pseudocode, which is a high-level description of an algorithm.
Example of Pseudocode

Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
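The arrayMax pseudocode above translates directly into a runnable sketch (the Python rendering below is mine, not part of the notes):

```python
def array_max(a):
    """Return the maximum element of a non-empty list,
    following the arrayMax pseudocode above."""
    current_max = a[0]          # currentMax <- A[0]
    for i in range(1, len(a)):  # for i <- 1 to n-1 do
        if a[i] > current_max:  # if A[i] > currentMax then
            current_max = a[i]  #     currentMax <- A[i]
    return current_max
```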

ALGORITHM SPECIFICATION
Pseudocode Convention
Algorithms are written using pseudocode that resembles C and Pascal.

1. Comments begin with //
2. Blocks are indicated with matching braces: { and }
3. Assignment of values to variables:
   ⟨variable⟩ := ⟨expression⟩;
4. There are two Boolean values, true and false
5. Elements of multidimensional arrays are accessed using [ and ]
6. While loop:
   while ⟨condition⟩ do
   {
       ⟨statement 1⟩
       ...
       ⟨statement n⟩
   }
7. For loop:
   for variable := value1 to value2 step step do
   {
       ⟨statement 1⟩
       ...
       ⟨statement n⟩
   }
8. Do..while (repeat) loop:
   repeat
       ⟨statement 1⟩
       ...
       ⟨statement n⟩
   until ⟨condition⟩
9. Conditional statement:
   if ⟨condition⟩ then ⟨statement⟩
   if ⟨condition⟩ then ⟨statement 1⟩ else ⟨statement 2⟩
10. A procedure (algorithm) heading takes the form
   Algorithm Name(⟨parameter list⟩)
An example: an algorithm that finds and returns the maximum of n given numbers.


Recursive Algorithm
A recursive function is a function that is defined in terms of itself. Similarly, an algorithm is said to be
recursive if the same algorithm is invoked in the body.
Example: Towers of Hanoi puzzle
The Tower of Hanoi is a mathematical puzzle which consists of three towers (pegs) and more than one
ring, as depicted below.

These rings are of different sizes and stacked in ascending order, i.e. the smaller one sits
over the larger one. There are other variations of the puzzle where the number of disks increases, but
the tower count remains the same.
Rules
A few rules to be followed for the Tower of Hanoi are:
• Only one disk can be moved among the towers at any given time.
• Only the "top" disk can be removed.
• No large disk can sit over a small disk.
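These rules lead directly to the classic recursion; a minimal sketch (the peg labels and function names are illustrative, not from the notes):

```python
def hanoi(n, source, dest, aux, moves):
    """Recursively move n disks from source to dest using aux,
    appending each (disk, from, to) move to `moves`."""
    if n == 1:
        moves.append((1, source, dest))
        return
    hanoi(n - 1, source, aux, dest, moves)  # clear the top n-1 disks out of the way
    moves.append((n, source, dest))         # move the largest disk
    hanoi(n - 1, aux, dest, source, moves)  # stack the n-1 disks back on top
```

For n disks this produces 2^n − 1 moves, which is why the running time of the puzzle grows exponentially.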

PERFORMANCE ANALYSIS
Performance Analysis

Performance evaluation can be loosely divided into two major phases:

(1) A priori estimates (performance analysis) and
(2) A posteriori testing (performance measurement).

The performance of an algorithm is measured in two ways:
a. Space complexity
b. Time complexity

Space complexity

Definition [Space complexity]: The space complexity of an algorithm is the amount of memory it
needs to run to completion.

Example:

The space needed by an algorithm is the sum of the following components:
a. Fixed part
b. Variable part
A fixed part is independent of the characteristics (e.g., number, size) of the inputs and outputs.
Ex:
• Space for the code
• Space for simple variables and fixed-size component variables
A variable part consists of the space needed by component variables whose size depends
on the particular problem instance being solved.
Ex:
• Space needed by referenced variables
• Recursion stack space
The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics), where c is a constant.

Example:

Algorithm Sum(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}

The problem instances for this algorithm are characterized by n, the number of elements to
be summed. The space needed by n is one word, since it is of type integer.
The space needed by a is the space needed by variables of type array of floating point
numbers. This is at least n words, since a must be large enough to hold the n elements to be
summed.
So, we obtain Ssum(n) >= (n + 3): n words for a, plus one word each for n, i, and s.

Time Complexity

Definition [Time Complexity]: The time complexity of an algorithm is the amount of computer time
it needs to run to completion.

The compile time does not depend on the instance characteristics. Also, we may assume that a
compiled program will be run several times without recompilation. This run time is denoted by
tp(instance characteristics).

The number of steps assigned to any program statement depends on the kind of statement and is
called steps per execution (s/e).
Example: 1 Time Complexity Analysis of SUM
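The s/e idea can be illustrated by instrumenting the Sum algorithm with a step counter; the total 2n + 3 below matches the usual textbook step count for Sum (the variable names and instrumentation are mine):

```python
def sum_with_steps(a, n):
    """Sum a[0..n-1], counting one step per executed statement:
    the initial assignment, each loop test, each addition, and the return."""
    steps = 0
    s = 0.0
    steps += 1              # s := 0.0
    for x in a[:n]:
        steps += 1          # loop-control test (one per iteration)
        s += x
        steps += 1          # s := s + a[i]
    steps += 1              # final loop test that exits the for loop
    steps += 1              # return s
    return s, steps
```

Running this on any n-element list yields a step count of 1 + 2n + 1 + 1 = 2n + 3.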

Example: 2 Time Complexity Analysis of RSUM

Example: 3 Time Complexity Analysis of MATRIX Addition

Asymptotic Notation (O, Ω, Θ)

The following notations are commonly used in performance analysis to
characterize the complexity of an algorithm:
1. Big–OH (O)
2. Big–OMEGA (Ω)
3. Big–THETA (Θ)
4. Little–OH (o)
5. Little–OMEGA (ω)
Asymptotic notation gives us a measure that works across different operating systems,
compilers and CPUs.
• It is a way to describe the behaviour of a function in the limit.
• It describes the rate of growth of functions.
• It focuses on what is important by abstracting away low-order terms and constant factors.
The following are the names of the various time complexities.
Time complexity    Name
O(1)               Constant
O(log n)           Logarithmic
O(n)               Linear
O(n log n)         Linearithmic
O(n^2)             Quadratic
O(n^3)             Cubic
O(2^n)             Exponential

Big–OH (O): (Upper Bound)

Examples:
(i) 3n + 2 = O(n), as 3n + 2 <= 4n for all n >= 2.
(ii) 3n + 3 = O(n), as 3n + 3 <= 4n for all n >= 3.
(iii) 10n^2 + 4n + 2 = O(n^2), as 10n^2 + 4n + 2 <= 11n^2 for all n >= 5.
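The witness constants in these examples can be checked mechanically; a quick sketch (the helper `holds` and its range cutoff are mine, for illustration only):

```python
# Verify the constants used above: c = 4 with n0 = 2 for 3n + 2,
# and c = 11 with n0 = 5 for 10n^2 + 4n + 2.
def holds(f, g, c, n0, upto=1000):
    """Check f(n) <= c*g(n) for all n in [n0, upto)."""
    return all(f(n) <= c * g(n) for n in range(n0, upto))

assert holds(lambda n: 3 * n + 2, lambda n: n, c=4, n0=2)
assert holds(lambda n: 3 * n + 3, lambda n: n, c=4, n0=3)
assert holds(lambda n: 10 * n * n + 4 * n + 2, lambda n: n * n, c=11, n0=5)
```

A finite check like this is of course not a proof, but it catches wrongly chosen constants immediately.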

Big–OMEGA (Ω): (Lower Bound)

Examples:

THETA (Θ) (Lower and Upper Bound)

Examples:
i.
ii.

Little–OH (o)

Little–OMEGA (ω)

Performance Measurement
Performance measurement is concerned with obtaining the space and time requirements of a
particular algorithm. These quantities depend on the compiler and options used as well as on the
computer on which the algorithm is run.

We do not consider measuring the run-time space requirements of a program. Rather, we focus on
measuring the computing time of a program. To obtain the computing (or run) time of a program, we
need a clocking procedure. We assume the existence of a program GetTime() that returns the
current time in milliseconds.

Example:

DIVIDE-AND-CONQUER
General Method (Control abstraction for divide-and-conquer)
In the general method of divide and conquer, a given problem is
i) divided into smaller subproblems;
ii) these subproblems are solved independently;
iii) the solutions of the subproblems are combined into a solution of the whole.
• If the subproblems are still large, then divide and conquer is reapplied.
• The generated subproblems are usually of the same type as the original problem.
• Hence recursive algorithms are used in the divide and conquer strategy.

Example 1:

The computing time of DAndC is described by the recurrence relation

T(n) = g(n)                                  if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)    otherwise

Master Theorem for Divide and Conquer

Where
• T(n) is the time for DAndC on any input of size n
• g(n) is the time to compute the answer directly for small inputs
• f(n) is the time for dividing P and combining the solutions to subproblems
The complexity of many divide-and-conquer algorithms is given by recurrences of the form

T(n) = T(1)              n = 1
T(n) = aT(n/b) + f(n)    n > 1

where a and b are known constants.

Example 2:
Let us consider the problem of finding the sum of n numbers a0, ..., an-1.
If n > 1,
• we can divide the problem into two instances of the same problem:
• compute the sum of the first ⌊n/2⌋ numbers, and then compute the sum of the remaining numbers;
• combine the answers of the two partial sums:
a0 + ... + an-1 = (a0 + ... + a⌊n/2⌋−1) + (a⌊n/2⌋ + ... + an−1)
We get the following recurrence for the running time T(n):
T(n) = aT(n/b) + f(n), here a = 2 and b = 2
Finding recurrence and Time Complexity T(n)
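The halving scheme above can be sketched recursively (a toy illustration of the recurrence, not an algorithm anyone would use in place of a loop):

```python
def rsum(a, lo, hi):
    """Return a[lo] + ... + a[hi] by splitting the range in half,
    mirroring the recurrence T(n) = 2T(n/2) + O(1)."""
    if lo == hi:              # a single element: answer directly
        return a[lo]
    mid = (lo + hi) // 2      # divide into two instances of the same problem
    # combine the answers of the two subproblems
    return rsum(a, lo, mid) + rsum(a, mid + 1, hi)
```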

Applications of divide and conquer:

• Binary search
• Quick sort
• Merge sort
• Strassen's matrix multiplication

Binary Search

1. Binary Search finds the position of a specified input value (the search "key") within an array
sorted by key value.
2. In each step, the algorithm compares the search key value with the key value of the middle
element of the array.
3. If the keys match, then a matching element has been found and its index, or position, is
returned.
4. Otherwise, if the search key is less than the middle element's key, then the algorithm repeats its
action on the sub-array to the left of the middle element or, if the search key is greater, on the
sub-array to the right of the middle element.
5. If the search element is less than the minimum position element or greater than the maximum
position element, then the algorithm returns "not found".

The following are the algorithms for Binary Search:

Algorithm (Iterative binary search)

Algorithm (Recursive binary search)
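Both variants can be sketched as follows (0-based Python indexing here, whereas the notes assume a[1:n]; the function names are mine):

```python
def binsearch_iterative(a, x):
    """Return the index of x in sorted list a, or -1 if not found."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x == a[mid]:
            return mid
        elif x < a[mid]:
            high = mid - 1     # continue in the left half
        else:
            low = mid + 1      # continue in the right half
    return -1

def binsearch_recursive(a, x, low, high):
    """Recursive variant over a[low..high]."""
    if low > high:
        return -1
    mid = (low + high) // 2
    if x == a[mid]:
        return mid
    if x < a[mid]:
        return binsearch_recursive(a, x, low, mid - 1)
    return binsearch_recursive(a, x, mid + 1, high)
```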

Example:
Let us select the 14 entries
-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151
Tracing the algorithm for two cases: when x = 151 and when x = -14.
(Binary decision tree for binary search when n = 14.)

Computing time of binary search by giving formulas that describe the best,average, and worst cases

Example:
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with a pictorial example. The following is our sorted array and let us
assume that we need to search the location of value 31 using binary search.

First, we shall determine half of the array by using this formula −


mid = low + (high - low) / 2
Here it is, 0 + (9 - 0 ) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.

Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find that
the value at location 4 is 27, which is not a match. Because 31 is greater than 27 and the array is
sorted, the target value must be in the upper portion of the array.

We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.

The value stored at location 7 is not a match, rather it is more than what we are looking for. So, the
value must be in the lower part from this location.

Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.

We conclude that the target value 31 is stored at location 5.


Binary search halves the searchable items at each step and thus reduces the number of comparisons
to be made to a very small number.

FINDING THE MAXIMUM AND MINIMUM
The maximum and minimum problem is to find the maximum and minimum items in a set of
n elements. The algorithm below is a straightforward way to accomplish this.
Straightforward algorithm

• The best case occurs when the elements are in increasing order. The number of element
comparisons is n − 1.
• The worst case occurs when the elements are in decreasing order. In this case the number of
element comparisons is 2(n − 1).
• The average number of comparisons is 3n/2 − 1.
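A sketch of the straightforward algorithm, with a comparison counter added so that the best- and worst-case counts above can be checked (the counter is my addition):

```python
def straight_max_min(a):
    """Single pass that tracks max and min, counting element comparisons.
    Returns (maximum, minimum, comparisons)."""
    maximum = minimum = a[0]
    comparisons = 0
    for x in a[1:]:
        comparisons += 1
        if x > maximum:
            maximum = x          # increasing input: only this branch, n-1 comparisons
        else:
            comparisons += 1     # a second comparison is needed
            if x < minimum:
                minimum = x      # decreasing input: both comparisons, 2(n-1) total
    return maximum, minimum, comparisons
```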
Divide-and-conquer algorithm:

• A divide-and-conquer algorithm for this problem would proceed as follows: Let P = (n,
a[i], ..., a[j]) denote an arbitrary instance of the problem.
• Here n is the number of elements in the list a[i], ..., a[j] and we are interested in finding
the maximum and minimum of this list.
• Let Small(P) be true when n <= 2. In this case, the maximum and minimum are a[i] if n = 1.
• If n = 2, the problem can be solved by making one comparison.
• If the list has more than two elements, P has to be divided into smaller instances. For
example, we might divide P into the two instances
P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋]) and
P2 = (n − ⌊n/2⌋, a[⌊n/2⌋ + 1], ..., a[n])
• After having divided P into two smaller subproblems, we can solve them by recursively
invoking the same divide-and-conquer algorithm. (Example: on board.)
• If MAX(P) and MIN(P) are the maximum and minimum of the elements in P, then
MAX(P) is the larger of MAX(P1) and MAX(P2). Also, MIN(P) is the smaller of MIN(P1)
and MIN(P2).

The resulting recurrence relation is

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2    n > 2
T(n) = 1                           n = 2
T(n) = 0                           n = 1

3n/2 − 2 is the best-, average-, and worst-case number of comparisons when n is a power of two.
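A recursive sketch of MaxMin with a comparison counter; for n a power of two it performs exactly 3n/2 − 2 element comparisons, matching the recurrence above (the mutable `count` list is my device for accumulating the tally):

```python
def max_min(a, i, j, count):
    """Return (max, min) of a[i..j]; count[0] accumulates element comparisons."""
    if i == j:                       # one element: no comparison needed
        return a[i], a[i]
    if j == i + 1:                   # two elements: one comparison
        count[0] += 1
        return (a[i], a[j]) if a[i] > a[j] else (a[j], a[i])
    mid = (i + j) // 2               # divide P into P1 and P2
    max1, min1 = max_min(a, i, mid, count)
    max2, min2 = max_min(a, mid + 1, j, count)
    count[0] += 2                    # combine: compare the two maxes and the two mins
    return max(max1, max2), min(min1, min2)
```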

Merge Sort
The merge sort splits the list to be sorted into two equal halves, and places them in separate
arrays. This sorting method is an example of the DIVIDE-AND-CONQUER paradigm i.e. it
breaks the data into two halves and then sorts the two half data sets recursively, and finally
merges them to obtain the complete sorted list. The merge sort is a comparison sort and has an
algorithmic complexity of O (n log n). Elementary implementations of the merge sort make use of
two arrays - one for each half of
the data set. The following image
depicts the complete procedure of Example: on Board
merge sort.

Advantages of merge sort:

1. Marginally faster than heap sort for larger sets.
2. Merge sort does fewer comparisons than quick sort; its worst case makes about 39% fewer
comparisons than quick sort's average case.
3. Merge sort is often the best choice for sorting a linked list.
Algorithm Merge sort

The above algorithm describes this process very briefly using recursion and a function Merge, which
merges two sorted sets. Before executing MergeSort, the n elements should be placed in a[1:n].
Then MergeSort(1, n) causes the keys to be rearranged into non-decreasing order in a[1:n].
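A compact sketch of the scheme (this version returns a new list rather than rearranging a[1:n] in place as the algorithm in the notes does):

```python
def merge_sort(a):
    """Return a new sorted list using the divide-and-conquer scheme above."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # sort the two halves recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0          # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```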

Merge sort recurrence relation:

T(n) = a               n = 1, a a constant
T(n) = 2T(n/2) + cn    n > 1, c a constant

Time complexity: T(n) = O(n log n)

QUICK SORT
Quick Sort is an algorithm based on the DIVIDE-AND-CONQUER paradigm that selects a pivot
element and reorders the given list in such a way that all elements smaller than the pivot are on
one side and those bigger than it are on the other. Then the sublists are recursively sorted until
the list gets completely sorted. The average time complexity of this algorithm is O(n log n).

The following describes internals of quick sort.

Function Partition of Algorithm:


• Accomplishes an in-place partitioning of the elements of a[m : p−1].
• It is assumed that a[p] >= a[m] and that a[m] is the partitioning element.
• If m = 1 and p−1 = n, then a[n+1] must be defined and must be greater than or equal to all
elements in a[1:n].
Function Interchange
The function Interchange(a, i, j) exchanges a[i] with a[j].
Function QuickSort of Algorithm:
QuickSort divides the problem into subproblems and recursively calls the partition algorithm.
Space Complexity
The auxiliary space used in the average case for implementing recursive function calls is
O(log n), which is somewhat costly in space, especially for large data sets.
Time Complexity
• The average time complexity of this algorithm is O(n log n).
• Its worst case has a time complexity of O(n^2), which can be very costly for large data sets.
Competitive sorting algorithms

Algorithm

Example: 9, 7, 5, 12, 11, 2, 14, 10
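The example input above can be traced with this sketch, which partitions around a[m] as the notes describe (simplified to 0-based Python indexing, without the a[n+1] sentinel the notes assume):

```python
def partition(a, m, p):
    """Partition a[m..p] around the pivot a[m]; return the pivot's final index."""
    v, i, j = a[m], m, p
    while i < j:
        while i < p and a[i] <= v:   # scan right for an element > pivot
            i += 1
        while a[j] > v:              # scan left for an element <= pivot
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]  # Interchange(a, i, j)
    a[m], a[j] = a[j], a[m]          # place the pivot at its final position
    return j

def quick_sort(a, m, p):
    """Sort a[m..p] in place by recursively partitioning."""
    if m < p:
        j = partition(a, m, p)
        quick_sort(a, m, j - 1)      # sort elements <= pivot
        quick_sort(a, j + 1, p)      # sort elements > pivot
```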

Example: on Board

SELECTION PROBLEM (Finding the kth-smallest element)

The Partition algorithm can also be used to obtain an efficient solution for the selection problem. In
this problem, we are given n elements a[1 : n] and are required to determine the kth smallest element.

Logic:
If the partitioning element v is positioned at a[j], then j−1 elements are less than or equal to a[j] and
n−j elements are greater than or equal to a[j]. Hence, if k < j, then the kth smallest element is in
a[1 : j−1]; if k = j, it is a[j]; otherwise it is in a[j+1 : n].

NOTE: write the Partition algorithm here.

The average computing time T(n) of Select1 is O(n).

Example: on board (65,70, 75,80,85,60,55,50,and 45)
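A quickselect sketch built on the same partitioning idea, applied here to the board example (the function name and the in-function copy of the list are my choices; the notes' Select1 works in place on a[1:n]):

```python
def select_kth(a, k):
    """Return the kth smallest element (1-based k) of list a,
    by repeatedly partitioning around the first element of the range."""
    a = a[:]                     # work on a copy
    m, p = 0, len(a) - 1
    while True:
        # partition a[m..p] around the pivot v = a[m]
        v, i, j = a[m], m, p
        while i < j:
            while i < p and a[i] <= v:
                i += 1
            while a[j] > v:
                j -= 1
            if i < j:
                a[i], a[j] = a[j], a[i]
        a[m], a[j] = a[j], a[m]  # pivot now at its final index j
        if k - 1 == j:
            return a[j]
        elif k - 1 < j:          # kth smallest lies in the left part
            p = j - 1
        else:                    # kth smallest lies in the right part
            m = j + 1
```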

SELECTION SORT
The selection sort algorithm is an in-place comparison-based algorithm in which the list is divided
into two parts: the sorted part at the left end and the unsorted part at the right end. Initially, the
sorted part is empty and the unsorted part is the entire list.
The smallest element is selected from the unsorted array and swapped with the leftmost element, and
that element becomes part of the sorted array. This process continues, moving the unsorted array
boundary one element to the right.
This algorithm is not suitable for large data sets, as its average and worst case complexities are
Ο(n^2), where n is the number of items.

Inner working of selection sort

Consider the following depicted array as an example.

For the first position in the sorted list, the whole list is scanned sequentially. With 14 stored at the
first position, we search the whole list and find that 10 is the lowest value.

So we swap 14 with 10. After one iteration, 10, which happens to be the minimum value in the list,
appears in the first position of the sorted list.

For the second position, where 33 is residing, we start scanning the rest of the list in a linear
manner. We find that 14 is the second lowest value in the list and that it should appear at the
second place, so we swap these values.

After two iterations, the two least values are positioned at the beginning in sorted order.

The same process is applied to the rest of the items in the array. The following is a pictorial
depiction of the entire sorting process.

Algorithm
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted
Pseudocode
procedure selection sort
    list : array of items
    n : size of list
    for i = 1 to n - 1
        /* set current element as minimum */
        min = i
        /* check the element to be minimum */
        for j = i+1 to n
            if list[j] < list[min] then
                min = j
            end if
        end for
        /* swap the minimum element with the current element */
        if min != i then
            swap list[min] and list[i]
        end if
    end for
end procedure
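The pseudocode translates directly into Python (0-based indices here, whereas the pseudocode is 1-based):

```python
def selection_sort(items):
    """In-place selection sort following the pseudocode above."""
    n = len(items)
    for i in range(n - 1):
        min_idx = i                      # set current element as minimum
        for j in range(i + 1, n):        # scan the unsorted part
            if items[j] < items[min_idx]:
                min_idx = j
        if min_idx != i:                 # swap minimum into position i
            items[i], items[min_idx] = items[min_idx], items[i]
    return items
```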

STRASSEN'S MATRIX MULTIPLICATION


Conventional Method or Naive Method
Let A and B be two n x n matrices. The product matrix C = AB is also an n x n matrix whose (i, j)th
element is formed by taking the elements in the ith row of A and the jth column of B and multiplying
them to get

C(i, j) = sum over k from 1 to n of A(i, k) * B(k, j)

for all i and j between 1 and n. To compute C(i, j) using this formula, we need n multiplications. As the
matrix C has n^2 elements, the time for the resulting matrix multiplication algorithm, which we refer to
as the conventional method, is Θ(n^3).

Following is a simple divide and conquer method to multiply two square matrices.
1) Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 as shown in the diagram below.
2) Calculate the following values recursively: ae + bg, af + bh, ce + dg and cf + dh.

In the above method, we do 8 multiplications for matrices of size N/2 x N/2 and 4 additions.
Addition of two matrices takes O(N^2) time. So the time complexity can be written as
T(N) = 8T(N/2) + O(N^2)
From the Master Theorem, the time complexity of the above method is O(N^3), which is
unfortunately the same as the naive method above.

Strassen’s Divide and Conquer Method
In the above divide and conquer method, the main contributor to the high time complexity is the 8
recursive calls. The idea of Strassen’s method is to reduce the number of recursive calls to 7.
Strassen’s method is similar to the simple divide and conquer method above in the sense that it also
divides the matrices into sub-matrices of size N/2 x N/2 as shown in the diagram above, but in
Strassen’s method the four sub-matrices of the result are calculated using the following formulae.
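The formulae referred to above are the standard seven Strassen products. A 2x2 sketch (for larger N the same recursion is applied blockwise to the N/2 x N/2 sub-matrices; the p1..p7 naming is one common convention, not necessarily the notes' own):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's seven products
    instead of the naive eight multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # the four sub-matrices of the result, combined from the seven products
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```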

Time Complexity of Strassen’s Method


Addition and subtraction of two matrices takes O(N^2) time. So the time complexity can be written as
T(N) = 7T(N/2) + O(N^2)
From the Master Theorem, the time complexity of the above method is O(N^log 7), which is
approximately O(N^2.8074).
