Fundamentals of Analysis of Algorithms Efficiency
Analysis of algorithms means investigating an algorithm's efficiency with respect to two resources: running time (time efficiency) and memory space (space efficiency). Time being more critical than space, we concentrate on the time efficiency of algorithms; the theory developed holds good for space complexity as well.

Experimental Studies: This approach requires writing a program implementing the algorithm and running the program with inputs of varying size and composition. It uses a function, like the built-in clock() function, to get an accurate measure of the actual running time; the analysis is then done by plotting the results.

Limitations of experiments:
1. It is necessary to implement the algorithm, which may be difficult.
2. Results may not be indicative of the running time on inputs not included in the experiment.
3. In order to compare two algorithms, the same hardware and software environments must be used.

Theoretical Analysis: This approach uses a high-level description of the algorithm instead of an implementation. It characterizes running time as a function of the input size, n, and takes into account all possible inputs. This allows us to evaluate the speed of an algorithm independently of the hardware/software environment. Therefore theoretical analysis can be used for analyzing any algorithm.
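The experimental approach described above can be sketched in Python. Here time.perf_counter plays the role of the clock() function mentioned in the text, and work is a hypothetical workload chosen only for illustration; the measured times will vary from machine to machine.

```python
import time

def work(n):
    # Hypothetical workload: sum the first n integers.
    return sum(range(n))

# Run the program with inputs of varying size and record wall-clock time.
for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()   # analogue of the clock() call in the text
    work(n)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9}: {elapsed:.6f} s")
```

Plotting elapsed time against n would complete the experimental analysis, and the limitations listed above apply: the numbers depend on this machine, this interpreter, and the particular inputs chosen.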
Consider the following example:

ALGORITHM sum_of_numbers (A[0..n-1])
// Functionality: Finds the sum
// Input: Array of n numbers
// Output: Sum of n numbers
i <- 0
sum <- 0
while i < n do
    sum <- sum + A[i]
    i <- i + 1
return sum

Total number of basic operation executions: C(n) = n.

NOTE: The constant of the fastest-growing term is insignificant: complexity theory is an approximation theory. We are not interested in the exact time required by an algorithm to solve the problem; rather, we are interested in the order of growth, i.e.:
- How much faster will the algorithm run on a computer that is twice as fast?
- How much longer does it take to solve a problem of double the input size?

We can crudely estimate running time by T(n) ~ cop * C(n), where
T(n): running time as a function of n,
cop: execution time of a single basic operation,
C(n): number of basic operations as a function of n.

Order of Growth: For order of growth, consider only the leading term of a formula and ignore the constant coefficient.

[Table of values of several functions important for the analysis of algorithms omitted.]
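The pseudocode above translates directly to Python. A counter (added here purely for illustration) makes the basic-operation count C(n) = n explicit: one addition per element.

```python
def sum_of_numbers(a):
    """Return (sum of the numbers in list a, number of basic operations)."""
    i, total, count = 0, 0, 0
    while i < len(a):
        total += a[i]   # basic operation: one addition per iteration
        count += 1
        i += 1
    return total, count

# C(n) = n: five elements, five executions of the basic operation.
total, c = sum_of_numbers([3, 1, 4, 1, 5])
```

Doubling the input size doubles the count, which is exactly the order-of-growth question posed in the note above.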
Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to compare orders of growth of an algorithm's basic operation count: O (big oh), Ω (big omega), and Θ (big theta).
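A quick numeric check of why constant factors and small input sizes are ignored: a linear function with a large constant, such as the hypothetical 100n, is still eventually dominated by n^2.

```python
def crossover(c):
    """Smallest n at which n*n exceeds c*n, i.e. the point past which
    the quadratic term dominates the linear one despite the constant c."""
    n = 1
    while n * n <= c * n:
        n += 1
    return n

# n^2 overtakes 100*n once n > 100, so the crossover point is n = 101.
print(crossover(100))
```

Below the crossover the constant matters; asymptotic notation deliberately looks only at behaviour beyond some such point n0.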
Analysis:
1. Input size: number of elements = n (size of the array).
2. Basic operation: (a) comparison (b) assignment.
3. NO best, worst, or average cases.
4. Let C(n) denote the number of comparisons: the algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n - 1.
C(n) = Σ_{i=1}^{n-1} 1 = n - 1 ∈ Θ(n)
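The summation can be checked directly. The loop below mirrors the structure analysed above, one basic operation for each value of a loop variable i running from 1 to n - 1; it is a generic counting sketch, not the full algorithm.

```python
def comparisons(n):
    """Count one basic operation per iteration of a loop whose variable i
    runs from 1 to n - 1, mirroring C(n) = sum over i = 1..n-1 of 1."""
    count = 0
    for i in range(1, n):   # i = 1, 2, ..., n - 1
        count += 1          # one comparison per iteration
    return count

# C(n) = n - 1 for every n >= 1; linear order of growth.
```

The count grows by exactly one when n grows by one, confirming C(n) ∈ Θ(n).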
Example: Element uniqueness problem

ALGORITHM UniqueElements (A[0..n-1])
// Checks whether all the elements in a given array are distinct
// Input: An array A[0..n-1]
// Output: Returns true if all the elements in A are distinct and false otherwise
for i <- 0 to n - 2 do
    for j <- i + 1 to n - 1 do
        if A[i] == A[j] return false
return true

Analysis:
1. Input size: number of elements = n (size of the array).
2. Basic operation: comparison.
3. Best, worst, and average cases EXIST. A worst-case input is an array giving the largest number of comparisons:
   - an array with no equal elements, or
   - an array in which the last two elements are the only pair of equal elements.
4. Let C(n) denote the number of comparisons in the worst case: the algorithm makes one comparison for each repetition of the innermost loop, i.e., for each value of the loop's variable j between its limits i + 1 and n - 1; and this is repeated for each value of the outer loop's variable i between its limits 0 and n - 2.

C(n) = Σ_{i=0}^{n-2} Σ_{j=i+1}^{n-1} 1
     = Σ_{i=0}^{n-2} ((n - 1) - (i + 1) + 1)
     = Σ_{i=0}^{n-2} (n - 1 - i)
     = Σ_{i=0}^{n-2} (n - 1) - Σ_{i=0}^{n-2} i
     = (n - 1) Σ_{i=0}^{n-2} 1 - (n - 2)(n - 1)/2
     = (n - 1)(n - 1) - (n - 2)(n - 1)/2
     = (n - 1)n/2 ≈ n²/2 ∈ Θ(n²)
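A Python version of UniqueElements with an added comparison counter (the counter is for illustration only) confirms the worst-case count C(n) = n(n - 1)/2 on an array with no equal elements.

```python
def unique_elements(a):
    """Return (all_distinct, number_of_comparisons) for list a."""
    n = len(a)
    count = 0
    for i in range(n - 1):           # i = 0 .. n - 2
        for j in range(i + 1, n):    # j = i + 1 .. n - 1
            count += 1               # basic operation: one comparison
            if a[i] == a[j]:
                return False, count
    return True, count

# Worst case: all elements distinct -> n(n-1)/2 comparisons.
distinct, c = unique_elements(list(range(10)))
```

For n = 10 distinct elements the counter reaches 10 * 9 / 2 = 45, matching the closed form derived above; an early equal pair makes the algorithm stop sooner, which is why best and worst cases differ here.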
Example: Find the number of binary digits in the binary representation of a positive decimal integer

ALGORITHM BinRec (n)
// Input: A positive decimal integer n
// Output: The number of binary digits in n's binary representation
if n == 1 return 1
else return BinRec (⌊n/2⌋) + 1

Analysis:
1. Input size: given number = n.
2. Basic operation: addition.
3. NO best, worst, or average cases.
4. Let A(n) denote the number of additions:
   A(n) = A(⌊n/2⌋) + 1 for n > 1
   A(1) = 0 (initial condition)
   where A(⌊n/2⌋) counts the additions made to compute BinRec(⌊n/2⌋), and the 1 is the addition that increases the returned value by 1.
5. Solve the recurrence: A(n) = A(⌊n/2⌋) + 1 for n > 1.
   Assume n = 2^k (smoothness rule):
   A(2^k) = A(2^(k-1)) + 1 for k > 0; A(2^0) = 0.
   Solving by the backward substitution method:
   A(2^k) = A(2^(k-1)) + 1
          = [A(2^(k-2)) + 1] + 1 = A(2^(k-2)) + 2
          = [A(2^(k-3)) + 1] + 2 = A(2^(k-3)) + 3
   In the i-th substitution we have A(2^k) = A(2^(k-i)) + i.
   When i = k, we have A(2^k) = A(2^(k-k)) + k = A(2^0) + k.
   Since A(2^0) = 0, A(2^k) = k.
   Since n = 2^k, we have k = log2 n, hence
   A(n) = log2 n ∈ Θ(log n)
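The recursive algorithm above, with an addition counter added for illustration, verifies the solution of the recurrence: for n = 2^k the algorithm performs exactly k additions.

```python
def bin_rec(n):
    """Return (number of binary digits of n, number of additions performed)
    for a positive integer n, following the BinRec recurrence."""
    if n == 1:
        return 1, 0                  # initial condition: A(1) = 0
    digits, adds = bin_rec(n // 2)   # A(floor(n/2)) additions in the recursive call
    return digits + 1, adds + 1      # plus one addition at this level

# n = 16 = 2^4: five binary digits (10000), A(16) = 4 = log2(16) additions.
digits, adds = bin_rec(16)
```

For n that is not a power of two the count is ⌊log2 n⌋, which is what the smoothness rule lets us infer from the n = 2^k case.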