Data Structure - Algorithm Analysis
Algorithms
• Input: An algorithm has zero or more inputs. The logic of the algorithm works on this input to produce the desired result.
• Output: At least one output should be produced by the algorithm, based on the given input.
• Definiteness: Every step of the algorithm should be clear and free of ambiguity.
• Finiteness: Every algorithm should have a proper end; it must terminate after a finite number of steps and cannot run forever.
• Effectiveness: Every step in the algorithm should be basic enough to be understood and carried out, and should be implementable in any programming language (see the short example after this list).
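As a concrete illustration of these criteria, here is a short C function (the name and task are illustrative, not taken from the slides). It has inputs and an output, every step is definite, the loop ends after a finite number of iterations, and each step is simple enough to express in any language:

/* Illustrative example: finds the largest value in a non-empty array.
   Input: an array list[] and its size n (n > 0); Output: the maximum value.
   The loop ends after n-1 iterations (finiteness) and every step is
   unambiguous (definiteness) and easy to carry out (effectiveness). */
int find_max(int list[], int n)
{
    int max = list[0];
    int i;
    for (i = 1; i < n; i++)
        if (list[i] > max)
            max = list[i];
    return max;
}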
Performance Analysis
• The performance of a program is analyzed in terms of two measures: its space complexity and its time complexity.
Space complexity
• The space complexity of a program is the amount of memory that it needs to run to completion.
• The total space requirement has a fixed component, which is independent of the instance characteristics, and a variable component, which depends on the particular problem instance (for example, the size of the input).
Space complexity
Example 1:
• Sabc(I) = 0: a function such as abc, which computes a simple expression of its scalar parameters, uses no space that depends on the instance characteristics, so its variable space requirement is zero (a sketch follows).
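A minimal sketch of such a function, assuming the name abc and an illustrative expression (the slide does not reproduce the original program):

/* Sketch: a function whose variable space requirement is zero.
   It uses only its parameters and fixed-size local values, so Sabc(I) = 0. */
float abc(float a, float b, float c)
{
    return a + b + b * c + (a + b - c) / (a + b) + 4.0f;
}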
Space complexity
Example 2:
We want to add a list of numbers (Program 1.10).
• Although the output is a simple value, the input includes an array.
• Therefore, the variable space requirement depends on how the array is passed into the function.
• Programming languages like Pascal may pass arrays by value. This means that the entire array is copied into temporary storage before the function is executed. In these languages the variable space requirement for this program is Ssum(I) = Ssum(n) = n, where n is the size of the array.
Space complexity
Example 2 (continued):
• C passes all parameters by value.
• When an array is passed as an argument to a function, C interprets it as passing the address of the first element of the array.
• C does not copy the array.
• Therefore, Ssum(n) = 0 (see the sketch below).
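The slides do not reproduce Program 1.10 itself; a minimal sketch of an iterative summing function along those lines (the exact signature is an assumption) is:

/* Sketch of an iterative summing function (in the spirit of Program 1.10).
   The array is received as a pointer to its first element, so no copy is
   made and the variable space requirement is Ssum(n) = 0. */
float sum(float list[], int n)
{
    float tempsum = 0;
    int i;
    for (i = 0; i < n; i++)
        tempsum += list[i];
    return tempsum;
}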
Space complexity
Example 3: Program 1.11 also adds a list of numbers, but this time the summation is handled recursively.
• This means that the compiler must save the parameters, the local variables, and the return address for each recursive call.
• In this example, the space needed for one recursive call is the number of bytes required for the two parameters and the return address.
• We can use the sizeof operator to find the number of bytes required by each type.
• On an 80386 computer, integers and pointers require 2 bytes of storage and floats need 4 bytes.
• Figure 1.1 shows the number of bytes required for one recursive call (a sketch of the recursive function follows).
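The recursive program itself is not shown on the slide; a minimal sketch in the spirit of Program 1.11 (the name rsum is an assumption) is:

/* Sketch of a recursive summing function (in the spirit of Program 1.11).
   Each call must save its two parameters (list and n) and the return
   address on the run-time stack. */
float rsum(float list[], int n)
{
    if (n)
        return rsum(list, n - 1) + list[n - 1];   /* n > 0: recurse on the first n-1 elements */
    return 0;                                     /* boundary condition: an empty list sums to 0 */
}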
Space complexity
Figure 1.1: space needed for one recursive call of Program 1.11 (the two parameters plus the return address, 2 bytes each, 6 bytes in total).
Space complexity
• If the array has n = MAX_SIZE numbers, the total variable space needed for the recursive version is Srsum(MAX_SIZE) = 6 * MAX_SIZE.
• If MAX_SIZE = 1000, the variable space needed by the recursive version is 6 * 1000 = 6,000 bytes.
• The iterative version has no variable space requirement.
• As you can see, the recursive version has a far greater overhead than its iterative counterpart.
Time complexity
• The time T(P) taken by a program P is the sum of its compile time and its run (or execution) time.
• The compile time is similar to the fixed space component, since it does not depend on the instance characteristics.
• In addition, once we have verified that the program runs correctly, we may run it many times without recompilation.
• Consequently, we are really concerned only with the program's execution time Tp.
• Determining Tp is not an easy task because it requires a detailed knowledge of the compiler's attributes.
• That is, we must know how the compiler translates our source program into object code.
Time complexity
• For example, suppose we have a simple program that adds and subtracts numbers.
• Letting n denote the instance characteristic, we might express Tp(n) as:
      Tp(n) = ca ADD(n) + cs SUB(n) + cl LDA(n) + cst STA(n)
• where ca, cs, cl, and cst are constants that refer to the time needed to perform each operation, and
• ADD, SUB, LDA, and STA are the numbers of additions, subtractions, loads, and stores that are performed when the program is run with instance characteristic n.
Time complexity
• Obtaining such a detailed estimate of running time is rarely
worth the effort.
• If we must know the running time, the best approach is to use
the system clock to time the program.
• Alternatively, we could count the number of operations the program performs.
• This gives us a machine-independent estimate, but we must
know how to divide the program into distinct steps.
Note that the step count only tells us how many steps are executed; it does not tell us how much time each step takes.
Time complexity
Iterative summing of a list of numbers: 2n + 3 steps.
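The instrumented program that yields this count is not reproduced on the slide; a minimal sketch of the iterative summing function with count statements inserted (the global variable count and the signature are assumptions) is:

int count = 0;   /* global step counter */

float sum(float list[], int n)
{
    float tempsum = 0;
    int i;
    count++;                 /* step for the assignment tempsum = 0      */
    for (i = 0; i < n; i++) {
        count++;             /* step for each test of the for condition  */
        tempsum += list[i];
        count++;             /* step for the assignment in the loop body */
    }
    count++;                 /* step for the final (failing) loop test   */
    count++;                 /* step for the return                      */
    return tempsum;          /* total: 1 + n + n + 1 + 1 = 2n + 3 steps  */
}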
Time complexity
Recursive summing of a list of numbers: 2n + 2 steps.
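Again the instrumented program is not reproduced; a minimal sketch of the recursive summing function with count statements (in the spirit of Program 1.14, with assumed names) is:

int count = 0;   /* global step counter */

float rsum(float list[], int n)
{
    count++;                                      /* step for the if conditional */
    if (n) {
        count++;                                  /* step for the first return   */
        return rsum(list, n - 1) + list[n - 1];
    }
    count++;                                      /* step for the second return  */
    return 0;                                     /* total: 2n + 2 steps         */
}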
Time complexity
•To determine the step count for this function, we first need to figure out the step
count for the boundary condition of n = 0. Looking at Program 1.14, we can see
that when n = 0 only the if conditional and the second return statement are
executed. So, the total step count for n = 0 is 2. For n > 0, the if conditional and
the first return statement are executed. So each recursive call with n > 0 adds
two to the step count. Since there are n such function calls and these are
followed by one with n = 0, the step count for the function is 2n + 2.
Time complexity
• By physically placing count statements within our
functions we can run the functions and obtain precise
counts for various instance characteristics.
• Another way to obtain step counts is to use a tabular
method.
• To construct a step count table we first determine the step count for each statement. We call this the steps/execution, or s/e for short.
• Next we figure out the number of times that each
statement is executed. We call this the frequency.
Time complexity
• The frequency of a nonexecutable statement is zero.
• Multiplying the s/e by the frequency gives us the total steps for each statement.
• Summing these totals gives us the step count for the entire function (an illustrative table follows).
• Although this seems like a very complicated process, it is in fact quite easy.
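The slides do not show the resulting tables; as an illustration, a step count table for the iterative summing function sketched earlier might look like this (the s/e and frequency values correspond to that sketch, not to a program shown on the slides):

Statement                          s/e   Frequency   Total steps
float sum(float list[], int n)      0        0            0
{                                   0        0            0
   float tempsum = 0;               1        1            1
   int i;                           0        0            0
   for (i = 0; i < n; i++)          1      n + 1        n + 1
      tempsum += list[i];           1        n            n
   return tempsum;                  1        1            1
}                                   0        0            0
Total                                                   2n + 3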
Time complexity
• The examples we have looked at so far were sufficiently simple that the time complexities were nice functions of fairly simple characteristics, such as the number of elements or the number of rows and columns.
• For many programs, the time complexity is not dependent solely on the number of inputs or outputs or some other easily specified characteristic.
• Consider the function binarysearch, which searches an ordered list. A natural parameter with respect to which you might wish to determine the step count is the number of elements (n) in the list. That is, we would like to know how the computing time changes as we change the number of elements n.
• The parameter n alone is inadequate: for the same n, the step count varies with the position of the element searchnum that is being searched for.
• We can extricate ourselves from the difficulties that arise when the chosen parameters are not adequate to determine the step count uniquely by defining three kinds of step counts: best case, worst case, and average.
Time complexity
• The best case step count is the minimum number of steps that can be executed for the given parameters.
• The worst case step count is the maximum number of steps that can be executed for the given parameters.
• The average step count is the average number of steps executed over all problem instances with the given parameters.
A sketch of the binarysearch function follows.
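The slides do not reproduce the textbook's binary search; a minimal sketch (signature and names are assumptions) that makes the best and worst cases concrete is:

/* Sketch of binary search on a sorted array.
   Best case: searchnum is at the middle position, so the loop body runs once.
   Worst case: searchnum is absent or near an end, so the loop runs about
   log2(n) + 1 times; the step count therefore depends on where searchnum
   lies, not just on n.
   Typical call: binsearch(list, searchnum, 0, n - 1). */
int binsearch(int list[], int searchnum, int left, int right)
{
    while (left <= right) {
        int middle = (left + right) / 2;
        if (searchnum < list[middle])
            right = middle - 1;
        else if (searchnum > list[middle])
            left = middle + 1;
        else
            return middle;       /* found */
    }
    return -1;                   /* not found */
}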
Time complexity
Matrix addition:
• We want to determine the step count for a function that adds two-dimensional arrays (Program 1.15).
• The arrays a and b are added and the result is returned in array c.
• All of the arrays are of size rows x cols.
• Program 1.16 shows the add function with the step counts introduced (a sketch along those lines follows).
• As in the previous examples, we want to express the total count in terms of the size of the inputs, in this case rows and cols.
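Programs 1.15 and 1.16 are not reproduced on the slides; a minimal sketch of the add function with count statements inserted (MAX_SIZE, the element type, and the counting style are assumptions) is:

#define MAX_SIZE 100        /* assumed maximum number of columns */

int count = 0;              /* global step counter */

/* Adds a and b, storing the result in c; all arrays are rows x cols.
   Each outer iteration contributes 2*cols + 2 steps and the final outer
   test adds 1, giving 2*rows*cols + 2*rows + 1 steps in total. */
void add(int a[][MAX_SIZE], int b[][MAX_SIZE], int c[][MAX_SIZE],
         int rows, int cols)
{
    int i, j;
    for (i = 0; i < rows; i++) {
        count++;                          /* step for the outer for condition        */
        for (j = 0; j < cols; j++) {
            count++;                      /* step for the inner for condition        */
            c[i][j] = a[i][j] + b[i][j];
            count++;                      /* step for the assignment                 */
        }
        count++;                          /* step for the last test of the inner for */
    }
    count++;                              /* step for the last test of the outer for */
}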
Thank you