
CH02

Fundamentals of the
Analysis of Algorithm
Efficiency
Eng. Jibril Hassan (MSc IT)



Objectives

• After studying this chapter you should be able to:
• Define the algorithm analysis framework.
• Explain different methods for analyzing algorithms.
• Describe amortization analysis of algorithms.



Introduction
• Algorithm analysis is an important part of computational complexity theory.
• It provides theoretical estimates for the resources needed by any algorithm to solve a given problem.
• These estimates help to provide insight into the measures that determine algorithm efficiency.



Algorithm Analysis
Framework
• The algorithm analysis framework involves finding out the time taken by a program and the memory space it requires to execute.
• It also determines how the input size of a program influences its running time.
• The two factors that help us determine the efficiency of an algorithm are:
• the amount of time required by the algorithm to execute, and
• the amount of space required by the algorithm to execute.
Time Complexity
• The time complexity of an algorithm is the amount of time required for it to execute.
• The time taken by an algorithm is given as the sum of the compile time and the execution time.
• The compile time does not depend on the instance characteristics, since a program, once compiled, can be run many times without recompiling.
• So only the run time of the program matters when calculating time complexity, and it is denoted by t_P(instance characteristics).
• It is difficult to calculate the time complexity in terms of physically clocked time.



Time Complexity
• For example, in a multi-user operating system the clocked time depends on various factors such as:
• the system load,
• the number of programs running on the system,
• the instruction set used, and
• the speed of the hardware.
• The time complexity of an algorithm is therefore given in terms of frequency counts.
• The frequency count is the count that indicates the number of times a statement is executed.
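
As a minimal sketch (not part of the original slides), the snippet below tallies the frequency count of the basic operation in a simple summation loop; the function name is hypothetical.

# Frequency count sketch: count how many times the basic operation
# (the addition inside the loop) executes for an input of size n.
def sum_with_count(values):
    total = 0
    count = 0              # frequency count of the basic operation
    for v in values:
        total += v         # basic operation
        count += 1
    return total, count

total, count = sum_with_count(list(range(10)))
print(total, count)        # 45 10 -> the basic operation runs n times

The count equals n for an input of size n, so the running time grows linearly with the input size.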
Space Complexity
• The space complexity of an algorithm is the amount of storage required for it to execute.
• The space required by an algorithm to execute is given as the sum of the following components:
1) A fixed part that is independent of the characteristics of the inputs and outputs. This part includes the instruction space (e.g. space for the code), space for simple variables and fixed-size component variables, and space for constants.
2) A variable part that consists of the space needed by the component variables whose size depends on the particular problem instance being solved, and the space needed by referenced variables.
Space Complexity
• To calculate the space complexity of an algorithm we have to consider two factors:
1) the constant characteristic, and
2) the instance characteristic.
• This gives the equation S(P) = C + S_P(instance characteristics),
• where C is the constant part, which includes the space required for instructions, simple variables, identifiers and constants,
• and S_P is the space required for the instance characteristics.
• This is the variable part, whose space requirement depends on the particular problem instance.
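
As a hedged illustration (my own example, not from the slides), the two functions below differ only in their variable part: the first uses a constant amount of extra space, while the second allocates a list whose size grows with the instance size n.

# Space complexity sketch: fixed part vs. variable (instance-dependent) part.
def sum_iterative(values):
    # Only a constant number of simple variables are used, no matter how
    # many elements are summed, so the variable part S_P is 0: S(P) = C.
    total = 0
    for v in values:
        total += v
    return total

def squares(n):
    # The result list holds n elements, so the variable part S_P grows
    # linearly with the instance size: S(P) = C + n.
    return [i * i for i in range(n)]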
Measuring input size
• The time required to execute an algorithm depends on the size of its input: the larger the input, the longer the running time.
• Therefore we express the efficiency of an algorithm as a function that takes the input size as its parameter.
• Sometimes, to implement an algorithm, we need prior information about the input size.
• For example, when multiplying two matrices we must know the order of the matrices before we can enter their elements.
Measuring running time
• The time measured when analyzing an algorithm is generally called its running time.
• For measuring the running time of an algorithm we consider the following:
1) First recognize the basic operation of the algorithm, that is, the operation contributing the most to the total running time.
2) Identifying the basic operation of an algorithm is usually not difficult. It is generally the most time-consuming operation in the algorithm, and such operations are normally located in the innermost loop.


Measuring running time
• For example, the basic operation of a sorting algorithm is to compare elements and place them in the appropriate position.
• The formula for estimating the total running time is T(n) ≈ c_op × C(n),
• where T(n) is the running time of the algorithm,
• c_op is the time taken by one execution of the basic operation,
• and C(n) is the number of times the basic operation is executed.
• Using this formula we can obtain the approximate computing time.
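
As a hypothetical worked example (the numbers are illustrative): if the basic operation takes c_op = 10^-9 seconds and the algorithm performs C(n) = n(n-1)/2 ≈ n²/2 basic operations, then for n = 10^5 we get T(n) ≈ 10^-9 × 5×10^9 = 5 seconds.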
Algorithm for Sequential
Search
• Let us now consider the algorithm for sequential
search and find its best, worst and average case time
complexities.
• …………………………………………………………….
• Algorithm Seq_search(H[0…n-1], key)
• //Problem description: This algorithm searches for the key element in the array H[0…n-1] sequentially
• //Input: An array H[0…n-1] and a search key
• //Output: The index of H at which key is present, or -1 if key is not found
• for p = 0 to n-1 do
•     if H[p] = key then
•         return p
• return -1
• …………………………………………………………….
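
A direct Python rendering of this pseudocode (a sketch; the function name is mine, and it returns -1 when the key is absent, matching the output comment above):

def seq_search(H, key):
    """Sequential search: return the index of key in H, or -1 if absent."""
    for p in range(len(H)):
        if H[p] == key:        # basic operation: the key comparison
            return p
    return -1

print(seq_search([10, 14, 18, 20], 14))   # 1
print(seq_search([10, 14, 18, 20], 99))   # -1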



Algorithm tracing for
Sequential Search algorithm
• Let us now trace the sequential search algorithm.
• …………………………………………………………….
• // Let us consider n=4, H[ ] = {10, 14, 18, 20}, key = 14
• for p=0 to 4-1 do // the loop iterates from p=0 to 3
• if H[p]=key then // for p=0, H[0]=10 is not the key, so the loop continues with p=1
• return p // H[1]=14 matches the key, so the algorithm returns 1, the index at which the key element is found
• …………………………………………………………….



Best Case Time Complexity
• If an algorithm takes the least amount of time to execute for a specific set of inputs, then that is called its best case time complexity.
• The searching algorithm above searches for the element key in the list of n elements of the array H[0…n-1]. If key is present at the first location of the list, then the time taken to execute the algorithm is the least. Since the time complexity depends on the number of times the basic operation is executed, we get the best case time complexity when the number of basic operations is minimum.
• If the element being searched for is found at the first position, then only one basic operation is performed and the best case complexity is achieved.
• The best case count is therefore C_best(n) = 1.
Worst Case and Average
Case Time Complexity
• If an algorithm takes the maximum amount of time to execute for a specific set of inputs, then that is called its worst case time complexity.
• Average case: the average case time complexity is the time an algorithm takes on average over all inputs of a given size. It is not simply the average of the best case and worst case time complexities.



Worst Case and Average
Case Time Complexity
• Let us now consider the algorithm for
sequential search and find its best,
worst and average case time
complexities.
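
For reference, the standard textbook analysis of sequential search (with p denoting the probability that the key is present, and assuming it is equally likely to be at any position) gives:
• C_best(n) = 1
• C_worst(n) = n
• C_avg(n) = p(n + 1)/2 + n(1 − p)
so that C_avg(n) ≈ (n + 1)/2 for a successful search (p = 1) and C_avg(n) = n for an unsuccessful one (p = 0).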



Methodologies for Analyzing
Algorithms
• There are several methodologies for analyzing algorithms. The following are some of the methodologies used:
• Pseudocode
• Random Access Machine model
• Counting primitive operations
• Analyzing recursive algorithms
• Testing and measuring over a range of instances
Pseudocode
• Pseudocode is a compact, informal high-level description of a computer algorithm that uses the structural conventions of a programming language. It is meant for human reading rather than machine reading.
• It omits details that are not needed for human understanding of the algorithm, such as variable declarations, system-specific code and subroutines.
• The objective of using pseudocode is to make the logic of a program easier for a human to understand.
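
As an illustration (my own example, not from the slides), here is a small algorithm written first as pseudocode and then as one possible Python rendering of the same idea:

# Pseudocode (informal, for human reading):
#   Algorithm array_max(A[0…n-1])
#     current_max <- A[0]
#     for i <- 1 to n-1 do
#       if A[i] > current_max then current_max <- A[i]
#     return current_max

def array_max(A):
    current_max = A[0]
    for i in range(1, len(A)):
        if A[i] > current_max:
            current_max = A[i]
    return current_max

print(array_max([3, 1, 4, 1, 5]))   # 5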


Random Access Machine
(RAM)
• We can apply an analytical approach directly to the high-level code or pseudocode in order to analyze an algorithm without experimentally measuring its running time.
• The method of counting primitive operations gives rise to a computational model called the Random Access Machine (RAM).
• This model views the computer as a CPU connected to a bank of memory cells.
• Each memory cell stores a word, which may be a number, a character string or an address, that is, the value of a basic data type. The term "random access" refers to the CPU's ability to access an arbitrary memory cell with one primitive operation.
Random Access Machine
(RAM)
• There is no limit on the size of the data that can be stored in memory, and the CPU in the RAM model performs a primitive operation in a constant number of steps, independent of the input size.
• Therefore the number of primitive operations an algorithm performs corresponds directly to the running time of the algorithm.



Counting Primitive
Operations
• While analyzing an algorithm we have to count the number of primitive operations it executes. Examples of primitive operations are:
• calling a method
• returning a value from a method
• performing an arithmetic operation such as addition or subtraction
• assigning a value to a variable
• comparing two variables
• following an object reference (dereferencing a pointer)
• indexing into an array
Counting Primitive
Operations
• Counting primitive operations describes how to
count the maximum number of primitive
operations an algorithm executes.
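
As a rough illustration (my own tally, following the list of primitive operations above), the sketch below counts the operations performed by the array_max function shown earlier: 1 initial assignment, then for each of the n-1 loop iterations an index into A and a comparison, plus an assignment whenever a new maximum is found, and finally 1 return. The count therefore lies between 2(n-1) + 2 and 3(n-1) + 2, i.e. it grows linearly with n.

def count_ops_array_max(A):
    ops = 1                          # current_max <- A[0]
    current_max = A[0]
    for i in range(1, len(A)):
        ops += 2                     # index into A and compare
        if A[i] > current_max:
            current_max = A[i]
            ops += 1                 # assignment when a new maximum is found
    ops += 1                         # return
    return ops

print(count_ops_array_max([3, 1, 4, 1, 5]))   # 12, between 2*4+2=10 and 3*4+2=14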



Analyzing recursive
algorithms
• Problem solving uses not only iteration but also recursion. In this technique a procedure P is defined that makes calls to itself, provided those calls to P solve smaller subproblems.
• Recursive calls are the calls to P on smaller instances of the problem.
• A recursive procedure should define a base case that is small enough to be solved without recursion.
Analysis of
recursiveMax algorithm
• Here the algorithm checks whether the array contains only one item, in which case that item is the maximum and this simple base case immediately solves the problem. Otherwise, the algorithm first recursively computes the maximum of the first n-1 elements of the array and then returns the larger of that value and the last element of the array.
• Analyzing the running time of a recursive algorithm is a bit more difficult and requires the use of a recurrence equation.
• The function T(n) denotes the running time of the algorithm as a function of the input size n.
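
A Python sketch of recursiveMax and its recurrence (the constants in the recurrence are illustrative, not taken from the slides):

def recursive_max(A, n):
    """Return the maximum of the first n elements of A (n >= 1)."""
    if n == 1:                           # base case: a single element is the maximum
        return A[0]
    rest = recursive_max(A, n - 1)       # recursive call on a smaller instance
    return rest if rest > A[n - 1] else A[n - 1]

# Recurrence for the running time, with b and c constants:
#   T(1) = b
#   T(n) = T(n-1) + c   for n > 1
# Unrolling gives T(n) = b + (n-1)c, so the running time is linear in n.

print(recursive_max([7, 2, 9, 4], 4))    # 9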



Amortization
• Amortization is an important tool for finding the average running time of an algorithm while still guaranteeing the performance of each operation in the worst case. In this section we look at some techniques of amortization analysis and at an extendable array implementation.
• The following are the two fundamental techniques used to perform an amortization analysis:
• the accounting method, and
• the potential function method.
Accounting Method
• The accounting method assigns an amortized cost to each operation. This technique performs an amortized analysis based on a financial model.
• According to this method, if the amortized cost of a given operation is greater than its actual cost, the difference is treated as credit; if the amortized cost is less than the actual cost, the difference is treated as debit.
• The credit that is gathered is used to pay for those operations whose amortized cost is less than their actual cost.
• The sum of the amortized costs for any given series of operations gives an upper bound on the sum of the actual costs of that series.



Accounting Method
• As an example of the accounting method, let us consider table expansion.
• We usually create a table without knowing the actual space required for it.
• One common strategy is to double the size of the table whenever it fills up. With this example we show that the amortized cost of inserting an element into the table is O(1).
• Consider the following notation: TB = the table, P = the element to be inserted, no(TB) = the number of elements in table TB, size(TB) = the size allocated for TB.
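
The short simulation below (my own sketch, not from the slides) charges each insertion an amortized cost of 3 units: 1 unit pays for placing the element, and the remaining 2 units are banked as credit to pay for copying elements when the table is doubled. The credit balance never becomes negative, which is what yields the O(1) amortized bound.

size = 1          # size(TB): allocated capacity
count = 0         # no(TB): elements currently stored
credit = 0        # accumulated credit

for p in range(1000):                # insert 1000 elements
    if count == size:                # table full: double it and copy count elements
        actual = 1 + count           # actual cost: the insertion plus the copies
        size *= 2
    else:
        actual = 1                   # actual cost: just the insertion
    count += 1
    credit += 3 - actual             # charge 3; pay the actual cost from charge and credit
    assert credit >= 0, "credit went negative"

print("final credit:", credit)       # non-negative, so the amortized cost per insertion is O(1)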



Potential function Method
• This technique performs an amortized analysis based on an energy model.
• In this technique a value Φ, which represents the current energy state of the system, is associated with the structure. Each operation performed contributes some additional amount, known as the amortized time of the operation, to Φ, and also extracts value from Φ in proportion to the actual time spent.
• Formally, Φ_0 ≥ 0 denotes the initial value of Φ before any operation is performed, and Φ_i denotes the value of the potential function after the i-th operation is performed.
• The idea behind the potential function argument is to use the change in potential caused by the i-th operation, Φ_i − Φ_(i−1), to characterize the amortized time required for that operation.
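
In symbols (the standard formulation, stated here for completeness): if t_i is the actual time of the i-th operation, its amortized time is t'_i = t_i + Φ_i − Φ_(i−1). Summing over a series of n operations, the intermediate potentials telescope, giving Σ t'_i = Σ t_i + Φ_n − Φ_0 ≥ Σ t_i whenever Φ_n ≥ Φ_0, so the total amortized time is an upper bound on the total actual time.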



Analyzing an extendable
array implementation
• The limitation of the simple array implementation is that it requires advance specification of a fixed capacity N for the total number of elements that may be stored in the table.
• If the actual number of elements in the table, n, is much smaller than N, this implementation wastes space; if n grows beyond N, the implementation fails. So let us provide a means of increasing the size of the array A that stores the elements of the table S.
• In programming languages such as C, C++ and Java it is not possible to increase the size of an array once its capacity has been fixed at some number N. If the elements of the array overflow, we can follow these steps: allocate a new array B with a larger capacity (for example 2N), copy the elements of A into B, and then use B in place of A, as sketched below.
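
A compact Python sketch of such an extendable array (purely illustrative, since Python lists already grow automatically; the class and method names are mine):

class ExtendableArray:
    """Extendable array that doubles its capacity when it overflows."""

    def __init__(self, capacity=4):
        self._capacity = capacity
        self._n = 0
        self._data = [None] * capacity

    def append(self, value):
        if self._n == self._capacity:        # overflow: grow before inserting
            self._grow(2 * self._capacity)
        self._data[self._n] = value
        self._n += 1

    def _grow(self, new_capacity):
        new_data = [None] * new_capacity     # 1) allocate a new array B with larger capacity
        for i in range(self._n):             # 2) copy the existing elements into B
            new_data[i] = self._data[i]
        self._data = new_data                # 3) use B in place of the old array
        self._capacity = new_capacity

    def __getitem__(self, i):
        return self._data[i]

arr = ExtendableArray()
for x in range(10):
    arr.append(x)
print(arr[9])   # 9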



Terminal Questions
• Explain space complexity.
• Define best case, worst case and
average case complexity.
• What are the methodologies used for
analyzing algorithms?



Summary
• The analysis framework of an algorithm includes the complexities involved in executing an algorithm, measuring the input size and running time of an algorithm, and calculating best, worst and average case complexities.
• We discussed different methodologies of
algorithm analysis like pseudocode, random
access machine model, counting primitive
operations, analyzing recursive algorithms and
testing and measuring over a range of
instances.
• We analyzed amortization techniques and an
extendable array implementation.
Glossary
• Subroutine: a program unit which does not return any value through its name and which has a number of arguments.
• Recursive algorithm: an algorithm which calls itself with smaller inputs and obtains the result for the current input by applying simple operations to the value returned for the smaller input.
• Array: a sequence of elements of the same type placed in contiguous memory locations that can be individually referenced by adding an index to a single identifier.
Self Assessment Questions
• 1. The Efficiency of an algorithm can be
determined by calculating its performance.
• 2. Time complexity of an algorithm is the
amount of time required by an algorithm to
execute.
• 3. If an algorithm takes the least amount of time to execute for a specific set of inputs, then it is called Best case time complexity.



Self Assessment Questions
• 4. The method of counting primitive operations produces a computational model called the Random Access Machine.
• 5. Counting primitive operations describes how
to count the maximum number of primitive
operations an algorithm executes.
• 6. Recursive procedure should define a Base
case which is small enough to solve without
using recursion.



Self Assessment Questions
• The Accounting method technique is used to perform an amortized analysis based on a financial model.
• If we can set up such a scheme, called an amortization scheme, then each operation in the series has an Amortized running time.
• The Potential function technique is used to perform an amortized analysis based on an energy model.



Self Assessment Questions
• The method of counting primitive operations produces a computational model called the Random Access Machine.
• Counting Primitive Operations describes
how to count the maximum number of
primitive operations an algorithm executes.
• Recursive procedure should define a base
case which is small enough to solve without
using recursion.
• The Efficiency of an algorithm can be
determined by calculating its performance.



Self Assessment Questions
• Time Complexity of an algorithm is the
amount of time required by an algorithm to
execute.
• If an algorithm takes the least amount of time to execute for a specific set of inputs, then it is called Best Case time complexity.



END

