DAA Lecture-1

The document provides an overview of algorithm analysis and design, covering topics such as algorithm definition, design strategies, and efficiency analysis. It discusses the importance of correctness, time and space efficiency, and optimality in algorithms, along with various notations for expressing algorithm growth rates. The document also highlights empirical and theoretical analysis methods for evaluating algorithm performance.

Algorithm Analysis and Design

Name: M Asif Mahmood
Lecturer, GDC, D.I.KHAN
E-Mail: [email protected]
Covered Topics
• Algorithm Analysis
• Algorithm Related Issues
• Algorithm Design Strategies
• Analysis of Algorithms
• Order of Growth
Introduction of Algorithm
• An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.
• Recipe, process, method, technique, procedure, routine, … with the following requirements:
  ◦ Finiteness: terminates after a finite number of steps
  ◦ Definiteness: rigorously and unambiguously specified
  ◦ Input: valid inputs are clearly specified
  ◦ Output: can be proved to produce the correct output given a valid input
  ◦ Effectiveness: steps are sufficiently simple and basic
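As a concrete illustration (not from the slides), Euclid's gcd algorithm is the classic example satisfying all five requirements; a minimal Python sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    # Input: two non-negative integers, not both zero (valid inputs specified).
    while b != 0:            # terminates: b strictly decreases (finiteness)
        a, b = b, a % b      # each step simple and unambiguous (definiteness, effectiveness)
    return a                 # output: the greatest common divisor
```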
Introduction of Algorithm
• The commonly used notation of an algorithm:

      problem
         ↓
      algorithm
         ↓
  input → "computer" → output
Basic Issues Related to Algorithms
• How to design algorithms
• How to express algorithms
• Proving correctness
• Efficiency
  ◦ Theoretical analysis
  ◦ Empirical analysis
• Optimality
Algorithm Design Strategies
• Brute force
• Divide and conquer
• Decrease and conquer
• Transform and conquer
• Greedy approach
• Dynamic programming
• Backtracking and branch-and-bound
• Space and time tradeoffs
Factors Affecting/Masking Speed
• Number of input values
• Processor speed
• Number of simultaneous programs, etc.
• The location of items in a data structure
• Different implementations (software differences)
Analysis of Algorithms
• How good is the algorithm?
  1. Correctness
  2. Time efficiency
  3. Space efficiency
• Does there exist a better algorithm?
  1. Lower bounds
  2. Optimality
Theoretical Analysis of Time Efficiency
• Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size.
• Basic operation: the operation that contributes most towards the running time of the algorithm.

  T(n) ≈ cop · C(n)

  where T(n) is the running time, cop is the execution time for the basic operation,
  and C(n) is the number of times the basic operation is executed for input size n.
Theoretical Analysis of Time Efficiency (Cont.)
Input size and basic operation examples:

  Problem                                Input size measure          Basic operation
  Search for key in a list of n items    Number of items in list, n  Key comparison
  Multiply two matrices of               Dimensions of matrices      Floating-point multiplication
  floating-point numbers
  Compute a^n                            n                           Floating-point multiplication
  Graph problem                          #vertices and/or edges      Visiting a vertex or
                                                                     traversing an edge
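The table's first row can be sketched in code: count key comparisons (the basic operation) as a function of n. The function name `search_count` is illustrative, not from the slides:

```python
def search_count(lst, key):
    """Linear search returning (index or -1, number of key comparisons)."""
    comparisons = 0
    for i, item in enumerate(lst):
        comparisons += 1          # the basic operation: one key comparison
        if item == key:
            return i, comparisons
    return -1, comparisons        # worst case: n comparisons for n items
```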
Theoretical Analysis of Time Efficiency (Cont.)
• Types of formulas for counting basic operations:
  1) Exact formula
     e.g., C(n) = n(n-1)/2
  2) Formula indicating order of growth with specific multiplicative constant
     e.g., C(n) ≈ 0.5n²
  3) Formula indicating order of growth with unknown multiplicative constant
     e.g., C(n) ≈ c·n²
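A sketch of where the exact formula C(n) = n(n-1)/2 comes from: selection sort compares every pair once, so its comparison count hits this formula exactly (the helper name is hypothetical):

```python
def selection_sort_comparisons(a):
    """Selection sort; returns the number of element comparisons performed."""
    a = list(a)
    count = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):    # inner loop runs n-1-i times
            count += 1               # basic operation: compare a[j] with a[m]
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    # Sum over i of (n-1-i) gives C(n) = n(n-1)/2, regardless of input order.
    return count
```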
Empirical Analysis of Time Efficiency
• Select a specific (typical) sample of inputs
• Use a physical unit of time (e.g., milliseconds)
  OR count the actual number of basic operations
• Analyze the empirical data
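The three steps above can be sketched as a small timing harness; `empirical_times` is an illustrative helper, not part of the slides:

```python
import random
import time

def empirical_times(algorithm, sizes, make_input):
    """Run `algorithm` once per input size; record wall-clock seconds."""
    data = []
    for n in sizes:
        x = make_input(n)                      # typical sample input of size n
        start = time.perf_counter()            # high-resolution timer
        algorithm(x)
        data.append((n, time.perf_counter() - start))
    return data                                # empirical data to analyze

# e.g., timing Python's built-in sorted() on random lists of growing size
times = empirical_times(sorted, [1000, 2000, 4000],
                        lambda n: [random.random() for _ in range(n)])
```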
Time Efficiency of Non-Recursive Algorithms
Steps in mathematical analysis of non-recursive algorithms:
1) Decide on a parameter n indicating input size
2) Identify the algorithm's basic operation
3) Determine worst, average, and best cases for input of size n
4) Set up a summation for C(n) reflecting the algorithm's loop structure
5) Simplify the summation using standard formulas
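A minimal sketch of these steps (my example, not from the slides): for finding the maximum of n items, the basic operation is the comparison, the loop gives C(n) = Σ(i=1..n-1) 1 = n - 1, and worst, average, and best cases coincide:

```python
def max_with_count(a):
    """Find the maximum of a non-empty list; count comparisons.

    C(n) = sum over i = 1..n-1 of 1 = n - 1 (same for every input of size n).
    """
    best = a[0]
    count = 0
    for x in a[1:]:
        count += 1           # basic operation: one comparison per loop pass
        if x > best:
            best = x
    return best, count
```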
Time Efficiency of Recursive Algorithms
Steps in mathematical analysis of recursive algorithms:
1) Decide on a parameter n indicating input size
2) Identify the algorithm's basic operation
3) Determine worst, average, and best cases for input of size n
4) Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation will be executed for an input of size n (alternatively, count recursive calls)
5) Solve the recurrence to obtain a closed form, or estimate the order of magnitude of the solution
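A minimal sketch of the recursive case (my example, not from the slides): for recursive factorial, the basic operation is multiplication, the recurrence is M(n) = M(n-1) + 1 with M(0) = 0, and its closed form is M(n) = n:

```python
def fact_count(n):
    """Return (n!, number of multiplications M(n)).

    Recurrence: M(n) = M(n-1) + 1 for n > 0, M(0) = 0; closed form M(n) = n.
    """
    if n == 0:
        return 1, 0            # initial condition: no multiplications
    f, m = fact_count(n - 1)
    return n * f, m + 1        # one multiplication per recursive call
```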
Space Efficiency of Algorithms
• Space efficiency refers to consumption of main memory, but it can also refer to secondary storage (in the case of virtual memory).
• Space efficiency and time efficiency can be seen as two opposite ends of a band; every point in between has a certain time and space efficiency.
• The more time efficiency you have, the less space efficiency you have, and vice versa.
• Algorithms like mergesort are exceedingly fast but require lots of space for their operations.
• The red-black tree is a compromise between space and time: it is basically a binary-tree representation of a 2-3-4 tree, and so it takes up less space than the 2-3-4 tree.
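The tradeoff can be sketched with Fibonacci (my example, not from the slides): the naive recursion uses almost no extra space but exponential time, while memoization buys linear time at the cost of O(n) extra space for the cache:

```python
def fib_naive(n):
    """Exponential time, constant extra space (beyond the recursion stack)."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, cache=None):
    """Linear time, but O(n) extra space: time bought with space."""
    if cache is None:
        cache = {0: 0, 1: 1}
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```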
Optimality of Algorithms
The following techniques are used to optimize an algorithm:
1) Environment-specific
   • Optimization of algorithms frequently depends on the properties of the machine the algorithm will be executed on, as well as the language the algorithm is written in and the chosen data types.
2) General techniques
   • Use of indexing (e.g., in searching, prefer binary search due to its use of indexing)
3) Dependency trees and spreadsheets
   • Spreadsheets are a 'special case' of algorithm that self-optimize by virtue of the dependency trees inherent in their design, which reduce re-calculations when a cell changes.
Optimality of Algorithms (Cont.)
4) Searching strings
   • Searching for a particular text string in long sequences of characters potentially generates lengthy scans.
   • Follow the Boyer-Moore or Boyer-Moore-Horspool string-searching algorithm techniques.
5) Hot-spot analyzers
   • Special system software products known as "performance analyzers" are often available from suppliers to help in the testing process.
6) Benchmarking and competitive algorithms
   • Benchmarks are used for comparisons with competitive systems.
   • Used by customers to check functionality and performance.
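A sketch of the Horspool simplification of Boyer-Moore mentioned in point 4: the pattern is compared right to left, and on a mismatch it slides forward by a precomputed per-character shift rather than by one position:

```python
def horspool(text, pattern):
    """Boyer-Moore-Horspool search; returns the first match index, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift table: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = m - 1                       # text index aligned with pattern's last char
    while i < n:
        k = 0
        while k < m and text[i - k] == pattern[m - 1 - k]:
            k += 1                  # compare right to left
        if k == m:
            return i - m + 1        # full match found
        i += shift.get(text[i], m)  # slide pattern by the precomputed shift
    return -1
```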
Optimality of Algorithms (Cont.)
7) Compiled versus interpreted languages
   • A compiled algorithm will, in general, execute faster than the equivalent interpreted algorithm.
8) Just-in-time compilers
   • 'JIT' compilers combine features of interpreted languages with compiled languages and may also incorporate elements of optimization to a greater or lesser extent.
9) Choice of instruction or data type
   • Particularly in an assembly language (although also applicable to HLL statements), the choice of a particular instruction or data type can have a large impact on execution efficiency.
   • In general, instructions that process variables such as signed or unsigned 16-bit or 32-bit integers are faster.
Correctness of Algorithms
1) Meets the functional requirements
2) Performance criteria
3) Quality criteria
Order of Growth (Asymptotic Complexity)
• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  ◦ Instead of the exact running time, say Θ(n²).
• Describes the behavior of the function in the limit.
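A quick numerical illustration of why only the highest-order term matters (the polynomial T(n) = 3n² + 2n + 1 is my example, not from the slides): the ratio T(n)/n² approaches the constant 3, so T(n) is Θ(n²):

```python
def T(n):
    """A hypothetical exact running time: 3n^2 + 2n + 1."""
    return 3 * n * n + 2 * n + 1

# As n grows, lower-order terms become negligible: T(n)/n^2 -> 3.
ratios = [T(n) / n ** 2 for n in (10, 100, 1000, 10000)]
```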
Asymptotic Growth Rate
• A way of comparing functions that ignores constant factors and small input sizes.
• O(g(n)): class of functions t(n) that grow no faster than g(n)
• Θ(g(n)): class of functions t(n) that grow at the same rate as g(n)
• Ω(g(n)): class of functions t(n) that grow at least as fast as g(n)
Asymptotic Notation
Θ, O, Ω, o, ω
• Defined for functions over the natural numbers.
  ◦ Ex: f(n) = Θ(n²).
  ◦ Describes how f(n) grows in comparison to n².
• Each defines a set of functions; in practice used to compare the sizes of two functions.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation
For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set:
  Θ(g(n)) = { f(n) : ∃ positive constants c1, c2, and n0 such that
              0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
Intuitively: the set of all functions that have the same rate of growth as g(n).
Θ-notation (Cont.)
• Technically, f(n) ∈ Θ(g(n)).
• Older usage: f(n) = Θ(g(n)). I'll accept either.
• f(n) and g(n) are assumed nonnegative for large n.
Example
Θ(g(n)) = { f(n) : ∃ positive constants c1, c2, and n0 such that
            0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

10n² - 3n = Θ(n²)
• What constants for n0, c1, and c2 will work?
• Make c1 a little smaller than the leading coefficient, and c2 a little bigger.
• To compare orders of growth, look at the leading term.
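One concrete choice of constants following that recipe (c1 = 9 just below the leading coefficient 10, c2 = 10, n0 = 3), checked numerically:

```python
# 10n^2 - 3n >= 9n^2  iff  n^2 >= 3n  iff  n >= 3;
# 10n^2 - 3n <= 10n^2 holds for every n >= 0.
c1, c2, n0 = 9, 10, 3

def bounds_hold(n):
    """Check c1*n^2 <= 10n^2 - 3n <= c2*n^2 for a single n."""
    f = 10 * n * n - 3 * n
    return c1 * n * n <= f <= c2 * n * n

assert all(bounds_hold(n) for n in range(n0, 10_000))
```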
O-notation
For a function g(n), we define O(g(n)), big-O of g(n), as the set:
  O(g(n)) = { f(n) : ∃ positive constants c and n0 such that
              0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).
• g(n) is an asymptotic upper bound for f(n).
• f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)).
• Θ(g(n)) ⊂ O(g(n)).
Ω-notation
For a function g(n), we define Ω(g(n)), big-Omega of g(n), as the set:
  Ω(g(n)) = { f(n) : ∃ positive constants c and n0 such that
              0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).
• g(n) is an asymptotic lower bound for f(n).
• f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
• Θ(g(n)) ⊂ Ω(g(n)).
Relations Between Θ, O, Ω
[Figure: graphical comparison of the Θ, O, and Ω bounds]
Relations Between Θ, Ω, O
Theorem: For any two functions g(n) and f(n),
  f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
Running Times
• "Running time is O(f(n))" ⇒ the worst case is O(f(n)).
• An O(f(n)) bound on the worst-case running time gives an O(f(n)) bound on the running time of every input.
• A Θ(f(n)) bound on the worst-case running time does NOT give a Θ(f(n)) bound on the running time of every input.
• "Running time is Ω(f(n))" ⇒ the best case is Ω(f(n)).
• Can still say "worst-case running time is Ω(f(n))"
  ◦ Means the worst-case running time is given by some unspecified function g(n) ∈ Ω(f(n)).
Asymptotic Notation in Equations
• Can use asymptotic notation in equations to replace expressions containing lower-order terms.
• For example,
  4n³ + 3n² + 2n + 1 = 4n³ + 3n² + Θ(n)
                     = 4n³ + Θ(n²) = Θ(n³).
• How to interpret? In equations, Θ(f(n)) always stands for an anonymous function g(n) ∈ Θ(f(n)).
  ◦ In the example above, Θ(n²) stands for 3n² + 2n + 1.
o-notation
For a given function g(n), the set little-o:
  o(g(n)) = { f(n) : for any constant c > 0, ∃ n0 > 0 such that
              0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
f(n) becomes insignificant relative to g(n) as n approaches infinity:
  lim (n→∞) [f(n) / g(n)] = 0.
g(n) is an upper bound for f(n) that is not asymptotically tight.
ω-notation
For a given function g(n), the set little-omega:
  ω(g(n)) = { f(n) : for any constant c > 0, ∃ n0 > 0 such that
              0 ≤ c·g(n) < f(n) for all n ≥ n0 }.
f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
  lim (n→∞) [f(n) / g(n)] = ∞.
g(n) is a lower bound for f(n) that is not asymptotically tight.
Comparison of Functions
Think of f vs. g as a vs. b:
  f(n) = O(g(n)) ≈ a ≤ b
  f(n) = Ω(g(n)) ≈ a ≥ b
  f(n) = Θ(g(n)) ≈ a = b
  f(n) = o(g(n)) ≈ a < b
  f(n) = ω(g(n)) ≈ a > b
