
DESIGN AND ANALYSIS OF ALGORITHMS

(20IS42)

Module 1

By,
MANJESH R,
Assistant Professor,
IS&E, VVCE, Mysuru - 02
What is an Algorithm?
 An algorithm is a sequence of unambiguous
instructions for solving a problem, i.e., for obtaining a
required output for any legitimate input in a finite
amount of time.
[Figure: the notion of an algorithm. A problem is solved by an algorithm; input → "computer" (executing the algorithm) → output.]


Example: Find the Minimum of 20 Elements

Find the minimum of 20 elements.

• Divide into two groups of 10 elements each.

• Find the minimum element in each group somehow.

• Compare the minimums of the two groups to determine the overall minimum (see the sketch below).
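A small C sketch of this idea (the group boundaries and function names are illustrative, not from the original slide):

/* Helper: minimum of A[from..to], assuming from <= to. */
int min_of_group(const int A[], int from, int to) {
    int m = A[from];
    for (int i = from + 1; i <= to; i++)
        if (A[i] < m)
            m = A[i];
    return m;
}

/* Minimum of 20 elements: split into two groups of 10, take the
   minimum of each group, and compare the two group minimums. */
int min_of_20(const int A[20]) {
    int m1 = min_of_group(A, 0, 9);     /* minimum of the first group  */
    int m2 = min_of_group(A, 10, 19);   /* minimum of the second group */
    return (m1 < m2) ? m1 : m2;         /* overall minimum             */
}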



An Algorithm Must Satisfy the Following Criteria
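The five standard criteria (as listed in Horowitz and Sahni) are:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: the algorithm terminates after a finite number of steps for all cases.
5. Effectiveness: every instruction must be basic enough that it can, in principle, be carried out by a person using only pencil and paper.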



Distinct areas of study about Algorithm:
Study of algorithms includes important areas:
1. How to devise algorithms
– Creating an algorithm is an art; various design techniques yield efficient algorithms
2. How to validate algorithms
– Prove that the algorithm computes the correct answer for all possible legitimate inputs.
– The next step is to write a program and perform program verification (program proving).
– Proof of correctness
3. How to analyze algorithms
– CPU to perform operations, Memory to store program and data
– Determine how much computing time and storage an algorithm requires.
4. How to test a program - two stages:
– debugging and profiling (performance measurement)



Algorithm Specification:
We can describe an algorithm in many ways.

• We can use natural language like English.


• We can make use of pseudocode.
• We can use graphical representations called flowcharts
- suitable only for small and simple algorithms
• We must make sure that the resulting instructions are definite.
• Example: GCD
We will consider three methods for solving the same problem:
computing the greatest common divisor of two integers.



We present most of our algorithm using a PSEUDOCODE that resembles C.

1. An algorithm is written as a procedure: Algorithm Name ( <parameter list> )

2. Comments begin with // and continue until the end of line.

3. Blocks are indicated with matching braces: { and }. A compound Statement can
be represented as a block. The body of a procedure also forms a block.

4. An identifier begins with a letter.


The data types of variables are not explicitly declared.
The types will be clear from the context.
Whether a variable is global or local to a procedure will also be evident from the
context.
We assume simple datatypes such as integer, float, char, Boolean, and so on.

5. Assignment of values to variables is done using the assignment statement

<variable> ← <expression>

6. Input and output are done using the instructions read and write.
7. There are two Boolean values, true and false. To produce these values, the logical
operators and, or, and not and the relational operators <, ≤, =, ≠, ≥, and > are provided.

8. Elements of multidimensional arrays are accessed using [ and ].

9. The conditional statement takes the forms: if <condition> then <statement> and if <condition> then <statement 1> else <statement 2>

10. The following looping statements are employed: for, while, and repeat-until.
break exits from a loop (inner/outer); return exits from a function.

As an example, the following algorithm finds and returns the maximum of n given numbers:
Algorithm Max(A, n)
// Input: An array A of size n
// Output: Maximum of the array elements
{
    Result ← A[1]
    for i ← 2 to n do
        if A[i] > Result then
            Result ← A[i]
    return Result
}
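The same procedure as a directly compilable C function (a sketch; C arrays start at index 0, so the loop bounds shift by one):

/* Returns the maximum of the n elements of array A (assumes n >= 1). */
int max_element(const int A[], int n) {
    int result = A[0];              /* corresponds to Result <- A[1]    */
    for (int i = 1; i < n; i++)     /* corresponds to for i <- 2 to n   */
        if (A[i] > result)
            result = A[i];
    return result;
}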



• These examples will help us to illustrate several important points:
• The non ambiguity requirement for each step of an algorithm cannot be
compromised.
• The range of inputs for which an algorithm works has to be specified
carefully.
• The same algorithm can be represented in several different ways.
• There may exist several algorithms for solving the same problem.
• Algorithms for the same problem can be based on very different ideas and
can solve the problem with dramatically different speeds.

• The greatest common divisor of two nonnegative, not-both-zero integers m and n, denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero.
• Euclid’s algorithm is based on applying repeatedly the equality gcd(m, n) = gcd(n, m mod n), where m mod n is the remainder of the division of m by n, until m mod n is equal to 0.
• Since gcd(m, 0) = m (why?), the last value of m is also the greatest common divisor of the initial m and n.
• For example, gcd(60, 24) can be computed as follows:
gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12
• Euclid’s algorithm for computing gcd(m, n)
• Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed
to Step 2.
• Step 2 Divide m by n and assign the value of the remainder to r.
• Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.

Alternatively, we can express the same algorithm in pseudocode:


• ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two non-negative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
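A directly compilable C sketch of the same algorithm:

/* Computes gcd(m, n) by Euclid's algorithm.
   Assumes m and n are non-negative and not both zero. */
unsigned int gcd_euclid(unsigned int m, unsigned int n) {
    while (n != 0) {
        unsigned int r = m % n;   /* r <- m mod n */
        m = n;
        n = r;
    }
    return m;                     /* last value of m is gcd */
}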

• The value of the second integer eventually becomes 0, and the algorithm stops.



• Let us look at another method for this problem. It is based simply on the definition of the greatest common divisor of m and n as the largest integer that divides both numbers evenly.
• Obviously, such a common divisor cannot be greater than the smaller of these
numbers, which we will denote by t = min{m, n}.

• Consecutive integer checking algorithm for computing gcd(m, n)


• Step 1 Assign the value of min{m, n} to t.
• Step 2 Divide m by t. If the remainder of this division is 0, go to Step 3;
otherwise, go to Step 4.
• Step 3 Divide n by t. If the remainder of this division is 0, return the value of
t as the answer and stop; otherwise, proceed to Step 4.
• Step 4 Decrease the value of t by 1. Go to Step 2.

• Note that unlike Euclid’s algorithm, this algorithm, in the form presented, does
not work correctly when one of its input numbers is zero. This example
illustrates why it is so important to specify the set of an algorithm’s inputs
explicitly and carefully.
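A C sketch of the consecutive integer checking algorithm described above (as noted, it assumes both inputs are positive):

/* Consecutive integer checking algorithm for gcd(m, n).
   Assumes m > 0 and n > 0; it does not handle zero inputs. */
unsigned int gcd_consecutive(unsigned int m, unsigned int n) {
    unsigned int t = (m < n) ? m : n;    /* Step 1: t <- min{m, n}            */
    while (m % t != 0 || n % t != 0)     /* Steps 2-3: t must divide both     */
        t--;                             /* Step 4: decrease t by 1, retry    */
    return t;                            /* loop always stops at t = 1 at worst */
}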



• The third procedure for finding the greatest common divisor should be familiar to
you from middle school.

• Middle-school Procedure for computing gcd(m, n)


• Step 1 Find the prime factors of m.
• Step 2 Find the prime factors of n.
• Step 3 Identify all the common factors in the two prime expansions found in Step 1 and Step 2. (If p is a common factor occurring p_m and p_n times in m and n, respectively, it should be repeated min{p_m, p_n} times.)
• Step 4 Compute the product of all the common factors and return it as the
greatest common divisor of the numbers given.

• Thus, for the numbers 60 and 24, we get


60 = 2 . 2 . 3 . 5
24 = 2 . 2 . 2 . 3
gcd(60, 24) = 2 . 2 . 3 = 12.

• The middle-school procedure does not qualify, in the form presented, as a legitimate algorithm. Why? Because the prime factorization steps are not defined unambiguously: they require a list of prime numbers, and the procedure does not say how to obtain one.



• So, let us introduce a simple algorithm for generating consecutive primes not
exceeding any given integer n > 1. It was probably invented in ancient Greece
and is known as the Sieve of Eratosthenes.

• As an example, consider the application of the algorithm to finding the list of primes not exceeding n = 25:
2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
2  3     5     7     9     11    13    15    17    19    21    23    25
2  3     5     7           11    13          17    19          23    25
2  3     5     7           11    13          17    19          23

• What is the largest number p whose multiples can still remain on the list to
make further iterations of the algorithm necessary?
• Obviously, p · p should not be greater than n, and therefore p cannot exceed √n rounded down (denoted ⌊√n⌋ using the so-called floor function).



ALGORITHM Sieve(n)
//Implements the sieve of Eratosthenes
//Input: A positive integer n > 1
//Output: Array L of all prime numbers less than or equal to n
for p ← 2 to n do A[p] ← p
for p ← 2 to ⌊√n⌋ do //see note before pseudocode
    if A[p] ≠ 0 //p hasn’t been eliminated on previous passes
        j ← p ∗ p
        while j ≤ n do
            A[j] ← 0 //mark element as eliminated
            j ← j + p
//copy the remaining elements of A to array L of the primes
i ← 0
for p ← 2 to n do
    if A[p] ≠ 0
        L[i] ← A[p]
        i ← i + 1
return L
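A C sketch of the sieve (the function name and the use of a caller-supplied output array are illustrative; 0 marks an eliminated number, as in the pseudocode above):

#include <stdlib.h>
#include <math.h>

/* Fills primes[] with all primes <= n and returns how many were found.
   primes[] must have room for at least n entries; n must be > 1. */
int sieve(int n, int primes[]) {
    int *A = malloc((n + 1) * sizeof(int));
    for (int p = 2; p <= n; p++)
        A[p] = p;
    int limit = (int)sqrt((double)n);        /* floor of sqrt(n) */
    for (int p = 2; p <= limit; p++)
        if (A[p] != 0)                        /* p not yet eliminated */
            for (int j = p * p; j <= n; j += p)
                A[j] = 0;                     /* eliminate multiples of p */
    int count = 0;
    for (int p = 2; p <= n; p++)
        if (A[p] != 0)
            primes[count++] = A[p];           /* copy the surviving primes */
    free(A);
    return count;
}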
Fundamentals of Algorithmic Problem Solving
• We can consider algorithms to be procedural solutions to problems.
• These solutions are not answers but specific instructions for getting answers.
• We now list and briefly discuss a sequence of steps one typically goes through in designing and analyzing an algorithm.



• Understanding the Problem
• From a practical perspective, the first thing you need to do before designing an
algorithm is to understand completely the problem given.
• Read the problem’s description carefully and ask questions if you have any doubts
about the problem, do a few small examples by hand, think about special cases, and
ask questions again if needed.
• Of course, it helps to understand how such an algorithm works and to know its
strengths and weaknesses, especially if you have to choose among several available
algorithms.
• But often you will not find a readily available algorithm and will have to design your
own.
• An input to an algorithm specifies an instance of the problem the algorithm solves.
• It is very important to specify exactly the set of instances the algorithm needs to handle.
• Ascertaining the Capabilities of the Computational Device
• Once you completely understand a problem, you need to ascertain the capabilities of
the computational device the algorithm is intended for.
• The vast majority of algorithms in use today are still destined to be programmed for a
computer closely resembling the von Neumann machine.
• The essence of this architecture is captured by the so-called random-access machine
(RAM).
• Its central assumption is that instructions are executed one after another, one
operation at a time. Accordingly, algorithms designed to be executed on such
machines are called sequential algorithms.
• The central assumption of the RAM model does not hold for some newer computers
that can execute operations concurrently, i.e., in parallel. Algorithms that take
advantage of this capability are called parallel algorithms.
• In many situations you need not worry about a computer being too slow for the task.
• There are important problems, however, that are very complex by their nature, or
have to process huge volumes of data, or deal with applications where the time is
critical. In such situations, it is imperative to be aware of the speed and memory
available on a particular computer system.

• Choosing between Exact and Approximate Problem Solving


• The next principal decision is to choose between solving the problem exactly or
solving it approximately.
• In the former case, an algorithm is called an exact algorithm; in the latter case, an
algorithm is called an approximation algorithm.
• Why would one opt for an approximation algorithm?
• First, there are important problems that simply cannot be solved exactly for most
of their instances.
• Second, available algorithms for solving a problem exactly can be unacceptably
slow because of the problem’s intrinsic complexity.
• Third, an approximation algorithm can be a part of a more sophisticated algorithm
that solves a problem exactly.



• Algorithm Design Techniques
• An algorithm design technique (or “strategy” or “paradigm”) is a general approach to
solving problems algorithmically that is applicable to a variety of problems from
different areas of computing.
• Learning the techniques is of utmost importance for the following reasons.
• First, they provide guidance for designing algorithms for new problems, i.e.,
problems for which there is no known satisfactory algorithm.
• Second, algorithms are the cornerstone of computer science.
• Algorithm design techniques make it possible to classify algorithms according to an
underlying design idea; therefore, they can serve as a natural way to both categorize
and study algorithms.

• Designing an Algorithm and Data Structures


• While the algorithm design techniques do provide a powerful set of general
approaches to algorithmic problem solving, designing an algorithm for a particular
problem may still be a challenging task.
• Some design techniques can be simply inapplicable to the problem in question.
Sometimes, several techniques need to be combined, and there are algorithms that
are hard to pinpoint as applications of the known design techniques.
• Of course, one should pay close attention to choosing data structures appropriate for
the operations performed by the algorithm.
• Algorithms + Data Structures = Programs



• Methods of Specifying an Algorithm
• Once you have designed an algorithm, you need to specify it in some fashion. Algorithm can be
described in words (in a free and also a step-by-step form) and in pseudocode. These are the
two options that are most widely used nowadays for specifying algorithms.
• Using a natural language has an obvious appeal; however, the inherent ambiguity of any
natural language makes a succinct and clear description of algorithms surprisingly difficult.
• Pseudocode is a mixture of a natural language and programming-language-like constructs.
Pseudocode is usually more precise than natural language, and its usage often yields more
succinct algorithm descriptions. Surprisingly, computer scientists have never agreed on a single
form of pseudocode.
• In the earlier days of computing, the dominant vehicle for specifying algorithms was a
flowchart, a method of expressing an algorithm by a collection of connected geometric shapes
containing descriptions of the algorithm’s steps. This representation technique has proved to
be inconvenient for all but very simple algorithms.

• Proving an Algorithm’s Correctness


• Once an algorithm has been specified, you have to prove its correctness. That is, you have to
prove that the algorithm yields a required result for every legitimate input in a finite amount of
time.
• For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A
common technique for proving correctness is to use mathematical induction because an
algorithm’s iterations provide a natural sequence of steps needed for such proofs.
• The notion of correctness for approximation algorithms is less straightforward than it is for
exact algorithms.
• Analysing an Algorithm
• We usually want algorithms to possess several qualities. After correctness, by far the
most important is efficiency.
• In fact, there are two kinds of algorithm efficiency: time efficiency, indicating how fast
the algorithm runs, and space efficiency, indicating how much extra memory it uses.
• Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and measured, simplicity is largely a matter of judgment.
• Yet another desirable characteristic of an algorithm is generality. There are, in fact,
two issues here: generality of the problem the algorithm solves and the set of inputs it
accepts.
• Coding an Algorithm
• Most algorithms are destined to be ultimately implemented as computer programs.
• The validity of programs is still established by testing.



Important Problem Types

• Sorting

• Searching

• String Processing

• Graph Problems

• Combinatorial Problems

• Geometric Problems

• Numerical Problems



Sorting (I)
• Rearrange the items of a given list in ascending order.
• Input: A sequence of n numbers <a1, a2, …, an>
• Output: A reordering <a′1, a′2, …, a′n> of the input sequence such that
a′1 ≤ a′2 ≤ … ≤ a′n.

• Why sorting?
• Help searching
• Algorithms often use sorting as a key subroutine.

• Sorting key
• A specially chosen piece of information used to guide sorting. E.g., sort
student records by names.



Sorting (II)
• Examples of sorting algorithms
• Selection sort
• Bubble sort
• Insertion sort
• Merge sort
• Heap sort …

• Evaluate sorting algorithm complexity: the number of key comparisons.

• Two properties
• Stability: A sorting algorithm is called stable if it preserves the relative
order of any two equal elements in its input.
• In-place: A sorting algorithm is in-place if it does not require extra memory, except possibly for a few memory units.



Searching

• Find a given value, called a search key, in a given set.

• Examples of searching algorithms


• Sequential search
• Binary search …



String Processing

• A string is a sequence of characters from an alphabet.


• Text strings: letters, numbers, and special characters.
• Bit strings: zeros and ones
• Gene sequences, which can be modeled by strings of characters from the four-character alphabet {A, C, G, T}

• String matching: searching for a given word/pattern in a text.

• Examples:
(i) searching for a word or phrase on WWW or in a Word document
(ii) searching for a short read in the reference genomic sequence



Graph Problems

• Informal definition
• A graph is a collection of points called vertices, some of which are
connected by line segments called edges.

• Modeling real-life problems


• Modeling WWW
• Communication networks
• Project scheduling …

• Examples of graph algorithms


• Graph traversal algorithms
• Shortest-path algorithms
• Topological sorting



Combinatorial Problems
• Combinatorial problems are problems that ask, explicitly or implicitly, to find
a combinatorial object—such as a permutation, a combination, or a subset—
that satisfies certain constraints.
• A desired combinatorial object may also be required to have some additional
property such as a maximum value or a minimum cost.

Geometric Problems
• Geometric algorithms deal with geometric objects such as points, lines, and
polygons.
• Some more examples are:
• Computer graphics
• Robotics
• Tomography – medicine
• Closest-pair problem
• Convex-Hull problem



Numerical Problems
• Numerical problems, another large special area of applications, are problems
that involve mathematical objects of continuous nature: solving equations and
systems of equations, computing definite integrals, evaluating functions, and
so on.
• The majority of such mathematical problems can be solved only
approximately.

• Some more examples are:


• Business analysis – operations research
• Information storage and retrieval
• Network data transfers



The Analysis Framework
• There are two kinds of efficiency: time efficiency and space efficiency.
• Time efficiency, also called time complexity, indicates how fast an algorithm in
question runs.
• Space efficiency, also called space complexity, refers to the amount of memory
units required by the algorithm in addition to the space needed for its input and
output.

Measuring an Input’s Size


• The obvious observation is that almost all algorithms run longer on larger inputs.
• For example, it takes longer to sort larger arrays, multiply larger matrices,
and so on.
• Therefore, it is logical to investigate an algorithm’s efficiency as a function of
some parameter n indicating the algorithm’s input size.
• In most cases, selecting such a parameter is quite straightforward.
• For example, it will be the size of the list for problems of sorting, searching,
finding the list’s smallest element, and most other problems dealing with
lists.



• For the problem of evaluating a polynomial p(x) = aₙxⁿ + … + a₀ of degree n, it will be the polynomial’s degree or the number of its coefficients, which is larger by 1 than its degree.
• For example, how should we measure an input’s size for a spell-
checking algorithm?
• If the algorithm examines individual characters of its input, we
should measure the size by the number of characters.
• We should make a special note about measuring input size for
algorithms solving problems such as checking primality of a positive
integer n.
• Here, the input is just one number, and it is this number’s
magnitude that determines the input size.
• In such situations, it is preferable to measure size by the number b of bits in n’s binary representation:
b = ⌊log₂ n⌋ + 1



Units for Measuring Running Time
• The next issue concerns units for measuring an algorithm’s running time.
• Of course, we can simply use some standard unit of time measurement—a
second, or millisecond, and so on—to measure the running time of a program
implementing the algorithm.
• There are obvious drawbacks to such an approach, however: dependence on the
speed of a particular computer, dependence on the quality of a program
implementing the algorithm and of the compiler used in generating the machine
code, and the difficulty of clocking the actual running time of the program.
• Since we are after a measure of an algorithm’s efficiency, we would like to have a
metric that does not depend on these extraneous factors.
• One possible approach is to count the number of times each of the algorithm’s
operations is executed.
• This approach is both excessively difficult and, as we shall see, usually
unnecessary.
• The thing to do is to identify the most important operation of the algorithm,
called the basic operation, the operation contributing the most to the total
running time, and compute the number of times the basic operation is executed.
• Thus, the established framework for the analysis of an algorithm’s time efficiency
suggests measuring it by counting the number of times the algorithm’s basic
operation is executed on inputs of size n.
• For example, let c_op be the execution time of an algorithm’s basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm.
• Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula
T(n) ≈ c_op · C(n)
• How much faster would this algorithm run on a machine that is 10 times faster than
the one we have?” The answer is, obviously, 10 times.
• Assuming that C(n) = (1/2) n(n − 1), how much longer will the algorithm run if we double its input size? The answer is about four times longer. Indeed, for all but very small values of n,
T(2n)/T(n) ≈ C(2n)/C(n) = [(1/2)(2n)(2n − 1)] / [(1/2) n(n − 1)] = 2(2n − 1)/(n − 1) ≈ 4.

• The efficiency analysis framework ignores multiplicative constants and concentrates on the count’s order of growth to within a constant multiple for large-size inputs.
Orders of Growth

• The difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones.
• For large values of input size n, it is the function’s order of growth that counts.



• 1 < log n < √n < n < n log n < n² < n² log n < n³ < n³ log n < … < 2ⁿ < 3ⁿ < … < n!



Orders of Growth (2)
• A linear function grows at a constant rate.
• The function growing the slowest among these is the logarithmic
function.
• 2ⁿ and the factorial function n! both grow so fast that their values become astronomically large even for rather small values of n.
• So both are often referred to as “exponential-growth functions” (or simply “exponential”).
• When the input size is doubled, the function log₂ n increases in value by just 1 (because log₂ 2n = log₂ 2 + log₂ n = 1 + log₂ n);
• the linear function increases twofold, and the linearithmic function n log₂ n increases slightly more than twofold;
• the quadratic function n² and the cubic function n³ increase fourfold and eightfold, respectively (because (2n)² = 4n² and (2n)³ = 8n³); the value of 2ⁿ gets squared (because 2²ⁿ = (2ⁿ)²); and n! increases much more than that.



Worst-Case, Best-Case, and Average-Case Efficiencies
• Consider, as an example, sequential search. This is a straightforward algorithm
that searches for a given item (some search key K) in a list of n elements by
checking successive elements of the list until either a match with the search
key is found or the list is exhausted.
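A minimal C sketch of this algorithm (the function name is illustrative):

/* Sequential search: returns the index of the first element of A[0..n-1]
   equal to the search key K, or -1 if K is not present. */
int sequential_search(const int A[], int n, int K) {
    for (int i = 0; i < n; i++)
        if (A[i] == K)        /* basic operation: comparison with the key */
            return i;
    return -1;                /* unsuccessful search */
}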



Worst-Case Efficiency
• The running time of this algorithm can be quite different for the same
list size n.
• In the worst case, when there are no matching elements or the first
matching element happens to be the last one on the list, the algorithm
makes the largest number of key comparisons among all possible inputs
of size n:
Cworst(n) = n

• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
• The way to determine the worst-case efficiency of an algorithm is, in
principle, quite straightforward: analyze the algorithm to see what kind
of inputs yield the largest value of the basic operation’s count C(n)
among all possible inputs of size n and then compute this worst-case
value Cworst(n).



Best-Case Efficiency
• The best-case efficiency of an algorithm is its efficiency for the best-case
input of size n, which is an input (or inputs) of size n for which the
algorithm runs the fastest among all possible inputs of that size.
• Accordingly, we can analyze the best-case efficiency as follows.
• First, we determine the kind of inputs for which the count C(n) will be
the smallest among all possible inputs of size n.

• The best-case inputs for sequential search are lists of size n with their
first element equal to a search key; accordingly,
Cbest(n) = 1
for this algorithm.
• The analysis of the best-case efficiency is not nearly as important as that
of the worst-case efficiency. But it is not completely useless, either.



Average-Case Efficiency
• Neither the worst-case analysis nor its best-case counterpart yields the
necessary information about an algorithm’s behavior on a “typical” or
“random” input.
• This is the information that the average-case efficiency seeks to provide.
• To analyze the algorithm’s average case efficiency, we must make some
assumptions about possible inputs of size n.
• Let’s consider again sequential search. The standard assumptions are
that
• (a) the probability of a successful search is equal to p (0 ≤ p ≤ 1);
• (b) the probability of the first match occurring in the i-th position of the list is the same for every i, namely p/n, and the probability of an unsuccessful search is (1 − p).
• C_avg(n) = [1 · p/n + 2 · p/n + 3 · p/n + … + i · p/n + … + n · p/n] + n · (1 − p)
= (p/n) · [1 + 2 + 3 + … + i + … + n] + n · (1 − p)
= (p/n) · n(n + 1)/2 + n · (1 − p)
= p(n + 1)/2 + n(1 − p)



• Yet another type of efficiency is called amortized efficiency.
• It applies not to a single run of an algorithm but rather to a sequence of
operations performed on the same data structure.
• It turns out that in some situations a single operation can be expensive,
but the total time for an entire sequence of n such operations is always
significantly better than the worst-case efficiency of that single operation
multiplied by n.
• So we can “amortize” the high cost of such a worst-case occurrence over
the entire sequence in a manner similar to the way a business would
amortize the cost of an expensive item over the years of the item’s
productive life.



Asymptotic Notations and Basic Efficiency Classes
• To compare and rank orders of growth, computer scientists use three notations: O (big
oh), Ω (big omega), and Θ (big theta).
• In the context we are interested in, t (n) will be an algorithm’s running time (usually
indicated by its basic operation count C(n)), and g(n) will be some simple function to
compare the count with.
• Informally, O(g(n)) is the set of all functions with a lower or same order of growth as
g(n) (to within a constant multiple, as n goes to infinity). Thus, to give a few examples,
the following assertions are all true:
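n ∈ O(n²), 100n + 5 ∈ O(n²), and (1/2)n(n − 1) ∈ O(n²); on the other hand, n³ ∉ O(n²).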

• The second notation, Ω(g(n)), stands for the set of all functions with a higher or same
order of growth as g(n) (to within a constant multiple, as n goes to infinity).
• For example,
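n³ ∈ Ω(n²) and (1/2)n(n − 1) ∈ Ω(n²), but 100n + 5 ∉ Ω(n²).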

• Finally, Θ(g(n)) is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity). Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²).
O (Big Oh)- notation



• Here c · g(n) is the upper bound.
• The upper bound on t(n) indicates that the algorithm will not consume more than the specified time c · g(n).
• t(n) can touch that line, but never cross it.
• Big-O is a formal way of expressing the longest amount of time it could take for the algorithm to complete.
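• Formally: a function t(n) is in O(g(n)) if there exist a positive constant c and a nonnegative integer n₀ such that t(n) ≤ c · g(n) for all n ≥ n₀.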
Ω (Big-Omega) - notation



• Here c · g(n) is the lower bound.
• The lower bound on t(n) indicates that the algorithm will not take less than the specified time c · g(n).
• t(n) can touch that line, but never cross it.
• Big-Ω is a formal way of expressing a lower bound on the time the algorithm takes to complete (for example, a bound derived from its best-case behavior).
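• Formally: t(n) is in Ω(g(n)) if there exist a positive constant c and a nonnegative integer n₀ such that t(n) ≥ c · g(n) for all n ≥ n₀.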



Θ (Big-Theta) - notation



• Here c₁ · g(n) is the upper bound and c₂ · g(n) is the lower bound.
• The upper and lower bounds on t(n) indicate that the algorithm’s running time stays between c₂ · g(n) and c₁ · g(n).
• t(n) can touch both lines, but never cross them.
• Big-Θ is a formal way of expressing a tight bound: the algorithm’s running time grows at the same rate as g(n), to within constant multiples.
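• Formally: t(n) is in Θ(g(n)) if there exist positive constants c₁ and c₂ and a nonnegative integer n₀ such that c₂ · g(n) ≤ t(n) ≤ c₁ · g(n) for all n ≥ n₀.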
Useful Property Involving the Asymptotic Notations
If an algorithm has two executable parts, the analysis of this algorithm
can be obtained using the following theorem.

Theorem: If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)}).
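For example, if the first part of an algorithm runs in O(n²) time and the second part in O(n) time, the algorithm as a whole runs in O(max{n², n}) = O(n²) time.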



Using Limits for Comparing Orders of Growth
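In outline, the standard rule is to compute the limit of the ratio t(n)/g(n) as n → ∞:
• if the limit is 0, t(n) has a smaller order of growth than g(n);
• if the limit is a positive constant c, t(n) and g(n) have the same order of growth, i.e., t(n) ∈ Θ(g(n));
• if the limit is ∞, t(n) has a larger order of growth than g(n).
L’Hôpital’s rule and Stirling’s formula are often useful for evaluating such limits.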



Basic Efficiency Classes
Class      Name           Examples
1 or c     constant       a single instruction
log n      logarithmic    binary search
n          linear         sequential search
n log n    linearithmic   merge sort / quick sort
n²         quadratic      bubble sort / selection sort
n³         cubic          matrix multiplication
2ⁿ         exponential    Tower of Hanoi
n!         factorial      generating all permutations


Mathematical Analysis of Non-recursive Algorithms
General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the
innermost loop.)
3. Check whether the number of times the basic operation is executed
depends only on the size of an input. If it also depends on some additional
property, the worst-case, average-case, and, if necessary, best-case
efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic
operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed
form formula for the count or, at the very least, establish its order of growth.



Properties of Logarithms
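For reference, the standard identities (for bases a, b > 1 and x, y > 0):
• log_a 1 = 0 and log_a a = 1
• log_a (xy) = log_a x + log_a y
• log_a (x/y) = log_a x − log_a y
• log_a x^k = k log_a x
• log_a x = log_b x / log_b a
• a^(log_b x) = x^(log_b a)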



Important Summation Formulas
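For reference, the formulas used most often in this module:
• Σ from i = 1 to n of 1 = n
• 1 + 2 + … + n = n(n + 1)/2 ≈ n²/2 ∈ Θ(n²)
• 1² + 2² + … + n² = n(n + 1)(2n + 1)/6 ≈ n³/3 ∈ Θ(n³)
• 1 + a + a² + … + aⁿ = (aⁿ⁺¹ − 1)/(a − 1) for a ≠ 1; in particular, 1 + 2 + 4 + … + 2ⁿ = 2ⁿ⁺¹ − 1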



Sum Manipulation Rules
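For reference:
• Σ c · aᵢ = c Σ aᵢ
• Σ (aᵢ ± bᵢ) = Σ aᵢ ± Σ bᵢ
• Σ from i = l to u of aᵢ = Σ from i = l to m of aᵢ + Σ from i = m + 1 to u of aᵢ (for l ≤ m < u)
• Σ from i = l to u of 1 = u − l + 1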



Example 1: Largest element in a list of n numbers
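A sketch of the analysis, assuming the textbook’s MaxElement pseudocode (a single loop that compares each A[i] with the current maximum): the input size is n, the basic operation is the comparison inside the loop, and it is executed the same number of times for every input of size n, so
C(n) = Σ from i = 1 to n − 1 of 1 = n − 1 ∈ Θ(n).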



Example 2: Element uniqueness problem
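A sketch of the analysis, assuming the textbook’s UniqueElements pseudocode (two nested loops comparing every pair A[i], A[j] with i < j): in the worst case the comparison is executed
C_worst(n) = Σ from i = 0 to n − 2 of Σ from j = i + 1 to n − 1 of 1 = (n − 1) + (n − 2) + … + 1 = n(n − 1)/2 ∈ Θ(n²) times.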



Example 3: Matrix multiplication (do it yourself).

Example 4: Find the number of binary digits in the binary representation of a positive decimal integer.

The exact formula for the number of times the comparison n > 1 will be executed is ⌊log₂ n⌋ + 1, which is the number of bits in the binary representation of n.
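A minimal C sketch of the non-recursive algorithm for Example 4 (the function name is illustrative):

/* Counts the binary digits of a positive integer n.
   The comparison n > 1 is executed floor(log2 n) + 1 times. */
int binary_digits(int n) {
    int count = 1;        /* any n >= 1 has at least one binary digit */
    while (n > 1) {       /* basic operation: the comparison n > 1    */
        count++;
        n = n / 2;        /* integer division drops the last bit      */
    }
    return count;
}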
Mathematical Analysis of Recursive Algorithms
General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary
on different inputs of the same size; if it can, the worst-case, average-case,
and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its
solution.



Recursion: Direct & Indirect

Direct recursion: a recursive function that invokes itself.
E.g.: the factorial function.

Indirect recursion: a function that contains a call to another function, which in turn calls another function, and so on, until eventually some function in the chain calls the first function.



Design of Recursive function
• General rule for Designing Recursive algorithm
• Determine the Base Case
• Determine the General Case
• Combine the base and general case
• Eg: Factorial of a number

• Base case & General Case


• Base case: a special case where the solution can be obtained without using recursion (the base/terminal condition).
• A base case serves two functions:
• It acts as a terminating condition.
• The recursive function obtains its result from the base case once it is reached.
• E.g.: 0! = 1
• General case (other than the base case):
• This portion of the code contains the logic required to reduce the size of the problem so as to move towards the base case or terminal condition.
• E.g.: fact(n) = n * fact(n − 1)
Example 1: Compute the factorial function
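A minimal C sketch of the standard recursive algorithm, followed by its analysis:

/* Computes n! recursively: F(n) = F(n - 1) * n for n >= 1, and F(0) = 1. */
long factorial(int n) {
    if (n == 0)
        return 1;                       /* base case: 0! = 1 */
    return factorial(n - 1) * (long)n;  /* general case      */
}

Letting M(n) be the number of multiplications, M(n) = M(n − 1) + 1 for n > 0 with M(0) = 0; backward substitution gives M(n) = n, so the algorithm is Θ(n).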



Example 2: Tower of Hanoi
• There are 3 needles A, B, C
• n discs of different diameters are placed one above the other on needle A (smaller discs above larger ones).
• The two needles B and C are empty.
• All the discs from needle A are to be transferred to needle C, using needle B as temporary storage.

• The rules to be followed while transferring the discs are:
• Only one disc is moved at a time from one needle to another.
• A larger disc is never placed on top of a smaller disc.
• Only one needle may be used for storing discs temporarily.



• Base case: this case occurs when there are no discs (n = 0).
• In this situation we simply return control to the calling function:
if n = 0 then return

• General case: this case occurs if one or more discs have to be transferred from source to destination.
• If there are n discs, then all n discs can be transferred recursively using the following 3 steps (a C sketch follows the list):

• Move n − 1 discs recursively from Source to Temp.
• Move the nth disc from Source to Destination.
• Move n − 1 discs recursively from Temp to Destination.
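A minimal C sketch of these three steps (the function name and the printed message are illustrative):

#include <stdio.h>

/* Moves n discs from needle 'source' to needle 'dest',
   using needle 'temp' for intermediate storage. */
void hanoi(int n, char source, char temp, char dest) {
    if (n == 0)
        return;                          /* base case: nothing to move        */
    hanoi(n - 1, source, dest, temp);    /* move n-1 discs: source -> temp    */
    printf("Move disc %d from %c to %c\n", n, source, dest);
    hanoi(n - 1, temp, source, dest);    /* move n-1 discs: temp -> dest      */
}

/* Example: hanoi(3, 'A', 'B', 'C') prints the 7 moves for 3 discs. */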



The recurrence relation for the number of moves, M(n) = 2M(n − 1) + 1 with M(1) = 1, can be solved using repeated substitution as shown below:
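M(n) = 2M(n − 1) + 1
     = 2[2M(n − 2) + 1] + 1 = 2²M(n − 2) + 2 + 1
     = 2³M(n − 3) + 2² + 2 + 1
     = …
     = 2^(n−1) M(1) + 2^(n−2) + … + 2 + 1
     = 2^(n−1) + 2^(n−2) + … + 2 + 1 = 2ⁿ − 1

So the algorithm makes 2ⁿ − 1 moves, i.e., it is Θ(2ⁿ).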



Example 2: Recursive version to find the number of binary digits in the binary representation of a positive decimal integer.

Consider the number of additions made by this algorithm.
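Assuming the textbook’s BinRec algorithm (if n = 1 return 1, else return BinRec(⌊n/2⌋) + 1), a sketch of the analysis: let A(n) count the additions. Then A(n) = A(⌊n/2⌋) + 1 for n > 1, with A(1) = 0. For n = 2^k this becomes A(2^k) = A(2^(k−1)) + 1 with A(2^0) = 0, whose solution is A(2^k) = k; hence A(n) = ⌊log₂ n⌋ ∈ Θ(log n).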
