ADA Module1 Part 1
We call this a computer, keeping in mind that before the electronic computer was
invented, the word "computer" meant a human being employed to perform
numeric calculations.
Note, however, that although the majority of algorithms are indeed intended
for eventual computer implementation, the notion of algorithm does not depend
on such an assumption.
For example, the correctness of Euclid's algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n), the simple
observation that the second integer gets smaller on every iteration of the algorithm, and the
fact that the algorithm stops when the second integer becomes 0.
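As a concrete illustration, here is a minimal C sketch of Euclid's algorithm (the function name and sample inputs are ours). The loop embodies exactly the three facts above: the equality gcd(m, n) = gcd(n, m mod n), the shrinking second argument, and the stop condition n = 0.

    #include <stdio.h>

    /* Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n).
       The second argument strictly decreases on every iteration, so the
       loop terminates; it stops when the second argument becomes 0,
       at which point m holds the gcd. */
    int gcd(int m, int n) {
        while (n != 0) {
            int r = m % n;
            m = n;
            n = r;
        }
        return m;
    }

    int main(void) {
        printf("%d\n", gcd(60, 24));  /* prints 12 */
        return 0;
    }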
For some algorithms, a proof of correctness is quite easy; for others, it can be quite
complex.
A common technique for proving correctness is mathematical induction, because an
algorithm's iterations provide a natural sequence of statements for such a proof.
Analyzing an algorithm
After correctness, efficiency has to be estimated. There are two kinds of efficiency:
time efficiency and space efficiency.
Time: how fast does the algorithm run?
Space: how much extra memory does the algorithm need?
Simplicity: how simple the algorithm is compared to existing algorithms. Sometimes
simpler algorithms are also more efficient than more complicated alternatives.
Unfortunately, this is not always true, in which case a judicious compromise needs to
be made.
Generality: the generality of the problem the algorithm solves and of the set of inputs it accepts.
If the algorithm does not satisfy these properties, it is necessary to redesign it.
Code the algorithm
Writing the program in a programming language. The selected programming language
should support the features assumed in the design phase.
Program testing: If the inputs to an algorithm are guaranteed to belong to the specified
sets, no input verification is required. But when implementing algorithms as programs
to be used in actual applications, such verification must be provided.
Documentation of the algorithm is also important.
It is also possible to estimate how much longer this algorithm would run if we
doubled its input size. Assume that C(n) = (1/2)n(n-1). For all but very small values of n,

    C(n) = (1/2)n(n-1) = (1/2)n^2 - (1/2)n ≈ (1/2)n^2

Therefore, with c_op denoting the execution time of the basic operation,

    T(2n)/T(n) ≈ c_op C(2n) / (c_op C(n)) ≈ (1/2)(2n)^2 / ((1/2)n^2) = 4

(This means the algorithm would run about 4 times longer if we doubled the input size.)
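A quick numerical check of this estimate (a C sketch of ours, not from the source; the sample sizes are arbitrary):

    #include <stdio.h>

    /* C(n) = n(n-1)/2 is the basic-operation count used above.
       The ratio C(2n)/C(n) approaches 4 as n grows, matching the
       estimate T(2n)/T(n) ≈ 4. */
    double count(double n) {
        return 0.5 * n * (n - 1.0);
    }

    int main(void) {
        double sizes[] = {10.0, 100.0, 1000.0, 100000.0};
        for (int i = 0; i < 4; i++) {
            double n = sizes[i];
            printf("n = %6.0f   C(2n)/C(n) = %.4f\n",
                   n, count(2.0 * n) / count(n));
        }
        return 0;
    }

For n = 10 the ratio is about 4.22; by n = 100000 it is already 4.00002.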
1.3.3 Orders of growth
A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones.
♦ When we have to compute, for example, the greatest common divisor of two small
numbers, it is not immediately clear how much more efficient Euclid's algorithm is
compared to the other two algorithms discussed in the previous section, or even why
we should care which of them is faster and by how much. It is only when we have
to find the greatest common divisor of two large numbers that the difference in
algorithm efficiencies becomes both clear and important.
♦ For large values of n, it is the function's order of growth that counts: look at the table
below, which contains values of a few functions particularly important for analysis of algorithms.
Table: Values (some approximate) of several functions important for analysis of algorithms
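The table's values did not survive in these notes. As a substitute, the following C program (ours; compile with -lm) computes the usual columns for a few input sizes. tgamma(n + 1) equals n!, and values beyond double range print as inf, which is itself the point: 2^n and n! grow unimaginably fast.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* One row per input size: log2 n, n log2 n, n^2, n^3, 2^n, n! */
        for (double n = 10.0; n <= 1.0e6; n *= 10.0) {
            printf("n=%8.0f  log2 n=%5.1f  n log2 n=%8.1e  n^2=%8.1e  "
                   "n^3=%8.1e  2^n=%8.1e  n!=%8.1e\n",
                   n, log2(n), n * log2(n), n * n, n * n * n,
                   pow(2.0, n), tgamma(n + 1.0));
        }
        return 0;
    }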
1.3.4 Worst-Case, Best-Case, and Average-Case Efficiencies
♦ Algorithm efficiency depends on the input size n. For some algorithms, efficiency
depends not only on the input size but also on the specifics of a particular input.
Best-case efficiency: the efficiency (number of times the basic operation will be executed)
for the best-case input of size n, i.e., an input for which the algorithm runs the fastest
among all possible inputs of size n.
Average-case efficiency: the average time taken (number of times the basic operation
will be executed) to solve all possible (random) instances of the input.
♦ Here is the algorithm's pseudocode, in which, for simplicity, a list is implemented
as an array. (It also assumes that the second condition will not be checked if the
first one, which checks that the array's index does not exceed its upper bound,
fails.)
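The pseudocode block itself did not survive in these notes; here is a minimal C rendering of the standard sequential search it describes (names are ours). The short-circuit evaluation of && gives exactly the guarantee stated above: A[i] != K is never evaluated once i reaches the array's bound.

    #include <stdio.h>

    /* Sequential search: returns the index of the first element equal
       to K, or -1 if there is no match. */
    int sequential_search(const int A[], int n, int K) {
        int i = 0;
        while (i < n && A[i] != K)  /* second test skipped when i == n */
            i++;
        return (i < n) ? i : -1;
    }

    int main(void) {
        int A[] = {31, 41, 59, 26, 53};
        printf("%d\n", sequential_search(A, 5, 59));  /* prints 2  */
        printf("%d\n", sequential_search(A, 5, 7));   /* prints -1 */
        return 0;
    }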
Clearly, the running time of this algorithm can be quite different for the same list size n.
Worst case efficiency
♦ The worst-case efficiency of an algorithm is its efficiency for the worst-case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the longest
among all possible inputs of that size.
♦ For sequential search, the worst-case inputs are those with no matching elements,
or those whose first matching element happens to be the last one on the list; in either
case the algorithm makes the largest number of key comparisons among all
possible inputs of size n:
Cworst(n) = n.
Best case efficiency
♦ First, determine the kind of inputs for which the count C(n) will be the smallest among
all possible inputs of size n. (Note that the best case does not mean the smallest input;
it means the input of size n for which the algorithm runs the fastest.)
♦ Then ascertain the value of C(n) on these most convenient inputs.
♦ Example: for sequential search, best-case inputs are lists of size n with their first
element equal to the search key; accordingly, Cbest(n) = 1.
♦ The analysis of the best-case efficiency is not nearly as important as that of the
worst-case efficiency.
♦ But it is not completely useless. For example, there is a sorting algorithm (insertion
sort) for which the best-case inputs are already sorted arrays, on which the algorithm
works very fast.
♦ Thus, such an algorithm might well be the method of choice for applications dealing
with almost sorted arrays. And, of course, if the best-case efficiency of an algorithm is
unsatisfactory, we can immediately discard it without further analysis.
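A minimal C sketch of insertion sort (ours) illustrating the point: on an already sorted array the inner loop body never executes, so only n - 1 comparisons are made, while the worst case costs about n^2/2.

    #include <stdio.h>

    /* Insertion sort: grows a sorted prefix, inserting A[i] into it.
       Best case (sorted input): the while-condition fails at once for
       every i, giving n - 1 comparisons in total. */
    void insertion_sort(int A[], int n) {
        for (int i = 1; i < n; i++) {
            int v = A[i];
            int j = i - 1;
            while (j >= 0 && A[j] > v) {  /* shift larger elements right */
                A[j + 1] = A[j];
                j--;
            }
            A[j + 1] = v;
        }
    }

    int main(void) {
        int A[] = {5, 2, 9, 1};
        insertion_sort(A, 4);
        for (int i = 0; i < 4; i++)
            printf("%d ", A[i]);          /* prints 1 2 5 9 */
        printf("\n");
        return 0;
    }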
Average case efficiency
It yields information about an algorithm's behaviour on a typical or random input.
♦To analyze the algorithm's average-case efficiency, we must make some assumptions
about possible inputs of size n.
♦ The investigation of the average-case efficiency is considerably more difficult than
the investigation of the worst-case and best-case efficiencies.
♦ It involves dividing all instances of size n into several classes so that for each
instance of a class the number of times the algorithm's basic operation is executed is
the same.
Then a probability distribution of the inputs needs to be obtained or assumed, so that
the expected value of the basic operation's count can be derived. The average number
of key comparisons Cavg(n) can be computed as follows.
Let us consider again sequential search. The standard assumptions are:
♦ the probability of a successful search is equal to p (0 <= p <= 1);
♦ in the case of a successful search, the probability of the first match occurring in the
ith position of the list is the same for every i, namely p/n, where 1 <= i <= n; the number
of comparisons made by the algorithm in such a situation is obviously i.
♦ In the case of an unsuccessful search, the number of comparisons is n with the
probability of such a search being (1 - p).
♦ Therefore,

    Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n(1 - p)
            = (p/n)(1 + 2 + ... + n) + n(1 - p)
            = (p/n) · n(n + 1)/2 + n(1 - p)
            = p(n + 1)/2 + n(1 - p)
For example, if p = 1 (i.e., the search must be successful), the average number of key
comparisons made by sequential search is (n + 1)/2.
♦ If p = 0 (i.e., the search must be unsuccessful), the average number of key
comparisons will be n because the algorithm will inspect all n elements on all such
inputs.
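The formula can also be checked empirically. The following C sketch (ours; n, the trial count, and the seed are arbitrary) plants the key at a uniformly random position, so p = 1, and counts key comparisons; the observed average settles near the predicted (n + 1)/2.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        enum { N = 1000, TRIALS = 100000 };
        static int A[N];
        long long comparisons = 0;

        for (int i = 0; i < N; i++)
            A[i] = i;                        /* distinct keys */
        srand(42);
        for (int t = 0; t < TRIALS; t++) {
            int K = rand() % N;              /* match position uniform in 0..N-1 */
            for (int i = 0; i < N; i++) {
                comparisons++;               /* one key comparison per test */
                if (A[i] == K)
                    break;
            }
        }
        printf("observed  : %.2f\n", (double)comparisons / TRIALS);
        printf("predicted : %.2f\n", (N + 1) / 2.0);  /* p = 1: (n+1)/2 = 500.50 */
        return 0;
    }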