ADA Module1 Part 1
Uploaded by sanjaydevale330

Module 1: INTRODUCTION

1. Fundamentals of Algorithmic Problem Solving


What is an Algorithm?

1.1 Informal Definition:


An algorithm is a step-by-step procedure for solving a problem. It defines a
computational procedure that takes some value or set of values as input and
produces output. Thus, an algorithm is a sequence of computational steps that
transforms the input into the output.
Formal Definition:
An algorithm is a sequence of unambiguous instructions for solving a problem i.e., for
obtaining a required output for any legitimate input in a finite amount of time.
This definition can be illustrated by a simple diagram.

 The reference to "instructions" in the definition implies that there is
something or someone capable of understanding and following the instructions given.
 We call this a computer, keeping in mind that before the electronic computer was
invented, the word "computer" meant a human being involved in performing
numeric calculations.
 Note, however, that although the majority of algorithms are indeed intended
for eventual computer implementation, the notion of algorithm does not depend
on such an assumption.

In addition, all algorithms should satisfy the following criteria or properties.

 Definiteness: Each instruction is clear and unambiguous. The non-ambiguity
requirement for each step of an algorithm cannot be compromised.
 Input and Output: The range of inputs for which an algorithm works has to be
specified carefully. An algorithm must produce at least one quantity as output.
 Finiteness: If we trace out the instructions of an algorithm, then, for all cases, the
algorithm terminates after a finite number of steps.
 Effectiveness: Every instruction must be basic enough that it can be carried out.
 The same algorithm can be represented in several different formats
 Several algorithms for solving the same problem may exist.
 Algorithms for same problem can be based on very different ideas and can solve
the problem with dramatically different speeds.
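As a concrete illustration of these properties (an example added here, not part of the original notes), finding the largest element of a list satisfies all four criteria:

```python
def max_element(a):
    """Return the largest element of a non-empty list.

    Definiteness: each step is unambiguous.
    Finiteness: the loop runs exactly len(a) - 1 times.
    Effectiveness: only basic comparisons and assignments are used.
    """
    largest = a[0]
    for x in a[1:]:
        if x > largest:   # basic operation: one comparison per element
            largest = x
    return largest
```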

1.2 Fundamentals Of Algorithmic Problem Solving


 Algorithms are procedural solutions to problems. These solutions are not answers
but specific instructions for getting answers.
 The following diagram briefly illustrates the sequence of steps one typically goes
through in designing and analyzing an algorithm.
 It includes the following sequence of steps:
1. Understanding the problem
2. Deciding on: Computational means, Exact vs. approximate problem solving,
data structure(s), Algorithm design techniques.
3. Design an algorithm
4. Prove correctness
5. Analyze the algorithm
6. Code the algorithm

 Understanding the problem


 Before designing an algorithm the most important thing is to understand the
problem given.
 Asking questions, doing a few examples by hand, thinking about special
cases, etc.
 An input to an algorithm specifies an instance of the problem that the
algorithm solves.
 It is important to specify exactly the range of instances the algorithm needs to
handle; otherwise, it may work correctly for the majority of inputs but crash on
some boundary value.
 A correct algorithm is not one that works most of the time, but one that works
correctly for all legitimate inputs.
 Ascertaining the capabilities of a computational device
 After understanding the problem, we need to ascertain the capabilities of the
computational device.
 The vast majority of algorithms in use today are still destined to be
programmed for a computer closely resembling the von Neumann machine, a
classic sequential computer architecture.
 Von Neumann architectures are sequential, and the algorithms implemented
on them are called sequential algorithms.
 Algorithms designed to be executed on parallel computers are called
parallel algorithms.
 For very complex problems where time is critical, concentrate on a machine
with high speed and ample memory.

 Choosing between exact and approximate problem solving


 For an exact result -> exact algorithm.
 For an approximate result -> approximation algorithm.
 Examples where approximation algorithms are used: obtaining square roots of
numbers and solving non-linear equations.
 An approximation algorithm can be a part of a more sophisticated algorithm
that solves a problem exactly.
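As a sketch of one such approximation algorithm, here is Newton's iteration for square roots (the choice of method, the function name, and the tolerance are assumptions added here, not taken from the original notes):

```python
def approx_sqrt(a, tol=1e-10):
    """Approximate sqrt(a), a > 0, by Newton's iteration.

    Assumed illustration: repeat x <- (x + a/x) / 2 until x*x is
    within tol of a. Each step roughly doubles the number of
    correct digits.
    """
    x = a if a > 1 else 1.0          # any positive starting guess works
    while abs(x * x - a) > tol:
        x = (x + a / x) / 2
    return x
```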

 Deciding on appropriate data structures


 Algorithms may or may not demand ingenuity in representing their inputs.
 Inputs are represented using various data structures.
 Algorithms + data structures = programs.

 Algorithm Design Techniques and Methods of Specifying an Algorithm

 An algorithm design technique (or "strategy" or "paradigm") is a general
approach to solving problems algorithmically that is applicable to a variety of
problems from different areas of computing.
 The different algorithm design techniques are: brute force, divide and
conquer, greedy method, decrease and conquer, dynamic programming,
transform and conquer, and backtracking.
 Methods of specifying an algorithm: natural language, in which ambiguity
is a problem.
 The next two options are pseudocode and flowchart.
Pseudocode: a mix of natural and programming language; more precise than natural language.

 Flowchart: a method of expressing an algorithm by a collection of connected
geometric shapes containing descriptions of the algorithm's steps.
This representation technique has proved to be inconvenient for large
problems.
 The state of the art of computing has not yet reached a point where an
algorithm's description, be it in a natural language or pseudocode, can be fed into
an electronic computer directly. Instead, it needs to be converted into a computer
program written in a particular programming language.
 Hence a program is yet another way of specifying an algorithm, although it is
preferable to consider it as the algorithm's implementation.
 Proving an Algorithm’s Correctness
 After specifying an algorithm we have to prove its correctness.
 The correctness is to prove that the algorithm yields a required result for every legitimate
input in a finite amount of time.

 For example, the correctness of Euclid's algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n), the simple
observation that the second integer gets smaller on every iteration of the algorithm, and the
fact that the algorithm stops when the second integer becomes 0.
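The three facts in this proof can be checked mechanically. A small sketch (added here; it uses Python's math.gcd as an oracle for the invariant, which is not part of the original notes):

```python
import math

def gcd_checked(m, n):
    """Euclid's algorithm, asserting the facts used in its correctness proof."""
    g = math.gcd(m, n)                    # oracle value for the invariant
    while n != 0:
        assert math.gcd(n, m % n) == g    # gcd(m, n) = gcd(n, m mod n)
        assert m % n < n                  # second integer strictly decreases
        m, n = n, m % n
    return m                              # stops when second integer is 0
```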

 For some algorithms, a proof of correctness is quite easy; for others, it can be quite
complex.
 A common technique for proving an algorithm's correctness, and often the most
appropriate one, is mathematical induction.
 Analyzing an algorithm
 After correctness, efficiency has to be estimated.
 Time efficiency and space efficiency.
 Time: how fast does the algorithm run?
 Space: how much extra memory does the algorithm need?
 Simplicity: how simple the algorithm is compared to existing alternatives. Sometimes
simpler algorithms are also more efficient than more complicated ones.
Unfortunately, this is not always true, in which case a judicious compromise needs to
be made.
 Generality: the generality of the problem the algorithm solves and of the set of inputs it accepts.
If not satisfied with these properties, it is necessary to redesign the algorithm.
 Code the algorithm
 Write the program using a programming language.
 The selected programming language should support the features assumed in the
design phase.
 Program testing: if the inputs to an algorithm are guaranteed to belong to the
specified sets, no verification is required; but when implementing algorithms as
programs to be used in actual applications, such input verification should be provided.
Documentation of the algorithm is also important.

1.3 Fundamentals of the analysis of Algorithm Efficiency


1.3.1 Analysis Framework
There are two kinds of efficiency:
♦ Time efficiency - indicates how fast an algorithm runs.
♦ Space efficiency - deals with the extra space the algorithm requires.

Measuring An Input Size


♦ An algorithm's efficiency is investigated as a function of some parameter
n indicating the algorithm's input size.
♦ In most cases, selecting such a parameter is quite straightforward.
♦ For example, it will be the size of the list for problems of sorting, searching, finding
the list's smallest element, and most other problems dealing with lists.
♦ For the problem of evaluating a polynomial p(x) of degree n, it will be the
polynomial's degree or the number of its coefficients, which is larger by one than its
degree.
♦ There are situations, of course, where the choice of a parameter indicating an
input size does matter.
Example: computing the product of two n-by-n matrices. There are two natural measures
of size for this problem.

 The matrix order n.


 The total number of elements N in the matrices being multiplied.
♦ Since there is a simple formula relating these two measures, we can easily
switch from one to the other, but the answer about an algorithm's efficiency will be
qualitatively different depending on which of the two measures we use.

The choice of an appropriate size metric can be influenced by operations of the


algorithm in question. For example, how should we measure an input's size for a spell-
checking algorithm? If the algorithm examines individual characters of its input,
then we should measure the size by the number of characters; if it works by
processing words, we should count their number in the input.
♦ We should make a special note about measuring the size of inputs for algorithms
involving properties of numbers (e.g., checking whether a given integer n is
prime). For such algorithms, the size is the number b of bits in n's binary
representation:
b = ⌊log₂ n⌋ + 1
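A quick check of this formula against Python's built-in int.bit_length() (code added here purely as an illustration):

```python
import math

def bits_in_binary(n):
    """Number of bits b in the binary representation of n >= 1:
    b = floor(log2 n) + 1."""
    return math.floor(math.log2(n)) + 1

# Python's int.bit_length() computes the same quantity directly,
# e.g. bits_in_binary(1000) == (1000).bit_length() == 10.
```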

1.3.2 Units For Measuring Run Time

♦ We can simply use some standard unit of time measurement (a second, a
millisecond, and so on) to measure the running time of a program implementing the
algorithm.
♦ There are obvious drawbacks to such an approach. They are
♦ Dependence on the speed of a particular computer
♦ Dependence on the quality of a program implementing the algorithm
♦ The compiler used in generating the machine code
♦ The difficulty of clocking the actual running time of the program.
♦ Since we need to measure algorithm efficiency, we should have a metric
that does not depend on these extraneous factors.
♦ One possible approach is to count the number of times each of the algorithm's
operations is executed. This approach is both difficult and unnecessary.
♦ The main objective is to identify the most important operation of the algorithm, called the
basic operation, the operation contributing the most to the total running time, and
compute the number of times the basic operation is executed.
♦ As a rule, it is not difficult to identify the basic operation of an algorithm:
it is usually the most time-consuming operation in the algorithm's innermost loop.
♦ Consider the following example. Let c_op be the execution time of an algorithm's
basic operation on a particular computer, and let C(n) be the number of times this
operation needs to be executed for this algorithm. Then we can estimate the running
time T(n) of a program implementing this algorithm on that computer by the formula
T(n) ≈ c_op C(n).

It is also possible to estimate how much longer the algorithm would run if we
doubled its input size. Assume C(n) = (1/2)n(n - 1). For all but very small values of n,
C(n) = (1/2)n(n - 1) = (1/2)n² - (1/2)n ≈ (1/2)n².
Therefore,
T(2n)/T(n) ≈ c_op C(2n) / (c_op C(n)) ≈ ((1/2)(2n)²) / ((1/2)n²) = 4.
(This means the algorithm would run about 4 times longer if we doubled the input size.)
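A short sketch (added here) that counts the basic operation for an algorithm with C(n) = n(n-1)/2 comparisons, selection sort being one such algorithm, and confirms the doubling ratio of about 4:

```python
def comparison_count(a):
    """Selection sort on a copy of a; returns the number of key comparisons.

    The comparison in the inner loop is the basic operation, executed
    exactly C(n) = n(n-1)/2 times for a list of length n.
    """
    a = list(a)
    count = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            count += 1                 # basic operation: one key comparison
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return count

# Doubling n roughly quadruples the count:
# comparison_count(range(200)) / comparison_count(range(100)) ≈ 4
```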
1.3.3 Orders of growth

A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones.
♦ When we have to compute, for example, the greatest common divisor of two small
numbers, it is not immediately clear how much more efficient Euclid's algorithm is
compared to the other two algorithms discussed in the previous section, or even why
we should care which of them is faster and by how much. It is only when we have
to find the greatest common divisor of two large numbers that the difference in
algorithm efficiencies becomes both clear and important.
♦ For large values of n, it is the function's order of growth that counts: look at the table,
which contains values of a few functions particularly important for the analysis of algorithms.
Table: Values (some approximate) of several functions important for analysis of algorithms
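The table itself does not survive in this copy of the notes; the following sketch (an addition, covering the growth functions usually listed) tabulates a few of them for sample input sizes:

```python
import math

def growth_row(n):
    """Values of common growth functions for input size n."""
    return {
        "log2 n": round(math.log2(n), 1),
        "n": n,
        "n log2 n": round(n * math.log2(n)),
        "n^2": n ** 2,
        "n^3": n ** 3,
        "2^n": 2 ** n if n <= 30 else "huge",   # astronomically large beyond this
    }

# Print one row per sample size to see how quickly each function grows.
for n in (10, 100, 1000):
    print(n, growth_row(n))
```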
1.3.4 Worst-Case, Best-Case, Average Case Efficiencies

♦ Algorithm efficiency depends on the input size n. For some algorithms, efficiency
depends not only on the input size but also on the specifics of a particular input.

♦ We have best-case, worst-case, and average-case efficiencies.


Worst-case efficiency: Efficiency (number of times the basic operation will be executed)
for the worst-case input of size n, i.e., the algorithm runs the longest among all
possible inputs of size n.

Best-case efficiency: Efficiency (number of times the basic operation will be executed)
for the best-case input of size n, i.e., the algorithm runs the fastest among all possible
inputs of size n.

Average-case efficiency: Average number of times the basic operation will be executed
over all possible (random) instances of the input of size n.

NOTE: It is not the average of the worst and best cases.


♦ Example: Sequential search. This is a straightforward algorithm that searches for a
given item (some search key K) in a list of n elements by checking successive elements
of the list until either a match with the search key is found or the list is exhausted.

♦ Here is the algorithm's pseudo code, in which, for simplicity, a list is implemented
as an array. (It also assumes that the second condition will not be checked if the
first one, which checks that the array's index does not exceed its upper bound,
fails.)
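The pseudocode does not survive in this copy of the notes; a sketch consistent with the description above (array implementation, short-circuited bounds check) is:

```python
def sequential_search(A, K):
    """Search for key K in list A; return its index, or -1 if K is absent."""
    i = 0
    # Python's `and` short-circuits, so A[i] is never read when i == len(A),
    # matching the assumption stated above about the second condition.
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1
```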


Clearly, the running time of this algorithm can be quite different for the same list size n.
Worst case efficiency
♦ The worst-case efficiency of an algorithm is its efficiency for the worst-case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the longest
among all possible inputs of that size.
♦ In the worst case, the input is such that there is no matching element, or the
first matching element happens to be the last one on the list; in either case the
algorithm makes the largest number of key comparisons among all possible inputs
of size n:
Cworst(n) = n.

♦ The way to determine the worst-case efficiency is quite straightforward:
analyze the algorithm to see what kind of inputs yield the largest value of the basic
operation's count C(n) among all possible inputs of size n, and then compute this
worst-case value Cworst(n).
♦ The worst-case analysis provides very important information about an algorithm's
efficiency by bounding its running time from above. In other words, it guarantees that
for any instance of size n, the running time will not exceed Cworst(n), its running time
on the worst-case inputs.
Best case Efficiency
♦ The best-case efficiency of an algorithm is its efficiency for the best-case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the fastest
among all possible inputs of that size.
♦ We can analyze the best-case efficiency as follows.

First, determine the kind of inputs for which the count C (n) will be the smallest among
all possible inputs of size n. (Note that the best case does not mean the smallest input;
it means the input of size n for which the algorithm runs the fastest.)
♦ Then ascertain the value of C (n) on these most convenient inputs.
♦ Example- for sequential search, best-case inputs will be lists of size n with their first
element equal to a search key; accordingly, Cbest (n) = 1.
♦ The analysis of the best-case efficiency is not nearly as important as that of the
worst-case efficiency.
♦ But it is not completely useless. For example, there is a sorting algorithm (insertion
sort) for
which the best-case inputs are already sorted arrays on which the algorithm works very fast.
♦ Thus, such an algorithm might well be the method of choice for applications
dealing with
almost sorted arrays. And, of course, if the best-case efficiency of an algorithm is
unsatisfactory, we can immediately discard it without further analysis.
Average case efficiency
It yields information about an algorithm's behaviour on a typical or random
input.
♦To analyze the algorithm's average-case efficiency, we must make some assumptions
about possible inputs of size n.
♦The investigation of the average case efficiency is considerably more difficult than
investigation of the worst case and best case efficiency.
♦ It involves dividing all instances of size n into several classes so that, for each
instance of a class, the number of times the algorithm's basic operation is executed is
the same.
Then a probability distribution of inputs needs to be obtained or assumed, so that the
expected value of the basic operation's count can be derived. The average number
of key comparisons Cavg(n) can be computed as follows.
Let us consider again sequential search. The standard assumptions are,
♦Let p be the probability of successful search.


In the case of a successful search, the probability of the first match occurring in the ith
position of the list is p/n for every i, where 1<=i<=n and the number of comparisons
made by the algorithm in such a situation is obviously i.
♦ In the case of an unsuccessful search, the number of comparisons is n with the
probability of such a search being (1 - p).
♦ Therefore,
Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n(1 - p)
        = (p/n) · n(n + 1)/2 + n(1 - p)
        = p(n + 1)/2 + n(1 - p).
For example, if p = 1 (i.e., the search must be successful), the average number of key
comparisons made by sequential search is (n + 1)/2.
♦ If p = 0 (i.e., the search must be unsuccessful), the average number of key
comparisons will be n because the algorithm will inspect all n elements on all such
inputs.
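Under the standard assumptions above, the average count works out to Cavg(n) = p(n + 1)/2 + n(1 - p). A small sketch (added here) verifying this closed form against direct case-by-case enumeration:

```python
def avg_comparisons(n, p):
    """Closed form for sequential search: Cavg(n) = p(n+1)/2 + n(1-p)."""
    return p * (n + 1) / 2 + n * (1 - p)

def avg_by_enumeration(n, p):
    """Same expectation computed case by case, as in the derivation above."""
    successful = sum(i * (p / n) for i in range(1, n + 1))  # match at position i
    unsuccessful = n * (1 - p)                              # all n elements checked
    return successful + unsuccessful
```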

Recapitulation(Summary) of the Analysis Framework


1 Both time and space efficiencies are measured as functions of the algorithm's
input size.
2 Time efficiency is measured by counting the number of times the algorithm's
basic operation is executed. Space efficiency is measured by counting the
number of extra memory units consumed by the algorithm.
3 The efficiencies of some algorithms may differ significantly for inputs of the
same size. For such algorithms, we need to distinguish between the worst-case,
average-case, and best-case efficiencies.
4 The framework's primary interest lies in the order of growth of the algorithm's
running time (and extra memory units consumed) as its input size goes to infinity.
