
DESIGN AND ANALYSIS OF ALGORITHM

UNIT 1 – INTRODUCTION
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e.,
for obtaining a required output for any legitimate input in a finite amount of time.
[Figure: a problem is solved by an algorithm; a computer takes the input, executes the algorithm, and produces the output.]

Fundamentals of Algorithmic Problem Solving:

Steps to design and analyse an algorithm:


We can consider algorithms to be procedural solutions to problems. These solutions are not
answers but specific instructions for getting answers. It is this emphasis on precisely defined
constructive procedures that makes computer science distinct from other disciplines.

• Understanding the problem: Before designing an algorithm, we must completely understand the given problem.
An input to an algorithm specifies an instance of the problem the algorithm
solves. It is very important to specify exactly the set of instances the algorithm needs
to handle.

• Ascertaining the Capabilities of the Computational Device:

Once you completely understand a problem, you need to ascertain the
capabilities of the computational device the algorithm is intended for. Most
algorithms are destined to be programmed for a computer closely resembling the von
Neumann machine—a computer architecture outlined by the Hungarian-American
mathematician John von Neumann (1903–1957), in collaboration with A. Burks and
H. Goldstine, in 1946. Its central assumption is that instructions are executed one after
another, one operation at a time. Accordingly, algorithms designed to be executed on
such machines are called sequential algorithms. Algorithms designed for computers
that can execute operations concurrently, i.e., in parallel, are called parallel algorithms.

• Choosing Between Exact and Approximate Problem Solving


The next principal decision is to choose between solving the problem exactly
or solving it approximately. In the former case, an algorithm is called an exact
algorithm; in the latter case, an algorithm is called an approximation algorithm. These
are chosen depending on the problem definition and complexity.
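As an illustration of this choice (a sketch not taken from the notes; the instance and both functions are assumptions), consider the Traveling Salesman Problem on a handful of points: an exact algorithm tries every tour, while a nearest-neighbour heuristic is an approximation algorithm that is much faster but may return a longer tour.

```python
# Sketch: exact vs. approximate solving on a tiny Traveling Salesman instance.
from itertools import permutations
from math import dist

points = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]  # assumed example instance

def tour_length(order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tsp():
    """Exact algorithm: try every permutation (feasible only for tiny n)."""
    return min(tour_length((0,) + p) for p in permutations(range(1, len(points))))

def greedy_tsp():
    """Approximation algorithm: nearest-neighbour heuristic, fast but
    with no guarantee of finding the shortest tour."""
    unvisited, order = set(range(1, len(points))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(tuple(order))

print(exact_tsp() <= greedy_tsp())  # True: the exact tour is never longer
```

The exact algorithm's cost grows factorially with the number of points, which is why approximation algorithms are the practical choice for hard problems on large inputs.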

• Algorithm Design Technique:


An algorithm design technique (or “strategy” or “paradigm”) is a general
approach to solving problems algorithmically that is applicable to a variety of
problems from different areas of computing.

• Designing an Algorithm and Data structures


While the algorithm design techniques do provide a powerful set of general
approaches to algorithmic problem solving, designing an algorithm for a particular
problem may still be a challenging task. One should pay close attention to choosing data
structures appropriate for the operations performed by the algorithm.
Hence the well-known formula: Algorithms + Data Structures = Programs

• Methods of specifying an algorithm


Once you have designed an algorithm, you need to specify it in some fashion.
Pseudocode, which is a mixture of a natural language and programming-language-like
constructs, is used to write algorithms. Pseudocode is usually more precise than natural
language, and its usage often yields clearer algorithm descriptions.

• Proving an algorithm’s correctness


Once an algorithm has been specified, you have to prove its correctness. That is, you
have to prove that the algorithm yields a required result for every legitimate input in a
finite amount of time.
A common technique for proving correctness is to use mathematical induction
because an algorithm’s iterations provide a natural sequence of steps needed for such
proofs.
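As a small sketch of such an induction argument (the algorithm and invariant below are an illustrative assumption, not an example from the notes), consider a loop that sums 1 through n; induction over the iterations proves it correct.

```python
# Claim: after the i-th iteration, `total` equals 1 + 2 + ... + i = i*(i+1)/2.
# Basis: before the loop (i = 0), total = 0 = 0*(0+1)/2.
# Induction step: if total = (i-1)*i/2 before iteration i, then after adding i,
# total = (i-1)*i/2 + i = i*(i+1)/2, so the invariant is preserved.
# At loop exit i = n, so the algorithm returns n*(n+1)/2, as required.

def triangular(n):
    total = 0
    for i in range(1, n + 1):
        total += i
        # the invariant is checked at run time, purely for illustration:
        assert total == i * (i + 1) // 2
    return total
```

For example, `triangular(10)` returns 55, and the inductive invariant guarantees this for every legitimate input n ≥ 0.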

• Analysing an algorithm
After correctness, by far the most important quality of an algorithm is efficiency. In fact,
there are two kinds of algorithm efficiency: time efficiency, indicating how fast the
algorithm runs, and space efficiency, indicating how much extra memory it uses.
Other desirable characteristics of an algorithm are simplicity and generality.

• Coding an algorithm
After analysing the algorithm, we need to code the program. The validity of programs is
established by testing.

Fundamentals of the Analysis of Algorithm Efficiency

Analysis Framework
There are two kinds of efficiency: time efficiency and space efficiency. Time efficiency, also
called time complexity, indicates how fast an algorithm in question runs. Space efficiency,
also called space complexity, refers to the amount of memory units required by the algorithm
in addition to the space needed for its input and output.

Measuring Input size


It takes longer to sort larger arrays, multiply larger matrices, and so on. Therefore, it is
logical to investigate an algorithm’s efficiency as a function of some parameter n indicating
the algorithm’s input size, in terms of which we can express the algorithm’s efficiency.

Units for Measuring Running Time


We need to identify the most important operation of the algorithm, called the basic
operation, the operation contributing the most to the total running time, and compute the
number of times the basic operation is executed.

The established framework for the analysis of an algorithm’s time efficiency suggests
measuring it by counting the number of times the algorithm’s basic operation is executed on
inputs of size n.
Let c_op be the execution time of an algorithm’s basic operation on a particular computer, and
let C(n) be the number of times this operation needs to be executed for this algorithm. Then
we can estimate the running time T(n) of a program implementing this algorithm on that
computer by the formula

    T(n) ≈ c_op C(n)
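The formula above can be applied directly; in this sketch the values of c_op and C(n) are assumptions chosen for illustration, not measurements.

```python
# Applying T(n) ≈ c_op * C(n) with assumed values.
c_op = 10e-9          # assumed time per basic operation: 10 nanoseconds

def C(n):
    """Assumed basic-operation count, e.g. for a quadratic algorithm."""
    return n * n / 2

def estimated_running_time(n):
    return c_op * C(n)

# Since C(n) is quadratic, doubling the input size roughly quadruples
# the estimated running time:
ratio = estimated_running_time(2000) / estimated_running_time(1000)
print(ratio)  # ≈ 4
```

Note that the estimate predicts only the order of the running time; the constant c_op depends on the particular computer and compiler, which is exactly why order of growth, discussed next, is the quantity analysts focus on.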

Orders of Growth
A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones; it is performance on large inputs that matters.
For large values of n, it is the function’s order of growth that counts: the table below
contains values of a few functions particularly important for the analysis of algorithms. The
magnitude of these numbers has a profound significance for the analysis of algorithms.
The function growing the slowest among them is the logarithmic function. It grows so
slowly, in fact, that we should expect a program implementing an algorithm with a
logarithmic basic-operation count to run practically instantaneously on inputs of all
realistic sizes.

    n        log2 n   n        n log2 n   n^2      n^3      2^n        n!
    10       3.3      10^1     3.3·10^1   10^2     10^3     10^3       3.6·10^6
    10^2     6.6      10^2     6.6·10^2   10^4     10^6     1.3·10^30  9.3·10^157
    10^3     10       10^3     1.0·10^4   10^6     10^9
    10^4     13       10^4     1.3·10^5   10^8     10^12
    10^5     17       10^5     1.7·10^6   10^10    10^15
    10^6     20       10^6     2.0·10^7   10^12    10^18

Also note that although specific values of such a count depend, of course, on the
logarithm’s base, the formula

    log_a n = log_a b · log_b n

makes it possible to switch from one base to another, leaving the count logarithmic
but with a new multiplicative constant. This is why we omit a logarithm’s base and
write simply log n in situations where we are interested just in a function’s order of
growth to within a multiplicative constant.
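A few of the table's entries, and the change-of-base identity, can be checked directly (a small sketch, not part of the notes):

```python
# Reproduce a table row and verify log_a n = log_a b * log_b n numerically.
import math

n = 10**6
print(math.log2(n))             # ≈ 19.93; the table rounds this to 20
print(n * math.log2(n))         # n log2 n ≈ 2.0 * 10^7, as in the table

# Change of base with a = 10, b = 2: log_10 n = log_10(2) * log_2(n).
lhs = math.log10(n)
rhs = math.log10(2) * math.log2(n)
print(abs(lhs - rhs) < 1e-9)    # True: the base changes only a constant factor
```

This is exactly why the base of the logarithm is irrelevant to order of growth: switching bases multiplies the count by the fixed constant log_a b.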
Worst-case, Best-case and Average-case efficiency
There are many algorithms for which running time depends not only on an input size
but also on the specifics of a particular input. Consider, as an example, sequential search.
This is a straightforward algorithm that searches for a given item (some search key K) in a
list of n elements by checking successive elements of the list until either a match with the
search key is found or the list is exhausted. Here is the algorithm’s pseudocode, in which, for
simplicity, a list is implemented as an array. It also assumes that the second condition,
A[i] != K, will not be checked if the first one, which checks that the array’s index does not
exceed its upper bound, fails.
ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K
//        or −1 if there are no matching elements
i ← 0
while i < n and A[i] != K do
    i ← i + 1
if i < n return i
else return −1
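The pseudocode translates almost line for line into Python (a direct rendering, with Python's short-circuiting `and` playing the role of the two-part condition):

```python
def sequential_search(A, K):
    """Return the index of the first element of A equal to K, or -1."""
    i = 0
    while i < len(A) and A[i] != K:  # `and` short-circuits: A[i] is never
        i += 1                       # read when i has reached len(A)
    return i if i < len(A) else -1
```

For example, `sequential_search([5, 2, 9], 9)` returns 2, and `sequential_search([5, 2, 9], 7)` returns -1.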

Clearly, the running time of this algorithm can be quite different for the same list size n. In
the worst case, when there are no matching elements or the first matching element happens to
be the last one on the list, the algorithm makes the largest number of key comparisons among
all possible inputs of size n: Cworst(n) = n
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the longest among all
possible inputs of that size. The way to determine the worst-case efficiency of an algorithm
is, in principle, quite straightforward: analyze the algorithm to see what kind of inputs yield
the largest value of the basic operation’s count C(n) among all possible inputs of size n and
then compute this worst-case value Cworst(n). (For sequential search, the answer was
obvious. The methods for handling less trivial situations are explained in subsequent sections
of this chapter.) Clearly, the worst-case analysis provides very important information about an
algorithm’s efficiency by bounding its running time from above. In other words, it
guarantees that for any instance of size n, the running time will not exceed Cworst(n), its
running time on the worst-case inputs.

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the fastest among all
possible inputs of that size. Accordingly, we can analyze the best-case efficiency as follows.
First, we determine the kind of inputs for which the count C(n) will be the smallest among
all possible inputs of size n. (Note that the best case does not mean the smallest input; it
means the input of size n for which the algorithm runs the fastest.) Then we ascertain the
value of C(n) on these most convenient inputs. For example, the best-case inputs for
sequential search are lists of size n with their first element equal to a search key;
accordingly, Cbest(n) = 1 for this algorithm.
Note, however, that neither the worst-case analysis nor its best-case counterpart yields the
necessary information about an algorithm’s behaviour on a “typical” or “random” input. This
is the information that the average-case efficiency seeks to provide. To analyse the
algorithm’s average-case efficiency, we must make some assumptions about possible inputs
of size n. Average-case efficiency cannot be obtained by taking the average of the worst-case
and the best-case efficiencies.
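Such assumptions can be tested empirically. In this sketch (not from the notes) we assume the search key is always present and equally likely to be in any position; under that standard assumption the expected number of key comparisons for sequential search is (n + 1)/2, which a simulation confirms.

```python
# Empirical average-case comparison count for sequential search,
# assuming the key is present at a uniformly random position.
import random

def comparisons(A, K):
    """Number of key comparisons sequential search makes on this input."""
    i = 0
    while i < len(A) and A[i] != K:
        i += 1
    return i + 1 if i < len(A) else len(A)

random.seed(0)          # fixed seed so the experiment is repeatable
n, trials = 100, 10_000
total = 0
for _ in range(trials):
    A = list(range(n))
    random.shuffle(A)            # key 0 lands in a uniformly random position
    total += comparisons(A, 0)
avg = total / trials
print(avg)  # close to (n + 1) / 2 = 50.5 for n = 100
```

Changing the input assumptions (for example, letting the key be absent with some probability p) changes the average-case count, which is precisely why average-case analysis must state its input model explicitly.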
