MODULE-1
INTRODUCTION
What Is an Algorithm?
(Define algorithm and also discuss the characteristics of an algorithm 5M)
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.
This definition can be illustrated by a simple diagram: a problem is solved by an algorithm which, executed by a computer, transforms any legitimate input into the required output.
Example 1:
(Design Euclid’s algorithm for computing GCD (m,n). Find GCD (60, 24) using
Euclid’s Algorithm 10 M)
Let us see different methods to find greatest common divisor (GCD)
1. Euclid’s algorithm for computing gcd(m, n)
Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2 Divide m by n and assign the value of the remainder to r.
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.
ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
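The exam question above also asks for GCD(60, 24). Tracing the algorithm: gcd(60, 24) → r = 60 mod 24 = 12 → gcd(24, 12) → r = 24 mod 12 = 0 → gcd(12, 0) = 12. A minimal Python sketch of the same pseudocode (the function name euclid_gcd is our own choice):

def euclid_gcd(m, n):
    # Computes gcd(m, n) by Euclid's algorithm
    # Input: two nonnegative, not-both-zero integers m and n
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m

print(euclid_gcd(60, 24))   # prints 12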
1. Understanding the Problem
The first thing to do before designing an algorithm is to understand the given problem completely. Read the problem's description carefully and ask questions if you have any doubts about it, do a few small examples by hand, think about special cases, and ask questions again if needed.
2. Ascertaining the Capabilities of the Computational Device
Once you completely understand a problem, you need to ascertain the capabilities of the
computational device the algorithm is intended for.
Most algorithms are designed for a generic one-processor machine known as the random-access machine (RAM). Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.
The central assumption of the RAM model does not hold for some newer computers that can
execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this
capability are called parallel algorithms.
3. Choosing between Exact and Approximate Problem Solving
The next principal decision is to choose between solving the problem exactly or solving it
approximately.
In the former case, an algorithm is called an exact algorithm; in the latter case, an algorithm is
called an approximation algorithm.
Exact algorithms give precise answers, while approximation algorithms offer close but not
perfect solutions.
Ex: Approximation algorithms are chosen when a problem cannot be solved exactly (for example, extracting square roots or solving nonlinear equations) or when an exact algorithm would be unacceptably slow.
Problems such as sorting and searching, on the other hand, require exact solutions.
Methods of Specifying an Algorithm
• Although natural language appeals because of its simplicity, its inherent ambiguity can make describing algorithms difficult.
• Pseudocode, blending natural language with programming-like constructs, offers more
precision and conciseness in algorithm descriptions. Although various pseudocode
dialects exist, they are generally easy to understand for those familiar with modern
programming languages. Despite advancements, algorithms described in natural
language or pseudocode must ultimately be translated into computer programs for
execution
Proving an Algorithm's Correctness
Once an algorithm has been specified, you have to prove its correctness. That is, you have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time.
For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex.
A common technique for proving correctness is to use mathematical induction because an
algorithm’s iterations provide a natural sequence of steps needed for such proofs.
But in order to show that an algorithm is incorrect, you need just one instance of its input for
which the algorithm fails.
8. Analyzing an Algorithm
We usually want our algorithms to possess several qualities. After correctness, by far the most
important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency,
indicating how fast the algorithm runs, and space efficiency, indicating how much extra
memory it uses.
Other desirable characteristics of an algorithm are simplicity (simple algorithms are easier to understand and to program) and generality (of both the problem the algorithm solves and the range of inputs it accepts).
9. Coding an Algorithm
In the early days of electronic computing, both resources, time and space, were at a premium. Technological innovations have since improved computers' speed and memory size dramatically, so space efficiency is now usually of much less concern than time efficiency.
Measuring an Input's Size
Almost all algorithms run longer on larger inputs, so it is logical to investigate an algorithm's efficiency as a function of some parameter n indicating the size of the input.
Ex: For searching, finding the list's smallest element, and most other problems dealing with lists, the input size is the size (number of elements) of the list.
In certain scenarios, the choice of input-size measure significantly impacts the assessment of algorithm efficiency. One such example is the computation of the product of two n × n matrices. There are two common measures of input size for this problem: the matrix order n and the total number N of elements in each matrix (N = n²). An algorithm that is cubic in the matrix order n is only of order N^1.5 when efficiency is expressed in terms of N.
Let Cop be the execution time of an algorithm’s basic operation on a particular computer, and let
C(n) be the number of times this operation needs to be executed for this algorithm. Then we can
estimate the running time T (n) of a program implementing this algorithm on that computer
by the formula,
T(n) ≈ Cop · C(n)
Example:
Let C(n) = (1/2)n(n − 1). How much longer will the algorithm run if we double its input size?
The answer is about 4 times longer, because for large n, C(n) = (1/2)n(n − 1) ≈ (1/2)n², and therefore
T(2n)/T(n) ≈ C(2n)/C(n) ≈ ((1/2)(2n)²) / ((1/2)n²) = 4.
Orders of Growth
We would like our algorithms to run fast for all values of n, but some algorithms that run quickly for small values of n become very slow as n increases.
"This change in behaviour as the value of n increases is called the order of growth."
Ex: If the running time T(n) of an algorithm varies linearly with an increase or decrease in the value of n, its order of growth is called linear.
The concept of order of growth can be clearly understood by considering the common computing-time functions 1, log2 n, n, n log2 n, n², n³, 2^n and n!, whose values for increasing n are usually compared in a table.
From such a table we can observe that the log2 n function grows very slowly compared to 2^n (which is exponential). All the above functions can be ordered according to their order of growth (from lowest to highest) as shown below:
1 < log2 n < n < n log2 n < n² < n³ < 2^n < n!
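As a quick illustration (a small sketch of our own, not part of the prescribed notes), the following Python snippet tabulates these functions for a few input sizes, which makes the gap between log2 n and 2^n obvious (n! is omitted since it grows far faster still):

import math

# Tabulate common computing-time functions for a few input sizes
for n in (2, 8, 16, 32):
    row = (n, math.log2(n), n * math.log2(n), n ** 2, n ** 3, 2 ** n)
    print("  ".join(str(round(v, 1)) for v in row))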
Worst-case efficiency:
Definition: The worst-case efficiency of an algorithm is its efficiency for the worst-case input
of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among
all possible inputs of that size.
Ex: In linear (sequential) search, the worst case occurs when there is no matching element or when the first matching element happens to be the last one on the list; in either case the algorithm makes the largest possible number of key comparisons among all inputs of size n.
Therefore Cworst(n) = n.
Best-case efficiency:
Definition: The best-case efficiency of an algorithm is its efficiency for the best- case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all
possible inputs of that size.
For example, the best-case inputs for sequential search are lists of size n with their first element
equal to a search key; accordingly,
Cbest(n) = 1
Average-case efficiency:
In some cases, neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behaviour on a "typical" or "random" input; this is the information that the average-case efficiency seeks to provide. It is obtained by making some assumption about the probability distribution of the inputs: for sequential search, for instance, a successful search that is equally likely to end at any of the n positions takes about (n + 1)/2 key comparisons on average.
Asymptotic Notations
To compare and rank orders of growth, three asymptotic notations are used:
➢ O (big oh),
➢ Ω (big omega), and
➢ Θ (big theta).
O (big oh): Informally, O(g(n)) is the set of all functions with a lower or the same order of growth as g(n) (to within a constant multiple, as n goes to infinity). Formally, a function t(n) is in O(g(n)) if there exist a positive constant c and a nonnegative integer n0 such that t(n) ≤ c g(n) for all n ≥ n0.
Ω-notation: Ω(g(n)) is the set of all functions with a higher or the same order of growth as g(n); formally, t(n) ∈ Ω(g(n)) if t(n) ≥ c g(n) for all n ≥ n0, for some positive constant c and nonnegative integer n0.
Θ (big theta): Θ(g(n)) is the set of all functions with the same order of growth as g(n); formally, t(n) ∈ Θ(g(n)) if c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0, for some positive constants c1 and c2 and a nonnegative integer n0.
Using the formal definitions of the asymptotic notations, we can prove their general properties. The following property, in particular, is useful in analyzing algorithms that comprise two consecutively executed parts: if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). (Analogous assertions hold for the Ω and Θ notations.)
A much more convenient method for comparing the orders of growth of two functions is based on computing the limit of the ratio of the two functions in question. Three principal cases may arise:
lim (n→∞) t(n)/g(n) = 0 : t(n) has a smaller order of growth than g(n);
lim (n→∞) t(n)/g(n) = c > 0 : t(n) has the same order of growth as g(n);
lim (n→∞) t(n)/g(n) = ∞ : t(n) has a larger order of growth than g(n).
Note that the first two cases mean that t(n) ∈ O(g(n)), the last two mean that t(n) ∈ Ω(g(n)), and the second case means that t(n) ∈ Θ(g(n)). The limit-based approach is often more convenient than the one based on the definitions because it can take advantage of the powerful calculus techniques developed for computing limits, such as L'Hospital's rule.
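For instance, comparing the count C(n) = (1/2)n(n − 1) used in the earlier doubling example with n²:
lim (n→∞) [(1/2)n(n − 1)] / n² = (1/2) lim (n→∞) (n − 1)/n = (1/2) lim (n→∞) (1 − 1/n) = 1/2.
Since the limit is a positive constant, (1/2)n(n − 1) and n² have the same order of growth, i.e., (1/2)n(n − 1) ∈ Θ(n²).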
Mathematical Analysis of Non-Recursive Algorithms
EXAMPLE 1 Consider the problem of finding the value of the largest element in a list of n numbers. For simplicity, we assume that the list is implemented as an array.
(Write an Algorithm to find maximum of n elements and obtain its time
complexity 8M)
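A minimal Python sketch of the standard approach (commonly called MaxElement; the function name here is our own) and its comparison count:

def max_element(a):
    # Determines the value of the largest element in array a[0..n-1]
    max_val = a[0]
    for i in range(1, len(a)):
        if a[i] > max_val:      # basic operation: the comparison a[i] > max_val
            max_val = a[i]
    return max_val

The comparison is executed once for each i from 1 to n − 1, so C(n) = n − 1 ∈ Θ(n) regardless of the input; worst, best, and average cases coincide.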
EXAMPLE 2 Consider the element uniqueness problem: check whether all the elements in
a given array of n elements are distinct.
(Write an algorithm to find the uniqueness of elements in an array and give the
mathematical analysis of this non recursive algorithm with steps? M)
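A minimal Python sketch of the standard brute-force check (names are our own) and its worst-case analysis:

def unique_elements(a):
    # Returns True if all elements of a[0..n-1] are distinct, False otherwise
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:    # basic operation: element comparison
                return False
    return True

In the worst case (all elements distinct, or only the last pair equal), the comparison is executed for every pair of indices, so Cworst(n) = sum over i = 0..n−2 of (n − 1 − i) = n(n − 1)/2 ∈ Θ(n²).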
EXAMPLE 3 Given two n × n matrices A and B, find the time efficiency of the definition-based algorithm for computing their product C = AB. By definition, C is an n × n matrix whose elements are computed as the scalar (dot) products of the rows of matrix A and the columns of matrix B:
C[i, j] = A[i, 0]·B[0, j] + . . . + A[i, k]·B[k, j] + . . . + A[i, n − 1]·B[n − 1, j].
Input Size: the matrix order n.
Basic Operation: there are two arithmetical operations in the innermost loop (multiplication and addition) that, in principle, can compete for designation as the algorithm's basic operation; multiplication is customarily chosen.
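A minimal Python sketch of the definition-based algorithm (the function name is our own) and the count of multiplications:

def matrix_product(A, B):
    # Multiplies two n x n matrices A and B by the definition-based algorithm
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # basic operation: multiplication
    return C

The multiplication is executed once for every triple (i, j, k), so M(n) = n · n · n = n³ ∈ Θ(n³); counting the additions as well only doubles this count and does not change the order of growth.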
EXAMPLE 4 The following algorithm finds the number of binary digits in the binary
representation of a positive decimal integer.
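The algorithm referred to is the standard repeated-halving procedure; a minimal Python sketch (the function name is our own):

def binary_digit_count(n):
    # Counts the number of binary digits of a positive integer n
    count = 1
    while n > 1:           # basic operation: the comparison n > 1
        count += 1
        n //= 2
    return count

print(binary_digit_count(13))   # 13 = 1101 in binary, prints 4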
Input Size: here the input is just one number n, and it is this number's magnitude that determines the input size.
Basic Operation: the most frequently executed operation is not the one inside the while loop but rather the comparison n > 1 that governs it.
Count of Basic Operation, C(n): a more significant feature of this example is the fact that the loop variable takes on only a few values between its lower and upper limits; the comparison n > 1 is executed ⌊log2 n⌋ + 1 times, so C(n) ∈ Θ(log n).
Mathematical Analysis of Recursive Algorithms
EXAMPLE 1 Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n. Since
n! = 1 · 2 · . . . · (n − 1) · n = (n − 1)! · n for n ≥ 1
and 0! = 1 by definition, we can compute F(n) = F(n − 1) · n with the following recursive algorithm.
The number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0,
because one multiplication is spent on computing F(n − 1) · n after M(n − 1) multiplications have computed F(n − 1).
Thus, we succeeded in setting up the recurrence relation and initial condition for the algorithm's number of multiplications M(n):
M(n) = M(n − 1) + 1 for n > 0, M(0) = 0.
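A minimal Python sketch of the recursive algorithm, followed by the solution of the recurrence:

def F(n):
    # Computes n! recursively
    if n == 0:
        return 1
    return F(n - 1) * n    # one multiplication per recursive level

Backward substitution gives M(n) = M(n − 1) + 1 = M(n − 2) + 2 = . . . = M(n − i) + i = . . . = M(0) + n = n, so the algorithm performs exactly n multiplications, i.e., M(n) ∈ Θ(n).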
(1. Write the tower of Hanoi algorithm and steps for analysis of recursive
algorithm. Show the analysis of the above algorithm 8M)
(2. Give the general Plan for analyzing Time efficiency of recursive
algorithms and also Analyze the tower of Hanoi recursive algorithm
10M)
(3. Write the recursive algorithm for tower of Hanoi. Prove that the time
complexity is exponential. 08M)
The Tower of Hanoi puzzle. In this puzzle, we have n disks of different sizes that can slide
onto any of three pegs. Initially, all the disks are on the first peg in order of size, the largest on
the bottom and the smallest on top.
The goal is to move all the disks to the third peg, using the second one as an auxiliary, if
necessary. We can move only one disk at a time, and it is forbidden to place a larger disk on
top of a smaller one.
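Since the exam questions above ask for both the algorithm and its analysis, here is a minimal Python sketch of the standard recursive solution (function and parameter names are our own), followed by the recurrence it leads to:

def hanoi(n, source, auxiliary, destination):
    # Moves n disks from source to destination using auxiliary as a spare peg
    if n == 1:
        print(f"move disk 1 from {source} to {destination}")
        return
    hanoi(n - 1, source, destination, auxiliary)            # move n-1 disks out of the way
    print(f"move disk {n} from {source} to {destination}")  # move the largest disk
    hanoi(n - 1, auxiliary, source, destination)            # move n-1 disks on top of it

hanoi(3, 'A', 'B', 'C')   # solves the 3-disk puzzle in 7 moves

The number of moves M(n) satisfies the recurrence M(n) = 2M(n − 1) + 1 for n > 1, with M(1) = 1. Backward substitution gives M(n) = 2^(n−1)·M(1) + 2^(n−2) + . . . + 2 + 1 = 2^(n−1) + (2^(n−1) − 1) = 2^n − 1, so the algorithm makes 2^n − 1 moves, i.e., its time complexity is exponential: M(n) ∈ Θ(2^n).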
Selection Sort
Consider the application of the brute-force approach to the problem of sorting: given a list of n
orderable items (e.g., numbers, characters from some alphabet, character strings), rearrange
them in nondecreasing order.
We start selection sort by scanning the entire given list to find its smallest element and
exchange it with the first element, putting the smallest element in its final position in the sorted
list.
Then we scan the list, starting with the second element, to find the smallest among the last n
− 1 elements and exchange it with the second element, putting the second smallest element in
its final position.
Generally, on the ith pass through the list, which we number from 0 to n − 2, the algorithm searches for the smallest item among the last n − i elements A[i], . . . , A[n − 1] and swaps it with A[i].
As an example, the action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is illustrated
in Figure
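A minimal Python sketch of selection sort and its comparison count:

def selection_sort(a):
    # Sorts array a[0..n-1] in nondecreasing order by selection sort
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:    # basic operation: key comparison
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # put the ith smallest element in place

lst = [89, 45, 68, 90, 29, 34, 17]
selection_sort(lst)
print(lst)   # [17, 29, 34, 45, 68, 89, 90]

The comparison is executed (n − 1 − i) times on pass i, so C(n) = sum over i = 0..n−2 of (n − 1 − i) = n(n − 1)/2 ∈ Θ(n²) for every input; the number of swaps, however, is only n − 1.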
Bubble Sort
(With the algorithm, derive the worst-case efficiency for bubble sort 4M)
Another brute-force application to the sorting problem is to compare adjacent elements of the
list and exchange them if they are out of order. By doing it repeatedly, we end up "bubbling up" the largest element to the last position on the list.
The next pass bubbles up the second largest element, and so on, until after n − 1 passes the
list is sorted. Pass i (0 ≤ i ≤ n − 2) of bubble sort can be represented by the following diagram:
The action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is illustrated as an example in
Figure
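A minimal Python sketch of bubble sort and its worst-case comparison count:

def bubble_sort(a):
    # Sorts array a[0..n-1] in nondecreasing order by bubble sort
    n = len(a)
    for i in range(n - 1):                        # pass i, 0 <= i <= n-2
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:                   # basic operation: key comparison
                a[j], a[j + 1] = a[j + 1], a[j]   # exchange out-of-order neighbours

On pass i the comparison is executed (n − 1 − i) times, so in every case (and hence in the worst case) C(n) = sum over i = 0..n−2 of (n − 1 − i) = n(n − 1)/2 ∈ Θ(n²); the number of swaps depends on the input and reaches n(n − 1)/2 only in the worst case.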
Sequential Search
(Design an Algorithm for performing sequential search and compute best case,
worst case and average case efficiency 10M)
The algorithm simply compares successive elements of a given list with a given search key
until either a match is encountered (successful search) or the list is exhausted without finding
a match (unsuccessful search).
A simple extra trick is often employed in implementing sequential search: if we append the
search key to the end of the list, the search for the key will have to be successful, and therefore
we can eliminate the end of list check altogether.
For a successful search in which the match is equally likely to occur at any position, the average number of comparisons is (n + 1)/2; hence Cavg(n) ≈ n/2.
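A minimal Python sketch of sequential search using the sentinel trick just described (the function name is our own):

def sequential_search(a, key):
    # Searches for key in list a using a sentinel; returns its index or -1
    a = a + [key]            # append the search key as a sentinel
    i = 0
    while a[i] != key:       # no end-of-list check is needed
        i += 1
    return i if i < len(a) - 1 else -1

print(sequential_search([89, 45, 68, 90, 29, 34, 17], 29))    # prints 4
print(sequential_search([89, 45, 68, 90, 29, 34, 17], 100))   # prints -1

With the sentinel, an unsuccessful search examines the sentinel as well, so Cworst(n) = n + 1, while Cbest(n) = 1 and Cavg(n) ≈ n/2 as noted above.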
Brute-Force String Matching
Given a string of n characters called the text and a string of m characters (m ≤ n) called the pattern, find a substring of the text that matches the pattern.
To put it more precisely, we want to find i—the index of the leftmost character of the first matching substring in the text—such that
t[i] = p[0], . . . , t[i + j] = p[j], . . . , t[i + m − 1] = p[m − 1],
where t[0] . . . t[n − 1] is the text and p[0] . . . p[m − 1] is the pattern.
If matches other than the first one need to be found, a string-matching algorithm can simply
continue working until the entire text is exhausted.
Align the pattern against the first m characters of the text and start matching the corresponding
pairs of characters from left to right until either all the m pairs of the characters match (then
the algorithm can stop) or a mismatching pair is encountered.
In the latter case, shift the pattern one position to the right and resume the character
comparisons, starting again with the first character of the pattern and its counterpart in the text.
Note that the last position in the text that can still be a beginning of a matching substring is n
– m (provided the text positions are indexed from 0 to n − 1).
Beyond that position, there are not enough characters to match the entire pattern; hence, the
algorithm need not make any comparisons there.
Thus, in the worst case, the algorithm makes m (n − m + 1) character comparisons, which
puts it in the O(nm) class.
Therefore, the average-case efficiency should be considerably better than the worst-case
efficiency.
Indeed, it is: for searching in random texts, it has been shown to be linear, i.e., Θ(n).
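A minimal Python sketch of the brute-force string-matching algorithm described above (the function name is our own):

def brute_force_string_match(text, pattern):
    # Returns the index of the first matching substring of text, or -1 if none
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # last feasible alignment starts at n - m
        j = 0
        while j < m and pattern[j] == text[i + j]:   # basic operation: character comparison
            j += 1
        if j == m:
            return i
    return -1

print(brute_force_string_match("NOBODY_NOTICED_HIM", "NOT"))   # prints 7

In the worst case (for example, a text of all 0s and a pattern of the form 00...01), each of the n − m + 1 alignments requires all m comparisons, which gives the m(n − m + 1) bound stated above.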