
ANALYSIS AND DESIGN OF ALGORITHMS (CSC 303)

What is the analysis and design of algorithms?


Analysis and design of algorithms help in the investigation of feasible options before the coding
step. By examining algorithms, you can determine the space and time complexity involved. The
interplay of analysis and design provides a comprehensive overview of the code that will be
written to address the problem. By improving time and space efficiency, this approach
contributes to a more streamlined solution.
In computer science and IT, writing good, efficient algorithms is an important skill. Learning
different algorithms will increase your logic-building ability and your capacity to solve complex
problems, giving you a better understanding of each problem. Choosing a suitable algorithm for
the problem saves time and provides you with a readable solution.
Why is the analysis of algorithms important?
 To forecast the behavior of an algorithm without putting it into action on a specific
computer

 It is far more convenient to have basic metrics for an algorithm's efficiency than to
develop the algorithm and assess its efficiency each time a specific parameter in the
underlying computer system changes

 It is hard to foresee an algorithm's exact behavior. There are far too many variables
to consider

 As a result, the analysis is simply an approximation; it is not perfect

 More significantly, by comparing several algorithms, we can identify which one is ideal for
our needs
What is an Algorithm?
Algorithms are sequences of instructions that you write to solve a problem. These instructions
describe the calculations, data processing, and decisions involved.
They are independent of any programming language, and you can write the code in your
preferred language. An algorithm is nothing but the steps that guide you to solve the problem,
together with a statement of how time and space resources are used. You can get information
about the time and space complexity through the algorithm before writing the actual code.
Algorithms help in saving a lot of time, as you don't need to code the programs first and then
measure their complexities.
The best solution can be found when you write the algorithm for the specific problem.
It is the best way to represent any problem together with an efficient solution.
How to Write an Algorithm?
There is no hard and fast rule when writing an algorithm. There are a few points you just have to
consider when you write an algorithm.
 Describe the problem clearly. Be simple and clear with the description of the
problem.

 Determine the input and output of the algorithm. Do not let the algorithm run
infinitely.

 Briefly describe the start and end points of the algorithm.

 Describe the steps needed to achieve the target.

 Review the algorithm.


Characteristics of Algorithms
 Algorithms must have an endpoint; they should not run forever. There must be a
finite amount of time for which the algorithm runs

 You must give an algorithm an individual name

 The input and output should be well defined. You should give a sample input (other
than 0) together with its output for better understanding

 There must be a defined instruction for solving the problem. The algorithm should
not have an unnecessary number of instructions

 You must give clear and concise instructions without making them difficult to
understand

 The instructions must be independent of the programming language; the algorithm
should not depend on any single language

 An algorithm helps in faster processing, so it must be adequate and correct, which
leads to greater efficiency
Need for Algorithms
 There is an important need to have algorithms in computer science

 Get a clear understanding of the problem

 To find the optimal solution for the problem

 Provides scalability: dividing the problem into smaller steps gives a better
understanding of the problem

 Understanding the design principles and algorithms

 Use the best technologies to get the most efficient and best solution

 To get complete information about the problem without having to implement it

 To understand the comparison between different algorithms and find the best time
complexity with the best space resource

Factors of Algorithm
It is important to consider some factors when you write an algorithm.
 Understandable: When you write an algorithm, it must be easy to read and
understand

 Simple: You must write an algorithm that is simple and concise

 Short and Crisp: You must consider this factor when writing an algorithm. The
algorithm should be short and must have complete information about the problem

 Description: The algorithm should have complete information about the problem
and describe the problem in complete detail

 Modular: You should be able to break down the problem into smaller sections. The
algorithm should be specially designed so that it can be easily understood

 Precision: Algorithms are expected to be precise and correct; the given input should
produce the desired output, and the algorithm should work according to the need
Development of Problem
A problem goes through different stages as a solution for it is developed and documented.
1. Definition and development of a model

2. Specify and design the algorithm

3. Analyzing and correcting the algorithm

4. Selection of correct algorithm

5. Writing the implementation and performing the program testing

6. Finally, document the problem with the solution.


Types of Design and Algorithm Analysis
Whenever you analyze an algorithm, you consider the following cases.
Best Case: We find the best case of an algorithm, i.e., the condition when the algorithm gets
executed in the minimum number of operations. We find the lower bound of the algorithm when
the algorithm performs successfully. For example: When we perform a linear search algorithm in
any data structure, if we search for an element and the element is present in the first position, that
case is referred to as the best case.
Worst Case: We find the worst case of an algorithm, i.e., the condition when the algorithm gets
executed in the maximum number of operations. In the worst case, we get an algorithm's upper
bound on running time. For example: When we perform a linear search algorithm in any data
structure, if we search for an element that is not present in that data structure, that case is referred
to as the worst case.
Average Case: We find the average case of an algorithm, i.e., the condition when the algorithm
gets executed in an average number of operations. For example: When we perform a linear search
algorithm in any data structure, if we search for an element and the element is present in the
middle position, that case is referred to as the average case.
Analysis Methods
There are different analysis methods. The most common are given below.
Asymptotic Analysis

In asymptotic analysis, there are three main asymptotic notations.


 Big O Notation: This notation represents the worst case. It only calculates the upper
bound of time when the algorithm is implemented

 Omega Notation: This notation represents the best case. It only calculates the lower
bound of time when the algorithm is implemented

 Theta Notation: It is used for analyzing the average case. It calculates the time
complexity using the upper and lower bounds
Types of Algorithms
Sorting Algorithms

We perform sorting operations on data to arrange it in ascending or descending order. Sorting
helps in arranging the data in a required format. Many different problems can be solved using
sorting algorithms; bubble sort, merge sort, quick sort, selection sort, and insertion sort are a few
such sorting algorithms.
public class Main
{
    public static void main(String[] args)
    {
        // Linear search: scan the array for the target value
        int[] array = {1, 5, 6, 7, 8};
        int search = 6;
        boolean flag = false;
        for (int i = 0; i < array.length; i++)
        {
            if (array[i] == search)
            {
                flag = true;
            }
        }
        if (flag)
            System.out.println("Number found");
        else
            System.out.println("Number not found");
    }
}

Output
Number found

Time Complexity= O(n)


Brute Force Algorithms

To solve a problem, we first need some solution. A brute force algorithm simply finds a solution
without considering the time and space complexity; once we get a solution by brute force, we can
later try to optimize it.
Recursive Algorithms

Recursive algorithms are among the most important algorithms in computer science. In this
approach, a function calls itself to solve the problem. You don't have to worry about every
subproblem explicitly; you only consider the recursive case and the base case and write the
algorithm accordingly. While writing such an algorithm, you must keep time and memory in
mind, because every recursive call consumes space on the call stack.
Let's try to understand this algorithm by an example.
// Computes n! recursively
int factorial(int number)
{
    if (number <= 1) {
        return 1;
    } else {
        return number * factorial(number - 1);
    }
}
Divide and Conquer Algorithm

It is another important technique that is used to solve many problems. As the name implies, you
first divide the problem into sub-problems, solve them, and then combine those sub-problems'
solutions to get the solution to the original problem. Many problems are solved using this
technique; some applications are binary search, median finding, matrix multiplication, merge
sort, quick sort, etc.
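
To make the idea concrete, here is a minimal Java sketch of binary search (an iterative version;
the method name binarySearch and the 0-based indexing are assumptions made for this example):

// Iterative binary search on a sorted array; returns the index of target or -1.
static int binarySearch(int[] sorted, int target)
{
    int low = 0, high = sorted.length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   // middle of the current range
        if (sorted[mid] == target)
            return mid;
        else if (sorted[mid] < target)
            low = mid + 1;                  // discard the left half
        else
            high = mid - 1;                 // discard the right half
    }
    return -1;                              // target not present
}

Each comparison discards half of the remaining elements, which is why the running time is O(log n).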
Backtracking Algorithm

The better version of the brute force approach. In this algorithm, we first start with a solution,
and if the solution is successful in solving the problem, we print the same solution, but in case
the first move does not provide the solution, then you backtrack and try to solve it with another
move. This backtracking process is called as backtracking algorithm. There are many
applications of this algorithm. Some of them are the N - Queens problem, the KnapSack
algorithm, generating binary strings, etc.
Dynamic Programming Algorithm

One of the most efficient techniques in computer science. In this approach, we store the results of
subproblems already solved and reuse them later. This often yields a solution with the best time
and space complexity. There are two versions in which you use the dynamic programming
approach.
Bottom-up approach: In this approach, you solve the smallest subproblems first and then move to
the larger ones; the results of these subproblems help you solve the upper part of the problem.
Top-down approach: In this approach, you start solving the problem from the top portion and
reuse (memoize) the results of the lower subproblems as they are computed.

There are many applications of this algorithm. Some of them are the longest common
subsequence, the Bellman-Ford algorithm, matrix chain multiplication, subset sum, the longest
common substring, etc.
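
As a small illustration of the bottom-up style (a sketch not taken from these notes; the method
name fib is only illustrative), Fibonacci numbers can be computed by storing the results of
smaller subproblems in a table:

// Bottom-up dynamic programming: fib(n) is built from the results of smaller subproblems.
static long fib(int n)
{
    if (n <= 1) return n;
    long[] table = new long[n + 1];       // table[i] holds fib(i)
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

Each subproblem is solved once and reused, so the running time is O(n) instead of the exponential
time of the naive recursion.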
Greedy Algorithm

In this algorithm, you choose the option that looks best at the present step. You don't worry about
the future or look for another, possibly better, solution. The algorithm works in a top-down
manner and might not produce the overall best solution, because it chooses what works locally
rather than globally. It relies on two properties, i.e., the greedy-choice property and optimal
substructure. There are many applications of this algorithm: Huffman coding, the k-centers
problem, graph coloring, fractional knapsack problems, minimum spanning trees, etc.
Advantages of Algorithms
There are many advantages of writing algorithms. Some of them are discussed below.
 Understanding the problem is easy. The steps and sequence help in better
understanding the problem

 The algorithm helps in determining the solution without writing the implementation

 The problems are broken down into smaller sub-problems which helps the
programmer easily convert the solution into the code

 It is independent of the programming language

 An algorithm acts as a blueprint of a program

 The programmer can understand the algorithm and find the optimal solution without
writing the actual program and coding it

Disadvantages of Algorithm
 Writing an algorithm is time-consuming

 Describing loops in algorithms is hard

 Big problems are difficult to describe, and writing algorithms for such problems is even
more difficult
What is the design and analysis of algorithms?

In the design and analysis of algorithms, we study algorithms together with their design and
implementation. Algorithms play an important role in solving problems: they give us a clear and
concise picture of how to tackle a problem with the best possible solution.
Evaluation Questions
Why are there types of algorithms?

There are different types of algorithms. Every algorithm differs in terms of space and time
complexity. The resources used in one algorithm may not be efficient in another algorithm.

Therefore there is a need for different types of algorithms that can help us in the identification of
a better solution.
Name some types of algorithms.

Some commonly used algorithms that help us in most applications are the sorting algorithm,
Brute Force Algorithm, Recursive Algorithm, Divide and conquer Algorithm, backtracking
Algorithm, Greedy Algorithm, and Dynamic Programming Algorithm.
Why do we study Algorithms?

Algorithms play an important role when we define a problem with a solution. The sequence of
steps helps us to get a clear picture of the problem, which further helps us to find an optimized
and efficient solution. It is the program's blueprint, and we can analyze it before implementing it.
What do you refer to as ADA?

ADA commonly refers to the Analysis and Design of Algorithms. Learning this is an important
skill in computer science. Solving a problem without a proper algorithm causes errors, and it
takes a lot of time to implement a solution and then make changes later on. Design and analysis
help us get a clear picture of the problem.

Asymptotic analysis of an algorithm

Asymptotic analysis of an algorithm refers to defining the mathematical foundation/framing of
its run-time performance. Using asymptotic analysis, we can conclude the best case, average
case, and worst case scenarios of an algorithm.

Asymptotic analysis is input bound i.e., if there's no input to the algorithm, it is concluded to
work in a constant time. Other than the "input" all other factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical units
of computation. For example, the running time of one operation may be computed as f(n) = n and
that of another operation as g(n) = n². This means the running time of the first operation increases
linearly as n grows, while the running time of the second operation grows quadratically as n
increases. Similarly, the running times of both operations will be nearly the same if n is
significantly small.

Usually, the time required by an algorithm falls under three types −

 Best Case − Minimum time required for program execution.


 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.

Consequently, analysis of algorithms focuses on the computation of space and time complexity.
Here are various types of time complexities which can be analyzed for the algorithm:

 Best case time complexity: The best case time complexity of an algorithm is a measure of
the minimum time that the algorithm will require for an input of size 'n.' The running time
of many algorithms varies not only for the inputs of different sizes but also for the
different inputs of the same size.
 Worst case time complexity: The worst case time complexity of an algorithm is a
measure of the maximum time that the algorithm will require for an input of size 'n.'
For example, if the n input data items supplied to a simple sorting algorithm are in
reverse order, the algorithm may require on the order of n² operations to perform the
sort, which corresponds to the worst case time complexity of the algorithm.
 Average case time complexity: The time that the algorithm requires to execute on a
typical input of size 'n' is known as the average case time complexity.

Asymptotic Notations

Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.

 Ο − Big Oh Notation
 Ω − Big Omega Notation
 Θ − Theta Notation
 o − Little Oh Notation
 ω − Little Omega Notation

Big Oh Notation, Ο

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity, i.e., the longest amount of time an algorithm can
possibly take to complete.

For a given function g(n),

Ο(g(n)) = { f(n) : there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n > n0 }

Example

Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n³,

f(n) ≤ 20·g(n) for all values of n ≥ 1 (taking c = 20 and n0 = 1).

Hence, the complexity of f(n) can be represented as O(g(n)), i.e. O(n³).

Big Omega Notation, Ω

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best case time complexity, i.e., the minimum amount of time an algorithm can
possibly take to complete.

For a given function g(n),

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n > n0 }

Example

Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n³, f(n) ≥ 4·g(n) for all values of n > 0 (taking c = 4 and n0 = 0).

Hence, the complexity of f(n) can be represented as Ω(g(n)), i.e. Ω(n³).

Theta Notation, θ

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. Theta notation is sometimes confused with the average case time
complexity; although it can often be used to describe the average case, it is really a tight bound,
and other notations can describe the average case as well. It is represented as follows −

θ(g(n)) = { f(n) : f(n) = Ο(g(n)) and f(n) = Ω(g(n)) for all n > n0 }

Example

Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.

Considering g(n) = n³, 4·g(n) ≤ f(n) ≤ 20·g(n) for all values of n ≥ 1.

Hence, the complexity of f(n) can be represented as θ(g(n)), i.e. θ(n³).

Little Oh (o) and Little Omega (ω) Notations

The Little Oh and Little Omega notations also represent upper and lower bounds on the running
time, but these bounds are not asymptotically tight, in contrast to the Big Oh and Big Omega
notations. Therefore, the most commonly used notations to represent time complexities are the
Big Oh and Big Omega notations.

Analyzing Algorithm Control Structure

To analyze a programming code or algorithm, we must notice that each instruction affects the
overall performance of the algorithm and therefore, each instruction must be analyzed separately
to analyze overall performance. However, there are some algorithm control structures which are
present in each programming code and have a specific asymptotic analysis.

Some Algorithm Control Structures are:

1. Sequencing
2. If-then-else
3. for loop
4. While loop

1. Sequencing:

Suppose our algorithm consists of two parts A and B, where A takes time tA and B takes time tB
for computation. By the sequence rule, the total computation time is tA + tB; by the maximum
rule, this computation time is asymptotically max(tA, tB).

Example 1
Suppose tA = O(n) and tB = θ(n²).
Then, the total computation time can be calculated as

Computation Time = tA + tB
                 = max(tA, tB)
                 = max(O(n), θ(n²)) = θ(n²)

2. If-then-else:

The total computation time follows the condition rule for "if-then-else": according to the
maximum rule, this computation time is max(tA, tB), where tA and tB are the times of the two
branches.

Example:

Suppose tA = O(n²) and tB = θ(n²). Calculate the total computation time:

Total Computation = max(tA, tB)
                  = max(O(n²), θ(n²)) = θ(n²)

3. For loop:

The general format of for loop is:

for (initialization; condition; update)
{
    statement(s);
}

Complexity of nested for loops:

Suppose an outer loop executes N times and, every time the outer loop executes, an inner loop
executes M times. As a result, the statements in the inner loop execute a total of N * M times.
Thus, the total complexity of the two loops is O(N * M), which becomes O(N²) when M = N.
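
A small Java sketch of such nested loops (the method and variable names are purely illustrative):

// Two nested loops: the body executes N * M times in total.
static int countPairs(int n, int m)
{
    int count = 0;
    for (int i = 0; i < n; i++)         // outer loop runs N times
    {
        for (int j = 0; j < m; j++)     // inner loop runs M times per outer iteration
        {
            count++;                    // executed N * M times overall
        }
    }
    return count;                       // equals n * m
}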

4. While loop:

A simple technique for analyzing a while loop is to find a function of the loop variables whose
value decreases each time around the loop. For the loop to terminate, that value must remain a
positive integer. By keeping track of how many times the value of this function decreases, one
can obtain the number of repetitions of the loop. Another approach for analyzing while loops is
to treat them as recursive algorithms.

Algorithm:

1. [Initialize] Set K := 1, LOC := 1 and MAX := DATA[1]
2. Repeat steps 3 and 4 while K ≤ N
3.     If MAX < DATA[K], then: set LOC := K and MAX := DATA[K]
4.     Set K := K + 1
   [End of step 2 loop]
5. Write: LOC, MAX
6. EXIT
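
A rough Java translation of this maximum-finding loop (a sketch assuming 0-based array indexing
rather than the 1-based DATA[1..N] above):

// Finds the largest value in data and the index where it occurs.
static void printMax(int[] data)
{
    int loc = 0;                     // LOC: position of the current maximum
    int max = data[0];               // MAX: current maximum value
    int k = 1;
    while (k < data.length)          // repeat while K <= N
    {
        if (max < data[k])
        {
            loc = k;
            max = data[k];
        }
        k = k + 1;
    }
    System.out.println("LOC = " + loc + ", MAX = " + max);
}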
Sorting Algorithms
Bubble Sort

Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing adjacent items and swapping them if they are in
the wrong order. The pass through the list is repeated until no swaps are needed, which means the
list is sorted. This is the easiest method among all sorting algorithms.

How Bubble Sort Works

1. The bubble sort starts with the very first index and makes it a bubble element. Then it
compares the bubble element, which is currently our first index element, with the next
element. If the bubble element is greater and the second element is smaller, then both of
them will swap.
After swapping, the second element will become the bubble element. Now we will
compare the second element with the third as we did in the earlier step and swap them if
required. The same process is followed until the last element.
2. We will follow the same process for the rest of the iterations. After each of the iteration,
we will notice that the largest element present in the unsorted array has reached the last
index.

For each iteration, the bubble sort will compare up to the last unsorted element.

Once all the elements get sorted in the ascending order, the algorithm will get terminated.
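
A minimal Java sketch of bubble sort in ascending order (not the exact code from these notes):

// Bubble sort: repeatedly swap adjacent out-of-order elements until no swaps occur.
static void bubbleSort(int[] a)
{
    for (int pass = 0; pass < a.length - 1; pass++)
    {
        boolean swapped = false;
        // After each pass, the largest unsorted element reaches its final position,
        // so the inner loop can stop one place earlier each time.
        for (int i = 0; i < a.length - 1 - pass; i++)
        {
            if (a[i] > a[i + 1])
            {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break;   // list already sorted
    }
}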

Selection Sort

The selection sort improves on the bubble sort by making only a single swap for each pass through
the list. To do this, a selection sort searches for the largest value as it makes a pass and, after
finishing the pass, places it in its proper location. As with bubble sort, after the first pass the
largest item is in the right place; after the second pass, the next largest is in place. This procedure
continues and requires n-1 passes to sort n items, since the final item is already in place after the
(n-1)th pass.

ALGORITHM: SELECTION SORT (A)

1. k ← length[A]
2. for j ← 1 to k - 1
3.     smallest ← j
4.     for i ← j + 1 to k
5.         if A[i] < A[smallest]
6.             then smallest ← i
7.     exchange(A[j], A[smallest])

How Selection Sort works

1. In the selection sort, first of all, we set the initial element as a minimum.
2. Now we will compare the minimum with the second element. If the second element turns
out to be smaller than the current minimum, we treat it as the new minimum and then move
on to the third element.
3. Otherwise, if the second element is greater than the minimum (which is our first element),
we do nothing, move on to the third element, and compare it with the minimum.
We will repeat this process until we reach the last element.
4. After the completion of each iteration, we will notice that our minimum has reached the
start of the unsorted list.
5. For each iteration, we will start the indexing from the first element of the unsorted list.
We will repeat the Steps from 1 to 4 until the list gets sorted or all the elements get
correctly positioned.
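
A minimal Java sketch of selection sort (the smallest-element variant from the pseudocode above,
0-based indexing):

// Selection sort: on each pass, find the smallest remaining element
// and swap it into the next position of the sorted prefix.
static void selectionSort(int[] a)
{
    for (int j = 0; j < a.length - 1; j++)
    {
        int smallest = j;
        for (int i = j + 1; i < a.length; i++)
        {
            if (a[i] < a[smallest])
                smallest = i;
        }
        int tmp = a[j];          // one swap per pass
        a[j] = a[smallest];
        a[smallest] = tmp;
    }
}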

Insertion Sort

Insertion sort is one of the simplest sorting algorithms for the reason that it sorts a single element
at a particular instance. It is not the best sorting algorithm in terms of performance, but it's
slightly more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive
sorting technique.

Let's consider the example of cards to have a better understanding of the logic behind the
insertion sort.

Suppose we have a set of cards in our hand, such that we want to arrange these cards in
ascending order. To sort these cards, we have a number of intuitive ways.

One such thing we can do is initially we can hold all of the cards in our left hand, and we can
start taking cards one after other from the left hand, followed by building a sorted arrangement in
the right hand.

Assuming the first card to be already sorted, we select the next unsorted card. If it is greater than
the cards already in the sorted hand, we place it to their right; otherwise, we insert it to the left of
the larger cards. At any stage during this whole process, the left hand holds the unsorted cards,
and the right hand holds the sorted ones.

In the same way, we will sort the rest of the unsorted cards by placing them in the correct
position. At each iteration, the insertion algorithm places an unsorted element at its right place.

ALGORITHM: INSERTION SORT (A)

1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j - 1]
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key

How Insertion Sort Works

1. We will start by assuming the very first element of the array is already sorted. Inside the key,
we will store the second element.

Next, we will compare our first element with the key, such that if the key is found to be smaller
than the first element, we will interchange their indexes or place the key at the first index. After
doing this, we will notice that the first two elements are sorted.

2. Now, we will move on to the third element and compare it with the left-hand side elements. If
it is the smallest element, then we will place the third element at the first index.

Else if it is greater than the first element and smaller than the second element, then we will
interchange its position with the third element and place it after the first element. After doing
this, we will have our first three elements in a sorted manner.

3. Similarly, we will sort the rest of the elements and place them in their correct position.
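
A minimal Java sketch of insertion sort (0-based indexing, ascending order; a direct translation of
the pseudocode above):

// Insertion sort: grow a sorted prefix by inserting each new element into place.
static void insertionSort(int[] a)
{
    for (int j = 1; j < a.length; j++)
    {
        int key = a[j];              // element to insert into the sorted prefix a[0 .. j-1]
        int i = j - 1;
        while (i >= 0 && a[i] > key)
        {
            a[i + 1] = a[i];         // shift larger elements one position to the right
            i--;
        }
        a[i + 1] = key;              // place the key in its correct position
    }
}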

Time Complexities:

o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for an already sorted array, because only the outer loop runs n times and the
inner loop never executes.
o Average Case Complexity: The average-case time complexity for the insertion sort
algorithm is O(n²), which is incurred when the elements are in jumbled order,
i.e., neither in ascending nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs
when the input is in reverse order (for example, sorting a descending array into
ascending order). In this case, every element is compared with all the elements before
it, so up to n-1 comparisons are made for the nth element.

The insertion sort algorithm is highly recommended, especially when a few elements are left for
sorting or in case the array encompasses few elements.

Space Complexity

The insertion sort has a space complexity of O(1), since it uses only a single extra
variable, key.

Insertion Sort Applications

The insertion sort algorithm is used in the following cases:

o When the array contains only a few elements.


o When there exist few elements to sort.

Advantages of Insertion Sort

1. It is simple to implement.
2. It is efficient on small datasets.
3. It is stable (does not change the relative order of elements with equal keys)
4. It is in-place (only requires a constant amount O (1) of extra memory space).
5. It is an online algorithm: it can sort a list as it receives it.

Divide and Conquer Approach

Divide and Conquer is an algorithmic pattern. The idea is to take a problem on a large input,
break the input into smaller pieces, solve the problem on each of the small pieces, and then merge
the piecewise solutions into a global solution. This mechanism of solving the problem is called
the Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

1. Divide the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

Generally, we can follow the divide-and-conquer approach in a three-step process.

Examples: the following computer algorithms are based on the Divide & Conquer approach:


1. Maximum and Minimum Problem


2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi.

Fundamental of Divide & Conquer Strategy:

There are two fundamentals of the Divide & Conquer strategy:

1. Relational Formula

2. Stopping Condition

1. Relational Formula: This is the recurrence formula that we generate for the given technique.
After generating the formula, we apply the D&C strategy, i.e. we break the problem recursively
and solve the resulting subproblems.

2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we
need to know how long to keep dividing. The condition at which we stop the recursion steps of
D&C is called the stopping condition (the base case).

Applications of Divide and Conquer Approach:

Following algorithms are based on the concept of the Divide and Conquer Technique:

1. Binary Search: The binary search algorithm is a searching algorithm, which is also
called a half-interval search or logarithmic search. It works by comparing the target value
with the middle element of a sorted array. After making the comparison, if the values
differ, the half that cannot contain the target is eliminated, and the search continues on the
other half. We again consider the middle element and compare it with the target value,
and the process keeps repeating until the target value is found. If the remaining half is
empty when the search ends, it can be concluded that the target is not present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms, and it is also known as
partition-exchange sort. It starts by selecting a pivot value from the array and then divides
the rest of the array elements into two sub-arrays. The partition is made by comparing
each of the elements with the pivot value, checking whether the element holds a greater or
lesser value than the pivot, and the sub-arrays are then sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts
by dividing an array into sub-array and then recursively sorts each of them. After the
sorting is done, it merges them back.
4. Closest Pair of Points: It is a problem of computational geometry. This algorithm
emphasizes finding out the closest pair of points in a metric space, given n points, such
that the distance between the pair of points should be minimal.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, which is named after
Volker Strassen. It has proven to be much faster than the traditional algorithm when it
works on large matrices.

6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and
Conquer approach and has a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest classical
multiplication algorithms, invented by Anatoly Karatsuba in 1960 and published in 1962.
It multiplies two n-digit numbers by recursively reducing the problem to multiplications
of much smaller (eventually single-digit) numbers and combining the partial products.
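
To make the divide-conquer-combine steps concrete, here is a short merge sort sketch in Java (a
standard top-down version, not code taken from these notes):

// Merge sort: divide the array in halves, sort each half recursively, then merge.
static void mergeSort(int[] a, int lo, int hi)
{
    if (hi - lo <= 1) return;                // base case: 0 or 1 element is already sorted
    int mid = (lo + hi) / 2;
    mergeSort(a, lo, mid);                   // conquer the left half
    mergeSort(a, mid, hi);                   // conquer the right half
    merge(a, lo, mid, hi);                   // combine the two sorted halves
}

static void merge(int[] a, int lo, int mid, int hi)
{
    int[] tmp = new int[hi - lo];
    int i = lo, j = mid, k = 0;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    System.arraycopy(tmp, 0, a, lo, tmp.length);
}

Calling mergeSort(arr, 0, arr.length) sorts the whole array; the recurrence T(n) = 2T(n/2) + O(n)
gives the O(n log n) running time.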

Advantages of Divide and Conquer

o Divide and Conquer can successfully solve large problems, such as the Tower of Hanoi, a
mathematical puzzle. It is challenging to solve complicated problems for which you have
no basic idea, but the divide and conquer approach lessens the effort, since it divides the
main problem into smaller parts and solves them recursively. The resulting algorithms are
often much faster than naive alternatives.
o It efficiently uses cache memory without occupying much space because it solves simple
subproblems within the cache memory instead of accessing the slower main memory.
o It is more proficient than that of its counterpart Brute Force technique.
o Since these algorithms exhibit parallelism, they can be handled by systems incorporating
parallel processing without much modification.

Disadvantages of Divide and Conquer

o Since most of these algorithms are designed using recursion, they require careful
memory management.
o An explicit stack may overuse the space.
o The recursion may even crash the system if it goes deeper than the stack space
available.

Greedy Algorithm

The greedy method is one of the strategies like Divide and conquer used to solve the problems.
This method is used for solving optimization problems. An optimization problem is a problem
that demands either maximum or minimum results. Let's understand through some terms.

The Greedy method is the simplest and most straightforward approach. It is not a single
algorithm but a technique. Its main characteristic is that each decision is taken on the basis of the
currently available information: whatever information is currently present, the decision is made
without worrying about the effect of that decision in the future.

This technique is basically used to determine the feasible solution that may or may not be
optimal. The feasible solution is a subset that satisfies the given criteria. The optimal solution is
the solution which is the best and most favorable solution in the subset. If more than one solution
satisfies the given criteria, then all of those solutions are considered feasible, whereas the optimal
solution is the best solution among all of them.

Characteristics of Greedy method

The following are the characteristics of a greedy method:

o To construct the solution in an optimal way, this algorithm creates two sets where one set
contains all the chosen items, and another set contains the rejected items.
o A Greedy algorithm makes good local choices in the hope that the solution should be
either feasible or optimal.

Components of Greedy Algorithm

The components that can be used in the greedy algorithm are:

o Candidate set: The set of items from which a solution is created.
o Selection function: This function is used to choose the candidate or subset which can be
added in the solution.
o Feasibility function: A function that is used to determine whether the candidate or
subset can be used to contribute to the solution or not.
o Objective function: A function is used to assign the value to the solution or the partial
solution.
o Solution function: This function is used to indicate whether a complete solution has
been reached or not.

Applications of Greedy Algorithm

o It is used in finding the shortest path.


o It is used to find the minimum spanning tree using the prim's algorithm or the Kruskal's
algorithm.

o It is used in a job sequencing with a deadline.
o This algorithm is also used to solve the fractional knapsack problem.

Pseudo code of Greedy Algorithm

Algorithm Greedy(a, n)
{
    solution := 0;
    for i = 1 to n do
    {
        x := select(a);
        if feasible(solution, x) then
        {
            solution := union(solution, x);
        }
    }
    return solution;
}

The above is the general greedy algorithm. Initially, the solution is assigned the value zero. We
pass the array and the number of elements to the greedy algorithm. Inside the for loop, we select
the elements one by one and check whether the solution remains feasible. If the solution is
feasible, then we perform the union.

Let's understand through an example.

Suppose there is a problem 'P'. I want to travel from A to B shown as below:

P:A→B

The problem is that we have to travel this journey from A to B. There are various solutions to go
from A to B. We can go from A to B by walk, car, bike, train, aeroplane, etc. There is a
constraint in the journey that we have to travel this journey within 12 hrs. If I go by train or
aeroplane then only, I can cover this distance within 12 hrs. There are many solutions to this
problem but there are only two solutions that satisfy the constraint.

Now suppose we have to cover the journey at the minimum cost. Since we have to travel this
distance as cheaply as possible, the problem is known as a minimization problem. So far, we have
two feasible solutions, i.e., one by train and another by air. Since travelling by train leads to the
minimum cost, it is the optimal solution. An optimal solution is also a feasible solution, but it is
the one providing the best result, here the minimum cost. There is only one optimal solution.

A problem that requires either a minimum or a maximum result is known as an optimization
problem. The greedy method is one of the strategies used for solving optimization problems.
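
As a concrete illustration of the greedy strategy, here is a short fractional knapsack sketch in Java
(the arrays and the method name maxValue are assumptions made for this example; weights are
assumed positive):

// Fractional knapsack: repeatedly take as much as possible of the item
// with the best value-to-weight ratio (the greedy choice).
static double maxValue(double[] value, double[] weight, double capacity)
{
    int n = value.length;
    Integer[] order = new Integer[n];
    for (int i = 0; i < n; i++) order[i] = i;
    // Sort item indices by value/weight ratio, best first.
    java.util.Arrays.sort(order, (a, b) ->
            Double.compare(value[b] / weight[b], value[a] / weight[a]));
    double total = 0.0;
    for (int idx : order)
    {
        if (capacity <= 0) break;
        double take = Math.min(weight[idx], capacity);   // take the whole item or a fraction
        total += take * (value[idx] / weight[idx]);
        capacity -= take;
    }
    return total;
}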

Disadvantages of using Greedy algorithm

Greedy algorithm makes decisions based on the information available at each phase without
considering the broader problem. So, there might be a possibility that the greedy solution does
not give the best solution for every problem.

It follows the locally optimal choice at each stage with the intent of finding the global optimum.
Let's understand through an example.

Consider the graph which is given below:

We have to travel from the source to the destination at the minimum cost. Suppose we have three
feasible solutions with path costs of 10, 20, and 5. Since 5 is the minimum cost, that path is the
optimal solution. This is the local optimum, and in this way we find the local optimum at each
stage in order to compute the global optimal solution.

General Revision on Analysis and Design of Algorithms

 Explain the importance of algorithm analysis in the design of efficient


algorithms.
Algorithm analysis helps in evaluating the efficiency and performance of algorithms, allowing for
better decision-making during algorithm design. It helps identify potential bottlenecks,
understand resource requirements, and optimize algorithms.

 What is the time complexity of an algorithm and how is it determined?

Time complexity measures the amount of time an algorithm takes to run as a function of the
input size. It is determined by analyzing the number of basic operations performed by the
algorithm, such as comparisons, assignments, and arithmetic operations.

 Explain the concept of "Big O" notation and its role in algorithm analysis.
Big O notation is used to express the upper bound of an algorithm's time complexity in the
worst-case scenario. It provides a way to compare and classify algorithms based on their growth
rates, allowing us to understand the scalability and efficiency of different algorithms.

 Describe the difference between a brute-force algorithm and an optimized


algorithm.
A brute-force algorithm exhaustively tries all possible solutions to a problem, usually resulting in
high time complexity. An optimized algorithm, on the other hand, aims to minimize the time
complexity by employing strategies like divide and conquer, dynamic programming, or greedy
techniques.

 Explain the concept of algorithmic efficiency and its relationship with algorithm
design.
Algorithmic efficiency refers to the ability of an algorithm to solve a problem in the most time-
and space-efficient manner. Efficient algorithm design aims to minimize resource usage and
maximize performance, resulting in faster and more scalable solutions.

 Compare and contrast dynamic programming and greedy algorithms.


Dynamic programming and greedy algorithms are both optimization techniques used in
algorithm design. Dynamic programming solves problems by breaking them into overlapping
subproblems and solving them iteratively, while greedy algorithms make locally optimal choices
at each step to find a globally optimal solution.

 How can recursion be used in algorithm design? Provide an example.


Recursion is a technique where a function calls itself to solve subproblems until a base case is
reached. It can be used to simplify complex algorithms by breaking them down into smaller,
more manageable subproblems. For example, the quicksort algorithm uses recursion to sort
subarrays.

 Discuss the advantages and limitations of using heuristic algorithms.


Heuristic algorithms are problem-solving techniques that utilize rules of thumb or "heuristics" to
find approximate solutions. They often offer faster runtime but may sacrifice accuracy and
optimality. Heuristic algorithms are useful when the exact solution is infeasible or when quick
approximations are acceptable.

 Explain the concept of divide and conquer and provide an example algorithm
that uses this technique.

Divide and conquer is a problem-solving technique that involves breaking a problem into
smaller, independent subproblems, solving them recursively, and combining the solutions to
obtain the final result. An example algorithm that uses divide and conquer is the merge sort
algorithm used for sorting arrays.

 What is the role of randomness in algorithm design? Give an example.


Randomness can be used in algorithm design to introduce stochastic elements or make
probabilistic decisions. One example is the genetic algorithm, which uses random mutations and
crossovers to simulate evolution and find optimal solutions to optimization problems.

 Discuss the importance of algorithm design patterns and give an example.


Algorithm design patterns provide reusable solutions to commonly occurring algorithmic
problems. They help improve code readability, maintainability, and efficiency. An example of an
algorithm design pattern is the "divide and conquer" pattern used in recursive algorithms like
binary search.

 How does the concept of space complexity differ from time complexity?
Time complexity measures the amount of time an algorithm takes to run, while space
complexity measures the amount of memory or space required by an algorithm to solve a
problem. Both are important considerations in algorithm analysis and design.

 Explain the concept of backtracking and its use in algorithm design.


Backtracking is a recursive algorithm design technique used for solving problems by trying out
different possibilities and "backtracking" when a solution path turns out to be invalid. It is useful
for problems like finding all possible solutions or solving constraint satisfaction problems.

 Discuss the impact of algorithm design on system scalability and performance.


Efficient algorithm design is crucial for system scalability and performance. Well-designed
algorithms can handle larger data sets and execute faster, reducing resource usage and
improving overall system performance.

 What role does problem size play in algorithm analysis and performance
evaluation?
The problem size is the input size or the size of the data set given to an algorithm. It affects the
time and space complexity of an algorithm and influences its performance. By analyzing how an
algorithm's runtime or memory usage varies with problem size, we can evaluate its scalability.

 What is meant by asymptotic analysis of algorithms?


Asymptotic analysis involves analyzing the behavior of an algorithm as the input size approaches
infinity. It focuses on understanding how an algorithm's time or space complexity grows with
increasing problem size, allowing us to make generalizations about its performance.

 How does the choice of programming language impact algorithm design and
execution?

The choice of programming language can affect algorithm design and execution due to
differences in performance, available libraries, language-specific constructs, and memory
management techniques. Certain algorithms may be more naturally expressed and efficient in
specific programming languages.

 Explain the concept of algorithmic optimization and its trade-offs.


Algorithmic optimization involves making changes to an algorithm or its design to improve
performance or efficiency. However, optimization often involves trade-offs, such as increased
complexity or reduced readability. Balancing these trade-offs is essential in algorithm design.

 What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time. An algorithm is a
step-by-step procedure to solve a problem.

 What are the types of algorithm efficiencies?


The two types of algorithm efficiencies are
• Time efficiency: indicates how fast the algorithm runs
• Space efficiency: indicates how much extra memory the algorithm needs

 Mention some of the important problem types?


Some of the important problem types are as follows
• Sorting
• Searching
• String processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems

 What is worst-case efficiency?

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n,
which is an input or inputs of size n for which the algorithm runs the longest among all possible
inputs of that size.
 What is best-case efficiency?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which
is an input or inputs of size n for which the algorithm runs the fastest among all possible inputs of
that size.

 Define Ω-notation?
A function t[n] is said to be in Ω[g[n]], denoted by t[n] ε Ω[g[n]], if t[n] is bounded below by
some constant multiple of g[n] for all large n, i.e., if there exist some positive constant c and
some non-negative integer n0 such that t[n] >= c·g[n] for all n >= n0.
 What is average case efficiency?
The average case efficiency of an algorithm is its efficiency for an average case input of size n. It
provides information about an algorithm's behavior on a "typical" or "random" input.
 Define O-notation?
A function t[n] is said to be in O[g[n]], denoted by t[n] ε O[g[n]], if t[n] is bounded above by
some constant multiple of g[n] for all large n, i.e., if there exist some positive constant c and
some non-negative integer n0 such that t[n] <= c·g[n] for all n >= n0.
 Define θ-notation?
A function t[n] is said to be in θ[g[n]], denoted by t[n] ε θ[g[n]], if t[n] is bounded
both above and below by some constant multiple of g[n] for all large n, i.e., if there exist some
positive constants c1 and c2 and some non-negative integer n0 such that c2·g[n] <= t[n] <= c1·g[n]
for all n >= n0.

Give Euclid's algorithm for computing gcd(m, n).

ALGORITHM Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

Example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.
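
A direct Java rendering of Euclid's algorithm (a small sketch) would be:

// Euclid's algorithm: gcd(m, n) = gcd(n, m mod n) until the second argument is 0.
static int gcd(int m, int n)
{
    while (n != 0)
    {
        int r = m % n;   // remainder of m divided by n
        m = n;
        n = r;
    }
    return m;            // e.g. gcd(60, 24) returns 12
}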

 Explain the various Asymptotic Notations used in algorithm design? Or Discuss the
properties of asymptotic notations. Or Explain the basic efficiency classes with notations.

Asymptotic notation is notation used to make meaningful statements about the efficiency of a
program. The efficiency analysis framework concentrates on the order of growth of an
algorithm's basic operation count as the principal indicator of the algorithm's efficiency.
To compare and rank such orders of growth, computer scientists use three notations: O (Big oh),
Ω (Big omega), and Θ (Big theta).
Let t[n] and g[n] be any nonnegative functions defined on the set of natural numbers. The
algorithm's running time t[n] is usually indicated by its basic operation count C[n], and g[n] is
some simple function to compare with the count.
There are 5 basic asymptotic notations used in the algorithm design.
• Big Oh: A function t[n] is said to be in O[g[n]], denoted t[n] ε O[g[n]], if t[n] is bounded
above by some constant multiple of g[n] for all large n, i.e., if there exist some positive constant
c and some non-negative integer n0 such that t[n] <= c·g[n] for all n >= n0.
• Big Omega: A function t[n] is said to be in Ω[g[n]], denoted t[n] ε Ω[g[n]], if t[n] is
bounded below by some constant multiple of g[n] for all large n, i.e., if there exist some positive
constant c and some non-negative integer n0 such that t[n] >= c·g[n] for all n >= n0.
• Big Theta: A function t[n] is said to be in θ[g[n]], denoted t[n] ε θ[g[n]], if t[n] is bounded
both above and below by some constant multiple of g[n] for all large n, i.e., if there exist some
positive constants c1 and c2 and some non-negative integer n0 such that c2·g[n] <= t[n] <= c1·g[n]
for all n >= n0.
• Little oh: The function f[n] = o[g[n]] iff lim (n→∞) f[n]/g[n] = 0.
• Little Omega: The function f[n] = ω[g[n]] iff lim (n→∞) g[n]/f[n] = 0.

In summary:
t[n] ∈ O[g[n]] iff t[n] <= c·g[n] for n > n0
t[n] ∈ Ω[g[n]] iff t[n] >= c·g[n] for n > n0
t[n] ∈ Θ[g[n]] iff t[n] ∈ O[g[n]] and t[n] ∈ Ω[g[n]]

Some properties of asymptotic order of growth:

 f[n] ∈ O[f[n]]
 f[n] ∈ O[g[n]] iff g[n] ∈ Ω[f[n]]
 If f[n] ∈ O[g[n]] and g[n] ∈ O[h[n]], then f[n] ∈ O[h[n]] (note the similarity with a ≤ b)
 If f1[n] ∈ O[g1[n]] and f2[n] ∈ O[g2[n]], then f1[n] + f2[n] ∈ O[max{g1[n], g2[n]}]

Basic Efficiency classes:

1         constant        best case
log n     logarithmic     divide, ignore part
n         linear          examine each element
n log n   linearithmic    divide, use all parts
n²        quadratic       nested loops
n³        cubic           nested loops
2^n       exponential     all subsets
n!        factorial       all permutations
 Explain recursive and non-recursive algorithms with example. Or
With an example, explain how recurrence equations are solved.

Mathematical Analysis of Recursive Algorithms


General Plan for Analysis
 Decide on a parameter indicating an input's size.
 Identify the algorithm's basic operation.
 Check whether the number of times the basic op. is executed may vary on different inputs of the
same size. [If it may, the worst, average, and best cases must be investigated separately.]
 Set up a recurrence relation with an appropriate initial condition expressing the number of times
the basic op. is executed.
 Solve the recurrence by backward substitutions or another method.

EXAMPLE 1: Compute the factorial function F[n] = n! for an arbitrary nonnegative integer n.

ALGORITHM F[n]
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F[n − 1] * n

EXAMPLE 2: consider educational workhorse of recursive algorithms: the Tower of Hanoi


puzzle. We have n disks of different sizes that can slide onto any of three pegs. Consider A
(source), B (auxiliary), and C (Destination). Initially, all the disks are on the first peg in order of
size, the largest on the bottom and the smallest on top. The goal is to move all the disks to the
third peg, using the second one as an auxiliary.

ALGORITHM TOH(n, A, C, B)

//Moves n disks from source A to destination C using auxiliary B, recursively
//Input: n disks and 3 pegs A, B, and C
//Output: Disks moved to the destination in the source order
if n = 1
    Move disk from A to C
else
    TOH(n - 1, A, B, C)      // move the top n-1 disks from A to B using C
    Move disk n from A to C
    TOH(n - 1, B, C, A)      // move the n-1 disks from B to C using A
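
To see how the recurrence for this algorithm is solved by backward substitution (a short sketch,
where M(n) denotes the number of disk moves):

M(n) = 2M(n − 1) + 1, with M(1) = 1
     = 2[2M(n − 2) + 1] + 1 = 2²M(n − 2) + 2 + 1
     = 2³M(n − 3) + 4 + 2 + 1
     = ...
     = 2^(n−1)M(1) + 2^(n−2) + ... + 2 + 1 = 2^n − 1

So the Tower of Hanoi algorithm makes 2^n − 1 moves, i.e., its basic operation count grows
exponentially with n.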

Mathematical Analysis of Non-Recursive Algorithms


General Plan for Analysis
 Decide on a parameter n indicating the input size
 Identify the algorithm's basic operation
 Determine the worst, average, and best cases for input of size n
 Set up a sum for the number of times the basic operation is executed
 Simplify the sum using standard formulas and rules
EXAMPLE Consider the problem of finding the value of the largest element in a list of n
numbers
ALGORITHM MaxElement[A[0..n−1]]
//Determines the value of the largest element in a given array
//Input: An array A[0..n−1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n−1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval
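
For this algorithm the basic operation is the comparison A[i] > maxval, executed once for each
value of i from 1 to n − 1, so (a short sketch of the count):

C(n) = sum for i = 1 to n−1 of 1 = n − 1 ∈ Θ(n)

Hence MaxElement runs in time linear in the size of its input.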

EXAMPLE 2: Given two n×n matrices A and B, find the time efficiency of the definition-based
algorithm for computing their product C = AB. By definition, C is an n×n matrix whose elements
are computed as the scalar (dot) products of the rows of matrix A and the columns of matrix B:

C[i, j] = A[i, 0]B[0, j] + ... + A[i, k]B[k, j] + ... + A[i, n−1]B[n−1, j]

for every pair of indices 0 ≤ i, j ≤ n−1.

ALGORITHM MatrixMultiplication[A[0..n−1, 0..n−1], B[0..n−1, 0..n−1]]
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n×n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n−1 do
    for j ← 0 to n−1 do
        C[i, j] ← 0.0
        for k ← 0 to n−1 do
            C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C

We measure the input's size by the matrix order n. There are two arithmetical operations in the
innermost loop here, multiplication and addition, that, in principle, can compete for designation
as the algorithm's basic operation.

 What are the fundamental steps to solve an algorithm? Explain. Or Describe in detail
about the steps in analyzing and coding an algorithm.
An algorithm is a sequence of unambiguous instructions for solving problem, i.e., for obtaining a
required output for any legitimate input in a finite amount of time.
Algorithmic steps are
• Understand the problem
• Decision making
• Design an algorithm
• Proving correctness of an algorithm
• Analyze the algorithm
• Coding and implementation of the algorithm

Figure : Algorithm design and analysis process

1. Understand the problem

a. Read the description carefully to understand the problem completely.

b. Identify the problem type and use existing algorithms to find a solution.
c. The input (instance) to the problem and the range of the input get fixed.

2. Decision making

a. Ascertaining the capabilities of the computational device

i. In a RAM (random-access machine), instructions are executed one after another; accordingly,
algorithms designed to be executed on such machines are called sequential algorithms.
ii. In some computers, operations are executed concurrently, in parallel; algorithms designed for
such machines are called parallel algorithms.
iii. The choice of computational device (processor and memory) is mainly based on space and
time efficiency.
b. Choosing between exact versus approximate problem solving

i. An algorithm that solves the problem exactly and produces a correct result is called an exact
algorithm.
ii. If the problem is so complex that an exact solution cannot be obtained, an algorithm that
produces an approximate solution is called an approximation algorithm.

c. Algorithm design strategies


i. Algorithms + data structures = programs; although algorithms and data structures are
independent, they are combined to produce programs.
ii. Implementation of an algorithm is possible only with the help of both algorithms and data
structures.
iii. Algorithm design techniques include brute force, dynamic programming, the greedy
technique, divide and conquer, and so on.
iv. Methods for specifying an algorithm
iv. Methods for specifying an algorithm
1. Natural language: It is very simple and easy to specify an algorithm using natural language.
Example // addition of two numbers
Read a
Read b
Add c = a + b
Store and display the result in c
2. Flow chart – Flowchart is a diagrammatic and graphical representation of an algorithm. It is a
method of expressing an algorithm by a collection of connected graphical shapes containing
description of the algorithm‘s steps.
3. Pseudo code - It is a mixture of natural language and programming language constructs. It is
usually more precise than natural language.
Example // sum of 2 nos
// input a and b
// output c
c← a+ b

3. Proving an algorithm's correctness

i. Once an algorithm has been specified then its correctness must be proved.
ii. An algorithm must yield the required result for every legitimate input in a finite amount of time.
iii. For example, the correctness of Euclid‘s algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n).
iv. A common technique for proving correctness is to use mathematical induction because an
algorithm‘s iterations provide a natural sequence of steps needed for such proofs.
v. The notion of correctness for approximation algorithms is less straightforward than it is for exact
algorithms. The error produced by the algorithm should not exceed a predefined limit.

4. Analyzing an algorithm

For an algorithm, the most important property is its efficiency. There are two types of algorithm efficiency:
• Time efficiency: indicates how fast the algorithm runs
• Space efficiency: indicates how much extra memory the algorithm needs
So the analysis of an algorithm's efficiency is based on both time and space efficiency.

Some factors used to analyze an algorithm are:


• Simplicity of an algorithm
• Generality of an algorithm
• Time efficiency of an algorithm
• Space efficiency of an algorithm

5. Coding an algorithm

i. The coding / implementation of an algorithm is done in a suitable programming language such as C, C++, or Java.

ii. The transition from an algorithm to a program can be done either incorrectly or very inefficiently. Implementing an algorithm correctly is necessary, and the algorithm's power should not be reduced by an inefficient implementation.
iii. Standard tricks such as computing a loop invariant outside the loop, collecting common subexpressions, replacing expensive operations by cheap ones, choosing the programming language carefully, and so on should be known to the programmer.
What are the fundamental steps to solve an algorithm? Or: What are the steps for designing an efficient algorithm?
Analysis of algorithms is the process of investigating an algorithm's efficiency with respect to two resources: running time and memory space.
The reasons for selecting these two criteria are:
• Unlike qualities such as simplicity and generality, time and space efficiency can be estimated quantitatively
• Speed and memory are the efficiency considerations on modern computers; accordingly, there are two kinds of efficiency: time efficiency and space efficiency
• Time efficiency, also called time complexity, indicates how fast an algorithm in question runs.
• Space efficiency, also called space complexity, refers to the amount of memory units required by
the algorithm in addition to the space needed for its input and output.
The steps for an efficient algorithm
1. Measuring an input‘s size
a. An algorithm's efficiency is measured as a function of the input size or range, since almost all algorithms run longer on larger inputs.
b. The input given may be a square or a non-square matrix.
c. Some algorithms require more than one parameter to indicate the size of their inputs.
2. Units for measuring time
a. We can simply use some standard unit of time measurement (a second, a millisecond, and so on) to measure the running time of a program implementing the algorithm.
b. There are obvious drawbacks to such an approach. They are
• Dependence on the speed of a particular computer
• Dependence on the quality of a program implementing the algorithm
• The compiler used in generating the machine code
• The difficulty of clocking the actual running time of the program.
c. Since we need to measure algorithm efficiency, we should have a metric that does not depend on these extraneous factors.
d. One possible approach is to count the number of times each of the
algorithm's operations is executed. This approach is both difficult and unnecessary.
e. The main objective is to identify the most important operation of the
algorithm, called the basic operation, the operation contributing the most to the total running
time, and compute the number of times the basic operation is executed.
3. Efficiency classes
It is reasonable to measure an algorithm's efficiency as a function of a parameter indicating the
size of the algorithm's input.
a. But there are many algorithms for which running time depends not only
on an input size but also on the specifics of a particular input.
4. Example: sequential search. This is a straightforward algorithm that searches for a given item (some search key K) in a list of n elements by checking successive elements of the list until either a match with the search key is found or the list is exhausted.
ALGORITHM SequentialSearch(A[0..n−1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n−1] and a search key K
//Output: The index of the first element of A that matches K, or −1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
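As an illustration, here is a direct C translation of the pseudocode above; the function name and the use of int parameters are choices of the sketch.

/* Sequential search: returns the index of the first element equal to key,
   or -1 if there is no match. */
int sequential_search(const int a[], int n, int key)
{
    int i = 0;
    while (i < n && a[i] != key)   /* scan until a match or the end of the list */
        i++;
    return (i < n) ? i : -1;
}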
Worst case efficiency
• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size
n, which is an input (or inputs) of size n for which the algorithm runs the longest among all
possible inputs of that size.

• In the worst case, when there are no matching elements or the first matching element
happens to be the last one on the list, the algorithm makes the largest number of key comparisons
among all possible inputs of size n: Cworst (n) = n.
Best case Efficiency
• The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible
inputs of that size.
• First, determine the kind of inputs for which the count C (n) will be the smallest among
all possible inputs of size n. (Note that the best case does not mean the smallest input; it means
the input of size n for which the algorithm runs the fastest.)
• Then ascertain the value of C(n) on these most convenient inputs. Example: for sequential search, best-case inputs are lists of size n whose first element equals the search key; accordingly, Cbest(n) = 1.
Average case efficiency
• The average number of key comparisons Cavg(n) can be computed as follows. Let us consider again sequential search. The standard assumptions are that the probability of a successful search is p (0 ≤ p ≤ 1) and that, in the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i; the number of comparisons made by the algorithm in such a situation is obviously i. Under these assumptions,
Cavg(n) = p(n + 1)/2 + n(1 − p),
which reduces to (n + 1)/2 when the search is guaranteed to be successful (p = 1).
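The closed form above follows from the standard computation (the intermediate steps are worth writing out once):

Cavg(n) = [1 · (p/n) + 2 · (p/n) + ... + n · (p/n)] + n · (1 − p)
        = (p/n) · [1 + 2 + ... + n] + n(1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p).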

DIVIDE AND CONQUER METHOD AND GREEDY METHOD

1. What is a brute force algorithm?
A straightforward approach, usually based directly on the problem's statement and definitions of the concepts involved.

2.What is exhaustive search?


A brute force solution to a problem involving search for an element with a special property,
usually among combinatorial objects such as permutations, combinations, or subsets of a set.
Examples or Techniques used:
• Traveling Salesman Problem (TSP)
• Knapsack Problem (KP)
• Assignment Problem (AP)

3. Give the general plan for divide-and-conquer algorithms.


The general plan is as follows
• A problem's instance is divided into several smaller instances of the same problem, ideally of about the same size
• The smaller instances are solved, typically recursively

• If necessary, the solutions obtained are combined to get the solution of the original problem
Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k distinct subsets, 1 < k ≤ n, yielding k subproblems. The subproblems must be solved, and then a method must be found to combine the subsolutions into a solution of the whole. If the subproblems are still relatively large, the divide-and-conquer strategy can possibly be reapplied.
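Although not stated explicitly here, the standard way to analyze such algorithms is through a recurrence: if an instance of size n is divided into a instances of size n/b, and f(n) accounts for the time spent dividing the problem and combining the results, then

T(n) = a T(n/b) + f(n).

For merge sort, for example, a = b = 2 and f(n) ∈ Θ(n), which gives T(n) ∈ Θ(n log n).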

4.Define – Feasibility
A feasible set (of candidates) is promising if it can be extended to produce not merely a solution,
but an optimal solution to the problem.

5.Define - Hamiltonian circuit


A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph
exactly once.

6. Define – Merge sort, and list the steps.

Merge sort sorts a given array A[0..n−1] by dividing it into two halves A[0..⌊n/2⌋−1] and A[⌊n/2⌋..n−1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
Steps
1. Divide Step: If the given array A has zero or one element, return A; it is already sorted. Otherwise, divide A into two arrays, A1 and A2, each containing about half of the elements of A.
2. Recursion Step: Recursively sort arrays A1 and A2.
3. Conquer Step: Combine the elements back in A by merging the sorted arrays A1 and A2 into a sorted sequence.
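As an illustration, a minimal C sketch of merge sort following the three steps above is given below; the helper name merge, the temporary arrays, and the omission of allocation error checks are simplifications of the sketch.

#include <stdlib.h>
#include <string.h>

/* Merge two sorted arrays left[0..nl-1] and right[0..nr-1] back into a[]. */
static void merge(int a[], int left[], int nl, int right[], int nr)
{
    int i = 0, j = 0, k = 0;
    while (i < nl && j < nr)                 /* take the smaller head element each time */
        a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    while (i < nl) a[k++] = left[i++];       /* copy any remaining elements */
    while (j < nr) a[k++] = right[j++];
}

void merge_sort(int a[], int n)
{
    if (n <= 1)                              /* divide step: zero or one element is already sorted */
        return;
    int nl = n / 2, nr = n - nl;
    int *left  = malloc(nl * sizeof *left);
    int *right = malloc(nr * sizeof *right);
    memcpy(left,  a,      nl * sizeof *left);
    memcpy(right, a + nl, nr * sizeof *right);
    merge_sort(left, nl);                    /* recursion step */
    merge_sort(right, nr);
    merge(a, left, nl, right, nr);           /* conquer step */
    free(left);
    free(right);
}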

7. Define – Quick sort

Quick sort is an algorithm of choice in many situations because it is not difficult to implement, it is a good "general purpose" sort, and it consumes relatively few resources during execution.

Analysis: O(n log n) in the average and best cases and O(n²) in the worst case.
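As an illustration, a minimal C sketch of quick sort using the Lomuto partition scheme is given below; the partition scheme and the choice of the last element as the pivot are choices of the sketch (and that pivot choice is exactly what makes already-sorted input a worst case). It is called as quick_sort(a, 0, n − 1).

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[lo..hi] around the pivot a[hi]; return the pivot's final index. */
static int partition(int a[], int lo, int hi)
{
    int pivot = a[hi];
    int i = lo;                       /* boundary of the "less than pivot" region */
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);              /* put the pivot into its final position */
    return i;
}

void quick_sort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quick_sort(a, lo, p - 1);     /* sort the elements before the pivot */
        quick_sort(a, p + 1, hi);     /* sort the elements after the pivot */
    }
}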

8. What is binary search?

Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise, the same operation is repeated recursively for the first half of the array if K < A[m] and for the second half if K > A[m]. Binary search time complexity: O(log n).
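As an illustration, a minimal iterative C sketch of binary search is given below; the iterative form (instead of the recursive description above) and the function name binary_search are choices of the sketch.

/* Binary search on a sorted int array: returns the index of key, or -1 if absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* middle element; avoids overflow of (lo + hi) / 2 */
        if (a[mid] == key)
            return mid;
        else if (key < a[mid])
            hi = mid - 1;               /* continue in the first half */
        else
            lo = mid + 1;               /* continue in the second half */
    }
    return -1;
}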

9. Define – Dijkstra's algorithm
Dijkstra's algorithm solves the single-source shortest-path problem of finding the shortest paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph.
Dijkstra's algorithm provides a correct solution for a graph with non-negative weights.

10. Define - Huffman trees


A Huffman tree is a binary tree that minimizes the weighted path length from the root to the leaves containing a set of predefined weights. The most important application of Huffman trees is Huffman codes.
11. What do you mean by optimal solution?
Given a problem with n inputs, we obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. A feasible solution that maximizes or minimizes a given objective function is called an optimal solution.
12. What is the closest-pair problem?
The closest-pair problem finds the two closest points in a set of n points. It is the simplest of a variety of problems in computational geometry that deal with the proximity of points in the plane or higher-dimensional spaces.
13. Distinguish between BFS and DFS.
DFS follows the procedure:
a. Select an unvisited node x, visit it, and treat it as the current node.
b. Find an unvisited neighbor of the current node, visit it, and make it the new current node.
c. If the current node has no unvisited neighbors, backtrack to its parent, and make that parent the new current node.
d. Repeat steps b and c until no more nodes can be visited.
e. If there are still unvisited nodes, repeat from step a.
BFS follows the procedure:
a. Select an unvisited node x, visit it, and let it be the root of the BFS tree being formed. Its level is called the current level.
b. From each node z in the current level, in the order in which the level's nodes were visited, visit all the unvisited neighbors of z. The newly visited nodes from this level form a new level that becomes the next current level.
c. Repeat step b until no more nodes can be visited.
d. If there are still unvisited nodes, repeat from step a.

14. What are applications or examples of Brute force techniques?
• Exhaustive searching techniques (TSP, KP, AP)
• Finding closest pair and convex hull problems
• Sorting: selection sort and bubble sort
• Searching: brute-force string matching and sequential search
• Computing n!

15. What are the applications of divide and conquer techniques?


• Sorting: merge sort and quick sort
• Searching: binary search
• Strassen's matrix multiplication
• Finding closest pair and convex hull problems
• Multiplication of large integers
