CSC 303 Analysis and Design of Algorithm-2
It is far more convenient to have basic metrics for an algorithm's efficiency than to
develop the algorithm and assess its efficiency each time a specific parameter in the
underlying computer system changes
It is hard to foresee an algorithm's exact behavior. There are far too many variables
to consider
Determine the input and output of the algorithm. The algorithm must not run
infinitely.
The input and output should be well defined. Give sample inputs and outputs (not
just trivial ones such as 0) for better understanding.
There must be defined instructions for solving the problem, and the algorithm should
not contain an unnecessary number of instructions.
The instructions must be clear and concise, without being difficult to understand.
An algorithm enables faster processing, so it must be adequate and correct, which
leads to greater efficiency.
Need for Algorithms
There is an important need for algorithms in computer science:
They provide scalability: dividing the problem into smaller steps gives a better
understanding of the problem.
They help in applying the best techniques to get the most efficient solution.
They make it possible to compare different algorithms and find the one with the best
time complexity and the best use of space.
Factors of Algorithm
It is important to consider the following factors when writing an algorithm.
Understandable: The algorithm must be easy to read and understand.
Simple: The algorithm should be simple and concise.
Short and crisp: The algorithm should be short while still carrying complete
information about the problem.
Descriptive: The algorithm should describe the problem in complete detail.
Modular: You should be able to break the problem down into smaller sections; the
algorithm should be designed so that it can be easily understood.
Precise: Algorithms are expected to be precise and correct. A given input should
produce the desired output, and the algorithm should work according to the need.
Development of Problem
A problem goes through different stages as it is developed and documented.
1. Definition and development of a model
For example, in a search algorithm on any data structure, if we search for an element and the
element is present in the middle position, that case is referred to as the average case.
Analysis Methods
There are different analysis methods. The most common are given below.
Asymptotic Analysis
Omega Notation: This notation represents the best case. It calculates only the lower
bound on the running time of the algorithm.
Theta Notation: This notation is used for analyzing the average case. It bounds the
time complexity using both the upper and lower bounds.
Types of Algorithms
Sorting Algorithms
We perform sorting operations on data to arrange it in ascending or descending order. Many
problems can be solved using sorting algorithms; bubble sort, merge sort, quick sort, selection
sort, and insertion sort are a few such sorting algorithms.
Brute Force Algorithm
A simple brute-force example is a linear search that checks every element of an array for a
target value:
class LinearSearch {
    public static void main(String[] args) {
        int[] array = {1, 5, 6, 7, 8};
        int search = 6;
        boolean flag = false;
        for (int i = 0; i < array.length; i++) {
            if (array[i] == search) {    // compare each element with the target
                flag = true;
            }
        }
        if (flag)
            System.out.println("Number found");
        else
            System.out.println("Number not found");
    }
}
Output
Number found
To solve a problem, we first need a solution. We first find a solution using the brute force
approach, without considering time and space complexity; once we have a brute force solution,
we then try to optimize it.
Recursive Algorithms
Recursive algorithms are among the most important algorithms in computer science. In a
recursive algorithm, a function calls itself to solve the problem. You do not have to worry about
all the subproblems at once; you consider one case and write the algorithm accordingly. While
writing such an algorithm, you must consider time and memory, since every recursive call
consumes space on the call stack.
Let's try to understand this algorithm by an example.
int factorial(int number)
{
    if (number <= 1) {
        return 1;        // base case: 0! = 1! = 1
    }
    else {
        return number * factorial(number - 1);   // recursive call on a smaller input
    }
}
Divide and Conquer Algorithm
This is another important technique, used to solve a wide range of problems. As the name
implies, you first divide the problem into sub-problems, solve the sub-problems, and finally
combine their solutions to get the solution to the original problem. Many problems are solved
using this technique; some applications are binary search, median finding, matrix multiplication,
merge sort, and quick sort.
Backtracking Algorithm
Backtracking is an improved version of the brute force approach. In this algorithm, we start with
a candidate move; if the move leads to a solution, we report it, but if it does not, we backtrack
and try another move. This process of undoing a move and trying an alternative is what gives the
backtracking algorithm its name. There are many applications of this algorithm, such as the
N-Queens problem, the knapsack problem, and generating binary strings, as shown below.
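To make this concrete, here is a minimal Java sketch of one of the applications listed above, generating all binary strings of a given length by backtracking; the class and method names are illustrative assumptions, not part of the original notes. At each position we place a '0', recurse, then undo the move and try '1'.
class BinaryStrings {
    // Prints every binary string of length n by trying '0' and then '1'
    // at each position, backtracking after each choice.
    static void generate(char[] current, int pos) {
        if (pos == current.length) {
            System.out.println(new String(current));   // a complete solution
            return;
        }
        current[pos] = '0';            // first move
        generate(current, pos + 1);
        current[pos] = '1';            // backtrack: undo and try the other move
        generate(current, pos + 1);
    }

    public static void main(String[] args) {
        generate(new char[3], 0);      // prints 000, 001, ..., 111
    }
}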
Dynamic Programming Algorithm
Dynamic programming is among the most efficient techniques in computer science. In this
algorithm, we store the results of subproblems already solved and reuse them later instead of
recomputing them, which yields a solution with good time and space complexity. There are two
versions of the dynamic programming approach.
Bottom-up approach: You solve the smallest subproblems first and then move upward; the
results of these subproblems help you solve the larger parts of the problem.
Top-down approach: You start from the original problem at the top and recurse into its
subproblems, reusing their stored results as they are computed.
There are many applications of this technique, such as the longest common subsequence, the
Bellman-Ford algorithm, matrix chain multiplication, subset sum, and the longest common
substring. A small sketch of both versions is given below.
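As a hedged illustration (an assumed example, not from the original notes), the Java sketch below computes Fibonacci numbers both ways: the top-down version caches the result of each recursive call, and the bottom-up version fills a table from the smallest subproblem upward.
class Fibonacci {
    static long[] memo = new long[31];

    // Top-down: recurse from the original problem, storing each result.
    static long fibTopDown(int n) {
        if (n <= 1) return n;
        if (memo[n] != 0) return memo[n];              // reuse a stored result
        memo[n] = fibTopDown(n - 1) + fibTopDown(n - 2);
        return memo[n];
    }

    // Bottom-up: solve the smallest subproblems first and build upward.
    static long fibBottomUp(int n) {
        if (n <= 1) return n;
        long[] table = new long[n + 1];
        table[1] = 1;
        for (int i = 2; i <= n; i++)
            table[i] = table[i - 1] + table[i - 2];    // reuse earlier entries
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fibTopDown(30));            // 832040
        System.out.println(fibBottomUp(30));           // 832040
    }
}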
Greedy Algorithm
In this algorithm, you choose the option that is best at the present step, without worrying about
the future or searching for another, better combination. The algorithm works in a top-down
manner and might not produce the best overall solution, because it makes choices that are locally
optimal rather than globally optimal. It relies on two properties, the greedy choice property and
optimal substructure. There are many applications of this algorithm, such as Huffman coding,
the k-centers problem, graph coloring, the fractional knapsack problem, and minimum spanning
trees.
Advantages of Algorithms
There are many advantages of writing algorithms. Some of them are discussed below.
Understanding the problem becomes easy: the steps and their sequence give a better
understanding of the problem.
The algorithm helps in determining the solution without writing the implementation.
The problem is broken down into smaller sub-problems, which helps the
programmer easily convert the solution into code.
The programmer can analyze the algorithm and find the optimal solution without
writing and running the actual program.
Disadvantages of Algorithm
Writing an algorithm is time-consuming.
Big problems are difficult to describe, and writing algorithms for such problems
is harder still.
What is the design and analysis of algorithms?
In the design and analysis of algorithms, we study algorithms and how to design and implement
them. Algorithms play an important role in solving problems: they give us a clear and concise
picture of how to tackle a problem with the best possible solution.
Evaluation Questions
Why are there types of algorithms?
There are different types of algorithms. Every algorithm differs in terms of space and time
complexity. The resources used in one algorithm may not be efficient in another algorithm.
Therefore, there is a need for different types of algorithms that can help us identify
a better solution.
Name some types of algorithms.
Some commonly used algorithms that help us in most applications are sorting algorithms, the
brute force algorithm, recursive algorithms, divide and conquer algorithms, backtracking
algorithms, greedy algorithms, and dynamic programming algorithms.
Why do we study Algorithms?
Algorithms play an important role when we define a problem with a solution. The sequence of
steps helps us to get a clear picture of the problem, which further helps us to find an optimized
and efficient solution. It is the program's blueprint, and we can analyze it before implementing it.
What do you refer to as ADA?
ADA commonly refers to the Analysis and Design of Algorithms. Learning this is an important
skill in computer science. Solving a problem without a proper algorithm causes errors, and it
takes a lot of time to implement a solution and then make changes later on. Design and analysis
help to get a clear picture of the problem.
Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to
work in constant time. Other than the input, all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units
of computation. For example, the running time of one operation may be computed as f(n) = n,
while for another operation it is computed as g(n) = n². This means the running time of the first
operation will increase linearly with n, while the running time of the second operation will
increase quadratically as n increases. Similarly, the running times of both operations will be
nearly the same if n is small.
Consequently, analysis of algorithms focuses on the computation of space and time complexity.
Here are various types of time complexities which can be analyzed for the algorithm:
Best case time complexity: The best case time complexity of an algorithm is a measure of
the minimum time that the algorithm will require for an input of size 'n.' The running time
of many algorithms varies not only for the inputs of different sizes but also for the
different inputs of the same size.
Worst case time complexity: The worst case time complexity of an algorithm is a
measure of the maximum time that the algorithm will require for an input of size 'n.'
For example, if various sorting algorithms are considered and 'n' input data items are
supplied in reverse order to a simple sorting algorithm, the algorithm will require on the
order of n² operations to perform the sort, which corresponds to the worst case time
complexity of the algorithm.
Average case time complexity: The time that the algorithm requires to execute a
typical input of size 'n' is known as the average case time complexity.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.
Ο − Big Oh Notation
Ω − Big Omega Notation
Θ − Theta Notation
o − Little Oh Notation
ω − Little Omega Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or the longest amount of time an algorithm can
possibly take to complete.
For example, for a function f(n)
Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }
Example: considering g(n) = n³.
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best case time complexity, or the least amount of time an algorithm can possibly
take to complete.
For example, for a function f(n)
Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. Some may confuse the theta notation as the average case time
complexity; while big theta notation could be almost accurately used to describe the average
case, other notations could be used as well. It is represented as follows −
θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0 }
The Little Oh and Little Omega notations also represent the best and worst case complexities but
they are not asymptotically tight in contrast to the Big Oh and Big Omega Notations. Therefore,
the most commonly used notations to represent time complexities are Big Oh and Big Omega
Notations only.
To analyze a programming code or algorithm, we must note that each instruction affects the
overall performance of the algorithm, and therefore each instruction must be analyzed separately
to analyze overall performance. However, there are some algorithm control structures that are
present in every programming code and have a specific asymptotic analysis.
1. Sequencing
2. If-then-else
3. for loop
4. While loop
1. Sequencing:
Suppose our algorithm consists of two parts, A and B. A takes time tA and B takes time tB for
computation. By the sequence rule, the total computation time is tA + tB; according to the
maximum rule, this computation time is max(tA, tB).
Example 1
Suppose tA = O(n) and tB = θ(n²).
Then the total computation time can be calculated as
Computation time = tA + tB
= max(tA, tB)
= max(O(n), θ(n²)) = θ(n²)
2. If-then-else:
The total computation time follows the condition rule for "if-then-else": if the two branches take
times tA and tB, then according to the maximum rule the computation time is max(tA, tB).
Example:
Total computation = max(tA, tB)
= max(O(n²), θ(n²)) = θ(n²)
3. For loop:
The outer loop executes N times. Every time the outer loop executes, the inner loop executes M
times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the total
complexity for the two loops is O(N × M), which becomes O(N²) when M = N, as shown below.
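A small hedged Java illustration of this rule (sample sizes assumed): the statement in the inner loop below executes exactly N * M times.
class NestedLoops {
    public static void main(String[] args) {
        int N = 4, M = 3;                     // assumed sample sizes
        int count = 0;
        for (int i = 0; i < N; i++) {         // outer loop: N iterations
            for (int j = 0; j < M; j++) {     // inner loop: M iterations per pass
                count++;                      // basic operation: runs N * M times
            }
        }
        System.out.println(count);            // prints 12, i.e., N * M
    }
}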
4. While loop:
A simple technique for analyzing a loop is to identify a function of the variables involved whose
value decreases each time around. For the loop to terminate, that value must eventually reach its
stopping condition, for example a positive integer decreasing toward zero. By keeping track of
how many times the value of the function decreases, one can obtain the number of repetitions of
the loop. The other approach for analyzing a "while" loop is to treat it as a recursive algorithm.
Bubble Sort
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing two items at a time and swapping them if
they are in the wrong order. The pass through the list is repeated until no swaps are needed,
which means the list is sorted. This is the easiest method among all sorting algorithms.
1. The bubble sort starts at the very first index and makes that element the bubble element.
It then compares the bubble element, currently our first element, with the next element.
If the bubble element is greater and the second element is smaller, the two are swapped.
After swapping, the second element becomes the bubble element. Now we compare the
second element with the third, as in the earlier step, and swap them if required. The
same process is followed until the last element.
2. We follow the same process for the remaining iterations. After each iteration, we
notice that the largest element in the unsorted part of the array has reached the last
index.
For each iteration, the bubble sort compares elements only up to the last unsorted element.
Once all the elements are sorted in ascending order, the algorithm terminates, as in the sketch below.
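The description above translates directly into code. Here is a minimal Java sketch of bubble sort (the array contents are an assumed example): the swapped flag implements the rule of repeating passes until no swaps are needed, and each pass compares only up to the last unsorted element.
class BubbleSort {
    public static void main(String[] args) {
        int[] array = {5, 1, 4, 2, 8};                 // assumed sample input
        for (int pass = 0; pass < array.length - 1; pass++) {
            boolean swapped = false;
            // After each pass the largest unsorted element reaches the end,
            // so we compare only up to the last unsorted position.
            for (int i = 0; i < array.length - 1 - pass; i++) {
                if (array[i] > array[i + 1]) {         // wrong order: swap
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    swapped = true;
                }
            }
            if (!swapped) break;                       // no swaps: list is sorted
        }
        System.out.println(java.util.Arrays.toString(array));   // [1, 2, 4, 5, 8]
    }
}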
Selection Sort
The selection sort improves on the bubble sort by making only a single swap for each pass
through the list. To do this, a selection sort searches for the biggest value as it makes a pass
and, after finishing the pass, places it in its proper location. As with a bubble sort, after the
first pass the biggest item is in the right place; after the second pass, the next biggest is in
place. This procedure continues and requires n−1 passes to sort n items, since the final item
must be in place after the (n−1)th pass.
k ← length[A]
for j ← 1 to k − 1
    smallest ← j
    for i ← j + 1 to k
        if A[i] < A[smallest]
            then smallest ← i
    exchange(A[j], A[smallest])
1. In the selection sort, first of all, we set the initial element as the minimum.
2. Now we compare the minimum with the second element. If the second element turns
out to be smaller than the minimum, we swap them, and then compare the minimum
against the third element.
3. Else, if the second element is greater than the minimum (our first element), we do
nothing, move on to the third element, and compare it with the minimum.
We repeat this process until we reach the last element.
4. After the completion of each iteration, we notice that the minimum has reached the
start of the unsorted list.
5. For each iteration, we start indexing from the first element of the unsorted list.
We repeat steps 1 to 4 until the list is sorted or all the elements are correctly
positioned.
Insertion Sort
Insertion sort is one of the simplest sorting algorithms, because it sorts a single element
at a particular instance. It is not the best sorting algorithm in terms of performance, but it is
slightly more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive
sorting technique.
Let's consider the example of cards to have a better understanding of the logic behind the
insertion sort.
Suppose we have a set of cards in our hand, such that we want to arrange these cards in
ascending order. To sort these cards, we have a number of intuitive ways.
One intuitive approach is to initially hold all of the cards in the left hand and then take cards
from it one after another, building a sorted arrangement in the right hand.
Assuming the first card to be already sorted, we select the next unsorted card. If the unsorted
card is greater than the cards already held, we place it on the right side; otherwise, on the left
side. At any stage during this whole process, the left hand holds the unsorted cards, and the
right hand holds the sorted ones.
In the same way, we will sort the rest of the unsorted cards by placing them in the correct
position. At each iteration, the insertion algorithm places an unsorted element at its right place.
for j = 2 to A.length
    key = A[j]
    // Insert A[j] into the sorted sequence A[1 .. j − 1]
    i = j − 1
    while i > 0 and A[i] > key
        A[i + 1] = A[i]
        i = i − 1
    A[i + 1] = key
1. We start by assuming that the very first element of the array is already sorted. We store
the second element in the key.
Next, we compare the first element with the key; if the key is smaller than the first element,
we interchange their positions, placing the key at the first index. After doing this, we notice
that the first two elements are sorted.
2. Now, we move on to the third element and compare it with the elements on its left. If it
is the smallest element, we place it at the first index.
Else, if it is greater than the first element and smaller than the second element, we place it
between them, after the first element. After doing this, we have our first three elements in
sorted order.
3. Similarly, we will sort the rest of the elements and place them in their correct position.
Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for an already sorted array, because here only the outer loop runs n times
and the inner loop never executes.
o Average Case Complexity: The average-case time complexity for the insertion sort
algorithm is O(n2), which is incurred when the existing elements are in jumbled order,
i.e., neither in the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is O(n²), which occurs
when we sort an array whose elements are in reverse order.
In this case, every element is compared with all the elements before it, so up to
n−1 comparisons are made for the nth element.
The insertion sort algorithm is highly recommended when only a few elements are left to sort
or when the array contains few elements.
Space Complexity
The insertion sort has a space complexity of O(1); it uses only a single extra
variable, key.
Advantages of Insertion Sort
1. It is simple to implement.
2. It is efficient on small datasets.
3. It is stable (it does not change the relative order of elements with equal keys).
4. It is in-place (it requires only a constant amount, O(1), of extra memory space).
5. It is an online algorithm: it can sort a list as it receives it.
Divide and Conquer is an algorithmic pattern. The design takes a problem on a large input,
breaks the input into smaller pieces, solves the problem on each of the small pieces, and then
merges the piecewise solutions into a global solution. This mechanism of solving the problem is
called the Divide & Conquer strategy.
Generally, we follow the divide-and-conquer approach in a three-step process: divide the
problem into subproblems, conquer the subproblems by solving them recursively, and combine
their solutions to obtain the solution to the original problem.
Two elements characterize the Divide & Conquer strategy:
1. Relational Formula
2. Stopping Condition
1. Relational Formula: This is the formula that we generate from the given technique. After
generating the formula, we apply the D&C strategy, i.e., we break the problem recursively and
solve the resulting subproblems.
2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we
need to know how long to keep applying it. The condition at which we stop the recursion steps
of D&C is called the stopping condition.
The following algorithms are based on the concept of the Divide and Conquer technique:
1. Binary Search: The binary search algorithm is a searching algorithm, also called
half-interval search or logarithmic search. It works by comparing the target value
with the middle element of a sorted array. After making the comparison, if the
values differ, the half that cannot contain the target is eliminated, and the search
continues on the other half. We again take the middle element of that half and
compare it with the target value, repeating the process until the target value is
found. If the remaining half is empty when the search ends, we conclude that the
target is not present in the array. (A sketch appears after this list.)
2. Quicksort: This is one of the most efficient sorting algorithms, also known as
partition-exchange sort. It starts by selecting a pivot value from the array and then
divides the remaining elements into two sub-arrays. The partition is made by
comparing each element with the pivot, checking whether it holds a greater or
lesser value than the pivot, and the sub-arrays are then sorted recursively.
3. Merge Sort: This is a sorting algorithm that sorts an array by making comparisons. It
starts by dividing the array into sub-arrays and then recursively sorts each of them.
After the sorting is done, it merges them back together.
4. Closest Pair of Points: This is a problem of computational geometry: given n points
in a metric space, the algorithm finds the pair of points whose distance apart is
minimal.
5. Strassen's Algorithm: This is an algorithm for matrix multiplication, named after
Volker Strassen. It has proven to be much faster than the traditional algorithm when
working on large matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and
Conquer approach and has a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: This is one of the fastest classical
multiplication algorithms, invented by Anatoly Karatsuba in 1960 and published in
1962. It multiplies two n-digit numbers by recursively reducing the problem to
multiplications of numbers with roughly half as many digits.
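As a concrete illustration of the first item in the list above, here is a hedged Java sketch of iterative binary search on a sorted array (the sample data is assumed): each comparison with the middle element eliminates the half that cannot contain the target.
class BinarySearch {
    // Returns the index of target in the sorted array, or -1 if absent.
    static int search(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {                 // search space not yet empty
            int mid = (low + high) / 2;       // middle element
            if (sorted[mid] == target) return mid;
            else if (sorted[mid] < target) low = mid + 1;    // drop left half
            else high = mid - 1;                             // drop right half
        }
        return -1;                            // search space empty: not found
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 16, 23};   // assumed sorted sample
        System.out.println(search(data, 12)); // prints 3
        System.out.println(search(data, 7));  // prints -1
    }
}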
o Divide and Conquer successfully solves some famously hard problems, such as the
Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems
for which you have no basic idea, but the divide and conquer approach lessens the
effort: it divides the main problem into two halves and then solves them recursively.
This approach is often much faster than the alternatives.
o It efficiently uses cache memory without occupying much space because it solves simple
subproblems within the cache memory instead of accessing the slower main memory.
o It is more proficient than its counterpart, the brute force technique.
o Since these algorithms exhibit parallelism, they can be handled, without
modification, by systems incorporating parallel processing.
Greedy Algorithm
The greedy method is one of the strategies, like divide and conquer, used to solve problems.
This method is used for solving optimization problems. An optimization problem is a problem
that demands either a maximum or a minimum result. Let's understand it through some terms.
The greedy method is the simplest and most straightforward approach. It is not an algorithm but
a technique. The main idea of this approach is that decisions are taken on the basis of the
currently available information: whatever the current information is, the decision is made
without worrying about the effect of the current decision in the future.
This technique is basically used to determine a feasible solution that may or may not be
optimal. A feasible solution is a subset that satisfies the given criteria; the optimal solution is
the best and most favorable solution in the subset. If more than one solution satisfies the given
criteria, all of those solutions are considered feasible, whereas the optimal solution is the single
best solution among them.
o To construct the solution in an optimal way, this algorithm creates two sets where one set
contains all the chosen items, and another set contains the rejected items.
o A Greedy algorithm makes good local choices in the hope that the solution should be
either feasible or optimal.
o Candidate set: A solution that is created from the set is known as a candidate set.
o Selection function: This function is used to choose the candidate or subset that can be
added to the solution.
o Feasibility function: A function that is used to determine whether the candidate or
subset can be used to contribute to the solution or not.
o Objective function: A function is used to assign the value to the solution or the partial
solution.
o Solution function: This function is used to indicate whether a complete solution has
been reached.
o It is used in job sequencing with deadlines.
o This algorithm is also used to solve the fractional knapsack problem.
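The paragraph below walks through a generic greedy skeleton. Since the pseudocode itself does not appear in the text, here is a hedged Java sketch of what the paragraph describes; select, isFeasible, and union are assumed stand-in helpers for illustration, not a real API.
import java.util.ArrayList;
import java.util.List;

class GreedySkeleton {
    // Generic greedy loop: start from an empty solution, repeatedly pick the
    // next candidate, and keep it only if the result stays feasible.
    static List<Integer> greedy(int[] a, int n) {
        List<Integer> solution = new ArrayList<>();    // initially empty (zero)
        for (int i = 0; i < n; i++) {
            int x = select(a, i);                      // choose next candidate
            if (isFeasible(solution, x)) {
                union(solution, x);                    // add it to the solution
            }
        }
        return solution;
    }

    // Stand-in helpers, assumed for illustration only:
    static int select(int[] a, int i) { return a[i]; }
    static boolean isFeasible(List<Integer> s, int x) { return x >= 0; }
    static void union(List<Integer> s, int x) { s.add(x); }

    public static void main(String[] args) {
        int[] a = {3, -1, 4, 1};
        System.out.println(greedy(a, a.length));       // prints [3, 4, 1]
    }
}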
This is the general structure of a greedy algorithm. Initially, the solution is assigned the value
zero. We pass the array and the number of elements to the greedy algorithm. Inside the for loop,
we select the elements one by one and check whether the solution is feasible or not. If the
solution is feasible, we perform the union.
P:A→B
The problem is that we have to make a journey from A to B. There are various ways to go from
A to B: on foot, or by car, bike, train, aeroplane, etc. There is a constraint on the journey: we
have to complete it within 12 hours. Only if I go by train or aeroplane can I cover this distance
within 12 hours. There are many solutions to this problem, but only two solutions satisfy the
constraint.
Suppose we also have to make the journey at the minimum cost. This means that we have to
travel this distance as cheaply as possible, so this problem is known as a minimization problem.
So far, we have two feasible solutions: one by train and one by air. Since travelling by train has
the minimum cost, it is the optimal solution. An optimal solution is also a feasible solution, but
it is the one providing the best result, here the minimum cost. There is only one optimal
solution.
A problem that requires either a minimum or a maximum result is known as an optimization
problem. The greedy method is one of the strategies used for solving optimization problems.
A greedy algorithm makes decisions based on the information available at each phase without
considering the broader problem, so there is a possibility that the greedy solution does not give
the best solution for every problem.
It follows the locally optimal choice at each stage with the intent of finding the global optimum.
Let's understand this through an example.
Let's understand through an example.
We have to travel from the source to the destination at the minimum cost, and we have three
feasible paths with costs 10, 20, and 5. Since 5 is the minimum-cost path, it is the optimal
choice. This is the local optimum; in this way, we find the local optimum at each stage in order
to construct the global solution. A worked sketch of a classic greedy application follows below.
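As a worked illustration of one greedy application named earlier, here is a hedged Java sketch of the fractional knapsack problem (values, weights, and capacity are assumed sample data): items are taken in decreasing order of value-per-weight ratio, and a fraction of the last item is taken if it does not fit whole.
import java.util.Arrays;

class FractionalKnapsack {
    public static void main(String[] args) {
        double[] values  = {60, 100, 120};    // assumed sample data
        double[] weights = {10, 20, 30};
        double capacity  = 50;

        // Greedy choice: consider items by value/weight ratio, highest first.
        Integer[] order = {0, 1, 2};
        Arrays.sort(order, (a, b) ->
            Double.compare(values[b] / weights[b], values[a] / weights[a]));

        double total = 0;
        for (int i : order) {
            if (capacity >= weights[i]) {     // the whole item fits
                total += values[i];
                capacity -= weights[i];
            } else {                          // take the fitting fraction, stop
                total += values[i] * (capacity / weights[i]);
                break;
            }
        }
        System.out.println(total);            // prints 240.0
    }
}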
What is time complexity?
Time complexity measures the amount of time an algorithm takes to run as a function of the
input size. It is determined by analyzing the number of basic operations performed by the
algorithm, such as comparisons, assignments, and arithmetic operations.
Explain the concept of "Big O" notation and its role in algorithm analysis.
Big O notation is used to express the upper bound of an algorithm's time complexity in the
worst-case scenario. It provides a way to compare and classify algorithms based on their growth
rates, allowing us to understand the scalability and efficiency of different algorithms.
Explain the concept of algorithmic efficiency and its relationship with algorithm
design.
Algorithmic efficiency refers to the ability of an algorithm to solve a problem in the most time-
and space-efficient manner. Efficient algorithm design aims to minimize resource usage and
maximize performance, resulting in faster and more scalable solutions.
Explain the concept of divide and conquer and provide an example algorithm
that uses this technique.
Divide and conquer is a problem-solving technique that involves breaking a problem into
smaller, independent subproblems, solving them recursively, and combining the solutions to
obtain the final result. An example algorithm that uses divide and conquer is the merge sort
algorithm used for sorting arrays.
How does the concept of space complexity differ from time complexity?
Time complexity measures the amount of time an algorithm takes to run, while space
complexity measures the amount of memory or space required by an algorithm to solve a
problem. Both are important considerations in algorithm analysis and design.
What role does problem size play in algorithm analysis and performance
evaluation?
The problem size is the input size or the size of the data set given to an algorithm. It affects the
time and space complexity of an algorithm and influences its performance. By analyzing how an
algorithm's runtime or memory usage varies with problem size, we can evaluate its scalability.
How does the choice of programming language impact algorithm design and
execution?
The choice of programming language can affect algorithm design and execution due to
differences in performance, available libraries, language-specific constructs, and memory
management techniques. Certain algorithms may be more naturally expressed and efficient in
specific programming languages.
What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time. An algorithm is a
step-by-step procedure to solve a problem.
What is worst-case efficiency?
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n,
which is an input or inputs of size n for which the algorithm runs the longest among all possible
inputs of that size.
What is best-case efficiency?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which
is an input or inputs for which the algorithm runs the fastest among all possible inputs of that size.
Define Ω-notation?
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by
some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some non-negative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
What is average case efficiency?
The average case efficiency of an algorithm is its efficiency for an average case input of size n. It
provides information about an algorithm's behavior on a "typical" or "random" input.
Define O-notation?
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by
some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some non-negative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
Define θ-notation?
A function t(n) is said to be in θ(g(n)), denoted t(n) ∈ θ(g(n)), if t(n) is bounded
both above and below by some constant multiple of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some non-negative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n)
for all n ≥ n0.
Explain the various Asymptotic Notations used in algorithm design? Or Discuss the
properties of asymptotic notations. Or Explain the basic efficiency classes with notations.
Asymptotic notation is notation used to make meaningful statements about the efficiency of
a program. The efficiency analysis framework concentrates on the order of growth of an
algorithm's basic operation count as the principal indicator of the algorithm's efficiency.
To compare and rank such orders of growth, computer scientists use three main notations:
O (big oh), Ω (big omega), and Θ (big theta).
Let t(n) and g(n) be any nonnegative functions defined on the set of natural numbers. The
algorithm's running time t(n) is usually indicated by its basic operation count C(n), and g(n) is
some simple function to compare with the count.
There are five basic asymptotic notations used in algorithm design.
• Big Oh: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded
above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant
c and some non-negative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.
• Big Omega: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is
bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some non-negative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
• Big Theta: A function t(n) is said to be in θ(g(n)), denoted t(n) ∈ θ(g(n)), if t(n) is bounded
both above and below by some constant multiple of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some non-negative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n)
for all n ≥ n0.
• Little oh: A function f(n) is said to be in o(g(n)) iff lim(n→∞) f(n)/g(n) = 0.
• Little omega: A function f(n) is said to be in ω(g(n)) iff lim(n→∞) g(n)/f(n) = 0.
Useful properties of these notations:
• t(n) ∈ O(g(n)) iff t(n) ≤ c·g(n) for all n > n0
• t(n) ∈ Θ(g(n)) iff t(n) ∈ O(g(n)) and t(n) ∈ Ω(g(n))
• f(n) ∈ O(f(n))
• f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))
• If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)); note the similarity with a ≤ b
• If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
Class      Name                          Comments
1          constant                      best case
log n      logarithmic                   divide, ignore part
n          linear                        examine each
n log n    n-log-n or linear-logarithmic divide, use all parts
n²         quadratic                     nested loops
n³         cubic                         nested loops
EXAMPLE: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n
ALGORITHM TOH(n, A, C, B)
//Moves n disks from source peg A to destination peg C using auxiliary peg B
//Input: n disks and 3 pegs A, B, and C
//Output: Disks moved to the destination in the source order
if n = 1
    Move disk from A to C
else
    TOH(n − 1, A, B, C)   //move top n − 1 disks from A to B using C
    Move disk from A to C
    TOH(n − 1, B, C, A)   //move n − 1 disks from B to C using A
EXAMPLE 2: Given two n×n matrices A and B, find the time efficiency of the definition-based
algorithm for computing their product C = AB. By definition, C is an n×n matrix whose elements
are computed as the scalar (dot) products of the rows of matrix A and the columns of matrix B:
C[i, j] = A[i, 0]·B[0, j] + ... + A[i, k]·B[k, j] + ... + A[i, n−1]·B[n−1, j]
for every pair of indices 0 ≤ i, j ≤ n−1.
We measure the input's size by the matrix order n. There are two arithmetical operations in the
innermost loop, multiplication and addition, that, in principle, can compete for designation as
the algorithm's basic operation, as the sketch below shows.
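Here is a hedged Java sketch of the definition-based algorithm (sample matrices assumed): the innermost statement performs one multiplication and one addition and executes n * n * n times, so the algorithm makes Θ(n³) basic operations.
class MatrixMultiply {
    public static void main(String[] args) {
        int n = 2;
        double[][] A = {{1, 2}, {3, 4}};      // assumed sample matrices
        double[][] B = {{5, 6}, {7, 8}};
        double[][] C = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    // Basic operation: one multiplication (plus one addition),
                    // executed n * n * n times in total.
                    C[i][j] += A[i][k] * B[k][j];
        System.out.println(C[0][0] + " " + C[0][1]);   // 19.0 22.0
        System.out.println(C[1][0] + " " + C[1][1]);   // 43.0 50.0
    }
}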
What are the fundamental steps to solve an algorithm? Explain. Or Describe in detail
the steps in analyzing and coding an algorithm.
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time.
Algorithmic steps are
• Understand the problem
• Decision making
• Design an algorithm
• Proving correctness of an algorithm
• Analyze the algorithm
• Coding and implementation of an algorithm
Figure : Algorithm design and analysis process
b. Decision making
i. An algorithm that solves the problem exactly and produces a correct result is called an exact
algorithm.
ii. If the problem is so complex that an exact solution cannot be obtained, an approximation
algorithm is used.
d. Proving the correctness of an algorithm
i. Once an algorithm has been specified, its correctness must be proved.
ii. An algorithm must yield the required result for every legitimate input in a finite amount of time.
iii. For example, the correctness of Euclid's algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n).
iv. A common technique for proving correctness is mathematical induction, because an
algorithm's iterations provide a natural sequence of steps needed for such proofs.
v. The notion of correctness for approximation algorithms is less straightforward than it is for exact
algorithms. The error produced by the algorithm should not exceed a predefined limit.
e. Analyzing an algorithm
For an algorithm, the most important property is efficiency. There are two types of algorithm
efficiency:
• Time efficiency: indicates how fast the algorithm runs
• Space efficiency: indicates how much extra memory the algorithm needs
So the efficiency of an algorithm is analyzed in terms of both time and space.
f. Coding an algorithm
i. The transition from an algorithm to a program can be done either incorrectly or very
inefficiently. Implementing an algorithm correctly is necessary, and the algorithm's power
should not be reduced by an inefficient implementation.
ii. Standard tricks, such as computing a loop's invariant outside the loop, collecting common
subexpressions, replacing expensive operations with cheap ones, and choosing the programming
language well, should be known to the programmer.
What are the fundamental steps to solve an algorithm? Or What are the steps for solving an
efficient algorithm?
Analysis of an algorithm is the process of investigating the algorithm's efficiency with respect to
two resources: running time and memory space.
The reasons for selecting these two criteria are:
• The simplicity and generality of these measures make it easy to estimate an algorithm's
efficiency.
• Speed and memory are the efficiency considerations of modern computers, so there are
two kinds of efficiency: time efficiency and space efficiency.
• Time efficiency, also called time complexity, indicates how fast the algorithm in question runs.
• Space efficiency, also called space complexity, refers to the amount of memory units required by
the algorithm in addition to the space needed for its input and output.
The steps for an efficient algorithm
1. Measuring an input's size
a. The efficiency of an algorithm is measured as a function of its input size or range.
b. The input given may be, for example, a square or a non-square matrix.
c. Some algorithms require more than one parameter to indicate the size of their inputs.
2. Units for measuring time
a. We can simply use some standard unit of time measurement (a second, a
millisecond, and so on) to measure the running time of a program implementing the algorithm.
b. There are obvious drawbacks to such an approach. They are
• Dependence on the speed of a particular computer
• Dependence on the quality of a program implementing the algorithm
• The compiler used in generating the machine code
• The difficulty of clocking the actual running time of the program.
c. Since we need to measure algorithm efficiency, we should have a
metric that does not depend on these extraneous factors.
d. One possible approach is to count the number of times each of the
algorithm's operations is executed. This approach is both difficult and usually unnecessary.
e. The main objective is to identify the most important operation of the
algorithm, called the basic operation (the operation contributing the most to the total running
time), and compute the number of times the basic operation is executed.
3. Efficiency classes
It is reasonable to measure an algorithm's efficiency as a function of a parameter indicating the
size of the algorithm's input.
a. But there are many algorithms for which the running time depends not only
on the input size but also on the specifics of a particular input.
4. Example: sequential search. This is a straightforward algorithm that searches for a given item
(some search key K) in a list of n elements by checking successive elements of the list until
either a match with the search key is found or the list is exhausted.
ALGORITHM SequentialSearch(A[0..n−1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n−1] and a search key K
//Output: Returns the index of the first element of A that matches K,
// or −1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
Worst case efficiency
• The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size
n, which is an input (or inputs) of size n for which the algorithm runs the longest among all
possible inputs of that size.
• In the worst case, when there are no matching elements or the first matching element
happens to be the last one on the list, the algorithm makes the largest number of key comparisons
among all possible inputs of size n: Cworst (n) = n.
Best case Efficiency
• The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible
inputs of that size.
• First, determine the kind of inputs for which the count C (n) will be the smallest among
all possible inputs of size n. (Note that the best case does not mean the smallest input; it means
the input of size n for which the algorithm runs the fastest.)
• Then ascertain the value of C(n) on these most convenient inputs. Example: for
sequential search, best-case inputs are lists of size n with their first element equal to the search
key; accordingly, Cbest(n) = 1.
Average case efficiency
• The average number of key comparisons Cavg(n) can be computed as follows. Consider
again sequential search, with the standard assumptions that the probability of a successful search
is p (0 ≤ p ≤ 1) and that, in the case of a successful search, the probability of the first match
occurring in the ith position of the list is p/n for every i; the number of comparisons made by the
algorithm in such a situation is obviously i. This gives Cavg(n) = p(n + 1)/2 + n(1 − p); for a
successful search (p = 1), Cavg(n) = (n + 1)/2.
• If necessary, the solutions obtained are combined to get the solution of the original problem.
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting the
inputs into 'k' distinct subsets, 1 < k ≤ n, yielding 'k' subproblems. The subproblems must be
solved, and then a method must be found to combine the sub-solutions into a solution of the
whole. If the subproblems are still relatively large, the divide-and-conquer strategy can be
reapplied.
4. Define - Feasibility
A feasible set (of candidates) is promising if it can be extended to produce not merely a solution,
but an optimal solution to the problem.
Analysis: O(n log n) in the average and best cases, and O(n²) in the worst case.
9. Define - Dijkstra's Algorithm
Dijkstra's algorithm solves the single-source shortest path problem: finding the shortest paths
from a given vertex (the source) to all the other vertices of a weighted graph or digraph.
Dijkstra's algorithm provides a correct solution for a graph with non-negative weights.
b. If the current node has no unvisited neighbors, backtrack to its parent and make that parent
the new current node.
c. Repeat steps 3 and 4 until no more nodes can be visited.
d. If there are still unvisited nodes, repeat from step 1.
BFS follows this procedure:
e. Select an unvisited node x, visit it, and make it the root of a BFS tree being formed. Its level
is called the current level.
f. From each node z in the current level, in the order in which the level's nodes were visited,
visit all the unvisited neighbors of z. The newly visited nodes from this level form a new level,
which becomes the next current level.
g. Repeat step 2 until no more nodes can be visited.
h. If there are still unvisited nodes, repeat from step 1. (A minimal BFS sketch follows below.)
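To make the BFS procedure concrete, here is a hedged Java sketch (the four-node sample graph is assumed): a queue holds the current level, and the unvisited neighbors of each dequeued node form the next level.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class BreadthFirstSearch {
    public static void main(String[] args) {
        // Assumed sample undirected graph: edges 0-1, 0-2, 1-3, 2-3.
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(1); adj.get(1).add(0);
        adj.get(0).add(2); adj.get(2).add(0);
        adj.get(1).add(3); adj.get(3).add(1);
        adj.get(2).add(3); adj.get(3).add(2);

        boolean[] visited = new boolean[4];
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(0);                          // select a root and visit it
        visited[0] = true;
        while (!queue.isEmpty()) {
            int z = queue.remove();            // next node of the current level
            System.out.println("visited " + z);
            for (int w : adj.get(z)) {         // visit z's unvisited neighbors
                if (!visited[w]) {
                    visited[w] = true;
                    queue.add(w);              // they form the next level
                }
            }
        }
    }
}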
42
14. What are applications or examples of brute force techniques?
• Exhaustive searching techniques (TSP, knapsack, assignment problems)
• Finding the closest pair and convex hull problems
• Sorting: selection sort and bubble sort
• Searching: brute force string matching and sequential search
• Computing n!