CSC 401 Lesson One
Basic algorithmic analysis: asymptotic analysis of upper and average complexity bounds; standard complexity classes; time and space trade-offs in algorithm analysis; recursive algorithms. Algorithmic strategies: fundamental computing algorithms: numerical algorithms, sequential and binary search algorithms, sorting algorithms, binary search trees, hash tables, graphs and their representations.
Overview
An algorithm, as you were taught earlier, is a step-by-step procedure for solving a problem. In this course we are going to study basic algorithm analysis and the different algorithmic strategies. The aim of this study is to equip you with the knowledge needed to develop good algorithms that give the best solutions to problems. Algorithm analysis is at the heart of computer science, serving as a toolset that allows you to evaluate and compare the performance of different algorithms on specific tasks. It can be defined as the study of the computational complexity of algorithms, which helps in minimizing the resources required by a program and thereby improves overall program efficiency. Algorithm analysis is designed to compare two algorithms at the level of ideas, ignoring low-level details such as the implementation programming language, the hardware the algorithm runs on, or the instruction set of the given CPU. The algorithms are compared in terms of just what they are: ideas of how something is computed. If our algorithm takes 1 second to run for an input of size 1000, how will it behave if the input size is doubled? Will it run just as fast, half as fast, or four times slower? In practical programming this is important, as it allows us to predict how our algorithm will behave when the input data becomes larger. For example, if we have made an algorithm for a web application that works well with 1000 users and we measure its running time, complexity analysis gives us a pretty good idea of what will happen once we have 2000 users instead.
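As a rough illustration of this kind of prediction, here is a minimal sketch (the helper name predict_runtime and the growth labels are ours, purely for illustration): it scales a running time measured for an input of size 1000 to an input of size 2000 under a few common growth rates.
import math

def predict_runtime(measured_seconds, old_n, new_n, growth):
    # Scale a measured running time from old_n to new_n under an assumed growth rate.
    if growth == "linear":            # O(n): doubling n roughly doubles the time
        factor = new_n / old_n
    elif growth == "linearithmic":    # O(n log n): slightly more than double
        factor = (new_n * math.log2(new_n)) / (old_n * math.log2(old_n))
    elif growth == "quadratic":       # O(n^2): doubling n roughly quadruples the time
        factor = (new_n / old_n) ** 2
    else:                             # O(1): the time does not depend on n
        factor = 1
    return measured_seconds * factor

# 1 second measured for 1000 users; what should we expect for 2000 users?
for growth in ("constant", "linear", "linearithmic", "quadratic"):
    print(growth, round(predict_runtime(1.0, 1000, 2000, growth), 2))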
Analysis of Algorithms
Analysis is the process of estimating the efficiency of an algorithm, that is, of finding out how good or bad an algorithm could be.
There are two main parameters based on which an algorithm can be analyzed:
• Space complexity: the space complexity is concerned with the amount of memory required by an algorithm to run to completion (a small sketch follows this list).
• Time complexity: the time complexity is a function of the input size n that describes how the execution time of an algorithm grows as n increases.
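Before turning to running time, here is a minimal sketch of the time and space trade-off (the function names are illustrative, not from the course text): both routines reverse a list, but the first needs O(n) extra space while the second works in place with O(1) extra space.
def reverse_copy(items):
    # Builds a brand new list: O(n) time and O(n) extra space.
    result = []
    for x in reversed(items):
        result.append(x)
    return result

def reverse_in_place(items):
    # Swaps elements pairwise: O(n) time but only O(1) extra space.
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

print(reverse_copy([1, 2, 3, 4]))      # [4, 3, 2, 1]
print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]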
The running time can be estimated as:
Running time ≈ (execution time of the basic operation) × (number of times the basic operation is executed)
Examples of basic operations and input size measures for common problems:
• Searching for a key in a list of n items: the input size is the number of list items (n); the basic operation is a key comparison.
• Checking the primality of a given integer n: the input size is the number of digits in n's binary representation; the basic operation is division.
• Matrix multiplication: the input size is the matrix dimensions or the total number of elements; the basic operation is the multiplication of two numbers.
• Graph problems: the input size is the number of vertices and/or edges; the basic operation is visiting a vertex or traversing an edge.
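To make "basic operation" concrete, the sketch below (the names are ours, purely illustrative) multiplies two small matrices while counting the basic operation listed for matrix multiplication above, the multiplication of two numbers.
def multiply_matrices(a, b):
    # Multiplies two n x n matrices while counting number-by-number multiplications.
    n = len(a)
    result = [[0] * n for _ in range(n)]
    multiplications = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
                multiplications += 1
    return result, multiplications

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
product, count = multiply_matrices(a, b)
print(product)   # [[19, 22], [43, 50]]
print(count)     # n^3 = 8 multiplications for n = 2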
Worst-case time complexity: for a given input size n, the worst-case time complexity is the maximum amount of time the algorithm needs to complete its execution. It is simply a function defined by the maximum number of steps performed on any instance of input size n. This is the case we are most interested in when analysing algorithms.
Average-case time complexity: for an input size n, the average-case time complexity is the average amount of time the algorithm needs to complete its execution. It is simply a function defined by the average number of steps performed over instances of input size n.
Best-case time complexity: for an input size n, the best-case time complexity is the minimum amount of time the algorithm needs to complete its execution. It is simply a function defined by the minimum number of steps performed on an instance of input size n. The best case can be used to rule out an algorithm: if even its best-case time is unacceptably slow, the algorithm is inefficient.
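These three cases are easy to see with a sequential search. The sketch below is a simple assumption-level illustration (assuming, for the average case, a successful search with the key equally likely at every position); it counts key comparisons for each case.
def count_comparisons(items, key):
    # Returns the number of key comparisons a sequential search performs.
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == key:
            break
    return comparisons

data = list(range(1, 11))             # n = 10 items

print(count_comparisons(data, 1))     # best case: the key is first, 1 comparison
print(count_comparisons(data, 999))   # worst case: the key is absent, n = 10 comparisons
# Average case: with the key equally likely at any position, about (n + 1) / 2 comparisons.
average = sum(count_comparisons(data, k) for k in data) / len(data)
print(average)                        # 5.5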
Algorithm Complexity
Algorithm complexity measures the number of steps required by the algorithm to solve a given problem. It evaluates the order of the count of basic operations executed by the algorithm as a function of the input data size.
To assess the complexity, the order (an approximation) of the operation count is always considered rather than the exact number of steps.
The complexity of an algorithm is expressed using asymptotic notation, commonly called "Big O" notation and written O(f(n)). Here f(n) is a function of the input data size n. The asymptotic complexity O(f(n)) determines the order in which resources such as CPU time and memory are consumed by the algorithm, articulated as a function of the input data size.
The complexity can take any form, such as constant, logarithmic, linear, n*log(n), quadratic, cubic or exponential. It is nothing but the order (constant, logarithmic, linear and so on) of the number of steps needed to complete a particular algorithm. The complexity of an algorithm is also loosely referred to as its "running time".
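For instance, an algorithm that performs T(n) = 3n² + 5n + 2 steps is reported simply as O(n²): for large n the constant factor and the lower-order terms contribute a vanishing share of the total. The step function in the sketch below is arbitrary and only meant to illustrate this.
def steps(n):
    # A made-up step count: 3n^2 + 5n + 2.
    return 3 * n * n + 5 * n + 2

for n in (10, 100, 1000, 10000):
    print(n, steps(n) / (n * n))   # the ratio settles toward the constant 3 as n grows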
Constant Complexity
In this case, the execution time of the basic operation does not depend on the size of the input, so the function has constant time complexity, written O(1) in Big O notation.
Such an algorithm executes a constant number of steps, for example 1, 3 or 10, regardless of the size of the problem.
For example,
// Pseudocode for checking if a number is even or odd
function isEvenOrOdd(number) {
return (number % 2 == 0) ? 'Even' : 'Odd'
}
Logarithmic Complexity
Logarithmic complexity is written O(log(N)). An algorithm has logarithmic complexity if its running time is proportional to the logarithm of the input size. The logarithm is usually taken in base 2: for N = 1,000,000, an algorithm with complexity O(log(N)) performs about 20 steps (to within a constant factor). Because changing the base of the logarithm only changes the count by a constant factor, the base does not affect the order of growth and is usually omitted.
Example algorithm
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

arr = [2, 5, 8, 12, 16, 23, 38]
target = 16
index = binary_search(arr, target)
if index != -1:
    print("Element", target, "found at index", index)
else:
    print("Element", target, "not found in the list.")
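As a quick sanity check on the "about 20 steps" figure (a rough count, not an exact operation tally), the loop below halves N = 1,000,000 the same way binary search halves its search range:
import math

n = 1_000_000
steps = 0
while n > 1:
    n //= 2            # each iteration halves the remaining range
    steps += 1
print(steps)                             # 19 halvings to reach 1
print(math.ceil(math.log2(1_000_000)))   # log2(1,000,000) rounds up to 20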
Linear Complexity
Linear complexity is written O(N). An algorithm whose execution time grows in direct proportion to the input size (n) is said to have linear complexity.
For example, if there are 1000 elements, it will take about 1000 steps. In linear complexity the number of steps depends linearly on the number of elements; for N elements the step count might be N/2 or 3*N, for instance. If each statement inside a loop takes 1 unit of time and the loop runs n times, the complexity is 1*O(n) = O(n).
Let's take a practical example:
function sum(n) {
    let total = 0;                    // 1 unit of time, i.e. O(1)
    for (let i = 1; i <= n; i++) {    // the loop repeats n times
        total += i;                   // 1 unit of time, repeated n times
    }
    return total;                     // 1 unit of time
}
Linearithmic Complexity
Linearithmic complexity imposes a running time of O(n*log(n)). It is similar to logarithmic complexity but carries an extra linear dependency on the input size. Such algorithms typically follow the divide-and-conquer principle and execute on the order of n*log(n) operations on n elements to solve the given problem. For 1000 elements, a linearithmic algorithm executes roughly 10,000 steps. A good example is merge sort, shown below:
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort the left half
    right = merge_sort(arr[mid:])   # sort the right half
    return merge(left, right)

def merge(left, right):
    merged = []
    i = 0
    j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

arr = [8, 3, 1, 7, 4, 6, 2, 5]
sorted_arr = merge_sort(arr)
print(sorted_arr)
The algorithm takes an array of 8 elements and splits it in half repeatedly until it cannot be split any further. The sub-arrays are then merged back together to give a sorted array.
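To connect merge sort back to the step estimate above (a rough calculation rather than an exact operation count), n * log2(n) for n = 1000 comes out at roughly 10,000:
import math

n = 1000
print(n * math.log2(n))   # about 9966, i.e. roughly 10,000 steps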
Quadratic Complexity
It imposes a complexity of O(n²). If an algorithm's running time is directly proportional to the square of the input size (n), it is said to have quadratic complexity. If N = 100, it will take about 10,000 steps. In other words, whenever the count of operations has a quadratic relation with the input data size, the result is quadratic complexity.
For example, for N elements the number of steps may be on the order of 3*N²/2.
Practical example: an algorithm that checks for duplicates in an array.
function checkForDuplicate(arr) {
    for (let i = 0; i < arr.length; i++) {            // the loop repeats n times
        const tempArray = arr.slice(i + 1);            // runs once per iteration, copying the remaining elements
        if (tempArray.indexOf(arr[i]) !== -1) {        // indexOf scans up to n elements on each of the n iterations: O(n*n)
            return true;                               // O(1), but it sits inside the loop
        }
    }
    return false;                                      // O(1); executed at most once, outside the loop
}
Calculating the time complexity of the above code, we have:
C(n) = O(1*n) + O(n*n) + O(1*n) + O(1)
     = O(n*n)
     = O(n²), ignoring all the constants and lower-order terms.
Cubic Complexity
It imposes a complexity of O(n³). This complexity grows even faster than quadratic complexity: for an input of size N, it executes on the order of N³ steps on N elements to solve a given problem. It is similar to quadratic complexity, but instead of two nested loops it has three. If there are 100 elements, it will execute about 1,000,000 steps. Higher polynomial complexities should be avoided where possible.
The Floyd-Warshall all-pairs shortest path algorithm is an example of cubic complexity:
def floyd_warshall(graph):
    n = len(graph)
    # copy each row so the input matrix is left unchanged
    distances = [row[:] for row in graph]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                distances[i][j] = min(distances[i][j], distances[i][k] + distances[k][j])
    return distances

inf = float('inf')
graph = [[0, inf, -2, inf],
         [4, 0, 3, inf],
         [inf, inf, 0, 2],
         [inf, -1, inf, 0]]
print(floyd_warshall(graph))
Exponential Complexity
It imposes a complexity such as O(2ⁿ), O(kⁿ) or even O(n!). For O(2ⁿ) this simply means the number of operations roughly doubles each time the input size grows by one. It can be illustrated with the recursive Fibonacci computation:
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

result = fibonacci(5)
print(result)
Another example is generating every permutation of a list, whose cost grows factorially, O(n!):
def generate_permutations(elements):
    if len(elements) == 1:
        return [elements]
    permutations = []
    for i in range(len(elements)):
        remaining = elements[:i] + elements[i + 1:]
        sub_permutations = generate_permutations(remaining)
        for perm in sub_permutations:
            permutations.append([elements[i]] + perm)
    return permutations

elements = [1, 2, 3, 4]
permutations = generate_permutations(elements)
for perm in permutations:
    print(perm)
Here we find every possible permutation of the elements [1, 2, 3, 4]. Recursively, each element is picked as the starting element and the permutations of the remaining elements are calculated. We receive 24 permutations because there are four elements (4! = 24).
Key points in Algorithm complexity
• Time complexity measures algorithm performance as a function of input size.
• Factors affecting time complexity include the input size, the basic operations performed, nested loops or recursion, and the hardware.
• Common complexity classes include O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ) and O(n!).
• Lower time complexity means a shorter running time and better scalability on larger datasets.
• Understanding time complexity is important for efficient algorithm design and optimization in many fields.
Factors that affect time complexity are:
• Input size
• Number of basic operations performed
• Presence of nested loops or recursion, and even the hardware we are using.