Binary Search Algorithm
❏ Example: search for the target 16 in a sorted array.
Index:  0  1  2   3   4   5   6   7   8   9
arr[] = 5  8  10  16  23  25  56  72  97  100
Output: 3 (the index at which 16 is found)
After k halvings, the search space shrinks to N / 2^k elements. The search ends when N / 2^k = 1, which gives k = log2N.
Thus, the maximum number of steps (or iterations) required is proportional to log2N.
Time Complexity of Binary Search Algorithm
❏ Each step involves a constant amount of work (calculating the middle index and performing a comparison), and at most log2N steps are needed, so the overall complexity is O(log2N).
Best-Case Time Complexity:
The best case occurs when the target is found at the middle index on the first iteration: O(1).
Worst-Case Time Complexity:
The worst case occurs when the target is not present, or it is located at the farthest possible
position (leftmost or rightmost in the last remaining subarray).
O(log2N)
Average-Case Time Complexity:
The average number of iterations is also logarithmic, so the average-case time complexity is:
O(log2N)
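As a sketch, binary search can be implemented iteratively in C++ like this (function and variable names are illustrative, the array is the example above):

#include <iostream>

// Returns the index of target in the sorted array arr[0..n-1], or -1.
int binarySearch(int arr[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // middle index, overflow-safe
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) low = mid + 1;  // discard the left half
        else high = mid - 1;                   // discard the right half
    }
    return -1;  // target not present
}

int main() {
    int arr[] = {5, 8, 10, 16, 23, 25, 56, 72, 97, 100};
    std::cout << binarySearch(arr, 10, 16);  // prints 3
    return 0;
}

Each loop iteration halves the remaining range, which is exactly the O(log2N) bound derived above.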
Sorting Algorithms
Introduction
❏ A sorting algorithm is used to arrange elements of an array/list in a specific
order. For example:
An unsorted array:
200 111 35 77 6 30 55 29
Sorting algorithm
After sorting:
6 29 30 35 55 77 111 200
Different Sorting Algorithms
❏ There are several sorting algorithms, some of them are given below:
● Bubble Sort
● Selection Sort
● Insertion Sort
● Merge Sort
● Quick Sort
● Heap Sort
Bubble Sort Algorithm
❏ Bubble Sort is the simplest sorting algorithm that works by repeatedly
swapping the adjacent elements if they are in the wrong order.
This algorithm is not suitable for large data sets, as its average and worst-case time complexities are quadratic: O(n^2).
Input:
33 11 24 76 66 55 50
Output:
11 24 33 50 55 66 76
Outer Loop: Executes n−1 passes in the worst case, where n is the number of
elements in the array.
Inner Loop: For each pass i, the inner loop executes n−i−1 comparisons
(fewer comparisons as the largest elements "bubble up" to their correct
positions).
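A sketch of Bubble Sort in C++ matching this loop structure (the early-exit flag gives the best case discussed next):

#include <iostream>
#include <utility>

// Pass i performs n-i-1 adjacent comparisons; largest elements bubble up.
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {         // outer loop: n-1 passes
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) { // inner loop
            if (arr[j] > arr[j + 1]) {        // adjacent pair out of order
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break;  // no swaps: array already sorted
    }
}

int main() {
    int arr[] = {33, 11, 24, 76, 66, 55, 50};
    bubbleSort(arr, 7);
    for (int x : arr) std::cout << x << " ";  // 11 24 33 50 55 66 76
    return 0;
}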
Time complexity of Bubble sort
❏ Best Case (Already Sorted Array): O(n)
With an early-exit check (stop when a pass performs no swaps), a single pass over a sorted array confirms it is sorted.
Insertion Sort Algorithm
We start with the second element of the array, as the first element is assumed to be sorted.
Compare the second element with the first element; if the second element is smaller, swap them.
Move to the third element, compare it with the first two elements, and place it at its correct position.
Input:
11 2 6 30 15 10
Output:
2 6 10 11 15 30
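A sketch of Insertion Sort in C++ following these steps:

#include <iostream>

// Grow a sorted prefix; insert each new element into its correct position.
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {        // first element assumed sorted
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) { // shift larger elements right
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                // place key at its position
    }
}

int main() {
    int arr[] = {11, 2, 6, 30, 15, 10};
    insertionSort(arr, 6);
    for (int x : arr) std::cout << x << " ";  // 2 6 10 11 15 30
    return 0;
}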
QuickSort Algorithm
❏ QuickSort works by using two key functions:
1. partition()
2. quickSort()
partition()
It is the key process in the QuickSort algorithm. It involves selecting a pivot element and rearranging the array so that:
- All elements smaller than the pivot are placed to its left, and
- All elements greater than the pivot are placed to its right.
The point where the pivot is placed is called the partitioning index, and it is returned to the caller, quickSort().
50 70 60 12 23 55 33
quickSort()
It divides the given array into two subarrays based on the partitioning index returned by the partition() function. It then keeps calling itself on these two subarrays until the whole array is sorted.
If the partition index is p:
int p = partition(arr, s, e);
quickSort(arr, s, p - 1);
quickSort(arr, p + 1, e);
QuickSort Algorithm
quickSort Function:
void quickSort(int arr[], int s, int e) {
    if (s >= e) {
        return;
    }
    int p = partition(arr, s, e);
    quickSort(arr, s, p - 1);
    quickSort(arr, p + 1, e);
}
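The body of partition() is not shown above; a common Lomuto-style sketch, taking the last element as the pivot, looks like this:

#include <utility>

// Lomuto partition: picks arr[e] as pivot, moves smaller elements left,
// places the pivot at its final position, and returns that index.
int partition(int arr[], int s, int e) {
    int pivot = arr[e];
    int i = s - 1;                  // boundary of the "smaller" region
    for (int j = s; j < e; j++) {
        if (arr[j] < pivot) {
            i++;
            std::swap(arr[i], arr[j]);
        }
    }
    std::swap(arr[i + 1], arr[e]);  // pivot goes to the partition index
    return i + 1;
}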
Quicksort Example:
Input:
50 70 60 12 23 55 33
Output:
12 23 33 50 55 60 70
1. The partition function is called at each step, dividing the array into two subarrays.
2. Recursively, QuickSort is applied to the left and right subarrays.
Let T(n) represent the time complexity for sorting an array of size n:
T(n) = T(k) + T(n−k−1) + O(n)
where k is the number of elements smaller than the pivot (the size of the left subarray) and O(n) is the cost of partitioning.
Merge Sort Algorithm
Divide: Divide the list or array recursively into two halves until it can no longer be divided.
Conquer: Each subarray is sorted individually using the merge sort algorithm.
Merge: The sorted subarrays are merged back together in sorted order. The
process continues until all elements from both subarrays have been merged.
Merge sort Example:
8 3 7 4 2 6 5 1
Divide:  8 3 7 4 | 2 6 5 1
Divide:  8 3 | 7 4 | 2 6 | 5 1
Divide:  8 | 3 | 7 | 4 | 2 | 6 | 5 | 1
Merge:   3 8 | 4 7 | 2 6 | 1 5
Merge:   3 4 7 8 | 1 2 5 6
Merge:   1 2 3 4 5 6 7 8
Merge sort Algorithm
mergeSort(Array, start, end):
    if start >= end
        return
    mid = (start + end) / 2
    mergeSort(Array, start, mid)
    mergeSort(Array, mid+1, end)
    merge(Array, start, mid, end)
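The merge() step called above is not shown; a sketch in C++ using a temporary buffer:

#include <vector>

// Merge the sorted halves Array[start..mid] and Array[mid+1..end].
void merge(int Array[], int start, int mid, int end) {
    std::vector<int> tmp;
    int i = start, j = mid + 1;
    while (i <= mid && j <= end)  // repeatedly take the smaller front element
        tmp.push_back(Array[i] <= Array[j] ? Array[i++] : Array[j++]);
    while (i <= mid) tmp.push_back(Array[i++]);  // leftover left half
    while (j <= end) tmp.push_back(Array[j++]);  // leftover right half
    for (int k = 0; k < (int)tmp.size(); k++)
        Array[start + k] = tmp[k];               // copy back in sorted order
}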
T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1
● T(n) represents the total time taken by the algorithm to sort an array of size n.
● 2T(n/2) represents the time taken to recursively sort the two halves of the array. Since each half has n/2 elements, we have two recursive calls, each with input size n/2.
● Θ(n) represents the time taken to merge the two sorted halves.
Time Complexity of Merge sort
Best Case: O(n log n), even when the array is already sorted or nearly sorted, since merge sort always divides and merges the full array.
Linked List
[Diagram: head → node1 → node2 → node3 → node4 → NULL]
Create Links:
Node* head = node1;
node1->next=node2;
node2->next=node3;
node3->next=node4;
Linked List Example
Here is a code to create a singly linked list and print the linked list:
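A minimal sketch (the Node type and the data values are illustrative assumptions):

#include <iostream>

struct Node {
    int data;
    Node* next;
    Node(int d) : data(d), next(nullptr) {}
};

int main() {
    // Create four nodes (values are illustrative) and link them as above.
    Node* node1 = new Node(10);
    Node* node2 = new Node(20);
    Node* node3 = new Node(30);
    Node* node4 = new Node(40);
    Node* head = node1;
    node1->next = node2;
    node2->next = node3;
    node3->next = node4;

    // Traverse from head and print each node's data.
    for (Node* cur = head; cur != nullptr; cur = cur->next)
        std::cout << cur->data << " ";  // 10 20 30 40
    return 0;
}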
Graphs
❏ A graph G = (V, E) consists of:
● A set V of nodes (vertices), and
● A set E that is a subset of V×V. That is, E is a set of pairs of the form (x,y) where x and y are nodes in V.
Examples of Graphs
● V = {0, 1, 2, 3, 4}
● E = {(0,1), (1,2), (0,3), (3,0), (2,2), (4,3)}
0 is adjacent to 1, since (0,1) ∈ E.
1 is not adjacent to 0, since (1,0) ∉ E.
2 is adjacent from 1, since (1,2) ∈ E.
Graph Representation
For graphs to be computationally useful, they have to be conveniently represented
in programs
Adjacency Matrix Representation
A graph of n nodes is represented by an n×n matrix A, where A[i][j] = 1 if (i,j) is an edge and 0 otherwise.
To print the matrix, iterate through it using two for loops and print the value of each cell.
Example of Adjacency Matrix
Pros:
● Simple to implement
● Easy and fast to tell if a pair (i,j) is an edge: simply check if A[i][j] is 1 or 0
Cons:
● No matter how few edges the graph has, the matrix takes O(n^2) memory
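A sketch of building and printing the adjacency matrix for the earlier example graph (V = {0,...,4}):

#include <iostream>

int main() {
    const int n = 5;
    int A[n][n] = {};  // all cells start at 0
    int edges[][2] = {{0,1}, {1,2}, {0,3}, {3,0}, {2,2}, {4,3}};
    for (auto& e : edges) A[e[0]][e[1]] = 1;  // directed edge (x,y)

    // Two nested for loops print every cell.
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) std::cout << A[i][j] << " ";
        std::cout << "\n";
    }
    return 0;
}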
Adjacency Lists Representation
A graph of n nodes is represented by a one-dimensional array L of linked lists,
where
L[i] is the linked list containing all the nodes adjacent from node i.
Iterate through the array using a loop and print the vector for each vertex.
Example of Adjacency List Representation
L[0]: empty
L[1]: empty
L[2]: 0, 1, 4, 5
L[3]: 0, 1, 4, 5
L[4]: 0, 1
L[5]: 0, 1
Cons:
● It can take up to O(n) time to determine if a pair of nodes (i,j) is an edge: one
would have to search the linked list L[i], which takes time proportional to the
length of L[i].
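A sketch of the adjacency lists for the same example graph, using a vector of vectors in place of the array of linked lists:

#include <iostream>
#include <vector>

int main() {
    const int n = 5;
    std::vector<std::vector<int>> L(n);  // L[i]: nodes adjacent from i
    int edges[][2] = {{0,1}, {1,2}, {0,3}, {3,0}, {2,2}, {4,3}};
    for (auto& e : edges) L[e[0]].push_back(e[1]);

    // One loop prints the list of each vertex.
    for (int i = 0; i < n; i++) {
        std::cout << "L[" << i << "]:";
        for (int v : L[i]) std::cout << " " << v;
        std::cout << "\n";
    }
    return 0;
}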
Graph Traversal Techniques
The previous connectivity problem, as well as many other graph problems, can be
solved using graph traversal techniques
If the traversal reached node x from node y, then y is viewed as the parent of x, and x as a child of y.
Depth-First Search
1. Select an unvisited node x, visit it, and treat as the current node
2. Find an unvisited neighbor of the current node, visit it, and make it the new
current node;
3. If the current node has no unvisited neighbors, backtrack to its parent, and make that parent the new current node.
4. Repeat steps 2 and 3 until no more nodes can be visited.
5. If there are still unvisited nodes, repeat from step 1.
Illustration of DFS
Implementation of DFS
Observations:
● The last node visited is the first node from which to proceed.
● Also, the backtracking proceeds on the basis of "last visited, first to backtrack to".
● This suggests that a stack is the proper data structure to remember the
current node and how to backtrack.
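A sketch of DFS with an explicit stack, as the observation suggests (the vector-of-vectors adjacency type is an assumption):

#include <iostream>
#include <stack>
#include <vector>

// Visit every node reachable from start; the stack remembers where to backtrack.
void dfs(const std::vector<std::vector<int>>& adj, int start,
         std::vector<bool>& visited) {
    std::stack<int> st;
    st.push(start);
    while (!st.empty()) {
        int x = st.top(); st.pop();
        if (visited[x]) continue;  // a node may be pushed more than once
        visited[x] = true;
        std::cout << x << " ";
        for (int y : adj[x])       // push unvisited neighbors
            if (!visited[y]) st.push(y);
    }
}

To cover a disconnected graph (step 5), call dfs again from every node still unvisited.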
Breadth-First Search
1. Select an unvisited node x, visit it, and make it the root of the BFS tree being formed. Its level is called the current level.
2. From each node z in the current level, in the order in which the level nodes
were visited, visit all the unvisited neighbors of z. The newly visited nodes
from this level form a new level that becomes the next current level.
3. Repeat step 2 until no more nodes can be visited.
4. If there are still unvisited nodes, repeat from Step 1.
Illustration of BFS
Implementation of BFS
Observations:
The first node visited in each level is the first node from which to proceed to visit
new nodes.
This suggests that a queue is the proper data structure to remember the order of
the steps.
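A sketch of BFS with a queue, mirroring the observation above (same adjacency-list assumption as the DFS sketch):

#include <iostream>
#include <queue>
#include <vector>

// Visit nodes level by level; the queue preserves the visiting order.
void bfs(const std::vector<std::vector<int>>& adj, int start,
         std::vector<bool>& visited) {
    std::queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int z = q.front(); q.pop();
        std::cout << z << " ";
        for (int y : adj[z]) {     // visit all unvisited neighbors of z
            if (!visited[y]) {
                visited[y] = true; // mark when enqueued, avoiding duplicates
                q.push(y);
            }
        }
    }
}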
Example (counting connected components):
Input:
0         3
|         |
1 --- 2   4
(Edges: 0-1, 1-2, 3-4)
Output: 2 (the graph has two connected components: {0, 1, 2} and {3, 4})
Input: graph =
0 1 1 1
1 0 1 0
1 1 0 1
1 0 1 0
Dynamic Programming (DP)
❏ Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming.
The idea is to simply store the results of subproblems so that we do not have to re-compute
them when needed later. This simple optimization typically reduces time complexities from
exponential to polynomial.
When to Use Dynamic Programming (DP)?
Optimal Substructure:
The property Optimal substructure means that we use the optimal results of
subproblems to achieve the optimal result of the bigger problem.
Overlapping Subproblems:
The same subproblems are solved repeatedly in different parts of the problem; this is referred to as the Overlapping Subproblems property in Dynamic Programming.
Approaches of Dynamic Programming (DP)
Dynamic programming can be achieved using two approaches:
1. Top-Down Approach (Memoization):
Before making any recursive call, we first check if the memoization table already has a solution for it.
After the recursive call is over, we store the solution in the memoization table.
2. Bottom-Up Approach (Tabulation):
In the bottom-up approach, also known as tabulation, we start with the smallest
subproblems and gradually build up to the final solution.
We write an iterative solution (avoiding recursion overhead) and build the solution in a bottom-up manner.
We use a dp table where we first fill in the solutions for the base cases and then fill the remaining entries using the recursive formula.
We only apply the recursive formula to table entries and do not make recursive calls.
Fibonacci number using Recursion
Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, …
One of the most basic, classic examples of this process is the Fibonacci sequence. Its recursive formulation is:
F(n) = F(n-1) + F(n-2), with base cases F(0) = 0 and F(1) = 1
The time complexity of the above approach is exponential, upper bounded by O(2^n), as we make two recursive calls in every function call.
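A sketch of the plain recursive version in C++:

#include <iostream>

// Two recursive calls per invocation: exponential time, O(2^n) upper bound.
int fib(int n) {
    if (n <= 1) return n;  // base cases: F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2);
}

int main() {
    std::cout << fib(10);  // prints 55
    return 0;
}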
Fibonacci number using Dynamic Programming
We can clearly see that the recursive solution is doing a lot of work again and again, which causes the time complexity to be exponential. To overcome this, we use DP by following the mechanism given below:
Identify Subproblems: Divide the main problem into smaller, independent subproblems,
i.e., F(n-1) and F(n-2)
Store Solutions: Solve each subproblem and store the solution in a table or array so that
we do not have to recompute the same again.
Build Up Solutions: Use the stored solutions to build up the solution to the main problem.
For F(n), look up F(n-1) and F(n-2) in the table and add them.
Avoid Recomputation: By storing solutions, DP ensures that each subproblem (for
example, F(2)) is solved only once, reducing computation time.
Using Memoization Approach – O(n) Time and O(n) Space:
To accomplish this in our example, we use a memoization array initialized with -1. Before
making a recursive call, we check whether the corresponding position in the memo array
contains -1. If it does, it means the value hasn’t been computed yet, so we calculate it
recursively. Once computed, the result is stored in the memo array, allowing us to reuse it
directly if the same value is needed again later.
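A sketch of the memoized version described above:

#include <iostream>
#include <vector>

// memo[i] == -1 means F(i) has not been computed yet.
int fib(int n, std::vector<int>& memo) {
    if (n <= 1) return n;               // base cases
    if (memo[n] != -1) return memo[n];  // reuse a stored result
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];                     // each F(i) is computed only once
}

int main() {
    int n = 10;
    std::vector<int> memo(n + 1, -1);   // initialized with -1
    std::cout << fib(n, memo);          // prints 55
    return 0;
}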