Unit 3 DSA
Graph representation -
Adjacency Matrix:
An adjacency matrix is a V x V two-dimensional array in which the entry at row i and column j is 1 (or the edge weight) if there is an edge from vertex i to vertex j, and 0 otherwise.
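A small illustrative example in Python (the graph below is made up, not from the notes); a 1 at row i, column j marks an edge between vertices i and j:

# Undirected graph with 4 vertices and edges 0-1, 0-2, 1-2, 2-3
adj_matrix = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
print(adj_matrix[0][1])   # 1 -> vertices 0 and 1 are adjacent
print(adj_matrix[1][3])   # 0 -> vertices 1 and 3 are not adjacent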
DFS –
The depth-first search (DFS) algorithm starts with the initial node of graph G
and goes deeper until we find the goal node or a node with no children.
Because of its recursive nature, a stack data structure can be used to implement
the DFS algorithm.
The step by step process to implement the DFS traversal is given as follows (a code sketch follows the list) -
1. First, create a stack with the total number of vertices in the graph.
2. Choose any vertex as the starting point of the traversal, push that vertex onto the stack, and mark it visited.
3. Push an unvisited vertex (adjacent to the vertex on the top of the stack) onto the stack and mark it visited.
4. Repeat step 3 until no unvisited vertices are left adjacent to the vertex on the stack's top.
5. When no such vertex is left, pop a vertex from the stack to backtrack.
6. Repeat steps 3 to 5 until the stack is empty.
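A minimal Python sketch of these steps, using an explicit stack and an adjacency-list graph (the function and variable names are illustrative, not from the notes):

def dfs(graph, start):
    visited = set()
    stack = [start]                 # Step 2: push the starting vertex
    while stack:                    # Step 6: continue until the stack is empty
        vertex = stack[-1]          # Vertex on the stack's top
        if vertex not in visited:
            print(vertex)           # Process the vertex
            visited.add(vertex)
        # Step 3: push an unvisited vertex adjacent to the top vertex, if one exists
        unvisited = [v for v in graph[vertex] if v not in visited]
        if unvisited:
            stack.append(unvisited[0])
        else:
            stack.pop()             # Step 5: backtrack when nothing is left to visit

dfs({'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A'], 'D': ['B']}, 'A')   # Prints A B D C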
BFS –
BFS is an algorithm that explores a graph level by level, visiting all neighbours
of the current node before moving to the next level.
Starting from the root, all the nodes at a particular level are visited first and then
the nodes of the next level are traversed till all the nodes are visited.
To do this, a queue is used. All the adjacent unvisited nodes of the current level
are pushed into the queue, and the nodes of the current level are marked visited
and popped from the queue.
A Python implementation of the BFS traversal is given as follows -

from collections import deque

def bfs(graph, start):
    visited = set()                      # Keep track of visited vertices
    queue = deque([start])               # Initialize the queue with the source vertex
    while queue:
        vertex = queue.popleft()         # Dequeue a vertex from the front of the queue
        if vertex not in visited:
            print(vertex)                # Process the visited vertex (replace with any desired operation)
            visited.add(vertex)
            # Enqueue unvisited neighbours
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    queue.append(neighbor)
Example of BFS
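For instance, a minimal run of the function above on a small made-up adjacency-list graph (not from the original notes):

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C']
}
bfs(graph, 'A')   # Visits level by level: A, then B and C, then D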
Prim's Algorithm:
Prim's algorithm is another greedy algorithm for finding the MST. It starts with
an arbitrary vertex and repeatedly adds the nearest vertex that is not in the MST.
Here's an overview of the algorithm:
a. Initialize the MST with a single vertex.
b. Repeat the following steps until the MST contains all vertices:
   - Select the minimum weight edge that connects a vertex in the MST to a vertex outside the MST.
   - Add the selected edge and the new vertex to the MST.
Prim's algorithm is often more efficient for dense graphs or situations where you
have fast access to a data structure that efficiently finds the minimum edge.
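A short Python sketch of this idea, using a min-heap of candidate edges (a "lazy" version; the adjacency-list format and names are illustrative assumptions, not from the notes):

import heapq

def prim_mst(graph, start):
    # graph: {vertex: [(weight, neighbour), ...]}
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]   # Edges leaving the start vertex
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)     # Minimum weight edge leaving the current MST
        if v in visited:
            continue                      # Both endpoints already in the MST: skip
        visited.add(v)
        mst.append((u, v, w))             # Add the edge and the new vertex to the MST
        for w2, nxt in graph[v]:
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return mst

graph = {0: [(1, 1), (3, 2)], 1: [(1, 0), (1, 2)], 2: [(3, 0), (1, 1)]}
print(prim_mst(graph, 0))   # [(0, 1, 1), (1, 2, 1)]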
Que: Define the Shortest Path Problem with an example.
The shortest path problem involves finding the shortest path between two
vertices (or nodes) in a graph. Algorithms such as the Floyd-Warshall algorithm
and different variations of Dijkstra's algorithm are used to find solutions to the
shortest path problem. Applications of the shortest path problem include those
in road networks, logistics, communications, electronic design, power grid
contingency analysis, and community detection.
Variations of the Shortest Path Problem
Common variations include the single-pair problem (one source to one destination), the single-source problem (one source to every other vertex), the single-destination problem (every vertex to one destination), and the all-pairs problem (between every pair of vertices).
Example:
How to find shortest paths from a source to all vertices using Dijkstra's Algorithm?
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
// Number of vertices in the graph
#define V 9
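/*
 * The notes show only main(); the dijkstra() it calls is not reproduced.
 * Below is a minimal sketch consistent with the call and the output table
 * (the helper name minDistance is ours, not from the original notes).
 */

// Find the unvisited vertex with the smallest tentative distance
int minDistance(int dist[], bool sptSet[])
{
    int min = INT_MAX, min_index = -1;
    for (int v = 0; v < V; v++)
        if (!sptSet[v] && dist[v] <= min)
            min = dist[v], min_index = v;
    return min_index;
}

// Compute shortest distances from src to every vertex and print them
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // dist[i] = current shortest distance from src to vertex i
    bool sptSet[V]; // true once vertex i's distance is finalized
    for (int i = 0; i < V; i++) {
        dist[i] = INT_MAX;
        sptSet[i] = false;
    }
    dist[src] = 0;
    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, sptSet); // Pick the closest unfinalized vertex
        sptSet[u] = true;
        for (int v = 0; v < V; v++)        // Relax all edges leaving u
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    printf("Vertex \t Distance from Source\n");
    for (int i = 0; i < V; i++)
        printf("%d \t\t %d\n", i, dist[i]);
}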
// driver's code
int main()
{
/* Let us create the example graph discussed above */
int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
// Function call
dijkstra(graph, 0);
return 0;
}
Output:
Vertex Distance from Source
0 0
1 4
2 12
3 19
4 21
5 11
6 9
7 8
8 14
Binary Search
Binary Search is defined as a searching algorithm used in a sorted array by repeatedly
dividing the search interval in half. The idea of binary search is to use the information that
the array is sorted and reduce the time complexity to O(log N).
In this algorithm,
- Divide the search space into two halves by finding the middle index “mid”.
- Compare the middle element of the search space with the key.
- If the key is found at the middle element, the process is terminated.
- If the key is not found at the middle element, choose which half will be used as the next search space:
   - If the key is smaller than the middle element, the left side is used for the next search.
   - If the key is larger than the middle element, the right side is used for the next search.
- This process is continued until the key is found or the total search space is exhausted.
def binarySearch(array, x):
    low, high = 0, len(array) - 1
    # Repeat until the pointers low and high meet each other
    while low <= high:
        mid = (low + high) // 2        # Middle index of the current search space
        if array[mid] == x:
            return mid                 # Key found at the middle element
        elif array[mid] < x:
            low = mid + 1              # Key is larger, search the right half
        else:
            high = mid - 1             # Key is smaller, search the left half
    return -1                          # Key is not present

array = [3, 4, 5, 6, 7, 8, 9]
x = 4
result = binarySearch(array, x)
if result != -1:
    print("Element is present at index " + str(result))
else:
    print("Not found")
Quick Sort
The key process in quickSort is the partition() function. The goal of partition() is to place the pivot
(any element can be chosen as the pivot) at its correct position in the sorted array, putting all
smaller elements to the left of the pivot and all greater elements to the right of the
pivot. Partitioning is then applied recursively on each side of the pivot after the pivot is placed in its
correct position, and this finally sorts the array.
Algorithm
QUICKSORT (array A, start, end)
{
    if (start < end)
    {
        p = partition(A, start, end)
        QUICKSORT (A, start, p - 1)
        QUICKSORT (A, p + 1, end)
    }
}

The partition algorithm
PARTITION (array A, start, end)
{
    pivot ← A[end]
    i ← start - 1
    for j ← start to end - 1
    {
        if (A[j] < pivot)
        {
            i ← i + 1
            swap A[i] with A[j]
        }
    }
    swap A[i+1] with A[end]
    return i + 1
}
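A direct Python translation of this pseudocode (Lomuto partition, last element as pivot; the function names simply mirror the pseudocode):

def partition(A, start, end):
    pivot = A[end]                          # Last element is chosen as the pivot
    i = start - 1
    for j in range(start, end):
        if A[j] < pivot:
            i += 1
            A[i], A[j] = A[j], A[i]         # Move smaller elements to the left side
    A[i + 1], A[end] = A[end], A[i + 1]     # Place the pivot at its correct position
    return i + 1

def quicksort(A, start, end):
    if start < end:
        p = partition(A, start, end)
        quicksort(A, start, p - 1)          # Sort elements before the pivot
        quicksort(A, p + 1, end)            # Sort elements after the pivot

arr = [10, 7, 8, 9, 1, 5]
quicksort(arr, 0, len(arr) - 1)
print(arr)   # [1, 5, 7, 8, 9, 10]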
How does Heap sort work?
Heaps can be used to sort an array. In a max-heap, the maximum element is always at the
root. Heap Sort uses this property: build a max-heap from the array, repeatedly swap the root
(the current maximum) with the last element of the heap, shrink the heap by one, and restore
the heap property, until the array is sorted.
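A short Python sketch of this process (build a max-heap, then repeatedly move the root to the end; the names are illustrative):

def heapify(arr, n, i):
    # Sift arr[i] down so the subtree rooted at i satisfies the max-heap property
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):    # Build a max-heap from the array
        heapify(arr, n, i)
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]    # Move the current maximum to the end
        heapify(arr, i, 0)                 # Restore the heap on the remaining prefix

data = [12, 11, 13, 5, 6, 7]
heap_sort(data)
print(data)   # [5, 6, 7, 11, 12, 13]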
Merge Sort
Merge sort is defined as a sorting algorithm that works by dividing an array into smaller
subarrays, sorting each subarray, and then merging the sorted subarrays back together to
form the final sorted array. In simple terms, we can say that the process of merge sort is to
divide the array into two halves, sort each half, and then merge the sorted halves back
together. This process is repeated until the entire array is sorted.
MergeSort(A, p, r):
    if p >= r
        return
    q = (p + r) / 2
    MergeSort(A, p, q)
    MergeSort(A, q+1, r)
    merge(A, p, q, r)
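The merge() step is not spelled out above; a minimal Python sketch of the whole procedure (recursive split plus a two-pointer merge, returning a new list) could look like this:

def merge_sort(A):
    if len(A) <= 1:
        return A                          # A single element is already sorted
    mid = len(A) // 2
    left = merge_sort(A[:mid])            # Sort the left half
    right = merge_sort(A[mid:])           # Sort the right half
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):   # Merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])               # Append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]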
Hash Table
Hash Table is a data structure which stores data in an associative manner. In a hash table, data
is stored in an array format, where each data value has its own unique index value. Access of
data becomes very fast if we know the index of the desired data. Thus, it becomes a data
structure in which insertion and search operations are very fast irrespective of the size of the
data. A hash table uses an array as a storage medium and uses a hashing technique to generate an
index at which an element is to be inserted or located.
Hashing
Hashing is a technique to convert a range of key values into a range of indexes of an array.
We're going to use the modulo operator to get a range of key values. Consider an example of a
hash table of size 20, where the following items are to be stored. Items are in the (key, value)
format.
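The original item list is not reproduced here, but as a minimal Python sketch of the modulo idea with some made-up (key, value) pairs:

table_size = 20
items = [(1, 20), (25, 70), (42, 80)]      # Made-up (key, value) pairs for illustration
for key, value in items:
    index = key % table_size               # hash(key) = key mod 20 maps every key into 0..19
    print("key", key, "-> index", index)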
Linear Probing
As we can see, it may happen that the hashing technique produces an index of the array that is
already in use. In such a case, we can search for the next empty location in the array by
looking into the next cell until we find an empty cell. This technique is called linear probing.
Let us consider a simple hash function "key mod 7" and the sequence of keys 50, 700, 76,
85, 92, 73, 101. This means hash(key) = key % S, where S = size of the table = 7, indexed from
0 to 6. We can define the hash function as per our choice when we create a hash table, although
internally it is fixed with a pre-defined formula.
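A minimal Python sketch of this insertion scheme for the key sequence above (open addressing only; deletion and table resizing are not handled):

S = 7                                   # Size of the hash table, indexed 0 to 6
table = [None] * S

def insert(key):
    index = key % S                     # hash(key) = key mod 7
    while table[index] is not None:     # Cell already occupied: probe the next cell
        index = (index + 1) % S
    table[index] = key

for key in [50, 700, 76, 85, 92, 73, 101]:
    insert(key)
print(table)                            # [700, 50, 85, 92, 73, 101, 76]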