
DSA Assignment 2

SUBMITTED BY: Vansh Deep Singh


ROLL NO. : IEC2023128

Answer 1:
Both the adjacency matrix and the adjacency list are common representations used for storing
undirected graphs in computer memory. Let's compare them in terms of space utilization and
how they affect the time complexity of graph algorithms:

1. Adjacency Matrix - The relationships between vertices in an adjacency matrix
are represented by a 2D array.
Space Utilization:
Regardless of the number of edges, the adjacency matrix takes up V x V entries
if there are V vertices. As a result, the space usage is O(V^2).

Time Complexity:
- Verifying whether two vertices have an edge connecting them: O(1)
- Locating each vertex's neighbors: O(V) - necessitates scanning a row or column
- Adding or removing an edge: O(1)
Space Complexity: O(V^2)

2. Adjacency List - For each vertex, an adjacency list stores the collection (array,
linked list, etc.) of its neighboring vertices.
Space Utilization: This requires O(V) space for vertices and O(E) space for edges,
therefore if there are V vertices and E edges, the space utilization is O(V + E).

Time Complexity:
- Verifying whether two vertices have an edge connecting them: O(degree(v)),
where degree(v) denotes the number of neighbors of vertex v.
- Locating each vertex's neighbors: O(degree(v))
- Adding an edge: O(1)
Space Complexity: O(V + E)

Effect on Time Complexity:

1. Example: Depth-First Search (DFS)
- With an adjacency matrix, DFS requires O(V^2) time, since it must scan an
entire row to identify the vertices linked to each vertex.
- With an adjacency list, DFS requires O(V + E) time, since it only goes
through the vertices that are actually adjacent.
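The difference can be seen directly in code. Below is a minimal sketch of both representations with a DFS over each; the small 4-vertex example graph is illustrative, not taken from the assignment.

```python
# Same undirected graph stored both ways (vertices 0..3; edges 0-1, 0-2, 1-3).
V = 4
edges = [(0, 1), (0, 2), (1, 3)]

# Adjacency matrix: O(V^2) space regardless of edge count
matrix = [[0] * V for _ in range(V)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: O(V + E) space
adj = [[] for _ in range(V)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def dfs_matrix(start):
    """DFS over the matrix: O(V^2) -- scans a full row per vertex."""
    visited, order, stack = set(), [], [start]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        for v in range(V - 1, -1, -1):      # row scan costs O(V)
            if matrix[u][v] and v not in visited:
                stack.append(v)
    return order

def dfs_list(start):
    """DFS over the list: O(V + E) -- touches only actual neighbors."""
    visited, order, stack = set(), [], [start]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        for v in reversed(adj[u]):
            if v not in visited:
                stack.append(v)
    return order
```

Both traversals visit the same vertices in the same order here; only the cost of finding each vertex's neighbors differs.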

ANSWER 2:
Traversal algorithms like DFS (Depth-First Search) and BFS (Breadth-First
Search) inherently produce a spanning tree when applied to a connected graph.
Here's how:

DFS: In DFS, starting from an initial vertex, the algorithm explores as far as
possible along each branch before backtracking. During this process, it marks
visited vertices and edges, effectively forming a tree structure. This tree is a
spanning tree because it includes all the vertices of the original graph and is acyclic
(no cycles), which makes it a tree.
BFS: Similarly, BFS explores the graph level by level, starting from the initial
vertex. It visits all the vertices reachable from the initial vertex in a systematic
manner, creating a tree that spans all vertices and is acyclic.
Now, regarding the Minimum Spanning Tree (MST), traversal algorithms alone
like DFS or BFS do not guarantee the generation of an MST, especially when
weights are given in the graph. Here's why:

DFS and BFS are not designed to consider weights: These traversal algorithms
prioritize visiting vertices based on their connectivity, not on edge weights. In
contrast, an MST is a tree that spans all vertices with the minimum possible total
edge weight.
MST algorithms consider weights: Algorithms like Prim's or Kruskal's are
specifically designed to find an MST by considering edge weights. They iteratively
add edges to the growing spanning tree while ensuring it remains acyclic and spans
all vertices with the minimum total weight.
Let's consider an example to illustrate this:

Suppose we have the following undirected graph, with edge weights
A-B = 4, A-C = 3, A-D = 1, B-D = 2, C-D = 6:

        4
    A ------- B
    | \       |
    |  \ 1    | 2
  3 |   \     |
    |    \    |
    C ------- D
         6

Here, if we perform a DFS traversal starting from vertex A and visiting B first,
we would get the spanning tree {A-B, B-D, D-C}:

        4
    A ------- B
              |
              | 2
              |
    C ------- D
         6

with total weight 4 + 2 + 6 = 12 (a spanning tree of 4 vertices always has
exactly 3 edges). A BFS from A would instead give {A-B, A-C, A-D} with weight
4 + 3 + 1 = 8. However, neither is necessarily the Minimum Spanning Tree. The
MST for this graph, considering edge weights, uses edges A-D (1), B-D (2), and
A-C (3):

        1
    A ------- D
    |         |
  3 |         | 2
    |         |
    C         B

This MST has a total weight of 6 (1 + 2 + 3), which is less than the total
weight of the spanning trees obtained through DFS or BFS, demonstrating that
traversal algorithms alone do not guarantee an MST when weights are involved.
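The comparison can be checked in code. This sketch assumes the example's edge weights are A-B = 4, A-C = 3, A-D = 1, B-D = 2, C-D = 6 (illustrative values); it computes the weight of the spanning tree a plain DFS produces and the MST weight via Kruskal's algorithm.

```python
# Weighted undirected graph as an adjacency list of (neighbor, weight) pairs.
graph = {
    'A': [('B', 4), ('C', 3), ('D', 1)],
    'B': [('A', 4), ('D', 2)],
    'C': [('A', 3), ('D', 6)],
    'D': [('A', 1), ('B', 2), ('C', 6)],
}

def dfs_tree_weight(start):
    """Weight of the spanning tree produced by plain DFS (ignores weights)."""
    visited, total = {start}, 0

    def visit(u):
        nonlocal total
        for v, w in graph[u]:
            if v not in visited:
                visited.add(v)
                total += w
                visit(v)

    visit(start)
    return total

def mst_weight():
    """Kruskal's algorithm: take the lightest edges that don't form a cycle."""
    edges = sorted({tuple(sorted((u, v))) + (w,)
                    for u, nbrs in graph.items() for v, w in nbrs},
                   key=lambda e: e[2])
    parent = {v: v for v in graph}

    def find(x):                          # Union-Find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for u, v, w in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total
```

With these weights, the DFS tree (visiting B first from A) weighs 12 while the MST weighs 6.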

ANSWER 3:
To extract a Maximum Spanning Tree (MxST) from an undirected graph, we can
modify Kruskal's algorithm slightly. Kruskal's algorithm typically selects edges in
non-decreasing order of weight. To find a Maximum Spanning Tree, we need to
select edges in non-increasing order of weight. Here's an outline of the algorithm:
Sort the edges of the graph in non-increasing order of weight.
Initialize an empty graph for the Maximum Spanning Tree.
Iterate through the sorted edges. For each edge:
If adding the edge to the Maximum Spanning Tree does not create a cycle, add it to
the tree.
Otherwise, skip the edge.
Continue until the Maximum Spanning Tree has V−1 edges, where V is the number
of vertices in the original graph.
Return the Maximum Spanning Tree.
This modified algorithm ensures that we select the heaviest edges that do not create
a cycle, resulting in a Maximum Spanning Tree.
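The outline above can be sketched as follows; the edge list at the bottom is a made-up example, and cycle detection uses a simple Union-Find with path compression.

```python
def max_spanning_tree(num_vertices, edges):
    """edges: list of (weight, u, v); returns (total_weight, chosen_edges)."""
    parent = list(range(num_vertices))

    def find(x):                      # Union-Find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, total = [], 0
    # Identical to standard Kruskal's, but sorted in non-increasing order.
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                  # no cycle: keep the edge
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
        if len(tree) == num_vertices - 1:
            break                     # spanning tree is complete
    return total, tree

# Example: 4 vertices, edges given as (weight, u, v)
edges = [(4, 0, 1), (3, 0, 2), (1, 0, 3), (2, 1, 3), (6, 2, 3)]
total, tree = max_spanning_tree(4, edges)
```

For this example the heaviest acyclic edge set is {2-3 (6), 0-1 (4), 0-2 (3)}, total 13.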

Now, let's discuss the time and space complexity of this algorithm:

Time Complexity: Sorting the edges initially takes O(E log E),
where E is the number of edges in the graph. After sorting, iterating through the
edges takes O(E), and for each edge, cycle detection (e.g., using Union-Find)
takes nearly constant amortized time. Hence, the overall time complexity is
dominated by the sorting step, giving O(E log E).
Space Complexity: The space complexity primarily depends on the data structures
used for sorting the edges and storing the Maximum Spanning Tree. Sorting the
edges requires O(E) space, and storing the Maximum Spanning Tree requires
O(V) space, where V is the number of vertices. Therefore, the overall space
complexity is O(E + V). However, since E is at least V−1 in a connected graph,
the space complexity simplifies to O(E).

ANSWER 4:
To find all possible paths in a graph starting from a given vertex, we can use a
Depth-First
Search (DFS) algorithm. DFS is well-suited for this task because it systematically
explores all possible
paths starting from the given vertex. Here's the algorithm:
Algorithm to find all possible paths in a graph using Depth-First Search (DFS):
1. Initialize an empty list to store all paths found.
2. Perform a Depth-First Search (DFS) starting from the given vertex.
3. During DFS traversal:
Maintain a path list to track the current path being explored.
At each vertex encountered during DFS:
o Add the vertex to the current path list.
o If the current vertex is the destination vertex:
Append a copy of the current path list to the list of paths found.
o Otherwise, recursively explore all unvisited neighbouring vertices.
o Remove the last vertex from the current path list before backtracking.
4. Once DFS traversal is complete, return the list of paths found.
Illustration:
Consider the following graph:
A --- B --- C
|     |
D --- E --- F

Let's find all possible paths from vertex A to vertex F:
1. Initialize the list of paths and the current path being explored.
2. Start DFS traversal from vertex A.
3. During traversal, maintain the current path being explored.
4. Explore all unvisited neighbouring vertices recursively.
5. When reaching the destination vertex F, append a copy of the current path to
the list of paths found.
6. Backtrack and continue exploring other paths.
7. Return the list of paths found.
Applying the algorithm to this graph, we find the following paths from A to F:
A -> B -> E -> F
A -> D -> E -> F
DFS is helpful in finding paths in a graph because it systematically explores all
possible paths starting from
the given vertex, making it suitable for this task.
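The steps above can be sketched as a short backtracking DFS. The graph dictionary below assumes the illustrated graph has edges A-B, B-C, B-E, A-D, D-E, and E-F.

```python
# Undirected graph from the illustration, as an adjacency dictionary.
graph = {
    'A': ['B', 'D'],
    'B': ['A', 'C', 'E'],
    'C': ['B'],
    'D': ['A', 'E'],
    'E': ['B', 'D', 'F'],
    'F': ['E'],
}

def all_paths(start, dest):
    """Return every simple path from start to dest via backtracking DFS."""
    paths, path = [], []

    def dfs(u):
        path.append(u)                 # add vertex to the current path
        if u == dest:
            paths.append(list(path))   # record a copy of the finished path
        else:
            for v in graph[u]:
                if v not in path:      # only vertices unvisited on this path
                    dfs(v)
        path.pop()                     # backtrack before returning

    dfs(start)
    return paths
```

Calling `all_paths('A', 'F')` yields the two paths listed above; the same function finds all paths to any other destination, e.g. `all_paths('A', 'C')`.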

ANSWER 5:

In essence, the method described is a brute-force way of determining the
shortest path between two nodes. Though conceptually straightforward, its high
time complexity makes it impractical for large graphs.

Let's dissect it:


1. Enumerating all potential paths: In the worst case, a graph with n nodes can
have up to roughly n^(n-2) simple paths connecting two given nodes: each node can
have up to n-1 incident edges, and each of the up to n-2 intermediate positions
on a path can be filled by one of the remaining nodes. Even for moderately sized
graphs, this quickly becomes unmanageable.

2. Computing total weights: After enumerating the paths, you must determine the
total weight of each one. This involves adding up the edge weights along every
path.

3. Finding the smallest total: The last step involves comparing all paths' total
weights to see which is the smallest.

Let's now examine the time complexity:

1. Enumerating every potential route: The time complexity of this is
O(n^(n-2)). As noted above, this becomes astronomically large for even
moderately sized graphs.

2. Total weight calculation: Summing the edge weights along a path takes time
proportional to its length, which can be up to n-1 edges. Therefore, this
step's time complexity is O(n * n^(n-2)).

3. Determining the least total: Comparing all the totals takes O(n^(n-2)) time.

For large graphs with thousands or millions of vertices and edges, this
method becomes computationally intractable and may take an impractical
amount of time to execute. Therefore, more efficient algorithms like
Dijkstra's algorithm or the A* algorithm, which have polynomial or near-
linear time complexity, are preferred for finding the shortest path in large
graphs. These algorithms exploit specific characteristics of the graph,
such as its structure or edge weights, to efficiently find the shortest path
without exhaustively exploring all possible paths.
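As a contrast to the brute-force approach, here is a minimal sketch of Dijkstra's algorithm with a binary heap, which runs in O((V + E) log V); the example graph and its weights are made up.

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}; returns shortest distances from source."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                 # stale heap entry, skip
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:         # relax the edge u -> v
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    'A': [('B', 4), ('C', 3), ('D', 1)],
    'B': [('A', 4), ('D', 2)],
    'C': [('A', 3), ('D', 6)],
    'D': [('A', 1), ('B', 2), ('C', 6)],
}
distances = dijkstra(graph, 'A')
```

Unlike the brute-force method, each vertex is settled once, in order of increasing distance, so no path is ever enumerated twice.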
