DAA Assignment: Group 3
Name                                                                 ID
1. Yawukal Addis………………………………………………..BULR3616/14
2. Woldie Chala…………………………………………………..BULR1667/14
3. Abinet Getahun…………………………………………………BULR0072/14
4. Beza Awoke……………………………………………………BULR0350/14
5. Tarekegn Zeleke………………………………………………..BUTR10002/14
BONGA, ETHIOPIA
Shortest path algorithms are fundamental in graph theory and are used to find
the shortest path between two vertices in a graph. There are different types of
shortest path algorithms based on the characteristics of the graph, such as
unweighted graphs, weighted graphs with non-negative weights, and graphs
with negative weights.
In an unweighted graph, all edges have the same weight or cost, and
finding the shortest path involves finding the path with the minimum
number of edges between two vertices.
The two most common algorithms for finding the shortest path in an
unweighted graph are:
1. Breadth-First Search (BFS), using a queue:
Algorithm:
1. Start with the source vertex and mark it as visited.
2. Enqueue the source vertex into a queue.
3. While the queue is not empty:
- Dequeue a vertex from the queue.
- For each unvisited neighbour of the dequeued vertex:
  - Mark the neighbour as visited.
  - Enqueue the neighbour into the queue.
  - Set the distance of the neighbour to one more than the distance of the dequeued vertex.
4. Repeat until all reachable vertices are visited.
2. BFS using a distance array:
Algorithm:
1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Start from the source vertex and explore all neighbouring vertices.
3. Update the distance of each neighbouring vertex as one more than the
distance of the current vertex.
4. Continue exploring vertices until all reachable vertices are visited.
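As an illustration, here is a minimal Python sketch of the queue-based BFS procedure described above. The graph representation (a dictionary mapping each vertex to a list of its neighbours) and the function name are choices made just for this example.

from collections import deque

def bfs_shortest_distances(graph, source):
    # graph: dict mapping each vertex to a list of neighbours
    distance = {source: 0}          # visited vertices and their distances
    queue = deque([source])         # enqueue the source vertex
    while queue:
        u = queue.popleft()         # dequeue a vertex
        for v in graph[u]:          # examine each neighbour
            if v not in distance:   # unvisited neighbour
                distance[v] = distance[u] + 1
                queue.append(v)
    return distance                 # distances (in edges) from the source

# Example usage:
g = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(bfs_shortest_distances(g, 'A'))   # {'A': 0, 'B': 1, 'C': 1, 'D': 2}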
Shortest path algorithms in unweighted graphs are essential for scenarios where
edge weights are not considered, such as network routing protocols or
determining connectivity between nodes. By illustrating the basic principles of graph traversal, they also provide a foundation for the more complex shortest path algorithms used in weighted graphs.
Shortest path algorithms in weighted graphs are used to find the path
with the minimum total weight or cost between two vertices. There are
several popular algorithms for finding the shortest path in weighted
graphs, depending on the characteristics of the graph and the weights
assigned to the edges. Two common algorithms for finding the shortest
path in a weighted graph are Dijkstra's Algorithm and Bellman-Ford
Algorithm.
In a weighted graph, each edge has a weight or cost associated with it,
and finding the shortest path involves finding the path with the minimum
total weight between two vertices.
The two most common algorithms for finding the shortest path in a
weighted graph are:
1. Dijkstra’s Algorithm:
Algorithm:
1. Initialize a distance array with infinity for all vertices except the
source (distance to source is 0).
2. Create a priority queue or min-heap to store vertices based on their
current distance from the source.
3. Repeatedly extract the vertex with the smallest tentative distance from the priority queue.
4. For each neighbour of the extracted vertex, update its distance if a shorter path through the extracted vertex is found.
5. Repeat the process until all reachable vertices are visited. Note that Dijkstra's algorithm assumes all edge weights are non-negative.
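The following is a minimal Python sketch of this procedure, using the standard heapq module as the min-heap; the adjacency-list representation with (neighbour, weight) pairs is an assumption made for this example.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping each vertex to a list of (neighbour, weight) pairs
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                  # min-heap ordered by current distance
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                 # stale heap entry, skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:         # found a shorter path to v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'A': [('B', 4), ('C', 1)], 'B': [('D', 1)],
     'C': [('B', 2), ('D', 5)], 'D': []}
print(dijkstra(g, 'A'))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}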
2. Bellman-Ford Algorithm:
Algorithm:
1. Initialize a distance array with infinity for all vertices except the source
(distance to source is 0).
2. Relax all edges repeatedly, updating the distance of each vertex if a
shorter path is found.
3. Repeat the relaxation step V - 1 times, where V is the number of vertices in the graph.
4. Check for negative weight cycles by performing one more relaxation
step. If any distance is updated, then there is a negative weight cycle.
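A short Python sketch of the Bellman-Ford procedure above might look as follows; the edge-list representation (a list of (u, v, weight) triples) is assumed for illustration.

def bellman_ford(vertices, edges, source):
    # vertices: iterable of vertex labels
    # edges: list of (u, v, weight) triples
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):      # relax all edges V - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                   # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative weight cycle")
    return dist

edges = [('A', 'B', 4), ('A', 'C', 1), ('C', 'B', -2), ('B', 'D', 1)]
print(bellman_ford('ABCD', edges, 'A'))   # {'A': 0, 'B': -1, 'C': 1, 'D': 0}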
3. Floyd-Warshall Algorithm:
The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a weighted graph. It considers each vertex in turn as an intermediate vertex and updates the shortest distance between any two vertices if a shorter path is found by including the intermediate vertex.
The steps of the algorithm are:
1. Initialize a 2D array to store the shortest distances between all pairs of
vertices, with the initial values being the weights of the edges if there is
an edge, and infinity if there is no edge. Also, set the diagonal elements
of the array to 0.
2. For each intermediate vertex k from 1 to V, where V is the number of
vertices, update the distance array as follows: For each pair of vertices i
and j, if the distance from i to j through vertex k is shorter than the
current distance, update the distance to the new shorter distance.
3. After the above step, the array will contain the shortest distances between
all pairs of vertices.
The algorithm is guaranteed to find the shortest paths between all pairs of vertices in a weighted graph, including graphs with negative edge weights, provided the graph contains no negative cycles.
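A compact Python sketch of the steps above is given below; the dictionary mapping (i, j) pairs to edge weights is an input format assumed just for this example.

def floyd_warshall(n, weight):
    # n: number of vertices (labelled 0 .. n-1)
    # weight: dict mapping (i, j) to the weight of edge i -> j
    INF = float('inf')
    # Step 1: initialise the distance matrix (0 on the diagonal,
    # edge weight if an edge exists, infinity otherwise).
    dist = [[0 if i == j else weight.get((i, j), INF) for j in range(n)]
            for i in range(n)]
    # Step 2: try every vertex k as an intermediate vertex.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Step 3: dist[i][j] now holds the shortest distance from i to j.
    return dist

w = {(0, 1): 3, (1, 2): -2, (0, 2): 4, (2, 0): 1}
print(floyd_warshall(3, w))   # [[0, 3, 1], [-1, 0, -2], [1, 4, 0]]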
4. Topological Sorting with Dynamic Programming:
For a directed acyclic graph (DAG), single-source shortest paths can be computed by combining topological sorting with dynamic programming.
Algorithm:
1. Perform topological sorting of the graph.
2. Initialize distances to all nodes as infinity except the initial node which
is set to 0.
3. Iterate through the nodes in topologically sorted order, relaxing each outgoing edge to update distances.
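The following minimal Python sketch combines a depth-first topological sort with the relaxation step described above; the adjacency-list format with (neighbour, weight) pairs is an assumption made for this example.

def dag_shortest_paths(graph, source):
    # graph: dict mapping each node to a list of (neighbour, weight) pairs
    # Step 1: topological sort (depth-first, reversed post-order).
    order, visited = [], set()
    def visit(u):
        visited.add(u)
        for v, _ in graph[u]:
            if v not in visited:
                visit(v)
        order.append(u)
    for u in graph:
        if u not in visited:
            visit(u)
    order.reverse()
    # Step 2: initialise distances.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    # Step 3: relax edges in topological order.
    for u in order:
        if dist[u] < float('inf'):
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

g = {'A': [('B', 2), ('C', 6)], 'B': [('C', 3)], 'C': []}
print(dag_shortest_paths(g, 'A'))   # {'A': 0, 'B': 2, 'C': 5}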
The Fractional Knapsack Problem
In the fractional knapsack problem, we are given n items, each with a profit and a weight, and a knapsack of capacity W; items may be taken fractionally, and the goal is to maximize the total profit. This problem can be solved with the help of two techniques:
Brute-force approach: The brute-force approach tries all possible solutions with all the different fractions, but it is a time-consuming approach.
Greedy approach: In the greedy approach, we calculate the profit/weight ratio of each item and select items accordingly; the item with the highest ratio is selected first.
The fractional knapsack problem can be solved by first sorting the items according to their profit/weight ratios (value per unit weight), which can be done in O(N log N) time. The approach starts with the most valuable item, i.e., the one with the highest ratio, and takes as much of it as possible, then considers the next item from the sorted list, and so on. This pass over the sorted items takes O(N) time. Therefore, the overall running time is O(N log N) plus O(N), which equals O(N log N).
We can say that the fractional knapsack problem can be solved much faster than the 0/1 knapsack problem. Three greedy selection strategies can be compared:
1) The first approach is to select the item based on the maximum profit.
2) The second approach is to select the item based on the minimum
weight.
3) The third approach is to calculate the ratio of profit/weight.
Consider the following example:
Objects:     1    2    3    4    5    6    7
Profit (P):  5   10   15    7    8    9    4
Weight (w):  1    3    5    4    1    3    2
W (capacity of the knapsack) = 15
n (number of items) = 7
Third approach (profit/weight ratio):
The first two approaches are evaluated similarly; only the third is worked out here. First compute the profit/weight ratio of each object:
Object 1: 5/1 = 5
Object 2: 10/3 = 3.33
Object 3: 15/5 = 3
Object 4: 7/4 = 1.75
Object 5: 8/1 = 8
Object 6: 9/3 = 3
Object 7: 4/2 = 2
Object 5 has the maximum profit/weight ratio, i.e., 8, so we select object 5 first. After object 5, object 1 has the maximum profit/weight ratio, i.e., 5, so we select object 1. After object 1, object 2 has the maximum profit/weight ratio, i.e., 3.33, so we select object 2. After object 2, object 3 has the maximum profit/weight ratio, i.e., 3, so we select object 3, followed by object 6 (ratio 3) and object 7 (ratio 2). The selections are summarized in the table below:

Object   Profit   Weight   Remaining capacity
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
3        15       5        10 - 5 = 5
6        9        3        5 - 3 = 2
7        4        2        2 - 2 = 0
As we can observe in the above table, the remaining capacity is zero, which means that the knapsack is full and we cannot add more objects. Therefore, the total profit is 8 + 5 + 10 + 15 + 9 + 4 = 51.
In the first approach, the maximum profit is 47.25. The maximum profit in the second approach is 46. The maximum profit in the third approach is 51. Therefore, the third approach, selecting by maximum profit/weight ratio, is the best of the three.
FRACTIONAL_KNAPSACK(X, V, W, M)
// X: the n items, sorted in decreasing order of ratio V[i] / W[i]
// V: profits, W: weights, M: capacity of the knapsack
S ← Φ              // set of selected items, initially empty
SW ← 0             // weight of selected items
SP ← 0             // profit of selected items
i ← 1
while i ≤ n do
    if (SW + W[i]) ≤ M then
        S ← S ∪ {X[i]}               // take the whole item
        SW ← SW + W[i]
        SP ← SP + V[i]
    else
        frac ← (M − SW) / W[i]       // take only the fraction that fits
        S ← S ∪ {frac · X[i]}        // add fraction of item X[i]
        SP ← SP + V[i] · frac        // add fraction of profit
        SW ← SW + W[i] · frac        // add fraction of weight
        break                        // the knapsack is now full
    end
    i ← i + 1
end
return S, SP
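For comparison, a runnable Python version of the same greedy strategy might look as follows; the function and variable names are choices made for this sketch, and it reproduces the result of the worked example above.

def fractional_knapsack(profits, weights, capacity):
    # sort item indices by decreasing profit/weight ratio
    items = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total_profit = 0.0
    remaining = capacity
    for i in items:
        if weights[i] <= remaining:          # take the whole item
            remaining -= weights[i]
            total_profit += profits[i]
        else:                                # take the fraction that fits
            total_profit += profits[i] * remaining / weights[i]
            break
    return total_profit

# Data from the worked example above:
P = [5, 10, 15, 7, 8, 9, 4]
W = [1, 3, 5, 4, 1, 3, 2]
print(fractional_knapsack(P, W, 15))   # 51.0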
A probabilistic algorithm is an algorithm where the result and/or the way the
result is obtained depend on chance. These algorithms are also sometimes called
randomized algorithms. In some applications the use of probabilistic algorithms is natural, e.g., simulating the behaviour of some existing or planned system over time. In this case the result is by nature stochastic.
There are also a number of discrete problems for which only an exact result is acceptable (e.g., sorting and searching), and where the introduction of randomness influences only the ease and efficiency of finding the solution. For some problems where trivial exhaustive search is not feasible, probabilistic algorithms can be applied, giving a result that is correct with a probability less than one (e.g., primality testing, string equality testing). The probability of failure can be made arbitrarily small by repeated applications of the algorithm.
Such algorithms can often solve a problem far faster than it could be solved deterministically. Example: randomized primality testing using the Miller-Rabin algorithm.
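As an illustration, a compact Python sketch of the Miller-Rabin test is given below; the number of random witnesses k is a tunable parameter, and repeating the trials makes the failure probability arbitrarily small, as noted above.

import random

def is_probably_prime(n, k=20):
    # Miller-Rabin probabilistic primality test with k random witnesses.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(k):                      # repeated trials shrink the error
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # a is a witness: n is composite
    return True                             # probably prime (error <= 4^-k)

print(is_probably_prime(104729))   # True (104729 is prime)
print(is_probably_prime(104730))   # False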
Application:
Graph algorithms like random walks and graph colouring.
Computational geometry for algorithms like randomized incremental
construction.
Optimization problems where randomness can lead to better solutions.
3.1.2 Applications of Probabilistic Algorithms in DAA:
Example: Probabilistic graphical models for probabilistic reasoning
and decision making.
3.1.3 Advantages:
Versatility: They can handle complex problems where exact solutions are
computationally prohibitive.
3.2 Parallel Algorithms
3.2.1 Characteristics of Parallel Algorithms
There are various characteristics of parallel algorithms, which are as follows:
Deterministic versus nondeterministic − Only deterministic algorithms are implementable on real machines. Our study is confined to deterministic algorithms with polynomial time complexity.
Computational Granularity − Granularity decides the size of data
items and program modules used in the computation. In this sense, we
also classify algorithms as fine-grain, medium-grain, or coarse-grain.
Parallelism profile − The distribution of the degree of parallelism in an
algorithm reveals the opportunity for parallel processing. This often
affects the effectiveness of the parallel algorithms.
Communication patterns and synchronization requirements −
Communication patterns address both memory access and interprocessor
communications. The patterns can be static or dynamic, depending on
the algorithms. Static algorithms are more suitable for SIMD or pipelined
machines, while dynamic algorithms are for MIMD machines. The
synchronization frequency often affects the efficiency of an algorithm.
Uniformity of the operations − This refers to the types of fundamental
operations to be performed. If the operations are uniform across the data
set, SIMD processing or pipelining may be more desirable. Otherwise, randomly structured algorithms are more suitable for MIMD processing. Other related issues include data types and the precision desired.
Memory requirement and data structures − In solving large-scale
problems, the data sets may require huge memory space. Memory
efficiency is affected by data structures chosen and data movement
patterns in the algorithms. Both time and space complexities are key
measures of the granularity of a parallel algorithm.
3.2.2 Types of Parallel Algorithms
1. Task Parallelism: Task parallelism divides a task into smaller sub-tasks that can be executed concurrently. Each sub-task may operate on different data or parts of the problem. Example: matrix multiplication, where different threads compute different rows or columns concurrently (a sketch follows the list below).
Application:
Parallel sorting algorithms like parallel merge sort.
Image and video processing where different regions or frames
can be processed concurrently.
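As a small illustration of task parallelism, the sketch below computes each row of a matrix product as an independent sub-task using Python's multiprocessing pool; the function names are our own, and a real implementation would use an optimized library instead.

from multiprocessing import Pool

def row_times_matrix(args):
    # sub-task: compute one row of the product A x B
    row, B = args
    return [sum(row[k] * B[k][j] for k in range(len(B)))
            for j in range(len(B[0]))]

def parallel_matmul(A, B, workers=4):
    # each row of A is an independent sub-task executed concurrently
    with Pool(workers) as pool:
        return pool.map(row_times_matrix, [(row, B) for row in A])

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))   # [[19, 22], [43, 50]]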
1. Parallel Sorting Algorithms: Algorithms like parallel merge sort, quicksort, and parallel radix sort distribute sorting operations across multiple processors (see the sketch below).
Application: Sorting large datasets in databases, search engines, and
data analytics platforms where efficiency in sorting impacts overall
performance.
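A minimal Python sketch of the chunk-sort-merge idea behind parallel merge sort is shown below; the chunking scheme and worker count are illustrative assumptions, not a tuned implementation.

from multiprocessing import Pool
from heapq import merge

def parallel_sort(data, workers=4):
    # split the data into chunks, sort each chunk in a separate process,
    # then merge the sorted chunks (the classic parallel merge sort idea)
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)
    return list(merge(*sorted_chunks))

if __name__ == "__main__":
    print(parallel_sort([5, 2, 9, 1, 7, 3, 8, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]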
2. Parallel Graph Algorithms: Algorithms such as parallel breadth-first
search (BFS), parallel depth-first search (DFS), and parallel shortest path
algorithms distribute graph traversal and computation across multiple
processors.
Application: Social network analysis, route planning in transportation
networks, and recommendation systems.
Advantages:
Scalability: Parallel algorithms allow systems to scale with the addition of more processors, enhancing throughput and handling larger datasets.
Challenges:
Complexity: Designing parallel algorithms requires managing synchronization, load balancing, and communication overhead effectively.