Algorithms and Data Structures 2014
Exercises and Solutions Week 10
1 Dijkstra’s shortest path algorithm
Dijkstra’s shortest path algorithm will not work on a graph with negative edges. Illustrate this
fact by an example.
Solution Starting from A in the graph below, Dijkstra’s algorithm first visits B (at distance 1), setting its predecessor to A, and finalizes it. It then visits D and C, but by that point B has already been finalized, and there are no unvisited nodes from which B could still be relaxed, so the shortest path to B will not be updated anymore. However, the shortest path to B goes through D and C and has length 2 − 2 + 0 = 0, so we conclude that the algorithm may not work on a graph with negative edges.
[Figure: directed graph with edges A→B (weight 1), A→D (weight 2), D→C (weight −2), and C→B (weight 0).]
Note that this graph does not contain negative cycles, so Bellman-Ford would find the shortest
paths in it.
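To make the failure concrete, here is a small Python sketch (a textbook Dijkstra with a finalized set; the adjacency-dictionary encoding is our own choice):

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra: a vertex popped from the heap is finalized and
    never updated again -- exactly the step that negative edges break."""
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    done = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, w in adj[u]:
            # a finalized vertex is never relaxed again
            if v not in done and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Edges as in the example: A->B (1), A->D (2), D->C (-2), C->B (0).
adj = {'A': [('B', 1), ('D', 2)], 'B': [], 'C': [('B', 0)], 'D': [('C', -2)]}
dist = dijkstra(adj, 'A')
print(dist['B'])  # 1, although the path A->D->C->B has length 0
```

B is finalized at distance 1 before the cheaper detour through D and C is discovered, so the wrong answer survives.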
2 Minimal Spanning Trees
Prim vs Kruskal
A popular alternative to Prim’s method is Kruskal’s algorithm. To construct an MST for a given graph G = (V, E), Kruskal starts with a forest containing all vertices of G but no edges, and extends this forest stepwise according to the following procedure:
repeatedly add the lightest edge e from E that does not produce a cycle.
In other words, it builds the MST edge by edge, simply selecting the cheapest edge at each moment while carefully avoiding the introduction of cycles. Like Prim’s method, Kruskal’s is a greedy algorithm: every choice it makes is based only on the information directly available at that moment.
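The procedure can be sketched in Python as follows (a hedged sketch: the union-find bookkeeping and the (weight, u, v) edge format are our own choices, not prescribed by the text):

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v) tuples. Returns the list of MST edges."""
    parent = {v: v for v in vertices}

    def find(v):  # union-find root lookup with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):      # lightest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                   # accept only if no cycle is created
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Tiny example: the weight-3 edge A-C is refused because it closes a cycle.
mst = kruskal('ABC', [(1, 'A', 'B'), (3, 'A', 'C'), (2, 'B', 'C')])
```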
• Suppose we want to find the minimum spanning tree of the following graph.
8 7
B C D
4 2 9
4
A 11 I 14 E
7 6
8 10
1 2
H G F
1
1. Run Kruskal’s algorithm. Show which edges are considered at each intermediate state,
and indicate whether an edge is accepted or refused. The latter will happen if the
cheapest edge at that moment creates a cycle.
2. Run Prim’s algorithm on the same graph, starting with node A as root.
• Choosing candidate edges: In Prim’s MST algorithm, at every turn the lightest edge is
chosen. Suppose that, at a certain moment, there are two edges with the same lowest weight.
Moreover, suppose that these two edges do not introduce a cycle when they are both added
to the intermediate tree. Do we still get an MST if we add these edges simultaneously? If
not, give a counterexample. What about Kruskal’s algorithm?
Solution Consider the following graph.
[Figure: graph with edges A–B (weight 2), A–C (weight 2), B–D (weight 1), and C–D (weight 1).]
Choosing A as the root node, Prim’s algorithm will have both B and C as its candidates after the first iteration, and the relevant edges have equal weight 2. However, adding both of them rules out an MST: any MST of this graph contains both weight-1 edges and only one of the weight-2 edges (total weight 4), while adding both weight-2 edges forces a spanning tree of weight 5. Thus, if two candidate edges in Prim’s algorithm tie for the lowest weight, we cannot always add both of them.
In Kruskal’s algorithm we can add both: if we choose to add one of them, the other remains a candidate, since by assumption adding both does not introduce a cycle.
• Building the MST: Neither algorithm explicitly builds the constructed MST. However, this tree can be created from the parents list π. Give an algorithm that builds a tree out of π. You may choose a suitable format for representing the resulting tree yourself.
Solution The parents list π is itself a representation of the resulting tree.
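As a concrete illustration, π can be turned into an explicit tree in O(|V|) (a sketch; the children-dictionary output format is our own choice):

```python
def build_tree(pi):
    """pi: dict mapping each vertex to its parent (the root maps to None).
    Returns (root, children), where children[v] lists the children of v."""
    children = {v: [] for v in pi}
    root = None
    for v, parent in pi.items():
        if parent is None:
            root = v
        else:
            children[parent].append(v)
    return root, children

# pi as it could be produced by Prim's algorithm on a small example:
root, children = build_tree({'A': None, 'B': 'A', 'C': 'A', 'D': 'C'})
```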
Adding/Removing edges
Let G = (V, E) be a graph, and let T be an MST of G.
1. Suppose we extend G by adding an edge: i.e., we create E′ = E ∪ {e}, where e is an edge connecting two nodes of G. Give an algorithm that constructs, in O(|V|) time, a new tree T′ out of T such that T′ is an MST of G′ = (V, E′).
2. Suppose we remove one of the edges occurring in T from G, and let G′ be the result of this operation. Assume that G′ remains connected. Again, give an algorithm that creates a new MST T′ of G′, with running time O(|V| + |E|).
Solution
1. To derive a correct algorithm, we will use the fact that any MST can be obtained by running Kruskal’s algorithm (we prove this at the end). Note that the exercise does not strictly require the following correctness proof, but we provide it for clarification.
Let us try to create an MST for G′ by having Kruskal’s algorithm select edges from T. We are done if this works. However, it could be that at some point no edge from T can be selected because e has a lower weight than any unselected edge in T (this can only happen for e, because of our claim about Kruskal’s algorithm), so in that case we are forced to select e instead. Afterwards, we try to continue selecting edges from T, and eventually it will happen that adding the smallest unselected edge e′ of T to the tree creates a cycle, as a result of e being in that tree. Note that e′ has the largest weight of all the edges in that cycle, because of the order in which edges are added. We choose to leave out e′. A crucial observation is that if we had kept e′ and left out e instead, the selected set of edges would divide the graph into exactly the same components as the current set does. Therefore, we can now continue running Kruskal’s algorithm by selecting the remaining edges from T, and the correctness of this method is inherited from the correctness of Kruskal’s algorithm.
In short, we obtain an MST T′ for G′ by starting with T, adding e to it, and removing e′ from it (if T is already an MST for G′, then e = e′ and the result is indeed T). To find e′, run a DFS on the tree T with e added to it. Because we are dealing with a tree with one additional edge, this can be done in O(|V|). We have found the cycle if we arrive at a node A that is adjacent to an unfinished node B discovered earlier (ignoring the node we came from). Now we backtrack to B (also in O(|V|)) while keeping track of the heaviest edge seen so far, and at the end we remove that heaviest edge. This gives us an O(|V|) algorithm.
Finally, we show that any MST T for a graph G can be obtained by running Kruskal’s algorithm. Suppose that at some point the minimal unselected edge in T that does not create a cycle in the tree T′ under construction is some edge e, and suppose the minimal unselected edge in G that does not create a cycle in T′ is some edge e′. We know that w(e′) ≤ w(e), but we want to prove that w(e′) = w(e), i.e., that e can be selected by the algorithm. Suppose w(e′) < w(e). If we were to add e′ to T, the resulting graph would contain a cycle in which e′ is the largest edge, because T is minimal. Inductively, we know that all other edges in that cycle have already been selected, which leads to the contradiction that e′ cannot be selected at all. We conclude that T can be obtained by running Kruskal’s algorithm on G.
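The edge-swap step can be sketched as follows (an illustrative sketch under our own assumptions: T is given as an adjacency dictionary, and since the cycle created by e consists of e plus the tree path between its endpoints, a DFS with parent pointers lets us walk that path back in O(|V|)):

```python
def swap_edge(tree_adj, e):
    """tree_adj: adjacency dict {u: [(v, w), ...]} of the MST T.
    e = (a, b, w): the edge added to the graph.
    Returns the edge to remove from T + e (possibly e itself)."""
    a, b, w_e = e
    # DFS from a; remember each vertex's parent and the tree-edge weight used.
    parent = {a: (None, None)}
    stack = [a]
    while stack:
        u = stack.pop()
        if u == b:
            break
        for v, w in tree_adj[u]:
            if v not in parent:
                parent[v] = (u, w)
                stack.append(v)
    # Walk back from b to a: this path plus e is the cycle created by e.
    heaviest = (a, b, w_e)
    v = b
    while parent[v][0] is not None:
        u, w = parent[v]
        if w > heaviest[2]:
            heaviest = (u, v, w)
        v = u
    return heaviest  # remove this edge from T + e

# Example: T has edges A-B (3) and B-C (5); adding e = (A, C, 4) creates
# the cycle A-B-C-A, whose heaviest edge B-C is the one to remove.
tree = {'A': [('B', 3)], 'B': [('A', 3), ('C', 5)], 'C': [('B', 5)]}
out = swap_edge(tree, ('A', 'C', 4))
```

Both the DFS and the walk back touch each vertex at most once, so the whole procedure runs in O(|V|).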
2. The result U of removing an edge from T divides the vertices into two components. We can mark all vertices with a component identifier in O(|V|) time by performing either a BFS or a DFS on U. Now consider all edges in G′, and find the lightest one that connects the two components. Then T′ is U with that edge added, and we have described an O(|V| + |E|) algorithm to construct it. Note that this is actually O(|E|), as the graph is connected (so |E| ≥ |V| − 1).
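A sketch of this procedure (assuming T − e is given as an adjacency dictionary and G′ as an edge list; both formats are our own choices):

```python
from collections import deque

def reconnect(tree_adj, graph_edges, removed):
    """tree_adj: adjacency of T with the edge `removed` = (a, b) deleted.
    graph_edges: list of (w, u, v) edges of G' (without the removed edge).
    Returns the lightest edge crossing the two components of T - removed."""
    a, b = removed
    # BFS from a marks one of the two components.
    comp = {a}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        for v, w in tree_adj[u]:
            if v not in comp:
                comp.add(v)
                queue.append(v)
    # Scan all edges of G' for the cheapest one crossing the cut.
    crossing = [(w, u, v) for w, u, v in graph_edges
                if (u in comp) != (v in comp)]
    return min(crossing)

# Example: path A-B-C-D with B-C removed; extra edges A-C (4) and A-D (5)
# are the crossing candidates, so A-C reconnects the tree.
tree = {'A': [('B', 1)], 'B': [('A', 1)], 'C': [('D', 3)], 'D': [('C', 3)]}
edges = [(1, 'A', 'B'), (3, 'C', 'D'), (4, 'A', 'C'), (5, 'A', 'D')]
new_edge = reconnect(tree, edges, ('B', 'C'))
```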
3 Bellman-Ford
The Single Source Shortest Path problem can be solved by dynamic programming, using the fol-
lowing recurrence relation:
δ0 (s, v) = ∞, for s ≠ v (1)
δk (s, s) = 0 for all k (2)
δk (s, v) = min{δk−1 (s, u) + w(u, v) | (u, v) ∈ E} (3)
1. Give a bottom-up (i.e. iterative) algorithm with running time O(|V ||E|).
2. A straightforward solution uses a two-dimensional matrix to memoize intermediate results.
Show that one obtains the same results by simply dropping all the subscripts. In that case it
is sufficient to store memoized values in a single one-dimensional array. What is the relation
between this solution and the Bellman-Ford algorithm?
Solution
1. The longest simple path in the graph can have up to |V| − 1 edges, so |V| − 1 is the number of iterations the algorithm needs to guarantee that all updates have propagated.
1: function SSSP(V , E, w, s)
2:   for v ∈ V do
3:     δ0 [v] ← ∞
4:   δ0 [s] ← 0
5:   for k from 1 to |V | − 1 do
6:     for v ∈ V do
7:       δk [v] ← δk−1 [v]
8:     for (u, v) ∈ E do
9:       d ← δk−1 [u] + w(u, v)
10:      if d < δk [v] then
11:        δk [v] ← d
The running time of the above algorithm is clearly O(|V ||E|): the outer loop runs |V | − 1 times, and each iteration relaxes all |E| edges.
2. The correctness of the algorithm follows from the fact that, for all k, δk [v] is an upper bound on the actual shortest path length δ(s, v), and this upper bound never increases from one iteration to the next. In fact, it decreases for some v as long as not all shortest paths have been found yet. Dropping the subscripts in the algorithm above preserves both properties. The resulting algorithm is precisely the Bellman-Ford algorithm.
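The single-array version can be written out as follows (a sketch; the edge-list format is our own choice):

```python
def bellman_ford(vertices, edges, source):
    """Single-array Bellman-Ford: dropping the subscript k is safe because
    every entry of dist remains an upper bound on the true distance."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# The negative-edge example from section 1: Bellman-Ford gets B right.
edges = [('A', 'B', 1), ('A', 'D', 2), ('D', 'C', -2), ('C', 'B', 0)]
dist = bellman_ford('ABCD', edges, 'A')
```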
4 Floyd-Warshall
Floyd-Warshall is an algorithm for finding all shortest paths in a weighted graph with positive
or negative edge weights (but with no negative cycles). The algorithm is based on the following
recursive formula.
δ0 (i, j) = w(i, j) (4)
δk (i, j) = min(δk−1 (i, j), δk−1 (i, k) + δk−1 (k, j)) (5)
1. As with the previous exercise, show that the subscripts can be dropped in a bottom-up computation of δ(i, j). Hence no additional matrix is needed, and the space complexity is O(n²).
2. How can you detect whether the original graph contained negative cycles? Give an ‘informal proof’ (plausible explanation) of the correctness of your method.
3. Extend Floyd-Warshall such that not only the lengths of all shortest paths are computed
correctly but also the shortest paths themselves can be reconstructed.
Solution
1. The subscripts can be dropped for reasons equivalent to the ones we gave for Bellman-Ford.
2. For any node in a negative cycle, the algorithm will find a negative path from that node to
itself. This does not happen when there are no negative cycles, so we can just check if the
value δk (i, i) is negative whenever it is calculated for arbitrary k and i.
3. For all vertices s and g, we introduce a reference πs [g] to the predecessor of g on the shortest
path from s to g. Initially, πs [g] = s whenever there is an edge from s to g; otherwise, it is
undefined. Now suppose we are applying the update rule (5) and find that
δk−1 (i, k) + δk−1 (k, j) < δk−1 (i, j)
for certain k, i, and j. This means the shortest path from i to j found so far goes through k, so we update πi [j] ← πk [j]. When the algorithm is finished, the shortest paths from any source vertex s can be reconstructed using πs , in the same way as this was done after Dijkstra’s algorithm.
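Putting the pieces together, here is a sketch of Floyd-Warshall with predecessor tracking (our own encoding: vertices 0..n−1, a dict of directed edge weights, and pred[i][j] playing the role of πi [j]):

```python
INF = float('inf')

def floyd_warshall(n, w):
    """w: dict {(i, j): weight} of directed edges; vertices are 0..n-1.
    Returns dist and pred, where pred[i][j] is the predecessor of j
    on a shortest path from i."""
    dist = [[0 if i == j else w.get((i, j), INF) for j in range(n)]
            for i in range(n)]
    pred = [[i if (i, j) in w else None for j in range(n)]
            for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    pred[i][j] = pred[k][j]   # update rule (5) for pi
    return dist, pred

def path(pred, i, j):
    """Reconstruct the shortest path from i to j by following pred."""
    result = [j]
    while j != i:
        j = pred[i][j]
        result.append(j)
    return result[::-1]

# Example: the detour 0 -> 1 -> 2 (length 2) beats the direct edge 0 -> 2 (5).
dist, pred = floyd_warshall(3, {(0, 1): 1, (1, 2): 1, (0, 2): 5})
```

A negative diagonal entry dist[i][i] < 0 after (or during) the run signals a negative cycle through i, as discussed in part 2.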