Solution Colors
Proof. The operation au = min(au, av) can only make au smaller, never larger.
Proof. If at any point we operate on au and make it smaller than bu, then we find ourselves
in the conditions of Proposition 2. Of course, this does not matter if there is no solution
to begin with, but we cannot know that beforehand.
Proposition 4. When propagating color c from node u to node v, we may only pass
through nodes w having aw ≥ c and bw ≤ c.
Proof. If aw < c, then we simply cannot assign color c to node w, because the min
operation would select aw, not c. If bw > c, then by assigning color c to node w we would
be violating Proposition 3.
This is the crux of the solution: propagating a color c from nodes u that have it (au = c)
to nodes v that need it (bv = c).
Definition 1. A node v can be satisfied if there exists a node u with au = bv and a path
u → v such that all nodes w on the path have aw ≥ bv and bw ≤ bv. Node u is said to be
a source node for v.
Note that nodes v having av = bv are trivially satisfied. The path contains only node v
itself and no operations are necessary.
Definition 2. A color c can be satisfied if every node v having bv = c can be satisfied.
Proposition 5. Coloring a can be changed into b if and only if every node can be satisfied.
Proof. The negative half is easy: If there exists a node v that cannot be satisfied, then
either (a) we will not be able to change av into bv or (b) we can only change it by making
aw = bv < bw somewhere along the way, thus violating Proposition 3.
To prove the positive, constructive half, we remark that propagating colors changes the
graph. By making the value of aw smaller for an arbitrary node w while satisfying a node
v, we are making it harder to obey the condition aw ≥ bv′ later when we are attempting
to satisfy another node v′.
Fortunately, the fix is simple. We consider colors in decreasing order, from the largest
value in b to the smallest. Suppose at some moment we are propagating the value c. This
may affect some nodes w having aw > c by making aw = c (“the change”). However, this
will not be a problem when propagating a future color d < c. There are three possible
cases:
1. Node w was accessible before the change, meaning aw ≥ d and bw ≤ d. After the
change, aw = c > d and bw is unchanged, so node w is still accessible after the
change.
2. Node w was inaccessible before the change because aw < d. Propagation can only
decrease aw, so aw < d still holds and node w is still inaccessible after the change.
3. Node w was inaccessible before the change because bw > d. Since the change did
not alter bw, only aw, node w is still inaccessible after the change.

In every case, the change made while propagating c never blocks a later color d < c.
Hence, processing the colors in decreasing order, every node that can be satisfied is
indeed satisfied, which completes the proof.
Next we discuss how to implement this for the various graph types given in the statement.
Complete graph
In a complete graph the path u → v is simply the edge (u, v). It is never necessary
to visit intermediate nodes, because changing their colors can only make the problem
harder, never easier.
Thus node v can be satisfied if there exists a node u with au = bv . Globally, the problem
admits a solution if:
1. au ≥ bu for all u;

2. b ⊆ a (we can view a and b as sets by considering their distinct elements).
The time complexity is O(N²) because we still need to read past the edges of the graph
in order to get to the next test case. Deciding the satisfiability itself takes O(N) time.
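As a rough illustration, here is a minimal sketch of this check in C++; the function name
and container choices are ours, not part of the original solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the complete-graph check: a solution exists iff a_u >= b_u for
// every node and every color used in b also appears somewhere in a.
bool solvableCompleteGraph(const vector<int>& a, const vector<int>& b) {
    unordered_set<int> colorsOfA(a.begin(), a.end());   // view a as a set of colors
    for (size_t u = 0; u < a.size(); u++) {
        if (a[u] < b[u]) return false;                  // Proposition 3 would be violated
        if (!colorsOfA.count(b[u])) return false;       // b must be contained in a
    }
    return true;
}
```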
Chain

When all the nodes lie on a chain, we will view the graph as a pair of arrays a and b and
paths as ranges in those sequences. We can satisfy an index i if
(a) there exists an index l ≤ i with al = bi such that ak ≥ bi and bk ≤ bi for all l ≤ k ≤ i
(informally, we propagate the color from the left), or

(b) there exists an index r ≥ i with ar = bi such that ak ≥ bi and bk ≤ bi for all i ≤ k ≤ r
(informally, we propagate the color from the right).
We explain how to handle the left side. One approach that is easy to formulate uses
range minimum/maximum queries. We store pointers from each i to the closest l ≤ i
having al = bi. Then we can satisfy index i from the left if

1. min(al, al+1, ..., ai) ≥ bi (no index between l and i rejects color bi), and

2. max(bl, bl+1, ..., bi) ≤ bi (otherwise propagating color bi would make some indices
unsatisfiable).
The running time is O(N log N ) with a practical implementation of range minimum
queries. This can be improved to O(N ) using sorted stacks.
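The left-to-right pass translates fairly directly into code. Below is a hedged sketch using
sparse tables; 0-based indices and colors in the range [1, N] are our assumptions, and the
O(N) sorted-stack variant is not shown.

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each index i, decide whether color b[i] can reach i from the left.
vector<bool> satisfiableFromLeft(const vector<int>& a, const vector<int>& b) {
    int n = a.size(), LOG = 1;
    while ((1 << LOG) < n) LOG++;
    // Sparse tables: mn[k][i] = min(a[i..i+2^k-1]), mx[k][i] = max(b[i..i+2^k-1]).
    vector<vector<int>> mn(LOG + 1, vector<int>(n)), mx(LOG + 1, vector<int>(n));
    for (int i = 0; i < n; i++) { mn[0][i] = a[i]; mx[0][i] = b[i]; }
    for (int k = 1; k <= LOG; k++)
        for (int i = 0; i + (1 << k) <= n; i++) {
            mn[k][i] = min(mn[k - 1][i], mn[k - 1][i + (1 << (k - 1))]);
            mx[k][i] = max(mx[k - 1][i], mx[k - 1][i + (1 << (k - 1))]);
        }
    auto rangeMin = [&](int l, int r) {            // min of a[l..r], inclusive
        int k = __lg(r - l + 1);
        return min(mn[k][l], mn[k][r - (1 << k) + 1]);
    };
    auto rangeMax = [&](int l, int r) {            // max of b[l..r], inclusive
        int k = __lg(r - l + 1);
        return max(mx[k][l], mx[k][r - (1 << k) + 1]);
    };

    vector<int> last(n + 1, -1);                   // last[c] = closest l <= i with a[l] = c
    vector<bool> ok(n, false);
    for (int i = 0; i < n; i++) {
        last[a[i]] = i;
        int l = last[b[i]];
        if (l != -1 && rangeMin(l, i) >= b[i] && rangeMax(l, i) <= b[i])
            ok[i] = true;                          // color b[i] reaches i from the left
    }
    return ok;
}
```

An index i is then satisfiable if either this pass or the symmetric pass run on the reversed
arrays accepts it.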
Star graph
As before, we assume that au ≥ bu for all nodes and that all values in b also appear in a.
Then the root r is satisfiable because we can propagate br along a direct edge if needed.
Furthermore, there are only three ways to satisfy a leaf v:

1. av = bv, so v is already satisfied;

2. ar = bv, and we propagate the color from the root along the edge (r, v);

3. some other leaf u has au = bv, and we propagate the color along the path u → r → v,
which additionally requires ar ≥ bv and br ≤ bv.

In theory, case (3) can mean that ar and br must have the maximum and minimum values
in b. Checking this condition explicitly is not necessary and can be tricky in practice.
For example, the nodes having the minimum value in b may already be satisfied (case 1
above).
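A hedged sketch of the resulting check, assuming node 0 is the root and nodes 1..N − 1
are the leaves; the case numbering follows the three cases above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Star graph: node 0 is assumed to be the root r, nodes 1..n-1 are leaves.
bool solvableStar(const vector<int>& a, const vector<int>& b) {
    int n = a.size(), r = 0;
    for (int u = 0; u < n; u++)
        if (a[u] < b[u]) return false;              // Proposition 3 must hold everywhere
    set<int> leafColors(a.begin() + 1, a.end());    // colors available at the leaves
    // The root needs a supplier among the leaves if a_r != b_r.
    if (a[r] != b[r] && !leafColors.count(b[r])) return false;
    for (int v = 1; v < n; v++) {
        if (a[v] == b[v]) continue;                 // case 1: already satisfied
        if (a[r] == b[v]) continue;                 // case 2: the root supplies the color
        // case 3: another leaf supplies b_v and the root lets it pass through
        if (leafColors.count(b[v]) && a[r] >= b[v] && b[r] <= b[v]) continue;
        return false;
    }
    return true;
}
```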
Small tree
Trees have M = N − 1 edges. When the sum of N² over all test cases is small, an O(MN) approach works.
Please see the section “Small graph” below.
Permutation tree
If b is a permutation of a, then for every node v there exists exactly one possible source
node u and a single path u → v. For u to be a source node, we must check that:
1. min{aw : w ∈ u → v} ≥ bv, and

2. max{bw : w ∈ u → v} ≤ bv.
Thus, the solution reduces to path minimum and maximum queries. We discuss the
minimum case. One approach is to choose an arbitrary root r and define

A[v][k] = the minimum of aw over node v and its 2^k − 1 nearest ancestors,

and analogously B[v][k] as the maximum of bw over the same 2^k nodes. Since k ≤ log N,
we need O(N log N) space to store A and B. We can also compute them in O(N log N),
specifically

A[v][k] = min(A[v][k − 1], A[u][k − 1]), where u is the 2^(k−1)-th ancestor of v,

and symmetrically for B.
We can then compute the lowest common ancestor l for every pair (u, v) and compute
the path minimum by considering the paths (u, l) and (v, l). In turn, the answer for each
path can be computed by considering two overlapping chains whose size is a power of 2
and which cover the path completely.
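Below is a self-contained sketch of these path queries using binary lifting. The exact
table layout (minA[k][v] and maxB[k][v] covering node v and its 2^k − 1 nearest ancestors),
the rooting at node 0 and the recursive DFS are our choices; the two overlapping
power-of-2 blocks follow the description above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Path minimum of a and path maximum of b on a tree, via binary lifting.
// minA[k][v] / maxB[k][v] cover node v and its 2^k - 1 nearest ancestors.
struct PermutationTreeQueries {
    int n, LOG;
    vector<vector<int>> adj, up, minA, maxB;
    vector<int> depth, a, b;

    PermutationTreeQueries(const vector<vector<int>>& g,
                           const vector<int>& a_, const vector<int>& b_)
        : n(g.size()), adj(g), depth(n), a(a_), b(b_) {
        LOG = 1;
        while ((1 << LOG) < n) LOG++;
        up.assign(LOG + 1, vector<int>(n));
        minA.assign(LOG + 1, vector<int>(n));
        maxB.assign(LOG + 1, vector<int>(n));
        dfs(0, 0);                                    // root the tree at node 0
        for (int k = 1; k <= LOG; k++)
            for (int v = 0; v < n; v++) {
                up[k][v] = up[k - 1][up[k - 1][v]];
                minA[k][v] = min(minA[k - 1][v], minA[k - 1][up[k - 1][v]]);
                maxB[k][v] = max(maxB[k - 1][v], maxB[k - 1][up[k - 1][v]]);
            }
    }
    void dfs(int v, int p) {                          // recursive; fine as a sketch
        up[0][v] = p; minA[0][v] = a[v]; maxB[0][v] = b[v];
        for (int w : adj[v])
            if (w != p) { depth[w] = depth[v] + 1; dfs(w, v); }
    }
    int kthAncestor(int v, int k) const {
        for (int j = 0; k; k >>= 1, j++)
            if (k & 1) v = up[j][v];
        return v;
    }
    int lca(int u, int v) const {
        if (depth[u] < depth[v]) swap(u, v);
        u = kthAncestor(u, depth[u] - depth[v]);
        if (u == v) return u;
        for (int k = LOG; k >= 0; k--)
            if (up[k][u] != up[k][v]) { u = up[k][u]; v = up[k][v]; }
        return up[0][u];
    }
    // (min of a, max of b) on the chain from x up to its ancestor anc, inclusive:
    // two overlapping blocks of size 2^k cover the chain completely.
    pair<int, int> chainMinMax(int x, int anc) const {
        int len = depth[x] - depth[anc] + 1, k = __lg(len);
        int y = kthAncestor(x, len - (1 << k));
        return {min(minA[k][x], minA[k][y]), max(maxB[k][x], maxB[k][y])};
    }
    // (min of a, max of b) on the whole path between u and v
    pair<int, int> pathMinMax(int u, int v) const {
        int l = lca(u, v);
        auto [m1, M1] = chainMinMax(u, l);
        auto [m2, M2] = chainMinMax(v, l);
        return {min(m1, m2), max(M1, M2)};
    }
};
```

Since b is a permutation of a, the unique source for node v is the node holding color bv
in a (a lookup table built beforehand); v is then satisfiable iff pathMinMax(source, v)
returns a minimum ≥ bv and a maximum ≤ bv.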
Small graph
For small graphs, an O(MN) approach is sufficient. Therefore, we can afford to run up
to N depth-first searches, one from each node v. Each search runs in O(M + N) and
visits only nodes w having aw ≥ bv and bw ≤ bv. Node v is satisfiable if and only if the
search encounters a node with aw = bv.
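A hedged sketch of this bounded search; the adjacency-list representation and the
iterative DFS are our choices.

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(MN): run one bounded DFS per node v, visiting only nodes w with
// a_w >= b_v and b_w <= b_v, and look for a node supplying color b_v.
bool solvableSmallGraph(const vector<vector<int>>& adj,
                        const vector<int>& a, const vector<int>& b) {
    int n = a.size();
    for (int v = 0; v < n; v++) {
        if (a[v] < b[v]) return false;            // Proposition 3
        vector<bool> seen(n, false);
        vector<int> stk = {v};
        seen[v] = true;
        bool found = false;
        while (!stk.empty()) {
            int w = stk.back(); stk.pop_back();
            if (a[w] == b[v]) { found = true; break; }
            for (int x : adj[w])
                if (!seen[x] && a[x] >= b[v] && b[x] <= b[v]) {
                    seen[x] = true;
                    stk.push_back(x);
                }
        }
        if (!found) return false;                 // node v cannot be satisfied
    }
    return true;
}
```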
General graph
For a color c, define the subgraph Gc = (Vc, Ec), where:

• Vc = {u ∈ V | au ≥ c and bu ≤ c},

• Ec = {(u, v) ∈ E | u, v ∈ Vc}.
Suppose we construct a disjoint-set forest for Gc. Then color c is satisfiable if, for every
node v having bv = c, there exists a node u having au = c in the same connected component
as v.
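As an illustration, here is a sketch of this per-color check with a plain union-find; it
assumes the condition au ≥ bu has already been verified globally, and edges is simply
the edge list of the graph.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {                                       // plain union-find with path compression
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int x, int y) { p[find(x)] = find(y); }
};

// Is color c satisfiable? Build the forest of G_c and check that every node
// needing c shares a component with a node supplying c.
bool colorSatisfiable(int n, int c, const vector<pair<int,int>>& edges,
                      const vector<int>& a, const vector<int>& b) {
    DSU dsu(n);
    for (auto [u, v] : edges)                      // keep only the edges of G_c
        if (a[u] >= c && b[u] <= c && a[v] >= c && b[v] <= c)
            dsu.unite(u, v);
    set<int> sources;                              // components containing some a_u = c
    for (int u = 0; u < n; u++)
        if (a[u] == c) sources.insert(dsu.find(u));
    for (int v = 0; v < n; v++)
        if (b[v] == c && !sources.count(dsu.find(v))) return false;
    return true;
}
```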
Now let us consider the next color in decreasing order, d < c. In similar fashion we wish
to obtain Gd = (Vd, Ed), build its disjoint-set forest and decide the satisfiability of d.
How can we achieve this? Simply rebuilding the forest from scratch takes O(M α(N)),
yielding a slow running time of O(MN α(N)) for all the colors.
Interestingly, each node (along with its incident edges) is added to and removed from the
graph exactly once. The key is to build the forest of d from the forest of c, or some other
forest we have previously built, to save time. Thus, we have reduced the problem to
offline dynamic connectivity, where we maintain a forest throughout the entire algorithm
and perform M edge additions and M edge removals on it. This can be done theoretically
in O(log N) per operation, but the implementation is impractical here. We present two
different approaches, achieving O(log² N) and O(α(N)√M) per operation, respectively.
Disjoint-set forests with undo support
Consider an edge (u, v) with its initial values au, av, bu, bv. Suppose that, at some point
during the algorithm, we propagate a value c across the edge. What can we say about c?
First, c ≤ au and c ≤ av, because we started with au and av and values never increase.
Second, c ≥ bu and c ≥ bv, otherwise we would violate Proposition 3. Thus, we can
introduce two notations t1 and t2 and say that

t1 := max(bu, bv) ≤ c ≤ min(au, av) =: t2.
The letter t is not accidental. We can think of colors as moments of time and say that edge
(u, v) is “in existence” between times t1 and t2 inclusively. We do this for all edges. Now,
in order to satisfy a color c, we wish to address the question: what edges are in existence
at time c? Then we move to the next color, update the edge list and its corresponding
disjoint-set forest, and repeat the question.
For this purpose, we construct a segment tree over the N time moments and insert each
interval [t1, t2] into it. For every interval we also store the originating edge (u, v). Then we
traverse the tree in depth-first order. When entering a tree node, we add all the edges stored
in that node to the disjoint-set forest. We use stacks to keep the history of the forest
data, specifically each forest node's rank and parent. This allows us, when exiting a tree
node, to remove its edges from the disjoint-set forest and revert to the state before entering
the node.
Finally, leaves in the segment tree correspond to single moments of time t, and at those
leaves we query the disjoint-set forest to decide if the color t is satisfiable.
The use of stacks makes it impractical to use path compression in our disjoint-set forest.
We still perform unions by rank, which achieves O(log N ) time per operation.
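A hedged sketch of such a forest: union by rank, no path compression, and a history
stack that allows the most recent unions to be undone in reverse order (the struct layout
and names are ours).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Disjoint-set forest with undo: union by rank only, so find() is O(log N)
// and every union touches O(1) memory cells that we can record and restore.
struct RollbackDSU {
    struct Change { int child, root, rootRankBefore; };
    vector<int> parent, rnk;
    vector<Change> history;

    RollbackDSU(int n) : parent(n), rnk(n, 0) {
        iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) const {                        // no path compression
        while (parent[x] != x) x = parent[x];
        return x;
    }
    void unite(int x, int y) {
        x = find(x); y = find(y);
        if (x == y) { history.push_back({-1, -1, 0}); return; }  // record a no-op too
        if (rnk[x] < rnk[y]) swap(x, y);
        history.push_back({y, x, rnk[x]});         // y gets attached under x
        parent[y] = x;
        if (rnk[x] == rnk[y]) rnk[x]++;
    }
    void undo() {                                  // revert the most recent unite()
        Change c = history.back(); history.pop_back();
        if (c.child == -1) return;
        parent[c.child] = c.child;
        rnk[c.root] = c.rootRankBefore;
    }
};
```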
Thus, there are M edges in the segment tree, each potentially occurring in O(log N)
nodes, and to process each occurrence we perform O(log N) operations on the disjoint-set
forest. The overall complexity is O(M log² N).
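And a hedged sketch of the traversal itself, reusing the RollbackDSU above (the two
sketches are meant to be compiled together). The buckets need[c] (nodes with bv = c)
and have[c] (nodes with au = c), as well as the recursion layout, are our own choices;
colors are assumed to lie in 1..C.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree over the time moments (colors) 1..C. Each edge is stored in the
// O(log C) tree nodes covering its lifetime [t1, t2]; a DFS over the tree adds
// the edges of a node on entry and undoes them on exit.
struct OfflineConnectivity {
    int C;
    vector<vector<pair<int,int>>> stored;          // edges per segment-tree node
    RollbackDSU dsu;                               // the rollback forest from above
    vector<vector<int>> need, have;                // need[c]: b_v = c, have[c]: a_u = c
    vector<bool> colorOk;

    OfflineConnectivity(int nodes, int colors)
        : C(colors), stored(4 * colors), dsu(nodes),
          need(colors + 1), have(colors + 1), colorOk(colors + 1, true) {}

    void addEdge(int u, int v, int t1, int t2) {   // edge usable for colors t1..t2
        if (t1 <= t2) add(1, 1, C, t1, t2, {u, v});
    }
    void add(int x, int l, int r, int ql, int qr, pair<int,int> e) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { stored[x].push_back(e); return; }
        int m = (l + r) / 2;
        add(2 * x, l, m, ql, qr, e);
        add(2 * x + 1, m + 1, r, ql, qr, e);
    }
    void solve() { solve(1, 1, C); }
    void solve(int x, int l, int r) {
        for (auto [u, v] : stored[x]) dsu.unite(u, v);
        if (l == r) {                              // leaf: decide satisfiability of color l
            set<int> sources;
            for (int u : have[l]) sources.insert(dsu.find(u));
            for (int v : need[l])
                if (!sources.count(dsu.find(v))) colorOk[l] = false;
        } else {
            int m = (l + r) / 2;
            solve(2 * x, l, m);
            solve(2 * x + 1, m + 1, r);
        }
        for (size_t i = 0; i < stored[x].size(); i++) dsu.undo();
    }
};
```

To use it, we would put every node u into have[au] and need[bu], call
addEdge(u, v, max(bu, bv), min(au, av)) for every edge, run solve(), and answer "yes" iff
au ≥ bu everywhere and colorOk[c] holds for every color c.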
Square root decomposition

Suppose we intend to build the disjoint-set forest for a color c from scratch. However, if
the number of nodes having au = c or bu = c is small, then we have expended O(M α(N))
effort for little benefit.
Instead, let us build a smaller forest, but one that we can keep using for a longer time.
Specifically, find a color d ≤ c such that there are O(√M) edges appearing and disappearing
in all the transitions from Ec to Ed. We call the interval [d, c] a block. Next,
build a disjoint-set forest F using all the edges in Ec ∩ · · · ∩ Ed, namely edges between
nodes u having au ≥ c and bu ≤ d. F is relevant to all the satisfiability checks for colors
d through c. Make a copy of F so we can reuse it multiple times.
In order to answer the satisfiability question for a color e ∈ [d, c], we augment F with
all the relevant edges, specifically all those between nodes u having au ≥ e and bu ≤ e.
Due to our choice of d, there are O(√M) edges to add and O(√M) nodes in whose
connectivity we are interested, so each color can be verified in O(α(N)√M) time.
Once we are done, discard F and move on to the next block, beginning with the next
color after d.
The total effort splits into two parts:

1. Block-level effort. There are O(√M) blocks and it takes O(M α(N)) to rebuild the
forest in each block.

2. Color-level effort. There are N colors and for each color we check connectivity in
O(α(N)√M).

Thus, the overall running time is O(α(N)M√M).
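For completeness, a hedged sketch of the block loop described above; it again reuses the
RollbackDSU sketch, the block budget B ≈ √M follows the text, and the Edge/need/have
bookkeeping (with colors again in 1..C) is our own arrangement.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Edge { int u, v, lo, hi; };                 // usable for every color in [lo, hi]

// Process colors in decreasing order, grouped into blocks with roughly B edge
// appearances/disappearances each; reuses RollbackDSU from the earlier sketch.
vector<bool> solveInBlocks(int n, int C, const vector<Edge>& edges,
                           const vector<vector<int>>& need,   // need[c]: nodes with b_v = c
                           const vector<vector<int>>& have) { // have[c]: nodes with a_u = c
    int B = max(1, (int)sqrt((double)edges.size()));
    vector<bool> ok(C + 1, true);
    vector<int> events(C + 2, 0);                  // edges appearing or disappearing at color c
    for (const Edge& e : edges)
        if (e.lo <= e.hi) { events[e.hi]++; events[e.lo]++; }

    for (int c = C; c >= 1; ) {
        int d = c, inBlock = events[c];            // grow the block [d, c] downward
        while (d > 1 && inBlock + events[d - 1] <= B) { d--; inBlock += events[d]; }

        RollbackDSU dsu(n);                        // forest F of edges alive in the whole block
        vector<int> partial;                       // edges alive only for part of the block
        for (int i = 0; i < (int)edges.size(); i++) {
            const Edge& e = edges[i];
            if (e.lo > e.hi || e.lo > c || e.hi < d) continue;   // never alive in [d, c]
            if (e.lo <= d && e.hi >= c) dsu.unite(e.u, e.v);     // alive throughout the block
            else partial.push_back(i);                           // O(B) of these by construction
        }
        for (int col = c; col >= d; col--) {       // answer every color of the block
            int added = 0;
            for (int i : partial)
                if (edges[i].lo <= col && col <= edges[i].hi) {
                    dsu.unite(edges[i].u, edges[i].v);
                    added++;
                }
            set<int> sources;
            for (int u : have[col]) sources.insert(dsu.find(u));
            for (int v : need[col])
                if (!sources.count(dsu.find(v))) ok[col] = false;
            while (added--) dsu.undo();            // restore F for the next color
        }
        c = d - 1;                                 // continue with the next block
    }
    return ok;
}
```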
This approach is not theoretically sound, because there may exist a color (even multiple
colors) with O(M) incident edges, in which case a block cannot be kept small. However,
it behaves well in practice.