2022-2023 ICPC Latin American Regional Programming Contest - Unofficial Editorial
This is the unofficial editorial for the 2022-2023 ICPC Latin American Regional Programming Contest. For any questions/suggestions, you can reach out to me personally through Codeforces or leave your comments in the blog of the editorial. Thank you!
B. Board Game
Instead of iterating over the players, let's compute, for each token, the minimum id of a player that can get it.
Let’s assume that for a given token, we want to check if there is any
player with id in [l, r] that can get it. If there is at least one valid player in
the range, it’s easy to see that the one maximizing Ai · X + Bi will be one of
them.
Let’s build a segment tree over the players. On each node, we store the
lines of all the players with id contained in the node’s range. Now, for a
given token (X, Y ), we can start descending the tree. Let's assume we are currently standing in one particular node of the tree, and let V be the maximum value of Ai · X + Bi over the lines stored in this node.
• If Y ≥ V , then none of the players in the node’s range can take this
token. We can ignore all of them.
• If Y < V , we know there is at least one player in this range that will
take it. Let’s check using recursion if there is at least one in the left
child of the node. If we couldn’t find it, then move to the right child.
It’s easy to see that in each level of the segment tree we visit at most 2
nodes, so we check O(log n) nodes in total.
The only thing remaining is to be able to quickly check the maximum
value in one node for a given token. To do that, we can use convex hull trick
on each node to be able to answer queries of maximum value of Ai · X + Bi
in O(log n).
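Below is a minimal sketch of this structure (all names are mine and input handling is omitted): buildHull builds the upper envelope of the lines stored in a node, maxAt evaluates it with binary search, and query performs the descent for a token (X, Y ).

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct Line { ll a, b; ll at(ll x) const { return a * x + b; } };

int n;
vector<Line> player;                // player[i] = (A_i, B_i)
vector<vector<Line>> hull;          // upper envelope per segment-tree node

bool bad(const Line& l1, const Line& l2, const Line& l3) {
    // with slopes strictly increasing, l2 is never strictly above both l1 and l3
    return (__int128)(l3.b - l1.b) * (l2.a - l1.a) >=
           (__int128)(l2.b - l1.b) * (l3.a - l1.a);
}

vector<Line> buildHull(vector<Line> ls) {
    sort(ls.begin(), ls.end(), [](const Line& p, const Line& q) {
        return p.a != q.a ? p.a < q.a : p.b > q.b; });
    vector<Line> h;
    for (auto& l : ls) {
        if (!h.empty() && h.back().a == l.a) continue;   // duplicate slope, smaller b
        while (h.size() >= 2 && bad(h[h.size() - 2], h.back(), l)) h.pop_back();
        h.push_back(l);
    }
    return h;
}

ll maxAt(const vector<Line>& h, ll x) {
    // for a fixed x, the values along the hull are unimodal in the index
    int lo = 0, hi = (int)h.size() - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (h[mid].at(x) < h[mid + 1].at(x)) lo = mid + 1; else hi = mid;
    }
    return h[lo].at(x);
}

void build(int v, int l, int r) {
    hull[v] = buildHull(vector<Line>(player.begin() + l, player.begin() + r + 1));
    if (l == r) return;
    int m = (l + r) / 2;
    build(2 * v, l, m);
    build(2 * v + 1, m + 1, r);
}

// smallest player id in [l, r] with A_i * X + B_i > Y, or -1 if there is none
int query(int v, int l, int r, ll X, ll Y) {
    if (maxAt(hull[v], X) <= Y) return -1;   // nobody here takes the token
    if (l == r) return l;
    int m = (l + r) / 2;
    int res = query(2 * v, l, m, X, Y);
    return res != -1 ? res : query(2 * v + 1, m + 1, r, X, Y);
}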
Final time complexity: O(n · log² n). This can also be improved to O(n · log n) by processing tokens in increasing order of X, but it was not necessary to get AC. Alternatively, you can also use a Li Chao tree to solve the problem.
C. City Folding
Let’s consider the process backwards (i.e. starting from the last fold).
If H is in the upper half of the paper, then in the previous step it was in the half that was folded on top of the other one. Otherwise, it was in the other half. By using this observation we can compute, for each step, whether or not we should fold the part in which H is.
Now let’s start the process from the beginning from P . We have some
cases:
• If in this moment P should be part of the half that will be folded, then
if P is in the left half we push L to the answer, or R otherwise.
Don’t forget that whenever we fold one part on top of the other, the order
of its elements is reversed.
D. Daily Trips
This was the easiest problem in the contest. The solution is to simulate the
process described in the statement. It can be done in several ways.
E. Empty Squares
This problem has many solutions.
It can be proven that the answer is at most 3. This allows us to solve it in O(1) by some annoying case analysis, or even in O(n²) by brute force with some observations.
Let me introduce a different solution that doesn't require any case analysis. We will solve the problem by using dynamic programming. In particular, f (i, a, b) denotes whether we can cover two segments of sizes 1 × a and 1 × b respectively, if we only have tiles of sizes 1 × 1, 1 × 2, . . . , 1 × i.
• We can skip the tile of size 1 × i, transitioning to f (i − 1, a, b).
• If i ≠ k, we can try to use the tile of size 1 × i to cover either the first segment (f (i − 1, a − i, b)) or the second one (f (i − 1, a, b − i)). Be careful to check that i ≤ a and i ≤ b respectively.
The solution then is to iterate over the values of a and b such that a is at most the size of the first part, and b is at most the size of the second part. If f (n, a, b) = true, then we can cover k + a + b squares. By taking the maximum value of k + a + b we can obtain over all possible values of a and b (denote it by v), the answer is n − v.
The number of states in our dp is O(n³). In order to save some memory, we can notice that f (i, a, b) only has transitions to f (i − 1, x, y) for some values of x and y. We can keep the dp values only for the previous value of i. By doing this, we reduce memory usage to O(n²).
This looks too slow. But we can notice that a + b ≤ n, so the total amount of states is actually at most n³/4. This is fast enough for n ≤ 1000.
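Below is a minimal sketch of this DP with the rolling-array optimization. The names s1 and s2 (sizes of the two empty segments around the tile of size k) and the sample values are assumptions of mine, not the real input format.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 10, k = 3, s1 = 4, s2 = 3;   // hypothetical instance
    // dp[a][b] = can we cover exactly a cells of the first segment and b of
    // the second, using distinct tile sizes among 1..i (excluding k)?
    vector<vector<char>> dp(s1 + 1, vector<char>(s2 + 1, 0));
    dp[0][0] = 1;
    for (int i = 1; i <= n; i++) {
        if (i == k) continue;            // the tile of size k is already placed
        auto nxt = dp;                   // keep only the previous layer of i
        for (int a = 0; a <= s1; a++)
            for (int b = 0; b <= s2; b++) {
                if (!dp[a][b]) continue;
                if (a + i <= s1) nxt[a + i][b] = 1;  // tile i into the first segment
                if (b + i <= s2) nxt[a][b + i] = 1;  // tile i into the second segment
            }
        dp = move(nxt);
    }
    int covered = k;                     // the tile of size k itself
    for (int a = 0; a <= s1; a++)
        for (int b = 0; b <= s2; b++)
            if (dp[a][b]) covered = max(covered, k + a + b);
    printf("%d\n", n - covered);         // minimum amount of empty squares
}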
F. Favorite Tree
Let’s assume the root of T2 is always vertex 1. We are going to iterate over
the root R of T1 , and we will assume it is matched with the root of T2 .
Now we need to solve the problem for rooted trees, assuming that the root
of both trees will always be included. In particular, let f (i, j) be a boolean value that denotes whether we can remove some nodes of the subtree of i in T1 , so that the remaining nodes form a tree that is isomorphic to the subtree of j in T2 . We want to find out the value of f (R, 1).
We are going to denote the set of children of vertex i in T1 as ch1 (i). In
the same way, we define ch2 (i) to represent children of i in T2 . Let’s see how
to compute f (i, j).
Let’s create a bipartite graph, where the left part is ch1 (i) and the right
one is ch2 (j). Given x ∈ ch1 (i) and y ∈ ch2 (j), we add an edge between x
and y if f (x, y) = true (we check this value with recursion). Notice that we
reduced the problem to finding a maximum matching. If we can match every
vertex of the right part, then f (i, j) = true. To compute this, we can use Hopcroft-Karp's algorithm.
Let’s analyze the time complexity of our solution. Notice that if we fix
two nodes v1 and v2 in T1 and T2 respectively, we will add one edge to the
bipartite graph while computing f (i, j) only when i is the parent of v1 and
j is the parent of v2 (and this happens at most once for a fixed root). This
means that the total number of edges in all the matchings required to compute f (R, 1) is O(n²).
The time complexity of running Hopcroft-Karp's algorithm is O(m · √n), where m is the amount of edges. In our case, m = n², so the cost of computing f (R, 1) for a fixed value of R is O(n² · √n). Given that we iterate over all the possible values of R, we conclude that the final complexity of our solution is O(n³ · √n), which clearly fits in the time limit.
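The following sketch shows the recursion f (i, j) with memoization. For brevity it uses Kuhn's algorithm for the matching instead of Hopcroft-Karp (plugging in Hopcroft-Karp gives the bound above); ch1, ch2 and memo are assumed to be built and reset for the current candidate root R.

#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> ch1, ch2;       // children lists of the rooted T1 and T2
vector<vector<int>> memo;           // -1 unknown, 0 false, 1 true; reset per root R

bool f(int i, int j);

// Kuhn's augmenting-path step: try to match child y of j (in T2) to some
// child of i (in T1); match[x] holds the y currently matched to x, or -1.
bool augment(int y, vector<vector<int>>& adj, vector<int>& match, vector<char>& used) {
    for (int x : adj[y]) {
        if (used[x]) continue;
        used[x] = 1;
        if (match[x] == -1 || augment(match[x], adj, match, used)) {
            match[x] = y;
            return true;
        }
    }
    return false;
}

bool f(int i, int j) {
    int& res = memo[i][j];
    if (res != -1) return res;
    int L = ch1[i].size(), R = ch2[j].size();
    if (R > L) return res = 0;      // not enough children to match
    // adj[y] = children x of i such that f(x, y) = true
    vector<vector<int>> adj(R);
    for (int y = 0; y < R; y++)
        for (int x = 0; x < L; x++)
            if (f(ch1[i][x], ch2[j][y])) adj[y].push_back(x);
    int matched = 0;
    vector<int> match(L, -1);
    for (int y = 0; y < R; y++) {
        vector<char> used(L, 0);
        matched += augment(y, adj, match, used);
    }
    return res = (matched == R);    // every child of j must be matched
}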
G. Gravitational Wave Detector
For each query point c, we need to check if there exist points a ∈ P and b ∈ Q such that:

(a + b)/2 = c

We can rewrite this as:

a + b = 2 · c
Given that P and Q are convex, the points that can be represented as a
sum of a ∈ P and b ∈ Q also form a convex polygon with O(n + m) vertices,
and we can construct it in linear time. This is called the Minkowski Sum of
P and Q.
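A standard sketch of the construction (the two-pointer merge of the edges sorted by angle) could look as follows, assuming both polygons are given in counter-clockwise order:

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct Pt {
    ll x, y;
    Pt operator+(Pt o) const { return {x + o.x, y + o.y}; }
    Pt operator-(Pt o) const { return {x - o.x, y - o.y}; }
};
ll cross(Pt a, Pt b) { return a.x * b.y - a.y * b.x; }

void reorder(vector<Pt>& v) {       // start at the lowest (then leftmost) vertex
    rotate(v.begin(), min_element(v.begin(), v.end(), [](Pt a, Pt b) {
        return make_pair(a.y, a.x) < make_pair(b.y, b.x); }), v.end());
}

vector<Pt> minkowski(vector<Pt> a, vector<Pt> b) {
    reorder(a); reorder(b);
    a.push_back(a[0]); a.push_back(a[1]);   // sentinels to close both cycles
    b.push_back(b[0]); b.push_back(b[1]);
    vector<Pt> res;
    size_t i = 0, j = 0;
    while (i < a.size() - 2 || j < b.size() - 2) {
        res.push_back(a[i] + b[j]);
        ll c = cross(a[i + 1] - a[i], b[j + 1] - b[j]);
        if (c >= 0 && i < a.size() - 2) i++;  // advance the edge with the smaller angle
        if (c <= 0 && j < b.size() - 2) j++;
    }
    return res;
}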
So, after computing the Minkowski sum of P and Q, we have reduced the problem to checking whether the point 2 · c lies inside a convex polygon. This can be done with binary search in O(log n).
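Reusing the Pt and cross helpers from the previous sketch, the inclusion test can be a binary search over the fan of triangles rooted at vertex 0 (here the boundary counts as inside):

// poly must be convex and given in counter-clockwise order
bool insideConvex(const vector<Pt>& poly, Pt p) {
    int n = poly.size();
    if (cross(poly[1] - poly[0], p - poly[0]) < 0) return false;      // right of first edge
    if (cross(poly[n - 1] - poly[0], p - poly[0]) > 0) return false;  // left of last edge
    int lo = 1, hi = n - 2;           // binary search for the wedge containing p
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (cross(poly[mid] - poly[0], p - poly[0]) >= 0) lo = mid; else hi = mid - 1;
    }
    return cross(poly[lo + 1] - poly[lo], p - poly[lo]) >= 0;
}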
Final time complexity of the solution is O(n + m + q · log(n + m)).
H. Horse Race
Let’s analyse the i -th small race:
• If the j -th horse took part of it, it means it’s final place should be at
least Wi .
• If the j -th horse didn’t take part of it, it’s final place can’t be exactly
Wi .
I. Italian Calzone & Pasta Corner
Notice that if a cell was already visited during a previous simulation, there is no need to run the simulation again with this value as the starting position: during that previous step, we visited all the positions we would visit now, plus some others too (so, the answer was bigger).
Although this optimization improves the performance of the algorithm, it doesn't actually improve its time complexity. There are cases in which it will perform O(n⁴ · log n) operations, but the constant factor is considerably smaller.
J. Joining a Marathon
Let’s first think about how to detect which photos are trash.
We can rewrite the equation of the position of the i -th runner as Si · t − Ti · Si . This is the equation of a line, which we will denote as fi , and we will denote by fi (v) the result of evaluating the i -th line at t = v.
For a given photo, if we sort all the runners by the value of fi (U ), we can
check if there exists an index j such that A ≤ fj (U ) ≤ B by using binary
search. There is a problem: we clearly can’t sort all the lines for each photo.
Instead, we will process the photos in increasing order of U , while keeping
the sorted list of runners in the meantime.
Let’s take two lines fi and fj such that Si > Sj . We will denote the value
of t such that fi (t) = fj (t) as xi,j . It’s not hard to see that for every value
U ∈ (−∞, xi,j ], the i -th runner will be before the j -th one in the sorted list
of lines. If we consider that U ∈ (xi,j , ∞), then the relative order between
them should be swapped.
Initially, we can sort the lines assuming that U = −∞. We will maintain
in a set the values of xi,j for the lines i and j that are located in consecu-
tive positions in the current list. Now, let’s start processing the photos by
increasing value of U . When processing a new photo, we should keep swapping consecutive lines while xi,j ≤ U (and update the corresponding values of
the affected positions in the set). After we finish updating all the necessary
positions, it’s easy to see that now the lines are sorted by the value of fi (U ),
so we can apply the binary search we mentioned before.
We can notice that when two consecutive values i and j are swapped, we will never swap them again, so the amount of times we do a swap is O(n²).
This means that the complexity of computing the photos that are trash is O((n² + m) · log n). Let's remove from the list all the photos that are not
trash, since they won’t affect any query.
Now let’s process the queries offline. Instead of counting the amount of
trash photos for each query, we will compute for which queries each photo is
trash.
It turns out we can use the exact same algorithm to keep the sorted array
of queries by the current value of U while iterating over the photos again.
After updating the order of queries for a given photo, we need to add one to
the value of all the queries such that fi (U ) < A or fi (U ) > B. Given that
the queries are ordered by fi (U ), all the positions in which we add one are
located in a prefix or a suffix of the list.
We need a data structure that allows us to perform two operations:
• Add one to all the positions in a prefix or a suffix of the list.
• Get the current value at a single position.
There are several ways to do this. For example, you can use an implicit treap, a lazy segment tree or even a Fenwick tree.
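For instance, here is a minimal sketch of the Fenwick-tree variant, which supports adding a value on a range (so in particular on a prefix or a suffix) and reading a single position:

#include <bits/stdc++.h>

// Fenwick tree over a difference array: range add + point query, 1-indexed.
struct Fenwick {
    std::vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long d) {             // internal point update
        for (; i < (int)t.size(); i += i & -i) t[i] += d;
    }
    void rangeAdd(int l, int r, long long d) { // add d on positions [l, r]
        add(l, d);
        if (r + 1 < (int)t.size()) add(r + 1, -d);
    }
    long long pointQuery(int i) {              // current value at position i
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};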
The final complexity of the solution is O((n² + q² + m) · log n).
K. Kind Baker
This problem probably has many different solutions. I’m going to describe
the approach I used.
First, let’s notice that if we use the machine n times, we can have at most
n
2 different different types of pieces. So we know that the value of n has to
be at least ⌈log2 k⌉. Let’s show that we can actually achieve this.
First, if k = 1, just print 0. Otherwise, let’s fill the first column and the
first row with all the n colors. Now we have 2 different types, so we can
subtract 2 from the value of k.
Given that k ≤ 4000, notice that n ≤ 12. Also, notice that 2^⌈n/2⌉ ≤ 100. Let's denote k1 = ⌊n/2⌋ and k2 = ⌈n/2⌉.
For each value i ∈ [0, k1 ), iterate over all the rows from 2 to 100, and if the id of the current row has the i-th bit set, add the color i to the entire row. You can do the same for the other k2 colors, but using columns instead of rows.
It’s not hard to notice that by doing this, we generate exactly 2n different
types of pieces. Also, thanks to the fact that we used every color in each of the
cells of the first row and column, every color forms a connected component.
So this solution is correct.
The only remaining part is to make the amount of different types exactly
equal to k. This can be done in several ways. For example, you can perform
the described process only until you detect that you already have k types, or
you can generate all the 2ⁿ types and then clear some cells until the amount
is reduced to exactly k.
Time complexity is O(k).
L. Lazy Printing
It can be proven that when performing an instruction, it’s always optimal to
print the maximal amount of letters possible. I will show how to compute this
value. During the explanation, lcp(i, j) is the length of the longest common
prefix between the suffixes of T that start in i and j respectively.
Let’s denote the next position to be printed by the machine as i. We
will iterate over all possible lengths (between 1 and D) of the string that the
machine can use now. For length k we can write k + lcp(i, i + k) letters with
this operation.
The value of lcp(i, i + k) can be calculated in several ways in O(log n)
or O(1) (for example hashing or suffix array), so the time complexity is
represented by the amount of times we compute lcp(i, i + k) for some values
of i and k.
At first glance, it looks like our algorithm's complexity is O(n · D), which would be too slow, but it turns out this is not true. When we compute the maximum value for a starting position, we perform D calls to the function lcp, but the key fact is to realize that the value we get is at least D (taking k = D already writes at least D letters). This means that after doing D calls, the value of i will increase by at least D.
From this observation we can conclude that the final complexity is O(n)
or O(n · log n), depending on how you compute lcp.
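A sketch of the whole greedy could look as follows. It assumes the task asks for the minimum number of instructions, and it uses hashing with binary search for lcp, so it is the O(n · log n) variant (the sample values of T and D are hypothetical):

#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;

int main() {
    string T = "abcabcabc";              // hypothetical input
    int D = 3;
    int n = T.size();
    const ull B = 1000003;               // hashing mod 2^64 (fine for a sketch)
    vector<ull> h(n + 1, 0), p(n + 1, 1);
    for (int i = 0; i < n; i++) {
        h[i + 1] = h[i] * B + T[i];
        p[i + 1] = p[i] * B;
    }
    auto sub = [&](int l, int len) { return h[l + len] - h[l] * p[len]; };
    auto lcp = [&](int i, int j) {       // lcp of the suffixes starting at i and j
        int lo = 0, hi = n - max(i, j);
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (sub(i, mid) == sub(j, mid)) lo = mid; else hi = mid - 1;
        }
        return lo;
    };
    int ops = 0, i = 0;
    while (i < n) {
        int best = 1;                    // printing a single letter always works
        for (int k = 1; k <= D && i + k <= n; k++)
            best = max(best, k + lcp(i, i + k));
        i += best;                       // k + lcp(i, i + k) never exceeds n - i
        ops++;
    }
    printf("%d\n", ops);
}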
M. Maze in Bolt
This problem can be solved by performing a BFS/DFS. Let’s define which
are going to be our states.
Given that rotating the nut horizontally C times is the same as not ro-
tating it at all, we can represent the current state with two values: the row
of the screw where the nut is located, and how many times we rotated the
nut horizontally (between 0 and C − 1). Notice that the amount of states is R · C, and this is at most 10⁵.
We can start the BFS/DFS from all valid positions in the first row at
the same time, and check if we can reach any position in the last row by
performing the operations described in the statement.
Checking if a state is valid can be done in several ways, but the easiest one is to iterate over each position of the nut naively, because the constraints allow us to do that. Time complexity is O(R · C²).
Don’t forget to also try to reverse the string S and try the BFS/DFS
again (representing the flip operation).
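As an illustration, here is an abstract sketch of the search. The transitions used (rotate one unit in either direction, or descend one row) and the valid() check are placeholders for the actual rules of the statement:

#include <bits/stdc++.h>
using namespace std;

int R, C;                                // screw height and circumference

bool valid(int row, int rot) {
    // placeholder: naively check every cell of the nut against the screw's
    // walls in O(C), which the constraints allow
    return true;
}

bool canPass() {
    vector<vector<char>> vis(R, vector<char>(C, 0));
    queue<pair<int, int>> q;
    for (int rot = 0; rot < C; rot++)    // multi-source start in the first row
        if (valid(0, rot)) { vis[0][rot] = 1; q.push({0, rot}); }
    while (!q.empty()) {
        auto [row, rot] = q.front(); q.pop();
        if (row == R - 1) return true;   // the nut reached the last row
        int nr[3] = {row, row, row + 1};
        int nc[3] = {(rot + 1) % C, (rot + C - 1) % C, rot};
        for (int d = 0; d < 3; d++)
            if (nr[d] < R && !vis[nr[d]][nc[d]] && valid(nr[d], nc[d])) {
                vis[nr[d]][nc[d]] = 1;
                q.push({nr[d], nc[d]});
            }
    }
    return false;
}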