Final Editorial - AUST IUPC 2025
The contest was intentionally slightly easier than usual. We hope that you liked the problems!
The contest was coordinated by Shahjalal Shohag and reviewed by Jubayer Nirjhor. Problems were
reviewed and selected by both of them.
The following is the list of all authors, developers, and testers.
Problem | Author | Developer | Tester
A. Max-Min Madness | Jinlong Han | Jinlong Han | Shaikat Hosen Rony
B. The Absolute MEX Challenge | Shahjalal Shohag | Al-Shahriar Tonmoy | Kawsar Hossain
C. Palindromic Palindrome Partition | Shahjalal Shohag | Mahdi Hasnat Siyam | Al-Shahriar Tonmoy
D. Strong Tree | Ayon Shahrier | Ayon Shahrier | Sabbir Rahman Abir
E. Aloy and the Forbidden Code | Shahjalal Shohag | Kawsar Hossain | Rudro Debnath
F. Rotating Painter | Tamim Ehsan | Tamim Ehsan | Kawsar Hossain
G. GCD and LCM in Perfect Sync | Jinlong Han, Shahjalal Shohag | Jinlong Han | Tasmeem Reza
H. Flip to Zero | Jubayer Nirjhor, Shahjalal Shohag, Al-Shahriar Tonmoy | Al-Shahriar Tonmoy | Mahdi Hasnat Siyam
I. The Art of Rearrangement | Shahjalal Shohag | Rudro Debnath | Al-Shahriar Tonmoy
J. No Duplicates | Sabbir Rahman Abir | Sabbir Rahman Abir | Ayon Shahrier
K. Primal Brackets | Shaikat Hosen Rony | Shaikat Hosen Rony | Rudro Debnath
We also tried to provide problems with multiple variations and a balanced difficulty, but problem K
turned out to be much harder than we expected.
The following is the list of the same problems but in sorted order of (expected) difficulty.
Problem | Expected Difficulty | Expected #AC | #AC (Main Contest) | Tags
E. Aloy and the Forbidden Code | Div3A | 130 | 130 | Giveaway
I. The Art of Rearrangement | Div2A | 120 | 127 | Adhoc
B. The Absolute MEX Challenge | Div2B | 80 | 114 | Constructive
K. Primal Brackets | Div2C | 50 | 10 | Adhoc, DP on Trees
F. Rotating Painter | Div2C | 40 | 42 | Geometry
G. GCD and LCM in Perfect Sync | Div2D | 30 | 19 | Number Theory, Inclusion-Exclusion
H. Flip to Zero | Div2D | 20 | 6 | DP, BFS, Data Structures
C. Palindromic Palindrome Partition | Div2D | 15 | 5 | DP, Strings
D. Strong Tree | Div2E | 10 | 4 | Trees, DSU, Segment Tree
A. Max-Min Madness | Div2E | 5 | 6 | Adhoc, Combinatorics
J. No Duplicates | Div2E | 5 | 0 | Counting, Data Structures, Implementation
Problem A. Max-Min Madness
One can prove that each valid operation strictly decreases rev(a) in the lexicographic order. This fact
guarantees that the process terminates after a finite number of steps.
To maximize the total number of operations, we need to “slow down” the decrease. The following greedy
strategy is optimal:
• If a1 > 0 (i.e. the smallest element is positive), simply perform Operation 1 on a1 by decrementing
it.
MTB Presents AUST Inter University Programming Contest
Feb 21-22, 2025
• Otherwise, when a1 = 0, let i be the smallest index such that ai > 0. Then perform Operation 2
on the indices 1, 2, . . . , i. This operation sets every aj for 1 ≤ j ≤ i to ai − 1, which is exactly the
lexicographic predecessor of the current a.
This strategy is optimal because every operation moves the state to its immediate predecessor in the
lexicographic order (when considering the reversed array), so no extra operations can be squeezed in
between.
To count the maximum number of operations, let’s analyze the scenario when all elements of the array
are equal.
Let f(i, j) denote the maximum number of operations that can be performed on an array of length i
where every element is equal to j, i.e., the number of operations needed to convert all elements to 0.
To construct the transition, we can consider the following:
• First we need to convert all elements of the first i − 1 positions to 0. So this will take f (i − 1, j)
operations.
• Then we perform Operation 2 on the indices 1, 2, . . . , i. This will convert all elements to j − 1. So
this will add 1 operation.
• Now we need to convert all elements to 0. So this will take f (i, j − 1) operations.
Define g(i, j) = f(i, j) + 1. The recurrence above then becomes g(i, j) = g(i − 1, j) + g(i, j − 1). The base
case is g(0, 0) = f(0, 0) + 1 = 0 + 1 = 1, and similarly g(i, 0) = g(0, i) = 1 for all i ≥ 0.
But this is exactly Pascal's triangle (rotated by 45 degrees).
Hence, we have

    g(i, j) = C(i + j, i),

where C(a, b) denotes the binomial coefficient "a choose b".
Or you can think of it like this: the above recurrence is exactly the recurrence for the number of ways
to go from point (0, 0) to point (i, j) if we can only go right or up. And that problem can be solved using
stars and bars: imagine i stars (right-moves) and j bars (up-moves); the number of such sequences is C(i + j, i).
So, the maximum number of operations on an array of length i with all entries equal to j is given by

    f(i, j) = C(i + j, i) − 1.
Using this combinatorial formula, we can compute the answer for a given array. The steps are the following:
which is equivalent to

    a1 + Σ_{i=2..n} (f(i, ai − 1) + 1) = a1 + Σ_{i=2..n} (C(i + ai − 1, i) − 1 + 1) = a1 + Σ_{i=2..n} C(i + ai − 1, i).
By precomputing factorials and inverse factorials, you can compute the answer easily in O(n + M ) time
where M is the maximum element in the array.
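The computation above can be sketched as follows. This is a hedged sketch: the modulus 998244353 is an assumption borrowed from a later problem in this editorial (the actual statement may differ), and the function name is illustrative.

```python
# Sketch: answer = a_1 + sum_{i=2..n} C(i + a_i - 1, i), computed with
# precomputed factorials and inverse factorials in O(n + M).
MOD = 998_244_353  # assumed modulus

def max_operations(a):
    n = len(a)
    limit = n + max(a) + 1
    fact = [1] * limit
    for i in range(1, limit):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * limit
    inv_fact[-1] = pow(fact[-1], MOD - 2, MOD)  # Fermat inverse
    for i in range(limit - 2, -1, -1):
        inv_fact[i] = inv_fact[i + 1] * (i + 1) % MOD

    def nCr(N, R):
        if R < 0 or R > N:
            return 0
        return fact[N] * inv_fact[R] % MOD * inv_fact[N - R] % MOD

    ans = a[0] % MOD  # the a_1 term
    for i in range(2, n + 1):  # 1-indexed positions 2..n
        ans = (ans + nCr(i + a[i - 1] - 1, i)) % MOD
    return ans
```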
Problem B. The Absolute MEX Challenge
Continuing in this alternating fashion, we effectively append to the right in increasing order (assigning
1, 2, 3, . . . ) and to the left in decreasing order (assigning n − 1, n − 2, . . . ) until they meet in the middle.
So basically in the following format:
?, n − 1, n − 2, n − 3, . . . , . . . , 3, 2, 1, n
Here the ? denotes that p1 can be set to whichever number is left after filling the remaining indices, as
this index doesn't play a role in this construction.
This arrangement guarantees that the differences obtained contain all of {0, 1, 2, . . . , n − 2}, so the MEX
is at least n − 1.
Thus, this construction yields a valid permutation meeting the required condition.
Problem C. Palindromic Palindrome Partition
• Fix a palindromic substring at the beginning of the string. Because the sequence of lengths must be
a palindrome, the substring taken at the beginning forces a palindromic substring at the end of the
string with the same length.
• Remove these two substrings and then recursively partition the remaining middle part.
In other words, if we fix the left part (say, s[l . . . i]) as a palindrome, then the corresponding right part
(s[j . . . r]) is also determined (with j chosen so that the length equals that of s[l . . . i]). This fixed jump
from both ends ensures that the lengths of the outer segments match.
So for s[l . . . r], the transition is to loop over all possible left palindromes s[l . . . i] and then check if s[j . . . r]
is a palindrome with the same length.
It looks like the complexity is O(n³), but notice that if you fix l, then r is also fixed because you are
peeling off the same length from both ends. So the complexity is actually O(n²).
Finally, to check if a substring s[l . . . r] is a palindrome efficiently, we can use hashing or a simple O(n²)
DP: s[l . . . r] is a palindrome iff s[l] = s[r] and the inner substring s[l + 1 . . . r − 1] is a palindrome.
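The palindrome table just mentioned can be sketched as follows (a minimal illustration, not the reference implementation): intervals are filled in increasing order of length so that the inner interval is always ready.

```python
# pal[l][r] is True iff s[l..r] is a palindrome, built in O(n^2).
def palindrome_table(s):
    n = len(s)
    pal = [[False] * n for _ in range(n)]
    for length in range(1, n + 1):
        for l in range(n - length + 1):
            r = l + length - 1
            if length == 1:
                pal[l][r] = True
            elif length == 2:
                pal[l][r] = s[l] == s[r]
            else:
                # endpoints match and the inner substring is a palindrome
                pal[l][r] = s[l] == s[r] and pal[l + 1][r - 1]
    return pal
```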
Now, let’s move to the scoring aspect. Suppose that the inner partition (i.e., the partition of the substring
after removing the matching outer palindromes) has a score corresponding to x², where x is the number of
segments. When we add a matching pair of palindromic substrings to the ends, the number of segments
increases by 2. Notice that:

    (x + 2)² = x² + 4x + 4.

Thus, if the inner partition contributes a term x² (from the square of its segment count), adding two
segments contributes three parts:
• The original x²,
• A linear term 4x,
• A constant 4.
In our DP, we therefore need to track three components: the sum of squares (the actual scores), the linear
contribution (which arises when the number of segments increases), and the count of partitions.
The overall time complexity is O(n²).
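To make the three-component bookkeeping concrete, here is a tiny sketch (names are illustrative) of how a state (count, sum of x, sum of x²) is updated in O(1) when every partition it represents gains two segments:

```python
# A DP state stores, over all partitions it represents:
#   cnt = number of partitions, lin = sum of segment counts x, sq = sum of x^2.
# Extending every partition by a matching outer pair (x -> x + 2) uses
# (x + 2)^2 = x^2 + 4x + 4, so no individual partition is ever touched.
def extend_by_outer_pair(cnt, lin, sq):
    new_sq = sq + 4 * lin + 4 * cnt   # sum of (x + 2)^2
    new_lin = lin + 2 * cnt           # sum of (x + 2)
    return cnt, new_lin, new_sq
```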
Problem D. Strong Tree
DSU Solution:
We will maintain disjoint sets of connected vertices in the tree. Since it is a rooted tree, each of these
components will have a single vertex as its own root. We need to store the maximum possible strength of
a path from this root to any of the vertices in the component. Let’s call this value the “best-path” of the
component.
Initially, each vertex of the tree is in a separate set. We will iterate over the vertices in ascending
order of their ranks. Now for the current vertex, check all of its adjacent children. If a child has a lower
rank, we can consider the best-path from its component as a candidate. We can then choose up to two
candidates which have the maximum values to get the answer for our current vertex.
Afterwards, we need to merge the current vertex with the components of its adjacent vertices (including
the parent) which have lower ranks. The merging of components can be done in O(n log n) using DSU. We
will pre-calculate the strength of the paths from vertex-1 to every vertex, let’s call this the “prefix-sum”.
You can notice that the best-path of any component is to the vertex with the maximum prefix-sum.
The value of the best-path can thus be computed by subtracting the prefix-sum of the parent of the
component-root from the maximum prefix-sum of any vertex in the component.
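A hedged sketch of the DSU used here: besides the parent links, each component root stores the maximum prefix-sum inside its component, so a merge keeps the best-path information available in near-constant time. The class name and the exact stored aggregate are illustrative assumptions, not the reference code.

```python
# DSU where each root also stores the maximum prefix-sum in its component.
class DSU:
    def __init__(self, prefix):          # prefix[v] = strength of path 1 -> v
        self.parent = list(range(len(prefix)))
        self.best = prefix[:]            # max prefix-sum inside the component

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        self.parent[rv] = ru
        self.best[ru] = max(self.best[ru], self.best[rv])
```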
Segment Tree Solution:
Let’s maintain an array of length n with initial values of 0. Consider a very large value Inf and for each
vertex from 1 to n, add −Inf to the array for each vertex in its corresponding sub-tree. Then we will
iterate over the vertices in ascending order of their ranks. For the current vertex u, add Inf + au to each
vertex in its corresponding sub-tree. What this essentially does is, the array keeps the current strength of
the paths from vertex-1 to every vertex, where the value of the vertices with higher ranks are replaced
with Inf .
Now by checking all of its adjacent children, we can pick the highest value of a path from their subtrees.
We need to subtract the value up to the current vertex to get the accurate strength of the path and
consider it a candidate if it has a positive value. Since the vertices with higher ranks still contain −Inf
as their value, all of the paths that go through them will produce a negative value. We can then choose
up to two candidates which have the maximum values to get the answer for our current vertex.
To efficiently perform the sub-tree operations, we can use the DFS order of a rooted tree to convert the tree
into an array and get a sub-array range for each vertex and its sub-tree. Then we can use a segment-tree
on this array in O(n log n) for the addition/subtraction updates and the range-maximum queries.
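A compact sketch of a segment tree supporting the two operations needed on the DFS-order array, range add and range max (an illustrative implementation, not the reference one):

```python
# Lazy segment tree: range add, range max, both O(log n).
class SegTree:
    def __init__(self, n):
        self.n = n
        self.mx = [0] * (4 * n)   # subtree maximum
        self.lz = [0] * (4 * n)   # pending addition

    def _push(self, node):
        for child in (2 * node, 2 * node + 1):
            self.mx[child] += self.lz[node]
            self.lz[child] += self.lz[node]
        self.lz[node] = 0

    def add(self, l, r, val, node=1, nl=0, nr=None):   # add val on [l, r]
        if nr is None:
            nr = self.n - 1
        if r < nl or nr < l:
            return
        if l <= nl and nr <= r:
            self.mx[node] += val
            self.lz[node] += val
            return
        self._push(node)
        mid = (nl + nr) // 2
        self.add(l, r, val, 2 * node, nl, mid)
        self.add(l, r, val, 2 * node + 1, mid + 1, nr)
        self.mx[node] = max(self.mx[2 * node], self.mx[2 * node + 1])

    def query(self, l, r, node=1, nl=0, nr=None):      # max on [l, r]
        if nr is None:
            nr = self.n - 1
        if r < nl or nr < l:
            return float("-inf")
        if l <= nl and nr <= r:
            return self.mx[node]
        self._push(node)
        mid = (nl + nr) // 2
        return max(self.query(l, r, 2 * node, nl, mid),
                   self.query(l, r, 2 * node + 1, mid + 1, nr))
```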
Problem F. Rotating Painter
We notice that some parts of the lower polygon are covered by the top polygon near its vertices. However,
if we rotate the top polygon, some of these obstructed areas become visible. We can continue this process,
but no matter how we rotate the top polygon, we cannot reveal any area covered by its incircle. Effectively,
we replace the upper polygon with its incircle.
To determine the uncovered area, we must compute the area of the sections of the lower polygon not
covered by the incircle. So the problem reduces to a triangle-circle intersection problem, and we can find
the required areas using trigonometry and some basic geometry.
Let r be the radius of the incircle of the top polygon, a the apothem of the lower polygon, s its side
length, n its number of sides, and d its circumradius.
Here, an apothem of a polygon is a line segment drawn from the center of the polygon perpendicular to
the midpoint of one of its sides; essentially, it is the shortest distance from the center of the polygon to
any of its sides.
The circumradius of a polygon is the radius of its circumcircle. The circumcircle of a regular polygon is
the circle that passes through every vertex of the polygon.
The uncovered area depends on the relationship between r and other parameters:
• If r ≤ a, then:

    Uncovered Area = (1/2) × s × a × n − π r².    (1)
• If r ≥ d, then:
Uncovered Area = 0. (2)
In the remaining case a < r < d, the covered part of each of the 2n symmetric half-slices of the lower
polygon consists of a right triangle (red) and a circular sector (green):

    Area of the red triangle = (1/2) × √(r² − a²) × a

    Area of the green arc = φ × r²/2

And,

    φ = (2π/n)/2 − θ = π/n − θ.

Since

    cos θ = a/r  ⇒  θ = cos⁻¹(a/r),

we have

    φ = π/n − cos⁻¹(a/r).

Therefore, the area of the arc is

    (π/n − cos⁻¹(a/r)) × r²/2,

and the total covered area is

    2 × n × [ (1/2) × √(r² − a²) × a + (π/n − cos⁻¹(a/r)) × r²/2 ].

So, the final answer is

    (1/2) × s × a × n − 2 × n × [ (1/2) × √(r² − a²) × a + (π/n − cos⁻¹(a/r)) × r²/2 ].
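The closed form for the middle regime a ≤ r ≤ d can be sketched directly in code (function and parameter names are illustrative assumptions). A useful sanity check: at r = a the covered area collapses to π r², matching formula (1).

```python
import math

# n: number of sides, s: side length, a: apothem of the lower polygon,
# r: incircle radius of the upper polygon (assumed a <= r <= d).
def uncovered_area(n, s, a, r):
    triangle = 0.5 * math.sqrt(r * r - a * a) * a   # red right triangle
    phi = math.pi / n - math.acos(a / r)            # half-angle of the arc
    sector = phi * r * r / 2                        # green circular sector
    covered = 2 * n * (triangle + sector)
    return 0.5 * s * a * n - covered                # polygon area - covered
```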
Problem G. GCD and LCM in Perfect Sync
You can find the solution in multiple ways; however, the author thinks this approach is the easiest to
understand.
We need

    lcm(a1, a2, . . . , an) / gcd(a1, a2, . . . , an) = a1.

Write a1 = ∏ p^(ep) over its prime factorization, so that for each prime p dividing a1 the exponent of p
in a1 is ep. For any sequence {a1, a2, . . . , an}, let fi denote the exponent of p in ai. Notice that f1 = ep.
For the prime p, the exponent in gcd(a1, . . . , an) is min{fi} and the exponent in lcm(a1, . . . , an) is
max{fi}, so the contribution of p to

    lcm(a1, . . . , an) / gcd(a1, . . . , an)

is

    p^(max{fi} − min{fi}).

Since we need

    lcm(a1, . . . , an) / gcd(a1, . . . , an) = a1 = p^(ep) × (other primes),

this means

    max_{1≤i≤n} fi − min_{1≤i≤n} fi = ep.
For each prime p with exponent ep in a1 , let’s analyze how we can choose the other exponents f2 , f3 , . . . , fn
(for n − 1 positions).
The key observation is that for any valid sequence with f1 = ep , if we have max{fi } − min{fi } = ep , then
the possible ranges for [min{fi }, max{fi }] (or [m, M ]) are: [0, ep ], [1, ep + 1], . . . , [ep , 2ep ]
This gives us ep + 1 possible ranges.
For each range [m, M ], let’s try to count the number of valid sequences.
In general, we can choose each fi (for the n − 1 positions f2, f3, . . . , fn) to be any value within [m, M],
giving (M − m + 1)^(n−1) = (ep + 1)^(n−1) possibilities.
However, this overcounts sequences that don’t actually achieve both the minimum and maximum values
in their range. We need to apply inclusion-exclusion.
Case 1: m = 0
In this case, [m, M] = [0, ep].
• All f2, f3, . . . , fn can be chosen from [0, ep], giving (ep + 1)^(n−1) sequences. Since f1 = ep already
attains the maximum, we only need to subtract the sequences in which the minimum 0 never appears,
i.e. all of f2, . . . , fn lie in [1, ep]: ep^(n−1) of them. This leaves (ep + 1)^(n−1) − ep^(n−1) valid sequences.
Case 2: m = ep
In this case, [m, M] = [ep, 2ep].
• Symmetrically, f1 = ep attains the minimum, and subtracting the sequences that never reach the
maximum 2ep gives (ep + 1)^(n−1) − ep^(n−1) valid sequences.
Case 3: 0 < m < ep
In this case, [m, M] = [m, m + ep], and f1 = ep lies strictly inside the range.
• All f2, f3, . . . , fn can be chosen from [m, M], so there are (M − m + 1)^(n−1) = (ep + 1)^(n−1)
sequences.
• Subtract the sequences where the minimum m never appears and the sequences where the maximum
M never appears: ep^(n−1) each.
• Add back the sequences where neither the minimum nor the maximum appears (as these were
subtracted twice): (ep − 1)^(n−1) of them.
So for each 0 < m < ep (there are ep − 1 such ranges), the number of valid sequences is:

    (ep + 1)^(n−1) − 2·ep^(n−1) + (ep − 1)^(n−1).

So over all three cases, the total number of valid sequences is:

    ((ep + 1)^(n−1) − ep^(n−1)) + ((ep + 1)^(n−1) − ep^(n−1)) + (ep − 1) · ((ep + 1)^(n−1) − 2·ep^(n−1) + (ep − 1)^(n−1)).
So basically the solution is: factorize a1 , compute the contribution for each prime factor as shown using
fast exponentiation, and multiply all contributions modulo 998 244 353.
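The per-prime count can be sketched and sanity-checked against brute force (a hedged sketch; the helper name is illustrative, and the modulus is the one stated above):

```python
MOD = 998_244_353

# Number of exponent sequences f_2..f_n (f_1 = e fixed) with max - min = e,
# summed over the three cases derived above.
def ways_for_prime(e, n):
    t = pow(e + 1, n - 1, MOD)             # all values inside a range of size e+1
    u = pow(e, n - 1, MOD)                 # one endpoint never appears
    v = pow(e - 1, n - 1, MOD)             # neither endpoint appears
    return (2 * (t - u) + (e - 1) * (t - 2 * u + v)) % MOD
```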
Problem H. Flip to Zero
If the string currently has x ones and an operation flips k positions of which r are ones, the number of
ones becomes

    x − r + (k − r) = x + k − 2r.

Since r can vary subject to the constraints imposed by the number of ones and zeros available, the number
of ones changes in steps of 2.
We can model the problem as a graph problem where each node is a state which is an integer between 0
and n representing the number of ones. Our goal is to find the minimum number of operations to reach
each state starting from the all-zero state (state 0). Since operations are reversible, finding the distance
from 0 to any state m gives the minimum number of operations needed to turn a string with m ones into
all 0s.
A high-level outline of the solution is as follows:
• From a state with x ones, a single operation leads to a state with

    x + k − 2r

ones, where r (the number of ones flipped) is chosen such that the move is valid. This gives a range of
reachable states with differences changing by 2.
• Use a breadth-first search (BFS) to compute the minimum number of operations needed to reach
each state.
• There might be multiple direct edges (transitions) from a node, but as all of them are in a range
(partitioned by odd or even numbers), we can maintain two sets of unvisited even and odd nodes
and do the transitions efficiently.
• The answer for each m (for 1 ≤ m ≤ n) is the computed distance from state 0 to state m, or −1 if
state m is unreachable.
This approach efficiently computes f (m) for all m from 1 to n in O(n log n) time.
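The BFS with two pools of unvisited states can be sketched as follows. Assumptions: the operation flips exactly k positions, r of which are ones, constrained only by how many ones and zeros are available; for brevity the sketch uses plain sorted lists with `bisect` instead of a faster set/DSU structure.

```python
from bisect import bisect_left, bisect_right
from collections import deque

def min_operations(n, k):
    dist = [-1] * (n + 1)
    # unvisited[0] = even states, unvisited[1] = odd states
    unvisited = [sorted(x for x in range(n + 1) if x % 2 == p) for p in (0, 1)]
    dist[0] = 0
    unvisited[0].remove(0)
    queue = deque([0])
    while queue:
        x = queue.popleft()
        r_lo = max(0, k - (n - x))          # need enough zeros to flip
        r_hi = min(k, x)                    # need enough ones to flip
        if r_lo > r_hi:
            continue
        lo, hi = x + k - 2 * r_hi, x + k - 2 * r_lo
        bucket = unvisited[lo % 2]          # reachable states share one parity
        i = bisect_left(bucket, lo)
        j = bisect_right(bucket, hi)
        for y in bucket[i:j]:
            dist[y] = dist[x] + 1
            queue.append(y)
        del bucket[i:j]                     # each state is extracted once
    return dist
```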
Problem I. The Art of Rearrangement
Sort the array in decreasing order. Then for consecutive positions,

    ai − i > ai+1 − (i + 1).

This holds because ai ≥ ai+1, so ai − i ≥ ai+1 − i > ai+1 − (i + 1); the values ai − i strictly decrease,
and as a result all of them remain distinct.
Thus, sorting the array in decreasing order guarantees a valid arrangement.
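A minimal sketch of this construction together with the distinctness check (illustrative, assuming 1-indexed positions):

```python
# Sorting in decreasing order makes a_i - i strictly decreasing,
# hence all values a_i - i are distinct.
def rearrange(a):
    return sorted(a, reverse=True)
```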
Problem J. No Duplicates
Let’s solve this problem step by step.
For each testcase, we need to process q restrictions one by one, and after each restriction, we need to
calculate the number of valid arrays.
One key observation is that if an interval is fully covered by another interval, we can just ignore the
smaller interval.
So we can think of the following approach:
• Sort all intervals by left endpoint and process them in that order.
• For each interval [l, r], we only need to consider the portion that isn’t already covered by previous
intervals.
• Let’s say for an interval of length s, a portion of length c is already covered by previous intervals.
Then the s − c new positions must differ from each other and from the already-covered positions,
contributing a factor of (m − c)(m − c − 1) · · · (m − s + 1).
• For positions not covered by any interval, each position contributes a factor of m to the answer.
You can precompute factorials and inverse factorials to speed up the permutation calculation. It can be
done in O(m).
Now to solve the problem with each new restriction:
• For each interval, track its contribution factor (its contribution divided by m^s)
– Remove or split existing intervals that get covered by the new one
– Update contributions accordingly
– Add the new interval with its contribution
Problem K. Primal Brackets
Consider the bracket tree of the sequence:
• For any node corresponding to a balanced subarray of the form [+x] A B C ... [-x], the balanced
subarrays A, B, C, ... are its children.
• The leaf nodes of this tree are exactly the minimal balanced pairs of length 2, that is, sequences of
the form [+x, -x].
A key observation is that any partition of the sequence into primal subsequences must have at least as
many subsequences as there are leaves in the bracket tree. This is because you cannot merge two different
leaves, say [+x, -x] and [+y, -y], into a single primal subsequence. In a primal sequence all the positive
elements must come before all the negative ones, and merging two leaves would force the closing bracket
of the first pair to come before the opening bracket of the second, which is not allowed.
Thus, if we denote the number of leaves by #leaf, we must have
f (b) ≥ #leaf.
It turns out that this bound is tight. One way to see this is to use a dynamic programming (DP) approach
on the bracket tree:
• For a subtree corresponding to [+x] A B C ... [-x], assume that you have already partitioned
each of its child subtrees into the minimum number of primal subsequences.
• You can then append the outer brackets [+x] and [-x] to one of the primal subsequences from its
children, so no additional subsequence is needed for this subtree.
• The only case where you must form a new primal subsequence is when the subtree is a leaf itself
(i.e. it is of the form [+x, -x]).
Therefore, the minimum number of primal subsequences needed is exactly equal to the number of leaves
in the bracket tree.
A simpler implementation leverages the fact that a leaf appears as a consecutive pair [+x, -x] in the
sequence. Define an indicator variable is_leafi = 1 if the pair (ai−1 , ai ) forms a leaf (i.e. if ai−1 > 0
and ai−1 = −ai ), and 0 otherwise. Then, by building a prefix sum array p where pi = pi−1 + is_leafi ,
the answer for any balanced subarray a[l . . . r] is given by pr − pl .
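The prefix-sum idea can be sketched as follows. This is a hedged sketch: the positivity check encodes that a leaf pair must open and then close, and a query on a balanced subarray a[l..r] (1-indexed) is taken to be p_r − p_l.

```python
# p[i] = number of leaf pairs [+x, -x] ending at position i or earlier (1-indexed).
def leaf_prefix(a):
    n = len(a)
    p = [0] * (n + 1)
    for i in range(2, n + 1):
        # a leaf is an opening bracket immediately followed by its own closing bracket
        is_leaf = 1 if a[i - 2] > 0 and a[i - 2] == -a[i - 1] else 0
        p[i] = p[i - 1] + is_leaf
    return p
```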