
Team notebook

HCMUS-PenguinSpammers

February 3, 2022

Contents

1 Algorithms
 1.1 Mo's Algorithm
 1.2 Mo's Algorithm on Trees
 1.3 Parallel Binary Search

2 Combinatorics
 2.1 Factorial Approximate
 2.2 Factorial
 2.3 Fast Fourier Transform
 2.4 General purpose numbers
 2.5 Lucas Theorem
 2.6 Multinomial
 2.7 Others
 2.8 Permutation To Int
 2.9 Sigma Function

3 Data Structures
 3.1 Binary Index Tree
 3.2 Disjoint Set Union (DSU)
 3.3 Fake Update
 3.4 Fenwick Tree 2D
 3.5 Fenwick Tree
 3.6 Hash Table
 3.7 Mo Queries
 3.8 Range Minimum Query
 3.9 STL Treap
 3.10 Segment Tree
 3.11 Sparse Table
 3.12 Trie

4 Dynamic Programming Optimization
 4.1 Convex Hull Trick
 4.2 Divide and Conquer

5 Geometry
 5.1 Closest Pair Problem
 5.2 Convex Diameter
 5.3 Pick Theorem
 5.4 Square
 5.5 Triangle

6 Graphs
 6.1 Bridges
 6.2 Dijkstra
 6.3 Directed MST
 6.4 Edge Coloring
 6.5 Eulerian Path
 6.6 Floyd - Warshall
 6.7 Ford - Bellman
 6.8 Gomory Hu
 6.9 Karp Min Mean Cycle
 6.10 Konig's Theorem
 6.11 LCA
 6.12 Math
 6.13 Minimum Path Cover in DAG
 6.14 Planar Graph (Euler)
 6.15 Push Relabel
 6.16 SCC Kosaraju
 6.17 Tarjan SCC
 6.18 Topological Sort

7 Linear Algebra
 7.1 Matrix Determinant
 7.2 Matrix Inverse
 7.3 PolyRoots
 7.4 Polynomial

8 Misc
 8.1 Dates
 8.2 Debugging Tricks
 8.3 Interval Container
 8.4 Optimization Tricks
  8.4.1 Bit hacks
  8.4.2 Pragmas
 8.5 Ternary Search

9 Number Theory
 9.1 Chinese Remainder Theorem
 9.2 Convolution
 9.3 Diophantine Equations
 9.4 Discrete Logarithm
 9.5 Ext Euclidean
 9.6 Fast Eratosthenes
 9.7 Highest Exponent Factorial
 9.8 Miller - Rabin
 9.9 Mod Integer
 9.10 Mod Inv
 9.11 Mod Mul
 9.12 Mod Pow
 9.13 Number Theoretic Transform
 9.14 Pollard Rho Factorize
 9.15 Primes
 9.16 Totient Sieve
 9.17 Totient

10 Probability and Statistics
 10.1 Continuous Distributions
  10.1.1 Uniform distribution
  10.1.2 Exponential distribution
  10.1.3 Normal distribution
 10.2 Discrete Distributions
  10.2.1 Binomial distribution
  10.2.2 First success distribution
  10.2.3 Poisson distribution
 10.3 Probability Theory

11 Strings
 11.1 Hashing
 11.2 Incremental Aho Corasick
 11.3 KMP
 11.4 Minimal String Rotation
 11.5 Suffix Array
 11.6 Suffix Automaton
 11.7 Suffix Tree
 11.8 Z Algorithm


1 Algorithms

1.1 Mo's Algorithm

/*
https://www.spoj.com/problems/FREQ2/
*/
vector <int> MoQueries(int n, vector <query> Q){
    block_size = sqrt(n);
    sort(Q.begin(), Q.end(), [](const query &A, const query &B){
        return (A.l/block_size != B.l/block_size)? (A.l/block_size < B.l/block_size) : (A.r < B.r);
    });
    vector <int> res;
    res.resize((int)Q.size());

    int L = 1, R = 0;
    for(query q: Q){
        while (L > q.l) add(--L);
        while (R < q.r) add(++R);
        while (L < q.l) del(L++);
        while (R > q.r) del(R--);
        res[q.pos] = calc(1, R-L+1);
    }
    return res;
}

1.2 Mo's Algorithm on Trees

/*
Given a tree with N nodes and Q queries. Each node has an integer weight.
Each query gives two nodes u and v and asks how many distinct weights
appear on the path from u to v.

----------
Modify DFS:
----------
For each node u, maintain the start and end DFS times; call them ST(u)
and EN(u). For each query, a node is counted only if it occurs exactly
once in the chosen Euler-tour range.

--------------
Query solving:
--------------
Let the query be (u, v) and assume ST(u) <= ST(v). Denote P = LCA(u, v).

Case 1: P = u
    The query range is [ST(u), ST(v)].

Case 2: P != u
    The query range is [EN(u), ST(v)], plus the single position ST(P)
    (the LCA is handled separately, see cnt_val below).
*/

void update(int &L, int &R, int qL, int qR){
    while (L > qL) add(--L);
    while (R < qR) add(++R);
    while (L < qL) del(L++);
    while (R > qR) del(R--);
}

vector <int> MoQueries(int n, vector <query> Q){
    block_size = sqrt((int)nodes.size());
    sort(Q.begin(), Q.end(), [](const query &A, const query &B){
        return (ST[A.l]/block_size != ST[B.l]/block_size)? (ST[A.l]/block_size < ST[B.l]/block_size) : (ST[A.r] < ST[B.r]);
    });
    vector <int> res;
    res.resize((int)Q.size());

    LCA lca;
    lca.initialize(n);

    int L = 1, R = 0;
    for(query q: Q){
        int u = q.l, v = q.r;
        if(ST[u] > ST[v]) swap(u, v); // ensure ST[u] <= ST[v]
        int parent = lca.get(u, v);

        if(parent == u){
            int qL = ST[u], qR = ST[v];
            update(L, R, qL, qR);
        }else{
            int qL = EN[u], qR = ST[v];
            update(L, R, qL, qR);
            if(cnt_val[a[parent]] == 0)
                res[q.pos] += 1;
        }

        res[q.pos] += cur_ans;
    }
    return res;
}
1.3 Parallel Binary Search

int lo[N], mid[N], hi[N];
vector<int> vec[N];

void clear() //Reset
{
    memset(bit, 0, sizeof(bit));
}

void apply(int idx) //Apply ith update/query
{
    if(ql[idx] <= qr[idx])
        update(ql[idx], qa[idx]),
        update(qr[idx]+1, -qa[idx]);
    else
    {
        update(1, qa[idx]);
        update(qr[idx]+1, -qa[idx]);
        update(ql[idx], qa[idx]);
    }
}

bool check(int idx) //Check if the condition is satisfied
{
    int req=reqd[idx];
    for(auto &it:owns[idx])
    {
        req-=pref(it);
        if(req<0)
            break;
    }
    if(req<=0)
        return 1;
    return 0;
}

void work()
{
    for(int i=1;i<=q;i++)
        vec[i].clear();
    for(int i=1;i<=n;i++)
        if(mid[i]>0)
            vec[mid[i]].push_back(i);
    clear();
    for(int i=1;i<=q;i++)
    {
        apply(i);
        for(auto &it:vec[i]) //Add appropriate check conditions
        {
            if(check(it))
                hi[it]=i;
            else
                lo[it]=i+1;
        }
    }
}

void parallel_binary()
{
    for(int i=1;i<=n;i++)
        lo[i]=1, hi[i]=q+1;
    bool changed = 1;
    while(changed)
    {
        changed=0;
        for(int i=1;i<=n;i++)
        {
            if(lo[i]<hi[i])
            {
                changed=1;
                mid[i]=(lo[i] + hi[i])/2;
            }
            else
                mid[i]=-1;
        }
        work();
    }
}
2 Combinatorics

2.1 Factorial Approximate

Approximate factorial:

    n! ≈ √(2πn) · (n/e)^n

2.2 Factorial

    n    1  2  3   4    5    6     7      8       9        10
    n!   1  2  6  24  120  720  5040  40320  362880  3628800

    n       11     12     13      14      15      16      17
    n!    4.0e7  4.8e8  6.2e9  8.7e10  1.3e12  2.1e13  3.6e14

    n      20    25    30    40    50    100     150       171
    n!    2e18  2e25  3e32  8e47  3e64  9e157  6e262  >DBL_MAX
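A quick way to use the approximation and the table in practice is through logarithms. The sketch below is an added illustration (not from the original notebook); it estimates log10(n!) with lgamma and flags values that no longer fit in a double.

#include <bits/stdc++.h>
using namespace std;

// log10(n!) via lgamma: lgamma(n + 1) = ln(n!)
double log10_factorial(long long n) { return lgamma((double)n + 1) / log(10.0); }

int main() {
    // 20! ~ 2.4e18 is the largest factorial that fits in 64-bit integers;
    // 171! exceeds DBL_MAX (~1.8e308), matching the table above.
    for (long long n : {10, 20, 25, 170, 171}) {
        double d = log10_factorial(n);
        cout << n << "! ~ 1e" << (long long)d << (d > 308 ? " (> DBL_MAX)" : "") << '\n';
    }
    return 0;
}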
2.3 Fast Fourier Transform

/**
 * Fast Fourier Transform.
 * Useful to compute convolutions.
 * computes:
 *   C(f star g)[n] = sum_m(f[m] * g[n - m]) for all n.
 * test: icpc live archive, 6886 - Golf Bot
 */

using namespace std;
#include <bits/stdc++.h>
#define D(x) cout << #x " = " << (x) << endl
#define endl '\n'

const int MN = 262144 << 1;
int d[MN + 10], d2[MN + 10];

const double PI = acos(-1.0);

struct cpx {
    double real, image;
    cpx(double _real, double _image) {
        real = _real;
        image = _image;
    }
    cpx(){}
};

cpx operator + (const cpx &c1, const cpx &c2) {
    return cpx(c1.real + c2.real, c1.image + c2.image);
}

cpx operator - (const cpx &c1, const cpx &c2) {
    return cpx(c1.real - c2.real, c1.image - c2.image);
}

cpx operator * (const cpx &c1, const cpx &c2) {
    return cpx(c1.real*c2.real - c1.image*c2.image,
               c1.real*c2.image + c1.image*c2.real);
}

int rev(int id, int len) {
    int ret = 0;
    for (int i = 0; (1 << i) < len; i++) {
        ret <<= 1;
        if (id & (1 << i)) ret |= 1;
    }
    return ret;
}

cpx A[1 << 20];

void FFT(cpx *a, int len, int DFT) {
    for (int i = 0; i < len; i++)
        A[rev(i, len)] = a[i];
    for (int s = 1; (1 << s) <= len; s++) {
        int m = (1 << s);
        cpx wm = cpx(cos(DFT * 2 * PI / m), sin(DFT * 2 * PI / m));
        for(int k = 0; k < len; k += m) {
            cpx w = cpx(1, 0);
            for(int j = 0; j < (m >> 1); j++) {
                cpx t = w * A[k + j + (m >> 1)];
                cpx u = A[k + j];
                A[k + j] = u + t;
                A[k + j + (m >> 1)] = u - t;
                w = w * wm;
            }
        }
    }
    if (DFT == -1) for (int i = 0; i < len; i++)
        A[i].real /= len, A[i].image /= len;
    for (int i = 0; i < len; i++) a[i] = A[i];
    return;
}
cpx in[1 << 20];

void solve(int n) {
    memset(d, 0, sizeof d);
    int t;
    for (int i = 0; i < n; ++i) {
        cin >> t;
        d[t] = true;
    }

    int m;
    cin >> m;
    vector<int> q(m);
    for (int i = 0; i < m; ++i)
        cin >> q[i];

    for (int i = 0; i < MN; ++i) {
        if (d[i])
            in[i] = cpx(1, 0);
        else
            in[i] = cpx(0, 0);
    }

    FFT(in, MN, 1);
    for (int i = 0; i < MN; ++i) {
        in[i] = in[i] * in[i];
    }
    FFT(in, MN, -1);

    int ans = 0;
    for (int i = 0; i < q.size(); ++i) {
        if (in[q[i]].real > 0.5 || d[q[i]]) {
            ans++;
        }
    }
    cout << ans << endl;
}

int main() {
    ios_base::sync_with_stdio(false);cin.tie(NULL);
    int n;
    while (cin >> n)
        solve(n);
    return 0;
}
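Usage sketch (added here, not part of the original code): multiplying two integer polynomials with the FFT routine above. It assumes the cpx type, FFT and the global A buffer from this section, and that the padded length stays within 1 << 20.

// c[k] = sum_i a[i] * b[k - i]
vector<long long> multiply(const vector<int> &a, const vector<int> &b) {
    int len = 1;
    while (len < (int)(a.size() + b.size())) len <<= 1;   // power of two >= result size
    vector<cpx> fa(len, cpx(0, 0)), fb(len, cpx(0, 0));
    for (int i = 0; i < (int)a.size(); i++) fa[i] = cpx(a[i], 0);
    for (int i = 0; i < (int)b.size(); i++) fb[i] = cpx(b[i], 0);
    FFT(fa.data(), len, 1);
    FFT(fb.data(), len, 1);
    for (int i = 0; i < len; i++) fa[i] = fa[i] * fb[i];
    FFT(fa.data(), len, -1);
    vector<long long> c(a.size() + b.size() - 1);
    for (int i = 0; i < (int)c.size(); i++) c[i] = llround(fa[i].real);
    return c;
}
// multiply({1, 2, 3}, {4, 5}) == {4, 13, 22, 15}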
2.4 General purpose numbers

Bernoulli numbers
EGF of Bernoulli numbers is B(t) = t/(e^t − 1) (FFT-able).

    B[0, ...] = [1, −1/2, 1/6, 0, −1/30, 0, 1/42, ...]

Sums of powers:

    sum_{i=1}^{n} i^m = (1/(m+1)) · sum_{k=0}^{m} binom(m+1, k) · B_k · (n+1)^{m+1−k}

Euler-Maclaurin formula for infinite sums:

    sum_{i=m}^{∞} f(i) = ∫_m^∞ f(x) dx − sum_{k=1}^{∞} (B_k / k!) · f^{(k−1)}(m)
                       ≈ ∫_m^∞ f(x) dx + f(m)/2 − f'(m)/12 + f'''(m)/720 + O(f^{(5)}(m))

Stirling numbers of the first kind
Number of permutations on n items with k cycles.

    c(n, k) = c(n−1, k−1) + (n−1) · c(n−1, k),   c(0, 0) = 1
    sum_{k=0}^{n} c(n, k) x^k = x(x+1)...(x+n−1)
    c(8, k) = 0, 5040, 13068, 13132, 6769, 1960, 322, 28, 1

Stirling numbers of the second kind
Partitions of n distinct elements into exactly k groups.

    S(n, k) = S(n−1, k−1) + k · S(n−1, k)
    S(n, 1) = S(n, n) = 1
    S(n, k) = (1/k!) · sum_{j=0}^{k} (−1)^{k−j} binom(k, j) j^n

Eulerian numbers
Number of permutations π ∈ S_n in which exactly k elements are greater than the previous element; equivalently, k j:s s.t. π(j) > π(j+1), k+1 j:s s.t. π(j) ≥ j, k j:s s.t. π(j) > j.

    E(n, k) = (n−k) · E(n−1, k−1) + (k+1) · E(n−1, k)
    E(n, 0) = E(n, n−1) = 1
    E(n, k) = sum_{j=0}^{k} (−1)^j binom(n+1, j) (k+1−j)^n

Bell numbers
Total number of partitions of n distinct elements. B(n) = 1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, ... For p prime,

    B(p^m + n) ≡ m·B(n) + B(n+1)  (mod p)

Labeled unrooted trees

    # on n vertices: n^{n−2}
    # on k existing trees of size n_i: n_1 · n_2 ··· n_k · n^{k−2}
    # with degrees d_i: (n−2)! / ((d_1 − 1)! ··· (d_n − 1)!)

Catalan numbers

    C_n = (1/(n+1)) binom(2n, n) = binom(2n, n) − binom(2n, n+1) = (2n)! / ((n+1)! n!)
    C_0 = 1,   C_{n+1} = (2(2n+1) / (n+2)) · C_n,   C_{n+1} = sum_i C_i · C_{n−i}
    C_n = 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, ...

Catalan numbers count: sub-diagonal monotone paths in an n × n grid; strings with n pairs of parentheses, correctly nested; binary trees with n+1 leaves (0 or 2 children); ordered trees with n+1 vertices; ways a convex polygon with n+2 sides can be cut into triangles by connecting vertices with straight lines; permutations of [n] with no 3-term increasing subsequence.
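The recurrences above translate directly into small DP tables. The following sketch is an added illustration (with an assumed modulus MOD, adjust as needed); it builds Stirling numbers of the second kind and Catalan numbers modulo a prime.

#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7; // assumed modulus

int main() {
    const int N = 500;
    // S(n, k) = S(n-1, k-1) + k * S(n-1, k), S(0, 0) = 1
    vector<vector<long long>> S(N + 1, vector<long long>(N + 1, 0));
    S[0][0] = 1;
    for (int n = 1; n <= N; n++)
        for (int k = 1; k <= n; k++)
            S[n][k] = (S[n-1][k-1] + k * S[n-1][k]) % MOD;

    // C_0 = 1, C_{n+1} = sum_i C_i * C_{n-i}
    vector<long long> C(N + 1, 0);
    C[0] = 1;
    for (int n = 0; n + 1 <= N; n++)
        for (int i = 0; i <= n; i++)
            C[n+1] = (C[n+1] + C[i] * C[n-i]) % MOD;

    cout << S[8][3] << ' ' << C[10] << '\n'; // 966 16796
    return 0;
}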
2.5 Lucas Theorem

For non-negative integers m and n and a prime p, the following congruence relation holds:

    binom(m, n) ≡ prod_{i=0}^{k} binom(m_i, n_i)   (mod p),

where

    m = m_k p^k + m_{k−1} p^{k−1} + ··· + m_1 p + m_0

and

    n = n_k p^k + n_{k−1} p^{k−1} + ··· + n_1 p + n_0

are the base p expansions of m and n respectively. This uses the convention that binom(m, n) = 0 if m < n.
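A direct implementation sketch of the theorem (added for illustration; it assumes p is prime and small enough that the per-digit binomials can be computed naively):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

ll mod_pow_(ll a, ll e, ll p) {           // helper; named with a trailing _ to avoid the notebook's mod_pow
    ll r = 1 % p; a %= p;
    for (; e; e >>= 1, a = a * a % p) if (e & 1) r = r * a % p;
    return r;
}

ll binom_mod_p(ll m, ll n, ll p) {        // binom(m, n) mod p with 0 <= m, n < p
    if (n < 0 || n > m) return 0;
    ll num = 1, den = 1;
    for (ll i = 0; i < n; i++) {
        num = num * ((m - i) % p) % p;
        den = den * ((i + 1) % p) % p;
    }
    return num * mod_pow_(den, p - 2, p) % p;   // Fermat inverse, p prime
}

ll lucas(ll m, ll n, ll p) {              // binom(m, n) mod p via base-p digits
    ll res = 1;
    for (; (m > 0 || n > 0) && res; m /= p, n /= p)
        res = res * binom_mod_p(m % p, n % p, p) % p;
    return res;
}
// lucas(1000, 300, 13) gives binom(1000, 300) mod 13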
2.6 Multinomial

/**
 * Description: Computes $\displaystyle \binom{k_1 + \dots + k_n}{k_1, k_2, \dots, k_n} = \frac{(\sum k_i)!}{k_1!k_2!...k_n!}$.
 * Status: Tested on kattis:lexicography
 */
#pragma once

long long multinomial(vector<int>& v) {
    long long c = 1, m = v.empty() ? 1 : v[0];
    for (long long i = 1; i < v.size(); i++) {
        for (long long j = 0; j < v[i]; j++) {
            c = c * ++m / (j + 1);
        }
    }
    return c;
}
2.7 Others

Cycles: Let g_S(n) be the number of n-permutations whose cycle lengths all belong to the set S. Then

    sum_{n=0}^{∞} g_S(n) x^n / n! = exp( sum_{n ∈ S} x^n / n )

Derangements: Permutations of a set such that none of the elements appear in their original position.

    D(n) = (n−1)(D(n−1) + D(n−2)) = n·D(n−1) + (−1)^n = round(n!/e)

Burnside's lemma: Given a group G of symmetries and a set X, the number of elements of X up to symmetry equals

    (1/|G|) · sum_{g ∈ G} |X^g|,

where X^g are the elements fixed by g (g.x = x).

If f(n) counts "configurations" (of some sort) of length n, we can ignore rotational symmetry using G = Z_n to get

    g(n) = (1/n) · sum_{k=0}^{n−1} f(gcd(n, k)) = (1/n) · sum_{k|n} f(k) · φ(n/k).
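As a worked example of the last identity (added here, not from the original notebook): counting binary necklaces of length n up to rotation, where f(k) = c^k configurations are fixed by a rotation with gcd k.

#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;

ull phi(ull n) {                       // Euler's totient by trial division
    ull r = n;
    for (ull p = 2; p * p <= n; p++) if (n % p == 0) {
        r -= r / p;
        while (n % p == 0) n /= p;
    }
    if (n > 1) r -= r / n;
    return r;
}

ull necklaces(ull n, ull c) {          // g(n) = (1/n) * sum_{k|n} c^k * phi(n/k); no overflow care, illustration only
    ull total = 0;
    for (ull k = 1; k <= n; k++) if (n % k == 0) {
        ull pw = 1;
        for (ull i = 0; i < k; i++) pw *= c;
        total += pw * phi(n / k);
    }
    return total / n;
}

int main() { cout << necklaces(4, 2) << '\n'; } // 6 distinct binary necklaces of length 4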
2.8 Permutation To Int

/**
 * Description: Permutation -> integer conversion. (Not order preserving.)
 * Integer -> permutation can use a lookup table.
 * Time: O(n)
 **/
int permToInt(vector<int>& v) {
    int use = 0, i = 0, r = 0;
    for(int x : v) r = r * ++i +
        __builtin_popcount(use & -(1<<x)),
        use |= 1 << x;                  // (note: minus, not ~!)
    return r;
}
2.9 Sigma Function

The sigma function is defined as

    σ_x(n) = sum_{d|n} d^x;

when x = 0 it is called the divisor function, which counts the number of positive divisors of n. We are interested in finding

    sum_{d|n} σ_0(d).

If n is written as the prime factorization

    n = prod_{i=1}^{k} P_i^{e_i},

we can demonstrate that

    sum_{d|n} σ_0(d) = prod_{i=1}^{k} g(e_i + 1),

where g(x) is the sum of the first x positive numbers: g(x) = x(x+1)/2.
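A small sketch of the identity (added here for illustration), computing the sum over d|n of σ_0(d) from a trial-division factorization:

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

ll sumSigma0(ll n) {                   // prod g(e_i + 1) over the prime factorization of n
    ll res = 1;
    for (ll p = 2; p * p <= n; p++) if (n % p == 0) {
        ll e = 0;
        while (n % p == 0) n /= p, e++;
        res *= (e + 1) * (e + 2) / 2;  // g(e + 1)
    }
    if (n > 1) res *= 3;               // leftover prime has exponent 1, g(2) = 3
    return res;
}

int main() { cout << sumSigma0(12) << '\n'; } // divisors 1,2,3,4,6,12 give 1+2+2+3+4+6 = 18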
3 Data Structures

3.1 Binary Index Tree

struct BIT {
    int n;
    int t[2 * N];

    void add(int where, long long what) {
        for (where++; where <= n; where += where & -where) {
            t[where] += what;
        }
    }

    void add(int from, int to, long long what) {
        add(from, what);
        add(to + 1, -what);
    }

    long long query(int where) {
        long long sum = t[0];
        for (where++; where > 0; where -= where & -where) {
            sum += t[where];
        }
        return sum;
    }
};

3.2 Disjoint Set Union (DSU)

class DSU{
public:
    vector <int> parent;
    void initialize(int n){
        parent.resize(n+1, -1);
    }

    int findSet(int u){
        while(parent[u] > 0)
            u = parent[u];
        return u;
    }

    void Union(int u, int v){
        int x = parent[u] + parent[v];
        if(parent[u] > parent[v]){
            parent[v] = x;
            parent[u] = v;
        }else{
            parent[u] = x;
            parent[v] = u;
        }
    }
};

3.3 Fake Update a[i].se = lower_bound(Sy.begin(), Sy.end(), ll sum = 0;


a[i].se) - Sy.begin(); for (; x; x &= x - 1)
} sum += ft[x-1].query(ind(x-1, y));
return sum;
vector <int> fake_bit[MAXN]; // do fake BIT update and get operator }
for(int i = 1; i <= n; i++){ };
void fake_update(int x, int y, int limit_x){ fake_get(a[i].fi-1, a[i].se-1);
for(int i = x; i < limit_x; i += i&(-i)) fake_update(a[i].fi, a[i].se, (int)Sx.size());
fake_bit[i].pb(y); }
} 3.5 Fenwick Tree
for(int i = 0; i < Sx.size(); i++){
void fake_get(int x, int y){ fake_bit[i].pb(INT_MIN); // avoid zero
for(int i = x; i >= 1; i -= i&(-i)) sort(fake_bit[i].begin(), fake_bit[i].end()); template <typename T>
fake_bit[i].pb(y); fake_bit[i].resize(unique(fake_bit[i].begin(), class FenwickTree{
} fake_bit[i].end()) - fake_bit[i].begin()); vector <T> fenw;
bit[i].resize((int)fake_bit[i].size(), 0); int n;
vector <int> bit[MAXN]; } public:
void initialize(int _n){
void update(int x, int y, int limit_x, int val){ // real update, get operator this->n = _n;
for(int i = x; i < limit_x; i += i&(-i)){ int res = 0; fenw.resize(n+1);
for(int j = lower_bound(fake_bit[i].begin(), for(int i = 1; i <= n; i++){ }
fake_bit[i].end(), y) - int maxCurLen = get(a[i].fi-1, a[i].se-1) + 1;
fake_bit[i].begin(); j < res = max(res, maxCurLen); void update(int id, T val) {
fake_bit[i].size(); j += j&(-j)) update(a[i].fi, a[i].se, (int)Sx.size(), while (id <= n) {
bit[i][j] = max(bit[i][j], val); maxCurLen); fenw[id] += val;
} } id += id&(-id);
} } }
}
int get(int x, int y){
int ans = 0; T get(int id){
for(int i = x; i >= 1; i -= i&(-i)){ 3.4 Fenwick Tree 2D T ans{};
for(int j = lower_bound(fake_bit[i].begin(), while(id >= 1){
fake_bit[i].end(), y) - ans += fenw[id];
fake_bit[i].begin(); j >= 1; j -= j&(-j)) #include "FenwickTree.cpp" id -= id&(-id);
ans = max(ans, bit[i][j]); }
} struct FT2 { return ans;
return ans; vector<vi> ys; vector<FT> ft; }
} FT2(int limx) : ys(limx) {} };
void fakeUpdate(int x, int y) {
int main(){ for (; x < sz(ys); x |= x + 1)
_io ys[x].push_back(y);
int n; cin >> n; } 3.6 Hash Table
vector <int> Sx, Sy; void init() {
for(int i = 1; i <= n; i++){ for (vi& v : ys) sort(all(v)),
cin >> a[i].fi >> a[i].se; ft.emplace_back(sz(v)); /*
Sx.pb(a[i].fi); } * Micro hash table, can be used as a set.
Sy.pb(a[i].se); int ind(int x, int y) { * Very efficient vs std::set
} return (int)(lower_bound(all(ys[x]), y) *
unique_arr(Sx); - ys[x].begin()); } */
unique_arr(Sy); void update(int x, int y, ll dif) {
// unique all value for (; x < sz(ys); x |= x + 1) const int MN = 1001;
for(int i = 1; i <= n; i++){ ft[x].update(ind(x, y), dif); struct ht {
a[i].fi = lower_bound(Sx.begin(), Sx.end(), } int _s[(MN + 10) >> 5];
a[i].fi) - Sx.begin(); ll query(int x, int y) { int len;

void set(int id) { }; 3.9 STL Treap


len++; dfs(root, -1, 0, dfs);
_s[id >> 5] |= (1LL << (id & 31)); #define K(x) pii(I[x[0]] / blk, I[x[1]] ^ -(I[x[0]] /
} blk & 1))
bool is_set(int id) { iota(all(s), 0); struct Node {
return _s[id >> 5] & (1LL << (id & 31)); sort(all(s), [&](int s, int t){ return K(Q[s]) Node *l = 0, *r = 0;
} < K(Q[t]); }); int val, y, c = 1;
}; for (int qi : s) rep(end,0,2) { Node(int val) : val(val), y(rand()) {}
int &a = pos[end], b = Q[qi][end], i = 0; void recalc();
#define step(c) { if (in[c]) { del(a, end); in[a] = 0; };
} \
3.7 Mo Queries else { add(c, end); in[c] = 1; } a = int cnt(Node* n) { return n ? n->c : 0; }
c; } void Node::recalc() { c = cnt(l) + cnt(r) + 1; }
while (!(L[b] <= L[a] && R[a] <= R[b]))
void add(int ind, int end) { ... } // add a[ind] (end = I[i++] = b, b = par[b]; template<class F> void each(Node* n, F f) {
0 or 1) while (a != b) step(par[a]); if (n) { each(n->l, f); f(n->val); each(n->r,
void del(int ind, int end) { ... } // remove a[ind] while (i--) step(I[i]); f); }
int calc() { ... } // compute current answer if (end) res[qi] = calc(); }
}
vi mo(vector<pii> Q) { return res; pair<Node*, Node*> split(Node* n, int k) {
int L = 0, R = 0, blk = 350; // ~N/sqrt(Q) } if (!n) return {};
vi s(sz(Q)), res = s; if (cnt(n->l) >= k) { // "n->val >= k" for
#define K(x) pii(x.first/blk, x.second ^ -(x.first/blk lower_bound(k)
& 1)) auto pa = split(n->l, k);
iota(all(s), 0); n->l = pa.second;
sort(all(s), [&](int s, int t){ return K(Q[s]) 3.8 Range Minimum Query n->recalc();
< K(Q[t]); }); return {pa.first, n};
for (int qi : s) { } else {
pii q = Q[qi]; /* auto pa = split(n->r, k - cnt(n->l) -
while (L > q.first) add(--L, 0); return min(v[a], v[a + 1], ..., v[b - 1]) in 1); // and just "k"
while (R < q.second) add(R++, 1); constant time n->r = pa.first;
while (L < q.first) del(L++, 0); */ n->recalc();
while (R > q.second) del(--R, 1); return {n, pa.second};
res[qi] = calc(); template<class T> }
} struct RMQ { }
return res; vector<vector<T>> jmp;
} RMQ(const vector<T>& V) : jmp(1, V) { Node* merge(Node* l, Node* r) {
for (int pw = 1, k = 1; pw * 2 <= sz(V); if (!l) return r;
vi moTree(vector<array<int, 2>> Q, vector<vi>& ed, int pw *= 2, ++k) { if (!r) return l;
root=0){ jmp.emplace_back(sz(V) - pw * 2 + if (l->y > r->y) {
int N = sz(ed), pos[2] = {}, blk = 350; // 1); l->r = merge(l->r, r);
~N/sqrt(Q) rep(j,0,sz(jmp[k])) l->recalc();
vi s(sz(Q)), res = s, I(N), L(N), R(N), in(N), jmp[k][j] = min(jmp[k - return l;
par(N); 1][j], jmp[k - 1][j + } else {
add(0, 0), in[0] = 1; pw]); r->l = merge(l, r->l);
auto dfs = [&](int x, int p, int dep, auto& f) } r->recalc();
-> void { } return r;
par[x] = p; T query(int a, int b) { }
L[x] = N; assert(a < b); // or return inf if a == b }
if (dep) I[x] = N++; int dep = 31 - __builtin_clz(b - a);
for (int y : ed[x]) if (y != p) f(y, x, return min(jmp[dep][a], jmp[dep][b - (1 Node* ins(Node* t, Node* n, int pos) {
!dep, f); << dep)]); auto pa = split(t, pos);
if (!dep) I[x] = N++; } return merge(merge(pa.first, n), pa.second);
R[x] = N; }; }

int n; void init(){


// Example application: move the range [l, r) to index k vector<vector<T>> ans; nodes = 0;
void move(Node*& t, int l, int r, int k) { clear();
Node *a, *b, *c; SparseTable() {} }
tie(a,b) = split(t, l); tie(b,c) = split(b, r -
l); SparseTable(const vector<T>& a, const func& f) : int add(const string &s, bool query = 0){
if (k <= l) t = merge(ins(a, b, k), c); n(a.size()), calc(f) { int cur_node = 0;
else t = merge(a, ins(c, b, k - r)); int last = trunc(log2(n)) + 1; for(int i = 0; i < s.size(); ++i){
} ans.resize(n); int id = gid(s[i]);
for (int i = 0; i < n; i++){ if(tree[cur_node].a[id] == -1){
ans[i].resize(last); if(query) return 0;
} tree[cur_node].a[id] = nodes;
3.10 Segment Tree for (int i = 0; i < n; i++){ clear();
ans[i][0] = a[i]; }
} cur_node = tree[cur_node].a[id];
#include <bits/stdc++.h> for (int j = 1; j < last; j++){ }
using namespace std; for (int i = 0; i <= n - (1 << j); i++){ if(!query) tree[cur_node].c++;
ans[i][j] = calc(ans[i][j - 1], ans[i + return tree[cur_node].c;
const int N = 1e5 + 10; (1 << (j - 1))][j - 1]); }
}
int node[4*N]; } };
}
void modify(int seg, int l, int r, int p, int val){
if(l == r){ T query(int l, int r){
node[seg] += val; assert(0 <= l && l <= r && r < n);
return; int k = trunc(log2(r - l + 1));
4 Dynamic Programming Optimization
} return calc(ans[l][k], ans[r - (1 << k) +
int mid = (l + r)/2; 1][k]); 4.1 Convex Hull Trick
if(p <= mid){ }
modify(2*seg + 1, l, mid, p, val); };
}else{ #define long long long
modify(2*seg + 2, mid + 1, r, p, val); #define pll pair <long, long>
} #define all(c) c.begin(), c.end()
node[seg] = node[2*seg + 1] + node[2*seg + 2]; 3.12 Trie #define fastio ios_base::sync_with_stdio(false);
} cin.tie(0)

int sum(int seg, int l, int r, int a, int b){ const int MN = 26; // size of alphabet struct line{
if(l > b || r < a) return 0; const int MS = 100010; // Number of states. long a, b;
if(l >= a && r <= b) return node[seg]; line() {};
int mid = (l + r)/2; struct trie{ line(long a, long b) : a(a), b(b) {};
return sum(2*seg + 1, l, mid, a, b) + sum(2*seg + struct node{ bool operator < (const line &A) const {
2, mid + 1, r, a, b); int c; return pll(a,b) < pll(A.a,A.b);
} int a[MN]; }
}; };

node tree[MS]; bool bad(line A, line B, line C){


3.11 Sparse Table int nodes; return (C.b - B.b) * (A.a - B.a) <= (B.b - A.b) *
(B.a - C.a);
void clear(){ }
tree[nodes].c = 0;
template <typename T, typename func = function<T(const memset(tree[nodes].a, -1, sizeof tree[nodes].a); void addLine(vector<line> &memo, line cur){
T, const T)>> nodes++; int k = memo.size();
struct SparseTable { } while (k >= 2 && bad(memo[k - 2], memo[k - 1],
func calc; cur)){

memo.pop_back(); * https://icpc.kattis.com/problems/branch int ls = (p.size() + 1) >> 1;


k--; * http://codeforces.com/contest/321/problem/E double l = (p[ls - 1].x + p[ls].x) * 0.5;
} * */ vector<point> xl(ls), xr(p.size() - ls);
memo.push_back(cur); unordered_set<int> left;
} void comp(int l, int r, int le, int re) { for (int i = 0; i < ls; ++i) {
if (l > r) return; xl[i] = x[i];
long Fn(line A, long x){ left.insert(x[i].id);
return A.a * x + A.b; int mid = (l + r) >> 1; }
} for (int i = ls; i < p.size(); ++i) {
int best = max(mid + 1, le); xr[i - ls] = x[i];
long query(vector<line> &memo, long x){ dp[cur][mid] = dp[cur ^ 1][best] + cost(mid, best - }
int lo = 0, hi = memo.size() - 1; 1);
while (lo != hi){ for (int i = best; i <= re; i++) { vector<point> yl, yr;
int mi = (lo + hi) / 2; if (dp[cur][mid] > dp[cur ^ 1][i] + cost(mid, i - vector<point> pl, pr;
if (Fn(memo[mi], x) > Fn(memo[mi + 1], x)){ 1)) { yl.reserve(ls); yr.reserve(p.size() - ls);
lo = mi + 1; best = i; pl.reserve(ls); pr.reserve(p.size() - ls);
} dp[cur][mid] = dp[cur ^ 1][i] + cost(mid, i - 1); for (int i = 0; i < p.size(); ++i) {
else hi = mi; } if (left.count(y[i].id))
} } yl.push_back(y[i]);
return Fn(memo[lo], x); else
} comp(l, mid - 1, le, best); yr.push_back(y[i]);
comp(mid + 1, r, best, re);
const int N = 1e6 + 1; } if (left.count(p[i].id))
long dp[N]; pl.push_back(p[i]);
else
int main() pr.push_back(p[i]);
{ }
fastio;
5 Geometry
int n, c; cin >> n >> c; double dl = cp(pl, xl, yl);
vector<line> memo; 5.1 Closest Pair Problem double dr = cp(pr, xr, yr);
for (int i = 1; i <= n; i++){ double d = min(dl, dr);
long val; cin >> val; vector<point> yp; yp.reserve(p.size());
addLine(memo, {-2 * val, val * val + dp[i - struct point { for (int i = 0; i < p.size(); ++i) {
1]}); double x, y; if (fabs(y[i].x - l) < d)
dp[i] = query(memo, val) + val * val + c; int id; yp.push_back(y[i]);
} point() {} }
cout << dp[n] << ’\n’; point (double a, double b) : x(a), y(b) {} for (int i = 0; i < yp.size(); ++i) {
return 0; }; for (int j = i + 1; j < yp.size() && j < i + 7;
} ++j) {
double dist(const point &o, const point &p) { d = min(d, dist(yp[i], yp[j]));
double a = p.x - o.x, b = p.y - o.y; }
return sqrt(a * a + b * b); }
4.2 Divide and Conquer } return d;
}
double cp(vector<point> &p, vector<point> &x,
/** vector<point> &y) { double closest_pair(vector<point> &p) {
* recurrence: if (p.size() < 4) { vector<point> x(p.begin(), p.end());
* dp[k][i] = min dp[k-1][j] + c[i][j - 1], for all double best = 1e100; sort(x.begin(), x.end(), [](const point &a, const
j > i; for (int i = 0; i < p.size(); ++i) point &b) {
* for (int j = i + 1; j < p.size(); ++j) return a.x < b.x;
* "comp" computes dp[k][i] for all i in O(n log n) (k best = min(best, dist(p[i], p[j])); });
is fixed) return best; vector<point> y(p.begin(), p.end());
* }
* Problems:

sort(y.begin(), y.end(), [](const point &a, const points[i]) <= 0) }


point &b) { Up.pop_back(); }while(i != is || j != js);
return a.y < b.y; Up.push_back(points[i]); return sqrt(maxd);
}); } }
return cp(p, x, y); if(i == points.size()-1 || cross(A, points[i],
} B) < 0){
while(Down.size() > 2 &&
cross(Down[Down.size()-2], 5.3 Pick Theorem
Down[Down.size()-1], points[i]) >= 0)
5.2 Convex Diameter Down.pop_back();
Down.push_back(points[i]); struct point{
} ll x, y;
struct point{ } };
int x, y; for(int i = 0; i < Up.size(); i++)
}; convex.push_back(Up[i]); //Pick: S = I + B/2 - 1
for(int i = Down.size()-2; i > 0; i--)
struct vec{ convex.push_back(Down[i]); ld polygonArea(vector <point> &points){
int x, y; return convex; int n = (int)points.size();
}; } ld area = 0.0;
int j = n-1;
vec operator - (const point &A, const point &B){ int dist(point A, point B){ for(int i = 0; i < n; i++){
return vec{A.x - B.x, A.y - B.y}; return (A.x - B.x)*(A.x - B.x) + (A.y - B.y)*(A.y - area += (points[j].x + points[i].x) *
} B.y); (points[j].y - points[i].y);
} j = i;
int cross(vec A, vec B){ }
return A.x*B.y - A.y*B.x; double findConvexDiameter(vector <point> convexHull){
} int n = convexHull.size(); return abs(area/2.0);
}
int cross(point A, point B, point C){ int is = 0, js = 0;
int val = A.x*(B.y - C.y) + B.x*(C.y - A.y) + for(int i = 1; i < n; i++){ ll boundary(vector <point> points){
C.x*(A.y - B.y); if(convexHull[i].y > convexHull[is].y) int n = (int)points.size();
if(val == 0) is = i; ll num_bound = 0;
return 0; // coline if(convexHull[js].y > convexHull[i].y) for(int i = 0; i < n; i++){
if(val < 0) js = i; ll dx = (points[i].x - points[(i+1)%n].x);
return 1; // clockwise } ll dy = (points[i].y - points[(i+1)%n].y);
return -1; //counter clockwise num_bound += abs(__gcd(dx, dy)) - 1;
} int maxd = dist(convexHull[is], convexHull[js]); }
int i, maxi, j, maxj; return num_bound;
vector <point> findConvexHull(vector <point> points){ i = maxi = is; }
vector <point> convex; j = maxj = js;
sort(points.begin(), points.end(), [](const point do{
&A, const point &B){ int ni = (i+1)%n, nj = (j+1)%n;
return (A.x == B.x)? (A.y < B.y): (A.x < B.x); if(cross(convexHull[ni] - convexHull[i], 5.4 Square
}); convexHull[nj] - convexHull[j]) <= 0){
vector <point> Up, Down; j = nj;
point A = points[0], B = points.back(); }else{
Up.push_back(A); i = ni; typedef long double ld;
Down.push_back(A); }
int d = dist(convexHull[i], convexHull[j]); const ld eps = 1e-12;
for(int i = 0; i < points.size(); i++){ if(d > maxd){ int cmp(ld x, ld y = 0, ld tol = eps) {
if(i == points.size()-1 || cross(A, points[i], maxd = d; return ( x <= y + tol) ? (x + tol < y) ? -1 : 0 : 1;
B) > 0){ maxi = i; }
while(Up.size() > 2 && maxj = j;
cross(Up[Up.size()-2], Up[Up.size()-1], struct point{

ld x, y; if ((cmp(s1.x1, s2.x1) != -1 && cmp(s1.x1, s2.x2) !=


point(ld a, ld b) : x(a), y(b) {} 1) ||
point() {} (cmp(s1.x2, s2.x1) != -1 && cmp(s1.x2, s2.x2) != abc
}; 1)) cR = p
(a + b + c)(a + b − c)(a + c − b)(b + c − a)
return true;
struct square{ return false;
ld x1, x2, y1, y2, }
a, b, c;
6 Graphs
point edges[4]; ld min_dist(square &s1, square &s2) {
square(ld _a, ld _b, ld _c) { if (inside(s1, s2) || inside(s2, s1)) 6.1 Bridges
a = _a, b = _b, c = _c; return 0;
x1 = a - c * 0.5;
x2 = a + c * 0.5; ld ans = 1e100; struct Graph {
y1 = b - c * 0.5; for (int i = 0; i < 4; ++i) vector<vector<Edge>> g;
y2 = b + c * 0.5; for (int j = 0; j < 4; ++j) vector<int> vi, low, d, pi, is_b; // vi = visited
edges[0] = point(x1, y1); ans = min(ans, min_dist(s1.edges[i], int bridges_computed;
edges[1] = point(x2, y1); s2.edges[j])); int ticks, edges;
edges[2] = point(x2, y2);
edges[3] = point(x1, y2); Graph(int n, int m) {
} if (inside_hori(s1, s2) || inside_hori(s2, s1)) { g.assign(n, vector<Edge>());
}; if (cmp(s1.y1, s2.y2) != -1) id_b.assign(m, 0);
ans = min(ans, s1.y1 - s2.y2); vi.resize(n);
ld min_dist(point &a, point &b) { else low.resize(n);
ld x = a.x - b.x, if (cmp(s2.y1, s1.y2) != -1) d.resize(n);
y = a.y - b.y; ans = min(ans, s2.y1 - s1.y2); pi.resize(n);
return sqrt(x * x + y * y); } edges = 0;
} bridges_computed = 0;
if (inside_vert(s1, s2) || inside_vert(s2, s1)) { }
bool point_in_box(square s1, point p) { if (cmp(s1.x1, s2.x2) != -1)
if (cmp(s1.x1, p.x) != 1 && cmp(s1.x2, p.x) != -1 && ans = min(ans, s1.x1 - s2.x2); void addEge(int u, int v) {
cmp(s1.y1, p.y) != 1 && cmp(s1.y2, p.y) != -1) else g[u].push_back(Edge(v, edges));
return true; if (cmp(s2.x1, s1.x2) != -1) g[v].push_back(Edge(u, edges));
return false; ans = min(ans, s2.x1 - s1.x2); edges++;
} } }

bool inside(square &s1, square &s2) { return ans; void dfs(int u) {


for (int i = 0; i < 4; ++i) } vi[u] = true;
if (point_in_box(s2, s1.edges[i])) d[u] = low[u] = ticks++;
return true; for (int i = 0; i < g[u].size(); i++) {
int v = g[u][i].to;
return false; 5.5 Triangle if (v == pi[u]) continue;
} if (!vi[v]) {
Let a, b, c be length of the three sides of a triangle. pi[v] = u;
bool inside_vert(square &s1, square &s2) { dfs(v);
if ((cmp(s1.y1, s2.y1) != -1 && cmp(s1.y1, s2.y2) != if(d[u] < low[v]) is_b[g[u][i].id] =
1) || p = (a + b + c) ∗ 0.5 true;
(cmp(s1.y2, s2.y1) != -1 && cmp(s1.y2, s2.y2) != low[u] = min(low[u], low[v]);
1)) The inradius is defined by: } else {
return true; low[u] = min(low[u], low[v]);
}
s
return false; (p − a)(p − b)(p − c)
} iR = }
p }
bool inside_hori(square &s1, square &s2) {
The radius of its circumcircle is given by the formula: // multiple edges from a to b are not allowerd.

// (they could be detected as a bridge). priority_queue<edge> q; seen[r] = r;


// if we need to handle this, just count how many q.push(edge(start, 0)); vector<Edge> Q(n), in(n, {-1,-1}), comp;
edges there are from a to b. while (!q.empty()) { deque<tuple<int, int, vector<Edge>>> cycs;
void compBridges() { int node = q.top().to; rep(s,0,n) {
fill(pi.begin(), pi.end(), -1); long long dist = q.top().w; int u = s, qi = 0, w;
fill(vi.begin(), vi.end(), false); q.pop(); while (seen[u] < 0) {
fill(d.begin(), d.end(), 0); if (dist > d[node]) continue; if (!heap[u]) return {-1,{}};
fill(low.begin(), low.end(), 0); for (int i = 0; i < g[node].size(); i++) { Edge e = heap[u]->top();
ticks = 0; int to = g[node][i].to; heap[u]->delta -= e.w,
for (int i = 0; i < g.size(); i++) long long w_extra = g[node][i].w; pop(heap[u]);
if (!vi[i]) dfs(i); if (dist + w_extra < d[to]) { Q[qi] = e, path[qi++] = u,
bridges_computed = 1; p[to] = node; seen[u] = s;
} d[to] = dist + w_extra; res += e.w, u = uf.find(e.a);
q.push(edge(to, d[to])); if (seen[u] == s) { /// found
map<int, vector<Edge>> bridgesTree() { } cycle, contract
if (!bridges_computed) compBridges(); } Node* cyc = 0;
int n = g.size(); } int end = qi, time =
Dsu dsu(n); return {p, d}; uf.time();
for (int i = 0; i < n; i++) } do cyc = merge(cyc, heap[w
for (auto e : g[i]) = path[--qi]]);
if (!is_b[e.id]) dsu.Join(i, e.to); while (uf.join(u, w));
map<int. vector<Edge>> tree; u = uf.find(u), heap[u] =
for (int i = 0; i < n; i++) 6.3 Directed MST cyc, seen[u] = -1;
for (auto e : g[i]) cycs.push_front({u, time,
if (is_b[e.id]) {&Q[qi], &Q[end]}});
tree[dsu.Find(i)].emplace_back(dsu.Find(e.to), struct Edge { int a, b; ll w; }; }
e.id); struct Node { /// lazy skew heap node }
return tree; Edge key; rep(i,0,qi) in[uf.find(Q[i].b)] = Q[i];
} Node *l, *r; }
}; ll delta;
void prop() { for (auto& [u,t,comp] : cycs) { // restore sol
key.w += delta; (optional)
if (l) l->delta += delta; uf.rollback(t);
6.2 Dijkstra if (r) r->delta += delta; Edge inEdge = in[u];
delta = 0; for (auto& e : comp) in[uf.find(e.b)] =
} e;
struct edge { Edge top() { prop(); return key; } in[uf.find(inEdge.b)] = inEdge;
int to; }; }
long long w; Node *merge(Node *a, Node *b) { rep(i,0,n) par[i] = in[i].a;
edge() {} if (!a || !b) return a ?: b; return {res, par};
edge(int a, long long b) : to(a), w(b) {} a->prop(), b->prop(); }
bool operator<(const edge &e) const { if (a->key.w > b->key.w) swap(a, b);
return w > e.w; swap(a->l, (a->r = merge(b, a->r)));
} return a;
}; } 6.4 Edge Coloring
void pop(Node*& a) { a->prop(); a = merge(a->l, a->r); }
typedef <vector<vector<edge>> graph;
const long long inf = 1000000LL * 10000000LL; pair<ll, vi> dmst(int n, int r, vector<Edge>& g) { vi edgeColoring(int N, vector<pii> eds) {
pair<vector<int>, vector<long long>> dijkstra(graph& g, RollbackUF uf(n); vi cc(N + 1), ret(sz(eds)), fan(N), free(N),
int start) { vector<Node*> heap(n); loc;
int n = g.size(); for (Edge e : g) heap[e.b] = merge(heap[e.b], for (pii e : eds) ++cc[e.first], ++cc[e.second];
vector<long long> d(n, inf); new Node{e}); int u, v, ncols = *max_element(all(cc)) + 1;
vector<int> p(n, -1); ll res = 0; vector<vi> adj(N, vi(ncols, -1));
d[start] = 0; vi seen(n, -1), path(n), par(n); for (pii e : eds) {

tie(u, v) = e; } auto newDist = max(m[i][k] +


fan[0] = v; m[k][j], -inf);
loc.assign(ncols, 0); void dfs(int u) m[i][j] = min(m[i][j], newDist);
int at = u, end = u, d, c = free[u], ind { }
= 0, i = 0; while(g[u].size()) rep(k,0,n) if (m[k][k] < 0) rep(i,0,n)
while (d = free[v], !loc[d] && (v = { rep(j,0,n)
adj[u][d]) != -1) int v = g[u].back(); if (m[i][k] != inf && m[k][j] != inf)
loc[d] = ++ind, cc[ind] = d, g[u].pop_back(); m[i][j] = -inf;
fan[ind] = v; dfs(v); }
cc[loc[d]] = c; }
for (int cd = d; at != -1; cd ^= c ^ d, path.push_back(u);
at = adj[at][cd]) }
swap(adj[at][cd], adj[end = 6.7 Ford - Bellman
at][cd ^ c ^ d]); bool getPath(){
while (adj[fan[i]][d] != -1) { int ctEdges = 0;
int left = fan[i], right = vector<int> outDeg, inDeg; const ll inf = LLONG_MAX;
fan[++i], e = cc[i]; outDeg = inDeg = vector<int> (n + 1, 0); struct Ed { int a, b, w, s() { return a < b ? a : -a;
adj[u][e] = left; for(int i = 1; i <= n; i++) }};
adj[left][e] = u; { struct Node { ll dist = inf; int prev = -1; };
adj[right][e] = -1; ctEdges += g[i].size();
free[right] = e; outDeg[i] += g[i].size(); void bellmanFord(vector<Node>& nodes, vector<Ed>& eds,
} for(auto &u:g[i]) int s) {
adj[u][d] = fan[i]; inDeg[u]++; nodes[s].dist = 0;
adj[fan[i]][d] = u; } sort(all(eds), [](Ed a, Ed b) { return a.s() <
for (int y : {fan[0], u, end}) int ctMiddle = 0, src = 1; b.s(); });
for (int& z = free[y] = 0; for(int i = 1; i <= n; i++)
adj[y][z] != -1; z++); { int lim = sz(nodes) / 2 + 2; // /3+100 with
} if(abs(inDeg[i] - outDeg[i]) > 1) shuffled vertices
rep(i,0,sz(eds)) return 0; rep(i,0,lim) for (Ed ed : eds) {
for (tie(u, v) = eds[i]; adj[u][ret[i]] if(inDeg[i] == outDeg[i]) Node cur = nodes[ed.a], &dest =
!= v;) ++ret[i]; ctMiddle++; nodes[ed.b];
return ret; if(outDeg[i] > inDeg[i]) if (abs(cur.dist) == inf) continue;
} src = i; ll d = cur.dist + ed.w;
} if (d < dest.dist) {
if(ctMiddle != n && ctMiddle + 2 != n) dest.prev = ed.a;
return 0; dest.dist = (i < lim-1 ? d :
6.5 Eulerian Path dfs(src); -inf);
reverse(path.begin(), path.end()); }
return (path.size() == ctEdges + 1); }
struct DirectedEulerPath } rep(i,0,lim) for (Ed e : eds) {
{ }; if (nodes[e.a].dist == -inf)
int n; nodes[e.b].dist = -inf;
vector<vector<int> > g; }
vector<int> path; }

void init(int _n){ 6.6 Floyd - Warshall


n = _n;
g = vector<vector<int> > (n + 1, 6.8 Gomory Hu
vector<int> ()); const ll inf = 1LL << 62;
path.clear(); void floydWarshall(vector<vector<ll>>& m) {
} int n = sz(m); #include "PushRelabel.cpp"
rep(i,0,n) m[i][i] = min(m[i][i], 0LL);
void add_edge(int u, int v){ rep(k,0,n) rep(i,0,n) rep(j,0,n) typedef array<ll, 3> Edge;
g[u].push_back(v); if (m[i][k] != inf && m[k][j] != inf) { vector<Edge> gomoryHu(int N, vector<Edge> ed) {

vector<Edge> tree; int T = 0;


vi par(N); for (int k = 1; k <= n; ++k) for (int u = 0; u < n; vi time, path, ret;
rep(i,1,N) { ++u) { RMQ<int> rmq;
PushRelabel D(N); // Dinic also works if (d[u][k - 1] == INT_MAX) continue;
for (Edge t : ed) D.addEdge(t[0], t[1], for (int i = g[u].size() - 1; i >= 0; --i) LCA(vector<vi>& C) : time(sz(C)),
t[2], t[2]); d[g[u][i].v][k] = min(d[g[u][i].v][k], d[u][k - rmq((dfs(C,0,-1), ret)) {}
tree.push_back({i, par[i], D.calc(i, 1] + g[u][i].w); void dfs(vector<vi>& C, int v, int par) {
par[i])}); } time[v] = T++;
rep(j,i+1,N) for (int y : C[v]) if (y != par) {
if (par[j] == par[i] && bool flag = true; path.push_back(v),
D.leftOfMinCut(j)) par[j] = ret.push_back(time[v]);
i; for (int i = 0; i < n && flag; ++i) dfs(C, y, v);
} if (d[i][n] != INT_MAX) }
return tree; flag = false; }
}
if (flag) { int lca(int a, int b) {
return true; // return true if there is no a cycle. if (a == b) return a;
} tie(a, b) = minmax(time[a], time[b]);
6.9 Karp Min Mean Cycle return path[rmq.query(a, b)];
double ans = 1e15; }
//dist(a,b){return depth[a] + depth[b] -
/** for (int u = 0; u + 1 < n; ++u) { 2*depth[lca(a,b)];}
* Finds the min mean cycle, if you need the max mean if (d[u][n] == INT_MAX) continue; };
cycle double W = -1e15;
* just add all the edges with negative cost and print
* ans * -1 for (int k = 0; k < n; ++k)
* if (d[u][k] != INT_MAX) 6.12 Math
* test: uva, 11090 - Going in Cycle!! W = max(W, (double)(d[u][n] - d[u][k]) / (n -
* */ k)); Number of Spanning Trees
const int MN = 1000;
Create an N × N matrix mat, and for each edge a →
ans = min(ans, W);
struct edge{ b ∈ G, do mat[a][b]--, mat[b][b]++ (and mat[b][a]--,
}
int v; mat[a][a]++ if G is undirected). Remove the ith row and
long long w; // printf("%.2lf\n", ans); column and take the determinant; this yields the number
edge(){} edge(int v, int w) : v(v), w(w) {} cout << fixed << setprecision(2) << ans << endl; of directed spanning trees rooted at i (if G is undirected,
};
remove any row/column).
return false;
long long d[MN][MN]; } Erdős–Gallai theorem
// This is a copy of g because increments the size A simple graph with node degrees d1 ≥ · · · ≥ dn exists iff
// pass as reference if this does not matter. d1 + · · · + dn is even and for every k = 1 . . . n,
int karp(vector<vector<edge> > g) {
int n = g.size(); 6.10 Konig’s Theorem k
X n
X
di ≤ k(k − 1) + min(di , k).
g.resize(n + 1); // this is important In any bipartite graph, the number of edges in a maximum i=1 i=k+1
matching equals the number of vertices in a minimum vertex
for (int i = 0; i < n; ++i)
cover
if (!g[i].empty()) 6.13 Minimum Path Cover in DAG
g[n].push_back(edge(i,0));
++n; 6.11 LCA Given a directed acyclic graph G = (V, E), we are to find
the minimum number of vertex-disjoint paths to cover each
for(int i = 0;i<n;++i)
fill(d[i],d[i]+(n+1),INT_MAX); #include "../Data Structures/RMQ.h"
vertex in V.
We can construct a bipartite graph G0 = (V out ∪
d[n - 1][0] = 0; struct LCA { V in, E 0 ) from G, where :

PushRelabel(int n) : g(n), ec(n), cur(n), 1;


hs(2*n), H(n) {} hi = H[u];
} else if (cur[u]->c &&
V out = {v ∈ V : v has positive out − degree} void addEdge(int s, int t, ll cap, ll rcap=0) { H[u] ==
if (s == t) return; H[cur[u]->dest]+1)
g[s].push_back({t, sz(g[t]), 0, cap}); addFlow(*cur[u],
V in = {v ∈ V : v has positive in − degree} g[t].push_back({s, sz(g[s])-1, 0, rcap}); min(ec[u],
} cur[u]->c));
E 0 = {(u, v) ∈ V out × V in : (u, v) ∈ E}
else ++cur[u];
Then it can be shown, via König’s theorem, that G’ void addFlow(Edge& e, ll f) { }
Edge &back = g[e.dest][e.back]; }
has a matching of size m if and only if there exists n − m if (!ec[e.dest] && f) bool leftOfMinCut(int a) { return H[a] >=
vertex-disjoint paths that cover each vertex in G, where hs[H[e.dest]].push_back(e.dest); sz(g); }
n is the number of vertices in G and m is the maximum e.f += f; e.c -= f; ec[e.dest] += f; };
cardinality bipartite mathching in G’. back.f -= f; back.c += f; ec[back.dest]
-= f;
}
Therefore, the problem can be solved by finding the ll calc(int s, int t) { 6.16 SCC Kosaraju
maximum cardinality matching in G’ instead. int v = sz(g); H[s] = v; ec[t] = 1;
NOTE: If the paths are note necesarily disjoints, find vi co(2*v); co[0] = v-1;
rep(i,0,v) cur[i] = g[i].data(); // SCC = Strongly Connected Components
the transitive closure and solve the problem for disjoint
for (Edge& e : g[s]) addFlow(e, e.c);
paths. struct SCC {
for (int hi = 0;;) { vector<vector<int>> g, gr;
while (hs[hi].empty()) if (!hi--) vector<bool> used;
6.14 Planar Graph (Euler) vector<int> order, component;
return -ec[s];
int u = hs[hi].back(); int total_components;
Euler’s formula states that if a finite, connected, planar
graph is drawn in the plane without any edge intersections, hs[hi].pop_back();
while (ec[u] > 0) // discharge u SCC(vector<vector<int>>& adj) {
and v is the number of vertices, e is the number of edges if (cur[u] == g[u].data() g = adj;
and f is the number of faces (regions bounded by edges, + sz(g[u])) { int n = g.size();
including the outer, infinitely large region), then: H[u] = 1e9; gr.resize(n);
for (Edge& e : for (int i = 0; i < n; i++)
g[u]) if (e.c for (auto to : g[i])
f +v =e+2 && H[u] > gr[to].push_back(i);
It can be extended to non connected planar graphs with H[e.dest]+1)
H[u] = used.assign(n, false);
c connected components: H[e.dest]+1, for (int i = 0; i < n; i++)
cur[u] if (!used[i])
f +v =e+c+1 = &e; GenTime(i);
if (++co[H[u]],
!--co[hi] && used.assign(n, false);
6.15 Push Relabel hi < v) component.assign(n, -1);
rep(i,0,v) total_components = 0;
if (hi for (int i = n - 1; i >= 0; i--) {
struct PushRelabel { < H[i] int v = order[i];
struct Edge { && H[i] if (!used[v]) {
int dest, back; < v) vector<int> cur_component;
ll f, c; --co[H[i]], Dfs(cur_component, v);
}; H[i] for (auto node : cur_component)
vector<vector<Edge>> g; = component[node] = total_components;
vector<ll> ec; v }
vector<Edge*> cur; + }
vector<vi> hs; vi H; }

if (stacked[v]) low[u] = min(low[u], low[v]); double v = a[j][i] / a[i][i];


void GenTime(int node) { } if (v != 0) rep(k,i+1,n) a[j][k]
used[node] = true; if (d[u] == low[u]) { -= v * a[i][k];
for (auto to : g[node]) int v; }
if (!used[to]) do { }
GenTime(to); v = s.back(); s.pop_back(); return res;
order.push_back(node); stacked[v] = false; }
} scc[v] = current_scc;
} while (u != v);
void Dfs(vector<int>& cur, int node) { current_scc++;
used[node] = true; } 7.2 Matrix Inverse
cur.push_back(node); }
if (!used[to]) };
Dfs(cur, to); int matInv(vector<vector<double>>& A) {
} int n = sz(A); vi col(n);
vector<vector<double>> tmp(n,
vector<vector<int>> CondensedGraph() { vector<double>(n));
6.18 Topological Sort rep(i,0,n) tmp[i][i] = 1, col[i] = i;
vector<vector<int>> ans(total_components);
for (int i = 0; i < int(g.size()); i++) {
for (int to : g[i]) { vi topoSort(const vector<vi>& gr) { rep(i,0,n) {
int u = component[i], v = component[to]; vi indeg(sz(gr)), ret; int r = i, c = i;
if (u != v) for (auto& li : gr) for (int x : li) indeg[x]++; rep(j,i,n) rep(k,i,n)
ans[u].push_back(v); queue<int> q; // use priority_queue for lexic. if (fabs(A[j][k]) > fabs(A[r][c]))
} largest ans. r = j, c = k;
} rep(i,0,sz(gr)) if (indeg[i] == 0) q.push(i); if (fabs(A[r][c]) < 1e-12) return i;
return ans; while (!q.empty()) { A[i].swap(A[r]); tmp[i].swap(tmp[r]);
} int i = q.front(); // top() for priority rep(j,0,n)
}; queue swap(A[j][i], A[j][c]),
ret.push_back(i); swap(tmp[j][i], tmp[j][c]);
q.pop(); swap(col[i], col[c]);
for (int x : gr[i]) double v = A[i][i];
6.17 Tarjan SCC if (--indeg[x] == 0) q.push(x); rep(j,i+1,n) {
} double f = A[j][i] / v;
return ret; A[j][i] = 0;
const int N = 20002; } rep(k,i+1,n) A[j][k] -= f*A[i][k];
struct tarjan_scc { rep(k,0,n) tmp[j][k] -=
int scc[MN], low[MN], d[MN], stacked[MN]; f*tmp[i][k];
int ticks, current_scc; }
deque<int> s; // used as stack rep(j,i+1,n) A[i][j] /= v;
tarjan_scc() {} 7 Linear Algebra rep(j,0,n) tmp[i][j] /= v;
void init() { A[i][i] = 1;
memset(scc, -1, sizeof(scc)); 7.1 Matrix Determinant }
memset(d, -1, sizeof(d));
memset(stacked, 0, sizeof(stacked)); /// forget A at this point, just eliminate tmp
s.clear(); double det(vector<vector<double>>& a) { backward
ticks = current_scc = 0; int n = sz(a); double res = 1; for (int i = n-1; i > 0; --i) rep(j,0,i) {
} rep(i,0,n) { double v = A[j][i];
void compute(vector<vector<int>> &g, int u) { int b = i; rep(k,0,n) tmp[j][k] -= v*tmp[i][k];
d[u] = low[u] = ticks++; rep(j,i+1,n) if (fabs(a[j][i]) > }
s.push_back(u); fabs(a[b][i])) b = j;
stacked[u] = true; if (i != b) swap(a[i], a[b]), res *= -1; rep(i,0,n) rep(j,0,n) A[col[i]][col[j]] =
for (int i = 0; i < g[u].size(); i++) { res *= a[i][i]; tmp[i][j];
int v = g[u][i]; if (res == 0) return 0; return n;
if (d[v] == -1) compute(g, v); rep(j,i+1,n) { }

double b = a.back(), c; a.back() = 0;


for(int i=sz(a)-1; i--;) c = a[i], a[i] if (d > p100*3) top100 = true, d -= 3*p100, y += 300;
= a[i+1]*x0+b, b=c; else y += ((d-1) / p100) * 100, d = (d-1) % p100 + 1;
a.pop_back();
7.3 PolyRoots } if (d > p4*24) top4 = true, d -= 24*p4, y += 24*4;
}; else y += ((d-1) / p4) * 4, d = (d-1) % p4 + 1;
#include "Polynomial.cpp" if (d > p1*3) top1 = true, d -= p1*3, y += 3;
else y += (d-1) / p1, d = (d-1) % p1 + 1;
vector<double> polyRoots(Poly p, double xmin, double
xmax) { 8 Misc const int *ac = top1 && (!top4 || top100) ? B : A;
if (sz(p.a) == 2) { return {-p.a[0]/p.a[1]}; } for (m = 1; m < 12; ++m) if (d <= ac[m + 1]) break;
vector<double> ret; 8.1 Dates d -= ac[m];
Poly der = p; }
der.diff();
auto dr = polyRoots(der, xmin, xmax); //
dr.push_back(xmin-1); // Time - Leap years
dr.push_back(xmax+1); // 8.2 Debugging Tricks
sort(all(dr));
rep(i,0,sz(dr)-1) { // A[i] has the accumulated number of days from months ˆ signal(SIGSEGV, [](int) { _Exit(0); }); con-
double l = dr[i], h = dr[i+1]; previous to i verts segfaults into Wrong Answers. Similarly one
bool sign = p(l) > 0; const int A[13] = { 0, 0, 31, 59, 90, 120, 151, 181,
if (sign ^ (p(h) > 0)) { 212, 243, 273, 304, 334 }; can catch SIGABRT (assertion failures) and SIGFPE
rep(it,0,60) { // while (h - l > // same as A, but for a leap year (zero divisions). _GLIBCXX_DEBUG failures generate
1e-8) const int B[13] = { 0, 0, 31, 60, 91, 121, 152, 182, SIGABRT (or SIGSEGV on gcc 5.4.0 apparently).
double m = (l + h) / 2, f 213, 244, 274, 305, 335 };
= p(m); // returns number of leap years up to, and including, y ˆ feenableexcept(29); kills the program on NaNs (1),
if ((f <= 0) ^ sign) l = m; int leap_years(int y) { return y / 4 - y / 100 + y / 0-divs (4), infinities (8) and denormals (16).
else h = m; 400; }
} bool is_leap(int y) { return y % 400 == 0 || (y % 4 ==
ret.push_back((l + h) / 2); 0 && y % 100 != 0); } 8.3 Interval Container
} // number of days in blocks of years
} const int p400 = 400*365 + leap_years(400);
return ret; const int p100 = 100*365 + leap_years(100); set<pii>::iterator addInterval(set<pii>& is, int L, int
} const int p4 = 4*365 + 1; R) {
const int p1 = 365; if (L == R) return is.end();
int date_to_days(int d, int m, int y) auto it = is.lower_bound({L, R}), before = it;
{ while (it != is.end() && it->first <= R) {
7.4 Polynomial return (y - 1) * 365 + leap_years(y - 1) + R = max(R, it->second);
(is_leap(y) ? B[m] : A[m]) + d; before = it = is.erase(it);
} }
struct Poly { void days_to_date(int days, int &d, int &m, int &y) if (it != is.begin() && (--it)->second >= L) {
vector<double> a; { L = min(L, it->first);
double operator()(double x) const { bool top100; // are we in the top 100 years of a 400 R = max(R, it->second);
double val = 0; block? is.erase(it);
for (int i = sz(a); i--;) (val *= x) += bool top4; // are we in the top 4 years of a 100 }
a[i]; block? return is.insert(before, {L,R});
return val; bool top1; // are we in the top year of a 4 block? }
}
void diff() { y = 1; void removeInterval(set<pii>& is, int L, int R) {
rep(i,1,sz(a)) a[i-1] = i*a[i]; top100 = top4 = top1 = false; if (L == R) return;
a.pop_back(); auto it = addInterval(is, L, R);
} y += ((days-1) / p400) * 400; auto r2 = it->second;
void divroot(double x0) { d = (days-1) % p400 + 1; if (it->first == L) is.erase(it);

else (int&)it->second = L; } * The number of roots of unity to use nroots_unity


if (R != r2) is.emplace(R, r2); must be set so that the product of the first
} * nroots_unity primes of the vector nth_roots_unity is
greater than the maximum value of the
* convolution. Never use sizes of vectors bigger than
9 Number Theory 2^24, if you need to change the values of
8.4 Optimization Tricks * the nth roots of unity to appropriate primes for
9.1 Chinese Remainder Theorem those sizes.
__builtin_ia32_ldmxcsr(40896); disables denormals */
(which make floats 20x slower near their minimum value). vector<LL> convolve(const vector<LL> &a, const
/** vector<LL> &b, int nroots_unity = 2) {
* Chinese remainder theorem. int N = 1 << ceil_log2(a.size() + b.size());
8.4.1 Bit hacks * Find z such that z % x[i] = a[i] for all i. vector<LL> ans(N,0), fA(N), fB(N), fC(N);
* */ LL modulo = 1;
ˆ x & -x is the least bit in x. long long crt(vector<long long> &a, vector<long long> for (int times = 0; times < nroots_unity; times++) {
&x) { fill(fA.begin(), fA.end(), 0);
ˆ for (int x = m; x; ) { --x &= m; ... } loops long long z = 0; fill(fB.begin(), fB.end(), 0);
over all subset masks of m (except m itself). long long n = 1; for (int i = 0; i < a.size(); i++) fA[i] = a[i];
for (int i = 0; i < x.size(); ++i) for (int i = 0; i < b.size(); i++) fB[i] = b[i];
ˆ c = x&-x, r = x+c; (((r^x) >> 2)/c) | r is the n *= x[i]; LL prime = nth_roots_unity[times].first;
next number after x with the same number of bits LL inv_modulo = mod_inv(modulo % prime, prime);
for (int i = 0; i < a.size(); ++i) { LL normalize = mod_inv(N, prime);
set. long long tmp = (a[i] * (n / x[i])) % n; ntfft(fA, 1, nth_roots_unity[times]);
tmp = (tmp * mod_inv(n / x[i], x[i])) % n; ntfft(fB, 1, nth_roots_unity[times]);
ˆ rep(b,0,K) rep(i,0,(1 << K)) z = (z + tmp) % n; for (int i = 0; i < N; i++) fC[i] = (fA[i] * fB[i])
if (i & 1 << b) D[i] += D[i^(1 << b)]; com- } % prime;
putes all sums of subsets. ntfft(fC, -1, nth_roots_unity[times]);
return (z + n) % n; for (int i = 0; i < N; i++) {
} LL curr = (fC[i] * normalize) % prime;
8.4.2 Pragmas LL k = (curr - (ans[i] % prime) + prime) % prime;
ˆ #pragma GCC optimize ("Ofast") will make GCC auto- k = (k * inv_modulo) % prime;
ans[i] += modulo * k;
vectorize loops and optimizes floating points better. 9.2 Convolution }
modulo *= prime;
ˆ #pragma GCC target ("avx2") can double performance }
of vectorized code, but causes crashes on old machines. typedef long long int LL; return ans;
typedef pair<LL, LL> PLL; }
ˆ #pragma GCC optimize ("trapv") kills the program on
inline bool is_pow2(LL x) {
integer overflows (but is really slow).
return (x & (x-1)) == 0;
} 9.3 Diophantine Equations
8.5 Ternary Search
inline int ceil_log2(LL x) {
int ans = 0; long long gcd(long long a, long long b, long long &x,
template<class F> --x; long long &y) {
int ternSearch(int a, int b, F f) { while (x != 0) { if (a == 0) {
assert(a <= b); x >>= 1; x = 0;
while (b - a >= 5) { ans++; y = 1;
int mid = (a + b) / 2; } return b;
if (f(mid) < f(mid+1)) a = mid; // (A) return ans; }
else b = mid+1; } long long x1, y1;
} long long d = gcd(b % a, a, x1, y1);
rep(i,a+1,b+1) if (f(a) < f(i)) a = i; // (B) /* Returns the convolution of the two given vectors in x = y1 - (b / a) * x1;
return a; time proportional to n*log(n). y = x1;

9.3 Diophantine Equations

long long gcd(long long a, long long b, long long &x, long long &y) {
  if (a == 0) {
    x = 0;
    y = 1;
    return b;
  }
  long long x1, y1;
  long long d = gcd(b % a, a, x1, y1);
  x = y1 - (b / a) * x1;
  y = x1;
  return d;
}

bool find_any_solution(long long a, long long b, long long c,
                       long long &x0, long long &y0, long long &g) {
  g = gcd(abs(a), abs(b), x0, y0);
  if (c % g) {
    return false;
  }

  x0 *= c / g;
  y0 *= c / g;
  if (a < 0) x0 = -x0;
  if (b < 0) y0 = -y0;
  return true;
}

void shift_solution(long long &x, long long &y, long long a, long long b,
                    long long cnt) {
  x += cnt * b;
  y -= cnt * a;
}

long long find_all_solutions(long long a, long long b, long long c,
    long long minx, long long maxx, long long miny, long long maxy) {
  long long x, y, g;
  if (!find_any_solution(a, b, c, x, y, g)) return 0;
  a /= g;
  b /= g;

  long long sign_a = a > 0 ? +1 : -1;
  long long sign_b = b > 0 ? +1 : -1;

  shift_solution(x, y, a, b, (minx - x) / b);
  if (x < minx) shift_solution(x, y, a, b, sign_b);
  if (x > maxx) return 0;
  long long lx1 = x;

  shift_solution(x, y, a, b, (maxx - x) / b);
  if (x > maxx) shift_solution(x, y, a, b, -sign_b);
  long long rx1 = x;

  shift_solution(x, y, a, b, -(miny - y) / a);
  if (y < miny) shift_solution(x, y, a, b, -sign_a);
  if (y > maxy) return 0;
  long long lx2 = x;

  shift_solution(x, y, a, b, -(maxy - y) / a);
  if (y > maxy) shift_solution(x, y, a, b, sign_a);
  long long rx2 = x;
  if (lx2 > rx2) swap(lx2, rx2);
  long long lx = max(lx1, lx2);
  long long rx = min(rx1, rx2);

  if (lx > rx) return 0;
  return (rx - lx) / abs(b) + 1;
}

9.4 Discrete Logarithm

// Computes x which a ^ x = b mod n.
long long d_log(long long a, long long b, long long n) {
  long long m = ceil(sqrt(n));
  long long aj = 1;
  map<long long, long long> M;
  for (int i = 0; i < m; ++i) {
    if (!M.count(aj))
      M[aj] = i;
    aj = (aj * a) % n;
  }

  long long coef = mod_pow(a, n - 2, n);
  coef = mod_pow(coef, m, n);
  // coef = a ^ (-m)
  long long gamma = b;
  for (int i = 0; i < m; ++i) {
    if (M.count(gamma)) {
      return i * m + M[gamma];
    } else {
      gamma = (gamma * coef) % n;
    }
  }
  return -1;
}

9.5 Ext Euclidean

void ext_euclid(long long a, long long b, long long &x, long long &y, long long &g) {
  x = 0, y = 1, g = b;
  long long m, n, q, r;
  for (long long u = 1, v = 0; a != 0; g = a, a = r) {
    q = g / a, r = g % a;
    m = x - u * q, n = y - v * q;
    x = u, y = v, u = m, v = n;
  }
}

9.6 Fast Eratosthenes

const int LIM = 1e6;
bitset<LIM> isPrime;
vi eratosthenes() {
  const int S = (int)round(sqrt(LIM)), R = LIM / 2;
  vi pr = {2}, sieve(S+1);
  pr.reserve(int(LIM/log(LIM)*1.1));
  vector<pii> cp;
  for (int i = 3; i <= S; i += 2) if (!sieve[i]) {
    cp.push_back({i, i * i / 2});
    for (int j = i * i; j <= S; j += 2 * i)
      sieve[j] = 1;
  }
  for (int L = 1; L <= R; L += S) {
    array<bool, S> block{};
    for (auto &[p, idx] : cp)
      for (int i = idx; i < S+L; idx = (i += p)) block[i-L] = 1;
    rep(i,0,min(S, R - L))
      if (!block[i]) pr.push_back((L + i) * 2 + 1);
  }
  for (int i : pr) isPrime[i] = 1;
  return pr;
}

9.7 Highest Exponent Factorial

int highest_exponent(int p, const int &n){
  int ans = 0;
  int t = p;
  while(t <= n){
    ans += n/t;
    t *= p;
  }
  return ans;
}
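A quick sanity check for highest_exponent is Legendre's formula: the exponent of a prime p in n! is the sum of floor(n/p^k) over k >= 1. The standalone snippet below (not notebook code) compares that formula with a direct count of factors for p = 5, n = 100; both print 24, which is what highest_exponent(5, 100) should return.

// Cross-check of Legendre's formula against brute force; compile separately.
#include <bits/stdc++.h>
int main() {
  int p = 5, n = 100, brute = 0;
  for (int i = 1; i <= n; i++)               // count factors of p in 1..n
    for (int x = i; x % p == 0; x /= p) brute++;
  int formula = 0;
  for (long long t = p; t <= n; t *= p) formula += n / t;
  printf("%d %d\n", brute, formula);         // prints: 24 24
  return 0;
}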
9.8 Miller - Rabin

const int rounds = 20;

// checks whether a is a witness that n is not prime, 1 < a < n
bool witness(long long a, long long n) {
  // check as in Miller Rabin Primality Test described
  long long u = n - 1;
  int t = 0;
  while (u % 2 == 0) {
    t++;
    u >>= 1;
  }
  long long next = mod_pow(a, u, n);
  if (next == 1) return false;
  long long last;
  for (int i = 0; i < t; ++i) {
    last = next;
    next = mod_mul(last, last, n);
    if (next == 1) {
      return last != n - 1;
    }
  }
  return next != 1;
}

// Checks if a number is prime with prob 1 - 1 / (2 ^ it)
// D(miller_rabin(99999999999999997LL) == 1);
// D(miller_rabin(9999999999971LL) == 1);
// D(miller_rabin(7907) == 1);
bool miller_rabin(long long n, int it = rounds) {
  if (n <= 1) return false;
  if (n == 2) return true;
  if (n % 2 == 0) return false;
  for (int i = 0; i < it; ++i) {
    long long a = rand() % (n - 1) + 1;
    if (witness(a, n)) {
      return false;
    }
  }
  return true;
}

9.9 Mod Integer

template<class T, T mod>
struct mint_t {
  T val;
  mint_t() : val(0) {}
  mint_t(T v) : val(((v % mod) + mod) % mod) {} // normalize negative inputs

  mint_t operator + (const mint_t& o) const {
    return (val + o.val) % mod;
  }
  mint_t operator - (const mint_t& o) const {
    return (val - o.val + mod) % mod; // keep the result non-negative
  }
  mint_t operator * (const mint_t& o) const {
    return (val * o.val) % mod;
  }
};

typedef mint_t<long long, 998244353> mint;

9.10 Mod Inv

long long mod_inv(long long n, long long m) {
  long long x, y, gcd;
  ext_euclid(n, m, x, y, gcd);
  if (gcd != 1)
    return 0;
  return (x + m) % m;
}

9.11 Mod Mul

// Computes (a * b) % mod
long long mod_mul(long long a, long long b, long long mod) {
  long long x = 0, y = a % mod;
  while (b > 0) {
    if (b & 1)
      x = (x + y) % mod;
    y = (y * 2) % mod;
    b /= 2;
  }
  return x % mod;
}

9.12 Mod Pow

// Computes ( a ^ exp ) % mod.
long long mod_pow(long long a, long long exp, long long mod) {
  long long ans = 1;
  while (exp > 0) {
    if (exp & 1)
      ans = mod_mul(ans, a, mod);
    a = mod_mul(a, a, mod);
    exp >>= 1;
  }
  return ans;
}
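mod_pow combines with Fermat's little theorem to give modular inverses: for prime m, a^(m-2) mod m is the inverse of a. The self-contained sketch below is an illustration, not the notebook's code; it re-implements the square-and-multiply loop with ordinary multiplication (safe for small moduli) and checks the inverse of 3 modulo 7.

// Fermat inverse via fast exponentiation; compile separately.
#include <bits/stdc++.h>
long long pw(long long a, long long e, long long m) {
  long long r = 1 % m;
  for (a %= m; e > 0; e >>= 1, a = a * a % m)
    if (e & 1) r = r * a % m;
  return r;
}
int main() {
  printf("%lld\n", pw(3, 5, 7));          // inverse of 3 mod 7 -> prints 5
  printf("%lld\n", 3 * pw(3, 5, 7) % 7);  // sanity check      -> prints 1
  return 0;
}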
9.13 Number Theoretic Transform

typedef long long int LL;
typedef pair<LL, LL> PLL;

/* The following vector of pairs contains pairs (prime, generator)
 * where the prime has an Nth root of unity for N being a power of two.
 * The generator is a number g s.t g^(p-1)=1 (mod p)
 * but is different from 1 for all smaller powers */
vector<PLL> nth_roots_unity {
  {1224736769,330732430},{1711276033,927759239},{167772161,16748...},
  {469762049,343261969},{754974721,643797295},{1107296257,88386...}
};

PLL ext_euclid(LL a, LL b) {
  if (b == 0)
    return make_pair(1,0);
  pair<LL,LL> rc = ext_euclid(b, a % b);
  return make_pair(rc.second, rc.first - (a / b) * rc.second);
}

//returns -1 if there is no unique modular inverse
LL mod_inv(LL x, LL modulo) {
  PLL p = ext_euclid(x, modulo);
  if ( (p.first * x + p.second * modulo) != 1 )
    return -1;
  return (p.first + modulo) % modulo;
}

//Number theory fft. The size of a must be a power of 2
void ntfft(vector<LL> &a, int dir, const PLL &root_unity) {
  int n = a.size();
  LL prime = root_unity.first;
  LL basew = mod_pow(root_unity.second, (prime-1) / n, prime);
  if (dir < 0) basew = mod_inv(basew, prime);
  for (int m = n; m >= 2; m >>= 1) {
    int mh = m >> 1;
    LL w = 1;
    for (int i = 0; i < mh; i++) {
      for (int j = i; j < n; j += m) {
        int k = j + mh;
        LL x = (a[j] - a[k] + prime) % prime;
        a[j] = (a[j] + a[k]) % prime;
        a[k] = (w * x) % prime;
      }
      w = (w * basew) % prime;
    }
    basew = (basew * basew) % prime;
  }
  int i = 0;
  for (int j = 1; j < n - 1; j++) {
    for (int k = n >> 1; k > (i ^= k); k >>= 1);
    if (j < i) swap(a[i], a[j]);
  }
}

9.14 Pollard Rho Factorize

long long pollard_rho(long long n) {
  long long x, y, i = 1, k = 2, d;
  x = y = rand() % n;
  while (1) {
    ++i;
    x = mod_mul(x, x, n);
    x += 2;
    if (x >= n) x -= n;
    if (x == y) return 1;
    d = __gcd(abs(x - y), n);
    if (d != 1) return d;
    if (i == k) {
      y = x;
      k *= 2;
    }
  }
  return 1;
}

// Returns a list with the prime divisors of n
vector<long long> factorize(long long n) {
  vector<long long> ans;
  if (n == 1)
    return ans;
  if (miller_rabin(n)) {
    ans.push_back(n);
  } else {
    long long d = 1;
    while (d == 1)
      d = pollard_rho(n);
    vector<long long> dd = factorize(d);
    ans = factorize(n / d);
    for (int i = 0; i < dd.size(); ++i)
      ans.push_back(dd[i]);
  }
  return ans;
}

9.15 Primes

namespace primes {
const int MP = 100001;
bool sieve[MP];
long long primes[MP];
int num_p;
void fill_sieve() {
  num_p = 0;
  sieve[0] = sieve[1] = true;
  for (long long i = 2; i < MP; ++i) {
    if (!sieve[i]) {
      primes[num_p++] = i;
      for (long long j = i * i; j < MP; j += i)
        sieve[j] = true;
    }
  }
}

// Finds prime numbers between a and b, using basic primes up to sqrt(b)
// a must be greater than 1.
vector<long long> seg_sieve(long long a, long long b) {
  long long ant = a;
  a = max(a, 3LL);
  vector<bool> pmap(b - a + 1);
  long long sqrt_b = sqrt(b);
  for (int i = 0; i < num_p; ++i) {
    long long p = primes[i];
    if (p > sqrt_b) break;
    long long j = (a + p - 1) / p;
    for (long long v = (j == 1) ? p + p : j * p; v <= b; v += p) {
      pmap[v - a] = true;
    }
  }
  vector<long long> ans;
  if (ant == 2) ans.push_back(2);
  int start = a % 2 ? 0 : 1;
  for (int i = start, I = b - a + 1; i < I; i += 2)
    if (pmap[i] == false)
      ans.push_back(a + i);
  return ans;
}

vector<pair<int, int>> factor(int n) {
  vector<pair<int, int>> ans;
  if (n == 0) return ans;
  for (int i = 0; primes[i] * primes[i] <= n; ++i) {
    if ((n % primes[i]) == 0) {
      int expo = 0;
      while ((n % primes[i]) == 0) {
        expo++;
        n /= primes[i];
      }
      ans.emplace_back(primes[i], expo);
    }
  }

  if (n > 1) {
    ans.emplace_back(n, 1);
  }
  return ans;
}
}

9.16 Totient Sieve

for (int i = 1; i < MN; i++)
  phi[i] = i;

for (int i = 1; i < MN; i++)
  if (!sieve[i]) // is prime
    for (int j = i; j < MN; j += i)
      phi[j] -= phi[j] / i;
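The snippet above assumes MN, phi[] and the prime sieve from 9.15 are already declared. A self-contained variant is sketched below (an illustration, not notebook code); it detects primes by phi[i] == i instead of consulting sieve[], and for MN = 10 prints 1 1 2 2 4 2 6 4 6.

// Standalone Euler phi sieve demo; compile separately.
#include <bits/stdc++.h>
const int MN = 10;
int phi[MN];
int main() {
  for (int i = 1; i < MN; i++) phi[i] = i;
  for (int i = 2; i < MN; i++)
    if (phi[i] == i)                        // i is prime: still untouched
      for (int j = i; j < MN; j += i) phi[j] -= phi[j] / i;
  for (int i = 1; i < MN; i++) printf("%d ", phi[i]);  // 1 1 2 2 4 2 6 4 6
  return 0;
}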
9.17 Totient

long long totient(long long n) {
  if (n == 1) return 0;
  long long ans = n;
  for (int i = 0; primes[i] * primes[i] <= n; ++i) {
    if ((n % primes[i]) == 0) {
      while ((n % primes[i]) == 0) n /= primes[i];
      ans -= ans / primes[i];
    }
  }
  if (n > 1) {
    ans -= ans / n;
  }
  return ans;
}

10 Probability and Statistics

10.1 Continuous Distributions

10.1.1 Uniform distribution

If the probability density function is constant between a and b and 0 elsewhere it is U(a, b), a < b.

  f(x) = 1/(b - a) if a < x < b, 0 otherwise

  µ = (a + b)/2,  σ² = (b - a)²/12

10.1.2 Exponential distribution

The time between events in a Poisson process is Exp(λ), λ > 0.

  f(x) = λe^(-λx) if x ≥ 0, 0 otherwise

  µ = 1/λ,  σ² = 1/λ²

10.1.3 Normal distribution

Most real random values with mean µ and variance σ² are well described by N(µ, σ²), σ > 0.

  f(x) = 1/√(2πσ²) · e^(-(x - µ)²/(2σ²))

If X1 ∼ N(µ1, σ1²) and X2 ∼ N(µ2, σ2²) are independent, then

  aX1 + bX2 + c ∼ N(aµ1 + bµ2 + c, a²σ1² + b²σ2²)

10.2 Discrete Distributions

10.2.1 Binomial distribution

The number of successes in n independent yes/no experiments, each of which yields success with probability p, is Bin(n, p), n = 1, 2, ..., 0 ≤ p ≤ 1.

  p(k) = (n choose k) p^k (1 - p)^(n-k)

  µ = np,  σ² = np(1 - p)

Bin(n, p) is approximately Po(np) for small p.

10.2.2 First success distribution

The number of trials needed to get the first success in independent yes/no experiments, each of which yields success with probability p, is Fs(p), 0 ≤ p ≤ 1.

  p(k) = p(1 - p)^(k-1),  k = 1, 2, ...

  µ = 1/p,  σ² = (1 - p)/p²

10.2.3 Poisson distribution

The number of events occurring in a fixed period of time t, if these events occur with a known average rate κ and independently of the time since the last event, is Po(λ), λ = tκ.

  p(k) = e^(-λ) λ^k / k!,  k = 0, 1, 2, ...

  µ = λ,  σ² = λ

10.3 Probability Theory

Let X be a discrete random variable with probability pX(x) of assuming the value x. It will then have an expected value (mean) µ = E(X) = Σ_x x·pX(x) and variance σ² = V(X) = E(X²) - (E(X))² = Σ_x (x - E(X))²·pX(x), where σ is the standard deviation. If X is instead continuous it will have a probability density function fX(x) and the sums above will instead be integrals with pX(x) replaced by fX(x).

Expectation is linear:

  E(aX + bY) = aE(X) + bE(Y)

For independent X and Y,

  V(aX + bY) = a²V(X) + b²V(Y).

11 Strings

11.1 Hashing

struct H {
  typedef uint64_t ull;
  ull x; H(ull x=0) : x(x) {}
#define OP(O,A,B) H operator O(H o) { ull r = x; asm \
  (A "addq %%rdx, %0\n adcq $0,%0" : "+a"(r) : B); return r; }
  OP(+,,"d"(o.x)) OP(*,"mul %1\n", "r"(o.x) : "rdx")
  H operator-(H o) { return *this + ~o.x; }
  ull get() const { return x + !~x; }
  bool operator==(H o) const { return get() == o.get(); }
  bool operator<(H o) const { return get() < o.get(); }
};
static const H C = (ll)1e11+3; // (order ~ 3e9; random also ok)

struct HashInterval {
  vector<H> ha, pw;
  HashInterval(string& str) : ha(sz(str)+1), pw(ha) {
    pw[0] = 1;
    rep(i,0,sz(str))
      ha[i+1] = ha[i] * C + str[i],
      pw[i+1] = pw[i] * C;
  }
  H hashInterval(int a, int b) { // hash [a, b)
    return ha[b] - ha[a] * pw[b - a];
  }
};

vector<H> getHashes(string& str, int length) {
  if (sz(str) < length) return {};
  H h = 0, pw = 1;
  rep(i,0,length)
    h = h * C + str[i], pw = pw * C;
  vector<H> ret = {h};
  rep(i,length,sz(str)) {
    ret.push_back(h = h * C + str[i] - pw * str[i-length]);
  }
  return ret;
}

H hashString(string& s){H h{}; for(char c:s) h=h*C+c;return h;}
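A minimal usage sketch for HashInterval (not part of the notebook; it assumes struct H, the constant C and the KACTL-style sz/rep macros above are in scope): equal substrings hash equally, so ranges [0,3) and [3,6) of "abcabc" compare equal while [0,3) and [1,4) do not.

void hashing_demo() {
  string s = "abcabc";
  HashInterval hi(s);
  printf("%d\n", (int)(hi.hashInterval(0, 3) == hi.hashInterval(3, 6))); // 1
  printf("%d\n", (int)(hi.hashInterval(0, 3) == hi.hashInterval(1, 4))); // 0
}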
11.2 Incremental Aho Corasick

class IncrementalAhoCorasic {
  static const int Alphabets = 26;
  static const int AlphabetBase = 'a';
  struct Node {
    Node *fail;
    Node *next[Alphabets];
    int sum;
    Node() : fail(NULL), next{}, sum(0) { }
  };

  struct String {
    string str;
    int sign;
  };

public:
  //totalLen = sum of (len + 1)
  void init(int totalLen) {
    nodes.resize(totalLen);
    nNodes = 0;
    strings.clear();
    roots.clear();
    sizes.clear();
    que.resize(totalLen);
  }

  void insert(const string &str, int sign) {
    strings.push_back(String{ str, sign });
    roots.push_back(nodes.data() + nNodes);
    sizes.push_back(1);
    nNodes += (int)str.size() + 1;
    auto check = [&]() { return sizes.size() > 1 && sizes.end()[-1] == sizes.end()[-2]; };
    if(!check())
      makePMA(strings.end() - 1, strings.end(), roots.back(), que);
    while(check()) {
      int m = sizes.back();
      roots.pop_back();
      sizes.pop_back();
      sizes.back() += m;
      if(!check())
        makePMA(strings.end() - m * 2, strings.end(), roots.back(), que);
    }
  }

  int match(const string &str) const {
    int res = 0;
    for(const Node *t : roots)
      res += matchPMA(t, str);
    return res;
  }

private:
  static void makePMA(vector<String>::const_iterator begin,
      vector<String>::const_iterator end, Node *nodes, vector<Node*> &que) {
    int nNodes = 0;
    Node *root = new(&nodes[nNodes ++]) Node();
    for(auto it = begin; it != end; ++ it) {
      Node *t = root;
      for(char c : it->str) {
        Node *&n = t->next[c - AlphabetBase];
        if(n == nullptr)
          n = new(&nodes[nNodes ++]) Node();
        t = n;
      }
      t->sum += it->sign;
    }
    int qt = 0;
    for(Node *&n : root->next) {
      if(n != nullptr) {
        n->fail = root;
        que[qt ++] = n;
      } else {
        n = root;
      }
    }
    for(int qh = 0; qh != qt; ++ qh) {
      Node *t = que[qh];
      int a = 0;
      for(Node *n : t->next) {
        if(n != nullptr) {
          que[qt ++] = n;
          Node *r = t->fail;
          while(r->next[a] == nullptr)
            r = r->fail;
          n->fail = r->next[a];
          n->sum += r->next[a]->sum;
        }
        ++ a;
      }
    }
  }

  static int matchPMA(const Node *t, const string &str) {
    int res = 0;
    for(char c : str) {
      int a = c - AlphabetBase;
      while(t->next[a] == nullptr)
        t = t->fail;
      t = t->next[a];
      res += t->sum;
    }
    return res;
  }

  vector<Node> nodes;
  int nNodes;
  vector<String> strings;
  vector<Node*> roots;
  vector<int> sizes;
  vector<Node*> que;
};

int main() {
  int m;
  while(~scanf("%d", &m)) {
    IncrementalAhoCorasic iac;
    iac.init(600000);
    rep(i, m) {
      int ty;
      char s[300001];
      scanf("%d%s", &ty, s);
      if(ty == 1) {
        iac.insert(s, +1);
      } else if(ty == 2) {
        iac.insert(s, -1);
      } else if(ty == 3) {
        int ans = iac.match(s);
        printf("%d\n", ans);
        fflush(stdout);
      } else {
        abort();
      }
    }
  }
  return 0;
}
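A usage sketch for the class above (hypothetical helper, not notebook code): after inserting "ab" and "aba" with sign +1, match("ababab") adds up every occurrence of every inserted string, so it should return 3 + 2 = 5.

int aho_demo() {
  IncrementalAhoCorasic iac;
  iac.init(1 << 10);                 // upper bound on total inserted length
  iac.insert("ab", +1);
  iac.insert("aba", +1);
  return iac.match("ababab");        // "ab" x3 + "aba" x2 = 5
}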
11.3 KMP

vi pi(const string& s) {
  vi p(sz(s));
  rep(i,1,sz(s)) {
    int g = p[i-1];
    while (g && s[i] != s[g]) g = p[g-1];
    p[i] = g + (s[i] == s[g]);
  }
  return p;
}

vi match(const string& s, const string& pat) {
  vi p = pi(pat + '\0' + s), res;
  rep(i,sz(p)-sz(s),sz(p))
    if (p[i] == sz(pat)) res.push_back(i - 2 * sz(pat));
  return res;
}

11.4 Minimal String Rotation

// Lexicographically minimal string rotation
int lmsr() {
  string s;
  cin >> s;
  int n = s.size();
  s += s;
  vector<int> f(s.size(), -1);
  int k = 0;
  for (int j = 1; j < 2 * n; ++j) {
    int i = f[j - k - 1];
    while (i != -1 && s[j] != s[k + i + 1]) {
      if (s[j] < s[k + i + 1])
        k = j - i - 1;
      i = f[i];
    }
    if (i == -1 && s[j] != s[k + i + 1]) {
      if (s[j] < s[k + i + 1]) {
        k = j;
      }
      f[j - k] = -1;
    } else {
      f[j - k] = i + 1;
    }
  }
  return k;
}

11.5 Suffix Array

const int MAXN = 200005;

const int MAX_DIGIT = 256;
void countingSort(vector<int>& SA, vector<int>& RA, int k = 0) {
  int n = SA.size();
  vector<int> cnt(max(MAX_DIGIT, n), 0);
  for (int i = 0; i < n; i++)
    if (i + k < n)
      cnt[RA[i + k]]++;
    else
      cnt[0]++;
  for (int i = 1; i < cnt.size(); i++)
    cnt[i] += cnt[i - 1];
  vector<int> tempSA(n);
  for (int i = n - 1; i >= 0; i--)
    if (SA[i] + k < n)
      tempSA[--cnt[RA[SA[i] + k]]] = SA[i];
    else
      tempSA[--cnt[0]] = SA[i];
  SA = tempSA;
}

vector<int> constructSA(string s) {
  int n = s.length();
  vector<int> SA(n);
  vector<int> RA(n);
  vector<int> tempRA(n);
  for (int i = 0; i < n; i++) {
    RA[i] = s[i];
    SA[i] = i;
  }
  for (int step = 1; step < n; step <<= 1) {
    countingSort(SA, RA, step);
    countingSort(SA, RA, 0);
    int c = 0;
    tempRA[SA[0]] = c;
    for (int i = 1; i < n; i++) {
      if (RA[SA[i]] == RA[SA[i - 1]] && RA[SA[i] + step] == RA[SA[i - 1] + step])
        tempRA[SA[i]] = tempRA[SA[i - 1]];
      else
        tempRA[SA[i]] = tempRA[SA[i - 1]] + 1;
    }
    RA = tempRA;
    if (RA[SA[n - 1]] == n - 1) break;
  }
  return SA;
}

vector<int> computeLCP(const string& s, const vector<int>& SA) {
  int n = SA.size();
  vector<int> LCP(n), PLCP(n), c(n, 0);
  for (int i = 0; i < n; i++)
    c[SA[i]] = i;
  int k = 0;
  for (int j, i = 0; i < n-1; i++) {
    if (c[i] - 1 < 0)
      continue;
    j = SA[c[i] - 1];
    k = max(k - 1, 0);
    while (i+k < n && j+k < n && s[i + k] == s[j + k])
      k++;
    PLCP[i] = k;
  }
  for (int i = 0; i < n; i++)
    LCP[i] = PLCP[SA[i]];
  return LCP;
}
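A usage sketch (not notebook code) for constructSA/computeLCP on "banana$"; appending a sentinel that sorts before every letter keeps the rank comparisons in range. The sorted suffixes are $, a$, ana$, anana$, banana$, na$, nana$, so the expected SA is 6 5 3 1 0 4 2 and the expected LCP array is 0 0 1 3 0 0 2.

void sa_demo() {
  string s = "banana$";
  vector<int> SA = constructSA(s);
  vector<int> LCP = computeLCP(s, SA);
  for (int i = 0; i < (int)SA.size(); i++)
    printf("%d %d\n", SA[i], LCP[i]);   // expected rows: (6,0) (5,0) (3,1) (1,3) (0,0) (4,0) (2,2)
}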
11.6 Suffix Automaton

/*
 * Suffix automaton:
 * This implementation was extended to maintain (online) the
 * number of different substrings. This is equivalent to computing
 * the number of paths from the initial state to all the other
 * states.
 *
 * The overall complexity is O(n)
 * can be tested here:
 * https://www.urionlinejudge.com.br/judge/en/problems/view/1
 * */

struct state {
  int len, link;
  long long num_paths;
  map<int, int> next;
};

const int MN = 200011;
state sa[MN << 1];
int sz, last;
long long tot_paths;

void sa_init() {
  sz = 1;
  last = 0;
  sa[0].len = 0;
  sa[0].link = -1;
  sa[0].next.clear();
  sa[0].num_paths = 1;
  tot_paths = 0;
}

void sa_extend(int c) {
  int cur = sz++;
  sa[cur].len = sa[last].len + 1;
  sa[cur].next.clear();
  sa[cur].num_paths = 0;
  int p;
  for (p = last; p != -1 && !sa[p].next.count(c); p = sa[p].link) {
    sa[p].next[c] = cur;
    sa[cur].num_paths += sa[p].num_paths;
    tot_paths += sa[p].num_paths;
  }

  if (p == -1) {
    sa[cur].link = 0;
  } else {
    int q = sa[p].next[c];
    if (sa[p].len + 1 == sa[q].len) {
      sa[cur].link = q;
    } else {
      int clone = sz++;
      sa[clone].len = sa[p].len + 1;
      sa[clone].next = sa[q].next;
      sa[clone].num_paths = 0;
      sa[clone].link = sa[q].link;
      for (; p != -1 && sa[p].next[c] == q; p = sa[p].link) {
        sa[p].next[c] = clone;
        sa[q].num_paths -= sa[p].num_paths;
        sa[clone].num_paths += sa[p].num_paths;
      }
      sa[q].link = sa[cur].link = clone;
    }
  }
  last = cur;
}

11.7 Suffix Tree

struct SuffixTree {
  enum { N = 200010, ALPHA = 26 }; // N ~ 2*maxlen+10
  int toi(char c) { return c - 'a'; }
  string a; // v = cur node, q = cur position
  int t[N][ALPHA],l[N],r[N],p[N],s[N],v=0,q=0,m=2;

  void ukkadd(int i, int c) { suff:
    if (r[v]<=q) {
      if (t[v][c]==-1) { t[v][c]=m; l[m]=i;
        p[m++]=v; v=s[v]; q=r[v]; goto suff; }
      v=t[v][c]; q=l[v];
    }
    if (q==-1 || c==toi(a[q])) q++; else {
      l[m+1]=i; p[m+1]=m; l[m]=l[v]; r[m]=q;
      p[m]=p[v]; t[m][c]=m+1; t[m][toi(a[q])]=v;
      l[v]=q; p[v]=m; t[p[m]][toi(a[l[m]])]=m;
      v=s[p[m]]; q=l[m];
      while (q<r[m]) { v=t[v][toi(a[q])]; q+=r[v]-l[v]; }
      if (q==r[m]) s[m]=v; else s[m]=m+2;
      q=r[v]-(q-r[m]); m+=2; goto suff;
    }
  }

  SuffixTree(string a) : a(a) {
    fill(r,r+N,sz(a));
    memset(s, 0, sizeof s);
    memset(t, -1, sizeof t);
    fill(t[1],t[1]+ALPHA,0);
    s[0] = 1; l[0] = l[1] = -1; r[0] = r[1] = p[0] = p[1] = 0;
    rep(i,0,sz(a)) ukkadd(i, toi(a[i]));
  }

  // example: find longest common substring (uses ALPHA = 28)
  pii best;
  int lcs(int node, int i1, int i2, int olen) {
    if (l[node] <= i1 && i1 < r[node])
      return 1;
    if (l[node] <= i2 && i2 < r[node])
      return 2;
    int mask = 0, len = node ? olen + (r[node] - l[node]) : 0;
    rep(c,0,ALPHA) if (t[node][c] != -1)
      mask |= lcs(t[node][c], i1, i2, len);
    if (mask == 3)
      best = max(best, {len, r[node] - len});
    return mask;
  }

  static pii LCS(string s, string t) {
    SuffixTree st(s + (char)('z' + 1) + t + (char)('z' + 2));
    st.lcs(0, sz(s), sz(s) + 1 + sz(t), 0);
    return st.best;
  }
};

11.8 Z Algorithm

vector<int> compute_z(const string &s){
  int n = s.size();
  vector<int> z(n,0);
  int l,r;
  r = l = 0;
  for(int i = 1; i < n; ++i){
    if(i > r) {
      l = r = i;
      while(r < n and s[r - l] == s[r])r++;
      z[i] = r - l;r--;
    }else{
      int k = i-l;
      if(z[k] < r - i +1) z[i] = z[k];
      else {
        l = i;
        while(r < n and s[r - l] == s[r])r++;
        z[i] = r - l;r--;
      }
    }
  }
  return z;
}

int main(){
  //string line;cin>>line;
  string line = "alfalfa";
  vector<int> z = compute_z(line);

  for(int i = 0; i < z.size(); ++i ){
    if(i)cout<<" ";
    cout<<z[i];
  }
  cout<<endl;

  // must print "0 0 0 4 0 0 1"
  return 0;
}
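A common use of compute_z (sketch, not notebook code): to find a pattern p in a text t, build the Z-function of p + "#" + t for a separator "#" that occurs in neither string; every position whose z value equals |p| marks an occurrence. For p = "ab", t = "abcab" this prints 0 and 3.

void z_match_demo() {
  string p = "ab", t = "abcab";
  vector<int> z = compute_z(p + "#" + t);
  for (int i = (int)p.size() + 1; i < (int)z.size(); i++)
    if (z[i] == (int)p.size())
      printf("%d\n", (int)(i - p.size() - 1));   // occurrence start in t
}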