Competitive Programming with Large Reasoning Models
OpenAI∗
∗ Contributions listed in Appendix A
Abstract
We show that reinforcement learning applied to large language models (LLMs) significantly boosts
performance on complex coding and reasoning tasks. Additionally, we compare two general-purpose
reasoning models — OpenAI o1 and an early checkpoint of o3 — with a domain-specific system, o1-
ioi, which uses hand-engineered inference strategies designed for competing in the 2024 International
Olympiad in Informatics (IOI). We competed live at IOI 2024 with o1-ioi and, using hand-crafted
test-time strategies, placed in the 49th percentile. Under relaxed competition constraints, o1-ioi
achieved a gold medal. However, when evaluating later models such as o3, we find that o3 achieves
gold without hand-crafted domain-specific strategies or relaxed constraints. Our findings show that
although specialized pipelines such as o1-ioi yield solid improvements, the scaled-up, general-purpose
o3 model surpasses those results without relying on hand-crafted inference heuristics. Notably, o3
achieves a gold medal at the 2024 IOI and obtains a CodeForces rating on par with elite human
competitors. Overall, these results indicate that scaling general-purpose reinforcement learning,
rather than relying on domain-specific techniques, offers a robust path toward state-of-the-art AI in
reasoning domains, such as competitive programming.
1 Introduction
Competitive programming is widely recognized as a challenging benchmark for evaluating reasoning and
coding proficiency [2]. Solving complex algorithmic problems demands advanced computational thinking
and problem-solving skills. Moreover, these problems are objectively gradable, making them an ideal
testbed for assessing the reasoning capabilities of AI systems.
Recent work on program synthesis with large language models [1] has demonstrated that even rela-
tively general models, ranging from 244M to 137B parameters, can generate short Python scripts from
natural language instructions. Importantly, performance improves log-linearly with model size, and fine-
tuning significantly boosts accuracy. Concurrently, Codex [2], an early code-focused LLM, excelled at
Python program generation and powered GitHub Copilot. Further progress came from AlphaCode [7],
which tackled competitive programming tasks using large-scale code generation and heuristics at in-
ference, and the subsequent AlphaCode 2 [6], whose improvements nearly doubled AlphaCode’s solved
problems and placed it in the 85th percentile on the CodeForces platform. Both AlphaCode systems
used large-scale sampling of up to a million candidate solutions per problem before selecting their top
10 submissions with a hand-engineered test-time strategy.
Since then, significant progress has been made in harnessing reinforcement learning to improve LLMs’
reasoning skills. This has led to the emergence of large reasoning models (LRMs): language models
trained via reinforcement learning to “reason” and “think through” extended chains of thought. In
particular, OpenAI’s o1 [4, 12] and its soon-to-be-released successor o3 [13] use chain-of-thought reasoning
to tackle intricate tasks such as mathematics and coding. Work on DeepSeek-R1 [3] and Kimi k1.5 [15]
independently illustrates how chain-of-thought learning boosts performance on both mathematical and
programming challenges.
An open question is how domain-specific, hand-engineered inference strategies compare to learned
approaches that models generate and execute on their own. We have three systems available that can
shed light on this question: o1, o1-ioi, and early checkpoints of o3. OpenAI o1 was the first large rea-
soning model and used general purpose methods to improve programming performance. Building on
this foundation, o1-ioi was a fine-tuned system tailored to compete in the 2024 International Olympiad
in Informatics (IOI) and used test-time strategies similar to those used in the AlphaCode system. This
specialization led to strong performance improvements on both the 2024 IOI and competitive program-
ming platforms such as CodeForces. Subsequent advances led to the development of o3, which has
significantly advanced the reasoning capabilities of AI models. Unlike o1-ioi or AlphaCode, o3 does not
depend on coding-specific test-time strategies defined by humans. Instead, we found that complex test-
time reasoning strategies emerged naturally from end-to-end RL, leading to unprecedented performance
on competitive programming benchmarks.
This report provides a high-level overview of the importance of reasoning in coding tasks such as
competitive programming, the progress of OpenAI’s large reasoning models in programming ability, and
our evaluation methodology and results on various competitive programming and coding benchmarks.
2 OpenAI o1
We start with OpenAI o1, a large language model trained with reinforcement learning to tackle complex
reasoning tasks. By generating an extended internal chain of thought before answering [16], o1 resembles
a human who methodically works through a challenging problem step by step. Reinforcement learning
refines this chain-of-thought process, helping the model identify and correct errors, break down complex
tasks into manageable parts, and explore alternate solution paths when an approach fails. These in-
context reasoning capabilities substantially boost o1’s overall performance on a wide range of tasks.
Additionally, OpenAI o1 is trained to use external tools [14], especially for writing and executing
code in a secure environment.1 This capability lets o1 verify whether its generated code compiles, passes
provided test cases, and meets other correctness checks. By testing and refining its outputs, o1 iteratively
improves its solutions over the course of a single sample.
[Figure 1 chart: CodeForces ratings of gpt-4o, o1-preview, and o1; values shown include 808 (11th percentile) for gpt-4o and 1258 (62nd percentile) for o1-preview.]
We compared o1 against a non-reasoning LLM (gpt-4o) and an earlier reasoning model (o1-preview).
Figure 1 shows how both o1-preview and o1 dramatically outperform gpt-4o, highlighting the effectiveness
of reinforcement learning for complex reasoning. The o1-preview model achieved a CodeForces rating
of 1258 (62nd percentile) — up from gpt-4o’s 808 (11th percentile). Further training pushed o1’s rating
to 1673 (89th percentile), establishing a new milestone for AI performance in competitive programming.
1 https://platform.openai.com/docs/assistants/tools/code-interpreter
In Appendix B we provide additional details of which problems our models can solve and how ratings
were calculated.
3 OpenAI o1-ioi
During our development and evaluation of OpenAI o1, we found that increasing both the amount of
reinforcement learning (RL) compute and test-time inference compute consistently improved model per-
formance.
Figure 2: Additional RL training and additional test-time compute improves competitive mathematics
performance.
As shown in Figure 2, scaling RL training and extending test-time inference led to marked gains,
highlighting the importance of optimizing these two compute dimensions to push performance beyond
conventional LLM pretraining.
Building on these insights, we created the o1-ioi system for competing at the 2024 International
Olympiad in Informatics (IOI). In addition to continued RL training targeted at coding tasks, o1-ioi
incorporates specialized test-time inference strategies engineered for competitive programming.
This added focus on coding allowed o1-ioi to write and execute C++ programs during inference. The
model improved its reasoning by iteratively running and refining solutions, thereby strengthening both
its coding and problem-solving skills.
Problem formulation For o1-ioi we chose to solve the individual subtasks of each problem separately,
since IOI scoring is done on a subtask-by-subtask basis and awards each competitor the maximum score
achieved over all of their attempts on each subtask. To do this, we divided each IOI problem into
its composite subtasks (using the divisions laid out in the scoring guide for each problem). This was
done simply by creating one version of the document for each subtask with the information about the
other subtasks removed.
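As a rough illustration of this formulation step (the statement representation and the boilerplate text below are assumptions, not the exact format of the official problem documents), a sketch of the splitting is:

#include <string>
#include <vector>
using namespace std;

// Build one self-contained statement per subtask: the shared problem text plus
// only that subtask's constraints, with all other subtasks omitted.
vector<string> split_into_subtask_documents(const string& shared_statement,
                                            const vector<string>& subtask_constraints) {
    vector<string> docs;
    for (size_t i = 0; i < subtask_constraints.size(); i++) {
        string doc = shared_statement;
        doc += "\n\nSubtask " + to_string(i + 1) + " constraints:\n";
        doc += subtask_constraints[i];
        docs.push_back(doc);
    }
    return docs;
}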
Clustering We clustered the generated solutions based on their outputs on model-generated test inputs.
For each subtask, we first prompted the model to write random test input generators in C++ given the
problem specification and subtask, and used these generators to produce 256 random test inputs. To
ensure the validity of these inputs, we also prompted the model to write C++ test input validators that
check whether a given input satisfies the subtask constraints, and we accepted each test input that
passed at least 75% of the validators. We then clustered the candidate solutions by their outputs on
these inputs: programs whose outputs matched on all test inputs were placed in the same cluster.
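Concretely, the clustering step amounts to grouping programs by their output signature on the generated inputs. The sketch below is a minimal illustration rather than the production harness; the callback that compiles and runs a candidate program in the sandbox is assumed to be supplied by the surrounding infrastructure.

#include <functional>
#include <map>
#include <string>
#include <vector>
using namespace std;

// Group candidate programs whose outputs agree on every generated test input.
// `run` maps (program id, test input) to that program's output; in the real
// system this compiled and executed the candidate in a sandbox.
vector<vector<int>> cluster_by_outputs(
        int num_programs,
        const vector<string>& test_inputs,
        const function<string(int, const string&)>& run) {
    map<vector<string>, vector<int>> clusters;  // output signature -> program ids
    for (int i = 0; i < num_programs; i++) {
        vector<string> signature;
        for (const string& input : test_inputs)
            signature.push_back(run(i, input));
        clusters[signature].push_back(i);
    }
    vector<vector<int>> out;
    for (auto& [sig, ids] : clusters) out.push_back(ids);
    return out;
}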
Reranking We then implemented reranking, the core of our test-time compute strategy. We scored
each solution based on:
• The quality of the solution according to a learned scoring function.
• Errors on model-generated test inputs.
• Failing the provided public test cases.
Each cluster was given a score defined as the average score of the samples it contained, minus a penalty
for each time a sample submission was attempted from that cluster. The penalty weights were tuned by
random search on solutions to previous years’ IOI problems, directly simulating the submission process.
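As a rough sketch of this scoring rule (the concrete weights below are illustrative placeholders, not the values found by the random search described above):

#include <vector>
using namespace std;

// Illustrative per-sample score combining the three signals listed above.
// The weights are placeholders, not the tuned values used in the real system.
double sample_score(double learned_score, int errors_on_generated_inputs, bool fails_public_tests) {
    return learned_score
         - 0.05 * errors_on_generated_inputs
         - (fails_public_tests ? 1.0 : 0.0);
}

// Cluster score: average score of its samples minus a penalty for every
// submission already attempted from this cluster.
double cluster_score(const vector<double>& sample_scores, int attempts_from_cluster,
                     double attempt_penalty = 0.2) {
    if (sample_scores.empty()) return -attempt_penalty * attempts_from_cluster;
    double sum = 0;
    for (double s : sample_scores) sum += s;
    return sum / sample_scores.size() - attempt_penalty * attempts_from_cluster;
}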
Submission We then submitted up to 50 (the maximum number allowed for human competitors) of
these solutions in a round-robin fashion over subtasks, starting from the hardest. We selected the top-
ranked solution in the top-ranked cluster for each given subtask. When a subtask was solved (meaning
that the maximum score was attained), we ceased sampling on that subtask. When submitting solutions
to any subtask that was a strict superset of a solved subtask, we would filter out any solutions that did
not match the outputs on test inputs of the solved constituent subtasks, allowing us to rapidly narrow
down candidate solutions on harder subtasks by rejecting those that would almost certainly have failed
easier subtasks.
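The submission loop itself can be sketched as follows. This is a simplified illustration that omits the output-matching filter for superset subtasks, and the grader callback is an assumption standing in for the contest judge.

#include <functional>
#include <string>
#include <vector>
using namespace std;

// Simplified round-robin submission loop over subtasks, hardest first.
// `ranked[s]` holds candidate programs for subtask s, best-ranked first;
// `submit` returns the official score of a candidate on that subtask.
void submit_round_robin(const vector<vector<string>>& ranked,        // index 0 = hardest subtask
                        const vector<double>& max_subtask_score,
                        const function<double(int, const string&)>& submit,
                        int submission_limit = 50) {
    int n = ranked.size();
    vector<bool> solved(n, false);
    vector<size_t> next(n, 0);
    int used = 0;
    while (used < submission_limit) {
        bool submitted_any = false;
        for (int s = 0; s < n && used < submission_limit; s++) {
            if (solved[s] || next[s] >= ranked[s].size()) continue;
            double score = submit(s, ranked[s][next[s]++]);
            used++;
            submitted_any = true;
            if (score >= max_subtask_score[s]) solved[s] = true;  // stop once the subtask is maxed
        }
        if (!submitted_any) break;  // nothing left to try
    }
}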
[Figure 3 chart: CodeForces rating and percentile for o1, o1-ioi, o1-ioi with public test filtering, and o1-ioi with the full test-time strategy; values shown include 2092 (96th percentile) and 2214 (98th percentile).]
Figure 3: Further training OpenAI o1 on coding tasks and incorporating test-time strategies improves
performance.
[Figure 4 chart: IOI 2024 score by submission strategy. Random selection of 50 submissions scored 156 points on average, the hand-crafted selection strategy with 50 submissions scored 213 points, and a third bar shows random selection with 10K submissions.]
With the hand-crafted test-time strategy described above, o1-ioi scored 213 points under the official
50-submission limit. In comparison, selecting 50 random submissions would have yielded an average score
of only 156 points, indicating that the selection strategy contributed nearly 60 additional points under
the competition’s constraints.
When the submission limit was relaxed to 10,000 per problem, the model’s performance improved
dramatically. Without employing any test-time selection strategy, it achieved a score of 362.14, surpassing
the gold medal threshold and demonstrating the model’s potential. We show samples that yielded the
362.14 score in Appendix C.
4 OpenAI o3
Building on the insights gained from o1 and o1-ioi, we explore the limits of reinforcement learning (RL)
training alone, without relying on human-engineered test-time strategies. While o1-ioi achieved strong
results by combining additional RL fine-tuning with carefully designed test-time inference pipelines, its
success hinged on human intervention to define and implement these strategies. We therefore sought to
explore the performance of a model trained even further with RL that could autonomously develop and
execute its own test-time reasoning strategies. To this end, we obtained access to early checkpoints of
o3 [13] to evaluate on competitive programming tasks.
[Figure 5 chart: CodeForces ratings of o1 (1673, 89th percentile), o1-ioi (2214, 98th percentile), and o3.]
As shown in Figure 5, further RL training provided a significant improvement over both o1 and
the full o1-ioi system. Notably, the transition from the o1-ioi model to o3 resulted in a rating increase
from 2214 (98th percentile) to 2724 (99.8th percentile), reflecting a substantial leap in competitive
programming performance. This improvement demonstrates o3’s ability to solve a wider range of complex
algorithmic problems with higher reliability, pushing its capabilities closer to top-tier human competitors
on CodeForces.
In addition to its significantly improved problem-solving capabilities, we observe that o3 demonstrates
more insightful and deliberate chains of thought. The model not only writes and executes code to validate
its solutions against public test cases but also refines its approach based on the results of those checks. Figure 6
shows an advanced test-time strategy discovered by o3: for problems where verification is nontrivial, it
often writes simple brute-force solutions — trading efficiency for correctness — then cross-checks the
outputs against its more optimized algorithmic implementations. This self-imposed validation mechanism
lets o3 catch potential errors and improve the reliability of its solutions.
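In spirit, this resembles the classic stress-testing pattern from competitive programming. A minimal sketch of such a harness is shown below; the binary names (gen, fast, brute) are placeholders, and o3 writes its own variant of this loop within its chain of thought rather than using a fixed script.

#include <cstdio>
#include <cstdlib>

// Minimal stress-test harness: compare an optimized solution against a simple
// brute-force reference on many random inputs and stop at the first mismatch.
int main() {
    for (int iter = 0; iter < 1000; iter++) {
        if (std::system("./gen > case.txt") != 0) return 1;               // random input generator
        if (std::system("./fast < case.txt > fast.out") != 0) return 1;   // optimized solution
        if (std::system("./brute < case.txt > brute.out") != 0) return 1; // slow but clearly correct
        if (std::system("diff -q fast.out brute.out > /dev/null") != 0) {
            std::printf("Mismatch on iteration %d; see case.txt\n", iter);
            return 1;
        }
    }
    std::printf("All random cases agree.\n");
    return 0;
}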
Sampling Approach. Unlike o1-ioi, which sampled solutions separately for each subtask, we adopted
a different approach when evaluating o3: sampling from a single prompt containing the original problem
statement. Additionally, while o1-ioi generated 10K solutions per subtask, for o3 we sampled only 1K
solutions per problem.

Figure 6: o3 testing its own solution. This reflects a sophisticated reasoning strategy that partially
implements the hand-designed test-time strategy used for o1-ioi in IOI 2024.
Selection strategies also differed between the two models. Whereas o1-ioi relied on the complex, human-
defined test-time strategy described above to select solutions, o3 followed a much simpler approach. Specifically, we
selected the top 50 solutions with the highest test-time compute from 1,024 samples per problem. Despite
this streamlined method, o3 produced robust solutions capable of covering many, if not all, subtasks —
without the need for subtask-specific prompts, manual partitioning, or intricate submission strategies.
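That selection rule is essentially the following sketch, where the proxy used for test-time compute (token count here) is illustrative rather than the exact signal used:

#include <algorithm>
#include <vector>
using namespace std;

struct Sample {
    int id;
    long long reasoning_tokens;  // illustrative proxy for test-time compute
};

// Keep the k samples that used the most test-time compute.
vector<int> select_top_by_compute(vector<Sample> samples, int k = 50) {
    sort(samples.begin(), samples.end(), [](const Sample& a, const Sample& b) {
        return a.reasoning_tokens > b.reasoning_tokens;
    });
    vector<int> chosen;
    for (int i = 0; i < k && i < (int)samples.size(); i++) chosen.push_back(samples[i].id);
    return chosen;
}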
Figure 7: IOI 2024 scores under different submission strategies. o1-ioi scored 213 points with 50
submissions and 362.14 points with 10K submissions; o3 scored 395.64 points with 50 submissions. Even
without human-engineered heuristics or relaxed submission limits, o3 outperforms o1-ioi and surpasses
the gold threshold with just 50 submissions.
Results. Figure 7 presents the final scores. The IOI scoring system is subtask-based, with a maximum
total of 600 points in the 2024 contest. The gold medal threshold was approximately 360 points. Key
results include:
• o1-ioi scored 213 points with 50 submissions, improving to 362.14 points with 10K submissions, just
above the gold medal cutoff.
• o3 achieved 395.64 points, surpassing the gold threshold even under the 50-submission limit.
These results demonstrate that o3 outperforms o1-ioi without relying on IOI-specific, hand-crafted
test-time strategies. Instead, the sophisticated test-time techniques that emerged during o3 training,
such as generating brute-force solutions to verify outputs, served as a more than adequate replacement
and eliminated the need for the hand-engineered clustering and selection pipelines required by o1-ioi.
Overall, the IOI 2024 findings confirm that large-scale RL training alone can achieve state-of-the-art
coding and reasoning performance. By independently learning to generate, evaluate, and refine solutions,
o3 surpasses o1-ioi without dependence on domain-specific heuristics or clustering-based methods.
5 Software Engineering Evaluations
[Figure 8 chart: HackerRank Astra results, showing pass@1 (%) and average score (%) for gpt-4o, o1-preview, and o1.]
If a task is not solved within 5 attempts, it is considered an incorrect attempt. All evaluations are
averaged over 3 trials. We do not penalize the model for system failures (e.g., container hangs or grading
failures); we retry these rollouts until we can record a valid attempt.
[Figure 9 chart: SWE-bench Verified percent correct for gpt-4o (33.2%), o1-preview (41.3%), o1 (48.9%), and o3 (71.7%).]
6 Conclusion
Through the o-series large reasoning models, we demonstrate that chain-of-thought reasoning is a pow-
erful strategy for improving performance in coding tasks, from competitive programming benchmarks
such as CodeForces and IOI to complex software engineering challenges like SWE-bench and Astra.
Our findings highlight that increasing reinforcement learning training compute, coupled with enhanced
test-time compute, consistently boosts model performance to nearly match the best humans in the world.
Given these results, we believe o-series large reasoning models will unlock many new use cases for AI in
science, coding, math, and many other fields.
A Contributions

Sampling Infrastructure: Andre Saraiva, Hunter Lightman, Vineet Kosaraju, Wenda Zhou
Test-time Strategy: Alexander Wei, Daniel Selsam, David Dohan, Francis Song, Ignasi Clavera, Max
Schwarzer, Rhythm Garg, Rui Shu
Acknowledgments: We are grateful to the IOI committee for allowing us to enter our model, o1-ioi,
in the 2024 International Olympiad in Informatics. We also extend our thanks to Wael Ewida, a member
of the IOI technical committee, for hosting a portal that enabled us to submit our solutions under the
same conditions as the contestants. Additionally, we appreciate the support of those who contributed
to and maintained our sandboxed code execution, including Taylor Gordon, Oleg Boiko, John Rizzo,
Paul Ashbourne, Leo Liu, Alexander Prokofiev, and Scottie Yan. We also extend our gratitude to Chris
Orsinger and Michelle Fradin for their contributions to data efforts. Finally, we would like to express
our sincere gratitude to everyone involved in the reinforcement learning reasoning efforts for o1 and o3,
whose dedication and expertise were instrumental in advancing this work.
B.1 Data
For our test set we use “Division 1” contests from late 2023 and 2024, all of which occurred after the
o3 training set data cut-off. As a redundant additional check, we used embedding search to confirm
that the test problems have not been seen by the model during training. We excluded one contest that
contained an interactive problem for which grading was inconvenient, but otherwise included all post-
cut-off Division 1 problems to which we had access at the time. During training we used a validation
set of primarily Division 2 problems; when that set indicated that performance was very strong we built
and evaluated the Division 1 set presented here.
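A minimal sketch of such a contamination check, assuming problem statements have already been embedded with some text-embedding model and using an illustrative similarity threshold:

#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

// Cosine similarity between two precomputed embedding vectors.
double cosine(const vector<double>& a, const vector<double>& b) {
    double dot = 0, na = 0, nb = 0;
    for (size_t i = 0; i < a.size(); i++) { dot += a[i]*b[i]; na += a[i]*a[i]; nb += b[i]*b[i]; }
    return dot / (sqrt(na) * sqrt(nb) + 1e-12);
}

// Flag test problems whose nearest training problem exceeds the threshold.
vector<int> flag_possible_contamination(const vector<vector<double>>& test_emb,
                                        const vector<vector<double>>& train_emb,
                                        double threshold = 0.95) {
    vector<int> flagged;
    for (size_t i = 0; i < test_emb.size(); i++) {
        double best = -1;
        for (const auto& t : train_emb) best = max(best, cosine(test_emb[i], t));
        if (best >= threshold) flagged.push_back((int)i);
    }
    return flagged;
}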
B.2 Grading
We run the complete set of tests for each problem, and have confirmed that our test environment closely
matches the official CodeForces grading service, including by manually submitting solutions for the
hardest problems to the official CodeForces graders.
Following AlphaCode [6] we allow the model to make 10 independent submissions against the full
test set and mark a problem as solved if any one of those 10 passes. This is close to, but not strictly the
same as, the human affordance: human participants see only the results of the pre-tests during the
competition. However, in Division 1 contests the pre-tests are typically “strong” (highly correlated with
the full tests), and in our results the number of failed submissions before a passing one is typically small
(see Table 1). We did not have access to labels for which test cases were pre-tests.
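The pass@10 (no ranking) column in Table 1 is consistent with the standard unbiased estimator of Chen et al. [2] applied to the 1,162 samples per problem: with c passing samples out of n, pass@k = 1 - C(n-c, k)/C(n, k). A small sketch of that computation:

#include <cstdio>

// Unbiased pass@k estimate from n samples with c passing:
// pass@k = 1 - C(n-c, k) / C(n, k), computed as a stable running product.
double pass_at_k(int n, int c, int k) {
    if (n - c < k) return 1.0;  // fewer than k failing samples: a pass is guaranteed
    double prob_all_fail = 1.0;
    for (int i = 0; i < k; i++)
        prob_all_fail *= double(n - c - i) / double(n - i);
    return 1.0 - prob_all_fail;
}

int main() {
    // Example: problem 1919 D has 25 passing samples out of 1162; the estimator
    // gives roughly 0.20, matching the table.
    std::printf("%.2f\n", pass_at_k(1162, 25, 10));
    return 0;
}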
[Figure 10 scatter plot: CodeForces rating versus solve rate (%), comparing o3 with top active competitors; horizontal markers indicate the top 1, top 10, top 200, and top 1% worldwide thresholds.]
Figure 10: o3 would place among the best human competitive programmers in the world. Here we show
the average solve rate and current rating for participants that entered at least 8 of our 12 unseen test
contests. Horizontal lines show performance thresholds from the global CodeForces leaderboard of
active competitors. The very best humans still solve more problems than AI, for now.
Table 1: We estimate our CodeForces rating from simulated contest participation. Here we show a
detailed breakdown of o3 performance per-problem.
problem    problem rating    pass@1 (no ranking)    pass@10 (no ranking)    # failed submissions    pass@10 (ranking 1162)
1919 B 800 1141 / 1162 1.00 0 solved
1919 C 1400 499 / 1162 1.00 0 solved
1919 D 2100 25 / 1162 0.20 2 solved
1919 E 2600 6 / 1162 0.05 1 solved
1919 F1 2300 1090 / 1162 1.00 0 solved
1919 F2 2800 227 / 1162 0.89 0 solved
1919 G 3500 0 / 1162 0.00 0 not solved
1919 H 2000 0 / 1162 0.00 0 not solved
Contest 1942 - 30/Mar/24 - CodeTON Round 8 (Div. 1 + Div. 2, Rated, Prizes!)
score: 8,701
1942 A 800 1157 / 1162 1.00 0 solved
1942 B 1100 1157 / 1162 1.00 0 solved
1942 C1 1300 999 / 1162 1.00 0 solved
1942 C2 1700 525 / 1162 1.00 1 solved
1942 D 2100 1061 / 1162 1.00 0 solved
1942 E 2300 347 / 1162 0.97 0 solved
1942 F 2700 0 / 1162 0.00 0 not solved
1942 G 2800 239 / 1162 0.90 0 solved
1942 H 3500 0 / 1162 0.00 0 not solved
Contest 1943 - 16/Mar/24 - Codeforces Round 934 (Div. 1)
score: 3,427
1943 A 1300 116 / 1162 0.65 0 solved
1943 B 2000 1 / 1162 0.01 0 not solved
1943 C 2300 160 / 1162 0.77 0 solved
1943 D1 2400 848 / 1162 1.00 0 solved
1943 D2 2800 14 / 1162 0.11 0 solved
1943 E1 2900 0 / 1162 0.00 0 not solved
1943 E2 3300 0 / 1162 0.00 0 not solved
1943 F 3500 0 / 1162 0.00 0 not solved
Contest 1951 - 06/Apr/24 - Codeforces Global Round 25
score: 9,396
1951 A 900 1157 / 1162 1.00 0 solved
1951 B 1200 1150 / 1162 1.00 0 solved
1951 C 1400 1155 / 1162 1.00 0 solved
1951 D 2000 875 / 1162 1.00 0 solved
1951 E 2000 1009 / 1162 1.00 0 solved
1951 F 2500 53 / 1162 0.37 0 solved
1951 G 3100 34 / 1162 0.26 0 solved
1951 H 3200 1 / 1162 0.01 0 not solved
1951 I 3200 0 / 1162 0.00 0 not solved
Contest 1965 - 27/Apr/24 - Codeforces Round 941 (Div. 1)
score: 3,891
1965 A 1400 1143 / 1162 1.00 0 solved
1965 B 1800 1064 / 1162 1.00 0 solved
1965 C 2300 313 / 1162 0.96 0 solved
1965 D 2900 690 / 1162 1.00 0 solved
1965 E 3100 0 / 1162 0.00 0 not solved
1965 F 3300 0 / 1162 0.00 0 not solved
Contest 1967 - 30/Apr/24 - Codeforces Round 942 (Div. 1)
score: 3,871
1967 A 1400 1088 / 1162 1.00 0 solved
1967 B1 1400 1154 / 1162 1.00 0 solved
1967 B2 2200 1149 / 1162 1.00 0 solved
1967 C 2300 1116 / 1162 1.00 0 solved
1967 D 2800 9 / 1162 0.08 0 solved
1967 E1 3100 0 / 1162 0.00 0 not solved
1967 E2 3500 0 / 1162 0.00 0 not solved
1967 F 3200 0 / 1162 0.00 0 not solved
Contest 1975 - 25/May/24 - Codeforces Round 947 (Div. 1 + Div. 2)
score: 5,959
1975 A 800 1161 / 1162 1.00 0 solved
1975 B 1000 1091 / 1162 1.00 0 solved
1975 C 1200 492 / 1162 1.00 0 solved
1975 D 1700 9 / 1162 0.08 3 solved
1975 E 2100 80 / 1162 0.51 1 solved
1975 F 2600 12 / 1162 0.10 0 solved
1975 G 3000 0 / 1162 0.00 0 not solved
1975 H 3500 0 / 1162 0.00 0 not solved
1975 I 3500 0 / 1162 0.00 0 not solved
Contest 1984 - 09/Jun/24 - Codeforces Global Round 26
score: 12,255
1984 A 800 1161 / 1162 1.00 0 solved
1984 B 1100 1158 / 1162 1.00 0 solved
1984 C1 1300 914 / 1162 1.00 0 solved
1984 C2 1700 768 / 1162 1.00 0 solved
1984 D 2000 193 / 1162 0.84 1 solved
1984 E 2400 849 / 1162 1.00 1 solved
1984 F 2500 918 / 1162 1.00 0 solved
1984 G 3200 0 / 1162 0.00 0 not solved
1984 H 3300 138 / 1162 0.72 3 solved
Contest 2002 - 11/Aug/24 - EPIC IoT Round August 2024 (Div. 1 + Div. 2)
score: 8,981
2002 A 800 1161 / 1162 1.00 0 solved
2002 B 1000 1152 / 1162 1.00 0 solved
2002 C 1200 1096 / 1162 1.00 0 solved
2002 D1 1900 1067 / 1162 1.00 0 solved
2002 D2 2300 805 / 1162 1.00 0 solved
2002 E 2300 232 / 1162 0.89 0 solved
2002 F1 2600 12 / 1162 0.10 8 solved
2002 F2 2800 7 / 1162 0.06 0 not solved
2002 G 3400 143 / 1162 0.73 1 solved
2002 H 3500 0 / 1162 0.00 0 not solved
Contest 2006 - 30/Aug/24 - Codeforces Round 969 (Div. 1)
score: 2,757
2006 A 1700 945 / 1162 1.00 0 solved
2006 B 1800 1149 / 1162 1.00 0 solved
2006 C 2300 811 / 1162 1.00 0 solved
2006 D 2600 2 / 1162 0.02 0 not solved
2006 E 3100 4 / 1162 0.03 0 not solved
2006 F 3500 0 / 1162 0.00 0 not solved
C IOI Submissions
This section presents the solutions generated by o1-ioi during the 2024 International Olympiad in Infor-
matics.
C.1 Nile
All 100 possible points for Nile were scored in a single submission.
#include "nile.h"
#include <bits/stdc++.h>
using namespace std;
struct UF{
vector<int> parent, sizee, minIndex;
vector<array<long long, 2>> singledPat; // singledPat[0] = min d[i] for i mod 2 = 0 in block, singledPat[1] for i mod 2 = 1 in block
vector<long long> singledBr; // singled bridging
vector<long long> singled; // singled in block
};
sumSingledGlobal -= u.singled[a];
sumSingledGlobal -= u.singled[b];
// union by size
if(u.sizee[a] < u.sizee[b]) swap(a, b);
u.parent[b] = a;
u.sizee[a] = newSize;
u.minIndex[a] = newMinIndex;
u.singledPat[a] = newSingledPat;
u.singledBr[a] = newSingledBr;
updateSingled(u, a);
sumSingledGlobal += u.singled[a];
}
u.singledBr[r] = min(u.singledBr[r], val);
// singled is 0 anyway
}
sumSingledGlobal += u.singled[r];
}
int N = W.size();
int Q = E.size();
UF u;
u.parent.resize(N);
u.sizee.resize(N);
u.minIndex.resize(N);
u.singledPat.resize(N);
u.singledBr.resize(N);
u.singled.resize(N);
u.singled[i] = d[i]; // singled = d[i] because block size 1 is odd
}
sumSingledGlobal = 0;
for(int i = 0; i < N; i++) {
sumSingledGlobal += u.singled[i];
}
return result;
}
C.2 Message
This solution achieved a score of 79.64 out of 100, with full marks awarded for subtask 1 and partial
credit received for subtask 2.
#include "message.h"
#include <bits/stdc++.h>
using namespace std;
int bit = (i >> p) & 1; // p-th bit of i, i is the index in safe subset
A[col] = bit;
}
for (int col : sabotage_cols) {
A[col] = false;
}
std::vector<bool> B = send_packet(A);
first4results.push_back(B);
}
// Now identify known safe column from the results of these 4 calls.
vector<string> colSequences(31, string(4, '0'));
for (int c = 0; c < 31; c++) {
for (int r = 0; r < 4; r++) {
colSequences[c][r] = first4results[r][c] ? '1' : '0';
}
}
// Step 2: store sabotage subset's bits in the known safe column in the next 31 calls. The other 15 safe columns store message bits if available.
int S = M.size(); // message length.
int offset = 0; // offset in M.
for (int i = 0; i < 31; i++) {
vector<bool> A(31, false);
int sabotageBit = C[i];
A[knownSafeCol] = sabotageBit; // store sabotage subset’s bit i in known safe column.
for (int c : safe_cols) {
if (c == knownSafeCol) continue; // skip known safe column because we used it for the sabotage subset.
int col = c;
if (offset < S) {
A[col] = M[offset];
offset++;
} else {
A[col] = false;
}
}
for (int sc : sabotage_cols) {
A[sc] = false;
}
std::vector<bool> B = send_packet(A);
// We do not store B because we do not need to adapt further. Actually, we can store B if we want to adapt, but the solution does not require further adaptation.
}
// Step 3: store message length in the next 11 calls in the known safe column. The other 15 safe columns store more message bits if available.
int L = S; // length in integer form.
vector<int> lengthBinary(11, 0);
for (int i = 0; i < 11; i++) {
lengthBinary[i] = (L >> i) & 1;
}
// Step 4: store the rest of the message in subsequent calls using all 16 safe columns.
int messageLeft = S - offset; // how many bits are left to store.
int fullColumns = 16; // number of safe columns.
int messageRows = (messageLeft + fullColumns - 1) / fullColumns; // how many rows needed.
std::vector<bool> receive_message(std::vector<std::vector<bool>> R)
{
int n = R.size();
vector<bool> result; // message to return.
if (n < 4) {
return result;
}
// Step 1: from the first 4 rows in R, find the known safe column.
vector<string> colSequences(31, string(4, '0'));
for (int c = 0; c < 31; c++) {
for (int r = 0; r < 4 && r < n; r++) {
if (r < n) {
colSequences[c][r] = R[r][c] ? '1' : '0';
}
}
}
// Step 2: from next 31 rows in known safe column, read sabotage subset.
vector<bool> sabotageSubset(31, false);
int sabotageStart = 4;
for (int i = 0; i < 31 && sabotageStart + i < n; i++) {
sabotageSubset[i] = R[sabotageStart + i][knownSafeCol];
}
}
// Step 5: read message from the next (31 + 11) rows' other safe columns and from subsequent rows using all safe columns.
vector<bool> message;
int offset = 0; // offset in message.
int sabotageRowStart = 4;
int sabotageRows = 31;
for (int r = sabotageRowStart; r < sabotageRowStart + sabotageRows && r < n; r++) {
for (int c : safe_cols) {
if (c == knownSafeCol) continue; // skip known safe column.
if (offset < messageLength) {
message.push_back(R[r][c]);
offset++;
}
}
}
// The message is now in vector message. If offset < messageLength, we did not get all bits, but we should have.
return message;
}
C.3 Tree
A total of 30 points were scored on Tree across two separate submissions.
C.3.1 Submission 1
The first submission achieved a score of 17 out of 100, with points earned from subtasks 1 and 4.
#include "tree.h"
#include <bits/stdc++.h>
using namespace std;
static vector<int> parentGlobal;
static vector<int> weightGlobal;
static vector<long long> deltaGlobal;
static vector<int> leafCountGlobal;
static vector<int> sortedByLeafCount; // We’ll store node indices sorted by leaf count.
static vector<long long> prefixDelta, prefixDeltaTimesLeafCount; // prefix sums in sorted order by leaf count.
static long long sumAllDelta;
static vector<vector<int>> childrenGlobal;
childrenGlobal.assign(N, {});
for (int i=1; i < N; i++) childrenGlobal[parentGlobal[i]].push_back(i);
leafCountGlobal.resize(N);
computeLeafCountRec(0);
vector<int> deg(N,0);
for (int i=1; i < N; i++) deg[parentGlobal[i]]++;
deltaGlobal.resize(N);
for (int i=0; i < N; i++){
if (parentGlobal[i] == -1){ // root
if (deg[i] == 0) { // leaf root
deltaGlobal[i] = weightGlobal[i];
} else { // internal root
deltaGlobal[i] = - (long long)weightGlobal[i];
}
} else {
if (deg[i] == 0) { // leaf
deltaGlobal[i] = (long long)weightGlobal[parentGlobal[i]] + (long long)weightGlobal[i];
} else { // internal
deltaGlobal[i] = (long long)weightGlobal[parentGlobal[i]] - (long long)weightGlobal[i];
}
}
}
sortedByLeafCount.resize(N);
iota(sortedByLeafCount.begin(), sortedByLeafCount.end(), 0);
sort(sortedByLeafCount.begin(), sortedByLeafCount.end(), [](int a, int b){
return leafCountGlobal[a] < leafCountGlobal[b];
});
prefixDelta.resize(N,0);
prefixDeltaTimesLeafCount.resize(N,0);
sumAllDelta = 0;
for (int i=0; i < N; i++) sumAllDelta += deltaGlobal[i];
long long runningDeltaTimesLeaf = 0;
long long runningDelta = 0;
for (int i=0; i < N; i++){
int idx = sortedByLeafCount[i];
long long valDeltaTimesLeaf = deltaGlobal[idx] * (long long)leafCountGlobal[idx];
runningDeltaTimesLeaf += valDeltaTimesLeaf;
runningDelta += deltaGlobal[idx];
prefixDeltaTimesLeafCount[i] = runningDeltaTimesLeaf;
prefixDelta[i] = runningDelta;
}
}
long long sumA = 0; // sum of delta[i]*b[i] for nodes with b[i] <= a.
if (idx >= 0) {
sumA = prefixDeltaTimesLeafCount[idx]; // sum_{i: b[i] <= a} delta[i]*b[i]
}
return cost;
}
C.3.2 Submission 2
Submission 2 achieved 13 of 100 points on subtask 2.
#include "tree.h"
#include <bits/stdc++.h>
using namespace std;
struct Pt {
long long x;
long long y;
};
struct PWL {
long long L, R;
vector<Pt> pts; // sorted by x
};
int NGlobal;
int PGlobal[2005];
int wGlobal[2005];
vector<int> childrenGlobal[2005];
PWL GGlobal[2005];
vector<ChildState> cs(d);
priority_queue<pair<long double, int>, vector<pair<long double, int>>, greater<pair<long double, int>>> pq;
cs[i].seg = seg;
long double slope = 0;
if (seg+1 < n) {
long long dx = childF[i].pts[seg+1].x - childF[i].pts[seg].x;
long long dy = childF[i].pts[seg+1].y - childF[i].pts[seg].y;
slope = dx!=0?(long double)dy/dx:0;
} else {
slope = 0;
}
cs[i].slope = slope;
if (seg+1 < n) cs[i].nextBound = min((long long)childF[i].pts[seg+1].x, R);
else cs[i].nextBound = R;
pq.push({slope, i});
}
vector<Pt> fpts;
fpts.push_back({sum, cost});
fpts.push_back({sum, cost});
cs[idx].x += feasibleInc;
if (cs[idx].x == cs[idx].nextBound) {
const PWL &g = childF[idx];
int seg = cs[idx].seg;
if (cs[idx].x == R) {
cs[idx].slope = 1e9; // saturate
cs[idx].nextBound = R;
} else {
seg++;
cs[idx].seg = seg;
int n = g.pts.size();
if (seg+1 < n) {
long long dx = g.pts[seg+1].x - g.pts[seg].x;
long long dy = g.pts[seg+1].y - g.pts[seg].y;
long double slopeC = dx!=0?(long double)dy/dx:0;
cs[idx].slope = slopeC;
cs[idx].nextBound = min((long long)g.pts[seg+1].x, R);
} else {
cs[idx].slope = 0;
cs[idx].nextBound = R;
}
}
}
}
for (int idx : group) {
if (cs[idx].x < R) {
pq.push({cs[idx].slope, idx});
}
}
}
vector<Pt> unify;
unify.push_back(fpts[0]);
for (int i = 1; i < fpts.size(); i++) {
if (fpts[i].x == unify.back().x) {
unify.back().y = min(unify.back().y, fpts[i].y);
} else unify.push_back(fpts[i]);
}
vector<Pt> final;
if (!unify.empty()) final.push_back(unify[0]);
for (int i = 1; i < unify.size()-1; i++) {
long long x1 = final.back().x;
long long y1 = final.back().y;
long long x2 = unify[i].x;
long long y2 = unify[i].y;
long long x3 = unify[i+1].x;
long long y3 = unify[i+1].y;
long double slope1 = (x2!=x1)? (long double)(y2-y1)/(x2-x1) : 1e9;
long double slope2 = (x3!=x2)? (long double)(y3-y2)/(x3-x2) : 1e9;
if (abs(slope1 - slope2) < 1e-9) {
// unify linear, skip unify[i]
} else {
final.push_back(unify[i]);
}
}
if (!unify.empty()) final.push_back(unify.back());
PWL result;
result.L = sumDomainL; result.R = sumDomainR;
result.pts = final;
return result;
}
static PWL parentFormula(const PWL &f, long long wVal, long long L, long long R) {
vector<long long> candVal;
candVal.push_back(L);
candVal.push_back(R);
for (auto &p : f.pts) {
if (p.x >= L && p.x <= R) candVal.push_back(p.x);
}
if (f.L >= L && f.L <= R) candVal.push_back(f.L);
if (f.R >= L && f.R <= R) candVal.push_back(f.R);
sort(candVal.begin(), candVal.end());
candVal.erase(unique(candVal.begin(), candVal.end()), candVal.end());
vector<Pt> pts;
for (auto val : candVal) {
long long bestCost = LLONG_MAX/2;
if (val >= f.L && val <= f.R) {
int left = 0, right = f.pts.size()-1;
while (left+1 < right) {
int mid = (left + right) / 2;
if (f.pts[mid].x <= val) left = mid; else right = mid;
}
if (f.pts[left].x <= val && val <= f.pts[right].x) {
long long x1 = f.pts[left].x;
long long y1 = f.pts[left].y;
long long x2 = f.pts[right].x;
long long y2 = f.pts[right].y;
long double slope = (x2!=x1? (long double)(y2 - y1)/(long double)(x2 - x1) : 0);
if (slope <= wVal && slope >= -wVal && val >= x1 && val <= x2) {
long double costf = (long double) y1 + slope * (long double)(val - x1);
long long c = (long long) llround(costf);
bestCost = min(bestCost, c);
}
}
}
for (auto &p : f.pts) {
long long s = p.x;
long long c = p.y + wVal * llabs(val - s);
if (c < bestCost) bestCost = c;
}
pts.push_back({val, bestCost});
}
vector<Pt> unify;
unify.push_back(pts[0]);
for (int i = 1; i < pts.size(); i++) {
if (pts[i].x == unify.back().x) {
unify.back().y = min(unify.back().y, pts[i].y);
} else unify.push_back(pts[i]);
}
vector<Pt> final;
if (!unify.empty()) final.push_back(unify[0]);
for (int i = 1; i < unify.size()-1; i++) {
long long x1 = final.back().x;
long long y1 = final.back().y;
long long x2 = unify[i].x;
long long y2 = unify[i].y;
long long x3 = unify[i+1].x;
long long y3 = unify[i+1].y;
long double slope1 = x2!=x1? (long double)(y2-y1)/(x2-x1) : 1e9;
long double slope2 = x3!=x2? (long double)(y3-y2)/(x3-x2) : 1e9;
if (abs(slope1 - slope2) < 1e-9) {
// unify linear, skip unify[i]
} else {
final.push_back(unify[i]);
}
}
if (!unify.empty()) final.push_back(unify.back());
PWL result;
result.L = L; result.R = R;
result.pts = final;
return result;
}
PWL f = combineChildren(childF, L, R);
PWL p = parentFormula(f, wGlobal[node], L, R);
GGlobal[node] = p;
}
}
C.4 Hieroglyphs
A total of 44 points was scored on Hieroglyphs across two separate submissions.
C.4.1 Submission 1
In the first submission, a score of 34 out of 100 points was achieved, distributed across subtasks 1, 2,
and 4.
#include "hieroglyphs.h"
#include <bits/stdc++.h>
using namespace std;
std::vector<int> ucs(std::vector<int> A, std::vector<int> B)
{
int N = (int) A.size();
int M = (int) B.size();
if (N == 0 || M == 0) {
std::vector<int> ret;
return ret;
}
const int MAX_VAL = 200001;
} else {
cVal[i] = 0;
}
}
if (sumC == 0) {
std::vector<int> ret;
return ret;
}
vector<Interval> intervals;
for (int x = 0; x < 200002; x++) {
if (cVal[x] == 1) {
Interval I;
I.letter = x;
I.eA = earliestA[x]; I.lA = latestA[x];
I.eB = earliestB[x]; I.lB = latestB[x];
intervals.push_back(I);
}
}
}
int mid = (start+end)/2;
if (pos <= mid) f(idx*2, start, mid, pos, val, f);
else f(idx*2+1, mid+1, end, pos, val, f);
segmentTree[idx] = max(segmentTree[idx*2], segmentTree[idx*2+1]);
};
auto query = [&](int idx, int start, int end, int l, int r, auto &&f) -> int {
if (r < start || end < l) return -1;
if (l <= start && end <= r) {
return segmentTree[idx];
}
int mid = (start+end)/2;
int leftVal = f(idx*2, start, mid, l, r, f);
int rightVal = f(idx*2+1, mid+1, end, l, r, f);
return max(leftVal, rightVal);
};
int j = 0;
for (int i = 0; i < intervals.size(); i++) {
auto &x = intervals[i];
int eAx = x.eA;
// remove intervals from data structure where lA[y] < eAx.
while (j < intervalsSortedByLA.size() && intervalsSortedByLA[j].lA < eAx) {
auto &y = intervalsSortedByLA[j];
// remove y from segment tree keyed by eB[y].
update(1, 0, M-1, y.eB, -1, update);
j++;
}
// query in B dimension: find if there's an interval y with eB[y] <= lB[x] and lB[y] >= eB[x].
int minB = x.eB; // eB[x]
int maxB = x.lB; // lB[x]
if (minB > maxB) {
// If eB[x] > lB[x], no intersection possible.
} else {
// query in the segment tree range [0, maxB] to find the maximum lB[y].
int maxVal = query(1, 0, M-1, 0, maxB, query);
if (maxVal >= minB) {
// found intersection with a letter y where cVal[y] = 1.
std::vector<int> ret;
ret.push_back(-1);
return ret;
}
}
posBarr[i].clear();
}
struct QItem {
int letter;
int pos;
};
struct QComp {
bool operator()(const QItem &a, const QItem &b) const {
if (a.pos == b.pos) return a.letter > b.letter;
return a.pos > b.pos;
}
};
vector<int> U;
U.reserve(sumC);
auto updatePosB0Alpha = [&](int letter, auto &freqAlpha, auto &TAlpha, int &posB0Alpha, int
M) {
int i = posB0Alpha + 1;
while (i < M) {
int l = B[i];
if (cVal[l] > 0) {
freqAlpha[l]++;
if (freqAlpha[l] > TAlpha[l]) {
freqAlpha[l]--;
break;
}
}
i++;
}
posB0Alpha = i-1;
};
TAlpha[letter]++;
int oldPosB0Alpha = posB0Alpha;
vector<pair<int,int>> freqChanges;
int newPosB0Alpha = i-1;
occInAarr[letter]++;
posAarrIdx[letter]++;
if (occInAarr[letter] < cVal[letter] && posAarrIdx[letter] <
posAarr[letter].size()) {
QItem newItem;
newItem.letter = letter;
newItem.pos = posAarr[letter][posAarrIdx[letter]];
pq.push(newItem);
}
posB0Alpha = newPosB0Alpha;
} else {
TAlpha[letter]--;
for (auto &fc : freqChanges) {
freqAlpha[fc.first]--;
}
posB0Alpha = oldPosB0Alpha;
posAarrIdx[letter]++;
if (posAarrIdx[letter] < posAarr[letter].size()) {
QItem newItem;
newItem.letter = letter;
newItem.pos = posAarr[letter][posAarrIdx[letter]];
pq.push(newItem);
}
continue;
}
}
}
return U;
}
C.4.2 Submission 2
In the second submission, the model scored 10 points on subtask 3.
#include "hieroglyphs.h"
#include <bits/stdc++.h>
using namespace std;
int a0 = 0, a1 = 0;
for (int i = 0; i < N; i++) {
if (A[i] == 0) a0++;
else if (A[i] == 1) a1++;
}
int b0 = 0, b1 = 0;
for (int i = 0; i < M; i++) {
if (B[i] == 0) b0++;
else if (B[i] == 1) b1++;
}
auto getOnesAfterZero = [&](vector<int> & zeroPos, vector<int> & prefixOnes, int totalOnes,
int z) {
if (z == 0) return totalOnes;
else {
if (z-1 >= (int)zeroPos.size()) return 0; // safety
int pos = zeroPos[z-1];
int onesUsed = prefixOnes[pos+1];
int onesAfter = totalOnes - onesUsed;
return onesAfter;
}
};
vector<int> cZ(Z+1);
for (int z = 0; z <= Z; z++) cZ[z] = min(fAval[z], fBval[z]);
if (w-1 >= (int)onePos.size()) return 0; // safety
int pos = onePos[w-1];
int zerosUsed = prefixZeros[pos+1];
int zerosAfter = totalZeros - zerosUsed;
return zerosAfter;
}
};
vector<int> cO(O+1);
for (int w = 0; w <= O; w++) cO[w] = min(gAval[w], gBval[w]);
vector<int> U;
U.reserve(Z+O);
int zUsed = 0;
int oUsed = 0;
{
int newZUsed = zUsed + 1;
int newOUsed = oUsed;
int onesLeft = O - newOUsed;
int zerosLeft = Z - newZUsed;
bool feasible = true;
if (Z > newZUsed) { // check cZ in range [newZUsed+1..Z]
int maxZConstraint = cZ[newZUsed+1];
if (onesLeft < maxZConstraint) feasible = false;
}
if (O > newOUsed) { // check cO in range [newOUsed+1..O]
int maxOConstraint = cO[newOUsed+1];
if (zerosLeft < maxOConstraint) feasible = false;
}
canPickZero = feasible;
}
{
int newZUsed = zUsed;
int newOUsed = oUsed + 1;
int onesLeft = O - newOUsed;
int zerosLeft = Z - newZUsed;
bool feasible = true;
if (Z > newZUsed) {
int maxZConstraint = cZ[newZUsed+1];
if (onesLeft < maxZConstraint) feasible = false;
}
if (O > newOUsed) {
int maxOConstraint = cO[newOUsed+1];
if (zerosLeft < maxOConstraint) feasible = false;
}
canPickOne = feasible;
}
int pos = (z == 0) ? -1 : zeroPositions[z-1];
int onesRight = (int)onePositions.size() - (int)(std::lower_bound(onePositions.begin(),
onePositions.end(), pos+1) - onePositions.begin());
if (onesRight < cZ[z]) {
return {-1};
}
}
return U;
}
C.5 Mosaic
A total of 42 points were scored on Mosaic across two separate submissions.
C.5.1 Submission 1
The first submission achieved a score of 22 out of 100, with points distributed across subtasks 1, 2, and
4.
#include "mosaic.h"
#include <bits/stdc++.h>
using namespace std;
}
// fill left column
for(int i=0; i<N; i++) {
A[i*N + 0] = Y[i];
}
// fill rest
for(int i=1; i<N; i++) {
for(int j=1; j<N; j++) {
A[i*N + j] = (1 - A[(i-1)*N + j]) * (1 - A[i*N + j-1]);
}
}
return C;
}
C.5.2 Submission 2
The model scored 20 points in the second submission on subtasks 1, 3, and 5.
#include "mosaic.h"
#include <bits/stdc++.h>
using namespace std;
if (N == 1) {
vector<long long> result(Q);
for (int q = 0; q < Q; q++) {
// subrectangle T..B x L..R is always T=0,B=0,L=0,R=0 if we have queries.
if (T[q] == 0 && B[q] == 0 && L[q] == 0 && R[q] == 0) {
// black tile count is 1 if X[0]=1 or Y[0]=1, and X[0]=Y[0], else 0.
// The problem states X[0] = Y[0], so tile(0,0) is black if X[0]=1.
if (X[0] == 1) result[q] = 1; else result[q] = 0;
} else {
// no other queries are possible if N=1, but if they are, the subrectangle is empty or out of range.
result[q] = 0;
}
}
return result;
}
long long r = R[q];
result[q] = count;
}
return result;
}
C.6 Sphinx
A total of 71.5 points were scored on Sphinx across two separate submissions.
C.6.1 Submission 1
The first submission achieved a score of 50 out of 100, with 50% partial credit earned on all subtasks.
#include "sphinx.h"
#include <bits/stdc++.h>
int perform_experiment(std::vector<int> E);
std::vector<int> find_colours(int Nx, std::vector<int> Xx, std::vector<int> Yx) {
int N = Nx;
std::vector<std::vector<int>> adj(N);
int M = Xx.size();
for (int i = 0; i < M; i++) {
int u = Xx[i], v = Yx[i];
adj[u].push_back(v);
adj[v].push_back(u);
}
struct UnionFind {
std::vector<int> parent, rank;
UnionFind(int n) : parent(n), rank(n, 0) {
for (int i = 0; i < n; i++) parent[i] = i;
}
int findp(int x) {
if (parent[x] == x) return x;
parent[x] = findp(parent[x]);
return parent[x];
}
void unite(int x, int y) {
x = findp(x); y = findp(y);
if (x == y) return;
if (rank[x] < rank[y]) parent[x] = y;
else if (rank[x] > rank[y]) parent[y] = x;
else {parent[y] = x; rank[x]++;}
}
} uf(N);
if (!compInput.empty()) compCount = connectedComponentsInSubset(compInput);
else compCount = 0;
auto findMerges = [&](auto &self, std::vector<int>& roots, int merges) -> void {
if (merges == 0) return;
if (merges == (int)roots.size()) {
for (int r : roots) {
uf.unite(v, r);
}
return;
} else {
if (roots.size() == merges) {
for (int r : roots) {
uf.unite(v, r);
}
return;
} else {
int half = roots.size()/2;
std::vector<int> left(roots.begin(), roots.begin() + half);
std::vector<int> right(roots.begin() + half, roots.end());
std::vector<int> Earr(N);
std::vector<int> Sset2(N, 0);
for (int i : S2) Sset2[i] = 1;
for (int i = 0; i < N; i++) Earr[i] = (Sset2[i] == 1) ? -1 : N;
for (int ip : compInput) visited2[ip] = 0; // re-mark for BFS.
for (int ip : compInput) if (visited2[ip] == 0) {
compCount++;
std::queue<int> q;
q.push(ip);
visited2[ip] = 1;
while(!q.empty()) {
int u = q.front(); q.pop();
for (int w : adj[u]) {
if (visited2[w] == 0) {
visited2[w] = 1;
q.push(w);
}
}
}
}
} else compCount = 0;
std::set<int> distinctRoots;
for (int i = 0; i < v; i++) distinctRoots.insert(uf.findp(i));
std::vector<int> rootsVec(distinctRoots.begin(), distinctRoots.end());
return colorID;
}
C.6.2 Submission 2
Submission 2 achieved 43 points on subtasks 1, 2, and 3.
#include "sphinx.h"
#include <bits/stdc++.h>
using namespace std;
for (int i = 0; i < (int)X.size(); i++) {
int u = X[i];
int v = Y[i];
adj[u].push_back(v);
adj[v].push_back(u);
adjMatrix[u][v] = adjMatrix[v][u] = true;
}
int R = perform_experiment(E);
int c = ccount;
int cpart = query_count_in_T(part, x);
if (cpart > 0) {
r = mid;
} else {
l = mid + 1;
}
}
int found = S[l];
result.push_back(found);
vector<int> U = unassigned;
while (!U.empty()) {
vector<int> T = find_independent_set_heuristic(U);
vector<int> assignedInT;
vector<int> S = T;
while (!S.empty()) {
bool foundX = false;
for (int x = 0; x < N; x++) {
if (S.empty()) break;
int countx = query_count_in_T(S, x);
if (countx > 0) {
vector<int> found = find_vertices_in_T_with_color_x(S, x);
for (int v : found) {
final_colors[v] = x;
assignedInT.push_back(v);
}
for (int v : found) {
auto it = find(S.begin(), S.end(), v);
if (it != S.end()) S.erase(it);
}
foundX = true;
}
}
if (!foundX) {
// If no color found for S, we can’t assign those vertices.
// But maybe it’s impossible.
break;
}
}
// remove T from U.
set<int> assignedSet(assignedInT.begin(), assignedInT.end());
vector<int> newU;
for (int v : U) {
if (assignedSet.find(v) == assignedSet.end()) newU.push_back(v);
}
U = newU;
}
return final_colors;
}
References
[1] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan,
Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language
models. arXiv preprint arXiv:2108.07732, 2021.
[2] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[3] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, et al.
Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint
arXiv:2501.12948, 2025.
[4] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec
Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint
arXiv:2412.16720, 2024.
[5] Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik
Narasimhan. Swe-bench: Can language models resolve real-world github issues? arXiv preprint
arXiv:2310.06770, 2023.
[6] Rémi Leblond, Felix Gimeno, Florent Altché, Alaa Saade, Anton Ruddock, Corentin Tallec, George
Powell, Jean-Bastien Grill, Maciej Mikula, Matthias Lochbrunner, et al. Alphacode 2 technical report.
https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf, December 2023. Accessed: 2025-01-14.
[7] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation
with alphacode. Science, 378(6624):1092–1097, 2022.
[8] Mike Mirzayanov. Codeforces rating system. https://codeforces.com/blog/entry/102, 2010.
[9] Mike Mirzayanov. Open codeforces rating system. https://codeforces.com/blog/entry/20762,
2016.
[10] Mike Mirzayanov. Codeforces: Soon we will change the rating calculation for new accounts. https://codeforces.com/blog/entry/77890, 2020.
[11] OpenAI. Introducing swe-bench verified. https://openai.com/index/introducing-swe-bench-verified/, August 2024. Accessed: 2025-01-14.
[12] OpenAI. Learning to reason with llms. https://openai.com/index/learning-to-reason-with-llms/, September 2024. Accessed: 2025-01-14.
[15] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun
Xiao, et al. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599,
2025.
[16] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
neural information processing systems, 35:24824–24837, 2022.