Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2
An expanded domain language, together with other additions, has markedly improved the coverage rate of the
AlphaGeometry language on International Math Olympiad (IMO) 2000-2024 geometry problems, from 66% to 88%. The search
process of AlphaGeometry2 has also been greatly improved through the use of the Gemini architecture for
better language modeling, and a novel knowledge-sharing mechanism that combines multiple search
trees. Together with further enhancements to the symbolic engine and synthetic data generation, we
have significantly boosted the overall solving rate of AlphaGeometry2 to 84% for all geometry problems
over the last 25 years, compared to 54% previously. AlphaGeometry2 was also part of the system that
achieved silver-medal standard at IMO 2024 (https://dpmd.ai/imo-silver). Last but not least, we
report progress towards using AlphaGeometry2 as a part of a fully automated system that reliably solves
geometry problems directly from natural language input.
1. Introduction
The International Mathematical Olympiad (IMO) is a prestigious mathematics competition for high
school students worldwide. IMO problems are known for their difficulty, and solving them requires
deep understanding of mathematical concepts and the ability to apply them creatively. Geometry, one
of the four IMO categories, is the most uniform across problems, hence the most approachable. It is
also well suited for basic reasoning research.
There have been two main approaches to automatically solving geometry problems. One bashes
the problems algebraically with Wu's method Chou (1985); Wu (2008), the Area method Chou et al.
(1993, 1994), or Gröbner bases Kapur (1986a,b); the second relies on synthetic techniques
such as the Deduction database Chou et al. (2000) or the Full angle method Chou et al. (1996). We focus
on the latter as a more human-like approach suitable for transferring the research knowledge to
other domains. In our previous work Trinh et al. (2024), we introduced AlphaGeometry (AG1), a
neuro-symbolic system that demonstrated a significant step towards mastering this domain, achieving
a 54% solve rate on all 2000-2024 IMO geometry problems. AG1 combines a language model (LM)
with a symbolic engine to effectively tackle these challenging problems.
Despite its success, AG1 exhibited limitations in several key areas. Its performance was constrained
by the scope of its domain-specific language, the efficiency of the symbolic engine, and the capacity
of the initial language model. As a result, when considering all the recent IMO geometry problems
from the year 2000 until now, AG1 can only achieve a solving rate of 54%.
This paper introduces AlphaGeometry2 (AG2), a substantial upgrade that addresses these limita-
tions and significantly enhances performance. AG2 leverages a more powerful Gemini-based language
model trained on a larger and more diverse dataset. We also introduce a significantly faster and more
robust symbolic engine, incorporating optimizations such as a reduced rule set and enhanced handling
of double points. Furthermore, we expand the domain language to encompass a wider range of
geometric concepts, including locus theorems and linear equations. To further improve performance,
we develop a novel search algorithm that explores a broader range of auxiliary construction strategies,
and employs a knowledge-sharing mechanism to expand and accelerate the search process. Finally,
we make progress towards building a fully automated and reliable system that solves geometry
problems in natural language. To do this we utilize Gemini Team Gemini (2024) to translate problems
from natural language into the AlphaGeometry language and implement a new automated diagram
generation algorithm.
These enhancements culminate in a substantial improvement in performance: AG2 achieves an
impressive 84% solve rate on all 2000-2024 IMO geometry problems, demonstrating a significant
leap forward in AI’s ability to tackle challenging mathematical reasoning tasks, and surpassing an
average IMO gold medalist.
AlphaGeometry2 Key Improvements:
• Expanded Domain Language: Covering locus-type theorems, linear equations, and non-
constructive problem statements.
• Stronger and faster Symbolic Engine: Optimized rule set, added handling of double points,
and a faster implementation in C++.
• Advanced Novel Search Algorithm: Utilizing multiple search trees with knowledge sharing.
• Enhanced Language Model: Leveraging the Gemini architecture, trained on a larger and more
diverse dataset.
Name | Meaning
cong a b c d | 𝐴𝐵 = 𝐶𝐷
perp a b c d | 𝐴𝐵 ⊥ 𝐶𝐷
para a b c d | 𝐴𝐵 ∥ 𝐶𝐷
coll a b c | 𝐴, 𝐵, 𝐶 are collinear
cyclic a b c d | 𝐴, 𝐵, 𝐶, 𝐷 are concyclic points
eqangle a b c d e f g h | Directed angle between 𝐴𝐵 and 𝐶𝐷 is the same as the one between 𝐸𝐹 and 𝐺𝐻
eqratio a b c d e f g h | 𝐴𝐵/𝐶𝐷 = 𝐸𝐹/𝐺𝐻
aconst a b c d x | Angle between 𝐴𝐵 and 𝐶𝐷 is equal to 𝑥, where 𝑥 ∈ [0, 180)
rconst a b c d y | 𝐴𝐵 : 𝐶𝐷 = 𝑦, where 𝑦 is a constant
First of all, AG2 adds two predicates to allow questions of type “Find x”:
In some geometry problems, including the one appearing at IMO 2024, there are linear equations
3
Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2
of geometric quantities (angles, distances) that AG1 cannot capture. To express these notions, AG2
adds the following three predicates:
Another category not supported in AG1, so-called locus problems, concerns the movement
of objects such as points, lines, and circles. AG2 captures this through a new predicate syntax. Table 2
lists 11 locus cases with the corresponding predicates and their syntax. Here we make use of one new
token, *, to serve as the fixed-point placeholder.
Table 2 | The 11 types of locus-type statements, and their corresponding predicate syntax in the AG
domain language.
Furthermore, in AG2 proofs, we include explicit predicates to represent diagram checks for
topological/non-degeneracy conditions:
AG2 can also prove points being non-distinct by introducing a new predicate, overlap a b (points
𝐴 and 𝐵 are overlapping points), where any predicate involving 𝐴 can also be used for 𝐵 and vice
versa. During the deduction closure, overlapping points can be defined by being a center of the same
circle; we therefore introduce another predicate cyclic_with_center to capture this scenario.
Here, cyclic_with_center a1 a2 ... an x means 𝑎1 = 𝑎2 = · · · = 𝑎𝑥 is the center of the circle
that goes through 𝑎𝑥+1, . . . , 𝑎𝑛 (in the case 𝑥 = 0, it is equivalent to cyclic).
Notice that, when describing a problem, AG1 uses at most 2 predicates to define a point, i.e. each
point is defined as the intersection between at most two objects (line or circle). This limits AG1
4
Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2
to only constructive problems - problems where all points can be straightforwardly constructed by
following their definition order and taking the intersection of two well-defined objects. In AG2, we
relax this constraint to cover more problems where points can be defined by at least three predicates,
making the diagram construction non-trivial. Our approach for automating this process is discussed
in the next section.
All changes described in this section improve the AG domain language coverage from 66% to 88%
on all 2000-2024 IMO geometry problems. The remaining 12% involve 3D geometry, inequalities,
non-linear equations, and countably many points (i.e. problems with 𝑛 points, where 𝑛 is an
arbitrary positive integer). All problems (covered and not covered) by AG1 and AG2 can be found in
Figure 8; problems that are not covered are marked as "Not attempted".
Automated diagram generation. Another manual part of our pipeline was diagram generation.
In AG1, each point is defined by at most two of the basic predicates recalled in Table 1; the problem is
therefore defined constructively, and diagrams can be generated automatically. In AG2, we allow one
or multiple points to be defined simultaneously by an arbitrary number of predicates, allowing us to
also cover non-constructive problems. Consider a non-constructive problem statement, "Let 𝐴𝐵𝐶 be a
triangle with incenter 𝐼, such that 𝐼𝐴 = 2𝐼𝐵 ...": here point 𝐼 is not only defined as an incenter, i.e. the
intersection of two internal bisectors, but also by a third predicate 𝐼𝐴 = 2𝐼𝐵, and there is no
general strategy to construct all four points. Since AG2 covers non-constructive problems, diagram
construction becomes a non-trivial part of the pipeline and generally requires human intervention.
Similar to Krueger et al. (2021), we propose the following algorithm to automatically generate
diagrams given non-constructive problem specifications:
Let 𝑥̄ ∈ ℝ²ⁿ be a vector representing the coordinates of all 𝑛 points. We encode every constraint 𝑐 in
the diagram, including the goal, as 𝑓𝑐(𝑥̄) = 0 with a nonlinear function 𝑓𝑐. We numerically search for
a suitable 𝑥̄ in two steps. First, we run the ADAM gradient descent optimization on the mean-squared
error loss ∑_{𝑐∈𝐶} 𝑓𝑐(𝑥̄)², where 𝐶 is the set of all the constraints, together with a non-degeneracy loss:
for every two points 𝐴, 𝐵, we add a loss of the form 1/(|𝐴𝐵|² + 𝜖), and an 𝐿2 regularization on all
points to prevent their values from becoming too large. After the loss in the ADAM optimization meets
a certain threshold, we stop caring about the non-degeneracy, and switch from gradient descent
optimization to the Gauss-Newton-Levenberg method to look for a numerical solution of a combined
under- and over-determined system of nonlinear equations.
This two-stage optimization method builds upon the methodology introduced in Krueger et al.
(2021). While the first stage remains unchanged, we incorporate a novel second stage. This addition
addresses the practical limitations encountered when tuning the gradient descent optimization in the
original method, where achieving a consistently satisfactory error margin proved challenging.
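The two-stage procedure can be sketched on a toy constraint set: one free point 𝐶 constrained relative to fixed points 𝐴, 𝐵. Plain finite-difference gradient descent stands in for ADAM, and SciPy's Levenberg-Marquardt solver stands in for the Gauss-Newton-Levenberg step; the constraints, weights, and names here are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the two-stage diagram search on a toy problem (illustrative only):
# fixed points A, B; find C with |AC| = |AB| and AB perpendicular to AC.
import numpy as np
from scipy.optimize import least_squares

A, B = np.array([0.0, 0.0]), np.array([1.0, 0.0])
EPS = 1e-4  # epsilon in the non-degeneracy loss 1/(|PQ|^2 + eps)

def residuals(c):
    # Constraints f_c(x) = 0
    return np.array([c @ c - 1.0,          # |AC|^2 - |AB|^2 = 0
                     (B - A) @ (c - A)])   # AB . AC = 0

def stage1_loss(c):
    mse = float(np.sum(residuals(c) ** 2))
    nondegen = 1.0 / ((c - A) @ (c - A) + EPS) + 1.0 / ((c - B) @ (c - B) + EPS)
    return mse + 0.01 * nondegen + 0.001 * (c @ c)  # + L2 regularizer

def num_grad(f, x, h=1e-6):
    # Central finite differences, standing in for autodiff gradients.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

c = np.random.default_rng(0).uniform(-1.0, 1.0, size=2)
for _ in range(3000):                     # stage 1: gradient descent
    c -= 0.01 * num_grad(stage1_loss, c)
sol = least_squares(residuals, c, method="lm")  # stage 2: drop non-degeneracy
```

Stage 1 steers the configuration away from degenerate coincidences; stage 2 then polishes the (here square, in general under- or over-determined) nonlinear system to high precision.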
We benchmark this method on 44 IMO problems formalized in the AG language (see Figure 8) and are
able to find diagrams for 41 of them. We run the two-stage convergence procedure in multiple parallel
processes, in a loop which restarts with another random initial configuration after each
failure. This way, 40/44 problems got their diagram generated within 1 hour using approximately
40 processes per problem (many problems got their diagram within seconds, on the first try).
For the remaining 4 problems, we ran the same procedure longer and with more parallelization; this
way, we also obtained a diagram for IMO-2011-6, after 400 minutes using 3333 processes.
While re-implementing DDAR, we tried to keep approximately the same logical strength as the
original algorithm, just a little stronger because of implementation differences (for example, Thales'
theorem was replaced with the more general central angle theorem). However, DDAR1 is missing
one key feature, which is crucial for tackling hard problems: it is unable to accept two points with
different names and the same coordinates.
For example, imagine a problem where we intersect two lines 𝑎, 𝑏 at a point 𝑋 , and intend to
prove that 𝑋 lies on a certain circle 𝜔. The most plausible approach might be via a reformulation –
instead of proving that the intersection of 𝑎, 𝑏 is on 𝜔, we prove that the intersection of 𝑎, 𝜔 lies on 𝑏.
This is equivalent, yet can be much easier to prove because we can move angles on the circle. For
Figure 1 | Handling "double" points in AG2. It is hard to prove that the intersection of 𝑎, 𝑏 is on 𝜔.
But if a language model suggests a construction 𝑋 ′ ∈ 𝑎 ∩ 𝜔, then DDAR can prove the goal by proving
𝑋 ′ ∈ 𝑏, and hence 𝑋 = 𝑋 ′ .
illustration see Figure 1. To do such “reformulation” of reasoning with double points, we proceed
through the following 4 steps:
• Construct a new point 𝑋′ as the intersection of 𝑎 and 𝜔 (we do not know yet that 𝑋′ coincides with
𝑋). This is an auxiliary construction that must be predicted by a language model.
• Prove that 𝑋′ lies on 𝑏.
• Since both 𝑋 and 𝑋′ lie on both 𝑎 and 𝑏, we conclude 𝑋 = 𝑋′.
• Consequently, 𝑋 lies on 𝜔.
The DDAR1 algorithm processes a list of rules and tries to apply each rule to all combinations of
points. This process involves a candidate-search step, whose time complexity is polynomial in the
number of points, and a clause-matching step, whose time complexity is exponential in the number
of clauses per premise. In theory, the worst case for searching similar-triangle candidates in AG1
is 𝑂(𝑁⁸), which makes this one of the most time-consuming steps; the exponential clause matching is
another expensive step. To make the search more efficient, we take all essential rules and hard-code the
search for their application, which reduces the number of queries to the AR sub-engine to at most
cubic. Furthermore, we discard the explicit rules for angles and distances (e.g. about perpendicular
or parallel lines) – all such deductions happen automatically in the AR engine.
The two main time-consuming parts of DDAR are the search for similar triangles and the search for
cyclic quadrilaterals. In AG2, we designed an improved DDAR2 algorithm. For similar triangles, we
go through all triples of points, hash their "shape", and detect a similar pair whenever a shape is recognized
twice. For cyclic quadrilaterals, we go through all pairs (point 𝑋, segment 𝐴𝐵) and hash the value
of (𝐴, 𝐵, ∠𝐴𝑋𝐵). If such a triple repeats, we get a cyclic quadrilateral. By the "value" of a segment
𝐴𝐵 or an angle ∠𝐴𝑋𝐵, we mean a symbolic normal form calculated by the AR submodule. This submodule
keeps track of known linear equations between angles, distances, and log-distances, understands their
algebraic consequences, and can reduce any linear expression to its normal form.
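The shape-hashing idea for similar triangles can be sketched numerically: rounding of side-length ratios stands in for the exact symbolic normal form computed by the AR submodule, and the point set is illustrative.

```python
# Detect similar triangles by hashing a similarity-invariant "shape" key:
# one pass over all triples instead of comparing all pairs of triples.
from itertools import combinations
import math

def shape_key(p, q, r, digits=6):
    # Sorted side-length ratios are invariant under rotation, translation,
    # reflection, and scaling, i.e. under similarity.
    a, b, c = sorted(math.dist(u, v) for u, v in [(p, q), (q, r), (r, p)])
    return (round(b / a, digits), round(c / a, digits))

def similar_pairs(points):
    seen, pairs = {}, []
    for tri in combinations(points, 3):
        key = shape_key(*tri)
        if key in seen:
            pairs.append((seen[key], tri))  # shape recognized twice
        seen[key] = tri
    return pairs

pts = [(0, 0), (4, 0), (0, 3), (10, 10), (18, 10), (10, 16)]
found = similar_pairs(pts)  # the 3-4-5 and 6-8-10 right triangles match
```

The cyclic-quadrilateral search works analogously, hashing (𝐴, 𝐵, ∠𝐴𝑋𝐵) over all (point, segment) pairs.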
While the new algorithm already significantly accelerates DDAR, we make further speed improvements
by implementing its core computation (Gaussian elimination) in C++. The new C++ library, which
is exported to Python via pybind11 Jakob et al. (2017), is over 300 times faster than DDAR1. To
benchmark the speed improvement, we select a set of 25 IMO problems that cannot be solved
by DDAR (see Figure 8) and run the test 50 times on a machine with an AMD EPYC 7B13 64-core CPU.
While on average DDAR1 finishes its computations in 1179.57 ± 8.055 seconds, DDAR2 is much faster,
finishing in 3.44711 ± 0.05476 seconds¹.
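The AR submodule's normal-form reduction (with Gaussian elimination at its core) can be sketched with exact rational arithmetic. The representation of relations as sparse coefficient maps, and the variable names, are our illustrative assumptions, not the paper's data structures.

```python
# Reduce linear expressions (e.g. over angle variables) to a normal form
# modulo known relations, via exact Gaussian elimination over rationals.
from fractions import Fraction

def echelonize(relations):
    # relations: list of {var: coeff} maps, each meaning "expression = 0".
    rows, pivots = [], []
    for rel in relations:
        row = {v: Fraction(c) for v, c in rel.items()}
        for pv, prow in zip(pivots, rows):
            if row.get(pv):
                f = row[pv] / prow[pv]
                for v, c in prow.items():
                    row[v] = row.get(v, Fraction(0)) - f * c
        row = {v: c for v, c in row.items() if c != 0}
        if row:
            pivots.append(next(iter(row)))
            rows.append(row)
    return rows, pivots

def normal_form(expr, rows, pivots):
    # Eliminate pivot variables from expr; equal expressions get equal forms.
    expr = {v: Fraction(c) for v, c in expr.items()}
    for pv, prow in zip(pivots, rows):
        if expr.get(pv):
            f = expr[pv] / prow[pv]
            for v, c in prow.items():
                expr[v] = expr.get(v, Fraction(0)) - f * c
    return {v: c for v, c in expr.items() if c != 0}

# Known: angle_x = angle_y and angle_y = angle_z  =>  angle_x - angle_z = 0.
rows, pivots = echelonize([{"x": 1, "y": -1}, {"y": 1, "z": -1}])
nf = normal_form({"x": 1, "z": -1}, rows, pivots)
```

An empty normal form means the expression is implied by the known relations, which is exactly how two hashed "values" are recognized as equal.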
Larger, more complex diagrams and better data distribution. First of all, we scale up the resources
for data generation, and re-balance the data distribution more carefully. As demonstrated in
Figure 2, compared to AG1, AG2:
• Explores random diagrams at twice the size, allowing for the extraction of much more complex
problems.
• Produces theorems that are up to 2x more complex in terms of the number of points and premises.
• Produces proofs up to 10x more complex, i.e. with 10x more proof steps.
• Has a more balanced data distribution between question types.
• Has a more balanced data distribution between problems with and without auxiliary points.
More types of theorems. Besides generating theorems that prove classic statements such as "AB =
CD", the AG2 data-generation algorithm also produces problems of "locus" type, i.e. asserting statements
such as "When X moves on line/circle Y, then Z moves on a fixed line/circle T". Introduced in Section 2,
these statements are not supported by the AG1 data-generation algorithm, as it has no notion of
movement and movement dependency. In AG2, we record the movement dependency for each point
𝑋 during random diagram generation through a function 𝑃(·) with the following definition:
¹ The average running time may vary depending on the machine status at different times.
Figure 2 | Training data statistics for AG2 vs AG1. (a) AG2 includes more complicated/longer problems
compared to AG1. (b) AG2 has a more balanced distribution of examples per question type. (c) AG2
has a much more balanced mix between proofs with auxiliary points and proofs without (50:50 in
AG2 vs 9:91 in AG1).
𝑃(𝐴): the set of points that control the movement of 𝐴, where 𝐴 is a point or a set of points
defined in a constructive problem statement. Two examples of 𝑃 are presented in Table 3, and all
cases where locus-type statements are detected are shown in Table 5.

If | Then
a = midpoint b c, d = midpoint a c | 𝑃(𝑑) = {𝑏, 𝑐}
a = on_line b c | 𝑃(𝑎) = {𝑎, 𝑏, 𝑐}

Table 3 | Two examples of 𝑃. Top row: since 𝑑 is uniquely defined as the midpoint of 𝑎 and 𝑐, and 𝑎 is
uniquely defined as the midpoint of 𝑏 and 𝑐, the source of movement for 𝑑 is {𝑏, 𝑐}. Second row:
since 𝑎 can be anywhere on line 𝑏𝑐, 𝑎 itself is also a part of its own movement source.
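The dependency function 𝑃 can be sketched as a simple propagation over constructions processed in definition order. The triple encoding of constructions and the "on_line" flag are our illustrative assumptions.

```python
# Propagate movement sources P through a constructive problem statement.
def movement_sources(constructions):
    # constructions: ordered (point, kind, parents) triples; base points that
    # are never constructed count as their own movement source. Under-
    # determined constructions such as on_line leave a degree of freedom,
    # so the constructed point joins its own source.
    P = {}
    for pt, kind, parents in constructions:
        src = set()
        for q in parents:
            src |= P.get(q, {q})
        if kind == "on_line":
            src.add(pt)
        P[pt] = src
    return P

# Table 3, top row: a = midpoint b c, d = midpoint a c  =>  P(d) = {b, c}
P = movement_sources([("a", "midpoint", ["b", "c"]),
                      ("d", "midpoint", ["a", "c"])])
# Table 3, second row: a = on_line b c  =>  P(a) = {a, b, c}
P2 = movement_sources([("a", "on_line", ["b", "c"])])
```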
Faster data generation algorithm. We also improved the speed of the data-generation algorithm.
Recall that in AG1, we first run the deduction closure on random diagrams, and "traceback" to obtain
the minimal problem and minimal proof for each fact in the closure. To obtain the minimal
problem in AG1, we have to exhaustively remove different subsets of points from the problem and
rerun DDAR to check provability. Such a search can find the subset with the smallest cardinality;
however, being an exponential search, it is infeasible for larger numbers of points. Therefore, we
switched to the greedy discarding algorithm shown in Figure 3, which uses just a linear number of
checks of whether a set of points suffices to prove the goal. The greedy algorithm is guaranteed to
find a minimal set of points with respect to inclusion as long as the check is monotonic (if 𝐴 ⊆ 𝐵,
then check_provable(𝐴) ⇒ check_provable(𝐵)). In reality, we also require the pruned set to
remain closed under construction dependencies (so that we can still run a random construction).
If we incorporate this condition into the check_provable predicate, it stops being monotonic.
This difficulty can be fixed by processing the points via the algorithm from Figure 3 in reverse-
topological order (first the points that do not depend on any other points, and last the initial points of
the construction).
def prune_points(
        points: set[Point],
        check_provable: Callable[[set[Point]], bool]) -> set[Point]:
    pruned = set(points)
    for p in reverse_topological(points):
        if check_provable(pruned - {p}):
            pruned = pruned - {p}
    return pruned
Figure 3 | Basic greedy algorithm to find a minimal set of points satisfying a monotonic predicate
check.
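A self-contained toy run of the greedy pruning from Figure 3, with a hypothetical monotonic check (the goal "the kept set must contain points a and b" stands in for a real DDAR provability call):

```python
# Greedy pruning: one provability check per point, in reverse-topological
# order, yields an inclusion-minimal sufficient set when the check is monotone.
def prune_points(points, check_provable, reverse_topological_order):
    pruned = set(points)
    for p in reverse_topological_order:
        if check_provable(pruned - {p}):
            pruned = pruned - {p}
    return pruned

points = {"a", "b", "c", "d", "e"}
check = lambda s: {"a", "b"} <= s   # monotone: supersets still "prove" the goal
minimal = prune_points(points, check, ["e", "d", "c", "b", "a"])
```

Only five checks are performed here, versus up to 2⁵ subsets in an exhaustive search.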
• "Classic" search tree: the same beam tree search used in AG1, where a language model is asked
to produce one auxiliary point at each node.
• Tree predicting multiple auxiliary points at each node: a language model is allowed to produce
as many auxiliary points as it wants at each tree node. Recall that this is possible because our
[Figure 4 diagram: multiple language models feed the search; (A) formalization, e.g. "right_triangle a b c; …"; (B) diagram construction.]
Figure 4 | Overview of our search algorithm. We employ several different search trees which can
share facts they proved via a special knowledge-sharing mechanism.
LM is trained to produce full proofs, starting with auxiliary points and followed by the deduction
steps². Note that even though we want our models to generate all necessary auxiliary points in
one query, in practice we observe the need to call the model multiple times, conditioning on previously
produced auxiliary points. Allowing the model to produce multiple auxiliary points accelerates
finding a solution and effectively increases the tree search depth.
• Tree predicting different types of auxiliary points uniformly. Recall that LM outputs for auxiliary
points look like x00 a : cong a b c d (00) coll a e f (01), i.e. "construct point a such that
a b = c d and a, e, f are collinear". Typically, to predict auxiliary points we prompt the language model
with the first token x00 and let it generate the rest. Here, instead, we prompt the LM with x00
a : cong, x00 a : coll, x00 a : cyclic, x00 a : perp, etc., to force a uniform distribution
across the first 4 tokens, and then let the LM generate the rest.
² See a more detailed discussion on producing full proofs with a language model alone in Section E.
• Deep but narrow tree (e.g. beam size 64 and depth 10).
• Shallow but wide tree (e.g. beam size 512 and depth 4).
System design details. For proof search, we use TPUv4 to serve multiple replicas per model³ and let
different search trees within the same model query the same server, each under its own search strategy.
Besides running these search trees asynchronously, we also run the LM workers asynchronously with the
DDAR workers. The LM workers write the content of the nodes they explore to a database,
and the DDAR workers asynchronously pick these nodes up and attempt them. The DDAR workers
coordinate among themselves to divide the work equally. A single DDAR worker pool is
shared across different problems (if multiple problems are solved at once), such that problems
solved earlier release their DDAR compute resources to the remaining problems being solved.
The AG1 language model, a custom transformer, was trained in an unsupervised fashion in two phases:
training on problems with and without auxiliary constructions, followed by training only on problems
that contain auxiliary constructions. For AG2, we leverage the Gemini training pipeline and simplify
training to just one phase: unsupervised learning on all data. Our new language model is a sparse
mixture-of-experts Transformer-based model that builds on Gemini Team Gemini (2024) and is trained
on the AG2 data described in Section 5. We train multiple models of different sizes using three training
setups:
1. Training from scratch with a custom tokenizer in the domain-specific language (the AG1 setup).
2. Fine-tuning already pre-trained, math-specialized custom Gemini models in natural language
(for more details see Appendix A).
3. Multimodal training from scratch with an additional image input - a diagram of the given
geometry problem (for more details see Appendix B).
Apart from a large synthetic training set of around 300 million theorems, we create three evaluation
sets:
All these sets contain full proofs, and during training we compute perplexity loss on them. Note,
however, that these are only proxy metrics for two reasons. First, during inference (just like in AG1)
we only use auxiliary points suggested by the language model, while the perplexity is computed on
³ The exact number of TPUs depends on the model size.
the entire proof. Second, there might be multiple ways to solve a given problem, but perplexity is
computed for one particular solution. Just like in AG1, our main downstream metric is the solve rate
on IMO problems, where the language model produces auxiliary points followed by a DDAR run via
beam search described in section 6. These results will be discussed in Section 8.
We train our models with the largest possible batch size allowed by the hardware⁴, using TPUv4.
The learning rate schedule is a linear warm-up followed by a cosine anneal. Learning-rate
hyperparameters are determined from scaling laws. In Figure 5 we illustrate learning curves for
Gemini models of different sizes in terms of parameter count. As expected, increasing the model size
decreases the perplexity loss on the train, eval, and our special IMO evaluation sets.
Figure 5 | Learning curves for AlphaGeometry2 language models of different sizes in terms of
parameter count ("m" - million, "B" - billion; curves shown for 51m, 176m, and 3.3B models on the
train, eval, and IMO evaluation sets). Increasing the model size results in decreasing loss for the
train, eval, and IMO evaluation sets.
A new problem is solved via the search algorithm described in Section 6, with multiple search trees
and multiple language models of different sizes. In contrast to AG1, we use top-k sampling with
temperature 𝑡 = 1.0 and 𝑘 = 32. Note that a high temperature and multiple samples are essential for
solving IMO problems. With greedy decoding (𝑡 = 0.0, 𝑘 = 1) and no tree search, our models can
solve only two problems out of the 26 that require auxiliary constructions. Increasing the temperature
to 𝑡 = 1.0 and using 𝑘 = 32 samples (without a search tree) allows our language models to solve 9 out
of the 26 problems. Lower temperatures 𝑡 < 1.0 do not produce diverse enough auxiliary constructions
(see Figure 6), while higher temperatures result in an increasing number of LM outputs with invalid
domain-language syntax.
The analysis string. In AG1, the interface between LM and DDAR is minimal: DDAR takes auxiliary
constructions proposed by LM, and the LM stops proposing auxiliary constructions when DDAR
⁴ We did not observe any training issues compared to smaller batches.
Figure 6 | Ratio of unique samples for various temperatures for top-k sampling.
Figure 7 | Number of 2000-2024 IMO problems solved by one language model as a function of
seen tokens during training.
succeeds in finding a solution. In AG2, we enrich this neuro-symbolic interface by letting the LM
know about the deductions made by DDAR before proposing auxiliary constructions. Namely, we feed
the following information into the LM:
Note that by definition, 𝑆1 ⊂ 𝑆2 ⊂ 𝑆3. Once these three sets are computed, we serialize and
concatenate them into a string called the analysis string, using our domain-specific language. This
string is fed into the LM together with the original problem statement, as follows: <problem_statement>
serialized(𝑆1) serialized(𝑆2 − 𝑆1) serialized(𝑆3 − 𝑆2). In contrast, the input to the AG1
LM is simply <problem_statement>.
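Assembling the LM input from the three nested fact sets can be sketched as follows; the example facts and the space-separated serialization are illustrative assumptions.

```python
# Build the analysis-string input: problem statement followed by S1,
# then the new facts in S2 \ S1, then the new facts in S3 \ S2.
def analysis_input(problem, s1, s2, s3):
    assert s1 <= s2 <= s3           # by definition S1 ⊂ S2 ⊂ S3
    ser = lambda facts: " ".join(sorted(facts))
    return " ".join([problem, ser(s1), ser(s2 - s1), ser(s3 - s2)])

s1 = {"coll a b c"}
s2 = s1 | {"cong a b c d"}
s3 = s2 | {"perp a c b d"}
prompt = analysis_input("<problem_statement>", s1, s2, s3)
```

Serializing the set differences rather than the full sets keeps the prompt short while still exposing every DDAR deduction to the LM exactly once.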
8. Results
Our main downstream metric is the solve rate on IMO geometry problems. There are a total of 45
geometry problems in 2000-2024 IMO, which we translate into 50 AlphaGeometry problems (we
call this set IMO-AG-50). Some problems are split into two due to specifics of our formalization.
Figure 8 demonstrates our main result: AlphaGeometry2 solves 42 out of 50 of all 2000-2024 IMO
geometry problems, thus surpassing an average gold medalist for the first time⁵. More details
are presented in Table 4, which compares various AG2 configurations with other systems, such as AG1
Trinh et al. (2024) and TongGeometry Zhang et al. (2024). We also perform an additional evaluation
on a new set of 30 hardest IMO shortlist problems, which are formalizable in the AG2 language, and
which have never appeared at IMO. For these additional results see Appendix D.
In Figure 7 we present the IMO solve rate as a function of training time (tokens seen during
training) for one language model coupled with DDAR via the "classic" tree search described in
Section 6. Interestingly, AlphaGeometry2 can already solve 27 out of 50 problems after only 250
training steps with batch size 256, or around 200 million tokens⁶. We also run ablation studies on
how inference settings affect the overall performance (see Figure 9). For a single search tree we find
⁵ Sinha et al. (2024) previously claimed to achieve gold-medalist performance, but it was done on a subset of IMO
problems.
⁶ Note that even without the language model, AlphaGeometry2 can solve 16 problems with its symbolic engine alone
(see Figure 8).
[Figure 8 grid: per-problem status (Solved, Solved by DDAR, Not solved, Not attempted) for AlphaGeometry 1 and AlphaGeometry 2 across the 2000-2024 IMO geometry problems; "D" marks problems solved by DDAR.]
Figure 8 | AlphaGeometry2 results on all 2000-2024 IMO geometry problems. Problems are grouped
together based on their status, and ordered chronologically within the groups.
Table 4 | Evaluation on IMO-AG-50 benchmark. IMO-AG-50 contains all IMO 2000-2024 geometry
problems, while IMO-AG-30 introduced in Trinh et al. (2024) contains only a subset formalizable in
terms of the AG1 language.
that the optimal configuration is a beam size of 128, beam depth of 4, and 32 samples. More samples
or a larger beam size do not help solve more problems.
Our geometry experts and IMO medalists consider many AlphaGeometry solutions to exhibit
superhuman creativity. In Appendix C we provide several such examples with the detailed analysis.
Out of the unsolved IMO problems, we have 2 attempted but not solved and 6 unformalizable
Figure 9 | Number of 2000-2024 IMO geometry problems solved for different inference settings
with one search tree. We start with beam size 512, beam depth 4, 32 samples and vary one of the
parameters while keeping others fixed.
problems. The unformalizable problems involve inequalities and a variable number of points, which
are currently not covered by the AlphaGeometry2 language. Two of the remaining unsolved IMO
problems (IMO 2018 P6, IMO 2023 P6) require advanced geometry problem-solving techniques such
as inversion, projective geometry, or radical axes, which are not implemented in our current DDAR.
While such problems can in theory be solved without these techniques, such solutions would require
longer inference time, longer proofs, and more auxiliary constructions to make up for the lack of the
aforementioned machinery, which hinders AlphaGeometry's current problem-solving capabilities.
There is still room for improvement. First, our domain language cannot express a variable number
of points, non-linear equations, or problems involving inequalities, which must be addressed in order
to fully "solve geometry". Second, AG2 has not solved all IMO and IMO shortlist (IMOSL) problems.
We hypothesize that breaking problems into subproblems and applying reinforcement learning
approaches could close this gap. Finally, in this paper we reported progress on building a fully
automated geometry problem-solving system, which takes input in natural language and reliably
outputs a solution without hallucinations. Despite good initial results, we think the auto-formalization
can be further improved with more formalization examples and supervised fine-tuning.
References
H. Chae, S. Yoon, C. Y. Chun, G. Go, Y. Cho, G. Lee, and E. K. Ryu. Decomposing complex visual
comprehension into atomic visual skills for vision language models. In The 4th Workshop on
Mathematical Reasoning and AI at NeurIPS’24, 2024. URL https://openreview.net/forum?id=nFU4xCyoe0.
S.-C. Chou. Proving and discovering geometry theorems using Wu’s method. The University of Texas at
Austin, 1985.
S.-C. Chou, X.-S. Gao, and J.-Z. Zhang. Automated production of traditional proofs for constructive
geometry theorems. In [1993] Proceedings Eighth Annual IEEE Symposium on Logic in Computer
Science, pages 48–56. IEEE, 1993.
S.-C. Chou, X. Gao, and J.-Z. Zhang. Machine proofs in geometry: Automated production of readable
proofs for geometry theorems, volume 6. World Scientific, 1994.
S.-C. Chou, X.-S. Gao, and J.-Z. Zhang. Automated generation of readable proofs with geometric
invariants: I. multiple and shortest proof generation. Journal of Automated Reasoning, 17(3):
325–347, 1996.
S.-C. Chou, X.-S. Gao, and J.-Z. Zhang. A deductive database approach to automated geometry
theorem proving and discovering. Journal of Automated Reasoning, 25(3):219–246, 2000.
B. Deiseroth, M. Brack, P. Schramowski, K. Kersting, and S. Weinbach. T-Free: Tokenizer-free
generative LLMs via sparse representations for memory-efficient embeddings, 2024. URL
https://arxiv.org/abs/2406.19223.
W. Jakob, J. Rhinelander, and D. Moldovan. pybind11 – seamless operability between C++11 and
Python, 2017. https://github.com/pybind/pybind11.
A. Q. Jiang, W. Li, and M. Jamnik. Multilingual mathematical autoformalization. arXiv preprint
arXiv:2311.03755, 2023.
D. Kapur. Geometry theorem proving using Hilbert's Nullstellensatz. In Proceedings of the fifth ACM
symposium on Symbolic and algebraic computation, pages 202–208, 1986a.
D. Kapur. Using Gröbner bases to reason about geometry problems. Journal of Symbolic Computation,
2(4):399–408, 1986b.
R. Krueger, J. M. Han, and D. Selsam. Automatically building diagrams for olympiad geometry
problems. In CADE, pages 577–588, 2021.
A. Poiroux, G. Weiss, V. Kunčak, and A. Bosselut. Improving autoformalization using type checking.
arXiv preprint arXiv:2406.07222, 2024.
A. K. Singh and D. Strouse. Tokenization counts: the impact of tokenization on arithmetic in frontier
LLMs, 2024. URL https://arxiv.org/abs/2402.14903.
S. Sinha, A. Prabhu, P. Kumaraguru, S. Bhat, and M. Bethge. Wu's method can boost symbolic AI to
rival silver medalists and AlphaGeometry to outperform gold medalists at IMO geometry, 2024. URL
https://arxiv.org/abs/2404.06405.
C. Szegedy. A promising path towards autoformalization and general artificial intelligence. In
Intelligent Computer Mathematics: 13th International Conference, CICM 2020, Bertinoro, Italy, July
26–31, 2020, Proceedings 13, pages 3–20. Springer, 2020.
Team Gemini. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
arXiv preprint arXiv:2403.05530, 2024.
T. H. Trinh, Y. Wu, Q. V. Le, H. He, and T. Luong. Solving olympiad geometry without human
demonstrations. Nature, 625(7995):476, 2024.
W.-t. Wu. On the decision problem and the mechanization of theorem-proving in elementary geometry.
In Selected Works Of Wen-Tsun Wu, pages 117–138. World Scientific, 2008.
Y. Wu, A. Q. Jiang, W. Li, M. N. Rabe, C. Staats, M. Jamnik, and C. Szegedy. Autoformalization with
large language models, 2022. URL https://arxiv.org/abs/2205.12615.
C. Zhang, J. Song, S. Li, Y. Liang, Y. Ma, W. Wang, Y. Zhu, and S.-C. Zhu. Proposing and solving
olympiad geometry with guided tree search. arXiv preprint arXiv:2412.10673, 2024.
| Given a random diagram, if DDAR proves: | Then let X = | Then if X is nonempty, we create a synthetic proof that says: "when X moves, ..." | Auxiliary constructions will be the following points and everything else they depend on | Case |
|---|---|---|---|---|
| cong a b c d | 𝑃(𝑏, 𝑐, 𝑑) − 𝑃(𝑎) | the circle with center 𝑏 and radius 𝑐𝑑 goes through a fixed point | 𝑎 | 2 |
| | 𝑃(𝑎) − 𝑃(𝑏, 𝑐, 𝑑) | 𝑎 moves on a fixed circle | 𝑏, 𝑐, 𝑑 | 8 |
| | 𝑃(𝑎, 𝑏) − 𝑃(𝑐, 𝑑) | the distance between 𝑎 and 𝑏 is fixed | 𝑐, 𝑑 | 9 |
| cong a b a c | 𝑃(𝑎) − 𝑃(𝑏, 𝑐) | 𝑎 moves on a fixed line | 𝑏, 𝑐 | 7 |
| | 𝑃(𝑏, 𝑐) − 𝑃(𝑎) | 𝑏 and 𝑐 are equidistant to a fixed point | 𝑎 | 4 |
| cyclic a b c d | 𝑃(𝑏, 𝑐, 𝑑) − 𝑃(𝑎) | the circumcircle of 𝑏 𝑐 𝑑 moves through a fixed point | 𝑎 | 1 |
| | 𝑃(𝑑) − 𝑃(𝑎, 𝑏, 𝑐) | 𝑑 moves on a fixed circle | 𝑎, 𝑏, 𝑐 | 8 |
| coll a b c | 𝑃(𝑏, 𝑐) − 𝑃(𝑎) | line 𝑏 𝑐 goes through a fixed point | 𝑎 | 3 |
| | 𝑃(𝑐) − 𝑃(𝑎, 𝑏) | 𝑐 moves on a fixed line | 𝑎, 𝑏 | 7 |
| eqangle b a b c e d e f | 𝑃(𝑎, 𝑏, 𝑐) − 𝑃(𝑑, 𝑒, 𝑓) | the angle 𝑎 𝑏 𝑐 has a fixed value | 𝑑, 𝑒, 𝑓 | 11 |
| | 𝑃(𝑏) − 𝑃(𝑎, 𝑐, 𝑑, 𝑒, 𝑓) | the point 𝑏 moves on a fixed circle | 𝑎, 𝑐, 𝑑, 𝑒, 𝑓 | 8 |
| para a b c d | 𝑃(𝑏, 𝑐, 𝑑) − 𝑃(𝑎) | the line through 𝑏 parallel to 𝑐𝑑 moves through a fixed point | 𝑎 | 5 |
| | 𝑃(𝑐, 𝑑) − 𝑃(𝑎, 𝑏) | the line 𝑐𝑑 is always parallel to a fixed line | 𝑎, 𝑏 | 10 |
| | 𝑃(𝑎) − 𝑃(𝑏, 𝑐, 𝑑) | 𝑎 moves on a fixed line | 𝑏, 𝑐, 𝑑 | 7 |

Table 5 | 17 cases where locus-type statements are detected during data generation. These 17 cases
Tokenizers. The tokenizer is an essential part of modern language models and, more broadly, of any
foundation model7. It is generally believed that the tokenizer might be a major bottleneck in a
model's ability to do math; see, e.g., Singh and Strouse (2024). We investigate this hypothesis in the
controlled setting of AlphaGeometry. To do so, we train models of the same architecture with different
tokenizers: custom tokenizers with vocabularies of a few thousand tokens, and large language
model tokenizers with a vocabulary of 300k tokens. Recall that our custom tokenizers are created at
the word level, i.e. each token carries full meaning, as opposed to subword-level tokens. The AG language
contains the following types of tokens:
Somewhat surprisingly, we find that AlphaGeometry performance on the 2000-2024 IMO geometry
problems stays the same with different tokenizers, which suggests that modern LLM tokenizers might
be flexible enough to perform mathematical manipulations.
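For illustration, a word-level tokenizer of the kind described above can be sketched as follows; the vocabulary below is a hypothetical fragment, not the actual AG2 vocabulary.

```python
# A minimal sketch of a word-level tokenizer of the kind described above.
# The vocabulary is a hypothetical fragment, not the actual AG2 vocabulary.

class WordTokenizer:
    def __init__(self, vocab):
        self.id_of = {tok: i for i, tok in enumerate(vocab)}
        self.tok_of = dict(enumerate(vocab))
        self.unk = len(vocab)  # id reserved for out-of-vocabulary words

    def encode(self, text):
        # Word-level: split on whitespace, so each token keeps its full meaning.
        return [self.id_of.get(word, self.unk) for word in text.split()]

    def decode(self, ids):
        return " ".join(self.tok_of.get(i, "<unk>") for i in ids)

vocab = ["coll", "cong", "cyclic", "para", "a", "b", "c", "d"]
tok = WordTokenizer(vocab)
assert tok.decode(tok.encode("cong a b c d")) == "cong a b c d"
```

In contrast, a subword tokenizer would split rare words into multiple pieces, so a single predicate or point name need not map to a single token.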
Domain-specific language. Alongside tokenizers, it is interesting to study the role of the domain-specific
language in the LM's ability to solve math problems. It is natural to assume that using a domain-specific
language simplifies mathematical manipulations and prevents obvious mistakes that might
arise from using a less strict language. To investigate this, we translate all AlphaGeometry2 data
from the AlphaGeometry language into natural language and train a new model. We then compare
its performance against a model of the same size trained on the original AlphaGeometry data.
Somewhat surprisingly again, we get the same results on the 2000-2024 IMO geometry problems, which
opens a path for fine-tuning, on math data, large language models pre-trained on natural language.
Below we show an example of translating the AlphaGeometry language into natural language.
AlphaGeometry language: d e f g : coll a d g (000) coll f a b (001) coll d b c (002)
coll e c a (003) cong d b d c (004) cong f a f b (005)
Natural language: Construct points d e f g such that a d g are collinear (000), f a
b are collinear (001), d b c are collinear (002), e c a are collinear (003), |d b| =
|d c| (004), |f a| = |f b| (005)
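The translation above can be sketched as a simple template-based mapping; the template set here is illustrative, not the actual AG2 translator, which covers many more predicates.

```python
# A minimal sketch of template-based translation from AG-style predicates into
# natural language, mirroring the example above. Illustrative only; the real
# AG2 translation covers many more predicates.

TEMPLATES = {
    "coll": lambda a: f"{' '.join(a)} are collinear",
    "cong": lambda a: f"|{a[0]} {a[1]}| = |{a[2]} {a[3]}|",
}

def translate(statement):
    """Translate comma-separated 'predicate arg1 arg2 ...' clauses to English."""
    parts = []
    for clause in statement.split(","):
        predicate, *args = clause.split()
        parts.append(TEMPLATES[predicate](args))
    return ", ".join(parts)

print(translate("coll a d g, cong d b d c"))
# a d g are collinear, |d b| = |d c|
```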
7 Tokenizer-free models are an active area of research; see, for example, Deiseroth et al. (2024)
Fine-tuning of language models pre-trained on math data. Having shown that the custom
tokenizer and the domain-specific language do not play a critical role for AlphaGeometry, we
leverage language models pre-trained on various math data. We start with a Gemini model with 3.3B
parameters trained on public math datasets (see Section 7 in Team Gemini (2024)), and fine-tune
it in an unsupervised manner on the AlphaGeometry data. On our IMO-AG-50 evaluation set, the
fine-tuned model performs on par with smaller models and with the 3.3B model trained from scratch8. On
the other hand, we find that even though all these models are trained on the same AG data, they
produce slightly different auxiliary point proposals and help each other via the knowledge-sharing
mechanism described in Section 6, thus forming an ensemble-like system (see Figure 4).
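As an illustration, the ensemble-like pooling of auxiliary-point proposals across models can be sketched as follows; the models here are faked, and the real knowledge-sharing mechanism between search trees is described in Section 6.

```python
# Illustrative sketch of the ensemble-like behavior described above: several
# models propose auxiliary points for the same problem, and distinct proposals
# are pooled. The models are faked stand-ins, not the AG2 interfaces.

def pooled_proposals(models, problem):
    """Union of auxiliary-point proposals from all models, in first-seen order."""
    seen, pooled = set(), []
    for model in models:
        for proposal in model(problem):
            if proposal not in seen:
                seen.add(proposal)
                pooled.append(proposal)
    return pooled

scratch_model = lambda problem: ["midpoint of A B", "foot of A on B C"]
finetuned_model = lambda problem: ["foot of A on B C", "arc midpoint of B C"]

assert pooled_proposals([scratch_model, finetuned_model], "imo-2024-p4") == [
    "midpoint of A B", "foot of A on B C", "arc midpoint of B C"]
```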
Figure 10 | Learning curves for two 3B models: one trained from scratch and one pre-trained
on math data and then fine-tuned on the AG data. The model pre-trained on math data starts with a lower
loss, but both converge to the same point after training for 200B tokens.
B. Multi-modal
Until now, we have discussed AG2 as a system that couples a language model with a symbolic
engine. However, since our language model is based on Gemini 1.5, which is multi-modal by design
(see Team Gemini (2024)), it is natural to enhance the AG model through multi-modal reasoning. To this end,
we train a new family of models that take the corresponding diagram image as input alongside
the problem text. For training and at test time, diagrams are built as described in Section 3.
Despite promising results during training, we do not observe any improvement in the solve
rate on the downstream IMO problems when using this model alone. However, just as in the case
of fine-tuning pre-trained models (see Section A), we find that the multi-modal model produces
slightly different auxiliary point proposals. Combined with other models via the knowledge-sharing
8 We also train even larger models in a supervised manner and achieve the same results.
mechanism (see Section 6), this boosts the overall performance. We hypothesize that adding the image
on its own might not help much because the diagrams are very complicated and become very crowded
for IMO problems. The image tokenization process might also play a negative role, as it splits the
diagram into independent sequential patches, which loses some spatial information. Recall
also that some details about the diagram are already provided through the text, as mentioned in
Section 7.2, and our symbolic engine, DDAR, does have access to topological aspects of the diagram,
e.g. through inspecting sameclock predicates. Furthermore, Chae et al. (2024) show that vision-language
models have poor atomic visual skills, which suggests that adding visual elements might not
aid the geometry problem-solving process. Finally, note that the core of geometry problem-solving
lies in algebraic rather than geometric reasoning, as previously demonstrated in Trinh et al. (2024).
Many human IMO contestants can reliably solve geometry problems (including very hard
problems such as IMO 2011 P6) using computational methods such as complex numbers, barycentric
coordinates, and trigonometry bashing, which means that visual information and diagrams are not
critical to solving geometry problems.
IMO 2024 P4: Let 𝐴𝐵𝐶 be a triangle with incenter 𝐼 satisfying 𝐴𝐵 < 𝐴𝐶 < 𝐵𝐶. Let 𝑋 be a point
on line 𝐵𝐶, different from 𝐶, such that the line through 𝑋 parallel to 𝐴𝐶 is tangent to the
incircle. Similarly, let 𝑌 be a point on line 𝐵𝐶, different from 𝐵, such that the line through 𝑌
parallel to 𝐴𝐵 is tangent to the incircle. Line 𝐴𝐼 intersects the circumcircle of triangle 𝐴𝐵𝐶 again
at 𝑃. Let 𝐾 and 𝐿 be the midpoints of 𝐴𝐶 and 𝐴𝐵, respectively. Prove that ∠𝐾𝐼𝐿 + ∠𝑌𝑃𝑋 = 180°.
The problem asks about the relationship between ∠𝐾𝐼𝐿 and ∠𝑋𝑃𝑌. The former is the angle formed
by a midpoint and the incenter, objects which usually do not go well together and cannot be computed from
the angles of the main triangle 𝐴𝐵𝐶. Typically, a human contestant would rely on trigonometry, complex
numbers, or other computational methods to find a solution. AlphaGeometry's DDAR, however,
relies only on simple angle chasing and ratio chasing, so some auxiliary
point constructions are necessary. To this end, AlphaGeometry constructs 𝐸 as a point on line 𝐵𝐼 such that ∠𝐴𝐸𝐵 =
90°, which elegantly ties these seemingly unrelated geometric elements together by creating the pairs of
similar triangles 𝐴𝐵𝐸 ∼ 𝑌𝐵𝐼 and 𝐴𝐿𝐸 ∼ 𝐼𝑃𝐶. These pairs of similar triangles yield new equal
angles and equal side-length ratios. In this way, the point 𝐸 gives purpose to the midpoint 𝐿 of 𝐴𝐵.
To complete the proof, we need to show ∠𝐴𝐼𝐾 = ∠𝐵𝑌𝑃 and ∠𝐴𝐼𝐿 = ∠𝐶𝑃𝑋, i.e. that
triangle 𝐴𝐾𝐼 is similar to triangle 𝐵𝑃𝑌 and triangle 𝐴𝐿𝐼 is similar to triangle 𝐶𝑃𝑋, which follows
by chasing the side-length ratios obtained from the similar triangle pairs above. A full solution
is published at https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/
imo-2024-solutions/P4/index.html. This solution was obtained within 30 seconds at IMO
2024 and was given the full seven points by Joseph Myers, a two-time IMO gold medalist and Chair
of the IMO 2024 Problem Selection Committee.
Along with IMO 2024 P4, AlphaGeometry can solve many challenging problems with only one extra
auxiliary point, some of which involve rather unconventional constructions. One such problem is
IMO 2013 P3.
IMO 2013 P3: Let the excircle of triangle 𝐴𝐵𝐶 opposite the vertex 𝐴 be tangent to the side
𝐵𝐶 at the point 𝐴1 . Define the points 𝐵1 on 𝐶 𝐴 and 𝐶1 on 𝐴𝐵 analogously, using the excircles
opposite 𝐵 and 𝐶 , respectively. Suppose that the circumcenter of triangle 𝐴1 𝐵1 𝐶1 lies on the
circumcircle of triangle 𝐴𝐵𝐶 . Prove that triangle 𝐴𝐵𝐶 is right-angled.
In this problem, AlphaGeometry simply takes the midpoint 𝐷 of the arc 𝐴𝐵𝐶 containing 𝐵 as the
extra point, which is a highly unconventional construction as it is non-symmetric. Yet it allows
AlphaGeometry to uncover the fact that 𝐵, 𝐴1, 𝐷, 𝐼𝑎 are concyclic, a key result that
holds if and only if 𝐴𝐵 ⊥ 𝐴𝐶. To prove this fact, AlphaGeometry exploits the fact that 𝑂1
and 𝐷 give rise to the similar triangle pairs △𝑂1𝐶1𝐵1 ∼ △𝑂1𝐵𝐶 and △𝐷𝐴1𝐵1 ∼ △𝐷𝐵𝐴, and then uses
these results to facilitate angle chasing, which gives ∠𝐷𝐴1𝐼𝑎 = ∠𝐷𝐵𝐼𝑎, from which the concyclicity of
𝐵, 𝐴1, 𝐷, 𝐼𝑎 follows.
Another example is IMO 2014 P3, one of the hard geometry problems given at the IMO.
Figure 12 | IMO 2013 P3 diagram with AlphaGeometry's auxiliary construction, point 𝐷. It allows
proving that 𝐵𝐴1𝐷𝐼𝑎 is cyclic, which is the key to solving this problem.
IMO 2014 P3: Convex quadrilateral 𝐴𝐵𝐶 𝐷 has ∠ 𝐴𝐵𝐶 = ∠𝐶 𝐷𝐴 = 90◦ . Point 𝐻 is the foot of the
perpendicular from 𝐴 to 𝐵𝐷. Points 𝑆 and 𝑇 lie on sides 𝐴𝐵 and 𝐴𝐷, respectively, such that 𝐻 lies
inside triangle 𝑆𝐶𝑇 and ∠𝐶 𝐻𝑆 − ∠𝐶𝑆𝐵 = 90◦ , ∠𝑇 𝐻𝐶 − ∠ 𝐷𝑇𝐶 = 90◦ . Prove that line 𝐵𝐷 is tangent
to the circumcircle of triangle 𝑇𝑆𝐻 .
To our surprise, AlphaGeometry manages to prove the more general result 𝑂𝐻 ⊥ 𝐵𝐷, which,
combined with the condition 𝐻 ∈ 𝐵𝐷 from the original problem, implies that the circumcircle of △𝐻𝑆𝑇 touches 𝐵𝐷.
To do this, AlphaGeometry constructs points 𝐸, 𝐹, 𝐺, 𝐼 as the reflections of 𝑆
w.r.t. 𝑂𝐻, 𝐻 w.r.t. 𝐴𝑇, 𝐻 w.r.t. 𝐴𝑆, and 𝐻 w.r.t. 𝑆𝑇, respectively. Since the given conditions ∠𝐶𝐻𝑆 − ∠𝐶𝑆𝐵 =
90°, ∠𝑇𝐻𝐶 − ∠𝐷𝑇𝐶 = 90° imply that the circumcenters of △𝐶𝐻𝑆 and △𝐶𝐻𝑇 lie on 𝐴𝐵 and 𝐴𝐷 respectively,
the constructions of 𝐹 and 𝐺 create the cyclic quadrilaterals 𝐶𝐻𝑆𝐺 and 𝐶𝐻𝑇𝐹, which facilitate angle
chasing. Moreover, the constructions of 𝐸 and 𝐼 create the cyclic quadrilateral 𝐻𝐺𝐼𝐸 with center 𝑆,
and the points 𝐶 and 𝑇 become the circumcenters of △𝐹𝐻𝐼 and △𝐹𝐺𝐼, respectively. Combining these
facts, AlphaGeometry obtains an extraordinary angle chasing proof, in contrast to the
common approaches of most human contestants, which use ratio chasing (possibly combined with knowledge of Apollonius circles),
trigonometry, or inversion. This shows that AlphaGeometry is capable of
solving hard problems with only a simple deduction engine.
Figure 14 | IMOSL 2009 G7 diagram with AlphaGeometry auxiliary constructions (colored red), key
cyclic properties (colored polygons) and key similar triangle pairs (colored triangle pairs).
IMOSL 2009 G7: Let 𝐴𝐵𝐶 be a triangle with incenter 𝐼 and let 𝑋 , 𝑌 and 𝑍 be the incenters of
the triangles 𝐵𝐼𝐶 , 𝐶𝐼 𝐴 and 𝐴𝐼 𝐵, respectively. Let the triangle 𝑋𝑌 𝑍 be equilateral. Prove that 𝐴𝐵𝐶
is equilateral too.
To the best of our knowledge, this problem previously had only computational solutions, e.g.
using complex numbers, trigonometric computations, or proof by contradiction via an inequality
argument. Since AlphaGeometry has access neither to these computational and reasoning tools
nor to advanced Euclidean geometry knowledge, we originally expected that this problem could not
be solved by AlphaGeometry. Nevertheless, AlphaGeometry was able to produce an elegant solution
with only angle and ratio chasing by making key auxiliary constructions. First, AlphaGeometry
shows that 𝑋 and 𝑍 are reflections of each other w.r.t. 𝐵𝐼, and by symmetry it follows that 𝐼 is the
circumcenter of △𝑋𝑌𝑍. From this we can show that 𝐴𝐵 = 𝐴𝐶, and by symmetry △𝐴𝐵𝐶 is an
equilateral triangle. However, the main challenge in this problem is to use the condition that △𝑋𝑌𝑍 is
equilateral, i.e. 𝑋𝑌 = 𝑌𝑍 and its cyclic variants. To this end, AlphaGeometry constructs a
series of circumcenters of key triangles:
At first, these constructions seem very counter-intuitive, since most humans would not construct these
points. Given the nature of the points 𝑋, 𝑌, 𝑍, there are few known geometric properties related to
these points or to this particular configuration as a whole, which makes it very hard for
humans to come up with a synthetic solution. Nevertheless, these circumcenter constructions give rise
to pairs of equal/similar triangles, which allow AlphaGeometry to exploit the fact that △𝑋𝑌𝑍 is
equilateral and solve the problem.
All these examples demonstrate that AlphaGeometry is very efficient at constructing auxiliary
points and can offer rather elegant solutions to hard problems without highly complex Euclidean
geometry knowledge and machinery. As such, it produces creative and efficient solutions that humans
normally would not come up with.
[Figure: diagrams of challenging IMO shortlist geometry problems (2002-g7, 2002-g8, 2003-g5, 2004-g7, 2005-g5, 2005-g6, 2006-g9, 2007-g8, 2009-g6, 2009-g7, 2007-2, 2009-g8, 2010-g5, 2011-g3, 2011-g6, 2011-g7, 2012-g6, 2012-g8, 2014-g7), each marked as "Not solved", "Solved", or "Solved by DDAR" (D).]
validity of each deduction proof step. Namely, we isolate the predicates in the premise of the proof
step and add them to a new DDAR engine, then run a deduction closure with respect to only the
deduction rule used in that step. If the new DDAR manages to prove the conclusion of the step,
and the numerical check of the conclusion in the diagram passes, the step is considered verified.
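The per-step verification described above can be sketched as follows; the engine here is a toy forward-chaining prover standing in for DDAR, and all interfaces (the step fields, the engine, the numeric check) are hypothetical stand-ins, not the AG2 API.

```python
from dataclasses import dataclass

# Hedged sketch of the step-verification loop: isolate a step's premises in a
# fresh engine restricted to the step's single deduction rule, run the closure,
# and require both the symbolic proof and the numerical check to pass.

@dataclass
class Step:
    premises: list   # predicates in the premise of this proof step
    rule: tuple      # (antecedent predicates, consequent predicate)
    conclusion: str

class ToyEngine:
    def __init__(self, rules):
        self.rules, self.facts = rules, set()

    def add_fact(self, fact):
        self.facts.add(fact)

    def closure(self):
        # Forward-chain using only the rules this engine was given.
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in self.rules:
                if set(antecedents) <= self.facts and consequent not in self.facts:
                    self.facts.add(consequent)
                    changed = True

    def proves(self, fact):
        return fact in self.facts

def verify_step(step, numeric_check):
    engine = ToyEngine(rules=[step.rule])  # only the rule used in this step
    for premise in step.premises:          # isolate the step's premises
        engine.add_fact(premise)
    engine.closure()
    return engine.proves(step.conclusion) and numeric_check(step.conclusion)

step = Step(premises=["cong a b a c"],
            rule=(["cong a b a c"], "a on perp bisector of b c"),
            conclusion="a on perp bisector of b c")
assert verify_step(step, numeric_check=lambda fact: True)
```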
Our step verification recognizes the following errors:
For evaluation, we query our language models on the 2000-2024 IMO problems with 32 samples at
temperature 1.0. The models are queried without any end-of-sentence tokens so that they generate
full proofs. We then compute how many valid proof steps the models produce on average across
samples and all problems. It turns out that our models do not make many syntax mistakes (see Figure 16):
the majority of generated steps are valid, i.e. either fully verified or correct but unverified. One
surprising finding is that small and larger models perform similarly. These results support the idea that
large language models can be self-sufficient without depending on external tools, but until inference
speed is improved and hallucinations are completely resolved, such tools will remain essential for math
applications.
Figure 16 | Proof step validity statistics. The models make almost no syntax errors. Small and
larger models perform similarly.