Solving Recurrence Relations Using Machine Learning, With Application To Cost Analysis
Automatic static cost analysis infers information about the resources used by programs without actually running them with concrete data, and presents such information as functions of input data sizes. Most of the analysis tools for logic programs (and other languages) are based on setting up recurrence relations representing (bounds on) the computational cost of predicates, and solving them to find closed-form functions that are equivalent to (or a bound on) them. Such recurrence solving is a bottleneck in current tools: many of the recurrences that arise during the analysis cannot be solved with current solvers, such as Computer Algebra Systems (CASs), so that specific methods for different classes of recurrences need to be developed. We address this challenge by developing a novel, general approach for solving arbitrary, constrained recurrence relations, which uses machine-learning sparse regression techniques to guess a candidate closed-form function, and a combination of an SMT-solver and a CAS to check whether such a function is actually a solution of the recurrence. We have implemented a prototype and evaluated it with recurrences generated by a cost analysis system (the one in CiaoPP). The experimental results are quite promising, showing that our approach can find closed-form solutions, in a reasonable time, for classes of recurrences that cannot be solved by such a system or by current CASs.
Figure 1: Control flow diagram of our novel solver based on machine learning.
The applicability of these resource analysis techniques strongly depends on the capabilities of the
component in charge of solving (or safely approximating) the recurrence relations generated during the
analysis, which has become a bottleneck in some systems.
A common approach to automatically solving such recurrence relations consists of using a Computer
Algebra System (CAS) or a specialized solver to find a closed form. However, this approach poses
several difficulties and limitations. For example, some recurrence relations contain complex expressions
or recursive structures that most of the well-known CASs cannot solve, making it necessary to develop
ad-hoc techniques to handle such cases. Moreover, some recurrences may not have the form required by such systems because an input data size variable does not decrease, but increases instead. Note that a decreasing-size variable could be implicit in the program, i.e., it could be a function of a subset of the input data sizes (a ranking function), which could be inferred by applying established techniques used in termination analysis [15]. However, such techniques are usually restricted to linear arithmetic.
In order to address this challenge we have developed a novel, general method for solving arbitrary,
constrained recurrence relations. It is a guess and check approach that uses machine learning techniques
for the guess stage, and a combination of an SMT-solver and a CAS for the check stage (see Figure 1). To
the best of our knowledge, there is no other approach that does this. The resulting closed-form function
solutions can be of different kinds, such as polynomial, factorial, exponential, summation, or logarithmic.
The rest of this paper is organized as follows. Section 2 gives an overview of our novel guess and check approach. Then Section 3 provides some background information and preliminary notation. Section 4 presents a more detailed, formal and algorithmic description of our approach. Section 5 describes the use of our approach in the context of static cost analysis. Section 6 comments on our prototype implementation and its experimental evaluation. Finally, Section 7 summarizes some conclusions and lines for future work.
We will use the following recurrence as an example to illustrate our approach:
f(x) = 0                    if x = 0
f(x) = f(f(x − 1)) + 1      if x > 0        (1)
The guess stage looks for a candidate closed-form function of the shape

fˆ(⃗x) = β0 + β1 t1(⃗x) + ··· + βn tn(⃗x)

where the ti's are arbitrary functions on ⃗x from a set T of candidate terms that we call base functions, and the βi's are the coefficients (real numbers) that are estimated by regression, but so that only a few coefficients are nonzero. Currently, the set T is fixed, and contains base functions that are representative of the common complexity orders (in Section 7 we comment on future plans to obtain it automatically). For illustration
purposes, assume that we use a set T of base functions that includes, among others, λx.x, where each base function is represented as a lambda expression. Then, the sparse linear regression is performed as follows:
1. Generate a training set S. First, a set Xtrain = {⃗x1, . . . ,⃗xk} of input values to the recurrence function is randomly generated. Then, starting with an initial S = ∅, for each input value ⃗xi ∈ Xtrain, a training case si is generated and added to S. For any input value ⃗x ∈ Xtrain the corresponding training case s is a tuple of the form:
s = ⟨b, c1 , . . . , cn ⟩
where ci = [[ti ]]⃗x for 1 ≤ i ≤ n, and [[ti ]]⃗x represents the result (a scalar) of evaluating the base
function ti ∈ T for input value ⃗x, where T is a set of n base functions, as already explained. The
(dependent) value b (also a constant number) is the result of evaluating the recurrence f (⃗x) that we
want to solve or approximate; in our example, the one defined in Equation 1. For instance, for an input value ⃗x = ⟨5⟩ ∈ Xtrain, the corresponding training case is s = ⟨5, c1, . . . , cn⟩, since f(5) = 5.
2. Perform the sparse regression in two steps using the training set S created above. In the first step,
we use linear regression with Lasso (ℓ1 ) regularization [6] on the coefficients. This is a penalty
term that encourages coefficients whose associated base functions have a small correlation with
the dependent value to be exactly zero. This way, typically most of the base functions in T will
be discarded, and only those that are really needed to approximate our target function will be
kept. The level of penalization is controlled by a hyperparameter λ ≥ 0. As commonly done in
machine learning [6], the value of λ that generalizes optimally on unseen (test) inputs is found via
cross-validation on a separate validation set (generated randomly in the same way as the training
set). The result of this step is a (column) vector ⃗β of coefficients, and an independent coefficient
β0 . Finally, we generate a test set Xtest (again, randomly in the same way as the training set) of
input values to the recurrence function to obtain a measure R2 of the accuracy of the estimation.
Additionally, we discard those terms whose corresponding coefficient is less than a given threshold
ε. The resulting closed-form expression that estimates the target function is

fˆ(⃗x) = β0 + rmε(⃗β) · E(T,⃗x)

where E(T,⃗x) is a vector of the terms in T with the arguments bound to ⃗x, and rmε takes a vector of coefficients and returns another vector where the coefficients less than ε are rounded to zero. Both the Lasso regularization and the pruning function discard many terms from T in the final function.
3. Finally, our method performs a standard linear regression again (without Lasso regularization) on the training set S, but without using the base functions corresponding to the terms discarded previously by Lasso and the ε-pruning. In our example, with ε = 0.05, we obtain:
fˆ(x) = 1.0 x
with a value R2 = 1, which means that the estimation obtained predicts exactly the values for the
test set, and thus, it is a candidate solution for the recurrence in Equation 1. If R2 were less than 1,
it would mean that the function obtained is not a candidate (exact) solution, but a (possibly unsafe)
approximation, as there are values in the test set that cannot be exactly predicted.
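Putting the three steps together, the following is a minimal sketch in Python using scikit-learn (cf. [14]); it is only an illustrative sketch, and the set T below is a hypothetical placeholder for the actual set of base functions:

```python
import random
import numpy as np
from functools import lru_cache
from sklearn.linear_model import LassoCV, LinearRegression

@lru_cache(maxsize=None)
def f(x):
    """The recurrence of Equation 1: f(0) = 0, f(x) = f(f(x - 1)) + 1."""
    return 0 if x == 0 else f(f(x - 1)) + 1

# Step 1: training set. T here is a hypothetical stand-in for the paper's set.
T = [lambda x: x, lambda x: x ** 2, lambda x: x ** 3]
X_train = [random.randint(0, 20) for _ in range(100)]
b = np.array([f(x) for x in X_train], dtype=float)               # dependent values b
C = np.array([[t(x) for t in T] for x in X_train], dtype=float)  # c_i = [[t_i]]x

# Step 2: Lasso regression, with lambda chosen by cross-validation.
lasso = LassoCV(cv=5).fit(C, b)

# Epsilon-pruning, then Step 3: plain least-squares refit on the surviving terms.
eps = 0.05
keep = np.abs(lasso.coef_) >= eps
refit = LinearRegression().fit(C[:, keep], b)
print(keep, refit.intercept_, refit.coef_)  # expect: only the term x survives, coefficient ~1
```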
For the check stage, consider again the recurrence to be solved, now together with the candidate solution fˆ obtained by the guess stage:

f(x) = 0                    if x = 0
f(x) = f(f(x − 1)) + 1      if x > 0        (2)

fˆ(x) = x                   if x ≥ 0
Now, Expression (3) below shows the encoding of the recurrence as a first-order logic formula:

∀x ((x = 0 =⇒ f(x) = 0) ∧ (x > 0 =⇒ f(x) = f(f(x − 1)) + 1))        (3)
Finally, Expression (4) below shows the negation of such a formula, where the references to the function name have been substituted by the definition of the candidate solution fˆ(x) = x:

¬∀x ((x = 0 =⇒ x = 0) ∧ (x > 0 =⇒ x = (x − 1) + 1))        (4)
It is easy to see that Formula (4) is unsatisfiable. Therefore, fˆ(x) = x is an exact solution for f (x) in the
recurrence defined by Equation 1.
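For this concrete example, the check stage fits in a few lines using Z3's Python API [19]; this is a hand-coded sketch of Formulas (3) and (4), not the general algorithm of Section 4:

```python
from z3 import Int, And, Implies, Not, ForAll, Solver, unsat

x = Int('x')
fhat = lambda e: e   # candidate solution: fhat(x) = x

# Formula (3) with f replaced by the candidate fhat.
formula = ForAll([x], And(Implies(x == 0, fhat(x) == 0),
                          Implies(x > 0, fhat(x) == fhat(fhat(x - 1)) + 1)))

s = Solver()
s.add(Not(formula))        # assert the negation, Formula (4)
print(s.check() == unsat)  # True: fhat(x) = x is an exact solution
```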
For some cases where the candidate solution contains transcendental functions, our implementation
of the method uses a CAS to perform simplifications and transformations, in order to obtain a formula
supported by the SMT-solver. We find this combination of CAS and SMT-solver particularly useful,
since it allows solving more problems than only using one of these systems in isolation.
3 Preliminaries
Recurrence relations. A recurrence relation of order k, k > 0, for a function f , is a set of equations
that give k initial values for f , and an equation that recursively defines any other value of f as a function g
that takes k previous values of f as parameters. For example, the following recurrence relation of second
order (k = 2), with g being the arithmetic addition +, defines the Fibonacci function:
f(n) = 1                        if n = 0 or n = 1
f(n) = f(n − 1) + f(n − 2)      if n ≥ 2        (5)
A challenging class of recurrences that we can solve with our approach are “nested” recurrences, e.g.,
recurrences of the form f (n) = g( f ( f (n − 1))).
We use the letters x, y, z to denote variables, and a, b, c, d to denote constants and coefficients. We
use f , g to represent functions, and e,t to represent arbitrary expressions. We use ϕ to represent arbitrary
boolean constraints over a set of variables. Sometimes, we also use β to represent coefficients obtained
with linear regression. In all cases, the symbols can be subscripted. We use ⃗x to denote a finite sequence
⟨x1 , x2 , . . . , xn ⟩, for some n > 0. Given a sequence S and an element x, ⟨x|S⟩ is a new sequence with first
element x and tail S.
Given a piecewise function:
f(⃗x) = e1(⃗x)   if ϕ1(⃗x)
       e2(⃗x)   if ϕ2(⃗x)
       ···
       ek(⃗x)   if ϕk(⃗x)        (6)
where f ∈ D → R+, with D = {⃗x | ⃗x ∈ Z^m ∧ ϕpre(⃗x)} for some boolean constraint ϕpre, and ei(⃗x), ϕi(⃗x)
are arbitrary expressions and constraints over ⃗x respectively. We say that ϕpre is the precondition of f ,
and that f is a constrained recurrence relation if and only if:
• ∃i ∈ [1, k] such that ei does not contain any call to f (i.e., it is in closed form).
• ϕpre |= ϕ1 ∨ ··· ∨ ϕk.

Operationally, f is evaluated for a concrete value ⃗d by trying the cases in order and returning the value of the first case whose constraint is satisfied:

if ϕ1(⃗d) then
    return e1(⃗d)
else
    if ϕ2(⃗d) then
        return e2(⃗d)
    else
        ···
    end if
end if
More formally, let def( f ) denote the definition of a (piecewise) constrained recurrence relation f
represented as the sequence ⟨(e1 (⃗x), ϕ1 (⃗x)), . . . , (ek (⃗x), ϕk (⃗x))⟩, where each element of the sequence is
a pair representing a case. The order of such a sequence determines the evaluation strategy. Then, the evaluation of f for a concrete value ⃗d, denoted EvalFun(f(⃗d)), is defined as follows:

EvalFun(f(⃗d)) = EvalBody(def(f), ⃗d)

EvalBody(⟨(e, ϕ)|Ps⟩, ⃗d) = [[e]]⃗d                 if ϕ(⃗d)
EvalBody(⟨(e, ϕ)|Ps⟩, ⃗d) = EvalBody(Ps, ⃗d)        if ¬ϕ(⃗d)
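A minimal Python sketch of this first-match evaluation (the names eval_fun and cases are ours, for illustration):

```python
def eval_fun(cases, d):
    """EvalFun/EvalBody sketch: try the cases of def(f) in order and return
    the value of the first case whose constraint phi holds for d."""
    for phi, e in cases:
        if phi(d):
            return e(d)
    raise ValueError("no case applies (precondition violated)")

# def(f) for the recurrence of Equation 1, as a sequence of (phi_i, e_i) pairs.
f = lambda d: eval_fun([(lambda x: x == 0, lambda x: 0),
                        (lambda x: x > 0,  lambda x: f(f(x - 1)) + 1)], d)
assert f(5) == 5
```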
Consider a function fˆ defined as

fˆ(⃗x) = β0 + β1 t1(⃗x) + ··· + βn tn(⃗x)        (7)

where βi ∈ R, and the ti's are expressions over ⃗x not including recursive references to fˆ. If the above conditions are met, we say that fˆ is a closed form for f.
To illustrate the need for introducing an evaluation strategy for the recurrence that is consistent with the termination of the program, consider the following Prolog program, which does not terminate for a call p(X) where X is bound to a positive integer:

p(X) :- X > 0, X1 is X + 1, p(X1).
p(X) :- X = 0.

The following recurrence relation for its cost (in resolution steps) can be set up:

Cp(x) = 1                   if x = 0
Cp(x) = 1 + Cp(x + 1)       if x > 0        (8)

A CAS will give the closed form Cp(x) = 1 − x for such a recurrence; however, the cost analysis should give Cp(x) = ∞.
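Evaluating the recurrence as defined by EvalFun makes the divergence explicit; in practice our first stage discards such recurrences via a timeout (see Section 7). A small sketch:

```python
def C_p(x):
    """The cost recurrence of Equation 8; its evaluation diverges for x > 0."""
    return 1 if x == 0 else 1 + C_p(x + 1)

try:
    C_p(1)                   # C_p(1) = 1 + C_p(2) = 1 + 1 + C_p(3) = ...
except RecursionError:
    print("evaluation of C_p(1) does not terminate")
```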
Linear Regression. Linear regression [5] is a statistical technique used to approximate the linear relationship between a number of independent variables and a dependent (output) variable. Given a vector of independent (input) variables X = (X1, . . . , Xp)T ∈ R^p, we predict the output variable Y using the formula
Y = β0 + ∑_{i=1}^{p} βi Xi        (9)

which is defined through the vector of coefficients β = (β0, . . . , βp)T ∈ R^{p+1}. Such coefficients are estimated from a set of observations {yi, xi1, . . . , xip}_{i=1}^{n} so as to minimize a loss function, most commonly the sum of squares:

β = argmin_{β ∈ R^{p+1}} ∑_{i=1}^{n} ( yi − β0 − ∑_{j=1}^{p} xij βj )²        (10)
Sometimes (as is our case) some of the input variables are not relevant to explain the output, but the above least-squares estimate will almost always assign nonzero values to all the coefficients. In order to force the estimate to set the coefficients of irrelevant variables exactly to zero (hence removing them and performing feature selection), various techniques have been proposed. The most widely used one is the Lasso [6], which adds an ℓ1 penalty on β (i.e., the sum of absolute values of the coefficients) to Expression 10:
β = argmin_{β ∈ R^{p+1}} ∑_{i=1}^{n} ( yi − β0 − ∑_{j=1}^{p} xij βj )² + λ ∑_{j=1}^{p} |βj|        (11)
where λ ≥ 0 is a hyperparameter that determines the level of penalization: the greater λ , the greater
the number of coefficients that are exactly equal to 0. The Lasso has two advantages over other feature
selection techniques for linear regression. First, it defines a convex problem whose unique solution can
be efficiently computed even for datasets where either n or p is large (almost as efficiently as a
standard linear regression). Second, it has been shown in practice to be very good at estimating the
relevant variables.
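As a small, self-contained illustration of this selection effect (synthetic data; the alpha value plays the role of λ and is fixed by hand here rather than by cross-validation):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))  # three candidate features
y = 2.0 * X[:, 0] + 0.5                # only the first feature is relevant

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)       # approximately [2, 0, 0]: irrelevant coefficients are exactly zero
print(model.intercept_)  # approximately 0.5
```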
⟦ ⋀_{i=1}^{k} ( ( ⋀_{j=1}^{i−1} ¬ϕj(⃗x) ) ∧ ϕi(⃗x) ∧ ϕpre(⃗x) =⇒ Eqi ) ⟧SMT        (12)
where Eqi is the result of replacing in F(⃗x) = ei (⃗x) each occurrence of F, if possible, by the definition of
the candidate solution F̂ (by using replaceCalls in line 4), and performing a simplification by the CAS
(by using simplifyCAS in line 6). A goal of such simplification is to obtain (sub)expressions supported by
the SMT-solver. The function replaceCalls(expr, F(⃗x′ ), F̂, ϕpre , ϕ) replaces every subexpression in expr
of the form F(⃗x′) by F̂(⃗x′), if ϕpre(⃗x) ∧ ϕ =⇒ ϕpre(⃗x′). The operation ⟦e⟧SMT is the translation of any
expression e to an SMT-LIB expression. Although all variables appearing in Formula 12 are declared
as integers, we omit these details in Algorithm 2 and in Formula 12 for the sake of brevity. Note that
this encoding is consistent with the evaluation (EvalFun) described in Section 3. Finally, the algorithm
asks the SMT-solver for models of the negated formula (line 17). If no model exists, then it returns
true, concluding that F̂ is an exact solution to the recurrence, i.e., F̂(⃗x) = F(⃗x) for any input ⃗x ∈ D such
that EvalFun(F(⃗x)) terminates. Otherwise, it returns false. Note that, if it is not possible to replace all
occurrences of F by F̂, or if after performing the simplification by simplifyCAS there are subexpressions
not supported by the SMT-solver, then the algorithm finishes returning false.
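The roles of replaceCalls and simplifyCAS can be illustrated with SymPy [11] on the running example; this is a sketch of the substitution and simplification steps only, omitting the precondition check ϕpre(⃗x) ∧ ϕ =⇒ ϕpre(⃗x′):

```python
import sympy as sp

x = sp.Symbol('x', integer=True)
F = sp.Function('F')

# Recursive case of Equation 1: F(x) = F(F(x - 1)) + 1, candidate Fhat(x) = x.
rhs = F(F(x - 1)) + 1
replaced = rhs.replace(F, lambda arg: arg)  # replaceCalls: each F(e) becomes Fhat(e) = e
print(replaced)                             # x
print(sp.simplify(replaced - x) == 0)       # simplifyCAS: the candidate satisfies the case
```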
The CiaoPP system first infers size relations for the different arguments of predicates, using a rich
set of size metrics (see [13, 17] for details). Assume that the size metric used in this example for the numeric argument X is its actual value (denoted int(X)). The system will try to infer a function
Sp (x) that gives the size of the output argument of p/2 (the second one), as a function of the size (x) of the
input argument (the first one). For this purpose, the following size relations for Sp(x) are automatically set up (the same as the recurrence in Equation 1, used as an example in Section 2):

Sp(x) = 0                        if x = 0
Sp(x) = Sp(Sp(x − 1)) + 1        if x > 0        (13)
The first and second recurrence correspond to the first and second clauses respectively (i.e., base and
recursive cases). Once recurrence relations (either representing the size of terms, as the ones above,
or the computational cost of predicates, as the ones that we will see later) have been set up, a solving
process is started.
Nested recurrences, such as the one arising in this example, cannot be handled by most state-of-the-art recurrence solvers. In particular, the modular solver used by CiaoPP fails to find a closed-form function for the recurrence relation above. In contrast, the novel approach that we propose, sketched in the next
section, obtains the closed form Ŝp (x) = x, which is an exact solution of such recurrence (as shown in
Section 2).
Once the size relations have been inferred, CiaoPP uses them to infer the computational cost of a
call to p/2. For simplicity, assume that in this example, such cost is given in terms of the number of
resolution steps, as a function of the size of the input argument, but note that CiaoPP’s cost analysis
is parametric with respect to resources, which can be defined by the user by means of a rich assertion
language, so that it can infer a wide range of resources, besides resolution steps. Also for simplicity,
we assume that all builtin predicates, such as arithmetic/comparison operators, have zero cost (in practice there is a "trust" assertion for each builtin that specifies its cost as if it had been inferred by the analysis).
In order to infer the cost of a call to p/2, represented as Cp (x), CiaoPP sets up the following cost
relations, by using the size relations inferred previously:
Cp(x) = 1                                      if x = 0
Cp(x) = Cp(x − 1) + Cp(Sp(x − 1)) + 1          if x > 0        (14)
We can see that the cost of the second recursive call to predicate p/2 depends on the size of the output
argument of the first recursive call to such predicate, which is given by function Sp (x), whose closed
form Sp(x) = x is computed by our approach, as already explained. Plugging such a closed form into the recurrence relation above, it can now be solved by CiaoPP, obtaining Cp(x) = 2^{x+1} − 1.
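This closed form is easy to sanity-check by direct evaluation: with Sp(x) = x plugged in, Equation 14 becomes Cp(x) = 2 Cp(x − 1) + 1, and Cp(x) = 2^{x+1} − 1 follows by induction. A two-line check:

```python
def C_p(x):
    """Cost recurrence (14) with the inferred size function S_p(x) = x plugged in."""
    return 1 if x == 0 else C_p(x - 1) + C_p(x - 1) + 1

assert all(C_p(x) == 2 ** (x + 1) - 1 for x in range(12))
```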
Table 1: Closed forms obtained with the previous (CF) and new solver (CFNew).
As we can see, none of the recurrences are solvable by the current CiaoPP solver, except s-max, for which a specialized solver was developed relatively recently. Likewise, none of the recurrences are solvable by the CASs Mathematica [10] and SymPy [11], which we can arguably consider state-of-the-art CASs. In contrast, our new solver is able to infer exact closed-form functions for all the recurrences in a reasonable time.
The experimental results are quite promising, showing that our approach can find exact, verified, closed-form solutions, in a reasonable time, for such recurrences, which potentially implies arbitrarily large accuracy gains in the cost analysis of (logic) programs. Not being able to solve a recurrence can cause huge accuracy losses; for instance, if such a recurrence corresponds to a predicate that is deep in the control flow graph of the program, the accuracy loss is propagated up to the main predicate, so that no useful information is inferred at all.
Since our technique uses linear regression with a randomly generated training set (by evaluating the
recurrence to obtain the dependent value), it is not guaranteed that a solution can be found. Even if an
exact solution is found in the first stage, it is not always possible to prove its correctness in the second
stage. Therefore, in this sense, this approach is not complete. However, it is able to find some solutions
that current state-of-the-art solvers are unable to find. As a proof of concept, we have considered a particular deterministic evaluation for constrained recurrence relations, and the verification of the candidate solution is consistent with this evaluation. However, it is possible to implement different evaluation semantics for the recurrences, adapting the verification stage accordingly. Note that we need to require the
termination of the recurrence evaluation as a precondition for the conclusions obtained. This is also due
to the particular evaluation strategy of recurrences that we are considering. In practice, non-terminating
recurrences can be discarded in the first stage, by setting a timeout. Our approach can also be combined
with a termination prover in order to guarantee such a precondition. Finally, note that an alternative
use of our tool is to omit the verification stage, using only the closed-form function inferred by the first
stage, together with an error measure. This can be useful in some applications (e.g., granularity control
in parallel/distributed computing) where it is enough to have good although unsafe approximations.
As future work, we plan to fully integrate our novel solver into the CiaoPP system, combining it with its current set of back-end solvers in order to improve the static cost analysis. We also plan to further refine and improve our algorithms in several directions. As already explained, currently the set T of base functions is fixed and user-provided. We plan to infer it automatically by using different heuristics.
We can perform an automatic analysis of the recurrence we are solving, to extract some features that
allow selection of the terms that most likely are part of the solution. For example, if the recurrence has
a nested, double recursion, then we can select a quadratic term, etc. Also, machine learning techniques
may be applied to learn a good set of base functions from some features of the programs.
Acknowledgments This work has been partially supported by MICINN projects PID2019-108528RB-C21
ProCode, TED2021-132464B-I00 PRODIGY, and FJC2021-047102-I, and the Tezos foundation. The authors
would also like to thank Louis Rustenholz, John Gallagher, Manuel Hermenegildo, José F. Morales and the anonymous reviewers for very useful feedback. Louis Rustenholz also recreated the experimental results and double-checked them.
References
[1] E. Albert, P. Arenas, S. Genaim & G. Puebla (2011): Closed-Form Upper Bounds in Static Cost Analysis.
Journal of Automated Reasoning 46(2), pp. 161–203, doi:10.1007/s10817-010-9174-1.
[2] S. K. Debray & N. W. Lin (1993): Cost Analysis of Logic Programs. ACM TOPLAS 15(5), pp. 826–875,
doi:10.1145/161468.161472.
[3] S. K. Debray, N.-W. Lin & M. V. Hermenegildo (1990): Task Granularity Analysis in Logic Programs. In:
Proc. PLDI’90, ACM, pp. 174–188, doi:10.1145/93542.93564.
[4] S. K. Debray, P. Lopez-Garcia, M. V. Hermenegildo & N.-W. Lin (1997): Lower Bound Cost Estimation for
Logic Programs. In: ILPS’97, MIT Press, pp. 291–305, doi:10.7551/mitpress/4283.001.0001.
[5] Trevor Hastie, Robert Tibshirani & Jerome Friedman (2009): The Elements of Statistical Learning: Data
Mining, Inference and Prediction, second edition. Springer New York, NY, doi:10.1007/978-0-387-84858-7.
[6] Trevor Hastie, Robert Tibshirani & Martin Wainwright (2015): Statistical Learning with Sparsity: The Lasso
and Generalizations. Chapman & Hall/CRC, doi:10.1201/b18401.
[7] M. Hermenegildo, G. Puebla, F. Bueno & P. Lopez Garcia (2005): Integrated Program Debugging, Veri-
fication, and Optimization Using Abstract Interpretation (and The Ciao System Preprocessor). Science of
Computer Programming 58(1–2), pp. 115–140, doi:10.1016/j.scico.2005.02.006.
[8] P. Lopez-Garcia, L. Darmawan, M. Klemen, U. Liqat, F. Bueno & M. V. Hermenegildo (2018): Interval-
based Resource Usage Verification by Translation into Horn Clauses and an Application to Energy Con-
sumption. TPLP 18(2), pp. 167–223, doi:10.1017/S1471068418000042.
[9] P. Lopez-Garcia, M. Klemen, U. Liqat & M. V. Hermenegildo (2016): A General Framework for
Static Profiling of Parametric Resource Usage. TPLP (ICLP’16 Special Issue) 16(5-6), pp. 849–865,
doi:10.1017/S1471068416000442.
[10] (2023): Wolfram Mathematica (v13.2): the World’s Definitive System for Modern Technical Computing.
https://www.wolfram.com/mathematica. Accessed: May 25, 2023.
[11] Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondřej Čertík, Sergey B. Kirpichev, Matthew Rock-
lin, AMiT Kumar, Sergiu Ivanov, Jason Keith Moore, Sartaj Singh, Thilina Rathnayake, Sean Vig, Brian E.
Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, Shivam Vats, Fredrik Johansson, Fabian Pe-
dregosa, Matthew J. Curry, Andy R. Terrel, Štěpán Roučka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal,
Robert Cimrman & Anthony Scopatz (2017): SymPy: symbolic computing in Python. PeerJ Computer Sci-
ence 3, p. e103, doi:10.7717/peerj-cs.103.
[12] Leonardo Mendonça de Moura & Nikolaj Bjørner (2008): Z3: An Efficient SMT Solver. In C. R. Ramakr-
ishnan & Jakob Rehof, editors: Tools and Algorithms for the Construction and Analysis of Systems, 14th
International Conference, TACAS 2008, Lecture Notes in Computer Science 4963, Springer, pp. 337–340,
doi:10.1007/978-3-540-78800-3_24.
[13] J. Navas, E. Mera, P. Lopez-Garcia & M. Hermenegildo (2007): User-Definable Resource Bounds Analysis
for Logic Programs. In: Proc. of ICLP’07, LNCS 4670, Springer, pp. 348–363, doi:10.1007/978-3-540-
74610-2_24.
[14] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier
Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre
Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot & Edouard Duchesnay (2011): Scikit-
learn: Machine Learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830,
doi:10.5555/1953048.2078195. Available at https://dl.acm.org/doi/10.5555/1953048.2078195.
[15] A. Podelski & A. Rybalchenko (2004): A Complete Method for the Synthesis of Linear Ranking Functions.
In: VMCAI’04, LNCS 2937, Springer, pp. 239–251, doi:10.1007/978-3-540-24622-0_20.
[16] M. Rosendahl (1989): Automatic Complexity Analysis. In: 4th ACM Conference on Functional Programming
Languages and Computer Architecture (FPCA’89), ACM Press, pp. 144–156, doi:10.1145/99370.99381.
[17] A. Serrano, P. Lopez-Garcia & M. V. Hermenegildo (2014): Resource Usage Analysis of Logic Pro-
grams via Abstract Interpretation Using Sized Types. TPLP, ICLP’14 Special Issue 14(4-5), pp. 739–754,
doi:10.1017/S147106841400057X.
[18] B. Wegbreit (1975): Mechanical Program Analysis. Communications of the ACM 18(9), pp. 528–539,
doi:10.1145/361002.361016.
[19] (2023): Z3 API in Python. https://ericpony.github.io/z3py-tutorial. Accessed: May 25, 2023.