An Adaptive Pruning Method For Discrete L-Curve
Abstract
We describe a robust and adaptive implementation of the L-curve criterion. The algorithm locates the corner of a discrete L-curve
which is a log–log plot of corresponding residual norms and solution norms of regularized solutions from a method with a discrete
regularization parameter (such as truncated SVD or regularizing CG iterations). Our algorithm needs no predefined parameters, and
in order to capture the global features of the curve in an adaptive fashion, we use a sequence of pruned L-curves that correspond to
considering the curves at different scales. We compare our new algorithm to existing algorithms and demonstrate its robustness by
numerical examples.
© 2005 Elsevier B.V. All rights reserved.
1. Introduction
We are concerned with discrete ill-posed problems, i.e., linear systems of equations A x = b or linear least squares problems min ||A x − b||_2 with a very ill-conditioned coefficient matrix A, obtained from the discretization of an ill-posed problem, such as a Fredholm integral equation of the first kind. To compute stable solutions to such systems under
the influence of data noise, one must use regularization. The amount of regularization is controlled by a parameter, and
in most cases it is necessary to choose this parameter from the given problem and the given set of data.
In this paper we consider regularization methods for which the regularization parameter takes discrete values k, e.g.,
when the regularized solution lies in a k-dimensional subspace. Examples of such methods are truncated SVD and
regularizing CG iterations. These methods produce a sequence of p regularized solutions xk for k = 1, 2, . . . , p, and
the goal is to choose the optimal value of the parameter k. A variety of methods have been proposed for this parameter
choice problem, such as the discrepancy principle, error-estimation methods, generalized cross-validation, and the
L-curve criterion. For an overview, see Chapter 7 in [5].
夡 This work was supported in part by Grant no. 21-03-0574 from the Danish Natural Science Research Foundation, and by COFIN Grant no.
doi:10.1016/j.cam.2005.09.026
484 P.C. Hansen et al. / Journal of Computational and Applied Mathematics 198 (2007) 483 – 492
This work focuses on the L-curve criterion, which is based on a log–log plot of corresponding values of the residual and solution norms:

( log ||A x_k − b||_2 , log ||x_k||_2 ), k = 1, . . . , p. (1)
For many problems arising in a variety of applications, it is found that this curve has a particular “L” shape, and that
the optimal regularization parameter corresponds to a point on the curve near the “corner” of the L-shaped region; see,
e.g., [5, Section 7.5] or [6] for an analysis of this phenomenon.
For continuous L-curves it was suggested in [8] to define the corner as the point with maximum curvature. For discrete
L-curves it is less obvious how to make an operational definition of a corner suited for computer implementation. While
a few attempts have been made [2,4,9], we feel that there is a need for an efficient and robust general-purpose algorithm
for computing the corner of a discrete L-curve. The algorithm developed in this paper achieves its robustness through
a combination of several adaptive strategies, and the Matlab code is available from the authors.
The L-curve criterion has its limitations; it does not work well when the solution is very smooth [3], and the
regularization parameter may not behave consistently with the optimal parameter as the problem size goes to infinity
[10] (see also the discussion of these limitations in [6]). It is still worthwhile to have access to a robust algorithm for
discrete L-curves, partly because the method can work very well, and partly because it can be used together with other
methods with other limitations.
Our paper is organized as follows. Section 2 introduces the discrete L-curve and summarizes previous algorithms
for finding the corner. Section 3 is the main contribution of the paper and describes the new algorithm in detail. The
algorithm is tested in Section 4 where the performance is shown using a series of smaller test problems as well as a
large-scale problem.
2. The discrete L-curve

For convenience assume that the coefficient matrix A is of dimensions m × n with m ≥ n, and let the SVD of A be given by A = Σ_{i=1}^n u_i σ_i v_i^T. What characterizes a discrete ill-posed problem is that the singular values σ_i decay gradually to zero, and that the absolute values of the right-hand-side coefficients u_i^T b, from some point and on average, decay faster.
The best known regularization method with a discrete regularization parameter is probably the truncated SVD
(TSVD) method. For any k < n the TSVD solution x_k is defined by

x_k = Σ_{i=1}^k (u_i^T b / σ_i) v_i . (2)
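As an illustration (not the authors' code), all TSVD solutions x_1, . . . , x_p can be computed from a single SVD; in the NumPy sketch below, the function name and the row-wise storage of the solutions are our own choices.

```python
import numpy as np

def tsvd_solutions(A, b, p):
    """Return the TSVD solutions x_1, ..., x_p of Eq. (2), stored as rows."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b) / s                       # u_i^T b / sigma_i
    # x_k is the k-term partial sum of the SVD expansion; cumsum gives all of them
    X = np.cumsum(coeffs[:, None] * Vt, axis=0)  # row k-1 holds x_k
    return X[:p]
```

For k = n this reproduces the least squares (or exact) solution; regularization comes from truncating the sum before the noise-dominated coefficients u_i^T b / σ_i blow up.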
The CGLS algorithm is mathematically equivalent to applying the CG method to the normal equations, and when
applied to ill-posed problems this method exhibits semi-convergence, i.e., initially the iterates approach the exact
solution while at later stages they deviate from it again. Moreover, it is found that the number k of iterations plays the
role of the regularization parameter. Hence, these so-called regularizing CG iterations also lead to a discrete L-curve.
For more details see, e.g., Sections 6.3–6.5 in [5].
In the rest of the paper we occasionally need to talk about the oriented angle between the two line segments associated with a triple of L-curve points, with the usual convention for the sign of the angle. Specifically, let P_j, P_k and P_ℓ be three points satisfying j < k < ℓ, and let v_{r,s} denote the vector from P_r to P_s. Then we define the oriented angle θ(j, k, ℓ) ∈ [−π, π] associated with the triple as the angle between the two vectors v_{j,k} and v_{k,ℓ}.
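In code, the oriented angle is conveniently obtained from the cross and dot products of the two segment vectors via atan2. The sketch below is our own illustration of this convention; with it, an ideal L-shaped corner (a flat segment followed by a steep one) yields −π/2.

```python
import numpy as np

def oriented_angle(Pj, Pk, Pl):
    """Signed angle in [-pi, pi] between v_{j,k} and v_{k,l} (counterclockwise positive)."""
    v1 = np.asarray(Pk, dtype=float) - np.asarray(Pj, dtype=float)
    v2 = np.asarray(Pl, dtype=float) - np.asarray(Pk, dtype=float)
    cross = v1[0] * v2[1] - v1[1] * v2[0]   # proportional to sin of the angle
    dot = v1 @ v2                           # proportional to cos of the angle
    return np.arctan2(cross, dot)
```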
Fig. 1. Top: a discrete L-curve for TSVD with a global corner at k = 9 and a little “step” at k = 4; the smallest angle between neighboring triplets of
points occurs at k = 4. Bottom left: part of the Tikhonov L-curve for the same problem. Bottom right: part of the 2D spline curve used by the Matlab
function l_corner in REGULARIZATION TOOLS [4]; the point on the spline curve with maximum curvature is indicated by the diamond.
With this definition, an angle θ(j, k, ℓ) < 0 corresponds to a point which is a potential candidate for the corner point, while θ(j, k, ℓ) ≥ 0 indicates a point of no interest.
In principle, it ought to be easy to find the corner of a discrete L-curve: compute all angles θ(k − 1, k, k + 1) for k = 2, . . . , p − 1 and associate the corner point P_k with the angle closest to −π/2. Unfortunately, this simple approach
is not guaranteed to work in practice because discrete L-curves often have several small local corners, occasionally
associated with clusters of L-curve points. A global point of view of the discrete L-curve is needed in order to find the
desired corner of the overall curve.
Fig. 1 illustrates this, using the TSVD method on a tiny problem. The top left plot shows a discrete L-curve with 11
points, with the desired global corner at k = 9 and with a local corner at k = 4 (shown in more detail in the top right
plot). For this particular L-curve, the smallest angle between neighboring line segments is attained at k = 4; but the
L-curve’s little “step” here is actually an insignificant part of the overall horizontal part of the curve in this region. The
bottom left plot in Fig. 1 shows a part of the continuous Tikhonov L-curve for the same problem, together with the
points of the discrete TSVD L-curve. Clearly the Tikhonov L-curve is not affected very much by the little “step” of the
discrete L-curve.
Two algorithms have been proposed for computing the corner of a discrete L-curve, taking into account the need to
capture the overall behavior of the curve and avoiding the local corners. The first algorithm, which was described in
[8] and implemented in l_corner in [4], fits a 2D spline curve to the points of the discrete L-curve. The curvature of the
spline curve is well defined and independent of the parametrization, and the algorithm returns the point on the discrete
L-curve closest to the corner of the spline curve.
The spline curve has an undesired tendency to track the unwanted local corners of the discrete L-curve, and therefore
a preprocessing stage is added where the L-curve points are first smoothed by means of a local low-degree polynomial.
Unfortunately, this smoothing step depends on a few parameters. Hence the overall algorithm is not adaptive, and often
it is necessary to hand-tune the parameters of the smoothing process in order to remove the influence of the small local
corners, without missing the global corner. If we use the default parameters in [4] then we obtain the spline curve shown
in the bottom right plot of Fig. 1 whose corner (indicated by the diamond) is, incorrectly, located at the little “step.”
A more recent algorithm, called the triangle method, is described in [2]. The key idea here is to consider the following
triples of L-curve points:
(P_j , P_k , P_p ), j = 1, . . . , p − 2, k = j + 1, . . . , p − 1,

and identify as the corner the triple for which the oriented angle θ(j, k, p) is minimal. If all angles θ(j, k, p) are greater than −π/8 then the L-curve is considered “flat” and the leftmost point is chosen. Note that the leftmost point P_p of the
entire L-curve is always included in the calculations. The authors of the triangle algorithm [2] were not able to provide
us with Matlab code, and hence the tests in Section 4 are done using our own Matlab implementation. For the L-curve
in Fig. 1 this algorithm returns k = 8 which is a good estimate of the optimal k.
The main concern with the triangle algorithm is its complexity, which is (1/2)(p − 1)(p − 2) and which can be too
high when p is large and/or when fast processing is required, e.g., in a real-time application (possibly in connection
with updating algorithms). The amount of computation can be reduced by working with a subsampled L-curve, but the
subsampling must be done carefully by the user and is not part of the algorithm.
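Based on the description in [2], the triangle method can be sketched as follows; since the original code was unavailable, this is a reimplementation of ours (like the Matlab version used in Section 4, but written here in Python for illustration). The O(p^2) double loop is clearly visible.

```python
import numpy as np

def triangle_corner(pts):
    """Sketch of the triangle method [2]: scan all triples (P_j, P_k, P_p)."""
    pts = np.asarray(pts, dtype=float)
    p = len(pts)
    best_angle, best_k = np.inf, p - 1          # default: leftmost point P_p
    Pp = pts[-1]
    for j in range(p - 2):
        for k in range(j + 1, p - 1):
            v1 = pts[k] - pts[j]
            v2 = Pp - pts[k]
            # oriented angle theta(j, k, p) between the two segments
            angle = np.arctan2(v1[0] * v2[1] - v1[1] * v2[0], v1 @ v2)
            if angle < best_angle:
                best_angle, best_k = angle, k
    if best_angle > -np.pi / 8:                 # curve considered "flat"
        return p - 1
    return best_k
```

The indices are 0-based here, so the returned value is the position of the corner point in the input array.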
3. The adaptive pruning algorithm

An implementation of a robust discrete L-curve criterion should preferably have an average complexity of O(p log p), and must include a means for adaptively filtering small local corners. The process must be adaptive, because the size or scale of the local phenomena is problem dependent and usually unknown to the user. In addition, the algorithm should not make use of any pre-set parameters.
To achieve the required adaptivity and robustness, our new algorithm consists of two stages. In the first stage we
compute corner candidates using L-curves at different scales or resolutions (not knowing a priori which scale is optimal).
In the second stage we then compute the overall best corner from the candidates found in the first stage. During the
two stages we monitor the results, in order to identify L-curves that lack a corner (e.g., because the problem is well
conditioned).
The key idea is that if we remove the right number of points from the discrete L-curve, then the corner can easily be found using the remaining points. However, the set of points to be removed is unknown. If too few points are removed, unwanted local features remain; if too many points are removed, the corner will be incorrectly located or may disappear.
In the first stage of the algorithm, we therefore work with a sequence of pruned L-curves, that is, curves in which a varying number of points is removed. For each pruned L-curve that is considered convex, we locate two corner candidates. This produces a short list of r candidate corner points P_{k_1}, . . . , P_{k_r}, and several (or possibly all) of the candidates may be identical. Duplicate entries are removed, and the candidate list is sorted such that the remaining indices satisfy k_i > k_{i−1}. Also, to be able to handle P_{k_1}, we augment the list with the first point of the entire L-curve and set P_{k_0} = P_1.
In the second stage we then pick the best corner from the list of candidates found in the first stage. If the candidate
list includes only one point then we are done, otherwise we must choose a single point from the list. We cannot exclude
that points on the vertical part of the L-curve are among the candidates, and as a safeguard we therefore seek to avoid
any such point. If we traverse the sorted candidate list, then the wanted corner is the last candidate point before reaching
the vertical part. If no point lies on the vertical branch of the L-curve, then the last point is chosen. We use two criteria
to check for feasible points. A point P_{k_i}, i = 1, . . . , r, is considered to lie on the vertical branch of the L-curve if the vector v_{k_{i−1},k_i} has a slope greater than π/4, and the curvature of a candidate point P_{k_i} is acceptable if the angle θ(k_{i−1}, k_i, k_{i+1}) is negative.
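A simplified sketch of this stage-two selection is given below (our illustration; the published algorithm in Fig. 2 performs further checks, e.g., on the sign of the angle at each candidate). The slope test "greater than π/4" is implemented as |Δy| > |Δx|, and the indices are assumed 0-based with P_1 stored at index 0.

```python
import numpy as np

def select_corner(pts, cand):
    """Pick the last candidate before the vertical branch (stage-two sketch).

    pts  : (p, 2) array of L-curve points (log residual norm, log solution norm)
    cand : corner-candidate indices from stage one (duplicates allowed)
    """
    pts = np.asarray(pts, dtype=float)
    idx = [0] + sorted(set(cand))        # prepend P_1, i.e., set P_{k_0} = P_1
    for i in range(1, len(idx)):
        dx, dy = pts[idx[i]] - pts[idx[i - 1]]
        if abs(dy) > abs(dx):            # slope of v_{k_{i-1},k_i} exceeds pi/4
            return idx[i - 1]            # last candidate before the vertical part
    return idx[-1]                       # no vertical branch met: take the last one
```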
The complete algorithm is shown in Fig. 2. The computation of the corner candidates using each pruned L-curve is
done by two separate routines which we describe below, one relying on the angles between subsequent line segments
and one aiming at tracking the global vertical and horizontal features of the L-curve. The routine based on angles also
checks for correct curvature of the given pruned L-curve, and no corner is returned from this routine if the pruned
Fig. 2. The overall design of the adaptive pruning algorithm for locating the corner of a discrete L-curve. Here, Pk denotes a point on the original
L-curve, and Pki denotes a candidate point in the list L.
L-curve is flat or concave. The overall algorithm will always return an index k to a single point which is considered as
the corner of the discrete L-curve, unless all the pruned L-curves are found to be concave (in which case the algorithm
returns an error message).
This corner selection strategy was proposed in [9] and is similar in spirit to the guideline of the triangle method [2]. In the pruned L-curve, we find the angle θ(k − 1, k, k + 1) which is closest to −π/2. To explain our method, we consider the angle θ_i = θ(i − 1, i, i + 1) defined above, which we can write as

θ_i = s_i |θ_i| , s_i = sign(θ_i) , i = 2, . . . , p − 1.
If we normalize each vector v_{j−1,j}, j = 2, . . . , p, such that ||v_{j−1,j}||_2 = 1, then the two quantities s_i and |θ_i| are given by

s_i = sign(w_i) , w_i = (v_{i−1,i})_1 (v_{i,i+1})_2 − (v_{i−1,i})_2 (v_{i,i+1})_1 ,

and

|θ_i| = arccos( v_{i−1,i}^T v_{i,i+1} ) ,

which follows from elementary geometry. Here, (z)_l denotes coordinate l of the vector z. The corner is then defined by k = argmin_i |θ_i + π/2|. Note that w_i = sin θ_i carries sufficient information, such that k = argmin_i |w_i + 1|. If θ_k (or, equivalently, w_k) is negative, then the point P_k is accepted as a corner. Otherwise, the given pruned L-curve is considered flat or concave and no corner is found.
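The computation just described is cheap and vectorizes well; the sketch below (our illustration, with 0-based indices) forms the quantities w_i for a pruned curve and returns the corner index, or None when the curve is flat or concave.

```python
import numpy as np

def corner_by_angles(pts):
    """Angle-based corner routine: w_i = sin(theta_i) for unit segment vectors."""
    pts = np.asarray(pts, dtype=float)
    seg = np.diff(pts, axis=0)
    seg /= np.linalg.norm(seg, axis=1, keepdims=True)   # normalize each v_{j-1,j}
    # w[i] is the cross product of consecutive unit segments: the angle at pts[i+1]
    w = seg[:-1, 0] * seg[1:, 1] - seg[:-1, 1] * seg[1:, 0]
    k = int(np.argmin(np.abs(w + 1.0)))   # theta closest to -pi/2
    if w[k] < 0:
        return k + 1                      # index of the accepted corner point
    return None                           # flat or concave: no corner
```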
The approach used here is similar to an idea from [1] in which the corner of the continuous Tikhonov L-curve is
defined as the point with smallest Euclidean distance to the “origin” O of the coordinate system. With O chosen in a
suitable way, it is shown in [1] that the point on the L-curve closest to O is near the point of maximum curvature.
The main issue is to locate a suitable “origin” in the log–log plot. In [1] it is defined as the point (log ||A x_{λ_n} − b||_2 , log ||x_{λ_1}||_2 ), where x_{λ_1} and x_{λ_n} are the Tikhonov solutions for regularization parameters λ_1 = σ_1 and λ_n = σ_n,
respectively. But given only points on a discrete L-curve, neither the singular values nor their estimates are necessarily
known. Instead we seek to identify the “flat” and the “steep” parts of the L-curve, knowing that the corner must lie
between these parts.
Define the horizontal vector v_H = (−1, 0)^T, and let p̂ be the number of points on the pruned L-curve. Then we define the slopes φ_j as the angles between v_H and the normalized vectors v_{j−1,j} for j = 2, . . . , p̂. The most
Table 1
Number of loops and number of line segments per loop in stage one

No. loops p̄        1        2          3              4
p̂ in each loop     p − 1    5, p − 1   5, 10, p − 1   5, 10, 20, p − 1
horizontal line segment is identified by h = argmin_j |φ_j|, and the most vertical one by v = argmin_j |φ_j + π/2|. Again, w_j = (v_H)_1 (v_{j−1,j})_2 − (v_H)_2 (v_{j−1,j})_1 = −(v_{j−1,j})_2 carries sufficient information, and thus h = argmin_j |(v_{j−1,j})_2| and v = argmin_j |1 − (v_{j−1,j})_2| = argmax_j |(v_{j−1,j})_2|. Hence the horizontal and vertical parts of the curve can be computed very efficiently.
To ensure that the chosen line segments are good candidates for the global behavior of the L-curve, we add the
constraint that the horizontal line segment must lie to the right of the vertical one. The “origin” O is now defined as
the intersection between the horizontal line at log ||x_h||_2 and the line defined by the vector v_{v−1,v}. The corner is then selected among all p points on the entire L-curve as the point with the smallest Euclidean distance to this O.
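The following sketch illustrates this origin-based routine (our own code, not the authors'); for simplicity the constraint that the horizontal segment lie to the right of the vertical one is handled in a single pass, by requiring the horizontal segment to precede the vertical one along the curve, whereas the published algorithm uses a double loop with early termination.

```python
import numpy as np

def corner_by_origin(pts, pruned_idx=None):
    """Origin-based corner routine (sketch).

    Finds the most vertical and most horizontal segments of the (pruned) curve,
    intersects them to define the "origin" O, and returns the index of the point
    of the full curve closest to O.
    """
    pts = np.asarray(pts, dtype=float)
    idx = np.arange(len(pts)) if pruned_idx is None else np.asarray(pruned_idx)
    q = pts[idx]
    seg = np.diff(q, axis=0)
    seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)
    vert = int(np.argmax(np.abs(seg[:, 1])))     # most vertical segment
    cands = np.abs(seg[:, 1]).copy()
    cands[vert:] = np.inf                        # horizontal segment must come first
    horiz = int(np.argmin(cands))
    y0 = q[horiz + 1, 1]                         # horizontal line at log ||x_h||_2
    p1, p2 = q[vert], q[vert + 1]                # line through the vertical segment
    x0 = p1[0] + (y0 - p1[1]) * (p2[0] - p1[0]) / (p2[1] - p1[1])
    return int(np.argmin(np.linalg.norm(pts - np.array([x0, y0]), axis=1)))
```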
The work in our algorithm is obviously dominated by the two corner-finding algorithms in the loop of stage one.
Table 1 shows the number of loops p̄ and the number of line segments p̂ in each pruned L-curve, and we see that the number of loops is p̄ = ⌈log_2(0.4(p − 1))⌉ ≈ log_2 p.
The angle-based corner location algorithm (cf. Section 3.2) involves the computation of the quantities wi for i =
1, . . . , p̂ in each loop. The total amount of this work is therefore approximately 3(5 + 10 + 20 + · · · + (p − 1)) ≈ 3 · 2p
flops.
The main work in the other corner location algorithm (cf. Section 3.3) involves, in each loop, the location of the
vertical and horizontal branches of the pruned L-curve with p̂ line segments, and the corner location via the distance to
the origin. The latter involves 5p flops. The former involves a double loop with at most (p̂/2 + 1)^2 ≈ p̂^2/4 comparisons; but this double loop is terminated as soon as the angle criterion is fulfilled. In the worst case, the work is approximately

5p · p̄ + (1/4)(5^2 + 10^2 + 20^2 + · · · + (p − 1)^2) ≈ 5p log_2 p + (1/4) · 1.5p^2 flops.
The total amount of work in the adaptive pruning algorithm is therefore, in the worst case, about 6p + 5p log_2 p + 0.4p^2 flops. However, due to the early termination of the double loop mentioned above, we observe experimentally a work load of the order p log_2 p flops, cf. the next section.
4. Numerical examples

We illustrate the performance and robustness of our adaptive pruning algorithm, and we compare the algorithm to the two previously described algorithms: l_corner from [4] and the triangle method from [2]. To perform a general comparison of state-of-the-art methods, we also compare with the generalized cross validation (GCV) method, which seeks to minimize the predictive mean-square error ||A x_k − b^exact||_2, where b^exact is the noise-free right-hand side. In the case of TSVD, the parameter k chosen by the GCV method minimizes the function
G_k = ||A x_k − b||_2^2 / (n − k)^2 .
While the theory for GCV is well established, the minimum is occasionally very flat, resulting in (severely) under-regularized solutions. These problems are described, e.g., in [5, Sections 7.6–7.7].
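For TSVD the GCV function can be evaluated for all k from the SVD coefficients u_i^T b, since the squared residual after k terms is the sum of the squared discarded coefficients. The sketch below (our illustration) assumes b ∈ range(A); for m > n the component of b outside range(A) would add a constant to the numerator.

```python
import numpy as np

def gcv_tsvd(A, b):
    """Return (k, G) where G holds G_k = ||A x_k - b||_2^2 / (n - k)^2, k = 1..n-1."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    n = len(s)
    # res2[k] = sum_{i > k} (u_i^T b)^2 = squared residual of the k-term solution
    res2 = np.cumsum((beta ** 2)[::-1])[::-1]
    k = np.arange(1, n)                   # candidate truncation parameters
    G = res2[1:] / (n - k) ** 2           # the GCV function G_k
    return int(k[np.argmin(G)]), G
```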
We use eight test problems from REGULARIZATION TOOLS [4] and the problem gravity from [7]. In addition we create
four test problems with ill-conditioned matrices from Matlab’s “matrix gallery” and the exact solution from the test
problem shaw. The test problems are listed in Table 2.
Table 2
The test problems used in our comparison
All problems from REGULARIZATION TOOLS use the default solution, except ilaplace where the third solution is used. All “gallery” matrices are used
with the exact solution from the shaw test problem, and prolate is called with parameter 0.05.
All test problems consist of an ill-conditioned coefficient matrix A and an exact solution x^exact such that the exact right-hand side is given by b^exact = A x^exact. To simulate measurement errors, the right-hand side b = b^exact + e is contaminated by additive white Gaussian noise e, scaled such that the relative noise level ||e||_2 / ||b^exact||_2 is fixed. The
TSVD method is used to regularize all test problems. To evaluate the quality of the regularized solutions, we define the
best TSVD solution as the solution x_{k*} where

k* = argmin_k ||x^exact − x_k||_2 .
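This setup can be reproduced along the following lines (our own sketch; the function names are ours): the noise vector is drawn from a standard Gaussian and rescaled to the prescribed relative level, and k* is found by direct search over the stored TSVD solutions.

```python
import numpy as np

def add_noise(b_exact, rel_level, seed=None):
    """Add white Gaussian noise e scaled so that ||e||_2 / ||b_exact||_2 = rel_level."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(b_exact.shape)
    e *= rel_level * np.linalg.norm(b_exact) / np.linalg.norm(e)
    return b_exact + e

def optimal_k(X, x_exact):
    """k* = argmin_k ||x_exact - x_k||_2, with x_k stored as row k-1 of X."""
    return int(np.argmin(np.linalg.norm(X - x_exact, axis=1))) + 1
```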
For problem size n = 128 and relative noise level ||e||_2 / ||b^exact||_2 = 5 · 10^-3, all test problems are generated with eight
different realizations of the noise. Let i = 1, . . . , 13 denote the problem and j = 1, . . . , 8 the realization number. For
each i and j, we compute the optimal parameter k*_{ij} as well as k^L_{ij} from l_corner, k^G_{ij} from the GCV method, k^T_{ij} from the triangle method, and k^A_{ij} from the new adaptive pruning algorithm. The quality of all the solutions is measured by the quantity

Q^μ_{ij} = ||x_{k^μ_{ij}} − x^exact||_2 / ||x_{k*_{ij}} − x^exact||_2 ,   μ = A, L, G, T,

where A, L, G, and T refer to the adaptive pruning algorithm, l_corner, the GCV method and the triangle method, respectively. The minimum value Q^μ_{ij} = 1 is optimal, and a value Q^μ_{ij} > 100 is considered off the scale.
Fig. 3 shows the quality measure for all tests. On some occasions, k^L_{ij} and k^G_{ij} produce regularized solutions that are off the scale; this behavior is well known, because the spline might fit the local behavior of the L-curve and thus find a corner far from the global corner, and the GCV function can have a very flat minimum. The results from the pruning algorithm are never off the scale, and the results from the triangle method are only off in test problem 9. Overall, both algorithms perform equally well, the triangle method being slightly better than the pruning algorithm for test problem 11. Similar tests were performed for other values of n and different noise levels, the results being virtually the same and therefore not shown here.
It is interesting to observe that GCV behaves somewhat similarly for all test problems, whereas the three L-curve algorithms seem to divide the problems into two groups: one that seems easy to treat, and another where the L-curve criterion seems more likely to fail. This effect is more pronounced for small n. Fig. 4 shows the corner of the L-curve for a realization of test problem 9 of size n = 64. This L-curve exhibits two corners, of which both the pruning algorithm
and the triangle algorithm choose the wrong one, leading to large errors. The optimal solution does not lie exactly in
the other corner, but slightly to the right of this corner on the horizontal branch of the L-curve. This illustrates that the
L-curve heuristic can fail, as mentioned in the Introduction.
Our tests illustrate that the adaptive pruning algorithm is more robust than the l_corner algorithm and the GCV method, and that it performs similarly to the triangle method. The tests also illustrate that we cannot always expect to get the optimal regularization parameter by using the L-curve criterion, as this optimum is not always associated with the corner of the L-curve.
As mentioned earlier, the main problem with the triangle method is its complexity of O(p^2), whereas the adaptive pruning algorithm tends to have an average complexity of O(p log p). To illustrate this, all 13 test problems were run with noise levels ||e||_2/||b^exact||_2 = 5 · 10^-2, 5 · 10^-3 and 5 · 10^-4, varying the problem
Fig. 3. The quality measures Q^A_{ij}, Q^L_{ij}, Q^G_{ij}, and Q^T_{ij} for test problems 1–13; one panel per method: the adaptive pruning algorithm, l_corner from Regularization Tools, generalized cross validation (GCV), and the triangle method.
Legend of Fig. 4: L-curve; optimal; adaptive pruning algorithm; l_corner; triangle method.
Fig. 4. Problematic L-curve for problem (i, j ) = (5, 9). The curve has no simple corner, and the optimal solution lies neither in the corner nor near
the corner.
Fig. 5. Left: flop counts for the adaptive pruning algorithm and the triangle method. Right: run times for the three L-curve algorithms.
size from n = 16 to 128, and the number of floating point operations was counted. The results are shown in Fig. 5 for
the adaptive pruning algorithm and the triangle method showing the average over the three noise levels and all test
problems. The flop count for the triangle method is about 3p^2, while it is about 25p log p for the adaptive pruning algorithm.
Fig. 5 also shows the average run times. While this measure is sensitive to implementation details, it shows the same
trend as the flop counts. Our adaptive pruning algorithm is faster than both the triangle algorithm and l_corner, and for
p = 100 our algorithm is more than 10 times faster than the triangle method.
We also include a large-scale example in the form of an image deblurring problem. The blurring is spatially invariant
and separates into column and row blur, and zero boundary conditions are used in the reconstruction. This leads to a
problem where A is a 10^4 × 10^4 nonsymmetric Kronecker product of two Toeplitz matrices, and x and b are column-wise
stacked images. For the reconstruction we use CGLS with full reorthogonalization.
The L-curve for the image problem after 1500 CGLS iterations is shown in Fig. 6, and we see that both the adaptive
pruning algorithm and l_corner find points close to the true corner of the L-curve. The triangle method erroneously
Legend of Fig. 6: L-curve; optimal k (970); adaptive pruning algorithm (1131); l_corner (1122); triangle method (895).
Fig. 6. Corner part of L-curve for large-scale image reconstruction problem. The optimal solution lies on the horizontal part of the curve to the right
of the corner, and is denoted by a circle. The number in parenthesis is the corresponding number of CGLS iterations.
identifies a corner far off on the horizontal branch of the L-curve, and its run time is much larger than for the other
L-curve algorithms. A simple timing of the methods shows a run time of approximately 28 s for the triangle method
compared to about half a second for the adaptive pruning algorithm and the l_corner algorithm, using a laptop with a
Pentium Centrino 1.4 GHz processor. The GCV function is not well-defined for CGLS and therefore is not used here.
5. Conclusion
We described a new adaptive algorithm for finding the corner of a discrete L-curve. Numerical examples show that the new algorithm is more robust than the algorithm l_corner from [4] based on spline curve fitting, and faster than the triangle method [2]. It is also shown that the L-curve heuristic can fail, no matter how it is implemented.
References
[1] M. Belge, M.E. Kilmer, E.L. Miller, Efficient determination of multiple regularization parameters in a generalized L-curve framework, Inverse
Problems 18 (2002) 1161–1183.
[2] J.L. Castellanos, S. Gómez, V. Guerra, The triangle method for finding the corner of the L-curve, Appl. Numer. Math. 43 (2002) 359–373.
[3] M. Hanke, Limitations of the L-curve method for ill-posed problems, BIT 36 (1996) 287–301.
[4] P.C. Hansen, Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems, Numer. Algorithms 6 (1994)
1–35.
[5] P.C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, SIAM, Philadelphia, 1998.
[6] P.C. Hansen, The L-curve and its use in the numerical treatment of inverse problems; invited chapter, in: P. Johnston (Ed.), Computational
Inverse Problems in Electrocardiology, WIT Press, Southampton, 2001, pp. 119–142.
[7] P.C. Hansen, Deconvolution and regularization with Toeplitz matrices, Numer. Algorithms 29 (2002) 323–378.
[8] P.C. Hansen, D.P. O’Leary, The use of the L-curve in the regularization of discrete ill-posed problems, SIAM J. Sci. Comput. 14 (1993)
1487–1503.
[9] G. Rodriguez, D. Theis, An algorithm for estimating the optimal regularization parameter by the L-curve, Rend. Mat. 25 (2005) 69–84.
[10] C.R. Vogel, Non-convergence of the L-curve regularization parameter selection method, Inverse Problems 12 (1996) 535–547.