
J. Vis. Commun. Image R.

xxx (2012) xxx–xxx

Contents lists available at SciVerse ScienceDirect

J. Vis. Commun. Image R.


journal homepage: www.elsevier.com/locate/jvci

Fast image deconvolution using closed-form thresholding formulas of L_q (q = 1/2, 2/3) regularization

Wenfei Cao, Jian Sun, Zongben Xu *

School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China

a r t i c l e   i n f o

Article history:
Received 3 April 2012
Accepted 29 October 2012
Available online xxxx

Keywords:
Sparsity
L_{1/2} regularization
L_{2/3} regularization
Variable splitting
Image deconvolution
L_0 regularization
L_1 regularization
Thresholding formula

a b s t r a c t

In this paper, we focus on fast deconvolution algorithms based on non-convex L_q (q = 1/2, 2/3) sparse regularization. Recently, we deduced the closed-form thresholding formula for the L_{1/2} regularization model (Xu (2010) [1]). In this work, we further deduce the closed-form thresholding formula for the L_{2/3} non-convex regularization problem. Based on the closed-form formulas for L_q (q = 1/2, 2/3) regularization, we propose a fast algorithm to solve the image deconvolution problem using the half-quadratic splitting method. Extensive experiments on image deconvolution demonstrate that our algorithm achieves a significant acceleration over Krishnan et al.'s algorithm (Krishnan et al. (2009) [3]). Moreover, the simulated experiments further indicate that L_{2/3} regularization is more effective than L_0, L_1 or L_{1/2} regularization in image deconvolution, and L_{1/2} regularization is competitive with L_1 regularization and better than L_0 regularization.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Image blur is a common artifact in digital photography caused by camera shake or object movement. Recovering the un-blurred sharp image from the blurry image, which is generally called image deconvolution, has been a fundamental research problem in image processing and computational photography. Image deconvolution algorithms [4–6] can be categorized into blind deconvolution and non-blind deblurring, in which the blur kernel is unknown and known respectively. Tremendous methods have been proposed to estimate the blur kernel. In this work, we focus on the non-blind deblurring problem, i.e., recovering the sharp image from a blurry image given the blur kernel.

Mathematically, a blurry image can be modeled as the convolution of an ideal sharp image with a blur kernel, followed by the addition of zero-mean Gaussian white noise. The degradation process can be modeled as

  Y = X ⊗ k + n,                                            (1.1)

where X is the sharp image, k is a blur kernel, ⊗ denotes convolution, and n is the noise. Image deconvolution aims to recover a high quality image X, given a blurry image Y.

The ill-posed nature of this problem implies that additional assumptions on X should be introduced. Recently, many kinds of image priors have been discovered and utilized to regularize this ill-posed inverse problem, such as the total variation [7], nonlocal self-similarity [8–10], sparse priors [11–13] and so on. Especially, the sparsity induced by non-convex regularization, or the hyper-Laplacian distribution from a probabilistic point of view, attracts a lot of attention in the communities of computer vision [14], machine learning and compressive sensing [15–17]. These prior models give rise to surprising results. For example, Chartrand [17,18] applies non-convex regularization to the Magnetic Resonance Imaging (MRI) reconstruction task, bringing about the promising result that only few samples in K-data space can effectively reconstruct the MRI image.

In this paper, we work on a fast image deconvolution algorithm with non-convex regularization to suppress ringing artifacts and noise. The idea is motivated by Krishnan's work in [3], in which a hyper-Laplacian prior of natural images is imposed on the non-blind image deconvolution algorithm, which is equivalent to solving an inverse linear optimization problem with L_q-norm (0 < q < 1) non-convex regularization. Using the quadratic splitting framework, one sub-problem is to optimize the non-convex regularization problem:

  x* = argmin_x { (x − a)^2 + λ|x|^q }.                     (1.2)

This sub-problem actually is a very special case of the problem proposed by Elad [25] in the context of sparse representation and by Chartrand [15–17] in the setting of compressive sensing. According to their work, from the geometric point of view, the solution is just the intersection point between a hyperplane and an L_q (0 < q < 1) ball, and when q goes closer to zero, the solution of this problem

* Corresponding author.
E-mail addresses: [email protected] (W. Cao), [email protected] (J. Sun), [email protected] (Z. Xu).

1047-3203/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.jvcir.2012.10.006

Please cite this article in press as: W. Cao et al., Fast image deconvolution using closed-form thresholding formulas of L_q (q = 1/2, 2/3) regularization, J. Vis. Commun. (2012), http://dx.doi.org/10.1016/j.jvcir.2012.10.006

becomes more sparse. However, from the algebraic point of view, how to quickly solve this optimization problem is a challenge due to the non-convexity and the non-smoothness of the problem. In this work, our efforts will focus on two special values, 1/2 and 2/3, over the interval (0, 1). Traditionally, closed-form hard thresholding [19] and soft-thresholding formulas [20,21] have been proposed to solve this regularization problem when q = 0 and q = 1. For q = 1/2 or 2/3, Krishnan et al. [3] proposed to solve the above problem by presenting some clever discriminant conditions to compare and select the optimal solution from the multiple roots of the first-order derivative equation of the cost function. Although this method constitutes a major breakthrough for the problem, multiple roots must still be computed and compared to produce the final solution. A natural question is whether we could derive closed-form thresholding formulas for non-convex regularization with q = 1/2 or 2/3 in 0 < q < 1, in parallel to the well-known hard/soft thresholding formulas for q = 0 or 1.

In this work, we will present the closed-form thresholding formulas for the non-convex regularization problem in Eq. (1.2) with q = 2/3 or 1/2, and apply them to solve the image deconvolution problem. It has been found that the gradients of natural images are distributed as a heavy-tailed hyper-Laplacian distribution p(x) ∝ e^{−λ|x|^q} with 0.5 ≤ q ≤ 0.8. In the Bayesian framework, this prior imposes L_q-norm non-convex regularization on the inverse problem with the formulation in Eq. (1.2). Therefore, a fast algorithm for the L_{1/2} or L_{2/3} regularization problem, with both exponents lying in the range 0.5 ≤ q ≤ 0.8, can be expected to be promising for the image deconvolution task. The contributions of this work can be summarized as:

• We deduce the closed-form thresholding formula for the linear inverse model with L_{2/3} regularization by deeply analyzing the distribution of the roots of the first-order derivative equation of the cost function. Together with our previous work on the closed-form thresholding formula for L_{1/2} regularization in [1,2], these thresholding formulas enable a fast and efficient image deconvolution algorithm in the framework of the half-quadratic splitting strategy.

• We conduct extensive experiments over a set of natural images blurred by eight real blur kernels. The results demonstrate that our algorithm enables a significantly faster speed than Krishnan et al.'s method; moreover, L_{2/3} regularization is more effective than L_0, L_1 or L_{1/2} regularization for image deconvolution, and L_{1/2} regularization is competitive with L_1 regularization and better than L_0 regularization.

We believe that the closed-form thresholding formulas for L_{2/3} or L_{1/2} non-convex regularization are important to the machine learning and computer vision communities beyond the application of image deconvolution. That is because this linear inverse problem with non-convex regularization is a general model with wide applications in compressive sensing [16,17], image demosaicing [14], image super-resolution [14], etc. Moreover, theoretically, the closed-form formulas make the theoretical analysis of the non-convex regularization problem possible or easier, which deserves to be investigated in our future work.

The remainder of this paper is organized as follows. Section 2 describes the image deconvolution model based on non-convex regularization and its optimization using the half-quadratic splitting scheme; in Section 3, we deduce the thresholding formula for the L_q (q = 2/3) regularization problem, introduce our previously proposed thresholding formula for the L_q (q = 1/2) regularization problem, and then present our deconvolution algorithm with L_q (q = 1/2, 2/3) regularization; in Section 4, we report the experimental results in both speed and quality; finally, the paper is concluded in Section 5.

2. Image deconvolution based on non-convex regularization

2.1. Formulation

Assume that X is the original uncorrupted grayscale image with N pixels and that Y is an image degraded by blur kernel k and noise n:

  Y = X ⊗ k + n.                                            (2.1)

Non-blind deconvolution aims to restore the real image X given the known or estimated blur kernel k. Due to the ill-posedness of this task, prior information about natural images should be utilized to regularize the inverse problem [13,3]. In this work, we utilize the sparse hyper-Laplacian distribution prior of natural images in the gradient domain [3], i.e.,

  p(X) ∝ e^{−s Σ_{j=1}^{2} ||X ⊗ f_j||_q^q},                (2.2)

where ||z||_q = (Σ_i |(z)_i|^q)^{1/q}, ⊗ denotes convolution, f_1 = [1, −1] and f_2 = [1, −1]^T are two first-order derivative filters, and 0 < q < 1. From the probabilistic perspective, we seek the MAP (maximum a posteriori) estimate of X in the Bayesian framework: p(X|Y, k) ∝ p(Y|X, k)p(X), where the first term is the Gaussian likelihood and the second term is the hyper-Laplacian image prior. Maximizing p(X|Y, k) is equivalent to minimizing

  X* = argmin_X { (λ/2) ||X ⊗ k − Y||_F^2 + Σ_{j=1}^{2} ||X ⊗ f_j||_q^q },   (2.3)

where ||A||_F = (Σ_{i=1}^{M} Σ_{j=1}^{N} a_{ij}^2)^{1/2} denotes the Frobenius norm. If we assume that x and y are vectors stretched from X and Y column by column, and that K, F_1 and F_2 are the matrix forms of the filters k, f_1 and f_2 for image convolution, then problem (2.3) can be equivalently represented as

  x* = argmin_x { (λ/2) ||Kx − y||_2^2 + ||F_1 x||_q^q + ||F_2 x||_q^q },    (2.4)

where λ makes a trade-off between the fidelity term and the regularization term. When 0 ≤ q < 1, ||Fx||_q^q = Σ_i |(Fx)_i|^q imposes non-convex regularization on the image gradients.

2.2. Half-quadratic splitting algorithm

Using the half-quadratic splitting method, Krishnan et al. [3] introduced two auxiliary variables u_1 and u_2, and problem (2.4) can be converted to the following optimization problem:

  x* = argmin_x { (λ/2) ||Kx − y||_2^2 + (β/2) ||F_1 x − u_1||_2^2 + (β/2) ||F_2 x − u_2||_2^2 + ||u_1||_q^q + ||u_2||_q^q },   (2.5)

where β is a control parameter. As β → ∞, the solution of problem (2.5) converges to that of Eq. (2.4). Minimizing Eq. (2.5) for a fixed β can be performed by alternating two steps: one sub-problem is to solve for x given u_1 and u_2, which is called the x-subproblem; the other sub-problem is to solve for u_1, u_2 given x, which is called the u-subproblem.

2.2.1. x-Subproblem

Given u_1 and u_2, the x-subproblem aims to obtain the optimal x by optimizing the energy function of Eq. (2.5), which is to optimize:

  x* = argmin_x { λ||Kx − y||_2^2 + β||F_1 x − u_1||_2^2 + β||F_2 x − u_2||_2^2 }.

The subproblem can be optimized by setting the first derivative of the cost function to zero:


  (F_1^T F_1 + F_2^T F_2 + (λ/β) K^T K) x = F_1^T u_1 + F_2^T u_2 + (λ/β) K^T y,   (2.6)

where Kx = X ⊗ k. Assuming circular boundary conditions, we can apply the 2D FFT to efficiently obtain the optimal solution x* as:

  x* = IFFT( [ conj(FFT(F_1)) ∘ FFT(u_1) + conj(FFT(F_2)) ∘ FFT(u_2) + (λ/β) conj(FFT(K)) ∘ FFT(y) ] / [ conj(FFT(F_1)) ∘ FFT(F_1) + conj(FFT(F_2)) ∘ FFT(F_2) + (λ/β) conj(FFT(K)) ∘ FFT(K) ] ),   (2.7)

where conj(·) denotes the complex conjugate, ∘ denotes component-wise multiplication, and the division is also performed in a component-wise fashion. The fast Fourier transforms of F_1, F_2, K can be pre-computed; therefore solving Eq. (2.7) only requires 3 FFTs at each iteration, i.e., FFT(u_1), FFT(u_2) and IFFT(·).

2.2.2. u-Subproblem

Given a fixed x, finding the optimal u_1, u_2 can be achieved by optimizing

  u_i* = argmin_{u_i} { (β/2) ||F_i x − u_i||_2^2 + ||u_i||_q^q },

where i = 1, 2. This optimization problem can be decomposed into 2N independent one-dimensional L_q regularization problems:

  (u_i)_j* = argmin_u { |u|^q + (β/2) (u − (F_i x)_j)^2 },   (2.8)

where (·)_j (j = 1, …, N) denotes the j-th component of a vector. It has been derived that the closed-form solutions of the above problem are given by hard thresholding and soft thresholding when q = 0 and q = 1 respectively.

For 0 < q < 1, it is challenging to derive the closed-form solution of this optimization problem. Krishnan et al. [3] utilized the Newton–Raphson method to optimize this problem for 0 < q < 1. Especially for the q = 1/2 and 2/3 cases, some discriminant rules were proposed to find the global optimal solution by comparing and selecting among the roots of the first-order derivative of the cost function in Eq. (2.8). Although this accelerated the optimization procedure without the need for numerous iterations, as required by the Newton–Raphson method, it still needs to compute and compare multiple roots using discriminant rules.

In the next section, we will present the closed-form thresholding formulas for the global optimal solution of Eq. (2.8) with q = 1/2, 2/3. These formulas not only further speed up the deconvolution algorithm, but can also easily be extended to other applications in signal/image processing, e.g., denoising or super-resolution, since Eq. (2.8) is a general non-convex regularization model in these applications.

3. Image deconvolution based on closed-form thresholding formulas of L_{1/2}, L_{2/3} regularization

In this section, we will first present the thresholding formulas for the L_{1/2} and L_{2/3} regularization problems. Then, by combining the thresholding formulas with the half-quadratic splitting strategy, we propose a fast algorithm for image deconvolution.

We first review our previous work on the closed-form thresholding formula for the L_{1/2} regularization problem [1,2], i.e., to solve:

  x* = argmin_x { (x − a)^2 + λ|x|^{1/2} },                  (3.1)

where the variables are scalar values instead of vectors. This optimization problem has the closed-form thresholding formula:

  x* =  (2/3) |a| (1 + cos(2π/3 − (2/3) φ_λ(a)))    if a > p(λ),
  x* =  0                                           if |a| ≤ p(λ),
  x* = −(2/3) |a| (1 + cos(2π/3 − (2/3) φ_λ(a)))    if a < −p(λ),   (3.2)

where

  φ_λ(a) = arccos( (λ/8) (|a|/3)^{−3/2} ),   p(λ) = (54^{1/3}/4) λ^{2/3}.

Next, we will derive the closed-form thresholding formula for the L_{2/3} regularization problem.

3.1. The thresholding formula for L_{2/3} regularization

The L_{2/3} regularization model is

  x* = argmin_x { f(x) = (x − a)^2 + λ|x|^{2/3} }.           (3.3)

Our aim is to seek the minimum point of f(x), denoted as x*. In the following, we first present two lemmas, Lemma 3.1 and Lemma 3.2; then, based on the two lemmas, we derive the minimum point of f(x) when x* ≠ 0 in Lemma 3.3; finally, by Lemma 3.1 and Lemma 3.3, we derive the minimum point of f(x) for x ∈ R in Theorem 3.4.

Lemma 3.1. The minimum point x* of f(x) in Eq. (3.3) satisfies the following properties:

1. If a ≥ 0, then x* ∈ [0, a];
2. If a < 0, then x* ∈ [a, 0].

Proof. We only prove case (1); case (2) can be proved in the same way. Assuming x* < 0, without loss of generality let x* = −M (M > 0); then we have f(x*) = f(−M) = (−M − a)^2 + λ|−M|^{2/3} = (M + a)^2 + λ|M|^{2/3} > a^2 = f(0). This obviously contradicts the fact that x* is a minimum point. On the other hand, assuming x* > a, let x* = a + Δ (Δ > 0); then we have

  f(x*) = f(a + Δ) = Δ^2 + λ|a + Δ|^{2/3} > λ|a|^{2/3} = f(a).

[Fig. 1. The plots of the different threshold formulas: the thresholding curves for L_1, L_{2/3}, L_{1/2} and L_0 regularization, each plotted for a and x* in [−1, 1].]


Fig. 2. Test images.

This also leads to a contradiction with the fact that x* is a minimum point. Hence, x* ∈ [0, a] for case (1). □

Lemma 3.2. Assume that d_1 = −|A|^2/2 + a/|A| and d_2 = −|A|^2/2 − a/|A|, where

  |A| = [ (a^2/2 + (a^4/4 − (4λ/3)^3/27)^{1/2})^{1/3} + (a^2/2 − (a^4/4 − (4λ/3)^3/27)^{1/2})^{1/3} ]^{1/2}

and |a| > (4/√27) λ^{3/4}; then d_1 · d_2 < 0.
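The quantities in Lemma 3.2 can be checked numerically. The sketch below is ours (the constants are reconstructed from the surrounding derivation, not quoted from the paper); it also verifies that the radical form of |A| above agrees with the hyperbolic form used later in Theorem 3.4, and that the resulting L_{2/3} minimizer is an exact stationary point of f.

```python
import math

def A_radical(a, lam):
    """|A| via the cube-root (Cardano) form of Lemma 3.2; real only for |a| > s(lam)."""
    disc = math.sqrt(a ** 4 / 4.0 - (4.0 * lam / 3.0) ** 3 / 27.0)
    return math.sqrt((a * a / 2.0 + disc) ** (1.0 / 3.0)
                     + (a * a / 2.0 - disc) ** (1.0 / 3.0))

def A_hyperbolic(a, lam):
    """|A| via the cosh form of Theorem 3.4."""
    phi = math.acosh((27.0 * a * a / 16.0) * lam ** -1.5)
    return (2.0 / math.sqrt(3.0)) * lam ** 0.25 * math.sqrt(math.cosh(phi / 3.0))

a, lam = 2.0, 1.0                         # |a| exceeds s(lam) = 4/sqrt(27) ~ 0.77
A = A_radical(a, lam)
d1, d2 = -A * A / 2.0 + a / A, -A * A / 2.0 - a / A
print(d1 * d2 < 0.0)                      # True, as Lemma 3.2 asserts

print(abs(A - A_hyperbolic(a, lam)))      # ~0: the two representations coincide

# The L_{2/3} minimizer built from |A| satisfies f'(x) = 2(x - a) + (2*lam/3)*x^(-1/3) = 0:
x = ((A + math.sqrt(2.0 * a / A - A * A)) / 2.0) ** 3
print(abs(2.0 * (x - a) + (2.0 * lam / 3.0) * x ** (-1.0 / 3.0)))   # ~0
```

The agreement of the two |A| expressions reflects that both are representations of the same root of the resolvent cubic B^3 − (4λ/3)B − a^2 = 0 with B = |A|^2.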


Fig. 3. Test blur kernels.

Table 1
Comparison of running time with kernel 4 (19×19).

Size        DL_{1/2} (Ave./Std.)   OurL_{1/2} (Ave./Std.)   Ratio     DL_{2/3} (Ave./Std.)   OurL_{2/3} (Ave./Std.)   Ratio
256×256     0.7269/0.0340          0.1452/0.0134            5.0062    0.9009/0.0464          0.2278/0.0192            3.9548
512×512     3.5615/0.0084          0.6923/0.0061            5.1444    4.2966/0.0048          1.1865/0.0077            3.6212
1024×1024   14.1109/0.0168         2.5477/0.0166            5.5387    17.0178/0.0085         4.4434/0.0110            3.8299
2048×2048   55.0141/0.0503         9.6791/0.0284            5.6838    66.9524/0.0730         17.0195/0.0485           3.9339
3000×3000   109.8524/0.1082        18.7584/0.0948           5.8562    126.5699/0.2687        34.4123/0.1194           3.6766
3500×3500   150.1106/0.3826        25.9526/0.2777           5.7840    172.3184/0.3806        46.8686/0.0987           3.9304
4000×4000   196.3966/0.3407        33.5712/0.1605           5.8502    225.7459/1.0865        60.7211/0.1459           3.7178
Ave.        75.6819/0.1344         13.0495/0.0854           5.5519*   87.6860/0.2669         23.5542/0.0643           3.7732*

Note: values marked with * are the ratios of the average running times of the different methods.

Proof. The proof of d_1 · d_2 < 0 is equivalent to the proof of |A|^6 < 4a^2. We now prove that |A|^6 < 4a^2. Let s(t) = t^{1/3} (t ≥ 0). It is easy to verify that s(t) is concave, so we can obtain the inequality s((t_1 + t_2)/2) > (s(t_1) + s(t_2))/2 for t_1 ≠ t_2. By setting t_1 = a^2/2 + (a^4/4 − (4λ/3)^3/27)^{1/2} and t_2 = a^2/2 − (a^4/4 − (4λ/3)^3/27)^{1/2} (obviously, t_1 ≠ t_2 and t_1, t_2 ≥ 0), we can easily prove that |A|^6 < 4a^2, i.e., d_1 · d_2 < 0. □

Lemma 3.3. For f(x) in Eq. (3.3) with λ > 0 and a ∈ R, if x* ≠ 0, then the minimum point of f(x) can be represented as:

  x̂ =  ( (|A| + (2|a|/|A| − |A|^2)^{1/2}) / 2 )^3     if a > s(λ),
  x̂ = −( (|A| + (2|a|/|A| − |A|^2)^{1/2}) / 2 )^3     if a < −s(λ),

where

  |A| = [ (a^2/2 + (a^4/4 − (4λ/3)^3/27)^{1/2})^{1/3} + (a^2/2 − (a^4/4 − (4λ/3)^3/27)^{1/2})^{1/3} ]^{1/2},
  s(λ) = (4/√27) λ^{3/4}.

Proof. Please refer to Appendix A for the proof. □

Theorem 3.4. The minimum point of f(x) in Eq. (3.3) has the following closed-form thresholding formula for x ∈ R:

  x* =  ( (|A| + (2|a|/|A| − |A|^2)^{1/2}) / 2 )^3     if a > p(λ),
  x* =  0                                              if |a| ≤ p(λ),
  x* = −( (|A| + (2|a|/|A| − |A|^2)^{1/2}) / 2 )^3     if a < −p(λ),   (3.4)

where

  |A| = (2/√3) λ^{1/4} (cosh(φ/3))^{1/2},   φ = arccosh( (27 a^2 / 16) λ^{−3/2} ),   p(λ) = (2/3) (3λ^3)^{1/4}.

Proof. Please refer to Appendix B for the proof. □

In Fig. 1, we plot the closed-form thresholding formulas for the optimal solutions of the L_q regularization problem x* = argmin_x {(x − a)^2 + λ|x|^q} for q = 0, 1/2, 2/3, 1 respectively. The x-coordinate and y-coordinate in these sub-figures correspond to a and x* respectively. We can observe that the thresholding curves of the L_{1/2} and L_{2/3} regularization problems lie between the curves of the traditional soft thresholding (L_1 regularization) and hard thresholding (L_0 regularization).

3.2. Image deconvolution algorithm

Given the closed-form thresholding formulas for the L_{1/2} and L_{2/3} regularization problems, the optimal solutions of Eq. (2.8) with q = 1/2, 2/3 in the u-subproblem can be efficiently computed by the thresholding formulas in Eqs. (3.2) and (3.4) with x = u, λ = 2/β, a = (F_i x)_j.

Now both the x-subproblem and the u-subproblem in the quadratic splitting algorithm of Section 2.2 can be efficiently computed in


Table 2
Comparison of running time with kernel 8 (27×27).

Size        DL_{1/2} (Ave./Std.)   OurL_{1/2} (Ave./Std.)   Ratio     DL_{2/3} (Ave./Std.)   OurL_{2/3} (Ave./Std.)   Ratio
256×256     0.7264/0.0389          0.1365/0.0118            5.3216    0.9096/0.0369          0.2350/0.0233            3.8706
512×512     3.5751/0.0033          0.6935/0.0043            5.1552    4.3552/0.0551          1.1877/0.0024            3.6669
1024×1024   14.1563/0.0165         2.5433/0.0124            5.5661    17.0849/0.0377         4.4553/0.0134            3.8347
2048×2048   55.2309/0.0637         9.6818/0.0253            5.7046    67.1039/0.0482         17.0219/0.0325           3.9422
3000×3000   110.3661/0.1105        18.7466/0.0501           5.8873    126.8374/0.1049        34.5268/0.1721           3.6736
3500×3500   150.4214/0.2632        25.8423/0.0951           5.8207    172.8680/0.2739        47.0590/0.0900           3.6734
4000×4000   196.7216/0.3144        33.5025/0.1168           5.8718    225.7242/0.7428        61.1651/0.1594           3.6904
Ave.        75.8854/0.0704         13.0209/0.0451           5.6182*   87.8405/0.1856         23.6644/0.0704           3.7646*

Note: values marked with * are the ratios of the average running times of the different methods.

Table 3
Comparison for different images with kernel 7.

Images          Blurry   L_1     L_{2/3}   L_{1/2}   L_0
a(512×512)      20.68    26.88   26.90*    26.77     25.38
b(512×512)      20.47    30.54   30.63*    30.44     28.96
c(512×512)      20.43    29.56   29.70*    29.57     27.81
d(512×512)      22.87    32.41   32.42*    32.19     30.54
e(512×512)      22.40    30.55*  30.51     30.34     29.56
f(512×512)      21.79    30.37   30.45*    30.31     28.94
g(1608×1624)    13.67    25.39   25.45*    25.27     21.90
h(1554×1383)    26.93    34.58   34.59*    34.50     33.80
i(1362×1263)    23.67    32.64   33.14*    33.09     30.31
j(1024×1341)    15.39    26.12   26.43*    26.35     22.14
k(1308×1197)    18.47    26.03   26.24*    26.21     24.14
l(1000×667)     18.34    27.28   27.66*    27.67     25.08
m(886×886)      22.80    30.35   30.70*    30.64     28.69
n(1246×1119)    17.57    26.36   26.48*    26.35     23.89
o(1413×1413)    20.38    30.82   31.13*    30.98     28.31
p(1284×1380)    18.94    31.62   31.89*    31.63     28.23
q(1600×1200)    14.16    23.54   23.75*    23.67     20.18
r(1693×1084)    17.77    27.91   28.14*    28.02     25.36
s(1024×675)     17.36    27.16   27.63*    27.65     24.50
t(1600×1200)    18.89    25.98   26.07*    25.97     24.45

Note: values marked with * indicate the highest PSNR value.

Table 4
Comparison for different images with kernel 8.

Images          Blurry   L_1     L_{2/3}   L_{1/2}   L_0
a(512×512)      19.05    26.39   26.42*    26.33     25.18
b(512×512)      19.39    29.29   29.47*    29.38     27.78
c(512×512)      19.23    28.69   28.89*    28.83     27.05
d(512×512)      20.33    30.83   30.94*    30.84     29.25
e(512×512)      20.93    29.55   29.59*    29.47     28.60
f(512×512)      20.08    29.44   29.59*    29.49     28.05
g(1608×1624)    12.75    24.43   24.64*    24.58     21.00
h(1554×1383)    25.65    33.81*  33.73     33.54     33.04
i(1362×1263)    21.86    32.35   33.05     33.07*    30.01
j(1024×1341)    14.46    25.97   26.43     26.44*    21.65
k(1308×1197)    17.43    26.17   26.46*    26.46*    23.92
l(1000×667)     17.64    27.58   28.10     28.15*    24.69
m(886×886)      22.20    30.12   30.52*    30.46     28.34
n(1246×1119)    17.05    26.09   26.31*    26.25     23.44
o(1413×1413)    19.54    29.61   30.12*    30.10     27.08
p(1284×1380)    17.59    30.12   30.71*    30.68     26.95
q(1600×1200)    13.58    23.55   23.82*    23.80     19.93
r(1693×1084)    17.23    27.30   27.72     27.73*    24.49
s(1024×675)     16.50    27.45   28.14     28.26*    23.97
t(1600×1200)    18.54    26.15   26.29*    26.22     24.23

Note: values marked with * indicate the highest PSNR value.

closed-form formulation, and the final algorithm for image deconvolution is shown in Algorithm 1.

4. Experiments

In this section, we conduct several groups of experiments to demonstrate that the proposed deconvolution algorithm enables a significantly faster speed than Krishnan et al.'s algorithm [3]. Moreover, by extensive experiments, we show that L_{2/3} regularization is more effective for image deconvolution than L_0, L_1 or L_{1/2} regularization, and that L_{1/2} regularization is competitive with L_1 regularization and better than L_0 regularization.

4.1. Experiment setting

Our test natural images are collected from two sources: (1) the standard test images for image processing, with size 512×512; (2) high-resolution images from the web site http://www.flickr.com/. The images from the second source commonly have larger resolutions, which test the ability of our algorithm to handle large images. We list all the test images in Fig. 2. All the test images are blurred by real-world camera shake kernels from [22], and the blur kernels are shown in Fig. 3 (the images are scaled for better illustration). To better simulate a real captured blurry image, we also add Gaussian noise with standard deviation 0.01 to the blurry image, followed by quantization to 255 discrete values. The PSNR, defined as 10 log_10(255^2 / MSE(x)), is employed to evaluate the deconvolution performance, where x is the deconvolution result and MSE(x) denotes the mean square error between x and the ground-truth high quality image. In our implementation, an edge tapering operation is utilized to reduce possible boundary artifacts. To compare the best potential performance of the different regularization algorithms, we set β_Inc = √2 and set λ to the optimal value in a range of values with the best PSNR performance, as in [3]. Our experiments are executed using Matlab on a desktop computer with a 2.51 GHz AMD CPU (dual core) and 1.87 GB RAM.

Algorithm 1. Fast Image Deconvolution Using Closed-Form Thresholding Formulas of L_q (q = 1/2, 2/3) Regularization

Input: Blurred image y; blur kernel k; regularization weight λ; q = 1/2 or 2/3; β_0, β_Inc, β_M; maximal number of outer iterations T; number of inner iterations J.
Step 1: Initialize iter = 0, x = y and β = β_0; pre-compute the constant terms in Eq. (2.7).
Step 2: repeat
    iter = iter + 1.
    for i = 1 to J do
        x-subproblem: optimize x according to Eq. (2.7).
        u-subproblem: optimize u_1, u_2 according to Eq. (3.2) or (3.4) when q = 1/2 or 2/3.
    end for
    β = β_Inc · β.
until β > β_M or iter > T.
Output: x.


Table 5
Average results for different regularization.

Kernel         Blurry    L_1      L_{2/3}    L_{1/2}   L_0
ker1(13×13)    22.5621   30.5138  30.5250*   30.3146   28.2054
ker2(15×15)    22.2905   29.1895  29.3143*   29.1567   26.9743
ker3(17×17)    21.7455   29.0075  29.2680*   29.1865   26.6380
ker4(19×19)    22.2555   29.3000  29.6125*   29.5735   26.7215
ker5(21×21)    18.8620   30.6030  30.7085*   30.5335   28.0740
ker6(23×23)    19.7125   29.4790  29.5740*   29.4305   27.4645
ker7(23×23)    19.6490   28.8050  28.9955*   28.8810   26.6085
ker8(27×27)    18.5515   28.2445  28.5470*   28.5040   25.9325

Note: values marked with * indicate the highest PSNR value.

Table 6
Comparison for 8 kernels with image o.

Kernel         Blurry   L_1     L_{2/3}   L_{1/2}   L_0
ker1(13×13)    23.97    33.38   33.41*    33.10     30.47
ker2(15×15)    23.39    31.57   31.83*    31.62     28.96
ker3(17×17)    22.26    30.87   31.21*    31.08     28.34
ker4(19×19)    23.34    30.87   31.37*    31.34     28.03
ker5(21×21)    19.57    33.14   33.31*    33.04     30.12
ker6(23×23)    20.51    31.21   31.51*    31.38     28.86
ker7(23×23)    20.38    30.82   31.13*    30.98     28.31
ker8(27×27)    19.54    29.59   30.10*    30.08     27.08

Note: values marked with * indicate the highest PSNR value.

Table 7
Comparison for 8 kernels with image p.

Kernel         Blurry   L_1     L_{2/3}   L_{1/2}   L_0
ker1(13×13)    23.64    34.23*  34.09     33.66     30.76
ker2(15×15)    22.87    32.19   32.36*    32.07     28.94
ker3(17×17)    21.63    30.97   31.44*    31.33     27.84
ker4(19×19)    22.60    31.23   31.81*    31.78     27.73
ker5(21×21)    17.91    33.56   33.71*    33.38     29.91
ker6(23×23)    19.07    32.39   32.56*    32.29     29.38
ker7(23×23)    18.94    31.62   31.89*    31.63     28.23
ker8(27×27)    17.59    30.14   30.72*    30.66     26.96

Note: values marked with * indicate the highest PSNR value.

4.2. Comparison for speed

In this experiment, we evaluate the speed of our algorithm compared to Krishnan's algorithm (without the look-up table technique) [3]. It has been shown in [3] that other methods, such as the re-weighted method, are slower than Krishnan's algorithm. We test the algorithms on images with varying resolutions. Our proposed deconvolution algorithms using L_{1/2} and L_{2/3} regularization are denoted OurL_{1/2} and OurL_{2/3} respectively, whereas the corresponding algorithms proposed in [3] are denoted DL_{1/2} and DL_{2/3} (without the look-up table technique) respectively. Table 1 exhibits, for kernel 4, the average result and the standard deviation of ten repeated experiments with different resolutions, and Table 2 does the same for kernel 8. From Table 1 and Table 2, we find that our L_{1/2} algorithm is roughly 5.5 times faster than Krishnan's L_{1/2} algorithm, and our L_{2/3} algorithm is roughly 3.7 times faster than Krishnan's L_{2/3} algorithm on average, indicating that the closed-form thresholding formulas significantly speed up the deconvolution algorithm in the framework of the quadratic splitting algorithm. Of course, we can further exploit other engineering techniques, such as look-up tables, or high performance computing platforms, such as GPUs (Graphics Processing Units), to further accelerate our algorithm. Moreover, from Table 1 and Table 2, the small standard deviations and the similar acceleration results for the two different kernel sizes indicate that the acceleration is stable.

4.3. Evaluation for different regularization

To compare the performance of the different regularization algorithms, we conduct eight groups of experiments for different kernels. For each kernel, we evaluate the deconvolution performance over the 20 test images shown in Fig. 2. Because of the space restriction, we only list two groups of recovery results, as shown in Table 3 and Table 4. From Table 3 and Table 4, we find that, first, the deconvolution results using L_{2/3} regularization outperform those by L_0, L_1 or L_{1/2} regularization in terms of PSNR values; Second,

[Figure] Fig. 4. The deconvolution results by different regularization algorithms for image o: original image; blurry image (PSNR = 19.54); L1 recovery (PSNR = 29.59); L2/3 recovery (PSNR = 30.10); L1/2 recovery (PSNR = 30.08); L0 recovery (PSNR = 27.08).

Please cite this article in press as: W. Cao et al., Fast image deconvolution using closed-form thresholding formulas of Lq (q = 1/2, 2/3) regularization, J. Vis. Commun. (2012), http://dx.doi.org/10.1016/j.jvcir.2012.10.006
8 W. Cao et al. / J. Vis. Commun. Image R. xxx (2012) xxx–xxx

[Figure] Fig. 5. The deconvolution results by different regularization algorithms for image p: original image; blurry image (PSNR = 19.59); L1 recovery (PSNR = 30.14); L2/3 recovery (PSNR = 30.72); L1/2 recovery (PSNR = 30.69); L0 recovery (PSNR = 26.96).
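Results such as those in Figs. 4 and 5 are produced by alternating a thresholding step with a quadratic step solved in the Fourier domain. A minimal 1-D half-quadratic-splitting sketch, with circular convolution, a fixed penalty weight beta, and soft thresholding standing in for the Lq operator; all names and parameter choices here are illustrative, not the authors' implementation:

```python
import numpy as np

def hqs_deconv(y, h, lam, beta=1.0, iters=200):
    """Approximately minimize ||h * x - y||^2 + lam*||x||_1 by half-quadratic splitting (1-D)."""
    H = np.fft.fft(h, n=y.size)  # kernel transfer function (zero-padded)
    Y = np.fft.fft(y)
    x = y.copy()
    for _ in range(iters):
        # w-subproblem: elementwise shrinkage (soft threshold for the L1 case)
        w = np.sign(x) * np.maximum(np.abs(x) - lam / beta, 0.0)
        # x-subproblem: quadratic, solved exactly in the Fourier domain
        X = (np.conj(H) * Y + beta * np.fft.fft(w)) / (np.abs(H) ** 2 + beta)
        x = np.real(np.fft.ifft(X))
    return x

# toy example: identity kernel, single spike
y = np.zeros(8)
y[3] = 1.0
x_rec = hqs_deconv(y, np.array([1.0]), lam=0.01)
```

In practice beta is increased over the iterations (continuation), as in [3]; swapping the soft threshold for a closed-form L1/2 or L2/3 operator keeps the per-iteration cost unchanged, which is where the speed-up of Section 4.2 comes from.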

the L1/2 regularization is competitive with L1 regularization and better than L0 regularization for image deconvolution. We also present the average PSNR results over all the test images for each kernel in Table 5. From Table 5, we can draw the same conclusions even though the blur kernels differ in both shape and resolution.

To further test the performance of our algorithm, we evaluate the deconvolution results for each image over the eight different kernels. Similarly, we only list two groups of experiments, over the two test images o and p; the results are shown in Table 6 and Table 7. From Table 6 and Table 7, we can derive the same conclusions. We show two of the deconvolution results in Fig. 4 and Fig. 5; the deblurred images using the Lq (q = 1/2, 2/3) regularization algorithms clearly have higher visual quality, with fewer noise/ringing artifacts, compared with the L0 or L1 regularization algorithms.

In summary, extensive experiments demonstrate that our deconvolution algorithm with Lq (q = 1/2, 2/3) regularization achieves significantly faster speed than Krishnan's algorithm [3], while L2/3 regularization outperforms L0, L1 or L1/2 regularization, and L1/2 regularization is competitive with L1 regularization and better than L0 regularization.

5. Conclusion and future work

In this paper, we derived the closed-form thresholding formula for the L2/3 regularization problem. Based on this thresholding formula, together with our previously derived thresholding formula for the L1/2 regularization problem, we proposed a fast deconvolution algorithm using the half quadratic splitting strategy. Extensive experiments demonstrate that our algorithm significantly speeds up the previous deconvolution algorithm in the same framework. We also justified that L2/3 regularization is more powerful than L1, L1/2 or L0 regularization, and that L1/2 regularization is competitive with L1 regularization and better than L0 regularization for image deconvolution.

Table 8
The triangle and hyperbolic expressions for the roots of the cubic equation x^3 + 3px + 2q = 0, p ≠ 0. Let r = sgn(q)·√|p|.

  p < 0, q^2 + p^3 < 0:   cos(φ) = q/r^3
      x1 = -2r cos(φ/3),   x2 = 2r cos(π/3 - φ/3),   x3 = 2r cos(π/3 + φ/3)
  p < 0, q^2 + p^3 > 0:   cosh(φ) = q/r^3
      x1 = -2r cosh(φ/3),  x2 = r cosh(φ/3) + i√3·r sinh(φ/3),  x3 = r cosh(φ/3) - i√3·r sinh(φ/3)
  p > 0:                  sinh(φ) = q/r^3
      x1 = -2r sinh(φ/3),  x2 = r sinh(φ/3) + i√3·r cosh(φ/3),  x3 = r sinh(φ/3) - i√3·r cosh(φ/3)

The derived thresholding formula in this work provides an effective way to optimize the non-convex regularization problem using a closed-form formulation. The L2/3 regularization problem has wide applications beyond image deconvolution, e.g., compressive sensing, super-resolution, denoising, etc. On the other hand, the derived thresholding formula can be extended to solve the more complex regularization problem

x* = argmin_x { ||Ax - y||^2 + k·||x||_q^q }   (5.1)

where A is a matrix, commonly composed of a set of basis vectors in its columns, and x and y are vectors of variables. This optimization problem differs from the problem in Eq. (1.2) in that the matrix A makes the variables in x dependent on each other. It has wide applications in image/signal processing, such as dictionary learning [23] and image restoration [12]. It can be optimized quickly by the iterative thresholding algorithm x^{k+1} = T_{kt}(x^k - 2t·A^T(Ax^k - y)), where t is an appropriate stepsize and T is the hard, soft and half thresholding operator for q = 0, 1, 1/2 respectively [19,21,1,2]. Obviously, the proposed thresholding formula for L2/3 regularization can be incorporated into the iterative thresholding algorithm as the thresholding operator to solve Eq. (5.1) when q = 2/3.
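To make the last point concrete, here is an illustrative scalar implementation of the closed-form L2/3 thresholding operator (Theorem 3.4, in the hyperbolic form derived in Appendix C); it would be applied elementwise as the operator T above. The function name is ours, and this is a sketch of the formulas as we read them, not the authors' code:

```python
import numpy as np

def l23_threshold(a, k):
    """Minimizer of f(x) = (x - a)^2 + k*|x|^(2/3) over x in R (closed form)."""
    t_star = (2.0 / 3.0) * (3.0 * k ** 3) ** 0.25  # threshold t*(k) = (2/3)*(3k^3)^(1/4)
    if abs(a) <= t_star:
        return 0.0
    # |A| via the hyperbolic root expression: phi = arccosh(27*a^2 / (16*k^(3/2)))
    phi = np.arccosh((27.0 * a ** 2 / 16.0) * k ** -1.5)
    A = (2.0 / np.sqrt(3.0)) * k ** 0.25 * np.sqrt(np.cosh(phi / 3.0))
    # largest root y2 of h1(y) = y^4 - |a|*y + k/3, then x = sign(a)*y^3
    y = (A + np.sqrt(2.0 * abs(a) / A - A ** 2)) / 2.0
    return float(np.sign(a)) * y ** 3

x_hat = l23_threshold(2.0, 1.0)  # nonzero branch: |a| = 2 > t*(1) ~ 0.877
```

Note that for |a| just above t*(k) the operator jumps to a nonzero value; this discontinuity is characteristic of non-convex (q < 1) thresholding, unlike the continuous soft threshold.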


In future work, we are interested in analyzing the convergence of our deconvolution algorithm, and we plan to extend the applications of the thresholding formulas of L1/2 and L2/3 regularization to other related problems in image processing and machine learning.

Acknowledgements

We would like to thank Dr. J. X. Jia and Q. Zhao for many helpful suggestions. This work was supported by the National 973 Programming (2013CB329404), the Key Program of National Natural Science Foundation of China (Grant No. 11131006), and the National Natural Science Foundations of China (Grant No. 61075054).

Appendix A. The proof of Lemma 3.3

Proof. We will find the minimum point of f(x) on x ≠ 0 by seeking and analyzing the roots of the equation f'(x) = 0 on x ≠ 0. Since f'(x) = 2(x - a) + (2/3)·k·sign(x)/|x|^{1/3}, we obtain:

x|x|^{1/3} - a|x|^{1/3} + (k/3)·sign(x) = 0   (x ≠ 0)   (A.1)

Substituting |x| = y^3, (A.1) reduces to the following two cases:

case 1: for x > 0, y^4 - ay + k/3 = 0;
case 2: for x < 0, y^4 + ay + k/3 = 0.

Let g(x) = x|x|^{1/3} - a|x|^{1/3} + (k/3)·sign(x), h1(y) = y^4 - ay + k/3 and h2(y) = y^4 + ay + k/3. In the following, for case 1 we seek the positive minimum point of f(x) by exploring the zero-point distribution of h1(y); for case 2 we seek the negative minimum point of f(x) by the symmetry relationship between h2(y) and h1(y), i.e., h2(y) = h1(-y).

I. We first analyze case 1. By Lemma 3.1, in order to seek the minimum point of f(x) on x > 0, it suffices to consider the case a ≥ 0. Since for x > 0, g(x) = g(y^3) = h1(y), we just need to seek the positive root ŷ of h1(y) satisfying:

h1(ŷ) = 0;  h1(y) < 0 when y < ŷ;  h1(y) > 0 when y > ŷ   (A.2)

where y is near ŷ. We next investigate the root distribution of h1(y) by analyzing its derivative. Since h1'(y) = 4y^3 - a, it is easy to verify that h1(y) monotonically decreases for y < (a/4)^{1/3} and monotonically increases for y > (a/4)^{1/3}, which means that y = (a/4)^{1/3} is the unique minimum point of h1(y). Since h1''(y) = 12y^2 ≥ 0, h1(y) is a convex function. The root distribution of h1(y) has three cases:

case (a): h1(y) = 0 has no root. This means that h1((a/4)^{1/3}) > 0; then ((a/4)^{1/3})^4 - a·(a/4)^{1/3} + k/3 > 0, i.e., a < (4/√27)·k^{3/4}.

case (b): h1(y) = 0 has one unique real root. This means that h1((a/4)^{1/3}) = 0, therefore a = (4/√27)·k^{3/4}. In this case, however, ŷ = (a/4)^{1/3} does not satisfy Eq. (A.2), so ŷ = (a/4)^{1/3} is a saddle point rather than a minimum point.

case (c): h1(y) = 0 has two real roots. This means that h1((a/4)^{1/3}) < 0, therefore a > (4/√27)·k^{3/4}. In this case, h1(y) has two different roots y1, y2 (y1 < y2), and only y2 satisfies Eq. (A.2), corresponding to the minimum point of f(x). In the following, we seek y2 by the method of undetermined coefficients. Assume that

h1(y) = y^4 - ay + k/3 = (y^2 + Ay + B)(y^2 + Cy + D), where A, B, C, D ∈ R.   (A.3)

By expansion and comparison, we get

A + C = 0   (A.4)
B + D + AC = 0   (A.5)
AD + BC = -a   (A.6)
BD = k/3   (A.7)

From (A.4), we get C = -A. By substituting C = -A into (A.5), (A.6), we get

B + D - A^2 = 0   (A.8)
AD - AB = -a   (A.9)

(i) when A = 0: from (A.4) and (A.6), we get C = A = a = 0. But a = 0 obviously contradicts a > (4/√27)·k^{3/4}, so (i) never occurs.

(ii) when A ≠ 0:

B + D = A^2   (A.10)
B - D = a/A   (A.11)

so we can further obtain

B = (A^2 + a/A)/2   (A.12)
D = (A^2 - a/A)/2   (A.13)

By substituting (A.12), (A.13) into (A.7), we get ((A^2 + a/A)(A^2 - a/A))/4 = k/3, and, by reduction and rearrangement, A^6 - (4/3)·k·A^2 - a^2 = 0. Letting M = A^2, we get

M^3 - (4/3)·k·M - a^2 = 0   (A.14)

From the root discriminant formula for the cubic equation, we get

Δ = (q/2)^2 + (p/3)^3 = (-a^2/2)^2 + (-(4/9)·k)^3 = a^4/4 - (4^3/27^2)·k^3

where q = -a^2, p = -(4/3)·k. Since a > (4/√27)·k^{3/4}, Δ > 0. Hence Eq. (A.14) has only one real root. According to the Cardano formula for the cubic equation, the root of Eq. (A.14) is

M = A^2 = (a^2/2 + (a^4/4 - (4^3/27^2)·k^3)^{1/2})^{1/3} + (a^2/2 - (a^4/4 - (4^3/27^2)·k^3)^{1/2})^{1/3}.

Now, by (A.3), (A.4), (A.12), (A.13) and h1(y) = 0, we get

y^2 + Ay + (A^2 + a/A)/2 = 0   (A.15)
y^2 - Ay + (A^2 - a/A)/2 = 0   (A.16)

From the root discriminant formula for the quadratic equation, we get

d1 = -(A^2 + 2a/A)   (A.17)
d2 = -(A^2 - 2a/A)   (A.18)

We now seek the real roots of Eqs. (A.15), (A.16) by Lemma 3.2.

When A > 0: due to a > (4/√27)·k^{3/4}, we obtain d1 < 0. By Lemma 3.2, d1·d2 < 0, hence d2 > 0. Therefore (A.15) has no real roots, and (A.16) has two different real roots: y1 = (A - (2a/A - A^2)^{1/2})/2 and y2 = (A + (2a/A - A^2)^{1/2})/2. Because we need the root of h1(y) that satisfies Eq. (A.2), y2 is the root we seek and y1 is discarded.

When A < 0: due to a > (4/√27)·k^{3/4}, we obtain d2 < 0. Since d1·d2 < 0, hence d1 > 0. In this case, (A.16) has no real roots, and (A.15) has two different real roots: y3 = (-A - (-2a/A - A^2)^{1/2})/2 and y4 = (-A + (-2a/A - A^2)^{1/2})/2. What we need is the maximal root, so y4 is kept and y3 is discarded.

In both cases, the obtained roots y2 and y4 can be unified as

ŷ = (|A| + (2|a|/|A| - |A|^2)^{1/2})/2   (a > (4/√27)·k^{3/4}).

Hence the minimum point of f(x) on x > 0 is x̂ = ŷ^3 (a > (4/√27)·k^{3/4}).

II. For case 2, we give a simple argument by symmetry. Our goal is to seek the minimum point of f(x) on x < 0. Since g(x) = g(-y^3) = -h2(y) for x < 0, we need to seek the root ŷ of h2(y) satisfying:

h2(ŷ) = 0;  h2(y) > 0 when y < ŷ;  h2(y) < 0 when y > ŷ   (A.19)

where y is near ŷ. We could seek the required root of h2(y) in the same way as in case 1; however, the deduction for case 2 can be simplified by the symmetry between h2(y) and h1(y). Since h2(y) = h1(-y), h2(y) and h1(y) are symmetric with respect to the vertical axis. Thus the minimal root of h2(y) corresponds to the minus of the maximal root of h1(y): ŷ = -(|A| + (-2a/|A| - |A|^2)^{1/2})/2. Hence the minimal root of h2(y), i.e., the minimum point of f(x) on x < 0, is x̂ = ŷ^3 (a < -(4/√27)·k^{3/4}).

In summary, the minimal point of f(x) for x ≠ 0 is:

x̂ = ((|A| + (2|a|/|A| - |A|^2)^{1/2})/2)^3   if a > s(k);
x̂ = -((|A| + (2|a|/|A| - |A|^2)^{1/2})/2)^3  if a < -s(k),

where

|A| = ((a^2/2 + (a^4/4 - (4^3/27^2)·k^3)^{1/2})^{1/3} + (a^2/2 - (a^4/4 - (4^3/27^2)·k^3)^{1/2})^{1/3})^{1/2},
s(k) = (4/√27)·k^{3/4}.  □

Appendix B. The proof of Theorem 3.4

Proof. Note that we have already derived the minimum point x̂ of f(x) for x ≠ 0 in Lemma 3.3. We now derive the minimum point x* of f(x) for x ∈ R. From the previous analysis, we can easily get

x* = x̂ if f(x̂) < f(0) = a^2;  x* = 0 if f(x̂) ≥ f(0) = a^2.

However, our aim is not to seek x* by comparison between f(x̂) and f(0), but to express x* by a closed-form thresholding formula, i.e.,

x* = x̂ if |a| > t*(k);  x* = 0 if |a| ≤ t*(k).

Our next task is to explicitly compute the expression of t*(k). When f(x̂) ≤ a^2, we get (x̂ - a)^2 + k|x̂|^{2/3} ≤ a^2. By reducing this inequality, we further get

2a·x̂ ≥ x̂^2 + k|x̂|^{2/3}   (B.1)

When a > 0, by Lemma 3.1, x̂ ∈ (0, a) and a ≥ (x̂^2 + k|x̂|^{2/3})/(2x̂); when a < 0, by Lemma 3.1, x̂ ∈ (a, 0) and (B.1) can be reduced to -a ≥ (x̂^2 + k|x̂|^{2/3})/(-2x̂). Hence we unify these two cases as follows:

|a| ≥ (x̂^2 + k|x̂|^{2/3})/(2|x̂|),  i.e.,  |a| ≥ (m_k(a)^2 + k|m_k(a)|^{2/3})/(2|m_k(a)|)   (|a| > (4/√27)·k^{3/4})   (B.2)

where x̂ = m_k(a) := ((|A| + (2|a|/|A| - |A|^2)^{1/2})/2)^3·sign(a). Let

u(a) = |a| - (m_k(a)^2 + k|m_k(a)|^{2/3})/(2|m_k(a)|).

Obviously, the threshold t*(k) can be computed from the roots of u(a). Since m_k(a) is the minimum point of f(x) = (x - a)^2 + k|x|^{2/3} (x ≠ 0), m_k satisfies

|m_k(a)| - |a| + (k/3)/|m_k(a)|^{1/3} = 0.

According to this equation, we can obtain

|m_k(a)|^2 = |a|·|m_k(a)| - (k/3)·|m_k(a)|^{2/3}   (B.3)

Substituting (B.3) into u(a), u(a) can be further reduced to

u(a) = |a|/2 - k/(3·|m_k(a)|^{1/3})   (B.4)

From (B.4), we learn that u(-a) = u(a), so u(a) is symmetric with respect to the vertical axis. On a ∈ [(4/√27)·k^{3/4}, ∞), u(a) is monotonically increasing. Moreover, lim_{a→(4/√27)·k^{3/4}} u(a) = (2/√27)·k^{3/4} - k/(3·|m_k((4/√27)·k^{3/4})|^{1/3}) < 0, and lim_{a→+∞} u(a) = +∞. Therefore u(a) has a unique root t*(k) on [(4/√27)·k^{3/4}, ∞); by the symmetry of u(a), it has another unique root -t*(k) on (-∞, -(4/√27)·k^{3/4}]. Thus we have

f(x̂) < a^2 = f(0)  ⟺  |a| > t*(k).

Furthermore, from inequality (B.2), we obtain

|a| ≥ |m_k(a)|/2 + k/(2·|m_k(a)|^{1/3})
    = |m_k(a)|/2 + k/(6·|m_k(a)|^{1/3}) + k/(6·|m_k(a)|^{1/3}) + k/(6·|m_k(a)|^{1/3})
    ≥ 4·((|m_k(a)|/2)·(k/6)^3/|m_k(a)|)^{1/4} = (2/3)·(3k^3)^{1/4}

From the above inequality, equality holds when |m_k(a)|/2 = k/(6·|m_k(a)|^{1/3}), i.e., |m_k(a)| = (k/3)^{3/4}, in which case |a| = (2/3)·(3k^3)^{1/4}. By substituting |m_k(a)| = (k/3)^{3/4} and |a| = (2/3)·(3k^3)^{1/4} into Eq. (B.4), we get u((2/3)·(3k^3)^{1/4}) = 0. Hence t*(k) = (2/3)·(3k^3)^{1/4}.

In summary, the minimum point of f(x) = (x - a)^2 + k|x|^{2/3}, x ∈ R, satisfies

x* = ((|A| + (2|a|/|A| - |A|^2)^{1/2})/2)^3   if a > (2/3)·(3k^3)^{1/4};
x* = 0                                        if |a| ≤ (2/3)·(3k^3)^{1/4};
x* = -((|A| + (2|a|/|A| - |A|^2)^{1/2})/2)^3  if a < -(2/3)·(3k^3)^{1/4}.

By utilizing the triangle and hyperbolic expressions for the roots of the cubic equation, |A| can be reduced to

|A| = (2/√3)·k^{1/4}·cosh^{1/2}(φ/3), where φ = arccosh((27a^2/16)·k^{-3/2}).

Thus, the proof of Theorem 3.4 is complete.  □

Appendix C. The triangle and hyperbolic expression for the roots of cubic equation

The triangle and hyperbolic expressions for the roots of a cubic equation are presented in Table 8 [24]. According to the second column of Table 8, we can derive the hyperbolic expression for the unique real root of our cubic Eq. (A.14), i.e.,

|A| = (2/√3)·k^{1/4}·cosh^{1/2}(φ/3), where φ = arccosh((27a^2/16)·k^{-3/2}).

References

[1] Z.B. Xu, Data modeling: visual psychology approach and L1/2 regularization theory, in: Proceedings of the International Congress of Mathematicians, 2010, pp. 1-34.
[2] Z.B. Xu, X.Y. Chang, F.M. Xu, H. Zhang, L1/2 regularization: an iterative half thresholding algorithm, IEEE Trans. Neural Networks Learning Syst. 23 (7) (2012) 1013-1027.
[3] D. Krishnan, R. Fergus, Fast image deconvolution using hyper-Laplacian priors, Adv. Neural Inf. Process. Syst. (NIPS) (2009).
[4] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, W.T. Freeman, Removing camera shake from a single photograph, ACM Trans. Graphics 25 (3) (2006) 787-794.
[5] S. Cho, S. Lee, Fast motion deblurring, ACM Trans. Graphics 28 (5) (2009) 1-8.
[6] L. Yuan, J. Sun, L. Quan, H.-Y. Shum, Progressive inter-scale and intra-scale non-blind image deconvolution, ACM Trans. Graphics 27 (3) (2008) 1-8.
[7] T. Chan, S. Esedoglu, F. Park, A. Yip, Math. Models Comput. Vision 17 (2005).
[8] A. Buades, B. Coll, J.M. Morel, A non-local algorithm for image denoising, in: CVPR, vol. 2, IEEE, 2005, pp. 60-65.
[9] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process. 16 (8) (2007) 2080-2095.
[10] J. Mairal, F. Bach, J. Ponce, G. Sapiro, A. Zisserman, Non-local sparse models for image restoration, in: 2009 IEEE 12th International Conference on Computer Vision, 2009, pp. 2272-2279.
[11] M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process. 15 (12) (2006) 3736-3745.
[12] J. Mairal, M. Elad, G. Sapiro, Sparse representation for color image restoration, IEEE Trans. Image Process. 17 (1) (2008) 53-69.
[13] W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Trans. Image Process. 20 (7) (2011) 1838-1857.
[14] M.F. Tappen, B.C. Russell, W.T. Freeman, Exploiting the sparse derivative prior for super-resolution and image demosaicing, in: Third International Workshop on Statistical and Computational Theories of Vision at ICCV, 2003.
[15] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett. 14 (10) (2007) 707-710.
[16] R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Prob. 24 (3) (2008) 1-14.
[17] R. Chartrand, W. Yin, Iteratively reweighted algorithms for compressive sensing, in: IEEE International Conference on Acoustics, Speech and Signal Processing, 2008, pp. 3869-3872.
[18] R. Chartrand, Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data, in: IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 262-265.
[19] T. Blumensath, M. Yaghoobi, M.E. Davies, Iterative hard thresholding and L0 regularisation, IEEE Trans. Acoust. Speech Signal Process., vol. 3 (2007).
[20] D.L. Donoho, Denoising by soft thresholding, IEEE Trans. Inf. Theory 41 (3) (1995) 613-627.
[21] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. 57 (11) (2004) 1413-1457.
[22] A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms, in: CVPR, 2009, pp. 1964-1971.
[23] J. Yang, J. Wright, T.S. Huang, Y. Ma, Image super-resolution via sparse representation, IEEE Trans. Image Process. 19 (11) (2010) 2861-2873.
[24] F.C. Xing, Investigation on solutions of cubic equations with one unknown, J. Central Univ. Nationalities (Natural Sciences Edition) 12 (3) (2003) 207-218.
[25] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, 2010.
