J. Vis. Commun. Image R.: Wenfei Cao, Jian Sun, Zongben Xu
Fast image deconvolution using closed-form thresholding formulas of Lq (q = 1/2, 2/3) regularization

Article history: Received 3 April 2012; Accepted 29 October 2012; Available online xxxx.

Keywords: Sparsity; L1/2 regularization; L2/3 regularization; Variable splitting.

Abstract: In this paper, we focus on fast deconvolution algorithms based on non-convex Lq (q = 1/2, 2/3) sparse regularization. Recently, we deduced the closed-form thresholding formula for the L1/2 regularization model (Xu (2010) [1]). In this work, we further deduce the closed-form thresholding formula for the L2/3 non-convex regularization problem. Based on the closed-form formulas for Lq (q = 1/2, 2/3) regularization, we propose a fast algorithm to solve the image deconvolution problem using the half-quadratic splitting method. Extensive experiments on image deconvolution demonstrate that our algorithm achieves a significant acceleration over Krishnan et al.'s algorithm (Krishnan et al. (2009) [3]). Moreover, the simulated experiments further indicate that L2/3 regularization is more effective than L0, L1/2 or L1 regularization in image deconvolution, and L1/2 regularization is competitive to L1 regularization and better than L0 regularization.
⇑ Corresponding author. E-mail addresses: [email protected] (W. Cao), [email protected] (J. Sun), [email protected] (Z. Xu).
1047-3203/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.jvcir.2012.10.006
Please cite this article in press as: W. Cao et al., Fast image deconvolution using closed-form thresholding formulas of Lq (q = 1/2, 2/3) regularization, J. Vis. Commun. (2012), http://dx.doi.org/10.1016/j.jvcir.2012.10.006

1. Introduction

Image blur is a common artifact in digital photography caused by camera shake or object movement. Recovering the un-blurred sharp image from the blurry image, which is generally called image deconvolution, has been a fundamental research problem in image processing and computational photography. Image deconvolution algorithms [4-6] can be categorized into blind deconvolution and non-blind deblurring, in which the blur kernel is unknown and known, respectively. Tremendous efforts have been devoted to estimating the blur kernel. In this work, we focus on the non-blind deblurring problem, i.e., recovering the sharp image from a blurry image given the blur kernel.

Mathematically, a blurry image can be modeled as the convolution of an ideal sharp image with a blur kernel, followed by the addition of zero-mean Gaussian white noise. The degradation process can be modeled as

Y = X ⊗ k + n    (1.1)

where X is the sharp image, k is a blur kernel, ⊗ denotes convolution and n is the noise. Image deconvolution aims to recover a high-quality image X given a blurry image Y.

The ill-posed nature of this problem implies that additional assumptions on X should be introduced. Recently, many kinds of image priors have been discovered and utilized to regularize this ill-posed inverse problem, such as the total variation [7], nonlocal self-similarity [8-10], the sparse prior [11-13] and so on. Especially, the sparsity induced by non-convex regularization, or equivalently by the hyper-Laplacian distribution from a probabilistic point of view, attracts a lot of attention in the communities of computer vision [14], machine learning and compressive sensing [15-17]. These prior models give rise to surprising results. For example, Chartrand [17,18] applies non-convex regularization to the Magnetic Resonance Imaging (MRI) reconstruction task, bringing about the promising result that only a few samples in K-data space can effectively reconstruct the MRI image.

In this paper, we work on a fast image deconvolution algorithm with non-convex regularization to suppress ringing artifacts and noise. The idea is motivated by Krishnan's work [3], in which a hyper-Laplacian prior of natural images is imposed on the non-blind image deconvolution algorithm, which is equivalent to solving an inverse linear optimization problem with Lq-norm (0 < q < 1) non-convex regularization. Using the quadratic splitting framework, one sub-problem is to optimize the non-convex regularization problem:

x* = argmin_x { (x - a)^2 + λ|x|^q }    (1.2)

This sub-problem actually is a very special case of the problem proposed by Elad [25] in the context of sparse representation and by Chartrand [15-17] in the setting of compressive sensing. According to their work, from the geometric point of view, the solution is just the intersection point between a hyperplane and an Lq (0 < q < 1) ball, and when q goes closer to zero, the solution of this problem
becomes more sparse. However, from the algebraic point of view, how to solve this optimization problem fast is a challenge due to the non-convexity and non-smoothness of the problem. In this work, our efforts focus on two special values, 1/2 and 2/3, over the interval (0, 1). Traditionally, the closed-form hard thresholding [19] and soft thresholding [20,21] formulas have been proposed to solve this regularization problem when q = 0 and q = 1. In [3], for q = 1/2 or 2/3, Krishnan et al. proposed to solve the above problem by presenting some clever discriminant conditions to compare and select the optimal solution from the multiple roots of the first-order derivative equation of the cost function. Although this method constitutes a major breakthrough, multiple roots must be computed and compared to produce the final solution. A natural question is whether we could derive closed-form thresholding formulas for non-convex regularization with q = 1/2 or 2/3 in 0 < q < 1, in parallel to the well-known hard/soft thresholding formulas for q = 0 or 1.

In this work, we present the closed-form thresholding formulas for the non-convex regularization problem in Eq. (1.2) with q = 2/3 or 1/2, and apply them to solve the image deconvolution problem. It has been found that the gradients of natural images are distributed as a heavy-tailed hyper-Laplacian distribution p(x) ∝ e^{-λ|x|^q} with 0.5 ≤ q ≤ 0.8. In the Bayesian framework, this prior imposes Lq-norm non-convex regularization on the inverse problem with the formulation in Eq. (1.2). Therefore, a fast algorithm for the L1/2 or L2/3 regularization problem, both in the range 0.5 ≤ q ≤ 0.8, can be expected to be promising in the image deconvolution task. The contributions of this work can be summarized as:

- We deduce the closed-form thresholding formula for the linear inverse model with L2/3 regularization by deeply analyzing the distribution of roots of the first-order derivative equation of the cost function. Together with our previous work on the closed-form thresholding formula for L1/2 regularization in [1,2], these thresholding formulas enable a fast and efficient image deconvolution algorithm in the framework of the half-quadratic splitting strategy.
- We conduct extensive experiments over a set of natural images blurred by eight real blur kernels. The results demonstrate that our algorithm enables a significantly faster speed over Krishnan et al.'s method. Moreover, L2/3 regularization is more effective than L0, L1/2 or L1 regularization for image deconvolution, and L1/2 regularization is competitive to L1 regularization and better than L0 regularization.

We believe that the closed-form thresholding formulas for L2/3 or L1/2 non-convex regularization are important to the machine learning and computer vision communities beyond the application of image deconvolution. That is because this linear inverse problem with non-convex regularization is a general model with wide applications in compressive sensing [16,17], image demosaicing [14], image super-resolution [14], etc. Moreover, theoretically, the closed-form formulas make the theoretical analysis of the non-convex regularization problem possible or easier, which deserves to be investigated in our future work.

The remainder of this paper is organized as follows. Section 2 describes the image deconvolution model based on non-convex regularization and its optimization using the half-quadratic splitting scheme. In Section 3, we deduce the thresholding formula for the Lq (q = 2/3) regularization problem, introduce our previously proposed thresholding formula for the Lq (q = 1/2) regularization problem, and then present our deconvolution algorithm with Lq (q = 1/2, 2/3) regularization. In Section 4, we report the experimental results in both speed and quality. Finally, the paper is concluded in Section 5.

2. Image deconvolution based on non-convex regularization

2.1. Formulation

Assume that X is the original uncorrupted grayscale image with N pixels, and Y is an image degraded by blur kernel k and noise n:

Y = X ⊗ k + n    (2.1)

Non-blind deconvolution aims to restore the real image X given the known or estimated blur kernel k. Due to the ill-posedness of this task, prior information about natural images should be utilized to regularize the inverse problem [13,3]. In this work, we utilize the sparse hyper-Laplacian distribution prior of natural images in the gradient domain [3], i.e.,

p(X) ∝ e^{-s Σ_{j=1}^{2} ||X ⊗ f_j||_q^q}    (2.2)

where ||z||_q = (Σ_i |(z)_i|^q)^{1/q}, ⊗ denotes convolution, f1 = [1, -1] and f2 = [1, -1]^T are two first-order derivative filters, and 0 < q < 1. From the probabilistic perspective, we seek the MAP (maximum a posteriori) estimate of X in the Bayesian framework: p(X|Y, k) ∝ p(Y|X, k)p(X), where the first term is the Gaussian likelihood and the second term is the hyper-Laplacian image prior. Maximizing p(X|Y, k) is equivalent to minimizing

X* = argmin_X { (λ/2)||X ⊗ k - Y||_F^2 + Σ_{j=1}^{2} ||X ⊗ f_j||_q^q }    (2.3)

where ||A||_F = sqrt(Σ_{i=1}^{M} Σ_{j=1}^{N} a_ij^2) indicates the Frobenius norm. If we assume that x and y are vectors stretched from X and Y column by column, and K, F1 and F2 are the matrix forms of the filters k, f1 and f2 for image convolution, then problem (2.3) can be equivalently represented as

x* = argmin_x { (λ/2)||Kx - y||_2^2 + ||F1 x||_q^q + ||F2 x||_q^q }    (2.4)

where λ makes a trade-off between the fidelity term and the regularization term. When 0 ≤ q < 1, ||Fx||_q^q = Σ_i |(Fx)_i|^q imposes non-convex regularization on the image gradients.

2.2. Half-quadratic splitting algorithm

Using the half-quadratic splitting method, Krishnan et al. [3] introduced two auxiliary variables u1 and u2, so that problem (2.4) can be converted to the following optimization problem:

x* = argmin_x { (λ/2)||Kx - y||_2^2 + (β/2)||F1 x - u1||_2^2 + (β/2)||F2 x - u2||_2^2 + ||u1||_q^q + ||u2||_q^q }    (2.5)

where β is a control parameter. As β → ∞, the solution of problem (2.5) converges to that of Eq. (2.4). Minimizing Eq. (2.5) for a fixed β can be performed by alternating two steps: one sub-problem is to solve for x given u1 and u2, which is called the x-subproblem; the other is to solve for u1, u2 given x, which is called the u-subproblem.

2.2.1. x-Subproblem

Given u1 and u2, the x-subproblem aims to obtain the optimal x by optimizing the energy function in Eq. (2.5), which reduces to:

x* = argmin_x { λ||Kx - y||_2^2 + β||F1 x - u1||_2^2 + β||F2 x - u2||_2^2 }

The subproblem can be optimized by setting the first derivative of the cost function to zero:
(F1^T F1 + F2^T F2 + (λ/β) K^T K) x = F1^T u1 + F2^T u2 + (λ/β) K^T y    (2.6)

Since K, F1 and F2 represent convolutions, this linear system can be solved exactly in the Fourier domain (assuming circular boundary conditions):

x = F^{-1}( (F(f1)* ∘ F(u1) + F(f2)* ∘ F(u2) + (λ/β) F(k)* ∘ F(y)) / (|F(f1)|^2 + |F(f2)|^2 + (λ/β) |F(k)|^2) )    (2.7)

where F denotes the 2D FFT, * the complex conjugate, ∘ componentwise multiplication, and the division is componentwise. The denominator and the term F(k)* ∘ F(y) are constant over the iterations and can be pre-computed.

In this section, we first present the thresholding formulas for the L1/2 and L2/3 regularization problems. Then, by combining these closed-form formulas with the half-quadratic splitting scheme, we present our deconvolution algorithm.

For the L1/2 regularization problem, i.e., minimizing f(x) = (x - a)^2 + λ|x|^{1/2}, the closed-form thresholding formula [1,2] is

x* = (2/3) a (1 + cos(2π/3 - (2/3) φ_λ(a)))    if |a| > p(λ)
x* = 0                                          if |a| ≤ p(λ)    (3.2)

where φ_λ(a) = arccos((λ/8) (|a|/3)^{-3/2}) and p(λ) = (54^{1/3}/4) λ^{2/3}.

Fig. 1. The plots of the L1/2 and the L0 threshold formulas.
This also leads to a contradiction with the fact that x* is a minimum point. Hence, x* ∈ [0, a] for case (1). □

|A| = ( (a^2/2 + sqrt(a^4/4 - (1/27)(4λ/3)^3))^{1/3} + (a^2/2 - sqrt(a^4/4 - (1/27)(4λ/3)^3))^{1/3} )^{1/2}
Table 1
Comparison of running time with kernel 4 (19 × 19).

Size | DL1/2 (Ave./Std.) | OurL1/2 (Ave./Std.) | Ratio | DL2/3 (Ave./Std.) | OurL2/3 (Ave./Std.) | Ratio

Note: the bold value indicates the ratio of the average running times of the different methods.
Proof. The proof of d1 d2 < 0 is equivalent to the proof of A^6 < 4a^2. We now prove that A^6 < 4a^2. Let s(t) = t^{1/3} (t ≥ 0). It is easy to verify that s(t) is concave, so we obtain the inequality s((t1 + t2)/2) > (s(t1) + s(t2))/2. By setting t1 = a^2/2 + sqrt(a^4/4 - (1/27)(4λ/3)^3) and t2 = a^2/2 - sqrt(a^4/4 - (1/27)(4λ/3)^3) (obviously, t1 ≠ t2 and t1, t2 ≥ 0), we can easily prove that A^6 < 4a^2, i.e., d1 d2 < 0. □

Lemma 3.3. For f(x) in Eq. (3.3) with λ > 0 and a ∈ R, if x* ≠ 0, then the minimum point of f(x) can be represented as:

x* =  ((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a > s(λ)
x* = -((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a < -s(λ)

where s(λ) = (4/sqrt(27)) λ^{3/4}.

The resulting closed-form thresholding formula for the L2/3 regularization problem (Theorem 3.4) is

x* =  ((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a > p(λ)
x* =  0                                         if |a| ≤ p(λ)    (3.4)
x* = -((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a < -p(λ)

where

|A| = (2/sqrt(3)) λ^{1/4} (cosh(φ/3))^{1/2},  φ = arccosh((27 a^2 / 16) λ^{-3/2}),  p(λ) = (2/3)(3λ^3)^{1/4}.
Table 2
Comparison of running time with kernel 8 (27 × 27).

Size | DL1/2 (Ave./Std.) | OurL1/2 (Ave./Std.) | Ratio | DL2/3 (Ave./Std.) | OurL2/3 (Ave./Std.) | Ratio

Note: the bold value indicates the ratio of the average running times of the different methods.
Table 3
Comparison for different images with kernel 7 (PSNR in dB).

Image (size)    Blurry  L1      L2/3    L1/2    L0
a (512×512)     20.68   26.88   26.90*  26.77   25.38
b (512×512)     20.47   30.54   30.63*  30.44   28.96
c (512×512)     20.43   29.56   29.70*  29.57   27.81
d (512×512)     22.87   32.41   32.42*  32.19   30.54
e (512×512)     22.40   30.55*  30.51   30.34   29.56
f (512×512)     21.79   30.37   30.45*  30.31   28.94
g (1608×1624)   13.67   25.39   25.45*  25.27   21.90
h (1554×1383)   26.93   34.58   34.59*  34.50   33.80
i (1362×1263)   23.67   32.64   33.14*  33.09   30.31
j (1024×1341)   15.39   26.12   26.43*  26.35   22.14
k (1308×1197)   18.47   26.03   26.24*  26.21   24.14
l (1000×667)    18.34   27.28   27.66*  27.67   25.08
m (886×886)     22.80   30.35   30.70*  30.64   28.69
n (1246×1119)   17.57   26.36   26.48*  26.35   23.89
o (1413×1413)   20.38   30.82   31.13*  30.98   28.31
p (1284×1380)   18.94   31.62   31.89*  31.63   28.23
q (1600×1200)   14.16   23.54   23.75*  23.67   20.18
r (1693×1084)   17.77   27.91   28.14*  28.02   25.36
s (1024×675)    17.36   27.16   27.63*  27.65   24.50
t (1600×1200)   18.89   25.98   26.07*  25.97   24.45

Note: * marks the highest PSNR value.

Table 4
Comparison for different images with kernel 8 (PSNR in dB).

Image (size)    Blurry  L1      L2/3    L1/2    L0
a (512×512)     19.05   26.39   26.42*  26.33   25.18
b (512×512)     19.39   29.29   29.47*  29.38   27.78
c (512×512)     19.23   28.69   28.89*  28.83   27.05
d (512×512)     20.33   30.83   30.94*  30.84   29.25
e (512×512)     20.93   29.55   29.59*  29.47   28.60
f (512×512)     20.08   29.44   29.59*  29.49   28.05
g (1608×1624)   12.75   24.43   24.64*  24.58   21.00
h (1554×1383)   25.65   33.81*  33.73   33.54   33.04
i (1362×1263)   21.86   32.35   33.05   33.07*  30.01
j (1024×1341)   14.46   25.97   26.43   26.44*  21.65
k (1308×1197)   17.43   26.17   26.46*  26.46*  23.92
l (1000×667)    17.64   27.58   28.10   28.15*  24.69
m (886×886)     22.20   30.12   30.52*  30.46   28.34
n (1246×1119)   17.05   26.09   26.31*  26.25   23.44
o (1413×1413)   19.54   29.61   30.12*  30.10   27.08
p (1284×1380)   17.59   30.12   30.71*  30.68   26.95
q (1600×1200)   13.58   23.55   23.82*  23.80   19.93
r (1693×1084)   17.23   27.30   27.72   27.73*  24.49
s (1024×675)    16.50   27.45   28.14   28.26*  23.97
t (1600×1200)   18.54   26.15   26.29*  26.22   24.23

Note: * marks the highest PSNR value.
closed-form formulation, then the final algorithm for image deconvolution is shown in Algorithm 1.

4. Experiments

In this section, we conduct several groups of experiments to demonstrate that the proposed deconvolution algorithm enables a significantly faster speed than Krishnan et al.'s algorithm [3]. Moreover, by extensive experiments, we show that L2/3 regularization is more effective for image deconvolution than L0, L1/2 or L1 regularization, and L1/2 regularization is competitive to L1 regularization and better than L0 regularization.

4.1. Experiment setting

Our test natural images are collected from two sources: (1) standard test images for image processing with size 512 × 512; (2) high-resolution images from the web site http://www.flickr.com/. The images from the second source commonly have larger resolutions and test the ability of our algorithm to handle large images. We list all the test images in Fig. 2. All test images are blurred by the real-world camera shake kernels from [22], shown in Fig. 3 (the kernel images are scaled for better illustration). To better simulate a real captured blurry image, we also add Gaussian noise with standard deviation 0.01 to the blurry image, followed by quantization to 255 discrete values. The PSNR, defined as 10 log10(255^2 / MSE(x)), is employed to evaluate the deconvolution performance, where x is the deconvolution result and MSE(x) denotes the mean square error between x and the ground-truth high-quality image. In our implementation, an edge-tapering operation is utilized to reduce possible boundary artifacts. To compare the best potential performance of the different regularization algorithms, we set β_Inc = 2√2 and set λ to the optimal value from a range of values with the best PSNR performance, as in [3]. Our experiments are executed using Matlab on a desktop computer with a 2.51 GHz AMD CPU (dual core) and 1.87 GB RAM.

Algorithm 1. Fast Image Deconvolution Using Closed-Form Thresholding Formulas of Lq (q = 1/2, 2/3) Regularization

Input: Blurred image y; blur kernel k; regularization weight λ; q = 1/2 or 2/3; β0, βInc, βM; maximal number of outer iterations T; number of inner iterations J.
Step 1: Initialize iter = 0, x = y and β = β0; pre-compute the constant terms in Eq. (2.7).
Step 2: repeat
    iter = iter + 1
    for i = 1 to J do
        x-subproblem: optimize x according to Eq. (2.7).
        u-subproblem: optimize u1, u2 according to Eq. (3.2) or (3.4) when q = 1/2 or 2/3.
    end for
    β = βInc · β
until β > βM or iter > T
Output: x.
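Algorithm 1 can be sketched in NumPy, assuming periodic (circular) boundary conditions so that the x-subproblem (Eq. (2.6)) is solved exactly in the Fourier domain via Eq. (2.7). The names `psf2otf` and `deconv_lq` and the default parameter values are our own illustrative choices, and `q_solver` stands for any closed-form thresholding routine for min_u (u - v)^2 + τ|u|^q, such as the formulas of Section 3:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a filter to `shape` and roll it so its FFT realizes circular convolution."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)
    return np.fft.fft2(pad)

def deconv_lq(y, kernel, lam, q_solver,
              beta0=1.0, beta_inc=2.0 * np.sqrt(2.0), beta_max=256.0, inner=1):
    """Half-quadratic splitting deconvolution in the spirit of Algorithm 1."""
    f1 = np.array([[1.0, -1.0]])                 # horizontal derivative filter
    f2 = np.array([[1.0], [-1.0]])               # vertical derivative filter
    K, F1, F2 = (psf2otf(f, y.shape) for f in (kernel, f1, f2))
    KtY = np.conj(K) * np.fft.fft2(y)            # pre-computed constant term
    den_filters = np.abs(F1) ** 2 + np.abs(F2) ** 2
    x, beta = y.copy(), beta0
    while beta <= beta_max:
        for _ in range(inner):
            X = np.fft.fft2(x)
            # u-subproblem: elementwise thresholding of the two gradient images
            u1 = q_solver(np.real(np.fft.ifft2(F1 * X)), 2.0 / beta)
            u2 = q_solver(np.real(np.fft.ifft2(F2 * X)), 2.0 / beta)
            # x-subproblem: normal equations (2.6) solved in the Fourier domain
            num = (np.conj(F1) * np.fft.fft2(u1) + np.conj(F2) * np.fft.fft2(u2)
                   + (lam / beta) * KtY)
            x = np.real(np.fft.ifft2(num / (den_filters + (lam / beta) * np.abs(K) ** 2)))
        beta *= beta_inc
    return x
```

The per-pixel weight passed to `q_solver` is 2/β because minimizing (β/2)(u - v)^2 + |u|^q is equivalent to minimizing (u - v)^2 + (2/β)|u|^q.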
Table 5
Average PSNR comparison for the 8 kernels (PSNR in dB).

Ave.            Blurry   L1       L2/3      L1/2     L0
ker1 (13×13)    22.5621  30.5138  30.5250*  30.3146  28.2054
ker2 (15×15)    22.2905  29.1895  29.3143*  29.1567  26.9743
ker3 (17×17)    21.7455  29.0075  29.2680*  29.1865  26.6380
ker4 (19×19)    22.2555  29.3000  29.6125*  29.5735  26.7215
ker5 (21×21)    18.8620  30.6030  30.7085*  30.5335  28.0740
ker6 (23×23)    19.7125  29.4790  29.5740*  29.4305  27.4645
ker8 (27×27)    18.5515  28.2445  28.5470*  28.5040  25.9325

Note: * marks the highest PSNR value.

Table 6
Comparison for 8 kernels with image o (PSNR in dB).

Image o         Blurry  L1      L2/3    L1/2    L0
ker1 (13×13)    23.97   33.38   33.41*  33.10   30.47
ker1 (13×13)    23.64   34.23*  34.09   33.66   30.76
ker2 (15×15)    22.87   32.19   32.36*  32.07   28.94
ker3 (17×17)    21.63   30.97   31.44*  31.33   27.84
ker4 (19×19)    22.60   31.23   31.81*  31.78   27.73
ker5 (21×21)    17.91   33.56   33.71*  33.38   29.91
ker6 (23×23)    19.07   32.39   32.56*  32.29   29.38
ker7 (23×23)    18.94   31.62   31.89*  31.63   28.23
ker8 (27×27)    17.59   30.14   30.72*  30.66   26.96

Note: * marks the highest PSNR value.

In this experiment, we evaluate the speed of our algorithm compared to Krishnan's algorithm (without the look-up table technique) [3]. It has been shown in [3] that the speed of other methods, such as the re-weighted method, is slower than Krishnan's algorithm. We test the algorithms on images with varying resolutions. Our proposed deconvolution algorithms using L1/2 and L2/3 regularization are denoted OurL1/2 and OurL2/3, and the corresponding algorithms proposed in [3] are denoted DL1/2 (without the look-up table technique) and DL2/3 (without the look-up table technique), respectively. Table 1 exhibits, for kernel 4, the average result and the standard deviation over ten experiments with different resolutions, and Table 2 the same for kernel 8. From Table 1 and Table 2, we find that our L1/2 algorithm is roughly 5.5 times faster than Krishnan's L1/2 algorithm, and our L2/3 algorithm is roughly …

To compare the performance of the different regularization algorithms, we conduct eight groups of experiments for different kernels. For each kernel, we evaluate the deconvolution performance over the 20 test images shown in Fig. 2. Because of space restrictions, we only list two groups of recovery results, as shown in Table 3 and Table 4. From Table 3 and Table 4, we find that, first, the deconvolution results using L2/3 regularization outperform those by L0, L1/2 or L1 regularization in terms of PSNR values; second, …
In future work, we are interested in analyzing the convergence of our deconvolution algorithm and plan to extend the applications of the thresholding formulas of L1/2 and L2/3 regularization to other related problems in image processing and machine learning.

Acknowledgements

We would like to thank Dr. J. X. Jia and Q. Zhao for many helpful suggestions. This work was supported by the National 973 Program (2013CB329404), the Key Program of the National Natural Science Foundation of China (Grant No. 11131006), and the National Natural Science Foundation of China (Grant No. 61075054).

Appendix A. The proof of Lemma 3.3

Proof. We will find the minimum point of f(x) on x ≠ 0 by seeking and analyzing the roots of the equation f'(x) = 0 on x ≠ 0. Since f'(x) = 2(x - a) + (2λ/3) sign(x) / |x|^{1/3}, we obtain:

x|x|^{1/3} - a|x|^{1/3} + (λ/3) sign(x) = 0  (x ≠ 0)    (A.1)

Assuming that |x| = y^3, (A.1) reduces to the following two cases:

case 1: for x > 0, y^4 - ay + λ/3 = 0;
case 2: for x < 0, y^4 + ay + λ/3 = 0.

Let g(x) = x|x|^{1/3} - a|x|^{1/3} + (λ/3) sign(x), h1(y) = y^4 - ay + λ/3 and h2(y) = y^4 + ay + λ/3. In the following, for case 1, we seek the positive minimum point of f(x) by exploring the zero-point distribution of h1(y); for case 2, we seek the negative minimum point of f(x) by the symmetry relationship between h2(y) and h1(y), i.e., h2(y) = h1(-y).

I. We first analyze case 1. By Lemma 3.1, in order to seek the minimum point of f(x) on x > 0, it suffices to consider the case a ≥ 0. Since for x > 0, g(x) = g(y^3) = h1(y), we just need to seek the positive root ŷ of h1(y) satisfying:

h1(ŷ) = 0; h1(y) < 0 when y < ŷ; h1(y) > 0 when y > ŷ    (A.2)

where y is near ŷ. We next investigate the root distribution of h1(y) by analyzing its derivative. Since h1'(y) = 4y^3 - a, it is easy to verify that when y < (a/4)^{1/3}, h1(y) monotonically decreases, and when y > (a/4)^{1/3}, h1(y) monotonically increases, which means that y = (a/4)^{1/3} is the unique minimum point of h1(y). Since h1''(y) = 12y^2 ≥ 0, h1(y) is a convex function. The root distribution of h1(y) has three cases:

case (a): h1(y) = 0 has no root. This means that h1((a/4)^{1/3}) > 0; then we get ((a/4)^{1/3})^4 - a(a/4)^{1/3} + λ/3 > 0, i.e., a < (4/sqrt(27)) λ^{3/4}.

case (b): h1(y) = 0 has one unique real root. This means that h1((a/4)^{1/3}) = 0, therefore a = (4/sqrt(27)) λ^{3/4}. In this case, however, ŷ = (a/4)^{1/3} does not satisfy Eq. (A.2), so ŷ = (a/4)^{1/3} is a saddle point rather than a minimum point.

case (c): h1(y) = 0 has two distinct real roots, which occurs when a > (4/sqrt(27)) λ^{3/4}. In this case we factor the quartic as

y^4 - ay + λ/3 = (y^2 + Ay + B)(y^2 + Cy + D)    (A.3)

By comparing coefficients, we get:

A + C = 0    (A.4)
B + D + AC = 0    (A.5)
AD + BC = -a    (A.6)
BD = λ/3    (A.7)

From (A.4), we get C = -A. By substituting C = -A into (A.5), (A.6), we get

B + D - A^2 = 0    (A.8)
AD - AB = -a    (A.9)

(i) when A = 0, from (A.4) and (A.6), we get C = A = a = 0. But a = 0 obviously contradicts a > (4/sqrt(27)) λ^{3/4}, so (i) never occurs.

(ii) when A ≠ 0,

B + D = A^2    (A.10)
B - D = a/A    (A.11)

so we can further obtain

B = (A^2 + a/A) / 2    (A.12)
D = (A^2 - a/A) / 2    (A.13)

By substituting (A.12), (A.13) into (A.7), we get ((A^2 + a/A)/2)((A^2 - a/A)/2) = λ/3, and still, by reduction and rearrangement, we obtain A^6 - (4λ/3)A^2 - a^2 = 0. Letting M = A^2, we get

M^3 - (4λ/3) M - a^2 = 0    (A.14)

From the root discriminant formula for the cubic equation, we get

Δ = (q/2)^2 + (p/3)^3 = a^4/4 - (1/27)(4λ/3)^3

where q = -a^2 and p = -4λ/3. Since a > (4/sqrt(27)) λ^{3/4}, Δ > 0. Hence, Eq. (A.14) has only one real root. According to the Cardan formula for the cubic equation, we get the root of Eq. (A.14) as

M = A^2 = (a^2/2 + sqrt(a^4/4 - (1/27)(4λ/3)^3))^{1/3} + (a^2/2 - sqrt(a^4/4 - (1/27)(4λ/3)^3))^{1/3}

Now, by (A.3), (A.4), (A.12), (A.13) and h1(y) = 0, we get

y^2 + Ay + (A^2 + a/A)/2 = 0    (A.15)
y^2 - Ay + (A^2 - a/A)/2 = 0    (A.16)

From the root discriminant formula for the quadratic equation, we get

d1 = -(A^2 + 2a/A)    (A.17)
d2 = -(A^2 - 2a/A)    (A.18)

The following seeks the real roots of equations (A.15), (A.16) by Lemma 3.2.
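The algebra of Eqs. (A.3)-(A.16) is easy to sanity-check numerically. The sketch below (the sample values a = 2, λ = 1 are our own, chosen so that a > (4/sqrt(27)) λ^{3/4}) reproduces the Cardan root of (A.14) and the quartic factorization:

```python
import numpy as np

a, lam = 2.0, 1.0                                  # sample values with a > (4/sqrt(27))*lam**0.75
# Cardan radical root of M^3 - (4*lam/3)*M - a^2 = 0, Eq. (A.14)
disc = np.sqrt(a ** 4 / 4.0 - (4.0 * lam / 3.0) ** 3 / 27.0)
M = np.cbrt(a ** 2 / 2.0 + disc) + np.cbrt(a ** 2 / 2.0 - disc)
A = np.sqrt(M)
B = (A ** 2 + a / A) / 2.0                         # Eq. (A.12)
D = (A ** 2 - a / A) / 2.0                         # Eq. (A.13)
# the factorization (A.3): (y^2 + A y + B)(y^2 - A y + D) = y^4 - a y + lam/3
quartic = np.polymul([1.0, A, B], [1.0, -A, D])
# positive root of the quadratic (A.16): y_hat, so the positive minimizer is x_hat = y_hat^3
y_hat = (A + np.sqrt(2.0 * a / A - A ** 2)) / 2.0
x_hat = y_hat ** 3
```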
In both cases, the obtained roots y2 and y4 can be unified as

ŷ = (|A| + sqrt(2a/|A| - |A|^2)) / 2    (a > (4/sqrt(27)) λ^{3/4})

Hence, the minimum point of f(x) on x > 0 is x̂ = ŷ^3 (a > (4/sqrt(27)) λ^{3/4}).

II. For case 2, we give a simple argument by symmetry. Our goal is to seek the minimum point of f(x) on x < 0. Since g(x) = g(-y^3) = -h2(y) for x < 0, we need to seek the root ŷ of h2(y) satisfying:

h2(ŷ) = 0; h2(y) > 0 when y < ŷ; h2(y) < 0 when y > ŷ    (A.19)

where y is near ŷ. Actually, we could seek the required root of h2(y) in the same way as in case 1. However, the deduction for case 2 can be simplified by the symmetry between h2(y) and h1(y). Since h2(y) = h1(-y), h2(y) and h1(y) are symmetric with respect to the vertical axis. Thus the minimal root of h2(y) corresponds to the negative of the root obtained in case 1, and the minimum point of f(x) on x < 0 is x̂ = -ŷ^3 (a < -(4/sqrt(27)) λ^{3/4}).

In summary, the minimal point of f(x) for x ≠ 0 is:

x̂ =  ((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a > s(λ)
x̂ = -((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a < -s(λ)

where

|A| = ( (a^2/2 + sqrt(a^4/4 - (1/27)(4λ/3)^3))^{1/3} + (a^2/2 - sqrt(a^4/4 - (1/27)(4λ/3)^3))^{1/3} )^{1/2},  s(λ) = (4/sqrt(27)) λ^{3/4}. □

Appendix B. The proof of Theorem 3.4

Proof. Note that we have already derived the minimum point x̂ of f(x) when x ≠ 0 in Lemma 3.3. We now derive the minimum point x* of f(x) for x ∈ R. From the previous analysis, we can easily get

x* = x̂  if f(x̂) < f(0) = a^2
x* = 0  if f(x̂) ≥ f(0) = a^2

However, our aim is not to seek x* by comparison between f(x̂) and f(0), but to seek x* via a closed-form thresholding formula, i.e.,

x* = x̂  if |a| > t*(λ)
x* = 0  if |a| ≤ t*(λ)

Next, our task is to explicitly compute the expression of t*(λ). When f(x̂) ≤ a^2, we get (x̂ - a)^2 + λ|x̂|^{2/3} ≤ a^2. By reducing this inequality, we further get

2a x̂ ≥ x̂^2 + λ|x̂|^{2/3}    (B.1)

|a| ≥ |m_λ(a)|/2 + λ/(2|m_λ(a)|^{1/3})    (B.2)

where x̂ = m_λ(a) := ((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3 sign(a). Let

u(a) = |a| - (m_λ(a)^2 + λ|m_λ(a)|^{2/3}) / (2|m_λ(a)|)

Obviously, the threshold t*(λ) can be computed from the roots of u(a). Since m_λ(a) is the minimum point of f(x) = (x - a)^2 + λ|x|^{2/3} (x ≠ 0), m_λ satisfies |m_λ(a)| - |a| + (λ/3)|m_λ(a)|^{-1/3} = 0. According to this equation, we can obtain

|m_λ(a)|^2 = |a||m_λ(a)| - (λ/3)|m_λ(a)|^{2/3}    (B.3)

Substituting (B.3) into u(a), u(a) can be further reduced to

u(a) = |a|/2 - λ/(3|m_λ(a)|^{1/3})    (B.4)

From (B.4), we can learn that u(a) = u(-a), so u(a) is symmetric with respect to the vertical axis. On a ∈ [(4/sqrt(27)) λ^{3/4}, ∞), u(a) is monotonically increasing and has a unique root t*(λ). By the symmetry of u(a), it has another unique root -t*(λ) on (-∞, -(4/sqrt(27)) λ^{3/4}]. Thus, we have

f(x̂) < a^2 = f(0) ⟺ |a| > t*(λ)

And still, from inequality (B.2) and the AM-GM inequality, we obtain

|a| ≥ |m_λ(a)|/2 + λ/(2|m_λ(a)|^{1/3})
    = |m_λ(a)|/2 + λ/(6|m_λ(a)|^{1/3}) + λ/(6|m_λ(a)|^{1/3}) + λ/(6|m_λ(a)|^{1/3})
    ≥ 4 ( (|m_λ(a)|/2) (λ/(6|m_λ(a)|^{1/3}))^3 )^{1/4}
    = (2/3)(3λ^3)^{1/4}

From the above inequality, we can learn that when |m_λ(a)|/2 = λ/(6|m_λ(a)|^{1/3}), i.e., |m_λ(a)| = (λ/3)^{3/4}, the equality holds, i.e., |a| = (2/3)(3λ^3)^{1/4}. By substituting |m_λ(a)| = (λ/3)^{3/4} and |a| = (2/3)(3λ^3)^{1/4} into Eq. (B.4), we get u((2/3)(3λ^3)^{1/4}) = 0. Hence t*(λ) = (2/3)(3λ^3)^{1/4}.

In summary, the minimum point of f(x) = (x - a)^2 + λ|x|^{2/3}, x ∈ R, satisfies

x* =  ((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a > (2/3)(3λ^3)^{1/4}
x* =  0                                         if |a| ≤ (2/3)(3λ^3)^{1/4}
x* = -((|A| + sqrt(2|a|/|A| - |A|^2)) / 2)^3    if a < -(2/3)(3λ^3)^{1/4}

By utilizing the trigonometric/hyperbolic expression for the roots of the cubic equation, |A| can be reduced to

|A| = (2/sqrt(3)) λ^{1/4} (cosh(φ/3))^{1/2}, where φ = arccosh((27 a^2 / 16) λ^{-3/2}). □
References

[1] Z.B. Xu, Data modeling: visual psychology approach and L1/2 regularization theory, in: Proceedings of the International Congress of Mathematicians, 2010, pp. 1–34.
[2] Z.B. Xu, X.Y. Chang, F.M. Xu, H. Zhang, L1/2 regularization: an iterative half thresholding algorithm, IEEE Trans. Neural Networks Learning Syst. 23 (7) (2012) 1013–1027.
[3] D. Krishnan, R. Fergus, Fast image deconvolution using hyper-Laplacian priors, Adv. Neural Inf. Process. Syst. (NIPS) (2009).
[4] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, W.T. Freeman, Removing camera shake from a single photograph, ACM Trans. Graphics 25 (3) (2006) 787–794.
[5] S. Cho, S. Lee, Fast motion deblurring, ACM Trans. Graphics 28 (5) (2009) 1–8.
[6] L. Yuan, J. Sun, L. Quan, H.-Y. Shum, Progressive inter-scale and intra-scale non-blind image deconvolution, ACM Trans. Graphics 27 (3) (2008) 1–8.
[7] T. Chan, S. Esedoglu, F. Park, A. Yip, Math. Models Comput. Vision 17 (2005).
[8] A. Buades, B. Coll, J.M. Morel, A non-local algorithm for image denoising, in: CVPR, vol. 2, IEEE, 2005, pp. 60–65.
[9] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process. 16 (8) (2007) 2080–2095.
[10] J. Mairal, F. Bach, J. Ponce, G. Sapiro, A. Zisserman, Non-local sparse models for image restoration, in: 2009 IEEE 12th International Conference on Computer Vision, 2009, pp. 2272–2279.
[11] M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process. 15 (12) (2006) 3736–3745.
[12] J. Mairal, M. Elad, G. Sapiro, Sparse representation for color image restoration, IEEE Trans. Image Process. 17 (1) (2008) 53–69.
[13] W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Trans. Image Process. 20 (7) (2011) 1838–1857.
[14] M.F. Tappen, B.C. Russell, W.T. Freeman, Exploiting the sparse derivative prior for super-resolution and image demosaicing, in: Third International Workshop on Statistical and Computational Theories of Vision at ICCV, 2003.
[15] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett. 14 (10) (2007) 707–710.
[16] R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Prob. 24 (3) (2008) 1–14.
[17] R. Chartrand, W. Yin, Iteratively reweighted algorithms for compressive sensing, in: IEEE International Conference on Acoustics, Speech and Signal Processing, 2008, pp. 3869–3872.
[18] R. Chartrand, Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data, in: IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 262–265.
[19] T. Blumensath, M. Yaghoobi, M.E. Davies, Iterative hard thresholding and L0 regularisation, in: IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, 2007.
[20] D.L. Donoho, De-noising by soft-thresholding, IEEE Trans. Inf. Theory 41 (3) (1995) 613–627.
[21] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. 57 (11) (2004) 1413–1457.
[22] A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms, in: CVPR, 2009, pp. 1964–1971.
[23] J. Yang, J. Wright, T.S. Huang, Y. Ma, Image super-resolution via sparse representation, IEEE Trans. Image Process. 19 (11) (2010) 2861–2873.
[24] F.C. Xing, Investigation on solutions of cubic equations with one unknown, J. Central Univ. Nationalities (Natural Sciences Edition) 12 (3) (2003) 207–218.
[25] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, 2010.