The WNNM problem, however, is much more difficult to optimize than NNM since the objective function in (5) is not convex in general. In [3], the sub-gradient method is employed to derive the solution of NNM; unfortunately, a similar derivation cannot be applied to WNNM since the sub-gradient conditions are no longer satisfied. In subsection 2.2, we discuss the solution of WNNM in detail. Obviously, NNM is a special case of WNNM in which all the weights w_{i=1,...,n} are the same. Our solution covers the solution of NNM in [3], while our derivation is much simpler than the complex sub-gradient based derivation in [3].

2.2. Optimization

Before analyzing the optimization of WNNM, we first give the following three lemmas.

Lemma 1. ∀A, B ∈ ℝ^{m×n} that satisfy A^T B = 0, we have
(1) ‖A + B‖_{w,∗} ≥ ‖A‖_{w,∗};
(2) ‖A + B‖_F ≥ ‖A‖_F.

Lemma 2. ∀M = [A, B; C, D] with A ∈ ℝ^{m×m} and D ∈ ℝ^{n×n}, if the weights satisfy w_1 ≥ ··· ≥ w_{m+n} ≥ 0, we have

‖M‖_{w,∗} ≥ ‖A‖_{w1,∗} + ‖D‖_{w2,∗},

where w = [w_1, ..., w_{m+n}], w1 = [w_1, ..., w_m] and w2 = [w_{m+1}, ..., w_{m+n}].

Lemma 3. ∀A ∈ ℝ^{n×n} and any diagonal non-negative matrix W ∈ ℝ^{n×n} with non-ascending ordered diagonal elements, let A = XΦY^T be the SVD of A; then

Σ_i σ_i(A) σ_i(W) = max_{U^T U = I, V^T V = I} tr[W U^T A V],

where I is the identity matrix, and σ_i(A) and σ_i(W) are the i-th singular values of matrices A and W, respectively. When U = X and V = Y, tr[W U^T A V] reaches its maximum value.
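The lemmas are easy to check numerically. The short script below is a sketch of such a check in NumPy (the variable names and the random test data are ours, not from the paper): it verifies Lemma 1 for a random pair A, B with A^T B = 0, and Lemma 3 for a random A and a sorted non-negative diagonal W.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5

def wnn(X, w):
    # weighted nuclear norm ||X||_{w,*} = sum_i w_i * sigma_i(X)
    return np.sum(w * np.linalg.svd(X, compute_uv=False))

# Lemma 1: build A and B whose column spaces lie in orthogonal subspaces, so A^T B = 0
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
A = Q[:, :3] @ rng.standard_normal((3, n))        # lives in span(Q[:, :3])
B = Q[:, 3:] @ rng.standard_normal((m - 3, n))    # lives in the complementary span
w = rng.random(n)                                 # arbitrary non-negative weights
assert np.allclose(A.T @ B, 0)
print(wnn(A + B, w) >= wnn(A, w))                 # True
print(np.linalg.norm(A + B, 'fro') >= np.linalg.norm(A, 'fro'))  # True

# Lemma 3: the trace is maximized by the singular vectors of A
A2 = rng.standard_normal((n, n))
wd = np.sort(rng.random(n))[::-1]                 # non-ascending diagonal of W
W = np.diag(wd)
X, phi, Yt = np.linalg.svd(A2)                    # A2 = X @ diag(phi) @ Yt
print(np.isclose(np.sum(phi * wd), np.trace(W @ X.T @ A2 @ Yt.T)))  # True
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random feasible (U, V) pair
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
print(np.trace(W @ U.T @ A2 @ V) <= np.sum(phi * wd) + 1e-12)       # True
```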
The proofs of the above lemmas can be found in the supplementary material. We then have the following theorem, which guarantees that the column and row spaces of the solution to the WNNM problem in (5) still lie in the corresponding spaces of the observation data matrix Y.

Theorem 1. ∀Y ∈ ℝ^{m×n}, denote by Y = UΣV^T its SVD. For the WNNM problem in (5) with non-negative weight vector w, the solution X̂ can be written as X̂ = UB̂V^T, where B̂ is the solution of the following optimization problem:

B̂ = arg min_B ‖Σ − B‖²_F + ‖B‖_{w,∗}.    (6)

Proof. Denote by U⊥ the set of orthogonal bases of the complementary space of U. We can write X as X = UA1 + U⊥A2, where A1 and A2 are the components of X in the subspaces U and U⊥, respectively. Then we have

f(X) = ‖Y − X‖²_F + ‖X‖_{w,∗}
     = ‖UΣV^T − UA1 − U⊥A2‖²_F + ‖UA1 + U⊥A2‖_{w,∗}
     ≥ ‖UΣV^T − UA1‖²_F + ‖UA1‖_{w,∗}    (Lemma 1).

Similarly, for the row space bases V, we have

f(X) ≥ ‖UΣV^T − UBV^T‖²_F + ‖UBV^T‖_{w,∗}.

Orthonormal matrices U and V do not change the F-norm and the weighted nuclear norm, and thus we have

f(X) ≥ ‖Σ − B‖²_F + ‖B‖_{w,∗}.

Therefore, if we have the solution of the minimization problem in (6), the solution of the original WNNM problem in (5) can be obtained as X̂ = UB̂V^T.
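As a quick numerical illustration of Theorem 1 (a sketch with our own variable names, not from the paper): for any candidate X, the reduced objective in (6) evaluated at B = U^T X V lower-bounds f(X), so it suffices to minimize over B.

```python
import numpy as np

def wnn(X, w):
    # weighted nuclear norm ||X||_{w,*} = sum_i w_i * sigma_i(X)
    return np.sum(w * np.linalg.svd(X, compute_uv=False))

rng = np.random.default_rng(1)
m, n = 12, 6
Y = rng.standard_normal((m, n))
w = rng.random(n)                                  # non-negative weights
U, sv, Vt = np.linalg.svd(Y, full_matrices=False)  # economy SVD: U is m x n
Sigma = np.diag(sv)

X = rng.standard_normal((m, n))                    # an arbitrary candidate solution
B = U.T @ X @ Vt.T                                 # component of X in the spaces of Y
f_X = np.linalg.norm(Y - X, 'fro')**2 + wnn(X, w)
f_B = np.linalg.norm(Sigma - B, 'fro')**2 + wnn(B, w)
print(f_X >= f_B)                                  # True for any X
```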
Based on the above lemmas and theorem, we discuss the solution of the WNNM problem under three situations: the weights w_{i=1,...,n} are in a non-ascending order, in an arbitrary order, and in a non-descending order, respectively.

2.2.1 The weights are in a non-ascending order

Based on Theorem 1, we can give the globally optimal solution of the WNNM problem in (5) in the case that w_1 ≥ ··· ≥ w_n ≥ 0. We have the following theorem.

Theorem 2. If the weights satisfy w_1 ≥ ··· ≥ w_n ≥ 0, the WNNM problem in (5) has a globally optimal solution

X̂ = U S_w(Σ) V^T,

where Y = UΣV^T is the SVD of Y, and S_w(Σ) is the generalized soft-thresholding operator with weight vector w:

S_w(Σ)_{ii} = max(Σ_{ii} − w_i, 0).

Proof. Consider the optimization problem in (6), and let Λ_B be the diagonal matrix which has the same diagonal elements as matrix B. We have

‖Σ − B‖²_F + ‖B‖_{w,∗}
 = ‖Σ − Λ_B − (B − Λ_B)‖²_F + ‖Λ_B + (B − Λ_B)‖_{w,∗}
 ≥ ‖Σ − Λ_B‖²_F + ‖Λ_B‖_{w,∗}    (Lemma 2).

Thus, under such a weight condition, the optimal solution of (6) has the diagonal form Λ_B. Since both Σ and Λ_B are diagonal matrices, the solution can be obtained by a soft-thresholding operation on each diagonal element. Based on the conclusion in Theorem 1, the optimal solution of (5) is X̂ = U S_w(Σ) V^T.
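In code, the solution of Theorem 2 is a single SVD followed by weighted soft-thresholding of the singular values. The sketch below (NumPy; the function name is ours) assumes w is a non-negative, non-ascending vector of length min(m, n).

```python
import numpy as np

def wnnm_closed_form(Y, w):
    """Globally optimal WNNM solution of (5) when w[0] >= w[1] >= ... >= 0 (Theorem 2)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_hat = np.maximum(s - w, 0)          # generalized soft-thresholding S_w(Sigma)
    return (U * s_hat) @ Vt               # equivalent to U @ diag(s_hat) @ Vt
```

For uniform weights w_i = λ this reduces to the singular value thresholding solution of NNM in [3].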
Theorem 2 greatly extends Theorem 2.1 in [3] (which is described by (1)-(3) in this paper). We show that if the weights w_{i=1,...,n} are in a non-ascending order, but do not necessarily have the same value, the WNNM problem is still convex and the optimal solution can still be obtained by soft-thresholding on the singular values, only with different thresholds. The Theorem 2.1 given by Cai et al. in [3] is a special case of our Theorem 2. Compared with the complex sub-gradient based proof in [3], however, our proof is much more concise.

2.2.2 The weights are in an arbitrary order

In the case that the weights w_{i=1,...,n} are not in a non-ascending order but in an arbitrary order, the WNNM problem in (5) is non-convex, and thus we cannot guarantee a global minimum. We propose an iterative algorithm to solve it.

In Theorem 1, we have proved that the solution of (5) can be obtained by solving (6). Let B = PΛQ^T be the SVD of B. We solve the following optimization problem iteratively:

(P̂, Λ̂, Q̂) = arg min_{P,Λ,Q} ‖PΛQ^T − Σ‖²_F + ‖PΛQ^T‖_{w,∗},
s.t. P^T P = I, Q^T Q = I,    (7)

where I is the identity matrix.

Step 1: Given the non-negative diagonal matrix Λ, we solve

(P̂, Q̂) = arg min_{P,Q} ‖PΛQ^T − Σ‖²_F.

Based on the definition of the Frobenius norm, we have

min_{P,Q} ‖PΛQ^T − Σ‖²_F
 = min_{P,Q} tr[(PΛQ^T − Σ)(PΛQ^T − Σ)^T]
 = tr[ΛΛ^T + ΣΣ^T] − 2 max_{P,Q} tr[PΛQ^T Σ^T]
 = tr[ΛΛ^T + ΣΣ^T] − 2 Σ_i σ_i(Σ) σ_i(Λ)    (Lemma 3),

and the optimal P and Q are the column and row bases of the SVD of matrix Λ. As Λ is already a diagonal matrix, P and Q are permutation matrices which make the diagonal matrix PΛQ^T have non-ascending ordered diagonal elements.

Step 2: Given orthogonal matrices P and Q, we solve

Λ̂ = arg min_Λ ‖PΛQ^T − Σ‖²_F + ‖PΛQ^T‖_{w,∗}.

Since PΛQ^T is a diagonal matrix with non-ascending ordered elements, we have

Λ̂ = arg min_Λ Σ_i ((PΛQ^T)_{ii} − Σ_{ii})² + w_i |(PΛQ^T)_{ii}|.

The soft-thresholding operation can be performed on each element of the diagonal matrix PΛQ^T. Because P and Q are permutation matrices which only change the positions of the diagonal elements, we have

Λ̂ = P^T S_w(Σ) Q.

By iterating between the above two steps, (6) can be solved iteratively by sorting the diagonal elements and shrinking the singular values:

(P_{(k+1)}, Φ, Q_{(k+1)}) = SVD(Λ_{(k)});
Λ_{(k+1)} = P_{(k+1)}^T S_w(Σ) Q_{(k+1)}.    (8)

Based on the conclusion of Theorem 1, the final estimation of X̂ can be obtained by

X̂ = U P̂^T S_w(Σ) Q̂ V^T.
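A compact sketch of this iterative scheme is given below (NumPy; the function name, the initialization of Λ, the fixed iteration count and the use of the economy SVD are our assumptions, not the authors' implementation). S_w(Σ) stays fixed while Λ is repeatedly re-sorted and re-assigned as in (8).

```python
import numpy as np

def wnnm_arbitrary_order(Y, w, n_iter=10):
    """Sketch of the iterative scheme (7)-(8) for a weight vector w in arbitrary order."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Sw = np.diag(np.maximum(s - w, 0))     # S_w(Sigma), fixed during the iterations
    Lam = np.diag(s)                       # Lambda_(0): a non-ascending diagonal init (our choice)
    for _ in range(n_iter):
        P, _, Qt = np.linalg.svd(Lam)      # sorting step of (8): Lambda_(k) = P Phi Q^T
        Lam = P.T @ Sw @ Qt.T              # shrinking step of (8): Lambda_(k+1) = P^T S_w(Sigma) Q
    return U @ Lam @ Vt                    # X_hat = U P^T S_w(Sigma) Q V^T
```

As shown in the next subsection, for non-descending weights this loop reaches its fixed point after a single pass.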
2.2.3 The weights are in a non-descending order

At last, we consider another special but very useful case, i.e., the weights w_{i=1,...,n} are in a non-descending order. Based on the iterative algorithm proposed in subsection 2.2.2, we have the following corollary.

Corollary 1. If the weights satisfy 0 ≤ w_1 ≤ ··· ≤ w_n, the iterative algorithm described in subsection 2.2.2 has a fixed point X̂ = U S_w(Σ) V^T.

Proof. In (8), by initializing Λ_{(0)} as any diagonal matrix with non-ascending ordered diagonal elements, we have

(P_{(1)} = I, Φ = Λ_{(0)}, Q_{(1)} = I) = SVD(Λ_{(0)});
Λ_{(1)} = I S_w(Σ) I = S_w(Σ).

Consequently, ∀ 0 < i < j ≤ n, we have Σ_{ii} ≥ Σ_{jj} and w_i ≤ w_j. After the soft-thresholding operation, Λ_{(1)} = S_w(Σ) still satisfies the non-ascending order. Thus, in the next iteration, P and Q are still identity matrices, and the optimization of (7) reaches a fixed point. Based on the conclusion of Theorem 1, we obtain a fixed point estimation of X as X̂ = U S_w(Σ) V^T.

The conclusion of Corollary 1 is very important and useful. The singular values of a matrix are always sorted in a non-ascending order, and the larger singular values usually correspond to the subspaces of the more important components of the data matrix. Therefore, we should shrink the larger singular values less, that is, assign smaller weights to the larger singular values in the weighted nuclear norm. In such a case, Corollary 1 guarantees that the proposed iterative algorithm has a fixed point. Furthermore, this fixed point has an analytical form (i.e., X̂ = U S_w(Σ) V^T). Hence, in practice we do not need to actually iterate, but can directly obtain the desired solution in a single step, which makes the proposed method very efficient. As we will see in Section 3, Corollary 1 leads to an effective denoising algorithm, which shows denoising performance superior to almost all state-of-the-art denoising algorithms we can find.
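The fixed-point property is easy to verify numerically. In the sketch below (our own example, not from the paper), Λ is initialized with the sorted singular values, so the first pass of (8) yields S_w(Σ), and one further pass leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.standard_normal((8, 8))
U, s, Vt = np.linalg.svd(Y)
w = np.sort(rng.random(8))                 # non-descending weights, as in Corollary 1

Sw = np.diag(np.maximum(s - w, 0))         # Lambda_(1) = S_w(Sigma); its diagonal is non-ascending
P, _, Qt = np.linalg.svd(Sw)               # next sorting step of (8)
Lam_next = P.T @ Sw @ Qt.T                 # next shrinking step of (8)
print(np.allclose(Lam_next, Sw))           # True: S_w(Sigma) is a fixed point, X_hat = U S_w(Sigma) V^T
```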
3. WNNM for Image Denoising

Image denoising aims to reconstruct the original image x from its noisy observation y = x + n, where n is assumed to be additive white Gaussian noise with zero mean and variance σ_n². Denoising is not only an important pre-processing step for many vision applications, but also an ideal test bed for evaluating statistical image modeling methods. The seminal work of nonlocal means [1] triggered the wide study of nonlocal self-similarity (NSS) based methods for image denoising. NSS refers to the fact that there are many repeated local patterns across a natural image, and the nonlocal patches similar to a given patch can help much in reconstructing it. NSS based image denoising algorithms such as BM3D [7], LSSC [22] and NCSR [10] have achieved state-of-the-art denoising results.

For a local patch y_j in image y, we can search for its nonlocal similar patches across the image (in practice, within a large enough local window) by methods such as block matching [7]. By stacking those nonlocal similar patches into a matrix, denoted by Y_j, we have Y_j = X_j + N_j, where X_j and N_j are the patch matrices of the original image and of the noise, respectively. Intuitively, X_j should be a low rank matrix, and low rank matrix approximation methods can be used to estimate X_j from Y_j. By aggregating all the denoised patches, the whole image can be estimated. Indeed, the NNM method has been adopted in [15] for video denoising.

We apply the proposed WNNM model to Y_j to estimate X_j for image denoising. By using the noise variance σ_n² to normalize the F-norm data fidelity term ‖Y_j − X_j‖²_F, we have the following energy function:

X̂_j = arg min_{X_j} (1/σ_n²) ‖Y_j − X_j‖²_F + ‖X_j‖_{w,∗}.    (9)

Obviously, the key issue now is the determination of the weight vector w. For natural images, we have the general prior knowledge that the larger singular values of X_j are more important than the smaller ones since they represent the energy of the major components of X_j. In the application of denoising, the larger the singular values, the less they should be shrunk. Therefore, a natural idea is that the weight assigned to σ_i(X_j), the i-th singular value of X_j, should be inversely proportional to σ_i(X_j). We let

w_i = c√n / (σ_i(X_j) + ε),    (10)

where c > 0 is a constant, n is the number of similar patches in Y_j, and ε = 10⁻¹⁶ is introduced to avoid division by zero.

With the above defined weights, the proposed WNNM algorithm in subsection 2.2.3 can be directly used to solve the model in (9). However, there is still one problem remaining: the singular values σ_i(X_j) are not available. We assume that the noise energy is evenly distributed over each subspace spanned by the basis pair of U and V, and then the initial σ_i(X_j) can be estimated as

σ̂_i(X_j) = √(max(σ_i²(Y_j) − n σ_n², 0)),

where σ_i(Y_j) is the i-th singular value of Y_j. Note that the obtained weights w_{i=1,...,n} are guaranteed to be in a non-descending order since the σ̂_i(X_j) are always sorted in a non-ascending order.
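Putting (9), (10) and the above singular value estimate together, the per-group estimation is a single SVD followed by one pass of weighted soft-thresholding, as guaranteed by Corollary 1. The sketch below is our own illustration (the function name, variable names and the column-wise patch layout are assumptions, not the authors' code).

```python
import numpy as np

def denoise_patch_group(Yj, sigma_n, c=2.8, eps=1e-16):
    """One WNNM step on a patch group Yj (rows: pixels of a patch, columns: similar patches)."""
    n = Yj.shape[1]                                        # number of similar patches
    U, s, Vt = np.linalg.svd(Yj, full_matrices=False)
    s_x = np.sqrt(np.maximum(s**2 - n * sigma_n**2, 0))    # estimated sigma_i(Xj)
    w = c * np.sqrt(n) / (s_x + eps)                       # weights of eq. (10), non-descending
    s_hat = np.maximum(s - w, 0)                           # S_w(Sigma), Corollary 1
    return (U * s_hat) @ Vt                                # X_hat_j = U S_w(Sigma) V^T
```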
By applying the above procedure to each patch and aggregating all the denoised patches together, the image x can be reconstructed. In practice, we can run several more rounds of this procedure to enhance the denoising output. The whole denoising algorithm is summarized in Algorithm 1.

Algorithm 1 Image Denoising by WNNM
Input: Noisy image y
1: Initialize x̂^(0) = y, y^(0) = y
2: for k = 1 : K do
3:    Iterative regularization y^(k) = x̂^(k−1) + δ(y − ŷ^(k−1))
4:    for each patch y_j in y^(k) do
5:       Find similar patch group Y_j
6:       Estimate weight vector w
7:       Singular value decomposition [U, Σ, V] = SVD(Y_j)
8:       Get the estimation: X̂_j = U S_w(Σ) V^T
9:    end for
10:   Aggregate X̂_j to form the clean image x̂^(k)
11: end for
Output: Clean image x̂^(K)
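Below is a minimal, self-contained sketch of Algorithm 1 in NumPy. The brute-force block matching, the grid of reference patches, the feedback term in the regularization step and the equal-weight aggregation are simplifications we assume for illustration; they are not the authors' implementation, which is considerably more optimized.

```python
import numpy as np

def wnnm_denoise(y, sigma_n, K=8, delta=0.1, c=2.8, ps=6, n_sim=60, step=4, win=15):
    """Sketch of Algorithm 1: iterative regularization + per-group WNNM + aggregation."""
    H, W = y.shape
    x_hat = y.copy()
    for _ in range(K):
        yk = x_hat + delta * (y - x_hat)                   # iterative regularization (step 3)
        acc = np.zeros_like(y)
        cnt = np.zeros_like(y)
        for i in range(0, H - ps + 1, step):
            for j in range(0, W - ps + 1, step):
                # block matching inside a local search window (brute force)
                ref = yk[i:i + ps, j:j + ps].ravel()
                pos, cols = [], []
                for ii in range(max(0, i - win), min(H - ps, i + win) + 1):
                    for jj in range(max(0, j - win), min(W - ps, j + win) + 1):
                        pos.append((ii, jj))
                        cols.append(yk[ii:ii + ps, jj:jj + ps].ravel())
                cols = np.stack(cols, axis=1)              # (ps*ps, candidates)
                d = np.sum((cols - ref[:, None]) ** 2, axis=0)
                sel = np.argsort(d)[:n_sim]
                Yj = cols[:, sel]                          # similar patch group
                n = Yj.shape[1]
                U, s, Vt = np.linalg.svd(Yj, full_matrices=False)
                s_x = np.sqrt(np.maximum(s ** 2 - n * sigma_n ** 2, 0))
                w = c * np.sqrt(n) / (s_x + 1e-16)         # weights of eq. (10)
                Xj = (U * np.maximum(s - w, 0)) @ Vt       # X_hat_j = U S_w(Sigma) V^T
                # aggregate the denoised patches by simple averaging
                for t, idx in enumerate(sel):
                    ii, jj = pos[idx]
                    acc[ii:ii + ps, jj:jj + ps] += Xj[:, t].reshape(ps, ps)
                    cnt[ii:ii + ps, jj:jj + ps] += 1
        x_hat = np.where(cnt > 0, acc / np.maximum(cnt, 1), yk)
    return x_hat
```

A call like wnnm_denoise(noisy, 20.0) on a grayscale image in [0, 255] exercises the whole pipeline; the per-noise-level settings actually used in the paper are given in subsection 4.1.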
4. Experiments

We compare the proposed WNNM based image denoising algorithm with several state-of-the-art denoising methods, including BM3D [7], EPLL [31], LSSC [22], NCSR [10] and SAIST [9]. The baseline NNM algorithm is also compared. All the competing methods exploit image nonlocal redundancies. In subsection 4.1, we discuss the parameter settings of the WNNM denoising algorithm; in subsection 4.2, we evaluate WNNM and its competing methods on 20 widely used test images.

4.1. Parameter Setting

There are several parameters (δ, c, K and the patch size) in the proposed algorithm. For all noise levels, the iterative regularization parameter δ and the parameter c are fixed to 0.1 and 2.8, respectively. The iteration number K and the patch size are set based on the noise level: for higher noise levels, we choose bigger patches and run more iterations. By experience, we set the patch size to 6 × 6, 7 × 7, 8 × 8 and 9 × 9 for σ_n ≤ 20, 20 < σ_n ≤ 40, 40 < σ_n ≤ 60 and 60 < σ_n, respectively; K is set to 8, 12, 14 and 14 on these noise levels, respectively.
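These settings can be collected in a small lookup helper (our own convenience function, not part of the paper):

```python
def wnnm_settings(sigma_n, delta=0.1, c=2.8):
    """Patch size and iteration number K reported in the paper for each noise range."""
    if sigma_n <= 20:
        patch, K = 6, 8
    elif sigma_n <= 40:
        patch, K = 7, 12
    elif sigma_n <= 60:
        patch, K = 8, 14
    else:
        patch, K = 9, 14
    return {"patch": patch, "K": K, "delta": delta, "c": c}
```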
For NNM, we use the same parameters as WNNM except for the uniform weight √n·σ_n. The source codes of the competing methods are obtained from the original authors, and we use the default parameters.
Figure 1. The 20 test images.

4.2. Experimental Results on 20 Test Images

We evaluate the competing methods on 20 widely used test images, whose scenes are shown in Fig. 1. Zero-mean additive white Gaussian noise with variance σ_n² is added to those test images to generate the noisy observations. Due to the page limit, we show the results on four noise levels, ranging from the low noise level σ_n = 10, to the medium noise levels σ_n = 30 and 50, and to the strong noise level σ_n = 100. Results on more noise levels can be found in the supplementary material. The PSNR results of the competing denoising methods are shown in Table 1. The highest PSNR result for each image and each noise level is highlighted in bold.

We have the following observations. First, the proposed WNNM achieves the highest PSNR in almost every case. It achieves a 1.3dB–2dB improvement over the NNM method on average and outperforms the benchmark BM3D method by 0.3dB–0.45dB on average (up to 1.16dB on image Leaves at noise level σ_n = 10) consistently on all four noise levels. Such an improvement is notable since few methods can surpass BM3D by more than 0.3dB on average [18, 17]. Second, some methods such as LSSC and NCSR can outperform BM3D a little when the noise level is low, but their PSNR indices become almost the same as, or lower than, those of BM3D as the noise level increases. This shows that the proposed WNNM method is more robust to noise strength than the other methods.

In Fig. 2 and Fig. 3, we compare the visual quality of the denoised images produced by the competing algorithms (more visual comparison results can be found in the supplementary material). Fig. 2 demonstrates that the proposed WNNM reconstructs more image details from the noisy observation. Compared with WNNM, LSSC, NCSR and SAIST over-smooth the textures in the sand area of image Boats more, and BM3D and EPLL generate more artifacts. More interestingly, as can be seen in the highlighted window, the proposed WNNM still reconstructs the tiny masts of the boat well, while the masts almost disappear in the images reconstructed by the other methods. Fig. 3 shows an example with strong noise. It is obvious that WNNM generates far fewer artifacts and preserves the image edge structures much better than the other competing methods. In summary, WNNM shows strong denoising capability, producing visually much more pleasant denoising outputs while achieving higher PSNR indices.

Figure 2. Denoising results on image Boats by different methods (noise level σ_n = 50). PSNR: noisy 14.16dB, BM3D 26.78dB, EPLL 26.65dB, LSSC 26.77dB, NCSR 26.66dB, SAIST 26.63dB, WNNM 26.98dB.

Figure 3. Denoising results on image Monarch by different methods (noise level σ_n = 100). PSNR: noisy 8.10dB, BM3D 22.52dB, EPLL 22.23dB, LSSC 22.24dB, NCSR 22.11dB, SAIST 22.61dB, WNNM 22.91dB.

5. Conclusion

As a significant extension of the nuclear norm minimization problem, the weighted nuclear norm minimization (WNNM) problem was studied in this paper. We showed that, when the weights are in a non-ascending order, WNNM is still convex and we presented the analytical optimal solution; when the weights are in an arbitrary order, we presented an iterative algorithm to solve it; and when the weights are in a non-descending order, we proved that the iterative algorithm yields an analytical fixed point solution, which can be computed efficiently. We then applied the proposed WNNM algorithm to image denoising. The experimental results showed that WNNM not only leads to visible PSNR improvements over state-of-the-art methods such as BM3D, but also preserves the local image structures much better and generates far fewer visual artifacts. It can be expected that WNNM will find more successful applications in computer vision problems.

References
[1] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In CVPR, 2005.
[2] A. M. Buchanan and A. W. Fitzgibbon. Damped Newton algorithms for matrix factorization with missing data. In CVPR, 2005.
[3] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[4] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? JACM, 58(3):11, 2011.
[5] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010.
[6] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[7] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. TIP, 16(8):2080–2095, 2007.
[8] F. De La Torre and M. J. Black. A framework for robust subspace learning. IJCV, 54(1-3):117–142, 2003.
[9] W. Dong, G. Shi, and X. Li. Nonlocal image restoration with bilateral variance estimation: a low-rank approach. TIP, 22(2):700–711, 2013.
[10] W. Dong, L. Zhang, and G. Shi. Centralized sparse representation for image restoration. In ICCV, 2011.
[11] D. L. Donoho, M. Gavish, and A. Montanari. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising. PNAS, 2013.
[12] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. TIP, 15(12):3736–3745, 2006.
[13] A. Eriksson and A. Van Den Hengel. Efficient computation of robust low-rank matrix approximations in the presence of missing data using the l1 norm. In CVPR, 2010.
[14] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In ACC, 2001.
[15] H. Ji, C. Liu, Z. Shen, and Y. Xu. Robust video denoising using low rank matrix completion. In CVPR, 2010.
[16] Q. Ke and T. Kanade. Robust l1 norm factorization in the presence of outliers and missing data by alternative convex programming. In CVPR, 2005.
Table 1. Denoising results (PSNR, dB) by different methods. In each half of the table, the left seven columns report results for the lower noise level and the right seven columns for the higher noise level.
              σn = 10                                          σn = 30
         NNM   BM3D  EPLL  LSSC  NCSR  SAIST WNNM    NNM   BM3D  EPLL  LSSC  NCSR  SAIST WNNM
C.Man 32.87 34.18 34.02 34.24 34.18 34.30 34.44 27.43 28.64 28.36 28.63 28.59 28.36 28.80
House 35.97 36.71 35.75 36.95 36.80 36.66 36.95 30.99 32.09 31.23 32.41 32.07 32.30 32.52
Peppers 33.77 34.68 34.54 34.80 34.68 34.82 34.95 28.11 29.28 29.16 29.25 29.10 29.24 29.49
Montage 36.09 37.35 36.49 37.26 37.17 37.46 37.84 29.28 31.38 30.17 31.10 30.92 31.06 31.65
Leaves 33.55 34.04 33.29 34.52 34.53 34.92 35.20 26.81 27.81 27.18 27.65 28.14 28.29 28.60
StarFish 32.62 33.30 33.29 33.74 33.65 33.72 33.99 26.62 27.65 27.52 27.70 27.78 27.92 28.08
Monarch 33.54 34.12 34.27 34.44 34.51 34.76 35.03 27.44 28.36 28.35 28.20 28.46 28.65 28.92
Airplane 32.19 33.33 33.39 33.51 33.40 33.43 33.64 26.53 27.56 27.67 27.53 27.53 27.66 27.83
Paint 33.13 34.00 34.01 34.35 34.15 34.28 34.50 27.02 28.29 28.33 28.29 28.10 28.44 28.58
J.Bean 37.52 37.91 37.63 38.69 38.31 38.37 38.93 31.03 31.97 31.56 32.39 32.13 32.14 32.46
Fence 32.62 33.50 32.89 33.60 33.65 33.76 33.93 27.19 28.19 27.23 28.16 28.23 28.26 28.56
Parrot 32.54 33.57 33.58 33.62 33.56 33.66 33.81 27.26 28.12 28.07 27.99 28.07 28.12 28.33
Lena 35.19 35.93 35.58 35.83 35.85 35.90 36.03 30.15 31.26 30.79 31.18 31.06 31.27 31.43
Barbara 34.40 34.98 33.61 34.98 35.00 35.24 35.51 28.59 29.81 27.57 29.60 29.62 30.14 30.31
Boat 33.05 33.92 33.66 34.01 33.91 33.91 34.09 27.82 29.12 28.89 29.06 28.94 28.98 29.24
Hill 32.89 33.62 33.48 33.66 33.69 33.65 33.79 28.11 29.16 28.90 29.09 28.97 29.06 29.25
F.print 31.38 32.46 32.12 32.57 32.68 32.69 32.82 25.84 26.83 26.19 26.68 26.92 26.95 26.99
Man 32.99 33.98 33.97 34.10 34.05 34.12 34.23 27.87 28.86 28.83 28.87 28.78 28.81 29.00
Couple 32.97 34.04 33.85 34.01 34.00 33.96 34.14 27.36 28.87 28.62 28.77 28.57 28.72 28.98
Straw 29.84 30.89 30.74 31.25 31.35 31.49 31.62 23.52 24.84 24.64 24.99 25.00 25.23 25.27
AVE. 33.462 34.326 34.008 34.507 34.456 34.555 34.772 27.753 28.905 28.463 28.877 28.849 28.980 29.214
              σn = 50                                          σn = 100
         NNM   BM3D  EPLL  LSSC  NCSR  SAIST WNNM    NNM   BM3D  EPLL  LSSC  NCSR  SAIST WNNM
C.Man 24.88 26.12 26.02 26.35 26.14 26.15 26.42 21.49 23.07 22.86 23.15 22.93 23.09 23.36
House 27.84 29.69 28.76 29.99 29.62 30.17 30.32 23.65 25.87 25.19 25.71 25.56 26.53 26.68
Peppers 25.29 26.68 26.63 26.79 26.82 26.73 26.91 21.24 23.39 23.08 23.20 22.84 23.32 23.46
Montage 26.04 27.9 27.17 28.10 27.84 28.0 28.27 21.70 23.89 23.42 23.77 23.74 23.98 24.16
Leaves 23.36 24.68 24.38 24.81 25.04 25.25 25.47 18.73 20.91 20.25 20.58 20.86 21.40 21.57
StarFish 23.83 25.04 25.04 25.12 25.07 25.29 25.44 20.58 22.10 21.92 21.77 21.91 22.10 22.22
Monarch 24.46 25.82 25.78 25.88 25.73 26.10 26.32 20.22 22.52 22.23 22.24 22.11 22.61 22.95
Airplane 23.97 25.10 25.24 25.25 24.93 25.34 25.43 20.73 22.11 22.02 21.69 21.83 22.27 22.55
Paint 24.19 25.67 25.77 25.59 25.37 25.77 25.98 21.02 22.51 22.50 22.14 22.11 22.42 22.74
J.Bean 27.96 29.26 28.75 29.42 29.29 29.32 29.62 23.79 25.80 25.17 25.64 25.66 25.82 26.04
Fence 24.59 25.92 24.58 25.87 25.78 26.00 26.43 21.23 22.92 21.11 22.71 22.23 22.98 23.37
Parrot 24.87 25.90 25.84 25.82 25.71 25.95 26.09 21.38 22.96 22.71 22.79 22.53 23.04 23.19
Lena 27.74 29.05 28.42 28.95 28.90 29.01 29.24 24.41 25.95 25.30 25.96 25.71 25.93 26.20
Barbara 25.75 27.23 24.82 27.03 26.99 27.51 27.79 22.14 23.62 22.14 23.54 23.20 24.07 24.37
Boat 25.39 26.78 26.65 26.77 26.66 26.63 26.97 22.48 23.97 23.71 23.87 23.68 23.80 24.10
Hill 25.94 27.19 26.96 27.14 26.99 27.04 27.34 23.32 24.58 24.43 24.47 24.36 24.29 24.75
F.print 23.37 24.53 23.59 24.26 24.48 24.52 24.67 20.01 21.61 19.85 21.30 21.39 21.62 21.81
Man 25.66 26.81 26.72 26.72 26.67 26.68 26.94 22.88 24.22 24.07 23.98 24.02 24.01 24.36
Couple 24.84 26.46 26.24 26.35 26.19 26.30 26.65 22.07 23.51 23.32 23.27 23.15 23.21 23.55
Straw 20.99 22.29 21.93 22.51 22.30 22.65 22.74 18.33 19.43 18.84 19.43 19.10 19.42 19.67
AVE. 25.048 26.406 25.965 26.436 26.326 26.521 26.752 21.570 23.247 22.706 23.061 22.996 23.296 23.555
[17] A. Levin and B. Nadler. Natural image denoising: Optimality and inherent bounds. In CVPR, 2011.
[18] A. Levin, B. Nadler, F. Durand, and W. T. Freeman. Patch complexity, finite pixel correlations and optimal denoising. In ECCV, 2012.
[19] Z. Lin, R. Liu, and Z. Su. Linearized alternating direction method with adaptive penalty for low-rank representation. In NIPS, 2011.
[20] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust subspace segmentation by low-rank representation. In ICML, 2010.
[21] R. Liu, Z. Lin, F. De la Torre, and Z. Su. Fixed-rank representation for unsupervised visual learning. In CVPR, 2012.
[22] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration. In ICCV, 2009.
[23] Y. Mu, J. Dong, X. Yuan, and S. Yan. Accelerated low-rank visual recovery by random projection. In CVPR, 2011.
[24] R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In NIPS, 2010.
[25] N. Srebro, T. Jaakkola, et al. Weighted low-rank approximations. In ICML, 2003.
[26] S. Wang, L. Zhang, and Y. Liang. Nonlocal spectral prior model for low-level vision. In ACCV, 2012.
[27] J. Wright, Y. Peng, Y. Ma, A. Ganesh, and S. Rao. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In NIPS, 2009.
[28] D. Zhang, Y. Hu, J. Ye, X. Li, and X. He. Matrix completion by truncated nuclear norm regularization. In CVPR, 2012.
[29] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma. TILT: Transform invariant low-rank textures. IJCV, 99(1):1–24, 2012.
[30] Y. Zheng, G. Liu, S. Sugimoto, S. Yan, and M. Okutomi. Practical low-rank matrix approximation under robust l1-norm. In CVPR, 2012.
[31] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, 2011.