Matrix Decomposition Methods in Image Processing
Lecture Seven
Course: DSP
Stage: Third
Lecturer: Asst. Lect. Riyam Thaer Ahmed
1. Some Types of Matrix Decomposition Methods
2. The Effect of the Matrix Decomposition Methods on Images
a. The Effect of SVD Method on image
b. The Effect of Hessenberg Decomposition Method on image
c. The Effect of QR Decomposition Method on image
d. The Effect of LU Decomposition Method on image
[Figure: the effect of each decomposition method on a sample image]
- SVD: A = U S V^T (original image and the result of applying the SVD)
- Hessenberg decomposition: A = P H P^T (images of the P and H factors)
- QR decomposition: A = Q R (original image and images of the Q and R factors)
- LU decomposition: A = L U (grayscale original image and images of the L and U factors)
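The four factorizations listed above can be computed and verified numerically. A minimal sketch, assuming NumPy/SciPy (the lecture itself works in MATLAB, so the SciPy function names here are equivalents I am supplying, not the lecture's code):

```python
import numpy as np
from scipy.linalg import hessenberg, lu, qr, svd

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # square, since Hessenberg needs it

# SVD: A = U S V^T
U, s, Vt = svd(A)
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Hessenberg: A = P H P^T, with P orthogonal and H upper Hessenberg
H, P = hessenberg(A, calc_q=True)
assert np.allclose(A, P @ H @ P.T)

# QR: A = Q R, with Q orthogonal and R upper triangular
Q, R = qr(A)
assert np.allclose(A, Q @ R)

# LU: SciPy returns A = Pm L U, where Pm is a row-permutation matrix
Pm, L, Um = lu(A)
assert np.allclose(A, Pm @ L @ Um)
print("all four factorizations reconstruct A")
```

Applied to an image, the same calls are simply run on the matrix of pixel intensities, which is how the factor images in the figure above are produced.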
Let A be any m×n matrix. Then there are orthogonal matrices U (m×m) and V (n×n), and an m×n diagonal matrix S, such that

A = U S V^T,

where the diagonal entries of S are the singular values of A and all other entries of S are zero.
2. Theorem:
Any m×n real matrix A can be factored into a product of the form U S V^T, called the SVD of A, where U and V are orthogonal matrices and S is an m×n diagonal matrix whose diagonal entries, called the singular values of A, are all real, uniquely determined, and satisfy

σ1 ≥ σ2 ≥ … ≥ σk ≥ 0,  where k = min(m, n).
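The theorem's claims can be checked numerically. A minimal sketch in NumPy (the lecture itself uses MATLAB, so the function names here are an assumption on my part): U and V come out orthogonal, and the singular values are nonnegative and sorted in decreasing order.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # any m x n real matrix, here m=5, n=3
U, s, Vt = np.linalg.svd(A)              # full_matrices=True: U is 5x5, Vt is 3x3

assert np.allclose(U.T @ U, np.eye(5))   # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(3)) # V is orthogonal
# sigma_1 >= sigma_2 >= ... >= sigma_k >= 0, with k = min(m, n) = 3
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)

# Rebuild the m x n diagonal matrix S and verify A = U S V^T
S = np.zeros((5, 3))
np.fill_diagonal(S, s)
assert np.allclose(A, U @ S @ Vt)
```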
Let σj denote the j-th singular value along the diagonal of S for j = 1, …, k. If uj and vj represent the j-th column vectors of U and V, respectively, then A can be written as a sum of rank-one matrices:

A = σ1 u1 v1^T + σ2 u2 v2^T + … + σk uk vk^T.

The truncated sum A_r = σ1 u1 v1^T + … + σr ur vr^T, which keeps only the r largest singular values, can adequately represent the original image given by A even if r is much smaller than k, because we are using the largest singular values first. If σr > 0, then A_r is a rank-r approximation to A. Students can reconstruct the images using the SVD with different ranks r. The total storage for A_r will be

T_s(A_r) = r (m + n + 1).
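The truncation and the storage count can be sketched in a few lines. This is a NumPy illustration rather than the lecture's MATLAB code; the 497×498 image size below is taken from the grayscale example used later in the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = min(A.shape)

# Full outer-product expansion: A = sum_{j=1}^{k} sigma_j u_j v_j^T
A_full = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(k))
assert np.allclose(A, A_full)

# Rank-r truncation keeps only the r largest singular values
r = 2
A_r = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(r))
assert np.linalg.matrix_rank(A_r) == r

# Storage count T_s(A_r) = r*(m + n + 1): r singular values plus
# r columns each of U (length m) and of V (length n)
def truncated_storage(r, m, n):
    return r * (m + n + 1)

m, n = 497, 498                  # image size from the lecture's example
full = m * n                     # 247506 entries in the original image
for rr in (10, 30, 70):
    ratio = truncated_storage(rr, m, n) / full
    print(f"r={rr}: {100 * ratio:.1f}% of full storage")
```

Running the loop shows how quickly the storage fraction grows with r, which is why small ranks are attractive.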
The integer r can be chosen considerably smaller than n, and the digital image corresponding to A_r will still be very close to the original image. However, different choices of r will give different corresponding images and different storage requirements. For typical choices of r, the storage required for A_r will be less than 20 percent of that of the original image.
Using the command subplot, students can plot all of these approximations along with the original image in the same window for easy comparison.
Moreover, they can compute the error between the original image and its approximations. One way of doing this is through the Frobenius norm of a matrix, which is defined as

‖A‖_F = ( Σ_{i=1}^{m} Σ_{j=1}^{n} |a_ij|² )^(1/2).
Students can compute the relative error in the Frobenius norm of the image A at different ranks and check whether the norm results roughly agree with the error perceived visually. They can investigate, for example, how large the rank needs to be so that the relative error (in the Frobenius norm) is less than 5%.
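This investigation can be scripted directly. A hedged NumPy sketch (a small random matrix stands in for the image): the relative Frobenius error of A_r is computed at each rank, and the first rank bringing it under 5% is found.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 50))        # stand-in for an image matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def rel_frobenius_error(r):
    # ||A - A_r||_F / ||A||_F, with A_r the rank-r SVD truncation
    A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return np.linalg.norm(A - A_r) / np.linalg.norm(A)

# Smallest rank whose relative error drops below 5%; guaranteed to exist
# since the error is exactly 0 at full rank k = min(m, n)
r5 = next(r for r in range(1, len(s) + 1) if rel_frobenius_error(r) < 0.05)
print("rank needed for <5% relative Frobenius error:", r5)
```

The error decreases monotonically with r because each added term removes the next-largest singular value from the residual.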
[Figure: grayscale original image of size 497×498]
As we can see, the 10th iteration image contains only 100 entries; by the 30th iteration we already get an image close to the original; and the 70th iteration, i.e. a 70×70 matrix with 4900 entries, significantly reduces the original image of size 497×498, which has 247,506 entries. So there is no need to go up to the 100th iteration.
In the following examples, we will show how the SVD works in several applications in DIP.
For conceptual and stability reasons, the SVD has become more and more popular in the signal processing area. The SVD is an attractive algebraic transform for image processing, and it has prominent properties in imaging. Although some SVD properties are fully utilized in image processing, others still need more investigation and contribution.
c- The SVD packs the maximum signal energy into as few coefficients as possible. It has the ability to adapt to the variations in the local statistics of an image. However, the SVD is an image-adaptive transform; the transform itself needs to be represented (stored) in order to recover the data.
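The energy-packing property can be illustrated with a small sketch (NumPy; the smooth test "image" below is my own construction, not from the lecture): for highly correlated data, almost all of the energy Σσj² concentrates in the first few singular values.

```python
import numpy as np

rng = np.random.default_rng(3)
# A smooth, highly correlated stand-in for an image: a rank-one outer
# product plus a little noise, so energy piles into the leading sigmas
x = np.linspace(0, 1, 64)
A = np.outer(np.sin(2 * x), np.cos(3 * x)) + 0.01 * rng.standard_normal((64, 64))

s = np.linalg.svd(A, compute_uv=False)   # singular values only
energy = np.cumsum(s**2) / np.sum(s**2)  # cumulative energy fraction
print("energy captured by the top 5 singular values:", energy[4])
```

For such correlated data the top few coefficients carry nearly all the energy, which is exactly why SVD truncation compresses images well; for pure noise the energy would instead spread across all singular values.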
The SVD can be used to approximate a matrix by decomposing the data into an optimal estimate of the signal and the noise components. This property is one of the
Al-Mustaqbal University
College of Science
Intelligent Medical System Department