Maths Report
(MMA204T)
Mathematical Foundation for Computer Science
Report on
Singular Value Decomposition for image compression
Submitted by
ShriLaksh
ShriLakshmi M - RVCE23MCA018
Nimra Sadiya - RVCE23MCA026
Sharanya N - RVCE23MCA053
RV COLLEGE OF ENGINEERING®, BENGALURU – 560059
(Autonomous Institution Affiliated to VTU, Belagavi)
Department of Master of Computer Applications
CERTIFICATE
Assignment    Marks
1             30
2             30
Total         60
Reduced to    30
INDEX
SL NO   TOPIC      PAGE NO
1.1     Abstract   04
1.1 ABSTRACT
2.1 INTRODUCTION
3.1 Singular Value Decomposition
a. Background
Singular Value Decomposition (SVD) is a foundational concept in
linear algebra, offering a powerful method to decompose a matrix into
simpler components. At its core, SVD breaks down a matrix A into
three constituent matrices: U, Σ, and V^T. The matrix U contains the
left singular vectors, an orthogonal basis for the codomain (the
column space), while V^T holds the right singular vectors, an
orthogonal basis for the domain (the row space). The diagonal matrix
Σ captures the singular values, which convey the scale and importance
of each basis vector.
This decomposition allows for a geometric understanding of the
original matrix, revealing how it can be thought of as a combination
of rotations and scaling. SVD finds extensive applications across
various domains, including data compression, dimensionality
reduction, solving linear systems, image processing, and collaborative
filtering, making it a cornerstone of modern computational
mathematics and data analysis.
SVD offers robustness and stability, especially in scenarios where
matrices might be ill-conditioned or singular. Its numerical stability
makes it invaluable in practical computations, where precision and
reliability are paramount.
Moreover, SVD provides a unique insight into the intrinsic structure
of matrices, enabling researchers and practitioners to uncover hidden
patterns and relationships within data. This versatility and reliability
have cemented SVD as a fundamental tool in fields ranging from
machine learning and signal processing to engineering and scientific
computing, playing a crucial role in advancing our understanding and
utilization of complex data structures.
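The decomposition described above can be sketched with NumPy. This is a minimal illustration on an arbitrary example matrix, showing that the three factors exactly reconstruct the original:

```python
import numpy as np

# Arbitrary example matrix (values chosen only for illustration)
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# full_matrices=False gives the compact (economy) SVD
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# S holds the singular values in decreasing order
print(S)

# Multiplying the three factors back together recovers A
A_rec = U @ np.diag(S) @ Vt
print(np.allclose(A, A_rec))  # True
```

Note that NumPy returns the singular values as a 1-D array, so `np.diag(S)` rebuilds the diagonal matrix Σ before the product is formed.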
b. PROCESS
Original Image: (figure)
d. Data Analysis
The following is the data that we got from our experiment with
Singular Value Decomposition.
These are the graphs we made to help visualize our data:
The first graph represents how the image file size changes as k (the
number of singular values we keep) increases.
As one could expect, each increase in k changes the file size by less
than the previous element.
This is due to each consecutive element influencing the image less
than the previous one.
The reason that we stop our experiment at k = 205 is that we found
that increasing k beyond that results in a larger file size than the
original file.
The second graph compares the error between the compressed file and
the original file.
The error decreases as we increase k because we now have more
terms that influence the image, and as expected the curve bottoms out
due to each k having less impact than the previous one.
Mean Square Error (MSE) is the way we measured the difference
between two images.
To find the MSE, you square the difference between each pixel in the
original image and the corresponding pixel in the compressed image.
Summing these squared differences over every pixel and dividing by
the number of pixels gives the error. As the name suggests, this
number represents the average squared difference per pixel in the
image.
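The MSE calculation just described can be sketched in a few lines of NumPy (the two 2×2 "images" here are hypothetical toy values):

```python
import numpy as np

def mse(original, compressed):
    """Mean Square Error: average squared per-pixel difference."""
    diff = original.astype(float) - compressed.astype(float)
    return np.mean(diff ** 2)

# Toy 2x2 "images" (hypothetical pixel values)
orig = np.array([[10, 20], [30, 40]])
comp = np.array([[12, 20], [30, 36]])

print(mse(orig, comp))  # (4 + 0 + 0 + 16) / 4 = 5.0
```

Casting to float before subtracting avoids wrap-around when the inputs are unsigned 8-bit pixel arrays.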
4.1 Advantages of SVD
5.1 Disadvantages of SVD
6.1 Applications of SVD
6. Latent Semantic Analysis (LSA): In natural language
processing (NLP), SVD is employed in LSA to analyse and
extract latent semantic information from large text corpora.
By decomposing term-document matrices, SVD helps in
identifying underlying concepts and relationships between
words and documents.
7. Biomedical Data Analysis: SVD is utilized in analysing
biomedical data, including gene expression data, protein
interaction networks, and medical imaging.
It aids in identifying meaningful patterns, clustering similar
samples, and uncovering biomarkers associated with diseases.
8. Eigenface Method: In facial recognition systems, the eigenface
method utilizes SVD to represent faces as linear combinations
of basis images (eigenfaces).
By decomposing face images into eigenfaces, SVD enables
efficient face representation and recognition.
9. Robotics and Control Systems: SVD is applied in robotics and
control systems for tasks such as system modelling, trajectory
planning, and robot calibration.
It helps in identifying dominant modes of system behaviour and
optimizing control strategies.
7.1 Steps to calculate SVD of a matrix
8.1 SVD Image Compression Measures
CR = (M × N) / (K(M + N) + K)
Where:
M is the number of rows in the image.
N is the number of columns in the image.
K is the number of singular values retained.
E_ij represents the squared difference between the pixel values at
position (i, j) in the original and compressed images.
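The compression-ratio formula can be checked with a short sketch; the 512×512 image size and K = 50 used here are hypothetical values for illustration:

```python
def compression_ratio(M, N, K):
    """CR = (M*N) / (K*(M+N) + K): original pixel count divided by
    the number of values stored for a rank-K approximation
    (K columns of U, K rows of V^T, and K singular values)."""
    return (M * N) / (K * (M + N) + K)

# Hypothetical 512x512 image compressed with K = 50 singular values
print(compression_ratio(512, 512, 50))  # ≈ 5.11
```

A ratio above 1 means the rank-K representation stores fewer values than the raw image; as K approaches the matrix rank, the ratio falls toward 1, matching the conclusion later in this report.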
3. Peak Signal to Noise Ratio (PSNR) Peak signal-to-noise ratio
(PSNR) is defined as the ratio between the maximum possible power
of a signal and the power of corrupting noise that affects the fidelity
of its representation.
PSNR is usually expressed in terms of the logarithmic decibel
scale to accommodate signals with a wide range. In lossy
compression, the quality of compressed image is determined by
calculating PSNR. The signal in this case is the original data,
and the noise is the error introduced by compression. The PSNR
(in dB) is given by the equation below, where MAX is the maximum
possible pixel value of the image.
The equation for PSNR is typically defined as:
PSNR = 10 · log10(MAX² / MSE)
Where:
MAX is the maximum possible pixel value of the image
(usually 255 for an 8-bit image).
MSE is the Mean Squared Error between the original and
the compressed images.
The MSE is calculated as:
MSE = (1 / (m·n)) · Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} (I_original(i, j) − I_compressed(i, j))²
Where:
m and n are the dimensions of the image.
I_original(i, j) and I_compressed(i, j) are the pixel values of the
original and compressed images at position (i, j) respectively.
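The PSNR formula above translates directly into code; the toy 2×2 images are hypothetical values reused for illustration:

```python
import numpy as np

def psnr(original, compressed, max_val=255.0):
    """PSNR in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images: no noise at all
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 2x2 "images" (hypothetical pixel values); their MSE is 5.0
orig = np.array([[10, 20], [30, 40]])
comp = np.array([[12, 20], [30, 36]])

print(psnr(orig, comp))  # 10 * log10(65025 / 5) ≈ 41.14 dB
```

Higher PSNR means less distortion, which is why it moves in the opposite direction to MSE as more singular values are kept.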
9.1 Process of image compression using SVD
10.1 Implementation
import numpy as np

D = np.array([[0,1,1,0,1,1,0],
              [1,1,1,1,1,1,1],
              [1,1,1,1,1,1,1],
              [0,1,1,1,1,1,0],
              [0,0,1,1,1,0,0],
              [0,0,0,1,0,0,0],
             ])

# plot_svd is a helper (defined elsewhere in the report) that
# visualizes the factors and returns them
U, S, V = plot_svd(D)
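The rank-k compression step that the implementation visualizes can be sketched directly with NumPy on the same matrix D (the choice k = 2 here is an arbitrary illustration):

```python
import numpy as np

D = np.array([[0,1,1,0,1,1,0],
              [1,1,1,1,1,1,1],
              [1,1,1,1,1,1,1],
              [0,1,1,1,1,1,0],
              [0,0,1,1,1,0,0],
              [0,0,0,1,0,0,0]], dtype=float)

U, S, Vt = np.linalg.svd(D, full_matrices=False)

k = 2  # number of singular values to keep (hypothetical choice)
D_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, the Frobenius error of the rank-k
# approximation equals the root of the sum of the discarded
# squared singular values
err = np.linalg.norm(D - D_k)
print(np.round(D_k, 2))
```

Increasing k shrinks `err` monotonically, which is exactly the behaviour observed in the data-analysis graphs earlier in the report.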
11.1 Conclusion
In performing SVD compression for JPEG images, the values of the
Compression Ratio and their variation with the corresponding singular
values (SVD coefficients) are observed, and their relation is
concluded to be a decreasing exponential function.
A higher compression ratio can be achieved for smaller ranks.
On the other hand, the computation time for the compressed images is
the same for all the values of k taken.
It was also found that the fewer the singular values were used, the
smaller the resulting file size was. An increase in the number of SVD
coefficients causes an increase in the resulting file size of the
compressed image.
As the number of SVD coefficients nears the rank of the original
image matrix, the value of Compression Ratio approaches one.
From the observations recorded, the Mean Square Error decreases as
the number of SVD coefficients increases, while PSNR rises
correspondingly, since it varies inversely with the error.
Therefore, an optimum value for 'k' must be chosen, with an
acceptable error, which conveys most of the information contained in
the original image and has an acceptable file size too.