
Department of

Master of Computer Applications

(MMA204T)
Mathematical Foundation for Computer Science

Report on
Singular Value Decomposition for image compression

Submitted by

ShriLakshmi M - RVCE23MCA018
Nimra Sadiya - RVCE23MCA026
Sharanya N - RVCE23MCA053

Under the guidance of


Prof. Satish V Motammanavar

RV COLLEGE OF ENGINEERING®, BENGALURU – 560059
(Autonomous Institution Affiliated to VTU, Belagavi)
Department of Master of Computer Applications

CERTIFICATE

This is to certify that Ms. ShriLakshmi M (RVCE23MCA018), Nimra Sadiya (RVCE23MCA026) and Sharanya N (RVCE23MCA053) of the 1st Semester Master of Computer Applications program have satisfactorily completed the course of Experiential Learning through Assignment in Mathematical Foundation for Computer Science (MMA204T) prescribed for the academic year 2023 – 2025.

Assignment marks

Phases        Max Marks    Obtained Marks
1             30
2             30
Total         60
Reduced to    30

Signature of Student        Signature of Faculty In-Charge        Signature of Director

INDEX

SL NO TOPIC PAGE NO
1.1 Abstract 04-04

2.1 Introduction 05-05

3.1 Singular Value Decomposition


a. Background 06-06
b. Process 07-07
c. Pictures 08-08
d. Data Analysis 09-11

4.1 Advantages Of SVD 12-12

5.1 Disadvantages Of SVD 13-13

6.1 Applications Of SVD 14-15

7.1 Steps to calculate SVD Of A Matrix 16-16

8.1 SVD Image Compression Measures 17-19

9.1 Process Of Image Compression Using SVD 20-20

10.1 Implementation 21-22

11.1 Conclusion 23-23

1.1 ABSTRACT

Image compression is a crucial technique employed in various fields such as digital image processing, computer vision, and multimedia communication. Singular Value Decomposition (SVD) is a powerful mathematical tool widely used for image compression due to its ability to capture the essential information of an image in a compact form. This project aims to explore the mathematical foundations of SVD-based image compression and its practical application.

The project begins with a comprehensive overview of the principles behind image compression and the role of SVD in this process. It delves into the mathematical concepts of SVD, explaining how it decomposes an image matrix into its constituent components of singular vectors and singular values. Theoretical aspects such as the low-rank approximation property of SVD and its implications for image compression are discussed in detail.

Furthermore, the project investigates practical implementation aspects, including the determination of optimal compression ratios and the trade-off between image quality and compression efficiency. Various experiments are conducted to demonstrate the effectiveness of SVD-based compression techniques on different types of images and datasets.

2.1 INTRODUCTION

Image compression deals with reducing the amount of data required to represent images. Since images are stored digitally, the goal of an image compression algorithm is to represent the image with the lowest possible number of bits. Redundancies present in the image file can be exploited in order to reduce the number of bits; redundancy can be defined as approximately repetitive patterns that can be represented more compactly without changing the perceived resolution of the image.

However, the quality of the image should not be compromised so much that it becomes incomprehensible to the user. A good image compression algorithm strikes a balance in this trade-off, and usually the requirements of the application dictate the balance between compression and data quality.

Image compression techniques can be broadly classified into two categories: lossless compression and lossy compression. Lossless compression is bounded by the entropy of the data, which limits how far the image can be reduced. It produces an exact copy of the original data and is known as reversible compression, since it does not degrade the quality of the image.

Lossy compression techniques discard the minute details and variations that the human eye is not fine-tuned to recognize. The amount of storage required for the image file can be reduced by eliminating such features. Lossy compression often compromises the quality of the image, but it can be used to meet storage space requirements. Lossy compression is irreversible because it degrades the data.

3.1 Singular Value Decomposition

a. Background
Singular Value Decomposition (SVD) is a foundational concept in linear algebra, offering a powerful method to decompose a matrix into simpler components. At its core, SVD breaks down a matrix A into three constituent matrices, U, Σ, and Vᵀ, so that A = UΣVᵀ. The matrix U contains the left singular vectors, which form an orthogonal basis for the column space (codomain), while V holds the right singular vectors, which form an orthogonal basis for the row space (domain). The diagonal matrix Σ captures the singular values, which convey the scale and importance of each pair of basis vectors.

This decomposition allows for a geometric understanding of the original matrix, revealing how it can be thought of as a combination of rotations and scalings. SVD finds extensive applications across various domains, including data compression, dimensionality reduction, solving linear systems, image processing, and collaborative filtering, making it a cornerstone of modern computational mathematics and data analysis.

SVD offers robustness and stability, especially in scenarios where matrices are ill-conditioned or singular. Its numerical stability makes it invaluable in practical computations, where precision and reliability are paramount.

Moreover, SVD provides a unique insight into the intrinsic structure of matrices, enabling researchers and practitioners to uncover hidden patterns and relationships within data. This versatility and reliability have cemented SVD as a fundamental tool in fields ranging from machine learning and signal processing to engineering and scientific computing, playing a crucial role in advancing our understanding and utilization of complex data structures.
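
In practice the decomposition can be computed directly with NumPy. The short sketch below is illustrative only (it uses a small random matrix, not any matrix from this report) and simply verifies that U, Σ, and Vᵀ multiply back to the original matrix.

import numpy as np

# Illustrative example: decompose a small random matrix into U, Sigma, V^T
A = np.random.rand(4, 3)

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s holds the singular values
Sigma = np.diag(s)                                # diagonal matrix of singular values

# U @ Sigma @ Vt reproduces A (up to floating-point error)
print(np.allclose(A, U @ Sigma @ Vt))   # True
print(s)                                # singular values in descending order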

b. Process

1. Convert the Image to a Matrix: An image is represented as a matrix where each element corresponds to a pixel value. For grayscale images, this matrix is typically m×n, where m is the number of rows (height) and n is the number of columns (width). For colour images, there are usually three matrices representing the red, green, and blue channels.

2. Perform SVD on the Image Matrix: Apply SVD to the image matrix. This decomposes the matrix into three separate matrices: U, Σ, and Vᵀ. The U and Vᵀ matrices contain orthogonal bases for the column space and row space of the original matrix, respectively. The Σ matrix contains the singular values along its diagonal.

3. Truncate Σ (Optional): To compress the image, truncate the Σ matrix by keeping only the largest singular values. This reduces the amount of information needed to represent the image.

4. Reconstruct the Image: Multiply the truncated U, Σ, and Vᵀ matrices to reconstruct the compressed image matrix. This reconstructed matrix is built from fewer singular values, giving a lossy approximation of the original image.

5. Visualization: Display the reconstructed image. Although it is compressed, the visual quality is often preserved well, especially if a sufficient number of singular values is retained.

By keeping only the most significant singular values, SVD allows for significant compression while retaining the essential features of the image. This compression technique finds applications in image storage, transmission over networks with limited bandwidth, and efficient processing in image-related tasks. The choice of how many singular values to retain typically depends on the desired level of compression and the acceptable loss in image quality.
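
A minimal sketch of these five steps in NumPy is given below. The file name lena_gray.png, the use of the Pillow library, and the choice k = 50 are illustrative assumptions, not details taken from this report.

import numpy as np
from PIL import Image

# Step 1: load a grayscale image as an m x n matrix (file name is a placeholder)
A = np.asarray(Image.open("lena_gray.png").convert("L"), dtype=float)

# Step 2: SVD of the image matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Steps 3-4: keep only the k largest singular values and rebuild the matrix
k = 50  # illustrative choice; larger k gives better quality but less compression
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Step 5: clip to the valid pixel range and save the compressed image for viewing
Image.fromarray(np.clip(A_k, 0, 255).astype(np.uint8)).save("compressed_k50.png")
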
c. Pictures

Original Image:

Compressed Images: k = 5, 25, 45, 65, 85, 105, 125, 145, 165, 185, 205

d. Data Analysis

The following is the data that we got from our experiment with
Singular Value Decomposition.

k      Image size (KB)    Error (MSE)        Compression Ratio
5      138                0.01752945165      0.6145251397
25     218                0.009504274225     0.3910614525
45     251                0.007137542359     0.2988826816
65     275                0.005789235824     0.2318435754
85     293                0.004861137596     0.1815642458
105    309                0.004151782269     0.1368715084
125    322                0.003585364946     0.1005586592
145    333                0.003115425366     0.06983240223
165    342                0.0027716585017    0.04469273743
185    349                0.00237558426      0.0251396648
205    355                0.002081830939     0.008379888268

These are the graphs we made to help visualize our data.

The first graph shows how the image file size changes as k (the number of singular values we keep) increases. As one would expect, each successive increase in k changes the file size by less than the previous one, because each additional singular value influences the image less than the one before it. We stop the experiment at k = 205 because we found that increasing k beyond that point results in a file larger than the original.

The second graph compares the error between the compressed file and the original file. The error decreases as we increase k, since more terms now influence the image, and as expected the curve bottoms out because each additional singular value has less impact than the previous one.

Mean Square Error (MSE) is how we measured the difference between two images. To find the MSE, square the difference between each pixel in the original image and the corresponding pixel in the compressed image. Summing these squared differences over every pixel and dividing by the number of pixels gives the error. As the name suggests, this number represents the average squared difference per pixel in the image.

Lastly, the third graph shows the compression ratio of SVD as k changes. Here we defined the compression ratio as 1 minus the ratio of the compressed file size to the original file size. This graph is the opposite of the image-size graph: as k increases the file size increases, which drives the ratio towards 0.
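
As a quick check of this definition (a sketch only: the original file size of roughly 358 KB is inferred from the table, not stated in the report), the k = 5 row can be reproduced as follows.

def compression_ratio(compressed_kb, original_kb):
    # Compression ratio as defined above: 1 - (compressed size / original size)
    return 1.0 - compressed_kb / original_kb

# Hypothetical usage: 138 KB compressed file against an original of about 358 KB
print(round(compression_ratio(138, 358), 4))   # ~0.6145, matching the first table row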

4.1 Advantages of SVD

1. High Compression Ratio: SVD can achieve high compression ratios while preserving image quality reasonably well. By retaining only the most significant singular values, which capture the most important features of the image, SVD can significantly reduce the amount of data required to represent the image.

2. Lossy Compression: While SVD is a lossy compression technique, it often preserves visual quality satisfactorily, especially when a sufficient number of singular values is retained. This makes it suitable for applications where some loss of image fidelity is acceptable in exchange for reduced storage or transmission requirements.

3. Mathematical Simplicity: The mathematical framework of SVD is well defined and relatively straightforward to implement. This simplicity makes it accessible and efficient for image compression tasks.

4. Adaptability: SVD is adaptable to various image types and content. It can effectively compress both grayscale and coloured images, as well as images with different structures and textures.

5.1 Disadvantages of SVD

1. Lossy Compression: While lossy compression can achieve high compression ratios, it inherently involves some loss of image information. Depending on the application and the degree of compression, this loss may result in noticeable degradation of image quality.

2. Computational Complexity: SVD involves computationally intensive operations, especially for the large matrices that represent high-resolution images. Calculating the SVD of such matrices can be time-consuming and resource-intensive, making it less practical for real-time or resource-constrained applications.

3. Storage Overhead: Storing the singular values and matrices required for reconstruction can introduce storage overhead, particularly for large images or a large number of images in a dataset.

4. Limited Control Over Compression: Unlike some other compression techniques, such as the discrete cosine transform (DCT) used in JPEG compression, SVD may offer less control over the degree of compression and the resulting image quality. It may be challenging to optimize compression parameters to achieve specific quality targets.

6.1 Applications of SVD

Some of the key applications of SVD include:


1. Data Compression: SVD is widely used for compressing data,
including images, audio signals, and text documents.
By retaining only the most significant singular values, SVD can
achieve high compression ratios while preserving essential
information.
2. Dimensionality Reduction: In machine learning and data
analysis, SVD is employed for reducing the dimensionality of
high-dimensional datasets.
It helps in capturing the most important features or components
of the data while discarding noise or irrelevant information,
thereby simplifying subsequent analysis tasks.
3. Recommendation Systems: SVD plays a crucial role in
collaborative filtering-based recommendation systems.
By decomposing user-item interaction matrices into latent
factors, SVD enables the discovery of hidden relationships and
preferences, facilitating personalized recommendations for
users.
4. Image Processing: In image processing, SVD is used for tasks
such as denoising, image compression, and image enhancement.
By decomposing images into singular values and vectors, SVD
allows for efficient representation and manipulation of image
data while preserving visual quality.
5. Signal Processing: SVD finds applications in signal processing
tasks such as noise reduction, feature extraction, and system
identification. It helps in separating signals from noise and
identifying dominant patterns or components within signals.

6. Latent Semantic Analysis (LSA): In natural language
processing (NLP), SVD is employed in LSA to analyse and
extract latent semantic information from large text corpora.
By decomposing term-document matrices, SVD helps in
identifying underlying concepts and relationships between
words and documents.
7. Biomedical Data Analysis: SVD is utilized in analysing
biomedical data, including gene expression data, protein
interaction networks, and medical imaging.
It aids in identifying meaningful patterns, clustering similar
samples, and uncovering biomarkers associated with diseases.
8. Eigenface Method: In facial recognition systems, the eigenface
method utilizes SVD to represent faces as linear combinations
of basis images (eigenfaces).
By decomposing face images into eigenfaces, SVD enables
efficient face representation and recognition.
9. Robotics and Control Systems: SVD is applied in robotics and
control systems for tasks such as system modelling, trajectory
planning, and robot calibration.
It helps in identifying dominant modes of system behaviour and
optimizing control strategies.

7.1 Steps to calculate SVD of a matrix

1) First, calculate AAᵀ and AᵀA.
2) Use AAᵀ to find the eigenvalues and eigenvectors that form the columns of U: (AAᵀ − λI)x = 0 [3].
3) Use AᵀA to find the eigenvalues and eigenvectors that form the columns of V: (AᵀA − λI)x = 0.
4) Divide each eigenvector by its magnitude to form the columns of U and V.
5) Take the square roots of the eigenvalues to obtain the singular values, and arrange them along the diagonal of Σ in descending order: σ1 ≥ σ2 ≥ … ≥ σr ≥ 0.
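
The sketch below follows these steps with NumPy's symmetric eigendecomposition and checks the result against numpy.linalg.svd. The small matrix is an arbitrary example, and U is recovered as A·vᵢ/σᵢ rather than from a separate eigendecomposition of AAᵀ, which keeps the signs of U and V consistent.

import numpy as np

# Small illustrative matrix (not taken from this report)
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Steps 1 and 3: eigenvalues/eigenvectors of A^T A give V and the squared singular values
eigvals, V = np.linalg.eigh(A.T @ A)      # eigh handles the symmetric matrix A^T A
order = np.argsort(eigvals)[::-1]         # sort eigenpairs in descending order
eigvals, V = eigvals[order], V[:, order]

# Step 5: singular values are the square roots of the eigenvalues
s = np.sqrt(np.clip(eigvals, 0.0, None))

# Steps 2 and 4: recover each column of U as A v_i / sigma_i (assumes nonzero sigma_i)
U = A @ V / s

Sigma = np.diag(s)
print(np.allclose(A, U @ Sigma @ V.T))        # True: A = U Σ V^T
print(s, np.linalg.svd(A, compute_uv=False))  # matches NumPy's singular values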

8.1 SVD Image Compression Measures

To measure the performance of SVD compression, the quantitative and qualitative quality of the compressed image is assessed by calculating the following three parameters.

1. Compression Ratio (CR): The compression ratio is defined as the ratio of the file size of the uncompressed image to that of the compressed image. For an M × N image reconstructed from K singular values,

CR = (M × N) / ((M + N) × K + K)

This formula assumes that each retained singular value must be stored, along with the corresponding left and right singular vectors. The term (M + N) × K represents the storage required for the singular vectors, and K represents the storage required for the singular values; the denominator therefore reflects the total storage required for the compressed image.
2. Mean Square Error (MSE): MSE is defined as the square of the difference between each pixel value of the original image and the corresponding pixel value of the compressed image, averaged over the entire image. It is computed to measure the quality difference between the original image A and the compressed image A_K as follows.

First, compute the element-wise squared difference between corresponding pixels in the original and compressed images:

E = (I − I′)²

Then take the mean of all the squared differences:

MSE = (1 / (M × N)) ∑_{i=1}^{M} ∑_{j=1}^{N} E_ij

where M is the number of rows in the image, N is the number of columns in the image, and E_ij is the squared difference between the pixel values at position (i, j) in the original and compressed images.
3. Peak Signal to Noise Ratio (PSNR): Peak signal-to-noise ratio is defined as the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. PSNR is usually expressed on the logarithmic decibel scale to accommodate signals with a wide dynamic range. In lossy compression, the quality of the compressed image is determined by calculating the PSNR: the signal in this case is the original data, and the noise is the error introduced by compression. The PSNR (in dB) is given by the equation below, where MAX_I is the maximum possible pixel value of the image.
The equation for PSNR is typically defined as:

PSNR = 10 · log₁₀(MAX² / MSE)

where MAX is the maximum possible pixel value of the image (usually 255 for an 8-bit image) and MSE is the mean squared error between the original and the compressed images. The MSE is calculated as:

MSE = (1 / (m × n)) ∑_{i=0}^{m−1} ∑_{j=0}^{n−1} (I_original(i, j) − I_compressed(i, j))²

where m and n are the dimensions of the image, and I_original(i, j) and I_compressed(i, j) are the pixel values of the original and compressed images at position (i, j), respectively.
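
All three measures can be computed in a few lines of NumPy. The sketch below is illustrative: the helper names are arbitrary, the test "image" is random, and the compression-ratio formula is the storage-based one defined in this section.

import numpy as np

def compression_ratio(M, N, K):
    # CR = (M x N) / ((M + N) x K + K), as defined above
    return (M * N) / ((M + N) * K + K)

def mse(original, compressed):
    # Mean of the element-wise squared pixel differences
    diff = original.astype(float) - compressed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, compressed, max_val=255.0):
    # PSNR (in dB) = 10 * log10(MAX^2 / MSE)
    return 10.0 * np.log10(max_val ** 2 / mse(original, compressed))

# Hypothetical usage with a random grayscale "image" and its rank-K approximation
A = np.random.randint(0, 256, size=(256, 256)).astype(float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
K = 40
A_K = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]

print(compression_ratio(256, 256, K), mse(A, A_K), psnr(A, A_K))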

9.1 Process of image compression using SVD

10.1 Implementation

import numpy as np

# Binary "image" matrix used in the report's demonstration
D = np.array([[0, 1, 1, 0, 1, 1, 0],
              [1, 1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1, 1, 1],
              [0, 1, 1, 1, 1, 1, 0],
              [0, 0, 1, 1, 1, 0, 0],
              [0, 0, 0, 1, 0, 0, 0]])

# plot_svd is a helper not shown in this excerpt; from its usage it appears to
# compute the SVD of D, plot the result, and return the factors U, S, V
U, S, V = plot_svd(D)
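
The report does not show the body of plot_svd. The version below is only a guessed, minimal sketch of what such a helper might do (compute the SVD and display a few rank-k reconstructions with Matplotlib); it is not the authors' actual implementation.

import numpy as np
import matplotlib.pyplot as plt

def plot_svd(D, ranks=(1, 2, 3)):
    """Compute the SVD of D and plot rank-k reconstructions (illustrative sketch)."""
    U, S, Vt = np.linalg.svd(D, full_matrices=False)
    fig, axes = plt.subplots(1, len(ranks) + 1, figsize=(3 * (len(ranks) + 1), 3))
    axes[0].imshow(D, cmap="gray")
    axes[0].set_title("original")
    for ax, k in zip(axes[1:], ranks):
        D_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]   # rank-k approximation of D
        ax.imshow(D_k, cmap="gray")
        ax.set_title(f"rank {k}")
    plt.show()
    return U, S, Vt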

11.1 Conclusion

In performing SVD compression for JPEG images, the values of the Compression Ratio and their variation with the corresponding singular values (SVD coefficients) were observed, and their relation is concluded to be a decreasing exponential function. A higher compression ratio can be achieved for smaller ranks. On the other hand, the computation time for the compressed images is the same for all the values of k taken.

It was also found that the fewer singular values were used, the smaller the resulting file size was. An increase in the number of SVD coefficients causes an increase in the resulting file size of the compressed image. As the number of SVD coefficients nears the rank of the original image matrix, the value of the Compression Ratio approaches one.

From the observations recorded, the Mean Square Error decreases with an increase in the number of SVD coefficients, while the PSNR correspondingly increases. Therefore, an optimum value of k must be chosen, with an acceptable error, which conveys most of the information contained in the original image and has an acceptable file size too.
