
Image Compression using SVD

Naman Gupta (2021Btech136)


Karmanpreet Singh (2021Btech138)

Abstract

Your abstract.

1 Introduction

Your introduction goes here! Simply start writing your document and use the Recompile button to view the updated PDF preview. Examples of commonly used commands and features are listed below, to help you get started.

Once you're familiar with the editor, you can find various project settings in the Overleaf menu, accessed via the button in the very top left of the editor. To view tutorials, user guides, and further documentation, please visit our help library, or head to our plans page to choose your plan.
2 Methodology

2.1 About Singular Value Decomposition

A matrix of size m×n is a grid of real numbers consisting of m rows and n columns. In linear algebra, a branch of mathematics, matrices of size m×n describe linear mappings from n-dimensional to m-dimensional space. The word linear roughly means that straight lines map to straight lines and the origin in n-dimensional space maps to the origin in m-dimensional space. When we have an (m×n)-matrix A and an (n×k)-matrix B, we can compute the product AB, which is an (m×k)-matrix. The mapping corresponding to AB is exactly the composition of the mappings corresponding to A and B respectively.
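As a quick illustration of this composition property, here is a minimal Python/NumPy sketch; the matrices and dimensions are arbitrary examples, not taken from this report:

import numpy as np

# B maps 2-dimensional space to 4-dimensional space,
# A maps 4-dimensional space to 3-dimensional space.
A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
x = np.random.rand(2)

# Applying B and then A agrees with applying the product AB directly.
assert np.allclose(A @ (B @ x), (A @ B) @ x)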
Singular Value Decomposition (SVD) states that every (m×n)-matrix A can be written as a product A = UΣV⊤, where U and V are orthogonal matrices and the matrix Σ consists of descending non-negative values on its diagonal and zeros elsewhere. The entries

σ1 ≥ σ2 ≥ σ3 ≥ . . . ≥ 0

on the diagonal of Σ are called the singular values (SVs) of A. Geometrically, Σ maps the j-th unit coordinate vector of n-dimensional space to the j-th coordinate vector of m-dimensional space, scaled by the factor σj. Orthogonality of U and V means that they correspond to rotations (possibly followed by a reflection) of m-dimensional and n-dimensional space respectively. Therefore only Σ changes the length of vectors.

Figure 1: SVD
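This decomposition can be computed with standard numerical libraries. A minimal NumPy sketch (the matrix here is an arbitrary example):

import numpy as np

A = np.random.rand(6, 4)  # an example m×n matrix
# full_matrices=False gives the compact form of the decomposition.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The singular values come back in descending order, as stated above.
assert np.all(s[:-1] >= s[1:])
# U, diag(s) and Vt multiply back to A (up to floating-point rounding).
assert np.allclose(A, U @ np.diag(s) @ Vt)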

2.2 Low Rank Approximation

The best rank-k approximation of an m×n matrix A, where k < s = min(m, n), for some matrix norm ∥·∥, is one that minimizes the following problem:

min_{Ak} ∥A − Ak∥   such that rank(Ak) ≤ k.

Under the induced 2-norm, the best rank-k approximation is given by the sum of the first k outer products of the left and right singular vectors, scaled by the corresponding singular values (where σ1 ≥ σ2 ≥ σ3 ≥ . . . ≥ σs):

Ak = σ1 u1 v1⊤ + · · · + σk uk vk⊤

Observe that the norm of the difference between the best approximation and the matrix under the induced 2-norm is the magnitude of the (k+1)-th singular value of the matrix:

∥A − Ak∥2 = ∥σk+1 uk+1 vk+1⊤ + · · · + σs us vs⊤∥2 = σk+1

Note that the best rank-k approximation to A can be stored efficiently by storing only the k singular values σ1, σ2, σ3, . . . , σk, the k left singular vectors u1, u2, . . . , uk, and the k right singular vectors v1, v2, . . . , vk.
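The truncated factors translate directly into code. A minimal sketch (the helper name rank_k_approximation and the test sizes are our own, for illustration):

import numpy as np

def rank_k_approximation(A, k):
    # Keep only the first k singular values and singular vectors.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.rand(8, 5)
Ak = rank_k_approximation(A, k=2)

# The induced 2-norm error is exactly the (k+1)-th singular value.
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A - Ak, ord=2), s[2])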
2.3 Using SVD for image compression

We can decompose a given image into the three color channels red, green and blue. Each channel can be represented as an (m×n)-matrix with values ranging from 0 to 255. We will now compress the matrix A representing one of the channels.

To do this, we compute an approximation to the matrix A that takes only a fraction of the space to store. Now here's the great thing about SVD: the data in the matrices U, Σ and V is sorted by how much it contributes to the matrix A in the product. That enables us to get quite a good approximation by simply using only the most important parts of the matrices.

We now choose a number k of singular values that we are going to use for the approximation. The higher this number, the better the quality of the approximation, but also the more data is needed to encode it. We now take only the first k columns of U and V and the upper-left (k×k) square of Σ, containing the k largest (and therefore most important) singular values. We then have

Figure 2: SVD

The amount of data needed to store this approximation is proportional to the colored area:

compressed size = m·k + n·k + k = k·(1 + m + n)

(Actually, slightly less space is needed due to the orthogonality of U and V.)
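Putting the pieces together for one channel, here is a minimal sketch assuming Pillow and NumPy are available; the filename photo.png and the choice k = 50 are hypothetical placeholders:

import numpy as np
from PIL import Image

# Load an image and extract the red channel as an (m×n) matrix in 0..255.
# ("photo.png" is a placeholder filename.)
img = Image.open("photo.png").convert("RGB")
A = np.asarray(img, dtype=float)[:, :, 0]

k = 50  # singular values to keep; higher k = better quality, more data
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
Ak = np.clip(Ak, 0, 255).astype(np.uint8)  # map back to valid pixel values

m, n = A.shape
print("original values stored:  ", m * n)
print("compressed values stored:", k * (1 + m + n))

Repeating this for the green and blue channels and stacking the three results reassembles the compressed color image.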

Figure 3: The figure above shows different rank-k approximations of an image.

References
