
Image Compression

Introduction and Overview
The field of image compression continues to grow at a rapid pace
As we look to the future, the need to store and transmit images will
only continue to increase faster than the available capability to process
all the data
Applications that require image compression are many and varied, such as:
1. Internet
2. Businesses
3. Multimedia
4. Satellite imaging
5. Medical imaging

 Compression algorithm development starts with
applications to two-dimensional (2-D) still images
 After the 2-D methods are developed, they are often
extended to video (motion imaging). However, we will
focus on image compression of single frames of image
data
 Image compression involves reducing the size of image
data files, while retaining necessary information
 Image segmentation methods, which are primarily a data
reduction process, can be used for compression
 The reduced file created by the compression process is
called the compressed file and is used to reconstruct the
image, resulting in the decompressed image
 The original image, before any compression is performed,
is called the uncompressed image file
 The ratio of the size of the original, uncompressed image file to that of the
compressed file is referred to as the compression ratio
 The compression ratio is denoted by:

   Compression Ratio = SIZE_U / SIZE_C

where SIZE_U is the size of the uncompressed (original) image file and SIZE_C is
the size of the compressed file
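 For example, an uncompressed 256×256, 8-bit image occupies 65,536 bytes; if the
compressed file occupies 6,554 bytes, the compression ratio is
65,536/6,554 ≈ 10, usually written 10:1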

 The reduction in file size is necessary to meet the bandwidth
requirements for many transmission systems, and for the storage
requirements in computer databases

 Also, the amount of data required for digital images is enormous
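 For example, a single 512×512, 8-bit, 3-band color image contains
512 × 512 × 3 = 786,432 bytes, or about 6.3 million bits; even at an ideal
maximum rate of 56 kbps, transmitting this one image would take roughly
6,291,456/56,000 ≈ 112 seconds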

 This number is based on the actual transmission rate being the
maximum, which is typically not the case due to Internet traffic,
overhead bits and transmission errors

Example 10.1.5 applies maximum data rate to Example 10.1.4

 Now, consider the transmission of video images, where we need
multiple frames per second
 If we consider just one second of video data digitized at 640×480 pixels per
frame, at 15 frames per second for interlaced video, then:
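   640 × 480 pixels/frame × 8 bits/pixel × 15 frames/second = 36,864,000 bits
   (4,608,000 bytes, about 4.6 MB) for that single second of 8-bit grayscale
   video, assuming one byte per pixel; a 3-band color sequence would triple
   this figure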

 Applications requiring high speed connections, such as high definition
television, real-time teleconferencing, and transmission of multiband high
resolution satellite images, lead us to the conclusion that image compression
is not only desirable but necessary
 Key to a successful compression scheme is retaining necessary
information
 To understand “retaining necessary information”, we must
differentiate between data and information
1. Data:
• For digital images, data refers to the pixel gray level values that
correspond to the brightness of a pixel at a point in space
• Data are used to convey information, much like the way the
alphabet is used to convey information via words
2. Information:
• Information is an interpretation of the data in a meaningful way
• Information is an elusive concept; it can be application specific
 There are two primary types of image compression
methods:
1. Lossless compression methods:
• Allow for the exact recreation of the original image data, and can compress
complex images to at most 1/2 to 1/3 the original size (2:1 to 3:1
compression ratios)
• Preserve the data exactly
2. Lossy compression methods:
• Involve data loss; the original image cannot be re-created exactly
• Can compress complex images 10:1 to 50:1 and retain high quality, and
100:1 to 200:1 for lower quality, but acceptable, images

 Compression algorithms are developed by taking
advantage of the redundancy that is inherent in
image data

 Four primary types of redundancy that can be found in images are:
1. Coding
2. Interpixel
3. Interband
4. Psychovisual redundancy

1. Coding redundancy
 Occurs when the data used to represent the
image is not utilized in an optimal manner
2. Interpixel redundancy
 Occurs because adjacent pixels tend to be highly
correlated, in most images the brightness levels
do not change rapidly, but change gradually
3. Interband redundancy
 Occurs in color images due to the correlation
between bands within an image – if we extract the
red, green and blue bands they look similar
4. Psychovisual redundancy
 Some information is more important to the human
visual system than other types of information
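
To illustrate interpixel redundancy, the sketch below decorrelates an image by
replacing each pixel with its difference from its left-hand neighbor; this is a
minimal example assuming an 8-bit grayscale image stored as a NumPy array, with
illustrative function names:

    import numpy as np

    def decorrelate(image):
        # Difference each pixel from its left neighbor; prepending a zero
        # keeps the first pixel of every row, so no information is lost.
        return np.diff(image.astype(np.int16), axis=1, prepend=0)

    def recorrelate(diffs):
        # Cumulative summing along each row exactly inverts the differencing.
        return np.cumsum(diffs, axis=1).astype(np.uint8)

Because neighboring pixels rarely differ by much, most of the decorrelated
values are small and repetitive, which is exactly the redundancy a subsequent
coding stage can remove.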

 The key in image compression algorithm development
is to determine the minimal data required to retain the
necessary information
 The compression is achieved by taking advantage of
the redundancy that exists in images
 If the redundancies are removed prior to
compression, for example with a decorrelation
process, a more effective compression can be
achieved
 To help determine which information can be removed
and which information is important, the image fidelity
criteria are used
 These measures provide metrics for determining
image quality
 It should be noted that the information required is
application specific, and that, with lossless schemes,
there is no need for a fidelity criterion
Fidelity Criteria
 Removal of “irrelevant visual information” involves a loss
of real or quantitative image information.
 Two types of criteria can be used for such an assessment:
(1) objective fidelity criteria, and (2) subjective fidelity
criteria.
 (1) Objective fidelity criteria: used when the information loss can
be expressed as a mathematical function of the input and
output of the compression process.
 (2) Subjective fidelity criteria: measure image quality through the
subjective evaluations of human observers, which is often more
appropriate.

Fidelity Criteria

 A commonly used objective fidelity criterion is the root-mean-square (RMS)
error between the original image I(r,c) and the decompressed image Î(r,c),
both of size M×N, with the sums taken over all rows r and columns c:

   e_RMS = sqrt[ (1/MN) Σ_r Σ_c ( Î(r,c) − I(r,c) )² ]

 The mean-squared signal-to-noise ratio of the output image is defined as:

   SNR_ms = [ Σ_r Σ_c Î(r,c)² ] / [ Σ_r Σ_c ( Î(r,c) − I(r,c) )² ]

 Information loss can be measured using these criteria
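
A minimal sketch of these two measures, assuming the original and decompressed
images are same-sized 8-bit grayscale NumPy arrays (the function names are
illustrative):

    import numpy as np

    def rms_error(original, decompressed):
        # Root-mean-square error between the two images.
        diff = decompressed.astype(np.float64) - original.astype(np.float64)
        return np.sqrt(np.mean(diff ** 2))

    def snr_ms(original, decompressed):
        # Mean-squared signal-to-noise ratio of the output image.
        out = decompressed.astype(np.float64)
        err = out - original.astype(np.float64)
        return np.sum(out ** 2) / np.sum(err ** 2)

A lower RMS error, or a higher SNR_ms, indicates a decompressed image that is
numerically closer to the original.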

Compression System Model
• The compression system model consists of two parts:
1. The compressor
2. The decompressor
• The compressor consists of a preprocessing stage and an
encoding stage, whereas the decompressor consists of
a decoding stage followed by a post processing stage

• The compressor can be broken into the following stages:
1. Data reduction: Image data can be reduced by gray
level and/or spatial quantization, or can undergo any
desired image improvement (for example, noise
removal) process
2. Mapping: Involves mapping the original image data into
another mathematical space where it is easier to
compress the data
3. Quantization: Involves taking potentially continuous
data from the mapping stage and putting it in discrete
form
4. Coding: Involves mapping the discrete data from the
quantizer onto a code in an optimal manner
• A compression algorithm may consist of all of these stages,
or of only one or two of them; the sketch below walks an image through all four
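
Below is a minimal sketch of the four stages applied in sequence, assuming an
8-bit grayscale image stored as a NumPy array; the stage boundaries, function
name, and parameters are illustrative rather than a standard implementation:

    import numpy as np

    def compress(image, levels=16):
        # 1. Data reduction: quantize 256 gray levels down to `levels`.
        step = 256 // levels
        reduced = (image // step).astype(np.int16)   # values 0..levels-1
        # 2. Mapping: difference each pixel from its left neighbor so
        #    that values cluster near zero (a simple decorrelation).
        mapped = np.diff(reduced, axis=1, prepend=0)
        # 3. Quantization: the mapped values are already discrete here;
        #    a lossier scheme could coarsen the differences further.
        quantized = mapped
        # 4. Coding: run-length code the flattened difference values.
        flat = quantized.ravel()
        codes, run_val, run_len = [], flat[0], 1
        for v in flat[1:]:
            if v == run_val:
                run_len += 1
            else:
                codes.append((int(run_val), run_len))
                run_val, run_len = v, 1
        codes.append((int(run_val), run_len))
        return codes, image.shape, step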

• The decompressor can be broken down into the following
stages:
1. Decoding: Takes the compressed file and reverses the
original coding by mapping the codes to the original,
quantized values
2. Inverse mapping: Involves reversing the original
mapping process
3. Post processing: Involves enhancing the look of the
final image
• This may be done to reverse any preprocessing, for
example, enlarging an image that was shrunk in the
data reduction process
• In other cases the post processing may be used to
simply enhance the image to ameliorate any artifacts
from the compression process itself
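
Continuing the illustrative sketch above, a matching decompressor reverses each
stage:

    import numpy as np

    def decompress(codes, shape, step):
        # 1. Decoding: expand the run-length codes back into the
        #    quantized difference values.
        flat = np.concatenate([np.full(n, v, dtype=np.int16) for v, n in codes])
        diffs = flat.reshape(shape)
        # 2. Inverse mapping: cumulative summing along each row undoes
        #    the neighbor differencing.
        reduced = np.cumsum(diffs, axis=1)
        # 3. Post processing: map each quantization level back to an
        #    8-bit gray value at the center of its bin.
        return (reduced * step + step // 2).astype(np.uint8)

Running decompress(*compress(image)) returns the image with every gray level
snapped to the center of its quantization bin; the round trip loses
information only in the data reduction stage.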

LOSSLESS COMPRESSION METHODS

 No loss of data; the decompressed image is exactly the same as the
uncompressed image
 Required for medical images or any images used in courts
 Lossless compression methods typically provide about a
10% reduction in file size for complex images
 These methods can provide substantial compression for
simple images
 However, lossless compression techniques may be used for
both preprocessing and post processing in image
compression algorithms to obtain the extra 10% compression

Huffman Coding
 Huffman proposed an algorithm in 1952
 It is the most popular technique for removing coding
redundancy
 Huffman coding yields the smallest possible number
of code symbols per source symbol.
 The resulting code is optimal for a fixed value of n, subject
to the constraint that the source symbols be coded one at a
time.

Huffman Coding
• Forward Pass
1. Sort probabilities per symbol (e.g., gray-levels)
2. Combine the lowest two probabilities
3. Repeat Step 2 until only two probabilities remain.

Huffman Coding
 Backward Pass
Assign code symbols (0s and 1s) going backwards through the reductions

The average length of the resulting code is

   L_avg = Σ_i l(a_i) P(a_i)

where l(a_i) is the number of bits in the code for symbol a_i and P(a_i) is
its probability
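
A compact sketch of both passes, assuming the symbol probabilities are already
known; the symbols, probabilities, and function name below are illustrative:

    import heapq

    def huffman_codes(probabilities):
        # Forward pass: repeatedly combine the two lowest-probability
        # groups (the integer field breaks ties between equal probabilities).
        heap = [(p, i, [s]) for i, (s, p) in enumerate(probabilities.items())]
        heapq.heapify(heap)
        codes = {s: "" for s in probabilities}
        while len(heap) > 1:
            p1, _, group1 = heapq.heappop(heap)
            p2, _, group2 = heapq.heappop(heap)
            # Backward pass, done incrementally: each merge prefixes one
            # more bit onto the codes of the symbols in each group.
            for s in group1:
                codes[s] = "0" + codes[s]
            for s in group2:
                codes[s] = "1" + codes[s]
            heapq.heappush(heap, (p1 + p2, id(group1), group1 + group2))
        return codes

    probs = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}  # illustrative gray-level probabilities
    codes = huffman_codes(probs)
    print(codes, sum(len(codes[s]) * p for s, p in probs.items()))
    # average length: 0.4(1) + 0.3(2) + 0.2(3) + 0.1(3) = 1.9 bits/symbol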

Run Length Coding
 Run-length coding (RLC) identifies runs of consecutive
identical pixel values and encodes the image as a
sequence of runs
 Each row of the image is written as its own sequence
of runs
 For a binary image, each run is a stretch of consecutive
black or white pixels

Run Length Coding (Cont.)
 Given
0 0 1 1 1
0 0 0 0 0
0 0 0 0 0
1 1 1 1 1
1 1 1 1 1
 Horizontal Application
 Run-length vectors are calculated row-wise
 Find the maximum run length, which sets the number of bits needed for the
length field
 Total bits for the compressed image = number of run vectors × (bits used to
represent the max run length + bits per pixel value)
 Compression ratio = bits for the original image / bits for the
compressed image
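
Applying these formulas to the 5×5 image above: row one codes as the runs
(0,2),(1,3), rows two and three each as (0,5), and rows four and five each as
(1,5), i.e. six runs in total. The maximum run length of 5 needs 3 bits, and
each binary pixel value needs 1 bit, so the compressed image takes
6 × (3 + 1) = 24 bits against 25 bits for the original, a compression ratio of
25/24 ≈ 1.04. A minimal row-wise coder, with illustrative names:

    def rle_rows(image):
        # Encode each row as a list of (pixel value, run length) pairs.
        runs = []
        for row in image:
            val, length = row[0], 1
            for pixel in row[1:]:
                if pixel == val:
                    length += 1
                else:
                    runs.append((val, length))
                    val, length = pixel, 1
            runs.append((val, length))
        return runs

    image = [[0, 0, 1, 1, 1],
             [0, 0, 0, 0, 0],
             [0, 0, 0, 0, 0],
             [1, 1, 1, 1, 1],
             [1, 1, 1, 1, 1]]
    print(rle_rows(image))   # [(0, 2), (1, 3), (0, 5), (0, 5), (1, 5), (1, 5)]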
Lossy Compression Methods
Lossy compression methods are required to achieve
high compression ratios with complex images
They provide tradeoffs between image quality and
degree of compression, which allows the compression
algorithm to be customized to the application

 With more advanced methods, images can be
compressed 10 to 20 times with virtually no visible
information loss, and 30 to 50 times with minimal
degradation
 Newer techniques, such as JPEG2000, can achieve
reasonably good image quality with compression
ratios as high as 100 to 200
 Image enhancement and restoration techniques can
be combined with lossy compression schemes to
improve the appearance of the decompressed image
 In general, a higher compression ratio results in a
poorer image, but the results are highly image
dependent – application specific
 Lossy compression can be performed in both the
spatial and transform domains. Hybrid methods use
both domains.
