UNIT IV - Compression

The document discusses image compression, including its objectives, types of compression (lossy and lossless), sources of redundancy in digital images, and the basic components of a compression system: the encoder and the decoder. It describes how the encoder performs mapping, quantization, and coding of image data, while the decoder reverses this process to reconstruct the image. Fidelity criteria for measuring the information loss introduced by compression are also introduced.

19CS334

Fundamentals of Image Processing


Dr. Sivadi Balakrishna,
Associate Professor,
Department of CSE, VFSTR.

Department of Computer Science & Engineering 1


UNIT – IV
IMAGE COMPRESSION

 Image Compression
 Need for data compression
 Huffman Coding
 Run Length Encoding
 Shift Codes
 Arithmetic Coding
 Transform Coding
 JPEG Standard
 MPEG



IMAGE COMPRESSION
OBJECTIVES
 Be able to measure the amount of information in a digital image.
 Understand the main sources of data redundancy in digital
images.
 Know the difference between lossy and error-free compression,
and the amount of compression that is possible with each.
 Be familiar with the popular image compression standards, such
as JPEG and JPEG-2000, that are in use today.
 Understand the principal image compression methods, and how
and why they work.
 Be able to compress and decompress grayscale, color, and video
imagery.



Introduction

 Image compression is the art and science of
reducing the amount of data required to represent
an image.

 Applications
1. Internet,
2. Businesses,
3. Multimedia,
4. Satellite imaging,
5. Medical imaging



Fundamentals
The term data compression refers to the process
of reducing the amount of data required to
represent a given quantity of information
Data ≠ Information: various amounts of data can
be used to represent the same information
Data might contain elements that provide no
relevant information; such data is redundant. The
relative data redundancy is R = 1 − 1/C, where
C = b/b′ is the compression ratio (b bits before
compression, b′ bits after)
Data redundancy is a central issue in image
compression. It is not an abstract concept but a
mathematically quantifiable entity
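These definitions can be sketched directly; the function names and the example bit counts below are illustrative, not from the slides.

```python
# Sketch of the redundancy formulas above: C = b / b' and R = 1 - 1/C,
# where b and b' are the bit counts of the original and compressed data.

def compression_ratio(b: int, b_prime: int) -> float:
    """Compression ratio C = b / b'."""
    return b / b_prime

def relative_redundancy(b: int, b_prime: int) -> float:
    """Relative data redundancy R = 1 - 1/C."""
    return 1 - 1 / compression_ratio(b, b_prime)

# Example: a 256x256, 8-bit image (524,288 bits) compressed to 131,072 bits.
C = compression_ratio(256 * 256 * 8, 131072)    # C = 4.0 (a 4:1 ratio)
R = relative_redundancy(256 * 256 * 8, 131072)  # R = 0.75
```

A redundancy of 0.75 means three quarters of the original data carries no information beyond what the compressed representation keeps.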



Fundamentals
Image compression involves reducing the size of
image data files, while retaining necessary
information
Retaining necessary information depends upon
the application
Image segmentation methods, which are
primarily a data reduction process, can be used
for compression
The original image, before any compression is
performed, is called the uncompressed image file
The ratio of the size of the original, uncompressed
image file to that of the compressed file is referred
to as the compression ratio
Compression Ratio
The key in image compression algorithm development is to
determine the minimal data required to retain the necessary
information
The compression is achieved by taking advantage of the
redundancy that exists in images
If the redundancies are removed prior to compression, for
example with a decorrelation process, a more effective
compression can be achieved
To help determine which information can be removed and
which information is important, the image fidelity criteria are
used
These measures provide metrics for determining image quality
It should be noted that the information required is application
specific, and that, with lossless schemes, there is no need for
fidelity criteria
Compression Types

Compression is divided into two types:

 Error-Free (Lossless) Compression
 Lossy Compression



Data Redundancy
1. Coding redundancy
Occurs when the data used to represent the
image is not utilized in an optimal manner
2. Interpixel (Spatial/Temporal) redundancy
Occurs because adjacent pixels tend to be
highly correlated, in most images the
brightness levels do not change rapidly, but
change gradually
3. Psychovisual redundancy
Some information is more important to the
human visual system than other types of
information





Data Redundancy
1. Coding redundancy
Occurs when the data used to represent the
image is not utilized in an optimal manner

The total number of bits required to represent an
M × N image is M·N·L_avg, where L_avg is the average
number of bits used to represent each pixel.
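The average code length L_avg can be computed directly from the histogram; a minimal sketch with hypothetical probabilities and code lengths:

```python
# L_avg = sum_k l(r_k) * p(r_k): average code-word length in bits/pixel.
# The total bit count for an M x N image is then M * N * L_avg.

def average_code_length(probs, lengths):
    """Average bits per pixel for gray-level probabilities and code lengths."""
    return sum(p * l for p, l in zip(probs, lengths))

# Hypothetical 4-level histogram and a variable-length code.
probs   = [0.4, 0.3, 0.2, 0.1]   # p(r_k) from the histogram
lengths = [1, 2, 3, 3]           # l(r_k): bits per code word
L_avg = average_code_length(probs, lengths)   # 1.9 bits/pixel
total_bits = 256 * 256 * L_avg                # bits for a 256x256 image
```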



Fidelity Criteria

 The removal of “irrelevant visual” information
involves a loss of real or quantitative image
information. Because information is lost, a
means of quantifying the nature of the loss is
needed.
 Two types of criteria can be used for such an
assessment:
 (1) objective fidelity criteria, and (2) subjective
fidelity criteria.



Objective and Subjective Fidelity Criteria

 Objective fidelity criteria: the information loss can be
expressed as a mathematical function of the input and
output of the compression process.
 Subjective fidelity criteria: image quality is measured by
the subjective evaluations of human observers.
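A common objective fidelity criterion is the root-mean-square error between the original and decompressed images, and the PSNR derived from it. A minimal sketch over flat pixel lists; the sample values are hypothetical:

```python
import math

def rms_error(f, f_hat):
    """Root-mean-square error between original f and decompressed f_hat."""
    n = len(f)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, f_hat)) / n)

def psnr(f, f_hat, max_val=255):
    """Peak signal-to-noise ratio in dB (higher means less loss)."""
    e = rms_error(f, f_hat)
    return float("inf") if e == 0 else 20 * math.log10(max_val / e)

original = [52, 55, 61, 66]   # hypothetical pixel values
decoded  = [50, 55, 60, 68]
# rms_error(original, decoded) == 1.5
```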



Compression System Model

The compression system model consists of two
functional components:

The Encoder
The Decoder





Encoder

The encoder can be broken into the following stages:
1. Mapping: Involves mapping the original image data into
another mathematical space where it is easier to compress
the data
2. Quantization: Involves taking potentially continuous data
from the mapping stage and putting it in discrete form
3. Coding: Involves mapping the discrete data from the
quantizer onto a code in an optimal manner



Decoder

The decoder can be broken down into the following stages:

1. Decoding: Takes the compressed file and reverses the original
coding by mapping the codes back to the original, quantized values

2. Inverse mapping: Involves reversing the original mapping process

3. Postprocessing: Involves enhancing the look of the final image.
This may be done to reverse any preprocessing, for example,
enlarging an image that was shrunk in the data reduction process.
In other cases the postprocessing may simply enhance the image
to ameliorate artifacts from the compression process
Lossless Compression Methods

No loss of data: the decompressed image is exactly the same as
the uncompressed image
Required for medical images or any images used in courts
Lossless compression methods typically provide about a 10%
reduction in file size for complex images
Lossless compression methods can provide substantial compression
for simple images
Lossless compression techniques may also be used for
preprocessing and postprocessing in lossy compression algorithms
to obtain an extra 10% compression



Lossless Statistical Compression -
Variable Length Encoding
Huffman Coding
The Huffman code, developed by D. Huffman in 1952, is a minimum
length code
This means that given the statistical distribution of the gray levels
(the histogram), the Huffman algorithm will generate a code that is as
close as possible to the minimum bound, the entropy
The method results in an unequal (or variable) length code, where the
size of the code words can vary
For complex images, Huffman coding alone will typically reduce the
file by 10% to 50% (1.1:1 to 1.5:1), but this ratio can be improved to
2:1 or 3:1 by preprocessing for irrelevant information removal



Lossless Statistical Compression -
Variable Length Encoding
The Huffman algorithm can be described in five steps:
• Find the gray level probabilities for the image by finding the
histogram
• Order the input probabilities (histogram magnitudes) from smallest
to largest
• Combine the smallest two by addition
• GOTO step 2, until only two probabilities are left
• By working backward along the tree, generate code by alternating
assignment of 0 and 1
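These five steps can be sketched with a heap of probability/code-table pairs; the merge step prefixes one more bit as the algorithm works back up the tree. The symbol names and probabilities below are hypothetical:

```python
import heapq
import itertools

def huffman_code(probabilities):
    """Return a symbol -> bit-string map for a dict of symbol probabilities."""
    counter = itertools.count()  # tie-breaker so heap tuples stay comparable
    heap = [(p, next(counter), {s: ""}) for s, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # the two smallest probabilities
        p2, _, c2 = heapq.heappop(heap)
        # Combine them: prefix 0 to one subtree's codes and 1 to the other's.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

# Hypothetical gray-level probabilities taken from a histogram.
codes = huffman_code({"g0": 0.4, "g1": 0.3, "g2": 0.2, "g3": 0.1})
# The rarest gray levels receive the longest code words.
```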



[Figures: worked Huffman coding example and the resulting Huffman codes]


JPEG Standard

Source image data → statistical model → descriptors/symbols →
entropy encoder → compressed image data, driven by model tables
and entropy coding tables

The basic parts of a JPEG encoder



JPEG Standard

DCT-based encoder, operating on 8×8 blocks:

Source image data → FDCT → quantizer → statistical model →
entropy encoder → compressed image data, with table
specifications for the quantizer and the entropy encoder

The basic architecture of the JPEG Baseline system

The JPEG Baseline system is composed of:
 Sequential DCT-based mode
 Huffman coding



JPEG Standard

Step 1: The input image is divided into
small blocks of 8×8 pixels. Each block
therefore contains 64 units, and each
unit of the image is called a pixel.



JPEG Standard

Step 2: JPEG uses the [Y, Cb, Cr] color model
instead of the [R, G, B] model, so in the
second step RGB is converted to YCbCr.
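The full-range JFIF color conversion used in this step can be sketched as:

```python
# JFIF RGB -> YCbCr conversion (full range, chroma offset by 128).
def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# Pure white maps to maximum luma and neutral chroma:
# rgb_to_ycbcr(255, 255, 255) -> (255.0, 128.0, 128.0)
```

Separating luma (Y) from chroma (Cb, Cr) matters because the later steps can compress the chroma channels more aggressively than the luma channel.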



JPEG Standard

Step 3: After the color conversion, each block
is forwarded to the DCT. The DCT uses a cosine
function and does not use complex numbers. It
converts the information in a block of pixels
from the spatial domain to the frequency
domain.
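A direct (unoptimized) sketch of the forward 2-D DCT on an 8×8 block:

```python
import math

def dct_8x8(block):
    """Forward 2-D DCT of an 8x8 list-of-lists; returns 8x8 coefficients."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for x in range(8) for y in range(8)
            )
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A constant block has all its energy in the DC coefficient (u = v = 0).
flat = [[100] * 8 for _ in range(8)]
coeffs = dct_8x8(flat)   # coeffs[0][0] == 800.0, all other coefficients ~0
```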



JPEG Standard

Step 4: Humans cannot perceive the fine,
high-frequency detail of an image, so after
the DCT only the low-frequency coefficients
need to be preserved accurately; the
high-frequency coefficients can be represented
coarsely. Quantization is used to reduce the
number of bits per sample.
There are two types of quantization:
1. Uniform Quantization
2. Non-Uniform Quantization
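Quantization can be sketched as dividing each DCT coefficient by the matching entry of a quantization table and rounding; large divisors at high frequencies discard detail the eye barely notices. The table below is illustrative, not the standard JPEG luminance table:

```python
def quantize(coeffs, qtable):
    """Divide each 8x8 DCT coefficient by its table entry and round."""
    return [[round(coeffs[u][v] / qtable[u][v]) for v in range(8)]
            for u in range(8)]

# Illustrative table: divisors grow with frequency (u + v).
qtable = [[16 + 4 * (u + v) for v in range(8)] for u in range(8)]
coeffs = [[0.0] * 8 for _ in range(8)]
coeffs[0][0] = 800.0   # large DC term
coeffs[7][7] = 30.0    # small high-frequency term
q = quantize(coeffs, qtable)
# q[0][0] == 50 (kept); q[7][7] == 0 (30 / 72 rounds away to zero)
```

This rounding is where JPEG loses information: the many zeros it produces are what the zigzag scan and run-length coding exploit next.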



JPEG Standard

Step 5: The zigzag scan is used to map the
8×8 matrix to a 1×64 vector. Zigzag scanning
groups the low-frequency coefficients at the
top of the vector and the high-frequency
coefficients at the bottom, so the long runs
of zeros in the quantized matrix end up
together and can be removed efficiently.
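A compact sketch of the zigzag scan, sorting positions by anti-diagonal and alternating the traversal direction:

```python
def zigzag(block):
    """Map an 8x8 block to a 1x64 vector in JPEG zigzag order."""
    order = sorted(
        ((u, v) for u in range(8) for v in range(8)),
        key=lambda p: (p[0] + p[1],                       # anti-diagonal index
                       p[0] if (p[0] + p[1]) % 2 else p[1]),  # alternate direction
    )
    return [block[u][v] for u, v in order]

block = [[8 * u + v for v in range(8)] for u in range(8)]
vec = zigzag(block)
# vec[:6] == [0, 1, 8, 16, 9, 2], i.e. (0,0),(0,1),(1,0),(2,0),(1,1),(0,2)
```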



JPEG Standard

Step 6: The next step is vectoring: differential
pulse code modulation (DPCM) is applied to
the DC component. DC components are
large and varied, but they are usually close to
the DC value of the previous block, so DPCM
encodes only the difference between the
current block's DC value and the previous one.
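DPCM on the DC coefficients can be sketched as:

```python
def dpcm_encode(dc_values):
    """Encode each block's DC value as a difference from the previous one."""
    prev, diffs = 0, []
    for dc in dc_values:
        diffs.append(dc - prev)
        prev = dc
    return diffs

def dpcm_decode(diffs):
    """Rebuild the DC values by accumulating the differences."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

# Neighbouring DC values are close, so the differences stay small:
# dpcm_encode([50, 52, 51, 55]) -> [50, 2, -1, 4]
```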



JPEG Standard

Step 7: In this step, Run Length Encoding
(RLE) is applied to the AC components,
because the AC components contain many
zeros. They are encoded as (skip, value)
pairs, in which skip is the number of zeros
preceding a non-zero component and value
is the actual value of that non-zero
component.
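The (skip, value) coding can be sketched as follows; the (0, 0) end-of-block marker follows the usual JPEG convention:

```python
def rle_ac(ac):
    """Encode AC coefficients as (skip, value) pairs; skip counts zeros."""
    pairs, skip = [], 0
    for coeff in ac:
        if coeff == 0:
            skip += 1
        else:
            pairs.append((skip, coeff))
            skip = 0
    pairs.append((0, 0))   # end-of-block marker
    return pairs

# rle_ac([12, 0, 0, -3, 0, 5]) -> [(0, 12), (2, -3), (1, 5), (0, 0)]
```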



JPEG Standard

Step 8: In this step, the DPCM-coded DC
differences and the run-length-coded AC
pairs are Huffman coded.



MPEG Standard

 Moving Picture Experts Group

 Established in 1988

 Standardized under the International
Organization for Standardization (ISO)
and the International Electrotechnical
Commission (IEC)

 Official name is: ISO/IEC JTC1


SC29 WG11



MPEG Standard

 MPEG-1 : a standard for storage and


retrieval of moving pictures and
audio on storage media
 MPEG-2 : a standard for digital television
 MPEG-4 : a standard for multimedia
applications
 MPEG-7 : a content representation
standard for information search
 MPEG-21: a multimedia framework standard
that includes metadata for audio and video files



MPEG1 Standard

 First standard to be published by the MPEG


organization (in 1992)

 A standard for storage and retrieval of


moving pictures and audio on storage media

 Example formats: VideoCD (VCD), mp3, mp2



5 Parts of MPEG1

 Part 1: Combining video and audio inputs


into a single/multiple data stream
 Part 2: Video Compression
 Part 3: Audio Compression
 Part 4: Conformance testing (requirements verification)
 Part 5: Technical report on the software
implementation of the Parts 1 - 3



Basic Structure of Audio Encoder

Note: A decoder basically works in just the opposite manner



Processes of an Audio Encoder

Mapping Block – divides the audio input into 32
equal-width frequency subbands (samples)
Psychoacoustic Block – calculates the masking
threshold for each subband
Bit-Allocation Block – allocates bits using the
outputs of the Mapping and Psychoacoustic
blocks
Quantizer & Coding Block – scales and quantizes
(reduces) the samples
Frame Packing Block – formats the samples
with headers into an encoded stream
The End

