Image Compression VTU Project

Fractal image compression can be obtained by dividing the original grey-level image into non-overlapping blocks, depending on a threshold value, using the well-known technique of quad tree decomposition. Using a threshold value of 0.2 and Huffman coding for encoding and decoding, these techniques have been applied to the compression of satellite imagery. The compression ratio (CR) and Peak Signal to Noise Ratio (PSNR) values are determined for three types of images, namely the standard Lena


Fractal image compression using quad tree decomposition and Huffman coding

Chapter 1

Introduction

1.1 Image compression

Images are pictorial representations of information. They can be two-dimensional or
three-dimensional, and are represented using pixels, each of which tells what
information exists at that particular point. The Internet is one of the major means of storing
images: it is generally used to access information, and it also acts as a database that stores
information in the form of images or text.

With the advance of the information age, the need for mass information storage and retrieval
grows. The capacity of commercial storage devices, however, has not kept pace with the
proliferation of image data. Images are stored on computers as collections of bits (a bit is a
binary unit of information) representing pixels, the points forming the picture elements. Since
the human eye can process large amounts of information, many pixels - some 8 million bits'
worth - are required to store even a moderate-quality image. Any real-time application
requires the information size to be small for fast and efficient processing of data. The data
should therefore be compressed to save memory space, which is limited in real-time
applications; compression also makes data processing more efficient in both time and space.

Image compression is a type of data compression applied to digital images to reduce the
cost of storing or transmitting them. Compression algorithms exploit visual perception and
the statistical properties of image data to provide superior results compared with generic
compression methods.

1.2 Types of image compression

Image compression may be lossy or lossless. Lossless compression is preferred for
archival purposes and often for medical imaging, technical drawings, clip art, or comics.
Lossy compression methods, especially when used at low bit rates, introduce compression

B.E/Dept. of TCE/BNMIT 1 2016-17



artefacts. Lossy methods are especially suitable for natural images such as photographs in
applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve
a substantial reduction in bit rate. Lossy compression that produces negligible differences
may be called visually lossless.

1.2.1 Lossless compression techniques

There are a few different methods for lossless compression. There is run-length
encoding (used for BMP files), which takes runs of data (consecutive data elements with
identical values) and stores them as a single data value and count. It is best suited to
simple graphics files, where there are long runs of identical data elements. DEFLATE is
another lossless data compression method, used for PNG images. It uses a combination of
the LZ77 algorithm and Huffman coding. In addition to being used for PNG images, it is
also used in ZIP and gzip compression. Lempel-Ziv-Welch (LZW) compression is a
lossless compression algorithm that performs a limited analysis of the data. It is used in GIF
and some TIFF file formats.
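The run-length idea described above can be sketched in a few lines of Python. This is an illustrative sketch of the principle, not the BMP RLE file format itself, which packs runs into a specific byte layout:

```python
def rle_encode(data):
    """Collapse runs of identical values into (value, count) pairs."""
    if not data:
        return []
    runs = []
    current, count = data[0], 1
    for value in data[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs


def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

For example, a row of six white pixels followed by three black ones, `[255]*6 + [0]*3`, encodes to just two pairs, `[(255, 6), (0, 3)]`, and decodes back losslessly.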

1.2.2 Lossy compression techniques

There are a number of lossy compression methods, some of which can be
combined with lossless methods to create even smaller file sizes. One method is reducing
the image’s colour space to the most common colours within the image. This is often
used in GIF and sometimes in PNG images to result in smaller file sizes. When used on
the right types of images and combined with dithering, it can result in images nearly
identical to the originals.

Transform encoding is the type of encoding used for JPEG images. In images, transform
coding applies a discrete cosine transform (DCT) to small blocks of the image and discards
the detail that contributes least, producing an image that can be represented with far less
data than the original.

Chroma subsampling is another type of lossy compression. It takes into account that
the human eye perceives changes in brightness more sharply than changes in color, and
takes advantage of this by dropping or averaging some chroma (color) information while
maintaining luma (brightness) information. It is commonly used in video encoding
schemes and in JPEG images.

JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This
mathematical operation converts each block of the image from the spatial (2D)
domain into the frequency domain (a.k.a. the transform domain). A perceptual model based
loosely on the human psychovisual system discards high-frequency information, i.e. sharp
transitions in intensity and color hue. In the transform domain, the process of reducing
information is called quantization. In simpler terms, quantization is a method for optimally
reducing a large number scale (with different occurrences of each number) into a smaller
one, and the transform domain is a convenient representation of the image because the high-
frequency coefficients, which contribute less to the overall picture than other coefficients, are
characteristically small values with high compressibility. The quantized coefficients are then
sequenced and losslessly packed into the output bitstream. Nearly all software
implementations of JPEG permit user control over the compression ratio (as well as other
optional parameters), allowing the user to trade picture quality for smaller file size. In
embedded applications (such as miniDV, which uses a similar DCT compression scheme),
the parameters are pre-selected and fixed for the application.
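The quantization step described above can be illustrated with a small sketch. The 2x2 block and step-size table below are invented for illustration and are not the standard JPEG quantization tables; the point is that large step sizes on the high-frequency coefficients round them to zero, which is what makes the bitstream compressible:

```python
def quantize(coeffs, qtable):
    """Divide each DCT coefficient by its step size and round to an integer."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]


def dequantize(levels, qtable):
    """Approximate reconstruction: multiply each level by its step size."""
    return [[v * q for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(levels, qtable)]


# Hypothetical 2x2 block of DCT coefficients and quantization steps.
coeffs = [[100.0, 12.0], [8.0, 3.0]]
qtable = [[16, 11], [10, 16]]
```

Here `quantize(coeffs, qtable)` yields `[[6, 1], [1, 0]]`: the smallest high-frequency coefficient has vanished, and dequantizing recovers only an approximation of the original block, which is exactly the lossy part of the scheme.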

The compression method is usually lossy, meaning that some original image information is
lost and cannot be restored, possibly affecting image quality. There is an
optional lossless mode defined in the JPEG standard. However, this mode is not widely
supported in products.

There is also an interlaced progressive JPEG format, in which data is compressed in multiple
passes of progressively higher detail. This is ideal for large images that will be displayed
while downloading over a slow connection, allowing a reasonable preview after receiving
only a portion of the data. However, support for progressive JPEGs is not universal; when
progressive JPEGs are received by programs that do not support them, the software displays
the image only after it has been completely downloaded.


There are also many medical imaging and traffic systems that create and process 12-bit JPEG
images, normally grayscale images. The 12-bit JPEG format has been part of the JPEG
specification for some time, but this format is not as widely supported.

A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a
sum of cosine functions oscillating at different frequencies. DCTs are important to numerous
applications in science and engineering, from lossy compression of audio (e.g. MP3)
and images (e.g. JPEG) (where small high-frequency components can be discarded),
to spectral methods for the numerical solution of partial differential equations. The use
of cosine rather than sine functions is critical for compression, since it turns out (as described
below) that fewer cosine functions are needed to approximate a typical signal, whereas for
differential equations the cosines express a particular choice of boundary conditions.
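A direct O(N²) evaluation of the type-II DCT makes the compaction property easy to see. This is the unnormalized textbook form, not the scaled variant used by any particular codec:

```python
import math


def dct_ii(x):
    """Unnormalized type-II DCT: X[k] = sum_n x[n] * cos(pi/N * (n + 1/2) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]


# A constant signal compacts into the single k = 0 coefficient:
# dct_ii([1.0] * 8) is [8.0, 0, 0, 0, 0, 0, 0, 0] up to rounding error.
```

For typical smooth image rows, most of the energy similarly lands in the first few coefficients, so the high-frequency coefficients can be coarsely quantized or discarded.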

In particular, a DCT is a Fourier-related transform similar to the discrete Fourier
transform (DFT), but using only real numbers. The DCTs are generally related to Fourier
Series coefficients of a periodically and symmetrically extended sequence whereas DFTs are
related to Fourier Series coefficients of a periodically extended sequence. DCTs are
equivalent to DFTs of roughly twice the length, operating on real data with even symmetry
(since the Fourier transform of a real and even function is real and even), whereas in some
variants the input and/or output data are shifted by half a sample. There are eight standard
DCT variants, of which four are common.

The most common variant of the discrete cosine transform is the type-II DCT, which is often
called simply "the DCT". Its inverse, the type-III DCT, is correspondingly often called
simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine
transform (DST), which is equivalent to a DFT of real and odd functions, and the modified
discrete cosine transform (MDCT), which is based on a DCT of overlapping data.
Multidimensional DCTs (MD DCTs) have been developed to extend the concept of the DCT
to multidimensional signals. There are several algorithms to compute the MD DCT, and a
variety of fast algorithms have also been developed to reduce the computational complexity
of implementing the DCT.


All these existing compression techniques share a major drawback: loss of information and
long compression times. The compressed images have lower quality, and recovering the
decompressed image takes a long time. One compression technique that can reduce or
eliminate these problems is fractal image compression.

1.3 Fractals

A fractal is a mathematical set that exhibits a repeating pattern displayed at every
scale. It is also known as expanding symmetry or evolving symmetry. If the replication is
exactly the same at every scale, it is called a self-similar pattern. A fractal is a never-ending
pattern. Fractals are infinitely complex patterns that are self-similar across different scales.
They are created by repeating a simple process over and over in an ongoing feedback loop.
Driven by recursion, fractals are images of dynamic systems - the pictures of Chaos.
Geometrically, they exist in between our familiar dimensions. Fractal patterns are extremely
familiar, since nature is full of fractals. For instance: trees, rivers, coastlines, mountains,
clouds, seashells, hurricanes, etc. Abstract fractals - such as the Mandelbrot Set - can be
generated by a computer calculating a simple equation over and over.
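The "simple equation calculated over and over" for the Mandelbrot set is z → z² + c: a point c belongs to the set when the iteration stays bounded. A minimal membership test, with an illustrative iteration cap of 100, can be sketched as:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; c is in the Mandelbrot set if |z| stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # escaped: c is certainly outside the set
            return False
    return True
```

Evaluating this test over a grid of complex points and coloring each point by membership (or by escape time) produces the familiar Mandelbrot image.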

1.4 Fractal image compression

The fractal image compression first partitions the original image into non-
overlapping domain regions (they can be any size or shape). Then a collection of
possible range regions is defined. The range regions can overlap and need not cover the
entire image, but must be larger than the domain regions. For each domain region the
algorithm then searches for a suitable range region that, when applied with an appropriate
affine transformation, very closely resembles the domain region. Afterward, a FIF (Fractal
Image Format) file is generated for the image. This file contains information on the choice of
domain regions and the list of affine coefficients (i.e. the entries of the transformation
matrix) of all associated affine transformations. So all the pixels' data in a given region are
compressed into a small set of entries of the transform matrix, with each entry, corresponding
to an integer between 0 and 255, taking up one byte.
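The search step can be sketched as follows, keeping this text's naming convention (small non-overlapping domain regions matched against larger range regions). Only the grey-level part of the affine map, d ≈ s·r + o, is fitted here, by least squares; candidate enumeration, rotations/reflections, and coefficient quantization are omitted from this sketch:

```python
def downsample(block):
    """Average 2x2 pixel groups, halving each dimension (the contractive step)."""
    n = len(block)
    return [[(block[2 * i][2 * j] + block[2 * i][2 * j + 1]
              + block[2 * i + 1][2 * j] + block[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(n // 2)] for i in range(n // 2)]


def best_affine_fit(domain, candidates):
    """Find the candidate range block (downsampled to the domain block's size)
    whose grey levels, under the map s*r + o, best match the domain block."""
    flat_d = [p for row in domain for p in row]
    n = len(flat_d)
    best = None
    for idx, cand in enumerate(candidates):
        flat_r = [p for row in downsample(cand) for p in row]
        # Least-squares contrast s and brightness o for d = s*r + o.
        mean_r = sum(flat_r) / n
        mean_d = sum(flat_d) / n
        var_r = sum((r - mean_r) ** 2 for r in flat_r)
        cov = sum((r - mean_r) * (d - mean_d) for r, d in zip(flat_r, flat_d))
        s = cov / var_r if var_r else 0.0
        o = mean_d - s * mean_r
        err = sum((d - (s * r + o)) ** 2 for d, r in zip(flat_d, flat_r))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best
```

The returned `(error, index, s, o)` tuple is, in effect, the per-region payload that ends up in the FIF file: which range region to use and the affine coefficients that map it onto the domain region.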

This process is independent of the resolution of the original image. The output graphic will
look like the original at any resolution, since the compressor has found an IFS whose
attractor replicates the original one (i.e. a set of equations describing the original image). Of
course, the process takes a lot of work, especially during the search for the suitable range
regions. But once the compression is done, the FIF file can be decompressed very quickly.
Thus, fractal image compression is asymmetrical. Practical implementations of a fractal
compressor offer different levels of compression. The lower levels have more relaxed search
criteria to cut down processing time, at the cost of more lost detail. The higher levels give
very good detail, but take a long time to process each image.

To decompress an image, the decompressor first allocates two memory buffers of equal size,
with arbitrary initial content. The iterations then begin, with buffer 1 the range image and
buffer 2 the domain image. The domain image is partitioned into domain regions as specified
in the FIF file. For each domain region, its associated range region is located in the range
image. Then the corresponding affine map is applied to the content of the range region,
pulling the content toward the map's attractor. Since each of the affine maps is contractive,
the range region is contracted by the transformation. This is the reason that the range regions
are required to be larger than the domain regions during compression.

For the next iteration, the roles of the domain image and range image are switched. The
process of mapping the range regions (now in buffer 2) to their respective domain regions (in
buffer 1) is repeated, using the prescribed affine transformations. Then the entire step is
repeated again and again, with the content of buffer 1 mapped to buffer 2, then vice versa. At
every step, the content is pulled ever closer to the attractor of the IFS which forms a collage
of the original image. Eventually the differences between the two images become very small,
and the content of the first buffer is the output decompressed image.


1.5 Block truncation coding used in satellite communication

Block truncation coding (BTC) is one of the simplest compression techniques: the
image is divided into blocks which are then processed independently. The traditional method
involves computing a high mean and a low mean to replace the original pixel values. The
first and second moments are preserved, and the bit rate is a constant value (2 bits per
pixel). The major disadvantage is that heavy blocking artefacts appear as the block size
increases. Many methods have evolved to improve the compression ratio and to reduce the
bit rate [14]. BTC is a technique used for compression of monochrome image data: a
one-bit adaptive moment-preserving quantizer that preserves certain statistical moments of
small blocks of the input image in the quantized output. The original BTC algorithm
preserves the standard mean and the standard deviation [9]. The statistical overheads, the
mean and the standard deviation, are coded as part of each block; the truncated block of
BTC is the one-bit output of the quantizer for every pixel in the block. Various methods
have been proposed over the last twenty years, such as BTC and Absolute Moment Block
Truncation Coding (AMBTC) [6]. AMBTC preserves the higher mean and lower mean of
each block and uses these quantities to quantize the output. AMBTC provides better image
quality than compression using BTC, and is also considerably faster.
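The high-mean/low-mean quantizer at the heart of BTC, here in its AMBTC form which preserves the two means directly, can be sketched as:

```python
def btc_encode(block):
    """AMBTC-style encoder: threshold at the block mean, keep the high mean,
    the low mean, and a one-bit plane marking which side each pixel falls on."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    high = [p for p in flat if p >= mean]
    low = [p for p in flat if p < mean]
    hi = sum(high) / len(high) if high else mean
    lo = sum(low) / len(low) if low else mean
    bits = [[1 if p >= mean else 0 for p in row] for row in block]
    return hi, lo, bits


def btc_decode(hi, lo, bits):
    """Rebuild the block using only the two means and the bit plane."""
    return [[hi if b else lo for b in row] for row in bits]
```

For a 4x4 block this costs two mean values plus 16 bits instead of 16 full pixels, which is where the fixed 2-bits-per-pixel rate quoted above comes from; the blocking artefacts appear because each decoded block contains only two grey levels.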

1.6 Drawbacks of traditional compression techniques

The most widely used compression technique is the JPEG compression algorithm.
The PSNR values obtained using this technique are very good. But this method has a major
drawback of loss of information at the boundaries and edges. Since JPEG uses partitioning of
the image into frequency components, the higher-frequency components get eliminated and


thus loss of information occurs. The method is also dimension-specific, and the compressed
image size is not very small.

In order to overcome these problems that occur in the JPEG technique, fractal image
compression using quad tree decomposition and Huffman coding is used. In this method, all
the image block that get divided is individually processed and compressed thereby reducing
the loss of information of lower frequency components. Also this technique is more secure
when compared to any other technique as the compressed image is stored in the form of code
word thereby reducing the risk of security attack.

This method also reduces the storage capacity needed for the compressed image, since the
compressed image is a code word: not much data is required to represent it, which reduces
the space needed to store the data.

A further advantage of this method is that the time required for compression and
decompression is much less than for other compression techniques.


Chapter 2

Literature Survey

Fractal image coding based on the quad tree is a novel technique for still-image
compression. Compared with other image compression methods, fractal image coding has the
advantages of a higher compression ratio, higher decoding speed, and a decoded image that is
independent of the resolution of the original. However, it spends too much time looking for
the best matching Ri block during encoding. To improve the encoding speed, the search
range must be narrowed and the search skills improved to ensure that the best match block
falls within the range. In this paper, an improved fractal image compression algorithm based
on the quad tree is proposed. First, the construction method of the search attractor is
improved by constructing directly from the big Di block, which saves a lot of searching time
in encoding. Second, the attractors can be self-constructed, so the situation in traditional
methods where an attractor is not found does not arise. Experimental results show that the
algorithm makes image coding faster and more efficient [4].

With its high compression ratio performance, fractal image compression technology has
become a hot topic in research on image compression techniques. However, the encoding
time of the traditional fractal compression technique is too long to achieve real-time image
compression, so it cannot be widely used. Based on the theory of fractal image compression,
this paper proposes an improved algorithm from the aspect of image segmentation [5].

Fractal image compression is a lossy compression scheme that exploits the affine redundancy
in images and represents them in the form of a mathematical model. The scheme is capable
of a very high compression ratio, high resolution and fast decoding. The encoding
procedure, however, is computationally very expensive and thus requires a long encoding
time. It consists of dividing the image into range blocks and domain blocks; it then takes a
range block and matches it with a domain block by properly aligning the domain block. It
repeats this procedure over the entire domain pool and then encodes the range block in
terms of the domain block that is most similar to it. This paper is an effort to


reduce the complexity, and improve the accuracy, of the first part of the algorithm, i.e. range
and domain block alignment and matching. The reduction in computational complexity is
mainly achieved by doing the alignment and matching in the Fourier domain using a fast
convolution technique. The accuracy is achieved by the ability of the algorithm to find
rotation angles not restricted to 90°, 180°, 270° and 360°, and also to find the translation of
the domain block with respect to the range block, which is often ignored by many fractal
encoders when aligning and matching the domain block with the range block [7].

Huffman codes are widely used in image and video compression. We propose a
decoding scheme for Huffman codes which requires only a few computations per codeword,
independent of the number of codewords n, the height of the Huffman tree h, or the length of
a codeword. The memory requirement of the proposed scheme depends on the Huffman tree;
however, for sparse Huffman trees (JPEG, H.263, MPEG), it is O(n) [8].

Transmission of audio-video data over internet applications such as multimedia is increasing
at a fast pace. Biometric, Content Based Image Retrieval (CBIR) and CCTV footage
applications require huge image databases. For such applications, this
combination of halftoning with Huffman coding is useful. Halftoning is a lossy technique
used in the printing industry, where a binary image is required. The objective of achieving a
higher compression ratio is met by combining lossy halftoning and lossless Modified
Huffman coding. Apart from standard operators such as the Floyd-Steinberg and Jarvis
operators, the Small and South-East operators are used. The halftone and Huffman coding
technique is implemented on 10 different color images of size 512x512. For measurement of
image quality, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and the
Structural Similarity Index (SSIM) are used. This hybrid technique can be used for low-bit-
rate video data transmission and mass image storage [9].

Fractal image coding has the advantage of higher compression ratio, but is a lossy
compression scheme. The encoding procedure consists of dividing the image into range
blocks and domain blocks and then it takes a range block and matches it with the domain
block. The image is encoded by partitioning the domain block and using Affine


transformation to achieve fractal compression. The image is reconstructed using iterative
functions and inverse transforms. In the present work the fractal coding techniques are
applied to the compression of satellite imagery. The compression ratio and Peak Signal to
Noise Ratio (PSNR) values are determined for three types of images, namely the standard
Lena image, a Satellite Rural image and a Satellite Urban image. The MATLAB simulation
results for the reconstructed image after 4 iterations show a compression ratio of ~3.2 and
achievable PSNR values of ~12 for the Lena image, ~17.0 for the Satellite Rural image and
~22 for the Satellite Urban image. Comparison of the present results with EZW coding
indicates that the fractal compression techniques are more effective for compression of
Satellite Urban imagery, since the Satellite Urban images contain more fractals than the
Satellite Rural image and the Lena image [10].

In fractal image compression, the encoding step is computationally expensive and consumes
a long time, which limits the workable applications of fractal image compression. Drawing
on the characteristics of fractal image encoding, this paper presents an improved image-block
classification method to speed up the encoding process. The classification features are
the number and positions of the local extreme points in the row direction of an image block,
and a three-layer tree classifier, which provides stepwise precise classification, is utilized.
Comparative experimental results show the validity of the presented approach in accelerating
the fractal encoding process while holding the quality of the reconstructed image, and show
that the presented approach, despite its simple principle, can classify the image blocks more
accurately [11].

Even though fractal image compression results in a high compression ratio and good
quality of the reconstructed image, it is rarely used due to its time-consuming coding
procedure. The most sluggish stage of the compression in this method is finding the best-
matched domain block for every range block of the image. In this paper we propose a method
which classifies the domain blocks of the image using a special characteristic vector. The
search process is thereby accelerated and the quality of the reconstructed image is
preserved [12].

Fractals can be an effective approach for several applications other than image coding and
transmission: database indexing, texture mapping, and even pattern recognition problems
such as writer authentication. However, fractal-based algorithms are strongly asymmetric
because, in spite of the linearity of the decoding phase, the coding process is much more
time-consuming. Many different solutions have been proposed for this problem, but there is
not yet a standard for fractal coding. This paper proposes a method to reduce the complexity
of the image coding phase by classifying the blocks according to an approximation error
measure. It is formally shown that, by postponing range/domain comparisons with respect
to a preset block, it is possible to drastically reduce the amount of operations needed to
encode each range. The proposed method has been compared with three other fractal coding
methods, showing under which circumstances it performs better in terms of bit rate
and/or computing time [13].


Chapter 3

Project Details

3.1 Block Diagram

 In the block diagram 3.1.1, the input image is read into the system. It is then divided
into blocks by the quad tree decomposition block. The divided blocks are then
encoded and compressed using the Huffman coding algorithm.
 The coded image blocks are then decoded using the Huffman decoding algorithm, and
the decompressed image is obtained at the output.

Input image → Quad tree decomposition → Huffman encoding → Compressed image →
Huffman decoding → Decompressed image

Fig 3.1.1 Project block diagram

3.1.1 Quad tree decomposition

A quad tree is a tree data structure in which each internal node has exactly four children.
Quad trees are the two-dimensional analog of octrees and are most often used to partition a
two-dimensional space by recursively subdividing it into four quadrants or regions. The data


associated with a leaf cell varies by application, but the leaf cell represents a "unit of
interesting spatial information".

The subdivided regions may be square or rectangular, or may have arbitrary shapes. This data
structure was named a quad tree by Raphael Finkel and J.L. Bentley in 1974. A similar
partitioning is also known as a Q-tree. All forms of quad trees share some common features:

 They decompose space into adaptable cells


 Each cell (or bucket) has a maximum capacity. When maximum capacity is reached, the
bucket splits
 The tree directory follows the spatial decomposition of the quad tree.

The quad tree approach divides a square image into four equal-sized square blocks, and then
tests each block to see if it meets some criterion of homogeneity. If a block meets the
criterion it is not divided any further; if it does not, it is again subdivided into four blocks,
and the test criterion is applied to those blocks. This process is repeated iteratively until each
block meets the criterion. The result may have blocks of several different sizes [4][5][7].
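The splitting procedure above can be sketched recursively. The homogeneity criterion here, a grey-level spread (max − min) test against a threshold, is one common choice standing in for whatever criterion an implementation uses:

```python
def quadtree_split(img, x, y, size, threshold, min_size, leaves):
    """Recursively split the square region at (x, y) until its grey-level
    spread (max - min) falls below the homogeneity threshold."""
    vals = [img[y + i][x + j] for i in range(size) for j in range(size)]
    if max(vals) - min(vals) <= threshold or size <= min_size:
        leaves.append((x, y, size))   # homogeneous enough: keep as a leaf block
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree_split(img, x + dx, y + dy, half, threshold, min_size, leaves)
```

On a 4x4 image whose left half is black and right half is white, the root block fails the test and splits once, leaving four homogeneous 2x2 leaf blocks; a fully uniform image stays as a single leaf.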

3.1.2 Huffman encoding


Huffman coding is a lossless data compression algorithm. The idea is to assign
variable length codes to input characters; lengths of assigned codes are based on the
frequencies of corresponding characters. The most frequent character gets the smallest code
and the least frequent character gets the largest code. The Huffman encoding algorithm starts
by constructing a list of all the alphabet symbols in descending order of their probabilities. It
then constructs, from the bottom up, a binary tree with a symbol at every leaf. This is done in
steps, where at each step two symbols with the smallest probabilities are selected, added to
the top of the partial tree, deleted from the list, and replaced with an auxiliary symbol
representing the two original symbols [9]. When the list is reduced to just one auxiliary
symbol (representing the entire alphabet), the tree is complete. The tree is then traversed to
determine the code words of the symbols.
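The bottom-up construction described above can be sketched with a min-heap. Instead of building an explicit tree, this common variant keeps, for each symbol, the bits accumulated so far, prepending one bit every time the symbol's group is merged; the result is the same code-length assignment the tree traversal would give:

```python
import heapq


def huffman_codes(freqs):
    """Build Huffman codes from a {symbol: frequency} map via repeated merging
    of the two least-probable groups (the auxiliary symbols of the text)."""
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    codes = {sym: "" for sym in freqs}
    if len(heap) == 1:                       # degenerate one-symbol alphabet
        codes[heap[0][2][0]] = "0"
        return codes
    counter = len(heap)                      # tie-breaker for equal frequencies
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1:                      # this group becomes the 0-branch
            codes[s] = "0" + codes[s]
        for s in syms2:                      # this group becomes the 1-branch
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, counter, syms1 + syms2))
        counter += 1
    return codes
```

For frequencies `{'a': 5, 'b': 2, 'c': 1, 'd': 1}` the most frequent symbol gets a 1-bit code and the rarest symbols get 3-bit codes, and no codeword is a prefix of another, which is what makes the stream decodable.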


3.1.3 Huffman decoding


Huffman decoding happens as follows. Start at the root and read the first bit off the
input (the compressed file). If it is zero, follow the bottom edge of the tree; if it is one, follow
the top edge. Read the next bit and move another edge toward the leaves of the tree. When
the decoder arrives at a leaf, it finds there the original, uncompressed symbol, and that
symbol is emitted by the decoder. The process then starts again at the root with the next bit.
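The bit-by-bit walk can equivalently be driven by a table of codewords, where each table hit corresponds to reaching a leaf of the tree. The three-symbol code below is an invented example of a prefix-free table:

```python
def huffman_decode(bits, codes):
    """Consume bits until the accumulated string matches a codeword (a leaf),
    emit that symbol, then restart from the root (an empty string)."""
    inverse = {code: sym for sym, code in codes.items()}
    out, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:
            out.append(inverse[current])
            current = ""
    return out


codes = {"a": "0", "b": "10", "c": "11"}   # example prefix-free code table
```

Because the code is prefix-free, no codeword boundary markers are needed: `huffman_decode("010011", codes)` splits the stream unambiguously into `0|10|0|11`.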

3.1.4 Affine transform


It is a transformation that preserves collinearity and ratios of distances. It is a linear
mapping method that preserves points, straight lines and planes. An affine transformation is
composed of many elementary types, which can be applied in any sequence or combination.
Examples of affine transformations include translation, scaling, homothety, similarity
transformation, reflection, rotation, shear mapping, and compositions of these in any
combination and sequence.

An affine transformation preserves:

1. Collinearity between points: three or more points which lie on the same line (called
collinear points) continue to be collinear after the transformation.
2. Parallelism: two or more lines which are parallel continue to be parallel after the
transformation.
3. Convexity of sets: a convex set continues to be convex after the transformation.
Moreover, the extreme points of the original set are mapped to the extreme points of
the transformed set [3].
4. Ratios of lengths along a line.
5. Barycentres of weighted collections of points.
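A 2D affine map p → A·p + b, together with a quick check of the collinearity property, can be sketched as follows; the shear matrix and offset below are arbitrary examples:

```python
def affine_apply(matrix, offset, point):
    """Apply the affine map p -> A*p + b to a 2D point."""
    (a, b), (c, d) = matrix
    tx, ty = offset
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)


def collinear(p, q, r):
    """The cross product of (q - p) and (r - p) is zero for collinear points."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0
```

Applying a shear `((1, 1), (0, 1))` with offset `(2, 0)` to the collinear points (0, 0), (1, 1), (2, 2) gives (2, 0), (4, 1), (6, 2), which are still collinear, illustrating property 1.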


3.1.5 Iterated function system


In mathematics, iterated function systems (IFSs) are a method of constructing
fractals; the resulting fractals are often self-similar. IFS fractals, as they are normally called,
can be of any number of dimensions, but are commonly computed and drawn in 2-D.
Fractal image compression is based on the construction of an iterated function system (IFS)
that approximates the original image. An IFS is a set of contractive transformations, each of
which maps the plane into itself. IFS codes use affine transformations to express the
relations between different parts of the image.
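A minimal illustration of an IFS is the "chaos game" for the Sierpinski gasket, whose three contractions each halve distances toward one vertex of a triangle and so map the unit box into itself (a Python sketch; the vertex coordinates and function names are illustrative, not part of the project's coder):

```python
import random

# three contractive maps, each halving distances toward one vertex
# of a triangle -- the classic Sierpinski-gasket IFS
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def ifs_points(n, seed=0):
    """Chaos game: repeatedly apply a randomly chosen contraction."""
    random.seed(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2   # contraction ratio 1/2
        pts.append((x, y))
    return pts

pts = ifs_points(10000)
# every iterate stays inside the unit bounding box (the maps are contractive)
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in pts)
```

Plotting `pts` would reveal the self-similar gasket; the attractor is fully described by the three affine maps, which is the idea fractal coding exploits.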

3.1.6 Compression Ratio

The compression ratio is used to measure the ability of data compression by

comparing the size of the original image to the size of the compressed image:
CR = (uncompressed size) / (compressed size). The greater the compression ratio, the
more effective the compression.
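For example (a Python one-liner with a hypothetical byte count, not a measured result from the project):

```python
def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size; CR > 1 means the file shrank."""
    return original_bytes / compressed_bytes

# e.g. a 256*256 8-bit image (65536 bytes) compressed to 6770 bytes
print(round(compression_ratio(256 * 256, 6770), 2))  # 9.68
```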

3.1.7 Peak signal to noise ratio (PSNR)


Peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the
ratio between the maximum possible power of a signal and the power of corrupting noise that
affects the fidelity of its representation.
PSNR is most easily defined via the mean squared error (MSE):

PSNR (dB) = 10 log10 (MAX^2 / MSE) ………………(3.1.7.1)

where MAX is the maximum possible pixel value of the image (255 for 8-bit images).
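A small sketch of this computation (Python with NumPy; the 4x4 test arrays are illustrative):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR (dB) = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images: no noise at all
    return 10 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                 # one pixel off by 10 -> MSE = 100/16 = 6.25
print(round(psnr(a, b), 2))
```

Higher PSNR means the reconstructed image is closer to the original; an identical reconstruction has zero MSE and, formally, infinite PSNR.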

3.2 Image compression techniques

 There are many compression techniques available in image processing, based on
various encoding schemes. Some of the encoding schemes include run-length
encoding, DPCM and predictive encoding, arithmetic encoding, chain codes, and so


on. Out of these, Huffman coding is considered an effective and fast
encoding technique.

 Some of the image compression algorithms include JPEG compression, fractal image
compression, the LZW algorithm used in GIF and TIFF, area image compression,
and the deflate algorithm used in PNG, MNG, and TIFF.

 Lossy compression, or irreversible compression, is the class of data encoding methods
that uses inexact approximations and partial data discarding to represent the content.
These techniques are used to reduce data size for storage, handling, and transmission.
Higher degrees of approximation produce coarser images as more detail is removed.
This is opposed to lossless data compression (reversible data compression), which
does not degrade the data. The amount of data reduction possible using lossy
compression is often much higher than with lossless techniques.


3.3 Proposed system algorithm

 The algorithm steps are as follows.


1. Divide the original image using quad tree decomposition with a threshold of 0.2
and minimum and maximum block dimensions of 2 and 64 respectively.
2. Record the x and y coordinates, mean value, and block size of each block from the
quad tree decomposition.
3. Encode the recorded fractal coding information using Huffman coding and
calculate the compression ratio.
4. Apply Huffman decoding to the encoded image to reconstruct it and calculate the
PSNR.
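Steps 1 and 2 can be sketched as a recursive split (a Python sketch, not the project's MATLAB code; the homogeneity criterion, a block's intensity range on a 0-1 scale compared against the threshold, is an assumption modelled on MATLAB's qtdecomp, and all names are illustrative):

```python
import numpy as np

def quadtree(img, x=0, y=0, threshold=0.2, min_dim=2, max_dim=64):
    """Recursively split a square block until it is homogeneous enough.

    A block is kept whole when its intensity range (on a 0-1 scale) is
    within the threshold; otherwise it is split into four quadrants.
    Returns a list of (x, y, size, mean) records for later coding.
    """
    size = img.shape[0]
    block = img.astype(float) / 255.0
    homogeneous = block.max() - block.min() <= threshold
    if (homogeneous and size <= max_dim) or size <= min_dim:
        return [(x, y, size, float(img.mean()))]
    h = size // 2
    return (quadtree(img[:h, :h], x,     y,     threshold, min_dim, max_dim) +
            quadtree(img[:h, h:], x + h, y,     threshold, min_dim, max_dim) +
            quadtree(img[h:, :h], x,     y + h, threshold, min_dim, max_dim) +
            quadtree(img[h:, h:], x + h, y + h, threshold, min_dim, max_dim))

# a 4x4 image whose top-left quadrant differs from the rest
img = np.zeros((4, 4), dtype=np.uint8)
img[:2, :2] = 255
print(quadtree(img))   # four 2x2 blocks: one bright, three dark
```

The (x, y, size, mean) records are exactly the per-block information that step 3 then passes to the Huffman coder.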

3.4 Software details


MATLAB

 MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where
problems and solutions are expressed in familiar mathematical notation.

Typical uses include:

1) Math and computation


2) Algorithm development
3) Modelling, simulation, and prototyping
4) Data analysis, exploration, and visualization
5) Scientific and engineering graphics
6) Application development, including Graphical User Interface building
 MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems,
especially those with matrix and vector formulations, in a fraction of the time it
would take to write a program in a scalar non-interactive language such as C or
FORTRAN.

 The MATLAB system consists of five main parts.


1. The MATLAB language: This is a high-level matrix/array language with
control flow statements, functions, data structures, input/output, and object-
oriented programming features. It allows both “programming in the small”, to
rapidly create quick throw-away programs, and “programming in the large”, to
create complete, large and complex programs.
2. The MATLAB working environment: This is a set of tools that the user works
with. It includes export and import of data through variables and data modules.
3. Handle graphics: This is the MATLAB graphics system. It includes high level
commands for 2-D and 3-D visualization, image processing, and animation and
presentation graphics.
4. The MATLAB mathematical function library: This is a vast collection of
computational algorithms ranging from elementary functions such as sine,
cosine, and sum to more sophisticated ones such as Bessel functions and the FFT.
5. The MATLAB application program interface (API): This is a library that allows
you to write C and FORTRAN programs that interact with MATLAB. It includes
facilities for calling routines from MATLAB and for calling MATLAB as a
computational engine.
 Some of the advantages of using MATLAB are:
1. Ease of use: MATLAB is an interpreted language, like many versions of BASIC.
It can be used as a scratch pad to evaluate expressions typed at the
command line.
2. Platform independence: MATLAB is supported on many different computer
systems, providing a large measure of platform independence. Programs
written in MATLAB can migrate to new platforms when the user's needs
change.
3. Predefined functions: MATLAB comes complete with an extensive library of
predefined functions that provide tested and pre-packaged solutions to many
basic technical tasks.
4. Device-independent plotting: MATLAB has many integral plotting and
imaging commands, unlike many other computer languages.


5. Graphical user interface: MATLAB includes tools that allow a programmer to
interactively construct a graphical user interface (GUI) for his/her program.
6. MATLAB compiler: MATLAB's flexibility and platform independence are
achieved by compiling MATLAB programs into device-independent P-code
and then interpreting the P-code instructions at run time.

3.5 MATLAB commands used

 clc, clear all, close all: used to clear the command window, remove all variables
from the workspace, and close all open figure windows.
 imread: this command is used to read the input image given by the user.
 imresize: resizes the image so that it has the specified number of rows and columns.
Either NUMROWS or NUMCOLS may be NaN, in which case imresize computes
the other dimension automatically in order to preserve the image aspect
ratio.
 qtdecomp: divides a square image into four equal-sized square blocks, and
then tests each block to see if it meets some criterion of homogeneity. If a block meets
the criterion, it is not divided any further. If it does not meet the criterion, it is
subdivided again into four blocks, and the test criterion is applied to those blocks.
This process is repeated iteratively until each block meets the criterion. The result
may have blocks of several different sizes.
 hist: bins the elements of Y into 10 equally spaced bins and returns the number
of elements in each bin. If Y is a matrix, hist works down the columns.
 huffmandict: code dictionary generator for the Huffman coder. Generates a
binary Huffman code dictionary, using the maximum variance algorithm, for the
distinct symbols given by the SYM vector. The symbols can be represented as a
numeric vector or a one-dimensional alphanumeric cell array. The second input,
PROB, represents the probability of occurrence of each of these symbols; SYM and
PROB must be of the same length.
 huffmanenco: encodes the input signal, SIG, using the Huffman coding algorithm,
based on the code dictionary, DICT. The code dictionary is
generated using the huffmandict function. Each of the symbols appearing in

SIG must be present in the code dictionary, DICT. The SIG input can be a numeric
vector or a one-dimensional cell array containing alphanumeric values.
 huffmandeco: Huffman decoder; decodes the numeric Huffman code vector COMP
using the code dictionary, DICT. The encoded signal is generated by the
huffmanenco function, and the code dictionary can be generated using the
huffmandict function. The decoded signal will be a numeric vector if the
original signals are only numeric. If any signal value in DICT is alphanumeric, then
the decoded signal will be represented as a one-dimensional cell array.
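A rough Python analogue of building such a code dictionary from symbols and probabilities (the `huffman_dict` helper and its tie-breaking scheme are illustrative assumptions, not MATLAB's implementation):

```python
import heapq
from itertools import count

def huffman_dict(symbols, probs):
    """Build a Huffman code table from symbols and their probabilities
    (a rough analogue of MATLAB's huffmandict)."""
    tiebreak = count()   # keeps heap comparisons away from the code lists
    heap = [(p, next(tiebreak), [(s, '')]) for s, p in zip(symbols, probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = ([(s, '0' + code) for s, code in c1] +
                  [(s, '1' + code) for s, code in c2])
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return dict(heap[0][2])

codes = huffman_dict(['a', 'b', 'c', 'd'], [0.5, 0.25, 0.15, 0.10])
print(codes)   # rarer symbols receive longer code words
```

As expected, the most probable symbol gets the shortest code word, so frequent quad tree block values cost the fewest bits.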

3.6 Project working design

 Accept the input image by the user.


 Read the input image.
 Resize the input image for specific dimension and display the image.
 Divide the image into square blocks by applying quad tree decomposition.
 Compare the range and domain blocks for matching of information. If information
matches, stop dividing the blocks else continue the block division until information in
range and domain block matches.
 Select the blocks of data required and apply Huffman encoding algorithm. The
compressed version of the blocks will be in the form of a code word.
 Apply Huffman decoding algorithm to the obtained code word in order to get back
the original block of information.
 Combine each block of data after applying Huffman decoding algorithm in order to
get back the decompressed original image.


3.7 Experimental Results


3.7.1 Fractal compression results

 Table 3.7.1.1 shows the results of the image compression for various dimensions.
 The results show a large reduction in the size of the images, together with high
compression ratio and PSNR values.
Table 3.7.1.1 Results of the image compression for various dimensions

Test image | Dimension | Compression time (sec) | Compression ratio | Decompression time (sec) | PSNR (dB) | Size reduced (%)
Lena       | 256*256   | 0.66819 | 9.6816  | 6.9759  | 25.24 | 48.61
Lena       | 512*512   | 0.7127  | 9.4561  | 7.3408  | 24.97 | 48.61
Lena       | 1024*1024 | 0.6935  | 9.5122  | 7.0196  | 25.03 | 48.61
Cameraman  | 256*256   | 0.6     | 13.7825 | 4.7227  | 24.93 | 46.15
Cameraman  | 512*512   | 0.5665  | 13.5474 | 4.7901  | 24.53 | 48.43
Cameraman  | 1024*1024 | 1.3647  | 13.5446 | 6.6683  | 24.63 | 48.43
Baboon     | 256*256   | 0.6187  | 9.2968  | 21.7766 | 26.29 | 47.94
Baboon     | 512*512   | 0.5752  | 8.6667  | 6.7806  | 25.83 | 49.31
Baboon     | 1024*1024 | 0.5418  | 8.7832  | 6.7620  | 25.92 | 49.31

 Figures 3.7.1.2 through 3.7.1.7 show the fractal images for which the outputs
were obtained.

Fig.3.7.1.2 Fractal design Fig. 3.7.1.3 Christmas tree

Fig.3.7.1.4 Colour pattern Fig. 3.7.1.5 Papaya leaf


Fig. 3.7.1.6 Fern leaf Fig. 3.7.1.7 Cauliflower

 Table 3.7.1.2 shows the results obtained for the six fractal images.
Table 3.7.1.2 Fractal image results

Test image     | Dimension | Compression time (sec) | Compression ratio | Decompression time (sec) | PSNR (dB) | Size reduced (%)
Fractal design | 512*512   | 1.7967   | 5.5860 | 13.5584 | 14.71 | 44.72
Christmas tree | 512*512   | 0.769140 | 8.2917 | 9.9925  | 23.96 | 74.22
Papaya leaf    | 512*512   | 4.99     | 8.96   | 50.64   | 24.26 | 90.85
Colour pattern | 512*512   | 4.73     | 7.66   | 60.18   | 25.48 | 42.25
Fern leaf      | 500*500   | 2.79     | 13.9   | 28.81   | 26.84 | 92.71
Cauliflower    | 3276*3276 | 3.46     | 13.84  | 34.96   | 26.63 | 89.45


 Figures 3.7.1.8 and 3.7.1.9 show the satellite images for which the results were
obtained.

Fig. 3.7.1.8 Satellite image of craters Fig. 3.7.1.9 Satellite image of a city

 Table 3.7.1.3 shows the results obtained for these two satellite images.
Table 3.7.1.3 Satellite image results
Test satellite image | Dimension | Compression time (sec) | Compression ratio | Decompression time (sec) | PSNR (dB) | Size reduced (%)
Craters              | 500*500   | 0.8673 | 7.5282 | 9.0497  | 23.8693 | 96.78
City                 | 156*157   | 0.8601 | 6.5987 | 10.4101 | 20.2161 | 33.22


3.7.2 JPEG image compression results

 Figures 3.7.2.1 through 3.7.2.5 show the output obtained for the JPEG image
compression algorithm.

Fig. 3.7.2.1 Original image Fig. 3.7.2.2 DCT output

Fig. 3.7.2.3 DCT coefficients


Fig. 3.7.2.4 Decompressed output

Fig. 3.7.2.5 JPEG image compression results


 Table 3.7.2.1 shows results obtained for JPEG compression.

Table 3.7.2.1 JPEG compression results

Method | PSNR (dB) | Compression ratio | % compression
JPEG   | 45.5      | 10.456            | <45

 From table 3.7.2.1 it is clear that the values obtained for JPEG compression are
high. However, the JPEG output shows that the loss of information is greater, as the
detail at the edges and boundaries is largely lost.

3.7.3 DCT image compression results

 Figures 3.7.3.1 and 3.7.3.2 show the output obtained for the basic DCT algorithm.

Fig. 3.7.3.1 Original image Fig. 3.7.3.2 Output image

 Table 3.7.3.1 shows results obtained for DCT compression.

Table 3.7.3.1 DCT compression results


Method | PSNR (dB)  | Compression ratio | % compression
DCT    | 23.145287  | 1.0949            | <10


 From table 3.7.3.1 it is clear that the DCT method for image compression yields
lower compression ratios and PSNR values.

 Table 3.7.3.2 shows the comparison between fractal image compression, JPEG
image compression, and DCT image compression.

Table 3.7.3.2 Comparison table

Method (Lena image)       | PSNR (dB)  | Compression ratio | % compression
Fractal image compression | 25.24      | 9.6816            | 48.61
JPEG image compression    | 45.5       | 10.456            | <45
DCT image compression     | 23.145287  | 1.0949            | <10

 From the table above it is evident that fractal image compression provides a better
compression ratio and higher PSNR than the standard DCT compression
algorithm.
 However, the PSNR value obtained for the Lena image with the JPEG compression
algorithm is higher than with the fractal image compression algorithm. The major
disadvantage of JPEG compression is that part of the information in the image is lost
during quantization, and the boundaries and edges are not as clear as with the
fractal image compression algorithm.
 The compression ratios of JPEG and fractal compression are almost equivalent, but
the overall size reduction achieved is greater with fractal image compression than
with JPEG image compression.


3.8 Satellite image results comparison


 The following results were obtained using the standard block truncation coding
method (BTC) for satellite images [14] as shown in figure 3.8.1.

Fig 3.8.1 Satellite image compression using BTC technique

 The following results were obtained for the same satellite image using fractal image
compression using quad tree decomposition and Huffman coding technique as shown
in figure 3.8.2 and figure 3.8.3.

Fig. 3.8.2 Original image


Fig. 3.8.3 Fractal decompressed output

 From the above two figures it is evident that the amount of information lost in fractal
image compression using quad tree decomposition and Huffman coding is very
small (almost negligible) when compared to the BTC technique.
 One of the other commonly used techniques in satellite image compression is run
length encoding. In this method a repeated run of a symbol is stored as a single
value together with its count, which reduces the storage required. However, this
method has the disadvantage of lower PSNR and compression ratio values, which is
overcome by fractal image compression using quad tree decomposition and
Huffman coding.
 Another method used in satellite image compression is the Golomb coding
technique. In this method, the image is converted into blocks of data and each block
is assigned a standard value, the Golomb parameter, before further processing is
done. This method has the disadvantage that the size of the image cannot exceed a
standard dimension: if the dimension of the image is larger, the assignment of the
Golomb parameter takes a lot of time. This constraint is overcome by fractal image
compression using quad tree decomposition and Huffman coding.


Chapter 4

4.1 Conclusion

 Fractal image compression using quad tree decomposition and Huffman coding was
executed for 3 sets of images of dimensions 256*256, 512*512, and 1024*1024. The
images that have been compressed are Lena, cameraman and baboon.
 The results were obtained for fractal images of variable dimensions. The results were
also obtained for satellite images.
 The results were compared. For the Lena image of dimension 256*256, it was found
that JPEG and fractal image compression yield similar results, and that the
information loss is lower in fractal image compression than in JPEG image
compression.
 From all the results obtained, it is evident that fractal image compression using quad
tree decomposition and Huffman coding provides efficient results when compared to
DCT and JPEG technique.
 It is also evident that satellite images can be compressed more effectively, without
much loss of information, than with existing compression techniques. The size is
reduced by more than 70% using fractal image compression, and the information is
effectively retained after decompression with very little loss.
 Fractal image compression using quad tree decomposition and Huffman coding can
be used for any dimension of the image and any format of the image.
 Some of the advantages of fractal image compression using quad tree decomposition
and Huffman coding over JPEG image compression technique are as follows:
1. There is not much loss of information at the edges and borders.
2. Regions with high-frequency components are not discarded; all the information is
processed irrespective of its content.
3. The time taken for compression and decompression is low, as fewer stages are
involved in each.


4. More secure encryption and decryption, as the Huffman code word is difficult to
manipulate.
5. Resolution independence: an image of any resolution can be analyzed and
compressed.
6. Satellite images can be processed more efficiently, as the size of the compressed
image and the time taken for the process are both very small.
7. Any format of image can be used as input (JPEG, PNG, GIF, etc.).


4.2 References

[1] Veena Devi S.V and A.G. Ananth (2012) “Fractal Image Compression using Quad
Tree Decomposition and Huffman Coding”, SIPIJ, Vol. 3, No. 2.
[2] Fisher Y, editor (1995) “Fractal image compression: theory and application”, New
York, Springer-Verlag.
[3] Arnaud E. Jacquin, (1993) “Fractal image coding”, Proceedings of IEEE VOL.81,
pp. 1451-1465
[4] Bohong Liu and Yung Yan, (2010) “An Improved Fractal Image Coding Based on
the Quadtree”, IEEE 3rd International Congress on Image and Signal Processing, pp.
529-532.
[5] Hui Yu, Li Li, Dan Liu, Hongyu Zhai, Xiaoming Dong, (2010) “Based on Quad
tree Fractal Image Compression Improved Algorithm for Research”, IEEE Trans,
pp.1-3.
[6] Barnsley M.F. (1993) “Fractals Everywhere”, 2nd ed., San Diego: Academic Press.
[7] Dr. Muhammad Kamran, Amna Irshad Sipra and Muhammad Nadeem (2010) “A
novel domain optimization technique in fractal image compression”, IEEE
Proceedings of the 8th World Congress on Intelligent Control and Automation, pp.
994-999.
[8] Manoj Aggarwal and Ajai Narayan (2000) “Efficient Huffman Decoding”, IEEE
Trans, pp.936-939.
[9] H.B. Kekre, Tanuja K. Sarode, Sanjay R. Sange (2011) “Image Reconstruction using
Fast Inverse Halftone & Huffman Coding Technique”, IJCA, Volume 27, No. 6,
pp. 34-40.
[10] VeenaDevi.S.V and A.G.Ananth (2011) “Fractal Image Compression of Satellite
Imageries”, IJCA, Volume 30-No.3, pp.33-36.
[11] Jinshu Han (2007) “Speeding up Fractal Image Compression Based on Local
Extreme Points”, IEEE Computer Society, pp. 732-737.
[12] Narges Rowshanbin, Shadrokh Samavi and Shahram Shirani (2006)
“Acceleration of Fractal Image Compression Using Characteristic Vector
Classification”, IEEE CCECE/CCGEI, pp.2057-2060.


[13] Riccardo Distasi, Michele Nappi and Daniel Riccio (2006) “A Range/Domain
Approximation Error Based Approach for Fractal Image Compression” IEEE Trans
on Image Processing VOL.15, No1. pp.89-97.
[14] S. Chandravadana, N. Nithyanandam (2014) “Compression of Satellite Images
Using Lossy and Lossless Coding Techniques”, International Journal of Engineering
and Technology (IJET), Vol 6 No 1 Feb-Mar 2014.
