Image Compression VTU Project
Chapter 1
Introduction
Images are pictorial representations of information and can be two-dimensional or three-dimensional. An image is made up of pixels, each of which describes the information present at that particular point. The Internet is one of the major places where images are stored: it is used to access information and also acts as a database that stores information in the form of images and text.
With the advance of the information age the need for mass information storage and retrieval
grows. The capacity of commercial storage devices, however, has not kept pace with the
proliferation of image data. Images are stored on computers as collections of bits (a bit is a
binary unit of information) representing pixels, or points forming the picture elements. Since
the human eye can process large amounts of information, many pixels - some 8 million bits'
worth - are required to store even moderate quality images. Real-time applications require the data size to be small for fast and efficient processing, so the data should be compressed to save memory, which is limited in such applications. Compression also allows more efficient use of time and storage space during data processing.
Image compression is a type of data compression applied to digital images, to reduce their
cost for storage or transmission. Algorithms take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods. Image compression may be lossless or lossy; lossy methods, particularly at low bit rates, can introduce compression artefacts. Lossy methods are especially suitable for natural images such as photographs in
applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve
a substantial reduction in bit rate. Lossy compression that produces negligible differences
may be called visually lossless.
There are a few different methods for lossless compression. There’s run-length
encoding (used for BMP files), which takes runs of data (consecutive data elements with
identical values) and stores them in a single data value and count. It’s best suited for
simple graphics files, where there are long runs of identical data elements. DEFLATE is
another lossless data compression method used for PNG images. It uses a combination of
the LZ77 algorithm and Huffman coding. In addition to being used for PNG images, it’s
also used in ZIP and gzip compression. Lempel-Ziv-Welch (LZW) compression is a
lossless compression algorithm that performs a limited analysis of data. It’s used in GIF
and some TIFF file formats.
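To illustrate the run-length idea described above, the following MATLAB-style sketch encodes a short row of pixel values as (value, count) pairs; it is a minimal illustration only, not the actual RLE format used in BMP files.

data = [5 5 5 5 0 0 7 7 7];                        % example row of pixel values
runStarts = [true, diff(data) ~= 0];               % positions where a new run begins
values = data(runStarts);                          % one representative value per run
counts = diff([find(runStarts), numel(data) + 1]); % length of each run
% values = [5 0 7], counts = [4 2 3]: nine samples stored as three (value, count) pairs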
Transform encoding is the type of encoding used for JPEG images. In images, transform
coding averages out the color in small blocks of the image using a discrete cosine
transform (DCT) to create an image that has far fewer colors than the original.
Chroma subsampling is another type of lossy compression. It takes into account that the human eye perceives changes in brightness more sharply than changes in colour, and therefore averages or discards some of the chrominance information in the image.
JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each block of the image from the spatial (2D) domain into the frequency domain (a.k.a. transform domain). A perceptual model based
loosely on the human psycho visual system discards high-frequency information, i.e. sharp
transitions in intensity, and color hue. In the transform domain, the process of reducing
information is called quantization. In simpler terms, quantization is a method for optimally
reducing a large number scale (with different occurrences of each number) into a smaller
one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then
sequenced and losslessly packed into the output bitstream. Nearly all software
implementations of JPEG permit user control over the compression-ratio (as well as other
optional parameters), allowing the user to trade off picture-quality for smaller file size. In
embedded applications (such as miniDV, which uses a similar DCT-compression scheme),
the parameters are pre-selected and fixed for the application.
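The following MATLAB-style sketch illustrates this transform-and-quantize idea on a single 8x8 block. It uses one uniform quantization step as a stand-in for the JPEG quantization tables, so it only approximates the behaviour described above; the test image and step size are illustrative.

I = im2double(imread('cameraman.tif'));   % standard grayscale test image
block = I(1:8, 1:8);                      % one 8x8 spatial block
C = dct2(block);                          % spatial (2D) domain -> frequency domain
q = 0.05;                                 % hypothetical uniform quantization step
Cq = round(C / q);                        % most high-frequency coefficients become zero
blockRec = idct2(Cq * q);                 % approximate reconstruction of the block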
The compression method is usually lossy, meaning that some original image information is
lost and cannot be restored, possibly affecting image quality. There is an
optional lossless mode defined in the JPEG standard. However, this mode is not widely
supported in products.
There is also an interlaced progressive JPEG format, in which data is compressed in multiple
passes of progressively higher detail. This is ideal for large images that will be displayed
while downloading over a slow connection, allowing a reasonable preview after receiving
only a portion of the data. However, support for progressive JPEGs is not universal. When
progressive JPEGs are received by programs that do not support them, the software displays
the image only after it has been completely downloaded.
There are also many medical imaging and traffic systems that create and process 12-bit JPEG
images, normally grayscale images. The 12-bit JPEG format has been part of the JPEG
specification for some time, but this format is not as widely supported.
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a
sum of cosine functions oscillating at different frequencies. DCTs are important to numerous
applications in science and engineering, from lossy compression of audio (e.g. MP3)
and images (e.g. JPEG) (where small high-frequency components can be discarded),
to spectral methods for the numerical solution of partial differential equations. The use
of cosine rather than sine functions is critical for compression, since it turns out (as described
below) that fewer cosine functions are needed to approximate a typical signal, whereas for
differential equations the cosines express a particular choice of boundary conditions.
The most common variant of discrete cosine transform is the type-II DCT, which is often
called simply "the DCT". Its inverse, the type-III DCT, is correspondingly often called
simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine
transforms (DST), which is equivalent to a DFT of real and odd functions, and the modified
discrete cosine transforms (MDCT), which is based on a DCT of overlapping data.
Multidimensional DCTs (MD DCTs) are developed to extend the concept of DCT on MD
Signals. There are several algorithms to compute MD DCT. A new variety of fast algorithms
are also developed to reduce the computational complexity of implementing DCT.
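For reference, the type-II DCT mentioned above has the standard definition below (written without the normalization factors, which vary between conventions):

X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right], \qquad k = 0, 1, \ldots, N-1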
All these existing compression techniques have the major drawbacks of loss of information and long compression times. The compressed images have lower quality, and recovering the decompressed image takes a long time. One compression technique that can reduce or eliminate these problems is fractal image compression.
1.3 Fractals
Fractal image compression first partitions the original image into non-overlapping domain regions (they can be any size or shape). Then a collection of possible range regions is defined. The range regions can overlap and need not cover the entire image, but must be larger than the domain regions. For each domain region the algorithm then searches for a suitable range region that, when applied with an appropriate affine transformation, very closely resembles the domain region. Afterward, a FIF (Fractal
Image Format) file is generated for the image. This file contains information on the choice of
domain regions, and the list of affine coefficients (i.e. the entries of the transformation
matrix) of all associated affine transformations. So all the pixels' data in a given region are
compressed into a small set of entries of the transform matrix, with each entry, corresponding
to an integer between 0 and 255, taking up one byte.
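A minimal MATLAB-style sketch of the matching step described above is given below. It fits only the greyscale part of the affine map (a contrast s and an offset o) between one domain block and one spatially contracted range block by least squares; the block positions and sizes are illustrative, and the spatial part of the map (rotations and flips) is omitted for brevity.

I = im2double(imread('cameraman.tif'));   % 256x256 grayscale test image
D = I(1:8, 1:8);                          % one 8x8 domain block
R = I(65:80, 65:80);                      % one 16x16 candidate range block
Rshrunk = imresize(R, size(D));           % contract the range block to the domain size
r = Rshrunk(:);  d = D(:);
A = [r, ones(numel(r), 1)];
so = A \ d;                               % least-squares contrast s = so(1), offset o = so(2)
err = norm(A*so - d)^2;                   % matching error for this domain/range pair
% The encoder repeats this over all candidate range regions and keeps the best match.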
This process is independent of the resolution of the original image. The output graphic will
look like the original at any resolution, since the compressor has found an IFS whose
attractor replicates the original one (i.e. a set of equations describing the original image). Of
course, the process takes a lot of work, especially during the search for the suitable range
regions. But once the compression is done, the FIF file can be decompressed very quickly.
Thus, fractal image compression is asymmetrical. The practical implementations of a fractal
compressor offer different levels of compression. The lower levels have more relaxed search
criteria to cut down processing time, but with the loss of more detail. The higher levels give very good detail, but take a long time to process each image.
To decompress an image, the decompressor first allocates two memory buffers of equal size,
with arbitrary initial content. The iterations then begin, with buffer 1 the range image and
buffer 2 the domain image. The domain image is partitioned into domain regions as specified
in the FIF file. For each domain region, its associated range region is located in the range
image. Then the corresponding affine map is applied to the content of the range region,
pulling the content toward the map's attractor. Since each of the affine maps is contractive,
the range region is contracted by the transformation. This is the reason that the range regions
are required to be larger than the domain regions during compression.
For the next iteration, the roles of the domain image and range image are switched. The
process of mapping the range regions (now in buffer 2) to their respective domain regions (in
buffer 1) is repeated, using the prescribed affine transformations. Then the entire step is
repeated again and again, with the content of buffer 1 mapped to buffer 2, then vice versa. At
every step, the content is pulled ever closer to the attractor of the IFS which forms a collage
of the original image. Eventually the differences between the two images become very small,
and the content of the first buffer is the output decompressed image.
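The toy MATLAB-style sketch below illustrates only the mechanics of this iteration, under deliberately simplified assumptions: every 8x8 domain block of a 64x64 image is assumed to have been matched to the single 16x16 range region in the top-left corner, with hypothetical contrast and offset values; a real decoder reads one map per domain block from the FIF file.

s = 0.6;  o = 0.2;  n = 8;                        % assumed stored affine parameters
buf = rand(64, 64);                               % arbitrary initial buffer content
for it = 1:12                                     % each pass pulls the buffer toward the attractor
    Rshrunk = imresize(buf(1:16, 1:16), [n n]);   % contract the (shared) range region
    newBuf = buf;
    for row = 1:n:64
        for col = 1:n:64
            newBuf(row:row+n-1, col:col+n-1) = s * Rshrunk + o;   % rewrite each domain block
        end
    end
    buf = newBuf;                                 % the two buffers swap roles each pass
end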
Block truncation coding is one of the simplest compression techniques where the
image is divided into blocks and then processed. The traditional method involves
computation of a high mean and the low mean to replace the original pixel values. Here the
first and the second moments are preserved and the bit rate is a constant value (2 bits per
pixel). The major disadvantage is that there are heavy blocking artefacts when the block size
increases. Many methods have been developed to improve the compression ratio and to reduce the bit rate [14]. BTC is a simple technique used for the compression of monochrome image data. It is a one-bit adaptive moment-preserving quantizer that preserves certain statistical moments of small blocks of the input image in the quantized output. The original BTC algorithm preserves the block mean and standard deviation [9]; these statistical overheads are coded as part of each block. The truncated block of BTC is the one-bit output of the quantizer for every pixel in the block. Various methods have been proposed over the last twenty years, such as BTC and Absolute Moment Block Truncation Coding (AMBTC) [6]. AMBTC preserves the higher mean and lower mean of each block and uses these quantities to quantize the output. AMBTC provides better image quality than BTC and is also considerably faster.
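As an illustration of the classical BTC quantizer described above, the MATLAB-style sketch below processes a single 4x4 block: it preserves the block mean and standard deviation and transmits them together with a one-bit plane; the sample pixel values are illustrative.

blk = double([82 80 79 78; 90 88 85 83; 120 118 115 110; 130 128 125 121]);
m  = mean(blk(:));                     % block mean
sd = std(blk(:), 1);                   % block standard deviation
bitPlane = blk >= m;                   % one-bit quantizer output per pixel
q = nnz(bitPlane);  p = numel(blk) - q;
hi = m + sd * sqrt(p / q);             % reconstruction level for the '1' pixels
lo = m - sd * sqrt(q / p);             % reconstruction level for the '0' pixels
rec = lo * ones(size(blk));
rec(bitPlane) = hi;                    % decoded block rebuilt from (mean, std, bit plane)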
The most widely used compression technique is the JPEG compression algorithm. The PSNR values obtained using this technique are very good, but the method has a major drawback of loss of information at boundaries and edges. Since JPEG partitions the image into frequency components and discards the higher frequency components, loss of information occurs. The method is also dimension specific, and the compressed image size is not very small.
In order to overcome these problems of the JPEG technique, fractal image compression using quad tree decomposition and Huffman coding is used. In this method, every image block produced by the decomposition is individually processed and compressed, thereby reducing the loss of information. This technique is also more secure than other techniques, because the compressed image is stored in the form of a code word, reducing the risk of a security attack.
This method also reduces the storage capacity needed for the compressed image: since the compressed image is a code word, little data is required to represent it, which reduces the space needed to store the data.
A further advantage is that the time required for compression and decompression is much less than for other compression techniques.
Chapter 2
Literature Survey
Fractal image coding based on the quad tree is a novel technique for still image compression. Compared with other image compression methods, fractal image coding has the advantages of a higher compression ratio, higher decoding speed, and a decoded image that is independent of the resolution of the original. However, it spends too much time searching for the best matching Ri block during encoding. To improve the encoding speed, the search range must be narrowed and the search strategy improved so that the best matching block still falls within the range. In this paper, an improved fractal image compression algorithm based on the quad tree is proposed. First, the construction of the search attractor is improved by constructing it directly from the big Di block, which saves a great deal of searching time during encoding. Second, the attractors can be self-constructed, so the situation in traditional methods where no attractor is found does not arise. Experimental results show that the algorithm makes image coding faster and more efficient [4].
With its high compression ratio performance, fractal image compression has become a hot topic of research among image compression techniques. However, the encoding time of the traditional fractal compression technique is too long to achieve real-time image compression, so it cannot be widely used. Based on the theory of fractal image compression, this paper presents an improved algorithm from the aspect of image segmentation [5].
Fractal image compression is a lossy compression scheme that exploits the affine redundancy
in images and represents them in the form of mathematical model. The scheme has capability
of very high compression ratio, high resolution and fast decoding time. The encoding
procedure, however, is computationally very expensive and thus requires long encoding time.
The encoding procedure consists of dividing the image into range blocks and domain blocks,
and then it takes a range block and matches it with the domain block by properly aligning the
domain block. It repeats the above procedure for the entire domain pool and then encodes the
range block in terms of the domain block that is most similar to it. This paper is an effort to reduce the complexity and improve the accuracy of the first part of the algorithm, i.e. range and domain block alignment and matching. The reduction in computational complexity is mainly achieved by doing the alignment and matching in the Fourier domain using a fast convolution technique. The improved accuracy comes from the ability of the algorithm to find rotation angles that are not restricted to 90°, 180°, 270° and 360°, and also to find the translation of the domain block with respect to the range block, which is often ignored in many fractal encoders when aligning and matching the domain block with the range block [7].
Huffman codes are widely used in image and video compression. We propose a decoding scheme for Huffman codes which requires only a few computations per codeword, independent of the number of codewords n, the height of the Huffman tree h, or the length of a codeword. The memory requirement of the proposed scheme depends on the Huffman tree; however, for sparse Huffman trees (JPEG, H.263, MPEG), it is O(n) [8].
Fractal image coding has the advantage of a higher compression ratio, but is a lossy compression scheme. The encoding procedure consists of dividing the image into range blocks and domain blocks; it then takes a range block and matches it with a domain block. The image is encoded by partitioning the domain blocks and applying affine transformations.
In fractal image compression, the encoding step is computationally expensive and time consuming, which limits the practical applications of fractal image compression. Exploiting the characteristics of fractal image encoding, this paper presents an improved image block classification method to speed up the encoding process. The classification features are the number and positions of the local extreme points in the row direction of an image block, and a three-layer tree classifier that provides stepwise, precise classification is used. Comparative experimental results show the validity of the presented approach in accelerating the fractal encoding process while holding the quality of the reconstructed image, and show that the presented approach, despite its simple principle, can classify the image blocks more accurately [11].
Even though fractal image compression results in a high compression ratio and good quality of the reconstructed image, it is rarely used due to its time-consuming coding procedure. The slowest stage of the compression in this method is finding the best matched domain block for every range block of the image. In this paper we propose a method which classifies the domain blocks of the image using a special characteristic vector. The search process is thereby accelerated and the quality of the reconstructed image is preserved [12].
Fractals can also be an effective approach for several applications other than image coding and transmission: database indexing, texture mapping, and even pattern recognition problems.
Chapter 3
Project Details
In the block diagram 3.1.1, the input image is read into the system. It is then divided into blocks by the quad tree decomposition block. The divided blocks are then encoded and compressed using the Huffman coding algorithm.
The coded image blocks are then decoded using the Huffman decoding algorithm, and the decompressed image is obtained at the output.
Figure 3.1.1 Block diagram of the compression and decompression process
A quad tree is a tree data structure in which each internal node has exactly four children.
Quad trees are the two-dimensional analog of octrees and are most often used to partition a
two-dimensional space by recursively subdividing it into four quadrants or regions. The data
associated with a leaf cell varies by application, but the leaf cell represents a "unit of
interesting spatial information".
The subdivided regions may be square or rectangular, or may have arbitrary shapes. This data
structure was named a quad tree by Raphael Finkel and J.L. Bentley in 1974. A similar
partitioning is also known as a Q-tree. All forms of quad trees share some common features: they decompose space into adaptable cells, each cell has a maximum capacity and splits when that capacity is reached, and the tree directory follows the spatial decomposition of the quad tree.
The quad tree approach divides a square image into four equal-sized square blocks, and then tests each block to see if it meets some criterion of homogeneity. If a block meets the criterion it is not divided any further; if it does not, it is subdivided into four blocks and the test criterion is applied to those blocks. This process is repeated iteratively until each block meets the criterion. The result may have blocks of several different sizes [4][5][7].
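A small usage example of this decomposition in MATLAB (assuming the Image Processing Toolbox) is sketched below; it decomposes a standard test image and visualizes the resulting block boundaries, with the threshold value chosen purely for illustration.

I = imread('cameraman.tif');            % 256x256 grayscale test image
S = qtdecomp(I, 0.27);                  % split a block if its intensity range exceeds the threshold
blocks = repmat(uint8(0), size(S));     % image used to draw the block boundaries
for dim = [256 128 64 32 16 8 4 2]
    numblocks = length(find(S == dim)); % how many blocks of this size were produced
    if numblocks > 0
        values = repmat(uint8(1), [dim dim numblocks]);
        values(2:dim, 2:dim, :) = 0;    % keep only the top and left edge of each block
        blocks = qtsetblk(blocks, S, dim, values);
    end
end
imshow(blocks, []);                      % white lines mark the quad tree partition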
An affine transformation preserves the following properties:
1. Collinearity between points: three or more points which lie on the same line (called collinear points) continue to be collinear after the transformation.
2. Parallelism: two or more lines which are parallel, continue to be parallel after the
transformation.
3. Convexity of sets: a convex set continues to be convex after the transformation.
Moreover, the extreme points of the original set are mapped to the extreme points of
the transformed set [3].
4. Ratios of lengths
5. Barycentre of weighted collections of points.
There are many compression techniques available in image processing, based on various encoding schemes. Some of the encoding schemes include run-length encoding, DPCM and predictive encoding, arithmetic encoding, chain codes and so on. Of these, Huffman coding is considered one of the most effective and fastest encoding techniques.
Some of the image compression algorithms include JPEG compression, fractal image compression, the LZW algorithm used in GIF and TIFF, area image compression, and the DEFLATE algorithm used in PNG, MNG and TIFF.
Clc, clear all, close all: used to clear the command window, remove variables from the workspace, and close all open figure windows.
Imread: this command is used to read the input image given by the user.
Imresize: resizes the image so that it has the specified number of rows and columns.
Either NUMROWS or NUMCOLS may be NaN, in which case imresize computes
the number of rows or columns automatically in order to preserve the image aspect
ratio.
Qtdecomp: qtdecomp divides a square image into four equal-sized square blocks, and
then tests each block to see if meets some criterion of homogeneity. If a block meets
the criterion, it is not divided any further. If it does not meet the criterion, it is
subdivided again into four blocks, and the test criterion is applied to those blocks.
This process is repeated iteratively until each block meets the criterion. The result
may have blocks of several different sizes.
Hist: bins the elements of Y into 10 equally spaced containers and returns the number
of elements in each container. If Y is a matrix, hist works down the columns.
Huffmandict: code dictionary generator for the Huffman coder. Generates a binary Huffman code dictionary using the maximum variance algorithm for the
distinct symbols given by the SYM vector. The symbols can be represented as a
numeric vector or single-dimensional alphanumeric cell array. The second input,
PROB, represents the probability of occurrence for each of these symbols. SYM and
PROB must be of same length.
Huffmanenco: encodes the input signal, SIG, using the Huffman coding algorithm, based on the code dictionary, DICT. The code dictionary is
generated using the HUFFMANDICT function. Each of the symbols appearing in
SIG must be present in the code dictionary, DICT. The SIG input can be a numeric
vector or a single-dimensional cell array containing alphanumeric values.
Huffmandeco: Huffman decoder; decodes the numeric Huffman code vector COMP
using the code dictionary, DICT. The encoded signal is generated by the
HUFFMANENCO function. The code dictionary can be generated using the
HUFFMANDICT function. The decoded signal will be a numeric vector if the
original signals are only numeric. If any signal value in DICT is alphanumeric, then
the decoded signal will be represented as a single-dimensional cell array.
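The sketch below shows one way the commands listed above could be chained together; it is an illustrative outline only, not the project's actual code. The input file name is hypothetical, and in the actual implementation the quad tree structure S would drive per-block processing before entropy coding.

clc; clear all; close all;                 % clear command window, workspace and figures
I = imread('lena512.bmp');                 % hypothetical input file name
if size(I, 3) == 3, I = rgb2gray(I); end   % work on a grayscale image
I = imresize(I, [256 256]);                % power-of-two size suitable for qtdecomp
S = qtdecomp(I, 0.2);                      % sparse matrix marking the quad tree block sizes
sig = double(I(:)).';                      % pixel stream to be entropy coded
symbols = unique(sig);                     % distinct grey levels present in the image
counts = hist(sig, symbols);               % occurrences of each grey level
dict = huffmandict(symbols, counts / sum(counts));   % Huffman code dictionary
comp = huffmanenco(sig, dict);             % compressed bit stream (the stored code word)
dsig = huffmandeco(comp, dict);            % lossless recovery of the pixel stream
Irec = reshape(uint8(dsig), size(I));      % reconstructed image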
Figure 3.7.1.1 shows the results of the image compression for various dimensions. The results show a large reduction in the size of the image, and the compression ratio and PSNR values are also high.
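For reference, the metrics reported in the tables below are assumed to follow the standard definitions for 8-bit images (peak value 255):

\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-\hat{I}(i,j)\bigr)^{2}, \qquad
\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^{2}}{\mathrm{MSE}}\right)\ \mathrm{dB}, \qquad
\text{compression ratio} = \frac{\text{original size}}{\text{compressed size}}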
Table 3.7.1.1 Results of the image compression for various dimensions
Figures 3.7.1.2, 3.7.1.3, 3.7.1.4 and 3.7.1.5 show the fractal images for which the outputs were obtained.
Table 3.7.1.2 shows the results obtained for these images.
Table 3.7.1.2 Fractal image results
Test image       Dimension    Compression time (sec)   Compression ratio   Decompression time (sec)   PSNR (dB)   Size reduced (%)
Fractal design   512*512      1.79                     5.5860              13.5584                    14.7167     44.72
Christmas tree   512*512      0.7691                   8.2917              9.9925                     23.9640     74.22
Papaya leaf      512*512      4.99                     8.96                50.64                      24.26       90.85
Colour pattern   512*512      4.73                     7.66                60.18                      25.48       42.25
Fern leaf        500*500      2.79                     13.9                28.81                      26.84       92.71
Cauliflower      3276*3276    3.46                     13.84               34.96                      26.63       89.45
Figure 3.7.1.8 and figure 3.7.1.9 show the satellite images for which the results were obtained.
Fig 3.7.1.8 Satellite image of craters
Fig 3.7.1.9 Satellite image of a city
Table 3.7.1.3 shows the results obtained for these two satellite images.
Table 3.7.1.3 Satellite image results
Test satellite image   Dimension   Compression time (sec)   Compression ratio   Decompression time (sec)   PSNR (dB)   Size reduced (%)
Craters                500*500     0.8673                    7.5282              9.0497                     23.8693     96.78
City                   156*157     0.8601                    6.5987              10.4101                    20.2161     33.22
Figures 3.7.2.1, 3.7.2.2, 3.7.2.3, 3.7.2.4 and 3.7.2.5 show the output obtained for the JPEG image compression algorithm.
From table 3.7.2.1 it is clear that the values obtained for JPEG compression are high. But the JPEG output shows greater loss of information, as the information at the edges and boundaries is almost lost.
Figure 3.7.3.1 and figure 3.7.3.2 show the output obtained for the basic DCT algorithm.
From table 3.7.3.1 it is clear that the DCT method for image compression yields lower compression ratios and PSNR values.
Table 3.7.3.2 shows the comparison between fractal image compression, JPEG image compression and DCT image compression.
From the table it is evident that fractal image compression provides better compression ratios and PSNR values than the standard DCT compression algorithm.
However, the PSNR value obtained for the Lena image with the JPEG compression algorithm is higher than with the fractal image compression algorithm. The major disadvantage of the JPEG algorithm is that some of the information in the image is lost during quantization, and the boundaries and edges are not as clear as with the fractal image compression algorithm.
The compression ratios of JPEG and fractal compression are almost equivalent, while the amount of compression achieved is greater with fractal image compression than with JPEG image compression.
The following results were obtained for the same satellite image using fractal image compression with quad tree decomposition and Huffman coding, as shown in figure 3.8.2 and figure 3.8.3.
From these two figures it is evident that the amount of information lost in fractal image compression using quad tree decomposition and Huffman coding is much less (almost negligible) than with the BTC technique.
One of the other commonly used techniques in satellite image compression is run-length encoding. In this method, repeated runs of a value are stored as a single value and count, which reduces the storage required. However, this method has the disadvantage of lower PSNR and compression ratio values, which is overcome by using the fractal image compression with quad tree decomposition and Huffman coding technique.
Another method used in satellite communication is Golomb coding. In this method, the image is converted into blocks of data and each block is assigned a standard value called the Golomb parameter before further processing. This method has the disadvantage that the size of the image cannot exceed a standard dimension; if the dimension of the image is larger, assigning the Golomb parameter takes a long time. This constraint is overcome by fractal image compression using quad tree decomposition and Huffman coding.
Chapter 4
4.1 Conclusion
Fractal image compression using quad tree decomposition and Huffman coding was executed for three sets of images of dimensions 256*256, 512*512 and 1024*1024. The images that were compressed are Lena, Cameraman and Baboon.
The results were obtained for fractal images of variable dimensions. The results were
also obtained for satellite images.
The results were compared. For the 256*256 Lena image, it was found that JPEG and fractal image compression yield similar results, with less information loss in fractal image compression than in JPEG image compression.
From all the results obtained, it is evident that fractal image compression using quad
tree decomposition and Huffman coding provides efficient results when compared to
DCT and JPEG technique.
It is also evident that satellite images can be compressed more effectively, without much loss of information, than with existing compression techniques. The size is reduced by more than 70% using fractal image compression, and the information is effectively recovered after decompression with very little loss.
Fractal image compression using quad tree decomposition and Huffman coding can
be used for any dimension of the image and any format of the image.
Some of the advantages of fractal image compression using quad tree decomposition
and Huffman coding over JPEG image compression technique are as follows:
1. There is not much loss of information at the edges and borders.
2. Regions with high frequency components are not discarded; all the information is processed irrespective of its content.
3. Time taken for compression and decompression is less as the number of stages for
compression and decompression is less.
4. More secure encryption and decryption as the Huffman code word is difficult to
manipulate.
5. Resolution independent. Any image of any resolution can be analyzed and
compressed.
6. Satellite images can be processed more efficiently, as the size of the compressed image is very small and the time taken for the process is also short.
7. Any format of image can be used as input (JPEG, PNG, GIF, etc.).
4.2 References
[1] Veena Devi.S.V and A.G.Ananth (2012) “Fractal Image compression using quad
tree decomposition and Huffman coding”, SIPIJ, Vol.3, No.2,
[2] Fisher Y, editor (1995) “Fractal image compression: theory and application”, New
York, Springer-Verlag.
[3] Arnaud E. Jacquin, (1993) “Fractal image coding”, Proceedings of IEEE VOL.81,
pp. 1451-1465
[4] Bohong Liu and Yung Yan, (2010) “An Improved Fractal Image Coding Based on
the Quadtree”, IEEE 3rd International Congress on Image and Signal Processing, pp.
529-532.
[5] Hui Yu, Li Li, Dan Liu, Hongyu Zhai, Xiaoming Dong, (2010) “Based on Quad
tree Fractal Image Compression Improved Algorithm for Research”, IEEE Trans,
pp.1-3.
[6] Barnsley MF, (1993) "Fractals Everywhere", 2nd ed, San Diego: Academic Press.
[7] Dr. Muhammad Kamran, Amna Irshad Sipra and Muhammd Nadeem, (2010) “A
novel domain Optimization technique in fractal image compression”, IEEE
Proceedings of the 8th World Congress on Intelligent Control and Automation, pp.
994-999.
[8] Manoj Aggarwal and Ajai Narayan (2000) “Efficient Huffman Decoding”, IEEE
Trans, pp.936-939.
[9] H.B. Kekre, Tanuja K. Sarode, Sanjay R. Sange (2011) "Image reconstruction using Fast Inverse Halftone & Huffman coding Technique", IJCA, Volume 27, No. 6, pp. 34-40.
[10] VeenaDevi.S.V and A.G.Ananth (2011) “Fractal Image Compression of Satellite
Imageries”, IJCA, Volume 30-No.3, pp.33-36.
[11] Jinshu Han (2007) “Speeding up Fractal Image Compression Based on Local
Extreme Points”, IEEE Computer Society, pp. 732-737.
[12] Narges Rowshanbin, Shadrokh Samavi and Shahram Shirani (2006)
“Acceleration of Fractal Image Compression Using Characteristic Vector
Classification”, IEEE CCECE/CCGEI, pp.2057-2060.
[13] Riccardo Distasi, Michele Nappi and Daniel Riccio (2006) “A Range/Domain
Approximation Error Based Approach for Fractal Image Compression” IEEE Trans
on Image Processing VOL.15, No1. pp.89-97.
[14] S. Chandravadana, N. Nithyanandam (2014) “Compression of Satellite Images
Using Lossy and Lossless Coding Techniques”, International Journal of Engineering
and Technology (IJET), Vol 6 No 1 Feb-Mar 2014.