Article
Lossless Image Compression Techniques:
A State-of-the-Art Survey
Md. Atiqur Rahman and Mohamed Hamada *
School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu City,
Fukushima, 965-8580, Japan; [email protected]
* Correspondence: [email protected]
Received: 3 September 2019; Accepted: 7 October 2019; Published: 11 October 2019
Abstract: Modern daily life activities produce a huge amount of data, which creates a big challenge for
storing and communicating them. As an example, hospitals produce a huge amount of data on a daily
basis, which makes it a big challenge to store these data in limited storage or to communicate them through
the restricted bandwidth over the Internet. Therefore, there is an increasing demand for more research in
data compression and communication theory to deal with such challenges. Such research responds to
the requirements of data transmission at high speed over networks. In this paper, we focus on a deep
analysis of the most common techniques in image compression. We present a detailed analysis of
run-length, entropy and dictionary based lossless image compression algorithms with a common
numeric example for a clear comparison. Following that, the state-of-the-art techniques are discussed
based on some benchmarked images. Finally, we use standard metrics such as average code length
(ACL), compression ratio (CR), peak signal-to-noise ratio (PSNR), efficiency, encoding time (ET) and
decoding time (DT) to measure the performance of the state-of-the-art techniques.
Keywords: lossless and lossy compression; run-length; Shannon–Fano; Huffman; LZW; arithmetic
coding; average code length; compression ratio; PSNR and efficiency
1. Introduction
The use of computers in everyday activities is increasing virtually everywhere. As
a result, sending a large amount of data, especially images and videos, over the Internet is a most
challenging issue because of limited bandwidth and storage capacity; it is also time-consuming
and costly, as reported in [1]. For instance, a conventional movie camera customarily uses 24 frames per
second, whereas recent video standards allow 120, 240, or 300 frames per second. Video is a series
of still images or frames displayed per second, and a color image contains three panels: red, green and
blue. Suppose you would like to send or store a three-hour color movie file of 1200 × 1200 dimension
in which 50 frames are displayed every second. If a pixel is coded in 8 bits, it takes approximately
(1200 × 1200 × 3 × 8 × 50 × 10,800) bits = 17,797,851.5625 Megabits = 2172.5893 gigabytes of storage,
which is a very big challenge to store on a computer or send over the Internet. Here, three is the
number of channels of a color image, that is, R, G, and B, and 10,800 is the total number of seconds.
Additionally, the transmission medium and latency are two major issues for data transmission. If
the video file is sent over a 100 Mbps medium, approximately (17,797,851.5625 Megabits)/100 =
177,978.5156 s = 49.4385 h is required because the medium can send 100 Megabits per second. For
these reasons, compression is required: it is a paramount way to represent an image with fewer
bits while keeping its quality, so that an immense volume of data can be sent through a limited
bandwidth at high speed over the Internet, as reported in [2,3]. The general block diagram of an
image compression procedure is shown in Figure 1.
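As a quick sanity check of the arithmetic above, the following short Python sketch reproduces the storage and transmission figures; it assumes, as the numbers in the text imply, that 1 Megabit = 2^20 bits and 1 gigabyte = 2^30 bytes:

```python
# Sanity check of the movie storage example (Python).
width, height, channels = 1200, 1200, 3   # frame dimensions and R, G, B panels
bits_per_pixel = 8                        # each sample coded in 8 bits
fps = 50                                  # frames displayed per second
seconds = 3 * 60 * 60                     # three hours = 10,800 s

total_bits = width * height * channels * bits_per_pixel * fps * seconds
megabits = total_bits / 2**20             # assumes 1 Megabit = 2^20 bits
gigabytes = total_bits / 8 / 2**30        # bits -> bytes -> gigabytes

print(f"{megabits:,.4f} Mb = {gigabytes:,.4f} GB")    # 17,797,851.5625 Mb, 2,172.5893 GB
print(f"{megabits / 100 / 3600:.4f} h at 100 Mbps")   # 49.4385 h
```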
There are many image compression techniques; a technique is said to be the best when it yields
a shorter average code length, lower encoding and decoding times, and a higher compression ratio.
Image compression algorithms are extensively applied in medical imaging, computer communication,
military communication via radar, teleconferencing, magnetic resonance imaging (MRI), broadcast
television and satellite images, as reported in [4]. Some of these applications require high-quality
visual information, while others need lower quality, as reported in [5,6].
From these perspectives, compression is divided into two types: lossless and lossy. In lossless
compression, all original data are recovered exactly from the encoded data set, whereas a lossy technique
retrieves nearly all data by permanently eliminating certain information, especially redundant information,
as reported in [7,8]. Lossless compression is mostly utilized in facsimile transmission of bitonal images,
the ZIP file format, digital medical imagery, Internet telephony, and streaming video files, as reported in [9].
The foremost intention of implementing a compression algorithm is to reduce redundant data,
as reported in [10]. Run-length coding, for example, is a lossless procedure where a set of consecutive
identical pixels (a run of data) is preserved as a single value and a count, as stated in [11,12]. However,
long runs of data rarely exist in real images, as mentioned in [13,14], which is the main problem
of run-length coding. Article [15] shows that a chain code binarization with run-length and LZ77
provides a more satisfactory result than the traditional run-length technique from a compression ratio
perspective. The authors in [16] show a different way of compressing using a bit series of a bit plane
and demonstrate that it provides a better result than conventional run-length coding.
Entropy encoding techniques were proposed to solve the difficulties of the run-length algorithm.
Entropy coding encodes the source symbols of an image with code words of different lengths.
There are some well-recognized entropy coding methods, such as Shannon–Fano, Huffman and
arithmetic coding. The first entropy coding technique is Shannon–Fano, which gives a better result
than run-length, as reported in [17]. The authors in [18] show that Shannon–Fano coding provides 30.64%
and 36.51% better results for image and text compression, respectively, compared to run-length coding.
However, Nelson et al. stated in [19] that Shannon–Fano sometimes generates two different codes
for the same symbol and does not guarantee optimal codes, which are the two main problems of the
algorithm. From these perspectives, Shannon–Fano coding is an inefficient data compression technique,
as reported in [20,21].
Huffman is another entropy coding algorithm, which solves the problems of Shannon–Fano,
as reported in [22,23]. In that technique, pixels that occur more frequently are encoded using
fewer bits, as shown in [24,25]. Although Huffman coding is a good compression technique, Rufai et
al. proposed a singular value decomposition (SVD) and Huffman coding based image compression
procedure in [26], where SVD is used to decompose an image first and the rank is reduced by ignoring
some lower singular values. Lastly, the processed representation is coded by Huffman coding, which
shows a better result than JPEG2000 for lossy compression. In [27], three algorithms, Huffman, a fractal
algorithm and Discrete Wavelet Transform (DWT) coding, were implemented and compared
to find the best coding procedure among them. It shows that Huffman works better to reduce
redundant data and DWT improves the quality of a compressed image, whereas the fractal algorithm
provides a better compression ratio. The main problem of Huffman coding is that it is very sensitive
to noise: it cannot reconstruct an image perfectly from an encoded image if any changes occur,
as reported in [28].
Another lossless entropy method is arithmetic coding, which gives a shorter average code length
than Huffman coding, as reported in [29]. In [30], Masmoudi et al. proposed a modified arithmetic
coding technique that encodes an image block-row wise from top to bottom and block by block from
left to right, instead of pixel by pixel, using a statistical model. The precise probabilities between the
current block and its neighboring blocks are calculated by reducing the Kullback–Leibler divergence.
As a result, the bitrates are decreased by around 15.5% and 16.4% for the static and adaptive orders,
respectively. Using adaptive arithmetic coding and finite mixture models, a block-based lossless
compression was proposed in [31]. Here, an image is partitioned into non-overlapping blocks and
every block is encoded individually using arithmetic coding. This algorithm provides 9.7% better
results than JPEG-LS, reported in [32,33], when the work is done in a predicted error domain instead
of the pixel domain. Articles [34,35] state that arithmetic coding provides a better compression ratio.
However, it takes so much time that it is virtually unusable for dynamic compression. Furthermore,
its use is restricted by patents. On the other hand, although Huffman coding provides marginally less
compression, it takes much less time to encode an image than arithmetic coding, which is why it is
good for dynamic compression, as reported in [36,37]. Furthermore, a single bit error can corrupt an
entire image encoded by arithmetic coding because it has very poor error resistance, as reported in
[38–40]. In addition, the primary limitation of entropy coding is that it increases the CPU complexity,
as stated in [41,42].
LZW (Lempel–Ziv–Welch) is a dictionary based compression technique that reads a sequence
of pixels, groups the pixels into strings, and finally converts the strings into codes. In that
technique, a code table with 4096 common entries is used, and the fixed codes 0–255 are assigned
first as initial entries because an image can have a maximum of 256 different pixel values, from 0
to 255. It works best for text compression, as reported in [43]. However, Saravanan et al. propose
an image coding procedure using LZW that compresses an image in two stages, as shown in [44].
Firstly, an image is encoded using Huffman coding. Secondly, after concatenating all the code
words, LZW is applied to compress the encoded image, which provides a better result. However, the
main challenge of that technique is managing the string table.
In this study, we use a common numeric data set and show, step by step, the implementation
procedures of the state-of-the-art data compression techniques mentioned. This demonstrates the
comparisons among the methods and explains their problems based on the results of some benchmarked
images. The organization of this article is as follows: the encoding and decoding procedures and the
analysis of run-length, Shannon–Fano, Huffman, LZW and arithmetic coding are discussed in Section 2.
The experimental results on some benchmarked images are explained in Section 2.2, and concluding
statements are presented in Section 3.
It shows that only twenty-six elements are preserved in two matrices instead of 50 items, which
means that (26 × 8) = 208 bits are sent to the decoder instead of (50 × 8) = 400 bits. Thus, the
average code length is 208/50 = 4.16 bits, and ((8 − 4.16)/8) × 100 = 48% of working memory is saved
for the data set.
1. Read each element from the items array and write the element repeatedly until its corresponding
number in the position array is reached.
As an example, the first 6 and 7 of the items array are each written once, at indices 1 and 2 in the
new decoded list, whereas the next 6 and 7 are repeated three times at indices 3 to 5
and nine times at indices 6 to 14 in the same decoded list, respectively. This process continues
until all elements of the items array have been read. Finally, we get the same list as the
original list (A) after decoding.
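As an illustration, a minimal Python sketch of this encode/decode procedure is given below. The 14-element input is only a hypothetical prefix of the 50-item list (A), chosen to be consistent with the indices mentioned above:

```python
# Minimal run-length encode/decode along the lines described in the text.
def rle_encode(data):
    items, positions = [], []            # mirror the items/position arrays
    for value in data:
        if items and items[-1] == value:
            positions[-1] += 1           # extend the current run
        else:
            items.append(value)          # start a new run
            positions.append(1)
    return items, positions

def rle_decode(items, positions):
    decoded = []
    for value, count in zip(items, positions):
        decoded.extend([value] * count)  # write `value` exactly `count` times
    return decoded

A = [6, 7, 6, 6, 6] + [7] * 9            # hypothetical prefix of list (A)
items, positions = rle_encode(A)         # items=[6,7,6,7], positions=[1,1,3,9]
assert rle_decode(items, positions) == A
```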
Run-length coding does not perform any compression on array C. C contains seven different
components (7, 5, 4, 6, 3, 2, 1), and their probabilities are 0.42, 0.26, 0.08, 0.08, 0.06, 0.06 and 0.04,
respectively. As indicated by the algorithm, the two groups left (0.42, 0.08) and right (0.26, 0.08, 0.06,
0.06, 0.04) are made, and the Shannon–Fano encoding system is applied as demonstrated in Figure 2.
Entropy, efficiency, ACL, CR, mean square error (MSE) and PSNR are determined using the
following equations, which are used to measure the performance of a compression algorithm, where
Pro_i, B_i, OR, CO and MAX represent the probability of the ith symbol, the length of the code word of
the ith symbol, the original image, the compressed image and the maximum variation of a data set,
respectively. The encoded results of the array (C) appear in Table 1, where E_i represents an encoded
code word of the ith symbol:
$$\mathrm{entropy} = -\sum_{i=0}^{N-1} Pro_i \log_2 Pro_i, \qquad (1)$$

$$\mathrm{efficiency} = \frac{\mathrm{entropy}}{ACL} \times 100\%, \qquad (2)$$

$$ACL = \sum_{i=1}^{NP} Pro(i)\, B_i, \qquad (3)$$

$$CR = \frac{\text{size of } OR}{\text{size of } CO}, \qquad (4)$$

$$MSE = \frac{1}{M \times N} \sum_{p=0}^{M-1} \sum_{q=0}^{N-1} \big(OR(p,q) - CO(p,q)\big)^2, \qquad (5)$$

$$PSNR = 10 \log_{10} \frac{MAX^2}{MSE}. \qquad (6)$$
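For concreteness, the following Python sketch (NumPy assumed) implements Equations (1)–(6); the last lines check it against the Shannon–Fano figures reported in Table 1 below:

```python
import numpy as np

def entropy(p):                                   # Equation (1)
    p = np.asarray(p)
    return -np.sum(p * np.log2(p))

def acl(p, lengths):                              # Equation (3)
    return np.sum(np.asarray(p) * np.asarray(lengths))

def efficiency(p, lengths):                       # Equation (2)
    return entropy(p) / acl(p, lengths) * 100.0

def mse(original, compressed):                    # Equation (5)
    diff = np.asarray(original, float) - np.asarray(compressed, float)
    return np.mean(diff ** 2)

def psnr(original, compressed, max_value=255):    # Equation (6)
    e = mse(original, compressed)
    return np.inf if e == 0 else 10 * np.log10(max_value ** 2 / e)

p = [0.42, 0.26, 0.08, 0.08, 0.06, 0.06, 0.04]    # probabilities of (C)
b = [2, 2, 2, 4, 4, 4, 4]                         # Shannon-Fano code lengths
print(entropy(p), acl(p, b), efficiency(p, b))    # ~2.289, 2.48, ~92.3%
```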
Table 1. Shannon–Fano encoded results of the array (C).

i   Pi     Ei     Bi   Pi × Bi   Pi log2 Pi   CR      BPP
7   0.42   00     2    0.84      −0.526
4   0.08   01     2    0.16      −0.292
5   0.26   10     2    0.52      −0.505
6   0.08   1100   4    0.32      −0.292       3.226   0.31
3   0.06   1110   4    0.24      −0.244
2   0.06   1111   4    0.24      −0.244
1   0.04   1101   4    0.16      −0.186
ACL = 2.48                       Entropy = 2.289
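A generic recursive Shannon–Fano builder can be sketched as follows (Python). It splits the sorted probabilities so that the two halves' sums are as equal as possible; since Figure 2 uses a particular grouping (left = {0.42, 0.08}), the codewords this sketch produces can differ from Table 1 while still being valid prefix codes:

```python
# Sketch of Shannon-Fano coding: split the symbol list into two groups of
# nearly equal probability mass, prefix '0'/'1', and recurse on each group.
def shannon_fano(symbols, probs):
    if len(symbols) == 1:
        return {symbols[0]: ""}
    total, run = sum(probs), 0.0
    split, best = 1, float("inf")
    for k in range(1, len(symbols)):
        run += probs[k - 1]
        diff = abs(total - 2 * run)      # imbalance if we split before index k
        if diff < best:
            best, split = diff, k
    codes = {s: "0" + c for s, c in shannon_fano(symbols[:split], probs[:split]).items()}
    codes.update({s: "1" + c for s, c in shannon_fano(symbols[split:], probs[split:]).items()})
    return codes

codes = shannon_fano([7, 5, 4, 6, 3, 2, 1],
                     [0.42, 0.26, 0.08, 0.08, 0.06, 0.06, 0.04])
```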
1. Read each bit from the encoded bitstream and scan the tree until a leaf node is found. When
a leaf node is discovered, read the symbol of the node as the decoded value; this process
proceeds until the scanning of the encoded bitstream is finished.
Table 2. Huffman encoded results of the array (C).

i   Pi     Ei      Bi   Pi × Bi   Pi log2 Pi   CR      BPP
7   0.42   1       1    0.42      −0.526
5   0.26   01      2    0.52      −0.505
4   0.08   0001    4    0.32      −0.292
6   0.08   0010    4    0.32      −0.292       3.448   0.29
3   0.06   0011    4    0.24      −0.244
2   0.06   00000   5    0.3       −0.244
1   0.04   00001   5    0.2       −0.186
ACL = 2.32                        Entropy = 2.289
1. Recreate the equivalent Huffman tree built in the encoding step using the probabilities.
2. Each bit is scanned from the encoded bitstream, traversing the tree node by node until a leaf
node is reached. When a leaf node is discovered, the symbol is read from the
node. This process proceeds until the bitstream is finished.
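A compact heap-based construction of the Huffman codes is sketched below (Python standard library only). The codeword lengths reproduce Table 2 (1, 2, 4, 4, 4, 5, 5 bits); the exact bit patterns depend on tie-breaking, so they may differ from the table:

```python
import heapq
from itertools import count

def huffman_codes(prob_map):
    """Return {symbol: codeword} for a {symbol: probability} map."""
    tie = count()                          # tie-breaker so dicts are never compared
    heap = [(p, next(tie), {s: ""}) for s, p in prob_map.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)    # two least-probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

codes = huffman_codes({7: 0.42, 5: 0.26, 4: 0.08, 6: 0.08,
                       3: 0.06, 2: 0.06, 1: 0.04})
print(sorted((s, len(c)) for s, c in codes.items()))   # lengths as in Table 2
```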
IF FD + ND exists in the table, THEN
    FD = FD + ND,
ELSE
    Store the code for FD as encoded data and insert FD + ND into the table. In addition, set FD = ND.
Since the previously mentioned original list (C) contains only 7 different values (1–7), only 1–7
are inserted into the table as the initial dictionary. Applying the LZW encoding procedure on C is
shown in Table 3, and we get the encoded list that appears in Table 4. Finally, the encoded bitstream is
sent to the decoder, where each piece of encoded data is converted into 6-bit binary, on the grounds
that the biggest value in the encoded list is 33 and just 6 bits are required to represent 33.
Table 3. LZW encoding of the list (C): encoded output and the dictionary built during encoding.

Row Number   Encoded Output    Dictionary Index   Dictionary Entry
1            -                 1                  1
2            -                 2                  2
3            -                 3                  3
4            -                 4                  4
5            -                 5                  5
6            -                 6                  6
7            -                 7                  7
8            1                 8                  16
9            6                 9                  67
10           7                 10                 76
11           6                 11                 66
12           9                 12                 677
13           7                 13                 77
14           7                 14                 74
15           4                 15                 47
16           13                16                 777
17           13                17                 775
18           5                 18                 57
19           14                19                 744
20           15                20                 474
21           15                21                 477
22           16                22                 7777
23           16                23                 7775
24           18                24                 577
25           7                 25                 75
26           5                 26                 55
27           24                27                 5773
28           3                 28                 33
29           3                 29                 32
30           2                 30                 23
31           29                31                 322
32           2                 32                 25
33           26                33                 555
34           26                34                 556
35           6                 35                 65
36           33                36                 5555
37           26                37                 551
38           1                 -                  -
39           0 (Stop Code)     -                  -
The average code length is 3.84 bits, as shown in Table 4. Thus, LZW saves 36% of memory,
which is 28.7356% and 29.3103% more than Shannon–Fano and Huffman coding, respectively, for the
same data set. Furthermore, only the encoded bitstream is sent to the decoder for decompression.
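The FD/ND procedure described earlier can be sketched in a few lines of Python. The dictionary is seeded with codes 1–7, as in Table 3 (0–255 would be used for a full 8-bit image); the demo input is hypothetical, since the full list (C) is not reproduced here:

```python
# Sketch of LZW encoding: grow the current string FD while FD+ND is in the
# table; otherwise emit the code for FD, insert FD+ND, and restart from ND.
def lzw_encode(data, alphabet):
    table = {(s,): i for i, s in enumerate(alphabet, start=1)}  # initial dictionary
    next_code = len(alphabet) + 1
    fd, out = (data[0],), []
    for nd in data[1:]:
        if fd + (nd,) in table:
            fd = fd + (nd,)                 # FD = FD + ND
        else:
            out.append(table[fd])           # store the code for FD
            table[fd + (nd,)] = next_code   # insert FD + ND into the table
            next_code += 1
            fd = (nd,)                      # FD = ND
    out.append(table[fd])                   # flush the final string
    return out

codes = lzw_encode([1, 6, 7, 6, 7, 7, 7], alphabet=range(1, 8))  # -> [1, 6, 7, 9, 7, 7]
```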
ELSE
Assign the translation of NC to DS, the first code of DS to NC, NC to FEV and add
FEV+NC into the table. Furthermore, send DS to the output.
For instance, for the mentioned encoded bitstream, each six bits are converted into a decimal value,
and 1–7 are assigned as the initial dictionary, as shown in Table 5. The decoding demonstration for the
encoded data is shown in Table 6, and we get the same list as C after decoding.
Table 5. Initial dictionary.

Index   Entry
1       1
2       2
3       3
4       4
5       5
6       6
7       7
all cannot be reduced, and it is good for reducing the size of files that carry more repeated data, as
reported in [49,50].
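The matching decoder can be sketched as follows (Python), reusing lzw_encode from the sketch above. It also handles the standard corner case where a code refers to the entry currently being built, in which case the entry is the previous string plus its own first pixel:

```python
# Sketch of LZW decoding: rebuild the dictionary on the fly from the codes.
def lzw_decode(codes, alphabet):
    table = {i: (s,) for i, s in enumerate(alphabet, start=1)}
    next_code = len(alphabet) + 1
    prev = table[codes[0]]
    out = list(prev)
    for code in codes[1:]:
        entry = table.get(code, prev + (prev[0],))  # corner case: code not yet in table
        out.extend(entry)
        table[next_code] = prev + (entry[0],)       # mirror the encoder's insertion
        next_code += 1
        prev = entry
    return out

data = [1, 6, 7, 6, 7, 7, 7]                        # hypothetical pixel run
codes = lzw_encode(data, alphabet=range(1, 8))
assert lzw_decode(codes, alphabet=range(1, 8)) == data
```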
$$\mathrm{tag} = \frac{LLL + LUL}{2}. \qquad (7)$$
For the example shown in Figure 4, the LLL and LUL are 0.9551925 and 0.9551975, respectively.
Thus, the tag is 0.955195. The bitstream of the tag value is 001111000110111. Thus, the average
code length is 15/10 = 1.5 bits and the compression ratio is 5.3333, where 15 is the length of
the tag. Finally, the tag's bitstream, the symbols (4, 3, 1, 2) and their corresponding probabilities
(0.5, 0.2, 0.2, 0.1) are sent to the decoder for decompression. When arithmetic coding is
applied on the data set (C), we get the bitstream [00000101100110101110100101111100101010111011110011
11110101011001011100110011100000000101011010010110110111100001011] from the
provided tag. Thus, the average code length and compression ratio are 2.3000 bits and 3.4783, respectively,
which saves 71.25% of storage. It appears that run-length, Shannon–Fano, Huffman and LZW coding
use 44.7115%, 7.2581%, 6.5041% and 33.908% more memory than arithmetic coding.
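A floating-point sketch of the tag computation is given below (Python). The intervals are those of the Figure 4 example, i.e. 4 → [0, 0.5), 3 → [0.5, 0.7), 1 → [0.7, 0.9), 2 → [0.9, 1.0), and the ten-symbol sequence is the one recovered by the decoding steps below; a practical coder would use scaled integer arithmetic to avoid precision loss:

```python
# Sketch of arithmetic encoding: narrow [low, high) by each symbol's interval,
# then take the midpoint of the final interval as the tag (Equation (7)).
RANGES = {4: (0.0, 0.5), 3: (0.5, 0.7), 1: (0.7, 0.9), 2: (0.9, 1.0)}

def arithmetic_tag(sequence, ranges=RANGES):
    low, high = 0.0, 1.0
    for symbol in sequence:
        span = high - low
        s_low, s_high = ranges[symbol]
        low, high = low + span * s_low, low + span * s_high
    return low, high, (low + high) / 2              # LLL, LUL, tag

lll, lul, tag = arithmetic_tag([2, 3, 4, 3, 4, 4, 4, 1, 4, 1])
print(lll, lul, tag)   # ~0.9551925, ~0.9551975, ~0.955195 (up to float rounding)
```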
1. tag = 0.955195. Since 0.9 ≤ tag < 1.0, the decoded value is 2 because the symbol 2 is in that range.
2. NT1 = (tag − LL)/r = 0.55195, which lies between 0.5 and 0.7, so the decoded value is 3.
3. NT2 = (NT1 − LL)/r = 0.25975, which lies between 0 and 0.5, so the decoded value is 4.
4. NT3 = (NT2 − LL)/r = 0.5195, which lies between 0.5 and 0.7, so the decoded value is 3.
5. NT4 = (NT3 − LL)/r = 0.0975, which lies between 0 and 0.5, so the decoded value is 4.
6. NT5 = (NT4 − LL)/r = 0.195, which lies between 0 and 0.5, so the decoded value is 4.
7. NT6 = (NT5 − LL)/r = 0.39, which lies between 0 and 0.5, so the decoded value is 4.
8. NT7 = (NT6 − LL)/r = 0.78, which lies between 0.7 and 0.9, so the decoded value is 1.
9. NT8 = (NT7 − LL)/r = 0.4, which lies between 0 and 0.5, so the decoded value is 4.
10. NT9 = (NT8 − LL)/r = 0.8, which lies between 0.7 and 0.9, so the decoded value is 1.
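The ten steps above correspond to the following decode loop (Python), reusing RANGES from the encoding sketch; NT = (tag − LL)/r renormalizes the tag after each emitted symbol:

```python
# Sketch of arithmetic decoding: find the interval containing the tag, emit
# its symbol, and renormalize with NT = (tag - LL) / r, as in the steps above.
def arithmetic_decode(tag, n_symbols, ranges=RANGES):
    out = []
    for _ in range(n_symbols):
        for symbol, (s_low, s_high) in ranges.items():
            if s_low <= tag < s_high:
                out.append(symbol)
                tag = (tag - s_low) / (s_high - s_low)
                break
    return out

print(arithmetic_decode(0.955195, 10))   # [2, 3, 4, 3, 4, 4, 4, 1, 4, 1]
```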
The encoding and decoding times are the periods of time required to encode and decode an
image. The average code length determines the number of bits used to store a pixel on average, and
the compression ratio represents the ratio between the original and compressed images. The peak
signal-to-noise ratio (PSNR) is used to measure the quality of an image. Lower encoding and decoding
times, a shorter average code length and a higher compression ratio tell how much faster an algorithm
is and how much less memory it uses. A higher efficiency and PSNR convey that an image contains
high-quality information. The encoding time, decoding time, average code length and compression
ratio are shown in Tables 7–10, whereas Figures 7–11 show the graphical representations of encoding
time, decoding time, average code length, compression ratio and efficiency, respectively, based on the
twenty-five images.
Table 7 shows that arithmetic and run-length coding take 4.0178 and 0.1349 milliseconds on
average, whereas Shannon–Fano, Huffman and LZW take 0.5873, 0.2488 and 0.1054 milliseconds,
respectively, to encode the images. It appears that arithmetic coding uses 96.6424%, 85.3825%, 93.8076%
and 97.3767% more time than run-length, Shannon–Fano, Huffman and LZW coding, respectively.
However, Huffman coding uses much less time (0.0062 ms) on average in decoding, whereas arithmetic
coding uses the most time, which is demonstrated in Table 8. On the other hand, LZW uses more time
than Shannon–Fano and Huffman coding but less than arithmetic and run-length coding. Figures 7
and 8 show the graphical representations of encoding and decoding time for comparison.
Tables 9 and 10 show the average code length and compression ratio, respectively. They show that
RLE uses 10.5618 bits per pixel on average, which means 24.2553% more memory is used than for the
original images; this is the reason it is not used directly for real image compression. On the other hand,
LZW uses the lowest number of bits (5.9365) per pixel, but the problem of LZW is that it sometimes
uses more memory than the original, which happened for image 21, as shown in Table 9. Arithmetic
coding uses the second lowest number of bits per pixel on average. Thus, arithmetic coding is the best
coding technique because it provides a better compression ratio than the other state-of-the-art
techniques except LZW, as shown in Table 10. Figures 9 and 10 demonstrate the graphical
representations of average code length and compression ratio, respectively, for comparison.
All the state-of-the-art strategies are lossless. Thus, the peak signal-to-noise ratio and mean squared
error (MSE) for each algorithm are infinity and zero, respectively, in every case. However, arithmetic
and run-length coding on average have the highest (99.9899) and lowest (58.6783) efficiency among
the methods, as shown in Figure 11. The efficiency of LZW coding sometimes provides better outcomes
and sometimes provides absolutely terrible outcomes, which is why it is not used for image compression
in real applications. The list of the decompressed images is shown in Figure 12.
From the previously mentioned perspectives, it can be said that arithmetic coding is the best option
when more compression is required; however, it is not useful for real-time applications in view of
taking additional time in the encoding and decoding steps. Searching the dictionary is a big challenge
for LZW coding, and it provides the worst results for image compression. Shannon–Fano
coding sometimes does not provide optimal codes and can provide two different codes for the same
element, which is the reason it is obsolete now. Run-length coding is not good for straightforward
real image compression.
Thus, it can be concluded that Huffman coding is the best algorithm among the state-of-the-art
lossless methods mentioned for the recent technologies used in various applications. However, if the
encoding and decoding times of arithmetic coding can be decreased, then it will be the best algorithm.
On the other hand, Huffman coding will perform even better if its average code length can be decreased
while keeping the same encoding and decoding times. In this article, all the experiments were done
using the C, Matlab (version 9.4.0.813654 (R2018a); The MathWorks Inc., Natick, MA, USA, 2018) and
Python languages. For the coding environments, Spyder (Python 3.6), Code::Blocks (17.12, The
Code::Blocks Team) and Matlab were utilized. Furthermore, we utilized an HP laptop (Palo Alto, CA,
USA) that contained an Intel Core i3-3110M @2.40 GHz processor (Santa Clara, CA, USA), 8 GB DDR3
RAM, a 32 KB L1D-Cache, a 32 KB L1I-Cache, a 256 KB L2 Cache and a 3 MB L3 Cache, where the L1D,
L1I and L2 Caches are 8-way set associative with a 64-byte line size each, and the L3 Cache is 12-way
set associative with a 64-byte line size. According to the algorithms used for testing, the CPU time is
1.499 × 10^−6 · O(P), 6.481 × 10^−6 · O(P + |β| log |β|), 2.746 × 10^−6 · O(P + |β| log |β|),
1.171 × 10^−6 · O(P) and 4.452 × 10^−5 · O(|β| + P) for run-length, Shannon–Fano, Huffman, LZW and
arithmetic coding, respectively, where P indicates the number of pixels and β represents the number
of different pixel values of an image.
4. Conclusions
In this study, we presented a detailed analysis of some common lossless image compression
techniques: run-length, Shannon–Fano, Huffman, LZW and arithmetic coding. The relevance of these
techniques comes from the fact that most of the other recently developed lossless (or lossy) algorithms
use one of them as a part of their compression procedures. All the mentioned algorithms have been
discussed using a common numeric data set. Both computer-generated and actual medical images
were used to assess the efficiency of these state-of-the-art methods. We also used standard metrics
such as encoding time, decoding time, average code length, compression ratio, efficiency and PSNR
to measure the performance of these techniques. Finally, we noticed that Huffman coding outperforms
the other state-of-the-art techniques in the case of real-time lossless compression applications.
Author Contributions: M.A.R. conceived and designed the experiments; M.A.R. performed the experiments;
M.A.R. analyzed the data; M.A.R. and M.H. wrote the paper; M.H. contributed to the writing, review and editing;
M.H. supervised the work; M.H. acquired the funding. This paper was prepared with the contributions of all
authors. All authors have read and approved the final manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Bovik, A.C. Handbook of Image and Video Processing; Academic Press: Cambridge, MA, USA, 2010.
2. Sayood, K. Introduction to Data Compression; Morgan Kaufmann: Burlington, MA, USA, 2017.
3. Salomon, D.; Motta, G. Handbook of Data Compression; Springer Science & Business Media: Berlin, Germany,
2010.
4. Ding, J.; Furgeson, J.C.; Sha, E.H. Application specific image compression for virtual conferencing. In
Proceedings of the International Conference on Information Technology: Coding and Computing (Cat.
No.PR00540), Las Vegas, NV, USA, 27–29 March 2000; pp. 48–53. doi:10.1109/ITCC.2000.844182.
5. Bhavani, S.; Thanushkodi, K. A survey on coding algorithms in medical image compression. Int. J. Comput.
Sci. Eng. 2010, 2, 1429–1434.
6. Kharate, G.K.; Patil, V.H. Color Image Compression Based on Wavelet Packet Best Tree. arXiv 2010,
arXiv:1004.3276.
7. Haque, M.R.; Ahmed, F. Image Data Compression with JPEG and JPEG2000. Available online: http:
//eeweb.poly.edu/~yao/EE3414_S03/Projects/Loginova_Zhan_ImageCompressing_Rep.pdf (accessed on
1 October 2019).
8. Clarke, R.J. Digital Compression of Still Images and Video; Academic Press, Inc.: Orlando, FL, USA, 1995.
9. Joshi, M.A. Digital Image Processing: An Algorithm Approach; PHI Learning Pvt. Ltd.: New Delhi, India, 2006.
10. Golomb, S. Run-length encodings (Corresp). IEEE Trans. Inform. Theory 1966, 12, 399–401.
11. Nelson, M.; Gailly, J.L. The Data Compression Book; M & t Books: New York, NY, USA, 1996; Volume 199.
12. Sharma, M. Compression using Huffman coding. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2010, 10, 133–141.
13. Burger, W.; Burge, M.J. Digital Image Processing: An Algorithmic Introduction Using Java; Springer: Berlin,
Germany, 2016.
14. Kim, S.D.; Lee, J.H.; Kim, J.K. A new chain-coding algorithm for binary images using run-length codes.
Comput. Vis. Graphics Image Process. 1988, 41, 114–128.
15. Žalik, B.; Mongus, D.; Lukač, N. A universal chain code compression method. J. Vis. Commun. Image
Represent. 2015, 29, 8–15.
16. Benndorf, S.; Siemens, A.G. Method for the Compression of Data Using a Run-Length Coding. U.S. Patent
8,374,445, 12 February 2013.
17. Shanmugasundaram, S.; Lourdusamy, R. A comparative study of text compression algorithms. Int. J. Wisdom
Based Comput. 2011, 1, 68–76.
18. Kodituwakku, S.R.; Amarasinghe, U.S. Comparison of lossless data compression algorithms for text data.
Indian J. Comput. Sci. Eng. 2010, 1, 416–425.
19. Rahman, M.A.; Islam, S.M.S.; Shin, J.; Islam, M.R. Histogram Alternation Based Digital Image Compression
using Base-2 Coding. In Proceedings of the 2018 Digital Image Computing: Techniques and Applications
(DICTA), Canberra, Australia, 10–13 December 2018; pp. 1–8. doi:10.1109/DICTA.2018.8615830.
20. Drozdek, A. Elements of Data Compression; Brooks/Cole Publishing Co.: Pacific Grove, CA, USA, 2001.
21. Howard, P.G.; Vitter, J.S. Parallel lossless image compression using Huffman and arithmetic coding. Data
Compr. Conf. 1992, doi:10.1109/DCC.1992.227451.
22. Pujar, J.H.; Kadlaskar, L.M. A new lossless method of image compression and decompression using Huffman
coding techniques. J. Theor. Appl. Inform. Technol. 2010, 15, 15–21.
23. Mathur, M.K.; Loonker, S.; Saxena, D. Lossless Huffman coding technique for image compression and
reconstruction using binary trees. Int. J. Comput. Technol. Appl. 2012, 1, 76–79.
24. Vijayvargiya, G.; Silakari, S.; Pandey, R. A Survey: Various Techniques of Image Compression. arXiv 2013,
arXiv:1311.6877.
25. Rahman, M.A.; Shin, J.; Saha, A.K.; Rashedul Islam, M. A Novel Lossless Coding Technique for Image
Compression. In Proceedings of the 2018 Joint 7th International Conference on Informatics, Electronics &
Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (ic4PR),
Kitakyushu, Japan, 25–29 June 2018; pp. 82–86. doi:10.1109/ICIEV.2018.8641065.
26. Rufai, A.M.; Anbarjafari, G.; Demirel, H. Lossy medical image compression using Huffman coding and
singular value decomposition. In Proceedings of the 2013 21st Signal Processing and Communications
Applications Conference (SIU), Haspolat, Turkey, 24–26 April 2013.
27. Jasmi, R.P.; Perumal, B.; Rajasekaran, M.P. Comparison of image compression techniques using huffman
coding, DWT and fractal algorithm. In Proceedings of the 2015 International Conference on Computer
Communication and Informatics (ICCCI), Coimbatore, India, 8–10 January 2015; pp. 1–5.
28. Xue, T.; Zhang, Y.; Shen, Y.; Zhang, Z.; You, X.; Zhang, C. Adaptive Spatial Modulation Combining BCH
Coding and Huffman Coding. In Proceedings of the 2018 IEEE 23rd International Conference on Digital
Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5. doi:10.1109/ICDSP.2018.8631648.
29. Witten, I.H.; Neal, R.M.; Cleary, J.G. Arithmetic Coding for Data Compression. Commun. ACM 1987, 30,
520–540.
30. Masmoudi, A.; Masmoudi, A. A new arithmetic coding model for a block-based lossless image compression
based on exploiting inter-block correlation. Signal Image Video Process. 2015, 9, 1021–1027.
31. Masmoudi, A.; Puech, W.; Masmoudi, A. An improved lossless image compression based arithmetic coding
using mixture of non-parametric distributions. Multimed. Tools Appl. 2015, 74, 10605–10619.
32. Weinberger, M.J.; Seroussi, G.; Sapiro, G. The LOCO-I lossless image compression algorithm: Principles and
standardization into JPEG-LS. IEEE Trans. Image Process. 2000, 9, 1309–1324.
33. Li, X.; Orchard, M.T. Edge directed prediction for lossless compression of natural images. IEEE Trans. Image
Proc. 2001, 6, 813–817.
34. Sasilal, L.; Govindan, V.K. Arithmetic Coding-A Reliable Implementation. Int. J. Comput. Appl. 2013, 73, 7.
35. Ding, J.J.; Wang, I.H. Improved frequency table adjusting algorithms for context-based adaptive lossless
image coding. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics-Taiwan
(ICCE-TW), Nantou, Taiwan, 27–29 May 2016; pp. 1–2.
36. Rahman, M.A.; Fazle Rabbi, M.M.; Rahman, M.M.; Islam, M.M.; Islam, M.R. Histogram modification based
lossy image compression scheme using Huffman coding. In Proceedings of the 2018 4th International
Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT), Dhaka,
Bangladesh, 13–15 September 2018; pp. 279–284. doi:10.1109/CEEICT.2018.8628092.
37. Pennebaker, W.B.; Mitchell, J.L. JPEG: Still Image Data Compression Standard; Springer Science & Business
Media: New York, NY, USA, 1992.
38. Clunie, D.A. Lossless compression of grayscale medical images: Effectiveness of traditional and
state-of-the-art approaches. In Proceedings of Medical Imaging 2000: PACS Design and Evaluation:
Engineering and Clinical Issues, San Diego, CA, USA, 12–18 February 2000; pp. 74–85. doi:10.1117/12.386389.
39. Kim, J.; Kyung, C.M. A lossless embedded compression using significant bit truncation for HD video coding.
IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 848–860.
40. Rufai, A.M.; Anbarjafari, G.; Demirel, H. Lossy medical image compression using Huffman coding and
singular value decomposition. In Proceedings of the 2013 21st Signal Processing and Communications
Applications Conference (SIU), Haspolat, Turkey, 24–26 April 2013; pp. 1–4. doi:10.1109/SIU.2013.6531592.
41. Kato, M.; Sony Corp. Motion Video Coding with Adaptive Precision for DC Component Coefficient
Quantization and Variable Length Coding. U.S. Patent 5,559,557, 24 September 1996.
42. Lamorahan, C.; Pinontoan, B.; Nainggolan, N. Data Compression Using Shannon–Fano Algorithm. de
CARTESIAN 2013, 2, 10–17.
43. Yokoo, H. Improved variations relating the Ziv-Lempel and Welch-type algorithms for sequential data
compression. IEEE Trans. Inform. Theory 1992, 38, 73–81. doi:10.1109/18.108251.
44. Saravanan, C.; Surender, M. Enhancing efficiency of Huffman coding using Lempel Ziv coding for image
compression. Int. J. Soft Comput. Eng. 2013, 6, 2231–2307.
45. Pu, I.M. Fundamental Data Compression; Butterworth-Heinemann: Oxford, UK, 2005.
46. Shannon, C.E. A mathematical theory of communication. ACM SIGMOBILE Mobile Comput. Commun. Rev.
2001, 5, 3–55.
47. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101.
48. Hussain, A.J.; Al-Fayadh, A.; Radi, N. Image compression techniques: A survey in lossless and lossy
algorithms. Neurocomputing 2018, 300, 44–69.
49. Zhou, Y.-L.; Fan, X.-P.; Liu, S.-Q.; Xiong, Z.-Y. Improved LZW algorithm of lossless data compression
for WSN. In Proceedings of the 2010 3rd International Conference on Computer Science and Information
Technology, Chengdu, China, 9–11 July 2010; pp. 523–527. doi:10.1109/ICCSIT.2010.5563620.
50. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inform. Theory 1977,
23, 337–343, doi:10.1109/TIT.1977.1055714.
51. Rahman, M.A.; Jannatul Ferdous, M.; Hossain, M.M.; Islam, M.R.; Hamada, M. A lossless speech signal
compression technique. In Proceedings of the 1st International Conference on Advances in Science,
Engineering and Robotics Technology, Dhaka, Bangladesh, 3–5 May 2019.
52. Langdon, G.G. An introduction to arithmetic coding. IBM J. Res. Dev. 1984, 28, 135–149.
53. Osirix-viewer.com. OsiriX DICOM Viewer | DICOM Image Library. 2019. Available online:
https://www.osirix-viewer.com/resources/dicom-image-library/ (accessed on 10 April 2019).
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).