
Volume 9, Issue 10, October 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24OCT1859

Compression of Hyperspectral Image using JPEG Compression Algorithm

Sujendra G Bharadwaj1, Shruthi B2
1,2Department of Electronics and Communication Engineering, SJB Institute of Technology, Bangalore-560060, India

Abstract:- This paper delves into the considerable challenges of working with hyperspectral images, which are notably large and multidimensional, with file sizes often surpassing hundreds of megabytes. Hyperspectral imaging captures light across a continuous range of wavelengths, providing detailed spectral information for each pixel. This rich dataset is invaluable for applications such as environmental monitoring, precision agriculture, mineral exploration, and medical diagnostics, where accurate spectral data aids in identifying materials and detecting subtle variations. However, the immense data volume not only strains storage and transmission resources but also requires efficient processing and analysis techniques to handle the high-dimensional data without compromising quality. Additionally, compression methods are essential to manage storage constraints and improve real-time usability, but they must balance data reduction with the preservation of spectral integrity for effective analysis and application.

Keywords:- JPEG Compression, Discrete Cosine Transform, Quantization, Image Decompression.

I. INTRODUCTION

Hyperspectral images capture a continuous spectrum for each pixel, enabling detailed spectral analysis through hyperspectral image processing, also known as spectral imaging. Unlike the human eye, which perceives visible light in three bands (red, green, and blue), hyperspectral imaging divides the spectrum into many bands, enhancing data retrieval. This imaging technique is significant in remote sensing for storing detailed information about materials and has applications in target detection, classification, anomaly detection, and spectral unmixing. Sensors collect data across contiguous wavelength bands from 400 to 2500 nm, with fixed spectral resolution and equal pixel counts per band. Each pixel's spatial resolution defines the area it represents, forming a data cube of reflectance values across various wavelengths. Hyperspectral imaging is utilized in military operations to monitor troop movements, in agriculture for quality assessment and disease detection, and in remote sensing for mineral classification and tracking natural disasters.

Fig 1 Hyperspectral Image (data cube of L spectral bands)

Image compression plays a crucial role in efficiently storing and transmitting image data by minimizing file sizes while preserving essential visual quality. It can be divided into three main categories (a short sketch contrasting the lossless and lossy cases follows the list):

 Lossless Compression:
This method allows for the exact reconstruction of the original image, ensuring that no data is lost during the compression process. While it maintains high image quality, the compression ratios are typically lower than those of lossy methods. Lossless techniques are particularly important in applications requiring precise image fidelity, such as medical imaging and archival storage [20].

 Lossy Compression:
This technique reduces file sizes by removing some image data, resulting in a degree of quality loss. The extent of data loss can vary, and while the reduction in size is significant, it often compromises the image quality compared to lossless methods. Lossy compression is commonly used in multimedia formats, including MP3 for audio and JPEG for images.

 Near Lossless Compression:
This approach offers a middle ground between lossy and lossless compression, allowing for minor distortions that fall within acceptable limits. Near lossless compression achieves better compression performance than lossless techniques while maintaining higher quality than lossy methods, making it suitable for various applications where some level of distortion is permissible [18].
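As an informal illustration of the first two categories, the following sketch (ours, not from the paper; it assumes NumPy, Pillow, and Python's built-in zlib) compresses the same 8-bit image both ways: the lossless stream decompresses to exactly the original bytes, while the JPEG stream is smaller but only approximately invertible.

```python
import io
import zlib

import numpy as np
from PIL import Image

# A smooth horizontal gradient: easy for both coders to compress.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Lossless: the original bytes are perfectly recoverable.
lossless = zlib.compress(img.tobytes())
assert zlib.decompress(lossless) == img.tobytes()

# Lossy: JPEG at an assumed quality of 75; decoding gives an approximation.
buf = io.BytesIO()
Image.fromarray(img).save(buf, format="JPEG", quality=75)

print(f"raw: {img.nbytes} B, lossless: {len(lossless)} B, lossy: {buf.tell()} B")
```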


Fig 2 Continuous Spectrum of the Hyperspectral Image Cell

II. PROPOSED ALGORITHM

The JPEG algorithm is a widely adopted method for lossy compression of digital images. This technique significantly reduces file sizes while maintaining acceptable image quality, enabling users to store vast quantities of images that were previously impractical due to storage limitations. By minimizing the size of image files, JPEG compression not only enhances storage efficiency but also reduces the system's processing load, leading to improved overall performance.

 The JPEG Encoder Operates through Three Primary Blocks:

 Forward Discrete Cosine Transform (FDCT):
This step transforms the spatial representation of the image into the frequency domain, separating important visual information from less significant data.

 Quantization:
This process reduces the precision of the DCT coefficients, effectively discarding less critical information while preserving the essential features that the human eye can perceive.

 Entropy Encoding:
Utilizing techniques such as Huffman coding, this final block further compresses the quantized coefficients by assigning shorter codes to more frequent values, maximizing storage efficiency.

Together, these components allow JPEG compression to achieve a balance between image quality and file size, making it an essential technique in digital imaging applications.


Fig 3 Block Diagram of JPEG Compression & Decompression

 Forward Discrete Cosine Transform-
The Forward Discrete Cosine Transform (FDCT) is the first step in JPEG compression; it enables the efficient representation of images by transforming pixel data into the frequency domain. This mathematical operation processes 8x8 blocks of an image, effectively separating the image into different frequency components. By analyzing these components, FDCT retains low-frequency information that corresponds to the essential visual details, while high-frequency components, which capture rapid changes and noise, can be discarded during quantization. This aspect of FDCT not only facilitates significant file size reduction but also optimizes the image quality by focusing on perceptually relevant details. The process is designed to align with human visual perception, ensuring that the most noticeable features are preserved even after compression. By utilizing cosine functions to represent pixel values, FDCT enhances the robustness of image encoding, making it suitable for various applications beyond JPEG, including video processing and other image analysis tasks. Ultimately, the FDCT lays the foundation for the subsequent stages of JPEG compression, highlighting its importance in modern digital imaging and communication systems.
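A minimal block-wise FDCT sketch is given below (our illustration, not the paper's code; it assumes NumPy and SciPy, uses scipy.fft.dctn for the 2-D DCT-II, and assumes image dimensions that are multiples of 8):

```python
import numpy as np
from scipy.fft import dctn

def blockwise_fdct(image):
    """Apply the 2-D DCT-II to each 8x8 block of a grayscale image."""
    h, w = image.shape
    # Level-shift pixels from [0, 255] to [-128, 127], as baseline JPEG does.
    shifted = image.astype(np.float64) - 128.0
    coeffs = np.zeros_like(shifted)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = shifted[i:i + 8, j:j + 8]
            # 'ortho' normalization gives the orthonormal DCT used here.
            coeffs[i:i + 8, j:j + 8] = dctn(block, norm='ortho')
    return coeffs
```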


Assuming an interleaving block of 8 × 8 (single-component, grayscale images) as input for the FDCT, 64 DCT coefficients are produced. These coefficients are then used in the quantization process as shown in the block diagram.

 Quantization-
Quantization is the second step in image compression, particularly in the JPEG algorithm, where it effectively transforms continuous DCT coefficients into discrete values, significantly enhancing data reduction. This process operates by mapping a range of input values to a smaller set of output values, similar to rounding numbers to their nearest integers. After applying the Forward Discrete Cosine Transform (FDCT) to an 8x8 block of pixels, each DCT coefficient is divided by a corresponding value from a predefined quantization table and then rounded to the nearest integer. These tables are designed based on human visual perception, emphasizing the retention of lower-frequency components that carry more critical image information, while allowing higher-frequency components, often linked to finer details and noise, to be approximated more coarsely or even discarded altogether. Although this introduces a loss of some image quality, the significant reduction in file size makes quantization a beneficial trade-off. Moreover, by increasing the number of zero coefficients in the resultant data, quantization enhances the efficiency of subsequent entropy encoding methods, such as Huffman coding, allowing for even greater compression. This makes quantization a fundamental aspect of modern digital image processing, balancing size reduction with quality preservation.

The figure shows the simple quantization of a signal by choosing the amplitude values closest to the analog amplitude. In the given implementation, quantization is performed in conjunction with a quantization table and the input signal is digitized. This process is fundamentally lossy since it is a many-to-one mapping, and it is what causes image loss in DCT-based lossy encoders. Quantization can be represented using equation (9), which rounds the quotient of each DCT coefficient divided by its corresponding quantum to the nearest integer:

FQ(u, v) = round(F(u, v) / Q(u, v)) (9)

Fig 4 Quantization
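A sketch of equation (9) in code is shown below (our illustration; the table is the standard JPEG luminance quantization table from Annex K of the JPEG standard, and the helper names are our own):

```python
import numpy as np

# Standard JPEG luminance quantization table (JPEG standard, Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block, q_table=Q_LUMA):
    """Equation (9): round each DCT coefficient divided by its quantum."""
    return np.rint(dct_block / q_table).astype(np.int32)

def dequantize(q_block, q_table=Q_LUMA):
    """Approximate inverse used by the decoder: multiply back by the table."""
    return q_block * q_table
```

Because the table entries grow toward the bottom-right (high-frequency) corner, those coefficients are divided by larger quanta and are far more likely to round to zero.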

 Entropy Encoding-
The JPEG algorithm concludes its process by outputting the elements of the DCT block through an entropy encoding mechanism that combines Run-Length Encoding (RLE) and Huffman encoding principles. The entropy encoder generates a sequence of three tokens, which are repeated until the block is fully encoded. These tokens include the run length, indicating the count of consecutive zeros that precede the current non-zero DCT coefficient; the bit count, representing the number of bits used to encode the amplitude value based on the Huffman encoding scheme; and the amplitude, which reflects the value of the DCT coefficient.

As quantization progresses, an increasing number of coefficients become zero, particularly among higher frequency components. This results in a high likelihood of encountering zeros within the DCT output matrix. To enhance efficiency, the data is reordered to prioritize low-frequency values, with higher frequencies appearing later in the sequence. Consequently, it becomes common to encounter sequences such as “0 0 0 0 0 0 0 0 0 0.” Instead of storing these zeros individually, the data can be compressed into a more efficient representation, like “10x 0,” significantly reducing the amount of data stored.
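The sketch below (our illustration, assuming NumPy; the helper names are hypothetical) produces the zigzag ordering and the (run length, amplitude) part of the tokens described above; the bit count would be derived from each amplitude before Huffman coding:

```python
import numpy as np

def zigzag_indices(n=8):
    """(row, col) pairs in JPEG zigzag order: anti-diagonals, alternating."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def rle_tokens(q_block):
    """Emit (run_of_zeros, amplitude) pairs for the 63 AC coefficients."""
    zz = [int(q_block[r, c]) for r, c in zigzag_indices()]
    tokens, run = [], 0
    for coeff in zz[1:]:            # zz[0] is the DC term, coded separately
        if coeff == 0:
            run += 1
        else:
            tokens.append((run, coeff))
            run = 0
    tokens.append((0, 0))           # end-of-block marker
    return tokens
```

On a block whose trailing coefficients are all zero, the entire trailing run collapses into the single (0, 0) end-of-block token, which is exactly the “10x 0”-style saving described above.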


 Huffman Coding
Entropy encoding further optimizes compression through a compact representation of the quantized DCT coefficients, employing techniques like Huffman coding. Specified in the JPEG standard, Huffman coding is utilized in baseline sequential codecs and all operational modes of JPEG compression, effectively encoding source symbols based on their probabilities. Introduced by David Huffman in 1952, this variable-length encoding algorithm minimizes the average number of bits required for symbol representation, given that it adheres to the prefix condition.

Fig 5 Zigzag Pixel Selection Pattern

The Huffman coding process initiates with a frequency table, detailing the occurrence of each symbol. The algorithm then constructs a Huffman tree from this table, where each node represents a symbol, its frequency, and pointers to its parent and child nodes. The tree grows through successive iterations that identify the two nodes with the lowest frequencies that have not yet become parents. A new node is created, serving as the parent of these two nodes, and its frequency is the sum of the frequencies of its children. This process continues until a single node remains, which becomes the root of the Huffman tree.

During compression, the algorithm traverses the tree starting from the leaf node corresponding to the symbol to be compressed. As it navigates up to the root, it determines whether the current node is a left or right child of its parent, assigning a corresponding binary value of (0) or (1). This efficient approach allows for optimal encoding of the quantized coefficients, ultimately contributing to the overall effectiveness of JPEG compression [2][18][10].
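The two paragraphs above translate almost directly into code. The toy sketch below (ours; real JPEG files instead carry canonical, length-limited tables in their headers) builds the tree with a min-heap and reads each code off the root-to-leaf paths, which yields the same codes as the leaf-to-root walk described above:

```python
import heapq
from collections import Counter

def build_huffman_codes(symbols):
    """Map each symbol to a prefix-free bit string based on its frequency."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie_breaker, tree). A tree is a bare symbol
    # (leaf) or a two-element list [left, right] (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two lowest-frequency nodes...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, [left, right]))  # ...get a parent
        tie += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, list):
            walk(node[0], prefix + "0")      # left child -> 0
            walk(node[1], prefix + "1")      # right child -> 1
        else:
            codes[node] = prefix or "0"      # degenerate one-symbol case
    walk(heap[0][2], "")
    return codes

# The most frequent symbol receives the shortest code:
print(build_huffman_codes("aaaabbc"))
```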

 Decompression-
The decompression process reverses the compression steps but follows a different order to ensure the original image is accurately reconstructed. Initially, the algorithm retrieves the Huffman tables that were saved with the compressed image. Using these tables, it decodes the Huffman tokens that represent the compressed data. Next, the focus shifts to the Discrete Cosine Transform (DCT) values. The first coefficient of each DCT block is recovered, and the remaining 63 coefficients are reconstructed, with zeros filled in where quantization has led to loss of information. This step is crucial for re-establishing the block's integrity, ensuring that any missing values do not impede the reconstruction process [2][18].

Following the coefficient recovery, the JPEG algorithm decodes the DCT values in zigzag order, which is instrumental for efficiently retrieving spatial frequency information. This organization aids in reconstructing the pixel values with enhanced visual fidelity. The final stage involves applying the Inverse Discrete Cosine Transform (IDCT) to translate the frequency components back into the spatial domain. During this process, the algorithm evaluates the contribution of each of the 64 frequency coefficients to their corresponding pixels, thus reconstructing the original image as closely as possible. This meticulous approach is fundamental in ensuring that the decompressed image retains its quality while effectively utilizing the information encoded during compression [18].
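The tail end of this pipeline, dequantization followed by the IDCT, can be sketched as follows (our illustration, assuming NumPy/SciPy and a quantization table such as the Q_LUMA example given earlier):

```python
import numpy as np
from scipy.fft import idctn

def decode_block(q_block, q_table):
    """Dequantize an 8x8 coefficient block and return 8-bit pixel values."""
    coeffs = (q_block * q_table).astype(np.float64)  # approximate inverse of (9)
    block = idctn(coeffs, norm='ortho')              # back to the spatial domain
    # Undo the encoder's level shift and clip to the valid pixel range.
    return np.clip(np.rint(block) + 128, 0, 255).astype(np.uint8)
```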

III. EXPERIMENT AND RESULT

For this analysis, the Pavia University (PaviaU) dataset was employed. The PaviaU dataset is a widely used benchmark for hyperspectral image processing tasks. It consists of a hyperspectral image captured over the city of Pavia, Italy, by the ROSIS sensor (Reflective Optics System Imaging Spectrometer). The dataset includes 610x340 pixels, each pixel having 103 spectral bands after noisy bands are removed, covering wavelengths in the visible to near-infrared range. The PaviaU dataset is particularly useful for applications in urban analysis, where detailed spectral information is necessary to distinguish between different materials or land cover types, making it ideal for assessing the effectiveness of hyperspectral image compression methods.


 Samples of the Original Bands are given below:


 Procedure for the 50th Band is shown below:

A similar compression process is performed across all 103 spectral bands to ensure efficient image reduction and facilitate successful transmission through various channels. Each band, containing unique spectral information, undergoes a standardized compression algorithm that minimizes data size while preserving critical details. By compressing each band individually, the resulting image retains its high-dimensional characteristics, which are crucial for subsequent analyses and applications. This systematic approach not only optimizes the data for storage but also enhances the transmission speed, making it feasible to transfer the compressed images over networks with limited bandwidth. Ultimately, this strategy enables reliable and efficient access to valuable hyperspectral data across multiple applications, ensuring that critical information is preserved while reducing the logistical challenges associated with large datasets.
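A band-by-band sketch of this procedure is shown below (ours, assuming SciPy, NumPy, and Pillow; the file name PaviaU.mat, the variable key paviaU, and the quality setting of 75 follow the common distribution of the dataset but are assumptions, not the paper's exact setup):

```python
import io

import numpy as np
from PIL import Image
from scipy.io import loadmat

# Load the 610 x 340 x 103 PaviaU cube (file/key names may vary per copy).
cube = loadmat("PaviaU.mat")["paviaU"]

sizes_kb = []
for b in range(cube.shape[2]):
    band = cube[:, :, b].astype(np.float64)
    # Scale each band independently to 8-bit before JPEG encoding.
    rng = np.ptp(band) or 1.0
    band8 = np.uint8(255 * (band - band.min()) / rng)
    buf = io.BytesIO()
    Image.fromarray(band8).save(buf, format="JPEG", quality=75)
    sizes_kb.append(buf.tell() / 1024)

print(f"average compressed band size: {np.mean(sizes_kb):.2f} KB")
```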


 Result:

Table 1 Result
Metric                  Band 50    Band 70    Band 85    Band 100   Average
Compressed Size (KB)    15.45      11.71      10.84      10.29      12.07

IV. CONCLUSION

In summary, the JPEG compression algorithm remains a fundamental method in digital image processing, effectively balancing the need for image quality and reduced file sizes. The various stages of JPEG compression, including the Forward Discrete Cosine Transform (DCT), quantization, and entropy encoding, collaborate to create a compact representation of visual data. By focusing on lower-frequency components that are more perceptible to the human eye, JPEG maximizes compression while minimizing visual degradation.

Huffman coding plays an essential role within the entropy encoding phase, allowing for efficient data representation by assigning shorter codes to more frequent values. The careful orchestration of these techniques ensures that, during decompression, the original image can be accurately reconstructed. As technology progresses, the JPEG algorithm continues to evolve, adapting to meet contemporary challenges in image processing, such as high-resolution imaging and real-time applications. Future innovations may further enhance compression efficiencies without compromising quality, reinforcing JPEG's relevance in diverse fields like medical imaging, satellite imagery, and digital media. This adaptability highlights JPEG's enduring significance in modern imaging technologies.

FUTURE WORK

 Integration of Machine Learning:
Future enhancements can focus on leveraging machine learning algorithms to improve JPEG compression techniques. This approach could result in adaptive methods that better tailor compression based on the specific characteristics of images, thereby optimizing both file size and quality.

 Exploration of Advanced Codecs:
Research should investigate newer image formats such as AVIF and HEIF to evaluate their performance compared to traditional JPEG. Understanding the advantages of these formats in terms of compression efficiency and visual quality can facilitate a smoother transition to more advanced technologies in digital imaging.

 Enhancing Real-Time Processing:
As the demand for instantaneous applications continues to rise, it will be essential to optimize JPEG algorithms for faster processing and reduced latency. This may involve refining the underlying algorithms to improve both compression and decompression speeds in various contexts.

REFERENCES

[1]. Soni, H., & Gupta, S. (2020). "A Study on Lossy Image Compression Techniques." International Journal of Computer Applications.
[2]. Sayood, K. (2020). Introduction to Data Compression. Morgan Kaufmann.
[3]. Jain, A. K. (2021). "Image Compression Techniques: A Comprehensive Review." International Journal of Image and Graphics.
[4]. Jia, C., Wu, H., & Wang, L. (2021). "Efficient JPEG Image Compression Method Based on Optimized Huffman Coding." Journal of Information Science, 47(2), 238-250.
[5]. Zhang, L., Liu, Z., & Wang, X. (2022). "Adaptive Huffman Coding for Image Compression." Applied Sciences, 12(4), 1804.
[6]. Feng, L., Zhang, H., & Zhao, J. (2022). "Color Image Processing Using YCbCr Color Space." Journal of Visual Communication and Image Representation.
[7]. Khan, A., & Kumar, P. (2022). "Entropy Coding Techniques in Image Compression." Journal of Computer Science and Technology.
[8]. Zhang, J., Wang, L., & Zhou, Y. (2022). "Efficient DCT Algorithms for Image Compression." Journal of Real-Time Image Processing, 19(3), 605-617.
[9]. Wang, C., Zhou, X., & Liu, Y. (2023). "An Overview of Image Compression Algorithms." Applied Sciences.
[10]. Huang, H., & Yao, X. (2023). "A Fast Huffman Coding Algorithm for Image Compression." Journal of Visual Communication and Image Representation, 104, 103338.
[11]. Lee, S., Kim, J., & Cho, H. (2023). "Improving JPEG Compression Efficiency Using Enhanced Huffman Coding Techniques." IEEE Transactions on Image Processing, 32, 1872-1883.
[12]. Singh, R., Kumar, A., & Gupta, S. (2023). "Adaptive Discrete Cosine Transform for Image Compression." International Journal of Imaging Systems and Technology, 33(1), 234-244.
[13]. Chen, Y., Liu, X., & Wang, H. (2023). "Perceptual Quantization for Image Compression Based on Human Visual Sensitivity." IEEE Transactions on Image Processing, 32(4), 1020-1034.
[14]. Wang, R., Zhang, Y., & Liu, Z. (2023). "Adaptive Huffman Coding for Enhanced JPEG Compression." Journal of Visual Communication and Image Representation, 88, 103271.
[15]. Bhatia, S., Singh, A., & Sahu, S. (2023). "Performance Analysis of Arithmetic Coding in JPEG Compression." Journal of Electronic Imaging, 32(1), 013001.


[16]. Zhao, Y., Liu, D., & Zhang, X. (2023). "Parallel Processing Techniques for Fast JPEG Decompression." Journal of Computer and System Sciences, 131, 105-116.
[17]. Khan, M. A., Ahmad, J., & Alam, M. (2023). "Machine Learning-Based Image Reconstruction for JPEG Decompression." Pattern Recognition Letters, 168, 25-32.
[18]. Jayalakshmi, B., Satish, B. K., & Rajan, V. K. (2023). "A Comprehensive Review on Image Compression Techniques." Journal of Ambient Intelligence and Humanized Computing.
[19]. Gonzalez, R. C., & Woods, R. E. (2018). Digital Image Processing. Pearson.
[20]. Salami, M. et al. (2018). "Image Compression: An Overview." Journal of Computing and Security.
[21]. Bhandari, M., Sharma, P., & Mehta, A. (2022). "Application of Machine Learning in Image Compression Techniques: A Survey." International Journal of Image and Graphics.
[22]. Li, P., Wu, Y., & Zhang, Y. (2024). "Comparative Analysis of AVIF and JPEG for Image Compression." Journal of Visual Communication and Image Representation.
