
Entropy-Driven Dynamic Block Compressive Sampling for Underwater Image Compression in the Context of IoUT: A Research Perspective

R. Monika and Samiappan Dhanalakshmi⋆

Department of ECE, College of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur - 603203, Tamilnadu, India.
⋆ Corresponding author, email: [email protected], [email protected]

Abstract. The Internet of Underwater Things (IoUT) is a network of countless connected devices that monitor vast, uncharted water territories. These devices include cameras designed to capture images beneath the water's surface, distribute them among themselves, and save them in the cloud. However, the substantial amount of data produced can hinder the devices' performance because of their limited computational power and battery life. To tackle this, Block Compressed Sampling (BCS) can be used to compress the data, but it may result in distorted images after recovery. To address this problem, the Dynamic Block Compressive Sampling (DBCS) technique is utilized. This study introduces the Entropy-based Dynamic Block Compressive Sampling (EDBCS) algorithm to enhance the sampling accuracy and visual clarity of the recovered image. In this approach, blocks with greater entropy receive more measurements, while those with lower entropy receive fewer. The proposed method outperforms existing techniques, yielding superior results.

Keywords: Internet of underwater things · Dynamic block compressive sampling · Entropy based DBCS · Image reconstruction · Measurement matrix

1 Introduction

The Internet of Underwater Things (IoUT) [2] encompasses a multitude of underwater devices linked wirelessly to communicate autonomously. These devices are integral to various underwater activities such as surveillance to safeguard valuable assets, the search for natural resources, and the prevention of disasters such as tsunamis and earthquakes. In the context of IoUT, "things" typically refers to computational units, sensors, actuators, high-definition cameras, and communication devices used for collecting, analyzing, and sending data from tangible objects or devices. To enhance the cost-efficiency of IoUT devices, they incorporate low-energy computing elements. In such a memory-constrained system, employing technologies capable of effectively managing the IoUT system becomes crucial. Figure 1 shows an underwater surveillance scenario in which a camera in the underwater environment captures images.

Fig. 1: Underwater surveillance image

To overcome this issue, compressing the generated data is necessary. Compressive Sensing (CS) [3] has seen increased usage in recent years due to its ability to achieve compression at significantly lower sampling rates. Underwater communication faces challenges such as bandwidth constraints and limited transmission range, making efficient data transmission essential. Compressed sensing addresses this by enabling the transmission of sparse representations, reducing the data load. Furthermore, underwater systems, especially autonomous vehicles and remote sensors, benefit from the reduced power consumption that comes with transmitting and processing less data. Compressed sensing also enables real-time processing, crucial for time-sensitive applications, by reducing computational complexity. However, CS's practice of selecting samples from the entire image often leads to subpar reconstruction quality. Within the realm of CS lies Block Compressive Sampling (BCS), where the image is segmented into blocks that are then analyzed individually, presenting challenges such as sample selection and quantity determination [10]. These challenges gave rise to an adaptation of BCS known as Dynamic Block Compressive Sampling (DBCS), where sample selection varies based on the nature of each block. The decision on the sample count allotted to each block is guided by entropy, with higher entropy blocks requiring more samples to capture their richer information content, and vice versa. Enhancing the precision of the reconstructed image has emerged as a notable priority, and entropy aligns the measurement allocation with the information content of each block more precisely than other block characteristics.

1.1 Literature review


The concept of saliency, derived from the human visual system, serves as a dynamic sampling approach in [18]. Additionally, a different saliency method [5] focuses on wildlife monitoring through image total variation. These methods rely on expensive sensors to generate saliency maps, rendering them unsuitable for IoUT applications.
Entropy, standard deviation, and wavelet coefficients are employed for the adaptive extraction of measurements in [7], [19], and [17], respectively. Although these methods yield high-quality image reconstruction, they are not designed specifically for color images. In [20], the sparsity of image blocks is leveraged for the adaptive allocation of measurements. However, because a fixed threshold is used for sample allocation, adaptive sample allocation is not optimally achieved. Additionally, a gradient-based BCS approach focusing on block sparsity is presented in [21], which results in block distortions after reconstruction.
In [8], measurement allocation is based on the block's degree of sparsity for a sustainable IoT application. However, this approach falls short in providing effective compression, resulting in subpar rate-distortion performance. Furthermore, the DBCS technique in [22], which employs a multi-shape block split strategy, imposes a heavier processing load on real-time applications, owing to the unequal block sizes and the cumbersome process of dividing the image into blocks.
In [14], an adaptive sampling approach that incorporates texture information is presented. This technique involves image reconstruction at the encoder during sampling to enable dynamic allocation of sampling rates, resulting in heightened computational complexity. Another approach in [1] utilizes edge information to dynamically extract measurements, effectively capturing the intricate structure of each block. However, these methods ignore the relationship among blocks, which lowers the visual quality. The DBCS algorithms in [8] and [4] are based on the error distribution between blocks, capturing texture and edge variations but without a significant enhancement in visual quality. A wildlife surveillance network based on wireless multimedia sensor networks makes use of an image extraction technique in [9]. This method can dynamically extract the target area based on image pixel characteristics, leading to energy conservation in the nodes. However, achieving precise reconstruction remains a challenge. In our previous work, we aimed to overcome the aforementioned issues, but simplifying the process remained a major challenge. To meet this need, we propose entropy-driven dynamic block compressive sampling in this paper.

1.2 Contribution and structure of the paper


The following benefits are derived from the proposed EDBCS method:
1. Blocking artifacts and distortions are comprehensively eliminated.
2. The entropy calculation is straightforward and can be seamlessly implemented in real time.

3. There is a notable enhancement in performance metrics such as PSNR and SSIM.

The article is structured as follows: Section 2 gives an overview of BCS and DBCS, Section 3 delves into the proposed EDBCS, Section 4 outlines the findings and the subsequent discussion, and Section 5 concludes the paper.

2 Outline of BCS and DBCS

In BCS, a predetermined quantity of samples is picked from each block, whereas in DBCS, a varying count of measurements is determined for every block.

2.1 Block Compressive Sampling (BCS) [6]

In traditional CS, the sampling matrix's dimensions match those of the image, so measurements are acquired from the entire image simultaneously, regardless of how information is distributed across its regions. This approach often results in inferior reconstruction quality. In contrast, in BCS the image is segmented into $B \times B$ blocks and then sampled using a sampling matrix. Each block is transformed into a 1-D signal, and a predetermined number of measurements is selected from each block to construct the sampled vector $y_i$:

$y_i = \Phi^{(B)} x_i$    (1)

Here, $\Phi^{(B)}$ represents a measurement matrix with dimensions $n_B \times B^2$, where $n_B$ remains constant in BCS.
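As a point of reference, a minimal Python sketch of the fixed-rate BCS measurement in equation (1) could look as follows. This is an illustration only, not the authors' MATLAB implementation; the Gaussian measurement matrix, block size, and sampling rate are assumptions of the example.

```python
import numpy as np

def bcs_measure(image, B=8, sampling_rate=0.1, seed=0):
    """Fixed-rate Block Compressive Sampling: every B x B block receives
    the same number of measurements n_B = round(sampling_rate * B * B)."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    n_B = max(1, round(sampling_rate * B * B))
    phi = rng.standard_normal((n_B, B * B)) / np.sqrt(n_B)   # one shared Phi_B
    measurements = []
    for r in range(0, H - H % B, B):
        for c in range(0, W - W % B, B):
            x_i = image[r:r + B, c:c + B].reshape(-1)   # block as a 1-D signal
            measurements.append(phi @ x_i)               # y_i = Phi_B x_i, Eq. (1)
    return phi, np.asarray(measurements)
```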

2.2 Dynamic Block Compressive Sampling (DBCS)

Dynamic Block Compressive Sampling (DBCS) [11] is a variation of Compressed Sensing (CS) in which the signal is divided into blocks and the measurement process dynamically adapts the number of measurements taken from each block based on its signal content. Unlike conventional CS, which typically employs a fixed number of measurements across the entire signal, DBCS allows a flexible allocation of measurements, prioritizing regions with higher signal complexity or information density. This adaptivity can lead to improved reconstruction quality, especially when signal characteristics vary across different parts of the signal. In DBCS, the image also undergoes block division as in BCS. However, a distinct number of measurements is selected from each block. The sampled vector $y_i$ in DBCS is represented as:

$y_i = \Phi_a^{(B)} x_i$    (2)

Here, $\Phi_a^{(B)}$ denotes the adaptive measurement matrix with dimensions $n_B \times B^2$, where $n_B$ varies in DBCS.

3 Proposed Entropy-driven Dynamic Block Compressive Sampling (EDBCS)
Entropy represents the information stored within the pixel values of an image.
Higher entropy in a block implies less sparsity in its coefficients and a greater
amount of information. Thus, calculating the entropy of image blocks facilitates
effective sample allocation. The following steps constitute the sampling scheme:

1. The entropy of block $i$ is computed by

   $S_i = -\sum_{j=0}^{255} P_{ij} \log_2 P_{ij}$    (3)

   where $P_{ij}$ denotes the probability of gray level $j$ in block $i$.

2. For every block, the entropy probability distribution is ascertained by

   $W_i = \dfrac{S_i}{\sum_{i=1}^{K} S_i}$    (4)

   where $K$ represents the total number of blocks in the image.


3. Let $C$ denote the total quantity of samples extracted, calculated by

   $C = \sum_{i=1}^{K} c_i$    (5)

   where $c_i$ represents the sample count of block $i$.


4. The number of samples assigned to each block is determined by

   $c_i = \mathrm{round}\left[ W_i (C - K c_0) + c_0 \right]$    (6)

   where $c_0$ denotes the initial sample count, which is constant for every block.
5. The resulting sampling rate of each block is given by

   $SR_i = \dfrac{c_i}{B \times B}$    (7)
6. To create the adaptive measurement matrix $\Phi_a^{(B)}$, $c_i$ rows are chosen from a random sampling matrix, specifically a sparse binary random matrix.
7. DBCS is subsequently performed using equation (2) to obtain $y_i$ (a code sketch of steps 1-7 is given after this list).
8. The OMP algorithm [13] is adopted for reconstruction; a compact sketch also follows this list. The mathematical steps of the OMP algorithm are shown below.
   Given:
   – Measurement matrix $\Phi$ of size $m \times n$, where $m$ is the number of measurements and $n$ is the signal dimensionality.
   – Observed signal vector $y$ of length $m$.
   – Sparsity level $k$, representing the number of non-zero coefficients in the sparse signal.
   Initialization:
   – Initialize the residual $r^{(0)} = y$.
   – Initialize the support set $T^{(0)} = \emptyset$.
   Iteration:
   – Continue iterating until the stopping criterion is met (for instance, reaching the maximum number of iterations or meeting a residual norm threshold). For each iteration $t = 1, 2, \ldots$:
     • Compute the correlation coefficients $a_i^{(t)} = \Phi_i^T r^{(t-1)}$ for all $i \notin T^{(t-1)}$, where $\Phi_i$ denotes the $i$-th column of $\Phi$.
     • Select the index with the maximum correlation: $j^{(t)} = \arg\max_{i \notin T^{(t-1)}} |a_i^{(t)}|$.
     • Update the support set: $T^{(t)} = T^{(t-1)} \cup \{ j^{(t)} \}$.
     • Compute the least squares solution over the selected atoms: $\hat{x}^{(t)} = \arg\min_x \| y - \Phi_{T^{(t)}} x \|_2^2$.
     • Revise the residual: $r^{(t)} = y - \Phi_{T^{(t)}} \hat{x}^{(t)}$.
   Finalization: Return the sparse coefficient vector $\hat{x}$, where $\hat{x}_i = 0$ for $i \notin T^{(t)}$.
   The algorithm iteratively selects the atom of the measurement matrix that has the highest correlation with the current residual, updates the support set, computes the least squares solution for the selected atoms, and updates the residual until convergence or until the specified stopping criterion is met. Finally, it returns the sparse coefficient vector on the selected support.
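To make the scheme concrete, the sketch below shows one possible Python realization of steps 1-7. The experiments in this paper were run in MATLAB, so this is an illustration only; the block size B, the overall measurement budget, the initial count c0, and the 0/1 density of the sparse binary matrix are assumptions of the example, not values taken from the paper. An 8-bit grayscale image is assumed.

```python
import numpy as np

def entropy_measurement_allocation(image, B=8, total_budget=None, c0=4, seed=0):
    """Entropy-driven allocation (Eqs. 3-7): blocks with higher entropy
    receive more measurements; every block keeps at least c0 of them."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    blocks = [image[r:r + B, c:c + B].reshape(-1)
              for r in range(0, H - H % B, B)
              for c in range(0, W - W % B, B)]
    K = len(blocks)
    if total_budget is None:
        total_budget = int(0.3 * K * B * B)        # e.g. an overall SR of 0.3

    # Eq. (3): Shannon entropy of each block's gray-level histogram.
    S = np.empty(K)
    for i, x in enumerate(blocks):
        p = np.bincount(x.astype(np.uint8), minlength=256) / x.size
        p = p[p > 0]
        S[i] = -(p * np.log2(p)).sum()

    W_i = S / S.sum()                              # Eq. (4): entropy weights
    c_i = np.round(W_i * (total_budget - K * c0) + c0).astype(int)   # Eq. (6)
    c_i = np.clip(c_i, 1, B * B)                   # keep the counts feasible
    SR_i = c_i / (B * B)                           # Eq. (7): per-block rates

    # Step 6: adaptive matrix = first c_i rows of a sparse binary random matrix.
    phi_full = (rng.random((B * B, B * B)) < 0.25).astype(float)
    phis = [phi_full[:c, :] for c in c_i]
    ys = [phi @ x for phi, x in zip(phis, blocks)]  # Step 7 / Eq. (2)
    return phis, ys, SR_i
```

A matching sketch of the OMP reconstruction in step 8 is given below; again this is an illustration under the same assumptions, not the reference implementation of [13]. In practice, each block's $y_i$ would be reconstructed with its own $\Phi_a^{(B)}$ and the recovered blocks reassembled into the image.

```python
import numpy as np

def omp(phi, y, k, tol=1e-6):
    """Orthogonal Matching Pursuit: greedily pick the column of phi most
    correlated with the residual, then refit the coefficients by least squares."""
    m, n = phi.shape
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        correlations = phi.T @ residual            # a_i = Phi_i^T r
        correlations[support] = 0.0                # ignore already-chosen atoms
        j = int(np.argmax(np.abs(correlations)))   # best new atom
        support.append(j)
        # Least squares on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat
```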

4 Results and discussion

Simulation of the various compression schemes uses MATLAB R2019b. Underwater images from the database cited in [16] are employed.

4.1 Objective evaluation

The performance of these schemes is compared with another dynamic algorithm, PE-ABCS [12], and conventional BCS [6]. Performance evaluation indicators such as PSNR and SSIM are used for comparison; they are given by the equations

$PSNR = 20 \log_{10} \dfrac{MAX_I}{\sqrt{MSE}}$    (8)

where MSE denotes the Mean Square Error given by

$MSE = \dfrac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( OI(m,n) - RI(m,n) \right)^2$    (9)

SSIM is given by

$SSIM = \dfrac{(2\mu_x \mu_y + S_1)(2\sigma_{xy} + S_2)}{(\mu_x^2 + \mu_y^2 + S_1)(\sigma_x^2 + \sigma_y^2 + S_2)}$    (10)

The stabilization factors are denoted as $S_1 = (k_1 L)^2$ and $S_2 = (k_2 L)^2$, where $k_1 = 0.01$, $k_2 = 0.03$, and $L$ represents the dynamic range of the pixel values. The NCC [15] formula is

$NCC = \dfrac{\sum_{m=1}^{M} \sum_{n=1}^{N} OI(m,n)\, RI(m,n)}{\sum_{m=1}^{M} \sum_{n=1}^{N} OI(m,n)^2}$    (11)

The NAE [15] formula is

$NAE = \dfrac{\sum_{m=1}^{M} \sum_{n=1}^{N} |OI(m,n) - RI(m,n)|}{\sum_{m=1}^{M} \sum_{n=1}^{N} |OI(m,n)|}$    (12)

Here, OI(m,n) represents the pixel value of the original image, while RI(m,n) denotes the pixel value of the reconstructed image. The underwater test images are selected from the database in [16]. The PSNR values for several test images are displayed in Table 1.
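For reference, equations (8)-(12) translate almost directly into code. The sketch below is illustrative only; in particular, the global-statistics form of SSIM is used here, whereas practical evaluations usually average equation (10) over local windows.

```python
import numpy as np

def psnr(oi, ri, max_val=255.0):
    mse = np.mean((oi.astype(float) - ri.astype(float)) ** 2)    # Eq. (9)
    return 20 * np.log10(max_val / np.sqrt(mse))                 # Eq. (8)

def ssim_global(oi, ri, L=255.0, k1=0.01, k2=0.03):
    """Eq. (10) evaluated on global image statistics."""
    x, y = oi.astype(float), ri.astype(float)
    S1, S2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + S1) * (2 * cov + S2)) / \
           ((mx ** 2 + my ** 2 + S1) * (vx + vy + S2))

def ncc(oi, ri):
    x, y = oi.astype(float), ri.astype(float)
    return (x * y).sum() / (x ** 2).sum()                        # Eq. (11)

def nae(oi, ri):
    x, y = oi.astype(float), ri.astype(float)
    return np.abs(x - y).sum() / np.abs(x).sum()                 # Eq. (12)
```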

Table 1: Differences in PSNR between various compression methods

PSNR (dB) analysis for 8x8 blocks

TEST IMAGE     BCS [6]   PE-ABCS [12]   Proposed DBCS
SR = 0.1
Sea animal      25.34       30.92           32.87
Algae           23.42       25.32           29.23
Golden fish     28.45       30.70           31.37
Group fishes    29.36       32.05           34.27
SR = 0.3
Sea animal      28.45       32.89           34.99
Algae           25.34       28.21           31.94
Golden fish     30.56       32.83           34.93
Group fishes    31.49       35.93           36.91
SR = 0.5
Sea animal      29.48       33.74           36.82
Algae           26.45       29.94           33.85
Golden fish     32.65       34.68           36.84
Group fishes    34.58       35.99           38.47

Table 1 compares the PSNR values obtained from the different compression schemes for various test images at different sampling rates (SR). The proposed DBCS outperforms conventional BCS and PE-ABCS across all test images and sampling rates, suggesting that it yields higher fidelity reconstructions, as indicated by the higher PSNR values. As expected, increasing the sampling rate generally leads to higher PSNR values, indicating better reconstruction quality; this trend is observed across all compression schemes and test images. The notably higher PSNR values achieved by the proposed technique, particularly at lower sampling rates, suggest its potential utility in practical applications where efficient compression with minimal loss of quality is desired, such as underwater surveillance or remote sensing.

Table 2: SSIM comparison for different compression techniques

SSIM analysis for 8x8 blocks

TEST IMAGE     BCS [6]   PE-ABCS [12]   Proposed DBCS
SR = 0.1
Sea animal      0.6293      0.8002          0.8237
Algae           0.5385      0.6359          0.7923
Golden fish     0.6834      0.7234          0.8234
Group fishes    0.6934      0.7123          0.8734
SR = 0.3
Sea animal      0.7345      0.8102          0.8495
Algae           0.5923      0.6999          0.8134
Golden fish     0.7423      0.7923          0.8844
Group fishes    0.7845      0.8023          0.9451
SR = 0.5
Sea animal      0.8012      0.8275          0.8999
Algae           0.7934      0.8493          0.9212
Golden fish     0.7988      0.8174          0.9453
Group fishes    0.8034      0.8923          0.9823

Table 2 compares the SSIM values obtained from the different compression schemes (BCS, PE-ABCS, proposed DBCS) for the various test images at different sampling rates (SR). Generally, higher SSIM values indicate better structural similarity between the original and reconstructed images, meaning the reconstructed images more closely resemble the originals in terms of structure, texture, and details. The results suggest that the proposed technique preserves structural similarity better during compression, resulting in more faithful reconstructions. The consistently higher SSIM values obtained with the proposed technique, particularly at lower sampling rates, suggest its potential utility in practical applications where maintaining structural similarity is crucial, such as medical imaging or video surveillance.
NCC is used to measure the similarity between two images. It compares corresponding pixel values in the two images and computes their correlation after normalization. This normalization accounts for variations in illumination, contrast, and brightness between the images, making NCC robust to such changes. High NCC values indicate strong similarity between the images, while low values suggest dissimilarity; an NCC value of 1 indicates a perfect match between the two images. The NCC analysis is shown in Table 3.

Table 3: NCC comparison for different compression techniques

NCC analysis for 8x8 blocks

TEST IMAGE     BCS       PE-ABCS   Proposed DBCS
SR = 0.1
Sea animal      0.7394     0.8623       0.9364
Algae           0.7933     0.8734       0.9147
Golden fish     0.8012     0.8923       0.8934
Group fishes    0.8125     0.8735       0.8945
SR = 0.3
Sea animal      0.8545     0.8897       0.9545
Algae           0.8375     0.8845       0.9454
Golden fish     0.8934     0.9011       0.9024
Group fishes    0.7945     0.8976       0.9123
SR = 0.5
Sea animal      0.7993     0.9276       0.9623
Algae           0.8234     0.9512       0.9723
Golden fish     0.7126     0.9453       0.9812
Group fishes    0.7493     0.9221       0.9712

At SR = 0.1, the proposed DBCS method achieves notably higher NCC values than the other methods across all image categories (Sea animal, Algae, Golden fish, and Group fishes). This trend continues across the other SR levels, indicating the robustness and effectiveness of the proposed DBCS approach in preserving image similarity and quality under varying sampling rates.
NAE stands for Normalized Absolute Error. It is a metric used to quantify the difference, or error, between two sets of data. A lower NAE value indicates a smaller absolute error and better agreement between the images, while a higher value suggests larger discrepancies. An NAE value of 0 indicates no error between the two sets of data; achieving it signifies that the reconstructed values match the ground truth exactly, with no deviation. The NAE analysis is shown in Table 4.
The proposed DBCS method consistently outperforms the BCS and PE-ABCS methods in terms of lower NAE values. Lower NAE values indicate better agreement between the original and reconstructed images, reflecting improved reconstruction accuracy.

Table 4: NAE comparison for different compression techniques

NAE analysis for 8x8 blocks

TEST IMAGE     BCS       PE-ABCS   Proposed DBCS
SR = 0.1
Sea animal      0.5923     0.4853       0.3752
Algae           0.5823     0.3023       0.2853
Golden fish     0.5642     0.3859       0.2793
Group fishes    0.4893     0.4234       0.3012
SR = 0.3
Sea animal      0.5283     0.4863       0.2843
Algae           0.4967     0.4823       0.2734
Golden fish     0.4712     0.4012       0.2412
Group fishes    0.4012     0.3923       0.2834
SR = 0.5
Sea animal      0.3947     0.2834       0.2012
Algae           0.2823     0.2482       0.1923
Golden fish     0.2842     0.1999       0.1823
Group fishes    0.2973     0.1193       0.1075

4.2 Compressed size, space-saving and run-time analysis

The compressed size of an image refers to the reduced file size achieved through compression without significant loss of visual quality. By minimizing redundant data, compressed images maintain their essential visual content while occupying less disk space or bandwidth.
Space saving refers to the reduction in storage or memory usage achieved through compression. It is essential in computing contexts where resources are limited or expensive, such as storing files on disk, transmitting data over networks, or running applications with constrained memory. The formula used for the calculation is

$\text{Space saving percentage} = \left( 1 - \dfrac{\text{Compressed Size}}{\text{Original Size}} \right) \times 100\%$    (13)
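For instance, equation (13) amounts to a one-line computation; the byte counts in the comment below are illustrative values, not taken from Table 5.

```python
def space_saving_percent(original_size_bytes: int, compressed_size_bytes: int) -> float:
    """Eq. (13): percentage of storage saved by compression."""
    return (1.0 - compressed_size_bytes / original_size_bytes) * 100.0

# Example with illustrative numbers: space_saving_percent(40960, 16384) -> 60.0
```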

Runtime, in computing, refers to the time taken by a program, process, or algorithm to execute from start to finish. It is a crucial metric for evaluating the efficiency and performance of algorithms; reduced runtime signifies improved efficiency in program execution.
Table 5 presents a comparative analysis of the algorithms based on these metrics. The algorithms are evaluated using the image "Goldfish" with dimensions 320 × 200 pixels, and the original image size is 40,960 bytes.

Table 5: Comparative analysis of multiple algorithms

Image: Goldfish (480 × 320), size = 40,960 bytes

            Compressed size (bytes)      Space saving (%)           Run time (s)
Technique   SR=0.1  SR=0.3  SR=0.5    SR=0.1  SR=0.3  SR=0.5    SR=0.1  SR=0.3  SR=0.5
DBCS        10604   14614   14213     72.52   65.17   63.62      6.84    8.69   12.74
PE-ABCS     21398   22571   24123     51.00   42.73   37.45      9.74   14.23   18.89
BCS         22371   27992   32423     46.71   31.80   22.78     12.46   18.73   24.38

It provides the compressed size in bytes for each algorithm at different sampling ratios (SR = 0.1, SR = 0.3, SR = 0.5), along with the corresponding space-saving percentages and run times in seconds. From the results, it can be observed that the DBCS algorithm outperforms the other algorithms in terms of space saving and run time across all sampling ratios.

4.3 Subjective evaluation


The perceptual quality of the recovered images is shown in Figure 2.
From Figure 2, we can infer that the perceptual quality of images reconstructed using the proposed DBCS technique is superior to that of the other techniques, namely BCS and PE-ABCS. This implies that the final images look similar to the originals in terms of structure, texture, and details. The superior visual quality of the proposed DBCS technique can be attributed to its dynamic block-based approach, which adapts the measurement allocation to the entropy content of the blocks. This adaptiveness allows more efficient utilization of resources and better preservation of image details, resulting in higher visual quality. Thus, the proposed DBCS technique achieves better visual quality in compressed images than the conventional BCS and PE-ABCS methods. The reconstruction quality of a few other images reconstructed at SR = 0.3 is shown in Figure 3.
Figure 3 shows a few other underwater images reconstructed at a sampling rate of SR = 0.3. It can be seen that all images are reconstructed without block distortions, implying that the reconstruction method is effective at preserving image quality even with a reduced amount of data.

4.4 Subjective assessment using MOS


The Mean Opinion Score (MOS) is a metric used to quantify the subjective quality of reconstructed images through human evaluation; individuals provide ratings based on their subjective perception. Quality is rated on a scale from 1 to 5: a rating of 1 signifies 'annoying quality', 2 denotes 'poor quality', 3 indicates 'fair quality', 4 reflects 'good quality', and 5 signifies 'excellent quality'. Fifty images were chosen from the dataset and compressed using the proposed method.

Fig. 2: Underwater image perceptual quality assessment at a sampling rate of 0.1: (a) original image; (b), (c), and (d) rebuilt using BCS [6], PE-ABCS [12], and the proposed DBCS, in that order

Twenty individuals from the general population were contacted, briefed on the scoring process, and presented with the reconstructed images for evaluation. First, the input image was displayed, followed by the reconstructed image. The scores provided by the evaluators were then aggregated and are shown in Table 6.
Table 6 presents the Mean Opinion Scores (MOS) for different attributes, indicating the perceived quality of the content. Attributes such as intensity, variation, precision, originality, graininess, and coarseness are evaluated. Notably, precision receives a high score (4.5473), suggesting finely detailed and accurately represented content.

Fig. 3: Left column: original images; right column: images reconstructed using DBCS at SR = 0.3

Table 6: MOS
Attribute Average MOS
Intensity 4.3
Variation 4.3384
Precision 4.5473
Originality 4.4479
Graininess 4.6345
Coarseness 4.534

The scores for variation and originality indicate dynamic and impressive reconstructed image content. However, slight concerns arise with graininess and coarseness, although overall the content appears to be of high quality based on the provided scores.

4.5 Complexity analysis

The proposed DBCS method involves only linear mathematical calculations to fix the dynamic measurement count for each block. Since simple linear equations involve only one variable raised to the first power, their complexity is low; solving them requires only basic arithmetic operations such as addition, subtraction, multiplication, and division. As a result, the computational complexity of the per-block allocation is constant: the Big-O analysis gives O(1), because solving a simple linear equation involves a fixed number of arithmetic operations regardless of the size of the coefficients. Whether the equation involves small or large numbers, the computational effort remains the same, so the time complexity is constant, denoted O(1), indicating that this step's cost is independent of the input size.

4.6 The appropriateness of the proposed DBCS for IoUT.

Limited sample count selection: According to Table 5, DBCS can reconstruct images using fewer bytes of data. This reduced number of samples results in a much smaller size of the recovered image compared to the original, and the minimal sample selection helps address memory and storage challenges in IoUT.
Basic arithmetic computations: Complex operations lead to longer computational times, consequently squandering the power and energy resources of low-power embedded devices. The proposed DBCS opts for straightforward operations instead of intricate matrix-vector calculations.
High-quality visuals with minimal distortions: Perceptual quality is not related to memory or power concerns. However, attaining excellent reconstruction quality offers an added advantage in capturing specific details crucial for analysis. This aspect is the standout feature of the DBCS.
Short execution time: From the data presented in Table 5, it can be inferred that DBCS operates more quickly owing to its simplicity. This contributes to enhancing battery longevity in the sensor nodes.
Improved space saving: Space saving is determined by comparing the sizes of the original and recovered files. Table 5 displays the compressed file size and space efficiency of the proposed DBCS in comparison with the other methods. The proposed DBCS achieves a space saving of approximately 60-70%, surpassing the other methods by a significant margin.

4.7 Graphical evaluation


The graphical analysis of the results obtained is shown in Figure 4.
From Figure 4, it is observed that the proposed DBCS achieves high PSNR values in all cases. It also achieves a smaller compressed size in bytes, resulting in higher space saving in all cases. This implies that the proposed method effectively reduces the size of the compressed data while preserving image quality, leading to more efficient storage and transmission of images.

Fig. 4: (a) PSNR analysis of various techniques, (b) compressed size (bytes) analysis, and (c) space-saving analysis

5 Conclusions and future direction


This research paper introduces a Dynamic Block Compressive Sampling (DBCS) technique tailored for Internet of Underwater Things (IoUT) devices with constrained resources, leveraging block entropy for adaptive measurement allocation. Compared to other dynamic BCS methods, the proposed EDBCS notably enhances the visual quality of reconstructed images. Its straightforward entropy computation and substantial quality enhancement render it highly practical for IoUT applications. Notably, even at low sampling rates, this method consistently achieves favourable subjective and objective results, independent of colour, dimensions, structure, or texture, by eliminating block measurement redundancy. Future work aims to implement the EDBCS scheme for underwater surveillance videos.

References
1. Canh, T.N., Dinh, K.Q., Jeon, B.: Edge-preserving nonlocal weighting scheme for
total variation based compressive sensing recovery. In: 2014 IEEE International
Conference on Multimedia and Expo (ICME). pp. 1–5. IEEE (2014)
2. Domingo, M.C.: An overview of the internet of things for people with disabilities.
Journal of Network and Computer Applications 35(2), 584–596 (2012)
3. Donoho, D.L.: Compressed sensing. IEEE Transactions on information theory
52(4), 1289–1306 (2006)
4. Duan, X., Li, X., Li, R.: A measurement allocation for block image compressive
sensing. In: International Conference on Cloud Computing and Security. pp. 110–
119. Springer (2018)

5. Feng, W., Zhang, J., Hu, C., Wang, Y., Xiang, Q., Yan, H.: A novel saliency
detection method for wild animal monitoring images with wmsn. Journal of Sensors
2018 (2018)
6. Gan, L.: Block compressed sensing of natural images. In: 2007 15th International
conference on digital signal processing. pp. 403–406. IEEE (2007)
7. Li, R., Duan, X., Guo, X., He, W., Lv, Y.: Adaptive compressive sensing of images
using spatial entropy. Computational intelligence and neuroscience 2017 (2017)
8. Li, R., Duan, X., Lv, Y.: Adaptive compressive sensing of images using er-
ror between blocks. International Journal of Distributed Sensor Networks 14(6),
1550147718781751 (2018)
9. Liu, W., Liu, H., Wang, Y., Zheng, X., Zhang, J.: A novel extraction method
for wildlife monitoring images with wireless multimedia sensor networks (wmsns).
Applied Sciences 9(11), 2276 (2019)
10. Monika, R., Dhanalakshmi, S., Sreejith, S.: Coefficient random permutation based
compressed sensing for medical image compression. In: Advances in Electronics,
Communication and Computing, pp. 529–536. Springer (2018)
11. Monika, R., Samiappan, D., Kumar, R.: Underwater image compression using en-
ergy based adaptive block compressive sensing for iout applications. The Visual
Computer pp. 1–17 (2020)
12. Monika, R., Samiappan, D., Kumar, R.: Adaptive measurement allocation for un-
derwater images using block energy in haar wavelet domain. In: AIP Conference
Proceedings. vol. 2427. AIP Publishing (2023)
13. Shen, Y., Li, S.: Sparse signals recovery from noisy measurements by orthogonal
matching pursuit. Inverse Problems & Imaging 9(1), 231 (2015)
14. Sun, F., Xiao, D., He, W., Li, R.: Adaptive image compressive sensing using texture
contrast. International Journal of Digital Multimedia Broadcasting 2017 (2017)
15. Thakur, K.V., Damodare, O.H., Sapkal, A.M.: Identification of suited quality met-
rics for natural and medical images. Signal and Image Processing : An International
Journal (SIPIJ) 7(3), 29–43 (2016)
16. xahidbuffon: Underwater-Datasets. https://github.com/xahidbuffon/
Underwater-Datasets/commits?author=xahidbuffon (2019), [Online; accessed
5-Apr-2021]
17. Xin, L., Junguo, Z., Chen, C., Fantao, L.: Adaptive sampling rate assignment for
block compressed sensing of images using wavelet transform. The Open Cybernetics
and Systemics Journal (2018)
18. Yu, Y., Wang, B., Zhang, L.: Saliency-based compressive sampling for image sig-
nals. IEEE signal processing letters 17(11), 973–976 (2010)
19. Zhang, J., Xiang, Q., Yin, Y., Chen, C., Luo, X.: Adaptive compressed sensing for
wireless image sensor networks. Multimedia Tools and Applications 76(3), 4227–
4242 (2017)
20. Zhang, S.F., Li, K., Xu, J.T., Qu, G.C.: Image adaptive coding algorithm based on compressive sensing. Journal of Tianjin University 4 (2012)
21. Zhao, H.H., Rosin, P.L., Lai, Y.K., Zheng, J.H., Wang, Y.N.: Adaptive gradient-
based block compressive sensing with sparsity for noisy images. Multimedia Tools
and Applications pp. 1–23 (2019)
22. Zhao, H.h., Rosin, P.L., Lai, Y.K., Zheng, J.h., Wang, Y.n.: Adaptive block com-
pressive sensing for noisy images. In: International Symposium on Artificial Intel-
ligence and Robotics. pp. 389–399. Springer (2018)
