R. Monika - Springer Book Chapter
1 Introduction
In BCS, a fixed number of samples is taken from each block, whereas in DBCS a varying number of measurements is acquired from every block.
In traditional CS, the sampling matrix's dimensions match those of the image, so measurements are acquired from the entire image at once, regardless of how information is distributed across its regions. This approach often results in inferior reconstruction quality. In contrast, BCS segments the image into B × B blocks and samples each block with a sampling matrix. Each block is transformed into a 1-D signal x_i, and a predetermined number of measurements is taken from it to construct the sampled vector y_i:
y_i = \Phi^{(B)} x_i \quad (1)

y_i = \Phi_a^{(B)} x_i \quad (2)

Here, \Phi_a^{(B)} denotes the adaptive measurement matrix of dimensions n^{(B)} \times B^2, where n^{(B)} varies in ABCS.
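As a concrete illustration of equation (1), the following is a minimal NumPy sketch that flattens each B × B block and samples it. The Gaussian measurement matrix, the function name, and the assumption that the image dimensions are multiples of B are illustrative choices, not the chapter's exact construction.

```python
import numpy as np

def block_cs_sample(image, B, n_meas, seed=0):
    """Block compressive sampling sketch: y_i = Phi @ x_i, as in equation (1)."""
    rng = np.random.default_rng(seed)
    # Illustrative Gaussian measurement matrix of size n_meas x B^2;
    # the chapter's sparse binary random matrix is sketched later.
    Phi = rng.standard_normal((n_meas, B * B))
    H, W = image.shape  # assumed to be multiples of B
    measurements = []
    for r in range(0, H, B):
        for c in range(0, W, B):
            x_i = image[r:r + B, c:c + B].reshape(-1)  # block as 1-D signal
            measurements.append(Phi @ x_i)             # y_i = Phi x_i
    return Phi, measurements
```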
where c_0 denotes the initial sample count, which is constant for every block.
5. The sampling rate of block i in the initial state is given by:

SR_i = \frac{c_i}{B \times B} \quad (7)
6. To create the adaptive measurement matrix \Phi_a^{(B)}, c_i rows are chosen from a random sampling matrix, often known as a "sparse binary random matrix" (see the sketch after this list).
7. DBCS is subsequently performed using equation (2) to obtain y_i.
8. The OMP algorithm [13] is adopted for reconstruction. Its mathematical steps are shown below, with an illustrative code sketch after the list:
Given:
– Measurement matrix Φ of size m × n, where m is the number of measurements and n is the signal dimensionality.
– Observed signal vector y of length m.
– Sparsity level k, representing the number of non-zero coefficients in the
sparse signal.
Initialization: Set the residual r^{(0)} = y and the support set T^{(0)} = ∅.
Iteration, for t = 1, 2, . . . , k:
∗ Select the atom most correlated with the current residual and add its index to the support set: T^{(t)} = T^{(t-1)} ∪ {\arg\max_j |\phi_j^T r^{(t-1)}|}.
∗ Compute the least squares solution for the selected atoms: \hat{x}^{(t)} = \arg\min_x \|y - \Phi_{T^{(t)}} x\|_2^2.
∗ Update the residual: r^{(t)} = y - \Phi_{T^{(t)}} \hat{x}^{(t)}.
Finalization: Return the sparse coefficient vector \hat{x}, where \hat{x}_i = 0 for i ∉ T^{(t)}.
The algorithm iteratively selects atoms from the measurement matrix that
have the highest correlation with the current residual, updates the support
set, computes the least squares solution for the selected atoms, and updates
the residual until convergence or the specified stopping criterion is met.
Finally, it returns the sparse coefficient vector with the selected support
set.
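To make steps 5-7 concrete, the sketch below forms \Phi_a^{(B)} by selecting c_i rows of a sparse binary random matrix and samples a single block. The matrix density, the choice of the first c_i rows, and all names are assumptions for illustration only, not the chapter's exact construction.

```python
import numpy as np

def adaptive_block_sample(x_i, c_i, B, density=0.1, seed=0):
    """DBCS sampling of one flattened block x_i with c_i measurements,
    as in equation (2); also returns the block sampling rate of (7)."""
    rng = np.random.default_rng(seed)
    # Sparse binary random matrix; a density of 0.1 is an assumed value.
    Phi_full = (rng.random((B * B, B * B)) < density).astype(float)
    Phi_a = Phi_full[:c_i, :]     # c_i rows -> adaptive matrix of size c_i x B^2
    sr_i = c_i / (B * B)          # per-block sampling rate, equation (7)
    return Phi_a @ x_i, sr_i
```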
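The OMP steps of item 8 translate directly into the following minimal NumPy sketch. Running exactly k iterations as the stopping rule is an assumption; practical implementations may also stop on a residual threshold.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual = y.astype(float).copy()    # r^(0) = y
    support = []                         # T^(0) = empty set
    coeffs = np.zeros(0)
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0      # avoid reselecting chosen atoms
        support.append(int(np.argmax(correlations)))
        # Least squares solution restricted to the selected atoms.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Update the residual.
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(n)
    x_hat[support] = coeffs              # x_hat_i = 0 for i outside the support
    return x_hat
```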
Reconstruction quality is evaluated using PSNR, given by:

PSNR = 10 \log_{10} \left( \frac{MAX_I^2}{MSE} \right) \quad (8)
where MAX_I is the maximum possible pixel value and MSE denotes the Mean Square Error, given by:

MSE = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( OI(m,n) - RI(m,n) \right)^2 \quad (9)
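A minimal sketch of equations (8) and (9), assuming 8-bit images so that MAX_I = 255:

```python
import numpy as np

def psnr(original, reconstructed, max_i=255.0):
    """PSNR per equations (8) and (9); max_i = 255 assumes 8-bit images."""
    o = original.astype(float)
    r = reconstructed.astype(float)
    mse = np.mean((o - r) ** 2)               # equation (9)
    return 10.0 * np.log10(max_i ** 2 / mse)  # equation (8)
```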
The structural similarity index (SSIM) is computed as:

SSIM = \frac{(2\mu_x \mu_y + S_1)(2\sigma_{xy} + S_2)}{(\mu_x^2 + \mu_y^2 + S_1)(\sigma_x^2 + \sigma_y^2 + S_2)} \quad (10)
The stabilization factors are S_1 = (k_1 L)^2 and S_2 = (k_2 L)^2, where k_1 = 0.01, k_2 = 0.03, and L represents the dynamic range of the pixel values. The NCC [15] formula is:
NCC = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} OI(m,n) \cdot RI(m,n)}{\sum_{m=1}^{M} \sum_{n=1}^{N} OI(m,n)^2} \quad (11)
Here, OI(m,n) represents the pixel value of the original image, while RI(m,n) denotes the pixel value of the reconstructed image. Underwater test images are selected from the database in [16]. The PSNR values for several test images are displayed in Table 1.
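For reference, equations (10) and (11) translate into the following sketch. Computing SSIM globally over the whole image (rather than over sliding windows, as common SSIM implementations do) mirrors the single-formula form given above.

```python
import numpy as np

def global_ssim(oi, ri, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM per equation (10), computed over whole images."""
    oi, ri = oi.astype(float), ri.astype(float)
    s1, s2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = oi.mean(), ri.mean()
    var_x, var_y = oi.var(), ri.var()
    sigma_xy = ((oi - mu_x) * (ri - mu_y)).mean()
    return ((2 * mu_x * mu_y + s1) * (2 * sigma_xy + s2)) / (
        (mu_x ** 2 + mu_y ** 2 + s1) * (var_x + var_y + s2))

def ncc(oi, ri):
    """Normalized cross-correlation per equation (11)."""
    oi, ri = oi.astype(float), ri.astype(float)
    return (oi * ri).sum() / (oi ** 2).sum()
```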
Table 1 compares the PSNR values obtained from the different compression schemes for various test images at different sampling rates (SR). The proposed DBCS outperforms conventional BCS and PE-ABCS across all test images and sampling rates. This suggests that the proposed technique yields higher-fidelity reconstructions, as indicated by the higher PSNR values. As expected, increasing the sampling rate generally leads to higher PSNR values, indicating better reconstruction quality.
Table 2 compares the SSIM values obtained from the different compression schemes
(BCS, PE-ABCS, Proposed DBCS) for various test images at different sampling
rates (SR). Generally, higher SSIM values indicate better structural similarity
between the original and reconstructed images. Therefore, higher SSIM values
imply that the reconstructed images more closely resemble the originals in terms
of structure, texture, and details. This suggests that the proposed techniques pre-
serve structural similarity better during compression, resulting in more faithful
reconstructions. The consistently higher SSIM values obtained with the proposed
techniques, particularly at lower sampling rates, suggest their potential utility
in practical applications where maintaining structural similarity is crucial, such
as in medical imaging or video surveillance.
NCC is used to measure the similarity between two images. NCC compares
corresponding pixel values in two images and computes their correlation after
normalization. This normalization accounts for variations in illumination, con-
trast, and brightness between the images, making NCC robust to such changes.
High NCC values indicate strong similarity between the images, while low values indicate greater dissimilarity.
At SR = 0.1, the proposed DBCS method achieves notably higher NCC values than the other methods across all image categories, such as Sea animal, Algae, Golden fish, and Group fishes. This trend continues across the different SR levels, indicating the robustness and effectiveness of the proposed DBCS approach in preserving image similarity and quality under varying sampling rates.
NAE stands for "Normalized Absolute Error," a metric used to quantify the error between two sets of data. A lower NAE value indicates a smaller absolute error and better agreement between the image sets, while a higher value suggests larger discrepancies; an NAE of 0 means the reconstructed values match the originals exactly, with no deviation. The NAE analysis is shown in Table 4.
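The chapter does not reproduce the NAE formula itself; the sketch below assumes the common definition, the sum of absolute differences normalized by the sum of absolute original pixel values, which matches the behaviour described above (0 means an exact match).

```python
import numpy as np

def nae(original, reconstructed):
    """Normalized Absolute Error; common definition assumed, since the
    chapter describes the metric without giving the formula."""
    o = original.astype(float)
    r = reconstructed.astype(float)
    return np.abs(o - r).sum() / np.abs(o).sum()
```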
The proposed DBCS method consistently achieves lower NAE values than the BCS and PE-ABCS methods. Lower NAE values indicate better agreement between the original and reconstructed images, reflecting improved reconstruction accuracy.
The compressed size of an image refers to the reduced file size achieved through various compression techniques without significant loss of visual quality. By minimizing redundant data, compressed images maintain their essential visual content while occupying less disk space or bandwidth.
Space saving refers to the reduction in storage or memory usage achieved
through compression techniques. It’s essential in various computing contexts
where resources are limited or expensive, such as storing files on disk drives,
transmitting data over networks, or running applications with constrained mem-
ory. The formula used for calculation is,
Space saving percentage = \left( 1 - \frac{\text{Compressed Size}}{\text{Original Size}} \right) \times 100\% \quad (13)
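Equation (13) amounts to a one-line computation; the sketch below assumes sizes measured in bytes.

```python
def space_saving_percent(original_size, compressed_size):
    """Space saving per equation (13); sizes in bytes."""
    return (1.0 - compressed_size / original_size) * 100.0

# Example: a 65536-byte image compressed to 6554 bytes saves about 90%.
```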
The results table reports the compressed size in bytes for each algorithm at different sampling ratios (SR = 0.1, 0.3, 0.5), along with the corresponding space-saving percentages and run times in seconds. From these results, it can be observed that the DBCS algorithm outperforms the other algorithms in terms of both space saving and run time across all sampling ratios.
For the subjective evaluation, individuals from the general population were contacted, briefed on the scoring process, and presented with reconstructed images for evaluation. The input image was displayed first, followed by the reconstructed image. The scores provided by the evaluators were then combined and are shown in Table 6.
Fig. 3: Left column: original images; right column: images reconstructed using DBCS at SR = 0.3.

Table 6: Mean Opinion Scores (MOS)
Attribute     Average MOS
Intensity     4.3
Variation     4.3384
Precision     4.5473
Originality   4.4479
Graininess    4.6345
Coarseness    4.534

Table 6 presents the Mean Opinion Scores (MOS) for different attributes, indicating the perceived quality of the content. Attributes such as intensity, variation, precision, originality, graininess, and coarseness are evaluated. Notably, precision receives one of the highest scores, suggesting finely detailed and accurately represented content. The scores for variation and originality indicate dynamic and impressive reconstructed image content. However, slight concerns arise with graininess and coarseness, although overall the content appears to be of high quality based on the provided scores.
The proposed DBCS method involves only linear mathematical calculations to fix the dynamic measurement count for each block. Since simple linear equations involve only one variable raised to the first power, their complexity is low: solving them requires a fixed number of basic arithmetic operations (addition, subtraction, multiplication, and division), regardless of the size of the coefficients. The computational effort is therefore constant, with a Big-O complexity of O(1), indicating that the algorithm's per-block performance is independent of the input size.
References
1. Canh, T.N., Dinh, K.Q., Jeon, B.: Edge-preserving nonlocal weighting scheme for
total variation based compressive sensing recovery. In: 2014 IEEE International
Conference on Multimedia and Expo (ICME). pp. 1–5. IEEE (2014)
2. Domingo, M.C.: An overview of the internet of things for people with disabilities.
Journal of Network and Computer Applications 35(2), 584–596 (2012)
3. Donoho, D.L.: Compressed sensing. IEEE Transactions on information theory
52(4), 1289–1306 (2006)
4. Duan, X., Li, X., Li, R.: A measurement allocation for block image compressive
sensing. In: International Conference on Cloud Computing and Security. pp. 110–
119. Springer (2018)
5. Feng, W., Zhang, J., Hu, C., Wang, Y., Xiang, Q., Yan, H.: A novel saliency
detection method for wild animal monitoring images with wmsn. Journal of Sensors
2018 (2018)
6. Gan, L.: Block compressed sensing of natural images. In: 2007 15th International
conference on digital signal processing. pp. 403–406. IEEE (2007)
7. Li, R., Duan, X., Guo, X., He, W., Lv, Y.: Adaptive compressive sensing of images
using spatial entropy. Computational intelligence and neuroscience 2017 (2017)
8. Li, R., Duan, X., Lv, Y.: Adaptive compressive sensing of images using er-
ror between blocks. International Journal of Distributed Sensor Networks 14(6),
1550147718781751 (2018)
9. Liu, W., Liu, H., Wang, Y., Zheng, X., Zhang, J.: A novel extraction method
for wildlife monitoring images with wireless multimedia sensor networks (wmsns).
Applied Sciences 9(11), 2276 (2019)
10. Monika, R., Dhanalakshmi, S., Sreejith, S.: Coefficient random permutation based
compressed sensing for medical image compression. In: Advances in Electronics,
Communication and Computing, pp. 529–536. Springer (2018)
11. Monika, R., Samiappan, D., Kumar, R.: Underwater image compression using en-
ergy based adaptive block compressive sensing for iout applications. The Visual
Computer pp. 1–17 (2020)
12. Monika, R., Samiappan, D., Kumar, R.: Adaptive measurement allocation for un-
derwater images using block energy in haar wavelet domain. In: AIP Conference
Proceedings. vol. 2427. AIP Publishing (2023)
13. Shen, Y., Li, S.: Sparse signals recovery from noisy measurements by orthogonal
matching pursuit. Inverse Problems & Imaging 9(1), 231 (2015)
14. Sun, F., Xiao, D., He, W., Li, R.: Adaptive image compressive sensing using texture
contrast. International Journal of Digital Multimedia Broadcasting 2017 (2017)
15. Thakur, K.V., Damodare, O.H., Sapkal, A.M.: Identification of suited quality met-
rics for natural and medical images. Signal and Image Processing : An International
Journal (SIPIJ) 7(3), 29–43 (2016)
16. xahidbuffon: Underwater-Datasets. https://github.com/xahidbuffon/Underwater-Datasets/commits?author=xahidbuffon (2019), [Online; accessed 5-Apr-2021]
17. Xin, L., Junguo, Z., Chen, C., Fantao, L.: Adaptive sampling rate assignment for
block compressed sensing of images using wavelet transform. The Open Cybernetics
and Systemics Journal (2018)
18. Yu, Y., Wang, B., Zhang, L.: Saliency-based compressive sampling for image sig-
nals. IEEE signal processing letters 17(11), 973–976 (2010)
19. Zhang, J., Xiang, Q., Yin, Y., Chen, C., Luo, X.: Adaptive compressed sensing for
wireless image sensor networks. Multimedia Tools and Applications 76(3), 4227–
4242 (2017)
20. Zhang, S.F., Li, K., Xu, J.T., Qu, G.C.: Image adaptive coding algorithm based on compressive sensing. Journal of Tianjin University 4 (2012)
21. Zhao, H.H., Rosin, P.L., Lai, Y.K., Zheng, J.H., Wang, Y.N.: Adaptive gradient-
based block compressive sensing with sparsity for noisy images. Multimedia Tools
and Applications pp. 1–23 (2019)
22. Zhao, H.H., Rosin, P.L., Lai, Y.K., Zheng, J.H., Wang, Y.N.: Adaptive block com-
pressive sensing for noisy images. In: International Symposium on Artificial Intel-
ligence and Robotics. pp. 389–399. Springer (2018)