Image Restoration Quality Assessment Based on Regional Differential Information Entropy
Abstract
1. Introduction
- Compared with traditional IQA methods, the regional differential information entropy (RDIE) method proposed in this paper yields objective scores that agree more closely with subjective assessments.
- Image information entropy is recast from a neural-network perspective: the traditional calculation can be simulated by convolution with fixed weights followed by a specific activation function (see the sketch after this list).
- The traditional information entropy calculation is serial, whereas the proposed RDIE formulation is highly parallel and therefore substantially faster to compute.
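To make the last two points concrete, below is a minimal sketch (not the authors' code) of how the entropy of a quantized difference image can be computed region by region with a convolution of all-ones weights and a −p·log p "activation", so that every window is processed in parallel. The window size k, quantization level K, and stride s mirror the parameters varied in the ablation experiments; the use of PyTorch, the exact quantization, and the final averaging are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def regional_differential_entropy(restored, reference, k=8, K=16, s=1):
    """Mean Shannon entropy over k x k windows of the quantized difference image.

    restored, reference: float tensors in [0, 1], shape (H, W).
    k, K, s: window size, quantization level, stride (illustrative defaults).
    """
    diff = restored - reference                              # differential image in [-1, 1]
    # Quantize the difference into K levels.
    bins = torch.clamp(((diff + 1.0) / 2.0 * K).long(), 0, K - 1)
    # One-hot encode: one channel per level -> shape (1, K, H, W).
    onehot = F.one_hot(bins, num_classes=K).permute(2, 0, 1).unsqueeze(0).float()
    # Convolution with fixed all-ones k x k weights counts, per window, how many
    # pixels fall into each level; all windows are evaluated in parallel.
    weight = torch.ones(K, 1, k, k)
    counts = F.conv2d(onehot, weight, stride=s, groups=K)    # (1, K, H', W')
    p = counts / float(k * k)                                # per-window level probabilities
    # "Activation": -p * log2(p), treating 0 * log 0 as 0.
    ent = torch.where(p > 0, -p * torch.log2(p.clamp_min(1e-12)), torch.zeros_like(p))
    return ent.sum(dim=1).mean()                             # average entropy over all windows

# Example run on random tensors, just to exercise the sketch:
# ref = torch.rand(128, 128)
# rest = (ref + 0.05 * torch.randn(128, 128)).clamp(0, 1)
# print(float(regional_differential_entropy(rest, ref, k=8, K=16, s=1)))
```

Because the per-window histograms come out of a single grouped convolution, the whole entropy map is produced in one batched operation instead of a serial loop over windows, which is the source of the speed-up claimed above.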
2. Materials and Methods
2.1. Related Works
2.1.1. No-Reference Method
2.1.2. Full-Reference Method
2.2. Method
2.3. Datasets
2.3.1. Our Datasets
2.3.2. PIPAL
3. Results
3.1. Results of Ablation Experiments
3.1.1. Different Window Sizes
3.1.2. Different Quantization Levels
3.1.3. Different Strides
3.2. Results in Our Dataset
3.3. Results in PIPAL
3.4. Additional Experiments
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Wang, Z.; Bovik, A.C.; Sheikh, H.R. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
2. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016; pp. 694–711.
3. Ledig, C.; Theis, L.; Huszár, F. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
4. Ramakrishnan, S.; Pachori, S.; Gangopadhyay, A. Deep generative filter for motion deblurring. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2993–3000.
5. Chen, J.; Chen, J.; Chao, H. Image blind denoising with generative adversarial network based noise modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3155–3164.
6. Wang, X.; Yu, K.; Wu, S. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 63–79.
7. Ma, C.; Rao, Y.; Cheng, Y. Structure-preserving super resolution with gradient guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7769–7778.
8. Tsai, D.Y.; Lee, Y.; Matsuyama, E. Information entropy measure for evaluation of image quality. J. Digit. Imaging 2008, 21, 338–347.
9. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
10. Liu, A.; Lin, W.; Narwaria, M. Image quality assessment based on gradient similarity. IEEE Trans. Image Process. 2011, 21, 1500–1512.
11. Xue, W.; Zhang, L.; Mou, X. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695.
12. Ye, P.; Kumar, J.; Kang, L. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105.
13. Liu, Y.; Gu, K.; Zhang, Y. Unsupervised blind image quality evaluation via statistical measurements of structure, naturalness, and perception. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 929–943.
14. Wang, Z.; Simoncelli, E.P. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model. In Human Vision and Electronic Imaging X; Proc. SPIE 2005, Volume 5666, pp. 149–159.
15. Chen, G.H.; Yang, C.L.; Xie, S.L. Gradient-based structural similarity for image quality assessment. In Proceedings of the 2006 IEEE International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; pp. 2929–2932.
16. Sheikh, H.R.; Bovik, A.C.; De Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128.
17. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
18. Kim, W.H.; Lee, J.S. Framework for fair objective performance evaluation of single-image super-resolution algorithms. Electron. Lett. 2015, 51, 42–44.
19. Liu, Z.; Laganière, R. Phase congruence measurement for image similarity assessment. Pattern Recognit. Lett. 2007, 28, 166–172.
20. Zhang, L.; Zhang, L.; Mou, X. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
21. Wu, J.; Wu, Y.; Che, R. Perceptual sensitivity based image structure-texture decomposition. In Proceedings of the 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Shenzhen, China, 6–8 August 2020; pp. 336–341.
22. Prashnani, E.; Cai, H.; Mostofi, Y. PieAPP: Perceptual image-error assessment through pairwise preference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1808–1817.
23. Zhang, R.; Isola, P.; Efros, A.A. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
24. Bosse, S.; Maniry, D.; Müller, K.R. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219.
25. Agustsson, E.; Timofte, R. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 126–135.
26. Carlson, R.E.; Fritsch, F.N. Monotone piecewise bicubic interpolation. SIAM J. Numer. Anal. 1985, 22, 386–400.
27. Lim, B.; Son, S.; Kim, H. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
28. Yu, J.; Fan, Y.; Yang, J. Wide activation for efficient and accurate image super-resolution. arXiv 2018, arXiv:1808.08718.
29. Dai, T.; Cai, J.; Zhang, Y. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11065–11074.
30. Zhang, K.; Zuo, W.; Chen, Y. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
31. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622.
32. Zhang, K.; Zuo, W.; Gu, S. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3929–3938.
33. Chen, H.; Wang, Y.; Guo, T. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12299–12310.
34. Qiao, S.; Yang, J.; Zhang, T. Layered input GradiNet for image denoising. Knowl.-Based Syst. 2022, 254, 109587.
35. Gu, J.; Cai, H.; Chen, H. PIPAL: A large-scale image quality assessment dataset for perceptual image restoration. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 633–651.
36. Madhusudana, P.C.; Birkbeck, N.; Wang, Y. Image quality assessment using contrastive learning. IEEE Trans. Image Process. 2022, 31, 4149–4161.
37. Damera-Venkata, N.; Kite, T.D.; Geisler, W.S. Image quality assessment based on a degradation model. IEEE Trans. Image Process. 2000, 9, 636–650.
38. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
39. Chandler, D.M.; Hemami, S.S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans. Image Process. 2007, 16, 2284–2298.
40. Zhang, L.; Zhang, L.; Mou, X. RFSIM: A feature based image quality assessment metric using Riesz transforms. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 321–324.
41. Zhang, L.; Li, H. SR-SIM: A fast and high performance IQA index based on spectral residual. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1473–1476.
42. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
43. Bae, S.H.; Kim, M. A novel image quality assessment with globally and locally consilient visual quality perception. IEEE Trans. Image Process. 2016, 25, 2392–2406.
Metrics |  |  |  |  |
---|---|---|---|---|
Time/ms | 56.2 | 14.5 | 138,600 | 25.9 |
Stride | Traditional Method | PSNR-Oriented Method | GAN-Based Method |
---|---|---|---|
1 | 0.6484 | 0.7270 | 0.5284 |
2 | 0.6480 | 0.7205 | 0.5292 |
3 | 0.6474 | 0.7203 | 0.5282 |
4 | 0.6533 | 0.7255 | 0.5276 |
5 | 0.6476 | 0.7203 | 0.5227 |
Method | Bicubic | EDSR | WDSR | SAN | SRGAN | SPSR |
---|---|---|---|---|---|---|
PSNR | 24.86 | 26.84 | 26.89 | 27.65 | 24.51 | 24.64 |
SSIM [1] | 0.6962 | 0.7740 | 0.7745 | 0.7997 | 0.6773 | 0.6953 |
MS-SSIM [9] | 0.8669 | 0.9171 | 0.9175 | 0.9297 | 0.8650 | 0.8769 |
CONTRIQUE [36] | 0.2527 | 0.1952 | 0.1651 | 0.1827 | 0.1632 | 0.1892
RDIE (ours) | 41.53 | 33.73 | 32.89 | 32.59 | 26.33 | 23.14
MOS | 2.019 | 3.163 | 3.141 | 3.415 | 3.763 | 4.15 |
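As a toy illustration only (an assumption, not the paper's evaluation code), agreement between an objective metric and the subjective MOS in the table above can be quantified with Spearman's rank-order correlation (SROCC). The sketch below uses the MOS row and the proposed metric's row; the paper's own correlation analysis may be computed differently (e.g., per image rather than per method).

```python
# Toy illustration: rank agreement between an objective metric and MOS.
# The values are copied from the super-resolution table above.
from scipy.stats import spearmanr

mos = [2.019, 3.163, 3.141, 3.415, 3.763, 4.15]        # subjective scores
rdie = [41.53, 33.73, 32.89, 32.59, 26.33, 23.14]      # objective scores (lower = better)

srocc, _ = spearmanr(rdie, mos)
print(f"SROCC = {srocc:.4f}")  # about -0.94; the negative sign means lower RDIE tracks higher MOS
```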
Method | DNCNN | FFDNet | IRCNN | IPT | LIGN |
---|---|---|---|---|---|
PSNR | 27.98 | 27.97 | 27.88 | 29.39 | 28.38 |
SSIM [1] | 0.7916 | 0.7887 | 0.7898 | 0.8090 | 0.8066 |
MS-SSIM [9] | 0.9334 | 0.9334 | 0.9312 | 0.9398 | 0.9406 |
CONTRIQUE [36] | 0.1636 | 0.1956 | 0.1545 | 0.2081 | 0.1836
RDIE (ours) | 25.96 | 28.41 | 25.57 | 25.92 | 23.93
MOS | 3.361 | 3.142 | 3.381 | 3.397 | 3.667 |
Method | Traditional Method | PSNR-Oriented Method | GAN-Based Method | All Image Recovery Methods
---|---|---|---|---|
PSNR | 0.4782 | 0.5462 | 0.2839 | 0.4099 |
NQM [37] | 0.5374 | 0.6462 | 0.3410 | 0.4742
UQI [38] | 0.6087 | 0.7060 | 0.3385 | 0.5257
SSIM [1] | 0.5856 | 0.6897 | 0.3388 | 0.5209
MS-SSIM [9] | 0.6527 | 0.7528 | 0.3823 | 0.5596
IFC [16] | 0.7062 | 0.8244 | 0.3217 | 0.5651
VIF [17] | 0.6927 | 0.7864 | 0.3857 | 0.5917
VSNR [39] | 0.6146 | 0.7076 | 0.3128 | 0.5086
RFSIM [40] | 0.4593 | 0.5525 | 0.2951 | 0.4232
GSM [10] | 0.6074 | 0.6904 | 0.3523 | 0.5361
SR-SIM [41] | 0.6561 | 0.7476 | 0.4631 | 0.6094
FSIM [20] | 0.6515 | 0.7381 | 0.4090 | 0.5896
FSIMc [20] | 0.6509 | 0.7374 | 0.4058 | 0.5872
VSI [42] | 0.6086 | 0.6938 | 0.3706 | 0.5475
Bae and Kim [43] | 0.6720 | 0.7575 | 0.3494 | 0.5424
RDIE (ours) | 0.6476 | 0.7203 | 0.5280 | 0.6368