
(IJACSA) International Journal of Advanced Computer Science and Applications,

Vol. 15, No. 2, 2024

Deep Neural Network-based Methods for Brain Image De-noising: A Short Comparison

Keyan Rahimi 1, Noorbakhsh Amiri Golilarz 2
1 Department of Computer Science, Brown University, Providence, RI 02912, USA
2 Department of Computer Science and Engineering, Mississippi State University, Mississippi State, MS 39762, USA

Abstract—Various types of noise may affect the visual quality of images during capturing and transmission. Finding a proper technique to remove the noise and improve both quantitative and qualitative results is one of the most important and challenging pre-processing tasks in image and signal processing. In this paper, we present a short comparison between two well-known families of approaches for image de-noising: thresholding neural network (TNN) and deep neural network (DNN) based methods. The de-noising results of TNNs, DnCNNs, Flashlight CNN (FLCNN) and the Diamond de-noising network (DmDN) are compared with each other. Several experiments were performed in terms of Peak Signal-to-Noise Ratio (PSNR) to validate the performance of the various de-noising methods. The analysis indicates that DmDNs perform better than the other learning-based algorithms for de-noising brain MR images: DmDN achieved PSNR values of 29.85 dB, 30.74 dB, 29.15 dB, and 29.45 dB for de-noising MR Image 1, MR Image 2, MR Image 3 and MR Image 4, respectively, at a noise standard deviation of 15.

Keywords—CNN; deep neural network; de-noising; MR image; PSNR

I. INTRODUCTION

Noise consists of unwanted signal components that cause imperfections and low resolution in image and signal processing, and may be introduced during the receiving and transmitting processes. Further image analysis and processing may not be possible until the noise is discarded or reduced; in image de-noising, the main goal is enhancing visual quality. Various methods are available in the literature for removing noise from images.

Donoho and Johnstone proposed ideal spatial adaptation [1] and adapting to unknown smoothness [2] using wavelet shrinkage for de-noising in 1994 and 1995, respectively. These techniques became the foundation for later gradient-descent learning based methods. Zhang took a step forward in de-noising by proposing a learning-based method that improves on the conventional approaches [3]: a thresholding neural network using improved, non-linear hard and soft threshold functions. Sahraeian et al. proposed an improved TNN with cycle spinning for image de-noising [4]. Nasri and Nezamabadi-pour tried to improve on Zhang's results by proposing another data-driven function with three shape-tuning parameters [5]. To further enhance TNN based methods, the authors in [6] replaced the gradient descent algorithm with an optimization-based technique, which provided better accuracy with less computation and data for de-noising medical images.

Although these results were satisfactory, researchers did not want to stop at this stage and aimed to go beyond conventional gaussian denoisers. Convolutional neural networks are widely used in image processing due to their excellent performance in producing high-quality output images. Jain and Seung combined a CNN with unsupervised learning for natural image de-noising [7]. Vincent et al. developed a new training principle for unsupervised learning that became one of the basic deep learning techniques for noise removal [8]. Deep convolutional neural networks, however, are hard to train at greater depths; to address this problem, Mao et al. proposed symmetric skip connections combined with auto-encoders [9]. Zhang et al. [10] proposed the DnCNN method, which rests on two main components: residual learning and batch normalization. Deeper networks also suffer from gradient dispersion, and residual learning has been utilized in DnCNNs to tackle this issue [11]. Deep neural network based methods suffer from further issues: one is diminishing feature reuse, and another is that simply increasing the number of parameters and layers brings no benefit [12]. To address these issues, Binh et al. developed the Flashlight CNN, based on deep residual and inception networks, which is able to hold many parameters [12]. Additionally, J. Zhang et al. [11] developed a diamond denoiser to deal with the loss of the network's gradient caused by deeper networks.

A self-supervised method for fluorescence image de-noising has been proposed by Huang et al. [16]. In this approach, the authors utilized Wiener filtering and wavelet transformation, two classic de-noising techniques, as well as DeepCAD, to perform comparative experiments [16]. In another study, conducted by Yang et al. [17], an efficient auto-encoder technique using convolutional neural networks was developed to perform both classification and de-noising.

Content-noise complementary learning has been presented in [18] to de-noise medical images; in that study, MR, CT, and PET images were utilized to validate the performance of various de-noising methods. Structural-priors-based deep MRI super-resolution has been developed in a study by Cherukuri et al. [19], using low-rank structure and sharpness priors to enhance the visual quality of images. Convolutional de-noising autoencoders for discarding noise from MR images have also been proposed in [20].


In this paper, we present a brief survey of several state-of-the-art de-noising approaches and analyze their results for brain MR image de-noising. Thresholding neural networks, DnCNNs, Flashlight CNNs, and Diamond de-noising networks are taken into account. The results indicate that deep neural network based methods are superior to TNN based techniques; among the deep neural network approaches, the Diamond de-noising network (DmDN) performs best, followed closely by FLCNN and DnCNN.

The rest of the paper is organized as follows: Section II covers CNN based image de-noising, with a brief discussion of CNNs and of how CNN based de-noising is performed. In Section III, we discuss image de-noising using thresholding neural networks. In Section IV, we discuss several deep neural network methods. Section V presents results and discussion. Finally, Section VI concludes the paper.

II. DE-NOISING USING CNN

In contrast to more traditional methods, convolutional neural networks can be used to great effect for de-noising images. CNNs have been the neural network of choice in the field of image processing due to their high effectiveness, and they can also be used for de-noising, relying on their convolutional layers. There are many different deep learning methods; the ones we discuss in this paper are feed-forward convolutional de-noising networks (DnCNN) and Flashlight CNNs (FLCNN).

In order to de-noise an image, a CNN traditionally requires a large training sample and learns from input-output pairs: noisy scans, each paired with its clean variant. The network learns kernels through its convolutional layers, small weight arrays that detect patterns over the input image. The convolutional layers create a hierarchical representation of the input and use this separation to learn to differentiate between the noise and the features of an image. Non-linear activation functions are then applied to add complexity, and the network's outputs are compared to the actual clean image through a loss function, from which the weights are adjusted. After many iterations, the network is tested on new images to which Gaussian white noise has been added, tasking the CNN with de-noising them [22].

One of the methods we discuss, however, uses a deep feed-forward network that can learn with smaller sample sizes and uses residual learning: it trains on images that already contain noise and learns the noise itself, working together with batch normalization to increase its accuracy [23]. The Flashlight CNN uses a very similar strategy while also employing inception layers that help the network better handle Gaussian white noise. Fig. 1 shows the main procedure of de-noising using learning based approaches; the images were obtained from [21].

[Figure: brain MR images + noise → input noisy image → deep neural net algorithms → output de-noised image]
Fig. 1. The procedure of deep learning-based de-noising.

III. TNN BASED METHODS

Standard hard and soft thresholding functions were first proposed in [3] and became the basis and foundation of subsequent thresholding based de-noising. Since the results obtained with the standard functions were not satisfactory, researchers in the fields of image and signal processing attempted to enhance them, adding parameters to make them non-linear and differentiable so that they could be used in a network called a "thresholding neural network". These enhanced versions of the standard thresholds, called "improved thresholding functions", were first introduced by Zhang [3]. The equations below give the improved soft and improved hard threshold functions:

    L_{soft}(u,\lambda) = u + \frac{1}{2}\left(\sqrt{(u-\lambda)^{2}+l} - \sqrt{(u+\lambda)^{2}+l}\right)    (1)

where L_{soft}(u,\lambda) denotes the non-linear soft threshold, u is the WT coefficient, \lambda is the threshold value, and l > 0 is a user-defined function parameter [3].

    L_{hard}(u,\lambda) = \left(\frac{1}{1+e^{(\lambda-u)/\tau}} - \frac{1}{1+e^{(-\lambda-u)/\tau}} + 1\right)u    (2)

where L_{hard}(u,\lambda) denotes the non-linear hard threshold, u is the WT coefficient, \lambda is the threshold value, and \tau > 0 is a fixed, user-defined function parameter [3].

Although these functions have been used for image de-noising in various studies, the results have not been entirely satisfactory, leaving room for improvement. Thus another non-linear, differentiable threshold function, inspired by Zhang's improved hard threshold function, was presented by Sahraeian [4], as shown in Eq. (3):

    L_{S}(u,\lambda) = \begin{cases} m\,(e^{n|u|}-1)\,\mathrm{sgn}(u), & |u| \le \lambda \\ (|u| + h\,e^{-n|u|})\,\mathrm{sgn}(u), & |u| > \lambda \end{cases}    (3)

where L_{S}(u,\lambda) is Sahraeian's non-linear threshold and n controls the function's shape, i.e. the degree of the thresholding effect. The parameters m and h preserve the continuity and the derivative of the function at \lambda [4].
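As a concrete illustration, Eqs. (1)-(3) can be implemented in a few lines of NumPy. This is a sketch based on the reconstructed equations above, not code from [3]-[5]; the default values of l, tau, n, m, and h are illustrative placeholders, and in practice m and h would be solved from the continuity conditions at the threshold.

```python
import numpy as np

def improved_soft(u, lam, l=0.01):
    # Eq. (1): smooth, differentiable surrogate of soft thresholding.
    return u + 0.5 * (np.sqrt((u - lam) ** 2 + l) - np.sqrt((u + lam) ** 2 + l))

def improved_hard(u, lam, tau=0.05):
    # Eq. (2): two shifted sigmoids pass coefficients with |u| > lam
    # almost unchanged and smoothly suppress those with |u| < lam.
    return (1.0 / (1.0 + np.exp((lam - u) / tau))
            - 1.0 / (1.0 + np.exp((-lam - u) / tau)) + 1.0) * u

def sahraeian(u, lam, n=2.0, m=1.0, h=0.0):
    # Eq. (3): exponential inner piece below the threshold, near-identity
    # outer piece above it; m and h are assumed pre-chosen so the two
    # pieces meet with matching value and slope at |u| = lam.
    a = np.abs(u)
    inner = m * (np.exp(n * a) - 1.0) * np.sign(u)
    outer = (a + h * np.exp(-n * a)) * np.sign(u)
    return np.where(a <= lam, inner, outer)

# Example: shrink a small set of wavelet coefficients with threshold 0.5
coeffs = np.array([-1.2, -0.3, 0.1, 0.4, 0.9])
print(improved_soft(coeffs, lam=0.5))
print(improved_hard(coeffs, lam=0.5))
```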


The researchers did not want to stop here, and they moved forward to present a function with more flexibility and capability. Thereafter, Nasri and Nezamabadi-pour [5] presented another non-linear function, with three shape-tuning parameters, formulated as:

    \eta(u,\lambda,i,j,g) = \begin{cases} u - 0.5\,\frac{g\,\lambda^{i}}{u^{i-1}} + (g-1)\lambda, & u \ge \lambda \\ 0.5\,\frac{g\,|u|^{j}}{\lambda^{j-1}}\,\mathrm{sgn}(u), & |u| < \lambda \\ u - 0.5\,\frac{g\,(-\lambda)^{i}}{u^{i-1}} - (g-1)\lambda, & u \le -\lambda \end{cases}    (4)

where \lambda is the threshold value, u denotes the WT coefficient, i and j control the function's shape, and g determines the asymptote of the function [5]. For further details about the structure of TNN and WT based de-noising, please refer to [3].

IV. DEEP LEARNING BASED METHODS

A. DnCNN

Nowadays, due to the availability of large-scale datasets and progress in deep learning algorithms, CNN approaches attract a great deal of attention in imaging technologies [10]. The construction of feed-forward convolutional de-noising networks (DnCNNs) has become the basis for de-noising with deep learning [10]. In this structure, batch normalization and residual learning are used to improve computational time and enhance the quality of the de-noised image, making the approach one of the more efficient and effective gaussian denoisers. Conventional deep NNs estimate the clean image directly, whereas a DnCNN removes the clean image from its input by adopting the residual learning strategy, i.e. it predicts the noise [10]. Training a single DnCNN as a blind gaussian denoiser gives better results than alternative methods. As mentioned earlier, residual learning and batch normalization are both used in this structure; residual learning in particular addresses performance degradation issues [14].

The developed DnCNN uses only one residual unit to predict the residual image [10]. Residual mapping is easier to learn than the original, unreferenced mapping, so deep CNN models can be trained more easily [14][10]. On the other hand, although training based on stochastic gradient descent (SGD) is effective and simple, internal covariate shift can greatly reduce training efficiency [15][10]; alleviating covariate shift is a challenging task in deep CNN models and is the reason batch normalization is used in these networks [15][10]. The combination of residual learning and batch normalization provides stable, fast training and better qualitative and quantitative results [10]. The main structure of the DnCNN model is depicted in Fig. 2.

As can be seen, the network's input is a noisy image corrupted by gaussian noise. Instead of learning a mapping function to the clean image, the network adopts residual learning and is trained on the residual mapping [10]. Additionally, in the proposed network of depth D there are three types of layers [10]:

• Conv+ReLU in the first layer, with 64 filters of size 3×3×c, where c is the number of channels; ReLU provides the non-linearity.

• Conv+BN+ReLU in layers 2 to D−1, with 64 filters of size 3×3×64; batch normalization (BN) is applied in these layers as well.

• Conv in the very last layer, with c filters of size 3×3×64, to reconstruct the output image.

Fig. 2. The structure of DnCNN [10].
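The layer recipe above translates directly into a compact PyTorch module. The following is an illustrative sketch of the described structure rather than the reference implementation of [10]; the depth of 17 is an assumed, commonly used setting, and the training loop is omitted.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """DnCNN-style denoiser: Conv+ReLU, (D-2) x Conv+BN+ReLU, final Conv.
    The network predicts the residual (noise) image, which is subtracted
    from the noisy input (residual learning)."""
    def __init__(self, depth=17, channels=1, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]                      # layer 1
        for _ in range(depth - 2):                            # layers 2 .. D-1
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))  # layer D
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.body(noisy)   # estimated noise map
        return noisy - residual       # clean estimate = noisy - noise

# Usage: de-noise a batch of single-channel patches
model = DnCNN(depth=17, channels=1)
noisy = torch.randn(4, 1, 64, 64)     # stand-in for noisy MR patches
denoised = model(noisy)
```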


B. Flashlight CNN (FLCNN)

Flashlight CNNs are another type of convolutional neural network implementing deep NNs for noise removal. The main structure of this method is a combination of deep residual and inception networks [12]. Utilizing inception layers makes it possible to overcome diminishing feature reuse while tackling additive white gaussian noise. As shown in Fig. 3, this network consists of two main phases [12]:

• The warm-up phase, which utilizes conventional convolutional layers. There are two main stages in this phase: the first employs 3×3 kernels with 64 features and the second employs 5×5 kernels with 64 features.

• The boost phase, which utilizes wider (residual) inception layers, growing the number of network parameters while overcoming diminishing feature reuse.

Fig. 3. The architecture of FLCNN with noisy input y and estimate x [12].

C. Diamond De-noising Network (DmDN)

Images' detail and important characteristics may be diminished by excessive scaling [11], and as a convolutional network grows deeper, it becomes easy to lose the network's gradient. To address these issues, Diamond-Shaped (DS) multi-scale feature extraction is utilized in this network to extract the information carried by the images' features [11]. This fixed-scale-based network is called the Diamond De-noising Network (DmDN) [11]. The network contains three main parts [11]:

• Feature extraction from the input noisy images.

• Feature extraction at multiple scales.

• Clean image reconstruction, i.e. the output image.

V. RESULTS AND DISCUSSIONS

In this part, we performed two experiments to validate the efficiency of the various de-noising methods. Note that the images have been contaminated by additive white gaussian noise (AWGN) with zero mean and different standard deviations. For the TNNs we used "sym4" with one decomposition level. The training parameters are available in [11] and are the same as in the original works used in this study. Axial DWI brain imaging obtained from [13] is used in the experimental part; we used four single images from various moments of the original data (see Fig. 4). The de-noising results in terms of PSNR values for various standard deviations are shown in Table I. As the table shows, DmDN performs better than the other de-noising approaches, achieving the highest PSNR values, and the results indicate that deep learning-based techniques outperform TNN models for de-noising MR images.

Fig. 4. Four test single images: MR Image 1, MR Image 2, MR Image 3, MR Image 4 [13].

TABLE I. DE-NOISING COMPARISON OF VARIOUS LEARNING APPROACHES IN TERMS OF PSNR VALUES (dB)

MR Images     sigma   TNN-Zhang   TNN-Nasri   DnCNN   FLCNN   DmDN
MR Image 1     15       24.02       24.23     29.72   29.83   29.85
               25       21.98       22.14     27.23   27.36   27.47
               50       19.45       19.54     24.25   24.47   24.46
MR Image 2     15       24.65       25.16     30.61   30.72   30.74
               25       22.75       23.11     28.42   28.59   28.62
               50       19.86       20.10     25.44   25.61   25.64
MR Image 3     15       23.78       24.10     29.01   29.11   29.15
               25       21.83       22.05     26.84   27.01   27.06
               50       19.53       19.78     24.03   24.24   24.29
MR Image 4     15       24.01       24.31     29.33   29.41   29.45
               25       21.45       22.14     27.01   27.14   27.17
               50       19.20       19.84     23.84   24.02   24.09

In the next experiment we utilized another dataset, obtained from Kaggle [21], to compare the performance of the various deep neural network based approaches quantitatively. Some of these images are depicted in Fig. 5. In this experiment, as can be seen from Fig. 6, we compared DmDN, FLCNN, DnCNN, TNN-Nasri and TNN-Zhang over several standard deviations. The results indicate that the first three (deep learning) methods perform well in de-noising brain MR images, and that among them DmDN outperforms the others. Although these methods perform well on MR images, they may not work as well on other types of data, such as hyper-spectral remote sensing images and standard test images, or under other types of noise and perturbation.

Fig. 5. Some of the brain MR images used in the experimental part [21].
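For reference, the evaluation protocol used in both experiments (corrupt a clean image with zero-mean AWGN at a given standard deviation, de-noise it, and report PSNR) can be sketched as follows. The 8-bit peak value of 255 and the random stand-in image are assumptions for illustration only.

```python
import numpy as np

def add_awgn(image, sigma, seed=0):
    # Contaminate an image (assumed 0-255 range) with zero-mean additive
    # white Gaussian noise of standard deviation sigma, as in Table I.
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma, image.shape)

def psnr(clean, estimate, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((clean.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example for sigma = 15: PSNR of the noisy input before any de-noising
clean = np.random.rand(64, 64) * 255.0   # stand-in for an MR slice
noisy = add_awgn(clean, sigma=15)
print(f"PSNR(noisy) = {psnr(clean, noisy):.2f} dB")
```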


Fig. 6. Comparison of the performance of the various learning algorithms for different standard deviations.

VI. CONCLUSION

Images may be influenced by many types of noise, leading to a decrease in their visual quality, and finding a suitable de-noising method to discard this noise has always been a challenging task for researchers in the fields of signal and image analysis. This work provides a survey and comparison of several learning based de-noising methods, namely TNNs, DnCNNs, Flashlight CNNs (FLCNN) and Diamond de-noising networks (DmDN), in terms of PSNR values. The quantitative results indicate that DmDN is a promising method for brain MRI de-noising, as it achieved the highest PSNR values for de-noising MR Images 1-4 at a standard deviation of 15. In this study we used AWGN, and we observed that increasing the noise level decreases the PSNR. For future work, we will analyze the performance of state-of-the-art methods in the presence of various other types of noise.

REFERENCES
[1] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
[2] D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," J. Amer. Statist. Assoc., vol. 90, no. 432, pp. 1200–1224, 1995.
[3] X.-P. Zhang, "Thresholding neural network for adaptive noise reduction," IEEE Trans. Neural Netw., vol. 12, no. 3, pp. 567–584, May 2001.
[4] S. M. E. Sahraeian, F. Marvasti, and N. Sadati, "Wavelet image denoising based on improved thresholding neural network and cycle spinning," in Proc. ICASSP, Honolulu, HI, USA, 2007, pp. 585–588.
[5] M. Nasri and H. Nezamabadi-Pour, "Image denoising in the wavelet domain using a new adaptive thresholding function," Neurocomputing, vol. 72, no. 4, pp. 1012–1025, 2009.
[6] A. K. Bhandari, D. Kumar, A. Kumar, and G. K. Singh, "Optimal subband adaptive thresholding based edge preserved satellite image denoising using adaptive differential evolution algorithm," Neurocomputing, vol. 174, pp. 698–721, Jan. 2016.
[7] V. Jain and H. S. Seung, "Natural image denoising with convolutional networks," in Proc. Neural Information Processing Systems, Curran Associates Inc., 2008, pp. 769–776.
[8] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proc. Int. Conf. Mach. Learn., 2008, pp. 1096–1103.
[9] X.-J. Mao, C.-H. Shen, and Y.-B. Yang, "Image restoration using convolutional auto-encoders with symmetric skip connection," in Proc. 30th Int. Conf. on Neural Information Processing Systems, 2016, pp. 2810–2818.
[10] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising," IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, July 2017.
[11] J. Zhang, L. Sang, Z. Wan, Y. Wang, and Y. Li, "Deep convolutional neural network based on multi-scale feature extraction for image denoising," in 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), 2020, pp. 213–216, doi: 10.1109/VCIP49819.2020.9301843.
[12] P. H. Thanh Binh, C. Cruz, and K. Egiazarian, "Flashlight CNN image denoising," in 2020 28th European Signal Processing Conference (EUSIPCO), 2021, pp. 670–674.
[13] I. Bickle, "Normal MRI brain: Adult," Radiopaedia.org, https://radiopaedia.org/cases/normal-mri-brain-adult?lang=us.
[14] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 770–778.
[15] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proc. Int. Conf. Mach. Learn., 2015, pp. 448–456.
[16] H. Huang, Y. Liu, and Y. Li, "Fluorescence image denoising based on self-supervised deep learning," in 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi'an, China, 2022, pp. 86–90, doi: 10.1109/ICSP54964.2022.9778765.
[17] H. Yang, C. Chen, W. Lin, and Y. Yi, "A new CNN-based joint network for brain tumor denoising and classification," in 2022 2nd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), Nanjing, China, 2022, pp. 506–510, doi: 10.1109/CEI57409.2022.9950184.
[18] M. Geng et al., "Content-noise complementary learning for medical image denoising," IEEE Transactions on Medical Imaging, vol. 41, no. 2, pp. 407–419, Feb. 2022, doi: 10.1109/TMI.2021.3113365.
[19] V. Cherukuri, T. Guo, S. J. Schiff, and V. Monga, "Deep MR image super-resolution using structural priors," in 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 2018, pp. 410–414, doi: 10.1109/ICIP.2018.8451496.
[20] A. Thomas, D. K. K R, D. Babu, and A. P. E, "Denoising autoencoder for the removal of noise in brain MR images," in 2023 International Conference on Control, Communication and Computing (ICCC), Thiruvananthapuram, India, 2023, pp. 1–5, doi: 10.1109/ICCC57789.2023.10165274.
[21] Kaggle. Available: https://www.kaggle.com/datasets
[22] S. Ghose, N. Singh, and P. Singh, "Image denoising using deep learning: Convolutional neural network," in 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 2020, pp. 511–517, doi: 10.1109/Confluence47617.2020.9057895.
[23] W. Jifara, F. Jiang, S. Rho, M. Cheng, and S. Liu, "Medical image denoising using convolutional neural network: a residual learning approach," The Journal of Supercomputing, vol. 75, no. 2, pp. 704–718, doi: 10.1007/s11227-017-2080-0.
