Brain Tumor Segmentation Using Double Density Dual Tree Complex Wavelet Transform Combined With Convolutional Neural Network and Genetic Algorithm
Ridha Sefina Samosir1,2, Edi Abdurachman2, Ford Lumban Gaol2, Boy Subirosa Sabarguna2
1Information System Study Program, Faculty of Computer Science and Design, Kalbis Institute, Jakarta, Indonesia
2Doctor of Computer Science Graduate Program, Faculty of Computer Science, Bina Nusantara University, Jakarta, Indonesia
Corresponding Author:
Ridha Sefina Samosir
Information System, Faculty of Computer Science and Design, Kalbis Institute
East Jakarta, Indonesia
Email: [email protected]
1. INTRODUCTION
Magnetic resonance imaging (MRI) is a radiological scanning technique that uses magnets, radio
waves, and computers to produce images of body structures [1]. Various methods can be used for imaging a
medical object, such as angiogram, brain scan, computerized tomography (CT)-scan, diffusion tensor
imaging, functional magnetic resonance imaging (fMRI) [2], MRI, magnetic resonance spectroscopy (MRS),
positron emission tomography (PET), and biopsy [3]. Some literature states that MRI is the best examination
tool because of its relatively low radiation hazard [4] and its high accuracy. In an MRI examination, the patient
is placed on a bed and inserted into a magnetic tube. A strong magnetic field will be formed and align the
protons of the hydrogen atoms, which are then exposed to radio waves. A receiver on the MRI machine detects the
resulting signal, and a computer processes the received information to produce an image. MRI images are detailed
and of high resolution, and they can reveal small changes in body structures. In
some procedures, a contrast material, such as gadolinium, is used to improve image accuracy. In addition,
MRI can demonstrate the presence of abnormal tissue with high resolution and good contrast [5].
Furthermore, MRI has a higher sensitivity for detecting the presence of a tumor or a change in its size. A
tumor arises from abnormal changes in body tissue cells that turn into tumor cells. In a brain tumor, the
excessive growth of unneeded cells increases tissue volume, damages surrounding cells, and interferes with the
function of the affected part of the brain. Brain tumors can cause complications, including cerebral edema,
hydrocephalus, brain herniation, epilepsy, and metastases to other areas of the brain, such as the temporal,
frontal, and parietal lobes [6]. For brain anatomy, MRI
uses strong magnetic fields, radio waves, and computers to produce more distinct and detailed images of the
brain and skull structures than other methods.
Segmentation is one of the methods in digital image processing besides image compression,
restoration, analysis, and others. The segmentation process of brain tumor MRI images aims to divide the
area of the MRI image based on the features (color, texture, density, contrast, and other features) [7]. The
accuracy of the segmentation results can be used for medical analysis processes such as tumor detection
based on the areas formed. The image segmentation results can also be in the form of an area called a region
of interest (RoI). Several quality problems in MRI images of the brain and other organs affect the
segmentation process used to find the RoI. One of them is poorly defined boundaries, which lead to intensity
inhomogeneity or differing intensity ranges and make it difficult to distinguish between pixels in the image [8].
The pixel intensity in the tumor area tends to overlap with that of pixels in adjacent normal tissue.
MRI images are complex because the structure and morphology of the human brain are very complex.
Brain MRI images tend to have low contrast [9], [10], and manual segmentation takes a relatively long time.
Manual segmentation also requires very high accuracy so that no pixel is missed, since each pixel contains
important information; moreover, in some cases different MRI images show distinct complexity even though
they represent the same tumor type. In addition, an MRI examination produces many images because they are
captured from various angles (axial, coronal, and sagittal). This can trigger differences of opinion among
experts and lengthen the time needed to read the entire set of images.
Based on these considerations, this study proposes a segmentation method for brain MRI images.
Segmentation of brain MRI images can serve as a crucial early stage that supports early detection and
increases the accuracy of, and facilitates [11], the process of diagnosing brain tumors. Ultimately, it helps
identify the most appropriate type of brain-related treatment.
The development of the brain MRI image segmentation method begins with low-level techniques,
such as thresholding [12] and region growing. The thresholding method groups pixels into objects by
comparing their intensities against one or more threshold values. However, this approach cannot accommodate
all the pixels of an MRI image. Another option is the unsupervised learning approach, such as
clustering [13] and segmentation [14]. The latest segmentation technique used is the deep learning
technique [15]. Deep learning is the development of artificial neural networks in machine learning.
Therefore, it can also be called a deep neural network. Deep learning performs the classification of pixels
during the segmentation process [16]. The variant of the deep learning technique that has been used is the
convolutional neural network (CNN), which accommodates the formation of a learning layer (network) from
a set of data automatically. This method classifies each image patch into several classes, such as normal
tissue, necrosis, edema, and other classes. This classification aims to label the center point of the voxels [17].
Hybrid methods have also been developed, such as a wavelet method combined with machine learning [18] and a
combination of a fully convolutional neural network (FCNN) and conditional random fields (CRF) [19].
In line with the development of brain MRI image segmentation methods, combining two or more
methods (a hybrid method) has been proposed to segment images. This study proposes a hybrid method that
combines the double density dual-tree complex wavelet transform (DDDTCWT) and CNN with an additional
genetic algorithm (GA), chosen on the basis of their capabilities and mechanisms.
The combination of these methods helps each algorithm involved to perform more optimally. The
DDDTCWT algorithm combines the double density wavelet transform and the dual-tree wavelet
transform [20]; it differs from these two earlier methods mainly in the number of directions (frequencies), so
its results are more numerous and detailed. The wavelet transform involves n levels of decomposition [21] to
extract the features contained in the image using filtering functions. The more features that can be extracted,
the easier the subsequent analysis, including segmentation, becomes. In this research, DDDTCWT was used to
transform the input images in detail. Each level of decomposition with DDDTCWT involves six filtering
functions, representing low pass and high pass filtering. This detailed decomposition identifies more feature
information in the image and improves CNN's recognition and learning. DDDTCWT also addresses shift
invariance and directional selectivity: shift invariance matters when the image is shifted, while directional
selectivity means that more information is extracted along the direction/orientation of pixel extraction at each
level of decomposition.
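To illustrate the general idea of n-level wavelet decomposition for feature extraction, the following is a minimal sketch using the generic 2-D DWT functions of MATLAB's Wavelet Toolbox rather than DDDTCWT, purely as a simplified analogue; the input file, the 'db4' wavelet, and the feature handling are assumptions for illustration only.
% Generic n-level 2-D wavelet decomposition, as a simplified analogue of the
% DDDTCWT feature extraction; each level yields approximation and detail
% (horizontal, vertical, diagonal) coefficients that can serve as features.
X = im2double(imread('mri_slice.png'));      % hypothetical grayscale slice
n = 4;                                       % number of decomposition levels, as in this study
[C, S] = wavedec2(X, n, 'db4');              % 'db4' is an assumed wavelet
[cH1, cV1, cD1] = detcoef2('all', C, S, 1);  % level-1 detail coefficients
A4 = appcoef2(C, S, 'db4', n);               % level-n approximation coefficients
DDDTCWT follows the same multi-level principle but, as noted above, extracts six directional subbands per level instead of the three produced by the standard DWT.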
The CNN algorithm can recognize an image through a learning (training) process on a given set of
objects. CNN studies all the features in the image through a series of processes, including convolution,
activation, pooling, and the formation of a fully connected network (FCN) [22]. An additional processing
option during the formation of the FCN is the optimization of weights and biases using a GA [23]. Here, we
only optimized the first layer of the CNN with the GA and observed its performance. A GA mimics natural
selection, producing offspring through operations such as crossover and mutation. In many cases, this
approach has proven able to solve optimization problems [24]. The GA therefore optimizes the weights and
biases of the first CNN layer so that the identification process becomes easier and faster. In addition, the GA
updates the parameter values during training without repeated iterations. The principle of the GA is to select
the genes with the best fitness values so that the weight and bias values carried into each optimization step
(crossover and mutation) are optimal.
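As a minimal, illustrative sketch of how GA-based optimization of the first-layer weights and biases could be wired up in MATLAB using the Global Optimization Toolbox function ga, consider the following; the fitness helper, layer shape, and GA options are assumptions, not the study's exact settings.
% Illustrative sketch only: GA optimization of first-layer weights and biases.
% evaluateFirstLayer is a hypothetical helper that builds the CNN with the
% candidate weights, evaluates it on validation data, and returns a loss.
filterSize = [3 3]; numFilters = 8;                        % assumed first-layer shape
nWeights = prod(filterSize) * numFilters + numFilters;     % weights plus biases
fitness  = @(w) evaluateFirstLayer(w, filterSize, numFilters);   % lower is better
opts = optimoptions('ga', 'PopulationSize', 30, 'MaxGenerations', 50, ...
                    'CrossoverFraction', 0.8);
[bestW, bestLoss] = ga(fitness, nWeights, [], [], [], [], [], [], [], opts);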
2. LITERATURE REVIEW
Several studies have been conducted regarding various proposed algorithms for image segmentation.
These algorithms use diverse approaches, such as thresholding, classification, clustering, and hybrid methods.
Each application of an algorithm gives different accuracy results, in the range of 80-99%. Image segmentation
methods have seen several updates, from conventional to hybrid methods. Threshold and region-based
approaches are among the conventional ones. The Gaussian mixture model algorithm for segmenting brain MRI
images is an example of applying a region-based approach [25]. Another study, in 2016, used the threshold
concept in a magnitude-based multilevel thresholding method [12]. Moreover, some studies use supervised and
unsupervised learning for brain MRI image segmentation, such as the adaptive K-means clustering method and
the support vector machine (SVM) method [26]. An unsupervised learning method that combines fuzzy c-means
with k-means clustering has also been studied in India [27]. From 2018 to 2020, several researchers combined
one or more techniques to help identify the presence of brain tumors, such as combining a discrete wavelet
transform with a probabilistic neural network [28] and fully convolutional neural networks (FCNNs) with
CRFs [29]. Table 1 shows several previous studies with different methods.
3. METHOD
3.1. Research framework
This section contains an explanation of the research framework, the data used, the proposed image
segmentation method, and the performance evaluation technique of the proposed method. This section also
describes the input parameters of each algorithm, the experimental scenarios, part of the MATLAB
pseudocode, and the proposed CNN architecture. This study has two problem formulations: (1) how to apply a
combination of three methods (DDDTCWT, CNN, and GA), that is, how to combine the workings and input
parameters of each method, to segment brain MRI images; and (2) how the proposed method performs when
segmenting the images and identifying the tumor grade or normal tissue. The performance was evaluated with
four indicators: Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity, and accuracy,
as shown in Figure 1. Thus, the purpose of this research is to develop
the architecture of the proposed hybrid method into software and evaluate its performance. This study used
several stages to achieve the goal, including literature reviews, conducting simulations/experiments on the
developed software, and observing the results of experiments/simulations.
3.2. Samples
The images used in this study were grouped into three, namely training images, testing images, and
ground truth images. The training images were used to study the features contained in all the training images
by performing feature extraction. The testing images were used to examine the image segmentation that
produces the RoI, and the ground truth images were a collection of testing images that had been segmented
previously by other methods and were used for comparison with the output images segmented by the proposed
method. This
ground truth image represents the normal image and the tumor image as shown in Figure 2. All MRI images
were obtained from several hospitals in Indonesia and the Kaggle website. In total, there were 1699 training
images (1397 MRI images of normal patients and 302 for brain tumor patients) and 913 test images (359
tumor MRI images and 554 normal MRI images). 913 images from Kaggle sources were used as ground truth
images against which to compare the output images segmented by the proposed method. The distribution of
training and test images was done randomly in MATLAB with a ratio of 70:30. This study did not specifically
use an image-sharing approach, as the amount of available data was adequate. In addition to the training and
testing images, the ground truth images described above served as the reference for evaluation.
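A minimal sketch of how such a random 70:30 split could be produced in MATLAB follows; the file-list variable and the fixed random seed are illustrative assumptions, not the study's exact code.
% Illustrative sketch: random 70:30 split of an image list using randperm.
% allFiles is a hypothetical cell array of image file names.
rng(1);                                      % fix the seed for reproducibility
nImages  = numel(allFiles);
order    = randperm(nImages);                % random permutation of indices
nTrain   = round(0.7 * nImages);
trainSet = allFiles(order(1:nTrain));        % ~70% for training
testSet  = allFiles(order(nTrain+1:end));    % ~30% for testing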
The same applies to the multi-layer convolution approach in deep learning. CNN is trained to
understand the details of an image better because it works on a multi-layer principle; thus, CNN can learn to
identify the different features or details of an image distinctively. The CNN algorithm carries out the learning
process on all the features of the training images obtained from the DDDTCWT extraction. After CNN has
learned the features contained in the training images, 60 FCN scenarios are formed. The input parameters for
the CNN algorithm were the convolution layer, activation function, pooling, and optimization function. The
proposed activation functions were the rectified linear unit (RELU) and hyperbolic tangent (TANH) functions,
as both are easy to compute and do not incur large computational cost, given that the amount of data used was
adequate. These activation functions were expected to reach a convergent state quickly. Table 2 lists the input
parameters used for the algorithms involved.
Next, the parameter values for the optimization function (optimization of weights and biases) were
applied with or without the GA, and max pooling was used as the default for the CNN algorithm. Max pooling
is a pooling option that has been widely used in medical image analysis and provides an average accuracy of
90-98%. Table 3 shows the four FCN scenarios from the training results, which yield 60 schemes.
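A minimal sketch of how one such scenario could be expressed with MATLAB Deep Learning Toolbox layers is shown below; the input size, filter count, and class count are illustrative assumptions, and Figure 3 gives the actual architecture used.
% Illustrative sketch: one CNN scenario with a RELU activation and max
% pooling; swap reluLayer for tanhLayer to obtain the TANH scenarios.
layers = [
    imageInputLayer([128 128 1])            % assumed input size for one transformed slice
    convolution2dLayer(3, 8, 'Padding', 'same')
    reluLayer                               % or tanhLayer for the TANH scenarios
    maxPooling2dLayer(2, 'Stride', 2)       % max pooling, as used by default
    fullyConnectedLayer(2)                  % assumed two classes: tumor vs. normal
    softmaxLayer
    classificationLayer];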
Before the formation of a neural network, the weight values of each input layer and hidden layer are
required. During the convolution process, the weight and bias values were optimized by applying the GA. The
GA helps obtain optimal weight and bias values across all iterations (epochs). Figure 3 is a CNN architecture
chart for the training process following the preceding DDDTCWT step. In general, DDDTCWT performs
decomposition, filtering, and re-decomposition (reconstruction of the image). The following is the pseudocode
for the DDDTCWT steps used in this study.
% Step 1: Decomposition - determine the decomposition level (1 ... 4)
% Step 2: Filtering - wavelet function call; selected subbands are zeroed
% decompositionLevel holds the level chosen for this run
switch decompositionLevel
    case 1
        level = 1;
        dtcplx = dddtree2(typetree, grayImage, level, filtername);
        dtcplx.cfs{2} = zeros(size(dtcplx.cfs{2}));
    % cases 2 and 3 follow the same pattern
    case 4
        level = 4;
        dtcplx = dddtree2(typetree, grayImage, level, filtername);
        dtcplx.cfs{1} = zeros(size(dtcplx.cfs{1}));
        dtcplx.cfs{2} = zeros(size(dtcplx.cfs{2}));
        dtcplx.cfs{3} = zeros(size(dtcplx.cfs{3}));
        dtcplx.cfs{5} = zeros(size(dtcplx.cfs{5}));
end
% Step 3: Re-decomposition (inverse transform reconstructs the image)
dtImage = idddtree2(dtcplx);
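For context, the following is a minimal sketch of how the variables used in this pseudocode (typetree, grayImage, filtername) might be initialized; the typetree value and the input file are assumptions for illustration, while the filter name corresponds to one of the filter functions compared in the results (DDDTF1, SELF1, SELF2).
% Illustrative initialization (assumed values, not the authors' exact code)
typetree   = 'cplxdddt';                % assumed complex double-density dual-tree type
filtername = 'dddtf1';                  % e.g., the DDDTF1 filter function
img = imread('patient_001.png');        % hypothetical input file
if size(img,3) == 3
    img = rgb2gray(img);                % convert RGB scans to grayscale
end
grayImage = im2double(img);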
PPV = TP / (TP + FP)  (2)
Accuracy = (TP + TN) / (TP + FP + FN + TN)  (4)
Where:
DSC = Dice Similarity Coefficient
PPV = Positive Predictive Value
Sensitivity = Sensitivity
Accuracy = Accuracy
TP = True Positive
TN = True Negative
FP = False Positive
FN = False Negative
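A minimal sketch of how these indicators could be computed in MATLAB from a binary ground truth mask and a binary segmentation result is given below; the mask variables are hypothetical, and DSC and sensitivity are written using their standard confusion-matrix definitions, since equations (1) and (3) appear elsewhere in the paper.
% Illustrative sketch: confusion-matrix counts and the four indicators for
% one image. gtMask and segMask are hypothetical logical matrices of equal
% size (true = tumor pixel).
TP = sum(gtMask(:)  &  segMask(:));
TN = sum(~gtMask(:) & ~segMask(:));
FP = sum(~gtMask(:) &  segMask(:));
FN = sum(gtMask(:)  & ~segMask(:));
DSC         = 2*TP / (2*TP + FP + FN);         % standard Dice definition
PPV         = TP / (TP + FP);                  % equation (2)
Sensitivity = TP / (TP + FN);                  % standard sensitivity definition
Accuracy    = (TP + TN) / (TP + FP + FN + TN); % equation (4)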
The next output comes from measuring the performance of the proposed method in segmenting the
testing images and detecting tumor or normal tissue. The measurement indicators used were DSC, PPV,
sensitivity, and accuracy. In the first experimental scenario, with an FCN formation scheme using the RELU
activation function without the GA, the highest DSC and accuracy values were obtained with the DDDTF1
wavelet function and decomposition level 4, at 98.7% and 97.65%, respectively. However, the highest
sensitivity value (98.39%) was obtained with SELF1 at decomposition level 5. The highest PPV value was
obtained when using the SELF1 filter function with decomposition level 4, at 98.39%, as shown in Table 4. Then, the
performance of the second scenario, the TANH activation function without GA, shows the highest DSC,
PPV, and accuracy values with the fourth decomposition level and wavelet function SELF1, while the
highest sensitivity value was in the DDDTF1 of the fifth decomposition level as shown in Table 5. Next, the
third scenario involves the RELU activation and the GA optimization function in the FCN formation. Table 6
shows that SELF1 with the fourth decomposition level had the highest DSC, PPV, and accuracy values,
while the DDDTF1 with the third decomposition level had the highest sensitivity value. The last scenario,
which involves the TANH activation function and the GA optimization, had two schemes with the highest
DSC, PPV, and accuracy values, namely DDDTF1 with decomposition level 4 and SELF1 with decomposition
level 2, while the highest accuracy value was obtained when using the DDDTF1 filter function and
decomposition level 3, as shown in Table 7.
Overall, only scenarios 3 and 4 have PPV values reaching 100%. This shows that integrating the
GA optimization function affects the PPV value, and the schemes without the GA show that the largest PPV
value is always at the fourth decomposition level. The highest DSC value also occurs in those scenarios, when
applying both the RELU and TANH activation functions with GA optimization. Likewise, the highest
sensitivity value falls in the schemes with the TANH activation function, with and without the GA. In detail,
the highest sensitivity value lies in DDDTF1 with the third decomposition level. Lastly, the best accuracy
values, 98.86%, lie in scenarios three and four, specifically in SELF1 with decomposition level 2 (scenario 3),
DDDTF1 with decomposition level 4 (scenario 3), and the DDDTF1 filter function with decomposition level 4
(scenario 4).
Based on these results, all measurement indicators of the proposed method had scores ranging
from 95% to 100%. This shows that the proposed hybrid method contributes better image segmentation than
previous studies. It also demonstrates the ability of each method used: DDDTCWT, as a variant of the wavelet
transform approach, extracts all the features contained in the training images, while CNN learns all the
features in the images distinctively and produces optimal weight and bias values when integrated with the GA.
Figure 4. Application of the proposed testing method with the MATLAB tool
5. CONCLUSION
The combination of DDDTCWT, CNN, and GA can be applied to segment brain MRI images.
Combining the mechanisms and capabilities of the three methods succeeded in segmenting brain MRI images
with excellent assessment indicators. All combinations of the proposed hybrid system in this study scored
more than 95% on all measured parameters (DSC, PPV, sensitivity, and accuracy) using 913 test images. The
top three combinations were (1) RELU with GA and the DDDTCWT SELF1 filter with decomposition level 4,
(2) TANH with GA and the DDDTCWT DDDTF1 filter with decomposition level 4, and (3) TANH with GA
and the DDDTCWT SELF1 filter with decomposition level 2. The smallest values were found for TANH with
GA and the DDDTCWT SELF2 filter with decomposition level 1. Adding the GA to the system improved the
measurements on average, although the time the system needs when applying the GA doubles. Further
improvement of this hybrid method with other techniques and further CNN training is required to make it
more widely applicable. In addition, this study suggests increasing the number of data sets, for both training
and testing. Furthermore, if the number of data sets obtained is large enough, cross-validation should be used
when distributing the images, given that cross-validation can improve the model's performance.
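A minimal sketch of how such k-fold cross-validation could be set up in MATLAB with the Statistics and Machine Learning Toolbox function cvpartition follows; the label vector and fold count are illustrative assumptions.
% Illustrative sketch: 5-fold cross-validation over the image labels.
% labels is a hypothetical categorical vector (tumor / normal), one per image.
k  = 5;
cv = cvpartition(labels, 'KFold', k);        % stratified folds by class label
for fold = 1:k
    trainIdx = training(cv, fold);           % logical index of training images
    testIdx  = test(cv, fold);               % logical index of test images
    % ... train the hybrid model on trainIdx and evaluate on testIdx ...
end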
ACKNOWLEDGEMENTS
The authors would like to thank the Kalbis Institute of Technology and Business for funding this
research. We would also like to thank Bina Nusantara University for providing the promoter and co-promoter
for this research.
REFERENCES
[1] L. Rundo, C. Militello, G. Russo, S. Vitabile, M. C. Gilardi, and G. Mauri, “GTVcut for neuro-radiosurgery treatment planning:
an MRI brain cancer seeded image segmentation method based on a cellular automata model,” Natural Computing, vol. 17, no. 3,
pp. 521–536, 2018, doi: 10.1007/s11047-017-9636-z.
[2] H. J. Lee et al., “Concurrent electrophysiological and hemodynamic measurements of evoked neural oscillations in human visual
cortex using sparsely interleaved fast fMRI and EEG,” NeuroImage, vol. 217, 2020, doi: 10.1016/j.neuroimage.2020.116910.
[3] A. Wadhwa, A. Bhardwaj, and V. Singh Verma, “A review on brain tumor segmentation of MRI images,” Magnetic Resonance
Imaging, vol. 61, pp. 247–259, 2019, doi: 10.1016/j.mri.2019.05.043.
[4] T. Rajesh, R. S. M. Malar, and M. R. Geetha, “Brain tumor detection using optimisation classification based on rough set theory,”
Cluster Computing, vol. 22, pp. 13853–13859, 2019, doi: 10.1007/s10586-018-2111-5.
[5] S. Koike et al., “Brain/MINDS beyond human brain MRI project: A protocol for multi-level harmonization across brain disorders
throughout the lifespan,” NeuroImage: Clinical, vol. 30, 2021, doi: 10.1016/j.nicl.2021.102600.
[6] M. L. Tsai et al., “Seizure characteristics are related to tumor pathology in children with brain tumors,” Epilepsy Research, vol.
147, pp. 15–21, 2018, doi: 10.1016/j.eplepsyres.2018.08.007.
[7] H. Shen, J. Zhang, and W. Zheng, “Efficient symmetry-driven fully convolutional network for multimodal brain tumor
segmentation,” Proceedings - International Conference on Image Processing, ICIP, vol. 2017-September, pp. 3864–3868, 2018,
doi: 10.1109/ICIP.2017.8297006.
[8] C. Ma, G. Luo, and K. Wang, “Concatenated and Connected Random Forests with Multiscale Patch Driven Active Contour
Model for Automated Brain Tumor Segmentation of MR Images,” IEEE Transactions on Medical Imaging, vol. 37, no. 8,
pp. 1943–1954, 2018, doi: 10.1109/TMI.2018.2805821.
[9] M. Havaei et al., “Brain tumor segmentation with Deep Neural Networks,” Medical Image Analysis, vol. 35, pp. 18–31, 2017,
doi: 10.1016/j.media.2016.05.004.
[10] M. K. Akter, S. M. Khan, S. Azad, and S. A. Fattah, “Automated brain tumor segmentation from mri data based on exploration of
histogram characteristics of the cancerous hemisphere,” 5th IEEE Region 10 Humanitarian Technology Conference 2017, R10-
HTC 2017, vol. 2018-January, pp. 815–818, 2018, doi: 10.1109/R10-HTC.2017.8289080.
[11] D. Cheng and M. Liu, “Triple crossing,” vol. 10541, no. November, pp. 106–113, 2017, doi: 10.1007/978-3-319-67389-9.
[12] T. Kaur, B. S. Saini, and S. Gupta, “A joint intensity and edge magnitude-based multilevel thresholding algorithm for the
automatic segmentation of pathological MR brain images,” Neural Computing and Applications, vol. 30, no. 4, pp. 1317–1340,
2018, doi: 10.1007/s00521-016-2751-4.
[13] A. Min and Z. M. Kyu, “MRI images enhancement and tumor segmentation for brain,” Parallel and Distributed Computing,
Applications and Technologies, PDCAT Proceedings, vol. 2017-December, pp. 270–275, 2018,
doi: 10.1109/PDCAT.2017.00051.
[14] G. Karayegen and M. F. Aksahin, “Brain tumor prediction on MR images with semantic segmentation by using deep learning
network and 3D imaging of tumor region,” Biomedical Signal Processing and Control, vol. 66, 2021,
doi: 10.1016/j.bspc.2021.102458.
[15] G. Madhupriya, M. Guru Narayanan, S. Praveen, and B. Nivetha, “Brain tumor segmentation with deep learning technique,”
Proceedings of the International Conference on Trends in Electronics and Informatics, ICOEI 2019, vol. 2019-April,
pp. 758–763, 2019, doi: 10.1109/icoei.2019.8862575.
[16] R. Chauhan, K. K. Ghanshala, and R. C. Joshi, “Convolutional Neural Network (CNN) for Image Detection and Recognition,”
ICSCCC 2018 - 1st International Conference on Secure Cyber Computing and Communications, pp. 278–282, 2018,
doi: 10.1109/ICSCCC.2018.8703316.
[17] P. Ribalta Lorenzo et al., “Segmenting brain tumors from FLAIR MRI using fully convolutional neural networks,” Computer
Methods and Programs in Biomedicine, vol. 176, pp. 135–148, 2019, doi: 10.1016/j.cmpb.2019.05.006.
[18] K. Usman and K. Rajpoot, “Brain tumor classification from multi-modality MRI using wavelets and machine learning,” Pattern
Analysis and Applications, vol. 20, no. 3, pp. 871–881, 2017, doi: 10.1007/s10044-017-0597-8.
[19] X. Zhao, Y. Wu, G. Song, Z. Li, Y. Zhang, and Y. Fan, “A deep learning model integrating FCNNs and CRFs for brain tumor
segmentation,” Medical Image Analysis, vol. 43, pp. 98–111, 2018, doi: 10.1016/j.media.2017.10.002.
[20] O. Prakash, C. M. Park, A. Khare, M. Jeon, and J. Gwak, “Multiscale fusion of multimodal medical images using lifting scheme
based biorthogonal wavelet transform,” Optik, vol. 182, pp. 995–1014, 2019, doi: 10.1016/j.ijleo.2018.12.028.
[21] P. Samundiswary and H. Rekha, “An efficient pass parallel SPIHT based image compression using double density dual tree
complex wavelet transform for WSN,” International Journal of Innovative Technology and Exploring Engineering, vol. 8, no. 12,
pp. 2762–2768, 2019, doi: 10.35940/ijitee.L2564.1081219.
[22] S. Albawi, T. A. Mohammed, and S. Al-Zawi, “Understanding of a convolutional neural network,” Proceedings of 2017
International Conference on Engineering and Technology, ICET 2017, vol. 2018-January, pp. 1–6, 2018,
doi: 10.1109/ICEngTechnol.2017.8308186.
[23] U. K. Acharya and S. Kumar, “Genetic algorithm based adaptive histogram equalization (GAAHE) technique for medical image
enhancement,” Optik, vol. 230, 2021, doi: 10.1016/j.ijleo.2021.166273.
[24] Ö. İnik, M. Altiok, E. Ülker, and B. Koçer, “MODE-CNN: A fast converging multi-objective optimization algorithm for CNN-
based models,” Applied Soft Computing, vol. 109, 2021, doi: 10.1016/j.asoc.2021.107582.
[25] J. Qiao et al., “Data on MRI brain lesion segmentation using K-means and Gaussian Mixture Model-Expectation Maximization,”
Data in Brief, vol. 27, 2019, doi: 10.1016/j.dib.2019.104628.
[26] D. R. Nayak, R. Dash, and B. Majhi, “Pathological brain detection using curvelet features and least squares SVM,” Multimedia
Tools and Applications, vol. 77, no. 3, pp. 3833–3856, 2018, doi: 10.1007/s11042-016-4171-y.
[27] B. Al Kindhi, T. A. Sardjono, M. H. Purnomo, and G. J. Verkerke, “Hybrid K-means, fuzzy C-means, and hierarchical clustering
for DNA hepatitis C virus trend mutation analysis,” Expert Systems with Applications, vol. 121, pp. 373–381, 2019,
doi: 10.1016/j.eswa.2018.12.019.
[28] N. Varuna Shree and T. N. R. Kumar, “Identification and classification of brain tumor MRI images with feature extraction using
DWT and probabilistic neural network,” Brain Informatics, vol. 5, no. 1, pp. 23–30, 2018, doi: 10.1007/s40708-017-0075-5.
[29] K. Kamnitsas et al., “Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation,” Medical
Image Analysis, vol. 36, pp. 61–78, 2017, doi: 10.1016/j.media.2016.10.004.
[30] S. Pereira, A. Pinto, V. Alves, and C. A. Silva, “Brain Tumor Segmentation Using Convolutional Neural Networks in MRI
Images,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1240–1251, 2016, doi: 10.1109/TMI.2016.2538465.
[31] C. Senthilkumar and R. K. Gnanamurthy, “A Fuzzy clustering based MRI brain image segmentation using back propagation
neural networks,” Cluster Computing, vol. 22, pp. 12305–12312, 2019, doi: 10.1007/s10586-017-1613-x.
[32] C. E. Cardenas et al., “Deep Learning Algorithm for Auto-Delineation of High-Risk Oropharyngeal Clinical Target Volumes
With Built-In Dice Similarity Coefficient Parameter Optimization Function,” International Journal of Radiation Oncology
Biology Physics, vol. 101, no. 2, pp. 468–478, 2018, doi: 10.1016/j.ijrobp.2018.01.114.
[33] P. Trajdos and M. Kurzynski, “Weighting scheme for a pairwise multi-label classifier based on the fuzzy confusion matrix,”
Pattern Recognition Letters, vol. 103, pp. 60–67, 2018, doi: 10.1016/j.patrec.2018.01.012.
[34] R. S. Samosir, E. Abdurachman, F. L. Gaol, and B. S. Sabarguna, “Hybrid method architecture design of mri brain tumors image
segmentation,” ICIC Express Letters, vol. 14, no. 12, pp. 1177–1184, 2020, doi: 10.24507/icicel.14.12.1177.
BIOGRAPHIES OF AUTHORS
Dr. Edi Abdurachman received his doctorate in Statistics from Iowa State University, USA. He has
been a Professor of Statistics at Bina Nusantara University since 2008. In addition, he serves as Head of the
Doctor of Computer Science Study Program and served as Director of the Postgraduate Program (2016-2020).
His research interests include statistics, computer science, and information technology. He has been a member
of IEEE since 2019 and is a member of the International Association of Engineers (IAENG). He can be
contacted at email: [email protected]
Dr. Ford Lumban Gaol holds a B.Sc. in Mathematics (1997), a Master of Computer Science (2001),
and a Doctorate in Computer Science from the University of Indonesia, Indonesia (2009). He currently
lectures in the Department of Doctor of Computer Science, Bina Nusantara University, Indonesia, where he is
Head of the Doctor of Computer Science Department and Research Interest Group Leader of "Advance
System in Computational Intelligence & Knowledge Engineering". He is the President of the IEEE Indonesia
Section Computer Society and the IAIAI South East Asia Region Director. His research interests include
information technology and computer science. He can be contacted at email: [email protected]