Review of MRI-based Brain Tumor Image Segmentation Using Deep Learning Methods
Procedia Computer Science 102 (2016) 317 – 324
12th International Conference on Application of Fuzzy Systems and Soft Computing, ICAFS
2016, 29-30 August 2016, Vienna, Austria
Abstract
Brain tumor segmentation is an important task in medical image processing. Early diagnosis of brain tumors plays
an important role in improving treatment possibilities and increases the survival rate of patients. Manual
segmentation of brain tumors for cancer diagnosis, from the large amounts of MRI images generated in clinical
routine, is a difficult and time-consuming task, so there is a need for automatic brain tumor image segmentation. The
purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. Recently, automatic
segmentation using deep learning methods has become popular, since these methods achieve state-of-the-art results
and can address this problem better than other methods. Deep learning methods also enable efficient processing
and objective evaluation of the large amounts of MRI-based image data. A number of existing review papers focus
on traditional methods for MRI-based brain tumor image segmentation. Unlike them, this paper focuses on the
recent trend of deep learning methods in this field. First, an introduction to brain tumors and methods for brain
tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on recent deep learning methods,
are discussed. Finally, an assessment of the current state is presented and the developments needed to bring
MRI-based brain tumor segmentation methods into daily clinical routine are addressed.
© 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the Organizing Committee of ICAFS 2016.
Keywords: Review; image processing; deep learning; brain tumor segmentation; convolutional neural networks; MRI
* Ali Işın. Tel.: +90 0392 22 84 267; fax: +90 0392 22 88 407.
E-mail address: [email protected]
doi:10.1016/j.procs.2016.09.407
1. Introduction
Cancer can be defined as the uncontrolled, unnatural growth and division of cells in the body. When such unnatural
cell growth and division occurs as a mass within the brain tissue, it is called a brain tumor. While brain tumors are
not very common, they are among the most lethal cancers1.
Depending on their origin, brain tumors are classified as either primary brain tumors or metastatic brain tumors. In
primary tumors the cells originate from brain tissue itself, whereas in metastatic tumors the cells become cancerous
elsewhere in the body and spread into the brain. Gliomas are a type of brain tumor that originates from glial cells.
They are the main type of brain tumor that current brain tumor segmentation research focuses on. The term glioma
is a general term used to describe different types of gliomas, ranging from low-grade gliomas such as astrocytomas
and oligodendrogliomas to the high-grade (grade IV) glioblastoma multiforme (GBM), which is the most aggressive
and the most common primary malignant brain tumor2. Surgery, chemotherapy and radiotherapy are the techniques
used, usually in combination, to treat gliomas3.
Early diagnosis of gliomas plays an important role in improving treatment possibilities. Medical imaging
techniques such as Computed Tomography (CT), Single-Photon Emission Computed Tomography (SPECT),
Positron Emission Tomography (PET), Magnetic Resonance Spectroscopy (MRS) and Magnetic Resonance
Imaging (MRI) are all used to provide valuable information about the shape, size, location and metabolism of brain
tumors, assisting in diagnosis. While these modalities are used in combination to provide the most detailed
information about brain tumors, MRI is considered the standard technique due to its good soft tissue contrast and
wide availability. MRI is a non-invasive, in vivo imaging technique that uses radio-frequency signals to excite
target tissues, under the influence of a very strong magnetic field, and produce images of their internal structure.
Images of different MRI sequences are generated by altering the excitation and repetition times during image
acquisition. These different MRI modalities produce different types of tissue contrast, thus providing valuable
structural information and enabling the diagnosis and segmentation of tumors along with their subregions4. The
four standard MRI modalities used for glioma diagnosis are T1-weighted MRI (T1), T2-weighted MRI (T2),
T1-weighted MRI with gadolinium contrast enhancement (T1-Gd) and Fluid Attenuated Inversion Recovery
(FLAIR) (see Fig. 1). During MRI acquisition, although the exact number can vary from device to device, around
one hundred and fifty 2D slices are produced to represent the 3D brain volume. Furthermore, when the slices from
all the required modalities are combined for diagnosis, the data become very large and complex.
Generally, T1 images are used for distinguishing healthy tissues, whereas T2 images are used to delineate the
edema region, which produces a bright signal on the image. In T1-Gd images, the tumor border can easily be
distinguished by the bright signal of the contrast agent (gadolinium ions) accumulated in the active cell region of
the tumor tissue. Since necrotic cells do not interact with the contrast agent, they appear as the hypo-intense part of
the tumor core, making it possible to easily segment them from the active cell region in the same sequence. In
FLAIR images, the signal from water molecules is suppressed, which helps to distinguish the edema region from
the Cerebrospinal Fluid (CSF).
Before applying any therapy, it is crucial to segment the tumor in order to protect healthy tissues while damaging
and destroying tumor cells during the therapy. Brain tumor segmentation involves diagnosing, delineating and
separating tumor tissues, such as active cells, necrotic core and edema (Fig. 2), from normal brain tissues, including
Gray Matter (GM), White Matter (WM) and CSF. In current clinical routine, this task involves manual annotation
and segmentation of large amounts of multimodal MRI images. However, since manual segmentation is a very
time-consuming procedure, the development of robust automatic segmentation methods that provide efficient and
objective segmentation has become an interesting and popular research area in recent years5. The high segmentation
performances currently obtained by deep learning methods make them good candidates for this task.
The rest of the paper is organized as follows: First we briefly review methods for brain tumor image
segmentation in section 2. Then, in section 3, we especially focus on methods based on deep learning algorithms,
which provide the state-of-the-art results in recent years. In particular, we compare designs of different deep
learning methods and their performances. Finally, in conclusions, we assess the current state-of-the-art and provide
future directions for development.
Fig. 1. Four different MRI modalities showing a high grade glioma, each enhancing different subregions of the tumor. From left: T1, T1-Gd, T2,
and FLAIR. Images are generated by using BRATS 2013 data5.
2. Brain tumor segmentation methods
Brain tumor segmentation methods can be classified as manual methods, semi-automatic methods and fully
automatic methods, based on the level of user interaction required6.
2.1. Manual segmentation
Manual segmentation requires the radiologist to use the multi-modality information presented by the MRI images
along with the anatomical and physiological knowledge gained through training and experience. The procedure
involves the radiologist going through the images slice by slice, diagnosing the tumor and carefully drawing the
tumor regions manually. Apart from being a time-consuming task, manual segmentation is also radiologist
dependent, and segmentation results are subject to large intra- and inter-rater variability7. However, manual
segmentations are widely used to evaluate the results of semi-automatic and fully automatic methods.
2.2. Semi-automatic segmentation
Semi-automatic methods require user interaction for three main purposes: initialization, intervention or feedback
response, and evaluation8. Initialization is generally performed by defining a region of interest (ROI), containing
the approximate tumor region, for the automatic algorithm to process. The parameters of pre-processing methods
can also be adjusted to suit the input images. In addition to initialization, automated algorithms can be steered
towards a desired result during the process by receiving feedback and making adjustments in response. Furthermore,
the user can evaluate the results and modify or repeat the process if not satisfied.
Hamamci et al. proposed the “Tumor Cut” method9. This semi-automatic segmentation method requires the user
to draw the maximum diameter of the tumor on the input MRI images. After initialization, a cellular automata (CA)
based seeded tumor segmentation method is run twice, once for the tumor seeds provided by the user and once for
the background seeds, to obtain a tumor probability map. This approach applies the algorithm separately to each
MRI modality (e.g. T1, T2, T1-Gd and FLAIR) and then combines the results to obtain the final tumor volume.
A recent semi-automatic method employed a novel classification approach10. In this approach, the segmentation
problem is transformed into a classification problem, and a brain tumor is segmented by training and classifying
within that same brain only. Generally, machine learning classification methods for brain tumor segmentation
require large amounts of brain MRI scans (with known ground truth) from different cases to train on, which creates
a need to deal with intensity bias correction and other noise. In this method, however, the user initializes the process
by selecting a subset of voxels belonging to each tissue type from a single case. For these subsets of voxels, the
algorithm extracts the intensity values along with the spatial coordinates as features and trains a support vector
machine (SVM), which is then used to classify all the voxels of the same image into their corresponding tissue types.
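A minimal sketch of this within-brain classification idea is given below, assuming NumPy and scikit-learn. The array names (volumes, seed_mask), the RBF kernel and the exact feature layout are illustrative assumptions, not the published configuration of the original method10.

```python
# Sketch: within-brain voxel classification with an SVM (illustrative, not the authors' code).
import numpy as np
from sklearn.svm import SVC

def within_brain_svm(volumes, seed_mask):
    """volumes: list of co-registered 3D arrays, one per MRI modality.
    seed_mask: 3D array of user-selected voxel labels (0 = unlabelled)."""
    # Features for the user-selected voxels: intensity in every modality plus spatial coordinates.
    coords = np.argwhere(seed_mask > 0)
    labels = seed_mask[seed_mask > 0]
    feats = np.column_stack([vol[tuple(coords.T)] for vol in volumes] + [coords])
    clf = SVC(kernel="rbf").fit(feats, labels)

    # Classify every voxel of the same brain using the same feature layout.
    all_coords = np.argwhere(np.ones(seed_mask.shape, dtype=bool))
    all_feats = np.column_stack([vol[tuple(all_coords.T)] for vol in volumes] + [all_coords])
    return clf.predict(all_feats).reshape(seed_mask.shape)

# Toy usage with random data (four modalities, two user-marked tissue types).
volumes = [np.random.rand(24, 24, 24) for _ in range(4)]
seed_mask = np.zeros((24, 24, 24), dtype=int)
seed_mask[10:12, 10:12, 10:12] = 1          # user-marked "tumor" voxels
seed_mask[2:4, 2:4, 2:4] = 2                # user-marked "healthy" voxels
segmentation = within_brain_svm(volumes, seed_mask)
```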
Although semi-automatic brain tumor segmentation methods are less time consuming than manual methods and
can obtain efficient results, they are still prone to intra- and inter-rater/user variability. Thus, current brain tumor
segmentation research is mainly focused on fully automatic methods.
Fig. 2. Brain tumor segmentation. From left: T1-Gd, T2, FLAIR and segmented tumor. In the segmented image, the bright signal is the active
region, the dark signal is the necrotic core and the medium-level signal is edema. Images are generated by using BRATS 2013 data5.
2.3. Fully automatic segmentation
In fully automatic brain tumor segmentation methods, no user interaction is required. Mainly, artificial
intelligence and prior knowledge are combined to solve the segmentation problem.
2.3.1. Challenges
Automatic segmentation of gliomas is a very challenging problem. Tumor-bearing brain MRI data are 3D data in
which tumor shape, size and location can vary greatly from patient to patient. Tumor boundaries are also usually
unclear and irregular, with discontinuities, posing a great challenge especially for traditional edge-based methods.
In addition, brain tumor MRI data obtained from clinical scans or synthetic databases11 are inherently complex.
The MRI devices and protocols used for acquisition can vary dramatically from scan to scan, introducing intensity
biases and other variations across the slices in a dataset. The need for several modalities to effectively segment
tumor sub-regions adds further to this complexity.
Segmentation performance is commonly measured by the Dice score between the predicted tumor region P and the
ground truth T, Dice(P, T) = 2 |P ∧ T| / (|P| + |T|), where ∧ is the logical AND operator and |·| is the size of the set
(the number of voxels belonging to it).
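As a concrete illustration, the short function below computes this Dice overlap for binary volumes; it is a sketch assuming NumPy arrays and is not taken from any of the reviewed papers.

```python
# Dice overlap between a binary prediction and ground truth, as used for the
# BRATS scores quoted throughout this review (illustrative sketch).
import numpy as np

def dice_score(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())
```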
Generative models rely on prior knowledge about the appearance and spatial distribution of healthy tissues.
Previously obtained atlases of healthy tissues are used to extract the unknown tumor compartments. However,
converting prior knowledge into suitable probabilistic models is a complicated task. Although it is a semi-automatic
method, Kwon et al. proposed the best performing generative model15.
3. Deep learning based segmentation methods
The recent performance of deep learning methods, specifically Convolutional Neural Networks (CNNs), in several
object recognition25 and biological image segmentation26 challenges has increased their popularity among
researchers. In contrast to traditional classification methods, which are fed hand-crafted features, CNNs
automatically learn representative, complex features directly from the data itself. Due to this property, research on
CNN-based brain tumor segmentation mainly focuses on network architecture design rather than on image
processing to extract features. CNNs take patches extracted from the images as inputs and use trainable
convolutional filters and local subsampling to extract a hierarchy of increasingly complex features. Although
CNN-based brain tumor segmentation methods are currently few in number compared to traditional methods, we
focus this section of the review on them because of the state-of-the-art results they obtain. A comparison of the
reviewed deep learning and traditional glioma segmentation methods is presented in Table 1.
Urban et al. proposed a 3D CNN architecture for the multi-modal MRI glioma segmentation task27. Multi-modality
3D patches, basically cubes of voxels extracted from the different brain MRI modalities, are used as inputs to a
CNN to predict the tissue label of the center voxel of the cube. The input has three spatial intensity dimensions and
one additional dimension for the MRI modalities, so the CNN effectively handles 4D input data. While such high
dimensional processing can better represent the 3D nature of biological structures, it also increases the processing
load of the network. As for the architecture, two different networks are designed. The first one is a four-layer CNN
whose input layer contains 15 3D filters with 5 x 5 x 5 spatial dimensions and an additional 4th dimension
accounting for the corresponding MRI modality, resulting in a filter shape of 5 x 5 x 5 x 4. The filters of the two
hidden layers also have 5 x 5 x 5 spatial dimensions plus one dimension corresponding to the number of filters in
the preceding layer. The number of filters in each hidden layer is 25. The last layer, the softmax layer, contains 6
filters, one for each tissue type to be classified, allowing the output to be interpreted as probabilities (see Fig. 3 for
an example architecture). The second network is almost identical, with the exception of an additional hidden layer
with 40 filters of size 5 x 5 x 5. Connected components are used to post-process the results. The reported average
results of the two proposed networks are promising, with BRATS dice scores of 87% for the whole tumor region,
77% for the core tumor region and 73% for the active tumor region.
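A rough PyTorch sketch of a patch-wise 3D CNN of this shape is given below. It mirrors the layer description above (15, 25, 25 and 6 filters, tanh activations, which the text later notes Urban et al. used), but the patch size and the 1 x 1 x 1 output filters are assumptions; this is not the authors' implementation.

```python
# Sketch of a 3D patch-wise CNN in the spirit of Urban et al.27 (illustrative only).
import torch
import torch.nn as nn

class Patch3DCNN(nn.Module):
    def __init__(self, n_modalities=4, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_modalities, 15, kernel_size=5), nn.Tanh(),  # input layer: 15 filters of 5x5x5x4
            nn.Conv3d(15, 25, kernel_size=5), nn.Tanh(),            # hidden layer: 25 filters
            nn.Conv3d(25, 25, kernel_size=5), nn.Tanh(),            # hidden layer: 25 filters
            nn.Conv3d(25, n_classes, kernel_size=1),                # softmax layer: one filter per tissue class
        )

    def forward(self, x):                        # x: (batch, modality, depth, height, width)
        return torch.softmax(self.features(x), dim=1)   # per-voxel class probabilities

# Example: 4-modality cubes of 13x13x13 voxels yield the class probabilities of the central voxel.
probs = Patch3DCNN()(torch.randn(2, 4, 13, 13, 13))     # shape (2, 6, 1, 1, 1)
```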
In contrast to the high dimensional method of Urban et al., Zikic et al. developed an interpretation method that
transforms the 4D data so that standard 2D-CNN architectures can be used to solve the brain tumor segmentation
task28. This removes the burden of high dimensional CNN design while increasing computational efficiency. The
interpretation is done by transforming each 4-modality 3D input patch of size (d1 x d2 x d3 x 4) into a 2D patch of
size (d1 x d2) with 4·d3 channels. With this method, input patches of size 19 x 19 x 4 (a single slice is used for each
modality) are fed into a 2D-CNN containing two convolutional layers with 64 filters of size 5 x 5 x 4 and 3 x 3 x 4
respectively, separated by a max-pooling layer, followed by one fully-connected (FC) layer and a softmax layer.
While Urban et al. used the hyperbolic tangent function, this method applies the rectified linear unit (ReLU) as the
non-linearity. No post-processing is applied. The reported results indicate BRATS dice scores of 83.7% for the
whole tumor region, 73.6% for the core tumor region and 69% for the active tumor region. It is important to note
that these results were obtained with a limited dataset, which might affect the performance.
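The re-interpretation step itself amounts to a reshape of the patch tensor; the minimal NumPy sketch below illustrates it with assumed shapes (it is not Zikic et al.'s code).

```python
# Sketch of the 4D-to-2D re-interpretation described above: a multi-modality 3D
# patch of shape (d1, d2, d3, n_mod) is viewed as a 2D patch with d3 * n_mod
# channels, so an ordinary 2D CNN can consume it.
import numpy as np

def to_2d_channels(patch_4d):
    d1, d2, d3, n_mod = patch_4d.shape
    return patch_4d.reshape(d1, d2, d3 * n_mod)   # depth and modality fold into the channel axis

patch = np.random.rand(19, 19, 1, 4)              # a single slice from each of 4 modalities
print(to_2d_channels(patch).shape)                # (19, 19, 4)
```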
Another novel approach implemented a cascaded two-pathway CNN architecture29. By extracting smaller and
larger patches at the same time, a cascaded CNN is realized that processes local details of the brain MRI along with
the larger context of the brain tissue. Centred at the same location of the image, patches of size 33 x 33 pixels are
extracted from each MRI modality for the local pathway, and patches of size 65 x 65 are extracted for the global
pathway, to classify the label of the central pixel. The 2D multi-modality global input patches of size 65 x 65 x 4
are first processed by a CNN to output patches of size 33 x 33 x 5. Those output patches are then concatenated with
the local patches of size 33 x 33 x 4 and fed as input to a two-pathway CNN with convolutional layers containing
7 x 7 sized filters in one path and 13 x 13 sized filters in the other, thus creating the cascaded two-pathway CNN
architecture. Several modified architectures of this cascaded CNN method are also proposed. Along with this novel
architectural approach, two-phase training is implemented to deal with class imbalance: in the first phase the
cascaded CNN is trained with a balanced distribution of classes, and in the second phase the CNN is retrained with
a more representative distribution of the original images. Furthermore, the Maxout non-linearity is used, and a
connected-components method is implemented as a post-processing step. High BRATS dice scores of 88% for the
whole tumor region, 79% for the core tumor region and 73% for the active tumor region are reported. A similar
two-pathway approach with only one CNN has also been proposed30.
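The condensed PyTorch sketch below captures only the cascading and two-pathway ideas described above. The first network is collapsed into a single convolution, the layer widths are placeholders, and ReLU stands in for the Maxout units used by Havaei et al.29; it is not their published configuration.

```python
# Illustrative sketch of a cascaded two-pathway CNN (placeholders, not the authors' code).
import torch
import torch.nn as nn

class TwoPathway(nn.Module):
    """One path with 7x7 filters (local detail), one with 13x13 filters (context)."""
    def __init__(self, in_ch, n_classes=5):
        super().__init__()
        self.local_path = nn.Sequential(nn.Conv2d(in_ch, 64, 7, padding=3), nn.ReLU())
        self.global_path = nn.Sequential(nn.Conv2d(in_ch, 160, 13, padding=6), nn.ReLU())
        self.out = nn.Conv2d(64 + 160, n_classes, 1)     # per-pixel class scores

    def forward(self, x):
        return self.out(torch.cat([self.local_path(x), self.global_path(x)], dim=1))

# Cascade: a first network turns 65x65 4-modality patches into 33x33 label maps, which are
# concatenated as extra channels with the co-centred 33x33 local patches.
first_net = nn.Sequential(nn.Conv2d(4, 5, kernel_size=33))     # stand-in for the first CNN: 65x65 -> 33x33, 5 channels
global_patch = torch.randn(1, 4, 65, 65)
local_patch = torch.randn(1, 4, 33, 33)
cascade_input = torch.cat([first_net(global_patch), local_patch], dim=1)   # (1, 9, 33, 33)
probs = torch.softmax(TwoPathway(in_ch=9)(cascade_input), dim=1)
```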
One of the recent CNN approaches31 evaluated the brain tumor segmentation performance of deeper CNN
architectures. This is realized by using small 3 x 3 filters in the convolutional layers. In this way, more
convolutional layers can be added to the architecture without reducing the effective receptive field obtained with
traditional larger filters. Furthermore, such deeper architectures apply more non-linearities and have fewer filter
weights, due to the use of smaller filters, reducing the chance of overfitting. A modified version of ReLU, the leaky
rectified linear unit (LReLU), is used as the non-linear activation function. The proposed CNN, which is 11 layers
deep (6 convolutional layers followed by 3 fully-connected layers, with 2 max-pooling layers dividing the
convolutional layers into blocks of three), obtained BRATS dice scores of 88%, 83% and 77% for the whole tumor,
core tumor and active tumor regions, respectively. The implementation of intensity normalization, intensity bias
correction and input patch augmentation as pre-processing operations, along with threshold-based removal of
unwanted clusters as post-processing, contributed to the state-of-the-art results.
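The sketch below shows the general pattern of such a deeper patch-wise CNN built from stacked 3 x 3 convolutions with LReLU: 6 convolutional layers in two blocks of three, each block followed by max-pooling, then 3 fully-connected layers. The filter counts, FC sizes and the 32 x 32 patch size are assumptions for illustration, not the configuration published by Pereira et al.31.

```python
# Illustrative deep patch-wise CNN with small 3x3 filters and LeakyReLU (PyTorch).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Three stacked 3x3 convolutions give the receptive field of one 7x7 filter,
    # with fewer weights and more non-linearities.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(),
        nn.MaxPool2d(2),
    )

class DeepPatchCNN(nn.Module):
    def __init__(self, n_modalities=4, n_classes=5, patch=32):
        super().__init__()
        self.features = nn.Sequential(conv_block(n_modalities, 64), conv_block(64, 128))
        flat = 128 * (patch // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Linear(flat, 256), nn.LeakyReLU(),
            nn.Linear(256, 256), nn.LeakyReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):                                  # x: (batch, modality, h, w)
        return self.classifier(self.features(x).flatten(1))

logits = DeepPatchCNN()(torch.randn(8, 4, 32, 32))         # class scores for the central pixel of each patch
```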
Some glioma segmentation methods combine CNNs with other classification or clustering techniques. In one
method, local structured prediction with a CNN is proposed32. Instead of using CNNs to classify the central voxels
of input image patches into brain tissue classes, patches of labels are first extracted from the ground truth images
and then clustered by the k-means algorithm into N groups to form a label-patch dictionary of size N. A 2D CNN is
then used to classify multimodal input image patches into one of these clusters. As for the segmentation
performance of the method, BRATS dice scores of 83%, 75% and 77% for the whole tumor, core tumor and active
tumor regions are reported, respectively. Rao et al.33, on the other hand, extracted multi-plane patches around each
pixel and trained four different CNNs, each taking input patches from a separate MRI modality. The outputs of the
last hidden layers of those CNNs are then concatenated and used as feature maps to train a random forest (RF)
classifier. Implementations of pre/post-processing steps are not reported, and only an accuracy level of 67% is
provided as a result.
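The label-patch dictionary step of the structured prediction approach32 can be sketched with scikit-learn's k-means as below; the patch size, the number of groups N and the array names are illustrative assumptions.

```python
# Sketch: building a label-patch dictionary with k-means (illustrative, not the authors' code).
import numpy as np
from sklearn.cluster import KMeans

def build_label_dictionary(label_patches, n_groups=32):
    """Cluster ground-truth label patches into N groups; a CNN is then trained to
    map each multimodal image patch to the index of its nearest group."""
    X = np.stack([p.ravel() for p in label_patches])        # one row per flattened label patch
    km = KMeans(n_clusters=n_groups, n_init=10).fit(X)
    dictionary = km.cluster_centers_.reshape(n_groups, *label_patches[0].shape)
    targets = km.labels_                                     # CNN training targets (cluster indices)
    return dictionary, targets

patches = [np.random.randint(0, 5, (8, 8)) for _ in range(1000)]   # dummy label patches
dictionary, targets = build_label_dictionary(patches)
```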
Table 1. Comparison of the reviewed brain tumor segmentation methods (results were obtained using the challenge dataset of the BRATS 2013
benchmark5; note that we only considered Dice scores as the performance measure; refer to the benchmark for further evaluation metrics).

Method | Approach | Type | Dice (whole) | Dice (core) | Dice (active)
Havaei et al.29 | Cascaded two-pathway CNNs for simultaneous local and global processing | Fully automatic | 0.88 | 0.79 | 0.73
Tustison et al.19 | Concatenated RFs, trained using asymmetry and first order statistical features | Fully automatic | 0.87 | 0.78 | 0.74
Urban et al.27 | 3D CNN architecture using 3D convolutional filters | Fully automatic | 0.87 | 0.77 | 0.73
Havaei et al.10 | Uses SVM; training and segmentation implemented within the same brain | Semi-automatic | 0.86 | 0.77 | 0.73
Dvorak and Menze32 | Local structured prediction with CNN and k-means | Fully automatic | 0.83 | 0.75 | 0.77
Davy et al.30 | Two-pathway CNN for simultaneous local and global processing | Fully automatic | 0.85 | 0.74 | 0.68
Zikic et al.28 | 3D input patches interpreted as 2D input patches to train a CNN | Fully automatic | 0.837 | 0.736 | 0.69
Hamamci et al.9 | Generative model, uses cellular automata to obtain a tumor probability map | Semi-automatic | 0.72 | 0.57 | 0.59
Rao et al.33 | Four CNNs, one for each modality, with their outputs concatenated as an input into an RF | Fully automatic | Not reported | Not reported | Not reported
4. Conclusions
Automatic segmentation of brain tumors for cancer diagnosis is a challenging task. Recently, the availability of
public datasets and the well-accepted BRATS benchmark has provided a common platform for researchers to
develop their methods and objectively compare them with existing techniques. In this paper, we provided a review
of the state-of-the-art methods based on deep learning, together with a brief overview of traditional techniques.
With the reported high performances, deep learning methods can be considered the current state of the art for
glioma segmentation. In traditional automatic glioma segmentation methods, translating prior knowledge into
probabilistic maps or selecting highly representative features for classifiers is a challenging task. Convolutional
neural networks (CNNs), in contrast, have the advantage of automatically learning representative, complex features
for both healthy brain tissues and tumor tissues directly from the multi-modal MRI images. Future improvements
and modifications in CNN architectures, and the addition of complementary information from other imaging
modalities such as Positron Emission Tomography (PET), Magnetic Resonance Spectroscopy (MRS) and Diffusion
Tensor Imaging (DTI), may improve the current methods, eventually leading to the development of clinically
acceptable automatic glioma segmentation methods for better diagnosis.
References
1. De Angelis L M. Brain Tumors. N. Engl. J. Med. 2001; 344:114-23.
2. Deimling A. Gliomas. Recent Results in Cancer Research vol 171. Berlin: Springer; 2009.
3. Stupp R. Malignant glioma: ESMO clinical recommendations for diagnosis, treatment and follow-up. Ann Oncol 2007; 18(Suppl 2):69-70.
4. Drevelegas A and Papanikolou N. Imaging modalities in brain tumors Imaging of Brain Tumors with Histological Correlations. Berlin:
Springer; 2011; chapter 2:13-34.
5. Menze B, et al. The Multimodal brain tumor image segmentation benchmark (brats). IEEE Trans Med Imaging 2015; 34(10):1993-2024.
6. Gordillo N, Montseny E, Sobrevilla P. State of the art survey on MRI brain tumor segmentation. Magn Reson Imaging 2013; 31(8):1426–38.
7. White D, Houston A, Sampson W, Wilkins G. Intra and interoperator variations in region-of-interest drawing and their effect on the
measurement of glomerular filtration rates 1999; 24:177–81.
8. Foo JL. A survey of user interaction and automation in medical image segmentation methods. Tech rep ISUHCI20062, Human Computer
Interaction Department, Iowa State Univ; 2006.
9. Hamamci A, et al. Tumor-Cut: segmentation of brain tumors on contrast enhanced MR images for radiosurgery applications. IEEE Trans Med
Imaging 2012; 31(3):790–804.
10. Havaei M, Larochelle H, Poulin P, Jadoin P M. Within-brain classification for brain tumor segmentation. Int J Cars 2016; 11:777-788.
11. Prastawa M, Bullitt E, Gerig G. Simulation of brain tumors in mr images for evaluation of segmentation efficacy. Medical Image Analysis
2009;13(2):297- 311.
12. Bauer S, Wiest R, Nolte L, Reyes M. A survey of MRI-based medical image analysis for brain tumor studies. Phys Med Biol.2013;58:97-129.
13. Liu J, Wang J, Wu F, Liu T, Pan Y. A survey of MRI-based brain tumor segmentation methods. Tsinghua Science and Technology
2014;19(6):578-595.
14. Angelini E D, Clatz O, Mandonnet E,Konukoglu E,Capelle L, Duffau H. Glioma dynamics and computational models: a review of
segmentation, registration, and in silico growth algorithms and their clinical applications. Curr. Med. Imaging 2007; 3: 262–76.
15. D. Kwon et al. Combining generative models for multifocal glioma segmentation and registration. Medical Image Computing and
Computer-Assisted Intervention–MICCAI 2014. Springer, 2014:763–770.
16. Nowak R D. Wavelet-based Rician noise removal for magnetic resonance imaging. IEEE Trans Image Processing 1999; 8(10):1408-1419.
17. Zhuang AH, Valentino DJ, Toga AW. Skull stripping magnetic resonance brain images using a model based level set. NeuroImage
2006;32(1):79-92.
18. Shah M, Xiao Y, Subbanna N, Francis S, Arnold D. L, Collins D. L, Arbel T. Evaluating intensity normalization on mris of human brain with
multiple sclerosis. Medical Image Analysis 2011;15(2): 267-282.
19. Tustison N. et al. Optimal symmetric multimodal templates and concatenated random forests for supervised brain tumor segmentation
(simplified) with antsr. Neuroinformatics 2015;13(2): 209–225.
20. Anitha V, Murugavalli S. Brain tumor classification using two-tier classifier with adaptive segmentation technique. IET Comput.Vis.
2016;10(1):9-17.
21. Leung T, Malik J. Representing and recognizing the visual appearance of materials using three-dimensional textons. Int. J. Comput. Vision
2001; 43(1):29–44.
22. Islam A, Reza S, Iftekharuddin K. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE Trans Med Imaging
2013;60(11):3204–3215.
23. Bauer S, Nolte LP, Reyes M. Fully automatic segmentation of brain tumor images using support vector machine classification in combination
with hierarchical conditional random field regularization. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2011.
Springer; 2011: 354-361.
24. Zikic D, et al. Decision forests for tissue-specific segmentation of high-grade gliomas in multi-channel MR. Med Image Comput Comput
Assist Interv 2012;15(3):369–76.
25. Krizhevsky A, Sutskever I, Hinton G E. Imagenet classification with deep convolutional neural networks. Advances in neural information
processing systems 2012:1097–1105.
26. Ciresan D. et al. Deep neural networks segment neuronal membranes in electron microscopy images. Advances in neural information
processing systems 2012: 2843–2851.
27. Urban G. et al. Multi-modal brain tumor segmentation using deep convolutional neural networks. MICCAI Multimodal Brain Tumor
Segmentation Challenge (BraTS) 2014:31–35.
28. Zikic D. et al. Segmentation of brain tumor tissues with convolutional neural networks. MICCAI Multimodal Brain Tumor Segmentation
Challenge (BraTS) 2014:36–39.
29. Havaei M, Davy A, Farley W D, Biard A, Courville A, Bengio Y, Pal C, Jadoin P M, Larochelle H. Brain tumor segmentation with deep
neural networks. Medical Image Analysis 2016, doi:10.1016/j.media.2016.05.004.
30. Davy A. et al. Brain tumor segmentation with deep neural networks. MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS)
2014:1–5.
31. Pereira S, Pinto A, Alves V, Silva C A. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med
Imaging 2016;35(5):1240–1251.
32. Dvorak P, Menze B. Structured prediction with convolutional neural networks for multimodal brain tumor segmentation. MICCAI
Multimodal Brain Tumor Segmentation Challenge (BraTS) 2015:13–24.
33. Rao V, Sarabi M S, Jaiswal A. Brain tumor segmentation with deep learning. MICCAI Multimodal Brain Tumor Segmentation Challenge
(BraTS) 2015:56–59.