Fully Automated 3D Colon Segmentation and Volume Rendering in Virtual Reality
1 Introduction
Recent research [1, 2] has demonstrated that deep learning approaches, notably Convolutional Neural Networks (CNNs), have become increasingly effective and popular for medical image processing tasks such as detection, segmentation, and classification. Research in the medical field has shown growing interest in using such technology for disease analysis [3], surgical planning [4], and doctor-patient communication [5]. However, even though deep learning models can segment various medical images such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans [1, 2], it remains difficult to visualize the results. Notably, the segmentation of complex 3D structures is still typically presented and evaluated with 2D images.
Popular 3D applications such as 3D Slicer and OsiriX provide effective ways
to display medical data as 3D structures. However, the rendering results are
usually limited to 2D screens, and the complicated software often has a steep
learning curve. Therefore, an intuitive and user-friendly tool is needed for doctors to quickly identify an organ as a 3D structure with clear boundaries. Recent advancements in Virtual Reality (VR) and Augmented Reality (AR) technology
have enabled various software applications [6], but few actual products have been created to augment doctors' ability to inspect medical image stacks more effectively.
In this paper, we introduce an integrated automatic colon segmentation and visualization system for MRI image stacks, based on the U-Net convolutional network architecture, with volumetric rendering using DirectX 12 and OpenVR. It loads a series of DICOM image files as input, performs fully automated segmentation and analysis, and highlights the predicted segmentation of the colon in the volume-rendered visualization of the MRI images in virtual reality, using the HTC Vive head-mounted display (HMD). Unlike traditional desktop screens, virtual reality provides a more intuitive way for doctors to locate the colon in the abdominal area in MR Enterography. We further evaluate the advantages of our system in multiple use cases for medical imaging analysis (a sketch of the overall pipeline follows the list):
1. Help doctors more easily locate the large bowel and related areas in the MRI images of patients diagnosed with Crohn's Disease.
2. Provide manipulable 3D visualization in a collaborative environment to help doctors conduct surgical planning.
3. Export an automatically generated 3D mesh for 3D printing, and help doctors use the printed model to explain Crohn's Disease to the patient, which facilitates doctor-patient communication. The source code can be found at (omitted for double-blind review).
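To make the workflow concrete, the following is a minimal Python sketch of the loading-and-segmentation stage described above. It assumes pydicom as a dependency, and the model interface is a hypothetical Keras-style placeholder, not our actual implementation.

import numpy as np
import pydicom                      # assumed dependency for DICOM I/O
from pathlib import Path

def load_dicom_stack(folder: str) -> np.ndarray:
    """Read a DICOM series and stack it into a (slices, H, W) float volume."""
    files = sorted(Path(folder).glob("*.dcm"))
    slices = [pydicom.dcmread(str(f)) for f in files]
    slices.sort(key=lambda s: int(s.InstanceNumber))   # keep anatomical order
    return np.stack([s.pixel_array.astype(np.float32) for s in slices])

def segment_colon(volume: np.ndarray, model) -> np.ndarray:
    """Run a slice-wise U-Net and return a binary colon mask volume."""
    x = volume[..., None] / (volume.max() + 1e-8)      # normalize, add channel axis
    probs = model.predict(x)                           # hypothetical Keras-style model
    return (probs[..., 0] > 0.5).astype(np.uint8)      # threshold probabilities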
3 Methodology
Raw Data Collection We collected and compiled a dedicated MRI dataset from the medical center. A male patient (63 years old) with Crohn's Disease, who underwent MR Enterography as a routine clinical evaluation, was included. Body MR studies were acquired at the medical center on different scanners with contrast-enhanced spoiled 3D gradient echo sequences: on a Siemens 1.5T MR scanner for the first exam and on GE 1.5T MR scanners for multiple follow-up exams. The acquisitions use coronal slices with a thickness of 1.5mm, a number of slices ranging from 48 to 144, and an in-plane resolution of around 0.78mm × 0.78mm. The patient's size dictates the field of view, with the craniocaudal coverage requested to include the rectum up to the colon, so the in-plane resolution varies. This dataset, with three different protocols and exams on three different scanners at different times (2012, 2016 and 2017; details in the supplement), also represents the variability of real clinical applications and reflects whether the proposed method is generalizable.
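As a sanity check, these acquisition parameters can be read directly from the DICOM headers. Below is a minimal sketch using pydicom (an assumed dependency); the file path is hypothetical.

import pydicom

ds = pydicom.dcmread("exam/slice_001.dcm")            # hypothetical file path
thickness = float(ds.SliceThickness)                  # expected ~1.5 (mm)
row_mm, col_mm = (float(v) for v in ds.PixelSpacing)  # expected ~0.78 mm each
print(f"slice thickness: {thickness} mm, "
      f"in-plane: {row_mm:.2f} mm x {col_mm:.2f} mm")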
Creating the Training Dataset To establish a supervised training dataset, manual segmentation masks were included together with the raw DICOM files. Colon areas were segmented manually using the free software application 3D Slicer; the resulting masks were converted into 2D arrays before being fed into the neural network model, as shown in Fig. 2.
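The conversion itself is straightforward. Below is a minimal sketch, assuming the manual segmentation is exported from 3D Slicer as a label volume (e.g., NRRD) and read with SimpleITK; the file name is illustrative.

import numpy as np
import SimpleITK as sitk            # assumed dependency for volume I/O

mask_img = sitk.ReadImage("colon_segmentation.nrrd")  # hypothetical export
mask_vol = sitk.GetArrayFromImage(mask_img)           # shape: (slices, H, W)
mask_vol = (mask_vol > 0).astype(np.uint8)            # binarize label values

# One 2D mask per coronal slice, matching the per-slice training inputs.
training_masks = [mask_vol[i] for i in range(mask_vol.shape[0])]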
The post-processing algorithm shows promising results, as illustrated for P2 in Table 1 and Fig. 6.
Fig. 8: Demo (panel titles: Drawing, Blending)
Lighting The lighting approximation is adapted from real-time volumetric cloudscape rendering [12]. Just two small steps are summed towards the light source, using a simplified version of the sampling function that skips color computation for speed. This provides a rough, local approximation of lighting, enough to give an extra sense of the depth and orientation of surfaces. Figure 8 shows the lighting effect.
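For illustration, the following NumPy sketch mirrors this two-step accumulation; the nearest-neighbor indexing and the absorption constant are simplifying assumptions, not the shader's actual code.

import numpy as np

def light_attenuation(volume, pos, light_dir, step=1.0, absorption=0.5):
    """Approximate transmittance at pos with two density samples toward the light."""
    density_sum = 0.0
    for i in (1, 2):                                   # just two small steps
        sample = pos + i * step * light_dir
        idx = np.clip(np.round(sample).astype(int),
                      0, np.array(volume.shape) - 1)   # clamp to volume bounds
        density_sum += volume[tuple(idx)]              # density only, no color lookup
    return np.exp(-absorption * density_sum * step)    # Beer-Lambert falloff

The exponential (Beer-Lambert) falloff is the standard choice for this kind of emission-absorption model, which is why two density samples are enough for a plausible local shading cue.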
Interaction We offer several utilities for interacting with the data. The volume can be grabbed and moved using the virtual reality system's controllers. Other utilities include lighting (shown in Figure 8), thresholding, and manual editing of the mask with vibrational feedback proportional to the tissue density (Figure 10). These features help the user intuitively view and manipulate the data as if it were a physical object.
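As an illustration of the vibrational feedback, here is a minimal sketch of the density-to-pulse mapping; the linear scaling is an assumption, with only the roughly 4 ms cap on a single OpenVR haptic pulse taken as given.

import numpy as np

def haptic_pulse_micros(volume, controller_idx, max_micros=3999):
    """Map local tissue density to a haptic pulse duration in microseconds.

    OpenVR clips a single haptic pulse at roughly 3999 us, hence max_micros.
    """
    density = float(volume[tuple(controller_idx)])     # sample under controller tip
    strength = np.clip(density / volume.max(), 0.0, 1.0)
    return int(strength * max_micros)                  # denser tissue, stronger pulse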
References
1. Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-Net: Convolutional networks for biomedical image segmentation." Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Lecture Notes in Computer Science, vol. 9351. Springer, Cham, 2015.
2. Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-Net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016.
3. Chupin, Marie, et al. “Fully automatic hippocampus segmentation and classification
in Alzheimer’s disease and mild cognitive impairment applied on data from ADNI.”
Hippocampus 19.6 (2009): 579-587.
4. Reitinger, Bernhard, et al. “Liver surgery planning using virtual reality.” IEEE
Computer Graphics and Applications 26.6 (2006): 36-47.
5. Levinson, Wendy, Cara S. Lesser, and Ronald M. Epstein. "Developing physician communication skills for patient-centered care." Health Affairs 29.7 (2010): 1310-1318.
6. Ayache, Nicholas. “Medical computer vision, virtual reality and robotics.” Image
and Vision Computing 13.4 (1995): 295-313.
7. Ker, Justin, et al. "Deep learning applications in medical image analysis." IEEE Access 6 (2018): 9375-9389.
8. Litjens, Geert, et al. "A survey on deep learning in medical image analysis." arXiv preprint (2017). Available: https://arxiv.org/abs/1702.05747
9. Taha, Abdel Aziz, and Allan Hanbury. "Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool." BMC Medical Imaging 15.1 (2015): 29.
10. Dice, Lee R. “Measures of the amount of ecologic association between species.”
Ecology 26.3 (1945): 297-302.
11. Gücer, Firdevs Ikbal, et al. "Evaluation of Crohn's disease activity by MR enterography: derivation and histopathological comparison of an MR-based activity index." European Journal of Radiology 84.10 (2015): 1829-1834.
12. Schneider, Andrew. "Real-time volumetric cloudscapes." GPU Pro 7: Advanced Rendering Techniques (2016): 97.
13. Puylaert, Carl AJ, et al. "Comparison of MRI Activity Scoring Systems and Features for the Terminal Ileum in Patients With Crohn Disease." American Journal of Roentgenology 212.2 (2019): W25-W31.
14. Mahapatra, Dwarikanath, et al. "Automatic detection and segmentation of Crohn's disease tissues from abdominal MRI." IEEE Transactions on Medical Imaging 32.12 (2013): 2332-2347.
15. Fröhlich, Magali, et al. "Holographic Visualisation and Interaction of Fused CT, PET and MRI Volumetric Medical Imaging Data Using Dedicated Remote GPGPU Ray Casting." Springer, Cham, 2018. 102-110.