


Comput Sci Res Dev (2011) 26: 5–13
DOI 10.1007/s00450-010-0146-9

SPECIAL ISSUE PAPER

Challenges of medical image processing


Ingrid Scholl · Til Aach · Thomas M. Deserno ·
Torsten Kuhlen

Published online: 23 December 2010


© Springer-Verlag 2010

Abstract  In today's health care, imaging plays an important role throughout the entire clinical process, from diagnostics and treatment planning to surgical procedures and follow-up studies. Since most imaging modalities have gone directly digital, with continually increasing resolution, medical image processing has to face the challenges arising from large data volumes. In this paper, we discuss Kilo- to Terabyte challenges regarding (i) medical image management and image data mining, (ii) bioimaging, (iii) virtual reality in medical visualization and (iv) neuroimaging. Due to the increasing amount of data, image processing and visualization algorithms have to be adjusted. Scalable algorithms and advanced parallelization techniques using graphics processing units have been developed; they are summarized in this paper. While such techniques cope with the Kilo- to Terabyte challenge, the Petabyte level is already looming on the horizon. For this reason, medical image processing remains a vital field of research.

Keywords  Medical imaging · Bioimaging · Neuroimaging · Visualization · Giga-Voxel · Tera-Voxel · Picture archiving and communication systems (PACS) · Content-based image retrieval (CBIR) · Virtual reality (VR) · Graphics processing unit (GPU) programming · Parallel algorithm · Grid computing

I. Scholl (✉)
Faculty of Electrical Engineering and Information Technology,
FH Aachen University of Applied Sciences, Aachen, Germany
e-mail: [email protected]

T. Aach
Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
e-mail: [email protected]

T.M. Deserno
Department of Medical Informatics, University Hospital Aachen, RWTH Aachen University, Aachen, Germany
e-mail: [email protected]

T. Kuhlen
Virtual Reality Group, RWTH Aachen University, Aachen, Germany
e-mail: [email protected]

1 Introduction

Recent advances in biomedical signal processing and image processing have frequently been reviewed [21, 35, 36]. Usually, such review articles are driven by classifying the methods that are used for processing pixel and voxel data, e.g., image segmentation, or their applications in diagnostics, treatment planning and follow-up studies. In contrast, this paper focuses on processing large data volumes of medical images and the related challenges.

During the last years, the amount of medical image data grew from Kilo- to Terabyte. This is mainly due to improvements in medical image acquisition systems with increasing pixel resolution and faster reconstruction processing. For example, the new SkyScan 2011 x-ray nano-tomograph has a resolution of 200 nm per pixel, and high-resolution micro computed tomography (CT) reconstructs images of 8 000 × 8 000 pixels per slice with 0.7 µm isotropic detail detectability. This results in 64 Megabyte (MB) per slice. New CT and magnetic resonance imaging (MRI) systems can scale the image resolution and the reconstruction time. Whole human body scans at this resolution reach several Gigabytes (GB) of data load.

Large medical image data occurs in two different ways: first, as a huge amount of image data from thousands of images, such as in picture archiving and communication systems (PACS), and second, as a large amount of image data from a single data set. In practice, both ways multiply.
This paper discusses both aspects and is structured in the following manner: Sect. 2 outlines specific current research projects dealing with the problem of large image data. Section 2.1 considers the management of thousands of medical images, the difficulty of image content-based queries, and the acceptance by the physicians. Section 2.2 focuses on large data sets of fluorescence microscope images depicting molecular and cellular bioimaging probes. These images can be tracked over time and need several GB to store the raw data. Section 2.3 introduces another problem of handling GB data: in virtual reality (VR), stereoscopic real-time interaction and visualization use multiple views rendered from a single huge data set. The efficiency of these methods depends on the number of views, the pixel size of the rendered images and the size of the medical data sets. The rendered views can be blended with additional information from analyzed data, such as the flow field inside a human nasal cavity. Another example of large medical image data is described in Sect. 2.4. It considers the problems associated with Giga- to Terabyte data sets created by collecting microscopic images of human brain cuts at nerve fiber resolution. These cuts are registered to a single volume data set. Three-dimensional (3D) visualization of and interaction with Giga- to Tera-voxel data require specific modern software techniques. Section 3 gives an overview of advanced programming techniques on this topic. In Sect. 4, we summarize and conclude this paper with an outlook on future challenges.

2 Examples of large medical imaging

2.1 Medical image management and image data mining

PACS is a field where an "explosion" of data has been observed. In clinical routine, most modalities such as plain x-ray, CT, MRI and ultrasound (US), as well as optical imaging techniques such as endoscopy and microscopy, have turned directly digital, feeding the PACS with large amounts of image data. Several TB per year must be handled by the systems [41], which is regarded as a logistic problem. In medical informatics, we refer to "information logistics" when we aim at providing the right information at the right time to the right place [48, 49]. Several milestones of information logistics have been reached already [23, 24]. Regarding medical images, however, retrieval from PACS archives is still based on alpha-numerical annotations, such as the natural language text of the diagnosis, or simply the name of the patient, the date of acquisition, or some study meta-information.

Almost 15 years ago, Tagare et al. already reported on the impact expected from accessing image archives and mining image data by content rather than by textual description [56], and content-based image retrieval (CBIR) in medicine has become a subject of intense research [13, 38]. Appropriate image features (signatures) and similarity measures have been analyzed, ranging from
– global (i.e., the entire image is described by a single signature) to
– local (i.e., each image object or region of interest (ROI) is indexed with its own signature) to
– structural (i.e., a signature assessing the local or temporal constellation of relevant objects) approaches.
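To make the notion of a global signature and of a similarity measure concrete, the following minimal C++ sketch computes a normalized gray-level histogram as a global signature of an 8-bit image and compares two such signatures with the L1 (city-block) distance. It is an illustration only and not part of the IRMA framework; the flat pixel container and the choice of 64 bins are assumptions made for brevity.

#include <array>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Global signature: normalized 64-bin gray-level histogram of an 8-bit image.
std::array<double, 64> globalSignature(const std::vector<std::uint8_t>& pixels) {
    std::array<double, 64> hist{};                   // all 64 bins start at zero
    if (pixels.empty()) return hist;
    for (std::uint8_t p : pixels)
        hist[p / 4] += 1.0;                          // map 256 gray levels onto 64 bins
    for (double& bin : hist)
        bin /= static_cast<double>(pixels.size());   // normalize, so images of different sizes are comparable
    return hist;
}

// Similarity measure: L1 (city-block) distance between two signatures; 0 means identical histograms.
double l1Distance(const std::array<double, 64>& a, const std::array<double, 64>& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        d += std::fabs(a[i] - b[i]);
    return d;
}

int main() {
    std::vector<std::uint8_t> imageA(512 * 512, 40);   // two dummy "images" standing in for real pixel data
    std::vector<std::uint8_t> imageB(512 * 512, 200);
    std::printf("signature distance: %f\n", l1Distance(globalSignature(imageA), globalSignature(imageB)));
    return 0;
}

A local or structural signature would replace the whole-image histogram by per-ROI descriptors or by relations between detected objects, respectively.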
CBIR-PACS integration has also been addressed in recent research [15, 46, 61]. However, CBIR-based methods are still unavailable in today's radiological routine. Possible obstacles to the use of CBIR in medicine include the lack of (i) translational cooperation between biomedical and engineering experts, (ii) effective representation of medical content by low-level mathematical features, and (iii) comprehensive system evaluation and appropriate integration tools [38].

The image retrieval in medical applications (IRMA, http://irma-project.org) approach aims at providing a framework for medical CBIR applications including interfaces to PACS and hospital information systems (HIS) [19, 34]. In other words, IRMA exactly addresses the Kilo- to Terabyte challenge in medical image management and data mining. Figure 1 depicts a web-based graphical user interface (GUI) built from standardized IRMA input/output (I/O) templates [12]. In cooperation with the National Library of Medicine (NLM) at the National Institutes of Health (NIH), United States, a distributed retrieval system has been developed allowing shape-based access to a large database of spine x-ray images. In total, this database holds about 50 000 vertebrae.

In terms of data volume, the IRMA-based application supporting screening mammography [43] is even more comprehensive. Currently, it holds 10 517 digital mammographies with annotated ground truth, each in high resolution and with replicates in different sample sizes. Depending on the vendor of the imaging device, a single mammography provides up to 54 MB of uncompressed data [58]. Here, the Kilo- to Terabyte step already applies, and all issues related to performance remain unresolved and crucial. Due to the steadily increasing amount of medical image data, fast feature extraction and indexing techniques are needed that simultaneously narrow the gap between the numerical nature of features and the semantic meaning of images. Combining image content with natural language-based access to medical case records will provide advanced case-based reasoning methodology for medical diagnostics as well as treatment [42]. Therefore, interfacing image processing with automatic text analysis forms the subsequent challenge in medical informatics.

2.2 Bioimaging

A relatively young field generating rapidly increasing quantities of image data is the investigation of biomolecular
systems by molecular and cellular bioimaging [17]. A single (3D + t)-dataset acquired by fluorescence microscopy, for instance, can easily reach a volume of several GB of raw data. Recording only two such datasets per day leads to an estimated mean data volume of about 1 000 to 1 500 GB per year, making the visual inspection of this data impossible. Apart from the logistics of handling this data, its sheer volume also drives the need for automated analysis [14, 40] to replace visual examinations. Biomolecular systems are intrinsically dynamic, which makes the quantitative and reproducible analysis of motion the major challenge. Accordingly, various approaches for tracking molecular or cellular structures have been developed, with early work dating back to the 1970s, e.g., [1]. In [11], methods are described to evaluate polymer transport and turnover in fluorescent speckle microscopy (FSM), based on cross correlation and particle flow. Approaches for tracking microtubules can be found in [50], where active contours and a hidden Markov model (HMM) are used [39], or in [14] for a speckle-based technique. A global minimization method by simulated annealing for the tracking of fluorescent structures was developed in [47].

Fig. 1 Content-based medical image retrieval using the IRMA framework

Many cell functions crucially depend on the dynamics of the cytoskeleton, which, in mammalian cells, consists of actin filaments, microtubules and intermediate filaments. An approach to assess the influence of proteins such as GAR22 on the polymerization of microtubules is described in [27]. In [31], a registration-based method for tracking the continuous translocation of intermediate filaments towards the nucleus is developed (Fig. 2). The motility of cells is influenced by so-called focal adhesions (FAs). The analysis of their dynamics requires the segmentation and tracking of FAs [63].

Fig. 2 Cytoskeletal filaments of a living cell superimposed with motion vectors

Motion estimation is often formulated as an ill-posed problem [5]. In addition to measurements on the image data, the solution requires regularization via a priori knowledge of the typically expected properties of the motion field. These regularizers should reflect the properties of the moving structures, for instance by models from mathematical physics [45, 57] or fluid flow models [8, 10], and lead to mathematically tractable optimization criteria [16, 32, 33, 60]. In this context, an interesting question is how precisely these regularizers agree with models of cellular mechanics, organization and formation.
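As a generic illustration of such a regularized formulation (a textbook-style example, not one of the specific models cited above), the motion field (u, v) can be obtained as the minimizer of an energy that combines a data term derived from the image measurements with a smoothness prior weighted by a parameter α > 0:

\[
E(u, v) = \int_{\Omega} \bigl( I_x u + I_y v + I_t \bigr)^2 \, \mathrm{d}\mathbf{x}
        \;+\; \alpha \int_{\Omega} \bigl( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \bigr) \, \mathrm{d}\mathbf{x},
\]

where I_x, I_y and I_t denote the spatial and temporal image derivatives. Replacing the quadratic smoothness term by priors motivated by mathematical physics or fluid flow yields the model classes referenced above.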
2.3 Virtual reality in medical visualization

VR technology has long been a promising candidate for a more efficient analysis of large data [59], the key hypothesis being that the use of real-time, stereoscopic displays and direct user interaction enables better understanding in less time. General overviews of VR-based visualization are given in Part VII of the Visualization Handbook by Hansen and Johnson [22]. VR-based data analysis largely relies on the key concept of interactive data handling in order to facilitate an intuitive trial-and-error exploration. One of the major advantages of this approach is the rich user interface provided by VR technology, which enables us to combine interactive exploration and immersive sensation.

Although VR has become accepted as a valuable tool in the analysis of simulated technical and physical processes, in the medical world the situation is somewhat ambivalent. In the clinical practice of medical imaging, VR has not yet become widely accepted. Interviews with radiologists revealed that they are well trained in extracting 3D information from CT, MRI, and positron emission tomography (PET) data presented as two-dimensional (2D) slices. Also, the presentation of medical images in VR requires the preparation of raw data in a pre-processing step, which becomes a cost factor in the radiologists' daily workflow. However, the situation is quite different in research-related activities. Here, scientists not only appreciate the potential of VR for gaining insight into complex and large medical data, but VR-based visualization has also proven its value for the discussion of results across disciplines, between medical experts and researchers from other fields.
Diffusion tensor imaging (DTI) is a good example of active research going on in the medical field that can profit from VR-based visualization and interaction methods. DTI currently provides the most advanced method for the assessment of white matter fiber pathways in the living human brain. Hereby, the course of the fibers is estimated by measuring water diffusivity in the brain. From this DTI data, an effective diffusion tensor can be estimated within each voxel. Quantities such as the mean diffusivity, the principal diffusion direction, or the anisotropy of the diffusion ellipsoid can be computed from the elements of the diffusion tensor [4]. In contrast to deterministic tractography, the probabilistic approach accounts for the uncertainty within the estimated white matter fiber pathways and allows for drawing a clearer picture of the overall fiber architecture within the human brain.
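For concreteness, two of these quantities can be expressed through the eigenvalues λ1 ≥ λ2 ≥ λ3 of the diffusion tensor; the following standard definitions of the mean diffusivity (MD) and the fractional anisotropy (FA) are given here only as a worked illustration:

\[
\mathrm{MD} = \bar{\lambda} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad
\mathrm{FA} = \sqrt{\tfrac{3}{2}} \,
\sqrt{\frac{(\lambda_1 - \bar{\lambda})^2 + (\lambda_2 - \bar{\lambda})^2 + (\lambda_3 - \bar{\lambda})^2}
           {\lambda_1^2 + \lambda_2^2 + \lambda_3^2}} .
\]

The principal diffusion direction is the eigenvector associated with λ1.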
Traditional visualization software most widely used in DTI tractography research only reconstructs fiber tracts as solid paths in 3D without any information about the uncertainty. Therefore, in a project of the Jülich Aachen Research Alliance (JARA, http://www.jara.org), a VR-based visualization tool for the analysis of probabilistic tractography data is being developed [30]. The interactive visualization of probabilistic fiber tracts allows the domain scientists to directly interpret their results in 3D space (Fig. 3). The mental workload previously required for judging 2D slices or for coping with missing uncertainty information in non-interactive plots can be significantly reduced. Different probability values are coded with different colors and transparencies, permitting a 3D impression of the fiber tract while still revealing its main direction and the uncertainty around it. Using specific 3D visualization and interaction methods, interesting parts of the probabilistic fiber tracts can be revealed intuitively and referenced with anatomical landmarks. This allows a more accurate inspection of the anatomic structures in the direct vicinity of fiber pathways. Domain experts have stated that by combining anatomical information from a reference brain with overlaid fiber tracking results in 3D, the visualization gives considerably more valuable insight than standard visualization methods.

Fig. 3 Interactive exploration of probabilistic tractography data in a CAVE virtual environment

The analysis of flow phenomena is another case where VR-based, immersive visualization technology has gained in importance in recent years. With the ever growing performance of modern high performance computers, simulations that are run on those machines are becoming more and more complex. Today's common visualization techniques are inadequate for analyzing current simulations, which are usually based on unsteady 3D processes. Here, utilizing VR technology promises to facilitate the analysis procedure, because it allows for a visualization and an interactive, explorative analysis of complex, time-variant computational fluid dynamics (CFD) data directly in 3D space. In the domain of computational engineering science, VR technology has been successfully employed for nearly two decades. One of the first examples was the Virtual Windtunnel by Bryson et al. [7].

Recently, a growing number of research projects have been initiated in the medical field where flow phenomena play a crucial role. Fluid mechanics researchers, computer scientists, and medical experts are collaborating within interdisciplinary teams, one concentrating on the investigation of the aerodynamics of nasal respiration [25] and the other on the computational analysis of artificial blood pumps [26]. Using the Virtual Windtunnel paradigm implemented in a cave automatic virtual environment (CAVE)-like environment (Fig. 4) and direct interaction with the data in 3D space (Fig. 5), researchers significantly profit from VR for identifying and extracting relevant flow features in their datasets.

Fig. 4 The Virtual Windtunnel: Interactive exploration of the flow field inside a human nasal cavity by real-time particle tracing in 3D space

2.4 Neuroimaging

A new data set of the human brain, with a volume in the GB range, is generated from 1 320 histological cuts [2, 44]. Each cut has a thickness of 100 µm and is scanned by polarized
light imaging (PLI) with 3 569 × 2 700 pixels [3]. The total amount of memory for all scanned images reaches 47.4 GB or 11.9 GB using 32 or 8 bits per pixel, respectively. Nerve fiber paths are reconstructed from the PLI scans and saved in a second huge data set. PLI-based reconstruction of nerve fibers is comparable to DTI. However, PLI provides an extraordinarily high resolution, which currently cannot be reached using in-vivo techniques. Further, the polarized histological cuts are scanned using a microscope and attain several TB of volume per data set. Micron resolution meets nerve fiber resolution and enables analysis of the architecture of nerve fibers in the human brain.
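The quoted memory figures follow directly from the image geometry; as a back-of-the-envelope check (assuming binary Gigabyte prefixes),

\[
1\,320 \;\text{cuts} \times (3\,569 \times 2\,700) \;\text{pixels} \times 4 \;\text{bytes}
\;\approx\; 50.9 \times 10^{9} \;\text{bytes} \;\approx\; 47.4 \;\text{GB},
\]

and one quarter of this, about 11.9 GB, when only 8 bits per pixel are stored.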
Interactive visualization and navigation are challenging tasks with these huge amounts of data. A particular 3D navigator has been developed to visualize specific areas of the brain together with the corresponding nerve fiber data in real time. Interactive visualization of nerve fibers combined with PLI scans can be achieved using a multi-modal technique, in which multiple volume data sets are combined [51]. For this purpose, multiple data sets are loaded into the memory of the central processing unit (CPU) or the graphics processing unit (GPU), which requires memory space for all sets at once.

Figure 6 shows a multi-modal ray casting of two volume data sets (MRI and PET of a head) combined in a single 3D view using two different transfer functions. The images from the brain are combined in a 3D view with previously reconstructed nerve fibers. Figure 7 shows a 3D visualization of reconstructed nerve fibers from 36 PLI scans of a small area of the brain (27.39 × 22.72 × 3.20 mm³).

Fig. 5 (Color online) Direct interaction with "virtual red blood cells" flowing through a simulated artificial blood pump allows an intuitive navigation in space and time. Here, the domain expert picks a particle in order to navigate to a specific point in time

Fig. 6 (Color online) Multi-modal ray casting visualization of MRI (blue) and PET (ocher) data sets with two different 1D transfer functions

3 Software techniques coping with large data

Due to the increasing load of data (Table 1), image processing and visualization algorithms have to be adjusted. For example, the artificial blood pump dataset consists of a 3.6 million cell tetrahedral grid for each of 200 time steps, leading to a total of 30 GB of data. Such data sizes are quite easy to handle in standard visualizations. However, interactive, real-time post-processing and rendering of the data on high-resolution, immersive displays is a challenging task, requiring the development of advanced parallelization, data management, and computer graphics methods. Future datasets in this field are predicted to reach the TB level.

Scalable algorithms must be developed using parallel techniques to reduce processing time and increase memory efficiency [28]. If the amount of data exceeds the memory of the CPU or GPU, several techniques can be employed, including compressed or packed representations of the data [29], decomposition techniques, multi-resolution schemes [20, 53, 54], or out-of-core techniques [18]. Recent research has combined bricking and decomposition with a hierarchical data structure.

Here, we consider the interactive rendering of large volume data containing billions of samples [37, 52]. Different programming steps are used for the data management: (i) decomposition techniques to obtain a multi-resolution subdivision of the data, (ii) streaming techniques to asynchronously fetch the data needed for the current view, and (iii) algorithms to render the volume visualization or to visualize the zoomed data.

For the decomposition, the volume data is subdivided into smaller bricks, which are processed further. The bricks are organized in a hierarchical multi-resolution structure; data structures such as a binary space partitioning (BSP) tree, an octree or a kd-tree can be used for the decomposition, as sketched below. The tree structure is hierarchical: the leaves represent the original data, while the inner nodes hold a filtered, coarse-to-fine representation of the original volume data and are saved out-of-core.
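The following C++ fragment sketches one possible layout of such a brick hierarchy under the assumptions made here (an octree with eight children per node, bricks of 64³ voxels, and bricks loaded on demand from out-of-core storage); it only illustrates the data structure and is not the implementation used in the cited systems.

#include <array>
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

constexpr int kBrickSize = 64;   // assumed edge length of one brick in voxels

// One node of the multi-resolution octree. Leaves reference bricks at original
// resolution; inner nodes reference a filtered (downsampled) brick of their subtree.
struct BrickNode {
    std::array<std::unique_ptr<BrickNode>, 8> children;   // all empty for a leaf
    std::string storageKey;                                // where the brick lives in the out-of-core file
    std::vector<std::uint16_t> voxels;                     // filled only while the brick is resident in memory
    int level = 0;                                         // 0 = coarsest level (root), increasing towards the leaves

    bool isLeaf() const { return !children[0]; }

    // Streaming hook: make sure the brick is resident before it is rendered.
    void ensureResident() {
        if (voxels.empty())
            voxels.resize(static_cast<std::size_t>(kBrickSize) * kBrickSize * kBrickSize);
        // a real system would read the brick asynchronously from disk here
    }
};

int main() {
    BrickNode root;          // coarsest representation of the whole volume
    root.ensureResident();   // would trigger an out-of-core load in a real renderer
    return root.isLeaf() ? 0 : 1;
}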
Fig. 7 (Color online) 3D visualization of 20 816 reconstructed nerve fibers (15 MB) from 36 PLI slides of a human brain. The region with size of 27.39 mm × 22.72 mm × 3.20 mm that is marked in red on the left hand side is zoomed on the right. The color of the nerve fibers represents their direction. In the small area shown, all fibers have nearly the same direction

Table 1 Examples of large medical image data (fps = frames per second; fpd = frames per day)

Data                           | Resolution | No of images | No of slices | No of pixels      | No of bits | Total memory
Whole body MRI scan            | 8 mm       | 1            | 250          | 256 × 256         | 8          | 16 MB
Screening mammography          | 50 µm      | 4            | 1            | 5 000 × 6 000     | 16         | 230 MB
Whole body CT scan             | 1 mm       | 1            | 2 000        | 512 × 512         | 12         | 750 MB
4D sequence of a beating heart | 20 fps     | –            | 240          | 512 × 512         | 12         | per sec: 1.75 GB
PLI human brain scan           | 20 µm      | 1            | 3 200        | 8 000 × 8 000     | 16         | 200 GB
IRMA mammography database      | 50 µm      | 10 517       | 1            | 5 000 × 6 000     | 16         | 590 GB
Fluorescence microscopy        | 2 fpd      | –            | –            | –                 | –          | per year: 1 TB
Microscopic human brain scan   | 1.5 µm     | 1            | 7 200        | 106 667 × 106 667 | 16         | 66 TB
LIFE full body MRI cohort      | 8 mm       | 200 000      | 250          | 256 × 256         | 8          | 3 PB

A streaming technique fetches the data needed for the current view asynchronously at runtime. Only this visible data is sent to the visualization pipeline, which renders the specific 3D view using a GPU-based ray casting algorithm.
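At the core of such a ray casting step, every ray accumulates color and opacity front to back through the (bricked) volume. The following C++ sketch shows this inner loop for a single ray, with the volume sampling and the transfer function left abstract; it is a schematic CPU version for illustration, not the GPU kernel of the cited systems.

#include <cmath>
#include <functional>

struct Rgba { float r, g, b, a; };   // color and opacity delivered by the transfer function

// Front-to-back compositing along one ray. 'sampleVolume' maps the ray parameter t
// to a scalar value; 'transfer' maps that value to color and opacity.
Rgba castRay(float tNear, float tFar, float stepSize,
             const std::function<float(float)>& sampleVolume,
             const std::function<Rgba(float)>& transfer) {
    Rgba out{0.f, 0.f, 0.f, 0.f};
    for (float t = tNear; t <= tFar && out.a < 0.99f; t += stepSize) {   // early ray termination
        Rgba s = transfer(sampleVolume(t));
        float w = (1.f - out.a) * s.a;   // remaining transparency times sample opacity
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
    }
    return out;
}

int main() {
    auto volume = [](float t) { return std::fabs(std::sin(t)); };   // dummy scalar field along the ray
    auto tf = [](float v) { return Rgba{v, v, v, 0.05f * v}; };     // dummy gray-level transfer function
    Rgba c = castRay(0.f, 10.f, 0.05f, volume, tf);
    return c.a > 0.f ? 0 : 1;
}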
The main disadvantage of working with Giga- to Terabyte volume data, aside from the logistic problem, is the runtime performance: users cannot be expected to wait long for answers from the program. Therefore, current research is focused on advanced parallelization techniques in order to reach an acceptable real-time response. These techniques require different hardware architectures, with one or more computers and one or more CPUs and GPUs. Several programming languages have been developed to support such architectures (a minimal OpenMP sketch follows the list):

1. Parallel CPU-based programming on a single node (one computer with multiple CPUs) with shared memory, using threaded programming techniques like OpenMP or QtThreaded.
2. Parallel GPU-based programming on a single node with one GPU or multiple GPUs, using programming languages for the massively parallel cores on the graphics card [53–55, 62]. With advances in GPU architecture, several algorithms have reached higher efficiency by transferring the program from the CPU to the GPU. This means that instead of four to eight parallel CPUs, 240 to 480 massively parallel processing cores on the graphics card are used. Several languages have been developed by the graphics card industry to code algorithms for execution on the GPU, for example:
   – Compute Unified Device Architecture (CUDA) is the computing engine in NVIDIA graphics processing units. C for CUDA is a C-like programming language developed especially for NVIDIA graphics cards.
   – Open Computing Language (OpenCL) is a framework that executes across heterogeneous platforms
consisting of CPUs, GPUs, and other processors. OpenCL provides parallel computing using task-based and data-based parallelism and is a common language for general purpose programming on any graphics card.
3. Parallel programming on multiple nodes in a cluster of linked computers connected through a fast local area network (LAN), which is also referred to as Grid computing [9]. Special software interfaces, like the message passing interface (MPI), manage the communication between the processes.
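As a minimal illustration of the first, CPU-based option, the following C++/OpenMP sketch parallelizes a voxel-wise windowing operation over a large volume with a single directive; the workload and the volume size are assumptions chosen for illustration, not code from the systems discussed above.

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t numVoxels = std::size_t(512) * 512 * 512;    // assumed volume size (128 Mvoxels)
    std::vector<std::uint16_t> volume(numVoxels, 1234);            // dummy 12-bit data
    std::vector<std::uint8_t> windowed(numVoxels);

    const int low = 1000, high = 3000;                              // assumed gray-level window

    // Shared-memory parallelism: each thread of the node processes a chunk of the voxel range.
    #pragma omp parallel for schedule(static)
    for (long long i = 0; i < static_cast<long long>(numVoxels); ++i) {
        int v = volume[i];
        v = v < low ? low : (v > high ? high : v);                  // clamp to the window
        windowed[i] = static_cast<std::uint8_t>(255 * (v - low) / (high - low));
    }

    std::printf("first windowed voxel: %u\n", static_cast<unsigned>(windowed[0]));
    return 0;
}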
4 Summary and conclusion

Current research in medical image management and data mining, bioimaging, virtual reality in visualization, and neuroimaging has been discussed, and advanced programming techniques have been summarized. To handle Giga- to Terabytes of image data, scalable programs have to be developed that support different parallel hardware architectures. Modern programming languages like C for CUDA, OpenCL and QtThreaded have been introduced, supporting process threading on several CPUs and GPUs.

The next level, from Tera- to Petabyte, is already looming on the horizon. High-throughput next-generation sequencing produces up to 100 TB of data for a single investigation (30 repetitions). In translational medical research, whole body MRI is gaining popularity. The recently launched Leipzig Interdisciplinary Research Cluster of Genetic Factors, Clinical Phenotypes and Environment (LIFE) project in Germany already aims at full-body MRI scanning of a population cohort of 200 000 subjects. Assuming a gray scale resolution of eight bit, 256 × 256 pixel slices, and 8 mm slice thickness [6], one scan yields about 16 MB, and the entire cohort will amount to approximately 3 PB.

In the future, PACS, CBIR and HIS have to overcome the logistic problem of handling Tera- to Petabytes of biomedical image data. Data compression, decomposition and parallelization techniques will be the keys to developing real-time applications, which will then also attain acceptance from the physicians.

References

1. Axelrod D, Koppel DE, Schlessinger J, Elson E, Webb WW (1976) Mobility measurement by analysis of fluorescence photobleaching recovery kinetics. Biophys J 16(9):1055–1069
2. Axer M, Axer H, Grässel D, Amunts K, Zilles K, Pietrzyk U (2007) Nerve fiber mapping of the human visual cortex using polarized light imaging. IEEE Trans Nuclear Sci, pp 4345–4347
3. Axer M, Dammers J, Grässel D, Amunts K, Pietrzyk U, Zilles K (2008) Nerve fiber mapping in histological sections of the human brain by means of polarized light. In: Conference Proceedings Human Brain Mapping, Melbourne
4. Basser PJ, Mattiello J, Lebihan D (1994) MR diffusion tensor spectroscopy and imaging. Biophys J 66:259–267
5. Bertero MA, Poggio T, Torre V (1988) Ill-posed problems in early vision. Proc IEEE 76(8):869–889
6. Brennan DD, Whelan PF, Robinson K, Ghitta O, O'Brien JM, Sadleir R, Eustace SJ (2005) Rapid automated measurement of body fat distribution from whole-body MRI. AJR Am J Roentgenol 185(2):418–423
7. Bryson S, Levit C (1991) The virtual windtunnel: an environment for the exploration of three-dimensional unsteady flows. In: IEEE Visualization '91, Proceedings, pp 17–24
8. Corpetti T, Memin E, Perez P (2002) Dense estimation of fluid flows. IEEE Trans Pattern Anal Mach Intell 24(3):365–380
9. Coveney PV (2005) Scientific grid computing. Philos Transact A Math Phys Eng Sci 363(1833):1707–1713
10. Cuzol A, Memin E (2009) A stochastic filtering technique for fluid flow velocity fields tracking. IEEE Trans Pattern Anal Mach Intell 31(7):1278–1293
11. Danuser G, Waterman-Storer CM (2006) Quantitative fluorescent speckle microscopy of cytoskeleton dynamics. Annu Rev Biophys Biomol Struct 35:361–387
12. Deserno TM, Güld MO, Plodowski B, Spitzer K, Wein BB, Schubert H, Ney H, Seidl T (2008) Extended query refinement for medical image retrieval. J Digit Imaging 21(3):280–289
13. Deserno TM, Antani S, Long R (2009) Ontology of gaps in content-based image retrieval. J Digit Imaging 22(2):202–215
14. Dorn JF, Danuser G, Yang G (2008) Computational processing and analysis of dynamic fluorescence image data. Methods Cell Biol 85:497–538
15. El-Kwae EA, Xu H, Kabuka MR (2000) Content-based retrieval in picture archiving and communication systems. J Digit Imaging 13(2):70–81
16. Geman S, Geman D (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans Pattern Anal Mach Intell 6(6):721–741
17. Gerlich D, Mattes J, Eils R (2003) Quantitative motion analysis and visualization of cellular structures. Methods 29(1):3–13
18. Gobbetti E, Marton F, Guitián JA (2008) A single-pass GPU ray casting framework for interactive out-of-core rendering of massive volumetric datasets. Vis Comput 24:797–806
19. Güld MO, Thies C, Fischer B, Lehmann TM (2007) A generic concept for the implementation of medical image retrieval systems. Int J Med Inform 76(2-3):252–259
20. Guthe S, Wand M, Gonser J, Strasser W (2002) Interactive rendering of large volume data sets. IEEE Trans Vis Comput Graph 9(3):53–60
21. Handels H, Meinzer HP, Deserno TM, Tolxdorff T (2010) Advances and recent developments in medical image computing. Int J Comput Assist Radiol Surg. doi:10.1007/s11548-010-0540-6
22. Hansen C, Johnson C (2004) The visualization handbook. Elsevier, Amsterdam
23. Haux R (2006) Health information systems. Past, present, future. Int J Med Inform 75:268–281
24. Haux R (2010) Medical informatics. Past, present, future. Int J Med Inform 79:599–610
25. Hentschel B, Bischof C, Kuhlen T (2007) Comparative visualization of human nasal airflows. Medicine meets virtual reality 15. IOS Press, Amsterdam
26. Hentschel B, Tedjo I, Probst M, Wolter M, Behr M, Bischof C, Kuhlen T (2008) Interactive blood damage analysis for ventricular assist devices. IEEE Trans Vis Comput Graph 14(6):1515–1522
27. Herberich G, Ivanescu A, Gamper I, Sechi A, Aach T (2010) Analysis of length and orientation of microtubules in wide-field fluorescence microscopy. Lect Notes Comput Sci 6376:182–191
28. Howison M, Bethel EW, Childs H (2010) MPI-hybrid parallelism for volume rendering on large, multi-core systems. In: Eurographics symposium on parallel graphics and visualization
29. Ihm S, Park I (1999) Wavelet-based 3D compression scheme for interactive visualization of very large volume data. Comput Graph Forum 18:3–15
30. von Kapri A, Rick T, Caspers S, Eickhoff S, Zilles K, Kuhlen T (2010) Evaluating a visualization of uncertainty in probabilistic tractography. In: Proc SPIE Med Imaging
31. Kölsch A, Windoffer R, Würflinger T, Aach T, Leube RE (2010) The keratin filament cycle of assembly and disassembly. J Cell Sci 123:2266–2272
32. Komodakis N, Tziritas G (2007) Approximate labeling via graph cuts based on linear programming. IEEE Trans Pattern Anal Mach Intell 29(8):1436–1453
33. Komodakis N, Tziritas G, Paragios N (2007) Fast, approximately optimal solutions for single and dynamic MRFs. In: IEEE Conf Comput Vis Patt Recogn, pp 1–8
34. Lehmann TM, Güld MO, Thies C, Fischer B, Spitzer K, Keysers D, Ney H, Kohnen M, Schubert H, Wein BB (2004a) Content-based image retrieval in medical applications. Methods Inf Med 43(4):354–361
35. Lehmann TM, Meinzer HP, Tolxdorff T (2004b) Advances in biomedical image analysis: past, present and future challenges. Methods Inf Med 43(4):308–314
36. Lehmann TM, Aach T, Witte H (2006) Sensor signal and image informatics. State Art Current Top 47(2):57–67
37. Levoy M (1990) Efficient ray tracing of volume data. ACM Trans Graph 9(3):245–261
38. Long LR, Antani S, Deserno TM, Thoma GR (2009) Content-based image retrieval in medicine: retrospective assessment, state of the art, and future directions. Int J Health Inform Syst Informat 4(1):1–16
39. Manjunath BS, Sumengen B, Bi Z, Byun J, El-Saban M, Fedorov D, Vu N (2006) Towards automated bioimage analysis: from features to semantics. In: Proc IEEE Int Symp Biomed Imaging, pp 255–258
40. Meijering E, Smal I, Danuser G (2006) Tracking in molecular bioimaging. IEEE Signal Process Mag 23(3):46–53
41. Müller H, Michoux N, Bandon D, Geissbuhler A (2004) A review of content-based image retrieval systems in medical applications: clinical benefits and future directions. Int J Med Inform 73(1):1–23
42. Névéol A, Deserno TM, Darmoni SJ, Güld MO, Aronson AR (2009) Natural language processing versus content-based image analysis for medical document retrieval. J Am Soc Inf Sci Technol 60(1):123–134
43. de Oliveira JEE, Machado AMC, Chavez GC, Lopes APB, Deserno TM, Araujo A (2010) MammoSys: a content-based image retrieval system using breast density patterns. Comput Methods Programs Biomed 99(3):289–297
44. Palm C, Axer M, Grässel D, Dammers J, Lindemeyer J, Zilles K (2010) Towards ultra-high resolution fibre tract mapping of the human brain: registration of polarised light images and reorientation of fibre vectors. Front Human Neurosci 4(9):1–16
45. Papenberg N, Bruhn A, Brox T, Didas S, Weickert J (2006) Highly accurate optic flow computation with theoretically justified warping. Int J Comput Vis 67(2):141–158
46. Qi H, Snyder WE (1999) Content-based image retrieval in picture archiving and communications systems. J Digit Imaging 12(2):81–83
47. Racine V, Hertzog A, Jouanneau J, Salamero J, Kervrann C, Sibarita JB (2006) Multiple-target tracking of 3D fluorescent objects based on simulated annealing. In: Proc IEEE Int Symp Biomed Imaging, pp 1020–1023
48. Reichertz PL (1977) Towards systematization. Methods Inf Med 16:125–130
49. Reichertz PL (2006) Hospital information systems. Past, present, future. Int J Med Inform 75:282–299
50. Saban M, Altinok A, Peck A, Kenney C, Feinstein S, Wilson L, Rose K, Manjunath BS (2006) Automated tracking and modeling of microtubule dynamics. In: Proc IEEE Int Symp Biomed Imaging, pp 1032–1035
51. Scholl I, Schubert N, Pietrzyk U (2010) GPU basiertes Volumenrendering von multimodalen medizinischen Bilddaten in Echtzeit. In: Bildverarbeitung für die Medizin 2010: Algorithmen, Systeme, Anwendungen. Springer, Berlin
52. Mikula S, Trotts I, Stone JM, Jones EG (2007) Internet-enabled high-resolution brain mapping and virtual microscopy. NeuroImage 35:9–15
53. Strengert M, Magallón M, Weiskopf D, Guthe S, Ertl T (2004) Hierarchical visualization and compression of large volume datasets using GPU clusters. In: Eurographics symposium on parallel graphics and visualization
54. Strengert M, Magallón M, Weiskopf D, Guthe S, Ertl T (2005) Large volume visualization of compressed time-dependent datasets on GPU clusters. Parallel Comput 31(2):205–219
55. Strengert M, Mueller C, Dachsbacher C, Ertl T (2008) CUDASA: Compute Unified Device and Systems Architecture. In: Eurographics symposium on parallel graphics and visualization
56. Tagare HD, Jaffe CC, Duncan J (1997) Medical image databases: a content-based retrieval approach. J Am Med Inform Assoc 4(3):184–198
57. Terzopoulos D (1988) The computation of visible surface representations. IEEE Trans Pattern Anal Mach Intell 10(4):417–438
58. Trambert M (2006) Digital mammography integrated with PACS: real world issues, considerations, workflow solutions, and reading paradigms. Semin Breast Dis 9(2):75–81
59. van Dam A, Forsberg A, Laidlaw DH, LaViola J, Simpson RM (2000) Immersive virtual reality for scientific visualization: a progress report. IEEE Comput Graph Appl 20(6):26–52
60. Weickert J, Schnörr C (2001) A theoretical framework for convex regularizers in PDE-based computation of image motion. Int J Comput Vis 45(3):245–264
61. Welter P, Hocken C, Deserno TM, Grouls C, Günther RW (2010) Workflow management of content-based image retrieval for CAD support in PACS environments based on IHE. Int J Comput Assist Radiol Surg 5(4):393–400
62. Westermann R, Ertl T (1998) Efficiently using graphics hardware in volume rendering applications. In: Proceedings of SIGGRAPH '98, pp 169–178
63. Würflinger T, Gamper I, Aach T, Sechi AS (2011) Automated segmentation and tracking for large scale analysis of focal adhesion dynamics. J Microsc 241:37–53

Ingrid Scholl received the diploma degree in computer science from RWTH Aachen University, Germany, in 1995. From 1995 to 1997, she was a research scientist in image processing at the Department of Medical Informatics, Medical Faculty, RWTH Aachen. After two years of maternity leave, she was from 1999 until 2006 a senior software developer at GPC Biotech AG and MuellerBBM VibroAcoustic GmbH in Munich. Since 2006, she has been a professor of computer graphics at the FH Aachen University of Applied Sciences, Aachen, Germany. Her research interests are medical image processing, large data and multi-modal visualization, and general purpose GPU programming.
Til Aach received his diploma and Doctoral degrees, both with honors in EE, from RWTH Aachen University in 1987 and 1993, respectively. While working towards his Doctoral Degree, he was a research scientist with the Institute for Communications Engineering, RWTH Aachen University, being in charge of several projects in image analysis, 3D-television and medical image processing. In 1984, he was an intern with Okuma Machinery Works Ltd., Nagoya, Japan. From 1993 to 1998, he was with Philips Research Labs, Aachen, Germany, where he was responsible for several projects in medical imaging, image processing and analysis. In 1996, he was also an independent lecturer with the University of Magdeburg, Germany. In 1998, he was appointed a Full Professor and Director of the Institute for Signal Processing, University of Luebeck. In 2004, he became Chairman of the Institute of Imaging and Computer Vision, RWTH Aachen University. His research interests are in medical and industrial image processing, signal processing, pattern recognition, and computer vision. He has authored or co-authored over 250 papers, and received several awards, among these the award of the German "Informationstechnische Gesellschaft" (ITG/VDE) for a paper published in the IEEE Transactions on Image Processing in 1998. Dr. Aach is a co-inventor of about 20 patents. From 2002 to 2008, he was an Associate Editor of the IEEE Transactions on Image Processing. He was a Technical Program Co-Chair for the IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) in 2000, 2002, 2004, and 2006. He is a member of the Bio-Imaging and Signal Processing Committee (BISP-TC) of the IEEE Signal Processing Society.

Thomas M. Deserno (né Lehmann), PhD, is full professor of Medical Informatics at the Medical School, RWTH Aachen University, Aachen, Germany, where he heads the Division of Medical Image Processing. In addition to lecturing graduate courses on biomedical imaging and image processing, he co-authored the textbook Image Processing for the Medical Sciences (Springer-Verlag, 1997) and edited the Handbook of Medical Informatics (Hanser Verlag, 2005) and recently Biomedical Image Processing (Springer-Verlag, 2011). His research interests include discrete realizations of continuous image transforms, medical image processing applied to quantitative measurements for computer-assisted diagnoses, and content-based image retrieval from large medical databases. Dr. Deserno has authored over 100 scientific publications, is Senior Member of IEEE, a member of SPIE and IADMFR, serves on the International Editorial Boards of Dentomaxillofacial Radiology, Methods of Information in Medicine, and World Journal of Radiology, and is Co-editor Europe of the International Journal of Healthcare Information Systems and Informatics.

Torsten Kuhlen is head of the Virtual Reality Group at RWTH Aachen University. In 2008, he was appointed to a professorship in the Department of Computer Science. His research interests include basic technologies as well as scientific applications of VR. For more than 10 years, he has been conducting several VR joint research projects in the fields of mechanical engineering, flow simulation, medicine, and life science. Since 2006, he has been co-speaker of the steering committee of the VR/AR chapter of Germany's computer society.
