Gadgetron: An Open Source Framework For Medical Image Reconstruction
INTRODUCTION
Image reconstruction software is an integral part of all
modern medical imaging devices, and medical image
reconstruction research is a strong and active field with
hundreds of articles published each year. In the field of
magnetic resonance imaging (MRI), great advances have
been driven by image reconstruction algorithms. Examples include parallel MRI reconstruction (1–3) and, more recently, compressive sensing (4,5).
Most image reconstruction algorithms are published
without a reference implementation (i.e., without source
code). In some cases, the authors, or the device vendors they collaborate with, are reluctant to share their algorithms. In many other cases, there is simply no practical
1 Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, Maryland, USA
2 Department of Computer Science, Aarhus University, Aarhus, Denmark
3 Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
*Correspondence to: Michael S. Hansen, Ph.D., National Heart, Lung, and Blood Institute, NIH, Building 10/B1D-405, 10 Center Drive, Bethesda, MD 20892. E-mail: michael.hansen@nih.gov
Disclosure: The National Heart, Lung, and Blood Institute and Siemens Medical Systems have a Cooperative Research and Development Agreement (CRADA).
Received 16 March 2012; revised 25 April 2012; accepted 2 June 2012.
DOI 10.1002/mrm.24389
Published online 12 July 2012 in Wiley Online Library (wileyonlinelibrary.com).
This article describes a modular open source medical image reconstruction framework called Gadgetron,
which has been designed to encompass all of the
FIG. 1. Gadgetron architecture. The Gadgetron is in communication with a client application through a TCP/IP connection. The client
application sends data to the Gadgetron and associated with each data package is a Message ID. Based on the Message ID, control of the
socket is handed over to a specific Reader, which is capable of deserializing the incoming data package. The data package is converted to
message blocks that are added to the first Gadget's queue. Data are then passed down the Gadget stream where each Gadget can modify
and transform the data. Any Gadget can return images (or partially processed data) to the Gadgetron framework. Based on the Message
ID of this return data package, the control of the socket and the data are handed to a particular Writer, which is responsible for writing the
return message to the client.
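The Reader dispatch described in the caption can be sketched in a few lines. This is an illustration only: the message IDs and payload layout below are invented for the example and do not reflect the actual Gadgetron wire protocol.

```python
import struct

# Hypothetical message IDs; the real protocol defines its own values.
MSG_ACQUISITION = 1008

class AcquisitionReader:
    """Deserializes a raw-data package from the incoming byte stream."""
    def read(self, payload: bytes):
        # Assumed layout: a uint32 sample count followed by float32 samples.
        (n,) = struct.unpack_from("<I", payload, 0)
        return list(struct.unpack_from(f"<{n}f", payload, 4))

class StreamController:
    """Hands control of the incoming data to the Reader registered
    for the package's message ID, mirroring Fig. 1."""
    def __init__(self):
        self.readers = {}

    def register_reader(self, msg_id, reader):
        self.readers[msg_id] = reader

    def dispatch(self, msg_id, payload):
        if msg_id not in self.readers:
            raise ValueError(f"No Reader registered for message ID {msg_id}")
        return self.readers[msg_id].read(payload)

controller = StreamController()
controller.register_reader(MSG_ACQUISITION, AcquisitionReader())
payload = struct.pack("<I3f", 3, 1.0, 2.0, 3.0)
samples = controller.dispatch(MSG_ACQUISITION, payload)
```

The same table-driven dispatch applies on the return path, where Writers are selected by the Message ID of the outgoing package.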
For source code components such as class and function names, a monospace font will be used, e.g., GadgetStreamController.
More algorithms will be added in future releases and made available continuously on the Gadgetron web page (see later). It
is beyond the scope of this article to explain the algorithms
in detail, but references to some key reconstruction steps
are provided in the following. We use MRI as the primary
demonstration modality.
FIG. 3. Python prototyping architecture. The Gadgetron framework provides a specialized Gadget (PythonGadget). The
PythonGadget communicates with a Python interpreter instance
through a PythonCommunicator object (a process wide singleton),
which ensures that only one Gadget attempts to access the interpreter at any given time. At run-time, the PythonGadget will instruct
the PythonCommunicator to load (import) a Python module in the
interpreter instance as specified in the XML configuration file. When
data arrive in the Gadget, they are passed on to the Python module
(through the PythonCommunicator). Each loaded Python module
receives a reference to the PythonGadget such that reconstruction results can be passed back to the calling Gadget and continue
down the stream. As indicated, a given Gadget stream can contain
a mixture of regular C/C++ Gadgets and Python Gadgets.
Two Gadgets cannot safely access the interpreter at the same time, and consequently the communication with the Python interpreter is centralized in
the PythonCommunicator (a process wide singleton for
the Gadgetron). When the PythonGadget loads, it will
request that the PythonCommunicator loads the Python
module. When data need to be passed to the Python module, it will be passed first to the PythonCommunicator,
which will pass it on to the appropriate Python module
when the Python interpreter becomes available.
As indicated in Fig. 3, it is possible to have an arbitrary
number of PythonGadgets in the reconstruction pipeline.
Moreover, the PythonGadgets can be mixed with standard Gadgets implemented purely in C/C++. This enables
the user to reuse existing, efficient implementations of
reconstruction steps while maintaining the capability of
prototyping in Python.
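A Python reconstruction module in this architecture might look as follows. The method names and the way the reference back to the calling Gadget is passed are invented for illustration and do not match the exact Gadgetron Python API.

```python
import numpy as np

class AccumulateAndFFTGadget:
    """Hypothetical Python Gadget module: buffers k-space lines and,
    when the buffer is full, inverse-Fourier-transforms the data and
    passes the resulting image back via the reference to the calling
    Gadget so it can continue down the stream (cf. Fig. 3)."""
    def __init__(self, next_gadget, n_lines):
        self.next_gadget = next_gadget  # reference back to the caller
        self.n_lines = n_lines
        self.buffer = []

    def process(self, kspace_line):
        self.buffer.append(kspace_line)
        if len(self.buffer) == self.n_lines:
            kspace = np.stack(self.buffer)
            image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
            self.next_gadget.put(image)  # continue down the stream
            self.buffer = []

class Sink:
    """Stand-in for the downstream Gadget receiving the images."""
    def __init__(self):
        self.images = []
    def put(self, image):
        self.images.append(image)

sink = Sink()
gadget = AccumulateAndFFTGadget(sink, n_lines=4)
for line in np.eye(4, dtype=complex):  # toy 4x4 "k-space"
    gadget.process(line)
```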
Standalone Compilation
The core reconstruction algorithm components included
in the framework (e.g., FFTs, iterative solvers, etc.) are contained in the various toolboxes (see earlier) and compiled
into shared libraries. As previously mentioned, these toolboxes can either be linked into Gadgets or be used in standalone applications. To exemplify how to use these toolboxes in practice, we include a number of small standalone
applications. These applications are compiled outside the
streaming client/server architecture of the Gadgetron (Fig.
1) and demonstrate how to use the toolboxes in third-party
applications.
EXAMPLE APPLICATIONS
This section outlines some of the example applications that are included in the initial Gadgetron distribution.
The Gadgetron includes a GPU-based real-time reconstruction engine for Cartesian parallel MRI. The implemented
algorithm is an extension of the GRAPPA algorithm (3),
which has been optimized for high-throughput reconstruction of real-time MRI using a large number of receive
channels. To our knowledge, the closest implementation in
the literature is the one by Saybasili et al. (17). In the present
Gadgetron implementation, the GRAPPA convolution kernels are Fourier transformed to image space and applied
directly to the aliased images using pixelwise multiplications. Additionally, the image space unmixing coefficients
for individual channels are combined in image space using
a B1-weighted coil combination procedure as per Walsh
et al. (18). Data are assumed to be acquired in a time-interleaved fashion, so that a number of neighboring frames
can be averaged and used for calibration data (19).
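The pixelwise application of image-space unmixing coefficients can be illustrated with a short sketch. Array shapes and names are illustrative; the actual implementation runs on the GPU.

```python
import numpy as np

def apply_unmixing(aliased_images, unmix):
    """Pixelwise application of image-space GRAPPA unmixing coefficients.

    aliased_images: (channels, ny, nx) aliased channel images
    unmix:          (channels, ny, nx) unmixing coefficients
                    (Fourier-transformed GRAPPA kernels, combined with
                    B1-weighted coil combination weights)

    Returns the combined unaliased image: at each pixel, the sum over
    channels of coefficient times aliased pixel value."""
    return np.sum(unmix * aliased_images, axis=0)

rng = np.random.default_rng(0)
aliased = rng.standard_normal((16, 64, 64)) + 1j * rng.standard_normal((16, 64, 64))
unmix = rng.standard_normal((16, 64, 64)) + 1j * rng.standard_normal((16, 64, 64))
combined = apply_unmixing(aliased, unmix)
```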
Figure 4 outlines the Gadgetron GRAPPA reconstruction chain. The core algorithm is implemented in the
GrappaGadget and the GrappaUnmixingGadget. The
GrappaGadget is responsible for monitoring the acquired
raw data and will trigger an update of GRAPPA coefficients when required (and data are available). The
GRAPPA unmixing coefficients and the required coil sensitivity maps for B1-weighted coil combination are calculated on the GPU by the GrappaCalculator. Once
the GRAPPA coefficients have been calculated, they are
downloaded to the CPU where they are stored in a memory structure that can be accessed by the downstream
GrappaUnmixingGadget. The GrappaUnmixingGadget
performs the Fourier transform of k-space data and linear
combination of the available coils using the coefficients
calculated by the GrappaCalculator. The GRAPPA calculation is done asynchronously with Fourier transform
and image unmixing, which enables this configuration
to have a high frame rate. The unmixing coefficients are
updated as often as the hardware is capable and always
when the slice orientation changes. The time that it takes
to update the unmixing coefficients depends on the number of receiver channels, the image matrix size, and the
available hardware, but for most applications it is on the
order of 200 ms. More specifically, on a 16 CPU core
(Intel Xeon 2.67 GHz) computer with 24 GB of RAM and
an Nvidia GeForce GTX 590 graphics card, the GRAPPA
unmixing coefficients for parallel imaging rate 4 with 16
input channels could be calculated in under 200 ms. As the
coefficients are updated independently of the unmixing,
the frame rate is not limited by how quickly the coefficients can be calculated. The frame rate is determined by how
quickly the raw data can be Fourier transformed and the
coefficients can be applied. With the previously described
hardware, images could be displayed on the scanner with
less than one frame (approximately 100 ms) latency.
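The asynchronous update of unmixing coefficients can be sketched as a simple double-buffering pattern: a background thread recomputes the coefficients while the streaming thread keeps applying the most recent complete set. This is a toy model of the GrappaCalculator/GrappaUnmixingGadget interaction, not the actual implementation.

```python
import threading
import numpy as np

class CoefficientStore:
    """Holds the current unmixing coefficients behind a lock so that a
    calculator thread can swap in a new set while the unmixing thread
    keeps reading whatever is current."""
    def __init__(self, coeffs):
        self._lock = threading.Lock()
        self._coeffs = coeffs

    def update(self, coeffs):
        # Called by the (slow) coefficient-calculation thread.
        with self._lock:
            self._coeffs = coeffs

    def latest(self):
        # Called by the streaming unmixing thread for every frame.
        with self._lock:
            return self._coeffs

store = CoefficientStore(np.ones((2, 4, 4)))

def recalculate():
    # Stand-in for the expensive GRAPPA coefficient calculation.
    store.update(2 * np.ones((2, 4, 4)))

t = threading.Thread(target=recalculate)
t.start()
# The streaming thread reconstructs with whatever coefficients are current;
# it never waits for the recalculation to finish.
frame = np.sum(store.latest() * np.ones((2, 4, 4)), axis=0)
t.join()
```

Because the reader never blocks on the recalculation, the frame rate is bounded only by the Fourier transform and the pixelwise unmixing, as described above.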
In addition to the high-throughput GRAPPA functionality described earlier, the implementation also allows for receiver channel compression based on Principal Component Analysis (PCA) (20,21). The implemented scheme is inspired by Huang et al. (22) in that it implements two stages of channel compression. After noise prewhitening in the NoiseAdjustGadget, the channels are converted to virtual channels using PCA coefficients in the PCAGadget. In the present implementation, the coefficients are calculated based on the k-space data from the first frame of real-time data. Subsequently, the data pass through a coil reduction Gadget, which simply discards data above a certain channel number (here the number of channels is reduced from 32 to 16). In the GRAPPA calculation itself, further channel compression is possible by limiting the number of target channels included in the GRAPPA reconstruction. For example, the GRAPPA coefficients may be calculated from 16 source channels to eight target channels. This reduces the computational load and memory requirements for the GRAPPA calculation without explicitly reducing the parallel imaging performance. It is beyond the scope of this article to do an analysis of appropriate settings for these compression factors, but in practice we have found that with a 32-channel receive array, a reduction to 16 channels upstream and eight channels downstream results in only marginal reduction of image quality.

Non-Cartesian Parallel MRI

As an example of parallel MRI using non-Cartesian trajectories, the Gadgetron includes a GPU-based implementation of non-Cartesian SENSE (23). The implementation included in the Gadgetron is based on previous work (9) but adapted for the Gadgetron architecture. The main functionality of the iterative solver is implemented in the toolboxes described earlier. The Gadget architecture is used to wrap the solver and data handling buffers in a reusable manner.

The conjugate gradient SENSE Gadget pipeline is illustrated in Fig. 5. The noise adjustment and PCA virtual coil generation Gadgets have been reused from the GRAPPA reconstruction described earlier. Additionally, a series of other Gadgets for image scaling, magnitude extraction, etc. are also reused. The non-Cartesian SENSE reconstruction uses a conjugate gradient iterative solver to solve:

$\tilde{\rho} = \arg\min_{\rho} \|E\rho - m\|_2^2 + \lambda \|L\rho\|_2^2$,   [1]

where $E$ is the non-Cartesian SENSE encoding operator, $m$ is the acquired k-space data, $L$ is a regularization operator, and $\lambda$ is the regularization weight.
The toolboxes also contain algorithms for image denoising and deblurring, the latter often under the assumption that the image degradation was caused by convolution of the desired image with a known point spread function. The initial Gadgetron release provides three iterative solvers that can be used to restore noisy or blurred images: linear least squares by the conjugate gradient algorithm (15), and both an unconstrained and a constrained Split-Bregman solver (16) for total variation (TV) based minimization. The solvers are derived from variational problems minimizing, respectively,

$\min_{\rho} \lambda \|\rho\|_2^2 + \|E\rho - m\|_2^2 \quad \text{and} \quad \min_{\rho} \lambda \|\rho\|_{\mathrm{TV}} + \|E\rho - m\|_2^2$,

where $\rho$ is the desired image to be restored, $E$ is the linear degradation operator, $m$ is the acquired image, and $\lambda$ is a regularization weight. For deblurring, $E$ is modeled as a ConvolutionOperator and for denoising as an IdentityOperator. The solvers can be set up with just a handful of lines of code, and GPU-based operators are available for high-performance applications.
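To illustrate how such a solver can be set up with a handful of lines, the following sketch solves the linear least squares variant with hypothetical pure-Python stand-ins for ConvolutionOperator and IdentityOperator. The real toolbox classes are C++/GPU implementations with a different interface.

```python
import numpy as np

class IdentityOperator:
    """Stand-in degradation operator E for denoising."""
    def apply(self, x): return x
    def apply_adjoint(self, x): return x

class ConvolutionOperator:
    """Stand-in degradation operator E for deblurring: circular
    convolution with a known point spread function via the FFT."""
    def __init__(self, psf):
        self.psf_ft = np.fft.fft2(psf)
    def apply(self, x):
        return np.real(np.fft.ifft2(np.fft.fft2(x) * self.psf_ft))
    def apply_adjoint(self, x):
        return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(self.psf_ft)))

def solve(E, m, lam, iters=100):
    """Conjugate gradient on the normal equations
    (E^H E + lam * I) rho = E^H m, i.e. the linear least squares
    restoration problem (real-valued data assumed for simplicity)."""
    A = lambda x: E.apply_adjoint(E.apply(x)) + lam * x
    b = E.apply_adjoint(m)
    x = np.zeros_like(b)
    r = b - A(x)
    p = r.copy()
    rs = np.sum(r * r)
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / np.sum(p * Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.sum(r * r)
        if rs_new < 1e-14:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

m = np.ones((8, 8))
rho = solve(IdentityOperator(), m, lam=1.0)  # closed form here: m / (1 + lam)
```

Swapping IdentityOperator for ConvolutionOperator turns the same solver from a denoiser into a deblurrer, which is the operator-abstraction point the toolboxes rely on.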
The framework also includes a small standalone application that applies the non-Cartesian parallel MRI encoding operator (see earlier) to the latter two total variation (TV) based functionals, demonstrating compressed sensing in combination with parallel MRI.
FIG. 5. Non-Cartesian parallel MRI Gadget chain. Most of the reconstruction pipeline is reused from the Cartesian parallel MRI Gadget
chain (Fig. 4). After forming virtual channels, the data pass into the
conjugate gradient SENSE Gadget where they are buffered. Once
enough data are available to reconstruct a new frame, coil sensitivities and regularization data are calculated from a time average of
previously acquired data and used along with the undersampled data
in a conjugate gradient iterative solver. The reconstructed images are
passed on to subsequent Gadgets in the chain for further processing.
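The conjugate gradient iteration at the heart of the non-Cartesian SENSE Gadget (Eq. [1]) can be sketched as follows, assuming an explicit matrix for the encoding operator and the identity as regularization operator. The actual solver uses matrix-free GPU-based gridding operators from the toolboxes.

```python
import numpy as np

def cg_sense(E, m, lam, iters=50):
    """Conjugate gradient solution of the regularized normal equations
    (E^H E + lam * I) rho = E^H m, a simplified stand-in for the
    non-Cartesian SENSE solver (regularization operator L = identity,
    encoding operator given as an explicit matrix)."""
    A = lambda x: E.conj().T @ (E @ x) + lam * x
    b = E.conj().T @ m
    x = np.zeros_like(b)
    r = b - A(x)
    p = r.copy()
    rs = np.real(r.conj() @ r)
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / np.real(p.conj() @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.real(r.conj() @ r)
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy overdetermined system standing in for the SENSE encoding of
# undersampled multi-coil data.
rng = np.random.default_rng(1)
E = rng.standard_normal((12, 8))
rho_true = rng.standard_normal(8)
m = E @ rho_true
rho = cg_sense(E, m, lam=1e-6)
```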
In the current implementation, there are some limitations in the Python scripting support. As described, the
access to the Python interpreter is controlled such that two
Gadgets cannot access the interpreter simultaneously. This
could have performance implications if multiple Gadgets
are implemented in Python and have to compete for the interpreter. With the current Python C/C++ API (version 2.7.3),
it has not been possible to overcome this limitation in a
way that would work robustly on all supported platforms.
A future release will seek to improve the Python support
and may also provide support for other scripting languages
such as Matlab.
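The serialization of interpreter access can be sketched as a process-wide singleton guarding the interpreter with a lock. This is a toy model of the behavior described above, not the real PythonCommunicator class.

```python
import threading

class PythonCommunicator:
    """Process-wide singleton that serializes access to the embedded
    interpreter: only one Gadget at a time may call into Python."""
    _instance = None
    _instance_lock = threading.Lock()

    def __new__(cls):
        with cls._instance_lock:
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._interpreter_lock = threading.Lock()
                cls._instance._modules = {}
            return cls._instance

    def load_module(self, name, module):
        # Stand-in for importing a Python module into the interpreter.
        with self._interpreter_lock:
            self._modules[name] = module

    def call(self, name, data):
        # Gadgets competing for the interpreter block here one at a time.
        with self._interpreter_lock:
            return self._modules[name](data)

comm_a = PythonCommunicator()
comm_b = PythonCommunicator()  # same instance: a process-wide singleton
comm_a.load_module("square", lambda x: x * x)
result = comm_b.call("square", 3)
```

The lock is what makes multiple Python Gadgets in one pipeline safe, and also what makes them compete for the interpreter, which is the performance limitation noted above.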
Finally, the main application of the framework so far
has been MRI reconstruction. Consequently, the toolboxes
are focused on tasks related to this particular modality.
We anticipate that the contents of the toolboxes will naturally evolve, as we and other developers embrace new
applications and imaging modalities. For example, we are
exploring using the framework for deblurring and denoising of light microscopy images using the iterative solvers
in the toolboxes.
CONCLUSIONS
We have presented a new open source framework for
medical image reconstruction and described several applications for which it can be used (MRI, image denoising, and
image deblurring). The architecture is modular and flexible, and it promotes reuse of existing reconstruction software
modules in the form of Gadgets. It is possible to implement
new Gadgets in C/C++ with integration of GPU acceleration for high frame rate, low latency reconstructions. It is
also possible to prototype new reconstruction components
using the high-level scripting language Python. The framework and all the example applications are made freely
available to the medical image reconstruction community,
and we hope that it can serve as a platform for researchers
to share and deploy novel reconstruction algorithms.
ACKNOWLEDGMENTS
The authors thank Drs. Peter Kellman and Souheil Inati at
the National Institutes of Health and David Hansen, Christian Christoffersen, and Allan Rasmusson at the Department of Computer Science, Aarhus University for valuable
discussion, feedback, and suggestions.
REFERENCES
1. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med 1999;42:952–962.
2. Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn Reson Med 1997;38:591–603.