Medical Image Data Basic

Basic Medical Imaging Informatics

II. Medical Image Data

1. Medical Image, Image Quality, and Data Formats


2. Medical Imaging Modalities
3. Medical Image Digitization and Acquisition Gateway

Medical Image Data

The data on which medical visualization methods and applications are based are acquired with scanning devices, such as computed tomography (CT) and magnetic resonance imaging (MRI) scanners. These devices have undergone enormous development over the last 20 years.

2.1 Introduction

Medical image data are acquired for different purposes, such as diagnosis, therapy planning,
intraoperative navigation, post-operative monitoring, and biomedical research. Before we start with the
description of medical imaging modalities, we briefly discuss major requirements that guide the
selection of imaging modalities in practice:

• the relevant anatomy must be depicted completely,
• the resolution of the data should be sufficient to answer specific diagnostic and therapeutic questions,
• the image quality with respect to contrast, signal-to-noise ratio (SNR), and artifacts must be sufficient to interpret the data with respect to diagnostic and therapeutic questions,
• exposure and burden to the patient and to the medical doctor should be minimized, and
• costs should be limited.

Thus, neither optimum spatial resolution nor optimum image quality is the relevant goal in clinical practice. As an example, a CT examination with a high radiation dose and very high spatial resolution optimizes imaging with respect to resolution and image quality, but fails to meet the minimum-exposure criterion and may lead to higher costs, since a large dataset has to be archived and transferred over a network with often only moderate bandwidth.

In this chapter we focus on tomographic imaging modalities, in particular on CT and MRI data. Hybrid PET/CT and PET/MRI scanners are an exciting development of the last decade. We discuss them as examples of the potential of the complementary use of imaging data and the necessity of fusing the resulting information.

We will discuss examples where the image data are applied. Thus, it becomes obvious that a variety of
imaging modalities is required to “answer” various diagnostic questions. The discussion includes recent
developments, such as High-field MRI. We explain what is technically feasible along with the clinical
motivation and use for these developments.

Organization
1. We start with a general discussion of medical image data and their properties (§ 2.2) and continue
with basic signal processing relevant for medical image acquisition (§ 2.3).

2. The discussion of medical imaging modalities starts with an overview of X-ray imaging (§ 2.4). X-ray images were the first medical images exposing information about inner structures of the human body, and X-ray is still by far the most used imaging modality in modern health care. We will also discuss various flavors of X-ray imaging, such as angiography and rotational X-ray.

3. We continue with a description of CT data acquisition, which is based on the same physical principle,
but represents a tomographic modality generating volume data (§ 2.5). The second widespread
tomographic modality is Magnetic Resonance Imaging (MRI), which is described in § 2.6. This versatile
imaging modality exploits the different characteristics of human tissue in magnetic fields. Although most
of the applications and algorithms presented in this book are based on CT and MRI data, we also
introduce other modalities, which have a great importance in clinical practice and might be used more
intensively in the future in computer-assisted diagnosis and therapy planning systems. In § 2.7, we
describe the principle of ultrasound generation. Finally, Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT), the most widespread imaging modalities in nuclear medicine, are described.

===================

1. Medical Image, Image Quality, and Data Formats

Image file format is often a confusing aspect for someone wishing to process medical images. This article
presents a demystifying overview of the major file formats currently used in medical imaging: Analyze,
Neuroimaging Informatics Technology Initiative (Nifti), Minc, and Digital Imaging and Communications in
Medicine (Dicom). Concepts common to all file formats, such as pixel depth, photometric interpretation,
metadata, and pixel data, are first presented. Then, the characteristics and strengths of the various
formats are discussed. The review concludes with some predictive considerations about the future
trends in medical image file formats.

Basic Concepts

A medical image is the representation of the internal structure or function of an anatomic region in the form of an array of picture elements called pixels or voxels. It is a discrete representation resulting from a sampling/reconstruction process that maps numerical values to positions in space. The number of pixels used to describe the field of view of a certain acquisition modality is an expression of the detail with which the anatomy or function can be depicted. What the numerical value of the pixel expresses depends on the imaging modality, the acquisition protocol, the reconstruction, and, possibly, the post-processing.
Pixel Depth Pixel depth is the number of bits used to encode the information of each pixel. Every image is stored in a file and kept in the memory of a computer as a group of bytes. A byte is a group of 8 bits and represents the smallest quantity that can be stored in the memory of a computer. This means that if a 256 × 256 pixel image has a pixel depth of 12 or 16 bits, the computer will always store two bytes per pixel, and the pixel data will require 256 × 256 × 2 = 131,072 bytes of memory in both cases. With a pixel depth of 2 bytes per pixel, it is possible to encode and store integer numbers between 0 and 65,535 (2^16 − 1); alternatively, it is possible to represent integer numbers between −32,768 and +32,767, using 15 bits to represent the numbers and 1 bit to represent the sign. Image data may also be real numbers. The Institute of Electrical and Electronics Engineers created a standard (IEEE 754) that defines two basic formats for the binary encoding of floating-point numbers: single precision (32-bit) and double precision (64-bit). The standard addresses the problem of the precision with which the finite number of combinations obtainable with a sequence of n bits (2^n) can represent a continuous range of real numbers. Although unusual, pixels can also store complex numbers. Complex data have a real and an imaginary component, which are stored as pairs of real numbers. Therefore, complex data typically have a pixel depth twice that used to represent a single real number.

From this overview it emerges that pixel depth is a concept related to the memory space necessary to represent in binary the amount of information we want to store in a pixel.
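As a minimal illustration (a sketch assuming NumPy, not part of the original text), the following shows how pixel depth translates into memory footprint and representable value ranges:

```python
# Sketch: pixel depth, memory footprint, and value ranges (NumPy assumed).
import numpy as np

rows, cols = 256, 256

# A 12-bit CT image is still stored in 16-bit words (2 bytes per pixel).
img16 = np.zeros((rows, cols), dtype=np.uint16)
print(img16.nbytes)                      # 256 * 256 * 2 = 131072 bytes

# Representable integer ranges for a 2-byte pixel depth.
print(np.iinfo(np.uint16))               # 0 .. 65535  (2**16 - 1)
print(np.iinfo(np.int16))                # -32768 .. 32767 (15 bits + sign)

# Complex pixels pair two floats, doubling the pixel depth.
img_cplx = np.zeros((rows, cols), dtype=np.complex64)   # 2 x 32-bit floats
print(img_cplx.itemsize * 8)             # 64 bits per pixel
```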

Photometric Interpretation The photometric interpretation specifies how the pixel data should be interpreted for correct image display, as a monochrome or color image. To specify whether color information is stored in the image pixel values, we introduce the concept of samples per pixel (also known as number of channels). Monochrome images have one sample per pixel and no color information stored in the image. A scale of shades of gray from black to white is used to display the images. The number of shades of gray clearly depends on the number of bits used to store the sample, which in this case coincides with the pixel depth. Clinical radiological images, like X-ray computed tomography (CT) and magnetic resonance (MR) images, have a gray-scale photometric interpretation. Nuclear medicine images, like positron emission tomography (PET) and single photon emission tomography (SPECT), are typically displayed with a color map or color palette. In this case, each pixel of the image is associated with a color in a predefined color map, but the color concerns only the display: it is information associated with, and not actually stored in, the pixel values. The images still have one sample per pixel and are said to be in pseudo-color. To encode color information into pixels, we typically need multiple samples per pixel and a color model that specifies how to obtain colors by combining the samples [1]. Usually, 8 bits are reserved for each sample or color component. The pixel depth is calculated by multiplying the sample depth (number of bits used for each sample) by the number of samples per pixel. Ultrasound images are typically stored using the red–green–blue color model (briefly, RGB). In this case, the pixel should be understood as a combination of the three primary colors, and three samples per pixel are stored. Such images have a pixel depth of 24 bits and are said to be in true color.
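A small sketch (again assuming NumPy; the palette is hypothetical) illustrates the difference between pseudo-color and true color:

```python
# Sketch: pseudo-color keeps one sample per pixel and maps it to RGB only
# at display time via a lookup table; true color stores three samples.
import numpy as np

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # 1 sample/pixel

# A hypothetical 256-entry palette (shape 256 x 3, one RGB triple per gray
# level); the display applies it, while the file still stores only 'gray'.
palette = np.stack([np.arange(256)] * 3, axis=1).astype(np.uint8)
displayed = palette[gray]                # shape (64, 64, 3), display only

# True color: three 8-bit samples per pixel -> pixel depth 3 * 8 = 24 bits.
rgb = np.zeros((64, 64, 3), dtype=np.uint8)
print(rgb.itemsize * 8 * rgb.shape[-1])  # 24 bits per pixel
```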

Color is used, for example, to encode blood flow direction (and velocity) in Doppler ultrasound; to show additional "functional" information on a gray-scale anatomical image as colored overlays, as in the case of fMRI activation sites; to simultaneously display functional and anatomical images, as in PET/CT or PET/MRI; and sometimes in place of gray tones to highlight signal differences.

Metadata Metadata are information that describes the image. It may seem strange, but in any file format there is always information associated with the image beyond the pixel data. This information, called metadata, is typically stored at the beginning of the file as a header and contains at least the image matrix dimensions, the spatial resolution, the pixel depth, and the photometric interpretation. Thanks to metadata, a software application is able to recognize and correctly open an image in a supported file format simply by a double-click or by dragging the image icon onto the icon of the application. In the case of medical images, metadata have a wider role due to the nature of the images themselves. Images coming from diagnostic modalities typically carry information about how the image was produced. For example, a magnetic resonance image will have parameters related to the pulse sequence used, e.g., timing information, flip angle, number of acquisitions, etc. A nuclear medicine image like a PET image will have information about the radiopharmaceutical injected and the weight of the patient. These data allow software like OsiriX [2] to convert pixel values on the fly into standardized uptake values (SUV) without the need to actually write SUV values into the file. Post-processing file formats have a terser metadata section that essentially describes the pixel data. The different content of the metadata is the main difference between images produced by a diagnostic modality and post-processed images. Metadata are a powerful tool for annotating and exploiting image-related information for clinical and research purposes, and for organizing and retrieving images and associated data from archives.
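As a hedged illustration, the following sketch uses the pydicom library to read such metadata; the file name is hypothetical, and the attributes shown are standard DICOM keywords that are present only when the modality wrote them:

```python
# Sketch: reading image metadata from a DICOM file with pydicom.
import pydicom

ds = pydicom.dcmread("image.dcm")        # hypothetical file name

print(ds.Modality)                       # e.g., 'MR', 'CT', 'PT'
print(ds.Rows, ds.Columns)               # image matrix dimensions
print(ds.get("PatientWeight"))           # used e.g. for PET SUV scaling
print(ds.get("RepetitionTime"), ds.get("FlipAngle"))  # MR pulse sequence
```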

Pixel Data This is the section where the numerical values of the pixels are stored. According to the data type, pixel data are stored as integers or floating-point numbers using the minimum number of bytes required to represent the values (see Table 1). Looking at images generated by tomographic imaging modalities and sent to a Picture Archiving and Communication System (PACS) or a reading station, radiological images like CT and MR, and also modern nuclear medicine modalities like PET and SPECT, store 16 bits per pixel as integers. Although integers, possibly with the specification of a scale factor, are adequate for "front-end" images, the use of a float data type is frequent in any post-processing pipeline, since it is the most natural representation for calculations. Image data may also be of complex type, even if this data type is not common and can be bypassed by storing the real and imaginary parts as separate images. An example of complex data is provided by the arrays that in MRI store acquired data before reconstruction (the so-called k-space), or after reconstruction if you choose to save both magnitude and phase images.

Table 1
Summary of file format characteristics

Analyze. Header: fixed-length 348-byte binary format. Extensions: .img and .hdr. Data types: unsigned integer (8-bit); signed integer (16-, 32-bit); float (32-, 64-bit); complex (64-bit).

Nifti. Header: fixed-length 352-byte binary format(a) (348 bytes when data are stored as .img and .hdr). Extension: .nii. Data types: signed and unsigned integer (from 8- to 64-bit); float (from 32- to 128-bit); complex (from 64- to 256-bit).

Minc. Header: extensible binary format. Extension: .mnc. Data types: signed and unsigned integer (from 8- to 32-bit); float (32-, 64-bit); complex (32-, 64-bit).

Dicom. Header: variable-length binary format. Extension: .dcm. Data types: signed and unsigned integer (8-, 16-bit; 32-bit only allowed for radiotherapy dose); float not supported.

Not all software supports all the specified data types. Dicom, Analyze, and Nifti support color RGB 24-bit; Nifti also supports RGBA 32-bit (RGB plus an alpha channel).

(a) Nifti has a mechanism to extend the header.

Whenever the value of a pixel is stored using two or more bytes, it should be taken into account that the order in which the computer stores the bytes is not unique. If we indicate with b1, b2 the two bytes of a 16-bit word, the computer can store the word as (b1:b2) or (b2:b1). The term little endian indicates that the least significant byte is stored first, while big endian indicates that the most significant byte is stored first. This issue is typically related to the processor on which the computer hardware is based and concerns all data encoded using more than 8 bits per pixel.
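A short sketch (assuming NumPy) makes the byte-order issue concrete: the same two raw bytes yield different values under little- and big-endian interpretation:

```python
# Sketch: interpreting the same raw bytes with both byte orders.
import sys
import numpy as np

raw = bytes([0x01, 0x02])                # the two bytes b1:b2 of one word

little = np.frombuffer(raw, dtype="<u2")[0]   # least significant byte first
big    = np.frombuffer(raw, dtype=">u2")[0]   # most significant byte first
print(little, big)                       # 513 (0x0201) vs 258 (0x0102)

print(sys.byteorder)                     # native order of this machine
```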

In formats that adopt a fixed-size header, the pixel data start at a fixed position, after skipping the header length. In the case of a variable-length header, the starting location of the pixel data is marked by a tag or a pointer. In any case, the pixel data size is calculated as:

pixel data size = rows × columns × pixel depth × number of frames

where the pixel depth is expressed in bytes. The image file size will then be given by:

file size = header size + pixel data size

Both the expressions are valid in the case of uncompressed data. Image data may also be compressed to
reduce requirements for storage and transmission, in which case the file size is reduced by a factor that
depends on the compression technique adopted. Generally speaking, compression may be reversible
(lossless) or irreversible (lossy). Lossless compression techniques allow a moderate gain in terms of
image storage. Lossy techniques allow a greater advantage at the cost of information loss but, for this
reason, their use in the world of medical imaging is controversial. It is not clear under which conditions
the reading of the images and/or the quantitative post-processing procedures are not influenced by
information loss. On the other hand, the adoption of lossy compression schemes with a low or moderate
loss of information in place of lossless ones might appear not justified [3].
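As a minimal sketch, the two expressions above for uncompressed data can be implemented directly; all names are illustrative:

```python
# Sketch: the two size expressions for uncompressed image files.
def pixel_data_size(rows, cols, pixel_depth_bytes, frames=1):
    """Pixel data size in bytes = rows x columns x pixel depth x frames."""
    return rows * cols * pixel_depth_bytes * frames

def file_size(header_bytes, pixel_bytes):
    """Uncompressed file size = header size + pixel data size."""
    return header_bytes + pixel_bytes

# Example: one 256 x 256 slice at 2 bytes/pixel behind a 348-byte
# Analyze-style header.
px = pixel_data_size(256, 256, 2)        # 131072 bytes
print(file_size(348, px))                # 131420 bytes
```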

File Formats

Medical image file formats can be divided into two categories. The first comprises formats intended to standardize the images generated by diagnostic modalities, e.g., Dicom [4]. The second comprises formats born with the aim of facilitating and strengthening post-processing analysis, e.g., Analyze [5], Nifti [6], and Minc [7]. Medical image files are typically stored using one of two possible configurations. In the first, a single file contains both the metadata and the image data, with the metadata stored at the beginning of the file; this paradigm is used by the Dicom, Minc, and Nifti file formats, even if other formats allow it as well. The second configuration stores the metadata in one file and the image data in a second one; the Analyze file format uses this two-file paradigm (.hdr and .img).

In this section, we describe some of the most popular formats currently used: Analyze, Nifti, Minc, and
Dicom. Table 1 summarizes the characteristics of the described file formats.

Historically, one of the first projects aimed at creating a standardized file format in the field of medical imaging was the Interfile format [8]. It was created in the 1980s and has been used for many years for the exchange of nuclear medicine images. An Interfile image consists of a pair of files, one containing metadata information in ASCII format, which the standard calls administrative information, and one containing the image data. The Interfile header can be viewed and edited with a normal text editor.

Analyze

Analyze 7.5 was created at the end of the 1980s as the format employed by the commercial software Analyze, developed at the Mayo Clinic in Rochester, MN, USA. For more than a decade, the format was the de facto standard for medical imaging post-processing. The big insight of the Analyze format was that it was designed for multidimensional data (volumes). Indeed, it is possible to store 3D or 4D data in one file (the fourth dimension typically being temporal information). An Analyze 7.5 volume consists of two binary files: an image file with extension ".img" that contains the voxel raw data, and a header file with extension ".hdr" that contains the metadata, such as the number of pixels in the x, y, and z directions, the voxel size, and the data type. The header has a fixed size of 348 bytes and is described as a structure in the C programming language. Reading and editing the header requires a software utility. The format is today considered "old", but it is still widely used and supported by many processing software packages, viewers, and conversion utilities. A new version of the format (AnalyzeAVW), used in the latest versions of the Analyze software, is not discussed here since it is not widespread.
As summarized in Table 1, Analyze 7.5 does not support certain basic data types, including unsigned 16-bit integers, and this can sometimes be a limitation, forcing users to use a scale factor or to switch to a pixel depth of 32 bits. Moreover, the format does not store enough information to unambiguously establish the image orientation.

Nifti

Nifti is a file format created at the beginning of the 2000s by a committee based at the National Institutes of Health, with the intent of creating a neuroimaging format that maintained the advantages of the Analyze format while solving its weaknesses. Nifti can in fact be thought of as a revised Analyze format. The format fills some of the unused or little-used fields in the Analyze 7.5 header to store new information, such as image orientation, with the intent of avoiding left–right ambiguity in brain studies. Moreover, Nifti includes support for data types not contemplated in the Analyze format, such as unsigned 16-bit integers. Although the format also allows the storage of the header and pixel data in separate files, images are typically saved as a single ".nii" file in which the header and the pixel data are merged. The header has a size of 348 bytes in the case of ".hdr" and ".img" data storage, and a size of 352 bytes in the case of a single ".nii" file, due to the presence of four additional bytes at the end, essentially to make the size a multiple of 16 and also to provide a way to store additional metadata, in which case these 4 bytes are nonzero. A practical implementation of an extended Nifti format for the processing of diffusion-weighted magnetic resonance data is described in [9].

The Nifti format provides two ways to store the orientation of the image volume in space. The first, comprising a rotation plus a translation, is used to map voxel coordinates to the scanner frame of reference; this "rigid body" transformation is encoded using a "quaternion" [10]. The second method is used to save the 12 parameters of a more general linear transformation, which defines the alignment of the image volume to a standard or template-based coordinate system. This spatial normalization task is common in brain functional image analysis [11].

The Nifti format has rapidly replaced Analyze in neuroimaging research, having been adopted as the default format by some of the most widespread public domain software packages, such as FSL [12], SPM [13], and AFNI [14]. The format is supported by many viewers and image analysis packages like 3D Slicer [15], ImageJ [16], and OsiriX, as well as other emerging software like R [17] and Nibabel [18], besides various conversion utilities.
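As a hedged example, the Nibabel package mentioned above can read a Nifti file and expose the orientation information discussed earlier; the file name is hypothetical:

```python
# Sketch: loading a Nifti file with Nibabel and inspecting its geometry.
import nibabel as nib

img = nib.load("brain.nii")              # hypothetical single-file Nifti
print(img.shape)                         # e.g., (x, y, z) or (x, y, z, t)
print(img.header.get_data_dtype())       # pixel data type
print(img.affine)                        # 4x4 voxel -> world transform

data = img.get_fdata()                   # pixel data as floating point
```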

An updated version of the standard, Nifti-2, developed to manage larger data sets, was defined in 2011. This new version encodes each of the dimensions of an image matrix with a 64-bit integer instead of a 16-bit integer as in Nifti-1, eliminating the size limit of 32,767. The updated version maintains almost all the characteristics of Nifti-1 but, as it reserves double precision for some header fields, comes with a header of 544 bytes [19].

Minc

The Minc file format was developed at the Montreal Neurological Institute (MNI) starting in 1992 to provide a flexible data format for medical imaging. The first version of the Minc format (Minc1) was based on the standard Network Common Data Format (NetCDF). Subsequently, to overcome the limits in supporting large data files and to provide other new features, the Minc development team chose to switch from NetCDF to Hierarchical Data Format version 5 (HDF5). This new release, not compatible with the previous one, was called Minc2. The format is mainly used by software tools developed by the MNI Brain Imaging Center, i.e., a viewer and a processing software library [7]. A set of utilities allowing conversion to and from the Dicom and Nifti formats, and between Minc1 and Minc2, has been made available by the same group.

Dicom

The Dicom standard was established by the American College of Radiology and the National Electrical Manufacturers Association. Despite its 1993 date of birth, the real introduction of the Dicom standard into imaging departments took place at the end of the 1990s. Today, the Dicom standard is the backbone of every medical imaging department. The added value of its adoption in terms of access, exchange, and usability of diagnostic medical images is, in general, huge. Dicom is not only a file format but also a network communication protocol, and although the two aspects cannot be completely separated, here we will discuss Dicom only as a file format.

The innovation of Dicom as a file format was to establish that the pixel data cannot be separated from the description of the medical procedure that led to the formation of the image itself. In other words, the standard stressed the concept that an image separated from its metadata becomes "meaningless" as a medical image. Metadata and pixel data are merged in a unique file, and the Dicom header, in addition to the information about the image matrix, contains the most complete description ever conceived of the entire procedure used to generate the image, in terms of acquisition protocol and scanning parameters. The header also contains patient information such as name, gender, age, weight, and height. For these reasons, the Dicom header is modality-dependent and varies in size. In practice, the header allows the image to be self-descriptive. To easily understand the power of this approach, just think of the software that Siemens first introduced for its MRI systems to replicate an acquisition protocol. The software, known as "Phoenix", is able to extract the protocol from a Dicom image series dragged into the acquisition window and to replicate it for a new acquisition. Similar tools exist for all the major manufacturers.

Regarding the pixel data, Dicom can only store pixel values as integers. Dicom cannot currently save pixel data in floating point, although it supports various data types, including floats, to store metadata. Whenever the values stored in each voxel have to be scaled to different units, Dicom makes use of a scale factor, using two fields in the header that define the slope and the intercept of the linear transformation to be used to convert stored pixel values to real-world values.
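A brief sketch (assuming pydicom; the file name is hypothetical) shows this linear scaling; RescaleSlope and RescaleIntercept are the standard header fields holding the slope and intercept:

```python
# Sketch: converting stored Dicom pixel values to real-world values.
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")     # hypothetical file name

slope = float(ds.get("RescaleSlope", 1))
intercept = float(ds.get("RescaleIntercept", 0))

# Integer stored values -> real-world values (e.g., Hounsfield units).
hu = ds.pixel_array * slope + intercept
```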

Dicom supports compressed image data through a mechanism that allows a non-Dicom-formatted document to be encapsulated in a Dicom file. Compression schemes supported by Dicom include JPEG, run-length encoding (RLE), JPEG-LS, JPEG-2000, MPEG2/MPEG4, and Deflated, as described in Part 5 of the standard [20]. The newer JPEG-XR compression standard has been proposed for adoption by Dicom. An encapsulated Dicom file includes metadata related to the native document plus the metadata necessary to create the Dicom shell.

======================================

2. Medical Imaging Modalities

Medical imaging modalities include, for example, magnetic resonance imaging (MRI), ultrasound, medical radiation, angiography, and computed tomography (CT) scanners, in addition to several other scanning techniques used to visualise the human body for diagnostic and treatment purposes. These modalities are also very useful for patient follow-up, with regard to the progress of a disease state that has already been diagnosed and/or is undergoing a treatment plan. The vast majority of imaging is based on the application of X-rays and ultrasound (US). These medical imaging modalities are involved in all levels of hospital care. In addition, they are instrumental in public health and preventive medicine settings as well as in curative and, further, palliative care. The main objective is to establish the correct diagnoses.


Medical radiation
Medical imaging in a clinical setting makes a vital contribution to the overall diagnosis of the patient and helps in deciding on an overall treatment plan. The utilisation of imaging techniques in medical radiation is increasing with new technological advances in medical sciences. Within the broad spectrum of imaging modalities are the specialities of nuclear medicine, positron emission tomography (PET), magnetic resonance imaging (MRI) and ultrasound. Overall, imaging for medical radiation purposes involves a team of radiologists, radiographers and medical physicists.

Figure: Stages in PET imaging of the human body.

X-rays

Medical imaging modalities involve a multidisciplinary approach to obtaining a correct diagnosis for the individual patient, with the aim of providing a personalised approach to patient care. These imaging techniques can be applied as non-invasive methods to view inside the human body, without any surgical intervention. They can be used to assist in diagnosing or treating a variety of medical conditions. Medical imaging techniques utilise radiation that is part of the electromagnetic spectrum. X-ray based imaging includes conventional X-ray, computed tomography (CT) and mammography. To improve X-ray image quality, a contrast agent can be used, for example in angiography examinations.

Figure: X-rays of the head, chest, legs and hand.

Medical imaging modalities

Furthermore, imaging utilised in nuclear medicine and angiography encompasses several techniques to visualise biological processes. The radiopharmaceuticals used are usually small amounts of radioactive markers; these are used in molecular imaging. Other non-radioactive types of imaging include magnetic resonance imaging (MRI) and ultrasound (US) imaging. MRI uses strong magnetic fields, which do not produce any known irreversible biological effects in humans. Diagnostic ultrasound (US) systems use high-frequency sound waves to produce images of internal body organs and soft tissue. Several medical imaging modalities use X-ray beams that are projected onto the body. When these X-ray beams pass through the human body some are absorbed, and the resultant image is detected on the other side of the body.

Figure: MRI angiography (an example of imaging that does not use ionising radiation).

Angiography

Some types of medical imaging function without using ionising radiation, for example magnetic resonance imaging (MRI), angiography and ultrasound imaging, and these have significant applications in the diagnosis of disease. Medical imaging modalities include single-photon emission computed tomography (SPECT), positron emission tomography (PET) and hybrid imaging systems such as PET/CT. Other systems apply positron emission mammography (PEM) and radio-guided surgery (RGS). In addition, there is the application of short- and long-lived radioisotopes for the research and development of new imaging agents and associated targeted therapies. Other techniques include computed tomography (CT), magnetic resonance imaging (MRI), ultrasound imaging and planar X-ray (analogue, portable and digital) systems.

The spatial resolution required to elucidate detailed images of various structures within the human body is the main practical limitation of current medical imaging modalities. Although the rate of image acquisition has increased over the last decade, the sensitivity required to express anatomical structure and function remains limited by the radiation dose, amongst other factors.

Spatial resolution of medical imaging modalities

Imaging modality    Animal (mm)            Clinical (mm)
PET                 1-2                    6-10
SPECT               0.5-2                  7-15
Optical             2-5 (visible to IR)    -
MRI                 0.025-0.1              0.2
US                  0.05-0.5               0.1-1
CT                  0.03-0.4               0.5-1

Future medical imaging modalities will not be dictated by advancements in imaging quality alone; more likely, the objective will be to reduce cost and scanning time, including exposure to radiation. These technical innovations allow the rational conclusion that medical radiation dose, scanning speed, image resolution and sensitivity, as well as cost per patient, will all be elements of personalised medicine in the future.

Consequently, the medical physicist will play a pivotal role in furthering these challenges, especially in extending knowledge and understanding of how the signals used to construct 3-D time-dependent images behave.

In particular, it is important to account for the physical and biological factors that modulate the behaviour of different energy forms within the human body, and to understand how to interpret images and derive more crucial information regarding the patient's disease state in order to formulate a treatment plan that is personal to the patient.

Alongside the continual development and improvement of imaging, it is essential to understand the specific biological events associated with each specific disease state. It would be crucial to design medical imaging modalities that can recognise a ‘fingerprint’ attributable to a specific disease state.

Furthermore, new imaging modalities would be used to evaluate changes in tissue composition resulting
from a disease like fibrosis. In this case, the physiological parameter would be the reduction of blood
flow in arteries according to angiography. Other techniques could evaluate the change in conductivity or
magnetic susceptibility of brain tissue. All of these improvements could help in the understanding of the
contrast mechanisms in several medical imaging modalities.

In essence, it is important to make use of the data within digital images to develop more quantitative tissue characterisation from these anatomical scans. For example, functional magnetic resonance imaging (fMRI) has transformed the understanding of the workings of the brain. This imaging technique relates MRI signals to neural activity, although the underlying neurochemical and electrophysiological processes are not yet well defined.

Diagnostic imaging tools provide powerful techniques to locate biological processes within the human
body. This includes spatial heterogeneity and related changes to the different regions within the
anatomical structure’s fine detail.

Advancements in medical imaging modalities will contribute to an overall personalised treatment plan
for each patient. This can only be guaranteed by continuing translational research in the design of novel
radiopharmaceuticals and biomarkers in order to increase the efforts to devise robust personalised
treatment plans for individual patients.

====================================

3. Medical Image Digitization and Acquisition Gateway

Image Acquisition Gateway

The image acquisition gateway computer (gateway) with a set of software programs is used as a buffer
between image acquisition and the PACS server. Figure 10.1 shows how this chapter corresponds to the
PACS fundamentals content of Part II. Figure 10.2 depicts the gateway in the PACS data flow. In this
chapter the terms acquisition gateway computer, gateway computer, acquisition gateway, and gateway
have the same meaning.

This chapter covers the digital imaging and communications in medicine (DICOM) interface: the DICOM
compliant gateway, the automatic image recovery scheme for DICOM conformance imaging devices,
interface with other existing picture archiving and communication system (PACS) modules, and the
DICOM broker. When the acquisition gateway has to deal with DICOM formatting, communication, and
many image preprocessing functions, multiple-level processing with a queuing mechanism is necessary.
The image acquisition gateway computer with a set of software programs is used as a buffer between
image acquisition and the PACS-based multimedia imaging informatics servers. The chapter shows the
positions of the PACS server and archive, hospital information system (HIS), radiology information
system (RIS), and Web-based electronic patient records (ePR) server in the generic PACS components
data flow. Ultrasound (US) images can be shown with other modality images in the PACS general display
workstations for cross modality comparisons.

10.1 BACKGROUND

Several acquisition devices (modalities) can share one gateway. The gateway has three primary tasks: (1) it acquires image data from the radiological imaging device, (2) it converts the data from manufacturer data format specifications to the PACS image data standard format (header format, byte ordering, matrix sizes) that is compliant with the DICOM data formats, and (3) it forwards the image study to the PACS server or directly to the PACS workstations (WSs). Additional tasks in the gateway are some image pre-processing, compression, and data security. An acquisition gateway has the following characteristics:

1. It preserves the image data integrity transmitted from the imaging device.

2. Its operation is transparent to the users and totally or highly automatic.

3. It delivers images to the PACS server and WSs in a timely manner.

4. It performs some image preprocessing functions to facilitate image display.

Among all PACS major components, establishing a reliable gateway in PACS is the most difficult task for
a number ...
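To make the gateway's buffering role concrete, here is a hedged sketch of a minimal DICOM receiver built with the pynetdicom library (not part of the original text); the AE title, port, and queue directory are illustrative:

```python
# Sketch: a gateway-style DICOM Storage SCP that accepts C-STORE requests
# from modalities and spools the datasets for later forwarding to PACS.
import os
from pynetdicom import AE, evt, AllStoragePresentationContexts

os.makedirs("queue", exist_ok=True)      # local spool directory

def handle_store(event):
    """Buffer each received image on disk for later forwarding."""
    ds = event.dataset
    ds.file_meta = event.file_meta       # re-attach the file meta information
    ds.save_as(os.path.join("queue", f"{ds.SOPInstanceUID}.dcm"),
               write_like_original=False)
    return 0x0000                        # DICOM Success status

ae = AE(ae_title="GATEWAY")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112),
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```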

Computers in Radiology

William W. Boonn MD, in Radiology Secrets Plus (Third Edition), 2011

8 What is DICOM?

DICOM stands for Digital Imaging and Communications in Medicine. DICOM is a standard that
establishes rules that allow medical images and associated information to be exchanged between
imaging equipment from different vendors, computers, and hospitals. A computed tomography (CT)
scanner produced by vendor A and a magnetic resonance imaging (MRI) scanner produced by vendor B
can send images to a PACS from vendor C using DICOM as a common language. In addition to storing
image information, other DICOM standard services include query/retrieve, print management,
scheduling of acquisition and notification of completion, and security profiles.
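As a hedged illustration of this "common language", the following sketch uses the pynetdicom library to perform a DICOM verification (C-ECHO) against a PACS; the host, port, and AE title are assumptions:

```python
# Sketch: DICOM verification (C-ECHO), the handshake used to confirm that
# two devices can talk DICOM to each other.
from pynetdicom import AE

ae = AE(ae_title="WORKSTATION")
ae.add_requested_context("1.2.840.10008.1.1")   # Verification SOP Class UID

assoc = ae.associate("pacs.example.org", 104)   # hypothetical PACS host/port
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print(f"C-ECHO status: 0x{status.Status:04X}")  # 0x0000 = success
    assoc.release()
```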

Components and implementation of a picture archiving and communication system in a prototype application

Hasan H Khaleel,1 Rahmita OK Rahmat,2 Dimon M Zamrin3

1Department of Medical Devices Techniques’ Engineering, AL-Esraa University College, Baghdad, Iraq;
2Department of Multimedia, Faculty of Computer Science and Information Technology, Universiti Putra
Malaysia, Serdang, Selangor, Malaysia; 3Faculty of Medicine, Universiti Teknologi MARA, Sungai Buloh,
Selangor, Malaysia

Purpose: The purpose of this article was to present the first experience of applying a picture archiving and communication system (PACS) at Universiti Putra Malaysia with the cooperation of the Universiti Teknologi MARA hospital; to analyze the applicability of PACS, its impact on health care and its benefits to medical employees; and to propose a prototype application of PACS.

Methods: The main PACS components were discussed, HL7 and DICOM standards were introduced, and
a prototype of WebXA application was proposed.

Results: The results of WebXA revealed the ability of this application to retrieve, store, and display
angiography images on a web browser anywhere, as long as an Internet connection is provided.

Conclusion: This article presented PACS with its components and standards; a prototype application was discussed and evaluated, and a few recommendations were provided for further improvements in the future.

Keywords: picture archiving systems, DICOM, web-based viewer

Introduction

Picture archiving and communication systems (PACS) became one of the most popular health care systems between 2003 and 2008.1,2 During this period, archiving and interpretation media changed from film-based to digital imaging, which was considered a big breakthrough, as digital image acquisition devices became more common than classic conventional radiology systems. Therefore, once a digital image of the chest is captured, it can be processed directly by the computer.3

The digital imaging PACS is a hybrid combination of hardware and software used to acquire, store, deploy and retrieve medical images using the Digital Imaging and Communications in Medicine (DICOM) standard. The images and reports are transmitted digitally via PACS by integrating the system with the radiology information system (RIS) and hospital information system (HIS).2 This PACS–RIS–HIS integration eliminates the need to manually store, retrieve and display film jackets. Earlier, the majority of health care systems adopted the conventional way of storing and displaying patients' data in hospitals, which delayed the time from imaging to reporting of the interpretation. Providing medical staff with information in a short period of time is an important step in current medical systems. Therefore, PACS is becoming a vital component that should be included in hospitals to speed up doctors' mission of curing patients.

PACS was first embodied in the mid-1970s.4 Professor Jean-Raoul Scherrer introduced DIOGENE, a medical information display system, at the Geneva University Hospitals in Switzerland. This system was later modernized to form PACS. Currently, PACS has been adopted in many hospitals and medical institutes. By digitizing medical images, institutes were able to minimize data management and storage costs and to reduce the time consumed in data transport.2 Images, as is widely known, are the basis of teaching medical imaging. Therefore, it is vital to provide students of medical imaging with high-quality images in order to improve their ability to analyze images.5 Based on this fact, a study was established to combine the current PACS with the medical imaging teaching method to design a better imaging teaching system in higher institutes.5

Before PACS, the examination cycle of a radiology department usually flows in the following steps as
illustrated in Figure 1:6

a patient is directed to the ward of physicians for medical checkup;

the responsible physician may refer the patient to the X-ray laboratory for imaging;

after imaging, the films will be taken manually to the reading room to be printed out and analyzed by
the radiologists in the radiology department;

the radiologist will direct the analysis of X-ray images manually to a clerk to type out the report;

the radiologist’s report will manually reach the same physician as the first step to decide;

the report might also reach outside clinics, other hospital departments, emergency room (ER) or
intensive care unit (ICU).

Figure 1 The examination cycle of a radiology department before PACS.

Abbreviations: ER, emergency room; PACS, picture archiving and communication system.

The turnaround time of this examination cycle may vary from hours to days, and the cycle might consume much time before imaging reports are obtained, which can delay decision making about the condition of patients.

The major limitations of the conventional examination radiology cycle are as follows:

time consuming – decision making of diagnostic results may not be obtained in a timely manner;

high possibility of losing the examination data of patients – implying examination retake;

physical retrieval of films from library and then from ER may take minutes to hours;

decision making by referring physician(s) varies from hours to days;

digitizing the films is necessary to save a copy of the images.


After PACS installation, the examination workflow is in the following steps:

technician takes digital images in the X-ray laboratory;

a few seconds later, the exposure is adjusted at the modality workstation (display workstation);

images are then sent to digital archive;

images are immediately available to radiologist(s), referring physician(s) office, and anywhere in the
medical institute.

Therefore, with a smaller and more efficient file room, health care is improved and physician satisfaction increases. Figure 2 briefly presents the workflow after PACS installation.

Figure 2 The examination workflow of a radiology department after PACS.

Abbreviation: PACS, picture archiving and communication system.

This article presents and analyzes the experience of using PACS at Universiti Putra Malaysia (UPM) with the cooperation of the UiTM hospital. A few PACS servers and DICOM viewers, such as K-PACS v1.6.0, ConQuest v1.4.17, ONIS v2.5.1.6 and ClearCanvas v7.1, have been installed and tested on local computers of the medical institutes. All of these servers were downloaded free as trial versions for testing purposes only. The issue of database integration is an area of ongoing work at our institution. Current medical systems mostly depend on various systems across different departments. Our goal is to overcome the problem of multiple databases and integrate them into one reliable database and system that gathers the data of different departments in one place to store, deploy and display medical data in a way that saves cost, time and effort and eliminates data duplication.

Therefore, our medical team continues research to achieve this goal. In the meantime, the team has managed to publish a few articles related to the target of PACS. In one of the previous research works, the authors discussed the problems of multisystem distribution and ways to overcome them.7 They proposed a design to integrate the medical databases to assist the medical staff in their mission. Such a design allows easier communication between different systems via multiple platforms and languages. This step can minimize errors and risks, speed up decision making, improve data management, and save time and cost. Meanwhile, a previous research work proposed a conceptual database design to create a smart medical system in clinics.8 This design plays a vital role in combining the medical subsystems to make a complete system for a cardiothoracic surgery unit.

Finally, a project was designed to form an integrated algorithm which integrated CAD systems with PACS using a large computing infrastructure.9 This work aimed to create a system where users can request a CAD service and get the outcome in their PACS. This system helped end users request and obtain the results for any modality.

Materials and methods


This section discusses the main PACS components, followed by an introduction to the HL7 and DICOM standards and the proposed WebXA prototype application.

PACS components

PACS consists of four major components: image acquisition devices (imaging modalities),
communication networks, PACS archive and server, and integrated display workstations (WS).8 PACS
can be further connected to RIS and HIS health care systems via PACS communication networks as
shown in Figure 3.

Figure 3 PACS basic components and workflow.

Abbreviations: HIS, hospital information system; PACS, picture archiving and communication system;
RIS, radiology information system.

Image acquisition devices

The image acquisition devices comprise the imaging modality devices and the acquisition gateway computers. Imaging modalities include magnetic resonance imaging, computed tomography, PET, X-ray angiography, echocardiography and others. These modalities are interfaced with the PACS server via acquisition gateway computers.8 The major roles of the acquisition gateway computers are to acquire images from the imaging modalities, to convert the image data format from the manufacturer's specification to the PACS standard format, which is DICOM, and to perform some preprocessing functions on the data, such as resizing, background removal and orientation calibration.6

The two common methods of image acquisition are digitization of films and direct digital acquisition. The digitization of plain films is a vital method for converting radiology projections (films) into digital images, because computers can process only digital images. This can be achieved using film/image digitizers such as a laser scanner or a charge-coupled device (CCD). The second method of image acquisition is the capture of direct digital images, which can be done using currently developed X-ray devices. These devices can acquire digital images without the need for the imaging plates used in conventional radiography. Digital images are also obtained directly from inherently digital acquisition devices such as magnetic resonance imaging, computed tomography, ultrasound, and digital subtraction angiography.

Communication networks

The PACS communication network is the means of moving medical data between the components of PACS themselves, other systems, and remote locations. Similar to other computer networks, a PACS network provides a path for communication between imaging modalities, gateway computers, the PACS server, display and review WSs, HIS/RIS systems and any other remote medical locations. The factors of PACS networks are the network topology, line capacity and workflow assignments. The topology of communication networks refers to the physical or logical way these networks are designed, where two or more nodes connect to a link and two or more links form a network topology. The five main topologies used in medical environments are Ring, Star, Tree, Bus and Mesh.6
Theoretically, three main types of networks are used to transfer the medical data of radiology:

LAN network within one medical department to link imaging modalities, archive and data storage and
the display WSs;

LAN network to link different departments in a hospital (intra-hospital);

Tele-radiology network to transfer medical data to other remote hospitals in that region.

PACS archive and server

All the patients’ information and imaging examinations are sent to the PACS server for archiving. The data are sent to the PACS server from the acquisition gateway computers and from the HIS/RIS systems. The PACS server, which is the heart and engine of PACS, has two main components: storage media (database) and the archive system. The archive system of PACS needs two levels of archiving: short term and long term. The data (images) from the short-term level are retrieved in 2 seconds, whereas those from the long-term level are retrieved in ≤3 minutes.6

Examples of the storage media used for archiving are as follows:

redundant array of inexpensive disks (RAID) for prompt access to current images;

magnetic disks for speedy recovery of recently archived images;

erasable magneto-optical disks for temporary long-term archive;

read-only memory (ROM) in the optical disk library, which constitutes the permanent archive;

recently introduced digital versatile disks (DVD-ROM) for low-cost permanent archive;

digital linear tapes for backup.

PACS servers have many significant functions, some of which are listed below:

receives images from examinations via the acquisition gateways;

extracts text data describing the received examination from the DICOM image header;

updates the database management system;

determines the display WSs to which newly acquired examinations are to be sent;

automatically retrieves relevant comparison images from archived examinations;

manages short-term storage and the long-term library archive system;

automatically corrects the orientation of computed or digital radiography images;

determines optimal contrast and brightness parameters for displaying images.


Display WSs

The display WS is a very important component of the PACS network, playing a vital role in the clinical acceptance of PACS. It is the hardware component that replaces the alternator or the manual light box of the film-based radiology system. Traditionally, radiologists analyze films in a reading room using light boxes or alternators. The light boxes are lighted boards on which about 12 films may be hung at once for review, and an alternator can mechanically rotate a selection of a patient's films into position for diagnostic purposes. Simple image preparation operations, such as zooming with a magnifying glass and annotation of films, are performed using the alternators. Display WSs help radiologists make the primary diagnosis and hence are also called diagnostic WSs. These WSs are composed of a local storage database, a network connection for communications, resource management, display, and processing software. Display WSs provide some of the basic image processing functions such as access, manipulation, evaluation, and documentation.

HL7 and DICOM standards

Transmission of images and reports between different medical institutes is a hard mission for two reasons: first, information systems utilize various machine platforms, and second, the medical images and information are created by different imaging modalities from distinct producers.8 With the growing medical standards, Health Level 7 (HL7) and DICOM, incorporation of heterogeneous medical images and information into an organized system is made feasible. Interfacing two medical systems requires two elements: a common data format and a communication protocol. HL7 is a standard text-based information format, while DICOM incorporates both a data format and communication protocols.10 Using the HL7 standard, it is possible to exchange medical information among systems such as HIS, RIS and PACS. By adopting the DICOM standard, the medical images created by an assortment of modalities and manufacturers can be interfaced into an integrated health care system.

HL7, introduced in March 1987, was organized by a user–vendor committee to create a standard for electronic information exchange in health care environments, particularly for hospital applications. The HL7 standard refers to the highest level, the application level, of the seven communication levels of Open Systems Interconnection (OSI). The main objective is to improve the interface performance between computer applications from different manufacturers. This standard supports exchanging data among health care systems, for example, HIS, RIS and PACS. On the other hand, DICOM is a significant standard which was developed as a consequence of the initial efforts by the ACR and NEMA joint committee to promote communication of digital image data regardless of device manufacturer. This standard encourages the development of PACS and its interfacing with other hospital information systems in a similar way. In addition, DICOM permits the creation of diagnostic databases that can be queried by a wide variety of geographically distributed devices.
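For illustration only, the following sketch parses a minimal HL7 v2 message using the python-hl7 library (an assumption, not named in the text); the message content is invented:

```python
# Sketch: parsing an HL7 v2 ADT message of the kind exchanged between
# HIS, RIS, and PACS. Segments are separated by carriage returns.
import hl7

message = "\r".join([
    "MSH|^~\\&|HIS|HOSPITAL|PACS|RADIOLOGY|202401011200||ADT^A01|MSG001|P|2.3",
    "PID|1||123456||DOE^JOHN||19700101|M",
])

h = hl7.parse(message)
print(h.segment("PID")[5])               # patient name field: DOE^JOHN
```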

A work in 2008 discussed the integration of content-based image retrieval research with RIS and PACS.11 This work aimed to improve the workflow of the radiological daily routine. The importance of this integration comes from making the whole PACS archive available for radiologists to find an accurate diagnosis for the current patient study. In this article, the integration between RIS and PACS is achieved in WebXA, where all patient studies (images and information) are saved in the server database, and WebXA can call up any study using the name of the patient and the number of the study. The specialist can then view the images of a specific study and analyze them for a better diagnosis of the current study.

ACR–NEMA, formally the American College of Radiology and the National Electrical Manufacturers Association, established a committee to create a set of standards to serve as a common background for the different medical imaging vendors. The main objective was that newly created instruments would have the capacity to communicate and participate in sharing medical image data, specifically within a PACS environment. The committee, which focused primarily on issues concerning data exchange, interconnectivity, and communications among health care systems, started work in 1982.6

WebXA prototype application

WebXA is a PACS prototype application that has the same structural design as client-server systems, except that the software for the client and server is a Web-based application. The extra advantages of a Web-based server design over classic client-server are as follows: first, the client WS equipment may be hardware independent as long as the Web software is supported. Second, the Web-based software is totally portable, i.e., the Web-based application may be used at any location as long as an Internet connection is provided. The disadvantage of a Web-based application is that its functionality and performance are limited compared to a client-server application. One of the most important purposes of using Web-based viewers and applications in health care systems is real-time telecommunication or teleconsultation. But what is the teleconsultation process, and are there applications to support it? In this paper, we explain this process and test a Web-based prototype application to support the teleconsultation tool.

Teleconsultation addresses different scenarios in health care institutes. Let us consider the following scenario: a doctor in the physician department wants to consult with a radiologist in the reading room about a written report of a patient in the checking room. This consultation should occur without both specialists leaving their wards and walking all the way to each other's departments. Teleconsultation, therefore, is a circumstance in which two or more specialists located in various departments need to discuss and consult about a patient's results without leaving their departments. Earlier, there were a few teleconsultation tools, such as Televideo, NetMeeting (a sharing software from Microsoft), Proshare from Intel, and PCAnywhere from Symantec. However, most of these applications lack the convincing image processing functions which are becoming necessary in health care environments.

In this paper, a Web-based viewer application, called WebXA, has been created and tested on the PACS. This application is established for viewing and primary diagnostic purposes only; the medical imaging software is integrated with communication facilities to develop a remote viewing and consultation tool using the TCP/IP protocol.

Results

WebXA retrieves, stores and displays angiography images on a web browser anywhere, as long as an Internet connection is provided. A real-time text exchange capability has been added to the WebXA application to make it easy for specialists to exchange comments and consult about patients' reports and results, as shown in Figure 4. The WebXA viewer provides:

Easy and friendly GUI containing buttons to stop the moving frames of angiography and to move forward or backward.

Text exchange view that can be used by the specialists to exchange comments about the medical images
or the primary diagnostic of patients’ data.

Client/PACS server communications via TCP/IP.

Software functions such as save, open and print.

Figure 4 Web-based angiography viewer WebXA.

Note: The patient and physician details in the figure are example formats only.

Abbreviations: Nxt, next; Prv, previous; Psu, pause.
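As a minimal sketch of the kind of TCP/IP text exchange WebXA adds (standard library only; the host and port are illustrative, and a real deployment would need authentication and encryption):

```python
# Sketch: sending one teleconsultation comment over TCP/IP and reading
# the peer's reply.
import socket

def send_comment(host: str, port: int, comment: str) -> str:
    """Send one consultation comment and return the peer's reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(comment.encode("utf-8"))
        return sock.recv(4096).decode("utf-8")

# e.g., send_comment("pacs-server.local", 5000, "Please review frame 12")
```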

A few open-source PACS servers and DICOM viewers, such as K-PACS v1.6.0, ConQuest v1.4.17, ONIS v2.5.1.6 and ClearCanvas v7.1, have been used in this study to test the workflow of PACS components. A prototype application for DICOM viewing and teleconsultation purposes has been developed in this study; this application (WebXA) can store, display, download, send and print angiography images via Internet browsers. Since exchanging patient information among specialists located in different geographical areas is a vital process, a comment box has been added to the WebXA viewer to give medical doctors and radiologists the ability to discuss patients' medical reports and results.

The proposed WebXA application mimics the functionality of PACS discussed above: it stores, retrieves and displays coronary angiography images for radiologists and surgeons directly on their mobile phones, anytime and anywhere, in order to save diagnostic time and effort. This is a very important step for medical doctors. The application and its initial/preliminary results were tested by a heart surgeon, who was greatly satisfied with being able to use his personal mobile phone to check his patients' data anywhere and anytime, in the hospital or even outside.

Discussion

PACS, its components, network and benefits have been discussed in this paper. This system is an electronic and preferably filmless data system for acquiring, sorting, transporting, storing and electronically displaying health care images and data.12 Digital imaging vendors promote this system's benefits, including the elimination of expensive silver-based film, improved access to new and old films for all clinicians, reduction in the physical storage requirements of bulky films, and lower workforce costs. In this paper, a limited user study was undertaken at the Hospital Sungai Buloh, Malaysia, in cooperation with UPM, preceding the development of the web browser architecture, with many fewer users having access to the system. The study and implementation focused on medical staff and on the accessibility of medical images and reports before and after the PACS installation. The present study focuses on the effect that PACS has made on the working of the hospital overall and on the working lives of individual clinicians in many different disciplines.

This prototype can display the medical images (coronary angiography) on a mobile phone for physicians anywhere, which is a very important step for medical doctors. WebXA was tested by the third co-author of this article (Professor Dr DM Zamrin), a heart surgeon who works in the university hospital and was selected to carry out our tests. He was very happy with the initial results, and we promised to improve the work in the future towards a bigger and more accurate PACS for different types of medical images in the hospital.

Generally, PACS aims to save time by storing and distributing medical data of different modalities for radiologists to analyze. This prototype tries to integrate patient studies in one database and present them to radiologists and surgeons in real time. By doing this, surgeons will be able to view their patients' data anytime and anywhere, inside or outside hospitals, which could save time for patients. The limitation of this work is that it was applied to coronary angiography only; further image modalities should be added in the future.

Conclusion

This paper presents the PACS, its components, the communication standards in PACS networks and a
prototype Web-based application. The paper also discussed the conventional way of acquiring images of
patients and exchanging the results among specialists (before PACS). Then after, the presentation moves
to the era after PACS and the forward jump in the health care systems that PACS achieved. To prove the
point of applying PACS in medical institutes, we implemented a prototype application (WebXA) for
angiography display and exchange between different geographical locations. This viewer is able to
display angiography and to exchange comments between specialists who are located in different
departments. In addition to this application, few PACS servers have been tested with our viewer to send
and receive images. The primary results of this application and tests of PACS servers are convinced and
encouraging for more improvements.

Acknowledgment

The authors would like to present their appreciation for the Faculty of Medicine at Universiti Teknologi
MARA, Malaysia, for their support to apply WebXA application and for their will to extend the work in
the near future.
