Rangayyan-Biomedical Image Analysis


The BIOMEDICAL ENGINEERING Series

Series Editor Michael R. Neuman

Biomedical
Image Analysis
Biomedical Engineering
Series
Edited by Michael R. Neuman

Published Titles
Electromagnetic Analysis and Design in Magnetic Resonance
Imaging, Jianming Jin
Endogenous and Exogenous Regulation and
Control of Physiological Systems, Robert B. Northrop
Artificial Neural Networks in Cancer Diagnosis, Prognosis,
and Treatment, Raouf N.G. Naguib and Gajanan V. Sherbet
Medical Image Registration, Joseph V. Hajnal, Derek Hill, and
David J. Hawkes
Introduction to Dynamic Modeling of Neuro-Sensory Systems,
Robert B. Northrop
Noninvasive Instrumentation and Measurement in Medical
Diagnosis, Robert B. Northrop
Handbook of Neuroprosthetic Methods, Warren E. Finn
and Peter G. LoPresti
Signals and Systems Analysis in Biomedical Engineering,
Robert B. Northrop
Angiography and Plaque Imaging: Advanced Segmentation
Techniques, Jasjit S. Suri and Swamy Laxminarayan
Analysis and Application of Analog Electronic Circuits to
Biomedical Instrumentation, Robert B. Northrop
Biomedical Image Analysis, Rangaraj M. Rangayyan

Biomedical
Image Analysis
Rangaraj M. Rangayyan
University of Calgary
Calgary, Alberta, Canada

CRC PRESS
Boca Raton London New York Washington, D.C.

Library of Congress Cataloging-in-Publication Data

Catalog record is available from the Library of Congress

This book contains information obtained from authentic and highly regarded sources. Reprinted material
is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable
efforts have been made to publish reliable data and information, but the author and the publisher cannot
assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, microfilming, and recording, or by any information storage or
retrieval system, without prior permission in writing from the publisher.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for
creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC
for such copying.

Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation, without intent to infringe.

Visit the CRC Press Web site at www.crcpress.com

© 2005 by CRC Press LLC

No claim to original U.S. Government works


International Standard Book Number 0-8493-9695-6
Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
Printed on acid-free paper
To
my wife Mayura,
my daughter Vidya,
and my son Adarsh...
for etching in my mind
the most beautiful images
that I will treasure forever!

Preface

Background and Motivation


The science of medical imaging owes much of its existence to the discovery of
X rays by W.C. Roentgen over 100 years ago, in 1895. However, it was the
development of practical computed tomography scanners in the early 1970s by
G. Hounsfield and others that brought computers into medical imaging and
clinical practice. Since then, computers have become integral components of
modern medical imaging systems and hospitals, performing a variety of tasks
from data acquisition and image generation to image display and analysis.
With the widespread acceptance of computed tomography came an implicit
invitation to apply computers and computing to a host of other medical imag-
ing situations. As new imaging modalities were developed, the need for com-
puting in image generation, manipulation, display, and analysis grew
manyfold. Computers are now found in virtually every medical imaging system,
including radiography, ultrasound, nuclear medicine, and magnetic resonance
imaging systems. The strengths of computer applications in medical imaging
have been recognized to such an extent that radiology departments in many
hospitals are changing over to "totally digital" departments, using comput-
ers for image archival and communication as well. The humble X-ray film
that launched the field of radiology may soon vanish, thereby contributing to
better management of the environment.
The increase in the number of modalities of medical imaging and in their
practical use has been accompanied by an almost natural increase in the
scope and complexity of the associated problems, requiring further advanced
techniques for their solution. For example, physiological imaging with radio-
isotopes in nuclear medicine imaging comes with a host of problems such as
noise due to scatter, effects of attenuation along the path of propagation of
the gamma rays through the body, and severe blurring due to the collimators
used. Radiation dose concerns limit the strength and amount of the isotopes
that may be used, contributing to further reduction in image quality. Along
with the increase in the acceptance of mammography as a screening tool has
come the need to efficiently process such images using computer vision tech-
niques. The use of high-resolution imaging devices for digital mammography
and digital radiography, and the widespread adoption of picture archival and

communication systems, have created the need for higher levels of lossless
data compression. The use of multiple modalities of medical imaging for im-
proved diagnosis of a particular type of disease or disorder has raised the need
to combine diverse images of the same organ, or the results thereof, into a
readily comprehensible visual display.
The major strength in the application of computers to medical imaging lies
in the potential use of image processing and computer vision techniques for
quantitative or objective analysis. (See the July 1972 and May 1979 issues
of the Proceedings of the IEEE for historical reviews and articles on digital
image processing.) Medical images are primarily visual in nature; however,
visual analysis of images by human observers is usually accompanied by lim-
itations associated with interpersonal variations, errors due to fatigue, errors
due to the low rate of incidence of a certain sign of abnormality in a screening
application, environmental distractions, etc. The interpretation of an image
by an expert bears the weight of the experience and expertise of the ana-
lyst; however, such analysis is almost always subjective. Computer analysis
of image features, if performed with the appropriate logic, has the potential
to add objective strength to the interpretation of the expert. It thus becomes
possible to improve the diagnostic con dence and accuracy of even an expert
with many years of experience.
Developing an algorithm for medical image analysis, however, is not an easy
task; quite often, it might not even be a straightforward process. The engi-
neer or computer analyst is often bewildered by the variability of features in
biomedical signals, images, and systems that is far higher than that encoun-
tered in physical systems or observations. Benign diseases often mimic the
features of malignant diseases; malignancies may exhibit characteristic pat-
terns, which, however, are not always guaranteed to appear. Handling all of
the possibilities and the degrees of freedom in a biomedical system is a major
challenge in most applications. Techniques proven to work well with a certain
system or set of images may not work in another seemingly similar situation.

The Problem-solving Approach


The approach I have taken in presenting the material in this book is primarily
that of problem solving. Engineers are often said to be (with admiration, I
believe) problem solvers. However, the development of a problem statement
and gaining of a good understanding of the problem could require a significant
amount of preparatory work. I have selected a logical series of problems, from
the many I have encountered in my research work, for presentation in this
book. Each chapter deals with a certain type of problem with biomedical
images. Each chapter begins with a statement of the problem, and includes
illustrations of the problem with real-life images. Image processing or analysis
techniques are presented, starting with relatively simple "textbook methods",
followed by more sophisticated methods directed at specific problems. Each
chapter concludes with applications to significant and practical problems. The
book is illustrated copiously, in due consideration of the visual nature of the
subject matter.
The methods presented in the book are at a fairly high level of technical
and mathematical sophistication. A good background in one-dimensional sig-
nal and system analysis [1, 2, 3] is very much required in order to follow the
procedures and analyses. Familiarity with the theory of linear systems, sig-
nals, and transforms such as the Laplace and Fourier, in both continuous and
discrete versions, will be assumed. We shall only briefly study a few represen-
tative medical imaging techniques. We will study in more detail the problems
present with medical images after they have been acquired, and concentrate
on how to solve the problems. Some preparatory reading on medical imaging
equipment and techniques [3, 4, 5, 6] may be useful, but not always essential.

The Intended Audience


The book is primarily directed at engineering students in their final year
of undergraduate studies or in their (post-)graduate studies. Electrical and
Computer Engineering students with a rich background in signals and systems
[1, 2, 3] will be well prepared for the material in the book. Students in
other engineering disciplines or in computer science, physics, mathematics,
or geophysics should also be able to appreciate the material in this book. A
course on digital signal processing or digital filters [7] would form a useful
link, but a capable student without this background may not face much difficulty.
Additional study of a book on digital image processing [8, 9, 10, 11, 12, 13]
could assist in developing a good understanding of general image processing
methods, but is not required.
Practicing engineers, computer scientists, information technologists, medi-
cal physicists, and data-processing specialists working in diverse areas such as
telecommunications, seismic and geophysical applications, biomedical applica-
tions, hospital information systems, remote sensing, mapping, and geomatics
may find this book useful in their quest to learn advanced techniques for image
analysis. They could draw inspiration from other applications of data process-
ing or analysis, and satisfy their curiosity regarding computer applications in
medicine and computer-aided medical diagnosis.

Teaching and Learning Plans


An introduction to the nature of biomedical images is provided in Chap-
ter 1. The easy-to-read material in this chapter gives a general overview of
the imaging techniques that are commonly used to acquire biomedical images;
for detailed treatment of medical imaging, refer to Macovski [5], Robb [14],
Barrett and Swindell [3], Huda and Slone [6], and Cho et al. [4]. A good
understanding of the basics of image data acquisition procedures is essential
in order to develop appropriate methods for further treatment of the images.
Several concepts related to image quality and information content are de-
scribed in Chapter 2, along with the related basics of image processing such
as the Fourier transform and the modulation transfer function. The notions,
techniques, and measures introduced in this chapter are extensively used in
the book and in the field of biomedical image analysis; a clear understanding
of this material is an important prerequisite to further study of the subject.
Most of the images acquired in practice suffer loss of quality due to arti-
facts and practical limitations. Several methods for the characterization and
removal of artifacts and noise are presented in Chapter 3. Preprocessing of
images to remove artifacts without causing distortion or loss of the desired
information is an important step in the analysis of biomedical images.
Imaging and image processing techniques aimed toward the improvement of
the general quality or the desired features in images are described in Chapter 4.
Methods for contrast enhancement and improvement of the visibility of the
details of interest are presented with illustrative examples.
The important task of detecting regions of interest is the subject of Chap-
ter 5, the largest chapter in the book. Several approaches for the segmentation
and extraction of parts of images are described, along with methods to im-
prove initial approximations or results.
Objective analysis of biomedical images requires the extraction of numerical
features that characterize the most significant properties of the regions of
interest. Methods to characterize shape, texture, and oriented patterns are
described in Chapters 6, 7, and 8, respectively. Specific features are required
for each application, and the features that have been found to be useful in one
application may not suit a new application under investigation. Regardless,
a broad understanding of this subject area is essential in order to possess the
arsenal of feature extraction techniques that is required when attacking a new
problem.
The material in the book through Chapter 8 provides resources that are
more than adequate for a one-semester course with 40 to 50 hours of lectures.
Some of the advanced and specialized topics in these chapters may be omitted,
depending upon the methods and pace of presentation, as well as the level of
comprehension of the students.
The specialized topic of image reconstruction from projections is dealt with
in Chapter 9. The mathematical details related to the derivation of tomo-
graphic images are presented, along with examples of application. This chap-
ter may be skipped in an introductory course, but included in an advanced
course.
Chapter 10 contains descriptions of methods for the restoration of images
with known models of image degradation. The advanced material in this
chapter may be omitted in an introductory course, but forms an important
subject area for those who wish to explore the subject to its full depth.
The subject of image data compression and coding is treated in detail in
Chapter 11. With due regard to the importance of quality and fidelity in the
treatment of health-related information, the focus of the chapter is on lossless
compression. This subject may also be considered to be an advanced topic of
specialized interest, and limited to an advanced course.
Finally, the most important and significant tasks in biomedical image anal-
ysis, namely pattern analysis, pattern classification, and diagnostic decision, are
described in Chapter 12. The mathematical details of pattern classification
techniques are presented, along with procedures for their incorporation in
medical diagnosis and clinical assessment. Since this subject forms the cul-
mination of biomedical image analysis, it is recommended that parts of this
chapter be included even in an introductory course.
The book includes adequate material for two one-semester courses or a
full-year course on biomedical image analysis. The subject area is still a
matter of research and development: instructors should endeavor to augment
their courses with material selected from the latest developments published in
advanced journals such as the IEEE Transactions on Medical Imaging as well
as the proceedings of the SPIE series of conferences on medical imaging. The
topics of biometrics, multimodal imaging, multisensor fusion, image-guided
therapy and surgery, and advanced visualization, which are not dealt with in
this book, may also be added if desired.
Each chapter includes a number of study questions and problems to facili-
tate preparation for tests and examinations. Several laboratory exercises are
also provided at the end of each chapter, which could be used to formulate
hands-on exercises with real-life and/or synthetic images. Selected data files
related to some of the problems and exercises at the end of each chapter are
available at the site
www.enel.ucalgary.ca/People/Ranga/enel697
It is strongly recommended that the first one or two laboratory sessions
in the course be visits to a local hospital, health sciences center, or clinical
laboratory to view biomedical image acquisition and analysis in a practical
(clinical) setting. Images acquired from local sources (with the permissions
and approvals required) could form interesting and motivating material for
laboratory exercises, and should be used to supplement the data files pro-
vided. A few invited lectures and workshops by physiologists, radiologists,
pathologists, and other medical professionals should also be included in the
course so as to provide the students with a nonengineering perspective on the
subject.
Practical experience with real-life images is a key element in understanding
and appreciating biomedical image analysis. This aspect could be difficult and
frustrating at times, but provides professional satisfaction and educational
fun!
It is my humble hope that this book will assist students and researchers
who seek to enrich their lives and those of others with the wonderful powers
of biomedical image analysis. Electrical and Computer Engineering is indeed
a great field in the service of humanity.
Rangaraj Mandayam Rangayyan
Calgary, Alberta, Canada
November, 2004
About the Author

Rangaraj (Raj) Mandayam Rangayyan was born in Mysore, Karnataka, In-
dia, on July 21, 1955. He received the Bachelor of Engineering degree in
Electronics and Communication in 1976 from the University of Mysore at the
People's Education Society College of Engineering, Mandya, Karnataka, In-
dia, and the Ph.D. degree in Electrical Engineering from the Indian Institute
of Science, Bangalore, Karnataka, India, in 1980. He was with the University
of Manitoba, Winnipeg, Manitoba, Canada, from 1981 to 1984. He joined the
University of Calgary, Calgary, Alberta, Canada, in 1984.
He is, at present, a Professor with the Department of Electrical and Com-
puter Engineering (and an Adjunct Professor of Surgery and Radiology) at the
University of Calgary. His research interests are in the areas of digital signal
and image processing, biomedical signal analysis, medical imaging and image
analysis, and computer vision. His research projects have addressed topics
such as mammographic image enhancement and analysis for computer-aided
diagnosis of breast cancer; region-based image processing; knee-joint vibration
signal analysis for noninvasive diagnosis of articular cartilage pathology; di-
rectional analysis of collagen fibers and blood vessels in ligaments; restoration
of nuclear medicine images; analysis of textured images by cepstral filtering
and sonification; and several other applications of biomedical signal and image
analysis.
He has lectured extensively in many countries, including India, Canada,
United States, Brazil, Argentina, Uruguay, Chile, United Kingdom, The Ne-
therlands, France, Spain, Italy, Finland, Russia, Romania, Egypt, Malaysia,
Singapore, Thailand, Hong Kong, China, and Japan. He has collaborated
with many research groups in Brazil, Spain, France, and Romania.
He was an Associate Editor of the IEEE Transactions on Biomedical Engi-
neering from 1989 to 1996; the Program Chair and Editor of the Proceedings
of the IEEE Western Canada Exhibition and Conference on "Telecommuni-
cation for Health Care: Telemetry, Teleradiology, and Telemedicine", July
1990, Calgary, Alberta, Canada; the Canadian Regional Representative to
the Administrative Committee of the IEEE Engineering in Medicine and Bi-
ology Society (EMBS), 1990 to 1993; a Member of the Scientific Program
Committee and Editorial Board, International Symposium on Computerized
Tomography, Novosibirsk, Siberia, Russia, August 1993; the Program Chair
and Co-editor of the Proceedings of the 15th Annual International Conference
of the IEEE EMBS, October 1993, San Diego, CA; and Program Co-chair,

20th Annual International Conference of the IEEE EMBS, Hong Kong, Oc-
tober 1998.
His research work was recognized with the 1997 and 2001 Research Excel-
lence Awards of the Department of Electrical and Computer Engineering, the
1997 Research Award of the Faculty of Engineering, and by appointment as a
"University Professor" in 2003, at the University of Calgary. He was awarded
the Killam Resident Fellowship in 2002 by the University of Calgary in sup-
port of writing this book. He was recognized by the IEEE with the award
of the Third Millennium Medal in 2000, and was elected as a Fellow of the
IEEE in 2001, Fellow of the Engineering Institute of Canada in 2002, Fellow
of the American Institute for Medical and Biological Engineering in 2003, and
Fellow of SPIE: the International Society for Optical Engineering in 2003.

Photo by Trudie Lee.


Acknowledgments

Writing this book on the multifaceted subject of biomedical image analysis


has been challenging, yet yielding more knowledge; tiring, yet stimulating the
thirst to understand and appreciate more of the subject matter; and difficult,
yet satisfying when a part was brought to a certain stage of completion.
A number of very important people have shaped me and my educational
background. My mother, Srimati Padma Srinivasan Rangayyan, and my fa-
ther, Sri Srinivasan Mandayam Rangayyan, encouraged me to keep striving to
gain higher levels of education and to set and achieve higher goals all the time.
I have been very fortunate to have been taught and guided by a number of
dedicated teachers, the most important of them being Professor Ivaturi Surya
Narayana Murthy, my Ph.D. supervisor, who introduced me to the topic of
biomedical signal analysis at the Indian Institute of Science, Bangalore, Kar-
nataka, India. I offer my humble prayers, respect, and admiration to their
spirits.
My basic education was imparted by many influential teachers at Saint
Joseph's Convent, Saint Joseph's Indian High School, and Saint Joseph's Col-
lege in Mandya and Bangalore, Karnataka, India. My engineering educa-
tion was provided by the People's Education Society College of Engineering,
Mandya, affiliated with the University of Mysore. I was initiated into research
in biomedical engineering at the Indian Institute of Science, India's premier
research institute and one of the most highly acclaimed research institutions
in the world. I express my gratitude to all of my teachers.
My postdoctoral training with Richard Gordon at the University of Mani-
toba, Winnipeg, Manitoba, Canada, made a major contribution to my com-
prehension of the field of biomedical imaging and image analysis; I express
my sincere gratitude to him. My association with clinical researchers and
practitioners at the University of Calgary and the University of Manitoba has
been invaluable in furthering my understanding of the subject matter of this
book. I express my deep gratitude to Cyril Basil Frank, Gordon Douglas Bell,
Joseph Edward Leo Desautels, Leszek Hahn, and Reinhard Kloiber of the
University of Calgary.
My understanding and appreciation of the subject of biomedical signal and
image analysis has been boosted by the collaborative research and studies
performed with my many graduate students, postdoctoral fellows, research as-
sociates, and colleagues. I place on record my gratitude to Fabio Jose Ayres,

Sridhar Krishnan, Naga Ravindra Mudigonda, Margaret Hilary Alto, Han-
ford John Deglint, Thanh Minh Nguyen, Ricardo Jose Ferrari, Liang Shen,
Roseli de Deus Lopes, Antonio Cesar Germano Martins, Marcelo Knorich
Zuffo, Begoña Acha Piñero, Carmen Serrano Gotarredona, Laura Roa, Annie
France Frere, Graham Stewart Boag, Vicente Odone Filho, Marcelo Valente,
Silvia Delgado Olabarriaga, Christian Roux, Basel Solaiman, Olivier Menut,
Denise Guliato, Fabricio Adorno, Mario Ribeiro, Mihai Ciuc, Vasile Buzuloiu,
Titus Zaharia, Constantin Vertan, Margaret Sarah Rose, Salahuddin Elka-
diki, Kevin Eng, Nema Mohamed El-Faramawy, Arup Das, Farshad Faghih,
William Alexander Rolston, Yiping Shen, Zahra Marjan Kazem Moussavi,
Joseph Provine, Hieu Ngoc Nguyen, Djamel Boulfelfel, Tamer Farouk Rabie,
Katherine Olivia Ladly, Yuanting Zhang, Zhi-Qiang Liu, Raman Bhalachan-
dra Paranjape, Joseph Andre Rodrigue Blais, Robert Charles Bray, Gopinath
Ramaswamaiah Kuduvalli, Sanjeev Tavathia, William Mark Morrow, Tim-
othy Chi Hung Hon, Subhasis Chaudhuri, Paul Soble, Kirby Jaman, Atam
Prakash Dhawan, and Richard Joseph Lehner. In particular, I thank Liang,
Naga, Ricardo, Gopi, Djamel, Hilary, Tamer, Antonio, Bill Rolston, Bill Mor-
row, and Joseph for permitting me to use significant portions of their theses;
Naga for producing the cover illustration; and Fabio, Hilary, Liang, Mihai,
Gopi, Joseph, Ricardo, and Hanford for careful proofreading of drafts of the
book. Sections of the book were reviewed by Cyril Basil Frank, Joseph Edward
Leo Desautels, Leszek Hahn, Richard Frayne, Norm Bartley, Randy Hoang
Vu, Ilya Kamenetsky, Vijay Devabhaktuni, and Sanjay Srinivasan. I express
my gratitude to them for their comments and advice. I thank Leonard Bruton
and Abu Sesay for discussions on some of the topics described in the book.
I also thank the students of my course ENEL 697 Digital Image Processing
over the past several years for their comments and feedback.
The book has benefited significantly from illustrations and text provided
by a number of researchers worldwide, as identified in the references and per-
missions cited. I thank them all for enriching the book with their gifts of
knowledge and kindness. Some of the test images used in the book were ob-
tained from the Center for Image Processing Research, Rensselaer Polytechnic
Institute, Troy, NY, www.ipl.rpi.edu; I thank them for the resource.
The research projects that have provided me with the background and expe-
rience essential in order to write the material in this book have been supported
by many agencies. I thank the Natural Sciences and Engineering Research
Council of Canada, the Alberta Heritage Foundation for Medical Research,
the Alberta Breast Cancer Foundation, Control Data Corporation, Kids Can-
cer Care Foundation of Alberta, the University of Calgary, the University
of Manitoba, and the Indian Institute of Science for supporting my research
projects.
I thank the Killam Trusts and the University of Calgary for awarding me
a Killam Resident Fellowship to facilitate work on this book. I gratefully ac-
knowledge support from the Alberta Provincial Biomedical Engineering Grad-
uate Programme, funded by a grant from the Whitaker Foundation, toward
student assistantship for preparation of some of the exercises and illustrations
for this book and the related courses ENEL 563 Biomedical Signal Analysis
and ENEL 697 Digital Image Processing at the University of Calgary. I am
pleased to place on record my gratitude for the generous support from the
Department of Electrical and Computer Engineering and the Faculty of En-
gineering at the University of Calgary in terms of supplies, services, and relief
from other duties.
I thank Steven Leikeim for help with computer-related issues and problems.
My association with the IEEE Engineering in Medicine and Biology Society
(EMBS) in many positions has benefited me considerably in numerous ways.
In particular, the period as an Associate Editor of the IEEE Transactions on
Biomedical Engineering was rewarding, as it provided me with a wonderful
opportunity to work with many leading researchers and authors of scientific
articles. I thank IEEE EMBS and SPIE for lending professional support to
my career on many fronts.
Writing this book has been a monumental task, often draining me of all
of my energy. The infinite source of inspiration and recharging of my energy
has been my family: my wife Mayura, my daughter Vidya, and my son
Adarsh. While supporting me with their love and affection, they have had to
bear the loss of my time and effort at home. I express my sincere gratitude to
my family for their love and support, and place on record their contribution
toward the preparation of this book.
I thank CRC Press and its associates for inviting me to write this book and
for completing the publication process in a friendly and efficient manner.
Rangaraj Mandayam Rangayyan
Calgary, Alberta, Canada
November, 2004
Contents
Preface vii
About the Author xiii
Acknowledgments xv
Symbols and Abbreviations xxix
1 The Nature of Biomedical Images 1
1.1 Body Temperature as an Image . . . . . . . . . . . . . . . . . 2
1.2 Transillumination . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Light Microscopy . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Electron Microscopy . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 X-ray Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.1 Breast cancer and mammography . . . . . . . . . . . 22
1.6 Tomography . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.7 Nuclear Medicine Imaging . . . . . . . . . . . . . . . . . . . . 36
1.8 Ultrasonography . . . . . . . . . . . . . . . . . . . . . . . . . 43
1.9 Magnetic Resonance Imaging . . . . . . . . . . . . . . . . . . 47
1.10 Objectives of Biomedical Image Analysis . . . . . . . . . . . 53
1.11 Computer-aided Diagnosis . . . . . . . . . . . . . . . . . . . . 55
1.12 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
1.13 Study Questions and Problems . . . . . . . . . . . . . . . . . 57
1.14 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 58
2 Image Quality and Information Content 61
2.1 Difficulties in Image Acquisition and Analysis . . . . . . . . 61
2.2 Characterization of Image Quality . . . . . . . . . . . . . . . 64
2.3 Digitization of Images . . . . . . . . . . . . . . . . . . . . . . 65
2.3.1 Sampling . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.3.2 Quantization . . . . . . . . . . . . . . . . . . . . . . . 66
2.3.3 Array and matrix representation of images . . . . . . 69
2.4 Optical Density . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.5 Dynamic Range . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.6 Contrast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.7 Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.8 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.9 Blur and Spread Functions . . . . . . . . . . . . . . . . . . . 90
2.10 Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

xix
xx Biomedical Image Analysis
2.11 The Fourier Transform and Spectral Content . . . . . . . . . 99
2.11.1 Important properties of the Fourier transform . . . . 110
2.12 Modulation Transfer Function . . . . . . . . . . . . . . . . . 122
2.13 Signal-to-Noise Ratio . . . . . . . . . . . . . . . . . . . . . . 131
2.14 Error-based Measures . . . . . . . . . . . . . . . . . . . . . . 138
2.15 Application: Image Sharpness and Acutance . . . . . . . . . 139
2.16 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
2.17 Study Questions and Problems . . . . . . . . . . . . . . . . . 145
2.18 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 149
3 Removal of Artifacts 151
3.1 Characterization of Artifacts . . . . . . . . . . . . . . . . . . 151
3.1.1 Random noise . . . . . . . . . . . . . . . . . . . . . . 151
3.1.2 Examples of noise PDFs . . . . . . . . . . . . . . . . . 159
3.1.3 Structured noise . . . . . . . . . . . . . . . . . . . . . 164
3.1.4 Physiological interference . . . . . . . . . . . . . . . . 165
3.1.5 Other types of noise and artifact . . . . . . . . . . . . 166
3.1.6 Stationary versus nonstationary processes . . . . . . . 166
3.1.7 Covariance and cross-correlation . . . . . . . . . . . . 168
3.1.8 Signal-dependent noise . . . . . . . . . . . . . . . . . 169
3.2 Synchronized or Multiframe Averaging . . . . . . . . . . . . . 171
3.3 Space-domain Local-statistics-based Filters . . . . . . . . . . 174
3.3.1 The mean filter . . . . . . . . . . . . . . . . . . . . . . 176
3.3.2 The median filter . . . . . . . . . . . . . . . . . . . . . 177
3.3.3 Order-statistic filters . . . . . . . . . . . . . . . . . . . 181
3.4 Frequency-domain Filters . . . . . . . . . . . . . . . . . . . . 193
3.4.1 Removal of high-frequency noise . . . . . . . . . . . . 194
3.4.2 Removal of periodic artifacts . . . . . . . . . . . . . . 199
3.5 Matrix Representation of Image Processing . . . . . . . . . . 202
3.5.1 Matrix representation of images . . . . . . . . . . . . 203
3.5.2 Matrix representation of transforms . . . . . . . . . . 206
3.5.3 Matrix representation of convolution . . . . . . . . . . 212
3.5.4 Illustrations of convolution . . . . . . . . . . . . . . . 215
3.5.5 Diagonalization of a circulant matrix . . . . . . . . . . 218
3.5.6 Block-circulant matrix representation of a 2D filter . . 221
3.6 Optimal Filtering . . . . . . . . . . . . . . . . . . . . . . . . 224
3.6.1 The Wiener filter . . . . . . . . . . . . . . . . . . . . . 225
3.7 Adaptive Filters . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.7.1 The local LMMSE filter . . . . . . . . . . . . . . . . . 228
3.7.2 The noise-updating repeated Wiener filter . . . . . . . 234
3.7.3 The adaptive 2D LMS filter . . . . . . . . . . . . . . . 235
3.7.4 The adaptive rectangular window LMS filter . . . . . 237
3.7.5 The adaptive-neighborhood filter . . . . . . . . . . . . 241
3.8 Comparative Analysis of Filters for Noise Removal . . . . . . 251
3.9 Application: Multiframe Averaging in Confocal Microscopy . 270
3.10 Application: Noise Reduction in Nuclear Medicine Imaging . 271
3.11 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
3.12 Study Questions and Problems . . . . . . . . . . . . . . . . . 281
3.13 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 283
4 Image Enhancement 285
4.1 Digital Subtraction Angiography . . . . . . . . . . . . . . . . 286
4.2 Dual-energy and Energy-subtraction X-ray Imaging . . . . . 287
4.3 Temporal Subtraction . . . . . . . . . . . . . . . . . . . . . . 291
4.4 Gray-scale Transforms . . . . . . . . . . . . . . . . . . . . . . 291
4.4.1 Gray-scale thresholding . . . . . . . . . . . . . . . . . 291
4.4.2 Gray-scale windowing . . . . . . . . . . . . . . . . . . 292
4.4.3 Gamma correction . . . . . . . . . . . . . . . . . . . . 294
4.5 Histogram Transformation . . . . . . . . . . . . . . . . . . . 301
4.5.1 Histogram equalization . . . . . . . . . . . . . . . . . 301
4.5.2 Histogram specification . . . . . . . . . . . . . . . . . 305
4.5.3 Limitations of global operations . . . . . . . . . . . . 310
4.5.4 Local-area histogram equalization . . . . . . . . . . . 310
4.5.5 Adaptive-neighborhood histogram equalization . . . . 311
4.6 Convolution Mask Operators . . . . . . . . . . . . . . . . . . 314
4.6.1 Unsharp masking . . . . . . . . . . . . . . . . . . . . . 314
4.6.2 Subtracting Laplacian . . . . . . . . . . . . . . . . . . 316
4.6.3 Limitations of fixed operators . . . . . . . . . . . . . . 323
4.7 High-frequency Emphasis . . . . . . . . . . . . . . . . . . . . 325
4.8 Homomorphic Filtering for Enhancement . . . . . . . . . . . 328
4.8.1 Generalized linear filtering . . . . . . . . . . . . . . . 328
4.9 Adaptive Contrast Enhancement . . . . . . . . . . . . . . . . 338
4.9.1 Adaptive-neighborhood contrast enhancement . . . . 338
4.10 Objective Assessment of Contrast Enhancement . . . . . . . 346
4.11 Application: Contrast Enhancement of Mammograms . . . . 350
4.11.1 Clinical evaluation of contrast enhancement . . . . . . 354
4.12 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
4.13 Study Questions and Problems . . . . . . . . . . . . . . . . . 358
4.14 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 361
5 Detection of Regions of Interest 363
5.1 Thresholding and Binarization . . . . . . . . . . . . . . . . . 364
5.2 Detection of Isolated Points and Lines . . . . . . . . . . . . . 365
5.3 Edge Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 367
5.3.1 Convolution mask operators for edge detection . . . . 367
5.3.2 The Laplacian of Gaussian . . . . . . . . . . . . . . . 370
5.3.3 Scale-space methods for multiscale edge detection . . 380
5.3.4 Canny's method for edge detection . . . . . . . . . . . 390
5.3.5 Fourier-domain methods for edge detection . . . . . . 390
5.3.6 Edge linking . . . . . . . . . . . . . . . . . . . . . . . 392
5.4 Segmentation and Region Growing . . . . . . . . . . . . . . . 393
5.4.1 Optimal thresholding . . . . . . . . . . . . . . . . . . 395
5.4.2 Region-oriented segmentation of images . . . . . . . . 396
5.4.3 Splitting and merging of regions . . . . . . . . . . . . 397
5.4.4 Region growing using an additive tolerance . . . . . . 397
5.4.5 Region growing using a multiplicative tolerance . . . . 400
5.4.6 Analysis of region growing in the presence of noise . . 401
5.4.7 Iterative region growing with multiplicative tolerance 402
5.4.8 Region growing based upon the human visual system 405
5.4.9 Application: Detection of calcifications by multitolerance region growing . . . 410
5.4.10 Application: Detection of calcifications by linear prediction error . . . 414
5.5 Fuzzy-set-based Region Growing to Detect Breast Tumors . . 417
5.5.1 Preprocessing based upon fuzzy sets . . . . . . . . . . 419
5.5.2 Fuzzy segmentation based upon region growing . . . . 421
5.5.3 Fuzzy region growing . . . . . . . . . . . . . . . . . . 429
5.6 Detection of Objects of Known Geometry . . . . . . . . . . . 434
5.6.1 The Hough transform . . . . . . . . . . . . . . . . . . 435
5.6.2 Detection of straight lines . . . . . . . . . . . . . . . . 437
5.6.3 Detection of circles . . . . . . . . . . . . . . . . . . . . 440
5.7 Methods for the Improvement of Contour or Region Estimates 444
5.8 Application: Detection of the Spinal Canal . . . . . . . . . . 449
5.9 Application: Detection of the Breast Boundary in Mammo-
grams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
5.9.1 Detection using the traditional active deformable con-
tour model . . . . . . . . . . . . . . . . . . . . . . . . 456
5.9.2 Adaptive active deformable contour model . . . . . . 464
5.9.3 Results of application to mammograms . . . . . . . . 476
5.10 Application: Detection of the Pectoral Muscle in Mammo-
grams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
5.10.1 Detection using the Hough transform . . . . . . . . . 481
5.10.2 Detection using Gabor wavelets . . . . . . . . . . . . . 487
5.10.3 Results of application to mammograms . . . . . . . . 495
5.11 Application: Improved Segmentation of Breast Masses by
Fuzzy-set-based Fusion of Contours and Regions . . . . . . . 500
5.12 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
5.13 Study Questions and Problems . . . . . . . . . . . . . . . . . 527
5.14 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 527
6 Analysis of Shape 529
6.1 Representation of Shapes and Contours . . . . . . . . . . . . 529
6.1.1 Signatures of contours . . . . . . . . . . . . . . . . . . 530
6.1.2 Chain coding . . . . . . . . . . . . . . . . . . . . . . . 530
6.1.3 Segmentation of contours . . . . . . . . . . . . . . . . 534
6.1.4 Polygonal modeling of contours . . . . . . . . . . . . . 537
6.1.5 Parabolic modeling of contours . . . . . . . . . . . . . 543
6.1.6 Thinning and skeletonization . . . . . . . . . . . . . . 548
6.2 Shape Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 549
6.2.1 Compactness . . . . . . . . . . . . . . . . . . . . . . . 551
6.2.2 Moments . . . . . . . . . . . . . . . . . . . . . . . . . 555
6.2.3 Chord-length statistics . . . . . . . . . . . . . . . . . . 560
6.3 Fourier Descriptors . . . . . . . . . . . . . . . . . . . . . . . 562
6.4 Fractional Concavity . . . . . . . . . . . . . . . . . . . . . . 569
6.5 Analysis of Spicularity . . . . . . . . . . . . . . . . . . . . . 570
6.6 Application: Shape Analysis of Calcifications . . . . . . . . . 575
6.7 Application: Shape Analysis of Breast Masses and Tumors . . 578
6.8 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
6.9 Study Questions and Problems . . . . . . . . . . . . . . . . . 581
6.10 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 582
7 Analysis of Texture 583
7.1 Texture in Biomedical Images . . . . . . . . . . . . . . . . . . 584
7.2 Models for the Generation of Texture . . . . . . . . . . . . . 584
7.2.1 Random texture . . . . . . . . . . . . . . . . . . . . . 589
7.2.2 Ordered texture . . . . . . . . . . . . . . . . . . . . . 589
7.2.3 Oriented texture . . . . . . . . . . . . . . . . . . . . . 590
7.3 Statistical Analysis of Texture . . . . . . . . . . . . . . . . . 596
7.3.1 The gray-level co-occurrence matrix . . . . . . . . . . 597
7.3.2 Haralick's measures of texture . . . . . . . . . . . . . 600
7.4 Laws' Measures of Texture Energy . . . . . . . . . . . . . . . 603
7.5 Fractal Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 605
7.5.1 Fractal dimension . . . . . . . . . . . . . . . . . . . . 608
7.5.2 Fractional Brownian motion model . . . . . . . . . . . 609
7.5.3 Fractal analysis of texture . . . . . . . . . . . . . . . . 609
7.5.4 Applications of fractal analysis . . . . . . . . . . . . . 611
7.6 Fourier-domain Analysis of Texture . . . . . . . . . . . . . . 612
7.7 Segmentation and Structural Analysis of Texture . . . . . . . 621
7.7.1 Homomorphic deconvolution of periodic patterns . . . 623
7.8 Audification and Sonification of Texture in Images . . . . . . 625
7.9 Application: Analysis of Breast Masses Using Texture and
Gradient Measures . . . . . . . . . . . . . . . . . . . . . . . . 627
7.9.1 Adaptive normals and ribbons around mass margins . 629
7.9.2 Gradient and contrast measures . . . . . . . . . . . . 632
7.9.3 Results of pattern classification . . . . . . . . . . . . . 635
7.10 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
7.11 Study Questions and Problems . . . . . . . . . . . . . . . . . 637
7.12 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 638
8 Analysis of Oriented Patterns 639
8.1 Oriented Patterns in Images . . . . . . . . . . . . . . . . . . 639
8.2 Measures of Directional Distribution . . . . . . . . . . . . . 641
8.2.1 The rose diagram . . . . . . . . . . . . . . . . . . . . 641
8.2.2 The principal axis . . . . . . . . . . . . . . . . . . . . 641
8.2.3 Angular moments . . . . . . . . . . . . . . . . . . . . 642
8.2.4 Distance measures . . . . . . . . . . . . . . . . . . . . 643
8.2.5 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . 643
8.3 Directional Filtering . . . . . . . . . . . . . . . . . . . . . . . 644
8.3.1 Sector filtering in the Fourier domain . . . . . . . . . 646
8.3.2 Thresholding of the component images . . . . . . . . . 649
8.3.3 Design of fan filters . . . . . . . . . . . . . . . . . . . 651
8.4 Gabor Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
8.4.1 Multiresolution signal decomposition . . . . . . . . . . 660
8.4.2 Formation of the Gabor filter bank . . . . . . . . . . . 664
8.4.3 Reconstruction of the Gabor filter bank output . . . . 665
8.5 Directional Analysis via Multiscale Edge Detection . . . . . . 666
8.6 Hough-Radon Transform Analysis . . . . . . . . . . . . . . . 671
8.6.1 Limitations of the Hough transform . . . . . . . . . . 671
8.6.2 The Hough and Radon transforms combined . . . . . 673
8.6.3 Filtering and integrating the Hough-Radon space . . 676
8.7 Application: Analysis of Ligament Healing . . . . . . . . . . 679
8.7.1 Analysis of collagen remodeling . . . . . . . . . . . . . 680
8.7.2 Analysis of the microvascular structure . . . . . . . . 684
8.8 Application: Detection of Breast Tumors . . . . . . . . . . . 699
8.8.1 Framework for pyramidal decomposition . . . . . . . . 707
8.8.2 Segmentation based upon density slicing . . . . . . . . 710
8.8.3 Hierarchical grouping of isointensity contours . . . . . 712
8.8.4 Results of segmentation of masses . . . . . . . . . . . 712
8.8.5 Detection of masses in full mammograms . . . . . . . 719
8.8.6 Analysis of mammograms using texture flow-field . . . 726
8.8.7 Adaptive computation of features in ribbons . . . . . 732
8.8.8 Results of mass detection in full mammograms . . . . 735
8.9 Application: Bilateral Asymmetry in Mammograms . . . . . 742
8.9.1 The fibroglandular disc . . . . . . . . . . . . . . . . . 743
8.9.2 Gaussian mixture model of breast density . . . . . . . 744
8.9.3 Delimitation of the fibroglandular disc . . . . . . . . . 747
8.9.4 Motivation for directional analysis of mammograms . 755
8.9.5 Directional analysis of fibroglandular tissue . . . . . . 757
8.9.6 Characterization of bilateral asymmetry . . . . . . . . 766
8.10 Application: Architectural Distortion in Mammograms . . . 775
8.10.1 Detection of spiculated lesions and distortion . . . . . 775
8.10.2 Phase portraits . . . . . . . . . . . . . . . . . . . . . 779
8.10.3 Estimating the orientation field . . . . . . . . . . . . 780
8.10.4 Characterizing orientation fields with phase portraits 782
8.10.5 Feature extraction for pattern classification . . . . . . 785
8.10.6 Application to segments of mammograms . . . . . . . 785
8.10.7 Detection of sites of architectural distortion . . . . . . 786
8.11 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
8.12 Study Questions and Problems . . . . . . . . . . . . . . . . . 796
8.13 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 796
9 Image Reconstruction from Projections 797
9.1 Projection Geometry . . . . . . . . . . . . . . . . . . . . . . . 797
9.2 The Fourier Slice Theorem . . . . . . . . . . . . . . . . . . . 798
9.3 Backprojection . . . . . . . . . . . . . . . . . . . . . . . . . . 801
9.3.1 Filtered backprojection . . . . . . . . . . . . . . . . . 804
9.3.2 Discrete filtered backprojection . . . . . . . . . . . . . 806
9.4 Algebraic Reconstruction Techniques . . . . . . . . . . . . . . 813
9.4.1 Approximations to the Kaczmarz method . . . . . . . 820
9.5 Imaging with Diffracting Sources . . . . . . . . . . . . . . . 825
9.6 Display of CT Images . . . . . . . . . . . . . . . . . . . . . . 825
9.7 Agricultural and Forestry Applications . . . . . . . . . . . . . 829
9.8 Microtomography . . . . . . . . . . . . . . . . . . . . . . . . 831
9.9 Application: Analysis of the Tumor in Neuroblastoma . . . . 834
9.9.1 Neuroblastoma . . . . . . . . . . . . . . . . . . . . . . 834
9.9.2 Tissue characterization using CT . . . . . . . . . . . . 838
9.9.3 Estimation of tissue composition from CT images . . 839
9.9.4 Results of application to clinical cases . . . . . . . . . 844
9.9.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . 845
9.10 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
9.11 Study Questions and Problems . . . . . . . . . . . . . . . . . 854
9.12 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 855
10 Deconvolution, Deblurring, and Restoration 857
10.1 Linear Space-invariant Restoration Filters . . . . . . . . . . . 857
10.1.1 Inverse filtering . . . . . . . . . . . . . . . . . . . . . . 858
10.1.2 Power spectrum equalization . . . . . . . . . . . . . . 860
10.1.3 The Wiener filter . . . . . . . . . . . . . . . . . . . . . 863
10.1.4 Constrained least-squares restoration . . . . . . . . . 872
10.1.5 The Metz filter . . . . . . . . . . . . . . . . . . . . . . 874
10.1.6 Information required for image restoration . . . . . . 875
10.1.7 Motion deblurring . . . . . . . . . . . . . . . . . . . . 875
10.2 Blind Deblurring . . . . . . . . . . . . . . . . . . . . . . . . . 877
10.2.1 Iterative blind deblurring . . . . . . . . . . . . . . . . 878
10.3 Homomorphic Deconvolution . . . . . . . . . . . . . . . . . . 885
10.3.1 The complex cepstrum . . . . . . . . . . . . . . . . . . 885
10.3.2 Echo removal by Radon-domain cepstral filtering . . . 886
10.4 Space-variant Restoration . . . . . . . . . . . . . . . . . . . . 891
10.4.1 Sectioned image restoration . . . . . . . . . . . . . . . 893
10.4.2 Adaptive-neighborhood deblurring . . . . . . . . . . . 894
10.4.3 The Kalman filter . . . . . . . . . . . . . . . . . . . . 898
10.5 Application: Restoration of Nuclear Medicine Images . . . . 919
10.5.1 Quality control . . . . . . . . . . . . . . . . . . . . . . 922
10.5.2 Scatter compensation . . . . . . . . . . . . . . . . . . 922
10.5.3 Attenuation correction . . . . . . . . . . . . . . . . . . 923
10.5.4 Resolution recovery . . . . . . . . . . . . . . . . . . . 924
10.5.5 Geometric averaging of conjugate projections . . . . . 926
10.5.6 Examples of restoration of SPECT images . . . . . . . 934
10.6 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
10.7 Study Questions and Problems . . . . . . . . . . . . . . . . . 953
10.8 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 954
11 Image Coding and Data Compression 955
11.1 Considerations Based on Information Theory . . . . . . . . . 956
11.1.1 Noiseless coding theorem for binary transmission . . . 957
11.1.2 Lossy versus lossless compression . . . . . . . . . . . . 957
11.1.3 Distortion measures and fidelity criteria . . . . . . . . 959
11.2 Fundamental Concepts of Coding . . . . . . . . . . . . . . . . 960
11.3 Direct Source Coding . . . . . . . . . . . . . . . . . . . . . . 961
11.3.1 Huffman coding . . . . . . . . . . . . . . . . . . . . . 961
11.3.2 Run-length coding . . . . . . . . . . . . . . . . . . . . 969
11.3.3 Arithmetic coding . . . . . . . . . . . . . . . . . . . . 969
11.3.4 Lempel-Ziv coding . . . . . . . . . . . . . . . . . . . . 974
11.3.5 Contour coding . . . . . . . . . . . . . . . . . . . . . . 977
11.4 Application: Source Coding of Digitized Mammograms . . . 978
11.5 The Need for Decorrelation . . . . . . . . . . . . . . . . . . . 980
11.6 Transform Coding . . . . . . . . . . . . . . . . . . . . . . . . 984
11.6.1 The discrete cosine transform . . . . . . . . . . . . . . 987
11.6.2 The Karhunen-Loève transform . . . . . . . . . . . . 989
11.6.3 Encoding of transform coefficients . . . . . . . . . . . 992
11.7 Interpolative Coding . . . . . . . . . . . . . . . . . . . . . . . 1001
11.8 Predictive Coding . . . . . . . . . . . . . . . . . . . . . . . . 1004
11.8.1 Two-dimensional linear prediction . . . . . . . . . . . 1005
11.8.2 Multichannel linear prediction . . . . . . . . . . . . . 1009
11.8.3 Adaptive 2D recursive least-squares prediction . . . . 1026
11.9 Image Scanning Using the Peano-Hilbert Curve . . . . . . . . 1033
11.9.1 Definition of the Peano-scan path . . . . . . . . . . . 1035
11.9.2 Properties of the Peano-Hilbert curve . . . . . . . . . 1040
11.9.3 Implementation of Peano scanning . . . . . . . . . . . 1040
11.9.4 Decorrelation of Peano-scanned data . . . . . . . . . . 1041
11.10 Image Coding and Compression Standards . . . . . . . . . . 1043
11.10.1 The JBIG Standard . . . . . . . . . . . . . . . . . . . 1046
11.10.2 The JPEG Standard . . . . . . . . . . . . . . . . . . . 1049
11.10.3 The MPEG Standard . . . . . . . . . . . . . . . . . . 1050
11.10.4 The ACR/ NEMA and DICOM Standards . . . . . . 1050
11.11 Segmentation-based Adaptive Scanning . . . . . . . . . . . . 1051
11.11.1 Segmentation-based coding . . . . . . . . . . . . . . . 1051
11.11.2 Region-growing criteria . . . . . . . . . . . . . . . . . 1052
11.11.3 The SLIC procedure . . . . . . . . . . . . . . . . . . . 1055
11.11.4 Results of image data compression with SLIC . . . . . 1055
11.12 Enhanced JBIG Coding . . . . . . . . . . . . . . . . . . . . . 1062
11.13 Lower-limit Analysis of Lossless Data Compression . . . . . . 1066
11.13.1 Memoryless entropy . . . . . . . . . . . . . . . . . . . 1070
11.13.2 Markov entropy . . . . . . . . . . . . . . . . . . . . . 1071
11.13.3 Estimation of the true source entropy . . . . . . . . . 1071
11.14 Application: Teleradiology . . . . . . . . . . . . . . . . . . . 1079
11.14.1 Analog teleradiology . . . . . . . . . . . . . . . . . . . 1080
11.14.2 Digital teleradiology . . . . . . . . . . . . . . . . . . . 1082
11.14.3 High-resolution digital teleradiology . . . . . . . . . . 1084
11.15 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
11.16 Study Questions and Problems . . . . . . . . . . . . . . . . . 1086
11.17 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 1087
12 Pattern Classification and Diagnostic Decision 1089
12.1 Pattern Classification . . . . . . . . . . . . . . . . . . . . . . 1091
12.2 Supervised Pattern Classification . . . . . . . . . . . . . . . . 1095
12.2.1 Discriminant and decision functions . . . . . . . . . . 1095
12.2.2 Distance functions . . . . . . . . . . . . . . . . . . . . 1097
12.2.3 The nearest-neighbor rule . . . . . . . . . . . . . . . . 1104
12.3 Unsupervised Pattern Classification . . . . . . . . . . . . . . 1104
12.3.1 Cluster-seeking methods . . . . . . . . . . . . . . . . . 1105
12.4 Probabilistic Models and Statistical Decision . . . . . . . . . 1110
12.4.1 Likelihood functions and statistical decision . . . . . . 1110
12.4.2 Bayes classifier for normal patterns . . . . . . . . . . . 1118
12.5 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . . 1120
12.6 The Training and Test Steps . . . . . . . . . . . . . . . . . . 1125
12.6.1 The leave-one-out method . . . . . . . . . . . . . . . . 1125
12.7 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . 1126
12.8 Measures of Diagnostic Accuracy . . . . . . . . . . . . . . . . 1132
12.8.1 Receiver operating characteristics . . . . . . . . . . . 1135
12.8.2 McNemar's test of symmetry . . . . . . . . . . . . . . 1138
12.9 Reliability of Features, Classifiers, and Decisions . . . . . . . 1140
12.9.1 Statistical separability and feature selection . . . . . . 1141
12.10 Application: Image Enhancement for Breast Cancer Screen-
ing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
12.10.1 Case selection, digitization, and presentation . . . . . 1145
12.10.2 ROC and statistical analysis . . . . . . . . . . . . . . 1147
12.10.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . 1159
12.11 Application: Classification of Breast Masses and Tumors via
Shape Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
12.12 Application: Content-based Retrieval and Analysis of Breast
Masses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
12.12.1 Pattern classification of masses . . . . . . . . . . . . . 1167
12.12.2 Content-based retrieval . . . . . . . . . . . . . . . . . 1169
12.12.3 Extension to telemedicine . . . . . . . . . . . . . . . . 1177
12.13 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
12.14 Study Questions and Problems . . . . . . . . . . . . . . . . . 1184
12.15 Laboratory Exercises and Projects . . . . . . . . . . . . . . . 1185
References 1187
Index 1262
Symbols and Abbreviations
Note: Bold-faced letters represent the vector or matrix form of the variable
in the corresponding plain letters.
Variables or symbols used within limited contexts are not listed here; they
are described within their context.
The mathematical symbols listed may stand for other entities or variables
in different applications; only the common associations used in this book are
listed for ready reference.
a(p q), a autoregressive model or filter coefficients
arctan inverse tangent, tan⁻¹
arg argument of
atan inverse tangent, tan⁻¹
au arbitrary units
AADCM adaptive active deformable contour model
ACF autocorrelation function
ACR American College of Radiology
ADC analog-to-digital converter
ALZ adaptive Lempel-Ziv coding
AMTA AMT acutance
ANCE adaptive-neighborhood contrast enhancement
AND adaptive-neighborhood deblurring
ANN artificial neural network
ANNS adaptive-neighborhood noise subtraction
AR autoregressive model or lter
ARMA autoregressive, moving-average model or lter
ARW adaptive rectangular window
Az area under the ROC curve
b background intensity
b bit
b(m n) moving-average model or filter coefficients
bps bits per second
B byte
BIBO bounded-input – bounded-output stability
BI-RADSTM Breast Imaging Reporting and Data System
BP backprojection
cd candela
cm centimeter
C contrast
C covariance matrix
Ci Curie
cf Co compactness
Ci the ith class in a pattern classi cation problem
Cxy covariance between x and y
CAD computer-aided diagnosis
CBP convolution backprojection
CBIR content-based image retrieval
CC cranio-caudal
CCD charge-coupled device
CCF cross-correlation function
CCITT Comité Consultatif International Téléphonique et Télégraphique
CD compact disk
CLS constrained least squares
CMTA cascaded modulation transfer acutance
CMYK [cyan, magenta, yellow, black] representation of color
CNR contrast-to-noise ratio
CNS central nervous system
CR computed radiography
CREW compression with reversible embedded wavelets
CRT cathode ray tube
CSD cross-spectral density, cross-spectrum
CT computed tomography
CV coefficient of variation
dB decibel
dE Euclidean dimension
df fractal dimension
dpi dots per inch
DAC digital-to-analog converter
DC direct current; zero frequency
DCT discrete cosine transform
DFT discrete Fourier transform
DICOM Digital Imaging and Communications in Medicine
DoG difference of Gaussians
DPCM differential pulse code modulation
DR digital radiography
DSA digital subtraction angiography
DWT directional wavelet transform
e(n), E (!) model or estimation error
eV electron volt
exp(x) exponential function, e^x
ECG electrocardiogram, electrocardiography
EEG electroencephalogram
EM electromagnetic
EM expectation-maximization
Ex total energy of the signal x
E[ ] statistical expectation operator
f foreground intensity
fc cutoff frequency (usually at −3 dB) of a filter
fcc fractional concavity
ff shape factor obtained using Fourier descriptors
fps frames per second
fs sampling frequency
f (m n) a digital image, typically original or undistorted
f (x y) an image, typically original or undistorted
FBP ltered backprojection
FFT fast Fourier transform
FID free-induction decay
FIR finite impulse response (filter)
FM frequency modulation
FN false negative
FNF false-negative fraction
FOM figure of merit
FP false positive
FPF false-positive fraction
FT Fourier transform
FWHM full width at half the maximum
g ( m n) a digital image, typically processed or distorted
g (x y ) an image, typically processed or distorted
h(m n) impulse response of a system
h(p) measure of information
h(x y) impulse response of a system
H hydrogen
H Hurst coefficient
H magnetic field strength
H entropy
Hf g joint entropy of f and g
Hf|g conditional entropy of f given g
H Hermitian (complex-conjugate) transposition of a matrix
H (k l) discrete Fourier transform of h(m n)
H (u v) frequency response of a lter, Fourier transform of h(x y)
HINT hierarchical interpolation
HU Hounsfield unit
HVS human visual system
Hz Hertz
i index of a series
I the identity matrix
If|g mutual information
IEPA image edge-pro le acutance
IFT inverse Fourier transform
IIR infinite impulse response (filter)
ISO International Organization for Standardization
j √−1
JBIG Joint Bi-level Image (experts) Group
JM Jeffries-Matusita distance
JND just-noticeable difference
JPEG Joint Photographic Experts Group
k kilo (1,000)
(k l) indices in the discrete Fourier (frequency) domain
kVp kilovolt peak
K kilo (1,024)
KESF knife-edge spread function
KLT Karhunen-Loeve transform
ln natural logarithm (base e)
lp/mm line pairs per millimeter
L an image processing operator or transform in matrix form
Lij loss function in pattern classification
LEAP low-energy all-purpose collimator
LEGP low-energy general-purpose collimator
LLMMSE local linear minimum mean-squared error
LMMSE linear minimum mean-squared error
LMS least mean squares
LMSE Laplacian mean-squared error
LoG Laplacian of Gaussian
LP linear prediction (model)
LSF line spread function
LSI linear shift-invariant
LUT look-up table
LZW Lempel-Ziv-Welch code
m meter
m mean
m mean vector of a pattern class
max maximum
mA milliampere
mf shape factor using moments
min minimum
mm millimeter
(m n) indices in the discrete space (image) domain
mod modulus or modulo
modem modulator – demodulator
M number of samples or pixels
MA moving average ( lter)
MAP maximum-a-posteriori probability
MCL medial collateral ligament
MIAS Mammographic Image Analysis Society, London, England
MDL minimum description length
ME maximum entropy
MLO medio-lateral oblique
MMSE minimum mean-squared error
MPEG Moving Picture Experts Group
MR magnetic resonance
MRI magnetic resonance imaging
MS mean-squared
MSE mean-squared error
MTF modulation (magnitude) transfer function
n an index
nm nanometer
N number of samples or pixels
NE normalized error
NEMA National Electrical Manufacturers Association
NMR nuclear magnetic resonance
NMSE normalized mean-squared error
NPV negative predictive value
NSHP nonsymmetric half plane
OD optical density
OTF optical transfer function
pf (l) normalized histogram or PDF of image f
pf g (l1 l2 ) joint PDF of images f and g
pf|g (l1 l2 ) conditional PDF of f given g
pixel picture cell or element
pm mth ray sum in ART
pps pulses per second
(p q) indices of a 2D array
pθ (t) projection (Radon transform) of an image at angle θ
p(x) probability density function of the random variable x
p(x|Ci ) likelihood function of class Ci or state-conditional PDF of x
P model order
P (x) probability of the event x
Pf (l) histogram of image f
P (Ci |x) posterior probability that x belongs to the class Ci
Pθ (w) Fourier transform of the projection pθ (t)
P a predicate
PA posterior–anterior
PACS picture archival and communication system
PCA principal-component analysis
PCG phonocardiogram (heart sound signal)
PDF probability density function
PET positron emission tomography
PMSE perceptual mean-squared error
PMT photomultiplier tube
PPV positive predictive value
PSD power spectral density, power spectrum
PSE power spectrum equalization
PSF point spread function
PSV prediction selection values in JPEG
qθ (t) filtered projection of an image at angle θ
Q model order
QP quarter plane
rj (x) average risk or loss in pattern classification
(r s) temporary indices of a 2D array
R+ the set of nonnegative real numbers
RBST rubber-band straightening transform
RD relative dispersion
RDM radial distance measures
RF radio-frequency
RGB [red, green, blue] color representation
RLS recursive least-squares
RMS root mean-squared
ROC receiver operating characteristics
ROI region of interest
ROS region of support
RUKF reduced-update Kalman lter
s second
s space variable in the projection (Radon) space
Sf (u v) power spectral density of the image f
SAR synthetic-aperture radar
SD standard deviation
SEM scanning electron microscope
SI spiculation index
SLIC segmentation-based lossless image coding
SMTA system modulation transfer acutance
SNR signal-to-noise ratio
SPECT single-photon emission computed tomography
SQF subjective quality factor
SQRI square-root integral
STFT short-time Fourier transform
SVD singular value decomposition
S+ sensitivity of a test
S− specificity of a test
t time variable
t space variable in the projection (Radon) space
T Tesla (strength of a magnetic field)
T a threshold
T as a superscript, vector or matrix transposition
Tc technetium
Tl thallium
T1 longitudinal relaxation time constant in MRI
T2 transverse magnetization time constant in MRI
T+ positive test result
T− negative test result
TEM transmission electron microscope
Th threshold
TN true negative
TNF true-negative fraction
TP true positive
TPF true-positive fraction
Tr trace of a matrix
TSE total squared error
TV television
u(x, y) unit step function
(u, v) frequency coordinates in the continuous Fourier domain
UHF ultra high frequency
voxel volume cell or element
V volt
VLSI very-large-scale integrated circuit
w filter tap weight; weighting function
w frequency variable related to projections
w filter or weight vector
WHT Walsh-Hadamard transform
WN Fourier transform kernel function, WN = exp(−j 2π/N)
W Fourier transform operator in matrix form
(x, y) image coordinates in the space domain
x a feature vector in pattern classification
YIQ [luminance, in-phase, quadrature] color representation
z a prototype feature vector in pattern classification
Z the set of all integers
∅ null set
1D one-dimensional
2D two-dimensional
3D three-dimensional
4D four-dimensional
γxy correlation coefficient between x and y
Γxy coherence between x and y: Fourier transform of γxy
Γ(p) a fuzzy membership function
δ Dirac delta (impulse) function
Δx, Δy sampling intervals along the x and y axes
ε model error; total squared error
η a random variable or noise process
η scale factor in fractal analysis
θ an angle
θ a threshold
θ, Θ cross-correlation function
(θ, t) the Radon (projection) space
λ forgetting factor in the RLS filter
μ the mean (average) of a random variable
μ X-ray attenuation coefficient
μm micrometer
μCT micro-computed tomography
ρ correlation coefficient
σ the standard deviation of a random variable
σ² the variance of a random variable
σfg covariance between images f and g
Σfg covariance between images f and g in matrix form
φ basis function of a transform
φf autocorrelation of image f in array form
Φf autocorrelation of image f in matrix form
φfg cross-correlation between images f and g in array form
Φfg cross-correlation between images f and g in matrix form
Φf Fourier transform of φf; power spectral density of f
∇ gradient operator
⟨ ⟩ dot product
! factorial
∗ when in-line, convolution
∗ as a superscript, complex conjugation
# number of
‾ average or normalized version of the variable under the bar
‾ complement of the variable under the bar
ˆ complex cepstrum of the signal (function of space)
ˆ complex logarithm of the signal (function of frequency)
˜ estimate of the variable under the symbol
˜ a variant of a function
′, ″, ‴ first, second, and third derivatives of the preceding function
′ a variant of a function
× cross product when the related entities are vectors
∀ for all
∈ belongs to or is in (the set)
{ } a set
⊂ subset

⊃ superset
∩ intersection
∪ union
≡ equivalent to
| given, conditional upon
→ maps to
← gets (updated as)
⇒ leads to
⇔ transform pair
[ ] closed interval, including the limits
( ) open interval, not including the limits
| | absolute value or magnitude
| | determinant of a matrix
‖ ‖ norm of a vector or matrix
⌈x⌉ ceiling operator: the smallest integer ≥ x
⌊x⌋ floor operator: the largest integer ≤ x
1
The Nature of Biomedical Images

The human body is composed of many systems, such as the cardiovascular system, the musculo-skeletal system, and the central nervous system. Each system is made up of several subsystems that carry on many physiological processes. For example, the visual system performs the task of focusing visual or pictorial information on to the retina, transduction of the image information into neural signals, and encoding and transmission of the neural signals to the visual cortex. The visual cortex is responsible for interpretation of the image information. The cardiac system performs the important task of rhythmic pumping of blood through the arterial network of the body to facilitate the delivery of nutrients, as well as pumping of blood through the pulmonary system for oxygenation of the blood itself. The anatomical features of the organs related to a physiological system often demonstrate characteristics that reflect the functional aspects of its processes as well as the well-being or integrity of the system itself.

Physiological processes are complex phenomena, including neural or hormonal stimulation and control; inputs and outputs that could be in the form of physical material or information; and action that could be mechanical, electrical, or biochemical. Most physiological processes are accompanied by or manifest themselves as signals that reflect their nature and activities. Such signals could be of many types, including biochemical in the form of hormones or neurotransmitters, electrical in the form of potential or current, and physical in the form of pressure or temperature.

Diseases or defects in a physiological system cause alterations in its normal processes, leading to pathological processes that affect the performance, health, and general well-being of the system. A pathological process is typically associated with signals and anatomical features that are different in some respects from the corresponding normal patterns. If we possess a good understanding of a system of interest, it becomes possible to observe the corresponding signals and features and assess the state of the system. The task is not difficult when the signal is simple and appears at the outer surface of the body. However, most systems and organs are placed well within the body and enclosed in protective layers (for good reason!). Investigating or probing such systems typically requires the use of some form of penetrating radiation or invasive procedure.


1.1 Body Temperature as an Image

Most infections cause a rise in the temperature of the body, which may be sensed easily, albeit in a relative and qualitative manner, via the palm of one's hand. Objective or quantitative measurement of temperature requires an instrument, such as a thermometer.

A single measurement f of temperature is a scalar, and represents the thermal state of the body at a particular physical location in or on the body denoted by its spatial coordinates (x, y, z), and at a particular or single instant of time t. If we record the temperature continuously in some form, such as a strip-chart record, we obtain a signal as a one-dimensional (1D) function of time, which may be expressed in the continuous-time or analog form as f(t). The units applicable here are °C (degrees Celsius) for the temperature variable, and s (seconds) for the temporal variable t. If some means were available to measure the temperature of the body at every spatial position, we could obtain a three-dimensional (3D) distribution of temperature as f(x, y, z). Furthermore, if we were to perform the 3D measurement at every instant of time, we would obtain a 3D function of time as f(x, y, z, t); this entity may also be referred to as a four-dimensional (4D) function.
When oral temperature, for example, is measured at discrete instants of time, it may be expressed in discrete-time form as f(nT) or f(n), where n is the index or measurement sample number of the array of values, and T represents the uniform interval between the time instants of measurement. A discrete-time signal that can take amplitude values only from a limited list of quantized levels is called a digital signal; this distinction between discrete-time and digital signals is often ignored.
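The distinction above can be sketched in a few lines of code. This is an illustration only: the continuous temperature model f(t) below is hypothetical, not data from this book.

```python
import math

# Hypothetical continuous body-temperature model f(t), in degrees Celsius,
# with t in seconds; an illustrative stand-in, not clinical data.
def f(t):
    return 35.5 + 2.0 * math.sin(2.0 * math.pi * t / (24.0 * 3600.0))

T = 2.0 * 3600.0                 # sampling interval T: one sample every 2 hours
N = 9                            # number of samples, n = 0, 1, ..., 8

# Discrete-time signal f(nT): sampled in time, continuous in amplitude.
discrete_time = [f(n * T) for n in range(N)]

# Digital signal: amplitudes restricted to a limited list of quantized
# levels (here, steps of 0.1 degree Celsius).
digital = [round(x, 1) for x in discrete_time]
```

The list `discrete_time` is still real-valued; only after the rounding step does it become a digital signal in the strict sense used above.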
If one were to use a thermal camera and take a picture of a body, a two-dimensional (2D) representation of the heat radiated from the body would be obtained. Although the temperature distribution within the body (and even on the surface of the body) is a 3D entity, the picture produced by the camera is a 2D snapshot of the heat radiation field. We then have a 2D spatial function of temperature (an image), which could be represented as f(x, y). The units applicable here are °C for the temperature variable itself, and mm (millimeters) for the spatial variables x and y. If the image were to be sampled in space and represented on a discrete spatial grid, the corresponding data could be expressed as f(mΔx, nΔy), where Δx and Δy are the sampling intervals along the horizontal and vertical axes, respectively (in spatial units such as mm). It is common practice to represent a digital image simply as f(m, n), which could be interpreted as a 2D array or a matrix of values. It should be noted at the outset that, while images are routinely treated as arrays, matrices, and related mathematical entities, they are almost always representative of physical or other measures of organs or of physiological processes that impose practical limitations on the range, degrees of freedom, and other properties of the image data.
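The spatial sampling described above can be sketched in the same way. The continuous field f(x, y) below is hypothetical; the point is only how f(m, n) = f(mΔx, nΔy) becomes a 2D array.

```python
import numpy as np

dx, dy = 1.0, 1.0                # sampling intervals (mm) along the two axes
M, N = 4, 5                      # grid size: M rows, N columns

# Hypothetical continuous surface-temperature field f(x, y), degrees Celsius.
def f(x, y):
    return 33.0 + 0.1 * x + 0.2 * y

# Digital image f(m, n) = f(m * dx, n * dy): a 2D array (matrix) of samples.
image = np.array([[f(m * dx, n * dy) for n in range(N)] for m in range(M)])

print(image.shape)               # (4, 5): an M-by-N matrix of temperature values
```

Once in this array form, the image can be manipulated with the matrix operations used throughout the rest of the book, subject to the physical limitations noted above.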
Examples: In intensive-care monitoring, the tympanic (ear drum) temperature is often measured using an infrared sensor. Occasionally, when catheters are being used for other purposes, a temperature sensor may also be introduced into an artery or the heart to measure the core temperature of the body. It then becomes possible to obtain a continuous measurement of temperature, although only a few samples taken at intervals of a few minutes may be stored for subsequent analysis. Figure 1.1 illustrates representations of temperature measurements as a scalar, an array, and a signal that is a function of time. It is obvious that the graphical representation facilitates easier and faster comprehension of trends in the temperature than the numerical format. Long-term recordings of temperature can facilitate the analysis of temperature-regulation mechanisms [15, 16].
Infrared (with wavelength in the range 3,000–5,000 nm) or thermal sensors may also be used to capture the heat radiated or emitted from a body or a part of a body as an image. Thermal imaging has been investigated as a potential tool for the detection of breast cancer. A tumor is expected to be more vascularized than its neighboring tissues, and hence could be at a slightly higher temperature. The skin surface near the tumor may also demonstrate a relatively high temperature. Temperature differences of the order of 2 °C have been measured between surface regions near breast tumors and neighboring tissues. Figure 1.2 shows thermal images of a patient with benign fibrocysts and a patient with breast cancer; the local increase in temperature due to a tumor is evident in the latter case. Thermography can help in the diagnosis of advanced cancer, but has limited success in the detection of early breast cancer [17, 18]. Recent improvements in detectors and imaging techniques have created a renewed interest in the application of thermography for the detection of breast cancer [19, 20, 21, 22, 23].

Infrared imaging via a telethermographic camera has been applied to the detection of varicocele, which is the most common cause of infertility in men [24, 25, 26]. In normal men, the testicular temperature is about 3–4 °C below the core body temperature. In the case of varicocele, dilation of the testicular veins reduces the venous return from the scrotum, causes stagnation of blood and edema, and leads to increased testicular temperature. In the experiments conducted by Merla et al. [25], a cold patch was applied to the subject's scrotum, and the thermal recovery curves were analyzed. The results obtained showed that the technique was successful in detecting subclinical varicocele. Vlaisavljević [26] showed that telethermography can provide better diagnostic accuracy in the detection of varicocele than contact thermography.

33.5 °C
(a)

Time (hours)       08    10    12    14    16    18    20    22    24
Temperature (°C)   33.5  33.3  34.5  36.2  37.3  37.5  38.0  37.8  38.0
(b)

[Plot of the temperature readings: temperature in degrees Celsius (vertical axis, 32 to 39) versus time in hours (horizontal axis, 8 to 24).]
(c)

FIGURE 1.1
Measurements of the temperature of a patient presented as (a) a scalar with one temperature measurement f at a time instant t; (b) an array f(n) made up of several measurements at different instants of time; and (c) a signal f(t) or f(n). The horizontal axis of the plot represents time in hours; the vertical axis gives temperature in degrees Celsius. Data courtesy of Foothills Hospital, Calgary.
(a) (b)
FIGURE 1.2
Body temperature as a 2D image f(x, y) or f(m, n). The images illustrate the distribution of surface temperature measured using an infrared camera operating in the 3,000–5,000 nm wavelength range. (a) Image of a patient with pronounced vascular features and benign fibrocysts in the breasts. (b) Image of a patient with a malignant mass in the upper-outer quadrant of the left breast. Images courtesy of P. Hoekstra, III, Therma-Scan, Inc., Huntington Woods, MI.
The thermal images shown in Figure 1.2 serve to illustrate an important distinction between two major categories of medical images:
• anatomical or physical images, and
• functional or physiological images.
The images illustrate the notion of body temperature as a signal or image. Each point in the images in Figure 1.2 represents body temperature, which is related to the ongoing physiological or pathological processes at the corresponding location in the body. A thermal image is, therefore, a functional image. An ordinary photograph obtained with reflected light, on the other hand, would be a purely anatomical or physical image. More sophisticated techniques that provide functional images related to circulation and various physiological processes are described in the following sections.

1.2 Transillumination
Transillumination, diaphanography, and diaphanoscopy involve the shining of
visible light or near-infrared radiation through a part of the body, and viewing
or imaging the transmitted radiation. The technique has been investigated
for the detection of breast cancer, the attractive feature being the use of
nonionizing radiation 27]. The use of near-infrared radiation appears to have
more potential than visible light, due to the observation that nitrogen-rich
compounds preferentially absorb (or attenuate) infrared radiation. The fat
and broglandular tissue in the mature breast contain much less nitrogen than
malignant tissues. Furthermore, the hemoglobin in blood has a high nitrogen
content, and tumors are more vascularized than normal tissues. For these
reasons, breast cancer appears as a relatively dark region in a transilluminated
image.
The e ectiveness of transillumination is limited by scatter and ine ective
penetration of light through a large organ such as the breast. Transillumina-
tion has been found to be useful in di erentiating between cystic (uid- lled)
and solid lesions however, the technique has had limited success in distin-
guishing malignant tumors from benign masses 18, 28, 29].

1.3 Light Microscopy

Studies of the fine structure of biological cells and tissues require significant magnification for visualization of the details of interest. Useful magnification of up to 1,000 may be obtained via light microscopy by the use of combinations of lenses. However, the resolution of light microscopy is reduced by the following factors [30]:

• Diffraction: The bending of light at edges causes blurring; the image of a pinhole appears as a blurred disc known as the Airy disc.

• Astigmatism: Due to nonuniformities in lenses, a point may appear as an ellipse.

• Chromatic aberration: Electromagnetic (EM) waves of different wavelength or energy that compose the ordinarily used white light converge at different focal planes, thereby causing enlargement of the focal point. This effect may be corrected for by using monochromatic light. See Section 3.9 for a description of confocal microscopy.

• Spherical aberration: The rays of light arriving at the periphery of a lens are refracted more than the rays along the axis of the lens. This causes the rays from the periphery and the axis not to arrive at a common focal point, thereby resulting in blurring. The effect may be reduced by using a small aperture.

• Geometric distortion: Poorly crafted lenses may cause geometric distortion such as the pin-cushion effect and barrel distortion.

Whereas the best resolution achievable by the human eye is of the order of 0.1–0.2 mm, light microscopes can provide resolving power up to about 0.2 μm.
Example: Figure 1.3 shows a rabbit ventricular myocyte in its relaxed state as seen through a light microscope at a magnification of about 600. The experimental setup was used to study the contractility of the myocyte with the application of electrical stimuli [31].

Example: Figure 1.4 shows images of three-week-old scar tissue and forty-week-old healed tissue samples from rabbit ligaments at a magnification of about 300. The images demonstrate the alignment patterns of the nuclei of fibroblasts (stained to appear as the dark objects in the images): the three-week-old scar tissue has many fibroblasts that are scattered in different directions, whereas the forty-week-old healed sample has fewer fibroblasts that are well-aligned along the length of the ligament (the horizontal edge of the image). The appearance of the forty-week-old sample is closer to that of normal samples than that of the three-week-old sample. Images of this nature have been found to be useful in studying the healing and remodeling processes in ligaments [32].

FIGURE 1.3
A single ventricular myocyte (of a rabbit) in its relaxed state. The width (thickness) of the myocyte is approximately 15 μm. Image courtesy of R. Clark, Department of Physiology and Biophysics, University of Calgary.

(a)

(b)
FIGURE 1.4
(a) Three-week-old scar tissue sample, and (b) forty-week-old healed tissue
sample from rabbit medial collateral ligaments. Images courtesy of C.B.
Frank, Department of Surgery, University of Calgary.
1.4 Electron Microscopy

Accelerated electrons possess EM wave properties, with the wavelength given by λ = h/(mv), where h is Planck's constant, m is the mass of the electron, and v is the electron's velocity; this relationship reduces to λ = 1.23/√V nm, where V is the accelerating voltage [30]. At a voltage of 60 kV, an electron beam has an effective wavelength of about 0.005 nm, and a resolving power limit of about 0.003 nm. Imaging at a low kV provides high contrast but low resolution, whereas imaging at a high kV provides high resolution due to smaller wavelength but low contrast due to higher penetrating power. In addition, a high-kV beam causes less damage to the specimen, as the faster electrons pass through the specimen in less time than with a low-kV beam. Electron microscopes can provide useful magnification of the order of 10⁶, and may be used to reveal the ultrastructure of biological tissues. Electron microscopy typically requires the specimen to be fixed, dehydrated, dried, mounted, and coated with a metal.
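The approximation λ = 1.23/√V (V in volts, λ in nanometers) is easy to check numerically; the function below is a sketch of that formula only.

```python
import math

def electron_wavelength_nm(voltage_volts):
    """Approximate effective wavelength (nm) of an electron beam accelerated
    through the given voltage, using the nonrelativistic approximation
    lambda = 1.23 / sqrt(V) quoted in the text."""
    return 1.23 / math.sqrt(voltage_volts)

# At 60 kV the effective wavelength is about 0.005 nm, as stated above.
print(round(electron_wavelength_nm(60e3), 3))   # 0.005
```

The inverse-square-root dependence also makes the contrast-versus-resolution trade-off explicit: raising the accelerating voltage shortens the wavelength, improving the resolution limit while increasing penetrating power.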
Transmission electron microscopy: A transmission electron microscope (TEM) consists of a high-voltage electron beam generator, a series of EM lenses, a specimen holding and changing system, and a screen-film holder, all enclosed in vacuum. In TEM, the electron beam passes through the specimen, is affected in a manner similar to light, and the resulting image is captured through a screen-film combination or viewed via a phosphorescent viewing screen.

Example: Figure 1.5 shows TEM images of collagen fibers (in cross-section) in rabbit ligament samples. The images facilitate analysis of the diameter distribution of the fibers [33]. Scar samples have been observed to have an almost uniform distribution of fiber diameter in the range 60–70 nm, whereas normal samples have an average diameter of about 150 nm over a broader distribution. Methods for the detection and analysis of circular objects are described in Sections 5.6.1, 5.6.3, and 5.8.

Example: In patients with hematuria, the glomerular basement membrane of capillaries in the kidney is thinner (< 200 nm) than the normal thickness of the order of 300 nm [34]. Investigation of this feature requires needle-core biopsy of the kidney and TEM imaging. Figure 1.6 shows a TEM image of a capillary of a normal kidney in cross-section. Figure 1.7 (a) shows an image of a sample with normal membrane thickness; Figure 1.7 (b) shows an image of a sample with reduced and variable thickness. Although the ranges of normal and abnormal membrane thickness have been established by several studies [34], the diagnostic decision process is subjective; methods for objective and quantitative analysis are desired in this application.

(a)

(b)
FIGURE 1.5
TEM images of collagen bers in rabbit ligament samples at a magni cation
of approximately 30 000. (a) Normal and (b) scar tissue. Images courtesy
of C.B. Frank, Department of Surgery, University of Calgary.

FIGURE 1.6
TEM image of a kidney biopsy sample at a magnification of approximately 3,500. The image shows the complete cross-section of a capillary with normal membrane thickness. Image courtesy of H. Benediktsson, Department of Pathology and Laboratory Medicine, University of Calgary.
(a) (b)
FIGURE 1.7
TEM images of kidney biopsy samples at a magnification of approximately 8,000. (a) The sample shows normal capillary membrane thickness. (b) The sample shows reduced and varying membrane thickness. Images courtesy of H. Benediktsson, Department of Pathology and Laboratory Medicine, University of Calgary.
Scanning electron microscopy: A scanning electron microscope (SEM) is similar to a TEM in many ways, but uses a finely focused electron beam with a diameter of the order of 2 nm to scan the surface of the specimen. The electron beam is not transmitted through the specimen, which could be fairly thick in SEM. Instead, the beam is used to scan the surface of the specimen in a raster pattern, and the secondary electrons that are emitted from the surface of the sample are detected and amplified through a photomultiplier tube (PMT), and used to form an image on a cathode-ray tube (CRT). An SEM may be operated in different modes to detect a variety of signals emitted from the sample, and may be used to obtain images with a depth of field of several mm.

Example: Figure 1.8 illustrates SEM images of collagen fibers in rabbit ligament samples (freeze-fractured surfaces) [35]. The images are useful in analyzing the angular distribution of fibers and the realignment process during healing after injury. It has been observed that collagen fibers in a normal ligament are well aligned, that fibers in scar tissue lack a preferred orientation, and that organization and alignment return toward their normal patterns during the course of healing [36, 37, 35]. Image processing methods for directional analysis are described in Chapter 8.

(a) (b)
FIGURE 1.8
SEM images of collagen fibers in rabbit ligament samples at a magnification of approximately 4,000. (a) Normal and (b) scar tissue. Reproduced with permission from C.B. Frank, B. MacFarlane, P. Edwards, R. Rangayyan, Z.Q. Liu, S. Walsh, and R. Bray, "A quantitative analysis of matrix alignment in ligament scars: A comparison of movement versus immobilization in an immature rabbit model", Journal of Orthopaedic Research, 9(2): 219–227, 1991. © Orthopaedic Research Society.

1.5 X-ray Imaging

The medical diagnostic potential of X rays was realized soon after their discovery by Roentgen in 1895. (See Robb [38] for a review of the history of X-ray imaging.) In the simplest form of X-ray imaging or radiography, a 2D projection (shadow or silhouette) of a 3D body is produced on film by irradiating the body with X-ray photons [4, 3, 5, 6]. This mode of imaging is referred to as projection or planar imaging. Each ray of X-ray photons is attenuated by a factor depending upon the integral of the linear attenuation coefficient along the path of the ray, and produces a corresponding gray level (or signal) at the point hit on the film or the detecting device used.

Considering the ray path marked as AB in Figure 1.9, let Ni denote the number of X-ray photons incident upon the body being imaged, within a specified time interval. Let us assume that the X rays are mutually parallel, with the X-ray source at a large distance from the subject or object being imaged. Let No be the corresponding number of photons exiting the body. Then, we have

    No = Ni exp[ − ∫ray AB μ(x, y) ds ]                          (1.1)

or

    ∫ray AB μ(x, y) ds = ln( Ni / No ).                          (1.2)

The equations above are modified versions of Beer's law (also known as the Beer-Lambert law) on the attenuation of X rays due to passage through a medium. The ray AB lies in the sectional plane PQRS; the mutually parallel rays within the plane PQRS are represented by the coordinates (t, s) that are at an angle θ with respect to the (x, y) coordinates indicated in Figure 1.9, with the s axis being parallel to the rays. Then, s = −x sin θ + y cos θ. The variable of integration ds represents the elemental distance along the ray, and the integral is along the ray path AB from the X-ray source to the detector. (See Section 9.1 for further details on this notation.) The quantities Ni and No are Poisson variables; it is assumed that their values are large for the equations above to be applicable. The function μ(x, y) represents the linear attenuation coefficient at (x, y) in the sectional plane PQRS. The value of μ(x, y) depends upon the density of the object or its constituents along the ray path, as well as the frequency (or wavelength or energy) of the radiation used. Equation 1.2 assumes the use of monochromatic or monoenergetic X rays.

A measurement of the exiting X rays (that is, No, and Ni for reference) thus gives us only an integral of μ(x, y) over the ray path. The internal details of the body along the ray path are compressed onto a single point on the film or a single measurement. Extending the same argument to all ray paths,
[Figure 1.9 schematic: mutually parallel X rays traverse a 3D object; a ray enters with Ni photons, crosses elemental distances ds through the sectional plane PQRS, and exits with No photons; the projection of the plane PQRS appears as the 1D profile P'Q' on the 2D projection.]

FIGURE 1.9
An X-ray image or a typical radiograph is a 2D projection or planar image of a 3D object. The entire object is irradiated with X rays. The projection of a 2D cross-sectional plane PQRS of the object is a 1D profile P'Q' of the 2D planar image. See also Figures 1.19 and 9.1. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: John G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.

we see that the radiographic image so produced is a 2D planar image of the 3D object, where the internal details are superimposed. In the case that the rays are parallel to the x axis (as in Figure 1.9), we have θ = 90°, s = −x, ds = −dx, and the planar image

    g(y, z) = − ∫ μ(x, y, z) dx.                                 (1.3)

Ignoring the negative sign, we see that the 3D object is reduced to (or integrated into) a 2D planar image by the process of radiographic imaging.
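A discrete sketch of this projection process follows the same steps as Equations 1.1 to 1.3: with a hypothetical attenuation volume and unit voxel spacing (so the line integrals become sums along x), Beer's law gives the transmitted photon counts, and the log-ratio ln(Ni/No) recovers the planar image.

```python
import numpy as np

# Hypothetical 3D linear-attenuation volume mu[x, y, z] (per-mm values),
# with unit voxel spacing so that ds = 1 mm and integrals become sums.
mu = np.zeros((8, 8, 8))
mu[2:6, 3:5, 3:5] = 0.02         # a denser block embedded in the volume

Ni = 10000.0                     # incident photons per ray

# Line integral of mu along each ray parallel to the x axis: one value
# per (y, z) detector position (the left-hand side of Equation 1.2).
line_integrals = mu.sum(axis=0)

# Beer's law (Equation 1.1): transmitted photons per ray.
No = Ni * np.exp(-line_integrals)

# The planar image recovered from the measurements, ln(Ni / No),
# equals the line integrals and superimposes all detail along x.
g = np.log(Ni / No)
```

Note how every voxel along a given ray contributes to a single value of g: the internal structure along x is irrecoverably summed, which is precisely the limitation that motivates computed tomography later in the book.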
The most commonly used detector in X-ray imaging is the screen-film combination [5, 6]. The X rays exiting from the body being imaged strike a fluorescent (phosphor) screen made of compounds of rare-earth elements such as lanthanum oxybromide or gadolinium oxysulfide, where the X-ray photons are converted into visible-light photons. A light-sensitive film that is placed in contact with the screen (in a light-tight cassette) records the result. The film contains a layer of silver-halide emulsion with a thickness of about 10 μm. The exposure or blackening of the film depends upon the number of light photons that reach the film.

A thick screen provides a high efficiency of conversion of X rays to light, but causes loss of resolution due to blurring (see Figure 1.10). The typical thickness of the phosphor layer in screens is in the range 40–100 μm. Some
receiving units make use of a film with emulsion on both sides that is sandwiched between two screens: the second screen (located after the film along the path of propagation of the X rays) converts the X-ray photons not affected by the first screen into light, and thereby increases the efficiency of the receiver. Thin screens may be used in such dual-screen systems to achieve higher conversion efficiency (and lower dose to the patient) without sacrificing resolution.

[Figure 1.10 schematic: X rays strike a phosphor screen backed by film; light from point A, deep in the screen, spreads over a larger area of the film than light from point B, near the film.]

FIGURE 1.10
Blur caused by a thick screen. Light emanating from point A in the screen is spread over a larger area on the film than that from point B.

A fluoroscopy system uses an image intensifier and a video camera in place of the film to capture the image and display it on a monitor as a movie or video [5, 6]. Images are acquired at a rate of 2–8 frames/s (fps), with the X-ray beam pulsed at 30–100 ms per frame. In computed radiography (CR), a photo-stimulable phosphor plate (made of europium-activated barium fluorohalide) is used instead of film to capture and temporarily hold the image pattern. The latent image pattern is then scanned using a laser and digitized. In digital radiography (DR), the film or the entire screen-film combination is replaced with solid-state electronic detectors [39, 40, 41, 42].

Examples: Figures 1.11 (a) and (b) show the posterior-anterior (PA, that is, back-to-front) and lateral (side-to-side) X-ray images of the chest of a patient. Details of the ribs and lungs, as well as the outline of the heart, are visible in the images. Images of this type are useful in visualizing and discriminating between the air-filled lungs, the fluid-filled heart, the ribs, and vessels. The size of the heart may be assessed in order to detect enlargement of the heart. The images may be used to detect lesions in the lungs and fracture of the ribs or the spinal column, and to exclude the presence of fluid in the thoracic cage. The use of two views assists in localizing lesions: use of the PA view only, for example, will not provide information to decide if a tumor is located toward the posterior or anterior of the patient.
(a) (b)
FIGURE 1.11
(a) Posterior-anterior and (b) lateral chest X-ray images of a patient. Images courtesy of Foothills Hospital, Calgary.
The following paragraphs describe some of the physical and technical con-
siderations in X-ray imaging 4, 5, 6, 43, 44].
 Target and focal spot: An electron beam with energy in the range
of 20 ; 140 keV is used to produce X rays for diagnostic imaging. The
typical target materials used are tungsten and molybdenum. The term
\focal spot" refers to the area of the target struck by the electron beam
to generate X rays however, the nominal focal spot is typically ex-
pressed in terms of its diameter in mm as observed in the imaging plane
(on the lm). A small focal spot is desired in order to obtain a sharp
image, especially in magni cation imaging. (See also Section 2.9 and
Figure 2.18.) Typical focal spot sizes in radiography lie in the range of
0:1 ; 2 mm. A focal spot size of 0:1 ; 0:3 mm is desired in mammogra-
phy.
• Energy: The penetrating capability of an X-ray beam is mainly determined by the accelerating voltage applied to the electron beam that impinges on the target in the X-ray generator. The commonly used indicator of penetrating capability (often referred to as the "energy" of the X-ray beam) is kVp, standing for kilo-volt-peak. The higher the kVp, the more penetrating the X-ray beam will be. The actual unit of energy of an X-ray photon is the electron volt or eV, which is the energy gained by an electron when a potential of 1 V is applied to it. The kVp measure relates to the highest possible X-ray photon energy that may be achieved at the voltage used.

Low-energy X-ray photons are absorbed at or near the skin surface, and do not contribute to the image. In order to prevent such unwanted radiation, a filter is used at the X-ray source to absorb low-energy X rays. Typical filter materials are aluminum and molybdenum.

Imaging of soft-tissue organs such as the breast is performed with low-energy X rays in the range of 25–32 kVp [45]. The use of a higher kVp would result in low differential attenuation and poor tissue-detail visibility or contrast. A few other energy levels used in projection radiography are, for imaging the abdomen: 60–100 kVp; chest: 80–120 kVp; and skull: 70–90 kVp. The kVp to be used depends upon the distance between the X-ray source and the patient, the size (thickness) of the patient, the type of grid used, and several other factors.
• Exposure: For a given tube voltage (kVp), the total number of X-ray photons released at the source is related to the product of the tube current (mA) and the exposure time (s), together expressed as the product mAs. As a result, for a given body being imaged, the number of photons that arrive at the film is also related to the mAs quantity. A low mAs results in an under-exposed film (faint or light image), whereas a high mAs results in an over-exposed or dark image (as well as increased X-ray dose to the patient). Typical exposure values lie in the range of 2–120 mAs. Most imaging systems determine automatically the required exposure for a given mode of imaging, patient size, and kVp setting. Some systems use an initial exposure of the order of 5 ms to estimate the penetration of the X rays through the body being imaged, and then determine the required exposure.
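The mAs relationship above is a simple proportionality; a minimal sketch (idealized, noise-free model, with hypothetical tube settings):

```python
def mas(tube_current_ma, exposure_time_s):
    """Exposure factor: tube current (mA) times exposure time (s)."""
    return tube_current_ma * exposure_time_s

def relative_photon_count(mas_value, mas_reference):
    """For fixed kVp and geometry, the photons arriving at the film
    scale linearly with mAs in this idealized model."""
    return mas_value / mas_reference

# Doubling the tube current at the same exposure time doubles the
# photons reaching the film (and the dose to the patient).
e1 = mas(100, 0.05)   # 5 mAs
e2 = mas(200, 0.05)   # 10 mAs
print(relative_photon_count(e2, e1))
```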
• Beam hardening: The X rays used in radiographic imaging are typically not monoenergetic; that is, they possess X-ray photons over a certain band of frequencies or EM energy levels. As the X rays propagate through a body, the lower-energy photons get absorbed preferentially, depending upon the length of the ray path through the body and the attenuation characteristics of the tissues along the path. Thus, the X rays that pass through the object at longer distances from the source will possess relatively fewer photons at lower energy levels than at the point of entry into the object (and hence a relatively higher concentration of higher-energy photons). This effect is known as beam hardening, and leads to incorrect estimation of the attenuation coefficient in computed tomography (CT) imaging. The effect of beam hardening may be reduced by prefiltering or prehardening the X-ray beam and narrowing its spectrum. The use of monoenergetic X rays from a synchrotron or a laser obviates this problem.
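Beam hardening can be simulated with a toy two-component beam; the spectral weights and attenuation coefficients below are invented for the sketch:

```python
import math

# A toy polyenergetic beam: (label, spectral weight, mu per cm).
beam = [("low", 0.5, 0.40), ("high", 0.5, 0.20)]

def effective_mu(thickness_cm):
    """Effective attenuation coefficient of the whole beam after
    traversing the given thickness: -ln(I/I0) / t."""
    i = sum(w * math.exp(-mu * thickness_cm) for _, w, mu in beam)
    return -math.log(i) / thickness_cm

# As the beam penetrates deeper, low-energy photons are removed
# preferentially and the apparent (effective) mu drops: the same
# material seems less attenuating along longer paths, which is the
# source of the CT estimation error mentioned above.
for t in (1.0, 5.0, 10.0):
    print(f"{t:5.1f} cm: effective mu = {effective_mu(t):.3f} /cm")
```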
• Scatter and the use of grids: As an X-ray beam propagates through a body, photons are lost due to absorption and scattering at each point in the body. The angle of the scattered photon at a given point along the incoming beam is a random variable, and hence the scattered photon contributes to noise at the point where it strikes the detector. Furthermore, scattering results in the loss of contrast of the part of the object where X-ray photons were scattered from the main beam. The noise effect of the scattered radiation is significant in gamma-ray emission imaging, and requires specific methods to improve the quality of the image [4, 46]. The effect of scatter may be reduced by the use of grids, collimation, or energy discrimination, due to the fact that the scattered (or secondary) photons usually have lower energy levels than the primary photons.

A grid consists of an array of X-ray absorbing strips that are mutually parallel if the X rays are in a parallel beam, as in chest imaging (see Figures 1.12 and 1.13), or are converging toward the X-ray source in the case of a diverging beam (as in breast imaging; see Figure 1.15). Lattice or honeycomb grids with parallel strips in criss-cross patterns are also used in mammography. X-ray photons that arrive via a path that is not aligned with the grids will be stopped from reaching the detector.

A typical grid contains thin strips of lead or aluminum with a strip density of 25–80 lines/cm and a grid height:strip width ratio in the range
[Figure 1.12 diagram: parallel X rays incident upon a parallel grid placed over a screen-film cassette, with rays AA', AB, AC, AD, and AE marked.]
FIGURE 1.12
Use of parallel grids to reduce scatter. X rays that are parallel to the grids reach the film; for example, line AA'. Scattered rays AB, AC, and AE have been blocked by the grids; however, the scattered ray AD has reached the film in the illustration.
of 5:1 to 12:1. The space between the grids is filled with low-attenuation material such as wood. A stationary grid produces a line pattern that is superimposed upon the image, which would be distracting. Figure 1.13 (a) shows a part of an image of a phantom with the grid artifact clearly visible. (An image of the complete phantom is shown in Figure 1.14.) Grid artifact is prevented in a reciprocating grid, where the grid is moved about 20 grid spacings during exposure: the movement smears the grid shadow and renders it invisible on the image. Figure 1.13 (b) shows an image of the same object as in part (a), but with no grid artifact. Low levels of grid artifact may appear in images if the bucky that holds the grid does not move at a uniform pace, or starts moving late or ends movement early with respect to the X-ray exposure interval. A major disadvantage of using grids is that they require approximately twice the radiation dose of imaging techniques without grids. Furthermore, the contrast of fine details is reduced due to the smeared shadow of the grid.
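The grid ratio quoted above determines the largest off-axis angle at which a scattered photon can still slip between two strips; a sketch under the idealization of infinitely thin strips:

```python
import math

def acceptance_half_angle_deg(grid_ratio):
    """Largest angle from the grid axis at which a photon can cross
    the interspace without striking a strip (thin-strip idealization):
    tan(theta) = interspace width / strip height = 1 / grid ratio."""
    return math.degrees(math.atan(1.0 / grid_ratio))

# Higher grid ratios reject scatter arriving at smaller angles, at
# the cost of more stringent alignment and higher dose.
for r in (5, 8, 12):
    print(f"grid ratio {r:2d}:1 -> accepts up to "
          f"{acceptance_half_angle_deg(r):.1f} deg off-axis")
```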
• Photon detection noise: The interaction between an X-ray beam and a detector is governed by the same rules as for interaction with any other matter: photons are lost due to scatter and absorption, and some photons may pass through unaffected (or undetected). The small size of the detectors in DR and CT imaging reduces their detection
efficiency. Scattered and undetected photons cause noise in the measurement; for detailed analysis of noise in X-ray detection, refer to Barrett and Swindell [3], Macovski [5], and Cho et al. [4]. More details on noise in medical images and techniques to remove noise are presented in Chapter 3.
• Ray stopping by heavy implants: If the body being imaged contains extremely heavy parts or components, such as metal screws or pins in bones and surgical clips, that are nearly X-ray-opaque and entirely stop the incoming X-ray photons, no photons would be detected at the corresponding point of exit from the body. The attenuation coefficient for the corresponding path would be indefinite, or, within the computational context, infinity. Then, a reconstruction algorithm would not be able to redistribute the attenuation values over the points along the corresponding ray path in the reconstructed image. This leads to streaking artifacts in CT images.
Two special techniques for enhanced X-ray imaging, digital subtraction angiography (DSA) and dual-energy imaging, are described in Sections 4.1 and 4.2, respectively.

1.5.1 Breast cancer and mammography
Breast cancer: Cancer is caused when a single cell or a group of cells escapes from the usual controls that regulate cellular growth, and begins to multiply and spread. This activity results in a mass, tumor, or neoplasm. Many masses are benign; that is, the abnormal growth is restricted to a single, circumscribed, expanding mass of cells. Some tumors are malignant; that is, the abnormal growth invades the surrounding tissues and may spread, or metastasize, to distant areas of the body. Although benign masses may lead to complications, malignant tumors are usually more serious, and it is for these tumors that the term "cancer" is used. The majority of breast tumors will have metastasized before reaching a palpable size.
Although curable, especially when detected at early stages, breast cancer is a major cause of death in women. An important factor in breast cancer is that it tends to occur earlier in life than other types of cancer and other major diseases [47, 48]. Although the cause of breast cancer has not yet been fully understood, early detection and removal of the primary tumor are essential and effective methods to reduce mortality, because, at such a point in time, only a few of the cells that departed from the primary tumor would have succeeded in forming secondary tumors [49]. When breast tumors are detected by the affected women themselves (via self-examination), most of the tumors would have metastasized [50].

If breast cancer can be detected by some means at an early stage, while it is clinically localized, the survival rate can be dramatically increased. However,
FIGURE 1.13
X-ray images of a part of a phantom: (a) with, and (b) without grid artifact.
Image courtesy of L.J. Hahn, Foothills Hospital, Calgary. See also Figure 1.14.
FIGURE 1.14
X-ray image of the American College of Radiology (ACR) phantom for mammography. The pixel-value range [117, 210] has been linearly stretched to the display range [0, 255] to show the details. Image courtesy of S. Bright, Sunnybrook & Women's College Health Sciences Centre, Toronto, ON, Canada.
See also Figure 1.13.
such early breast cancer is generally not amenable to detection by physical examination and breast self-examination. The primary role of an imaging technique is thus the detection of lesions in the breast [29]. Currently, the most effective method for the detection of early breast cancer is X-ray mammography. Other modalities, such as ultrasonography, transillumination, thermography, CT, and magnetic resonance imaging (MRI), have been investigated for breast cancer diagnosis, but mammography is the only reliable procedure for detecting nonpalpable cancers and for detecting many minimal breast cancers when they appear to be curable [18, 28, 29, 51]. Therefore, mammography has been recommended for periodic screening of asymptomatic women. Mammography has gained recognition as the single most successful technique for the detection of early, clinically occult breast cancer [52, 53, 54, 55, 56].
X-ray imaging of the breast: The technique of using X rays to obtain images of the breast was first reported by Warren in 1930, after he had examined 100 women using sagittal views [57]. Because of the lack of a reproducible method for obtaining satisfactory images, this technique did not make much progress until 1960, when Egan [58] reported on high-mA and low-kVp X-ray sources that yielded reproducible images on industrial film. It was in the mid-1960s that the first modern X-ray unit dedicated to mammography was developed. Since then, remarkable advances have led to a striking improvement in image quality and a dramatic reduction in radiation dose.
A major characteristic of mammograms is low contrast, which is due to the relatively homogeneous soft-tissue composition of the breast. Many efforts have been focused on developing methods to enhance contrast. In an alternative imaging method known as xeromammography, a selenium-coated aluminum plate is used as the detector [6]. The plate is initially charged to about 1,000 V. Exposure to the X rays exiting the patient creates a charge pattern on the plate due to the liberation of electrons and ions. The plate is then sprayed with an ionized toner, the pattern of which is transferred to plastic-coated paper. Xeromammograms provide wide latitude and edge enhancement, which lead to improved images as compared to screen-film mammography. However, xeromammography results in a higher dose to the subject, and has not been in much use since the 1980s.
A typical mammographic imaging system is shown schematically in Figure 1.15. Mammography requires high X-ray beam quality (a narrow-band or nearly monochromatic beam), which is controlled by the tube target material (molybdenum) and beam filtration with molybdenum. Effective breast compression is an important factor in reducing scattered radiation, creating as uniform a density distribution as possible, eliminating motion, and separating mammary structures, thereby increasing the visibility of details in the image. The use of grids specifically designed for mammography can further reduce scattered radiation and improve subject contrast, which is especially significant when imaging thick, dense breasts [59].

Generally, conventional screen-film mammography is performed with the breast directly in contact with the screen-film cassette, producing essentially
[Figure 1.15 diagram, from top to bottom: X-ray source (target), filter, collimating diaphragm, breast compression paddle, compressed breast, focused grid, and screen-film cassette.]
FIGURE 1.15
A typical mammography setup.
life-size images. The magnification technique, on the other hand, interposes an air gap between the breast and the film, so that the projected radiographic image is enlarged. Magnification produces fine-detail images containing additional anatomical information that may be useful in refining mammographic diagnosis, especially in cases where conventional imaging demonstrates uncertain or equivocal findings [60]. As in the grid method, the advantages of magnification imaging are achieved at the expense of increased radiation exposure. Therefore, the magnification technique is not used routinely.
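The enlargement produced by the air gap, and the focal-spot penumbra that limits it, follow from similar triangles. A sketch with hypothetical distances, using the standard relations M = source-to-image distance / source-to-object distance and geometric blur = focal spot size × (M − 1):

```python
def magnification(source_to_image_cm, source_to_object_cm):
    """Radiographic magnification for a point source."""
    return source_to_image_cm / source_to_object_cm

def focal_spot_blur_mm(focal_spot_mm, m):
    """Geometric unsharpness (penumbra width) caused by a focal spot
    of finite size at magnification m."""
    return focal_spot_mm * (m - 1.0)

# Contact imaging vs. an air gap giving about 1.5x magnification,
# with a hypothetical 0.3 mm mammographic focal spot.
m_contact = magnification(65.0, 64.0)  # breast close to the cassette
m_airgap = magnification(65.0, 43.3)   # breast raised above the cassette
print(f"blur, contact: {focal_spot_blur_mm(0.3, m_contact):.3f} mm")
print(f"blur, air gap: {focal_spot_blur_mm(0.3, m_airgap):.3f} mm")
```

The sketch shows why magnification imaging demands a very small focal spot: the penumbra grows in proportion to (M − 1).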
Screen-film mammography is now the main tool for the detection of early breast cancer. The risk of radiation is still a matter of concern, although there is no direct evidence of breast cancer risk from the low-dose radiation exposure of mammography. Regardless, technological advances in mammography continue to be directed toward minimizing radiation exposure while maintaining the high quality of the images.
Examples: Figures 1.16 (a) and (b) show the cranio-caudal (CC) and medio-lateral oblique (MLO) views of the same breast of a subject. The MLO view demonstrates architectural distortion due to a spiculated tumor near the upper right-hand corner of the image.
Mammograms are analyzed by radiologists specialized in mammography. A normal mammogram typically depicts converging patterns of fibroglandular tissues and vessels. Any feature that causes a departure from or distortion with reference to the normal pattern is viewed with suspicion and analyzed with extra attention. Features such as calcifications, masses, localized increase in density, architectural distortion, and asymmetry between the left and right breast images are carefully analyzed.
Several countries and states have instituted breast cancer screening programs where asymptomatic women within a certain age group are invited to participate in regular mammographic examinations. Screen Test: Alberta Program for the Early Detection of Breast Cancer [61] is an example of such a program. Several applications of image processing and pattern analysis techniques for mammographic image analysis and breast cancer detection are described in the chapters to follow.

1.6 Tomography
The problem of visualizing the details of the interior of the human body noninvasively has always been of interest, and within a few years after the discovery of X rays by Röntgen in 1895, techniques were developed to image sectional planes of the body. The techniques of laminagraphy, planigraphy, or "classical" tomography [38, 62] used synchronous movement of the X-ray source and film in such a way as to produce a relatively sharp image of a
FIGURE 1.16
(a) Cranio-caudal (CC) and (b) medio-lateral oblique (MLO) mammograms of the same breast of a subject. Images courtesy of Screen Test: Alberta Program for the Early Detection of Breast Cancer [61].
single focal plane of the object, with the images of all other planes being blurred. Figure 1.17 illustrates a simple linear-motion system, where the X-ray source and film cassette move along straight-line paths so as to maintain the longitudinal (coronal) plane, indicated by the straight line AB, in focus. It is seen that the X rays along the paths X1-A and X2-A strike the same physical spot A1 = A2 on the film, and that the rays along the paths X1-B and X2-B strike the same spot B1 = B2. On the other hand, for the point C in a different plane, the rays along the paths X1-C and X2-C strike different points C1 ≠ C2 on the film. Therefore, the details in the plane AB remain in focus and cause a strong image, whereas the details in the other planes are smeared all over the film. The smearing of information from the other planes of the object causes loss of contrast in the plane of interest. The development of CT imaging rendered film-based tomography obsolete.
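The focusing action of the synchronized movement can be checked with a small numerical sketch (idealized point-source geometry with arbitrary, hypothetical distances: the source sweeps 100 units above the focal plane, and the film slides 20 units below it, at the matched speed):

```python
def film_coordinate(x_source, z, source_height=100.0, film_depth=20.0):
    """Project a point at horizontal position 0 and height z above the
    focal plane onto the moving film, returning film coordinates. The
    film slides opposite to the source, scaled so that the focal plane
    (z = 0) stays registered throughout the sweep."""
    # Intersection of the source-to-point ray with the film plane.
    x_film = x_source * (1.0 - (source_height + film_depth) /
                         (source_height - z))
    # Displacement of the film itself at this source position.
    film_shift = -x_source * film_depth / source_height
    return x_film - film_shift

# Sweep the source across three positions: a focal-plane point stays
# put on the film, while an off-plane point is smeared.
positions_focal = [film_coordinate(xs, 0.0) for xs in (-10.0, 0.0, 10.0)]
positions_off = [film_coordinate(xs, 5.0) for xs in (-10.0, 0.0, 10.0)]
print("focal plane:", positions_focal)
print("off plane:  ", positions_off)
```

In this sketch, the focal-plane point maps to the same film coordinate at every source position, whereas the point 5 units off-plane lands at different coordinates and is therefore smeared, as described above.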
Example: Figure 1.18 shows a tomographic image of a patient in a longitudinal (coronal) plane through the chest. Images of this nature provided better visualization and localization of lesions than regular X-ray projection images, and permitted the detection of masses in bronchial tubes and air ducts.

[Figure 1.17 diagram: the X-ray source moves along one path while the film cassette moves along an opposite path; points A and B in the focal plane project to the same film positions A1 = A2 and B1 = B2, whereas a point C in another plane projects to different positions C1 and C2.]
FIGURE 1.17
Synchronized movement of the X-ray source and film to obtain a tomographic image of the focal plane indicated as AB. Adapted from Robb [38].

Computed tomography: The technique of CT imaging was developed


during the late 1960s and the early 1970s, producing images of cross-sections of
the human head and body as never seen before (noninvasively and nondestruc-
FIGURE 1.18
Tomographic image of a patient in a longitudinal sectional plane through the chest. Reproduced with permission from R.A. Robb, "X-ray computed tomography: An engineering synthesis of multiscientific principles", CRC Critical Reviews in Biomedical Engineering, 7:264–333, March 1982. © CRC Press.
tively!). In the simplest form of CT imaging, only the desired cross-sectional plane of the body is irradiated using a finely collimated ray of X-ray photons (see Figure 1.19), instead of irradiating the entire body with a 3D beam of X rays as in ordinary radiography (Figure 1.9). The fundamental radiographic equation for CT is the same as Equation 1.2. Ray integrals are measured at many positions and angles around the body, scanning the body in the process. The principle of image reconstruction from projections, described in detail in Chapter 9, is then used to compute an image of a section of the body: hence the name computed tomography. (See Robb [38] for an excellent review of the history of CT imaging; see also Rangayyan and Kantzas [63] and Rangayyan [64].)
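The notion of a ray integral can be sketched numerically: for a discrete 2D section of attenuation coefficients, a parallel-ray projection at 0° or 90° is simply a row or column sum. This is a minimal sketch; a real scanner samples many angles and recovers the measurements as −ln(I/I0) of detected intensities:

```python
# A tiny 2D "section" of attenuation coefficients (arbitrary units).
section = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 2.0, 0.0],
    [0.0, 1.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

def projection_horizontal(mu):
    """Ray integrals along rows: one sample per horizontal ray."""
    return [sum(row) for row in mu]

def projection_vertical(mu):
    """Ray integrals along columns: one sample per vertical ray."""
    return [sum(col) for col in zip(*mu)]

# Each 1D profile is what a detector array records; reconstruction
# from projections inverts many such profiles taken at many angles.
print(projection_horizontal(section))
print(projection_vertical(section))
```

Note that both profiles sum to the same total: every projection of the same section conserves the total attenuation, a consistency property exploited by reconstruction algorithms.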
[Figure 1.19 diagram: a 2D cross-sectional plane PQRS irradiated in-plane by X rays with incident intensity Ni and transmitted intensity No, yielding the 1D projection profile P'Q'.]
FIGURE 1.19
In the basic form of CT imaging, only the cross-sectional plane of interest is irradiated with X rays. The projection of the 2D cross-sectional plane PQRS of the object is the 1D profile P'Q' shown. Compare this case with the planar imaging case illustrated in Figure 1.9. See also Figure 9.1. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: John G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
Figure 1.20 depicts some of the scanning procedures employed: Figure 1.20 (a) shows the translate-rotate scanning geometry for parallel-ray projections; Figure 1.20 (b) shows the translate-rotate scanning geometry with a small fan-beam detector array; Figure 1.20 (c) shows the rotate-only scanning geometry for fan-beam projections; and Figure 1.20 (d) shows the rotate-only scanning geometry for fan-beam projections using a ring of detectors. A more recently developed scanner specialized for cardiovascular imaging [65, 66] completely eliminates mechanical scanning movement to reduce the scanning time by
employing electronically steered X-ray microbeams and rings of detectors, as
illustrated schematically in Figure 1.21.
The fundamental principle behind CT, namely, image reconstruction from projections, has been known for close to 100 years, since the exposition of the topic by Radon [67, 68] in 1917. More recent developments in the subject arose in the 1950s and 1960s from the works of a number of researchers in diverse applications. Some of the important publications in this area are the works of Cormack on the representation of a function by its line integrals [69, 70]; Bracewell and Riddle on the reconstruction of brightness distributions of astronomical bodies from fan-beam scans at various angles [71]; Crowther et al. [72] and De Rosier and Klug [73] on the reconstruction of 3D images of viruses from electron micrographs; Ramachandran and Lakshminarayanan [74] on the convolution backprojection technique; and Gordon et al. [75] on algebraic reconstruction techniques. Pioneering works on the development of practical scanners for medical applications were performed by Oldendorf [76], Hounsfield [77], and Ambrose [78]. X-ray CT was well established as a clinical diagnostic tool by the early 1970s.
Once a sectional image is obtained, the process may be repeated to obtain a series of sectional images of the 3D body or object being investigated. CT imaging of a 3D body may be accomplished by reconstructing one 2D section at a time through the use of 1D projections. In the Dynamic Spatial Reconstructor [38, 79], 2D projection images are obtained on a fluorescent screen via irradiation of the entire portion of interest of the body. In single-photon emission computed tomography (SPECT), several 2D projection images are obtained using a gamma camera [4, 5, 46, 80, 81]. In these cases, the projection data of several sectional planes are acquired simultaneously: each row of a given 2D projection or planar image provides the 1D projection data of a sectional plane of the body imaged (see Figure 1.9). Many sectional images may then be reconstructed from the set of 2D planar images acquired.
Other imaging modalities used for projection data collection are ultrasound (time of flight or attenuation), magnetic resonance (MR), nuclear emission (gamma rays or positrons), and light [4, 5, 80, 82, 83, 84, 85]. Techniques using nonionizing radiation are of importance in imaging pregnant women. Whereas the physical parameter imaged may differ between these modalities, once the projection data are acquired, the mathematical image reconstruction procedure could be almost the same. A few special considerations in imaging with diffracting sources are described in Section 9.5. The characteristics of the data acquired in nuclear medicine, ultrasound, and MR imaging are described in Sections 1.7, 1.8, and 1.9, respectively.
Examples: Figure 1.22 shows a CT scan of the head of a patient. The image displays, among other features, the ventricles of the brain. CT images of the head are useful in the detection of abnormalities in the brain and skull, including bleeding in the brain, masses, calcifications, and fractures in the cranial vault.
FIGURE 1.20
(a) Translate-rotate scanning geometry for parallel-ray projections; (b) translate-rotate scanning geometry with a small fan-beam detector array; (c) rotate-only scanning geometry for fan-beam projections; (d) rotate-only scanning geometry for fan-beam projections using a ring of detectors. Reproduced with permission from R.A. Robb, "X-ray computed tomography: An engineering synthesis of multiscientific principles", CRC Critical Reviews in Biomedical Engineering, 7:264–333, March 1982. © CRC Press.
FIGURE 1.21
Electronic steering of an X-ray beam for motion-free scanning and CT imaging. Reproduced with permission from D.P. Boyd, R.G. Gould, J.R. Quinn, and R. Sparks, "Proposed dynamic cardiac 3-D densitometer for early detection and evaluation of heart disease", IEEE Transactions on Nuclear Science, 26(2):2724–2727, 1979. © IEEE. See also Robb [38].
FIGURE 1.22
CT image of a patient showing the details in a cross-section through the head
(brain). Image courtesy of Foothills Hospital, Calgary.
Figure 1.23 illustrates the CT image of the abdomen of a patient. The image
shows the stomach, gall bladder, liver, spleen, kidneys, intestines, and the
spinal column. The air-fluid interface in the stomach is clearly visible. Images
of this type are useful in detecting several abnormalities in the abdomen,
including gallstones, kidney stones, and tumors in the liver.
Figure 1.24 shows two renditions of the same CT image of the chest of a patient: image (a) has been scaled (or windowed; details provided in Sections 4.4.2 and 9.6) to illustrate the details of the lungs, whereas image (b) has been scaled to display the mediastinum in relatively increased detail. Image (a) shows the details of the distal branches of the pulmonary arteries. CT images of the chest facilitate the detection of the distortion of anatomy due to intrathoracic or extrapulmonary fluid collection, or due to ruptured lungs. They also aid in the detection of lung tumors and blockage of the pulmonary arteries due to thrombi.
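A minimal sketch of such windowing (the window settings and CT numbers below are illustrative; see Sections 4.4.2 and 9.6 for the procedures used in this book): each CT number is clipped to a window defined by its level (center) and width, and then mapped linearly to the display range.

```python
def window_image(ct_numbers, level, width):
    """Map CT numbers to the display range [0, 255] using a window
    defined by its level (center) and width."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    out = []
    for v in ct_numbers:
        v = min(max(v, lo), hi)                 # clip to the window
        out.append(round(255.0 * (v - lo) / (hi - lo)))
    return out

# Hypothetical CT numbers: air in the lung, soft tissue, bone.
pixels = [-850, 40, 900]
print(window_image(pixels, level=-600, width=1500))  # a "lung" window
print(window_image(pixels, level=40, width=400))     # a "mediastinal" window
```

With the wide, low-level window, lung detail is spread over the gray scale; with the narrow window centered on soft tissue, the lung pixels collapse to black and the mediastinum occupies the display range, mirroring the two renditions in Figure 1.24.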
CT is an imaging technique that has revolutionized the field of medical diagnostics. CT has also found applications in many other areas such as nondestructive evaluation of industrial and biological specimens, radioastronomy, light and electron microscopy, optical interferometry, X-ray crystallography, petroleum engineering, and geophysical exploration. Indirectly, it has also led to new developments in its predecessor techniques in radiographic imag-
FIGURE 1.23
CT image of a patient showing the details in a cross-section through the
abdomen. Image courtesy of Foothills Hospital, Calgary.
ing. Details of the mathematical principles related to reconstruction from projections and more illustrations of CT imaging are provided in Chapter 9.

1.7 Nuclear Medicine Imaging
The use of radioactivity in medical imaging began in the 1950s; nuclear medicine has now become an integral part of most medical imaging centers [3, 4, 5, 6, 86]. In nuclear medicine imaging, a small quantity of a radiopharmaceutical is administered into the body orally, by intravenous injection, or by inhalation. The radiopharmaceutical is designed so as to be absorbed by and localized in a specific organ of interest. The gamma-ray photons emitted from the body as a result of radioactive decay of the radiopharmaceutical are used to form an image that represents the distribution of radioactivity in the organ.
Nuclear medicine imaging is used to map physiological function such as
perfusion and ventilation of the lungs, and blood supply to the musculature of
the heart, liver, spleen, and thyroid gland. Nuclear medicine has also proven
to be useful in detecting brain and bone tumors. Whereas X-ray images
FIGURE 1.24
CT image of a patient scaled to (a) show the details of the lungs and (b) display the mediastinum in detail; the details of the lungs are not visible in this rendition. Images courtesy of Alberta Children's Hospital, Calgary.
provide information related to density and may be used to detect altered anatomy, nuclear medicine imaging helps in examining altered physiological (or pathological) functioning of specific organs in a body.
The most commonly used isotopes in nuclear medicine imaging are technetium as 99mTc, which emits gamma-ray photons at 140 keV, and thallium as 201Tl, at 70 keV or 167 keV. Iodine as 131I is also used for thyroid imaging.
The first imaging device used in nuclear medicine was the rectilinear scanner, which consisted of a single-bore collimator connected to a gamma-ray counter or detector. The scanner was coupled to a mechanical system that performed a raster scan over the area of interest, making a map of the radiation distribution in the area. The amount of radioactivity detected at each position was recorded either on film or on a storage oscilloscope. A major difficulty with this approach is that scanning is time consuming.
The scintillation gamma camera, or the Anger camera, uses a large thallium-activated sodium iodide [NaI(Tl)] detector, typically 40 cm in diameter and 10 mm in thickness. The gamma camera consists of three major parts: a collimator, a detector, and a set of PMTs. Figure 1.25 illustrates the Anger camera in a schematic sectional view. The following paragraphs describe some of the important components of a nuclear medicine imaging system.
[Figure 1.25 diagram: gamma rays emitted from the patient pass through the collimator to the NaI(Tl) crystal detector; the photomultiplier tubes (PMTs) behind the crystal feed position and pulse-height analysis in the image computer.]
FIGURE 1.25
Schematic (vertical sectional) representation of a nuclear medicine imaging
system with an Anger camera.
• Collimator: A collimator consists of an array of holes separated by lead septa. The function of the collimator is to allow passage of the gamma rays that arrive along a certain path of propagation, and to block (absorb) all gamma rays outside a narrow solid angle of acceptance. Collimators are usually made of lead alloys, but other materials such as tantalum, tungsten, and gold have also been used. Different geometries have been used in the design of collimator holes, including triangular, square, hexagonal, and round patterns. Hexagonal holes have been observed to be the most efficient, and are commonly used.

Two key factors in collimator design are geometric efficiency, which is the fraction of the gamma-ray photons from the source that are transmitted through the collimator to the detector, and geometric (spatial) resolution. In general, for a given type of collimator, the higher the efficiency, the poorer is the resolution. The resolution of a collimator is increased if the size of the holes is reduced or if the collimator thickness is increased; however, these measures decrease the number of photons that will reach the crystal, and hence reduce the sensitivity and efficiency of the system. The efficiency of a typical collimator is about 0.01%; that is, only 1 in every 10,000 photons emitted is passed by the collimator to the crystal.

The most commonly used type of collimator is the parallel-hole collimator, which usually serves for general-purpose imaging, particularly for large organs. Other designs include diverging, converging, fan-beam, and pin-hole collimators.
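The efficiency–resolution trade-off described above can be sketched with standard first-order expressions for a parallel-hole collimator (resolution FWHM ≈ d(l + b)/l, and geometric efficiency ≈ [k d²/(l(d + s))]² with k ≈ 0.26 for hexagonal holes); the dimensions below are hypothetical:

```python
def collimator_resolution_mm(hole_diam, hole_length, source_dist):
    """Geometric resolution (FWHM) of a parallel-hole collimator:
    hole diameter d, hole length l, and source-to-collimator distance
    b give R = d * (l + b) / l (septal penetration ignored)."""
    return hole_diam * (hole_length + source_dist) / hole_length

def collimator_efficiency(hole_diam, hole_length, septum, k=0.26):
    """Geometric efficiency (fraction of emitted photons transmitted)
    of a parallel-hole collimator, first-order approximation with
    shape factor k ~ 0.26 for hexagonal holes."""
    return (k * hole_diam ** 2 / (hole_length * (hole_diam + septum))) ** 2

# Halving the hole diameter sharpens the image but costs efficiency.
for d in (2.0, 1.0):   # hole diameter, mm
    r = collimator_resolution_mm(d, hole_length=25.0, source_dist=100.0)
    g = collimator_efficiency(d, hole_length=25.0, septum=0.2)
    print(f"d = {d} mm: FWHM = {r:.1f} mm, efficiency = {g:.2e}")
```

With these assumed dimensions, the efficiency comes out in the range of 10⁻⁴ (consistent with the "about 0.01%" figure quoted above), and the FWHM values bracket the net resolution quoted later for a complete camera.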
• Detector: At the back of the collimator is attached a detector, which is usually a NaI(Tl) crystal of 6–13 mm thickness. The crystal absorbs the gamma-ray photons that pass through the collimator holes, and reemits their energy as visible light (scintillation). The thickness of the crystal determines the fraction of the gamma-ray photons absorbed by the photoelectric effect. A thick crystal has better absorption than a thin crystal; however, a thick crystal scatters and absorbs the light before it reaches the back surface of the crystal. A crystal of thickness 10 mm absorbs about 92% of the photons received at 140 keV.
• Photomultiplier tubes: The crystal is optically coupled at its back surface to an array of PMTs. The PMTs are usually hexagonal in section, and are arranged so as to cover the imaging area. Scintillations within the crystal are converted by the photocathodes at the front of the PMTs to photoelectrons, which are accelerated toward each of a series of dynodes held at successively higher potentials until they reach the anode at the back of the tube. During this process, the photoelectrons produce a number of secondary electrons at each dynode, leading to a current gain of the order of 10^6.
40 Biomedical Image Analysis
 Image computer: The current pulses produced by the PMTs in re-
sponse to scintillations in the crystal are applied to a resistor matrix
that computes the points of arrival of the corresponding gamma-ray
photons. The amplitudes of the pulses represent the energy deposited
by the gamma rays. A pulse-height analyzer is used to select pulses that
are within a preset energy window corresponding to the peak energy of
the gamma rays. The pulse-selection step reduces the effect of scattered
rays at energy levels outside the energy window used.
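The figures quoted in this list can be checked with a few lines of Python. The attenuation coefficient, dynode parameters, and window width below are illustrative assumptions, not values given in the text:

```python
import math
import random

# Fraction of 140 keV photons absorbed by a NaI(Tl) crystal of thickness d,
# using the exponential attenuation law: absorbed = 1 - exp(-mu * d).
# mu ~ 2.6 per cm for NaI at 140 keV is an assumed, approximate value.
mu = 2.6          # linear attenuation coefficient, 1/cm (assumed)
d = 1.0           # crystal thickness, cm (10 mm)
absorbed = 1.0 - math.exp(-mu * d)
print(f"absorbed fraction: {absorbed:.2f}")   # close to the 92% quoted

# PMT current gain: each dynode multiplies the electron count by a
# secondary-emission factor delta; with n dynodes, gain = delta**n.
delta, n = 4.0, 10                            # assumed values
gain = delta ** n
print(f"PMT gain: {gain:.2e}")                # of the order of 10**6

# Pulse-height analysis: keep only pulses whose energy falls within a
# preset window around the 140 keV photopeak (here +/- 10%, assumed).
pulses_keV = [random.gauss(140.0, 15.0) for _ in range(1000)] + \
             [random.uniform(60.0, 120.0) for _ in range(400)]  # scatter
window = (126.0, 154.0)
accepted = [e for e in pulses_keV if window[0] <= e <= window[1]]
print(f"accepted {len(accepted)} of {len(pulses_keV)} pulses")
```

The first computation reproduces the 92% absorption figure for a 10 mm crystal; the last mimics how the pulse-height analyzer rejects scattered photons of lower energy.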
Whereas the major advantage of nuclear medicine imaging lies in its ca-
pability of imaging the functional aspects of the human body rather than
the anatomy, its major disadvantages are poor spatial resolution and high
noise content. The intrinsic resolution of a typical gamma camera (crystal) is
3–5 mm; the net resolution, including the effect of the collimator, expressed as
the full width at half the maximum (FWHM) of the image of a thin line source
(the line spread function, or LSF), is 7.5–10 mm (see Figure 2.21). The main
causes of noise are quantum mottle, due to the low number of photons used
to create images, and the random nature of gamma-ray emission. Structured
noise may also be caused by nonuniformities in the gamma camera.
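The FWHM of a line spread function can be estimated numerically from its samples. The sketch below assumes a Gaussian LSF purely for illustration, with its width chosen to match the 7.5 mm figure quoted above:

```python
import math

# Estimate the FWHM of a line spread function (LSF) from its samples.
# A Gaussian LSF is assumed purely for illustration; for a Gaussian,
# FWHM = 2*sqrt(2 ln 2)*sigma, so sigma ~ 3.2 mm gives FWHM ~ 7.5 mm.
sigma = 7.5 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
xs = [i * 0.01 - 30.0 for i in range(6001)]           # positions, mm
lsf = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]

half = max(lsf) / 2.0
above = [x for x, v in zip(xs, lsf) if v >= half]     # above half maximum
fwhm = max(above) - min(above)
print(f"FWHM = {fwhm:.2f} mm")
```

The same numerical procedure applies to a measured LSF, whatever its shape.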
In general, the contrast of nuclear medicine images is high as the radiophar-
maceutical is designed so as to be selectively absorbed by and localized in the
organ of interest. Nevertheless, other organs and tissues in the body also
absorb some amount of the radiopharmaceutical, and emit gamma rays that
appear as background counts and degrade contrast. Such effects may be
labeled as physiological or anatomical artifacts. The contrast of an image is
also diminished by septal penetration in the collimator and by scatter.
Multi-camera imaging and scanning systems: The data acquired by
a gamma camera represent a projection or planar image, which corresponds
to an integration of the 3D body being imaged along the paths of the gamma
rays. It should be noted that gamma rays are emitted by the body in all
directions during the imaging procedure. Modern imaging systems use two or
three gamma cameras ("heads") to acquire multiple projection images
simultaneously. Projection images acquired at diametrically opposite positions may
be averaged to reduce artifacts [87]; see Section 10.5.5.
SPECT imaging: SPECT scanners usually gather 64 or 128 projections
spanning 180° or 360° around the patient, depending upon the application.
Individual scan lines from the projection images may then be processed through
a reconstruction algorithm to obtain 2D sectional images, which may further
be combined to create a 3D image of the patient. Coronal, sagittal, and
oblique sections may then be created from the 3D dataset. Circular scanning
is commonly used to acquire projection images of the body at different angles.
Circular scanning provides projections at equal angular intervals, as required
by certain reconstruction algorithms. However, some clinical studies use
elliptical scanning so as to keep the camera close to the body, in consideration
of the fact that the spatial resolution deteriorates rapidly with distance. In
such situations, the CT reconstruction algorithm should be adapted to the
nonuniformly spaced data.
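The angular sampling of a circular orbit, and the varying camera radius of an elliptical orbit, can be sketched numerically. The number of views matches the text; the ellipse semi-axes are assumed values:

```python
import math

# Circular SPECT orbit: 64 projections at equal angular increments over 180 deg.
n_views = 64
angles = [i * 180.0 / n_views for i in range(n_views)]

# Elliptical orbit (semi-axes a, b assumed, in cm): the camera radius now
# varies with the view angle, r(t) = a*b / sqrt((b cos t)^2 + (a sin t)^2),
# so the projection geometry is no longer uniform in radius.
a, b = 30.0, 20.0
radii = [
    (a * b) / math.hypot(b * math.cos(math.radians(t)),
                         a * math.sin(math.radians(t)))
    for t in angles
]
print(f"angular step: {angles[1] - angles[0]:.4f} deg")
print(f"camera radius range: {min(radii):.1f} to {max(radii):.1f} cm")
```

The shrinking radius near the minor axis is what improves resolution with the elliptical orbit, at the cost of a nonuniform acquisition geometry.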
Examples: Figure 1.26 illustrates a nuclear medicine projection (planar)
image of the chest of a patient. The region of high intensity (activity) on the
right-hand side of the image represents the heart. Observe the high level of
noise in the image. Figure 1.27 illustrates several series of SPECT images of
the left ventricle of the patient before and after stress (exercise on a treadmill).
The SPECT images display oblique sectional views of the myocardium in three
orientations: the short axis, the horizontal long axis, and the vertical long
axis of the left ventricle. Ischemic regions demonstrate lower intensity than
normal regions due to reduced blood supply. The generally noisy appearance
of SPECT images as well as the nature of the artifacts in nuclear medicine
images are illustrated by the images.
See Section 3.10 for details regarding gated blood-pool imaging; see Sec-
tion 10.5 for several examples of SPECT images, and for discussions on the
restoration of SPECT images.

FIGURE 1.26
A planar or projection image of a patient used for myocardial SPECT imaging.
The two horizontal lines indicate the limits of the data used to reconstruct the
SPECT images shown in Figure 1.27. Image courtesy of Foothills Hospital,
Calgary.

Positron emission tomography (PET): Certain isotopes of carbon
(¹¹C), nitrogen (¹³N), oxygen (¹⁵O), and fluorine (¹⁸F) emit positrons and are
suitable for nuclear medicine imaging. PET is based upon the simultaneous
detection of the two annihilation photons produced at 511 keV and emitted
in opposite directions when a positron loses its kinetic energy and combines
with an electron [4, 5, 6, 88]. The process is also known as coincidence detec-
tion. Collimation in PET imaging is electronic, which substantially increases

FIGURE 1.27
SPECT imaging of the left ventricle. (a) Short-axis images. (b) Horizontal
long-axis images. (c) Vertical long-axis images. In each case, the upper panel
shows four SPECT images after exercise (stress), and the lower panel shows
the corresponding views before exercise (rest). Images courtesy of Foothills
Hospital, Calgary.
the efficiency of the detection process as compared to SPECT imaging. A
diaphragm is used only to define the plane of sectional (CT) imaging.
Several modes of data collection are possible for PET imaging, including
stationary, rotate-only, and rotate-translate gantries [88]. In one mode of data
collection, a ring of bismuth-germanate detectors is used to gather emission
statistics that correspond to a projection of a transversal section. A recon-
struction algorithm is then used to create 2D and 3D images.
The spatial resolution of PET imaging is typically 5 mm, which is almost
two times better than that of SPECT imaging. PET is useful in functional
imaging and physiological analysis of organs [89, 90].
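The idea of electronic collimation can be sketched as a pairing of detector events that arrive within a short timing window; each accepted pair defines a line of response between two detectors. The window width and the event lists below are assumed values, not parameters of any particular scanner:

```python
# Sketch of coincidence detection in PET: two detector events are paired
# as one annihilation only if they arrive within a short timing window.
# The 10 ns window and the event lists are assumed, illustrative values.
window_ns = 10.0

# (arrival time in ns, detector id) for two opposing detector banks
bank_a = [(0.0, 12), (50.0, 7), (120.0, 3)]
bank_b = [(2.5, 40), (90.0, 33), (123.0, 44)]

coincidences = [
    (ea, eb)
    for ea in bank_a
    for eb in bank_b
    if abs(ea[0] - eb[0]) <= window_ns
]
for (ta, da), (tb, db) in coincidences:
    # Each pair defines a line of response between the two detectors.
    print(f"line of response: detector {da} <-> detector {db}")
```

No physical collimator is needed: directional information comes entirely from the geometry of the paired detectors, which is why the detection efficiency is so much higher than in SPECT.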

1.8 Ultrasonography
Ultrasound in the frequency range of 1–20 MHz is used in diagnostic ul-
trasonography [4, 5, 6]. The velocity of propagation of ultrasound through
a medium depends upon its compressibility: lower compressibility results in
higher velocity. Typical velocities in human tissues are 330 m/s in air (the
lungs); 1 540 m/s in soft tissue; and 3 300 m/s in bone. A wave of ultra-
sound may get reflected, refracted, scattered, or absorbed as it propagates
through a body. Most modes of diagnostic ultrasonography are based upon
the reflection of ultrasound at tissue interfaces. A gel is used to minimize the
presence of air between the transducer and the skin in order to avoid reflection
at the skin surface. Typically, pulses of ultrasound of about 1 μs duration at
a repetition rate of about 1 000 pps (pulses per second) are applied, and the
resulting echoes are used for locating tissue interfaces and imaging.
Large, smooth surfaces in a body cause specular reflection, whereas rough
surfaces and regions cause nonspecular reflection or diffuse scatter. The nor-
mal liver, for example, is made up of clusters of parenchyma that are of the
order of 2 mm in size. Considering an ultrasound signal at 1 MHz and as-
suming a propagation velocity of 1 540 m/s, we get the wavelength to be
1.54 mm, which is of the order of the size of the parenchymal clusters. For
this reason, ultrasound is scattered in all directions by the liver, which appears
with a speckled texture in ultrasound scans [3, 6]. Fluid-filled regions such as
cysts have no internal structure, generate no echoes except at their bound-
aries, and appear as black regions on ultrasound images. The almost-complete
absorption of ultrasound by bone causes shadowing in images: tissues and or-
gans past bones and dense objects along the path of propagation of the beam
are not imaged in full and accurate detail. The quality of ultrasonographic
images is also affected by multiple reflections, speckle noise due to scattering,
and spatial distortion due to refraction. The spatial resolution in ultrasound
images is limited to the order of 0.5–3 mm.
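The wavelength figure quoted above follows directly from the relation λ = v/f:

```python
# Wavelength of ultrasound in soft tissue: lambda = v / f.
v = 1540.0        # propagation velocity in soft tissue, m/s
f = 1.0e6         # frequency, Hz (1 MHz)
wavelength_mm = v / f * 1000.0
print(f"wavelength = {wavelength_mm:.2f} mm")   # 1.54 mm, as in the text
```

At 10 MHz the wavelength shrinks to 0.154 mm, which is why higher frequencies give finer resolution (at the cost of greater attenuation).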
Some of the commonly used modes of ultrasonography are briefly described
below.
 A mode: A single transducer is used in this mode. The amplitude
(hence the name "A") of the echoes received is displayed on the vertical
axis, with the corresponding depth (related to the time of arrival of the
echo) being displayed on the horizontal axis. The A mode is useful in
distance measurement (ranging), with applications in the detection of
retinal detachment and the detection of shift of the midline of the brain.
 M mode: This mode produces a display with time on the horizontal
axis and echo depth on the vertical axis. The M mode is useful in the
study of movement or motion (hence the name), with applications in
cardiac valve motion analysis.
 B mode: An image of a 2D section or slice of the body is produced
in this mode by using a single transducer to scan the region of interest
or by using an array of sequentially activated transducers. Real-time
imaging is possible in the B mode with 15–40 fps. The B mode is
useful in studying large organs, such as the liver, and in fetal imaging.
 Doppler ultrasound: This mode is based upon the change in fre-
quency of the investigating beam caused by a moving target (the Doppler
effect), and is useful in imaging blood flow. A particular application lies
in the detection of turbulence and retrograde flow, which is useful in
the diagnosis of stenosis or insufficiency of cardiac valves and plaques in
blood vessels [91]. Doppler imaging may be used to combine anatomic
information from B-mode imaging with flow information obtained using
pulsed Doppler.
 Special probes: A variety of probes have been developed for ultra-
sonography of specific organs and for special applications, some of which
are endovaginal probes for fetal imaging, transrectal probes for imaging
the prostate [92], transesophageal probes for imaging the heart via the
esophagus, and intravascular probes for the study of blood vessels.
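Two of the basic relations underlying the modes listed above can be illustrated numerically: pulse-echo ranging uses d = vt/2, and pulsed Doppler uses the standard shift relation f_d = 2 f₀ v cos θ / c. Neither formula is stated explicitly in this section, and the echo time, frequency, blood velocity, and beam angle below are assumed values:

```python
import math

# A-mode ranging: an echo returning after time t corresponds to an
# interface at depth d = v * t / 2 (the pulse travels there and back).
v = 1540.0                      # m/s, soft tissue
t_echo = 65.0e-6                # s, echo arrival time (assumed)
depth_cm = v * t_echo / 2.0 * 100.0
print(f"interface depth: {depth_cm:.1f} cm")

# Doppler shift from moving blood, using the standard pulsed-Doppler
# relation f_d = 2 * f0 * v_blood * cos(theta) / c (values assumed).
f0 = 5.0e6                      # transmitted frequency, Hz
v_blood = 0.5                   # m/s, blood velocity (assumed)
theta = math.radians(60.0)      # beam-to-flow angle (assumed)
f_d = 2.0 * f0 * v_blood * math.cos(theta) / v
print(f"Doppler shift: {f_d:.0f} Hz")
```

The shift of a few kilohertz falls in the audible range, which is why Doppler flow signals are often presented audibly as well as visually.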
Examples: Echocardiography is a major application of ultrasonography
for the assessment of the functional integrity of heart valves. An array of ul-
trasound transducers is used in the B mode in echocardiography, so as to ob-
tain a video illustrating the opening and closing activities of the valve leaflets.
Figure 1.28 illustrates two frames of an echocardiogram of a subject with a
normal mitral valve, which is captured in the open and closed positions in
the two frames. Figure 1.29 shows the M-mode ultrasound image of the same
subject, illustrating the valve leaflet movement against time. Echocardiogra-
phy is useful in the detection of stenosis and loss of flexibility of the cardiac
valves due to calcification.

FIGURE 1.28
Two frames of the echocardiogram of a subject with normal function of the
mitral valve. (a) Mitral valve in the fully open position. (b) Mitral valve in
the closed position. Images courtesy of Foothills Hospital, Calgary.

FIGURE 1.29
M-mode ultrasound image of a subject with normal function of the mitral
valve. The horizontal axis represents time. The echo signature of the mitral
valve leaflets as they open and close is illustrated. Image courtesy of Foothills
Hospital, Calgary.
In spite of limitations in image quality and resolution, ultrasonography
is an important medical imaging modality due to the nonionizing nature of
the medium. For this reason, ultrasonography is particularly useful in fetal
imaging. Figure 1.30 shows a B-mode ultrasound image of a fetus. The
outline of the head and face as well as the spinal column are clearly visible
in the image. Images of this nature may be used to measure the size of the
head and head-to-sacrum length of the fetus. Ultrasonography is useful in the
detection of abnormalities in fetal development, especially distension of the
stomach, hydrocephalus, and complications due to maternal (or gestational)
diabetes such as the lack of development of the sacrum.

FIGURE 1.30
B-mode ultrasound (3:5 MHz ) image of a fetus (sagital view). Image courtesy
of Foothills Hospital, Calgary.

Ultrasonography is also useful in tomographic imaging [83, 93], discrimi-
nating between solid masses and fluid-filled cysts in the breast [53], and tissue
characterization [5].

1.9 Magnetic Resonance Imaging


MRI is based on the principle of nuclear magnetic resonance (NMR): the
behavior of nuclei under the influence of externally applied magnetic and EM
(radio-frequency, or RF) fields [4, 6, 84, 94, 95, 96]. A nucleus with an odd
number of protons or an odd number of neutrons has an inherent nuclear spin
and exhibits a magnetic moment; such a nucleus is said to be NMR-active.
The commonly used modes of MRI rely on hydrogen (¹H, or proton), carbon
(¹³C), fluorine (¹⁹F), and phosphorus (³¹P) nuclei.
In the absence of an external magnetic field, the vectors of magnetic mo-
ments of active nuclei have random orientations, resulting in no net mag-
netism. When a strong external magnetic field H₀ is applied, some of the
nuclear spins of active nuclei align with the field (either parallel or antiparal-
lel to the field). The number of spins that align with the field is a function
of H₀ and inversely related to the absolute temperature. At the commonly
used field strength of 1.5 T (tesla), only a relatively small fraction of spins
align with the field. The axis of the magnetic field is referred to as the z
axis. Parallel alignment corresponds to a lower energy state than antiparallel
alignment, and hence there will be more nuclei in the former state. This state
of forced alignment results in a net magnetization vector. (At equilibrium
and 1.5 T, only about seven more spins in a million are aligned in the parallel
state; hence, MRI is a low-sensitivity imaging technique.)
The magnetic spin vector of each active nucleus precesses about the z axis
at a frequency known as the Larmor frequency, given by
ω₀ = γ H₀,   (1.4)
where γ is the gyromagnetic ratio of the nucleus considered (for protons,
γ = 42.57 MHz T⁻¹). From the viewpoint of classical mechanics and for
¹H, this form of precession is comparable to the rotation of a spinning top's
axis around the vertical.
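With γ expressed in MHz per tesla, Equation 1.4 gives the proton precession frequency directly; for example, at the commonly used field strength of 1.5 T:

```python
# Larmor frequency of protons (Equation 1.4): f = gamma * H0,
# with gamma expressed here in MHz per tesla.
gamma = 42.57      # MHz / T, gyromagnetic ratio of the proton
H0 = 1.5           # T, main field strength
f_larmor = gamma * H0
print(f"Larmor frequency at {H0} T: {f_larmor:.2f} MHz")
```

The result, about 63.9 MHz, sets the frequency of the RF pulses needed to excite protons at this field strength.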
MRI involves controlled perturbation of the precession of nuclear spins, and
measurement of the RF signals emitted when the perturbation is stopped and
the nuclei return to their previous state of equilibrium. MRI is an intrinsi-
cally 3D imaging procedure. The traditional CT scanner requires mechanical
scanning and provides 2D cross-sectional images in a slice-by-slice manner:
slices at other orientations, if required, have to be computed from a set of
2D slices covering the required volume. In MRI, however, images may be
obtained directly in any transversal, coronal, sagital, or oblique section by us-
ing appropriate gradients and pulse sequences. Furthermore, no mechanical
scanning is involved: slice selection and scanning are performed electronically
by the use of magnetic eld gradients and RF pulses.
The main components and principles of MRI are as follows [84]:
 A magnet that provides a strong, uniform field of the order of 0.5–
4 T. This causes some active nuclei to align in the direction of the field
(parallel or antiparallel) and precess about the axis of the field. The
rate of precession is proportional to the strength of the magnetic field
H₀. The stronger the magnetic field, the more spins are aligned in the
parallel state versus the antiparallel state, and the higher will be the
signal-to-noise ratio (SNR) of the data acquired.
 An RF transmitter to deliver an RF electromagnetic pulse H₁ to the
body being imaged. The RF pulse provides the perturbation mentioned
earlier: it causes the axis of precession of the net spin vector to deviate
or "flip" from the z axis. In order for this to happen, the frequency of
the RF field must be the same as that of precession of the active nuclei,
such that the nuclei can absorb energy from the RF field (hence the term
"resonance" in MRI). The frequency of RF-induced rotation is given by
ω₁ = γ H₁.   (1.5)
When the RF perturbation is removed, the active nuclei return to their
unperturbed states (alignment with H₀) through various relaxation pro-
cesses, emitting energy in the form of RF signals.
 A gradient system to apply to the body a controlled space-variant
and time-variant magnetic field
h(t, x) = G(t) · x,   (1.6)
where x is a vector representing the spatial coordinates, G is the gradi-
ent applied, and t is time. The components of G along the z direction
as well as in the x and y directions (the x–y plane is orthogonal to
the z axis) are controlled individually; however, the component of mag-
netic field change is only in the z direction. The gradient causes nuclei
at different positions to precess at different frequencies, and provides
for spatial coding of the signal emitted from the body. The Larmor
frequency at x is given by
ω(x) = γ (H₀ + G · x).   (1.7)
Nuclei at specific positions or planes in the body may be excited selec-
tively by applying RF pulses of specific frequencies. The combination of
the gradient fields and the RF pulses applied is called the pulse sequence.
 An RF detector system to detect the RF signals emitted from the
body. The RF signal measured outside the body represents the sum of
the RF signals emitted by active nuclei from a certain part or slice of
the body, as determined by the pulse sequence. The spectral spread of
the RF signal due to the application of gradients provides information
on the location of the corresponding source nuclei.
 A computing and imaging system to reconstruct images from the
measured data, as well as process and display the images. Depend-
ing upon the pulse sequence applied, the RF signal sensed may be
formulated as the 2D or 3D Fourier transform of the image to be re-
constructed [4, 84, 94, 95]. The data measured correspond to samples
of the 2D Fourier transform of a sectional image at points on concentric
squares or circles [4]. The Fourier or backprojection methods of image
reconstruction from projections (described in Chapter 9) may then be
used to obtain the image. (The Fourier method is the most commonly
used method for reconstruction of MR images.)
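Equation 1.7 can be illustrated numerically: with a gradient applied, each position along the gradient axis precesses at its own frequency, which is what makes frequency-selective (slice-selective) excitation possible. The gradient strength below is an assumed value:

```python
# Spatial encoding with a gradient (Equation 1.7):
# omega(x) = gamma * (H0 + G * x), so each position along the gradient
# axis precesses at its own frequency.
gamma = 42.57e6        # Hz / T, proton gyromagnetic ratio
H0 = 1.5               # T, main field strength
G = 10.0e-3            # T / m (10 mT/m, an assumed gradient strength)

def larmor_hz(x_m):
    """Precession frequency (Hz) at position x metres along the gradient."""
    return gamma * (H0 + G * x_m)

# Across a 20 cm field of view, a 1 cm offset shifts the frequency by
# gamma * G * 0.01 ~ 4.3 kHz, enabling selective excitation of a slice.
for x_cm in (-10, 0, 10):
    print(f"x = {x_cm:+3d} cm -> {larmor_hz(x_cm / 100.0) / 1e6:.4f} MHz")
```

An RF pulse with a narrow bandwidth centered on one of these frequencies excites only the nuclei in the corresponding plane.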

Various pulse sequences may be used to measure different parameters of
the tissues in the body being imaged. The image obtained is a function
of the nuclear spin density in space and the corresponding parameters of
the relaxation processes involved. Longitudinal magnetization refers to the
component of the magnetization vector along the direction of the external
magnetic field. The process by which longitudinal magnetization returns to
its state of equilibrium (that is, realignment with the external magnetic field)
after an excitation pulse is referred to as longitudinal relaxation. The time
constant of longitudinal relaxation is labeled as T₁.
A 90° RF pulse causes the net magnetization vector to be oriented in the
plane perpendicular to the external magnetic field: this is known as transverse
magnetization. When the excitation is removed, the affected nuclei return to
their states of equilibrium, emitting a signal, known as the free-induction
decay (FID) signal, at the Larmor frequency. The decay time constant of
transverse magnetization is labeled as T₂. Values of T₁ for various types
of tissues range from 200 ms to 2 000 ms; T₂ values range from 80 ms to
180 ms. Several other parameters may be measured by using different MRI
pulse sequences: the resulting images may have different appearances and
clinical applications.
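The relaxation processes described above are commonly modeled as monoexponentials. The expressions below are the standard Bloch relaxation equations, which are not stated explicitly in this chapter; the T₁ and T₂ values are sample values taken from the quoted ranges:

```python
import math

# Monoexponential relaxation after a 90-degree pulse (standard Bloch
# relaxation model; T1 and T2 are sample tissue values, assumed):
#   longitudinal: Mz(t)  = M0 * (1 - exp(-t / T1))
#   transverse:   Mxy(t) = M0 * exp(-t / T2)
M0 = 1.0
T1, T2 = 800.0, 100.0    # ms (assumed sample values)

def mz(t_ms):
    """Longitudinal magnetization recovering toward M0."""
    return M0 * (1.0 - math.exp(-t_ms / T1))

def mxy(t_ms):
    """Transverse magnetization (FID envelope) decaying from M0."""
    return M0 * math.exp(-t_ms / T2)

for t in (0.0, 100.0, 800.0, 2400.0):
    print(f"t = {t:6.0f} ms: Mz = {mz(t):.3f}, Mxy = {mxy(t):.3f}")
```

Because T₂ is much shorter than T₁, the transverse signal vanishes long before the longitudinal magnetization has fully recovered; the timing of the pulse sequence relative to these two constants is what produces T₁- or T₂-weighted contrast.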
MRI is suitable for functional imaging. The increased supply of oxygen
(or oxygenated blood) to certain regions of the brain due to related stimuli
may be recorded on MR images. The difference between the prestimulus and
post-stimulus images may then be used to analyze the functional aspects of
specific regions of the brain.
Examples: Figure 1.31 shows a sagittal MR image of a patient's knee,
illustrating the bones and cartilages that form the knee joint. Images of this
type assist in the detection of bruised bones, bleeding inside the distal end of
the femur, torn cartilages, and ruptured ligaments.
Figures 1.32 (a)–(c) illustrate the sagittal, coronal, and transversal (cross-
sectional) views of the MR image of a patient's head. The images show the
details of the structure of the brain. MRI is useful in functional imaging of
the brain.

FIGURE 1.31
Sagittal section of the MR image of a patient's knee. Image courtesy of
Foothills Hospital, Calgary.
FIGURE 1.32
(a) Sagittal, (b) coronal, and (c) transversal (cross-sectional) MR images of a patient's head. Images courtesy of Foothills
Hospital, Calgary.
FIGURE 1.33
Computer-aided diagnosis and therapy based upon biomedical image analysis.
[Block diagram: a probing signal or radiation is applied to the physiological
system (patient); transducers and the imaging system produce biomedical
images, which undergo analog-to-digital conversion and storage in a picture
archival and communication system (PACS); image processing (filtering and
image enhancement, detection of regions or objects) and image or pattern
analysis (feature extraction; pattern recognition, classification, and diagnostic
decision) support the physician or medical specialist in computer-aided
diagnosis and therapy.]

1.10 Objectives of Biomedical Image Analysis


The representation of biomedical images in electronic form facilitates com-
puter processing and analysis of the data. Figure 1.33 illustrates the typical
steps and processes involved in computer-aided diagnosis (CAD) and therapy
based upon biomedical image analysis.
The human–instrument system: The components of a human–instru-
ment system [97, 98, 99, 100, 101, 102] and some related notions are described
in the following paragraphs.
 The subject (or patient): It is important always to bear in mind that
the main purpose of biomedical imaging and image analysis is to provide
a certain benefit to the subject or patient. All systems and procedures
should be designed so as not to cause undue inconvenience to the subject,
and not to cause any harm or danger. In applying invasive or risky
procedures, it is extremely important to perform a risk–benefit analysis
and determine if the anticipated benefits of the procedure are worth
placing the subject at the risks involved.
 Transducers: films, scintillation detectors, fluorescent screens, solid-
state detectors, piezoelectric crystals, X-ray generators, ultrasound gen-
erators, EM coils, electrodes, sensors.
 Signal-conditioning equipment: PMTs, amplifiers, filters.
 Display equipment: oscilloscopes, strip-chart or paper recorders, com-
puter monitors, printers.
 Recording, data processing, and transmission equipment: films, analog-
to-digital converters (ADCs), digital-to-analog converters (DACs), digi-
tal tapes, compact disks (CDs), diskettes, computers, telemetry systems,
picture archival and communication systems (PACS).
 Control devices: power supply stabilizers and isolation equipment, pa-
tient intervention systems.
The major objectives of biomedical instrumentation [97, 98, 99, 100, 101,
102] in the context of imaging and image analysis are:
 Information gathering | measurement of phenomena to interpret an
organ, a process, or a system.
 Screening | investigating a large asymptomatic population for the in-
cidence of a certain disease (with the aim of early detection).
 Diagnosis | detection or confirmation of malfunction, pathology, or
abnormality.
 Monitoring | obtaining periodic information about a system.
 Therapy and control | modification of the behavior of a system based
upon the outcome of the activities listed above to ensure a specific result.
 Evaluation | objective analysis to determine the ability to meet func-
tional requirements, obtain proof of performance, perform quality con-
trol, or quantify the effect of treatment.

Invasive versus noninvasive procedures: Image acquisition procedures
may be categorized as invasive or noninvasive. Invasive procedures
involve the placement of devices or materials inside the body, such as
the insertion of endoscopes, catheter-tip sensors, and X-ray contrast media.
Noninvasive procedures are desirable in order to minimize risk to the subject.
Note that making measurements or imaging with X rays, ultrasound, etc.
could strictly be classified as invasive procedures because they involve pene-
tration of the body with externally administered radiation, even though the
radiation is invisible and there is no visible puncturing or invasion of the body.
Active versus passive procedures: Image acquisition procedures may
be categorized as active or passive procedures. Active data acquisition pro-
cedures require external stimuli to be applied to the subject, or require the
subject to perform a certain activity to stimulate the system of interest in
order to elicit the desired response. For example, in SPECT investigations of
myocardial ischemia, the patient performs vigorous exercise on a treadmill.
An ischemic zone is better delineated in SPECT images taken when the car-
diac system is under stress than when at rest. While such a procedure may
appear to be innocuous, it does carry risks in certain situations for some sub-
jects: stressing an unwell system beyond a certain limit may cause pain in
the extreme situation, the procedure may cause irreparable damage or death.
The investigator should be aware of such risks, factor them in a risk{bene t
analysis, and be prepared to manage adverse reactions.
Passive procedures do not require the subject to perform any activity. Ac-
quiring an image of a subject using reflected natural light (with no flash from
the camera) or thermal emission could be categorized as a passive and non-
contact procedure.
Most organizations require ethical approval by specialized committees for
experimental procedures involving human or animal subjects, with the aim of
minimizing the risk and discomfort to the subject and maximizing the benefits
to both the subject and the investigator.

1.11 Computer-aided Diagnosis


Radiologists, physicians, cardiologists, neuroscientists, pathologists, and other
health-care professionals are highly trained and skilled practitioners. Why
then would we want to suggest the use of computers for the analysis of biomed-
ical images? The following paragraphs provide some arguments in favor of the
application of computers to process and analyze biomedical images.
 Humans are highly skilled and fast in the analysis of visual patterns, but
are slow (usually) in arithmetic operations with large numbers of values.
A single 64 × 64-pixel SPECT image contains a total of 4 096 pixels; a
high-resolution mammogram may contain as many as 5 000 × 4 000 =
20 × 10⁶ pixels. If images need to be processed to remove noise or extract
a parameter, it would not be practical for a person to perform such
computation. Computers can perform millions of arithmetic operations
per second. It should be noted, however, that the recognition of objects
and patterns in images using mathematical procedures typically requires
huge numbers of operations that could lead to slow responses in such
tasks from low-level computers. A trained human observer, on the other
hand, can usually recognize an object or a pattern in an instant.
 Humans could be affected by fatigue, boredom, and environmental fac-
tors, and are susceptible to committing errors. Working with large
numbers of images in one sitting, such as in breast cancer screening,
poses practical difficulties. A human observer could be distracted by
other events in the surrounding areas and may miss uncommon signs
present in some images. Computers, being inanimate but mathemat-
ically accurate and consistent machines, can be designed to perform
computationally specific and repetitive tasks.
 Analysis by humans is usually subjective and qualitative. When com-
parative analysis is required between an image of a subject and another
or a reference pattern, a human observer would typically provide a qual-
itative response. Specific or objective comparison | for example, the
comparison of the volume of two regions to an accuracy of the order
of a milliliter | would require the use of a computer. The deriva-
tion of quantitative or numerical features from images would certainly
demand the use of computers.
 Analysis by humans is subject to interobserver as well as intraobserver
variations (with time). Given that most analyses performed by hu-
mans are based upon qualitative judgment, they are liable to vary with
time for a given observer, or from one observer to another. The former
could be due to lack of diligence or due to inconsistent application of
knowledge, and the latter due to variations in training and the level of
understanding or competence. Computers can apply a given procedure
repeatedly, and whenever recalled, in a consistent manner. Furthermore,
it is possible to encode the knowledge (to be more specific, the logical
processes) of many experts into a single computational procedure, and
thereby endow a computer with the collective "intelligence" of several
human experts in an area of interest.
One of the important points to note in the discussion above is that quantita-
tive analysis becomes possible by the application of computers to biomedical
images. The logic of medical or clinical diagnosis via image analysis could
then be objectively encoded and consistently applied in routine or repetitive
tasks. However, it should be emphasized at this stage that the end-goal of
biomedical image analysis should be computer-aided diagnosis and not auto-
mated diagnosis. A physician or medical specialist typically uses a significant
amount of information in addition to images, including the general physical
appearance and mental state of the patient, family history, and socio-economic
factors affecting the patient, many of which are not amenable to quantifica-
tion and logical rule-based processes. Biomedical images are, at best, indirect
indicators of the state of the patient; many cases may lack a direct or unique
image-to-pathology relationship. The results of image analysis need to be
integrated with other clinical signs, symptoms, and information by a physi-
cian or medical specialist. Above all, the intuition of the medical specialist
plays an important role in arriving at the final diagnosis. For these reasons,
and keeping in mind the realms of practice of various licensed and regulated
professions, liability, and legal factors, the final diagnostic decision is best left
to the physician or medical specialist. It could be expected that quantitative
and objective analysis facilitated by the application of computers to biomed-
ical image analysis will lead to a more accurate diagnostic decision by the
physician.
On the importance of quantitative analysis:
"When you can measure what you are speaking about, and ex-
press it in numbers, you know something about it; but when you
cannot measure it, when you cannot express it in numbers, your
knowledge is of a meagre and unsatisfactory kind: it may be the
beginning of knowledge, but you have scarcely, in your thoughts,
advanced to the stage of science."
– Lord Kelvin (William Thomson, 1824–1907) [103]
On assumptions made in quantitative analysis:
"Things do not in general run around with their measure stamped
on them like the capacity of a freight car; it requires a certain
amount of investigation to discover what their measures are ...
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead."
– Norbert Wiener (1894–1964)

1.12 Remarks
We have taken a general look at the nature of biomedical images in this chap-
ter, and seen a few images illustrated for the purpose of gaining familiarity
with their appearance and typical features. Specific details of the characteris-
tics of several types of biomedical images and their processing or analysis are
described in subsequent chapters, along with more examples.
We have also stated the objectives of biomedical imaging and image ana-
lysis. Some practical difficulties that arise in biomedical investigations based
upon imaging were discussed in order to draw attention to the relevant prac-
tical issues. The suitability and desirability of the application of computers
for biomedical image analysis were discussed, with emphasis on objective and
quantitative analysis toward the end-goal of CAD. The following chapters
provide the descriptions of several image processing and analysis techniques
for various biomedical applications.

1.13 Study Questions and Problems


1. Give three reasons for the application of computers in medicine for computer-
aided diagnosis.
2. List the relative advantages and disadvantages of X-ray, ultrasound, CT, MR,
and nuclear medicine imaging for two clinical applications. Indicate where
each method is inappropriate or inapplicable.
3. Discuss the factors affecting the choice of X-ray, ultrasound, CT, MR, and
nuclear medicine imaging procedures for clinical applications.
4. Describe a few sources of artifact in X-ray, ultrasound, CT, MR, and nuclear
medicine imaging.
5. Discuss the sources and the nature of random, periodic, structured, and phys-
iological artifacts in medical images.
6. Describe the difference between anatomical (physical) and functional (physi-
ological) imaging. Give examples.
7. Distinguish between active and passive medical imaging procedures; give examples.
8. Distinguish between invasive and noninvasive medical imaging procedures; give examples.
9. Discuss factors that affect the resolution in various medical imaging modali-
ties, including X-ray, ultrasound, CT, MR, and nuclear medicine imaging.

1.14 Laboratory Exercises and Projects


1. Visit a few medical imaging facilities in your local hospital or health sciences
center. View the procedures related to the acquisition of a few medical images,
including X-ray, ultrasound, MR, CT, and SPECT images.
Respect the priority, privacy, and confidentiality of patients.
Discuss the imaging protocols and parameters with a medical physicist and
the technologists. Develop an understanding of the relationship between the
imaging system parameters, image quality, and radiation exposure to the pa-
tient.
Request a radiologist to explain how he or she interprets the images. Obtain information on the differences between normal and abnormal (disease) patterns in each mode of imaging.
Collect a few sample images for use in image processing experiments, after
obtaining the necessary permissions and ensuring that you carry no patient identification out of the facility.
2. Most medical imaging facilities use phantoms or test objects for quality control
of imaging systems. If a phantom is not available, prepare one by attaching
strips of different thickness and different metals to a plastic or plexiglass sheet. With the help of a qualified technologist, obtain X-ray images of a phantom at widely different kVp and mAs settings. Study the contrast, noise, and detail visibility in the resulting images.
Digitize the images for use in image processing experiments.
3. Visit a medical (clinical or pathology) laboratory. View several samples and
specimens through microscopes.
Respect the priority, privacy, and confidentiality of patients.
Request a technologist or pathologist to explain how he or she interprets the images. Obtain information on the differences between normal and abnormal (disease) patterns in different types of samples and tests.
Collect a few sample images for use in image processing experiments, after
obtaining the necessary permissions and ensuring that you carry no patient identification out of the laboratory.
4. Interview physicians, radiologists, pathologists, medical physicists, and medical technologists to find areas where they need and would like to use computing technology, digital image processing, computer vision, pattern recognition,
and pattern classication methods to help in their work. Volunteer to assist
them in their work! Develop projects for your course of study or research
in biomedical image analysis. Request a specialist in the relevant area to
collaborate with you in the project.
5. Prepare a set of test images by collecting at least ten images that contain the
following features:
a collection of small objects,
a collection of large objects,
directional (oriented) features,
fine texture,
coarse texture,
geometrical shapes,
human faces,
smooth features,
sharp edges.
Scan a few photos from your family photo album. Limit synthesized images
to one or two in the collection. Limit images borrowed from Web sites to two
in the collection. Use the images in the exercises provided at the end of each
chapter.
For a selection of test images from those that have been used in this book,
visit www.enel.ucalgary.ca/People/Ranga/enel697
2
Image Quality and Information Content

Several factors affect the quality and information content of biomedical images
acquired with the modalities described in Chapter 1. A few considerations
in biomedical image acquisition and analysis that could have a bearing on
image quality are described in Section 2.1. A good understanding of such
factors, as well as appropriate characterization of the concomitant loss in
image quality, are essential in order to design image processing techniques to
remove the degradation and/or improve the quality of biomedical images. The
characterization of information content is important for the same purposes as
above, as well as in the analysis and design of image transmission and archival
systems.
An inherent problem in characterizing quality lies in the fact that image
quality is typically judged by human observers in a subjective manner. To
quantify the notion of image quality is a difficult proposition. Similarly, the nature of the information conveyed by an image is difficult to quantify due to its multifaceted characteristics in terms of statistical, structural, perceptual, semantic, and diagnostic connotations. However, several measures have been designed to characterize or quantify a few specific attributes of images,
which may in turn be associated with various notions of quality as well as
information content. The numerical values of such measures of a given image
before and after certain processes, or the changes in the attributes due to cer-
tain phenomena, could then be used to assess variations in image quality and
information content. We shall explore several such measures in this chapter.

2.1 Difficulties in Image Acquisition and Analysis


In Chapter 1, we studied several imaging systems and procedures for the
acquisition of many different types of biomedical images. The practical application of these techniques may pose certain difficulties: the investigator often faces conditions that may impose limitations on the quality and information content of the images acquired. The following paragraphs illustrate a few practical difficulties that one might encounter in biomedical image acquisition and analysis.

Accessibility of the organ of interest: Several organs of interest in
imaging-based investigation are situated well within the body, encased in pro-
tective and dicult-to-access regions, for good reason! For example, the brain
is protected by the skull, and the prostate is situated at the base of the blad-
der near the pelvic outlet. Several limitations are encountered in imaging
such organs special imaging devices and image processing techniques are re-
quired to facilitate their visualization. Visualization of the arteries in the
brain requires the injection of an X-ray contrast agent and the subtraction
of a reference image see Section 4.1. Special transrectal probes have been
designed for 3D ultrasonic imaging of the prostate 92]. Despite the use of
such special devices and techniques, images obtained in applications as above
tend to be a ected by severe artifacts.
Variability of information: Biological systems exhibit great ranges of in-
herent variability within their different categories. The intrinsic and natural variability presented by biological entities within a given class far exceeds the variability that we may observe in engineering, physical, and manufactured samples. The distinction between a normal pattern and an abnormal pattern is often clouded by significant overlap between the ranges of the features or variables that are used to characterize the two categories; the problem is compounded when multiple abnormalities need to be considered. Imaging conditions and parameters could cause further ambiguities due to the effects of subject positioning and projection. For example, most malignant breast tumors are irregular and spiculated in shape, whereas benign masses are smooth and round or oval. However, some malignant tumors may present smooth shapes, and some benign masses may have rough shapes. A tumor may present a rough appearance in one view or projection, but a smoother profile in another. Furthermore, the notion of shape roughness is nonspecific and open-ended. Overlapping patterns caused by ligaments, ducts, and breast tissue that may lie in other planes, but are integrated on to a single image plane in the process of mammographic imaging, could also affect the
appearance of tumors and masses in images. The use of multiple views and
spot magnication imaging could help resolve some of these ambiguities, but
at the cost of additional radiation dose to the subject.
Physiological artifacts and interference: Physiological systems are
dynamic and active. Some activities, such as breathing, may be suspended
voluntarily by an adult subject (in a reasonable state of health and well-
being) for brief periods of time to permit improved imaging. However, car-
diac activity, blood circulation, and peristaltic movement are not under one's
volitional control. The rhythmic contractile activity of the heart poses chal-
lenges in imaging of the heart. The pulsatile movement of blood through the
brain causes slight movements of the brain that could cause artifacts in angiographic imaging; see Section 4.1. Dark shadows may appear in ultrasound images next to bony regions due to significant attenuation of the investigating beam, and hence the lack of echoes from tissues beyond the bony regions along the path of beam propagation. An analyst should pay attention to potential
physiological artifacts when interpreting biomedical images.

Special techniques have been developed to overcome some of the limitations mentioned above in cardiac imaging. Electronic steering of the X-ray beam has been employed to reduce the scanning time required for CT projection data acquisition in order to permit imaging of the heart; see Figure 1.21. State-of-the-art multislice and helical-scan CT scanners acquire the required data in intervals much shorter than the time taken by the initial models of CT scanners. Cardiac nuclear medicine imaging is performed by gating the photon-counting process to a certain specific phase of the cardiac cycle by using the electrocardiogram (ECG) as a reference; see Figure 1.27 and Section 3.10. Although nuclear medicine imaging procedures take several minutes, the almost-periodic activity of the heart permits the cumulative imaging of its musculature or chambers at particular positions repeatedly over several cardiac cycles.

Energy limitations: In X-ray mammography, considering the fact that the organ imaged is mainly composed of soft tissues, a low kVp would be desired in order to maximize image contrast. However, low-energy X-ray photons are absorbed more readily than high-energy photons by the skin and breast tissues, thereby increasing the radiation dose to the patient. A compromise is required between these two considerations. Similarly, in TEM, a high-kV electron beam would be desirable in order to minimize damage to the specimen, but a low-kV beam can provide improved contrast. The practical application of imaging techniques often requires the striking of a trade-off between conflicting considerations as above.

Patient safety: The protection of the subject or patient in a study from electrical shock, radiation hazard, and other potentially dangerous conditions is an unquestionable requirement of paramount importance. Most organizations require ethical approval by specialized committees for experimental procedures involving human or animal subjects, with the aim of minimizing the risk and discomfort to the subject and maximizing the benefits to both the subjects and the investigator. The relative levels of potential risks involved should be assessed when a choice is available between various procedures, and analyzed against their relative benefits. Patient safety concerns may preclude the use of a procedure that may yield better images or results than others, or may require modifications to a procedure that may lead to inferior images. Further image processing steps would then become essential in order to improve image quality or otherwise compensate for the initial compromise.

2.2 Characterization of Image Quality


Biomedical images are typically complex sources of several items of informa-
tion. Furthermore, the notion of quality cannot be easily characterized with a
small number of features or attributes. Because of these reasons, researchers
have developed a rather large number of measures to represent quantitatively
several attributes of images related to impressions of quality. Changes in
measures related to quality may be analyzed for several purposes, such as:
comparison of images generated by different medical imaging systems;
comparison of images obtained using different imaging parameter settings of a given system;
comparison of the results of several image enhancement algorithms;
assessment of the effect of the passage of an image through a transmission channel or medium; and
assessment of images compressed by different data compression techniques at different rates of loss of data, information, or quality.
Specially designed phantoms are often used to test medical imaging systems for routine quality control [104, 105, 106, 107, 108]. Bijkerk et al. [109] developed a phantom with gold disks of different diameter and thickness to test mammography systems. Because the signal contrast and location are known from the design of the phantom, the detection performance of trained observers may be used to test and compare imaging systems.
Ideally, it is desirable to use "numerical observers": automatic tools to measure and express image quality by means of numbers or "figures of merit" (FOMs) that could be objectively compared; see Furuie et al. [110] and Barrett [111] for examples. It is clear that not only are FOMs important, but so is the methodology for their comparison. Kayargadde and Martens [112, 113] discuss the relationships between image quality attributes in a psychometric space and a perceptual space.
Many algorithms have been proposed to explore various attributes of images
or imaging systems. The attributes take into consideration either the whole image or a chosen region to calculate FOMs, and are labeled as being "global" or "local", respectively. Often, the measured attribute is image definition, that is, the clarity with which details are reproduced [114], which is typically expressed in terms of image sharpness. This notion was first mentioned by Higgins and Jones [115] in the realm of photography, but is valid for image evaluation in a broader context. Rangayyan and Elkadiki [116] present a survey of different methods to measure sharpness in photographic and digital images (see Section 2.15). Because quality is a subjective notion, the results obtained by algorithms such as those mentioned above need to be validated against the evaluation of test images by human observers. This could be done by submitting the same set of images to human and numerical (computer) evaluation, and then comparing the results [104, 105, 106, 107, 108, 117]. Subjective and objective judgment should agree to some degree under defined
conditions in order for the numerical measures to be useful. The following
sections describe some of the concepts and measures that are commonly used
in biomedical image analysis.

2.3 Digitization of Images


The representation of natural scenes and objects as digital images for process-
ing using computers requires two steps: sampling and quantization. Both of
these steps could potentially cause loss of quality and introduce artifacts.

2.3.1 Sampling
Sampling is the process of representing a continuous-time or continuous-space
signal on a discrete grid, with samples that are separated by (usually) uniform
intervals. The theory and practice of sampling 1D signals have been well
established [1, 2, 7]. In essence, a band-limited signal with the frequency of its fastest component being f_m Hz may be represented without loss by its samples obtained at the Nyquist rate of f_s = 2 f_m Hz.
Sampling may be modeled as the multiplication of the given continuous-
time or analog signal with a periodic train of impulses. The multiplication
of two signals in the time domain corresponds to the convolution of their
Fourier spectra. The Fourier transform of a periodic train of impulses is
another periodic train of impulses with a period that is equal to the inverse
of the period in the time domain (that is, f_s Hz). Therefore, the Fourier spectrum of the sampled signal is periodic, with a period equal to f_s Hz. A sampled signal has infinite bandwidth; however, the sampled signal contains distinct or unique frequency components only up to f_m = f_s/2 Hz.
If the signal as above is sampled at a rate lower than f_s Hz, an error known as aliasing occurs, where the frequency components above f_s/2 Hz appear at lower frequencies. It then becomes impossible to recover the original signal from its sampled version.
If sampled at a rate of at least f_s Hz, the original signal may be recovered from its sampled version by lowpass filtering and extracting the base-band component over the band ±f_m Hz from the infinite spectrum of the sampled signal. If an ideal (rectangular) lowpass filter were to be used, the equivalent operation in the time domain would be convolution with a sinc function (which is of infinite duration). This operation is known as interpolation. Other interpolating functions of finite duration need to be used in practice, with the equivalent filter extracting the base-band components without significant reduction in gain over the band ±f_m Hz.
In practice, in order to prevent aliasing errors, it is common to use an anti-aliasing filter prior to the sampling of 1D signals, with a pass-band that is close to f_s/2 Hz, with the prior knowledge that the signal contains no significant energy or information beyond f_m ≤ f_s/2 Hz. Analog spectrum analyzers may be used to estimate the bandwidth and spectral content of a given 1D analog signal prior to sampling.
All of the concepts explained above apply to the sampling of 2D signals or
images. However, in most real-life applications of imaging and image process-
ing, it is not possible to estimate the frequency content of the images, and
also not possible to apply anti-aliasing filters. Adequate sampling frequencies need to be established for each type of image or application based upon prior experience and knowledge. Regardless, even with the same type of images, different sampling frequencies may be suitable or adequate for different applications.
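The folding of frequencies described above can be checked numerically. The following sketch (plain Python, with assumed parameter values) samples a 7 Hz sinusoid at 10 Hz, below its Nyquist rate of 14 Hz, and confirms that the samples are indistinguishable from those of a component folded to 7 − 10 = −3 Hz:

```python
import math

def sample_sine(freq_hz, fs_hz, n_samples):
    """Return n_samples of a unit-amplitude sine of freq_hz, sampled at fs_hz."""
    return [math.sin(2.0 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

fs = 10.0              # sampling rate: only components up to fs/2 = 5 Hz are unique
f_true = 7.0           # signal frequency above the Nyquist limit fs/2
f_alias = f_true - fs  # 7 Hz folds to -3 Hz after sampling at 10 Hz

# the two sets of samples are identical: the 7 Hz component has aliased
high = sample_sine(f_true, fs, 40)
folded = sample_sine(f_alias, fs, 40)
assert max(abs(a - b) for a, b in zip(high, folded)) < 1e-9
```

Sampling at 14 Hz or higher, or lowpass filtering the signal to below 5 Hz before sampling at 10 Hz, would remove this ambiguity.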
Figure 2.1 illustrates the loss of quality associated with sampling an image
at lower and lower numbers of pixels.
Biomedical images originally obtained on film are usually digitized using high-resolution CCD cameras or laser scanners. Several newer biomedical imaging systems include devices for direct digital data acquisition. In digital imaging systems such as CT, sampling is inherent in the measurement process, which is also performed in a domain that is different from the image domain.
This adds a further level of complexity to the analysis of sampling. Practical
experimentation and experience have helped in the development of guidelines
to assist in such applications.

2.3.2 Quantization
Quantization is the process of representing the values of a sampled signal or image using a finite set of allowed values. In a digital representation using n bits per sample and positive integers only, there exist 2^n possible quantized levels, spanning the range [0, 2^n − 1]. If n = 8 bits are used to represent each pixel, there can exist 256 values or gray levels to represent the values of the image at each pixel, in the range [0, 255].
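As a minimal sketch of uniform quantization (the function and parameter names are illustrative, not from the book), a value in a known input range can be mapped to one of 2^n integer levels:

```python
def quantize(value, n_bits, v_min=0.0, v_max=1.0):
    """Uniformly quantize a value in [v_min, v_max] to an integer
    in [0, 2**n_bits - 1], rounding to the nearest level."""
    levels = 2 ** n_bits
    x = min(max(value, v_min), v_max)  # clip to the input range
    return round((x - v_min) / (v_max - v_min) * (levels - 1))

assert quantize(0.0, 8) == 0     # black
assert quantize(1.0, 8) == 255   # white
assert quantize(0.5, 8) == 128   # mid-gray (127.5 rounds to 128)
```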
It is necessary to map appropriately the range of variation of the given
analog signal, such as the output of a charge-coupled device (CCD) detector
or a video device, to the input dynamic range of the quantizer. If the lowest
level (or lower threshold) of the quantizer is set too high in relation to the
range of the original signal, the quantized output will have several samples
with the value zero, corresponding to all signal values that are less than the
lower threshold. Similarly, if the highest level (or higher threshold) of the
quantizer is set too low, the output will have several samples with the highest

FIGURE 2.1
Effect of sampling on the appearance and quality of an image: (a) 225 × 250 pixels; (b) 112 × 125 pixels; (c) 56 × 62 pixels; and (d) 28 × 31 pixels. All four images have 256 gray levels at 8 bits per pixel.
quantized level, corresponding to all signal values that are greater than the
higher threshold. Furthermore, the decision levels of the quantizer should be
optimized in accordance with the probability density function (PDF) of the
original signal or image.
The Lloyd–Max quantization procedure [8, 9, 118, 119] to optimize a quantizer is derived as follows. Let p(r) represent the PDF of the amplitude or gray levels in the given image, with the values of the continuous or analog variable r varying within the range [r_min, r_max]. Let the range [r_min, r_max] be divided into L parts demarcated by the decision levels R_0, R_1, R_2, ..., R_L, with R_0 = r_min and R_L = r_max; see Figure 2.2. Let the L output levels of the quantizer represent the values Q_0, Q_1, Q_2, ..., Q_{L-1}, as indicated in Figure 2.2.
The mean-squared error (MSE) in representing the analog signal by its quantized values is given by

    ε² = Σ_{l=0}^{L−1} ∫_{R_l}^{R_{l+1}} (r − Q_l)² p(r) dr.    (2.1)

Several procedures exist to determine the values of R_l and Q_l that minimize the MSE [8, 9, 118, 119]. A classical result indicates that the output level Q_l should lie at the centroid of the part of the PDF between the decision levels R_l and R_{l+1}, given by

    Q_l = [ ∫_{R_l}^{R_{l+1}} r p(r) dr ] / [ ∫_{R_l}^{R_{l+1}} p(r) dr ],    (2.2)

which reduces to

    Q_l = (R_l + R_{l+1}) / 2    (2.3)

if the PDF is uniform. It also follows that the decision levels are then given by

    R_l = (Q_{l−1} + Q_l) / 2.    (2.4)
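Equations 2.2 and 2.4 suggest a simple fixed-point iteration: alternately place each output level at the centroid of its interval of the PDF, and each decision level midway between adjacent output levels. A sketch over a discrete histogram follows (an illustrative implementation under simplified assumptions, not the book's code; convergence checking is omitted):

```python
def lloyd_max(pdf, n_levels, n_iter=50):
    """Design a Lloyd-Max quantizer for a discrete PDF over gray
    levels 0..len(pdf)-1, returning the output levels Q_l."""
    L = len(pdf)
    # start with output levels spread uniformly over the gray-level range
    q = [(l + 0.5) * (L - 1) / n_levels for l in range(n_levels)]
    for _ in range(n_iter):
        # decision levels at midpoints of adjacent output levels (Eq. 2.4)
        r = [0.0] + [(q[l - 1] + q[l]) / 2.0 for l in range(1, n_levels)]
        r.append(float(L - 1))
        # output levels at the centroid of p(r) between decision levels (Eq. 2.2)
        for l in range(n_levels):
            num = den = 0.0
            for g in range(L):
                in_last_bin = (l == n_levels - 1 and g == L - 1)
                if r[l] <= g < r[l + 1] or in_last_bin:
                    num += g * pdf[g]
                    den += pdf[g]
            if den > 0.0:
                q[l] = num / den
    return q

# for a uniform PDF, the output levels settle near the bin midpoints (Eq. 2.3)
q = lloyd_max([1.0 / 256] * 256, 4)
assert q == sorted(q) and 30.0 < q[0] < 33.0
```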
It is common to quantize images to 8 bits/pixel. However, CT images represent a large dynamic range of X-ray attenuation coefficient, normalized into HU, over the range [−1,000, 1,000] for human tissues. Small differences of the order of 10 HU could indicate the distinction between normal tissue and diseased tissue. If the range of 2,000 HU were to be quantized into 256 levels using an 8-bit quantizer, each quantized level would represent a change of 2,000/256 = 7.8125 HU, which could lead to the loss of the distinction as above in noise. For this reason, CT and several other medical images are quantized using 12–16 bits/pixel.
The use of an inadequate number of quantized gray levels leads to false
contours and poor representation of image intensities. Figure 2.3 illustrates
the loss of image quality as the number of bits per pixel is reduced from six
to one.


FIGURE 2.2
Quantization of an image gray-level signal r with a Gaussian (solid line) or uniform (dashed line) PDF. The quantizer output levels are indicated by Q_l and the decision levels represented by R_l.

The quantized values in a digital image are commonly referred to as gray


levels, with 0 representing black and 255 standing for white when 8-bit quanti-
zation is used. Unfortunately, this goes against the notion of a larger amount
of gray being darker than a smaller amount of gray! However, if the quantized
values represent optical density (OD), a larger value would represent a darker
region than a smaller value. Table 2.1 lists a few variables that bear different relationships with the displayed pixel value.

2.3.3 Array and matrix representation of images


Images are commonly represented as 2D functions of space: f(x, y). A digital image f(m, n) may be interpreted as a discretized version of f(x, y) in a 2D array, or as a matrix; see Section 3.5 for details on matrix representation of images and image processing operations. The notational differences between the representation of an image as a function of space and as a matrix could be a source of confusion.

FIGURE 2.3
Effect of gray-level quantization on the appearance and quality of an image: (a) 64 gray levels (6 bits per pixel); (b) 16 gray levels (4 bits per pixel); (c) four gray levels (2 bits per pixel); and (d) two gray levels (1 bit per pixel). All four images have 225 × 250 pixels. Compare with the image in Figure 2.1 (a) with 256 gray levels at 8 bits per pixel.
TABLE 2.1
Relationships Between Tissue Type, Tissue Density, X-ray Attenuation Coefficient, Hounsfield Units (HU), Optical Density (OD), and Gray Level [120, 121]. The X-ray Attenuation Coefficient was Measured at a Photon Energy of 103.2 keV [121].

Tissue type | Density (gm/cm^3) | X-ray attenuation (cm^-1) | Hounsfield units | Optical density | Gray level (brightness) | Appearance in image
lung  | < 0.001 | lower  | low [-800, -700]   | high   | low    | dark
liver | 1.2     | 0.18   | medium [50, 70]    | medium | medium | gray
bone  | 1.9     | higher | high [+800, +1000] | low    | high   | white
An M × N matrix has M rows and N columns; its height is M and its width is N; numbering of the elements starts with (1, 1) at the top-left corner and ends with (M, N) at the lower-right corner of the image. A function of space f(x, y) that has been converted into a digital representation f(m, n) is typically placed in the first quadrant in the Cartesian coordinate system. Then, an M × N image will have a width of M and a height of N; indexing of the elements starts with (0, 0) at the origin at the bottom-left corner and ends with (M − 1, N − 1) at the upper-right corner of the image. Figure 2.4 illustrates the distinction between these two types of representation of an image. Observe that the size of a matrix is expressed as rows × columns, whereas the size of an image is usually expressed as width × height.

[Figure 2.4 shows f(x, y) as a 4 × 3 function of space in the first quadrant, indexed from (0, 0) at the bottom-left corner, and f(m, n) as a 3 × 4 matrix (in the fourth quadrant), indexed from (1, 1) at the top-left corner.]
FIGURE 2.4
Array and matrix representation of an image.
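The distinction between the two conventions amounts to a coordinate transformation. The small sketch below (a hypothetical helper, not from the book) maps a 1-indexed matrix entry (m, n) to the corresponding 0-indexed Cartesian pixel (x, y) of an image of the given height, with the origin at the bottom-left corner:

```python
def matrix_to_cartesian(m, n, height):
    """Map a 1-indexed matrix entry (row m, column n) to a 0-indexed
    Cartesian coordinate (x, y) with the origin at the bottom-left."""
    return (n - 1, height - m)

height = 3  # a 3x4 matrix holds an image 4 pixels wide and 3 pixels high
assert matrix_to_cartesian(1, 1, height) == (0, 2)  # top-left matrix entry
assert matrix_to_cartesian(3, 4, height) == (3, 0)  # bottom-right matrix entry
```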

2.4 Optical Density


The value of a picture element or cell, commonly known as a pixel, or occasionally as a pel, in an image may be expressed in terms of a physical attribute such as temperature, density, or X-ray attenuation coefficient; the intensity of light reflected from the body at the location corresponding to the pixel; or the transmittance at the corresponding location on a film rendition of the image. The last one of the options listed above is popular in medical imaging due to the common use of film as the medium for acquisition and
display of images. The OD at a spot on a film is defined as

    OD = log_10 (I_i / I_o),    (2.5)

where I_i is the intensity of the light input and I_o is the intensity of the light transmitted through the film at the spot of interest; see Figure 2.5. A perfectly clear spot will transmit all of the light that is input and will have OD = 0; a dark spot that reduces the intensity of the input light by a factor of 1,000 will have OD = 3. X-ray films, in particular those used in mammography, are capable of representing gray levels from OD ≈ 0 to OD ≈ 3.5.

[Figure 2.5 depicts light of intensity I_i incident on a film (transparency) bearing the image, with intensity I_o transmitted through it.]
FIGURE 2.5
Measurement of the optical density at a spot on a film or transparency using a laser microdensitometer.
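Equation 2.5 can be applied directly; a minimal sketch with assumed intensity values:

```python
import math

def optical_density(i_in, i_out):
    """OD = log10(Ii / Io) for input and transmitted light intensities."""
    return math.log10(i_in / i_out)

assert optical_density(1.0, 1.0) == 0.0                 # perfectly clear spot
assert abs(optical_density(1000.0, 1.0) - 3.0) < 1e-12  # attenuation by 1,000
```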

2.5 Dynamic Range


The dynamic range of an imaging system or a variable is its range or gamut of operation, usually limited to the portion of linear response, and is expressed as the maximum minus the minimum value of the variable or parameter of interest. The dynamic range of an image is usually expressed as the difference between the maximum and minimum values present in the image. X-ray films for mammography typically possess a dynamic range of 0–3.5 OD. Modern CRT monitors provide dynamic range of the order of 0–600 cd/m^2 in luminance, or 1:1,000 in sampled gray levels.
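For a digital image, the definition above reduces to a difference of extreme pixel values; a minimal sketch with made-up pixel data:

```python
def dynamic_range(image):
    """Maximum minus minimum pixel value present in a 2D image (list of rows)."""
    pixels = [p for row in image for p in row]
    return max(pixels) - min(pixels)

image = [[12, 40, 200],
         [7, 128, 255]]
assert dynamic_range(image) == 248  # 255 - 7
```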
Figure 2.6 compares the characteristic curves of two devices. Device A has a larger slope or "gamma" (see Section 4.4.3) than Device B, and hence can provide higher contrast (defined in Section 2.6). Device B has a larger latitude, or breadth of exposure and optical density over which it can operate, than Device A. Plots of film density versus the log of (X-ray) exposure are known as Hurter–Driffield or H-D curves [3].

[Figure 2.6 plots optical density (up to about 3.0) against log(exposure) for Devices A and B, marking the toe, linear, shoulder, and saturation regions, and the background (base, fog, noise) level.]

FIGURE 2.6
Characteristic response curves of two hypothetical imaging devices.

The lower levels of response of a film or electronic display device are affected by a background level that could include the base level of the medium
or operation of the device as well as noise. The response of a device typically
begins with a nonlinear \toe" region before it reaches its linear range of oper-
ation. Another nonlinear region referred to as the \shoulder" region leads to
the saturation level of the device. It is desirable to operate within the linear
range of a given device.

Air in the lungs and bowels, as well as fat in various organs including the
breast, tend to extend the dynamic range of images toward the lower end of
the density scale. Bone, calcifications in the breast and in tumors, as well as metallic implants such as screws in bones and surgical clips contribute to high-density areas in images. Mammograms are expected to possess a dynamic range of 0–3.5 OD. CT images may have a dynamic range of about −1,000 to +1,000 HU. Metallic implants could have HU values beyond the operating range of CT systems, and lead to saturated areas in images: the X-ray beam is effectively stopped by heavy-metal implants.

2.6 Contrast
Contrast is defined in a few different ways [9], but is essentially the difference
between the parameter imaged in a region of interest (ROI) and that in a
suitably defined background. If the image parameter is expressed in OD,
contrast is defined as

C_{OD} = f_{OD} - b_{OD},    (2.6)

where f_{OD} and b_{OD} represent the foreground ROI and background OD, re-
spectively. Figure 2.7 illustrates the notion of contrast using circular ROIs.

FIGURE 2.7
Illustration of the notion of contrast, comparing a foreground region f with
its background b.

When the image parameter has not been normalized, the measure of con-
trast will require normalization. If, for example, f and b represent the average
light intensities emitted or reflected from the foreground ROI and the back-
ground, respectively, contrast may be defined as

C = \frac{f - b}{f + b},    (2.7)

or as

C_1 = \frac{f - b}{b}.    (2.8)
Due to the use of a reference background, the measures defined above are
often referred to as "simultaneous contrast". It should be observed that the
contrast of a region or an object depends not only upon its own intensity, but
also upon that of its background. Furthermore, the measure is not simply a
difference, but a ratio. The human visual system (HVS) has bandpass filter
characteristics, which lead to responses that are proportional to differences
between illumination levels rather than to absolute illumination levels [122].
Example: The two squares in Figure 2.8 are of the same value (130 on the
scale 0–255), but are placed on two different background regions of value
150 on the left and 50 on the right. The lighter background on the left makes
the inner square region appear darker than the corresponding inner square
on the right. This effect could be explained by the measure of simultaneous
contrast: the contrast of the inner square on the left, using the definition in
Equation 2.8, is

C_l = \frac{130 - 150}{150} = -0.1333,    (2.9)

whereas that for the inner square on the right is

C_r = \frac{130 - 50}{50} = +1.6.    (2.10)

The values of C_l and C_r using the definition in Equation 2.7 are, respectively,
-0.0714 and +0.444; the advantage of this formulation is that the values of
contrast are limited to the range [-1, 1]. The negative contrast value for
the inner square on the left indicates that it is darker than the background,
whereas it is the opposite for that on the right. (By covering the background
regions and viewing only the two inner squares simultaneously, it will be seen
that the gray levels of the latter are indeed the same.)
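The computations in Equations 2.9 and 2.10 can be checked with a short sketch; the function names below are hypothetical helpers, not from the text:

```python
def contrast_bounded(f, b):
    """Simultaneous contrast per Equation 2.7; limited to the range [-1, 1]."""
    return (f - b) / (f + b)

def contrast(f, b):
    """Simultaneous contrast per Equation 2.8."""
    return (f - b) / b

# The two inner squares of Figure 2.8: value 130 on backgrounds 150 and 50.
C_left = contrast(130, 150)     # negative: darker than its background
C_right = contrast(130, 50)     # positive: brighter than its background
```

Note that `contrast` is unbounded above (here +1.6), whereas `contrast_bounded` stays within [-1, 1].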
Just-noticeable difference: The concept of just-noticeable difference
(JND) is important in analyzing contrast, visibility, and the quality of medical
images. JND is determined as follows [9, 122]: For a given background level
b as in Equation 2.8, the value of an object in the foreground f is increased
gradually from the same level as b to a level at which the object is just perceived.
The value (f - b)/b at the level of minimal perception of the object is the JND
for the background level b. The experiment should, ideally, be repeated many
times for the same observer, and also repeated for several observers. Exper-
iments have shown that the JND is almost constant, at approximately 0.02
or 2%, over a wide range of background intensity; this is known as Weber's
law [122].
Example: The five bars in Figure 2.9 have intensity values of (from left to
right) 155, 175, 195, 215, and 235. The bars are placed on a background of
150. The contrast of the first bar (to the left), according to Equation 2.8, is

C_l = \frac{155 - 150}{150} = +0.033.    (2.11)

This contrast value is slightly greater than the nominal JND; the object should
be barely perceptible to most observers. The contrast values of the remaining
four bars are more than adequate for clear perception.
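Weber's law suggests a simple perceptibility test. The sketch below is a crude rule of thumb, not a model of the HVS; the helper name and the 2% threshold constant are assumptions taken from the discussion above:

```python
WEBER_JND = 0.02  # Weber's law: JND of approximately 2% over a wide range

def perceptible(f, b, jnd=WEBER_JND):
    """True when the contrast of Equation 2.8 exceeds the nominal JND."""
    return (f - b) / b > jnd

# The five bars of Figure 2.9 on a background of 150.
background = 150
bars = [155, 175, 195, 215, 235]
flags = [perceptible(v, background) for v in bars]   # all perceptible
```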
Example: Calcifications appear as bright spots in mammograms. A cal-
cification that appears against fat and low-density tissue may possess high

FIGURE 2.8
Illustration of the e ect of the background on the perception of an object
(simultaneous contrast). The two inner squares have the same gray level of
130, but are placed on di erent background levels of 150 on the left and 50
on the right.

FIGURE 2.9
Illustration of the notion of just-noticeable di erence. The ve bars have
intensity values of (from left to right) 155, 175, 195, 215, and 235, and are
placed on a background of 150. The rst bar is barely noticeable the contrast
of the bars increases from left to right.
contrast and be easily visible. On the other hand, a similar calcification that
appears against a background of high-density breast tissue, or a calcification
that is present within a high-density tumor, could possess low contrast, and
be difficult to detect. Figure 2.10 shows a part of a mammogram with several
calcifications appearing against different background tissue patterns and den-
sity. The various calcifications in this image present different levels of contrast
and visibility.
Small calcifications and masses situated amidst high-density breast tissue
could present low contrast close to the JND in a mammogram. Such features
present significant challenges in a breast cancer screening situation. Enhance-
ment of the contrast and visibility of such features could assist in improving
the accuracy of detecting early breast cancer [123, 124, 125]; see Sections 4.9.1
and 12.10.

2.7 Histogram
The dynamic range of the gray levels in an image provides global information
on the extent or spread of intensity levels across the image. However, the dy-
namic range does not provide any information on the existence of intermediate
gray levels in the image. The histogram of an image provides information on
the spread of gray levels over the complete dynamic range of the image across
all pixels in the image.
Consider an image f(m, n) of size M × N pixels, with gray levels l =
0, 1, 2, \ldots, L - 1. The histogram of the image may be defined as

P_f(l) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \delta_d[f(m, n) - l], \quad l = 0, 1, 2, \ldots, L - 1,    (2.12)

where the discrete unit impulse function or delta function is defined as [1, 2]

\delta_d(k) = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{otherwise.} \end{cases}    (2.13)

The histogram value P_f(l) provides the number of pixels in the image f
that possess the gray level l. The sum of all the entries in a histogram equals
the total number of pixels in the image:

\sum_{l=0}^{L-1} P_f(l) = MN.    (2.14)

The area under the function P_f(l), when multiplied with an appropriate scal-
ing factor, provides the total intensity, density, or brightness of the image,
depending upon the physical parameter represented by the pixel values.

FIGURE 2.10
Part of a mammogram with several calcications associated with malignant
breast disease. The density of the background a ects the contrast and visi-
bility of the calcications. The image has 768  512 pixels at a resolution of
62 m the true width of the image is about 32 mm.
A histogram may be normalized by dividing its entries by the total number
of pixels in the image. Then, with the assumption that the total number of
pixels is large and that the image is a typical representative of its class or the
process that generates images of its kind, the normalized histogram may be
taken to represent the PDF p_f(l) of the image-generating process:

p_f(l) = \frac{1}{MN} P_f(l).    (2.15)

It follows that

\sum_{l=0}^{L-1} p_f(l) = 1.    (2.16)
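Equations 2.12 through 2.16 translate directly into code. The sketch below counts gray levels in a toy image; the image values and function names are illustrative, not taken from the text:

```python
def histogram(image, L):
    """Histogram per Equation 2.12: count the pixels at each gray level l."""
    P = [0] * L
    for row in image:
        for pixel in row:
            P[pixel] += 1
    return P

def normalized_histogram(image, L):
    """Normalized histogram (Equation 2.15), an estimate of the PDF p_f(l)."""
    M, N = len(image), len(image[0])
    return [count / (M * N) for count in histogram(image, L)]

# Toy 2 x 3 image with L = 4 gray levels.
image = [[0, 1, 1],
         [2, 1, 0]]
P = histogram(image, 4)               # counts per gray level
p = normalized_histogram(image, 4)    # sums to 1, per Equation 2.16
```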
Example: The histogram of the image in Figure 1.3 is shown in Figure 2.11.
It is seen that most of the pixels in the image lie in the narrow range of 70–150
out of the available range of 0–255. The effective dynamic range of the image
may be taken to be 70–150, rather than 0–255. This agrees with the dull
and low-contrast appearance of the image. The full available range of gray
levels has not been utilized in the image, which could be due to poor lighting
and image acquisition conditions, or due to the nature of the object being
imaged.
The gray level of the large, blank background in the image in Figure 1.3 is
in the range 80–90: the peak in the histogram corresponds to the general
background range. The relatively bright areas of the myocyte itself have gray
levels in the range 100–130. The histogram of the myocyte image is almost
unimodal; that is, it has only one major peak. The peak happens to represent
the background in the image rather than the object of interest.
Example: Figure 2.12 (a) shows the histogram of the image in Figure 1.5
(b). The discrete spikes are due to noise in the image. The histogram of the
image after smoothing, using the 3 × 3 mean filter and rounding the results to
integers, is shown in part (b) of the figure. The histogram of the filtered image
is bimodal, with two main peaks spanning the gray-level ranges 100–180 and
180–255, representing the collagen fibers and background, respectively. Most
of the pixels corresponding to the collagen fibers in cross-section have gray
levels below about 170; most of the brighter background pixels have values
greater than 200.
Example: Figure 2.13 shows a part of a mammogram with a tumor. The
normalized histogram of the image is shown in Figure 2.14. It is seen that the
histogram has two large peaks in the range 0–20, representing the background
in the image with no breast tissue. Although the image has bright areas, the
number of pixels occupying the high gray levels in the range 200–255 is
insignificant.
Example: Figure 2.15 shows a CT image of a two-year-old male patient
with neuroblastoma (see Section 9.9 for details). The histogram of the image
is shown in Figure 2.16 (a). The histogram of the entire CT study of the
patient, including 75 sectional images, is shown in Figure 2.16 (b). Observe

[Plot: number of pixels versus gray level.]
FIGURE 2.11
Histogram of the image of the ventricular myocyte in Figure 1.3. The size of
the image is 480 × 480 = 230,400 pixels. Entropy H = 4.96 bits.
[Plots (a) and (b): number of pixels versus gray level.]
FIGURE 2.12
(a) Histogram of the image of the collagen fibers in Figure 1.5 (b); H =
7.0 bits. (b) Histogram of the image after the application of the 3 × 3 mean
filter and rounding the results to integers; H = 7.1 bits.

FIGURE 2.13
Part of a mammogram with a malignant tumor (the relatively bright region
along the upper-left edge of the image). The size of the image is 700 × 700 =
490,000 pixels. The pixel resolution is 62 μm; the width of the image is about
44 mm. Image courtesy of Foothills Hospital, Calgary.

[Plot: probability of occurrence versus gray level.]
FIGURE 2.14
Normalized histogram of the mammogram in Figure 2.13. Entropy H =
6.92 bits.

that the unit of the pixel variable in the histograms is HU; however, the gray-
level values in the image have been scaled for display in Figure 2.15, and do
not directly correspond to the HU values. The histograms are multimodal,
indicating the presence of several types of tissue in the CT images. The peaks
in the histogram in Figure 2.16 (a) in the range 50–150 HU correspond to liver
and other abdominal organs and tissues. The small peak in the range 200–
300 HU in the same histogram corresponds to calcified parts of the tumor.
The histogram of the full volume includes a small peak in the range 700–
800 HU corresponding to bone [not shown in Figure 2.16 (b)]. Histograms of
this nature provide information useful in diagnosis as well as in the follow-up
of the effect of therapy. Methods for the analysis of histograms for application
in neuroblastoma are described in Section 9.9.

2.8 Entropy
The distribution of gray levels over the full available range is represented
by the histogram. The histogram provides quantitative information on the

FIGURE 2.15
CT image of a patient with neuroblastoma. Only one sectional image out of a
total of 75 images in the study is shown. The size of the image is 512 × 512 =
262,144 pixels. The tumor, which appears as a large circular region on the left-
hand side of the image, includes calcified tissues that appear as bright regions.
The HU range of [-200, 400] has been linearly mapped to the display range
of [0, 255]; see also Figures 2.16 and 4.4. Image courtesy of Alberta Children's
Hospital, Calgary.
[Plots (a) and (b): number of pixels (a) and number of voxels (b) versus Hounsfield units, -200 to 400.]
FIGURE 2.16
(a) Histogram of the CT section image in Figure 2.15. (b) Histogram of the
entire CT study of the patient, with 75 sectional images. The histograms are
displayed for the range HU = [-200, 400] only.
probability of occurrence of each gray level in the image. However, it is often
desirable to express, in a single quantity, the manner in which the values of a
histogram or PDF vary over the full available range. Entropy is a statistical
measure of information that is commonly used for this purpose [9, 11, 126,
127, 128, 129].
The various pixels in an image may be considered to be symbols produced by
a discrete information source with the gray levels as its states. Let us consider
the occurrence of L gray levels in an image, with the probability of occurrence
of the lth gray level being p(l), l = 0, 1, 2, \ldots, L - 1. Let us also treat the gray
level of a pixel as a random variable. A measure of information conveyed by an
event (a pixel or a gray level) may be related to the statistical uncertainty of
the event giving rise to the information, rather than the semantic or structural
content of the signal or image. Given the unlimited scope of applications of
imaging and the context-dependent meaning conveyed by images, a statistical
approach as above is appropriate to serve the general purpose of analysis of
the information content of images.
A measure of information h(p) should be a function of p(l), satisfying the
following criteria [9, 11, 126, 127]:
- h(p) should be continuous for 0 < p < 1.
- h(p) = \infty for p = 0: a totally unexpected event conveys maximal infor-
mation when it does indeed occur.
- h(p) = 0 for p = 1: an event that is certain to occur does not convey
any information.
- h(p_2) > h(p_1) if p_2 < p_1: an event with a lower probability of occurrence
conveys more information when it does occur than an event with a higher
probability of occurrence.
- If two statistically independent image processes (or pixels) f and g are
considered, the joint information of the two sources is given by the sum
of their individual measures of information: h_{fg} = h_f + h_g.
These requirements are met by h(p) = -\log(p).
When a source generates a number of gray levels with different probabilities,
a measure of average information or entropy is defined as the expected value
of information contained in each possible level:

H = \sum_{l=0}^{L-1} p(l)\, h[p(l)].    (2.17)

Using -\log_2 in place of h, we obtain the commonly used definition of entropy
as

H = -\sum_{l=0}^{L-1} p(l) \log_2[p(l)] \ \text{bits}.    (2.18)
Because the gray levels are considered as individual entities in this definition,
that is, no neighboring elements are taken into account, the result is known
as the zeroth-order entropy [130].
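Equation 2.18 can be sketched as follows; the `p > 0` guard reflects the standard convention that terms with p(l) = 0 contribute nothing to the sum:

```python
import math

def entropy(pdf):
    """Zeroth-order entropy in bits, per Equation 2.18."""
    return -sum(p * math.log2(p) for p in pdf if p > 0)

# A uniform PDF over L = 2^K levels attains the maximum H = K bits
# (Equation 2.19); here L = 8, so H = 3 bits.
H_uniform = entropy([1 / 8] * 8)
```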
The entropies of the images in Figures 1.3 and 2.13, with the corresponding
histograms or PDFs in Figures 2.11 and 2.14, are 4.96 and 6.92 bits, respectively.
Observe that the histogram in Figure 2.14 has a broader spread than that in
Figure 2.11, which accounts for the correspondingly higher entropy.
Differentiating the function in Equation 2.18 with respect to p(l), it can
be shown that the maximum possible entropy occurs when all the gray levels
occur with the same probability (equal to 1/L), that is, when the various gray
levels are equally likely:

H_{max} = -\sum_{l=0}^{L-1} \frac{1}{L} \log_2\left(\frac{1}{L}\right) = \log_2 L.    (2.19)

If the number of gray levels in an image is 2^K, then H_{max} is K bits; the
maximum possible entropy of an image with 8-bit pixels is 8 bits.
It should be observed that entropy characterizes the statistical information
content of a source based upon the PDF of the constituent events, which are
treated as random variables. When an image is characterized by its entropy,
it is important to recognize that the measure is not sensitive to the pictorial,
structural, semantic, or application-specific (diagnostic) information in the
image. Entropy does not account for the spatial distribution of the gray levels
in a given image. Regardless, the entropy of an image is an important measure
because it gives a summarized measure of the statistical information content of
an image, an image-generating source, or an information source characterized
by a PDF, as well as the lower bound on the noise-free transmission rate and
storage capacity requirements.
Properties of entropy: A few important properties of entropy [9, 11,
126, 127, 128, 129] are as follows:
- H_p \geq 0, with H_p = 0 only for p = 0 or p = 1: no information is conveyed
by events that do not occur or occur with certainty.
- The joint information H(p_1, p_2, \ldots, p_n) conveyed by n events, with probabil-
ities of occurrence p_1, p_2, \ldots, p_n, is governed by H(p_1, p_2, \ldots, p_n) \leq \log(n),
with equality if and only if p_i = \frac{1}{n} for i = 1, 2, \ldots, n.
- Considering two images or sources f and g with PDFs p_f(l_1) and p_g(l_2),
where l_1 and l_2 represent gray levels in the range [0, L - 1], the average
joint information or joint entropy is

H_{fg} = -\sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p_{fg}(l_1, l_2) \log_2[p_{fg}(l_1, l_2)].    (2.20)
If the two sources are statistically independent, the joint PDF p_{fg}(l_1, l_2)
reduces to p_f(l_1)\, p_g(l_2). Joint entropy is governed by the condition
H_{fg} \leq H_f + H_g, with equality if and only if f and g are statistically
independent.
- The conditional entropy of an image f given that another image g has
been observed is

H_{f|g} = -\sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p_g(l_2)\, p_{f|g}(l_1, l_2) \log_2[p_{f|g}(l_1, l_2)]
        = -\sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p_{fg}(l_1, l_2) \log_2[p_{f|g}(l_1, l_2)],    (2.21)

where p_{f|g}(l_1, l_2) is the conditional PDF of f given g. Then, H_{f|g} =
H_{fg} - H_g \leq H_f, with equality if and only if f and g are statistically
independent. Note: The conditional PDF of f given g is defined as [128, 129]

p_{f|g}(l_1, l_2) = \begin{cases} \frac{p_{fg}(l_1, l_2)}{p_g(l_2)} & \text{if } p_g(l_2) > 0 \\ 1 & \text{otherwise.} \end{cases}    (2.22)
Higher-order entropy: The formulation of entropy as a measure of in-
formation is based upon the premise that the various pixels in an image may
be considered to be symbols produced by a discrete information source with
the gray levels as its states. From the discussion above, it follows that the
definition of entropy in Equation 2.18 assumes that the successive pixels pro-
duced by the source are statistically independent. While governed by the
limit H_{max} = K bits, the entropy of a real-world image (with K bits per
pixel) encountered in practice could be considerably lower, due to the fact
that neighboring pixels in most real images are not independent of one an-
other. For this reason, it is desirable to consider sequences of pixels to
estimate the true entropy or information content of a given image.
Let p(\{l_n\}) represent the probability of occurrence of the sequence \{l_0, l_1,
l_2, \ldots, l_n\} of gray levels in the image f. The nth-order entropy of f is defined
as

H_n = -\frac{1}{n+1} \sum_{\{l_n\}} p(\{l_n\}) \log_2[p(\{l_n\})],    (2.23)

where the summation is over all possible sequences \{l_n\} with (n + 1) pix-
els. (Note: Some variations exist in the literature regarding the definition of
higher-order entropy. In the definition given above, n refers to the number
of neighboring or additional elements considered, not counting the initial or
zeroth element; this is consistent with the definition of the zeroth-order en-
tropy in Equation 2.18.) H_n is a monotonically decreasing function of n, and
approaches the true entropy of the source as n \to \infty [9, 126, 127].
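As a sketch of Equation 2.23, using a 1D scan line of hypothetical pixel values rather than a full image, the first-order estimate of a correlated sequence falls below its zeroth-order entropy:

```python
import math
from collections import Counter

def entropy_order_n(sequence, n):
    """n-th order entropy (Equation 2.23), estimated from the relative
    frequencies of (n+1)-pixel tuples in a 1D sequence."""
    tuples = [tuple(sequence[i:i + n + 1]) for i in range(len(sequence) - n)]
    total = len(tuples)
    H = -sum((c / total) * math.log2(c / total)
             for c in Counter(tuples).values())
    return H / (n + 1)

# A correlated toy scan line: neighbors tend to repeat.
pixels = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
H0 = entropy_order_n(pixels, 0)   # 1 bit: zeros and ones equally likely
H1 = entropy_order_n(pixels, 1)   # lower: pair statistics capture dependence
```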
Mutual information: A measure that is important in the analysis of
transmission of images over a communication system, as well as in the analysis
of storage in and retrieval from an archival system, with potential loss of
information, is mutual information, defined as

I_{fg} = H_f + H_g - H_{fg} = H_f - H_{f|g} = H_g - H_{g|f}.    (2.24)

This measure represents the information received or retrieved, with the follow-
ing explanation: H_f is the information input to the transmission or archival
system in the form of the image f. H_{f|g} is the information about f given that
the received or retrieved image g has been observed. (In this analysis, g is
taken to be known, but f is considered to be unknown, although g is expected
to be a good representation of f.) Then, if g is completely correlated with f,
we have H_{f|g} = 0, and I_{fg} = H_f: this represents the case where there is no
loss or distortion in image transmission and reception (or in image storage and
retrieval). If g is independent of f, H_{f|g} = H_f, and I_{fg} = 0: this represents
the situation where there is complete loss of information in the transmission
or archival process.
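The identities relating joint entropy, conditional entropy, and mutual information can be verified on a small joint PDF; the two-level joint distribution below is a hypothetical example:

```python
import math

def entropy(pdf):
    """Zeroth-order entropy in bits (Equation 2.18)."""
    return -sum(p * math.log2(p) for p in pdf if p > 0)

def joint_entropy(joint):
    """Joint entropy of two sources (Equation 2.20)."""
    return -sum(p * math.log2(p) for row in joint for p in row if p > 0)

# Hypothetical joint PDF p_fg(l1, l2) for two partially correlated images.
p_fg = [[0.4, 0.1],
        [0.1, 0.4]]
p_f = [sum(row) for row in p_fg]               # marginal PDF of f
p_g = [sum(col) for col in zip(*p_fg)]         # marginal PDF of g
H_f, H_g, H_fg = entropy(p_f), entropy(p_g), joint_entropy(p_fg)
H_f_given_g = H_fg - H_g                       # conditional entropy (Eq. 2.21)
I_fg = H_f + H_g - H_fg                        # mutual information (Eq. 2.24)
```

With complete correlation (a diagonal joint PDF), I_fg equals H_f; with independence, it is zero.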
Entropy and mutual information are important concepts that are useful in
the design and analysis of image archival, coding, and communication systems;
this topic is discussed in Chapter 11.

2.9 Blur and Spread Functions


Several components of image acquisition systems cause blurring due to intrin-
sic and practical limitations. The simplest visualization of blurring is provided
by using a single, ideal point to represent the object being imaged; see Fig-
ure 2.17 (a). Mathematically, an ideal point is represented by the continuous
unit impulse function or the Dirac delta function \delta(x, y), defined as [1, 2, 131]

\delta(x, y) = \begin{cases} \text{undefined} & \text{at } x = 0, y = 0 \\ 0 & \text{otherwise,} \end{cases}    (2.25)

and

\int_{x=-\infty}^{\infty} \int_{y=-\infty}^{\infty} \delta(x, y)\, dx\, dy = 1.    (2.26)
Note: The 1D Dirac delta function \delta(x) is defined in terms of its action within
an integral as [3]

\int_a^b f(x)\, \delta(x - x_o)\, dx = \begin{cases} f(x_o) & \text{if } a < x_o < b \\ 0 & \text{otherwise,} \end{cases}    (2.27)

where f(x) is a function that is continuous at x_o. This is known as the sifting
property of the delta function, because the value of the function f(x) at the location

FIGURE 2.17
(a) An ideal point source. (b) A Gaussian-shaped point spread function.

x_o of the delta function is sifted or selected from all of its values. The expression
may be extended to all x as

f(x) = \int_{\alpha=-\infty}^{\infty} f(\alpha)\, \delta(x - \alpha)\, d\alpha,    (2.28)

which may also be interpreted as resolving the arbitrary signal f(x) into a weighted
combination of mutually orthogonal delta functions. A common definition of the
delta function is in terms of its integrated strength as

\int_{-\infty}^{\infty} \delta(x)\, dx = 1,    (2.29)

with the conditions

\delta(x) = \begin{cases} \text{undefined} & \text{at } x = 0 \\ 0 & \text{otherwise.} \end{cases}    (2.30)

The delta function is also defined as the limiting condition of several ordinary func-
tions, one of which is

\delta(x) = \lim_{\tau \to 0} \frac{1}{2\tau} \exp\left(-\frac{|x|}{\tau}\right).    (2.31)

The delta function may be visualized as the limit of a function with a sharp peak
of undefined value, whose integral over the full extent of the independent variable is
maintained as unity while its temporal or spatial extent is compressed toward zero.
The image obtained when the input is a point or impulse function is known
as the impulse response or point spread function (PSF); see Figure 2.17 (b).
Assuming the imaging system to be linear and shift-invariant (or position-
invariant or space-invariant, abbreviated as LSI), the image g(x, y) of an ob-
ject f(x, y) is given by the 2D convolution integral [8, 9, 11, 131]

g(x, y) = \int_{\alpha=-\infty}^{\infty} \int_{\beta=-\infty}^{\infty} h(x - \alpha, y - \beta)\, f(\alpha, \beta)\, d\alpha\, d\beta
        = \int_{\alpha=-\infty}^{\infty} \int_{\beta=-\infty}^{\infty} h(\alpha, \beta)\, f(x - \alpha, y - \beta)\, d\alpha\, d\beta
        = h(x, y) * f(x, y),    (2.32)

where h(x, y) is the PSF, \alpha and \beta are temporary variables of integration, and
* represents 2D convolution.
(Note: For details on the theory of linear systems and convolution, refer to
Lathi [1], Oppenheim et al. [2], Oppenheim and Schafer [7], and Gonzalez and
Woods [8]. In extending the concepts of LSI system theory from time-domain
signals to the space domain of images, it should be observed that causality is
not a matter of concern in most applications of image processing.)
Some examples of the causes of blurring are:
- Focal spot: The physical spot on the anode (target) that generates
X rays is not an ideal dimensionless point, but has finite physical di-
mensions and an area of the order of 1–2 mm². Several straight-line
paths would then be possible from the X-ray source, through a given
point in the object being imaged, and on to the film. The image so
formed will include not only the main radiographic shadow (the "um-
bra"), but also an associated blur (the "penumbra"), as illustrated in
Figure 2.18. The penumbra causes blurring of the image.
- Thickness of screen or crystal: The screen used in screen-film X-
ray imaging and the scintillation crystal used in gamma-ray imaging
generate visible light when struck by X or gamma rays. Due to the
finite thickness of the screen or crystal, a point source of light within the
detector will be sensed over a wider region on the film (see Figure 1.10)
or by several PMTs (see Figure 1.25): the thicker the crystal or screen,
the worse the blurring effect caused as above.
- Scattering: Although it is common to assume straight-line propagation
of X or gamma rays through the body or object being imaged, this is not
always the case in reality. X, gamma, and ultrasound rays do indeed
get scattered within the body and within the detector. The effect of
rays that are scattered to a direction that is significantly different from
the original path will likely be perceived as background noise. However,
scattering to a smaller extent may cause unsharp edges and blurring in
a manner similar to those described above.
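The blurring model of Equation 2.32 can be sketched with a discrete 2D convolution; the PSF values below are hypothetical. By the sifting property, an ideal point (a discrete impulse) is reproduced as the PSF itself:

```python
def convolve2d(f, h):
    """Discrete, zero-padded 2D convolution g = h * f (cf. Equation 2.32)."""
    M, N = len(f), len(f[0])
    K, L = len(h), len(h[0])
    g = [[0.0] * (N + L - 1) for _ in range(M + K - 1)]
    for m in range(M):
        for n in range(N):
            for k in range(K):
                for l in range(L):
                    g[m + k][n + l] += h[k][l] * f[m][n]
    return g

# An impulse image and a hypothetical unit-sum 3 x 3 PSF.
impulse = [[0, 0, 0],
           [0, 1, 0],
           [0, 0, 0]]
psf = [[0.05, 0.1, 0.05],
       [0.1,  0.4, 0.1],
       [0.05, 0.1, 0.05]]
blurred = convolve2d(impulse, psf)   # a shifted copy of the PSF
```

Because the PSF sums to one, the total intensity of the image is preserved by the blurring.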
Point, line, and edge spread functions: In practice, it is often not
possible or convenient to obtain an image of an ideal point: a microscopic hole
in a sheet of metal may not allow adequate X-ray photons to pass through and
create a useful image; an infinitesimally small drop of a radiopharmaceutical
may not emit sufficient gamma-ray photons to record an appreciable image on
a gamma camera. However, it is possible to construct phantoms to represent
ideal lines or edges. For use in X-ray imaging, a line phantom may be created
[Diagram: X rays from an ideal point source and from a finite focal spot pass through the object being imaged; the finite spot produces an umbra with a surrounding penumbra (blur).]
FIGURE 2.18
The effect of a finite focal spot (the X-ray-generating portion of the target) on
the sharpness of the image of an object.

by cutting a narrow slit in a sheet of metal. In SPECT imaging, it is common
to use a thin plastic tube, with a diameter of the order of 1 mm and filled with
a radiopharmaceutical, to create a line source [86, 132]. Given that the spatial
resolution of a typical SPECT system is of the order of several mm, such a
phantom may be assumed to represent an ideal straight line with no thickness.
An image obtained of such a source is known as the line spread function (LSF)
of the system. Because any cross-section of an ideal straight line is a point
or impulse function, the reconstruction of a cross-section of a line phantom
provides the PSF of the system. Observe also that the integration of an ideal
point results in a straight line along the path of integration; see Figure 2.19.
In cases where the construction of a line source is not possible or appropri-
ate, one may prepare a phantom representing an ideal edge. Such a phantom
is easy to prepare for planar X-ray imaging: one needs to simply image the
(ideal and straight) edge of a sheet or slab made of a material with a higher
attenuation coefficient than that of the background or table upon which it is
placed when imaging. In the case of CT imaging, a 3D cube or parallelepiped
with its sides and edges milled to be perfect planes and straight lines, re-
spectively, may be used as the test object. A profile of the image of such
a phantom across the ideal edge provides the edge spread function (ESF) of
the system: see Figure 2.20; see also Section 2.15. The derivative of an edge
along the direction perpendicular to the edge is an ideal straight line; see Fig-

[Diagram: the point δ(x, y), the line f_l(x, y), and the edge f_e(x, y); integration maps the point to the line, and the line to the edge.]
FIGURE 2.19
The relationship between point (impulse function), line, and edge (step) im-
ages. The height of each function represents its strength.

ure 2.19. Therefore, the derivative of the ESF gives the LSF of the system.
Then, the PSF may be estimated from the LSF as described above.

[Diagram: intensity f(x) versus distance x; an ideal sharp edge steps from f(a) at x = a to f(b) at x = b, whereas a blurred or unsharp edge makes a gradual transition between the two levels.]
FIGURE 2.20
Blurring of an ideal sharp edge into an unsharp edge by an imaging system.

In practice, due to the presence of noise and artifacts, it would be desirable
to average several measurements of the LSF, which could be performed along
the length of the line or edge. If the imaging system is anisotropic, the LSF
should be obtained for several orientations of the line source. If the blur of
the system varies with the distance between the detector and the source, as is
the case in nuclear medicine imaging with a gamma camera, one should also
measure the LSF at several distances.
The mathematical relationships between the PSF, LSF, and ESF may be
expressed as follows [3, 9, 11]. Consider integration of the 2D delta function
along the x axis as follows:

f_l(x, y) = \int_{x=-\infty}^{\infty} \delta(x, y)\, dx
          = \int_{x=-\infty}^{\infty} \delta(x)\, \delta(y)\, dx
          = \delta(y) \int_{x=-\infty}^{\infty} \delta(x)\, dx
          = \delta(y).    (2.33)

The last integral above is equal to unity; the separability property of the 2D
impulse function as \delta(x, y) = \delta(x)\, \delta(y) has been used above. Observe that
although \delta(y) has been expressed as a function of y only, it represents a 2D
function of (x, y) that is independent of x in the present case. Considering
\delta(y) over the entire 2D (x, y) space, it becomes evident that it is a line function
that is placed on the x axis. The line function is thus given by an integral of
the impulse function (see Figure 2.19).
The output of an LSI system when the input is the line image f_l(x, y) =
\delta(y), that is, the LSF, which we shall denote here as h_l(x, y), is given by

h_l(x, y) = \int_{\alpha=-\infty}^{\infty} \int_{\beta=-\infty}^{\infty} h(\alpha, \beta)\, f_l(x - \alpha, y - \beta)\, d\alpha\, d\beta
          = \int_{\alpha=-\infty}^{\infty} \int_{\beta=-\infty}^{\infty} h(\alpha, \beta)\, \delta(y - \beta)\, d\alpha\, d\beta
          = \int_{\alpha=-\infty}^{\infty} h(\alpha, y)\, d\alpha
          = \int_{x=-\infty}^{\infty} h(x, y)\, dx.    (2.34)

In the equations above, h(x, y) is the PSF of the system, and the sifting
property of the delta function has been used. The final equation above shows
that the LSF is the integral (in this case, along the x axis) of the PSF. This
result also follows simply from the linearity of the LSI system and that of
the operation of integration: given that h(x, y) is the output due to \delta(x, y) as
the input, if the input is an integral of the delta function, the output will be
the corresponding integral of h(x, y). Observe that, in the present example,
h_l(x, y) is independent of x.
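Discretely, Equation 2.34 says that summing the PSF across one axis yields the LSF. A sketch with a hypothetical 3 × 3 PSF (rows indexed by y, columns by x):

```python
# Hypothetical unit-sum 3 x 3 PSF.
psf = [[0.05, 0.1, 0.05],
       [0.1,  0.4, 0.1],
       [0.05, 0.1, 0.05]]

# Equation 2.34: integrate (sum, discretely) along x to obtain the LSF h_l(y).
lsf = [sum(row) for row in psf]
```

The resulting LSF inherits the unit area of the PSF, as expected of a normalized spread function.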
Let us now consider the Fourier transform of h_l(x, y). Given that h_l(x, y) is
independent of x in the present illustration, we may write it as a 1D function
h_l(y); correspondingly, its Fourier transform will be a 1D function, which we
shall express as H_l(v). Then, we have

H_l(v) = \int_{y=-\infty}^{\infty} h_l(y) \exp(-j 2\pi v y)\, dy
       = \int_{y=-\infty}^{\infty} dy \int_{x=-\infty}^{\infty} dx\; h(x, y) \exp[-j 2\pi (ux + vy)]\Big|_{u=0}
       = H(u, v)\Big|_{u=0}
       = H(0, v),    (2.35)

where H(u, v) is the 2D Fourier transform of h(x, y) (see Sections 2.11 and
2.12). This shows that the Fourier transform of the LSF gives the values of
the Fourier transform of the PSF along a line in the 2D Fourier plane (in this
case, along the v axis).
In a manner similar to the discussion above, let us consider integrating the
line function as follows:

f_e(x, y) = \int_{\beta=-\infty}^{y} f_l(x, \beta)\, d\beta
          = \int_{\beta=-\infty}^{y} \delta(\beta)\, d\beta.    (2.36)

The resulting function has the property

\forall x, \quad f_e(x, y) = \begin{cases} 1 & \text{if } y > 0 \\ 0 & \text{if } y < 0, \end{cases}    (2.37)

which represents an edge or unit step function that is parallel to the x axis
(see Figure 2.19). Thus, the edge or step function is obtained by integrating
the line function. It follows that the ESF is given by

h_e(y) = \int_{\beta=-\infty}^{y} h_l(\beta)\, d\beta.    (2.38)

Conversely, the LSF is the derivative of the ESF:

h_l(y) = \frac{d}{dy} h_e(y).    (2.39)

Thus the ESF may be used to obtain the LSF, which may further be used
to obtain the PSF and MTF as already explained. (Observe the use of the
generalized delta function to derive the discontinuous line and edge functions
in this section.)
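Equation 2.39 translates directly to sampled data: a measured edge profile can be differentiated numerically to estimate the LSF. The following sketch (not from the book; a NumPy illustration using an arbitrary synthetic Gaussian-blurred edge) shows the idea:

```python
import numpy as np
from math import erf

# Synthetic ESF: an ideal step edge blurred by a Gaussian PSF.
sigma = 2.0                          # assumed blur width, in samples
y = np.arange(-32, 33, dtype=float)  # sample positions across the edge
# The ESF of a Gaussian-blurred step is the Gaussian CDF.
esf = np.array([0.5 * (1.0 + erf(v / (sigma * np.sqrt(2.0)))) for v in y])

# Equation 2.39: the LSF is the derivative of the ESF;
# np.gradient applies central differences to the sampled profile.
lsf = np.gradient(esf, y)

peak_location = y[np.argmax(lsf)]    # should sit at the edge position (0)
```

The recovered profile peaks at the edge location and sums (approximately) to the total edge height, as expected of an LSF.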
In addition to the procedures and relationships described above, based upon
the Fourier slice theorem (see Section 9.2 and Figure 9.2), it can be shown
that the Fourier transform of a profile of the LSF is equal to the radial profile
of the Fourier transform of the PSF at the angle of placement of the line
source. If the imaging system may be assumed to be isotropic in the plane of
the line source, a single radial profile is adequate to reconstruct the complete
2D Fourier transform of the PSF. Then, an inverse 2D Fourier transform
provides the PSF. This method, which is essentially the Fourier method of
reconstruction from projections described in Section 9.2, was used by Hon et
al. [132] and Boulfelfel [86] to estimate the PSF of a SPECT system.
Example of application: In the work of Boulfelfel [86], a line source
was prepared using a plastic tube of internal radius 1 mm, filled with 1 mCi
(millicurie) of 99mTc. The phantom was imaged using a gamma camera at
various source-to-collimator distances, using an energy window of width
14 keV centered at 140 keV. Figure 2.21 shows a sample image of the line
source. Figure 2.22 shows a sample profile of the LSF and the averaged profile
obtained by averaging the 64 rows of the LSF image.

FIGURE 2.21
Nuclear medicine (planar) image of a line source obtained using a gamma
camera. The size of the image is 64 × 64 pixels, with an effective width of
100 mm. The pixel size is 1.56 mm.
It is common practice to characterize an LSF or PSF with its full width
at half the maximum (FWHM) value. Boulfelfel observed that the FWHM
of the LSF of the gamma cameras studied varied between 0.5 cm and 1.7 cm,
depending upon the radiopharmaceutical used, the source-to-collimator dis-
tance, and the intervening medium. The LSF was used to estimate the PSF
as explained above. The FWHM of the PSF of the SPECT system studied
was observed to vary between 1.3 cm and 3.0 cm.

FIGURE 2.22
Sample profile (dotted line) and averaged profile (solid line) obtained from
the image in Figure 2.21; the plot shows scaled counts versus distance in mm.
Either profile may be taken to represent the LSF of the gamma camera.
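FWHM values such as those quoted above can be obtained from a sampled profile by locating the two half-maximum crossings and interpolating linearly between samples. A minimal sketch; the helper name and the Gaussian test profile are illustrative assumptions, not from the book:

```python
import numpy as np

def fwhm(x, p):
    """Full width at half maximum of a single-peaked profile p(x),
    using linear interpolation at the half-maximum crossings."""
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate on the rising edge (between samples i0-1 and i0) ...
    xl = np.interp(half, [p[i0 - 1], p[i0]], [x[i0 - 1], x[i0]])
    # ... and on the falling edge (samples i1 and i1+1, in increasing-p order).
    xr = np.interp(half, [p[i1 + 1], p[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

# For a Gaussian LSF, FWHM = 2 * sqrt(2 ln 2) * sigma ~ 2.355 * sigma.
sigma = 5.0
x = np.arange(-50, 51, dtype=float)
profile = np.exp(-x ** 2 / (2 * sigma ** 2))
width = fwhm(x, profile)
```

For a Gaussian profile, the computed width approaches 2√(2 ln 2) σ ≈ 2.355 σ as the sampling becomes finer.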
See Section 2.12 for illustrations of the ESF and LSF of a CT imaging
system. See Chapter 10 for descriptions of methods for deblurring images.

2.10 Resolution

The spatial resolution of an imaging system or an image may be expressed in
terms of the following:

- The sampling interval (in, for example, mm or μm).
- The width of (a profile of) the PSF, usually FWHM (in mm).
- The size of the laser spot used to obtain the digital image by scanning
  an original film, or the size of the solid-state detector used to obtain the
  digital image (in μm).
- The smallest visible object or separation between objects in the image
  (in mm or μm).
- The finest grid pattern that remains visible in the image (in lp/mm).

The typical resolution limits of a few imaging systems are [6]:

- X-ray film: 25 − 100 lp/mm.
- Screen-film combination: 5 − 10 lp/mm.
- Mammography: up to 20 lp/mm.
- CT: 0.7 lp/mm.
- CT: 50 lp/mm or 10 μm.
- SPECT: < 0.1 lp/mm.

2.11 The Fourier Transform and Spectral Content

The Fourier transform is a linear, reversible transform that maps an image
from the space domain to the frequency domain. Converting an image from
the spatial to the frequency (Fourier) domain helps in assessing the spectral
content and energy distribution over frequency bands. Sharp edges in the
image domain are associated with large proportions of high-frequency content.
Oriented patterns in the space domain correspond to increased energy
in bands of frequency in the spectral domain with the corresponding orientation.
Simple geometric patterns such as rectangles and circles map to
recognizable functions in the frequency domain, such as the sinc and Bessel
functions, respectively. Transforming an image to the frequency domain assists
in the application of frequency-domain filters to remove noise, enhance
the image, or extract certain components that are better separated in the
frequency domain than in the space domain.
The 2D Fourier transform of an image f(x, y), denoted by F(u, v), is given
by [8, 9, 11, 131]

F(u, v) = \int_{x=-\infty}^{\infty} \int_{y=-\infty}^{\infty} f(x, y) \exp[-j 2 \pi (u x + v y)] \, dx \, dy.    (2.40)

The variables u and v represent frequency in the horizontal and vertical direc-
tions, respectively. (The frequency variable in image analysis is often referred
to as spatial frequency to avoid confusion with temporal frequency; we will,
however, not use this terminology in this book.) Recall that the complex expo-
nential is a combination of the 2D sine and cosine functions and is separable,
as

\exp[-j 2 \pi (u x + v y)] = \exp(-j 2 \pi u x) \exp(-j 2 \pi v y)
                           = [\cos(2 \pi u x) - j \sin(2 \pi u x)] \, [\cos(2 \pi v y) - j \sin(2 \pi v y)].    (2.41)
Images are typically functions of space; hence, the units of measurement
in the image domain are m, cm, mm, μm, etc. In the 2D Fourier domain,
the unit of frequency is cycles/mm, cycles/m, mm^{-1}, etc. Frequency is also
expressed as lp/mm. If the distance to the viewer is taken into account,
frequency could be expressed in terms of cycles/degree of the visual angle
subtended at the viewer's eye. The unit Hertz is not used in 2D Fourier
analysis.
In computing the Fourier transform, it is common to use the discrete Fourier
transform (DFT) via the fast Fourier transform (FFT) algorithm. The 2D
DFT of a digital image f(m, n) of size M × N pixels is defined as

F(k, l) = \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m, n) \exp\left[-j 2 \pi \left( \frac{mk}{M} + \frac{nl}{N} \right)\right].    (2.42)

For complete recovery of f(m, n) from F(k, l), the latter should be computed
for k = 0, 1, ..., M − 1, and l = 0, 1, ..., N − 1 at the minimum [7, 8, 9].
Then, the inverse transform gives back the original image with no error or
loss of information as

f(m, n) = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} F(k, l) \exp\left[+j 2 \pi \left( \frac{mk}{M} + \frac{nl}{N} \right)\right],    (2.43)
for m = 0, 1, ..., M − 1, and n = 0, 1, ..., N − 1. This expression may be
interpreted as resolving the given image into a weighted sum of mutually or-
thogonal exponential (or sinusoidal) basis functions. The eight sine functions,
for k = 0, 1, 2, ..., 7, that form the imaginary part of the basis functions of
the 1D DFT for M = 8 are shown in Figure 2.23. Figures 2.24 and 2.25 show
the first 64 cosine and sine basis functions (for k, l = 0, 1, 2, ..., 7) that are
the components of the 2D exponential function in Equation 2.43.

FIGURE 2.23
The first eight sine basis functions of the 1D DFT; k = 0, 1, 2, ..., 7 from top
to bottom. Each function was computed using 64 samples.

In order to use the FFT algorithm, it is common to pad the given image
with zeros or some other appropriate background value and convert the image
to a square of size N × N, where N is an integral power of 2. Then, all indices
in Equation 2.42 may be made to run from 0 to N − 1, as

F(k, l) = \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f(m, n) \exp\left[-j \frac{2 \pi}{N} (mk + nl)\right],    (2.44)

FIGURE 2.24
The first 64 cosine basis functions of the 2D DFT. Each function was computed
using a 64 × 64 matrix.

FIGURE 2.25
The first 64 sine basis functions of the 2D DFT. Each function was computed
using a 64 × 64 matrix.
with k = 0, 1, ..., N − 1, and l = 0, 1, ..., N − 1. The inverse transform is
given as

f(m, n) = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} F(k, l) \exp\left[+j \frac{2 \pi}{N} (mk + nl)\right].    (2.45)

In Equations 2.44 and 2.45, the normalization factor has been divided equally
between the forward and inverse transforms to be 1/N for the sake of symme-
try [8].
Example – the rectangle function and its Fourier transform: A
2D function with a rectangular base of size X × Y and height A is defined as

f(x, y) = \begin{cases} A & \text{if } 0 \leq x \leq X, \; 0 \leq y \leq Y \\ 0 & \text{otherwise.} \end{cases}    (2.46)

The 1D version of the rectangle function is also known as the gate function.
The 2D Fourier transform of the rectangle function above is given by

F(u, v) = AXY \left[ \frac{\sin(\pi u X)}{\pi u X} \exp(-j \pi u X) \right] \left[ \frac{\sin(\pi v Y)}{\pi v Y} \exp(-j \pi v Y) \right].    (2.47)
Observe that the Fourier transform of a real image is, in general, a complex
function. However, an image with even symmetry about the origin will have
a real Fourier transform. The exponential functions in Equation 2.47 indicate
the phase components of the spectrum.
A related function that is commonly used is the rect function, defined as

rect(x, y) = \begin{cases} 1 & \text{if } |x| < \frac{1}{2}, \; |y| < \frac{1}{2} \\ 0 & \text{if } |x| > \frac{1}{2}, \; |y| > \frac{1}{2}. \end{cases}    (2.48)

The Fourier transform of the rect function is the sinc function:

rect(x, y) ⇔ sinc(u, v),    (2.49)

where

sinc(u, v) = sinc(u) \, sinc(v) = \frac{\sin(\pi u)}{\pi u} \, \frac{\sin(\pi v)}{\pi v},    (2.50)

and ⇔ indicates that the two functions form a forward and inverse Fourier-
transform pair.
Figure 2.26 shows three images with rectangular (square) objects and their
Fourier log-magnitude spectra. Observe that the smaller the box, the greater
the energy content in the higher-frequency areas of the spectrum. At the lim-
its, we have the Fourier transform of an image of an infinitely large rectangle,
that is, the transform of an image with a constant value of unity for all space,
equal to δ(0, 0); and the Fourier transform of an image with an infinitesimally
small rectangle, that is, an impulse, equal to a constant of unity, represent-
ing a "white" spectrum. The frequency axes have been shifted such that
(u, v) = (0, 0) is at the center of the spectrum displayed. The frequency coor-
dinates in this mode of display of image spectra are shown in Figure 2.27 (b).
Figure 2.28 shows the log-magnitude spectrum in Figure 2.26 (f) with and
without shifting; the shifted (or centered or folded) mode of display as in
Figure 2.28 (b) is the preferred mode of display of 2D spectra.
The rectangle image in Figure 2.26 (e) as well as its magnitude spectrum
are also shown as mesh plots in Figure 2.29. The mesh plot demonstrates
more clearly the sinc nature of the spectrum.
Figure 2.30 shows three images with rectangular boxes oriented at 0°, 90°,
and 135°, and their log-magnitude spectra. The sinc functions in the Fourier
domain in Figure 2.30 are not symmetric in the u and v coordinates, as was
the case in the spectra of the square boxes in Figure 2.26. The narrowing of
the rectangle along a spatial axis results in the widening of the lobes of the
sinc function and the presence of increased high-frequency energy along the
corresponding frequency axis. The rotation of an image in the spatial domain
results in a corresponding rotation in the Fourier domain.
Example – the circle function and its Fourier transform: Circular
apertures and functions are encountered often in imaging and image process-
ing. The circ function, which represents a circular disc or aperture, is defined
as

circ(r) = \begin{cases} 1 & \text{if } r < 1 \\ 0 & \text{if } r > 1, \end{cases}    (2.51)

where r = \sqrt{x^2 + y^2}. The Fourier transform of circ(r) may be shown to be
\frac{1}{\rho} J_1(2 \pi \rho), where \rho = \sqrt{u^2 + v^2} represents radial frequency in the 2D (u, v)
plane, and J_1 is the first-order Bessel function of the first kind [3, 9].
Figure 2.31 shows an image of a circular disc and its log-magnitude spec-
trum. The disc image as well as its magnitude spectrum are also shown as
mesh plots in Figure 2.32. Ignoring the effects due to the representation of
the circular shape on a discrete grid, both the image and its spectrum are
isotropic. Figure 2.33 shows two profiles of the log-magnitude spectrum in
Figure 2.31 (b) taken along the central horizontal axis. The nature of the
Bessel function is clearly seen in the 1D plots; the conjugate symmetry of
the spectrum is also readily seen in the plot in Figure 2.33 (a). In displaying
profiles of 2D system transfer functions, it is common to show only one half
of the profile for positive frequencies, as in Figure 2.33 (b). If such a profile
is shown, it is to be assumed that the system possesses axial or rotational
symmetry; that is, the system is isotropic.
Examples of Fourier spectra of biomedical images: Figure 2.34
shows two TEM images of collagen fibers in rabbit ligament samples (in
cross-section), and their Fourier spectra. The Bessel characteristics of the
spectrum due to the circular shape of the objects in the image are clearly
FIGURE 2.26
(a) Rectangle image, with total size 128 × 128 pixels and a rectangle (square) of
size 40 × 40 pixels. (b) Log-magnitude spectrum of the image in (a). (c) Rect-
angle of size 20 × 20 pixels. (d) Log-magnitude spectrum of the image in (c).
(e) Rectangle of size 10 × 10 pixels. (f) Log-magnitude spectrum of the image
in (e). The spectra have been scaled to map the range [5, 12] to the display
range [0, 255]. See also Figures 2.28 and 2.29.
FIGURE 2.27
Frequency coordinates in (a) the unshifted mode and (b) the shifted mode of
display of image spectra. U and V represent the sampling frequencies along
the two axes. Spectra of images with real values possess conjugate symmetry
about U/2 and V/2. Spectra of sampled images are periodic, with the periods
equal to U and V along the two axes. It is common practice to display one
complete period of the shifted spectrum, including the conjugate symmetric
parts, as in (b). See also Figure 2.28.

(a) (b)
FIGURE 2.28
(a) Log-magnitude spectrum of the rectangle image in Figure 2.26 (e) without
shifting. Most FFT routines provide spectral data in this format. (b) The
spectrum in (a) shifted or folded such that (u v) = (0 0) is at the center. It
is common practice to display one complete period of the shifted spectrum,
including the conjugate symmetric parts, as in (b). See also Figure 2.27.
FIGURE 2.29
(a) Mesh plot of the rectangle image in Figure 2.26 (e), with total size 128 × 128
pixels and a rectangle (square) of size 10 × 10 pixels. (b) Magnitude spectrum
of the image in (a).
FIGURE 2.30
(a) Rectangle image, with total size 128 × 128 pixels and a rectangle of size
10 × 40 pixels. (b) Log-magnitude spectrum of the image in (a). (c) Rectangle
of size 40 × 10 pixels; this image may be considered to be that in (a) rotated
by 90°. (d) Log-magnitude spectrum of the image in (c). (e) Image in (c)
rotated by 45° using nearest-neighbor selection. (f) Log-magnitude spectrum
of the image in (e). Spectra scaled to map [5, 12] to the display range [0, 255].
See also Figure 8.1.
FIGURE 2.31
(a) Image of a circular disc. The radius of the disc is 10 pixels; the size of the
image is 128 × 128 pixels. (b) Log-magnitude spectrum of the image in (a).
See also Figures 2.32 and 2.33.

seen in Figure 2.34 (d). (Compare the examples in Figure 2.34 with those in
Figure 2.31.)
Figure 2.35 shows two SEM images of collagen fibers as seen in freeze-
fractured surfaces of rabbit ligament samples, and their Fourier spectra. The
highly oriented and piecewise linear (rectangular) characteristics of the fibers
in the normal sample in Figure 2.35 (a) are indicated by the concentrations of
energy along radial lines at the corresponding angles in the spectrum in Fig-
ure 2.35 (b). The scar sample in Figure 2.35 (c) lacks directional preference,
which is reflected in its spectrum in Figure 2.35 (d). (Compare the examples
in Figure 2.35 with those in Figure 2.30.)

2.11.1 Important properties of the Fourier transform

The Fourier transform is a linear, reversible transform that maps an image
from the space domain to the frequency domain. The spectrum of an image
can provide useful information on the frequency content of the image, on
the presence of oriented or directional elements, on the presence of specific
image patterns, and on the presence of noise. A study of the spectrum of an
image can assist in the development of filtering algorithms to remove noise,
in the design of algorithms to enhance the image, and in the extraction of
features for pattern recognition. Some of the important properties of the
Fourier transform are described in the following paragraphs, with illustrations
as required [9, 8, 11]; both the discrete and continuous representations of
functions are used as appropriate or convenient.
FIGURE 2.32
(a) Mesh plot of the circular disc in Figure 2.31 (a). The radius of the disc is
10 pixels; the size of the image is 128 × 128 pixels. (b) Magnitude spectrum
of the image in (a).
FIGURE 2.33
(a) Profile of the log-magnitude spectrum in Figure 2.31 (b) along the central
horizontal axis. (b) Profile in (a) shown only for positive frequencies. The
frequency axis is indicated in samples; the true frequency values depend upon
the sampling frequency.
FIGURE 2.34
(a) TEM image of collagen fibers in a normal rabbit ligament sample. (b) Log-
magnitude spectrum of the image in (a). (c) TEM image of collagen fibers in
a scar tissue sample. (d) Log-magnitude spectrum of the image in (c). See
also Figure 1.5 and Section 1.4.
FIGURE 2.35
(a) SEM image of collagen fibers in a normal rabbit ligament sample. (b) Log-
magnitude spectrum of the image in (a). (c) SEM image of collagen fibers in
a scar tissue sample. (d) Log-magnitude spectrum of the image in (c). See
also Figure 1.8 and Section 1.4.
1. The kernel function of the Fourier transform is separable and symmetric.
This property facilitates the evaluation of the 2D DFT as a set of 1D
row transforms, followed by a set of 1D column transforms. We have

F(k, l) = \frac{1}{N} \sum_{m=0}^{N-1} \exp\left(-j \frac{2 \pi}{N} mk\right) \sum_{n=0}^{N-1} f(m, n) \exp\left(-j \frac{2 \pi}{N} nl\right).    (2.52)

1D FFT routines may be used to obtain 2D and multidimensional Fourier
transforms in the following manner:

F(m, l) = \frac{1}{N} \left[ \sum_{n=0}^{N-1} f(m, n) \exp\left(-j \frac{2 \pi}{N} nl\right) \right],    (2.53)

F(k, l) = \frac{1}{N} \sum_{m=0}^{N-1} F(m, l) \exp\left(-j \frac{2 \pi}{N} mk\right).    (2.54)

(Care should be taken to check if the factor 1/N is included in the forward
or inverse 1D FFT routine, where required.)
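The row-column decomposition of Equations 2.53 and 2.54 can be checked with 1D FFT calls; the normalization factors are omitted below because NumPy's unnormalized forward FFT appears on both sides of the comparison. A sketch (illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))

# Equation 2.53: 1D transforms along the rows (index n -> l) ...
F_rows = np.fft.fft(f, axis=1)
# Equation 2.54: ... followed by 1D transforms along the columns (m -> k).
F_2d = np.fft.fft(F_rows, axis=0)
```

The result matches the library's 2D FFT exactly, which is how multidimensional FFTs are commonly implemented.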
2. The Fourier transform is an energy-conserving transform; that is,

\int_{x=-\infty}^{\infty} \int_{y=-\infty}^{\infty} |f(x, y)|^2 \, dx \, dy = \int_{u=-\infty}^{\infty} \int_{v=-\infty}^{\infty} |F(u, v)|^2 \, du \, dv.    (2.55)

This relationship is known as Parseval's theorem.
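A discrete counterpart of Parseval's theorem holds for the DFT as well; with NumPy's unnormalized forward transform, the spectral energy must be divided by the number of pixels. A quick numerical check (illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((16, 16))
F = np.fft.fft2(f)

energy_space = np.sum(np.abs(f) ** 2)
# For the unnormalized DFT, sum|F|^2 = M*N * sum|f|^2.
energy_freq = np.sum(np.abs(F) ** 2) / f.size
```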
3. The inverse Fourier transform operation may be performed using the
same FFT routine by taking the forward Fourier transform of the com-
plex conjugate of the given function, and then taking the complex con-
jugate of the result.
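This property is easy to confirm numerically; for NumPy's conventions, a division by the number of samples accounts for the normalization that the forward routine omits. A sketch (illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
F = np.fft.fft2(f)

# Inverse DFT via the forward routine:
# ifft2(F) = conj(fft2(conj(F))) / (M*N) under NumPy's conventions.
f_rec = np.conj(np.fft.fft2(np.conj(F))) / F.size
```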
4. The Fourier transform is a linear transform. The Fourier transform
of the sum of two images is the sum of the Fourier transforms of the
individual images.
Images are often corrupted by additive noise, such as

g(x, y) = f(x, y) + η(x, y).    (2.56)

Upon Fourier transformation, we have

G(u, v) = F(u, v) + η(u, v).    (2.57)

Most real-life images have a large portion of their energy concentrated
around (u, v) = (0, 0) in a low-frequency region; however, the presence
of edges, sharp features, and small-scale or fine details leads to increased
strength of high-frequency components (see Figure 2.34). On the other
hand, random noise has a spectrum that is equally spread all over the
frequency space (that is, a flat, uniform, or "white" spectrum). Indis-
criminate removal of high-frequency components could cause blurring of
edges and the loss of the fine details in the image.
5. The DFT and its inverse are periodic signals:

F(k, l) = F(k ± αN, l) = F(k, l ± βN) = F(k ± αN, l ± βN),    (2.58)

where α and β are integers.
6. The Fourier transform is conjugate-symmetric for images with real val-
ues:

F(−k, −l) = F*(k, l).    (2.59)

It follows that |F(−k, −l)| = |F(k, l)| and ∠F(−k, −l) = −∠F(k, l); that
is, the magnitude spectrum is even symmetric and the phase spectrum is
odd symmetric. The symmetry of the magnitude spectrum is illustrated
by the examples in Figures 2.26 and 2.30.
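On the DFT grid, the negative frequencies of Equation 2.59 wrap around modulo N. A numerical check of the symmetry for a real image (NumPy sketch, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
f = rng.standard_normal((N, N))   # real-valued image
F = np.fft.fft2(f)

# Check F(-k, -l) = F*(k, l) for every frequency sample;
# negative indices wrap around modulo N on the DFT grid.
symmetric = all(
    np.isclose(F[(-k) % N, (-l) % N], np.conj(F[k, l]))
    for k in range(N) for l in range(N)
)
```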
7. A spatial shift or translation applied to an image leads to an additional
linear phase component in its Fourier transform; the magnitude spec-
trum is unaffected. If f(m, n) ⇔ F(k, l) are a Fourier-transform pair,
we have

f(m − m_o, n − n_o) ⇔ F(k, l) \exp\left[-j \frac{2 \pi}{N} (k m_o + l n_o)\right],    (2.60)

where (m_o, n_o) is the shift applied in the space domain.
Conversely, we also have

f(m, n) \exp\left[+j \frac{2 \pi}{N} (k_o m + l_o n)\right] ⇔ F(k − k_o, l − l_o).    (2.61)

This property has important implications in the modulation of 1D sig-
nals for transmission and communication [1]; however, it does not have
a similar application with 2D images.
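Equation 2.60 can be verified for a circular shift: the spectrum of the rolled image equals the original spectrum times a linear phase ramp, and the magnitude spectrum is unchanged. A sketch (illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
f = rng.standard_normal((N, N))
mo, no = 3, 5                        # shift applied in the space domain

# Circular shift: the rolled image is f((m - mo) mod N, (n - no) mod N).
f_shift = np.roll(np.roll(f, mo, axis=0), no, axis=1)

k = np.arange(N).reshape(-1, 1)      # row (k) frequency indices
l = np.arange(N).reshape(1, -1)      # column (l) frequency indices
phase_ramp = np.exp(-2j * np.pi * (k * mo + l * no) / N)

F = np.fft.fft2(f)
F_shift = np.fft.fft2(f_shift)
```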
8. F(0, 0) gives the average value of the image; a scale factor may be re-
quired depending upon the definition of the DFT used.
9. For display purposes, log_{10}[1 + |F(k, l)|^2] is often used; the addition of
unity (to avoid taking the log of zero) and the squaring may some-
times be dropped. It is also common to fold or shift the spectrum to
bring the (0, 0) frequency point (the "DC" point) to the center, and the
folding frequency (half of the sampling frequency) components to the
edges. Figures 2.26, 2.27, and 2.28 illustrate shifted spectra and the
corresponding frequency coordinates.
Folding of the spectrum could be achieved by multiplying the image
f(m, n) with (−1)^{(m+n)} before the FFT is computed [8]. Because the
indices m and n are integers, this amounts to merely changing the signs
of alternate pixels. This outcome is related to the property in Equa-
tion 2.61 with k_o = l_o = N/2, which leads to

\exp\left[+j \frac{2 \pi}{N} (k_o m + l_o n)\right] = \exp[j \pi (m + n)] = (−1)^{(m+n)},    (2.62)

and

f(m, n) \, (−1)^{(m+n)} ⇔ F(k − N/2, l − N/2).    (2.63)
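The sign-alternation trick in Equations 2.62 and 2.63 can be compared against an explicit shift of the spectrum; for even N, the two agree exactly. A sketch (illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 8                                 # even size, so N/2 is an integer shift
f = rng.standard_normal((N, N))

m = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
# Change the signs of alternate pixels before the FFT ...
F_folded = np.fft.fft2(f * (-1.0) ** (m + n))
# ... which is equivalent to centering the spectrum after the FFT.
F_shifted = np.fft.fftshift(np.fft.fft2(f))
```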
10. Rotation of an image leads to a corresponding rotation of the Fourier
spectrum:

f(m_1, n_1) ⇔ F(k_1, l_1),    (2.64)

where

m_1 = m \cos\theta + n \sin\theta, \quad n_1 = −m \sin\theta + n \cos\theta,    (2.65)
k_1 = k \cos\theta + l \sin\theta, \quad l_1 = −k \sin\theta + l \cos\theta.    (2.66)

This property is illustrated by the images and spectra in Figure 2.30,
and is useful in the detection of directional or oriented patterns (see
Chapter 8).
11. Scaling an image leads to an inverse scaling of its Fourier transform:

f(am, bn) ⇔ \frac{1}{|ab|} F\left(\frac{k}{a}, \frac{l}{b}\right),    (2.67)

where a and b are scalar scaling factors. The shrinking of an image leads
to an expansion of its spectrum, with increased high-frequency content.
On the contrary, if an image is enlarged, its spectrum is shrunk, with
reduced high-frequency energy. The images and spectra in Figure 2.26
illustrate this property.
12. Linear shift-invariant systems and convolution: Most imaging
systems may be modeled as linear and shift-invariant or position-invariant
systems that are completely characterized by their PSFs. The output
of such a system is given as the convolution of the input image with the
PSF:

g(m, n) = h(m, n) * f(m, n)
        = \sum_{\alpha=0}^{N-1} \sum_{\beta=0}^{N-1} h(\alpha, \beta) \, f(m − \alpha, n − \beta).    (2.68)

Upon Fourier transformation, the convolution maps to the multiplica-
tion of the two spectra:

G(k, l) = H(k, l) \, F(k, l).    (2.69)

Thus, we have the important property

h(x, y) * f(x, y) ⇔ H(u, v) \, F(u, v),    (2.70)

expressed now in the continuous coordinates (x, y) and (u, v). The char-
acterization of imaging systems in the transform domain is discussed in
Section 2.12.
It should be noted that the convolution ⇔ multiplication property with
the DFT implies periodic or circular convolution; however, this type
of convolution may be made to be equivalent to linear convolution by
zero-padding. Details on this topic are presented in Section 3.5.3.
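The circular-convolution caveat is easiest to see in one dimension: multiplying unpadded DFTs wraps the result, whereas padding both sequences to length N1 + N2 − 1 reproduces linear convolution. A sketch (illustrative, not from the book):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, -1.0, 0.5])

# Linear convolution (length N1 + N2 - 1 = 6).
linear = np.convolve(a, b)

# Multiplying DFTs of zero-padded sequences gives the same result.
L = len(a) + len(b) - 1
via_fft = np.real(np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(b, L)))

# Without padding, the product of length-4 DFTs yields circular
# convolution: the tail of the linear result wraps onto its head.
circular = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b, len(a))))
```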
13. Multiplication of images in the space domain is equivalent to the con-
volution of their Fourier transforms:

f_1(x, y) \, f_2(x, y) ⇔ F_1(u, v) * F_2(u, v).    (2.71)

In medical imaging, some types of noise get multiplied with the image.
When a transparency, such as an X-ray image on film, is viewed using
a light box, the resulting image g(x, y) may be modeled as the product
of the transparency or transmittance function f(x, y) with the light
source intensity field s(x, y), giving g(x, y) = f(x, y) s(x, y). If s(x, y)
is absolutely uniform with a value A, its Fourier transform will be an
impulse: S(u, v) = A δ(u, v). The convolution of F(u, v) with A δ(u, v)
will have no effect on the spectrum except scaling by the constant A. If
the source is not uniform, the viewed image will be a distorted version
of the original; the corresponding convolution G(u, v) = F(u, v) * S(u, v)
will distort the spectrum F(u, v) of the original image.
14. The correlation of two images f(m, n) and g(m, n) is given by the op-
eration

\gamma_{fg}(\alpha, \beta) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f(m, n) \, g(m + \alpha, n + \beta).    (2.72)

Correlation is useful in the comparison of images where features that
are common to the images may be present with a spatial shift (α, β).
Upon Fourier transformation, we get the conjugate product of the spec-
tra of the two images:

\Gamma_{fg}(k, l) = F(k, l) \, G^*(k, l).    (2.73)

A related measure, known as the correlation coefficient and useful in
template matching and image classification, is defined as

\gamma = \frac{\sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f(m, n) \, g(m, n)}{\left[ \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f^2(m, n) \; \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} g^2(m, n) \right]^{\frac{1}{2}}}.    (2.74)

Here, it is assumed that the two images f and g (or features thereof)
are aligned and registered, and are of the same scale and orientation.
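Equation 2.74 (as given, without mean subtraction) is a normalized inner product; it equals unity when the two registered images are identical up to a positive scale factor. A sketch with an illustrative helper name (not from the book):

```python
import numpy as np

def correlation_coefficient(f, g):
    """Equation 2.74: normalized inner product of two registered images."""
    num = np.sum(f * g)
    den = np.sqrt(np.sum(f ** 2) * np.sum(g ** 2))
    return num / den

rng = np.random.default_rng(6)
f = rng.random((16, 16))
gamma_self = correlation_coefficient(f, 3.0 * f)       # positive scaling
gamma_other = correlation_coefficient(f, rng.random((16, 16)))
```

By the Cauchy-Schwarz inequality, the measure never exceeds unity, with equality only when the two images are proportional.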
15. Differentiation of an image results in the extraction of edges and high-
pass filtering:

\frac{\partial f(x, y)}{\partial x} ⇔ j 2 \pi u \, F(u, v),
\frac{\partial f(x, y)}{\partial y} ⇔ j 2 \pi v \, F(u, v).    (2.75)

The gain of the operator increases linearly with frequency u or v.
When processing digital images, the derivatives are approximated by
differences computed as

f_y'(m, n) \approx f(m, n) − f(m − 1, n),
f_x'(m, n) \approx f(m, n) − f(m, n − 1)    (2.76)

(using matrix notation).
It should be noted that operators based upon differences could cause
negative pixel values in the result. In order to display the result as an
image, it will be necessary to map the full range of the pixel values,
including the negative values, to the display range available. The mag-
nitude of the result may also be displayed if the sign of the result is not
important.
Examples: Figure 2.36 shows an image of a rectangle and its derivatives
in the horizontal and vertical directions, as well as their log-magnitude
spectra. The horizontal and vertical derivatives were obtained by con-
volving the image with [−1, 1] and [−1, 1]^T, respectively. Figures 2.37
and 2.38 show similar sets of results for the image of a myocyte and an
MR image of a knee. It is seen that the two derivatives extract edges in
the corresponding directions; edges in the direction orthogonal to that
of the operator are removed. The spectra show that the components in
one direction are enhanced, whereas the components in the orthogonal
direction are removed.
Observe that differentiation results in the removal of the intensity in-
formation from the image. Correspondingly, the values of the spectrum
for u = 0 or v = 0 are set to zero.
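The difference approximations of Equation 2.76 can be applied with shifted copies of the image; in matrix notation, axis 0 indexes the rows (m) and axis 1 the columns (n). A sketch (np.roll wraps at the borders, which is harmless here because the test image has a zero border; illustrative only):

```python
import numpy as np

f = np.zeros((8, 8))
f[2:6, 2:6] = 100.0                  # a bright square on a dark background

# Vertical differences (along m): f(m, n) - f(m-1, n).
fy = f - np.roll(f, 1, axis=0)
# Horizontal differences (along n): f(m, n) - f(m, n-1).
fx = f - np.roll(f, 1, axis=1)
```

The vertical differences respond only at the top and bottom edges of the square (with opposite signs), the horizontal differences only at its left and right edges; constant regions map to zero, consistent with the loss of intensity information noted above.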
16. The Laplacian of an image is defined as

\nabla^2 f(x, y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.    (2.77)

In the Fourier domain, we get

\nabla^2 f(x, y) ⇔ −(2 \pi)^2 (u^2 + v^2) F(u, v).    (2.78)

The spectrum of the image is multiplied by the factor (u^2 + v^2), which
is isotropic and increases quadratically with frequency. Therefore, the
high-frequency components are amplified by this operation. The Lapla-
cian is an omnidirectional operator, and detects edges in all directions.
When processing digital images, the second derivatives may be approx-
imated as follows: Taking the derivative of the expression for f_y'(m, n)
in Equation 2.76 for the second time, we get

f_y''(m, n) \approx [f(m, n) − f(m − 1, n)] − [f(m − 1, n) − f(m − 2, n)]
            = f(m, n) − 2 f(m − 1, n) + f(m − 2, n)    (2.79)

(using matrix notation). Causality is usually not a matter of concern
in image processing, and it is often desirable to have operators use col-
lections of pixels that are centered about the pixel being processed.
Applying a shift of one pixel to the result above (specifically, adding 1
to the first index of each term) leads to

f_y''(m, n) \approx f(m + 1, n) − 2 f(m, n) + f(m − 1, n)
            = f(m − 1, n) − 2 f(m, n) + f(m + 1, n).    (2.80)

Similarly, we get

f_x''(m, n) \approx f(m, n − 1) − 2 f(m, n) + f(m, n + 1).    (2.81)

The Laplacian could then be implemented as

f_L(m, n) = f(m − 1, n) + f(m, n − 1) − 4 f(m, n) + f(m + 1, n) + f(m, n + 1).    (2.82)

This operation is achieved by convolving the image with the 3 × 3 mask
or operator

\begin{bmatrix} 0 & 1 & 0 \\ 1 & −4 & 1 \\ 0 & 1 & 0 \end{bmatrix}.    (2.83)

Examples: Figure 2.39 shows the Laplacian of the rectangle image
in Figure 2.36 (a) and its log-magnitude spectrum. Similar results are
shown in Figures 2.40 and 2.41 for the myocyte image in Figure 2.37 (a)
and the knee MR image in Figure 2.38 (a). The Laplacian operator
has extracted all edges in all directions; correspondingly, high-frequency
components in all directions in the spectrum have been strengthened.
Observe that the images have lost gray-scale or intensity information;
correspondingly, the (u, v) = (0, 0) component has been removed from
the spectra.
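The mask of Equation 2.83 can be applied by direct convolution. Because its coefficients sum to zero, the response to a constant image vanishes (away from the zero-padded border), while an isolated impulse reproduces the mask. A sketch with a hand-rolled 3 × 3 convolution; a library routine such as scipy.signal.convolve2d would serve equally well (illustrative only):

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def convolve3x3(f, mask):
    """Convolve image f with a 3x3 mask, zero-padding the borders.
    For this symmetric mask, convolution and correlation coincide."""
    out = np.zeros_like(f, dtype=float)
    padded = np.pad(f.astype(float), 1)
    for m in range(f.shape[0]):
        for n in range(f.shape[1]):
            out[m, n] = np.sum(padded[m:m + 3, n:n + 3] * mask)
    return out

constant = np.full((6, 6), 50.0)
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0

flat_response = convolve3x3(constant, LAPLACIAN)
impulse_response = convolve3x3(impulse, LAPLACIAN)
```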
17. Integration of an image leads to smoothing or blurring, and lowpass
filtering:

\int_{\alpha=-\infty}^{x} f(\alpha, y) \, d\alpha ⇔ \frac{1}{j 2 \pi u} F(u, v),    (2.84)

\int_{\beta=-\infty}^{y} f(x, \beta) \, d\beta ⇔ \frac{1}{j 2 \pi v} F(u, v).    (2.85)

The weighting factors that apply to F(u, v) diminish with increasing
frequency, and hence high-frequency components are attenuated by this
operation.
The integration of an image from −∞ to the current x or y position is
seldom encountered in practice. Instead, it is common to encounter the
integration of an image over a small region or aperture surrounding the
current position, in the form

g(x, y) = \frac{1}{AB} \int_{\alpha=-A/2}^{A/2} \int_{\beta=-B/2}^{B/2} f(x + \alpha, y + \beta) \, d\alpha \, d\beta,    (2.86)

where the region of integration is a rectangle of size A × B. The normal-
ization factor \frac{1}{AB} leads to the average intensity being computed over
the area of integration. This operation may be interpreted as a moving-
average (MA) filter.
In discrete terms, averaging over a 3 × 3 aperture or neighborhood is
represented as

g(m, n) = \frac{1}{9} \sum_{\alpha=-1}^{1} \sum_{\beta=-1}^{1} f(m + \alpha, n + \beta).    (2.87)

This equation may be expanded as

g(m, n) = \frac{1}{9} \, [ \; f(m − 1, n − 1) + f(m − 1, n) + f(m − 1, n + 1)
                         + f(m, n − 1) + f(m, n) + f(m, n + 1)    (2.88)
                         + f(m + 1, n − 1) + f(m + 1, n) + f(m + 1, n + 1) \; ].

The same operation is also achieved via convolution of the image f(m, n)
with the array

\frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix},    (2.89)

which may be viewed as the PSF of a filter. It follows that the corre-
sponding effect in the frequency domain is multiplication of the Fourier
transform of the image with a 2D sinc function.
Integration or averaging as above but only along the horizontal or verti-
cal directions may be performed via convolution with the arrays \frac{1}{3} [1 \; 1 \; 1]
or \frac{1}{3} [1 \; 1 \; 1]^T, respectively.
Examples: Figure 2.42 shows an image of a rectangle with ideal edges,
followed by the results of averaging along the horizontal and vertical
directions via convolution with the arrays \frac{1}{3} [1 \; 1 \; 1] and \frac{1}{3} [1 \; 1 \; 1]^T,
respectively; the log-magnitude spectra of the images are also shown.
Figure 2.43 shows the result of averaging the rectangle image using
the 3 × 3 mask in Equation 2.89, as well as its spectrum. It is seen
that averaging results in the smoothing of edges and a reduction in
the strength of the high-frequency components in the direction(s) of
averaging.
Similar results are shown in Figures 2.44 and 2.45 for an image of a my-
ocyte, and in Figures 2.46 and 2.47 for an MR image of a knee joint. It is
seen that minor details and artifacts in the images have been suppressed
or removed by the averaging operation.
2.12 Modulation Transfer Function
Analysis of the characteristics of imaging systems, treated as 2D LSI systems,
is easier in the frequency or Fourier domain. Taking the 2D Fourier transform
of the convolution integral in Equation 2.32, we get

G(u, v) = H(u, v) F(u, v),   (2.90)

where F(u, v), G(u, v), and H(u, v) are the 2D Fourier transforms of f(x, y),
g(x, y), and h(x, y), respectively.
The 2D frequency-domain function H(u, v) is known as the optical transfer
function (OTF), or simply the transfer function, of the imaging system. The
OTF is, in general, a complex quantity. The magnitude of the OTF is known
as the modulation transfer function (MTF). The OTF at each frequency co-
ordinate gives the attenuation (or gain) for the corresponding frequency (u, v)
as well as the phase introduced by the system (usually implying distortion).
The OTF of an imaging system may be estimated by compiling its responses
to sinusoidal test patterns or gratings of various frequencies; it may also be
computed from various spread functions, as described in Section 2.9. If the
Image Quality and Information Content 123
FIGURE 2.36
(a) Image of a rectangular box. (c) Horizontal and (e) vertical derivatives of
the image in (a), respectively. (b), (d), and (f): Log-magnitude spectra of
the images in (a), (c), and (e), respectively. The images in (c) and (e) were
obtained by mapping the range [−200, 200] to the display range of [0, 255].
Negative differences appear in black, positive differences in white. The spectra
show values in the range [5, 12] mapped to [0, 255].
FIGURE 2.37
(a) Image of a myocyte. (c) Horizontal and (e) vertical derivatives of the
image in (a), respectively. (b), (d), and (f): Log-magnitude spectra of the
images in (a), (c), and (e), respectively. Images in (c) and (e) were obtained
by mapping the range [−20, 20] to the display range of [0, 255]. The spectra
show values in the range [3, 12] mapped to [0, 255].
FIGURE 2.38
(a) MR image of a knee. (c) Horizontal and (e) vertical derivatives of the image
in (a), respectively. (b), (d), and (f): Log-magnitude spectra of the images
in (a), (c), and (e), respectively. The images in (c) and (e) were obtained
by mapping the range [−50, 50] to the display range of [0, 255]. Negative
differences appear in black, positive differences in white. The spectra show
values in the range [3, 12] mapped to [0, 255].
FIGURE 2.39
(a) Laplacian of the rectangle image in Figure 2.36 (a). (b) Log-magnitude
spectrum of the image in (a).
FIGURE 2.40
(a) Laplacian of the myocyte image in Figure 2.37 (a). (b) Log-magnitude
spectrum of the image in (a).
FIGURE 2.41
(a) Laplacian of the MR image in Figure 2.38 (a). (b) Log-magnitude spec-
trum of the image in (a).
imaging system may be approximated as an LSI system, its response or output
corresponding to any arbitrary input image may be determined with a
knowledge of its PSF or OTF, as per Equation 2.32 or 2.90.
It should be noted that the widths of a PSF and the corresponding MTF
bear an inverse relationship: the greater the blur, the wider the PSF, and
the narrower the MTF (more high-frequency components are attenuated
significantly). Therefore, resolution may also be expressed indirectly in the
frequency domain as a point along the frequency axis beyond which the
attenuation is significant. Furthermore, a larger area under the (normalized)
MTF indicates a system with better resolution (more high-frequency com-
ponents preserved) than a system with a smaller area under the MTF. Several
MTF-based measures related to image quality are described in Section 2.15.
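The inverse relationship between PSF width and MTF width can be checked numerically. The following sketch (an illustration, not from the original text; the box-shaped PSFs are arbitrary) computes the MTF as the normalized magnitude of the 2D DFT of a PSF, and uses the mean of the normalized MTF as a simple stand-in for the area under the MTF.

```python
import numpy as np

def mtf_from_psf(psf):
    # MTF = magnitude of the 2D Fourier transform of the PSF,
    # normalized to unity at zero frequency.
    H = np.abs(np.fft.fft2(psf))
    return H / H[0, 0]

n = 64
narrow = np.zeros((n, n)); narrow[:2, :2] = 1.0   # 2 x 2 box PSF (mild blur)
wide = np.zeros((n, n)); wide[:4, :4] = 1.0       # 4 x 4 box PSF (more blur)

area_narrow = mtf_from_psf(narrow).mean()         # proxy for area under MTF
area_wide = mtf_from_psf(wide).mean()
```

The wider PSF yields the smaller MTF area, consistent with the stronger attenuation of high-frequency components.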
Example: Higashida et al. [133] compared the MTF of an image-intensifier
system for DR with that of a screen-film system. A 2 100-line camera was
used to obtain images in 2 048 × 2 048 matrices. The distance between the
X-ray tube focal spot and the experimental table top in their studies was
100 cm; the distance between the camera (object plane) and the table top
was 10 cm; the distance between the screen-film cassette and the table top
was 5 cm. The magnification factors for the DR and screen-film systems are
1.28 and 1.17, respectively. Focal spots of nominal size 0.3 mm and 0.8 mm
were used. MTFs were computed for different imaging conditions by capturing
images of a narrow slit to obtain the LSF, and computing its Fourier spectrum.
The effect of the MTF of the recording system was also included in computing
the total MTF: the total MTF was computed as the product of the MTF due
to the focal spot and the MTF of the detector system.
Figure 2.48 shows the MTFs for three imaging conditions. It is seen that
the two MTFs of the DR system are poorer than that of the screen-film system
FIGURE 2.42
(a) Image of a rectangular box. Results of averaging using three pixels in
the (c) horizontal and (e) vertical directions, respectively. (b), (d), and (f):
Log-magnitude spectra of the images in (a), (c), and (e), respectively. The
spectra show values in the range [5, 12] mapped to [0, 255].
FIGURE 2.43
(a) Result of 3 × 3 averaging of the rectangle image in Figure 2.42 (a). (b) Log-
magnitude spectrum of the image in (a).
at spatial frequency beyond 1 cycle/mm in the object plane. The MTF of
the DR system with a focal spot of 0.3 mm is slightly better than that of the
same system with a focal spot of 0.8 mm. The poorer performance of the
DR system was attributed to the pixel size being as large as 0.11 mm (the
equivalent resolution being 4.5 lp/mm).
Higashida et al. also computed contrast-detail curves based upon exper-
iments with images of square objects of varying thickness. The threshold
object thickness was determined as that related to the lowest-contrast image
where radiologists could visually detect the objects in images with a 50% con-
fidence level. (Note: The background and object area remaining the same,
the contrast of the object in an X-ray image increases as the object thickness
is increased.) Figure 2.49 shows the contrast-detail curves for the screen-film
system and the DR system, with the latter operated at the same dose as the
former in one setting (labeled as iso-dose in the figure, with the focal spot
being 0.8 mm), and at a low-dose setting with the focal spot being 0.3 mm
in another setting. The performance in the detection of low-contrast signals
with the screen-film system was comparable to that with the DR system at
the same X-ray dose. The low-dose setting resulted in poorer detection capa-
bility with the DR system. In the study of Higashida et al., in spite of the
poorer MTF, the DR system was found to be as effective as the screen-film
system (at the same X-ray dose) in clinical studies in the application area of
bone radiography.
Example: Figure 2.50 shows the MTF curve of an amorphous selenium
detector system for direct digital mammography along with those of a screen-
film system and an indirect digital imaging system. The amorphous selenium
system has the best MTF of the three systems.
FIGURE 2.44
(a) Image of a myocyte. Results of averaging using three pixels in the
(c) horizontal and (e) vertical directions, respectively. (b), (d), and (f): Log-
magnitude spectra of the images in (a), (c), and (e), respectively. The spectra
show values in the range [3, 12] mapped to [0, 255].
FIGURE 2.45
(a) Result of 3 × 3 averaging of the myocyte image in Figure 2.44 (a). (b) Log-
magnitude spectrum of the image in (a).
Example: Pateyron et al. [134] developed a CT system for high-resolution
3D imaging of small bone samples using synchrotron radiation. In order to
evaluate the resolution of the system, they imaged a sharp edge using the
system, thereby obtaining the ESF of the system illustrated in Figure 2.51 (a).
The derivative of the ESF in the direction orthogonal to that of the edge was
then computed to obtain the LSF, shown in Figure 2.51 (b). The MTF of the
system was then derived as explained earlier in this section, using the Fourier
transform of the LSF, and is shown in Figure 2.51 (c). The value of the MTF
at 55 lp/mm is 0.1. Using this information, Pateyron et al. estimated the
spatial resolution of their CT system to be (2 × 55)⁻¹ = 0.009 mm, or 9 µm.
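The ESF-to-LSF-to-MTF chain used by Pateyron et al. can be sketched numerically. The code below is an illustration with a synthetic logistic edge (not the authors' data): it differentiates a sampled ESF to obtain the LSF, takes the magnitude of its Fourier transform as the MTF, and converts the frequency at which the MTF falls to 0.1 into a spatial resolution of 1/(2f).

```python
import numpy as np

def resolution_from_esf(esf, dx_mm, threshold=0.1):
    lsf = np.diff(esf)                          # LSF = derivative of the ESF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                               # normalize to unity at DC
    freqs = np.fft.rfftfreq(lsf.size, d=dx_mm)  # frequency axis, cycles/mm
    f_c = freqs[np.argmax(mtf <= threshold)]    # first crossing of threshold
    return 1.0 / (2.0 * f_c)                    # resolution in mm

# Synthetic blurred edge sampled every 0.001 mm (1 micron).
x = np.arange(-128, 128) * 0.001
esf = 1.0 / (1.0 + np.exp(-x / 0.004))
res_mm = resolution_from_esf(esf, 0.001)
```

With this particular synthetic edge the estimated resolution comes out on the order of 0.01 mm, comparable in scale to the 9 µm figure quoted above.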
2.13 Signal-to-Noise Ratio
Noise is omnipresent! Some of the sources of random noise in biomedical imag-
ing are scatter, photon-counting noise, and secondary radiation. Blemishes
may be caused by scratches, defects, and nonuniformities in screens, crystals,
and electronic detectors. It is common to assume that noise is additive, and
to express the degraded image g(x, y) as

g(x, y) = f(x, y) + η(x, y),   (2.91)

where f(x, y) is the original image and η(x, y) is the noise at (x, y). Fur-
thermore, it is common to assume that the noise process is statistically in-
dependent of (and hence, uncorrelated with) the image process. Then, we
FIGURE 2.46
(a) MR image of a knee. Results of averaging using three pixels in the
(c) horizontal and (e) vertical directions, respectively. (b), (d), and (f): Log-
magnitude spectra of the images in (a), (c), and (e), respectively. The spectra
show values in the range [3, 12] mapped to [0, 255].
FIGURE 2.47
(a) Result of 3 × 3 averaging of the knee MR image in Figure 2.46 (a). (b) Log-
magnitude spectrum of the image in (a).
FIGURE 2.48
MTFs of a DR system (II-TV = image-intensifier television) and a screen-
film system at the same X-ray dose. FS = focal spot. Reproduced with
permission from Y. Higashida, Y. Baba, M. Hatemura, A. Yoshida, T. Takada,
and M. Takahashi, "Physical and clinical evaluation of a 2 048 × 2 048-matrix
image intensifier TV digital imaging system in bone radiography", Academic
Radiology, 3(10):842–848, 1996. © Association of University Radiologists.
FIGURE 2.49
Contrast-detail curves of a DR system (II-TV = image-intensifier television)
and a screen-film system. The DR system was operated at the same X-ray
dose as the screen-film system (iso-dose) and at a low-dose setting. Repro-
duced with permission from Y. Higashida, Y. Baba, M. Hatemura, A. Yoshida,
T. Takada, and M. Takahashi, "Physical and clinical evaluation of a 2 048 ×
2 048-matrix image intensifier TV digital imaging system in bone radiogra-
phy", Academic Radiology, 3(10):842–848, 1996. © Association of University
Radiologists.
FIGURE 2.50
MTF curves of an amorphous selenium (aSe) detector system for direct digital
mammography, a screen-film system (S-F), and an indirect digital imaging
system. c/mm = cycles/mm. Figure courtesy of J.E. Gray, Lorad, Danbury,
CT.
have

μ_g = μ_f + μ_η,   (2.92)

where μ represents the mean (average) of the process indicated by the sub-
script. In many cases, the mean of the noise process is zero. The variances
(σ²) of the processes are related as

σ_g² = σ_f² + σ_η².   (2.93)

SNR is a measure used to characterize objectively the relative strengths
of the true image (signal) and the noise in an observed (noisy or degraded)
image. Several definitions of SNR exist, and are used depending upon the
information available and the feature of interest. A common definition of
SNR is

SNR1 = 10 log10 (σ_f² / σ_η²) dB.   (2.94)
The variance of noise may be estimated by computing the sample variance of
pixels selected from areas of the given image that do not contain, or are not
expected to contain, any image component. The variance of the true image
as well as that of noise may be computed from the PDFs of the corresponding
processes if they are known or if they can be estimated.
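As a sketch of the procedure just described (illustrative; the test image, noise level, and the background mask marking a signal-free area are all hypothetical), the noise variance may be estimated from a flat region and subtracted from the total variance, per Equation 2.93, to approximate the signal variance of Equation 2.94:

```python
import numpy as np

def snr1_db(image, background_mask):
    # Noise variance from a signal-free region; signal variance from
    # Equation 2.93 as total variance minus noise variance.
    var_noise = image[background_mask].var()
    var_signal = max(image.var() - var_noise, 1e-12)
    return 10.0 * np.log10(var_signal / var_noise)   # Equation 2.94

rng = np.random.default_rng(0)
f = np.zeros((64, 64)); f[:, 32:] = 100.0            # true image (variance 2500)
g = f + rng.normal(0.0, 5.0, f.shape)                # additive noise (variance 25)
mask = np.zeros(f.shape, dtype=bool); mask[:, :16] = True   # signal-free area
snr = snr1_db(g, mask)                               # close to 20 dB here
```

With these values, 10 log10(2500/25) = 20 dB, and the estimate lands near that figure.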
FIGURE 2.51
(a) Edge spread function, (b) line spread function, and (c) MTF of a CT
system. 1 micron = 1 µm. Reproduced with permission from M. Pateyron,
F. Peyrin, A.M. Laval-Jeantet, P. Spanne, P. Cloetens, and G. Peix, "3D
microtomography of cancellous bone samples using synchrotron radiation",
Proceedings of SPIE 2708: Medical Imaging 1996 – Physics of Medical Imag-
ing, Newport Beach, CA, pp 417–426. © SPIE.
In some applications, the variance of the image may not provide an appro-
priate indication of the useful range of variation present in the image. For this
reason, another commonly used definition of SNR is based upon the dynamic
range of the image, as

SNR2 = 20 log10 [(f_max − f_min) / σ_η] dB.   (2.95)

Video signals in modern CRT monitors have SNR of the order of 60 − 70 dB,
with noninterlaced frame repetition rates in the range 70 − 80 frames per
second.
Contrast-to-noise ratio (CNR) is a measure that combines the contrast or
the visibility of an object and the SNR, and is defined as

CNR = (μ_f − μ_b) / σ_b,   (2.96)

where the subscript f refers to an ROI (assumed to be uniform, such as a disc
being imaged using X rays), and b to a background region with no signal
content (see Figure 2.7). Comparing this measure to the basic measure of
simultaneous contrast in Equation 2.8, the difference lies in the denominator,
where CNR uses the standard deviation. Whereas simultaneous contrast uses
a background region that encircles the ROI, CNR could use a background
region located elsewhere in the image. CNR is well suited to the analysis of
X-ray imaging systems, where the density of an ROI on a film image depends
upon the dose: the visibility of an object is dependent upon both the dose
and the noise.
In a series of studies on image quality, Schade reported on image gradation,
graininess, and sharpness in television and motion-picture systems [135]; on
an optical and photoelectric analog of the eye [136]; and on the evaluation of
photographic image quality and resolving power [137]. The sine-wave, edge-
transition, and square-wave responses of imaging systems were discussed in
detail. Schade presented a detailed analysis of the relationships of resolving
power, contrast sensitivity, number of perceptible gray-scale steps, and
granularity with the "three basic characteristics" of an imaging system: in-
tensity transfer function, sine-wave response, and SNR. Schade also presented
experimental setups and procedures with optical benches and equipment for
photoelectric measurements and characterization of optical and imaging sys-
tems.
Burke and Snyder [138] reported on quality metrics of digital images as
related to interpreter performance. Their test set included a collection of 250
transparencies of 10 digital images, each degraded by five levels of blurring
and five levels of noise. Their work addressed the question "How can we
measure the degree to which images are improved by digital processing?"
The results obtained indicated that although the main effect of blur was not
significant in their interpretation experiment (in terms of the extraction of
the "essential elements of information"), the effect of noise was significant.
However, in medical imaging applications such as SPECT, high levels of noise
are tolerated, but blurring of edges caused by filters used to suppress noise is
not accepted.
Tapiovaara and Wagner [139] proposed a method to measure image quality
in the context of the image information available for the performance of a
specified detection or discrimination task by an observer. The method was
applied to the analysis of fluoroscopy systems by Tapiovaara [140].
2.14 Error-based Measures
Notwithstanding several preceding works on image quality, Hall [141] stated
(in 1981) that "A major problem which has plagued image processing has been
the lack of an effective image quality measure." In his paper on subjective
evaluation of a perceptual quality metric, Hall discussed several image qual-
ity measures including the MSE, normalized MSE (NMSE), normalized error
(NE), and Laplacian MSE (LMSE), and then defined a "perceptual MSE"
or PMSE based on an HVS model. The measures are based upon the dif-
ferences between a given test image f(m, n) and its degraded version g(m, n)
after passage through the imaging system being evaluated, computed over the
full image frame either directly or after some filter or transform operation, as
follows:

MSE = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} [f(m, n) − g(m, n)]²,   (2.97)

NMSE = [Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} [f(m, n) − g(m, n)]²] / [Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} [f(m, n)]²],   (2.98)

NE = [Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} |f(m, n) − g(m, n)|] / [Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} |f(m, n)|],   (2.99)

LMSE = [Σ_{m=1}^{M−2} Σ_{n=1}^{N−2} [f_L(m, n) − g_L(m, n)]²] / [Σ_{m=1}^{M−2} Σ_{n=1}^{N−2} [f_L(m, n)]²],   (2.100)

where the M × N images are defined over the range m = 0, 1, 2, …, M − 1
and n = 0, 1, 2, …, N − 1, and f_L(m, n) is the Laplacian (second derivative)
of f(m, n), defined as in Equation 2.82, for m = 1, 2, …, M − 2 and n =
1, 2, …, N − 2. PMSE was defined in a manner similar to NMSE, but with
each image replaced with the logarithm of the image convolved with a PSF
representing the HVS. Hall's results showed that PMSE correlated well with
subjective ranking of images to greater than 99.9%, and performed better
than NMSE or LMSE. It should be noted that the measures defined above
assume the availability of a reference image for comparison in a before-and-
after manner.
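The reference-based measures of Equations 2.97–2.99 are direct to implement; a minimal NumPy sketch (illustrative, with a toy reference image) is:

```python
import numpy as np

def mse(f, g):
    return np.mean((f - g) ** 2)                      # Equation 2.97

def nmse(f, g):
    return np.sum((f - g) ** 2) / np.sum(f ** 2)      # Equation 2.98

def ne(f, g):
    return np.sum(np.abs(f - g)) / np.sum(np.abs(f))  # Equation 2.99

f = np.array([[1.0, 2.0], [3.0, 4.0]])   # reference image
g = f + 1.0                              # degraded version (uniform offset)
```

LMSE follows the same pattern, with the Laplacians of f and g substituted for the images themselves.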
2.15 Application: Image Sharpness and Acutance
In the search for a single measure that could represent the combined effects
of various imaging and display processes, several researchers proposed various
measures under the general label of "acutance" [114, 115, 142, 143, 144, 145,
146]. The following paragraphs present a review of several such measures
based upon the ESF and the MTF.
As an aside, it is important to note the distinction between acutance and
acuity. Westheimer [147, 148] discussed the concepts of visual acuity and hy-
peracuity and their light-spread and frequency-domain descriptions. Acuity
(as evaluated with Snellen letters or Landolt "C"s) tests the "minimum sepa-
rable", where the visual angle of a small feature is varied until a discrimination
goal just can or cannot be achieved (a resolution task). On the other hand,
hyperacuity (vernier or stereoscopic acuity) relates to spatial localization or
discrimination.
The edge spread function: Higgins and Jones [115] discussed the nature
and evaluation of the sharpness of photographic images, with particular at-
tention to the importance of gradients. With the observation that the cones in
the HVS, while operating in the mode of photopic vision (under high-intensity
lighting), respond to temporal illuminance gradients, and that the eye moves
to scan the field of vision, they argued that spatial luminance gradients in the
visual field represent physical aspects of the object or scene that affect the
perception of detail. Higgins and Jones conducted experiments with microden-
sitometric traces of knife edges recorded on various photographic materials,
and found that the maximum gradient or average gradient measures along
the knife-edge spread functions (KESF) failed to correlate with sharpness as
judged by human observers.
Figure 2.20 illustrates an ideal sharp edge and a hypothetical KESF. Higgins
and Jones proposed a measure of acutance based upon the mean-squared
gradient across a KESF as

A = [1 / (f(b) − f(a))] ∫_a^b [df(x)/dx]² dx,   (2.101)

where f(x) represents the intensity function along the edge, and a and b are the
spatial limits of the (blurred) edge. Ten different photographic materials were
evaluated with the measure of acutance, and the results indicated excellent
correlation between acutance and subjective judgment of sharpness.
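A discrete sketch of Equation 2.101 (an illustration; the rectangle-rule integration and the synthetic ramp edges are arbitrary choices) shows that a sharper edge yields a larger acutance value:

```python
import numpy as np

def edge_acutance(f, dx=1.0):
    # Equation 2.101: integral of the squared gradient along the edge
    # trace, normalized by the density difference f(b) - f(a) across it.
    grad = np.gradient(f, dx)
    return np.sum(grad ** 2) * dx / (f[-1] - f[0])

sharp = np.linspace(0.0, 1.0, 5)      # unit step spread over 4 samples
blurred = np.linspace(0.0, 1.0, 50)   # same density step over 49 samples
```

For a linear ramp of width L the measure behaves like 1/L, so spreading the same density step over a longer distance lowers the acutance.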
Wolfe and Eisen [149] reported on psychometric evaluation of the sharpness
of photographic reproductions. They stated that resolving power, maximum
gradient, and average gradient do not correlate well with sharpness, and that
the variation of density across an edge is an obvious physical measurement to
be investigated in order to obtain an objective correlate of sharpness. Per-
rin [114] continued along these lines, and proposed an averaged measure of
acutance by averaging the mean-squared gradient measure of Higgins and
Jones over many sections of the KESF, and further normalizing it with re-
spect to the density difference across the knife edge. Perrin also discussed the
relationship between the edge trace and the LSF.
MTF-based measures: Although Perrin [114] reported on an averaged
acutance measure based on the mean-squared gradient measure of Higgins
and Jones, he also remarked [150] that the sine-wave response better describes
the behavior of an optical system than a single parameter (such as resolving
power), and discussed the relationship between the sine-wave response and
spread functions. The works of Schade and Perrin, perhaps, shifted interest
from the spatial-gradient technique of Higgins and Jones to the frequency
domain.
Frequency-domain measures related to image quality typically combine the
areas under the MTF curves of the long chain of systems and processes in-
volved from the initial stage of the camera lens, through the film and/or
display device, to the final visual system of the viewer [146]. It is known from
basic linear system theory that when a composite system includes a number of
LSI systems with transfer functions H1(u, v), H2(u, v), …, HN(u, v) in series
(cascade), the transfer function of the complete system is given by

H(u, v) = H1(u, v) H2(u, v) ⋯ HN(u, v) = Π_{i=1}^{N} Hi(u, v).   (2.102)

Equivalently, we have the PSF of the net system given by

h(x, y) = h1(x, y) ∗ h2(x, y) ∗ ⋯ ∗ hN(x, y).   (2.103)
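Equation 2.102 says that MTFs in cascade multiply pointwise; a short sketch (illustrative, with hypothetical exponential component MTFs) makes the compounding of blur explicit:

```python
import numpy as np

def cascade_mtf(*mtfs):
    # Equation 2.102: the transfer function of LSI systems in series is
    # the pointwise product of the individual transfer functions.
    out = np.ones_like(mtfs[0])
    for h in mtfs:
        out = out * h
    return out

rho = np.linspace(0.0, 10.0, 101)   # radial frequency samples
h1 = np.exp(-0.1 * rho)             # hypothetical component MTFs
h2 = np.exp(-0.2 * rho)
h_total = cascade_mtf(h1, h2)       # equals exp(-0.3 * rho)
```

The cascade is never better than its worst component: h_total ≤ min(h1, h2) at every frequency, which is why a long imaging chain accumulates blur.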
Given that the high-frequency components in the spectrum of an image are
associated with sharp edges in the image domain, it may be observed that
the transfer function of an imaging or image processing system should pos-
sess large gains at high frequencies in order for the output image to retain
the sharpness present in the input. This observation leads to the result that,
over a given frequency range of interest, a system with larger gains at higher
frequencies (and hence a sharper output image) will have a larger area under
the normalized MTF than another system with lower gains (and hence poorer
sharpness in the resulting image). By design, MTF-area-based measures rep-
resent the combined effect of all the systems between the image source and
the viewer; they are independent of the actual image displayed.
Crane [142] started a series of definitions of acutance based on the MTFs
of imaging system components. He discussed the need for objective correlates
of the subjective property of image sharpness or crispness, and remarked that
resolving power is misleading, and that the averaged squared gradient of edge
profiles is dependable but cannot include the effects of all the components in a
photographic system (camera to viewer). Crane proposed a single numerical
rating based on the areas under the MTF curves of all the systems in the
chain from the camera to the viewer (for example, camera, negative, printer,
intermediate processing systems, print film, projector, screen, and observer).
He called the measure the system modulation transfer acutance (SMTA) and
claimed that it could be readily comprehended, compared, and tabulated.
He also recommended that the acutance measure proposed by Higgins and
Jones [115] and Perrin [114] be called image edge-profile acutance (IEPA).
Crane evaluated SMTA using 30 color films and motion-picture films, and
found it to be a good tool.
Crane's work started another series of papers proposing modified definitions
of MTF-based acutance measures for various applications: Gendron [146]
proposed a "cascaded modulation transfer or CMT" measure of acutance
(CMTA) to rectify certain deficiencies in SMTA. Crane wrote another pa-
per on acutance and granulance [143] and defined "AMT acutance" (AMTA)
based on the ratio of the MTF area of the complete imaging system including
the human eye to that of the eye alone. He also presented measures of gran-
ulance based on root mean-squared (RMS) deviation from mean lightness in
areas expected to be uniform, and discussed the relationships between acu-
tance and granulance. CMTA was used by Kriss [145] to compare the system
sharpness of continuous and discrete imaging systems. AMTA was used by
Yip [144] to analyze the imaging characteristics of CRT multiformat printers.
Assuming the systems involved to be isotropic, the MTF is typically ex-
pressed as a 1D function of the radial unit of frequency ρ = √(u² + v²); see
Figures 2.33 (b) and 2.51 (c). Let us represent the combined MTF of the
complete chain of systems as Hs(ρ). Some of the MTF-area-based measures
are defined as follows:

A1 = ∫_0^{ρmax} [Hs(ρ) − He(ρ)] dρ,   (2.104)

where He(ρ) is the MTF threshold of the eye, ρ represents the radial frequency
at the eye of the observer, and ρmax is given by the condition Hs(ρmax) =
He(ρmax) [151]. In order to reduce the weighting on high-frequency compo-
nents, another measure replaces the difference between the MTFs as above
with their ratio, as [151]

A2 = ∫_0^∞ [Hs(ρ) / He(ρ)] dρ.   (2.105)

AMTA was defined as [143, 144]

AMTA = 100 + 66 log10 { [∫_0^∞ Hs(ρ) He(ρ) dρ] / [∫_0^∞ He(ρ) dρ] }.   (2.106)

The MTF of the eye was modeled as a Gaussian with standard deviation
σ = 13 cycles/degree. AMTA values were interpreted as 100: excellent, 90:
good, 80: fair, and 70: just passable [143].
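Equation 2.106 may be sketched numerically as follows. This is an illustration only: the Gaussian form assumed for He (standard deviation 13 cycles/degree, one reading of the eye model quoted above), the rectangle-rule integration, and the degraded system MTF are all assumptions of the sketch.

```python
import numpy as np

def amta(rho, h_system, sigma=13.0):
    # Equation 2.106, with the eye MTF He modeled as a Gaussian of
    # standard deviation 13 cycles/degree; rectangle-rule integration
    # (the grid spacing cancels in the ratio).
    h_eye = np.exp(-rho ** 2 / (2.0 * sigma ** 2))
    num = np.sum(h_system * h_eye)
    den = np.sum(h_eye)
    return 100.0 + 66.0 * np.log10(num / den)

rho = np.linspace(0.0, 60.0, 601)   # cycles/degree at the eye
perfect = np.ones_like(rho)         # system passing all frequencies
blurred = np.exp(-0.2 * rho)        # hypothetical degraded system
```

A perfect system scores exactly 100 ("excellent" on the scale above); any attenuation within the eye's passband lowers the score.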
Several authors have presented and discussed various other image quality
criteria and measures that are worth mentioning here; whereas some are based
on the MTF and hence have some common ground with acutance, others
are based on different factors. Higgins [152] discussed various methods for
analyzing photographic systems, including the effects of nonlinearity, LSFs,
MTFs, granularity, and sharpness. Granger and Cupery [153] proposed a
"subjective quality factor (SQF)" based upon the integral of the system MTF
(including scaling effects to the retina) over a certain frequency range. Their
results indicated a correlation of 0.988 between SQF and subjective ranking
by observers.
Higgins [154] published a detailed review of various image quality crite-
ria. Quality criteria as related to objective or subjective tone reproduction,
sharpness, and graininess were described. Higgins reported on the results of
tests evaluating various versions of MTF-based acutance and other measures
with photographic materials having widely different MTFs, and recommended
that MTF-based acutance measures are good when no graininess is present;
SNR-based measures were found to be better when graininess was apparent.
Task et al. [155] compared several television (TV) display image quality mea-
sures. Their tests included target recognition tasks and several FOMs such
as limiting resolution, MTF area, threshold resolution, and gray-shade fre-
quency product. They found MTF area to be the best measure among those
evaluated.
Barten [151, 156] presented reviews of various image quality measures, and
proposed the evaluation of image quality using the square-root integral (SQRI)
method. The SQRI measure is based upon the ratio of the MTF of the display
system to that of the eye, and can take into account the contrast sensitivity
of the eye and various display parameters such as resolution, addressability,
contrast, luminance, display size, and viewing distance. SQRI is defined as

SQRI = [1 / ln(2)] ∫_0^{ρmax} [Hs(ρ) / He(ρ)]^{1/2} dρ.   (2.107)

Here, ρmax is the maximum frequency to be displayed. The SQRI measure
overcomes some limitations in the SQF measure of Granger and Cupery [153].
Based upon good correlation between SQRI and perceived subjective image
quality, Barten proposed SQRI as an "excellent universal measure of perceived
image quality".
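A numerical sketch of Equation 2.107 in the linear-frequency form given above (illustrative; the uniform-grid rectangle rule and the exponential stand-in for the eye MTF are assumptions of this sketch):

```python
import numpy as np

def sqri(rho, h_system, h_eye):
    # Equation 2.107: (1 / ln 2) times the integral of sqrt(Hs / He),
    # approximated by a rectangle rule on a uniform frequency grid.
    d_rho = rho[1] - rho[0]
    return np.sum(np.sqrt(h_system / h_eye)) * d_rho / np.log(2.0)

rho = np.linspace(0.0, 10.0, 1001)          # frequency grid up to rho_max
h_eye = np.exp(-rho / 5.0)                  # hypothetical eye MTF
baseline = sqri(rho, h_eye, h_eye)          # Hs = He: integrand is 1
better = sqri(rho, np.sqrt(h_eye), h_eye)   # display closer to ideal
```

When the display MTF equals the eye MTF the integrand is unity and SQRI reduces to ρmax / ln 2; a display whose MTF sits closer to unity scores higher.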
Carlson and Cohen [157] proposed a psychophysical model for predicting the
visibility of displayed information, combining the effects of MTF, noise, sam-
pling, scene content, mean luminance, and display size. They noted that edge
transitions are a significant feature of most scenes, and proposed "discrim-
inable difference diagrams" of modulation transfer versus retinal frequency
(in cycles per degree). Their work indicated that discriminable difference di-
agrams could be used to predict the visibility of MTF changes in magnitude
but not in phase.
Several other measures of image quality based upon the HVS have been pro-
posed by Saghri et al. [158], Nill and Bouzas [159], Lukas and Budrikis [160],
and Budrikis [161].
Region-based measure of edge sharpness: Westerink and Roufs [162]
proposed a local basis for perceptually relevant resolution measures. Their
experiments included the presentation of a number of slides with complex
scenes at variable resolution created by defocusing the lens of the projector,
and at various widths. They showed that the width of the LSF correlates
well with subjective quality, and remarked that MTF-based measures "do not
reflect the fact that local aspects such as edges and contours play an important
role in the quality sensation".
Rangayyan and Elkadiki [116] discussed the importance of a local measure of
quality, sharpness, or perceptibility of a region or feature of interest in a given
image. The question asked was "Given two images of the same scene, which
one permits better perception of a specific region or object in the image?"
Such a situation may arise in medical imaging, where one may have an array
of images of the same patient or phantom test object acquired using multiple
imaging systems (different models or various imaging parameter settings on
the same system). It would be of interest to determine which system or set
of parameters provides the image where a specific object, such as a tumor,
may be seen best. Whereas local luminance gradients are indeed reflected as
changes at all frequencies in the MTF, such a global characteristic may dilute
the desired difference in the situation mentioned above. Furthermore, MTF-
based measures characterize the imaging and viewing systems in general, and
are independent of the specific object or scene on hand.
Based upon the observations of Higgins and Jones [115], Wolfe and Eisen
[149], Perrin [114], Carlson and Cohen [157], and Westerink and Roufs [162] on
the importance of local luminance variations, gradients, contours, and edges
(as reviewed above), Rangayyan and Elkadiki presented arguments in favor of
a region-based measure of sharpness or acutance. They extended the measure
of acutance defined by Higgins and Jones [115] and Perrin [114] to 2D regions
by computing the mean-squared gradient across and around the contour of
an object or ROI in the given image, and called the quantity "a region-based
measure of image edge-profile or IEP acutance (IEPA)". Figure 2.52 illus-
trates the basic principle involved in computing the gradient around an ROI,
using normals (perpendiculars to the tangents) at every pixel on its boundary.
Instead of the traditional difference defined as

f′(n) = f(n) − f(n − 1),   (2.108)

Rangayyan and Elkadiki (see Rangayyan et al. [163] for revised definitions)
split the normal at each boundary pixel into a foreground part f(n) and a
background part b(n) (see Figure 2.52), and defined an averaged gradient as

f_d(k) = (1/N) Σ_{n=1}^{N} [f(n) − b(n)] / (2n),   (2.109)
where k is the index of the boundary pixel and N is the number of pairs of
pixels (or di erences) used along the normal. The averaged gradient values
over all boundary pixels were then combined to obtain a single normalized
value of acutance A for the entire region as
" K # 12
A= d1 1 X
K fd2 (k)  (2.110)
max k=1
where K is the number of pixels along the boundary, and d_max is the maximum
possible gradient value used as a normalization factor. It was shown
that the value of A was reduced by blurring and increased by sharpening of
the ROI [116, 164]. Olabarriaga and Rangayyan [117] further showed that
acutance as defined in Equation 2.110 is not affected significantly by noise,
and that it correlates well with sharpness as judged by human observers.
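As a rough numerical illustration of Equations 2.108 to 2.110, the following sketch computes an acutance-like value for a synthetic disc and for a blurred copy of it. It is a simplification of the published method: the ROI is a circle, the normals are taken as radial directions, the boundary is sampled at one-degree steps, and d_max is set to 1; all function and variable names are hypothetical, not taken from the text.

```python
import numpy as np

def acutance(image, cx, cy, radius, n_pairs=3, d_max=1.0):
    """Sketch of a region-based acutance measure (cf. Equations 2.109 and
    2.110): average foreground/background differences along radial normals
    to a circular ROI, then combine over all boundary samples."""
    K = 360  # one boundary sample per degree
    fd = np.zeros(K)
    for k, theta in enumerate(np.linspace(0, 2 * np.pi, K, endpoint=False)):
        dx, dy = np.cos(theta), np.sin(theta)
        s = 0.0
        for n in range(1, n_pairs + 1):
            # f(n): n-th pixel inside the boundary; b(n): n-th pixel outside
            fy = int(round(cy + (radius - n) * dy))
            fx = int(round(cx + (radius - n) * dx))
            by = int(round(cy + (radius + n) * dy))
            bx = int(round(cx + (radius + n) * dx))
            s += (image[fy, fx] - image[by, bx]) / (2 * n)
        fd[k] = s / n_pairs
    return np.sqrt(np.mean(fd ** 2)) / d_max

# sharp disc (object 1.0, background 0.0) versus a blurred version
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2] = 1.0
blur = img.copy()
for _ in range(5):  # crude 5-point (plus-shaped) mean filtering
    blur = (blur + np.roll(blur, 1, 0) + np.roll(blur, -1, 0)
            + np.roll(blur, 1, 1) + np.roll(blur, -1, 1)) / 5.0
a_sharp = acutance(img, 32, 32, 15)
a_blur = acutance(blur, 32, 32, 15)
```

Consistent with the behavior reported above, blurring the ROI reduces the value of the measure.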
FIGURE 2.52
Computation of differences along the normals to a region in order to derive a
measure of acutance. Four sample normals are illustrated, with three pairs of
pixels being used to compute di erences along each normal.

Example of application: In terms of their appearance on mammograms,
most benign masses of the breast are well-circumscribed with sharp boundaries
that delineate them from surrounding tissues. On the other hand, most
malignant tumors possess fuzzy boundaries with slow and extended transition
from a dense core region to the surrounding, less-dense tissues. Based upon
this radiographic observation, Rangayyan et al. [163] hypothesized that the
acutance measure should have higher values for benign masses than for
malignant tumors. Acutance was computed using normals with variable length
adapted to the complexity of the shape of the boundary of the mass being
analyzed. The measure was tested using 39 mammograms, including 28 benign
masses and 11 malignant tumors. Boundaries of the masses were drawn
by a radiologist for this study. It was found that acutance could lead to the
correct classification of all of the 11 malignant tumors and 26 out of the 28
benign masses, resulting in an overall accuracy of 94.9%.
Mudigonda et al. [165, 166] evaluated several versions of the acutance measure
by defining the differences based upon successive pixel pairs and across-the-boundary
pixel pairs. It was observed that the acutance measure is sensitive
to the location of the reference boundary [163, 164, 165, 166]. See Sections
7.9.2 and 12.12 as well as Figure 12.4 for more details and illustrations related
to acutance.

2.16 Remarks
We have reviewed several notions of image quality and information content in
this chapter. We explored many methods and measures designed to characterize
various image attributes associated with quality and information content.
It should be observed that image quality considerations vary from one application
of imaging to another, and that appropriate measures should be chosen
after due assessment of the particular problem on hand. In medical diagnostic
applications, emphasis is usually placed on the assessment of image quality
in terms of its effect on the accuracy of diagnostic interpretation by human
observers and specialists; methods related to this approach are discussed in
Sections 4.11, 12.8, and 12.10. The use of measures of information content in
the analysis of methods for image coding and data compression is described
in Chapter 11.

2.17 Study Questions and Problems


(Note: Some of the questions may require background preparation with other sources
on the basics of signals and systems as well as digital signal and image processing,
such as Lathi [1], Oppenheim et al. [2], Oppenheim and Schafer [7], Gonzalez and
Woods [8], Pratt [10], Jain [12], Hall [9], and Rosenfeld and Kak [11].)
Selected data files related to some of the problems and exercises are available at
the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. Explain the differences between spatial resolution and gray-scale resolution in
a digitized image.
2. Give the typical units for the variables (x, y) and (u, v) used in the representation
of images in the space and frequency domains.
3. How can a continuous (analog) image be recovered from its sampled (digitized)
version? Describe the operations required in
(a) the space domain, and
(b) the frequency domain.
What are the conditions to be met for exact recovery of the analog image?
4. Distinguish between gray-scale dynamic range and simultaneous contrast. Explain
the effects of the former on the latter.
5. Draw schematic sketches of the histograms of the following types of images:
(a) A collection of objects of the same uniform gray level placed on a uniform
background of a different gray level.
(b) A collection of relatively dark cells against a relatively bright background,
with both having some intrinsic variability of gray levels.
(c) An under-exposed X-ray image.
(d) An over-exposed X-ray image.
Annotate the histograms with labels and comments.
6. Starting with the expression for the entropy of a continuous PDF, show that
the entropy is maximized by a uniform PDF. (Hint: Treat this as a constrained
optimization problem, with the constraint being that the integral of the PDF
be equal to unity.)
7. Define two rectangular functions as

    f_1(x, y) = \begin{cases} 1 & \text{if } 0 \le x \le X, \; 0 \le y \le Y \\ 0 & \text{otherwise,} \end{cases}   (2.111)

and

    f_2(x, y) = \begin{cases} 1 & \text{if } |x| \le X/2, \; |y| \le Y/2 \\ 0 & \text{otherwise.} \end{cases}   (2.112)
Starting from the definition of the 2D Fourier transform, derive the Fourier
transforms F_1(u, v) and F_2(u, v) of the two functions; show all steps.
Explain the differences between the two functions in the spatial and frequency
domains.
8. Using the continuous 2D convolution and Fourier transform expressions, prove
that convolution in the space domain is equivalent to multiplication of the
corresponding functions in the Fourier domain.
9. You are given three images of rectangular objects as follows:
(a) a horizontally placed rectangle with the horizontal side two times the
vertical side;
(b) the rectangle in (a) rotated by 45°; and
(c) the rectangle in (a) reduced in each dimension by a factor of two.
Draw schematic diagrams of the Fourier spectra of the three images. Explain
the differences between the spectra.
10. Draw schematic diagrams of the Fourier magnitude spectra of images with
(a) a circle of radius R;
(b) a circle of radius 2R; and
(c) a circle of radius R/2.
The value of R is not relevant. Explain the differences between the three cases
in both the space domain and the frequency domain.
11. (a) Derive the expression for the Fourier transform of \partial f(x, y)/\partial x in terms of the
Fourier transform of f(x, y). Show and explain all steps. (Hint: Start with
the definition of the inverse Fourier transform.)
Explain the effect of the differentiation operator in the space domain and the
frequency domain.
(b) Based upon the result in (a), what is the Fourier transform of \partial^2 f(x, y)/\partial x^2?
Explain.
(c) Based upon the result in (a), state the relationship between the Fourier
transform of [\partial f(x, y)/\partial x]^2 and that of f(x, y). State all properties that you use.
(d) Explain the differences between the operators in (a), (b), and (c) and their
effects in both the space domain and the frequency domain.
12. Using the continuous 2D Fourier transform expression, prove that the inverse
Fourier transform of a function F(u, v) may be obtained by taking the forward
Fourier transform of the complex conjugate of the given function [that
is, taking the forward transform of F^*(u, v)], and then taking the complex
conjugate of the result.
13. Starting with the 2D DFT expression, show how the 2D DFT may be computed
as a series of 1D DFTs. Show and explain all steps.
14. An image of size 100 mm × 100 mm is digitized into a matrix of size 200 × 200
pixels with uniform sampling and equal spacing between the samples in the
horizontal and vertical directions. The spectrum of the image is computed
using the FFT algorithm after padding the image to 256 × 256 pixels.
Draw a square to represent the array containing the spectrum and indicate
the FFT array indices as well as the frequency coordinates in mm^{-1} at the
four corners, at the mid-point of each side, and at the center of the square.
15. A system performs the operation g(x, y) = f(x, y) - f(x - 1, y). Derive the
MTF of the system and explain its characteristics.
16. Using the continuous Fourier transform, derive the relationship between the
Fourier transforms of an image f(x, y) and its modified version given as
f_1(x, y) = f(x - x_1, y - y_1).
Explain the differences between the two images in the spatial and frequency
domains.
17. The impulse response of a system is approximated by the 3 × 3 matrix

    \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 2 \\ 1 & 2 & 1 \end{bmatrix}.   (2.113)
Derive the transfer function of the system and explain its characteristics.
18. The image in Equation 2.113 is processed by systems having the following
impulse responses:
(a) h(m, n) = [-1, 1];
(b) h(m, n) = [-1, 1]^T; and
(c) h(m, n) = a 3 × 3 matrix with all elements equal to 1/9.
Compute the output image in each case over a 3 × 3 array, assuming that the
input is zero outside the array given in Equation 2.113.
19. The 5 × 5 image

    f(m, n) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 \\ 0 & 10 & 10 & 10 & 0 \\ 0 & 10 & 10 & 10 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}   (2.114)

is processed by two systems in cascade. The first system produces the output
g_1(m, n) = f(m, n) - f(m - 1, n). The second system produces the output
g_2(m, n) = g_1(m, n) - g_1(m, n - 1).
Compute the images g_1 and g_2.
Does the sequence of application of the two operators affect the result? Why
(not)?
Explain the effects of the two operators.
20. Derive the MTF for the Laplacian operator and explain its characteristics.
21. Write the expressions for the convolution and correlation of two images.
Explain the similarities and differences between the two.
What are the equivalent relationships in the Fourier domain?
Explain the effects of the operations in the spatial and frequency domains.
22. Consider two systems with the impulse responses
(a) h_1(m, n) = [-1, 1]; and
(b) h_2(m, n) = [-1, 1]^T.
What will be the effect of passing an image through the two systems in
(a) parallel (and adding the results of the individual systems), or
(b) series (cascade)?
Considering a test image made up of a bright square in the middle of a dark
background, draw schematic diagrams of the outputs at each stage of the two
systems mentioned above.
23. The squared gradient of an image f(x, y) is defined as

    g(x, y) = \left[ \frac{\partial f(x, y)}{\partial x} \right]^2 + \left[ \frac{\partial f(x, y)}{\partial y} \right]^2.   (2.115)

Derive the expression for G(u, v), the Fourier transform of g(x, y).
How does this operator differ from the Laplacian in the spatial and frequency
domains?
24. Using mathematical expressions and operations as required, explain how a
degraded image of an edge may be used to derive the MTF of an imaging
system.

2.18 Laboratory Exercises and Projects


1. Prepare a phantom for X-ray imaging by attaching a few strips of metal
(such as aluminum or copper) of various thickness to a plastic or plexiglass
sheet. Ensure that the strips have straight edges. With the help of a qualified
technologist, obtain X-ray images of the phantom at a few different kVp and
mAs settings. Note the imaging parameters for each experiment.
Repeat the experiment with screens and films of different characteristics, with
and without the grid (bucky), and with the grid being stationary. Study the
contrast, noise, artifacts, and detail visibility in the resulting images.
Digitize the images for use in image processing experiments.
Scan across the edges of the metal strips and obtain the ESF. From this
function, derive the LSF, PSF, and MTF of the imaging system for various
conditions of imaging.
Measure the SNR and CNR of the various metal strips and study their dependence
upon the imaging parameters.
2. Repeat the experiment above with wire meshes of different spacing in the
range 1-10 lines per mm. Study the effects of the X-ray imaging and digitization
parameters on the clarity and visibility of the mesh patterns.
3. Compute the Fourier spectra of several biomedical images with various objects
and features of different size, shape, and orientation characteristics, as well as
of varying quality in terms of noise and sharpness. Calibrate the spectra in
terms of frequency in mm^{-1} or lp/mm. Explain the relationships between the
spatial and frequency-domain characteristics of the images and their spectra.
4. Compute the histograms and log-magnitude Fourier spectra of at least ten
test images that you have acquired.
Comment on the nature of the histograms and spectra.
Relate specic image features to specic components in the histograms and
spectra.
Comment on the usefulness of the histograms and spectra in understanding
the information content of images.
5. Create a test image of size 100 × 100 pixels, with a circle of diameter 30 pixels
at its center. Let the value of the pixels inside the circle be 100, and those
outside be 80.
Prepare three blurred versions of the test image by applying the 3 × 3 mean
filter
(i) once,
(ii) three times, and
(iii) five times successively.
To each of the three blurred images obtained as above, add three levels of
(a) Gaussian noise, and
(b) speckle noise.
Select the noise levels such that the edge of the circle becomes obscured in at
least some of the images.
Compute the error measures MSE and NMSE between the original test image
and each of the 18 degraded images obtained as above.
Study the effect of blurring and noise on the error measures and explain your
findings.
6. From your collection of test images, select two images: one with strong edges
of the objects or features present in the image, and the other with weaker
definition of edges and features.
Compute the horizontal difference, vertical difference, and the Laplacian of
the images. Find the minimum and maximum values in each result, and map
appropriate ranges to the display range in order to visualize the results.
Study the results obtained and comment upon your findings in relation to the
details present in the test images.
3
Removal of Artifacts

Noise is omnipresent! Biomedical images are often affected and corrupted by
various types of noise and artifact. Any image, pattern, or signal other than
that of interest could be termed as interference, artifact, or simply noise.
The sources of noise could be physiological, the instrumentation used, or
the environment of the experiment. The problems caused by artifacts in
biomedical images are vast in scope and variety; their potential for degrading
the performance of the most sophisticated image processing algorithms is high.
The removal of artifacts without causing any distortion or loss of the desired
information in the image of interest is often a significant challenge. The
enormity of the problem of noise removal and its importance are reflected
by the placement of this chapter as the first chapter on image processing
techniques in this book.
This chapter starts with an introduction to the nature of the artifacts that
are commonly encountered in biomedical images. Several illustrations of images
corrupted by various types of artifacts are provided. Details of the design
of filters spanning a broad range of approaches, from linear space-domain and
frequency-domain fixed filters, to the optimal Wiener filter, and further on
to nonlinear and adaptive filters, are then described. The chapter concludes
with demonstrations of application of the filters described to a few biomedical
images.
(Note: A good background in signal and system analysis [1, 2, 3, 167] as well
as probability, random variables, and stochastic processes [3, 128, 168, 169,
170, 171, 172, 173] is required in order to follow the procedures and analyses
described in this chapter.)

3.1 Characterization of Artifacts


3.1.1 Random noise
The term random noise refers to an interference that arises from a random
process such as thermal noise in electronic devices and the counting of photons.
A random process is characterized by the PDF representing the probabilities
of occurrence of all possible values of a random variable. (See Papoulis [128]
or Bendat and Piersol [168] for background material on probability, random
variables, and stochastic processes.)
Consider a random process that is characterized by the PDF p_\eta(\eta). The
process could be a function of time as \eta(t), or of space in 1D, 2D, or 3D
as \eta(x), \eta(x, y), or \eta(x, y, z); it could also be a spatio-temporal function as
\eta(x, y, z, t). The argument of the PDF represents the value that the random
process can assume, which could be a voltage in the case of a function of time,
or a gray level in the case of a 2D or 3D image. The use of the same symbol
for the function and the value it can assume when dealing with PDFs is useful
when dealing with several random processes.
The mean \mu_\eta of the random process \eta is given by the first-order moment
of the PDF, defined as

    \mu_\eta = E[\eta] = \int_{-\infty}^{\infty} \eta \, p_\eta(\eta) \, d\eta,   (3.1)
where E[·] represents the statistical expectation operator. It is common to
assume the mean of a random noise process to be zero.
The mean-squared (MS) value of the random process is given by the
second-order moment of the PDF, defined as

    E[\eta^2] = \int_{-\infty}^{\infty} \eta^2 \, p_\eta(\eta) \, d\eta.   (3.2)
The variance \sigma_\eta^2 of the process is defined as the second central moment:

    \sigma_\eta^2 = E[(\eta - \mu_\eta)^2] = \int_{-\infty}^{\infty} (\eta - \mu_\eta)^2 \, p_\eta(\eta) \, d\eta.   (3.3)

The square root of the variance gives the standard deviation (SD) \sigma_\eta of the
process. Note that \sigma_\eta^2 = E[\eta^2] - \mu_\eta^2. If the mean is zero, it follows that
\sigma_\eta^2 = E[\eta^2], that is, the variance and the MS values are the same.
Observe the use of the same symbol to represent the random variable,
the random process, and the random signal as a function of time or space.
The subscript of the PDF or the statistical parameter derived indicates the
random process of concern. The context of the discussion or expression should
make the meaning of the symbol clear.
When the values of a random process form a time series or a function of
time, we have a random signal (or a stochastic process) \eta(t); see Figure 3.1.
When one such time series is observed, it is important to note that the entity
represents but one single realization of the random process. An example of a
random function of time is the current generated by a CCD detector element
due to thermal noise when no light is falling on the detector (known as the
dark current). The statistical measures described above then have physical
meaning: the mean represents the DC component, the MS value represents the
average power, and the square root of the mean-squared value (the root mean-squared
or RMS value) gives the average noise magnitude. These measures
are useful in calculating the SNR, which is commonly defined as the ratio of
the peak-to-peak amplitude range of the signal to the RMS value of the noise,
or as the ratio of the average power of the desired signal to that of the noise.
Special-purpose CCD detectors are cooled by circulating cold air, water, or
liquid nitrogen to reduce thermal noise and improve the SNR.
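The statistical measures above can be checked numerically. The following sketch, using assumed synthetic data, draws zero-mean Gaussian noise samples with \sigma^2 = 0.01 (as in Figure 3.1) and estimates the mean, MS value, variance, and RMS value, along with the peak-to-peak-over-RMS form of the SNR for a hypothetical signal of unit peak-to-peak range.

```python
import numpy as np

rng = np.random.default_rng(7)
# zero-mean Gaussian noise, sigma^2 = 0.01 (as in Figure 3.1)
noise = rng.normal(0.0, 0.1, size=100_000)

mean = noise.mean()                  # estimate of the first moment
ms = np.mean(noise ** 2)             # mean-squared value = average power
var = np.mean((noise - mean) ** 2)   # second central moment
rms = np.sqrt(ms)                    # average noise magnitude

# SNR of a signal with peak-to-peak range 1.0 against this noise,
# using the peak-to-peak-over-RMS definition from the text
snr = 1.0 / rms
```

With a zero mean, the estimated variance and MS value coincide, as noted above.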

FIGURE 3.1
A time series composed of random noise samples with a Gaussian PDF having
\mu = 0 and \sigma^2 = 0.01. MS value = 0.01; RMS = 0.1. See also Figures 3.2 and
3.3.

When the values of a random process form a 2D function of space, we
have a noise image \eta(x, y); see Figures 3.2 and 3.3. Several possibilities arise
in this situation: We may have a single random process that generates random
gray levels that are then placed at various locations in the (x, y) plane in some
structured or random sequence. We may have an array of detectors with one
detector per pixel of a digital image; the gray level generated by each detector
may then be viewed as a distinct random process that is independent of those
of the other detectors. A TV image generated by such a camera in the presence
of no input image could be considered to be a noise process in (x, y, t), that
is, a function of space and time.
A biomedical image of interest f(x, y) may also, for the sake of generality, be
considered to be a realization of a random process f. Such a representation

FIGURE 3.2
An image composed of random noise samples with a Gaussian PDF having
\mu = 0 and \sigma^2 = 0.01. MS value = 0.01; RMS = 0.1. The normalized
pixel values in the range [-0.5, 0.5] were linearly mapped to the display range
[0, 255]. See also Figure 3.3.

FIGURE 3.3
Normalized histogram of the image in Figure 3.2. The samples were generated
using a Gaussian process with \mu = 0 and \sigma^2 = 0.01. MS value = 0.01; RMS
= 0.1. See also Figures 3.1 and 3.2.
allows for the statistical characterization of sample-to-sample or person-to-person
variations in a collection of images of the same organ, system, or type.
For example, although almost all CT images of the brain show the familiar
cerebral structure, variations do exist from one person to another. A brain CT
image may be represented as a random process that exhibits certain characteristics
on the average. Statistical averages representing populations of images
of a certain type are useful in designing filters, data compression techniques,
and pattern classification procedures that are optimal for the specific type of
images. However, it should be borne in mind that, in diagnostic applications,
it is the deviation from the normal or the average that is present in the image
on hand that is of critical importance.
When an image f(x, y) is observed in the presence of random noise \eta, the
detected image g(x, y) may be treated as a realization of another random
process g. In most cases, the noise is additive, and the observed image is
expressed as

    g(x, y) = f(x, y) + \eta(x, y).   (3.4)

Each of the random processes f, \eta, and g is characterized by its own PDF:
p_f(f), p_\eta(\eta), and p_g(g), respectively.
In most practical applications, the random processes representing an image
of interest and the noise affecting the image may be assumed to be statistically
independent processes. Two random processes f and \eta are said to be
statistically independent if their joint PDF p_{f\eta}(f, \eta) is equal to the product of
their individual PDFs, given as p_f(f) \, p_\eta(\eta). It then follows that the first-order
moment and second-order central moment of the processes in Equation 3.4
are related as

    E[g] = \mu_g = \mu_f + \mu_\eta = \mu_f = E[f],   (3.5)

    E[(g - \mu_g)^2] = \sigma_g^2 = \sigma_f^2 + \sigma_\eta^2,   (3.6)

where \mu represents the mean and \sigma^2 represents the variance of the random
process indicated by the subscript, and it is assumed that \mu_\eta = 0.
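Equations 3.4 to 3.6 can be illustrated with synthetic data: for an image process and an independent zero-mean noise process, the mean and variance of the sum should match the sums of the individual means and variances. The uniform "image" below is only a stand-in for illustration, not a real biomedical image.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(50, 150, size=(256, 256))      # stand-in "image" process
eta = rng.normal(0.0, 5.0, size=(256, 256))    # independent zero-mean noise
g = f + eta                                     # Equation 3.4

# Equations 3.5 and 3.6: means add (mu_eta = 0) and variances add
mu_ok = abs(g.mean() - f.mean()) < 0.1
var_sum = f.var() + eta.var()
var_ok = abs(g.var() - var_sum) / var_sum < 0.01
```

The small residual discrepancy in the variances comes from the sample covariance of the two finite realizations, which vanishes only in the limit of infinite data.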
Ensemble averages: When the PDFs of the random processes of concern
are not known, it is common to approximate the statistical expectation
operation by averages computed using a collection or ensemble of sample
observations of the random process. Such averages are known as ensemble
averages. Suppose we have M observations of the random process f as functions
of (x, y): f_1(x, y), f_2(x, y), \ldots, f_M(x, y); see Figure 3.4. We may estimate the
mean of the process at a particular spatial location (x_1, y_1) as

    \mu_f(x_1, y_1) = \lim_{M \to \infty} \frac{1}{M} \sum_{k=1}^{M} f_k(x_1, y_1).   (3.7)

The autocorrelation function (ACF) \phi_f(x_1, x_1 + \alpha, y_1, y_1 + \beta) of the random
process f is defined as

    \phi_f(x_1, x_1 + \alpha, y_1, y_1 + \beta) = E[f(x_1, y_1) \, f(x_1 + \alpha, y_1 + \beta)],   (3.8)
which may be estimated as

    \phi_f(x_1, x_1 + \alpha, y_1, y_1 + \beta) = \lim_{M \to \infty} \frac{1}{M} \sum_{k=1}^{M} f_k(x_1, y_1) \, f_k(x_1 + \alpha, y_1 + \beta),   (3.9)

where \alpha and \beta are spatial shift parameters. If the image f(x, y) is complex,
one of the versions of f(x, y) in the products above should be conjugated; most
biomedical images that are encountered in practice are real-valued functions,
and this distinction is often ignored. The ACF indicates how the values of
an image at a particular spatial location are statistically related to (or have
characteristics in common with) the values of the same image at another
shifted location. If the process is stationary, the ACF depends only upon the
shift parameters, and may be expressed as \phi_f(\alpha, \beta).

FIGURE 3.4
Ensemble and spatial averaging of images.

The three equations above may be applied to signals that are functions
of time by replacing the spatial variables (x, y) with the temporal variable
t, replacing the shift parameter \alpha with \tau to represent temporal delay, and
making a few other related changes.
When \mu_f(x_1, y_1) is computed for every spatial location or pixel, we get an
average image that could be expressed as \bar{f}(x, y). The image \bar{f} may be used
to represent the random process f as a prototype. For practical use, such an
average should be computed using sample observations that are of the same
size, scale, orientation, etc. Similarly, the ACF may also be computed for all
possible values of its indices to obtain an image.
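A brief numerical sketch of the ensemble average of Equation 3.7 follows, with a hypothetical prototype image and M noisy realizations; averaging across the ensemble should reduce the RMS error from roughly \sigma to \sigma/\sqrt{M}. The gradient "prototype" and the noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.tile(np.linspace(0, 1, 64), (64, 1))   # underlying prototype image
M = 100
# M sample observations f_k(x, y) = truth + independent Gaussian noise
obs = truth + rng.normal(0.0, 0.2, size=(M, 64, 64))

ensemble_mean = obs.mean(axis=0)   # estimate of mu_f(x, y), Equation 3.7

err_single = np.sqrt(np.mean((obs[0] - truth) ** 2))       # about 0.2
err_mean = np.sqrt(np.mean((ensemble_mean - truth) ** 2))  # about 0.2 / 10
```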
Temporal and spatial averages: When we have a sample observation of
a random process f_k(t) as a function of time, it is possible to compute time
averages or temporal statistics by integrating along the time axis [31]:

    \mu_f(k) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} f_k(t) \, dt.   (3.10)

The integral would be replaced by a summation in the case of sampled or
discrete-time signals. The time-averaged ACF \phi_f(\tau, k) is given by

    \phi_f(\tau, k) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} f_k(t) \, f_k(t + \tau) \, dt.   (3.11)
Similarly, given an observation of a random process as an image f_k(x, y),
we may compute averages by integrating over the spatial domain, to obtain
spatial averages or spatial statistics; see Figure 3.4. The spatial mean of the
image f_k(x, y) is given by

    \mu_f(k) = \frac{1}{A} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_k(x, y) \, dx \, dy,   (3.12)

where A is a normalization factor, such as the actual area of the image. Observe
that the spatial mean above is a single-valued entity (a scalar). For a
stationary process, the spatial ACF is given by

    \phi_f(\alpha, \beta, k) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_k(x, y) \, f_k(x + \alpha, y + \beta) \, dx \, dy.   (3.13)

A suitable normalization factor, such as the total energy of the image [which is
equal to \phi_f(0, 0)] may be included, if necessary. The sample index k becomes
irrelevant if only one observation is available. In practice, the integrals change
to summations over the space of the digital image available.
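Discrete forms of the spatial mean and spatial ACF of Equations 3.12 and 3.13 might be sketched as follows, using circular shifts and normalization by the total energy \phi_f(0, 0); for a white-noise observation, the normalized ACF should be 1 at zero shift and near 0 elsewhere. The function name and the use of circular (wrap-around) shifts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
fk = rng.normal(0.0, 1.0, size=(128, 128))   # one observation f_k(x, y)

# discrete form of Equation 3.12, with A = the number of pixels
mu_spatial = fk.sum() / fk.size

def spatial_acf(f, alpha, beta):
    """Discrete form of Equation 3.13 with circular shifts, normalized
    by the total energy phi_f(0, 0)."""
    shifted = np.roll(np.roll(f, -alpha, axis=1), -beta, axis=0)
    return np.sum(f * shifted) / np.sum(f * f)

acf_00 = spatial_acf(fk, 0, 0)   # exactly 1 after normalization
acf_10 = spatial_acf(fk, 1, 0)   # near 0 for white (uncorrelated) noise
```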
When we have a 2D image as a function of time, such as TV, video, fluoroscopy,
and cine-angiography signals, we have a spatio-temporal signal that
may be expressed as f(x, y, t); see Figure 3.5. We may then compute statistics
over a single frame f(x, y, t_1) at the instant of time t_1, which are known
as intraframe statistics. We could also compute parameters through multiple
frames over a certain period of time, which are called interframe statistics;
the signal over a specific period of time may then be treated as a 3D dataset.
Random functions of time may thus be characterized in terms of ensemble
and/or temporal statistics. Random functions of space may be represented

FIGURE 3.5
Spatial and temporal statistics of a video signal.
by their ensemble and/or spatial statistics. Figure 3.4 shows the distinction
between ensemble and spatial averaging. Figure 3.5 illustrates the combined
use of spatial and temporal statistics to analyze a video signal in (x, y, t).
The mean does not play an important role in 1D signal analysis: it is usually
assumed to be zero, and often subtracted out if it is not zero. However, the
mean of an image represents its average intensity or density; removal of the
mean leads to an image with only the edges and the fluctuations about the
mean being depicted.
The ACF plays an important role in the characterization of random processes.
The Fourier transform of the ACF is the power spectral density (PSD)
function, which is useful in frequency-domain analysis. Statistical functions
as above are useful in the analysis of the behavior of random processes, and in
modeling, spectrum analysis, filter design, data compression, and data communication.
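The ACF-PSD relationship can be verified numerically for a discrete signal: the inverse Fourier transform of the periodogram (a PSD estimate) equals the circular autocorrelation computed directly. The random test signal below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=4096)   # one realization of a random signal

# periodogram estimate of the PSD
psd = np.abs(np.fft.fft(x)) ** 2 / x.size
# inverse Fourier transform of the PSD gives the (circular) ACF
acf = np.real(np.fft.ifft(psd))

# direct circular ACF values at lags 0 and 1, for comparison
acf0_direct = np.mean(x * x)
acf1_direct = np.mean(x * np.roll(x, -1))

ok0 = abs(acf[0] - acf0_direct) < 1e-9
ok1 = abs(acf[1] - acf1_direct) < 1e-9
```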

3.1.2 Examples of noise PDFs


As we have already seen, several types of noise sources are encountered in
biomedical imaging. Depending upon the characteristics of the noise source
and the phenomena involved in the generation of the signal and noise values,
we encounter a few different types of PDFs, some of which are described in
the following paragraphs [3, 128, 173].
Gaussian: The most commonly encountered and used noise PDF is the
Gaussian or normal PDF, expressed as [3, 128]

    p_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left[ -\frac{(x - \mu_x)^2}{2\sigma_x^2} \right].   (3.14)

A Gaussian PDF is completely specified by its mean \mu_x and variance \sigma_x^2.
Figure 3.6 shows three Gaussian PDFs with \mu = 0, \sigma = 1; \mu = 0, \sigma = 2; and
\mu = 3, \sigma = 1. See also Figures 3.2 and 3.3.
When we have two jointly normal random processes x and y, the bivariate
normal PDF is given by

    p_{xy}(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}}
    \exp\left\{ -\frac{1}{2(1 - \rho^2)} \left[ \frac{(x - \mu_x)^2}{\sigma_x^2}
    - \frac{2\rho (x - \mu_x)(y - \mu_y)}{\sigma_x \sigma_y}
    + \frac{(y - \mu_y)^2}{\sigma_y^2} \right] \right\},   (3.15)

where \rho is the correlation coefficient, given by

    \rho = \frac{E[(x - \mu_x)(y - \mu_y)]}{\sigma_x \sigma_y}.   (3.16)

If \rho = 0, the two processes are uncorrelated. The bivariate normal PDF then
reduces to a product of two univariate Gaussians, which implies that the two
processes are statistically independent.


FIGURE 3.6
Three Gaussian PDFs. Solid line: \mu = 0, \sigma = 1. Dashed line: \mu = 0, \sigma = 2.
Dotted line: \mu = 3, \sigma = 1.

The importance of the Gaussian PDF in practice arises from a phenomenon
that is expressed as the central limit theorem [3, 128]: The PDF of a random
process that is the sum of several statistically independent random processes
is equal to the cascaded convolution of their individual PDFs. When a
large number of functions are convolved in cascade, the result tends toward a
Gaussian-shaped function regardless of the forms of the individual functions.
In practice, an image is typically affected by a series of independent sources
of additive noise; the net noise PDF may then be assumed to be a Gaussian.
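The central limit theorem can be demonstrated by the cascaded convolution of PDFs described above. The sketch below convolves the PDF of a zero-mean uniform variable with itself eleven times (the sum of twelve independent uniforms) on a sampled grid, an assumed discretization, and compares the result against the Gaussian having the same mean and variance.

```python
import numpy as np

# PDF of one uniform variable on (-0.5, 0.5), sampled on a grid
dx = 0.01
x = np.arange(-0.5, 0.5, dx)
p = np.ones_like(x)          # height 1 over a unit-length support

# cascaded convolution of the PDF with itself (sum of independent variables)
pdf = p.copy()
for _ in range(11):          # sum of 12 uniforms
    pdf = np.convolve(pdf, p) * dx

grid = (np.arange(pdf.size) - (pdf.size - 1) / 2) * dx
area = pdf.sum() * dx                      # should remain ~1
var = np.sum(grid ** 2 * pdf) * dx         # 12 * (1/12) = 1
# compare against the Gaussian with the same mean (0) and variance
gauss = np.exp(-grid ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
max_dev = np.max(np.abs(pdf - gauss))
```

Even after only a dozen convolutions, the maximum deviation from the matching Gaussian is small, consistent with the statement above.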
Uniform: All possible values of a uniformly distributed random process
have equal probability of occurrence. The PDF of such a random process over
the range (a, b) is a rectangle of height 1/(b - a) over the range (a, b). The mean
of the process is (a + b)/2, and the variance is (b - a)^2/12. Figure 3.7 shows two
uniform PDFs corresponding to random processes with values spread over the
ranges (-10, 10) and (-5, 5). The quantization of gray levels in an image to a
finite number of integers leads to an error or noise that is uniformly distributed.
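The quantization-noise remark can be checked numerically: rounding continuous gray levels to integers produces an error that is approximately uniform over (-0.5, 0.5), so its variance should be close to (b - a)^2/12 = 1/12. The continuous input below is simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
analog = rng.uniform(0.0, 255.0, size=500_000)   # continuous gray levels
quantized = np.round(analog)                      # quantize to integers
err = quantized - analog                          # ~uniform on (-0.5, 0.5)

# variance of a uniform PDF over (a, b) is (b - a)^2 / 12; here 1/12
predicted = 1.0 / 12.0
measured = err.var()
```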
Poisson: The counting of discrete random events such as the number of
photons emitted by a source or detected by a sensor in a given interval of
time leads to a random variable with a Poisson PDF. The discrete nature
of the packets of energy (that is, photons) and the statistical randomness in
their emission and detection contribute to uncertainty, which is reflected as


FIGURE 3.7
Two uniform PDFs. Solid line: \mu = 0, range = (-10, 10). Dashed line: \mu = 0,
range = (-5, 5).
quantum noise, photon noise, mottle, or Poisson noise in images. Shot noise
in electronic devices may also be modeled as Poisson noise.
One of the formulations of the Poisson PDF is as follows: The probability
that k photons are detected in a certain interval is given by

    P(k) = \exp(-\lambda) \frac{\lambda^k}{k!}.   (3.17)

Here, \lambda is the mean of the process, which represents the average number of
photons counted in the specified interval over many trials. The values of P(k)
for all (integer) k form the Poisson PDF. The variance of the Poisson PDF is
equal to its mean.
The Poisson PDF tends toward the Gaussian PDF for large mean values.
Figure 3.8 shows two Poisson PDFs along with the Gaussians for the same
parameters; it is seen that the Poisson and Gaussian PDFs for \lambda = \sigma^2 = 20
match each other well.
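A small numerical check of the Poisson properties stated above, using simulated photon counts: the sample variance should match the sample mean, and for \lambda = 20 the Poisson PDF of Equation 3.17 should lie close to the Gaussian with \mu = \sigma^2 = \lambda, as in Figure 3.8. The array sizes and seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
lam = 20.0                      # mean photon count per pixel
counts = rng.poisson(lam, size=(512, 512))

mean_ok = abs(counts.mean() - lam) < 0.1
var_ok = abs(counts.var() - lam) < 0.3   # variance equals the mean

# Gaussian approximation for a large mean (cf. Figure 3.8)
k = np.arange(0, 41)
log_fact = np.cumsum(np.log(np.arange(1, 42)))      # log k! for k = 1..41
log_fact = np.concatenate(([0.0], log_fact))[:41]   # prepend log 0! = 0
poisson = np.exp(-lam + k * np.log(lam) - log_fact)
gauss = np.exp(-(k - lam) ** 2 / (2 * lam)) / np.sqrt(2 * np.pi * lam)
max_dev = np.max(np.abs(poisson - gauss))
```

Computing the PDF through logarithms of the factorial avoids overflow for large k, a common numerical precaution.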


FIGURE 3.8
Two Poisson PDFs with the corresponding Gaussian PDFs superimposed.
Bars with  and dashed envelope:  = 2 = 4. Bars with  and solid
envelope:  = 2 = 20.
Laplacian: The Laplacian PDF is given by the function

p(x) = [1 / (√2 σ_x)] exp[−√2 |x − μ_x| / σ_x],   (3.18)

where μ_x and σ_x² are the mean and variance, respectively, of the process. Figure 3.9 shows two Laplacian PDFs. Error values in linear prediction have been observed to display Laplacian PDFs [174].

FIGURE 3.9
Two Laplacian PDFs with μ = 0, σ² = 1 (solid) and μ = 0, σ² = 4 (dashed).
Rayleigh: The Rayleigh PDF is given by the function

p(x) = (2/b) (x − a) exp[−(x − a)²/b] u(x − a),   (3.19)

where u(x) is the unit step function such that

u(x) = 1 if x ≥ 0; 0 otherwise.   (3.20)

The mean and variance of the Rayleigh PDF are determined by the parameters a and b as [173] μ_x = a + √(πb/4) and σ_x² = b(4 − π)/4.
Figure 3.10 shows a Rayleigh PDF with a = 1 and b = 4. The Rayleigh PDF has been used to model speckle noise [175].
FIGURE 3.10
Rayleigh PDF with a = 1 and b = 4.
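The parametrization of Equations 3.19 and 3.20 can be checked by drawing samples; the sketch below assumes the standard inverse-transform relation x = a + √(−b ln U) implied by the CDF of Equation 3.19, with a = 1 and b = 4 as in Figure 3.10:

```python
import numpy as np

rng = np.random.default_rng(7)
a, b = 1.0, 4.0

# Inverse-transform sampling of the Rayleigh PDF of Equation 3.19:
# F(x) = 1 - exp(-(x - a)**2 / b), so x = a + sqrt(-b * ln(U)), U in (0, 1].
u = 1.0 - rng.random(200_000)
x = a + np.sqrt(-b * np.log(u))

# Compare sample moments with the closed forms quoted in the text.
mean_theory = a + np.sqrt(np.pi * b / 4.0)
var_theory = b * (4.0 - np.pi) / 4.0
print(x.mean(), mean_theory, x.var(), var_theory)
```

The sample mean and variance agree with a + √(πb/4) and b(4 − π)/4, respectively.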

3.1.3 Structured noise


Power-line interference at 50 Hz or 60 Hz is a common artifact in many
biomedical signals. The typical waveform of the interference is known in
advance and, hence, it could be considered to be a form of structured noise.
It should, however, be noted that the phase of the interfering waveform will not usually be known. Furthermore, the interfering waveform may not be an exact sinusoid; this is indicated by the presence of harmonics of the fundamental 50 Hz or 60 Hz component. Notch and comb filters may be used to remove power-line artifact [31]. It is not common to encounter power-line interference in biomedical images.
A common form of structured noise in biomedical images is the grid artifact. On rare occasions, the grid used in X-ray imaging may not translate or oscillate as designed; the stationary grid then casts its shadow, which is superimposed upon the image of the subject. Stationary grids with parallel strips create an artifact that is a set of parallel lines, which is also periodic.
Depending upon the strip width, strip orientation, and sampling resolution,
the artifactual lines may appear as thin straight lines, or as relatively thick
rectangular stripes with some internal gray-scale variation; see Figure 1.13 and Section 3.4.2.
In some imaging applications, grids and frames may be used to position
the object or subject being imaged. The image of the grid or frame could
serve a useful purpose in image registration and calibration, as well as in the
modeling and removal of geometric distortion. The same patterns may be
considered to be artifacts in other applications of image processing.
Labels indicating patient identification, patient positioning, imaging parameters, and other details are routinely used in X-ray imaging. Ideally, the
image cast by such labels should be well removed from the image of the pa-
tient on the acquired image, although, occasionally, they may get connected
or even overlap. Such items interfere in image analysis when the procedure
is applied to the entire image. Preprocessing techniques are then required to
recognize and remove such artifacts; see Section 5.9 for examples.
Surgical implants such as staples, pins, and screws create difficulties and artifacts in X-ray, MR, CT, and ultrasound imaging. The advantage with such artifacts is that the precise composition and geometry of the implants are known by design and the manufacturers' specifications. Methods may then be designed to remove each specific artifact.

3.1.4 Physiological interference


As we have already noted, the human body is a complex conglomeration of several systems and processes. Several physiological processes could be active at a given instant of time, each one affecting the system or process of interest in diverse ways. A patient or experimental subject may not be able to exercise control on all of his or her physiological processes and systems. The effect of systems or processes other than those of interest on the image being acquired may be termed physiological interference; several examples are listed below.
• Effect of breathing on a chest X-ray image.
• Effect of breathing, peristalsis, and movement of material through the gastro-intestinal system on CT images of the abdomen.
• Effect of cardiovascular activity on CT images of the chest.
• Effect of pulsatile movement of arteries in subtraction angiography.
Physiological interference may not be characterized by any specific waveform, pattern, or spectral content, and is typically dynamic and nonstationary (varying with the level of the activity of relevance and hence with time; see Section 3.1.6 for a discussion on stationarity). Thus, simple, linear bandpass filters will usually not be effective in removing physiological interference.
Normal anatomical details such as the ribs in chest X-ray images and the
skull in brain imaging may also be considered to be artifacts when other details
in such images are of primary interest. Methods may need to be developed to
remove their effects before the details of interest may be analyzed.

3.1.5 Other types of noise and artifact


Systematic errors are caused by several factors such as geometric distortion, miscalibration, nonlinear response of detectors, sampling, and quantization [3]. Such errors may be modeled from a knowledge of the corresponding parameters, which may be determined from specifications, measured experimentally, or derived mathematically.
A few other types of artifact that cannot be easily categorized into the groups discussed above are the following:
• Punctate or shot noise due to dust on the screen, film, or examination table.
• Scratches on film that could appear as intense line segments.
• Shot noise due to inactive elements in a detector array.
• Salt-and-pepper noise due to impulsive noise, leading to black or white pixels at the extreme ends of the pixel-value range.
• Film-grain noise due to scanning of films with high resolution.
• Punctate noise in chest X-ray or mammographic images caused by cosmetic powder or deodorant (which could masquerade as microcalcifications).
• Superimposed images of clothing accessories such as pins, hooks, buttons, and jewelry.

3.1.6 Stationary versus nonstationary processes


Random processes may be characterized in terms of their temporal/spatial and/or ensemble statistics. A random process is said to be stationary in the strict sense or strongly stationary if its statistics are not affected by a shift in the origin of time or space. In most practical applications, only the first-order and second-order averages are used. A random process is said to be weakly stationary or stationary in the wide sense if its mean is a constant and its ACF depends only upon the difference (or shift) in time or space. Then, we have μ_f(x₁, y₁) = μ_f and φ_f(x₁, x₁ + α; y₁, y₁ + β) = φ_f(α, β). The ACF is now a function of the shift parameters α and β only; the PSD of the process does not vary with space.
A stationary process is said to be ergodic if the temporal statistics computed are independent of the sample observed; that is, the same results are obtained with any sample observation f_k(t). The time averages of the process are then independent of k: μ_f(k) = μ_f and φ_f(τ; k) = φ_f(τ). All ensemble statistics may be replaced by temporal statistics when analyzing ergodic processes. Ergodic processes are an important type of stationary random processes because their statistics may be computed from a single observation as a function of time. The same concept may be extended to functions of space as well, although the term "ergodic" is not commonly applied to spatial functions.
Signals or processes that do not meet the conditions described above may, in general, be called nonstationary processes. A nonstationary process possesses statistics that vary with time or space. The statistics of most images vary over space; indeed, such variations are the source of pictorial information. Most biomedical systems are dynamic systems and produce nonstationary signals and images. However, a physical or physiological system has limitations in the rate at which it can change its characteristics. This limitation facilitates the breaking of a signal into segments of short duration (typically a few tens of milliseconds), over which the statistics of interest may be assumed to remain constant [31]. The signal is then referred to as a quasistationary process. Techniques designed for stationary signals may then be extended and applied to nonstationary signals. Analysis of signals by this approach is known as short-time analysis [31, 176]. By the same token, the characteristics of the features in an image vary over relatively large scales of space; statistical parameters within small regions of space, within an object, or within an organ of a given type may be assumed to remain constant. The image may then be assumed to be block-wise stationary, which permits sectioned or block-by-block processing or moving-window processing using techniques designed for stationary processes [177, 178]. Figure 3.11 illustrates the notion of computing statistics within a moving window.
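The notion of block-wise stationarity can be sketched by computing the mean and variance within a small moving window; the test array, window size, and noise level below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A nonstationary test image: two homogeneous halves plus noise.
f = np.zeros((32, 32))
f[:, 16:] = 100.0
f += rng.normal(0.0, 5.0, f.shape)

def local_stats(img, half=1):
    """Mean and variance over a (2*half+1) x (2*half+1) moving window."""
    mean = np.zeros_like(img)
    var = np.zeros_like(img)
    padded = np.pad(img, half, mode="edge")
    rows, cols = img.shape
    for m in range(rows):
        for n in range(cols):
            win = padded[m:m + 2 * half + 1, n:n + 2 * half + 1]
            mean[m, n] = win.mean()
            var[m, n] = win.var()
    return mean, var

mean, var = local_stats(f)
# Within each half, the local mean tracks that block's level; only
# windows straddling the boundary show a large local variance.
print(mean[16, 4], mean[16, 28], var[16, 16])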
Certain systems, such as the cardiac system, normally perform rhythmic operations. Considering the dynamics of the cardiac system, it is obvious that the system is nonstationary. However, various phases of the cardiac cycle, as well as the related components of the associated electrocardiogram (ECG), phonocardiogram (PCG), and carotid pulse signals, repeat over time in an almost-periodic manner. A given phase of the process or signal possesses statistics that vary from those of the other phases; however, the statistics of a specific phase repeat cyclically. For example, the statistics of the PCG signal vary within the duration of a cardiac cycle, especially when murmurs are present, but repeat themselves at regular intervals over successive cardiac cycles. Such signals are referred to as cyclo-stationary signals [31]. The cyclical repetition of the process facilitates synchronized ensemble averaging (see Sections 3.2 and 3.10), using epochs or events extracted from an observation of the signal over many cycles.
The cyclical nature of cardiac activity may be exploited for synchronized averaging to reduce noise and improve the SNR of the ECG and PCG [31]. The
FIGURE 3.11
Block-by-block processing of an image. Statistics computed by using the pixels within the window shown with solid lines (3 × 3 pixels) are applicable to the pixel marked with the @ symbol. Statistics for use when processing the pixel marked with the # symbol (5 × 5 pixels) are computed by using the pixels within the window shown with dashed lines.

same technique may also be extended to imaging the heart: In gated blood-pool imaging, nuclear medicine images of the heart are acquired in several parts over short intervals of time. Images acquired at the same phases of the cardiac cycle (determined by using the ECG signal as a reference, trigger, or "gating" signal) are accumulated over several cardiac cycles. A sequence of such gated and averaged frames over a full cardiac cycle may then be played as a video or a movie to visualize the time-varying size and contents of the left ventricle. (See Section 3.10 for illustration of gated blood-pool imaging.)

3.1.7 Covariance and cross-correlation


When two random processes f and g need to be compared, we could compute the covariance between them as

σ_fg = E[(f − μ_f)(g − μ_g)] = ∫∫ (f − μ_f)(g − μ_g) p_fg(f, g) df dg,   (3.21)

with the double integral taken over (−∞, ∞) in each variable, where p_fg(f, g) is the joint PDF of the two processes, and the image coordinates have been omitted for the sake of compact notation. The covariance parameter may be normalized to get the correlation coefficient, defined as

ρ_fg = σ_fg / (σ_f σ_g),   (3.22)
with −1 ≤ ρ_fg ≤ +1. A high covariance indicates that the two processes have similar statistical variability or behavior. The processes f and g are said to be uncorrelated if ρ_fg = 0. Two processes that are statistically independent are also uncorrelated; the converse of this property is, in general, not true.
When dealing with random processes f and g that are functions of space, the cross-correlation function (CCF) between them is defined as

φ_fg(α, β) = E[f(x, y) g(x + α, y + β)].   (3.23)

In situations where the PDF is not available, the expressions above may be approximated by the corresponding ensemble or spatial statistics. Correlation functions are useful in analyzing the nature of variability and spectral bandwidth of images, as well as for the detection of objects by template matching.
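For sample data, Equations 3.21 and 3.22 reduce to the familiar normalized sample covariance; the arrays below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two related "images": g is a noisy, scaled version of f.
f = rng.normal(size=(64, 64))
g = 2.0 * f + rng.normal(scale=0.5, size=f.shape)

# Sample covariance and correlation coefficient (Equations 3.21 and 3.22).
cov_fg = np.mean((f - f.mean()) * (g - g.mean()))
rho_fg = cov_fg / (f.std() * g.std())
print(rho_fg)

# An independent process is (nearly) uncorrelated with f.
h = rng.normal(size=f.shape)
cov_fh = np.mean((f - f.mean()) * (h - h.mean()))
rho_fh = cov_fh / (f.std() * h.std())
print(rho_fh)
```

The correlated pair yields ρ close to +1, while the independent pair yields ρ close to 0, as the text states.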

3.1.8 Signal-dependent noise


Noise may be categorized as being independent of the signal of interest if no
statistical parameter of any order of the noise process is a function of the
signal. Although it is common to assume that the noise present in a signal
or image is statistically independent of the true signal (or image) of interest,
several cases exist in biomedical imaging where this assumption is not valid,
and the noise is functionally related to or dependent upon the signal. The
following paragraphs provide brief notes on a few types of signal-dependent
noise encountered in biomedical imaging.
Poisson noise: Imaging systems that operate in low-light conditions, or in low-dose radiation conditions such as nuclear medicine imaging, are often affected by photon noise that can be modeled as a Poisson process [179, 180]; see Section 3.1.2. The probabilistic description of an observed image (pixel value) g_o(m, n) under the conditions of a Poisson process is given by

P[g_o(m, n) | f(m, n)] = [λ f(m, n)]^{g_o(m, n)} exp[−λ f(m, n)] / g_o(m, n)!,   (3.24)

where f(m, n) is the undegraded pixel value (the observation in the absence of any noise), and λ is a proportionality factor. Because the mean of the degraded image g_o is given by

E[g_o(m, n)] = λ E[f(m, n)],   (3.25)

images corrupted with Poisson noise are usually normalized as

g(m, n) = g_o(m, n) / λ.   (3.26)

It has been shown [179] that, in this case, Poisson noise may be modeled as stationary noise uncorrelated with the signal and added to the signal as in Equation 3.4, with zero mean and variance given by

σ_η²(m, n) = (1/λ) E[f(m, n)] = (1/λ) E[g(m, n)].   (3.27)
Film-grain noise: The granular structure of film due to the silver-halide grains used contributes noise to the recorded image, which is known as film-grain noise. When images recorded on photographic film are digitized in order to be processed by a digital computer, film-grain noise is a significant source of degradation of the information. According to Froehlich et al. [181], the model for an image corrupted by film-grain noise is given by

g(m, n) = f(m, n) + κ F[f(m, n)] η₁(m, n) + η₂(m, n),   (3.28)

where κ is a proportionality factor, F[·] is a mathematical function, and η₁(m, n) and η₂(m, n) are samples from two random processes independent of the signal. This model may be taken to represent a general imaging situation that includes signal-independent noise as well as signal-dependent noise, and the noise could be additive or multiplicative. Observe that the model reduces to the simple signal-independent additive noise model in Equation 3.4 if κ = 0.
Froehlich et al. [181] modeled film-grain noise with F[f(m, n)] = [f(m, n)]^p, using p = 0.5. The two noise processes η₁ and η₂ were assumed to be Gaussian-distributed, uncorrelated, zero-mean random processes. According to this model, the noise that corrupts the image has two components: one that is signal-dependent through the factor κ √f(m, n) η₁(m, n), and another that is signal-independent, given by η₂(m, n). Film-grain noise may be modeled as additive noise as in Equation 3.4, with η(m, n) = κ √f(m, n) η₁(m, n) + η₂(m, n). It can be shown that η(m, n) as above is stationary, has zero mean, and has its variance given by [182]

σ_η²(m, n) = κ² E[g(m, n)] σ₁² + σ₂²,   (3.29)

where σ₁² and σ₂² are the variances of η₁ and η₂, respectively. The fact that the mean of the corrupted image equals the mean of the noise-free image has been used in arriving at the relationship above.
Speckle noise: Speckle noise corrupts images that are obtained by coherent radiation, such as synthetic-aperture radar (SAR), ultrasound, laser, and sonar. When an object being scanned by a laser beam has random surface roughness with details of the order of the wavelength of the laser, the imaging system will be unable to resolve the details of the object's roughness; this results in speckle noise. The most widely used model for speckle noise is a multiplicative model, given as [175, 183, 184, 185, 186, 187]

g(m, n) = f(m, n) η₁(m, n),   (3.30)

where η₁(m, n) is a stationary noise process that is assumed to be uncorrelated with the image. If the mean μ₁ of the noise process η₁ is not equal to one, the noisy image may be normalized by dividing by μ₁ such that, in the normalized image, the multiplicative noise has its mean equal to one. Depending upon the specific application, the distribution of the noise may be assumed to be exponential [175, 183, 186, 187], Gaussian [184], or Rayleigh [175].
The multiplicative model in Equation 3.30 may be converted to the additive model as in Equation 3.4, with η(m, n) being zero-mean additive noise having
a space-variant, signal-dependent variance given by [188]

σ_η²(m, n) = [σ₁² / (1 + σ₁²)] [σ_g²(m, n) + μ_g²(m, n)].   (3.31)

In the expression above, σ₁² is the variance of η₁, and σ_g²(m, n) and μ_g(m, n) are the variance and the mean of the noisy image at the point (m, n), respectively.
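A sketch of the multiplicative model of Equation 3.30 with a unit-mean exponential multiplier (one of the distributions listed above; the test image is an arbitrary constant):

```python
import numpy as np

rng = np.random.default_rng(9)

f = np.full((512, 512), 50.0)     # noise-free image

# Unit-mean exponential multiplier (Equation 3.30).
eta1 = rng.exponential(scale=1.0, size=f.shape)
g = f * eta1

# The multiplicative noise preserves the mean but adds signal-dependent
# fluctuation: for constant f, var(g) = f**2 * var(eta1).
print(g.mean(), g.var(), f.mean()**2 * 1.0)
```

The noisy image keeps the mean of the original, but its variance scales with the square of the signal level, which is the hallmark of signal-dependent speckle.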
Transformation of signal-dependent noise to signal-independent noise: In the model used by Naderi and Sawchuk [182] and Arsenault et al. [189, 190], the signal-independent component of the noise as in Equation 3.28 is assumed to be zero. In this case, it has been shown [189, 190, 191] that by applying an appropriate transformation to the whole image, the noise can be made signal-independent. One of the transformations proposed is [189, 190]

T[g(m, n)] = c √g(m, n),   (3.32)

where c is an appropriate normalizing constant. It has been shown that the noise in the transformed image is additive, has a Gaussian distribution, is unbiased, and has a standard deviation that no longer depends on the signal but is given by c κ σ₁ / 2.
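A variance-stabilization sketch under the assumptions above (η₂ = 0; the constants κ, σ₁, c, and the signal levels are arbitrary): after the square-root transform, the noise standard deviation is roughly c κ σ₁ / 2 regardless of the signal level.

```python
import numpy as np

rng = np.random.default_rng(13)

kappa, sigma1, c = 0.2, 1.0, 1.0

def transformed_noise_std(level, n=200_000):
    """Std of c*sqrt(g) for g = f + kappa*sqrt(f)*eta1 at a fixed level f."""
    f = np.full(n, float(level))
    g = f + kappa * np.sqrt(f) * rng.normal(0.0, sigma1, n)
    return (c * np.sqrt(g)).std()

# The noise std after the transform is nearly the same at very different
# signal levels, and close to c * kappa * sigma1 / 2.
s_low, s_high = transformed_noise_std(50.0), transformed_noise_std(500.0)
print(s_low, s_high, c * kappa * sigma1 / 2.0)
```

Both values come out near 0.1 (= c κ σ₁ / 2 for the constants chosen), even though the raw noise variance at level 500 is ten times that at level 50.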

3.2 Synchronized or Multiframe Averaging


In certain applications of imaging, if the object being imaged can remain
free from motion or change of any kind (internal or external) over a long
period of time compared to the time required to record an image, it becomes
possible to acquire several frames of images of the object in precisely the same
state or condition. Then, the frames may be averaged to reduce noise; this is
known as multiframe averaging. The method may be extended to the imaging
of dynamic systems whose movements follow a rhythm or cycle with phases
that can be determined by another signal, such as the cardiac system whose
phases of contraction are indicated by the ECG signal. Then, several image
frames may be acquired at the same phase of the rhythmic movement over
successive cycles, and averaged to reduce noise. Such a process is known as
synchronized averaging. The process may be repeated or triggered at every
phase of interest. (Note: A process as above in nuclear medicine imaging may
be viewed simply as counting the photons emitted over a long period of time
in total, albeit in a succession of short intervals gated to a particular phase
of contraction of the heart. Ignoring the last step of division by the number
of frames to obtain the average, the process simply accumulates the photon
counts over the frames acquired.)
Synchronized averaging is a useful technique in the acquisition of several biomedical signals [31]. Observe that averaging as above is a form of ensemble averaging.
Let us represent a single image frame in a situation as above as

g_i(x, y) = f(x, y) + η_i(x, y),   (3.33)

where g_i(x, y) is the ith observed frame of the image f(x, y), and η_i(x, y) is the noise in the same frame. Let us assume that the noise process is independent of the signal source. Observe that the desired (original) image f(x, y) is invariant from one frame to another. It follows that σ²_{g_i}(x, y) = σ²_{η_i}(x, y); that is, the variance at every pixel in the observed noisy image is equal to the corresponding variance of the noise process.
If M frames of the image are acquired and averaged, the averaged image is given by

ḡ(x, y) = (1/M) Σ_{i=1}^{M} g_i(x, y).   (3.34)

If the mean of the noise process is zero, we have (1/M) Σ_{i=1}^{M} η_i(x, y) → 0 as M → ∞ (in practice, as the number of frames averaged increases to a large number). Then, it follows that [8]

E[ḡ(x, y)] = f(x, y),   (3.35)

and

σ²_ḡ(x, y) = (1/M) σ²_η(x, y).   (3.36)

Thus, the variance at every pixel in the averaged image is reduced by a factor of 1/M from that in a single frame; the SNR is improved by the factor √M.
The most important requirement in this procedure is that the frames being averaged be mutually synchronized, aligned, or registered. Any motion, change, or displacement between the frames will lead to smearing and distortion.
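The 1/M variance reduction of Equation 3.36 is easy to verify numerically; the frame size, noise level, and number of frames below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(21)

f = np.full((128, 128), 60.0)     # noise-free image
sigma = 10.0                      # per-frame noise standard deviation
M = 16                            # number of frames

# M registered frames, each degraded by independent zero-mean noise
# (Equation 3.33), then averaged (Equation 3.34).
frames = f + rng.normal(0.0, sigma, (M,) + f.shape)
g_bar = frames.mean(axis=0)

# Equation 3.36: the noise variance is reduced by a factor of 1/M.
single_var = (frames[0] - f).var()
avg_var = (g_bar - f).var()
print(single_var, avg_var, sigma**2 / M)
```

The residual variance of the averaged image is close to σ²/M, that is, a sixteen-fold reduction for M = 16.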
Example: Figure 3.12 (a) shows a test image with several geometrical objects placed at random. Images (b) and (c) show two examples of eight noisy frames of the test image that were obtained by adding Gaussian-distributed random noise samples. The results of averaging two, four, and eight noisy frames [including the two in (b) and (c)] are shown in parts (d), (e), and (f), respectively. It is seen that averaging using increasing numbers of frames of the noisy image leads to a reduction of the noise; the decreasing trend in the RMS values of the processed images (given in the caption of Figure 3.12) confirms the expected effect of averaging.
See Section 3.9 for an illustration of the application of multiframe averaging in confocal microscopy. See also Section 3.10 for details on gated blood-pool imaging in nuclear medicine.
FIGURE 3.12
(a) "Shapes": a 128 × 128 test image with various geometrical objects placed at random. (b) Image in (a) with Gaussian noise added, with μ = 0, σ² = 0.01 (normalized), RMS error = 19.32. (c) Second version of noisy image, RMS error = 19.54. Result of multiframe averaging using (d) the two frames in (b) and (c), RMS error = 15.30; (e) four frames, RMS error = 12.51; (f) eight frames, RMS error = 10.99.

3.3 Space-domain Local-statistics-based Filters


Consider the practical situation when we are given a single, noisy observation of an image of finite size. We do not have access to an ensemble of images to perform multiframe (synchronized) averaging, and spatial statistics computed over the entire image frame will lead to scalar values that do not assist in removing the noise and obtaining a cleaner image. Furthermore, we should also accommodate for nonstationarity of the image. In such situations, moving-window filtering using windows of small size such as 3 × 3, 5 × 5, or 7 × 7 pixels becomes a valuable option; rectangular windows as well as windows of other shapes may also be considered where appropriate. Various statistical parameters of the pixels within such a moving window may be computed, with the result being applied to the pixel in the output image at the same location where the window is placed (centered) on the input image; see Figure 3.13. Observe that only the pixel values in the given (input) image are used in the filtering process; the output is stored in a separate array. Figure 3.14 illustrates a few different neighborhood shapes that are commonly used in moving-window image filtering [192].
FIGURE 3.13
Moving-window filtering of an image. The size of the moving window in the illustration is 5 × 5 pixels. Statistics computed by using the pixels within the window are applied to the pixel at the same location in the output image. The moving window is shown for two pixel locations marked # and @.
(a) 3 × 3 square (8-connected); (b) 4-connected, or integer distance 1; (c) 3 × 1 bar; (d) 1 × 3 bar; (e) 5 × 5 square; (f) cross; (g) 5 × 1 bar; (h) 1 × 5 bar; (i) circle; (j) integer distance 2; (k) X-1; (l) X-2.

FIGURE 3.14
A few commonly used moving-window neighborhood shapes for image filtering. The result computed by using the pixels within a window is applied to the pixel at the location of its center, shown shaded, in the output image.
3.3.1 The mean filter
If we were to select the pixels in a small neighborhood around the pixel to be processed, the following assumptions may be made:
• the image component is relatively constant; that is, the image is quasistationary; and
• the only variations in the neighborhood are due to noise.
Further assumptions regarding the noise process that are typically made are that it is additive, is independent of the image, and has zero mean. Then, if we were to take the mean of the pixels in the neighborhood, the result will tend toward the true pixel value in the original, uncorrupted image. In essence, a spatial collection of pixels around the pixel being processed is substituted for an ensemble of pixels at the same location from multiple frames in the averaging process; that is, the image-generating process is assumed to be ergodic.
It is common to use a 3 × 3 or 8-connected neighborhood as in Figure 3.14 (a) for mean filtering. Then, the output of the filter g(m, n) is given by

g(m, n) = (1/9) Σ_{α=−1}^{1} Σ_{β=−1}^{1} f(m + α, n + β),   (3.37)

where f(m, n) is the input image. The summation above may be expanded as

g(m, n) = (1/9) [ f(m − 1, n − 1) + f(m − 1, n) + f(m − 1, n + 1)
+ f(m, n − 1) + f(m, n) + f(m, n + 1)   (3.38)
+ f(m + 1, n − 1) + f(m + 1, n) + f(m + 1, n + 1) ].

The same result is also achieved via convolution of the image f(m, n) with the 3 × 3 array or mask

(1/9) [ 1 1 1
        1 1 1
        1 1 1 ].   (3.39)

Note that the operation above cannot be directly applied at the edges of the input image array; it is common to extend the input array with a border of zero-valued pixels to permit filtering of the pixels at the edges. One may also elect not to process the pixels at the edges, or to replace them with the average of the available neighbors.
The mean filter can suppress Gaussian and uniformly distributed noise effectively in relatively homogeneous areas of an image. However, the operation leads to blurring at the edges of the objects in the image, and also to the loss of fine details and texture. Regardless, mean filtering is commonly employed to remove noise and smooth images. The blurring of edges may be prevented to some extent by not applying the mean filter if the difference between the pixel being processed and the mean of its neighbors is greater than a certain threshold; this condition, however, makes the filter nonlinear.

3.3.2 The median filter
The median of a collection of samples is the value that splits the population in half: half the number of pixels in the collection will have values less than the median, and half will have values greater than the median. In small populations of pixels, under the constraint that the result be an integer, approximations will have to be made: the most common procedure rank-orders the pixels in a neighborhood containing an odd number of pixels, and the pixel value at the middle of the list is selected as the median. The procedure also permits the application of order-statistic filters [193]: the ith element in a rank-ordered list of values is known as the ith order statistic. The median filter is an order-statistic filter of order N/2, where N is the size of the filter, that is, the number of values used to derive the output.
The median filter is a nonlinear filter. Its success in filtering depends upon the number of the samples used to derive the output, as well as the spatial configuration of the neighborhood used to select the samples.
The median filter provides better noise removal than the mean filter without blurring, especially when the noise has a long-tailed PDF (resulting in outliers) and in the case of salt-and-pepper noise. However, the median filter could result in the clipping of corners and distortion of the shape of sharp-edged objects; median filtering with large neighborhoods could also result in the complete elimination of small objects. Neighborhoods that are not square in shape are often used for median filtering in order to limit the clipping of corners and other types of distortion of shape; see Figure 3.14.
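The contrast between the two filters on impulse noise can be sketched in one dimension (the signal values and window length are arbitrary, in the spirit of the examples that follow):

```python
import numpy as np

# A 1D rectangular pulse corrupted by two impulse (shot) noise spikes.
signal = np.zeros(20)
signal[8:14] = 100.0
noisy = signal.copy()
noisy[3] = 200.0
noisy[16] = 200.0

def sliding_filter(x, n, op):
    """Apply op (np.mean or np.median) over a sliding window of n samples."""
    half = n // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([op(padded[i:i + n]) for i in range(len(x))])

mean_out = sliding_filter(noisy, 3, np.mean)
median_out = sliding_filter(noisy, 3, np.median)

# The median removes the isolated impulses exactly and preserves the
# pulse edges; the mean only spreads the impulses and blurs the edges.
print(np.max(np.abs(median_out - signal)), np.max(np.abs(mean_out - signal)))
```

With a window of three samples, the median output matches the clean signal exactly, while the mean output retains large residual errors at the impulses and pulse edges.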
Examples: Figure 3.15 (a) shows a 1D test signal with a rectangular pulse; part (b) of the same figure shows the test signal degraded with impulse (shot) noise. The results of filtering the noisy signal using the mean and median with filter length N = 3 are shown in plots (c) and (d), respectively, of Figure 3.15. The mean filter has blurred the edges of the pulse; it has also created artifacts in the form of small hills and valleys. The median filter has removed the noise without distorting the signal.
Figure 3.16 (a) shows a 1D test signal with two rectangular pulses; part (b) of the same figure shows the test signal degraded with uniformly distributed noise. The results of filtering the noisy signal using the mean and median with filter length N = 5 are shown in plots (c) and (d), respectively, of Figure 3.16. The mean filter has reduced the noise level, but has also blurred the edges of the pulses; in addition, the strength of the first, short pulse has been reduced. The median filter has removed the noise to some extent without distorting the edges of the long pulse; however, the short pulse has been obliterated.
FIGURE 3.15
(a) A 1D test signal with a rectangular pulse. (b) Degraded signal with impulse or shot noise. Result of filtering the degraded signal using (c) the mean and (d) the median operation with a sliding window of N = 3 samples.
FIGURE 3.16
(a) A 1D test signal with two rectangular pulses. (b) Degraded signal with uniformly distributed noise. Result of filtering the degraded signal using (c) the mean, and (d) the median operation with a sliding window of N = 5 samples.
Figure 3.17 shows the original test image \Shapes", the test image degraded
by the addition of Gaussian-distributed random noise with  = 0 and 2 =
0:01 (normalized), and the results of ltering the noisy image with the 3  3
and 5  5 mean and median lters. The RMS errors of the noisy and ltered
images with respect to the test image are given in the gure caption. All of
the lters except the 3  3 median have led to an increase in the RMS error.
The blurring eect of the mean lter is readily seen in the results. Close
observation of the result of 3  3 median ltering Figure 3.17 (d)] shows that
the lter has resulted in distortion of the shapes, in particular, clipping of
the corners of the objects. The 5  5 median lter has led to the complete
removal of small objects see Figure 3.17 (f). Observe that the results of the
3  3 mean and 5  5 median lters have similar RMS error values however,
the blurring eect in the former case, and the distortion of shape information
as well as the loss of small objects in the latter case need to be considered
carefully.
Figure 3.18 gives a similar set of images with the noise being Poisson distributed. A comparable set with speckle noise is shown in Figure 3.19. Although the filters have reduced the noise to some extent, the distortions introduced have led to increased RMS errors for all of the results.
Figures 3.20 and 3.21 show two cases with salt-and-pepper noise, the density of pixels affected by noise being 0.05 and 0.1, respectively. The 3 × 3 median filter has given good results in both cases, with the lowest RMS error and the least distortion. The 5 × 5 median filter has led to significant shape distortion and the loss of a few small features.
Figure 3.22 shows the normalized histograms of the Shapes test image and its degraded versions with Gaussian, Poisson, and speckle noise. It is evident that the signal-dependent Poisson noise and speckle noise have affected the histogram in a different manner compared to the signal-independent Gaussian noise.
Figure 3.23 shows the results of filtering the Peppers test image affected by Gaussian noise. Although the RMS errors of the filtered images are low compared to that of the noisy image, the filters have introduced a mottled appearance and fine texture in the smooth regions of the original image. Figure 3.24 shows the case with Poisson noise, where the 5 × 5 filters have provided visually good results, regardless of the RMS errors.
All of the filters have performed reasonably well in the presence of speckle noise, as illustrated in Figure 3.25, in terms of the reduction of RMS error. However, the visual quality of the images is poor.
Figures 3.26 and 3.27 show that the median filter has provided good results in filtering salt-and-pepper noise. Although the RMS values of the results of the mean filters are lower than those of the noisy images, visual inspection of the results indicates the undesirable effects of blurring and a mottled appearance.
The RMS error (or the MSE) is commonly used to compare the results of various image processing operations; however, the examples presented above illustrate the limitations of using the RMS error in comparing images with different types of artifact and distortion. In some of the results shown, an image with a higher RMS error may present better visual quality than another image with a lower RMS error. Visual inspection and analysis of the results by qualified users or experts in the domain of application is important. It is also important to test the proposed methods with phantoms or test images that demonstrate the characteristics that are relevant to the specific application being considered. Assessment of the advantages provided by the filtered results in further processing and analysis, such as the effects on diagnostic accuracy in the case of medical images, is another approach to evaluate the results of filtering.

3.3.3 Order-statistic filters

The class of order-statistic filters [193] is large, and includes several nonlinear filters that are useful in filtering different types of noise in images. The first step in order-statistic filtering is to rank-order, from the minimum to the maximum, the pixel values in an appropriate neighborhood of the pixel being processed. The ith entry in the list is the output of the ith order-statistic filter. A few order-statistic filters of particular interest are the following:

• Min filter: the first entry in the rank-ordered list, useful in removing high-valued impulse noise (isolated bright spots or "salt" noise).

• Max filter: the last entry in the rank-ordered list, useful in removing low-valued impulse noise (isolated dark spots or "pepper" noise).

• Min/Max filter: sequential application of the Min and Max filters, useful in removing salt-and-pepper noise.

• Median filter: the entry in the middle of the list. The median filter is the most popular and commonly used filter among the order-statistic filters; see Section 3.3.2 for detailed discussion and illustration of the median filter.

• α-trimmed mean filter: the mean of a reduced list in which the first α and the last α fractions of the rank-ordered list are rejected, with 0 ≤ α < 0.5. Outliers, that is, pixels with values very different from the rest of the pixels in the list, are rejected by the trimming process. A value close to 0.5 for α rejects the entire list except the median or a few values close to it, and the output is close to or equal to that of the median filter. The mean of the trimmed list provides a compromise between the generic mean and median filters.

• L-filters: a weighted combination of all of the elements in the rank-ordered list. The use of appropriate weights can provide outputs equal
FIGURE 3.17
(a) Shapes test image. (b) Image in (a) with Gaussian noise added, with μ = 0, σ² = 0.01 (normalized), RMS error = 19.56. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 22.62; (d) 3 × 3 median, RMS error = 15.40; (e) 5 × 5 mean, RMS error = 28.08; (f) 5 × 5 median, RMS error = 22.35.
FIGURE 3.18
(a) Shapes test image. (b) Image in (a) with Poisson noise, RMS error = 5.00. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 19.40; (d) 3 × 3 median, RMS error = 13.19; (e) 5 × 5 mean, RMS error = 25.85; (f) 5 × 5 median, RMS error = 23.35.
FIGURE 3.19
(a) Shapes test image. (b) Image in (a) with speckle noise, with μ = 0, σ² = 0.04 (normalized), RMS error = 12.28. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 20.30; (d) 3 × 3 median, RMS error = 15.66; (e) 5 × 5 mean, RMS error = 26.32; (f) 5 × 5 median, RMS error = 24.56.
FIGURE 3.20
(a) Shapes test image. (b) Image in (a) with salt-and-pepper noise added, with density = 0.05, RMS error = 40.99. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 24.85; (d) 3 × 3 median, RMS error = 14.59; (e) 5 × 5 mean, RMS error = 28.24; (f) 5 × 5 median, RMS error = 23.14.
FIGURE 3.21
(a) Shapes test image. (b) Image in (a) with salt-and-pepper noise added, with density = 0.1, RMS error = 56.32. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 29.87; (d) 3 × 3 median, RMS error = 15.42; (e) 5 × 5 mean, RMS error = 31.25; (f) 5 × 5 median, RMS error = 23.32.
FIGURE 3.22
Normalized histograms of (a) the Shapes test image and of the image with (b) Gaussian noise, (c) Poisson noise, and (d) speckle noise. The first histogram has been scaled to display the range of probability (0, 0.05) only; the remaining histograms have been scaled to display the range (0, 0.015) only in order to show the important details. The probability values of gray levels 0 and 255 have been clipped in some of the histograms. Each histogram represents the gray-level range of [0, 255].
FIGURE 3.23
(a) "Peppers": a 512 × 512 test image. (b) Image in (a) with Gaussian noise added, with μ = 0, σ² = 0.01 (normalized), RMS error = 25.07. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 13.62; (d) 3 × 3 median, RMS error = 13.44; (e) 5 × 5 mean, RMS error = 16.17; (f) 5 × 5 median, RMS error = 13.47.
FIGURE 3.24
(a) Peppers test image. (b) Image in (a) with Poisson noise, RMS error = 10.94. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 11.22; (d) 3 × 3 median, RMS error = 8.56; (e) 5 × 5 mean, RMS error = 15.36; (f) 5 × 5 median, RMS error = 10.83.
FIGURE 3.25
(a) Peppers test image. (b) Image in (a) with speckle noise, with μ = 0, σ² = 0.04 (normalized), RMS error = 26.08. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 13.68; (d) 3 × 3 median, RMS error = 15.73; (e) 5 × 5 mean, RMS error = 16.01; (f) 5 × 5 median, RMS error = 14.66.
FIGURE 3.26
(a) Peppers test image. (b) Image in (a) with salt-and-pepper noise added, with density = 0.05, RMS error = 30.64. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 15.17; (d) 3 × 3 median, RMS error = 7.38; (e) 5 × 5 mean, RMS error = 16.96; (f) 5 × 5 median, RMS error = 10.41.
FIGURE 3.27
(a) Peppers test image. (b) Image in (a) with salt-and-pepper noise added, with density = 0.1, RMS error = 43.74. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, RMS error = 18.98; (d) 3 × 3 median, RMS error = 8.62; (e) 5 × 5 mean, RMS error = 18.71; (f) 5 × 5 median, RMS error = 11.11.
to those of all of the filters listed above, and facilitate the design of several order-statistic-based nonlinear filters.

Order-statistic filters represent a family of nonlinear filters that have gained popularity in image processing due to their ability to remove several types of noise without blurring edges, and due to their simple implementation.
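A minimal sketch of these filters on a single window in Python (assuming NumPy; the sliding-window mechanics and border handling are omitted for brevity):

```python
import numpy as np

def order_statistic(window, i):
    """Return the i-th entry (0-based) of the rank-ordered window:
    i = 0 gives the Min filter, i = size - 1 the Max, the middle the median."""
    return np.sort(window, axis=None)[i]

def alpha_trimmed_mean(window, alpha):
    """Mean of the rank-ordered list after dropping a fraction alpha
    from each end (0 <= alpha < 0.5)."""
    s = np.sort(window, axis=None)
    k = int(alpha * s.size)
    return s[k:s.size - k].mean()

# One 3 x 3 neighborhood with one bright and one dark outlier.
w = np.array([[10, 10, 10],
              [10, 200, 10],
              [10, 10, 0]])

print(order_statistic(w, 0))       # 0   (Min)
print(order_statistic(w, 8))       # 200 (Max)
print(order_statistic(w, 4))       # 10  (median)
print(alpha_trimmed_mean(w, 0.2))  # 10.0 -- both outliers trimmed
```

With alpha = 0.2 on nine samples, one value is trimmed from each end of the sorted list, so both impulses are rejected before averaging, illustrating the compromise between the mean and the median.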

3.4 Frequency-domain Filters

Transforming an image from the space domain to the frequency domain using a transform such as the Fourier transform provides advantages in filtering and noise removal. Most images of natural beings, entities, and scenes vary slowly and smoothly across space, and are usually devoid of step-like changes. As a consequence, such images have most of their energy concentrated in small regions around (u, v) = (0, 0) in their spectra. On the other hand, uncorrelated random noise fields have a uniform, flat, or "white" spectrum, with an almost-constant energy level across the entire frequency space. This leads to the common observation that the SNR of a noisy, natural image is higher in low-frequency regions than in high-frequency regions. It becomes evident, then, that such images may be improved in appearance by suppressing or removing their high-frequency components beyond a certain cut-off frequency. It should be recognized, however, that, in removing high-frequency components along with the noise components, some desired image components will also be sacrificed. Furthermore, noise components in the low-frequency passband will continue to remain in the image.
The procedure for Fourier-domain filtering of an image f(m, n) involves the following steps:

1. Compute the 2D Fourier transform F(k, l) of the image. This may require padding the image with zeros to increase its size to an N × N array, with N being an integral power of 2, if an FFT algorithm is to be used.

2. Design or select an appropriate 2D filter transfer function H(k, l).

3. Obtain the filtered image (in the Fourier domain) as

G(k, l) = H(k, l) F(k, l). (3.40)

It is common to define H(k, l) as a real function, thereby affecting only the magnitude of the input image spectrum; the phase remains unchanged. Depending upon the definition or computation of H(k, l), the spectrum F(k, l) may have to be centered or folded (see Figures 2.26, 2.27, and 2.28).

4. Compute the inverse Fourier transform of G(k, l). If F(k, l) was folded prior to filtering, it must be unfolded prior to the inverse transformation.

5. If the input image was zero-padded, trim the resulting image g(m, n).
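The five steps can be sketched in Python (a simplified illustration using NumPy's FFT routines; the padding and centering conventions shown are one possible choice, not the only one):

```python
import numpy as np

def freq_filter(img, make_H):
    """Sketch of the five filtering steps: zero-pad to a power of 2,
    take the FFT, multiply by a real transfer function H built on a
    centered frequency grid, invert, and trim back to the input size."""
    M, N = img.shape
    P = 1 << max(M - 1, N - 1).bit_length()   # next power of 2 covering both
    F = np.fft.fft2(img, s=(P, P))            # step 1 (padding included)
    F = np.fft.fftshift(F)                    # center (fold) the spectrum
    H = make_H(P)                             # step 2: design H(k, l)
    G = H * F                                 # step 3
    g = np.fft.ifft2(np.fft.ifftshift(G))     # step 4: unfold, then invert
    return np.real(g[:M, :N])                 # step 5: trim

# An all-pass H recovers the input -- a sanity check of the pipeline.
img = np.arange(36, dtype=float).reshape(6, 6)
out = freq_filter(img, lambda P: np.ones((P, P)))
print(np.allclose(out, img))  # True
```

Any real, centered transfer function (such as the lowpass filters of the following subsection) can be passed in as `make_H`.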
Although the discussion above concentrates on the removal of noise, it should be noted that frequency-domain filtering permits the removal of several types of artifacts, as well as the selection of the desired frequency components in the image in various manners. The transfer function H(k, l) may be specified in several ways in the frequency domain; phase corrections may be introduced in a separate step. It should be recognized that, while dealing with real-valued images, H(k, l) should maintain (conjugate) symmetry in the frequency plane; it is common to use isotropic filter functions.

3.4.1 Removal of high-frequency noise

Lowpass filters are useful in removing high-frequency noise, under the assumption that the noise is additive and that the Fourier components of the original image past a certain frequency cutoff are negligible. The so-called ideal lowpass filter is defined in the 2D Fourier space as

H(u, v) = 1 if D(u, v) ≤ D₀, and 0 otherwise, (3.41)

where D(u, v) = √(u² + v²) is the distance of the frequency component at (u, v) from the DC point (u, v) = (0, 0), with the spectrum being positioned such that the DC is at its center (see Figures 2.26, 2.27, and 2.28). [Note: The coordinates (u, v) and (k, l) are used interchangeably to represent the 2D frequency space.] D₀ is the cutoff frequency, beyond which all components of the Fourier transform of the given image are set to zero. Figure 3.28 (a) shows the ideal lowpass filter function. Figure 3.29 shows profiles of the ideal and Butterworth lowpass filters.
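Equation 3.41 can be sketched on a discrete, centered frequency grid in Python (assuming NumPy; the mapping of the normalized cutoff D₀ to grid units is an illustrative assumption):

```python
import numpy as np

def ideal_lowpass(P, D0):
    """Ideal lowpass transfer function on a P x P centered grid:
    H = 1 where D(u, v) <= D0 (D0 given as a fraction of the maximum
    frequency along an axis), 0 elsewhere -- Equation 3.41."""
    u = np.arange(P) - P // 2                       # centered frequency axis
    D = np.sqrt(u[:, None] ** 2 + u[None, :] ** 2)  # radial distance from DC
    return (D <= D0 * (P // 2)).astype(float)

H = ideal_lowpass(64, 0.4)
print(H[32, 32])  # 1.0 -- the DC point passes
print(H[32, 52])  # 0.0 -- distance 20 exceeds 0.4 * 32 = 12.8
```

The resulting mask is the circular function of Figure 3.28 (a): gain 1 inside the radius D₀ and 0 outside, with the abrupt transition that causes the ringing artifact discussed next.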
Example: Figures 3.30 (a) and (b) show the Shapes test image and a noisy version with Gaussian noise added. Figure 3.31 shows the Fourier magnitude spectra of the original and noisy versions of the Shapes image. The sharp edges in the image have resulted in significant high-frequency energy in the spectrum of the image shown in Figure 3.31 (a); in addition, the presence of strong features at certain angles in the image has resulted in the concentration of energy in the corresponding angular bands in the spectrum. The spectrum of the noisy image, shown in Figure 3.31 (b), demonstrates the presence of additional and increased high-frequency components.
Figure 3.30 (c) shows the noisy image filtered using an ideal lowpass filter with the normalized cutoff D₀ = 0.4 times the highest frequency along the u or v axis [see Figure 3.28 (a)]. A glaring artifact is readily seen in the result of the filter: while the noise has been reduced, faint echoes of the edges present in the image have appeared in the result. This is due to the
FIGURE 3.28
(a) The magnitude transfer function of an ideal lowpass filter. The cutoff frequency D₀ is 0.4 times the maximum frequency (that is, 0.2 times the sampling frequency). (b) The magnitude transfer function of a Butterworth lowpass filter, with normalized cutoff D₀ = 0.4 and order n = 2. The (u, v) = (0, 0) point is at the center. The gain is proportional to the brightness (white represents 1.0 and black represents 0.0).
FIGURE 3.29
Profiles of the magnitude transfer functions of an ideal lowpass filter (solid line) and a Butterworth lowpass filter (dashed line), with normalized cutoff D₀ = 0.4 and order n = 2. The filter gain is plotted against normalized frequency.
fact that the inverse Fourier transform of the circular ideal filter defined in Equation 3.41 is a Bessel function (see Figure 2.31 for the circle-Bessel Fourier transform pair). Multiplication of the Fourier transform of the image with the circle function is equivalent to convolution of the image in the space domain with the corresponding Bessel function. The ripples or lobes of the Bessel function lead to echoes of strong edges, an artifact known as the ringing artifact. The example illustrates that the "ideal" filter's abrupt transition from the passband to the stopband is, after all, not a desirable characteristic.
The Butterworth lowpass filter: Prevention of the ringing artifacts encountered with the ideal lowpass filter requires that the transition from the passband to the stopband (and vice versa in the case of highpass filters) be smooth. The Butterworth filter is a commonly used frequency-domain filter due to its simplicity of design and the property of a maximally flat magnitude response in the passband. For a 1D Butterworth lowpass filter of order n, the first 2n − 1 derivatives of the squared magnitude response are zero at ω = 0, where ω represents the radian frequency. The Butterworth filter response is monotonic in the passband as well as in the stopband. (See Rangayyan [31] for details and illustrations of the 1D Butterworth filter.)
In 2D, the Butterworth lowpass filter is defined as [8]

H(u, v) = 1 / {1 + (√2 − 1) [D(u, v)/D₀]^(2n)}, (3.42)

where n is the order of the filter, D(u, v) = √(u² + v²), and D₀ is the half-power 2D radial cutoff frequency [the scale factor in the denominator leads to the gain of the filter being 1/√2 at D(u, v) = D₀]. The filter's transition from the passband to the stopband becomes steeper as the order n is increased. Figures 3.28 (b) and 3.29 illustrate the magnitude (gain) of the Butterworth lowpass filter with the normalized cutoff D₀ = 0.4 and order n = 2.
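Equation 3.42 can be sketched on the same kind of centered grid as before (Python with NumPy; the scaling of the normalized cutoff to grid units is again an illustrative assumption):

```python
import numpy as np

def butterworth_lowpass(P, D0, n):
    """Butterworth lowpass gain of Equation 3.42 on a P x P centered grid;
    the (sqrt(2) - 1) factor puts the half-power point (gain 1/sqrt(2))
    exactly at D = D0."""
    u = np.arange(P) - P // 2
    D = np.sqrt(u[:, None] ** 2 + u[None, :] ** 2)
    Dc = D0 * (P // 2)  # cutoff expressed in grid units (assumed mapping)
    return 1.0 / (1.0 + (np.sqrt(2) - 1.0) * (D / Dc) ** (2 * n))

H = butterworth_lowpass(64, 0.4, 2)
print(H[32, 32])  # 1.0 at DC; the gain then decays monotonically
```

Unlike the ideal filter's binary mask, every value here lies strictly between 0 and 1 away from DC, which is what suppresses the ringing artifact.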
Example: The result of filtering the noisy Shapes image in Figure 3.30 (b) with the Butterworth lowpass filter as above is shown in Figure 3.30 (d). It is seen that noise has been suppressed in the filtered image without causing the ringing artifact; however, the filter has caused some blurring of the edges of the objects in the image.
Example: Figure 3.32 (a) shows a test image of a clock. The image was contaminated by a significant amount of noise, suspected to be due to poor shielding of the video-signal cable between the camera and the digitizing frame buffer. Part (b) of the figure shows the log-magnitude spectrum of the image. The sinc components due to the horizontal and vertical lines in the image are readily seen on the axes of the spectrum. Radial concentrations of energy are also seen in the spectrum, related to the directional (or oriented) components of the image. Part (c) of the figure shows the result of application of the ideal lowpass filter with cutoff D₀ = 0.4. The strong ringing artifacts caused by the filter render the image useless, although some noise reduction has

FIGURE 3.30
(a) The Shapes test image. (b) The test image with Gaussian noise having a normalized variance of 0.01 added. (c) The result of ideal lowpass filtering the noisy image, with normalized cutoff D₀ = 0.4; see Figure 3.28. (d) The result of filtering with a Butterworth lowpass filter having D₀ = 0.4 and order n = 2. See also Figure 3.31.
FIGURE 3.31
The centered (folded) Fourier log-magnitude spectrum of (a) the Shapes image in Figure 3.30 (a) and (b) the noisy Shapes image in Figure 3.30 (b).

been achieved. Part (d) of the figure shows the result of filtering using a Butterworth lowpass filter, with D₀ = 0.4 and order n = 2. The noise in the image has been suppressed well without causing any artifact (except blurring or smoothing).
See Section 3.9 for an illustration of the application of frequency-domain lowpass filters to an image obtained with a confocal microscope.

3.4.2 Removal of periodic artifacts

Periodic components in images give rise to impulse-like and periodic concentrations of energy in their Fourier spectra. This characteristic facilitates the removal of periodic artifacts through selective band-reject, notch, or comb filtering [31].
Example: Figure 3.33 (a) shows a part of an image of a mammographic phantom acquired with the grid (bucky) remaining fixed. The white objects simulate calcifications found in mammograms. The projections of the grid strips have suppressed the details in the phantom. Figure 3.33 (b) shows the log-magnitude Fourier spectrum of the image, where the periodic concentrations of energy along the v axis as well as along the corresponding horizontal strips are related to the grid lines. Figure 3.33 (d) shows the spectrum with selected regions corresponding to the artifactual components set to zero. Figure 3.33 (c) shows the corresponding filtered image, where the grid lines have been almost completely removed.
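The idea can be sketched on a synthetic example in Python (assuming NumPy; the sinusoidal "grid" artifact and the notch positions are contrived for illustration and stand in for the measured spectral peaks of a real image):

```python
import numpy as np

# A toy image: a smooth vertical ramp plus a sinusoid of period 8 rows,
# standing in for the periodic grid-line artifact.
P = 64
rows = np.arange(P)[:, None] * np.ones((1, P))
img = rows / P + 0.5 * np.cos(2 * np.pi * rows / 8)

F = np.fft.fftshift(np.fft.fft2(img))

# Notch out the two impulses the sinusoid creates: the period-8 variation
# along the rows concentrates at +/- P/8 cycles on the vertical frequency
# axis, at zero horizontal frequency.
c = P // 2
F[c + P // 8, c] = 0.0
F[c - P // 8, c] = 0.0

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# The periodic component is suppressed; the ramp survives almost intact.
print(np.abs(filtered - rows / P).max() < 0.05)  # True
```

As the text notes for the mammographic phantom, the notch also removes the small amount of desired-image energy that happens to fall in the same bins, which is the source of the residual distortion.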
FIGURE 3.32
(a) Clock test image (101 × 101 pixels). (b) Log-magnitude spectrum of the image. (c) Result of the ideal lowpass filter, D₀ = 0.4. (d) Result of the Butterworth lowpass filter, with D₀ = 0.4 and order n = 2.
FIGURE 3.33
(a) Part of an image of a mammographic phantom with grid artifact; see also Figure 3.34. (b) Log-magnitude Fourier spectrum of the image in (a). (c) Filtered image. (d) Filtered version of the spectrum in (b). Phantom image courtesy of L.J. Hahn, Foothills Hospital, Calgary.
Figure 3.34 (a) shows a corresponding image of the phantom acquired with the bucky moving in the recommended manner; the image is free of the grid-line artifact. Figure 3.34 (b) shows the corresponding log-magnitude Fourier spectrum, which is also free of the artifactual components that are seen in the spectrum in Figure 3.33 (b). It should be noted that removing the artifactual components as indicated by the spectrum in Figure 3.33 (d) leads to the loss of the frequency-domain components of the desired image in the same regions, which could lead to some distortion in the filtered image.

FIGURE 3.34
(a) Part of an image of a mammographic phantom with no grid artifact; compare with the image in Figure 3.33 (a). (b) Log-magnitude Fourier spectrum of the image in (a). Phantom image courtesy of L.J. Hahn, Foothills Hospital, Calgary.

3.5 Matrix Representation of Image Processing

We have thus far expressed images as 2D arrays, and transform and image processing procedures with operations such as addition, multiplication, integration, and convolution of arrays. Such expressions define the images and the related entities on a point-by-point, pixel-by-pixel, or single-element basis. The design of optimal filters and statistical estimation procedures requires the application of operators such as differentiation and statistical expectation to expressions involving images and image processing operations. The array or single-element form of representation of images and operations does not lend itself easily to such procedures. It would be convenient if we could represent a whole image as a single algebraic entity, and if image processing operations could be expressed using basic algebraic operations. In this section, we shall see that matrix representation of images, convolution, filters, and transforms facilitates efficient and compact expression of image processing, optimization, and estimation procedures.

3.5.1 Matrix representation of images

A sampled image may be represented by a matrix as

f = {f(m, n) : m = 0, 1, 2, ..., M − 1; n = 0, 1, 2, ..., N − 1}. (3.43)

The matrix has M rows, each with N elements; the matrix has N columns. Matrix methods may then be used in the analysis of images and in the derivation of optimal filters. (See Section 2.3.3 for a discussion on array and matrix representation of images.)
In treating images as matrices and mathematical entities, it should be recognized always that images are not merely arrays of numbers: certain constraints are imposed on the image matrix due to the physical properties of the image. Some of the constraints to be noted are [9]:

• Nonnegativity and upper bound: f_min ≤ f(m, n) ≤ f_max, where f_min and f_max are the minimum and maximum values, respectively, imposed by the characteristics of the object being represented by the image as well as by the image quantization process. Usually, f_min = 0, and the pixel values are constrained to be nonnegative.

• Finite energy: E_f = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f²(m, n) ≤ E_max, where E_max is a finite limit on the total energy of the image.

• Smoothness: Most images that represent real-life entities cannot change their characteristics abruptly. The difference between a pixel and the average of its immediate neighbors is usually bounded by a limit S as

| f(m, n) − (1/8) [f(m−1, n−1) + f(m−1, n) + f(m−1, n+1) + f(m, n−1) + f(m, n+1) + f(m+1, n−1) + f(m+1, n) + f(m+1, n+1)] | ≤ S. (3.44)

An M × N matrix may be converted to a vector by row ordering as

f = [f_1 f_2 ... f_M]^T, (3.45)
FIGURE 3.35
(a) Matrix representation of a 3 × 3 image, with rows [1 2 3], [4 5 6], and [7 8 9]. Vector representation of the image in (a) by: (b) row ordering, giving [1 2 3 4 5 6 7 8 9]^T, and (c) column ordering or stacking, giving [1 4 7 2 5 8 3 6 9]^T.

where f_m = [f(m, 1), f(m, 2), ..., f(m, N)]^T is the mth row vector. Column ordering may also be performed; Figure 3.35 illustrates the conversion of an image to a vector in schematic form.
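In Python with NumPy (an illustrative aside), the two orderings of Figure 3.35 correspond to the two memory layouts accepted by `flatten`:

```python
import numpy as np

# The 3 x 3 image of Figure 3.35.
f = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Row ordering concatenates the rows (C order); column ordering or
# stacking concatenates the columns (Fortran order).
row_ordered = f.flatten(order='C')  # [1 2 3 4 5 6 7 8 9]
col_ordered = f.flatten(order='F')  # [1 4 7 2 5 8 3 6 9]
```

Either choice is acceptable as long as it is used consistently when the vector is reshaped back into an image.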
Using the vector notation, we get the energy of the image as

E = f^T f = f · f = Σ_{i=1}^{MN} f²(i), (3.46)

which is the inner product or dot product of the vector with itself. The energy of the image may also be computed using the outer product as

E = Tr[f f^T], (3.47)

where Tr[ ] represents the trace (the sum of the main diagonal elements) of the resulting MN × MN matrix.
If the image elements are considered to be random variables, images may be treated as samples of stochastic processes, and characterized by their statistical properties as follows:

• Mean μ_f = E[f], which is an MN × 1 matrix or vector.
• Covariance Σ = E[(f − μ_f)(f − μ_f)^T] = E[f f^T] − μ_f μ_f^T, which is an MN × MN matrix given by

Σ = | σ²_11  σ_12  ...  σ_1P |
    | σ_21  σ²_22  ...  σ_2P |
    |  ...    ...  ...   ... |
    | σ_P1  σ_P2  ...  σ²_PP |, (3.48)

where P = MN and the matrix elements are

σ_pq = E[{f(p) − μ_f(p)}{f(q) − μ_f(q)}], p, q = 1, 2, ..., P, (3.49)

representing the covariance between the pth and qth elements of the image vector. Σ is symmetric: σ_pq = σ_qp. The diagonal terms σ²_pp are the variances of the elements of the image vector.

• Autocorrelation or scatter matrix Φ = E[f f^T], which is an MN × MN matrix. The normalized autocorrelation coefficients are defined as ρ_pq = σ_pq/(σ_pp σ_qq). Then, −1 ≤ ρ_pq ≤ 1. The normalized autocorrelation matrix is given by

ρ = |  1    ρ_12  ...  ρ_1P |
    | ρ_21   1    ...  ρ_2P |
    |  ...   ...  ...   ... |
    | ρ_P1  ρ_P2  ...   1   |. (3.50)

The absolute scale of variation is retained in the diagonal standard deviation matrix:

D = | σ_11   0    ...   0   |
    |  0    σ_22  ...   0   |
    |  ...   ...  ...   ... |
    |  0     0    ...  σ_PP |. (3.51)

Then, Σ = D ρ D.

Two image vectors f and g are

• Uncorrelated if E[f g^T] = E[f] E[g^T]. Then, the cross-covariance matrix Σ_fg is a diagonal matrix, and the cross-correlation ρ_fg = I, the identity matrix.

• Orthogonal if E[f g^T] = 0.

• Statistically independent if p(f, g) = p(f) p(g). Then, f and g are uncorrelated.

The representation of image and noise processes as above facilitates the application of optimization techniques and the design of optimal filters.
3.5.2 Matrix representation of transforms

1D transforms: In signal analysis, it is often useful to represent a signal f(t) over the interval t₀ to t₀ + T by an expansion of the form

f(t) = Σ_{k=0}^{∞} a_k φ_k(t), (3.52)

where the functions φ_k(t) are mutually orthogonal, that is,

∫_{t₀}^{t₀+T} φ_k(t) φ_l(t) dt = C if k = l, and 0 if k ≠ l. (3.53)

The functions are said to be orthonormal if C = 1.
The coefficients a_k may be obtained as

a_k = (1/C) ∫_{t₀}^{t₀+T} f(t) φ_k(t) dt, (3.54)

k = 0, 1, 2, ..., ∞; that is, a_k is the projection of f(t) on to φ_k(t).
The set of functions {φ_k(t)} is said to be complete or closed if there exists no square-integrable function f(t) for which

∫_{t₀}^{t₀+T} f(t) φ_k(t) dt = 0 ∀ k; (3.55)

that is, the function f(t) is orthogonal to all of the members of the set {φ_k(t)}. If such a function exists, it should be a member of the set in order for the set to be closed or complete.
When the set {φ_k(t)} is complete, it is said to be an orthogonal basis, and may be used for accurate representation of signals. For example, the Fourier series representation of periodic signals is based upon the use of an infinite set of sine and cosine functions of frequencies that are integral multiples (harmonics) of the fundamental frequency of the given signal. In order for such a transformation to exist, the functions f(t) and φ_k(t) must be square-integrable.
With a 1D sampled signal expressed as an N × 1 vector or column matrix, we may represent transforms using an N × N matrix L as

F = L f and f = L^T F, (3.56)

with L L^T = I. The matrix operations are equivalent to

F(k) = Σ_{n=0}^{N−1} L(k, n) f(n) and f(n) = Σ_{k=0}^{N−1} L^T(n, k) F(k), (3.57)

for k or n going 0, 1, ..., N − 1, respectively. For the DFT, we need to define the matrix L with its elements given by L(k, n) = exp[−j (2π/N) kn]. With
the notation W_N = exp(−j 2π/N), we have L(k, n) = W_N^{kn}. Then, the following representations of the DFT in array or series format and matrix multiplication format are equivalent:

F(k) = Σ_{n=0}^{N−1} f(n) exp[−j (2π/N) kn], k = 0, 1, ..., N − 1. (3.58)

| F(0)   |   | W_N^0  W_N^0        W_N^0        ...  W_N^0            W_N^0            |   | f(0)   |
| F(1)   |   | W_N^0  W_N^{1·1}    W_N^{1·2}    ...  W_N^{1(N−2)}     W_N^{1(N−1)}     |   | f(1)   |
| F(2)   | = | W_N^0  W_N^{2·1}    W_N^{2·2}    ...  W_N^{2(N−2)}     W_N^{2(N−1)}     |   | f(2)   |
|  ...   |   |  ...    ...          ...         ...   ...              ...             |   |  ...   |
| F(N−1) |   | W_N^0  W_N^{(N−1)1} W_N^{(N−1)2} ...  W_N^{(N−1)(N−2)} W_N^{(N−1)(N−1)} |   | f(N−1) |
(3.59)

However, because of the periodicity of the exponential function, we have W_N^{m(N−1)} = W_N^{−m} = W_N^{N−m} for any integer m; note that W_N^N = 1. Thus, for a given N, there are only N distinct functions W_N^k, k = 0, 1, 2, ..., N − 1. Figure 3.36 illustrates the vectors (or phasors) representing W_8^k, k = 0, 1, 2, ..., 7. This property of the exponential function reduces the W matrix to one with only N distinct values, leading to the relationship

| F(0)   |   | W_N^0  W_N^0      W_N^0      ...  W_N^0      W_N^0      |   | f(0)   |
| F(1)   |   | W_N^0  W_N^1      W_N^2      ...  W_N^{N−2}  W_N^{N−1}  |   | f(1)   |
| F(2)   | = | W_N^0  W_N^2      W_N^4      ...  W_N^{N−4}  W_N^{N−2}  |   | f(2)   |
|  ...   |   |  ...    ...        ...       ...   ...        ...       |   |  ...   |
| F(N−1) |   | W_N^0  W_N^{N−1}  W_N^{N−2}  ...  W_N^2      W_N^1      |   | f(N−1) |
(3.60)

For N = 8, we get

| F(0) |   | W^0 W^0 W^0 W^0 W^0 W^0 W^0 W^0 |   | f(0) |
| F(1) |   | W^0 W^1 W^2 W^3 W^4 W^5 W^6 W^7 |   | f(1) |
| F(2) |   | W^0 W^2 W^4 W^6 W^0 W^2 W^4 W^6 |   | f(2) |
| F(3) | = | W^0 W^3 W^6 W^1 W^4 W^7 W^2 W^5 |   | f(3) |  (3.61)
| F(4) |   | W^0 W^4 W^0 W^4 W^0 W^4 W^0 W^4 |   | f(4) |
| F(5) |   | W^0 W^5 W^2 W^7 W^4 W^1 W^6 W^3 |   | f(5) |
| F(6) |   | W^0 W^6 W^4 W^2 W^0 W^6 W^4 W^2 |   | f(6) |
| F(7) |   | W^0 W^7 W^6 W^5 W^4 W^3 W^2 W^1 |   | f(7) |

where the subscript N = 8 to W has been suppressed in order to show the structure of the matrix clearly.
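The DFT matrix can be sketched in Python with NumPy (an illustrative aside): the matrix product reproduces the FFT, and the periodicity property W_N^{m(N−1)} = W_N^{N−m} can be checked directly.

```python
import numpy as np

def dft_matrix(N):
    """The N x N matrix W of Equation 3.59, W[k, n] = exp(-j 2 pi k n / N);
    periodicity of the exponential leaves only N distinct entry values."""
    k = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(k, k) / N)

N = 8
W = dft_matrix(N)
f = np.arange(N, dtype=float)

# Matrix multiplication reproduces the DFT computed by the FFT algorithm.
print(np.allclose(W @ f, np.fft.fft(f)))       # True
# Periodicity: entry (1, 7) is W^7 = W^{8-1}, the conjugate of W^1.
print(np.allclose(W[1, 7], np.conj(W[1, 1])))  # True
```

Constructing `W` explicitly costs O(N²) storage and O(N²) multiplications per transform; the FFT factorization mentioned later avoids both.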

FIGURE 3.36
Vectors (or phasors) representing the N = 8 roots of unity, or W_8^k, k = 0, 1, 2, ..., 7, where W_8 = exp(−j 2π/8), plotted in the complex (real, imaginary) plane. Based upon a similar figure by Hall [9].
2D transforms: The transformation of an N × N image f(m, n), m = 0, 1, 2, ..., N − 1; n = 0, 1, 2, ..., N − 1, may be expressed in a generic representation as:

F(k, l) = (1/N) Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} f(m, n) φ(m, n, k, l), (3.62)

f(m, n) = (1/N) Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} F(k, l) ψ(m, n, k, l), (3.63)

where φ(m, n, k, l) is the forward transform kernel and ψ(m, n, k, l) is the inverse transform kernel. The kernel is said to be separable if φ(m, n, k, l) = φ₁(m, k) φ₂(n, l), and symmetric in addition if φ₁ and φ₂ are functionally equal. Then, the 2D transform may be computed in two simpler steps of 1D row transforms followed by 1D column transforms (or vice versa) as follows:

F₁(m, l) = Σ_{n=0}^{N−1} f(m, n) φ(n, l), m, l = 0, 1, ..., N − 1; (3.64)

F(k, l) = Σ_{m=0}^{N−1} F₁(m, l) φ(m, k), k, l = 0, 1, ..., N − 1. (3.65)

In the case of the 2D Fourier transform, we have the kernel

φ(m, n, k, l) = exp[−j (2π/N)(mk + nl)] = exp[−j (2π/N) mk] exp[−j (2π/N) nl]. (3.66)

The kernel is separable and symmetric. The 2D DFT may be expressed as

F = W f W, (3.67)

where f is the N × N image matrix, and W is a symmetric N × N matrix with W_N^{km} = exp[−j (2π/N) km]; however, due to the periodicity of the W_N function, the matrix W has only N distinct values, as shown in Equations 3.60 and 3.61.
The DFT matrix W is symmetric, with its rows and columns being mutually orthogonal:

Σ_{m=0}^{N−1} W_N^{mk} W_N^{*ml} = N if k = l, and 0 if k ≠ l. (3.68)

Then, W^{−1} = (1/N) W*, which leads to

f = (1/N²) W* F W*. (3.69)
A number of transforms such as the Fourier, Walsh{Hadamard, and discrete
cosine may be expressed as F = A f A, with the matrix A constructed using
the relevant basis functions. The transform matrices may be decomposed into
products of matrices with fewer nonzero elements, reducing redundancy and
computational requirements. The DFT matrix may be factored into a product
of 2 ln N sparse and diagonal matrices, leading to the FFT algorithm 9, 194].
The Walsh–Hadamard Transform: The orthonormal, complete set of 1D Walsh functions defined over the interval 0 ≤ x ≤ 1 is given by the iterative relationships [8, 9]

\varphi_n(x) = \begin{cases} \varphi_{[n/2]}(2x), & x < \frac{1}{2}, \\ \varphi_{[n/2]}(2x - 1), & x \geq \frac{1}{2}, \; n \text{ odd}, \\ -\varphi_{[n/2]}(2x - 1), & x \geq \frac{1}{2}, \; n \text{ even}, \end{cases}   (3.70)
210 Biomedical Image Analysis

where [n/2] is the integral part of n/2,

\varphi_0(x) = 1,   (3.71)

and

\varphi_1(x) = \begin{cases} 1, & x < \frac{1}{2}, \\ -1, & x \geq \frac{1}{2}. \end{cases}   (3.72)

The nth function \varphi_n is generated by compression of the function \varphi_{[n/2]} into its first half and \pm\varphi_{[n/2]} into its second half. Figure 3.37 shows the first eight Walsh functions sampled with 100 samples over the interval (0, 1), obtained using the definition in Equation 3.70. (Note: The ordering of the Walsh functions varies with the formulation of the functions.)
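The recursion of Equations 3.70 to 3.72 can be sketched as follows (a Python illustration; as noted above, the ordering produced by this definition is not strictly sequency ordering):

```python
import numpy as np

def walsh(n, x):
    """Walsh function phi_n evaluated at points x in [0, 1), using the
    recursive definition of Equation 3.70."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)                 # Equation 3.71
    if n == 1:
        return np.where(x < 0.5, 1.0, -1.0)    # Equation 3.72
    first = walsh(n // 2, 2 * x)               # compressed into first half
    second = walsh(n // 2, 2 * x - 1)          # compressed into second half
    sign = 1.0 if n % 2 == 1 else -1.0         # sign flips for even n
    return np.where(x < 0.5, first, sign * second)

x = (np.arange(8) + 0.5) / 8                   # 8 uniformly spaced samples
assert np.array_equal(walsh(1, x), [1, 1, 1, 1, -1, -1, -1, -1])
assert np.array_equal(walsh(2, x), [1, 1, -1, -1, -1, -1, 1, 1])
```

Sampling n = 0 to 7 at these eight points reproduces the rows of the matrix in Equation 3.73, up to the swap of n = 4 and n = 5 mentioned below.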

FIGURE 3.37
The first eight Walsh functions (n = 0, 1, ..., 7; amplitude between −1 and +1) sampled with 100 samples over the interval (0, 1).

Walsh functions are ordered by the number of zero-crossings in the interval (0, 1), called sequency. If the Walsh functions with the number of zero-crossings ≤ (2^n − 1) are sampled with N = 2^n uniformly spaced points, we get a square matrix representation that is orthogonal and has its rows ordered with increasing number of zero-crossings; for N = 8 we have

A = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1
\end{bmatrix}.   (3.73)

(Considering sequency ordering as above, the functions for n = 4 and n = 5 in Figure 3.37 need to be swapped.) The Walsh transform of a 1D signal f may then be expressed as F = A f.
Another formulation of the Walsh transform defines the forward and inverse kernel function as [8]

\varphi(n, k) = \frac{1}{N} \prod_{p=0}^{P-1} (-1)^{b_p(n) \, b_{P-1-p}(k)}   (3.74)

for 1D signals, and

\varphi(m, n, k, l) = \frac{1}{N} \prod_{p=0}^{P-1} (-1)^{\left[b_p(m) \, b_{P-1-p}(k) + b_p(n) \, b_{P-1-p}(l)\right]}   (3.75)

for 2D signals, where b_p(m) is the pth bit in the P-bit binary representation of m.
The major advantage of the Walsh transform is that the kernel has integral values of +1 and −1 only. As a result, the transform involves only addition and subtraction of the input image pixels. Furthermore, the operations involved in the forward and inverse transformation are identical.
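The bit-product formulation of Equation 3.74 can be sketched as follows; the checks illustrate that the kernel takes only the values ±1/N, and that the forward and inverse operations are identical up to the scale factor N.

```python
import numpy as np

def walsh_kernel(N):
    """1D Walsh kernel of Equation 3.74: entries (1/N)(-1)^e, where e is
    the sum of bit products b_p(n) b_{P-1-p}(k), with P = log2(N)."""
    P = N.bit_length() - 1
    bit = lambda m, p: (m >> p) & 1
    A = np.empty((N, N))
    for n in range(N):
        for k in range(N):
            e = sum(bit(n, p) * bit(k, P - 1 - p) for p in range(P))
            A[n, k] = (-1.0) ** e / N
    return A

A = walsh_kernel(8)
assert np.all(np.abs(A) == 1 / 8)                        # only +-1/N values
assert np.allclose((8 * A) @ (8 * A), 8 * np.eye(8))     # self-inverse up to N
```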
Except for the ordering of rows, the discrete Walsh matrices are equivalent to the Hadamard matrices of rank 2^n, which are constructed as follows [8]:

A_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},   (3.76)

A_{2N} = \begin{bmatrix} A_N & A_N \\ A_N & -A_N \end{bmatrix}.   (3.77)

Then, by defining A = \frac{1}{\sqrt{N}} A_N, the Walsh–Hadamard transform (WHT) of a 2D function may be expressed as

F = A f A \quad \text{and} \quad f = A F A,   (3.78)
where all matrices are of size N × N.
Figure 3.38 shows the 2D Walsh–Hadamard basis functions (matrices) for k, l = 0, 1, 2, ..., 7. The functions were generated by using the 2D equivalents of the recursive definition given for the 1D Walsh functions in Equation 3.70. Note that the ordering of the functions could vary from one definition to another.
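The recursive construction of Equations 3.76 to 3.78 can be sketched as follows (an illustration; with A = A_N / √N, the forward and inverse WHT are literally the same operation):

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of rank 2^n via the recursion of Equation 3.77,
    starting from A_2 of Equation 3.76."""
    A = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(n - 1):
        A = np.block([[A, A], [A, -A]])
    return A

N = 8
A = hadamard(3) / np.sqrt(N)       # A = A_N / sqrt(N)
f = np.arange(64.0).reshape(8, 8)
F = A @ f @ A                      # forward WHT (Equation 3.78)
assert np.allclose(A @ F @ A, f)   # the inverse is the identical operation
```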

FIGURE 3.38
The first 64 Walsh–Hadamard 2D basis functions. Black represents a pixel value of −1, and white +1. Each function was computed as an 8 × 8 matrix.

The computational and representational simplicity of the WHT is evident from the expressions above. The WHT has applications in image coding, image sequency filtering, and feature extraction for pattern recognition.

3.5.3 Matrix representation of convolution

With images represented as vectors, linear system operations (convolution and filtering) may be represented as matrix-vector multiplications [8, 9, 195]. For the sake of simplicity, let us first consider the 1D LSI system. Let us
assume the system to be causal, and to have an infinite impulse response (IIR). Then, we have the input-output relationship given by the linear convolution operation

g(n) = \sum_{\tau=0}^{n} f(\tau) \, h(n - \tau).   (3.79)

If the input is given over N samples, we could represent the output over the interval [0, N] as

g = h f,   (3.80)

or, in expanded form,

\begin{bmatrix} g(0) \\ g(1) \\ g(2) \\ \vdots \\ g(N) \end{bmatrix} = \begin{bmatrix} h(0) & 0 & \cdots & & 0 \\ h(1) & h(0) & 0 & \cdots & 0 \\ h(2) & h(1) & h(0) & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ h(N) & h(N-1) & h(N-2) & \cdots & h(0) \end{bmatrix} \begin{bmatrix} f(0) \\ f(1) \\ f(2) \\ \vdots \\ f(N) \end{bmatrix}.   (3.81)
The matrix h is a Toeplitz-like matrix, which leads to computational advantages. (Note: A Toeplitz matrix is a square matrix whose elements are equal along every diagonal.) There will be zeros in the lower-left portion of h if h(n) has fewer samples than f(n) and g(n); h is then said to be banded.
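The banded Toeplitz construction of Equation 3.81 can be sketched and checked against direct linear convolution (illustrative code, not from the text):

```python
import numpy as np

def convolution_matrix(h, N):
    """Banded, Toeplitz-like matrix of Equation 3.81: entry (n, n - k)
    holds h(k), so that (H f)(n) = sum_k h(k) f(n - k)."""
    H = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        for k in range(min(len(h), n + 1)):
            H[n, n - k] = h[k]
    return H

h = [3.0, 2.0, 1.0]
f = np.array([4.0, 1.0, 3.0, 1.0, 0.0, 0.0])   # input padded with zeros
H = convolution_matrix(h, len(f) - 1)
assert np.allclose(H @ f, np.convolve(f, h)[:len(f)])
```

Because h(n) has only three samples here, the matrix is banded: each row carries the reversed impulse response, right-shifted by one position.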
If the impulse response of the filter is of a finite duration of M + 1 samples, that is, the filter is of the finite impulse response (FIR) type, it is common to use the noncausal, moving-window representation of convolution as

g(n) = \sum_{\tau = n - M/2}^{n + M/2} f(\tau) \, h(n - \tau),   (3.82)

or

g(n) = \sum_{\tau=0}^{M} f\left(\tau + n - \frac{M}{2}\right) h\left(\frac{M}{2} - \tau\right).   (3.83)

This relationship may also be expressed as g = h f, with the matrix and vectors constructed as in Figure 3.39. The matrix h is now banded and Toeplitz-like. Furthermore, each row of h, except the first, is a right-shifted version of the preceding row. (It has been assumed that M < N.)
Another representation of convolution is the periodic or circular form, where all of the signals are assumed to be of finite duration and periodic, with the period being equal to N samples. The shifting required in convolution is interpreted as periodic shifting: samples that go out of the frame of N samples at one end will reappear at the other end [7]. The periodic convolution operation is expressed as

g_p(n) = \sum_{\tau=0}^{N-1} f_p(\tau) \, h_p([n - \tau] \bmod N),   (3.84)
FIGURE 3.39
Construction of the matrix and vectors for convolution in the case of an FIR filter: each row of the banded, Toeplitz-like matrix h contains the reversed impulse response [h(M/2), h(M/2 − 1), ..., h(0), ..., h(1 − M/2), h(−M/2)], right-shifted by one position per row, multiplying the input vector [f(−M/2), ..., f(N + M/2)]^T to yield the output vector [g(0), ..., g(N)]^T.
where the subscript p indicates the periodic version or interpretation of the signals. Note that, whereas the result of periodic convolution of two periodic signals of period N samples each is another periodic signal of the same period of N samples, the result of their linear convolution would be of duration 2N − 1 samples. However, periodic convolution may be used to achieve the same result as linear convolution by padding the signals with zeros so as to increase their duration to at least 2N − 1 samples, and then considering the extended signals to be periodic with the period of 2N − 1 samples. It should be observed that implementing convolution by taking the inverse DFT of the product of the DFTs of the two signals will result in periodic convolution of the signals; zero padding as above will be necessary in such a case if linear convolution is desired. The operation performed by a causal, stable, LSI system is linear convolution.
Periodic convolution may also be expressed as g = h f, with the matrix and vectors constructed as

\begin{bmatrix} g(0) \\ g(1) \\ \vdots \\ g(N-2) \\ g(N-1) \end{bmatrix} = \begin{bmatrix} h(0) & h(N-1) & \cdots & h(2) & h(1) \\ h(1) & h(0) & \cdots & h(3) & h(2) \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ h(N-2) & h(N-3) & \cdots & h(0) & h(N-1) \\ h(N-1) & h(N-2) & \cdots & h(1) & h(0) \end{bmatrix} \begin{bmatrix} f(0) \\ f(1) \\ \vdots \\ f(N-2) \\ f(N-1) \end{bmatrix}.   (3.85)

(The subscript p has been dropped for the sake of concise notation.) Each row of the matrix h is a right-circular shift of the previous row, and the matrix is square: such a matrix is known as a circulant matrix.
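A sketch of the circulant construction of Equation 3.85, with the signals zero-padded so that periodic convolution reproduces linear convolution (illustrative code):

```python
import numpy as np

def circulant(h):
    """Circulant matrix of Equation 3.85: row n, column k holds
    h((n - k) mod N), so each row is a right-circular shift of the
    preceding row."""
    N = len(h)
    return np.array([[h[(n - k) % N] for k in range(N)] for n in range(N)])

# Signals padded with zeros to N >= N1 + N2 - 1 = 6 samples, so that
# the circular shifts wrap around only over zero-valued samples.
h = [3.0, 2.0, 1.0, 0.0, 0.0, 0.0]
f = np.array([4.0, 1.0, 3.0, 1.0, 0.0, 0.0])
g = circulant(h) @ f
assert np.allclose(g, np.convolve([4.0, 1.0, 3.0, 1.0], [3.0, 2.0, 1.0]))
```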

3.5.4 Illustrations of convolution

Convolution is an important operation in the processing of signals and images. We encounter convolution when filtering a signal or an image with an LSI system: the output is the convolution of the input with the impulse response of the system.

Convolution in 1D: In 1D, for causal signals and systems, we have the linear convolution operation given by

g(n) = \sum_{k=0}^{n} f(k) \, h(n - k).   (3.86)

As expressed above, the signal h needs to be reversed with respect to the index of summation k, and shifted by the interval n at which the output sample is to be computed. The index n may be run over a certain range of interest or over all time for which the result g(n) exists. For each value of n, the reversed and shifted version of h is multiplied with the signal f on a point-by-point basis and summed. The multiplication and summation operations, together, are comparable to the dot product operation, performed over the nonzero overlapping parts of the two signals.
Example: Consider the signal f(n) = [4, 1, 3, 1], defined for n = 0, 1, 2, 3. Let the signal be processed by an LSI system with the impulse response h(n) = [3, 2, 1], for n = 0, 1, 2. The signal and system are assumed to be causal, that is, the values of f(n) and h(n) are zero for n < 0; furthermore, it is assumed that f and h are zero beyond the last sample provided.

The operation of convolution is illustrated in Figure 3.40. Observe the reversal and shifting of h. Observe also that the result g has more samples (or, is of longer duration) than either f or h: the result of linear convolution of two signals with N1 and N2 samples will have N1 + N2 − 1 samples (not including trailing zero-valued samples).

In the matrix notation of Equation 3.85, with the signals extended with zeros to six samples, the convolution example above is expressed as

\begin{bmatrix} 3 & 0 & 0 & 0 & 1 & 2 \\ 2 & 3 & 0 & 0 & 0 & 1 \\ 1 & 2 & 3 & 0 & 0 & 0 \\ 0 & 1 & 2 & 3 & 0 & 0 \\ 0 & 0 & 1 & 2 & 3 & 0 \\ 0 & 0 & 0 & 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \\ 3 \\ 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 12 \\ 11 \\ 15 \\ 10 \\ 5 \\ 1 \end{bmatrix}.   (3.87)

The result is identical to that shown in Figure 3.40.
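The numbers in this example are easy to verify (a quick check with NumPy):

```python
import numpy as np

f = [4, 1, 3, 1]
h = [3, 2, 1]
g = np.convolve(f, h)                   # linear convolution
assert list(g) == [12, 11, 15, 10, 5, 1]
assert len(g) == len(f) + len(h) - 1    # N1 + N2 - 1 samples
```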
Convolution in 2D: The output of an LSI imaging or image processing system is given as the convolution of the input image with the PSF:

g(m, n) = \sum_{\alpha=0}^{N-1} \sum_{\beta=0}^{N-1} f(\alpha, \beta) \, h(m - \alpha, n - \beta).   (3.88)

In this expression, for the sake of generality, the range of summation is allowed to span the full spatial range of the resulting output image. When the filter PSF is of a much smaller spatial extent than the input image, it becomes convenient to locate the origin (0, 0) at the center of the PSF, and use positive and negative indices to represent the omnidirectional (and noncausal) nature of the PSF. Then, 2D convolution may be expressed as

g(m, n) = \sum_{\alpha=-M}^{M} \sum_{\beta=-M}^{M} f(\alpha, \beta) \, h(m - \alpha, n - \beta),   (3.89)

where the size of the PSF is assumed to be odd, given by (2M + 1) × (2M + 1). In this format, 2D convolution may be interpreted as a mask operation performed on the input image: the PSF is reversed (flipped or reflected) about both of its axes, placed on top of the input image at the coordinate where the output value is to be computed, a point-by-point multiplication is performed of the overlapping areas of the two functions, and the resulting products are added. The operation needs to be performed at every spatial location for which the output exists, by dragging and placing the reversed PSF at every

n:        0   1   2   3   4   5   6   7

f(n):     4   1   3   1

h(n):     3   2   1

k:        0   1   2   3   4   5   6   7
f(k):     4   1   3   1   0   0   0   0

h(0−k):   3               (the values 2 and 1 fall at k = −1, −2)
h(1−k):   2   3           (the value 1 falls at k = −1)
h(2−k):   1   2   3
h(3−k):       1   2   3
h(4−k):           1   2   3
h(5−k):               1   2   3
h(6−k):                   1   2   3

g(n):    12  11  15  10   5   1   0   0

FIGURE 3.40
Illustration of the linear convolution of two 1D signals. Observe the reversal of h(n), shown as h(0 − k), and the shifting of the reversed signal, shown as h(1 − k), h(2 − k), etc.
pixel of the input image. Figure 3.41 illustrates linear 2D convolution performed as described above. Note that, in the case of a PSF having symmetry about both of its axes, the reversal step has no effect and is not required. Matrix representation of 2D convolution is described in Section 3.5.6.
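The mask interpretation described above can be sketched directly (an illustration; the result is cross-checked against FFT-based convolution with sufficient zero padding):

```python
import numpy as np

def convolve2d_full(f, h):
    """Linear 2D convolution by the mask interpretation: flip the PSF
    about both of its axes and slide it over the zero-padded input,
    taking a point-by-point product and sum at each position."""
    M, N = f.shape
    P, Q = h.shape
    hr = h[::-1, ::-1]                       # reversed (reflected) PSF
    fp = np.pad(f, ((P - 1, P - 1), (Q - 1, Q - 1)))
    out = np.zeros((M + P - 1, N + Q - 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(fp[m:m + P, n:n + Q] * hr)
    return out

f = np.arange(16.0).reshape(4, 4)
h = np.array([[1.0, 2.0], [3.0, 4.0]])
G = np.fft.ifft2(np.fft.fft2(f, (5, 5)) * np.fft.fft2(h, (5, 5))).real
assert np.allclose(convolve2d_full(f, h), G)
```

As with the 10 × 10 result in Figure 3.41, the output is larger than the input: an M × N image convolved with a P × Q PSF yields (M + P − 1) × (N + Q − 1) samples.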

3.5.5 Diagonalization of a circulant matrix

An important property of a circulant matrix is that it is diagonalized by the DFT [8, 9, 196]. Consider the general circulant matrix

C = \begin{bmatrix} C(0) & C(1) & C(2) & \cdots & C(N-2) & C(N-1) \\ C(N-1) & C(0) & C(1) & \cdots & C(N-3) & C(N-2) \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ C(2) & C(3) & C(4) & \cdots & C(0) & C(1) \\ C(1) & C(2) & C(3) & \cdots & C(N-1) & C(0) \end{bmatrix}.   (3.90)
Let W = \exp\left(j \frac{2\pi}{N}\right), which leads to W^{kN} = 1 for any integer k. Then, W^k, k = 0, 1, 2, ..., N − 1, are the N distinct roots of unity. Now, consider

\lambda(k) = C(0) + C(1) W^k + C(2) W^{2k} + \cdots + C(N-1) W^{(N-1)k}.   (3.91)

It follows that

\lambda(k) W^k = C(N-1) + C(0) W^k + C(1) W^{2k} + \cdots + C(N-2) W^{(N-1)k},
\lambda(k) W^{2k} = C(N-2) + C(N-1) W^k + C(0) W^{2k} + \cdots + C(N-3) W^{(N-1)k},

and so on to

\lambda(k) W^{(N-1)k} = C(1) + C(2) W^k + C(3) W^{2k} + \cdots + C(0) W^{(N-1)k}.

This series of relationships may be expressed in compact form as

\lambda(k) \, \mathbf{W}(k) = C \, \mathbf{W}(k),   (3.92)

where

\mathbf{W}(k) = \left[1, W^k, W^{2k}, \ldots, W^{(N-1)k}\right]^T.   (3.93)

Therefore, \lambda(k) is an eigenvalue and \mathbf{W}(k) is an eigenvector of the circulant matrix C. Because there are N values W^k, k = 0, 1, ..., N − 1, that are distinct, there are N distinct eigenvectors \mathbf{W}(k), which may be written as the N × N matrix

\mathbf{W} = \left[\mathbf{W}(0), \mathbf{W}(1), \ldots, \mathbf{W}(N-1)\right],   (3.94)

which is related to the DFT. The (n, k)th element of \mathbf{W} is \exp\left(j \frac{2\pi}{N} nk\right). Due to the orthogonality of the complex exponential functions, the (n, k)th element
(a) PSF:
1 4 7
2 5 8
3 6 9

(b) Reflected (reversed) about the vertical axis:
7 4 1
8 5 2
9 6 3

(c) Reflected (reversed) about the horizontal axis as well:
9 6 3
8 5 2
7 4 1

(d) The reversed PSF placed as a mask on the 8 × 8 input image. (e) The 10 × 10 result of convolution.

FIGURE 3.41
Illustration of the linear convolution of two 2D functions. Observe the reversal of the PSF in parts (a)–(c), and the shifting of the reversed PSF as a mask placed on the image to be filtered, in part (d). The shifted mask is shown for two pixel locations. Observe that the result needs to be written in a different array. The result, of size 10 × 10 and shown in part (e), has two rows and two columns more than the input image (of size 8 × 8).
of \mathbf{W}^{-1} is \frac{1}{N} \exp\left(-j \frac{2\pi}{N} nk\right). We then have \mathbf{W} \mathbf{W}^{-1} = \mathbf{W}^{-1} \mathbf{W} = I, where I is the N × N identity matrix. [The columns of \mathbf{W} are linearly independent.]

The eigenvalue relationship may be written as

\mathbf{W} \Lambda = C \, \mathbf{W},   (3.95)

where all the terms are N × N matrices, and \Lambda is a diagonal matrix whose elements are equal to \lambda(k), k = 0, 1, ..., N − 1. The expression above may be modified to

C = \mathbf{W} \Lambda \mathbf{W}^{-1}.   (3.96)
Thus, we see that a circulant matrix is diagonalized by the DFT operator W.
Returning to the relationships of periodic convolution, because h is circulant, we have

h = \mathbf{W} D_h \mathbf{W}^{-1},   (3.97)

where D_h is a diagonal matrix (corresponding to \Lambda in the preceding discussion). The elements of D_h are given by multiplying the first row of the matrix h in Equation 3.85 with W^{nk}, n = 0, 1, 2, ..., N − 1, as in Equation 3.91:

H(k) = h(0) + h(N-1) W^k + h(N-2) W^{2k} + \cdots + h(1) W^{(N-1)k},   (3.98)

which is a DFT relationship; that is, H(k) is the DFT of h(n). [Note: The series of h above represents h(N − n), n = 0, 1, 2, ..., N − 1, which is equal to h(−n) due to periodicity. The series of W values represents \exp\left(+j \frac{2\pi}{N} nk\right), n = 0, 1, 2, ..., N − 1. The expression may be converted to the usual forward DFT form by substituting −n = m.]
It follows that the result of the convolution operation is given by

g = \mathbf{W} D_h \mathbf{W}^{-1} f.   (3.99)

The interpretation of the matrix relationships above is as follows: \mathbf{W}^{-1} f is the (forward) DFT of f (with a scale factor of 1/N). The multiplication of this expression by D_h corresponds to point-by-point transform-domain filtering with the DFT of h. The multiplication by \mathbf{W} corresponds to the inverse DFT (except for the scale factor 1/N). We now have the following equivalent relationships that represent convolution:

g(n) = h(n) \ast f(n),
G(k) = H(k) \, F(k),
g = h f,
g = \mathbf{W} D_h \mathbf{W}^{-1} f.   (3.100)

[Note: The representation of the Fourier transform operator above is different from that in Equation 3.67.]
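The chain of equivalences in Equation 3.100 can be verified numerically; the sketch below checks that each DFT basis vector is an eigenvector of the circulant matrix, with the corresponding DFT coefficient of h(n) as its eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
h = rng.standard_normal(N)
f = rng.standard_normal(N)

# Circulant filter matrix built from h as in Equation 3.85.
C = np.array([[h[(n - m) % N] for m in range(N)] for n in range(N)])

# Each DFT basis vector W(k) is an eigenvector of C with eigenvalue H(k).
H = np.fft.fft(h)
for k in range(N):
    wk = np.exp(2j * np.pi * np.arange(N) * k / N)
    assert np.allclose(C @ wk, H[k] * wk)

# Hence g = C f equals periodic convolution computed through the DFT.
g = np.fft.ifft(H * np.fft.fft(f)).real
assert np.allclose(C @ f, g)
```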
3.5.6 Block-circulant matrix representation of a 2D filter

In the preceding sections, we saw how a 1D LSI filter may be represented by the matrix relationship g = h f, with the special case of periodic convolution being represented as above with h being a circulant matrix. Now, let us consider the matrix representation of 2D filters. Let f represent an original image or the input image to a 2D filter, and let g represent the corresponding filtered image, with the 2D arrays having been converted to vectors or column matrices by row ordering (see Figure 3.35). Let us assume that all of the images have been padded with zeros and extended to M × N arrays, with M and N being large such that circular convolution yields a result equivalent to that of linear convolution. The images may be considered to be periodic with the period M × N. The matrices f and g are of size MN × 1.

The 2D periodic convolution expression in array form is given by

g(m, n) = \sum_{\alpha=0}^{M-1} \sum_{\beta=0}^{N-1} f(\alpha, \beta) \, h([m - \alpha] \bmod M, \, [n - \beta] \bmod N),   (3.101)

for m = 0, 1, 2, ..., M − 1 and n = 0, 1, 2, ..., N − 1. The result is also periodic with the period M × N.

In order for the expression g = h f to represent 2D convolution, we need to construct the matrix h as follows [8, 9]:

h = \begin{bmatrix} h_0 & h_{M-1} & h_{M-2} & \cdots & h_2 & h_1 \\ h_1 & h_0 & h_{M-1} & \cdots & h_3 & h_2 \\ h_2 & h_1 & h_0 & \cdots & h_4 & h_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ h_{M-1} & h_{M-2} & h_{M-3} & \cdots & h_1 & h_0 \end{bmatrix},   (3.102)

where the submatrices are given by

h_m = \begin{bmatrix} h(m, 0) & h(m, N-1) & h(m, N-2) & \cdots & h(m, 1) \\ h(m, 1) & h(m, 0) & h(m, N-1) & \cdots & h(m, 2) \\ h(m, 2) & h(m, 1) & h(m, 0) & \cdots & h(m, 3) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ h(m, N-1) & h(m, N-2) & h(m, N-3) & \cdots & h(m, 0) \end{bmatrix}.   (3.103)

The matrix h is of size MN × MN. Each N × N submatrix h_m is a circulant matrix. The submatrices of h are subscripted in a circular manner; h is known as a block-circulant matrix.

Example: Let us consider filtering the image f(m, n) given by

f(m, n) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 0 & 6 & 5 & 4 & 0 \\ 0 & 7 & 8 & 9 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
Although the image has nonzero pixels over only a 3 × 3 region, it has been padded with zeros to the extent of a 5 × 5 array to allow for the result of convolution to be larger without wrap-around errors due to periodic convolution.

The 3 × 3 subtracting Laplacian operator, also extended to a 5 × 5 array, is given by

h(m, n) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & -4 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.

However, this form of the operator has its origin at the center of the array, whereas the origin of the image f(m, n) as above would be at the top-left corner in matrix-indexing order. Therefore, we need to rewrite h(m, n) as follows:

h(m, n) = \begin{bmatrix} -4 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{bmatrix}.

Observe that, due to the assumption of periodicity, the values of h(m, n) corresponding to negative indices now appear on the opposite ends of the matrix.

The matrices corresponding to the relationship g = h f are given in Figure 3.42. The resulting image g(m, n) in array format is

g(m, n) = \begin{bmatrix} 0 & 1 & 2 & 3 & 0 \\ 1 & 4 & 1 & -6 & 3 \\ 6 & -11 & 0 & 1 & 4 \\ 7 & -14 & -11 & -24 & 9 \\ 0 & 7 & 8 & 9 & 0 \end{bmatrix}.
Diagonalization of a block-circulant matrix: Let us define the following functions that are related to the 2D DFT [8]: w_M(k, m) = \exp\left(j \frac{2\pi}{M} km\right) and w_N(l, n) = \exp\left(j \frac{2\pi}{N} ln\right). Let us define a matrix \mathbf{W} of size MN × MN, containing M^2 partitions, each of size N × N. The (k, m)th partition of \mathbf{W} is

\mathbf{W}(k, m) = w_M(k, m) \, \mathbf{W}_N,   (3.104)

for k, m = 0, 1, 2, ..., M − 1, where \mathbf{W}_N is an N × N matrix with its elements given by w_N(l, n) for l, n = 0, 1, 2, ..., N − 1.
FIGURE 3.42
Matrices and vectors related to the application of the Laplacian operator to an image: the 25 × 25 block-circulant matrix h multiplying the 25 × 1 vector f (the row-ordered 5 × 5 image) to yield the 25 × 1 result g.
Now, \mathbf{W}^{-1} is also a matrix of size MN × MN, with M^2 partitions of size N × N. The (k, m)th partition of \mathbf{W}^{-1} is

\mathbf{W}^{-1}(k, m) = \frac{1}{M} \, w_M^{-1}(k, m) \, \mathbf{W}_N^{-1},   (3.105)

where w_M^{-1}(k, m) = \exp\left(-j \frac{2\pi}{M} km\right), for k, m = 0, 1, 2, ..., M − 1. The matrix \mathbf{W}_N^{-1} has its elements given by \frac{1}{N} w_N^{-1}(l, n), where w_N^{-1}(l, n) = \exp\left(-j \frac{2\pi}{N} ln\right), for l, n = 0, 1, 2, ..., N − 1. The definitions above lead to \mathbf{W} \mathbf{W}^{-1} = \mathbf{W}^{-1} \mathbf{W} = I, where I is the MN × MN identity matrix. If h is a block-circulant matrix, it can be shown [196] that h = \mathbf{W} D_h \mathbf{W}^{-1}, or D_h = \mathbf{W}^{-1} h \mathbf{W}, where D_h is a diagonal matrix whose elements are related to the DFT of h(m, n), that is, to H(k, l).

Similar to the 1D case expressed by the relationships in Equation 3.100, we have the following equivalent relationships that represent 2D convolution:

g(m, n) = h(m, n) \ast f(m, n),
G(k, l) = H(k, l) \, F(k, l),
g = h f,
g = \mathbf{W} D_h \mathbf{W}^{-1} f.   (3.106)

[Note: Considering an N × N image f(m, n), the N^2 × N^2 DFT matrix \mathbf{W} above is different from the N × N DFT matrix W in Equation 3.67. The image matrix f in Equation 3.67 is of size N × N, whereas the image is represented as an N^2 × 1 vector in Equation 3.106.]
Differentiation of functions of matrices: The major advantage of expressing images and image processing operations in matrix form as above is that mathematical procedures for optimization and estimation may be applied with ease. For example, we have the following derivatives: given the vectors f and g and a symmetric matrix W,

\frac{\partial}{\partial f} \left(f^T g\right) = \frac{\partial}{\partial f} \left(g^T f\right) = g,   (3.107)

and

\frac{\partial}{\partial f} \left(f^T W f\right) = 2 \, W f.   (3.108)

Several derivations in Sections 3.6.1 and 3.7.1, as well as in Chapters 10 and 11, demonstrate how optimization of filters may be performed using matrix representation of images and image processing operations as above.
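The two derivatives can be verified by finite differences (an illustrative numerical check, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
f = rng.standard_normal(n)
g = rng.standard_normal(n)
A = rng.standard_normal((n, n))
W = A + A.T                       # symmetric matrix

def num_grad(fun, x, eps=1e-6):
    """Central-difference gradient of a scalar function of a vector."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (fun(x + d) - fun(x - d)) / (2 * eps)
    return grad

# d/df (f^T g) = g                (Equation 3.107)
assert np.allclose(num_grad(lambda x: x @ g, f), g)
# d/df (f^T W f) = 2 W f          (Equation 3.108)
assert np.allclose(num_grad(lambda x: x @ W @ x, f), 2 * W @ f, atol=1e-5)
```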

3.6 Optimal Filtering

The field of image processing includes several procedures that may be characterized as ad hoc methods: procedures that have been designed to address
a particular problem and observed to yield good results in certain specific applications or scenarios. Often, the conditions under which such methods perform well are not known or understood, and the application of the methods to other images or situations may not lead to useful results. Certain mathematical models and procedures permit the design of optimal filters: filters that are derived through an optimization procedure that minimizes a cost function under specific conditions. Such procedures state explicitly the necessary conditions, and the behavior of the filter is predictable. However, as we shall see in the following paragraphs, the application of optimization methods and the resultant optimal filters require specific knowledge of the image and noise processes.

3.6.1 The Wiener filter

The Wiener filter is a linear filter designed to minimize the MSE between the output of the filter and the undegraded, unknown, original image. The filter output is an optimal estimate of the original, undegraded image in the MSE sense, and hence is known as the linear minimum mean squared-error (LMMSE) or the least-mean-square (LMS) estimate.

Considering the degradation of an image f by additive noise η that is independent of the image process, we have the degraded image given by

g = f + \eta.   (3.109)

(Degradation including the PSF matrix h is considered in Chapter 10.)

The Wiener estimation problem may be stated as follows [8, 9, 197, 198]: determine a linear estimate \tilde{f} = L g of f from the given image g, where L is the linear filter or transform operator to be designed. Recall that f and g are N^2 × 1 matrices formed by row or column ordering of the corresponding N × N images, and that L is an N^2 × N^2 matrix.

The optimization criterion used to design the Wiener filter is to minimize the MSE, given by

\varepsilon^2 = E\left[\|f - \tilde{f}\|^2\right].   (3.110)

Let us express the MSE as the trace of the outer product matrix of the error vector:

\varepsilon^2 = E\left[\mathrm{Tr}\left\{(f - \tilde{f})(f - \tilde{f})^T\right\}\right].   (3.111)

We have the following expressions that result from, or are related to, the above:

(f - \tilde{f})(f - \tilde{f})^T = f f^T - f \tilde{f}^T - \tilde{f} f^T + \tilde{f} \tilde{f}^T,   (3.112)
\tilde{f}^T = g^T L^T = (f^T + \eta^T) L^T,   (3.113)
f \tilde{f}^T = f f^T L^T + f \eta^T L^T,   (3.114)
\tilde{f} f^T = L f f^T + L \eta f^T,   (3.115)
\tilde{f} \tilde{f}^T = L \left(f f^T + f \eta^T + \eta f^T + \eta \eta^T\right) L^T.   (3.116)
Because the trace of a sum of matrices is equal to the sum of their traces, the E and Tr operators may be interchanged in order. Applying the E[·] operator to the expressions above, we get the following expressions:

• E[f f^T] = \Phi_f, which is the autocorrelation matrix of the image;
• E[f \tilde{f}^T] = \Phi_f L^T;
• E[f \eta^T] = 0, because f and \eta are assumed to be statistically independent;
• E[\tilde{f} f^T] = L \Phi_f;
• E[\tilde{f} \tilde{f}^T] = L \Phi_f L^T + L \Phi_\eta L^T;
• E[\eta \eta^T] = \Phi_\eta, which is the noise autocorrelation matrix.

Now, the MSE may be written as

\varepsilon^2 = \mathrm{Tr}\left[\Phi_f - \Phi_f L^T - L \Phi_f + L \Phi_f L^T + L \Phi_\eta L^T\right]
= \mathrm{Tr}\left[\Phi_f - 2 \Phi_f L^T + L \Phi_f L^T + L \Phi_\eta L^T\right].   (3.117)

(Note: \mathrm{Tr}\left[\Phi_f L^T\right] = \mathrm{Tr}\left[L \Phi_f\right] because \Phi_f is symmetric.) At this point, the MSE is no longer a function of the images f, g, or \eta, but depends only on the statistical characteristics of f and \eta, and on L.

To obtain the optimal filter operator L, we may now differentiate the expression above with respect to L, equate it to zero, and solve the resulting expression as follows:

\frac{\partial \varepsilon^2}{\partial L} = -2 \Phi_f + 2 L \Phi_f + 2 L \Phi_\eta = 0.   (3.118)

The optimal filter is given by

L_{\mathrm{Wiener}} = \Phi_f \left(\Phi_f + \Phi_\eta\right)^{-1}.   (3.119)

The filtered image is given by

\tilde{f} = \Phi_f \left(\Phi_f + \Phi_\eta\right)^{-1} g.   (3.120)
Implementation of the Wiener filter: Consider the matrix \Phi_f + \Phi_\eta that needs to be inverted in Equation 3.119. The matrix would be of size N^2 × N^2 for N × N images; hence, inversion of the matrix as such would be impractical when N is large. Inversion becomes easier if the matrix can be written as the product of a diagonal matrix and a unitary matrix.

Now, \Phi_\eta is a diagonal matrix if \eta is an uncorrelated random (white) noise process. In most real images, correlation between pixels reduces as the spatial
shift or distance between the pixel positions considered increases: \Phi_f is then banded with several zeros, and may be approximated by a block-circulant matrix. Then, we can write \Phi_f = \mathbf{W} \Lambda_f \mathbf{W}^{-1} and \Phi_\eta = \mathbf{W} \Lambda_\eta \mathbf{W}^{-1}, where \Lambda represents the diagonal matrix corresponding to \Phi resulting from the Fourier transform operation by \mathbf{W} (see Section 3.5.5). This step leads to the Wiener filter output being expressed as

\tilde{f} = \mathbf{W} \Lambda_f \left(\Lambda_f + \Lambda_\eta\right)^{-1} \mathbf{W}^{-1} g.   (3.121)

The following interpretation of the entities involved in the Wiener filter reduces the expression above to a more familiar form:

• \mathbf{W}^{-1} g is equivalent to G(k, l), the Fourier transform of g(m, n);
• \Lambda_f = \mathbf{W}^{-1} \Phi_f \mathbf{W} is equivalent to S_f(k, l), the PSD of f(m, n);
• \Lambda_\eta = \mathbf{W}^{-1} \Phi_\eta \mathbf{W} is equivalent to S_\eta(k, l), the noise PSD.

Then, we get the Wiener estimate in the Fourier domain as

\tilde{F}(k, l) = \frac{S_f(k, l)}{S_f(k, l) + S_\eta(k, l)} \, G(k, l) = \left[\frac{1}{1 + \frac{S_\eta(k, l)}{S_f(k, l)}}\right] G(k, l).   (3.122)

(For other derivations of the Wiener filter, see Wiener [198], Lim [199], and Rangayyan [31].)
Observe that the Wiener filter transfer function depends upon the PSD of the original signal and noise processes; the dependence is upon the second-order statistics of the processes rather than upon single realizations or observations of the processes. The design of the Wiener filter as above requires the estimation of the PSDs or the development of models thereof. Although the original signal itself is unknown, it is often possible in practice to estimate its PSD from the average spectra of several artifact-free observations of images of the same class or type. Noise statistics, such as variance, may be estimated from signal-free parts of noisy observations of the image.

The gain of the Wiener filter varies from one frequency sample to another in accordance with the SNR expressed as a function of frequency [in the 2D (u, v) or (k, l) space]: the gain is high wherever the signal component S_f(k, l) is strong as compared to the noise component S_\eta(k, l), that is, wherever the SNR is high; the gain is low wherever the SNR is low. The gain is equal to unity if the noise PSD is zero. However, it should be noted that the Wiener filter (as above) is not spatially adaptive: its characteristics remain the same for the entire image. For this reason, while suppressing noise, the Wiener filter is likely to blur sharp features and edges that may exist in the image (and share the high-frequency spectral regions with noise).
Frequency-domain implementation of the Wiener filter as in Equation 3.122 obviates the need for the inversion of large correlation matrices as in Equation 3.120. The use of the FFT algorithm facilitates fast computation of the Fourier transforms of the image data.
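A minimal sketch of the frequency-domain Wiener filter of Equation 3.122. The PSD models below (an isotropic exponential signal model and a white-noise level) are arbitrary choices for illustration only, not the models used in the example that follows.

```python
import numpy as np

def wiener_filter(g, Sf, Sn):
    """Frequency-domain Wiener filter of Equation 3.122. Sf and Sn are
    models of the signal and noise PSDs, sampled on the DFT grid."""
    G = np.fft.fft2(g)
    gain = Sf / (Sf + Sn)             # filter gain, between 0 and 1
    return np.fft.ifft2(gain * G).real

rng = np.random.default_rng(3)
N = 64
u = np.fft.fftfreq(N)
D = np.sqrt(u[:, None] ** 2 + u[None, :] ** 2)   # radial frequency
Sf = np.exp(-D / 0.05)                # assumed low-pass signal PSD model
Sn = np.full((N, N), 0.01)            # assumed white-noise PSD model

g = rng.standard_normal((N, N))       # stand-in for a noisy observation
f_hat = wiener_filter(g, Sf, Sn)
assert f_hat.shape == (N, N)
assert np.all(Sf / (Sf + Sn) <= 1.0)  # gain never exceeds unity
```

The gain expression makes the behavior described above explicit: it approaches unity where S_f dominates S_η, and falls toward zero where the noise dominates.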
Example: Figure 3.43 (a) shows the original Shapes test image; part (b) shows the test image with Gaussian-distributed noise added (μ = 0, normalized σ² = 0.01). The log-magnitude spectrum of the noisy image is shown in part (c) of the figure. In order to implement the Wiener filter as in Equation 3.122, the true image PSD was modeled using a Laplacian function with σ = 5 pixels in the Fourier domain, represented using a 128 × 128 array. The noise PSD was modeled by a uniform function having its total energy (area under the function) equal to 0.5 times that of the Laplacian PSD model. The Wiener filter transfer function is illustrated in Figure 3.43 (d). The output of the Wiener filter is shown in part (e) of the figure: while the noise in the uniform areas of the image has been suppressed, the edges of the objects in the image have been severely blurred. The blurring of edges has resulted in an increase of the RMS error from 19.56 for the noisy image to 52.89 for the Wiener filter output. It is clear that the assumption of stationarity is not appropriate for the test image; the use of a fixed filter, albeit optimal in the MSE sense, has led to a result that is not desirable. Furthermore, the design of appropriate signal and noise PSD models is difficult in practice; inappropriate models could lead to poor performance, as illustrated by the present example.

The implicit assumption made in deriving the Wiener filter is that the noise and image processes are second-order stationary processes; that is, their mean and variance do not vary from one image region to another. This also leads to the assumption that the entire image may be characterized by a single frequency spectrum or PSD. Most real-life images do not satisfy these assumptions to the fullest extent, which calls for the design of spatially or locally adaptive filters. See Section 3.7.1 for details on locally adaptive optimal filters.

3.7 Adaptive Filters

3.7.1 The local LMMSE filter
Lee [200] developed a class of adaptive, local-statistics-based filters to obtain
the LMMSE estimate of the original image from a degraded version. The
degradation model represents an image f(m, n) corrupted by additive noise
η(m, n) as

    g(m, n) = f(m, n) + \eta(m, n) \quad \forall \; m, n.    (3.123)
Removal of Artifacts 229

FIGURE 3.43
(a) Shapes test image. (b) Image in (a) with Gaussian-distributed noise added,
with μ = 0, normalized σ² = 0.01; RMS error = 19.56. (c) Log-magnitude
spectrum of the image in (b). (d) Gain of the Wiener filter in the frequency
domain, that is, the magnitude transfer function. Result of filtering the noisy
image in (b) using: (e) the Wiener filter as in (d), RMS error = 52.89; (f) the
local LMMSE filter with a 5 × 5 window, RMS error = 13.78.
This model is equivalent to that in Equation 3.109, but has been shown in
the 2D array form with the indices (m, n) to indicate the pixel location, in
order to demonstrate the locally adaptive nature of the filter to be derived.
The original image f is considered to be a realization of a nonstationary
random field, characterized by spatially varying moments (mean, standard
deviation, etc.). The noise process may be either signal-independent or
signal-dependent, and could be nonstationary as well.
The LMMSE approach computes, at every spatial location (m, n), an estimate
\tilde{f}(m, n) of the original image value f(m, n) by applying a linear operator to
the available corrupted image value g(m, n). Scalars a(m, n) and b(m, n) are
sought such that the value \tilde{f}(m, n) computed as

    \tilde{f}(m, n) = a(m, n) \, g(m, n) + b(m, n)    (3.124)

minimizes the local MSE

    \varepsilon^2(m, n) = \overline{ \left[ \tilde{f}(m, n) - f(m, n) \right]^2 },    (3.125)

where the bar above the expression indicates some form of averaging (statistical
expectation, ensemble averaging, or spatial averaging). We have the local
MSE in expanded form as

    \varepsilon^2(m, n) = \overline{ \left[ a(m, n) \, g(m, n) + b(m, n) - f(m, n) \right]^2 }.    (3.126)

The values a(m, n) and b(m, n) that minimize \varepsilon^2(m, n) are computed by
taking the partial derivatives of \varepsilon^2(m, n) with respect to a(m, n) and b(m, n),
setting them to zero, and solving the resulting equations, as follows:

    \frac{\partial \varepsilon^2(m, n)}{\partial b(m, n)} = 2 \left\{ a(m, n) \, \bar{g}(m, n) + b(m, n) - \bar{f}(m, n) \right\} = 0,    (3.127)

which leads to

    b(m, n) = \bar{f}(m, n) - a(m, n) \, \bar{g}(m, n).    (3.128)
Now, replacing b(m, n) in Equation 3.126 with its value given by Equation 3.128, we get the local MSE as

    \varepsilon^2(m, n) = \overline{ \left[ a(m, n) \{ g(m, n) - \bar{g}(m, n) \} - \{ f(m, n) - \bar{f}(m, n) \} \right]^2 }.    (3.129)

Differentiating this expression with respect to a(m, n) and setting the result
to zero, we get

    \overline{ \left[ a(m, n) \{ g(m, n) - \bar{g}(m, n) \} - \{ f(m, n) - \bar{f}(m, n) \} \right] \{ g(m, n) - \bar{g}(m, n) \} } = 0.    (3.130)

Now, we may allow \overline{ [g(m, n) - \bar{g}(m, n)]^2 } = \sigma_g^2(m, n) to represent the local
variance of g, and \overline{ [g(m, n) - \bar{g}(m, n)] \, [f(m, n) - \bar{f}(m, n)] } = \sigma_{fg}(m, n), the
local covariance between f and g. This leads to

    a(m, n) = \frac{ \sigma_{fg}(m, n) }{ \sigma_g^2(m, n) }.    (3.131)
With a(m, n) given by Equation 3.131 and b(m, n) given by Equation 3.128,
the LMMSE estimate formula in Equation 3.124 becomes

    \tilde{f}(m, n) = \bar{f}(m, n) + \frac{ \sigma_{fg}(m, n) }{ \sigma_g^2(m, n) } \, [g(m, n) - \bar{g}(m, n)].    (3.132)
Because the true statistics of both the original and the corrupted image,
as well as their joint statistics, are usually unknown in a practical situation,
Lee proposed to estimate them locally in a spatial neighborhood of the pixel
(m, n) being processed, leading to the local LMMSE (that is, the LLMMSE)
estimate. Using a rectangular window of size (2P + 1) × (2Q + 1) centered
at the pixel (m, n) being processed, we get local estimates of the mean and
variance of the noisy image g as

    \mu_g(m, n) = \frac{1}{(2P + 1)(2Q + 1)} \sum_{p = -P}^{+P} \sum_{q = -Q}^{+Q} g(m + p, n + q)    (3.133)

and

    \sigma_g^2(m, n) = \frac{1}{(2P + 1)(2Q + 1)} \sum_{p = -P}^{+P} \sum_{q = -Q}^{+Q} \left[ g(m + p, n + q) - \mu_g(m, n) \right]^2.    (3.134)
It should be noted that the parameters are expressed as functions of space,
and are space-variant entities. The LLMMSE estimate is then approximated
by the following pixel-by-pixel operation:

    \tilde{f}(m, n) = \mu_g(m, n) + \frac{ \sigma_g^2(m, n) - \sigma_\eta^2(m, n) }{ \sigma_g^2(m, n) } \, [g(m, n) - \mu_g(m, n)].    (3.135)

(For other derivations of the LLMMSE filter, also known as the Wiener filter,
see Lim [199].)
Comparing Equation 3.132 with 3.135, observe that \bar{f}(m, n) is approximated by \mu_g(m, n), and that \sigma_{fg}(m, n) is estimated by the difference between
the local variance of the degraded image and that of the noise process. The
LLMMSE filter is rendered spatially adaptive, and nonlinear, by the
space-variant estimation of the statistical parameters used.
In deriving Equation 3.135 from Equation 3.132, the assumption that the
noise is uncorrelated with the image is taken into account. The variance of
the noise \sigma_\eta^2(m, n) is constant over the whole image if the noise is assumed to
be signal-independent, but varies if the noise is signal-dependent; in the latter
case, \sigma_\eta^2(m, n) should be estimated locally with a knowledge of the type of the
noise that corrupts the image. Lee's filter was originally derived to deal with
signal-independent additive noise and signal-dependent multiplicative noise,
but may be adapted to other types of signal-dependent noise, such as Poisson
or film-grain noise [201].
The interpretation of Equation 3.135 is as follows: if the processing window
overlaps a uniform region in which any variation is due mostly to the noise, the
second term will be small. Thus, the LLMMSE estimate is equal to the local
mean of the noisy image; the noise is thereby reduced. If, on the contrary,
there is an edge in the processing window, the variance of the noisy image is
larger than the variance of the noise. The LLMMSE estimate in this case is
closer to the actual noisy value g(m, n), and the edge does not get blurred.
The filter provides good noise attenuation over uniform areas, but poor noise
filtering near edges.
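The pixel-by-pixel operation of Equation 3.135 can be sketched as follows (a Python/NumPy illustration; the function name, the handling of image borders by window clipping, and the clamping of the variance difference at zero are choices made here, not specified in the text):

```python
import numpy as np

def llmmse(g, noise_var, P=2, Q=2):
    """Local LMMSE (Lee) filter of Equation 3.135 with a
    (2P+1) x (2Q+1) estimation window."""
    g = g.astype(float)
    M, N = g.shape
    f = np.empty_like(g)
    for m in range(M):
        for n in range(N):
            # local window, clipped at the image borders
            win = g[max(0, m - P):m + P + 1, max(0, n - Q):n + Q + 1]
            mu = win.mean()
            var = win.var()
            # gain = (sigma_g^2 - sigma_eta^2) / sigma_g^2, clamped at zero
            gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            f[m, n] = mu + gain * (g[m, n] - mu)
    return f
```

Over a uniform window the gain approaches zero and the output is the local mean; over an edge the gain approaches one and the noisy value is retained, matching the interpretation above.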
Aghdasi et al. [202] proposed detailed models of degradation of mammograms, including Poisson noise, film-grain noise, and blurring by several components along the chain of image acquisition systems. They applied the local
LMMSE filter as above to remove noise, and compared its performance with
a Bayesian filter. The parametric Wiener filter (see Section 10.1.3) was then
applied to deblur the noise-free mammograms.
Matrix representation: A matrix or vectorial version of Equation 3.132
may also be derived as follows. The LMMSE estimate is expressed as

    \tilde{\mathbf{f}} = \mathbf{A} \mathbf{g} + \mathbf{b}.    (3.136)

The MSE between the estimate and the unknown original image may be
expressed as

    \varepsilon^2 = E \left[ \mathrm{Tr} \left\{ (\mathbf{f} - \tilde{\mathbf{f}})(\mathbf{f} - \tilde{\mathbf{f}})^T \right\} \right].    (3.137)

Substituting the expression in Equation 3.136, we get

    \varepsilon^2 = E \left[ \mathrm{Tr} \left\{ (\mathbf{f} - \mathbf{A}\mathbf{g} - \mathbf{b})(\mathbf{f} - \mathbf{A}\mathbf{g} - \mathbf{b})^T \right\} \right]
       = E \left[ \mathrm{Tr} \left\{ \mathbf{f}\mathbf{f}^T - \mathbf{f}\mathbf{g}^T\mathbf{A}^T - \mathbf{f}\mathbf{b}^T - \mathbf{A}\mathbf{g}\mathbf{f}^T + \mathbf{A}\mathbf{g}\mathbf{g}^T\mathbf{A}^T + \mathbf{A}\mathbf{g}\mathbf{b}^T - \mathbf{b}\mathbf{f}^T + \mathbf{b}\mathbf{g}^T\mathbf{A}^T + \mathbf{b}\mathbf{b}^T \right\} \right].    (3.138)
Differentiating the expression above with respect to \mathbf{b} and setting it to zero,
we get

    E \left[ \mathrm{Tr} \left\{ -\mathbf{f} + \mathbf{A}\mathbf{g} - \mathbf{f} + \mathbf{A}\mathbf{g} + 2\mathbf{b} \right\} \right] = 0,    (3.139)

solving which we get

    \mathbf{b} = \bar{\mathbf{f}} - \mathbf{A} \bar{\mathbf{g}},    (3.140)

where the bar indicates some form of averaging.
Using the expression derived for \mathbf{b} above, we get the following:

    \mathbf{f} - \tilde{\mathbf{f}} = \mathbf{f} - \mathbf{A}\mathbf{g} - \mathbf{b}
       = \mathbf{f} - \mathbf{A}\mathbf{g} - \bar{\mathbf{f}} + \mathbf{A}\bar{\mathbf{g}}
       = (\mathbf{f} - \bar{\mathbf{f}}) - \mathbf{A}(\mathbf{g} - \bar{\mathbf{g}})
       = \mathbf{f}_1 - \mathbf{A}\mathbf{g}_1,    (3.141)
where \mathbf{f}_1 = \mathbf{f} - \bar{\mathbf{f}} and \mathbf{g}_1 = \mathbf{g} - \bar{\mathbf{g}} for the sake of compactness in further
derivation. Now, the MSE becomes

    \varepsilon^2 = E \left[ \mathrm{Tr} \left\{ (\mathbf{f}_1 - \mathbf{A}\mathbf{g}_1)(\mathbf{f}_1 - \mathbf{A}\mathbf{g}_1)^T \right\} \right]
       = E \left[ \mathrm{Tr} \left\{ \mathbf{f}_1\mathbf{f}_1^T - \mathbf{f}_1\mathbf{g}_1^T\mathbf{A}^T - \mathbf{A}\mathbf{g}_1\mathbf{f}_1^T + \mathbf{A}\mathbf{g}_1\mathbf{g}_1^T\mathbf{A}^T \right\} \right].    (3.142)

Differentiating the expression above with respect to \mathbf{A} and setting it to zero,
we get

    E \left[ -2\mathbf{f}_1\mathbf{g}_1^T + 2\mathbf{A}\mathbf{g}_1\mathbf{g}_1^T \right] = 0.    (3.143)

Now, E\left[\mathbf{g}_1\mathbf{g}_1^T\right] = E\left[(\mathbf{g} - \bar{\mathbf{g}})(\mathbf{g} - \bar{\mathbf{g}})^T\right] = \mathbf{\Sigma}_g, the covariance matrix of \mathbf{g}.
Similarly, E\left[\mathbf{f}_1\mathbf{g}_1^T\right] = \mathbf{\Sigma}_{fg}, the cross-covariance matrix of \mathbf{f} and \mathbf{g}. Thus, we
get \mathbf{A} = \mathbf{\Sigma}_{fg} \mathbf{\Sigma}_g^{-1}. Finally, we obtain the LMMSE estimate as

    \tilde{\mathbf{f}} = \bar{\mathbf{g}} + \mathbf{\Sigma}_{fg} \mathbf{\Sigma}_g^{-1} (\mathbf{g} - \bar{\mathbf{g}}).    (3.144)

This expression reduces to Equation 3.135 when local statistics are substituted
for the expectation-based statistical parameters.
Due to the similarity of the optimization criterion used, the LLMMSE filter
as above is also referred to as the Wiener filter in some publications [199, 203].
The use of local statistics overcomes the limitations of the Wiener filter due to
the assumption of stationarity; the procedure also removes the need to invert
large matrices. Nonlinearity, if it is of concern, is the price paid in order to
gain these advantages.
Example: Figure 3.43 (f) shows the result of application of the LLMMSE
filter as in Equation 3.135, using a 5 × 5 window, to the noisy test image in
part (b) of the same figure. The noise in the uniform background regions in
the image, as well as within the geometric objects with uniform gray levels,
has been suppressed well by the filter. However, the noise on and around the
edges of the objects has not been removed. Although the filter has led to a
reduction in the RMS error, the leftover noise around edges has led to a poor
appearance of the result.
Refined LLMMSE filter: In a refined version of the LLMMSE filter [204],
if the local signal variance \sigma_g^2(m, n) is high, it is assumed that the processing
window is overlapping an edge. With the further assumption that the edge is
straight, which is reasonable for small windows, the direction of the edge is
computed using a gradient operator with eight possible directions for the edge.
According to the direction of the edge detected, the processing window is split
into two sub-areas, each of which is assumed to be uniform; see Figure 3.44.
Then, the statistics computed within the sub-area that holds the pixel being
processed are used in Equation 3.135 to estimate the output for the current
pixel. This step reduces the noise present in the neighborhood of edges without
blurring the edges. Over uniform areas where \sigma_g^2(m, n) has reasonably small
values, the statistics are computed over the whole area overlapped by the
processing window.
Results of application of the refined LLMMSE filter are presented in Section 3.8.
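The idea of computing statistics only within the sub-area that holds the pixel being processed can be sketched for a single output pixel as below. Note that this is a simplification of the published method: instead of the eight directional splits of Figure 3.44, the window is split by comparing pixel values against the local mean, and the function and parameter names are inventions of this sketch:

```python
import numpy as np

def refined_llmmse_pixel(win, noise_var, var_threshold):
    """Output of a simplified refined-LLMMSE filter for the centre pixel
    of an odd-sized square window `win`.

    When the local variance is high, the window is split into two
    sub-areas and only the one holding the centre pixel is used; the
    split here is by the local mean, a simplification of the eight
    directional cases of the refined filter.
    """
    K = win.shape[0] // 2
    centre = win[K, K]
    region = win
    if win.var() > var_threshold:
        # keep only the pixels on the same side of the local mean
        # as the centre pixel (stand-in for the directional split)
        mean = win.mean()
        same_side = (win >= mean) == (centre >= mean)
        region = win[same_side]
    mu = region.mean()
    var = region.var()
    # LLMMSE gain of Equation 3.135, clamped at zero
    gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
    return mu + gain * (centre - mu)
```

For a window straddling an ideal step edge, the selected sub-area is uniform, so the output is the sub-area mean and the edge is preserved.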
FIGURE 3.44
Splitting of a 7 × 7 neighborhood for adaptive filtering based upon the direction
of a local edge detected within the neighborhood, in a refined version of the
local LMMSE filter [204]. One of the eight cases shown is selected according
to the direction of the gradient within the 7 × 7 neighborhood. Pixels in the
partition containing the pixel being processed are used to compute the local
statistics and the output of the filter. Based upon a similar figure in J.S. Lee,
"Refined filtering of image noise using local statistics", Computer Graphics
and Image Processing, 15:380–389, 1981.
3.7.2 The noise-updating repeated Wiener filter
The noise-updating repeated Wiener (NURW) filter was introduced by Jiang
and Sawchuk [179] to deal with signal-independent additive, signal-dependent
Poisson, and multiplicative noise. The image is treated as a random field
that is nonstationary in mean and variance [188]. The NURW filter consists
of an iterative application of the LLMMSE filter. After each iteration, the
variance of the noise is updated for use in the LLMMSE estimate formula of
Equation 3.135 in the next iteration as

    \sigma_{\eta,\mathrm{new}}^2(m, n) = \left[ 1 - \frac{\sigma_\eta^2(m, n)}{\sigma_g^2(m, n)} + \frac{1}{(2P + 1)(2Q + 1)} \, \frac{\sigma_\eta^2(m, n)}{\sigma_g^2(m, n)} \right]^2 \sigma_\eta^2(m, n)
        + \left[ \frac{1}{(2P + 1)(2Q + 1)} \, \frac{\sigma_\eta^2(m, n)}{\sigma_g^2(m, n)} \right]^2 \sum_{p = -P}^{+P} \sum_{q = -Q}^{+Q} \sigma_\eta^2(m + p, n + q),    (3.145)

with the term (p, q) = (0, 0) excluded from the double summation.
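One NURW pass, combining the LLMMSE estimate of Equation 3.135 with the noise-variance update of Equation 3.145, can be sketched as follows (Python/NumPy; signal-independent noise, crude window clipping at the image borders, and the function name are assumptions of this sketch):

```python
import numpy as np

def nurw(g, noise_var, iterations=2, P=1, Q=1):
    """Noise-updating repeated Wiener (NURW) filter sketch.

    Iterates the LLMMSE estimate of Equation 3.135; after each pass the
    noise-variance map is updated following Equation 3.145.  The map
    starts out constant (signal-independent noise assumed).
    """
    w = (2 * P + 1) * (2 * Q + 1)
    est = g.astype(float)
    nv = np.full(g.shape, float(noise_var))
    for _ in range(iterations):
        M, N = est.shape
        out = np.empty_like(est)
        nv_new = np.empty_like(est)
        for m in range(M):
            for n in range(N):
                win = est[max(0, m - P):m + P + 1, max(0, n - Q):n + Q + 1]
                nwin = nv[max(0, m - P):m + P + 1, max(0, n - Q):n + Q + 1]
                mu, var = win.mean(), win.var()
                # ratio of noise to observed variance, clipped to [0, 1]
                r = min(nv[m, n] / var, 1.0) if var > 0 else 1.0
                out[m, n] = mu + (1.0 - r) * (est[m, n] - mu)  # Eq. 3.135
                # Equation 3.145: noise variance left after this pass
                nv_new[m, n] = (1.0 - r + r / w) ** 2 * nv[m, n] \
                    + (r / w) ** 2 * (nwin.sum() - nv[m, n])
        est, nv = out, nv_new
    return est
```

A smaller window may be passed on later iterations, as suggested below, to limit the blurring of edges.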
By iterating the filter, noise is substantially reduced even in areas near edges.
In order to avoid the blurring of edges, a different (smaller) processing window
size may be chosen for each iteration. Jiang and Sawchuk [179] demonstrated
the use of the NURW filter to improve the quality of images degraded with
additive, multiplicative, and Poisson noise.
Results of application of the NURW filter are presented in Section 3.8.
3.7.3 The adaptive 2D LMS filter
Noise reduction is usually accomplished by a filtering procedure optimized
with respect to an error measure; the most widely used error measure is the
MSE. The Wiener filter is a classical solution to this problem. However, the
Wiener filter is designed under the assumption of stationary as well as statistically independent signal and noise PDF models. This premise is unlikely to
hold true for images that contain large gray-level fluctuations such as edges;
furthermore, it does not hold true for signal-dependent noise. Recent methods
of circumventing this problem have taken into account the nonstationarity of
the given image. One example is the method developed by Chan and Lim [205],
which takes into account the image nonstationarity by varying the filter parameters according to the changes in the characteristics or statistics of the
image. Another example is the adaptive 2D LMS algorithm developed by
Hadhoud and Thomas [206], which is described next.
The 2D LMS method is an example of a fixed-window Wiener filter in which
the filter coefficients vary depending upon the image characteristics. The
algorithm is based on the method of steepest descent, and tracks the variations
in the local statistics of the given image, thereby adapting to different image
features. The advantage of this algorithm is that it does not require any a
priori information about the image, the noise statistics, or their correlation
properties. Also, it does not require any averaging, differentiation, or matrix
operations.
The 2D LMS algorithm is derived by defining a causal FIR filter w_l(p, q)
whose region of support (ROS) is P × P (P typically being 3), such that

    \tilde{f}(m, n) = \sum_{p = 0}^{P - 1} \sum_{q = 0}^{P - 1} w_l(p, q) \, g(m - p, n - q),    (3.146)

where \tilde{f}(m, n) is the estimate of the original pixel value f(m, n); g(m, n) is
the noise-corrupted input image; and l marks the current position of the filter
in the image, which is given by l = mM + n for the pixel position (m, n) in
an M × N image, and will take values from 0 to MN - 1.
The filter coefficients w_{l+1}(p, q) for the pixel position l + 1 are determined
by minimizing the MSE between the desired pixel value f(m, n) and the estimated pixel value \tilde{f}(m, n) at the present pixel location l, using the method
of steepest descent. The filter coefficients w_{l+1}(p, q) are estimated as the
present coefficients w_l(p, q) plus a change proportional to the negative gradient of the error power (MSE), expressed as

    w_{l+1}(p, q) = w_l(p, q) - \mu \, \nabla [e_l^2],    (3.147)
where \mu is a scalar multiplier controlling the rate of convergence and filter
stability; e_l is the error signal, defined as the difference between the desired
signal f(m, n) and the estimate \tilde{f}(m, n); and \nabla is a gradient operator [with
respect to w_l(p, q)] applied to the error power e_l^2 at l. Because the original
image f(m, n) is unknown, and the only image on hand is the noise-corrupted
image g(m, n), an approximation to the original image d(m, n) is used, and
the error is estimated as

    e_l = d(m, n) - \tilde{f}(m, n).    (3.148)

The technique used by Hadhoud and Thomas [206] to obtain d(m, n) was to
estimate it from the input image g(m, n) by decorrelation, the decorrelation
operator being the 2D delay operator of (1, 1) samples. This allows the correlation between d(m, n) and g(m, n) to be similar to the correlation between
f(m, n) and g(m, n), and, in turn, makes d(m, n) correlated to f(m, n) to
some extent. Evaluation of Equation 3.147 using Equation 3.146 and Equation 3.148 gives

    w_{l+1}(p, q) = w_l(p, q) + 2 \mu \, e_l \, g(m - p, n - q),    (3.149)

which is a recursive equation defining the filter coefficients at the pixel position
l + 1 in terms of those at the position l.
Implementation of the 2D LMS filter: Equation 3.146 and Equation 3.149 give the 2D LMS filter and the filter-weight updating algorithm,
respectively. Convergence of the algorithm does not depend upon the initial
conditions; it converges for any arbitrary initial value, and hence provides
good nonstationary performance. In comparing the 2D LMS filter with the
nonadaptive LMS algorithm, the second term of Equation 3.149 would not
be included in the latter, and the filter coefficients would not change from
pixel to pixel under the assumption that the image is stationary. This would
put a constraint on the initial coefficient values, because they would be the
values used for the whole image, and thus, different initial values would result
in different filtered outputs. Although the initial conditions do not affect the
convergence of the 2D LMS filter, the choice of the convergence factor \mu depends on the particular application, and involves a trade-off between the rate
of convergence, the ability to track nonstationarity, and the steady-state MSE.
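Equations 3.146 through 3.149 translate into a short raster-scan loop, sketched below in Python/NumPy. The causal zero padding at the image borders and the optional initial-weight argument `w0` are assumptions made here, not prescriptions of the original method:

```python
import numpy as np

def lms2d(g, mu=1e-7, P=3, w0=None):
    """2D LMS adaptive filter sketch (Equations 3.146 and 3.149).

    The desired signal d(m, n) is approximated by the input delayed by
    (1, 1) samples, following Hadhoud and Thomas.
    """
    g = g.astype(float)
    M, N = g.shape
    w = np.zeros((P, P)) if w0 is None else np.array(w0, dtype=float)
    f = np.zeros((M, N))
    gp = np.pad(g, ((P - 1, 0), (P - 1, 0)))  # causal zero padding
    for m in range(M):
        for n in range(N):
            # win[p, q] = g(m - p, n - q), the causal ROS of Eq. 3.146
            win = gp[m:m + P, n:n + P][::-1, ::-1]
            f[m, n] = np.sum(w * win)
            # desired signal: the (1, 1)-delayed (decorrelated) input
            d = g[m - 1, n - 1] if m > 0 and n > 0 else g[m, n]
            e = d - f[m, n]
            w = w + 2.0 * mu * e * win                # Equation 3.149
    return f
```

Setting `mu = 0` freezes the weights at `w0`, which corresponds to the nonadaptive LMS case discussed above.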
Example: Figures 3.45 (a) and (b) show a test image and its noisy version,
the latter obtained by adding zero-mean Gaussian noise with σ² = 256 to the
former. Part (c) of the figure shows the result of application of the 2D LMS
filter to the noisy image. In implementing Equation 3.146 and Equation 3.149,
the initial weights of the filter, w_0(p, q), were estimated by processing 10 lines
of the given image starting with zero weights. The weights obtained after
processing the 10 lines were then used as the initial conditions, and processing
was restarted at (m, n) = (0, 0) in the given image. The convergence factor
\mu was determined by trial and error for different images, as suggested by
Hadhoud and Thomas [206], and set to 0.4 × 10⁻⁷; the ROS used was 3 × 3.
The 2D LMS algorithm applies a gradually changing filter that tends to
suppress noise in a relatively uniform manner over the image. Although this
typically results in lower values of the MSE, it also tends to smooth or blur
edges and other structured features in the image, and to leave excessive noise
in uniform regions of the image. One explanation of this effect could be the
fact that the adaptive weights of the filter, w_l(p, q), depend on the model of the
original image, which is approximated by the decorrelated image d(m, n). Because d(m, n) is not an accurate approximation of the original image f(m, n),
the updated filter weights are not optimal, and thus the algorithm does not
give the optimal MSE value. The 2D LMS algorithm did not perform particularly well on the test image in Figure 3.45 (b). The resultant image in
Figure 3.45 (c) appears significantly blurred, with remnant noise.
3.7.4 The adaptive rectangular window LMS filter
In order to overcome the limitations due to the assumption of stationarity, a
Wiener filter approach using an adaptive-sized rectangular window (ARW) to
estimate the filter coefficients was proposed by Song and Pearlman [207, 208,
209], and refined by Mahesh et al. [210]. Using the same image degradation
model as that in Equation 3.109, but with the additional assumption that the
image processes also have zero mean, the estimate used by Mahesh et al. is
of the form

    \tilde{f}(m, n) = \alpha(m, n) \, g(m, n).    (3.150)

The problem reduces to that of finding the factor \alpha(m, n) at each pixel location
using the same minimum-MSE criterion as that of the standard Wiener filter.
The error is given by

    e(m, n) = f(m, n) - \tilde{f}(m, n) = f(m, n) - \alpha(m, n) \, g(m, n).    (3.151)

Minimization of the MSE requires that the error signal e be orthogonal to the
image g, that is,

    E \left\{ \left[ f(m, n) - \alpha(m, n) \, g(m, n) \right] g(m, n) \right\} = 0.    (3.152)

Solving for \alpha, we obtain

    \alpha(m, n) = \frac{ \sigma_f^2(m, n) }{ \sigma_f^2(m, n) + \sigma_\eta^2(m, n) }.    (3.153)

If the original image is not a zero-mean process, Equation 3.150 can still be
used by first subtracting the mean from the images f and g. Because the noise
is of zero mean for all pixels, the a posteriori mean \mu_g(m, n) of the image g
at pixel position (m, n) is equal to the a priori mean \mu_f(m, n) of the original
image f(m, n), and the estimate of Equation 3.150 thus becomes

    \tilde{f}(m, n) = \mu_g(m, n) + \frac{ \sigma_f^2(m, n) }{ \sigma_f^2(m, n) + \sigma_\eta^2(m, n) } \, [g(m, n) - \mu_g(m, n)].    (3.154)
FIGURE 3.45
(a) "Shapes3": a 128 × 128 test image with various geometrical objects placed at
random. (b) Image in (a) with Gaussian noise added, RMS error = 14.24. Result of
filtering the image in (b) with: (c) the 2D LMS filter, RMS error = 15.40; (d) two
passes of the ARW-LMS filter, RMS error = 7.07; (e) the ANNS filter, RMS error =
6.68; (f) two passes of the ANNS filter, RMS error = 5.10. Reproduced with permission from R.B. Paranjape, T.F. Rabie, and R.M. Rangayyan, "Image restoration by
adaptive-neighborhood noise subtraction", Applied Optics, 33(14):2861–2869, 1994.
© Optical Society of America.
Implementation of the ARW-LMS filter: In implementing Equation 3.154, it is necessary to make the assumption that the image pixel values
in the immediate neighborhood of a pixel (m, n) are samples from the same
ensemble as that of f(m, n); that is, a globally nonstationary process can be
considered to be locally stationary and ergodic over a small region. Thus, if we
can accurately determine the size of a neighborhood in which the image values
have the same statistical parameters, the sample statistics can approximate
the a posteriori parameters needed for the estimate in Equation 3.154.
The view taken by Song and Pearlman [207, 208, 209] was to identify the size
of a stationary square region for each pixel in the image, and to calculate the
local statistics of the image within that region. The size of the window changes
according to a measure of signal activity; an effective algorithm was proposed
to determine the window size that improved the performance of various point
estimators. The effect of the improved performance was greater smoothing
in relatively flat (signal-free) regions in the image and less smoothing across
edges.
In deriving the local sample statistics, denoted as \tilde{\mu}_g(m, n) and \tilde{\sigma}_f^2(m, n),
Mahesh et al. [210] made use of ARWs of length L_r in the row direction and
L_c in the column direction. Because the window is required to be centered
about the pixel being processed, the ARW lengths need to be odd. Except
near the borders of the image, the ARW dimensions can be expressed as
L_r = 2N_r + 1 and L_c = 2N_c + 1, where N_r and N_c are the dimensions of the
one-sided neighborhood. Within this window, the local mean and variance
are calculated as

    \tilde{\mu}_g(m, n) = \frac{1}{L_r L_c} \sum_{p = -N_r}^{+N_r} \sum_{q = -N_c}^{+N_c} g(m + p, n + q)    (3.155)

and

    \tilde{\sigma}_g^2(m, n) = \frac{1}{L_r L_c} \sum_{p = -N_r}^{+N_r} \sum_{q = -N_c}^{+N_c} \left[ g(m + p, n + q) - \tilde{\mu}_g(m, n) \right]^2.    (3.156)

The local variance of the original image, \tilde{\sigma}_f^2, is estimated as

    \tilde{\sigma}_f^2 = \begin{cases} \tilde{\sigma}_g^2 - \sigma_\eta^2 & \text{if } \tilde{\sigma}_g^2 > \sigma_\eta^2, \\ 0 & \text{otherwise.} \end{cases}    (3.157)

Using the local sample statistics as above, Equation 3.154 becomes

    \tilde{f}(m, n) = \tilde{\mu}_g(m, n) + \frac{ \tilde{\sigma}_f^2(m, n) }{ \tilde{\sigma}_f^2(m, n) + \sigma_\eta^2(m, n) } \, [g(m, n) - \tilde{\mu}_g(m, n)].    (3.158)

It should be noted that the parameters L_r, L_c, N_r, N_c, \tilde{\mu}_g, \tilde{\sigma}_g^2, and \tilde{\sigma}_f^2, as
well as the other parameters that follow, are computed for each pixel (m, n),
and should be denoted as L_r(m, n), etc.; this detail has been suppressed for
convenience of notation. It should also be noted that, although the noise
variance \sigma_\eta^2 is usually not known a priori, it can be easily estimated from a
window in a flat (signal-free) area of the degraded image.
For accurate estimation of \mu_g and \sigma_g^2, the pixels in the ARW should belong
to the same ensemble as that of the central pixel being filtered. If relatively
large windows are used, the windows may cross over the boundaries of different
regions and include pixels from other ensembles. In such a case, blurring could
result across the edges present within the windows. On the other hand, if the
windows are too small, the lack of samples would result in poor estimates
of the mean and variance, and consequently, insufficient noise suppression
would occur over uniform regions. Thus, it is desirable to use small windows
where the image intensity changes rapidly, and large windows where the image
contrast is relatively low.
In the method of Mahesh et al. [210], the ARW lengths L_r and L_c are varied
depending upon a signal activity parameter S_r, defined as

    S_r(m, n) = \frac{1}{L_r L_c} \sum_{p = -N_r}^{+N_r} \sum_{q = -N_c}^{+N_c} \left[ g(m + p, n + q) - \tilde{\mu}_r \right]^2 - \sigma_\eta^2,    (3.159)

where \tilde{\mu}_r is the local mean evaluated in the row direction as

    \tilde{\mu}_r = \frac{1}{L_r} \sum_{p = -N_r}^{+N_r} g(m + p, n).    (3.160)

S_r is a measure of the local roughness of the image in the row direction, being
equal to the variance of the original image in the same direction. A similar
signal activity parameter is defined in the column direction. If the signal
activity parameter S_r in the row direction is large, indicating the presence
of an edge or other information, the window size in the row direction N_r
is decremented so that points from other ensembles are not included. If S_r
is small, indicating that the current pixel lies in a low-contrast region, N_r
is incremented so that a better estimate of the mean and variance may be
obtained. In order to make this decision, the signal activity parameter in the
row direction is compared to a threshold T_r, and N_r is updated as follows:

    N_r \leftarrow N_r - 1 \quad \text{if } S_r \geq T_r,    (3.161)

or

    N_r \leftarrow N_r + 1 \quad \text{if } S_r < T_r.    (3.162)

A similar procedure is applied in the column direction to update N_c.
Prespecified minimum and maximum values for N_r and N_c are used to limit
the size of the ARW to reasonable dimensions, which could be related to the
size of the details present in the image. The threshold T_r is defined as

    T_r = \frac{ \gamma \, \sigma_\eta^2 }{ L_r },    (3.163)

where \gamma is a weighting factor that controls the rate at which the window size
changes. The threshold varies in direct proportion to the noise variance and
in inverse proportion to the ARW dimension. Thus, if the noise variance is
high, the threshold will be high, and the window length is more likely to be
incremented than decremented. This will lead to a large window and effective
smoothing of the noise. If the window size is large, the threshold will be small.
Thus, the window size is likely to be decremented for the next pixel. A similar
threshold is defined in the column direction. This helps the ARW length to
converge to a certain range.
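The procedure of Equations 3.155 through 3.163 can be sketched as below. This is a simplified, hedged rendering: the clipping of windows at the image borders, the parameter names, and the exact order of the row and column updates are choices of this sketch rather than specifications from the original papers:

```python
import numpy as np

def arw_lms(g, noise_var, n_min=0, n_max=2, gamma=7.0):
    """Adaptive rectangular window (ARW) LMS filter: a simplified sketch.

    The half-window sizes Nr and Nc are carried from pixel to pixel and
    updated by comparing the directional signal activity (Eq. 3.159)
    against the threshold of Equation 3.163.
    """
    g = g.astype(float)
    M, N = g.shape
    f = np.empty_like(g)
    Nr = Nc = n_max
    for m in range(M):
        for n in range(N):
            win = g[max(0, m - Nr):m + Nr + 1, max(0, n - Nc):n + Nc + 1]
            mu = win.mean()
            var_f = max(win.var() - noise_var, 0.0)          # Eq. 3.157
            denom = var_f + noise_var
            gain = var_f / denom if denom > 0 else 0.0
            f[m, n] = mu + gain * (g[m, n] - mu)             # Eq. 3.158
            # row-direction activity (Eq. 3.159) vs threshold (Eq. 3.163)
            col = g[max(0, m - Nr):m + Nr + 1, n]
            Sr = ((win - col.mean()) ** 2).mean() - noise_var
            Nr = max(n_min, Nr - 1) if Sr >= gamma * noise_var / (2 * Nr + 1) \
                else min(n_max, Nr + 1)
            # column direction, analogously
            row = g[m, max(0, n - Nc):n + Nc + 1]
            Sc = ((win - row.mean()) ** 2).mean() - noise_var
            Nc = max(n_min, Nc - 1) if Sc >= gamma * noise_var / (2 * Nc + 1) \
                else min(n_max, Nc + 1)
    return f
```

The defaults `n_min = 0` and `n_max = 2` correspond to the 1 × 1 minimum and 5 × 5 maximum window sizes used in the example below.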
Example: The result of application of the ARW-LMS filter to the noisy
test image in Figure 3.45 (b) is shown in part (d) of the same figure. The
ARW size was restricted to be a minimum of 1 × 1 and a maximum of 5 × 5;
the value of the weighting factor \gamma in Equation 3.163 was fixed at 7. The
ARW-LMS algorithm has resulted in much less smoothing at the edges of
the objects in the image than in the uniform regions. As a result, a layer of
noise remains in the filtered image surrounding each of the objects. Although
this is clearly objectionable in the synthesized image, the effect was not as
pronounced in the case of natural scenes, because ideal edges are not common
in natural scenes.
The ARW-LMS output image appears to be clearer and sharper than the 2D
LMS restored image [see Figure 3.45 (c)], because the ARW-LMS algorithm
tends to concentrate the error around the edges of objects, where the human
visual system tends to ignore the artifact. On the other hand, the 2D LMS
algorithm tends to reduce the error uniformly, which also leads to smoothing
across the edges; the decreased sharpness of the edges makes the result less
pleasing to the viewer.
3.7.5 The adaptive-neighborhood filter
An adaptive-neighborhood paradigm was proposed by Paranjape et al. [211,
212] to filter additive, signal-independent noise, and extended to multiplicative
noise by Das and Rangayyan [213], to signal-dependent noise by Rangayyan
et al. [201], and to color images by Ciuc et al. [214]. Unlike the other methods,
where statistics of the noise and signal are estimated locally within a fixed-size,
fixed-shape (usually rectangular) neighborhood, the adaptive-neighborhood
filtering approach consists of computing statistics within a variable-size, variable-shape neighborhood that is determined individually for every pixel in the
image. It is desired that the adaptive neighborhood grown for the pixel being
processed (which is called the "seed") contains only those spatially connected
pixels that are similar to the seed; that is, the neighborhood does not grow
over edges, but overlaps a stationary area. If this condition is fulfilled, the
statistics computed using the pixels inside the region are likely to be closer to
the true statistics of the local signal and noise components than the statistics
computed within fixed neighborhoods. Hence, adaptive-neighborhood filtering should yield more accurate results than fixed-neighborhood methods. The
approach should also prevent the blurring or distortion of edges, because adaptive neighborhoods, if grown with an appropriate threshold, should not mix
the pixels of an object with those belonging to its background.
There are two major steps in adaptive-neighborhood filtering: adaptive
region growing, and estimation of the noise-free value for the seed pixel using
statistics computed within the region.
Region growing for adaptive-neighborhood filtering: In adaptive-neighborhood filtering, a region needs to be grown for the pixel being processed (the seed) such that it contains only pixels belonging to the same object
or image feature as the seed. Paranjape et al. [211, 212] used the following
region-growing procedure for images corrupted by signal-independent Gaussian additive noise: the absolute difference between each of the 8-connected
neighbors g(p, q) and the seed g(m, n) is computed as

    d_{pq} = | g(p, q) - g(m, n) |.    (3.164)

Pixels g(p, q) having d_{pq} \leq T, where T is a fixed, predefined threshold, are
included in the region. The procedure continues by checking the neighbors of
the newly included pixels in the same manner, and stops when the inclusion
criterion is not fulfilled for any neighboring pixel. An adaptive neighborhood
is grown for each pixel in the image.
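The region-growing rule of Equation 3.164 amounts to a breadth-first flood fill; a minimal sketch follows (function and parameter names are hypothetical):

```python
from collections import deque
import numpy as np

def grow_region(g, seed, T):
    """Grow an adaptive neighborhood from `seed` = (m, n).

    8-connected pixels whose absolute difference from the seed value is
    at most T (Equation 3.164) are included; returns a boolean mask of
    the grown foreground region.
    """
    M, N = g.shape
    sm, sn = seed
    mask = np.zeros((M, N), dtype=bool)
    mask[sm, sn] = True
    queue = deque([seed])
    while queue:
        m, n = queue.popleft()
        # visit the 8-connected neighbors of the current pixel
        for dm in (-1, 0, 1):
            for dn in (-1, 0, 1):
                p, q = m + dm, n + dn
                if 0 <= p < M and 0 <= q < N and not mask[p, q] \
                        and abs(g[p, q] - g[sm, sn]) <= T:
                    mask[p, q] = True
                    queue.append((p, q))
    return mask
```

The mean or median of `g[mask]` then gives the output of the adaptive-neighborhood mean or median filter described below for the seed pixel.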
In addition to the foreground region, an adaptive background region is also
grown for each pixel. The background is obtained by expanding (dilating)
the outermost boundary of the foreground region by a prespecified number
of pixels. This provides a ribbon of a certain thickness that surrounds the
foreground region. The foreground region may be further modified by combining isolated pixels into local regions. Observe that a foreground region
could contain several disjoint regions that are not part of the foreground due
to differences in gray level that are larger than that permitted by the threshold applied; such regions could be considered to be part of the background,
although they are enclosed by the foreground region.
Example: Figure 3.46 illustrates the growth of an adaptive neighborhood.
The foreground part of the adaptive neighborhood is shown in a light shade
of gray. The black pixels within the foreground have the same gray level as
that of the seed pixel from where the process was commenced; the use of a
simple threshold for region growing as in Equation 3.164 will result in the
same region being grown for all such pixels: for this reason, they could be
called redundant seed pixels. The region-growing procedure need not be applied to such pixels, which results in computational savings. Furthermore, all
redundant seed pixels within a region get the same output value as computed
for the original seed pixel from where the process was commenced.
Adaptive-neighborhood mean and median filters: Paranjape et al.
[211] proposed the use of mean and median values computed using adaptive
neighborhoods to filter noise. The basic premise was that context-dependent
adaptive neighborhoods provide a larger population of pixels to compute local statistics than 3 × 3 or 5 × 5 neighborhoods, and that edge distortion
FIGURE 3.46
Illustration of the growth of an adaptive neighborhood (left to right and top
to bottom). The neighborhood being grown is shown in a light shade of gray.
The black pixels within the neighborhood have the same gray level as that of
the seed from where the process was commenced: they are called redundant
seed pixels. The last figure shows a region in a darker shade of gray that
surrounds the foreground region: this is known as the adaptive background
region. Figure courtesy of W.M. Morrow [215].
is prevented because such neighborhoods are not expected to transgress the
boundaries of the objects or features in the image. They also showed that the
method could be iterated to improve the results.
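Built on a simple fixed-tolerance region grower, the adaptive-neighborhood mean and median filters can be sketched as below. This is our own minimal sketch, assuming the inclusion rule of Equation 3.164; it also shows the redundant-seed reuse described earlier.

```python
from collections import deque

import numpy as np

def grow_region(g, seed, T):
    """8-connected region of pixels within tolerance T of the seed value."""
    rows, cols = g.shape
    seed_val = float(g[seed])
    region = {seed}
    queue = deque([seed])
    while queue:
        m, n = queue.popleft()
        for dm in (-1, 0, 1):
            for dn in (-1, 0, 1):
                p = (m + dm, n + dn)
                if (0 <= p[0] < rows and 0 <= p[1] < cols
                        and p not in region
                        and abs(float(g[p]) - seed_val) <= T):
                    region.add(p)
                    queue.append(p)
    return region

def an_filter(g, T, stat=np.mean):
    """Adaptive-neighborhood filtering: replace each pixel by the mean
    (or, with stat=np.median, the median) of its grown neighborhood.
    Redundant seed pixels reuse the value computed for their seed."""
    out = np.empty(g.shape, dtype=float)
    done = np.zeros(g.shape, dtype=bool)
    for m in range(g.shape[0]):
        for n in range(g.shape[1]):
            if done[m, n]:
                continue
            region = grow_region(g, (m, n), T)
            value = float(stat([float(g[p]) for p in region]))
            # All redundant seeds in the region get the same output.
            for p in region:
                if g[p] == g[m, n]:
                    out[p] = value
                    done[p] = True
    return out
```

On a two-level test image with a step edge larger than T, each neighborhood stays on its own side of the edge, so the filtered image is unchanged: this is the edge-preservation property claimed above.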
Example: Figures 3.47 (a) and (b) show a test image and its noisy version with additive Gaussian-distributed noise. Parts (c) and (d) of the figure show the results of filtering the noisy image using the 3 × 3 mean and median, respectively. The blurring of edges caused by the mean, and the distortion of shape caused by the median, are clearly seen in the results. Parts (e) and (f) of the figure illustrate the results of the adaptive-neighborhood mean and median filters, respectively. Both methods have been equally effective in removing the noise without causing any blurring or distortion of edges. However, in other experiments with high levels of noise, and when the process was iterated, it was observed that the adaptive-neighborhood filters could lead to the loss of objects that have small gray-level differences with respect to their surroundings. The adaptive neighborhood, in such a case, could mix an object and its surroundings if the threshold is low compared to the noise level and the contrast of the object.
Adaptive-neighborhood noise subtraction (ANNS): Paranjape et al. [212] proposed the ANNS method to remove additive, signal-independent noise. The algorithm estimates the noise value at the seed pixel g(m, n) by using an adaptive neighborhood, and then subtracts the noise value from the seed pixel to obtain an estimate of the original undegraded value f̃(m, n).
The strategy used in deriving the ANNS filter is based upon the same principles as those of the ARW-LMS algorithm: the image process f is assumed to be a zero-mean process of variance σ_f² that is observed in the presence of additive white Gaussian noise, resulting in the image g. The noise process η is assumed to have zero mean and variance σ_η², and is assumed to be uncorrelated with f. An estimate of the additive noise at the pixel (m, n) is obtained from the corresponding adaptive neighborhood grown in the corrupted image g as

    \tilde{\eta}(m, n) = \alpha \, g(m, n),   (3.165)

where α is a scale factor that depends on the characteristics of the adaptive neighborhood grown. Then, the estimate of f(m, n) is

    \tilde{f}(m, n) = g(m, n) - \tilde{\eta}(m, n),   (3.166)

which reduces to

    \tilde{f}(m, n) = \beta \, g(m, n),   (3.167)

where β = 1 − α.
As described in Section 3.7.4 on the ARW-LMS algorithm, if the images used are of nonzero mean, the estimate of Equation 3.167 can be used by first subtracting the mean of each image from both sides of the equation. Then, the estimate may be expressed as

    \tilde{f}(m, n) = \mu_g(m, n) + (1 - \alpha) \, [g(m, n) - \mu_g(m, n)],   (3.168)
FIGURE 3.47
(a) "Shapes2": a 128 × 128 test image with various geometrical objects placed at random. (b) Image in (a) with Gaussian noise added, RMS error = 8.24. Result of filtering the image in (b) with: (c) the 3 × 3 mean, RMS error = 9.24; (d) the 3 × 3 median, RMS error = 6.02; (e) the adaptive-neighborhood mean, RMS error = 3.16; (f) the adaptive-neighborhood median, RMS error = 4.01. Images courtesy of R.B. Paranjape.
where μ_g is the a posteriori mean of the degraded image g(m, n), which is also equal to the a priori mean μ_f of the original image f(m, n) for zero-mean noise.
The problem now is to find the factor α, which is based upon the criterion that the estimated noise variance σ_η̃² be equal to the original noise variance σ_η². The solution is obtained as follows:

    \sigma_\eta^2 = E[\tilde{\eta}^2]
                  = E\left[ \{ \alpha \, [g(m, n) - \mu_g] \}^2 \right]
                  = \alpha^2 \, \sigma_g^2
                  = \alpha^2 \, (\sigma_f^2 + \sigma_\eta^2).   (3.169)

The noise-estimation factor α is then given by

    \alpha = \sqrt{\frac{\sigma_\eta^2}{\sigma_f^2 + \sigma_\eta^2}}.   (3.170)

Thus, the estimate of Equation 3.168 becomes

    \tilde{f}(m, n) = \mu_g(m, n) + \left[ 1 - \sqrt{\frac{\sigma_\eta^2(m, n)}{\sigma_f^2(m, n) + \sigma_\eta^2(m, n)}} \, \right] [g(m, n) - \mu_g(m, n)].   (3.171)
The indices (m, n) have been introduced in the expression above to emphasize the point that the statistical parameters are computed for every pixel (using the corresponding adaptive neighborhood). The estimate f̃(m, n) given by Equation 3.171 may be considered to be an approximation of the original image f(m, n) if we are able to obtain accurate values for the statistical parameters μ_g(m, n) and σ_f²(m, n).
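Once the local statistics are available, the estimate of Equation 3.171 is a one-line computation per pixel. The following sketch (our own, with hypothetical argument names) makes the limiting behaviors explicit: a purely noisy region returns the local mean, and a noise-free region returns the pixel unchanged.

```python
import math

def anns_estimate(g_mn, mean_g, var_f, var_eta):
    """ANNS estimate of Equation 3.171 for one seed pixel.
    mean_g is the local mean of the degraded image over the adaptive
    neighborhood; var_f and var_eta are the local signal and noise
    variances."""
    if var_f + var_eta == 0.0:
        return mean_g                       # nothing to correct
    alpha = math.sqrt(var_eta / (var_f + var_eta))
    return mean_g + (1.0 - alpha) * (g_mn - mean_g)
```

With var_eta = 0 the factor α is zero and the pixel is returned as-is; with var_f = 0 the factor α is unity and the local mean is returned.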
Implementation of the ANNS filter: In implementing Equation 3.171, we need to derive the local (sample) statistics from the adaptive neighborhood grown at every seed pixel location (m, n) in the degraded image. This could be achieved in a manner similar to that described in Section 3.7.4 for the ARW-LMS filter.
Paranjape et al. [212] defined the tolerance used for growing adaptive neighborhoods in an adaptive manner, depending upon the signal activity in the region and the features surrounding it, as follows. A limit of Q pixels is set for the adaptive neighborhoods. An initial region is first grown with the tolerance set to the full dynamic range of the input image (that is, T = 256). This results in a square region of size Q pixels being formed. Using the foreground pixels in the adaptive neighborhood, a measure of the uncorrupted signal activity in the adaptive neighborhood of the seed pixel is given by the local signal variance σ̃_f². The signal variance in the adaptive neighborhood is then compared with the noise variance σ_η², and if σ̃_f² > 2σ_η², it is assumed that
the adaptive neighborhood has identified a region with significant structural characteristics, such as an edge or other distinct objects. This is contrary to the desired characteristics of an adaptive neighborhood. An adaptive neighborhood is to be formed such that it includes relatively uniform structures (or background) in the original image, so that the primary source of variation in the adaptive neighborhood is the additive noise. Therefore, the gray-level tolerance used to define the adaptive neighborhood is modified to T = 2σ̃_f, with the notion that the signal standard deviation σ̃_f be used to define the new adaptive neighborhood. The adaptive neighborhood is grown again using the new tolerance. Because the tolerance has been reduced, the new adaptive neighborhood developed, presumably, will not contain edges or structural features in the image, but will rather grow up to, but not include, such features. Using this approach to define the adaptive neighborhoods, the statistics of the adaptive neighborhoods are used in Equation 3.171 to estimate the uncorrupted image at each pixel location. In the situation that the foreground has only one pixel, the adaptive neighborhood is enlarged to include the background layer of pixels (of size 3 × 3); this approach is particularly useful in the presence of impulse noise or outliers in the corrupted image.
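The two-stage tolerance selection described above may be sketched as follows. This is a sketch under stated assumptions: the source does not spell out how σ̃_f² is obtained from the first-pass region, so the code assumes the sample variance minus the noise variance, and the helper names are ours.

```python
import math
from collections import deque

import numpy as np

def grow_limited(g, seed, T, Q):
    """8-connected region growing with tolerance T, capped at Q pixels."""
    rows, cols = g.shape
    seed_val = float(g[seed])
    region = {seed}
    queue = deque([seed])
    while queue and len(region) < Q:
        m, n = queue.popleft()
        for dm in (-1, 0, 1):
            for dn in (-1, 0, 1):
                p = (m + dm, n + dn)
                if (0 <= p[0] < rows and 0 <= p[1] < cols
                        and p not in region
                        and abs(float(g[p]) - seed_val) <= T
                        and len(region) < Q):
                    region.add(p)
                    queue.append(p)
    return region

def adaptive_neighborhood(g, seed, var_eta, Q=25, full_range=256):
    # First pass: tolerance equal to the full dynamic range, yielding
    # a roughly square region of up to Q pixels.
    region = grow_limited(g, seed, full_range, Q)
    values = [float(g[p]) for p in region]
    # Signal-activity estimate: sample variance minus the noise
    # variance (an assumed estimator; Equation 3.171 needs var_f).
    var_f = max(float(np.var(values)) - var_eta, 0.0)
    if var_f > 2.0 * var_eta:
        # Busy region: regrow with the reduced tolerance T = 2*sigma_f
        # so that the neighborhood stops at, but does not cross, edges.
        region = grow_limited(g, seed, 2.0 * math.sqrt(var_f), Q)
    return region
```

On a flat patch the first-pass region is kept (the foreground grows to the Q-pixel bound); near a strong edge the reduced tolerance keeps the second-pass region on the seed's side of the edge.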
The advantage of the ANNS method lies in the fact that, in flat or slowly varying regions, the signal variance will be small compared to 2σ_η², and the adaptive-neighborhood foreground will grow to the maximum foreground bound of Q pixels (but of arbitrary shape depending upon the local image features). On the other hand, in busy regions where the signal variance is high compared to 2σ_η², the tolerance will be reduced and the foreground will grow up to any edge present, but not across the edge. This results in the removal of noise up to the edges present in the image, which the ARW-LMS filter fails to do [see Figure 3.45 (d)].
Example: Figure 3.45 (e) shows the result of application of the ANNS method to the noisy test image in part (b) of the same figure. The maximum adaptive-neighborhood size Q was set to 25 pixels; this allows for direct comparison of the performance of the ANNS method against the ARW-LMS method. However, the ANNS algorithm uses a variable-shape window (adaptive neighborhood) in order to compute the filtered image, whereas the ARW-LMS method is restricted to rectangular windows. Unlike the ARW-LMS filter window, the size of the adaptive neighborhood is not compromised near the edges in the image; rather, its shape changes according to the contextual details present in the image. This aspect of the ANNS method allows for better estimation of the noise near edges, and thus permits greater noise suppression in such areas. Both the ANNS and ARW-LMS methods appear to have reduced the noise equally well in relatively uniform regions of the image; however, the ARW-LMS filter output contains residual noise around the edges of the objects in the image.
Repeated (iterative) application is a powerful and useful attribute of the ARW-LMS and ANNS methods. Figure 3.45 (f) shows the result of two-pass ANNS filtering of the test image in part (b) of the figure. The ARW-LMS
output in part (d) of the figure was obtained using two iterations. For both of the algorithms, updated values of the noise variance are required for the second and subsequent passes through the filter. The second-pass noise variance was estimated by calculating the variance of the output image after the first pass, and subtracting from it an estimate of the variance of the noise-free image f(m, n). The resulting images [see Figures 3.45 (d) and (f)] show significant improvement over the results from a single pass through the algorithms. The major artifact after the first pass through the algorithms was the retention of noise around distinct edges in the image. With both algorithms, such layers of noise were greatly reduced after the second application; the ANNS method performed better than the ARW-LMS method after the second pass. A second artifact that was observed was the patchy appearance of the originally uniform background regions in the result after the first pass; this artifact, too, was reduced after the second pass with both algorithms.
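The second-pass noise-variance update described above amounts to a subtraction of variances, clamped at zero. A minimal sketch (names are ours; the estimate of the noise-free image variance is assumed to be supplied):

```python
import numpy as np

def second_pass_noise_variance(first_pass_output, var_f_est):
    """Noise variance for the second filtering pass: the variance of
    the first-pass output minus an estimate of the variance of the
    noise-free image, clamped at zero."""
    return max(float(np.var(first_pass_output)) - var_f_est, 0.0)
```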
Extension to filter signal-dependent noise: Rangayyan et al. [201] used the same basic region-growing procedure as above, because most types of noise (Poisson, film-grain, and speckle) can be modeled as additive noise, with modifications with respect to the fact that the noise is signal-dependent. The first modification is to let the threshold T vary across the image according to the local statistics of the noise, which depend upon the average brightness of the corrupted area. It is obvious that the performance of the filter strongly depends upon the threshold value. A small T would lead to small regions over which the noise statistics cannot be reliably estimated, whereas a large T would lead to regions that may grow over edges present in the image, with the result that stationarity in such regions is no longer guaranteed. A reasonable value for T, large enough to allow inclusion in the region of representative samples of noise and small enough to prevent the region from growing over edges, is the local standard deviation of the noise:

    T = \sigma_\eta(m, n).   (3.172)
Before growing the region for the pixel located at the coordinates (m, n), one has to estimate σ_η(m, n). A coarse estimation would consist of taking the actual value of the seed g(m, n) for the statistical expectation values in Equation 3.27 for Poisson noise, Equation 3.29 for film-grain noise, and Equation 3.31 for speckle noise. However, when the noise level is high, more accurate estimation is required, because the corrupted seed value may differ significantly from its average (expected) value.
Estimation of seed and threshold values: Rangayyan et al. [201] defined an initial estimate for the seed pixel value g(m, n) as the α-trimmed mean value ḡ_TM(m, n) or the median, computed within a 3 × 3 window. This step was found to be useful in order to prevent the use of a pixel significantly altered by noise (an outlier) for region growing. Then, the noise standard deviation was computed using the initial estimate in place of E{g(m, n)} in Equation 3.27, 3.29, or 3.31, depending upon the type of noise.
The threshold T was defined as in Equation 3.172, and the seed's neighbors were inspected for inclusion in the region according to Equation 3.164, in which the actual seed value g(m, n) was replaced by the initial estimate. Because the initial estimate provides only a basic estimate of the seed value, the estimated noise standard deviation and, consequently, the threshold value T might differ from their true or optimal values. To overcome this drawback, T may be continuously updated while the region is being grown, its value being computed as before, but by using the mean of the pixels already included in the region for the expectation values in Equation 3.27, 3.29, or 3.31.
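The seed and threshold estimation can be sketched as below. Because Equations 3.27, 3.29, and 3.31 are not reproduced in this section, the mapping from the local mean to the noise variance is passed in as a function; the median is used here as the robust initial estimate (the α-trimmed mean is the stated alternative). Names are our own.

```python
import numpy as np

def seed_and_threshold(g, m, n, noise_var):
    """Initial seed estimate and tolerance for signal-dependent noise.
    `noise_var` maps a local mean gray level to the noise variance
    (one of Equations 3.27, 3.29, or 3.31, depending on the noise)."""
    # Median over a 3x3 window guards against an outlier seed value.
    win = g[max(m - 1, 0):m + 2, max(n - 1, 0):n + 2]
    seed_est = float(np.median(win))
    # Equation 3.172: the tolerance is the local noise std. deviation,
    # evaluated with the robust seed estimate in place of E{g(m, n)}.
    T = float(np.sqrt(noise_var(seed_est)))
    return seed_est, T
```

For example, even if the seed pixel itself is an outlier, the 3 × 3 median supplies a sensible value from which the tolerance is derived.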
Revising the region to reduce bias: While the region is being grown, pixels that are inspected but do not meet the inclusion criterion are marked as background pixels. After the region-growing procedure stops, there are three types of pixels: region (or foreground) pixels, background pixels, and pixels that were not inspected because they are not connected to any pixel in the region. The area that holds the foreground pixels is 8-connected, although not necessarily compact, whereas the background is composed of several disconnected areas, many of which may be, at most, two pixels wide, and could be lying within the foreground area as well. Because a region is desired to be compact, that is, free of holes of one or two pixels, some of the background pixels could be further checked for inclusion in the region. One can interpret such pixels as belonging to the same region as the seed, but failing, due to noise, to meet the inclusion criterion in the first step of region growing. Ignoring such pixels would result in biased estimation of the signal or noise statistics. On the other hand, there would be background pixels adjacent to the external border of the foreground area that should not be included in the region because they, most likely, belong to other objects. For these reasons, Rangayyan et al. [201] derived a criterion for further inclusion of selected background pixels in the region: they included in the region all background pixels whose 8-connected neighbors are all either in the foreground or in the background. If even one of a background pixel's neighbors was not inspected in the initial region-growing step, then that pixel was exempted from inclusion in the foreground. This criterion was found to work efficiently in many trials with different types of noise. Figure 3.48 presents a flowchart of the adaptive region-growing procedure.
The criterion for the inclusion of background pixels in the region described above does not take into account the values of the background pixels, and is based only on their spatial relationships. It was implicitly assumed that all objects in the image are at least three pixels wide in any direction. However, for images that may contain small objects or regions (as in fine texture), the values of the inspected background pixels should also be taken into account. It was suggested that, in such a case, if a background pixel fulfills the spatial inclusion criterion, it should be added to the region only if its value does not differ significantly from the average value of the pixels in the region (for example, the absolute difference between the inspected background pixel value
1. START: Compute the seed estimate and the threshold. Add the seed pixel to the foreground queue (FQ).
2. Set the next pixel in FQ as the current pixel.
3. Inspect the next neighbor of the current pixel. If the inclusion condition is fulfilled, add the neighbor to FQ and update the threshold (optional); otherwise, add the neighbor to the background queue (BQ).
4. If not all of the current pixel's neighbors have been inspected, return to step 3.
5. If not all FQ pixels have been inspected and the FQ size is within the limit, return to step 2.
6. Inspect the next pixel from BQ. If all of its neighbors are either in FQ or in BQ, add the BQ pixel to the region.
7. If not all pixels in BQ have been inspected, return to step 6.
8. Compute the seed estimate using statistics within the region. STOP.
FIGURE 3.48
Flowchart of the adaptive region-growing procedure. Reproduced with permission from R.M. Rangayyan, M. Ciuc, and F. Faghih, "Adaptive neighborhood filtering of images corrupted by signal-dependent noise", Applied Optics, 37(20):4477–4487, 1998. © Optical Society of America.
and the average value of the pixels in the region is smaller than twice the local variance of the noise).
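The spatial inclusion criterion for background pixels can be sketched as follows (a minimal sketch with our own function names; the foreground and background are represented as sets of pixel coordinates):

```python
def include_background(foreground, background, shape):
    """Add to the region every background pixel all of whose in-bounds
    8-neighbors were inspected (that is, are either foreground or
    background pixels); pixels with an uninspected neighbor most
    likely border another object and are left out."""
    rows, cols = shape
    inspected = foreground | background
    added = set()
    for (m, n) in background:
        neighbors = [(m + dm, n + dn)
                     for dm in (-1, 0, 1) for dn in (-1, 0, 1)
                     if not (dm == 0 and dn == 0)]
        if all(p in inspected for p in neighbors
               if 0 <= p[0] < rows and 0 <= p[1] < cols):
            added.add((m, n))
    return foreground | added
```

A one- or two-pixel hole inside the foreground satisfies the test and is filled, which removes the bias described above; a background pixel on the outer border of the region fails it, because at least one of its neighbors was never inspected.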
Figure 3.49 illustrates a sample result of the adaptive region-growing procedure. The foreground region size was limited to a predetermined number of pixels (100) in order to reduce computing time.
Adaptive-region-based LLMMSE filter: Once an adaptive region is grown, statistics of the signal and, from them, statistics of the noise are computed using the pixels in the foreground region obtained. The mean and variance of the noisy signal are computed using the pixels in the foreground instead of using a rectangular window, as is commonly done. The variance of the noise is computed using Equation 3.27, 3.29, or 3.31, depending upon the type of noise. Finally, the LLMMSE estimate is computed according to Equation 3.135 and assigned to the seed pixel location in the output image. The second term in the LLMMSE estimate formula can be interpreted as a correction term whose contribution to the final result is important when the variance of the signal is much larger than the variance of the noise. Over flat regions, where the variations in the image are due mostly to the noise, the major contribution is given by the first term, that is, the mean of the noisy signal. This is always the case within a region that is grown to be relatively uniform, as in adaptive region growing. Hence, the mean of the foreground pixels by itself represents a good estimator of the noise-free seed pixel; this is advantageous when computational-time specifications are restrictive.
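Assuming the common local-LLMMSE form for Equation 3.135 (that equation is not reproduced in this section, so this form is an assumption), the adaptive-region estimate can be sketched as:

```python
import numpy as np

def llmmse_estimate(g_seed, region_values, var_eta):
    """LLMMSE estimate of the seed pixel from the statistics of the
    grown region: mean plus a gain-scaled correction term, where the
    gain is the fraction of the observed variance attributed to the
    signal."""
    mean_g = float(np.mean(region_values))
    var_g = float(np.var(region_values))
    if var_g <= var_eta:
        # Flat region: the observed variation is all noise, so the
        # first term (the mean) is the whole estimate.
        return mean_g
    gain = (var_g - var_eta) / var_g
    return mean_g + gain * (g_seed - mean_g)
```

This makes the remark above concrete: within a uniformly grown region the gain is near zero and the estimate collapses to the foreground mean, whereas a strong signal component drives the gain toward unity and the seed value is largely retained.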
Example: Figure 3.50 (a) shows a test image of a clock. The image contains a significant amount of noise, suspected to be due to poor shielding of the video-signal cable between the camera and the digitizing frame buffer. Parts (b)–(f) of the figure show the results of application of the 3 × 3 mean, 3 × 3 median, refined LLMMSE, NURW, and adaptive-neighborhood LLMMSE filters to the test image. The filters have suppressed noise to similar levels; however, the mean filter has caused significant blurring of the image, the median has resulted in some shape distortion in the numerals of the clock, and the refined LLMMSE has led to a patchy appearance. The NURW and adaptive-neighborhood LLMMSE filters have provided good noise suppression without causing edge degradation.
3.8 Comparative Analysis of Filters for Noise Removal
In a comparative study [201] of several of the filters described in the preceding sections, the Shapes and Peppers images were corrupted by different types of noise and filtered. The performance of the filters was assessed using the MSE and by visual inspection of the results. The following paragraphs provide the details of the study.
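For reference, the MSE used throughout the study (and the RMS error quoted with some figures, which is its square root) is the standard definition:

```python
import numpy as np

def mse(f, g):
    """Mean-squared error between an original image f and a processed
    image g; the RMS error is sqrt(mse(f, g))."""
    diff = np.asarray(f, dtype=float) - np.asarray(g, dtype=float)
    return float(np.mean(diff ** 2))
```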
FIGURE 3.49
Illustration of the steps in adaptive region growing: (a) 25 × 25-pixel portion of the original Peppers image. (b) Image corrupted by Poisson noise with λ = 0.1. (c) The seed pixel, shown in white and located at the center of the image. (d) First step of region growing on the corrupted image: foreground pixels are in white, background pixels are in light gray. The foreground size has been limited to 100 pixels. (e) Region after inclusion of interior background pixels. (f) Filtered image. In (c), (d), and (e), the region has been superimposed over the uncorrupted image for convenience of display. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.50
(a) Clock test image. Result of filtering the image in (a) using: (b) 3 × 3 mean; (c) 3 × 3 median; (d) the refined LLMMSE filter; (e) the NURW filter; and (f) the adaptive-neighborhood LLMMSE filter. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
Additive, Gaussian noise: Random noise with Gaussian distribution having zero mean and a standard deviation of 20 was added to the test images. Figures 3.51 and 3.52 show the original images, their noisy versions, and the results of a few selected filters. The MSE values of the noisy and filtered images are listed in Tables 3.1 and 3.2. The application of the LLMMSE filter using the adaptive-neighborhood paradigm led to the best results with both images. The NURW and adaptive-neighborhood mean filters also provided good results with the Peppers image.
Additive, uniformly distributed noise: The Shapes and Peppers test images were corrupted by adding uniformly distributed noise with μ = 0, σ = 20. The results of filtering the noisy images are shown in Figures 3.53 and 3.54. The MSE values of the images are listed in Tables 3.1 and 3.2. The application of the LLMMSE filter using the adaptive-neighborhood paradigm led to the best result with the Shapes image. With the Peppers image, the NURW filter provided better results than the adaptive-neighborhood LLMMSE filter.
Poisson noise: The test images were corrupted by Poisson noise with λ = 0.1. The results of filtering the noisy images are shown in Figures 3.55 and 3.56. The MSE values of the images are listed in Tables 3.1 and 3.2. The application of the LLMMSE filter using the adaptive-neighborhood paradigm led to the best result with the Shapes image. With the Peppers image, the NURW filter provided results comparable to those of the adaptive-neighborhood mean and LLMMSE filters.
Film-grain noise: The test images were corrupted by purely signal-dependent film-grain noise, with the variance of the signal-independent component set to zero, σ₂²(m, n) = 0, and the two parameters of the signal-dependent term in the model given in Equation 3.28 set to 3.3 and 1, respectively. The results of filtering the noisy images are shown in Figures 3.57 and 3.58. The MSE values of the images are listed in Tables 3.1 and 3.2. The LLMMSE filter using the adaptive-neighborhood paradigm provided the best result with the Shapes image. With the Peppers image, the NURW filter provided results comparable to those of the adaptive-neighborhood mean and LLMMSE filters.
Speckle noise: Speckle noise was simulated by multiplicative noise, the noise having the exponential distribution

    p_{\eta(m, n)}(x) = \exp(-x), \quad x \ge 0.   (3.173)
FIGURE 3.51
(a) Shapes: a 128 × 128 test image. (b) Image in (a) with Gaussian noise added, with μ = 0, σ = 20; MSE = 228.75. Result of filtering the noisy image in (b) using: (c) 3 × 3 LLMMSE, MSE = 108.39; (d) NURW, MSE = 132.13; (e) adaptive-neighborhood mean, MSE = 205.04; and (f) adaptive-neighborhood LLMMSE, MSE = 78.58. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.52
(a) 512 × 512 Peppers test image. (b) Image in (a) with Gaussian noise added, with μ = 0, σ = 20; MSE = 389.87. Result of filtering the noisy image in (b) using: (c) refined LLMMSE, MSE = 69.49; (d) NURW, MSE = 54.70; (e) adaptive-neighborhood mean, MSE = 55.21; and (f) adaptive-neighborhood LLMMSE, MSE = 52.32. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
TABLE 3.1
MSE Values of the Noisy and Filtered Versions of the 128 × 128 Shapes Image.

Noise type     Noisy    3 Mean  3 Med.  5 Mean  5 Med.  3 LL     5 LL     R-LL     NURW     AN Mean  AN Med.  AN LL
Gaussian       228.75   469.26  213.71  772.76  518.17  108.39   122.29   124.88   132.13   205.04   197.93   78.58
Uniform        226.20   479.68  236.55  785.62  530.75  113.83   130.51   133.63   144.71   216.52   204.90   93.08
Poisson        241.07   441.04  266.85  743.73  657.35  108.70   130.14   131.47   147.87   215.18   249.41   62.57
Film-grain     275.11   450.92  283.74  746.90  665.78  119.81   147.27   141.64   166.42   236.27   296.81   69.98
Speckle        255.43   445.61  278.76  749.00  665.02  119.15   147.43   138.49   166.75   236.09   286.90   68.01
Salt & pepper  1740.86  642.20  206.63  835.46  557.75  1739.37  1405.09  1739.10  1740.84  213.02   205.72   1686.13

Note: 3 = 3 × 3. 5 = 5 × 5. Med. = Median. LL = LLMMSE. R = Refined. AN = Adaptive neighborhood.
TABLE 3.2
MSE Values of the Noisy and Filtered Versions of the 512 × 512 Peppers Image.

Noise type     Noisy    3 Mean  3 Med.  5 Mean  5 Med.  3 LL    5 LL    R-LL    NURW    AN Mean  AN Med.  AN LL
Gaussian       389.87   74.89   93.55   84.88   71.80   86.53   68.19   69.49   54.70   55.21    57.50    52.32
Uniform        391.43   75.09   129.08  85.30   89.10   76.17   63.02   65.25   54.39   62.53    70.98    58.62
Poisson        1132.56  159.29  239.26  116.29  139.50  197.87  121.71  133.82  85.32   88.83    110.10   87.48
Film-grain     1233.43  168.25  245.77  119.59  135.32  212.03  125.25  117.54  88.83   90.54    101.19   89.07
Speckle        988.84   142.62  204.39  110.91  119.98  172.35  105.67  100.76  77.59   81.53    91.45    79.26
Salt & pepper  947.69   144.01  22.93   117.02  38.70   886.03  832.25  821.88  872.37  34.49    29.27    861.01

Note: 3 = 3 × 3. 5 = 5 × 5. Med. = Median. LL = LLMMSE. R = Refined. AN = Adaptive neighborhood.
Both the mean and the variance of the noise PDF above are equal to unity, which leads to a highly noisy image with SNR = 1. When applying filtering methods directly on such images, the results are typically of poorer quality than with other types of noise. A common preprocessing step in speckle-noise reduction is to obtain several realizations of the same image and to average them, which reduces the power of the noise [186, 187]. In the present study, the filters were applied after averaging four frames of corrupted versions of the same image, which reduced the noise variance to one half of its initial value. The results of filtering the noisy images are shown in Figures 3.59 and 3.60. The MSE values of the images are listed in Tables 3.1 and 3.2. The application of the LLMMSE filter using the adaptive-neighborhood method provided the best result with the Shapes image. With the Peppers image, the NURW filter gave results comparable to those of the adaptive-neighborhood mean and LLMMSE filters.
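The simulation of Equation 3.173 and the frame-averaging preprocessing step can be sketched as follows (a sketch under stated assumptions: unit-mean, unit-variance exponential multiplicative noise; the random seed and frame count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle(f):
    """One realization of multiplicative speckle: the noise-free image
    multiplied by unit-mean, unit-variance exponentially distributed
    noise (Equation 3.173)."""
    return f * rng.exponential(scale=1.0, size=f.shape)

def average_frames(f, k):
    """Average k independently speckled frames of the same scene;
    averaging several realizations reduces the power of the noise
    before any spatial filtering is applied."""
    return sum(speckle(f) for _ in range(k)) / k
```

With independent realizations, the averaged image has a visibly smaller variance than a single speckled frame while retaining the same mean, which is why the filters in the study were applied after the averaging step.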

Salt-and-pepper noise: The test images were corrupted with salt-and-pepper noise such that 5% of the pixels would become outliers. Although only the median filter would be appropriate for filtering salt-and-pepper noise, all of the filters in the comparative study were applied to the noisy image. The results of filtering the noisy images are shown in Figures 3.61 and 3.62. The MSE values of the images are listed in Tables 3.1 and 3.2.

Discussion: The results presented above indicate that the LLMMSE estimate, especially when computed using the adaptive-neighborhood paradigm or when applied repeatedly, can successfully remove several types of signal-independent and signal-dependent noise. The use of local statistics in an adaptive and nonlinear filter is a powerful approach to remove noise while retaining the edges in the images with minimal distortion. In the case of salt-and-pepper noise, the local median is the most appropriate method, due to the high incidence of outliers, which renders the use of the local variance inappropriate. Although the use of the 3 × 3 median as the initial estimate for region growing in the adaptive-neighborhood procedure led to clipping of the corners in some cases, the application of the LLMMSE filter within the same context corrected this distortion to some extent.

The large number of examples provided also illustrates the difficulty in comparing the results provided by several filters. Although the MSE or the RMS error is commonly used for this purpose, and has been provided with several illustrations in this chapter, it is seen that, in some cases, an image with a larger MSE may be preferred to one with a lower MSE but with some distortion present. In practical applications, it is important to obtain an assessment of the results by the end-user or by a specialist in the relevant area of application.
FIGURE 3.53
(a) Shapes test image. (b) Image in (a) with uniformly distributed noise added, with μ = 0, σ = 20; MSE = 226.20. Result of filtering the noisy image in (b) using: (c) 3 × 3 LLMMSE, MSE = 113.83; (d) NURW, MSE = 144.71; (e) adaptive-neighborhood mean, MSE = 216.52; (f) adaptive-neighborhood LLMMSE, MSE = 93.08. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.54
(a) Peppers test image. (b) Image in (a) with uniformly distributed noise added, with μ = 0, σ = 20; MSE = 391.43. Result of filtering the noisy image in (b) using: (c) refined LLMMSE, MSE = 65.25; (d) NURW, MSE = 54.39; (e) adaptive-neighborhood mean, MSE = 62.53; (f) adaptive-neighborhood LLMMSE, MSE = 58.62. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.55
(a) Shapes test image. (b) Image in (a) with Poisson noise, with λ = 0.1; MSE = 241.07. Result of filtering the noisy image in (b) using: (c) 3 × 3 LLMMSE, MSE = 108.70; (d) NURW, MSE = 147.87; (e) adaptive-neighborhood mean, MSE = 215.18; (f) adaptive-neighborhood LLMMSE, MSE = 62.57. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.56
(a) Peppers test image. (b) Image in (a) with Poisson noise, with λ = 0.1; MSE = 1132.56. Result of filtering the noisy image in (b) using: (c) refined LLMMSE, MSE = 133.82; (d) NURW, MSE = 85.32; (e) adaptive-neighborhood mean, MSE = 88.83; (f) adaptive-neighborhood LLMMSE, MSE = 87.48. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.57
(a) Shapes test image. (b) Image in (a) with film-grain noise; MSE = 275.11. Result of filtering the noisy image in (b) using: (c) 3 × 3 LLMMSE, MSE = 119.81; (d) NURW, MSE = 166.42; (e) adaptive-neighborhood mean, MSE = 236.27; (f) adaptive-neighborhood LLMMSE, MSE = 69.98. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.58
(a) Peppers test image. (b) Image in (a) with film-grain noise; MSE = 1233.43. Result of filtering the noisy image in (b) using: (c) refined LLMMSE, MSE = 117.54; (d) NURW, MSE = 88.83; (e) adaptive-neighborhood mean, MSE = 90.54; (f) adaptive-neighborhood LLMMSE, MSE = 89.07. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.59
(a) Shapes test image. (b) Image in (a) with speckle noise, MSE = 255.43. Result of filtering the noisy image in (b) using: (c) 3 × 3 LLMMSE, MSE = 119.15; (d) NURW, MSE = 116.75; (e) adaptive-neighborhood mean, MSE = 236.09; (f) adaptive-neighborhood LLMMSE, MSE = 68.01. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.60
(a) Peppers test image. (b) Image in (a) with speckle noise, MSE = 988.84. Result of filtering the noisy image in (b) using: (c) refined LLMMSE, MSE = 100.76; (d) NURW, MSE = 77.59; (e) adaptive-neighborhood mean, MSE = 81.54; (f) adaptive-neighborhood LLMMSE, MSE = 79.26. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.61
(a) Shapes test image. (b) Image in (a) with salt-and-pepper noise, MSE = 1740.86. Result of filtering the noisy image in (b) using: (c) 3 × 3 mean, MSE = 642.20; (d) 3 × 3 median, MSE = 206.63; (e) adaptive-neighborhood mean, MSE = 213.02; (f) adaptive-neighborhood median, MSE = 205.72. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.62
(a) Peppers test image. (b) Image in (a) with salt-and-pepper noise, MSE =
947:69. Result of ltering the noisy image in (b) using: (c) 5  5 mean, MSE =
117:02 (d) 33 median, MSE = 22:93 (e) adaptive-neighborhood mean, MSE
= 34:49 (f) adaptive-neighborhood median, MSE = 29:27. Figure courtesy
of M. Ciuc, Laboratorul de Analiza $si Prelucrarea Imaginilor, Universitatea
Politehnica Bucure$sti, Bucharest, Romania.

3.9 Application: Multiframe Averaging in Confocal Microscopy
The confocal microscope uses a laser beam to scan and image finely focused planes within fluorescent-dye-tagged specimens that could be a few mm in thickness [216, 217, 218]. The use of a coherent light source obviates the blur caused in imaging with ordinary white light, where the different frequency components of the incident light are reflected and refracted at different angles by the specimen. Laser excitation causes the dyes to emit light (that is, fluoresce) at particular wavelengths. The use of multiple dyes to stain different tissues and structures within the specimen permits their separate and distinct imaging.
The confocal microscope uses a pinhole to permit the passage of only the light from the plane of focus; light from the other planes of the specimen is blocked. Whereas the use of the pinhole permits fine focusing, it also reduces significantly the amount of light that is passed for further detection and viewing. For this reason, a PMT is used to amplify the light received. A scanning mechanism is used to raster-scan the sample in steps that could be as small as 0.1 µm. The confocal microscope facilitates the imaging of multiple focal planes separated by distances of the order of 1 µm; several such slices may be acquired and combined to build 3D images of the specimen.
The use of a laser beam for scanning and imaging carries limitations. The use of high-powered laser beams to obtain strong emitted light could damage the specimen by heating. On the other hand, low laser power levels result in weak emitted light, which, during amplification by the PMT, could suffer from high levels of noise. Scanning with a low-power laser beam over long periods of time to reduce noise could lead to damage of the specimen by photo-bleaching (the affected molecules permanently lose their capability of fluorescence). Images could, in addition, be contaminated by noise due to autofluorescence of the specimen. A technique commonly used to improve image quality in confocal microscopy is to average multiple acquisitions of each scan line or of the full image frame (see Section 3.2).
Figure 3.63 shows images of cells from the nucleus pulposus (the central portion of the intervertebral discs, which are cartilaginous tissues lying between the bony vertebral bodies) [216]. The specimen was scanned using a laser beam of wavelength 488 nm. The red-dye (long-pass cutoff at 585 nm) and green-dye (pass band of 505 – 530 nm) components show distinctly and separately the cell nuclei and the actin filament structure, respectively, of the specimen, in a single focal plane representing a thickness of about 1 µm. The component and composite images would be viewed in the colors mentioned on the microscope, but are illustrated in gray scale in the figure. Images of this nature have been found to be useful in studies of injuries and diseases that affect the intervertebral discs and the spinal column [216].
Figures 3.64 (a) and (b) show two single-frame acquisitions of the composite image as in Figure 3.63 (c). Figures 3.64 (c) and (d) show the results of averaging four and eight single-frame acquisitions of the specimen, respectively. Multiframe averaging has clearly reduced the noise and improved the quality of the image.
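The effect of multiframe averaging can be sketched numerically. The following is a synthetic simulation under an additive Gaussian noise assumption (the array size, noise level, and random seed are arbitrary choices made here, not the confocal data shown): averaging N independent acquisitions reduces the noise variance, and hence the MSE, by roughly a factor of N.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = rng.uniform(50.0, 200.0, size=(64, 64))  # noise-free "specimen"

def mse(a, b):
    """Mean-squared error between two images."""
    return float(np.mean((a - b) ** 2))

def average_frames(n_frames, sigma=20.0):
    """Average n_frames acquisitions, each corrupted by independent noise."""
    frames = truth + rng.normal(0.0, sigma, size=(n_frames,) + truth.shape)
    return frames.mean(axis=0)

mse_1 = mse(average_frames(1), truth)
mse_4 = mse(average_frames(4), truth)
mse_8 = mse(average_frames(8), truth)
# The MSE drops roughly as 1/N with the number of averaged frames.
```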
Al-Kofahi et al. [219] used confocal microscopy to study the structure of soma and dendrites via 3D tracing of neuronal topology.
Comparison of filtering with space-domain and frequency-domain lowpass filters: Figures 3.65 (a) and (b) show the results of 3 × 3 mean and median filtering of the single-frame acquisition of the composite image of the nucleus pulposus in Figure 3.64 (a). Figures 3.65 (c) and (d) show the results of 5 × 5 mean and median filtering of the same image. It is evident that neighborhood filtering, while suppressing noise to some extent, has caused blurring of the sharp details in the images.
Figure 3.66 shows the results of application of the 5 × 5 LLMMSE, refined LLMMSE, NURW, and adaptive-neighborhood LLMMSE filters to the noisy image in Figure 3.64 (a). The noise was assumed to be multiplicative, with the normalized mean µ = 1 and normalized standard deviation σ = 0.2. The optimal, adaptive, and nonlinear filters have performed well in suppressing noise without causing blurring.
The results of Fourier-domain ideal and Butterworth lowpass filtering of the noisy image are shown in Figures 3.67 (c) and (d); parts (a) and (b) of the same figure show the original image and its Fourier log-magnitude spectrum, respectively. The spectrum indicates that most of the energy of the image is concentrated within a small area around the center, that is, around (u, v) = (0, 0). The filters have caused some loss of sharpness while suppressing noise. Observe that the ideal lowpass filter has given the noisy areas a mottled appearance. Comparing the results in Figures 3.65, 3.66, and 3.67 with the results of multiframe averaging in Figures 3.64 and 3.63, it is seen that multiframe averaging has provided the best results.

3.10 Application: Noise Reduction in Nuclear Medicine Imaging
Nuclear medicine images are typically acquired under low-photon conditions, which leads to a significant presence of Poisson noise in the images. Counting the photons emitted over long periods of time reduces the effect of noise and improves the quality of the image. However, imaging over long periods of time may not be feasible due to motion artifacts and various practical limitations.
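The trade-off can be sketched under a Poisson model (a synthetic simulation with a hypothetical emission rate, not SPECT data): the relative noise, measured as the ratio of the standard deviation to the mean of the counts, falls as the square root of the counting time.

```python
import numpy as np

rng = np.random.default_rng(7)
RATE = 5.0  # hypothetical mean photon arrivals per second per pixel

def relative_noise(duration_s, n_pixels=100_000):
    """Coefficient of variation of Poisson counts over duration_s seconds."""
    counts = rng.poisson(RATE * duration_s, size=n_pixels)
    return float(counts.std() / counts.mean())

noise_2s = relative_noise(2)
noise_40s = relative_noise(40)
# Counting 20 times longer reduces the relative noise by about sqrt(20).
```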
Figure 3.68 shows the SPECT images of one section of a resolution phantom acquired over 2, 15, and 40 s. Each image has been scaled such that its
FIGURE 3.63
(a) The red-dye (cell nuclei) component of the confocal microscope image of the nucleus pulposus of a dog. (b) The green-dye (actin filament structure) component. (c) Combination of the images in (a) and (b) into a composite image. The images would be viewed in the colors mentioned on the microscope. The width of each image corresponds to 145 µm. Each image was acquired by averaging eight frames. Images courtesy of C.J. Hunter, J.R. Matyas, and N.A. Duncan, McCaig Centre for Joint Injury and Arthritis Research, University of Calgary.
FIGURE 3.64
(a) A single-frame acquisition of the composite image of the nucleus pulposus; see also Figure 3.63. (b) A second example of a single-frame acquisition as in (a). (c) The result of averaging four frames, including the two in (a) and (b). (d) The result of averaging eight frames, including the two in (a) and (b). The width of each image corresponds to 145 µm. Images courtesy of C.J. Hunter, J.R. Matyas, and N.A. Duncan, McCaig Centre for Joint Injury and Arthritis Research, University of Calgary.
FIGURE 3.65
Results of filtering the single-frame acquisition of the composite image of the nucleus pulposus in Figure 3.64 (a) with: (a) the 3 × 3 mean filter; (b) the 3 × 3 median filter; (c) the 5 × 5 mean filter; and (d) the 5 × 5 median filter.
FIGURE 3.66
Results of filtering the single-frame acquisition of the composite image of the nucleus pulposus in Figure 3.64 (a) with: (a) the 5 × 5 LLMMSE filter; (b) the refined LLMMSE filter; (c) the NURW filter; and (d) the adaptive-neighborhood LLMMSE filter. Figure courtesy of M. Ciuc, Laboratorul de Analiza şi Prelucrarea Imaginilor, Universitatea Politehnica Bucureşti, Bucharest, Romania.
FIGURE 3.67
(a) The single-frame acquisition of the composite image of the nucleus pulposus of a dog, as in Figure 3.64 (a). (b) Fourier log-magnitude spectrum of the image in (a). Results of filtering the image in (a) with: (c) the ideal lowpass filter with cutoff D0 = 0.4, as in Figure 3.28 (a); and (d) the Butterworth lowpass filter with cutoff D0 = 0.4 and order n = 2, as in Figure 3.28 (b).
minimum and maximum values are mapped to the display range of [0, 255]. Also shown is a schematic representation of the section: the circles represent cross-sections of cylindrical holes in a plexiglass block; the diameter of the entire phantom is 200 mm; the two large circles at the extremes of the two sides are of diameter 39 mm; the two inner arrays have circles of diameter 22, 17, 14, 12, 9, 8, 6, and 5 mm. The phantom was filled with a radiopharmaceutical such that the cylindrical holes would be filled with the radioactive material. It is evident that the image quality improves as the photon counting time is increased. The circles of diameter 9 and 8 mm are distinctly visible only in image (c). However, the circles of diameter 5 mm are not visible in any image.
Figure 3.69 shows six nuclear medicine (planar) images of the chest of a patient in the gated blood-pool mode of imaging the heart using 99mTc. Frame (a) displays the left ventricle in its fully relaxed state (at the end of diastole). The subsequent frames show the left ventricle at various stages of contraction through one cardiac cycle. Frame (d) shows the left ventricle at its smallest, being fully contracted in systole. Frame (f) completes the cycle at a few milliseconds before the end of diastole. Each frame represents the sum of 16 gated frames acquired over 16 cardiac cycles. The ECG signal was used to time photon counting such that each frame gets the photon counts acquired during an interval equal to 1/16th of the duration of each heart beat, at exactly the same phase of the cardiac cycle; see Figure 3.70. This procedure is akin to synchronized averaging, with the difference that the procedure may be considered to be integration instead of averaging; the net result is the same except for a scale factor. (Clinical imaging systems do not provide the data corresponding to the intervals of an individual cardiac cycle, but provide only the images integrated over 16 or 32 cardiac cycles.)
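The gating procedure may be sketched as follows. This is a simplified simulation with hypothetical event and R-peak times and a fixed cycle duration, not the acquisition logic of a clinical gamma camera: each photon event is assigned to one of 16 phase bins according to its offset from the most recent R-peak, and counts from the same phase accumulate across cycles.

```python
import numpy as np

N_PHASES = 16
CYCLE_S = 0.8  # assumed duration of one cardiac cycle, in seconds

def gate_events(event_times, r_peak_times):
    """Accumulate photon events into phase bins, gated by ECG R-peaks."""
    counts = np.zeros(N_PHASES, dtype=int)
    r = np.asarray(r_peak_times)
    for t in event_times:
        # Locate the start of the cardiac cycle containing this event.
        idx = np.searchsorted(r, t, side="right") - 1
        if idx < 0 or t - r[idx] >= CYCLE_S:
            continue  # event falls outside any gated cycle
        phase = int((t - r[idx]) / CYCLE_S * N_PHASES)
        counts[phase] += 1
    return counts

# Two cycles starting at 0.0 s and 0.8 s; events from both cycles that
# occur at the same phase are added to the same bin.
r_peaks = [0.0, 0.8]
events = [0.01, 0.06, 0.41, 0.81, 0.86, 1.21]
phase_counts = gate_events(events, r_peaks)
```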

3.11 Remarks
In this chapter, we have studied several types of artifacts that could arise in biomedical images, and developed a number of techniques to characterize, model, and remove them. Starting with simple averaging over multiple image frames or over small neighborhoods within an image, we have seen how statistical parameters may be used to filter noise of different types. We have also examined frequency-domain derivation and application of filters. The class of filters based upon mathematical morphology [8, 192, 220, 221, 222] has not been dealt with in this book.
The analysis of the results of several filters has demonstrated the truth behind the adage "prevention is better than cure": attempts to remove one type of artifact could lead to the introduction of others! Regardless, adaptive and
FIGURE 3.68
128 × 128 SPECT images of a resolution phantom obtained by counting photons over: (a) 2 s; (b) 15 s; and (c) 40 s. (d) Schematic representation of the section. Images courtesy of L.J. Hahn, Foothills Hospital, Calgary.
FIGURE 3.69
64 × 64 gated blood-pool images at six phases of the cardiac cycle, obtained by averaging over 16 cardiac cycles. Images courtesy of L.J. Hahn, Foothills Hospital, Calgary.
FIGURE 3.70
Use of the ECG signal in synchronized averaging or accumulation of photon counts in gated blood-pool imaging. Two cycles of cardiac activity are shown by the ECG signal. The ECG waves have the following connotation: P: atrial contraction; QRS: ventricular contraction (systole); T: ventricular relaxation (diastole). Eight frames representing the gated images are shown over each cardiac cycle. Counts over the same phase of the cardiac cycle are added to the same frame over several cardiac cycles.
nonlinear filters have proven themselves to be useful in removing noise without creating significant artifacts. Preprocessing of images to remove artifacts is an important step before other methods may be applied for further enhancement or analysis of the features in the images.
Notwithstanding the models that were used in deriving some of the filters presented in this chapter, most of the methods applied in practice for noise removal are considered to be ad hoc approaches: methods that have been shown to work successfully in similar situations encountered by other researchers are tried to solve the problem on hand. Difficulties arise due to the fact that some of the implicit models and assumptions may not apply well to the image or noise processes of the current problem. For this reason, it is common to try several previously established techniques. However, the assessment of the results of filtering operations and the selection of the most appropriate filter for a given application remain a challenge. Although several measures of image quality are available (see Chapter 2), visual assessment of the results by a specialist in the area of application may remain the most viable approach for comparative analysis and selection.

3.12 Study Questions and Problems


(Note: Some of the questions may require background preparation with other sources on the basics of signals and systems as well as digital signal and image processing, such as Lathi [1], Oppenheim et al. [2], Oppenheim and Schafer [7], Gonzalez and Woods [8], Pratt [10], Jain [12], Hall [9], and Rosenfeld and Kak [11].)
Selected data files related to some of the problems and exercises are available at the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. Explain the differences between linear and circular (or periodic) convolution. Given two 1D signals h(n) = [3, 2, 1] and f(n) = [5, 4, 1] for n = 0, 1, 2, compute by hand the results of linear and circular convolution.
Demonstrate how the result of circular convolution may be made equivalent to that of linear convolution in the above example.
2. The two 1D signals f = [1, 4, 2, 4]^T and h = [1, 2, −1]^T (given as vectors) are to be convolved. Prepare a circulant matrix h such that the matrix product g = h f will provide results equivalent to the linear convolution g = h * f of the two signals.
3. What are the impulse response and frequency response (transfer function or MTF) of the 3 × 3 mean filter?
4. An image is processed by applying the 3 × 3 mean filter mask (a) once, (b) twice, and (c) thrice in series.
What are the impulse responses of the three operations?
5. Perform linear 2D convolution of the following two images:

    1 2 3
    2 4 1        (3.174)
    2 3 1

and

    2 5 1 2
    2 4 6 3
    5 5 5 1      (3.175)
    4 2 1 2
6. The image

    3 2 1 3
    2 1 3 4
    6 5 4 5      (3.176)
    4 3 2 1
    2 1 1 2

is passed through a linear shift-invariant system having the impulse response

    1 3 2
    0 5 1        (3.177)
    2 4 3

Compute the output of the system.
7. The output of a filter at (m, n) is defined as the average of the four immediate neighbors of (m, n); the pixel at (m, n) is itself not used.
Derive the MTF of the filter and describe its characteristics.
8. The Fourier transform of a 1D periodic train of impulses, represented as Σ_{n=−∞}^{+∞} δ(t − nT), where T is the period or interval between the impulses, is given by another periodic train of impulses in the frequency domain as ω₀ Σ_{n=−∞}^{+∞} δ(ω − nω₀), where ω₀ = 2π/T.
Using the information provided above, derive the 2D Fourier transform of an image made up of a periodic array of strips parallel to the x axis. The thickness of each strip is W, the spacing between the strips is S, and the image is of size A × B, with {A, B} ≫ {W, S}.
Draw a schematic sketch of the spectrum of the image.
9. A 3 × 3 window of a noisy image contains the following pixel values:

    52 59 41
    62 74 66     (3.178)
    56 57 59

Compute the outputs of the 3 × 3 mean and median filters for the pixel at the center of the window. Show all steps in your computation.
10. A digital image contains periodic grid lines in the horizontal and vertical directions with a spacing of 1 cm. The sampling interval is 1 mm, and the size of the image is 20 cm × 20 cm.
The spectrum of the image is computed using the FFT with an array size of 256 × 256, including zero-padding in the area not covered by the original image.
Sketch a schematic diagram of the spectrum of the image, indicating the nature and exact locations of the frequency components of the grid lines.
Propose a method to remove the grid lines from the image.
11. In deriving the Wiener filter, it is assumed that the processes generating the image f and the noise η are statistically independent of each other, that the mean of the noise process is zero, and that both the processes are second-order stationary. A degraded image is observed as g = f + η. The following expression is encountered for the MSE between the Wiener estimate f̃ = L g and the original image f:

    ε² = E [ Tr { (f − f̃)(f − f̃)^T } ].   (3.179)

Reduce the expression above to one containing L and autocorrelation matrices only. Give reasons for each step of your derivation.

3.13 Laboratory Exercises and Projects


1. Add salt-and-pepper noise to the image shapes.tif. Apply the median filter using the neighborhood shapes given in Figure 3.14, and compare the results in terms of MSE values and edge distortion.
2. From your collection of test images, select two images: one with strong edges of the objects or features present in the image, and the other with smooth edges and features.
Prepare several noisy versions of the images by adding
(i) Gaussian noise, and
(ii) salt-and-pepper noise
at various levels.
Filter the noisy images using
(i) the median filter with the neighborhoods given in Figure 3.14 (a), (b), and (k); and
(ii) the 3 × 3 mean filter with the condition that the filter is applied only if the difference between the pixel being processed and the average of its 8-connected neighbors is less than a threshold. Try different thresholds and study the effect.
Compare the results in terms of noise removal, MSE, and the effect of the filters on the edges present in the images.
3. Select two of the noisy images from the preceding exercise. Apply the ideal lowpass filter and the Butterworth lowpass filter using two different cutoff frequencies for each filter.
Study the results in terms of noise removal and the effect of the filters on the sharpness of the edges present in the images.
4
Image Enhancement

In spite of the significant advances made in biomedical imaging techniques over the past few decades, several practical factors often lead to the acquisition of images with less than the desired levels of contrast, visibility of detail, or overall quality. In the preceding chapters, we reviewed several practical limitations, considerations, and trade-offs that could lead to poor images. When the nature of the artifact that led to the poor quality of the image is known, such as noise as explained in Chapter 3, we may design specific methods to remove or reduce the artifact. When the degradation is due to a blur function, deblurring and restoration techniques, described in Chapter 10, may be applied to reverse the phenomenon. In some applications of biomedical imaging, it becomes possible to include additional steps or modifications in the imaging procedure to improve image quality, although at additional radiation dose to the subject in the case of some X-ray imaging procedures, as we shall see in the sections to follow.
In several situations, the understanding of the exact cause of the loss of quality is limited or nonexistent, and the investigator is forced to attempt to improve or enhance the quality of the image on hand using several techniques applied in an ad hoc manner. In some applications, a nonspecific improvement in the general appearance of the given image may suffice. Researchers in the field of image processing have developed a large repertoire of image enhancement techniques that have been demonstrated to work well under certain conditions with certain types of images. Some of the enhancement techniques, indeed, have an underlying philosophy or hypothesis, as we shall see in the following sections; however, the practical application of the techniques may encounter difficulties due to a mismatch between the applicable conditions or assumptions and those that relate to the problem on hand.
A few biomedical imaging situations and applications where enhancement of the feature of interest would be desirable are:
- Microcalcifications in mammograms.
- Lung nodules in chest X-ray images.
- Vascular structure of the brain.
- Hair-line fractures in the ribs.

Some of the features listed above could be difficult to see in the given image due to their small size, subtlety, small differences in characteristics with respect to their surrounding structures, or low contrast; others could be rendered not readily visible due to superimposed structures in planar images. Enhancement of the contrast, edges, and general detail visibility in the images, without causing any distortion or artifacts, would be desirable in the applications mentioned above.
In this chapter, we shall explore a wide range of image enhancement techniques that can lead to improved contrast or visibility of certain image features such as edges or objects of specific characteristics. In extending the techniques to other applications, it should be borne in mind that ad hoc procedures borrowed from other areas may not lead to the best possible or optimal results. Regardless, if the improvement so gained is substantial and consistent as judged by the users and experts in the domain of application, one may have on hand a practically useful technique. (See the July 1972 and May 1979 issues of the Proceedings of the IEEE for reviews and articles on digital image processing, including historically significant images.)

4.1 Digital Subtraction Angiography


In digital subtraction angiography (DSA), an X-ray contrast agent (such as an iodine compound) is injected so as to increase the density (attenuation coefficient) of the blood within a certain organ or system of interest. A number of X-ray images are taken as the contrast agent spreads through the arterial network and before the agent is dispersed via circulation throughout the body. An image taken before the injection of the agent is used as the "mask" or reference image, and subtracted from the "live" images obtained with the agent in the system to obtain enhanced images of the arterial system of interest.
Imaging systems that perform contrast-enhanced X-ray imaging (without subtraction) in a motion or cine mode are known as cine-angiography systems. Such systems are useful in studying circulation through the coronary system to detect sclerosis (narrowing or blockage of arteries due to the deposition of cholesterol, calcium, and other substances).
Figures 4.1 (a), (b), and (c) show the mask, live, and the result of DSA, respectively, illustrating the arterial structure in the brain of a subject [223, 224, 225]. The arteries are barely visible in the live image [Figure 4.1 (b)], in spite of the contrast agent. Subtraction of the skull and the other parts that have remained unchanged between the mask and the live images has resulted in greatly improved visualization of the arteries in the DSA image [Figure 4.1 (c)]. The mathematical procedure involved may be expressed simply as

    f = α f1 − β f2,  or
    f(m, n) = α f1(m, n) − β f2(m, n),   (4.1)

where f1 is the live image, f2 is the mask image, α and β are weighting factors (if required), and f is the result of DSA.
The simple mathematical operation of subtraction (on a pixel-by-pixel basis) has, indeed, a significant application in medical imaging. The technique, however, is sensitive to motion, which causes misalignment of the components to be subtracted. The DSA result in Figure 4.1 (c) demonstrates motion artifacts in the lowest quarter and around the periphery of the image. Methods to minimize motion artifact in DSA have been proposed by Meijering et al. [223, 224, 225]. Figure 4.1 (d) shows the DSA result after correction of motion artifacts. Regardless of its simplicity, DSA carries a certain risk of allergic reaction, infection, and occasionally death, due to the injection of the contrast agent.
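The pixel-by-pixel operation of Equation 4.1 can be sketched as follows. This is a minimal illustration on synthetic arrays, not the angiographic data of Figure 4.1; the function name and the clipping step for display are choices made here, with a and b playing the roles of the weighting factors in Equation 4.1.

```python
import numpy as np

def dsa_subtract(live, mask, a=1.0, b=1.0):
    """Weighted pixel-by-pixel subtraction of the mask from the live image."""
    f = a * live.astype(float) - b * mask.astype(float)
    return np.clip(f, 0.0, None)  # clip negative values for display

# Synthetic example: the contrast agent adds 30 units of attenuation in a
# small region; everything else is unchanged between mask and live images.
mask_img = np.full((4, 4), 100.0)
live_img = mask_img.copy()
live_img[1:3, 1:3] += 30.0
vessels = dsa_subtract(live_img, mask_img)
# Only the opacified region survives the subtraction.
```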

4.2 Dual-energy and Energy-subtraction X-ray Imaging


Different materials have varying energy-dependent X-ray attenuation coefficients. X-ray measurements or images obtained at multiple energy levels (also known as energy-selective imaging) could be combined to derive information about the distribution of specific materials in the object or body imaged. Weighted combinations of multiple-energy images may be obtained to display soft-tissue and hard-tissue details separately [5]. The disadvantages of dual-energy imaging exist in the need to subject the patient to two or more X-ray exposures (at different energy or kV). Furthermore, due to the time lapse between the exposures, motion artifacts could arise in the resulting image.
In a variation of the dual-energy method, MacMahon [226, 227] describes energy-subtraction imaging using a dual-plate CR system. The Fuji FCR 9501ES (Fujifilm Medical Systems USA, Stamford, CT) digital chest unit uses two receptor plates instead of one. The plates are separated by a copper filter. The first plate acquires the full-spectrum X-ray image in the usual manner. The copper filter passes only the high-energy components of the X rays on to the second plate. Because bones and calcium-containing structures would have preferentially absorbed the low-energy components of the X rays, and because the high-energy components would have passed through low-density tissues with little attenuation, the transmitted high-energy components could be expected to contain more information related to denser tissues than to lighter tissues. The two plates capture two different views derived from the same X-ray beam; the patient is not subjected to two different imaging exposures, but only one. Weighted subtraction of the two images as in Equation 4.1 provides various results that can demonstrate soft tissues or bones and calcified tissues in enhanced detail; see Figures 4.2 and 4.3.

FIGURE 4.1
(a) Mask image of the head of a patient for DSA. (b) Live image. (c) DSA image of the cerebral artery network. (d) DSA image after correction of motion artifacts. Image data courtesy of E.H.W. Meijering and M.A. Viergever, Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands. Reproduced with permission from E.H.W. Meijering, K.J. Zuiderveld, and M.A. Viergever, "Image registration for digital subtraction angiography", International Journal of Computer Vision, 31(2/3): 227–246, 1999. © Kluwer Academic Publishers.
Energy-subtraction imaging as above has been found to be useful in detecting fracture of the ribs, in assessing the presence of calcification in lung nodules (which would indicate that they are benign, and hence, need not be examined further or treated), and in detecting calcified pleural plaques due to prolonged exposure to asbestos [226, 227]. The bone-detail image in Figure 4.3 (a) shows, in enhanced detail, a small calcified granuloma near the lower-right corner of the image.

FIGURE 4.2
Full-spectrum PA chest image (CR) of a patient. See also Figure 4.3. Image courtesy of H. MacMahon, University of Chicago, Chicago, IL. Reproduced with permission from H. MacMahon, "Improvement in detection of pulmonary nodules: Digital image processing and computer-aided diagnosis", RadioGraphics, 20(4): 1169–1171, 2000. © RSNA.
FIGURE 4.3
(a) Bone-detail image, and (b) soft-tissue detail image obtained by energy subtraction. See also Figure 4.2. Images courtesy of H. MacMahon, University of Chicago, Chicago, IL. Reproduced with permission from H. MacMahon, "Improvement in detection of pulmonary nodules: Digital image processing and computer-aided diagnosis", RadioGraphics, 20(4): 1169–1171, 2000. © RSNA.

4.3 Temporal Subtraction


Temporal or time-lapse subtraction of images could be useful in detecting normal or pathological changes that have occurred over a period of time. MacMahon [226] describes and illustrates the use of temporal subtraction in the detection of lung nodules that could be difficult to see in planar chest images due to superimposed structures. DR and CR imaging facilitate temporal subtraction.
In temporal subtraction, it is desired that normal anatomic structures are suppressed and pathological changes are enhanced. Registration of the images is crucial in temporal subtraction; misregistration could lead to artifacts similar to those due to motion in DSA. Geometric transformation and warping techniques are useful in matching landmark features that are not expected to have changed in the interval between the two imaging sessions [223, 224, 225]. Mazur et al. [228] describe image correlation and geometric transformation techniques for the registration of radiographs for temporal subtraction.

4.4 Gray-scale Transforms


The gray-level histogram of an image gives a global impression of the presence of different levels of density or intensity in the image over the dynamic range available (see Section 2.7 for details and illustrations). When the pixels in a given image do not make full use of the available dynamic range, the histogram will indicate low levels of occurrences of certain gray-level values or ranges. The given image may also contain large areas representing objects with certain specific ranges of gray level; the histogram will then indicate large populations of pixels occupying the corresponding gray-level ranges. Based upon a study of the histogram of an image, we could design gray-scale transforms or look-up tables (LUTs) that alter the overall appearance of the image, and could improve the visibility of selected details.
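The LUT idea can be sketched as follows (a hypothetical 8-bit example; the occupied gray-level range of [40, 200] is assumed for illustration and is not taken from any image in this chapter). The transform is precomputed once as a 256-entry table and applied to any number of pixels by simple indexing.

```python
import numpy as np

# Build a LUT that linearly stretches the occupied range [40, 200] of a
# hypothetical 8-bit image to the full display range [0, 255].
lo, hi = 40, 200
levels = np.arange(256)
lut = np.clip((levels - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

img = np.array([[40, 120, 200],
                [60, 180, 250]], dtype=np.uint8)
stretched = lut[img]  # apply the gray-scale transform by table look-up
```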

4.4.1 Gray-scale thresholding


When the gray levels of the objects of interest in an image are known, or can be determined from the histogram of the given image, the image may be thresholded to obtain a variety of images that can display selected features of interest. For example, if it is known that the objects of interest in the image have gray-level values greater than L1, we could create an image for display as

    g(m, n) = 255  if f(m, n) > L1,
              0    if f(m, n) ≤ L1,          (4.2)

where f(m, n) is the original image; g(m, n) is the thresholded image to be displayed; and the display range is [0, 255]. The result is a bilevel or binary image. Thresholding may be considered to be a form of image enhancement in the sense that the objects of interest are perceived better in the resulting image. The same operation may also be considered to be a detection operation; see Section 5.1.
If the values less than L1 were to be considered as noise (or features of no interest), and the gray levels within the objects of interest that are greater than L1 are of interest in the displayed image, we could also define the output image as

    g(m, n) = f(m, n)  if f(m, n) > L1,
              0        if f(m, n) ≤ L1.      (4.3)

The resulting image will display the features of interest including their gray-level variations.
Methods for the derivation of optimal thresholds are described in Sections 5.4.1, 8.3.2, and 8.7.2.
Example: A CT slice image of a patient with neuroblastoma is shown in Figure 4.4 (a). A binarized version of the image, with thresholding as in Equation 4.2 using L1 = 200 HU, is shown in part (b) of the figure. As expected, the bony parts of the image appear in the result; however, the calcified parts of the tumor, which also have high density comparable to that of bone, appear in the result. The result of thresholding the image as in Equation 4.3 with L1 = 200 HU is shown in part (c) of the figure. The relative intensities of the hard bone and the calcified parts of the tumor are evident in the result.

4.4.2 Gray-scale windowing
If a given image f(m, n) has all of its pixel values in a narrow range of gray
levels, or if certain details of particular interest within the image occupy a
narrow range of gray levels, it would be desirable to stretch the range of
interest to the full range of display available. In the absence of a reason to
employ a nonlinear transformation, a linear transformation as follows could
be used for this purpose:
$$g(m, n) = \begin{cases} 0 & \text{if } f(m, n) \leq f_1, \\ \dfrac{f(m, n) - f_1}{f_2 - f_1} & \text{if } f_1 < f(m, n) < f_2, \\ 1 & \text{if } f(m, n) \geq f_2, \end{cases} \qquad (4.4)$$
where f(m, n) is the original image, g(m, n) is the windowed image to be
displayed, with its gray scale normalized to the range [0, 1], and [f_1, f_2] is
the range of the original gray-level values to be displayed in the output after

(a) (b)

(c) (d)
FIGURE 4.4
(a) CT image of a patient with neuroblastoma. The tumor, which appears as a
large circular region on the left-hand side of the image, includes calcified tissues that
appear as bright regions. The HU range of [−200, 400] has been linearly mapped
to the display range of [0, 255]; see also Figures 2.15 and 2.16. Image courtesy of
Alberta Children's Hospital, Calgary. (b) The image in (a) thresholded at the level
of 200 HU as in Equation 4.2. Values above 200 HU appear as white, and values
below this threshold appear as black. (c) The image in (a) thresholded at the level
of 200 HU as in Equation 4.3. Values above 200 HU appear at their original level,
and values below this threshold appear as black. (d) The HU range of [0, 400] has
been linearly mapped to the display range of [0, 255] as in Equation 4.4. Pixels
corresponding to tissues lighter than water appear as black. Pixels greater than
400 HU are saturated at the maximum gray level of 255.
stretching to the full range. Note that the range [0, 1] in the result needs to
be mapped to the display range available, such as [0, 255], which is achieved
by simply multiplying the normalized values by 255. Details (pixels) below
the lower limit f_1 will be eliminated (rendered black), and those above the
upper limit f_2 will be saturated (rendered white) in the resulting image. The
details within the range [f_1, f_2] will be displayed with increased contrast and
latitude, utilizing the full range of display available.
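The windowing of Equation 4.4, together with the final scaling by 255, can be sketched as follows; the function name is ours, and a floating-point intermediate is assumed:

```python
import numpy as np

def window_gray_scale(f, f1, f2, display_max=255):
    """Equation 4.4: map [f1, f2] linearly to [0, 1], rendering values
    below f1 black and saturating values above f2, then scale the
    normalized result to the display range."""
    g = (f.astype(float) - f1) / (f2 - f1)
    g = np.clip(g, 0.0, 1.0)
    return np.round(g * display_max).astype(np.uint8)

# Map the HU range [-200, 400] to [0, 255], as in Figure 4.4 (a):
ct = np.array([-1000, -200, 100, 400, 1042])
print(window_gray_scale(ct, -200, 400))  # [  0   0 128 255 255]
```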
Example: A CT slice image of a patient with neuroblastoma is shown
in Figure 4.4 (a). This image displays the range of [−200, 400] HU linearly
mapped to the display range of [0, 255] as given by Equation 4.4. The full
range of HU values in the image is [−1000, 1042] HU. Part (d) of the figure
shows another display of the same original data, but with mapping of the
range [0, 400] HU to [0, 255] as given by Equation 4.4. In this result, pixels
corresponding to tissues lighter than water appear as black; pixels greater
than 400 HU are saturated at the maximum gray level of 255. Gray-level
thresholding and mapping are commonly used for detailed interpretation of
CT images.
Example: Figure 4.5 (a) shows a part of the chest X-ray image in
Figure 1.11 (b), downsampled to 512 × 512 pixels. The histogram of the image is
shown in Figure 4.6 (a); observe the large number of pixels with the gray level
zero. Figure 4.6 (b) shows two linear gray-scale transformations (LUTs) that
map the range [0, 0.6] (dash-dot line) and [0.2, 0.7] (solid line) to the range
[0, 1]; the results of application of the two LUTs to the image in Figure 4.5 (a)
are shown in Figures 4.5 (b) and (c), respectively. The image in Figure 4.5
(b) shows the details in and around the heart with enhanced visibility; however,
large portions of the original image have been saturated. The image in
Figure 4.5 (c) provides an improved visualization of a larger range of tissues
than the image in (b); regardless, the details with normalized gray levels less
than 0.2 and greater than 0.7 have been lost.
Example: Figure 4.7 (a) shows an image of a myocyte. Figure 4.8 (a)
shows the normalized histogram of the image. Most of the pixels in the image
have gray levels within the limited range of [50, 150]; the remainder of the
available range [0, 255] is not used effectively.
Figure 4.7 (b) shows the image in (a) after the normalized gray-level range
of [0.2, 0.6] was stretched to the full range of [0, 1] by the linear transformation
in Equation 4.4. The details within the myocyte are visible with enhanced
clarity in the transformed image. The corresponding histogram in Figure 4.8
(b) shows that the image now occupies the full range of gray scale available;
however, several gray levels within the range are unoccupied, as indicated by
the white stripes in the histogram.

4.4.3 Gamma correction
Figure 2.6 shows the H-D curves of two devices. The slope of the curve is
known as γ. An imaging system with a large γ could lead to an image with

(a) (b)

(c)
FIGURE 4.5
(a) Part of a chest X-ray image. The histogram of the image is shown in
Figure 4.6 (a). (b) Image in (a) enhanced by linear mapping of the range
[0, 0.6] to [0, 1]. (c) Image in (a) enhanced by linear mapping of the range
[0.2, 0.7] to [0, 1]. See Figure 4.6 (b) for plots of the LUTs.

(a) [histogram plot: probability of occurrence versus gray level]

(b) [plot: output gray level versus input gray level, both normalized]

FIGURE 4.6
(a) Normalized histogram of the chest X-ray image in Figure 4.5 (a); entropy =
7.55 bits. (b) Linear density-windowing transformations that map the ranges
[0, 0.6] to [0, 1] (dash-dot line) and [0.2, 0.7] to [0, 1] (solid line).

(a) (b)
FIGURE 4.7
(a) Image of a myocyte as acquired originally. (b) Image in (a) enhanced by
linear mapping of the normalized range [0.2, 0.6] to [0, 1]. See Figure 4.8 for
the histograms of the images.

high contrast; however, the image may not utilize the full range of the available
gray scale. On the other hand, a system with a small γ could result in an
image with wide latitude but poor contrast. Gamma correction is a nonlinear
transformation process by which we may alter the transition from one gray
level to the next, and change the contrast and latitude of gray scale in the
image. The transformation may be expressed as [203]
$$g(m, n) = [f(m, n)]^{\gamma}, \qquad (4.5)$$
where f(m, n) is the given image with its gray scale normalized to the range
[0, 1], and g(m, n) is the transformed image. (Note: Lindley [229] provides a
different definition as
$$g(m, n) = \exp \left[ \frac{\ln \{ f(m, n) \}}{\gamma} \right], \qquad (4.6)$$
which would be equivalent to the operation given by Equation 4.5 if the gray
levels were not normalized, that is, the gray levels were to remain in a range
such as 0 to 255.) Gray-scale windowing as in Equation 4.4 could also be
incorporated into Equation 4.5.
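Equation 4.5 translates directly into code. The sketch below (function name ours) normalizes an 8-bit image to [0, 1], applies the power law, and rescales the result for display:

```python
import numpy as np

def gamma_correct(f, gamma):
    """Equation 4.5: g = [f]^gamma on the gray scale normalized to [0, 1]."""
    r = f.astype(float) / 255.0      # normalize to [0, 1]
    g = np.power(r, gamma)           # pointwise power law
    return np.round(g * 255.0).astype(np.uint8)

levels = np.array([0, 64, 128, 255])
print(gamma_correct(levels, 0.3))   # gamma < 1 lifts the darker levels
print(gamma_correct(levels, 2.0))   # gamma > 1 compresses the darker levels
```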
Example: Figure 4.9 (a) shows a part of a chest X-ray image. Figure 4.10
illustrates three transforms with γ = 0.3, 1.0, and 2.0. Parts (b) and (c) of
Figure 4.9 show the results of gamma correction with γ = 0.3 and γ = 2.0,
respectively. The two results demonstrate enhanced visibility of details in the
darker and lighter gray-scale regions (with reference to the original image).

(a) [histogram plot: probability of occurrence versus gray level]

(b) [histogram plot: probability of occurrence versus gray level]

FIGURE 4.8
Normalized histograms of (a) the image in Figure 4.7 (a), entropy = 4.96 bits;
and (b) the image in Figure 4.7 (b), entropy = 4.49 bits.

(a) (b)

(c)
FIGURE 4.9
(a) Part of a chest X-ray image. (b) Image in (a) enhanced with γ = 0.3.
(c) Image in (a) enhanced with γ = 2.0. See Figure 4.10 for plots of the
gamma-correction transforms (LUTs).

[plot: output gray level versus input gray level, both normalized]

FIGURE 4.10
Gamma-correction transforms with γ = 0.3 (solid line), γ = 1.0 (dotted line),
and γ = 2.0 (dash-dot line).

4.5 Histogram Transformation
As we saw in Section 2.7, the histogram of an image may be normalized and
interpreted as a PDF. Then, based upon certain principles of information
theory, we reach the property that maximal information is conveyed when
the PDF of a process is uniform, that is, the corresponding image has all
possible gray levels with equal probability of occurrence (see Section 2.8).
Based upon this property, the technique of histogram equalization has been
proposed as a method to enhance the appearance of an image [9, 8, 11]. Other
techniques have also been proposed to map the histogram of the given image
into a different "desired" type of histogram, with the expectation that the
transformed image so obtained will bear an enhanced appearance. Although
the methods often do not yield useful results in biomedical applications, and
although the underlying assumptions may not be applicable in many practical
situations, histogram-based methods for image enhancement are popular. The
following sections provide the details and results of a few such methods.
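The entropy values quoted alongside the histograms in this section can be computed directly from the normalized histogram treated as a PDF. A minimal sketch (the function name is ours; an 8-bit gray scale is assumed by default):

```python
import numpy as np

def histogram_entropy(f, L=256):
    """Entropy, in bits, of the normalized histogram (PDF) of an image."""
    p = np.bincount(f.ravel(), minlength=L) / f.size
    p = p[p > 0]                      # 0 log 0 is taken as 0
    return float(-np.sum(p * np.log2(p)))

# A uniform PDF gives the maximal entropy, log2(L) bits:
uniform = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(histogram_entropy(uniform))                             # 8.0
print(histogram_entropy(np.zeros((16, 16), dtype=np.uint8)))  # a constant image has zero entropy
```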

4.5.1 Histogram equalization
Consider an image f(m, n) of size M × N pixels, with gray levels l = 0, 1, 2,
..., L − 1. Let the histogram of the image be represented by P_f(l) as defined in
Equation 2.12. Let us normalize the gray levels by dividing by the maximum
level available or permitted, as r = l/(L − 1), such that 0 ≤ r ≤ 1. Let p_f(r) be
the normalized histogram or PDF as given by Equation 2.15.
If we were to apply a transformation s = T(r) to the random variable r,
the PDF of the new variable s is given by [8]
$$p_g(s) = \left[ p_f(r) \, \frac{dr}{ds} \right]_{r = T^{-1}(s)}, \qquad (4.7)$$
where g refers to the resulting image g(m, n) with the normalized gray levels
0 ≤ s ≤ 1. Consider the transformation
$$s = T(r) = \int_0^r p_f(w) \, dw, \quad 0 \leq r \leq 1. \qquad (4.8)$$
This is the cumulative (probability) distribution function of r. T(r) has the
following important and desired properties:
• T(r) is single-valued and monotonically increasing over the interval 0 ≤
r ≤ 1. This is necessary to maintain the black-to-white transition order
between the original and processed images.
• 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1. This is required in order to maintain the
same range of values in the input and output images.
It follows that ds/dr = p_f(r). Then, we have
$$p_g(s) = \left[ p_f(r) \, \frac{1}{p_f(r)} \right]_{r = T^{-1}(s)} = 1, \quad 0 \leq s \leq 1. \qquad (4.9)$$
Thus, T(r) equalizes the histogram of the given image; that is, the histogram
or PDF of the resulting image g(m, n) is uniform. As we saw in Section 2.8,
a uniform PDF has maximal entropy.
Discrete version of histogram equalization: For a digital image f(m, n)
with a total of P = MN pixels and L gray levels r_k, k = 0, 1, ..., L − 1,
0 ≤ r_k ≤ 1, occurring n_k times, respectively, the PDF may be approximated by
the histogram
$$p_f(r_k) = \frac{n_k}{P}, \quad k = 0, 1, \ldots, L - 1. \qquad (4.10)$$
The histogram-equalizing transformation is approximated by
$$s_k = T(r_k) = \sum_{i=0}^{k} p_f(r_i) = \sum_{i=0}^{k} \frac{n_i}{P}, \quad k = 0, 1, \ldots, L - 1. \qquad (4.11)$$
Note that this transformation may yield values of s_k that may not equal the
available quantized gray levels. The values will have to be quantized, and
hence the output image may only have an approximately uniform histogram.
In practical applications, the resulting values in the range [0, 1] have to
be scaled to the display range, such as [0, 255]. Histogram equalization is
usually implemented via an LUT that lists the related (s_k, r_k) pairs as given
by Equation 4.11. It should be noted that a quantized histogram-equalizing
transformation is likely to contain several segments of many-to-one gray-level
transformation: this renders the transformation nonunique and irreversible.
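A compact implementation of Equations 4.10 and 4.11, including the quantization of s_k to the display range, might look as follows (a sketch; the function name is ours, and an 8-bit image is assumed):

```python
import numpy as np

def equalize_histogram(f, L=256):
    """Histogram equalization via an LUT built from the cumulative
    histogram (Equation 4.11), quantized to the display range [0, L-1]."""
    n = np.bincount(f.ravel(), minlength=L)                    # n_k
    lut = np.round(np.cumsum(n) * (L - 1) / f.size)            # s_k scaled and quantized
    lut = lut.astype(np.uint8)
    return lut[f]                                              # apply the LUT

f = np.array([[0, 0, 1], [1, 2, 3]], dtype=np.uint8)
print(equalize_histogram(f))
# [[ 85  85 170]
#  [170 212 255]]
```

Note the many-to-one quantization in `np.round`, which is exactly what renders the discrete transform nonunique and irreversible.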
Example: Figure 4.11 (a) shows a 240 × 288 image of a girl in a snow cave:
the high reflectivity of the snow has caused the details inside the cave to have
poor visibility. Part (b) of the same figure shows the result after histogram
equalization; the histograms of the original and equalized images are shown
in Figure 4.12. Although the result of equalization shows some of the features
of the girl within the cave better than the original, several details remain dark
and unclear.
The histogram of the equalized image in Figure 4.12 (b) indicates that,
while a large number of gray levels have higher probabilities of occurrence
than their corresponding levels in the original [see Figure 4.12 (a)], several gray
levels are unoccupied in the enhanced image (observe the white stripes in the
histogram, which indicate zero probability of occurrence of the corresponding
gray levels). The equalizing transform (LUT), shown in Figure 4.13, indicates
that there are several many-to-one gray-level mappings: note the presence of
several horizontal segments in the LUT. It should also be observed that the
original image has a well-spread histogram, with an entropy of 6.93 bits; due
to the absence of several gray levels in the equalized image, its entropy of
5.8 bits turns out to be lower than that of the original.
Figure 4.11 (c) shows the result of linear stretching or windowing of the
range [0, 23] in the original image in Figure 4.11 (a) to the full range of [0, 255].
The result shows the details of the girl and the inside of the cave more clearly
than the original or the equalized version; however, the high-intensity details
outside the cave have been washed out.
Figure 4.11 (d) shows the result of enhancing the original image in
Figure 4.11 (a) with γ = 0.3. Although the details inside the cave are not as
clearly seen as in Figure 4.11 (c), the result has maintained the details at all
gray levels.

(a) (b)

(c) (d)
FIGURE 4.11
(a) Image of a girl in a snow cave (240 × 288 pixels). (b) Result of histogram
equalization. (c) Result of linear mapping (windowing) of the range [0, 23]
to [0, 255]. (d) Result of gamma correction with γ = 0.3. Image courtesy of
W.M. Morrow [215, 230].

Example: Figure 4.14 (a) shows a part of a chest X-ray image; part (b)
of the same figure shows the corresponding histogram-equalized image. Al-

(a) [histogram plot: probability of occurrence versus gray level]

(b) [histogram plot: probability of occurrence versus gray level]

FIGURE 4.12
Normalized histograms of (a) the image in Figure 4.11 (a), entropy = 6.93 bits;
and (b) the image in Figure 4.11 (b), entropy = 5.8 bits. See also Figure 4.13.
[plot: output gray level versus input gray level]

FIGURE 4.13
Histogram-equalizing transform (LUT) for the image in Figure 4.11 (a) see
Figure 4.12 for the histograms of the original and equalized images.

though some parts of the image demonstrate improved visibility of features,
it should be observed that the low-density tissues in the lower right-hand portion
of the image have been reduced to poor levels of visibility. The histogram
of the equalized image is shown in Figure 4.15 (a); the equalizing transform
is shown in part (b) of the same figure. It is seen that several gray levels
are unoccupied in the equalized image; for this reason, the entropy of the
enhanced image was reduced to 5.95 bits from the value of 7.55 bits for the
original image.
Example: Figure 4.16 (b) shows the histogram-equalized version of the
myocyte image in Figure 4.16 (a). The corresponding equalizing transform,
shown in Figure 4.17 (b), indicates a sharp transition from the darker gray
levels to the brighter gray levels. The rapid transition has caused the output
to have high contrast over a small effective dynamic range, and has rendered
the result useless. The entropies of the original and enhanced images are
4.96 bits and 4.49 bits, respectively.

4.5.2 Histogram specification
A major limitation of histogram equalization is that it can provide only one
output image, which may not be satisfactory in many cases. The user has

(a) (b)
FIGURE 4.14
(a) Part of a chest X-ray image. The histogram of the image is shown in
Figure 4.6 (a). (b) Image in (a) enhanced by histogram equalization. The
histogram of the image is shown in Figure 4.15 (a). See Figure 4.15 (b) for a
plot of the LUT.

no control over the procedure or the result. In a related procedure known
as histogram specification, a series of histogram-equalization steps is used to
obtain an image with a histogram that is expected to be close to a prespecified
histogram. Then, by specifying several histograms, it is possible to obtain a
range of enhanced images, from which one or more may be selected for further
analysis or use.
Suppose that the desired or specified normalized histogram is p_d(t), with
the desired image being represented as d, having the normalized gray levels
t = 0, 1, 2, ..., L − 1. Now, the given image f with the PDF p_f(r) may be
histogram-equalized by the transformation
$$s = T_1(r) = \int_0^r p_f(w) \, dw, \quad 0 \leq r \leq 1, \qquad (4.12)$$
as we saw in Section 4.5.1, to obtain the image g with the normalized gray
levels s. We may also derive a histogram-equalizing transform for the desired
(but as yet unavailable) image as
$$q = T_2(t) = \int_0^t p_d(w) \, dw, \quad 0 \leq t \leq 1. \qquad (4.13)$$
Observe that, in order to derive a histogram-equalizing transform, we need
only the PDF of the image; the image itself is not needed. Let us call the
(hypothetical) image so obtained as e, having the gray levels q. The inverse

(a) [histogram plot: probability of occurrence versus gray level]

(b) [plot: output gray level versus input gray level]

FIGURE 4.15
(a) Normalized histogram of the histogram-equalized chest X-ray image in
Figure 4.14 (b); entropy = 5.95 bits. (b) The histogram-equalizing
transformation (LUT). See Figure 4.6 (a) for the histogram of the original image.

(a) (b)
FIGURE 4.16
(a) Image of a myocyte. The histogram of the image is shown in Figure 4.8
(a). (b) Image in (a) enhanced by histogram equalization. The histogram of
the image is shown in Figure 4.17 (a). See Figure 4.17 (b) for a plot of the
LUT.

of the transform above, which we may express as t = T_2^{-1}(q), will map the
gray levels q back to t.
Now, p_g(s) and p_e(q) are both uniform PDFs, and hence are identical
functions. The desired PDF may, therefore, be obtained by applying the transform
T_2^{-1} to s; that is, t = T_2^{-1}(s). It is assumed here that T_2^{-1}(s) exists, and is
a single-valued (unique) transform. Based on the above, the procedure for
histogram specification is as follows:
1. Specify the desired histogram and derive the equivalent PDF p_d(t).
2. Derive the histogram-equalizing transform q = T_2(t).
3. Derive the histogram-equalizing transform s = T_1(r) from the PDF
p_f(r) of the given image f.
4. Apply the inverse of the transform T_2 to the transform obtained in the
previous step and obtain t = T_2^{-1}(s). This step may be directly implemented
as t = T_2^{-1}[T_1(r)].
5. Apply the transform as above to the given image f; the result provides
the desired image d with the specified PDF p_d(t).
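For discrete images, the five steps above can be collapsed into a single LUT. The sketch below (function name ours) approximates the inverse of T_2 by searching the desired CDF for the level with the nearest value, which is one simple way to build an approximate T_2^{-1}:

```python
import numpy as np

def specify_histogram(f, desired_pdf, L=256):
    """Map image f so that its histogram approximates desired_pdf.
    T1 and an approximate inverse of T2 are combined into one LUT."""
    cdf_f = np.cumsum(np.bincount(f.ravel(), minlength=L)) / f.size   # T1
    cdf_d = np.cumsum(desired_pdf) / np.sum(desired_pdf)              # T2
    # Approximate t = T2^{-1}(s): the level whose desired CDF is closest to s.
    lut = np.array([np.argmin(np.abs(cdf_d - s)) for s in cdf_f],
                   dtype=np.uint8)
    return lut[f]

# Specifying a uniform PDF reduces the procedure to (approximate) equalization:
f = np.array([[0, 0, 1], [1, 2, 3]], dtype=np.uint8)
print(specify_histogram(f, np.ones(256)))   # close to the equalized result
```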
Although the procedure given above can theoretically lead us to an image
having the specified histogram, the method faces limitations in practice.
Difficulty arises in the very first step of specifying a meaningful histogram or

(a) [histogram plot: probability of occurrence versus gray level]

(b) [plot: output gray level versus input gray level]

FIGURE 4.17
(a) Normalized histogram of the histogram-equalized myocyte image in Fig-
ure 4.16 (b). (b) The histogram-equalizing transformation (LUT). See Figure
4.8 (a) for the histogram of the original image.
PDF; several trials may be required before a usable image is obtained. More
importantly, in a practical implementation with discrete gray levels, it will be
difficult, if not impossible, to derive the inverse transform T_2^{-1}. The possible
existence of many-to-one mapping segments in the histogram-equalizing
transform T_2, as we saw in the examples in Section 4.5.1, may render inversion
impossible. Appropriate specification of the desired PDF could facilitate the
design of an LUT to approximately represent T_2^{-1}. The LUTs corresponding
to T_1 and T_2^{-1} may be combined into one LUT that may be applied to the
given image f to obtain the desired image d in a single step. Note that the
image obtained as above may have a histogram that only approximates the
one specified.

4.5.3 Limitations of global operations
Global operators such as gray-scale and histogram transforms provide simple
mechanisms to manipulate the appearance of images. Some knowledge about
the range of gray levels of the features of interest can assist in the design of
linear or nonlinear LUTs for the enhancement of selected features in a given
image. Although histogram equalization can lead to useful results in some
situations, it quite commonly results in poor images. Even if we set aside
the limitations related to nonunique transforms, a global approach to image
enhancement ignores the nonstationary nature of images, and hence could
lead to poor results. The results of histogram equalization of the chest X-ray
and myocyte images in Figures 4.14 and 4.16 demonstrate the limitations
of global transforms. Given the wide range of details of interest in medical
images, such as the hard tissues (bone) and soft tissues (lung) in a chest X-ray
image, it is desirable to design local and adaptive transforms for effective
image enhancement.

4.5.4 Local-area histogram equalization
Global histogram equalization tends to result in images where features having
gray levels with low probabilities of occurrence in the original image are
merged upon quantization of the equalizing transform, and hence are lost in
the enhanced image. Ketchum [231] attempted to address this problem by
suggesting the application of histogram equalization on a local basis. In local-area
histogram equalization (LAHE), the histogram of the pixels within a 2D
sliding rectangular window, centered at the current pixel being processed, is
equalized, and the resulting transform is applied only to the central pixel; the
process is repeated for every pixel in the image. The window provides the
local context for the pixel being processed. The method is computationally
expensive because a new transform needs to be computed for every pixel.
Pizer et al. [232], Leszczynski and Shalev [233], and Rehm and Dallas [234]
proposed variations of LAHE, and extended the method to the enhancement
of medical images. In one of the variations of LAHE, the histogram-equalizing
transforms are computed not for every pixel, but only for a number of
nonoverlapping rectangular blocks spanning the image. The pixels at the center of
each block are processed using the corresponding transform. Pixels that are
not at the centers of the blocks are processed using interpolated versions of
the transforms corresponding to the four neighboring center pixels. The success
of LAHE depends upon the appropriate choice of the size of the sliding
window in relation to the sizes of the objects present in the image, and of the
corresponding background areas.
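A direct (unoptimized) rendering of the basic LAHE procedure is sketched below; it recomputes a windowed histogram for every pixel, which makes the computational cost evident. The function name and the reflective border padding are our choices:

```python
import numpy as np

def lahe(f, w=15, L=256):
    """Local-area histogram equalization: equalize the histogram of the
    w x w window centered on each pixel, and apply the resulting
    transform to the central pixel only."""
    h = w // 2
    padded = np.pad(f, h, mode='reflect')   # border handling is our choice
    g = np.zeros_like(f)
    for m in range(f.shape[0]):
        for n in range(f.shape[1]):
            win = padded[m:m + w, n:n + w]
            cdf = np.cumsum(np.bincount(win.ravel(), minlength=L)) / win.size
            g[m, n] = np.round(cdf[f[m, n]] * (L - 1))
    return g

# A constant image: every local histogram is a single spike, so every
# pixel maps to the top of the display range.
flat = np.full((4, 4), 7, dtype=np.uint8)
print(lahe(flat, w=3))   # all pixels become 255
```

The spreading of a narrow local gray-level range over the full display range, visible even in this toy case, is the mechanism behind the gray-level inversion noted in the example below.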
Example: The images in Figures 4.18 (c) and (d) show the results of
application of the LAHE method to the image in part (a) of the figure, using
windows of size 11 × 11 and 101 × 101 pixels, respectively. The result of
global histogram equalization is shown in part (b) of the figure for comparison.
Although the results of LAHE provide improved visualization of some of the
details within the snow cave, the method has led to gray-level inversion in
a few regions (black patches in white snow areas); this effect is due to the
spreading of the gray levels in a small region over the full range of [0, 255],
which is not applicable to all local areas in a given image. The overall quality
of the results of LAHE has been downgraded by this effect.

4.5.5 Adaptive-neighborhood histogram equalization
A limitation of LAHE lies in the use of rectangular windows: although such
a window provides the local context of the pixel being processed, there is
no apparent justification for the choice of the rectangular shape for the moving
window. Furthermore, the success of the method depends significantly
upon proper choice of the size of the window; the use of a fixed window of a
prespecified size over an entire image has no particular reasoning.
Paranjape et al. [230] proposed an adaptive-neighborhood approach to
histogram equalization. As we saw in Section 3.7.5, the adaptive-neighborhood
image processing paradigm is based upon the identification of variable-shape,
variable-size neighborhoods for each pixel by region growing. Because the
region-growing procedure used for adaptive-neighborhood image processing
leads to a relatively uniform region, with gray-level variations limited to that
permitted by the specified threshold, the local histogram of such a region will
tend to span a limited range of gray levels. Equalizing such a histogram and
permitting the occurrence of the entire range of gray levels in any and every
local context is inappropriate. In order to provide an increased context to
histogram equalization, Paranjape et al. included in the local area not only
the foreground region grown, but also a background composed of a ribbon
of pixels molded to the foreground; see Figure 3.46. The extent of the local
context provided depends upon the tolerance specified for region growing, the
width of the background ribbon of pixels, and the nature of gray-level
variability present in the given image. The method adapts to local details present
in the given image; regions of different size and shape are grown for each pixel.

(a) (b)

(c) (d)

(e) (f)
FIGURE 4.18
(a) Image of a girl in a snow cave (240 × 288 pixels). (b) Result of global histogram
equalization. Results of LAHE with (c) an 11 × 11 window and (d) a 101 × 101
window. Results of adaptive-neighborhood histogram equalization with (e) growth
tolerance 16 and background width 5 pixels, and (f) growth tolerance 64 and
background width 8 pixels. Reproduced with permission from R.B. Paranjape, W.M.
Morrow, and R.M. Rangayyan, "Adaptive-neighborhood histogram equalization for
image enhancement", CVGIP: Graphical Models and Image Processing, 54(3):259–
267, 1992. © Academic Press.
After obtaining the histogram of the local region, the equalizing transform
is derived, and applied only to the seed pixel from where the process was
started. The same value is applied to all redundant seed pixels in the region;
that is, to the pixels that have the same gray-level value as the seed (for which
the same region would have been grown using a simple tolerance).
In an extension of adaptive-neighborhood histogram equalization to color
images proposed by Ciuc et al. [235], instead of equalizing the local histogram,
an adaptive histogram-stretching operation is applied to the local histograms.
The enhancement operation is applied only to the intensity of the image;
undesired changes to the color balance (hue) are prevented by this method.
Example: Figure 4.19 shows a simple test image with square objects of
different gray levels, as well as its enhanced versions using global, local-area, and
adaptive-neighborhood histogram equalization. The limitations of global
histogram equalization are apparent in the fact that the brighter, inner square on
the right-hand side of the image remains almost invisible. The result of LAHE
permits improved visualization of the inner squares; however, the artifacts due
to block-wise processing are obvious and disturbing. Adaptive-neighborhood
histogram equalization has provided the best result, with enhanced visibility
of the inner squares and without any artifacts.

FIGURE 4.19
(a) A test image and its enhanced versions by: (b) global or full-frame histogram
equalization, (c) LAHE, and (d) adaptive-neighborhood histogram
equalization. Image courtesy of R.B. Paranjape.
Example: The images in Figures 4.18 (e) and (f) show the results of
application of the adaptive-neighborhood histogram equalization method to the
image in part (a) of the figure. The two images were obtained using growth
tolerance values of 16 and 64, and background widths of 5 and 8 pixels,
respectively. The larger tolerance and larger background width provide for
larger areas of the local context to be included in the local histogram. The
result of global histogram equalization is shown in part (b) of the figure for
comparison. The results of adaptive-neighborhood histogram equalization
provide improved visualization of details and image features both inside and
outside the snow cave. Furthermore, the result with the larger growth tolerance
and background ribbon width is relatively free of the gray-level inversion
(black patches in otherwise white areas) present in the results of LAHE,
shown in parts (c) and (d) of the same figure.

4.6 Convolution Mask Operators
Filtering images using 3 × 3 convolution masks is a popular approach. Several
such masks have been proposed and are in practical use for image enhancement.
Equation 3.39 demonstrates the use of a simple 3 × 3 mask to represent
the local mean filter. We shall explore a few other 3 × 3 convolution masks
for image enhancement in the following sections.

4.6.1 Unsharp masking
When an image is blurred by some unknown phenomenon, we could assume
that each pixel in the original image contributes, in an additive manner, a
certain fraction of its value to the neighboring pixels. Then, each pixel is
composed of its own true value, plus fractional components of its neighbors.
The spreading of the value of a pixel into its neighborhood may be viewed as
the development of a local fog or blurred background.
In an established photographic technique known as unsharp masking, the
given degraded image, in its negative form, is first blurred, and a positive
transparency is created from the result. The original negative and the positive
are held together, and a (positive) print is made of the combination. The
procedure leads to the subtraction of the local blur or fog component, and
hence to an improved and sharper image.
A popular 3  3 convolution mask that mimics unsharp masking is given by
2 1 3
; ; 81 ; 18
66 8 77
66 ; 18 2 ; 8 77 :
1 (4.14)
4 5
; 18 ; 81 ; 18

Observe that the net sum of the values in the mask equals unity therefore,
there is no net change in the local average intensity.
The operation above may be generalized to permit the use of other local window sizes and shapes as
\[
f_e(m, n) = \alpha \left[ g(m, n) - \bar{g}(m, n) \right] + g(m, n). \tag{4.15}
\]
This expression indicates that the pixel at the location (m, n) in the enhanced image f_e(m, n) is given as a weighted combination of the corresponding pixel g(m, n) in the given degraded image, and the difference between the pixel and the local mean \bar{g}(m, n). The expression is equivalent to the mask in Equation 4.14, with \alpha = 1 and the local mean being computed as the average of the eight neighbors of the pixel being processed. Note that because the mask possesses symmetry about both the x and y axes, reversal has no effect, and hence is not required, in performing convolution.
The relative weighting between the pixel being processed and the local difference could be modified depending upon the nature of the image and the desired effect, leading to various values at the central location in the mask given in Equation 4.14. Equivalently, different values of \alpha could be used in Equation 4.15. Because the local difference in Equation 4.15 is a measure of the local gradient, and because gradients are associated with edges, combining the given image with its local gradient could be expected to lead to edge enhancement or high-frequency emphasis.
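As a concrete sketch, the operation of Equation 4.15 (with the local mean taken over the eight neighbors, so that \alpha = 1 reproduces the mask of Equation 4.14) may be implemented as follows; the function name and the replication of border pixels are choices of this illustration, not specified in the text:

```python
import numpy as np

def unsharp_mask(g, alpha=1.0):
    """Unsharp masking, f_e = alpha * (g - local mean) + g, with the
    local mean computed over the eight neighbours of each pixel."""
    g = np.asarray(g, dtype=float)
    p = np.pad(g, 1, mode='edge')  # replicate border pixels
    # Sum of the eight neighbours (centre excluded).
    nbr_sum = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
               p[1:-1, :-2] +               p[1:-1, 2:] +
               p[2:, :-2]  + p[2:, 1:-1]  + p[2:, 2:])
    local_mean = nbr_sum / 8.0
    return alpha * (g - local_mean) + g
```

With alpha = 1, the result equals convolution with the mask of Equation 4.14; a uniform image is left unchanged, consistent with the unit net sum of the mask.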
Example: Figure 4.20 (a) shows a test image of a clock; part (b) of the same figure shows the result of unsharp masking using the 3 × 3 mask in Equation 4.14. It is evident that the details in the image, such as the numerals, have been sharpened by the operation. However, it is also seen that the high-frequency emphasis property of the filter has led to increased noise in the image.
Figures 4.21 (a), 4.22 (a), 4.23 (a), and 4.24 (a) show the image of a myocyte, a part of a chest X-ray image, an MR image of a knee, and the Shapes test image; the results of enhancement obtained by the unsharp masking operator are shown in parts (b) of the same figures. The chest image, in particular, has been enhanced well by the operation: details of the lungs in the dark region in the lower-right quadrant of the image are seen better in the enhanced image than in the original.
An important point to observe from the result of enhancement of the Shapes test image is that the unsharp masking filter performs edge enhancement. Furthermore, strong edges will have a clearly perceptible overshoot and undershoot; this could be considered to be a form of ringing artifact. The images in Figure 4.25 illustrate the artifact in an enlarged format. Although the artifact is not as strongly evident in the other test images, the effect is, indeed, present. Radiologists often do not prefer edge enhancement, possibly for this reason.
Note that the unsharp masking operation could lead to negative pixel values in the enhanced image; the user has to decide how to handle this aspect when displaying the result. The illustrations in this section were prepared by linearly mapping selected ranges of the results to the display range of [0, 255], as stated in the figure captions; compression of the larger dynamic range in the enhanced image to a smaller display range could mute the effect of enhancement to some extent.
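The display mapping described here — linearly mapping a selected range of the result to [0, 255], with values outside the range clipped — may be sketched as follows (the function name is a choice of this illustration):

```python
import numpy as np

def map_to_display(img, lo, hi):
    """Linearly map the selected range [lo, hi] to the display range
    [0, 255]; values outside [lo, hi] are clipped."""
    img = np.asarray(img, dtype=float)
    out = (img - lo) / float(hi - lo) * 255.0
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```

For example, the display range [-50, 250] used for Figure 4.20 (b) maps -50 to 0 and 250 to 255, and clips the remainder of the full range [-68, 287].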

4.6.2 Subtracting Laplacian


Under certain conditions, a degraded image g may be modeled as being the result of a diffusion process that spreads intensity values over space as a function of time, according to the partial differential equation [11]
\[
\frac{\partial g}{\partial t} = \kappa \, \nabla^2 g, \tag{4.16}
\]
where t represents time, \kappa > 0 is a constant, and
\[
\nabla^2 g = \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2}. \tag{4.17}
\]
In the initial state at t = 0, we have g(x, y, 0) = f(x, y), the original image. At some time instant t = \tau > 0, the degraded image g(x, y, \tau) is observed. The degraded image may be expressed in a Taylor series as
\[
g(x, y, \tau) = g(x, y, 0) + \tau \, \frac{\partial g}{\partial t}(x, y, \tau) - \frac{\tau^2}{2} \, \frac{\partial^2 g}{\partial t^2}(x, y, \tau) + \cdots. \tag{4.18}
\]
Ignoring the quadratic and higher-order terms, letting g(x, y, 0) = f(x, y), and using the diffusion model in Equation 4.16, we get
\[
f_e = g - \kappa \tau \, \nabla^2 g, \tag{4.19}
\]
where f_e represents an approximation to f. Thus, we have an enhanced image obtained as a weighted subtraction of the given image and its Laplacian (gradient).
A discrete implementation of the Laplacian is given by the 3 × 3 convolution mask
\[
\begin{bmatrix}
0 & 1 & 0 \\
1 & -4 & 1 \\
0 & 1 & 0
\end{bmatrix} \tag{4.20}
\]
FIGURE 4.20
(a) Clock test image. (b) Result of unsharp masking; display range [-50, 250] out of [-68, 287]. (c) Laplacian (gradient) of the image; display range [-50, 50] out of [-354, 184]. (d) Result of the subtracting Laplacian; display range [-50, 250] out of [-184, 250].
FIGURE 4.21
(a) Image of a myocyte; the range from the minimum to the maximum of the image has been linearly mapped to the display range [0, 255]. (b) Result of unsharp masking; display range [-20, 180] out of [-47, 201]. (c) Laplacian (gradient) of the image; display range [-20, 20] out of [-152, 130]. (d) Result of the subtracting Laplacian; display range [-50, 200] out of [-130, 282].
FIGURE 4.22
(a) Part of a chest X-ray image. (b) Result of unsharp masking; display range [-30, 230] out of [-59, 264]. (c) Laplacian (gradient) of the image; display range [-5, 5] out of [-134, 156]. (d) Result of the subtracting Laplacian; display range [-50, 250] out of [-156, 328].
FIGURE 4.23
(a) MR image of a knee. (b) Result of unsharp masking; display range [-40, 250] out of [-72, 353]. (c) Laplacian (gradient) of the image; display range [-50, 50] out of [-302, 365]. (d) Result of the subtracting Laplacian; display range [-50, 250] out of [-261, 549].
FIGURE 4.24
(a) Shapes test image. (b) Result of unsharp masking; display range [-100, 250] out of [-130, 414]. See also Figure 4.25. (c) Laplacian (gradient) of the image; display range [-50, 50] out of [-624, 532]. (d) Result of the subtracting Laplacian; display range [-300, 300] out of [-532, 832].
FIGURE 4.25
Enlarged views of a part of (a) the Shapes test image and (b) the result of unsharp masking; see also Figure 4.24 (a) and (b). Observe the edge-enhancement artifact.

see also Equation 2.82 and the associated discussion. Observe that the net weight of the coefficients in the Laplacian mask is zero; therefore, the mask performs a differentiation operation that will lead to the loss of intensity information (that is, the result in an area of any uniform brightness value will be zero).
Letting the weighting factor \kappa\tau = 1 in Equation 4.19, we get the following 3 × 3 mask, known as the subtracting Laplacian:
\[
\begin{bmatrix}
0 & -1 & 0 \\
-1 & 5 & -1 \\
0 & -1 & 0
\end{bmatrix}. \tag{4.21}
\]
Because the net weight of the mask is equal to unity, the mask retains the local average intensity in the image.
Comparing Equations 4.21 and 4.14, we see that they have a similar structure, the main difference being in the number of the neighboring pixels used in computing the local gradient or difference. For this reason, the unsharp masking filter is referred to as the generalized (subtracting) Laplacian by some authors. On the same note, the subtracting Laplacian is also an unsharp masking filter. For the same reasons as in the case of the unsharp masking filter, the subtracting Laplacian also leads to edge enhancement or high-frequency emphasis; see also Equation 2.82 and the associated discussion.
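A sketch of the two operators in code form, using direct neighbor shifts rather than an explicit convolution routine (the replication of border pixels is an assumption of this illustration):

```python
import numpy as np

def laplacian(g):
    """Discrete Laplacian of Equation 4.20: N + S + E + W - 4 * centre."""
    g = np.asarray(g, dtype=float)
    p = np.pad(g, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * g)

def subtracting_laplacian(g):
    """Equation 4.19 with kappa * tau = 1: f_e = g - laplacian(g),
    equivalent to convolution with the mask of Equation 4.21."""
    g = np.asarray(g, dtype=float)
    return g - laplacian(g)
```

A uniform image yields a zero Laplacian and is passed through unchanged by the subtracting Laplacian, reflecting the zero and unit net weights of the two masks.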
Example: Part (c) of Figure 4.20 shows the Laplacian of the test image in part (a) of the same figure. The Laplacian shows large values (positive or negative) at the strong edges that are present in the image. Part (d) of the figure shows the result of the subtracting Laplacian, which demonstrates the edge-enhancing property of the filter.
Figures 4.21 (c), 4.22 (c), 4.23 (c), and 4.24 (c) show the Laplacian of the corresponding images in parts (a) of the same figures. Parts (d) of the figures show the results of the subtracting Laplacian operator. The subtracting Laplacian has provided higher levels of sharpening than the unsharp masking filter in most cases; the result is also noisier in the case of the Clock test image.
Observe that the Laplacian does not maintain the intensity information present in the image, whereas the subtracting Laplacian does maintain this information; the former results in a depiction of the edges (gradient) present in the image, whereas the latter provides a sharper image. As in the case of unsharp masking, the subtracting Laplacian could lead to negative pixel values in the enhanced image; the user has to decide how to handle this aspect when displaying the result. The illustrations in this section were prepared by linearly mapping selected ranges of the results to the display range of [0, 255], as stated in the figure captions; compression of the larger dynamic range in the enhanced image to a smaller display range could mute the effect of enhancement to some extent, and also alter the intensity values of parts of the image.
Similar to the artifact introduced by the unsharp-masking operator as illustrated in Figure 4.25, the subtracting Laplacian could also introduce disturbing overshoot and undershoot artifacts around edges; see Figure 4.24 (d). This characteristic of the operator is illustrated using a 1D signal in Figure 4.26. Such artifacts could affect the quality and acceptance of images enhanced using the subtracting Laplacian.

4.6.3 Limitations of fixed operators


Fixed operators, such as the unsharp-masking and subtracting-Laplacian filters, apply the same mathematical operation at every location over the entire space of the given image. The coefficients and the size of such filters do not vary, and hence the filters cannot adapt to changes in the nature of the image from one location to another. For these reasons, fixed operators may encounter limited success in enhancing large images with complex and space-variant features. In medical images, we encounter a wide latitude of details; for example, in a chest X-ray image, we see soft-tissue patterns in the lungs and hard-tissue structures such as ribs. Similar changes in density may be of concern in one anatomical region or structure, but not in another. The spatial scale of the details of diagnostic interest could also vary significantly from one part of an image to another; for example, from fine blood vessels or bronchial tubes to large bones such as the ribs in chest X-ray images. Operators with fixed coefficients and fixed spatial scope of effect cannot take these factors into
FIGURE 4.26
Top to bottom: a rectangular pulse signal smoothed with a Gaussian blur function; the first derivative of the signal; the second derivative of the signal; and the result of a filter equivalent to the subtracting Laplacian. The derivatives are shown with enlarged amplitude scales as compared to the original and filtered signals.
consideration. Adaptive filters and operators are often desirable to address these concerns.

4.7 High-frequency Emphasis


Highpass filters are useful in detecting edges, under the assumption that high-frequency Fourier spectral components are associated with edges and large changes in the image. This property follows from the effect of differentiation of an image on its Fourier transform, as expressed by Equation 2.75.
The ideal highpass filter: The ideal highpass filter is defined in the 2D Fourier space as
\[
H(u, v) = \begin{cases} 1 & \text{if } D(u, v) \geq D_0, \\ 0 & \text{otherwise}, \end{cases} \tag{4.22}
\]
where D(u, v) = \sqrt{u^2 + v^2} is the distance of the frequency component at (u, v) from the DC point (u, v) = (0, 0), with the spectrum being centered such that the DC component is at its center (see Figures 2.26, 2.27, and 2.28). D_0 is the cutoff frequency, below which all components of the Fourier transform of the given image are set to zero. Figure 4.27 (a) shows the ideal highpass filter function; Figure 4.28 shows the profile of the filter.
The Butterworth highpass filter: As we saw in the case of lowpass filters (in Section 3.4.1), prevention of the ringing artifacts encountered with the ideal filter requires that the transition from the stopband to the passband be smooth. The Butterworth filter response is monotonic in the passband as well as in the stopband. (See Rangayyan [31] for details and illustrations of the 1D Butterworth filter.)
In 2D, the Butterworth highpass filter is defined as [8]
\[
H(u, v) = \frac{1}{1 + (\sqrt{2} - 1) \left[ \frac{D_0}{D(u, v)} \right]^{2n}}, \tag{4.23}
\]
where n is the order of the filter, D(u, v) = \sqrt{u^2 + v^2}, and D_0 is the half-power 2D radial cutoff frequency [the scale factor in the denominator leads to the gain of the filter being \frac{1}{\sqrt{2}} at D(u, v) = D_0]. The filter's transition from the stopband to the passband becomes steeper as the order n is increased. Figure 4.27 (b) illustrates the magnitude (gain) of the Butterworth highpass filter with the normalized cutoff D_0 = 0.2 and order n = 2. Figure 4.28 shows the profile of the filter.
Because the gain of a highpass filter is zero at DC, the intensity information is removed by the filter. This leads to a result that depicts only the edges present in the image. Furthermore, the result will have positive and negative
FIGURE 4.27
(a) The magnitude transfer function of an ideal highpass filter. The cutoff frequency D_0 is 0.2 times the maximum frequency. (b) The magnitude transfer function of a Butterworth highpass filter, with normalized cutoff D_0 = 0.2 and order n = 2. The (u, v) = (0, 0) point is at the center. Black represents a gain of zero, and white represents a gain of unity. See also Figure 4.28.

values. If the enhancement rather than the extraction of edges is desired,


it is necessary to maintain the intensity information. This eect could be
achieved by using a high-emphasis lter, de ned simply as a highpass lter
plus a constant in the (u v) space. The Butterworth high-emphasis lter may
be speci ed as
H (u v) = 1 + 2
p h i 2n  (4.24)
1 + ( 2 ; 1) D(Du0v)
which is similar to the Butterworth highpass lter in Equation 4.23 except for
the addition of the factors 1 and 2 .
The high-emphasis lter has a nonzero gain at DC. High-frequency com-
ponents are emphasized with respect to the low-frequency components in
the image however, the low-frequency components are not removed entirely.
Figure 4.28 shows the pro le of the Butterworth high-emphasis lter, with
1 = 0:5, 2 = 1:0, D0 = 0:2 and n = 2.
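The Butterworth high-emphasis gain of Equation 4.24 may be evaluated on a discrete frequency grid and applied via the 2D Fourier transform, as sketched below; the normalization of frequencies to [-0.5, 0.5], the guard against division by zero at the DC point, and the function names are choices of this illustration:

```python
import numpy as np

def butterworth_high_emphasis(shape, d0, n, a1=0.5, a2=1.0):
    """Gain of Equation 4.24 on a centred grid of normalized
    frequencies in [-0.5, 0.5]; a1 sets the gain floor near DC."""
    u = np.fft.fftshift(np.fft.fftfreq(shape[0]))
    v = np.fft.fftshift(np.fft.fftfreq(shape[1]))
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    D = np.maximum(D, 1e-12)  # avoid dividing by zero at DC
    hp = 1.0 / (1.0 + (np.sqrt(2.0) - 1.0) * (d0 / D) ** (2 * n))
    return a1 + a2 * hp

def apply_frequency_filter(img, H):
    """Multiply the centred spectrum of img by the gain H and invert."""
    G = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(G * H)))
```

At D(u, v) = D_0 the highpass term equals 1/sqrt(2), the half-power condition noted for Equation 4.23; near DC the gain falls to approximately \alpha_1, so a constant image is scaled rather than removed.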
Examples: Figure 4.29 (a) shows a test image of a clock; part (b) of the same figure shows the result of the ideal highpass filter. Although the edges in the image have been extracted by the filter, the strong presence of ringing artifacts diminishes the value of the result. Part (c) of the figure shows the result of the Butterworth highpass filter, where the edges are seen without the ringing artifact. The result of the Butterworth high-emphasis filter, shown in
[Plot: filter gain versus normalized frequency]
FIGURE 4.28
Profiles of the magnitude transfer functions of an ideal highpass filter (solid line), a Butterworth highpass filter (dash-dot line, normalized cutoff D_0 = 0.2 and order n = 2), and a Butterworth high-emphasis filter (dashed line). See also Figure 4.27.
part (d) of the figure, demonstrates edge enhancement; however, the relative intensities of the objects have been altered.
Figures 4.30 (a), 4.31 (a), 4.32 (a), and 4.33 (a) show the image of a myocyte, a part of a chest X-ray image, an MR image of a knee, and the Shapes test image, respectively. The results of the ideal highpass filter, Butterworth highpass filter, and Butterworth high-emphasis filter are shown in parts (b), (c), and (d), respectively, of the same figures. The distinction between edge enhancement and edge extraction is demonstrated by the examples.

4.8 Homomorphic Filtering for Enhancement


We have studied several linear filters designed to separate images that were added together. The question asked has been, given g(x, y) = f(x, y) + \eta(x, y), how could one extract f(x, y) only? Given that the Fourier transform is linear, we know that the Fourier transforms of the images as above are also combined in an additive manner: G(u, v) = F(u, v) + N(u, v). Therefore, a linear filter will facilitate the separation of F(u, v) and N(u, v), with the assumption that they have significant portions of their energies in different frequency bands.
Suppose now that we are presented with an image that contains the product of two images, such as g(x, y) = f(x, y) s(x, y). From the multiplication or convolution property of the Fourier transform, we have G(u, v) = F(u, v) * S(u, v), where * represents 2D convolution in the frequency domain. How would we be able to separate f(x, y) from s(x, y)?
Furthermore, suppose we have g(x, y) = h(x, y) * f(x, y), where * stands for 2D convolution, as in the case of the passage of the original image f(x, y) through an LSI system or filter with the impulse response h(x, y). The Fourier transforms of the signals are related as G(u, v) = H(u, v) F(u, v). How could we attempt to separate f(x, y) and h(x, y)?

4.8.1 Generalized linear filtering


Given that linear filters are well established and understood, it is attractive to consider extending their application to images that have been combined by operations other than addition, especially by multiplication and convolution as indicated in the preceding paragraphs. An interesting possibility to achieve this is via conversion of the operation combining the images into addition by one or more transforms. Under the assumption that the transformed images occupy different portions of the transform space, linear filters may be applied to separate them. The inverses of the transforms used initially would then take us back to the original space of the images. This approach was proposed in a series of papers by Bogert et al. [236] and Oppenheim et al. [237, 238]; see
FIGURE 4.29
(a) Clock test image. Result of (b) the ideal highpass filter, display range [-50, 50] out of [-79, 113]; (c) the Butterworth highpass filter, display range [-40, 60] out of [-76, 115]; and (d) the Butterworth high-emphasis filter, display range [-40, 160] out of [-76, 204].
FIGURE 4.30
(a) Image of a myocyte; the range from the minimum to the maximum of the image has been linearly mapped to the display range [0, 255]. Result of (b) the ideal highpass filter, display range [-20, 20] out of [-60, 65]; (c) the Butterworth highpass filter, display range [-20, 20] out of [-61, 61]; and (d) the Butterworth high-emphasis filter, display range [-20, 100] out of [-52, 138].
FIGURE 4.31
(a) Part of a chest X-ray image. Result of (b) the ideal highpass filter, display range [-5, 5] out of [-74, 91]; (c) the Butterworth highpass filter, display range [-5, 5] out of [-78, 95]; and (d) the Butterworth high-emphasis filter, display range [-50, 130] out of [-78, 192].
FIGURE 4.32
(a) MR image of a knee. Result of (b) the ideal highpass filter, display range [-50, 50] out of [-117, 127]; (c) the Butterworth highpass filter, display range [-50, 50] out of [-126, 139]; and (d) the Butterworth high-emphasis filter, display range [-30, 150] out of [-78, 267].
FIGURE 4.33
(a) Shapes test image. Result of (b) the ideal highpass filter, display range [-100, 100] out of [-154, 183]; (c) the Butterworth highpass filter, display range [-100, 100] out of [-147, 176]; and (d) the Butterworth high-emphasis filter, display range [-100, 200] out of [-147, 296].
also Childers et al. [239] and Rangayyan [31] for details and illustrations of application to biomedical signals. Because the procedure extends the application of linear filters to multiplied and convolved images, it has been referred to as generalized linear filtering. Furthermore, as the operations can be represented by algebraically linear transformations between the input and output vector spaces, they have been called homomorphic systems.
As a simple illustration of a homomorphic system for multiplied images, consider again the image
\[
g(x, y) = f(x, y) \, s(x, y). \tag{4.25}
\]
Given the goal of converting the multiplication operation to addition, it is evident that a simple logarithmic transformation is appropriate:
\[
\log[g(x, y)] = \log[f(x, y) \, s(x, y)] = \log[f(x, y)] + \log[s(x, y)], \tag{4.26}
\]
with f(x, y) \neq 0 and s(x, y) \neq 0 \;\; \forall (x, y). The logarithms of the two images are now combined in an additive manner. Taking the Fourier transform, we get
\[
G_l(u, v) = F_l(u, v) + S_l(u, v), \tag{4.27}
\]
where the subscript l indicates that the Fourier transform has been applied to a log-transformed version of the image.
Assuming that the logarithmic transformation has not affected the separability of the Fourier components of the two images f(x, y) and s(x, y), a linear filter (lowpass, highpass, etc.) may now be applied to G_l(u, v) to separate them. An inverse Fourier transform will yield the filtered image in the space domain. An exponential operation will complete the reversal procedure. This procedure was proposed by Stockham [240] for image processing in the context of a visual model.
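The chain of operations described here (and diagrammed in Figure 4.34) may be sketched as follows, assuming a strictly positive input image and a centered frequency-domain gain array H — for example, the Butterworth high-emphasis gain of Equation 4.24:

```python
import numpy as np

def homomorphic_filter(img, H):
    """Multiplicative homomorphic filtering: log -> Fourier transform ->
    linear filter (centred gain H) -> inverse transform -> exponential."""
    img = np.asarray(img, dtype=float)
    log_img = np.log(img)                      # multiplication -> addition
    G = np.fft.fftshift(np.fft.fft2(log_img))  # centred spectrum
    filt = np.real(np.fft.ifft2(np.fft.ifftshift(G * H)))
    return np.exp(filt)                        # undo the logarithm
```

With H identically equal to one, the pipeline reduces to the identity, confirming that the forward and inverse transforms cancel; any enhancement comes entirely from the shape of H.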
Figure 4.34 illustrates the operations involved in a multiplicative homomorphic system (or filter). The symbol at the input or output of each block indicates the operation that combines the image components at the corresponding step. A system of this nature is useful in image enhancement, where an image may be treated as the product of an illumination function and a transmittance or reflectance function. The homomorphic filter facilitates the separation of the illumination function and correction for nonuniform lighting. The method has been used to achieve simultaneous dynamic range compression and contrast enhancement [7, 8, 237, 240].
The extension of homomorphic filtering to separate convolved signals is described in Section 10.3.
Example: The test image in Figure 4.35 (a) shows a girl inside a snowcave. The intensity of illumination of the scene differs significantly between the outside and the inside of the snowcave. Although there is high contrast between the outside and the inside of the snowcave, there is poor contrast of the details within the snowcave. Because the image possesses a large dynamic
[Block diagram: input image → logarithmic transform → Fourier transform → linear filter → inverse Fourier transform → exponential transform → filtered image. The symbol (× or +) at each interface indicates the operation combining the image components at that step.]
FIGURE 4.34
Homomorphic filtering for enhancement of images combined by multiplication.
range, linear stretching of the gray-level range of the full image is not viable.
(However, a part of the range may be stretched to the full range, as illustrated
in Figure 4.11.)
Figure 4.35 (b) shows the result of logarithmic transformation of the image in part (a) of the figure. Although the girl is now visible, the image is not sharp. The image was filtered using a Butterworth high-emphasis filter, as illustrated in Figure 4.36, within the context of the homomorphic system shown in Figure 4.34. The filter was specified as in Equation 4.24, with \alpha_1 = 0.1, \alpha_2 = 0.5, D_0 = 0.6, and n = 1. The result, shown in Figure 4.35 (c), demonstrates reduced dynamic range in terms of the difference in illumination between the inside and the outside of the snowcave, but increased contrast and sharpness of the details within the snowcave. Application of the Butterworth high-emphasis filter without the homomorphic system resulted in the image in Figure 4.35 (d), which does not present the same level of enhancement as seen in Figure 4.35 (c).
Example: A part of a mammogram containing calcifications is shown in Figure 4.37 (a). The multiplicative model of an illuminated scene does not apply to X-ray imaging; however, the image has nonuniform brightness (density) that affects the visibility of details in the darker regions, and could benefit from homomorphic enhancement. Figure 4.37 (b) shows the result of logarithmic transformation of the image in part (a) of the figure; the result of filtering using a Butterworth high-emphasis filter is shown in part (c). The log operation has improved the visibility of the calcifications in the dark region in the upper-central part of the image (arranged along an almost-vertical linear pattern); application of the Butterworth high-emphasis filter (illustrated in Figure 4.36) has further sharpened these features. The result [Figure 4.37 (c)],
FIGURE 4.35
(a) Test image of a girl in a snowcave. Result of (b) log transformation; (c) homomorphic filtering including a Butterworth high-emphasis filter; and (d) the Butterworth high-emphasis filter only. The test image in this illustration is of size 256 × 256 pixels, and is slightly different from that in Figures 4.11 and 4.18; regardless, comparison of the results indicates the advantages of homomorphic filtering. The Butterworth high-emphasis filter used is shown in Figure 4.36. Image courtesy of W.M. Morrow [215, 230].
[Plot: filter gain versus normalized frequency]
FIGURE 4.36
Profile of the high-emphasis Butterworth filter used to enhance high-frequency components along with homomorphic filtering, as illustrated in Figures 4.34, 4.35, and 4.37.
however, does not depict the distinction between high-density tissues (bright areas) and low-density tissues (dark areas).
The result of application of the Butterworth high-emphasis filter without the homomorphic system is shown in Figure 4.37 (d). This operation has also resulted in improved depiction of the calcifications in the dark regions, albeit not to the same extent as within the context of the homomorphic procedure. Yoon et al. [241] extended the application of homomorphic high-emphasis filtering to the wavelet domain for contrast enhancement of mammograms.

4.9 Adaptive Contrast Enhancement


Diagnostic features in medical images, such as mammograms, vary widely in size and shape. Classical image enhancement techniques cannot adapt to the varying characteristics of such features. The application of a global transform or a fixed operator to an entire image often yields poor results in at least some parts of the given image. It is, therefore, necessary to design methods that can adapt the operation performed or the pixel collection used to derive measures to the local details present in the image. The following section provides the details of an adaptive-neighborhood approach to contrast enhancement of images.

4.9.1 Adaptive-neighborhood contrast enhancement


Morrow et al. [123, 215] proposed an adaptive-neighborhood contrast enhancement technique for application to mammograms. As we saw in Section 3.7.5, in adaptive-neighborhood or region-based image processing, an adaptive neighborhood is defined about each pixel in the image, the extent of which is dependent on the characteristics of the image feature in which the pixel being processed is situated. This neighborhood of similar pixels is called an adaptive neighborhood or region.
Note that in image segmentation, groups of pixels are found that have some property in common (such as similar gray level), and are used to define disjoint image regions called segments. Region-based processing may be performed by initially segmenting the given image and then processing each segment in turn. Alternatively, for region-based processing, we may define possibly overlapping regions for each pixel, and process each of the regions independently.
Regions, if properly defined, should correspond to image features. Then, features in the image are processed as whole units, rather than pixels being processed using arbitrary groups of neighboring pixels (for example, 3 × 3 masks). Region-based processing could also be designated as pixel-independent
FIGURE 4.37
(a) Original image of a part of a mammogram with malignant calcifications. Result of (b) log transformation; (c) homomorphic filtering including a Butterworth high-emphasis filter; and (d) the Butterworth high-emphasis filter only. See also Figures 4.40 and 4.41.
processing [242, 243, 244], feature-based processing, adaptive-neighborhood processing, or object-oriented processing.
The fundamental step in adaptive-neighborhood image processing is defining the extent of regions in the image. Overlapping regions are used in this application because disjoint segmentation of an image, with subsequent enhancement of the segments, would result in noticeable edge artifacts and an inferior enhanced image.
Seed-fill region growing: Morrow et al. [123, 215] used a region-growing technique based on a simple graphical seed-fill algorithm, also known as pixel aggregation [8]. In this method, regions consist of spatially connected pixels that fall within a specified gray-level deviation from the starting or seed pixel. For high-resolution digitized mammograms, 4-connectivity was found, by visual comparison, to be adequate to allow accurate region growing, although small features were better matched with 8-connected regions. The use of 8-connectivity for region growing requires longer computing time than 4-connectivity.
The flowchart in Figure 4.38 illustrates the region-growing algorithm. The algorithm starts with the pixel being processed, called the seed pixel, or simply the seed. The seed is placed in an initially empty queue that holds pixels to be evaluated for inclusion in, or exclusion from, the region being grown. The main loop is then entered. If the queue is empty, the program exits the loop; otherwise, the first pixel is taken from the queue. This pixel is called the current pixel; if its gray-level value is within the specified deviation from the seed, it is labeled as a foreground pixel. The immediate neighbors (either 4-connected or 8-connected, as specified) of the current pixel could possibly qualify to be foreground pixels, and are added to the queue, if they are not already in the queue from being connected to previously checked pixels. If the current pixel is outside the permitted gray-level range, it is marked as a background pixel, and a border pixel of the region has been reached. A region may have a number of internal borders, in addition to the encompassing external border. Thus, the background may consist of more than one set of pixels, with each such set being disconnected from the others. After all of the current pixel's neighbors have been checked, control is directed back to the start of the loop, to check the next pixel in the queue.
The final step in growing a region around the seed is completing the background. This is done by starting with the existing background points, as found during foreground region growing. The neighbors of this set of pixels are examined to see if they belong to either the foreground or background. If not, they are set to be the next layer of the background. The new layer is then used to grow another layer, and so on, until the specified background width is achieved. The region-growing procedure as described above does have inefficiencies, in that a given pixel may be checked more than once for placement in the queue. More complicated algorithms may be used to grow regions along line segments, and thereby partially eliminate this inefficiency [245]. Preliminary testing of a scan-line based algorithm showed minimal improvement with
[Flowchart: add the seed pixel to the foreground queue (FQ); while FQ is not empty, take the top pixel and classify it as foreground (marking it and adding its neighbors to FQ) or background (marking it and adding it to the background queue, BQ); when FQ is exhausted, grow the background layer by layer from BQ until the specified background width is reached.]
FIGURE 4.38
Procedure for region growing for adaptive-neighborhood contrast enhancement of mammograms. Reproduced with permission from W.M. Morrow, R.B. Paranjape, R.M. Rangayyan, and J.E.L. Desautels, "Region-based contrast enhancement of mammograms", IEEE Transactions on Medical Imaging, 11(3):392-406, 1992. © IEEE.
mammogram images, because the types of regions grown in mammograms are usually complex.
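The foreground part of the queue-based procedure may be sketched as follows; the background-completion layers are omitted for brevity, and the fractional tolerance test (compare Equation 4.28), the function name, and the assumption of a nonzero seed gray level are choices of this illustration:

```python
from collections import deque

import numpy as np

def grow_region(img, seed, tol=0.05, connectivity=4):
    """Seed-fill region growing: return a boolean mask of pixels
    connected to `seed` whose gray level is within the fractional
    tolerance `tol` of the seed's gray level."""
    img = np.asarray(img, dtype=float)
    seed_val = img[seed]
    rows, cols = img.shape
    fg = np.zeros(img.shape, dtype=bool)
    seen = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    seen[seed] = True
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity
        nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]
    while queue:
        r, c = queue.popleft()
        if abs(img[r, c] - seed_val) / abs(seed_val) <= tol:
            fg[r, c] = True  # foreground pixel: enqueue its neighbours
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not seen[rr, cc]:
                    seen[rr, cc] = True
                    queue.append((rr, cc))
        # else: background pixel, marking a border of the region
    return fg
```

Each pixel enters the queue at most once (the `seen` array plays the role of the membership check in the flowchart), so the growth terminates after visiting each pixel a bounded number of times.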
The adaptive-neighborhood contrast enhancement procedure may be stated in algorithmic form as follows [246]:
1. The first pixel (or the next unprocessed pixel) in the image is taken as the seed pixel.
2. The immediate neighbors (8-connected pixels) of the seed are checked for inclusion in the region. Each neighbor pixel is checked to see if its gray-level value is within the specified deviation from the seed pixel's gray-level value. The growth tolerance or deviation is specified as
\[
\left| \frac{f(m, n) - \text{seed}}{\text{seed}} \right| \leq T, \tag{4.28}
\]
where f(m, n) is the gray-level value of the neighbor pixel being checked for inclusion, and the threshold T = 0.05.
3. If a neighbor pixel's gray-level value is within the specified deviation, it is added to a queue of foreground pixels that will make up the region being grown. A pixel is added to the queue only if it has not already been included while processing another connected pixel.
4. A pixel f(m, n) is taken from the start of the foreground queue. This becomes the current pixel, whose 8-connected neighbors are checked against the seed's gray level according to the tolerance specified, as in Steps 2 and 3 above.
5. If a neighbor pixel's gray-level value is outside the specified gray-level tolerance range, it is marked as a background pixel. A background pixel indicates that the border of the region has been reached at that position. However, if a neighbor pixel's gray-level value is within the specified deviation, it is added to the foreground.
6. Once all the current pixel's neighbors have been checked, the program goes back to Step 4 to check the connected neighbor pixels of the next pixel in the foreground queue.
7. Steps 4–6 are repeated until region growing stops (that is, no more pixels can be added to the foreground region).
8. The borders of the foreground region (marked as background pixels) are expanded in all directions by a prespecified number of pixels (three pixels in the work of Morrow et al.) to obtain a background region that is molded to the shape of the foreground region. The foreground and background regions together form the adaptive neighborhood of the seed pixel that was used to start the region-growing procedure. See Figure 3.46 for an example of region growing with an image.
9. The contrast of the region is computed as per Equation 2.7, and enhanced as desired; see Figure 4.39. The gray-level value of the seed pixel is modified as per Equation 4.30. All pixels in the foreground region having the same gray-level value as the seed, referred to as the redundant seed pixels, are also modified to the same value as for the seed pixel.
10. Steps 1–9 are executed until all the pixels in the image have been processed.
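The queue-based growth in Steps 1–7 can be sketched in Python as follows. This is only an illustrative implementation of the tolerance test of Equation 4.28 under 8-connectivity; the function name and interface are assumptions for the example, not the code of Morrow et al.

```python
from collections import deque

import numpy as np

def grow_region(image, seed, tol=0.05):
    """Grow an adaptive-neighborhood foreground from a seed pixel.

    A neighbor f(m, n) joins the foreground if
    |f(m, n) - seed_value| <= tol * seed_value (Equation 4.28);
    otherwise it is marked as a background (border) pixel.
    """
    image = np.asarray(image, dtype=float)
    rows, cols = image.shape
    seed_value = image[seed]
    foreground = np.zeros((rows, cols), dtype=bool)
    background = np.zeros((rows, cols), dtype=bool)
    visited = np.zeros((rows, cols), dtype=bool)

    fq = deque([seed])              # foreground queue (FQ)
    visited[seed] = True
    while fq:
        m, n = fq.popleft()
        foreground[m, n] = True
        for dm in (-1, 0, 1):       # scan the 8-connected neighbors
            for dn in (-1, 0, 1):
                p, q = m + dm, n + dn
                if (dm, dn) == (0, 0) or not (0 <= p < rows and 0 <= q < cols):
                    continue
                if visited[p, q]:   # each pixel is queued at most once
                    continue
                visited[p, q] = True
                if abs(image[p, q] - seed_value) <= tol * seed_value:
                    fq.append((p, q))        # within tolerance: foreground
                else:
                    background[p, q] = True  # border of the region
    return foreground, background
```

The border pixels returned here would then be thickened to the prespecified background width (Step 8) by growing further layers from the background queue.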
It should be noted that each pixel in the connected foreground that has the same gray level as the seed will lead to the same foreground and background. These pixels are called the region's redundant seed pixels. Considerable computation may be saved by using this redundancy and obviating the repeated growing of the same regions. Furthermore, the same final transformation that is applied to the region's seed pixel is also applicable to the region's redundant seed pixels. In high-resolution mammogram images, redundant seed pixels were seen to account for over 75% of the pixels in a given image; this large percentage is partially due to the dark background in the image off the projection of the breast, and due to the relatively smooth variations in gray levels in mammograms. The number of redundant seeds is also dependent upon the growth tolerance used for region growing.
Parameters for region growing: The crucial parameter in controlling seed-fill region growing is the criterion used to decide whether a pixel is to be included or excluded in the region. This criterion is defined by the growth tolerance, τ. The growth tolerance indicates the deviation (positive or negative) about the seed pixel's gray level that is allowed within the foreground region. For example, with a growth tolerance of 0.05, any pixel with a gray value between 0.95 and 1.05 times the seed pixel's value, which also satisfies the spatial-connectivity criterion, is included in the region. The reason for using this type of growth tolerance is found from a closer examination of the definition of contrast. Seed-fill region growing results in regions having contrast greater (in magnitude) than a certain minimum contrast, Cmin. It is desired that this minimum contrast be independent of a region's gray level, so that the results of enhancement will be independent of a multiplicative transformation of the image. A region with the minimum positive contrast Cmin will have a mean foreground value of f and a mean background value of (1 − τ)f. Using Equation 2.7, the minimum contrast Cmin is
\[ C_{min} = \frac{f - (1 - \tau) f}{f + (1 - \tau) f} = \frac{\tau}{2 - \tau}. \tag{4.29} \]
The contrast Cmin is thus independent of the foreground gray level or the background gray level, and depends only upon the region-growing tolerance parameter τ. Weber's ratio of 2% for a just-noticeable feature suggests that the growth tolerance should be about 4%, in order to grow regions that are barely noticeable prior to enhancement (and are subsequently enhanced to a contrast above the Weber ratio). A lower bound on τ may be established empirically, or, depending upon the class of images being enhanced, through an analysis of the noise present in the images.
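As a numeric sanity check of Equation 4.29 (an illustrative sketch, not part of the original text), the contrast of a region grown at the edge of the tolerance band can be computed directly from the definition of contrast and compared with τ/(2 − τ):

```python
def c_min(tau):
    """Minimum positive contrast of a region grown with tolerance tau."""
    f = 1.0                   # any foreground gray level; result is independent of f
    b = (1.0 - tau) * f       # background at the edge of the tolerance band
    return (f - b) / (f + b)  # contrast as in Equation 2.7

# A growth tolerance of 4% yields a minimum contrast of about 2%,
# matching Weber's ratio for a just-noticeable feature.
print(round(c_min(0.04), 4))  # 0.0204
```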
Contrast enhancement: Equation 2.7 defines a region's contrast as a function of the mean gray levels of the foreground f and background b. The contrast of a region may be increased by changing f or b. Rearranging Equation 2.7, and replacing C with an increased contrast Ce, gives
\[ f_e = b \, \frac{1 + C_e}{1 - C_e}, \tag{4.30} \]
where fe is the new foreground value. Only the seed pixel and the redundant seed pixels in the foreground are modified to the value fe. The remaining pixels in the foreground obtain new values when they, in turn, act as seed pixels and are used to grow different regions. (If all the pixels in the foreground were replaced by fe, the output image would depend on the order in which regions are grown; furthermore, the gray-level variations and details within each region would be lost, and the resulting image would be a collection of uniform regions.) The new contrast Ce for the region may be calculated using an analytic function of C [242, 243, 244, 247], or an empirically determined relationship between Ce and C. Morrow et al. [123] proposed an empirical relationship between Ce and C as shown in Figure 4.39, which was designed to boost the perceptibility of regions with low-to-moderate contrast (in the range 0.02–0.5), while not affecting high-contrast regions.
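Equation 4.30 can be checked in a few lines (illustrative only): modifying the foreground to fe while leaving the background b unchanged reproduces the specified contrast under the ratio definition of contrast, C = (f − b)/(f + b), implied by Equation 4.29.

```python
def enhance_foreground(b, c_e):
    """New seed gray level for a desired region contrast c_e (Equation 4.30)."""
    return b * (1.0 + c_e) / (1.0 - c_e)

b = 0.40                          # mean background gray level (example value)
f_e = enhance_foreground(b, 0.2)  # request an enhanced contrast of 0.2
# The enhanced foreground/background pair has exactly the specified contrast.
assert abs((f_e - b) / (f_e + b) - 0.2) < 1e-12
```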
Example: Contrast enhancement of a cluster of calcifications. Figure 4.40 (a) shows a part of a mammogram with a cluster of calcifications. Some of the calcifications are linearly distributed, suggesting that they are intraductal. Cancer was suspected because of the irregular shape and size of the individual constituents of the calcification cluster, although hyperdense tissue could not be clearly seen in this area of the image. A biopsy was subsequently performed on the patient, which confirmed the presence of an invasive intraductal carcinoma.
Figure 4.40 (b) shows the same part of the image as in (a), after adaptive-neighborhood contrast enhancement was applied to the entire mammogram. The curve shown in Figure 4.39 was used as the contrast transformation curve, the growth tolerance was 3%, and a background width of three pixels was used. Increased contrast is apparent in the enhanced image, and subtle details are visible at higher contrast. Observe the presence of sharper edges between features; the contrast of the calcifications has been greatly increased in the processed image. The closed-loop feature immediately below the cluster of calcifications is possibly the cross-sectional projection of a mammary duct. If this interpretation is correct, the distorted geometry (different from the normally circular cross-section) could be indicative of intraductal malignancy. This feature is not readily apparent in the original image.
[Plot: an increasing curve of contrast specified for the output image (0 to 0.5) versus contrast of region in input image (0 to 0.6).]
FIGURE 4.39
An empirical relationship between the contrast C of an adaptive neighborhood and the increased contrast Ce for enhancement of mammograms [123]. Ce = C for C ≥ 0.5.
In order to compare the results of the adaptive-neighborhood contrast enhancement method with those of other techniques, a simple nonlinear rescaling (or gamma-correction) procedure was applied, with the output being defined as g(m, n) = f^{1.5}(m, n), without normalization of the gray scale. The result was linearly scaled to the display range of [0, 255], and is shown in Figure 4.40 (c). Contrast in the area of the calcification cluster was increased, at the cost of decreased contrast in the darker areas of the image. Although the enhancement is not as good as with adaptive-neighborhood contrast enhancement, the advantage of this method is its simplicity.
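The rescaling procedure described above can be sketched as follows; the function name and the min–max scaling to [0, 255] are assumptions consistent with the description, not the original implementation.

```python
import numpy as np

def gamma_rescale(image, gamma=1.5):
    """Apply g(m, n) = f**gamma (m, n), then scale linearly to [0, 255]."""
    g = np.asarray(image, dtype=float) ** gamma
    g = 255.0 * (g - g.min()) / (g.max() - g.min())  # linear scaling to display range
    return np.round(g).astype(np.uint8)
```

Because the power law is applied before scaling, dark gray levels are compressed relative to bright ones, which matches the observed loss of contrast in the darker areas of the image.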
The 3 × 3 unsharp masking filter was applied to the complete mammogram from which the image in Figure 4.40 (a) was obtained. The corresponding portion of the resulting image is shown in Figure 4.40 (d). The contrast and sharpness of the calcification cluster was increased, although not to the same degree as in the image generated using adaptive-neighborhood contrast enhancement. The overall appearance of the image was altered significantly from that of the original image.
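A common form of the 3 × 3 unsharp-masking filter adds back the difference between each pixel and its local mean; the exact filter coefficients used in the study are not stated here, so the following is only an illustrative sketch.

```python
import numpy as np

def unsharp_mask_3x3(image, amount=1.0):
    """Sharpen by adding the difference between a pixel and its 3x3 mean."""
    f = np.asarray(image, dtype=float)
    padded = np.pad(f, 1, mode='edge')
    # 3x3 local mean computed from the nine shifted copies of the image.
    local_mean = sum(padded[i:i + f.shape[0], j:j + f.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    return f + amount * (f - local_mean)
```

Uniform areas are unchanged (the pixel equals its local mean), while small bright details such as calcifications are boosted, consistent with the behavior described above.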
Global histogram equalization of the full mammogram led to complete washout of the region with the calcifications. The result, shown in Figure 4.41, indicates the unsuitability of global techniques for the enhancement of mammograms.
The enhancement shown in the above case has limited practical value, because the characteristics of the calcification cluster in the original image are sufficient to lead the radiologist to recommend biopsy. However, if mammary ducts and other anatomical features become more clearly visible in the enhanced image, as suggested above, the extent and degree of disease could be judged more accurately, and the biopsy method and location determined accordingly.
Example: Contrast enhancement of dense masses. Figure 4.42 (a) shows a portion of a mammogram, in the lower-right quadrant of which a dense mass with diffuse edges and a spiculated appearance is present. The probable presence of calcifications was suggested after examination of the film through a hand lens. Figure 4.42 (b) shows the corresponding part of the mammogram after adaptive-neighborhood contrast enhancement. The internal details of the mass are more readily seen in the enhanced image; the bright, irregular details were suspected to be calcifications. Also of interest is the appearance of the dense mass to the left of the spiculated mass. The mass has smooth margins, and a generally benign appearance. After enhancement, bright, irregularly shaped features are apparent in this mass, and may possibly be calcifications associated with malignancy as well.
Example: Contrast enhancement of a benign mass. Figure 4.43 (a) shows a part of a mammogram with a histologically verified benign cyst. The brighter regions at the center of the cyst do not demonstrate any irregular outline; they were interpreted to be the result of superimposition of crossing, linear, supporting tissues. The corresponding portion from the enhanced image is shown in Figure 4.43 (b). Few changes are apparent as compared with the original image, although contrast enhancement was perceived over the entire image. Enhancement did not affect the appearance or the assessment of the benign cyst.

4.10 Objective Assessment of Contrast Enhancement
The improvement in images after enhancement is often difficult to measure or assess. A processed image can be said to be an enhanced version of the original image if it allows the observer to perceive better the desired information in the image. With mammograms, the improvement in perception is difficult to quantify. The use of statistical measures of gray-level distribution as measures of local contrast enhancement (for example, variance or entropy) is not particularly meaningful for mammographic images.
Morrow et al. [123] proposed a new approach to assess image enhancement through the contrast histogram. The contrast histogram represents the distribution of contrast of all possible regions present in the image. If we measure the contrast of all regions in the image (as obtained by the region-growing procedure described in Section 4.9.1) prior to enhancement and subsequent
FIGURE 4.40
(a) Part of a mammogram with a cluster of calcifications, true size 43 × 43 mm. Results of enhancement by (b) adaptive-neighborhood contrast enhancement; (c) gamma correction; and (d) unsharp masking. See also Figures 4.37 and 4.41. Reproduced with permission from W.M. Morrow, R.B. Paranjape, R.M. Rangayyan, and J.E.L. Desautels, "Region-based contrast enhancement of mammograms," IEEE Transactions on Medical Imaging, 11(3):392–406, 1992. © IEEE.
FIGURE 4.41
Result of enhancement of the image in Figure 4.40 (a) by global histogram equalization applied to the entire image. See also Figures 4.37 and 4.40. Reproduced with permission from W.M. Morrow, R.B. Paranjape, R.M. Rangayyan, and J.E.L. Desautels, "Region-based contrast enhancement of mammograms," IEEE Transactions on Medical Imaging, 11(3):392–406, 1992. © IEEE.

FIGURE 4.42
(a) Part of a mammogram with dense masses, true size 43 × 43 mm. (b) Result of enhancement by adaptive-neighborhood contrast enhancement. Reproduced with permission from W.M. Morrow, R.B. Paranjape, R.M. Rangayyan, and J.E.L. Desautels, "Region-based contrast enhancement of mammograms," IEEE Transactions on Medical Imaging, 11(3):392–406, 1992. © IEEE.
FIGURE 4.43
(a) Part of a mammogram with a benign cyst, true size 43 × 43 mm. (b) Result of enhancement by adaptive-neighborhood contrast enhancement. Reproduced with permission from W.M. Morrow, R.B. Paranjape, R.M. Rangayyan, and J.E.L. Desautels, "Region-based contrast enhancement of mammograms," IEEE Transactions on Medical Imaging, 11(3):392–406, 1992. © IEEE.

to enhancement, the contrast histogram of the enhanced image should contain more counts of regions at higher contrast levels than the contrast histogram of the original image. Various enhancement methods can be quantitatively compared by measuring the properties of their respective contrast histograms. The spread of a contrast histogram may be quantified by taking the second moment about the zero-contrast level. For a distribution of contrast values ci, quantized so that there are N bins over the range [−1, 1], the second moment M2 is
\[ M_2 = \sum_{i=1}^{N} c_i^2 \, p(c_i), \tag{4.31} \]
where p(ci) is the normalized number of occurrences of seed pixels (including redundant seed pixels) that lead to the growth of a region with contrast ci. A low-contrast image, that is, an image with a narrow contrast histogram, will have a low value for M2; an image with high contrast will have a broader contrast histogram, and hence a greater value of M2.
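The second moment of Equation 4.31 can be computed from a list of per-seed region contrasts as follows; this is a sketch, and the bin count is an arbitrary choice rather than a value from the original study.

```python
import numpy as np

def contrast_second_moment(contrasts, n_bins=401):
    """Second moment of the contrast histogram about zero contrast (Eq. 4.31)."""
    counts, edges = np.histogram(contrasts, bins=n_bins, range=(-1.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])  # bin centers c_i
    p = counts / counts.sum()                 # normalized occurrences p(c_i)
    return float(np.sum(centers ** 2 * p))

# A wider contrast histogram (a higher-contrast image) gives a larger M2.
assert contrast_second_moment([0.3, -0.3]) > contrast_second_moment([0.05, -0.05])
```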
For the purpose described above, image contrast needs to be recomputed after the entire image has been enhanced, because the relative contrast between adjacent regions is dependent upon the changes made to each of the regions. In order to measure the contrast in an image after enhancement, region growing (using the same parameters as in the enhancement procedure) is performed on the output enhanced image, and a contrast histogram is generated.
In general, the final contrast values in the output image of adaptive-neighborhood contrast enhancement will not match the contrast values specified by the contrast transformation in Equation 4.30. This is because Equation 4.30 is applied pixel-by-pixel to the input image, and the adaptive neighborhood for each pixel will vary. Only if all the pixels in an object have exactly the same gray-level value will they all have exactly the same adaptive neighborhood and be transformed in exactly the same way. Thus, the contrast enhancement curve is useful for identifying the ranges in which contrast enhancement is desired, but cannot specify the final contrast of the regions. The contrast of each region grown in the image is dependent on the value specified by the initial region contrast and the transformation curve, as well as the transformation applied to adjacent regions.
Figure 4.44 shows the contrast histograms of the complete mammograms corresponding to the images in Figure 4.40. The contrast distribution is plotted on a logarithmic scale in order to emphasize the small numbers of occurrence of features at high contrast values. The wider distribution and greater occurrence of regions at high contrast values in the histogram of the adaptive-neighborhood enhanced image show that it has higher contrast. The histograms of the results of gamma correction and unsharp masking also show some increase in the counts for larger contrast values than that of the original, but not to the same extent as the result of adaptive-neighborhood contrast enhancement. The values of M2 for the four histograms in Figure 4.44 are 3.71 × 10^−4, 6.17 × 10^−4, 3.2 × 10^−4, and 4.4 × 10^−4. The contrast histogram and its statistics provide objective means for the analysis of image enhancement.

4.11 Application: Contrast Enhancement of Mammograms
The accurate diagnosis of breast cancer depends upon the quality of the mammograms obtained; in particular, the accuracy of diagnosis depends upon the visibility of small, low-contrast objects within the breast image. Unfortunately, the contrast between malignant tissue and normal tissue is often so low that the detection of malignant tissue becomes difficult. Hence, the fundamental enhancement needed in mammography is an increase in contrast, especially for dense breasts.
Dronkers and Zwaag [248] suggested the use of reversal film rather than negative film for the implementation of a form of photographic contrast enhancement for mammograms. They found that the image quality produced
[Figure 4.44, panels (a)–(d): histograms of log10 (number of regions) versus adaptive-neighborhood contrast value, plotted over the range −0.2 to 0.2.]
FIGURE 4.44
Contrast histograms of the full mammograms corresponding to the images in Figure 4.40. (a) Original, M2 = 3.71 × 10^−4; (b) adaptive-neighborhood contrast enhancement, M2 = 6.17 × 10^−4; (c) gamma correction, M2 = 3.2 × 10^−4; and (d) unsharp masking, M2 = 4.4 × 10^−4. Reproduced with permission from W.M. Morrow, R.B. Paranjape, R.M. Rangayyan, and J.E.L. Desautels, "Region-based contrast enhancement of mammograms," IEEE Transactions on Medical Imaging, 11(3):392–406, 1992. © IEEE.
was equal to that of conventional techniques without the need for special mammographic equipment. A photographic unsharp-masking technique for mammographic images was proposed by McSweeney et al. [249]. This procedure includes two steps: first, a blurred image is produced by copying the original mammogram through a sheet of glass or clear plastic that diffuses the light; then, by using subtraction print film, the final image is formed by subtracting the blurred image from the original mammogram. Although the photographic technique improved the visualization of mammograms, it was not widely adopted, possibly due to the variability in the image reproduction procedure.
Askins et al. [250] investigated autoradiographic enhancement of mammograms by using thiourea labeled with ³⁵S. Mammograms underexposed as much as tenfold could be autoradiographically intensified so that the enhanced image was comparable to a normally exposed film. The limitations to routine use of autoradiographic techniques include cost, processing time, and the disposal of radioactive solutions.
Digital image enhancement techniques have been used in radiography for more than three decades. (See Bankman [251] for a section including discussions on several enhancement techniques.) Ram [252] stated that images considered unsatisfactory for medical analysis may be rendered usable through various enhancement techniques, and further indicated that the application of such techniques in a clinical situation may reduce the radiation dose by about 50%. Rogowska et al. [253] applied digital unsharp masking and local contrast stretching to chest radiographs, and reported that the quality of images was improved. Chan et al. [254] investigated unsharp-mask filtering for digital mammography: according to their receiver operating characteristics (ROC) studies, unsharp masking could improve the detectability of calcifications on digital mammograms. However, this method also increased noise and caused some artifacts.
Algorithms based on adaptive-neighborhood image processing to enhance mammographic contrast were first reported on by Gordon and Rangayyan [242]. Rangayyan and Nguyen [243] defined a tolerance-based method for growing foreground regions that could have arbitrary shapes rather than square shapes. Morrow et al. [215, 123] further developed this approach with a new definition of background regions. Dhawan et al. [247] investigated the benefits of various contrast transfer functions, including √C, ln(1 + 3C), 1 − e^{−3C}, and tanh(3C), where C is the original contrast, but used square adaptive neighborhoods. They found that while a suitable contrast function was important to bring out the features of interest in mammograms, it was difficult to select such a function. Later, Dhawan and Le Royer [255] proposed a tunable contrast enhancement function for improved enhancement of mammographic features.
Emphasis has recently been directed toward image enhancement based upon the characteristics of the human visual system [256], leading to innovative methods using nonlinear filters, scale-space filters, multiresolution filters, and wavelet transforms. Attention has been paid to designing algorithms to enhance the contrast and visibility of diagnostic features while maintaining control on noise enhancement. Laine et al. [257] presented a method for nonlinear contrast enhancement based on multiresolution representation and the use of dyadic wavelets. A software package named MUSICA [258] (MUlti-Scale Image Contrast Amplification) has been produced by Agfa-Gevaert. Belikova et al. [259] discussed various optimal filters for the enhancement of mammograms. Qu et al. [260] used wavelet techniques for enhancement and evaluated the results using breast phantom images. Tahoces et al. [261] presented a multistage spatial filtering procedure for nonlinear contrast enhancement of chest and breast images. Qian et al. [262] reported on tree-structured nonlinear filters based on median filters and an edge detector. Chen et al. [263] proposed a regional contrast enhancement technique based on unsharp masking and adaptive density shifting.
The various mammogram enhancement algorithms that have been reported in the literature may be sorted into three categories: algorithms based on conventional image processing methods [253, 254, 259, 261, 264, 265]; adaptive algorithms based on the principles of human visual perception [123, 242, 247, 255, 256, 263, 266]; and multiresolution enhancement algorithms [257, 260, 262, 267, 268, 269, 270]. In order to evaluate the diagnostic utility of an enhancement algorithm, an ROC study has to be conducted; however, few of the above-mentioned methods [254, 264, 266, 267, 271] have been tested with ROC procedures; see Sections 12.8.1 and 12.10 for details on ROC analysis.

4.11.1 Clinical evaluation of contrast enhancement
In order to examine the differences in radiological diagnoses that could result from adaptive-neighborhood enhancement of mammograms, eight test cases from the teaching library of the Foothills Hospital (Calgary, Alberta, Canada) were studied in the work of Morrow et al. [123]. For each of the cases, the pathology was known due to biopsy or other follow-up procedures. For each case, a single mammographic film that presented the abnormality was digitized using an Eikonix 1412 scanner (Eikonix Inc., Bedford, MA) to 4 096 by about 2 048 pixels with 12-bit gray-scale resolution. (The size of the digitized image differed from film to film depending upon the size of the actual image in the mammogram.) The effective pixel size was about 0.054 mm × 0.054 mm. Films were illuminated by a Plannar 1417 light box (Gordon Instruments, Orchard Park, NY). Although the light box was designed to have a uniform light intensity distribution, it was necessary to correct for nonuniformities in illumination. After correction, pixel gray levels were determined to be accurate to 10 bits, with a dynamic range of approximately 0.02–2.52 OD [174].
The images were enhanced using the adaptive-neighborhood contrast enhancement method. For all images, the tolerance τ for region growing was set at 0.05, the width of the background was set to three pixels, and the enhancement curve used was that presented in Figure 4.39. The original and processed images were down-sampled by a factor of two for processing and display for interpretation on a MegaScan 2111 monitor (Advanced Video Products Inc., Littleton, MA). Although the memory buffer of the MegaScan system was of size 4 096 × 4 096 × 12 bits, the display buffer was limited to 2 560 × 2 048 × 8 bits, with panning and zooming facilities. The monitor displayed images at 72 noninterlaced frames per second.
In each case, the original, digitized mammogram was first presented on the MegaScan 2111 monitor. The image occupied about 20 × 15 cm on the screen. An experienced radiologist, while viewing the digitized original, described the architectural abnormalities that were observed. Subsequently, the enhanced image was added to the display. While observing both the enhanced mammogram and the original mammogram together, the radiologist described any new details or features that became apparent.
Case (1) was that of a 62-year-old patient with a history of diffuse nodularity in both breasts. The MLO view of the left breast was digitized for assessment. The unenhanced mammogram revealed two separate nodular lesions: one with well-defined boundaries, with some indication of lobular calcium; the other smaller, with poorly defined borders, some spiculation, but no microcalcifications. The unenhanced mammogram suggested that the smaller lesion was most likely associated with carcinoma; however, there was some doubt about the origins of the larger lesion. An examination of the enhanced mammogram revealed definite calcium deposits in the larger lesion and some indication of microcalcifications in the smaller lesion. The enhanced image suggested carcinoma as the origin of both lesions more strongly than the unenhanced mammogram. The biopsy report for both areas indicated intraductal infiltrating carcinoma, confirming the diagnosis from the enhanced mammogram.
Case (2) was that of a 64-year-old patient. The digitized original mammogram was the CC view of the left breast. The unenhanced mammogram contained two lesions. The lesion in the lower-outer part of the breast had irregular edges and coarse calcifications, whereas the other lesion appeared to be a cyst. Examination of the unenhanced mammogram suggested that both lesions were benign. Examination of the enhanced mammogram revealed no additional details that would suggest a change in the original diagnosis. The appearance of the lesions was not much different from that seen in the unenhanced mammogram; however, the details in the internal architecture of the breast appeared clearer, adding further weight to the diagnosis of benign lesions. Excision biopsies carried out at both sites confirmed this diagnosis.
Case (3) was that of a 44-year-old patient, for whom the MLO view of the left breast was digitized. The original digitized mammogram revealed multiple benign cysts as well as a spiculated mass in the upper-outer quadrant of the breast. There was some evidence of calcium, but it was difficult to confirm the same by visual inspection. A dense nodule was present adjacent to the spiculated mass. Examination of the enhanced mammogram revealed that the spiculated mass did contain microcalcifications. The dense nodule appeared to be connected to the spiculated mass, suggesting a further advanced carcinoma than that suspected from the unenhanced mammogram. Biopsy reports were available only for the spiculated region, and indicated lobular carcinoma. No further information was available to verify the modified diagnosis from the enhanced mammogram.
Case (4) was that of a 40-year-old patient, whose mammograms indicated dense breasts. The image of the right breast indicated an area of uniform density. The CC view of the right breast was digitized and enhanced. The digitized original mammogram indicated a cluster of microcalcifications, all of approximately uniform density, centrally located above the nipple. The enhanced mammogram indicated a similar finding with a larger number of microcalcifications visible, and some irregularity in the density of the calcifications. Both the original and the enhanced mammograms suggested a similar diagnosis of intraductal carcinoma. Biopsy of the suspected area confirmed this diagnosis.
Case (5) was that of a 64-year-old patient with a history of a benign mass in the right breast. A digitized mammogram of the CC view of the right breast was examined. The unenhanced mammogram clearly showed numerous microcalcifications that were roughly linear in distribution, with some variation in density. The original mammogram clearly suggested intraductal carcinoma. The enhanced mammogram showed a greater number of calcifications, indicating a lesion of larger extent. The variation in the density of the calcifications was more evident. Biopsy indicated an infiltrating ductal carcinoma.
Case (6) was that of a 59-year-old patient whose right CC view was digitized. The original mammogram indicated a poorly defined mass with some spiculations. The lesion was irregular in shape, and contained some calcium. The unenhanced mammogram suggested intraductal carcinoma. The enhanced mammogram provided stronger evidence of carcinoma with poor margins of the lesion, a greater number of microcalcifications, and inhomogeneity in the density of the calcifications. Biopsy confirmed the presence of the carcinoma.
Case (7) involved the same patient as in Case (6); however, the mammogram was taken one year after that described in Case (6). The digitized mammogram was the CC view of the right breast. The unenhanced view showed significant architectural distortion due to segmental mastectomy. The unenhanced mammogram showed an area extending past the scarred region of fairly uniform density with irregular boundaries. The unenhanced mammogram along with the patient's history suggested the possibility of cancer, and biopsy was recommended. The enhanced mammogram suggested a similar finding, with added evidence of some small microcalcifications in the uniform area. Biopsy of the region showed that the mass was, in fact, a benign hematoma.
Case (8) was that of an 86-year-old patient; the MLO view of the left breast was digitized. In the unenhanced mammogram, a dense region was observed with some spiculations. The mammogram suggested the possibility of carcinoma and biopsy was recommended. The enhanced mammogram showed the same detail as the unenhanced mammogram, with the additional finding of some microcalcifications; this added to the suspicion of cancer. The biopsy of the region indicated intraductal invasive carcinoma with lymph-node metastasis present.
In each of the eight cases described above, the overall contrast in the enhanced mammogram was significantly improved. This allowed the radiologist to comment that "much better overall anatomical detail" was apparent in the enhanced mammograms, and that "overall detail (internal architecture) is improved" in the enhanced mammograms. In all cases, the radiological diagnosis was confirmed by biopsy. In seven of the eight cases, the enhanced mammogram added further weight to the diagnosis made from the original mammogram, and the diagnosis was confirmed by biopsy. In one case, the enhanced mammogram as well as the unenhanced mammogram suggested the possibility of carcinoma; however, the biopsy report indicated a benign condition. This case was, however, complicated by the fact that the patient's history influenced the radiologist significantly. While it is not possible to make a quantitative assessment of the differences in diagnoses from the qualitative comparison as above, it appeared that a clearer indication of the patient's condition was obtained by examination of the enhanced mammogram.
The adaptive-neighborhood contrast enhancement method was used in a preference study comparing the performance of enhancement algorithms by Sivaramakrishna et al. [125]. The other methods used in the study were adaptive unsharp masking, contrast-limited adaptive histogram equalization, and wavelet-based enhancement. The methods were applied to mammograms of 40 cases, including 10 each of benign and malignant masses, and 10 each of benign and malignant microcalcifications. The four enhanced images and the original image of each case were displayed randomly across three high-resolution monitors. Four expert mammographers ranked the images from 1 (best) to 5 (worst). In a majority of the cases with microcalcifications, the adaptive-neighborhood contrast enhancement algorithm provided the most-preferred images. In the set of images with masses, the unenhanced images were preferred in most of the cases.
See Sections 12.8.1, 12.8.2, and 12.10 for discussions on statistical analysis
of the clinical outcome with enhanced mammograms.
4.12 Remarks
Quite often, an image acquired in a real-life application does not have the
desired level of quality in terms of contrast, sharpness of detail, or the visibility
of the features of interest. We explored several techniques in this chapter that could assist in improving the quality of a given image. The class of filters based upon mathematical morphology [8, 192, 220, 221, 222] has not been dealt with in this chapter.
An understanding of the exact phenomenon that caused the poor quality of the image at the outset could assist in the design of an appropriate technique to address the problem. However, in the absence of such information, one could investigate the suitability of existing and established models of degradation, as well as the associated enhancement techniques, to improve the quality of the image on hand. It may be desirable to obtain several enhanced versions using a variety of approaches; the most suitable image may then be selected from the collection of the processed images for further analysis. In situations as above, there is no single or optimal solution to the problem. Several enhanced versions of the given image may also be analyzed simultaneously; however, this approach could demand excessive time and resources, and may not be feasible in a large-scale screening application.
Given the subjective nature of image quality, and in spite of the several methods we studied in Chapter 2 to characterize image quality and information content, the issue of image enhancement is nonspecific and elusive. Regardless, if a poor-quality image can be enhanced to the satisfaction of the user, and if the enhanced image leads to improved analysis (and more accurate or confident diagnosis in the biomedical context), an important achievement could result.
The topic of image restoration, that is, image quality improvement when the exact cause of degradation is known and can be represented mathematically, is investigated in Chapter 10.
4.13 Study Questions and Problems
(Note: Some of the questions may require background preparation with other sources on the basics of signals and systems as well as digital signal and image processing, such as Lathi [1], Oppenheim et al. [2], Oppenheim and Schafer [7], Gonzalez and Woods [8], Pratt [10], Jain [12], Hall [9], and Rosenfeld and Kak [11].)
Selected data files related to some of the problems and exercises are available at the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. A poorly exposed image was found to have gray levels limited to the range 25–90. Derive a linear transform to stretch this range to the display range of 0–255. Give the display values for the original gray levels of 45 and 60.
Image Enhancement 359
2. Explain the differences between the Laplacian and subtracting Laplacian operators in the spatial and frequency domains.
3. Compute by hand the result of linear convolution of the following two images:

   [ 3 5 5 5 ]
   [ 0 0 1 3 ]
   [ 4 4 4 3 ]     (4.32)
   [ 2 2 2 2 ]
   [ 2 2 2 2 ]

and

   [ 3 5 1 ]
   [ 4 2 3 ]     (4.33)
   [ 1 3 2 ]
4. Explain the differences between the 3 × 3 mean and median filters. Would you be able to compare the filters in the Fourier domain? Why (not)?
5. Derive the frequency response of the 3 × 3 unsharp masking filter and explain its characteristics.
6. An image has a uniform PDF (normalized gray-level histogram) over the range [0, 255]. A novice researcher derives the transform to perform histogram equalization. Derive an analytical representation of the transform. Explain its effects on the image in terms of the modification of gray levels and the histogram.
7. An image has a uniform PDF (normalized gray-level histogram) over the range [25, 90], with the probability being zero outside this interval within the available range of [0, 255]. Derive an analytical representation of the transform to perform histogram equalization. Explain its effects on the image in terms of the modification of gray levels and the histogram.
8. Give an algorithmic representation of the method to linearly map a selected range of gray-level values [x1, x2] to the range [y1, y2] in an image of size M × N. Values below x1 are to be mapped to y1, and values above x2 mapped to y2. Use pseudocode format and show all the necessary programming steps and details.
9. An 8 × 8 image with an available gray-level range of 0–7 at 3 bits/pixel has the following pixel values:

   [ 3 5 5 5 4 4 4 3 ]
   [ 3 3 4 5 5 3 3 2 ]
   [ 0 0 1 3 4 4 4 4 ]
   [ 4 5 5 5 3 2 2 4 ]     (4.34)
   [ 4 4 4 3 3 3 3 2 ]
   [ 2 2 2 2 2 1 1 1 ]
   [ 2 2 2 2 1 1 1 1 ]
   [ 2 2 2 2 1 1 1 1 ]
Derive the transformation and look-up table for enhancement of the image by histogram equalization. Clearly show all of the steps involved, and give the pixel values in the enhanced image using the available gray-level range of 3 bits/pixel. Draw the histograms of the original image and the enhanced image. Explain the differences between them as caused by histogram equalization.
10. Write the expression for the convolution of an N × N digital image with an M × M digital image (or filter function), with M < N. Using pseudocode format, show all of the necessary programming steps and details related to the implementation of convolution as above. Explain how you handle the size and data at the edges of the resulting image.
11. Prepare a 5 × 5 image with zero pixel values. Add a square of size 3 × 3 pixels with the value 100 at the center of the image. Apply (a) the subtracting Laplacian operator, and (b) the Laplacian operator to the image. Examine the pixel values inside and around the edges of the square in the resulting images. Give reasons for the effects you find.
12. Apply (a) the subtracting Laplacian operator, and (b) the Laplacian operator to the image in Equation 4.34. Give reasons for the effects you find.
13. Derive the MTF of the 3 × 3 unsharp masking operator. Explain its characteristics.
14. An image is processed by applying the subtracting Laplacian mask and then by applying the 3 × 3 mean filter mask. What is the impulse response of the complete system? What is the MTF of the complete system? Explain the effect of each operator.
15. Derive the MTF of the 3 × 3 subtracting Laplacian operator and explain its characteristics.
16. What causes the ringing artifact in frequency-domain filtering? How do you prevent the artifact?
17. Discuss the differences between highpass filtering and high-frequency emphasis filtering in the frequency domain in terms of their (a) transfer functions, and (b) effects on image features.
18. List the steps of computation required in order to perform lowpass filtering of an image in the frequency domain by using the Fourier transform.
4.14 Laboratory Exercises and Projects
1. Select two underexposed images, or images with bright and dark regions such that the details in some parts are not clearly visible, from your collection. Apply histogram equalization, gamma adjustment, and linear gray-level mapping transforms to the images. Compare the results in terms of the enhancement of the visibility of details, saturation or loss of details at the high or low ends of the gray scale, and overall visual quality. Plot the histograms of the resulting images and compare them with the histograms of the original images. Comment upon the differences.
2. Select two images from your collection, with one containing relatively sharp and well-defined edges, and the other containing smooth features. Apply the unsharp masking filter, the Laplacian operator, and the subtracting Laplacian filter to the images. Study the results in terms of edge enhancement. Create noisy versions of the images by adding Gaussian noise. Apply the enhancement methods as above to the noisy images. Study the results in terms of edge enhancement and the effect of noise.
3. Select two images from your collection, with one containing relatively sharp and well-defined edges, and the other containing smooth features. Apply the ideal highpass filter, the Butterworth highpass filter, and the Butterworth high-emphasis filter to the images. Use at least two different cutoff frequencies. Study the results in terms of edge enhancement or edge extraction. Create noisy versions of the images by adding Gaussian noise. Apply the filters as above to the noisy images. Study the results in terms of edge enhancement or extraction and the effect of noise.
5
Detection of Regions of Interest
Although a physician or a radiologist, of necessity, will carefully examine an image on hand in its entirety, more often than not, diagnostic features of
interest manifest themselves in local regions. It is uncommon that a condition
or disease will alter an image over its entire spatial extent. In a screening
situation, the radiologist scans the entire image and searches for features
that could be associated with disease. In a diagnostic situation, the medical
expert concentrates on the region of suspected abnormality, and examines its
characteristics to decide if the region exhibits signs related to a particular
disease.
In the CAD environment, one of the roles of image processing would be to detect the region of interest (ROI) for a given, specific, screening or diagnostic application. Once the ROIs have been detected, the subsequent tasks would relate to the characterization of the regions and their classification into one of several categories. A few examples of ROIs in different biomedical imaging and image analysis applications are listed below.
- Cells in cervical-smear test images (Papanicolaou or Pap-smear test) [272, 273].
- Calcifications in mammograms [274].
- Tumors and masses in mammograms [275, 276, 277].
- The pectoral muscle in mammograms [278].
- The breast outline or skin-air boundary in mammograms [279].
- The fibroglandular disc in mammograms [280].
- The air-way tree in lungs.
- The arterial tree in lungs.
- The arterial tree of the left ventricle, and constricted parts of the same due to plaque development.
Segmentation is the process that divides an image into its constituent parts,
objects, or ROIs. Segmentation is an essential step before the description,
recognition, or classication of an image or its constituents. Two major ap-
proaches to image segmentation are based on the detection of the following
characteristics:
- Discontinuity: abrupt changes in gray level (corresponding to edges) are detected.
- Similarity: homogeneous parts are detected, based on gray-level thresholding, region growing, and region splitting/merging.
Depending upon the nature of the images and the ROIs, we may attempt to detect the edges of the ROIs (if distinct edges are present), or we may attempt to grow regions to approximate the ROIs. It should be borne in mind that, in some cases, an ROI may be composed of several disjoint component areas (for example, a tumor that has metastasized into neighboring regions, and calcifications in a cluster). Edges that are detected may include disconnected parts that may have to be matched and joined. We shall explore several techniques of this nature in the present chapter.
Notwithstanding the stated interest in local regions as above, applications do exist where entire images need to be analyzed for global changes in patterns: for example, changes in the orientational structure of collagen fibers in ligaments (see Figure 1.8), and bilateral asymmetry in mammograms (see Section 8.9). Furthermore, in the case of clustered calcifications in mammograms, cells in cervical smears, and other examples of images with multicomponent ROIs, analysis may commence with the detection of single units of the pattern of interest, but several such units present in a given image may need to be analyzed, separately and together, in order to reach a decision regarding the case.
5.1 Thresholding and Binarization
If the gray levels of the objects of interest in an image are known from prior knowledge, or can be determined from the histogram of the given image, the image may be thresholded to detect the features of interest and reject other details. For example, if it is known that the objects of interest in the image have gray-level values greater than L1, we could create a binary image for display as

   g(m, n) = 255  if f(m, n) > L1,
             0    if f(m, n) ≤ L1,        (5.1)

where f(m, n) is the original image, g(m, n) is the thresholded image to be displayed, and the display range is [0, 255]. See also Section 4.4.1.
Methods for the derivation of optimal thresholds are described in Sections
5.4.1, 8.3.2, and 8.7.2.
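The binarization of Equation 5.1 is direct to implement. The following sketch assumes an 8-bit image held in a NumPy array; the function name, the threshold, and the small test image are arbitrary choices for illustration.

```python
import numpy as np

def threshold_binarize(f, L1):
    """Binarize an image as in Equation 5.1: pixels above L1 are
    displayed as 255, and all other pixels as 0."""
    return np.where(f > L1, 255, 0).astype(np.uint8)

# Illustration: a bright 3x3 square on a dark background.
f = np.zeros((5, 5), dtype=np.uint8)
f[1:4, 1:4] = 200
g = threshold_binarize(f, L1=100)
```

Thresholding "in the opposite sense", as in the collagen-fiber example that follows, amounts to replacing the comparison with `f < L1`.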
Example: Figure 5.1 (a) shows a TEM image of a ligament sample demonstrating collagen fibers in cross-section; see Section 1.4. Inspection of the histogram of the image (shown in Figure 2.12) shows that the sections of the collagen fibers in the image have gray-level values less than about 180; values greater than this level represent the brighter background in the image. The histogram also indicates the fact that the gray-level ranges of the collagen-fiber regions and the background overlap significantly. Figure 5.1 (b) shows a thresholded version of the image in (a), with all pixels less than 180 appearing in black, and all pixels above this level appearing in white. This operation is the same as the thresholding operation given by Equation 5.1, but in the opposite sense. Most of the collagen fiber sections have been detected by the thresholding operation. However, some of the segmented regions are incomplete or contain holes, whereas some parts that appear to be separate and distinct in the original image have been merged in the result. An optimal threshold derived using the methods described in Sections 5.4.1, 8.3.2, and 8.7.2 could lead to better results.
5.2 Detection of Isolated Points and Lines
Isolated points may exist in images due to noise or due to the presence of small particles in the image. The detection of isolated points is useful in noise removal and the analysis of particles. The following convolution mask may be used to detect isolated points [8]:

   [ −1 −1 −1 ]
   [ −1  8 −1 ]     (5.2)
   [ −1 −1 −1 ]

The operation computes the difference between the current pixel at the center of the mask and the average of its 8-connected neighbors. (The mask could also be seen as a generalized version of the Laplacian mask in Equation 2.83.) The result of the mask operation could be thresholded to detect isolated pixels, where the difference computed would be large.
Straight lines or line segments oriented at 0°, 45°, 90°, and 135° may be detected by using the following 3 × 3 convolution masks [8]:

   0°:                 45°:
   [ −1 −1 −1 ]        [ −1 −1  2 ]
   [  2  2  2 ]        [ −1  2 −1 ]
   [ −1 −1 −1 ]        [  2 −1 −1 ]

   90°:                135°:
   [ −1  2 −1 ]        [  2 −1 −1 ]
   [ −1  2 −1 ]        [ −1  2 −1 ]
   [ −1  2 −1 ]        [ −1 −1  2 ]     (5.3)
A line may be said to exist in the direction for which the corresponding mask
provides the largest response.
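A sketch of this line-detection scheme follows, assuming NumPy; the masks are those of Equation 5.3, applied by direct correlation with zero padding (for these point-symmetric masks, correlation and convolution coincide), and the helper names are arbitrary.

```python
import numpy as np

# Line-detection masks of Equation 5.3, keyed by orientation in degrees.
MASKS = {
    0:   np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    45:  np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    90:  np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    135: np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def correlate3(f, mask):
    """Correlate an image with a 3x3 mask, using zero padding at the edges."""
    f = f.astype(float)
    p = np.pad(f, 1)
    out = np.zeros_like(f)
    for i in range(3):
        for j in range(3):
            out += mask[i, j] * p[i:i + f.shape[0], j:j + f.shape[1]]
    return out

def dominant_line_angle(f):
    """For each pixel, the orientation whose mask gives the largest response."""
    responses = np.stack([correlate3(f, m) for m in MASKS.values()])
    return np.array(list(MASKS.keys()))[np.argmax(responses, axis=0)]
```

For a one-pixel-wide vertical line, the 90° mask yields the largest response along the line (a value of 6 for a unit-amplitude line), while the other three masks respond with zero there.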
(a)

(b)

FIGURE 5.1
(a) TEM image of collagen fibers in a scar-tissue sample from a rabbit ligament at a magnification of approximately 30,000. See also Figure 1.5. Image courtesy of C.B. Frank, Department of Surgery, University of Calgary. See Figure 2.12 for the histogram of the image. (b) Image in (a) thresholded at the gray level of 180.
5.3 Edge Detection
One of the approaches to the detection of an ROI is to detect its edges. The HVS is particularly sensitive to edges and gradients, and some theories and experiments indicate that the detection of edges plays an important role in the detection of objects and the analysis of scenes [122, 281, 282].
In Section 2.11.1 on the properties of the Fourier transform, we saw that the first-order derivatives and the Laplacian relate to the edges in the image. Furthermore, we saw that these space-domain operators have equivalent formulations in the frequency domain as highpass filters with gain that is proportional to frequency in a linear or quadratic manner. The enhancement techniques described in Sections 4.6 and 4.7 further strengthen the relationship between edges, gradients, and high-frequency spectral components. We shall now explore how these approaches may be extended to detect the edges or contours of objects or regions.
(Note: Some authors consider edge extraction to be a type of image enhancement.)
5.3.1 Convolution mask operators for edge detection
An edge is characterized by a large change in the gray level from one side to the other, in a particular direction dependent upon the orientation of the edge. Gradients or derivatives measure the rate of change, and hence could serve as the basis for the development of methods for edge detection.
The first derivatives in the x and y directions, approximated by the first differences, are given by (using matrix notation)

   f′_yb(m, n) ≈ f(m, n) − f(m − 1, n),
   f′_xb(m, n) ≈ f(m, n) − f(m, n − 1),        (5.4)

where the additional subscript b indicates a backward-difference operation. Because causality is usually not a matter of concern in image processing, the differences may also be defined as

   f′_yf(m, n) ≈ f(m + 1, n) − f(m, n),
   f′_xf(m, n) ≈ f(m, n + 1) − f(m, n),        (5.5)

where the additional subscript f indicates a forward-difference operation. A limitation of the operators as above is that they are based upon the values of only two pixels; this makes the operators susceptible to noise or spurious pixel values. A simple approach to design robust operators and reduce the sensitivity to noise is to incorporate averaging over multiple measurements.
Averaging the two definitions of the derivatives in Equations 5.4 and 5.5, we get

   f′_ya(m, n) ≈ 0.5 [f(m + 1, n) − f(m − 1, n)],
   f′_xa(m, n) ≈ 0.5 [f(m, n + 1) − f(m, n − 1)],        (5.6)

where the additional subscript a indicates the inclusion of averaging.
In image processing, it is also desirable to express operators in terms of odd-sized masks that may be centered upon the pixel being processed. The Prewitt operators take these considerations into account with the following 3 × 3 masks for the horizontal and vertical derivatives Gx and Gy, respectively:

   Gx:
   [ −1 0 1 ]
   [ −1 0 1 ]        (5.7)
   [ −1 0 1 ]

   Gy:
   [ −1 −1 −1 ]
   [  0  0  0 ]        (5.8)
   [  1  1  1 ]
The Prewitt operators use three differences across pairs of pixels in three rows or columns around the pixel being processed. Due to this fact, and due to the scale factor of 0.5 in Equation 5.6, in order to derive the exact gradient, the results of the Prewitt operators should be divided by 3 × 2Δ, where Δ is the sampling interval in x and y; however, this step could be ignored if the result is scaled for display or thresholded to detect edges.
In order to accommodate the orientation of the edge, a vectorial form of the gradient could be composed as

   G_f(m, n) = G_fx(m, n) + j G_fy(m, n),
   ‖G_f(m, n)‖ = sqrt[ G_fx²(m, n) + G_fy²(m, n) ],
   ∠G_f(m, n) = tan⁻¹[ G_fy(m, n) / G_fx(m, n) ],        (5.9)

where

   G_fx(m, n) = (f * Gx)(m, n)        (5.10)

and

   G_fy(m, n) = (f * Gy)(m, n).        (5.11)

If the magnitude is to be scaled for display or thresholded for the detection of edges, the square-root operation may be dropped, or the magnitude approximated as |G_fx| + |G_fy| in order to save computation.
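As an illustrative sketch (NumPy assumed), the Prewitt gradient of Equation 5.9 may be computed as follows. Each mask is separable, so the derivative is obtained as a three-point sum along one axis followed by a central difference along the other. Note that correlation is used here; for these antisymmetric masks, convolution would differ only in the sign of the output, which does not affect the magnitude.

```python
import numpy as np

def prewitt_gradient(f):
    """Prewitt gradient magnitude and angle (Equations 5.7-5.9).

    Separable implementation: a 3-point sum along one axis followed by
    a central difference along the other. Zero padding at the edges.
    """
    p = np.pad(f.astype(float), 1)
    # Horizontal derivative Gx: sum of 3 rows, then column difference.
    col3 = p[:-2, :] + p[1:-1, :] + p[2:, :]
    gx = col3[:, 2:] - col3[:, :-2]
    # Vertical derivative Gy: sum of 3 columns, then row difference.
    row3 = p[:, :-2] + p[:, 1:-1] + p[:, 2:]
    gy = row3[2:, :] - row3[:-2, :]
    mag = np.hypot(gx, gy)                 # magnitude of Equation 5.9
    ang = np.degrees(np.arctan2(gy, gx))   # angle of Equation 5.9
    return mag, ang
```

On a vertical step edge, the vertical derivative vanishes, the angle is 0°, and the magnitude equals three times the central difference across the step.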
The Sobel operators are similar to the Prewitt operators, but include larger weights for the pixels in the row or column of the pixel being processed:

   Gx:
   [ −1 0 1 ]
   [ −2 0 2 ]        (5.12)
   [ −1 0 1 ]

   Gy:
   [ −1 −2 −1 ]
   [  0  0  0 ]        (5.13)
   [  1  2  1 ]

Edges oriented at 45° and 135° may be detected by using rotated versions of the masks as above. The Prewitt operators for the detection of diagonal edges are

   G45°:
   [ 0 −1 −1 ]
   [ 1  0 −1 ]        (5.14)
   [ 1  1  0 ]

and

   G135°:
   [ −1 −1 0 ]
   [ −1  0 1 ]        (5.15)
   [  0  1 1 ]

Similar masks may be derived for the Sobel operator.
(Note: The positive and negative signs of the elements in the masks above may be interchanged to obtain operators that detect gradients in the opposite directions. This step is not necessary if directions are considered in the range 0°–180° only, or if only the magnitudes of the gradients are required.)
Observe that the sum of all of the weights in each of the masks above is zero. This indicates that the operation being performed is a derivative or gradient operation, which leads to zero output values in areas of constant gray level, and the loss of intensity information.
The Roberts operator uses 2 × 2 neighborhoods to compute cross-differences as

   [ −1 0 ]          [ 0 −1 ]
   [  0 1 ]   and    [ 1  0 ].        (5.16)

The masks are positioned with the upper-left element placed on the pixel being processed. The absolute values of the results of the two operators are added to obtain the net gradient:

   g(m, n) = |f(m + 1, n + 1) − f(m, n)| + |f(m + 1, n) − f(m, n + 1)|,        (5.17)

with the indices in matrix-indexing notation. The individual differences may also be squared, and the square root of their sum taken to be the net gradient. The advantage of the Roberts operator is that it is a forward-looking operator, as a result of which the result may be written in the same array as the input image. This was advantageous when computer memory was expensive and in short supply.
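A sketch of the Roberts operator of Equation 5.17 follows (NumPy assumed). Because the operator is forward-looking, each output depends only on a pixel's 2 × 2 forward neighborhood; the last row and column, which have no forward neighbors, are left at zero here as one practical convention.

```python
import numpy as np

def roberts(f):
    """Net Roberts cross-gradient of Equation 5.17."""
    f = f.astype(float)
    g = np.zeros_like(f)
    g[:-1, :-1] = (np.abs(f[1:, 1:] - f[:-1, :-1]) +   # |f(m+1,n+1) - f(m,n)|
                   np.abs(f[1:, :-1] - f[:-1, 1:]))    # |f(m+1,n) - f(m,n+1)|
    return g
```

For a step whose corner lies diagonally ahead of a pixel, only the main cross-difference contributes, giving the step height as the net gradient at that pixel.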
Examples: Figure 5.2 (a) shows the Shapes test image. Part (b) of the figure shows the gradient magnitude image, obtained by combining, as in Equation 5.9, the horizontal and vertical derivatives, shown in parts (c) and (d) of the figure, respectively. The image in part (c) presents high values (positive or negative) at vertical edges only; horizontally oriented edges have been deleted by the horizontal derivative operator. The image in part (d) shows high output at horizontal edges, with the vertically oriented edges having been removed by the vertical derivative operator. The test image has strong edges for most of the objects present, which are clearly depicted in the derivative images; however, the derivative images show the edges of a few objects that are not readily apparent in the original image as well. Parts (e) and (f) of the figure show the derivatives at 45° and 135°, respectively; the images indicate the diagonal edges present in the image.
Figures 5.3, 5.4, and 5.5 show similar sets of results for the clock, the knee MR, and the chest X-ray test images, respectively. In the derivatives of the clock image, observe that the numeral "1" has been obliterated by the vertical derivative operator [Figure 5.3 (d)], but gives rise to high output values for the horizontal derivative [Figure 5.3 (c)]. The clock image has the minute hand oriented at approximately 135° with respect to the horizontal; this feature has been completely removed by the 135° derivative operator, as shown in Figure 5.3 (f), but has been enhanced by the 45° derivative operator, as shown in Figure 5.3 (e). The knee MR image contains sharp boundaries that are depicted well in the derivative images in Figure 5.4. The derivative images of the chest X-ray image in Figure 5.5 indicate large values at the boundaries of the image, but depict the internal details with weak derivative values, indicative of the smooth nature of the image.
5.3.2 The Laplacian of Gaussian
Although the Laplacian is a gradient operator, it should be recognized that it is a second-order difference operator. As we observed in Sections 2.11.1 and 4.6, this leads to double-edged outputs with positive and negative values at each edge; this property is demonstrated further by the example in Figure 5.6 (see also Figure 4.26). The Laplacian has the advantage of being omnidirectional, that is, being sensitive to edges in all directions; however, it is not possible to derive the angle of an edge from the result. The operator is also sensitive to noise because there is no averaging included in the operator; the gain in the frequency domain increases quadratically with frequency, causing significant amplification of high-frequency noise components. For these reasons, the Laplacian is not directly useful in edge detection.
The double-edged output of the Laplacian indicates an important property of the operator: the result possesses a zero-crossing in between the positive and negative outputs across an edge; the property holds even when the edge in the original image is significantly blurred. This property is useful in the development of robust edge detectors. The noise sensitivity of the Laplacian
(a) (b)

(c) (d)

(e) (f)

FIGURE 5.2
(a) Shapes test image. (b) Gradient magnitude, display range [0, 400] out of [0, 765]. (c) Horizontal derivative, display range [−200, 200] out of [−765, 765]. (d) Vertical derivative, display range [−200, 200] out of [−765, 765]. (e) 45° derivative, display range [−200, 200] out of [−765, 765]. (f) 135° derivative, display range [−200, 200] out of [−765, 765].
(a) (b)

(c) (d)

(e) (f)

FIGURE 5.3
(a) Clock test image. (b) Gradient magnitude, display range [0, 100] out of [0, 545]. (c) Horizontal derivative, display range [−100, 100] out of [−538, 519]. (d) Vertical derivative, display range [−100, 100] out of [−446, 545]. (e) 45° derivative, display range [−100, 100] out of [−514, 440]. (f) 135° derivative, display range [−100, 100] out of [−431, 535].
(a) (b)

(c) (d)

(e) (f)

FIGURE 5.4
(a) Knee MR image. (b) Gradient magnitude, display range [0, 400] out of [0, 698]. (c) Horizontal derivative, display range [−200, 200] out of [−596, 496]. (d) Vertical derivative, display range [−200, 200] out of [−617, 698]. (e) 45° derivative, display range [−200, 200] out of [−562, 503]. (f) 135° derivative, display range [−200, 200] out of [−432, 528].
(a) (b)

(c) (d)

(e) (f)

FIGURE 5.5
(a) Part of a chest X-ray image. (b) Gradient magnitude, display range [0, 50] out of [0, 699]. (c) Horizontal derivative, display range [−50, 50] out of [−286, 573]. (d) Vertical derivative, display range [−50, 50] out of [−699, 661]. (e) 45° derivative, display range [−50, 50] out of [−452, 466]. (f) 135° derivative, display range [−50, 50] out of [−466, 442].
FIGURE 5.6
Top to bottom: a profile of a blurred object showing two edges, the first derivative, and the second derivative (see also Figure 4.26).
may be reduced by including a smoothing operator. A scalable smoothing operator could be defined in terms of a 2D Gaussian function, with the variance controlling the spatial extent or width of the smoothing function. Combining the Laplacian and the Gaussian, we obtain the popular Laplacian-of-Gaussian or LoG operator [8, 122, 281, 282].
Consider the Gaussian specified by the function

   g(x, y) = − exp[ −(x² + y²) / (2σ²) ].        (5.18)

The usual normalizing scale factor has been left out. Taking partial derivatives with respect to x and y, we obtain

   ∂²g/∂x² = − [(x² − σ²) / σ⁴] exp[ −(x² + y²) / (2σ²) ],
   ∂²g/∂y² = − [(y² − σ²) / σ⁴] exp[ −(x² + y²) / (2σ²) ],        (5.19)

which leads to

   ∇²g(x, y) = LoG(r) = − [(r² − 2σ²) / σ⁴] exp[ −r² / (2σ²) ],        (5.20)

where r = sqrt(x² + y²). The LoG function is isotropic, and has positive and negative values. Due to its shape, it is often referred to as the Mexican hat or sombrero.
Figure 5.7 shows the LoG operator in image and mesh-plot formats; the basic Gaussian used to derive the LoG function is also shown for reference. The Fourier magnitude spectra of the Gaussian and LoG functions are also shown in the figure. It should be observed that, whereas the Gaussian is a lowpass filter (which is also a 2D Gaussian in the frequency domain), the LoG function is a bandpass filter. The width of the filters is controlled by the parameter σ of the Gaussian.
Figure 5.8 shows profiles of the LoG and the related Gaussian for two values of σ. Figure 5.9 shows the profiles of the Fourier transforms of the functions in Figure 5.8. The profiles clearly demonstrate the nature of the functions and their filtering characteristics.
An approximation to the LoG operator is provided by taking the difference between two Gaussians of appropriate variances: this operator is known as the difference-of-Gaussians or DoG operator [282].
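Both kernels are easy to sample directly (NumPy assumed). In the sketch below, the kernel extent of about ±3σ and the DoG standard-deviation ratio of 1.6 are conventional choices rather than values prescribed by the text, and each sampled kernel is shifted to zero mean so that its response to constant regions is exactly zero — a practical detail, not part of Equation 5.20.

```python
import numpy as np

def log_kernel(sigma, size=None):
    """Sample the LoG of Equation 5.20 on an odd-sized square grid."""
    if size is None:
        size = 2 * int(np.ceil(3 * sigma)) + 1   # cover about +/- 3 sigma
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x**2 + y**2
    k = -((r2 - 2 * sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                          # enforce zero DC response

def dog_kernel(sigma, ratio=1.6, size=None):
    """Difference-of-Gaussians approximation to the LoG."""
    if size is None:
        size = 2 * int(np.ceil(3 * ratio * sigma)) + 1
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x**2 + y**2

    def g(s):
        return np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)

    d = g(sigma) - g(ratio * sigma)
    return d - d.mean()
```

With the sign convention of Equations 5.18–5.20, the sampled LoG is positive at the center and negative in the surrounding annulus — the "Mexican hat" shape.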
Examples: Figure 5.10 shows the Shapes test image, the LoG of the image with σ = 1 pixel, and the locations of the zero-crossings of the LoG of the image with σ = 1 pixel and σ = 2 pixels. The zero-crossings indicate the locations of the edges in the image. The use of a large value for σ reduces the effect of noise, but also causes smoothing of the edges and corners, as well as the loss of the minor details present in the image.
(a) (b)

(c) (d)

(e) (f)

FIGURE 5.7
The Laplacian of Gaussian in (b) image format and (d) as a mesh plot. The related Gaussian functions are shown in (a) and (c). The size of the arrays is 51 × 51 pixels; standard deviation σ = 4 pixels. The Fourier magnitude spectra of the functions are shown in (e) and (f).


(a)


(b)
FIGURE 5.8
Profiles of the Laplacian of Gaussian (solid line) and the related Gaussian (dashed line) in Figure 5.7. The functions have been normalized to a maximum value of unity. The unit of r is pixels. (a) σ = 4 pixels. (b) σ = 2 pixels.
(a)

(b)
FIGURE 5.9
Profiles of the Fourier magnitude spectra of the Laplacian of Gaussian (solid line) and the related Gaussian (dashed line) in Figure 5.7. Both functions have been normalized to have a maximum value equal to unity. (a) σ = 4 pixels. (b) σ = 2 pixels. The zero-frequency point is at the center of the horizontal axis.
Figure 5.11 shows the clock image, its LoG, and the zero-crossings of the
LoG with  = 1 pixel and  = 2 pixels. The results illustrate the performance
of the LoG operator in the presence of noise.
Figures 5.12, 5.13, and 5.14 show similar sets of results for the myocyte
image, the knee MR image, and the chest X-ray test images. Comparative
analysis of the scales of the details present in the images and the zero-crossings
of the LoG for dierent values of  indicates the importance of selecting values
of the  parameter in accordance with the scale of the details to be detected.
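The LoG filtering and zero-crossing detection illustrated by these examples can be sketched in a few lines. The following is a minimal illustration, not the book's own implementation; SciPy's `gaussian_laplace`, the synthetic square test image, and σ = 1 are choices made here.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(image, sigma):
    """Boolean map of zero-crossings of the LoG of `image` at scale sigma."""
    log = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(log.shape, dtype=bool)
    # A pixel is marked where the LoG changes sign between horizontal
    # or vertical neighbors.
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    return zc

# Synthetic test image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
edges = log_zero_crossings(img, sigma=1.0)
```

Increasing sigma in this sketch suppresses noise-induced zero-crossings at the cost of smoothing edges and corners, as discussed above.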
5.3.3 Scale-space methods for multiscale edge detection
Marr and Hildreth [281, 282] suggested that physical phenomena may be detected
simultaneously over several channels tuned to different spatial sizes
or scales, with an approach known as spatial coincidence. An intensity
change that is due to a single physical phenomenon is indicated by zero-crossing
segments present in independent channels over a certain range of
scales, with the segments having the same position and orientation in each
channel. A significant intensity change indicates the presence of a major event
that is registered as a physical boundary, and is recognized as a single physical
phenomenon. The boundaries of a significant physical pattern should be
present over several channels, suggesting that the use of techniques based on
zero-crossings generated from filters of different scales could be more effective
than the conventional (single-scale) methods for edge detection; see, for
example, Figure 5.13.
Zero-crossings and scale-space: The multichannel model for the HVS
[283] and the Marr-Hildreth spatial coincidence assumption [281] led to the
development of methods for the detection of edges based upon multiscale
analysis performed with filters of different scales. Marr and Hildreth proposed
heuristic rules to combine information from the different channels in a
multichannel vision model; they suggested the use of a bank of LoG filters
with several values of σ, which may be represented as {∇² g(x, y, σ)}, with
σ > 0.
A method for obtaining information in images across a continuum of scales
was suggested by Witkin [284], who introduced the concept of scale-space. The
method rapidly gained considerable interest, and has been explored further by
several researchers in image processing and analysis [285, 286, 287, 288, 289].
The scale-space {Z(x, y, σ)} of an image f(x, y) is defined as the set of all
zero-crossings of its LoG:

{Z(x, y, σ)} = { (x, y, σ) | Λ(x, y, σ) = 0, (∂Λ/∂x, ∂Λ/∂y) ≠ (0, 0), σ > 0 },   (5.21)

where

Λ(x, y, σ) = [∇² g(x, y, σ)] * f(x, y).   (5.22)
(a) (b)
(c) (d)
FIGURE 5.10
(a) The Shapes test image. (b) The LoG of the image in (a) with σ = 1 pixel.
(c) Locations of the zero-crossings in the LoG in (b). (d) Locations of the
zero-crossings in the LoG with σ = 2 pixels.
(a) (b)
(c) (d)
FIGURE 5.11
(a) The clock test image. (b) The LoG of the image in (a) with σ = 1 pixel.
(c) Locations of the zero-crossings in the LoG in (b). (d) Locations of the
zero-crossings in the LoG with σ = 2 pixels.
(a) (b)
(c) (d)
FIGURE 5.12
(a) Image of a myocyte. (b) The LoG of the image in (a) with σ = 2 pixels.
(c) Locations of the zero-crossings in the LoG in (b). (d) Locations of the
zero-crossings in the LoG with σ = 4 pixels.
(a) (b)
(c) (d)
(e) (f)
FIGURE 5.13
(a) MR image of a knee. (b) The LoG of the image in (a) with σ = 2 pixels.
(c) to (f) Locations of the zero-crossings in the LoG with σ = 1, 2, 3, and 4
pixels.
(a) (b)
(c) (d)
FIGURE 5.14
(a) Part of a chest X-ray image. (b) The LoG of the image in (a) with σ = 2
pixels; display range [−150, 150] out of [−231, 956]. (c) Locations of the zero-crossings
in the LoG in (b). (d) Locations of the zero-crossings in the LoG
with σ = 4 pixels.
As the scale σ varies from 0 to ∞, the set {Z(x, y, σ)} forms continuous
surfaces in the (x, y, σ) scale-space.
Several important scale-space concepts apply to 1D and 2D signals. It has
been shown that the scale-space of almost all signals filtered by a Gaussian
determines the signal uniquely up to a scaling constant [285] (except for noise-contaminated
signals and some special functions [290]). The importance of
this property lies in the fact that, theoretically, for almost all signals, no
information is lost by working in the scale-space instead of the image domain.
This property plays an important role in image understanding [291], image
reconstruction from zero-crossings [285, 292], and image analysis using the
scale-space approach [288]. Furthermore, it has also been shown that the
Gaussian does not create additional zero-crossings as the scale σ increases
beyond a certain limit, and that the Gaussian is the only filter with this
desirable scaling behavior [285].
Based on the spatial-coincidence assumption, Witkin [284] proposed a 1D
stability analysis method for the extraction of primitive events that occur over
a large range of scales. The primitive events were organized into a qualitative
signal description representing the major events in the signal. Assuming that
zero-crossing curves do not cross one another (which was later proven to be
incorrect by Katz [293]), Witkin defined the stability of a signal interval as
the scale range over which the signal interval exists; major events could then
be captured via stability analysis. However, due to the complex topological
nature of spatial zero-crossings, it is often difficult to directly extend Witkin's
1D stability analysis method to 2D image analysis. The following problems
affect Witkin's method for stability analysis:
- It has been shown that zero-crossing curves do cross one another [293].
- It has been shown that real (authentic) zero-crossings could turn into
false (phantom) zero-crossings as the scale σ increases [294]. Use of
the complete scale-space (with σ ranging from 0 to ∞) may introduce
errors in certain applications; an appropriate scale-space using only a
finite range of scales could be more effective.
- For 2D signals (images), the scale-space consists of zero-crossing surfaces
that are more complex than the zero-crossing curves for 1D signals. The
zero-crossing surfaces may split and merge as the scale varies (decreases
or increases, respectively).
- As a consequence of the above, there is no simple topological region associated
with a zero-crossing surface, and tracing a zero-crossing surface
across scales becomes computationally difficult.
Liu et al. [295] proposed an alternative definition of zero-crossing surface
stability in terms of important spatial boundaries. In this approach, a spatial
boundary is defined as a region of steep gradient and high contrast, and is
well-defined if it has no neighboring boundaries within a given range. This
definition of spatial boundaries is consistent with the Marr-Hildreth spatial-coincidence
assumption. Furthermore, stability maps [288] associated with
the scale-space are used. A relaxation algorithm is included in the process to
generate zero-crossing maps.
In the method of Liu et al. [295], the discrete scale-space approach is used to
construct a representation of a given image in terms of a stability map, which
is a measure of pattern boundary persistence over a range of filter scales. For
a given image f(x, y), a set of zero-crossing maps is generated by convolving
the image with the set of isotropic functions ∇² g(x, y, σ_i), 1 ≤ i ≤ N. It
was indicated that N = 8 sampled σ_i values ranging from 1 to 8 pixels were
adequate for the application considered. Ideally, one would expect a pattern
boundary to be accurately located over all of the scales. However, it has been
shown [296, 297] that the accuracy of zero-crossing localization depends upon
the width of the central excitatory region of the filter (defined as w_i = 2√2 σ_i
[298]). Chen and Medioni [299] proposed a 1D method for localization of
zero-crossings that works well for ideal step edges and image patterns with
sharp contrast; however, the method may not be effective for the construction
of the spatial scale-space for real-life images with poor and variable contrast.
Instead of directly matching all the zero-crossing locations at a point (x, y)
over the zero-crossing maps, Liu et al. proposed a criterion C(σ_i) that is
a function of the scale σ_i to define a neighborhood in which the matching
procedure is performed at a particular scale:

C(σ_i) = { (x′, y′) | x − ασ_i ≤ x′ ≤ x + ασ_i, y − ασ_i ≤ y′ ≤ y + ασ_i }, α ≥ 1,   (5.23)

where (x′, y′) are the actual locations of the zero-crossings, (x, y) is the pixel
location at which the filters are being applied, and α is a constant to be
determined experimentally (α = 1 was used by Liu et al.). Therefore, if a
zero-crossing (x, y, σ_i) is found in the neighborhood defined by C(σ_i), an
arbitrary constant β is assigned to a function S_i(x, y), which otherwise is
assigned a zero; that is,

S_i(x, y) = β if (x, y, σ_i) ∈ C(σ_i), and S_i(x, y) = 0 otherwise,   (5.24)

where the subscript i corresponds to the ith scale σ_i.
Applying Equations 5.23 and 5.24 to the set of zero-crossings detected, a set
of adjusted zero-crossing maps {S_1(x, y), S_2(x, y), ..., S_N(x, y)} is obtained,
where N is the number of scales. The adjusted zero-crossing maps are used
to construct the zero-crossing stability map Ψ(x, y) as

Ψ(x, y) = Σ_{i=1}^{N} S_i(x, y).   (5.25)
The values of Ψ(x, y) are, in principle, a measure of boundary stability through
the filter scales. Marr and Hildreth [281] and Marr and Poggio [300] suggested
that directional detection of zero-crossings be performed after the LoG operator
has been applied to the image.
According to the spatial-coincidence assumption, a true boundary should
be high in contrast and have relatively large Ψ values at the corresponding
locations. Furthermore, there should be no other edges within a given neighborhood.
Thus, if in a neighborhood of (x, y), nonzero stability map values
exist only along the orientation of a local segment of the stability map that
crosses (x, y), then Ψ(x, y) may be considered to signify a stable edge pixel at
(x, y). On the other hand, if many nonzero stability map values are present at
different directions, Ψ(x, y) indicates an insignificant boundary pixel at (x, y).
In other words, a consistent stability indexing method (in the sense of the
spatial-coincidence assumption) should take neighboring stability indices into
account. Based upon this argument, Liu et al. proposed a relative stability
index ρ(x, y), computed from the stability map where Ψ(x, y) ≠ 0, as follows.
In a neighborhood of (x, y), if m nonzero values are found, Ψ(x, y) is
relabeled as l_0, and the rest of the Ψ(x_k, y_k) are relabeled as l_k, k = 1, ..., m − 1;
see Figure 5.15. In order to avoid using elements in the neighborhood
that belong to the same edge, those (x′, y′) having the same orientation as
that of l_0 are not included in the computation of ρ(x, y). Based upon these
requirements, the relative stability index ρ(x, y) is defined as

ρ(x, y) = l_0 / [ Σ_{k=0}^{m−1} α_k l_k ],   (5.26)

where α_k = exp(−d_k²), d_k = √[(x − x_k)² + (y − y_k)²], and (x_k, y_k) are the
locations of l_k. It should be noted that 0 < ρ(x, y) ≤ 1 and that the value of
ρ(x, y) is governed by the geometrical distribution of the neighboring stability
index values.
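A zero-crossing stability map in the spirit of Equation 5.25 can be sketched as follows. This is a loose simplification, not Liu et al.'s full procedure: it assumes β = 1 and approximates the matching neighborhood of Equation 5.23 (with α = 1) by a square dilation of each zero-crossing map, and it omits orientation handling, the relative stability index, and the relaxation step.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, binary_dilation

def zero_crossing_map(image, sigma):
    """Boolean map of sign changes in the LoG at scale sigma."""
    log = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(log.shape, dtype=bool)
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    return zc

def stability_map(image, sigmas, alpha=1.0, beta=1.0):
    """Sum of adjusted zero-crossing maps S_i over the given scales."""
    psi = np.zeros(image.shape)
    for sigma in sigmas:
        zc = zero_crossing_map(image, sigma)
        # Adjusted map S_i: a pixel scores beta if a zero-crossing is
        # found within +/- alpha*sigma of it (a square neighborhood).
        k = max(1, int(round(alpha * sigma)))
        matched = binary_dilation(zc, structure=np.ones((2 * k + 1, 2 * k + 1)))
        psi += beta * matched
    return psi

img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
psi = stability_map(img, sigmas=[1, 2, 4])
```

Pixels near the square's boundary accumulate support from all three scales, whereas pixels far from any edge score zero, which is the boundary-persistence behavior that Ψ(x, y) is meant to capture.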
Stability of zero-crossings: Liu et al. [295] observed that the use of zero-crossings
to indicate the presence of edges is reliable as long as the edges are
well-separated; otherwise, the problem of false zero-crossings could arise [301].
The problem with using zero-crossings to localize edges is that the zeros of
the second derivative of a function localize the extrema in the first derivative
of the function; the extrema include the local minima and maxima in the
first derivative of the function, whereas only the local maxima indicate the
presence of edge points. Intuitively, those zero-crossings that correspond to
the minima of the first derivative are not associated with edge points at all.
In image analysis based upon the notion of zero-crossings, it is desirable to be
able to distinguish real zero-crossings from false ones, and to discard the false
zero-crossings. Motivated by the work of Richter and Ullman [301], Clark [294]
conducted an extensive study on the problem of false and real zero-crossings,
and proposed that zero-crossings may be classified as real if γ(x, y) < 0 and
false if γ(x, y) > 0, where

γ(x, y) = ∇[∇² p(x, y)] · ∇p(x, y),

· denotes the dot product, p(x, y) is a smoothed version of the given image,
such as p(x, y) = g(x, y, σ) * f(x, y),

∇p(x, y) = [∂p/∂x, ∂p/∂y]^T, and

∇[∇² p(x, y)] = [∂³p/∂x³ + ∂³p/∂x∂y², ∂³p/∂x²∂y + ∂³p/∂y³]^T.

Liu et al. included this step in their method to detect true zero-crossings.

(a) (b)
FIGURE 5.15
A case where three zero-crossings {l_1, l_2, l_3} are found in a neighborhood of a
zero-crossing l_0. d_i indicates the distance from l_i to l_0. The arrows indicate
the directions of the zero-crossings. (a) The neighboring zero-crossings are far
apart from l_0, imposing a low penalty on the zero-crossing associated with l_0.
(b) The neighboring zero-crossings are close to l_0, imposing a high penalty on
the zero-crossing associated with l_0. Reproduced with permission from Z.-Q.
Liu, R.M. Rangayyan, and C.B. Frank, "Statistical analysis of collagen alignment
in ligaments by scale-space analysis", IEEE Transactions on Biomedical
Engineering, 38(6):580-588, 1991. © IEEE.
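Clark's real/false test can be sketched numerically with finite differences; this is an illustrative approximation (NumPy's `gradient` and SciPy's discrete Laplacian stand in for the continuous derivatives, and the step-edge test image and σ are arbitrary).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def clark_gamma(image, sigma):
    """gamma = grad(Laplacian of p) . grad(p), with p = g * f;
    gamma < 0 flags a real zero-crossing, gamma > 0 a false one."""
    p = gaussian_filter(image.astype(float), sigma)   # p = g * f
    py, px = np.gradient(p)                           # grad p (rows, cols)
    lap = laplace(p)                                  # Laplacian of p
    ly, lx = np.gradient(lap)                         # grad of the Laplacian
    return lx * px + ly * py                          # dot product

img = np.zeros((64, 64))
img[:, 32:] = 100.0               # vertical step edge
gamma = clark_gamma(img, sigma=2.0)
```

At the true edge (a maximum of the first derivative), gamma is negative, consistent with the classification rule above.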
Example: Figure 5.16 (a) shows an SEM image of collagen fibers in a ligament
scar-tissue sample. Parts (b) to (d) of the figure show the zero-crossing
maps obtained with σ = 1, 4, and 8 pixels. The result in (b) contains several
spurious or insignificant zero-crossings, whereas that in (d) contains smoothed
edges of only the major regions of the image. Part (e) shows the stability map,
which indicates the edges of the major objects in the image. The stability map
was used to detect the collagen fibers in the image. See Section 8.5 for details
on directional analysis of oriented patterns by further processing of the stability
map. Methods for the directional analysis of collagen fibers are described
in Section 8.7.1.
See Sections 5.10.2, 8.4, and 8.9 for more examples on multiscale analysis.
5.3.4 Canny's method for edge detection
Canny [302] proposed an approach for edge detection based upon three criteria
for good edge detection, multidirectional derivatives, multiscale analysis, and
optimization procedures. The three criteria relate to: low probabilities of false
edge detection and of missing real edges, represented in the form of an SNR; good
localization, represented by the RMS distance of the detected edge from the
true edge; and the production of a single output for a single edge, represented
by the distance between the adjacent maxima in the output. A basic filter
derived using the criteria mentioned above was approximated by the first
derivative of a Gaussian. Procedures were proposed to incorporate multiscale
analysis and directional filters to facilitate efficient detection of edges at all
orientations and scales, including adaptive thresholding with hysteresis.
The LoG filter is nondirectional, whereas Canny's method selectively evaluates
a directional derivative across each edge. By avoiding derivatives at other
angles that would not contribute to edge detection but increase the effects of
noise, Canny's method could lead to better results than the LoG filter.
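Two elements of the pipeline just described, derivative-of-Gaussian gradients and hysteresis thresholding, can be sketched as follows. This is a reduced illustration, not Canny's complete method: non-maximum suppression and the directional filters are omitted, and the thresholds are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def canny_sketch(image, sigma, t_low, t_high):
    f = image.astype(float)
    gx = gaussian_filter(f, sigma, order=(0, 1))   # d/dx of Gaussian
    gy = gaussian_filter(f, sigma, order=(1, 0))   # d/dy of Gaussian
    mag = np.hypot(gx, gy)
    weak = mag >= t_low
    strong = mag >= t_high
    # Hysteresis: keep connected weak components that contain at least
    # one strong pixel.
    labels, n = label(weak)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                                # background label
    return keep[labels]

img = np.zeros((64, 64))
img[:, 32:] = 100.0
edges = canny_sketch(img, sigma=2.0, t_low=5.0, t_high=15.0)
```

The hysteresis step illustrates the single-response criterion: weak responses survive only where they are connected to confident (strong) edge evidence.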
5.3.5 Fourier-domain methods for edge detection
FIGURE 5.16
(a) SEM image of a ligament scar-tissue sample. (b) to (d) Zero-crossing locations
detected using the LoG operator with σ = 1, 4, and 8 pixels, respectively.
(e) The stability map, depicting the major edges present in the image. Reproduced
with permission from Z.-Q. Liu, R.M. Rangayyan, and C.B. Frank,
"Statistical analysis of collagen alignment in ligaments by scale-space analysis",
IEEE Transactions on Biomedical Engineering, 38(6):580-588, 1991.
© IEEE.

In Section 4.7, we saw that highpass filters may be applied in the Fourier
domain to extract the edges in the given image. However, the inclusion of all
of the high-frequency components present in the image could lead to noisy
results. Reduction of high-frequency noise suggests the use of bandpass filters,
which may be easily implemented as a cascade of a lowpass filter with a
highpass filter. In the frequency domain, such a cascade of filters results in
the multiplication of the corresponding transfer functions. Because edges are
often weak or blurred in images, some form of enhancement of the corresponding
frequency components would also be desirable. This argument leads us
to the LoG filter: a combination of the Laplacian, which is a high-frequency-emphasis
filter with its gain quadratically proportional to frequency, with a
Gaussian lowpass filter. The methods and results presented in Section 5.3.2
demonstrate the edge-detection capabilities of the LoG filter, which may be
easily implemented in the frequency domain. Frequency-domain implementation
using the FFT may be computationally advantageous when the LoG
function is specified with a large spatial array, which would be required in the
case of large values of σ.
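One possible FFT-based implementation of the LoG filter is sketched below. It uses the continuous-domain transfer function convention, in which the LoG spectrum is the Gaussian spectrum multiplied by the negated squared frequency; the image and σ are illustrative.

```python
import numpy as np

def log_filter_fft(image, sigma):
    """Apply a LoG filter in the frequency domain via the FFT."""
    f = image.astype(float)
    M, N = f.shape
    # Angular frequencies along each axis.
    u = 2 * np.pi * np.fft.fftfreq(M)[:, None]
    v = 2 * np.pi * np.fft.fftfreq(N)[None, :]
    r2 = u**2 + v**2
    # Laplacian gain -(u^2 + v^2) times the Gaussian lowpass spectrum.
    H = -r2 * np.exp(-0.5 * sigma**2 * r2)
    return np.real(np.fft.ifft2(np.fft.fft2(f) * H))

img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
out = log_filter_fft(img, sigma=2.0)
```

Because H is zero at the origin of the frequency plane, the output has (numerically) zero mean, as expected of a Laplacian-based filter.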
Several other line-detection and edge-detection methods, such as Gabor
filters (see Sections 5.10, 8.4, 8.9, and 8.10) and fan filters (see Section 8.3),
may also be implemented in the frequency domain with advantages.
5.3.6 Edge linking
The results of most methods for edge detection are almost always discontinuous,
and need to be processed further to link disjoint segments and obtain
complete representations of the boundaries of ROIs. Two principal properties
that may be used to establish the similarity of edge pixels from gradient
images are the following [8]:
- The strength of the gradient: a point (x′, y′) in a neighborhood of
(x, y) is similar in gradient magnitude to the point (x, y) if

  ||G(x, y) − G(x′, y′)|| ≤ T,   (5.27)

  where G(x, y) is the gradient vector of the given image f(x, y) at (x, y),
  and T is a threshold.
- The direction of the gradient: a point (x′, y′) in a neighborhood of
(x, y) is similar in gradient direction to the point (x, y) if

  |θ(x, y) − θ(x′, y′)| ≤ A,   where   θ(x, y) = ∠G(x, y) = tan⁻¹ [ (∂f(x, y)/∂y) / (∂f(x, y)/∂x) ],   (5.28)

  and A is a threshold.
Neighborhoods of size 3 × 3 or 5 × 5 may be used for checking pixels for similarity in
their gradients as above. Further processing steps may include linking of edge
segments separated by small breaks and deleting isolated short segments.
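The two similarity tests of Equations 5.27 and 5.28 can be sketched as follows; Sobel operators supply the gradient fields, and the thresholds T and A are illustrative choices, not values from the book.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_fields(image):
    """Gradient magnitude and direction from Sobel operators."""
    f = image.astype(float)
    gx = sobel(f, axis=1)          # horizontal derivative
    gy = sobel(f, axis=0)          # vertical derivative
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    return mag, ang

def similar_edges(mag, ang, p, q, T=50.0, A=np.pi / 6):
    """True if pixels p and q pass both the magnitude (Eq. 5.27)
    and direction (Eq. 5.28) similarity tests."""
    return (abs(mag[p] - mag[q]) <= T) and (abs(ang[p] - ang[q]) <= A)

img = np.zeros((32, 32))
img[:, 16:] = 100.0
mag, ang = gradient_fields(img)
# Two pixels on the same vertical edge pass both tests.
linked = similar_edges(mag, ang, (10, 15), (11, 15))
```

In practice, such tests would be applied within 3 × 3 or 5 × 5 neighborhoods as described above, followed by gap bridging and removal of short segments.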
See Section 5.10.2 (page 493) for the details of an edge analysis method
known as edge-flow propagation.
5.4 Segmentation and Region Growing
Dividing an image into regions that could correspond to structural units, objects
of interest, or ROIs is an important prerequisite for most techniques for
image analysis. Whereas a human observer may, by merely looking at a displayed
image, readily recognize its structural components, computer analysis
of an image requires algorithmic analysis of the array of image pixel values
before arriving at conclusions about the content of the image. Computer
analysis of images usually starts with segmentation, which reduces pixel data
to region-based information about the objects and structures present in the
image [303, 304, 305, 306, 307].
Image segmentation techniques may be classified into four main categories:
- thresholding techniques [8, 306];
- boundary-based methods [303, 308];
- region-based methods [8, 123, 274, 309, 310, 311, 312]; and
- hybrid techniques [313, 314, 315, 316, 317] that combine boundary and
region criteria.
Thresholding methods are based upon the assumption that all pixels whose
values lie within a certain range belong to the same class; see Section 5.1.
The threshold may be determined based upon the valleys in the histogram of
the image; however, identifying thresholds to segment objects is not easy even
with optimal thresholding techniques [8, 306]. Moreover, because thresholding
algorithms are solely based upon pixel values and neglect all of the spatial
information in the image, their accuracy of segmentation is limited; furthermore,
thresholding algorithms do not cope well with noise or blurring at object
boundaries.
Boundary-based techniques make use of the property that, usually, pixel
values change rapidly at the boundaries between regions. The methods start
by detecting intensity discontinuities lying at the boundaries between objects
and their backgrounds, typically through a gradient operation. High
values of the output provide candidate pixels for region boundaries, which
must then be processed to produce closed curves representing the boundaries
between regions, as well as to remove the effects of noise and discontinuities
due to nonuniform illumination and other effects. Although edge-linking
algorithms have been proposed to assemble edge pixels into a meaningful
set of object boundaries (see Section 5.3.6), such as local similarity analysis,
Hough-transform-based global analysis, and global processing via graph-theoretic
techniques [8], the accurate conversion of disjoint sets of edge pixels
to closed-loop boundaries of ROIs is a difficult task.
Region-based methods, which are complements of the boundary-based approach,
rely on the postulate that neighboring pixels within a region have
similar values. Region-based segmentation algorithms may be divided into
two groups: region splitting and merging, and region growing.
Segmentation techniques in the region splitting and merging category initially
subdivide the given image into a set of arbitrary, disjoint regions, and
then merge and/or split the regions in an attempt to satisfy some prespecified
conditions.
Region growing is a procedure that groups pixels into regions. The simplest
of region-growing approaches is pixel aggregation, which starts with a seed
pixel and grows a region by appending spatially connected neighboring pixels
that meet a certain homogeneity criterion. Different homogeneity criteria
will lead to regions with different characteristics. It is important, as well as
difficult, to select an appropriate homogeneity criterion in order to obtain
regions that are appropriate for the application on hand.
Typical algorithms in the group of hybrid techniques refine image segmentation
by integration of boundary and region information; proper combination
of boundary and region information may produce better segmentation results
than those obtained by either method on its own. For example, the morphological
watershed method [315] is generally applied to a gradient image,
which can be viewed as a topographic map with boundaries between regions
as ridges. Consequently, segmentation is equivalent to flooding the topography
from the seed pixels, with region boundaries being erected to keep water
from the different seed pixels from merging. Such an algorithm is guaranteed
to produce closed boundaries, which is known to be a major problem with
boundary-based methods. However, because the success of this type of an algorithm
relies on the accuracy of the edge-detection procedure, it encounters
difficulties with images in which regions are both noisy and have blurred or
indistinct boundaries. Another interesting method within this category, called
variable-order surface fitting [313], starts with a coarse segmentation of the
given image into several surface-curvature-sign primitives (for example, pit,
peak, and ridge), which are then refined by an iterative region-growing method
based on variable-order surface fitting. This method, however, may only be
suitable for the class of images in which the image contents vary considerably.
The main difficulty with region-based segmentation schemes lies in the selection
of a homogeneity criterion. Region-based segmentation algorithms
have been proposed using statistical homogeneity criteria based on regional
feature analysis [312], Bayesian probability modeling of images [318], Markov
random fields [319], and seed-controlled homogeneity competition [311]. Segmentation
algorithms could also rely on homogeneity criteria with respect to
gray level, color, texture, or surface measures.
5.4.1 Optimal thresholding
Suppose it is known a priori that the given image consists of only two principal
brightness levels, with the prior probabilities P_1 and P_2. Consider the situation
where natural variations or noise modify the two gray levels to distributions
represented by Gaussian PDFs p_1(x) and p_2(x), where x represents the gray
level. The PDF of the image gray levels is then [8]

p(x) = P_1 p_1(x) + P_2 p_2(x)
     = [P_1 / (√(2π) σ_1)] exp[−(x − μ_1)² / (2σ_1²)] + [P_2 / (√(2π) σ_2)] exp[−(x − μ_2)² / (2σ_2²)],   (5.29)

where μ_1 and μ_2 are the means of the two regions, and σ_1 and σ_2 are their
standard deviations. Let μ_1 < μ_2.
Suppose that the dark regions in the image correspond to the background,
and the bright regions to the objects of interest. Then, all pixels below a
threshold T may be considered to belong to the background, and all pixels
above T may be considered as pixels belonging to the object of interest. The
probability of erroneous classification is then

P_e(T) = P_1 ∫_T^∞ p_1(x) dx + P_2 ∫_{−∞}^T p_2(x) dx.   (5.30)

To find the optimal threshold, we may differentiate P_e(T) with respect to T
and equate the result to zero, which leads to

P_1 p_1(T) = P_2 p_2(T).   (5.31)

Applying this result to the Gaussian PDFs gives (after taking logarithms and
some simplification) the quadratic equation [8]

A T² + B T + C = 0,   (5.32)

where

A = σ_1² − σ_2²,
B = 2 (μ_1 σ_2² − μ_2 σ_1²),
C = σ_1² μ_2² − σ_2² μ_1² + 2 σ_1² σ_2² ln[σ_2 P_1 / (σ_1 P_2)].   (5.33)

The possibility of two solutions indicates that two thresholds may be required
to obtain the optimal result.
If σ_1² = σ_2² = σ², a single threshold may be used, given by

T = (μ_1 + μ_2)/2 + [σ² / (μ_1 − μ_2)] ln(P_2 / P_1).   (5.34)

Furthermore, if the two prior probabilities are equal, that is, P_1 = P_2, or if the
variance is zero, that is, σ = 0, the optimal threshold is equal to the average
of the two means.
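The quadratic of Equations 5.32 and 5.33 is straightforward to solve numerically; the sketch below does so, falling back to the single-threshold form of Equation 5.34 when the variances are equal. The parameter values in the example are illustrative.

```python
import math

def optimal_thresholds(mu1, mu2, s1, s2, P1, P2):
    """Roots of A*T^2 + B*T + C = 0 (Eqs. 5.32-5.33); with equal
    variances, the single threshold of Eq. 5.34 is returned."""
    A = s1**2 - s2**2
    B = 2.0 * (mu1 * s2**2 - mu2 * s1**2)
    C = (s1**2 * mu2**2 - s2**2 * mu1**2
         + 2.0 * s1**2 * s2**2 * math.log((s2 * P1) / (s1 * P2)))
    if A == 0:
        return [0.5 * (mu1 + mu2) + (s1**2 / (mu1 - mu2)) * math.log(P2 / P1)]
    disc = math.sqrt(B**2 - 4.0 * A * C)
    return sorted([(-B - disc) / (2.0 * A), (-B + disc) / (2.0 * A)])

# Equal priors and equal variances: the threshold is the mean of the means.
T = optimal_thresholds(mu1=50.0, mu2=150.0, s1=10.0, s2=10.0, P1=0.5, P2=0.5)
```

With unequal variances, the function returns both roots, reflecting the possibility of two thresholds noted above.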
Thresholding using boundary characteristics: The number of pixels
covered by the objects of interest to be segmented from an image is almost always
a small fraction of the total number of pixels in the image; the gray-level
histogram of the image is then likely to be almost unimodal. The histogram
may be made closer to being bimodal if only the pixels on or near the boundaries
of the object regions are considered.
The selection and characterization of the edge or boundary pixels may be
achieved by using gradient and Laplacian operators as follows [8]:
b(x, y) = 0 if ∇f(x, y) < T;
b(x, y) = L+ if ∇f(x, y) ≥ T and ∇²f(x, y) ≥ 0;
b(x, y) = L− if ∇f(x, y) ≥ T and ∇²f(x, y) < 0;   (5.35)

where ∇f(x, y) is a gradient and ∇²f(x, y) is the Laplacian of the given image
f(x, y), T is a threshold, and 0, L+, and L− represent three distinct gray levels.
In the resulting image, the pixels that are not on an edge are set to zero, the
pixels on the darker sides of edges are set to L+, and the pixels on the lighter
sides of edges are set to L−. This information may be used not only to detect
objects and edges, but also to identify the leading and trailing edges of objects
(with reference to the scanning direction).
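The three-level labeling of Equation 5.35 can be sketched as follows; Sobel operators and the discrete Laplacian stand in for the continuous operators, and the threshold and label values are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel, laplace

def boundary_labels(image, T=100.0, Lp=1, Lm=2):
    """Three-level edge image b(x, y) per Eq. 5.35 (0, L+, L-)."""
    f = image.astype(float)
    grad = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
    lap = laplace(f)
    b = np.zeros(f.shape, dtype=int)
    b[(grad >= T) & (lap >= 0)] = Lp   # darker side of an edge
    b[(grad >= T) & (lap < 0)] = Lm    # lighter side of an edge
    return b

img = np.zeros((16, 16))
img[:, 8:] = 100.0                     # dark-to-bright step
b = boundary_labels(img)
```

For the step edge in the example, the dark side receives L+ (positive Laplacian) and the bright side L−, so the leading and trailing edges along a scan line can be told apart, as described above.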
See Section 8.3.2 for a description of Otsu's method of deriving the optimal
threshold for binarizing a given image; see also Section 8.7.2 for discussions
on a few other methods to derive thresholds.

5.4.2 Region-oriented segmentation of images


Let R represent the region spanning the entire space of the given image.
Segmentation may be viewed as a process that partitions R into n subregions
R_1, R_2, ..., R_n such that [8]:
- ∪_{i=1}^{n} R_i = R, that is, the union of all of the regions detected spans the
entire image (then, every pixel must belong to a region);
- R_i is a connected region, i = 1, 2, ..., n;
- R_i ∩ R_j = ∅ for all i, j, i ≠ j (that is, the regions are disjoint);
- P(R_i) = TRUE for i = 1, 2, ..., n (for example, all pixels within a
region have the same intensity); and
- P(R_i ∪ R_j) = FALSE for all i, j, i ≠ j (for example, the intensities of the
pixels in different regions are different);
where P(R_i) is a logical predicate defined over the points in the set R_i, and
∅ is the null set.
A simple algorithm for region growing by pixel aggregation based upon the
similarity of a local property is as follows:
- Start with a seed pixel (or a set of seed pixels).
- Append to each pixel in the region those of its 4-connected or 8-connected
neighbors that have properties (gray level, color, etc.) that are similar
to those of the seed.
- Stop when the region cannot be grown any further.
The results of an algorithm as above depend upon the procedure used to
select the seed pixels and the measures of similarity or inclusion criteria used.
The results may also depend upon the method used to traverse the image,
that is, the sequence in which neighboring pixels are checked for inclusion.
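The steps above can be sketched as a breadth-first pixel aggregation with 4-connectivity; the gray-level similarity test compares each candidate with the seed, and the small test image is illustrative.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, T):
    """Grow a 4-connected region from `seed`, appending neighbors whose
    gray level is within T of the seed's gray level."""
    f = image.astype(float)
    region = np.zeros(f.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < f.shape[0] and 0 <= nc < f.shape[1]
                    and not region[nr, nc]
                    and abs(f[nr, nc] - f[seed]) <= T):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

img = np.array([[100, 101, 101],
                [100, 127, 126],
                [100, 124, 128]], dtype=float)
mask = grow_region(img, seed=(1, 1), T=3)
```

The queue order encodes the traversal sequence; with a seed-referenced criterion the final region here does not depend on that order, but with other inclusion criteria it may, as noted above.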
5.4.3 Splitting and merging of regions
Instead of using seeds to grow regions or global thresholds to separate an image
into regions, one could initially divide the given image arbitrarily
into a set of disjoint regions, and then split and/or merge the regions using
conditions or predicates P.
A general split/merge procedure is as follows [8]: Assuming the image to
be square, subdivide the entire image R successively into smaller and smaller
quadrant regions such that, for any region R_i, P(R_i) = TRUE. In other
words, if P(R) = FALSE, divide the image into quadrants; if P is FALSE
for any quadrant, subdivide that quadrant into subquadrants. Iterate the
procedure until no further changes are made, or a stopping criterion is reached.
The splitting technique may be represented as a quadtree. Difficulties could
exist in selecting an appropriate predicate P.
Because the splitting procedure could result in adjacent regions that are
similar, a merging step would be required, which may be specified as follows:
Merge two adjacent regions R_i and R_k if P(R_i ∪ R_k) = TRUE. Iterate until
no further merging is possible.
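The split step can be sketched as a recursive quadtree subdivision. The predicate used here, that the gray-level range within the block not exceed a tolerance, is one illustrative choice of P, not one prescribed by the text; the merge step is omitted for brevity.

```python
import numpy as np

def split(image, r0, c0, size, tol, leaves):
    """Recursively split the square block at (r0, c0) into quadrants
    until P(R) = TRUE, recording the leaf blocks of the quadtree."""
    block = image[r0:r0 + size, c0:c0 + size]
    if size == 1 or block.max() - block.min() <= tol:   # P(R) = TRUE
        leaves.append((r0, c0, size))
        return
    h = size // 2
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        split(image, r0 + dr, c0 + dc, h, tol, leaves)

img = np.zeros((8, 8))
img[:4, :4] = 100.0        # one homogeneous bright quadrant
leaves = []
split(img, 0, 0, 8, tol=0.0, leaves=leaves)
```

Each leaf is a region for which P holds; a subsequent merge pass would fuse adjacent leaves that jointly satisfy P.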
5.4.4 Region growing using an additive tolerance
A commonly used region-growing scheme is pixel aggregation [8, 123]. The
method compares the properties of spatially connected neighboring pixels with
those of the seed pixel; the properties used are determined by homogeneity
criteria. For intensity-based image segmentation, the simplest property is
the pixel gray level. The term "additive tolerance level" stands for the permitted
absolute gray-level difference between the neighboring pixels and the
seed pixel: a neighboring pixel f(m, n) is appended to the region if its absolute
gray-level difference with respect to the seed pixel is within the additive
tolerance level T:

|f(m, n) − seed| ≤ T.   (5.36)
Figure 5.17 shows a simple example of additive-tolerance region growing
using different seed pixels in a 5 × 5 image. The additive tolerance level used
in the example is T = 3. Observe that two different regions are obtained by
starting with two seeds at different locations, as shown in Figure 5.17 (b) and
Figure 5.17 (c). In order to overcome this dependence of the region on the
seed pixel selected, the following modified criterion could be used to determine
whether a neighboring pixel should be included in a region or not: instead of
comparing the incoming pixel with the gray level of the seed, the gray level of
a neighboring pixel is compared with the mean gray level, called the running
mean μ_Rc, of the region being grown at its current stage, R_c. This criterion
may be represented as

|f(m, n) − μ_Rc| ≤ T,   (5.37)

where

μ_Rc = (1 / N_c) Σ_{(m,n) ∈ R_c} f(m, n),   (5.38)

and N_c is the number of pixels in R_c. Figure 5.17 (d) shows the result obtained
with the running-mean algorithm by using the same additive tolerance
level as before (T = 3). With the running-mean criterion, no matter which
pixel is selected as the seed, the same final region is obtained in the present
example, as long as the seed pixel is within the region, which is the central
highlighted area in Figure 5.17 (d).
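The running-mean criterion of Equations 5.37 and 5.38 can be sketched by maintaining the region's sum and pixel count during growth. The image below is the 5 × 5 array of Figure 5.17 (a); the seed (1, 1) in 0-indexed coordinates corresponds to the pixel (2, 2) of the figure's 1-indexed convention.

```python
import numpy as np
from collections import deque

def grow_region_running_mean(image, seed, T):
    """4-connected region growing; each candidate is compared with the
    running mean of the region grown so far (Eq. 5.37), not the seed."""
    f = image.astype(float)
    region = np.zeros(f.shape, dtype=bool)
    region[seed] = True
    total, count = f[seed], 1          # running sum and pixel count
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < f.shape[0] and 0 <= nc < f.shape[1]
                    and not region[nr, nc]
                    and abs(f[nr, nc] - total / count) <= T):
                region[nr, nc] = True
                total += f[nr, nc]
                count += 1
                queue.append((nr, nc))
    return region

img = np.array([[100, 101, 101, 100, 101],
                [100, 127, 126, 128, 100],
                [100, 124, 128, 127, 100],
                [100, 124, 125, 126, 101],
                [101, 100, 100, 101, 102]], dtype=float)
mask = grow_region_running_mean(img, seed=(1, 1), T=3)
```

With T = 3, the grown region is the central 3 × 3 block of high-valued pixels, matching the highlighted area of Figure 5.17 (d).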
In the simple scheme described above, the seed pixel is always used to check
the incoming neighboring pixels, even though most of them are not spatially
connected or close to the seed. Such a region-growing procedure may fail when
a seed pixel is inappropriately located at a noisy pixel. Shen [320, 321, 322]
suggested the use of the "current center" pixel as the reference instead of
the seed pixel that was used to commence the region-growing procedure. For
example, the shaded area shown in Figure 5.18 represents a region being
grown. After the pixel C is appended to the region, its 4-connected neighbors
(labeled as Ni, i = 1, 2, 3, 4) or 8-connected neighbors (labeled as Ni, i =
1, 2, ..., 8) would be checked for inclusion in the region, using

|Ni − C| ≤ T.   (5.39)

The pixel C is called the current center pixel. However, because some of
the neighboring pixels (N1 and N5 in the illustration in Figure 5.18) are
Detection of Regions of Interest 399

1 2 3 4 5 1 2 3 4 5
1 100 101 101 100 101 1 100 101 101 100 101
2 100 127 126 128 100 2 100 seed 126 128 100
3 100 124 128 127 100 3 100 124 128 127 100
4 100 124 125 126 101 4 100 124 125 126 101
5 101 100 100 101 102 5 101 100 100 101 102
(a) (b)

1 2 3 4 5 1 2 3 4 5
1 100 101 101 100 101 1 100 101 101 100 101
2 100 127 126 128 100 2 100 127 126 128 100
3 100 124 seed 127 100 3 100 124 128 127 100
4 100 124 125 126 101 4 100 124 125 126 101
5 101 100 100 101 102 5 101 100 100 101 102
(c) (d)

FIGURE 5.17
Example of additive-tolerance region growing using different seed pixels (T =
3). (a) Original image. (b) The result of region growing (shaded in black) with
the seed pixel at (2, 2). (c) The result of region growing with the seed pixel at
(3, 3). (d) The result of region growing with the running-mean algorithm or
the "current center pixel" method using any seed pixel within the highlighted
region. Figure courtesy of L. Shen [320].
400 Biomedical Image Analysis
already included in the region shown, only N2, N3, and N4 in the case of
4-connectivity, or N2, N3, N4, N6, N7, and N8 in the case of 8-connectivity,
are compared with their current center pixel C for region growing, rather
than with the original seed pixel. For the example shown in Figure 5.17, this
procedure generates the same result as shown in Figure 5.17 (d), independent
of the location of the seed pixel (within the ROI), when using the same additive
tolerance level (T = 3).
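The current-center-pixel rule can be obtained from the same breadth-first traversal by comparing each candidate against the pixel just removed from the queue, rather than against the original seed. The Python sketch below is an illustrative reading of Equation 5.39 with 4-connectivity, not the implementation of [320]; the function name and test data (the image of Figure 5.17) are choices made for this example.

```python
import numpy as np
from collections import deque

def grow_region_center(image, seed, T):
    """Region growing with the "current center pixel" rule (Equation 5.39):
    each 4-connected neighbor Ni of the pixel C just appended to the region
    is admitted if |Ni - C| <= T. Illustrative sketch only."""
    image = np.asarray(image, dtype=float)
    rows, cols = image.shape
    region = np.zeros(image.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        m, n = queue.popleft()          # (m, n) acts as the current center C
        for dm, dn in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            p, q = m + dm, n + dn
            if (0 <= p < rows and 0 <= q < cols and not region[p, q]
                    and abs(image[p, q] - image[m, n]) <= T):
                region[p, q] = True
                queue.append((p, q))
    return region

# The image of Figure 5.17 (a): with T = 3, the full central region is
# recovered regardless of the seed location within the ROI.
img = np.array([[100, 101, 101, 100, 101],
                [100, 127, 126, 128, 100],
                [100, 124, 128, 127, 100],
                [100, 124, 125, 126, 101],
                [101, 100, 100, 101, 102]])
```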

seed

N5 N1 N6
N2 C N3
N7 N4 N8

FIGURE 5.18
Illustration of the concept of the "current center pixel" in region growing.
Figure courtesy of L. Shen [320].

5.4.5 Region growing using a multiplicative tolerance


In addition to the sensitivity of the region to seed pixel selection with additive-
tolerance region growing, the additive tolerance level or absolute difference in
gray level T is not a good criterion for region growing: an additive tolerance
level of 3, while appropriate for a seed pixel value or running mean of, for
example, 127, may not be suitable when the seed pixel gray level or running
mean is at a different level, such as 230 or 10. In order to address this problem,
a relative difference, based upon a multiplicative tolerance level τ, could be
employed. Then, the criterion for region growing could be defined as

|f(m, n) - μ_Rc| ≤ τ μ_Rc,    (5.40)

or

2 |f(m, n) - μ_Rc| / [f(m, n) + μ_Rc] ≤ τ,    (5.41)
where f(m, n) is the gray level of the current pixel being checked for inclusion,
and μ_Rc could stand for the original seed pixel value, the current center pixel
value, or the running-mean gray level. Observe that the two equations above
are comparable to the definitions of simultaneous contrast in Equations 2.7
and 2.8.
The additive and multiplicative tolerance levels both determine the maximum
gray-level deviation allowed within a region, and any deviation less than
this level is considered to be an intrinsic property of the region, or to be noise.
Multiplicative tolerance is meaningful when related to the SNR of a region (or
image), whereas additive tolerance has a direct connection with the standard
deviation of the pixels within the region or a given image.
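The two criteria can be expressed as simple predicates. The sketch below is illustrative (the function name and parameter values are choices made here, not from the source); it also demonstrates the point made above: with a multiplicative tolerance τ, a deviation of 3 gray levels is acceptable around a mean of 127 but is rejected around a mean of 10, a distinction a fixed additive level T cannot make.

```python
def within_multiplicative_tolerance(f_mn, mu, tau, symmetric=False):
    """Multiplicative-tolerance criteria of Equations 5.40 and 5.41; mu may
    be the seed value, the current center pixel value, or the running mean.
    Illustrative sketch only."""
    if symmetric:
        # Equation 5.41: normalized by the average of the two values
        return 2.0 * abs(f_mn - mu) / (f_mn + mu) <= tau
    # Equation 5.40: deviation bounded by a fraction tau of the reference
    return abs(f_mn - mu) <= tau * mu

# tau = 0.025 tolerates |124 - 127| = 3 around 127 (bound 3.175),
# but rejects |7 - 10| = 3 around 10 (bound 0.25).
```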

5.4.6 Analysis of region growing in the presence of noise


In order to analyze the performance of region-growing methods in the presence
of noise, let us assume that the given image g may be modeled as an ideal
image f plus a noise image η, where f consists of a series of strictly uniform,
disjoint, or nonoverlapping regions Ri, i = 1, 2, ..., k, and η includes their
corresponding noise parts ηi, i = 1, 2, ..., k. Mathematically, the image may
be expressed as

g = f + η,    (5.42)

where

f = ∪i Ri,  i = 1, 2, ..., k,    (5.43)

and

η = ∪i ηi,  i = 1, 2, ..., k.    (5.44)

A strictly uniform region Ri is composed of a set of connected pixels f(m, n)
at positions (m, n) whose values equal a constant μi, that is,

Ri = {f(m, n) | f(m, n) = μi}.    (5.45)

The set of regions Ri, i = 1, 2, ..., k, is what we expect to obtain as the
result of segmentation. Suppose that the noise parts ηi, i = 1, 2, ..., k, are
composed of white noise with zero mean and standard deviation σi; then, we
have

g = ∪i (Ri + ηi),  i = 1, 2, ..., k,    (5.46)

and

f = ∪i Ri = g - ∪i ηi,  i = 1, 2, ..., k.    (5.47)

As a special case, when all the noise components have the same standard
deviation σ, that is,

σ1 = σ2 = ... = σk = σ,    (5.48)
and

η1 ≃ η2 ≃ ... ≃ ηk ≃ η,    (5.49)

where the symbol ≃ represents statistical similarity, the image f may be
described as

g ≃ ∪i Ri + η,  i = 1, 2, ..., k,    (5.50)

and

f = ∪i Ri ≃ g - η,  i = 1, 2, ..., k.    (5.51)

Additive-tolerance region growing is well-suited for segmentation of this special
type of image, and an additive tolerance level solely determined by σ may
be used globally over the image. However, such special cases are rare in real
images. A given image generally has to be modeled as in Equation 5.46,
where multiplicative-tolerance region growing may be more suitable, with the
expectation that a global multiplicative tolerance level can be derived for all
of the regions in the given image. Because the multiplicative tolerance level
could be made a function of σi/μi, which is directly related to the SNR,
defined as 10 log10(μi² / σi²) dB for each individual region Ri, such a global
tolerance level can be found if

σ1/μ1 = σ2/μ2 = ... = σk/μk.    (5.52)
5.4.7 Iterative region growing with multiplicative tolerance
Simple region growing with fixed additive or multiplicative tolerance may
provide good segmentation results with images satisfying either Equations 5.48
and 5.49, or Equation 5.52. However, many images do not meet these conditions,
and one may have to employ different additive or multiplicative tolerance
levels at different locations in the given image. This leads to the
problem of finding the appropriate tolerance level for each individual region,
which the iterative multitolerance region-growing approach proposed by Shen
et al. [274, 320] attempts to solve.
When human observers attempt to recognize an object in an image, they
are likely to use the information available in both an object of interest and
its surroundings. Unlike most of the reported boundary detection methods,
the HVS detects object boundaries based not only on the information around
the boundary itself, such as the pixel variance and gradient around or across
the boundary, but also on the characteristics of the object. Shen et al. [274,
320] proposed using the information contained within a region, as well as its
relationship with the surrounding background, to determine the appropriate
tolerance level to grow a region. Such information could be represented by a
set of features characterizing the region and its background. With increasing
tolerance levels obtained using a certain step size, it could be expected that the
values of the feature set at successive tolerance levels will be either the same
or similar (which means that the corresponding regions are similar) when the
region-growing procedure has identified an actual object or region. Suppose
that the feature set mentioned above includes M features; then, the feature
vector Vk at the tolerance level k may be expressed as

Vk = [Vk,1, Vk,2, ..., Vk,M]^T.    (5.53)

The minimum normalized distance dmin between the feature vectors at
successive tolerance levels could be utilized to select the final region:

dmin = min_k d[k] = min_k { Σ_{m=1}^{M} [ (V_{k+1,m} - V_{k,m}) / ((V_{k+1,m} + V_{k,m}) / 2) ]² }^{1/2},    (5.54)

where k = 1, 2, ..., K - 1, and K is the number of tolerance values used.
As an example, the feature set could be chosen to include the coordinates of
the centroid of the region, the number of pixels in the region, a region shape
descriptor such as compactness, and the ratio of the mean pixel value of the
foreground region to the mean pixel value of a suitably defined background.
Figure 5.19 shows the flowchart of the iterative multitolerance region-growing
algorithm. The algorithm starts with a selected pixel, called the seed pixel,
as the first region pixel. Then, the pixel value f(m, n) of every 4-connected
neighbor of the seed (or of all pixels belonging to the region, at later stages of
the algorithm) is checked for the condition described in Equation 5.40. If
the condition is satisfied, the pixel is included in the region. This recursive
procedure is continued until no spatially connected pixel meets the condition
for inclusion in the region. The outermost layer of connected pixels of the
region grown is then treated as the boundary or contour of the region.
In order to permit the selection of the most appropriate tolerance level for
each region, the iterative multitolerance region-growing procedure proposed
by Shen et al. uses a range of the fractional tolerance value τ, from 2/seed to
0.40, with a step size of 1/seed. The specified feature vector is computed for the
region obtained at each tolerance level. The normalized distance between the
feature vectors for successive tolerance levels is calculated. The feature vector
with the minimum distance, as defined in Equation 5.54, is selected, and the
corresponding region is retained as the final object region.
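The selection step of Equation 5.54 is a small computation on the sequence of feature vectors. The sketch below is an illustrative rendering (the function name and test values are choices made here): the pair of successive tolerance levels whose feature vectors change least identifies the region to retain.

```python
import numpy as np

def min_normalized_distance(feature_vectors):
    """Minimum normalized distance between feature vectors at successive
    tolerance levels (Equation 5.54). Returns (d_min, k), where k indexes
    the level whose region changes least as the tolerance is incremented.
    Illustrative sketch only."""
    V = np.asarray(feature_vectors, dtype=float)
    num = V[1:] - V[:-1]               # V_{k+1,m} - V_{k,m}
    den = (V[1:] + V[:-1]) / 2.0       # (V_{k+1,m} + V_{k,m}) / 2
    d = np.sqrt(((num / den) ** 2).sum(axis=1))
    k = int(np.argmin(d))
    return float(d[k]), k

# Toy feature vectors (e.g., [area, compactness-like value]) at four
# tolerance levels: levels 1 and 2 produce identical regions, so the
# region at that plateau is retained.
feats = [[10.0, 4.0], [12.0, 4.0], [12.0, 4.0], [20.0, 8.0]]
```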
Figure 5.20 (a) represents an ideal image, composed of a series of strictly
uniform regions, with two ROIs: a relatively dark, small triangle, and a
relatively bright, occluded circle. Two versions of the image with white noise
added are shown in Figure 5.20 (b) (SNR = 30 dB) and Figure 5.20 (c) (SNR
= 20 dB). As expected according to the model defined in Equation 5.50, the
fixed multiplicative-tolerance region-growing method detects the two ROIs
with properly selected tolerance values τ: 0.02 for SNR = 30 dB and 0.05 for
SNR = 20 dB; the regions are shown highlighted in black and white in Figures

Start

Initialize tolerance range


and step value

Perform region growing with


Increment tolerance level
additive/ multiplicative tolerance

Compute feature vector values

Calculate normalized distance

No
Maximum tolerance
reached?

Yes

Select region with the minimum


normalized distance

End

FIGURE 5.19
Flowchart of the iterative, multitolerance, region-growing algorithm. Figure
courtesy of L. Shen [320].
5.20 (d) and (e). However, with the composite images shown in Figure 5.20
(f) and (g), which were obtained by appending the two noisy images in parts
(b) and (c) of the figure, the fixed-tolerance region-growing approach fails, as
stated above. The detected regions in the lower part are incorrect when the
growth tolerance level is set to be 0.02, as shown in Figure 5.20 (f), whereas
the detection of the occluded circle in the upper part fails with the tolerance
value of 0.05, as shown in Figure 5.20 (g). The iterative region-growing
technique automatically determines the correct growth tolerance level, and
has thereby successfully detected both of the ROIs in the composite image,
as shown in Figure 5.20 (h).

5.4.8 Region growing based upon the human visual system


Despite its strength in automatically determining an appropriate tolerance
level for each region within an image, the iterative multitolerance region-
growing algorithm faces two limitations: the iterative procedure is time-
consuming, and difficulties exist in the selection of the range of the tolerance
value as well as the increment step size. Shen et al. [274, 320, 322] used the
range of 2/seed to 0.40 with a step size of 1/seed. The motivation for defining
the tolerance as being inversely related to the seed pixel value was to avoid the
possibilities of no change between regions grown at successive tolerance levels
due to the use of too small a tolerance value, and of negligible increment
in terms of the general intensity of the region. A possible solution to avoid
searching for the appropriate growth tolerance level is to make use of some of
the characteristics of the HVS.
The multitolerance region-growing algorithm focuses mainly on the properties
of the image to be processed, without consideration of the observer's
characteristics. The main purpose of an image processing method, such as
segmentation, is to obtain a certain result to be presented to human observers
for further analysis, or for further computer analysis. In either case, the result
should be consistent with a human observer's assessment. Therefore, effective
segmentation should be achievable by including certain basic properties of the
HVS.
The HVS is a nonlinear system with a large dynamic range and a bandpass
filter behavior [122, 282]. The filtering property is characterized by the
reciprocal of the threshold contrast CT that is a function of both the
frequency u and the background luminance L. The smallest luminance difference
that a human observer can detect, when an object of a certain size appears
with a certain background luminance level, is defined as the JND, which can
be quantified as [256]

JND = L CT.    (5.55)

A typical variation of threshold contrast as a function of the background
luminance, known as the Weber-Fechner relationship [323], is graphically depicted
in Figure 5.21. The curve can be typically divided into two asymptotic re-

(a) (b) (c)

(d) (e)

(f) (g) (h)


FIGURE 5.20
Demonstration of the multiplicative-tolerance region-growing algorithm.
(a) An ideal test image. (b) Test image with white noise (SNR = 30 dB).
(c) Test image with white noise (SNR = 20 dB). (d) Regions grown with fixed
τ = 0.02 for the image in (b). (e) Regions grown with fixed τ = 0.05 for the
image in (c). (f) Regions grown with fixed τ = 0.02. (g) Regions grown with
fixed τ = 0.05. (h) Regions grown with the iterative algorithm with adaptive
τ. The original composite images in (f) - (h) were obtained by combining
the images in (b) at the top and (c) at the bottom. The detected regions are
highlighted in black for the triangle and in white for the occluded circle in
figures (d) - (h). Figure courtesy of L. Shen [320].
gions: the Rose-de Vries region, where CT decreases when the background
luminance increases, with the relationship described as

CT ∝ 1 / L^0.5,    (5.56)

and the Weber region, where CT is independent of the background luminance,
and the relationship obeys Weber's law:

CT = JND / L = C0 = constant.    (5.57)

-1
10
R
os
Threshold Contrast

e
-d
e
Vr

-2
10
ie
s
R
eg
io
n

Weber Region

-3
10 -2 -1 0 1 2
10 10 10 10 10
Background Luminance, L (Foot-Lamberts)

FIGURE 5.21
A typical threshold contrast curve, known as the Weber-Fechner relationship.
Figure courtesy of L. Shen [320].

It is possible to determine the JND as a function of the background gray


level from psychophysical experiments. In one such study conducted by
Shen 320], a test was set up with various combinations of foreground and
background levels using an image containing a series of square boxes, with
the width ranging from 1 pixel to 16 pixels and a xed space of 7 pixels in
between the boxes. Also included was a series of groups of four, vertical, 64-
pixel lines, with the width ranging from 1 pixel to 6 pixels and a spacing of
the same number of pixels between any two adjacent lines, and a fixed gap
of 12 pixels in between the groups of lines. Figure 5.22 shows one such test
image with the background gray level set to be 100 and the foreground at
200. Based upon the visibility or detection of up to the 2-pixel-wide square
and line group on a monitor, a relationship between the JND and the
background gray level was obtained, as shown in Figure 5.23. In order to obtain
a general JND relation, a large number of trials involving the participation
of a large number of subjects is necessary, along with strict control of the
experimental environment. Regardless, Shen [320] used the JND relationship
illustrated in Figure 5.23 to develop a region-growing algorithm based upon
the characteristics of the HVS.

FIGURE 5.22
Visual test image for determination of the JND as a function of background
gray level (foreground is 200 and background is 100 in the displayed image).
Figure courtesy of L. Shen 320].

The HVS-based region-growing algorithm starts with a 4-connected neigh-


bor-pixel grouping based upon the JND relationships of adjacent pixel gray

20

18

16
Just-noticeable Difference (JND)

14

12

10

0
0 50 100 150 200 250
Background Gray Level (0 - 255)

FIGURE 5.23
A relationship between the JND and the background gray level based upon a
psychophysical experiment. Figure courtesy of L. Shen 320].
levels. The JND condition is defined as

|p1 - p2| ≤ min{JND(p1), JND(p2)},    (5.58)

where p1 and p2 are two connected pixels. This step is followed by the removal
of small regions (defined as regions having fewer than five pixels in the study of
Shen [320]) by merging each with the connected region having the minimum
mean gray-level difference. Then, merging of connected regions is performed
if any two neighboring regions meet the JND condition, with p1 and p2
representing the regions' mean values. The procedure is iterated until no
neighboring region satisfies the JND condition.
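The grouping test of Equation 5.58 reduces to a small predicate once a JND lookup is available. In the sketch below, both the function names and the JND curve are hypothetical: the curve is a simple stand-in for the measured relationship of Figure 5.23 (real values must come from a psychophysical experiment), chosen only so the condition can be exercised.

```python
def satisfies_jnd(p1, p2, jnd):
    """JND grouping condition of Equation 5.58: two connected pixels (or
    two region mean gray levels) may be grouped if their difference does
    not exceed the smaller of their JNDs. Illustrative sketch only."""
    return abs(p1 - p2) <= min(jnd(p1), jnd(p2))

def jnd_curve(gray):
    """Hypothetical stand-in for the measured JND curve of Figure 5.23:
    roughly 10 at gray level 0, rising to about 20 at gray level 255.
    Illustrative values only, not experimental data."""
    return 10.0 + 10.0 * (gray / 255.0)
```

In a full implementation, the same predicate would drive both the initial pixel grouping and the iterative merging of neighboring regions, with `p1` and `p2` replaced by region means in the latter case.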
Figure 5.24 shows the results of region growing with the same test image as
in the test of the tolerance-based region-growing algorithms in Figure 5.20 (f).
The HVS-based region-growing algorithm has successfully segmented the
regions at the two SNR levels. The method is not time-consuming, because a
JND table is used to determine the parameters required.

FIGURE 5.24
Results of the HVS-based region-growing algorithm with the same test image
as in Figure 5.20 (f). Figure courtesy of L. Shen 320].

5.4.9 Application: Detection of calci cations by multitoler-


ance region growing
Microcalcifications are the most important and sometimes the only
mammographic sign in early, curable breast cancer [52, 324]. Due to their
subtlety, detection and classification (as benign or malignant) are two major
problems. Several researchers in the field of mammographic image analysis
[325, 326, 327, 328, 329, 330, 331, 332, 333] have focused attention on the
detection of calcifications. Shen et al. [274] reported on a method to detect
and classify mammographic calcifications based upon a multitolerance region-
growing procedure, shape analysis, and neural networks. The flowchart of the
system is shown in Figure 5.25. The calcification detection algorithm consists
of three steps:
1. selection of seed pixels;
2. detection of potential calcification regions; and
3. confirmation of calcification regions.
Selection of seed pixels: One of the common characteristics of calcifications
in mammograms is that they are relatively bright, due to the higher
X-ray attenuation coefficient (or density) of calcium as compared with other
normal breast tissues. Hence, a simple criterion to select seed pixels to search
for calcifications could be based on the median or the mean value of the
mammogram. The scheme employed by Shen et al. [274] is as follows:
Every pixel with a value greater than the median gray level of the
mammogram is identified as a potential seed pixel for the next
two steps of calcification detection. The pixels identified as above
are processed in sequence by selecting the highest-intensity pixel
remaining, in raster-scan order, as long as the pixel has not been
included in any of the regions already labeled as calcifications.
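The candidate-selection step of this scheme can be sketched in a few lines; the following is an illustrative reading, not the code of Shen et al. (the function name is chosen here). It returns the above-median pixels ordered from brightest to dimmest; a full system would also skip candidates already absorbed into a detected calcification region as the list is consumed.

```python
import numpy as np

def select_seed_candidates(image):
    """Sketch of median-based seed selection: every pixel brighter than the
    median gray level is a candidate, returned in decreasing order of
    intensity. Illustrative only."""
    image = np.asarray(image, dtype=float)
    candidates = np.argwhere(image > np.median(image))
    values = image[candidates[:, 0], candidates[:, 1]]
    order = np.argsort(-values, kind="stable")   # brightest first
    return [(int(r), int(c)) for r, c in candidates[order]]
```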
The selection scheme is simple and effective in most cases, but faces
limitations in analyzing mammograms of dense breasts. Removal of the parts of
the mammographic image outside the breast boundary (see Section 5.9) and
the high-density area of the pectoral muscle (see Section 5.10) could be useful
preprocessing steps.
Detection of potential calcification regions: Calcifications could appear
in regions of varying density in mammograms. Calcifications present
within dense masses, or superimposed by dense tissues in the process of
acquisition of mammograms, could present low gray-level differences or contrast
with respect to their local background. On the other hand, calcifications
present against a background of fat or low-density tissue would possess higher
differences and contrast. The iterative multitolerance region-growing method
presented in Section 5.4.7 could be expected to perform well in this
application, by adapting to variable conditions as above.
In the method proposed by Shen et al. [274] for the detection of calcifications,
the algorithm starts a region-growing procedure with each seed pixel
selected as above. Every 4-connected neighbor f(m, n) of the pixels belonging
to the region is checked for the following condition:

0.5 (1 - τ)(Rmax + Rmin) ≤ f(m, n) ≤ 0.5 (1 + τ)(Rmax + Rmin),    (5.59)

Start

Thresholding image and selection


of seed pixels

Detection of potential calcification


regions by multitolerance region growing
for each seed pixel

Confirmation of calcification regions

Calculation of a shape feature vector

Classification by a neural network

End

FIGURE 5.25
Flowchart of a method for the detection and classification of mammographic
calcifications. Figure courtesy of L. Shen [320].
where Rmax and Rmin are the current maximum and minimum pixel values
of the region being grown, and τ is the growth tolerance. The fractional
tolerance value τ for region growing is increased from 0.01 to 0.40, with a step
size determined as the inverse of the seed pixel's gray level. A feature vector
including compactness, defined as c = 1 - 4π area / perimeter² (see Section 6.2.1 for
details), the (x, y) coordinates of the centroid, and the size or area in number
of pixels, is calculated for the region obtained at each tolerance level. The
normalized distance between the feature vectors for successive tolerance levels
is computed, as given by Equation 5.54. The feature set with the minimum
distance is selected as the final set, and the corresponding region is considered
to be a potential calcification region.
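The growth condition of Equation 5.59 and the compactness feature both reduce to one-line computations; the sketch below is illustrative (function names chosen here). Note that the condition bounds the candidate pixel within a band of fractional half-width τ about the mid-level of the region grown so far, and that compactness is 0 for a circle, increasing toward 1 for elongated or rough contours.

```python
import math

def within_growth_band(f_mn, r_max, r_min, tau):
    """Inclusion condition of Equation 5.59, with r_max and r_min the
    current extreme pixel values of the region being grown.
    Illustrative sketch only."""
    mid2 = r_max + r_min   # twice the mid-level of the current region
    return 0.5 * (1.0 - tau) * mid2 <= f_mn <= 0.5 * (1.0 + tau) * mid2

def compactness(area, perimeter):
    """Compactness c = 1 - 4*pi*area / perimeter**2 (Section 6.2.1)."""
    return 1.0 - 4.0 * math.pi * area / (perimeter ** 2)
```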
Confirmation of calcification regions: Each potential calcification
region detected is treated as a calcification region only if the size S in pixels
and the contrast C, computed as in Equation 2.7, of the region at the final level
meet the following conditions:

5 < S < 2 500,    (5.60)

and

C > 0.20.    (5.61)

The upper limit on the area corresponds to about 6.25 mm² with a pixel
resolution of 50 µm. The background region required to compute C is formed
by using pixels circumscribing the region contour to a thickness of 3 pixels.
The contrast threshold of 0.2 was selected based upon another study on
segmentation and analysis of calcifications [334].
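The confirmation step is a pair of threshold tests; a minimal sketch (function name chosen here for illustration) makes the accept/reject logic of Equations 5.60 and 5.61 explicit.

```python
def confirm_calcification(size_pixels, contrast):
    """Confirmation conditions of Equations 5.60 and 5.61: accept a
    potential region only if 5 < S < 2500 pixels and C > 0.20.
    Illustrative sketch only."""
    return 5 < size_pixels < 2500 and contrast > 0.20
```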
Examples: Two examples of the detection of calcifications by the method
described above are presented in Figures 5.26 and 5.27. Figure 5.26 (a) and
Figure 5.27 (a) show two sections of mammograms of size 512 × 512 pixels
with benign calcifications and malignant calcifications, respectively. Figure
5.26 (b) and Figure 5.27 (b) show the same mammogram sections with the
contours of the calcification regions extracted by the algorithm as described
above.
Sections of size 1 024 × 768, 768 × 512, 512 × 768, and 512 × 768 pixels
of four typical mammograms, from complete images of up to 2 560 × 4 096
pixels, with biopsy-proven calcifications, were used in the study by Shen et
al. [274]. Two of the sections had a total of 58 benign calcifications, whereas
the other two contained 241 ± 10 malignant calcifications. Based upon visual
inspection by a radiologist, the detection rates of the multitolerance region-
growing algorithm were 81% with 0 false calcifications and 85 ± 3% with 29
false calcifications for the benign and malignant mammograms, respectively.
Bankman et al. [335] compared their hill-climbing segmentation algorithm
for detecting microcalcifications with the multitolerance region-growing
algorithm described above. Their results showed that the two algorithms have
similar discrimination powers, based on an ROC analysis conducted with six
mammograms (containing 15 clusters with a total of 124 calcifications). Bankman
et al. stated that "The multitolerance algorithm provides a good solution to
avoid the use of statistical models, local statistic estimators, and the manual
selection of thresholds. However, the cost is multiple segmentations of the
same structure and computation of features during the segmentation of each
structure. ... The segmented regions were comparable ... in many cases."
Details on further analysis of the calcifications detected as above are
provided in Sections 6.6 and 12.7. See Section 5.4.10 for another method for the
detection of calcifications.

5.4.10 Application: Detection of calci cations by linear pre-


diction error
The simple seed selection method used by Shen et al. [274] and described
in Section 5.4.9 encounters limitations in the case of calcifications present
in, or superimposed by, dense breast tissue. Serrano et al. [336] proposed a
method to detect seed pixels for region growing based upon the error of linear
prediction. The 2D linear prediction error computation method proposed by
Kuduvalli and Rangayyan [174, 337, 338] was used to compute the prediction
error directly from the image data, without the need to compute the prediction
coefficients; see Section 11.8.1 for details.
The method proposed by Serrano et al. [336] starts with a prefiltering step.
Considering the small size of microcalcifications, a lowpass filter with a wide
kernel could be expected to remove them from the image while conserving a
background of high density in the image. Conversely, a highpass filter may be
used to detect microcalcifications. Serrano et al. employed a highpass filter
specified as h(m, n) = 1 - g(m, n), where g(m, n) was a lowpass Gaussian
function with a variance of 2.75 pixels; a filter kernel size of 21 × 21 pixels
was chosen. In the next step, a 2D linear prediction error filter [174, 337,
338] was applied to the output of the highpass filter. A pixel was selected
as a seed for the multitolerance region-growing algorithm if its prediction
error was greater than an experimentally determined threshold. This was
based upon the observation that a microcalcification can be seen as a point of
nonstationarity in an approximately homogeneous region or neighborhood in
a mammogram; such a pixel cannot be predicted well by the linear predictor,
and hence leads to a high error.
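The pipeline of highpass prefiltering followed by prediction-error thresholding can be sketched as below. This is a toy illustration with several assumptions made here: the Gaussian is parameterized by sigma = 2.75 (the text quotes a variance of 2.75 pixels), and the predictor is a fixed planar one (each pixel predicted from its west, north, and northwest neighbors), NOT the Kuduvalli-Rangayyan estimator used by Serrano et al.; the threshold is arbitrary. All function names are choices made for this example.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def highpass(image, sigma=2.75, radius=10):
    """Highpass prefilter h = image - Gaussian lowpass (separable);
    radius 10 gives the 21 x 21 support mentioned in the text."""
    k = gaussian_kernel_1d(sigma, radius)
    img = np.asarray(image, dtype=float)
    low = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    low = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, low)
    return img - low

def prediction_error_seeds(image, threshold):
    """Toy stand-in for 2D linear prediction error seeding: pixels whose
    absolute error under a fixed planar predictor (W + N - NW) exceeds
    the threshold are flagged as seeds. Illustrative sketch only."""
    h = highpass(image)
    pred = np.zeros_like(h)
    pred[1:, 1:] = h[1:, :-1] + h[:-1, 1:] - h[:-1, :-1]  # W + N - NW
    error = np.abs(h - pred)
    error[0, :] = 0.0   # no causal support on the first row and column
    error[:, 0] = 0.0
    return error > threshold

# A flat background with one bright, calcification-like spot: the spot is
# a point of nonstationarity and yields a large prediction error.
test = np.full((32, 32), 100.0)
test[16, 16] = 200.0
seeds = prediction_error_seeds(test, threshold=50.0)
```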
The detection algorithm was tested with three mammograms containing a
total of 428 microcalcifications of different nature and diagnosis. The results
obtained with the algorithm were examined by a radiologist, who determined
the accuracy of the detection. Figure 5.28 shows a segment of a mammogram
with several calcifications, the seed pixels identified by thresholding the
prediction error, and the regions obtained by application of the multitolerance
region-growing algorithm to the seed pixels. In comparison with the algorithm
of Shen et al. [274], the detection accuracy of the method of Serrano et al.
was higher, with a smaller number of false detections for the images tested,

(a)

(b)
FIGURE 5.26
Mammogram section with benign calcications. (a) Original image. (b) Im-
age with the contours of the calcication regions detected. The section shown
is of size 512  512 pixels (approximately 2:25 cm  2:25 cm), out of the
full matrix of 1 536  4 096 pixels of the complete mammogram. Repro-
duced with permission from L. Shen, R.M. Rangayyan, and J.E.L. Desautels,
\Detection and classication of mammographic calcications", International
Journal of Pattern Recognition and Articial Intelligence, 7(6): 1403{1416,
1993.
c World Scientic Publishing Co.

(a)

(b)
FIGURE 5.27
Mammogram section with malignant calcications. (a) Original image.
(b) Image with the contours of the calcication regions detected. The sec-
tion shown is of size 512  512 pixels (approximately 2:25 cm  2:25 cm), out
of the full matrix of 1 792  4 096 pixels of the complete mammogram. Repro-
duced with permission from L. Shen, R.M. Rangayyan, and J.E.L. Desautels,
\Detection and classication of mammographic calcications", International
Journal of Pattern Recognition and Articial Intelligence, 7(6): 1403{1416,
1993.
c World Scientic Publishing Co.
although the detection capability was diminished; that is, more calcifications
were missed by the prediction-error method.

FIGURE 5.28
(a) Mammogram section with malignant calcications 234  137 pixels with
a resolution of 160
m. (b) Seed pixels detected by thresholding the predic-
tion error (marked in black). (c) Image with the contours of the calcication
regions detected by region growing from the seed pixels in (b). Reproduced
with permission from C. Serrano, J.D. Trujillo, B. Acha, and R.M. Ran-
gayyan, \Use of 2D linear prediction error to detect microcalcications in
mammograms", CDROM Proceedings of the II Latin American Congress on
Biomedical Engineering, Havana, Cuba, 23{25 May 2001.
c Cuban Society
of Bioengineering.

5.5 Fuzzy-set-based Region Growing to Detect Breast


Tumors

Although mammography is being used for breast cancer screening, the
analysis of masses and tumors on mammograms is, at times, difficult because
developing signs of cancer may be minimal or masked by superimposed
tissues. Additional diagnostic procedures may be recommended when the
original mammogram is equivocal.
Computer-aided image analysis techniques have the potential to improve
the diagnostic accuracy of mammography and reduce the use of adjunctive
procedures and morbidity, as well as health-care costs. Computer analysis can
facilitate the enhancement, detection, characterization, and quantification of
diagnostic features such as the shapes of calcifications and masses, the growth
of tumors into surrounding tissues, and the distortion caused by developing
densities. The annotation of mammograms with objective measures may assist
radiologists in diagnosis.
Computer-aided detection of breast masses is a challenging problem requiring
sophisticated techniques due to the low contrast and poor definition of
their boundaries. Classical segmentation techniques attempt to define
precisely the ROI, such as a calcification or a mass. Shen et al. [274] proposed
thresholding and multitolerance region-growing methods for the detection of
potential calcification regions and extraction of their contours; see Sections
5.4.7 and 5.4.9. Karssemeijer [339], Laine et al. [340], and Miller and
Ramsey [341] proposed methods for tumor detection based on scale-space analysis.
Zhang et al. [342] proposed an automated detection method for the initial
identification of spiculated lesions based on an analysis of mammographic texture
patterns. Matsubara et al. [343] described an algorithm based on an adaptive
thresholding technique for mass detection. Kupinski and Giger [344] presented
two methods for segmenting lesions in digital mammograms: a radial-gradient-
index-based algorithm that considers both the gray-level information and a
geometric constraint, and a probabilistic approach. However, defining criteria
to realize precisely the boundaries of masses in mammograms is difficult. The
problem is compounded by the fact that most malignant tumors possess fuzzy
boundaries, with a slow and extended transition from a dense core region to
the surrounding tissues. (For detailed reviews on the detection and analysis
of breast masses, refer to Rangayyan et al. [163, 345] and Mudigonda et
al. [165, 275]. See Sections 6.7, 7.9, 12.11, and 12.12 for related discussions.)
An alternative approach to the problem of detecting breast masses is to represent tumor or mass regions by fuzzy sets [307]. The most popular algorithm that uses the fuzzy-set approach is the fuzzy C-means algorithm [346, 347, 348]. The fuzzy C-means algorithm uses iterative optimization of an objective function based on weighted similarity measures between the pixels in the image and each cluster center. The segmentation method of Chen and Lee [348] uses fuzzy C-means as a preprocessing step in a Bayesian learning paradigm realized via the expectation-maximization algorithm for edge detection and segmentation of calcifications and masses in mammograms. However, their final result is based on classical segmentation to produce crisp boundaries. Sameti and Ward [349] proposed a lesion segmentation algorithm using fuzzy sets to partition a given mammogram. Their method divides a mammogram into two crisp regions according to a fuzzy membership function and an iterative optimization procedure to minimize an objective function. If more
Detection of Regions of Interest 419
than two regions are required, the algorithm can be applied to each region
obtained in the preceding step using the same procedure. The authors pre-
sented results of application of the method to mammograms with four levels
of segmentation.
Guliato et al. [276] proposed two segmentation methods that incorporate fuzzy concepts. The first method determines the boundary of a tumor or mass by region growing after a preprocessing step based on fuzzy sets to enhance the
ROI. The second segmentation method is a fuzzy region-growing method that
takes into account the uncertainty present around the boundaries of tumors.
These methods are described in the following sections.

5.5.1 Preprocessing based upon fuzzy sets
A mass or tumor typically appears on a mammogram as a relatively dense
region, whose properties could be characterized using local density, gradient,
texture, and other measures. A set of such local properties could be used to define a feature vector of a mass ROI and/or a pixel belonging to the ROI.
Given a feature vector, a pixel whose properties are similar to those repre-
sented by the feature vector of the mass could be assigned a high intensity.
If the properties do not match, the pixel intensity could be made low. At
the end of such a process, the pixels in and around the ROI will be displayed
according to their degree of similarity with respect to the features of the mass
ROI.
A fuzzy set may be defined by assigning to each element considered from the universal set U a value representing its grade of membership in the fuzzy set [350, 351]. The grade corresponds to the degree with which the element is similar to or compatible with the concept represented by the fuzzy set. Let Γ : U → L be a membership function that maps U into L, where L denotes any set that is at least partially ordered. The most commonly used range of values for membership functions is the unit real interval [0, 1]. Crisp sets can be seen as a particular case of fuzzy sets where Γ : U → {0, 1}; that is, the range includes only the discrete values 0 and 1.
The enhancement, and subsequent detection, of an ROI may be achieved by defining an appropriate membership function that evaluates the similarity between the properties of the pixel being considered and those of the ROI itself, given by the feature vector. In this procedure, the original image is mapped to a fuzzy set according to the membership function, which:

- assigns a membership degree equal to 1 to those pixels that possess the same properties as the mass ROI;
- represents the degree of similarity between the features of the mass ROI and those of the pixel being considered;
- exhibits symmetry with respect to the difference between the features of the ROI and those of the pixel being considered; and
420 Biomedical Image Analysis
- decreases monotonically from 1 to 0.
Guliato et al. [276] considered the mean intensity of a seed region, identified by the user, as the ROI feature. A membership function with the characteristics cited above, illustrated in Figure 5.29, is given by the function

    Γ(p) = 1 / (1 + β |A − B|),                              (5.62)

where p is the pixel being processed, A is the feature vector of the mass (gray level), B is the feature vector of the pixel being analyzed, and β defines the opening of the membership function. For large β, the opening is narrow and the function's behavior is strict; for small β, the opening is wide, and the function presents a more permissive behavior.
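As an illustration, the mapping of Equation 5.62 can be sketched in a few lines of Python with NumPy. The function name `fuzzy_membership` and its arguments are illustrative, not from the original work; the ROI feature is taken to be the mean gray level of the seed region.

```python
import numpy as np

def fuzzy_membership(image, roi_mean, beta):
    # Map each pixel to a membership degree in [0, 1]:
    # degree = 1 / (1 + beta * |A - B|), where A is the ROI feature
    # (mean gray level of the seed region) and B is the pixel value.
    return 1.0 / (1.0 + beta * np.abs(image.astype(float) - roi_mean))
```

A large `beta` makes the function strict (the degree falls off quickly with the difference), while a small `beta` is more permissive, matching the behavior described above.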

[Plot: fuzzy membership degree (vertical axis) versus the difference between the feature vector of the ROI and that of each pixel of the original image (horizontal axis).]

FIGURE 5.29
Fuzzy membership function for preprocessing. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.

The fuzzy set obtained by the method described above represents pixels whose properties are close to those of the mass with a high membership degree; the opposite case results in a low membership degree. The membership degree may be used as a scale factor to obtain gray levels and display the result as an image. The contrast of the ROI in the resulting image depends upon the parameter β.
Figures 5.30 (a) and (b) show a 700 × 700-pixel portion of a mammogram with a spiculated malignant tumor and the result of fuzzy-set-based preprocessing with β = 0.007, respectively. It is seen from the image in Figure
5.30 (b) that the pixels in the tumor region (the bright area in the upper-left
part of the image) have higher values than the pixels in other parts of the
image, indicating a higher degree of similarity with respect to the ROI or
seed region. The membership values decrease gradually across the boundary
of the tumor, as expected, due to the malignant nature of the tumor in this
particular case. Note, however, that a few other spatially disconnected regions on the right-hand side of the image also have high values; these regions can be eliminated by further processing, as described in Section 5.5.2.

Figure 5.30 (a)
5.5.2 Fuzzy segmentation based upon region growing
Region growing is an image segmentation technique that groups pixels or subregions into larger regions according to a similarity criterion. Statistical measures provide good tools for defining homogeneous regions. The success of image segmentation is directly associated with the choice of the measures and a suitable threshold. In particular, mean and standard deviation measures are often used as parameters to control region growing; however, these measures are influenced by extreme pixel values. As a consequence, the final shape of the region grown depends upon the strategy used to traverse the image.

Figure 5.30 (b)

Figure 5.30 (c)

(d)
FIGURE 5.30
(a) A 700 × 700-pixel portion of a mammogram with a spiculated malignant tumor. Pixel size = 62.5 μm. (b) Fuzzy-set-based ROI enhancement with β = 0.007. (c) Contour extracted (white line) by region growing with the result in (b). The black line represents the boundary drawn by a radiologist (shown for comparison). β = 0.007, threshold = 0.63. (d) Result of fuzzy region growing with the image in (a) with Δμmax = 45, CVmax = 0.01, β = 0.07. The contour drawn by the radiologist is superimposed for comparison. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.
Furthermore, the algorithm could present unstable behavior; for example, different pixels with the same values that are rejected at an earlier stage may be accepted later on in the region-growing method. It is also possible that the stopping condition is not reached when the gray level in the image increases slowly in the same direction as that of the traversal strategy. Besides, traditional region-growing methods represent the ROI by a classical set, defining precisely the region's boundary. In such a case, the transition information is lost, and the segmentation task becomes a critical stage in the image analysis system. In order to address these concerns, Guliato et al. [276] presented two image segmentation methods: the first based on classical region growing with the fuzzy-set preprocessed image (described in the following paragraphs), and the second based on fuzzy region growing using statistical measures in homogeneity criteria, described in Section 5.5.3.
The pixel values in the fuzzy-set preprocessed image represent the membership degrees of pixels with respect to the ROI as defined by the seed region. To perform contour extraction, the region-growing algorithm needs a threshold value and a seed region that lies inside the ROI. The region-growing process starts with the seed region. Four-connected neighboring pixels that are above the threshold are labeled as zero, the neighbors of the pixels labeled as zero are inspected, and the procedure is continued. If a connected pixel is below the threshold, it is labeled as one, indicating a contour pixel, and its neighborhood is not processed. The recursive process continues until all spatially connected pixels fail the test for inclusion in the region. A postprocessing step is included to remove isolated pixels and regions that lie within the outermost contour.
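The labeling scheme above can be sketched as a breadth-first traversal. This is only an illustrative reading of the method, not the authors' implementation; the function `grow_contour` and its parameters are hypothetical names, a single seed pixel stands in for the seed region, and the postprocessing step is omitted.

```python
from collections import deque
import numpy as np

def grow_contour(fuzzy_img, seed, threshold):
    # Labels: -1 = unvisited, 0 = region pixel, 1 = contour pixel.
    rows, cols = fuzzy_img.shape
    label = np.full((rows, cols), -1)
    queue = deque([seed])
    label[seed] = 0
    while queue:
        r, c = queue.popleft()
        # Inspect the 4-connected neighbors of each region pixel.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and label[rr, cc] == -1:
                if fuzzy_img[rr, cc] > threshold:
                    label[rr, cc] = 0      # region pixel: keep growing
                    queue.append((rr, cc))
                else:
                    label[rr, cc] = 1      # contour pixel: not expanded
    return label
```

Because growth stops only at below-threshold pixels, every path out of the region ends at a contour pixel, which is why the procedure always yields a closed contour.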
The algorithm is simple and easy to implement, and will always produce closed contours. The method was evaluated with a number of synthetic test images as well as medical images such as CT and nuclear medicine images, and produced good results even in the presence of high levels of noise [352].
Examples: Figure 5.31 shows the results of the method with a synthetic
image for three representative combinations of parameters. The three results
exhibit a good degree of similarity and illustrate the robustness of the method
in the presence of noise.
Figure 5.30 (c) shows the contour extracted for the mammogram in part (a) of the same figure. Figure 5.32 (a) shows a part of a mammogram with a circumscribed benign mass; part (b) of the figure shows the corresponding enhanced image. Figure 5.32 (c) shows the contour obtained: the image is superimposed with the contour obtained by region growing in white; the contour in black is the boundary drawn independently by an experienced radiologist, shown for comparison.
Results of application to mammograms: Guliato et al. [276] tested their method with 47 mammograms including 25 malignant tumors and 22 benign masses, and observed good agreement between the contours given by the method and those drawn independently by a radiologist. The seed region and threshold value were selected manually for each case; the threshold

(a) (b)

(c) (d)
FIGURE 5.31
Illustration of the effects of seed pixel and threshold selection on fuzzy-set preprocessing and region growing. (a) Original image (128 × 128 pixels) with additive Gaussian noise, with σ = 12 and SNR = 2.66. Results with (b) seed pixel (60, 60) and threshold = 0.82; (c) seed pixel (68, 60) and threshold = 0.85; (d) seed pixel (68, 80) and threshold = 0.85. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.

Figure 5.32 (a)

values varied between 0.57 and 0.90 for the images used. The same value of the membership function parameter β = 0.007 was used to process all of the images in the study. It was observed that the result of segmentation depended upon the choice of the seed to start region growing and the threshold. Automatic selection of the seed pixel or region and the threshold is a difficult problem that was not addressed in the study. It was observed that the threshold could possibly be derived as a function of the statistics (such as the mean and standard deviation) of the fuzzy-set preprocessed image.
Measure of fuzziness: In order to compare the results obtained by segmentation with the contours of the masses drawn by the radiologist, Guliato et al. [276, 277] developed a method to aggregate the segmented region with the reference contour. The procedure can aggregate not only two contours but also a contour with a fuzzy region, and hence is more general than classical intersection. The method uses a fuzzy fusion operator that generalizes classical intersection of sets, producing a fuzzy set that represents the agreement present between the two inputs; see Section 5.11 for details. The result of fusion was evaluated by a measure of fuzziness computed as

    f(X) = ( Σ_{p ∈ X} [1 − |2 Γ(p) − 1|] ) / |X|,           (5.63)

Figure 5.32 (b)

Figure 5.32 (c)



(d)
FIGURE 5.32
(a) A 1 024 × 1 024-pixel portion of a mammogram with a circumscribed benign mass. Pixel size = 50 μm. (b) Fuzzy-set-based ROI enhancement with β = 0.007. (c) Contour extracted (white line) by region growing with the result in (b). The black line represents the boundary drawn by a radiologist (shown for comparison). β = 0.007, threshold = 0.87. (d) Result of fuzzy region growing with the image in (a) with Δμmax = 15, CVmax = 0.01, β = 0.07. The contour drawn by the radiologist is superimposed for comparison. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.
where X is the result of aggregation, and Γ(p) is the degree of membership of the pixel p. The denominator in the expression above normalizes the measure with respect to the area of the result of fusion, resulting in a value in the range [0, 1], with zero representing perfect agreement and unity indicating no intersection between the two inputs.
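Equation 5.63 amounts to averaging how far each membership degree lies from the crisp values 0 and 1. A small sketch (with an illustrative function name) makes this concrete:

```python
import numpy as np

def measure_of_fuzziness(memberships):
    # f(X) = sum over p of [1 - |2*degree(p) - 1|], normalized by |X|.
    # Crisp degrees (0 or 1) contribute nothing; degrees near 0.5
    # contribute the most.
    m = np.asarray(memberships, dtype=float)
    return float(np.sum(1.0 - np.abs(2.0 * m - 1.0)) / m.size)
```

A perfectly crisp agreement gives 0, while a region whose membership degrees all equal 0.5 gives 1, matching the interpretation in the text.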
The values of the measure of fuzziness obtained for the 47 mammograms in the study were in the range (0.13, 0.85), with the mean and standard deviation being 0.42 and 0.17, respectively. The measure of fuzziness was less than 0.5 for 34 out of the 47 cases. In most cases where the measure of fuzziness was greater than 0.5, the segmented region was smaller than, but contained within, the region indicated by the contour drawn by the radiologist. Regardless of the agreement in terms of the measure of fuzziness, it was argued that, for a spiculated lesion, there is no definite number of spicules that characterizes the lesion as malignant. The method captured the majority of the spicules in the cases analyzed, providing sufficient information for diagnosis (according to the analysis of the results performed by an expert radiologist).
Assessment of the results by pattern classification: In order to derive a parameter for discriminating between benign masses and malignant tumors, the following procedure was applied by Guliato et al. [276, 353]. A morphological erosion procedure with a square structuring element of size equal to 25% of the shorter dimension of the smallest rectangle containing the contour was applied to the contour, so that the core of the ROI was separated from the boundary. A parameter labeled as DCV was computed from the fuzzy-set preprocessed image, by taking the difference between the coefficient of variation (CV) of the entire ROI and that of the core of the ROI. A high value of DCV represents an inhomogeneous ROI, which could be indicative of a malignant tumor. The probability of malignancy based upon DCV was computed using the logistic regression method (see Section 12.5 for details); the result is illustrated in Figure 5.33. Several cut points were analyzed with the curve; the cut point of 0.02 resulted in all 22 benign masses and 16 out of the 25 malignant tumors being correctly classified, yielding a high specificity of 1.0 but a low sensitivity of 0.64.

5.5.3 Fuzzy region growing
Guliato et al. [276] also proposed a fuzzy region-growing algorithm to obtain mass regions in mammograms. In this method, an adaptive similarity criterion is used for region growing, with the mean and the standard deviation of the pixels in the region being grown as control parameters. The region is represented by a fuzzy set to preserve the transition information around boundary regions.
The algorithm starts with a seed region that lies inside the ROI and spreads by adding to the region 8-connected pixels that have similar properties. The homogeneity of the region is evaluated by calculating the mean (μ), the standard deviation (σ), and the coefficient of variation CV = σ/μ.

FIGURE 5.33
The probability of malignancy (vertical axis) derived from the parameter DCV (horizontal axis). Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.

Let Δμmax, CVmax, and β be the control parameters for region growing. Δμmax specifies the maximum allowed difference between the value of the pixel being analyzed and the mean μ of the subregion already grown. CVmax indicates the desired degree of homogeneity between two subregions; β defines the opening of the membership function. Let p be the next pixel to be analyzed and I(p) be the value of p. The segmentation algorithm is executed in two steps:

1. Test |I(p) − μ| ≤ Δμmax. If this condition is not satisfied, then the pixel is labeled as rejected. If the condition is satisfied, p is temporarily added to the subregion, and μnew and σnew are calculated.

2. Test |σ − σnew| / μnew ≤ CVmax. If the condition is satisfied, then p must definitely be added to the subregion and labeled as accepted, and μ and σ must be updated, that is, μ = μnew and σ = σnew. If the condition is not satisfied, p is added to the subregion with the label accepted with restriction, and μ and σ are not modified.
The second step given above analyzes the distortion that the pixel p can produce if added to the subregion. At the beginning of the process, the region includes all the pixels in the seed region, and the standard deviation is set to zero. While the standard deviation of the region being grown is zero, a specific condition is used in the second step: |σ − σnew| / μnew ≤ 2 CVmax. The parameter CVmax works as a filter that avoids the possibility that the mean and standard deviation measures suffer undesirable modification during the region-growing process. Furthermore, the algorithm processes pixels in expanding concentric squares around the seed region, evaluating each pixel only once. These steps provide stability to the algorithm.
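The two-step test can be sketched for a single candidate pixel as follows. This is an illustrative reading of the algorithm, with hypothetical function and parameter names (`dmu_max` standing for Δμmax), not the authors' code.

```python
import statistics

def classify_pixel(value, mu, sigma, region_values, dmu_max, cv_max):
    # Step 1: reject pixels that differ from the region mean by more
    # than dmu_max.
    if abs(value - mu) > dmu_max:
        return "rejected", mu, sigma
    # Tentatively add the pixel and recompute the statistics.
    candidate = region_values + [value]
    mu_new = statistics.mean(candidate)
    sigma_new = statistics.pstdev(candidate)
    # Step 2: while sigma is still zero, the tolerance is doubled,
    # as described in the text.
    limit = 2.0 * cv_max if sigma == 0.0 else cv_max
    if abs(sigma - sigma_new) / mu_new <= limit:
        return "accepted", mu_new, sigma_new
    # Otherwise keep the pixel, but leave mu and sigma unchanged.
    return "accepted with restriction", mu, sigma
```

Keeping μ and σ frozen for pixels accepted with restriction is what makes the criterion adaptive yet stable: outlying pixels join the region without dragging its statistics.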
The membership function that maps the pixel values of the region resulting from the preceding procedure to the unit interval [0, 1] could be based upon the mean of the region. Pixels that are close to the mean will have a high membership degree, and in the opposite case, a low membership degree. The desirable characteristics of the membership function are:

- the membership degree of the seed pixel or region must be 1;
- the membership degree of a pixel labeled as rejected must be 0;
- the membership function must be as independent of the seed pixel or region as possible;
- the membership degree must represent the proximity between a pixel labeled as accepted or accepted with restriction and the mean of the resulting region;
- the function must be symmetric with respect to the difference between the mean and the pixel value; and
- the function must decrease monotonically from 1 to 0.
The membership function Γ used by Guliato et al. [276] is illustrated in Figure 5.34, where a = |mean of the seed region − μ| and b = Δμmax. The value of a pixel p is mapped to the fuzzy membership degree Γ(p) as follows:

- if |I(p) − μ| ≤ a, then Γ(p) = 1;
- else, if |I(p) − μ| > b, then Γ(p) = 0;
- else, Γ(p) = 1 / (1 + β |I(p) − μ|).
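The piecewise rule above translates directly into code. The sketch below uses hypothetical names (`seed_mean` for the mean of the seed region, `dmu_max` for Δμmax) and is only an illustration of the mapping.

```python
def region_membership(value, mu, seed_mean, dmu_max, beta):
    # a = |mean of the seed region - mu|, b = dmu_max (see text).
    a = abs(seed_mean - mu)
    d = abs(value - mu)
    if d <= a:
        return 1.0          # at least as close to the mean as the seed
    if d > dmu_max:
        return 0.0          # such a pixel would have been rejected
    return 1.0 / (1.0 + beta * d)
```

Pixels whose values lie within the seed-region band receive degree 1, rejected pixels receive 0, and the remaining pixels decay smoothly with their distance from the region mean.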
The method was tested on several synthetic images with various levels of noise. Figure 5.35 illustrates three representative results of the method with a synthetic image and different seed pixels. The results do not differ significantly, indicating the low effect of noise on the method.
Results of application to mammograms: The fuzzy region for the malignant tumor shown in Figure 5.30 (a) is illustrated in part (d) of the same figure. Figure 5.32 (d) shows the fuzzy region obtained for the benign mass shown in part (a) of the same figure.
An interactive graphical interface was developed by Guliato et al. [353], using an object-oriented architecture with controller classes. Some of the features of the interface are fast and easy upgradability, portability, and threads

[Plot: membership degree (vertical axis) versus the difference between the mean and the pixel value (horizontal axis), with breakpoints at a and b.]

FIGURE 5.34
Fuzzy membership function for region growing, where a = |mean of the seed region − μ| and b = Δμmax. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.
to support parallelism between tasks. The interface integrates procedures to detect contours using fuzzy preprocessing and region growing, extract fuzzy regions using fuzzy region growing, compute statistical parameters, and classify masses and tumors as benign or malignant. The interface also provides access to basic image processing procedures, including zooming in or out, filters, histogram operations, the Bézier method to manipulate contours, and image format conversion.
Guliato et al. [276] applied the fuzzy region-growing method to 47 test images, maintaining the same values of β = 0.07 and CVmax = 0.01, and varying only the parameter Δμmax. The values of the parameters were selected by comparing the results of segmentation with the contours drawn by a radiologist. The Δμmax parameter ranged from 5 to 48 for the 47 masses and tumors analyzed.
The fuzzy regions obtained for the 47 mammograms were compared objectively with the corresponding contours drawn by the radiologist, by computing the measure of fuzziness as in Equation 5.63. The values were distributed over the range (0.098, 0.82), with the mean and standard deviation being 0.46 and 0.19, respectively. The measure of fuzziness was smaller than 0.5 in 27 of the 47 cases analyzed. Regardless of this measure of agreement, it was found that the fuzzy regions segmented contained adequate information to facilitate discrimination between benign masses and malignant tumors, as described next.

(a) (b)

(c) (d)
FIGURE 5.35
Illustration of the effects of seed pixel selection on fuzzy region growing. (a) Original image (128 × 128 pixels) with Gaussian noise, with σ = 12 and SNR = 2.66. Results with (b) seed pixel (60, 60), Δμmax = 18, CVmax = 0.007, β = 0.01; (c) seed pixel (68, 60), Δμmax = 18, CVmax = 0.007, β = 0.01; (d) seed pixel (68, 80), Δμmax = 18, CVmax = 0.007, β = 0.01. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.
Assessment of the results by pattern classification: In order to derive parameters for pattern classification, Guliato et al. [276] analyzed the characteristics of a fuzzy ribbon, defined as the connected region whose pixels possess membership degrees less than unity and separate the tumor core from the background, as illustrated in Figure 5.36. Shape factors of mass contours as well as measures of edge sharpness and texture have been proposed for the purpose of classification of breast masses [163, 165, 275, 345, 354]; see Sections 6.7, 7.9, 12.11, and 12.12 for related discussions. However, important information is lost in analysis based on crisply defined contours: the uncertainty present in and/or around the ROI is not considered. Guliato et al. evaluated the potential use of statistical measures of each segmented fuzzy region and of its fuzzy ribbon as tools to classify masses as benign or malignant. Observe that the fuzzy ribbon of the malignant tumor in Figure 5.36 (a) contains more pixels with low values than that of the benign mass in part (b) of the same figure. This is due to the fact that, in general, malignant tumors possess ill-defined boundaries, whereas benign masses are well-circumscribed. Based upon this observation, Guliato et al. computed the coefficient of variation CVfr of the membership values of the pixels lying only within the fuzzy ribbon, and the ratio fr of the number of pixels with membership degree less than 0.5 to the total number of pixels within the fuzzy ribbon. It was expected that the fuzzy ribbons of malignant tumors would possess higher CVfr and fr than those of benign masses.
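As an illustration, the two ribbon statistics just described can be computed directly from the membership degrees of the ribbon pixels; the function name `ribbon_features` is hypothetical.

```python
import numpy as np

def ribbon_features(ribbon_memberships):
    # CVfr: coefficient of variation (std / mean) of the membership
    # degrees within the fuzzy ribbon.
    # frac_low: fraction of ribbon pixels with degree below 0.5
    # (the ratio denoted fr in the text).
    m = np.asarray(ribbon_memberships, dtype=float)
    cv_fr = float(m.std() / m.mean())
    frac_low = float(np.count_nonzero(m < 0.5) / m.size)
    return cv_fr, frac_low
```

A ribbon with many low membership degrees (an ill-defined, possibly malignant boundary) yields larger values of both statistics than the narrow, crisp ribbon of a well-circumscribed mass.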
In pattern classification experiments, discrimination between benign masses and malignant tumors with the parameter fr had no statistical significance. The probability of malignancy curve based upon CVfr, computed using the logistic regression method (see Section 12.5 for details), is illustrated in Figure 5.37. The cut point of 0.18 resulted in the correct classification of 20 out of 25 malignant tumors and 20 out of 22 benign masses processed, leading to a sensitivity of 0.8 and a specificity of 0.9.
The fuzzy segmentation techniques described above represent the ROI by fuzzy sets instead of crisp sets as in classical segmentation. The results of the fuzzy approach agree well with visual perception, especially at transitions around boundaries. The methods allow the postponement of the crisp decision to a higher level of image analysis. For further theoretical notions related to fuzzy segmentation and illustrations of application to medical images, see Udupa and Samarasekera [355] and Saha et al. [356, 357].

5.6 Detection of Objects of Known Geometry
Figure 5.36 (a)

Occasionally, images contain objects that may be represented in an analytical form, such as straight-line segments, circles, ellipses, and parabolas. For example, the edge of the pectoral muscle appears as an almost-straight line
in MLO mammograms; benign calcifications and masses appear as almost-circular or oval objects in mammograms; several types of cells in pathology specimens have circular or elliptic boundaries, and some may have nearly rectangular shapes; and parts of the boundaries of malignant breast tumors may be represented using parabolas. The detection, modeling, and characterization of objects as above may be facilitated by prior knowledge of their shapes. In this section, we shall explore methods to detect straight lines and circles using the Hough transform.

5.6.1 The Hough transform
Hough [358] proposed a method to detect straight lines in images based upon the representation of straight lines in the image (x, y) space using the slope-intercept equation

    y = m x + c,                                             (5.64)

where m is the slope and c is the position where the line intercepts the y axis; see Figure 5.38. In the Hough domain or space, straight lines are characterized by the pair of parameters (m, c); the Hough space is also known as the parameter space. A disadvantage of this representation is that both m and c have unbounded ranges, which creates practical difficulties in the computa-

(b)
FIGURE 5.36
The fuzzy ribbons of (a) the malignant tumor in Figure 5.30 (a) and (b) the benign mass in Figure 5.32 (a). Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.


FIGURE 5.37
The probability of malignancy (vertical axis) derived from the parameter CVfr (horizontal axis). Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Segmentation of breast tumors in mammograms using fuzzy sets", Journal of Electronic Imaging, 12(3): 369–378, 2003. © SPIE and IS&T.

tional representation of the (m, c) space. In order to overcome this limitation, Duda and Hart [359] proposed the representation of straight lines using the normal parameters (ρ, θ) as

    ρ = x cos θ + y sin θ;                                   (5.65)

see Figure 5.38. This representation has the advantage that θ is limited to the range [0, π] (or [0, 2π]), and ρ is limited by the size of the given image. The origin may be chosen to be at the center of the given image or at any other convenient point; the limits of the parameters (ρ, θ) are affected by the choice of the origin.
A procedure for the detection of straight lines using this parametric representation is described in the next section. The Hough transform may be extended for the detection of any curve that may be represented in a parametric form; see Rangayyan and Krishnan [360] for an application of the Hough transform to the identification of linear, sinusoidal, and hyperbolic frequency-modulated components of signals in the time-frequency plane.

5.6.2 Detection of straight lines
Suppose we are given a digital image that contains a straight line. Let the pixels along the line be represented as {x(n), y(n)}, n = 0, 1, 2, …, N − 1, where N is the number of pixels along the line. It is assumed that the image has been binarized, such that the pixels that belong to the line have the value
FIGURE 5.38
Parametric representation of a straight line in three coordinate systems: (x, y), (m, c), and (ρ, θ).

1, and all other pixels have the value 0. It is advantageous if the line is one-pixel thick; otherwise, several lines could exist within a thick line.
If the normal parameters of the line are (ρ0, θ0), all pixels along the line satisfy the relationship

    ρ0 = x(n) cos θ0 + y(n) sin θ0.                          (5.66)

For a given pixel {x(n), y(n)}, this represents a sinusoidal curve in the (ρ, θ) parameter space; it follows that the curves for all the N pixels intersect at the point (ρ0, θ0).
The following properties of the above representation follow [359]:

- A point in the (x, y) space corresponds to a sinusoidal curve in the (ρ, θ) parameter space.
- A point in the (ρ, θ) space corresponds to a straight line in the (x, y) space.
- Points lying on the same straight line in the (x, y) space correspond to curves through a common point in the parameter space.
- Points lying on the same curve in the parameter space correspond to lines through a common point in the (x, y) space.
Based upon the discussion above, a procedure to detect straight lines is as
follows:
1. Discretize the (ρ, θ) parameter space into bins by quantizing ρ and θ
as ρₖ, k = 0, 1, 2, …, K − 1, and θₗ, l = 0, 1, 2, …, L − 1; the bins
are commonly referred to as accumulator cells. Suitable limits may be
imposed on the ranges of the parameters (ρ, θ).
2. For each point in the given image that has a value of 1, increment by
1 each accumulator cell in the (ρ, θ) space that satisfies the relationship
ρ = x(n) cos θ + y(n) sin θ. Note that exact equality needs to be translated
to a range of acceptance depending upon the discretization step
size of the parameter space.
3. The coordinates of the point of intersection of all the curves in the
parameter space provide the parameters of the line. This point will
have the highest count in the parameter space.
The procedure given above assumes the existence of a single straight line in
the image. If several lines exist, there will be the need to search for all possible
points of intersection of several curves (or the local maxima). Note that the
count in a given accumulator cell represents the number of pixels that lie on
a straight line or several straight-line segments that have the corresponding
(ρ, θ) parameters. A threshold may be applied to detect only lines that have a
certain minimum length (number of pixels). All cells in the parameter space
that have counts above the threshold may be taken to represent straight lines
(or segments) with the corresponding (ρ, θ) values and numbers of pixels.
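The three-step accumulator procedure above can be sketched in pure Python (a minimal illustration, not the book's implementation; the binary image is assumed to be a list of lists indexed as image[y][x], and the quantization of ρ into 151 bins over [−75, 75] and θ into 1° steps over [0°, 180°) is an arbitrary choice):

```python
import math

def hough_lines(image, n_theta=180, n_rho=151, rho_max=75.0):
    """Accumulate votes in the (rho, theta) space for each pixel with value 1."""
    acc = [[0] * n_theta for _ in range(n_rho)]
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value != 1:
                continue
            for k in range(n_theta):           # theta sampled in 1-degree steps
                theta = math.radians(k * 180.0 / n_theta)
                rho = x * math.cos(theta) + y * math.sin(theta)
                l = round((rho + rho_max) * (n_rho - 1) / (2.0 * rho_max))
                if 0 <= l < n_rho:
                    acc[l][k] += 1             # one vote per accumulator cell
    return acc

def best_line(acc, n_theta=180, n_rho=151, rho_max=75.0):
    """Return (rho, theta_degrees, count) of the cell with the highest count."""
    l, k = max(((l, k) for l in range(n_rho) for k in range(n_theta)),
               key=lambda lk: acc[lk[0]][lk[1]])
    rho = -rho_max + l * (2.0 * rho_max) / (n_rho - 1)
    return rho, k * 180.0 / n_theta, acc[l][k]
```

For a horizontal line of 20 pixels at y = 5, for example, the highest-count cell is found near (ρ, θ) = (5, 90°) with a count of 20, the number of pixels on the line.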
Examples: Figure 5.39 (a) shows an image with a single straight line,
represented by the parameters (ρ, θ) = (20, 30°). The limits of the x and y
axes are ±50, with the origin at the center of the image. The Hough transform
of the image is shown in part (b) of the figure in the (ρ, θ) parameter space.
The maximum value in the parameter space occurs at (ρ, θ) = (20, 30°).
An image containing two lines with (ρ, θ) = (20, 30°) and (−50, 60°) is
shown in Figure 5.40 (a), along with its Hough transform in part (b) of the
figure. (The value of ρ was considered to be negative for normals to lines
extending below the horizontal axis in the image, with the origin at
the center of the image; the range of θ was defined to be [0°, 180°]. It is also
possible to maintain ρ to be positive, with the range of θ extended to [0°, 360°].)
The parameter space clearly demonstrates the expected sinusoidal patterns,
as well as two peaks at the locations corresponding to the parameters of the
two lines present in the image. Observe that the intensity of the point at
the intersection of the sinusoidal curves for the second line (the lower of the
two bright points in the parameter space) is less than that for the first line,
reflecting its shorter length.
The application of the Hough transform to detect the pectoral muscle in
mammograms is described in Section 5.10. See Section 8.6 for further discussion
on the Hough transform, and for a modification of the Hough transform
by inclusion of the Radon transform.

FIGURE 5.39
(a) Image with a straight line with (ρ, θ) = (20, 30°). The limits of the x and y
axes are ±50, with the origin at the center of the image. (b) Hough transform
parameter space for the image. The display intensity is log(1 + accumulator
cell value). The horizontal axis represents θ = [0°, 180°]; the vertical axis
represents ρ = [−75, 75].

5.6.3 Detection of circles

The Hough transform may be extended to other curves that have parametric
representations. For example, all points along the perimeter of a circle of radius c centered at
(x, y) = (a, b) satisfy the relationship

(x − a)² + (y − b)² = c².  (5.67)

Any circle is represented by a single point in the 3D (a, b, c) parameter space.
The points along the perimeter of a circle in the (x, y) plane describe a right-circular
cone in the (a, b, c) parameter space. The algorithm for the detection
of straight lines (described in Section 5.6.2) may be easily extended for the
detection of circles using this representation.
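A direct extension of the accumulator idea to the 3D (a, b, c) space can be sketched as follows (a simplified pure-Python illustration; sampling the candidate centers at 2° steps and storing votes in a dictionary are choices made here, not part of the book's description):

```python
import math

def hough_circles(edge_pixels, size, radii):
    """Vote in the (a, b, c) space: each edge pixel lies on circles of every
    candidate radius c whose centers (a, b) form a circle of radius c around
    the pixel; votes concentrate at the true (center, radius) combination."""
    acc = {}                                   # (a, b, c) -> vote count
    for (x, y) in edge_pixels:
        for c in radii:
            for deg in range(0, 360, 2):       # sample candidate centers
                t = math.radians(deg)
                a = round(x - c * math.cos(t))
                b = round(y - c * math.sin(t))
                if 0 <= a < size and 0 <= b < size:
                    acc[(a, b, c)] = acc.get((a, b, c), 0) + 1
    return max(acc, key=acc.get)               # (a, b, c) with the highest count
```

Applied to the edge pixels of a circle of radius 10 centered at (15, 15) in a 30 × 30 grid, the peak of the accumulator falls at radius c = 10 with the center near (15, 15).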
Example: A circle of radius 10 pixels, centered at (x, y) = (15, 15) in a
30 × 30 image, is shown in Figure 5.41. The Hough parameter (a, b, c) space

FIGURE 5.40
(a) Image with two straight lines with (ρ, θ) = (20, 30°) and (−50, 60°). The
limits of the x and y axes are ±50, with the origin at the center of the image.
(b) Hough transform parameter space for the image. The display intensity is
log(1 + accumulator cell value). The horizontal axis represents θ = [0°, 180°];
the vertical axis represents ρ = [−75, 75].
of the circle is shown in Figure 5.42. The parameter space demonstrates a
clear peak at (a, b, c) = (15, 15, 10), as expected.

FIGURE 5.41
A 30 × 30 image with a circle of radius 10 pixels, centered at (x, y) = (15, 15).

An image derived from the scar-tissue collagen fiber image in Figure 1.5 (b)
is shown in Figure 5.43 (a). The image was binarized with a threshold equal
to 0.8 of its maximum value. In order to detect the edges of the objects in
the image, an image was created with the value zero at all pixels having the
value zero and also having all of their 4-connected pixels equal to zero in the
binarized image. The same step was applied to all pixels with the value of one.
All remaining pixels were assigned the value of one. Figure 5.43 (b) shows
the result that depicts only the edges of the fibers, which are nearly circular
in cross-section. Observe that some of the object boundaries are incomplete
and not exactly circular. The Hough parameter (a, b, c) space of the image is
shown in Figure 5.44 for circles of radius in the range 1–12 pixels. Several
distinct peaks are visible in the planes for radius values of 4, 5, 6, and 7 pixels;
the locations of the peaks give the coordinates of the centers of the circles that
are present in the image and their radii. The parameter space could be further
processed and thresholded to detect automatically the peaks present, which
relate to the circles in the image. Prior knowledge of the range of possible
radius values could assist in the process.
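The interior-suppression rule described above (a pixel is zeroed if it and all of its 4-connected neighbors share the same value, and is set to one otherwise) can be sketched as follows (a minimal illustration; pixels on the image border are handled by simply ignoring neighbors that fall outside the image):

```python
def edge_map(binary):
    """Keep only pixels whose 4-connected neighborhood is not uniform:
    a pixel becomes 0 if it and all of its 4-neighbors share the same value,
    and 1 otherwise, so edges on both sides of each boundary are retained."""
    rows, cols = len(binary), len(binary[0])
    edges = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            v = binary[i][j]
            neighbors = [binary[i + di][j + dj]
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < rows and 0 <= j + dj < cols]
            if any(n != v for n in neighbors):  # mixed neighborhood: edge pixel
                edges[i][j] = 1
    return edges
```

For a 3 × 3 block of ones in a 5 × 5 image, the center of the block and the far corners of the image are suppressed, while pixels on either side of the block boundary survive.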
Figure 5.43 (c) shows the parameter space plane for a radius of 5 pixels,
superimposed on a reversed version of the edge image in Figure 5.43 (b); the
edges are shown in black. The composite image demonstrates clearly that
the peaks in the parameter space coincide with the centers of nearly circular
objects, with an estimated radius of 5 pixels; no peaks or high values are
present within smaller or larger objects. A similar composite image is shown
in part (d) of the figure for a radius of 7 pixels. Similar results are shown in
Figures 5.45 and 5.46 for another TEM image of a normal ligament sample.

FIGURE 5.42
Hough parameter (a, b, c) space of the circle image in Figure 5.41. Each image
is of size 30 × 30 pixels, and represents the range of the (a, b) coordinates of
the center of a potential circle. The series of images represents the various
planes of the (a, b, c) parameter space with c = 1, 2, …, 36 going left to right
and top to bottom, representing the radius of potential circles. The intensity
of the parameter space values has been enhanced with the log operation. The
maximum value in the Hough space is located at the center of the plane for
c = 10.
Observe that the Hough transform works well even when the given image has
incomplete and slightly distorted versions of the pattern being represented.
Frank et al. [33] performed an analysis of the diameter distribution of collagen
fibers in rabbit ligaments. Scar-tissue samples from injured and healing
ligaments were observed to have an almost-uniform distribution of fiber diameter
in the range 60–70 nm, whereas normal samples were observed to
have a wider range of diameter, with an average value of about 150 nm.

5.7 Methods for the Improvement of Contour or Region Estimates

It is often the case that the contour of an ROI provided by an image processing
technique does not satisfy the user. The user may wish to impose conditions,
such as smoothness, on the contour. It may also be desirable to have the
result authenticated by an expert in the field of application. In such cases,
the need may arise to modify the contour. A few techniques that may assist
in this task are summarized below.
Polygonal and parabolic models: The contour on hand may be segmented
into significant parts by selecting control points or by detecting its
points of inflection (see Section 6.1.3). The contour in its entirety, or its
segments, may then be approximated by parametric curves, such as a polygon
[245, 361] (see Section 6.1.4) or parabolas (see Section 6.1.5). Minor artifacts,
details, or errors in the contour are removed by the parametric models,
to the extent permitted by the specified tolerance or error.
B-spline and Bezier curves: Bezier polynomials may be used to fit
smooth curves between guiding points that are specified in an interactive manner
[245]. This approach may be used to modify a part of a contour without
affecting the rest of the contour. In the case where several guiding or critical
points are available around an ROI and a smooth curve is desired to connect
them, B-splines may be used to derive the interpolating points [245]. The
guiding points may also be used for compact representation of the contour.
Active contour models or "snakes": Kass et al. [362] proposed active
contour models or "snakes" that allow an initial contour to reshape and mold
itself to a desired ROI based upon constraints related to the derivatives of
the contour, image gradient, and energy functionals. A snake is an energy-minimizing
spline that is guided by external constraint forces, influenced by
image-based forces that pull it toward features such as edges, and constrained
by internal spline forces that impose piecewise smoothness. The initial contour
may be provided by an image processing technique, or could be provided by
the user in the form of a simple (polygonal, circular, or oval) contour within
the ROI. The active contour model determines the "true" boundary of the

FIGURE 5.43
(a) TEM image showing collagen fibers in cross-section [a part of the image
in Figure 1.5 (b)]. The image is of size 85 × 85 pixels. (b) Edges extracted
from the image in (a). (c) Negative version of the image in (b), overlaid with
10 times the c = 5 plane of the Hough transform parameter space. (d) Same
as in (c) but with the c = 7 plane. See also Figure 5.44.

FIGURE 5.44
Hough parameter (a, b, c) space of the image in Figure 5.43 (b). Each image
is of size 85 × 85 pixels, and represents the range of the (a, b) coordinates of
the center of a potential circle. The series of images represents the various
planes of the (a, b, c) parameter space with c = 1, 2, …, 12 going left to right
and top to bottom, representing the radius of potential circles. The intensity
of the parameter space values has been enhanced with the log operation.

FIGURE 5.45
(a) TEM image showing collagen fibers in cross-section [a part of the image
in Figure 1.5 (a)]. The image is of size 143 × 157 pixels. (b) Edges extracted
from the image in (a). (c) Negative version of the image in (b), overlaid with
10 times the c = 13 plane of the Hough transform parameter space. (d) Same
as in (c) but with the c = 20 plane. See also Figure 5.46.

FIGURE 5.46
Hough parameter (a, b, c) space of the image in Figure 5.45 (b). Each image
is of size 143 × 157 pixels, and represents the range of the (a, b) coordinates
of the center of a potential circle. The series of images represents the various
planes of the (a, b, c) parameter space with c = 1, 2, …, 20 going left to right
and top to bottom, representing the radius of potential circles. The intensity
of the parameter space values has been enhanced with the log operation.
ROI that satisfies the conditions imposed. The use of active contour models
to obtain the breast boundary in mammograms is described in Section 5.9.
The "live wire": Falcão et al. [363, 364] argued that there will continue
to be situations where automatic image segmentation methods fail, and proposed
a user-steered segmentation paradigm labeled as the "live wire". Their
guiding principles were to provide effective control to the user over the segmentation
process during execution, and to minimize the user's time required
in the process of segmentation. In the live-wire approach, the user initially
specifies a point on the boundary of the ROI using a cursor. Then, as the user
moves the cursor, an optimal path connecting the initial point to the current
cursor position is computed and displayed in real time. Optimal paths are
determined by computing a number of features and their associated costs for
every boundary element, and finding the minimum-cost paths via dynamic
programming. When the cursor moves close to the true boundary, the live
wire snaps on to the boundary. If the result is satisfactory, the user deposits
the cursor, at which point the location of the cursor becomes the starting
point of a new live-wire segment. The entire boundary of the ROI is specified
as a set of live-wire segments. The features used include the image intensity
values on the inside and outside of the boundary, several gradient measures,
and the distance from the boundary in the preceding slice (in the case of segmentation
of 3D images). Falcão et al. indicated that the method could assist
in fast and accurate segmentation of ROIs in 2D and 3D medical images after
some training.
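The core computation, a minimum-cost path updated as the cursor moves, can be illustrated with Dijkstra's algorithm on an 8-connected pixel graph (Falcão et al. use dynamic programming over boundary elements with multiple cost features; the single per-pixel cost used here, low along strong edges, is a simplified stand-in):

```python
import heapq

def min_cost_path(cost, start, goal):
    """Dijkstra's algorithm over a 2D grid of per-pixel costs (8-connected),
    returning the minimum-cost path from start to goal as (row, col) pairs."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):   # stale heap entry
            continue
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = node
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                       # trace the path backward
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With a cost grid in which one row has low cost (mimicking a high-gradient boundary), the computed path "snaps" onto that row, which is the live-wire behavior described above.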
Fusion of multiple results of segmentation: The segmentation of regions
such as masses and tumors in mammograms is complicated by several
factors, such as poor contrast, superimposition of tissues, imaging geometry,
and the invasive nature of cancer. In such cases, it may be desirable to apply
several image processing techniques based upon different principles and image
characteristics, and then to combine or fuse the multiple results obtained into
a single region or contour. It could be expected that the result so obtained
would be better than any of the individual results. A fuzzy fusion operator
that can fuse a contour and a fuzzy region to produce another fuzzy region is
described in Section 5.11.

5.8 Application: Detection of the Spinal Canal


In an application to analyze CT images of neuroblastoma [365, 366] (see Section
9.9), the spinal canal was observed to interfere with the segmentation of
the tumor using the fuzzy connectivity algorithm [355]. In order to address
this problem, a method was developed to detect the center of the spinal canal
in each CT slice, grow the 3D region containing the spinal canal, and remove
the structure. The initializing seeds for the region-growing procedure were
automatically obtained with the following procedure.
The outer region in the CT volume containing materials outside the patient,
the skin, and peripheral fat was first segmented and removed [365, 366]. The
CT volume was then thresholded at +800 HU to detect the high-density
bone structures. All voxels not within 8 mm from the inner boundary of the
peripheral fat layer were rejected. Regions were grown using each remaining
voxel, and all of the resulting regions were merged to form the bone volume.
The inclusion criteria were in terms of the CT values being within +800 ±
2σ HU, with σ = 103 HU being the standard deviation of bone, and spatial
connectivity.
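The seed-based growth with the ±2σ inclusion criterion can be sketched as follows (a generic 2D, 4-connected version for illustration only; the actual procedure operated on the 3D CT volume with spatial connectivity in three dimensions):

```python
from collections import deque

def grow_region(image, seed, mean, sigma):
    """Grow a 4-connected region from `seed`, accepting pixels whose value
    lies within mean +/- 2*sigma (the inclusion criterion described above)."""
    rows, cols = len(image), len(image[0])
    inside = lambda v: abs(v - mean) <= 2 * sigma
    if not inside(image[seed[0]][seed[1]]):
        return set()
    region, queue = {seed}, deque([seed])
    while queue:                               # breadth-first flood fill
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region and inside(image[nr][nc])):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

Note that a pixel whose value satisfies the criterion but that is not spatially connected to the seed is, correctly, excluded from the region.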
The resulting CT volume was cropped to limit the scope of further analysis,
as follows. The width of the image was divided into three equal parts, and
the outer thirds were rejected. The height of the image was divided into six
equal parts, and the lower fourth and fifth parts were included in the cropped
region. In the interslice direction, the first 13% of the slices were removed,
and the subsequent 20 slices were included in the cropped volume. Figure
5.47 (b) and Figure 5.48 (b) show the effect of cropping on two CT slices.
The cropped, binarized bone volume was subjected to a 3D derivative operator
to produce the edges of the bone structures. The vertebral column is not
continuous, but made up of interlocking elements. As a result, the bone-edge
map could be sparse. Figures 5.47 (c) and 5.48 (c) show the bone-edge maps
of the binarized bone volume related to the corresponding CT images in parts
(b) of the same figures.
The Hough transform for the detection of circles, as described in Section
5.6.3 and Equation 5.67, was applied to each slice of the bone-edge map.
The radius in the Hough space was limited to the range 6–10 mm. Because
of the possibility of partial structures and edges in a given image, the global
maximum in the Hough space may not relate to the inner circular edge of the
spinal canal, as desired. In order to obtain the center and radius of the ROI,
the CT values of bone marrow (μ = +142 HU and σ = 48 HU) and the spinal
canal (μ = +30 HU and σ = 8 HU) were used as constraints. If the center
of the circle corresponding to the Hough-space maximum was not within the
specified HU range, the circle was rejected, and the next maximum in the
Hough space was evaluated. This process was continued until a suitable circle
was detected.
Figure 5.47 (d) shows the best-fitting circle detected, drawn on the original
CT image. When the bone structure is clearly delineated, the best-fitting
circle approximates well the spinal canal boundary. Figure 5.48 (d) shows
four circles related to the maximum in the corresponding Hough space and
the subsequent three peaks; the related Hough-space slices are shown in Figure
5.49. The best-fitting circle (which was not given by the global maximum
in the Hough space) was obtained by applying the constraints defined above.
The centers of the circles detected as above were used as the seed voxels
in a fuzzy connectivity algorithm to segment the spinal canal. The mean and
standard deviation required for this procedure were estimated using a 7 × 7 × 2
neighborhood around each seed voxel. The spinal canal detected over all of
the CT slices available for the case illustrated in Figures 5.47 and 5.48 is
shown in Figure 5.50. The spinal canal volume was then removed from the
CT volume, resulting in improved segmentation of the tumor volume.

5.9 Application: Detection of the Breast Boundary in Mammograms
Identification of the breast boundary is important in order to demarcate the
breast region on a mammogram. The inclusion of this preliminary procedure
in CAD systems can avoid useless processing time and data storage. By
identifying the boundary of the breast, it is possible to remove any artifact
present outside the breast area, such as patient markings (often high-intensity
regions) and noise, which can affect the performance of image analysis and
pattern recognition techniques. Identification and extraction of the effective
breast region is also important in PACS and telemammography systems [367].

The profile of the breast has been used as additional information in different
tasks in mammography. Bick et al. [368] and Byng et al. [369], for
example, used the skin-air boundary information to perform density correction
of peripheral breast tissue on digital mammograms, which is affected
by the compression procedure applied during imaging. Chandrasekhar and
Attikiouzel [370] discussed the importance of the skin-air boundary profile
as a constraint in searching for the nipple location, which is often used as a
reference point for registering mammograms taken at different times of the
same subject. Other groups have used the breast boundary to perform registration
between left and right mammograms in the process of detection of
asymmetry [371, 372].

Most of the works presented in the literature to identify the boundary of
the breast are based upon histogram analysis [367, 368, 369, 371, 372, 373],
which may be critically dependent upon the threshold-selection process and
the noise present in the image. Such techniques, as discussed by Bick et
al. [374], may not be robust for a screening application. Ferrari et al. [279, 375]
proposed active contour models, especially designed to be locally adaptive,
for identification of the breast boundary in mammograms. The methods,
including several preprocessing steps, are described in the following sections.

FIGURE 5.47
(a) A 512 × 512 CT image slice of a patient with neuroblastoma. Pixel width
= 0.41 mm. The tumor boundary drawn by a radiologist is shown in black.
(b) Cropped area for further processing. (c) Edge map of the binarized bone
volume. (d) Best-fitting circle as determined by Hough-space analysis. The
circle has a radius of 6.6 mm. Figures courtesy of the Alberta Children's
Hospital and H.J. Deglint [365, 366].

FIGURE 5.48
(a) A 512 × 512 CT image slice of a patient with neuroblastoma. Pixel
width = 0.41 mm. (b) Cropped area for further processing. (c) Edge map
of the binarized bone volume. (d) The four circles corresponding to the first
four peaks in the Hough space. The radii of the circles range from 6.6 mm
to 9.8 mm. See also Figure 5.49. Figures courtesy of the Alberta Children's
Hospital and H.J. Deglint [365, 366].

FIGURE 5.49
(a) Edge map of the binarized bone volume related to the ROI of the CT
image in Figure 5.48. Hough-space slices related to the detection of circles of
radius (b) 15, (c) 17, and (d) 19 pixels, with the pixel width being 0.41 mm.
Figures courtesy of H.J. Deglint [365, 366].

FIGURE 5.50
3D rendition of the spinal canal detected for the case related to the CT slices
shown in Figures 5.47 and 5.48. The spinal canal is shown in a relatively dark
shade of gray against the bone volume for reference.
5.9.1 Detection using the traditional active deformable contour model

In an initial study, Ferrari et al. [375] used the traditional active deformable
contour model or snakes [362] for the detection of the breast boundary. The
method, summarized in Figure 5.51, is composed of six main stages, as described
in the following paragraphs.
Stage 1: The image contrast is enhanced by using a simple logarithmic
operation [8]. The contrast-correction step

g(x, y) = log[1 + f(x, y)]  (5.68)

is applied to the original image f(x, y); g(x, y) is the transformed image. This
operation for dynamic range compression, although applied to the whole image,
significantly enhances the contrast of the regions near the breast boundary
in mammograms, which are characterized by low density and poor definition
of details [368, 369]. The rationale behind the application of this procedure
to the image is to determine an approximate breast contour as close
as possible to the true breast boundary. The effect of this procedure can be
seen by comparing the original and the enhanced images in Figures 5.52 (a)
and 5.52 (b).
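Equation 5.68 amounts to the following (a sketch; the rescaling of the result to a display range such as [0, 255] is an added assumption, as the text states only the log operation):

```python
import math

def log_enhance(image, levels=256):
    """Dynamic-range compression: g = log(1 + f), here rescaled so that the
    maximum maps to levels - 1 for display."""
    g = [[math.log1p(p) for p in row] for row in image]
    gmax = max(max(row) for row in g)
    return [[(levels - 1) * v / gmax for v in row] for row in g]
```

The compression lifts dark, low-density regions: a pixel with 3.5% of the maximum input value is mapped to roughly 40% of the output range, which is why faint detail near the breast boundary becomes visible.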
Stage 2: A binarization procedure using the Lloyd–Max quantization algorithm
is applied to the image [118]; see Section 2.3.2 for details. Figure
5.52 (c) shows the binarized version of the image in part (b) of the same
figure.
Stage 3: Spurious details generated by the binarization step are removed
by using a morphological opening operator [8] with a circular structuring
element with a diameter of 7 pixels. Figures 5.52 (c)–(d) show the result of
the binarization procedure for the mammogram in Figure 5.52 (a), before and
after the application of the morphological opening operator, respectively.
Stage 4: After the binarization procedure, an approximate contour Cappr
of the breast is extracted by using the chain-code method [8]; see Section 6.1.2
for details. The starting point of Cappr is obtained by following the horizontal
path that starts at the centroid of the image and is directed toward the chest
wall until a background pixel is found. This procedure avoids selecting an
initial boundary from artifacts or patient labels that may be present in the
image. Four control points [see Figure 5.52 (e)] are automatically determined
and used to limit the breast boundary. The points are defined as N1: the
top-left corner pixel of the boundary loop; N2: the farthest point on the
boundary from N3 (in terms of the Euclidean distance through the breast);
N3: the lowest pixel on the left-hand edge of the boundary loop; and N4: the
farthest point on the skin-air boundary loop from N1.
Stage 5: Pixels along lines of length 40 pixels (length = 0.8 cm at a sampling
resolution of 200 μm) are identified at each point of the approximate

FIGURE 5.51
Flowchart of the procedures for identification of the skin-air boundary of the
breast in mammograms. Reproduced with permission from R.J. Ferrari, R.M.
Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Identification of
the breast boundary in mammograms using active contour models", Medical
and Biological Engineering and Computing, 42: 201–208, 2004. © IFMBE.
[Figure 5.52, parts (a)–(g); see the caption accompanying part (h) below.]
skin-air boundary in the original image in the direction normal to the boundary
[see Figure 5.52 (f)]. The gray-level histogram of the pixels along each
normal line is computed, and the skin-air intersection is defined as the first
pixel, while traversing along the normal line from inside the breast toward
the outside, that has the gray level associated with the maximum value in the
histogram, as illustrated in Figure 5.53. This procedure was designed in order
to provide a close estimate of the true skin-air boundary [see Figure 5.52 (g)],
and thereby reduce the chances of the active contour (used in the next stage)
converging to a wrong contour.
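The decision rule for one normal line can be sketched as follows (an illustration only; the profile is assumed to be a list of quantized gray levels ordered from inside the breast toward the outside, and ties in the histogram are resolved arbitrarily):

```python
from collections import Counter

def skin_air_index(profile):
    """Given gray levels sampled along a normal line (ordered from inside the
    breast toward the outside), return the index of the first sample whose
    gray level equals the histogram mode of the line, taken here as the
    background level near the skin-air boundary."""
    mode_level, _ = Counter(profile).most_common(1)[0]
    for i, level in enumerate(profile):
        if level == mode_level:
            return i
    return None
```

For a profile that decays from bright tissue into a long run of the background level, the function returns the index where that background run begins, which is the skin-air intersection marked in Figure 5.53.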
Stage 6: The traditional snakes model is applied to define the true breast
boundary. The contour determined in the previous stage is used as the input
to a traditional parametric active contour or snakes model [362]. The contour
is moved through the spatial domain of the image in order to minimize the
energy functional

E = ∫₀¹ [ (1/2) { α |v′(s)|² + β |v″(s)|² } + E_ext{v(s)} ] ds,  (5.69)

where α and β are weighting parameters that control, respectively, the tension
and rigidity of the snake. The v′(s) and v″(s) values denote the first and
second derivatives of v(s) with respect to s, where v(s) indicates the continuous
representation of the contour, and s represents distance along the contour

FIGURE 5.52
Results of each stage of the method for identification of the breast boundary.
(a) Image mdb042 from the Mini-MIAS database [376]. (b) Image after
the logarithmic operation. (c)–(d) Binary image before and after applying
the binary morphological opening operator. (e) Control points N1 to N4
(automatically determined) used to limit the breast boundary. (f) Normal
lines computed from each pixel on the skin-air boundary. (g) Boundary after
histogram-based analysis of the normal lines. (h) Final boundary. Reproduced
with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels,
R.A. Borges, and A.F. Frère, "Identification of the breast boundary in
mammograms using active contour models", Medical and Biological Engineering
and Computing, 42: 201–208, 2004. © IFMBE.
[Plots: (a) gray-level value versus distance in pixels along the normal line;
(b) occurrence versus gray-level value.]

FIGURE 5.53
(a) Profile of a sample normal line used to determine an approximate skin-air
boundary. The symbol × indicates the skin-air intersection determined in
Stage 5 of the method. (b) Histogram computed from (a). Reproduced with
permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges,
and A.F. Frère, "Identification of the breast boundary in mammograms using
active contour models", Medical and Biological Engineering and Computing,
42: 201–208, 2004. © IFMBE.
(normalized to the range [0, 1]). The external energy function E_ext{v(s)} is
derived from the original image f(x, y) as

E_ext(x, y) = −‖∇f(x, y)‖²,  (5.70)

where ∇ is the gradient operator. The values α = 0.001 and β = 0.09 were
experimentally derived by Ferrari et al., based upon the approximate boundary
obtained in the previous stage, the quality of the external force derived
from the original image, and the final contours obtained.
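For computation, the continuous functional of Equation 5.69 is discretized; below is a sketch using finite differences on a closed polygonal contour (not the authors' code; the external-energy callback and the default values of α and β follow the description above, but the discretization itself is a generic choice):

```python
def snake_energy(contour, ext_energy, alpha=0.001, beta=0.09):
    """Discrete version of Eq. 5.69 for a closed contour of (x, y) points:
    finite differences approximate |v'(s)|^2 and |v''(s)|^2, and
    ext_energy(x, y) supplies the external term of Eq. 5.70 (e.g., the
    negated squared gradient magnitude sampled at the point)."""
    n = len(contour)
    total = 0.0
    for i in range(n):
        xm, ym = contour[i - 1]                # previous point (wraps around)
        x0, y0 = contour[i]
        xp, yp = contour[(i + 1) % n]          # next point (wraps around)
        d1 = (xp - x0) ** 2 + (yp - y0) ** 2                    # |v'|^2
        d2 = (xp - 2 * x0 + xm) ** 2 + (yp - 2 * y0 + ym) ** 2  # |v''|^2
        total += 0.5 * (alpha * d1 + beta * d2) + ext_energy(x0, y0)
    return total
```

Minimizing this sum over candidate contour positions (for example, by testing each point against its neighbors in each iteration) is what moves the snake toward the boundary.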
Results of application to mammograms: Sixty-six images from the
Mini-MIAS database [376] were used to assess the performance of the method.
The results were subjectively analyzed by an expert radiologist. According
to the opinion of the radiologist, the method accurately detected the breast
boundary in 50 images, and reasonably well in 11 images. The final contour
for the mammogram in Figure 5.52 (a) is given in part (h) of the same figure.
The method failed in five images because of distortions and artifacts present
near the breast boundary (see Figure 5.54). Limitations of the method exist
mainly in Stages 5 and 6. Although Stage 5 helps in obtaining a good breast
contour, it is time-consuming, and may impose a limitation in practical applications.
The traditional snakes model used in Stage 6 is not robust (the
method is not locally adaptive) in the presence of noise and artifacts, and has
a short range of edge capture.
In order to address these limitations, Ferrari et al. proposed an improved
method, described in the following section, by replacing the traditional snakes
algorithm with an adaptive active deformable contour model specifically designed
for the application to detect breast boundaries. The algorithm includes
a balloon force in an energy formulation that minimizes the influence of the
initial contour on the convergence of the algorithm. In the energy formulation,
the external energy is also designed to be locally adaptive.

5.9.2 Adaptive active deformable contour model

The improved method to identify the breast boundary is summarized in the
flowchart in Figure 5.51. Stages 1–4 of the initial method described in
the preceding section are used to find an approximate breast boundary. The
approximate contour V = v1, v2, …, vN, with an ordered collection of N
points vi = (xi, yi), i = 1, 2, …, N, is obtained from Stage 4 of the previous
method by sampling the approximate contour Cappr [see Figure 5.55 (a)], and
used as the initial contour in the adaptive active deformable contour model
(AADCM) [see Figure 5.55 (b)]. Only 10% of the total number of points
present in Cappr are used in the sampled contour.
In the AADCM, which combines several characteristics from other known
active contour models [377, 378, 379], the contour is moved through the spatial
domain of the image in order to minimize the following functional of energy:

[Figure 5.54 (a)]

E_total = Σ_{i=1}^{N} [ α E_internal(vi) + β E_external(vi) ],  (5.71)

where α and β are weighting parameters that control the internal and external
energies E_internal and E_external, respectively, at each point vi. The internal
energy is composed of two terms:

E_internal(vi) = a E_continuity(vi) + b E_balloon(vi).  (5.72)

This energy component ensures a stable shape for the contour and constrains
to keep constant the distance between the points in the contour. In the work
of Ferrari et al., the weighting parameters a and b were initially set to unity
(a = b = 1), because the initial contours present smooth shapes and are close
to the true boundary in most cases. For each element (m, n) in a neighborhood
of 7 × 7 pixels of vi, the continuity term e_c(m,n)(vi) is computed as

e_c(m,n)(vi) = [1/l(V)] |p(m,n)(vi) − γ (vi−1 + vi+1)|²,  (5.73)

where l(V) = (1/N) Σ_{i=1}^{N} |vi+1 − vi|² is a normalization factor that makes the
continuity energy independent of the size, location, and orientation of V;

FIGURE 5.54
Result of the segmentation algorithm showing wrong convergence of the breast
contour to a region of high gradient value. (a) Breast boundary detected,
superimposed on the original image mdb006 from the Mini-MIAS database.
(b) Details of the breast contour attracted to the image identification marker,
corresponding to the boxed region in (a). Compare with Figure 5.59. Reproduced
with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels,
R.A. Borges, and A.F. Frère, "Identification of the breast boundary in
mammograms using active contour models", Medical and Biological Engineering
and Computing, 42: 201–208, 2004. © IFMBE.
Figure 5.55 (a)
p_{(m,n)}(v_i) is the point in the image at the position (m, n) in the 7 × 7 neighborhood
of v_i; and γ = [2 cos(2π/N)]^{−1} is a constant factor to keep the location
of the minimum energy lying on the circle connecting v_{i−1} and v_{i+1}, in the
case of closed contours; see Figure 5.56.
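The continuity term of Equation 5.73 can be sketched numerically as follows. This is a minimal illustration, not the authors' implementation; the function and variable names are hypothetical, and the candidate array stands in for the 7 × 7 neighborhood of v_i:

```python
import numpy as np

def continuity_energy(contour, i, candidates):
    """Unnormalized continuity term e_c of Equation 5.73 for each
    candidate position p_(m,n)(v_i) of the contour point v_i."""
    N = len(contour)
    # l(V): mean squared distance between successive contour points
    diffs = np.roll(contour, -1, axis=0) - contour
    l_V = np.mean(np.sum(diffs ** 2, axis=1))
    # gamma = [2 cos(2 pi / N)]^(-1) places the energy minimum on the
    # circle through v_(i-1) and v_(i+1) for a closed contour
    gamma = 1.0 / (2.0 * np.cos(2.0 * np.pi / N))
    target = gamma * (contour[i - 1] + contour[(i + 1) % N])
    return np.sum((candidates - target) ** 2, axis=1) / l_V

# Usage: for a circle centered at the origin, gamma * (v_(i-1) + v_(i+1))
# equals v_i exactly, so the central candidate of a 7x7 grid wins.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = 20.0 * np.c_[np.cos(theta), np.sin(theta)]
dx, dy = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
candidates = contour[0] + np.c_[dx.ravel(), dy.ravel()]
e_c = continuity_energy(contour, 0, candidates)
```

For a circular contour, the term vanishes at the current point itself, which is the stabilizing behavior that the γ factor is designed to produce.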
The balloon force is used to force the expansion of the initial contour toward
the breast boundary. In the work of Ferrari et al., the balloon force was
made adaptive to the magnitude of the image gradient, causing the contour
to expand faster in homogeneous regions and slower near the breast boundary.
The balloon energy term e_{b(m,n)}(v_i) is defined as

e_{b(m,n)}(v_i) = n_i · { v_i − p_{(m,n)}(v_i) },    (5.74)

where n_i is the outward unit normal vector of V at the point v_i, and the
symbol · indicates the dot product. n_i is computed by rotating the vector

t_i = (v_i − v_{i−1}) / |v_i − v_{i−1}| + (v_{i+1} − v_i) / |v_{i+1} − v_i|,

which is the tangent vector at the point v_i, by 90°; see Figure 5.57.
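The balloon term of Equation 5.74 can be sketched as follows. The sign of the 90° rotation that makes the normal point outward depends on the winding direction of the contour; the choice below assumes a counterclockwise contour, which is an illustrative assumption:

```python
import numpy as np

def outward_normal(contour, i):
    """Unit normal n_i at v_i: the tangent t_i (sum of the two unit vectors
    along the contour) rotated by 90 degrees. For a counterclockwise
    contour, rotating by -90 degrees points outward."""
    N = len(contour)
    vm, v, vp = contour[i - 1], contour[i], contour[(i + 1) % N]
    t = (v - vm) / np.linalg.norm(v - vm) + (vp - v) / np.linalg.norm(vp - v)
    n = np.array([t[1], -t[0]])
    return n / np.linalg.norm(n)

def balloon_energy(contour, i, candidates):
    """Balloon term e_b of Equation 5.74: n_i . (v_i - p) per candidate p.
    Minimizing it favors candidates outward of v_i."""
    return (contour[i] - candidates) @ outward_normal(contour, i)

# Usage: on a counterclockwise circle, the outward candidate has lower energy
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
contour = 10.0 * np.c_[np.cos(theta), np.sin(theta)]
candidates = np.array([contour[0] + [1.0, 0.0],    # outward
                       contour[0] - [1.0, 0.0]])   # inward
e_b = balloon_energy(contour, 0, candidates)
```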
The external energy is based upon the magnitude and direction of the image
gradient, and is intended to attract the contour to the breast boundary. It is
defined as

e_{e(m,n)}(v_i) = −n_i · ∇f{ p_{(m,n)}(v_i) },    (5.75)
(b)
FIGURE 5.55
(a) Approximate breast contour obtained from Stage 4 of the method described
in Section 5.9.1, for the image mdb042. (b) Sampled breast contour
used as the input to the AADCM. Reproduced with permission from R.J.
Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Identification of the breast boundary in mammograms using active contour
models", Medical and Biological Engineering and Computing, 42: 201–208,
2004. © IFMBE.
FIGURE 5.56
Characteristics of the continuity energy component in the adaptive active
deformable contour model. Figure adapted with permission from B.T.
Mackiewich [377].
FIGURE 5.57
Characteristics of the balloon energy component in the adaptive active
deformable contour model. Figure adapted with permission from B.T.
Mackiewich [377].
where ∇f{ p_{(m,n)}(v_i) } is the image gradient vector at (m, n) in the 7 × 7
neighborhood of v_i; see Figure 5.58. The direction of the image gradient is
used to avoid the attraction of the contour by edges that may be located near
the true breast boundary, such as identification marks and small artifacts; see
Figure 5.59. In this situation, the gradient direction at the position (m, n)
on an edge near the breast boundary and the direction of the unit normal of
the contour will have opposite signs, which makes the functional of energy
present a large value at (m, n).
Minimization of the energy functionals: In order to allow comparison
between the various energy components described above, in the work of Ferrari
et al., each energy component was scaled to the range [0, 1] as follows:

E_continuity(v_i) = [ e_{c(m,n)}(v_i) − e_{c min}(v_i) ] / [ e_{c max}(v_i) − e_{c min}(v_i) ],    (5.76)

E_balloon(v_i) = { [ e_{b(m,n)}(v_i) − e_{b min}(v_i) ] / [ e_{b max}(v_i) − e_{b min}(v_i) ] } [ 1 − ‖∇f(v_i)‖ / ‖∇f‖_max ],    (5.77)

E_external(v_i) = [ e_{e(m,n)}(v_i) − e_{e min}(v_i) ] / max[ e_{e max}(v_i) − e_{e min}(v_i), ‖∇f‖_max ].    (5.78)

Here, e_min and e_max indicate the minimum and maximum of the corresponding
energy component in the 7 × 7 neighborhood of v_i, and ‖∇f‖_max is the maximum
gradient magnitude in the entire image.
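The scalings of Equations 5.76–5.78 can be sketched as below. This is a minimal illustration, not the authors' code; the zero-span guard and the variable names are assumptions:

```python
import numpy as np

def normalize_energies(e_c, e_b, e_e, grad_at_vi, grad_max):
    """Scale the raw neighborhood energies roughly as in Eqs. 5.76-5.78.

    e_c, e_b, e_e: arrays of raw energies over the 7x7 neighborhood of v_i.
    grad_at_vi:    gradient magnitude of the image at v_i.
    grad_max:      maximum gradient magnitude over the whole image.
    """
    def minmax(e):
        span = e.max() - e.min()
        return (e - e.min()) / span if span > 0 else np.zeros_like(e)

    E_cont = minmax(e_c)                                   # Eq. 5.76
    # Eq. 5.77: balloon force attenuated near strong edges
    E_ball = minmax(e_b) * (1.0 - grad_at_vi / grad_max)
    # Eq. 5.78: denominator bounded below by the global gradient maximum
    E_ext = (e_e - e_e.min()) / max(e_e.max() - e_e.min(), grad_max)
    return E_cont, E_ball, E_ext

# Usage: near a strong edge (gradient 90 of 100), the balloon term shrinks
rng = np.random.default_rng(0)
e = rng.random(49)
E_cont, E_ball, E_ext = normalize_energies(e, e, e, grad_at_vi=90.0,
                                           grad_max=100.0)
```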
Ferrari et al. used the greedy algorithm, proposed by Williams and Shah
[379], to perform minimization of the functional of energy in Equation 5.71.
FIGURE 5.58
Characteristics of the external energy component in the adaptive active
deformable contour model. Figure adapted with permission from B.T.
Mackiewich [377].
Figure 5.59 (a)
Although this algorithm has the drawback of not guaranteeing a global-minimum
solution, it is faster than the other methods proposed in the literature,
such as dynamic programming, variational calculus, and finite elements.
It also allows the insertion of hard constraints, such as curvature evaluation,
as described below.
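One pass of the greedy minimization can be sketched as follows. This is an illustrative skeleton only: the energy function passed in is a toy stand-in for the weighted sum of Equation 5.71, and the names are hypothetical:

```python
import numpy as np

def greedy_step(contour, energy_fn, radius=3):
    """One pass of the greedy snake: move each v_i to the candidate in its
    (2*radius+1)^2 neighborhood that minimizes the total energy.
    energy_fn(contour, i, candidates) returns one energy per candidate."""
    new = contour.copy()
    dx, dy = np.meshgrid(np.arange(-radius, radius + 1),
                         np.arange(-radius, radius + 1))
    offsets = np.c_[dx.ravel(), dy.ravel()].astype(float)
    for i in range(len(contour)):
        cands = contour[i] + offsets
        new[i] = cands[np.argmin(energy_fn(contour, i, cands))]
    return new

# Usage: a toy energy pulling every point toward radius 25; starting from a
# circle of radius 20, repeated passes expand the contour outward.
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
snake = np.c_[20.0 * np.cos(theta), 20.0 * np.sin(theta)]
for _ in range(4):
    snake = greedy_step(snake,
                        lambda c, i, p: (np.linalg.norm(p, axis=1) - 25.0) ** 2)
```

Because the current position is itself one of the candidates, each point's energy is non-increasing from pass to pass, which is the property that makes the greedy scheme stable even though it cannot guarantee a global minimum.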
Convergence of the AADCM is achieved in two stages by smoothing the
original image with two different Gaussian kernels defined with σ_x = σ_y = 3
pixels, and σ_x = σ_y = 1.5 pixels. At each stage, the iterative process is
stopped when the total energy of the contour increases between consecutive
iterations. This coarse-to-fine representation is expected to provide more
stability to the contour.
In order to allow the deformable contour to adjust to corner regions, such
as the upper-right limit of the breast boundary, a constraint was inserted at
the end of each iteration by Ferrari et al., to relax the continuity term defined
in Equation 5.73. The curvature value C(v_i) at each point v_i of the contour
was computed as

C(v_i) = [ 2 sin(θ_i/2) ]² = ‖ u_i/‖u_i‖ − u_{i−1}/‖u_{i−1}‖ ‖²,    (5.79)

where u_i = (v_{i+1} − v_i) is the vector joining two neighboring contour elements,
and θ_i is the external angle between such vectors sharing a common contour
(b)
FIGURE 5.59
Application of the gradient direction information to avoid the attraction of the
boundary to objects near the true boundary. (a) Breast boundary detected
automatically, superimposed on the original image mdb006 from the Mini-MIAS
database. (b) Details of the detected breast boundary close to the
image identification marker, corresponding to the boxed region in the original
image. Compare with Figure 5.54. Reproduced with permission from R.J.
Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Identification of the breast boundary in mammograms using active contour
models", Medical and Biological Engineering and Computing, 42: 201–208,
2004. © IFMBE.
element. This definition of curvature, as discussed by Williams and Shah [379],
has three important advantages over other curvature measures: it requires
only simple computation, gives coherent values, and depends solely on relative
direction.
At each v_i, the weight values for the continuity term and the external energy
are set, respectively, to zero (a = 0) and to twice the initial value (β → 2β)
if [C(v_i) > C(v_{i−1})] and [C(v_i) > C(v_{i+1})] and [C(v_i) > T]. The threshold
value T was set equal to 0.25, which corresponds to an external angle of
approximately 29°. According to Williams and Shah [379], this value of the
threshold has been proven experimentally to be sufficiently large to differentiate
between corners and curved lines. Figures 5.60 (b) and (c) illustrate an
example without and with the curvature constraint to correct corner effects.
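The corner test built from Equation 5.79 can be sketched as follows; the squared chord length between the two unit direction vectors equals [2 sin(θ_i/2)]², so the threshold T = 0.25 corresponds to an external angle of about 29° (names and the local-maximum test below follow the text, but this is an illustrative sketch, not the authors' code):

```python
import numpy as np

def curvature(contour, i):
    """Curvature of Eq. 5.79: ||u_i/|u_i| - u_(i-1)/|u_(i-1)|||^2, which
    equals [2 sin(theta_i / 2)]^2 for external angle theta_i."""
    N = len(contour)
    u_prev = contour[i] - contour[i - 1]
    u_next = contour[(i + 1) % N] - contour[i]
    d = u_next / np.linalg.norm(u_next) - u_prev / np.linalg.norm(u_prev)
    return float(d @ d)

def is_corner(contour, i, T=0.25):
    """Corner test used to relax the continuity term: a local curvature
    maximum exceeding T (T = 0.25 ~ external angle of 29 degrees)."""
    N = len(contour)
    c = curvature(contour, i)
    return (c > curvature(contour, i - 1)
            and c > curvature(contour, (i + 1) % N)
            and c > T)

# Usage: a closed square contour; the vertex at index 4 is a 90-degree
# corner, for which [2 sin(45 deg)]^2 = 2.
contour = np.array(
    [(x, 0) for x in range(5)] + [(4, y) for y in range(1, 5)]
    + [(x, 4) for x in range(3, -1, -1)] + [(0, y) for y in range(3, 0, -1)],
    dtype=float)
```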
Figure 5.60 (a)
In the work of Ferrari et al., the weighting parameters α and β in Equation
5.71 were initialized to 0.2 and 1.0, respectively, for each contour element.
This set of weights was determined experimentally by using a set of 20 images
randomly selected from the Mini-MIAS database [376], not including any
image in the test set used to evaluate the results. A larger weight was given to
the gradient energy to favor contour deformation toward the breast boundary
rather than smoothing due to the internal force.
(b) (c)
FIGURE 5.60
Example of the constraint used in the active contour model to prevent smoothing
effects at corners. (a) Original image; the box indicates the region of
concern. (b) – (c) Details of the breast contour without and with the constraint
for corner correction, respectively. Reproduced with permission from
R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Identification of the breast boundary in mammograms using active contour
models", Medical and Biological Engineering and Computing, 42: 201–208,
2004. © IFMBE.
5.9.3 Results of application to mammograms
Ferrari et al. applied their methods to 84 images randomly chosen from the
Mini-MIAS database [376]. All images were MLO views with 200 μm sampling
interval and 8-bit gray-level quantization. For reduction of processing time, all
images were downsampled with a fixed sampling distance so that the original
images corresponding to a matrix size of 1 024 × 1 024 pixels were transformed
to 256 × 256 pixels. The results obtained with the downsampled images were
mapped to the original mammograms for subsequent analysis and display.
The results were evaluated in consultation with two radiologists experienced
in mammography.
The test images were displayed on a computer monitor with a diagonal size
of 47.5 cm and dot pitch of 0.27 mm. By using the Gimp program [380], the
contrast and brightness of each image were manually enhanced so that the
breast contour could be easily visualized. The breast boundary was manually
drawn under the supervision of a radiologist, and the results printed on paper
by using a laser printer with 600 dpi resolution. The zoom option of the Gimp
program was used to aid in drawing the contours. The breast boundaries of
all images were visually checked by a radiologist using the printed images
(hardcopy) along with the displayed images (softcopy); the assessment was
recorded for analysis.
The segmentation results related to the breast contours detected by image
processing were evaluated based upon the number of false-positive (FP) and
false-negative (FN) pixels identified and normalized with reference to the corresponding
areas demarcated by the manually drawn contours. The reference
area for the breast boundary was defined as the area of the breast image
delimited by the hand-drawn breast boundary. The FP and FN average percentages
and the corresponding standard deviation values obtained for the 84
images were 0.41 ± 0.25% and 0.58 ± 0.67%, respectively. Thirty-three images
presented both FP and FN percentages less than 0.5%; 38 images presented
FP and FN percentages between 0.5% and 1%; the FP and FN percentages
were greater than 1% for 13 images. The most common cause of FN pixels
was related to missing the nipple region, as illustrated by the example in Figure
5.61 (c). By removing Stage 5, used to approximate the initial contour to
the true breast boundary (see Figure 5.51), the average time for processing
an image was reduced from 2.0 min to 0.18 min. However, the number of images
where the nipple was not identified increased. The AADCM performed
successfully in cases where a small bending deformation of the contour was
required to detect the nipple; see Figure 5.62.
The method for the detection of the breast boundary was used as a preprocessing
step in the analysis of bilateral asymmetry by Ferrari et al. [381]
(see Section 8.9). The method may also be used in other applications, such as
image compression by using only the effective area of the breast, and image
registration.
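The FP/FN evaluation described above can be sketched as below; this is a minimal illustration with boolean masks (the mask construction and names are assumptions), normalizing both counts by the reference breast area as in the text:

```python
import numpy as np

def fp_fn_percentages(detected_mask, reference_mask):
    """FP and FN pixel percentages, normalized by the area delimited by
    the hand-drawn (reference) boundary. Masks are boolean arrays."""
    ref_area = reference_mask.sum()
    fp = np.logical_and(detected_mask, ~reference_mask).sum()
    fn = np.logical_and(~detected_mask, reference_mask).sum()
    return 100.0 * fp / ref_area, 100.0 * fn / ref_area

# Usage: reference region = left 6 columns (60 pixels); detection misses
# one column (10 FN pixels) and adds one stray pixel (1 FP pixel).
reference = np.zeros((10, 10), dtype=bool)
reference[:, :6] = True
detected = np.zeros((10, 10), dtype=bool)
detected[:, :5] = True
detected[0, 7] = True
fp_pct, fn_pct = fp_fn_percentages(detected, reference)
```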
Figure 5.61 (a)

Figure 5.61 (b)
(c)
FIGURE 5.61
Results obtained for the image mdb003 from the Mini-MIAS database.
(a) Original image. (b) Hand-drawn boundary superimposed on the histogram-equalized
image. (c) Breast boundary detected, superimposed on the
original image. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan,
J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Identification of the
breast boundary in mammograms using active contour models", Medical and
Biological Engineering and Computing, 42: 201–208, 2004. © IFMBE.
Figure 5.62 (a)

Figure 5.62 (b)
(c)
FIGURE 5.62
Results obtained for the image mdb114 from the Mini-MIAS database.
(a) Original image. (b) Hand-drawn boundary superimposed on the
histogram-equalized image. (c) Breast boundary detected, superimposed on
the original image. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan,
J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Identification of the
breast boundary in mammograms using active contour models", Medical and
Biological Engineering and Computing, 42: 201–208, 2004. © IFMBE.
5.10 Application: Detection of the Pectoral Muscle in Mammograms
The pectoral muscle represents a predominant density region in most MLO
views of mammograms, and can affect the results of image processing methods.
Intensity-based methods, for example, can present poor performance
when applied to differentiate dense structures such as the fibroglandular disc
or small suspicious masses, because the pectoral muscle appears at approximately
the same density as the dense tissues of interest in the image. The
inclusion of the pectoral muscle in the image data being processed could also
bias the detection procedures. Another important need to identify the pectoral
muscle lies in the possibility that the local information of its edge, along
with internal analysis of its region, could be used to identify the presence
of abnormal axillary lymph nodes, which may be the only manifestation of
occult breast carcinoma in some cases [54].
Karssemeijer [382] used the Hough transform and a set of threshold values
applied to the accumulator cells in order to detect the pectoral muscle. Aylward
et al. [383] used a gradient-magnitude ridge-traversal algorithm at small
scale, and then resolved the resulting multiple edges via a voting scheme in
order to segment the pectoral muscle. Ferrari et al. [278, 375] proposed a technique
to detect the pectoral muscle based upon the Hough transform [8, 10],
which was a modification of the method proposed by Karssemeijer [382].
However, the hypothesis of a straight line for the representation of the pectoral
muscle is not always correct, and may impose limitations on subsequent
stages of image analysis. Subsequently, Ferrari et al. [278] proposed another
method based upon directional filtering using Gabor wavelets; this method
overcomes the limitation of the straight-line representation mentioned above.
The Hough-transform-based and Gabor-wavelet-based methods are described
in the following sections.
5.10.1 Detection using the Hough transform
The initial method to identify the pectoral muscle proposed by Ferrari et
al. [375], summarized in the flowchart in Figure 5.63, starts by automatically
identifying an appropriate ROI containing the pectoral muscle, as shown in
Figure 5.64. To begin with, an approximate breast contour delimiting the
control points is obtained by using a method for the detection of the skin-air
boundary [279, 375], described in Section 5.9. The six control points N1–N6
used to define the ROI are determined as follows, in order:
1. N1: the top-left corner pixel of the boundary loop.
2. N5: the lowest pixel on the left-hand edge of the boundary loop.
3. N3: the mid-point between N1 and N5.
4. N2: the farthest point on the boundary from N5 in terms of the Euclidean
distance through the breast (if this point is not located on the
upper edge of the mammogram, it is projected vertically to the upper
edge).
5. N4: the point that completes a rectangle with N1, N2, and N3 (not
necessarily on the boundary loop).
6. N6: the farthest point on the boundary loop from N1. In the case of
the mammogram in Figure 5.64, the points N5 and N6 have coincided.
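The construction of the six control points can be sketched from a list of boundary pixels as follows. This is a rough illustration only: the tie-breaking rules, the "top-left-most" definition, and the projection target are assumptions, and image coordinates are taken as (row, col) with the origin at the top-left:

```python
import numpy as np

def control_points(boundary):
    """Sketch of the control points N1-N6 from an approximate breast
    boundary given as an (P, 2) array of (row, col) pixels."""
    rows, cols = boundary[:, 0], boundary[:, 1]
    # N1: top-left-most pixel (smallest row, then smallest column)
    N1 = boundary[np.lexsort((cols, rows))[0]]
    # N5: lowest pixel on the left-hand edge
    left = boundary[cols == cols.min()]
    N5 = left[np.argmax(left[:, 0])]
    N3 = (N1 + N5) / 2.0                               # mid-point of N1, N5
    # N2: farthest boundary point from N5, projected vertically to the top row
    N2 = boundary[np.argmax(np.linalg.norm(boundary - N5, axis=1))]
    N2 = np.array([0.0, N2[1]])
    N4 = N2 + (N3 - N1)                                # completes the rectangle
    # N6: farthest boundary point from N1
    N6 = boundary[np.argmax(np.linalg.norm(boundary - N1, axis=1))]
    return N1, N2, N3, N4, N5, N6

# Usage: the perimeter of a 10-row x 5-column region as a toy boundary
boundary = np.array([(r, c) for r in range(10) for c in range(5)
                     if r in (0, 9) or c in (0, 4)], dtype=float)
N1, N2, N3, N4, N5, N6 = control_points(boundary)
```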
The ROI is defined by the rectangular region delimited by the points N1,
N2, N3, and N4, as illustrated in Figure 5.64. Although, in some cases, this
region may not include the total length of the pectoral muscle, the portion of
the muscle present is adequate to define a straight line to represent its edge.
By limiting the size of the ROI as described above, the bias that could be
introduced by other linear structures that may be present in the fibroglandular
disc is minimized. Differing from the method of Karssemeijer, Ferrari et
al. [375] did not use any threshold value in order to reduce the number of
unlikely pectoral lines. Instead, geometric and anatomical constraints were
incorporated into the method, as follows:
1. The pectoral muscle is considered to be a straight line limited to an
angle θ between 120° and 170°, with the angle computed as indicated in
Figure 5.65. Mammograms of right breasts are flipped (mirrored) before
performing pectoral muscle detection.
2. The pectoral line intercepts the line segment N1–N2, as indicated in
Figure 5.64.
3. The pectoral line is present, in partial or total length, in the ROI defined
as the rectangular region delimited by the points N1, N2, N3, and N4,
as illustrated in Figure 5.64.
4. The pectoral muscle appears on mammograms as a dense region with
homogeneous gray-level values.
After selecting the ROI, a Gaussian filter with σ_x = σ_y = 4 pixels is used
to smooth the ROI in order to remove the high-frequency noise in the image.
The Hough transform is then applied to the Sobel gradient of the ROI [10] to
detect the edge of the pectoral muscle. The representation of a straight line
for the Hough transform computation is specified as

ρ = (x − x_0) cos θ + (y − y_0) sin θ,    (5.80)

where (x_0, y_0) is the origin of the coordinate system defined as the center of
the image, and ρ and θ represent, respectively, the distance and angle between
FIGURE 5.63
Procedure for the identification of the pectoral muscle by using the Hough
transform. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan,
J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Automatic identification of
the pectoral muscle in mammograms", IEEE Transactions on Medical Imaging,
23: 232–245, 2004. © IEEE.
Figure 5.64 (a)
(x_0, y_0) and the coordinates (x, y) of the pixel being analyzed, as illustrated
in Figure 5.65. The Hough accumulator is quantized to 45 bins of 4° each
by using the constraint |φ_xy − θ| < 2°, where φ_xy is the orientation of the
Sobel gradient at the pixel (x, y). In the work of Ferrari et al. [278, 375],
the accumulator cells were incremented using the magnitude of the gradient
instead of unit increments; thus, pixels with a strong gradient have larger
weights. Only values of θ in the range [120°, 170°] were considered in the
analysis, because the pectoral muscle of the mammogram was positioned on
the left-hand side of the image before computing the Hough transform (see
Figure 5.65).
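The gradient-weighted, orientation-constrained accumulation can be sketched as below. This is a simplified illustration, not the authors' implementation: only the restricted θ sub-range is binned, and the ρ quantization and names are assumptions:

```python
import numpy as np

def hough_accumulate(grad_mag, grad_ang, n_rho=64):
    """Hough accumulation for the pectoral-muscle line (sketch): theta
    restricted to [120, 170] degrees in 4-degree bins, votes allowed only
    where |grad_ang - theta| < 2 degrees, each vote weighted by the
    gradient magnitude. The (rho, theta) origin is the image center."""
    thetas = np.deg2rad(np.arange(122.0, 170.0, 4.0))   # bin centers
    H, W = grad_mag.shape
    y0, x0 = (H - 1) / 2.0, (W - 1) / 2.0
    rho_max = np.hypot(x0, y0)
    acc = np.zeros((n_rho, thetas.size))
    tol = np.deg2rad(2.0)
    for y, x in zip(*np.nonzero(grad_mag > 0)):
        for j, th in enumerate(thetas):
            if abs(grad_ang[y, x] - th) < tol:
                rho = (x - x0) * np.cos(th) + (y - y0) * np.sin(th)
                k = int((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1))
                acc[k, j] += grad_mag[y, x]   # gradient-magnitude weighting
    return acc, thetas

# Usage: synthetic edge pixels along a line through the image center whose
# normal direction is 138 degrees; the peak lands in the matching cell.
grad_mag = np.zeros((64, 64))
grad_ang = np.zeros((64, 64))
th_true = np.deg2rad(138.0)
c = (64 - 1) / 2.0
for t in np.arange(-20.0, 20.0, 0.5):
    x = int(round(c - t * np.sin(th_true)))
    y = int(round(c + t * np.cos(th_true)))
    grad_mag[y, x] = 1.0
    grad_ang[y, x] = th_true
acc, thetas = hough_accumulate(grad_mag, grad_ang)
k, j = np.unravel_index(np.argmax(acc), acc.shape)
```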
After computing the accumulator cell values in the Hough space, a filtering
procedure is applied to eliminate all lines (pairs of parameters ρ and θ)
that are unlikely to represent the pectoral muscle. In this procedure, all lines
intercepting the top of the image outside the N1–N2 line segment (see Figure
5.64) or with slopes outside the range [120°, 170°] are removed. (In the
present notation, the x axis corresponds to 0°, and the chest wall is positioned
on the left-hand side; see Figure 5.65.) Each remaining accumulator cell is
also multiplied by the factor

f_{ρθ} = (μ/σ²) A,    (5.81)
(b)
FIGURE 5.64
(a) Image mdb042 from the Mini-MIAS database. (b) Approximate boundary
of the breast along with the automatically determined control points N1–N6
used to limit the ROI (rectangle marked) for the detection of the pectoral
muscle. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L.
Desautels, R.A. Borges, and A.F. Frère, "Automatic identification of the pectoral
muscle in mammograms", IEEE Transactions on Medical Imaging, 23:
232–245, 2004. © IEEE.
FIGURE 5.65
Coordinate system used to compute the Hough transform. The pectoral muscle
line detected is also shown. Reproduced with permission from R.J. Ferrari,
R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Automatic
identification of the pectoral muscle in mammograms", IEEE Transactions
on Medical Imaging, 23: 232–245, 2004. © IEEE.
where μ and σ² are, respectively, the mean and the variance of the gray-level
values in the area A of the pectoral muscle (see Figure 5.65), defined by the
straight line specified by the parameters ρ and θ. This procedure was applied
in order to enhance the Hough-transform peaks that define regions with the
property stated in Item 4 of the list provided earlier in this section (page 482).
The weight related to the area was designed to differentiate the true pectoral
muscle from the pectoralis minor; the latter could present a higher contrast
than the former in some cases, albeit enclosing a smaller area.
Finally, the parameters ρ and θ of the accumulator cell with the maximum
value are taken to represent the pectoral muscle line. Figure 5.66 shows the
Hough accumulator cells at the different stages of the procedure described
above for the mammogram in Figure 5.64 (a). The pectoral muscle line detected
for the same mammogram is shown in Figure 5.65.
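The multiplicative factor of Equation 5.81 can be sketched for one candidate line as below. This is an illustrative reading only: which side of the line is taken as the pectoral region, and the guard against zero variance, are assumptions:

```python
import numpy as np

def line_weight(img, rho, theta):
    """Factor (mean / variance) * area of Eq. 5.81 for one candidate line
    rho = (x - x0) cos(theta) + (y - y0) sin(theta), with the origin at
    the image center. The side taken as the pectoral region is assumed."""
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    d = ((x - (W - 1) / 2.0) * np.cos(theta)
         + (y - (H - 1) / 2.0) * np.sin(theta))
    region = d < rho                       # assumed chest-wall side of the line
    vals = img[region]
    mu, var = vals.mean(), vals.var()
    return (mu / var) * region.sum() if var > 0 else 0.0

# Usage: bright, nearly homogeneous left half (high mean, tiny variance)
# versus a textured dark right half; the homogeneous side scores higher.
yy, xx = np.mgrid[0:32, 0:32]
img = np.where(xx < 16, 200.0 + (xx + yy) % 2,
               (xx * 7 + yy * 13) % 97).astype(float)
w_left = line_weight(img, 0.0, 0.0)       # selects the left half
w_right = line_weight(img, 0.0, np.pi)    # selects the right half
```

The factor rewards exactly the property stated in Item 4: a dense (high mean) and homogeneous (low variance) region, scaled by its area.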
5.10.2 Detection using Gabor wavelets
In an improved method to detect the pectoral muscle edge, summarized by the
flowchart in Figure 5.67, Ferrari et al. [278] designed a bank of Gabor filters
to enhance the directional, piecewise-linear structures that are present in an
ROI containing the pectoral muscle. Figure 5.68 (a) illustrates a mammogram
from the Mini-MIAS database; Figure 5.68 (b) shows the ROI to be used for
the detection of the pectoral muscle, defined automatically as the rectangle
formed by the chest wall as the left-hand edge, and a vertical line through the
upper-most point on the skin-air boundary, drawn along the entire height of
the mammogram, as the right-hand edge. Differing from the
Hough-transform-based method described in Section 5.10.1, the ROI here is
defined to contain the entire pectoral muscle region, because the straight-line
representation is not used. Decomposition of the ROI into components with
different scale and orientation is performed by convolution of the ROI image
with a bank of tunable Gabor filters. The magnitude and phase components of
the filtered images are then combined and used as input to a post-processing
stage, as described in the following paragraphs.
Gabor wavelets: A 2D Gabor function is a Gaussian modulated by a
complex sinusoid, which can be specified by the frequency W of the sinusoid
and the standard deviations σ_x and σ_y of the Gaussian envelope as [381, 384]

ψ(x, y) = [1 / (2π σ_x σ_y)] exp{ −(1/2) [ x²/σ_x² + y²/σ_y² ] + j 2π W x }.    (5.82)
(See also Sections 8.4, 8.9, and 8.10.) Gabor wavelets are obtained by dilation
and rotation of ψ(x, y) in Equation 5.82 by using the generating function
(a) (b) (c)
FIGURE 5.66
Hough accumulator cells obtained at three stages of the procedure to detect
the pectoral muscle. The contrast of the images has been modified for improved
visualization. (a) Accumulator cells obtained by using the constraint
|φ_xy − θ| < 2° and 120° ≤ θ ≤ 170°. (b) After removing the lines intercepting
the top of the image outside the region defined by the control points N1–N2
(see Figure 5.65). (c) After applying the multiplicative factor f_{ρθ} = (μ/σ²) A.
Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels,
R.A. Borges, and A.F. Frère, "Automatic identification of the pectoral
muscle in mammograms", IEEE Transactions on Medical Imaging, 23: 232–245,
2004. © IEEE.
FIGURE 5.67
Flowchart of the procedure for the identification of the pectoral muscle by
using Gabor wavelets. Reproduced with permission from R.J. Ferrari, R.M.
Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Automatic identification
of the pectoral muscle in mammograms", IEEE Transactions on
Medical Imaging, 23: 232–245, 2004. © IEEE.
(a) (b)
FIGURE 5.68
(a) Image mdb028 from the Mini-MIAS database. (b) The ROI used to search
for the pectoral muscle region, defined by the chest wall and the upper limit
of the skin-air boundary. The box drawn in (a) is not related to the ROI in
(b). Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L.
Desautels, R.A. Borges, and A.F. Frère, "Automatic identification of the pectoral
muscle in mammograms", IEEE Transactions on Medical Imaging, 23:
232–245, 2004. © IEEE.
ψ_{mn}(x, y) = a^{−m} ψ(x′, y′),  a > 1,  m, n integers,    (5.83)

x′ = a^{−m} [ (x − x_0) cos θ_n + (y − y_0) sin θ_n ],
y′ = a^{−m} [ −(x − x_0) sin θ_n + (y − y_0) cos θ_n ],

where (x_0, y_0) is the center of the filter in the spatial domain; θ_n = nπ/K,
n = 1, 2, …, K; K is the number of orientations desired; and m and n indicate
the scale and orientation, respectively. The Gabor filter used by Ferrari et
al. [278] is expressed in the frequency domain as

Ψ(u, v) = [1 / (2π σ_u σ_v)] exp{ −(1/2) [ (u − W)²/σ_u² + v²/σ_v² ] },    (5.84)
where σ_u = 1/(2π σ_x) and σ_v = 1/(2π σ_y). The design strategy used by Ferrari et
al. was to design the filters so as to ensure that the half-peak magnitude
supports of the filter responses in the frequency spectrum touch one another,
as shown in Figure 5.69. By doing this, it can be ensured that the filters
will capture the maximum information with minimum redundancy. In order
for the designed bank of Gabor filters to be a family of admissible 2D Gabor
wavelets [385], the filters ψ(x, y) must satisfy the admissibility condition
of finite energy [386], which implies that their Fourier transforms are pure
bandpass functions having zero response at DC. This condition is achieved by
setting the DC gain of each filter as Ψ(0, 0) = 0, which causes the filters not
to respond to regions with constant intensity.
The following formulas provide the filter parameters σ_u and σ_v:

a = (U_h / U_l)^{1/(S−1)},    (5.85)

σ_u = (a − 1) U_h / [ (a + 1) √(2 ln 2) ],    (5.86)

σ_v = tan(π/(2K)) [ U_h − (2 ln 2) σ_u²/U_h ] / [ 2 ln 2 − (2 ln 2)² σ_u²/U_h² ]^{1/2},    (5.87)

where U_l and U_h denote the lower and upper center frequencies of interest.
The parameters K and S are, respectively, the number of orientations and
the number of scales in the desired multiresolution decomposition procedure.
The sinusoid frequency W is set equal to U_h, and m = 0, 1, …, S − 1.
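Equations 5.85–5.87 can be evaluated directly; a minimal sketch with hypothetical function names, using the parameter values quoted later for mammographic images (U_l = 0.05, U_h = 0.45, S = 4, K = 12):

```python
import math

def gabor_bank_params(U_l, U_h, S, K):
    """Scale factor a and Gaussian spreads (sigma_u, sigma_v) of
    Equations 5.85-5.87, chosen so that neighboring half-peak filter
    supports touch in the frequency plane."""
    two_ln2 = 2.0 * math.log(2.0)
    a = (U_h / U_l) ** (1.0 / (S - 1))                             # Eq. 5.85
    sigma_u = (a - 1.0) * U_h / ((a + 1.0) * math.sqrt(two_ln2))   # Eq. 5.86
    sigma_v = (math.tan(math.pi / (2.0 * K))                       # Eq. 5.87
               * (U_h - two_ln2 * sigma_u ** 2 / U_h)
               / math.sqrt(two_ln2 - two_ln2 ** 2 * sigma_u ** 2 / U_h ** 2))
    return a, sigma_u, sigma_v

# Values used by Ferrari et al.: for S = 4 scales over [0.05, 0.45],
# a = (0.45 / 0.05)^(1/3) = 9^(1/3), about 2.08.
a, sigma_u, sigma_v = gabor_bank_params(0.05, 0.45, 4, 12)
```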
In the application being considered, interest lies only in image analysis,
without the requirement of exact reconstruction or synthesis of the image from
FIGURE 5.69
Bank of Gabor filters designed in the frequency domain. Each ellipse represents
the range of the corresponding filter response from 0.5 to 1.0 in squared
magnitude (only one half of the response is shown for each filter). The sampling
of the frequency spectrum can be adjusted by changing the U_l, U_h, S,
and K parameters of the Gabor wavelets. Only the filters shown shaded are
used to enhance the directional piecewise-linear structures present in the ROI
images. The frequency axes are normalized. Reproduced with permission
from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F.
Frère, "Automatic identification of the pectoral muscle in mammograms",
IEEE Transactions on Medical Imaging, 23: 232–245, 2004. © IEEE.
the filtered components. Therefore, instead of using the wavelet coefficients,
Ferrari et al. [278] used the magnitude of the filter response, computed as

a_{mn}(x, y) = f(x, y) ∗ ψ^{even}_{mn}(x, y),    (5.88)

where ψ^{even}_{mn}(x, y) indicates the even-symmetric part of the complex Gabor
filter, f(x, y) is the ROI being filtered, and ∗ represents 2D convolution. The
phase and magnitude images, indicating the local orientation, were composed
by vector summation of the K filtered images ([387], Chapter 11); see also
Figure 8.10.
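The even-symmetric filtering of Equation 5.88 and the vector summation over orientations can be sketched as below. This is a simplified illustration (circular FFT convolution, illustrative filter parameters, hypothetical names), not the implementation of Ferrari et al.:

```python
import numpy as np

def gabor_even_kernel(size, W, sigma_x, sigma_y, theta):
    """Even-symmetric (cosine) part of the Gabor function of Equation 5.82,
    rotated to orientation theta; the mean is removed so that the DC gain
    is zero (no response to regions of constant intensity)."""
    r = np.arange(size) - size // 2
    y, x = np.meshgrid(r, r, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-0.5 * (xr ** 2 / sigma_x ** 2
                       + (-x * np.sin(theta) + y * np.cos(theta)) ** 2
                       / sigma_y ** 2))
    k = g * np.cos(2.0 * np.pi * W * xr)
    return k - k.mean()

def convolve_same(img, kernel):
    """Circular 2D convolution via the FFT, kernel centered (sketch)."""
    pad = np.zeros_like(img, dtype=float)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def oriented_response(img, thetas, W=0.25, sigma_x=2.0, sigma_y=3.0, size=15):
    """Magnitude and phase by vector summation of the oriented responses:
    each orientation contributes its response along the unit vector of
    theta (parameter values here are illustrative, not the book's)."""
    vx = np.zeros_like(img, dtype=float)
    vy = np.zeros_like(img, dtype=float)
    for th in thetas:
        resp = convolve_same(img,
                             gabor_even_kernel(size, W, sigma_x, sigma_y, th))
        vx += resp * np.cos(th)
        vy += resp * np.sin(th)
    return np.hypot(vx, vy), np.arctan2(vy, vx)

# Usage: a vertical line in a flat image, filtered at the three orientations
# used for the pectoral muscle; the response stays confined to a band
# around the line, and the flat background gives (essentially) zero.
img = np.zeros((32, 32))
img[:, 16] = 1.0
mag, phase = oriented_response(img, np.deg2rad([45.0, 60.0, 75.0]))
```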
The area of each ellipse indicated in Figure 5.69 represents the frequency
spectrum covered by the corresponding Gabor filter. Once the range of the
frequency spectrum is adjusted, the choice of the number of scales and orientations
is made in order to cover the range of the spectrum as required.
The choice of the number of scales (S) and orientations (K) used by Ferrari
et al. for detecting the pectoral muscle was based upon the resolution required
for detecting oriented information with high selectivity [388, 389]. The
spatial-frequency bandwidths of the simple and complex cells in mammalian
visual systems have been found to range from 0.5 to 2.5 octaves, clustering
around 1.2 octaves and 1.5 octaves, and their angular bandwidth is expected
to be smaller than 30° [389, 390]. By selecting U_l = 0.05, U_h = 0.45, S = 4,
and K = 12 for processing mammographic images, Ferrari et al. [278, 381]
indirectly adjusted the Gabor wavelets to have a frequency bandwidth of approximately
one octave and an angular bandwidth of 15°.
In the work of Ferrari et al., all images were initially oriented so that the
chest wall was always positioned on the left-hand side; then, the pectoral
muscle edge in correctly acquired MLO views will be located between 45°
and 90° ([391], p. 34). (Here, the orientation of the pectoral muscle edge is
defined as the angle between the horizontal line and an imaginary straight
line representing the pectoral muscle edge.) For this reason, Ferrari et al.
used only the Gabor filters with the mean orientation of their responses in
the image domain at 45°, 60°, and 75°; the corresponding frequency-domain
responses are shown shaded in Figure 5.69.
Post-processing and pectoral muscle edge detection: In the method
of Ferrari et al., after computing the phase and magnitude images by vector
summation, the relevant edges in the ROI are detected by using an algorithm
proposed by Ma and Manjunath [392] for edge-flow propagation, as described
below. The magnitude A(x, y) and phase φ(x, y) at each image location (x, y)
are used to represent the edge-flow vector, instead of using a predictive coding
model as initially proposed by Ma and Manjunath. The phase at each point
in the image is propagated until it reaches a location where two opposite
directions of flow encounter each other, as follows:
1. Set n = 0 and E_0(x, y) = [ A(x, y) cos φ(x, y), A(x, y) sin φ(x, y) ].
2. Set the edge-flow vector E_{n+1}(x, y) at iteration n + 1 to zero.
3. At each image location (x, y), identify the neighbor (x′, y′) that has the
same direction θ as that of the edge-flow vector E_n(x, y). The direction
θ is computed as θ = tan^{−1}[ (y′ − y) / (x′ − x) ].
4. If E_n(x′, y′) · E_n(x, y) > 0,
then E_{n+1}(x′, y′) = E_{n+1}(x′, y′) + E_n(x, y);
else E_{n+1}(x, y) = E_{n+1}(x, y) + E_n(x, y),
where the symbol · indicates the dot-product operation.
5. If nothing has changed,
then stop iterating;
else go to Step 2 and repeat the procedure.
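The propagation loop above can be illustrated in one dimension. This is a deliberately simplified sketch of the idea, not the 2D implementation: each site passes its flow to the neighbor it points at when the two agree in direction, and keeps it otherwise, so the flow energy piles up where opposing flows meet:

```python
import numpy as np

def edge_flow_1d(magnitude, direction, max_iter=50):
    """1D sketch of edge-flow propagation: signed flow E propagates along
    its own direction and accumulates; opposing flows (and image borders)
    stop it, marking boundary locations.
    direction: +1 (rightward) or -1 (leftward) per site."""
    E = magnitude.astype(float) * direction
    for _ in range(max_iter):
        E_new = np.zeros_like(E)
        changed = False
        for x in range(len(E)):
            if E[x] == 0:
                continue
            nb = x + (1 if E[x] > 0 else -1)
            if 0 <= nb < len(E) and E[nb] * E[x] > 0:
                E_new[nb] += E[x]        # same direction: pass the flow on
                changed = True
            else:
                E_new[x] += E[x]         # opposing flow or border: keep
        if not changed:
            break
        E = E_new
    return E

# Usage: all flows point toward the boundary between sites 4 and 5;
# after propagation, the energy concentrates on those two sites.
mag = np.ones(10)
dirn = np.array([+1] * 5 + [-1] * 5)
E = edge_flow_1d(mag, dirn)
```

The boundary candidates are then the locations with nonzero flow arriving from two opposite directions, exactly as described for the 2D case in the text that follows.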
Figures 5.70 (b) and (c) illustrate an example of the orientation map before
and after applying the edge-flow propagation procedure to the image shown
in part (a) of the same figure.
(a) (b) (c)
FIGURE 5.70
(a) Region indicated by the box in Figure 5.68 (a) containing a part of the
pectoral muscle edge. (b) and (c) Edge-flow map before and after propagation.
Each arrowhead represents the direction of the edge-flow vector at the
corresponding position in the image. Reproduced with permission from R.J.
Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Automatic identification of the pectoral muscle in mammograms", IEEE
Transactions on Medical Imaging, 23: 232–245, 2004. © IEEE.
After propagating the edge-flow vectors, the boundary candidates for the
pectoral muscle are obtained by identifying the locations that have nonzero
edge-flow arriving from two opposite directions. Weak edges are eliminated by
thresholding the ROI image with a threshold value of 10% of the maximum
gray-level value in the ROI. Figure 5.71 shows images resulting from each
stage of the procedure for pectoral muscle detection.
In order to connect the disjoint boundary segments that are usually present
in the image after the edge- ow propagation step, a half-elliptical neighbor-
hood is dened with its center located at each boundary pixel being processed.
The half-ellipse is adjusted to be proportional to the length of the contour,
with R1 = 0:2 Clen and R2 = 5 pixels (where R1 and R2 are, respectively,
the major and minor axes of the half-ellipse, and Clen is the length of the
boundary), with its major axis oriented along the direction of the contour
line. If an ending or a starting pixel of an unconnected line segment is found
in the dened neighborhood, it is connected by linear interpolation. The iter-
ative method stops when all disjoint lines are connected this procedure took
5 to 20 iterations with the images used in the work of Ferrari et al. Next,
the false edges that may result in the ltered images due to structures inside
the broglandular disc or due to the ltering process see Figure 5.71 (c){(d)]
are removed by checking either if their limiting points are far away from the
upper and left-hand side of the ROI, or if the straight line having the same
slope as the pectoral muscle edge candidate intercepts outside the upper and
left-hand limits of the ROI. Finally, the largest line in the ROI is selected to
represent the pectoral muscle edge see Figure 5.71 (e).
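The linear-interpolation step that joins two segment endpoints can be sketched as below. This illustrates only the pixel-level joining of endpoints, not the half-elliptical neighborhood search itself; the function name and the parametric sampling approach are assumptions.

```python
import numpy as np

def connect_by_linear_interpolation(p0, p1):
    """Return integer pixel coordinates joining endpoints p0 and p1
    (inclusive) by sampling the straight line between them."""
    (x0, y0), (x1, y1) = p0, p1
    # one sample per pixel along the longer axis of the segment
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1
    xs = np.round(np.linspace(x0, x1, n)).astype(int)
    ys = np.round(np.linspace(y0, y1, n)).astype(int)
    return list(zip(xs.tolist(), ys.tolist()))
```

Marking the returned coordinates in the boundary image joins the two disjoint segments into a single connected line.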

5.10.3 Results of application to mammograms


Ferrari et al. tested their methods with 84 images randomly chosen from the
Mini-MIAS database [376]. All images were MLO views with 200 µm sampling
interval and 8-bit gray-level quantization. For reduction of processing
time, all images were downsampled with a fixed sampling distance so that the
original images with the matrix size of 1 024 × 1 024 pixels were transformed
to 256 × 256 pixels. The results obtained with the downsampled images were
mapped to the original 1 024 × 1 024 mammograms for subsequent analysis
and display.
The results obtained with both the Hough-transform-based and Gabor-wavelet-based
methods were evaluated in consultation with two radiologists
experienced in mammography. The test images were displayed on a computer
monitor with a diagonal size of 47.5 cm and a dot pitch of 0.27 mm. By using
the Gimp program [380], the contrast and brightness of each image were manually
enhanced so that the pectoral muscle edge could be easily visualized. Then,
the pectoral muscle edges were manually drawn and the results printed on
paper by using a laser printer with 600 dpi resolution. The zoom option of
the Gimp program was used to aid in drawing the edges. The pectoral muscle
edges of all images were checked by a radiologist using the printed images


Figure 5.71 (c) (d)



FIGURE 5.71
Result of each stage of the Gabor-wavelet-based method: (a) ROI used.
(b) Image magnitude after filtering and vector summation, enhanced by
gamma correction (γ = 0.7). (c)–(d) Results before and after the post-processing
stage. (e) Final boundary. Reproduced with permission from R.J.
Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Automatic identification of the pectoral muscle in mammograms", IEEE
Transactions on Medical Imaging, 23: 232–245, 2004. © IEEE.



(hardcopy) along with the displayed images (softcopy); the assessment was
recorded for further analysis.

The segmentation results were evaluated based upon the number of FP and
FN pixels normalized with reference to the corresponding numbers of pixels
in the regions demarcated by the manually drawn edges. The reference region
for the pectoral muscle was defined as the region contained between the left-hand
edge of the image and the hand-drawn pectoral muscle edge. An FP
pixel was defined as a pixel outside the reference region that was included
in the pectoral region segmented. An FN pixel was defined as a pixel in the
reference region that was not present within the segmented region. Table 5.1
provides a summary of the results.

TABLE 5.1
Average false-positive and false-negative rates in the detection of the
pectoral muscle by the Hough-transform-based and Gabor-wavelet-based
methods.

Method                      Hough              Gabor
FP                          1.98 ± 6.09%       0.58 ± 4.11%
FN                          25.19 ± 19.14%     5.77 ± 4.83%
Number of images with:
(FP and FN) < 5%            10                 45
5% < (FP and FN) < 10%      8                  22
(FP and FN) > 10%           66                 17

Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels,
R.A. Borges, and A.F. Frère, "Automatic identification of the pectoral
muscle in mammograms", IEEE Transactions on Medical Imaging, 23: 232–245,
2004. © IEEE.
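The FP and FN measures defined above can be computed directly from binary masks. The sketch below assumes a mask-based formulation and illustrative function names; following the definitions in the text, both rates are normalized by the number of pixels in the reference region.

```python
import numpy as np

def fp_fn_rates(segmented, reference):
    """FP rate: pixels segmented outside the reference region.
    FN rate: reference-region pixels missed by the segmentation.
    Both are normalized by the size of the reference region."""
    seg = segmented.astype(bool)
    ref = reference.astype(bool)
    n_ref = ref.sum()
    fp = np.logical_and(seg, ~ref).sum() / n_ref
    fn = np.logical_and(ref, ~seg).sum() / n_ref
    return fp, fn
```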

In the example illustrated in Figure 5.72, detection using the Hough transform
resulted in an underestimated pectoral region due to the limitation imposed
by the straight-line hypothesis used to represent the pectoral muscle
edge; this translated to a high FN rate. The segmentation result of the Gabor-wavelet-based
method is closer to the pectoral muscle edge drawn by the
radiologist.

Figure 5.72 (a)

Figure 5.72 (b)



Figure 5.72 (c)

Figure 5.73 shows a case where the pectoral muscle appears as an almost-straight
line. Even in this case, the result obtained using Gabor wavelets
is more accurate than the result obtained using the Hough transform. The
Gabor-wavelet-based method provided good results even in cases where the
pectoralis minor was present in the mammogram (see Figure 5.74).
Detection of the pectoral muscle was used as a preprocessing step in segmentation
of the fibroglandular discs in mammograms for the analysis of bilateral
asymmetry by Ferrari et al. [381] (see Section 8.9).

5.11 Application: Improved Segmentation of Breast Masses by Fuzzy-set-based Fusion of Contours and Regions

Given the difficult nature of the problem of the detection of masses and tumors
in a mammogram, the question arises: "Can the problem benefit from the use
of multiple approaches?" Guliato et al. [276] proposed two approaches to the
detection problem: one based upon contour detection, and the other based

FIGURE 5.72
Results obtained for the image mdb003 from the Mini-MIAS database.
(a) Original image. (b) Hand-drawn pectoral muscle edge superimposed on
the histogram-equalized image. (c) and (d) Pectoral muscle edges detected by
the Hough-transform-based and Gabor-wavelet-based methods, respectively,
superimposed on the original image. Reproduced with permission from R.J.
Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Automatic identification of the pectoral muscle in mammograms", IEEE
Transactions on Medical Imaging, 23: 232–245, 2004. © IEEE.

Figure 5.73 (a)

Figure 5.73 (b)



Figure 5.73 (c)

upon a fuzzy region-growing method. The former method is simple and easy
to implement, always produces closed contours, and yields good results even
in the presence of high levels of noise (see Section 5.5.2); the latter produces
a fuzzy representation of the ROI, and preserves the uncertainty around the
boundaries of tumors (see Section 5.5.3). As a follow-up, Guliato et al. [277]
considered the following question: How may we combine the results of the
two approaches, which may be considered to be complementary, so as to
obtain a possibly better result?
In generic terms, the process of image segmentation may be defined as a
procedure that groups the pixels of an image according to one or more local
properties. A property of pixels is said to be local if it depends only on a pixel
or its immediate neighborhood (for example, gray level, gradient, and local
statistical measures). Techniques for image segmentation may be divided into
two main categories: those based on discontinuity of local properties, and
those based on similarity of local properties [305]. The techniques based on
discontinuity are simple in concept, but generally produce segmented regions
with disconnected edges, requiring the application of additional methods (such
as contour following). Techniques based on similarity, on the other hand, depend
on a seed pixel (or a seed subregion) and on a strategy to traverse the
image for region growing. Because different segmentation methods explore
distinct, and sometimes complementary, characteristics of the given image

FIGURE 5.73
Results obtained for the image mdb008 from the Mini-MIAS database.
(a) Original image. (b) Hand-drawn pectoral muscle edge superimposed on
the histogram-equalized image. (c) and (d) Pectoral muscle edges detected by
the Hough-transform-based and Gabor-wavelet-based methods, respectively,
superimposed on the original image. Reproduced with permission from R.J.
Ferrari, R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère,
"Automatic identification of the pectoral muscle in mammograms", IEEE
Transactions on Medical Imaging, 23: 232–245, 2004. © IEEE.

FIGURE 5.74
Image mdb110 from the Mini-MIAS database, showing the result of the detection
of the pectoral muscle in the presence of the pectoralis minor. (a) Edge
candidates after the post-processing stage. (b) Final boundary detected by the
Gabor-wavelet-based method. Reproduced with permission from R.J. Ferrari,
R.M. Rangayyan, J.E.L. Desautels, R.A. Borges, and A.F. Frère, "Automatic
identification of the pectoral muscle in mammograms", IEEE Transactions
on Medical Imaging, 23: 232–245, 2004. © IEEE.
(such as contour detection and region growing), it is natural to consider combinations
of techniques that could possibly produce better results than any
one technique on its own.
Although cooperative combination of the results of segmentation procedures
can offer good results, there are only a few publications devoted to this subject
[314, 316, 393, 394, 395, 396, 397]. This is partly due to the difficulty in
simultaneously handling distinct local properties, and due to the limitations
of the commonly used Boolean-set operations in combining multiple results
of image segmentation. Using the theory of fuzzy sets, it is possible to define
several classes of fusion operators that generalize Boolean operators. Guliato
et al. [277] proposed a general fusion operator, oriented by a finite automaton,
to combine information from different sources. The following paragraphs
provide descriptions of the method and present results of the fusion operator
applied to tumor regions in mammographic images.
Elementary concepts of fusion operators: A fusion operator over fuzzy
sets is formally defined as a function h : [0, 1]ⁿ → [0, 1], where n ≥ 2 represents
the number of sources of input information. Fusion operators may be classified
according to their behavior into three classes: conjunctive, disjunctive, and
compromise operators [351, 398], as follows:
An operator is said to be conjunctive if h(a1, a2, …, an) ≤ min{a1, a2,
…, an}, where ai ∈ [0, 1]. Conjunctive operators are those that represent
a consensus between the items of information being combined.
They generalize classical intersection, and agree with the source that
offers the smallest measure while trying to obtain simultaneous satisfaction
of its criteria. We can say that conjunctive operators present a
severe behavior.
An operator is said to be disjunctive if h(a1, a2, …, an) ≥ max{a1, a2,
…, an}. Disjunctive operators generalize classical union. They agree
with the source that offers the greatest measure, and express redundancy
between criteria. We can say that they present a permissive behavior.
An operator is said to be a compromise operator if min{a1, a2, …, an} ≤
h(a1, a2, …, an) ≤ max{a1, a2, …, an}. Compromise operators produce
an intermediate measure between items of information obtained from
several sources. They present cautious behavior.
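The three classes above can be checked numerically. The sketch below classifies a two-input operator as conjunctive, disjunctive, or compromise by sampling a grid of inputs in [0, 1]; the function name and the sampling grid are illustrative choices, not part of the formal definitions.

```python
import itertools

def classify_operator(h, samples=None):
    """Classify a 2-input fusion operator h : [0,1]^2 -> [0,1]
    by comparing it with min and max over a grid of inputs."""
    if samples is None:
        samples = [i / 10 for i in range(11)]
    pairs = list(itertools.product(samples, repeat=2))
    if all(h(a, b) <= min(a, b) for a, b in pairs):
        return "conjunctive"      # severe behavior
    if all(h(a, b) >= max(a, b) for a, b in pairs):
        return "disjunctive"      # permissive behavior
    if all(min(a, b) <= h(a, b) <= max(a, b) for a, b in pairs):
        return "compromise"       # cautious behavior
    return "none of the three"
```

For example, the product a·b behaves conjunctively, max disjunctively, and the arithmetic mean is a compromise operator.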
Bloch [399] presented a classification scheme that describes a fusion operator
in more refined terms: not only as conjunctive, disjunctive, or compromise, but
also according to its behavior with respect to the information values being
combined (input values): context-independent constant-behavior operators,
which maintain the same behavior independent of the input variables; context-independent
variable-behavior operators, whose behavior varies according to
the input variables; and context-dependent operators, whose behavior varies
as in the previous case, also taking into account the agreement between the
sources and their reliability. The following paragraphs provide the description
of a class of fusion operators that generalize context-dependent operators,
taking into consideration different degrees of confidence in the sources, specific
knowledge, and spatial context while operating with conceptually distinct
sources.
Considerations in the fusion of the results of complementary segmentation
techniques: Figure 5.75 illustrates an overlay of two segmentation
results obtained by two complementary techniques, region growing,
represented by a fuzzy set Sr, and closed-contour detection, represented by
a fuzzy set Sc, for the same ROI. The straight line within Sr indicates a
possible artifact. The results are not the same: different segmentation algorithms
may produce different results for the same ROI. A fusion operator
designed to aggregate such entities should produce a third entity that takes
into consideration the inputs and is better than either input on its own. In
order to realize this, the fusion operator must be able to identify regions of
certainty and uncertainty during its execution.


FIGURE 5.75
Superimposition of the results of two complementary segmentation techniques.
The circular region Sr represents the result of region growing. The square box
Sc represents the result of contour detection. Reproduced with permission
from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels,
"Fuzzy fusion operators to combine results of complementary medical
image segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389,
2003. © SPIE and IS&T.

Considering a pixel p being analyzed, let μSr(p) be the membership degree
of p, such that Sr = μSr : I → [0, 1], where I is the original image. Also, let
μSc(p) be the membership degree of p, such that Sc = μSc : I → [0, 1]. It is
important to note that μSc(p) is zero when the pixel p is inside or outside of Sc,
and that μSc(p) possesses a high value when p is on the contour represented
by Sc. Similarly, μSr(p) is high when p belongs to the region, and μSr(p) is
low or zero when p does not belong to the region. With respect to the fusion
operator, four situations may be identified considering the position of p (see
Figure 5.76):
FIGURE 5.76
The four different situations treated by the fusion operator. Reproduced with
permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and
J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary
medical image segmentation techniques", Journal of Electronic Imaging,
12(3): 379–389, 2003. © SPIE and IS&T.

1. p belongs to the intersection of Sr and Sc [that is, μSr(p) is high and
μSc(p) is zero]. In this case the pixel p belongs to the final segmentation
result with a high membership degree. The sources agree with respect to
the inclusion of the pixel p in the final result. This is a case of certainty.
2. p does not belong to Sr or belongs to Sr with a low membership degree,
and is inside Sc [that is, μSr(p) is low or zero and μSc(p) is zero]. In
this case the sources disagree with respect to the inclusion of the pixel
p in the final result. This is a case of uncertainty.
3. p belongs to the contour line of Sc [that is, μSc(p) is high] and does not
belong to Sr [that is, μSr(p) is low or zero]. As in Item 2 above, this
is an uncertainty situation. Note that although the inputs are different
from those presented in Item 2 above, the result of the fusion operator
is expected to represent uncertainty.
4. p belongs to Sr [that is, μSr(p) is high] and is outside of Sc [that is,
μSc(p) is zero]. Here again we have an uncertainty case. Observe that
although the inputs are similar to those in Item 1 [that is, μSr(p) is high
and μSc(p) is zero], the result of the fusion operator is expected to be
different.
We can conclude from the discussion above that a practically applicable
fusion operator should be composed of a number of basic fusion operators,
and that the spatial position of the pixel being analyzed is an important item
of information that should be used in determining the basic fusion operator
to be applied to the pixel. Based upon these observations, Guliato et al. [277]
proposed a general fusion operator oriented by a finite automaton, where the
finite set of states of the automaton is determined by the spatial position of
the pixel being analyzed, and where the transition function (to be defined
later) depends on the strategy used to traverse the image.
An important question to be considered in fusion is the reliability of the
sources (original segmentation results). The result of the fusion operator depends
on how good the original segmentation results are. The evaluation of
the individual segmentation results is not a component of the fusion procedure,
although parameters are included in the definitions of the operators to
represent the reliability of the sources; it is assumed that the parameters are
determined using other methods.
General finite-automaton-oriented fusion operators: Formally, a fusion
operator oriented by a finite automaton that aggregates n sources may
be defined as an ordered pair <H, M>, where [351]:
H = {h1, h2, …, hk} is a finite set of basic fusion operators, where hi
are functions that map [0, 1]ⁿ → [0, 1], n ≥ 2.
M = (Q, Σ, δ, q0, F) is a finite automaton, where:
- Q is a finite set of states,
- Σ is a finite input alphabet,
- δ is a transition function that maps Q × Σ → Q, where × is the
Cartesian product operator,
- q0 ∈ Q is an initial state, and
- F ⊆ Q is the set of final states.
In the present case, the alphabet Σ is given by a finite collection of labels
associated with the Cartesian product of finite partitions of the interval [0, 1].
For example, suppose that, coming from different motivations, we are dividing
[0, 1] into two finite partitions P1 and P2, where P1 divides the values between
'low' and 'high', and P2 between 'good' and 'bad'. Our alphabet may be
composed of Σ = {0, 1, 2} representing, for example, the combinations (low,
good), (low, bad), and (high, good), respectively. Observe that we are not
necessarily using the whole set of possibilities.
The interpretation of the transition function δ of a finite automaton is the
following: δ(qi, a) = qj is a valid transition if, and only if, the automaton can
go from the state qi to qj through the input a. Sometimes, qi and qj could
be the same state. If there is a transition from the state qi to qj through
the input a, then there is a directed arc from qi to qj with the label a in the
graphical representation (transition diagram) of the specific automaton; see
Figure 5.77.
Application of the fusion operator to image segmentation: The
fusion operator proposed by Guliato et al. [277] is designed to combine the
results obtained from two segmentation techniques that explore complementary
characteristics of the image: one based on region growing, and the other
FIGURE 5.77
Graphical representation of the transition function given by δ(qi, a) = qj and
δ(qi, b) = qi. Reproduced with permission from D. Guliato, R.M. Rangayyan,
W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to
combine results of complementary medical image segmentation techniques",
Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.

based on closed-contour detection [276]. The result of fusion obtained is a
fuzzy set that represents the agreement or disagreement between the input
sources.
Let Sr (based on region growing) and Sc (based on closed-contour detection)
represent the two segmented images to be combined, as shown in Figure 5.75.
The process starts with a seed pixel selected by the user. The seed pixel must
belong to the intersection Sr ∩ Sc. It is assumed that Sr and Sc are each
endowed with a reliability measure, given by a number in the interval [0, 1].
The fusion operator is represented by O = <H, M>, where H = {h1, h2,
…, h6} is a collection of six basic fusion operators (that take into consideration
the reliability measures of the sources, as explained below), and M is a
finite automaton that governs the actions of the operator.
In the following description of the basic fusion operators, the parameters
Cr and Cc range within the interval [0, 1] and denote the reliability measures
of the sources Sr and Sc, respectively. The parameters are used to indicate
the influence that a given source should have on the final result of the fusion
operation: the higher the value, the larger the influence of the source.
The result of each basic fusion operator should give information about the
agreement among the sources being analyzed. The absence of conflict is represented
by a membership degree equal to 1 or 0; that is, both the sources
agree or do not agree with respect to the membership of the given pixel in
the ROI. Maximal conflict is represented by a membership degree equal to 0.5;
in this case, the sources do not agree with respect to the membership of the
given pixel. Intermediate membership degrees denote intermediate degrees of
agreement.
Let pij be the jth pixel of the segmented image Si and μSi(pij) be the
membership degree of the pixel pij, where i ∈ {r, c}, j = 1, 2, …, m, and m is
the total number of pixels in the image I. (Note: In the present section, we
are using only one index j to represent the position of a pixel in an image.)
Then, the basic fusion operators are defined as follows:
1. h1 = max{Cr μSr(prj), Cc, 0.5}.
This is a disjunctive operator that associates with the pixels in Sr ∩ Sc
new membership degrees taking into account the source with the greater
reliability measure (see h1 in Figure 5.78).
2. if max(Cr, Cc) ≤ 0.5
then h2 = 0.5
else if (Cr ≤ 0.5)
then h2 = Cc μSc(pcj)
else if (Cc < 0.5)
then h2 = Cr μSr(prj)
else h2 = [1/(Cr + Cc)] [Cr μSr(prj) + Cc μSc(pcj)].
This is a compromise operator that acts on the pixels belonging to the
transition region between the interior and the exterior of the result of
contour detection (see h2 in Figure 5.78).
3. if max(Cr, Cc) ≤ 0.5
then h3 = 0.5
else if (Cr ≤ 0.5)
then h3 = Cc
else if (Cc ≤ 0.5)
then h3 = Cr μSr(prj)
else h3 = [1/(Cr + Cc)] [Cr μSr(prj) + Cc].
This is a compromise operator that acts on the pixels lying outside the
result of region growing and belonging to the interior of the result of
contour detection (see h3 in Figure 5.78).
4. if max(Cr, Cc) ≤ 0.5
then h4 = 0.5
else if (Cr ≤ 0.5)
then h4 = 0
else if (Cc ≤ 0.5)
then h4 = Cr μSr(prj)
else h4 = [1/(Cr + Cc)] √{Cr μSr(prj) + [1 − Cc]²}.
This is a compromise operator that acts on the pixels lying outside the
result of contour detection and belonging to the interior of the result
of region growing (see h4 in Figure 5.78). Artifacts within the region-growing
result (as indicated schematically by the line segment inside the
circle in Figure 5.78) are rejected by this operator.
5. h5 = max{Cr μSr(prj), Cc μSc(pcj), 0.5}.
This is a disjunctive operator that acts on the transition pixels lying in
the intersection Sr ∩ Sc (see h5 in Figure 5.78).
6. if max(Cr, Cc) ≤ 0.5
then h6 = 0.0
else if (Cr ≤ 0.5)
then h6 = 0.0
else if (Cc ≤ 0.5)
then h6 = Cr μSr(prj)
else h6 = min{Cr μSr(prj), [1 − Cc]}.
This is a conjunctive operator that acts on the exterior of Sr ∪ Sc and
determines a limiting or stopping condition for the operator (see h6 in
Figure 5.78).
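Operators such as h1, h2, and h6 can be transcribed directly from the definitions above; the sketch below does so for these three (the remaining operators follow the same pattern). The function signatures are illustrative, and the resulting values can be compared against the worked examples in Tables 5.2 and 5.3.

```python
def h1(Cr, mu_r, Cc):
    """Disjunctive operator for pixels in the intersection of Sr and Sc."""
    return max(Cr * mu_r, Cc, 0.5)

def h2(Cr, mu_r, Cc, mu_c):
    """Compromise operator for the transition region of the contour."""
    if max(Cr, Cc) <= 0.5:
        return 0.5                      # neither source is reliable
    if Cr <= 0.5:
        return Cc * mu_c                # trust the contour source
    if Cc < 0.5:
        return Cr * mu_r                # trust the region source
    return (Cr * mu_r + Cc * mu_c) / (Cr + Cc)   # weighted average

def h6(Cr, mu_r, Cc):
    """Conjunctive operator for the exterior of Sr and Sc."""
    if max(Cr, Cc) <= 0.5 or Cr <= 0.5:
        return 0.0
    if Cc <= 0.5:
        return Cr * mu_r
    return min(Cr * mu_r, 1.0 - Cc)
```

For instance, with Cr = 0.8, μSr = 0, Cc = 1.0, and μSc = 1.0, h2 gives 1/1.8 ≈ 0.56, the weighted-averaging entry in the last row of Table 5.3.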
Description of the finite automaton: The finite automaton M =
(Q, Σ, δ, q0, F) in O is defined by the following entities:
A set of finite states Q = {a, b, c}, where
- state a indicates that the pixel being analyzed belongs to the interior
of the contour,
- state b indicates that the pixel being analyzed belongs to the contour,
and
- state c indicates that the pixel being analyzed belongs to the exterior
of the contour (see Figure 5.79).
FIGURE 5.78
The regions where the six basic fusion operators are applied are indicated by
{h1, h2, h3, h4, h5, h6}. Reproduced with permission from D. Guliato, R.M.
Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion
operators to combine results of complementary medical image segmentation
techniques", Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE
and IS&T.

A finite input alphabet Σ = {I1, I2, I3, I4}.

Let Π1 and Π2 be two finite partitions of [0, 1], where Π1 = Π2 =
{high, low}. We can choose the classes high and low as follows:
low = [0, 0.5),
high = [0.5, 1.0];
pij ∈ high if μSi(pij) ≥ 0.5 for j = 1, 2, …, m, and
pij ∈ low if μSi(pij) < 0.5 for j = 1, 2, …, m,
where pij, i ∈ {r, c}, and j = 1, 2, …, m, identify the ith source and the
jth pixel, and μSi(pij) is the membership degree of the pixel pij in Si.
The finite input alphabet Σ is produced by a function that maps
Π1 × Π2 → Σ, where:
- (high, low) → I1. The pixel being analyzed presents a high membership
degree in the region-growing segmentation result and a low
membership degree in the closed-contour result. This input represents
a certainty or uncertainty situation depending on the spatial
position of the pixel being analyzed; see Figure 5.80.
- (high, high) → I2. The pixel being analyzed presents a high membership
degree in the region-growing segmentation result and a high
membership degree in the closed-contour result. This indicates an
intersection case; see Figure 5.80.
- (low, high) → I3. The pixel being analyzed presents a low membership
degree in the region-growing segmentation result and a high
membership degree in the closed-contour result. This indicates an
uncertainty case; see Figure 5.80.
- (low, low) → I4. The pixel being analyzed presents a low membership
degree in the region-growing segmentation result and a low
membership degree in the closed-contour result. This indicates an
uncertainty case if the pixel belongs to the interior of the contour;
in the opposite case, this indicates a stopping or limiting condition
of the fusion operator; see Figure 5.80.
A transition diagram δ of M, as shown in Figure 5.81.
The transition diagram illustrates the situations when the basic fusion
operator is executed. The analysis begins with a pixel that belongs to
the intersection of the two segmentation results. The first input must be
of type I1; the initial state of the automaton is a, which corresponds to
the fact that the pixel belongs to the interior of the contour. The analysis
procedure is first applied to all of the pixels inside the contour. While
the inputs are I1 or I4, the operators h1 or h3 will be applied and the
automaton remains in state a. When an input of type I2 or I3 arrives,
the automaton goes to state b to inform the analysis process that the
pixel being processed is on the boundary given by the contour-detection
method. At this stage, all the pixels on the contour are processed.
While the inputs are I2 or I3 and the operators h5 or h2 are applied, the
automaton will remain in state b. If, while in state b, the input I1 or I4
occurs (and the operators h4 or h6 applied), the automaton goes to state
c, indicating that the pixel being analyzed is outside the contour. All
the pixels outside the contour are processed at this stage. Observe that,
depending upon the state of the automaton, different fusion operators
may be applied to the same inputs. As indicated by the transition
diagram in Figure 5.81, all of the pixels in the interior of the contour
are processed first; all of the pixels on the contour are processed next,
followed by the pixels outside the contour.
The initial state q0 ∈ Q, with q0 = {a}.
The set of final states F = {c}, where F ⊆ Q. In the present case, F
has only one element.
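The automaton and its operator assignments can be encoded as a lookup table. The sketch below transcribes the transitions described in the paragraphs above; the dictionary layout and the names DELTA and run are assumptions made for the illustration.

```python
# (current state, input) -> (next state, basic operator to apply)
DELTA = {
    ('a', 'I1'): ('a', 'h1'), ('a', 'I4'): ('a', 'h3'),
    ('a', 'I2'): ('b', 'h5'), ('a', 'I3'): ('b', 'h2'),
    ('b', 'I2'): ('b', 'h5'), ('b', 'I3'): ('b', 'h2'),
    ('b', 'I1'): ('c', 'h4'), ('b', 'I4'): ('c', 'h6'),
    ('c', 'I1'): ('c', 'h4'), ('c', 'I2'): ('c', 'h4'),
    ('c', 'I3'): ('c', 'h6'), ('c', 'I4'): ('c', 'h6'),
}

def run(inputs, q0='a'):
    """Run the automaton over a sequence of inputs, returning the
    state reached and the operator chosen at each step."""
    state, trace = q0, []
    for sym in inputs:
        state, op = DELTA[(state, sym)]
        trace.append((state, op))
    return trace
```

Note how the same input selects different operators in different states: I1 triggers h1 while inside the contour (state a) but h4 once the contour has been crossed (states b and c).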
Behavior of the basic fusion operators: The fusion operator described
above can combine several results of segmentation, two at a time. The result
yielded by the fusion operator is a fuzzy set that identifies the certainty and
uncertainty present in the inputs to the fusion process. It is expected that
FIGURE 5.79
The three states of the automaton. Reproduced with permission from D.
Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels,
"Fuzzy fusion operators to combine results of complementary medical image
segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389,
2003. © SPIE and IS&T.

FIGURE 5.80
The four possible input values {I1, I2, I3, I4} for the fusion operator. The short
line segments with the labels I2 and I3 represent artifacts in the segmentation
result. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A.
Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine
results of complementary medical image segmentation techniques", Journal of
Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.
[Transition diagram: state a: I1/h1 and I4/h3 remain in a; I2/h5 and I3/h2
go to b. State b: I2/h5 and I3/h2 remain in b; I1/h4 and I4/h6 go to c.
State c: I1, I2/h4 and I3, I4/h6 remain in c.]
FIGURE 5.81
Transition diagram that governs the actions of the fusion operator. Reproduced
with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli,
J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results
of complementary medical image segmentation techniques", Journal of Electronic
Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.

maximal certainty will be represented by a membership degree equal to 1 or
0 (that is, the pixel being analyzed certainly belongs to or does not belong to
the final segmentation result). When individual segmentation results disagree
with respect to a pixel belonging or not belonging to the final result, or when
both sources do not present sufficient reliability, the fusion operator yields a
membership degree equal to 0.5 to represent a situation with maximal uncertainty.
Other situations are represented by membership degrees ranging in
the intervals (0, 0.5) and (0.5, 1), depending on the evidence with respect to the
membership of the analyzed pixel in the ROI and the reliability of the sources.
Two illustrative studies on the behavior of the basic fusion operators h1 and
h2 are presented below, taking into consideration a limited set of entries:
State a of the automaton (inside the contour), entry I1 (high, low), basic
fusion operator h1 = max{Cr μSr(prj), Cc, 0.5}.
This is the starting condition of the fusion operator; see Figure 5.81. The
starting pixel must lie in the intersection of Sr and Sc (see Figures 5.78
and 5.79). In this case, μSr(prj) ≥ 0.5 (that is, p belongs to the region-growing
result with a high membership degree) and μSc(pcj) < 0.5 (that
is, p is inside the contour). This situation represents the condition that
both sources agree with respect to the pixel p belonging to the ROI.
Table 5.2 provides explanatory comments describing the behavior of h1
for several values of the reliability parameters and inputs from the two
sources.
State a of the automaton (inside the contour) or state b (on the contour), entry 3 (low, high), basic fusion operator h2.

The operator h2 is applied when the automaton is in the state a or b and a transition occurs from a pixel inside the contour to a pixel on the contour [that is, (a, I3) → b] or from a pixel on the contour to a neighboring pixel on the contour [that is, (b, I3) → b].

In this case, μSr(prj) < 0.5 (that is, p does not belong to the region-growing result or belongs with a low membership degree), and μSc(pcj) > 0.5 (that is, p is on the contour). This situation represents the condition where the sources disagree with respect to the pixel p belonging to the ROI. The result of h2 is a weighted average of the input membership values. Table 5.3 provides explanatory comments describing the behavior of h2 for several values of the reliability parameters and inputs from the two sources.
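The tabulated behaviors of h1 and h2 can be reproduced with a short sketch. The closed forms below are reconstructions inferred from Tables 5.2 and 5.3 (a max-based operator for h1 and a reliability-weighted average for h2), not necessarily the exact definitions published by Guliato et al. [277]:

```python
def h1(Cr, mu_r, Cc):
    """Fusion when both sources agree that the pixel is in the ROI
    (reconstructed from Table 5.2): the result follows the more
    reliable source, with 0.5 (maximal uncertainty) as the floor
    when neither source is reliable."""
    return max(Cr * mu_r, Cc, 0.5)

def h2(Cr, mu_r, Cc, mu_c):
    """Fusion when the sources disagree (reconstructed from Table 5.3):
    a reliability-weighted average of the two membership degrees,
    with 0.5 returned when neither source presents reliability."""
    if Cr + Cc == 0.0:
        return 0.5
    return (Cr * mu_r + Cc * mu_c) / (Cr + Cc)

# Rows of Table 5.2: (Cr, mu_Sr, Cc) -> h1
print(h1(1.0, 1.0, 1.0))   # 1.0: p belongs to the ROI with maximal certainty
print(h1(0.0, 1.0, 0.0))   # 0.5: both sources lack reliability
print(h1(0.9, 1.0, 0.3))   # 0.9: result follows the more reliable source

# Rows of Table 5.3: (Cr, mu_Sr, Cc, mu_Sc) -> h2
print(h2(1.0, 0.0, 1.0, 1.0))            # 0.5: weighted averaging
print(round(h2(0.8, 0.0, 1.0, 1.0), 2))  # 0.56
```

Each printed value matches the corresponding row of Tables 5.2 and 5.3, which is the only evidence used for this reconstruction.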

TABLE 5.2
Behavior of the Basic Fusion Operator h1.

Cr    μSr(prj)   Cc    μSc(pcj)   h1    Comments
1.0   1.0        1.0   0.0        1.0   p belongs to the ROI with maximal certainty
1.0   1.0        0.0   0.0        1.0   Result depends on the source with the higher reliability
0.0   1.0        1.0   0.0        1.0   Result depends on the source with the higher reliability
0.0   1.0        0.0   0.0        0.5   Both sources do not present reliability
0.8   1.0        1.0   0.0        1.0   Source Sc presents the higher reliability
0.8   1.0        0.8   0.0        0.8   Result depends on the source with the higher reliability
0.9   1.0        0.3   0.0        0.9   Result depends on the source with the higher reliability

Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.

TABLE 5.3
Behavior of the Basic Fusion Operator h2.

Cr    μSr(prj)   Cc    μSc(pcj)   h2     Comments
1.0   0.0        1.0   1.0        0.5    Weighted averaging
1.0   0.0        0.0   1.0        0.0    Result depends on the source with the higher reliability
0.0   0.0        1.0   1.0        1.0    Result depends on the source with the higher reliability
0.0   0.0        0.0   1.0        0.5    Both sources do not present reliability; maximal uncertainty
0.8   0.0        1.0   1.0        0.56   Weighted averaging

Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.

Application of the fusion operator to the segmentation of breast tumors: Figure 5.82 shows the results of the fusion of the ROIs represented in Figures 5.30 and 5.32. The results have been superimposed with the contours drawn by an expert radiologist, for comparison. Figure 5.83 demonstrates the application of the methods for contour detection, fuzzy region growing, and fusion to a segment of a mammogram with a malignant tumor. Observe that the fusion results reduce the uncertainty present in the interior of the regions, but also reduce the certainty of the boundaries. The features of the results of the individual segmentation procedures contribute to the fusion results, allowing the postponement of a crisp decision (if necessary) on the ROI or its boundary to a higher level of the image analysis system.
Evaluation of the results of fusion using a measure of fuzziness: In order to evaluate the results of the fusion operator, Guliato et al. [277] compared the degree of agreement between the reference contour given by an expert radiologist and each segmentation result: contour segmentation, region-growing segmentation, and the result of fusion. The reference contour and a segmentation result were aggregated using the fusion operator. The fusion operator yields a fuzzy set that represents the certainty and uncertainty identified during the aggregation procedure. The maximal certainty occurs when μ(p) = 0 or μ(p) = 1, where μ is the membership degree of the pixel p. The maximal uncertainty occurs when μ(p) = 0.5. In the former case, the information sources agree completely with respect to the pixel p; in the latter, the information sources present maximal conflict with respect to the pixel p. Intermediate values of the membership degree represent intermediate degrees of agreement among the information sources. If the uncertainty presented by the fusion result can be quantified, the result could be used to evaluate the degree of agreement between two different information sources. In order to quantify the uncertainty, Guliato et al. [277] proposed a measure of fuzziness.
In general, a measure of fuzziness is a function

f : F(X) → R+,    (5.89)

where F(X) denotes the set of all fuzzy subsets of X. For each fuzzy set A of X, this function assigns a nonnegative real number f(A) that characterizes the degree of fuzziness of A. The function f must satisfy the following three requirements:

- f(A) = 0 if, and only if, A is a crisp set;
- f(A) assumes its maximal value if, and only if, A is maximally fuzzy, that is, all of the membership degrees of A are equal to 0.5; and
- if set A is undoubtedly sharper than set B, then f(A) ≤ f(B).

There are different ways of measuring fuzziness that satisfy all of the three essential requirements [351]. Guliato et al. [277] chose to measure fuzziness in terms of the distinctions between a set and its complement, observing that it is the lack of distinction between a set and its complement that distinguishes

FIGURE 5.82
Result of the fusion of the contour and region (a) in Figures 5.30 (c) and (d) for the case with a malignant tumor, and (b) in Figures 5.32 (c) and (d) for the case with a benign mass, with Cr = 1.0, Cc = 1.0. The contours drawn by the radiologist are superimposed for comparison. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.
a fuzzy set from a crisp set. The implementation of this concept depends on the definition of the fuzzy complement; the standard complement is defined as Ā(x) = 1 − A(x), for all x ∈ X. Choosing the Hamming distance, the local distinction between a given set A and its complement Ā is measured by

|A(x) − {1 − A(x)}| = |2A(x) − 1|,    (5.90)

and the lack of local distinction is given by

1 − |2A(x) − 1|.    (5.91)

The measure of fuzziness, f(A), is then obtained by adding the local measurements:

f(A) = Σ_{x ∈ X} [1 − |2A(x) − 1|].    (5.92)

The range of the function f is [0, |X|]: f(A) = 0 if, and only if, A is a crisp set; f(A) = |X| when A(x) = 0.5 for all x ∈ X.
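Equation 5.92 translates directly into code. The following sketch checks the two boundary properties of f; the four-element universe X is a hypothetical example:

```python
def fuzziness(A):
    """Measure of fuzziness of Equation 5.92: the sum, over all elements
    of the universe, of the lack of distinction between the membership
    A(x) and its standard complement 1 - A(x)."""
    return sum(1.0 - abs(2.0 * a - 1.0) for a in A)

crisp = [0.0, 1.0, 1.0, 0.0]       # a crisp set: f(A) = 0
maximally_fuzzy = [0.5] * 4        # all memberships 0.5: f(A) = |X|
print(fuzziness(crisp))            # 0.0
print(fuzziness(maximally_fuzzy))  # 4.0
```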
In the work reported by Guliato et al. [277], for each mammogram, the reference contour drawn by the expert radiologist was combined, using the fusion operator, with each of the results obtained by contour detection, fuzzy region growing, and fusion, denoted by RSc, RSr, and RFr, respectively. The fusion operator was applied with both the reliability measures equal to unity,
FIGURE 5.83
(a) A 700 × 700-pixel portion of a mammogram with a spiculated malignant tumor. Pixel size = 62.5 μm. (b) Contour extracted (white line) by fuzzy-set-based preprocessing and region growing. The black line represents the boundary drawn by a radiologist (shown for comparison). (c) Result of fuzzy region growing. The contour drawn by the radiologist is superimposed for comparison. (d) Result of the fusion of the contour in (b) and the region in (c) with Cr = 1.0, Cc = 0.9. The contour drawn by the radiologist is superimposed for comparison. Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.
that is, Cr = Cc = 1.0, for the two information sources being combined in each case. When the result of contour detection was combined with the contour drawn by the radiologist, the former was converted into a region because the fusion method is designed to accept a contour and a region as the inputs. Considering the results shown in Figure 5.83, the measures of fuzziness obtained were f(RSc) = 14,774, f(RSr) = 14,245, and f(RFr) = 9,710, respectively. The aggregation or fusion of the two segmentation results presents lower uncertainty than either individual result, yielding a better result, as expected.
The methods were tested with 14 mammographic images of biopsy-proven cases; the values of the measure of fuzziness for the cases are shown in Table 5.4. The values of Cc and Cr used to obtain the result of fusion for the 14 mammograms are also listed in the table. Both Cc and Cr were maintained equal to unity when computing the measure of fuzziness with respect to the contour drawn by the radiologist for all the cases. In 11 cases, the fusion operator yielded improvement over the original results. There was no improvement by fusion in three of the cases: in one of these cases both segmentation results were not accurate, and in the other two, the fuzzy region segmentation was much better than the result of contour segmentation (based upon visual comparison with the reference contour drawn by the radiologist). The results provide good evidence that the fusion operator obtains regions with a higher degree of certainty than the results of the individual segmentation methods.

The measure of fuzziness may be normalized by division by |X|. However, in the context of the work of Guliato et al., this would lead to very small values because the number of boundary pixels is far less than the number of pixels inside a mass. The measure of fuzziness without normalization is adequate in the assessment of the results of fusion because the comparison is made using the measure for each mammogram separately.

5.12 Remarks

We have explored several methods to detect the edges of objects or to segment ROIs. We have also studied methods to detect objects of known characteristics, and methods to improve initial estimates of edges, contours, or regions. The class of filters based upon mathematical morphology [8, 192, 220, 221, 222] has not been dealt with in this chapter.

After ROIs have been detected and extracted from a given image, they may be analyzed further in terms of representation, feature extraction, pattern classification, and image understanding. Some of the measures and approaches that could be used for these purposes are listed below. It should be recognized that the accuracy of the measures derived will depend upon the accuracy of the results of detection or segmentation [400].

TABLE 5.4
Measures of Fuzziness for the Results of Segmentation and Fusion for 14 Mammograms.

Mammogram     Cr, Cc    f(RSc)   f(RSr)   f(RFr)   Is the result of fusion better?
spic-s-1      1.0, 1.0  14,774   14,245    9,711   Yes
circ-fb-010   1.0, 1.0   8,096    9,223    7,905   Yes
spx111m       1.0, 1.0   5,130    9,204    4,680   Yes
spic-fh0      1.0, 0.6  28,938   23,489   21,612   Yes
circ-x-1      1.0, 0.8   6,877    2,990    3,862   No
spic-fh2      0.8, 1.0  45,581   38,634   34,969   Yes
circ-fb-005   1.0, 1.0  26,176   34,296   25,084   Yes
circ-fb-012   1.0, 0.9  16,170   15,477   12,693   Yes
spic-db-145   1.0, 0.9   8,306    7,938    7,658   Yes
circ-fb-025   1.0, 0.6  56,060   44,277   49,093   No
spic-fb-195   1.0, 1.0  11,423   12,511   10,458   Yes
spic-s-112    1.0, 0.6  31,413   17,784   12,838   Yes
spic-s-401    1.0, 0.6  13,225   11,117   11,195   No
circ-fb-069   1.0, 1.0  46,835   53,321   38,832   Yes

Reproduced with permission from D. Guliato, R.M. Rangayyan, W.A. Carnielli, J.A. Zuffo, and J.E.L. Desautels, "Fuzzy fusion operators to combine results of complementary medical image segmentation techniques", Journal of Electronic Imaging, 12(3): 379–389, 2003. © SPIE and IS&T.
External characteristics:
- boundary or contour morphology,
- boundary roughness,
- boundary complexity.

It is desirable that boundary descriptors are invariant to translation, scaling, and rotation. Methods for the analysis of contours and shape complexity are described in Chapter 6.

Internal characteristics:
- gray level,
- color,
- texture,
- statistics of pixel population.

Methods for the analysis of texture are presented in Chapters 7 and 8.

Description of (dis)similarity:
- distance measures,
- correlation coefficient.

Chapter 12 contains the descriptions of several methods based upon measures of similarity and distance; however, the methods are described in the context of pattern classification using vectors of features. Some of the methods may be extended to compare sets of pixels representing segmented regions.

Relational description:
- placement rules,
- string, tree, and web grammar,
- structural description,
- syntactic analysis.

See Gonzalez and Thomason [401] and Duda et al. [402] for details on syntactic pattern recognition and image understanding.

5.13 Study Questions and Problems

Selected data files related to some of the problems and exercises are available at the site www.enel.ucalgary.ca/People/Ranga/enel697

1. Give the definition of the 3 × 3 Sobel masks, and explain how they may be used to detect edges of any orientation in an image. What are the limitations of this approach to edge detection? What type of further processing steps could help in improving edge representation?

2. Prepare a 5 × 5 image with the value 10 in the central 3 × 3 region and the value zero in the remainder of the image. Calculate the results of application of the 3 × 3 Laplacian operator and the masks given in Section 5.2 for the detection of isolated points and lines. Evaluate the results in terms of the detection of edges, lines, points, and corners.

5.14 Laboratory Exercises and Projects

1. Create a test image with objects of various shapes and sizes. Process the image with LoG functions of various scales. Analyze the results in terms of the detection of features of varying size and shape.

2. Prepare a test image with a few straight lines of various slopes and positions. Apply the Hough transform for the detection of straight lines. Study the effect of varying the number of bins in the Hough space. Analyze the spreading of values in the Hough space and develop strategies for the detection of the straight lines of interest in the image. What are the causes of artifacts with this method?
6
Analysis of Shape

Several human organs and biological structures possess readily identifiable shapes. The shapes of the human heart, brain, kidneys, and several bones are well known, and, in normal cases, do not deviate much from an "average" shape. However, disease processes can affect the structure of organs, and cause deviation from their expected or average shapes. Even abnormal entities, such as masses and calcifications in the breast, tend to demonstrate differences in shape between benign and malignant conditions. For example, most benign masses in the breast appear as well-circumscribed areas on mammograms, with smooth boundaries that are circular or oval; some benign masses may be macrolobulated. On the other hand, malignant masses (cancerous tumors) are typically ill-defined on mammograms, and possess a rough or stellate (star-like) shape with strands or spicules appearing to radiate from a central mass; some malignant masses may be microlobulated [54, 345, 403]. Shape is a key feature in discriminating between normal and abnormal cells in Pap-smear tests [272, 273]. However, biological entities demonstrate wide ranges of manifestation, with significant overlap between their characteristics for various categories. Furthermore, it should be borne in mind that the imaging geometry, 3D-to-2D projection, and the superimposition of multiple objects commonly affect the shapes of objects as perceived on biomedical images.

Several techniques have been proposed to characterize shape [404, 405, 406]. We shall study a selection of shape analysis techniques in this chapter. A few applications will be described to demonstrate the usefulness of shape characteristics in the analysis of biomedical images.

6.1 Representation of Shapes and Contours

The most general form of representation of a contour in discretized space is in terms of the (x, y) coordinates of the digitized points (pixels) along the contour. A contour with N points could be represented by the series of coordinates {x(n), y(n)}, n = 0, 1, 2, ..., N − 1. Observe that there is no gray level associated with the pixels along a contour. A contour may be depicted as a binary or bilevel image.
6.1.1 Signatures of contours
The dimensionality of representation of a contour may be reduced from two to one by converting from a coordinate-based representation to distances from each contour point to a reference point. A convenient reference is the centroid or center of mass of the contour, whose coordinates are given by

x̄ = (1/N) Σ_{n=0}^{N−1} x(n)  and  ȳ = (1/N) Σ_{n=0}^{N−1} y(n).    (6.1)

The signature of the contour is then defined as

d(n) = sqrt{ [x(n) − x̄]² + [y(n) − ȳ]² },    (6.2)

n = 0, 1, 2, ..., N − 1; see Figure 6.1. It should be noted that the centroids of regions that are concave or have holes could lie outside the regions.

A radial-distance signature may also be derived by computing the distance from the centroid to the contour point(s) intersected for angles of the radial line spanning the range (0°, 360°). However, for irregular contours, such a signature may be multivalued for some angles; that is, a radial line may intersect the contour more than once (see, for example, Pohlman et al. [407]). It is obvious that going around a contour more than once generates the same signature; hence, the signature signal is periodic with the period equal to N, the number of pixels on the contour. The signature of a contour provides general information on the nature of the contour, such as its smoothness or roughness.
Examples: Figures 6.2 (a) and 6.3 (a) show the contours of a benign breast mass and a malignant tumor, respectively, as observed on mammograms [345]. The `*' marks within the contours represent their centroids. Figures 6.2 (b) and 6.3 (b) show the signatures of the contours as defined in Equation 6.2. It is evident that the smooth contour of the benign mass possesses a smooth signature, whereas the spiculated malignant tumor has a rough signature with several significant rapid variations over its period.
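As a minimal illustration of Equations 6.1 and 6.2, the sketch below computes the signature of a hypothetical four-point contour; because the four corners of a square are equidistant from the centroid, the signature is constant:

```python
import math

def signature(contour):
    """Compute the centroid (Equation 6.1) of a contour given as a list
    of (x, y) points, and return the distances d(n) of Equation 6.2."""
    N = len(contour)
    xbar = sum(x for x, _ in contour) / N
    ybar = sum(y for _, y in contour) / N
    return [math.hypot(x - xbar, y - ybar) for x, y in contour]

# The four corners of a unit square are equidistant from the centroid
# (0.5, 0.5), so this very coarse contour has a constant signature.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(signature(square))  # four equal values, each sqrt(0.5)
```

A rough contour would instead produce a signature with rapid variations, as in Figure 6.3 (b).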

6.1.2 Chain coding

An efficient representation of a contour may be achieved by specifying the (x, y) coordinates of an arbitrary starting point on the contour, the direction of traversal (clockwise or counter-clockwise), and a code to indicate the manner of movement to reach the next contour point on a discrete grid. A coarse representation may be achieved by using only four possible movements: to the point at the left of, right of, above, or below the current point, as indicated in Figure 6.4 (a). A finer representation may be achieved by using eight possible movements, including diagonal movements, as indicated in Figure 6.4 (b). The sequence of codes required to traverse through all the points along the
FIGURE 6.1
A contour represented by its boundary points z(n) = x(n) + j y(n), n = 0, 1, ..., N − 1, and the distances d(n) from the boundary points to the centroid (x̄, ȳ).

contour is known as the chain code [8, 245]. The technique was proposed by Freeman [408], and is also known as the Freeman chain code.

The chain code facilitates a more compact representation of a contour than the direct specification of the (x, y) coordinates of all of its points. Except for the initial point, the representation of each point on the contour requires only two or three bits, depending upon the type of code used. Furthermore, chain coding provides the following advantages:
- The code is invariant to shift or translation because the starting point is kept out of the code.
- To a certain extent, the chain code is invariant to size (scaling). Contours of different sizes may be generated from the same code by using different sampling grids (step sizes). A contour may also be enlarged by a factor of n by repeating each code element n times and maintaining the same sampling grid [408]. A contour may be shrunk to half of the original size by reducing pairs of code elements to single numbers, with approximation of unequal pairs by their averages reduced to integers.
- The chain code may be normalized for rotation by taking the first difference of the code (and adding 4 or 8 to negative differences, depending upon the code used).

FIGURE 6.2
(a) Contour of a benign breast mass; N = 768. The `*' mark represents the centroid of the contour. (b) Signature d(n) as defined in Equation 6.2, plotted as the distance to the centroid versus the contour point index n.

FIGURE 6.3
(a) Contour of a malignant breast tumor; N = 3,281. The `*' mark represents the centroid of the contour. (b) Signature d(n) as defined in Equation 6.2, plotted as the distance to the centroid versus the contour point index n.
- With reference to the 8-symbol code, the rotation of a given contour by n × 90° in the counter-clockwise direction may be achieved by adding a value of 2n to each code element, followed by a modulo-8 operation. The addition of an odd number rotates the contour by the corresponding multiple of 45°; however, the rotation of a contour by angles other than integral multiples of 90° on a discrete grid is subject to approximation.
- In the case of the 8-symbol code, the length of a contour is given by the number of even codes plus √2 times the number of odd codes, multiplied by the grid sampling interval.
- The chain code may also be used to achieve reduction, check for closure, check for multiple loops, and determine the area of a closed loop [408].
Examples: Figure 6.5 shows a contour represented using the chain codes with four and eight symbols. The use of a discrete grid with large spacings leads to the loss of fine detail in the contour. However, this feature may be used advantageously to filter out minor irregularities due to noise, artifacts due to drawing by hand, etc.
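The coding scheme and the code-based properties listed above can be sketched as follows, assuming the eight-direction convention of Figure 6.4 (b) with y increasing upward; the unit-square contour is a hypothetical example:

```python
import math

# 8-directional moves of Figure 6.4 (b): code 0 is a step to the right,
# and the codes increase counter-clockwise in steps of 45 degrees.
MOVES = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
         (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Freeman chain code of a closed contour given as successive
    (x, y) grid points separated by unit steps."""
    code = []
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]
        code.append(MOVES[(x1 - x0, y1 - y0)])
    return code

def contour_length(code, grid=1.0):
    """Length of the contour: even codes count 1, odd codes count
    sqrt(2), scaled by the grid sampling interval."""
    even = sum(1 for c in code if c % 2 == 0)
    odd = len(code) - even
    return (even + math.sqrt(2) * odd) * grid

def normalize_rotation(code):
    """First difference of the code (modulo 8), which is invariant to
    rotations of the contour by multiples of 45 degrees."""
    return [(code[(i + 1) % len(code)] - code[i]) % 8
            for i in range(len(code))]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]      # traversed counter-clockwise
print(chain_code(square))                      # [0, 2, 4, 6]
print(contour_length(chain_code(square)))      # 4.0
print(normalize_rotation(chain_code(square)))  # [2, 2, 2, 2]
```

Rotating the square by 90° adds 2 to every code element, but the first-difference sequence [2, 2, 2, 2] is unchanged, which illustrates the rotation-normalization property.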

FIGURE 6.4
Chain code with (a) four directional codes (0: right, 1: up, 2: left, 3: down) and (b) eight directional codes (0: right, 1: upper right, 2: up, 3: upper left, 4: left, 5: lower left, 6: down, 7: lower right).

6.1.3 Segmentation of contours

The segmentation of a contour into a set of piecewise-continuous curves is a useful step before analysis and modeling. Segmentation may be performed by locating the points of inflection on the contour.

Consider a function f(x). Let f'(x), f''(x), and f'''(x) represent the first, second, and third derivatives of f(x). A point of inflection of the function or
Figure 6.5 (a)
Chain code: [0 1 0 3 3 0 0 3 2 3 2 3 3 2 1 2 2 3 2 1 1 1 2 1 0 1 1 0 0 3]

curve f(x) is defined as a point where f''(x) changes its sign. Note that the derivation of f''(x) requires f(x) and f'(x) to be continuous and differentiable. It follows that the following conditions apply at a point of inflection:

f''(x) = 0;  f'(x) ≠ 0;  f'(x) f''(x) = 0;  and  f'(x) f'''(x) ≠ 0.    (6.3)

Let C = {(x(n), y(n))}, n = 0, 1, 2, ..., N − 1, represent in vector form the (x, y) coordinates of the N points on the given contour. The points of inflection on the contour are obtained by solving

C' × C'' = 0  and  C' × C''' ≠ 0,    (6.4)

where C', C'', and C''' are the first, second, and third derivatives of C, respectively, and × represents the vector cross product. Solving Equation 6.4 is equivalent to solving the system of equations given by

x''(n) y'(n) − x'(n) y''(n) = 0



Figure 6.5 (b)
Chain code: [0 1 6 6 0 7 4 5 6 6 3 4 4 5 2 2 2 3 1 1 7]

FIGURE 6.5
A closed contour represented using the chain code: (a) using four directional codes as in Figure 6.4 (a), and (b) using eight directional codes as in Figure 6.4 (b). The `o' mark represents the starting point of the contour, which is traversed in the clockwise direction to derive the code.
x'(n) y'''(n) − x'''(n) y'(n) ≠ 0,    (6.5)

where x'(n), y'(n), x''(n), y''(n), x'''(n), and y'''(n) are the first, second, and third derivatives of x(n) and y(n), respectively.

Segments of contours of breast masses between successive points of inflection were modeled as parabolas by Menut et al. [354]. Difficulty lies in segmentation because the contours of masses are, in general, not smooth. False or irrelevant points of inflection could appear on relatively straight parts of a contour when x''(n) and y''(n) are not far from zero. In order to address this problem, smoothed derivatives at each contour point could be estimated by considering the cumulative sum of weighted differences of a certain number of pairs of points on either side of the point x(n) under consideration as

x'(n) = Σ_{i=1}^{m} i [x(n + i) − x(n − i)],    (6.6)

where m represents the number of pairs of points used to compute the derivative x'(n); the same procedure applies to the computation of y'(n).
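The smoothed derivative of Equation 6.6 and the inflection condition of Equation 6.5 can be sketched as follows. The weighting of the differences by i is a reconstruction from the text, and the circle used in the demonstration is a hypothetical example: its curvature does not change sign, so no points of inflection should be reported.

```python
import math

def smoothed_derivative(v, n, m):
    """Equation 6.6: smoothed derivative of the closed coordinate
    sequence v at index n, using m pairs of neighbors on either side
    (the weight i applied to each pair is an assumed reconstruction)."""
    N = len(v)
    return sum(i * (v[(n + i) % N] - v[(n - i) % N])
               for i in range(1, m + 1))

def inflection_points(xs, ys, m=3):
    """Candidate points of inflection: indices where the cross product
    x''(n) y'(n) - x'(n) y''(n) of Equation 6.5 changes sign."""
    N = len(xs)
    x1 = [smoothed_derivative(xs, n, m) for n in range(N)]
    y1 = [smoothed_derivative(ys, n, m) for n in range(N)]
    x2 = [smoothed_derivative(x1, n, m) for n in range(N)]
    y2 = [smoothed_derivative(y1, n, m) for n in range(N)]
    cross = [x2[n] * y1[n] - x1[n] * y2[n] for n in range(N)]
    return [n for n in range(N) if cross[n] * cross[(n + 1) % N] < 0]

# A discretized circle is convex everywhere: no inflection points.
N = 40
xs = [math.cos(2.0 * math.pi * n / N) for n in range(N)]
ys = [math.sin(2.0 * math.pi * n / N) for n in range(N)]
print(inflection_points(xs, ys))  # []
```

Increasing m smooths the derivative estimates and suppresses false inflection points on nearly straight parts of a contour, which is the behavior exploited in the analysis that follows.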
In the works reported by Menut et al. [354] and Rangayyan et al. [345], the value of m was varied from 3 to 60 to compute derivatives that resulted in varying numbers of inflection points for a given contour. The number of inflection points detected as a function of the number of differences used was analyzed to determine the optimal number of differences that would provide the most appropriate inflection points: the value of m at the first straight segment on the function was selected.
Examples: Figure 6.6 shows the contour of a spiculated malignant tumor. The points of inflection detected are marked with `*'. The number of inflection points detected is plotted in Figure 6.7 as a function of the number of differences used (m in Equation 6.6); the horizontal and vertical lines indicate the optimal number of differences used to compute the derivative at each contour point and the corresponding number of points of inflection that were located on the contour.

The contour in Figure 6.6 is shown in Figure 6.8, overlaid on the corresponding part of the original mammogram. Segments of the contours are shown in black or white, indicating if they are concave or convex, respectively. Figure 6.9 provides a similar illustration for a circumscribed benign mass. Analysis of concavity of contours is described in Section 6.4.

6.1.4 Polygonal modeling of contours

Pavlidis and Horowitz [361] and Pavlidis and Ali [409] proposed methods for segmentation and approximation of curves and shapes by polygons for computer recognition of handwritten numerals, cell outlines, and ECG signals. Ventura and Chen [410] presented an algorithm for segmenting and polygonal modeling of 2D curves in which the number of segments is to be prespecified

FIGURE 6.6
Contour of a spiculated malignant tumor with the points of inflection indicated by `*'. Number of points of inflection = 58. See also Figure 6.8.

FIGURE 6.7
Number of inflection points detected as a function of the number of pairs of differences used to estimate the derivative for the contour in Figure 6.6. The horizontal and vertical lines indicate the optimal number of differences used to compute the derivative at each contour point and the corresponding number of points of inflection that were located on the contour.

FIGURE 6.8
Concave and convex parts of the contour of a spiculated malignant tumor, separated by the points of inflection. See also Figure 6.6. The concave parts are shown in black and the convex parts in white. The image size is 770 × 600 pixels or 37.2 × 47.7 mm with a pixel size of 62 μm. Shape factors fcc = 0.47, SI = 0.62, cf = 0.94. Reproduced with permission from R.M. Rangayyan, N.R. Mudigonda, and J.E.L. Desautels, "Boundary modeling and shape analysis methods for classification of mammographic masses", Medical and Biological Engineering and Computing, 38: 487–496, 2000. © IFMBE.

FIGURE 6.9
Concave and convex parts of the contour of a circumscribed benign mass, separated by the points of inflection. The concave parts are shown in black and the convex parts in white. The image size is 730 × 630 pixels or 31.5 × 36.5 mm with a pixel size of 50 μm. Shape factors fcc = 0.16, SI = 0.22, cf = 0.30. Reproduced with permission from R.M. Rangayyan, N.R. Mudigonda, and J.E.L. Desautels, "Boundary modeling and shape analysis methods for classification of mammographic masses", Medical and Biological Engineering and Computing, 38: 487–496, 2000. © IFMBE.
for initiating the process, in relation to the complexity of the shape. This is not a desirable step when dealing with complex or spiculated shapes of breast tumors [163]. In a modified approach proposed by Rangayyan et al. [345], the polygon formed by the points of inflection detected on the original contour was used as the initial input to the polygonal modeling procedure. This step helps in automating the polygonalization algorithm: the method does not require any interaction from the user in terms of the initial number of segments.

Given an irregular contour C as specified by the set of its (x, y) coordinates, the polygonal modeling algorithm starts by dividing the contour into a set of piecewise-continuous curved parts by locating the points of inflection on the contour as explained in Section 6.1.3. Each segmented curved part is represented by a pair of linear segments based on its arc-to-chord deviation. The procedure is iterated subject to predefined boundary conditions so as to minimize the error between the true length of the contour and the cumulative length computed from the polygonal segments.
Let C = {x(n), y(n)}, n = 0, 1, 2, ..., N − 1, represent the given contour. Let SCmk, SCmk ⊂ C, m = 1, 2, ..., M, be M curved parts, each containing a set of contour points, at the start of the kth iteration, such that SC1k ∪ SC2k ∪ ... ∪ SCMk ⊆ C. The iterative procedure proposed by Rangayyan et al. [345] is as follows:

1. In each curved part represented by SCmk , the arc-to-chord distance


is computed for all the points, and the point on the curve with the
maximum arc-to-chord deviation (dmax ) is located.
2. If dmax  0:25 mm (5 pixels in the images with a pixel size of 50 m
used in the work of Rangayyan et al.
345]), the curved part is segmented
at the point of maximum deviation to approximate the same with a
pair of linear segments, irrespective of the length of the resulting linear
segments. If 0:1 mm  dmax < 0:25 mm, the curved part is segmented
at the point of maximum deviation subject to the condition that the
resulting linear segments satisfy a minimum-length criterion, which was
speci ed as 1 mm in the work of Rangayyan et al.
345]. If dmax <
0:1 mm, the curved part SCmk is considered to be almost linear and is
not segmented any further.
3. After performing Steps 1 and 2 on all the curved parts of the contour
available in the current kth iteration, the resulting vector of the poly-
gon's vertices is updated.
4. If the number of polygonal segments following the kth iteration equals
that of the previous iteration, the algorithm is considered to have con-
verged and the polygonalization process is terminated. Otherwise, the
procedure (Steps 1 to 3) is repeated until the algorithm converges.
The criterion for choosing the threshold for arc-to-chord deviation was based on the assumption that any segment possessing a smaller deviation is insignificant in the analysis of contours of breast masses.
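A simplified sketch of Steps 1 to 4 is given below. It retains only a single arc-to-chord threshold, omitting the minimum-length criterion and the inflection-point initialization, so it illustrates the idea rather than reproducing the published algorithm; the V-shaped contour and the threshold value are hypothetical.

```python
import math

def arc_to_chord(points, i, j):
    """Maximum perpendicular (arc-to-chord) deviation of the points
    strictly between indices i and j from the chord joining points[i]
    and points[j]; returns (deviation, index of the farthest point)."""
    (x1, y1), (x2, y2) = points[i], points[j]
    chord = math.hypot(x2 - x1, y2 - y1) or 1.0
    best, best_k = 0.0, i
    for k in range(i + 1, j):
        x0, y0 = points[k]
        d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / chord
        if d > best:
            best, best_k = d, k
    return best, best_k

def refine(points, vertices, dmax_split):
    """One iteration of Steps 1-3: split every curved part whose maximum
    arc-to-chord deviation reaches the threshold."""
    out = [vertices[0]]
    for a, b in zip(vertices, vertices[1:]):
        d, k = arc_to_chord(points, a, b)
        if d >= dmax_split:
            out.append(k)
        out.append(b)
    return out

def fit_polygon(points, vertices, dmax_split=5.0):
    """Iterate until the number of segments stops changing (Step 4)."""
    while True:
        refined = refine(points, vertices, dmax_split)
        if len(refined) == len(vertices):
            return vertices
        vertices = refined

# A V-shaped open contour: the apex (index 2) deviates by 10 units from
# the chord joining the endpoints, so it becomes a polygon vertex.
pts = [(0.0, 0.0), (1.0, 5.0), (2.0, 10.0), (3.0, 5.0), (4.0, 0.0)]
print(fit_polygon(pts, [0, len(pts) - 1]))  # [0, 2, 4]
```

The convergence test mirrors Step 4: once an iteration produces the same number of segments as the previous one, the vertex list is returned.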
Examples: Figure 6.10 (a) shows the points of inflection (denoted by `*') and the initial stage of polygonal modeling (straight-line segments) of the contour of a spiculated malignant tumor (see also Figure 6.8). Figure 6.10 (b) shows the final result of polygonal modeling of the same contour. The algorithm converged after four iterations, as shown by the convergence plot in Figure 6.11. The result of the application of the polygonal modeling algorithm to the contour of a circumscribed benign mass is shown in Figure 6.12.
The number of linear segments required for the approximation of a contour
increases with its shape complexity; polygons with the number of sides in the
range 20 – 400 were used in the work of Rangayyan et al. [345] to model
contours of breast masses and tumors. The number of iterations required for
the convergence of the algorithm did not vary much for different mass contour
shapes, remaining within the range 3 – 5. This is due to the fact that the
relative complexity of the contour to be segmented is taken into considera-
tion during the initial preprocessing step of locating the points of inflection;
hence, the subsequent polygonalization process is robust and computationally
efficient. The algorithm performed well and delivered satisfactory results on
various irregular shapes of spiculated cases of benign and malignant masses.
6.1.5 Parabolic modeling of contours

Menut et al. [354] proposed the modeling of segments of contours of breast
masses between successive points of inflection as parabolas. An inspection
of the segments of the contours illustrated in Figures 6.6 and 6.12 (a) (see
also Figures 6.8 and 6.9) indicates that most of the curved portions between
successive points of inflection lend themselves well to modeling as parabolas.
Some of the segments are relatively straight; however, such segments may
not contribute much to the task of discrimination between benign masses and
malignant tumors.
Let us consider a segment of a contour represented in the continuous 2D
space by the points [x(s), y(s)] over the interval S1 ≤ s ≤ S2, where s indicates
distance along the contour and S1 and S2 are the end-points of the segment.
Let us now consider the approximation of the curve by a parabola. Regardless
of the position and orientation of the given curve, let us consider the simplest
representation of a parabola as Y = A X^2 in the coordinate space (X, Y). The
parameter A controls the narrowness of the parabola: the larger the value of
A, the narrower is the parabola. Allowing for a rotation of θ and a shift of
(c, d) between the (x, y) and (X, Y) spaces, we have

    x(s) = X(s) cos θ − Y(s) sin θ + c,
    y(s) = X(s) sin θ + Y(s) cos θ + d.     (6.7)
(a) (b)
FIGURE 6.10
Polygonal modeling of the contour of a spiculated malignant tumor. (a) Points
of inflection (indicated by `*') and the initial polygonal approximation
(straight-line segments); number of sides = 58. (b) Final model after four
iterations; number of sides = 146. See also Figure 6.8. Reproduced with
permission from R.M. Rangayyan, N.R. Mudigonda, and J.E.L. Desautels,
"Boundary modeling and shape analysis methods for classification of mam-
mographic masses", Medical and Biological Engineering and Computing, 38:
487 – 496, 2000. © IFMBE.
FIGURE 6.11
Convergence plot (number of polygonal segments, 60 – 150, versus number of
iterations, 0 – 6) of the iterative polygonal modeling procedure for the con-
tour of the spiculated malignant tumor in Figure 6.10. Reproduced with
permission from R.M. Rangayyan, N.R. Mudigonda, and J.E.L. Desautels,
"Boundary modeling and shape analysis methods for classification of mam-
mographic masses", Medical and Biological Engineering and Computing, 38:
487 – 496, 2000. © IFMBE.
(a) (b)
FIGURE 6.12
Polygonal modeling of the contour of a circumscribed benign mass. (a) Points
of inflection (indicated by `*') and the initial polygonal approximation
(straight-line segments); initial number of sides = 14. (b) Final model; number
of sides = 36; number of iterations = 4. See also Figure 6.9. Reproduced
with permission from R.M. Rangayyan, N.R. Mudigonda, and J.E.L. Desau-
tels, "Boundary modeling and shape analysis methods for classification of
mammographic masses", Medical and Biological Engineering and Computing,
38: 487 – 496, 2000. © IFMBE.
We also have the following relationships:

    X(s) = [x(s) − c] cos θ + [y(s) − d] sin θ,
    Y(s) = −[x(s) − c] sin θ + [y(s) − d] cos θ;     (6.8)

    X(s) = s,
    Y(s) = A s^2.     (6.9)

Taking the derivatives of Equation 6.9 with respect to s, we get the following:

    X′(s) = 1,
    Y′(s) = 2As;     (6.10)

    X″(s) = 0,
    Y″(s) = 2A.     (6.11)

Similarly, taking the derivatives of Equation 6.8 with respect to s, we get the
following:

    X″(s) = x″(s) cos θ + y″(s) sin θ,
    Y″(s) = −x″(s) sin θ + y″(s) cos θ.     (6.12)

Combining Equations 6.11 and 6.12, we get

    X″(s) = 0 = x″(s) cos θ + y″(s) sin θ,     (6.13)

which, upon multiplication with sin θ, yields

    x″(s) sin θ cos θ + y″(s) sin θ sin θ = 0.     (6.14)

Similarly, we also get

    Y″(s) = 2A = −x″(s) sin θ + y″(s) cos θ,     (6.15)

which, upon multiplication with cos θ, yields

    2A cos θ = −x″(s) sin θ cos θ + y″(s) cos θ cos θ.     (6.16)

Combining Equations 6.14 and 6.16, we get

    2A cos θ = y″(s).     (6.17)

The equations above indicate that y″(s) and x″(s) are constants with values
related to A and θ. The values of the two derivatives may be computed
from the given curve over all available points, and averaged to obtain the
corresponding (constant) values. Equations 6.14 and 6.17 may then be solved
simultaneously to obtain θ and A. Thus, the parameters of the parabolic model
are obtained from the given contour segment.
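A small numerical sketch of this estimation follows; it is a hypothetical illustration in which the second derivatives are approximated by finite differences (np.gradient applied twice) and averaged over the interior points, and θ is recovered from Equation 6.13 via the arctangent.

```python
import numpy as np

def parabola_params(x, y, h):
    """Estimate the rotation theta and narrowness A of a parabolic segment
    from the (constant) second derivatives x''(s) and y''(s)."""
    xpp = np.gradient(np.gradient(x, h), h)
    ypp = np.gradient(np.gradient(y, h), h)
    x2 = xpp[2:-2].mean()                  # average interior values to
    y2 = ypp[2:-2].mean()                  # suppress end effects
    theta = np.arctan2(-x2, y2)            # x''cos(theta) + y''sin(theta) = 0
    A = y2 / (2.0 * np.cos(theta))         # 2 A cos(theta) = y''(s)
    return A, theta

# synthetic segment: Y = A X^2, rotated by theta and shifted by (c, d)
A0, th0, c, d = 3.0, 0.5, 10.0, -4.0
s = np.linspace(-1.0, 1.0, 201)
X, Y = s, A0 * s**2
x = X * np.cos(th0) - Y * np.sin(th0) + c
y = X * np.sin(th0) + Y * np.cos(th0) + d
A_est, th_est = parabola_params(x, y, s[1] - s[0])   # recovers A0 and th0
```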
Menut et al. hypothesized that malignant tumors, due to the presence of
narrow spicules or microlobulations, would have several parabolic segments
with large values of A; on the other hand, benign masses, due to their char-
acteristics of being oval or macrolobulated, would have a small number of
parabolic segments with small values of A. The same reasons were also ex-
pected to lead to a larger standard deviation of A for malignant tumors than
for benign masses. In addition to the parameter A, Menut et al. proposed to
use the width of the projection of each parabola onto the X axis, with the
expectation that its values would be smaller for malignant tumors than for
benign masses. A classification accuracy of 76% was obtained with a set of
54 contours.
6.1.6 Thinning and skeletonization

Objects that are linear or oblong, or structures that have branching (anas-
tomotic) patterns may be effectively characterized by their skeletons. The
skeleton of an object or region is obtained by its medial-axis transform or via
a thinning algorithm [8, 245, 411].

The medial-axis transformation proposed by Blum [412] is as follows. First,
the given image needs to be binarized so as to include only the patterns of
interest. Let the set of pixels in the binary pattern be denoted as B, let C be
the set of contour pixels of B, and let ci be an arbitrary contour point in C.
For each point b in B, a point ci is found such that the distance between the
point b and ci, represented as d(b, ci), is at its minimum. If a second point ck
is found in C such that d(b, ck) = d(b, ci), then b is a part of the skeleton of
B; otherwise, b is not a part of the skeleton.
A simple algorithm for thinning is as follows [8, 411, 413]. Assume that
the image has been binarized, with the pixels inside the ROIs being labeled
as 1 and the background pixels as 0. A contour point is defined as any pixel
having the value 1 and at least one 8-connected neighbor valued 0. Let the
8-connected neighboring pixels of the pixel p1 being processed be indexed as

    [ p9 p2 p3 ]
    [ p8 p1 p4 ]     (6.18)
    [ p7 p6 p5 ]

1. Flag a contour point p1 for deletion if the following are true:
   (a) 2 ≤ N(p1) ≤ 6;
   (b) S(p1) = 1;
   (c) p2 · p4 · p6 = 0;
   (d) p4 · p6 · p8 = 0;
   where N(p1) is the number of nonzero neighbors of p1, and S(p1) is the
   number of 0-to-1 transitions in the sequence p2, p3, …, p9, p2.
2. Delete all flagged pixels.

3. Do the same as Step 1 above, replacing the conditions (c) and (d) with
   (c') p2 · p4 · p8 = 0;
   (d') p2 · p6 · p8 = 0.

4. Delete all flagged pixels.

5. Iterate Steps 1 – 4 until no further pixels are deleted.

The algorithm described above has the properties that it does not remove
end points, does not break connectivity, and does not cause excessive erosion
of the region [8].
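A direct transcription of Steps 1 – 5 can be sketched as follows; it is a minimal, unoptimized version, and the border pixels of the array are assumed to be background so that the 3 × 3 neighborhood of Equation 6.18 is always defined.

```python
import numpy as np

def thin(img):
    """Thinning of a binary image (1 = object, 0 = background) by the
    two-subiteration algorithm of Steps 1-5."""
    img = img.astype(np.uint8).copy()

    def neighbours(r, c):
        # p2, p3, ..., p9 in the clockwise order of Equation 6.18
        return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

    changed = True
    while changed:                               # Step 5: iterate to stability
        changed = False
        for step in (0, 1):                      # Steps 1-2, then Steps 3-4
            flagged = []
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if img[r, c] != 1:
                        continue
                    p = neighbours(r, c)
                    N = sum(p)                   # nonzero neighbors
                    S = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))   # 0-to-1 transitions
                    if step == 0:                # conditions (c) and (d)
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:                        # conditions (c') and (d')
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= N <= 6 and S == 1 and cond:
                        flagged.append((r, c))
            for r, c in flagged:                 # Steps 2 and 4: delete
                img[r, c] = 0
            changed = changed or bool(flagged)
    return img
```

Applied to a thick bar, the procedure erodes the object down to a one-pixel-wide line lying along its medial row.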
Example: Figure 6.13 (a) shows a pattern of blood vessels in a section
of a ligament [414, 415]. The vessels were perfused with black ink prior to
extraction of the tissue for study. Figure 6.13 (b) shows the skeleton of the
image in part (a) of the figure. It is seen that the skeleton represents the
general orientational pattern and overall shape of the blood vessels in the
original image. However, information regarding the variation in the thickness
(diameter) of the blood vessels is lost in skeletonization. Eng et al. [414]
studied the effect of injury and healing on the microvascular structure of
ligaments by analyzing the statistics of the volume and directional distribution
of blood vessels as illustrated in Figure 6.13; see Section 8.7.2 for details.
6.2 Shape Factors

Although contours may be effectively characterized by representations and
models such as the chain code and the polygonal model described in the
preceding section, it is often desirable to encode the nature or form of a
contour using a small number of measures, commonly referred to as shape
factors. The nature of the contour to be encapsulated in the measures may
vary from one application to another. Regardless of the application, a few
basic properties are essential for efficient representation, of which the most
important are:

• invariance to shift in spatial position,
• invariance to rotation, and
• invariance to scaling (enlargement or reduction).

Invariance to reflection may also be desirable in some applications. Shape fac-
tors that meet the criteria listed above can effectively and efficiently represent
contours for pattern classification.

(a) (b)
FIGURE 6.13
(a) Binarized image of blood vessels in a ligament perfused with black ink. Im-
age courtesy of R.C. Bray and M.R. Doschak, University of Calgary. (b) Skele-
ton of the image in (a) after 15 iterations of the algorithm described in Sec-
tion 6.1.6.
A basic method that is commonly used to represent shape is to fit an ellipse
or a rectangle to the given (closed) contour. The ratio of the major axis of
the ellipse to its minor axis (or, equivalently, the ratio of the larger side
to the smaller side of the bounding rectangle) is known as its eccentricity,
and represents its deviation from a circle (for which the ratio will be equal to
unity). Such a measure, however, represents only the elongation of the object,
and may have, on its own, limited application in practice. Several shape
factors of increasing complexity and specificity of application are described in
the following sections.

6.2.1 Compactness

Compactness is a simple and popular measure of the efficiency of a contour
to contain a given area, and is commonly defined as

    Co = P^2 / A,     (6.19)

where P and A are the contour perimeter and area enclosed, respectively. The
smaller the area contained by a contour of a given length, the larger will be the
value of compactness. Compactness, as defined in Equation 6.19, has a lower
bound of 4π for a circle (except for the trivial case of zero for P = 0), but
no upper bound. It is evident that compactness is invariant to shift, scaling,
rotation, and reflection of a contour.

In order to restrict and normalize the range of the parameter to [0, 1], as
well as to obtain increasing values with increase in complexity of the shape,
the definition of compactness may be modified as [274, 163]

    cf = 1 − 4πA / P^2.     (6.20)

With this expression, cf has a lower bound of zero for a circle, and increases
with the complexity of the contour to a maximum value of unity.
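In code, Equations 6.19 and 6.20 amount to the following; the usage lines evaluate the ideal continuous circle and unit square.

```python
import math

def compactness(perimeter, area):
    """Co = P^2 / A (Equation 6.19) and cf = 1 - 4*pi*A / P^2 (Equation 6.20)."""
    Co = perimeter ** 2 / area
    cf = 1.0 - 4.0 * math.pi * area / perimeter ** 2
    return Co, cf

# circle of radius 1: Co = 4*pi (the minimum, about 12.57), cf = 0
Co_circle, cf_circle = compactness(2.0 * math.pi, math.pi)
# unit square: Co = 16, cf = 1 - pi/4, about 0.21, as in Figure 6.14 (b)
Co_square, cf_square = compactness(4.0, 1.0)
```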
Examples: Figure 6.14 illustrates a few simple geometric shapes along
with their values of compactness. Elongated contours with large values of the
perimeter and small enclosed areas possess high values of compactness.

Figure 6.15 illustrates a few objects with simple geometric shapes including
scaling and rotation; the values of compactness Co and cf for the contours
of the objects are listed in Table 6.1 [274, 320, 334, 416]. It is evident that
both definitions of compactness provide the desired invariance to scaling and
rotation (within the limitations due to the use of a discrete grid).

Figure 6.16 illustrates a few objects of varying shape complexity, prepared
by cutting construction paper [215, 320, 334, 416]. The values of compactness
cf for the contours of the objects are listed in Table 6.2. It is seen that
compactness increases with shape roughness and/or complexity.
(a) (b) (c) (d) (e)

    Co = 12.57   16.0   18.0   25.0   41.62
    cf = 0       0.21   0.30   0.50   0.70

FIGURE 6.14
Examples of contours with their values of compactness Co and cf, as defined
in Equations 6.19 and 6.20. (a) Circle. (b) Square. (c) Rectangle with sides
equal to 1.0 and 0.5 units. (d) Rectangle with sides equal to 1.0 and 0.25
units. (e) Right-angled triangle of height 1.0 and base 0.25 units.
FIGURE 6.15
A set of simple geometric shapes, including scaling and rotation, created on
a discrete grid, to test shape factors. Reproduced with permission from L.
Shen, R.M. Rangayyan, and J.E.L. Desautels, "Application of shape analysis
to mammographic calcifications", IEEE Transactions on Medical Imaging,
13(2): 263 – 274, 1994. © IEEE.
TABLE 6.1
Shape Factors for the Shapes in Figure 6.15 [274, 320, 334, 416].

Shape                              Co     cf      F1=F1'  F2       F2'      F3      F3'     mf      ff
Large circle (a)                   14.08  0.1078  0.0056  0.4105   0.0042   2.0271  0.0067  0.0011  0.0358
Medium circle (b)                  14.13  0.1105  0.0066  0.1731   0.0037   1.9285  0.0078  0.0012  0.0380
Small circle (c)                   14.29  0.1205  0.0085  0.1771   0.0048   1.9334  0.0100  0.0015  0.0432
Large square (d)                   15.77  0.2034  0.1083  0.5183   0.0870   1.9987  0.1288  0.0205  0.1416
Medium square (e)                  15.70  0.1997  0.1081  0.5122   0.0865   2.0126  0.1287  0.0207  0.1389
Rotated square (f)                 16.00  0.2146  0.1101  0.5326   0.0893   1.9987  0.1309  0.0208  0.1434
Small square (g)                   15.56  0.1926  0.1078  0.4943   0.0853   2.0495  0.1290  0.0212  0.1362
Large rectangle (i)                17.60  0.2858  0.2491  -0.3313  -0.1724  1.5385  0.2775  0.0283  0.1494
Medium rectangle (j)               17.47  0.2807  0.2483  -0.3267  -0.1710  1.5429  0.2767  0.0284  0.1483
Rotated rectangle (k)              17.47  0.2807  0.2483  -0.3267  -0.1710  1.5429  0.2767  0.0284  0.1483
Small rectangle (l)                17.23  0.2707  0.2468  -0.3165  -0.1682  1.5583  0.2758  0.0290  0.1420
Large isosceles triangle (m)       22.41  0.4392  0.3119  0.0737   0.1308   2.3108  0.3846  0.0727  0.2248
Medium isosceles triangle (n)      22.13  0.4322  0.3051  0.2027   0.1792   2.2647  0.3743  0.0692  0.2233
Rotated isosceles triangle (o)     22.13  0.4322  0.3051  0.2027   0.1792   2.2647  0.3743  0.0692  0.2238
Small isosceles triangle (p)       21.61  0.4185  0.3014  0.1518   0.1608   2.2880  0.3707  0.0693  0.2198
Large right-angled triangle (q)    27.68  0.5459  0.3739  0.0475   0.1355   1.9292  0.4407  0.0668  0.2217
Medium right-angled triangle (r)   27.18  0.5377  0.3707  0.0534   0.1396   1.9433  0.4377  0.0670  0.2221
Rotated right-angled triangle (s)  26.99  0.5345  0.3752  0.0022   0.0487   1.8033  0.4347  0.0596  0.2216
Small right-angled triangle (t)    26.26  0.5215  0.3644  0.0646   0.1462   1.9750  0.4319  0.0676  0.2180
FIGURE 6.16
A set of objects of varying shape complexity. The objects were prepared by
cutting construction paper. The contours of the objects include imperfections
and artifacts. Reproduced with permission from L. Shen, R.M. Rangayyan,
and J.E.L. Desautels, "Application of shape analysis to mammographic cal-
cifications", IEEE Transactions on Medical Imaging, 13(2): 263 – 274, 1994.
© IEEE.
Shen et al. [274, 334] applied compactness to shape analysis of mammo-
graphic calcifications. The details of this application are presented in Sections
6.6 and 12.7. The use of compactness in benign-versus-malignant classification
of breast masses is discussed in Sections 6.7, 12.11, and 12.12.
6.2.2 Moments

Statistical moments of PDFs and other data distributions have been utilized
as pattern features in a number of applications; the same concepts have been
extended to the analysis of images and contours [8, 417, 418, 419, 420]. Given
a 2D continuous image f(x, y), the regular moments m_pq of order (p + q) are
TABLE 6.2
Shape Factors for the Objects in Figure 6.16, Arranged in Increasing Order
of ff [334, 274, 416, 320].

Shape   cf     mf     ff    Type
1       0.13   0.022  0.14  Circle
2       0.11   0.019  0.14  Circle
3       0.35   0.047  0.14  Ellipse
4       0.55   0.047  0.17  Rectangle
5       0.62   0.060  0.18  Rectangle
6       0.83   0.084  0.18  Rectangle
7       0.22   0.038  0.19  Pentagon
8       0.50   0.063  0.24  Triangle
9       0.44   0.063  0.25  Triangle
10      0.75   0.090  0.30  Other
11      0.63   0.106  0.36  Other
12      0.81   0.077  0.42  Other

de ned as
420, 8]:
Z +1 Z +1
mpq = xp yq f (x y) dx dy (6.21)
;1 ;1
for p q = 0 1 2 : : :. A uniqueness theorem
128] states that if f (x y) is
piecewise continuous and has nonzero values only in a nite part of the (x y)
plane, then moments of all orders exist, and the moment sequence mpq , p q =
0 1 2 : : :, is uniquely determined by f (x y). Conversely, the sequence mpq
uniquely determines f (x y)
8].
The central moments are de ned with respect to the centroid of the image
as Z +1 Z +1
pq = (x ; x)p (y ; y)q f (x y) dx dy (6.22)
;1 ;1
where
x= m
m
10 m01 :
y=m (6.23)
00 00
Observe that the gray levels of the pixels provide weights for the moments as
de ned above. If moments are to be computed for a contour, only the contour
pixels would be used with weights equal to unity the internal pixels would
have weights of zero, and eectively do not participate in the computation of
the moments.
For an M  N digital image, the integrals are replaced by summations for
example, Equation 6.22 becomes
;1 NX
MX ;1
pq = (m ; x)p (n ; y)q f (m n): (6.24)
m=0 n=0
The central moments have the following relationships [8]:

    μ00 = m00 = μ,     (6.25)
    μ10 = μ01 = 0,     (6.26)
    μ20 = m20 − μ x̄^2,     (6.27)
    μ11 = m11 − μ x̄ ȳ,     (6.28)
    μ02 = m02 − μ ȳ^2,     (6.29)
    μ30 = m30 − 3 m20 x̄ + 2 μ x̄^3,     (6.30)
    μ21 = m21 − m20 ȳ − 2 m11 x̄ + 2 μ x̄^2 ȳ,     (6.31)
    μ12 = m12 − m02 x̄ − 2 m11 ȳ + 2 μ x̄ ȳ^2,     (6.32)
    μ03 = m03 − 3 m02 ȳ + 2 μ ȳ^3.     (6.33)

Normalization with respect to size is achieved by dividing each of the mo-
ments by μ00^γ, where γ = (p + q)/2 + 1, to obtain the normalized moments
as [8]

    η_pq = μ_pq / μ00^γ.     (6.34)

Hu [420] (see also Gonzalez and Woods [8]) defined a set of seven shape
factors that are functions of the second-order and third-order central moments
as follows:
    M1 = η20 + η02,     (6.35)

    M2 = (η20 − η02)^2 + 4 η11^2,     (6.36)

    M3 = (η30 − 3 η12)^2 + (3 η21 − η03)^2,     (6.37)

    M4 = (η30 + η12)^2 + (η21 + η03)^2,     (6.38)

    M5 = (η30 − 3 η12)(η30 + η12) [(η30 + η12)^2 − 3 (η21 + η03)^2]
         + (3 η21 − η03)(η21 + η03) [3 (η30 + η12)^2 − (η21 + η03)^2],     (6.39)

    M6 = (η20 − η02) [(η30 + η12)^2 − (η21 + η03)^2]
         + 4 η11 (η30 + η12)(η21 + η03),     (6.40)

    M7 = (3 η21 − η03)(η30 + η12) [(η30 + η12)^2 − 3 (η21 + η03)^2]
         − (η30 − 3 η12)(η21 + η03) [3 (η30 + η12)^2 − (η21 + η03)^2].     (6.41)

The shape factors M1 through M7 are invariant to shift, scaling, and ro-
tation (within limits imposed by representation on a discrete grid), and have
been found to be useful for pattern analysis. Rangayyan et al. [163] computed
several versions of the factors M1 through M7 for 54 breast masses and tu-
mors, using the mass ROIs with and without their gray levels, as well as the
contours of the masses with and without their gray levels. The features pro-
vided benign-versus-malignant classification accuracies in the range 56 – 75%;
see Section 6.7.
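As a sketch, the normalized central moments and the first two invariants M1 and M2 can be computed as follows; the two test images are filled squares of different size and position, for which M1 is nearly equal (the small residual difference is due to the discrete grid) and M2 is zero by symmetry.

```python
import numpy as np

def hu_m1_m2(f):
    """Normalized central moments (Equations 6.24 and 6.34) and the Hu
    invariants M1 and M2 (Equations 6.35-6.36) of a gray-level image f."""
    m_idx, n_idx = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m00 = f.sum()
    xb = (m_idx * f).sum() / m00                  # centroid, Equation 6.23
    yb = (n_idx * f).sum() / m00

    def eta(p, q):
        mu = ((m_idx - xb)**p * (n_idx - yb)**q * f).sum()
        return mu / m00 ** ((p + q) / 2.0 + 1.0)  # gamma = (p+q)/2 + 1

    M1 = eta(2, 0) + eta(0, 2)
    M2 = (eta(2, 0) - eta(0, 2))**2 + 4.0 * eta(1, 1)**2
    return M1, M2

# two filled squares of different size and position
f1 = np.zeros((20, 20)); f1[2:10, 3:11] = 1.0
f2 = np.zeros((40, 40)); f2[5:21, 9:25] = 1.0
```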
Moments of distances to the centroid: When an ROI is represented
using only its contour, an alternative definition of moments is based upon
a sequence that represents the Euclidean distances between the centroid of
the region and all of the points or pixels along the contour, shown as d(n) in
Figure 6.1. The distances from the center of a circle to its contour points are all
equal to the radius of the circle; the variance of the values is zero. On the other
hand, for rough shapes, the distances will vary considerably; see Figures 6.2
and 6.3 for examples. The variance and higher-order moments of the distance
values could be expected to provide indicators of shape complexity. The pth
moment of the sequence d(n) is defined as [417]

    m_p = (1/N) Σ_{n=0}^{N−1} [d(n)]^p,     (6.42)

and the pth central moment is defined as

    M_p = (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^p.     (6.43)

The corresponding normalized moments are defined as

    m̄_p = m_p / (M2)^{p/2}
        = { (1/N) Σ_{n=0}^{N−1} [d(n)]^p } / { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^2 }^{p/2},     (6.44)

    M̄_p = M_p / (M2)^{p/2}
        = { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^p } / { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^2 }^{p/2}.     (6.45)

Gupta and Srinath [417] showed that the normalized moments m̄_p and M̄_p
are invariant to translation, rotation, and scaling. This set of moments (in an
infinite series) reversibly represents the shape of a contour.

Although moments of any arbitrarily large order can be derived from a
contour and used as features for shape classification, high-order moments are
sensitive to noise, and hence, the resulting classifier will be less tolerant to
noise. Therefore, Gupta and Srinath [417] selected four normalized low-order
moments to form a set of shape features as follows:

    F1 = (M2)^{1/2} / m1
       = { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^2 }^{1/2} / m1,     (6.46)

    F2 = M3 / (M2)^{3/2}
       = { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^3 } / { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^2 }^{3/2},     (6.47)

    F3 = M4 / (M2)^2
       = { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^4 } / { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^2 }^2.     (6.48)
A study by Shen et al. [334, 416] showed that the variations in F2 and F3
for differing shape complexity are small and do not show a simple progression.
Furthermore, F2 was observed to vary significantly for the same (geometric)
shape with scaling and rotation; see Figure 6.15 and Table 6.1. In order to
overcome these limitations, F2 and F3 were modified by Shen et al. [274, 334,
416] as follows:

    F2' = (M3)^{1/3} / m1
        = { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^3 }^{1/3} / m1,     (6.49)

    F3' = (M4)^{1/4} / m1
        = { (1/N) Σ_{n=0}^{N−1} [d(n) − m1]^4 }^{1/4} / m1.     (6.50)

Compared with the feature set proposed by Gupta and Srinath [417], the
set {F1, F2', F3'} has the following properties:

• All of the three features are directly comparable.

• F3' describes the roughness of a contour better than F3. In general, the
  larger the value of F3', the rougher is the contour.
Although F2' was observed by Shen et al. [334] to have better invariance
with respect to size and rotation for a given geometric shape, it showed no
better variation than F2 across the shape categories tested. However, it was
shown that the combination mf = F3' − F1 is a good indicator of shape
roughness because the fourth-order term in F3' will be much larger than the
second-order term in F1 as the contour becomes rougher. Also, mf provides
the desired invariance for a given contour type as well as the desired variation
across the various shape categories; see Figure 6.15 and Table 6.1. Note that
the definition of the features F1 and F3' makes it possible to perform the
subtraction directly, and that mf is limited to the range [0, 1].

The values of mf for the contours of the objects in Figure 6.16 are listed in
Table 6.2. Observe that the values of mf do not demonstrate the same trends
as those of the other shape factors listed in Tables 6.1 and 6.2 for the same
contours: the shape factors characterize different notions of shape complexity;
see Section 6.6.
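A sketch of these radial shape factors follows. One simplification is assumed: the centroid is approximated by the mean of the contour points, whereas the definition above uses the centroid of the region. Note that mf = F3' − F1 ≥ 0 always holds, by the power-mean inequality (M4)^{1/4} ≥ (M2)^{1/2}.

```python
import numpy as np

def radial_shape_factors(contour):
    """F1, F2', F3' (Equations 6.46, 6.49, 6.50) and mf = F3' - F1 from the
    distances d(n) between the centroid and the contour points."""
    centroid = contour.mean(axis=0)   # approximation of the region centroid
    d = np.hypot(contour[:, 0] - centroid[0], contour[:, 1] - centroid[1])
    m1 = d.mean()
    M2 = ((d - m1) ** 2).mean()
    M3 = ((d - m1) ** 3).mean()
    M4 = ((d - m1) ** 4).mean()
    F1 = np.sqrt(M2) / m1
    F2p = np.cbrt(M3) / m1            # cube root handles negative M3
    F3p = M4 ** 0.25 / m1
    return F1, F2p, F3p, F3p - F1     # mf >= 0 by the power-mean inequality

t = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)        # smooth: mf ~ 0
star = (1.0 + 0.5 * np.cos(8 * t))[:, None] * circle     # rough: mf > 0
```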
6.2.3 Chord-length statistics

Methods to characterize 2D closed contours using their chord-length distri-
butions were proposed by You and Jain [421]. A chord-length measure Lk is
defined as the length of the line segment that links a pair of contour points,
normalized by the length of the longest chord. The complete set of chords
for a given object consists of all possible chords drawn from every boundary
pixel to every other boundary pixel; see Figure 6.17. You and Jain considered
the K = N(N − 1)/2 unique chords of the N boundary points of an object
as a sample distribution set, and computed the Kolmogorov-Smirnov (K-S)
statistics of the chord-length distribution for use as shape factors as follows:

    Mc1 = (1/K) Σ_{k=1}^{K} Lk,     (6.51)

    Mc2^2 = (1/K) Σ_{k=1}^{K} (Lk − Mc1)^2,     (6.52)

    Mc3 = (1/Mc2^3) (1/K) Σ_{k=1}^{K} (Lk − Mc1)^3,     (6.53)

    Mc4 = (1/Mc2^4) (1/K) Σ_{k=1}^{K} (Lk − Mc1)^4.     (6.54)

The measures listed above, in order, represent the mean, variance, skewness,
and kurtosis of the chord-length distributions.

The chord-length statistics are invariant to translation, scaling, and ro-
tation, and are robust in the presence of noise and distortion in the shape
FIGURE 6.17
The set of all possible chords for a contour with N = 6 boundary points.
There exist K = N(N − 1)/2 = 15 unique chords (including the sides of the
polygonal contour) in the example. The contour points (0 – 5) are shown in
regular font; the chord numbers (1 – 15) are shown in italics.
boundary. The method has a major disadvantage: it is possible for contours
of different shapes to have the same chord-length distribution.

The technique was applied by You and Jain to the boundary maps of seven
countries and six machine parts with different levels of resolution, and the
results indicated good discrimination between the shapes. Rangayyan et
al. [163] applied chord-length statistics to the analysis of contours of breast
masses, but obtained accuracies of no more than 68% in discriminating be-
tween benign masses and malignant tumors; see Section 6.7.
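The four statistics of Equations 6.51 – 6.54 can be computed directly; the brute-force sketch below enumerates all K = N(N − 1)/2 chords, so it suits only modest N. The test shapes (a unit square and a scaled, shifted copy) illustrate the invariance to translation and scaling.

```python
import numpy as np
from itertools import combinations

def chord_length_stats(points):
    """Mean, standard deviation Mc2 (Equation 6.52 defines its square, the
    variance), skewness, and kurtosis of the K = N(N-1)/2 normalized
    chord lengths (Equations 6.51-6.54)."""
    pts = [np.asarray(p, dtype=float) for p in points]
    L = np.array([np.hypot(*(a - b)) for a, b in combinations(pts, 2)])
    L = L / L.max()                       # normalize by the longest chord
    Mc1 = L.mean()
    Mc2 = np.sqrt(((L - Mc1) ** 2).mean())
    Mc3 = ((L - Mc1) ** 3).mean() / Mc2 ** 3
    Mc4 = ((L - Mc1) ** 4).mean() / Mc2 ** 4
    return Mc1, Mc2, Mc3, Mc4

square = [(0, 0), (1, 0), (1, 1), (0, 1)]    # 4 sides + 2 diagonals, K = 6
bigger = [(5, 5), (8, 5), (8, 8), (5, 8)]    # scaled and shifted copy
```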
6.3 Fourier Descriptors

Given a contour with N points having the coordinates {x(n), y(n)}, n =
0, 1, 2, …, N − 1, we could form a complex sequence z(n) = x(n) + j y(n),
n = 0, 1, 2, …, N − 1; see Figure 6.1. Traversing the contour, it is evident
that z(n) is a periodic signal with a period of N samples. The sequence |z(n)|
may be used as a signature of the contour.

Periodic signals lend themselves to analysis via the Fourier series. Given a
discrete-space sequence z(n), we could derive its Fourier series as one period
of its DFT Z(k), defined as

    Z(k) = (1/N) Σ_{n=0}^{N−1} z(n) exp(−j (2π/N) n k),     (6.55)

k = −N/2, …, −1, 0, 1, 2, …, N/2 − 1. The frequency index k could be interpreted
as the index of the harmonics of a fundamental frequency. The contour sample
sequence z(n) is given by the inverse DFT as

    z(n) = Σ_{k=−N/2}^{N/2−1} Z(k) exp(j (2π/N) n k),     (6.56)

n = 0, 1, 2, …, N − 1. The coefficients Z(k) are known as the Fourier de-
scriptors of the contour z(n) [8, 422]. (Note: Other definitions of Fourier
descriptors exist [422, 423, 424, 425, 426].)

Observe that Z(k) represents a two-sided complex spectrum: with fold-
ing or shifting of the DFT or FFT array, the frequency index would run as
k = −N/2, …, −1, 0, 1, 2, …, N/2 − 1; without folding, as usually provided by
common DFT or FFT algorithms, the coefficients are provided in the order
0, 1, 2, …, N/2 − 1, −N/2, …, −2, −1.

A few important properties and characteristics of Fourier descriptors are as
follows [8]:
• The frequency index k represents the harmonic number or order in a
  Fourier series representation of the periodic signal z(n). The fundamen-
  tal frequency (k = 1) represents a sinusoid in each of the coordinates x
  and y that exhibits one period while traversing once around the closed
  contour z(n).

• Differing from the usual spectra of real signals, Z(k) does not possess
  conjugate symmetry due to the complex nature of z(n).

• The zero-frequency (DC) coefficient Z(0) represents the centroid or cen-
  ter of mass (x̄, ȳ), as

      Z(0) = (1/N) Σ_{n=0}^{N−1} z(n) = (x̄, ȳ).     (6.57)

• Each of the fundamental frequency coefficients Z(1) and Z(−1) repre-
  sents a circle. The set of coefficients {Z(−1), Z(1)} represents an ellipse.

• High-order Fourier descriptors represent fine details or rapid excursions
  of the contour.

• The rotation of a contour by an angle θ may be expressed as z1(n) =
  z(n) exp(jθ), where z1(n) represents the rotated contour. Rotation leads
  to an additional phase component as Z1(k) = Z(k) exp(jθ).

• If z(n) represents the points along a contour obtained by traversing the
  contour in the clockwise direction, and z1(n) represents the points ob-
  tained by traversing in the counter-clockwise direction, we have z1(n) =
  z(−n) = z(N − n), and Z1(k) = Z(−k).

• Shifting or translating z(n) by (xo, yo) to obtain z1(n) = z(n) + (xo + j yo)
  leads to an additional DC component as Z1(k) = Z(k) + (xo + j yo) δ(k).

• Scaling a contour as z1(n) = α z(n) leads to a similar scaling of the
  Fourier descriptors as Z1(k) = α Z(k).

• Shifting the starting point by no samples, expressed as z1(n) = z(n −
  no), leads to an additional linear-phase component, with Z1(k) = Z(k)
  exp[−j (2π/N) no k].

Fourier descriptors may be filtered in a manner similar to the filtering of
signals and images in the frequency domain. The full set or a subset of the
coefficients may also be used to represent the contour, and to derive shape
factors.
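A minimal sketch with NumPy's FFT follows; the 1/N scaling matches Equation 6.55, and the descriptor array is in the unfolded FFT order described above. The usage lines verify the circle property: for a circle traversed counter-clockwise, Z(0) is the centroid, Z(1) is the radius, and all other descriptors vanish.

```python
import numpy as np

def fourier_descriptors(x, y):
    """Fourier descriptors Z(k) of Equation 6.55, in the unfolded FFT
    order k = 0, 1, ..., N/2 - 1, -N/2, ..., -1."""
    z = x + 1j * y
    return np.fft.fft(z) / len(z)

def filter_contour(Z, kmax):
    """Low-pass filter a contour by keeping only the descriptors with
    |k| <= kmax, then invert the DFT (as in Figures 6.21 and 6.22)."""
    N = len(Z)
    k = np.fft.fftfreq(N, d=1.0 / N)      # signed frequency indices
    Zf = np.where(np.abs(k) <= kmax, Z, 0.0)
    return np.fft.ifft(Zf) * N            # undo the 1/N of the forward DFT

# a circle of radius 2 centered at (3, 4): Z(0) = centroid, Z(1) = radius
t = 2.0 * np.pi * np.arange(64) / 64
Z = fourier_descriptors(3.0 + 2.0 * np.cos(t), 4.0 + 2.0 * np.sin(t))
```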
Examples: Figure 6.18 shows the normalized Fourier descriptors (up to
k = 15 only) for the squares and right-angled triangles in Figure 6.15. It is
seen that the magnitude of the normalized Fourier descriptors is invariant to
scaling and rotation.
FIGURE 6.18
Normalized Fourier descriptors (NFD, up to k = 15) for (a) the squares and (b) the
right-angled triangles in Figure 6.15. Each figure shows the NFD for four objects;
however, due to invariance with respect to scaling and rotation, the functions overlap
completely. Reproduced with permission from L. Shen, R.M. Rangayyan, and J.E.L.
Desautels, "Application of shape analysis to mammographic calcifications", IEEE
Transactions on Medical Imaging, 13(2): 263 – 274, 1994. © IEEE.
Figures 6.19 and 6.20 show the |z(n)| signatures and Fourier-descriptor
sequences for the benign-mass and malignant-tumor contours shown in Fig-
ures 6.2 (a) and 6.3 (a). It is evident that the signatures reflect the smoothness
or roughness of the contours: the Fourier descriptors of the spiculated contour
indicate the presence of more high-frequency energy than those of the nearly
oval contour of the benign mass.

Figure 6.21 shows the results of filtering the benign-mass contour in Figure
6.2 (a) using Fourier descriptors. The coefficients Z(1) and Z(−1) provide two
circles, with one of them fitting the contour better than the other (depending
upon the direction of traversal of the contour). The combined use of Z(1)
and Z(−1) has provided an ellipse that fits the original contour well. The use
of additional Fourier descriptors has provided contours that fit the original
contour better.

Figure 6.22 shows the results of filtering the malignant-tumor contour in
Figure 6.3 (a) using Fourier descriptors. The inclusion of more high-order
coefficients has led to contours that approximate the original contour to better
levels of fit. The filtered contours illustrate clearly the role played by high-
order Fourier descriptors in representing the finer details of the given contour.

Persoon and Fu [423] showed that Fourier descriptors may be used to char-
acterize the skeletons of objects, with applications in character recognition
and machine-part recognition. Lin and Chellappa [427] showed that the clas-
sification of 2D shapes based on Fourier descriptors is accurate even when
20 – 30% of the data are missing.
Shape factor based upon Fourier descriptors: In a procedure pro-
posed by Shen et al. [274, 334] to derive a single shape factor, the Fourier
descriptors are normalized as follows: Z(0) is set equal to zero in order to
make the descriptors independent of position, and each coefficient is divided
by the magnitude of Z(1) in order to normalize for size. After these steps,
the magnitudes of the Fourier descriptors are independent of position, size,
orientation, and starting point of the contour; note that the orientation and
starting point affect only the phase of the Fourier descriptors. The normalized
Fourier descriptors Zo(k) are defined as

    Zo(k) = 0,              k = 0;
    Zo(k) = Z(k) / |Z(1)|,  otherwise.

[Note: For normalization as above, the points of the contour must be in-
dexed from 0 to (N − 1) in counter-clockwise order; in the opposite case,
|Z(−1)| should be used.]
Contours with sharp excursions possess more high-frequency energy than
smooth contours. However, applying a weighting factor that increases with
frequency leads to unbounded values that are also sensitive to noise. A shape
factor ff based upon the normalized Fourier descriptors was defined by Shen

[Figure: (a) plot of |x(n) + j y(n)| versus contour point index n; (b) plot of log10(1 + |Fourier descriptors|) versus frequency index k.]

FIGURE 6.19
(a) Signature with |z(n)| of the benign-mass contour in Figure 6.2 (a).
(b) Magnitude of the Fourier descriptors, shown only for k = [−50, 50].

[Figure: (a) plot of |x(n) + j y(n)| versus contour point index n; (b) plot of log10(1 + |Fourier descriptors|) versus frequency index k.]

FIGURE 6.20
(a) Signature with |z(n)| of the malignant-tumor contour in Figure 6.3 (a).
(b) Magnitude of the Fourier descriptors, shown only for k = [−50, 50].

[Figure panels (a) and (b).]

FIGURE 6.21
Filtering of the benign-mass contour in Figure 6.2 (a) using Fourier descrip-
tors. (a) Using coefficients for k = 1 (smaller circle in dashed line), k = −1
(larger circle in dashed line), and k = {−1, 0, 1} (ellipse in solid line). (b) Us-
ing coefficients for k = [−2, 2] (dashed line) and k = [−3, 3] (solid line). The
original contour is indicated with a dotted line for reference.

[Figure panels (a) and (b).]

FIGURE 6.22
Filtering of the malignant-tumor contour in Figure 6.3 (a) using Fourier de-
scriptors. (a) Using coefficients for k = 1 (smaller circle in dashed line),
k = −1 (larger circle in dashed line), and k = {−1, 0, 1} (ellipse in solid line).
(b) Using coefficients for k = [−10, 10] (dashed line) and k = [−20, 20] (solid
line). The original contour is indicated with a dotted line for reference.
et al. [274] as

$$ f\!f = 1 - \frac{\sum_{k=-N/2+1}^{N/2} |Z_o(k)| / |k|}{\sum_{k=-N/2+1}^{N/2} |Z_o(k)|}. \qquad (6.58) $$
The advantage of this measure is that it is limited to the range [0, 1], and is
not sensitive to noise, which would not be the case if weights increasing with
frequency were used. ff is invariant to translation, rotation, starting point,
and contour size, and increases in value as the object shape becomes more
complex and rough.
Other forms of weighting could be used in Equation 6.58 to derive several
variants or different shape factors based upon Fourier descriptors. For exam-
ple, the normalized frequency given by |k|/(N/2) could be used to provide weights
increasing with frequency, and the computation limited to frequencies up to
a fraction of the highest available frequency (such as 0.2) in order to limit the
effect of noise and high-frequency artifacts. High-order moments could also
be computed by using powers of the normalized frequency. Subtraction from
unity as in Equation 6.58 could then be removed so as to obtain shape factors
that increase with roughness.
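A minimal sketch of the normalization and of Equation 6.58 follows; this is an illustration, not code from the book, and assumes the contour is given as complex samples indexed counter-clockwise (so that |Z(1)| is the dominant first-order descriptor):

```python
import numpy as np

def fourier_factor(z):
    """Shape factor ff (Equation 6.58) for a closed contour given as complex
    samples z(n) = x(n) + j y(n), n = 0, ..., N-1, counter-clockwise."""
    N = len(z)
    Z = np.fft.fft(z) / N                  # Fourier descriptors Z(k)
    k = np.arange(N)
    k = np.where(k > N // 2, k - N, k)     # k = -N/2 + 1, ..., N/2
    Zo = np.abs(Z) / np.abs(Z[1])          # divide by |Z(1)|: size-invariant
    Zo[0] = 0.0                            # Z_o(0) = 0: position-invariant
    nz = k != 0
    return 1.0 - np.sum(Zo[nz] / np.abs(k[nz])) / np.sum(Zo[nz])
```

For a perfect circle, only Z_o(1) is nonzero and ff evaluates to zero; adding high-frequency ripple to the contour raises ff toward 1, in keeping with the interpretation of ff as a roughness measure.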
The values of ff for the contours of the objects in Figures 6.15 and 6.16 are
listed in Tables 6.1 and 6.2. The values of ff do not demonstrate the same
trends as those of the other shape factors listed in the tables for the same
contours. Several shape factors that characterize different notions of shape
complexity may be required for efficient pattern classification of contours in
some applications; see Sections 6.6 and 6.7 for illustrations.
Malignant calcifications that have elongated and rough contours lead to
larger ff values than benign calcifications that are mostly smooth, round,
or oval in shape. Furthermore, tumors with microlobulations and jagged
boundaries are expected to have larger ff values than masses with smooth or
macrolobulated boundaries. Shen et al. [274, 334] applied ff to shape analysis
of mammographic calcifications. The details of this application are presented
in Section 6.6. Rangayyan et al. [163] used ff to discriminate between benign
breast masses and malignant tumors, and obtained an accuracy of 76%; see
Section 6.7. Sahiner et al. [428] tested the classification performance of sev-
eral shape factors and texture measures with a dataset of 122 benign breast
masses and 127 malignant tumors; ff was found to give the best individual
performance with an accuracy of 0.82.

6.4 Fractional Concavity


Most benign mass contours are smooth, oval, or have major portions of con-
vex macrolobulations. Some benign masses may have minor concavities and
spicules. On the other hand, malignant tumors typically possess both con-
cave and convex segments as well as microlobulations and prominent spicules.
Rangayyan et al. [345] proposed a measure of fractional concavity (fcc) of
contours to characterize and quantify these properties.
In order to compute fcc, after performing segmentation of the contour as
explained in Section 6.1.3, the individual segments between successive inflec-
tion points are labeled as concave or convex parts. A convex part is defined as
a segment of the contour that encloses a portion of the mass (inside of the con-
tour), whereas a concave part is one formed by the presence of a background
region within the segment. Figure 6.9 shows a section of a mammogram with
a circumscribed benign mass, overlaid with the contour drawn by a radiolo-
gist specialized in mammography; the black and white portions represent the
concave and convex parts, respectively. Figure 6.8 shows a similar result of
the analysis of the contour of a spiculated malignant tumor.
The contours used in the work of Rangayyan et al. [345] were manually
drawn, and included artifacts and minor modulations that could lead to in-
efficient representation for pattern classification. The polygonal modeling
procedure described in Section 6.1.4 was applied in order to reduce the effect
of the artifacts. The cumulative length of the concave segments was com-
puted using the polygonal model, and normalized by the total length of the
contour to obtain fcc. It is obvious that fcc is limited to the range [0, 1], and
is independent of rotation, shift, and the size (scaling) of the contour. The
performance of fcc in discriminating between benign masses and malignant
tumors is illustrated in Sections 6.7, 12.11, and 12.12.
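Once the segments between successive inflection points have been labeled, fcc reduces to a ratio of lengths. The following is a minimal sketch (not the authors' code); the segmentation and concave/convex labeling of Sections 6.1.3 and 6.1.4 are assumed to have been performed already, and the function name and data layout are illustrative only:

```python
import numpy as np

def fractional_concavity(segments, labels):
    """fcc = (cumulative length of concave segments) / (total contour length).
    `segments` is a list of polyline vertex arrays, one per segment between
    successive inflection points; `labels` marks each segment as 'concave'
    or 'convex'."""
    def plen(seg):
        # Sum of Euclidean distances between successive vertices.
        d = np.diff(np.asarray(seg, dtype=float), axis=0)
        return np.hypot(d[:, 0], d[:, 1]).sum()
    total = sum(plen(s) for s in segments)
    concave = sum(plen(s) for s, lab in zip(segments, labels) if lab == 'concave')
    return concave / total
```

Because both the numerator and the denominator are lengths along the same contour, the result lies in [0, 1] and is unaffected by rotation, shift, and uniform scaling, as stated above.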
Lee et al. [429] proposed an irregularity index for the classification of cuta-
neous melanocytic lesions based upon their contours. The index was derived
via an analysis of the curvature of the contour and the detection of local in-
dentations (concavities) and protrusions (convexities). The irregularity index
was observed to have a higher correlation with clinical assessment of the le-
sions than other shape factors based upon compactness (see Section 6.2.1)
and fractal analysis (see Section 7.5).

6.5 Analysis of Spicularity


It is known that invasive carcinomas, due to their nature of infiltration into
surrounding tissues, form narrow, stellate distortions or spicules at their
boundaries. Based upon this observation, Rangayyan et al. [345] proposed
a spiculation index (SI) to represent the degree of spiculation of a mass con-
tour. In order to emphasize narrow spicules and microlobulations, a weight-
ing factor was included to enhance the contributions of narrow spicules in the
computation of SI.
For each curved part of a mass contour or the corresponding polygonal
model segment, obtained as described in Sections 6.1.3 and 6.1.4, the ratio of
its length to the base width can represent the degree of narrowness or spicula-
tion. A nonlinear weighting function was proposed by Rangayyan et al. [345],
based upon the segment's length S and angle of spiculation θ, to deliver pro-
gressively increasing weighting with increase in the narrowness of spiculation
of each segment. Spicule candidates were identified as portions of the contour
delimited by pairs of successive points of inflection. The polygonal model,
obtained as described in Section 6.1.4, was used to compute the parameters
S and θ for each spicule candidate.
If a spicule includes M polygonal segments, then there exist M − 1 angles at
the points of intersection of the successive segments. Let s_m, m = 1, 2, ..., M,
be the polygonal segments, and Θ_n, n = 1, 2, ..., M − 1, be the angles sub-
tended. Then, the segment length (S) and the angle of narrowness (θ) of the
spicule under consideration are computed as follows:
1. If M = 1, the portion of the contour that has been delimited by suc-
cessive points of inflection is relatively straight; see Figure 6.10. Such
parts are merged into the spicules that include them, thus enhancing
the lengths of the corresponding spicules without affecting their angles
of spiculation. The merging process discards the redundant points of
inflection lying on relatively straight parts of the contour. This may
be verified by comparing the initial points of inflection present on the
contour in Figure 6.10 with the points of inflection that are retained to
compute SI in the corresponding contour shown in Figure 6.23, specif-
ically in the spicule with the angle of spiculation labeled as 116°.
2. If M = 2, then the length of the spicule is S = s1 + s2, and the angle
subtended by the linear segments at the point of intersection represents
the angle of narrowness (θ) of the spicule.
3. If M > 2, then the length of the spicule is S = Σ_{m=1}^{M} s_m. In order
to estimate the angle of narrowness, an adaptive threshold is applied
by using the mean of the set of angles Θ_n, n = 1, 2, ..., M − 1, as the
threshold (Θ_th) for rejecting insignificant angles (that is, angles that are
close to 180°). The mean of the angles that are less than or equal to
Θ_th is taken as an estimate of the angle of narrowness of the spicule.
Figure 6.24 illustrates the computation of S and θ using the procedure given
above for two different examples of spicules with M = 2 and M = 5, respec-
tively.
Figure 6.23 shows the spicule candidates used in the computation of SI for
the contour of the spiculated malignant tumor in Figure 6.8; the corresponding
polygonal model is shown in Figure 6.10. The angles of spiculation computed
are indicated in Figure 6.23 for all spicule candidates; some of the candidates
may not be considered to be spicules by a radiologist. (Note: Visual assess-
ment of the angles of spicules may not agree well with the computed values
due to the thresholding and averaging process.) Observe that most of the
angles computed for narrow spicules are acute; on the other hand, the angles
computed are obtuse for large lobulations and relatively straight segments.
The procedure described above adapts to the complexity of each spicule and
delivers reliable estimates of the lengths and angles of narrowness of spicules
required for computing SI, following polygonal modeling. The computation
of SI for a given mass contour is described next.
Let S_n and θ_n, n = 1, 2, ..., N, be the length and angle of N sets of
polygonal model segments corresponding to the N spicule candidates of a
mass contour. Then, SI is computed as

$$ SI = \frac{\sum_{n=1}^{N} (1 + \cos \theta_n)\, S_n}{\sum_{n=1}^{N} S_n}. \qquad (6.59) $$

The factor (1 + cos θ_n) modulates the length of each segment (possible spicule)
according to its narrowness. Spicules with narrow angles between 0° and 30°
get high weighting, as compared to macrolobulations that usually form obtuse
angles, and hence get low weighting.
The majority of the angles of spicules of the masses and tumors in the
MIAS database [376], computed by using the procedure described above, were
found to be in the range of 30° to 150° [345]. The function (1 + cos θ_n)
in Equation 6.59 is progressively decreasing within this range, giving lower
weighting to segments with larger angles. Relatively flat segments having
angles ranging between 150° and 180° receive low weighting, and hence are
treated as insignificant segments.
The denominator in Equation 6.59 serves as a normalization factor to take
into account the effect of the size of the contour; it ensures that SI represents
only the severity of the spiculated nature of the contour, which in turn may
be linked to the invasive properties of the tumor under consideration. The
value of SI as in Equation 6.59 is limited to the range [0, 2], and may be
normalized to the range [0, 1] by dividing by 2. Circumscribed masses with
smooth contours could be expected to have low SI values, whereas sharp,
stellate contours with acute spicules should have high SI values. The perfor-
mance of SI in discriminating between benign masses and malignant tumors
is illustrated in Sections 6.7, 12.11, and 12.12.
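The computation of θ, S, and SI described above can be sketched as follows. This is an illustrative implementation, not the authors' code; the names and data layout are assumptions, with each spicule candidate given by its polygonal-segment lengths and the angles, in degrees, subtended at their intersections:

```python
import numpy as np

def narrowness(lengths, angles_deg):
    """Length S and angle of narrowness theta (degrees) of one spicule
    candidate with M = len(lengths) >= 2 polygonal segments."""
    S = float(np.sum(lengths))
    a = np.asarray(angles_deg, dtype=float)
    if len(lengths) == 2:                 # M = 2: the single subtended angle
        return S, a[0]
    th = a.mean()                         # M > 2: adaptive threshold
    return S, a[a <= th].mean()           # mean of the significant angles

def spiculation_index(spicules):
    """SI (Equation 6.59) from a list of (lengths, angles_deg) pairs, one
    per spicule candidate; relatively straight parts (M = 1) are assumed
    to have been merged into adjacent spicules beforehand."""
    S, theta = zip(*(narrowness(l, a) for l, a in spicules))
    S = np.asarray(S)
    w = 1.0 + np.cos(np.radians(theta))   # weight: 2 at theta = 0, 0 at 180
    return float(np.sum(w * S) / np.sum(S))
```

With the M = 5 example of Figure 6.24 in mind, a candidate with angles [150, 40, 60, 170] has Θ_th = 105, so θ = (40 + 60)/2 = 50, matching the rule stated above; a single needle-like spicule (θ near 0°) drives SI toward its maximum of 2.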

[Figure: the polygonal model of the tumor contour, with each spicule candidate annotated with its computed angle of spiculation in degrees (values ranging from 28° to 170°).]

FIGURE 6.23
The polygonal model used in the procedure to compute SI for the spiculated
malignant tumor shown in Figure 6.8 (with the corresponding polygonal model
in Figure 6.10). One set of marks corresponds to the points of inflection retained
to represent the starting and the ending points of spicule candidates, and
the other marks indicate the points of intersection of linear segments within the
spicules in the corresponding complete polygonal model. The numbers inside
or beside each spicule candidate are the angles in degrees computed for the
derivation of SI. Reproduced with permission from R.M. Rangayyan, N.R.
Mudigonda, and J.E.L. Desautels, "Boundary modeling and shape analysis
methods for classification of mammographic masses", Medical and Biological
Engineering and Computing, 38: 487 – 496, 2000. © IFMBE.

[Figure: two spicule candidates. For M = 2: S = s1 + s2 and θ = Θ1. For M = 5: S = s1 + s2 + s3 + s4 + s5; Θth = (Θ1 + Θ2 + Θ3 + Θ4)/4; with Θ2 < Θth and Θ3 < Θth, θ = (Θ2 + Θ3)/2.]

FIGURE 6.24
Computation of segment length S and angle of spiculation θ for two examples
of spicule candidates with the number of segments M = 2 and M = 5, respec-
tively. Θth is the threshold computed to reject insignificant angles (that is,
angles that are close to 180°). Reproduced with permission from R.M. Ran-
gayyan, N.R. Mudigonda, and J.E.L. Desautels, "Boundary modeling and
shape analysis methods for classification of mammographic masses", Medical
and Biological Engineering and Computing, 38: 487 – 496, 2000. © IFMBE.

6.6 Application: Shape Analysis of Calcifications

Because of the higher attenuation coefficient of calcium as compared with nor-
mal breast tissues, the main characteristic of calcifications in mammograms is
that they are relatively bright. This makes calcifications readily distinguish-
able on properly acquired mammograms. However, calcifications that appear
against a background of dense breast tissue may be difficult to detect; see
Sections 5.4.9 and 5.4.10 for illustrations.
Malignant calcifications tend to be numerous, clustered, small, varying in
size and shape, angular, irregularly shaped, and branching in orientation [430,
431]. On the other hand, calcifications associated with benign conditions are
generally larger, more rounded, smaller in number, more diffusely distributed,
and more homogeneous in size and shape. One of the key differences between
benign and malignant calcifications lies in the roughness of their shapes.
Shen et al. [274, 334] applied shape analysis to the classification of mam-
mographic calcifications as benign or malignant. Eighteen mammograms of
biopsy-proven cases from the Radiology Teaching Library of the Foothills
Hospital (Calgary, Alberta, Canada) were digitized with high resolution of
up to 2560 × 4096 pixels with 12 bits per pixel using the Eikonix 1412 scan-
ner. Sixty-four benign calcifications from 11 mammograms and 79 malignant
calcifications from seven mammograms were manually selected for shape anal-
ysis. Multitolerance region growing (see Section 5.4.9) was performed, and the
shape factors (mf, ff, cf) based upon moments (Section 6.2.2), Fourier de-
scriptors (Section 6.3), and compactness (Section 6.2.1) were computed from
their boundaries. Figures 5.26 and 5.27 illustrate parts of two mammograms,
one with benign calcifications and the other with malignant calcifications,
along with the contours of the calcifications that were detected. A plot of
the shape factors (mf, ff, cf) for the 143 calcifications in the study of Shen
et al. is shown in Figure 6.25. It is evident that most of the malignant cal-
cifications have large values, whereas most of the benign calcifications have
low values for the three shape factors. The bar graph in Figure 6.26 indicates
that the means of the three features possess good levels of differences be-
tween the benign and malignant categories with respect to the corresponding
standard deviation values. The three measures represent shape complexity
from different perspectives, and hence could be combined for improved dis-
crimination between benign and malignant calcifications. The three features
permitted classification of the 143 calcifications with 100% accuracy using the
nearest-neighbor method (see Section 12.2.3) as well as neural networks (see
Section 12.7).
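The nearest-neighbor rule applied to the three-feature vector can be sketched as follows. The feature values below are synthetic, chosen for illustration only, and are not the data of the study:

```python
import numpy as np

def nn_classify(x, train_X, train_y):
    """Nearest-neighbor rule: assign the label of the closest training sample
    in the (mf, ff, cf) feature space (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(d))]

# Synthetic (mf, ff, cf) vectors for illustration only: benign calcifications
# cluster at low values, malignant calcifications at high values.
X = np.array([[0.02, 0.05, 0.10], [0.03, 0.08, 0.15],
              [0.10, 0.35, 0.60], [0.12, 0.40, 0.70]])
y = np.array(['benign', 'benign', 'malignant', 'malignant'])
label = nn_classify(np.array([0.11, 0.38, 0.65]), X, y)
```

In practice, the features should be normalized to comparable ranges before computing distances, so that no single shape factor dominates the classification.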

[Figure: 3D scatter plot of compactness cf versus Fourier factor ff and moment factor mf.]

FIGURE 6.25
Plot of the shape factors (mf, ff, cf) of 143 calcifications. The '+' symbols
represent 79 malignant calcifications, and the remaining symbols represent 64 benign
calcifications.

[Figure: bar graph of the means of the three shape factors for benign and malignant calcifications, with error bars.]

FIGURE 6.26
Means of the shape factors (mf, ff, cf) of 64 benign calcifications ('b') and
79 malignant calcifications ('m'). The error bars indicate the range of mean
plus or minus one standard deviation.

6.7 Application: Shape Analysis of Breast Masses and Tumors

Rangayyan et al. [163, 345] applied several shape factors to the analysis of a set
of contours of 28 benign breast masses and 26 malignant tumors. The dataset
included 39 mammograms from the MIAS database [376] and 15 images from
Screen Test: Alberta Program for the Early Detection of Breast Cancer [61].
The contours were drawn on digitized mammograms by an expert radiologist.
Figure 6.27 shows the 54 contours arranged in order of increasing shape com-
plexity as characterized by the magnitude of the feature vector (cf, fcc, SI);
Figure 6.28 shows a scatter plot of the three features. Each of the three fea-
tures has, in general, the distinction of reflecting low values for circumscribed
benign masses and high values for spiculated malignant tumors.

In benign-versus-malignant pattern classification experiments using linear
discriminant analysis [163], ff, cf, and mf provided accuracies of 76%, 72%,
and 67%, respectively; the moment-based shape factors provided classification
accuracy of up to 75%; chord-length statistics provided accuracy up to 68%
only. The use of the parameters obtained via parabolic models of segments
of the contours separated by their points of inflection led to a classification
accuracy of 76% [354]. In a different study [345], the shape factors fcc and
SI provided classification accuracies of 74% and 80%, respectively; the set
(cf, fcc, SI) provided the highest accuracy of 82%. The MIAS database was
observed to include an unusually high proportion of benign masses with spic-
ulated contours, which led to reduced accuracy of benign-versus-malignant
classification via shape analysis.

In pattern classification experiments to discriminate between circumscribed
and spiculated masses, several combinations of the shape factors mentioned
above provided accuracies of up to 91% [163, 345]. However, the classifica-
tion of a contour as circumscribed or spiculated is a subjective decision of
a radiologist; on the other hand, benign-versus-malignant classification via
pathology is objective. Furthermore, circumscribed-versus-spiculated classifi-
cation is of academic interest, with the discrimination between benign disease
and malignancy being of clinical relevance and importance. For these reasons,
circumscribed-versus-spiculated classification is not important. Sections 7.9,
8.8, 12.11, and 12.12 provide further details on pattern classification of breast
masses and tumors.

[Figure: grid of 54 mass contours, each labeled 'b' (benign) or 'm' (malignant).]

FIGURE 6.27
Contours of 54 breast masses. 'b': benign masses (28). 'm': malignant tumors
(26). The contours are arranged in order of increasing magnitude of the
feature vector (cf, fcc, SI). Note that the masses and their contours are of
widely differing size, but have been scaled to the same size in the illustration.

[Figure: 3D feature-space scatter plot of SI versus cf and fcc.]

FIGURE 6.28
Feature-space plot of cf, fcc, and SI: one set of symbols for benign masses (28) and '*' for
malignant tumors (26). SI: spiculation index; fcc: fractional concavity;
cf: modified compactness. See Figure 6.27 for an illustration of the contours.

6.8 Remarks
In this chapter, we have explored several methods to model, characterize,
and parameterize contours. Closed contours were considered in most of the
discussion and illustrations, although some of the techniques described may
be extended to open contours or contours with missing parts.
Regardless of the success of some of the methods and applications illus-
trated, it should be noted that obtaining contours with good accuracy could
be difficult in many applications. It is not common clinical practice to draw
the contours of tumors or organs. Malignant tumors typically exhibit poor
definition of their margins due to their invasive and metastatic nature: this
makes the identification and drawing of their contours difficult, if not impos-
sible, either manually or by computer methods. Hand-drawn and computer-
detected contours may contain imperfections and artifacts that could corrupt
shape factors; furthermore, there could be significant variations between the
contours drawn by different individuals for the same objects. It should be
recognized that the contour of a 3D entity (such as a tumor) as it appears
on a 2D image (for example, a mammogram) depends upon the imaging and
projection geometry. Above all, contours of biological entities often present
significant overlap in their characteristics between various categories, such as
for benign and malignant diseases. The inclusion of measures representing
other image characteristics, such as texture and gradient, could complement
shape factors, and assist in improved analysis of biomedical images. For ex-
ample, Sahiner et al. [428] showed that the combined use of shape and texture
features could improve the accuracy in discriminating between benign breast
masses and malignant tumors. Methods for the characterization of texture
and gradient information are described in Chapters 7 and 8. The use of the
fractal dimension as a measure of roughness is described in Section 7.5. See
Chapter 12 for several examples of pattern classification via shape analysis.

6.9 Study Questions and Problems

Selected data files related to some of the problems and exercises are available at the
site
www.enel.ucalgary.ca/People/Ranga/enel697
1. Prove that the zeroth-order Fourier descriptor represents the centroid of the
given contour.
2. Prove that the first-order Fourier descriptors (k = 1 or k = −1) represent
circles.
3. A robotic inspection system is required to discriminate between flat (planar)
objects arriving on a conveyor belt. The objects may arrive at any orientation.
The set of possible objects includes squares, circles, and triangles of variable
size.
Propose an image analysis procedure to detect each object and recognize it
as being one of the three types mentioned above. Describe each step of the
algorithm briefly. Provide equations for the measures that you may propose.

6.10 Laboratory Exercises and Projects

1. Using black or dark-colored paper, cut out at least 20 pieces of widely varying
shapes. Include a few variations of the same geometric shape (square, triangle,
etc.) with varying size and orientation.
Lay out the objects on a flat surface and capture an image. Develop a program
to detect the objects and derive their contours. Verify that the contours are
closed, are one-pixel thick, and do not include knots.
Derive several shape factors for each contour, including compactness, Fourier
descriptors, moments, and fractional concavity. Rank-order the objects by
each shape factor individually, and by all of the factors combined into a single
vector. Study the characterization of various notions of shape complexity by
the different shape factors.
Request a number of your friends and colleagues to assign a measure of rough-
ness to each object on a scale of 0 − 100. Normalize the values by dividing
by 100 and average the scores over all the observers. Analyze the correlation
between the subjective ranking and the objective measures of roughness.
2. Synthesize a digital image with rectangles, triangles, and circles of various
sizes. Compute several of the shape factors described in this chapter for each
object in the image.
Study the variation in the shape factors from one category of shapes to another
in your test image. Is the variation adequate to facilitate pattern classifica-
tion?
Study the variation in the shape factors within each category of shapes in
your test image. Explain the cause of the variation.
7
Analysis of Texture

Texture is one of the important characteristics of images, and texture analysis
is encountered in several areas [432, 433, 434, 435, 436, 437, 438, 439, 440, 441,
442]. We find around us several examples of texture: on wooden furniture,
cloth, brick walls, floors, and so on. We may group texture into two general
categories: (quasi-) periodic and random. If there is a repetition of a texture
element at almost regular or (quasi-) periodic intervals, we may classify the
texture as being (quasi-) periodic or ordered; the elements of such a texture are
called textons [438] or textels. Brick walls and floors with tiles are examples
of periodic texture. On the other hand, if no texton can be identified, such
as in clouds and cement-wall surfaces, we can say that the texture is random.
Rao [432] gives a more detailed classification, including weakly ordered or
oriented texture that takes into account hair, wood grain, and brush strokes
in paintings. Texture may also be related to visual and/or tactile sensations
such as fineness, coarseness, smoothness, granularity, periodicity, patchiness,
being mottled, or having a preferred orientation [441].
A significant amount of work has been done in texture characterization [441,
442, 439, 432, 438] and synthesis [443, 438]; see Haralick [441] and Haralick
and Shapiro [440] (Chapter 9) for detailed reviews. According to Haralick
et al. [442], texture relates to information about the spatial distribution of
gray-level variation; however, this is a general observation. It is important
to recognize that, due to the existence of a wide variety of texture, no single
method of analysis would be applicable to several different situations. Sta-
tistical measures such as gray-level co-occurrence matrices and entropy [442]
characterize texture in a stochastic sense; however, they do not convey a
physical or perceptual sense of the texture. Although periodic texture may
be modeled as repetitions of textons, not many methods have been developed
for the structural analysis of texture [444].
In this chapter, we shall explore the nature of texture found in biomedical
images, study methods to characterize and analyze such texture, and investi-
gate approaches for the classification of biomedical images based upon texture.
We shall concentrate on random texture in this chapter; due to the extensive
occurrence of oriented patterns and texture in biomedical images, we shall
treat this topic on its own, in Chapter 8.


7.1 Texture in Biomedical Images

A wide variety of texture is encountered in biomedical images. Oriented tex-
ture is common in medical images due to the fibrous nature of muscles and
ligaments, as well as the extensive presence of networks of blood vessels, veins,
ducts, and nerves. A preferred or dominant orientation is associated with the
functional integrity and strength of such structures. Although truly periodic
texture is not commonly encountered in biomedical images, ordered texture
is often found in images of the skins of reptiles, the retina, the cornea, the
compound eyes of insects, and honeycombs.
Organs such as the liver are made up of clusters of parenchyma that are
of the order of 1 − 2 mm in size. The pixels in CT images have a typical
resolution of 1 × 1 mm, which is comparable to the size of the parenchymal
units. With ultrasonic imaging, the wavelength of the probing radiation is of
the order of 1 − 2 mm, which is also comparable to the size of parenchymal
clusters. Under these conditions, the liver appears to have a speckled random
texture.
Several samples of biomedical images with various types of texture are
shown in Figures 7.1, 7.2, and 7.3; see also Figures 1.5, 1.8, 9.18, and 9.20.
It is evident from these illustrations that no single approach can succeed in
characterizing all types of texture.
Several approaches have been proposed for the analysis of texture in med-
ical images for various diagnostic applications. For example, texture mea-
sures have been derived from X-ray images for automatic identification of
pulmonary diseases [433], for the analysis of MR images [445], and processing
of mammograms [165, 275, 446]. In this chapter, we shall investigate the na-
ture of texture in a few biomedical images, and study some of the commonly
used methods for texture analysis.

7.2 Models for the Generation of Texture

Martins et al. [447], in their work on the auditory display of texture in im-
ages (see Section 7.8), outlined the following similarities between speech and
texture generation. The sounds produced by the human vocal system may be
grouped as voiced, unvoiced, and plosive sounds [31, 176]. The first two types
of speech signals may be modeled as the convolution of an input excitation
signal with a filter function. The excitation signal is quasi-periodic when we
use the vocal cords to create voiced sounds, or random in the case of unvoiced
sounds. Figure 7.4 (a) illustrates the basic model for speech generation.

[Figure panels (a)–(d).]

FIGURE 7.1
Examples of texture in CT images: (a) Liver. (b) Kidney. (c) Spine. (d) Lung.
The true size of each image is 55 × 55 mm. The images represent widely dif-
fering ranges of tissue density, and have been enhanced to display the inherent
texture. Image data courtesy of Alberta Children's Hospital.

[Figure panels (a)–(d).]

FIGURE 7.2
Examples of texture in mammograms (from the MIAS database [376]): (a) –
(c) oriented texture, true image size 60 × 60 mm; (d) random texture, true
image size 40 × 40 mm. For more examples of oriented texture, see Figures
9.20 and 1.8, as well as Chapter 8.

[Figure panels (a)–(c).]

FIGURE 7.3
Examples of ordered texture: (a) Endothelial cells in the cornea. Image cour-
tesy of J. Jaroszewski. (b) Part of a fly's eye. Reproduced with permission
from D. Suzuki, "Behavior in drosophila melanogaster: A geneticist's view",
Canadian Journal of Genetics and Cytology, XVI(4): 713 – 735, 1974. © Ge-
netics Society of Canada. (c) Skin on the belly of a cobra snake. Image
courtesy of Implora, Colonial Heights, VA. http://www.implora.com. See
also Figure 1.5.
[Figure 7.4: block diagrams. In (a), a random excitation (unvoiced) or a quasi-periodic excitation (voiced) drives a vocal-tract filter to produce the speech signal. In (b), a random or an ordered field of impulses drives a texture-element (texton) or spot filter to produce the textured image.]
FIGURE 7.4
(a) Model for speech signal generation. (b) Model for texture synthesis. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690-705, 2001. © SPIE and IS&T.
Texture may also be modeled as the convolution of an input impulse field with a spot or a texton that acts as a filter. The "spot noise" model of van Wijk [443] for synthesizing random texture uses this approach, in which the Fourier spectrum of the spot acts as a filter that modifies the spectrum of a 2D random-noise field. Ordered texture may be generated by specifying the basic pattern or texton to be used, and a placement rule. The placement rule may be expressed as a field of impulses. Texture is then given by the convolution of the impulse field with the texton, which could also be represented as a filter. A one-to-one correspondence may thus be established between speech signals and texture in images. Figure 7.4 (b) illustrates the model for texture synthesis: the correspondence between the speech and image generation models in Figure 7.4 is straightforward.
7.2.1 Random texture

According to the model in Figure 7.4, random texture may be modeled as a filtered version of a field of white noise, where the filter is represented by a spot of a certain shape and size (usually of small spatial extent compared to the size of the image). The 2D spectrum of the noise field, which is essentially a constant, is shaped by the 2D spectrum of the spot. Figure 7.5 illustrates a random-noise field of size 256 × 256 pixels and its Fourier spectrum. Parts (a)-(d) of Figure 7.6 show two circular spots of diameter 12 and 20 pixels and their spectra; parts (e)-(h) of the figure show the random texture generated by convolving the noise field in Figure 7.5 (a) with the circular spots, and their Fourier spectra. It is readily seen that the spots have filtered the noise, and that the spectra of the textured images are essentially those of the corresponding spots.
Figures 7.7 and 7.8 illustrate a square spot and a hash-shaped spot, as well as the corresponding random texture generated by the spot-noise model and the corresponding spectra; the anisotropic nature of the images is clearly seen in their spectra.
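The spot-noise synthesis described above can be sketched in a few lines of NumPy; the function name and parameter values below are illustrative, not from the text, and the frequency-domain product implements a circular convolution of the noise field with the spot:

```python
import numpy as np

def spot_noise_texture(size=256, radius=10, seed=0):
    """Random texture via the spot-noise model: convolve a 2D
    white-noise field with a circular spot acting as a filter."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))      # 2D white-noise field
    y, x = np.ogrid[:size, :size]
    c = size // 2
    spot = ((x - c) ** 2 + (y - c) ** 2 <= radius ** 2).astype(float)
    # The spot's Fourier spectrum shapes the flat spectrum of the noise.
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(spot)))

tex = spot_noise_texture()
```

As noted above, the Fourier spectrum of the resulting texture is essentially that of the spot.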
7.2.2 Ordered texture

Ordered texture may be modeled as the placement of a basic pattern or texton (which is of a much smaller size than the total image) at positions determined by a 2D field of (quasi-) periodic impulses. The separations between the impulses in the x and y directions determine the periodicity or "pitch" in the two directions. This process may also be modeled as the convolution of the impulse field with the texton; in this sense, the only difference between ordered and random texture lies in the structure of the impulse field: the former uses a (quasi-) periodic field of impulses, whereas the latter uses a random-noise field. Once again, the spectral characteristics of the texton could be seen as a filter that modifies the spectrum of the impulse field (which is essentially a 2D field of impulses as well).
FIGURE 7.5
(a) Image of a random-noise field (256 × 256 pixels). (b) Spectrum of the image in (a). Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690-705, 2001. © SPIE and IS&T.
Figure 7.9 (a) illustrates a 256 × 256 field of impulses with horizontal periodicity px = 40 pixels and vertical periodicity py = 40 pixels. Figure 7.9 (b) shows the corresponding periodic texture with a circle of diameter 20 pixels as the spot or texton. Figure 7.9 (c) shows a periodic texture with the texton being a square of side 20 pixels, px = 40 pixels, and py = 40 pixels. Figure 7.9 (d) depicts a periodic-textured image with an isosceles triangle of sides 12, 16, and 23 pixels as the spot, and periodicity px = 40 pixels and py = 40 pixels. See Section 7.6 for illustrations of the Fourier spectra of images with ordered texture.
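The placement-rule model above can be sketched as the convolution of a periodic impulse field with a texton; the square texton and the parameter names below are illustrative, and the FFT product again gives a circular convolution:

```python
import numpy as np

def ordered_texture(size=256, px=40, py=40, side=20):
    """Ordered texture: place a square texton on a periodic
    impulse field by (circular) convolution."""
    impulses = np.zeros((size, size))
    impulses[::py, ::px] = 1.0        # periodic placement rule
    texton = np.zeros((size, size))
    texton[:side, :side] = 1.0        # square texton of the given side
    return np.real(np.fft.ifft2(np.fft.fft2(impulses) * np.fft.fft2(texton)))
```

When the image size is an integer multiple of the pitch (for example, size 240 with px = py = 40), the result is exactly periodic with period px and py.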
7.2.3 Oriented texture

Images with oriented texture may be generated using the spot-noise model by providing line segments or oriented motifs as the spot. Figure 7.10 shows a spot with a line segment oriented at 135°, and the result of convolution of the spot with a random-noise field; the log-magnitude Fourier spectra of the spot and the textured image are also shown. The preferred orientation of the texture and the directional concentration of the energy in the Fourier domain are clearly seen in the figure. See Figure 7.2 for examples of oriented texture in mammograms. See Chapter 8 for detailed discussions on the analysis of oriented texture and several illustrations of oriented patterns.
FIGURE 7.6
(a) Circle of diameter 12 pixels. (b) Circle of diameter 20 pixels. (c) Fourier spectrum of the image in (a). (d) Fourier spectrum of the image in (b). (e) Random texture with the circle of diameter 12 pixels as the spot. (f) Random texture with the circle of diameter 20 pixels as the spot. (g) Fourier spectrum of the image in (e). (h) Fourier spectrum of the image in (f). The size of each image is 256 × 256 pixels. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690-705, 2001. © SPIE and IS&T.
FIGURE 7.7
(a) Square of side 20 pixels. (b) Random texture with the square of side 20 pixels as the spot. (c) Spectrum of the image in (a). (d) Spectrum of the image in (b). The size of each image is 256 × 256 pixels. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690-705, 2001. © SPIE and IS&T.
FIGURE 7.8
(a) Hash of side 20 pixels. (b) Random texture with the hash of side 20 pixels as the spot. (c) Spectrum of the image in (a). (d) Spectrum of the image in (b). The size of each image is 256 × 256 pixels. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690-705, 2001. © SPIE and IS&T.
FIGURE 7.9
(a) Periodic field of impulses with px = 40 pixels and py = 40 pixels. (b) Ordered texture with a circle of diameter 20 pixels, px = 40 pixels, and py = 40 pixels as the spot. (c) Ordered texture with a square of side 20 pixels, px = 40 pixels, and py = 40 pixels as the spot. (d) Ordered texture with a triangle of sides 12, 16, and 23 pixels as the spot; px = 40 pixels and py = 40 pixels. The size of each image is 256 × 256 pixels. Reproduced with permission from A.C.G. Martins, R.M. Rangayyan, and R.A. Ruschioni, "Audification and sonification of texture in images", Journal of Electronic Imaging, 10(3): 690-705, 2001. © SPIE and IS&T.
FIGURE 7.10
Example of oriented texture generated using the spot-noise model in Figure 7.4: (a) Spot with a line segment oriented at 135°. (b) Oriented texture generated by convolving the spot in (a) with a random-noise field. (c) and (d) Log-magnitude Fourier spectra of the spot and the textured image, respectively. The size of each image is 256 × 256 pixels.
7.3 Statistical Analysis of Texture

Simple measures of texture may be derived based upon the moments of the gray-level PDF (or normalized histogram) of the given image. The kth central moment of the PDF p(l) is defined as

m_k = \sum_{l=0}^{L-1} (l - \bar{f})^k \, p(l),    (7.1)

where l = 0, 1, 2, \ldots, L-1 are the gray levels in the image f, and \bar{f} is the mean gray level of the image, given by

\bar{f} = \sum_{l=0}^{L-1} l \, p(l).    (7.2)

The second central moment, which is the variance of the gray levels and is given by

\sigma_f^2 = m_2 = \sum_{l=0}^{L-1} (l - \bar{f})^2 \, p(l),    (7.3)

can serve as a measure of inhomogeneity. The normalized third and fourth moments, known as the skewness and kurtosis, respectively, and defined as

\mathrm{skewness} = \frac{m_3}{m_2^{3/2}}    (7.4)

and

\mathrm{kurtosis} = \frac{m_4}{m_2^2},    (7.5)

indicate the asymmetry and uniformity (or lack thereof) of the PDF. High-order moments are affected significantly by noise or error in the PDF, and may not be reliable features. The moments of the PDF can only serve as basic representatives of gray-level variation.
Byng et al. [448] computed the skewness of the histograms of 24 × 24 (3.12 × 3.12 mm) sections of mammograms. An average skewness measure was computed for each image by averaging over all the section-based skewness measures of the image. Mammograms of breasts with increased fibroglandular density were observed to have histograms skewed toward higher density, resulting in negative skewness. On the other hand, mammograms of fatty breasts tended to have positive skewness. The skewness measure was found to be useful in predicting the risk of development of breast cancer.
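The section-based skewness measure of Equations 7.1-7.4 can be sketched as follows; the function name and the 8-bit gray-level assumption are illustrative:

```python
import numpy as np

def histogram_skewness(section, levels=256):
    """Skewness of the gray-level PDF of an image section
    (Eqs. 7.1-7.4): m3 / m2^(3/2)."""
    counts, _ = np.histogram(section, bins=levels, range=(0, levels))
    p = counts / counts.sum()            # normalized histogram -> PDF
    l = np.arange(levels)
    mean = np.sum(l * p)                 # Eq. 7.2
    m2 = np.sum((l - mean) ** 2 * p)     # Eq. 7.3
    m3 = np.sum((l - mean) ** 3 * p)
    return m3 / m2 ** 1.5
```

A histogram concentrated at low gray levels with a long tail toward high values yields positive skewness; a symmetric histogram yields a value near zero.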
7.3.1 The gray-level co-occurrence matrix

Given the general description of texture as a pattern of the occurrence of gray levels in space, the most commonly used measures of texture, in particular of random texture, are the statistical measures proposed by Haralick et al. [441, 442]. Haralick's measures are based upon the moments of a joint PDF that is estimated as the joint occurrence or co-occurrence of gray levels, known as the gray-level co-occurrence matrix (GCM). GCMs are also known as spatial gray-level dependence (SGLD) matrices, and may be computed for various orientations and distances.

The GCM P_{(d,\theta)}(l_1, l_2) represents the probability of occurrence of the pair of gray levels (l_1, l_2) separated by a given distance d at angle \theta. GCMs are constructed by mapping the gray-level co-occurrence counts or probabilities based on the spatial relations of pixels at different angular directions (specified by \theta) while scanning the image from left-to-right and top-to-bottom.

Table 7.1 shows the GCM for the image in Figure 7.11 with eight gray levels (3 b/pixel), obtained by considering pairs of pixels with the second pixel immediately below the first. For example, the pair of gray levels (1, 2), with 2 immediately below 1, occurs 10 times in the image. Observe that the table of counts of occurrence of pairs of pixels shown in Table 11.2 and used to compute the first-order entropy also represents a GCM, with the second pixel appearing immediately after the first in the same row. Due to the fact that neighboring pixels in natural images tend to have nearly the same values, GCMs tend to have large values along and around the main diagonal, and low values away from the diagonal.

Observe that, for an image with B b/pixel, there will be L = 2^B gray levels; the GCM is then of size L × L. Thus, for an image quantized to 8 b/pixel, there will be 256 gray levels, and the GCM will be of size 256 × 256. Fine quantization to large numbers of gray levels, such as 2^{12} = 4,096 levels in high-resolution mammograms, will increase the size of the GCM to unmanageable levels, and also reduce the values of the entries in the GCM. It may be advantageous to reduce the number of gray levels to a relatively small number before computing GCMs. A reduction in the number of gray levels with smoothing can also reduce the effect of noise on the statistics computed from GCMs.

GCMs are commonly formed for unit pixel distances and the four angles of 0°, 45°, 90°, and 135°. (Strictly speaking, the distances to the diagonally connected neighboring pixels at 45° and 135° would be \sqrt{2} times the pixel size.) For an M × N image, the number of pairs of pixels that can be formed will be less than MN, due to the fact that it may not be possible to pair the pixels in a few rows or columns at the borders of the image with another pixel according to the chosen parameters (d, \theta).
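A GCM of the kind shown in Table 7.1 (second pixel immediately below the first, so the last row is not processed) can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def gcm_vertical(img, levels):
    """Unnormalized GCM for d = 1, theta = 90 degrees: counts of
    gray-level pairs with the second pixel immediately below the first."""
    P = np.zeros((levels, levels), dtype=int)
    top, bottom = img[:-1, :], img[1:, :]   # last row has no pixel below it
    for l1, l2 in zip(top.ravel(), bottom.ravel()):
        P[l1, l2] += 1
    return P
```

Dividing the counts by their total gives the normalized GCM p(l1, l2) of Equation 7.6.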
1 1 1 1 1 1 1 1 1 1 2 3 2 2 1 2
0 1 1 1 1 1 1 1 1 1 1 2 2 3 4 5
1 0 0 0 1 1 1 1 1 1 1 1 2 2 4 6
2 2 3 5 4 3 1 0 1 1 1 1 1 2 3 5
4 6 5 4 3 1 1 2 2 1 1 1 1 1 2 4
5 5 2 1 2 3 2 2 2 3 3 4 3 2 1 3
4 3 1 2 1 1 1 2 2 2 1 2 2 2 3 5
2 0 2 0 1 3 1 3 5 3 3 2 2 3 3 6
1 1 2 2 1 2 1 2 3 3 3 4 4 6 5 6
1 1 2 4 1 0 0 1 3 4 5 5 5 4 4 6
1 1 1 4 2 1 2 3 5 5 5 4 4 3 4 6
1 1 1 4 4 4 5 6 6 5 4 3 2 3 5 6
1 1 2 5 5 4 5 5 4 3 3 2 3 4 5 6
2 1 4 5 5 5 5 4 3 1 1 1 4 6 5 6
2 2 5 5 5 4 3 2 2 1 1 4 6 6 6 7
4 4 4 4 3 2 2 1 0 1 5 6 6 6 6 7
FIGURE 7.11
A 16 × 16 part of the image in Figure 2.1 (a) quantized to 3 b/pixel, shown as an image and as a 2D array of pixel values.
TABLE 7.1
Gray-level Co-occurrence Matrix for the Image in Figure 7.11, with
the Second Pixel Immediately Below the First.
Current Pixel Next Pixel Below
0 1 2 3 4 5 6 7
0 0 3 4 1 0 1 0 0
1 6 44 10 9 5 1 0 0
2 3 13 13 5 8 3 1 0
3 1 5 11 5 3 5 2 0
4 0 1 5 7 5 9 3 0
5 0 0 1 5 11 10 4 0
6 0 0 0 0 2 3 10 1
7 0 0 0 0 0 0 0 1
Pixels in the last row were not processed. The GCM has not been normalized.
See also Table 11.2.
7.3.2 Haralick's measures of texture

Based upon normalized GCMs, Haralick et al. [441, 442] proposed several quantities as measures of texture. In order to define these measures, let us normalize the GCM as

p(l_1, l_2) = \frac{P(l_1, l_2)}{\sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} P(l_1, l_2)}.    (7.6)

A few other entities used in the derivation of Haralick's texture measures are as follows:

p_x(l_1) = \sum_{l_2=0}^{L-1} p(l_1, l_2);    (7.7)

p_y(l_2) = \sum_{l_1=0}^{L-1} p(l_1, l_2);    (7.8)

p_{x+y}(k) = \sum_{l_1=0}^{L-1} \sum_{\substack{l_2=0 \\ l_1+l_2=k}}^{L-1} p(l_1, l_2),    (7.9)

where k = 0, 1, 2, \ldots, 2(L-1); and

p_{x-y}(k) = \sum_{l_1=0}^{L-1} \sum_{\substack{l_2=0 \\ |l_1-l_2|=k}}^{L-1} p(l_1, l_2),    (7.10)

where k = 0, 1, 2, \ldots, L-1.
The texture measures are then defined as follows.

The energy feature F_1, which is a measure of homogeneity, is defined as

F_1 = \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p^2(l_1, l_2).    (7.11)

A homogeneous image has a small number of entries along the diagonal of the GCM with large values, which will lead to a large value of F_1. On the other hand, an inhomogeneous image will have small values spread over a larger number of GCM entries, which will result in a low value for F_1.

The contrast feature F_2 is defined as

F_2 = \sum_{k=0}^{L-1} k^2 \sum_{l_1=0}^{L-1} \sum_{\substack{l_2=0 \\ |l_1-l_2|=k}}^{L-1} p(l_1, l_2).    (7.12)
The correlation measure F_3, which represents linear dependencies of gray levels, is defined as

F_3 = \frac{1}{\sigma_x \sigma_y} \left[ \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} l_1 l_2 \, p(l_1, l_2) - \mu_x \mu_y \right],    (7.13)

where \mu_x and \mu_y are the means, and \sigma_x and \sigma_y are the standard deviations of p_x and p_y, respectively.

The sum of squares feature is given by

F_4 = \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} (l_1 - \bar{f})^2 \, p(l_1, l_2),    (7.14)

where \bar{f} is the mean gray level of the image.

The inverse difference moment, a measure of local homogeneity, is defined as

F_5 = \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} \frac{1}{1 + (l_1 - l_2)^2} \, p(l_1, l_2).    (7.15)

The sum average feature F_6 is given by

F_6 = \sum_{k=0}^{2(L-1)} k \, p_{x+y}(k),    (7.16)

and the sum variance feature F_7 is defined as

F_7 = \sum_{k=0}^{2(L-1)} (k - F_6)^2 \, p_{x+y}(k).    (7.17)

The sum entropy feature F_8 is given by

F_8 = - \sum_{k=0}^{2(L-1)} p_{x+y}(k) \log_2 [p_{x+y}(k)].    (7.18)

Entropy, a measure of nonuniformity in the image or the complexity of the texture, is defined as

F_9 = - \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p(l_1, l_2) \log_2 [p(l_1, l_2)].    (7.19)

The difference variance measure F_{10} is defined as the variance of p_{x-y}, in a manner similar to that given by Equations 7.16 and 7.17 for its sum counterpart.
The difference entropy measure is defined as

F_{11} = - \sum_{k=0}^{L-1} p_{x-y}(k) \log_2 [p_{x-y}(k)].    (7.20)

Two information-theoretic measures of correlation are defined as

F_{12} = \frac{H_{xy} - H_{xy1}}{\max\{H_x, H_y\}}    (7.21)

and

F_{13} = \left\{ 1 - \exp[-2 \, (H_{xy2} - H_{xy})] \right\}^{1/2},    (7.22)

where H_{xy} = F_9; H_x and H_y are the entropies of p_x and p_y, respectively;

H_{xy1} = - \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p(l_1, l_2) \log_2 [p_x(l_1) \, p_y(l_2)];    (7.23)

and

H_{xy2} = - \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} p_x(l_1) \, p_y(l_2) \log_2 [p_x(l_1) \, p_y(l_2)].    (7.24)

The maximal correlation coefficient feature F_{14} is defined as the square root of the second largest eigenvalue of Q, where

Q(l_1, l_2) = \sum_{k=0}^{L-1} \frac{p(l_1, k) \, p(l_2, k)}{p_x(l_1) \, p_y(k)}.    (7.25)

The subscripts d and \theta in the representation of the GCM P_{(d,\theta)}(l_1, l_2) have been removed in the definitions above for the sake of notational simplicity. However, it should be noted that each of the measures defined above may be derived for each value of d and \theta of interest. If the dependence of texture upon angle is not of interest, GCMs over all angles may be averaged into a single GCM. The distance d should be chosen taking into account the sampling interval (pixel size) and the size of the texture units of interest. More details on the derivation and significance of the features defined above are provided by Haralick et al. [441, 442].

Some of the features defined above have values much greater than unity, whereas some of the features have values far less than unity. Normalization to a predefined range, such as [0, 1], over the dataset to be analyzed, may be beneficial.
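A few of the measures above (energy, contrast, and entropy) can be computed from an unnormalized GCM as follows; this is a sketch of a small subset of the fourteen features, with an illustrative function name:

```python
import numpy as np

def haralick_subset(P):
    """Energy (F1), contrast (F2), and entropy (F9) from an
    unnormalized GCM P (Eqs. 7.6, 7.11, 7.12, 7.19)."""
    p = P / P.sum()                                  # Eq. 7.6: normalize
    L = p.shape[0]
    l1, l2 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    energy = np.sum(p ** 2)                          # Eq. 7.11
    # Eq. 7.12: summing (l1 - l2)^2 p(l1, l2) over all pairs is
    # equivalent to summing k^2 over the |l1 - l2| = k groups.
    contrast = np.sum((l1 - l2) ** 2 * p)
    nz = p > 0                                       # avoid log2(0)
    entropy = -np.sum(p[nz] * np.log2(p[nz]))        # Eq. 7.19
    return energy, contrast, entropy
```

For a perfectly diagonal GCM (pixels always equal to their neighbors), the contrast is zero and the energy is maximal for the given number of occupied entries.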
Parkkinen et al. [449] studied the problem of detecting periodicity in texture using statistical measures of association and agreement computed from GCMs. If the displacement and orientation (d, \theta) of a GCM match the same parameters of the texture, the GCM will have large values for the elements along the diagonal corresponding to the gray levels present in the texture elements. A measure of association is the \chi^2 statistic, which may be expressed using the notation above as

\chi^2 = \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} \frac{[p(l_1, l_2) - p_x(l_1) \, p_y(l_2)]^2}{p_x(l_1) \, p_y(l_2)}.    (7.26)

The measure may be normalized by dividing by L, and is expected to possess a high value for an image with periodic texture under the condition described above.

Parkkinen et al. [449] discussed some limitations of the \chi^2 statistic in the analysis of periodic texture, and proposed a measure of agreement given by

\kappa = \frac{P_o - P_c}{1 - P_c},    (7.27)

where

P_o = \sum_{l=0}^{L-1} p(l, l)    (7.28)

and

P_c = \sum_{l=0}^{L-1} p_x(l) \, p_y(l).    (7.29)

The measure \kappa has its maximal value of unity when the GCM is a diagonal matrix, which indicates perfect agreement or periodic texture.
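The agreement measure of Equations 7.27-7.29 can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def gcm_kappa(P):
    """Parkkinen et al.'s agreement measure (Eqs. 7.27-7.29)
    from an unnormalized GCM P."""
    p = P / P.sum()
    Po = np.trace(p)                               # Eq. 7.28: diagonal mass
    Pc = np.sum(p.sum(axis=1) * p.sum(axis=0))     # Eq. 7.29: chance agreement
    return (Po - Pc) / (1 - Pc)                    # Eq. 7.27
```

A diagonal GCM yields kappa = 1 (perfect agreement), whereas a GCM equal to the product of its marginals yields kappa = 0.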
Haralick's measures have been applied for the analysis of texture in several types of images, including medical images. Chan et al. [450] found the three features of correlation, difference entropy, and entropy to perform better than other combinations of one to eight features selected in a specific sequence. Sahiner et al. [428, 451] defined a "rubber-band straightening transform" (RBST) to map ribbons around breast masses in mammograms into rectangular arrays (see Figure 7.26), and then computed Haralick's measures of texture. Mudigonda et al. [165, 275] computed Haralick's measures using adaptive ribbons of pixels extracted around mammographic masses, and used the features to distinguish malignant tumors from benign masses; details of this work are provided in Sections 7.9 and 8.8. See Section 12.12 for a discussion on the application of texture measures for content-based retrieval and classification of mammographic masses.
7.4 Laws' Measures of Texture Energy

Laws [452] proposed a method for classifying each pixel in an image based upon measures of local "texture energy". The texture energy features represent the amounts of variation within a sliding window applied to several filtered versions of the given image. The filters are specified as separable 1D arrays for convolution with the image being processed.

The basic operators in Laws' method are the following:

L3 = [ 1  2  1 ],
E3 = [ -1  0  1 ],    (7.30)
S3 = [ -1  2  -1 ].

The operators L3, E3, and S3 perform center-weighted averaging, symmetric first differencing (edge detection), and second differencing (spot detection), respectively [453]. Nine 3 × 3 masks may be generated by multiplying the transposes of the three operators (represented as vectors) with their direct versions. The result of L3^T E3 gives one of the 3 × 3 Sobel masks.
Operators of length five pixels may be generated by convolving the L3, E3, and S3 operators in various combinations. Of the several filters designed by Laws, the following five were said to provide good performance [452, 453]:

L5 = L3 * L3 = [ 1  4  6  4  1 ],
E5 = L3 * E3 = [ -1  -2  0  2  1 ],
S5 = -E3 * E3 = [ -1  0  2  0  -1 ],    (7.31)
R5 = S3 * S3 = [ 1  -4  6  -4  1 ],
W5 = -E3 * S3 = [ -1  2  0  -2  1 ],

where * represents 1D convolution.

The operators listed above perform the detection of the following types of features: L5: local average; E5: edges; S5: spots; R5: ripples; and W5: waves [453]. In the analysis of texture in 2D images, the 1D convolution operators given above are used in pairs to achieve various 2D convolution operators (for example, L5L5 = L5^T L5 and L5E5 = L5^T E5), each of which may be represented as a 5 × 5 array or matrix. Following the application of the selected filters, texture energy measures are derived from each filtered image by computing the sum of the absolute values in a 15 × 15 sliding window.

All of the filters listed above, except L5, have zero mean, and hence the texture energy measures derived from the filtered images represent measures of local deviation or variation. The result of the L5 filter may be used for normalization with respect to luminance and contrast.
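The filter-then-window procedure can be sketched as follows; for simplicity, this sketch uses plain correlation rather than flipped convolution (the two differ at most in sign for Laws' symmetric and antisymmetric masks), and the function names are illustrative:

```python
import numpy as np

L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)

def conv2_valid(img, mask):
    """Plain 'valid'-mode 2D correlation with a small square mask."""
    k = mask.shape[0]
    out = np.zeros((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * mask)
    return out

def laws_energy(img, va, vb, win=15):
    """Filter with the separable mask va^T vb, then sum the absolute
    values over a win x win sliding window (texture energy)."""
    filtered = conv2_valid(img, np.outer(va, vb))
    return conv2_valid(np.abs(filtered), np.ones((win, win)))
```

Because E5E5 has zero mean, a constant image yields zero texture energy, whereas L5L5 responds to the local brightness.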
The use of a large sliding window to smooth the filtered images could lead to the loss of boundaries across regions with different texture. Hsiao and Sawchuk [454] applied a modified LLMMSE filter so as to derive Laws' texture energy measures while preserving the edges of regions, and applied the results for pattern classification.
Example: The results of the application of the operators L5L5, E5E5, and W5W5 to the 128 × 128 Lenna image in Figure 10.5 (a) are shown in Figure 7.12 (a)-(c). Also shown in parts (d)-(f) of the figure are the sums of the absolute values of the filtered images using a 9 × 9 moving window. It is evident that the L5L5 filter results in a measure of local brightness. Careful inspection of the results of the E5E5 and W5W5 filters shows that they have high values for different regions of the original image possessing different types of texture (edges and waves, respectively). Feature vectors composed of the values of various Laws' operators for each pixel may be used for classifying the image into texture categories on a pixel-by-pixel basis. The results may be used for texture segmentation and recognition.

In an example provided by Laws [452] (see also Pietikäinen et al. [453]), the texture energy measures have been shown to be useful in the segmentation of an image composed of patches with different texture. Miller and Astley [372, 455] used features of mammograms based upon the R5R5 operator, and obtained an accuracy of 80.3% in the segmentation of the nonfat (glandular) regions in mammograms. See Section 8.8 for a discussion on the application of Laws' and other methods of texture analysis for the detection of breast masses in mammograms.
7.5 Fractal Analysis

Fractals are defined in several different ways, the most common of which is that of a pattern composed of repeated occurrences of a basic unit at multiple scales of detail in a certain order of generation; this definition includes the notion of "self-similarity" or nested recurrence of the same motif at smaller and smaller scales (see Section 11.9 for a discussion on self-similar, space-filling curves). The relationship to texture is evident in the property of repeated occurrence of a motif. Fractal patterns occur abundantly in nature as well as in biological and physiological systems [456, 457, 458, 459, 460, 461, 462]: the self-replicating patterns of the complex leaf structures of ferns (see Figure 7.13), the ramifications of the bronchial tree in the lung (see Figure 7.1), and the branching and spreading (anastomotic) patterns of the arteries in the heart (see Figure 9.20), to name a few. Fractals and the notion of chaos are related to the area of nonlinear dynamic systems [456, 463], and have found several applications in biomedical signal and image analysis.
FIGURE 7.12
Results of convolution of the Lenna test image of size 128 × 128 pixels [see Figure 10.5 (a)] using the following 5 × 5 Laws' operators: (a) L5L5, (b) E5E5, and (c) W5W5. (d)-(f) were obtained by summing the absolute values of the results in (a)-(c), respectively, in a 9 × 9 moving window, and represent three measures of texture energy. The image in (c) was obtained by mapping the range [-200, 200] out of the full range of [-1338, 1184] to [0, 255].
FIGURE 7.13
The leaf of a fern with a fractal pattern.
7.5.1 Fractal dimension

Whereas the self-similar aspect of fractals is apparent in the examples mentioned above, it is not so obvious in other patterns such as clouds, coastlines, and mammograms, which are also said to have fractal-like characteristics. In such cases, the "fractal nature" perceived is more easily related to the notion of complexity in the dimensionality of the object, leading to the concept of the fractal dimension. If one were to use a large ruler to measure the length of a coastline, the minor details present in the border having small-scale variations would be skipped, and a certain length would be derived. If a smaller ruler were to be used, smaller details would get measured, and the total length that is measured would increase (between the same end points as before). This relationship may be expressed as [457]

l(\epsilon) = l_0 \, \epsilon^{1 - d_f},    (7.32)

where l(\epsilon) is the length measured with \epsilon as the measuring unit (the size of the ruler), d_f is the fractal dimension, and l_0 is a constant. Fractal patterns exhibit a linear relationship between the log of the measured length and the log of the measuring unit:

\log[l(\epsilon)] = \log[l_0] + (1 - d_f) \log[\epsilon];    (7.33)

the slope of this relationship is related to the fractal dimension d_f of the pattern. This method is known as the caliper method to estimate the fractal dimension of a curve. It is obvious that d_f = 1 for a straight line.

Fractal dimension is a measure that quantifies how the given pattern fills space. The fractal dimension of a straight line is unity, that of a circle or a 2D perfectly planar (sheet-like) object is two, and that of a sphere is three. As the irregularity or complexity of a pattern increases, its fractal dimension increases up to its own Euclidean dimension d_E plus one. The fractal dimension of a jagged, rugged, convoluted, kinky, or crinkly curve will be greater than unity, and reaches the value of two as its complexity increases. The fractal dimension of a rough 2D surface will be greater than two, and approaches three as the surface roughness increases. In this sense, fractal dimension may be used as a measure of the roughness of texture in images.

Several methods have been proposed to estimate the fractal dimension of patterns [464, 465, 466, 467, 468, 469]. Among the methods described by Schepers et al. [467] for the estimation of the fractal dimension of 1D signals is that of computing the relative dispersion RD(\epsilon), defined as the ratio of the standard deviation to the mean, using varying bin size or number of samples of the signal \epsilon. For a fractal signal, the expected variation of RD(\epsilon) is

RD(\epsilon) = RD(\epsilon_0) \left( \frac{\epsilon}{\epsilon_0} \right)^{H - 1},    (7.34)

where \epsilon_0 is a reference value for the bin size, and H is the Hurst coefficient, which is related to the fractal dimension as

d_f = d_E + 1 - H.    (7.35)

(Note: d_E = 1 for 1D signals, 2 for 2D images, etc.) The value of H, and hence d_f, may be estimated by measuring the slope of the straight-line approximation to the relationship between \log[RD(\epsilon)] and \log(\epsilon).
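The relative-dispersion estimate of H can be sketched as follows; the function name, the set of bin sizes, and the use of a least-squares line fit are illustrative choices, and the signal is assumed to have a nonzero mean (RD is undefined otherwise):

```python
import numpy as np

def hurst_rd(signal, bin_sizes=(1, 2, 4, 8, 16)):
    """Estimate the Hurst coefficient H from the slope of
    log RD(eps) versus log eps; d_f = d_E + 1 - H (d_E = 1 here)."""
    log_rd, log_eps = [], []
    for eps in bin_sizes:
        n = len(signal) // eps
        binned = signal[:n * eps].reshape(n, eps).mean(axis=1)
        log_rd.append(np.log(binned.std() / binned.mean()))
        log_eps.append(np.log(eps))
    slope = np.polyfit(log_eps, log_rd, 1)[0]   # slope = H - 1 (Eq. 7.34)
    return slope + 1.0
```

For uncorrelated (white) noise, RD scales roughly as eps^(-1/2), so the estimate is close to H = 0.5 and d_f is close to 1.5.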
7.5.2 Fractional Brownian motion model

Fractal signals may be modeled in terms of fractional Brownian motion [466, 467, 470]. The expectation of the differences between the values of such a signal at a position x and another at x + \Delta x follows the relationship

E[\,|f(x + \Delta x) - f(x)|\,] \propto |\Delta x|^H.    (7.36)

The slope of a plot of the averaged difference as above versus \Delta x (on a log-log scale) may be used to estimate H and the fractal dimension.

Chen et al. [470] applied fractal analysis for the enhancement and classification of ultrasonographic images of the liver. Burdett et al. [471] derived the fractal dimension of 2D ROIs of mammograms with masses by using the expression in Equation 7.36. Benign masses, due to their smooth and homogeneous texture, were found to have low fractal dimensions of about 2.38, whereas malignant tumors, due to their rough and heterogeneous texture, had higher fractal dimensions of about 2.56.

The PSD of a fractional Brownian motion signal, \Phi(\omega), is expected to follow the so-called power law as

\Phi(\omega) \propto \frac{1}{|\omega|^{2H + 1}}.    (7.37)

The derivative of a signal generated by a fractional Brownian motion model is known as a fractional Gaussian noise signal; the exponent in the power-law relationship for such a signal is changed to (2H - 1).
7.5.3 Fractal analysis of texture

Based upon a fractional Brownian motion model, Wu et al. [472] defined an averaged intensity-difference measure id(k) for various values of the displacement or distance parameter k as

id(k) = \frac{1}{2N(N - k - 1)} \left[ \sum_{m=0}^{N-1} \sum_{n=0}^{N-k-1} |f(m, n) - f(m, n+k)| + \sum_{m=0}^{N-k-1} \sum_{n=0}^{N-1} |f(m, n) - f(m+k, n)| \right].    (7.38)

The slope of a plot of \log[id(k)] versus \log[k] was used to estimate H and the fractal dimension. Wu et al. applied multiresolution fractal analysis as well as GCM features, Fourier spectral features, gray-level difference statistics, and Laws' texture energy measures for the classification of ultrasonographic images of the liver as normal, hepatoma, or cirrhosis; classification accuracies of 88.9%, 83.3%, 80.0%, 74.4%, and 71.1%, respectively, were obtained. In a related study, Lee et al. [473] derived features based upon fractal analysis, including the application of multiresolution wavelet transforms. Classification accuracies of 96.7% in distinguishing between normal and abnormal liver images, and 93.6% in discriminating between cirrhosis and hepatoma, were obtained.
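The averaged intensity-difference measure of Equation 7.38 can be sketched as follows for an N × N image and a displacement k ≥ 1; the function name is illustrative:

```python
import numpy as np

def intensity_difference(f, k):
    """Averaged intensity-difference measure id(k) of Eq. 7.38."""
    f = f.astype(float)
    N = f.shape[0]
    horiz = np.sum(np.abs(f[:, :-k] - f[:, k:]))   # |f(m,n) - f(m,n+k)|
    vert = np.sum(np.abs(f[:-k, :] - f[k:, :]))    # |f(m,n) - f(m+k,n)|
    return (horiz + vert) / (2.0 * N * (N - k - 1))
```

Evaluating id(k) for several values of k and fitting a line to log[id(k)] versus log[k] gives an estimate of H, as described above.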
Byng et al. 448] (see also Peleg et al. 464], Yae et al. 474], and Caldwell
et al. 475]) describe a surface-area measure to represent the complexity of
texture in an image by interpreting the gray level as the height of a function
of space see Figure 7.14. In a perfectly uniform image of size N  N pixels,
with each pixel being of size    units of area, the surface area would be
equal to (N)2 . When adjacent pixels are of unequal value, more surface area
of the blocks representing the pixels will be exposed, as shown in Figure 7.14.
The total surface area for the image may be calculated as

;2 NX
NX ;2
A() = f 2 +
m=0 n=0
  jf (m n) ; f (m n + 1)j + jf (m n) ; f (m + 1 n)j ] g (7.39)

where f (m n) is the 2D image expressed as a function of the pixel size . The


method is analogous to the popular box-counting method [460, 476, 469]. In
order to estimate the fractal dimension of the image, we could derive several
smoothed and downsampled versions of the given image (representing various
scales ε), and estimate the slope of the plot of log[A(ε)] versus log[ε]; the
fractal dimension is given as two minus the slope. Smoothing and downsampling
may be achieved simply by averaging pixels in blocks of 2 × 2, 3 × 3,
4 × 4, etc., and replacing the blocks by a single pixel with the corresponding
average. A perfectly uniform image would demonstrate no change in its
area, and have a fractal dimension of two; images with rough texture would
have increasing values of the fractal dimension, approaching three. Yaffe et
al. [474] obtained fractal dimension values in the range of [2.23, 2.54] with 60
mammograms. Byng et al. [448] demonstrated the usefulness of the fractal
dimension as a measure of increased fibroglandular density in the breast, and
related it to the risk of development of breast cancer. Fractal dimension was
found to complement histogram skewness (see Section 7.3) as an indicator of
breast cancer risk.
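The procedure just described can be sketched as follows; the three-scale schedule, the square test images, and the block-averaging details are illustrative assumptions rather than the settings used by Byng et al.

```python
import numpy as np

def surface_area(f, eps=1.0):
    """Exposed surface area of the gray-level relief, as in Equation 7.39."""
    g = f[:-1, :-1]
    a = np.abs(g - f[:-1, 1:])    # |f(m, n) - f(m, n+1)|
    b = np.abs(g - f[1:, :-1])    # |f(m, n) - f(m+1, n)|
    return np.sum(eps**2 + eps * (a + b))

def fractal_dimension(f, scales=(1, 2, 4)):
    """Estimate FD as two minus the slope of log[A(eps)] versus log[eps]."""
    eps_list, areas = [], []
    for s in scales:
        n = (f.shape[0] // s) * s
        # smooth and downsample: average s x s blocks into single pixels
        g = f[:n, :n].reshape(n // s, s, n // s, s).mean(axis=(1, 3))
        eps_list.append(float(s))
        areas.append(surface_area(g, eps=float(s)))
    slope, _ = np.polyfit(np.log(eps_list), np.log(areas), 1)
    return 2.0 - slope

rng = np.random.default_rng(0)
flat = np.full((64, 64), 100.0)                # perfectly uniform image
rough = flat + 50.0 * rng.random((64, 64))     # rough (noisy) texture
```

The uniform image yields an estimate close to two (the sum in Equation 7.39 omits the last row and column, so the value is not exactly two on a finite image), while the noisy image yields a larger estimate; pure white noise is an extreme case for which this crude three-point fit can even overshoot the theoretical ceiling of three.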
Analysis of Texture 611
FIGURE 7.14
Computation of the exposed surface area for a pixel f(m, n) with respect to
its neighboring pixels f(m, n+1) and f(m+1, n). A pixel at (m, n) is viewed
as a box (or building) with base area ε × ε and height equal to the gray level
f(m, n). The total exposed surface area for the pixel f(m, n), with respect to
its neighboring pixels at (m, n+1) and (m+1, n), is the sum of the areas of
the rectangles ABCD, CBEF, and DCGH [448].

7.5.4 Applications of fractal analysis

Chaudhuri and Sarkar [477] proposed a modified box-counting method to estimate
fractal measures of texture from a given image, its horizontally and
vertically smoothed versions, as well as high- and low-gray-valued versions
derived by thresholding operations. The features were applied for the segmentation
of multitextured images.
Zheng and Chan [478] used the fractal dimension of sections of mammograms
to select areas with rough texture for further processing toward the
detection of tumors. Pohlman et al. [407] derived the fractal dimension of
1D signatures of radial distance versus angle of boundaries of mammographic
masses; the measure provided an average accuracy of 81% in discriminating
between benign masses and malignant tumors. It should be noted that the
function of radial distance versus angle could be multivalued for spiculated
and irregular contours, due to the fact that a radial line may cross the tumor
contour more than once (see Section 6.1.1). A measure related to this
characteristic was found to give an accuracy of 93% in discriminating between
benign masses and malignant tumors [407].
Iftekharuddin et al. [476] proposed a modified box-counting method to estimate
the fractal dimension of images, and applied the method to brain MR
images. Their results indicated the potential of the methods in the detection
of brain tumors.
Lundahl et al. [466] estimated the values of H from scan lines of X-ray images
of the calcaneus (heel) bone, and showed that the value was decreased by
injury and osteoporosis, indicating reduced complexity of structure (increased
gaps) as compared to normal bone. Saparin et al. [479], using symbol dynamics
and measures of complexity, found that the complexity of the trabecular
structure in bone declines more rapidly than bone density during the loss of
bone in osteoporosis. Jennane et al. [480] applied fractal analysis to X-ray
CT images of trabecular bone specimens extracted from the radius. It was
found that the H value decreased with trabecular bone loss and osteoporosis.
Samarabandhu et al. [481] proposed a morphological filtering approach to derive
the fractal dimension, and indicated that the features they derived could
serve as robust measures of the trabecular texture in bone. (For a discussion
on the application of fractal analysis to bone images, see Geraets and van der
Stelt [469].)
Sedivy et al. [482] showed that the fractal dimension of atypical nuclei in
dysplastic lesions of the cervix uteri increased as the degree of dysplasia increased.
They indicated that fractal dimension could quantify the irregularity
and complexity of the outlines of nuclei, and facilitate objective nuclear grading.
Esgiar et al. [483] found the fractal dimension to complement the GCM
texture features of entropy and correlation in the classification of tissue samples
from the colon: the inclusion of fractal dimension increased the sensitivity
from 90% to 95%, and the specificity from 86% to 93%. Penn and Loew [484]
discussed the limitations of the box-counting and PSD-based methods in fractal
analysis, and proposed fractal interpolation function models to estimate
the fractal dimension. The method was shown to provide improved results in
the separation of normal and sickle-cell red blood cells.
Lee et al. [429] applied shape analysis for the classification of cutaneous
melanocytic lesions based upon their contours. An irregularity index related
to local protrusions and indentations was observed to have a higher correlation
with clinical assessment of the lesions than compactness (see Section 6.2.1)
and fractal dimension.

7.6 Fourier-domain Analysis of Texture

As is evident from the illustrations in Figure 7.6, the Fourier spectrum of an
image with random texture contains the spectral characteristics of the spot
involved in its generation (according to the spot-noise model shown in Figure 7.4).
The effects of multiplication with the spectrum of the random-noise
field (which is essentially, and on the average, a constant) may be removed by
smoothing operations. Thus, the important characteristics of the texture are
readily available in the Fourier spectrum.
On the other hand, the Fourier spectrum of an image with periodic texture
includes not only the spectral characteristics of the spot, but also the effects of
multiplication with the spectrum of the impulse field involved in its generation.
The Fourier spectrum of a train of impulses in 1D is a discrete spectrum with
a constant value at the fundamental frequency (the inverse of the period)
and its harmonics [1, 2]. Correspondingly, in 2D, the Fourier spectrum of a
periodic field of impulses is a field of impulses, with high values only at the
fundamental frequency of repetition of the impulses in the image domain and
integral multiples thereof. Multiplication of the Fourier spectrum of the spot
with the spectrum of the impulse field will cause significant modulation of
the intensities in the former, leading to bright regions at regular intervals.
With real-life images, the effects of windowing or finite data, as well as of
quasi-periodicity, will lead to smearing of the impulses in the spectrum of the
impulse field involved; regardless, the spectrum of the image may be expected
to demonstrate a field of bright regions at regular intervals. The information
related to the spectra of the spot and the impulse field components may be
derived from the spectrum of the textured image by averaging in the polar-coordinate
axes as follows [485].
Let F(r, t) be the polar-coordinate representation of the Fourier spectrum
of the given image; in terms of the Cartesian frequency coordinates (u, v), we
have r = \sqrt{u^2 + v^2} and t = \arctan(v/u). Derive the projection functions in r
and t by integrating F(r, t) over the other coordinate as

F(r) = \int_{t=0}^{\pi} F(r, t) \, dt,  (7.40)

and

F(t) = \int_{r=0}^{r_{\max}} F(r, t) \, dr.  (7.41)

The averaging effect of the integration above (or summation in the discrete
case) leads to improved visualization of the spectral characteristics of periodic
texture. Quantitative features may be derived from F(r) and F(t) [or directly
from F(r, t)] for pattern classification purposes.
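A discrete version of Equations 7.40 and 7.41 can be sketched as follows, using nearest-neighbour resampling of the magnitude spectrum onto a polar grid; the grid sizes and the grating test image are illustrative assumptions.

```python
import numpy as np

def polar_projections(image, n_r=64, n_t=180):
    """Sample the magnitude spectrum on a polar grid (upper half only) and
    integrate out each coordinate: F(r) as in Eq. 7.40, F(t) as in Eq. 7.41."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    r = np.linspace(1.0, min(cy, cx) - 1.0, n_r)
    t_deg = np.linspace(0.0, 179.0, n_t)
    t = np.deg2rad(t_deg)
    u = np.rint(cx + np.outer(r, np.cos(t))).astype(int)   # column index
    v = np.rint(cy - np.outer(r, np.sin(t))).astype(int)   # row index (y up)
    F_rt = F[v, u]                  # nearest-neighbour polar resampling
    return r, F_rt.sum(axis=1), t_deg, F_rt.sum(axis=0)

# A grating with 8 cycles across the image concentrates its spectral energy
# at radius 8 along the horizontal frequency axis, so F(r) should peak near
# r = 8 and F(t) near t = 0 degrees.
x = np.arange(128)
img = np.tile(np.cos(2.0 * np.pi * 8.0 * x / 128.0), (128, 1))
r, Fr, t_deg, Ft = polar_projections(img)
```

Summation over the sampled polar grid stands in for the integrals; a production implementation would interpolate rather than take nearest neighbours.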
Example: Figure 7.15 shows the components involved in the generation of
an image with periodic placement of a circular spot, and the related Fourier
spectra. The spectra clearly demonstrate the effects of periodicity in the
impulse field and the texture. It is evident that the spectrum of the texture
also includes information related to the spectrum of the spot.
The spectrum of the texture in Figure 7.15 (f) is shown in polar coordinates
in Figure 7.16. The projection functions derived by summing the spectrum in
the radial and angular dimensions are shown in Figure 7.17. The projection
functions demonstrate the effect of periodicity in the texture.
The spectrum of the image of a fly's eye in Figure 7.3 (b) is shown in
Figure 7.18 in Cartesian and polar coordinates; the corresponding projection
functions are shown in Figure 7.19. A similar set of results is shown in Figures
7.20 and 7.21 for the snake-skin image in Figure 7.3 (c). The spectra show
regions of high intensity at quasi-periodic intervals in spite of the fact that
the original images are only approximately periodic, and the texture elements
vary considerably in size and orientation over the scope of the images.
FIGURE 7.15
Fourier spectral characteristics of periodic texture generated using the spot-noise
model in Figure 7.4. (a) Periodic impulse field. (b) Circular spot.
(c) Periodic texture generated by convolving the spot in (b) with the impulse
field in (a). (d) – (f) Log-magnitude Fourier spectra of the images in (a) –
(c), respectively.
FIGURE 7.16
The spectrum in Figure 7.15 (f) converted to polar coordinates; only the upper
half of the spectrum was mapped to polar coordinates.
Jernigan and D'Astous [486] used the normalized PSD values within selected
frequency bands as PDFs, and computed entropy values. It was expected that
structured texture would lead to low entropy (due to spectral bands with
concentrated energy) and random texture would lead to high entropy values
(due to a uniform distribution of spectral energy). Their results indicated that
the entropy values could provide discrimination between texture categories
that was comparable to that provided by spectral energy and GCM-based
measures. In addition to entropy, the locations and values of the spectral
peaks may also be used as features. Liu and Jernigan [487] defined 28 measures
in the Fourier spectral domain, including measures related to the frequency
coordinates and relative orientation of the first and second spectral peaks; the
percentages of energy and the moments of inertia of the normalized spectrum
in the first and second quadrants; the Laplacian of the magnitude and phase at
the first and second spectral peaks; and measures of isotropy and circularity of
the spectrum. Their results indicated that the spectral measures were effective
in discriminating between various types of texture, and also insensitive to
additive noise.
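The spectral-entropy idea can be sketched as follows; for simplicity, the whole normalized PSD is treated as one PDF rather than the band-limited sections used by Jernigan and D'Astous.

```python
import numpy as np

def spectral_entropy(image):
    """Entropy (bits) of the normalized PSD treated as a PDF: concentrated
    spectra (structured texture) give low values, flat spectra high values."""
    psd = np.abs(np.fft.fft2(image)) ** 2
    psd[0, 0] = 0.0               # discard the DC term (mean gray level)
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
x = np.arange(64)
structured = np.tile(np.cos(2 * np.pi * 4 * x / 64), (64, 1))  # periodic texture
random_tex = rng.standard_normal((64, 64))                     # random texture
```

The periodic texture concentrates its energy in a symmetric pair of spectral bins and therefore yields a much lower entropy than the random field.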
Laine and Fan [488] proposed the use of wavelet packet frames or tree-structured
filter banks for the extraction of features from textured images
in the frequency domain. The features were used for the segmentation of
multitextured images.
FIGURE 7.17
Projection functions in (a) the radial coordinate r, and (b) the angle coordi-
nate t obtained by integrating (summing) the spectrum in Figure 7.16 in the
other coordinate.
FIGURE 7.18
Fourier spectral characteristics of the quasi-periodic texture of the fly's eye
image in Figure 7.3 (b): (a) The Fourier spectrum in Cartesian coordinates
(u, v). (b) The upper half of the spectrum in (a) mapped to polar coordinates.
FIGURE 7.19
Projection functions in (a) the radial coordinate r, and (b) the angle coordi-
nate t obtained by integrating (summing) the spectrum in Figure 7.18 (b) in
the other coordinate.
FIGURE 7.20
Fourier spectral characteristics of the ordered texture of the snake-skin image
in Figure 7.3 (c): (a) The Fourier spectrum in Cartesian coordinates (u, v).
(b) The upper half of the spectrum in (a) mapped to polar coordinates.
FIGURE 7.21
Projection functions in (a) the radial coordinate r, and (b) the angle coordi-
nate t obtained by integrating (summing) the spectrum in Figure 7.20 (b) in
the other coordinate.
McLean [489] applied vector quantization in the transform domain, and
treated the method as a generalized template matching scheme, for the coding
and classification of texture. The method yielded better texture classification
accuracy than GCM-based features.
Bovik [490] discussed multichannel narrow-band filtering and modeling of
texture. Highly granular and oriented texture may be expected to present
spatio-spectral regions of concentrated energy. Gabor filters may then be
used to filter, segment, and analyze such patterns. See Sections 5.10.2, 7.7,
8.4, 8.9, and 8.10 for further discussion on related topics.

7.7 Segmentation and Structural Analysis of Texture

Many methods have been reported in the literature for the analysis of texture,
which may be broadly classified as statistical or structural methods [441, 439].
Most of the commonly used methods for texture analysis are based
upon statistical characterization, such as GCMs (see Section 7.3.1) and ACFs;
Fourier spectrum analysis, described in Section 7.6, may be considered to be
equivalent to analysis based upon the ACF. Statistical methods are suitable
for the analysis of random or fine texture with no large-scale motifs; for other
types of texture and for multitextured images, structural methods could be
more appropriate. Structural analysis of textured images requires some type
of segmentation of the given image into its distinct or basic components.
Texture elements (or textons, as called by Julesz and Bergen [438]) play
an important role in preattentive vision and texture perception. Ordered
texture may be modeled as being composed of repeated placement of a basic
motif or texton over the image field in accordance with a placement rule;
see Section 7.2. The placement rule may be expressed as a field of impulses
indicating the locations of the repeated textons; consequently, the textured
image is given by the convolution of the ordered or (quasi-) periodic impulse
field with the texton. Although this model does not directly permit scale
and orientation differences between the various occurrences of the texton,
such "jitter" could be introduced separately to synthesize realistic textured
images.
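The convolutional model above can be sketched directly; the circular spot, the 16-pixel period, and the use of wrap-around (circular) convolution are illustrative choices.

```python
import numpy as np

def periodic_texture(size=64, period=16, spot_radius=3):
    """Synthesize ordered texture by convolving a periodic impulse field
    with a circular spot (texton), as in the spot-noise model of Figure 7.4."""
    impulses = np.zeros((size, size))
    impulses[::period, ::period] = 1.0
    y, x = np.mgrid[-spot_radius:spot_radius + 1, -spot_radius:spot_radius + 1]
    spot = (x**2 + y**2 <= spot_radius**2).astype(float)
    # circular convolution via the FFT keeps the synthesized field periodic
    tex = np.real(np.fft.ifft2(np.fft.fft2(impulses) *
                               np.fft.fft2(spot, (size, size))))
    return impulses, spot, tex

impulses, spot, tex = periodic_texture()
```

Circular convolution is used deliberately: the result is exactly periodic in both directions, which is the idealized texture that the Fourier-domain analysis of Section 7.6 assumes.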
Vilnrotter et al. [491] proposed a system to describe natural textures in
terms of individual texture elements or primitives and their spatial relationships
or arrangement. The main steps of the system include the generation of
1D descriptors of texture elements from edge repetition data, the extraction
of elements that correspond to the preceding description, the generation of
2D descriptors of each texture primitive type, and the computation of spatial
arrangements or placement rules (when the texture is homogeneous and regular).
The method was used to classify several types of texture including floor
grating, raffia, brick, straw, and wool. The method does not extract a single
version of a texture element or primitive; instead, all possible repeated structures
are extracted. The analysis of a raffia pattern, for example, resulted in
the extraction of three primitives.
He and Wang [492] defined "texture units" in terms of the 8-connected
neighbors of each pixel. The values in each 3 × 3 unit were reduced to the range
{0, 1, 2}, with the value of unity indicating that the value of the neighboring
pixel was within a predefined range about the central pixel value; the values
of 0 and 2 were used to indicate that the pixel value was lower or higher than
the specified range, respectively. The "texture spectrum" was defined as the
histogram or spectrum of the frequency of occurrence of all possible texture
units in the image; it should be noted that the texture spectrum as above is
not based upon a linear orthogonal transform, such as the Fourier transform.
Methods were proposed to characterize as well as filter images based upon the
texture spectrum.
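The texture-unit coding can be sketched as follows; the tolerance delta implements the "predefined range" described above, and the base-3 weighting order of the eight neighbors is one of the several orderings used in the literature.

```python
import numpy as np

def texture_spectrum(f, delta=1):
    """Code each 3x3 neighbourhood as a base-3 texture unit number (0..6560)
    and return the histogram of unit numbers (the texture spectrum)."""
    f = f.astype(int)
    center = f[1:-1, 1:-1]
    # the eight neighbours, in a fixed clockwise order
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    unit = np.zeros_like(center)
    for i, (dy, dx) in enumerate(shifts):
        nb = f[1 + dy:f.shape[0] - 1 + dy, 1 + dx:f.shape[1] - 1 + dx]
        # 0: below the range, 1: within it, 2: above it
        e = np.where(nb < center - delta, 0,
                     np.where(nb > center + delta, 2, 1))
        unit += e * 3**i
    return np.bincount(unit.ravel(), minlength=3**8)

# In a uniform image every neighbour matches its centre, so every unit is
# all-ones in base 3, and the spectrum has a single occupied bin.
flat = np.full((16, 16), 7)
hist = texture_spectrum(flat)
```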
Wang et al. [493] proposed a thresholding scheme for the extraction of
texture primitives. It was assumed that the primitives would appear as regions
of connected pixels demonstrating good contrast with their background. The
primitives were characterized in terms of the statistics of their GCMs and
shape attributes; the textured image could then be described in terms of
its primitives and placement rules. Tomita et al. [494] proposed a similar
approach based upon the extraction of texture elements, assumed to be regions
of homogeneous gray levels, via segmentation. The centroids of the texture
elements were used to define detailed placement rules.
The problem of segmentation of complex images containing regions of different
types of texture has been addressed by several researchers. Gabor
functions have been used by Turner [495] and Bovik et al. [496] for texture
analysis and segmentation. Gabor functions may be used to design filters with
tunable orientation, radial frequency bandwidth, and center frequencies that
can achieve jointly optimal resolution in the space and frequency domains.
Gabor filters are efficient in detecting discontinuities in texture phase, and
are useful in texture segmentation. Porat and Zeevi [497] developed a method
to describe texture primitives in terms of Gabor elementary functions. See
Sections 5.10.2, 8.4, 8.9, and 8.10 for further discussion on Gabor filters.
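A minimal Gabor channel can be sketched as follows; the even (cosine) kernel form, its size, and its bandwidth are illustrative assumptions. Oriented texture concentrates its energy in the channel whose orientation and frequency match.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=4.0, size=15):
    """Real (even) Gabor kernel: a Gaussian envelope modulating a cosine
    wave of spatial frequency `freq` (cycles/pixel) along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * xr)
    return g - g.mean()          # zero mean, so flat regions give zero response

def channel_energy(image, kernel):
    """Mean squared response of a wrap-around convolution with the kernel."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kernel, image.shape)))
    return float(np.mean(resp**2))

# Vertical stripes respond strongly to the matched 0-degree channel and
# weakly to the orthogonal 90-degree channel.
x = np.arange(64)
stripes = np.tile(np.cos(2 * np.pi * 8 * x / 64), (64, 1))
e0 = channel_energy(stripes, gabor_kernel(8 / 64, 0.0))
e90 = channel_energy(stripes, gabor_kernel(8 / 64, np.pi / 2))
```

A segmentation scheme in the spirit of the works cited above would compute such channel energies locally (in windows) over a bank of orientations and frequencies, and cluster the resulting feature vectors.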
Reed and Wechsler [498] described approaches to texture analysis and segmentation
via the use of joint spatial and frequency-domain representations in
the form of spectrograms, that is, functions of (x, y, u, v) obtained by the application
of the Fourier transform in a moving window of the image, a bank of
Gabor filters, DoG functions, and Wigner distributions. Reed et al. [499] described
a texture segmentation method using the pseudo-Wigner distribution
and a diffusion region-growing method. Jain and Farrokhnia [500] presented
a method for texture analysis and segmentation based upon the application
of multichannel Gabor filters. The results of the filter bank were processed
in such a manner as to detect "blobs" in the given image; texture discrimination
was performed by analyzing the attributes of the blobs detected in
different regions of the image. Other related methods for texture segmentation
include wavelet frames for the characterization of texture properties at
multiple scales [501], and circular Mellin features for rotation-invariant and
scale-invariant texture analysis [502].
Tardif and Zaccarin [503] proposed a multiscale autoregressive (AR) model
to analyze multifeatured images. The prediction error was used to segment
a given image into different textured parts. Unser and Eden [504] proposed
a multiresolution feature extraction method for texture segmentation. The
method includes the use of a local linear transformation that is equivalent to
processing the given image with a bank of FIR filters. (See Section 8.9.4 for
further discussion on related topics.)
If the texton and the placement rule (impulse field) can be obtained from
a given image with ordered texture, the most important characteristics of
the image will have been determined. In particular, if a single texton or
motif is extracted from the image, further analysis of its shape, morphology,
spectral content, and internal details becomes possible. Martins and
Rangayyan [444, 505] proposed cepstral filtering in the Radon domain of the
image (see Section 10.3) to obtain the texton; their methods and results are
described in Section 7.7.1.

7.7.1 Homomorphic deconvolution of periodic patterns

We have seen in Section 7.2 that an image with periodic texture may be modeled
as the convolution of a texton or motif with an impulse field. Linear
filters may be applied to the complex cepstrum for homomorphic deconvolution
of signals that contain convolved components; see Section 10.3. A basic
assumption in homomorphic deconvolution is that the complex cepstra of the
components do not overlap. This assumption is usually met in 1D signal
processing applications, such as in the case of voiced speech signals, where
the basic wavelet is a relatively smooth signal [31, 176]. Whereas it would
be questionable to make the assumption that the 2D cepstra of an arbitrary
texton and an impulse field do not overlap, it would be acceptable to make
the same assumption in the case of 1D projections (Radon transforms) of the
same images. Then, the homomorphic deconvolution procedures described
in Section 10.3 may be applied to recover the projections of a single texton
[444, 505]. The texton may then be obtained via a procedure for image
reconstruction from projections (see Chapter 9).
The distinction between the application of homomorphic deconvolution for
the removal of visual echoes as described in Section 10.3 and for the extraction
of a texton is minor. An image with visual echoes may contain only one
copy or a few repetitions of a basic image, with possible overlap, and with
possibly unequal spacing of the echoes; the basic image may be large in spatial
extent [505]. On the other hand, an image with ordered texture typically
contains several nonoverlapping repetitions of a relatively small texton or
motif at regular or quasi-periodic spacing.
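A toy 1D illustration of the cepstral property that such liftering exploits (this uses the real cepstrum for brevity, not the full complex-cepstrum deconvolution of Martins and Rangayyan): a wavelet repeated at a regular spacing produces cepstral peaks at multiples of the repetition period, which a low-quefrency lifter can separate from the wavelet component.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse DFT of the log magnitude spectrum."""
    mag = np.abs(np.fft.fft(x))
    return np.real(np.fft.ifft(np.log(mag + 1e-12)))  # guard against log(0)

# A periodic 1D "projection": a smooth Gaussian wavelet repeated every 32 samples.
n, period = 256, 32
t = np.arange(16) - 8.0
wavelet = np.exp(-0.5 * (t / 2.0) ** 2)
impulses = np.zeros(n)
impulses[::period] = 1.0
# circular convolution of the impulse train with the wavelet
signal = np.real(np.fft.ifft(np.fft.fft(impulses) * np.fft.fft(wavelet, n)))

c = real_cepstrum(signal)
# skip the low-quefrency region (dominated by the smooth wavelet) and find
# the dominant peak; it lies at a multiple of the repetition period
q = int(np.argmax(c[10 : n // 2])) + 10
```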
Although homomorphic deconvolution has been shown to successfully extract
the basic wavelets or motifs in periodic signals, the extraction of the
impulse train or field is made difficult by the presence of noise and artifacts
related to the deconvolution procedure [31].

Example: An image of a part of a building with an ordered arrangement of
windows is shown in Figure 7.22 (a). A single window section of the image,
extracted by homomorphic deconvolution, is shown in part (b) of the figure.
FIGURE 7.22
(a) An image of a part of a building with a periodic arrangement of windows.
(b) A single window structure extracted by homomorphic deconvolution. Reproduced
with permission from A.C.G. Martins and R.M. Rangayyan, "Texture
element extraction via cepstral filtering in the Radon domain", IETE
Journal of Research (India), 48(3,4): 143–150, 2002. © IETE.
An image with a periodic arrangement of a textile motif is shown in Figure
7.23 (a). The result of the homomorphic deconvolution procedure of Martins
and Rangayyan [444, 505] to extract the texton is shown in part (b) of the same
figure. It is evident that a single motif has been extracted, albeit with some
blurring and loss of detail. The procedure, however, was not successful with
biomedical images due to the effects of quasi-periodicity as well as significant
size and scale variations among the repeated versions of the basic pattern.
More research is desirable in this area.
FIGURE 7.23
(a) An image with a periodic arrangement of a textile motif. (b) A single
motif or texton extracted by homomorphic deconvolution. Reproduced with
permission from A.C.G. Martins and R.M. Rangayyan, "Texture element extraction
via cepstral filtering in the Radon domain", IETE Journal of Research
(India), 48(3,4): 143–150, 2002. © IETE.

7.8 Audification and Sonification of Texture in Images

The use of sound in scientific data analysis is rather rare, and analysis and
presentation of data are done almost exclusively by visual means. Even when
the data are the result of vibrations or sounds, such as the heart sound signals
or phonocardiograms, a Doppler ultrasound exam, or sonar, they are often
mapped to a graphical display or an image, and visual analysis is performed.
The auditory system has not been used much for image analysis in spite of
the fact that it has several advantages over the visual system. Whereas many
interesting methods have been proposed for the auditory display of scientific
laboratory data and computer graphics representations of multidimensional
data, not much work has been reported on deriving sounds from visual images.
Chambers et al. [506] published a report on auditory data presentation in the
early 1970s. The first international conference on auditory display of scientific
data was held in 1992 [507], with specific interest in the use of sound for the
presentation and analysis of information.
Meijer [508, 509] proposed a sonification procedure to present image data to
the blind. In this method, the frequency of an oscillator is associated with the
position of each pixel in the image, and the amplitude is made proportional
to the pixel intensity. The image is scanned one column at a time, and the
outputs of the associated oscillators are all presented as a sum, followed by
a click before the presentation of the next column. In essence, the image
is treated as a spectrogram or a time-frequency distribution [31, 176]. The
sound produced by this method with simple images, such as a line crossing the
plane of an image, can be easily analyzed; however, the sound patterns related
to complex images could be complicated and confusing.
Texture analysis is often confounded by other neighboring or surrounding
features. Martins et al. [447] explored the potential of auditory display procedures,
including audification and sonification, for aural presentation and
analysis of texture in images. An analogy was drawn between random texture
and unvoiced speech, and between periodic texture and voiced speech, in
terms of generation based on the filtering of an excitation function as shown in
Figure 7.4. An audification procedure that played in sequence the projections
(Radon transforms) of the given image at several angles was proposed for the
auditory analysis of random texture. A linear-prediction model [510, 176, 31]
was used to generate the sound signal from the projection data. Martins
et al. also proposed a sonification procedure to convert periodic texture to
sound, with the emphasis on displaying the essential features of the texture
element and periodicity in the horizontal and vertical directions. Projections
of the texton were used to compose sound signals including pitch, as in
voiced speech, as well as a rhythmic aspect, with the pitch period and rhythm
related to the periodicities in the horizontal and vertical directions in the image.
Data-mapping functions were designed to relate image characteristics to
sound parameters in such a way that the sounds provided information in microstructure
(timbre, individual pitch) and macrostructure (rhythm, melody,
pitch organization) that were related to the objective or quantitative measures
of texture.
In order to verify the potential of the proposed methods for aural analysis
of texture, a set of pilot experiments was designed and presented to 10
subjects [447]. The results indicated that the methods could facilitate qualitative
and comparative analysis of texture. In particular, it was observed that
the methods could lead to the possibility of defining a sequence or order in
the case of images with random texture, and that sound-to-image association
could be achieved in terms of the size and shape of the spot used to synthesize
the texture. Furthermore, the proposed mapping of the attributes of
periodic texture to sound attributes could permit the analysis of features such
as texton size and shape, as well as periodicity, in qualitative and comparative
manners. The methods could lead to the use of auditory display of images as
an adjunctive procedure to visualization.
Martins et al. [511] conducted preliminary tests on the audification of MR
images using selected areas corresponding to the gray and white matter of
the brain, and to normal and infarcted tissues. By using the audification
method, differences between the various tissue types were easily perceived
by two radiologists; visual discrimination of the same areas while remaining
within their corresponding MR-image contexts was said to be difficult by the
same radiologists. The results need to be confirmed with a larger study.

7.9 Application: Analysis of Breast Masses Using Texture and Gradient Measures

In addition to the textural changes caused by microcalcifications, the presence
of spicules arising from malignant tumors causes disturbances in the homogeneity
of tissues in the surrounding breast parenchyma. Based upon this
observation, several studies have focused on quantifying the textural content
in the mass ROI and mass margins to achieve the classification of masses
versus normal tissue as well as benign masses versus malignant tumors.
Petrosian et al. [446] investigated the usefulness of texture features based
upon GCMs for the classification of masses and normal tissue. With a dataset
of 135 manually segmented ROIs, the methods indicated 89% sensitivity and
76% specificity in the training step, and 76% sensitivity and 64% specificity
in the test step using the leave-one-out method. Kinoshita et al. [512] used
a combination of shape factors and texture features based on GCMs. Using
a three-layer feed-forward neural network, they reported 81% accuracy in
the classification of benign and malignant breast lesions with a dataset of 38
malignant and 54 benign lesions.
Chan et al. [450], Sahiner et al. [451, 513], and Wei et al. [514, 515] investigated
the effectiveness of texture features derived from GCMs for differentiating
masses from normal breast tissue in digitized mammograms. One
hundred and sixty-eight ROIs with masses and 504 normal ROIs were examined,
and eight features including correlation, entropy, energy, inertia, inverse
difference moment, sum average moment, sum entropy, and difference entropy
were calculated for each region. All the ROIs were manually segmented by a
radiologist. Using linear discriminant analysis, Chan et al. [450] reported an
accuracy of 0.84 for the training set and 0.82 for a test set. Wei et al. [514, 515]
reported improved classification results with the same dataset by applying
multiresolution texture analysis. Sahiner et al. applied a convolutional neural
network [513], and later used a genetic algorithm [513, 516] to classify the
masses and normal tissue in the same dataset.
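The GCM-based features named above can be sketched as follows; the eight-level quantization, the unit horizontal displacement, and the restriction to three of the eight features are illustrative choices, not the settings of the cited studies.

```python
import numpy as np

def gcm(f, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for displacement (dx, dy >= 0)."""
    q = np.clip((f.astype(float) * levels / (f.max() + 1.0)).astype(int),
                0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]    # first pixel of each pair
    b = q[dy:, dx:]                              # displaced pixel
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    return P / P.sum()

def gcm_features(P):
    """Energy, entropy (bits), and inertia (contrast) of a normalized GCM."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    energy = float(np.sum(P ** 2))
    entropy = float(-np.sum(nz * np.log2(nz)))
    inertia = float(np.sum(P * (i - j) ** 2))
    return energy, entropy, inertia

# A checkerboard alternates between two gray levels at unit displacement,
# so all of its co-occurrence mass sits on two off-diagonal cells.
yy, xx = np.mgrid[0:16, 0:16]
checker = ((xx + yy) % 2) * 10
energy, entropy, inertia = gcm_features(gcm(checker))
```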
Analysis of the gradient or transition information present in the boundaries
of masses has been attempted by a few researchers in order to arrive
at benign-versus-malignant decisions. Kok et al. [517] used texture features,
fractal measures, and edge-strength measures computed from suspicious regions
for lesion detection. Huo et al. [518] and Giger et al. [519] extracted mass
regions using region-growing methods and proposed two spiculation measures
obtained from an analysis of radial edge-gradient information surrounding
the periphery of the extracted regions. Benign-versus-malignant classification
studies performed using the features yielded an average efficiency of 0.85.
Later on, the group reported to have achieved superior results with their
computer-aided classification scheme as compared to an expert radiologist by
employing a hybrid classifier on a test set of 95 images [520].
Highnam et al. [521] investigated the presence of a "halo" (an area around
a mass region with a positive Laplacian) to indicate whether a circumscribed
mass is benign or malignant. They found that the extent of the halo varies between
the CC and MLO views for benign masses, but is similar for malignant
tumors.
Guliato et al. [276, 277] proposed fuzzy region-growing methods for segmenting
breast masses, and further proposed classification of the segmented
masses as benign or malignant based on the transition information present
around the segmented regions; see Sections 5.5 and 5.11. Rangayyan et
al. [163] proposed a region-based edge-profile acutance measure for evaluating
the sharpness of mass boundaries; see Sections 2.15 and 7.9.2.
Many studies have focused on transforming the space-domain intensities
into other forms for analyzing gradient and texture information. Claridge and
Richter [522] developed a Gaussian blur model to characterize the transitional
information in the boundaries of mammographic lesions. In order to analyze
the blur in the boundaries and to determine the prevailing direction of linear
patterns, the lesion was mapped into polar coordinates. A measure of spiculation,
defined as the ratio of the sum of vertical gradient magnitudes to the sum of
horizontal gradient magnitudes, was computed from the transformed images to
discriminate between circumscribed and spiculated lesions.
Sahiner et al. [451, 516, 523] introduced the RBST method to transform
a band of pixels surrounding the boundary of a segmented mass onto the
Cartesian plane (see Figure 7.26). The band of pixels was extracted in the
perpendicular direction from every point on the boundary. Texture features
based upon GCMs computed from the RBST images resulted in an average
efficiency of 0.94 in the benign-versus-malignant classification of 168 cases.
Sahiner et al. reported that texture analysis of RBST images yielded better
benign-versus-malignant discrimination than analysis of the original space-domain
images. However, such a transformation is sensitive to the precise
extraction of the band of pixels surrounding the ROI; the method may face
problems with masses having highly spiculated margins.
Hadjiiski et al. [524] reported on the design of a hybrid classifier (an adaptive
resonance theory network cascaded with linear discriminant analysis) to
classify masses as benign or malignant. They compared the performance of the
hybrid classifier that they designed with back-propagation neural network
and linear discriminant classifiers, using a dataset of 348 manually segmented
ROIs (169 benign and 179 malignant). Benign-versus-malignant classification
using the hybrid classifier achieved marginal improvement in performance,
with an average efficiency of 0.81. The texture features used in the classifier
were based upon GCMs and run-length sequences computed from the RBST
images.
Giger et al. [525] classified manually delineated breast mass lesions in
ultrasonographic images as benign or malignant using texture features, margin
sharpness, and posterior acoustic attenuation. With a dataset of 135 ultrasound
images from 39 patients, the posterior acoustic attenuation feature
achieved the best benign-versus-malignant classification results, with an average
efficiency of 0.84. Giger et al. reported achieving higher sensitivity
and specificity levels by combining the features derived from both mammographic
and ultrasonographic images of mass lesions than by using features
computed from the mammographic mass lesions alone.
Mudigonda et al. [275, 165] derived measures of texture and gradient using
ribbons of pixels around mass boundaries, with the hypothesis that the
transitional information in a mass margin, from the inside of the mass to its
surrounding tissues, is important in discriminating between benign masses and
malignant tumors. The methods and results of this work are described in the
following sections. See Sections 6.7, 8.8, 12.11, and 12.12 for more discussion
on the detection and analysis of breast masses.

7.9.1 Adaptive normals and ribbons around mass margins


Mudigonda et al. [165, 275] obtained adaptive ribbons around the boundaries of
breast masses and tumors that were drawn by an expert radiologist, in the
following manner. Morphological dilation and erosion operations [526] were
applied to the boundary using a circular operator of a specified diameter.
Figures 7.24 and 7.25 show the extracted ribbons across the boundaries of a
benign mass and a malignant tumor, respectively. The width of the ribbon in
each case is 8 mm across the boundary (4 mm or 80 pixels on either side of
the boundary, at a resolution of 50 μm per pixel). The ribbon width of 8 mm
was determined by a radiologist in order to take into account the possible
depth of infiltration or diffusion of masses into the surrounding tissues.
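The dilation-and-erosion construction of a boundary ribbon can be sketched as follows. This is an illustrative implementation, not the authors' code; the function names, the toy circular "mass", and the use of SciPy's binary morphology routines are assumptions made here for the example.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def disk(radius):
    # Circular structuring element (the "circular operator") of a given radius.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def boundary_ribbon(mask, half_width):
    # Dilation grows the filled region outward by half_width pixels, and
    # erosion shrinks it inward by the same amount; their set difference
    # is a ribbon of width 2 * half_width straddling the boundary.
    selem = disk(half_width)
    outer = binary_dilation(mask, structure=selem)
    inner = binary_erosion(mask, structure=selem)
    return outer & ~inner

# Toy example: a filled circle standing in for a segmented mass region.
yy, xx = np.mgrid[0:64, 0:64]
mass = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
ribbon = boundary_ribbon(mass, half_width=3)
```

At 50 μm per pixel, the 4 mm half-width used in the work described above would correspond to `half_width = 80`.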
In order to compute gradient-based measures and acutance (see Section
2.15), Mudigonda et al. developed the following procedure to extract pixels
from the inside of a mass boundary to the outside along the perpendicular
direction at every point on the boundary. A polygonal model of the mass
boundary, computed as described in Section 6.1.4, was used to approximate
the mass boundary with a polygon of known parameters. With the known
equations of the sides of the polygonal model, it is possible to estimate the
normal at every point on the boundary. The length of the normal at any
point on the boundary was limited to a maximum of 80 pixels (4 mm) on
either side of the boundary or the depth of the mass at that particular point.
This limit is significant, especially in the case of spiculated tumors possessing
sharp spicules or microlobulations, so that the extracted normals do not cross
over into adjacent spicules or portions of the mass. The normals obtained as above for a
benign mass and a malignant tumor are shown in Figures 7.24 and 7.25.
FIGURE 7.24
(a) A 1,000 × 900 section of a mammogram containing a circumscribed benign
mass. Pixel size = 50 μm. (b) Ribbon or band of pixels across the boundary
of the mass, extracted by using morphological operations. (c) Pixels along the
normals to the boundary, shown for every tenth boundary pixel. Maximum
length of the normals on either side of the boundary = 80 pixels or 4 mm.
Images courtesy of N.R. Mudigonda [166]. See also Figure 12.28.
FIGURE 7.25
(a) A 630 × 560 section of a mammogram containing a spiculated malignant
tumor. Pixel size = 50 μm. (b) Ribbon or band of pixels across the boundary
of the tumor, extracted by using morphological operations. (c) Pixels along the
normals to the boundary, shown for every tenth boundary pixel. Maximum
length of the normals on either side of the boundary = 80 pixels or 4 mm.
Images courtesy of N.R. Mudigonda [166]. See also Figure 12.28.
With an approach that is different from the above, but comparable, Sahiner
et al. [451] formulated the RBST method to map ribbons around breast masses
in mammograms into rectangular arrays (see Figure 7.26). It was expected that
variations in texture due to the spicules that are commonly present around
malignant tumors would be enhanced by the transform, and lead to better
discrimination between malignant tumors and benign masses. The rectangular
array permitted easier and straightforward computation of texture measures.
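The idea of straightening a ribbon can be illustrated with a minimal sketch. This is a simplification made here for illustration, not the published RBST algorithm (which computes normals from a smoothed boundary and interpolates): the image is sampled along a precomputed unit normal at each boundary point, and the profiles are stacked as columns of a rectangular array.

```python
import numpy as np

def straighten_ribbon(image, boundary, normals, length):
    # boundary: list of (y, x) boundary points; normals: matching list of
    # (ny, nx) unit normals pointing from the inside toward the outside.
    # Each column of the output is one nearest-neighbor-sampled profile.
    out = np.zeros((length, len(boundary)))
    for j, ((y, x), (ny, nx)) in enumerate(zip(boundary, normals)):
        for i in range(length):
            t = i - length // 2            # from inside (-) to outside (+)
            yi = int(round(y + t * ny))
            xi = int(round(x + t * nx))
            yi = min(max(yi, 0), image.shape[0] - 1)   # clamp at the edges
            xi = min(max(xi, 0), image.shape[1] - 1)
            out[i, j] = image[yi, xi]
    return out

# Toy example: a vertical intensity ramp, sampled across a horizontal
# "boundary" row with downward-pointing normals.
img = np.tile(np.arange(10.0).reshape(-1, 1), (1, 10))
band = straighten_ribbon(img, [(5, 2), (5, 3)], [(1.0, 0.0)] * 2, length=5)
```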

7.9.2 Gradient and contrast measures


Due to the infiltration into the surrounding tissues, malignant breast lesions
often permeate larger areas than are apparent on mammograms. As a result,
tumor margins in mammographic images do not present a clear-cut transition
or reliable gradient information. Hence, it is difficult for an automated detection
procedure to realize precisely the boundaries of mammographic masses,
as there cannot be any objective measure of such precision. Furthermore,
when manual segmentation is used, there are bound to be large inter-observer
variations in the location of mass boundaries due to subjective differences
in notions of edge sharpness. Considering the above, it is appropriate for
gradient-based measures to characterize the global gradient phenomenon in
the mass margins without being sensitive to the precise location of the mass
boundary.
A modified measure of edge sharpness: The subjective impression of
sharpness perceived by the HVS is a function of the averaged variations in
intensities between the relatively light and dark areas of an ROI. Based upon
this, Higgins and Jones [115] proposed a measure of acutance to compute
sharpness as the mean-squared gradient along knife-edge spread functions
of photographic films. Rangayyan and Elkadiki [116] extended this concept
to 2D ROIs in images; see Section 2.15 for details. Rangayyan et al. [163]
used the measure to classify mammographic masses as benign or malignant:
acutance was computed using directional derivatives along the perpendicular
at every boundary point by considering the inside-to-outside differences of
intensities across the boundary, normalized to unit pixel distance. The method
has limitations due to the following reasons:
• Because derivatives were computed based on the inside-to-outside differences
across the boundary, the measure is sensitive to the actual location
of the boundary. Furthermore, it is sensitive to the number of differences
(pixel pairs) that are available at a particular boundary point,
which could be relatively low in the sharply spiculated portions of a
malignant tumor as compared to the well-circumscribed portions of a
benign mass. The measure thus becomes sensitive to shape complexity
as well, which is not intended.
• The final acutance value for a mass ROI was obtained by normalizing
the mean-squared gradient computed at all the points on the boundary
with a factor dependent upon the maximum gray-level range and the
maximum number of differences used in the computation of acutance.
For a particular mass under consideration, this type of normalization
could result in large differences in acutance values for varying numbers
of pixel pairs considered.

FIGURE 7.26
Mapping of a ribbon of pixels around a mass into a rectangular image by
the rubber-band straightening transform [428, 451]. Figure courtesy of B.
Sahiner, University of Michigan, Ann Arbor, MI. Reproduced with permission
from B.S. Sahiner, H.P. Chan, N. Petrick, M.A. Helvie, and M.M. Goodsitt,
"Computerized characterization of masses on mammograms: The rubber band
straightening transform and texture analysis", Medical Physics, 25(4): 516–526,
1998. © American Association of Physicists in Medicine.
Mudigonda et al. [165] addressed the above-mentioned drawbacks by developing
a consolidated measure of directional gradient strength as follows.
Given the boundary of a mass formed by N points, the first step is to compute
the RMS gradient in the perpendicular direction at every point on the
boundary, with a set of successive pixel pairs as made available by the ribbon-extraction
method explained in Section 7.9.1. The RMS gradient d_m at the
mth boundary point is obtained as

$$ d_m = \sqrt{ \frac{1}{p_m} \sum_{n=0}^{p_m - 1} \left[ f_m(n) - f_m(n+1) \right]^2 }, \qquad (7.42) $$

where f_m(n), n = 0, 1, 2, ..., p_m, are the (p_m + 1) pixels available along the
perpendicular at the mth boundary point, including the boundary point. The
number of pixels p_m along the normal is limited to a maximum of 160 (80 pixels
on either side of the boundary, with the pixel size being 50 μm).
A modified measure of acutance based on the directional gradient strength
A_g of the ROI is computed as

$$ A_g = \frac{1}{N \, (f_{max} - f_{min})} \sum_{m=1}^{N} d_m, \qquad (7.43) $$

where f_max and f_min are the local maximum and the local minimum pixel
values in the ribbon of pixels extracted, and N is the number of pixels along
the boundary of the ROI. Because RMS gradients computed over several pixel
pairs at each boundary point are used in the computation of A_g, the measure is
expected to be stable in the presence of noise, and furthermore, expected to be
insensitive to the actual location of the boundary. The factor (f_max − f_min)
in the denominator of Equation 7.43 serves as an additional normalization
factor in order to account for the changes in the gray-level contrast of images
from various databases; it also normalizes the A_g measure to the range [0, 1].
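Equations 7.42 and 7.43 translate directly into code. The sketch below is illustrative only; the lists of profile values stand in for the pixels extracted along each normal, and the function names are choices made here, not the authors'.

```python
import numpy as np

def rms_gradient(profile):
    # Eq. 7.42: RMS of the successive differences f_m(n) - f_m(n+1)
    # along the normal profile at one boundary point.
    diffs = np.diff(np.asarray(profile, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

def acutance(profiles):
    # Eq. 7.43: mean RMS gradient over all N boundary points, normalized
    # by the gray-level range of the extracted ribbon so that the result
    # lies in [0, 1].
    d = np.array([rms_gradient(p) for p in profiles])
    ribbon = np.concatenate([np.asarray(p, dtype=float) for p in profiles])
    return d.mean() / (ribbon.max() - ribbon.min())

# A sharp step edge scores higher than a gradual ramp of the same contrast.
step = [[0, 0, 0, 100, 100, 100]] * 8
ramp = [[0, 20, 40, 60, 80, 100]] * 8
```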
Coefficient of variation of gradient strength: In the presence of objects
with fuzzy backgrounds, as is the case in mammographic images, the
mean-squared gradient as a measure of sharpness may not result in adequate
confidence intervals for the purposes of pattern classification. Hence, statistical
measures need to be adopted to characterize the feeble gradient variations
across mass margins. Considering this notion, Mudigonda et al. [165]
proposed a feature based on the coefficient of variation of the edge-strength
values computed at all points on a mass boundary. The stated purpose of this
feature was to investigate the variability in the sharpness of a mass around
its boundary, in addition to the evaluation of its average sharpness with the
measure A_g. Variance is a statistical measure of signal strength, and can be
used as an edge detector because it responds to boundaries between regions
of different brightness [527]. In the procedure proposed by Mudigonda et al.,
the variance (σ_w²) localized in a moving window of an odd number of pixels
(M) in the perpendicular direction at a boundary pixel is computed as

$$ \sigma_w^2 = \frac{1}{M} \sum_{n=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} \left[ f_m(n) - \mu_w \right]^2, \qquad (7.44) $$

where M = 5; f_m(n), n = 0, 1, 2, ..., p_m, are the pixels considered at the mth
boundary point in the perpendicular direction; and μ_w is the running mean
intensity in the selected window:

$$ \mu_w = \frac{1}{M} \sum_{n=-\lfloor M/2 \rfloor}^{\lfloor M/2 \rfloor} f_m(n). \qquad (7.45) $$

The window is moved over the entire range of pixels made available at
a particular boundary point by the ribbon-extraction method described in
Section 7.9.1. The maximum of the variance values thus computed is used
to represent the edge strength at the boundary point being processed. The
coefficient of variation (G_cv) of the edge-strength values for all the points on
the boundary is then computed. The measure is not sensitive to the actual
location of the boundary within the selected ribbon, and is normalized so as
to be applicable to a mixture of images from different databases.
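The moving-window edge strength of Equations 7.44 and 7.45 and the coefficient of variation can be sketched as follows (illustrative only; the window simply slides over whatever profile samples are available at each boundary point, and the names are assumptions made here):

```python
import numpy as np

def edge_strength(profile, M=5):
    # Eqs. 7.44-7.45: slide a window of M pixels along the normal profile
    # and return the maximum localized variance as the edge strength.
    p = np.asarray(profile, dtype=float)
    return max(np.var(p[i:i + M]) for i in range(len(p) - M + 1))

def gradient_cv(profiles, M=5):
    # Coefficient of variation G_cv of the edge-strength values computed
    # at all points on the boundary.
    s = np.array([edge_strength(p, M) for p in profiles])
    return s.std() / s.mean()

# Identical profiles all around the boundary give G_cv = 0; mixing a
# sharper and a smoother profile gives G_cv > 0.
uniform_boundary = [[0, 0, 0, 10, 10, 10, 10]] * 4
mixed_boundary = [[0, 0, 0, 10, 10, 10, 10], [0, 2, 4, 6, 8, 10, 10]]
```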

7.9.3 Results of pattern classification
In the work of Mudigonda et al. [165, 166], four GCMs were constructed by
scanning each mass ROI or ribbon in the 0°, 45°, 90°, and 135° directions
with unit-pixel distance (d = 1). Five of Haralick's texture features, defined
as F1, F2, F3, F5, and F9 in Section 7.3.2, were computed for the four GCMs,
thus resulting in a total of 20 texture features for each ROI or ribbon.
A pixel distance of d = 1 is preferred to ensure large numbers of co-occurrences
derived from the ribbons of pixels extracted from mass margins.
Texture features computed from GCMs constructed for larger distances
(d = 3, 5, and 10 pixels, with the resolution of the images being 50 or 62 μm)
were found to possess a high degree of correlation (0.9 and higher) with the
corresponding features computed for unit-pixel distance (d = 1). Hence, pattern
classification experiments were not carried out with the GCMs constructed
using larger distances.
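The construction of the four unit-distance GCMs can be sketched as follows. This is a minimal brute-force version written for illustration; the symmetric accumulation and the (dx, dy) offsets (with negative dy meaning "up" in image coordinates) are conventions chosen here.

```python
import numpy as np

def gcm(image, dx, dy, levels):
    # Gray-level co-occurrence matrix for displacement (dx, dy),
    # accumulated symmetrically and normalized to sum to 1.
    img = np.asarray(image)
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                P[img[y, x], img[y2, x2]] += 1
                P[img[y2, x2], img[y, x]] += 1   # symmetric counting
    return P / P.sum()

# Four GCMs at unit distance: 0, 45, 90, and 135 degrees.
offsets = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
mats = {ang: gcm(img, dx, dy, levels=4) for ang, (dx, dy) in offsets.items()}
```

Haralick's features are then computed from each normalized matrix; the symmetric accumulation makes each GCM equal to its transpose.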
In addition to the texture features described above, the two gradient-based
features A_g and G_cv were computed from adaptive ribbons extracted around
the boundaries of 53 mammographic ROIs, including 28 benign masses and 25
malignant tumors. Three leading features (with canonical coefficients greater
than 1), including two texture measures of correlation (d = 1 at 90° and
45°) and a measure of inverse difference moment (d = 1 at 0°), were selected
from the 20 texture features computed from the ribbons. The classification
accuracy was found to be the maximum with the three features listed above.
The two most effective features selected for analyzing the mass ROIs included
two measures of correlation (d = 1 at 90° and 135°).
Pattern classification experiments with 38 masses from the MIAS database
[376] (28 benign and 10 malignant) indicated average accuracies of 68.4% and
78.9% using the texture features computed with the entire mass ROIs and the
adaptive ribbons around the boundaries, respectively. This result supports the
hypothesis that discriminant information is contained around the margins of
breast masses rather than within the masses. With the extended database of
53 masses (28 benign and 25 malignant), and with features computed using the
ribbons around the boundaries, the classification accuracies with the gradient
features, the texture features, and their combination were 66%, 75.5%, and 73.6%,
respectively. (The areas under the receiver operating characteristic curves
were, respectively, 0.71, 0.80, and 0.81; see Section 12.8.1 for details on this
method.) The gradient features were observed to increase the sensitivity, but
reduce the specificity, when combined with the texture features.
In a different study, Alto et al. [528, 529] obtained benign-versus-malignant
classification accuracies of up to 78.9% with acutance (as in Equation 2.110),
66.7% with Haralick's texture measures, and 98.2% with shape factors applied
to a different database of 57 breast masses and tumors. Although combinations
of the features did not result in higher pattern classification accuracy,
advantages were observed in experiments on content-based retrieval (see Section
12.12).
In experiments conducted by Sahiner et al. [428] with automatically extracted
boundaries of 249 mammographic masses, Haralick's texture measures
individually provided classification accuracies of up to only 0.66, whereas the
Fourier-descriptor-based shape factor defined in Equation 6.58 gave an accuracy
of 0.82 (the highest among 13 shape features, 13 texture features, and five
run-length statistics). Each texture feature was computed using the RBST
method [451] (see Figure 7.26) in four directions and for 10 distances. However,
the full set of the shape factors provided an average accuracy of 0.85, the
texture feature set provided the same accuracy, and the combination of the shape
and texture feature sets provided an improved accuracy of 0.89. These results
indicate the importance of including features from a variety of perspectives
and image characteristics in pattern classification.
See Sections 5.5, 5.11, and 8.8 for discussions on the detection of masses in
mammograms; Sections 6.7 and 12.11 for details on shape analysis of masses;
and Section 12.12 for a discussion on the application of texture measures for
content-based retrieval and classification of mammographic masses.

7.10 Remarks
In this chapter, we have examined the nature of texture in biomedical images,
and studied several methods to characterize texture. We have also noted
numerous applications of texture analysis in the classification of biomedical
images. Depending upon the nature of the images on hand, and the anticipated
textural differences between the various categories of interest, one may
have to use combinations of several measures of texture and contour roughness
(see Chapter 6) in order to obtain acceptable results. Relating statistical
and computational representations of texture to visually perceived patterns
or expert opinion could be a significant challenge in medical applications. See
Ojala et al. [530] for a comparative analysis of several methods for the analysis
of texture. See Chapter 12 for examples of pattern classification via texture
analysis.
Texture features may also be used to partition or segment multitextured
images into their constituent parts, and to derive information regarding the
shape, orientation, and perspective of objects. Haralick and Shapiro [440]
(Chapter 9) describe methods for the derivation of the shape and orientation
of 3D objects or terrains via the analysis of variations in texture.
Examples of oriented texture were presented in this chapter. Given the
importance of oriented texture and patterns with directional characteristics in
biomedical images, Chapter 8 is devoted completely to the analysis of oriented
patterns.

7.11 Study Questions and Problems


Selected data files related to some of the problems and exercises are available at the
site
www.enel.ucalgary.ca/People/Ranga/enel697
1. Explain the manner in which
(a) the variance,
(b) the entropy, and
(c) the skewness
of the histogram of an image can represent texture.
Discuss the limitations of measures derived from the histogram of an image
in the representation of texture.
2. What are the main similarities and differences between the histogram and a
gray-level co-occurrence matrix of an image?
What are the orders of these two measures in terms of PDFs?
3. Explain why gray-level co-occurrence matrices need to be estimated for several
values of displacement (distance) and angle.
4. Explain how shape complexity and texture (gray-level) complexity complement
each other. (You may use a tumor as an example.)
5. Sketch two examples of fractals, in the sense of self-similar nested patterns,
in biomedical images.

7.12 Laboratory Exercises and Projects


1. Visit a medical imaging facility and a pathology laboratory. Collect examples
of images with
(a) random texture,
(b) oriented texture, and
(c) ordered texture.
Respect the priority, privacy, and confidentiality of patients.
Request a radiologist, a technologist, or a pathologist to explain how he or she
interprets the images. Obtain information on the differences between normal
and abnormal (disease) patterns in different types of samples and tests.
Collect a few sample images for use in image processing experiments, after
obtaining the necessary permissions and ensuring that you carry no patient
identification out of the laboratory.
2. Compute the log-magnitude Fourier spectra of the images you obtained in
Exercise 1. Study the nature of the spectra and relate their characteristics to
the nature of the texture observed in the images.
3. Derive the histograms of the images you obtained in Exercise 1. Compute
(a) the variance,
(b) the entropy,
(c) the skewness, and
(d) the kurtosis
of the histograms. Relate the characteristics of the histograms and the values
of the parameters listed above to the nature of the texture observed in the
images.
4. Write a program to estimate the fractal dimension of an image using the
method given by Equation 7.39. Compute the fractal dimension of the images
you obtained in Exercise 1. Interpret the results and relate them to the nature
of the texture observed in the images.
8
Analysis of Oriented Patterns

Many images are composed of piecewise linear objects. Linear or oriented
objects possess directional coherence that can be quantified and examined to
assess the underlying pattern. An area that is closely related to directional
image processing is texture identification and segmentation. For example, given
an image of a human face, a method for texture segmentation would attempt
to separate the region consisting of hair from the region with skin, as well as
other regions, such as the eyes, that have a texture that is different from that
of either the skin or hair. In texture segmentation, a common approach for
identifying the differing regions is via finding the dominant orientation of the
different texture elements, and then segmenting the image using this information.
The subject matter of this chapter is more focused, and concerned with
issues of whether there is coherent structure in regions such as the hair or skin.
To put it simply, the question is whether the hair is combed or not; if it is
not, the degree of disorder is of interest, which we shall attempt to quantify.
Directional analysis is useful in the effective identification, segmentation, and
characterization of oriented (or weakly ordered) texture [432].

8.1 Oriented Patterns in Images


In most natural materials, strength is derived from highly coherent,
oriented fibers; an example of such structure is found in ligaments [35, 36].
Normal, healthy ligaments are composed of bundles of collagen fibrils that
are coherently oriented along the long axis of the ligament; see Figure 1.8 (a).
Injured and healing ligaments, on the other hand, contain scabs of scar material
that are not aligned. Thus, the determination of the relative disorder
of collagen fibrils could provide a direct indicator of the health, strength, and
functional integrity (or lack thereof) of a ligament [35, 36, 37, 531]; similar
patterns exist in other biological tissues, such as bones, muscle fibers, and
blood vessels, as well as in ligaments [414, 415, 532, 533, 534, 535, 536, 537, 538,
539, 540, 541, 542, 543].
Examples of oriented patterns in biomedical images include the following:
• Fibers in muscles and ligaments (see Figure 8.22).
• Fibroglandular tissue, ligaments, and ducts in the breast (see Figures 7.2 and 8.66).
• Vascular networks in ligaments, lungs, and the heart (see Figures 9.20 and 8.27).
• Bronchial trees in the lungs (see Figure 7.1).
Several more examples are presented in the sections to follow.


In man-made materials such as paper and textiles, strength usually relies
upon the individual fibers uniformly knotting together. Thus, the strength
of the material is directly related to the organization of the individual fibril
strands [544, 545, 546, 547, 548, 549].
Oriented patterns have been found to bear significant information in several
other applications of imaging and image processing. In geophysics, the accurate
interpretation of seismic soundings or "stacks" is dependent upon the
elimination of selected linear segments from the stacks, primarily the "ground
roll" or low-frequency component of a seismic sounding [550, 551, 552].
Thorarinsson et al. [553] used directional analysis to discover linear anomalies in
magnetic maps that represent tectonic features.
In robotics and computer vision, the detection of objects in the vicinity
and the determination of their orientation relative to the robot are important
in order for the machine to function in a nonstandard environment [554, 555,
556]. By using visual cues in images, such as the dominant orientation of a
scene, robots may be enabled to identify basic directions such as up and down.
Information related to orientation has been used in remote sensing to analyze
satellite maps for the detection of anomalies in map data [557, 558, 559,
560, 561, 562]. Underlying structures of the earth are commonly identified by
directional patterns in satellite images; for example, ancient river beds [557].
Identifying directional patterns in remotely sensed images helps geologists to
understand the underlying processes in the earth that are in action [553, 562].
Because man-made structures also tend to have strong linear segments,
directional features can help in the identification of buildings, roads, and urban
features [561].
Images commonly have sharp edges that make them nonstationary. Edges
render image coding and compression techniques such as LP coding and
DPCM (see Chapter 11) less efficient. By dividing the frequency space into
directional bands that contain the directional image components in each band,
and then coding the bands separately, higher rates of compression may be
obtained [563, 564, 565, 566, 567, 568, 569]. In this manner, directional filtering
can be useful in other applications of image processing, such as data
compression.

8.2 Measures of Directional Distribution


Mardia [570] pointed out that the statistical measures that are commonly used
for the analysis of data points in rectangular coordinate systems may lead to
improper results if applied to circular or directional data. Because we do not
usually consider directional components in images to be directed elements (or
vectors), there should be no need to differentiate between components that
are at angles θ and θ + 180°; therefore, we could limit our analysis to the
semicircular space of [0°, 180°] or [−90°, 90°].

8.2.1 The rose diagram


The rose diagram is a graphical representation of directional data. Corresponding
to each angular interval or bin, a sector (a petal of the rose) is
plotted with its apex at the origin. In common practice, the radius of the
sector is made proportional to the area of the image components directed in
the corresponding angle band.
The area of each sector in a rose diagram as above varies in proportion
to the square of the directional data. In order to make the areas of the
sectors directly proportional to the orientation data, the square roots of the
data elements could be related to the radii of the sectors. Linear histograms
conserve areas and are comparatively simple to construct; however, they lack
the strong visual association with directionality that is obtained through the
use of rose diagrams. Several examples of rose diagrams are provided in the
sections to follow.
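The area-preserving scaling described above can be sketched in code: a sector of angular width Δθ and radius r has area (Δθ/2)r², so making the area proportional to the bin weight means r ∝ √weight. The helper and the Gaussian-shaped example distribution below are illustrative choices, not taken from the text.

```python
import numpy as np

def rose_radii(weights):
    # Petal radii such that each sector's *area* is proportional to the
    # directional weight in its bin: r = sqrt(weight), normalized here
    # to a unit outer radius for plotting.
    w = np.asarray(weights, dtype=float)
    return np.sqrt(w / w.max())

# 18 bins of 10 degrees covering [0, 180), concentrated near 90 degrees.
centers = np.arange(5.0, 180.0, 10.0)
weights = np.exp(-0.5 * ((centers - 90.0) / 20.0) ** 2)
radii = rose_radii(weights)
```

The radii can then be handed to any polar plotting routine to draw the petals.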

8.2.2 The principal axis


The spatial moments of an image may be used to determine its principal axis,
which could be helpful in finding the dominant angle of directional alignment.
The moment of inertia of an image f(x, y) is at its minimum when the moment
is taken about the centroid (x̄, ȳ) of the image. The moment of inertia of the
image about the line (y − ȳ) cos θ = (x − x̄) sin θ, passing through (x̄, ȳ) and
having the slope tan θ, is given by

$$ m_\theta = \int_x \int_y \left[ (x - \bar{x}) \sin\theta - (y - \bar{y}) \cos\theta \right]^2 f(x, y)\, dx\, dy. \qquad (8.1) $$

In order to make m_θ independent of the choice of the coordinates, the
centroid of the image could be used as the origin. Then, x̄ = 0 and ȳ = 0,
and Equation 8.1 becomes

$$ m_\theta = \int_x \int_y (x \sin\theta - y \cos\theta)^2 f(x, y)\, dx\, dy
            = m_{20} \sin^2\theta - 2\, m_{11} \sin\theta \cos\theta + m_{02} \cos^2\theta, \qquad (8.2) $$

where m_{pq} is the (p, q)th moment of the image, given by

$$ m_{pq} = \int_x \int_y x^p\, y^q\, f(x, y)\, dx\, dy. \qquad (8.3) $$

By definition, the moment of inertia about the principal axis is at its minimum.
Differentiating Equation 8.2 with respect to θ and equating the result to zero
gives

$$ m_{20} \sin 2\theta - 2\, m_{11} \cos 2\theta - m_{02} \sin 2\theta = 0, \qquad (8.4) $$

or

$$ \tan 2\theta = \frac{2\, m_{11}}{(m_{20} - m_{02})}. \qquad (8.5) $$

By solving this equation, we can find the slope or the direction of the principal
axis of the given image [11].
If the input image consists of directional components along a single angle,
the principal axis lies along that angle. If there are a number of directional
components at different angles, then θ represents their weighted average
direction. Evidently, this method cannot detect the existence of components
in various angle bands, and is thus inapplicable for the analysis of multiple
directional components. Also, this method cannot quantify the directional
components in various angle bands.
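Equation 8.5 translates directly into code; the two-argument arctangent resolves the quadrant of 2θ. The sketch below is illustrative (my own helper, with angles measured in the usual image convention where y runs down the rows, so positive angles open downward from the x axis).

```python
import numpy as np

def principal_axis_angle(image):
    # Eq. 8.5: direction of the principal axis, in degrees, from central
    # moments taken about the centroid of the image.
    f = np.asarray(image, dtype=float)
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m00 = f.sum()
    xc = x - (x * f).sum() / m00        # coordinates relative to centroid
    yc = y - (y * f).sum() / m00
    m11 = (xc * yc * f).sum()
    m20 = (xc * xc * f).sum()
    m02 = (yc * yc * f).sum()
    return 0.5 * np.degrees(np.arctan2(2.0 * m11, m20 - m02))

horizontal = np.zeros((11, 11)); horizontal[5, :] = 1.0   # a horizontal bar
diagonal = np.eye(11)                                     # a diagonal line
```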

8.2.3 Angular moments


The angular moment M_k of order k of an angular distribution is defined as

$$ M_k = \sum_{n=1}^{N} \theta^k(n)\, p(n), \qquad (8.6) $$

where θ(n) represents the center of the nth angle band in degrees, p(n) represents
the normalized weight or probability of the data in the nth band, and N
is the number of angle bands. If we are interested in determining the dispersion
of the angular data about their principal axis, the moments may be taken
with respect to the centroidal angle θ̄ = M_1 of the distribution. Because the
second-order moment is at its minimum when taken about the centroid, we
could choose k = 2 for statistical analysis of angular distributions. Hence, the
second central moment M_2 may be defined as

$$ M_2 = \sum_{n=1}^{N} \left[ \theta(n) - \bar{\theta} \right]^2 p(n). \qquad (8.7) $$
The use of M_2 as a measure of angular dispersion has a drawback: because
the moment is calculated using the product of the square of the angular
distance and the weight of the distribution, even a small component at a large
angular distance from the centroidal angle could result in a high value for M_2.
(See also Section 6.2.2.)
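Equations 8.6 and 8.7 can be implemented in a few lines (an illustrative helper written for this example; bin centers are in degrees, and the weights are normalized internally to form a distribution):

```python
import numpy as np

def angular_moments(centers, weights):
    # Eq. 8.6 with k = 1 gives the centroidal angle M1; Eq. 8.7 gives the
    # second central moment M2, the weighted angular dispersion about M1.
    theta = np.asarray(centers, dtype=float)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                       # normalize to a distribution
    m1 = (theta * p).sum()
    m2 = (((theta - m1) ** 2) * p).sum()
    return m1, m2
```

A distribution concentrated in a single bin has M2 = 0; two equal bins at 0° and 90° give M1 = 45° and M2 = 45² = 2025.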

8.2.4 Distance measures


The directional distribution obtained by a particular method for an image
may be represented by a vector p1 = p1 (1) p1 (2)     p1 (N )]T , where p1 (n)
represents the distribution in the nth angle band. The true distribution of the
image, if known, may be represented by another vector p0 . Then, the Eu-
clidean distance between the distribution obtained by the directional analysis
method p1 and the true distribution of the image p0 is given as
v
u
uXN
kp1 ; p0 k = t p1 (n) ; p0 (n)]2 : (8.8)
n=1
This distance measure may be used to compare the accuracies of dierent
methods of directional analysis.
Another distance measure that is commonly used is the Manhattan dis-
tance, dened as
X
N
jp1 ; p0 j = jp1 (n) ; p0 (n)j: (8.9)
n=1
The distance measures defined above may also be used to compare the directional distribution of one image with that of another.
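The two distance measures of Equations 8.8 and 8.9 can be sketched as follows; the function names and the example distributions are illustrative.

```python
import math

def euclidean_distance(p1, p0):
    """Euclidean distance between two directional distributions (Eq. 8.8)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p0)))

def manhattan_distance(p1, p0):
    """Manhattan (city-block) distance between two distributions (Eq. 8.9)."""
    return sum(abs(a - b) for a, b in zip(p1, p0))

# An estimated distribution that splits the energy of a single-band
# true distribution across two bands:
p_true = [1.0, 0.0, 0.0, 0.0]
p_est = [0.5, 0.5, 0.0, 0.0]

d_euclid = euclidean_distance(p_est, p_true)     # sqrt(0.5)
d_manhattan = manhattan_distance(p_est, p_true)  # 1.0
```

Note that the Manhattan distance weights all deviations linearly, whereas the Euclidean distance emphasizes larger per-band deviations.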

8.2.5 Entropy
The concept of entropy from information theory [127] (see Section 2.8) can be effectively applied to directional data. If we take p(n) as the directional PDF of an image in the nth angle band, the entropy H of the distribution is given by

H = − Σ_{n=1}^{N} p(n) log_2[p(n)].   (8.10)
Entropy provides a useful measure of the scatter of the directional elements in an image. If the image is composed of directional elements with a uniform distribution (maximal scatter), the entropy is at its maximum; if, however, the image is composed of directional elements oriented at a single angle or in a narrow angle band, the entropy is (close to) zero. Thus, entropy, while not giving the angle band of primary orientation or the principal axis, could give a good indication of the directional spread or scatter of an image [35, 36, 414, 415]. (See Figure 8.24.)
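The entropy of Equation 8.10 may be sketched as follows (with the usual convention that terms with p(n) = 0 contribute nothing); the function name and the example distributions are illustrative.

```python
import math

def directional_entropy(p):
    """Entropy in bits of a directional PDF p(n) over N angle bands
    (Eq. 8.10); zero-probability bands contribute nothing."""
    return -sum(w * math.log2(w) for w in p if w > 0.0)

h_uniform = directional_entropy([0.125] * 8)   # maximal scatter: 3 bits
h_single = directional_entropy([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
```

A uniform distribution over N bands attains the maximum entropy log_2(N) (here, 3 bits for N = 8), whereas a distribution concentrated in one band has zero entropy.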
Other approaches that have been followed by researchers for the characterization of directional distributions are: numerical and statistical characterization of directional strength [535], morphological operations using a rotating structural element [541], laser small-angle light scattering [538, 539, 549], and optical diffraction and Fourier analysis [532, 548, 558, 560].

8.3 Directional Filtering


Methods based upon the Fourier transform have dominated the area of directional image processing [36, 532, 550, 551, 552]. The Fourier transform of an oriented linear segment is a sinc function oriented in the direction orthogonal to that of the original segment in the spatial domain; see Figure 8.1. Based upon this property, we can design filters to select linear components at specific angles. However, a difficulty in using the Fourier domain for directional filtering lies in the development of high-quality filters that are able to select linear components without the undesirable effects of ringing in the spatial domain.

Schiller et al. [571] showed that the human eye contains orientation-selective structures. This motivated research on human vision by Marr [282], who showed that the orientation of linear segments, primarily edges, is important in forming the primal sketch. Several researchers, including Kass and Witkin [572], Zucker [573], and Low and Coggins [574], used oriented bandpass filters in an effort to simulate the human visual system's ability to identify oriented structures in images. Allen et al. [575] developed a very-large-scale integrated (VLSI) circuit implementation of an orientation-specific "retina".

Several researchers [36, 572, 573, 574] have used many types of simple filters with wide passbands at various angles to obtain a redundant decomposition or representation of the given image. Such representations were used to derive directional properties of the image. For example, Kass and Witkin [572] formed a map of flow lines in the given image, and, under conformal mapping, obtained a transformation to regularize the flow lines onto a grid. The resulting transformation was used as a parameter representing the texture of the image. In this manner, various types of texture could be recognized or generated by using the conformal map specific to the texture.

Chaudhuri et al. [36] used a set of bandpass filters to obtain directional components in SEM images of ligaments; however, the filter used was relatively simple (see Sections 8.3.1 and 8.7.1). Generating highly selective filters in 2D is not trivial, and considerable research has been directed toward finding general rules for the formation of 2D filters. Bigun et al. [576] developed rules for the generation of least-squares optimal beam filters in multiple dimensions. Bruton et al. [577] developed a method for designing high-quality fan filters using methods from circuit theory. This method results in 2D recursive filters that have high directional selectivity and good roll-off characteristics, and is described in Section 8.3.3.

FIGURE 8.1
(a) A test image with a linear feature. (b) Log-magnitude Fourier spectrum of the test image in (a). (c) Another test image with a linear feature at a different angle. (d) Log-magnitude Fourier spectrum of the test image in (c). See also Figure 2.30.
8.3.1 Sector filtering in the Fourier domain

Fourier-domain techniques are popular methods for directional quantification of images [36, 532, 547, 550, 551, 552, 553, 557, 558, 559, 560, 562, 564, 565, 566, 567, 568, 569, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589]. The results of research on biological visual systems provide a biological basis for directional analysis of images using filter-based methods [389, 563, 571, 572, 573, 575].

The Fourier transform is the most straightforward method for identifying linear components. The Fourier transform of a line segment is a sinc function oriented at π/2 radians with respect to the direction of the line segment in the spatial domain; see Figure 8.1. This fact allows the selective filtering of line segments at a specific orientation by filtering the transformed image with a bandpass filter.
Consider a line segment of orientation (slope) a and y-axis intercept b in the (x, y) plane, with the spatial limits [−X, X] and [−Y, Y]. In order to obtain the Fourier transform of the image, we could evaluate a line integral in 2D along the line y = ax + b. To simplify the procedure, let us assume that the integration occurs over a square region with X = Y. Because the function f(x, y) is a constant along the line, the term f(x, y) in the Fourier integral can be normalized to unity, giving the equation f(x, y) = 1 along the line y = ax + b. Making the substitution x = (y − b)/a, we have the Fourier transform of the line image given by

F(u, v) = (1/|a|) ∫_{−Y}^{Y} exp{ −j 2π [u (y − b)/a + v y] } dy
        = (2Y/|a|) exp(j 2π b u/a) sinc[2π (u/a + v) Y].   (8.11)

From the result above, we can see that, for the image of a line, the Fourier transform is a sinc function with an argument that is a linear combination of the two frequency variables (u, v), and with a slope that is the negative reciprocal of the slope of the original line. The intercept is translated into a phase shift of b/a in the u variable. Thus, the Fourier transform of the line is a sinc function oriented at 90° to the original line, centered about the origin in the frequency domain regardless of the intercept of the original line. This allows us to form filters to select lines solely on the basis of orientation, regardless of their location in the space domain. Spatial components in a certain angle band may thus be obtained by applying a bandpass filter in an angle band perpendicular to the band of interest and applying the inverse transform. If we include a spatial offset in the above calculation, it would only result in a phase shift; the magnitude spectrum would remain the same. Figure 8.2 illustrates the ideal form of the "fan" filter that may be used to select oriented segments in the Fourier domain.

FIGURE 8.2
Ideal fan filter in the Fourier domain to select linear components oriented between +10° and −10° in the image plane. Black represents the stopband and white represents the passband. The origin (u, v) = (0, 0) is at the center of the figure.
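An ideal fan mask of the kind shown in Figure 8.2 may be sketched as follows, assuming the origin is placed at the center of the spectrum; the function name and the choice to keep the DC sample in the passband are illustrative.

```python
import math

def ideal_fan_mask(size, center_deg, half_width_deg):
    """Binary fan (sector) mask over a size x size Fourier plane with the
    origin (u, v) = (0, 0) at the center.  A sample passes if its angle
    lies within +/- half_width_deg of center_deg (modulo 180 degrees,
    because the spectrum of a real image is point symmetric)."""
    mask = [[0.0] * size for _ in range(size)]
    c = size // 2
    for row in range(size):
        for col in range(size):
            u, v = col - c, row - c
            if u == 0 and v == 0:
                mask[row][col] = 1.0      # keep the DC sample (a choice)
                continue
            ang = math.degrees(math.atan2(v, u)) % 180.0
            d = abs(ang - center_deg % 180.0)
            d = min(d, 180.0 - d)         # angular distance on a half-circle
            if d <= half_width_deg:
                mask[row][col] = 1.0
    return mask

# A fan centered at 0 degrees passes samples along the u axis only.
mask = ideal_fan_mask(65, 0.0, 10.0)
```

To select lines oriented at 0° in the image plane, such a mask would be centered at 90° in the Fourier domain, following the orthogonality property discussed above.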

Prior to the availability of high-speed digital processing systems, attempts at directional filtering used optical processing in the Fourier domain. Arsenault et al. [558] used optical bandpass filters to selectively filter contour lines in aeromagnetic maps. Using optical technology, Duvernoy and Chalasinska-Macukow [560] developed a directional sampling method to analyze images; the method involved integrating along an angle band of the Fourier-transformed image to obtain the directional content. This method was used by Dziedzic-Goclawska et al. [532] to identify directional content in bone tissue images. The need for specialized equipment and precise instrumentation limits the applicability of optical processing. The essential idea of filtering selected angle bands, however, remains valid as a processing tool, and is the basis of Fourier-domain techniques.
The main problem with Fourier-domain techniques is that the filters do not behave well with occluded components or at junctions of linear components; smearing of the line segments occurs, leading to inaccurate results when inverse transformed to the space domain. Another problem lies in the truncation artifacts and spectral leakage that can exist when filtering digitally, which lead to ringing in the inverse-transformed image. Ringing artifacts may be avoided by effective filter design, but this, in turn, could limit the spatial angle band to be filtered.
Considerable research has been reported in the field of multidimensional signal processing to optimize direction-selective filters [569, 576, 577, 580, 583, 585, 590]. Bigun et al. [576] addressed the problem of detection of orientation in a least-squares sense with FIR bandpass filters. This method has the added benefit of being easily implementable in the space domain. Bruton et al. [577] proposed guidelines for the design of stable IIR fan filters. Hou and Vogel [569] developed a novel method of using the DCT for directional filtering; this method uses the fact that the DCT divides the spectrum into an upper band and a lower band. In 2D, such band-splitting divides the frequency plane into directional filter bands. By selecting coefficients of the DCT, the desired spectral components can be obtained. Because the DCT has excellent spectral reconstruction qualities, this results in high-quality, directionally selective filters. A limitation of this technique is that the method only detects the bounding edges of the directional components, because the band-splitting in the DCT domain does not include the DC component of the directional elements.
In the method developed by Chaudhuri et al. [36], a simple decomposition of the spectral domain into 12 equal angle bands was employed, at 15° per angle band. Each sector filter in this design is a combination of an ideal fan filter, a Butterworth bandpass filter, a ramp-shaped lowpass filter, and a raised cosine window, as follows:

H(f_r, θ) = { (1 − α f_r) / [ (1 + (f_L/f_r)^{2p}) (1 + (f_r/f_H)^{2q}) ]^{1/2} } { λ + (1 − λ) cos[π (θ − θ_o)/B] },   (8.12)

where
α = slope of the weighting function = 0.7;
f_r = normalized radial frequency = √(u² + v²);
p = order of the highpass filter = 6;
q = order of the lowpass filter = 4;
f_H = upper cutoff frequency (normalized) = 0.5;
f_L = lower cutoff frequency (normalized) = 0.02;
θ = angle of the Fourier-transform sample = atan(v/u);
θ_o = central angle of the desired angle band;
B = angular bandwidth; and
λ = weighting factor = 0.5.

The combined filter with θ_o = 135° and B = 15° is illustrated in Figure 8.3. Filtering an image with sector filters as above results in 12 component images. Each component image contains the linear components of the original image in the corresponding angle band.
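The gain of such a combined sector filter may be sketched as follows. The radial (ramp and Butterworth) terms follow Equation 8.12 directly; the exact form of the raised-cosine angular window and the hard cutoff outside the ideal fan are assumptions of this sketch, and the names `sector_filter_gain` and `lam` (for λ) are illustrative.

```python
import math

def sector_filter_gain(u, v, theta_o_deg, B_deg,
                       alpha=0.7, p=6, q=4, f_H=0.5, f_L=0.02, lam=0.5):
    """Gain at normalized frequency (u, v) of a sector filter combining a
    ramp lowpass, a Butterworth bandpass, an ideal fan, and a
    raised-cosine angular window (one reading of Eq. 8.12)."""
    f_r = math.hypot(u, v)
    if f_r == 0.0:
        return 0.0                      # DC rejected by the highpass term
    radial = (1.0 - alpha * f_r) / math.sqrt(
        (1.0 + (f_L / f_r) ** (2 * p)) * (1.0 + (f_r / f_H) ** (2 * q)))
    theta = math.degrees(math.atan2(v, u)) % 180.0
    d = abs(theta - theta_o_deg % 180.0)
    d = min(d, 180.0 - d)               # angular distance to the band center
    if d > B_deg / 2.0:
        return 0.0                      # outside the ideal fan passband
    window = lam + (1.0 - lam) * math.cos(math.pi * d / B_deg)
    return radial * window

g_in = sector_filter_gain(-0.2, 0.2, 135.0, 15.0)   # in band (135 degrees)
g_out = sector_filter_gain(0.2, 0.0, 135.0, 15.0)   # out of band (0 degrees)
```

With these parameters, the in-band gain is slightly below unity (the ramp and Butterworth terms attenuate even in-band samples a little), while out-of-band samples are rejected by the fan term.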
Although the directional filter was designed to minimize spectral leakage, some ringing artifacts were observed in the results. To minimize the artifacts, a thresholding method was applied to accentuate the linear features in the image. Otsu's thresholding algorithm [591] (see Section 8.3.2) was applied in the study of collagen fiber images by Chaudhuri et al. [36].

FIGURE 8.3
Directional (sector) filter in the Fourier domain. The brightness is proportional to the gain [36]. Figure courtesy of W.A. Rolston [542].

8.3.2 Thresholding of the component images


Many methods are available for thresholding images with an optimal threshold for a given application [589, 591, 592]. The component images that result from the sector filters as described in Section 8.3.1 possess histograms that are smeared, mainly due to the strong DC component that is present in most images. Even with high-quality filters, the DC component appears as a constant in all of the component images due to its isotropic nature. This could pose problems in obtaining an effective threshold to select linear image features from the component images. On the other hand, the removal of the DC component would lead to the detection of edges, and the loss of information related to the thickness of the oriented patterns.

Otsu's method of threshold selection [591] is based upon discriminant measures derived from the gray-level PDF of the given image. Discriminant criteria are designed so as to maximize the separation of two classes of pixels into a foreground (the desired objects) and a background.
Consider the gray-level PDF p(l) of an image with L gray levels, l = 0, 1, 2, …, L − 1. If the PDF is divided into two classes C_0 and C_1 separated by a threshold k, then the probability of occurrence ω_i of the class C_i, i = {0, 1}, is given by

ω_0(k) = P(C_0) = Σ_{l=0}^{k} p(l) = ω(k),   (8.13)

ω_1(k) = P(C_1) = Σ_{l=k+1}^{L−1} p(l) = 1 − ω(k),   (8.14)
and the class mean levels μ_i for C_i, i = {0, 1}, are given by

μ_0(k) = Σ_{l=0}^{k} l P(l|C_0) = Σ_{l=0}^{k} l p(l)/ω_0(k) = μ(k)/ω(k),   (8.15)

and

μ_1(k) = Σ_{l=k+1}^{L−1} l P(l|C_1) = Σ_{l=k+1}^{L−1} l p(l)/ω_1(k) = [μ_T − μ(k)]/[1 − ω(k)],   (8.16)

where

ω(k) = Σ_{l=0}^{k} p(l)   (8.17)

and

μ(k) = Σ_{l=0}^{k} l p(l)   (8.18)

are the cumulative probability and first-order moment of the PDF p(l) up to the threshold level k, and

μ_T = Σ_{l=0}^{L−1} l p(l)   (8.19)

is the average gray level of the image.
The class variances are given by

σ_0²(k) = Σ_{l=0}^{k} [l − μ_0(k)]² P(l|C_0) = Σ_{l=0}^{k} [l − μ_0(k)]² p(l)/ω_0(k),   (8.20)

and

σ_1²(k) = Σ_{l=k+1}^{L−1} [l − μ_1(k)]² P(l|C_1) = Σ_{l=k+1}^{L−1} [l − μ_1(k)]² p(l)/ω_1(k).   (8.21)
Using the discriminant criterion

η(k) = σ_B²(k) / σ_T²,   (8.22)

where

σ_B²(k) = ω_0(k) [μ_0(k) − μ_T]² + ω_1(k) [μ_1(k) − μ_T]²,   (8.23)

and

σ_T² = Σ_{l=0}^{L−1} (l − μ_T)² p(l),   (8.24)
Otsu's algorithm aims to find the threshold level k* that maximizes the discriminant criterion given in Equation 8.22. Maximizing η reduces to maximizing σ_B²(k), because the value σ_T² does not vary with the threshold value k. The optimal threshold value k* is given as

k* = arg max_{0 ≤ k ≤ L−1} σ_B²(k).   (8.25)

Otsu's method of thresholding performs well in binarizing a large class of images.
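The procedure of Equations 8.13 through 8.25 may be sketched as follows. The simplification of σ_B² used in the code is algebraically equivalent to Equation 8.23 (after substituting Equations 8.15 and 8.16), and the bimodal histogram is a synthetic example.

```python
def otsu_threshold(hist):
    """Return the threshold maximizing the between-class variance
    sigma_B^2(k) of Equations 8.23 and 8.25, given a gray-level histogram."""
    total = float(sum(hist))
    p = [h / total for h in hist]                 # gray-level PDF
    mu_T = sum(l * pl for l, pl in enumerate(p))  # mean gray level, Eq. 8.19
    best_k, best_var = 0, -1.0
    omega = 0.0                                   # omega(k), Eq. 8.17
    mu = 0.0                                      # mu(k), Eq. 8.18
    for k in range(len(p) - 1):                   # k = L-1 would leave C1 empty
        omega += p[k]
        mu += k * p[k]
        if omega <= 0.0 or omega >= 1.0:
            continue
        # sigma_B^2 = omega0 (mu0 - mu_T)^2 + omega1 (mu1 - mu_T)^2, which
        # reduces algebraically to (mu_T omega - mu)^2 / [omega (1 - omega)].
        var_b = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
        if var_b > best_var:
            best_k, best_var = k, var_b
    return best_k

# Synthetic bimodal histogram: two well-separated gray-level populations.
hist = [0] * 256
for l in range(10, 20):
    hist[l] = 10
for l in range(200, 210):
    hist[l] = 10
k_star = otsu_threshold(hist)
```

For such a bimodal histogram, the selected threshold falls in the gap between the two populations, as expected from the discriminant criterion.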
Example: Chaudhuri et al. [36] applied the directional filtering procedure described above to a test pattern with line segments of various lengths, widths, and gray levels at four different angles, namely 0°, 45°, 90°, and 135°, as shown in Figure 8.4 (a). Significant overlap was included in order to test the performance of the filtering procedures under nonideal conditions. The log-magnitude Fourier spectrum of the test image is shown in part (b) of the figure; directional concentrations of energy are evident in the spectrum. The component image obtained using the filtering procedure for the angle band 125°–140° in the Fourier domain is shown in Figure 8.4 (c). It is evident that only those lines oriented at 45° in the image plane have been passed, along with some artifacts. The corresponding binarized image, using the threshold value given by Otsu's method described above, is shown in part (d) of the figure. Parts (e) and (f) of the figure show the binarized components extracted from the test image for the angle bands 80°–95° and 125°–140° in the image plane.

A close inspection of the component images in Figure 8.4 indicates that regions of overlap of lines oriented at different directions contribute to each direction. The 135° component image in Figure 8.4 (f) has the largest error due to its low gray level and the large extent of overlap with the other lines. The areas of the line segments extracted by the filtering procedure had errors, with respect to the known areas in the original test image, of 3.0%, −4.3%, −3.0%, and −28.6% for the 0°, 45°, 90°, and 135° components, respectively. The results of application of methods as above for the directional analysis of collagen fibers in ligaments are described in Section 8.7.1.

8.3.3 Design of fan filters

Fan filters are peculiar to 2D filtering: there is no direct 1D analog of this type of filtering. This fact presents some difficulties in designing fan filters, because we cannot easily extend the well-established concepts that are used to design 1D filters. The main problem in the design of fan filters lies in forming the filter at the origin (u, v) = (0, 0), or the DC point, in the Fourier domain. At the DC point, the ideal fan filter structure has a knife edge, which makes the filter nonanalytic; that is, if one were to approach the origin in the Fourier domain from any point within the passband of the fan, the limit would ideally
FIGURE 8.4
(a) A test image with overlapping directional components at 0°, 45°, 90°, and 135°. (b) Log-magnitude Fourier spectrum of the test image. Results of directional filtering (with the angle bands specified in the image domain): (c) 35°–50°. (d) Result in (c) after thresholding and binarization. (e) 80°–95° (binarized). (f) 125°–140° (binarized). Reproduced with permission from S. Chaudhuri, H. Nguyen, R.M. Rangayyan, S. Walsh, and C.B. Frank, "A Fourier domain directional filtering method for analysis of collagen alignment in ligaments", IEEE Transactions on Biomedical Engineering, 34(7): 509–518, 1987. © IEEE.

be unity. On the other hand, the limit as one approaches the origin from a
point within the stopband should be zero.
Various methods have been applied to overcome the problem stated above, such as the use of spectrum-shaping smoothing functions with the ideal fan filter; for example, Chaudhuri et al. [36] used the Butterworth and raised cosine functions, as described in Section 8.3.1. However, the performance of even the best spectrum-shaping function is limited by the tight spectral constraints imposed by the nonanalytic point at the origin. To obtain better spectral shaping, as in 1D, high-order FIR filters or low-order IIR filters may be used.

The discontinuity at the origin (u, v) = (0, 0) is the main problem in the design of recursive fan filters. With nonrecursive or FIR filters, this problem does not result in instability. With IIR filters, instability can occur if the filters are not properly designed. Stability of filters is usually defined as bounded-input bounded-output (BIBO) stability [577]. Filters that are BIBO stable ensure that all inputs that are not infinite in magnitude will result in outputs that are bounded.
2D filters are commonly derived from real, rational, continuous functions of the form

T(s_1, s_2) = Q(s_1, s_2) / P(s_1, s_2) = [ Σ_{m=0}^{M_2} Σ_{n=0}^{N_2} q_{mn} s_1^m s_2^n ] / [ Σ_{m=0}^{M_1} Σ_{n=0}^{N_1} p_{mn} s_1^m s_2^n ],   (8.26)

where s_1 and s_2 are the Laplace variables; the function T(s_1, s_2) is the Laplace-transformed version of the 2D partial differential equation that is related to the required filter response; Q(s_1, s_2) is the numerator polynomial resulting from the Laplace transform of the forward differential forms, expressed as a sum of products in s_1 and s_2 with the coefficients q_{mn}; M_2 and N_2 represent the orders of the polynomial Q in m and n, respectively; P(s_1, s_2) is the denominator polynomial obtained from the Laplace transform of the backward differential forms, expressed as a sum of products in s_1 and s_2 with the coefficients p_{mn}; and M_1 and N_1 represent the orders of the polynomial P in m and n, respectively. The corresponding frequency response function T(u, v) is obtained by the substitution of s_1 = j 2π u and s_2 = j 2π v.
The discontinuity required in the continuous prototype filter at the origin results in the filter transfer function T(s_1, s_2) having a nonessential singularity of the second kind at the origin. A nonessential singularity of the second kind occurs when the numerator and the denominator polynomials, Q(s_1, s_2) and P(s_1, s_2) in Equation 8.26, approach zero at the same frequency location (a_1, a_2), resulting in T(a_1, a_2) = 0/0.

The discrete form of the function in Equation 8.26 is obtained through the 2D version of the bilinear transform in 1D, given as

s_i = (z_i − 1)/(z_i + 1), for i = 1, 2,   (8.27)

to obtain the following discrete version of the filter:

H(z_1, z_2) = B(z_1, z_2) / A(z_1, z_2) = [ Σ_{m=0}^{M_2} Σ_{n=0}^{N_2} b_{mn} z_1^{−m} z_2^{−n} ] / [ Σ_{m=0}^{M_1} Σ_{n=0}^{N_1} a_{mn} z_1^{−m} z_2^{−n} ],   (8.28)

where the orders of the polynomials M_1, N_1, M_2, and N_2 are different from the corresponding limits of the continuous-domain filter in Equation 8.26 due to the bilinear transform.
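The rational form of Equation 8.28 can be evaluated numerically on the unit bicircle, z_1 = exp(jω_1) and z_2 = exp(jω_2), as sketched below; the function name is illustrative, and `b` and `a` hold the coefficient arrays b_mn and a_mn.

```python
import cmath

def freq_response(b, a, w1, w2):
    """Evaluate H(z1, z2) of Equation 8.28 at z1 = exp(j*w1),
    z2 = exp(j*w2), given numerator and denominator coefficient
    arrays b[m][n] and a[m][n]."""
    z1, z2 = cmath.exp(1j * w1), cmath.exp(1j * w2)
    num = sum(b[m][n] * z1 ** (-m) * z2 ** (-n)
              for m in range(len(b)) for n in range(len(b[0])))
    den = sum(a[m][n] * z1 ** (-m) * z2 ** (-n)
              for m in range(len(a)) for n in range(len(a[0])))
    return num / den

# Toy check: H = 0.5 + 0.5 z2^{-1}, a two-point average along one axis,
# has unit gain at DC and a null at the folding frequency in w2.
h_dc = freq_response([[0.5, 0.5]], [[1.0, 0.0]], 0.0, 0.0)
h_fold = freq_response([[0.5, 0.5]], [[1.0, 0.0]], 0.0, cmath.pi)
```

The same routine can be used to plot the magnitude response of the recursive fan filter whose coefficients are listed in Table 8.1.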
Filter design using nonessential singularities: The 2D filter design method of Bruton and Bartley [587] views the nonessential singularity inherent to fan filters not as an obstacle in the design process, but as being necessary in order to generate useful magnitude responses. The method relies on classical electrical circuit theory, and views the input image as a surface of electrical potential. The surface of electrical potential is then acted upon by a 2D network of electrical components such as capacitors, inductors, and resistors; the components act as integrators, differentiators, and dissipators, respectively. The central idea is to construct a network of components that will not add energy to the input, that is, to make a completely passive circuit. The passiveness of the resulting circuit will result in no energy being added to the system (that is, the filter is "nonenergic"), and thus will ensure that the filter is stable. Bruton and Bartley [587] showed that the necessary condition for a filter to be stable is that the admittance matrix that links the current and voltage surfaces must have negative Toeplitz symmetry, with reactive elements supplied by inductive elements that satisfy the nonenergic constraint.

The nonenergic condition ensures that the filter is stable, because it implies that the filter is not adding any energy to the system. The maximum amount of energy that is output from the filter is the maximum amount put into the system by the input image. The derivation given by Bruton and Bartley [587] is the starting point of a numerical method for designing stable, recursive, 2D filters. The derivation shows that recursive filters can be built reliably, as long as the condition above on the admittance matrix is met.
Bruton and Bartley [587] provided the design and coefficients of a narrow, 15° fan-stop filter, obtained using a numerical optimization method with the condition described above added. The method results in a recursive filter of small order with remarkable characteristics. A filter of fifth order in z_1 and second order in z_2 was designed using this method. The corresponding coefficients of the discrete function H(z_1, z_2) as in Equation 8.28 are listed in Table 8.1. The coefficients in the numerator and in the denominator each add up to zero at z_1 = 1 and z_2 = 1, confirming that the filter conforms to the requirement of the knife-edge discontinuity.

TABLE 8.1
Coefficients of the discrete-domain fan filter with a 15° fan stopband [542]. Data courtesy of N.R. Bartley [587].

b_mn     n = 0                  n = 1                  n = 2
m = 0    0.02983439380935332   -0.6855181788590949     0.7027763362367445
m = 1   -0.1469615281783627     3.397745073546105     -3.629041657524303
m = 2    0.2998008459584214    -6.767662643767763      7.49061181619684
m = 3   -0.3165448124171246     6.771378027945815     -7.725572280971142
m = 4    0.1724438585800683    -3.403226865621513      3.981678690012933
m = 5   -0.03857214742977072    0.6872844383634052    -0.82045337027416

a_mn     n = 0                  n = 1                  n = 2
m = 0    1.000000000000000     -0.82545044546957       0.03722700706807863
m = 1   -4.476280705843249      3.791276128445935     -0.161179724642936
m = 2    8.03143251366382      -7.00124160940265       0.2870351311929377
m = 3   -7.220029589516617      6.499290024154175     -0.2623441075303727
m = 4    3.252431250257176     -3.03268003600527       0.122960282645262
m = 5   -0.5875259501210567     0.5687740686107076    -0.0236904653803231
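The zero-sum property of the tabulated coefficients can be checked numerically: at z_1 = z_2 = 1, every power z_1^{−m} z_2^{−n} equals 1, so H(1, 1) is the sum of the b's over the sum of the a's, and both sums should vanish (to within the precision of the tabulated digits).

```python
# Coefficients from Table 8.1, stored as rows m = 0..5, columns n = 0..2.
b = [
    [0.02983439380935332, -0.6855181788590949, 0.7027763362367445],
    [-0.1469615281783627, 3.397745073546105, -3.629041657524303],
    [0.2998008459584214, -6.767662643767763, 7.49061181619684],
    [-0.3165448124171246, 6.771378027945815, -7.725572280971142],
    [0.1724438585800683, -3.403226865621513, 3.981678690012933],
    [-0.03857214742977072, 0.6872844383634052, -0.82045337027416],
]
a = [
    [1.000000000000000, -0.82545044546957, 0.03722700706807863],
    [-4.476280705843249, 3.791276128445935, -0.161179724642936],
    [8.03143251366382, -7.00124160940265, 0.2870351311929377],
    [-7.220029589516617, 6.499290024154175, -0.2623441075303727],
    [3.252431250257176, -3.03268003600527, 0.122960282645262],
    [-0.5875259501210567, 0.5687740686107076, -0.0236904653803231],
]

# Both sums should be (numerically) zero, per the knife-edge requirement.
sum_b = sum(sum(row) for row in b)
sum_a = sum(sum(row) for row in a)
```

Both sums come out on the order of 1e-6 or smaller, consistent with the truncation of the printed digits.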
Rotation of the filter and image: The fan filter design algorithm of Bruton and Bartley [587] provides filters only for a specific angle band; in the above case, for a 15° bandstop filter centered at 0° in the Fourier domain. In order to obtain filters with different central orientations, it is necessary to perform a rotation of the prototype filter. This may be achieved by using the following substitution in the analog prototype filter transfer function [542]:

s_1 ← s_1 cos θ + s_2 sin θ,
s_2 ← s_2,   (8.29)

where θ is the amount of rotation desired. The discrete version of the filter is obtained through the bilinear transform. The rotation step above is not the usual rotational transformation for filters, but it is necessary to use this transformation in order to ensure that the filter is stable. [If the normal rotational transformation were to be used, s_2 would also be rotated as

s_2 ← −s_1 sin θ + s_2 cos θ.   (8.30)

Then, values of s_2 could turn out to be negative; this would indicate that there would be energy added to the system, which would make the filter unstable.]
Suppose that the prototype filter of the form shown in Equation 8.26, given by T_0(s_1, s_2) and with the corresponding frequency response function given by T_0(u, v), is bounded by the straight lines L− and L+ passing through the origin at angles of −θ_p and +θ_p with the central line of the filter CL = 0°, where u = 0, as shown in Figure 8.5 (a). The lines L− and L+ are given by

u cos θ_p − v sin θ_p = 0 : L−,
u cos θ_p + v sin θ_p = 0 : L+.   (8.31)

As a result of the transformation in Equation 8.29, the center of the passband of the rotated frequency response T_r(u, v) is given as T_0(u′, v′) = T_0(u cos θ_c + v sin θ_c, v). Similarly, the straight lines L− and L+ are rotated to the straight lines given by

u cos θ_p cos θ_c + v (sin θ_c cos θ_p − sin θ_p) = 0 : L−,
u cos θ_p cos θ_c + v (sin θ_c cos θ_p + sin θ_p) = 0 : L+;   (8.32)

[see Figure 8.5 (b)].
A limitation to filter rotation as above is that rotating the filter by more than 45° would result in a loss of symmetry about the central line of the filter. The rotational warping effect may be compensated for in the prototype filter T_0(s_1, s_2). In the work of Rolston [542], the prototype filter was rotated by 45° in either direction to obtain filters covering an angle band of 90° (0°–45° and 135°–180° in the Fourier domain). Filtering in the range 45°–135° was achieved by rotating the image by 90° before passing it through the same filters as above.

The fan filter as above has unit gain, which has some drawbacks as well as some advantages. An advantage is that features in the filtered component images have no more intensity than the corresponding features in the original image, so that image components that exist in the original image in a particular angle band will not be attenuated at all, or will be attenuated only slightly. This also limits the number of iterations necessary to attenuate out-of-band components for a certain class of images. The unit gain is an advantage with images that have only a small depth of field, because a global threshold can be used for all component images to obtain a representation of the components that exist in each specific angle band.
FIGURE 8.5
(a) Original fan filter. (b) The fan filter after rotation by the transformation given in Equation 8.29. Figure courtesy of W.A. Rolston [542].

Example: A test image including rectangular patches of varying thickness and length at 0°, 45°, 90°, and 135°, with significant overlap, is shown in Figure 8.6 (a). The fan filter as described above, for the central angle of 90°, was applied to the test image. Because the fan filter is a fan-stop filter, the output of the filter was subtracted from the original image to obtain the fan-pass response. As evident in Figure 8.6 (b), the other directional components are still present in the result after one pass of the filter, due to the nonideal attenuation characteristics of the filter. The filter, however, can provide improved results with multiple passes or iterations. The result of filtering the test image nine times is shown in Figure 8.6 (c). The result possesses good contrast between the desired objects and the background, and may be thresholded to reject the remaining artifacts.

8.4 Gabor Filters


Most of the directional, fan, and sector filters that have been used in the Fourier domain to extract directional elements are not analytic functions. This implies that filter design methods in 1D are not applicable in 2D. Such filters tend to possess poor spectral response, and yield images with not only the desired directional elements but also artifacts.

One of the fundamental problems with Fourier methods of directional filtering is the difficulty in resolving directional content at the DC point (the origin in the Fourier domain) [587]. The design of high-quality fan filters requires
FIGURE 8.6
(a) A test image with overlapping directional components at 0°, 45°, 90°, and 135°. Results of fan filtering at 90° after (b) one pass, (c) nine passes. Figure courtesy of W.A. Rolston [542].
conflicting constraints at the DC point: approaching the DC point from any location within the passband requires the filter gain to converge to unity; however, approaching the same point from any location in the stopband requires the filter gain to approach zero. In terms of complex analysis, this means that the filter is not analytic, or that it does not satisfy the Cauchy-Riemann equations. This prevents extending results in 1D to problems in 2D.

The Gabor filter provides a solution to the problem mentioned above by increasing the resolution at the DC point. Gabor filters are complex, sinusoidally modulated, Gaussian functions that have optimal localization in both the frequency and space domains [389]. Gabor filters have been used for texture segmentation and discrimination [381, 495, 542, 543, 593, 594, 595], and may yield better results than simple Fourier methods for directional filtering.
Time-limited or space-limited functions have Fourier spectra of unlimited extent. For example, the time-limited rectangular pulse function transforms into the infinitely long sinc function. On the other hand, the time-unlimited sine function transforms into a delta function with infinite resolution in the Fourier domain. Infinitely long functions cannot be represented in finite calculating machinery. Gabor [596] suggested the use of time-limited functions as the kernels of a transform instead of the unlimited sine and cosine functions that are the kernel functions of the Fourier transform. The functional nature of the Fourier transform implies that there exists an "uncertainty principle", similar, but not identical, to the well-known Heisenberg uncertainty principle of quantum mechanics. Gabor showed that complex, sinusoidally modulated, Gaussian basis functions satisfy the lower bound on the fundamental uncertainty principle that governs the resolution in time and frequency, given by

Δt Δf ≥ 1/(4π),   (8.33)

where Δt and Δf are the time and frequency resolution, respectively. The uncertainty principle implies that there is a resolution limit between the spatial and the Fourier domains. Gabor proved that there are functions that can form the kernel of a transform that exactly satisfy the uncertainty relationship. The functions named after Gabor are Gaussian-windowed sine and cosine functions. By limiting the kernel functions of the Fourier transform with a Gaussian windowing function, it becomes possible to achieve the optimal resolution limit in both the frequency and time domains. The size of the Gaussian window function needs to be used as a new parameter, in addition to the frequency of the sine and cosine functions.
Gabor functions provide optimal joint resolution in both the Fourier and time domains in 1D, and form a complete basis set through phase shift and scaling or dilation of the original (mother) basis function. The set of functions forms a multiresolution basis that is commonly referred to as a wavelet basis (formalized by Mallat [386]).

Daugman [389] extended Gabor functions to 2D as 2D sinusoidal plane waves of some frequency and orientation within a 2D Gaussian envelope. Gabor functions have also been found to provide good models for the receptive fields of simple cells in the striate cortex [571, 389]; for this reason, there has been a significant amount of research conducted on using the functions for texture segmentation, analysis, and discrimination [381, 495, 542, 543, 593, 594, 595].
The extension of the principle above to 2D leads to space-limited plane waves or complex exponentials. Such an analysis was performed by Daugman [389]. The uncertainty relationship in 2D is given by

Δx Δy Δu Δv ≥ 1/(16π²),   (8.34)

where Δx and Δy represent the spatial resolution, and Δu and Δv represent the frequency resolution. The 2D Gabor functions are given as

h(x, y) = g(x′, y′) exp[−j 2π (Ux + Vy)],
(x′, y′) = (x cos θ + y sin θ, −x sin θ + y cos θ),   (8.35)

where (x′, y′) are the (x, y) coordinates rotated by an arbitrary angle θ, and

g(x, y) = [1/(2πλσ²)] exp{−[(x/λ)² + y²]/(2σ²)}   (8.36)

is a Gaussian shaping window with the aspect ratio λ, and U and V are the center frequencies in the (u, v) frequency plane. An example of the real part of a Gabor kernel function is given in Figure 8.7 with σ = 0.5, λ = 0.6, U = 1, V = 0, and θ = 0 (with reference to Equations 8.35 and 8.36). Another Gabor kernel function is shown in gray scale in Figure 8.8.
The imaginary component of the Gabor function is the Hilbert transform of its real component. The Hilbert transform shifts the phase of the original function by 90°, resulting in an odd version of the function.
The "Gabor transform" is not a transform as such; that is, there is usually no transform domain into which the image is transformed. The frequency domain is usually divided into a symmetric set of slightly overlapping regions at octave intervals. Examples of the ranges related to a few Gabor functions are shown in Figure 8.9; see also Figures 5.69, 8.57, and 8.68. It is evident that Gabor functions act as bandpass filters with directional selectivity.
8.4.1 Multiresolution signal decomposition

Multiresolution signal analysis is performed using a single prototype function called a wavelet. Fine temporal or spatial analysis is performed with contracted versions of the wavelet; on the other hand, fine frequency analysis is performed with dilated versions. The definition of a wavelet is flexible, and requires only that the function have a bandpass transform; thus, a wavelet at a particular resolution acts as a bandpass filter. The bandpass filters must
FIGURE 8.7
An example of the Gabor kernel with σ = 0.5, λ = 0.6, U = 1, V = 0, and θ = 0 (with reference to Equations 8.35 and 8.36). Figure courtesy of W.A. Rolston [542].

FIGURE 8.8
An example of a Gabor kernel, displayed as an image. Figure courtesy of W.A. Rolston [542].
FIGURE 8.9
Division of the frequency domain by Gabor filters. Two sets of oval regions are shown in black, corresponding to the passbands of three filters in each set, oriented at 0° and 90°. In each case, the three regions correspond to three scales of the Gabor wavelets. There is a 90° shift between the angles of corresponding filter functions in the space and frequency domains. Figure courtesy of W.A. Rolston [542].
have constant relative bandwidth, or constant quality factor. The importance of constant relative bandwidth in perceptual processes such as the auditory and visual systems has long been recognized [571]. Multiresolution analysis has also been used in computer vision for tasks such as segmentation and object recognition [284, 285, 288, 487]. The analysis of nonstationary signals often involves a compromise between how well transitions or discontinuities can be located, and how finely long-term behavior can be identified. This is reflected in the above-mentioned uncertainty principle, as established by Gabor.
Gabor originally suggested his kernel function to be used over band-limited, equally spaced areas of the frequency domain, or equivalently, with constant window functions. This approach is commonly referred to as the short-time Fourier transform (STFT), for short-time analysis of nonstationary signals [176, 31].
The 2D equivalent of the STFT is given by

F_S(x₀, y₀, u, v) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) w(x − x₀, y − y₀) exp[−j 2π (ux + vy)] dx dy,   (8.37)

where w is a windowing function and f is the signal (image) to be analyzed. The advantage of short-time (or moving-window) analysis is that, if the energy of the signal is localized in a particular part of the signal, it is also localized to a part of the resultant 4D space (x₀, y₀, u, v). The disadvantage of this method is that the same window is used at all frequencies, and hence, the
resolution is the same at all locations in the resultant space. The uncertainty principle does not allow for arbitrary resolution in both the space and frequency domains; thus, with this method of analysis, if the window function is small, the large-scale behavior of the signal is lost, whereas if the window is large, rapid discontinuities are washed out. In order to identify the fine or small-scale discontinuities in signals, one would need to use basis functions that are small in spatial extent, whereas functions of large spatial extent would be required to obtain fine frequency analysis. By varying the window function, one will be able to identify both the discontinuous and stationary characteristics of a signal. The notion of scale is introduced when the size of the window is increased by an order of magnitude. Such a multiresolution or multiscale view of signal analysis is the essence of the wavelet transform. Wavelet decomposition, in comparison to STFT analysis, is performed over regions in the frequency domain of constant relative bandwidth, as opposed to constant bandwidth.
In the problem of determining the directional nature of an image, we have to overcome the discontinuity in the frequency domain at the origin, or DC. Wavelet analysis is usually applied to identify discontinuities in the spatial domain; however, there is a duality in wavelet analysis, provided by the uncertainty principle, that allows discontinuity analysis in the frequency domain as well. In order to analyze the discontinuity at DC, large-scale or dilated versions of the wavelet need to be used. This is the dual of using contracted versions of the wavelet to analyze spatial discontinuities.
The wavelet basis is given by

h_{x₀, y₀, a₁, a₂}(x, y) = [1/√(a₁ a₂)] h( (x − x₀)/a₁, (y − y₀)/a₂ ),   (8.38)

where x₀, y₀, a₁, and a₂ are real numbers, and h is the basic or mother wavelet. For large values of a₁ and a₂, the basis function becomes a stretched or expanded version of the prototype wavelet, or a low-frequency function, whereas for small a₁ and a₂, the basis function becomes a contracted wavelet, that is, a short, high-frequency function.
The wavelet transform is then defined as

F_W(x₀, y₀, a₁, a₂) = [1/√(a₁ a₂)] ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) h( (x − x₀)/a₁, (y − y₀)/a₂ ) dx dy.   (8.39)

From this definition, we can see that wavelet analysis of a signal consists of the contraction, dilation, and translation of the basic mother wavelet, and computing the projections of the resulting wavelets onto the given signal.
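A discrete sketch of Equation 8.39 follows, using a 2D Mexican-hat function as an illustrative mother wavelet (the text leaves h unspecified); the function names are hypothetical.

```python
import numpy as np

def mother_wavelet(x, y):
    # Illustrative mother wavelet: a 2D Mexican hat (LoG-like), bandpass.
    r2 = x**2 + y**2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

def wavelet_coefficient(f, x0, y0, a1, a2):
    # Discrete analog of Equation 8.39:
    # F_W(x0, y0, a1, a2) = (1/sqrt(a1 a2)) sum f(x, y) h((x-x0)/a1, (y-y0)/a2)
    ny, nx = f.shape
    y, x = np.mgrid[0:ny, 0:nx]
    h = mother_wavelet((x - x0) / a1, (y - y0) / a2)
    return np.sum(f * h) / np.sqrt(a1 * a2)

# For an impulse image, the coefficient shrinks as the wavelet is dilated,
# reflecting the trade-off between spatial and frequency localization.
img = np.zeros((33, 33))
img[16, 16] = 1.0
c_fine = wavelet_coefficient(img, 16, 16, 1.0, 1.0)    # contracted wavelet
c_coarse = wavelet_coefficient(img, 16, 16, 4.0, 4.0)  # dilated wavelet
```

Note that, following Equation 8.39, the normalization 1/√(a₁a₂) divides the coefficient once per transform, so dilation by a factor of 4 in each axis scales the response to an impulse by 1/4.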
8.4.2 Formation of the Gabor filter bank
In the method proposed by Bovik et al. [594], the given image is convolved with the complex Gabor kernel, and the maximum magnitude of the result is taken as an indicator to identify changes in the dominant orientation of the image. In the work of Rolston and Rangayyan [542, 543], this method was observed to fail in the presence of broad directional components. The real component of the Gabor filter acts as a matched filter to detect broad directional components, and thus, is better suited to the identification of such regions.
The parameters of Gabor filters that may be varied are as follows. With reference to Equations 8.35 and 8.36, the parameter σ specifies the spatial extent of the filter; λ specifies the aspect ratio of the filter that modulates the σ value. If λ = 1, the θ parameter in Equation 8.35 need not be specified, because g(x, y) is then isotropic. In the frequency domain, such a filter results in an oriented filter occupying the middle subsection of the corresponding ideal fan filter, with the orientation being specified by tan⁻¹(V/U); see Figure 8.9. These parameters will then completely specify the Gabor filter bank.
In the directional analysis algorithm proposed by Rolston and Rangayyan [542, 543], only the real component of the Gabor wavelet is used, with σ = 1/0.6, λ = 1.0, and the primary orientation given by tan⁻¹(V/U) = 0°, 45°, 90°, and 135°. A given image is analyzed by convolving band-limited and decimated versions of the image with the same analyzing wavelet.
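A sketch of such a real-component filter bank follows. With λ = 1, the Gaussian window is isotropic, so θ need not be specified; the radial center frequency F is an illustrative choice. Note that, as stated in the caption of Figure 8.9, there is a 90° shift between the frequency-domain orientation tan⁻¹(V/U) and the spatial orientation of the features the filter favors: here, a horizontal line elicits the strongest response from the 90° filter.

```python
import numpy as np

def real_gabor(half, sigma, U, V):
    # Real part of Equations 8.35 and 8.36 with lambda = 1 (isotropic window).
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g * np.cos(2 * np.pi * (U * x + V * y))

F = 0.25  # radial center frequency in cycles/pixel (illustrative)
bank = {}
for deg in (0, 45, 90, 135):
    a = np.radians(deg)
    bank[deg] = real_gabor(8, sigma=1 / 0.6, U=F * np.cos(a), V=F * np.sin(a))

# Response of each filter at the center of a broad horizontal line.
img_y, img_x = np.mgrid[-8:9, -8:9]
line = (np.abs(img_y) <= 1).astype(float)   # horizontal line, 3 pixels thick
responses = {deg: np.sum(line * k) for deg, k in bank.items()}
best = max(responses, key=lambda d: responses[d])
```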
When a decimated image is convolved with a filter of constant spatial extent, relative to the original image, the filter is effectively scaled larger with respect to the decimated image. The advantage of this procedure is that filters with larger σ values, or with center frequencies closer to DC, can be simulated, instead of resorting to using filters of larger spatial extent. Filters with larger σ values correspond to portions of the frequency domain closer to the DC point. This effect is shown in Figure 8.9. The frequency plane is completely covered by the decimation and filtering operation. Each black oval in Figure 8.9 represents the frequency band or region that is being filtered by each decimation and filtering operation. The largest black oval at each orientation corresponds to one-to-one filtering, and the smaller ovals closer to the origin correspond to higher orders of decimation and filtering. Higher levels of decimation and filtering geometrically approach the DC point. Theoretically, arbitrary resolution of the DC point can be achieved using this method; however, the size of the original image imposes a limiting factor, because an image that has been digitized, for example, to 256 × 256 pixels, can only be decimated a few times before the details of interest are lost.
Another advantage of this method is that, because filtering is performed in the spatial domain, stable filters are obtained at DC, whereas, with strict Fourier-transform-based methods, the resolution at DC is reduced to one frequency increment or less, thereby making the Fourier-domain filters sensitive.
Because the filter bank works on decimated images, the computational load of convolution reduces geometrically at each successive stage of decimation.
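The decimation scheme can be sketched as follows. This simplified version omits the band-limiting (anti-alias) filtering that precedes decimation in the actual method, and only illustrates how the image size, and hence the cost of convolving with a fixed-size kernel, falls geometrically with each 2:1 decimation.

```python
import numpy as np

def decimate2(img):
    # Naive 2:1 decimation; the real method band-limits the image first.
    return img[::2, ::2]

img = np.random.default_rng(0).random((64, 64))
levels = [img]
for _ in range(3):
    levels.append(decimate2(levels[-1]))

sizes = [im.shape[0] for im in levels]
# With a fixed k x k kernel, per-level convolution cost scales with the
# number of pixels, so each decimation stage divides the cost by 4.
costs = [s * s for s in sizes]
```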
8.4.3 Reconstruction of the Gabor filter bank output

In the directional filtering and analysis procedures proposed by Rolston and Rangayyan [542, 543], the given image is decimated and convolved at each of three scales with a filter of fixed size. Decimation and filtering at each scale results in equal energy across all of the scales, due to the selection of the filter coefficients. Thus, after interpolation of the decimated and convolved images, the responses at the different scales can be added without scaling to obtain the overall response of the filter at the different scales.
After obtaining the responses to the filters at 0°, 45°, 90°, and 135°, a vector summation of the filter responses is performed, as shown in Figure 8.10. The vector summation is performed at each pixel in the original image domain to obtain a magnitude and phase (or angle) at each pixel.
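The per-pixel vector summation of Figure 8.10 can be sketched as follows; the response arrays here are illustrative placeholders for the interpolated filter-bank outputs.

```python
import numpy as np

def vector_sum(responses):
    # responses: dict mapping angle in degrees -> 2D response array.
    # Each orientation response is treated as a vector at its angle,
    # and the four vectors are summed at every pixel.
    vx = sum(r * np.cos(np.radians(a)) for a, r in responses.items())
    vy = sum(r * np.sin(np.radians(a)) for a, r in responses.items())
    return np.hypot(vx, vy), np.degrees(np.arctan2(vy, vx))

shape = (4, 4)
resp = {0: np.zeros(shape), 45: np.zeros(shape),
        90: np.ones(shape), 135: np.zeros(shape)}
mag, ang = vector_sum(resp)   # only the 90-degree channel responds
```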
FIGURE 8.10
Vector summation of the responses of Gabor filters at 0°, 45°, 90°, and 135°. Figure courtesy of W.A. Rolston [542].

The Gabor filters as described above do not have a perfect reconstruction condition. This results in a small amount of out-of-band energy interfering with the reconstruction, translated into artifacts in the reconstructed image. A thresholding operation can effectively remove such artifacts. Rolston and Rangayyan [542, 543] set the threshold as the maximum of the sum of the mean and standard deviation values across all of the component images.
Example: A test image including rectangular patches of varying thickness and length at 0°, 45°, 90°, and 135°, with significant overlap, is shown in Figure 8.11 (a). The output at each stage of the Gabor filter as described above for 0° is shown in the same figure. It is evident that, with one-to-one filtering, the narrow horizontal elements have been successfully extracted; however, only the edges of the broader components are present in the result. As the decimation ratio is increased, the broader components are extracted, which indicates that the filtering is effective in low-frequency bands closer to the DC point.
See Sections 5.10.2, 8.9, and 8.10 for further discussions and results related to Gabor filters.

8.5 Directional Analysis via Multiscale Edge Detection

Methods for edge detection via multiscale analysis using LoG functions are described in Section 5.3.3. Liu et al. [37, 531] applied further steps to the edge stability map obtained by this method (see Figure 5.16) to detect linear segments corresponding to collagen fibers in SEM images of ligaments; these steps are described in the following paragraphs.
Estimating the area of directional segments: Directional analysis requires the estimation of the area covered by linear segments in specified angle bands. For this purpose, the pattern boundaries obtained by the relative stability index (see Equation 5.26) may be used for computing the area of coverage. The directional information of a pattern is given by the directions of the gradients along the detected pattern boundaries.
Figure 8.12 (a) depicts the approach of Liu et al. [37, 531] to area computation, where two pattern-covered regions are denoted by R_A and R_B. The arrows along the boundaries indicate the directions of the gradients, which are computed from the original image on a discretized grid depending upon the application. The use of gradients enables the definition of the region enclosed by the boundaries. It is seen from Figure 8.12 (a) that a linear segment can be identified by a pair of line segments running in opposite directions. In order to identify the region, Liu et al. [37, 531] proposed a piecewise labeling procedure that includes two steps: line labeling and region labeling. In the line-labeling procedure [597], the full plane is sectioned into eight sectors (see Figure 8.13), and a set of templates is defined for pixel classification. The relative stability index is scanned left to right and top to bottom. To each element in the relative stability index, a line label is assigned according to its match with one of the templates. A structure array is constructed to store the descriptions of the lines at both the pixel and line levels. The structure array contains several description fields, such as the line starting location (xs, ys); the ending location (xe, ye); the orientation; and a corner label, which is also a structure array, containing the corner location and the lines that form the corner [597].
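The line and corner structure arrays described above can be sketched as simple data structures; the field names are illustrative, following the description in the text rather than the original implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Corner:
    # Corner location and the labels of the lines that form the corner.
    x: int
    y: int
    lines: List[int] = field(default_factory=list)

@dataclass
class Line:
    # One entry of the line structure array: start (xs, ys), end (xe, ye),
    # orientation sector (one of the eight sectors of Figure 8.13),
    # and any corner labels attached to the line.
    label: int
    start: Tuple[int, int]
    end: Tuple[int, int]
    sector: int
    corners: List[Corner] = field(default_factory=list)

line1 = Line(label=1, start=(0, 0), end=(10, 0), sector=0)
line1.corners.append(Corner(x=10, y=0, lines=[1, 2]))
```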
FIGURE 8.11
(a) A test image with overlapping directional components at 0°, 45°, 90°, and 135°. Results of Gabor filtering at 0° after decimation at (b) one-to-one, (c) two-to-one, and (d) four-to-one. (e) Overall response at 0° after vector summation as illustrated in Figure 8.10. Figure courtesy of W.A. Rolston [542].
FIGURE 8.12
(a) Computation of the area covered by directional segments. The arrows perpendicular to the pattern boundaries represent gradient directions used for detecting the interior of the linear segment over which the area is computed. The directional information associated with the pattern is also stored for analysis. (b) Computation of occluded segments based upon the detected T-joints. The subscripts denote different regions, and the superscripts denote the line numbers. Reproduced with permission from Z.-Q. Liu, R.M. Rangayyan, and C.B. Frank, "Directional analysis of images in scale-space", IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1185–1192, 1991. © IEEE.
Once the line segments have been labeled, a set of region descriptors is generated, which includes paired line labels, their starting and ending locations, orientation, and the area of the region [see Figure 8.12 (a)]. In region labeling, a line (for example, Line¹_A) is paired with an adjacent line (for example, Line²_A) having a direction that is in the sector opposite to that of Line¹_A (see Figure 8.13). The area of the linear segment (R_A) is then computed by counting the number of pixels contained by the pair of line segments. The orientation of the linear segment is indicated by the orientation of the pair of line segments. For instance, if Line¹_A and Line²_A form a pair, their associated region descriptor can be defined as

R_A { [(xs, ys), (xe, ye), θ]₁, [(xs, ys), (xe, ye), θ]₂, Λ },   (8.40)

where the subscripts 1 and 2 represent Line¹_A and Line²_A, respectively, and Λ is the area computed for the region R_A [see Figure 8.12 (a)].

FIGURE 8.13
The image plane is divided into eight sectors. Line¹ and Line² form a pair. Reproduced with permission from Z.-Q. Liu, R.M. Rangayyan, and C.B. Frank, "Directional analysis of images in scale-space", IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1185–1192, 1991. © IEEE.
Detection of occluded linear segments: It is often the case in natural images, particularly in SEM images of ligaments, that as linear patterns intersect, some segments of a linear pattern will be occluded. Analysis based upon incomplete patterns will introduce errors. It is desirable that the occluded linear segments be detected and included in the final analysis. In the methods of Liu et al. [37, 531], a simple interpolation method is used for this purpose.
Occluded segments typically appear as T-junctions in an edge image. (See Chen et al. [598] for a discussion on various types of junctions in images with oriented patterns.) As described above, a corner structure array is generated along with the line structure array. T-junctions can be readily detected by inspecting the corners and, if necessary, linking lines according to the following procedure. The lines that form T-junctions with a common line [see Figure 8.12 (b)] are considered to be occluded line segments, and are stored in a T-junction array structure:

T_k { Line¹_A, Line²_A; Line¹_B, Line²_B; ...; Lineᵏ },   (8.41)

where k indicates the kth T-junction structure, and the subscript indicates the region associated with the common line. After all the T-junction structures are constructed, they are paired by bringing together the T-junction structures with Lineᵏ that share the same region. Corresponding line elements in paired T-junction structures are then compared to detect lines that cut across the common region. This is performed by verifying if a line in one of the T-junction structures of the pair lies within a narrow cone-shaped neighborhood of the corresponding line in the other T-junction structure of the pair. If such a line pair is detected across a pair of T-junction structures, the lines are considered to be parts of a single line with an occluded part under the common region. Furthermore, if two such occluded lines form two regions (on either side of the common region), the two regions are merged by adding the occluded region, and relabeled as a single region. With reference to the situation depicted in Figure 8.12 (b), the above procedure would merge the regions labeled as R_D and R_E into one region, including the area occluded in between them by the common region.
Overview of the algorithm for directional analysis: In summary, the method proposed by Liu et al. [37, 531] for directional analysis via multiscale filtering with LoG functions (see Section 5.3.3) consists of the following steps:
1. Generate a set of zero-crossing maps (images).
2. Classify or authenticate the zero crossings.
3. Generate the adjusted zero-crossing maps from the original zero-crossing maps.
4. Generate a stability map from the set of adjusted zero-crossing maps.
5. Generate the relative stability index map from the stability map.
6. Compute the edge orientation from the relative stability index map and the original image.
7. Compute the orientational distribution of the segments identified.
8. Compute statistical measures to quantify the angular distribution of the linear patterns (such as entropy and moments, as described in Section 8.2).
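The entropy measure mentioned in step 8 can be sketched as follows, assuming the angular distribution is already available as per-angle-band areas; the use of base-2 logarithms is an illustrative choice.

```python
import numpy as np

def angular_entropy(areas):
    # Entropy of the normalized orientation histogram:
    # H = -sum p_i log2 p_i, with p_i the fraction of area in band i.
    p = np.asarray(areas, dtype=float)
    p = p / p.sum()
    p = p[p > 0]          # 0 log 0 is taken as 0
    return -np.sum(p * np.log2(p))

# A uniform distribution over 4 bands gives the maximum entropy (2 bits),
# indicating no preferred direction; a single dominant band gives zero.
h_uniform = angular_entropy([25.0, 25.0, 25.0, 25.0])
h_single = angular_entropy([100.0, 0.0, 0.0, 0.0])
```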
The methods described above were tested with the image in Figure 8.4 (a). The areas of the line segments extracted by the procedures had errors, with respect to the known areas in the original test image, of −2.0%, −6.3%, −3.4%, and −40.6% for the 0°, 45°, 90°, and 135° components, respectively. Liu et al. [37, 531] applied the procedures described above for the analysis of collagen remodeling in ligaments; this application is described in detail in Section 8.7.1.

8.6 Hough-Radon Transform Analysis

The Hough transform is a method of transforming an image into a parameter domain where it is easier to obtain the desired information in the image; see Section 5.6.1. The main drawback of the Hough transform is that it is primarily applicable to binary images; as a consequence, the results are dependent upon the binarization method used for segmenting the image. Rangayyan and Rolston [599, 542] proposed the use of a combination of the Hough transform and the Radon transform (see Section 9.1) that overcomes this drawback; their methods and the results obtained are described in the following paragraphs.

8.6.1 Limitations of the Hough transform

With reference to Figure 8.14, we see that a straight line can be specified in terms of its orientation θ with respect to the x axis, and its distance ρ from the origin. In this form of parameterization, any straight line is bounded in angular orientation by the interval [0, π], and bounded in distance by the Euclidean distance to the farthest point of the image from the center of the image. The equation for an arbitrary straight-line segment in the image plane is given by

ρ = x cos θ + y sin θ.   (8.42)

For a specific point in the image domain (xᵢ, yᵢ), we obtain a sinusoidal curve in the Hough domain (θ, ρ). Each point (xᵢ, yᵢ) lying on a straight line with θ = θ₀ and ρ = ρ₀ in the image domain corresponds to a sinusoidal curve in the (θ, ρ) domain specified by

ρ₀ = xᵢ cos θ₀ + yᵢ sin θ₀.   (8.43)
Through Equation 8.43, it is evident that, for each point in the image domain, the Hough transform performs a one-to-many mapping, resulting in a modulated sum of sinusoids in the Hough domain.
The Hough transform is often referred to as a voting procedure, where each point in the image casts votes for all parameter combinations that could have produced the point. All of the sinusoids resulting from the mapping of a straight line in the image domain have a common point of intersection at (θ₀, ρ₀) in the Hough domain. Linear segments in the spatial domain correspond to large-valued points in the Hough domain; see Figures 5.39 and 5.40. Thus, the problem of determining the directional content of an image becomes a problem of peak detection in the Hough parameter space.
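The voting procedure of Equations 8.42 and 8.43 can be sketched as a minimal accumulator for a binary image; the quantization (1° steps, integer ρ bins) and the origin convention are illustrative choices.

```python
import numpy as np

def hough(binary, n_theta=180):
    # Each nonzero pixel votes along its sinusoid rho = x cos t + y sin t;
    # collinear pixels accumulate into a common peak at (theta_0, rho_0).
    rows, cols = binary.shape
    rho_max = int(np.ceil(np.hypot(rows, cols)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1
    return acc, rho_max

img = np.zeros((32, 32), dtype=int)
img[10, :] = 1                          # horizontal line: y = 10
acc, rho_max = hough(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
# The peak falls at theta = 90 degrees, rho = 10, as Equation 8.43 predicts.
```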


FIGURE 8.14
Parameters in the representation of a straight line in the Hough transform and a ray in the Radon transform. Reproduced with permission from R.M. Rangayyan and W.A. Rolston, "Directional image analysis with the Hough and Radon transforms", Journal of the Indian Institute of Science, 78:17–29, 1998. © Indian Institute of Science.

From the properties listed above, the Hough transform appears to be the ideal tool for detecting linear components in images. However, there are some limitations to this approach. The results are sensitive to the quantization intervals used for the angle θ and the distance ρ. Decreasing the quantization step for θ increases the computation time, because the calculation of ρ needs to be performed for each value of θ and each pixel. Another problem with this method is the "crosstalk" between multiple straight lines in the Hough domain. If the image contains several lines parallel to the x axis, they would correspond to several peak values in the Hough domain at differing ρ values for θ = 90°. However, the Hough transform would also detect false linear segments for θ = 0°, which would show up as smaller peaks at a continuum of ρ values in the Hough domain; see Figure 8.15. This is caused by the fact that the Hough transform finds line segments at specific (θ, ρ) values that are not necessarily contiguous. Another form of crosstalk can occur within a broad directional element: several straight lines may be perceived within a broad element, with angles spread about the dominant orientation of the element, as well as at several other angles; see Figure 8.16.

FIGURE 8.15
Crosstalk between multiple lines causing the Hough transform to detect false lines. In the case illustrated, several short segments of vertical lines are detected, in addition to the true horizontal lines.

The Hough transform has the desirable feature that it handles the occlusion
of directional components gracefully, because the size of the parameter peaks
is directly proportional to the number of matching points of the component.
The Hough transform also has the feature that it is robust to the addition
of random pixels from poor segmentation, because random image points are
unlikely to contribute coherently to a single point in the parameter space.

8.6.2 The Hough and Radon transforms combined

Deans [600] showed that there is a direct relationship between the Hough and Radon transforms. The Hough transform may be viewed as a special case of the Radon transform, but with a different transform origin, and performed on a binary image. Typically, the Radon transform is defined with its transform origin at the center of the original image, whereas the Hough transform is defined with its transform origin at the location of the image where the row and column indices are zero. Thus, the distance ρ as in Equation 8.42 for a 256 × 256 image for the Hough transform would be calculated relative to the

FIGURE 8.16
False detection of straight lines at several angles (dashed lines) within a broad
linear feature by the Hough transform.

(0, 0) point in the original image, whereas, for the Radon transform, the ρ value would be calculated relative to the (128, 128) point; see Figure 8.14.
In the method proposed by Rangayyan and Rolston [542, 599], a Hough-Radon hybrid transform is computed by updating the (θᵢ, ρᵢ) parameter point by adding the pixel intensity, and not by incrementing by one as with the Hough transform. In this sense, brighter lines correspond to larger peaks in the Hough-Radon domain. The Hough-Radon space is indexed from 0° to 180° along one axis, and from −N to √(M² + N²) along the other, for an image with M rows and N columns, as shown in Figure 8.17.
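The intensity-weighted accumulation and the shadow space described below can be sketched together. This is a simplified version: the ρ indexing offset here is symmetric rather than the −N to √(M² + N²) range quoted above, and the quantization is illustrative.

```python
import numpy as np

def hough_radon(image, n_theta=180):
    # hr: each pixel adds its gray level to its (theta_i, rho_i) cells.
    # shadow: the same voting pattern, but counting pixels (no weighting).
    rows, cols = image.shape
    rho_max = int(np.ceil(np.hypot(rows, cols)))
    thetas = np.deg2rad(np.arange(n_theta))
    hr = np.zeros((2 * rho_max + 1, n_theta))
    shadow = np.zeros_like(hr)
    ys, xs = np.nonzero(image)              # pixels with gray level > 0
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas)
                        + y * np.sin(thetas)).astype(int) + rho_max
        hr[rhos, np.arange(n_theta)] += image[y, x]   # intensity votes
        shadow[rhos, np.arange(n_theta)] += 1         # pixel counts
    return hr, shadow

img = np.zeros((16, 16))
img[5, :] = 3.0                             # a bright horizontal line
hr, shadow = hough_radon(img)
# The brightest peak in hr is the shadow peak scaled by the gray level.
```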
The generation of the Hough-Radon space produces relative intensities of
the directional features in the given image. An example of the Hough-Radon
space is shown in Figure 8.18 for a simple test pattern. In directional analysis,
it would be of interest to obtain the number of pixels or the percentage of
the image area covered by linear segments within a particular angle band.
Therefore, it is necessary to form a shadow parameter space with the numbers
of the pixels that are in a particular cell in the parameter space. The shadow
parameter space is the Hough transform of the image with no accompanying
threshold.
It is necessary to form both the Hough-Radon transform space and the
Hough-transform shadow space, because performing only the Hough transform
on an unthresholded image will produce, for most images, a transform with
little information about the image. Computing the shadow parameter space
is the same as performing the Hough transform on a thresholded image for
all pixels with a gray level greater than zero. The Hough-Radon transform,
however, facilitates the differentiation between the light and dark regions in an

FIGURE 8.17
Mapping of a straight line from the image domain to the Hough-Radon space. Reproduced with permission from R.M. Rangayyan and W.A. Rolston, "Directional image analysis with the Hough and Radon transforms", Journal of the Indian Institute of Science, 78:17–29, 1998. © Indian Institute of Science.

FIGURE 8.18
(a) A test image with five line segments. (b) The Hough-Radon space of the image. (c) Filtered Hough-Radon space. (d) Rose diagram of the directional distribution. See also Figure 8.17. Reproduced with permission from R.M. Rangayyan and W.A. Rolston, "Directional image analysis with the Hough and Radon transforms", Journal of the Indian Institute of Science, 78:17–29, 1998. © Indian Institute of Science.
image, thus retaining all of the information about the image while performing
the desired transform. The Hough transform is needed for further processing
when deriving the numbers of pixels regardless of the related intensity.
From the result shown in Figure 8.18 (b), we can see the high level of crosstalk in the upper-right quadrant. From Figure 8.17, we see that this section maps to the angle band [100°, 165°]. This is due to the Hough transform's tendency to identify several lines of varying orientation within a broad linear segment, as illustrated in Figure 8.16; this is both a strength and a weakness of the Hough transform. A filtering procedure is described in the following subsection to reduce this effect.

8.6.3 Filtering and integrating the Hough-Radon space

Although the Hough-Radon transform is a powerful method for determining the directional elements in an image, it lacks, by itself, the means to eliminate elements that do not contribute coherently to a particular directional pattern. This is due to the transform being performed on all points in the given image: simple integration along a column of the transform space will include all the points in the image. A second step is needed to eliminate those pixels that do
not contribute significantly to a particular pattern. Leavers and Boyce [601] proposed a simple 3 × 3 filter to locate maxima in the Hough space that correspond to connected collinearities in an "edge image" space.
The filter is derived from the (θ, ρ) parameterization of lines and the expected shape of the distribution of counts in the accumulator of the Hough space. For a linear element in an image, the expected shape is a characteristic "butterfly", a term commonly used to describe the typical falloff from a focal accumulator point, as shown in Figure 8.17. It was shown by Leavers and Boyce [601] that, for any line in the image space, the extent of the corresponding butterfly in the Hough domain is limited to one radian, or approximately 58°, about the corresponding focal accumulator point.
The 2D filter

[ 0  −2   0 ]
[ 1  +2   1 ]   (8.44)
[ 0  −2   0 ]

provides a high positive response to a distribution that has its largest value at the focal point, falls off to approximately 50% on either side, and vanishes rapidly above and below the focal point. A drawback of this filter is that it was designed for detecting peaks in the Hough space corresponding to lines of one pixel width. In the example shown in Figure 8.18 (b), we can see that the broad directional components in the test image correspond to broad peaks in the Hough-Radon domain. This results in the filter of Equation 8.44 detecting only the edges of the peaks in the Hough domain; an example of this effect is shown in Figure 8.18 (c).
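Applying the filter of Equation 8.44 to a synthetic butterfly-shaped peak can be sketched as follows; the direct double loop stands in for a library convolution to keep the example dependency-free.

```python
import numpy as np

# The butterfly filter of Equation 8.44; the positive middle row rewards
# the focal point and its two neighbors along theta, while the -2 taps
# penalize energy directly above and below along rho.
BUTTERFLY = np.array([[0, -2, 0],
                      [1,  2, 1],
                      [0, -2, 0]])

def filter_parameter_space(acc):
    out = np.zeros_like(acc, dtype=float)
    for i in range(1, acc.shape[0] - 1):
        for j in range(1, acc.shape[1] - 1):
            out[i, j] = np.sum(BUTTERFLY * acc[i - 1:i + 2, j - 1:j + 2])
    return out

# A butterfly-shaped peak: large at the focal cell, ~50% on either side
# along theta (columns), and empty above and below along rho (rows).
acc = np.zeros((7, 7))
acc[3, 2:5] = [5, 10, 5]
resp = filter_parameter_space(acc)   # strongest response at the focal cell
```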
The lter in Equation 8.44 is also sensitive to quantization of the incre-
ments. This can be seen in the vertical streaks of intensity in Figure 8.18 (c).
Analysis of Directional Patterns 677
The vertical streaks occur at theta values that correspond to points where the
value of rho in Equation 8.43 approaches an integral value. Increasing the spa-
tial extent of the filter would reduce its sensitivity to noise, as well
as improve its ability to detect larger components in the Hough-
Radon transform space. Detecting larger components in the Hough-Radon
domain corresponds to detecting broad directional image components.
In the method proposed by Rangayyan and Rolston [542, 599], after the
Hough-Radon transform has been filtered using the filter in Equation 8.44,
the result is normalized to the range of 0.0 to 1.0 and then multiplied, point-
by-point, with the shadow Hough transform mentioned earlier. This step is
performed in order to obtain the relative strength of the numbers of pixels
at each of the detected peaks. This step also reduces the accumulated quan-
tization noise from the Hough-Radon transformation and the filtering steps.
Although peaks may be detected in a region of the Hough-Radon domain,
there may be few corresponding pixels in the original image that map to such
locations. Multiplying noisy peaks by areas that contain few points in the
original image will reduce the final count of pixels in the corresponding angle
bands.
The final integration step is a simple summation along each of the columns
of the filtered parameter space. Because the Hough transform generates a
parameter space that is indexed in the column space from 0 to 180 degrees, each of
the columns represents a fraction of a degree depending upon the quantization
interval selected for the transform. Also, because the Hough transform is a
voting process, the peaks selected will contain some percentage of the pixels
that are contained in the directional components.
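The filtering-and-integration steps described above can be sketched as follows in Python with NumPy. The function name and array layout (rows indexed by rho, columns by theta) are our assumptions, the shadow Hough transform is taken as a precomputed array, and the normalization is one plausible reading of the text (negative filter responses are discarded before scaling to [0, 1]); this is a sketch, not the authors' exact implementation.

```python
import numpy as np

# 3 x 3 "butterfly" filter of Equation 8.44 (rows: rho, columns: theta).
BUTTERFLY = np.array([[0.0, -2.0, 0.0],
                      [1.0,  2.0, 1.0],
                      [0.0, -2.0, 0.0]])

def filter_and_integrate(hough, shadow):
    """Filter a Hough-Radon accumulator, weight it by the shadow Hough
    transform, and sum each theta column to get a directional distribution.

    Both inputs are (n_rho, n_theta) arrays.
    """
    n_rho, n_theta = hough.shape
    filtered = np.zeros_like(hough, dtype=float)
    # Plain 3 x 3 correlation; the one-pixel border is left at zero.
    for r in range(1, n_rho - 1):
        for t in range(1, n_theta - 1):
            filtered[r, t] = np.sum(BUTTERFLY * hough[r - 1:r + 2, t - 1:t + 2])
    # Keep positive (peak-like) responses and normalize to [0.0, 1.0].
    filtered = np.clip(filtered, 0.0, None)
    peak = filtered.max()
    if peak > 0.0:
        filtered /= peak
    # Point-by-point multiplication with the shadow transform, then
    # integration along each column of the parameter space.
    return (filtered * shadow).sum(axis=0)
```

A lone accumulator peak then yields a directional distribution concentrated at the corresponding theta column.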
Example: Figure 8.18 shows the results of the methods described above for
a simple test image. The problem of the small-extent filter mentioned above
is evident: the filter detects only the edges of the transformed components
and neglects the central section of each of these components. From the rose
diagram shown in Figure 8.18 (d), we can see that the filter has detected
the relative distribution of the linear components; however, there is a large
amount of smearing of the results, leading to poor differentiation between the
angle bands. We can also see from the rose diagram that there is a large
amount of crosstalk around 135 degrees, where the result should be zero. This effect
is probably due to a combination of crosstalk as well as quantization noise in
the Hough-Radon transform.
For the ligament image shown in Figure 8.19 (a), the method has performed
reasonably well. Regardless, the results contain artifacts due to the limitation
of the algorithm with broad directional components, as described above.
See Rangayyan and Krishnan [360] for an application of the Hough-Radon
transform for the identification of linear, sinusoidal, and hyperbolic frequency-
modulated components of signals in the time-frequency plane.
678 Biomedical Image Analysis

FIGURE 8.19
(a) An SEM image of a normal ligament with well-aligned collagen fibers.
(b) The Hough-Radon space of the image. (c) Filtered Hough-Radon space.
(d) Rose diagram of directional distribution. See also Figure 8.17. Reproduced
with permission from R.M. Rangayyan and W.A. Rolston, "Directional im-
age analysis with the Hough and Radon transforms", Journal of the Indian
Institute of Science, 78: 17-29, 1998. (c) Indian Institute of Science.

8.7 Application: Analysis of Ligament Healing


The collagenous structure of ligaments [36]: Virtually all connective
tissues in the human body have some type of fiber-filled matrix. The fibers
consist of various sizes and shapes of chemically distinct proteins known as
collagen [602], with augmentation by other fibrous materials such as elastin.
There is a complex interaction between these materials and the nonfibrous
"ground substance" in all tissues (water, proteoglycans, glycoproteins, and
glycolipids), giving each tissue relatively unique mechanical properties. As
with any composite fiber-reinforced material, the quantity and quality, as
well as the organization of the reinforcing fibers, have considerable influence
on the mechanical behavior of ligaments [603].
Ligaments are highly organized connective tissues that stabilize joints. Lig-
aments normally consist of nearly parallel arrangements of collagen fibers that
are attached to bone on both sides of a joint, serve to guide the joint through
its normal motions, and prevent its surfaces from becoming separated. Colla-
gen fibers and their component fibrils make up the proteinaceous "backbone"
of ligaments, and provide the majority of their resistance to tensile loading.
The spatial orientation of collagen fibrils is an important factor in determining
tissue properties. Ligaments need to be loose enough to allow joints to move,
but have to be tight enough to prevent the joint surfaces from separating.
Injuries to ligaments are common, with the normal, highly structured tis-
sue being replaced by relatively disordered scar tissue. The scar tissue in
ligaments has many quantitative and qualitative differences from the normal
ligament [604], but, as with scar in other tissues [605], the relative disorga-
nization of its collagen-fiber backbone may be among the most critical. The
loose meshwork of the scar may not be able to resist tensile loads within the
same limits of movement and deformation as a normal ligament. The injured
or healing joint, therefore, may become loose or unstable.
The fine vascular anatomy of ligaments [414]: When a ligament is
damaged by injury, the extent of the healing response determines whether and
when normal function of the ligament will return. A critical factor thought to
be important for the healing of a ligament is its blood supply, which exchanges
oxygen, nutrients, and proteins with ligament tissue [606]. The nature of
ligament vascularity has been qualitatively assessed in a few studies [607,
608, 609]. However, despite the potential importance of ligament blood-vessel
anatomy, not much quantitative information is available on either normal or
healing vascular anatomy of ligaments.
With respect to normal ligament vascularity, the medial collateral ligament
(MCL) of the knee in a rabbit model commonly used for ligament-healing
studies has previously been characterized qualitatively [609]. Excluding its
bony attachments, the MCL complex of the rabbit is composed of two main
tissue types: epiligament and ligament tissue proper. The epiligament is a
thin layer of loose connective tissue surrounding the superficial surface of the
ligament proper. Blood vessels in the normal (uninjured) ligament tissue
proper appear sparse, and are oriented parallel to the long axis of the liga-
ment in an organized fashion, whereas blood vessels in the normal epiligament
appear more abundant, and are oriented in a less organized fashion [609]. In
ligament scar tissue, blood vessels have been described to be larger, more
abundant, and more disorganized early in the healing process [607]. The
need for a greater supply of materials to the ligament for early healing
apparently leads to the formation of many new blood vessels, but with
longer-term maturation of healing tissue, the vascular supply decreases and
vascularity may eventually return to normal [607].
Some of the qualitative and quantitative differences between normal and
healing ligaments have been described in an animal model by Frank et al. [32].
Quantitative studies on collagen fiber organization were conducted by Chaud-
huri et al. [36], Frank et al. [35], and Liu et al. [37]; the methods and results
of these studies are described in Section 8.7.1. Eng et al. [414] and Bray
et al. [415] conducted studies with the aims of developing a method to analyze
quantitatively the variations in vascularity in normal and healing ligaments;
correlating such information with other aspects of ligament healing in a well-
characterized ligament-healing model; predicting ligament healing based upon
the vascular response to injury; and developing better methods to optimize lig-
ament vascularity after injury. The related methods and results are discussed
in Section 8.7.2.

8.7.1 Analysis of collagen remodeling


Tissue preparation and imaging: The animal model selected in the studies
of Chaudhuri et al. [36], Frank et al. [35], and Liu et al. [37] was the ruptured
and unrepaired MCL in the six-month-old female New Zealand white rabbit.
Under general anesthesia and with sterile techniques, the right MCL was
exposed through longitudinal medial incisions in the skin and fascia. The right
MCL was completely ruptured by passing a braided steel wire beneath the
ligament, and failing the ligament with a strong upward pull on both ends of
the suture. The left MCL was not ruptured and served as a normal control.
The injured MCL was allowed to heal for a specified period. The animal
was then sacrificed by intravenous injection of 375 mg of phenobarbital, and
the healing (right) and normal control (left) MCLs were harvested, as follows.
The right and left MCLs were exposed through medial incisions in the skin
and fascia. The MCLs were fixed in situ by dropping a fresh solution of
2.5% glutaraldehyde in 0.1 M cacodylate buffer with pH = 7.4 onto their
surfaces. The MCLs were then removed at their insertions, placed in 2.5%
glutaraldehyde in 0.1 M cacodylate buffer with pH = 7.4 for three hours,
and dehydrated in increasing concentrations of ethanol (30%, 50%, 75%, and
100%). Each fixed and dehydrated ligament was then frozen quickly in liquid
nitrogen, and fractured longitudinally to expose the internal collagen fiber and
component fibril arrangement along its length. The fractured tissue was then
critical-point dried, aligned, mounted, and sprayed with gold/palladium.
In each case, the longitudinal axis of the tissue was distinguishable at low
magnification, so as to allow the orientation of high-magnification photographs
relative to that axis.
Specimens were viewed under a Hitachi S-450 SEM. In order to select parts
of the ligaments for imaging, pairs of (x, y) coordinates were obtained using a
random number generator. In every image, the vertical axis of the photograph
was aligned with the longitudinal axis of the original ligament tissue. A
number of photographs were taken randomly in the midsubstance area of each
of the healing and normal control MCLs at a magnification of 7,000. This
magnification was experimentally chosen to give a good compromise between
the resolution of the collagen fibrils and the area of the tissue being sampled.
The resulting images were then digitized into 256 x 256 arrays.
Directional analysis: A sample image of a normal ligament is shown in
Figure 8.20 (a). Parts (b) and (c) of the figure show two binarized component
images obtained via directional filtering for the angle bands 75-90 degrees and 0-
15 degrees, using the sector-filtering methods described in Section 8.3.1. Directional
components were obtained over 12 angle bands spanning the full range of
[0, 180] degrees. The fractional fiber-covered areas in the components are shown in
the form of a rose diagram in part (d) of the figure. The rose diagram indicates
that most of the collagen fibers in the normal ligament tissue sample are
aligned close to the long axis of the ligament (90 degrees in the image plane).
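The fractional fiber-covered area behind such a rose diagram is simply the fraction of on-pixels in each binarized directional component. The sketch below (the function name is ours, not from the cited studies) assumes the 12 component images are already available as arrays with nonzero pixels marking fibers.

```python
import numpy as np

def rose_diagram(components):
    """Fractional fiber-covered area per angle band, given a list of
    binarized directional component images (nonzero = fiber pixels).
    Returns one fraction per band, in band order (e.g., 0-15 degrees,
    15-30 degrees, and so on)."""
    fractions = []
    for comp in components:
        comp = np.asarray(comp)
        fractions.append(np.count_nonzero(comp) / comp.size)
    return np.array(fractions)
```

The band with the largest fraction then indicates the dominant fiber orientation.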
A sample image of a one-week scar tissue sample is shown in Figure 8.21 (a).
It is readily seen that the collagen fibers in the scar tissue do not have any
dominant or preferred orientation. Two binarized component images for the
angle bands 75-90 degrees and 0-15 degrees are shown in parts (b) and (c) of the figure.
The rose diagram in part (d) of the figure shows that the angular distribution
of collagen in the healing tissue is almost uniform or random.
Frank et al. [35] conducted a detailed assessment of collagen realignment
in healing ligaments in response to three methods of treatment: immobiliza-
tion of the affected joint for three weeks or six weeks, and no immobilization.
Scar tissue samples were obtained from three rabbits each at three, six, and
14 weeks after injury, except in the second case, which lacked the three-week
samples. Sample images from the groups with no immobilization and immo-
bilization for three weeks are shown in Figure 8.22. Sets of 10 images were
obtained from each sample at randomly chosen locations. Composite rose
diagrams were computed for each group, and are shown in Figure 8.23 for the
groups with no immobilization and immobilization for three weeks. Figures
8.22 and 8.23 demonstrate the collagen remodeling or realignment process in
healing ligaments.
Plots of the entropy of the rose diagrams for all the cases in the study are
shown in Figure 8.24. The plots clearly demonstrate a reduction in entropy,
indicating a return to orderly structure, as the healing time increases. Im-
mobilization of the affected joint for three weeks after injury has resulted in

FIGURE 8.20
(a) A sample image showing collagen alignment in a normal ligament. Bina-
rized directional components in the angle bands (b) 75-90 degrees, and (c) 0-15 degrees.
(d) Fractional fiber-covered areas in the form of a rose diagram. Figure courtesy
of S. Chaudhuri [610].

FIGURE 8.21
(a) A sample image showing collagen alignment in ligament scar tissue. Bina-
rized directional components in the angle bands (b) 75-90 degrees, and (c) 0-15 degrees.
(d) Fractional fiber-covered areas in the form of a rose diagram. Figure courtesy
of S. Chaudhuri [610].
entropy values that are close to the values at 14 weeks in all cases, and well
within the range for normal ligaments (the shaded region in Figure 8.24).
The results indicate that immobilization of the affected joint for three weeks
promotes the healing process, and that immobilization for the longer period
of six weeks does not provide any further advantage.
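The entropy measure used here can be computed from a rose diagram in a few lines. Base-2 logarithms are assumed (consistent with the maximum of about 4.91 bits quoted later in this chapter for 30 bins), so a 12-band rose diagram has a maximum entropy of log2(12), approximately 3.58 bits, for a uniform (fully disordered) distribution; the function name is ours.

```python
import math

def rose_entropy(areas):
    """Entropy (in bits) of a rose diagram, given the per-band areas or
    counts. Low entropy indicates well-aligned fibers; the maximum,
    log2(number of bands), corresponds to a uniform distribution."""
    total = float(sum(areas))
    h = 0.0
    for a in areas:
        p = a / total
        if p > 0.0:
            h -= p * math.log2(p)
    return h
```

A normal ligament, with most fibers in one or two bands, thus scores much lower than a uniformly distributed scar sample.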
Among the limitations of the methods described above, it should be noted
that there is a certain degree of depth to the micrographs analyzed. The con-
tribution of fibril components at varying depths to the different angle bands
is affected by a number of factors: the intensity of the electron beam in the
microscope, the depth of focus, photographic methods, and image digitization
parameters. The final thresholding scheme applied to the filtered component
images, being adaptive in nature, affects the various component images differ-
ently depending upon their content; this aspect cannot be controlled without
introducing bias. Artifacts are also caused by tissue fixation and handling
(spaces between fibrils, fiber disruption, and surface irregularities). Regard-
less, the results provide important quantitative information that can assist in
the understanding of ligament structure and healing.

8.7.2 Analysis of the microvascular structure


Tissue preparation and imaging: In the works of Eng et al. [414] and
Bray et al. [415], adult New Zealand White rabbits were used. Only 12-
month-old females were used to reduce variability between animals due to age
or gender differences. The selected rabbits received a "gap injury" in the right
MCLs by surgical removal of a 4 mm segment of the tissue; see Figure 8.25.
The remaining gap-injury site was marked by means of four small (6-0 nylon)
sutures attached to the original cut ligament ends. The injured animals were
allowed to heal for periods of three, six, 17, or 40 weeks. The MCLs from the
injured animals were then removed as described below. Selected uninjured
animals were used as normal controls. The right and left MCLs from these
animals were removed by the procedure described below.
The control and healing rabbits were sacrificed with an overdose of sodium
pentobarbital (Euthanyl, 2.5 cc/4.5 kg). Using a constant-volume pump,
India-ink solution was perfused through the femoral arteries of each hind limb
to vessels in the knee ligaments according to an established protocol [609].
When complete perfusion of the limbs was evident (by noting when the nail
beds of claws were blackened), perfusion was stopped, and the entire hind
limb was removed, placed in a container at 4 degrees C for a minimum of four
hours (to allow the ink solution to set), and subsequently the entire MCL
was removed as shown in Figure 8.26. The left MCLs of the injured animals
were not considered to be normal due to possible effects of injury to the
opposite (contralateral) knee, and were used as contralateral control MCLs.
The ligament scar material that developed over the gap-injury site on the
right MCL was labeled as the midsubstance scar (see Figure 8.25). The two

FIGURE 8.22
Sample images showing collagen alignment in ligament samples at three weeks,
six weeks, and 14 weeks after injury: (a) without immobilization of the af-
fected joint; (b) with immobilization of the affected joint for three weeks.
Images courtesy of C.B. Frank. See also Figure 8.23.

FIGURE 8.23
Composite rose diagrams showing collagen realignment in ligament samples at
three weeks, six weeks, and 14 weeks after injury: (a) without immobilization
of the affected joint; (b) with immobilization of the affected joint for three
weeks. See also Figure 8.22. Reproduced with permission from C.B. Frank,
B. MacFarlane, P. Edwards, R. Rangayyan, Z.Q. Liu, S. Walsh, and R. Bray,
"A quantitative analysis of matrix alignment in ligament scars: A comparison
of movement versus immobilization in an immature rabbit model", Journal
of Orthopaedic Research, 9(2): 219-227, 1991. (c) Orthopaedic Research
Society.

FIGURE 8.24
Variation of the entropy of composite rose diagrams with collagen realign-
ment in ligament samples at three weeks, six weeks, and 14 weeks after injury.
The vertical bars indicate +/- one standard deviation about the corresponding
means. "NON": without immobilization of the affected joint; "3 IMM": with
immobilization of the affected joint for three weeks; "6 IMM": with immobi-
lization of the affected joint for six weeks. The shaded region indicates the
range of entropy for normal ligament samples. See also Figures 8.23 and 8.22.
Reproduced with permission from C.B. Frank, B. MacFarlane, P. Edwards,
R. Rangayyan, Z.Q. Liu, S. Walsh, and R. Bray, "A quantitative analysis of
matrix alignment in ligament scars: A comparison of movement versus immo-
bilization in an immature rabbit model", Journal of Orthopaedic Research,
9(2): 219-227, 1991. (c) Orthopaedic Research Society.
ligament sections directly connected to the midsubstance scar were labeled as
the original ligament ends.

FIGURE 8.25
Gap-injury site in the ligament and the formation of scar. A: Gap injury cre-
ated by removing a 4 mm section of the MCL. B: Scar after healing. C: Ex-
tracted ligament and its main regions. See also Figure 8.26. Reproduced
with permission from K. Eng, R.M. Rangayyan, R.C. Bray, C.B. Frank, L.
Anscomb, and P. Veale, "Quantitative analysis of the fine vascular anatomy
of articular ligaments", IEEE Transactions on Biomedical Engineering, 39(3):
296-306, 1992. (c) IEEE.

Following removal, the ligament samples were fixed in 4% paraformalde-
hyde, frozen in an embedding medium, sagittally sectioned at 50 micrometer thick-
ness using a Cryostat (Reichert-Jung 2800 Frigocut N), and contact-mounted
maintaining proper orientation. This procedure resulted in black ink-filled
blood vessels in ligament tissue, which, when viewed under light microscopy,
showed black vessels on a clear background of ligament collagen. Every sec-
tion was divided using a grid system, and under a microscope, the image of
every tenth field was photographed. Photographs that did not contain visible
blood vessels were discarded. The number of discarded (blank) photographs
was taken into account when estimating the fractional volume of blood vessels
in the ligament. The specimen magnification was measured to be 175, and
the scale of the digitized image was 2.32 micrometers per pixel.
Typical images of a normal ligament section and a 17-week healing ligament
section are shown in Figure 8.27. It is evident that the normal ligament is
relatively avascular, and that the blood vessels that exist are aligned along
the length of the ligament (the horizontal direction of the image). On the

FIGURE 8.26
Ligament sectioning procedure for the imaging of vascular anatomy. A: Knee
joint. B: Extracted ligament and plane of sectioning. a: MCL complex. b: Lig-
ament. c: Epiligament. d: Femur. e: Tibia. f: Sectioning (imaging) plane.
See also Figure 8.25. Reproduced with permission from K. Eng, R.M. Ran-
gayyan, R.C. Bray, C.B. Frank, L. Anscomb, and P. Veale, "Quantitative
analysis of the fine vascular anatomy of articular ligaments", IEEE Transac-
tions on Biomedical Engineering, 39(3): 296-306, 1992. (c) IEEE.
other hand, the scar tissue has a more abundant network of blood vessels to
facilitate the healing process, with extensive branching and lack of preferred
orientation.

FIGURE 8.27
Microvascular structure in ligaments: (a) normal; (b) 17-week scar. Images
courtesy of R.C. Bray.

Binarization of the images: Images of ligament sections obtained as
above contained ink-perfused blood vessels as well as collagen fibrils and other
artifacts. In order to simplify the image analysis procedure, the gray-level im-
ages were thresholded to create binary images consisting of only two gray
levels representing the blood vessels and the background. The gray-level his-
togram for a blood-vessel image was assumed to be bimodal, with the first
peak representing the pixels of blood vessels, and the second one represent-
ing the background pixels. Otsu's method (see Section 8.3.2) for threshold
selection produced binary images with excessive artifacts. Threshold selec-
tion by histogram concavity analysis [611], a method that locates the locally
significant minima and maxima in the gray-level histogram of the image and
produces a list of possible thresholds, was investigated; however, it was diffi-
cult to choose the actual threshold to be used from the list. Another method
tried was the Rutherford-Appleton threshold-selection algorithm [592], which
computes a threshold by using gradient information from the image. The best
threshold for the binarization of the blood-vessel images was obtained by using
the Rutherford-Appleton algorithm to get a threshold estimate, followed by
histogram concavity analysis to fine-tune the final threshold value, as follows.
The derivatives of the given image f(m, n) were obtained in the x and y
directions as

    d_x(m, n) = f(m, n + 1) - f(m, n - 1)                    (8.45)

and

    d_y(m, n) = f(m + 1, n) - f(m - 1, n).                   (8.46)

The larger of the two derivatives was saved as

    d(m, n) = max[ |d_x(m, n)|, |d_y(m, n)| ].               (8.47)

Two sums were computed over the entire image as

    S_d = Sum_m Sum_n d(m, n)                                (8.48)

and

    S_df = Sum_m Sum_n d(m, n) f(m, n).                      (8.49)

The Rutherford-Appleton threshold is given as

    T_o = S_df / S_d.                                        (8.50)
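Equations 8.45 to 8.50 translate directly into a few lines of NumPy: the threshold is a gradient-weighted mean gray level. The sketch below ignores the one-pixel image border, a detail the text does not specify, and the function name is ours.

```python
import numpy as np

def rats_threshold(f):
    """Rutherford-Appleton threshold (Equations 8.45-8.50) of a 2D
    gray-level image: the mean gray level weighted by the larger of the
    two central-difference derivatives at each pixel."""
    f = np.asarray(f, dtype=float)
    dx = np.zeros_like(f)
    dy = np.zeros_like(f)
    dx[:, 1:-1] = f[:, 2:] - f[:, :-2]   # Equation 8.45
    dy[1:-1, :] = f[2:, :] - f[:-2, :]   # Equation 8.46
    d = np.maximum(np.abs(dx), np.abs(dy))   # Equation 8.47
    s_d = d.sum()                             # Equation 8.48
    s_df = (d * f).sum()                      # Equation 8.49
    return s_df / s_d                         # Equation 8.50, T_o
```

Because the weights are concentrated at edges, the result lies between the gray levels of the two populations separated by the strongest edges.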
Another potential threshold was determined by finding the position of maxi-
mal histogram concavity. A typical gray-level histogram consists of a number
of significant peaks (local maxima) and valleys (local minima). Significant
peaks may be identified by constructing a convex hull of the histogram, which
is defined as the smallest convex polygon hb(l) containing the given histogram
h(l), where l stands for the gray-level variable. The convex hull consists of
straight-line segments joining the significant peaks in the histogram. The his-
togram concavity at any gray level is defined as the vertical distance between
the convex hull and the histogram, that is, [hb(l) - h(l)]. Within each straight-
line segment of the convex hull, the gray level at which the maximal concavity
occurred was labeled as the optimal threshold for that segment.
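A minimal implementation of this concavity analysis, assuming the histogram is a plain list of counts: the upper convex hull is built with a monotone scan, and the gray level with the largest hull-to-histogram gap is returned. The text selects one threshold per hull segment; returning the overall maximum is our simplification, and the function name is hypothetical.

```python
def concavity_threshold(hist):
    """Gray level of maximal histogram concavity: the largest vertical
    gap between the histogram and its upper convex hull."""
    n = len(hist)
    hull = []  # indices of upper-hull vertices, left to right
    for l in range(n):
        while len(hull) >= 2:
            l1, l2 = hull[-2], hull[-1]
            # Drop the middle vertex if it lies on or below the chord l1 -> l.
            if (hist[l2] - hist[l1]) * (l - l1) <= (hist[l] - hist[l1]) * (l2 - l1):
                hull.pop()
            else:
                break
        hull.append(l)
    # Interpolate the hull along each segment and find the maximal gap.
    best_l, best_gap = 0, -1.0
    for l1, l2 in zip(hull, hull[1:]):
        for l in range(l1, l2 + 1):
            hull_val = hist[l1] + (hist[l2] - hist[l1]) * (l - l1) / (l2 - l1)
            gap = hull_val - hist[l]
            if gap > best_gap:
                best_gap, best_l = gap, l
    return best_l
```

For a bimodal histogram, the deepest concavity falls in the valley between the two mode peaks.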
Because the area covered by the blood vessels is small compared to the
area covered by the background in the ligament section images, the gray-level
histogram was first scaled logarithmically to make the histogram peak repre-
senting the blood vessels and the background peak closer in height. A convex
polygon of the scaled histogram was then constructed. The problem of choos-
ing between the thresholds of each of the segments of the convex polygon
was addressed by finding a threshold T_o using the Rutherford-Appleton algo-
rithm described above. The threshold estimate T_o was found to lie between
the background peak and the peak representing the blood-vessel pixels. The
threshold representing the maximal histogram concavity within the convex
hull segment joining these two peaks was chosen to be the threshold value T_c.
A threshold was also determined by finding the minimum point in the his-
togram between the peaks that represented the blood-vessel and background
pixels. This threshold, labeled T_m, yielded a smaller value than T_c because of
the height difference between the peaks.
Comparing the various methods of threshold selection described above, it
was observed that T_c was often too high, resulting in an image with artifacts.
The threshold T_m was often too low, resulting in the loss of blood-vessel
pixels. By using the average of T_c and T_m as the final threshold, an acceptable
compromise was reached.
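Given the concavity threshold T_c and the locations of the two histogram peaks, the final compromise threshold reduces to a few lines. The peak positions are assumed to have been located already, and the helper name is ours.

```python
def final_threshold(hist, t_c, peak1, peak2):
    """Average of the concavity threshold t_c and the histogram-minimum
    threshold T_m found between the two mode peaks."""
    lo, hi = sorted((peak1, peak2))
    # T_m: gray level of the histogram minimum between the two peaks.
    t_m = min(range(lo, hi + 1), key=lambda l: hist[l])
    return 0.5 * (t_c + t_m)
```

With the toy histogram used above, peaks at levels 1 and 6 and T_c = 5 give T_m = 4 and a final threshold of 4.5.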
A sample image of the vascular structure in a normal ligament is shown
in Figure 8.28 (a). The histogram of the image with the various thresholds
described above is shown in Figure 8.29. The binarized version of the image
is shown in Figure 8.28 (b).
Skeletonization: Skeletonization makes directional analysis easier by re-
ducing the binary blood-vessel patterns to their skeletal patterns with one-
pixel-thick lines (see Section 6.1.6). In the work of Eng et al. [414], the bina-
rized blood-vessel images as in Figure 8.28 (b) were reduced to their skeletons
by the method described in Section 6.1.6; see Figures 8.30 and 6.13 for exam-
ples.
In order to assist the analysis of both the directionality and the volume of
vascularization, an image array containing the diameter of the blood vessel at
each skeleton point was formed, and referred to as the diameter-proportional
skeleton of the image. The diameter at a skeleton point s_i was obtained as

    t(x, y) = 2 min[ D(s_i, C) ],                            (8.51)

where C is the set of contour points of the binary image before skeletonization,
and D is the Euclidean distance measure.
It is to be noted that during skeletonization, a line is shortened by one-half
of its original thickness at each of its two ends. In order to correct for this
during reconstruction and area estimation, pixels need to be added to the end
points. Because most of the blood-vessel images were observed to have smooth
contours, the ends of the line segments were assumed to be semicircular. The
areas of such half circles were added at the end points.
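Equation 8.51 can be realized with a brute-force distance computation; the sketch below trades speed for clarity, and the function name and the 4-connected contour definition are our assumptions.

```python
import numpy as np

def diameter_skeleton(binary, skeleton):
    """Diameter-proportional skeleton per Equation 8.51: at each skeleton
    point, store twice the minimum Euclidean distance to the contour of
    the binary object. Both inputs are boolean arrays of the same shape."""
    binary = np.asarray(binary, dtype=bool)
    padded = np.pad(binary, 1, constant_values=False)
    # Interior pixels have all four 4-connected neighbors inside the
    # object; the contour is the remainder of the object.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = binary & ~interior
    cy, cx = np.nonzero(contour)
    diam = np.zeros(binary.shape, dtype=float)
    for y, x in zip(*np.nonzero(skeleton)):
        d = np.sqrt((cy - y) ** 2 + (cx - x) ** 2)
        diam[y, x] = 2.0 * d.min()
    return diam
```

Note that, with this pixel-level definition, a bar three pixels thick yields a stored value of 2 along its midline (the contour pixels themselves belong to the object).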
Directional analysis: Skeletonization allows the use of the simple method
of least-squares linear regression [612] to determine the angle of orientation

FIGURE 8.28
Microvascular structure in a normal ligament sample. (a) Original image;
(b) binarized image. See Figure 8.29 for details on the selection of the thresh-
old for binarization. Reproduced with permission from K. Eng, R.M. Ran-
gayyan, R.C. Bray, C.B. Frank, L. Anscomb, and P. Veale, "Quantitative
analysis of the fine vascular anatomy of articular ligaments", IEEE Transac-
tions on Biomedical Engineering, 39(3): 296-306, 1992. (c) IEEE.
FIGURE 8.29
Logarithmically scaled histogram of the image in Figure 8.28 (a), along
with its convex hull and several possible thresholds for binarization. RATS:
Rutherford-Appleton threshold-selection algorithm. Reproduced with per-
mission from K. Eng, R.M. Rangayyan, R.C. Bray, C.B. Frank, L. Anscomb,
and P. Veale, "Quantitative analysis of the fine vascular anatomy of articular
ligaments", IEEE Transactions on Biomedical Engineering, 39(3): 296-306,
1992. (c) IEEE.

FIGURE 8.30
Skeleton of the image in Figure 8.27 (b). See also Figure 6.13.

of each blood-vessel segment in the image. In the work of Eng et al. [414],
from each point (x, y) in the skeleton image, a line segment consisting of
N = 11 points was extracted, with the center point located at (x, y). If (x_i,
y_i), i = 1, 2, ..., N, represent the points in the line segment, the slope of the
best-fitting straight line is given by

    m = [ (Sum_{i=1}^{N} x_i)(Sum_{i=1}^{N} y_i) - N Sum_{i=1}^{N} x_i y_i ]
        / [ (Sum_{i=1}^{N} x_i)^2 - N Sum_{i=1}^{N} x_i^2 ].            (8.52)

It should be noted that, when the slope becomes large for a nearly vertical
line segment, slope estimation as above becomes inaccurate due to increasing
y-axis errors. This error can be obviated by adapting the least-squares formula
to minimize the x-axis errors if the slope found by Equation 8.52 is greater
than unity. The inverse of the slope is then given by

    1/m = [ (Sum_{i=1}^{N} x_i)(Sum_{i=1}^{N} y_i) - N Sum_{i=1}^{N} x_i y_i ]
          / [ (Sum_{i=1}^{N} y_i)^2 - N Sum_{i=1}^{N} y_i^2 ].          (8.53)

The angle of the skeleton at the point (x, y) is then given by theta = arctan(m).
The elemental area of the blood vessel at the point (x, y) is

    A(x, y) = t(x, y) W(theta),                              (8.54)

where t(x, y) is the vessel thickness at (x, y) as given by Equation 8.51, and

    W(theta) = 1/cos(theta)   if |theta| < 45 degrees,
               1/sin(theta)   if |theta| > 45 degrees.       (8.55)
The factor W as above (in pixels) accounts for the fact that diagonally con-
nected pixels are farther apart than vertically or horizontally connected pixels.
The elemental area was added to the corresponding angle bin of the histogram,
and the process was repeated for all points in the skeleton.
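The slope, angle, and weighting steps above can be sketched as follows; the angle convention (degrees in (-90, 90], with 90 for vertical segments) and the function names are our choices, not those of the cited work.

```python
import math

def segment_angle(xs, ys):
    """Orientation (degrees) of a skeleton line segment by least squares
    (Equations 8.52 and 8.53); x-on-y regression is used when the slope
    exceeds unity, to limit y-axis errors."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    den = sx * sx - n * sxx
    m = (sx * sy - n * sxy) / den if den else float('inf')
    if abs(m) <= 1.0:
        return math.degrees(math.atan(m))
    den2 = sy * sy - n * sum(y * y for y in ys)
    inv_m = (sx * sy - n * sxy) / den2 if den2 else 0.0  # Equation 8.53
    theta = math.degrees(math.atan2(1.0, inv_m))
    return theta - 180.0 if theta > 90.0 else theta

def w_factor(theta):
    """Connectivity weight W of Equation 8.55 (theta in degrees)."""
    t = math.radians(theta)
    return 1.0 / math.cos(t) if abs(theta) < 45.0 else 1.0 / abs(math.sin(t))
```

A 45-degree segment, for instance, gets W = sqrt(2), reflecting the diagonal spacing of its pixels.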

The overall accuracy of the directional analysis procedure as above was
estimated to be +/-3 degrees by analyzing various test patterns. For this reason,
the blood-vessel angular distributions were computed in bins of width 6 degrees.
Figure 8.31 shows composite rose diagrams obtained from 82 images from
four normal ligaments and 115 images from three ligament scar samples at 17
weeks of healing. It is evident that the normal ligaments demonstrate a well-
contained angular distribution of blood vessels (angular SD = 36.1 degrees, entropy
= 4.4 out of a maximum of 4.9), whereas the scar tissues show relatively
widespread distribution (angular SD = 42.5 degrees, entropy = 4.8).
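The summary statistics quoted above can be reproduced from a binned angular distribution along the following lines. The entropy is in bits over 30 bins of 6 degrees (maximum log2(30), about 4.91, matching the note in Table 8.2); the SD here is a plain probability-weighted standard deviation of bin centers, which is our simplification and not necessarily the exact circular statistic used in the study.

```python
import math

def angular_stats(areas, bin_width=6.0):
    """Entropy (bits) and a simple weighted angular SD (degrees) of a
    blood-vessel angular distribution, given per-bin elemental areas."""
    total = float(sum(areas))
    p = [a / total for a in areas]
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0.0)
    centers = [(i + 0.5) * bin_width for i in range(len(areas))]
    mean = sum(pi * c for pi, c in zip(p, centers))
    var = sum(pi * (c - mean) ** 2 for pi, c in zip(p, centers))
    return entropy, math.sqrt(var)
```

A uniform 30-bin distribution attains the maximum entropy of about 4.91 bits, while a distribution concentrated in a single bin has zero entropy and zero SD.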

FIGURE 8.31
Angular distributions of blood vessels in (a) normal ligaments (averaged over
82 images from four ligaments), and (b) 17-week scar tissues from three lig-
aments (115 images). Reproduced with permission from K. Eng, R.M. Ran-
gayyan, R.C. Bray, C.B. Frank, L. Anscomb, and P. Veale, "Quantitative
analysis of the fine vascular anatomy of articular ligaments", IEEE Transac-
tions on Biomedical Engineering, 39(3): 296-306, 1992. (c) IEEE.
TABLE 8.2
Measures of Entropy and Standard Deviation (SD) of Composite Angular
Distributions of Blood Vessels in Ligaments.

Tissue type      Ligaments  Images  Entropy  SD (deg)  % Vasc.
NORMAL:
  Ligament           4         82    4.39     36.10     0.98
  Epiligament        4         20    4.64     38.53     1.19
CONTRALATERAL:
  Ligament           3         93    4.33     34.79     1.05
  Epiligament        3         36    4.79     42.98     2.40
SCAR:                3        115    4.79     42.52     2.50
ENDS:
  Ligament           3         80    4.59     36.55     2.24
  Epiligament        3         20    4.78     44.08     3.10

The maximum possible value for entropy is 4.91. "SCAR": midsubstance scar;
"ENDS": original ligament ends; see Figures 8.26 and 8.25. "% Vasc.": per-
centage of the analyzed tissue volume covered by the blood vessels detected.
Reproduced with permission from K. Eng, R.M. Rangayyan, R.C. Bray, C.B.
Frank, L. Anscomb, and P. Veale, "Quantitative analysis of the fine vascular
anatomy of articular ligaments", IEEE Transactions on Biomedical Engineer-
ing, 39(3): 296-306, 1992. (c) IEEE.

In addition to the directional distributions and their statistics (entropy and
angular dispersion or standard deviation), the relative volume of blood vessels
in the various ligament samples analyzed was computed; see Table 8.2. Using
the two-sample t-test, several assertions were made about the relative
volume and organization of blood vessels in normal and healing ligaments; see
Table 8.3. Statistical analysis of the results indicated, with 96% confidence,
that 17-week scars contain a greater volume of blood vessels than normal
ligaments. Using entropy as a measure of chaos in the angular distribution of
the blood-vessel segments, statistical analysis indicated, with 99% confidence,
that blood vessels in 17-week scars are more chaotic than in normal ligaments.
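The confidence figures quoted above follow the pattern of a one-sided two-sample comparison. A minimal sketch, using Welch's t statistic with a normal approximation for the tail probability (adequate at these sample sizes) and purely hypothetical per-image vascularity values, not the study data:

```python
import math
import numpy as np

def welch_t(a, b):
    """Two-sample t statistic with unequal variances (Welch's form)."""
    va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
    return (np.mean(a) - np.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

def one_sided_confidence(t):
    """Confidence (%) supporting mean(a) > mean(b), using the standard
    normal CDF as a large-sample approximation to the t distribution."""
    return 100.0 * 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical per-image vascular-volume percentages (illustrative only)
rng = np.random.default_rng(0)
normal_vasc = rng.normal(1.0, 0.4, 82)   # 82 normal-ligament images
scar_vasc = rng.normal(2.5, 0.8, 115)    # 115 scar images

t_stat = welch_t(scar_vasc, normal_vasc)
conf_pct = one_sided_confidence(t_stat)
```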
A factor that affects the accuracy of the angular distributions derived as
above is the width of the blood vessels. As the thickness of a blood vessel increases,
more material is lost at the ends of the vessels during skeletonization.
This loss, although corrected for by the addition of semicircular end pieces,
could lead to reduced accuracy of the angular distribution. Sampling and
quantization errors become significant when the thickness of blood vessels is
small.

TABLE 8.3
Results of Statistical Comparison of the Relative Volume of
Vascularization (V ) and the Entropy of the Angular Distribution (H ) of
Various Ligament Samples.
Assertion Confidence (%)
LIGAMENT:
V (normal) < V (contralateral) 70
V (normal) < V (midsubstance scar) 96
V (normal) < V (original ligament ends) 85
V (original ligament ends) < V (midsubstance scar) 55
H (contralateral) < H (normal) 73
H (normal) < H (midsubstance scar) 99
H (normal) < H (original ligament ends) 53
H (original ligament ends) < H (midsubstance scar) 96

EPILIGAMENT:
V (normal) < V (contralateral) 99
V (normal) < V (original ligament ends) 70
H (normal) < H (contralateral) 90
H (normal) < H (original ligament ends) 82

Reproduced with permission from K. Eng, R.M. Rangayyan, R.C. Bray, C.B.
Frank, L. Anscomb, and P. Veale, "Quantitative analysis of the fine vascular
anatomy of articular ligaments", IEEE Transactions on Biomedical Engineering,
39(3): 296–306, 1992. © IEEE.
It should be observed that blood vessels branch and merge in 3D within
the ligament. The sectioning procedure used to obtain 2D slices imposes a
limitation in the analysis: the segments of the blood vessels that traverse
across the sectioning planes are lost in the procedures for directional analysis.
The increased blood-vessel volume and greater directional chaos observed
in scar tissue as compared to normal ligaments indicate that the blood supply
pattern is related to the healing process. Interestingly, after a 17-week healing
period, increases in both the blood-vessel volume and dispersion were also observed
in the MCL from the opposite knee (contralateral), as compared to the
uninjured, normal MCL [414, 415]. The directional chaos of the contralateral
MCL decreased in the ligament region, but increased in the epiligament region
of the tissue as compared to the normal tissue. This may be attributed to a
possible change in loading conditions on the contralateral limb after injury,
or to a nervous response to injury transmitted by neural pathways [613]. As
expected, both the vascularity and the directional chaos of the blood vessels in
the original ligament ends increased as compared to the normal. This shows
that injury to one portion of the tissue has some effect on the vascularity of
connected ligament tissue.

8.8 Application: Detection of Breast Tumors


The detection of breast tumors is difficult due to the nature of mammographic
images and features. Many studies have focused on image processing tech-
niques for the segmentation of breast parenchyma in order to detect suspicious
masses and reject false positives. The detection of masses requires the seg-
mentation of all possible suspicious regions, which may then be subjected to
a series of tests to eliminate false positives.
Early work by Winsberg et al. [614] using low-resolution imagery showed
promise in detecting large solitary lesions in mammograms. Later on, several
studies dealt with the detection of suspicious areas in xeromammograms.
Ackerman and Gose [615] developed computer techniques for categorizing suspicious
regions marked by radiologists based upon features in xeromammograms.
Kimme et al. [616] proposed an automatic procedure for the detection
of suspicious abnormalities on xeromammograms by identifying breast tissues
and partitioning them into at most 144 sections per image. Ten normalized
statistics for each section were used as texture features, and the classification
of 2,270 mammographic sections of eight patients with six best-performing
features yielded a false-positive rate of 26% and a false-negative rate of 0.6%.
Hand et al. [617] and Semmlow et al. [618] reported on automated methods
to detect suspicious regions in xeromammograms. The methods included
routines for the detection of the breast boundary and the extraction of suspicious
areas. Classification of normal and abnormal regions was achieved using
global and regional features. With a dataset of 60 xeroradiograms from 30
patients, the methods proposed by Hand et al. [617] correctly identified 87%
of the suspicious areas presented, but resulted in a high false-positive rate.
Lai et al. [619] presented a method to detect circumscribed masses. They
enhanced the contrast in mammograms using a selective median filter as a
preprocessing step, and proposed a method based upon template matching
to identify candidate regions. Finally, two tests (local neighborhood test and
region histogram test) were applied to reduce the number of false positives.
The method was effective in the detection of specific types of lesions, but was
not generally applicable to all types of masses. The masses detected included
both benign and malignant types; however, the results of detection were not
reported separately for the individual mass types.
Multiscale methods: The features of masses in mammograms vary greatly
in size and shape. Several computer techniques [620, 621] have, therefore,
employed multiscale concepts for the detection of masses in mammographic
images. Brzakovic et al. [620] proposed fuzzy pyramid linking for mass localization,
and used intensity links of edge pixels, in terms of their position, at
various levels of resolution of the image; relative stability of the links from
one level of resolution to another was assumed. They reported 95% detection
accuracy with 25 cases containing benign masses, malignant tumors, and
normal images (the numbers of each type were not specified). They further
reported to have achieved 85% accuracy in classifying the regions detected as
benign, malignant, or nontumor tissue by using features based upon tumor
size, shape, and intensity changes in the extracted regions. Details about the
range of the size of the masses detected and their distribution in terms of
circumscribed and spiculated types were not reported.
Chen and Lee [348] used multiresolution wavelet analysis and expectation-maximization
(EM) techniques in conjunction with fuzzy C-means concepts
to detect tumor edges. The method was tested on only five images, and classification
of masses as benign or malignant was not performed. Li et al. [622]
developed a segmentation method based upon a multiresolution Markov random
field model, and used a fuzzy binary decision tree to classify the regions
segmented as normal tissue or mass. The algorithm was reported to have
achieved 90% sensitivity with two false-positive detections per image. Using
a nonlinear multiscale approach, Miller and Ramsey [341] achieved an
accuracy of 85% in detecting malignant tumors in a screening dataset.
Qian et al. [262, 270, 623] developed three image processing modules for
computer-assisted detection of masses in mammograms: a tree-structured
nonlinear filter for noise suppression, a multiorientation directional wavelet
transform, and a multiresolution wavelet transform for image enhancement
to improve the segmentation of suspicious areas. They performed wavelet
decomposition of the original images, and used the lowpass-smoothed images
for detecting the central core portions of spiculated masses. Wavelet-based
adaptive directional filtering was performed on the highpass-detail images to
enhance the spicules in the mass margins. Chang and Laine [624] used coherence
and orientation measures in a multiscale analysis procedure to enhance
features and provide visual cues to identify lesions.
Density-based methods: Masses are typically assumed to be hyperdense
with respect to their surroundings. However, due to the projection nature of
mammograms, overlapped fibroglandular tissues could also result in high-intensity
regions in the image, leading to their detection as false positives.
Therefore, many studies [371, 617, 618, 625, 626] focused on detecting potential
densities as an initial step, and incorporated modules to reject the false
positives at a later stage. Woods and Bowyer [625] detected potential densities
by using a local measure of contrast, and supplemented the method with a
region-growing scheme to find the extents of the potential densities. They further
used a set of features computed from each region in linear and quadratic
neural classifiers to classify the regions segmented as masses or false positives.
In a later work, Woods and Bowyer [627] examined five mass-detection
algorithms and described a general framework for detection algorithms composed
of two steps: pixel-level segmentation and region-level classification.
Their review reveals some fundamental advantages in concentrating effort on
pixel-level analysis for achieving higher detection accuracies.
Kok et al. [628] and Cerneaz [626] referred to a mass-detection approach
that was developed by Guissin and Brady [629] based upon isointensity contours,
and reported that the measures that Guissin and Brady used to reduce
false positives are not suitable for mammography. Cerneaz and Brady [626,
630] proposed methods to remove curvilinear structures including blood vessels,
milk ducts, and fibrous tissues prior to the detection of significant densities
in mammograms. However, the assumption that such a step would not
affect the spicules of tumors is questionable.
In the detection scheme proposed by Cerneaz [626], dense regions (blobs) in
the mammogram were initially segmented by combining the methods proposed
by Guissin and Brady [629] and Lindeberg [631]. Novelty analysis methods
were then applied to separate mass regions from the plethora of dense regions
thus segmented. Three of the five features used in the novelty analysis step
included variance in the intensity of a blob's pixels, average blob height, and
a measure of a blob's saliency. A set of 100 images (no details were mentioned
about the nature of the distribution of the images) from the MIAS
database [376] were used for evaluating the methods after reducing the resolution
to 300 µm. The detection accuracies were not stated explicitly. The
detection approach proposed by Mudigonda et al. [275] and described in the
subsections to follow possesses similarities with the method of Guissin and
Brady [629] and Cerneaz and Brady [630, 626] in the initial stage of segmentation
of isolated regions in the image.
Analysis of texture of mass regions: Petrick et al. [632] reported on
the use of a two-stage adaptive density-weighted contrast enhancement filter
in conjunction with a LoG edge detector for the detection of masses. In their
approach, the original images at a resolution of 100 µm were downsampled
by a factor of eight to arrive at images of size 256 × 256 pixels and 800 µm
resolution. Then, for each potential mass object, an ROI was extracted from
the corresponding downsampled image using the bounding box of the object
to define the region. A set of texture features based upon the GCMs yielded
a true-positive detection rate of 80% at 2.3 false positives per image, and
90% detection accuracy at 4.4 false positives per image with a dataset of 168
cases [633].
Kobatake et al. [634] applied the iris filter for the detection of approximately
rounded convex regions, and computed texture features based upon GCMs of
the iris filter's output to isolate malignant tumors from normal tissue. The
methods resulted in a detection rate of 90.4% with 1.3 false positives per
image, with a dataset of 1,214 CR images containing 208 malignant tumors.
Kegelmeyer [635] developed a method to detect stellate lesions in mammograms,
and computed Laws' texture features from a map of local edge
orientations. A binary decision tree was used to classify the features. Detection
results with five test images yielded a sensitivity of 83% with 0.6 false
findings per image. Gupta and Undrill [636] used Laws' texture measures
and developed a texture-based segmentation approach to identify malignant
tumors.
Researchers have also used fractal measures for the detection and characterization
of mammographic lesions. Priebe et al. [637] used texture and
fractal features for the detection of developing abnormalities. Burdett et
al. [471] used fractal measures and nonlinear filters for the characterization of
lesion diffusion. Byng et al. [448] proposed measures of skewness of the image
brightness histogram and the fractal dimension of image texture, which were
found to be strongly correlated with a radiologist's subjective classifications
of mammographic patterns.
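A skewness measure of the brightness histogram can be computed as the third standardized moment of the pixel values; the sketch below is illustrative and not necessarily the exact estimator used in [448].

```python
import numpy as np

def brightness_skewness(pixels):
    """Third standardized moment of the pixel-brightness distribution.
    Positive values indicate a tail toward bright (dense) pixels."""
    x = np.asarray(pixels, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))
```

A symmetric distribution of brightness values yields a skewness of zero, whereas a few very bright pixels against a dark background pull the measure positive.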
Gradient-based analysis of mass regions: Several published algorithms
for breast mass segmentation are based upon the analysis of the gradient
orientation at edges to locate radiating lines or spicules [339, 518, 625, 638,
639, 640]. Karssemeijer and te Brake [641] reported on the use of scale-space
operators and line-based pixel orientation maps to detect spiculated
distortions. The map of pixel orientations in the image was used to construct
two operators sensitive to star-like patterns of lines. A total of 50 images
(nine spiculated masses, 10 cases of architectural distortion, and 31 normal
cases) from the MIAS database [376] were analyzed. By combining the output
from both the operators in a classifier, an accuracy of 90% was achieved in
detecting stellate carcinomas at one false positive per image [641]. The mass
cases studied belonged to the spiculated malignant category only, and did not
include circumscribed malignant or circumscribed benign cases.
In a related study, te Brake and Karssemeijer [642, 643] used three pixel-based
mass-detection methods, including the method they developed for detecting
stellate distortions, to examine if the detection of masses could be
achieved at a single scale. Experiments with simulated masses indicated that
little could be gained by applying their methods at a number of scales. Using
a dataset of 60 images (30 malignant and 30 normal) from the MIAS
database [376], they reported that their gradient orientation method [339,
638, 641] performed better than two other methods based upon template
matching and Laplacian filtering [642]. Karssemeijer and te Brake mentioned
that their gradient orientation method requires the optimization of several
parameters in order to achieve a good detection accuracy [641].
Kobatake and Yoshinaga [644] developed skeleton analysis methods using
the iris filter to detect spicules of lesions. They used a modified Hough transform
to extract radiating lines from the center of a mass region to discriminate
between star-shaped malignant tumors and nonmalignant masses, and
obtained an accuracy of 74% in detecting malignant tumors using a dataset
of 34 CR images including 14 malignant, nine benign, and 11 normal cases.
Polakowski et al. [645] developed a model-based vision algorithm using DoG
filters to detect masses and computed nine features based upon size, circularity,
contrast, and Laws' texture features. A multilayer perceptron neural
network was used for the classification of breast masses as benign or malignant.
With a dataset of 36 malignant and 53 benign cases, they reported a
detection sensitivity of 92% in identifying malignant masses, with 1.8 false
positives per image.
Directional analysis of fibroglandular tissues: Several researchers [639,
646, 647, 648, 649, 650, 651, 652] have developed methods to analyze the orientation
of fibroglandular tissues in order to detect mass regions and eliminate
false positives. Zhang et al. [639] employed a Hough-spectrum-based
technique to analyze the texture primitives of mammographic parenchymal
patterns to detect spiculated mass regions and regions possessing architectural
distortion. With a dataset of 42 images obtained from 22 patients, the
methods yielded a sensitivity of 81% with 2.0 false positives per image. Parr
et al. [640] studied Gabor enhancement of radiating spicules, and concluded
that the linearity, length, and width parameters are significantly different for
spicules in comparison to other linear structures in mammograms. Later on,
the group reported on the detection of linear structures in digital mammograms
by employing a multiscale directional line operator [647, 648, 650]. Two
images indicating the probability of suspicion were derived by applying PCA
and factor analysis techniques to model the orientations present in central
core regions and the surrounding patterns of lesions, respectively. A sensitivity
of 70% at 0.01 false positives per image was reported by combining
the evidence present in both the probability images in a k-nearest-neighbor
approach using a dataset of 54 mammograms containing 27 spiculated lesions
and 27 normal mammograms [651]. The basis for the computation of the
result mentioned above was not clarified: a value of 0.01 false positives per
image with a dataset of 54 mammograms leads to a total of less than one false-positive
detection. The sizes of the abnormalities in their studies ranged from
5 mm to 30 mm, with a mean of 13.4 mm; 77% of the lesions tested were
smaller than 15 mm. Mudigonda et al. [275] applied techniques of texture
flow-field analysis [432, 653] to analyze oriented patterns in mammograms by
computing the angle of anisotropy or dominant orientation and the strength
of flow-like information or coherence at every point in the image; the related
methods and results are described in the following subsections.
Analysis of bilateral asymmetry: Several studies [371, 654, 655, 656,
657] have investigated the bilateral asymmetry between the left and right
mammograms of an individual in order to localize breast lesions. Such studies
have reported on the registration and unwarping transformations that are
required for the comparison of different images. Apart from the anatomical
differences that exist between the left and right breasts of an individual, registration
methods will have to cope with the additional complexities arising
due to the variation in the amounts of compression applied in the process of
obtaining the images, as well as the differences in the angles of the projections.
The transformations used in registration methods often face limitations in adequately
addressing the above-mentioned complexities of the mammographic
imaging process.
Lau and Bischof [371] applied a transformation based upon the outline of the
breast to identify asymmetry in breast architecture. Using a B-spline model
of the breast outline to normalize images, they compared features including
brightness, roughness, and directionality to define asymmetry measures for
breast tumor detection. Although they reported success in the detection of
tumors, they warned that the method by itself is not reliable for clinical
application, due mainly to the high rate of false negatives. Giger et al. [654]
and Yin et al. [655] used similar warping procedures to align bilateral images
to perform subtraction. Their method exploits the normal bilateral symmetry
of healthy parenchyma to label regions of significant difference as potential
masses. Nishikawa et al. [658] analyzed asymmetric densities to detect masses
by performing nonlinear subtraction between the right and left breasts; their
computer-aided scheme for the classification of masses using artificial neural
networks achieved a higher accuracy than radiologists.
Miller and Astley [372, 659] proposed a method for the detection of asymmetry
by comparing anatomically similar and homogeneous regions; the procedure
included segmentation, classification as fat or nonfat region, texture
energy, and shape features such as compactness, circularity, and eccentricity.
Training and assessment were carried out on a leave-one-out basis with a
dataset of 52 mammogram pairs, achieving 72% correct classification with a
linear discriminant classifier. Ferrari et al. developed methods to achieve segmentation
of the outline of the breast [279], the pectoral muscle [278], and the
fibroglandular disc [280] in mammograms, and analyzed the glandular tissue
patterns by applying directional filtering concepts to detect bilateral asymmetry
[381]. Their methods, described in Section 8.9, avoid the application of
registration procedures that are associated with most of the above-mentioned
methods to analyze asymmetry. Instead, they performed a global analysis of
the directional distributions of the glandular tissues using Gabor wavelets to
characterize the asymmetry in tissue flow patterns.
Analysis of prior mammograms: It is known that a small but significant
number of cancer cases detected in screening programs have prompts
visible in earlier screening examinations [660]. These cases represent screening
errors or limitations, and may occur due to the lack of an adequate understanding
of the perceptual features of early breast abnormalities as apparent
on mammograms. The above observations have prompted many researchers
[661, 662, 663, 664, 665, 666, 667, 668] to analyze the previous or prior mammograms
taken as part of routine screening and follow-up studies of patients
diagnosed with cancer, in an effort to detect the disease in the prior mammograms.
Brzakovic et al. [661] developed methods for detecting changes in
mammograms of the verified positive group in the regions labeled as abnormal
by a medical expert, by comparing them with the features derived from
the corresponding regions of the previous screenings. Their procedures included
segmentation, partitioning into statistically homogeneous regions, and
region-based statistical analysis in order to achieve registration and perform
comparison between mammograms acquired at different instances of screening.
Sameti et al. [664, 669] studied the structural differences between the regions
that subsequently formed malignant masses on mammograms, and other
normal areas in images taken in the last screening instance prior to the detection
of tumors. Manually identified circular ROIs were transformed into their
optical density equivalents, and further divided into three discrete regions
representing low, medium, and high optical density. Based upon the discrete
regions, a set of photometric and texture features was extracted. They reported
that in 72% of the 58 breast cancer cases studied, it was possible to
realize the differences between malignant mass regions and normal tissues in
previous screening images.
Petrick et al. [632, 667] studied the effectiveness of their mass-detection
method in the detection of masses in prior mammograms. The dataset used
included 92 images (54 malignant and 38 benign) from 37 cases (22 malignant
and 15 benign). Their detection methods achieved a "by film" mass-detection
sensitivity of 51% with 2.3 false positives per image. They achieved a slightly
better accuracy of 57% in detecting only malignant tumors. Their detection
scheme attempts to segment salient densities by employing region growing
after enhancement of contrast in the image. Such an intensity-based segmentation
approach fails to detect the developing densities in the previous
screening images due to the inadequate contrast of mass regions before the
masses are actually formed.
Several semiautomated segmentation schemes [632, 633, 276, 518, 407] have
used manually segmented ROIs in order to search for masses in a specified
region of the breast. Such methods find limited practical utility in a screening
program.
The female breast is a complex organ made up of fibrous, glandular, fatty,
and lymphatic tissues. The differences in density information of the breast
tissues are captured in a mammogram in the form of intensity and textural
variations. Mudigonda et al. [275] proposed an unsupervised segmentation
approach to localize suspicious mass regions in mammographic images. The
approach aims to isolate the spatially interconnected structures in the image
to form regions concentrated around prominent intensities. It would then be
possible to extract high-level information characterizing the physical properties
of mass regions, and to short-list suspicious ROIs for further analysis. A
block diagram of this approach is shown in Figure 8.32; the various steps of
the detection algorithm are explained in detail in the following subsections.

Original image
Wavelet decomposition and lowpass filtering
Isointensity contours via adaptive density slicing
Hierarchical grouping of isointensity contours
Segmentation of regions and upsampling their boundaries to the full-resolution image
Analysis of segmented regions to reject false positives
Classification of the regions segmented

FIGURE 8.32
Block diagram of the mass-detection algorithm. Figure courtesy of N.R.
Mudigonda [166].
8.8.1 Framework for pyramidal decomposition
Malignant tumors, due to their invasive nature, possess heterogeneous density
distributions and margins causing distortion in the orientation of the
surrounding tissues. In order to detect such structures as single entities, prior
smoothing of the image is required. Mudigonda et al. [275] employed recursive
wavelet decomposition and Gaussian smoothing operations in a multiresolution
pyramidal architecture as preprocessing steps to achieve the required level
of smoothing of the image.
A pyramidal representation of the given image was obtained by iterative
decimation operations on the full-resolution image, thereby generating a hierarchy
of subimages with progressively decreasing bandwidth and increasing
scale [670, 671]. Wavelet decomposition divides the frequency spectrum of the
original image f into its lowpass-subband-equivalent image fL and highpass-equivalent
detail image fH at different scales. The lowpass-subband image
at each scale, produced by decimating its preceding higher-resolution image
present in the hierarchy by an octave level, was further smoothed by a 3 × 3
Gaussian kernel, and the resulting image was stretched to the range of 0 – 60
in pixel value. The wavelet used was a symlet of eighth order. Symlets are
compactly supported wavelets with the least asymmetry and the highest number
of vanishing moments for a given support width [672]. Figure 8.33 shows
plots of the decomposition lowpass kernels used with symlets, at two different
scales. The wavelet decomposition was performed recursively to three octave
levels using the symlets mentioned above.
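The pyramid construction can be sketched as follows. This illustrative code substitutes a separable 3 × 3 binomial (Gaussian-like) kernel for both the symlet-8 lowpass stage and the post-decimation smoothing, so it only approximates the authors' wavelet-based pipeline; the decimation by one octave per level and the rescaling to the range 0 – 60 follow the description above.

```python
import numpy as np

def smooth3(img):
    """Separable 3x3 binomial smoothing, kernel (1, 2, 1)/4 per axis."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    out = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    out = np.apply_along_axis(np.convolve, 0, out, k, mode='same')
    return out

def stretch(img, top=60.0):
    """Linearly rescale pixel values to the range [0, top]."""
    lo, hi = img.min(), img.max()
    return (img - lo) * (top / (hi - lo))

def lowpass_pyramid(img, levels=3):
    """Lowpass pyramid: smooth, decimate by one octave, smooth with the
    3x3 kernel, and rescale to 0-60, repeated for the requested levels."""
    out = [img]
    for _ in range(levels):
        img = smooth3(img)[::2, ::2]   # lowpass + decimation by an octave
        img = stretch(smooth3(img))    # extra 3x3 smoothing, rescale to 0-60
        out.append(img)
    return out

# Three levels take a 128x128 image down to 16x16 (an eight-fold reduction)
pyr = lowpass_pyramid(np.random.default_rng(1).random((128, 128)) * 255.0, 3)
```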
The preprocessing steps of wavelet decomposition and Gaussian smoothing
operations described above successively and cumulatively modulate the
intensity patterns of mass regions to form smooth hills with respect to their
surroundings in low-resolution images. Figure 8.34 (a) shows a 1024 × 1024
section of a mammogram containing two circumscribed benign masses. Parts
(b) – (d) of the figure show the corresponding low-resolution images after the
first, second, and third levels of decomposition, respectively. The effects of the
preprocessing steps mentioned above may be observed in the low-resolution
images.
The choice of the wavelet, the width of the kernel used for lowpass filtering,
and the degree or scale factor of decomposition can influence the smoothed
results. The preprocessing operations described above were employed to arrive
at an estimate of the extent of isolated regions in a low-resolution image,
and studies were not performed with different sets of choices. However, satisfactory
smoothed results were obtained with the wavelet chosen due to its
symmetry. A scale factor of three, which causes the decomposition of the
original 50 µm/pixel images to a resolution of 400 µm/pixel, was found to be
effective on most of the images tested: decomposition to a higher scale resulted
in over-smoothing of images and merged multiple adjoining regions into single
large regions, whereas a scale factor of two yielded insignificant regions due to

(Plots: symlet decomposition lowpass filter coefficients of order 4 and order 8.)

FIGURE 8.33
Plots of symlet decomposition lowpass filters at two scales. Figure courtesy
of N.R. Mudigonda [166].

FIGURE 8.34
(a) A 1024 × 1024 section of a mammogram containing two circumscribed
benign masses. Pixel size = 50 µm. Image width = 51 mm. Low-resolution
images obtained by wavelet filtering: (b) After the first level of decomposition:
512 × 512 pixels, 100 µm per pixel. (c) After two levels of decomposition:
256 × 256 pixels, 200 µm per pixel. (d) After three levels of decomposition:
128 × 128 pixels, 400 µm per pixel. The intensity of the filtered images has
been enhanced by four times for display purposes. Figure courtesy of N.R.
Mudigonda [166].
insufficient smoothing. However, some researchers [632] have performed mass
detection after reducing images to a resolution of 800 µm/pixel.

8.8.2 Segmentation based upon density slicing


The recursive smoothing and decimation operations described above result
in a gradual modulation of intensity information about the local intensity
maxima present in various isolated regions in the low-resolution image. As a
result, the intensity levels are expected to assume either unimodal or bimodal
histogram distributions. The next step in the algorithm is to threshold the
image at varying levels of intensity to generate a map of isointensity contours
[673]. The purpose of this step is to extract concentric groups of closed
contours to represent the isolated regions in the image.
The density-slicing or intensity-slicing technique slices the given image (represented
as a 2D intensity function) by using a plane that is placed parallel
to the coordinate plane of the image [526]. A level curve (also known as
an isointensity curve) is then formed by extracting the boundary of the area
of intersection of the plane and the intensity function. Figure 8.35 shows a
schematic illustration of the density-slicing operation. Each level curve obtained
using the procedure explained above is guaranteed to be continuous
and closed. The number of levels of thresholding, starting with the maximum
intensity in the image, and the step-size decrement for successive levels,
were adaptively computed based upon the histogram distribution of the image
under consideration, as explained below.
Let fmax represent the maximum intensity level in the low-resolution image
(which was scaled to 60), and let fth be the threshold representing the mass-to-background
separation, which is to be derived from the histogram. It
is assumed that the application of the preprocessing smoothing operations
results in exponentially decreasing intensity from the central core region of a
mass to its background, represented as fth = fmax exp(−β N), where N is the
number of steps required for the exponentially decreasing intensity function to
attain the background level represented by fth, so that β N = ln(fmax / fth),
and β is the intended variation in step size between the successive levels of
thresholding. The step size β may be computed through a knowledge of the
parameters fth and N. The threshold fth was derived from the histogram,
and corresponds to the intensity level representing the maximum number of
occurrences when the histogram assumes a unimodal distribution.
It is essential to set bounds for fth so as not to miss the detection of masses
with low-density core regions, while maintaining the computational time of
the algorithm at a reasonable level. A large threshold value might miss the
grouping of low-intensity mass regions, thereby affecting the sensitivity of the
detection procedure; on the other hand, a low value would result in a large map
of isointensity contours, and increase the computational load on the algorithm
in further processing of the contours. A smaller threshold could also result
in large numbers of false detections. Initial estimates of fth derived from the
[Figure 8.35: intensity profile sliced at levels 0 through N down to the
background (left), and the resulting isointensity contours (right).]

FIGURE 8.35
Schematic illustration of the density-slicing operation. fmax represents the
maximum intensity in the image, and levels 0, 1, 2, ..., N represent a set
of N threshold values used for density slicing. Figure courtesy of N.R.
Mudigonda [166].

corresponding histograms of low-resolution images were observed to range
between 50% and 90% of fmax, and N was observed to range between 10 and
30. The parameter fth was adaptively selected based upon the histogram as
explained below:
1. If 0.5 fmax < fth ≤ 0.9 fmax, fth could be assumed to represent the mass-
to-background transition, and the same threshold value is retained.

2. If fth > 0.9 fmax, the mass regions that are to be detected in the image
are expected to be merged with the surrounding background, and no
distinct central core regions would be present. In such cases, fth is
considered to be 0.9 fmax, and N is set to 30 (the maximum number
of levels of thresholding considered) to limit the step-size increments of
the level function to a low value. These steps facilitate close tracking of
difficult-to-detect mass-to-background demarcation.

3. If fth ≤ 0.5 fmax, fth might not represent the true mass-to-background
transition, and hence, is ignored. An alternative search for fth is initi-
ated so that the value obtained will lie in the upper half of the histogram
distribution.
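The selection rules above can be condensed into code. The sketch below is illustrative, not the original implementation: the helper name, the histogram argument, and the tie-breaking details are assumptions consistent with the stated bounds.

```python
import numpy as np

def select_threshold(hist, f_max=60, n_max=30):
    # Initial estimate: the most-populated intensity level (unimodal case).
    f_th = int(np.argmax(hist))
    if f_th > 0.9 * f_max:
        # Rule 2: mass merged with the background; clamp f_th and use the
        # maximum number of levels to keep the step size small.
        f_th = int(0.9 * f_max)
        N = n_max
    elif f_th <= 0.5 * f_max:
        # Rule 3: unlikely to be the true transition; search again in the
        # upper half of the histogram.
        upper = np.asarray(hist, dtype=float).copy()
        upper[: f_max // 2 + 1] = 0
        f_th = int(np.argmax(upper))
        N = min(n_max, f_max - f_th)
    else:
        # Rule 1: retain the estimate.
        N = min(n_max, f_max - f_th)
    delta = (f_max - f_th) / max(N, 1)       # step size between levels
    levels = [f_max - k * delta for k in range(N + 1)]
    return f_th, levels
```

With a histogram peak at 42 (between 0.5 fmax and 0.9 fmax for fmax = 60), the estimate is retained and the thresholding levels descend from 60 to 42.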
The steps described above realize a domain of isointensity contours in the
low-resolution image.
8.8.3 Hierarchical grouping of isointensity contours
The next step in the algorithm is to perform grouping and elimination op-
erations on the framework of closed contours generated in the low-resolution
image, considering their parent-child nodal relations in a family-tree architec-
ture. A schematic representation of such a hierarchical grouping procedure
is shown in Figure 8.36, which depicts the segmentation of the low-resolution
image into three isolated regions based upon three concentric groups of con-
tours.
The strategy adopted was to short-list at first the possible central dense-core
portions, which are usually small in size but of higher density (represented by
fmax in each group of contours in Figure 8.36), and to identify the immediate
low-density parent members encircling them. The process was continued until
all the members in the available set of closed contours in the image were vis-
ited. Each of the closed contours was assigned to a specific group or family of
concentric contours based upon nodal relations, thus leading to segmentation
of the image into isolated regions. A concentric group of contours represents
the propagation of density information from the central core portion of an ob-
ject in the image into the surrounding tissues. In some images with dense and
fatty backgrounds, the outermost contour members were observed to contain
multiple regions of dissimilar structures. For this reason, a specified number of
outer contours were discarded to separate the groups of contours representing
adjacent structures.
The outermost contour in each family or group and the family count in
terms of the number of contours present could be useful in the analysis of
the regions segmented in order to reject false positives. Masses, irrespective
of their size, were observed to result in a higher family count as compared
to elongated glandular tissues. By setting a threshold on the family count,
chosen to be five, dense glandular structures could be excluded from further
analysis. A lower threshold value for the minimum-allowable family count
was observed to affect the specificity in terms of the number of false positives
detected, but not affect the sensitivity of the detection procedure. Finally,
the outermost contour from each of the short-listed groups was upsampled to
the full-resolution image to form the corresponding segmented area.
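The parent-child grouping and the family-count test can be sketched as follows. This is a simplified containment check on region masks; the names and the mask representation are illustrative, not from the original work.

```python
import numpy as np

def group_contours(masks):
    """Group closed contours into concentric families: `masks` holds the
    boolean regions enclosed by the contours, ordered from the highest
    threshold (dense cores) to the lowest."""
    families = []                       # each family: indices into masks
    for i, region in enumerate(masks):
        for fam in families:
            last = masks[fam[-1]]
            # region is the parent if it fully contains the last member
            if (region & last).sum() == last.sum():
                fam.append(i)
                break
        else:
            families.append([i])        # start a new family at this core
    return families

def mass_candidates(families, min_count=5):
    # Masses yield deeper families than elongated glandular tissue.
    return [f for f in families if len(f) >= min_count]
```

Five nested regions form one family of five members and survive the family-count threshold; an isolated region forms a one-member family and is rejected.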

8.8.4 Results of segmentation of masses


The results of application of the algorithm to the image shown in Figure
8.34 (a) are presented in Figure 8.37 (a). The contour map and the outer-
most contours detected are shown superimposed on the low-resolution image
(at a scale of three). Figure 8.38 shows the histogram of the correspond-
ing low-resolution and smoothed image. Figure 8.37 (b) shows the contours
(white) upsampled from the low-resolution image in Figure 8.37 (a) to the full-
resolution image of Figure 8.34 (a) the corresponding contours (black) that
were manually drawn by an expert radiologist are overlaid for comparison.

[Figure 8.36: three concentric contour groups G1, G2, and G3 in the contour
domain, each with a local fmax and an outermost member, mapped from the
low-resolution image to segmented regions in the space domain.]

FIGURE 8.36
Schematic representation of hierarchical grouping of contours. G1, G2, and
G3 are groups of contours that represent isolated regions in the image. Repro-
duced with permission from N.R. Mudigonda, R.M. Rangayyan, and J.E.L.
Desautels, "Detection of breast masses in mammograms by density slicing and
texture flow-field analysis", IEEE Transactions on Medical Imaging, 20(12):
1215–1227, 2001. © IEEE.
Figure 8.39 shows a similar set of results of application of the mass-detection
algorithm to a 1 024 × 1 024 section of a mammogram containing a spiculated
malignant tumor.
As can be seen from Figures 8.37 and 8.39, the upsampled contours in the
full-resolution image contain the most significant portions of the correspond-
ing masses, and are in close agreement with the corresponding areas manually
delineated by the radiologist. The results of application of the methods dis-
cussed above to full-size mammograms are presented in the subsections to fol-
low, along with methods based upon texture flow-field principles for detailed
analysis of the various regions segmented in order to reject false positives.
The mass-detection algorithm was tested on segments of size up to 2 048 ×
2 048 pixels of 39 mammographic images (28 benign and 11 malignant) from
the MIAS database [376], with a spatial resolution of 50 μm × 50 μm. In
29 of the 39 cases (19 benign and 10 malignant), the segmented regions were
in agreement with the corresponding regions that were manually identified
by the radiologist. In six images, including five images with circumscribed
benign masses and an image with a spiculated malignant tumor, the regions
segmented by the algorithm were not in agreement with the corresponding
regions manually delineated by the radiologist. The radiologist indicated that
he encountered difficulty while tracing the boundaries of the masses in some
images from the MIAS database. In the remaining four images where the
method failed, all belonging to the spiculated benign category, the mass por-
tions are merged in fatty and glandular background. In these images, the
technique failed to generate a contour map with the specified minimum num-
ber of concentric closed contours to be able to delineate the mass regions. In
two of the images, including a circumscribed benign mass and a spiculated
benign mass, the masses are located close to the edges of the images, and the
process of generation of concentric closed contours was impeded.
Overall, the mass-detection algorithm performed well on images contain-
ing malignant tumors, and successfully segmented tumor areas that were in
agreement with the corresponding regions identified manually by the radi-
ologist. However, the method encountered limited success in images with
benign masses. In the detection scheme, only the contour map generated in
the lowest-resolution image is analyzed to segment the mass regions, and the
information available in the intermediate-resolution images along the hierar-
chy is not considered. Establishment of reliable intensity links through the
intermediate-resolution images may result in improved detection results.
Benign-versus-malignant pattern classification was carried out using the
BMDP 7M stepwise discriminant analysis program [674] with texture features
computed based upon averaged GCMs for the 29 masses (19 benign and 10
malignant) that were successfully segmented by the mass-detection procedure.
(See Sections 7.3.2 and 7.9.1 for details on the computation of texture features
using adaptive ribbons.) Four effective features, including entropy, second
moment, second difference moment, and correlation, were short-listed. The
[Figure 8.37 (a): contour map; axes in pixels, 0 to 120.]

[Figure 8.37 (b): full-resolution contours; axes in pixels, 0 to 1000.]



FIGURE 8.37
(a) Groups of isointensity contours and the outermost contour in each group in
the third low-resolution image of the mammogram section of Figure 8.34 (d).
(b) The contours (white) of two masses (indicated by arrows) and two false
positives detected in the full-resolution image of Figure 8.34 (a), with the
corresponding contours (black) of the masses drawn independently by a ra-
diologist. Reproduced with permission from N.R. Mudigonda, R.M. Ran-
gayyan, and J.E.L. Desautels, "Segmentation and classification of mammo-
graphic masses", Proceedings of SPIE Volume 3979, Medical Imaging 2000:
Image Processing, pp 55–67, 2000. © SPIE.

[Figure 8.38: histogram; x-axis: intensity threshold, 0 to 60; y-axis: number
of pixels, ×10^4.]

FIGURE 8.38
Histogram of the low-resolution and smoothed image shown in Figure 8.37 (a).
Reproduced with permission from N.R. Mudigonda, R.M. Rangayyan, and
J.E.L. Desautels, "Segmentation and classification of mammographic masses",
Proceedings of SPIE Volume 3979, Medical Imaging 2000: Image Processing,
pp 55–67, 2000. © SPIE.

[Figure 8.39 (a)]

[Figure 8.39 (b): axes in pixels, 0 to 120.]



[Figure 8.39 (c): histogram; x-axis: intensity threshold, 0 to 60; y-axis:
number of pixels, ×10^4.]

[Figure 8.39 (d): axes in pixels, 0 to 1000.]


FIGURE 8.39
(a) A 1 024 × 1 024 section of a mammogram containing a spiculated ma-
lignant tumor. Pixel size = 50 μm. Image width = 51 mm. (b) Group of
isointensity contours and the outermost contour in the group in the third low-
resolution image. (c) Histogram of the low-resolution and smoothed image
shown. (d) The contour (white) of the spiculated malignant tumor detected
in the full-resolution image, superimposed with the corresponding contour
(black) drawn independently by a radiologist. Reproduced with permission
from N.R. Mudigonda, R.M. Rangayyan, and J.E.L. Desautels, "Segmenta-
tion and classification of mammographic masses", Proceedings of SPIE Volume
3979, Medical Imaging 2000: Image Processing, pp 55–67, 2000. © SPIE.

GCM-based texture features computed from the mass ribbons resulted in an
average classification efficiency of 0.80.

8.8.5 Detection of masses in full mammograms
Masses containing important signs of breast cancer may be difficult to detect
as they often occur in dense glandular tissue. Successful identification of such
difficult-to-detect masses often results in a large number of false positives.
Rejection of false positives forms an important part of algorithms for mass
detection [341, 629, 632, 634, 635, 638, 643, 644, 645, 664, 675].
In the algorithm proposed by Mudigonda et al. [275] to detect masses, the
pyramidal decomposition approach described in the preceding subsections was
extended for application to full mammograms. Furthermore, the orientation
information present in the margins of the regions detected was analyzed using
texture flow-field principles to reject false positives. The approach, described
below, is significant in the following aspects:

• The methods constitute a comprehensive automated scheme for the de-
tection of masses, analysis of false positives, and classification of mam-
mographic masses as benign or malignant. The detection methods pro-
posed are not limited to the analysis of any specific category of masses;
instead, they cover a wide spectrum of masses that include malignant tu-
mors as well as benign masses of both the circumscribed and spiculated
categories.

• As described at the beginning of this section, many of the recently
published research works [639, 641, 643, 650, 651, 652] have noted the
significance of analyzing the oriented information in mammograms in
identifying regions that correspond to abnormal distortions in the im-
ages. Such methods require precise computation of orientation esti-
mates. Mudigonda et al. [275] introduced methods to analyze oriented
textural information in mammograms using flow-field principles, and
proposed features to differentiate mass regions from other dense tissues
in the images in order to reduce the number of false positives. The flow-
field methods employed have a strong analytical basis, and provide
optimal orientation estimates [653].

• As discussed in Section 7.9.1, the analysis of textural information in
ribbons of pixels across the boundaries of masses has been found to be
effective in the benign-versus-malignant discrimination of masses [165,
451, 676]. The studies cited above computed textural features using
ribbons of pixels of a fixed width of 8 mm. In the work of Mudigonda et
al. [275], a method is proposed to estimate the widths of the ribbons so
as to adapt to variations in the size and shape of the detected mass
regions. The adaptive ribbons of pixels extracted are used to classify the
regions detected as masses or false positives at first, and subsequently
to discriminate between benign masses and malignant tumors.

• The features used for the classification of masses and false positives are
based upon specific radiographic characteristics of masses as described
by an expert radiologist.
The block diagram shown in Figure 8.40 lists the various steps of the detection
algorithm, which are explained in detail in the following paragraphs.
Detection of the breast boundary: In order to limit further processing
to only the breast region, an approximate outline of the breast was detected
initially by employing the following steps. The image was smoothed with a
separable Gaussian kernel of width 15 pixels (pixel width = 200 μm), and
quantized to 64 gray levels. The method proposed by Schunck [677] and Rao
and Schunck [653] was used to generate Gaussian kernels. Figure 8.41 shows
a plot of the Gaussian kernel used.
A map of isointensity contours was generated by thresholding the image
using a threshold close to zero. From the map of isointensity contours, a
set of closed contours was identified by employing the chain code [526]. The
contour containing the largest area was then considered to be the outline of
the breast. Figure 8.42 illustrates a mammogram of size 1 024 × 1 024 pixels
with a spiculated malignant tumor. The outline of the breast detected in the
mammogram of Figure 8.42 is shown in Figure 8.43. The method successfully
detected the outlines of all of the 56 images tested, and also worked
successfully with images lacking skin-air boundaries all around.
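The smoothing-and-thresholding stage can be sketched as follows, using pure numpy. The kernel parameters are assumptions consistent with the 15-pixel support quoted above, and the chain-code extraction of the largest closed contour is only indicated in the comments; the names are illustrative, not from the original work.

```python
import numpy as np

def gaussian_1d(width=15, sigma=2.5):
    # 1-D kernel for separable (row-then-column) Gaussian smoothing
    x = np.arange(width) - width // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def breast_region(img, thresh=1.0):
    """Smooth with a separable Gaussian and threshold close to zero;
    the largest closed contour of the resulting mask (e.g., traced by
    chain coding) would then be taken as the breast outline."""
    g = gaussian_1d()
    sm = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, sm)
    return sm > thresh
```

Separable filtering is what makes this step cheap: two 1-D convolutions per pixel instead of one 2-D convolution.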
Detection of salient densities: Gaussian pyramidal decomposition was
employed to achieve the required smoothing, instead of the wavelet decomposition
that was used for the detection of masses in sectional images of mammograms
as discussed in Section 8.8.1. Gaussian decomposition, when tested
with some of the previously used sectional mammographic images, provided
comparable detection results in terms of the boundaries of the regions seg-
mented. Gaussian smoothing using separable kernels has the advantage of
ease of implementation, and also provides computational advantages, partic-
ularly with full-size mammograms.

FIGURE 8.40
Block diagram of the algorithm for the detection of masses in full mammo-
grams. Reproduced with permission from N.R. Mudigonda, R.M. Rangayyan,
and J.E.L. Desautels, "Detection of breast masses in mammograms by den-
sity slicing and texture flow-field analysis", IEEE Transactions on Medical
Imaging, 20(12): 1215–1227, 2001. © IEEE.
[Figure 8.41: plot of the Gaussian kernel; w = 5 at half-maximum height;
horizontal axis: −6 to 6 pixels.]

FIGURE 8.41
Plot of a Gaussian kernel with a support width of 15 pixels. The width at
half-maximum height is five pixels. Figure courtesy of N.R. Mudigonda [166].

The original 8 b images with a spatial resolution of 200 μm were subsam-
pled to a resolution of 400 μm after performing smoothing with a separable
Gaussian kernel of width five pixels. The width of the Gaussian kernel at
half-maximum height is about 400 μm, and hence, is not expected to cause
excessive smoothing of mass regions, because mass features in mammograms
typically span a few millimeters.
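One level of this smoothing-and-subsampling step can be sketched as follows. The binomial 5-tap kernel is an assumption (a common discrete approximation to a Gaussian of this support), and the function name is illustrative.

```python
import numpy as np

def smooth_and_decimate(img):
    """Separable 5-tap Gaussian smoothing followed by dropping every
    other row and column (200 um/pixel -> 400 um/pixel)."""
    g = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    sm = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, sm)
    return sm[::2, ::2]
```

Smoothing before decimation is what prevents aliasing: the kernel removes the high-frequency content that the coarser grid cannot represent.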
The preprocessing steps described above are essential in order to capture
the complete extent of mass features as single large regions, so as to provide
adequate inputs for further discrimination between masses and false positives.
Masses were assumed to be hyperdense, or at least of the same density, with
respect to their background. The preprocessing steps described above may
not be effective with masses that do not satisfy this assumption.
Multilevel thresholding: In the procedure of Mudigonda et al. [275],
the low-resolution image is initially reduced to 64 gray levels in intensity
and thresholded at N = 30 levels starting from the maximum intensity level
fmax = 64, with a step-size decrement of δ = 0.01 fmax. The purpose of this
step is to extract concentric groups of closed contours to represent the isolated
regions in the image, as explained earlier. The above-mentioned parameters
were chosen based upon the observation of the histograms of several low-
resolution images. The histogram of the low-resolution image obtained by
preprocessing the mammogram in Figure 8.42 is shown in Figure 8.44. As
indicated in the figure, the intensity level at which the masses and other dense

FIGURE 8.42
A mammogram (size 1 024 × 1 024 pixels, 200 μm per pixel) with a spic-
ulated malignant tumor (radius = 2.28 cm). Case mdb184 from the MIAS
database [376]. Reproduced with permission from N.R. Mudigonda, R.M.
Rangayyan, and J.E.L. Desautels, "Detection of breast masses in mammo-
grams by density slicing and texture flow-field analysis", IEEE Transactions
on Medical Imaging, 20(12): 1215–1227, 2001. © IEEE.

FIGURE 8.43
The map of isointensity contours extracted in the smoothed and subsampled
version (size 512 × 512 pixels, 400 μm per pixel) of the mammogram shown in
Figure 8.42. The breast outline detected is superimposed. In some cases, sev-
eral contours overlap to produce thick contours in the printed version of the
image. Reproduced with permission from N.R. Mudigonda, R.M. Rangayyan,
and J.E.L. Desautels, "Detection of breast masses in mammograms by den-
sity slicing and texture flow-field analysis", IEEE Transactions on Medical
Imaging, 20(12): 1215–1227, 2001. © IEEE.
tissues appear to merge with the surrounding breast parenchyma is around
the minimum threshold level of 44.

[Figure 8.44: histogram; x-axis: intensity, 0 to 70, with the minimum
threshold level marked at 44; y-axis: number of pixels, 0 to 14 000.]

FIGURE 8.44
Histogram of the low-resolution image corresponding to the mammogram in
Figure 8.42. Reproduced with permission from N.R. Mudigonda, R.M. Ran-
gayyan, and J.E.L. Desautels, "Detection of breast masses in mammograms by
density slicing and texture flow-field analysis", IEEE Transactions on Medical
Imaging, 20(12): 1215–1227, 2001. © IEEE.

Figure 8.43 shows the map of isointensity contours obtained by density
slicing or multilevel thresholding of the mammogram shown in Figure 8.42.
(Observe that multiple concentric contours may fuse into thick contours in
the printed version of the image.)
Grouping of isointensity contours: The scheme represented in Fig-
ure 8.36 was adopted to perform a two-step grouping and merging operation
on the individual contours possessing a minimum circumference of 2 mm (five
pixels at 400 μm), to arrive at groups of concentric isointensity contours.
Initially, the contour members with intensity values ranging from 0.8 fmax to
fmax, with fmax = 64, were grouped to form a set of regions corresponding
to high intensities in the image, and then the remaining contour members
were grouped into a separate set. The undesired merging of adjoining regions
was controlled by monitoring the running family count of each group for any
abrupt fluctuations. The information from both sets of groups of contours
was combined by establishing correspondences among the outermost members
of the various groups present in each set to arrive at the final set of segmented
regions in the low-resolution image. The largest contour in each group thus
finalized with a minimum family count of two members was upsampled into
the full-resolution image to form the corresponding segmented area. The
segmented regions identified in the full-resolution image were analyzed using
the features described below to identify true-positive and false-positive regions.

8.8.6 Analysis of mammograms using texture flow-field
In a mammogram of a normal breast, the fibroglandular tissues present ori-
ented, flow-like, or anisotropic textural information. Mudigonda et al. [275]
proposed features to discriminate between masses and the strongly oriented
fibroglandular tissues based upon the analysis of oriented texture in mammo-
grams. The method proposed by Rao and Schunck [432, 653], briefly described
in the following paragraphs, was used to characterize flow-like information in
the form of intrinsic orientation angle and coherence images. The intrinsic
angle image reveals the direction of anisotropy or flow orientation of the un-
derlying texture at every point in the image. Coherence is a measure of the
degree or strength of anisotropy in the direction of flow.

Rao and Schunck [653] and Rao [432] made a qualitative comparison of the
performance of their method with the method proposed by Kass and Witkin
[678], and reported that their method achieved superior results in character-
izing flow-field information. However, it appears that their implementation
of Kass and Witkin's scheme differed from the method that was originally
proposed by Kass and Witkin. Regardless, the method of Rao and Schunck
has a strong analytical basis with fewer assumptions.
The methodology to derive the intrinsic images begins with the computation
of the gradient information at every point in the image by preprocessing the
image with a gradient-of-Gaussian filter of a specified width. The impulse
response of a 2D Gaussian smoothing filter g(x, y) of width σ is given by

g(x, y) = exp[ −(x² + y²) / (2σ²) ] ,    (8.56)

where the scale factor has been ignored. The impulse response of the gradient-
of-Gaussian filter h(x, y) tuned to a specified orientation θ can be obtained
using g(x, y) as

h(x, y) = [ ∂g/∂x, ∂g/∂y ] · [ cos θ, sin θ ] ,    (8.57)
where · represents the dot product. At each point in the given image, the
filter h(x, y), upon convolution with the image, yields the maximal response in
the orientation (θ) that is perpendicular to the orientation of the underlying
texture (that is, the angle of anisotropy). Based upon the above, and with
the assumption that there exists a dominant orientation at every point in the
given image, Rao and Schunck [653] derived the optimal solution to compute
the angle of anisotropy θpq at a point (p, q) in the image as described below.
Let Gmn and θmn represent the gradient magnitude and gradient orientation
at the point (m, n) in an image, respectively, and P × P be the size of the
neighborhood around (p, q) used for computing θpq. The gradient magnitude
is computed as

Gmn = √[ Gx²(m, n) + Gy²(m, n) ] ,    (8.58)

where Gx(m, n) and Gy(m, n) represent the outputs of the gradient-of-Gauss-
ian filter at (m, n) in the x and y directions, respectively. The gradient ori-
entation is computed as

θmn = arctan[ Gy(m, n) / Gx(m, n) ] .    (8.59)
The projection of Gmn on to the gradient orientation vector at (p, q) at angle
θpq is Gmn cos(θmn − θpq), as illustrated schematically in Figure 8.45.

Based upon the discussion above, the sum of squares S of the projections of
the gradient magnitudes computed at the various points of the neighborhood
in a reference orientation specified by θ is given by

S = Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn² cos²(θmn − θ) .    (8.60)

The sum S varies as the orientation θ is varied, and attains its maximal
value when θ is perpendicular to the dominant orientation that represents the
underlying texture in the given set of points. Differentiating S with respect
to θ yields

dS/dθ = 2 Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn² cos(θmn − θ) sin(θmn − θ) .    (8.61)

By setting dS/dθ = 0 and further simplifying the result, we obtain the solution
for θ = θpq that maximizes S at the point (p, q) in the image as

θpq = (1/2) arctan[ ( Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn² sin 2θmn ) /
                    ( Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn² cos 2θmn ) ] .    (8.62)
The second derivative d²S/dθ² is given by

d²S/dθ² = −2 Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn² cos(2θmn − 2θ) .    (8.63)

[Figure 8.45: gradient vectors Gpq at (p, q) and Gmn at (m, n), with the
projection Gmn cos(θmn − θpq) onto the direction θpq.]

FIGURE 8.45
Schematic illustration of the projection of the gradient magnitude for com-
puting the dominant orientation angle and coherence (the scheme of Rao and
Schunck [653]). Gpq and θpq indicate the gradient magnitude and orientation
at (p, q), respectively. The corresponding parameters at (m, n) are Gmn and
θmn. The size of the neighborhood shown is P × P = 5 × 5 pixels. Figure
courtesy of N.R. Mudigonda [166].
The value of θpq that is obtained using Equation 8.62 represents the direc-
tion of the maximal gradient output, because the second derivative shown in
Equation 8.63 is negative at θ = θpq when the texture has only one dominant
orientation. The estimated orientation angle of flow φpq at (p, q) in the image
is then

φpq = θpq + π/2 ,    (8.64)

because the gradient vector is perpendicular to the direction of flow. The
angles computed as above range between 0 and π radians.
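Equations 8.58 to 8.64 condense into a few lines of numpy. The sketch below treats the whole input window as the P × P neighborhood; the function name is illustrative, not from the original work.

```python
import numpy as np

def flow_orientation(Gx, Gy):
    """Closed-form dominant orientation (Eq. 8.62) rotated by pi/2 to the
    flow angle (Eq. 8.64), for one P x P window of gradient outputs."""
    G2 = Gx**2 + Gy**2                       # squared magnitude (Eq. 8.58)
    theta = np.arctan2(Gy, Gx)               # gradient orientation (Eq. 8.59)
    num = np.sum(G2 * np.sin(2.0 * theta))
    den = np.sum(G2 * np.cos(2.0 * theta))
    theta_pq = 0.5 * np.arctan2(num, den)    # maximizes S of Eq. 8.60
    return (theta_pq + np.pi / 2.0) % np.pi  # flow angle in [0, pi)
```

The doubled angles 2θmn are what make the estimate well defined: orientations that differ by π (opposite gradient directions) reinforce rather than cancel each other.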
In order to analyze mammograms with the procedure described above, the
original image was initially smoothed using a separable Gaussian kernel [677]
of a specified width, and the gradients in the x and y directions were computed
from the smoothed image using finite differences in the respective directions.
The choice of the width of the Gaussian affects the gradient computation; a
width of 2.2 mm (11 pixels) was used by Mudigonda et al. [275], in relation to
the range of the size of features related to breast masses. The filter has a width
of about 1 mm at its half-maximum height. This filter size is appropriate
given that mammograms may demonstrate lumps that are as small as 3 mm
in diameter.
The gradient estimates computed as above were smoothed using a neigh-
borhood of size 15 × 15 pixels (3 × 3 mm), the width of which was chosen to
be larger than that of the Gaussian that was initially used to compute the
gradient estimates. Figure 8.46 shows the intrinsic angle image of the mammogram
shown in Figure 8.42. The bright needles, overlaid on the image, indicate the
underlying dominant orientation at points spaced every fifth row and fifth
column, computed using Equation 8.62. Needles have been plotted only for
those pixels where the coherence, computed as follows, is greater than zero.
The coherence ρpq at a point (p, q) in the given image was computed as the
cumulative sum of the projections of the gradient magnitudes of the pixels in
a window of size P × P, in the direction of the dominant orientation at the
point (p, q) under consideration, as

ρpq = Gpq [ Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn cos(θmn − θpq) ] /
          [ Σ_{m=1}^{P} Σ_{n=1}^{P} Gmn ] .    (8.65)

The result was normalized with the cumulative sum of the gradient magni-
tudes in the window, and multiplied with the gradient magnitude at the point
under consideration in order to obtain high coherence values at the points in
the image having high visual contrast. The coherence image computed for
the mammogram in Figure 8.42 is shown in Figure 8.47. It can be observed
that glandular tissues, ligaments, ducts, and spicules corresponding to archi-
tectural distortion possess high coherence values.
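Equation 8.65 translates directly into code. The sketch below evaluates one P × P window centred at (p, q), given the gradient magnitude image, the gradient orientation image, and the dominant orientation θpq at the centre; names are illustrative.

```python
import numpy as np

def coherence(Gmag, theta, theta_pq, p, q, P=5):
    """Gradient-weighted alignment of the neighborhood orientations with
    the dominant orientation, scaled by the centre magnitude (Eq. 8.65)."""
    h = P // 2
    G = Gmag[p - h:p + h + 1, q - h:q + h + 1]
    th = theta[p - h:p + h + 1, q - h:q + h + 1]
    proj = np.sum(G * np.cos(th - theta_pq))   # projections onto theta_pq
    return Gmag[p, q] * proj / np.sum(G)
```

A window of unit-magnitude gradients all aligned with θpq gives a coherence of 1; gradients orthogonal to θpq contribute nothing.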

FIGURE 8.46
Intrinsic angle information (white lines) for the mammogram shown in Fig-
ure 8.42. The boundaries (black) represent the mass and false-positive regions
segmented at the initial stage of the mass-detection algorithm. The breast
outline detected is superimposed. Reproduced with permission from N.R.
Mudigonda, R.M. Rangayyan, and J.E.L. Desautels, "Detection of breast
masses in mammograms by density slicing and texture flow-field analysis",
IEEE Transactions on Medical Imaging, 20(12): 1215–1227, 2001. © IEEE.

FIGURE 8.47
Intrinsic coherence image of the mammogram shown in Figure 8.42. Repro-
duced with permission from N.R. Mudigonda, R.M. Rangayyan, and J.E.L.
Desautels, "Detection of breast masses in mammograms by density slicing and
texture flow-field analysis", IEEE Transactions on Medical Imaging, 20(12):
1215–1227, 2001. © IEEE.
8.8.7 Adaptive computation of features in ribbons
The regions detected by the method described above vary greatly in size and
shape. For this reason, a method was devised to compute adaptively the width
of the ribbon for the derivation of features (see Section 7.9.1), or equivalently,
the diameter of the circular morphological operator for a particular region
based upon the region's size and shape.
Figure 8.48 shows a schematic representation of the method used to com-
pute adaptively the size of the ribbon. Initially, the diameter of the bounding
circle enclosing a given candidate region was found by computing the maxi-
mal distance between any two points on its boundary. Then, the areas of the
region (Ar ) and the bounding circle (Ac ) enclosing the region were computed.
The width of the ribbon was computed as
Rw = Rc A
A
r (8.66)
c
where Rc is the radius of the bounding circle. The ratio AArc is a simple measure
of the narrowness and shape complexity of the region. The size of the ribbon
computed above was limited to a maximum of 8 mm or 40 pixels. The regions
for which the sizes of ribbons computed was less than 0:8 mm or four pixels
were rejected, and not processed further in the false-positive analysis stage.
The ribbons of pixels (white) extracted across the boundaries (black) of the
various regions detected in the image shown in Figure 8.42 are illustrated in
Figure 8.49.
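The adaptive ribbon-width rule of Equation 8.66 and its size limits can be sketched as follows; this is a minimal illustration (the function name, the boundary representation, and the assumption of a 200 µm pixel size are ours, not from the original implementation):

```python
import numpy as np

def ribbon_width(boundary, mm_per_pixel=0.2):
    """Adaptive ribbon width (Eq. 8.66): Rw = Rc * (Ar / Ac).

    boundary: ordered (N, 2) sequence of boundary points of a candidate region.
    Returns the ribbon width in pixels, or None if the region is rejected.
    """
    pts = np.asarray(boundary, dtype=float)
    # Diameter of the bounding circle: maximal distance between boundary points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    rc = d.max() / 2.0                       # radius of the bounding circle
    ac = np.pi * rc ** 2                     # area of the bounding circle
    # Area of the region by the shoelace formula (boundary assumed ordered).
    x, y = pts[:, 0], pts[:, 1]
    ar = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    rw = rc * ar / ac                        # Eq. 8.66
    rw = min(rw, 8.0 / mm_per_pixel)         # cap at 8 mm (40 pixels at 200 um)
    if rw < 0.8 / mm_per_pixel:              # reject ribbons thinner than 0.8 mm
        return None
    return rw
```

The ratio Ar/Ac approaches unity for compact regions and shrinks for narrow or irregular ones, so elongated regions receive proportionally thinner ribbons.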
FIGURE 8.48
Schematic representation of the adaptive computation of the width of the
ribbon. Ar: area of the candidate region, Ac: area of the bounding circle, and
Rc: radius of the bounding circle. Figure courtesy of N.R. Mudigonda [166].
FIGURE 8.49
Ribbons of pixels (white) extracted adaptively across the boundaries (black)
of the regions detected in the mammogram shown in Figure 8.42. Reproduced
with permission from N.R. Mudigonda, R.M. Rangayyan, and J.E.L.
Desautels, "Detection of breast masses in mammograms by density slicing and
texture flow-field analysis", IEEE Transactions on Medical Imaging, 20(12):
1215–1227, 2001. © IEEE.
Features for mass-versus-false-positive classification: In order to
classify the regions detected as true masses or false positives, the following
features were proposed by Mudigonda et al. [275], based upon certain well-established
radiological notions about breast masses as apparent on mammograms.
• Contrast (Cfg): Masses in mammograms may be presumed to be hyperdense,
or at least isodense, with respect to their surroundings. For
this reason, the contrast (Cfg) of a region was computed as the difference
between the mean intensities of the foreground region or ROI, and
a background region defined as the region enclosed by the extracted ribbon
of pixels excluding the ROI. Regions possessing negative contrast
values were rejected from further analysis.
• Coherence ratio (r): The interior regions of masses are expected to be
less coherent than their edges. The ratio (r) of the mean coherence of
the ROI (excluding the ribbon of pixels) to the mean coherence in the
ribbon of pixels was used as a feature in pattern classification.
• Entropy of orientation estimates (Ho): The orientation of spicules in
the margins of spiculated masses is usually random. Furthermore, the
orientation estimates computed in the margins of circumscribed masses
could cover a wide range of angles between zero and π radians, and may
not possess any dominant orientation. On the contrary, fibroglandular
tissues are highly directional. For these reasons, the entropy (Ho) of
the orientation estimates was computed in the ribbon of pixels of each
region detected for use as a feature in pattern classification.
• Variance of coherence-weighted angle estimates (σh²): The fourth feature
was based upon the coherence-weighted angular histogram, which
was computed for a particular region by incrementing the numbers of
occurrence of angles with the magnitudes of coherence values computed
at the respective points, after resampling the angle values in the ribbon
regions to Q = 6 equally spaced levels between zero and π. This
is equivalent to obtaining a cumulative sum of the coherence estimates
of the points belonging to each bin as the height of the corresponding
bin in the histogram. The histogram distributions obtained as above
were normalized with the cumulative sum of the coherence values computed
in the ribbons of the respective regions, and the variance (σh²) was
computed as

σh² = (1/Q) Σ_{i=1}^{Q} (hi − h̄)²,   (8.67)

where hi, i = 1, 2, ..., Q, are the normalized values of the heights of
the Q bins of the histogram formed by the coherence-weighted angle
estimates, and h̄ is the average height of the bins of the histogram:

h̄ = (1/Q) Σ_{i=1}^{Q} hi.   (8.68)
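The coherence-weighted angular histogram and the variance of Equations 8.67 and 8.68 can be sketched as follows (a minimal numpy illustration; the array arguments stand for the orientation and coherence estimates in the ribbon of a detected region):

```python
import numpy as np

def coherence_weighted_variance(angles, coherence, Q=6):
    """Variance of coherence-weighted angle estimates (Eqs. 8.67 and 8.68).

    angles: orientation estimates in [0, pi) for the pixels in a ribbon.
    coherence: the corresponding coherence magnitudes.
    """
    angles = np.asarray(angles, dtype=float)
    coherence = np.asarray(coherence, dtype=float)
    # Resample the angles to Q equally spaced levels between zero and pi,
    # accumulating the coherence values as the bin heights.
    bins = np.minimum((angles / np.pi * Q).astype(int), Q - 1)
    heights = np.zeros(Q)
    np.add.at(heights, bins, coherence)
    # Normalize by the cumulative sum of the coherence values in the ribbon.
    heights /= heights.sum()
    h_bar = heights.mean()                    # Eq. 8.68
    return np.mean((heights - h_bar) ** 2)    # Eq. 8.67
```

A uniform spread of orientations gives a variance near zero, whereas a single dominant orientation, as in fibroglandular tissue, gives a large variance.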
Features for benign-versus-malignant classification: The efficacy in
benign-versus-malignant classification of the true-positive mass regions successfully
segmented by the mass-detection algorithm was evaluated by using
a set of five GCM-based texture features: entropy, second moment, difference
moment, inverse difference moment, and correlation (see Section 7.3.2
for details). The features were computed in the ribbon of pixels extracted
adaptively from each segmented mass margin as described above. The GCMs
constructed by scanning each mass ribbon in the 0°, 45°, 90°, and 135° directions
were averaged to obtain a single GCM, and the five texture features
were computed for the averaged GCM for each ribbon.
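The construction of the averaged GCM and two of the texture measures can be sketched as follows (a minimal numpy illustration of the technique, not the book's implementation; only the entropy and second-moment measures are shown, and the displacement vectors for the four angles are our convention):

```python
import numpy as np

def gcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix for displacement (dx, dy), symmetric."""
    img = np.asarray(image)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < h and 0 <= c2 < w:
                m[img[r, c], img[r2, c2]] += 1
                m[img[r2, c2], img[r, c]] += 1   # symmetric counting
    return m

def averaged_gcm_features(image, levels=8):
    """Average the GCMs for 0, 45, 90, and 135 degrees; compute two features."""
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1)]   # 0, 45, 90, 135 degrees
    m = sum(gcm(image, dx, dy, levels) for dx, dy in offsets) / 4.0
    p = m / m.sum()                                   # normalized averaged GCM
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    second_moment = np.sum(p ** 2)                    # angular second moment
    return entropy, second_moment
```

A perfectly homogeneous ribbon concentrates all of the co-occurrence mass in one cell, giving zero entropy and a second moment of unity.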
8.8.8 Results of mass detection in full mammograms
Mudigonda et al. [275] tested their methods with a total of 56 images (each of
size 1,024 × 1,024 pixels at a resolution of 200 µm) including 30 benign masses,
13 malignant tumors, and 13 normal cases selected from the Mini-MIAS [376]
database. The dataset included circumscribed and spiculated cases in both
of the benign and malignant categories. The mean values of the sizes of the
masses were 1.07 ± 0.77 cm and 1.22 ± 0.85 cm for the benign and malignant
categories, respectively. The radius of the smallest mass (malignant) was
0.34 cm, and that of the largest mass (benign) was 3.9 cm. The center of
abnormality and an approximate radius of each mass are indicated in the
database. The circular demarcation of masses as done in the database is not
useful for confirming the results of mass detection, because such a demarcation
may also include normal fibroglandular tissues in the ROIs, particularly in
spiculated cases. Hence, only the center of the abnormality as indicated for
each mass in the database was used to confirm the result of mass detection.
The mass-detection algorithm successfully detected all of the 13 malignant
tumors in the database used. However, the algorithm met with limited success
in detecting benign masses. In 11 (five circumscribed and six spiculated) of
the 30 benign cases tested, the algorithm failed to detect the masses. The
overall detection accuracy was 74% with a total of 43 cases.
A close inspection of some of the benign cases in which the algorithm failed
to detect the mass revealed the following details. In three of the four dense-glandular
masses (cases labeled as mdb244, mdb290, and mdb315 in the Mini-MIAS
database [376]), two fatty-glandular masses (mdb017 and mdb175),
and a fatty mass (mdb069), the masses do not have prominent central core
regions and possess poor contrast with respect to their background. Contrast
enhancement [123] of such mammograms prior to the detection step
could improve the performance of the detection algorithm. In a mammogram
containing a fatty mass (mdb190), the high intensity in the pectoral
muscle region affected the multilevel thresholding process of the detection algorithm.
This calls for methods to detect precisely the pectoral muscle [278]
(see Section 5.10), and a two-step detection procedure: initially, the mammographically
apparent regions corresponding to suspicious lymph nodes could
be searched for inside the pectoral muscle region, and in the second stage, the
region of the breast excluding the pectoral muscle area could be searched for
the possible presence of masses.
In two other cases (mdb193 and mdb191), the masses did not satisfy the
hyperdense or isodense assumption. Successful detection of masses in such
cases may require additional methods based upon the asymmetry between
the right and the left mammograms, in order to detect regions possessing
architectural distortion [381, 595, 679, 680, 681].
In the method proposed by Mudigonda et al. [275], no region was rejected
based upon its size during the initial stage, because masses can be present
with any size. Instead, the emphasis was on reducing false positives by using
texture flow-field features. As a result, a large number of false-positive regions,
at approximately 11 per image, were detected by the algorithm along with the
true mass regions during the initial stage of detection.
Mass-versus-false-positive classification: The four features Cfg, r,
Ho, and σh², described in Section 8.8.7, were computed in the ribbons of the
candidate regions that were detected in all of the 56 cases tested, and used in
a linear discriminant classifier to identify the true mass regions and false positives.
The MIAS database contains an unusual number of spiculated benign
cases [345]. In order to study the effect of such an atypical distribution of cases
on the accuracy of the detection method, pattern classification experiments
were carried out in two stages: at first, a mass-versus-normal-tissue classification
was conducted with the 671 regions detected in the 56 cases tested. Next,
malignant-tumor-versus-normal-tissue classification was performed using the
features computed from the 343 regions detected in the 13 malignant and the
13 normal cases tested.
Pattern classification was carried out using the BMDP 7M stepwise discriminant
analysis program with the leave-one-out scheme [674]. In datasets
with limited numbers of cases, it is difficult to form separate sets of data
for training and testing purposes. The leave-one-out cross-validation scheme
helps to obtain the least-biased estimates of classification accuracy in such
situations. The overall classification efficiency in the classification of malignant
tumors versus normal tissue was 0.9, and that for discriminating between
masses (both benign and malignant) and normal tissue was 0.87.
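The leave-one-out procedure with a two-class linear discriminant can be sketched as follows; this is a minimal Fisher-discriminant stand-in for the BMDP 7M program, exercised on illustrative data rather than the features of the study:

```python
import numpy as np

def fisher_direction(X, y):
    """Fisher linear discriminant direction for two classes (labels 0 and 1)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Pooled within-class scatter matrix (sum of per-class scatter).
    sw = np.cov(X[y == 0].T) * (np.sum(y == 0) - 1) + \
         np.cov(X[y == 1].T) * (np.sum(y == 1) - 1)
    return np.linalg.solve(sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)

def leave_one_out_accuracy(X, y):
    """Leave-one-out: train on all samples but one, test on the held-out one."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w = fisher_direction(X[mask], y[mask])
        # Threshold at the midpoint of the projected class means (equal priors).
        t = 0.5 * (X[mask][y[mask] == 0] @ w).mean() + \
            0.5 * (X[mask][y[mask] == 1] @ w).mean()
        correct += int((X[i] @ w > t) == bool(y[i]))
    return correct / len(y)
```

Each sample is classified by a discriminant trained without it, which is what makes the accuracy estimate nearly unbiased for small datasets.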
The linear discriminant function obtained at equal prior probability values
for the group with tumors and the group with normal cases was encoded
to arrive at a mass-versus-false-positive decision for each segmented region.
Figure 8.50 indicates the final set of regions that were retained for the image in
Figure 8.42 after the detection and false-positive analysis stages. The region
detected inside the pectoral muscle area as shown in Figure 8.50 was suspected
to be a lymph node affected by the invasive carcinoma. Such areas may be
prompted to a radiologist for studying possible nodal involvement, particularly
in cases with no localizing signs of the disease.
Figure 8.51 shows a mammogram with a spiculated malignant tumor (indicated
by an arrow, radius = 0.54 cm) that is smaller and less obvious than
the tumor shown in Figure 8.42. Figures 8.52 and 8.53 show the results of detection
for the image shown in Figure 8.51 before and after the false-positive
analysis stage, respectively. The tumor has been successfully detected, along
with one false positive.
The mass-versus-normal-tissue classification experiment, involving the 32
mass regions (19 benign and 13 malignant) that the algorithm successfully
detected and 639 false positives from a total of 56 images (including 13 normal
cases), resulted in an overall classification efficiency of 0.87, with a sensitivity
of 81% at 2.2 false positives per image. A total of six masses (four benign and
two malignant) were misclassified as normal tissue. However, if the fact that
the algorithm missed 11 benign masses during the initial stage of detection
itself is taken into consideration, the true detection sensitivity of the algorithm
with the database of 30 benign and 13 malignant masses reduces to 60%
(26/43).
In the case of malignant-tumor-versus-normal-tissue classification, a high
overall classification efficiency of 0.9 was achieved; the dataset included 13
malignant tumors and 330 false positives from a total of 26 images (including
13 normal cases). A sensitivity of 85% was obtained at 2.46 false positives per
image. Although all of the 13 tumors were successfully detected in the initial
stage, two of the malignant tumors that were detected were misclassified later
as normal tissue, yielding a small proportion (2/13) of false negatives.
In a related work, te Brake and Karssemeijer [643] compared three mass-detection
schemes to detect malignant tumors of small size (radius smaller
than 6 mm), medium size (radius between 6 mm and 10 mm), and large size
(radius greater than 10 mm). The method based upon gradient-orientation
maps [641] was reported to have achieved the best results in the detection
of masses in the small and medium categories. The study of te Brake and
Karssemeijer focused only on malignant tumors and did not include benign
masses.
Petrick et al. [632] reported similar trends in detection results using a
dataset of 25 mammograms containing 14 benign and 11 malignant cases.
A sensitivity of 96% was reported at 4.5 false positives per image. Most
of the other previous studies [634, 643, 645] in the related field reported on
the detection of only malignant tumors, and did not specifically consider the
detection of benign masses.
Benign-versus-malignant classification: The effectiveness of the segmentation
results in benign-versus-malignant pattern classification was verified
using the five GCM-based texture features computed based upon averaged
GCMs as explained above for the 32 cases (19 benign and 13 malignant)
FIGURE 8.50
Adaptive ribbons of pixels (white) and boundaries (black) of the regions retained
in the mammogram shown in Figure 8.42 after the false-positive analysis
stage. The larger region corresponds to the malignant tumor; the other
region is a false positive. See also Figure 8.49. Reproduced with permission
from N.R. Mudigonda, R.M. Rangayyan, and J.E.L. Desautels, "Detection
of breast masses in mammograms by density slicing and texture flow-field
analysis", IEEE Transactions on Medical Imaging, 20(12): 1215–1227, 2001.
© IEEE.
FIGURE 8.51
A mammogram (size 1,024 × 1,024 pixels, 200 µm per pixel) with a spiculated
malignant tumor (pointed by the arrow, radius = 0.54 cm). Case mdb144 from
the MIAS database [376]. Reproduced with permission from N.R. Mudigonda,
R.M. Rangayyan, and J.E.L. Desautels, "Detection of breast masses in mammograms
by density slicing and texture flow-field analysis", IEEE Transactions
on Medical Imaging, 20(12): 1215–1227, 2001. © IEEE.
FIGURE 8.52
Ribbons of pixels (white) extracted adaptively across the boundaries (black)
of the regions detected in the mammogram shown in Figure 8.51. Reproduced
with permission from N.R. Mudigonda, R.M. Rangayyan, and J.E.L.
Desautels, "Detection of breast masses in mammograms by density slicing and
texture flow-field analysis", IEEE Transactions on Medical Imaging, 20(12):
1215–1227, 2001. © IEEE.
FIGURE 8.53
Adaptive ribbons of pixels (white) and boundaries (black) of the regions retained
in the mammogram shown in Figure 8.51 after the false-positive analysis
stage. The larger region corresponds to the malignant tumor; the other
region is a false positive. See also Figure 8.52. Reproduced with permission
from N.R. Mudigonda, R.M. Rangayyan, and J.E.L. Desautels, "Detection
of breast masses in mammograms by density slicing and texture flow-field
analysis", IEEE Transactions on Medical Imaging, 20(12): 1215–1227, 2001.
© IEEE.
that were successfully segmented by the mass-detection procedure. Pattern
classification was carried out using the BMDP stepwise logistic regression
program [674]. The five GCM-based texture features resulted in an overall
classification efficiency of 0.79. The results obtained confirm that the mass
regions segmented in images of resolution 200 µm possess adequate discriminant
information to permit their classification as benign or malignant with
texture features. Similar benign-versus-malignant classification results were
obtained using partial images of the same cases, but with 50 µm resolution.
It appears that the detection and classification of masses may be successfully
performed using images of resolution 200 µm with the techniques described
above.
8.9 Application: Bilateral Asymmetry in Mammograms
Asymmetry between the left and right mammograms of a given subject is an
important sign used by radiologists to diagnose breast cancer [54]. Analysis
of asymmetry can provide clues about the presence of early signs of tumors
(parenchymal distortion, small asymmetric bright spots and contrast, etc.)
that are not evaluated by other methods [372]. Several works have been
presented in the literature addressing this problem [371, 372, 682, 683, 684],
with most of them applying some type of alignment of the breast images before
performing asymmetry analysis. However, alignment procedures applied to
mammograms have to confront many difficult problems, such as the natural
asymmetry of the breasts of a given subject, the absence of good corresponding
points between the left and right breast images to perform matching, and the
distortions inherent to breast imaging.
Procedures for systematic analysis were proposed by Lau and Bischof [371]
and Miller and Astley [372] to perform comparison of the corresponding
anatomical regions between the left and right breast images of an individual
in terms of shape, texture, and density. Lau and Bischof [371] also proposed
a directional feature to quantify oriented patterns.
Ferrari et al. [381] proposed a procedure based upon directional analysis
using Gabor wavelets in order to analyze the possible presence of global disturbance
between the left and right mammograms of an individual in the
normally symmetrical flow of mammary structures. The analysis was focused
on the fibroglandular disc of the mammograms, segmented in a preprocessing
step [280, 375]. The methods and results of Ferrari et al. are presented in the
following paragraphs.
8.9.1 The fibroglandular disc
As indicated by the proceedings of the recent International Workshops on
Digital Mammography [685, 686, 687, 688], several researchers are developing
image processing methods to detect early breast cancer. Most of the techniques
proposed perform analysis of the whole mammogram, without taking
into account the fact that mammograms have different density patterns and
anatomical regions that are used by radiologists in diagnostic interpretation.
In fact, mammographic images are complex and difficult to analyze due to
the wide variation in the density and the variable proportion of fatty and
fibroglandular tissues in the breast [689]. Based upon these observations, a
few researchers have proposed methods to segment and also to model mammograms
in terms of anatomical regions [375, 383, 690, 691].
Miller and Astley [372] investigated the visual cues utilized by radiologists
and the importance of a comparison of the corresponding anatomical structures
in order to detect asymmetry between the left and right mammograms
of an individual. However, in their work, the anatomical segmentation approach
and the possible methodologies to segment the mammograms were
not the main issue. Aylward et al. [383] devised a modeling system to segment
a given mammographic image into five major components: background,
uncompressed fat (periphery of the breast close to the skin-air boundary),
fat, dense tissue, and muscle; the system combined geometric and statistical
techniques. A few other segmentation techniques for mammograms have been
presented in the literature; however, the focus has not been on anatomical segmentation
but on specific problems such as density correction of peripheral
breast tissue [368, 369], localization of the nipple on mammograms [370, 682],
and quantification of breast density [356] and its association with the risk of
breast cancer [382, 448, 689, 692, 693, 694, 695].
The fibroglandular disc is an anatomical region of the breast characterized
by dense tissues, ligaments, and milk ducts. Normally, it has the shape of a
disc or a cone, and goes through the interior of the breast from the region
near the chest wall to the nipple [696]. Segmentation of the fibroglandular
disc could form an important stage in techniques for the detection of breast
cancer that use asymmetry between the left and right mammograms of the
same subject, or for monitoring breast density changes in screening programs.
According to Caulkin et al. [697], it has been noticed clinically that breast
cancer occurs most frequently in the upper and outer quadrant of the breast,
and that the majority of cancers are associated with glandular rather than
fatty tissues. A common procedure used by radiologists in screening programs
for the detection of breast cancer is comparison between the left and right
fibroglandular discs of the mammograms of the same subject.
Several works reported in the literature have been directed to address the
problem of automatic quantification of breast density and its association with
the risk of breast cancer [382, 448, 689, 692, 693, 694, 695]. Most of such
works propose an index or a set of values for the quantification of breast tissue
density. However, only a few works have attempted to address the problem of
detection and segmentation of the fibroglandular disc [356, 375, 383, 690, 691]
for subsequent analysis.
Ferrari et al. [280] proposed a method to segment the fibroglandular disc
in mammograms. In this method, prior to the detection of the fibroglandular
disc, the breast boundary and the pectoral muscle are detected using
the methods described in Sections 5.9 and 5.10. The fibroglandular disc is
detected by defining a breast density model. The parameters of the model
are estimated by using the EM algorithm [698] and the minimum-description-length
(MDL) principle [699]. Then, a reference value computed by using
information from the pectoral muscle region is used along with the breast
density model in order to identify the fibroglandular disc. The details of the
methods are described in the following subsections.
8.9.2 Gaussian mixture model of breast density
The breast density model used by Ferrari et al. [280] is based upon a Gaussian
mixture model [700] (see also Section 9.9.3) estimated by using the gray-level
intensity distribution that represents categories or classes with different
density values in mammograms. Except for the first category, the categories
are related to types of tissue that may be present in the breast.
Different from the model proposed by Aylward et al. [383], which fixes
at five the number of tissue classes, the model used by Ferrari et al. [280]
was formulated with the hypothesis that the number of tissue classes in the
effective region of the breast (after extracting the pectoral muscle) may vary
from two to four among the following possibilities:
from two to four among the following possibilities:
1. Uncompressed fatty tissues | represented by fatty tissues localized in
the periphery of the breast.
2. Fatty tissues | composed by fatty tissues that are localized next to
the uncompressed fatty tissues, and surround the denser areas of the
broglandular disc.
3. Nonuniform density tissues | including the density region that sur-
rounds the high-density portions of the broglandular disc extending
close to the chest wall.
4. High-density tissues | represented by the high-density portions of the
broglandular disc.
The hypothesis described above is based upon the fact that breast tissues
may naturally vary from one person to another, or even for the same person
during her lifetime due to aging [701] or hormone-replacement therapy [702].
A high-density and high-fat breast, for example, will likely present only two
categories. It was assumed that the data (the gray-level values in a segmented
mammogram) are generated by a Gaussian mixture model with a one-to-one
correspondence between the mixture model components and the (observed)
tissue classes (see also Section 9.9). Thus, the marginal probability of having
a gray level x is the sum of the probability over all mixture components, and
it is represented by a linear superposition of multiple weighted Gaussians as

p(x|Θ) = Σ_{i=1}^{K} Wi p(x|θi),   (8.69)
where the x values represent the gray-level values in the image; Wi are
the normalized mixing parameters (Σ_{i=1}^{K} Wi = 1, with 0 ≤ Wi ≤ 1);
p(x|θi) is the Gaussian PDF parameterized by θi = [μi, σi], that is, the
mean value μi and the standard deviation σi of the ith Gaussian kernel; the
vector Θ represents the collection of the parameters of the mixture model
(W1, W2, ..., WK, θ1, θ2, ..., θK); and K is the number of Gaussian kernels
(that is, tissue categories). The Gaussian kernel is represented as

p(xji ) = p 1 2 exp ; (x ; i )2 :
2i2 (8.70)
2i
In the case of using features other than the gray-level values of the image,
such as texture features, a multivariate Gaussian must be used instead of a
univariate Gaussian. In this case, the mean value and the standard deviation
of the gray-level values are replaced by the mean vector and the covariance
matrix of the feature vectors, respectively. In the model as above, the Bayesian
assumption is made: that the PDF associated with a pixel in the image is
independent of that of the other pixels given a class of tissue, and furthermore,
independent of its position in the image. The estimation of the parameters is
performed by using the EM algorithm, which is an iterative procedure that
maximizes the log-likelihood of the parameters of the model for a dataset
representing a PDF [703]. In the EM algorithm, the estimation of the model
parameters is performed in two consecutive steps: the E-step and the M-step.
In the E-step, the current set of parameters is used to compute the model.
The model is then assumed to be correct and the most likely distribution of
the data with respect to the model is found. In the M-step, the parameters
of the model are reevaluated with respect to the new data distribution by
maximizing the log-likelihood, given as

log L(Θ|X) = log Π_{i=1}^{N} p(xi|Θ),   (8.71)
where N is the number of pixels in the effective region of the breast (which is
the region demarcated by the breast boundary without the pectoral muscle),
and X represents the data sample. The procedure is iterated until the values
of log L(Θ|X) between two consecutive estimation steps increase by less than
1%, or the number of iterations reaches a specified limit (200 cycles).
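One EM iteration for the univariate mixture of Equations 8.69 to 8.71 can be sketched as follows (a minimal numpy illustration; the variance floor and the stopping test are simplified relative to the procedure described above):

```python
import numpy as np

def em_step(x, W, mu, sigma):
    """One EM iteration for a univariate Gaussian mixture (Eqs. 8.69-8.71).

    Returns updated (W, mu, sigma) and the log-likelihood of the input params.
    """
    x = np.asarray(x, dtype=float)
    # E-step: weighted kernel likelihoods (Eq. 8.70) and responsibilities.
    pk = W * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / \
         (np.sqrt(2 * np.pi) * sigma)
    r = pk / pk.sum(axis=1, keepdims=True)
    # M-step: reestimate the weights, means, and standard deviations.
    nk = r.sum(axis=0)
    W_new = nk / len(x)
    mu_new = (r * x[:, None]).sum(axis=0) / nk
    sigma_new = np.sqrt((r * (x[:, None] - mu_new) ** 2).sum(axis=0) / nk)
    log_lik = np.log(pk.sum(axis=1)).sum()        # Eq. 8.71
    return W_new, mu_new, np.maximum(sigma_new, 1e-3), log_lik
```

In use, the step is repeated until the returned log-likelihood increases by less than 1% between consecutive calls, or a cycle limit is reached; the EM guarantee is that the log-likelihood never decreases.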
Initialization of the model parameters: The parameters of the model
were first initialized by setting the center μi of each Gaussian to a random
value within the range defined by the minimum and maximum gray-level
values present in the effective area of the breast, and the weight of each
Gaussian as Wi = 1/K, where i = 1, 2, ..., K, is the index of the Gaussian
kernel. The variance σi² of each Gaussian was initialized to the nearest
distance to the other Gaussian kernels. If the variance σi² became less than
unity during the maximization step (the M-step), it was reinitialized with a
large random value. This procedure was intended to avoid shrinkage of the
variance to a small value. The EM estimation procedure was initialized and
repeated three times in order to minimize the chance of convergence to a
local minimum.
Model selection: Besides the initialization of the parameters, another
difficulty with the mixture model lies in choosing the number of components
that best suits the number of clusters (or density categories) present in the
image. Several methods have been proposed in the literature to estimate the
number of components in a mixture of Gaussians, among which Akaike's information
criterion, the Bayesian information criterion, and MDL are the most
commonly used methods [704, 705]. These estimators are motivated from different
theories that translate, in practice, to different penalty factors in the
formulation used to select the best model. The MDL criterion is based upon an
information-theoretic view of induction as data compression. It is equivalent
to the Bayesian information criterion, which gives a Bayesian interpretation.
Akaike's information criterion is derived from a different theoretical perspective:
it is an optimal selection rule in terms of prediction error; that is, the
criterion identifies a finite-dimensional model that, while approximating the
data provided, has good prediction properties. However, Akaike's information
criterion tends to yield models that are large and complex. In general, the
MDL criterion outperforms Akaike's and Bayesian criteria.
As discussed above, the number of density categories in the breast density
model can vary from two to four. Because no reliable prior information is available
about the number of tissue categories present in a given mammogram,
the MDL principle [699] was used to select the number K of the Gaussian
kernels of the model. The MDL principle deals with a tradeoff between the
maximum-likelihood (or minimum-error) criterion for fitting the model to a
dataset and the complexity of the model being designed [706]. Thus, if the
models designed by using two different values of K fit the data equally well,
the simpler model is used. The value of K was chosen so as to maximize the
quantity

log L(Θ|X) − [N(K)/2] log N,   (8.72)

where N(K) = K(2d + 1) is the number of free parameters in the mixture
model with K Gaussian kernels. The value of K ranges from two to four, and
d = 1 represents the dimension of the feature space.
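The selection of K by the MDL criterion of Equation 8.72 can be sketched as follows; this assumes the penalty uses the logarithm of the number of pixels N, and the log-likelihood values passed in are hypothetical placeholders for the fitted mixture models:

```python
import numpy as np

def mdl_score(log_lik, K, N, d=1):
    """MDL score (Eq. 8.72): log-likelihood minus a complexity penalty.

    N(K) = K(2d + 1) free parameters for K univariate kernels (d = 1).
    """
    n_params = K * (2 * d + 1)
    return log_lik - 0.5 * n_params * np.log(N)

def select_k(log_liks, N, candidates=(2, 3, 4)):
    """Pick the K in {2, 3, 4} that maximizes the penalized log-likelihood."""
    scores = {K: mdl_score(ll, K, N) for K, ll in zip(candidates, log_liks)}
    return max(scores, key=scores.get)
```

When a larger K improves the fit only marginally, the extra 3 log N penalty per kernel makes the simpler model win, which is exactly the tradeoff described above.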
8.9.3 Delimitation of the fibroglandular disc
After computing the parameters of the Gaussian mixture model, the maximum-likelihood
method was applied to the original image to produce a K-level
image that encoded, at each pixel, a cluster membership with the highest
likelihood among the K estimated Gaussian kernels. Figure 8.54 shows the
effective region of the image mdb042 used for the model estimation process
and the frequency distribution plots of the resulting Gaussian mixture components.
According to Karssemeijer [382], the density of the pectoral muscle is an
important item of information that can be used as a reference in the interpretation
of densities in the breast tissue area, because regions of similar brightness
or density will most likely correspond to fibroglandular tissue. Based upon
this observation and the breast density model described above, a postprocessing
stage was developed in order to determine the cluster region in the K-level
image, if one existed, that agreed with the fibroglandular disc. In this stage,
the K-level cluster was classified as the fibroglandular region if μK ≥ μP − σP,
where μP and σP are, respectively, the mean and standard deviation of the
gray-level values of the pectoral muscle region, and μK is the mean gray level
of the cluster K computed from the effective region of the given image. The
threshold value (μP − σP) used to determine the fibroglandular disc was arrived
at based upon experiments, because a direct comparison between the
densities of the pectoral muscle and the fibroglandular tissue, in terms of
physical parameters, would be difficult due to several influential factors of the
image-acquisition protocol [707].
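The pectoral-muscle-based rule for selecting fibroglandular clusters can be sketched as follows (a minimal illustration; the array names and the mask representation are ours):

```python
import numpy as np

def fibroglandular_clusters(k_level_image, gray_image, pectoral_mask):
    """Label as fibroglandular the clusters whose mean gray level is at least
    (mu_P - sigma_P), where mu_P and sigma_P are the mean and standard
    deviation of the gray levels in the pectoral muscle region."""
    mu_p = gray_image[pectoral_mask].mean()
    sigma_p = gray_image[pectoral_mask].std()
    threshold = mu_p - sigma_p
    selected = []
    for k in np.unique(k_level_image):
        cluster_mean = gray_image[k_level_image == k].mean()
        if cluster_mean >= threshold:
            selected.append(int(k))
    return selected
```

Clusters darker than the pectoral-muscle reference are discarded, leaving the dense regions that most likely correspond to the fibroglandular disc.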
Figures 8.54 (e) and (f) illustrate, respectively, the K-level image (K = 4
by MDL) resulting from the mixture model designed, and the fibroglandular
disc identified, for the mammogram in Figure 8.54 (a). The results for another
mammogram are provided in Figure 8.55, with K = 3 by MDL. A simplified
description of the methods is as follows:
1. Initialize the Gaussian mixture model parameters Θ (μi, σi², Wi; i =
1, 2, ..., K).
2. Repeat:
(a) E-step: Compute the model p(x|Θ) by maximizing the log-likelihood
and assuming the parameter vector Θ to be correct.
(b) M-step: Reevaluate Θ based upon the new data distribution computed
in the previous step.
Until log L(Θ|X) − [N(K)/2] log N increases by less than 1%.
3. Obtain the K-level image by encoding in each pixel the cluster membership
with the highest likelihood.
Figure 8.54 (a) and (b).
Figure 8.54 (c) and (d). Panel (d) plots the frequency distribution versus
gray-level values: the image histogram, the Gaussian components for
uncompressed fat, fat, nonuniform density, and high density, and the mixture
summation.
Figure 8.54 (e) and (f).
FIGURE 8.54
(a) Original mammographic image mdb042 from the Mini-MIAS
database [376]. (b) Breast contour and pectoral muscle edge detected
automatically. (c) Effective region of the mammogram obtained after
performing the segmentation steps. (d) Histogram of the effective area of the
mammogram and the mixture of Gaussian components. (e) Four-level image
resulting from the EM algorithm. (f) Fibroglandular disc obtained after
thresholding. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan,
R.A. Borges, and A.F. Frere, "Segmentation of the fibro-glandular
disc in mammograms using Gaussian mixture modelling", Medical and
Biological Engineering and Computing, 42: 378–387, 2004. © IFMBE.

4. Delimit the fibroglandular disc based upon the density of the pectoral muscle.
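The EM steps listed above can be illustrated with a minimal sketch of fitting a one-dimensional Gaussian mixture to gray-level samples. The initialization, the fixed iteration count, and the function names below are simplifications assumed for illustration, not the implementation of Ferrari et al.:

```python
import math


def gaussian_pdf(x, mu, var):
    # Value of a univariate Gaussian PDF with mean mu and variance var.
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)


def em_gmm(samples, k, iters=50):
    # Initialize means spread over the data range; equal weights, broad variances.
    lo, hi = min(samples), max(samples)
    mu = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    var = [((hi - lo) / k) ** 2 + 1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities p(component i | x) under current parameters.
        resp = []
        for x in samples:
            p = [w[i] * gaussian_pdf(x, mu[i], var[i]) for i in range(k)]
            s = sum(p) or 1e-300
            resp.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, and variances.
        for i in range(k):
            ni = sum(r[i] for r in resp) + 1e-12
            w[i] = ni / len(samples)
            mu[i] = sum(r[i] * x for r, x in zip(resp, samples)) / ni
            var[i] = sum(r[i] * (x - mu[i]) ** 2
                         for r, x in zip(resp, samples)) / ni + 1e-6
    return w, mu, var
```

With the MDL criterion described above, this procedure would be repeated for several values of K, and the penalized log-likelihood compared; the K-level image of step 3 then assigns each pixel the index of its most responsible component.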

Evaluation of the results of segmentation: In the work of Ferrari et al. [280], 84 images randomly chosen from the Mini-MIAS database [376] were used to test the segmentation of the fibroglandular disc. All images were MLO views with 200 μm sampling interval and 8 b gray-level quantization. In order to reduce the processing time, all images were downsampled with a fixed sampling distance such that the original images with a matrix size of 1 024 × 1 024 pixels were reduced to 256 × 256 pixels. The results obtained with the downsampled images were mapped to the original mammograms for subsequent analysis and display.
The results obtained were evaluated in consultation with a radiologist ex-
perienced in mammography. The test images were displayed on a computer
monitor. By using the Gimp program [380], the contrast, brightness, and
zoom options were provided for improved visualization and assessment of the
results of the segmentation procedure.
Because the delineation of the fibroglandular disc is a difficult and time-consuming task, and also because the segmentation method may provide disjoint regions, the results were evaluated by using the following subjective procedure. The segmented and original images were simultaneously presented to the radiologist on a computer monitor. The radiologist visually compared the two images and assigned one of five categories to the result of segmentation, as follows:

1. Excellent: Agreement between the segmented disc and the observed disc
on the mammogram is higher than 80%.
2. Good: Agreement between the segmented disc and the observed disc on
the mammogram is between 60 and 80%.

Figure 8.55 (a) and (b) [part (b): plot of frequency distribution versus gray-level values (0 to 250), showing the image histogram, the Gaussian components (uncompressed fat, fat, and high density), and the mixture summation]

Figure 8.55 (c) and (d)


FIGURE 8.55
(a) Breast contour and pectoral muscle edge superimposed on the original image mdb008 [376]. (b) Histogram of the effective area of the mammogram and the mixture of Gaussian components. (c) Three-level image resulting from the EM algorithm. (d) Fibroglandular disc obtained after thresholding. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, R.A. Borges, and A.F. Frere, "Segmentation of the fibro-glandular disc in mammograms using Gaussian mixture modelling", Medical and Biological Engineering and Computing, 42: 378–387, 2004. © IFMBE.

3. Average: Agreement between the segmented disc and the observed disc
on the mammogram is between 40 and 60%.

4. Poor: Agreement between the segmented disc and the observed disc on
the mammogram is between 20 and 40%.

5. Complete failure: Agreement between the segmented disc and the ob-
served disc on the mammogram is less than 20%.

The overall results of segmentation of the fibroglandular disc were considered to be promising. Approximately 81% of the cases (68 images) were rated
as acceptable for CAD purposes (Categories 1 and 2). In Category 1, the
results for 10 images were considered by the radiologist to be underestimated,
and the results for 11 images to be overestimated. In Category 2, the result
for one image was considered to be underestimated, and the results for two
images to be overestimated. The results for about 19% of the cases (16 images
in Categories 3, 4, and 5) were considered to be unsatisfactory.
In spite of the attractive features of the EM algorithm, such as reliable convergence, well-established theoretical basis, and easy implementation, a major drawback is the problem of convergence to local maxima, which is related to the initialization of the parameters of the model. A more efficient minimization technique, such as the deterministic annealing EM algorithm (DAEM) [708], may give better performance. The DAEM algorithm uses the principle of maximum entropy and analogy with statistical mechanics to reformulate the maximization step of the EM algorithm. The log-likelihood function is replaced by a posterior PDF that is parameterized by β, with 1/β corresponding to the temperature in analogy to the annealing process. This formulation of the PDF provides robustness to DAEM and minimizes the possibility of convergence to local maxima.
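The effect of the temperature parameter can be sketched as follows. The tempered posterior below, in which each weighted component likelihood is raised to the power β and renormalized, is a common formulation of deterministic annealing assumed here for illustration; it is not reproduced from the DAEM paper:

```python
import math


def annealed_responsibilities(x, weights, mus, variances, beta):
    # Tempered posterior: each component's weighted Gaussian likelihood is
    # raised to the power beta (1/beta plays the role of temperature) and
    # renormalized.  At small beta the responsibilities are nearly uniform
    # (high temperature); as beta approaches 1, the standard EM posterior
    # is recovered.
    p = [(w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)) ** beta
         for w, m, v in zip(weights, mus, variances)]
    s = sum(p)
    return [pi / s for pi in p]
```

Starting the iterations at a small β and gradually increasing it toward 1 smooths the likelihood surface early on, which is what reduces the sensitivity to initialization.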
The application of the DAEM algorithm to the images in the work of Ferrari et al. [280] provided limited improvement in terms of the fit between the sums of the Gaussians in the mixture models and the corresponding true histograms. However, the change in terms of the final result of segmentation of the fibroglandular disc with respect to the result of the EM algorithm was not significant.
The stochastic EM algorithm [709] is a variant of EM with the aim to avoid the dependence of the final results upon the initialization. In the stochastic EM algorithm, after the E-step, a stochastic classification step (the S-step) is added to obtain partitions of the data according to posterior distributions. In the absence of any other information, random initialization is applied. The M-step uses the partitions to compute new estimates of the mean vector and the covariance matrix. The stochastic EM algorithm presents the following improvements in comparison with the standard EM algorithm: it is adequate to know an upper bound on the number of classes; the solution is essentially independent of the initialization; and the speed of convergence is appreciably improved.
Other features such as texture may also be used as additional information in order to improve the results. Because the segmentation method is based upon the classification of each individual pixel in the image, the resulting images are often not compact, and a follow-up procedure may be necessary in order to demarcate the convex hull of the fibroglandular disc.

8.9.4 Motivation for directional analysis of mammograms


Most of the concepts used in image processing and computer vision for oriented pattern analysis have their roots in neurophysiological studies of the mammalian visual system. Campbell and Robson [710] suggested that the human visual system decomposes retinal images into a number of filtered images, each of which contains intensity variations over a narrow range of frequency and orientation. Marcelja [711] and Jones and Palmer [390] demonstrated that simple cells in the primary visual cortex have receptive fields that are restricted to small regions of space and are highly structured, and that their behavior corresponds to local measurements of frequency.
According to Daugman [389, 712], a suitable model for the 2D receptive field profiles measured experimentally in mammalian cortical simple cells is the parameterized family of 2D Gabor functions (see Section 8.4). Another important characteristic of Gabor functions or filters is their optimal joint resolution in both space and frequency, which suggests that Gabor filters are appropriate operators for tasks requiring simultaneous measurement in the two domains [495]. Except for the optimal joint resolution possessed by the Gabor functions, the DoG and difference of offset Gaussian filters used by Malik and Perona [713] have similar properties.
Gabor filters have been presented in several works on image processing [495, 500, 714]; however, most of these works are related to the segmentation and analysis of texture. Rolston and Rangayyan [542, 543] proposed methods for directional decomposition and analysis of linear components in images using multiresolution Gabor filters; see Section 8.4.
Multiresolution analysis by using Gabor filters has natural and desirable properties for the analysis of directional information in images; most of these properties are based upon biological vision studies as described above. Other multiresolution techniques have also been used with success in addressing related topics, such as texture analysis and segmentation, and image enhancement. Chang and Kuo [715], for instance, developed a new method for texture classification that uses a tree-structured wavelet transform for decomposing an image. Image decomposition was performed by taking into account the energy of each subimage instead of decomposing subsignals in the low-frequency channels.
Laine and Fan [488] presented a new method for computing features for texture segmentation based upon a multiscale representation. They used a discrete wavelet packet frame to decompose an image at different levels of resolution. The levels were analyzed by using two algorithms for envelope detection based upon the Hilbert transform and zero crossings. Laine and Fan stated that the selection of the filters for feature extraction should take into account three important features: symmetry, frequency response, and boundary accuracy. However, it should be noted that the Gabor function is the only function that can achieve optimal joint localization (tradeoff between boundary accuracy and good frequency response).
Li et al. [716] performed analysis and comparison of the directional selectivity of wavelet transforms by using three different types of frequency decomposition: rectangular, hexagonal, and radial-angular decomposition. The methods were applied to detect spiculated breast masses and lung nodules. According to Li et al., the best results were achieved by using radial-angular decomposition with both mammographic and chest radiographic images. Qian et al. [623] presented three image processing modules for mass detection in digital mammography: a tree-structured filter module for noise suppression, a directional wavelet transform (DWT) technique for the decomposition of mammographic directional features, and a tree-structured wavelet transform for image enhancement. By making the parameters of the three methods adaptive, Qian et al. improved the results obtained in previous related works [716, 717, 718]. In the DWT method, which is related to the technique proposed by Ferrari et al. [381], the number of filter orientations was adaptively selected in the range 4 to 32, corresponding to angular bandwidth in the range 45° to 5.63°. The adaptive DWT module provided the best results, with an overall classification efficiency of 0.91.
Chang and Laine [624] proposed a method designed with the goal of enhancing mammograms based upon information regarding orientation. They used a set of separable steerable filters to capture subtle features at different scales and orientations spanning an over-complete multiscale representation computed via a fast wavelet transform. By using the filtered output images, they generated a coherence image and phase information that were used by a nonlinear operator applied to the wavelet coefficients to enhance the directional information in digital mammograms. Finally, the enhanced image was obtained from the modified wavelet coefficients via an inverse fast wavelet transform. (See Section 7.7 for further discussion on related topics.)
Inspired by studies on the Gabor function and the related topics mentioned above, Ferrari et al. [381] proposed a scheme based upon a bank of self-similar Gabor functions and the KLT to analyze directional components of images. The method was applied to detect global signs of asymmetry in the fibroglandular discs of the left and right mammograms of a given subject. The related methods are described in the following paragraphs.

8.9.5 Directional analysis of fibroglandular tissue


Ferrari et al. [381] used the formulation of 2D Gabor functions as a Gaussian modulated by a complex sinusoid, specified by the frequency of the sinusoid W and the standard deviations σ_x and σ_y of the Gaussian envelope as

\[
\psi(x, y) = \frac{1}{2 \pi \sigma_x \sigma_y} \exp\left[ -\frac{1}{2} \left( \frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} \right) + j\, 2 \pi W x \right]. \tag{8.73}
\]
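Equation 8.73 can be sampled directly to obtain a discrete Gabor kernel. The following sketch (the function names are illustrative) evaluates the even-symmetric and odd-symmetric parts at a point, and samples the even part on a square grid:

```python
import math


def gabor(x, y, sigma_x, sigma_y, W):
    # Complex 2D Gabor function of Equation 8.73: a Gaussian envelope with
    # standard deviations (sigma_x, sigma_y), modulated by a complex sinusoid
    # of frequency W along the x axis.  Returns (real, imaginary) parts.
    envelope = math.exp(-0.5 * (x ** 2 / sigma_x ** 2 + y ** 2 / sigma_y ** 2))
    norm = 1.0 / (2.0 * math.pi * sigma_x * sigma_y)
    phase = 2.0 * math.pi * W * x
    return (norm * envelope * math.cos(phase),   # even-symmetric part
            norm * envelope * math.sin(phase))   # odd-symmetric part


def gabor_kernel(size, sigma_x, sigma_y, W):
    # Sample the even-symmetric part on a size x size grid centered at the origin.
    half = size // 2
    return [[gabor(c - half, r - half, sigma_x, sigma_y, W)[0]
             for c in range(size)] for r in range(size)]
```

At the origin the sinusoid has zero phase, so the kernel's center value is simply the normalization factor 1/(2π σ_x σ_y), and the even part is symmetric about both axes.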

Despite this general form, there is no standard and precise definition of a 2D Gabor function, with several variations appearing in the literature [500, 714, 496]; see Sections 8.4 and 8.10.3. Most of these variations are related to the use of different measures of width for the Gaussian envelope and the frequency of the sinusoid [719]. Based upon neurophysiological studies [711, 390] and wavelet theory [386, 720], the Gabor function, normalized in an appropriate way, can be used as a mother wavelet to generate a family of nonorthogonal Gabor wavelets [385]. However, as pointed out by Jain and Farrokhnia [500], although the Gabor function can be an admissible wavelet, by removing the DC response of the function, it does not result in an orthogonal decomposition, which means that a wavelet transform based upon the Gabor wavelet includes redundant information. A formal mathematical derivation of 2D Gabor wavelets, along with the computation of the frame bounds for which this family of wavelets forms a tight frame, is provided by Lee [385]. Despite the lack of orthogonality presented by the Gabor wavelets, the Gabor function is the only function that can achieve the theoretical limit for joint resolution of information in both the space and the frequency domains.
Ferrari et al. [381] used the phrase "Gabor wavelet representation" to represent a bank of Gabor filters normalized to have DC responses equal to zero and designed in order to have low redundancy in the representation. The Gabor wavelet representation used was as proposed by Manjunath and Ma [384]. The Gabor wavelets were obtained by dilation and rotation of ψ(x, y) as in Equation 8.73 by using the generating function
\[
\psi_{m,n}(x, y) = a^{-m}\, \psi(x', y'), \quad a > 1, \quad m, n \ \text{integers}, \tag{8.74}
\]
\[
x' = a^{-m} \left[ (x - x_0) \cos \theta + (y - y_0) \sin \theta \right],
\]
\[
y' = a^{-m} \left[ -(x - x_0) \sin \theta + (y - y_0) \cos \theta \right],
\]
where (x_0, y_0) is the center of the filter in the spatial domain, θ = nπ/K, K is the total number of orientations desired, and m and n indicate the scale and orientation, respectively. The scale factor a^{−m} in Equation 8.74 is meant to ensure that the energy is independent of m. Examples of the Gabor wavelets used in the work of Ferrari et al. [381] are shown in Figure 8.56.
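The generating function of Equation 8.74 amounts to rotating and dilating the coordinates before evaluating the mother wavelet, with an extra factor a^{−m} on the result. A sketch follows; the default parameter values of the mother Gabor function are assumptions chosen only for illustration:

```python
import math


def base_gabor_even(x, y, sigma_x=2.0, sigma_y=3.0, W=0.1):
    # Even-symmetric part of the mother Gabor function (Equation 8.73).
    # The default sigma_x, sigma_y, and W are illustrative values.
    g = math.exp(-0.5 * (x ** 2 / sigma_x ** 2 + y ** 2 / sigma_y ** 2))
    return g * math.cos(2.0 * math.pi * W * x) / (2.0 * math.pi * sigma_x * sigma_y)


def gabor_wavelet(x, y, m, n, K, a=2.0, x0=0.0, y0=0.0):
    # Equation 8.74: dilation by a**(-m) and rotation by theta = n*pi/K of the
    # mother wavelet; the leading a**(-m) factor keeps the energy independent
    # of the scale index m.
    theta = n * math.pi / K
    xp = a ** (-m) * ((x - x0) * math.cos(theta) + (y - y0) * math.sin(theta))
    yp = a ** (-m) * (-(x - x0) * math.sin(theta) + (y - y0) * math.cos(theta))
    return a ** (-m) * base_gabor_even(xp, yp)
```

For m = 0 and n = 0 the coordinates are unchanged and the mother wavelet is recovered; n = K/2 rotates the filter by 90°.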
Equation 8.73 can be written in the frequency domain as
\[
\Psi(u, v) = \frac{1}{2 \pi \sigma_u \sigma_v} \exp\left[ -\frac{1}{2} \left( \frac{(u - W)^2}{\sigma_u^2} + \frac{v^2}{\sigma_v^2} \right) \right], \tag{8.75}
\]
where σ_u = 1/(2π σ_x) and σ_v = 1/(2π σ_y). The design strategy used is to project the filters so as to ensure that the half-peak magnitude supports of the filter responses in the frequency spectrum touch one another, as shown in Figure 8.57. In this manner, it can be ensured that the filters will capture most of the information with minimal redundancy.
Other related methods proposed in the literature use either complex-valued Gabor filters [496] or pairs of Gabor filters with quadrature-phase relationship [495]. In the formulation of Ferrari et al. [381], the Gabor wavelet representation uses only real-valued, even-symmetric filters oriented over a range of 180° only, as opposed to the full 360° range commonly described in the literature. Because the Gabor filters are used to extract meaningful features from real images (and hence with Hermitian frequency response [387]), the response to the even-symmetric filter components will remain unchanged for filters oriented 180° out of phase and the odd-symmetric component will be negated. Thus, based upon this fact, and also on psychophysical grounds provided by Malik and Perona [713], Manjunath and Ma [384] ignored one half of the orientations in their Gabor representation, as illustrated in Figure 8.57. In order to ensure that the bank of Gabor filters designed as above becomes a family of admissible 2D Gabor wavelets [385], the filters ψ(x, y) must satisfy the admissibility condition of finite energy [386], which implies that their Fourier transforms are pure bandpass functions having zero response at DC. This condition was achieved by setting the DC gain of each filter as Ψ(0, 0) = 0, which ensures that the filters do not respond to regions with constant intensity.
The approach described above results in the following formulas for computing the filter parameters σ_u and σ_v:
\[
a = \left( \frac{U_h}{U_l} \right)^{\frac{1}{S - 1}}, \tag{8.76}
\]

(a) (b)

(c) (d)
FIGURE 8.56
Examples of Gabor wavelets in the space domain, with four orientations (θ = 0°, 45°, 90°, and 135°) and four scales (σ_x = 11, 5, 2, 1 and σ_y = 32, 16, 7, 4 pixels). The size of each wavelet image shown is 121 × 121 pixels. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.

Figure 8.57 (a) [plot of the filter bank in the (u, v) frequency plane, in cycles/pixel, for U_l = 0.05, U_h = 0.45, S = 4, and K = 12]



Figure 8.57 (b) [plot of the filter bank in the (u, v) frequency plane, in cycles/pixel, for U_l = 0.05, U_h = 0.45, S = 6, and K = 6]
FIGURE 8.57
Examples of Gabor filters in the frequency domain. Each ellipse represents the range of the corresponding filter response from 0.5 to 1.0 in squared magnitude. The plots (a) and (b) illustrate two ways of dividing the frequency spectrum by changing the U_l, U_h, S, and K parameters of the Gabor representation. Plot (a) represents the filter bank used in the work of Ferrari et al. [381] for the analysis of mammograms. The redundancy in the representation is minimized by ensuring that the half-peak magnitude supports of the filter responses touch one another. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.
\[
\sigma_u = \frac{(a - 1)\, U_h}{(a + 1) \sqrt{2 \ln 2}}, \tag{8.77}
\]
\[
\sigma_v = \frac{\tan\left( \frac{\pi}{2K} \right) \left[ U_h - \frac{\sigma_u^2}{U_h}\, 2 \ln 2 \right]}{\sqrt{2 \ln 2 - \frac{(2 \ln 2)^2 \sigma_u^2}{U_h^2}}}, \tag{8.78}
\]
where U_l and U_h denote the lower and upper center frequencies of interest. The K and S parameters are, respectively, the number of orientations and the number of scales in the desired multiresolution decomposition procedure. The frequency of the sinusoid W is set equal to U_h, and m = 0, 1, …, S − 1.
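Equations 8.76 through 8.78 can be computed directly from U_l, U_h, S, and K; the following sketch reproduces the formulas as given above:

```python
import math


def gabor_bank_params(Ul, Uh, S, K):
    # Equations 8.76-8.78: scale factor and frequency-domain Gaussian widths
    # for a bank of S scales and K orientations spanning [Ul, Uh].
    a = (Uh / Ul) ** (1.0 / (S - 1))                      # Equation 8.76
    two_ln2 = 2.0 * math.log(2.0)
    sigma_u = ((a - 1.0) * Uh) / ((a + 1.0) * math.sqrt(two_ln2))  # Eq. 8.77
    sigma_v = (math.tan(math.pi / (2.0 * K))
               * (Uh - two_ln2 * sigma_u ** 2 / Uh)
               / math.sqrt(two_ln2 - two_ln2 ** 2 * sigma_u ** 2 / Uh ** 2))  # Eq. 8.78
    return a, sigma_u, sigma_v
```

For the values used by Ferrari et al. (U_l = 0.05, U_h = 0.45, S = 4, K = 12), this yields a scale factor a of about 2.08, that is, approximately one octave between adjacent scales.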
Because of the lack of orthogonality of the Gabor wavelets, the computation of the expansion coefficients becomes difficult. This task, however, is trivial when using a set of orthogonal functions, because the expansion coefficients, given by
\[
c_{m,n} = \langle f(x, y), \psi_{m,n}(x, y) \rangle = \int_x \int_y f(x, y)\, \psi_{m,n}(x, y)\, dx\, dy, \tag{8.79}
\]
are the projections of the image f(x, y) onto the same set of functions, where ⟨ , ⟩ denotes the inner product. In this case, the analysis and synthesis windows are the same, and the original image can be reconstructed as
\[
f(x, y) = \sum_{m} \sum_{n} \langle f(x, y), \psi_{m,n}(x, y) \rangle\, \psi_{m,n}(x, y). \tag{8.80}
\]
However, the joint localization and orthogonality of the set of functions are properties that cannot be met simultaneously. Much work has been done to overcome the problem of the lack of orthogonality of Gabor wavelets, with most of them using dual frames or biorthogonal functions [720], iterative methods [712], or adjustment of the phase-space sampling in order to obtain a reasonably tight frame [385]. In the dual-frame approach, for instance, the set of projection coefficients c_{m,n} = ⟨f(x, y), ψ̃_{m,n}(x, y)⟩ of the dual frame can be obtained by minimizing the cost function
\[
\epsilon = \left\| f(x, y) - \sum_{m} \sum_{n} c_{m,n}\, \psi_{m,n}(x, y) \right\|^2, \tag{8.81}
\]
where ψ̃_{m,n} is the dual frame.
In directional filtering and analysis, the interest lies in image analysis without the requirement of exact reconstruction (synthesis) of the image. Therefore, instead of using the wavelet coefficients, Ferrari et al. [381] used the magnitude of the filter response, computed as
\[
a_{m,n} = \left| f(x, y) * \psi_{m,n}^{\mathrm{even}}(x, y) \right|, \tag{8.82}
\]
where ψ^{even}_{m,n}(x, y) indicates only the even-symmetric part of the complex Gabor filter, and * represents 2D convolution.
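Equation 8.82 can be sketched as a direct 2D convolution with the even-symmetric (cosine) part of the filter. The zero-DC normalization below is a simple mean subtraction, an assumption for illustration rather than the exact design of Section 8.4:

```python
import math


def even_gabor_kernel(size, sigma_x, sigma_y, W):
    # Even-symmetric (cosine) part of the Gabor filter of Equation 8.73,
    # sampled on a size x size grid.  Subtracting the mean forces a zero DC
    # gain, so the filter does not respond to constant-intensity regions.
    half = size // 2
    k = [[math.exp(-0.5 * ((c - half) ** 2 / sigma_x ** 2
                           + (r - half) ** 2 / sigma_y ** 2))
          * math.cos(2.0 * math.pi * W * (c - half))
          for c in range(size)] for r in range(size)]
    mean = sum(map(sum, k)) / size ** 2
    return [[v - mean for v in row] for row in k]


def filter_magnitude(image, kernel):
    # Equation 8.82: magnitude of the response to the even-symmetric filter,
    # computed by direct 2D convolution with zero padding at the borders.
    rows, cols = len(image), len(image[0])
    size = len(kernel)
    half = size // 2
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = 0.0
            for i in range(size):
                for j in range(size):
                    rr, cc = r + half - i, c + half - j
                    if 0 <= rr < rows and 0 <= cc < cols:
                        acc += kernel[i][j] * image[rr][cc]
            out[r][c] = abs(acc)
    return out
```

A flat image produces (near) zero magnitude everywhere away from the borders, while a line oriented along the filter's preferred direction produces a strong response.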
By adjusting the parameters U_l and U_h in the Gabor representation of Manjunath and Ma [384], the range of the frequency spectrum to be used for multiresolution analysis may be selected. The area of each ellipse indicated in Figure 8.57 represents the spectrum of frequencies covered by the corresponding Gabor filter. Once the range of the frequency spectrum is adjusted, the choice of the number of scales and orientations may be made in order to cover the range of the spectrum as required. The choice of the number of scales (S) and orientations (K) used in the work of Ferrari et al. [381] for processing mammographic images was based upon the resolution required for detecting oriented information with high selectivity [388, 389]. The frequency bandwidths of the simple and complex cells in the mammalian visual system have been found to range from 0.5 to 2.5 octaves, clustering around 1.2 octaves and 1.5 octaves, and their angular bandwidth is expected to be smaller than 30° [390, 389]. By selecting U_l = 0.05, U_h = 0.45, S = 4, and K = 12 for processing mammographic images, Ferrari et al. [381] indirectly set the Gabor representation to have a frequency bandwidth of approximately one octave and an angular bandwidth of 15°. The effects of changing the U_l, U_h, S, and K parameters of the Gabor representation as above on frequency localization are shown in Figure 8.57.
The directional analysis method proposed by Ferrari et al. [381] starts by computing the Gabor wavelets using four scales (S = 4) and twelve directions (K = 12), with U_l = 0.05 and U_h = 0.45 cycles/pixel. The parameters U_l and U_h were chosen according to the scales of the details of interest in the mammographic images. Differing from other Gabor representations [500, 714, 385], it should be noted that the parameters U_l, U_h, S, and K in the representation described above have to be adjusted in a coordinated manner, by taking into account the desirable frequency and orientation bandwidths. (In the Gabor representation of Manjunath and Ma [384], the filters are designed so as to represent an image with minimal redundancy; the ellipses as in Figure 8.57 touch one another.)
The filter outputs for each orientation and the four scales were analyzed by using the KLT (see Section 11.6.2). The KLT was used to select the principal components of the filter outputs, preserving only the most relevant directional elements present at all of the scales considered. Results were then combined as illustrated in Figure 8.58, in order to allow the formation of an S-dimensional vector x for each pixel from each set of the corresponding pixels in the filtered images (S = 4).
The vectors corresponding to each position in the filter responses were used to compute the mean vector m and the covariance matrix Σ. The eigenvalues and eigenvectors of the covariance matrix were then computed and arranged in descending order in a matrix A such that the first row of A was the eigenvector corresponding to the largest eigenvalue, and the last row was the eigenvector corresponding to the smallest eigenvalue. The first N principal components corresponding to 95% of the total variance were then selected, and used to represent the oriented components at each specific orientation. The principal

FIGURE 8.58
Formation of the vector x from the corresponding pixels of the same orientation and four scales. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.

components were computed as y = A(x − m). Analysis of variance was performed by evaluating the eigenvalues of the matrix A. The KLT method is optimal in the sense that it minimizes the MSE between the vectors x and their resulting approximations y. The result of application of the KLT to all orientations, as described above, is a set of K images, where K is the number of orientations.
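The per-pixel KLT over the S scales may be sketched as follows. For simplicity, only the first principal component is extracted here, by power iteration on the covariance matrix, rather than all components up to 95% of the total variance:

```python
def principal_component_image(filtered, iters=200):
    # 'filtered' is a list of S same-sized images: the responses at one
    # orientation over S scales.  Each pixel yields an S-dimensional vector x;
    # the projection y1 = a1 . (x - m) onto the leading eigenvector of the
    # covariance matrix is returned as an image.
    S = len(filtered)
    rows, cols = len(filtered[0]), len(filtered[0][0])
    n = rows * cols
    mean = [sum(img[r][c] for r in range(rows) for c in range(cols)) / n
            for img in filtered]
    # Covariance matrix of the per-pixel vectors.
    cov = [[sum((filtered[i][r][c] - mean[i]) * (filtered[j][r][c] - mean[j])
                for r in range(rows) for c in range(cols)) / n
            for j in range(S)] for i in range(S)]
    # Leading eigenvector by power iteration.
    v = [1.0] * S
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(S)) for i in range(S)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return [[sum(v[i] * (filtered[i][r][c] - mean[i]) for i in range(S))
             for c in range(cols)] for r in range(rows)]
```

Repeating this for each of the K orientations yields the set of K principal-component images described above; retaining further components until 95% of the eigenvalue sum is reached would require a full eigendecomposition of the S x S covariance matrix.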
Because Gabor wavelets are nonorthogonal functions, they do not have a perfect reconstruction condition. This fact results in a small amount of out-of-band energy interfering with the reconstruction, which is translated into artifacts in the reconstructed image. Although reconstruction of the original image from the filtered images is not required in directional analysis, the effects of spectral leakage need to be removed. For this purpose, the images resulting from the KLT were thresholded by using the maximum of Otsu's threshold value [591] (see Section 8.3.2) computed for the K images.
Phase and magnitude images, indicating the local orientation and intensity, were composed by vector summation of the K filtered images [387], as illustrated in Figure 8.10. Rose diagrams were composed from the phase and magnitude images to represent the directional distribution of the fibroglandular tissue in each mammogram. The complete algorithm for directional analysis based upon Gabor filtering is summarized in Figure 8.59.
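The composition of magnitude and phase images by vector summation of the K oriented responses can be sketched as follows; treating each filter response as a vector along its filter orientation is a simplification assumed here for illustration:

```python
import math


def compose_orientation(responses, angles_deg):
    # Vector summation of K oriented filter responses: each orientation k
    # contributes a vector of length response_k along its angle; the resultant
    # gives a magnitude image and a phase (local orientation) image.
    rows, cols = len(responses[0]), len(responses[0][0])
    mag = [[0.0] * cols for _ in range(rows)]
    phase = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vx = sum(resp[r][c] * math.cos(math.radians(a))
                     for resp, a in zip(responses, angles_deg))
            vy = sum(resp[r][c] * math.sin(math.radians(a))
                     for resp, a in zip(responses, angles_deg))
            mag[r][c] = math.hypot(vx, vy)
            phase[r][c] = math.degrees(math.atan2(vy, vx))
    return mag, phase
```

Binning the phase values, weighted by the corresponding magnitudes, then yields the rose diagram of directional distribution used in the following section.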

FIGURE 8.59
Block diagram of the procedure for directional analysis using Gabor wavelets. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.
8.9.6 Characterization of bilateral asymmetry
Figures 8.60 and 8.61 show two pairs of images from the Mini-MIAS database [376]. All of the images in this database are 1 024 × 1 024 pixels in size, with 200 μm sampling interval and 8 b gray-level quantization. Figures 8.60 (a) and (b) are, respectively, the MIAS images mdb043 and mdb044 representing the right and left MLO mammograms of a woman, classified as a normal case. Figures 8.61 (a) and (b) show the MIAS images mdb119 and mdb120, classified as a case of architectural distortion.
Only the fibroglandular disc of each mammogram was used to compute the directional components, due to the fact that most of the directional components such as connective tissues and ligaments exist in this specific region of the breast. The fibroglandular disc ROIs of the mammograms selected for illustration are shown in Figures 8.60 (c), 8.60 (d), 8.61 (c), and 8.61 (d).
Figure 8.62 shows the principal components obtained by applying the KLT to the Gabor filter outputs at the orientation of 135° and four scales for the ROI in Figure 8.61 (d). It can be seen that the relevant information is concentrated in the first two principal components. This is evident based upon an evaluation of the eigenvalues, listed in the caption of Figure 8.62. In this example, only the first two principal components were used to represent the oriented components in the 135° orientation, because their eigenvalues add to 99.34% (> 95%) of the total variance. After thresholding the filtered images with Otsu's method in order to eliminate the effects of spectral leakage, the magnitude and phase images were composed by vector summation, as illustrated in Figures 8.63 and 8.64.
Figures 8.63 (a) – (d) show the result of Gabor filtering for the normal case in Figure 8.60. The rose diagrams in Figures 8.63 (c) and (d) show the distribution of the tissues in the fibroglandular discs of both the left and right views. An inspection of the rose diagrams shows that the results obtained are in good agreement with visual analysis of the filtered results in Figures 8.63 (a) and (b), and the corresponding ROIs in Figures 8.60 (c) and (d). The most relevant angular information indicated in the rose diagrams are similar.
The results of the filtering process for the case of architectural distortion (see Figure 8.61), along with the respective rose diagrams, are shown in Figure 8.64. By analyzing the results of filtering, we can notice a modification of the tissue pattern caused by the presence of a high-density region. An important characteristic of the Gabor filters may be seen in the result: the filters do not respond to regions with nearly uniform intensity, that is, to regions without directional information [see Figures 8.61 (c) and 8.64 (a)]. This is an important outcome that could be used to detect asymmetric dense regions and local foci of architectural distortion. The global distortion of the tissue flow pattern is readily seen by comparison of the rose diagrams of the left and right breasts.

(a) (b)

(c) (d)
FIGURE 8.60
Images mdb043 and mdb044 of a normal case [376]. (a) and (b) Original images (1 024 × 1 024 pixels at 200 μm/pixel). The breast boundary (white) and pectoral muscle edge (black) detected are also shown. (c) and (d) Fibroglandular discs segmented and enlarged (512 × 512 pixels). Histogram equalization was applied to enhance the global contrast of each ROI for display purposes only. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.

(a) (b)

(c) (d)
FIGURE 8.61
Images mdb119 and mdb120 of a case of architectural distortion [376]. (a) and (b) Original images (1 024 × 1 024 pixels at 200 μm/pixel). The breast boundary (white) and pectoral muscle edge (black) detected are also shown. (c) and (d) Fibroglandular discs segmented and enlarged (512 × 512 pixels). Histogram equalization was applied to enhance the global contrast of each ROI for display purposes only. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.

(a) (b)

(c) (d)
FIGURE 8.62
The images (a), (b), (c), and (d) are, respectively, the first, second, third, and fourth components resulting from the KLT applied to the Gabor filter responses with orientation 135° to the ROI of the image mdb120 shown in Figure 8.61 (d). The eigenvalues of the four components above are: λ1 = 10.80, λ2 = 0.89, λ3 = 0.09, and λ4 = 0.01. The images were full brightness-contrast corrected for improved visualization. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frere, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.
The rose diagrams in Figures 8.63 and 8.64 present a strong visual associ-
ation with the directional components of the corresponding images, and may
be used by radiologists as an aid in the interpretation of mammograms.
Feature extraction and pattern classification: In order to characterize bilateral asymmetry in an objective manner, three features were derived: the entropy H (Equation 8.10), the first moment M1 (Equation 8.6), and the second central moment or variance M2 (Equation 8.7) of the rose diagram given by the difference between the rose diagrams computed for the left and right mammograms of the same individual.
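The three features can be sketched as follows. The bin-wise absolute difference and its normalization to a probability distribution are assumptions made for illustration, since Equations 8.6, 8.7, and 8.10 are not reproduced in this section:

```python
import math


def rose_difference_features(rose_left, rose_right, angles_deg):
    # Features of the difference between two rose diagrams: the bin-wise
    # absolute difference is normalized to a probability distribution p over
    # the angle bins, from which the entropy H, the first angular moment M1,
    # and the second central moment (variance) M2 are computed.
    diff = [abs(l - r) for l, r in zip(rose_left, rose_right)]
    total = sum(diff) or 1.0
    p = [d / total for d in diff]
    H = -sum(pi * math.log2(pi) for pi in p if pi > 0.0)
    M1 = sum(pi * a for pi, a in zip(p, angles_deg))
    M2 = sum(pi * (a - M1) ** 2 for pi, a in zip(p, angles_deg))
    return H, M1, M2
```

Identical left and right rose diagrams give zero for all three features, while a difference concentrated in a few bins gives low entropy and a large angular variance when the discrepant bins are far apart.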
Classification of the normal and asymmetric cases was conducted by using the Bayesian linear classifier [721] (see Section 12.4.2). The Gaussian distribution was assumed in order to model the PDF, and the parameters of the model were estimated by using the training samples. The prior probabilities of the normal and asymmetry classes were assumed to be equal, and the covariance matrix was calculated in a pooled manner by averaging the covariance matrices of the normal and asymmetric classes. The leave-one-out methodology [721] was used to estimate the classification accuracy.
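The classification procedure can be sketched for a single feature. With equal priors and a pooled variance, the Bayesian linear classifier reduces to comparing linear discriminants, and the leave-one-out loop holds out one sample at a time; this is a simplified one-dimensional illustration, not the multivariate implementation of Ferrari et al.:

```python
def loo_linear_classifier(samples, labels):
    # Bayesian linear classifier for a single feature with equal priors and a
    # pooled variance; accuracy is estimated by the leave-one-out method.
    correct = 0
    for i in range(len(samples)):
        train = [(x, y) for j, (x, y) in enumerate(zip(samples, labels)) if j != i]
        classes = sorted(set(y for _, y in train))
        means = {c: sum(x for x, y in train if y == c)
                    / sum(1 for x, y in train if y == c)
                 for c in classes}
        # Pooled variance: average squared deviation from the class means.
        var = sum((x - means[y]) ** 2 for x, y in train) / len(train) or 1e-12
        # Linear discriminant with equal priors:
        # g_c(x) = mean_c * x / var - mean_c**2 / (2 * var).
        x = samples[i]
        pred = max(classes,
                   key=lambda c: means[c] * x / var - means[c] ** 2 / (2.0 * var))
        correct += pred == labels[i]
    return correct / len(samples)
```

With a common variance, maximizing the discriminant is equivalent to assigning the held-out sample to the class with the nearer mean.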
The directional analysis scheme was applied to 80 images (20 normal cases,
14 cases of asymmetry, and six cases of architectural distortion) from the
Mini-MIAS database 376]. An exhaustive combination approach was used to
select the best set of features. The selection was conducted based upon the
classication results obtained by using the leave-one-out method. The best
result, by using only one feature in the classication process, was achieved
by the rst-order angular moment (M1 ), with the sensitivity, specicity, and
average accuracy values equal to 77:3%, 71:4%, and 74:4%, respectively. When
using two features, the best result was achieved with the combination of the
rst-order angular moment (M1 ) and the entropy (H ) features, indicating
that 80% of the asymmetric and distortion cases, and 65% of the normal
cases were correctly classied. The average rate of correct classication in
this case was 72:5%. The low rate of specicity may be explained by the
fact that even normal cases present natural signs of mild asymmetry the
mammographic imaging procedure may also distort the left and right breasts
in dierent ways.
In a subsequent study, Rangayyan et al. [681] revised the directional analysis procedures as shown in Figure 8.65. The rose diagrams of the left and right mammograms were aligned such that their mean angles corresponded to the straight line perpendicular to the pectoral muscle, and then subtracted to obtain the difference rose diagram. In addition to the features H, M1, and M2 of the difference rose diagram as described above, the dominant orientation θR and circular variance s² were computed as follows:

X_R = Σ_{i=1}^{N} R_i cos θ_i,    (8.83)
FIGURE 8.63
Results obtained for the normal case in Figure 8.60. (a) and (b) Magnitude images. (c) and (d) Rose diagrams. The magnitude images were histogram-equalized for improved visualization. The rose diagrams have been configured to match the mammograms in orientation. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frère, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.
FIGURE 8.64
Results obtained for the case of architectural distortion in Figure 8.61. (a) and (b) Magnitude images. (c) and (d) Rose diagrams. The magnitude images were histogram-equalized for improved visualization. The rose diagrams have been configured to match the mammograms in orientation. Reproduced with permission from R.J. Ferrari, R.M. Rangayyan, J.E.L. Desautels, and A.F. Frère, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets", IEEE Transactions on Medical Imaging, 20(9): 953–964, 2001. © IEEE.
Y_R = Σ_{i=1}^{N} R_i sin θ_i,    (8.84)

θ_R = arctan(Y_R / X_R),    (8.85)

and

s² = 1 − sqrt(X_R² + Y_R²),    (8.86)

where R_i is the normalized value and θ_i is the central angle of the ith angle band of the difference rose diagram, and N is the number of bins in the rose diagram.
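The circular statistics above reduce to a few lines of code. The sketch below is a minimal interpretation of Equations 8.83–8.86: the function name is invented, and `np.arctan2` is used in place of the printed arctan of Equation 8.85 so that the quadrant of θR is resolved correctly.

```python
import numpy as np

def dominant_orientation(R, theta):
    """Dominant orientation and circular variance of a difference rose
    diagram, per Equations 8.83-8.86. `R` holds the normalized band values
    and `theta` the central angles (radians) of the angle bands."""
    R = np.asarray(R, float)
    theta = np.asarray(theta, float)
    XR = np.sum(R * np.cos(theta))        # Equation 8.83
    YR = np.sum(R * np.sin(theta))        # Equation 8.84
    # Equation 8.85 prints arctan(YR/XR); arctan2 resolves the quadrant.
    theta_R = np.arctan2(YR, XR)
    s2 = 1.0 - np.sqrt(XR**2 + YR**2)     # Equation 8.86
    return theta_R, s2
```

When all of the normalized mass lies in one band, the resultant length is 1 and the circular variance s² is 0; mass spread over opposing directions shrinks the resultant and pushes s² toward 1.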
In addition, a set of 11 features including seven of Hu's moments (see Section 6.2.2 and Equation 8.3) and the area, average density, eccentricity, and stretch were computed to characterize the shape of the segmented fibroglandular discs. Eccentricity was computed as

ε = [(m20 − m02)² + 4 m11²] / (m20 + m02)²,    (8.87)

where mpq are the geometric invariant moments as described in Section 6.2.2. The stretch parameter was computed as

stretch = (xmax − xmin) / (ymax − ymin),    (8.88)

where xmax, xmin, ymax, and ymin are the corner coordinates of the rectangle delimiting the fibroglandular disc. Feature selection was performed by PCA and exhaustive combination techniques. With PCA, only the components associated with 98% of the total variance were used in the classification step. Classification was performed using linear and quadratic Bayesian classifiers with the leave-one-out method.
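The two shape measures above can be sketched directly from a binary segmentation mask. The example below is an illustration under stated assumptions, not the book's code: the function name is invented, the moments mpq of Equation 8.87 are taken to be the central geometric moments of Section 6.2.2, and the bounding rectangle of Equation 8.88 is read off the mask's nonzero extent.

```python
import numpy as np

def eccentricity_and_stretch(mask):
    """Eccentricity (Equation 8.87) and stretch (Equation 8.88) of a
    binary region; `mask` is a 2D boolean array marking the segmented
    fibroglandular disc. A region one pixel tall would make the stretch
    denominator zero; no guard is included in this sketch."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()                     # central coordinates
    y = ys - ys.mean()
    m20, m02, m11 = np.sum(x * x), np.sum(y * y), np.sum(x * y)
    ecc = ((m20 - m02) ** 2 + 4.0 * m11 ** 2) / (m20 + m02) ** 2
    stretch = (xs.max() - xs.min()) / (ys.max() - ys.min())
    return ecc, stretch
```

Eccentricity approaches 0 for a circular disc and 1 for a line-like region, so together with stretch it captures how elongated the fibroglandular disc is.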
The revised directional analysis scheme was applied to 88 images (22 normal cases, 14 cases of asymmetry, and eight cases of architectural distortion) from the Mini-MIAS database [376]. The best overall classification accuracy of 84.4% (with a sensitivity of 82.6% and specificity of 86.4%) was obtained using the four features θR, M1, M2, and H computed from the aligned-difference rose diagrams using the quadratic classifier. The morphometric measures and moments, after PCA-based feature selection, resulted in an overall classification accuracy of only 71.1% with the linear classifier. The combination of all of the directional statistics, morphometric measures, and moments, after PCA-based feature selection, resulted in an overall classification accuracy of 82.2%, with a sensitivity of 78.3% and specificity of 86.4% with the linear classifier. The results indicate the importance of directional analysis of the fibroglandular tissue in the detection of bilateral asymmetry.
FIGURE 8.65
Block diagram of the revised procedure for the analysis of bilateral asymmetry [681].
8.10 Application: Architectural Distortion in Mammograms
Mammography is the best available tool for detecting early breast cancer; screening programs have been shown to reduce mortality rates by 30% to 70% [55] (Chapter 19), [722]. However, the sensitivity of screening mammography is affected by image quality and the radiologist's level of expertise. Bird et al. [723] estimated the sensitivity of screening mammography to be between 85% and 90%; they also observed that misinterpretation of breast cancer signs accounts for 52% of the errors, and overlooking signs is responsible for 43% of the missed lesions. The extent of errors due to overlooking of lesions reinforces the need for CAD tools in mammography.

CAD techniques and systems have been proposed to enhance the sensitivity of the detection of breast cancer: although these techniques are effective in detecting masses and calcifications, they have been found to fail in the detection of architectural distortion with adequate levels of accuracy [724]. Therefore, new CAD systems are desirable for targeted detection of subtle mammographic abnormalities, such as architectural distortion, which are the most frequent source of screening errors and false-negative findings related to cases of interval cancer [660].
Architectural distortion is defined in BI-RADS™ [403] as follows: "The normal architecture (of the breast) is distorted with no definite mass visible. This includes spiculations radiating from a point and focal retraction or distortion at the edge of the parenchyma." Focal retraction is considered to be easier to perceive than spiculated distortion within the breast parenchyma [54] (p. 61). Architectural distortion could be categorized as malignant or benign, the former including cancer, and the latter including scar and soft-tissue damage due to trauma.

According to van Dijck et al. [725], "in nearly half of the screen-detected cancers, minimal signs appeared to be present on the previous screening mammogram two years before the diagnosis". Burrell et al. [660], in a study of screening interval breast cancers, showed that architectural distortion is the most commonly missed abnormality in false-negative cases. Sickles [726] reported that indirect signs of malignancy (such as architectural distortion, bilateral asymmetry, single dilated duct, and developing densities) account for almost 20% of the detected cancers. Broeders et al. [727] suggested that improvement in the detection of architectural distortion could lead to an effective improvement in the prognosis of breast cancer patients.
8.10.1 Detection of spiculated lesions and distortion
The breast contains several piecewise linear structures, such as ligaments,
ducts, and blood vessels, that cause oriented texture in mammograms. The
presence of architectural distortion is expected to change the normal oriented
texture of the breast.
Figure 8.66 (a) shows a mammogram of a normal breast, where the normal oriented texture is observed. The mammogram in Figure 8.66 (b) exhibits architectural distortion; Figure 8.67 shows an enlarged view of the site of architectural distortion. Observe the appearance of lines radiating from a central point in the ROI. It is also noticeable that the perceived increased density close to the center of the ROI is due to the fibroglandular disk, as can be observed in the full mammogram view in Figure 8.66 (b).
FIGURE 8.66
(a) Mammogram showing a normal breast; image mdb243 from the Mini-MIAS database [376]. Width of image = 650 pixels = 130 mm. (b) Architectural distortion present in a mammogram from the Mini-MIAS database (mdb115). Width of image = 650 pixels = 130 mm. The square box overlaid on the figure represents the ROI including the site of architectural distortion, shown enlarged in Figure 8.67. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
FIGURE 8.67
Detail of mammogram mdb115 showing the site of architectural distortion marked by the box in Figure 8.66 (b). Width of image = 300 pixels = 60 mm. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
Several researchers have directed attention to the detection of stellate and spiculated patterns associated with masses in mammograms; see Mudigonda et al. [275] and Section 8.8 for a review on this subject. However, most of such methods have been directed toward and applied to the detection of tumors with spiculations. A common strategy in most of the methods reported in the literature for this application is to assume a stellate appearance for the lesion; detection is performed by fitting a star pattern to the texture of the breast parenchyma.
Qian et al. [623] proposed a directional wavelet transform to detect spicules radiating from masses. Kegelmeyer et al. [728] used local edge orientation histograms and Laws' texture features to detect stellate lesions. A sensitivity of 100% was obtained, with a specificity of 82%. Karssemeijer and te Brake [641] applied operators sensitive to radial patterns of straight lines to an orientation field to detect stellate patterns, and obtained a sensitivity of 90% with one false positive per image; however, only lesions with radiating spicules (regardless of the presence of a central density) are detected in their method. Mudigonda et al. [275] investigated the detection of masses using density slicing and texture flow-field; a sensitivity of 85% with 2.45 false positives per image was achieved (see Section 8.8).
Zwiggelaar et al. [652] proposed a technique to detect abnormal patterns of linear structures, by detecting the radiating spicules and/or the central mass expected to occur with spiculated lesions. PCA was applied to a training set of mammograms including normal tissue patterns and spiculated lesions. The results of PCA were used to construct a basis set of oriented texture patterns, which was used to analyze radiating structures. A sensitivity of 80% was obtained in the classification of pixels as belonging to normal tissue or lesions, but with low specificity. Sampat and Bovik [729] employed filtering in the Radon domain to enhance mammograms, followed by the usage of radial spiculation filters to detect spiculated lesions; however, statistical evaluation of the performance of the technique was not conducted.
Mudigonda and Rangayyan [730] proposed the use of texture flow-field to detect architectural distortion, based upon the local coherence of texture orientation; only preliminary results were given, indicating the potential of the technique in the detection of architectural distortion. Ferrari et al. [381] used Gabor filters to detect oriented patterns, with subsequent analysis designed to characterize global bilateral asymmetry; however, the method was not aimed at detecting focal architectural distortion (see Section 8.9). Matsubara et al. [731] used mathematical morphology to detect architectural distortion around the skin line, and a concentration index to detect architectural distortion within the mammary gland; they reported sensitivity rates of 94% with 2.3 false positives per image and 84% with 2.4 false positives per image, respectively, in the two tasks.
Burhenne et al. [732] studied the performance of a commercial CAD system in the detection of masses and calcifications in screening mammography, obtaining a sensitivity of 75% in the detection of masses and architectural distortion. Evans et al. [733] investigated the ability of a commercial CAD system to mark invasive lobular carcinoma of the breast: the system correctly identified 17 of 20 cases of architectural distortion. Birdwell et al. [734] evaluated the performance of a commercial CAD system in marking cancers that were overlooked by radiologists: the software detected five out of six cases of architectural distortion. However, Baker et al. [724] found the sensitivity of two commercial CAD systems to be poor in detecting architectural distortion: fewer than 50% of the cases of architectural distortion were detected. These findings indicate the need for further research in this area, and the development of algorithms designed specifically to characterize architectural distortion.
Ayres and Rangayyan [595, 679, 680] proposed the application of Gabor filters and phase portraits to characterize architectural distortion in ROIs selected from mammograms, as well as to detect sites of architectural distortion in full mammograms; their methods and results are described in the following subsections.
8.10.2 Phase portraits

Phase portraits provide an analytical tool to study systems of first-order differential equations [735]. The method has proved to be useful in characterizing oriented texture [432, 736].

Let p(t) and q(t) denote two differentiable functions of time t, related by a system of first-order differential equations as

ṗ(t) = F[p(t), q(t)],
q̇(t) = G[p(t), q(t)],    (8.89)

where the dot above the variable indicates the first-order derivative of the function with respect to time, and F and G represent functions of p and q. Given initial conditions p(0) and q(0), the solution [p(t), q(t)] to Equation 8.89 can be viewed as a parametric trajectory of a hypothetical particle in the (p, q) plane, placed at [p(0), q(0)] at time t = 0, and moving through the (p, q) plane with velocity [ṗ(t), q̇(t)]. The (p, q) plane is referred to as the phase plane of the system of first-order differential equations. The path traced by the hypothetical particle is called a streamline of the vector field (ṗ, q̇). The phase portrait is a graph of the possible streamlines in the phase plane. A fixed point of Equation 8.89 is a point in the phase plane where ṗ(t) = 0 and q̇(t) = 0: a particle left at a fixed point remains stationary.
When the system of first-order differential equations is linear, Equation 8.89 assumes the form

[ṗ(t), q̇(t)]ᵀ = A [p(t), q(t)]ᵀ + b,    (8.90)

where A is a 2 × 2 matrix and b is a 2 × 1 column matrix (a vector). In this case, there are only three types of phase portraits: node, saddle, and spiral [735]. The type of phase portrait can be determined from the nature of the eigenvalues of A, as shown in Table 8.4. The center (p₀, q₀) of the phase portrait is given by the fixed point of Equation 8.90:

[ṗ(t), q̇(t)]ᵀ = 0  ⇒  [p₀, q₀]ᵀ = −A⁻¹ b.    (8.91)
Solving Equation 8.90 yields a linear combination of complex exponentials for p(t) and q(t), whose exponents are given by the eigenvalues of A multiplied by the time variable t. Table 8.4 illustrates the streamlines obtained by solving Equation 8.90 for a node, a saddle, and a spiral phase portrait: the solid lines indicate the movement of the p(t) and the q(t) components of the solution, and the dashed lines indicate the streamlines. The formation of each phase portrait type is explained as follows:

• Node: the components p(t) and q(t) are exponentials that either simultaneously converge to, or diverge from, the fixed-point coordinates p₀ and q₀.

• Saddle: the components p(t) and q(t) are exponentials; while one of the components [either p(t) or q(t)] converges to the fixed point, the other diverges from the fixed point.

• Spiral: the components p(t) and q(t) are exponentially modulated sinusoidal functions; the resulting streamline forms a spiral curve.
Associating the functions p(t) and q(t) with the x and y coordinates of the Cartesian (image) plane, we can define the orientation field generated by Equation 8.90 as

φ(x, y | A, b) = arctan[q̇(t) / ṗ(t)],    (8.92)

which is the angle of the velocity vector [ṗ(t), q̇(t)] with the x axis at (x, y) = [p(t), q(t)]. Table 8.4 lists the three phase portraits and the corresponding orientation fields generated by a system of linear first-order differential equations.
Using the concepts presented above, the orientation field of a textured image may be described qualitatively by determining the type of the phase portrait that is most similar to the orientation field, along with the center of the phase portrait. This notion was employed by Ayres and Rangayyan [595, 679, 680] to characterize architectural distortion.
8.10.3 Estimating the orientation field

The analysis of oriented texture is an important task in computer vision, and several methods have been proposed for this task, such as weighted angle histograms [653], integral curves [678], and phase portraits [432, 736]. Ayres and Rangayyan [595, 679, 680] analyzed the orientation field in ROIs of mammograms, through the usage of phase portraits, in order to characterize architectural distortion. In order to extract the texture orientation at each pixel of the ROI, the ROI was filtered with a bank of Gabor filters of different orientations [384] (see Section 8.4). The Gabor filter kernel oriented at the angle α = −π/2 was formulated as

g(x, y) = [1 / (2π σx σy)] exp{ −(1/2) [x²/σx² + y²/σy²] } cos(2π fo x).    (8.93)

Kernels at other angles were obtained by rotating this kernel. A set of 180 kernels was used, with angles spaced evenly over the range [−π/2, π/2].
Gabor filters may be used as line detectors. In the work of Ayres and Rangayyan [595, 679, 680], the parameters in Equation 8.93, namely σx, σy, and fo, were derived from a design rule as follows. Let τ be the thickness of the line detector. This parameter constrains σx and fo as follows:

• The amplitude of the exponential term in Equation 8.93, that is, the Gaussian term, is reduced to one half of its maximum at x = τ/2 and y = 0; therefore, σx = τ / (2 √(2 ln 2)).
TABLE 8.4
Phase Portraits for a System of Linear First-order Differential Equations [736].

Phase portrait type   Eigenvalues
Node                  Real eigenvalues of the same sign
Saddle                Real eigenvalues of opposite sign
Spiral                Complex eigenvalues

(The Streamlines and Appearance of the orientation field columns of the table are illustrations.) Solid lines indicate the movement of the p(t) and the q(t) components of the solution; dashed lines indicate the streamlines. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
• The cosine term has a period of τ; therefore, fo = 1/τ.

The value of σy was defined as σy = l σx, where l determines the elongation of the Gabor filter in the orientation direction, with respect to its thickness. The values τ = 4 pixels (corresponding to a thickness of 0.8 mm at a pixel size of 200 µm) and l = 8 were determined empirically, by observing the typical spicule width and length in mammograms with architectural distortion in the Mini-MIAS database [376].
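The design rule above pins down all three parameters from the thickness τ and elongation l, which the sketch below makes explicit. It is an illustrative interpretation of Equation 8.93, not the authors' code: the function name, the `size` truncation parameter, and the coordinate-rotation step used to orient the kernel are assumptions.

```python
import numpy as np

def gabor_kernel(tau=4.0, ell=8.0, angle=0.0, size=None):
    """Real Gabor line detector of Equation 8.93, with sigma_x, sigma_y,
    and f0 tied to the line thickness tau and elongation ell by the
    design rule. `size` (half-width of the kernel support) is a
    hypothetical truncation choice."""
    sx = tau / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # half-amplitude at x = tau/2
    sy = ell * sx                                  # elongation along the line
    f0 = 1.0 / tau                                 # one cosine period per tau
    half = size or int(np.ceil(4 * sy))
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate the coordinates to orient the kernel at `angle`
    xr = x * np.cos(angle) + y * np.sin(angle)
    yr = -x * np.sin(angle) + y * np.cos(angle)
    return (1.0 / (2.0 * np.pi * sx * sy)
            * np.exp(-0.5 * (xr**2 / sx**2 + yr**2 / sy**2))
            * np.cos(2.0 * np.pi * f0 * xr))
```

Sweeping `angle` over 180 evenly spaced values in [−π/2, π/2) reproduces the bank of oriented kernels described in the text.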
The effects of the different design parameters are shown in Figure 8.68, and are as follows:

• Figures 8.68 (a) and (e) show the impulse response of a Gabor filter and its Fourier transform magnitude, respectively.

• In Figure 8.68 (b), the Gabor filter of Figure 8.68 (a) is stretched in the x direction, by increasing the elongation factor l. Observe that the Fourier spectrum of the new Gabor filter, shown in Figure 8.68 (f), is compressed in the horizontal direction.

• The Gabor filter shown in Figure 8.68 (c) was obtained by increasing the parameter τ of the original Gabor filter, thus enlarging the filter in both the x and y directions. Correspondingly, the Fourier spectrum of the enlarged filter, shown in Figure 8.68 (g), has been shrunk in both the vertical and horizontal directions.

• The effect of rotating the Gabor filter by 30° counterclockwise is displayed in Figures 8.68 (d) and (h), which show the rotated Gabor filter's impulse response and the corresponding Fourier spectrum.
The texture orientation at a pixel was estimated as the orientation of the Gabor filter that yielded the highest magnitude response at that pixel. The orientation at every pixel was used to compose the orientation field. The magnitude of the corresponding filter response was used to form the magnitude image. The magnitude image was not used in the estimation of the phase portrait, but was found to be useful for illustrative purposes.

Let θ(x, y) be the texture orientation at (x, y), and gk(x, y), k = 0, 1, ..., 179, be the Gabor filter oriented at αk = −π/2 + kπ/180. Let f(x, y) be the ROI of the mammogram being processed, and fk(x, y) = (f * gk)(x, y), where the asterisk denotes linear 2D convolution. Then, the orientation field of f(x, y) is given by

θ(x, y) = α_kmax,  where  kmax = arg max_k [ |fk(x, y)| ].    (8.94)
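The per-pixel argmax of Equation 8.94 can be sketched as below. This is an illustration, not the authors' implementation: the function name is invented, `scipy.ndimage.convolve` stands in for whatever 2D convolution routine was used, and the test drives it with two toy kernels rather than the 180-kernel Gabor bank.

```python
import numpy as np
from scipy.ndimage import convolve

def orientation_field(image, kernels, angles):
    """Orientation field of Equation 8.94: at each pixel, keep the angle
    of the filter with the largest magnitude response. `kernels` and
    `angles` are matched lists (hypothetical inputs)."""
    responses = np.stack([np.abs(convolve(image, k)) for k in kernels])
    kmax = np.argmax(responses, axis=0)       # index of the strongest filter
    theta = np.asarray(angles)[kmax]          # orientation field
    magnitude = np.max(responses, axis=0)     # magnitude image
    return theta, magnitude
```

The returned magnitude image plays only the illustrative role described in the text; the phase portrait analysis uses the orientation field alone.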
8.10.4 Characterizing orientation fields with phase portraits

In the work of Ayres and Rangayyan [595, 679, 680], the analysis of oriented texture patterns was performed in a two-step process. First, the orientation
FIGURE 8.68
Effects of the different parameters of the Gabor filter. (a) Example of the impulse response of a Gabor filter. (b) The parameter l is increased: the Gabor filter is elongated in the x direction. (c) The parameter τ is increased: the Gabor filter is enlarged in the x and y directions. (d) The angle of the Gabor filter is modified. Figures (e) – (h) correspond to the magnitude of the Fourier transforms of the Gabor filters in (a) – (d), respectively. The (0, 0) frequency component is at the center of the spectra displayed. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
field θ(x, y) of the ROI was computed in a small analysis window. The sliding analysis window was centered at pixels within the ROI, avoiding window positions with incomplete data at the edges of the ROI for the estimation of A and b. Second, the matrix A and the vector b in Equation 8.90 were estimated such that the best match was achieved between θ(x, y) and φ(x, y | A, b). The eigenvalues of A determine the type of the phase portrait present in θ(x, y); the fixed point of the phase portrait is given by Equation 8.91.
The estimates of A and b were obtained as follows. For every point (x, y), let Δ(x, y) = sin[θ(x, y) − φ(x, y | A, b)] represent the error between the orientation of the texture given by Equation 8.94 and the orientation of the model given by Equation 8.92. Then, the estimation problem is that of finding A and b that minimize the sum of the squared error

Δ² = Σx Σy Δ²(x, y) = Σx Σy {sin[θ(x, y) − φ(x, y | A, b)]}²,    (8.95)

which may be solved using a nonlinear least-squares algorithm [737].
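A sketch of this fit is given below, with `scipy.optimize.least_squares` standing in for the unspecified nonlinear least-squares algorithm of [737]; the function name, the flattened-sample calling convention, and the identity-matrix initial guess are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_phase_portrait(theta, xs, ys):
    """Fit A and b of Equation 8.90 to an observed orientation field by
    minimizing the sine-of-angle-difference residual of Equation 8.95.
    `theta`, `xs`, `ys` are flattened samples of the orientation field
    and their coordinates (hypothetical inputs)."""
    def residuals(p):
        A = p[:4].reshape(2, 2)
        b = p[4:]
        vel = A @ np.vstack([xs, ys]) + b[:, None]   # [p_dot; q_dot]
        phi = np.arctan2(vel[1], vel[0])             # model orientation, Eq. 8.92
        return np.sin(theta - phi)                   # error term of Eq. 8.95
    sol = least_squares(residuals, x0=np.array([1, 0, 0, 1, 0, 0], float))
    A, b = sol.x[:4].reshape(2, 2), sol.x[4:]
    fixed_point = -np.linalg.solve(A, b)             # Equation 8.91
    return A, b, fixed_point
```

Note that (A, b) is only determined up to a scale factor by the angles alone; the fixed point of Equation 8.91 is invariant to that scale, which is why it is the quantity accumulated in the voting procedure described next.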
The ROI was investigated by sliding the analysis window through the orientation field of the ROI, and accumulating the information obtained, that is, the type of the phase portrait and the location of the fixed point, for each window position, as follows:

1. Create three maps, one for each type of phase portrait (hereafter called the phase portrait maps), that will be used to accumulate information from the sliding analysis window. The maps are initialized to zero, and are of the same size as the ROI or the image being processed.

2. Slide the analysis window through the orientation field of the ROI. At each position of the sliding window, determine the type of the phase portrait and compute the fixed point of the orientation field.

3. Increment the value at the location of the fixed point in the corresponding phase portrait map.

The size of the sliding analysis window was set at 44 × 44 pixels (8.8 × 8.8 mm). The three maps obtained as above provide the results of a voting procedure, and indicate the possible locations of fixed points corresponding to texture patterns that (approximately) match the node, saddle, and spiral phase portraits. It is possible that, for some positions of the sliding analysis window, the location of the fixed point falls outside the spatial limits of the ROI or image being processed; the votes related to such results were ignored. The value at each location (x, y) in a phase portrait map provides the degree of confidence in determining the existence of the corresponding phase portrait type centered at (x, y). The three phase portraits were expected to relate to different types of architectural distortion.
8.10.5 Feature extraction for pattern classification
The estimates of the fixed-point location for a given phase portrait pattern can be scattered around the true fixed-point position, due to the limited precision of the estimation procedure, the presence of multiple overlapping patterns, the availability of limited data within the sliding analysis window, and the presence of noise. A local accumulation of the votes is necessary to diminish the effect of fixed-point location errors. Ayres and Rangayyan [595, 679, 680] employed a Gaussian smoothing filter with a standard deviation of 25 pixels (5 mm) for this purpose.

For the purpose of pattern classification, six features were extracted to characterize each ROI: the maximum of each phase portrait map (three features), and the entropy of each phase portrait map (three features). The maximum of each map conveys information about the likelihood of the presence of the corresponding phase portrait type, and the entropy relates to the uncertainty in the location of the fixed point in each map. The entropy H of a map h(x, y) was computed as

H[h(x, y)] = − Σx Σy [h(x, y)/S_h] ln [h(x, y)/S_h],    (8.96)

where

S_h = Σx Σy h(x, y).    (8.97)

A map with a dense spatial concentration of votes is expected to have a large maximum value and a low entropy. On the contrary, a map with a wide scatter of votes may be expected to have a low maximum and a large entropy.
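Equations 8.96 and 8.97 amount to normalizing the vote map to a probability distribution and summing −p ln p, as in the sketch below; the function name is invented, and the convention that 0 ln 0 = 0 is made explicit.

```python
import numpy as np

def map_entropy(h):
    """Entropy of a phase portrait map, per Equations 8.96 and 8.97:
    normalize the vote map by its sum S_h and accumulate -p*ln(p)."""
    h = np.asarray(h, float)
    p = h / h.sum()          # Equation 8.97 normalization
    nz = p[p > 0]            # 0*ln(0) is taken as 0
    return float(-np.sum(nz * np.log(nz)))
```

A single spike of votes gives entropy 0, while votes spread uniformly over M cells give the maximum value ln M, matching the concentration-versus-scatter interpretation above.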
8.10.6 Application to segments of mammograms
Ayres and Rangayyan [595, 679] analyzed a set of 106 ROIs, each of size 230 × 230 pixels (46 × 46 mm, with a resolution of 200 µm), selected from the Mini-MIAS database [376]. The set included 17 ROIs with architectural distortion (all the cases of architectural distortion available in the MIAS database), 45 ROIs with normal tissue patterns, eight ROIs with spiculated malignant masses, four ROIs with circumscribed malignant masses, 11 ROIs with spiculated benign masses, 19 ROIs with circumscribed benign masses, and two ROIs with malignant calcifications. The size of the ROIs was chosen to accommodate the largest area of architectural distortion or mass identified in the Mini-MIAS database. ROIs related to all of the masses in the database were included. The normal ROIs included examples of overlapping ducts, ligaments, and other parenchymal patterns. Only the central portion of 150 × 150 pixels of each ROI was investigated using a sliding analysis window of size 44 × 44 pixels; the remaining outer ribbon of pixels was not processed in order to discard the effects of the preceding filtering steps.
TABLE 8.5
Results of Linear Discriminant Analysis for ROIs with Architectural Distortion Using the Leave-one-out Method.

Architectural distortion   #ROIs   Classified as architectural distortion   Classified as other
Benign                         9                                        7                     2
Malignant                      8                                        6                     2
Total                         17                                  TP = 13                FN = 4

TP = true positives, FN = false negatives. The results correspond to the prior probability of belonging to the architectural distortion class being 0.465. Sensitivity = 76.5%. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
Figure 8.69 illustrates the results obtained for an image with architectural distortion (mdb115). The maximum of the node map is larger than the maxima of the other two maps. Also, the scattering of votes in the node map is less than that in the saddle and spiral maps. These results indicate that the degree of scattering of the votes (quantified by the entropy of the corresponding map) and the maximum of each of the three phase portrait maps could be useful features to distinguish between architectural distortion and other patterns.

Linear discriminant analysis was performed using SPSS [738], with stepwise feature selection. Architectural distortion was considered as a positive finding, with all other test patterns (normal tissue, masses, and calcifications) being considered as negative findings. The statistically significant features were the entropy of the node map and the entropy of the spiral map; the other features were deemed to be not significant by the statistical analysis package, and were discarded in all subsequent analysis. With the prior probability of architectural distortion set to 50%, the sensitivity obtained was 82.4%, and the specificity was 71.9%. The fraction of cases correctly classified was 73.6%. Tables 8.5 and 8.6 present the classification results with the prior probability of architectural distortion being 46.5%. An overall classification accuracy of 76.4% was achieved.
8.10.7 Detection of sites of architectural distortion

Ayres and Rangayyan [595, 679, 680] hypothesized that architectural distortion would appear as an oriented texture pattern that can be locally approximated by a linear phase portrait model; furthermore, it was expected that the
FIGURE 8.69
Analysis of the ROI from the image mdb115, which includes architectural distortion: (a) ROI of size 230 × 230 pixels (46 × 46 mm); (b) magnitude image; (c) orientation field superimposed on the original ROI; (d) node map, with intensities mapped from [0, 123] to [0, 255]; (e) saddle map, [0, 22] mapped to [0, 255]; (f) spiral map, [0, 71] mapped to [0, 255]. This image was correctly classified as belonging to the "architectural distortion" category (Table 8.5). Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
TABLE 8.6
Results of Linear Discriminant Analysis for ROIs Without Architectural Distortion Using the Leave-one-out Method.

Type              #ROIs   Classified as architectural distortion   Classified as other
Masses: CB           19                                        4                    15
Masses: SB           11                                        3                     8
Masses: CM            4                                        1                     3
Masses: SM            8                                        3                     5
Calcifications        2                                        1                     1
Normal               45                                        9                    36
Total                89                                  FP = 21               TN = 68

CB = circumscribed benign mass, CM = circumscribed malignant tumor, SB = spiculated benign mass, SM = spiculated malignant tumor, FP = false positives, TN = true negatives. The results correspond to the prior probability of belonging to the architectural distortion class being 0.465. Specificity = 76.4%. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Characterization of architectural distortion in mammograms via analysis of oriented texture", IEEE Engineering in Medicine and Biology Magazine, January 2005. © IEEE.
fixed-point location of the phase portrait model would fall within the breast area in the mammogram. Then, the numbers of votes cast at each position of the three phase portrait maps would indicate the likelihood that the position considered is a fixed point of a node, a saddle, or a spiral pattern.
Before searching the maps for sites of distortion, the orientation field was filtered and downsampled as follows. Let h(x, y) be a Gaussian filter of standard deviation σ_h, defined as

    h(x, y) = [1 / (2π σ_h²)] exp[ −(x² + y²) / (2 σ_h²) ].   (8.98)

Define the images s(x, y) = sin[2θ(x, y)] and c(x, y) = cos[2θ(x, y)], where θ(x, y) is the orientation field. Then, the filtered orientation field θ_f(x, y) is obtained as

    θ_f(x, y) = (1/2) arctan[ (h(x, y) * s(x, y)) / (h(x, y) * c(x, y)) ],   (8.99)

where the asterisk denotes 2D convolution.
The filtered orientation field was downsampled by a factor of four, thus producing the downsampled orientation field θ_d as

    θ_d(x, y) = θ_f(4x, 4y).   (8.100)
The filtering and downsampling procedures, summarized in Figure 8.70, were applied in order to reduce noise and also to reduce the computational effort required for the processing of full mammograms. The filtering procedure described above is a variant of Rao's dominant local orientation method [432]: a Gaussian filter has been used instead of a box filter.
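The filtering and downsampling of Equations 8.98 to 8.100 can be sketched as follows; this is a minimal illustration, in which the kernel size, the σ value, and the edge padding are assumptions rather than parameters of the original study.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_kernel(size, sigma):
    """Sampled, unit-gain version of the Gaussian h(x, y) of Equation 8.98."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()

def filter_orientation_field(theta, sigma=2.0, size=7):
    """Equation 8.99: filter the doubled-angle sine and cosine images, then halve."""
    s = np.sin(2.0 * theta)          # s(x, y) = sin[2 theta(x, y)]
    c = np.cos(2.0 * theta)          # c(x, y) = cos[2 theta(x, y)]
    h = gaussian_kernel(size, sigma)
    pad = size // 2
    # edge padding keeps the output the same size as the input; the symmetric
    # Gaussian makes correlation and convolution identical here
    sw = (sliding_window_view(np.pad(s, pad, mode="edge"), (size, size)) * h).sum(axis=(-2, -1))
    cw = (sliding_window_view(np.pad(c, pad, mode="edge"), (size, size)) * h).sum(axis=(-2, -1))
    return 0.5 * np.arctan2(sw, cw)

def downsample(theta_f, factor=4):
    """Equation 8.100: theta_d(x, y) = theta_f(factor * x, factor * y)."""
    return theta_f[::factor, ::factor]
```

Because the sine and cosine of the doubled angle are filtered jointly and recombined through the arctangent, the π-periodicity of the orientation field is handled correctly; the normalization of the kernel cancels in the ratio of Equation 8.99.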
The following procedure was used to detect and locate sites of architectural distortion, using only the node map:
1. The node map is filtered with a Gaussian filter of standard deviation equal to 1.0 pixel (0.8 mm).
2. The filtered node map is thresholded (with the same threshold value for all images).
3. The thresholded image is subjected to the following series of morphological operations to group positive responses that are close to one another, and to reduce each region of positive response to a single point. The resulting points indicate the detected locations of architectural distortion.
(a) A closing operation is performed to group clusters of points that are less than 8 mm apart. The structuring element is a disk of radius 10 pixels (8 mm).
(b) A "close holes" filter is applied to the image. The resulting image includes only compact regions.

FIGURE 8.70
Filtering and downsampling of the orientation field. Figure courtesy of F.J. Ayres.

(c) The image is subjected to a "shrink" filter, where each compact region is shrunk to a single pixel.
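The thresholding and region-collapsing steps above can be sketched as follows; the closing and hole-filling operations are omitted here, and a flood-fill labeling with centroids stands in for the "shrink" filter. The function name and the 4-connectivity choice are illustrative assumptions, not details of the original implementation.

```python
import numpy as np
from collections import deque

def detect_sites(node_map, threshold):
    """Threshold a node map; return one (row, col) point per positive region."""
    mask = node_map > threshold
    visited = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    sites = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # breadth-first flood fill over the 4-connected region
                queue, region = deque([(r, c)]), []
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*region)
                # collapse the region to (the rounded position of) its centroid
                sites.append((round(sum(ys) / len(ys)), round(sum(xs) / len(xs))))
    return sites
```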
The threshold value influences the sensitivity of the method and the number
of false positives per image. A high threshold value reduces the number of
false positives, but also reduces the sensitivity. A low threshold value increases
the number of false positives.
The method was applied to 18 mammograms exhibiting architectural distortion, selected from the Mini-MIAS database [376]. The mammograms were MLO views, digitized to 1024 × 1024 pixels at a resolution of 200 µm and 8 bits/pixel. Figures 8.71 and 8.72 illustrate the steps of the method, as applied to image mdb115. Observe that the filtered orientation field [Figure 8.71 (d)] is smoother and more coherent as compared to the original orientation field [Figure 8.71 (c)]: the pattern of architectural distortion is displayed better in the filtered orientation field.
The architectural distortion present in the mammogram mdb115 has a stellate or spiculated appearance. As a consequence, a large number of votes have been cast into the node map, at a location close to the center of the distortion, as seen in Figure 8.72 (c). Another point of accumulation of votes in the node map is observed in Figure 8.72 (c), at the location of the nipple. This is not unexpected: the breast has a set of ducts that carry milk to the nipple; the ducts appear in mammograms as linear structures converging to the nipple. Observe that the node map has the strongest response of all maps,


within the site of architectural distortion given by the Mini-MIAS database. The technique has resulted in the identification of two locations as sites of architectural distortion: one true positive and one false positive, as shown in Figure 8.72 (d).
The free-response receiver operating characteristics (FROC) curve was derived by varying the threshold level in the detection step; the result is shown in Figure 8.73. (See Section 12.8.1 for details on ROC analysis.) A sensitivity of 88% was obtained at 15 false positives per image. Further work is required in order to reduce the number of false positives and improve the accuracy of detection.

8.11 Remarks
Preferred orientation and directional distributions relate to the functional integrity of several types of tissues and organs; changes in such patterns could indicate structural damage as well as recovery. Directional analysis could,

FIGURE 8.71
(a) Image mdb115 from the Mini-MIAS database [376]. The circle indicates the location and the extent of architectural distortion, as provided in the Mini-MIAS database [376]. (b) Magnitude image after Gabor filtering. (c) Orientation field superimposed on the original image. Needles have been drawn for every fifth pixel. (d) Filtered orientation field superimposed on the original image. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Detection of architectural distortion in mammograms using phase portraits", Proceedings of SPIE Medical Imaging 2004: Image Processing, Volume 5370, pp 587–597, 2004. © SPIE. See also Figure 8.72.




FIGURE 8.72
Phase portrait maps derived from the orientation field in Figure 8.71 (d), and the detection of architectural distortion. (a) Saddle map: values are scaled from the range [0, 20] to [0, 255]. (b) Spiral map: values are scaled from the range [0, 47] to [0, 255]. (c) Node map: values are scaled from the range [0, 84] to [0, 255]. (d) Detected sites of architectural distortion superimposed on the original image: the solid line indicates the location and spatial extent of architectural distortion as given by the Mini-MIAS database [376]; the dashed lines indicate the detected sites of architectural distortion (one true positive and one false positive). Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Detection of architectural distortion in mammograms using phase portraits", Proceedings of SPIE Medical Imaging 2004: Image Processing, Volume 5370, pp 587–597, 2004. © SPIE.

[FROC plot: sensitivity (0 to 0.9) on the vertical axis versus false positives per image (0 to 20) on the horizontal axis.]

FIGURE 8.73
Free-response receiver operating characteristics (FROC) curve for the detection of sites of architectural distortion. Reproduced with permission from F.J. Ayres and R.M. Rangayyan, "Detection of architectural distortion in mammograms using phase portraits", Proceedings of SPIE Medical Imaging 2004: Image Processing, Volume 5370, pp 587–597, 2004. © SPIE.
therefore, be used to study the health and well-being of a tissue or organ, as well as to follow the pathological and physiological processes related to injury, treatment, and healing.
In this chapter, we explored the directional characteristics of several biomedical images. We have seen several examples of the application of fan filters and Gabor wavelets. The importance of multiscale or multiresolution analysis in accounting for variations in the size of the pattern elements of interest has been demonstrated. In spite of theoretical limitations, the methods for directional analysis presented in this chapter have been shown to lead to practically useful results in important applications.

8.12 Study Questions and Problems


Selected data files related to some of the problems and exercises are available at the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. Discuss how entropy can characterize a directional distribution.
2. Discuss the limitations of fan filters. Describe how Gabor functions address these limitations.
3. Using an image with line segments of various widths as an example, discuss the need for multiscale or multiresolution analysis.

8.13 Laboratory Exercises and Projects


1. Prepare a test image with line segments of different directions, lengths, and widths, with overlap. Apply Gabor filters at a few different scales and angles, as appropriate. Evaluate the results in terms of
(a) the lengths of the extracted components, and
(b) the widths of the extracted components.
Discuss the limitations of the methods and the artifacts in the results.
2. Decompose your test image in the preceding problem using eight sector or fan filters spanning the full range of 0°–180°.
Apply a thresholding technique to binarize the results.
Compute the area covered by the filtered patterns for each angle band. Compare the results with the known areas of the directional patterns.
Discuss the limitations of the methods and the artifacts in the results.
9
Image Reconstruction from Projections

Mathematically, the fundamental problem in CT imaging is that of estimating an image (or object) from its projections (or integrals) measured at different angles [11, 43, 80, 82, 739, 740, 741, 742, 743]; see Section 1.6 for an introduction to this topic. A projection of an image is also referred to as the Radon transform of the image at the corresponding angle, after the main proponent of the associated mathematical principles [67, 68]. In the continuous space, the projections are ray integrals of the image, measured at different ray positions and angles; in practice, only discrete measurements are available. The solution to the problem of image reconstruction from projections may be formulated variously as completing the corresponding Fourier space, backprojecting and summing the given projection data, or solving a set of simultaneous equations. Each of these methods has its own advantages and disadvantages that determine its suitability to a particular imaging application. In this chapter, we shall study the three basic approaches to image reconstruction from projections mentioned above.
(Note: Most of the derivations presented in this chapter closely follow those of Rosenfeld and Kak [11], with permission. For further details, refer to Herman [80, 43] and Kak and Slaney [82]. Parts of this chapter are reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.)

9.1 Projection Geometry


Let us consider the problem of reconstructing a 2D image given parallel-ray projections of the image measured at different angles. Referring to Figure 9.1, let f(x, y) represent the density distribution within the image. Although discrete images are used in practice, the initial presentation here will be in continuous-space notation for easier comprehension. Consider the ray AB represented by the equation

    x cos θ + y sin θ = t_1.   (9.1)

The integral of f(x, y) along the ray path AB is given by

    p_θ(t_1) = ∫_AB f(x, y) ds = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) δ(x cos θ + y sin θ − t_1) dx dy,   (9.2)

where δ(·) is the Dirac delta function, and s = −x sin θ + y cos θ. The mutually parallel rays within the imaging plane are represented by the coordinates (t, s) that are rotated by angle θ with respect to the (x, y) coordinates, as indicated in Figures 1.9, 1.19, and 9.1, with the s axis being parallel to the rays; ds is thus the elemental distance along a ray. When this integral is evaluated for different values of the ray offset t_1, we obtain the 1D projection p_θ(t). The function p_θ(t) is known as the Radon transform of f(x, y). [Note: Whereas a single projection p_θ(t) of a 2D image at a given value of θ is a 1D function, a set of projections for various values of θ could be seen as a 2D function. Observe that t represents the space variable related to ray displacement along a projection, and not time.] Because the various rays within a projection are parallel to one another, this is known as parallel-ray geometry.
Theoretically, we would need an infinite number of projections for all θ to be able to reconstruct the image. Before we consider reconstruction techniques, let us take a look at the projection or Fourier slice theorem.
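A discrete approximation of the ray integral of Equation 9.2 can be sketched by accumulating each pixel into the ray bin given by its t coordinate; the nearest-neighbour binning used here is a rough, assumed stand-in for proper ray tracing or interpolation.

```python
import numpy as np

def parallel_projection(image, theta):
    """Parallel-ray projection p_theta(t) of a square image, theta in radians."""
    n = image.shape[0]
    center = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    # t = x cos(theta) + y sin(theta) for each pixel, measured from the center
    t = (xx - center) * np.cos(theta) + (yy - center) * np.sin(theta)
    bins = np.clip(np.round(t + center).astype(int), 0, n - 1)
    p = np.zeros(n)
    np.add.at(p, bins.ravel(), image.ravel())   # accumulate pixels into ray bins
    return p
```

Since every pixel falls into exactly one bin, the total mass of the image is preserved in each projection, mirroring the fact that all projections of the same object have the same ray-sum total.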

9.2 The Fourier Slice Theorem


The projection or Fourier slice theorem relates the three spaces we encounter in image reconstruction from projections: the image, Fourier, and projection (Radon) spaces. Considering a 2D image, the theorem states that the 1D Fourier transform of a 1D projection of the 2D image is equal to the radial section (slice or profile) of the 2D Fourier transform of the 2D image at the angle of the projection. This is illustrated graphically in Figure 9.2, and may be derived as follows.
Let F(u, v) represent the 2D Fourier transform of f(x, y), given by

    F(u, v) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) exp[−j2π(ux + vy)] dx dy.   (9.3)

Let P_θ(w) represent the 1D Fourier transform of the projection p_θ(t), that is,

    P_θ(w) = ∫_{−∞}^{∞} p_θ(t) exp(−j2πwt) dt,   (9.4)

where w represents the frequency variable corresponding to t. (Note: If x, y, s, and t are in mm, the units for u, v, and w will be cycles/mm or mm⁻¹.) Let

FIGURE 9.1
Illustration of a ray path AB through a sectional plane or image f(x, y). The (t, s) axis system is rotated by angle θ with respect to the (x, y) axis system. ds represents the elemental distance along the ray path AB. p_θ(t_1) is the ray integral of f(x, y) for the ray path AB. p_θ(t) is the parallel-ray projection (Radon transform or integral) of f(x, y) at angle θ. See also Figures 1.9 and 1.19. Adapted, with permission, from A. Rosenfeld and A.C. Kak, Digital Picture Processing, 2nd ed., New York, NY, 1982. © Academic Press.

p (t)
θ1 1D FT

y v F(u,v)
θ2 f (x,y)
θ1 F(w, θ1 ) = Pθ (w)
1
x u

2D FT F(w, θ2 ) = Pθ (w)
2
p (t)
θ2 1D FT

FIGURE 9.2
Illustration of the Fourier slice theorem. F(u, v) is the 2D Fourier transform of f(x, y). F(w, θ_1) = P_{θ_1}(w) is the 1D Fourier transform of p_{θ_1}(t). F(w, θ_2) = P_{θ_2}(w) is the 1D Fourier transform of p_{θ_2}(t). Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.

f_θ(t, s) represent the image f(x, y) rotated by angle θ, with the transformation given by

    [ t ]   [  cos θ   sin θ ] [ x ]
    [ s ] = [ −sin θ   cos θ ] [ y ] .   (9.5)

Then,

    p_θ(t) = ∫_{−∞}^{∞} f_θ(t, s) ds.   (9.6)

    P_θ(w) = ∫_{−∞}^{∞} p_θ(t) exp(−j2πwt) dt
           = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f_θ(t, s) ds ] exp(−j2πwt) dt.   (9.7)

Transforming from (t, s) to (x, y), we get

    P_θ(w) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) exp[−j2πw(x cos θ + y sin θ)] dx dy
           = F(u, v) for u = w cos θ, v = w sin θ
           = F(w, θ),   (9.8)
which expresses the projection theorem. Observe that t = x cos θ + y sin θ and dx dy = ds dt.
It immediately follows that, if we have projections available at all angles from 0° to 180°, we can take their 1D Fourier transforms, fill the 2D Fourier space with the corresponding radial sections or slices, and take an inverse 2D Fourier transform to obtain the image f(x, y). The difficulty lies in the fact that, in practice, only a finite number of projections will be available, measured at discrete angular positions or steps. Thus, some form of interpolation will be essential in the 2D Fourier space [72, 73]. Extrapolation may also be required if the given projections do not span the entire angular range. This method of reconstruction from projections, known as the Fourier method, succinctly relates the image, Fourier, and Radon spaces. The Fourier method is the most commonly used method for the reconstruction of MR images.
A practical limitation of the Fourier method of reconstruction is that interpolation errors are larger for higher frequencies due to the increased spacing between the samples available on a discrete grid. Samples of P_θ(w) computed from p_θ(t) will be available on a polar grid, whereas the 2D Fourier transform F(u, v) and/or the inverse-transformed image will be required on a Cartesian (rectangular) grid. This limitation could cause poor reconstruction of high-frequency (sharp) details.
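For θ = 0, the slice theorem can be checked numerically in a few lines: the 1D DFT of the axis-aligned projection must equal the corresponding row of the 2D DFT of the image. The array size and random test image are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((32, 32))            # arbitrary test image f(x, y), rows indexed by y

p0 = f.sum(axis=0)                  # projection at theta = 0: integrate over y
P0 = np.fft.fft(p0)                 # 1D Fourier transform of the projection
F = np.fft.fft2(f)                  # 2D Fourier transform of the image

# the v = 0 row of F is the radial slice of the 2D spectrum at angle 0
assert np.allclose(P0, F[0, :])
```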

9.3 Backprojection
Let us now consider the simplest reconstruction procedure: backprojection (BP). Assuming the rays to be ideal straight lines, rather than strips of finite width, and the image to be made of dimensionless points rather than pixels or voxels of finite size, it can be seen that each point in the image f(x, y) contributes to only one ray integral per parallel-ray projection p_θ(t), with t = x cos θ + y sin θ. We may obtain an estimate of the density at a point by simply summing (integrating) all rays that pass through it at various angles, that is, by backprojecting the individual rays. In doing so, however, the contributions to the various rays of all of the other points along their paths are also added up, causing smearing or blurring; yet this method produces a reasonable estimate of the image. Mathematically, simple BP can be expressed as [11]

    f(x, y) ≈ ∫_0^π p_θ(t) dθ, where t = x cos θ + y sin θ.   (9.9)

This is a sinusoidal path of integration in the (θ, t) Radon space. In practice, only a finite number of projections and a finite number of rays per projection will be available; that is, the (θ, t) space will be discretized; hence, interpolation will be required.
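Simple BP as in Equation 9.9 can be sketched by smearing each projection value back along its rays and averaging over the angles; the discretization choices here (nearest-neighbour bins, scaling by π/L) are illustrative assumptions.

```python
import numpy as np

def backproject(projections, thetas, n):
    """Simple backprojection of projections p_theta(t) onto an n x n grid."""
    center = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for p, theta in zip(projections, thetas):
        # each pixel picks up the value of the ray passing through it
        t = (xx - center) * np.cos(theta) + (yy - center) * np.sin(theta)
        bins = np.clip(np.round(t + center).astype(int), 0, n - 1)
        recon += p[bins]
    return recon * np.pi / len(thetas)   # discrete version of the integral over theta
```

Backprojecting the projections of a point source reproduces the spoke-like PSF described in the text: every ray through the source location reinforces that pixel, while other pixels receive only occasional contributions.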
Examples of reconstructed images: Figure 9.3 (a) shows a synthetic 2D image (phantom), which we will consider to represent a cross-sectional plane of a 3D object. The objects in the image were defined on a discrete grid, and hence have step and/or jagged edges. Figure 9.4 (a) is a plot of the projection of the phantom image computed at 90°; observe that the values are all positive.

FIGURE 9.3
(a) A synthetic 2D image (phantom) with 101 × 101 eight-bit pixels, representing a cross-section of a 3D object. (b) Reconstruction of the phantom in (a) obtained using 90 projections from 2° to 180° in steps of 2° with the simple BP algorithm. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.

Figure 9.3 (b) shows the reconstruction of the phantom obtained using 90 projections from 2° to 180° in steps of 2° with the simple BP algorithm. While the objects in the image are faintly visible, the smearing effect of the BP algorithm is obvious.
Considering a point source as the image to be reconstructed, it becomes evident that BP produces a spoke-like pattern with straight lines at all projection angles, intersecting at the position of the point source. This may be considered to be the PSF of the reconstruction process, which is responsible for the blurring of details.

[Plots (a) and (b): ray sum versus ray number.]
FIGURE 9.4
(a) Projection of the phantom image in Figure 9.3 (a) computed at 90°. (b) Filtered version of the projection using only the ramp filter inherent to the FBP algorithm. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
The use of limited projection data in reconstruction results in geometric distortion and streaking artifacts [744, 745, 746, 747, 748]. The distortion may be modeled by the PSF of the reconstruction process if it is linear and shift-invariant; this condition is satisfied by the BP process. The PSFs of the simple BP method are shown as images in Figure 9.5 (a) for the case with 10 projections over 180°, and in Figure 9.5 (b) for the case with 10 projections from 40° to 130°. The reconstructed image is given by the convolution of the original image with the PSF; the images in parts (c) and (d) of Figure 9.5 illustrate the corresponding reconstructed images of the phantom in Figure 9.3 (a). Limited improvement in image quality may be obtained by applying deconvolution filters to the reconstructed image [744, 745, 746, 747, 748, 749, 750, 751]. Deconvolution is implicit in the filtered (convolution) backprojection technique, which is described next.

9.3.1 Filtered backprojection


Consider the inverse Fourier transform relationship

    f(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F(u, v) exp[j2π(ux + vy)] du dv.   (9.10)

Changing from the Cartesian coordinates (u, v) to the polar coordinates (w, θ), where w = √(u² + v²) and θ = tan⁻¹(v/u), we get

    f(x, y) = ∫_0^{2π} ∫_0^∞ F(w, θ) exp[j2πw(x cos θ + y sin θ)] w dw dθ
            = ∫_0^{π} ∫_0^∞ F(w, θ) exp[j2πw(x cos θ + y sin θ)] w dw dθ
            + ∫_0^{π} ∫_0^∞ F(w, θ + π) exp{j2πw[x cos(θ + π) + y sin(θ + π)]} w dw dθ.   (9.11)

Here, u = w cos θ, v = w sin θ, and du dv = w dw dθ. Because F(w, θ + π) = F(−w, θ), we get

    f(x, y) = ∫_0^{π} [ ∫_{−∞}^{∞} F(w, θ) |w| exp(j2πwt) dw ] dθ
            = ∫_0^{π} [ ∫_{−∞}^{∞} P_θ(w) |w| exp(j2πwt) dw ] dθ,   (9.12)

with t = x cos θ + y sin θ as before. If we define

    q_θ(t) = ∫_{−∞}^{∞} P_θ(w) |w| exp(j2πwt) dw,   (9.13)

we get

    f(x, y) = ∫_0^{π} q_θ(t) dθ = ∫_0^{π} q_θ(x cos θ + y sin θ) dθ.   (9.14)

FIGURE 9.5
PSF of the BP procedure using: (a) 10 projections from 18° to 180° in steps of 18°; (b) 10 projections from 40° to 130° in steps of 10°. The images (a) and (b) have been enhanced with γ = 3. Reconstruction of the phantom in Figure 9.3 (a) obtained using (c) 10 projections as in (a) with the BP algorithm; (d) 10 projections as in (b) with the BP algorithm. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
It is now seen that a perfect reconstruction of f(x, y) may be obtained by backprojecting filtered projections q_θ(t) instead of backprojecting the original projections p_θ(t); hence the name filtered backprojection (FBP). The filter is represented by the |w| function, known as the ramp filter; see Figure 9.6. Observe that the limits of integration in Equation 9.12 are (0, π) for θ and (−∞, ∞) for w. In practice, a smoothing window should be applied to reduce the amplification of high-frequency noise by the |w| function. Furthermore, the integrals change to summations in practice due to the finite number of projections available, as well as the discrete nature of the projections themselves and of the Fourier transform computations employed. (Details of the discrete version of FBP are provided in the next section.)
An important feature of the FBP technique is that each projection may be filtered and backprojected while further projection data are being acquired, which was of help in on-line processing with the first-generation CT scanners (see Figure 1.20). Furthermore, the inverse Fourier transform of the filter |w| (with modifications to account for the discrete nature of measurements, the smoothing window, etc.; see Figure 9.7) could be used to convolve the projections directly in the t space [74] using fast array processors. FBP is the most widely used procedure for image reconstruction from projections; however, the procedure provides good reconstructed images only when a large number of projections spanning the full angular range of 0° to 180° are available.

9.3.2 Discrete filtered backprojection


The filtering procedure with the |w| function, in theory, must be performed over −∞ ≤ w ≤ ∞. In practice, the signal energy above a certain frequency limit W will be negligible, and |w| filtering beyond the limit will only amplify noise. Thus, we may consider the projections to be bandlimited to W. Then, using the sampling theorem, p_θ(t) can be represented by its samples at the sampling rate 2W as

    p_θ(t) = Σ_{m=−∞}^{∞} p_θ(m/(2W)) { sin[2πW(t − m/(2W))] / [2πW(t − m/(2W))] }.   (9.15)

Then,

    P_θ(w) = (1/(2W)) Σ_{m=−∞}^{∞} p_θ(m/(2W)) exp[−j2πw m/(2W)] b_W(w),   (9.16)

where

    b_W(w) = 1 if |w| ≤ W,
           = 0 otherwise.   (9.17)

FIGURE 9.6
(a) The (bandlimited) ramp filter inherent to the FBP algorithm. (b) The ramp filter weighted with a Butterworth lowpass filter. The filters are shown for both positive and negative frequency along the w axis, with w = 0 at the center. The corresponding filters are shown in the Radon domain (t space) in Figure 9.7.

FIGURE 9.7
(a) The inverse Fourier transform of the (bandlimited) ramp filter inherent to the FBP algorithm (in the t space). (b) The inverse Fourier transform of the ramp filter weighted with a Butterworth lowpass filter. The functions, which are used for convolution with projections in the CBP method, are shown for both +t and −t, with t = 0 at the center. The corresponding filters are shown in the frequency domain (w space) in Figure 9.6.
If the projections are of finite order, that is, they can be represented by a finite number of samples N + 1, then

    P_θ(w) = (1/(2W)) Σ_{m=−N/2}^{N/2} p_θ(m/(2W)) exp[−j2πw m/(2W)] b_W(w).   (9.18)

Let us assume that N is even, and let the frequency axis be discretized as

    w = k (2W/N), for k = −N/2, ..., 0, ..., N/2.   (9.19)

Then,

    P_θ(k 2W/N) = (1/(2W)) Σ_{m=−N/2}^{N/2} p_θ(m/(2W)) exp(−j2π mk/N),   (9.20)

for k = −N/2, ..., 0, ..., N/2. This represents a DFT relationship, and may be evaluated using the FFT algorithm.
The filtered projection q_θ(t) may then be obtained as

    q_θ(t) = ∫_{−W}^{W} P_θ(w) |w| exp(j2πwt) dw   (9.21)
           ≈ (2W/N) Σ_{k=−N/2}^{N/2} P_θ(k 2W/N) |k 2W/N| exp[j2πk (2W/N) t].   (9.22)

If we want to evaluate q_θ(t) for only those values of t at which p_θ(t) has been sampled, we get

    q_θ(m/(2W)) ≈ (2W/N) Σ_{k=−N/2}^{N/2} P_θ(k 2W/N) |k 2W/N| exp(j2π mk/N),   (9.23)

for m = −N/2, ..., −1, 0, 1, ..., N/2.
In order to control noise enhancement by the |k 2W/N| filter, it may be beneficial to include a filter window such as the Hamming window; then,

    q_θ(m/(2W)) ≈ (2W/N) Σ_{k=−N/2}^{N/2} P_θ(k 2W/N) |k 2W/N| G(k 2W/N) exp(j2π mk/N),   (9.24)

with

    G(k 2W/N) = 0.54 + 0.46 cos(2πk/N), k = −N/2, ..., 0, ..., N/2.   (9.25)
Using the convolution theorem, we get

    q_θ(m/(2W)) ≈ (2W/N) [ p_θ(m/(2W)) ⊛ g_1(m/(2W)) ],   (9.26)

where ⊛ denotes circular (periodic) convolution, m = −N/2, ..., 0, ..., N/2, and g_1(m/(2W)) is the inverse DFT of |k 2W/N| G(k 2W/N). Butterworth or other lowpass filters may also be used instead of the Hamming window.
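The Hamming weighting of Equation 9.25 applied to the discrete ramp |k 2W/N| can be sketched as follows; the values of N and W used in the example are arbitrary.

```python
import numpy as np

def windowed_ramp(N, W):
    """Discrete ramp |k 2W/N| weighted by the Hamming window G(k 2W/N)."""
    k = np.arange(-(N // 2), N // 2 + 1)
    ramp = np.abs(k) * 2.0 * W / N
    window = 0.54 + 0.46 * np.cos(2.0 * np.pi * k / N)   # Equation 9.25
    return ramp * window
```

The window tapers the ramp from its full value near the band center to 0.08 of it at k = ±N/2, which is exactly the noise-suppressing modification shown in Figure 9.6 (b) in spirit, with a Hamming taper in place of the Butterworth weighting.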
Observe that the inverse Fourier transform of |w| does not exist, due to the fact that |w| is neither absolutely nor square integrable. However, if we consider the inverse Fourier transform of |w| exp(−ε|w|) as ε → 0, we get the function [11]

    p_ε(t) = [ε² − (2πt)²] / [ε² + (2πt)²]²;   (9.27)

for large t, p_ε(t) ≈ −1/(2πt)².
The reconstructed image may be obtained as

    f̃(x, y) = (π/L) Σ_{l=1}^{L} q_{θ_l}(x cos θ_l + y sin θ_l),   (9.28)

where the L angles θ_l are those at which the projections p_θ(t) are available, and the (x, y) coordinates are discretized as appropriate.
For practical implementation of discrete FBP, let us consider the situation where the projections have been sampled with an interval of τ mm with no aliasing error. Each projection p_{θ_l}(mτ) is then limited to the frequency band (−W, W), with W = 1/(2τ) cycles/mm. The continuous versions of the filtered projections are

    q_{θ_l}(t) = ∫_{−∞}^{∞} P_{θ_l}(w) H(w) exp(j2πwt) dw,   (9.29)

where the filter H(w) = |w| b_W(w), with b_W(w) as defined in Equation 9.17. The impulse response of the filter H(w) is [11]

    h(t) = (1/(2τ²)) [ sin(2πt/(2τ)) / (2πt/(2τ)) ] − (1/(4τ²)) [ sin(πt/(2τ)) / (πt/(2τ)) ]².   (9.30)

Because we require h(t) only at integral multiples of the sampling interval τ, we have

    h(nτ) = 1/(4τ²),          n = 0;
          = 0,                n even;
          = −1/(n²π²τ²),      n odd.   (9.31)
The filtered projections q_{θ_l}(mτ) may be obtained as

    q_{θ_l}(mτ) = τ Σ_{n=0}^{N−1} p_{θ_l}(nτ) h[(m − n)τ], m = 0, 1, ..., N − 1,   (9.32)

where N is the finite number of samples in the projection p_{θ_l}(mτ). Observe that h(nτ) is required for n = −(N − 1), ..., 0, ..., N − 1. When the filter is implemented as a convolution, the FBP method is also referred to as convolution backprojection (CBP).
The procedure for FBP may be expressed in algorithmic form as follows:
1. Measure the projection p_{θ_l}(mτ).
2. Compute the filtered projection q_{θ_l}(mτ).
3. Backproject the filtered projection q_{θ_l}(mτ).
4. Repeat Steps 1–3 for all projection angles θ_l, l = 1, 2, ..., L.
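The filtering step of the procedure can be sketched with the sampled impulse response of Equation 9.31 and the discrete convolution of Equation 9.32 (the CBP form); the choice τ = 1 and the test projections below are illustrative.

```python
import numpy as np

def ramp_impulse_response(N, tau=1.0):
    """h(n tau) of Equation 9.31, for n = -(N - 1), ..., 0, ..., N - 1."""
    n = np.arange(-(N - 1), N)
    h = np.zeros(n.shape, dtype=float)
    h[n == 0] = 1.0 / (4.0 * tau ** 2)
    odd = (n % 2) != 0
    h[odd] = -1.0 / (np.pi ** 2 * n[odd].astype(float) ** 2 * tau ** 2)
    return h

def filter_projection(p, tau=1.0):
    """q(m tau) of Equation 9.32: discrete convolution of p with h."""
    N = len(p)
    h = ramp_impulse_response(N, tau)
    full = np.convolve(p, h)              # full linear convolution
    return tau * full[N - 1:2 * N - 1]    # keep q(m tau) for m = 0, ..., N - 1
```

Filtering an impulse returns the impulse response itself, with the characteristic positive central sample and negative side lobes; filtering a constant projection returns values near zero, consistent with the ramp filter's vanishing response at zero frequency.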
The FBP algorithm is suitable for on-line implementation in a translate-rotate CT scanner because each parallel-ray projection may be filtered and backprojected as soon as it is acquired, while the scanner is acquiring the next projection. The reconstructed image is ready as soon as the last projection is acquired, filtered, and backprojected. If the projections are acquired using fan-beam geometry (see Figure 1.20), one could either rebin the fan-beam data to compose parallel-ray projections, or use reconstruction algorithms specifically tailored to fan-beam geometry [11, 82].
Examples of reconstructed images: Figure 9.4 (b) shows a plot of the filtered version of the projection in Figure 9.4 (a) using only the ramp filter (|w|) inherent to the FBP algorithm. Observe that the filtered projection has negative values.
Figure 9.8 (a) shows the reconstruction of the phantom in Figure 9.3 (a) obtained using 90 projections with the FBP algorithm; only the ramp filter that is implicit in the FBP process was used, with no other smoothing or lowpass filter function. The contrast and visibility of the objects are better than those in the case of the simple BP result in Figure 9.3 (b); however, the image is noisy due to the increasing gain of the ramp filter at higher frequencies. The reconstructed image also exhibits artifacts related to the computation of the projections on a discrete grid; refer to Herman [43], Herman et al. [752], and Kak and Slaney [82] for discussions on this topic. The use of additional filters could reduce the noise and artifacts: Figure 9.8 (b) shows the result of reconstruction with the FBP algorithm including a fourth-order Butterworth filter, with the −3 dB cutoff at 0.4 times the maximum frequency present in the data, to filter the projections. The Butterworth filter has suppressed the noise and artifacts at the expense of blurring the edges of the objects in the image.

FIGURE 9.8
(a) Reconstruction of the phantom in Figure 9.3 (a) obtained using 90 projections from 2° to 180° in steps of 2° with the FBP algorithm; only the ramp filter that is implicit in the FBP process was used. (b) Reconstruction of the phantom with the FBP algorithm as in (a), but with the additional use of a Butterworth lowpass filter. See also Figure 9.5. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
The Radon transform may be interpreted as a transformation of the given image from the (x, y) space to the (θ, t) space. In practical CT scanning, the projection or ray-integral data are obtained as samples at discrete intervals in t and θ. Just as we encounter the (Nyquist or Shannon) sampling theorem in the representation of a 1D signal in terms of its samples in time, we now encounter the requirement to sample adequately along both the t and θ axes. A major distinction lies in the fact that the measurements made in CT scanners are discrete to begin with, and the signal (the body or object being imaged) cannot be prefiltered to prevent aliasing. Undersampling in either axis will lead to aliasing errors and poor reconstructed images.
Figure 9.9 (a) shows the reconstructed version of the phantom in Figure 9.3 (a) obtained using only 10 projections spanning the 0°–180° range in sampling steps of 18° and using the FBP algorithm. Although the edges of the objects in the image are sharper than those in the reconstruction obtained using the BP algorithm with the same parameters [see Figure 9.5 (c)], the image is affected by severe streaking artifacts [753] due to the limited number of projections used. Figure 9.9 (b) shows the reconstructed image of the phantom obtained using 10 projections but spanning only the angular range of 40°–130° in steps of 10°. The limited angular coverage provided by the projections has clearly affected the quality of the image, and has introduced geometric distortion [753, 746, 748, 744, 747].
9.4 Algebraic Reconstruction Techniques

The algebraic reconstruction technique (ART) [11, 742, 743] is related to the Kaczmarz method [754] of projections for solving simultaneous equations. The Kaczmarz method takes an approach that is completely different from that of the Fourier or FBP methods: the available projections, treated as individual ray sums in a discrete representation, are seen as a set of simultaneous equations, with the unknown quantities being the discrete pixels of the image. The large size of images encountered in practice precludes the use of the usual methods for solving simultaneous equations. Furthermore, in many practical applications, the number of available equations may be far less than the number of pixels in the image to be reconstructed; the set of simultaneous equations is then under-determined. The Kaczmarz method of projections is an elegant iterative method that may be implemented easily. (Note: The Kaczmarz method uses the term "projection" in the vectorial or geometric sense, and individual ray sums are processed one at a time. Observe that a set of ray sums or integrals is also known as a projection. The distinction should be clear from the context.)
FIGURE 9.9
(a) Reconstruction of the phantom in Figure 9.3 (a) obtained using 10 projections from 18° to 180° in steps of 18° with the FBP algorithm. (b) Reconstruction of the phantom obtained using 10 projections from 40° to 130° in steps of 10° with the FBP algorithm. The ramp filter that is implicit in the FBP process was combined with a Butterworth lowpass filter in both cases. See also Figure 9.5. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
Let the image to be reconstructed be divided into N cells, f_n denoting the value in the nth cell. The image density or intensity is assumed to be constant within each cell. Let M be the number of ray sums available, expressed as

$$ p_m = \sum_{n=1}^{N} w_{mn} f_n, \quad m = 1, 2, \ldots, M, \quad (9.33) $$

where w_{mn} is the contribution factor of the nth image element to the mth ray sum, equal to the fractional area of the nth cell crossed by the mth ray path, as illustrated in Figure 9.10. (Note: An image and its Radon transform are each represented using only one index in this formulation of ART.) Observe that for a given ray m, most of the w_{mn} will be zero, because only a few elements of the image contribute to the corresponding ray sum. Equation 9.33 may also be expressed as

$$ \begin{aligned} w_{11} f_1 + w_{12} f_2 + \cdots + w_{1N} f_N &= p_1 \\ w_{21} f_1 + w_{22} f_2 + \cdots + w_{2N} f_N &= p_2 \\ &\;\;\vdots \\ w_{M1} f_1 + w_{M2} f_2 + \cdots + w_{MN} f_N &= p_M. \end{aligned} \quad (9.34) $$
A grid representation with N cells gives the image N degrees of freedom. Thus, an image represented by f = [f_1, f_2, ..., f_N]^T may be considered to be a single point in an N-dimensional hyperspace. Then, each of the above ray-sum equations will represent a hyperplane in this hyperspace. If a unique solution exists, it is given by the intersection of all the hyperplanes at a single point. To arrive at the solution, the Kaczmarz method takes the approach of successively and iteratively projecting an initial guess and its successors from one hyperplane to the next.

Let us, for simplicity, consider a 2D version of the situation (with N = M = 2), as illustrated in Figures 9.11 and 9.12. Let f^(0) represent vectorially the initial guess to the solution, and let w_1 = [w_{11}, w_{12}]^T represent vectorially the series of weights (coefficients) in the first ray equation. The first ray sum may then be written as

$$ w_1 \cdot f = p_1. \quad (9.35) $$

The hyperplane represented by this equation is orthogonal to w_1. (Consider two images or points f_1 and f_2 belonging to the hyperplane. We have w_1 · f_1 = p_1 and w_1 · f_2 = p_1. Hence, w_1 · [f_1 − f_2] = 0. Therefore, w_1 is orthogonal to the hyperplane.)

With reference to Figure 9.12, Equation 9.35 indicates that, for the vector OC corresponding to any point C on the hyperplane, its projection on to the vector w_1 is of a constant length. The unit vector OU along w_1 is given by

$$ OU = \frac{w_1}{\sqrt{w_1 \cdot w_1}}. \quad (9.36) $$
FIGURE 9.10
ART treats the image as a matrix of discrete pixels of finite size (Δx, Δy). Each ray has a finite width τ. The fraction of the area of the nth pixel crossed by the mth ray is represented by the weighting factor w_{mn} = area of ABCD/(Δx Δy) for the nth pixel f_n in the figure. Adapted, with permission, from A. Rosenfeld and A.C. Kak, Digital Picture Processing, 2nd ed., New York, NY, 1982. © Academic Press.
The perpendicular distance of the hyperplane from the origin is

$$ \|OA\| = OU \cdot OC = \frac{w_1 \cdot f}{\sqrt{w_1 \cdot w_1}} = \frac{p_1}{\sqrt{w_1 \cdot w_1}}. \quad (9.37) $$

Now,

$$ f^{(1)} = f^{(0)} - GH, \quad (9.38) $$

and

$$ \|GH\| = \|OF\| - \|OA\| = f^{(0)} \cdot OU - \|OA\| = \frac{f^{(0)} \cdot w_1}{\sqrt{w_1 \cdot w_1}} - \frac{p_1}{\sqrt{w_1 \cdot w_1}} = \frac{f^{(0)} \cdot w_1 - p_1}{\sqrt{w_1 \cdot w_1}}. \quad (9.39) $$

Because the directions of GH and OU are the same, GH = ||GH|| OU. Thus,

$$ GH = \left[ \frac{f^{(0)} \cdot w_1 - p_1}{\sqrt{w_1 \cdot w_1}} \right] \frac{w_1}{\sqrt{w_1 \cdot w_1}} = \frac{f^{(0)} \cdot w_1 - p_1}{w_1 \cdot w_1} \, w_1. \quad (9.40) $$

Therefore,

$$ f^{(1)} = f^{(0)} - \frac{f^{(0)} \cdot w_1 - p_1}{w_1 \cdot w_1} \, w_1. \quad (9.41) $$

In general, the mth estimate is obtained from the (m−1)th estimate as

$$ f^{(m)} = f^{(m-1)} - \frac{f^{(m-1)} \cdot w_m - p_m}{w_m \cdot w_m} \, w_m. \quad (9.42) $$

That is, the (m−1)th estimate on hand is projected on to the hyperplane of the mth ray sum, and the deviation from the true ray sum p_m is obtained. This deviation is normalized and applied as a correction to all the pixels according to the weighting factors in w_m. When this process is applied to all the M ray-sum hyperplanes given, one cycle or iteration is completed. (Note: Because the image is updated by altering the pixels along each individual ray sum, the index of the updated estimate or of the iteration is equal to the index of the latest ray sum used. However, as the entire process is iterated, the index of the estimate is reset at the beginning of each iteration.)

Depending upon the initial guess and the organization of the hyperplanes, a number of iterations may have to be completed in order to obtain the solution (if it exists). The following important characteristics of ART are worth observing:

- ART proceeds ray by ray, and is iterative.
FIGURE 9.11
Illustration of the Kaczmarz method of solving a pair of simultaneous equations in two unknowns. The solution is f = [3, 4]^T. The weight vectors for the two ray sums (straight lines) are w_1 = [2, −1]^T and w_2 = [1, 1]^T. The equations of the straight lines are w_1 · f = 2f_1 − f_2 = 2 = p_1 and w_2 · f = f_1 + f_2 = 7 = p_2. The initial estimate is f^(0) = [4, 1]^T. The first updated estimate is f^(1) = [2, 2]^T; the second updated estimate is f^(2) = [3.5, 3.5]^T. After the second cycle, the estimate is [3.05, 3.95]^T. Because two ray sums are given, two corrections constitute one cycle (or iteration) of ART. The path of the second cycle of ART is also illustrated in the figure. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
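The numbers in Figure 9.11 can be verified with a short sketch of the Kaczmarz iteration of Equation 9.42. The function name `kaczmarz` and the array-based formulation here are illustrative, not from the book:

```python
import numpy as np

def kaczmarz(W, p, f0, n_cycles):
    """Cyclic Kaczmarz projections (Equation 9.42).

    One pass over all rays (rows of W) constitutes one cycle of ART.
    """
    f = f0.astype(float).copy()
    for _ in range(n_cycles):
        for w_m, p_m in zip(W, p):
            # Project the current estimate onto the hyperplane w_m . f = p_m
            f -= ((f @ w_m - p_m) / (w_m @ w_m)) * w_m
    return f

W = np.array([[2.0, -1.0],    # w_1: ray sum 2 f1 - f2 = 2
              [1.0,  1.0]])   # w_2: ray sum   f1 + f2 = 7
p = np.array([2.0, 7.0])
f0 = np.array([4.0, 1.0])     # initial estimate f(0)

print(kaczmarz(W, p, f0, 1))   # after one cycle:  [3.5  3.5 ]
print(kaczmarz(W, p, f0, 2))   # after two cycles: [3.05 3.95]
print(kaczmarz(W, p, f0, 50))  # converges to the solution [3, 4]
```

Because the two lines are not orthogonal, the estimate approaches the intersection over several cycles, in agreement with the discussion above.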
FIGURE 9.12
Illustration of the algebraic reconstruction technique. f^(1) is an improved estimate computed by projecting the initial guess f^(0) on to the hyperplane (the straight line AG in the illustration) corresponding to the first ray sum given by the equation w_1 · f = p_1. Adapted, with permission, from A. Rosenfeld and A.C. Kak, Digital Picture Processing, 2nd ed., New York, NY, 1982. © Academic Press.
- If the hyperplanes of all the given ray sums are mutually orthogonal, we may start with any initial guess and reach the solution in only one cycle.

- On the other hand, if the hyperplanes subtend small angles with one another, a large number of iterations will be required. The number of iterations may be reduced by using optimized ray-access schemes [755].

- If the number of ray sums is greater than the number of pixels, that is, M > N, but the measurements are noisy, no unique solution exists; the procedure will oscillate in the neighborhood of the intersections of the hyperplanes.

- If M < N, the system is under-determined and an indefinite or infinite number of partial solutions exist. It has been shown that unconstrained ART converges to the minimum-variance estimate [752].

- The major advantage of ART is that any a priori information available about the image may be introduced easily into the iterative procedure (for example, upper and/or lower limits on pixel values, and the spatial boundaries of the image). This may help in obtaining a useful "solution" even if the system is under-determined.
9.4.1 Approximations to the Kaczmarz method

We could rewrite the reconstruction step in Equation 9.42 at the nth pixel level as

$$ f_n^{(m)} = f_n^{(m-1)} + \frac{p_m - q_m}{\sum_{k=1}^{N} w_{mk}^2} \, w_{mn}, \quad (9.43) $$

where q_m = f^{(m−1)} · w_m = \sum_{k=1}^{N} f_k^{(m-1)} w_{mk}. This equation indicates that, when we project the (m−1)th estimate on to the mth hyperplane, the correction factor for the nth cell is

$$ \Delta f_n^{(m)} = f_n^{(m)} - f_n^{(m-1)} = \frac{p_m - q_m}{\sum_{k=1}^{N} w_{mk}^2} \, w_{mn}. \quad (9.44) $$

Here, p_m is the given (true) ray sum for the mth ray, and q_m is the computed ray sum for the same ray for the estimated image on hand. (p_m − q_m) is the error in the estimate, which is normalized and applied as a correction to all the pixels with appropriate weighting. Because the correction factor is added to the current image, this version of ART is known as additive ART.

In one of the approximations to Equation 9.43, the weights w_{mn} are simply replaced by zeros or ones depending upon whether the center of the nth image cell is within the mth ray (of finite width) or not [742, 743]. Then, the coefficients need not be computed and stored; we may instead determine the pixels to be corrected for the ray considered during the reconstruction procedure. Furthermore, it follows that \sum_{k=1}^{N} w_{mk}^2 = N_m, the number of pixels crossed by the mth ray. The correction applicable to all of the pixels along the mth ray is (p_m − q_m)/N_m. Then,

$$ f_n^{(m)} = f_n^{(m-1)} + \frac{p_m - q_m}{N_m}. \quad (9.45) $$
Because the corrections could be negative, negative pixel values may be encountered. Because negative values are not meaningful in most imaging applications, the constrained (and thereby nonlinear) version of ART is defined as

$$ f_n^{(m)} = \max \left[ 0, \; f_n^{(m-1)} + \frac{p_m - q_m}{N_m} \right]. \quad (9.46) $$

The corrections could also be multiplicative [75]:

$$ f_n^{(m)} = f_n^{(m-1)} \, \frac{p_m}{q_m}. \quad (9.47) $$

This version of ART is known as multiplicative ART. In this case, no positivity constraint is required. Furthermore, the convex hull of the image is almost guaranteed (subject to approximation related to the number of ray sums available and their angular coverage), because a pixel once set to zero will remain so during subsequent iterations. It has been shown that the multiplicative version of ART converges to the maximum-entropy estimate of the image [43, 756].
A generic ART procedure may be expressed in the following algorithmic form:

1. Prepare an initial estimate f^(0) of the image. All of the pixels in the initial image could be zero for additive ART; however, for multiplicative ART, pixels within at least the convex hull of the object in the image must have values other than zero.

2. Compute the ray sum q_m for the first ray path (m = 1) for the estimate of the image on hand.

3. Obtain the difference between the true ray sum p_m and the computed ray sum q_m, and apply the correction to all the pixels belonging to the ray according to one of the ART equations (for example, Equation 9.43, 9.45, 9.46, or 9.47). Apply constraints, if any, based upon the a priori information available.

4. Perform Steps 2 and 3 for all rays available, m = 1, 2, ..., M.

5. Steps 2–4 constitute one cycle or iteration (over all available ray sums). Repeat Steps 2–4 as many times as required. If desired, compute a measure of convergence, such as

$$ E_1 = \sum_{m=1}^{M} (p_m - q_m)^2 \quad (9.48) $$

or

$$ E_2 = \sum_{n=1}^{N} \left[ f_n^{(m)} - f_n^{(m-1)} \right]^2. \quad (9.49) $$

Stop if the error or difference is less than a prespecified limit; else, go back to Step 2.
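As a concrete illustration of the steps above, the following sketch applies constrained additive ART (Equation 9.46) with binary weights to a tiny 2 × 2 image probed by its row and column sums. The function and variable names are illustrative, and two orthogonal views are far fewer than a practical scan would provide: with four ray sums of rank three, the system is under-determined, so ART converges to an image consistent with all of the ray sums rather than to the original image.

```python
import numpy as np

def art_reconstruct(rays, p, n_pixels, n_cycles=50, tol=1e-12):
    """Constrained additive ART (Equation 9.46) with binary weights.

    rays : list of index arrays, one per ray, giving the pixels it crosses
    p    : measured (true) ray sums p_m
    """
    f = np.zeros(n_pixels)                   # Step 1: initial estimate
    for _ in range(n_cycles):
        for m, ray in enumerate(rays):       # Steps 2-4: one cycle over rays
            q_m = f[ray].sum()               # computed ray sum q_m
            # correction (p_m - q_m)/N_m, clipped to enforce positivity
            f[ray] = np.maximum(0.0, f[ray] + (p[m] - q_m) / len(ray))
        # Step 5: convergence measure E1 (Equation 9.48)
        E1 = sum((p[m] - f[ray].sum()) ** 2 for m, ray in enumerate(rays))
        if E1 < tol:
            break
    return f

# 2x2 image [f1 f2; f3 f4], probed by two row sums and two column sums
true_image = np.array([3.0, 1.0, 2.0, 4.0])
rays = [np.array(r) for r in ([0, 1], [2, 3], [0, 2], [1, 3])]
p = np.array([true_image[r].sum() for r in rays])

f = art_reconstruct(rays, p, n_pixels=4)
print(f.reshape(2, 2))   # a solution matching all four ray sums
```

Every ray sum of the result matches the measured value, yet the reconstruction differs from the original image; this is the under-determined case noted earlier, in which a priori constraints (here, positivity) help select a useful solution.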
For improved convergence, a simultaneous correction procedure (Simultaneous Iterative Reconstruction Technique, or SIRT [757]) has been proposed, where the corrections to all the pixels from all the rays are first computed, and the averaged corrections are applied at the same time to all the pixels (that is, only one correction is applied per pixel per iteration). Guan and Gordon [755] proposed different ray-access schemes to improve convergence, including the consecutive use of rays in mutually orthogonal directions. In the absence of complete projection data spanning the full angular range of 0° to 180°, ART typically yields better results than the FBP or the Fourier methods.

Examples of reconstructed images: Figure 9.13 (b) shows the reconstruction of the phantom shown in part (a) of the same figure, obtained using 90 parallel-ray projections with three iterations of constrained additive ART, as in Equation 9.46. Ray sums for use with ART were computed from the phantom image data using angle-dependent ray width, given by max(|sin(θ)|, |cos(θ)|) [742, 753]. The ART reconstruction is better than that given by the BP or the FBP algorithm. The advantages of constrained ART, due to the use of the positivity constraint (that is, the a priori knowledge imposed) and of the ability to iterate, are seen in the improved quality of the result.

Figure 9.14 shows reconstructed images of the phantom obtained using only 10 projections spanning the angular ranges of (a) 18° to 180° in steps of 18°, and (b) 40° to 130° in steps of 10°, respectively. The limited number of projections used and the limited angular coverage of the projections in the second case have affected the quality of the reconstructed images and introduced geometric distortion. However, when compared with the results of BP and FBP with similar parameters (see Figures 9.5 and 9.9), ART has provided better results.
FIGURE 9.13
(a) A synthetic 2D image (phantom) with 101 × 101 eight-bit pixels, representing a cross-section of a 3D object [the same image as in Figure 9.3 (a)]. (b) Reconstruction of the phantom obtained using 90 projections from 2° to 180° in steps of 2° with three iterations of constrained additive ART. See also Figure 9.8. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
FIGURE 9.14
Reconstruction of the phantom in Figure 9.13 (a) obtained using: (a) 10 projections from 18° to 180° in steps of 18° with three iterations of constrained additive ART; (b) 10 projections from 40° to 130° in steps of 10° with three iterations of constrained additive ART. See also Figures 9.5 and 9.9. Reproduced, with permission, from R.M. Rangayyan and A. Kantzas, "Image reconstruction", Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1, Editor: J.G. Webster, Wiley, New York, NY, pp 249–268, 2000. © This material is used by permission of John Wiley & Sons, Inc.
9.5 Imaging with Diffracting Sources

In some applications of CT imaging, such as imaging pregnant women, X-ray imaging may not be advisable. Imaging with nonionizing forms of radiation, such as acoustic (ultrasonic) [82, 83] and electromagnetic (optical or thermal) imaging [85], is then a valuable alternative. X-ray imaging is also not suitable when the object to be imaged has poor contrast in density or atomic number distribution. An important point to observe in acoustic or electromagnetic imaging is that these forms of energy do not propagate along straight-line ray paths through a body, due to refraction and diffraction. When the dimensions of the inhomogeneities in the object being imaged are comparable to or smaller than the wavelength of the radiation used, geometric propagation concepts cannot be applied; it becomes necessary to consider wave propagation and diffraction-based methods.

When the body being imaged may be treated as a weakly scattering object in the 2D sectional plane and invariant in the axial direction, the Fourier diffraction theorem is applicable [82]. This theorem states that the 1D Fourier transform of a projection including the effects of diffraction gives values of the 2D Fourier transform of the image along a semicircular arc. Interpolation methods may be developed in the Fourier space taking this property into account for reconstruction of images from projections obtained with diffracting sources. Backpropagation and algebraic techniques have also been proposed for the case of imaging with diffracting sources [82].
9.6 Display of CT Images

X-ray CT is capable of producing images with high density resolution, on the order of one part in 1,000. For display purposes, the attenuation coefficients are normalized with respect to that of water and expressed as

$$ HU = K \left( \frac{\mu}{\mu_w} - 1 \right), \quad (9.50) $$

where μ is the measured attenuation coefficient, and μ_w is the attenuation coefficient of water. The parameter K used to be set at 500 in early models of the CT scanner. It is now common to use K = 1,000 to obtain the CT number in Hounsfield units (HU) [758], named after the inventor of the first commercial medical CT scanner [77]. This scale results in values of about +1,000 for bone, 0 for water, about −1,000 for air, −80 to 20 for soft tissue, and about −800 for lung tissue [38]. Table 9.1 shows the mean and SD of the CT values in HU for several types of tissue in the abdomen.
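Equation 9.50 is easy to check numerically. The sketch below uses an assumed, illustrative value for the linear attenuation coefficient of water, and the function name `hounsfield` is not from the book:

```python
def hounsfield(mu, mu_water, K=1000.0):
    """CT number (Equation 9.50): attenuation normalized to water."""
    return K * (mu / mu_water - 1.0)

MU_WATER = 0.19   # assumed linear attenuation coefficient of water, 1/cm

print(hounsfield(MU_WATER, MU_WATER))        # water -> 0.0 HU
print(hounsfield(0.0, MU_WATER))             # air (mu ~ 0) -> -1000.0 HU
print(hounsfield(2.0 * MU_WATER, MU_WATER))  # twice water -> +1000.0 HU
```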
TABLE 9.1
Mean and SD of CT Values in Hounsfield Units (HU) for a Few Types of Abdominal Tissue.

Tissue            Mean HU    SD
Air²              -1,006     2
Fat¹              -90        18
Bile¹             +16        8
Kidney¹           +32        10
Pancreas¹         +40        14
Blood (aorta)¹    +42        18
Muscle¹           +44        14
Necrosis²         +45        15
Spleen¹           +46        12
Liver¹            +60        14
Viable tumor²     +91        25
Marrow¹           +142       48
Calcification²    +345       155
Bone²             +1,005     103

¹ Based upon Mategrano et al. [759]. ² Estimated from CT exams with contrast (see Section 9.9), based upon 1,000–4,000 pixels in each category. The contrast medium is expected to increase the CT values of vascularized tissues by 30–40 HU. The CT number for air should be −1,000; the estimated value is slightly different due to noise in the images. Reproduced with permission from F.J. Ayres, M.K. Zuffo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente, "Estimation of the tissue composition of the tumor mass in neuroblastoma using segmented CT images", Medical and Biological Engineering and Computing, 42:366–377, 2004. © IFMBE.
The dynamic range of CT values is much wider than those of common display devices and that of the human visual system at a given level of adaptation. Furthermore, clinical diagnosis requires detailed visualization and analysis of small density differences. For these reasons, the presentation of the entire range of values available in a CT image in a single display is neither practically feasible nor desirable. In practice, small "windows" of the CT number scale are selected and linearly expanded to occupy the capacity of the display device. The window width and level (center) values may be chosen interactively to display different density ranges with improved perceptibility of details within the chosen density window. Values above or below the window limits are displayed as totally white or black, respectively. This technique, known as windowing or density slicing, may be expressed as

$$ g(x, y) = \begin{cases} 0 & \text{if } f(x, y) \le m, \\ \dfrac{N}{M - m} \, [f(x, y) - m] & \text{if } m < f(x, y) < M, \\ N & \text{if } f(x, y) \ge M, \end{cases} \quad (9.51) $$

where f(x, y) is the original image in CT numbers, g(x, y) is the windowed image to be displayed, [m, M] is the range of CT values in the window to be displayed, and [0, N] is the range of the display values. The window width is [M − m] and the window level (or center) is (M + m)/2; the display range is typically [0, 255] with 8-bit display systems.
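A sketch of Equation 9.51, with the window specified by its level and width as described above; the function and parameter names are illustrative, not from the book:

```python
import numpy as np

def window_ct(f_hu, level, width, n_display=255):
    """Density windowing (Equation 9.51): map CT values in [m, M]
    linearly onto the display range [0, n_display], saturating outside."""
    m = level - width / 2.0            # lower window limit
    M = level + width / 2.0            # upper window limit
    g = (n_display / (M - m)) * (np.asarray(f_hu, dtype=float) - m)
    return np.clip(g, 0.0, n_display)  # 0 below the window, N above it

# A window of level 40 HU and width 80 HU maps [0, 80] HU to [0, 255]
f = np.array([-1000.0, 0.0, 40.0, 80.0, 1000.0])
print(window_ct(f, level=40, width=80))  # -> [0. 0. 127.5 255. 255.]
```

With a narrow window (for example, width 75 HU, as in Figure 9.15), small differences in soft tissue are spread over the full display range, while bone saturates to white and air to black.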
Example: Figure 9.15 shows a set of two CT images of a patient with head injury, with each image displayed using two sets of window level and width. The effects of the density window chosen on the features of the image displayed are clearly seen in the figure: either the fractured bone or the brain matter is seen in detail in a given windowed image, but not both in the same image. See Figures 1.24 and 4.4 for more examples of density windowing in the display of CT images. See also Figures 1.22, 1.23, and 2.15, as well as Section 9.9, for more examples of X-ray CT images. Examples of images reconstructed from projection data from other modalities of medical imaging, such as MRI, SPECT, and PET, are provided in Sections 1.7 and 1.9.

A dramatic visualization of details may be achieved with pseudo-color techniques. Arbitrary or structured color scales could be assigned to CT values by LUTs or gray-scale-to-color transformations. Some of the popular color transforms are the rainbow (VIBGYOR: violet – indigo – blue – green – yellow – orange – red) and the heated metal color (black – red – yellow – white) sequences. Difficulties may arise, however, in associating density values with different colors if the transformation is arbitrary and not monotonic in intensity or total brightness. Furthermore, small changes in CT values could cause abrupt changes in the corresponding colors displayed, especially with a mapping such as VIBGYOR. An LUT linking the displayed colors to CT numbers or other pixel attributes may assist in improved visual analysis of image features in engineering and scientific applications.
FIGURE 9.15
Two CT images of a patient with head injury, with each image displayed with two sets of window level and width. Images (a) and (b) are of the same section; images (c) and (d) are of another section. The window levels used to obtain images (a) – (d) are 18, 138, 22, and 138 HU, respectively; the window widths used are 75, 400, 75, and 400 HU, respectively. The windows in (b) and (d) display the skull, the fracture, and the bone segments, but the brain matter is not visible; the windows in (a) and (c) display the brain matter in detail, but the fracture area is saturated. Images courtesy of W. Gordon, Health Sciences Centre, Winnipeg, MB, Canada.
9.7 Agricultural and Forestry Applications

Cruvinel et al. [760] and Vaz et al. [761] developed a portable X-ray and gamma-ray minitomograph for application in soil science, and used the scanner to measure water content and bulk density of soil samples. Soil-related studies address the identification of features such as fractures, wormholes, and roots, and assist in studies of flow of various contaminants in soil.

Forestry applications of CT have appeared in the literature in the form of scanning of live trees to measure growth rings and detect decay using a portable X-ray CT scanner [762], and monitoring tree trunks or logs in the timber and lumber industry. Figures 9.16 and 9.17 show a portable CT scanner in operation and images of a utility pole.
FIGURE 9.16
A portable CT scanner to image live trees. Reproduced with permission from A.M. Onoe, J.W. Tsao, H. Yamada, H. Nakamura, J. Kogure, H. Kawamura, and M. Yoshimatsu, "Computed tomography for measuring annual rings of a live tree", Proceedings of the IEEE, 71(7):907–908, 1983. © IEEE.
FIGURE 9.17
CT images of a Douglas fir utility pole and photographs of the corresponding physical sections. The sections in (a) demonstrate normal annual growth rings. The sections in (b) indicate severe decay in the heartwood region. Reproduced with permission from A.M. Onoe, J.W. Tsao, H. Yamada, H. Nakamura, J. Kogure, H. Kawamura, and M. Yoshimatsu, "Computed tomography for measuring annual rings of a live tree", Proceedings of the IEEE, 71(7):907–908, 1983. © IEEE.
9.8 Microtomography

The resolution of common CT devices used in medical and other applications varies from the common figure of 1 × 1 mm to about 200 × 200 μm in cross-section, and 1–5 mm between slices. Special systems have been built to image small samples of the order of 1 cm³ in volume with resolution of the order of 5–10 μm in cross-section [134, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772]. (See Section 2.12 for illustrations of the LSF and MTF of a μCT system.) Such an imaging procedure is called microtomography, microCT, or μCT, being a hybrid of tomography and microscopy. Most μCT studies are performed with finely focused and nearly monochromatic X-ray beams produced by a particle accelerator (such as a synchrotron). Stock [763] provides a review of the basic principles, techniques, and applications of μCT. Whereas most μCT studies have been limited to small, excised samples, Sasov [768] discusses the design of a μCT system to image whole, small animals with resolution of the order of 10 μm.

Umetani et al. [767] present the application of synchrotron-based μCT in a microangiography mode to the study of microcirculation within the heart of small animals. Shimizu et al. [766] studied the effects of alveolar hemorrhage and alveolitis (pneumonia) on the microstructure of the lung using synchrotron-based μCT. Johnson et al. [764] developed a microfocal X-ray imaging system to image the arterial structure in the rat lung, and studied the decrease in distensibility due to pulmonary hypertension.

Shaler et al. [769] applied μCT to study the fiber orientation and void structure in wood, paper, and wood composites. Illman and Dowd [765] studied the xylem tissue structure of wood samples, and analyzed the loss of structural integrity due to fungal degradation.
Example of application to the analysis of bone structure: Injury to the anterior cruciate ligament is expected to lead to a decrease in bone mineral density. Post-traumatic osteoarthritis is known to cause joint space narrowing and full-thickness cartilage erosion. It has been established that the cancellous architecture of bone is related to its mechanical properties and strength [134, 771, 772].

Conventional studies on bone structure have been performed through histological analysis of thin slices of bone samples. Spatial resolution of the order of 0.3 μm can be realized with optical microscopy or microradiography. However, in addition to being destructive, this procedure could cause artifacts due to the slicing operation. Furthermore, limitations exist in the 3D information derived from a few slices. Boyd et al. [771] applied μCT techniques to analyze the morphometric and anisotropic changes in periarticular cancellous bone due to ligament injury and osteoarthritis.

In the work of Boyd et al., anterior cruciate ligaments of dogs were transected by arthrotomy. After a specific recovery period, at euthanasia, the femora and tibiae were extracted. Cylindrical bone cores of diameter 6 mm and length 12–14 mm were obtained from the weight-bearing regions of the medial femoral condyles and the medial tibial plateaus. Contralateral bone core samples were also extracted and processed to serve as internal controls. The cores were scanned at a resolution of 34 μm, and 3D images with diameter of 165 voxels and length of up to 353 voxels were reconstructed. Morphometric parameters such as bone volume ratio, relative surface density, and trabecular thickness were estimated from the images. It was postulated that the anisotropy of the bone fabric or trabecular orientation is related to mechanical anisotropy and loading conditions.
Figure 9.18 shows two sample bone-core sectional images each of a case of transected anterior cruciate ligament of a dog after 12 weeks of recovery, and the corresponding contralateral control sample; Figure 9.19 shows 3D renditions of the same samples. The bone core related to ligament transection demonstrates increased trabecular spacing, and hence lower bone density, than the contralateral sample. Boyd et al. observed that significant periarticular bone changes occur as early as three weeks after ligament transection; the changes were more pronounced 12 weeks after transection.

FIGURE 9.18
(a) Two sample sectional images of a bone core sample in a case of anterior cruciate ligament transection after 12 weeks of recovery. (b) Two sample sectional images of the corresponding contralateral bone core sample. The diameter of each sample is 6 mm. See also Figure 9.19. Images courtesy of S.K. Boyd, University of Calgary [772].
Example of application to the study of microcirculation in the heart: Umetani et al. [767] developed a μCT system using monochromatized synchrotron radiation for use as a microangiography tool to study circulatory disorders and early-stage malignant tumors. Two types of detection systems were used: an indirect system including a fluorescent screen, optical coupling, and a CCD camera; and a direct system with a beryllium faceplate, a photoconductive layer, and an electron-beam scanner.
FIGURE 9.19
3D renditions of μCT reconstructions of a bone core sample in a case of anterior cruciate ligament transection after 12 weeks of recovery (left), and the corresponding contralateral bone core sample (right). The diameter of each sample is 6 mm. See also Figure 9.18. Images courtesy of S.K. Boyd, University of Calgary [772].
In one of the experiments of Umetani et al., the left side of the heart of a rat was fixed in formalin after barium sulphate was injected into the coronary artery. Figure 9.20 (a) shows a projection image (a microradiograph) of the specimen obtained at a resolution of 24 μm. The image clearly shows the left-anterior-descending coronary artery. Figure 9.20 (b) shows a 3D visualization of a part of the specimen (of diameter 3.5 mm and height 5 mm) reconstructed at a resolution of 6 μm. The 3D structure of small blood vessels with diameter of the order of 30–40 μm was visualized by this method. Visualization of tumor-induced small vessels that feed lesions was found to be useful in the diagnosis of malignant tumors.

9.9 Application: Analysis of the Tumor in Neuroblastoma

9.9.1 Neuroblastoma
Neuroblastoma is a malignant tumor of neural-crest origin that may arise
anywhere along the sympathetic ganglia or within the adrenal medulla [773,
774]. There are three types of ganglion cell lesions that form a spectrum of
neoplastic disease. Neuroblastoma is the most immature and malignant form
of the three, usually presenting before the age of five years. Ganglioneuroblastoma
is a more mature form that retains some malignant characteristics, with
peak incidence between five and 10 years of age. Ganglioneuroma is well
differentiated and benign, typically presenting after 10 years of age [775].
Neuroblastoma is the most common extra-cranial solid malignant tumor
in children [776]; it is the third most common malignancy of childhood. It
accounts for 8 – 10% of all childhood cancers [777], and 15% of all deaths
related to cancer in the pediatric age group. The median age at diagnosis is
two years, and 90% of the diagnosed cases are in children under the age of
five years [776].
In the US, about 650 children and adolescents younger than 20 years of
age are diagnosed with neuroblastoma every year [778]. Neuroblastoma is the
most common cancer of infancy, with an incidence rate that is almost double
that of leukemia, the next most common malignancy occurring during the
first year of life [778]. The rate of incidence of neuroblastoma among infants
in the US increased from 53 per million in the period 1976 – 84 to 74
per million in the period 1986 – 94 [778]. Gurney et al. [779] estimated an
annual rate of increase of 3.4% for extracranial neuroblastoma. Although some
countries have instituted screening programs to detect neuroblastoma in infants,
studies have indicated that the possible benefit of screening on mortality is
small and has not yet been demonstrated in reliable data [780, 781].

FIGURE 9.20
(a) Projection image of a rat heart specimen obtained using synchrotron
radiation. (b) 3D μCT image of the rat heart specimen. Reproduced with
permission from K. Umetani, N. Yagi, Y. Suzuki, Y. Ogasawara, F. Kajiya, T.
Matsumoto, H. Tachibana, M. Goto, T. Yamashita, S. Imai, and Y. Kajihara,
"Observation and analysis of microcirculation using high-spatial-resolution
image detectors and synchrotron radiation", Proceedings SPIE 3977: Medical
Imaging 2000 – Physics of Medical Imaging, pp 522–533, 2000. © SPIE.
Sixty-five percent of the tumors related to neuroblastoma are located in the
abdomen; approximately two-thirds of these arise in the adrenal gland. Fifteen
percent of neuroblastomas are thoracic, usually located in the sympathetic
ganglia of the posterior mediastinum. Ten to twelve percent of neuroblastomas
are disseminated without a known site of origin [782].
Staging and prognosis: The most recent staging system for neuroblastoma
is the International Neuroblastoma Staging System, which takes into
account radiologic findings, surgical resectability, lymph-node involvement,
and bone-marrow involvement [783]. The staging ranges from Stage 1 for a
localized tumor with no lymph-node involvement, to Stage 4 with the disease
spread to distant lymph nodes, bone, liver, and other organs [783].
The main determinant factors for prognosis are the patient's age and the
stage of the disease [776, 782]. The survival rate of patients diagnosed with
neuroblastoma under the age of one year is 74%, whereas it is only 14% for
patients over the age of three years [776]. Whereas the survival rate of patients
diagnosed with Stage 1 neuroblastoma is in the range 95 – 100%, it is only
10 – 30% for patients diagnosed with Stage 4 of the disease [776].
The site of the primary tumor is also said to be of relevance in the overall
prognosis. Tumors arising in the abdomen and pelvis have the worst prognosis,
with adrenal tumors having the highest mortality. Thoracic neuroblastoma
has a better overall survival rate (61%) than abdominal tumors (20%) [782].
Surgical resection of the primary tumor is recommended whenever possible.
However, the primary tumor of a patient with advanced neuroblastoma
(Stages 3 and 4) can be unresectable if there is a risk of damaging vital
structures in the procedure, notably when the mass encases the aorta; see
Figure 9.21. Treatment in these cases requires chemotherapy or radiotherapy
for initial shrinkage of the mass, after which (delayed) surgical resection may
be performed [773].
Radiological analysis: The radiology of neuroblastoma has been studied
extensively over the past three decades, and several review articles concerning
the radiological aspects of the disease have been published [776, 782, 784, 785,
786]. Radiological exams can be useful in the initial diagnosis, assessment of
extension, staging, presurgical evaluation, treatment, and follow-up.
Several imaging modalities have been investigated in the context of neuroblastoma,
including ultrasound, CT, MRI, bone and marrow scintigraphy,
excretory urography, and chest radiography. CT and MRI are regarded as
the best modalities for the evaluation of tumor stage [776], resectability [787],
and prognosis and follow-up [776]. CT and MRI exams are mandatory for the
analysis of the primary tumor, and some investigators have reported that MRI
could be more useful than CT in evaluating metastatic disease [788, 789, 790].
Comparing CT and MRI, the latter has a higher diagnostic accuracy (the
highest among all imaging modalities) and is said to be more suitable for the
demonstration of tumor spread and visualization of the tumor in relation
to neighboring blood vessels and vessels within the tumor [776], which are
important factors in therapy and assessment of resectability. MRI also provides

FIGURE 9.21
CT image of a patient with Stage 3 neuroblastoma, with the mass outline
drawn by a radiologist. The mass encases the aorta, which is the small,
circular object just above the spine. Reproduced with permission from F.J.
Ayres, M.K. Zuffo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M.
Valente, "Estimation of the tissue composition of the tumor mass in
neuroblastoma using segmented CT images", Medical and Biological
Engineering and Computing, 42:366 – 377, 2004. © IFMBE.

better soft-tissue contrast than CT, and is more promising for the estimation
of tissue composition [791, 792]. Another advantageous feature is that MRI
uses nonionizing radiation. Nevertheless, CT is considered to be the modality
of choice in several investigations, because it is comparable in usefulness
to MRI in evaluating local disease [776], and is more cost-effective. CT is
known to be effective in detecting calcifications (a finding in favor of
neuroblastoma [784] and against Wilms tumor), and in the detection and
assessment of the extension of local disease, evaluation of lymph-node
involvement, recurrent disease, and nonskeletal metastasis [782].
On CT exams, abdominal neuroblastoma is seen as a mass of soft tissue,
commonly suprarenal or paravertebral, irregularly shaped, lobulated, extending
from the flank toward the midline, and lacking a capsule. The mass tends
to be inhomogeneous due to tumor necrosis intermixed with viable tumor, and
contains calcifications in 85% of patients. Calcifications are usually dense,
amorphous, and mottled in appearance. Sometimes, neuroblastoma presents
areas of central necrosis, shown as low-attenuation areas, that are more
apparent after contrast enhancement [773, 776, 782]. Figure 9.21 shows a CT
image of a patient with Stage 3 neuroblastoma, with the mass outline drawn
by a radiologist. The mass encases the aorta, which makes it unresectable.
Despite the proven usefulness of imaging techniques in the detection,
delineation, and staging of the primary tumor, there is a need for improvement
in the usage of these techniques for more accurate assessment of the local
disease that could lead to better treatment planning and follow-up. Foglia
et al. [793] argued that the primary tumor status in advanced neuroblastoma
cannot be assessed definitively by diagnostic imaging, due to errors in
sensitivity and specificity as high as 38%, when assessing tumor viability by
imaging methods and comparing it to findings in delayed surgery. They also
reported that CT exams could not differentiate viable tumor from fibrotic
tissue or nonviable tumor destroyed by previous chemotherapy.
Computer-aided image analysis could improve radiological analysis of
neuroblastoma by offering more sophisticated, quantitative, accurate, and
reproducible measures of information in the image data [251, 794, 795].
Nevertheless, few researchers have investigated the potential of CAD of
neuroblastoma using diagnostic imaging; related published works are limited
to tumor volume measurement using manually segmented CT slices
(planimetry) [796, 797]. Ayres et al. [798, 799, 800] proposed methods for
computer-aided quantitative analysis of the primary tumor mass, in patients
with advanced neuroblastoma, using X-ray CT exams. Specifically, they
proposed a methodology for the estimation of the tissue content of the mass
via statistical modeling and analysis of manually segmented CT images. The
results of the method were compared with the results of histological analysis
of surgically resected masses.

9.9.2 Tissue characterization using CT

The linear attenuation coefficient μ of tissue is the physical entity that is
measured in CT (see Equations 1.1 and 1.2). The linear attenuation
coefficient varies with two material properties: density and elemental
composition [758]. The value of μ has been measured and tabulated for several
materials [121, 801], including human and animal tissues, at different X-ray
energies (including measurements with multiple energies, such as dual-energy
imaging [758, 802, 803]). However, it is not common to display CT images
in terms of the linear attenuation coefficient (which is dependent on the
energy used [121, 803]). Instead, normalized CT units that are more convenient
and independent, to a certain extent, of the X-ray energy are used [38]; see
Equation 9.50. Table 9.1 shows the mean and SD of the CT values in HU for
several types of tissue in the abdomen.
Soon after the invention of the CT scanner, several researchers focused their
attention on the problem of estimating the composition of body tissues from
CT images. Alter [804] evaluated the clinical usefulness of the CT scanner,
reported on the appearance on CT images of several organs, tissues, and
diseases, and used their Hounsfield values in tissue characterization. Phelps et
al. [121] and Rao and Gregg [801] described the linear attenuation coefficients
of several tissues, and related them to the ideal CT unit as reported by
Wilson [802] and Brooks [803]. Mategrano et al. [759] presented measures of
the attenuation coefficient for tissues in the abdominal region; see Table 9.1.
A discussion of potential sources of error in the measurement of the
attenuation coefficient is found in the work of Williams et al. [805]. Duerinckx
and Macovski [806] discussed the nature of noise in CT images. Pullan et
al. [807] worked on the characterization of regions in a CT image using
statistical central moments (such as the mean, SD, and skewness) obtained
from histograms of CT values and a gradient measure (taken as a simplified
measure of texture) within delimited regions, with application to brain tumors
and lesions of the spleen and liver. Kramer et al. [808] reported on work
similar to that of Pullan et al. Latchaw et al. [809] opined that CT is
nonspecific in the separation of solid tumors and cystic lesions in the brain.
Intravascular contrast is usually employed in abdominal CT studies. The
contrast agent is injected rapidly into the venous system, with scanning
commencing shortly after completion of the injection. The objective of this
technique is to image the patient while the intravascular concentration of
contrast is at its peak, and before redistribution of contrast into soft tissues
occurs, thus maximizing the density difference between vascular structures and
other body organs, and allowing assessment of regional blood flow. The
intravascular concentration of contrast decreases initially due to dilution in
the blood volume, then by redistribution throughout perfused tissues, and
lastly by renal excretion. The effect of contrast on HU measurements is
dependent on many factors, including: blood volume, body weight, contrast
volume and injection rate, time elapsed since injection, and vascularity of the
structure of interest. Some of the HU values listed in Table 9.1 were estimated
using CT data with contrast.
Some authors have reported relative success in using CT units to
differentiate between tissues that are visually similar in CT images [759, 805,
807, 808], while others have reported failure in the same task [793, 809].
Although the measurement of the linear attenuation coefficient of tissues in
vitro can be performed with good precision [121, 801], several sources of error
can degrade the performance of in vivo CT sensitometry [810], some of which
are motion artifacts, noise, the partial-volume effect, and the spectral spread
of the X-ray energy.

9.9.3 Estimation of tissue composition from CT images

Ayres et al. [798, 799, 800] investigated quantitative analysis of the primary
tumor mass, in patients with advanced neuroblastoma, using CT exams. Their
methodology includes a statistical parametric model for the tissue composition
of the tumor, and a method to estimate the parameters of the model.
Segmentation of the tumor was performed manually by a radiologist. The
histogram of the tumor mass was computed from the segmented regions over
all applicable CT image slices (see Section 2.7). The statistical model employed
is the Gaussian mixture model [700], and the algorithm for parameter
estimation is the EM algorithm [811]; see also Section 8.9.2.
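The accumulation of the tumor histogram over the segmented regions of all slices can be sketched as follows. The data layout (one 2D list of integer HU values per slice, with a matching boolean mask derived from the manual segmentation) and the function name are assumptions for illustration:

```python
from collections import Counter

def tumor_histogram(ct_slices, tumor_masks):
    """Accumulate the histogram of CT values (in HU) over the segmented
    tumor region of all applicable slices.

    ct_slices:   list of 2D lists of integer CT values in HU.
    tumor_masks: matching list of 2D lists of booleans; True marks a voxel
                 inside the radiologist's tumor outline.
    """
    histogram = Counter()
    for ct, mask in zip(ct_slices, tumor_masks):
        for ct_row, mask_row in zip(ct, mask):
            for value, inside in zip(ct_row, mask_row):
                if inside:
                    histogram[value] += 1  # count this tumor voxel
    return histogram
```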
The complete methodology is shown schematically in Figure 9.22. The
upper path shows the sequence of algorithmic procedures that lead to the
estimation of the statistical model. The lower path consists of obtaining
the histological information regarding the tumor from biopsy and delayed
surgery [793].
Estimation of tissue composition: The tumor mass in neuroblastoma
is inhomogeneous, due to intermixed necrosis and viable tumoral tissue, and
sometimes presents central areas of necrosis, shown as low-attenuation regions
inside the mass. Therefore, it is appropriate to develop a global description
of the mass that could lead to the estimation of the fractional volume
corresponding to each tissue type, rather than attempting to separate the mass
into distinct regions.
Assume that the CT value for a voxel that arises from a given tissue inside
the mass (benign mass, necrosis, malignant tumor, fibrotic tissue, etc.) is a
Gaussian random variable. Then, the whole tumor mass may be modeled
statistically as a mixture of Gaussian variables, known as the Gaussian mixture
model [812]. Let $x$ denote the CT attenuation value for a given voxel, and
$\theta_i = (\mu_i, \sigma_i)$ be the set of parameters that describes the Gaussian PDF of the
CT values of the $i$th type of tissue. The PDF for $x$, given that $x$ belongs to
the $i$th tissue type, is $p_i(x|\theta_i)$, and is represented as

$$p_i(x|\theta_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left[ -\frac{(x - \mu_i)^2}{2\sigma_i^2} \right]. \quad (9.52)$$

Let $M$ be the true number of the different types of tissue in the given tumor
mass (assumed to be known for the moment; estimation of the value of $M$ will
be described later). Let $\pi_i$ be the probability that a given voxel came from
the $i$th type of tissue in the tumor mass. By definition, $\sum_{i=1}^{M} \pi_i = 1$. The
value of $\pi_i$ can be seen as the fraction of the tumor volume that is composed
of the $i$th type of tissue. Then, the PDF for the entire mass is a mixture of
Gaussians, specified by the parameter vector $\Theta = (\pi_1, \ldots, \pi_M, \theta_1, \ldots, \theta_M)^T$,
and described by $p(x|\Theta) = \sum_{i=1}^{M} \pi_i\, p_i(x|\theta_i)$.
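The component PDF of Equation 9.52 and the mixture PDF translate directly into code. A minimal sketch (the function names are illustrative; the book provides no code):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Equation 9.52: Gaussian PDF of the CT values of one tissue type."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (math.sqrt(2.0 * math.pi) * sigma)

def mixture_pdf(x, weights, mus, sigmas):
    """p(x|Theta): mixture of M Gaussians; the weights must sum to one."""
    return sum(w * gaussian_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))
```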
Let $N$ be the number of voxels in the tumor mass, and let $\mathbf{x} = (x_1, x_2, \ldots,
x_N)^T$ be a vector composed of the values of the voxels (the observed data).
Under the condition of the observed data, the posterior probability of the
parameters is obtained by Bayes rule [700] (see Section 12.4.1) as

$$p(\Theta|\mathbf{x}) = \frac{p(\Theta)\, p(\mathbf{x}|\Theta)}{p(\mathbf{x})}. \quad (9.53)$$

Here, $p(\mathbf{x})$ is the probability of the data, regardless of the parameters;
because the estimation problem is conditional upon the observed data, $p(\mathbf{x})$ is a
constant term. The term $p(\Theta)$ is the prior probability of the vector of
parameters $\Theta$. The term $p(\mathbf{x}|\Theta)$ is called the likelihood of $\Theta$, denoted by
$L(\Theta|\mathbf{x})$, and given by
FIGURE 9.22
Schematic representation of the proposed method for the analysis of neuroblastoma. EM: Expectation-maximization. Re-
produced with permission from F.J. Ayres, M.K. Zuo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente,
\Estimation of the tissue composition of the tumor mass in neuroblastoma using segmented CT images", Medical and
Biological Engineering and Computing, 42:366 { 377, 2004. c IFMBE.


$$L(\Theta|\mathbf{x}) \triangleq p(\mathbf{x}|\Theta) = \prod_{j=1}^{N} p(x_j|\Theta). \quad (9.54)$$
In the case that there is no prior belief about $\Theta$, that is, nothing is known
about its prior probability, the situation is known as a flat prior [812]. In
this case, Equation 9.53 becomes $p(\Theta|\mathbf{x}) = c\, p(\mathbf{x}|\Theta)$, where $c$ is a normalizing
constant. Thus, finding the most probable value of $\Theta$ given the data, without
any prior knowledge about the PDF of the parameters, is the same as finding
the value of $\Theta$ that maximizes the likelihood, or the log-likelihood defined as
$\log[L(\Theta|\mathbf{x})]$: this is the maximum-likelihood (ML) principle [700, 812]. The
adoption of this principle leads to simplified calculations with reasonable
results [813]. Fully Bayesian approaches for classification and parameter
estimation can provide better performance, at the expense of greater
computational requirements and increased complexity of implementation [814, 815].
In order to maximize the likelihood, Ayres et al. used the EM
algorithm [700, 811, 812, 813, 816]. The EM algorithm is an iterative procedure
that starts with an initial guess $\Theta^g$ of the parameters, and iteratively improves
the estimate toward the local maximum of the likelihood. The generic EM
algorithm is comprised of two steps: the expectation step (or E-step) and the
maximization step (or M-step). In the E-step, one computes the parametric
probability model given the current estimate of the parameter vector. In the
M-step, one finds the parameter vector that maximizes the newly calculated
model, which is then treated as the new best estimate of the parameters. The
iterative procedure continues until some stopping condition is met; for
example, the difference $\log[L(\Theta_{n+1}|\mathbf{x})] - \log[L(\Theta_n|\mathbf{x})]$ or the modulus $|\Theta_{n+1} - \Theta_n|$
of the difference vector between successive iterations $n$ and $n+1$ is smaller
than a predefined value.
For each tissue type $i$, let $p(i|x_j, \Theta)$ represent the probability that the $j$th
voxel, with the value $x_j$, belongs to the $i$th tissue type. This can be calculated
using Bayes rule as

$$p(i|x_j, \Theta) = \frac{p(i|\Theta)\, p(x_j|i, \Theta)}{p(x_j|\Theta)} = \frac{\pi_i\, p_i(x_j|\theta_i)}{p(x_j|\Theta)}. \quad (9.55)$$
The derivation of the EM algorithm for the Gaussian mixture model leads
to a set of iterative equations that perform the E-step and the M-step
simultaneously. For the $i$th tissue type, the update equations are:

$$\pi_i^{new} = \frac{1}{N} \sum_{j=1}^{N} p(i|x_j, \Theta^{old}), \quad (9.56)$$

$$\mu_i^{new} = \frac{\sum_{j=1}^{N} x_j\, p(i|x_j, \Theta^{old})}{\sum_{j=1}^{N} p(i|x_j, \Theta^{old})}, \quad (9.57)$$

$$\sigma_i^{new} = \sqrt{\frac{\sum_{j=1}^{N} \left(x_j - \mu_i^{new}\right)^2 p(i|x_j, \Theta^{old})}{\sum_{j=1}^{N} p(i|x_j, \Theta^{old})}}. \quad (9.58)$$
In order to estimate the value of $M$, that is, the number of types of tissue in
the mass, one cannot model $M$ as a random variable and directly apply the ML
principle, because the maximum likelihood of $\Theta$ is a nondecreasing function
of $M$ [811]. The estimated value of $M$ should be the value that minimizes
a cost function that penalizes higher values of $M$. The common choice for
such a cost function is one that follows the MDL criterion [811]; however,
other criteria exist to find the value of $M$ [811]. Ferrari et al. [375, 381, 817]
successfully used the MDL criterion to find the number of Gaussian kernels in
a Gaussian mixture model, in the context of detecting the fibroglandular disc
in mammograms; see Section 8.9.2. However, Ayres et al. found that it is
not appropriate to use the MDL criterion in the application to neuroblastoma,
because the Gaussian kernels to be identified overlap significantly in the HU
domain.
Finite mixture models are regarded as powerful tools in unsupervised
classification tasks [811]. Gaussian mixture models are the most common type
of mixture models [811], and the EM algorithm is the common method of
estimation of the parameters in a Gaussian mixture model [811, 818]. Mixture
models have been employed with success in image processing for unsupervised
classification [811, 818], automatic segmentation of brain MR images [818] and
mammograms [375, 381, 817], automatic target recognition [814], correction
of intensity nonuniformity in MRI [813], tissue characterization [813, 819],
and partial-volume segmentation in MRI [819].
Jain et al. [811] point out that current problems and research topics in using
the EM algorithm are: dealing with its local nature, which causes the
algorithm to be critically dependent on the initial value of $\Theta$; and the
unbounded nature of the parameters (because, in the ML principle, no prior
probability is assigned to $\Theta$), which could cause $\Theta$ to converge to undesired
points in the feature space, such as having $\pi_i$ and $\sigma_i$ approach zero
simultaneously for the $i$th Gaussian kernel. Although the latter problem was not
encountered by Ayres et al., they did face the former problem of finding a good
initial estimate of $\Theta$.
Parameter selection and initialization: The tumor bulk in neuroblastoma
commonly contains up to three different tissue components: low-attenuation
necrotic tissue, intermediate-attenuation viable tumor, and high-attenuation
calcified tissue. The relative quantity of each of these tissue types
varies from tumor to tumor. Although the typical mean HU and standard-deviation
values of these types of tissue are known (as shown in Table 9.1), the
statistics of the tissue types could vary from one imaging system to another,
depend upon the imaging protocol (including the use of contrast agents), and
be influenced by the partial-volume effect. It should also be noted that the
ranges of HU values of necrotic tissue, viable tumor, and several abdominal
organs overlap. For these reasons, it would be inappropriate to use fixed
bands of HU values to analyze the density distribution of a given tumor mass.
The same reasons make it inappropriate to use fixed initial values for the EM
algorithm. In the work of Ayres et al., the EM algorithm was initialized with
three mean values ($M = 3$) computed as the mean of the histogram of the
tumor, and the mean ± one-half of the standard deviation of the histogram.
The variance of all three Gaussians was initialized to the variance of the
histogram of the tumor.
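The data-driven initialization described above can be sketched as follows. The function name is illustrative, the histogram is taken as a mapping from HU value to voxel count, and the equal initial weights are an assumption, since the text does not specify how the weights were initialized:

```python
import math

def initialize_gmm(histogram):
    """Initial EM parameters following the strategy described above:
    M = 3 components with means at the histogram mean and the mean
    +/- one-half SD, and all variances equal to the histogram variance.

    histogram: dict mapping CT value (HU) -> voxel count.
    """
    n = sum(histogram.values())
    mean = sum(v * c for v, c in histogram.items()) / n
    var = sum(c * (v - mean) ** 2 for v, c in histogram.items()) / n
    sd = math.sqrt(var)
    weights = [1.0 / 3.0] * 3     # equal weights: an assumption
    mus = [mean - 0.5 * sd, mean, mean + 0.5 * sd]
    sigmas = [sd] * 3
    return weights, mus, sigmas
```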

9.9.4 Results of application to clinical cases


Ayres et al. analyzed ten CT exams of four patients with Stage 3
neuroblastoma from the Alberta Children's Hospital, Calgary, Alberta, Canada.
Tumor outlines were manually drawn on the images by a radiologist. Each
patient had had an initial CT scan to assess the state of the disease prior to
chemotherapy. Two patients had follow-up CT exams during treatment. All
patients had a presurgical CT exam. After surgical resection, the tumor masses
were analyzed by a pathologist. The following paragraphs describe the results
obtained with two of the cases.
Case 1: The two-year-old male patient had an initial diagnostic CT scan
in April 2001 [labeled as Exam 1a; see Figure 9.23 (a)]. The patient had a
follow-up CT scan in June 2001 [labeled as Exam 1b; see Figure 9.23 (b)],
and a presurgical CT scan in September 2001 [labeled as Exam 1c; see
Figure 9.23 (c)]. Surgical resection of the tumor was performed in September
2001. Pathologic analysis showed extensive necrosis and dystrophic
calcification.
Figure 9.24 shows the results of decomposition of the histogram of Exam 1a
with different numbers of Gaussian components. (Note: Although only one
CT slice is shown for each exam in Figure 9.23, all applicable slices of each
exam were processed to obtain the corresponding histograms.) The results
of estimation of the tissue composition for all exams of Case 1, assuming the
existence of three tissue types, are shown in Figure 9.25 and Figure 9.26, along
with the tumor volume in each CT scan in the latter figure.
The initial diagnostic scan of the patient [Exam 1a; see Figure 9.23 (a)]
showed a large mass with several components. Radiological analysis indicated
the existence of a calcified mass with a size of about 4.5 × 4.4 × 5.9 cm,
located in the right suprarenal region. The predominant components in this
case are low-density necrotic tissue, intermediate-density tumor, and high-density
areas of calcification, probably representing dystrophic calcification
in necrotic tumor. These three components are well demonstrated in the
histogram corresponding to Exam 1a in Figure 9.26.
Exam 1b [see Figure 9.23 (b)] represents an intermediate scan performed
part way through the presurgical chemotherapy regimen. The scan
demonstrated an overall decrease in tumor volume together with an increasing
amount of calcification. The corresponding histogram in Figure 9.26 is of
interest in that it indicates a disproportionate increase in the intermediate
values. Observe, however, that the mean CT value for the central component
is significantly higher than that for the initial diagnostic scan. This probably
represents areas of early faint calcification within necrotic tissue; this
component has likely been emphasized by partial-volume averaging, which has
resulted in a higher value for the intermediate density.
Exam 1c [Figure 9.23 (c)] shows a smaller, but largely and densely calcified
tumor, with very little remaining of the lower-density component. The
corresponding histogram in Figure 9.26 correlates with this increasing overall
density. Observe that the mean density of all three components is now high,
with the emphasis particularly on the calcification. This suggests that
previous necrotic tumor has progressed to dystrophic calcification, with little
in the way of potentially viable residual tumor.
Case 2: The two-year-old female patient had the initial diagnostic CT
scan in March 2000 [labeled Exam 2a; see Figure 9.27 (a)]. The patient had
the presurgical CT scan in July 2000 [labeled Exam 2b; see Figure 9.27 (b)].
Pathologic analysis of the resected mass indicated residual tumor consistent
with differentiating neuroblastoma. Sections from the tumor showed extensive
necrosis (consistent with previous chemotherapy), and fibrosis. The results of
estimation of the tissue composition, assuming the existence of three tissue
types, are shown in Figure 9.28, along with the tumor volume in each CT
scan.
The initial diagnostic images of this patient demonstrated a large mass,
predominantly of soft-tissue (viable tumor) composition. There were significant
areas of lower-attenuation necrotic tissue, but very little calcification.
Radiological analysis of the presurgical scan indicated the existence of a mixed-density
mass in the left adrenal region, with a size of about 5 × 5 × 3.6 cm,
showing peripheral calcification and central low density. The histogram for
Exam 2a in Figure 9.28 correlates well with these findings.
Exam 2b [Figure 9.27 (b)] shows a post-chemotherapy, presurgical CT scan
of the patient. This scan demonstrated a significant overall decrease in tumor
volume. However, the composition had changed relatively little. The tumor
was still composed largely of soft-tissue, low-density material with significant
areas of necrosis and relatively little calcification. The histogram of Exam 2b
in Figure 9.28 shows a similar composition, although there is considerable
overlap between the components; observe that the mean densities of the
components differ little. The lack of progression to calcification suggests that
there is still considerable viable tumor remaining, with less evidence of
necrosis and subsequent dystrophic calcification. These findings were confirmed
by pathologic analysis, which showed residual viable tumor.

9.9.5 Discussion
With treatment, all four cases in the study of Ayres et al. demonstrated
a significant response with an overall reduction in tumor bulk. Frequently,
the tumor undergoes necrosis, seen as an increase in the relative

FIGURE 9.23
(a) Initial diagnostic CT image of Case 1, Exam 1a (April 2001). (b)
Intermediate follow-up CT image, Exam 1b (June 2001). (c) Presurgical CT
image, Exam 1c (September 2001). The contours of the tumor mass drawn by
a radiologist are also shown. Reproduced with permission from F.J. Ayres,
M.K. Zuffo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente,
"Estimation of the tissue composition of the tumor mass in neuroblastoma
using segmented CT images", Medical and Biological Engineering and
Computing, 42:366 – 377, 2004. © IFMBE.

FIGURE 9.24
Results of decomposition of the histogram of Exam 1a. Plots (a), (b), and
(c) show two, three, and four estimated Gaussian kernels (thin lines),
respectively, and the original histogram (thick line) for comparison. The sum
of the Gaussian components is indicated in each case by the dotted curve;
however, this curve is not clearly visible in (c) because it overlaps the original
histogram. Figure courtesy of F.J. Ayres.


FIGURE 9.25
Results of decomposition of the histograms of the three CT exams of Case 1
(Figure 9.23) with three estimated Gaussian kernels (thin lines) for each
histogram. The original histograms (thick lines) are also shown for comparison.
In each case, the sum of the Gaussian components is indicated by the dotted
curve; however, this curve may not be clearly visible due to close matching
with the original histogram. Reproduced with permission from F.J. Ayres,
M.K. Zuffo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente,
"Estimation of the tissue composition of the tumor mass in neuroblastoma
using segmented CT images", Medical and Biological Engineering and
Computing, 42:366 – 377, 2004. © IFMBE.

[Figure 9.26 is a bar chart: weight of the three Gaussians (left axis, 0 to 1)
and tumor volume in cm³ (right axis, 0 to 600) for each exam of Case 1, with
the estimated mean (m) and SD (s) of each Gaussian component in HU:
Exam 1a (Apr 2001): m = 64, 94, 134; s = 12, 24, 57.
Exam 1b (Jun 2001): m = 94, 175, 323; s = 25, 60, 146.
Exam 1c (Sep 2001): m = 114, 215, 398; s = 35, 71, 139.]
FIGURE 9.26
Results of estimation of the tumor volume and tissue composition of each CT exam of Case 1. Reproduced with permission from F.J. Ayres, M.K. Zuo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente, "Estimation of the tissue composition of the tumor mass in neuroblastoma using segmented CT images", Medical and Biological Engineering and Computing, 42:366-377, 2004. © IFMBE.
[Figure 9.27: two CT image panels, (a) and (b).]
FIGURE 9.27
(a) Initial diagnostic CT image of Case 2, Exam 2a (March 2000). (b) Presurgical CT image, Exam 2b (July 2000). The contours of the tumor mass drawn by a radiologist are also shown. Reproduced with permission from F.J. Ayres, M.K. Zuo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente, "Estimation of the tissue composition of the tumor mass in neuroblastoma using segmented CT images", Medical and Biological Engineering and Computing, 42:366-377, 2004. © IFMBE.
[Figure 9.28: bar chart; left axis: Weight of Gaussians (0 to 1); right axis: Tumor volume (cm³) (0 to 1000). Means (m) and standard deviations (s) of the three Gaussians, in HU: Exam 2a (Mar 2000): m = 54, 62, 66; s = 16, 8, 54. Exam 2b (Jul 2000): m = 59, 64, 72; s = 19, 9, 36.]

FIGURE 9.28
Results of estimation of the tumor volume and tissue composition of each CT exam of Case 2. Reproduced with permission from F.J. Ayres, M.K. Zuo, R.M. Rangayyan, G.S. Boag, V. Odone Filho, and M. Valente, "Estimation of the tissue composition of the tumor mass in neuroblastoma using segmented CT images", Medical and Biological Engineering and Computing, 42:366-377, 2004. © IFMBE.
volume of tissue with lower attenuation values. The necrotic tissue may subsequently undergo calcification, and therefore, ultimately result in an increase in the high-attenuation calcified component. One may hypothesize, therefore, that a progression in the pattern of the histograms from predominantly intermediate-density tissues to predominantly low-attenuation necrotic tissue and ultimately to predominantly high-attenuation calcified tissue represents a good response to therapy, with the tumor progressing through necrosis to ultimate dystrophic calcification. On the contrary, the absence of this progression from necrosis to calcification, and the persistence of significant proportions of intermediate-attenuation soft tissue, may be a predictor of residual viable tumor. As such, the technique proposed by Ayres et al. may be of considerable value in assessing response to therapy in patients with neuroblastoma.
Objective demonstration of the progression of a tumor through various stages, as described above, requires the use of Gaussians of variable mean values. In order to allow for the three possible tissue types mentioned above, it is necessary to allow the use of at least three Gaussians in the mixture model. However, when a tumor lacks a certain type of tissue, two (or more) of the Gaussians derived could possibly be associated with the same tissue type. This is evident, for example, in Exam 1c (Figure 9.26), where the two Gaussians with mean values of 215 and 398 HU correspond to calcified tissue. (Varying degrees of calcification of tissues and the partial-volume effect could have contributed to a wide range of HU values for calcified tissue in Exam 1c.) Furthermore, the results for Exam 1c indicate the clear absence of viable tumor, and those for Exam 2a the clear absence of calcification. It may be desirable to apply some heuristics to combine similar Gaussians (of comparable mean and variance).
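The heuristic suggested above for combining similar Gaussians can be sketched as follows; the tolerance values and the moment-preserving merge formula are illustrative assumptions, not part of the method of Ayres et al.

```python
import numpy as np

def merge_similar_gaussians(components, mean_tol=50.0, std_ratio_tol=2.0):
    """Greedily merge mixture components (weight, mean, std), given in HU,
    that have comparable means and standard deviations.  The tolerance
    values are illustrative assumptions."""
    comps = sorted(components, key=lambda c: c[1])   # sort by mean value
    merged = [comps[0]]
    for w, m, s in comps[1:]:
        w0, m0, s0 = merged[-1]
        if abs(m - m0) < mean_tol and max(s, s0) / min(s, s0) < std_ratio_tol:
            # Moment-preserving merge of the two weighted Gaussians.
            wt = w0 + w
            mt = (w0 * m0 + w * m) / wt
            vt = (w0 * (s0**2 + m0**2) + w * (s**2 + m**2)) / wt - mt**2
            merged[-1] = (wt, mt, float(np.sqrt(vt)))
        else:
            merged.append((w, m, s))
    return merged

# Two nearby soft-tissue components merge; the high-attenuation one stays apart.
print(merge_similar_gaussians([(0.5, 60, 15), (0.3, 80, 20), (0.2, 300, 100)]))
```

With these example components, the first two (means 60 and 80 HU, similar spreads) are combined into a single component, while the 300 HU component is kept separate.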
Although some initial work in tissue characterization of this type was performed using CT, many investigators have shifted their interest away from CT toward MRI for the purpose of tissue characterization. Although MRI shows more long-term promise in this field due to its inherently superior definition of soft tissues, the CT technique may still be of considerable value. Specifically, MRI scanners remain an expensive and difficult-to-access specialty in many areas, whereas CT scanners have become much more economical and widespread. With regard to the clinical problem of neuroblastoma presenting in young children, the current standards of medical care for such patients include assessment by CT in almost all cases. On the other hand, MRI is used only in a minority of cases, due to the lower level of accessibility, the need for anesthesia or sedation in young children, expense, and difficulties with artifact due to bowel peristalsis. As such, CT methods for tissue characterization and assessment of tumor bulk, tissue composition, and response to therapy may be of considerable value in neuroblastoma.
It is clear from the study of Ayres et al., as well as past clinical experience, that the CT number by itself is not sufficient to define tumor versus normal tissues. Tumor definition and diagnosis require an analysis of the spatial distribution of the various CT densities coupled with a knowledge of normal anatomy. Some work has been conducted in attempts to define automatically the boundaries of normal anatomical structures, and subsequently identify focal or diffuse abnormalities within those organs [820]. Ayres et al. made no attempt to automatically define normal versus abnormal structures, but rather attempted an analysis of the tissues in a manually identified abnormality. However, this process may ultimately prove of value for the analysis of abnormalities identified automatically by future image analysis techniques [365, 366].

9.10 Remarks
The Radon transform offers a method to convert a 2D image to a series of 1D functions (projections). This facilitates improved or convenient implementation of some image processing tasks in the Radon domain in 1D instead of in the original 2D image plane; some examples of this approach include edge detection [821] and the removal of repeated versions of a basic pattern [444, 505] (see Section 10.3).
The 1980s and 1990s brought out many new developments in CT imaging. Continuing development of versatile imaging equipment and image processing algorithms has been opening up newer applications of CT imaging. 3D imaging of moving organs such as the heart is now feasible. 3D display systems and algorithms have been developed to provide new and intriguing displays of the interior of the human body. 3D images obtained by CT are being used in planning surgery and radiation therapy, thereby creating the new fields of image-guided surgery and treatment. The practical realization of portable scanners has also made possible field applications in agricultural sciences and other biological applications. CT is a truly revolutionary investigative imaging technique: a remarkable synthesis of many scientific principles.

9.11 Study Questions and Problems


Selected data files related to some of the problems and exercises are available at the site www.enel.ucalgary.ca/People/Ranga/enel697

1. A 2 × 2 image has the pixel values
\[ \begin{bmatrix} 7 & 2 \\ 4 & 0 \end{bmatrix}. \tag{9.59} \]
Compute parallel-ray projections of the image at 0° and 90°. Compute a reconstruction of the image using the simple backprojection method.
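The projections and the simple backprojection of Problem 1 can be sketched numerically as follows. The ray orientation and the normalization by the number of views are assumed conventions; the intended convention in the problem may differ.

```python
import numpy as np

f = np.array([[7.0, 2.0],
              [4.0, 0.0]])

# Parallel-ray projections: at 0 degrees the rays are taken along columns
# (column sums), and at 90 degrees along rows (row sums).
p0 = f.sum(axis=0)    # array([11.,  2.])
p90 = f.sum(axis=1)   # array([9., 4.])

# Simple backprojection: smear each projection back along its rays,
# then divide by the number of projections.
back = (p0[np.newaxis, :] + p90[:, np.newaxis]) / 2.0
print(back)
# [[10.   5.5]
#  [ 7.5  3. ]]
```

The reconstruction reproduces the general intensity pattern of the image but not the exact pixel values, which is the expected behavior of simple backprojection with only two views.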
2. State and explain the Fourier slice theorem. Given the notations f(x, y) for a function in the image domain, p(t, θ) for a function in the projection or Radon domain, and F(u, v) as well as P(w, θ) for functions in the frequency or Fourier domain, explain the relationships between these functions.

With reference to the notations provided above, what do the variables x, y, t, θ, u, v, and w stand for? What are their units?
3. A researcher has obtained parallel-ray projections of an image at the angles 30°, 50°, 70°, 90°, 110°, 130°, and 150°. The only algorithm available for reconstruction of the image is the Fourier method.

Draw a schematic representation of the information available in the Fourier domain. Propose methods to help the researcher obtain the best possible reconstruction of the image.

Under what conditions can a perfect reconstruction be obtained?
4. Give a step-by-step description of the Fourier method for reconstructing an
image from its projections. Explain the limitations of the method.
5. A 2 × 2 image has the pixel values
\[ \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}. \tag{9.60} \]

Compute parallel-ray projections of the image at 0° and 90°. Starting with an initial estimate with all pixels equal to unity, compute reconstructions of the image over one iteration of (a) additive ART, and (b) multiplicative ART.
6. One of the properties of ART is that if the hyperplanes of all the given ray
sums are mutually orthogonal, we may start with any initial guess and reach
the solution in only one cycle (or iteration) of projections (or corrections).
Prepare a set of two simultaneous equations in two unknowns such that the
corresponding straight lines in the 2D plane are mutually orthogonal. Show
graphically that, starting from any initial guess, the solution may be reached
in just one iteration (two projections).

9.12 Laboratory Exercises and Projects


1. Create a numerical phantom image by placing circles, ellipses, rectangles, and triangles of different intensity values within an ellipse. Compute parallel-ray projections of the image at a few different angles.

Compute reconstructions of the image using the simple backprojection and the filtered backprojection methods using various numbers of projections with different angular sampling and coverage. Compare the quality of the results obtained.
2. Repeat the preceding exercise with additive and multiplicative ART.
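For Exercise 2, one or more cycles of additive ART (the Kaczmarz update) over row-sum and column-sum ray equations can be sketched as follows; the 2 × 2 image, the unit relaxation factor, and the ray ordering are illustrative choices.

```python
import numpy as np

def additive_art(A, b, x0, n_cycles=1):
    """Additive ART: for each ray-sum equation b[i] = A[i] . x, correct the
    current estimate along the direction of the ray A[i]."""
    x = x0.astype(float).copy()
    for _ in range(n_cycles):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# 2 x 2 image flattened as [f11, f12, f21, f22]; the rays are the two
# row sums and the two column sums.
f = np.array([2.0, 3.0, 4.0, 5.0])
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
b = A @ f

x = additive_art(A, b, np.ones(4), n_cycles=200)
print(np.allclose(A @ x, b))   # True: all ray sums are satisfied
```

Because the four ray sums are not mutually orthogonal, several cycles are needed; with mutually orthogonal hyperplanes (as in Problem 6), a single cycle would suffice.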
10
Deconvolution, Deblurring, and Restoration

Image enhancement techniques are typically designed to yield "better looking" images satisfying some subjective criteria. In comparison with the given image, the processed image may not be closer to the true image in any sense. On the other hand, image restoration [8, 9, 11, 589, 822, 823, 824, 825, 826, 827, 828] is defined as image quality improvement under objective evaluation criteria to find the best possible estimate of the original unknown image from the given degraded image. The commonly used criteria are LMS, MMSE, and distance measures of several types. Additional constraints based upon prior and independent knowledge about the original image may also be imposed to limit the scope of the solution. Image restoration may then be posed as a constrained optimization problem.

Image restoration requires precise information about the degrading phenomenon, and analysis of the system that produced the degraded image. Typical items of information required are estimates or models of the impulse response of the degrading filter (the PSF, or equivalently, the MTF); the PSD (or ACF) of the original image; and the PSD of the noise. If the degrading system is shift-variant, then a model of the variation of its impulse response across the field of imaging would be required. The success of a procedure for image restoration depends upon the accuracy of the model of degradation used, and the accuracy of the functions used to represent the image degrading phenomena. In this chapter, we shall explore several techniques for image restoration under varying conditions of degradation, available information, and optimization.

10.1 Linear Space-invariant Restoration Filters


Assuming the degrading phenomenon to be linear and shift-invariant, the
simplest model of image degradation is

g(x y) = h(x y) f (x y) + (x y)


G(u v) = H (u v) F (u v) + (u v) (10.1)

where f is the original image, h is the impulse response of the degrading LSI system, g is the observed (degraded) image, and η is additive random noise that is statistically independent of the image-generating process. The functions represented by upper-case letters represent the Fourier transforms of the image-domain functions represented by the corresponding lower-case letters. A block diagram of the image degradation system as above is given in Figure 10.1.
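The degradation model of Equation 10.1 can be simulated as follows; the test image, the PSF width, and the noise level are arbitrary choices, and the Gaussian MTF is built directly in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
f = np.zeros((N, N))
f[24:40, 24:40] = 1.0   # simple test image: a bright square

# MTF of an isotropic Gaussian PSF with standard deviation sigma pixels.
sigma = 2.0
u = np.fft.fftfreq(N)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-2.0 * np.pi**2 * sigma**2 * (U**2 + V**2))

# g = h * f + eta, with circular convolution implemented via the FFT.
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
g = blurred + rng.normal(scale=0.05, size=(N, N))
```

Because H(0, 0) = 1, the blur preserves the mean of the image while attenuating its high-frequency content.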

[Figure 10.1 diagram: the original image f passes through a linear shift-invariant system h; noise η is added to produce the degraded image g.]
FIGURE 10.1
Image degradation model involving an LSI system and additive noise.

The image restoration problem is defined as follows: Given g and some knowledge of h, f, and η, find the best possible estimate of f. When the degrading phenomenon can be represented by an LSI system, it is possible to design LSI filters to restore the image, within certain limits. A few well-known LSI filters for image restoration are described in the following subsections.

10.1.1 Inverse filtering

Let us consider the degradation model expressed in matrix form (see Section 3.5) as
\[ g = h\, f, \tag{10.2} \]
with no noise being present. The restoration problem may be stated as follows: Given g and h, estimate f.

In order to develop a mathematical statement of the problem, let us consider an approximation \(\tilde{f}\) to f. In the least-squares approach [9], the criterion for obtaining the optimal solution is stated as follows: Minimize the squared error between the observed response g, and the response \(\tilde{g}\) had the input been \(\tilde{f}\). The error between g and \(\tilde{g}\) is given by
\[ \varepsilon = g - \tilde{g} = g - h\, \tilde{f}. \tag{10.3} \]
The squared error is given as
\[ \varepsilon^2 = \varepsilon^T \varepsilon = (g - h\tilde{f})^T (g - h\tilde{f}) = g^T g - \tilde{f}^T h^T g - g^T h \tilde{f} + \tilde{f}^T h^T h \tilde{f}. \tag{10.4} \]
Now, we can state the image restoration problem as an optimization problem: Find \(\tilde{f}\) that minimizes \(\varepsilon^2\). Taking the derivative of the squared error \(\varepsilon^2\) in Equation 10.4 with respect to \(\tilde{f}\), we get
\[ \frac{\partial \varepsilon^2}{\partial \tilde{f}} = -2\, h^T g + 2\, h^T h\, \tilde{f}. \tag{10.5} \]
Setting this expression to zero, we get
\[ \tilde{f} = (h^T h)^{-1} h^T g. \tag{10.6} \]
This is the least-squares or pseudo-inverse solution. If h is square and nonsingular, we get
\[ \tilde{f} = h^{-1} g. \tag{10.7} \]
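The least-squares solution of Equation 10.6 can be checked on a small, noise-free system; the matrix sizes and random entries here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.random((6, 4))   # small, full-column-rank "blur" matrix
f = rng.random(4)        # original image vector
g = h @ f                # noise-free observation

# Least-squares (pseudo-inverse) estimate, Equation 10.6.
f_tilde = np.linalg.inv(h.T @ h) @ h.T @ g
print(np.allclose(f_tilde, f))   # True in the noise-free, full-rank case
```

In the noise-free case with h of full column rank, the estimate recovers f exactly; the practical difficulties discussed next arise when h is singular or noise is present.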
If h is circulant or block-circulant, we have \(h^{-1} = W\, D_h^{-1}\, W^{-1}\) (see Section 3.5.5). Then,
\[ \tilde{f} = W\, D_h^{-1}\, W^{-1}\, g, \tag{10.8} \]
which leads to
\[ \tilde{F}(u, v) = \frac{G(u, v)}{H(u, v)}. \tag{10.9} \]
This operation represents the inverse filter, which may be expressed as
\[ L_I(u, v) = \frac{1}{H(u, v)}. \tag{10.10} \]

It is evident that the inverse filter requires knowledge of the MTF of the degradation process; see Sections 2.9, 2.12, and 10.1.6 for discussions on methods to derive this information.

The major drawback of the inverse filter is that it fails if H(u, v) has zeros, or if h is singular. Furthermore, if noise is present (as in Equation 10.1), we get
\[ \tilde{F}(u, v) = F(u, v) + \frac{\eta(u, v)}{H(u, v)}. \tag{10.11} \]
Problems arise because H(u, v) is usually a lowpass function, whereas η(u, v) is uniformly distributed over the entire spectrum; then, the amplified noise at higher frequencies (the second component in the equation above) overshadows the restored image.

An approach to address the singularity problem associated with the inverse filter is the use of the singular value decomposition (SVD) method [825]. A widely used implementation of this approach is an iterative algorithm based on Bialy's theorem to solve the normal equation [829]; the algorithm is also known as the Landweber iterative method [830]. McGlamery [831] demonstrated the application of the inverse filter to restore images blurred by atmospheric turbulence.
In an interesting extension of the inverse filter to compensate for distortions or aberrations caused by abnormalities in the human eye, Alonso and Barreto [832] applied a predistortion or precompensation inverse filter to test images prior to being displayed on a computer monitor. The PSF of the affected eye was estimated using the wavefront aberration function measured using a wavefront analyzer. In order to overcome the limitations of the inverse filter, a weighting function similar to the parametric Wiener filter (see Section 10.1.3) was applied. The subjects participating in the study reported better visual acuity when reading predistorted images of test-chart letters than when reading directly displayed test images.
Example: The original "Shapes" test image (of size 128 × 128 pixels) is shown in Figure 10.2 (a), along with its log-magnitude spectrum in part (b) of the figure. The image was blurred via convolution with an isotropic Gaussian PSF having a radial standard deviation of two pixels. The PSF and the related MTF are shown in parts (c) and (d) of the figure, respectively. The blurred image is shown in part (e) of the figure. Gaussian-distributed noise of variance 0.01 was added to the blurred image after normalizing the image to the range [0, 1]; the degraded, noisy image is shown in part (f) of the figure.

The results of application of the inverse filter to the noise-free and noisy blurred versions of the "Shapes" image are shown in Figure 10.3 in both the space and frequency domains. The result of inverse filtering of the noise-free blurred image for radial frequencies up to the maximum frequency in (u, v), shown in part (a) of the figure, demonstrates effective deblurring. A small amount of ringing artifact may be observed upon close inspection, due to the removal of frequency components beyond a circular region [see the spectrum in part (b) of the figure]. Inverse filtering of the noisy degraded image, even when limited to radial frequencies less than 0.4 times the maximum frequency in (u, v), resulted in significant amplification of noise that led to the complete loss of the restored image information, as shown in part (c) of the figure. Limiting the inverse filter to radial frequencies less than 0.2 times the maximum frequency in (u, v) prevented noise amplification, but also severely curtailed the restoration process, as shown by the result in part (e) of the figure. The results illustrate a severe practical limitation of the inverse filter. (See also Figures 10.5 and 10.6.)
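A band-limited inverse filter of the kind used in this example can be sketched as follows; the test image and blur are stand-ins for the "Shapes" data, and taking the maximum frequency along the axes (0.5 cycles/pixel) is one of several possible conventions.

```python
import numpy as np

def inverse_filter(g, H, cutoff=1.0):
    """Inverse filter G/H, retained only for radial frequencies up to
    `cutoff` times the maximum (axis) frequency; the rest are zeroed."""
    N = g.shape[0]
    u = np.fft.fftfreq(N)
    U, V = np.meshgrid(u, u, indexing="ij")
    rho = np.sqrt(U**2 + V**2)
    mask = rho <= cutoff * np.abs(u).max()
    F_est = np.where(mask, np.fft.fft2(g) / H, 0.0)
    return np.real(np.fft.ifft2(F_est))

N = 64
f = np.zeros((N, N))
f[24:40, 24:40] = 1.0
u = np.fft.fftfreq(N)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-2.0 * np.pi**2 * 1.0 * (U**2 + V**2))   # Gaussian MTF, sigma = 1
g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))       # noise-free blurred image

restored = inverse_filter(g, H, cutoff=1.0)
print(np.linalg.norm(restored - f) < np.linalg.norm(g - f))   # True
```

In this noise-free case the filter deblurs effectively; repeating the run with noise added to g reproduces the severe noise amplification described in the text.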

10.1.2 Power spectrum equalization


Considering the degradation model in Equation 10.1, the method of power
spectrum equalization (PSE) 825] takes the following approach: Find a linear
transform L so as to obtain an estimate f~(x y) = Lg(x y)], subject to the
constraint f~(u v) = f (u v), that is, the PSD of the restored image be
equal to the PSD of the original image. Applying the linear transform L to
[Figure 10.2: six image panels, (a) through (f).]
FIGURE 10.2
(a) "Shapes" test image; size 128 × 128 pixels. (b) Log-magnitude spectrum of the test image. (c) PSF with Gaussian shape; radial standard deviation = 2 pixels. (d) MTF related to the PSF in (c). (e) Test image blurred with the PSF in (c). (f) Blurred image in (e) after normalization to [0, 1] and the addition of Gaussian noise with variance = 0.01.
[Figure 10.3: six image panels, (a) through (f).]
FIGURE 10.3
(a) Result of inverse filtering the blurred "Shapes" image in Figure 10.2 (e). Result of inverse filtering the noisy blurred "Shapes" image in Figure 10.2 (f) using the inverse filter up to the radial frequency equal to (c) 0.4 times and (e) 0.2 times the maximum frequency in (u, v). The log-magnitude spectra of the images in (a), (c), and (e) are shown in (b), (d), and (f), respectively.
Equation 10.1, as well as the constraint mentioned above, to the result, we get
\[ \Phi_{\tilde{f}}(u, v) = |L(u, v)|^2 \left[ |H(u, v)|^2\, \Phi_f(u, v) + \Phi_\eta(u, v) \right] = \Phi_f(u, v), \tag{10.12} \]
where L(u, v) represents the MTF of the filter L. Rearranging the expression above, we get
\[ L_{PSE}(u, v) = |L(u, v)| = \left[ \frac{\Phi_f(u, v)}{|H(u, v)|^2\, \Phi_f(u, v) + \Phi_\eta(u, v)} \right]^{\frac{1}{2}} \tag{10.13} \]
\[ = \left[ \frac{1}{|H(u, v)|^2 + \frac{\Phi_\eta(u, v)}{\Phi_f(u, v)}} \right]^{\frac{1}{2}}. \tag{10.14} \]
A detailed inspection of the equation above indicates the following properties of the PSE filter:

- The PSE filter requires knowledge of the PSDs of the original image and noise processes (or models thereof).
- The PSE filter tends toward the inverse filter in magnitude when the noise PSD tends toward zero. This property may be viewed in terms of the entire noise PSD or at individual frequency samples.
- The PSE filter performs restoration in spectral magnitude only. Phase correction, if required, may be applied in a separate step. In most practical cases, the degrading PSF and MTF are isotropic, and H(u, v) has no phase.
- The gain of the PSE filter is not affected by zeros in H(u, v), as long as \(\Phi_\eta(u, v)\) is also not zero at the same frequencies. (In most cases, the noise PSD is nonzero at all frequencies.)
- The gain of the PSE filter reduces to zero wherever the original image PSD is zero. The noise-to-signal PSD ratio in the denominator of Equation 10.14 controls the gain of the filter in the presence of noise.

Models of the PSDs of the original image and noise processes may be estimated from practical measurements or experiments (see Section 10.1.6). See Section 10.2 for a discussion on extending the PSE filter to blind deblurring. Examples of application of the PSE filter are provided in Sections 10.1.3 and 10.5.

10.1.3 The Wiener filter

Wiener filter theory provides for optimal filtering by taking into account the statistical characteristics of the image and noise processes [9, 198, 589, 833].
The filter characteristics are optimized with reference to a performance criterion. The output is guaranteed to be the best achievable result under the conditions imposed and the information provided. The Wiener filter is a powerful conceptual tool that changed traditional approaches to signal processing.

The Wiener filter performs probabilistic (stochastic) restoration with the least-squares error criterion [9, 589]. The basic degradation model used is
\[ g = h\, f + \eta, \tag{10.15} \]
where f and η are real-valued, second-order-stationary random processes that are statistically independent, with known first-order and second-order moments. Observe that this equation is the matrix form of Equation 10.1. The approach taken to estimate the original image is to determine a linear estimate \(\tilde{f} = L\, g\) to f from the given image g, where L is the filter to be derived. The criterion used is to minimize the MSE
\[ \varepsilon^2 = E\left[ \left\| f - \tilde{f} \right\|^2 \right]. \tag{10.16} \]
Expressing the MSE as the trace of the outer product matrix of the error vector, we have
\[ \varepsilon^2 = E\left\{ \mathrm{Tr}\left[ (f - \tilde{f})(f - \tilde{f})^T \right] \right\}. \tag{10.17} \]
In expanding the expression above, we could make use of the following relationships:
\[ (f - \tilde{f})(f - \tilde{f})^T = f f^T - f \tilde{f}^T - \tilde{f} f^T + \tilde{f} \tilde{f}^T. \tag{10.18} \]
\[ \tilde{f}^T = g^T L^T = (f^T h^T + \eta^T)\, L^T. \tag{10.19} \]
\[ f \tilde{f}^T = f f^T h^T L^T + f \eta^T L^T. \tag{10.20} \]
\[ \tilde{f} f^T = L h f f^T + L \eta f^T. \tag{10.21} \]
\[ \tilde{f} \tilde{f}^T = L \left( h f f^T h^T + h f \eta^T + \eta f^T h^T + \eta \eta^T \right) L^T. \tag{10.22} \]
Because the trace of a sum of matrices is equal to the sum of their traces, the E and Tr operators may be interchanged in order. We then obtain the following expressions and relationships:
\[ E\left[ f f^T \right] = \Phi_f, \tag{10.23} \]
the autocorrelation matrix of the original image.
\[ E\left[ f \tilde{f}^T \right] = \Phi_f\, h^T L^T, \tag{10.24} \]
with the observation that
\[ E\left[ f \eta^T \right] = 0, \tag{10.25} \]
because f and η are statistically independent processes and \(\mu_\eta = 0\).
\[ E\left[ \tilde{f} f^T \right] = L h \Phi_f. \tag{10.26} \]
\[ E\left[ \tilde{f} \tilde{f}^T \right] = L h \Phi_f h^T L^T + L \Phi_\eta L^T. \tag{10.27} \]
\[ \Phi_\eta = E\left[ \eta \eta^T \right] \tag{10.28} \]
is the autocorrelation matrix of the noise process.

Now, the MSE may be written as:
\[ \varepsilon^2 = \mathrm{Tr}\left( \Phi_f - \Phi_f h^T L^T - L h \Phi_f + L h \Phi_f h^T L^T + L \Phi_\eta L^T \right) \]
\[ = \mathrm{Tr}\left( \Phi_f - 2\, \Phi_f h^T L^T + L h \Phi_f h^T L^T + L \Phi_\eta L^T \right). \tag{10.29} \]
(Note: \(\Phi_f^T = \Phi_f\) and \(\Phi_\eta^T = \Phi_\eta\) because the autocorrelation matrices are symmetric, and the trace of a matrix is equal to the trace of its transpose.)

At this point, the MSE is no longer a function of f, g, or η, but depends only on the statistical characteristics of f and η, as well as on h and L.

In order to derive the optimal filter L, we could set the derivative of \(\varepsilon^2\) with respect to L to zero:
\[ \frac{\partial \varepsilon^2}{\partial L} = -2\, \Phi_f h^T + 2\, L h \Phi_f h^T + 2\, L \Phi_\eta = 0, \tag{10.30} \]
which leads to the optimal Wiener filter function
\[ L_W = \Phi_f h^T \left( h \Phi_f h^T + \Phi_\eta \right)^{-1}. \tag{10.31} \]
Note that this solution does not depend upon the inverses of the individual ACF matrices or h, but upon the inverse of their combination. The combined matrix could be expected to be invertible even if the individual matrices are singular.
Considerations in implementation of the Wiener filter: Consider the matrix to be inverted in Equation 10.31:
\[ Z = h \Phi_f h^T + \Phi_\eta. \tag{10.32} \]
This matrix would be of size \(N^2 \times N^2\) for \(N \times N\) images, making inversion practically impossible. Inversion becomes easier if the matrix can be written as a product of diagonal and unitary matrices. A condition that reduces the complexity of the problem is that h, \(\Phi_f\), and \(\Phi_\eta\) each be circulant or block-circulant. We can now make the following observations:

- We know, from Section 3.5, that h is block-circulant for 2D LSI operations expressed using circular convolution.
- \(\Phi_\eta\) is a diagonal matrix if η is white (uncorrelated) noise.
- In most real images, the correlation between pixels reduces as the distance (spatial separation) increases: \(\Phi_f\) is then banded and may be approximated by a block-circulant matrix.
Based upon the observations listed above, we can write (see Section 3.5)
\[ h = W D_h W^{-1}, \tag{10.33} \]
\[ \Phi_f = W D_{\Phi f} W^{-1}, \tag{10.34} \]
and
\[ \Phi_\eta = W D_{\Phi \eta} W^{-1}, \tag{10.35} \]
where the matrices D are diagonal matrices resulting from the application of the DFT to the corresponding block-circulant matrices, with the subscripts indicating the related entity (\(\Phi f\): original image, h: degrading system, or \(\Phi \eta\): noise). Then, we have
\[ Z = W D_h D_{\Phi f} D_h^* W^{-1} + W D_{\Phi \eta} W^{-1} = W \left( D_h D_{\Phi f} D_h^* + D_{\Phi \eta} \right) W^{-1}. \tag{10.36} \]
The Wiener filter is then given by
\[ L_W = W D_{\Phi f} D_h^* \left( D_h D_{\Phi f} D_h^* + D_{\Phi \eta} \right)^{-1} W^{-1}. \tag{10.37} \]
The optimal MMSE estimate is given by
\[ \tilde{f} = L_W\, g = W D_{\Phi f} D_h^* \left( D_h D_{\Phi f} D_h^* + D_{\Phi \eta} \right)^{-1} W^{-1} g. \tag{10.38} \]
Interpretation of the Wiener filter: With reference to Equation 10.38, we can make the following observations that help in interpreting the nature of the Wiener filter.

- \(W^{-1} g\) is related to G(u, v), the Fourier transform of the given degraded image g(x, y).
- \(D_{\Phi f}\) is related to the PSD \(\Phi_f(u, v)\) of the original image f(x, y).
- \(D_{\Phi \eta}\) is related to the PSD \(\Phi_\eta(u, v)\) of the noise process.
- \(D_h\) is related to the transfer function H(u, v) of the degrading system.

Then, the output of the Wiener filter before the final inverse Fourier transform is given by
\[ \tilde{F}(u, v) = \frac{\Phi_f(u, v)\, H^*(u, v)}{H(u, v)\, \Phi_f(u, v)\, H^*(u, v) + \Phi_\eta(u, v)}\; G(u, v) \]
\[ = \left[ \frac{H^*(u, v)}{|H(u, v)|^2 + \frac{\Phi_\eta(u, v)}{\Phi_f(u, v)}} \right] G(u, v) \]
\[ = \frac{1}{H(u, v)} \left[ \frac{|H(u, v)|^2}{|H(u, v)|^2 + \frac{\Phi_\eta(u, v)}{\Phi_f(u, v)}} \right] G(u, v). \tag{10.39} \]
The Wiener filter itself is given by the expression
\[ L_W(u, v) = \frac{H^*(u, v)}{|H(u, v)|^2 + \frac{\Phi_\eta(u, v)}{\Phi_f(u, v)}}. \tag{10.40} \]
We can now note the following characteristics of the Wiener filter [9, 589]:

- In the absence of noise, we have \(\Phi_\eta(u, v) = 0\), and the Wiener filter reduces to the inverse filter. This is also applicable at any frequency (u, v) where \(\Phi_\eta(u, v) = 0\).
- The gain of the Wiener filter is modulated by the noise-to-signal (spectral) ratio \(\Phi_\eta(u, v) / \Phi_f(u, v)\). If the SNR is high, the Wiener filter is close to the inverse filter.
- If the SNR is poor and both the signal and noise PSDs are "white", the ratio \(\Phi_\eta / \Phi_f\) is large and could be assumed to be a constant. Then, the Wiener filter is close to the matched filter \(H^*(u, v)\).
- If the noise-to-signal PSD ratio is not available as a function of (u, v), it may be set equal to a constant \(K = \frac{1}{SNR}\). The filter is then known as the parametric Wiener filter.
- The Wiener filter is not often singular or ill-conditioned. Wherever H(u, v) = 0, the output \(\tilde{F}(u, v) = 0\).
- If \(h(x, y) = \delta(x, y)\), then H(u, v) = 1; that is, there is no blurring and the degradation is due to additive noise only. Then,
\[ \tilde{F}(u, v) = \frac{\Phi_f(u, v)}{\Phi_f(u, v) + \Phi_\eta(u, v)}\; G(u, v), \tag{10.41} \]
which is the original Wiener filter for the degradation model g = f + η (see Section 3.6.1 and Rangayyan [31]). The Wiener filter as in Equation 10.38 was proposed by Helstrom [197].
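The parametric Wiener filter of Equation 10.40, with the noise-to-signal PSD ratio replaced by a constant, can be sketched as follows; the image, blur, noise level, and constant ratio are arbitrary choices.

```python
import numpy as np

def parametric_wiener(g, H, K):
    """Parametric Wiener filter: L_W = H* / (|H|^2 + K), with the constant K
    standing in for the noise-to-signal PSD ratio of Equation 10.40."""
    L = np.conj(H) / (np.abs(H)**2 + K)
    return np.real(np.fft.ifft2(L * np.fft.fft2(g)))

rng = np.random.default_rng(3)
N = 64
f = np.zeros((N, N))
f[24:40, 24:40] = 1.0
u = np.fft.fftfreq(N)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-2.0 * np.pi**2 * 3.0**2 * (U**2 + V**2))   # Gaussian MTF, sigma = 3
g = np.real(np.fft.ifft2(H * np.fft.fft2(f))) + rng.normal(scale=0.01, size=(N, N))

restored = parametric_wiener(g, H, K=0.005)
print(np.linalg.norm(restored - f) < np.linalg.norm(g - f))   # True
```

Unlike the inverse filter, the constant K in the denominator limits the gain at frequencies where H(u, v) is small, so the restoration improves upon the degraded image without catastrophic noise amplification.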
Comparative analysis of the inverse, PSE, and Wiener filters:

- When the noise PSD is zero, or, at all frequencies where the noise PSD is equal to zero, the PSE filter is equivalent to the inverse filter in magnitude.
- When the noise PSD is zero, or, at all frequencies where the noise PSD is equal to zero, the Wiener filter is equivalent to the inverse filter.
- The gains of the inverse, PSE, and Wiener filters are related as
\[ |L_I(u, v)| > |L_{PSE}(u, v)| > |L_W(u, v)|. \tag{10.42} \]
- The PSE filter is the geometric mean of the inverse and Wiener filters, that is,
\[ L_{PSE}(u, v) = \left[ L_I(u, v)\, L_W(u, v) \right]^{\frac{1}{2}}. \tag{10.43} \]
- Because \(|L_{PSE}(u, v)| > |L_W(u, v)|\), the PSE filter admits more high-frequency components with larger gain than the Wiener filter. Therefore, the result will be a sharper, but more noisy, image.
- The PSE filter does not have a phase component. Phase correction, if required, may be applied in a separate step.
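The gain ordering of Equation 10.42 and the geometric-mean relation of Equation 10.43 can be checked at a single frequency; the values of H(u, v) and of the noise-to-signal PSD ratio are arbitrary.

```python
import numpy as np

H = 0.3      # value of the MTF at some frequency (u, v); arbitrary
nsr = 0.05   # noise-to-signal PSD ratio at that frequency; arbitrary

L_inv = 1.0 / H                                  # inverse filter, Eq. 10.10
L_wiener = np.conj(H) / (np.abs(H)**2 + nsr)     # Wiener filter, Eq. 10.40
L_pse = np.sqrt(1.0 / (np.abs(H)**2 + nsr))      # PSE filter, Eq. 10.14

print(abs(L_inv) > abs(L_pse) > abs(L_wiener))                   # True
print(np.isclose(abs(L_pse), np.sqrt(abs(L_inv * L_wiener))))    # True
```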
Examples: The original "Shapes" test image and a degraded version of the image, with blurring via convolution using an isotropic Gaussian PSF having a radial standard deviation of two pixels, as well as the addition of Gaussian-distributed noise of variance 0.01 to the blurred image after normalizing the image to the range [0, 1], are shown in Figure 10.4 (a) and (b), respectively. (See also Figures 10.2 and 10.3.) In order to design the appropriate Wiener and PSE filters, a model PSD of the image was prepared by computing the PSD of an image of a square object of size 11 × 11 pixels; values of the PSD less than a limit equal to 10% of its maximum were replaced by the limit. The noise PSD was prepared as an array of constant value equal to the known variance of the noise times \(N^2\), where \(N \times N\) is the size of the PSD array. The results of the Wiener and PSE filters are shown in Figure 10.4 (c) and (d), respectively. It is evident that both filters have removed the blur to some extent; however, as expected, the result of the PSE filter is noisier than that of the Wiener filter.

Figures 10.5 and 10.6 illustrate the application of the inverse, Wiener, and PSE filters to degraded versions of the Lenna and Cameraman images, respectively. The Lenna image was blurred using a Gaussian-shaped blur function with a radial standard deviation of three pixels and additive noise to SNR = 35 dB. The Cameraman image was blurred with a straight line of length nine pixels, simulating blurring due to motion in the horizontal direction, and then degraded further with additive noise to SNR = 35 dB. Inverse filtering even to limited ranges of frequency has resulted in distorted images due to the amplification of noise as well as ringing artifacts. The Wiener and PSE filters have removed the blur to a good extent, with control over the noise. Further examples of application of the Wiener filter are provided in Section 10.4.2.

Aghdasi et al. [202] applied the parametric Wiener filter to deblur mammograms after removing noise using the local LMMSE filter (see Section 3.7.1) or a Bayesian filter. King et al. [834] and Honda et al. [835] applied the Wiener filter to the restoration of nuclear medicine images. A comparative analysis of the performance of the PSE, Wiener, and Metz filters, in the restoration of nuclear medicine images, is provided in Section 10.5.
[Figure 10.4: four image panels, (a) through (d).]
FIGURE 10.4
(a) "Shapes" test image; size 128 × 128 pixels. (b) Test image blurred with a Gaussian PSF of radial standard deviation σ = 2 pixels and degraded with additive Gaussian noise with variance = 0.01 after normalization to [0, 1]. (c) Result of Wiener restoration. (d) Result of restoration using the PSE filter.
[Figure 10.5: six image panels, (a) through (f).]
FIGURE 10.5
(a) Lenna test image of size 128 × 128 pixels. (b) Blurred image with a Gaussian-shaped blur function and noise to SNR = 35 dB. Deblurred images: (c) inverse filtering up to the radial frequency 0.3 times the maximum frequency in (u, v); (d) inverse filtering up to the radial frequency 0.2 times the maximum frequency in (u, v); (e) Wiener filtering; (f) power spectrum equalization. Reproduced with permission from T.F. Rabie, R.M. Rangayyan, and R.B. Paranjape, "Adaptive-neighborhood image deblurring", Journal of Electronic Imaging, 3(4):368-378, 1994. © SPIE.
[Figure 10.6: six image panels, (a) through (f).]
FIGURE 10.6
(a) Cameraman test image of size 128 × 128 pixels and gray-level range 0-255. (b) Image blurred by 9-pixel horizontal motion and degraded by additive Gaussian noise to SNR = 35 dB. Deblurred images: (c) inverse filtering up to the radial frequency 0.8 times the maximum frequency in (u, v); (d) inverse filtering up to the radial frequency 0.4 times the maximum frequency in (u, v); (e) Wiener filtering; (f) power spectrum equalization. Reproduced with permission from T.F. Rabie, R.M. Rangayyan, and R.B. Paranjape, "Adaptive-neighborhood image deblurring", Journal of Electronic Imaging, 3(4):368-378, 1994. © SPIE.
10.1.4 Constrained least-squares restoration
The Wiener filter is derived using a statistical procedure based on the
correlation matrices of the image and noise processes. The filter is optimal, in
an average sense, for the class of images represented by the statistical entities
used. It is possible that the result provided by the Wiener filter for a specific
image on hand is less than satisfactory. Gonzalez and Wintz [589] describe
a restoration procedure that is optimal for the specific image given, under
particular constraints that are imposed, as follows.
Let L be a linear filter operator. Using the degradation model as in Equation
10.15, the restoration problem may be posed as the following constrained
optimization problem: minimize \|L \tilde{f}\|^2 subject to \|g - h \tilde{f}\|^2 = \|\eta\|^2, where \tilde{f}
is the estimate of f being derived. Using the method of Lagrange multipliers,
we now seek \tilde{f} that minimizes the function

J(\tilde{f}) = \|L \tilde{f}\|^2 + \lambda \left[ \|g - h \tilde{f}\|^2 - \|\eta\|^2 \right],   (10.44)

where \lambda is the Lagrange multiplier. Taking the derivative of J(\tilde{f}) with respect
to \tilde{f} and setting it equal to zero, we get

\partial J(\tilde{f}) / \partial \tilde{f} = 0 = 2 L^T L \tilde{f} - 2 \lambda h^T (g - h \tilde{f}),   (10.45)

which leads to

\tilde{f} = (h^T h + \gamma L^T L)^{-1} h^T g,   (10.46)

where \gamma = 1/\lambda.
Due to the ill-conditioned nature of restoration procedures, the results are
often obscured by noise and high-frequency artifacts. Artifacts and noise may
be minimized by formulating a criterion of optimality based on smoothness,
such as minimizing the Laplacian of the output. In order to agree with the
matrix-vector formulation as above, let us construct a block-circulant matrix
L using the Laplacian operator in Equation 2.83. Now, L is diagonalized by
the 2D DFT as D_L = W^{-1} L W, where D_L is a diagonal matrix. Then, we
have

\tilde{f} = (h^T h + \gamma L^T L)^{-1} h^T g
  = (W D_h^* D_h W^{-1} + \gamma W D_L^* D_L W^{-1})^{-1} W D_h^* W^{-1} g.   (10.47)

The estimate before the final inverse Fourier transform is given by

W^{-1} \tilde{f} = (D_h^* D_h + \gamma D_L^* D_L)^{-1} D_h^* W^{-1} g,   (10.48)

which is equivalent to

\tilde{F}(u, v) = \frac{H^*(u, v)}{|H(u, v)|^2 + \gamma |L(u, v)|^2} \; G(u, v),   (10.49)
where L(u, v) is the transfer function related to the constraint operator L.
The expression in Equation 10.49 resembles the result of the Wiener filter
in Equation 10.39, but does not require the PSDs of the image and noise
processes. However, estimates of the mean and variance of the noise process
are required to determine the optimal value for \gamma, which must be adjusted to
satisfy the constraint \|g - h \tilde{f}\|^2 = \|\eta\|^2. It is also worth observing that, if
\gamma = 0, the filter reduces to the inverse filter.
Gonzalez and Wintz [589] give an iterative procedure to estimate \gamma as
follows. Define a residual vector as

r = g - h \tilde{f} = g - h (h^T h + \gamma L^T L)^{-1} h^T g.   (10.50)

We wish to adjust \gamma such that

\|r\|^2 = \|\eta\|^2 \pm \delta,   (10.51)

where \delta is a factor of accuracy. Iterative trial-and-error methods or the
Newton-Raphson procedure may be used to determine the optimal value for \gamma.

A simple iterative procedure to find the optimal value for \gamma is as follows [589]:

1. Choose an initial value for \gamma.

2. Compute \tilde{F}(u, v) and \tilde{f}.

3. Form the residual vector r and compute \|r\|^2.

4. Increment \gamma if \|r\|^2 < \|\eta\|^2 - \delta,
or decrement \gamma if \|r\|^2 > \|\eta\|^2 + \delta, and return to Step 2.
Stop if \|r\|^2 = \|\eta\|^2 \pm \delta.

Note: \|\eta\|^2 is the total squared value or total energy of the noise process, and
is given by (\mu_\eta^2 + \sigma_\eta^2) multiplied with the image area.
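The filter of Equation 10.49 and the iterative adjustment of \gamma above can be sketched as follows. This is a minimal illustration, not the book's implementation: the synthetic test scene, the Gaussian MTF, the bisection-style update of \gamma (in place of plain increment/decrement steps), and all parameter values are assumptions made for demonstration.

```python
import numpy as np

def cls_restore(g, H, L, gamma):
    """Constrained least-squares filter, Equation 10.49:
    F = H* G / (|H|^2 + gamma |L|^2)."""
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(F))

def find_gamma(g, H, L, noise_energy, gamma=1e-3, max_iter=100):
    """Adjust gamma until the residual energy ||r||^2 = ||g - h f~||^2
    matches the noise energy within a tolerance (Equations 10.50, 10.51)."""
    delta = 0.05 * noise_energy          # accuracy factor
    lo, hi = 0.0, None                   # bracket on gamma
    for _ in range(max_iter):
        f = cls_restore(g, H, L, gamma)
        r = g - np.real(np.fft.ifft2(H * np.fft.fft2(f)))  # residual vector
        energy = np.sum(r ** 2)
        if abs(energy - noise_energy) <= delta:
            break
        if energy < noise_energy - delta:    # residual too small: increase gamma
            lo, gamma = gamma, (gamma * 2.0 if hi is None else 0.5 * (gamma + hi))
        else:                                # residual too large: decrease gamma
            hi, gamma = gamma, 0.5 * (lo + gamma)
    return gamma, f

# Demonstration on a synthetic 64 x 64 scene with a zero-phase Gaussian MTF.
rng = np.random.default_rng(0)
N, sigma = 64, 0.01
f_true = np.zeros((N, N)); f_true[24:40, 24:40] = 1.0
u = np.fft.fftfreq(N)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-(U ** 2 + V ** 2) / (2 * 0.05 ** 2))     # Gaussian MTF
lap = np.zeros((N, N))                               # Laplacian constraint operator
lap[0, 0] = 4.0; lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
L = np.fft.fft2(lap)                                 # its transfer function
g = np.real(np.fft.ifft2(H * np.fft.fft2(f_true))) + rng.normal(0.0, sigma, (N, N))
noise_energy = (0.0 ** 2 + sigma ** 2) * N * N       # (mean^2 + variance) x area
gamma_opt, f_hat = find_gamma(g, H, L, noise_energy)
```

The residual energy grows monotonically with \gamma (larger \gamma means heavier smoothing), which is why a bracketing search on \gamma converges.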
Other approaches to constrained restoration: Several constrained
optimization methods have been developed, such as the constrained least-squares
method, the maximum-entropy method, and the Leahy-Goutis method [829].
The constrained least-squares method requires only information about the
mean and variance of the noise function, and the estimate of the image is
defined as that which minimizes the output of a linear operator applied to the
image; the result may be further subjected to an a priori constraint or an a
posteriori measurable feature, such as the smoothness of the image [196, 836].
The maximum-entropy approach maximizes the entropy of the image data
subject to the constraint that the solution should reproduce the ideal image
exactly, or within a tolerable uncertainty defined by the observation noise
statistics [11, 837, 838]. The Leahy-Goutis method optimizes a convex criterion
function, such as minimum energy or cross entropy, over the intersection
of two convex constraint sets [839].
In constrained approaches, the estimate needs to satisfy a priori constraints
on the ideal image. The a priori constraints may be obtained from information
about the blur, noise, or the image [829]. An example of constrained
approaches is the PSE filter, which estimates the image using a linear transform
of the image based on the constraint that the PSD of the filtered image
be equal to that of the ideal image: see Section 10.1.2.

The maximum-a-posteriori probability (MAP) technique is designed to find
the estimate that maximizes the probability of the actual image conditioned
on the observation (the posterior probability). In the case of linear systems and
under Gaussian assumptions, the MAP estimate reduces to the LMMSE
estimate [11].
Several other constraints may be imposed upon restoration filters and their
results. A simple yet effective constraint is that of nonnegativity of the image
values, which is relevant in several practical applications where the pixels
represent physical parameters that cannot take negative values. Biraud [840]
described techniques to increase the resolving power of signals by imposing
nonnegativity. Boas and Kac [841] derived inequalities for the Fourier
transforms of positive functions that may be used as limits on spectral components
of restored signals or images. Webb et al. [842] developed a constrained
deconvolution filter for application to liver SPECT images; several parameters of
the filter could be varied to render the filter equivalent to the inverse, Wiener,
PSE, and other well-known filters.
10.1.5 The Metz filter

Metz and Beck [843] proposed a modification to the inverse filter for
application to nuclear medicine images, including noise suppression at high
frequencies. The Metz filter is defined as

L_M(u, v) = \frac{1 - [1 - H^2(u, v)]^\alpha}{H(u, v)},   (10.52)

where \alpha is a factor that controls the extent in frequency up to which the
inverse filter is predominant, after which the noise-suppression feature becomes
stronger. It is apparent that, if \alpha = 0, the gain of the Metz filter reduces to
zero. Given that most degradation phenomena have a blurring or smoothing
effect, H(u, v) is a lowpass filter. As a consequence, [1 - H^2(u, v)]^\alpha is a highpass
filter, and the numerator in Equation 10.52 is a lowpass filter, whose response
is controlled by the factor \alpha. The factor \alpha may be selected so as to minimize
the MSE between the filtered and the ideal images.
King et al. [834, 844, 845, 846, 847, 848, 849], Gilland et al. [850], Honda
et al. [835], Hon et al. [132], and Boulfelfel et al. [86, 749, 751, 851] applied
the Wiener and Metz filters for the restoration of nuclear medicine images. A
comparative analysis of the performance of the Metz filter with other filters,
in the restoration of nuclear medicine images, is provided in Section 10.5.
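The gain of Equation 10.52 can be sketched as follows. This is a minimal illustration under stated assumptions: the MTF is taken to be real, zero-phase, and bounded by unity, and the gain is set to zero where H = 0 to avoid division by zero.

```python
import numpy as np

def metz_filter(H, alpha):
    """Metz filter, Equation 10.52: L_M = (1 - (1 - H^2)**alpha) / H,
    for a real, zero-phase MTF H with values in [0, 1]."""
    H = np.asarray(H, dtype=float)
    out = np.zeros_like(H)
    nz = H != 0.0
    out[nz] = (1.0 - (1.0 - H[nz] ** 2) ** alpha) / H[nz]
    return out

# Behavior at a few MTF values: near the inverse filter where H is close
# to 1, and strongly attenuated (relative to 1/H) where H is small.
H = np.array([0.99, 0.9, 0.5, 0.1, 0.01])
gain = metz_filter(H, alpha=3.0)
```

Note that \alpha = 1 gives L_M = H (the numerator collapses to H^2), while increasing \alpha pushes the filter toward the inverse filter over a wider band before noise suppression takes over.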
10.1.6 Information required for image restoration

It is evident from Equations 10.10, 10.14, and 10.40 that image restoration,
using techniques such as the inverse, PSE, and Wiener filters, requires specific
items of information, including the MTF of the degradation phenomenon
H(u, v), the PSD of the noise process \Phi_\eta(u, v), and the PSD of the original
image process \Phi_f(u, v). Although this requirement may appear to be onerous,
methods and approaches may be devised to obtain each item based upon prior
or additional knowledge, measurements, and derivations.
Several methods to obtain the PSF and MTF are described in Sections 2.9
and 2.12. The PSF or related functions may be measured if the imaging
system is accessible and phantoms may be constructed for imaging, as described
in Sections 2.9 and 2.12. The PSF may also be modeled mathematically if the
blurring phenomenon is known precisely, and can be represented as a
mathematical process; an example of this possibility is described in Section 10.1.7.

The PSD of an image-generating stochastic process may be estimated as
the average of the PSDs of several observations or realizations of the process
of interest. However, care needs to be exercised in selecting the samples such
that they characterize the same process or type of images. For example, the
PSDs of a large collection of high-quality CT images of the brain of similar
characteristics may be averaged to derive a PSD model to represent such a
population of images. It would be inappropriate to combine CT images of the
brain with CT images of the chest or the abdomen, or to combine CT images
with MR images. In order to overcome the limitations imposed by noise as
well as a specific and small collection of sample images, a practical approach
would be to estimate the average ACF of the images, fit a model such as a
Laplacian, and apply the Fourier transform to obtain the PSD model.
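The averaging of PSDs over several realizations described above can be sketched as follows; averaging the periodograms of a collection of sample images (here, hypothetical random arrays standing in for a set of same-type images) is one simple way to form such an estimate.

```python
import numpy as np

def estimate_psd(images):
    """Estimate the PSD of an image-generating process as the average of
    the periodograms of several sample images (realizations) of one type.
    The mean of each image is removed before transforming."""
    psds = [np.abs(np.fft.fft2(im - im.mean())) ** 2 / im.size for im in images]
    return np.mean(psds, axis=0)

# Hypothetical collection: several realizations of the same random process.
rng = np.random.default_rng(42)
samples = [rng.normal(size=(32, 32)) for _ in range(10)]
psd_model = estimate_psd(samples)
```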
The PSD of noise may be estimated by using a collection of images taken
of phantoms with uniform internal characteristics, or a collection of parts of
images outside the patient or object imaged (representing the background).

When the required functions are unavailable, it will not be possible to apply
the inverse, PSE, or Wiener filters. However, using a few approximations, it
becomes possible to overcome the requirement of explicit and distinct
knowledge of some of the functions; image restoration may then be performed while
remaining "blind" to these entities. Methods for blind deblurring are
described in Section 10.2.
10.1.7 Motion deblurring

Images acquired of living organisms are often affected by blurring due to
motion during the period of exposure (imaging). In some cases, it may be
appropriate to assume that the motion is restricted to within the plane of
the image. Furthermore, it may also be assumed that the velocity is constant
over the (usually short) period of exposure. Under such conditions, it becomes
possible to derive an analytical expression to represent the blurring function
(PSF).
Consider the imaging of an object f(x, y) over the exposure interval [0, T].
Let the relative motion between the object and the camera (or the imaging
system) be represented as a displacement of [\alpha(t), \beta(t)] at the instant of time
t. Then, the recorded image g(x, y) is given by [589]

g(x, y) = \int_0^T f[x - \alpha(t), y - \beta(t)] \, dt.   (10.53)
Observe that, due to the integration over the period of exposure, the resulting
image g(x, y) is not a function of time t. In order to derive the MTF of the
imaging (blurring) system, we may analyze the Fourier transform of g(x, y),
as follows:

G(u, v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) \exp[-j 2\pi (ux + vy)] \, dx \, dy
        = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left\{ \int_0^T f[x - \alpha(t), y - \beta(t)] \, dt \right\} \exp[-j 2\pi (ux + vy)] \, dx \, dy.   (10.54)

Reversing the order of integration with respect to t and (x, y), we get

G(u, v) = \int_0^T \left\{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f[x - \alpha(t), y - \beta(t)] \exp[-j 2\pi (ux + vy)] \, dx \, dy \right\} dt.   (10.55)

The expression within the braces above represents the Fourier transform of
f[x - \alpha(t), y - \beta(t)], which is \exp\{-j 2\pi [u \alpha(t) + v \beta(t)]\} F(u, v). Then, we
have

G(u, v) = F(u, v) \int_0^T \exp\{-j 2\pi [u \alpha(t) + v \beta(t)]\} \, dt.   (10.56)

Therefore, the MTF of the system is given by

H(u, v) = \int_0^T \exp\{-j 2\pi [u \alpha(t) + v \beta(t)]\} \, dt.   (10.57)

Thus, if the exact nature of the motion during the period of exposure is known,
the MTF may be derived in an analytical form.
Let us consider the simple situation of linear motion within the plane of the
image and with constant velocity, such that the total displacement during the
period of exposure T is given by (a, b) in the (x, y) directions, respectively.
Then, we have \alpha(t) = at/T and \beta(t) = bt/T, and

H(u, v) = \int_0^T \exp\left[ -j 2\pi \left( u \frac{at}{T} + v \frac{bt}{T} \right) \right] dt
        = T \, \frac{\sin[\pi (ua + vb)]}{\pi (ua + vb)} \, \exp[-j \pi (ua + vb)].   (10.58)

In the case of linear motion in one direction only, we have a = 0 or b = 0,
and the 2D function above reduces to a 1D function. It follows that the
corresponding PSF is a rectangle or box function that reduces to a straight
line in the case of motion in one direction only.
Given full knowledge of the MTF of the degradation phenomenon, it
becomes possible to design an appropriate restoration filter, such as the inverse,
PSE, or Wiener filter. However, it should be observed that the sinc function
in Equation 10.58 has several zeros in the (u, v) plane, at which points the
inverse filter would pose problems.

An example of simulated motion blurring is given in Section 10.4.2, along
with its restoration; see also Figure 10.6. See Sondhi [826], Gonzalez and
Wintz [589], and Gonzalez and Woods [8] for several examples of motion
deblurring.
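Equation 10.58 can be evaluated directly. The following sketch assumes frequencies expressed in cycles per pixel and uses numpy's normalized convention sinc(x) = sin(\pi x)/(\pi x), so the equation's sin[\pi w]/(\pi w) factor maps onto a single call; the 9-pixel motion example mirrors Figure 10.6 but is otherwise illustrative.

```python
import numpy as np

def motion_mtf(u, v, a, b, T=1.0):
    """MTF of uniform linear motion, Equation 10.58:
    H(u, v) = T sinc(ua + vb) exp(-j pi (ua + vb)),
    where numpy's sinc(x) = sin(pi x) / (pi x)."""
    w = u * a + v * b
    return T * np.sinc(w) * np.exp(-1j * np.pi * w)

# Horizontal motion of a = 9 pixels in a 128-pixel field of view: the gain
# is zero wherever u * a is a nonzero integer, where the inverse filter fails.
u = np.fft.fftfreq(128)                  # cycles per pixel
H = motion_mtf(u, 0.0, a=9.0, b=0.0)
```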
10.2 Blind Deblurring

In some cases, it may not be possible to obtain distinct models of the
degradation phenomena. An inspection of Equation 10.1 reveals the fact that the
given degraded image contains information regarding the blurring system's
MTF and the noise spectrum, albeit in a combined form; in particular, we
have

\Phi_g(u, v) = |H(u, v)|^2 \Phi_f(u, v) + \Phi_\eta(u, v).   (10.59)

Several methods have been proposed to exploit this feature for "blind"
deblurring, with the adjective representing the point that the procedures do
not require explicit models of the degrading phenomena. Two procedures for
blind deblurring are described in the following paragraphs.
Extension of PSE to blind deblurring: The blurred image itself may
be used to derive the parameters required for PSE as follows [9, 589, 825].
Suppose that the given N × N image g(m, n) is broken into M × M segments,
g_l(m, n), l = 1, 2, ..., Q^2, where Q = N/M, and M is larger than the dimensions
of the blurring PSF. Then,

g_l(m, n) = h(m, n) * f_l(m, n) + \eta_l(m, n),   (10.60)
and

\Phi_{g_l}(u, v) = |H(u, v)|^2 \Phi_{f_l}(u, v) + \Phi_{\eta_l}(u, v).   (10.61)

In the expressions above, the combined effect of blurring across the boundaries
of adjacent subimages is ignored. If we now average the PSDs \Phi_{g_l} over all of
the Q^2 available image segments, and make the further assumption that the
averaged PSDs tend toward the true signal and noise PSDs, we have

\frac{1}{Q^2} \sum_{l=1}^{Q^2} \Phi_{g_l}(u, v) = |H(u, v)|^2 \tilde{\Phi}_f(u, v) + \tilde{\Phi}_\eta(u, v),   (10.62)

where the tilde indicates that the corresponding PSDs are estimates. The expression
in Equation 10.62 gives the denominator of the PSE filter as in Equation 10.13;
the numerator \Phi_f is required to be estimated separately. The MTF H(u, v)
and the noise PSD \Phi_\eta(u, v) need not be estimated individually; the procedure
is thus "blind" to these entities.
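The segment-averaging of Equation 10.62 can be sketched as follows. The non-overlapping M × M blocks and the periodogram as the per-segment PSD estimate are simplifying assumptions for illustration.

```python
import numpy as np

def averaged_segment_psd(g, M):
    """Average of the periodograms of the non-overlapping M x M segments
    of the N x N image g (Equation 10.62); the result estimates
    |H|^2 Phi_f + Phi_eta, the denominator of the PSE filter."""
    N = g.shape[0]
    Q = N // M                              # Q = N / M segments per dimension
    acc = np.zeros((M, M))
    for i in range(Q):
        for j in range(Q):
            seg = g[i * M:(i + 1) * M, j * M:(j + 1) * M]
            acc += np.abs(np.fft.fft2(seg)) ** 2 / seg.size
    return acc / (Q * Q)

# Hypothetical 64 x 64 degraded image, averaged over 16 x 16 segments.
g = np.random.default_rng(3).normal(size=(64, 64))
denom = averaged_segment_psd(g, 16)
```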
10.2.1 Iterative blind deblurring

Rabie et al. [852] presented an iterative technique for blind deconvolution
under the assumptions that the MTF of the LSI system causing the
degradation has zero phase, and that its magnitude is a smoothly varying function
of frequency. These assumptions are valid for several types of degradation,
such as motion blur and out-of-focus blur [853, 854]. It has been
demonstrated in several works that the spectral magnitude in the Fourier
representation of a signal is affected by the blur function, while many of the
important features of the signal, such as edge locations, are preserved in the
phase [854, 855, 856, 857, 858, 859, 860, 861]. For example, it has been shown
that the intelligibility of a sentence is retained if the phase of the Fourier
transform of a long segment of the speech signal is combined with unit
magnitude [854].

The method of Rabie et al. [852] makes use of the image characteristics (edge
information) preserved in the phase of the blurred image, and attempts to
recover the original magnitude spectrum that was altered by the blur function.
Their blind deconvolution algorithm, described in the following paragraphs,
differs from earlier related work [853, 862, 863] in that the averaging of the
PSD is achieved by smoothing in the frequency domain; hence, the method is
based upon the use of the entire image rather than sections of the image (see
Section 10.4.1). This feature relaxes the assumption on the region of support
(ROS) of the PSF. Another key feature of the algorithm is that further
enhancement of the initial estimate of the image is achieved through an iterative
approach.

Using the degradation model expressed in Equation 10.1 and neglecting the
presence of noise, we have

M_g(u, v) = M_f(u, v) \, M_h(u, v)   (10.63)
and

\theta_g(u, v) = \theta_f(u, v),   (10.64)

where M_f(u, v) is the spectral magnitude of the original image, M_g(u, v) is
the spectral magnitude of the degraded image, M_h(u, v) is the degradation
MTF with the property that it is a smooth magnitude function with zero
phase, \theta_f(u, v) is the spectral phase of the original image, and \theta_g(u, v) is
the spectral phase of the degraded image. The blur model is thus defined
in terms of magnitude and phase. The spectral magnitude M_f(u, v) of the
original image may be recovered from the spectral magnitude M_g(u, v) of
the degraded image as follows. The initial estimate of M_f(u, v) is based on
smoothing the spectral magnitude of the blurred image, M_g(u, v), and using
the assumption that M_h(u, v) is smooth. If we let S[\cdot] denote a 2D linear,
separable, smoothing operator, then a smoothed M_g(u, v) is given by

S[M_g(u, v)] = S[M_f(u, v) \, M_h(u, v)].   (10.65)

Because M_h(u, v) is a smooth function and S[\cdot] is separable, S[M_f(u, v) \, M_h(u, v)]
may be approximated by S[M_f(u, v)] \, M_h(u, v) [854, 864]. Then, we have

S[M_g(u, v)] \simeq S[M_f(u, v)] \, M_h(u, v).   (10.66)

Combining Equation 10.63 and Equation 10.66, we obtain

M_f(u, v) \simeq M_g(u, v) \, \frac{S[M_f(u, v)]}{S[M_g(u, v)]}.   (10.67)
Equation 10.67 suggests that, if we can obtain an initial approximation of
M_f(u, v), we can rewrite Equation 10.67 in an iterative form, and use it to
refine the initial magnitude estimate. Equation 10.67 can thus be written in
an iterative form as

\tilde{M}_f^{(l+1)} = M_g \, \frac{S[\tilde{M}_f^{(l)}]}{S[M_g]},   (10.68)

where l is the iteration number, the tilde indicates an estimate of the variable
under the symbol, and the argument (u, v) has been dropped for compact notation.
The initial estimate \tilde{M}_f^{(0)} may be derived from the phase of the blurred image,
that is, \exp[j \theta_g(u, v)], which retains most of the high-frequency information
(spatial edges) in the image. A simple initial estimate of the original image, in
the frequency domain, may be defined as the sum of the phase of the blurred
image and the Fourier transform of the blurred image itself:

\tilde{M}_f^{(0)} \exp[j \theta_g] = M_g \exp[j \theta_g] + \exp[j \theta_g];   (10.69)

in terms of magnitudes, we have

\tilde{M}_f^{(0)} = M_g + 1.   (10.70)
From Equation 10.70, it is evident that a unit constant is being added to
the spectral magnitude at all frequencies, which would only raise the entire
spectral magnitude response by the corresponding amount. This has an effect
comparable to that of adding a highpass-filtered version of the image to the
blurred image, which would be of amplifying the high-frequency components
in the image. This step would, however, produce a noisy initial approximation
of M_f. Rather than simply adding a unit magnitude to recover high
frequencies, it would be more appropriate to add those high-frequency components in
M_f that were affected by blurring. Although we do not have explicit
knowledge of M_f, we do have a ratio between M_f and its smoothed version, derived
from Equation 10.67, giving

\frac{M_f}{S[M_f]} \simeq \frac{M_g}{S[M_g]}.   (10.71)

Adding this ratio to the blurred magnitude spectrum gives

\tilde{M}_f^{(0)} = M_g + \frac{M_f}{S[M_f]}.   (10.72)

The expression above may be expected to be a more accurate approximation of
M_f than that in Equation 10.69, because the amount being added is the ratio
M_f / S[M_f], which contains more information about the original spectral magnitude
than a simple constant (unity in Equation 10.70).

The advantages of adding M_f / S[M_f] to the blurred magnitude spectrum are the
following:

- At low frequencies, M_f \simeq S[M_f]. Therefore, \tilde{M}_f^{(0)} \simeq M_g + 1, which
would not significantly affect the magnitude spectrum.

- At higher frequencies, S[M_f] < M_f at high amplitudes of M_f, because
variations in M_f would be averaged out in S[M_f]. Thus, M_f / S[M_f] > 1
at high-frequency components that are likely to represent spatial edges.
Adding this ratio to the blurred magnitude spectrum would have the
effect of amplifying high-frequency edges more than the high-frequency
noise components. Therefore, the high-frequency components of the
phase spectrum \exp[j \theta_g] are emphasized in (M_f / S[M_f]) \exp[j \theta_g]; for this
reason, Rabie et al. referred to the corresponding image as the enhanced
phase image. Thus, the operation in Equation 10.72 tends to add to
M_g the (normalized) high-frequency components that were affected by
blurring.
The iterative blind restoration procedure of Rabie et al. may be summarized
as follows:

1. Use Equation 10.72 to obtain an initial estimate of M_f.

2. Update the estimate iteratively using Equation 10.68.

3. Stop when the MSE between the l th estimate and the (l+1) th estimate
is less than a certain limit.

4. Using Equation 10.64, combine the best estimate of M_f(u, v) with the
phase function \theta_f(u, v) = \theta_g(u, v). The Fourier transform of the restored
image is given as

\tilde{F}(u, v) = \tilde{M}_f(u, v) \exp[j \theta_f(u, v)].   (10.73)

5. Take the inverse Fourier transform of Equation 10.73 to obtain the
deblurred image \tilde{f}(x, y).

Although the method described above neglects noise, it can still be used
for deblurring images corrupted by blur and additive noise, after first reducing
the noise in the blurred image [864, 865].
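The steps above can be sketched as follows. This is a simplified reading of the method, not the authors' code: the smoothing operator is taken as a circular 5 × 5 mean filter on the spectrum (as in the examples that follow in the text), the measurable ratio of Equation 10.71 is substituted for M_f / S[M_f] in the initial estimate, and a fixed iteration count replaces the MSE-based stopping rule.

```python
import numpy as np

def smooth(M, k=5):
    """Circular (wraparound) k x k mean filter applied to a spectral
    magnitude array; wraparound matches the periodicity of the DFT."""
    out = np.zeros_like(M)
    for s in range(-(k // 2), k // 2 + 1):
        out += np.roll(M, s, axis=0)
    out2 = np.zeros_like(M)
    for s in range(-(k // 2), k // 2 + 1):
        out2 += np.roll(out, s, axis=1)
    return out2 / float(k * k)

def blind_deblur(g, n_iter=4, k=5, eps=1e-12):
    """Iterative blind deconvolution after Rabie et al.: the phase of the
    blurred image is retained, and the magnitude is refined iteratively."""
    G = np.fft.fft2(g)
    Mg = np.abs(G)
    phase = G / (Mg + eps)                     # exp(j theta_g)
    # Initial estimate, Equation 10.72, with the measurable ratio of
    # Equation 10.71 substituted for Mf / S[Mf]:
    Mf = Mg + Mg / (smooth(Mg, k) + eps)
    for _ in range(n_iter):                    # Equation 10.68
        Mf = Mg * smooth(Mf, k) / (smooth(Mg, k) + eps)
    return np.real(np.fft.ifft2(Mf * phase))   # Equations 10.73 and inverse

# Demonstration: a synthetic square blurred by a zero-phase Gaussian MTF.
N = 32
f = np.zeros((N, N)); f[10:22, 10:22] = 1.0
u = np.fft.fftfreq(N)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-(U ** 2 + V ** 2) / (2 * 0.1 ** 2))
g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
f_hat = blind_deblur(g)
```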
Examples: Figure 10.7 (a) shows the original test image "Lenna". Part
(b) of the figure shows the test image blurred by a Gaussian-shaped PSF with
the radial standard deviation \sigma_r = 3 pixels. The magnitude of the inverse
Fourier transform of the enhanced phase function (M_f / S[M_f]) \exp[j \theta_g] is shown in
part (c) of the figure. It is evident that the enhanced phase image retains
most of the edges in the original image that were made weak or imperceptible
by blurring. Part (d) of the figure shows the initial estimate of \tilde{f}(x, y), formed
by the addition of the image in part (b) of the figure to the image in part
(c), as described in Equation 10.72. The addition has emphasized the edges
of the blurred image, thus giving a sharper image (albeit with a larger MSE
than that of the blurred image, as listed in the caption of Figure 10.7). The
dynamic range of the image in part (d) is different from that of the original
image in part (a) of the figure.

The image generated by combining the first estimate \tilde{M}_f^{(1)}, obtained by using
Equation 10.68, with the phase of the blurred image \exp[j \theta_g] is shown in part
(e) of the figure. Even after only one iteration, the deblurred image closely
resembles the original image. The restored image after four iterations, shown
in part (f) of the figure, demonstrates sharp features with minimal artifacts.
The smoothing operator S was defined as the 5 × 5 mean filter, applied to the
128 × 128 Fourier spectrum of the image [852, 854, 864].

Figures 10.8 and 10.9 illustrate the restoration of an image with text,
blurred to two different levels. The enhanced phase images clearly depict
the edge information retained in the phase of the blurred image. The restored
images demonstrate significantly improved quality in terms of sharpened edges
and improved readability of the text, as well as reduced MSE.
FIGURE 10.7
Iterative blind deconvolution with the Lenna test image of size 128 × 128 pixels
and 256 gray levels: (a) original image; (b) blurred image with Gaussian-
shaped blur function of radial standard deviation σ_r = 3 pixels, MSE = 606;
(c) enhanced phase image, with the range [0, 100] mapped to the display range
[0, 256]; (d) initial estimate image, MSE = 877; (e) deblurred image after the
first iteration, MSE = 128; (f) deblurred image after four iterations, MSE =
110. Reproduced with permission from T.F. Rabie, R.B. Paranjape, and R.M.
Rangayyan, "Iterative method for blind deconvolution", Journal of Electronic
Imaging, 3(3):245–250, 1994. © SPIE.
FIGURE 10.8
Iterative blind deconvolution of a slightly blurred text image, digitized to
64 × 64 pixels and 256 gray levels: (a) original image; (b) blurred image with
Gaussian-shaped blur function of radial standard deviation σ_r = 3 pixels,
MSE = 1,041; (c) enhanced phase image; (d) initial estimate image, MSE =
385; (e) final restored image after eight iterations, MSE = 156. Reproduced
with permission from T.F. Rabie, R.B. Paranjape, and R.M. Rangayyan,
"Iterative method for blind deconvolution", Journal of Electronic Imaging,
3(3):245–250, 1994. © SPIE.
FIGURE 10.9
Iterative blind deconvolution of a severely blurred text image, digitized to
64 × 64 pixels and 256 gray levels: (a) original image; (b) blurred version
of the test image with Gaussian-shaped blur function of radial standard
deviation σ_r = 5 pixels, MSE = 3,275; (c) enhanced phase image; (d) initial
estimate image, MSE = 690; (e) restored image after one iteration, MSE =
618. Reproduced with permission from T.F. Rabie, R.B. Paranjape, and R.M.
Rangayyan, "Iterative method for blind deconvolution", Journal of Electronic
Imaging, 3(3):245–250, 1994. © SPIE.
10.3 Homomorphic Deconvolution

Consider the case where we have an image that is given by the convolution of
two component images, as expressed by the relation

g(x, y) = h(x, y) * f(x, y).   (10.74)

Similar to the case of the multiplicative homomorphic system described in
Section 4.8, the goal in homomorphic deconvolution is to convert the
convolution operation to addition. From the convolution property of the Fourier
transform, we know that Equation 10.74 translates to

G(u, v) = H(u, v) \, F(u, v).   (10.75)

Thus, the application of the Fourier transform converts convolution to
multiplication. Now, it is readily seen that the multiplicative homomorphic system
may be applied to convert the multiplication above to addition. Taking the
complex logarithm of G(u, v), we have

\log[G(u, v)] = \log[H(u, v)] + \log[F(u, v)], \quad H(u, v) \neq 0, \; F(u, v) \neq 0 \; \forall (u, v).
(10.76)

{Note: \log[F(u, v)] = \hat{F}(u, v) = \log[|F(u, v)|] + j \angle F(u, v).} A linear filter may
now be used to separate the transformed components of f and h, with the
assumption, as before, that they are separable in the transform space. A series
of the inverses of the transformations applied initially will take us back to the
original domain.

Figure 10.10 gives a block diagram of the steps involved in a homomorphic
filter for convolved signals. Observe that the path formed by the first three
blocks (the top row) transforms the convolution operation at the input to
addition. The set of the last three blocks (the bottom row) performs the reverse
transformation, converting addition to convolution. The filter in between thus
deals with (transformed) images that are combined by simple addition.
Practical application of the homomorphic filter is not simple. Figure 10.11 gives
a detailed block diagram of the procedure [7, 239].
10.3.1 The complex cepstrum

The formal definition of the complex cepstrum states that it is the inverse
z-transform of the complex logarithm of the z-transform of the input signal
[7, 239]. (The name "cepstrum" was derived by transposing the syllables of
the word "spectrum"; other transposed terms [7, 236, 239] are less commonly
used.) In practice, the Fourier transform is used in place of the z-transform.
Given g(x, y) = h(x, y) * f(x, y), it follows that

\hat{G}(u, v) = \hat{H}(u, v) + \hat{F}(u, v),   (10.77)
and furthermore, that the complex cepstra of the signals are related simply
as

\hat{g}(x, y) = \hat{h}(x, y) + \hat{f}(x, y).   (10.78)

Here, the hat symbol over a function of frequency indicates the complex
logarithm of the corresponding function, whereas the same symbol over a function
of space indicates its complex cepstrum.

An important consideration in the evaluation of the complex logarithm of
G(u, v) relates to the phase of the signal. The phase spectrum computed
as its principal value in the range [0, 2\pi], given by \tan^{-1} \left[ \frac{\mathrm{imaginary}\{G(u, v)\}}{\mathrm{real}\{G(u, v)\}} \right],
will almost always have discontinuities that will conflict with the
requirements of the inverse Fourier transform to follow. Thus, G(u, v) needs to
be separated into its magnitude and phase components, the logarithmic
operation applied to the magnitude, the phase corrected to be continuous by
adding correction factors of \pm 2\pi at discontinuities larger than \pi, and the
two components combined again before the subsequent inverse transformation.
Correcting the phase spectrum as above is referred to as phase
unwrapping [7, 239, 866, 867, 868, 869]. In addition, it has been shown that a linear
phase term, if present in the spectrum of the input, may cause rapidly
decaying oscillations in the complex cepstrum [239]. It is advisable to remove the
linear phase term, if present, during the phase-unwrapping step. The linear
phase term may be added back to the filtered result, if necessary.
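A minimal 1D sketch of the complex cepstrum with phase unwrapping and a simplified removal of the linear phase term; estimating an integer delay from the unwrapped phase at the mid-band, as done here, is one common convention, not the only one.

```python
import numpy as np

def complex_cepstrum(x):
    """1D complex cepstrum: inverse FFT of the complex logarithm of the
    FFT, with phase unwrapping and removal of the linear phase term
    (whose integer delay is returned so it can be reinserted later)."""
    X = np.fft.fft(x)
    phase = np.unwrap(np.angle(X))               # continuous phase
    n = len(x)
    center = (n + 1) // 2
    ndelay = int(round(phase[center] / np.pi))   # linear-phase (delay) estimate
    phase = phase - np.pi * ndelay * np.arange(n) / center
    ceps = np.real(np.fft.ifft(np.log(np.abs(X)) + 1j * phase))
    return ceps, ndelay

# An impulse train d(t) = delta(t) + a delta(t - n0) has a cepstrum with
# peaks (-1)**(k+1) a**k / k at lags k*n0, so an echo separates from the
# low-quefrency wavelet component and can be removed by cepstral filtering.
d = np.zeros(64)
d[0], d[4] = 1.0, 0.5
ceps, ndelay = complex_cepstrum(d)
```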
Taxt [870] applied 2D homomorphic filtering to improve the quality of
medical ultrasound images. The recorded image was modeled as the convolution
of a field representing the sound-reflecting structures (the anatomical surfaces
of interest) with a PSF related to the interrogating ultrasound pulse shape.
A Wiener filter was then used to deconvolve the effects of the PSF.

An important application of 1D homomorphic filtering is in the extraction of
the basic wavelet from a composite signal made up of quasiperiodic repetitions
(or echoes) of the wavelet, such as a voiced-speech signal or a seismic signal [7,
31, 237, 238, 871, 872]. An extension of this technique to the removal of visual
echoes in images is described in Section 10.3.2.
10.3.2 Echo removal by Radon-domain cepstral filtering

Homomorphic deconvolution has been used to remove repeated versions of
a basic pattern in an image (known as a visual echo) [444, 505], and has found
other applications in image processing [873, 874, 875, 876, 877]. Martins and
Rangayyan [444, 505] performed 1D homomorphic deconvolution on the Radon
transform (projections) of the given image, instead of 2D homomorphic
filtering; the final filtered image was obtained by applying the ART method of
reconstruction from projections (see Section 9.4) to the filtered projections.
The general scheme of the method is shown in Figure 10.12; the details of the
method are described in the following paragraphs.
[Figure 10.10 shows a block diagram: the input image passes through a Fourier
transform, a complex logarithm, and an inverse Fourier transform, yielding the
complex cepstrum; a linear filter is applied in the cepstral domain; a Fourier
transform, exponentiation, and an inverse Fourier transform then yield the
filtered image.]

FIGURE 10.10
Operations involved in a homomorphic filter for convolved signals. The symbol
at the input or output of each block indicates the operation that combines the
signal components at the corresponding step. Reproduced with permission from
R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE
Press and Wiley, New York, NY, 2002. © IEEE.
[Figure 10.11 shows a detailed block diagram: the input signal is multiplied by
an exponential window and Fourier transformed; the logarithm is applied to the
magnitude, while the phase is unwrapped and its linear component removed; an
inverse Fourier transform yields the complex cepstrum, which is subjected to
lowpass or highpass filtering; the result is Fourier transformed, the magnitude
exponentiated, and the linear phase component reinserted; an inverse Fourier
transform followed by an inverse exponential window yields the filtered signal.]

FIGURE 10.11
Detailed block diagram of the steps involved in deconvolution of signals using
the complex cepstrum. Reproduced with permission from A.C.G. Martins and
R.M. Rangayyan, "Complex cepstral filtering of images and echo removal in the
Radon domain", Pattern Recognition, 30(11):1931–1938, 1997. © Pattern
Recognition Society. Published by Elsevier Science Ltd.
FIGURE 10.12
Homomorphic (complex cepstral) filtering of an image in the Radon domain. [Block diagram: input image → Radon transform → n original projections → cepstral filter → n filtered projections → reconstruction from projections → filtered image.] Reproduced with permission from A.C.G. Martins and R.M. Rangayyan, "Complex cepstral filtering of images and echo removal in the Radon domain", Pattern Recognition, 30(11):1931–1938, 1997. © Pattern Recognition Society. Published by Elsevier Science Ltd.

Let $f_e(x, y)$ be an image given by
$$ f_e(x, y) = f(x, y) * d(x, y), \quad (10.79) $$
where
$$ d(x, y) = \delta(x, y) + a\, \delta(x - x_0,\, y - y_0), \quad (10.80) $$
with $a$ being a scalar weighting factor. In the context of images with visual
echoes, we could label $f(x, y)$ as the basic element image, $d(x, y)$ as a field
of impulses at the positions of the echoes (including the original element),
and $f_e(x, y)$ as the composite image. Applying the Radon transform (see
Section 9.1), we have the projection $p_\theta(t)$ of the composite image $f_e(x, y)$ at
angle $\theta$ given by
$$ p_\theta(t) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f_e(x, y)\, \delta(x\cos\theta + y\sin\theta - t)\, dx\, dy, \quad (10.81) $$
which leads to
$$ p_\theta(t) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(\alpha, \beta)\, \delta(x - \alpha,\, y - \beta)\, \delta(x\cos\theta + y\sin\theta - t)\, dx\, dy\, d\alpha\, d\beta $$
$$ + \; a \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(\alpha, \beta)\, \delta[(x - x_0) - \alpha,\, (y - y_0) - \beta]\, \delta(x\cos\theta + y\sin\theta - t)\, dx\, dy\, d\alpha\, d\beta. \quad (10.82) $$
Here, $t$ is the displacement between the projection samples (rays) in the Radon
domain. Using the properties of the $\delta$ function (see Section 2.9), we get
$$ p_\theta(t) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(\alpha, \beta)\, \delta(\alpha\cos\theta + \beta\sin\theta - t)\, d\alpha\, d\beta $$
$$ + \; a \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(\alpha, \beta)\, \delta[(x_0 + \alpha)\cos\theta + (y_0 + \beta)\sin\theta - t]\, d\alpha\, d\beta. \quad (10.83) $$
Defining
$$ n_\theta = x_0 \cos\theta + y_0 \sin\theta \quad (10.84) $$
and
$$ f_\theta(t) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(\alpha, \beta)\, \delta(\alpha\cos\theta + \beta\sin\theta - t)\, d\alpha\, d\beta, \quad (10.85) $$
we get
$$ p_\theta(t) = f_\theta(t) + a\, f_\theta(t - n_\theta). \quad (10.86) $$
Here, $f_\theta(t)$ represents the projection of the basic element image $f(x, y)$ at
angle $\theta$, and $n_\theta$ is a displacement or shift (in the Radon domain) for the given
angle $\theta$ and position $(x_0, y_0)$. Equation 10.86 indicates that the projection of
the composite image at a given angle is the summation of the projections at
the same angle of the basic element image and its replication (or echo) with
an appropriate shift factor $n_\theta$.
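The relationship in Equation 10.86 can be checked numerically for the simple case $\theta = 0$, where $n_\theta = x_0$. The sketch below (with an arbitrary test object, echo offset, and amplitude, and with axis 0 of the array taken as the $x$ axis) verifies that the projection of the composite image equals the projection of the basic image plus a shifted, scaled copy:

```python
import numpy as np

N = 64
f = np.zeros((N, N))
f[20:26, 20:26] = 100.0              # basic element image (illustrative)
x0, y0, a = 12, 7, 0.5               # echo offset and amplitude (illustrative)
fe = f + a * np.roll(np.roll(f, x0, axis=0), y0, axis=1)   # composite image

# Projection at theta = 0: p0(t) = integral over y of fe(x = t, y)
p_f = f.sum(axis=1)
p_fe = fe.sum(axis=1)

# Equation 10.86 with n_theta = x0*cos(0) + y0*sin(0) = x0
assert np.allclose(p_fe, p_f + a * np.roll(p_f, x0))
```

Because `np.roll` implements a circular shift, the relation holds exactly here; for a linear shift it holds wherever the object stays inside the field of view.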
In order to demonstrate the effect of an echo in a signal on its complex
cepstrum, we could apply the Fourier transform, the log operation, and then
the inverse Fourier transform, as follows. Applying the Fourier transform to
Equation 10.86, we get
$$ P_\theta(w) = F_\theta(w) + a\, \exp(-j\, 2\pi w\, n_\theta)\, F_\theta(w), \quad (10.87) $$
where $P_\theta(w)$ and $F_\theta(w)$ are the Fourier transforms of $p_\theta(t)$ and $f_\theta(t)$, respectively,
and $w$ is the Fourier-domain variable corresponding to the space-domain
(Radon-domain) variable $t$. Applying the natural log, we get
$$ \hat{P}_\theta(w) = \hat{F}_\theta(w) + \ln[1 + a\, \exp(-j\, 2\pi w\, n_\theta)], \quad (10.88) $$
where the $\hat{\;}$ represents the log-transformed version of the variable under the
symbol. If $a < 1$, the log function may be expanded into a power series as
$$ \hat{P}_\theta(w) = \hat{F}_\theta(w) + a\, \exp(-j\, 2\pi w\, n_\theta) - \frac{a^2}{2}\, \exp(-j\, 4\pi w\, n_\theta) + \cdots. \quad (10.89) $$
Applying the inverse Fourier transform, we get the complex cepstrum of $p_\theta(t)$
as
$$ \hat{p}_\theta(t) = \hat{f}_\theta(t) + a\, \delta(t - n_\theta) - \frac{a^2}{2}\, \delta(t - 2 n_\theta) + \cdots. \quad (10.90) $$
Thus, the complex cepstrum $\hat{p}_\theta(t)$ of the projection $p_\theta(t)$ at angle $\theta$ of the
image $f_e(x, y)$ is composed of the complex cepstrum $\hat{f}_\theta(t)$ of the projection at
angle $\theta$ of the basic element image $f(x, y)$, and a train of impulses. If $\hat{f}_\theta(t)$
and the impulse train are sufficiently separated in the cepstral domain, the
use of a lowpass filter, followed by the inverse cepstrum operation, gives the
filtered projection $f_\theta(t)$ of the basic element image. The impulse train may
also be suppressed by using a notch or a comb function. After the filtered
projections are obtained at several angles, it will be possible to reconstruct
the basic element image $f(x, y)$. If the effects of the echoes are considered to
be a type of image distortion, filtering as above may be considered to be an
operation for image restoration.
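The appearance of the echo's impulse train in the cepstrum (Equation 10.90) is easy to demonstrate in 1D. The sketch below uses the real (magnitude-only) cepstrum, which is sufficient to reveal the dominant peak at the echo delay; the wavelet, delay, and amplitude are arbitrary illustrative choices:

```python
import numpy as np

N, n, a = 256, 40, 0.5
t = np.arange(N)
f = np.exp(-0.05 * t) * np.sin(0.3 * t)       # basic wavelet (illustrative)
p = f + a * np.roll(f, n)                     # signal with one echo at delay n

# Real (magnitude-only) cepstrum; the echo contributes impulses at
# quefrencies n, 2n, ... with weights of decreasing magnitude (cf. Eq. 10.90)
ceps = np.fft.ifft(np.log(np.abs(np.fft.fft(p)) + 1e-12)).real
peak = 10 + int(np.argmax(ceps[10:N // 2]))   # skip the low-quefrency wavelet part
assert peak == n
```

A lowpass or comb lifter applied to `ceps` around quefrency `n` (followed by the inverse chain, using the full complex cepstrum) would implement the echo removal described above.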
If $a \geq 1$, that is, if the echoes are of amplitude equal to or greater than that
of the basic element image, the impulses in the cepstrum will appear with
negative delays, and the filtering operation will lead to the extraction of an
echo image [505]. The situation may be modified easily by applying decaying
exponential weighting factors to the original image and/or the projections,
such that the weighted echoes and/or their projections are attenuated to lower
amplitudes. Then, the effect is equivalent to the situation with $a < 1$.
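The effect of such exponential weighting can be verified directly: weighting a signal with $\alpha^t$ converts an echo of amplitude $a$ at delay $n$ into an echo of effective amplitude $a\,\alpha^n$ in the weighted signal. A 1D sketch, with an arbitrary wavelet and with $\alpha = 0.985$ matching the projection-weighting value used in the example in this section:

```python
import numpy as np

N, n, a, alpha = 256, 40, 1.0, 0.985
t = np.arange(N)
f = np.exp(-0.05 * t) * np.sin(0.3 * t)      # basic wavelet (illustrative)

fe = np.zeros(N); fe[n:] = f[:-n]            # linear (non-circular) echo of f
p = f + a * fe                               # echo amplitude a = 1

pw = p * alpha ** t                          # apply the decaying exponential weight
fw = f * alpha ** t
few = np.zeros(N); few[n:] = fw[:-n]

# The weighted signal is fw plus an echo of effective amplitude a * alpha**n < 1
assert np.allclose(pw, fw + (a * alpha ** n) * few)
assert a * alpha ** n < 1.0
```

After cepstral filtering of the weighted signal, the weighting is undone by multiplying the result with $\alpha^{-t}$.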
Example: A composite image with five occurrences of a simple circular
object is shown in Figure 10.13 (a). If the object at the lower-right corner of
the image is considered to be the original object, the remaining four objects
could be viewed as echoes of the original object. Although echoes would, in
general, be of lower amplitude (intensity) than the original object, they have
been maintained at the same intensity as the original in this image. The test
image is of size 101 × 101 pixels; the radius of each circle is 10 pixels, and the
intensity of each circle is 100 on an inverted gray scale.
The Radon-domain homomorphic filter was applied to the test image. The
image was multiplied by a weighting function given by $y^3$, where $y$ is the
vertical axis of the image and the origin is at the top-left corner of the image.
Furthermore, another weighting function, given by $\alpha^t$ with $\alpha = 0.985$, was
applied to each projection before computing the complex cepstrum. The
weighting functions were used in order to reduce the effect of the echoes and
facilitate filtering of the cepstrum [239, 31]. The cepstrum was lowpass filtered
with a window of 40 pixels. Eighteen projections in the range $[0^\circ, 170^\circ]$ in steps
of $10^\circ$ were computed, filtered, and used to reconstruct the basic element
image. The result, shown in Figure 10.13 (b), was thresholded at the gray
level of 100. It is seen that a single circular object has been extracted, with
minimal residual artifact.
The method was extended to the extraction of the basic element in images
with periodic texture, known as the texton, by Martins and Rangayyan [444];
see Section 7.7.1.

10.4 Space-variant Restoration


Several image restoration techniques, such as the Wiener and PSE filters, are
based upon the assumption that the image can be modeled by a stationary
(random) field. Restoration is achieved by filtering the degraded image with
an LSI filter, the frequency response of which is a function of the PSD of the
uncorrupted image. There are at least two difficulties in this approach: most

FIGURE 10.13
Illustration of homomorphic (complex cepstral) filtering of an image in the
Radon domain to remove echoes in images. (a) Original image. (b) Filtered
image after thresholding. Reproduced with permission from A.C.G. Martins
and R.M. Rangayyan, "Complex cepstral filtering of images and echo removal
in the Radon domain", Pattern Recognition, 30(11):1931–1938, 1997. © Pattern
Recognition Society. Published by Elsevier Science Ltd.

images are not stationary, and, at best, may be described as locally stationary;
furthermore, in practice, the PSD of the uncorrupted original image is not
given, and needs to be estimated.
A common procedure to estimate the PSD of a signal involves sectioning
the signal into smaller, presumably stationary segments, and averaging their
modified PSDs or periodograms [12, 825, 853, 862, 863, 865, 878, 879, 880, 881,
882]. The procedure assumes shift-invariance of the blur PSF, and averages
out the nonstationary frequency content of the original image. In order for
the sectioning approach to be valid, the blurring PSF must have an ROS
(spatial extent) that is much smaller than the size of the subimages; as a
result, the size of the subimages cannot be made arbitrarily small. Thus, the
number of subimages that may be used to form the ensemble average is limited;
consequently, the variance of the PSD estimate from the subimages could be
high, which leads to a poor estimate of the PSD of the original image.
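A minimal sketch of such a sectioned (averaged-periodogram) PSD estimate, with half-overlapped square sections; the section size and normalization are illustrative choices, not the book's:

```python
import numpy as np

def averaged_periodogram(img, P):
    # Average the periodograms of half-overlapped P x P sections
    M = img.shape[0]
    acc = np.zeros((P, P))
    count = 0
    for i in range(0, M - P + 1, P // 2):
        for j in range(0, M - P + 1, P // 2):
            sec = img[i:i + P, j:j + P]
            sec = sec - sec.mean()                     # remove the local mean
            acc += np.abs(np.fft.fft2(sec)) ** 2 / (P * P)
            count += 1
    return acc / count

rng = np.random.default_rng(4)
psd = averaged_periodogram(rng.normal(size=(64, 64)), 16)
assert psd.shape == (16, 16)
assert np.isclose(psd.mean(), 1.0, rtol=0.2)   # unit-variance white noise -> flat PSD near 1
```

Averaging over many sections reduces the variance of the estimate, but, as noted above, the section size cannot be made arbitrarily small relative to the ROS of the blur PSF.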
Another consequence of the assumption of image stationarity and the use
of space-invariant filtering is the fact that the deblurred images suffer from
artifacts at the boundaries of the image [853]. In a simple mathematical representation
of this problem, it is assumed that the image is of infinite extent;
however, practical images are of finite extent, and the performance of a filter
may vary depending upon the assumptions made regarding the edges of the
image. The effect at the edges from deconvolution with incomplete information
(due to the lack of information beyond the image boundary) could cause
different contributions from outside the image boundary during deblurring
than those that were convolved into the image during the actual degradation
process. This leads to a layer of boundary pixels taking incorrect values
during deblurring, and consequently, artifacts at the image boundaries.
Attempts to overcome the problems caused by the inherent nonstationarity
of images have led to several methods. Angel and Jain [883] proposed
a technique to solve the general superposition integral iteratively by using a
conjugate-gradient method. The method faces problems with convergence in
the presence of noise. Techniques have been proposed to enhance the performance
of nonadaptive filters by using radiometric and geometric transforms
to generate images that are nearly stationary (block stationary) in the first
and second moments [884]. The radiometric transform generates stationary
mean and variance, whereas the geometric transform provides stationary autocorrelation.
Adaptive techniques for space-variant restoration have been proposed based
upon sectioning the given image into smaller subsections and assuming different
stationary models for each section [177, 865, 885, 886]. Two approaches
to sectioned deblurring are described in Sections 10.4.1 and 10.4.2.
The Kalman filter is the most popular method for truly space-variant filtering,
and is described in Section 10.4.3.

10.4.1 Sectioned image restoration


In the iterative sectioned MAP restoration technique proposed by Trussell
and Hunt [177, 885], the input image is divided into small P × P sections,
and the MAP estimate of the uncorrupted section is developed and iterated
upon for refinement. This procedure is carried out on each section using an
overlap-save technique to reduce edge artifacts. Because sectioning an image
presumably causes each individual section to be close to a stationary process,
a simpler approach to sectioned deblurring could be based upon the use of
a conventional LSI filter, such as the Wiener or PSE filter, to deblur each
section individually; the deblurred sections may then be combined to form
the final deblurred image. A technique to reduce edge effects between the
sections is to center each subimage in a square region of size comparable to
that of the input image, and then pad the region surrounding the centered
subimage with its mean value, prior to filtering. Each mean-padded region
may also be multiplied in the space domain with a smooth window function,
such as the Hamming window.
Using the above argument, and assuming that each section (of size P × P
pixels) is large compared to the ROS of the blur PSF, but small compared
to the actual image dimensions M × M, each section of the image may be
expressed as the convolution of the PSF with an equivalent section from the
original undegraded image $f(m, n)$. Then, we have
$$ g_l(m, n) \simeq h(m, n) * f_l(m, n) + \eta_l(m, n). \quad (10.91) $$
In the frequency domain, Equation 10.91 becomes
$$ G_l(u, v) \simeq H(u, v)\, F_l(u, v) + \eta_l(u, v). \quad (10.92) $$
Now, applying the 2D Hamming window of size M × M, given by
$$ w_H(m, n) = \left[ 0.54 - 0.46 \cos\!\left( \frac{2\pi m}{M - 1} \right) \right] \left[ 0.54 - 0.46 \cos\!\left( \frac{2\pi n}{M - 1} \right) \right], \quad (10.93) $$
to each region padded by its mean to the size M × M, we obtain
$$ g_l(m, n)\, w_H(m, n) \simeq [h(m, n) * f_l(m, n)]\, w_H(m, n) + \eta_l(m, n)\, w_H(m, n), \quad (10.94) $$
or
$$ g_l^w(m, n) \simeq h(m, n) * f_l^w(m, n) + \eta_l^w(m, n), \quad (10.95) $$
where the $w$ notation represents the corresponding windowed sections. The
PSD of each section $g_l(m, n)$ may be expressed as
$$ \Phi_{g_l}(u, v) \simeq |H(u, v)|^2\, \Phi_{f_l}(u, v) + \Phi_{\eta_l}(u, v). \quad (10.96) $$
The Wiener or PSE filter may then be applied to each section. The final
restored image $\tilde{f}(m, n)$ is obtained by extracting the individual sections from
the center of the corresponding filtered mean-padded images, and placing
them at their original locations.
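The sectioned procedure of Equations 10.91–10.96 can be sketched as follows. This is a simplified, noise-free illustration: the image, PSF, section size, and the constant PSD ratio fed to the Wiener filter are all arbitrary choices, and the overlapping and PSE variants are omitted:

```python
import numpy as np

def hamming2d(M):
    # Separable 2D Hamming window, Equation 10.93
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(M) / (M - 1))
    return np.outer(w, w)

def wiener(G, H, snr=1e3):
    # Classical Wiener filter with a constant PSD ratio (illustrative)
    return np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)

M, P = 64, 16                                  # image and section sizes
rng = np.random.default_rng(1)
f = rng.random((M, M))
h = np.zeros((M, M)); h[:3, :3] = 1.0 / 9.0    # 3x3 mean-blur PSF; ROS << P
H = np.fft.fft2(h)
g = np.fft.ifft2(H * np.fft.fft2(f)).real      # blurred (noise-free) image

restored = np.zeros_like(g)
c = (M - P) // 2
for i in range(0, M, P):
    for j in range(0, M, P):
        sec = g[i:i + P, j:j + P]
        padded = np.full((M, M), sec.mean())   # mean padding around the section
        padded[c:c + P, c:c + P] = sec
        Fl = wiener(np.fft.fft2(padded * hamming2d(M)), H)
        rec = np.fft.ifft2(Fl).real
        restored[i:i + P, j:j + P] = rec[c:c + P, c:c + P]

assert restored.shape == (M, M) and np.isfinite(restored).all()
```

In practice, overlapping the sections by one-half of the section size (as described below) would suppress the artifacts at the inner section boundaries that this naive tiling produces.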
Limitations exist in sectioned restoration due to the fact that the assumption
of stationarity within the square sections may not be satisfied: sectioning
using regularly spaced square blocks of equal size cannot discriminate between
"flat" and "busy" areas of any given image. Furthermore, because of the limitations
on the section size (that it must be large compared to the ROS of the
blur PSF), sections cannot be made arbitrarily small. Thus, the mean value
of a section could be significantly different from its pixel values, and consequently,
artifacts could arise at section boundaries. To partially solve the
problem of edge artifacts, the sections could be overlapped, for example, by
one-half the section size in each dimension [862, 865]. This technique, however,
will not reduce the effects at the image boundaries. An adaptive-neighborhood
approach to address this limitation is described in Section 10.4.2, along with
examples of application.

10.4.2 Adaptive-neighborhood deblurring


Rabie et al. [178] proposed an adaptive-neighborhood deblurring (AND) algorithm
based on the use of adaptive-neighborhood regions determined individually
for each pixel in the input image. (See Section 3.7.5 for a description of
adaptive-neighborhood filtering.) In the AND approach, the image is treated
as being made up of a collection of regions (features or objects) of relatively
uniform gray levels. An adaptive neighborhood is determined for each pixel
in the image (called the seed pixel when it is being processed), being defined
as the set of pixels 8-connected to the seed pixel and having a difference in
gray level with respect to the seed that is within specified limits of tolerance.
The tolerance used in AND is an additive factor, as given by Equation 3.164.
Thus, the tolerance determines the maximum allowed deviation in the gray
level from the seed pixel value within each adaptive neighborhood, and any
deviation less than the tolerance is considered to be an intrinsic property
of the adaptive-neighborhood region. The number of pixels in an adaptive-neighborhood
region may be limited by a predetermined number Q; however,
there are no restrictions on the shape of the adaptive-neighborhood regions.
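A minimal sketch of growing one adaptive neighborhood by 8-connected region growing with an additive tolerance; the function name, queue-based traversal, and toy image are choices made here, not the book's implementation:

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol, max_size=None):
    # 8-connected region of pixels whose gray level is within +/- tol of the seed
    target = img[seed]
    region = {seed}
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                p = (i + di, j + dj)
                if (p not in region
                        and 0 <= p[0] < img.shape[0] and 0 <= p[1] < img.shape[1]
                        and abs(img[p] - target) <= tol):
                    region.add(p)
                    queue.append(p)
                    if max_size is not None and len(region) >= max_size:
                        return region   # region size limited by Q
    return region

img = np.zeros((8, 8)); img[2:5, 2:5] = 100.0   # one uniform 3x3 object
reg = grow_region(img, (3, 3), tol=10.0)
assert len(reg) == 9
```

Starting from a seed inside the uniform object, the grown region stops at the object's boundary because the surrounding background differs from the seed by more than the tolerance.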
Assuming that each adaptive-neighborhood region grown is large compared
to the ROS of the PSF, each such region may be expressed as the convolution
of the PSF with an equivalent adaptive-neighborhood region grown in the
original undegraded image $f(m, n)$. Thus, similar to Equation 10.91, we have
$$ g_{mn}(p, q) \simeq h(p, q) * f_{mn}(p, q) + \eta_{mn}(p, q), \quad (10.97) $$
where (m, n) is the seed pixel location for which the adaptive-neighborhood
region $g_{mn}(p, q)$ was grown, and (p, q) give the locations of the pixels within
the region. It is assumed that regions corresponding to $g_{mn}(p, q)$ may be
identified in the original image and noise fields.
Next, each adaptive-neighborhood region is centered within a rectangular
region of the same size as the input image (M × M); the area surrounding the
region is padded with its mean value in order to reduce edge artifacts and to
enable the use of the 2D FFT. Thus, in the frequency domain, Equation 10.97
becomes
$$ G_{mn}(u, v) \simeq H(u, v)\, F_{mn}(u, v) + \eta_{mn}(u, v). \quad (10.98) $$
Applying the 2D Hamming window $w_H(p, q)$ (see Equation 10.93) to each
mean-padded adaptive-neighborhood region, we obtain
$$ g_{mn}(p, q)\, w_H(p, q) \simeq [h(p, q) * f_{mn}(p, q)]\, w_H(p, q) + \eta_{mn}(p, q)\, w_H(p, q), \quad (10.99) $$
or
$$ g_{mn}^w(p, q) \simeq h(p, q) * f_{mn}^w(p, q) + \eta_{mn}^w(p, q), \quad (10.100) $$
where $w$ represents the corresponding windowed regions. The PSD of the
region $g_{mn}(p, q)$ may be expressed as
$$ \Phi_{g_{mn}}(u, v) \simeq |H(u, v)|^2\, \Phi_{f_{mn}}(u, v) + \Phi_{\eta_{mn}}(u, v). \quad (10.101) $$
In deriving the AND filter, the stationarity of the adaptive-neighborhood
regions grown is taken into account. An estimate of the spectrum of the noise
$\eta_{mn}(u, v)$, within the current adaptive-neighborhood region grown from the
seed pixel at (m, n), is obtained as
$$ \tilde{\eta}_{mn}(u, v) = A_{mn}(u, v)\, G_{mn}(u, v), \quad (10.102) $$
where $A_{mn}(u, v)$ is a frequency-domain, magnitude-only scale factor that
depends on the spectral characteristics of the adaptive-neighborhood region
grown. An estimate of $F_{mn}(u, v)$ is obtained from Equation 10.98 by using
the spectral estimate of the noise, $\tilde{\eta}_{mn}(u, v)$, in place of $\eta_{mn}(u, v)$. Then, we
have
$$ \tilde{F}_{mn}(u, v) = \frac{G_{mn}(u, v) - \tilde{\eta}_{mn}(u, v)}{H(u, v)}, \quad (10.103) $$
which reduces to
$$ \tilde{F}_{mn}(u, v) = \frac{G_{mn}(u, v)}{H(u, v)}\, [1 - A_{mn}(u, v)]. \quad (10.104) $$

The spectral noise estimator $A_{mn}(u, v)$ may be derived by requiring the
PSD of the estimated noise $\tilde{\eta}(u, v)$ to be equal to the original noise PSD
$\Phi_\eta(u, v)$ for the current adaptive-neighborhood region. Thus, using Equation
10.102, we can describe the relationship between the noise PSD and the
image PSD as
$$ \Phi_{\eta_{mn}}(u, v) = A_{mn}^2(u, v)\, \Phi_{g_{mn}}(u, v). \quad (10.105) $$
From Equation 10.101 and Equation 10.105, the spectral noise estimator $A_{mn}(u, v)$
is given by
$$ A_{mn}(u, v) = \left[ \frac{\Phi_\eta(u, v)}{|H(u, v)|^2\, \Phi_{f_{mn}}(u, v) + \Phi_\eta(u, v)} \right]^{1/2}, \quad (10.106) $$
where $\Phi_{\eta_{mn}}(u, v) = \Phi_\eta(u, v) = \sigma_\eta^2$ for Gaussian white noise. The quantity
in Equation 10.101 gives the denominator in Equation 10.106. Therefore, no
additional information is required about the PSD of the original undegraded
image.
The frequency-domain estimate of the uncorrupted adaptive-neighborhood
region is obtained by using the value of $A_{mn}(u, v)$, computed from Equation
10.106, in Equation 10.104. The spectral estimate of the original undegraded
adaptive-neighborhood region is thus given by
$$ \tilde{F}_{mn}(u, v) = \frac{G_{mn}(u, v)}{H(u, v)} \left[ 1 - \left( \frac{\Phi_\eta(u, v)}{|H(u, v)|^2\, \Phi_{f_{mn}}(u, v) + \Phi_\eta(u, v)} \right)^{1/2} \right] $$
$$ = \frac{G_{mn}(u, v)}{H(u, v)} \left[ 1 - \left( \frac{\Phi_\eta(u, v)}{\Phi_{g_{mn}}(u, v)} \right)^{1/2} \right]. \quad (10.107) $$
The space-domain estimate of the uncorrupted adaptive-neighborhood region
$\tilde{f}_{mn}(p, q)$ is obtained from the inverse Fourier transform of the expression
above. By replacing the seed pixel at (m, n) with the deblurred pixel
$\tilde{f}_{mn}(m, n)$, and running the algorithm above for every pixel in the input image,
we will obtain a deblurred image based on stationary adaptive regions.
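Equation 10.107 reduces to a simple frequency-domain gain applied to $G_{mn}/H$. The sketch below implements that gain for one region, estimating $\Phi_{g_{mn}}$ by the region's periodogram $|G_{mn}|^2$; the small regularization constant, the clipping of the gain to nonnegative values, and the noise-free test case are illustrative choices made here:

```python
import numpy as np

def and_estimate(G, H, noise_psd, eps=1e-12):
    # Equation 10.107: F = (G / H) * [1 - sqrt(noise_psd / psd_g)],
    # with the region PSD estimated by the periodogram |G|^2.
    psd_g = np.abs(G) ** 2 + eps
    gain = np.clip(1.0 - np.sqrt(noise_psd / psd_g), 0.0, None)
    return gain * G / (H + eps)

# Noise-free check: with noise_psd = 0 the gain is 1 and G/H recovers F exactly
M = 32
rng = np.random.default_rng(2)
f = rng.random((M, M))
u = np.fft.fftfreq(M)
H = np.exp(-20.0 * (u[:, None] ** 2 + u[None, :] ** 2))  # Gaussian OTF, never zero
G = H * np.fft.fft2(f)
f_rec = np.fft.ifft2(and_estimate(G, H, noise_psd=0.0)).real
assert np.allclose(f_rec, f, atol=1e-5)
```

With a nonzero `noise_psd` (for example, $\sigma_\eta^2$ for white noise), the square-root gain attenuates frequency components where the observed region PSD is close to the noise floor.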
A computational disadvantage of the algorithm above is the fact that it
requires two M × M 2D FFT operations per pixel. An approach to circumvent
this difficulty, to some extent, is provided by the intrinsic nature of
the adaptive neighborhood: because most of the pixels inside an adaptive-neighborhood
region will have similar adaptive-neighborhood regions when
they become seed pixels (because they lie within similar limits of tolerance),
instead of growing an adaptive-neighborhood region for each pixel in the input
image, we could grow adaptive-neighborhood regions only from those
pixels that do not already belong to a previously grown region. Thus, after
filtering an adaptive-neighborhood region, the entire adaptive-neighborhood
region may be placed in the output image at the seed pixel location, instead of
replacing only the single restored seed pixel. Note that the various adaptive-neighborhood
regions grown as above could still overlap. This approach is a
compromise to reduce the computational requirements.
Examples: The test image "Lenna", of size 128 × 128 pixels and 256
gray levels, is shown in Figure 10.14 (a). The image, after degradation by
a Gaussian-shaped blur PSF with a radial standard deviation $\sigma_r = 3$ pixels
and additive white Gaussian noise to 35 dB SNR, is shown in part (b) of the
figure. Parts (c)–(e) of the figure show three different fixed-neighborhood
sections of the blurred image. Each section is of size 32 × 32 pixels, and is
centered in a square region of the same size as the full image (128 × 128),
with the surrounding area padded with the mean value of the section. Part
(f) shows the windowed version of the region in part (e) after weighting with
a Hamming window. It is evident from the images in parts (c)–(e) that the
assumption of stationarity within a given section is not satisfied: each section
contains a variety of image characteristics. The values of the mean-padded
areas are also significantly different from the pixel values of the corresponding
centered sections.
Three different adaptive-neighborhood regions of the blurred test image
are shown in Figure 10.15 (c)–(e). Each adaptive-neighborhood region was
allowed to grow to any size as long as the pixel values were within an adaptive
tolerance given by $\sqrt{\sigma_g^2}$, where $\sigma_g$ is an estimate of the standard deviation of
the noise-free blurred image $g(m, n) = h(m, n) * f(m, n)$. Each adaptive-neighborhood
region was centered in a square region of the same size as the
full image (128 × 128), and the surrounding area was padded with the mean
value of the region. Unlike the fixed square sections shown in Figure 10.14
(c)–(e), the adaptive-neighborhood regions in Figure 10.15 (c)–(e) do
not contain large spatial fluctuations such as high-variance edges, but rather
slow-varying and relatively smooth details. Thus, we may assume that each
adaptive-neighborhood region approximates a stationary region in the input
image. From Figure 10.15 (c)–(e), it should also be observed that the
adaptive-neighborhood regions overlap.
Two restored versions of the blurred Lenna image, obtained using the sectioned
Wiener filter with sections of size 16 × 16 and 32 × 32 pixels, with the
adjacent sections overlapped by one-half of the section size in both dimensions,
are shown in Figure 10.16 (c) and (d), respectively. Overlapping the
sections has effectively suppressed the artifacts associated with the inner sections
of the image; however, it does not reduce the artifacts at the boundaries
of the image, because of the lack of information beyond the image boundaries.
Figure 10.16 (e) shows the test image deconvolved using the full image frame
with the Wiener filter. In this result, less noise smoothing has been achieved
as compared to the restored images using sections of size 32 × 32 or 16 × 16
pixels. This could be due to the nonstationarity of the full-frame image used
to calculate the PSD required in the Wiener filter equation.
The restored image obtained using the AND filter of Equation 10.107 is
shown in Figure 10.16 (f). The result has almost no edge artifacts, which is
due to the overlapping adaptive-neighborhood regions used. The AND result
has the lowest MSE of all of the results shown, as listed in Table 10.1.
The "Cameraman" test image, of size 128 × 128 pixels and 256 gray levels, is
shown in Figure 10.17 (a). Part (b) of the figure shows the image after degradation
by a PSF representing 9-pixel horizontal motion blurring and additive
noise to SNR = 35 dB. Two restored images obtained using the sectioned
Wiener filter with sections of size 16 × 16 and 32 × 32 pixels, with the adjacent
sections overlapped by one-half of the section size in both dimensions,
are shown in Figure 10.17 (c) and (d), respectively. Edge artifacts are apparent
in these deblurred images; the artifacts are more pronounced around the
vertical edges in the image than around the horizontal edges. This is due to
the shape of the blur PSF, which is a 1D function along the horizontal axis.
The result of deconvolution using the full image frame and the Wiener
filter is shown in Figure 10.17 (e). It is evident that edge artifacts do not
exist within the boundaries of this result.
The test image restored by the application of the AND filter of Equation
10.107 is shown in Figure 10.17 (f). Almost no edge artifact is present
in this image, due to the use of adaptive-neighborhood regions. The image is
sharper and cleaner than the other results shown in Figure 10.17; the AND
result also has the lowest MSE, as listed in Table 10.1.
The results demonstrate that image stationarity plays an important role in
the overall performance of restoration filters. The use of adaptive-neighborhood
regions can improve the performance of restoration filters.

10.4.3 The Kalman filter


The Kalman filter is a popular approach to characterize dynamic systems in
terms of state-space concepts [833, 887, 888, 889, 890]. The Kalman filter formulation
could be used for filtering, restoration, prediction, and interpolation
(smoothing).
Formulation of the Kalman filter: In the Kalman filter, the signals or
items of information involved are represented as a state vector f(n) and an
observation vector g(n), where n refers to the instant of time, or is an index

FIGURE 10.14
Sectioning of the Lenna image of size 128 × 128 pixels and gray-level range of
0–255. (a) Original image. (b) Blurred image with a Gaussian-shaped blur
function and noise to SNR = 35 dB; MSE = 607. (c), (d), and (e): Three 32 ×
32 sections mean-padded to 128 × 128 pixels. (f) Hamming-windowed version
of the region in (e). Reproduced with permission from T.F. Rabie, R.M.
Rangayyan, and R.B. Paranjape, "Adaptive-neighborhood image deblurring",
Journal of Electronic Imaging, 3(4):368–378, 1994. © SPIE.

FIGURE 10.15
Adaptive-neighborhood segmentation of the Lenna image of size 128 × 128
pixels and gray-level range of 0–255. (a) Original image. (b) Blurred image
with a Gaussian-shaped blur function and noise to SNR = 35 dB, MSE =
607. (c), (d), and (e): Three adaptive-neighborhood mean-padded regions.
(f) Hamming-windowed version of the region in (e). Reproduced with permission
from T.F. Rabie, R.M. Rangayyan, and R.B. Paranjape, "Adaptive-neighborhood
image deblurring", Journal of Electronic Imaging, 3(4):368–378, 1994. © SPIE.

FIGURE 10.16
(a) Lenna test image. (b) Blurred image with a Gaussian-shaped blur function
and noise to SNR = 35 dB, MSE = 607. Sectioned deblurring with
overlapped sections of size (c) 16 × 16 pixels, MSE = 783, and (d) 32 × 32
pixels, MSE = 501. (e) Full-frame Wiener filtering, MSE = 634. (f) Adaptive-neighborhood
deblurring, MSE = 292. Reproduced with permission from T.F.
Rabie, R.M. Rangayyan, and R.B. Paranjape, "Adaptive-neighborhood image
deblurring", Journal of Electronic Imaging, 3(4):368–378, 1994. © SPIE.

FIGURE 10.17
(a) Cameraman test image of size 128 × 128 pixels and gray-level range 0–255.
(b) Image blurred by 9-pixel horizontal motion and degraded by additive
Gaussian noise to SNR = 35 dB, MSE = 1,247. Deblurred images: (c) Sectioned
deblurring with overlapped sections of size 16 × 16 pixels, MSE = 539.
(d) Sectioned deblurring with overlapped sections of size 32 × 32 pixels, MSE =
424. (e) Full-frame Wiener filtering, MSE = 217. (f) Adaptive-neighborhood
deblurring, MSE = 181. Reproduced with permission from T.F. Rabie, R.M.
Rangayyan, and R.B. Paranjape, "Adaptive-neighborhood image deblurring",
Journal of Electronic Imaging, 3(4):368–378, 1994. © SPIE.

TABLE 10.1
Mean-squared Errors of the Results of Sectioned and
Adaptive-neighborhood Deblurring of the Lenna and Cameraman
Images of Size 128 × 128 Pixels and 256 Gray Levels for Various
Neighborhood Sizes and Two Different Blurring Functions, and
Approximate Computer Processing Time Using a SUN/Sparc-2
Workstation.

Filter         Section size    Time (min)   MSE, Lenna   MSE, Cameraman
Degraded       --              --           607          1,247
Wiener         16 × 16         25           783          539
PSE            16 × 16                      751          538
Wiener         32 × 32         10           501          424
PSE            32 × 32                      513          425
Wiener         64 × 64         5            483          463
PSE            64 × 64                      488          460
Wiener         Full frame      2            634          217
PSE            Full frame                   605          220
Adaptive
neighborhood   max = 16,384    15           292          181

Reproduced with permission from T.F. Rabie, R.M. Rangayyan, and R.B.
Paranjape, "Adaptive-neighborhood image deblurring", Journal of Electronic
Imaging, 3(4):368–378, 1994. © SPIE.
related to the sequence of the state or observation vectors. It is assumed
that the input is generated by a driving (or process) noise source $\eta_d(n)$, and
that the output is affected by an observation noise source $\eta_o(n)$. A state
transition matrix a(n+1, n) is used to indicate the modification of the state
vector from one instant n to the next instant (n+1). [Note: The argument
in the notation a(n+1, n) is used to indicate the dependence of the variable
(matrix) a upon the instants of observation n and (n+1), and not the indices
of the matrix a. This notation also represents the memory of the associated
system.] An observation matrix h(n) is used to represent the mapping from
the state vector to the observation vector.
Figure 10.18 illustrates the concepts described above in a schematic form. A
state vector could represent a series of the values of a 1D signal, or a collection
of the pixels of an image in a local ROS related to the pixel being processed;
see Figure 10.19 for an illustration of two commonly used types of ROS in
image filtering. The state vector should be composed of the minimal amount
of data that would be adequate to describe the behavior of the system. As we
have seen in Section 3.5, images may be represented using vectors, and image
filtering or transformation operations may be expressed as multiplication with
matrices. The state transition matrix a(n+1, n) could represent a linear
prediction or autoregressive model (see Section 11.8) that characterizes the
input state. The state vector would then be composed of a series of the
signal or image samples that would be related to the order of the model, and
be adequate to predict the subsequent values of the state and observation
vectors (with minimal error). The observation matrix h(n) could represent
a blurring PSF that degrades a given image. Observe that the Kalman filter
formulation permits the representation of the signals and their statistics, as
well as the operators affecting the signals, as functions that are nonstationary,
dynamic, or varying with time (or space).

FIGURE 10.18
State-space representation of the basic Kalman filter formulation. [Block diagram:
in the process system, the driving noise $\eta_d(n)$ is added to the fed-back
state a(n+1, n) f(n) to form f(n+1), which passes through a delay (memory)
element to yield the state f(n); in the observation system, the state is transformed
by the observation matrix h(n) and added to the observation noise $\eta_o(n)$ to
yield the observation g(n).]
The Kalman filter formulation [833] represents a new or updated value of
the state vector f recursively in terms of its previous value and new input as
$$ f(n+1) = a(n+1, n)\, f(n) + \eta_d(n). \quad (10.108) $$
This is known as the process equation; the corresponding model is also known
as the plant, process, or message model. If we let the state vector f be of
size M × 1, then the state transition matrix a is of size M × M, and the
driving noise vector $\eta_d$ is of size M × 1. We may interpret the driving noise
vector $\eta_d$ as the source of excitation of the process system represented by the
state vector f and the state transition matrix a. It is assumed that the noise
process $\eta_d$ is a zero-mean white-noise process that is statistically independent
of the stochastic process underlying the state vector f. The noise process $\eta_d$
is characterized by its M × M correlation matrix (ACF) $\Phi_d$ as
$$ E\!\left[ \eta_d(n)\, \eta_d^T(k) \right] = \begin{cases} \Phi_d(n) & \text{if } n = k, \\ 0 & \text{otherwise}. \end{cases} \quad (10.109) $$
The state transition matrix a(n+1, n) is characterized by the following
properties [833]:
$$ a(l, m)\, a(m, n) = a(l, n) \quad \text{(product rule)}; $$
$$ a^{-1}(m, n) = a(n, m) \quad \text{(inverse rule)}; \quad (10.110) $$
$$ a(m, m) = I \quad \text{(identity)}. $$
For a stationary system, the state transition matrix would be a constant
matrix a that is stationary or independent of time or space.
The output side of the dynamic system (see Figure 10.18) is characterized
by a measurement or observation matrix h(n) that transforms the state vector
f(n). The result is corrupted by a source of measurement or observation noise
$\eta_o$, which is assumed to be a zero-mean white-noise process that is statistically
independent of the processes related to the state vector f and the driving noise
$\eta_d$. It is assumed that the observation vector g(n) is of size N × 1 (different
from the size of the state vector f, which is M × 1); then, the observation
matrix h(n) is required to be of size N × M, and the observation noise $\eta_o(n)$
is required to be of size N × 1. We then have the measurement or observation
equation
$$ g(n) = h(n)\, f(n) + \eta_o(n). \quad (10.111) $$
Similar to the characterization of the driving noise in Equation 10.109, the
observation noise $\eta_o(n)$ is characterized by its N × N correlation matrix (ACF)
$\Phi_o$. Due to the assumption of statistical independence of $\eta_d$ and $\eta_o$, we have
$$ E\!\left[ \eta_d(n)\, \eta_o^T(k) \right] = 0 \quad \forall\, n, k. \quad (10.112) $$
With the situation formulated as above, the Kalman filtering problem may be stated as follows: given a series of observations G_n = {g(1), g(2), …, g(n)}, for each n ≥ 1, find the MMSE estimate of the state vector f(l). The application is referred to as filtering if l = n; prediction if l > n; and smoothing or interpolation if 1 ≤ l < n. Given our application of interest in the area of image restoration, we would be concerned with filtering. (Note: The derivation of the Kalman filter presented in the following paragraphs closely follows those of Haykin [833] and Sage and Melsa [889].)

FIGURE 10.19
Regions of support (ROS) for image filtering. (a) Illustration of the past, present, and future in filtering of an image, pixel by pixel, in raster-scan order. (b) Nonsymmetric half plane (NSHP) of order or size P1 × P2 × P3. (c) Quarter plane (QP) of order or size P1 × P2.
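Before deriving the filter, it helps to see the process and observation systems formulated above in action. The following simulation is a minimal sketch: the matrices, noise levels, and dimensions (M = 2, N = 1) are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 2, 1                      # state and observation dimensions (illustrative)
a_mat = np.array([[0.95, 0.10],  # constant state transition matrix a
                  [0.00, 0.90]])
h = np.array([[1.0, 0.0]])       # observation matrix h (N x M)
Phi_d = 0.01 * np.eye(M)         # ACF (covariance) of the driving noise eta_d
Phi_o = 0.10 * np.eye(N)         # ACF of the observation noise eta_o

f = np.zeros(M)
states, observations = [], []
for n in range(200):
    # Process equation (10.108): f(n+1) = a(n+1, n) f(n) + eta_d(n).
    f = a_mat @ f + rng.multivariate_normal(np.zeros(M), Phi_d)
    # Observation equation (10.111): g(n) = h(n) f(n) + eta_o(n).
    g = h @ f + rng.multivariate_normal(np.zeros(N), Phi_o)
    states.append(f.copy())
    observations.append(g)
```

The Kalman filtering problem is then: given only the `observations`, recover the MMSE estimate of each entry of `states`.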
The innovation process: An established approach to obtain the solution to the Kalman filtering problem is via a recursive estimation procedure using a one-step prediction process, and the resultant difference, referred to as the innovation process [833, 889]. Suppose that, based upon the set of observations G_{n−1} = {g(1), g(2), …, g(n−1)}, the MMSE estimate of f(n−1) has been obtained; let this estimate be denoted as f̃(n−1|G_{n−1}). Given a new observation g(n), we could update the previous estimate and obtain a new state vector f̃(n|G_n). Because the state vector f(n) and the observation g(n) are related via the observation system, we may transfer the estimation procedure to the observation variable, and let g̃(n|G_{n−1}) denote the MMSE estimate of g(n) given G_{n−1}. Then, the innovation process is defined as
\[
\boldsymbol{\nu}(n) = \mathbf{g}(n) - \tilde{\mathbf{g}}(n|G_{n-1}), \quad n = 1, 2, \ldots. \tag{10.113}
\]
The innovation process ν(n) represents the new information contained in g(n) that cannot be estimated from G_{n−1}. Using the observation equation 10.111, we get
\[
\tilde{\mathbf{g}}(n|G_{n-1}) = \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1}) + \tilde{\boldsymbol{\eta}}_o(n|G_{n-1}) = \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1}), \tag{10.114}
\]
noting that η̃_o(n|G_{n−1}) = 0 because the observation noise is orthogonal to the past observations. Combining Equations 10.113 and 10.114, we have
\[
\boldsymbol{\nu}(n) = \mathbf{g}(n) - \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1}). \tag{10.115}
\]
Using Equation 10.111, Equation 10.115 becomes
\[
\boldsymbol{\nu}(n) = \mathbf{h}(n)\, \boldsymbol{\epsilon}_p(n, n-1) + \boldsymbol{\eta}_o(n), \tag{10.116}
\]
where ε_p(n, n−1) is the predicted state error vector at n using the information available up to (n−1), given by
\[
\boldsymbol{\epsilon}_p(n, n-1) = \mathbf{f}(n) - \tilde{\mathbf{f}}(n|G_{n-1}). \tag{10.117}
\]
The innovation process has the following properties [833]:
• ν(n) is orthogonal to the past observations g(1), g(2), …, g(n−1), and hence
\[
E\left[\boldsymbol{\nu}(n)\, \mathbf{g}^T(m)\right] = \mathbf{0}, \quad 1 \leq m \leq n-1. \tag{10.118}
\]
• The innovation process is a series of vectors composed of random variables that are mutually orthogonal, and hence
\[
E\left[\boldsymbol{\nu}(n)\, \boldsymbol{\nu}^T(m)\right] = \mathbf{0}, \quad 1 \leq m \leq n-1. \tag{10.119}
\]
• A one-to-one correspondence exists between the observations {g(1), g(2), …, g(n)} and the vectors of the innovation process {ν(1), ν(2), …, ν(n)}; that is, one of the two series may be derived from the other without any loss of information via linear operations.

The ACF matrix of the innovation process ν(n) is given by
\[
\boldsymbol{\Phi}_{\nu}(n) = E\left[\boldsymbol{\nu}(n)\, \boldsymbol{\nu}^T(n)\right] = \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n) + \boldsymbol{\Phi}_o(n), \tag{10.120}
\]
where
\[
\boldsymbol{\Phi}_p(n, n-1) = E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right] \tag{10.121}
\]
is the ACF matrix of the predicted state error, and the property that ε_p(n, n−1) and η_o(n) are mutually orthogonal has been used. The ACF matrix of the predicted state error provides a statistical representation of the error in the predicted state vector f̃(n|G_{n−1}).
Estimation of the state vector using the innovation process: The aim of the estimation process is to derive the MMSE estimate of the state vector f(l). Given that g is related to f via a linear transform, and that ν is linearly related to g, we may formulate the state vector as a linear transform of the innovation process as
\[
\tilde{\mathbf{f}}(l|G_n) = \sum_{k=1}^{n} \mathbf{L}_l(k)\, \boldsymbol{\nu}(k), \tag{10.122}
\]
where L_l(k), k = 1, 2, …, n, is a series of transformation matrices. Now, the predicted state error vector is orthogonal to the innovation process:
\[
E\left[\boldsymbol{\epsilon}_p(l, n)\, \boldsymbol{\nu}^T(m)\right] = E\left[\{\mathbf{f}(l) - \tilde{\mathbf{f}}(l|G_n)\}\, \boldsymbol{\nu}^T(m)\right] = \mathbf{0}, \quad m = 1, 2, \ldots, n. \tag{10.123}
\]
Using Equations 10.122, 10.123, and 10.119, we get
\[
E\left[\mathbf{f}(l)\, \boldsymbol{\nu}^T(m)\right] = \mathbf{L}_l(m)\, E\left[\boldsymbol{\nu}(m)\, \boldsymbol{\nu}^T(m)\right] = \mathbf{L}_l(m)\, \boldsymbol{\Phi}_{\nu}(m). \tag{10.124}
\]
Consequently, we get
\[
\mathbf{L}_l(m) = E\left[\mathbf{f}(l)\, \boldsymbol{\nu}^T(m)\right] \boldsymbol{\Phi}_{\nu}^{-1}(m). \tag{10.125}
\]
Using the expression above for L_l(m) in Equation 10.122, we have
\[
\tilde{\mathbf{f}}(l|G_n) = \sum_{k=1}^{n} E\left[\mathbf{f}(l)\, \boldsymbol{\nu}^T(k)\right] \boldsymbol{\Phi}_{\nu}^{-1}(k)\, \boldsymbol{\nu}(k), \tag{10.126}
\]
from which it follows that
\[
\tilde{\mathbf{f}}(l|G_n) = \sum_{k=1}^{n-1} E\left[\mathbf{f}(l)\, \boldsymbol{\nu}^T(k)\right] \boldsymbol{\Phi}_{\nu}^{-1}(k)\, \boldsymbol{\nu}(k) + E\left[\mathbf{f}(l)\, \boldsymbol{\nu}^T(n)\right] \boldsymbol{\Phi}_{\nu}^{-1}(n)\, \boldsymbol{\nu}(n). \tag{10.127}
\]
For l = n + 1, we obtain
\[
\tilde{\mathbf{f}}(n+1|G_n) = \sum_{k=1}^{n-1} E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(k)\right] \boldsymbol{\Phi}_{\nu}^{-1}(k)\, \boldsymbol{\nu}(k) + E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(n)\right] \boldsymbol{\Phi}_{\nu}^{-1}(n)\, \boldsymbol{\nu}(n). \tag{10.128}
\]
Using the process equation 10.108, we have, for 0 ≤ k ≤ n,
\[
E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(k)\right] = E\left[\{\mathbf{a}(n+1, n)\, \mathbf{f}(n) + \boldsymbol{\eta}_d(n)\}\, \boldsymbol{\nu}^T(k)\right] = \mathbf{a}(n+1, n)\, E\left[\mathbf{f}(n)\, \boldsymbol{\nu}^T(k)\right]. \tag{10.129}
\]
In arriving at the expression above, the property that η_d(n) and ν(k) are mutually orthogonal for 0 ≤ k ≤ n has been used. Now, using Equation 10.129 and Equation 10.126 with l = n, we get
\[
\begin{aligned}
\sum_{k=1}^{n-1} E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(k)\right] \boldsymbol{\Phi}_{\nu}^{-1}(k)\, \boldsymbol{\nu}(k)
&= \mathbf{a}(n+1, n) \sum_{k=1}^{n-1} E\left[\mathbf{f}(n)\, \boldsymbol{\nu}^T(k)\right] \boldsymbol{\Phi}_{\nu}^{-1}(k)\, \boldsymbol{\nu}(k) \\
&= \mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_{n-1}).
\end{aligned} \tag{10.130}
\]
Interpretation of the expression in Equation 10.128 is made easier by the following formulations.

The Kalman gain: Let
\[
\mathbf{K}(n) = E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(n)\right] \boldsymbol{\Phi}_{\nu}^{-1}(n); \tag{10.131}
\]
this is a matrix of size M × N, whose significance will be apparent after a few steps. The expectation in the equation above represents the cross-correlation matrix between the state vector f(n+1) and the innovation process ν(n). Using Equations 10.131 and 10.130, Equation 10.128 may be simplified to
\[
\tilde{\mathbf{f}}(n+1|G_n) = \mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_{n-1}) + \mathbf{K}(n)\, \boldsymbol{\nu}(n). \tag{10.132}
\]
This is an important result, indicating that we may obtain the MMSE estimate of the state vector f̃(n+1|G_n) by applying the state transition matrix a(n+1, n) to the previous estimate of the state vector f̃(n|G_{n−1}) and adding a correction term. The correction term K(n) ν(n) includes the innovation process ν(n) multiplied with the matrix K(n); for this reason, and in recognition of the original developer of the underlying procedures, the matrix K(n) is referred to as the Kalman gain.

In order to facilitate practical implementation of the steps required to compute the Kalman gain matrix, we need to examine a few related entities, as follows. Using Equation 10.117, we can write
\[
\begin{aligned}
E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right]
&= E\left[\mathbf{f}(n)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right] - E\left[\tilde{\mathbf{f}}(n|G_{n-1})\, \boldsymbol{\epsilon}_p^T(n, n-1)\right] \\
&= E\left[\mathbf{f}(n)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right],
\end{aligned} \tag{10.133}
\]
where the last step follows from the property that the estimated state vector f̃(n|G_{n−1}) and the predicted state error vector ε_p(n, n−1) are mutually orthogonal. Using Equations 10.129, 10.116, and 10.133, we have
\[
\begin{aligned}
E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(n)\right]
&= \mathbf{a}(n+1, n)\, E\left[\mathbf{f}(n)\, \boldsymbol{\nu}^T(n)\right] \\
&= \mathbf{a}(n+1, n)\, E\left[\mathbf{f}(n)\, \{\mathbf{h}(n)\, \boldsymbol{\epsilon}_p(n, n-1) + \boldsymbol{\eta}_o(n)\}^T\right] \\
&= \mathbf{a}(n+1, n)\, E\left[\mathbf{f}(n)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right] \mathbf{h}^T(n) \\
&= \mathbf{a}(n+1, n)\, E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right] \mathbf{h}^T(n) \\
&= \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n).
\end{aligned} \tag{10.134}
\]
In arriving at the result above, Equation 10.121 has been used; use has also been made of the property that f and η_o are independent processes. Using Equation 10.134 in Equation 10.131, we get the Kalman gain matrix as
\[
\mathbf{K}(n) = \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \boldsymbol{\Phi}_{\nu}^{-1}(n), \tag{10.135}
\]
which, upon the use of the expression in Equation 10.120 for Φ_ν(n), gives
\[
\mathbf{K}(n) = \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n) \left[\mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n) + \boldsymbol{\Phi}_o(n)\right]^{-1}. \tag{10.136}
\]
This expression for the Kalman gain matrix may be used with Equation 10.132 to update the estimate of the state vector.

Practical computation of the Kalman gain matrix may be facilitated further by the following derivations [833]. Extending Equation 10.117 one step further, we have
\[
\boldsymbol{\epsilon}_p(n+1, n) = \mathbf{f}(n+1) - \tilde{\mathbf{f}}(n+1|G_n). \tag{10.137}
\]
Putting together Equations 10.108, 10.115, 10.132, and 10.137, we have
\[
\boldsymbol{\epsilon}_p(n+1, n) = \mathbf{a}(n+1, n) \left[\mathbf{f}(n) - \tilde{\mathbf{f}}(n|G_{n-1})\right] - \mathbf{K}(n) \left[\mathbf{g}(n) - \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1})\right] + \boldsymbol{\eta}_d(n). \tag{10.138}
\]
Using the measurement equation 10.111 and Equation 10.117, the equation above may be modified to
\[
\begin{aligned}
\boldsymbol{\epsilon}_p(n+1, n) &= \mathbf{a}(n+1, n)\, \boldsymbol{\epsilon}_p(n, n-1)
- \mathbf{K}(n) \left[\mathbf{h}(n)\, \mathbf{f}(n) + \boldsymbol{\eta}_o(n) - \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1})\right] + \boldsymbol{\eta}_d(n) \\
&= \mathbf{a}(n+1, n)\, \boldsymbol{\epsilon}_p(n, n-1)
- \mathbf{K}(n)\, \mathbf{h}(n) \left[\mathbf{f}(n) - \tilde{\mathbf{f}}(n|G_{n-1})\right] + \boldsymbol{\eta}_d(n) - \mathbf{K}(n)\, \boldsymbol{\eta}_o(n) \\
&= \left[\mathbf{a}(n+1, n) - \mathbf{K}(n)\, \mathbf{h}(n)\right] \boldsymbol{\epsilon}_p(n, n-1) + \boldsymbol{\eta}_d(n) - \mathbf{K}(n)\, \boldsymbol{\eta}_o(n).
\end{aligned} \tag{10.139}
\]
Putting together Equations 10.121 and 10.139, and noting the property that the processes ε_p, η_d, and η_o are mutually uncorrelated, we have the ACF matrix of the predicted state error ε_p(n+1, n) as
\[
\begin{aligned}
\boldsymbol{\Phi}_p(n+1, n) &= E\left[\boldsymbol{\epsilon}_p(n+1, n)\, \boldsymbol{\epsilon}_p^T(n+1, n)\right] \\
&= \left[\mathbf{a}(n+1, n) - \mathbf{K}(n)\, \mathbf{h}(n)\right] \boldsymbol{\Phi}_p(n, n-1) \left[\mathbf{a}(n+1, n) - \mathbf{K}(n)\, \mathbf{h}(n)\right]^T
+ \boldsymbol{\Phi}_d(n) + \mathbf{K}(n)\, \boldsymbol{\Phi}_o(n)\, \mathbf{K}^T(n) \\
&= \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{a}^T(n+1, n)
- \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{a}^T(n+1, n) \\
&\quad - \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \mathbf{K}^T(n)
+ \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \mathbf{K}^T(n)
+ \boldsymbol{\Phi}_d(n) + \mathbf{K}(n)\, \boldsymbol{\Phi}_o(n)\, \mathbf{K}^T(n) \\
&= \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{a}^T(n+1, n)
- \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{a}^T(n+1, n) \\
&\quad - \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \mathbf{K}^T(n)
+ \mathbf{K}(n) \left[\mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n) + \boldsymbol{\Phi}_o(n)\right] \mathbf{K}^T(n) + \boldsymbol{\Phi}_d(n) \\
&= \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{a}^T(n+1, n)
- \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{a}^T(n+1, n) \\
&\quad - \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \mathbf{K}^T(n)
+ \mathbf{K}(n)\, \boldsymbol{\Phi}_{\nu}(n)\, \mathbf{K}^T(n) + \boldsymbol{\Phi}_d(n),
\end{aligned} \tag{10.140}
\]
which results in
\[
\boldsymbol{\Phi}_p(n+1, n) = \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n)\, \mathbf{a}^T(n+1, n) + \boldsymbol{\Phi}_d(n). \tag{10.141}
\]
In arriving at the result above, Equations 10.120 and 10.135 have been used. A new matrix Φ_p(n), of size M × M, has been introduced, defined as
\[
\boldsymbol{\Phi}_p(n) = \boldsymbol{\Phi}_p(n, n-1) - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1); \tag{10.142}
\]
use has been made of the property a^{-1}(n+1, n) = a(n, n+1), which follows from the inverse rule in Equation 10.110. Equation 10.141 is known as the Riccati equation, and assists in the recursive computation of the ACF matrix of the predicted state error.

The procedures developed to this point are referred to as Kalman's one-step or one-stage prediction algorithm [833, 889]. The algorithm is represented by, in order, Equations 10.135, 10.120, 10.115, 10.132, 10.142, and 10.141.
Application to filtering: In filtering, the aim is to compute the estimate f̃(n|G_n). The one-step prediction algorithm developed in the preceding paragraphs may be extended to the filtering application as follows.

Because the processes f and η_d are mutually independent, it follows from Equation 10.108 that the MMSE estimate of f(n+1) given G_n is
\[
\tilde{\mathbf{f}}(n+1|G_n) = \mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_n) + \tilde{\boldsymbol{\eta}}_d(n|G_n) = \mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_n). \tag{10.143}
\]
Premultiplying both sides by a(n, n+1), and using the inverse rule in Equation 10.110, we get
\[
\mathbf{a}(n, n+1)\, \tilde{\mathbf{f}}(n+1|G_n) = \mathbf{a}(n, n+1)\, \mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_n) = \tilde{\mathbf{f}}(n|G_n), \tag{10.144}
\]
or
\[
\tilde{\mathbf{f}}(n|G_n) = \mathbf{a}(n, n+1)\, \tilde{\mathbf{f}}(n+1|G_n). \tag{10.145}
\]
Thus, given the result of the one-step prediction algorithm f̃(n+1|G_n), we can derive the filtered estimate f̃(n|G_n).

Let us now consider the filtered estimation error, defined as
\[
\boldsymbol{\epsilon}_e(n) = \mathbf{g}(n) - \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_n). \tag{10.146}
\]
Using Equations 10.115, 10.132, and 10.145, we can modify Equation 10.146 as follows:
\[
\begin{aligned}
\boldsymbol{\epsilon}_e(n) &= \mathbf{g}(n) - \mathbf{h}(n) \left[\mathbf{a}(n, n+1)\, \tilde{\mathbf{f}}(n+1|G_n)\right] \\
&= \mathbf{g}(n) - \mathbf{h}(n)\, \mathbf{a}(n, n+1) \left\{\mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_{n-1}) + \mathbf{K}(n)\, \boldsymbol{\nu}(n)\right\} \\
&= \mathbf{g}(n) - \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1}) - \mathbf{h}(n)\, \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\nu}(n) \\
&= \boldsymbol{\nu}(n) - \mathbf{h}(n)\, \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\nu}(n) \\
&= \left[\mathbf{I} - \mathbf{h}(n)\, \mathbf{a}(n, n+1)\, \mathbf{K}(n)\right] \boldsymbol{\nu}(n).
\end{aligned} \tag{10.147}
\]
This expression indicates that the filtered estimation error ε_e(n) is related to the innovation process ν(n) through a conversion factor that is given by the matrix within the square brackets. Using Equations 10.120 and 10.135, we may simplify the expression above as follows:
\[
\begin{aligned}
\boldsymbol{\epsilon}_e(n) &= \left[\mathbf{I} - \mathbf{h}(n)\, \mathbf{a}(n, n+1)\, \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \boldsymbol{\Phi}_{\nu}^{-1}(n)\right] \boldsymbol{\nu}(n) \\
&= \left[\mathbf{I} - \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \boldsymbol{\Phi}_{\nu}^{-1}(n)\right] \boldsymbol{\nu}(n) \\
&= \boldsymbol{\nu}(n) - \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \boldsymbol{\Phi}_{\nu}^{-1}(n)\, \boldsymbol{\nu}(n) \\
&= \boldsymbol{\Phi}_o(n)\, \boldsymbol{\Phi}_{\nu}^{-1}(n)\, \boldsymbol{\nu}(n).
\end{aligned} \tag{10.148}
\]
The difference between the true state vector f(n) and the filtered estimate f̃(n|G_n), labeled as the filtered state error vector ε_f(n), is given by
\[
\boldsymbol{\epsilon}_f(n) = \mathbf{f}(n) - \tilde{\mathbf{f}}(n|G_n). \tag{10.149}
\]
Using Equations 10.132 and 10.145, we may modify the equation above as
\[
\begin{aligned}
\boldsymbol{\epsilon}_f(n) &= \mathbf{f}(n) - \mathbf{a}(n, n+1)\, \tilde{\mathbf{f}}(n+1|G_n) \\
&= \mathbf{f}(n) - \mathbf{a}(n, n+1) \left[\mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_{n-1}) + \mathbf{K}(n)\, \boldsymbol{\nu}(n)\right] \\
&= \mathbf{f}(n) - \tilde{\mathbf{f}}(n|G_{n-1}) - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\nu}(n) \\
&= \boldsymbol{\epsilon}_p(n, n-1) - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\nu}(n).
\end{aligned} \tag{10.150}
\]
Equation 10.117 has been used in the last step above, where ε_p(n, n−1) is the predicted state error vector at n using the information provided up to (n−1).

The ACF matrix of the filtered state error vector ε_f(n) is obtained as follows:
\[
\begin{aligned}
E\left[\boldsymbol{\epsilon}_f(n)\, \boldsymbol{\epsilon}_f^T(n)\right]
&= E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right]
+ \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, E\left[\boldsymbol{\nu}(n)\, \boldsymbol{\nu}^T(n)\right] \mathbf{K}^T(n)\, \mathbf{a}^T(n, n+1) \\
&\quad - E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\nu}^T(n)\right] \mathbf{K}^T(n)\, \mathbf{a}^T(n, n+1)
- \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, E\left[\boldsymbol{\nu}(n)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right].
\end{aligned} \tag{10.151}
\]
The expectation in the third term in the expression above may be simplified as
\[
E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\nu}^T(n)\right] = E\left[\{\mathbf{f}(n) - \tilde{\mathbf{f}}(n|G_{n-1})\}\, \boldsymbol{\nu}^T(n)\right] = E\left[\mathbf{f}(n)\, \boldsymbol{\nu}^T(n)\right], \tag{10.152}
\]
because f̃(n|G_{n−1}) is orthogonal to ν(n). Using Equation 10.129 with k = n and premultiplying both sides with a^{-1}(n+1, n) = a(n, n+1), we get
\[
E\left[\mathbf{f}(n)\, \boldsymbol{\nu}^T(n)\right] = \mathbf{a}(n, n+1)\, E\left[\mathbf{f}(n+1)\, \boldsymbol{\nu}^T(n)\right] = \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\Phi}_{\nu}(n). \tag{10.153}
\]
Equation 10.131 has been used for the second step above. Therefore, we have
\[
E\left[\boldsymbol{\epsilon}_p(n, n-1)\, \boldsymbol{\nu}^T(n)\right] = \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\Phi}_{\nu}(n). \tag{10.154}
\]
Using a similar procedure, the expectation in the fourth term of Equation 10.151 may be modified as
\[
E\left[\boldsymbol{\nu}(n)\, \boldsymbol{\epsilon}_p^T(n, n-1)\right] = \boldsymbol{\Phi}_{\nu}(n)\, \mathbf{K}^T(n)\, \mathbf{a}^T(n, n+1). \tag{10.155}
\]
Substituting Equations 10.153 and 10.155 in Equation 10.151, we get
\[
E\left[\boldsymbol{\epsilon}_f(n)\, \boldsymbol{\epsilon}_f^T(n)\right] = \boldsymbol{\Phi}_p(n, n-1) - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \boldsymbol{\Phi}_{\nu}(n)\, \mathbf{K}^T(n)\, \mathbf{a}^T(n, n+1). \tag{10.156}
\]
From Equation 10.135, we get
\[
\mathbf{K}(n)\, \boldsymbol{\Phi}_{\nu}(n) = \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n), \tag{10.157}
\]
using which we can modify Equation 10.156 as
\[
\begin{aligned}
E\left[\boldsymbol{\epsilon}_f(n)\, \boldsymbol{\epsilon}_f^T(n)\right]
&= \boldsymbol{\Phi}_p(n, n-1) - \mathbf{a}(n, n+1)\, \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \mathbf{K}^T(n)\, \mathbf{a}^T(n, n+1) \\
&= \boldsymbol{\Phi}_p(n, n-1) - \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n)\, \mathbf{K}^T(n)\, \mathbf{a}^T(n, n+1).
\end{aligned} \tag{10.158}
\]
The inverse rule of the state transition matrix (see Equation 10.110) has been used in the last step above. Because ACF matrices are symmetric, we may transpose the expression in Equation 10.158 and obtain
\[
\begin{aligned}
E\left[\boldsymbol{\epsilon}_f(n)\, \boldsymbol{\epsilon}_f^T(n)\right]
&= \boldsymbol{\Phi}_p(n, n-1) - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1) \\
&= \left[\mathbf{I} - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \mathbf{h}(n)\right] \boldsymbol{\Phi}_p(n, n-1) \\
&= \boldsymbol{\Phi}_p(n),
\end{aligned} \tag{10.159}
\]
where Equation 10.142 has been used for the last step. This result indicates that the matrix Φ_p(n) introduced in the Riccati equation 10.141 is the ACF matrix of the filtered state error.

Initial conditions: In practice, the initial state of the process equation 10.108 will not be known. However, it may be possible to describe it in a statistical manner, in terms of the mean and ACF of the state vector (or estimates thereof). The initial conditions given by
\[
\tilde{\mathbf{f}}(1|G_0) = \mathbf{0} \tag{10.160}
\]
and
\[
\boldsymbol{\Phi}_p(1, 0) = E\left[\mathbf{f}(1)\, \mathbf{f}^T(1)\right] \tag{10.161}
\]
result in an unbiased filtered estimate [833].
Summary of the Kalman filter: The following description of the Kalman filter, based upon one-step prediction, summarizes the main principles and procedures involved, and assists in implementing the filter [833].

Data available: The observation vectors G_n = {g(1), g(2), …, g(n)}.

System parameters assumed to be known:
• The state transition matrix a(n+1, n).
• The observation system matrix h(n).
• The ACF matrix of the driving noise, Φ_d(n).
• The ACF matrix of the observation noise, Φ_o(n).

Initial conditions:
• f̃(1|G_0) = E[f(1)] = 0.
• Φ_p(1, 0) = D_0, a diagonal matrix with values of the order of 10^{−2}.

Recursive computational steps: For n = 1, 2, 3, …, do the following:
1. Using Equation 10.136, compute the Kalman gain matrix as
\[
\mathbf{K}(n) = \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n) \left[\mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1)\, \mathbf{h}^T(n) + \boldsymbol{\Phi}_o(n)\right]^{-1}. \tag{10.162}
\]
2. Obtain the innovation process vector using Equation 10.115 as
\[
\boldsymbol{\nu}(n) = \mathbf{g}(n) - \mathbf{h}(n)\, \tilde{\mathbf{f}}(n|G_{n-1}). \tag{10.163}
\]
3. Using Equation 10.132, update the estimate of the state vector as
\[
\tilde{\mathbf{f}}(n+1|G_n) = \mathbf{a}(n+1, n)\, \tilde{\mathbf{f}}(n|G_{n-1}) + \mathbf{K}(n)\, \boldsymbol{\nu}(n). \tag{10.164}
\]
4. Compute the ACF matrix of the filtered state error, given by Equation 10.142, as
\[
\boldsymbol{\Phi}_p(n) = \boldsymbol{\Phi}_p(n, n-1) - \mathbf{a}(n, n+1)\, \mathbf{K}(n)\, \mathbf{h}(n)\, \boldsymbol{\Phi}_p(n, n-1). \tag{10.165}
\]
5. Using Equation 10.141, update the ACF matrix of the predicted state error as
\[
\boldsymbol{\Phi}_p(n+1, n) = \mathbf{a}(n+1, n)\, \boldsymbol{\Phi}_p(n)\, \mathbf{a}^T(n+1, n) + \boldsymbol{\Phi}_d(n). \tag{10.166}
\]
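The five recursive steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the method applied to images in this chapter: the constant (stationary) system matrices and noise ACFs are assumed values, and the filtered estimate f̃(n|G_n) is recovered from the one-step prediction via Equation 10.145.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed constant system (illustrative values).
a_mat = np.array([[0.95, 0.10],
                  [0.00, 0.90]])          # state transition a(n+1, n)
h = np.array([[1.0, 0.0]])               # observation matrix h(n)
Phi_d = 0.01 * np.eye(2)                 # driving-noise ACF
Phi_o = np.array([[0.25]])               # observation-noise ACF

# Simulate the process and observation systems (Equations 10.108, 10.111).
f_true = np.zeros(2)
g_all, f_all = [], []
for _ in range(300):
    f_true = a_mat @ f_true + rng.multivariate_normal(np.zeros(2), Phi_d)
    g_all.append(h @ f_true + rng.multivariate_normal(np.zeros(1), Phi_o))
    f_all.append(f_true.copy())

# Initial conditions: f~(1|G0) = 0, Phi_p(1, 0) = D0 (diagonal).
f_pred = np.zeros(2)                     # f~(n|G_{n-1})
Phi_p = 0.01 * np.eye(2)                 # Phi_p(n, n-1)
estimates = []
for g in g_all:
    # 1. Kalman gain (10.162).
    S = h @ Phi_p @ h.T + Phi_o
    K = a_mat @ Phi_p @ h.T @ np.linalg.inv(S)
    # 2. Innovation (10.163).
    nu = g - h @ f_pred
    # 3. State update (10.164): one-step prediction f~(n+1|G_n).
    f_next = a_mat @ f_pred + K @ nu
    # 4. Filtered-state-error ACF (10.165); note a(n, n+1) = a^{-1}(n+1, n).
    Phi_f = Phi_p - np.linalg.inv(a_mat) @ K @ h @ Phi_p
    # 5. Predicted-state-error ACF update (10.166).
    Phi_p = a_mat @ Phi_f @ a_mat.T + Phi_d
    # Filtered estimate (10.145): f~(n|G_n) = a(n, n+1) f~(n+1|G_n).
    estimates.append(np.linalg.inv(a_mat) @ f_next)
    f_pred = f_next

err_filt = np.mean([(e - f)[0] ** 2 for e, f in zip(estimates, f_all)])
err_obs = np.mean([(g - f[0]) ** 2 for g, f in zip(g_all, f_all)])
```

With a correct model, the mean-squared error of the filtered estimate (`err_filt`) should come out well below that of the raw observations (`err_obs`), which is the point of the recursion.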
The Kalman filter formulation is a general formulation of broad scope, relevance, and application. Haykin [833] provides a discussion of the Kalman filter as the unifying basis for recursive least-squares (RLS) filters. Sage and Melsa [889] present the derivation of a stationary Kalman filter and relate it to the Wiener filter.
Extension of the Kalman filter to image restoration: Extending the Kalman filter to 2D image filtering poses the following challenges:
• defining the state vector;
• determining the size (spatial extent) of the state vector;
• deriving the dynamic (space-variant) state transition matrix (process, phenomenon, or model);
• obtaining the dynamic (space-variant) observation matrix (system);
• estimating the driving and observation noise correlation matrices; and
• dealing with the matrices of large size due to the large number of elements in the state vector.

Woods and Radewan [891, 892] and Woods and Ingle [893] proposed a scalar processor that they called the reduced-update Kalman filter (RUKF), discussing in detail the options for the ROS of the filter [in particular, the NSHP ROS; see Figure 10.19 (b)] as well as the problems associated with the boundary conditions (see also Tekalp et al. [894, 895, 896]). Boulfelfel et al. [750] modified the RUKF algorithm of Woods and Ingle [893] for application to the restoration of SPECT images, with the following main characteristics:
• The state transition matrix was derived using a 2D AR model (see Section 11.8).
• The observation matrix was composed by selecting one of 16 PSFs depending upon the distance of the pixel being processed from the center of the image (see Figure 10.20 and Section 10.5 for details).
• Filtering operations were performed in a QP ROS, as illustrated in Figure 10.19 (c). The image was processed pixel by pixel in raster-scan order, by moving the QP ROS window from one pixel to the next.
• The size of the ROS was determined as the larger of the AR model order (related to the state transition matrix) and the width of the PSF.
• With the assumption that the observation noise follows the Poisson PDF, the variance of the observation noise was estimated as the total photon count in a local window.

FIGURE 10.20
In order to apply the nonstationary Kalman filter to SPECT images, the image plane is divided into 16 zones based upon the distance from the center of the axis of rotation of the camera [750]. A different PSF (MTF) is used to process the pixels in each zone. The width of the PSF increases with distance from the center.
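The zone-selection logic of this scheme can be sketched as follows. The image size, the Gaussian PSF shape, and the widths used here are illustrative assumptions; Boulfelfel et al. [750] used PSFs characterized for their camera, not an analytic model.

```python
import numpy as np

def zone_index(row, col, shape, n_zones=16):
    """Zone number (0 .. n_zones-1) based on distance from the image center."""
    center = (np.array(shape) - 1) / 2.0
    r = np.hypot(row - center[0], col - center[1])
    r_max = np.hypot(*center)            # distance from the center to a corner
    return min(int(n_zones * r / r_max), n_zones - 1)

def gaussian_psf(sigma, size=7):
    """Illustrative normalized Gaussian PSF; sigma grows with the zone index."""
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

shape = (128, 128)
# One PSF per zone: width increases with distance from the center.
psf_bank = [gaussian_psf(sigma=1.0 + 0.25 * z) for z in range(16)]

# Selecting the PSF for a given pixel:
z = zone_index(100, 30, shape)
psf = psf_bank[z]
```

During filtering, each pixel's observation model then uses the PSF of its own zone, which is what makes the observation system space-variant.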

The steps and equations of the scalar RUKF algorithm are as follows. (Update the index n to n + 1; at the end of a row, reset n = 1 and update m to m + 1.)
1. Project the previous estimate of the state vector one step forward by using the dynamic system model as
\[
\tilde{f}_b^{(m,n)}(m, n) = \sum_{\alpha} \sum_{\beta} a^{(m,n)}(\alpha, \beta)\, \tilde{f}_a^{(m,n-1)}(m-\alpha, n-\beta). \tag{10.167}
\]
2. Project the error ACF matrix one step forward as
\[
\Phi_b^{(m,n)}(m, n; \lambda, \rho) = \sum_{r} \sum_{s} a^{(m,n)}(r, s)\, \Phi_a^{(m,n-1)}(m-r, n-s; \lambda, \rho) \tag{10.168}
\]
and
\[
\Phi_b^{(m,n)}(m, n; m, n) = \sum_{\alpha} \sum_{\beta} a^{(m,n)}(\alpha, \beta)\, \Phi_b^{(m,n)}(m, n; m-\alpha, n-\beta) + \sigma_d^{2(m,n)}. \tag{10.169}
\]
3. Compute the updated Kalman gain matrix as
\[
K^{(m,n)}(p, q) = \frac{\displaystyle \sum_{\alpha} \sum_{\beta} h^{(m,n)}(\alpha, \beta)\, \Phi_b^{(m,n)}(m-\alpha, n-\beta;\, m-p, n-q)}
{\displaystyle \sum_{\alpha} \sum_{\beta} \sum_{\gamma} \sum_{\delta} h^{(m,n)}(\alpha, \beta)\, h^{(m,n)}(\gamma, \delta)\, \Phi_b^{(m,n)}(m-\alpha, n-\beta;\, m-\gamma, n-\delta) + \sigma_o^{2(m,n)}}. \tag{10.170}
\]
4. Update the state vector as
\[
\tilde{f}_a^{(m,n)}(p, q) = \tilde{f}_b^{(m,n)}(p, q) + K^{(m,n)}(m-p, n-q)
\left[g(m, n) - \sum_{\alpha} \sum_{\beta} h^{(m,n)}(\alpha, \beta)\, \tilde{f}_b^{(m,n)}(m-\alpha, n-\beta)\right]. \tag{10.171}
\]
5. Update the error ACF matrix as
\[
\Phi_a^{(m,n)}(p, q; \lambda, \rho) = \Phi_b^{(m,n)}(p, q; \lambda, \rho) - K^{(m,n)}(m-p, n-q)
\sum_{r} \sum_{s} h^{(m,n)}(r, s)\, \Phi_b^{(m,n)}(m-r, n-s; \lambda, \rho). \tag{10.172}
\]
In the notation used above, the superscript (m, n) indicates the filtering step (position of the QP filtering window); the indices within the argument of a variable indicate the spatial coordinates of the pixels of the variable being used or processed; the subscript b indicates the corresponding variable before updating; the subscript a indicates the corresponding variable after updating; and all summations are over the chosen ROS for the filtering operations. The subscript p has been dropped from the error ACF Φ.
With reference to the basic Kalman algorithm documented on page 915, the RUKF procedure described above has the following differences:
• The sequence of operations in the RUKF algorithm above follows the Kalman filter algorithm described by Sage and Melsa [889] (p. 268), and is different from that given by Haykin [833] as well as the algorithm on page 915.
• Equation 10.167 computes only the first part of the right-hand side of Equation 10.164.
• Equations 10.168 and 10.169 together perform the operations represented by Equation 10.166.
• Equation 10.170 is equivalent to Equation 10.162 except for the presence of the state transition matrix a(n+1, n) in the latter.
• Equation 10.171 is similar to Equation 10.164, with the observation that the innovation process ν(n) is given by the expression within the brackets on the right-hand side of the latter.
• Equation 10.172 is equivalent to Equation 10.165 except for the presence of the state transition matrix a(n, n+1) in the latter. Observe that the term a(n+1, n) in Equation 10.162 is cancelled by the term a(n, n+1) in Equation 10.165, due to the inverse rule given in Equation 10.110.
Illustrations of the application of the 2D Kalman filter (the RUKF algorithm) to the restoration of SPECT images are provided in Section 10.5.

10.5 Application: Restoration of Nuclear Medicine Images

Nuclear medicine images, including planar, SPECT, and PET images, are useful in functional imaging of several organs, such as the heart, brain, thyroid, and liver; see Section 1.7 for an introduction to the principles of nuclear medicine imaging. However, nuclear medicine images are severely affected by several factors that degrade their quality and resolution. Some of the important causes of image degradation in nuclear medicine imaging are briefly described below, along with suitable methods for image restoration [46, 897, 898].
• Poor quality control: SPECT images could be blurred by misalignment of the axis of rotation of the system in the image reconstruction process. Data in the projection images could become inaccurate due to defects and nonuniformities in the detector. Patient motion during image data acquisition leads to blurring.
• Poor statistics: The number of photons acquired in a nuclear medicine image will be limited due to the constrained dose of the radiopharmaceutical administered, the time that the patient can remain immobile for the imaging procedure, the time over which the distribution and activity of the radiopharmaceutical within the patient will remain stable, the low efficiency of detection of photons imposed by the collimator, the limited photon-capture efficiency of the detection system, and the attenuation of the photons within the body. These factors lead to low SNR in the images. SPECT images have poorer statistics due to the limited time of acquisition of each projection image. See Figure 3.68 for an illustration of increased levels of noise in planar images due to limited imaging periods.
• Photon-counting (Poisson) noise: The emission of gamma-ray photons is an inherently random process that follows the Poisson distribution (see Section 3.1.2). When the photon count is large, the Poisson PDF tends toward the Gaussian PDF (see Figure 3.8). The detection of gamma-ray photons is also a random process, governed by the probabilities of interaction of photons with matter. The noise in SPECT images is further affected by the filters used in the process of image reconstruction from projections: the noise is amplified and modified, resulting in a mottled appearance of relatively uniform regions in SPECT images.
• Gamma-ray attenuation: Photons are continuously lost as a gamma ray passes through an object, due to scattering and photoelectric absorption [899] (p. 140). The effect of these phenomena is an overall attenuation of the gamma ray. The number of photons that arrive at the detector will depend upon the attenuating effect of the tissues along the path of the ray, and hence will not be a true representation of the strength at the source (the organ that collected the radiopharmaceutical in relatively higher proportion).
• Compton scattering: Compton scattering occurs when a gamma-ray photon collides with an outer-shell electron in the medium through which it passes: the (scattered or secondary) gamma-ray photon continues in a different direction at a lower energy, and the electron that gained the energy is released and scattered [899] (pp. 140–142). Gamma-ray photons affected by Compton scattering cause counts in images at locations that correspond to no true emitting source, and appear as background noise. However, because the affected photon arrives at the detector with a lower energy than that at which it was emitted at the source (which is known), it may be rejected based upon this knowledge.
• Poor spatial resolution: The spatial resolution of a gamma camera is affected by two different factors: the intrinsic resolution of the detector and the geometric resolution of the collimator. The intrinsic resolution of the detector is related to the diameter of the PMTs, the thickness of the crystal, and the energy of the gamma rays used. The intrinsic resolution may be improved by using more PMTs, a thinner crystal, and gamma rays of higher energy. However, a thinner crystal would lead to lower efficiency in the detection of gamma-ray photons.
  The geometric resolution of the collimator is a function of the depth (thickness) of the collimator, the diameter of the holes, and the source-to-collimator distance. The geometric resolution of a collimator may be improved by making it thicker and reducing the size of the holes; however, these measures would reduce the number of photons that reach the crystal, and hence reduce the overall efficiency of photon detection.
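The photon-counting statistics described above are easy to check numerically. The sketch below (sample sizes and count levels are arbitrary) verifies two properties of the Poisson model: the variance equals the mean, so the relative noise 1/sqrt(mean) falls as the count rises, and for large counts the distribution tends toward a Gaussian, as reflected by its skewness 1/sqrt(mean) tending to zero.

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson photon counting: variance = mean; skewness = 1/sqrt(mean).
for mean_count in (10, 100, 10_000):
    counts = rng.poisson(mean_count, size=200_000)
    z = (counts - counts.mean()) / counts.std()   # standardized counts
    var_over_mean = counts.var() / counts.mean()  # should be close to 1
    skewness = (z**3).mean()                      # shrinks as mean grows
    print(mean_count, round(var_over_mean, 3), round(skewness, 3))
```

This is why planar images acquired over short periods (low counts) look visibly noisier than long acquisitions of the same scene.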

Digital image processing techniques may be applied to correct for some of


the degrading phenomena mentioned above lters may also be designed to
remove some of the eects 900, 901]. The following sections provide details of
a few methods to improve the quality of nuclear medicine images a schematic
representation of the application of such techniques is shown in Figure 10.21.

FIGURE 10.21
Schematic representation of the application of several techniques to improve the quality of nuclear medicine images. [Flowchart blocks: gamma rays from the patient; planar (projection) images; quality control; scatter compensation; attenuation correction (geometric averaging of conjugate projections); prereconstruction restoration (2D filter applied to each planar projection image); reconstruction from projections; post-reconstruction restoration (2D or 3D filter applied to the SPECT images); restored SPECT images.] Attenuation correction may be achieved via averaging of conjugate projections, modifications to the reconstruction algorithm, or other methods applied to the planar projections or the reconstructed SPECT data. The blocks are shown separately to emphasize the various procedures for quality improvement. Only one of the two blocks shown in dashed lines (prereconstruction or post-reconstruction processing) would be used.
10.5.1 Quality control

The quality of images obtained using a gamma camera is affected by imperfections in the detection system of the camera. System misalignment, detector nonlinearity, and detector nonuniformity are the major contributors to image degradation in this category. System misalignment errors arise when an incorrect axis of rotation is used in the reconstruction process, or due to nonlinearities in the field of view of the camera, and result in smearing of the SPECT image [46, 902]. System misalignment may be corrected through regular calibration of the camera system [902, 903]. Detector nonlinearity arises due to the compression of events located near the centers of the PMTs and an expansion between the PMTs [904, 905]. Detector nonlinearities are usually not perceptible; however, they can have significant effects on image nonuniformities and misalignment. The finite number of PMTs in the detector is also a source of image nonuniformity, which results in variations in counts in the acquired image of a uniform object [904, 906, 907].

The common approach to correct nonuniformity is by acquiring an image of a flood field (a uniform source) and then using it as a correction matrix for other images [46, 905, 908]. However, this method only corrects for variations in amplitude. A more sophisticated method stores correction matrices for regional differences in pulse-height spectra and for positions over the entire area of the detector. The correction matrices are then used on an event-by-event basis to compensate for regional differences by adjusting either the system gain or the pulse-height window, and to compensate for nonlinearities by accurately repositioning each event [904].

10.5.2 Scatter compensation


Compton scattering results in a broad, symmetric distribution centered about
the primary wavelength or energy level of the gamma rays at the source.
One approach to reduce the eect of scatter is by energy discrimination in
the camera system: by rejecting photons at all energy levels outside a nar-
row window centered at the photo-peak of the radio-isotope, most of the
Compton-scattered photons will be rejected however, this rejection is not
complete 909].
Jaszczak et al. [910] acquired projection images in two different energy windows,
one placed at the photo-peak of the radio-isotope and the other over the
Compton portion of the energy spectrum. A fraction of the second image was
subtracted from the photo-peak image. The procedure was shown to eliminate
most of the scatter; however, difficulty was encountered in determining
the fraction of the scatter image to be subtracted. Egbert and May [911]
modeled Compton scattering using the integral transport equation. Correction
was performed in an iterative procedure using an attenuation-corrected
reconstructed image as the initial estimate. The next image estimate was
computed as a subtraction of the product of the Chang point-wise attenuation
correction operator [912], with a scattering operator determined from a
given energy threshold, from the previous estimate of the image. This technique
suffers from the necessity of having to determine the scattering operator
for each distribution of the scattering medium, photon energy, and threshold.
Axelsson et al. [913] proposed a method in which the scatter distribution is
modeled as the convolution of the projection image with an exponential kernel,
and correction is performed by subtracting the estimate of the scatter
from the acquired projection image. A similar technique was described by
Floyd et al. [914, 915], in which the correction is performed by deconvolution
rather than by subtraction.
See Rosenthal et al. [898], Buvat et al. [916], and Ljungberg et al. [917] for
other methods for scatter compensation.
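The dual-energy-window idea can be sketched in a few lines. The toy window images, the subtraction fraction k, and the clipping of negative counts below are all illustrative assumptions (as noted above, choosing the fraction was the difficult part in practice):

```python
import numpy as np

def scatter_correct(photopeak_img, compton_img, k=0.5):
    """Dual-energy-window scatter correction: subtract a fraction k of
    the Compton-window image from the photopeak-window image, clipping
    negative counts to zero."""
    corrected = photopeak_img - k * compton_img
    return np.clip(corrected, 0.0, None)

# Toy example: a point source plus a broad scatter background.
primary = np.zeros((8, 8)); primary[4, 4] = 100.0
scatter = np.full((8, 8), 10.0)
photopeak = primary + 0.5 * scatter   # some scatter leaks into the photopeak window
compton = scatter                     # the Compton window sees mostly scatter
restored = scatter_correct(photopeak, compton, k=0.5)
```

With the leak fraction matched by k, the background is removed and the primary counts are preserved; a mismatched k under- or over-subtracts, which is exactly the sensitivity reported above.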

10.5.3 Attenuation correction


When gamma-ray photons pass through body tissues, several photons get
attenuated and scattered. The amount of attenuation depends upon the attenuation
coefficient of the tissues and the depth of the source of the photons.
The attenuation in nuclear medicine imaging may be represented as

I_d = I_s \exp(-\mu x)    (10.173)

where I_s is the intensity of the gamma ray at the source, I_d is the intensity
at the detector, μ is the attenuation coefficient of the attenuating medium
or tissue (assumed to be uniform in the expression above), and x is the distance
traveled by the gamma ray through the attenuating medium. Several
methods for attenuation correction have been described in the literature;
the methods may be divided into three categories: preprocessing correction
methods, intrinsic correction methods, and post-processing correction methods
[46, 912, 918, 919, 920, 921, 922, 923, 924, 925, 926].
The most common preprocessing correction procedure is to estimate the
source-to-collimator distance at each point in the image, and use Equation
10.173 to compute the gamma-ray intensity at the source [921]. Other approaches
include using the arithmetic mean or the geometric mean of conjugate
projections to correct projections for attenuation before reconstruction
[46]; see Section 10.5.5. These two methods are employed in most SPECT
systems, and give acceptable results for head and liver SPECT images, where
the attenuation coefficients may be assumed to be constant.
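Equation 10.173 inverts directly once μ and the depth are estimated. A minimal sketch, in which the value μ ≈ 0.15 cm⁻¹ (roughly soft tissue at 140 keV) and the 10 cm depth are illustrative assumptions:

```python
import math

def attenuation_correct(I_d, mu, x):
    """Invert Equation 10.173, I_d = I_s * exp(-mu * x): estimate the
    source intensity I_s from the detected intensity I_d, given the
    attenuation coefficient mu (1/cm) and the source depth x (cm)."""
    return I_d * math.exp(mu * x)

# Forward model followed by correction: a source 10 cm deep in a
# water-like medium (mu ~ 0.15/cm) is attenuated by exp(-1.5) ~ 0.22.
I_s = 100.0
I_d = I_s * math.exp(-0.15 * 10.0)
I_s_hat = attenuation_correct(I_d, 0.15, 10.0)
```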
In the intrinsic correction methods, attenuation correction is incorporated
into the reconstruction algorithm. Gullberg and Budinger [924] proposed a
technique where the solution to the attenuated Radon integral for the case of
a uniformly attenuating medium is used. This technique is limited to images
with high statistics because it can amplify noise and introduce artifacts in
low-count images. Censor et al. [927] proposed a correction method where
both the attenuation coefficient map and the radio-isotope distribution are
estimated using discrete estimation theory; this approach provides noisier
images but allows correction of nonuniform attenuation.
A commonly used post-processing correction method is the Chang iterative
technique [912], in which each pixel of the SPECT image is multiplied by
the reciprocal of the average attenuation along all rays from the pixel to the
boundaries of the attenuating medium. The results are then iteratively refined
by reprojecting the image using the assumed attenuation coefficients [925].
The difference between the true (acquired) projections and those obtained by
reprojection is used to correct the image. This method does not amplify noise,
and performs well in the case of inhomogeneous attenuating media.
See Rosenthal et al. [898] and King et al. [928] for other methods for attenuation
correction.
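The multiplicative first step of the Chang technique can be sketched as follows for an assumed circular, uniformly attenuating medium; the iterative reprojection refinement is omitted, and the closed-form ray-length geometry is a simplification introduced here:

```python
import math

def chang_first_order(image, mu, radius, pixel_size, n_angles=64):
    """Zeroth-order Chang correction (sketch): multiply each pixel of a
    reconstructed slice by the reciprocal of the average attenuation
    factor exp(-mu * l) over rays from the pixel to the boundary of an
    assumed circular, uniformly attenuating medium."""
    n = len(image)
    out = [row[:] for row in image]
    cx = cy = (n - 1) / 2.0
    for yi in range(n):
        for xi in range(n):
            x = (xi - cx) * pixel_size
            y = (yi - cy) * pixel_size
            if x * x + y * y > radius * radius:
                continue                     # outside the medium
            acc = 0.0
            for k in range(n_angles):
                t = 2.0 * math.pi * k / n_angles
                # distance from (x, y) to the circle along direction t
                b = x * math.cos(t) + y * math.sin(t)
                l = -b + math.sqrt(b * b + radius * radius - x * x - y * y)
                acc += math.exp(-mu * l)
            out[yi][xi] = image[yi][xi] * n_angles / acc
    return out
```

At the center of the medium every ray length equals the radius, so the correction factor reduces to exp(μR), consistent with inverting Equation 10.173.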

10.5.4 Resolution recovery


Image restoration techniques could be applied to nuclear medicine images to
perform resolution recovery (deblurring) and noise removal. Most restoration
methods assume the presence of additive signal-independent white noise and
a shift-invariant blurring function. However, nuclear medicine images are
corrupted by Poisson noise that is correlated with or dependent upon the
image data, and the blurring function is shift-variant. Furthermore, the noise
present in the planar projection images is amplified in SPECT images by the
reconstruction algorithm. The SNR of nuclear medicine images is low, which
makes recovery or restoration difficult.
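The signal dependence of the noise is easy to demonstrate: for Poisson counts, the variance tracks the mean, unlike the fixed-variance additive noise assumed by many filters. A stdlib-only sketch (Knuth's sampling method; the mean of 10 counts per pixel is an arbitrary choice):

```python
import math
import random

def poisson_sample_stats(lam, n=100_000, seed=42):
    """Draw n Poisson(lam) samples (Knuth's method, adequate for small
    lam) and return their sample mean and variance."""
    rng = random.Random(seed)
    L = math.exp(-lam)

    def draw():
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    xs = [draw() for _ in range(n)]
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    return m, v

# Variance ~ mean: the noise level rises with the local count density.
m, v = poisson_sample_stats(10.0)
```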
Several methods have been proposed for restoring nuclear medicine images
[86, 87, 132, 749, 750, 751, 834, 835, 842, 844, 845, 846, 847, 848, 849, 851,
929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939]. SPECT images may
be either restored after reconstruction (post-reconstruction restoration) [132,
834, 844, 842, 851, 929], or the projection (planar) images may be restored
first and SPECT images reconstructed later (prereconstruction restoration)
[87, 751, 835, 845, 851, 930]; see Figure 10.21.
Boardman [929] applied a constrained deconvolution filter to restore scintigraphic
images of the brain: the filter required only the PSF as a priori information;
it was assumed that there were no discontinuities in the object. Madsen
and Park [930] applied a Fourier-domain filter on the projection set for the
enhancement of SPECT images (prereconstruction restoration); they assumed
the PSF to be a 2D isotropic Gaussian function. King et al. [834, 844, 845]
applied the Wiener filter and a count-dependent Metz filter to restore both
projection and SPECT images, and reported that prereconstruction restoration
showed better results than post-reconstruction restoration [845]. However,
a study of image contrast and percent fractional standard deviation of
counts in regions of uniform activity in the test images used in their studies
does not show a clear difference between the two methods. Webb et al. [842]
attempted post-reconstruction restoration of liver SPECT images; they proposed
a general formulation that unifies a number of filters, among which are
the inverse, maximum entropy, parametric Wiener, homomorphic, Phillips,
and hybrid filters. A study of image contrast in their work indicated that
properly tuned maximum-entropy or homomorphic filters provide good results.
Honda et al. [835] also attempted post-reconstruction restoration of
myocardial SPECT images using a combination of Wiener and Butterworth
filters; they assumed the SNR to be a constant and the system transfer function
to be Gaussian.
Boulfelfel et al. [751, 851] performed a comparison of prereconstruction
and post-reconstruction restoration filters applied to myocardial images. The
results obtained showed that prereconstruction restored images present a
significant decrease in RMS error and an increase in contrast over post-reconstruction
restored images. Examples from these studies are presented
in Section 10.5.6.
An important consideration in the restoration of SPECT images is the
stationarity of the PSF [46, 940, 941, 942, 943]. Hon et al. [132] and Boulfelfel
et al. [86, 749, 750] conducted several experiments to derive the FWHM values
of the PSF of several imaging configurations, and tabulated the variation in
the parameter with source-to-collimator distance and attenuating media.
Larsson [46] investigated the effects of using the arithmetic and geometric
mean of LSFs taken at opposing (conjugate) angular positions in air and water
to improve stationarity. It was found that the arithmetic mean of opposing
LSFs resulted in an LSF of approximately the same FWHM as the LSF at
the center of rotation of the camera, although there were significant amplitude
differences. However, the geometric mean of opposing LSFs provided comparable
FWHMs and amplitudes. Furthermore, SPECT images reconstructed
by using the geometric means of opposing LSFs did not show any significant
distortion due to the spatial variation in the detector response with distance
from the collimator, as compared to the results with the arithmetic means [46].
However, Larsson found that when more than one source is employed, the use
of the geometric mean resulted in interference artifacts between the sources.
Axelsson et al. [913] investigated the shape of the LSF resulting from the geometric
mean of opposing planar projections, and concluded that the LSF was
approximately stationary in the central two-thirds of their phantom. Msaki
et al. [944] also found that the stationarity of the 2D MTF improved in their
study of the variation in the PSF of the geometric means of opposing views for
source positions both orthogonal and parallel to the collimator axis. Boulfelfel
et al. [87] evaluated the use of the geometric mean of opposing projections
in prereconstruction restoration, and found that this preprocessing step could
lead to improved results; the details of their methods and results are presented
in Section 10.5.5.
Coleman et al. [945] used the arithmetic and geometric means of opposing
projections to calculate the 2D MTF and scatter fraction as a function of the
camera angle and source location. Their results showed that the geometric
mean provided an approximately stationary 2D MTF and scatter fraction,
except near the edges of the phantom. It was observed by Coleman
et al. that the arithmetic mean provided more nonstationary results than the
geometric mean, although both were less nonstationary than the MTF related
to the planar projections. It was concluded that the arithmetic mean may be
preferred to the geometric mean in the restoration of SPECT images because
the latter is nonlinear. Although Coleman et al. recommended the use of
averaging in restoration, their study did not perform any restoration experiments.
King et al. [946], continuing the study of Coleman et al. [945], used the
means of conjugate views (both arithmetic and geometric) for prereconstruction
attenuation correction in their study on the use of the scatter degradation
factor and Metz filtering for the improvement of activity quantitation; however,
the Metz filter they used was predefined and did not explicitly make use
of an MTF related to the averaging procedure. Glick et al. [942, 943, 947] and
Coleman et al. [945] also investigated the effect of averaging (arithmetic and
geometric) of opposing views on the stationarity of the PSF. Whereas these
studies concluded that the 2D PSF is not stationary and is shift-variant, the
3D PSF has been shown to be more stationary and only slightly affected by
the source-to-collimator distance [942].
The PSF of SPECT images has been shown to have a 3D spread [942, 943,
947]; regardless, most of the restoration methods proposed in the literature
assume a 2D PSF. The use of a 2D PSF results in only a partial restoration
of SPECT images, because the inter-slice blur is ignored. Boulfelfel et al. [86,
749, 948] and Rangayyan et al. [935] studied the 3D nature of the PSF and
proposed 3D filters to address this problem; examples from these works are
presented in Section 10.5.6.
It has been shown that discretization of the filtered backprojection process
can cause the MTF related to the blurring of SPECT images to be
anisotropic and nonstationary, especially near the edges of the camera's field
of view [942, 943, 947]. Furthermore, the Poisson noise present in nuclear
medicine images is nonstationary. Shift-invariant restoration techniques will
fail in the restoration of large images because they do not account for such variations
in the MTF and the noise. Therefore, restoration methods for SPECT
images should support the inclusion of a shift-variant MTF and nonstationary
noise characterization. Boulfelfel et al. [86, 750, 948] investigated the use of
a shift-variant 2D Kalman filter for the restoration of SPECT images. The
Kalman filter allows for the use of different MTFs and noise parameters at
each pixel, and was observed to perform better than shift-invariant filters in
the restoration of large objects and organs. Examples of this application are
presented in Section 10.5.6.

10.5.5 Geometric averaging of conjugate projections


Geometric averaging of conjugate projections (reviewed in Section 10.5.4 in
brief, and described in more detail in the following paragraphs) may be viewed
as a predistortion mapping technique that consists of a transformation applied
to the degraded image in such a way that the shift-variant blurring function
that caused the degradation becomes shift-invariant. The widely used coordinate
transformation method [827, 828, 949] is a predistortion mapping scheme
that eliminates the shift-variance of a blurring function by changing to a system
of coordinates in which the blur becomes shift-invariant. Shift-invariant
restoration may then be used, with the image changed back to the original
coordinates by an inverse coordinate transformation. Coordinate transformation
methods have been applied for shift-variant blurs such as motion blur,
where the shift-variance is linear. However, this technique is not applicable
in situations where the blur varies with the depth of field, or where a projection
(integration) operation is involved, as in planar nuclear medicine images.
Geometric averaging of conjugate projections could be interpreted as a predistortion
mapping scheme that reduces the shift-variance of the blur (but does not
eliminate it completely). Furthermore, because the procedure combines two
nearly identical planar projection images acquired from opposite directions
and averages them, only the blur function is affected, and the restored image
does not need any processing for coordinate change.
In the work of Boulfelfel et al. [86, 87], the MTF of the gamma camera was
modeled as

H(u, v) = \exp\left\{ -2 \left[ \frac{\pi \sigma \sqrt{u^2 + v^2}}{N f_s} \right]^2 \right\}    (10.174)

where N is the width of the MTF in pixels, f_s is the sampling frequency,
and σ is the standard deviation of the Gaussian in the model of the PSF.
The FWHM of the PSF was experimentally measured using a line source (see
Section 2.9 and Figure 2.21) for various source-to-collimator distances. The
variation of the FWHM with source-to-collimator distance d was found to be
linear, and modeled as

FWHM = a + b d    (10.175)

where a and b are the parameters of the linear model that were determined
from experimental measurements of FWHM. The standard deviation σ of the
Gaussian model of the PSF and MTF is related to FWHM as

\sigma = \frac{FWHM}{\gamma} = \frac{a + b d}{\gamma}    (10.176)

where γ = 2.355 is a constant of proportionality.
Let us assume that the camera is rotating around a point source that is
located away from the center of rotation of the camera, as shown in Figure
10.22. If the radius of rotation is R, and the camera is at a distance d
from the point source when at the closest position, then rotating the camera
by 180° places the point source at a distance (2R − d) from the camera. When
the point source is at a distance d from the camera, the MTF according to
Equation 10.174 is

H_\theta(u, v) = \exp\left\{ -2 \left[ \frac{\pi (a + b d) \sqrt{u^2 + v^2}}{\gamma N f_s} \right]^2 \right\}    (10.177)
where θ refers to the angle of the camera. The MTF at the conjugate position
is

H_{\theta'}(u, v) = \exp\left\{ -2 \left[ \frac{\pi [a + b (2R - d)] \sqrt{u^2 + v^2}}{\gamma N f_s} \right]^2 \right\}    (10.178)

where θ' = θ + 180° refers to the angle of the camera. The geometric mean
of the two MTFs given above is given by

H_g(u, v) = [H_\theta(u, v) \, H_{\theta'}(u, v)]^{1/2}.    (10.179)

Therefore, we have

H_g^2(u, v) = \exp\left\{ -\frac{2 \pi^2 (u^2 + v^2)}{\gamma^2 N^2 f_s^2} \left[ (a + b d)^2 + [a + b (2R - d)]^2 \right] \right\}
            = \exp\left\{ -\frac{2 \pi^2 (u^2 + v^2)}{\gamma^2 N^2 f_s^2} \left[ 2 a^2 + 4 R a b + b^2 (2 d^2 - 4 R d + 4 R^2) \right] \right\}    (10.180)

which leads to

H_g(u, v) = \exp\left\{ -\frac{2 \pi^2 (u^2 + v^2)}{\gamma^2 N^2 f_s^2} \left[ a^2 + 2 R a b + b^2 (d^2 - 2 R d + 2 R^2) \right] \right\}.    (10.181)
If the point source is located at the center of rotation of the camera, we
have d = R, and the MTF is reduced to

H_0(u, v) = \exp\left\{ -\frac{2 \pi^2 (u^2 + v^2)}{\gamma^2 N^2 f_s^2} \left( a^2 + 2 R a b + b^2 R^2 \right) \right\}.    (10.182)

Letting

B(u, v) = -\frac{2 \pi^2 (u^2 + v^2)}{\gamma^2 N^2 f_s^2}    (10.183)

we have

H_\theta(u, v) = \exp\left[ B(u, v) \, (a^2 + 2 a b d + b^2 d^2) \right]    (10.184)

H_g(u, v) = \exp\left[ B(u, v) \left\{ a^2 + 2 R a b + b^2 (d^2 - 2 R d + 2 R^2) \right\} \right]    (10.185)

and

H_0(u, v) = \exp\left[ B(u, v) \, (a^2 + 2 R a b + b^2 R^2) \right].    (10.186)

Equations 10.184, 10.185, and 10.186 are, respectively, the MTF related to a
point source located at a distance d from a camera rotating around a circle of
radius R, the geometric mean of the MTFs related to two opposing projections
of a point source located at distances d and (2R − d) from the camera, and the
MTF of a point source located at the center of rotation of the camera with
FIGURE 10.22
Gamma-camera imaging geometry illustrating conjugate projections being obtained
for a point source at distances d and 2R − d from the camera in the
two views, where R is the radius of rotation of the camera. The same principles
apply to imaging with a dual-camera or multi-camera imaging system.
Reproduced with permission from D. Boulfelfel, R.M. Rangayyan, L.J. Hahn,
and R. Kloiber, "Use of the geometric mean of opposing planar projections
in prereconstruction restoration of SPECT images", Physics in Medicine and
Biology, 37(10): 1915–1929, 1992. © IOP Publishing Ltd.

d = R. Equation 10.184 shows clearly that the MTF is highly dependent on
the source-to-collimator distance, whereas Equation 10.185 suggests that the
geometric averaging procedure makes the MTF less dependent on the source-to-collimator
distance (only the last factor in the equation is a function of the
source-to-collimator distance d), and that it is close to the MTF of a point
source located at the center of rotation (Equation 10.186).
Comparing Equations 10.184, 10.185, and 10.186 leads to the comparison
of the following three functions involving the source-to-collimator distance d
and the other parameters encountered above:

f(d) = a^2 + 2 a b d + b^2 d^2    (10.187)

f_g(d) = a^2 + 2 R a b + b^2 (d^2 - 2 R d + 2 R^2)    (10.188)

and

f_0(d) = a^2 + 2 R a b + b^2 R^2.    (10.189)

The subscripts of the function f(d) relate to the subscripts of H(u, v) in
Equations 10.184, 10.185, and 10.186. Figure 10.23 shows plots of the three
functions f(d), f_g(d), and f_0(d) for radius of rotation R = 20.0 cm, and
the FWHM model parameters a = 0.383762 and b = 0.0468488 (from experimental
measurements using line sources with the Siemens Rota camera at
the Foothills Hospital, Calgary). It is seen that f_0(d) = 1.74 (a constant),
f(d) varies from 0.15 to 5.09, and f_g(d) varies between 1.74 and 2.62. The
plots and the ranges of the values of the three functions show that geometric
averaging reduces the space-variance of the MTF.
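The quoted ranges are easy to reproduce numerically. This sketch evaluates Equations 10.187 through 10.189 over d in [0, 2R] with the reported parameter values, and also checks the identity f_g(d) = [f(d) + f(2R − d)]/2 that follows from the geometric mean of the two exponentials:

```python
# Reported parameters from the Siemens Rota camera measurements.
a, b, R = 0.383762, 0.0468488, 20.0

def f(d):                                     # Equation 10.187
    return a ** 2 + 2 * a * b * d + b ** 2 * d ** 2

def f_g(d):                                   # Equation 10.188
    return a ** 2 + 2 * R * a * b + b ** 2 * (d ** 2 - 2 * R * d + 2 * R ** 2)

def f_0(d):                                   # Equation 10.189
    return a ** 2 + 2 * R * a * b + b ** 2 * R ** 2

ds = [0.5 * i for i in range(81)]             # d from 0 to 2R = 40 cm
f_range = (min(map(f, ds)), max(map(f, ds)))        # ~ (0.15, 5.1)
fg_range = (min(map(f_g, ds)), max(map(f_g, ds)))   # ~ (1.74, 2.62)
# f_0 is constant (~1.74); geometric averaging clearly shrinks the spread.
```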

FIGURE 10.23
Plots of the distance functions f(d) in Equation 10.187 (solid line), f_g(d) in
Equation 10.188 (dashed line), and f_0(d) in Equation 10.189 (dotted line).
Reproduced with permission from D. Boulfelfel, R.M. Rangayyan, L.J. Hahn,
and R. Kloiber, "Use of the geometric mean of opposing planar projections
in prereconstruction restoration of SPECT images", Physics in Medicine and
Biology, 37(10): 1915–1929, 1992. © IOP Publishing Ltd.

Figure 10.24 shows profiles of the MTFs related to a point source at d =
20 cm and d = 40 cm with R = 30 cm, the averaged MTF with d = 20 cm
and d = 40 cm, and the MTF at the center of rotation of the camera (d = R),
computed using Equations 10.184, 10.185, and 10.186, respectively. The figure
shows that the averaged MTF and the MTF at the center of rotation are close
to each other, as compared to the MTFs for the point source at d = 20 cm
and d = 40 cm.
Boulfelfel et al. [87, 86] performed experimental measurements with a line
source to verify the validity of the theoretical results described above. The
line source was constructed with a thin plastic tube of internal radius of 1 mm
and filled with 1 mCi of 99mTc. No scattering medium was used. Figure 10.25

FIGURE 10.24
Computed profiles of MTFs related to point sources in gamma-camera imaging:
(a) averaged MTF with d = 20 cm and d = 40 cm, (b) MTF at the center
of rotation of the camera (d = R = 30 cm), (c) d = 20 cm, and (d) d = 40 cm.
Reproduced with permission from D. Boulfelfel, R.M. Rangayyan, L.J. Hahn,
and R. Kloiber, "Use of the geometric mean of opposing planar projections
in prereconstruction restoration of SPECT images", Physics in Medicine and
Biology, 37(10): 1915–1929, 1992. © IOP Publishing Ltd.
shows profiles of the PSF derived from the LSF for source-to-collimator distances
d = 20, 30, and 40 cm, obtained using the ADAC GENESYS camera
with the low-energy general-purpose collimator at the Foothills Hospital, Calgary.
The averaged PSF for d = 20 cm and 40 cm is also plotted in the figure.
It is seen that the averaged PSF matches closely the PSF at the central position
of d = 30 cm.

FIGURE 10.25
Experimentally measured profiles of PSFs in gamma-camera imaging: (a) d =
20 cm, (b) at the center of rotation of the camera (d = R = 30 cm), (c)
d = 40 cm, and (d) averaged PSF with d = 20 cm and d = 40 cm. Reproduced
with permission from D. Boulfelfel, R.M. Rangayyan, L.J. Hahn,
and R. Kloiber, "Use of the geometric mean of opposing planar projections
in prereconstruction restoration of SPECT images", Physics in Medicine and
Biology, 37(10): 1915–1929, 1992. © IOP Publishing Ltd.

The preceding derivations and arguments were based upon a point source
being imaged. In order to extend the arguments to a combination of distributed
sources, we could consider a planar source image p(x, y) that is parallel
to the plane of the camera. A 3D source may then be modeled as a
collection of several planar sources, with the individual planes being parallel
to the plane of the camera. Then, the acquired image of each plane could be
modeled as being convolved with the blur PSF for the corresponding distance
to the camera. The net projection of the 3D source would be the sum of the
blurred images of the constituent planes.
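The plane-by-plane model can be sketched with depth-dependent Gaussian PSFs following the FWHM = a + b d model; the parameter values, image sizes, and FFT-based circular convolution below are illustrative simplifications, and scatter and attenuation are ignored:

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """Normalized 2D isotropic Gaussian PSF with the given FWHM."""
    sigma = fwhm / 2.355
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

def project_planes(planes, depths, a=0.4, b=0.05, psf_size=15):
    """Model a planar projection of a 3D source as the sum of its
    constituent planes, each convolved with the PSF for its depth."""
    proj = np.zeros_like(planes[0], dtype=float)
    for plane, d in zip(planes, depths):
        psf = gaussian_psf(psf_size, a + b * d)
        # circular FFT convolution; adequate for this toy demonstration
        proj += np.real(np.fft.ifft2(np.fft.fft2(plane) *
                                     np.fft.fft2(psf, s=plane.shape)))
    return proj

near = np.zeros((32, 32)); near[16, 16] = 100.0   # plane close to the camera
far = np.zeros((32, 32)); far[8, 8] = 100.0       # deeper, more blurred plane
proj = project_planes([near, far], depths=[5.0, 25.0])
```

Because each PSF is normalized, the projection conserves the total count of the constituent planes while spreading the deeper plane over a wider area.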
For a planar source p(x, y) placed away from the center of the axis of rotation
of the camera, let us consider two planar images acquired, at an angle θ
and its conjugate θ'. In the frequency domain, we may represent the planar
images as

P_\theta(u, v) = H_\theta(u, v) \, P(u, v)    (10.190)

and

P_{\theta'}(u, v) = H_{\theta'}(u, v) \, P(u, v).    (10.191)

The geometric mean of the pair of conjugate planar images is given as

P_g(u, v) = [H_\theta(u, v) \, P(u, v) \, H_{\theta'}(u, v) \, P(u, v)]^{1/2}
          = [H_\theta(u, v) \, H_{\theta'}(u, v)]^{1/2} \, P(u, v)
          = H_g(u, v) \, P(u, v).    (10.192)

Therefore, the geometric mean of a pair of conjugate planar images of a planar
source is equal to the original source distribution blurred by an MTF that
is given by the geometric mean of the individual MTFs. This implies that
geometric means of pairs of conjugate planar images may be deconvolved by
using the geometric mean of the corresponding MTFs. In practice, it may be
appropriate to assume that the MTF is independent of the angular position
of the camera(s); then, the same averaged MTF may be used for all angles.
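Equation 10.192 can be checked directly in the frequency domain; the Gaussian MTF widths and the flat test spectrum below are arbitrary choices for illustration:

```python
import numpy as np

n = 64
u = np.fft.fftfreq(n)[:, None]
v = np.fft.fftfreq(n)[None, :]
r2 = u ** 2 + v ** 2

def gaussian_mtf(sigma):
    """Gaussian MTF, following the form of Equation 10.174."""
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * r2)

H_near, H_far = gaussian_mtf(2.0), gaussian_mtf(4.0)   # conjugate views
P = np.ones((n, n))                 # spectrum of the planar source
P_near, P_far = H_near * P, H_far * P
P_g = np.sqrt(P_near * P_far)       # geometric mean of the conjugate pair
H_g = np.sqrt(H_near * H_far)       # geometric mean of the two MTFs
# P_g equals H_g * P, so a single averaged MTF suffices for deconvolution.
```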
Pixel-by-pixel geometric averaging of opposing projections before prereconstruction
restoration can improve the performance of the restoration filter
because it reduces the space-variance of the blur; furthermore, the averaging
procedure reduces the effects of scatter and attenuation. The averaging
technique is applicable when the object to be restored is of medium size, over
which the averaged PSF (or MTF) may be assumed to be space-invariant
(see Figure 10.23). Prereconstruction restoration requires large computing
resources because each projection image needs to be restored. Averaging reduces
the filtering time for restoration by 50%, because each opposing pair of
projections is replaced by a single image. Geometric averaging of opposing
projections performs well in applications where the object is not located in a
corner of the field of view of the camera. It is required that projections be
acquired through the full range of 0°–360°.
Geometric averaging reduces the shift-variance of the blur function, but
does not completely eliminate the variance. Therefore, artifacts due to the
shift-variance of the blur function may remain in regions situated close to the
edges of the field of view of the camera. Prereconstruction restoration filtering
assumes that the blur function for all averaged projections is the same as for
points located at the axis of rotation of the camera. Geometric averaging and
prereconstruction restoration procedures are well-suited to SPECT imaging
of the brain, where the image is centered and does not occupy the full field of
view of the camera.
Examples of the application of geometric averaging as a preprocessing step
prior to the restoration of SPECT images are presented in Section 10.5.6.

10.5.6 Examples of restoration of SPECT images


Boulfelfel et al. [86, 87, 749, 750, 751, 851, 935, 948] conducted several studies
on the restoration of SPECT images using the Wiener, PSE, Metz, and
Kalman filters, including the options of prereconstruction restoration, post-reconstruction
restoration, and geometric averaging of the projections, as well
as the application of the filters in 2D or 3D. The MTFs of the imaging systems
used were experimentally measured using a line source to obtain the
LSF for various imaging parameters and configurations; see Sections 2.9 and
2.12. Some of their experiments and results are described in the following
paragraphs.
Images of a tubular phantom: A tubular phantom was constructed
with an acrylic tube of length 40 cm and internal diameter 35 mm, within
which was introduced a solid acrylic rod of diameter 15 mm. The tube was
filled with a solution containing 99mTc. Sixty planar projections, each of
size 64 × 64 pixels, were acquired, spanning the angular range of [0°, 180°],
with the source-to-collimator distances of 5 cm and 20 cm, using a Siemens
Rota camera with a low-energy all-purpose collimator. No scattering medium
was used around the phantom (other than the ambient atmosphere). SPECT
images of size 64 × 64 pixels each were reconstructed using the FBP algorithm.
Models of the PSD of the phantom were mathematically derived, for both
planar and SPECT imaging, from the known geometry of the phantom. Blob-like
artifacts could arise in filtered images due to the discontinuities that are
present in discrete mathematical models as above [950]. In order to prevent
such artifacts, the model PSDs were smoothed with a Gaussian window [951]
of the form

W(r) = \exp\left[ -\frac{1}{2} \left( \frac{r}{\alpha N / 2} \right)^2 \right]    (10.193)

where r = \sqrt{u^2 + v^2} is the index of radial frequency in the 2D Fourier space,
N is the width of the PSD array in pixels, and α is a scale factor. Under the
condition of MMSE of the restored images of known test images, the optimal
value of α was determined to be 0.4. The PSD of the noise was derived from
the known total count in the image being restored.
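The window of Equation 10.193 is straightforward to generate. In this sketch the scale factor is named alpha and defaulted to the reported optimum of 0.4; the frequency-grid convention is an assumption:

```python
import numpy as np

def psd_window(N, alpha=0.4):
    """Radial Gaussian window of Equation 10.193 over an N x N Fourier
    grid: W(r) = exp[-(1/2) (r / (alpha * N / 2))^2], with r the radial
    frequency index."""
    idx = np.fft.fftfreq(N) * N          # integer frequency indices
    r = np.sqrt(idx[:, None] ** 2 + idx[None, :] ** 2)
    return np.exp(-0.5 * (r / (alpha * N / 2.0)) ** 2)

W = psd_window(64)
# model_psd stands in for the discrete object PSD; the multiplication
# smooths its discontinuities and so suppresses blob artifacts.
model_psd = np.ones((64, 64))
smoothed_psd = W * model_psd
```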
Due to the small size of the phantom (in cross-section), and due to its
placement at the center of the field of imaging with respect to the axis of
rotation of the gamma camera, it is reasonable to assume that the PSF of the
imaging system is stationary. Hence, shift-invariant filters may be applied for
restoration.
The Wiener and PSE filters were used for prereconstruction restoration of
the planar images; in post-reconstruction restoration, the same filters were
also applied to the SPECT image. Two sample projection images of the
phantom are shown in Figure 10.26 for source-to-collimator distances of 5 cm
and 20 cm; it is evident that the latter image is blurred to a greater extent
than the former. The restored version of the planar image with the source-to-collimator
distance of 20 cm, using the Wiener filter, is shown in part
(c) of Figure 10.26; profiles of the original image and the restored image are
shown in Figure 10.27. The restored image clearly demonstrates the expected
reduction in counts at the center of the image (due to the solid rod).
Figure 10.28 shows the original and restored SPECT images for various
cases of filtering. Prereconstruction restoration gave better results than post-reconstruction
restoration. The Gaussian window applied to the object model
effectively reduced the hot spots (blobs) seen in the restored images without
the window. Observe that the blobs appear with significant amplitude in the
post-reconstruction restored images, but are reduced in the prereconstruction
restored images. RMS error values were computed for the original and restored
images with respect to the ideal (known) cross-section of the phantom (shown
in Figure 10.28 b). It was found that, whereas the acquired image had an RMS
error of 69.12, the error for the post-reconstruction Wiener-restored image was
48.58, and that for the prereconstruction Wiener-restored image was 26.52;
application of the Gaussian window to the model PSD further decreased the
error to 23.37. The results of the PSE filter and Metz filter (not shown) had
comparable RMS errors.
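The prereconstruction Wiener step can be sketched in the frequency domain; the Gaussian MTF, flat object PSD, and tiny noise PSD below are placeholders for the measured MTF and the smoothed model PSDs described above:

```python
import numpy as np

def wiener_restore(g, H, P_f, P_n):
    """Frequency-domain Wiener restoration: F_hat = H* G / (|H|^2 + P_n/P_f),
    where G is the spectrum of the blurred image g, H is the MTF, P_f is
    the object PSD model, and P_n is the noise PSD."""
    G = np.fft.fft2(g)
    W = np.conj(H) / (np.abs(H) ** 2 + P_n / P_f)
    return np.real(np.fft.ifft2(W * G))

# Demonstration with a known Gaussian MTF and a point source.
n = 32
u = np.fft.fftfreq(n)[:, None]
v = np.fft.fftfreq(n)[None, :]
H = np.exp(-2.0 * np.pi ** 2 * (u ** 2 + v ** 2))   # mild Gaussian MTF
f_true = np.zeros((n, n)); f_true[n // 2, n // 2] = 1.0
g = np.real(np.fft.ifft2(H * np.fft.fft2(f_true)))  # blurred planar image
restored = wiener_restore(g, H, P_f=np.ones((n, n)), P_n=1e-12)
```

With a near-zero noise PSD the filter approaches inverse filtering and recovers the point source; raising P_n/P_f trades resolution for noise suppression, which is the tuning at issue in the comparisons above.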
Images of a cardiac phantom: A cardiac phantom (Data Spectrum
Corporation [912]) was used to simulate myocardial perfusion images with
"defect" inserts to simulate myocardial ischemia and infarction. A sectional
view showing the dimensions of the phantom is illustrated in Figure 10.29. The
phantom consists of two U-shaped barrels of diameter 4.1 cm and 6.1 cm. The
space between the barrels was filled with 37 MBq (1 mCi) of 201Tl-chloride.
Several types of defect arrangements were used; Figure 10.29 shows, in cross-section,
a case with a 45° solid defect and a 45° defect with 50% activity. The
phantom was held inclined at 45° with respect to the axis of rotation of the
Rota camera with the low-energy all-purpose collimator, and 60 projections, of
size 64 × 64 pixels each, were acquired over 180°. The acquisition time for each
projection was 5 s, and the radius of rotation was 20 cm. The SPECT images
of different transaxial slices were reconstructed, and the 3D information was
used to derive oblique sectional images at 45° using the Siemens Micro-Delta
software. The total count for each image was between 15,000 and 17,000,
which corresponds to low-statistics images in clinical studies.
In performing restoration of the projection images of the cardiac phantom,
an object PSD model needs to be derived. If the model is derived directly from
the physical shape of the phantom, it will match the actual phantom images,
but will not be applicable for the restoration of real myocardial images with
unknown defects. A model that matches closely the general case is a hollow

(a) (b) (c)
FIGURE 10.26
Projection (planar) images of the tubular phantom with the source-to-collimator
distance of (a) 5 cm and (b) 20 cm. (c) Result of Wiener restoration
of the image in (b). Figures courtesy of D. Boulfelfel [86]. See also Figures
10.27 and 10.28.

FIGURE 10.27
Profiles of planar images of the tubular phantom across the center. Solid
line: ideal (true) profile. Dotted line: profile of the acquired planar image
with the source-to-collimator distance of 20 cm; see Figure 10.26 (b). Dashed
line: profile of the Wiener-filtered planar image; see Figure 10.26 (c). Figure
courtesy of D. Boulfelfel [86].

cylinder of appropriate length inclined in the same way as the phantom. In
the works of Boulfelfel et al., for each projection, a rotation was performed on
such a model, and the Radon integral was computed to model the projection.
The acquired SPECT image of the phantom, for the defect arrangement
illustrated in Figure 10.29, as well as the restored images obtained by prereconstruction
restoration using the Wiener and PSE filters, are shown in
Figure 10.30. The contrast values of the defect regions were computed as defined
in Equation 2.7. Groups of pixels representing the defect regions as well
as suitable background regions were selected manually over 16 × 16 subimages
using the physical model of the phantom as depicted in Figure 10.29. The
restoration filters resulted in more than doubling of the contrast values (with
respect to the contrast of the same defect in the acquired image) [86].
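As an illustration of the contrast measurement, the sketch below uses a simple-difference definition; the exact form of Equation 2.7 is given earlier in the book, so this particular formula is an assumption made here:

```python
import statistics

def contrast(roi_pixels, bg_pixels):
    """Contrast of a defect region against background, assuming the
    simple-difference definition C = (mu_roi - mu_bg) / mu_bg."""
    mu_roi = statistics.fmean(roi_pixels)
    mu_bg = statistics.fmean(bg_pixels)
    return (mu_roi - mu_bg) / mu_bg

# A cold defect at half the background activity gives C = -0.5;
# restoration that deepens the defect drives C toward -1.
c = contrast([50.0] * 256, [100.0] * 256)   # 16 x 16 pixel groups
```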
Cardiac SPECT images: One of the major applications of nuclear medicine
imaging is in cardiology. With the use of 201Tl, myocardial diseases
and defects such as ischemia and necrosis are detected in SPECT slices as
cold spots of reduced tracer concentration. Myocardial perfusion SPECT images,
in the short-axis view (see Figure 1.27 a), appear similar to the rings
of activity in the images of the tubular phantom in Figure 10.28. The cardiac
phantom, described in the preceding paragraphs, may also be used to
simulate myocardial perfusion SPECT images related to various pathological
conditions; see Figures 10.29 and 10.30.
938 Biomedical Image Analysis
FIGURE 10.28
(a) SPECT image of a tubular phantom (size 64 × 64 pixels; RMS error = 69.12
with respect to the known, ideal version of the image, shown in (b)). Post-
reconstruction restoration of the image in (a) using (c) the Wiener filter (RMS
error = 48.58), and (d) the PSE filter (RMS error = 54.12). SPECT image
after prereconstruction restoration of the planar images using (e) the Wiener
filter (RMS error = 26.52), and (f) the PSE filter (RMS error = 29.31). (g)
and (h) correspond to (e) and (f) with the inclusion of Gaussian smoothing of
the image PSD model (RMS errors = 23.37 and 26.57, respectively). Images
courtesy of D. Boulfelfel [86].
[Schematic labels: defect with 50% activity; defect with zero activity; space filled with radiopharmaceutical (100% activity); dimensions 41, 61, and 93 mm.]
FIGURE 10.29
Schematic representation of the cardiac phantom (Data Spectrum Corporation
[912]) used to simulate myocardial perfusion images with "defect" inserts
to simulate myocardial ischemia and infarction. The dimensions shown are in
mm.
FIGURE 10.30
(a) Acquired SPECT image of the cardiac phantom with the defect arrangement
illustrated in Figure 10.29. Result of prereconstruction restoration using
(b) the Wiener and (c) the PSE filter. Reproduced with permission from D.
Boulfelfel, R.M. Rangayyan, L.J. Hahn, and R. Kloiber, "Pre-reconstruction
restoration of single photon emission computed tomography images", IEEE
Transactions on Medical Imaging, 11(3): 336–341, 1992. © IEEE.
Unfortunately, myocardial SPECT images possess poor statistics, because
only a small fraction of the injected activity will accumulate in the
myocardium. Furthermore, as the peak photon energy of 201Tl is about 80 keV,
scattering has a serious effect on image quality. Boulfelfel et al. [86, 751]
applied prereconstruction restoration and post-reconstruction restoration
techniques using the Wiener, PSE, and Metz filters to myocardial SPECT images;
examples from their works are presented and discussed in the following
paragraphs.
In the procedure to acquire myocardial SPECT images of human patients,
74 MBq (2 mCi) of 201Tl was injected into the body. After accumulation
of the tracer in the myocardium, 44 planar projections, each of size 64 × 64
pixels, spanning the full range of [0°, 180°], were acquired. The time for the
acquisition of each projection was 30 s. Each projection image had a total
count in the range 10,000 to 20,000. The projections were acquired in an
elliptic trajectory with the average distance from the heart being about 20 cm.
Two energy peaks were used in the acquisition of the projections in order
to perform scatter correction using the dual-energy-window subtraction
technique. No attenuation correction was performed as the organ is small. Given
the imaging protocol as above, and the fact that the organ being imaged is
small, it is valid to assume that the blurring function is nearly shift-invariant;
hence, it becomes possible to apply shift-invariant filters, such as the Wiener,
PSE, and Metz filters, for the restoration of myocardial planar and SPECT
images.
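The dual-energy-window subtraction mentioned above can be sketched as follows; the scatter-window scaling factor k is an assumed, illustrative value (in practice it is calibrated for the camera and isotope).

```python
import numpy as np

def dual_energy_window_correction(photopeak, scatter_window, k=0.5):
    """Estimate scatter from a lower-energy window and subtract it.

    photopeak      : planar image acquired in the photopeak energy window
    scatter_window : planar image acquired in a lower (scatter) window
    k              : calibration factor scaling the scatter estimate
    """
    corrected = photopeak - k * scatter_window
    return np.clip(corrected, 0, None)  # counts cannot be negative

# Toy example: uniform scatter contributes 20 counts everywhere.
peak = np.full((64, 64), 120.0)
scat = np.full((64, 64), 40.0)
print(dual_energy_window_correction(peak, scat).mean())  # 100.0
```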
Figures 10.31 and 10.32 show one projection image each of two patients.
Transverse SPECT images were reconstructed, and oblique slices
perpendicular to the long axis of the heart were then computed from the 3D data
available. Figures 10.31 and 10.32 show one representative oblique section
image in each case, along with several restored versions of the images. The
parts of the myocardium with reduced activity (cold spots) are seen more
clearly in the restored images than in the original images. The results of
prereconstruction restoration applied to the planar images are better than those
of post-reconstruction restoration filtering in terms of noise content as well as
improvement in sharpness and clarity.
SPECT images of the brain: Radionuclide brain scanning has been used
extensively in the study of neurological and psychiatric diseases. The main
area of application is the detection of pathology in the cerebral hemispheres
and the cerebellum. A number of radiopharmaceuticals are used for brain
scanning; however, 99mTc-based materials are most widely used.
An advantage in brain imaging is that the patient's head can be positioned
at the center of rotation of the camera, which allows imaging over 360°, rather
than over only 180° as in the case of myocardial imaging. Although the brain
may be considered to be a large organ, the homogeneity of the (scattering)
medium and the ability to image it from a short distance using a circular orbit
allow the use of geometric averaging of the planar images as a preprocessing
FIGURE 10.31
Top: A sample planar projection image of a patient. (a) Short-axis SPECT
image showing the myocardium of the left ventricle in cross-section. Results
of post-reconstruction restoration applied to the SPECT image using
(b) the Wiener, (c) the PSE, and (d) Metz filters. Results of prereconstruction
restoration applied to the planar images using (e) the Wiener, (f) the
PSE, and (g) Metz filters. Images courtesy of D. Boulfelfel, L.J. Hahn, and
R. Kloiber, Foothills Hospital, Calgary [86].
FIGURE 10.32
Top: A sample planar projection image of a patient. (a) Short-axis SPECT
image showing the myocardium of the left ventricle in cross-section. Results
of post-reconstruction restoration applied to the SPECT image using
(b) the Wiener, (c) the PSE, and (d) Metz filters. Results of prereconstruction
restoration applied to the planar images using (e) the Wiener, (f) the
PSE, and (g) Metz filters. Images courtesy of D. Boulfelfel, L.J. Hahn, and
R. Kloiber, Foothills Hospital, Calgary [86].
step to reduce both attenuation and shift-variance of the blur before
restoration. Brain images are also not as low in statistics as myocardial images.
In the procedure for nuclear medicine imaging of the brain, after the 99mTc-
chloride administered to the patient had accumulated in the brain, 44 planar
projections, each of size 64 × 64 pixels, were acquired. The time for the
acquisition of each projection was 30 s. The projections were acquired over the
full range of 360° in a circular trajectory with the radius of rotation of 20 cm.
Two energy peaks were used in the acquisition of the projections to perform
scatter correction using the dual-energy-window subtraction technique.
Figures 10.33 and 10.34 show a set of opposing projection images as well
as their geometric mean for two patients. Transverse SPECT images were
reconstructed after performing geometric averaging of conjugate projections and
(prereconstruction) restoration using the Wiener, PSE, and Metz filters [86].
Figures 10.33 and 10.34 show one representative SPECT image in each case,
along with several restored versions. The results show that averaging of
conjugate projections improves the quality of the restored images, which are sharper
than the images restored without averaging.
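Geometric averaging of conjugate (opposing) projections can be sketched as follows; note that the conjugate view must be mirrored so that the two projections are spatially aligned before the pixel-wise geometric mean is taken. The left-right flip used here is an assumed alignment convention.

```python
import numpy as np

def geometric_mean_projection(view, conjugate_view):
    """Pixel-wise geometric mean of a projection and its 180-degree
    conjugate; the conjugate is flipped left-right for alignment."""
    aligned = np.fliplr(conjugate_view)
    return np.sqrt(view * aligned)

# Toy example: attenuation scales the counts seen from opposite sides
# by reciprocal factors; the geometric mean recovers the midline value.
true_counts = 100.0
front = np.full((64, 64), true_counts * 0.5)  # attenuated front view
back = np.full((64, 64), true_counts * 2.0)   # attenuated back view
print(geometric_mean_projection(front, back)[0, 0])  # 100.0
```

This reciprocal-attenuation cancellation is the reason geometric averaging reduces both attenuation effects and the shift-variance of the blur, as noted in the text.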
Images of a resolution phantom: Boulfelfel et al. [86, 87, 749, 750, 935]
conducted several restoration experiments with SPECT images of a
"resolution" phantom. The phantom contains nine pairs of hot spots of diameters
39, 22, 17, 14, 12, 9, 8, 6, and 5 mm in the "hot lesion" insert (Nuclear
Associates), with a total diameter of 200 mm; see Figure 3.68 for related
illustrations. The phantom was filled with 1 mCi of 201Tl-chloride, centered at the
axis of rotation of the gamma camera at a distance of 217 mm, and 120
projections, each of size 128 × 128 pixels, were acquired over 360°. SPECT images
of different transaxial slices were reconstructed using the Siemens Micro-Delta
software.
Given the large size of the phantom, it would be inappropriate to assume
that the degradation phenomena are shift-invariant. Boulfelfel et al. [750, 948]
applied the Kalman filter for restoration of SPECT images of the resolution
phantom. Figure 10.35 shows a representative SPECT image of the phantom,
along with (post-reconstruction) restoration of the image using the Kalman
filter; the results of application of the shift-invariant Wiener and PSE filters
are also shown for comparison. It is evident that the shift-variant Kalman
filter has provided better results than the other filters: the Kalman-restored
image clearly shows seven of the nine pairs of hot spots, whereas the results
of the Wiener and PSE filters show only four or five pairs. For the sake
of comparison, the results of prereconstruction restoration of the resolution
phantom image obtained by applying the shift-invariant Wiener, PSE, and
Metz filters after geometric averaging of conjugate projections are shown in
Figure 10.36. Observe that the orientation of these results is different from
that of the images in Figure 10.35 due to the alignment procedure required
for averaging. Although the results show some of the hot spots with more
clarity than the original image in Figure 10.35 (a), they are of lower quality
than the result of Kalman filtering, shown in Figure 10.35 (d).
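The Kalman filter used in these experiments is a shift-variant, image-domain recursive estimator; as a much simpler illustration of the underlying recursion, the following sketch applies a scalar Kalman filter along each row of a noisy image. The process and measurement noise variances q and r are assumed, illustrative values, not those used in the cited work.

```python
import numpy as np

def kalman_1d_rows(image, q=1.0, r=25.0):
    """Scalar Kalman filter applied independently along each row.

    q : process-noise variance (how quickly the true signal may change)
    r : measurement-noise variance (how noisy each pixel is)
    """
    out = np.empty_like(image, dtype=float)
    for i, row in enumerate(image):
        x, p = row[0], r          # initial state estimate and variance
        for j, z in enumerate(row):
            p = p + q             # predict: variance grows by q
            k = p / (p + r)       # Kalman gain
            x = x + k * (z - x)   # update with the measurement z
            p = (1 - k) * p
            out[i, j] = x
    return out

rng = np.random.default_rng(1)
clean = np.full((4, 256), 50.0)
noisy = clean + rng.normal(0, 5, clean.shape)
restored = kalman_1d_rows(noisy)
# The filtered rows track the signal with reduced noise variance.
```

The shift-variance of the full method enters through state and noise models that change with position, which this fixed-parameter sketch does not attempt to reproduce.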
FIGURE 10.33
Top row: A sample pair of conjugate projections of a patient, along with their
geometric mean. (a) SPECT image showing the brain in cross-section. Results
of prereconstruction restoration applied to the planar images using (b) the
Wiener, (c) the PSE, and (d) Metz filters. Results of geometric averaging
and prereconstruction restoration applied to the planar images using (e) the
Wiener, (f) the PSE, and (g) Metz filters. The orientation of the images in
(e)–(g) is different from that of the images in (a)–(d) due to the alignment
of conjugate projection images for geometric averaging. Images courtesy of
D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital, Calgary [86].
FIGURE 10.34
Top row: A sample pair of conjugate projections of a patient, along with their
geometric mean. (a) SPECT image showing the brain in cross-section. Results
of prereconstruction restoration applied to the planar images using (b) the
Wiener, (c) the PSE, and (d) Metz filters. Results of geometric averaging
and prereconstruction restoration applied to the planar images using (e) the
Wiener, (f) the PSE, and (g) Metz filters. The orientation of the images in
(e)–(g) is different from that of the images in (a)–(d) due to the alignment
of conjugate projection images for geometric averaging. Images courtesy of
D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital, Calgary [86].
FIGURE 10.35
(a) Acquired SPECT image (128 × 128 pixels) of the resolution phantom.
Post-reconstruction restored versions using (b) the Wiener filter, (c) the PSE
filter, and (d) the Kalman filter. The images (a)–(c) were enhanced by
gamma correction with γ = 0.8; the image (d) was enhanced with γ = 0.3
(see Section 4.4.3). See also Figure 3.68. Reproduced with permission from
D. Boulfelfel, R.M. Rangayyan, L.J. Hahn, R. Kloiber, and G.R. Kuduvalli,
"Restoration of single photon emission computed tomography images by the
Kalman filter", IEEE Transactions on Medical Imaging, 13(1): 102–109,
1994. © IEEE.
FIGURE 10.36
Prereconstruction restoration of the SPECT image of the resolution phantom
shown in Figure 10.35 (a) after geometric averaging of conjugate projection
images, using (a) the Wiener, (b) the PSE, and (c) the Metz filters. The
orientation of the images in this figure is different from that of the images in
Figure 10.35 due to the alignment of conjugate projection images for
geometric averaging. Images courtesy of D. Boulfelfel, L.J. Hahn, and R. Kloiber,
Foothills Hospital, Calgary [86].
SPECT images of the liver and spleen: Liver and spleen images are
difficult to restore because of their large size and irregular shape. The liver
and spleen are imaged together when radiopharmaceuticals that are trapped
by the reticulo-endothelial cell system are used. The most commonly used
radiopharmaceutical for this purpose is a 99mTc-based label. In the procedure
for imaging the liver and spleen, 2 mCi of a 99mTc-based radiopharmaceutical
was given to the patient. After the isotope accumulated in the liver
and spleen, 44 projections, each of size 64 × 64 pixels, were acquired. The
time for the acquisition of each projection was 40 s. The projections were
acquired over the full range of 360° in a circular trajectory, with the average
radius of rotation of 25 cm. Two energy peaks were used in the acquisition of
the projections in order to perform scatter correction using the dual-energy-
window subtraction technique. Transverse SPECT images were reconstructed
after averaging and correcting for attenuation using the Siemens Micro-Delta
processor. The Chang algorithm was used for attenuation correction.
Figures 10.37 and 10.38 show a sample SPECT slice of the liver and spleen
of two patients, along with its restored version using the Kalman filter. The
restored images demonstrate the full outlines of the liver and spleen with
improved clarity, and show a few cold spots within the organs with increased
contrast as compared to the original images. The clinical validity of this
observation was not confirmed.
3D restoration of SPECT images: Boulfelfel et al. [86, 750, 948, 935]
applied 3D filters for the restoration of SPECT images, including 3D
extensions of the Wiener, PSE, and Metz filters, as well as a combination of a 2D
Kalman filter in the SPECT plane and a 1D Metz filter in the inter-slice
direction. Figures 10.39 and 10.40 show a sample planar image of the liver and
FIGURE 10.37
(a) Acquired SPECT image of the liver and spleen of a patient. (b) Restored
image obtained by the application of the Kalman filter. Images courtesy of
D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital, Calgary [86].
FIGURE 10.38
(a) Acquired SPECT image of the liver and spleen of a patient. (b) Restored
image obtained by the application of the Kalman filter. Images courtesy of
D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital, Calgary [86].
spleen each of two patients, a sample SPECT image in each case, and restored
images of the SPECT slices after 3D restoration of the entire SPECT volumes
using the Wiener, PSE, and Metz filters. Figures 10.41 and 10.42 show a
sample SPECT image and the corresponding restored image after 3D restoration
of the entire SPECT volume using the 2D Kalman filter in the SPECT plane
and a 1D Metz filter in the inter-slice direction. The restored images show
more cold spots within the liver, with increased contrast; however, the clinical
validity of this observation was not confirmed. A sample SPECT image and
the corresponding restored version after 3D restoration of the entire SPECT
volume using the Kalman-Metz filter combination as above are shown in
Figure 10.43. Compared to the result of 2D filtering shown in Figure 10.35, the
3D filtering procedure appears to have yielded a better image.
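The 1D Metz filter used in the inter-slice direction can be sketched in the frequency domain as follows. The common form M(u) = [1 − (1 − |H(u)|²)^n] / H(u) is assumed here, where H(u) is the MTF of the blur; the order n (illustrative value below) trades resolution recovery against noise amplification.

```python
import numpy as np

def metz_filter(mtf, order=3):
    """Frequency response of the Metz filter for a given MTF.

    Approaches the inverse filter 1/H where the MTF is high, and
    rolls off smoothly where the MTF (and hence the SNR) is low.
    """
    h = np.asarray(mtf, dtype=float)
    return (1.0 - (1.0 - h**2) ** order) / h

# Example: Gaussian MTF sampled over 32 frequency bins.
u = np.linspace(0, 1, 32)
h = np.exp(-8 * u**2)
m = metz_filter(h)
# At low frequencies m is close to 1/h (deblurring);
# at high frequencies m tends toward zero (noise suppression).
```

Increasing the order pushes the transition toward higher frequencies, recovering more detail at the cost of amplifying more noise.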
10.6 Remarks
The widespread occurrence of image degradation in even the most
sophisticated and expensive imaging systems has continually frustrated and
challenged researchers in imaging and image processing. The field of image
restoration has attracted a high level of activity from researchers with several
different perspectives [8, 11, 822, 823, 824, 825, 952, 953, 954, 955]. In this
chapter, we have studied a small selection of techniques that are among the
popular approaches to this intriguing problem. Most of the restoration
techniques require detailed and specific information about the original undegraded
image and the degradation phenomena. Several additional constraints may
also be applied, based upon a priori and independent knowledge about the
desired image. However, it is often difficult to obtain accurate information
as above. The quality of the result obtained is affected by the accuracy of
the information provided and the appropriateness of the constraints applied.
The nature of the problem is characterized very well by the title of a special
meeting held on this subject: "Signal recovery and synthesis with incomplete
information and partial constraints" [954, 955]. Regardless of the difficulties
and challenges involved, researchers in the field of image restoration have
demonstrated that a good understanding of the problem can often lead to
usable solutions.
FIGURE 10.39
Top: A sample planar projection image of a patient. (a) SPECT image showing
the liver and spleen. Results of post-reconstruction 3D restoration applied
to the entire SPECT volume using (b) the Wiener, (c) the PSE, and (d) Metz
filters. Images courtesy of D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills
Hospital, Calgary [86].
FIGURE 10.40
Top: A sample planar projection image of a patient. (a) SPECT image showing
the liver and spleen. Results of post-reconstruction 3D restoration applied
to the entire SPECT volume using (b) the Wiener, (c) the PSE, and (d) Metz
filters. Images courtesy of D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills
Hospital, Calgary [86].
FIGURE 10.41
(a) Acquired SPECT image of the liver and spleen of a patient. (b) Restored
image obtained by the application of the 3D Kalman-Metz combined filter.
Images courtesy of D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital,
Calgary [86].
FIGURE 10.42
(a) Acquired SPECT image of the liver and spleen of a patient. (b) Restored
image obtained by the application of the 3D Kalman-Metz combined filter.
Images courtesy of D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital,
Calgary [86].
FIGURE 10.43
(a) Acquired SPECT image of the resolution phantom. (b) Restored image
obtained by the application of the 3D Kalman-Metz combined filter. Images
courtesy of D. Boulfelfel, L.J. Hahn, and R. Kloiber, Foothills Hospital,
Calgary [86].
10.7 Study Questions and Problems
Selected data files related to some of the problems and exercises are available at the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. Using mathematical expressions and operations as required, explain how a
degraded image of an edge may be used to derive the MTF H(u, v) of an
imaging system. State clearly any assumptions made, and explain their
relevance or significance.
2. Given g = hf + η and f̃ = Lg, where g is a degraded image, f is the
original image, η is the noise process, h is the PSF of the blurring system,
f̃ is the restored image, and L is the restoration filter, expand
ε² = E{Tr[(f − f̃)(f − f̃)^T]} and simplify the result to contain only h, L, and
the ACF matrices of f and η. Give the reasons for each step, and explain the
significance and implications of the assumptions made in deriving the Wiener
filter.
3. With reference to the Wiener filter for image restoration (deblurring), explain
the role of the signal-to-noise spectral ratio. How does this ratio control the
performance of the filter?
4. Prove that the PSE filter is the geometric mean of the inverse and Wiener
filters.
5. List the various items of information required in order to implement the
Wiener filter for deblurring a noisy image. Explain how you would derive
each item in practice.
10.8 Laboratory Exercises and Projects
1. Create or acquire a test image including components with sharp edges. Blur
the image by convolution with a Gaussian PSF. Add Gaussian-distributed
random noise to the blurred image.
Derive the MTF of the blurring function and the PSD of the noise. Pay
attention to the scale factors involved in the Fourier transform.
Restore the degraded image using the inverse, Wiener, and PSE filters. You
may have to restrict the inverse filter to a certain frequency limit in order to
prevent the amplification of noise.
How would you derive or model the ideal object PSD required in the design
of the Wiener and PSE filters?
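The steps of this exercise can be sketched as follows; the constant noise-to-signal spectral ratio used in the Wiener design below is an assumed simplification (in practice the object and noise PSDs would be modeled as the exercise suggests).

```python
import numpy as np

rng = np.random.default_rng(0)

# Test image with sharp edges: a bright square on a dark background.
f = np.zeros((64, 64))
f[16:48, 16:48] = 1.0

# Gaussian PSF, centered and normalized; blur via the frequency domain.
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))          # MTF of the blur
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))   # blurred image
g += rng.normal(0, 0.02, g.shape)               # additive Gaussian noise

# Wiener filter with a constant noise-to-signal spectral ratio.
nsr = 1e-2
W = np.conj(H) / (np.abs(H)**2 + nsr)
f_hat = np.real(np.fft.ifft2(np.fft.fft2(g) * W))

err_blur = np.sqrt(np.mean((g - f)**2))
err_rest = np.sqrt(np.mean((f_hat - f)**2))
# The restored image is closer to the original than the degraded one.
```

Replacing the constant nsr with the ratio of the noise PSD to a modeled object PSD gives the full Wiener design; setting nsr to zero recovers the inverse filter, which must then be band-limited to avoid noise amplification.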
2. Using a camera that is not in focus, capture a blurred image of a test image
containing a sharp line. Derive the PSF and the MTF of the imaging system.
3. Using a camera that is not in focus, capture a blurred image of a scene, such
as your laboratory, including a person and some equipment. Ensure that the
scene includes an object with a sharp edge (for example, the edge of a door
frame or a blackboard), as well as a uniform area (for example, a part of a
clean wall or board with no texture).
Derive the PSF and MTF of the imaging system by manual segmentation of
the edge spread function and further analysis as required. Estimate the noise
PSD by using segments of areas expected to be uniform. Design the Wiener
and PSE filters and restore the image.
How would you derive or model the ideal PSD of the original scene?
4. Restore the image in the preceding exercise by designing the blind deblurring
version of the PSE filter.
11
Image Coding and Data Compression

High spatial resolution and ne gray-scale quantization are often required


in biomedical imaging. Digital mammograms are typically represented in
arrays of 4 096 4 096 pixels with 12 b=pixel, leading to raw-data les of
the order of 32 MB per image. Volumetric data obtained by CT and MRI
could be of size 512 512 64 voxels with 16 b=voxel, occupying 32 MB
per examination. Patients with undetermined or multiple complications may
undergo several examinations via di erent modalities such as X-ray imaging,
ultrasound scanning, CT scanning, and nuclear medicine imaging, resulting
in large collections of image les.
Most health-care jurisdictions require medical records, including images, of
adults to be stored for durations of the order of seven years from the date
of acquisition. Children's records and images are required to be maintained
until at least the time they reach adulthood.
With the view to improve the eciency of storage and access, several imag-
ing centers and hospitals have moved away from lm-based storage toward
electronic storage. Furthermore, most medical imaging systems have moved
to direct digital image acquisition with adequate resolution, putting aside the
debate on the quality of an original lm-based image versus that of its scanned
(digitized) representation. Since 1980, an entire series of conferences has been
dedicated to PACS: see the PACS volumes of the SPIE Medical Imaging con-
ference series 956]. Networks and systems for PACS are integrated into the
infrastructure of most modern hospitals. The major advantages and disad-
vantages of digital and lm-based archival systems are listed below.
• Films deteriorate with age and handling. Digital images are unaffected
by these factors.
• Despite elaborate indexing schemes, films tend to get lost or misplaced.
Digital image files are less likely to face these problems.
• Digital image files may be accessed simultaneously by several users.
Although multiple copies of film-based images may be made, it would
be an expensive option that adds storage and handling complexities.
• With the proliferation of computers, digital images may be viewed and
manipulated at several convenient locations, including a surgical suite,
a patient's bedside, and one's home or office. Viewing film-based images
with detailed attention requires specialized viewing consoles under
controlled lighting conditions.
• Digital PACS require significant initial capital outlay, as well as routine
maintenance and upgrading of the computer, storage, and communication
systems. However, these costs may be offset by the savings in
the continuing costs of film, as well as the associated chemical processing
systems and disposal. The environmental concerns related to film
processing are also removed by digital PACS.
• Digital images may be compressed via image coding and data compression
techniques so as to occupy less storage space.
The final point above forms the topic of the present chapter.
Although the discussion above has been in the context of image storage or
archival, similar concerns regarding the size of image files and the desirability
of compression arise in the communication of image data. In this chapter,
we shall study the basic concepts of information theory that apply to image
coding, compression, and communication. We shall investigate several
techniques for encoding image data, including decorrelation procedures to modify
the statistical characteristics of the data so as to permit efficient
representation, coding, and compression.
The representation of the significant aspects of an image in terms of a small
number of numerical features for the purpose of pattern classification may
also be viewed as image coding or data compression; however, we shall treat
this topic separately (see Chapter 12).
11.1 Considerations Based on Information Theory
Image data compression is possible due to the following basic characteristics:
• Code redundancy: all code words (pixel values) do not occur with
equal probability.
• Spatial redundancy: the values of neighboring pixels tend to lie within
a small dynamic range, and exhibit a high level of correlation.
• Psychovisual redundancy: human analysts can recognize the essential
nature and components of an image from severely reduced versions such
as caricatures, edges, and regions, and need not (or do not) pay attention
to precise numerical values.
Information-theoretic considerations are based upon the notion of
information as related to the statistical uncertainty of the occurrence of an event
(such as a signal, an image, or a pixel value), rather than the structural,
symbolic, pictorial, semantic, or diagnostic content of the entity. The measure of
entropy is based upon the probabilities of occurrence of the various symbols
involved in the representation of a message or image: see Section 2.8. Despite
the mathematical and theoretical powers of measures such as entropy, the
standpoint of viewing an image as being composed of discrete and
independent symbols (numerical values) removes the analyst from the real-world and
physical properties of the image. The use of the underlying assumptions also
leads to severe limitations in entropy-based source coding, with lossless
compression factors often limited to the order of 2:1. Additional techniques based
upon decorrelation of the image data via the identification and modeling of
the underlying image-generation phenomena, or the use of pattern recognition
techniques, could assist in improving the performance of image compression
procedures.
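As an illustration of the entropy measure referred to above, the following sketch estimates the zeroth-order entropy of an 8-bit image from its normalized histogram, in bits per pixel:

```python
import numpy as np

def entropy_bits_per_pixel(image):
    """Zeroth-order entropy estimate from the normalized histogram."""
    counts = np.bincount(image.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                       # ignore empty histogram bins
    return -np.sum(p * np.log2(p))

# A two-valued image with equal proportions behaves like a fair
# binary source: one bit per pixel.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, :4] = 255                       # half the pixels are bright
print(entropy_bits_per_pixel(img))     # 1.0
```

Because this estimate ignores spatial correlation, it over-estimates the true source entropy of real images, which is the limitation discussed above.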
11.1.1 Noiseless coding theorem for binary transmission
Given a code with an alphabet of two symbols and a source A with an alphabet
of two symbols, the average length of the code words per source symbol may
be made arbitrarily close to the lower bound (entropy) H(A) by encoding
sequences of source symbols instead of encoding individual symbols [9, 126].
The average length L(n) of encoded n-symbol sequences is bounded by

H(A) ≤ L(n)/n ≤ H(A) + 1/n.    (11.1)
Difficulties exist in estimating the true entropy of a source due to the fact
that pixels are statistically dependent, that is, correlated, from pixel to pixel,
row to row, and frame to frame of real-life images. The computation of the
true entropy requires that symbols be considered in blocks over which the
statistical dependence is negligible. In practice, this would translate to
estimating joint PDFs of excessively long vectors. Values of entropy estimated
with single pixels or small blocks of pixels would result in over-estimates of
the source entropy. If blocks of pixels are chosen such that the sequence-
entropy estimates converge rapidly to the limit, then block-coding methods
may provide results close to the minimum length given by Equation 11.1.
Run-length coding may be viewed as an adaptive block-coding technique;
see Section 11.3.2.
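The effect of statistical dependence on entropy estimates can be demonstrated numerically. The following sketch compares the single-symbol entropy of a strongly correlated binary sequence with its per-symbol entropy estimated over pairs of symbols; the pair estimate is lower because the blocking captures part of the correlation:

```python
import numpy as np

def entropy_from_counts(counts):
    """Entropy, in bits, of a distribution given as raw counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A strongly correlated binary sequence: alternating runs of 64.
seq = np.repeat([0, 1, 0, 1, 0, 1, 0, 1], 64)

# Single-symbol estimate: 0s and 1s are equiprobable -> 1 bit/symbol.
h1 = entropy_from_counts(np.bincount(seq))

# Pair estimate: count 2-symbol blocks, then divide by block length.
pairs = seq[0::2] * 2 + seq[1::2]
h2 = entropy_from_counts(np.bincount(pairs, minlength=4)) / 2
# h2 < h1: the pairwise estimate exposes the redundancy in the runs.
```

Extending the block length further drives the estimate toward the true source entropy, at the cost of estimating the joint PDF of ever longer vectors, as noted above.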
11.1.2 Lossy versus lossless compression
A coding or compression method is considered to be lossless if the original
image data can be recovered, with no error, from the coded and compressed
data. Such a technique may also be referred to as a reversible, bit-preserving,
or error-free compression technique.
A compression technique becomes lossy or irreversible if the original data
cannot be recovered, with complete pixel-by-pixel numerical accuracy, from
the compressed data. In the case of images, the human visual system can
tolerate significant numerical differences or error, in the sense that the degraded
image recovered from the compressed data is perceived to be essentially the
same as the original image. This arises from the fact that a human
observer will, typically, not examine the numerical values of individual pixels,
but instead assess the semantic or pictorial information conveyed by the data.
Furthermore, a human analyst may tolerate more error, noise, or distortion in
the uniform areas of an image than around its edges that attract visual
attention. Data compression techniques may be designed to exploit these aspects
to gain significant advantages in terms of highly compressed representation,
with high levels of loss of numerical accuracy while remaining perceptually
lossless. By the same token, in medical imaging, if the numerical errors in the
retrieved and reconstructed images do not cause any change in the diagnostic
results obtained by using the degraded images, one could achieve high levels
of numerically lossy compression while remaining diagnostically lossless.
In the quest to push the limits of numerically lossy compression techniques
while remaining practically lossless under some criterion, the question arises as
to the worth of such practice. Medical practice in the present highly litigious
society could face large financial penalties and loss due to errors.
Radiological diagnosis is often based upon the detection of minor deviations from the
normal (or average) patterns expected in medical images. If a lossy data
compression technique were to cause such a faint deviation to be less perceptible
in the compressed (and reconstructed) image than in the original image, and
the diagnosis based upon the reconstructed image were to be in error, the
financial compensation to be paid would cost several times the amount saved
in data storage; the loss in professional standing and public confidence could
be even more damaging. In addition, defining the fidelity of representation in
terms of the closeness to the original image or distortion measures is a
difficult and evasive activity. Given the high levels of the professional care and
concern, as well as the fiscal and emotional investment, that are part of
medical image acquisition procedures, it would be undesirable to use a subsequent
procedure that could cause any degradation of the image. In this spirit, only
lossless coding and compression techniques will be described in the present
chapter. Regardless, it should be noted that any lossy compression technique
may be made lossless by providing the numerical error between the original
image and the degraded image reconstructed from the compressed data.
Although this step will lead to additional storage or transmission requirements,
the approach can facilitate the rapid retrieval or transmission of an initial,
low-quality image, followed by completely lossless recovery: such a procedure
is known as progressive transmission, especially when performed over multiple
stages of image quality or fidelity.
11.1.3 Distortion measures and fidelity criteria
Although we have stated our interest in lossless coding of biomedical images,
other processes, such as the transmission of large quantities of data over noisy
channels, may lead to some errors in the received images. Hence, it would be
relevant to consider the characterization of the distortion so introduced, and
analyze the fidelity of the received image with respect to the original [9].
The binary symmetric channel is characterized by a single parameter: the
bit-error probability p (see Figure 11.1). The channel capacity is given by

C = 1 + p log₂ p + q log₂ q,  (11.2)

where q = 1 − p.
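Equation 11.2 can be evaluated directly; a small sketch (the function name is ours):

```python
from math import log2

def bsc_capacity(p: float) -> float:
    # Capacity of a binary symmetric channel with bit-error probability p,
    # as in Equation 11.2; the entropy terms vanish at p = 0 or p = 1.
    q = 1.0 - p
    return 1.0 + sum(x * log2(x) for x in (p, q) if x > 0.0)

print(bsc_capacity(0.0))  # → 1.0 (noiseless channel)
print(bsc_capacity(0.5))  # → 0.0 (output independent of input)
```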

[Figure: input symbols 0 and 1 are received correctly with probability q = 1 − p, and received as the opposite symbol with probability p.]

FIGURE 11.1
Transmission error probabilities in a binary symmetric channel [9].

The least-squares single-letter fidelity criterion is defined as [9]

ρ_n(x, y) = (1/n) Σ_{l=1}^{n} (x_l − y_l)² 2^{2(l−1)},  (11.3)

where x and y are the transmitted and received n-bit vectors (blocks or
words), respectively.

The Hamming distance between the vectors x and y is defined as

D_H(x, y) = (1/n) Σ_{l=1}^{n} (x_l − y_l)².  (11.4)
Measures of fidelity may also be defined based upon entire images by defining
an error image as

e(m, n) = g(m, n) − f(m, n),  (11.5)

where g(m, n) is the received (degraded) version of the original (transmitted)
image f(m, n), and then defining the RMS value of the error as

e_RMS = √[ (1/N²) Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} [g(m, n) − f(m, n)]² ],  (11.6)

or the SNR as

SNR = [ Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} g²(m, n) ] / [ Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} e²(m, n) ].  (11.7)
See Section 2.13 for more details on measures of SNR.
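These image-wise measures translate directly into code; a minimal NumPy sketch (the small test images are arbitrary):

```python
import numpy as np

def rms_error(f: np.ndarray, g: np.ndarray) -> float:
    # RMS value of the error image e = g - f, as in Equation 11.6.
    e = g.astype(np.float64) - f.astype(np.float64)
    return float(np.sqrt(np.mean(e ** 2)))

def snr(f: np.ndarray, g: np.ndarray) -> float:
    # Ratio of received-image energy to error energy, as in Equation 11.7.
    e = g.astype(np.float64) - f.astype(np.float64)
    return float(np.sum(g.astype(np.float64) ** 2) / np.sum(e ** 2))

f = np.array([[10, 20], [30, 40]], dtype=np.uint8)
g = np.array([[12, 20], [30, 44]], dtype=np.uint8)
print(rms_error(f, g))  # → 2.23606797749979, i.e., sqrt((4 + 0 + 0 + 16) / 4)
```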

11.2 Fundamental Concepts of Coding


In general, coding could be defined as the use of symbols to represent information.
The following list provides the definitions of a few basic terms and
concepts related to coding [9]:
- An alphabet is a predefined set of symbols.
- A word is a finite sequence of symbols from an alphabet.
- A code is a mapping of words from a source alphabet into the words of a code alphabet.
- A code is said to be distinct if each code word is distinguishable from the other code words.
- A distinct code is uniquely decodable if every code word is identifiable when immersed in a sequence of code words (with no separators between the words).
- A desirable property of a uniquely decodable code is that it be decodable on a word-to-word basis. This is ensured if no code word may be a prefix to another; the code is then instantaneously decodable.
- A code is said to be optimal if it is instantaneously decodable and has the minimum average length for a given source PDF.
Examples of symbols are {0, 1} in the binary alphabet; {0, 1, 2, 3, 4, 5, 6, 7}
in the octal system; {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} in the decimal system; {0, 1, 2, 3,
4, 5, 6, 7, 8, 9, A, B, C, D, E, F} in the hexadecimal system; {I, V, X, L, C, D,
M} in the Roman system (with the decimal equivalents of 1, 5, 10, 50, 100, 500,
and 1 000, respectively); and {A–Z, a–z} in the English alphabet (not
considering punctuation marks and special symbols). An example of a word
in the context of image coding is 00001011 in 8 b binary coding, standing
for the gray level 11 in the decimal system. Table 11.1 lists the codes for
integers in the range [0, 20] in the Roman, decimal, binary, Gray [957], octal,
and hexadecimal codes [958]. The Gray code has the advantageous feature
that only one digit is changed from one number to the next. Observe that, in
general, all of the codes described here (including the English language) fail
the conditions defined above for an optimal code.
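The binary-reflected Gray code listed in Table 11.1 can be generated with a single bit operation; a brief sketch that also verifies the single-bit-change feature noted above:

```python
def gray(n: int) -> int:
    # Binary-reflected Gray code of a nonnegative integer.
    return n ^ (n >> 1)

# Successive Gray codes differ in exactly one bit.
for n in range(20):
    assert bin(gray(n) ^ gray(n + 1)).count("1") == 1

print([format(gray(n), "05b") for n in range(4)])
# → ['00000', '00001', '00011', '00010'], as in Table 11.1
```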

11.3 Direct Source Coding


Pixels generated by real-life sources of images bear limitations in dynamic
range and variability within a small spatial neighborhood. Therefore, codes
used to represent pixel data at the source may be expected to demonstrate
certain patterns of limited variation and high correlation. Furthermore, real-life
sources of images do not generate random, uncorrelated values that are
equally likely; instead, it is common to encounter PDFs of gray levels that
are nonuniform. Some of these characteristics may be exploited to achieve
efficient representation of images by designing coding systems tuned to specific
properties of the source. Because the coding method is applied directly to pixel
values generated by the source (without processing them by an algorithm to
generate a different series of values), such techniques are categorized as direct
source coding procedures.

11.3.1 Huffman coding


Huffman [9, 959] proposed a coding system to exploit the occurrence of some
pixel values with higher probabilities than others. The basic idea in
Huffman coding is to use short code words for values with high probabilities of
occurrence, and longer code words to represent values with lower probabilities
of occurrence. This implies that the code words used will be of variable length;
the method also presumes prior knowledge of the PDF of the source symbols
(gray levels). It is required that the code words be uniquely decodable on
a word-by-word basis, which implies that no code word may be a prefix to
another. Huffman devised a coding scheme that meets these requirements and
leads to average code-word lengths lower than those provided by fixed-length
codes. Huffman coding provides an average code-word length L that is limited
by the zeroth-order entropy of the source H0 (see Equation 2.18) and H0 + 1:

H0 ≤ L ≤ H0 + 1.  (11.8)

The procedure to generate the Huffman code is as follows [9, 959]:
TABLE 11.1
Integers in the Range [0, 20] in Several Alphabets or Codes [957, 958].
English Portuguese Roman Decimal Binary Gray Octal Hex
Zero Zero 0 00000 00000 000 0
One Un/Uma I 1 00001 00001 001 1
Two Dois/Duas II 2 00010 00011 002 2
Three Três III 3 00011 00010 003 3
Four Quatro IV 4 00100 00110 004 4
Five Cinco V 5 00101 00111 005 5
Six Seis VI 6 00110 00101 006 6
Seven Sete VII 7 00111 00100 007 7
Eight Oito VIII 8 01000 01100 010 8
Nine Nove IX 9 01001 01101 011 9
Ten Dez X 10 01010 01111 012 A
Eleven Onze XI 11 01011 01110 013 B
Twelve Doze XII 12 01100 01010 014 C
Thirteen Treze XIII 13 01101 01011 015 D
Fourteen Catorze XIV 14 01110 01001 016 E
Fifteen Quinze XV 15 01111 01000 017 F
Sixteen Dezesseis XVI 16 10000 11000 020 10
Seventeen Dezessete XVII 17 10001 11001 021 11
Eighteen Dezoito XVIII 18 10010 11011 022 12
Nineteen Dezenove XIX 19 10011 11010 023 13
Twenty Vinte XX 20 10100 11110 024 14

Leading zeros have been removed in the decimal and hexadecimal (Hex) codes,
but retained in the binary, Gray, and octal codes.
1. Prepare a table listing the symbols (gray levels) in the source (image)
sorted in decreasing order of the probabilities of their occurrence.
2. Combine the last two probabilities. The list of probabilities now has
one less entry than before.
3. Copy the reduced list over to a new column, rearranging (as necessary)
such that the probabilities are in decreasing order.
4. Repeat the procedure above until the list of probabilities is reduced to
only two entries.
5. Assign the code digits 0 and 1 to the two entries in the final column
of probabilities. (Note: There are two possibilities of this assignment
that will lead to two different codes; however, their performance will be
identical.)
6. Working backwards through the columns of probabilities, assign additional
bits of 0 and 1 to the two entries that resulted in the last compounded
entry in the column.
7. Repeat the procedure until the first column of probabilities is reached
and all symbols have been assigned a code word.
It should be noted that a Huffman code is optimal for only the given source
PDF; a change in the source PDF would require the design of a different code
in order to be optimal. A disadvantage of the Huffman code is the increasing
length of its code words, especially for sources with several symbols. The
method does not perform any decorrelation of the data, and is limited in
average code-word length by the zeroth-order entropy of the source.
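The procedure above maps naturally onto a priority queue; a minimal sketch that recovers only the code-word lengths (not the code words themselves), using the rounded probabilities of the example that follows:

```python
import heapq
from math import log2

def huffman_lengths(probs: dict) -> dict:
    # Repeatedly combine the two smallest probabilities, as in the
    # procedure above; each merge adds one bit to every symbol beneath it.
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    counter = len(heap)  # tie-breaker so equal probabilities never compare lists
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

# Rounded probabilities of the gray levels, as in Figure 11.4.
probs = {1: 0.30, 2: 0.19, 4: 0.13, 3: 0.13, 5: 0.12, 6: 0.08, 0: 0.04, 7: 0.01}
lengths = huffman_lengths(probs)
L = sum(probs[s] * lengths[s] for s in probs)
H0 = -sum(p * log2(p) for p in probs.values())
print(round(L, 2), round(H0, 2))  # → 2.69 2.65
assert H0 <= L <= H0 + 1  # the bound of Equation 11.8
```

The lengths obtained (2, 2, 3, 3, 3, 4, 5, 5 bits) match those of the code words derived in Figure 11.5.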
Example: Figure 11.2 shows a 16 × 16 part of the image in Figure 2.1 (a),
quantized to 3 b/pixel. The gray levels in the image are in the range [0, 7],
and would require 3 b/pixel with straight binary coding. The histogram of
the image is shown in Figure 11.3; it is evident that some of the pixel values
occur with low probabilities.
The procedure for accumulating the probabilities of occurrence of the source
symbols is illustrated in Figure 11.4. The Huffman coding process is shown in
Figure 11.5. Note that a different code with equivalent performance may be
generated by reversing the order of assignment of the code symbols 0 and 1
at each step. The average code-word length is 2.69 b/pixel, which is slightly
above the zeroth-order entropy of 2.65 b of the image. The advantage is
relatively small due to the fact that the source in the example uses only eight
symbols with 3 b/pixel, and has a relatively well-spread histogram (PDF).
However, simple representation of the data using ASCII coding would require
a minimum of 8 b/pixel; the savings with reference to this requirement are
significant. Larger advantages may be gained by Huffman coding of sources
with more symbols and narrow PDFs.

1 1 1 1 1 1 1 1 1 1 2 3 2 2 1 2
0 1 1 1 1 1 1 1 1 1 1 2 2 3 4 5
1 0 0 0 1 1 1 1 1 1 1 1 2 2 4 6
2 2 3 5 4 3 1 0 1 1 1 1 1 2 3 5
4 6 5 4 3 1 1 2 2 1 1 1 1 1 2 4
5 5 2 1 2 3 2 2 2 3 3 4 3 2 1 3
4 3 1 2 1 1 1 2 2 2 1 2 2 2 3 5
2 0 2 0 1 3 1 3 5 3 3 2 2 3 3 6
1 1 2 2 1 2 1 2 3 3 3 4 4 6 5 6
1 1 2 4 1 0 0 1 3 4 5 5 5 4 4 6
1 1 1 4 2 1 2 3 5 5 5 4 4 3 4 6
1 1 1 4 4 4 5 6 6 5 4 3 2 3 5 6
1 1 2 5 5 4 5 5 4 3 3 2 3 4 5 6
2 1 4 5 5 5 5 4 3 1 1 1 4 6 5 6
2 2 5 5 5 4 3 2 2 1 1 4 6 6 6 7
4 4 4 4 3 2 2 1 0 1 5 6 6 6 6 7
1111111111232212011111111112234510001111111122462235431011111235
4654311221111124552123222334321343121112221222352020131353322336
1122121233344656112410013455544611142123555443461114445665432356
1125545543323456214555543111465622555432211466674444322101566667
FIGURE 11.2
Top to bottom: A 16 × 16 part of the image in Figure 2.1 (a) quantized to
3 b/pixel, shown as an image, a 2D array, and as a string of integers with
the gray-level values of every pixel. The line breaks in the string format have
been included only for the sake of printing within the width of the page.

[Figure: bar plot of probability of occurrence (vertical axis, 0 to 0.35) versus gray level (horizontal axis, 0 to 7).]

FIGURE 11.3
Gray-level histogram of the image in Figure 11.2. Zeroth-order entropy H0 =
2.65 b.
Symbol Count Prob. Step 1 Step 2 Step 3 Step 4 Step 5 Step 6
1 77 0.30 0.30 0.30 0.30 0.30 0.44 0.56
2 48 0.19 0.19 0.19 0.25 0.26 0.30 0.44
4 34 0.13 0.13 0.13 0.19 0.25 0.26
3 33 0.13 0.13 0.13 0.13 0.19
5 32 0.12 0.12 0.13 0.13
6 20 0.08 0.08 0.12
0 10 0.04 0.05
7 2 0.01

FIGURE 11.4
Accumulation of probabilities (Prob.) of occurrence of gray levels in the derivation of the Huffman code for the image in
Figure 11.2. The probabilities of occurrence of the symbols have been rounded to two decimal places, and add up to unity.
Symbol Prob. Code Step 1 Step 2 Step 3 Step 4 Step 5 Step 6
1 0.30 00 0.30 00 0.30 00 0.30 00 0.30 00 0.44 1 0.56 0
2 0.19 11 0.19 11 0.19 11 0.25 10 0.26 01 0.30 00 0.44 1
4 0.13 010 0.13 010 0.13 010 0.19 11 0.25 10 0.26 01
3 0.13 011 0.13 011 0.13 011 0.13 010 0.19 11
5 0.12 101 0.12 101 0.13 100 0.13 011
6 0.08 1000 0.08 1000 0.12 101
0 0.04 10010 0.05 1001
7 0.01 10011

FIGURE 11.5
Steps in the derivation of the Huffman code for the image in Figure 11.2. (Prob. = probabilities of occurrence of the gray
levels.) The binary words in bold italics are the Huffman code words at the various stages of their derivation. See also
Figure 11.4.
Inter-pixel correlation may be taken into account in Huffman coding by
considering combinations of pixels (gray levels) as symbols. If we were to
consider pairs of gray levels in the example above, with gray levels quantized to
3 b/pixel, we would have a total of 8 × 8 = 64 possibilities; see Table 11.2. The
first-order entropy of the image, considering pairs of gray levels, is H1 = 2.25 b;
an average code-word length close to this value may be expected if Huffman
coding is applied to pairs of gray levels.

TABLE 11.2
Counts of Occurrence of Pairs of Pixels in the Image in Figure 11.2.
Current pixel Next pixel in the same row
0 1 2 3 4 5 6 7
0 3 6 1 0 0 0 0 0
1 4 46 17 4 5 1 0 0
2 2 12 16 12 3 2 0 0
3 0 5 8 6 6 6 1 0
4 0 1 1 10 8 6 7 0
5 0 0 1 1 9 12 6 0
6 0 0 0 0 0 4 6 2
7 0 0 0 0 0 0 0 0

For example, the pair (1, 2) occurs 17 times in the image. The last pixel in
each row was not paired with any pixel. The first-order entropy of the image,
considering the probabilities of occurrence of pairs of gray-level values as in
Equation 2.23, is H1 = 2.25 b. The zeroth-order entropy is H0 = 2.65 b.

Although the performance of Huffman coding is limited when applied directly
to source symbols, the method may be applied to decorrelated data with
significant advantage, due to the highly nonuniform or concentrated PDFs of
such data. The performance of Huffman coding as a post-encoder following
decorrelation methods is discussed in several sections to follow.
11.3.2 Run-length coding
Images with high levels of correlation may be expected to contain strings of
repeated occurrences of the same gray level: such strings are known as runs.
Data compression may be achieved by coding such runs of gray levels. For
example, the first three rows of the image in Figure 11.2 may be represented
as follows:
Row 1: (1, 10) (2, 1) (3, 1) (2, 2) (1, 1) (2, 1)
Row 2: (0, 1) (1, 10) (2, 2) (3, 1) (4, 1) (5, 1)
Row 3: (1, 1) (0, 3) (1, 8) (2, 2) (4, 1) (6, 1).
In the code above, each pair of values represents a run, with the first value
standing for the gray level and the second value giving the number of times
the value has occurred in the run. The coding procedure is interrupted at the
end of each row to permit synchronization in case of errors in the reception
and decoding of run values. Direct coding of the 16 × 3 = 48 pixels in the
three rows considered, at 3 b/pixel, leads to a total code length of 144 b. In
the run-length code given above, if we were to use three bits per gray level and
four bits per run-length value, we get a total code length of 18 × 7 = 126 b.
The savings are small in this case, due to the "busy" nature of the image,
which represents an eye.
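The row-wise run-length encoding used above can be sketched in a few lines; the packing of the (gray level, run length) pairs into bits is omitted:

```python
def run_length_encode(row):
    # Encode one image row as (gray level, run length) pairs.
    # Coding restarts at each row to limit error propagation.
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [tuple(r) for r in runs]

row1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 2, 2, 1, 2]  # first row of Figure 11.2
print(run_length_encode(row1))
# → [(1, 10), (2, 1), (3, 1), (2, 2), (1, 1), (2, 1)]
```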
Run-length coding is best suited for the compression of bilevel images, where
long runs may be expected of the two symbols 0 and 1. Images with fine
details, intricate texture, and high-resolution quantization with large numbers
of bits per pixel may not present long runs of the same gray level; run-length
coding in such cases may lead to data expansion rather than compression.
(This is the case with the image in Figure 11.2 past the third row.)
Run-length coding may be advantageously applied to bit planes of gray-level
and color images. The use of Gray coding (see Table 11.1) improves the
chances of long runs in the bit planes due to the feature that the Gray code
changes in only one bit from one numerical value to the next.
Errors in run length could cause severe degradation of the reconstructed
image due to the loss of pixel position. Synchronization at the end of each
row can avoid the carrying over of such errors beyond the affected row.
Runs may also be defined over 2D areas. However, images with fine details
do not present such uniform areas with large numbers of occurrence to lend
much coding advantage.

11.3.3 Arithmetic coding


Arithmetic coding [960] is a family of codes that treat input symbols as magnitudes.
Shannon [126] presented the basic idea of representing a string of
symbols as the sum of their scaled probabilities. Most of the development
towards practical arithmetic coding has been due to Langdon and Rissanen
[960, 961, 962]. The basic advantage of arithmetic coding over Huffman
coding is that it does not suffer by the limitation that each symbol should
have a unique code word that is at least one bit in length.
The mechanism of arithmetic coding is illustrated in Figure 11.6 [338, 963].
The symbols of the source string are represented by their individual probabilities
p_l and cumulative probabilities (sum of the probabilities of all symbols up
to, but not including, the current symbol) P_l. At any given stage in coding,
the source string is represented by a code point C_k and an interval A_k. The
code point C_k represents the cumulative probability of the current symbol
on a scale of interval size A_k. A new symbol (being appended to the source
string) is encoded by scaling the interval by the probability of the current
symbol as

A_{k+1} = A_k p_l,  (11.9)

and defining the new code point as

C_{k+1} = C_k + A_k P_l.  (11.10)

Decoding is performed by the reverse of the procedure described above. The
interval A_0 is initialized to unity, and the current code point is determined by
the range into which the final code point C_final falls. The scaling of the interval
and code point is performed as during encoding. The encoding procedure
ensures that no future code point C_k exceeds the current value C_k + A_k. Thus,
a carry over to a given bit position (in the binary representation of C_final)
occurs at most once during encoding. This fact is made use of for incremental
coding and for using finite-precision arithmetic. Finite precision is used by
employing a technique known as bit stuffing [960], where, if a series of ones
longer than the specified precision occurs in the binary-fraction representation
of C_k, a zero is inserted; this ensures that further carries do not propagate into
the series of ones. Witten et al. [963] provide an implementation of arithmetic
coding using integer arithmetic.
Direct arithmetic coding of an image consists of an initial estimation of the
probabilities of the gray values in the image, followed by row-wise arithmetic
coding of pixels. Direct coding does not take into account the correlation
between adjacent pixels. Arithmetic coding can be modified to make use of the
correlation between pixels to some extent by using conditional probabilities
of occurrence of gray levels.
In a version of arithmetic coding known as Q-coding [964], the individual
bit planes of an image are coded using probabilities conditioned on the surrounding
bits in the same plane as the context. A more efficient procedure is
to perform decorrelation of the pixels of the image separately, and to use the
basic arithmetic coder as a post-encoder on the decorrelated set of symbols
(see, for example, Rabbani and Jones [965]). The performance of arithmetic
coding as a post-encoder after the application of decorrelation methods is
discussed in several sections to follow.
[Figure: the current interval A_k is subdivided in proportion to the symbol probabilities; appending symbol l moves the code point to C_{k+1} = C_k + A_k P_l and shrinks the interval to A_{k+1} = A_k p_l, and the process repeats for the next symbol m.]

FIGURE 11.6
Arithmetic coding procedure. The range A_0 is initialized to unity. Each symbol
is represented by its individual probability p_l and cumulative probability
P_l. The string being encoded is represented by the code point C_k on the current
range A_k. The range is scaled down by the probability p_l of the current
symbol, and the process is repeated. One symbol is reserved for the end of
the string [338, 963]. Figure courtesy of G.R. Kuduvalli [338].
Example: The symbols used to represent the image in Figure 11.2 and
their individual as well as cumulative probabilities are listed in Table 11.3.
The intervals representing the symbols are also provided in the table. Let us
consider the first three symbols {4, 6, 5} in the fifth row of the image. The
procedure to derive the arithmetic code for this string of symbols is shown in
Figure 11.7.

TABLE 11.3
The Symbols Used in the Image in Figure 11.2, Along with Their Individual
Probabilities of Occurrence p_l, Cumulative Probabilities P_l, and Intervals
Used in Arithmetic Coding.

Symbol l Count p_l P_l Interval
0 10 0.04 0.00 [0.00, 0.04)
1 77 0.30 0.04 [0.04, 0.34)
2 48 0.19 0.34 [0.34, 0.53)
3 33 0.13 0.53 [0.53, 0.66)
4 34 0.13 0.66 [0.66, 0.79)
5 32 0.12 0.79 [0.79, 0.91)
6 20 0.08 0.91 [0.91, 0.99)
7 2 0.01 0.99 [0.99, 1.00)

The initial code point is C_0 = 0, and the initial interval is A_0 = 1. When
the first symbol "4" is encountered, the code point and interval are updated
as C_1 = C_0 + A_0 P_4 = 0 + 0.66 = 0.66; A_1 = A_0 p_4 = 1 × 0.13 = 0.13. For the
next symbol "6", we get C_2 = C_1 + A_1 P_6 = 0.66 + 0.13 × 0.91 = 0.7783, and
A_2 = A_1 p_6 = 0.13 × 0.08 = 0.0104. With the third symbol "5" appended to
the string, we have C_3 = C_2 + A_2 P_5 = 0.7783 + 0.0104 × 0.79 = 0.786516,
and A_3 = A_2 p_5 = 0.0104 × 0.12 = 0.001248.
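The interval-scaling recursion of Equations 11.9 and 11.10 reproduces these numbers directly; a short sketch using the probabilities of Table 11.3 (the function name is ours):

```python
# Individual (p_l) and cumulative (P_l) probabilities from Table 11.3.
p = {0: 0.04, 1: 0.30, 2: 0.19, 3: 0.13, 4: 0.13, 5: 0.12, 6: 0.08, 7: 0.01}
P = {0: 0.00, 1: 0.04, 2: 0.34, 3: 0.53, 4: 0.66, 5: 0.79, 6: 0.91, 7: 0.99}

def arithmetic_encode(symbols):
    # Return the final code point C and interval A for a symbol string,
    # using C_{k+1} = C_k + A_k P_l and A_{k+1} = A_k p_l.
    C, A = 0.0, 1.0
    for s in symbols:
        C, A = C + A * P[s], A * p[s]
    return C, A

C, A = arithmetic_encode([4, 6, 5])
print(round(C, 6), round(A, 6))  # → 0.786516 0.001248
```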
The code points have been given in decimal code to full precision, as required,
in this example; the individual code probabilities have been rounded
to two decimal places. In actual application, the code points need to be represented
in binary code with finite precision. The average code-word length
[Figure: four number lines, each divided into the intervals of the symbols 0, 1, 2, ..., 7. Symbol string {}: interval [0, 1). After "4": [0.66, 0.79). After "6": [0.7783, 0.7887). After "5": [0.786516, 0.787764).]

FIGURE 11.7
The arithmetic coding procedure applied to the string {4, 6, 5} formed by the
first three symbols in the fifth row of the image in Figure 11.2. See Table 11.3
for the related probabilities and intervals; see also Figure 11.6. All intervals
are shown mapped to the same physical length, although their true values
decrease from the interval at the top of the figure to that at the bottom.
The numerals 0, 1, 2, ..., 7 in italics indicate the symbols (gray levels) in the
image. The values in bold at the ends of each interval give the values of C_k
and (C_k + A_k) at the corresponding stage of coding.
per symbol is reduced by encoding long strings of symbols, such as an entire
row of pixels in the given image.

11.3.4 Lempel-Ziv coding


Ziv and Lempel [966] proposed a universal coding scheme for encoding symbols
from a discrete source when their probabilities of occurrence are not known
a priori. The coding scheme consists of a rule for parsing strings of symbols
from the source into substrings or words, and mapping the substrings into
uniquely decipherable code words of fixed length. Thus, unlike the Huffman
code, where codes of fixed length are mapped into variable-length codes, the
Lempel-Ziv code maps codes of variable length (corresponding to symbol
strings of variable length) into codes of fixed length.
The Lempel-Ziv coding scheme is illustrated in Figure 11.8 [338]. The
coding procedure starts with a buffer of length

n = L_s α^{ε L_s},  (11.11)

where L_s is the maximum length of the input symbol strings being parsed,
α is the cardinality of the symbol source (in the case of image coding, the
number of possible gray levels), and ε is chosen such that 0 < ε < 1. The
buffer is initially filled with n − L_s zeros and the first L_s symbols from the
source. The buffer is then parsed for the string whose length l_k is less than
L_s, but is the maximum of all such strings from 0 to n − L_s − 1, and which
has an identical string in the buffer starting at position n − L_s. The code to
be mapped consists of the beginning position b_k of this string in the buffer
from position 0 to n − L_s − 1, the length of the string l_k, and the last symbol
s_k following the end of the string. The total length of the code for a straight
binary representation is

l = ⌈log₂(n − L_s) + log₂(L_s) + log₂(α)⌉,  (11.12)

where ⌈x⌉ is the smallest integer ≥ x. After coding the string, the buffer is
advanced by l_k number of symbols. Ziv and Lempel [966] showed that, as the
total length of the input symbols tends to ∞, the average bit rate for coding
the string approaches that of an optimal code with complete knowledge of the
statistics of the source.
The Lempel-Ziv coding procedure may be viewed as a search through a
fixed-size, variable-content dictionary for words that match the current string.
A modification of this procedure, known as Lempel-Ziv-Welch (LZW) coding
[967], consists of using a variable-sized dictionary with every new string
encountered in the source string added to the dictionary. The dictionary is
initialized to single-symbol strings, made up of the entire symbol set. This
eliminates the need for including the symbol s_k in the code words. The LZW
string table has the prefix property: for every string of symbols in the table,
its prefix is also present in the table.
[Figure: a buffer of length n; the first n − L_s positions form the search section and the last L_s positions hold the incoming symbols. A matched string of length l_k beginning at position b_k is followed by the symbol s_k.]

FIGURE 11.8
The Lempel-Ziv coding procedure. At each iteration, the buffer is scanned
for strings of length l_k ≤ L_s for a match in the substring of length (n − L_s)
within the buffer. The matched string location b_k is encoded and transmitted.
Figure courtesy of G.R. Kuduvalli [338].

Kuduvalli [338] implemented a slight variation of the LZW code, in which
the first symbol of the current string is appended as the last symbol of the
previously parsed string, and the new string is added to the string table.
With this method, the decoded strings are generated in the same order as
the encoded strings. The string table itself is addressed during the encoding
procedure as a link-list. Each string contains the address of every other string
of which it is a prefix. Such a link-list is not necessary during decoding,
because the addresses of the strings are directly available to the decoder.
LZW coding may be applied directly for source coding, or applied to decorrelated
data. The following sections provide examples of application of the
LZW code.
Example: Let us consider again the image in Figure 11.2. The image
has eight symbols in the range [0, 7], each of which will be an item in the
LZW table, shown in Table 11.4; the eight basic symbols may be represented
by their own code. Index1 and Index2 represent two possibilities of coding.
Consider the string {2, 2, 3}, which occurs five times in the image. In order
to exploit this feature, we need to add the strings {2, 2} and {2, 2, 3} to the
table; we could use the codes "8" for the former, and "9" or "83" for the latter
("83" represents the symbol "3" being appended to the string represented by
the code "8"). The string {2, 2, 3, 5} present at the beginning of the fourth
row in the image may be represented as "95". In this manner, long strings of
symbols are encoded with short code words. The code index in the dictionary
(table) is used to represent the symbol string. A predefined limit may be
applied to the length of the dictionary.
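A minimal LZW encoder over the gray-level alphabet can be sketched as follows. This follows the generic LZW scheme rather than Kuduvalli's link-list variant, and the dictionary grows in scan order, so the indices assigned need not match the illustration in Table 11.4; indices 0-7 are reserved for the single-symbol strings:

```python
def lzw_encode(pixels, alphabet_size=8):
    # Dictionary initialized to all single-symbol strings; every new string
    # encountered is added with the next free index.
    table = {(s,): s for s in range(alphabet_size)}
    string = ()
    codes = []
    for s in pixels:
        if string + (s,) in table:
            string += (s,)                      # keep extending the match
        else:
            codes.append(table[string])         # emit index of longest match
            table[string + (s,)] = len(table)   # grow the dictionary
            string = (s,)
    if string:
        codes.append(table[string])
    return codes

print(lzw_encode([2, 2, 3, 2, 2, 3, 5]))  # → [2, 2, 3, 8, 3, 5]
```

On the second occurrence of {2, 2}, the encoder emits the single dictionary index 8 instead of two symbol codes, which is where the compression comes from.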

TABLE 11.4
Development of the Lempel-Ziv-Welch (LZW) Code Table for the Image in
Figure 11.2.

String Index1 Index2
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
... ... ...
22 8 22
223 9 83
2235 10 95
... ... ...
11.3.5 Contour coding
Given that a digital image includes only a nite number of gray levels, we
could expect strings of the same values to occur in some form of 2D contours or
patterns in the image 589]. The same expectation may be arrived at if we were
to consider the gray level to represent height: an image may then be viewed
as a relief map, with iso-intensity contours representing steps or plateaus (as
in elevation maps of mountains). Information related to all such contours
may then be used to encode the image. Although the idea is appealing in
principle, ne quantization could result in low probabilities of occurrence of
large contours with simple patterns.
Each iso-intensity contour would require the encoding of the coordinates of
the starting point of the contour, the associated gray level, and the sequence
of steps (movement) needed to trace the contour. A consistent rule is required
for repeatable tracing of contours. The left-most-looking rule [589] for tracing
a contour is as follows:
1. Select a pixel that is not already a member of a contour.
2. Look at the pixel to the left relative to the direction of entry to the current
pixel on the contour. If the pixel has the same gray level as the current pixel,
move to the pixel and encode the type of movement.
3. If not, look at the pixel straight ahead. If the pixel has the same gray level
as the current pixel, move to the pixel and encode the type of movement.
4. If not, look at the pixel to the right. If the pixel has the same gray level
as the current pixel, move to the pixel and encode the type of movement.
5. If not, move to the pixel behind the current pixel.
6. Repeat; the procedure will trace a closed loop and return to the starting
point.
7. Repeat the procedure until every pixel in the image belongs to a contour.
The movements allowed are only to the left, straight ahead, to the right,
and back; the four possibilities may be encoded using the Freeman chain
code illustrated in Figure 6.4.
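The bit accounting for one contour under this scheme (two coordinates, the gray level, and two bits per Freeman move) can be wrapped in a small helper; the function name and default bit widths are ours, chosen to match the worked example that follows:

```python
def contour_code_bits(chain_moves: int, coord_bits: int = 4, level_bits: int = 3) -> int:
    # Bits for one contour: two starting coordinates, the gray level,
    # and two bits per Freeman-code move (four possible moves).
    return 2 * coord_bits + level_bits + 2 * chain_moves

print(contour_code_bits(40))  # → 91, i.e., 4 + 4 + 3 + 40 × 2 b
```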
Example: The image in Figure 11.2 is shown in Figure 11.9 along with the
tracings of three iso-intensity contours. With reference to the contour with
the value "1" at the top-left corner of the image, observe that several spatially
connected pixels with the same value and lying within the contour have not
been included in the contour: these pixels require additional contours. The
contour-coding procedure needs to be applied repeatedly until every pixel in
the image belongs to a contour. The encoding of short strings or isolated
occurrences of pixel values could require several coding steps, and lead to
increased code length.
The data required to represent the contour with the gray level "1" starting
with the pixel in the first row and first column would be as follows:
Initial point: Coordinates [1, 1]. Gray level 1.
Freeman code: 0000000003030303022221212233201122122212.
Contour code requirement: four bits for each coordinate; three bits for the
pixel value; two bits per Freeman code. Total 4 + 4 + 3 + 40 × 2 = 91 b.
Direct binary code requirement for the 15 pixels on the contour at 8 b/pixel =
120 b; at 3 b/pixel = 45 b.
The data required to represent the contour with the gray level "2" starting
with the pixel in the fifth row and eighth column would be as follows:
Initial point: Coordinates [5, 8]. Gray level 2.
Freeman code: 0330221201.
Contour code requirement: Total 31 b. Direct binary code requirement for
the eight pixels on the contour at 8 b/pixel = 64 b; at 3 b/pixel = 24 b.
The data required to represent the contour with the gray level "1" starting
with the pixel in the ninth row and first column would be as follows:
Initial point: Coordinates [9, 1]. Gray level 1.
Freeman code: 03303233121111.
Contour code requirement: 39 b. Direct binary code requirement for the 13
pixels on the contour at 8 b/pixel = 104 b; at 3 b/pixel = 39 b.
It is evident that higher advantages may be gained if a number of long
contours with simple patterns are present in the image.

11.4 Application: Source Coding of Digitized Mammograms
Kuduvalli et al. [174, 338] applied several coding techniques for the compression
of digitized mammograms. In their work, film mammograms were
scanned using an Eikonix-1412 digitizing camera, with a linear CCD array to
provide a horizontal scan line of 4 096 pixels. A vertical array size of 4 096
pixels was achieved by stepping the array over 4 096 scan lines. A Gordon
Instruments Plannar-1417 light box was used to illuminate the X-ray film being
digitized. The gain and offset variations between the CCD elements were
corrected for in the camera, and the data were transferred to the host computer
over an IEEE-488 bus. Corrections were applied for the light-intensity
variations in the Plannar-1417 light source used to illuminate the films, and
the digitized image was stored in a Megascan FDP-2111 frame buffer with a
capacity of 4 096 × 4 096 × 12 b. Several 12 : 8 b and 12 : 10 b transformations
were developed to test the dynamic range, light intensity, and focus settings.
The effective dynamic range of an imaging system depends upon the scaling
factors used for correcting light-intensity variations and the SNR of the imaging
system. Kuduvalli et al. analyzed the intensity profiles of the Plannar-1417
light box, and observed that a scaling factor of about 1.6 was required
Image Coding and Data Compression 979

FIGURE 11.9
Contour coding applied to the image in Figure 11.2. Three contours are
shown for the sake of illustration of the procedure. The pixels included in
the contours are shown in bold italics. The initial point of each contour is
underlined. Double-headed arrows represent two separate moves in the two
directions of the arrows.
to correct the values of the pixels at the edges of the image. Furthermore,
the local standard deviation of the intensity levels measured by the camera
with respect to a moving-average window of 10 counts was estimated to be
about 5.0 counts. The effective noise level at the edge of the imaging field
was estimated at 8 counts. For these reasons, it was concluded that two of
the least-significant bits in the 12 b data would only contribute to noise and
affect the performance of data compression algorithms [968]. In addition, by
scanning a standard, calibrated, gray-scale step pattern, the effective dynamic
range of the digitizer was observed to be about 2.5 OD. In consideration of
all of the above factors, it was determined that truncating the 12 b pixel val-
ues to 10 b values would be adequate for representing the digitized image.
This procedure reduced the effective noise level by a factor of four, to about
2 counts.
Kuduvalli et al. also estimated the MTF of the digitization system from
measurements of the ESF (see Sections 2.9 and 2.12), with a view to demon-
strating the resolving capability of the system to capture sub-millimeter details
on X-ray films, such as microcalcifications on mammograms. The normalized
value of the MTF at one-half of the sampling frequency was estimated to be
0.1, which is considered to be adequate for resolving objects and details at
the same frequency (which is the highest frequency component retained in the
digitized image) [969].
The average numbers of bits per pixel obtained for ten X-ray images using the
Huffman, arithmetic, and LZW coding techniques are listed in Table 11.5; the
zeroth-order entropy values of the images are also listed. The high values of the
zeroth-order entropy indicate limits on the performance of the Huffman coding
technique. The arithmetic coding method has given bit rates comparable
with those provided by the Huffman code, but at a considerably higher level of
complexity. Figure 11.10 shows plots of the average bit rate as a function of
the buffer length in LZW coding, for four of the ten images listed in Table 11.5.
The maximum length of the symbol strings scanned was fixed at L_s = 256.
The LZW code provided the best compression rates among the three methods
considered, to the extent of about 50% of the initial number of bits per pixel
in the images. The average bit rate provided by LZW coding is well below the
zeroth-order entropy values of the images, indicating that efficient encoding
of strings of pixel values can exploit the redundancy and correlation present
in the data without performing explicit decorrelation.
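The string-table mechanism by which LZW exploits such redundancy can be sketched with a minimal encoder. This is a textbook sketch over a byte alphabet, not the specific variant (buffer length n = 2^B, L_s = 256) used in the study above; the function name is hypothetical.

```python
# A minimal LZW encoder: repeated strings of symbols are replaced by
# single dictionary indices, without any explicit decorrelation step.

def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}  # initial single-byte codes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # extend the current match
        else:
            out.append(table[w])      # emit the longest known string
            table[wc] = len(table)    # grow the string table
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"ababababababab")
print(len(codes), codes)   # 7 codes for 14 input symbols
```

The repetitive input is represented by ever-longer table entries, so the number of output codes grows much more slowly than the input length.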

11.5 The Need for Decorrelation


The results of direct source encoding, discussed in Section 11.4, indicate the
limitations of direct encoding of the symbols generated by an image source.

TABLE 11.5
Average Number of Bits Per Pixel with Direct Source Coding for Data
Compression of Digitized Mammograms and Chest X-ray Images [174, 338].

Image  Type     Size (pixels)    Entropy  Huffman  Arith.  LZW
1      Mammo.   4,096 × 1,990    7.26     8.20     8.09    5.34
2      Mammo.   4,096 × 1,800    7.61     8.59     8.50    5.76
3      Mammo.   3,596 × 1,632    6.68     6.96     6.88    4.98
4      Mammo.   3,580 × 1,696    7.21     7.80     7.71    4.68
5      Chest    3,536 × 3,184    8.92     9.62     9.43    6.11
6      Chest    3,904 × 3,648    9.43     9.83     9.81    6.27
7      Chest    3,264 × 3,616    6.26     7.20     7.12    4.61
8      Chest    4,096 × 4,096    8.65     9.39     9.35    5.83
9      Mammo.   4,096 × 2,304    8.83     9.71     9.57    6.13
10     Chest    4,096 × 3,800    8.57     9.42     9.33    5.99

Average                          7.94     8.67     8.58    5.57

The entropy listed is the zeroth-order entropy. Pixel values in the original
images were quantized at 10 b/pixel. See also Table 11.7. Note: Arith.
= arithmetic coding; LZW = Lempel–Ziv–Welch coding; Mammo. = mam-
mogram. Reproduced with permission from G.R. Kuduvalli and R.M. Ran-
gayyan, "Performance analysis of reversible image compression techniques for
high-resolution digital teleradiology", IEEE Transactions on Medical Imaging,
11(3): 430–445, 1992. © IEEE.
FIGURE 11.10
Average bit rate as a function of the buffer length, using Lempel–Ziv–Welch
coding, for four of the ten images (numbers 1, 3, 4, and 6) listed in Table 11.5.
The abscissa indicates the value of B, with the buffer length given by n = 2^B.
The maximum length of the symbol strings scanned was fixed at L_s = 256. Re-
produced with permission from G.R. Kuduvalli and R.M. Rangayyan, "Perfor-
mance analysis of reversible image compression techniques for high-resolution
digital teleradiology", IEEE Transactions on Medical Imaging, 11(3): 430–
445, 1992. © IEEE.
Although some of the methods described, such as the Lempel–Ziv and run-
length coding methods, have the potential to exploit the redundancy present
in images, their efficiency in this task is limited.
The term "decorrelation" indicates a procedure that can remove or reduce
the redundancy or correlation present between the elements of a data stream,
such as the pixels in an image. The most commonly used decorrelation tech-
niques are the following:
• differentiation, which can remove the commonality present between ad-
jacent elements;
• transformation to another domain where the energy of the image is con-
fined to a narrow range, such as the Fourier, Karhunen–Loève, discrete
cosine, or Walsh–Hadamard (orthogonal) transform domains;
• model-based prediction, where the error of prediction would have re-
duced information content; and
• interpolation, where a subsampled image is transmitted, the pixels in
between the preceding data are obtained by interpolation, and the error
of interpolation, which has reduced information content, is transmitted.
Observe that the decorrelated data (transform coefficients, prediction error,
etc.) need to be encoded and transmitted; the techniques described in Sec-
tion 11.3 for direct source encoding may also be applied to decorrelated data.
In addition, further information regarding initial values and the procedures
for the management of the transform coefficients or the model parameters will
also have to be sent to facilitate complete reconstruction of the original image.
The advantages of decorrelating image data by differentiation are demon-
strated by the following simple example. The use of transforms for image
data compression is discussed in Section 11.6. Interpolative coding is briefly
described in Section 11.7. Methods for prediction-based data compression are
described in Section 11.8. Techniques based upon different scanning strategies
to improve the performance of decorrelation by differentiation are discussed in
Sections 11.9 and 11.11. Strategies for combining several decorrelation steps
are discussed in Section 11.12.
Example: Consider the image in Figure 11.11 (a); the histogram of the
image is shown in part (b) of the figure. The image has a good spread of
gray levels over its spatial extent, and the histogram, while not uniform, does
exhibit a good spread over the dynamic range of the image. The zeroth-order
entropy, at 7.41 b, is close to the maximum possible value of 8 b for the image
with 256 gray levels. These characteristics suggest limited potential for direct
encoding methods.
The image in Figure 11.11 (a) was subjected to a simple first-order partial
differentiation procedure, given by
f'(m, n) = f(m, n) - f(m - 1, n).   (11.13)
The result, shown in Figure 11.12 (a), has an extremely limited range of de-
tails; the histogram of the image, shown in part (b) of the figure, indicates
that, although the image has more gray levels than the original image, most of
the gray levels occur with negligible probability. The concentrated histogram
leads to a lower value of entropy, at 5.52 b. Observe that the histogram
of the difference image is close to a Laplacian PDF (see Section 3.1.2 and
Figure 3.9). The simple operation of differentiation has reduced the entropy
of the image by about 25%. The reduced entropy suggests that the coding
requirement may be reduced significantly. Observe that the additional infor-
mation required to recover the original image from its derivative as above is
just the first row of pixels in the original image. Data compression techniques
based upon differentiation are referred to as differential pulse code modulation
(DPCM) techniques. DPCM techniques vary in terms of the reference value
used for subtraction (in the differentiation process). The reference value may
be derived as a combination of a few neighboring pixels, in which case the
method approaches linear prediction in concept.
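The entropy reduction obtained by differentiation can be sketched numerically. The smooth synthetic test image below is an assumption for illustration (it is not the image of Figure 11.11), and the helper name is hypothetical; the difference operation follows Equation 11.13, with the first row retained so that the original can be recovered.

```python
# Numerical sketch of decorrelation by differentiation: zeroth-order
# entropy of a smooth synthetic image versus that of its first-order
# difference along the rows (Equation 11.13).
import numpy as np

def entropy_bits(values):
    """Zeroth-order entropy, in bits, of the gray levels in 'values'."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

n, m = np.meshgrid(np.arange(128), np.arange(128))  # n: column, m: row index
f = ((np.sin(n / 9.0) + np.cos(m / 7.0) + 2.0) * 60.0).astype(np.int32)

diff = f.copy()                      # first row kept for exact recovery
diff[1:, :] = f[1:, :] - f[:-1, :]   # f'(m, n) = f(m, n) - f(m - 1, n)

print(entropy_bits(f), entropy_bits(diff))
```

For a smooth image, the difference image concentrates its histogram around zero, so its zeroth-order entropy is substantially lower than that of the original.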

11.6 Transform Coding


The main premise of transform-domain coding of images is that, when orthog-
onal transforms are used, the related coefficients represent elements that are
mutually uncorrelated. (See Section 3.5.2 for the basics of orthogonal trans-
forms.) Furthermore, because most natural images have limitations on the
rate of change of their elemental values (that is, they are generally smooth),
their energy is confined to a narrow low-frequency range in the transform do-
main. These properties lead to two characteristics of orthogonal transforms
that are of relevance and importance in data compression:
• orthogonal transforms perform decorrelation, and
• orthogonal transforms compress the energy of the given image into a
narrow region.
The second property listed above is commonly referred to as "energy com-
paction".
Example: The log-magnitude Fourier spectrum of the image in Figure
11.11 (a) is shown in Figure 11.13 (a). It is evident that most of the energy
of the image is concentrated in a small number of DFT coefficients around
the origin in the 2D Fourier plane (at the center of the spectrum displayed).
In order to demonstrate the energy-compacting nature of the DFT, the (cu-
mulative) percentage of the total energy of the image present at the (0, 0)
coordinate of the DFT, and contained within concentric square regions of

FIGURE 11.11
(a) A test image of size 225 × 250 pixels with 256 gray levels. (b) Gray-level
histogram of the test image. Dynamic range [18, 255]. Zeroth-order entropy
7.41 b.

FIGURE 11.12
(a) Result of differentiation of the test image in Figure 11.11 (a). (b) Gray-
level histogram of the image in (a). Dynamic range [−148, 180]. Zeroth-order
entropy 5.52 b.
half-width 1, 2, 3, ..., 8 DFT coefficients were computed. The result is plot-
ted in Figure 11.13 (b), which shows that 74% of the energy of the image is
present in the DC component, and 90% of the energy is contained within the
central 121 DFT components around the DC point; only 7.2% of the energy
lies in the high-frequency region beyond the central 17 × 17 region of the
256 × 256 DFT array. (Regardless of the small fraction of the total energy
present at higher frequencies, it should be observed that high-frequency com-
ponents bear important information related to the edges and sharpness of the
image; see Sections 2.11.1 and 3.4.1.)
The DFT is the most commonly used orthogonal transform in the analysis
of systems, signals, and images. However, due to the complex nature of the
basis functions, the DFT has high computational requirements. In spite of
its symmetry, the DFT could lead to increased direct coding requirements
due to the need for large numbers of bits for quantization of the transform
coefficients. Regardless, the discrete nature of most images at the outset could
be used to advantage in lossless recovery from transform coefficients that have
been quantized to low levels of accuracy; see Section 11.6.3.
We have already studied the DFT (Sections 2.11 and 3.5.2) and the WHT
(Section 3.5.2). The WHT has a major computational advantage due to the
fact that its basis functions are composed of only +1 and −1, and has been ad-
vantageously applied in data compression. In the following sections, we shall
study two other transforms that are popular and relevant to data compres-
sion: the discrete cosine transform (DCT) and the Karhunen–Loève transform
(KLT).

11.6.1 The discrete cosine transform


The DCT is a modification of the DFT that overcomes the effects of discon-
tinuities at the edges of the latter [970], and is defined as
F(k, l) = \frac{a(k, l)}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f(m, n) \cos\left[\frac{\pi m}{2N}(2k+1)\right] \cos\left[\frac{\pi n}{2N}(2l+1)\right]   (11.14)
for k = 0, 1, 2, ..., N − 1, and l = 0, 1, 2, ..., N − 1, where
a(k, l) = \begin{cases} \frac{1}{2} & \text{if } (k, l) = (0, 0) \\ 1 & \text{otherwise.} \end{cases}   (11.15)
The inverse transformation is given by
f(m, n) = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a(k, l)\, F(k, l) \cos\left[\frac{\pi m}{2N}(2k+1)\right] \cos\left[\frac{\pi n}{2N}(2l+1)\right]   (11.16)
for m = 0, 1, 2, ..., N − 1, and n = 0, 1, 2, ..., N − 1. The basis vectors
of the DCT closely approximate the eigenvectors of a Toeplitz matrix (see

FIGURE 11.13
(a) Log-magnitude Fourier spectrum of the image in Figure 11.11 (a), com-
puted over a 256 × 256 DFT array. (b) Distribution of the total image energy.
The 10 values represent the (cumulative) percentage of the energy of the im-
age present at the (0, 0) or DC position, contained within square boxes of
half-width 1, 2, 3, ..., 8 pixels centered in the DFT array, and in the entire
256 × 256 DFT array. The numbers of DFT coefficients corresponding to the
10 regions are 1, 9, 25, 49, 81, 121, 169, 225, 289, and 65,536.
Section 3.5.3) whose elements can be expressed by increasing powers of a
constant ρ < 1. The ACF matrices of most natural images can be closely
modeled by such a matrix [971]. An N × N DCT may be computed from the
results of a 2N × 2N DFT; thus, FFT algorithms may be used for efficient
computation of the DCT.
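A small block DCT can be sketched directly from the definition. The sketch below uses the standard orthonormal DCT-II kernel, which is closely related to the pair in Equations 11.14 through 11.16 (differing only in the scaling constants), so that the inverse is simply the transpose; the function names are hypothetical.

```python
# Sketch of an N x N block DCT built from the orthonormal DCT-II matrix,
# with a round-trip (forward then inverse) check.
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: row k, column m."""
    k = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * m + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)       # DC row rescaled for orthonormality
    return C

def dct2(f):
    C = dct_matrix(f.shape[0])
    return C @ f @ C.T               # separable 2D transform

def idct2(F):
    C = dct_matrix(F.shape[0])
    return C.T @ F @ C               # inverse = transpose (orthonormal)

rng = np.random.default_rng(1)
f = rng.integers(0, 256, size=(8, 8)).astype(float)
F = dct2(f)
print(np.max(np.abs(idct2(F) - f)))  # ~0: the transform is invertible
```

Because the matrix is orthonormal, no separate inverse kernel is needed, and energy is preserved between the image and transform domains.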

11.6.2 The Karhunen–Loève transform


The KLT, also known as the principal-component transform, the Hotelling
transform, or the eigenvector transform [9, 589], is a data-dependent transform
based upon the statistical properties of the given image. The image is treated
as a vector f that is a realization of an image-generating stochastic process.
If the image is of size M × N, the vector is of size P = MN (see Section 3.5).
The image f may be represented without error by a deterministic linear
transformation of the form
f = A g = \sum_{m=1}^{P} g_m A_m,   (11.17)
A = [A_1, A_2, \ldots, A_P],   (11.18)
where |A| ≠ 0, and the A_m are the column vectors that make up the P × P matrix
A. The matrix A needs to be formulated such that the vector g leads to an
efficient representation of the original image f.
The matrix A may be considered to be made up of P linearly indepen-
dent vectors that span the P-dimensional space containing f. Let A be
orthonormal, that is,
A_m^T A_n = \begin{cases} 1 & m = n \\ 0 & m \neq n. \end{cases}   (11.19)
It follows that
A^T A = I \quad \text{or} \quad A^{-1} = A^T.   (11.20)
Then, the vectors A_m may be considered to form the set of orthonormal
basis vectors of a linear transformation. This formulation leads also to the
inverse relationship
g = A^T f, \quad \text{with components} \quad g_m = A_m^T f, \; m = 1, 2, \ldots, P.   (11.21)
In the procedure described above, each component of g contributes to the
representation of f. Given the formulation of A as a reversible linear trans-
formation, g provides a complete (lossless) representation of f if all of its
P = MN elements are made available.
Suppose that, in the interest of efficient representation of images via the
extraction of the most significant information contained, we wish to use only
Q < P components of g. The omitted components of g may be replaced
with other values b_m, m = Q + 1, \ldots, P. Then, we have an approximate
representation of f, given as
\tilde{f} = \sum_{m=1}^{Q} g_m A_m + \sum_{m=Q+1}^{P} b_m A_m.   (11.22)
The error in the approximate representation as above is
\varepsilon = f - \tilde{f} = \sum_{m=Q+1}^{P} (g_m - b_m) A_m.   (11.23)
The MSE is given by
\overline{\varepsilon^2} = E[\varepsilon^T \varepsilon]
= E\left[\sum_{m=Q+1}^{P} \sum_{n=Q+1}^{P} (g_m - b_m)(g_n - b_n)\, A_m^T A_n\right]
= \sum_{m=Q+1}^{P} E[(g_m - b_m)^2].   (11.24)
The last step above follows from the orthonormality of A.
Taking the derivative of the MSE with respect to b_m and setting the result
to zero, we get
\frac{\partial \overline{\varepsilon^2}}{\partial b_m} = -2\, E[(g_m - b_m)] = 0.   (11.25)
The optimal (MMSE) choice for b_m is, therefore, given by
b_m = E[g_m] = \bar{g}_m = A_m^T E[f], \quad m = Q + 1, \ldots, P;   (11.26)
that is, the omitted components are replaced by their means. The MMSE is
given by
\overline{\varepsilon^2}_{\min} = \sum_{m=Q+1}^{P} E[(g_m - \bar{g}_m)^2]
= \sum_{m=Q+1}^{P} E[A_m^T (f - \bar{f})(f - \bar{f})^T A_m]
= \sum_{m=Q+1}^{P} A_m^T \Sigma_f A_m,   (11.27)
where \Sigma_f is the covariance matrix of f, with \bar{f} = E[f].
Now, if the basis vectors A_m are selected as the eigenvectors of \Sigma_f, that is,
\Sigma_f A_m = \lambda_m A_m,   (11.28)
and
\lambda_m = A_m^T \Sigma_f A_m,   (11.29)
because A_m^T A_m = 1, where the \lambda_m are the corresponding eigenvalues, then we
have
\overline{\varepsilon^2}_{\min} = \sum_{m=Q+1}^{P} \lambda_m.   (11.30)
Therefore, the MSE may be minimized by ordering the eigenvectors (the columns
of A) such that the corresponding eigenvalues are arranged in decreasing
order, that is, \lambda_1 > \lambda_2 > \cdots > \lambda_P. Then, if a component g_m of g is replaced
by b_m = \bar{g}_m, the MSE increases by \lambda_m. By replacing the components of g
corresponding to the eigenvalues at the lower end of the list, the MSE is kept
at its lowest-possible value for a chosen number of components Q.
From the above formulation and properties, it follows that the components
of g are mutually uncorrelated:
\Sigma_g = A^T \Sigma_f A = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_P \end{bmatrix} = \Lambda,   (11.31)
where \Lambda is a diagonal matrix with the eigenvalues \lambda_m placed along its diago-
nal. Because the eigenvalues \lambda_m are equal to the variances of g_m, a selection
of the larger eigenvalues implies the selection of the transform components
with the higher variance or information content across the ensemble of the
images considered.
The KLT has major applications in principal-component analysis (PCA),
image coding, data compression, and feature extraction for pattern classifi-
cation. Difficulties could exist in the computation of the eigenvectors and
eigenvalues of the large covariance matrices of even reasonably sized images.
It should be noted that a KLT is optimal only for the images represented
by the statistical parameters used to derive the transformation. New trans-
formations will need to be derived if changes occur in the statistics of the
image-generating process being considered, or if images of different statistical
characteristics need to be analyzed.
Because the KLT is a data-dependent transform, the transformation vectors
(the matrix A) need to be transmitted; however, if a large number of images
generated by the same underlying process are to be transmitted, the same
optimal transform is applicable, and the transformation vectors need to be
transmitted only once. Note that the error between the original image and
the image reconstituted from the KLT components needs to be transmitted
in order to facilitate lossless recovery of the image.
See Section 8.9.5 for a discussion on the application of the KLT for the
selection of the principal components in multiscale directional filtering.
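The KLT construction of Equations 11.28 through 11.31 can be sketched with an eigendecomposition of a sample covariance matrix. The correlated synthetic data below (an AR(1)-style covariance) are an assumption for illustration; with the eigenvectors ordered by decreasing eigenvalue, the transformed components are uncorrelated and their variances equal the eigenvalues.

```python
# Sketch of the KLT: eigendecomposition of the sample covariance of
# correlated "image vectors"; the transform decorrelates the components.
import numpy as np

rng = np.random.default_rng(2)
P, n_samples, rho = 4, 5000, 0.95
cov = rho ** np.abs(np.subtract.outer(np.arange(P), np.arange(P)))
f = rng.multivariate_normal(np.zeros(P), cov, size=n_samples)

sigma_f = np.cov(f, rowvar=False)            # sample covariance of f
eigvals, A = np.linalg.eigh(sigma_f)         # columns of A are eigenvectors
order = np.argsort(eigvals)[::-1]            # decreasing eigenvalue order
eigvals, A = eigvals[order], A[:, order]

g = f @ A                                    # KLT coefficients, g = A^T f
sigma_g = np.cov(g, rowvar=False)
print(np.round(eigvals, 3))                  # component variances
print(np.round(sigma_g, 3))                  # diagonal: decorrelated
```

Keeping only the components with the largest eigenvalues gives the MMSE-optimal truncated representation of Equation 11.30.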
11.6.3 Encoding of transform coefficients
Regardless of the transform used (such as the DFT, DCT, or KLT), the trans-
form coefficients form a set of continuous random variables, and have to be
quantized for encoding. This introduces quantization errors in the transform
coefficients that are transmitted, and hence, errors arise in the image recon-
structed from the transform-coded image. In the following paragraphs, a rela-
tionship is derived between the quantization error in the transform domain and
the error in the reconstructed image. Kuduvalli and Rangayyan [174, 338, 972]
used such a relationship to develop a method for error-free transform coding
of images.
Consider the general 2D linear transformation, with the forward and inverse
transform kernels consisting of orthogonal basis functions a(m, n, k, l) and
b(m, n, k, l), respectively, such that the forward and inverse transforms are
given by
F(k, l) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f(m, n)\, a(m, n, k, l),   (11.32)
k, l = 0, 1, \ldots, N - 1, and
f(m, n) = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} F(k, l)\, b(m, n, k, l),   (11.33)
m, n = 0, 1, \ldots, N - 1. (See Section 3.5.2.) Now, let the transform coefficient
F(k, l) be quantized to \tilde{F}(k, l), with a quantization error q_F(k, l) such that
F(k, l) = \tilde{F}(k, l) + q_F(k, l).   (11.34)
The reconstructed image from the quantized transform coefficients is given by
\tilde{f}(m, n) = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \tilde{F}(k, l)\, b(m, n, k, l).   (11.35)
The error in the reconstructed image is
q_f(m, n) = f(m, n) - \tilde{f}(m, n)
= \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} [F(k, l) - \tilde{F}(k, l)]\, b(m, n, k, l)
= \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} q_F(k, l)\, b(m, n, k, l).   (11.36)
The sum of the squared errors in the reconstructed image is
Q_f = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} |q_f(m, n)|^2
= \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \left\{ \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} q_F(k, l)\, b(m, n, k, l) \right\} \left\{ \sum_{k'=0}^{N-1} \sum_{l'=0}^{N-1} q_F^*(k', l')\, b^*(m, n, k', l') \right\}
= \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \sum_{k'=0}^{N-1} \sum_{l'=0}^{N-1} q_F(k, l)\, q_F^*(k', l') \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} b(m, n, k, l)\, b^*(m, n, k', l')
= \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} |q_F(k, l)|^2 = Q_F,   (11.37)
where the last line follows from the orthogonality of the basis functions b(m, n,
k, l), and Q_F is the sum of the squared quantization errors in the transform
domain. This result is related to Parseval's theorem in 2D; see Equation 2.55.
Applying the expectation operator to the first and the last expressions above,
we get
\sum_{m=0}^{N-1} \sum_{n=0}^{N-1} E[|q_f(m, n)|^2] = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E[|q_F(k, l)|^2],   (11.38)
or
\sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \overline{q_f^2}(m, n) = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \overline{q_F^2}(k, l) = \overline{Q^2},   (11.39)
where the overbar indicates the expected (average) values of the correspond-
ing variables, and \overline{Q^2} is the expected total squared error of quantization (in
either the image domain or the transform domain).
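The equality of Equation 11.37 can be checked numerically for any orthonormal transform. The sketch below uses the orthonormal DCT-II matrix as the transform and a random test image, both assumptions for illustration; the sum of squared quantization errors is identical in the two domains.

```python
# Numerical check of Equation 11.37: for an orthonormal transform, the
# total squared quantization error in the transform domain equals the
# total squared error in the reconstructed image.
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix."""
    k = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * m + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

rng = np.random.default_rng(3)
N, S = 16, 4.0
f = rng.integers(0, 256, size=(N, N)).astype(float)
C = dct_matrix(N)

F = C @ f @ C.T
F_q = S * np.round(F / S)            # uniform quantizer, step size S
f_rec = C.T @ F_q @ C

Q_F = np.sum((F - F_q) ** 2)         # squared error, transform domain
Q_f = np.sum((f - f_rec) ** 2)       # squared error, image domain
print(Q_F, Q_f)                      # equal to floating-point precision
```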
It is possible to derive a condition for the minimum average number of bits
required for encoding the transform coefficients for a given total distortion in
the image domain. Let us assume that the transform coefficients are normally
distributed. If the variance of the transform coefficient F(k, l) is \sigma_F^2(k, l), the
average number of bits required to encode the coefficient F(k, l) with the MSE
\overline{q_F^2}(k, l) is given by its rate-distortion function [973]:
R(k, l) = \frac{1}{2} \log_2 \left[ \frac{\sigma_F^2(k, l)}{\overline{q_F^2}(k, l)} \right].   (11.40)
The overall average number of bits required to encode the transform coeffi-
cients with a total squared error \overline{Q^2} is
R_{av} = \frac{1}{N^2} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} R(k, l)
= \frac{1}{2 N^2} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \log_2 \left[ \frac{\sigma_F^2(k, l)}{\overline{q_F^2}(k, l)} \right].   (11.41)
We now need to minimize R_{av} subject to the condition given by Equa-
tion 11.39. Using the method of Lagrange multipliers, the minimum occurs
when
\frac{\partial}{\partial \overline{q_F^2}(k, l)} \left\{ \frac{1}{2 N^2} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \log_2 \left[ \frac{\sigma_F^2(k, l)}{\overline{q_F^2}(k, l)} \right] - \mu \left[ \overline{Q^2} - \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \overline{q_F^2}(k, l) \right] \right\} = 0,   (11.42)
k, l = 0, 1, \ldots, N - 1, where \mu is the Lagrange multiplier. It follows that
-\frac{1}{2 N^2 \ln(2)\, \overline{q_F^2}(k, l)} + \mu = 0,   (11.43)
or
\overline{q_F^2}(k, l) = \frac{1}{2 N^2 \mu \ln(2)} = \overline{q^2},   (11.44)
k, l = 0, 1, \ldots, N - 1, where \overline{q^2} is the average MSE, which is a constant for
all of the transform coefficients. Thus, the average number of bits required
to encode the transform coefficients, R_{av}, is minimum when the total squared
error is equally distributed among all of the transform coefficients.
Maximum-error-limited encoding of transform coefficients: Kudu-
valli and Rangayyan [174, 338, 972] derived the following condition for en-
coding transform coefficients subject to a maximum error limit. Consider a
uniform quantizer with a quantization step size S for encoding the transform
coefficients, such that the maximum quantization error is limited to S/2. It
may be assumed that the quantization error is uniformly distributed over the
set of transform coefficients in the range [−S/2, +S/2] (see Figure 11.14). Then,
the average squared error in the transform domain is
\overline{q^2} = \frac{S^2}{12}.   (11.45)
From the result in Equation 11.39, it is seen that the errors in the recon-
structed image will also have a variance equal to \overline{q^2}. We now wish to estimate
the fraction of the total number of pixels in the reconstructed image that are
in error by more than S. This is given by the area under the tail of the PDF
of the reconstruction error, shown in Figure 11.14. The worst case occurs
when the entropy of the reconstruction errors is at its maximum, under the
constraint that the variance of the reconstruction errors is bounded by \overline{q^2};
this occurs when the error is normally distributed [126]. Therefore, the upper
bound on the estimated fraction of the pixels in error by more than S is
E(S) = 2 \int_{S}^{\infty} \frac{1}{\sqrt{2 \pi \overline{q^2}}} \exp\left(-\frac{x^2}{2 \overline{q^2}}\right) dx = 2\, \mathrm{erfc}(\sqrt{12}) = 5.46 \times 10^{-4},   (11.46)


FIGURE 11.14
Schematic PDFs of the transform-coefficient quantization error (uniform PDF,
solid line) and the image reconstruction error (Gaussian PDF, dashed line).
The figure represents the case with the quantization step size S = 1.
where erfc(x) is the error-function integral (the Gaussian tail probability),
defined as [974]
\mathrm{erfc}(x) = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt.   (11.47)
Thus, only a negligible number of pixels in the reconstructed image will
be in error by more than the quantization step size S. Conversely, if the
maximum error is desired to be ≤ S, a quantization step size of S could be
used to encode the transform coefficients, with only a negligibly small number
of the reconstructed pixels exceeding the error limit. The pixels in error by
more than the specified maximum could be encoded separately with a small
overhead. When the maximum allowed error is ≤ 0.5, error-free reconstruction
of the image is possible by simply rounding off the reconstructed values to
integers.
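The bound of Equation 11.46 can be checked empirically: quantize the coefficients of an orthonormal transform with step size S, reconstruct, and measure the fraction of pixels in error by more than S. The transform (orthonormal DCT-II) and the random 10 b test image below are assumptions for illustration; the measured fraction should stay near or below the 5.46 × 10⁻⁴ bound.

```python
# Empirical check of the maximum-error analysis: with coefficient
# quantization step S, only a tiny fraction of reconstructed pixels
# should be in error by more than S.
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix."""
    k = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * m + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

rng = np.random.default_rng(4)
N, S = 256, 8.0
f = rng.integers(0, 1024, size=(N, N)).astype(float)   # 10 b test data
C = dct_matrix(N)

F = C @ f @ C.T
F_q = S * np.round(F / S)            # max coefficient error = S/2
f_rec = C.T @ F_q @ C

frac = float(np.mean(np.abs(f - f_rec) > S))
print(frac)                          # near the 5.46e-4 upper bound
```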
Variable-length encoding and bit allocation: The lower bound on
the average number of bits required for encoding normally distributed trans-
form coefficients F(k, l) with the MSE \overline{q_F^2}(k, l) is given by Equation 11.40.
Goodness-of-fit studies of PDFs of transform coefficients [975] have shown
that transform coefficients tend to follow the Laplacian PDF. The PDF of a
transform coefficient F(k, l) may be modeled as
p(F(k, l)) = \frac{\alpha(k, l)}{2} \exp(-\alpha(k, l)\, |F(k, l)|),   (11.48)
where \alpha(k, l) = \sqrt{2}/\sigma_F(k, l) is the constant parameter of the Laplacian PDF.
A shift encoder could be used to encode the transform coefficients such that
the maximum quantization error is ≤ S. The shift-encoding procedure is
shown in Figure 11.15. In a shift encoder, 2\eta(k, l) levels are nominally allo-
cated to encode a transform coefficient F(k, l), covering the range [-\{\eta(k, l) -
1\} S, \{\eta(k, l) - 1\} S] with the codes 0, 1, 2, \ldots, 2\eta(k, l) - 2. The code 2\eta(k, l) - 1
indicates that the coefficient is out of the range [-\{\eta(k, l) - 1\} S, \{\eta(k, l) -
1\} S]. For the out-of-range coefficients, an additional 2\eta(k, l) levels are allo-
cated to cover the ranges [-\{2\eta(k, l) - 1\} S, -\eta(k, l) S] and [\eta(k, l) S, \{2\eta(k, l) -
1\} S]. The process is repeated with the allocation of additional levels until
the actual value of the transform coefficient to be encoded is reached. If the
code value is represented by a simple binary code at each level, the average
number of bits required to encode the transform coefficient F(k, l) is given
by [338]
R(k, l) = \frac{1 + \log_2 \eta(k, l)}{1 - \exp[-\alpha(k, l)\, \eta(k, l)\, S]},   (11.49)
and the nominal number of bits allocated to encode the transform coefficient
is
b(k, l) = \lceil \log_2 \eta(k, l) \rceil.   (11.50)
It is now required to find the b(k, l) that minimizes the average number of
bits R(k, l) required to encode F(k, l). This can be done by using a nonlinear
optimization technique such as the Newton–Raphson method. However, be-
cause only integral values of b(k, l) need to be searched, it is computationally
less expensive to search the space of b(k, l) ∈ [1, 31] for the corresponding
minimum value of R(k, l), which is the range of the nominal number of bits
allocated to encode the transform coefficient F(k, l).
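The escape-and-shift mechanism can be sketched with a simplified unsigned variant: each stage uses a fixed number of bits, the all-ones pattern acts as the out-of-range escape code, and out-of-range values are shifted down and re-encoded in the next stage. This is an illustrative simplification of the signed, Laplacian-matched scheme described above, and the function names are hypothetical.

```python
# Sketch of a shift (escape) code: each stage uses b bits; the all-ones
# codeword escapes to the next stage and shifts the representable range.

def shift_encode(value, b):
    escape = (1 << b) - 1          # all-ones pattern = "out of range"
    codes = []
    while value >= escape:         # shift down, stage by stage
        codes.append(escape)
        value -= escape
    codes.append(value)            # final in-range remainder
    return codes                   # each entry is one b-bit codeword

def shift_decode(codes, b):
    # each escape contributes 'escape'; the last code is the remainder
    return sum(codes)

b = 3
for v in (0, 5, 6, 7, 20):
    cw = shift_encode(v, b)
    print(v, cw, len(cw) * b, "bits")
```

Small (probable) values cost one stage; large (improbable) values cost extra stages, which matches a Laplacian-like source in the same spirit as Equation 11.49.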


FIGURE 11.15
Schematic representation of shift coding of transform coefficients with a Lapla-
cian PDF. With reference to the discussion in the text, the figure represents
\alpha(k, l)\, S = 0.5 and \eta(k, l) = 3.0.

This allocation requires an estimate of the variance \sigma_F^2(k, l) of the transform
coefficients, or a model of the energy-compacting property of the transform
used. Most of the linear orthogonal transforms used in practice result in the
concentration of the variance in the lower-order transform coefficients. The
variance of the transform coefficients may be modeled as
\sigma_F^2(k, l) = F(0, 0) \exp[-(\gamma k^2 + \delta l^2)],   (11.51)
where F(0, 0) is the lowest-order transform coefficient, and \gamma and \delta are the
constants of the model. For most transforms (except the KLT), F(0, 0) is the
average value or a scaled version of the average value of the image pixels. The
parameters \gamma and \delta may be estimated using a least-squares fit to the first few
transform coefficients of the image.
In an alternative coding procedure, a fixed total number of bits may be
allocated for encoding the transform coefficients, and the difference image
encoded by a lossless encoding method. In such a procedure, no attempt is
made to allocate additional bits for transform coefficients that result in errors
that fall out of the quantization range. The total number of bits allocated to
encode the transform coefficients is varied until the total average bit rate is at
its minimum. Using such a procedure, Cox et al. [976] found an optimal com-
bination of bit rates between the error image and the transform coefficients.
Wang and Goldberg [977] used a method of requantization of the quantization
errors in addition to encoding the error image; they too observed a minimum
total average bit rate after a number of iterations of requantization. Exper-
iments conducted by Kuduvalli and Rangayyan [174, 338, 972] showed that
the lowest bit rates obtained by such methods for reversible compression can
also be obtained by allocating additional bits to quantize the out-of-range
transform coefficients as described earlier. This is due to the fact that a large
quantization error in the transform domain, while needing only a few addi-
tional bits for encoding, will get redistributed over a large number of pixels
in the image domain, thereby increasing the entropy of the error image.
The large sizes of image arrays used for high-resolution representation of
medical images preclude the use of full-frame transforms. Partitioning an im-
age into blocks not only leads to computational advantages, but also permits
adaptation to the changing statistics of the image. In the coding procedure
used by Kuduvalli and Rangayyan [174, 338, 972], the images listed in Ta-
ble 11.5 were partitioned into blocks of size 256 × 256 pixels. The model
parameters \gamma and \delta in Equation 11.51 were computed for each block by us-
ing a least-squares fit to the corresponding set of transform coefficients. The
parameters were stored or transmitted along with the encoded transform co-
efficients in order to allow the decoder to reconstruct the model and the bit-
allocation table. Blocks at the boundaries of the images that were not squares
were encoded with 2D DPCM and the Huffman code.
Figure 11.16 shows the average bit rate for one of the images listed in
Table 11.5, as a function of the maximum allowed error, using four transforms.
The KLT (with the ACF estimated from the image) and the DCT show
the best performance among the four transforms. The performance of the
KLT is only slightly superior to, and in some cases slightly worse than, that
of the DCT; this is to be expected because of the general nonstationarity of
medical images, and due to the problems associated with the estimation of
the ACF matrix from a finite image.
Figure 11.17 shows the average bit rate, obtained by using the DCT, as a
function of the maximum allowed error, for four of the ten images listed in Ta-
ble 11.5. When the maximum allowed error is ≤ 0.5, error-free reconstruction
of the original image is possible; otherwise, the compression is irreversible.
Image Coding and Data Compression 999
The average bit rate for error-free reconstruction is seen to be in the range of
5 − 6 b/pixel for the images considered (down from 10 b/pixel in their original
format).
FIGURE 11.16
Average bit rate as a function of the maximum allowed error, using four transforms
(KLT, DCT, DFT, and WHT) with the first image listed in Table 11.5.
A block size of 256 × 256 pixels was used for each transform. The compression
is lossless if the maximum allowed error is ≤ 0.5; otherwise, it is irreversible or
lossy. See also Table 11.7. Reproduced with permission from G.R. Kuduvalli
and R.M. Rangayyan, "Performance analysis of reversible image compression
techniques for high-resolution digital teleradiology", IEEE Transactions on
Medical Imaging, 11(3): 430–445, 1992. © IEEE.
1000 Biomedical Image Analysis
FIGURE 11.17
Average bit rate as a function of the maximum allowed error, using the DCT,
for the first four images listed in Table 11.5. A block size of 256 × 256 pixels
was used. The compression is lossless if the maximum allowed error is ≤ 0.5;
otherwise, it is irreversible or lossy. See also Table 11.7. Reproduced
with permission from G.R. Kuduvalli and R.M. Rangayyan, "Performance
analysis of reversible image compression techniques for high-resolution digital
teleradiology", IEEE Transactions on Medical Imaging, 11(3): 430–445,
1992. © IEEE.
11.7 Interpolative Coding
Interpolative coding consists of encoding a subsampled image using a reversible
compression technique, deriving the values of the remaining pixels
via interpolation with respect to their neighboring pixels that have already
been processed, and then encoding the difference between the actual pixels
and the interpolated pixels in successive stages using discrete symbol coding
techniques. This technique is also referred to as hierarchical interpolation
(HINT) [978], and is illustrated in Figure 11.18. In the 9 × 9 image shown in
the figure, the pixels marked "1" correspond to the original image decimated
by a factor of 4. The decimated image could be encoded using any coding
technique. The pixels marked "2" are estimated from those marked "1" by
bilinear interpolation, and rounded to ensure reversibility. This completes one
iteration of interpolation. Next, the pixels marked "3" are interpolated from
the pixels marked "1" and "2", and the process is repeated to interpolate the
pixels marked "4" and "5". The differences between the actual pixel values
marked "2" – "5" and the corresponding interpolated values form a discrete
symbol set with a small dynamic range, and may be encoded efficiently using
Huffman, arithmetic, or LZW coding.

The illustration in Figure 11.18 corresponds to interpolation of order four;
higher-order interpolation may also be used. In general, interpolative coding
of order 2^P, where P is an integer, involves 2P iterations of interpolation.
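A minimal one-stage sketch of the HINT idea is given below; it is an illustration only (the scheme described above uses a four-level pyramid and 2D bilinear interpolation, while this sketch keeps every other column of a synthetic smooth image and predicts the missing columns from their two known neighbors). Rounding the prediction and transmitting integer residuals makes the stage exactly reversible.

```python
import numpy as np

# One-stage HINT sketch on an assumed synthetic smooth 10-bit image:
# keep every other column, predict the missing columns by rounded
# interpolation, and transmit only the integer residuals.
x, y = np.arange(9), np.arange(8)
f = np.round(512 + 400 * np.outer(np.sin(y / 3.0), np.cos(x / 4.0))).astype(int)

coarse = f[:, ::2]                                  # pixels kept ("1")
pred = np.round((coarse[:, :-1] + coarse[:, 1:]) / 2.0).astype(int)
resid = f[:, 1::2] - pred                           # small-dynamic-range symbols

# Decoder: given `coarse` and `resid`, repeat the prediction and add back.
g = np.empty_like(f)
g[:, ::2] = coarse
g[:, 1::2] = pred + resid
print(np.array_equal(g, f), int(np.abs(resid).max()))
```

Because the decoder repeats the identical rounded prediction, reconstruction is exact, while the residuals occupy a far smaller range than the pixels themselves and are therefore cheap to entropy-code.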
In the work of Kuduvalli and Rangayyan [174, 338], the digitized radiographic
images listed in Table 11.5 were partitioned into blocks of size 256 × 256
pixels for interpolative coding. The initial subsampled images were decorrelated
using 2D DPCM of order 1 × 1; the difference data were compressed
by Huffman coding. It was observed that the differences between the interpolated
and actual pixel values could be modeled by Laplacian PDFs (see
Figure 11.12). The variance of the interpolation errors was seen to decrease
with increasing resolution, because pixels are correlated more to their immediate
neighbors than to pixels farther away. The interpolation errors at different
iterations were modeled by Laplacian PDFs with the variance equal to the
corresponding mean-squared interpolation errors. The PDFs were then used in
compressing the interpolation errors via arithmetic or Huffman coding. LZW
coding does not need modeling of the error distribution, but performed considerably
worse than the Huffman and arithmetic codes. Figure 11.19 shows
the average bit rate, for eight images, as a function of the order of interpolation,
using Huffman coding as the post-encoding technique. It is observed
that increasing the order of interpolation has only a small effect on the overall
average bit rate.
For examples on the performance of HINT, see Tables 11.7, 11.11, 11.15,
and 11.16.
1 5 3 5 1 5 3 5 1
5 4 5 4 5 4 5 4 5
3 5 2 5 3 5 2 5 3
5 4 5 4 5 4 5 4 5
1 5 3 5 1 5 3 5 1
5 4 5 4 5 4 5 4 5
3 5 2 5 3 5 2 5 3
5 4 5 4 5 4 5 4 5
1 5 3 5 1 5 3 5 1
FIGURE 11.18
Stages of interpolative coding. The pixels marked "1" are coded and transmitted
first. Next, the pixels marked "2" are estimated from those marked
"1", and the differences are transmitted. The procedure continues iteratively
with the pixels marked "3", "4", and "5". Reproduced with permission from
G.R. Kuduvalli and R.M. Rangayyan, "Performance analysis of reversible
image compression techniques for high-resolution digital teleradiology", IEEE
Transactions on Medical Imaging, 11(3): 430–445, 1992. © IEEE.
FIGURE 11.19
Results of compression of eight of the images listed in Table 11.5 by interpolative
and Huffman coding. Order zero corresponds to 1 × 1 2D DPCM
coding. Reproduced with permission from G.R. Kuduvalli and R.M. Rangayyan,
"Performance analysis of reversible image compression techniques for
high-resolution digital teleradiology", IEEE Transactions on Medical Imaging,
11(3): 430–445, 1992. © IEEE.
11.8 Predictive Coding
Samples of real-life signals and images bear a high degree of correlation, especially
over small intervals of time or space. The correlation between samples
may also be viewed as statistical redundancy. An outcome of these
characteristics is that a sample of a temporal signal may be predicted from
a small number of its preceding samples. When such prediction is performed
in a linear manner, we have a linear prediction (LP) model, given
by [31, 176, 510, 833, 979]

\tilde{f}(n) = - \sum_{p=1}^{P} a(p) \, f(n - p) + G \, d(n),   (11.52)
where f(n) is the signal being modeled; \tilde{f}(n) is the predicted value of f(n);
a(p), p = 1, 2, \ldots, P, are the coefficients of the LP model; and P is the order
of the model. The signal f(n) is considered to be the output of a linear
system or filter with d(n) as the input or driving signal; G is the gain of
the system. Because the prediction model uses only the past values of the
output signal f(n), it is known as an autoregressive (AR) model. The need for
causality in physically realizable filters and signal processing systems dictates
the requirement to use only the past samples of f (and the present value of
d) in predicting the current value f(n). If the past values of d are also used,
the model will include a moving-average or MA component.

The model represented in Equation 11.52 indicates that, given the initial
set of P values of the signal f and the input or driving signal d, any future
value of f may be (approximately) computed with a knowledge of the set
of coefficients a(p). Therefore, the model coefficients a(p), p = 1, 2, \ldots, P,
represent the signal-generating process. The model coefficients may be used
to predict the values of f or to analyze the signal-generating process. Several
methods exist to derive the LP model coefficients for a given signal, subject
to certain conditions on the error of prediction [31, 176, 510, 833, 979].
In the context of image data compression, we have a few considerations that
differ from the temporal signal application described above. In most cases, the
input or driving signal d is not known; the omission of the related component
in the model represented in Equation 11.52 will cause only a small change in
the error of prediction. Furthermore, causality is not a matter of concern in
image processing; however, a certain sequence of accessing or processing of
the pixels in the given image needs to be defined, which could imply "past"
and "future" samples of the image; see Figure 10.19. In the context of image
processing, the samples used to predict the current pixel could be labeled as
the ROS of the model. Then, we could express the basic 2D LP model as

\tilde{f}(m, n) = - \sum_{(p, q) \in ROS} a(p, q) \, f(m - p, n - q).   (11.53)
In the application to image coding, the ROS needs to be defined such that,
in the decoding process, only those pixels that have already been decoded are
included in the ROS for the current pixel being processed.

The error of prediction is given by

e(m, n) = f(m, n) - \tilde{f}(m, n),   (11.54)

and the MSE between the original image and the predicted image is given by

\sigma^2 = E[e^2(m, n)].   (11.55)

The coefficients a(p, q) need to be chosen or derived so as to minimize the MSE
between the original image and the predicted image. Several approaches are
available for the derivation of optimal predictor coefficients [31, 176, 510, 833,
979, 980].

Exact reconstruction of the image requires that, in addition to the initial
conditions and the model coefficients, the error of prediction also be transmitted
and made available to the decoder. The advantage of LP-based image
compression lies in the fact that the error image tends to have a more concentrated
PDF than the original image [close to a Laplacian PDF in most cases;
see Figure 11.12 (b)], which lends itself well to efficient data compression.
In the simplest model of prediction, the current pixel f(m, n) may be modeled
as being equal to the preceding pixel f(m, n - 1) or f(m - 1, n); let

\tilde{f}(m, n) = f(m - 1, n).   (11.56)

Then, the error of prediction is given by

e(m, n) = f(m, n) - f(m - 1, n),   (11.57)

which represents simple DPCM. [Note: Any data coding method based upon
the difference between a data sample and a predicted value of the same using
any scheme is referred to as DPCM; hence, all LP-based methods fall under
the category of DPCM. The differentiation procedure given by Equation 11.13
and illustrated in Figure 11.12 is equivalent to LP with \tilde{f}(m, n) = f(m - 1, n)
as above.] Several simple combinations of the immediate neighbors of the
current pixel may also be used in the prediction model; see Section 11.10.2.
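The simple predictor of Equation 11.56 can be sketched in a few lines; the synthetic smooth image below is an assumption for illustration, not data from the study cited in the text.

```python
import numpy as np

# Sketch of the simplest DPCM predictor, f~(m, n) = f(m - 1, n): on
# correlated data the prediction error has a far more concentrated
# histogram, and hence a lower first-order entropy, than the image itself.
def entropy_bits(a):
    """First-order entropy of an integer array, in bits per sample."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

m, n = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
f = np.round(512 + 300 * np.sin(m / 9.0) * np.cos(n / 7.0)).astype(int)

e = f[1:, :] - f[:-1, :]              # e(m, n) = f(m, n) - f(m - 1, n)
print(entropy_bits(f), entropy_bits(e))

# The decoder recovers f exactly from the first row and the error image
# by cumulative summation along the columns.
g = np.vstack([f[:1, :], e]).cumsum(axis=0)
print(np.array_equal(g, f))
```

The entropy of the error image is well below that of the image, which is the gain exploited by all the DPCM variants discussed in this section.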
Efficient modeling requires the use of the optimal model order (ROS size)
and the derivation of the optimal coefficients, subject to conditions related to
the minimization of the MSE. Several methods for the derivation of the model
coefficients are described in the following sections.
11.8.1 Two-dimensional linear prediction
The error of prediction in the 2D LP model is given by

e(m, n) = f(m, n) - \tilde{f}(m, n)
        = f(m, n) + \sum_{(p, q) \in ROS} a(p, q) \, f(m - p, n - q).   (11.58)
The squared error is given by

e^2(m, n) = f^2(m, n) + 2 \sum_{(p, q) \in ROS} a(p, q) \, f(m, n) \, f(m - p, n - q)
          + \sum_{(p, q) \in ROS} \sum_{(r, s) \in ROS} a(p, q) \, a(r, s) \, f(m - p, n - q) \, f(m - r, n - s).   (11.59)

Applying the statistical expectation operator, we get

\sigma^2 = E[e^2(m, n)]
         = \phi_f(0, 0) + 2 \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(p, q)
         + \sum_{(p, q) \in ROS} \sum_{(r, s) \in ROS} a(p, q) \, a(r, s) \, \phi_f(r - p, s - q),   (11.60)

where \phi_f(p, q) is the ACF of f, and the image-generating process is assumed
to be wide-sense stationary.
The coefficients that minimize the MSE may be derived by setting to zero
the derivative of \sigma^2 with respect to a(p, q), (p, q) \in ROS, which leads to

\phi_f(r, s) + \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(r - p, s - q) = 0, \quad (r, s) \in ROS.   (11.61)

Using this result in Equation 11.60, we get

\sigma^2 = \phi_f(0, 0) + \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(p, q).   (11.62)

Combining Equations 11.61 and 11.62, we get

\phi_f(r, s) + \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(r - p, s - q) =
\begin{cases} \sigma^2, & (r, s) = (0, 0) \\ 0, & (r, s) \in ROS, \; (r, s) \neq (0, 0) \end{cases}   (11.63)

The equations represented by the expressions above are known as the 2D
normal or Yule–Walker equations, and may be solved to derive the prediction
coefficients. Because the method uses the ACF of the image to derive the
prediction coefficients, it is known as the autocorrelation method [510]. The ACF
may be estimated from the given finite image f(m, n), m = 0, 1, 2, \ldots, M - 1;
n = 0, 1, 2, \ldots, N - 1, as

\tilde{\phi}_f(p, q) = \frac{1}{MN} \sum_{m=p}^{M-1} \sum_{n=q}^{N-1} f(m, n) \, f(m - p, n - q).   (11.64)
The prediction coefficients may also be estimated by using least-squares
methods to minimize the prediction error averaged over the entire image,
indicated as (m, n) \in IMG, as follows:

\varepsilon^2 = \frac{1}{MN} \sum_{(m, n) \in IMG} e^2(m, n)
= \frac{1}{MN} \sum_{(m, n) \in IMG} \Big[ f^2(m, n) + 2 \sum_{(p, q) \in ROS} a(p, q) \, f(m, n) \, f(m - p, n - q)
+ \sum_{(p, q) \in ROS} \sum_{(r, s) \in ROS} a(p, q) \, a(r, s) \, f(m - p, n - q) \, f(m - r, n - s) \Big]
= \phi_f(0, 0, 0, 0) + 2 \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(0, 0, p, q)
+ \sum_{(p, q) \in ROS} \sum_{(r, s) \in ROS} a(p, q) \, a(r, s) \, \phi_f(p, q, r, s).   (11.65)

Here, \phi_f is the covariance of the image f, defined as

\phi_f(p, q, r, s) = \frac{1}{MN} \sum_{(m, n) \in IMG} f(m - p, n - q) \, f(m - r, n - s).   (11.66)

The coefficients that minimize the averaged error may be derived by setting
to zero the derivative of \varepsilon^2 with respect to a(p, q), which leads to

\phi_f(0, 0, r, s) + \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(p, q, r, s) = 0.   (11.67)

It follows that

\varepsilon^2 = \phi_f(0, 0, 0, 0) + \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(0, 0, p, q).   (11.68)

The 2D normal equations for this condition are given by

\phi_f(0, 0, r, s) + \sum_{(p, q) \in ROS} a(p, q) \, \phi_f(p, q, r, s) =
\begin{cases} \varepsilon^2, & (r, s) = (0, 0) \\ 0, & (r, s) \in ROS, \; (r, s) \neq (0, 0) \end{cases}   (11.69)

which may be solved to obtain the prediction coefficients. Because the covariance
of the image is used to derive the prediction coefficients, this method is
known as the covariance method.
Now, if the region IMG is defined so as to span the entire image array
of size M × N, and the image is assumed to be zero outside the range m =
0, 1, 2, \ldots, M - 1; n = 0, 1, 2, \ldots, N - 1, we have

\phi_f(p, q, r, s) = \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m - p, n - q) \, f(m - r, n - s)
= \tilde{\phi}_f(r - p, s - q),   (11.70)

where \tilde{\phi}_f is an estimate of the ACF of the image f. Then, the covariance
method yields results that are identical to the results given by the autocorrelation
method.
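The autocorrelation method can be sketched numerically for a small quarter-plane ROS; the smoothed random field and the three-point ROS {(0,1), (1,0), (1,1)} below are assumptions for illustration, not the configuration used in the book.

```python
import numpy as np

# Illustrative sketch of the autocorrelation method for 2D LP: estimate the
# ACF in the manner of Equation 11.64 and solve the normal equations
# (11.61) for the coefficients a(p, q) of a small quarter-plane ROS.
rng = np.random.default_rng(3)
f = rng.standard_normal((64, 64))
for _ in range(3):                       # impose some spatial correlation
    f = (f + np.roll(f, 1, axis=0) + np.roll(f, 1, axis=1)) / 3.0
M, N = f.shape

def acf(p, q):
    """Biased ACF estimate phi(p, q); negative lags via phi(-p,-q)=phi(p,q)."""
    if p < 0:
        p, q = -p, -q
    if q >= 0:
        return (f[p:, q:] * f[:M - p, :N - q]).sum() / (M * N)
    return (f[p:, :N + q] * f[:M - p, -q:]).sum() / (M * N)

ros = [(0, 1), (1, 0), (1, 1)]
A = np.array([[acf(r - p, s - q) for (p, q) in ros] for (r, s) in ros])
b = -np.array([acf(r, s) for (r, s) in ros])
a = np.linalg.solve(A, b)                # prediction coefficients a(p, q)

pred = -(a[0] * f[1:, :-1] + a[1] * f[:-1, 1:] + a[2] * f[:-1, :-1])
err = f[1:, 1:] - pred
print(f.var(), err.var())                # the model reduces the variance
```

For a realistic block-based coder the same linear system would simply be assembled and solved per block, as described in the results below.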
Equation 11.63 may be expressed in matrix form as

\Phi_f \, a = \epsilon,   (11.71)

where, for the case of a QP ROS of size P_1 × P_2, the matrices \Phi_f, a, and
\epsilon are of size (P_1 + 1)(P_2 + 1) × (P_1 + 1)(P_2 + 1), (P_1 + 1)(P_2 + 1) × 1, and
(P_1 + 1)(P_2 + 1) × 1, respectively. The extended ACF matrix \Phi_f is given by

\Phi_f = \begin{bmatrix}
\phi(0, 0) & \cdots & \phi(0, -P_2) & \phi(-1, 0) & \cdots & \phi(-1, -P_2) & \phi(-2, 0) & \cdots & \phi(-P_1, -P_2) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
\phi(0, P_2) & \cdots & \phi(0, 0) & \phi(-1, P_2) & \cdots & \phi(-1, 0) & \phi(-2, P_2) & \cdots & \phi(-P_1, 0) \\
\phi(1, 0) & \cdots & \phi(1, -P_2) & \phi(0, 0) & \cdots & \phi(0, -P_2) & \phi(-1, 0) & \cdots & \phi(-P_1 + 1, -P_2) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
\phi(1, P_2) & \cdots & \phi(1, 0) & \phi(0, P_2) & \cdots & \phi(0, 0) & \phi(-1, P_2) & \cdots & \phi(-P_1 + 1, 0) \\
\phi(2, 0) & \cdots & \phi(2, -P_2) & \phi(1, 0) & \cdots & \phi(1, -P_2) & \phi(0, 0) & \cdots & \phi(-P_1 + 2, -P_2) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\phi(P_1, P_2) & \cdots & \phi(P_1, 0) & \phi(P_1 - 1, P_2) & \cdots & \phi(P_1 - 1, 0) & \phi(P_1 - 2, P_2) & \cdots & \phi(0, 0)
\end{bmatrix}   (11.72)

where the subscript f has been dropped from the entries within the matrix
for the sake of brevity. The matrices composed by the prediction coefficients
and the error are given by

a = [a(0, 0), \ldots, a(0, P_2), \; a(1, 0), \ldots, a(1, P_2), \; a(2, 0), \ldots, a(P_1, P_2)]^T
\quad \text{and} \quad
\epsilon = [\sigma^2, 0, \ldots, 0]^T.   (11.73)

The matrix \Phi_f is Toeplitz-block-Toeplitz in nature; efficient algorithms are
available for the inversion of such matrices [981, 982].
The methods described above to compute the prediction coefficients assume
the image to be stationary over the entire frame available. In practice, most
images are nonstationary, and may be assumed to be locally stationary only
over relatively small segments or ROIs. In order to maintain the optimality
of the model over the entire image, and in order to maintain the error of
prediction at low levels, we could follow one of two procedures:

• partition the image into small blocks over which stationarity may be
assumed, and compute the prediction coefficients independently for each
block, or

• adapt the model to the changing statistics of the image.
Encoding the prediction error: In order to facilitate error-free reconstruction
of the image, the prediction error has to be transmitted and made
available at the decoder (in addition to the prediction coefficients and the
initial conditions). For quantized original pixel values, the prediction error may
be rounded off to integers that may be encoded without error using a source
coding technique such as the Huffman code. The prediction error has been
observed to possess a Laplacian PDF [174, 338], which lends itself well to efficient
compression by the Huffman code.
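A toy version of this final entropy-coding step can be sketched as follows; the heap-based coder below is illustrative only and is not the coder used in the study cited in the text.

```python
import heapq
from collections import Counter
import numpy as np

# Sketch: rounded prediction errors (roughly Laplacian) are Huffman-coded.
def huffman_code(symbols):
    """Map each symbol to a prefix-free bit string built from its frequency."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)                      # tie-breaker for the heap
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, uid, merged))
        uid += 1
    return heap[0][2]

rng = np.random.default_rng(7)
errors = np.round(rng.laplace(0.0, 2.0, 10_000)).astype(int)
code = huffman_code(errors.tolist())
bits = sum(len(code[e]) for e in errors.tolist())
print(bits / errors.size)   # average bit rate, below a fixed-length code
```

Because the Laplacian PDF concentrates probability near zero, short codewords are assigned to the small, frequent error values, which is where the bit-rate savings come from.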
Results of application to medical images: The average bit rates obtained
by the application of the autocorrelation method of computing the
prediction coefficients, using blocks of size 128 × 128 pixels, to six of the images
listed in Table 11.5, are shown in Figure 11.20, for various model orders.
For the images used, comparable performance was obtained using NSHP and
QP ROSs of similar extent. Good compression performance was obtained,
with average bit rates in the range 2.5 − 3.2 b/pixel for most of the images,
with the original pixel values at 10 b/pixel, using QP ROSs of size 2 × 2 or
3 × 3.

See Section 5.4.10 for a method for the detection of calcifications based
upon the error of prediction.
11.8.2 Multichannel linear prediction
The difference between LP in 1D and 2D may be bridged by multichannel
LP, where a certain number of rows of the given image may be viewed as a
collection of multichannel signals [337, 338, 979, 983, 984, 985, 986]. Kuduvalli
and Rangayyan [174, 337, 338] proposed the following procedures for the
application of multichannel LP to predictive coding and compression of 2D
images.

Consider a multichannel signal with (P_2 + 1) channels, with the channels
indexed as q = 0, 1, 2, \ldots, P_2, and the individual signals labeled as f_q(m),
m = 0, 1, 2, \ldots, N - 1. The collection of the signal values at a position m,
given by f_q(m), q = 0, 1, 2, \ldots, P_2, may be viewed as a multichannel signal or
vector (or a matrix) of size (P_2 + 1) × 1; see Figures 11.21 and 11.22. If we
were to use a multichannel linear predictor of order P_1, we could predict the
vector f(m) as a linear combination of the vectors f(m - p), p = 1, 2, \ldots, P_1:

\tilde{f}(m) = - \sum_{p=1}^{P_1} a(p) \, f(m - p),   (11.74)

where a(p), p = 1, 2, \ldots, P_1, are multichannel LP coefficient matrices, each of
size (P_2 + 1) × (P_2 + 1).
FIGURE 11.20
Results of compression of six of the images listed in Table 11.5 by 2D LP
(autocorrelation method) and Huffman coding. The method was applied on
a block-by-block basis, using blocks of size 128 × 128 pixels. In the case of
modeling using the NSHP ROS, a model order of 1.5 indicates a 1 × 1 × 1
ROS (see Figure 10.19), 2.5 indicates a 2 × 2 × 2 ROS, etc. The orders of
models using the QP ROS are indicated by integers: 2 indicates a 2 × 2 ROS, 3
indicates a 3 × 3 ROS, etc. Reproduced with permission from G.R. Kuduvalli
and R.M. Rangayyan, "Performance analysis of reversible image compression
techniques for high-resolution digital teleradiology", IEEE Transactions on
Medical Imaging, 11(3): 430–445, 1992. © IEEE.
FIGURE 11.21
Multichannel linear prediction. Each row of the image is viewed as a channel
or component of a multichannel signal or vector. The column index of the
image may be considered to be equivalent to a temporal index [174, 337,
338]. The indices shown correspond to Equations 11.74 and 11.88. See also
Figure 11.22.

The error of prediction is given by


P1
X
e(m) = f (m) + a(p) f (m ; p): (11.75)
p=1
The covariance matrix of the error of prediction is given by
 e = E e(m) eT (m)]: (11.76)
For optimal prediction, we need to derive the prediction coecient matrices
a(p) that minimize the trace of the covariance matrix of the error of prediction.
From Equation 11.75 we can write
e(m) eT (m) = f (m) f T (m)
P1
X P1
X
+ a(p) f (m ; p) f T (m) + f (m) f T (m ; q) aT (q)
p=1 q=1
XP1 X
P1
+ a(p) f (m ; p) f T (m ; q) aT (q): (11.77)
p=1 q=1
Applying the statistical expectation operator, and assuming wide-sense stationarity
of the multichannel signal-generating process, we get

\Sigma_e = \Phi_c(0) + \sum_{p=1}^{P_1} a(p) \, \Phi_c(-p)
+ \sum_{q=1}^{P_1} \Phi_c(q) \, a^T(q) + \sum_{p=1}^{P_1} \sum_{q=1}^{P_1} a(p) \, \Phi_c(q - p) \, a^T(q),   (11.78)

where \Phi_c is the ACF of the image computed over the set of rows or channels
being used in the multichannel prediction model, given by

\Phi_c(r) = \begin{bmatrix} \phi_{00}(r) & \cdots & \phi_{0 P_2}(r) \\ \vdots & \ddots & \vdots \\ \phi_{P_2 0}(r) & \cdots & \phi_{P_2 P_2}(r) \end{bmatrix}   (11.79)

and

\phi_{pq}(r) = E[f_p(m) \, f_q(m - r)].   (11.80)

FIGURE 11.22
Multichannel LP applied to a 2D image [337, 338]. See also Figure 11.21.
Reproduced with permission from G.R. Kuduvalli and R.M. Rangayyan, "An
algorithm for direct computation of 2-D linear prediction coefficients", IEEE
Transactions on Signal Processing, 41(2): 996–1000, 1993. © IEEE.
In order to minimize the trace of the error covariance matrix \Sigma_e, we could
differentiate both sides of Equation 11.78 with respect to the prediction
coefficient matrices a(r), r = 1, 2, \ldots, P_1, and equate the result to the null
matrix of size (P_2 + 1) × (P_2 + 1), which leads to

0 = \frac{\partial \Sigma_e}{\partial a(r)}, \quad r = 1, 2, \ldots, P_1
= 2 \, \Phi_c(-r) + 2 \sum_{p=1}^{P_1} \Phi_c^T(r - p) \, a^T(p)
= \Phi_c(-r) + \sum_{p=1}^{P_1} \Phi_c(p - r) \, a^T(p), \quad r = 1, 2, \ldots, P_1.   (11.81)

[Note: \Phi_c^T(r - p) = \Phi_c(p - r).]
Now, Equation 11.78 may be rewritten as

\Sigma_e = \Phi_c(0) + \sum_{p=1}^{P_1} a(p) \, \Phi_c(-p)
+ \sum_{q=1}^{P_1} \Big[ \Phi_c(q) + \sum_{p=1}^{P_1} a(p) \, \Phi_c(q - p) \Big] \, a^T(q)
= \Phi_c(0) + \sum_{p=1}^{P_1} a(p) \, \Phi_c^T(p)
= \Phi_c(0) + \sum_{p=1}^{P_1} \Phi_c(p) \, a^T(p),   (11.82)

where the bracketed term vanishes on account of Equation 11.81.
The relationships derived above may be summarized as

\Phi_c \, A = E,   (11.83)

where the matrices are given in expanded form as

\begin{bmatrix} \Phi_c(0) & \Phi_c(1) & \cdots & \Phi_c(P_1) \\ \Phi_c(-1) & \Phi_c(0) & \cdots & \Phi_c(P_1 - 1) \\ \vdots & \vdots & \ddots & \vdots \\ \Phi_c(-P_1) & \Phi_c(-P_1 + 1) & \cdots & \Phi_c(0) \end{bmatrix}
\begin{bmatrix} I \\ a^T(1) \\ \vdots \\ a^T(P_1) \end{bmatrix} =
\begin{bmatrix} \Sigma_e \\ 0 \\ \vdots \\ 0 \end{bmatrix}.   (11.84)

In the equation above, the submatrices \Phi_c are as defined in Equation 11.79;
I is the identity matrix of size (P_2 + 1) × (P_2 + 1); and 0 is the null matrix
of size (P_2 + 1) × (P_2 + 1). This system of equations may be referred to as
the multichannel version of the Yule–Walker equations. The solution to this
set of equations may be used to obtain the 2D LP coefficients by making the
following associations:
\phi_{pq}(r) = \phi_f(r, p - q)   (11.85)

(compare Equations 11.80 and 11.64);

\epsilon = \Sigma_e \, a_0;   (11.86)

and

a_r = a^T(r) \, a_0,   (11.87)

where a_r = [a(r, 0), a(r, 1), \ldots, a(r, P_2)]^T is composed of the elements of the
rth row of the matrix of prediction coefficients a given in Equation 11.73
(written as a column matrix).
The Levinson-Wiggins-Robinson algorithm: The multichannel prediction
coefficient matrix may be obtained by the application of the algorithms
due to Levinson [987] and Wiggins and Robinson [988]. In the multichannel
version of this algorithm [337, 338], the prediction coefficients for order (P_1 + 1)
are recursively related to those for order P_1. The prediction model given by
Equation 11.74 is known as the forward predictor. Going in the opposite direction,
the backward predictor is defined to predict the vector f(m) in terms
of the vectors f(m + p), p = 1, 2, \ldots, P_1, as

\tilde{f}(m) = - \sum_{p=1}^{P_1} b(p) \, f(m + p),   (11.88)

where b(p), p = 1, 2, \ldots, P_1, are the multichannel backward prediction coefficient
matrices, each of size (P_2 + 1) × (P_2 + 1); see Figure 11.21.

In order to derive the multichannel version of Levinson's algorithm, let us
rewrite the multichannel ACF matrix in Equations 11.83 and 11.84 as follows:

\Phi_{P_1 + 1} = \begin{bmatrix} \Phi(0) & \Phi(1) & \cdots & \Phi(P_1) \\ \Phi(-1) & \Phi(0) & \cdots & \Phi(P_1 - 1) \\ \vdots & \vdots & \ddots & \vdots \\ \Phi(-P_1) & \Phi(-P_1 + 1) & \cdots & \Phi(0) \end{bmatrix},   (11.89)
where the subscript (P_1 + 1) is used to indicate the order of the model, and
the subscript c has been dropped from the submatrices for compact notation.
The matrix \Phi_{P_1 + 1} may be partitioned as

\Phi_{P_1 + 1} = \begin{bmatrix} \Phi(0) & \Delta_{P_1}^T \\ \Delta_{P_1} & \Phi_{P_1} \end{bmatrix} = \begin{bmatrix} \Phi_{P_1} & \Delta_{P_1}^{\#} \\ \Delta_{P_1}^{\# T} & \Phi(0) \end{bmatrix},   (11.90)

where

\Delta_{P_1} = [\Phi(1), \Phi(2), \ldots, \Phi(P_1)]^T,   (11.91)

\Delta_{P_1}^{\# T} = [\Phi(-P_1), \Phi(-P_1 + 1), \ldots, \Phi(-1)],   (11.92)

and the property that \Phi(r) = \Phi^T(-r) has been used. It follows that

\Delta_{P_1} = \begin{bmatrix} \Delta_{P_1 - 1} \\ \Phi(-P_1) \end{bmatrix}   (11.93)

and

\Delta_{P_1}^{\#} = \begin{bmatrix} \Phi(P_1) \\ \Delta_{P_1 - 1}^{\#} \end{bmatrix}.   (11.94)
Let us also define partitions of the forward and backward prediction coefficient
matrices as follows:

A_{P_1} = \begin{bmatrix} I \\ \tilde{A}_{P_1} \end{bmatrix}   (11.95)

and

B_{P_1} = \begin{bmatrix} \tilde{B}_{P_1} \\ I \end{bmatrix},   (11.96)

where A_{P_1} is the same as A in Equations 11.83 and 11.84, and B_{P_1} is formed
in a similar manner for the backward predictor.

Using the partitions as defined above, we may rewrite the multichannel
Yule–Walker equations, given by Equation 11.84, in two forms for forward
and backward prediction, as follows:

\begin{bmatrix} \Phi(0) & \Delta_{P_1}^T \\ \Delta_{P_1} & \Phi_{P_1} \end{bmatrix} \begin{bmatrix} I \\ \tilde{A}_{P_1} \end{bmatrix} = \begin{bmatrix} \Sigma_{P_1}^f \\ 0_{P_1} \end{bmatrix}   (11.97)

and

\begin{bmatrix} \Phi_{P_1} & \Delta_{P_1}^{\#} \\ \Delta_{P_1}^{\# T} & \Phi(0) \end{bmatrix} \begin{bmatrix} \tilde{B}_{P_1} \\ I \end{bmatrix} = \begin{bmatrix} 0_{P_1} \\ \Sigma_{P_1}^b \end{bmatrix},   (11.98)

where 0_{P_1} is the null matrix of size P_1(P_2 + 1) × (P_2 + 1). The matrix \Sigma_{P_1}^f
is the covariance matrix of the error of forward prediction (the same as \Sigma_e
given by Equation 11.82), with the matrix \Sigma_{P_1}^b being the counterpart for the
backward predictor. It follows that

\tilde{A}_{P_1} = - \Phi_{P_1}^{-1} \Delta_{P_1},   (11.99)

\tilde{B}_{P_1} = - \Phi_{P_1}^{-1} \Delta_{P_1}^{\#},   (11.100)

\Sigma_{P_1}^f = \Phi(0) + \Delta_{P_1}^T \tilde{A}_{P_1},   (11.101)

and

\Sigma_{P_1}^b = \Phi(0) + \Delta_{P_1}^{\# T} \tilde{B}_{P_1}.   (11.102)
Applying the inversion theorem for partitioned matrices [979], and making
use of the preceding six relationships, we get

\Phi_{P_1 + 1}^{-1} = \begin{bmatrix} 0 & 0_{P_1}^T \\ 0_{P_1} & \Phi_{P_1}^{-1} \end{bmatrix} + A_{P_1} [\Sigma_{P_1}^f]^{-1} A_{P_1}^T   (11.103)

= \begin{bmatrix} \Phi_{P_1}^{-1} & 0_{P_1} \\ 0_{P_1}^T & 0 \end{bmatrix} + B_{P_1} [\Sigma_{P_1}^b]^{-1} B_{P_1}^T.   (11.104)
Multiplying both sides of Equation 11.104 by \Phi_{P_1 + 1}, making use of the partitioned
form in Equation 11.93, and using Equation 11.99, we get

\tilde{A}_{P_1 + 1} = \begin{bmatrix} \tilde{A}_{P_1} \\ 0 \end{bmatrix} - B_{P_1} [\Sigma_{P_1}^b]^{-1} B_{P_1}^T \Delta_{P_1 + 1}.   (11.105)

Extracting the lower (P_2 + 1) × (P_2 + 1) matrix from both sides of Equation
11.105 in its partitioned form, we have

a_{P_1 + 1}^T(P_1 + 1) = - [\Sigma_{P_1}^b]^{-1} B_{P_1}^T \Delta_{P_1 + 1},   (11.106)

which upon transposition yields

a_{P_1 + 1}(P_1 + 1) = - \Delta_{P_1 + 1}^T B_{P_1} [\Sigma_{P_1}^b]^{-1}
= - [\Delta_{P_1 + 1}^b]^T [\Sigma_{P_1}^b]^{-1},   (11.107)

where

\Delta_{P_1 + 1}^b = B_{P_1}^T \Delta_{P_1 + 1}
= \Phi(-P_1 - 1) + \sum_{p=1}^{P_1} b_{P_1}(p) \, \Phi(-P_1 - 1 + p).   (11.108)

Using Equation 11.107 in Equation 11.105, we get

\tilde{A}_{P_1 + 1} = \begin{bmatrix} \tilde{A}_{P_1} \\ 0 \end{bmatrix} + B_{P_1} \, a_{P_1 + 1}^T(P_1 + 1),   (11.109)
which upon transposition and expansion of the matrix notation yields

a_{P_1 + 1}(p) = a_{P_1}(p) + a_{P_1 + 1}(P_1 + 1) \, b_{P_1}(P_1 + 1 - p), \quad p = 1, 2, \ldots, P_1.   (11.110)

Similarly, multiplying both sides of Equation 11.103 by \Delta_{P_1 + 1}^{\#} and using
Equation 11.100, we get

b_{P_1 + 1}(P_1 + 1) = - [\Delta_{P_1 + 1}^f]^T [\Sigma_{P_1}^f]^{-1},   (11.111)

where

\Delta_{P_1 + 1}^f = \Phi(P_1 + 1) + \sum_{p=1}^{P_1} a_{P_1}(p) \, \Phi(P_1 + 1 - p)   (11.112)

and

b_{P_1 + 1}(p) = b_{P_1}(p) + b_{P_1 + 1}(P_1 + 1) \, a_{P_1}(P_1 + 1 - p), \quad p = 1, 2, \ldots, P_1.   (11.113)

Substituting Equation 11.109 in Equation 11.101 and using the partitioned
form of \Delta_{P_1 + 1} in Equation 11.93, we get

\Sigma_{P_1 + 1}^f = \Phi(0) + \Delta_{P_1}^T \tilde{A}_{P_1} + \Delta_{P_1 + 1}^T B_{P_1} \, a_{P_1 + 1}^T(P_1 + 1)
= \Sigma_{P_1}^f + [\Delta_{P_1 + 1}^b]^T \, a_{P_1 + 1}^T(P_1 + 1)
= \Sigma_{P_1}^f - a_{P_1 + 1}(P_1 + 1) \, \Sigma_{P_1}^b \, a_{P_1 + 1}^T(P_1 + 1),   (11.114)

where Equation 11.107 has been used in the last step. Similarly, we can obtain
an expression for the covariance matrix of the backward prediction error as

\Sigma_{P_1 + 1}^b = \Sigma_{P_1}^b - b_{P_1 + 1}(P_1 + 1) \, \Sigma_{P_1}^f \, b_{P_1 + 1}^T(P_1 + 1).   (11.115)
Now, consider the matrix product

\begin{bmatrix} 0 & B_{P_1}^T \end{bmatrix} \Phi_{P_1 + 2} \begin{bmatrix} A_{P_1} \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 & B_{P_1}^T \end{bmatrix} \begin{bmatrix} \Phi(0) & \Delta_{P_1 + 1}^T \\ \Delta_{P_1 + 1} & \Phi_{P_1 + 1} \end{bmatrix} \begin{bmatrix} A_{P_1} \\ 0 \end{bmatrix}
= \begin{bmatrix} B_{P_1}^T \Delta_{P_1 + 1} & B_{P_1}^T \Phi_{P_1 + 1} \end{bmatrix} \begin{bmatrix} A_{P_1} \\ 0 \end{bmatrix}
= \begin{bmatrix} \Delta_{P_1 + 1}^b & 0 & \cdots & \Sigma_{P_1}^b \end{bmatrix} \begin{bmatrix} I \\ \tilde{A}_{P_1} \\ 0 \end{bmatrix}
= \Delta_{P_1 + 1}^b.   (11.116)

Taking the transpose of the expression above, and noting that \Phi_{P_1 + 2} is symmetric,
we get

[\Delta_{P_1 + 1}^b]^T = \begin{bmatrix} A_{P_1}^T & 0 \end{bmatrix} \Phi_{P_1 + 2} \begin{bmatrix} 0 \\ B_{P_1} \end{bmatrix} = \Delta_{P_1 + 1}^f.   (11.117)
Equations 11.105, 11.107, 11.109 – 11.112, 11.114, 11.115, and 11.117 constitute
the Levinson-Wiggins-Robinson algorithm, with the initialization

\Sigma_0^f = \Sigma_0^b = \Phi_c(0).   (11.118)
With the autocorrelation matrices \Phi(p) defined by the association given in
Equation 11.85, the matrix \Phi_{P_1 + 1} is a Toeplitz-block-Toeplitz matrix: the
block elements (submatrices) \Phi(p) along the diagonals of \Phi_{P_1 + 1} are mutually
identical, and furthermore, the elements along the diagonals of each submatrix
\Phi(p) are mutually identical. Thus, \Phi_{P_1 + 1} and \Phi(p) are symmetrical
about their cross diagonals (that is, they are per-symmetric). A property of
Toeplitz-block-Toeplitz matrices that is of interest here is defined in terms of
the exchange matrices

J = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & \cdots & 0 & 0 & 0 \end{bmatrix}   (11.119)

of size (P_2 + 1) × (P_2 + 1), and

J_{P_1} = \begin{bmatrix} 0 & 0 & \cdots & 0 & J \\ 0 & 0 & \cdots & J & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ J & \cdots & 0 & 0 & 0 \end{bmatrix}   (11.120)

of size (P_1 + 1)(P_2 + 1) × (P_1 + 1)(P_2 + 1), such that J J = I and J_{P_1 + 1} J_{P_1 + 1} =
I_{P_1 + 1}. With these definitions, we have

J \, \Phi(p) \, J = \Phi^T(p)   (11.121)

and

J_{P_1 + 1} \, \Phi_{P_1 + 1} \, J_{P_1 + 1} = \Phi_{P_1 + 1}.   (11.122)
Now, premultiplying both sides of Equation 11.84 by J_{P_1 + 1} and postmultiplying
both sides by J, we get

\begin{bmatrix} \Phi(0) & \Phi(-1) & \cdots & \Phi(-P_1) \\ \Phi(1) & \Phi(0) & \cdots & \Phi(-P_1 + 1) \\ \vdots & \vdots & \ddots & \vdots \\ \Phi(P_1) & \Phi(P_1 - 1) & \cdots & \Phi(0) \end{bmatrix}
\begin{bmatrix} J \, a_{P_1}^T(P_1) \, J \\ \vdots \\ J \, a_{P_1}^T(1) \, J \\ I \end{bmatrix} =
\begin{bmatrix} 0 \\ \vdots \\ 0 \\ J \, \Sigma_{P_1}^f \, J \end{bmatrix}.   (11.123)

Equation 11.123 is identical to the modified Yule–Walker equations for computing
the matrices b_{P_1}(p) and \Sigma_{P_1}^b in Equation 11.98. Comparing the terms
in the two equations, we get

b_{P_1}(p) = J \, a_{P_1}^T(p) \, J, \quad p = 1, 2, \ldots, P_1,   (11.124)
and

\Sigma_{P_1}^b = J \, \Sigma_{P_1}^f \, J = \Sigma_{P_1}.   (11.125)

With these simplifications, the recursive procedures in the multichannel
Levinson algorithm may be modified for the computation of 2D LP coefficients
as follows:

\Delta_{P_1 + 1} = \Phi_f(P_1 + 1) + \sum_{p=1}^{P_1} a_{P_1}(p) \, \Phi_f(P_1 + 1 - p),   (11.126)

a_{P_1 + 1}(P_1 + 1) = - \Delta_{P_1 + 1} \, J \, \Sigma_{P_1}^{-1} \, J,   (11.127)

a_{P_1 + 1}(p) = a_{P_1}(p) + a_{P_1 + 1}(P_1 + 1) \, J \, a_{P_1}(P_1 + 1 - p) \, J, \quad p = 1, 2, \ldots, P_1,   (11.128)

and

\Sigma_{P_1 + 1} = \Sigma_{P_1} - a_{P_1 + 1}(P_1 + 1) \, J \, \Sigma_{P_1} \, J \, a_{P_1 + 1}^T(P_1 + 1),   (11.129)

with the initialization

\Sigma_0 = \Phi_f(0),   (11.130)

where

\Phi_f(p) = \begin{bmatrix} \phi_f(p, 0) & \cdots & \phi_f(p, -P_2) \\ \vdots & \ddots & \vdots \\ \phi_f(p, P_2) & \cdots & \phi_f(p, 0) \end{bmatrix}.   (11.131)

Equations 11.126 – 11.131 constitute the 2D Levinson algorithm for solving the
2D Yule–Walker equations for the case of a QP ROS. The Levinson algorithm
provides results that are identical to those obtained by direct inversion of \Phi_f.
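The matrix recursion above generalizes the classical single-channel Levinson-Durbin recursion. As a sketch of the underlying idea (scalar special case, corresponding to P_2 = 0; not the book's 2D implementation), each order update costs O(P) operations instead of re-solving the normal equations, and the result matches direct inversion of the Toeplitz system:

```python
import numpy as np

# Scalar Levinson-Durbin recursion for the Yule-Walker equations
# sum_p a[p] r[|k - p|] = -r[k], with a[0] = 1 and error variance sigma2.
def levinson(r, P):
    a = np.zeros(P + 1)
    a[0] = 1.0
    sigma2 = r[0]
    for k in range(1, P + 1):
        delta = r[k] + a[1:k] @ r[1:k][::-1]
        refl = -delta / sigma2                      # reflection coefficient
        a[1:k + 1] = a[1:k + 1] + refl * a[k - 1::-1]   # order update
        sigma2 *= (1.0 - refl ** 2)
    return a, sigma2

# Verify against direct inversion of the Toeplitz system (assumed test ACF).
r = np.array([4.0, 2.2, 1.1, 0.4, 0.1])
P = 3
a, s2 = levinson(r, P)
R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
a_direct = np.linalg.solve(R, -r[1:P + 1])
print(np.allclose(a[1:], a_direct))   # True: identical coefficients
```

In the 2D algorithm of Equations 11.126 – 11.131, the scalars r, a, and sigma2 become the (P_2 + 1) × (P_2 + 1) matrices \Phi_f(p), a(p), and \Sigma, with the exchange matrix J supplying the required symmetry.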
Computation of the 2D LP coefficients directly from the image
data: The multichannel version of the Levinson algorithm may be used to
derive the multichannel version of the Burg algorithm [986, 337, 338], as follows.
Equation 11.109 may be augmented using the partition shown in Equation
11.95 as

A_{P_1 + 1} = \begin{bmatrix} A_{P_1} \\ 0 \end{bmatrix} + B_{P_1} \, a_{P_1 + 1}^T(P_1 + 1).   (11.132)

From Equation 11.75, the forward prediction error vector for order (P_1 + 1) is

e_{P_1 + 1}^f(m) = f(m) + \sum_{p=1}^{P_1 + 1} a_{P_1 + 1}(p) \, f(m - p)
= A_{P_1 + 1}^T \, F_{P_1 + 1}(m),   (11.133)

where F_{P_1 + 1}(m) is the multichannel data matrix of order (P_1 + 1) at (m),
which may be partitioned as

F_{P_1 + 1}(m) = \begin{bmatrix} f(m) \\ F_{P_1}(m - 1) \end{bmatrix} = \begin{bmatrix} F_{P_1}(m) \\ f(m - P_1 - 1) \end{bmatrix}.   (11.134)
1020 Biomedical Image Analysis
Similarly, the backward prediction error vector may be expressed as

\mathbf{e}^b_{P_1+1}(m) = \mathbf{f}(m-P_1-1) + \sum_{p=1}^{P_1+1} \mathbf{b}_{P_1+1}(p)\, \mathbf{f}(m-P_1-1+p) = \mathbf{B}^T_{P_1+1}\, \mathbf{F}_{P_1+1}(m).   (11.135)

Transposing both sides of Equation 11.132 and multiplying by \mathbf{F}_{P_1+1}(m), using the partitioned forms shown on the right-hand side of Equation 11.134, as well as using Equations 11.133 and 11.135, we get

\mathbf{e}^f_{P_1+1}(m) = \mathbf{e}^f_{P_1}(m) + \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{e}^b_{P_1}(m-1).   (11.136)

Similarly, the backward prediction error vector is given by

\mathbf{e}^b_{P_1+1}(m) = \mathbf{e}^b_{P_1}(m-1) + \mathbf{b}_{P_1+1}(P_1+1)\, \mathbf{e}^f_{P_1}(m).   (11.137)

The matrices \mathbf{a}_{P_1+1}(P_1+1) and \mathbf{b}_{P_1+1}(P_1+1), known as the reflection coefficient matrices [510], that minimize the sum of the squared forward and backward prediction errors over the entire multichannel set of N data points, given by

\epsilon_c^2 = \mathrm{Tr}\left[ \sum_{m=P_1+1}^{N-1} \left\{ \mathbf{e}^f_{P_1+1}(m)\, [\mathbf{e}^f_{P_1+1}(m)]^T + \mathbf{e}^b_{P_1+1}(m)\, [\mathbf{e}^b_{P_1+1}(m)]^T \right\} \right]   (11.138)

are obtained by solving [986]
[\Sigma^b_{P_1}]^{-1}\, \mathbf{E}^b_{P_1}\, [\Sigma^b_{P_1}]^{-1}\, \mathbf{a}_{P_1+1}(P_1+1) + \mathbf{a}_{P_1+1}(P_1+1)\, [\Sigma^f_{P_1}]^{-1}\, \mathbf{E}^f_{P_1}\, [\Sigma^f_{P_1}]^{-1} = -[\Sigma^b_{P_1}]^{-1}\, [\mathbf{E}^{fb}_{P_1}]^T\, [\Sigma^b_{P_1}]^{-1} - [\Sigma^f_{P_1}]^{-1}\, \mathbf{E}^{fb}_{P_1}\, [\Sigma^f_{P_1}]^{-1}   (11.139)
where

\mathbf{E}^f_{P_1} = \sum_{m=P_1}^{N-1} \mathbf{e}^f_{P_1}(m)\, [\mathbf{e}^f_{P_1}(m)]^T   (11.140)

\mathbf{E}^b_{P_1} = \sum_{m=P_1}^{N-1} \mathbf{e}^b_{P_1}(m)\, [\mathbf{e}^b_{P_1}(m)]^T   (11.141)

and

\mathbf{E}^{fb}_{P_1} = \sum_{m=P_1}^{N-1} \mathbf{e}^f_{P_1}(m)\, [\mathbf{e}^b_{P_1}(m)]^T.   (11.142)

Equations 11.136, 11.137, and 11.139 - 11.142 may be used to compute the multichannel reflection coefficients directly from the image data without computing the ACF.
Image Coding and Data Compression 1021

In order to adapt the multichannel version of the Burg algorithm to the 2D image case, we could force the structure obtained by relating the 2D and multichannel ACFs in Equation 11.85 on to the expressions in Equations 11.137 and 11.139, and redefine the error covariance matrices \mathbf{E}^f_{P_1}, \mathbf{E}^b_{P_1}, and \mathbf{E}^{fb}_{P_1} to span the entire M \times N image. Then, the 2D counterpart of the reflection coefficient matrix \mathbf{a}_{P_1+1}(P_1+1) is obtained by solving the following equation [338, 337]:
\Sigma_{P_1}^{-1}\, \mathbf{E}^b_{P_1}\, \Sigma_{P_1}^{-1}\, \mathbf{a}_{P_1+1}(P_1+1) + \mathbf{a}_{P_1+1}(P_1+1)\, \Sigma_{P_1}^{-1}\, \mathbf{E}^f_{P_1}\, \Sigma_{P_1}^{-1} = -\Sigma_{P_1}^{-1}\, [\mathbf{E}^{fb}_{P_1}]^T\, \Sigma_{P_1}^{-1} - \Sigma_{P_1}^{-1}\, \mathbf{E}^{fb}_{P_1}\, \Sigma_{P_1}^{-1}.   (11.143)
In order to compute the error covariance matrices \mathbf{E}^f_{P_1}, \mathbf{E}^b_{P_1}, and \mathbf{E}^{fb}_{P_1}, a strip of width (P_2+1) is defined so as to span the top (P_2+1) rows of the image, as shown in Figure 11.22, and the strip is moved down one row at a time. The region over which the summations are performed includes only those parts of the strip for which the forward and backward prediction operators do not run out of data. At the beginning of the recursive procedure, the error values are initialized to the actual values of the corresponding pixels. Furthermore, the forward and backward prediction error vectors are computed by forcing the relationship in Equation 11.124 on to Equation 11.137, resulting in

\mathbf{e}^b_{P_1+1}(m) = \mathbf{e}^b_{P_1}(m-1) + \mathbf{J}\, \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{e}^f_{P_1}(m).   (11.144)
The 2D Burg algorithm for computing the 2D LP coefficients directly from the image data may be summarized as follows:

1. The prediction error covariance matrix \Sigma_0 is initialized to \mathbf{\Phi}_f(0).

2. The prediction error vectors are computed using Equations 11.136 and 11.144.

3. The prediction error covariance matrices \mathbf{E}^f_{P_1}, \mathbf{E}^b_{P_1}, and \mathbf{E}^{fb}_{P_1} are computed from the prediction error vectors using Equations 11.140 - 11.142 and summing over strips of width (P_2+1) rows of the image.

4. The reflection coefficient matrix \mathbf{a}_{P_1+1}(P_1+1) is obtained from the prediction error covariance matrices by solving Equation 11.143, which is of the form \mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{B} = \mathbf{C} and can be solved by using Kronecker products [986, 989].

5. The remaining prediction coefficient matrices \mathbf{a}_{P_1+1}(p) are computed by using Equation 11.128, and the expected value of the prediction error covariance matrix \Sigma_{P_1+1} is updated using Equation 11.129.

6. When the recursive procedure reaches the desired order P_1, the 2D LP coefficients are computed by solving Equations 11.86 and 11.87.
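Step 4 above requires solving a Sylvester equation AX + XB = C. The Kronecker-product route mentioned in the text can be sketched as follows (an illustrative sketch with arbitrary matrices, not the book's implementation):

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    """Solve A X + X B = C via the vec/Kronecker identity
    vec(A X + X B) = (I kron A + B^T kron I) vec(X),
    where vec() stacks the columns of a matrix (column-major order)."""
    n, m = C.shape                      # A is n x n, B is m x m
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))   # vec(C)
    return x.reshape((n, m), order="F")

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
X = solve_sylvester_kron(A, B, C)
```

A unique solution exists when A and -B have no common eigenvalues; in practice, scipy.linalg.solve_sylvester solves the same equation without forming the large Kronecker matrix.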
TABLE 11.6
Variables in the 2D LP, Burg, and Levinson Algorithms for LP [338].

Variable        Size                               Description
f(m, n)         M x N                              2D image array
f(m)            (P_2+1) x 1                        Multichannel vector at column m, spanning P_2+1 rows
\Phi(r)         (P_2+1) x (P_2+1)                  Autocorrelation submatrix related to f(m)
\Phi            (P_1+1)(P_2+1) x (P_1+1)(P_2+1)    Extended 2D autocorrelation matrix
a               (P_1+1)(P_2+1) x 1                 2D LP coefficient matrix
a_{P_1}(p)      (P_2+1) x (P_2+1)                  Multichannel-equivalent prediction coefficient matrix
a_{P_1}(P_1)    (P_2+1) x (P_2+1)                  Multichannel-equivalent reflection coefficient matrix
\Sigma_{P_1}    (P_2+1) x (P_2+1)                  Multichannel prediction error covariance matrix
e^f(m, n)       M x N                              Forward prediction error array
e^b(m, n)       M x N                              Backward prediction error array
e^f(m)          (P_2+1) x 1                        Multichannel forward prediction error vector
e^b(m)          (P_2+1) x 1                        Multichannel backward prediction error vector
e(m)(n)         scalar                             nth element of the vector e(m)
E^f_{P_1}       (P_2+1) x (P_2+1)                  Forward prediction error covariance matrix
E^b_{P_1}       (P_2+1) x (P_2+1)                  Backward prediction error covariance matrix
E^{fb}_{P_1}    (P_2+1) x (P_2+1)                  Forward-backward prediction error covariance matrix

Bold characters represent vectors or matrices. A QP ROS of size P_1 x P_2 is assumed.
The variables involved in the 2D Burg and Levinson algorithms are summarized in Table 11.6.

The modified multichannel version of the Burg algorithm offers advantages similar to those of its 1D counterpart over the direct inversion method: it is a fast and efficient procedure to compute the prediction coefficients and prediction errors without computing the autocorrelation function. The optimization of the prediction coefficients does not make any assumptions about the image outside its finite dimensions, and hence should result in lower prediction errors and efficient coding. Furthermore, the forced 2D structure makes the algorithm computationally more efficient than the direct application of the multichannel Burg procedure.
Computation of the prediction error: In order to compute the prediction error for coding and transmission, the trace of the covariance matrix in Equation 11.138 may be minimized, using Equations 11.136 and 11.144, and eliminating the covariance matrix \Sigma_{P_1+1}, as follows. From Equation 11.138, the squared forward and backward prediction error vectors in the 2D Burg algorithm are given as [338]

\mathbf{E}_{P_1+1} = \sum_{m=P_1+1}^{N-1} \left[ \mathbf{e}^f_{P_1+1}(m)\, [\mathbf{e}^f_{P_1+1}(m)]^T + \mathbf{e}^b_{P_1+1}(m)\, [\mathbf{e}^b_{P_1+1}(m)]^T \right]

= \sum_{m=P_1+1}^{N-1} \left\{ \left[ \mathbf{e}^f_{P_1}(m) + \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{e}^b_{P_1}(m-1) \right] \left[ \mathbf{e}^f_{P_1}(m) + \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{e}^b_{P_1}(m-1) \right]^T \right.

\left. + \left[ \mathbf{e}^b_{P_1}(m-1) + \mathbf{J}\, \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{e}^f_{P_1}(m) \right] \left[ \mathbf{e}^b_{P_1}(m-1) + \mathbf{J}\, \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{e}^f_{P_1}(m) \right]^T \right\}

= \mathbf{E}^f_{P_1} + \mathbf{E}^b_{P_1} + \mathbf{E}^{fb}_{P_1}\, \mathbf{a}^T_{P_1+1}(P_1+1) + \mathbf{a}_{P_1+1}(P_1+1)\, [\mathbf{E}^{fb}_{P_1}]^T + \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{E}^b_{P_1}\, \mathbf{a}^T_{P_1+1}(P_1+1) + [\mathbf{E}^{fb}_{P_1}]^T\, \mathbf{J}\, \mathbf{a}^T_{P_1+1}(P_1+1)\, \mathbf{J} + \mathbf{J}\, \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{E}^{fb}_{P_1} + \mathbf{J}\, \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{E}^f_{P_1}\, \mathbf{J}\, \mathbf{a}^T_{P_1+1}(P_1+1)\, \mathbf{J}.   (11.145)
The 2D Burg algorithm for the purpose of image compression consists of determining the reflection coefficient matrix \mathbf{a}_{P_1+1}(P_1+1) that minimizes the trace of the error covariance matrix \mathbf{E}_{P_1+1}. This is achieved by differentiating Equation 11.145 with respect to \mathbf{a}_{P_1+1}(P_1+1) and equating the result to the null matrix, which yields

2\, [\mathbf{E}^{fb}_{P_1}]^T + 2\, \mathbf{E}^b_{P_1}\, \mathbf{a}_{P_1+1}(P_1+1) + 2\, \mathbf{E}^{fb}_{P_1} + 2\, \mathbf{J}\, \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{E}^f_{P_1} = \mathbf{0}   (11.146)

or

\mathbf{J}\, \mathbf{E}^b_{P_1}\, \mathbf{a}_{P_1+1}(P_1+1) + \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{J}\, \mathbf{E}^f_{P_1} = -\mathbf{J} \left\{ \mathbf{E}^{fb}_{P_1} + [\mathbf{E}^{fb}_{P_1}]^T \right\}.   (11.147)

If the 2D autocorrelation matrices \mathbf{\Phi}_f(r) are symmetric, the matrix \mathbf{a}_{P_1+1}(P_1+1) will also be symmetric, which reduces Equation 11.147 to

\mathbf{E}^b_{P_1}\, \mathbf{a}_{P_1+1}(P_1+1) + \mathbf{a}_{P_1+1}(P_1+1)\, \mathbf{E}^f_{P_1} = -\left\{ \mathbf{E}^{fb}_{P_1} + [\mathbf{E}^{fb}_{P_1}]^T \right\} = -2\, \mathbf{E}^{fb}_{P_1}.   (11.148)
When the recursive procedure reaches the desired order P_1, the multichannel-equivalent 2D prediction error image is obtained as

e_0[m(P_2+1)+p,\; q] = \mathbf{e}^f_{P_1}(mN+q)(p)   (11.149)

p = 0, 1, 2, \ldots, P_2; \quad q = 0, 1, 2, \ldots, N-1; \quad m = 0, 1, 2, \ldots, \frac{M}{P_2+1} - 1.
Equation 11.86 is in the form of the normal equations for 1D LP [510]. This suggests that the 1D Burg algorithm for LP [990] may be applied to the multichannel-equivalent 2D prediction error image to obtain the final prediction error image, in a recursive manner, as follows [338]:

1. Compute the sum of the squared forward and backward prediction errors as

\epsilon_f^2 = \sum_{m=P_2}^{M-1} \sum_{n=0}^{N-1} |e_{P_2}(m, n)|^2   (11.150)

\epsilon_b^2 = \sum_{m=P_2}^{M-1} \sum_{n=0}^{N-1} |c_{P_2}(m, n)|^2   (11.151)

and

\epsilon_{fb}^2 = \sum_{m=P_2}^{M-1} \sum_{n=0}^{N-1} e_{P_2}(m, n)\, c_{P_2}(m, n)   (11.152)

where c_{P_2}(m, n) is the M x N backward prediction error array, initialized as

c_0(m, n) = e_0(m, n), \quad m = 0, 1, 2, \ldots, M-1; \; n = 0, 1, 2, \ldots, N-1.   (11.153)

2. Compute the coefficient a(0, P_2+1), known as the reflection coefficient, as

a(0, P_2+1) = -\frac{2\, \epsilon_{fb}^2}{\epsilon_f^2 + \epsilon_b^2}.   (11.154)
3. Obtain the prediction errors at higher orders as

e_{P_2+1}(m, n) = e_{P_2}(m, n) + a(0, P_2+1)\, c_{P_2}(m, n-1)   (11.155)

and

c_{P_2+1}(m, n) = c_{P_2}(m, n-1) + a(0, P_2+1)\, e_{P_2}(m, n).   (11.156)

When the desired order P_2 is reached, the prediction errors e_{P_2}(m, n) are encoded using a method such as the Huffman code. The reflection coefficient matrices \mathbf{a}_{P_1+1}(P_1+1) and the 1D reflection coefficients a(0, P_2) are also encoded and transmitted as overhead information.
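A single stage of the 1D Burg recursion in steps 1 to 3 can be sketched as follows (an illustrative sketch, not the authors' implementation; the recursion is applied along the rows of arbitrary forward and backward error arrays, and each stage shortens the arrays by one column):

```python
import numpy as np

def burg_stage(e, c):
    """One stage of the 1D Burg recursion, applied along the rows of the
    forward (e) and backward (c) prediction error arrays.  Returns the
    reflection coefficient and the updated (one column shorter) arrays."""
    ef = e[:, 1:]          # forward errors where a past sample exists
    eb = c[:, :-1]         # backward errors, delayed by one column
    # Reflection coefficient minimizing the summed squared errors.
    k = -2.0 * np.sum(ef * eb) / (np.sum(ef ** 2) + np.sum(eb ** 2))
    return k, ef + k * eb, eb + k * ef

rng = np.random.default_rng(0)
e0 = rng.standard_normal((8, 16))      # hypothetical prediction error array
k, e1, c1 = burg_stage(e0, e0.copy())  # c_0 initialized to e_0, as in Eq. 11.153
```

By construction, |k| <= 1 and the forward error energy never increases from one stage to the next, which is what makes the residuals compact to encode.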
Error-free reconstruction of the image from the forward and backward prediction errors: In order to reconstruct the original image at the decoder without any error, the prediction coefficients need to be recomputed from the reflection coefficients. The prediction coefficients a(0, p), p = 0, 1, 2, \ldots, P_2, may be computed recursively using the Burg algorithm as

a(0, p) \leftarrow a(0, p) + a(0, q+1)\, a(0, q+1-p), \quad p = 1, 2, \ldots, q; \; q = 1, 2, \ldots, P_2.   (11.157)

The multichannel-equivalent 2D prediction error image is given by

e_0(m, n) = -\sum_{p=1}^{P_2} a(0, p)\, e_0(m-p, n) + e_{P_2+1}(m, n)   (11.158)

n = 0, 1, 2, \ldots, N-1; \quad m = P_2+1, P_2+2, \ldots, M-1.
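The step-up recursion of Equation 11.157 can be sketched as follows (an illustrative sketch; the function name and the example reflection coefficients are not from the book):

```python
import numpy as np

def stepup(reflection):
    """Recover the prediction coefficients a(0, p) from the 1D reflection
    coefficients by the step-up recursion; reflection[q] is the reflection
    coefficient introduced at stage q+1."""
    a = np.zeros(len(reflection) + 1)
    a[0] = 1.0
    for q, k in enumerate(reflection):
        a_prev = a.copy()
        # a(0, p) <- a(0, p) + k * a(0, q+1-p), for p = 1, ..., q.
        for p in range(1, q + 1):
            a[p] = a_prev[p] + k * a_prev[q + 1 - p]
        a[q + 1] = k
    return a
```

The decoder runs this recursion on the transmitted reflection coefficients before applying the inverse prediction of Equation 11.158.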
The multichannel prediction error vectors are related to the error data defined above as

\mathbf{e}^f_{P_1}(mN+q)(p) = e_0[m(P_2+1)+p,\; q]   (11.159)

p = 0, 1, 2, \ldots, P_2; \quad q = 0, 1, 2, \ldots, N-1; \quad m = 0, 1, 2, \ldots, \frac{M}{P_2+1} - 1.

The multichannel signal vectors may be reconstructed from the error vectors via multichannel prediction as

\tilde{\mathbf{f}}(m) = -\sum_{p=1}^{P_1} \mathbf{a}(p)\, \mathbf{f}(m-p) + \mathbf{e}^f_{P_1}(m), \quad m = P_1+1, P_1+2, \ldots, N-1.   (11.160)

Finally, the original image is recovered from the multichannel signal vectors as

f[m(P_2+1)+p,\; q] = \mathbf{f}(mN+q)(p)   (11.161)

p = 0, 1, 2, \ldots, P_2; \quad q = 0, 1, 2, \ldots, N-1; \quad m = 0, 1, 2, \ldots, \frac{M}{P_2+1} - 1

with rounding of the results to integers. In a practical implementation, for values of e_{P_2+1}(m, n) exceeding a preset limit, the true image pixel values would be transmitted and made available directly at the decoder.
Results of application to medical images: Kuduvalli and Rangayyan [174, 337, 338] applied the 2D Levinson and Burg algorithms described above to the 10 high-resolution digitized medical images listed in Table 11.5. The average bit rates with lossless compression of the 10 test images using the 2D block-wise LP method described in Section 11.8.1, the 2D Levinson algorithm, and the 2D Burg algorithm were, respectively, 3.15, 3.02, and 2.81 b/pixel, with the original images having 10 b/pixel (see also Table 11.7). The multichannel LP algorithms, in particular the 2D Burg algorithm, provided better compression than the other methods described in the preceding sections of this chapter. The LP models described in this section are related to AR modeling for spectral estimation; Kuduvalli and Rangayyan [337] found the 2D Burg algorithm to provide good 2D spectral estimates that were comparable to those provided by other AR models.
11.8.3 Adaptive 2D recursive least-squares prediction
The LP model with constant prediction coefficients given by Equation 11.53 is based on an inherent assumption of stationarity of the image-generating process. The multichannel-based prediction methods described in Section 11.8.2 are two-pass methods, where an estimation of the statistical parameters of the image is performed in the first pass (such as, for example, the autocorrelation matrix of the image in the 2D Levinson method), and the parameters are then used to estimate the prediction coefficients in the second pass. Once computed, the same prediction coefficients are used for prediction over all of the image data from which the coefficients were estimated. However, this assumption of stationarity is rarely valid in the case of natural images as well as biomedical images. To overcome this problem, in the case of the multichannel-based methods, the approach taken was that of partitioning the image into blocks, and computing the prediction coefficients independently for each block. Another possible approach is to adapt the coefficients recursively to the changing statistical characteristics of the image. In this section, the basis for such adaptive algorithms is described, and a 2D recursive least-squares (2D RLS) algorithm for adaptive computation of the LP coefficients is formulated [338]. The procedures are based upon adaptive filter theory in 1D [833, 979] and in multichannel signal filtering [991, 992, 993].

With reference to the basic 2D LP model given in Equation 11.53, several approaches are available for adaptive computation of the coefficients a(p, q) for each pixel being predicted at the location (m, n) [833]. The approach based on Wiener filter theory [198] (see Section 3.6.1), leading to the 2D LMS algorithm [206, 994] (see Section 3.7.3), although applicable to image compression [995], suffers from the fact that the estimation of the coefficients a(p, q) does not make use of all the image data available up to the current location. Adaptive estimation of the coefficients based upon the Kalman filter [833, 887, 888, 891, 892, 893] (see Section 10.4.3), where the prediction coefficients are represented as the state vector describing the current state of the image-generating process, has not been explored much. However, this approach depends upon the statistics of the image represented in terms of ensemble averages; because only estimates of the ensemble averages can be obtained, this approach is likely to be suboptimal.
The approach that is described in this section for adaptive prediction, based upon the work of Kuduvalli [338], is founded upon the method of least squares. This approach is deterministic in its formulation, and involves the minimization of a weighted sum of prediction errors. In Section 11.8.2, it was observed that the estimation of the prediction coefficients based on the direct minimization of the actual prediction errors (the 2D Burg method) yielded better results in image compression than the method based on the estimation of an ensemble image statistic (the 2D ACF) from the image data (the 2D Levinson method). This result suggests that a deterministic approach could also be appropriate for the adaptive computation of prediction coefficients.

In 2D RLS prediction, the aim is to minimize a weighted sum of the squared prediction errors, computed up to the present location, given by

\epsilon^2(m, n) = \sum_{(p, q) \in \mathrm{ROS}} w(m, n, p, q)\, [e(p, q)]^2   (11.162)

where e(p, q) is the prediction error at (p, q), and w(m, n, p, q) is a weighting factor chosen to selectively "forget" the errors from the preceding pixel locations ("the past") in order for the prediction coefficients to adapt to the changing statistical nature of the image at the current location. Boutalis et al. [992] used an exponential weighting factor whose magnitude reduces in the direction opposite to the scanning model used in the generation of the image. With this weighting-factor model and special ROSs, Boutalis et al. used the multichannel version of the RLS algorithm directly for adaptive estimation of images; however, their weighting-factor model does not take into account the 2D nature of images: the weight assigned to the error at a location adjacent in the row direction to the current location is higher than the weight assigned to the error at a location adjacent in the column direction. Kuduvalli [338] proposed a weighting-factor model that is truly 2D in its formulation. In this method, using a rectangular region spanning the image up to the current location for minimizing the sum of the prediction errors as shown in Figure 11.23, and an exponential weighting factor defined as w(m, n, p, q) = \lambda^{(m-p+n-q)}, where 0 < \lambda \le 1 is a forgetting factor, the weighted squared error is defined as

\epsilon^2(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)}\, [e(p, q)]^2.   (11.163)
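The 2D forgetting factor of Equation 11.163 can be illustrated directly (an illustrative sketch; the error array and the value of the forgetting factor are chosen arbitrarily):

```python
import numpy as np

def weighted_error(e, m, n, lam):
    """Weighted sum of squared prediction errors up to location (m, n),
    using the 2D exponential forgetting factor lam**((m - p) + (n - q))."""
    p = np.arange(m + 1)[:, None]      # row indices 0..m
    q = np.arange(n + 1)[None, :]      # column indices 0..n
    w = lam ** ((m - p) + (n - q))     # weight is 1 at (m, n), decays into the past
    return float(np.sum(w * e[: m + 1, : n + 1] ** 2))

rng = np.random.default_rng(1)
e = rng.standard_normal((32, 32))     # a hypothetical prediction error array
```

Unlike a raster-ordered 1D forgetting factor, the weight here decays symmetrically with the row and column distances from the current location, which is the 2D property emphasized in the text.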
Let us consider a QP ROS of order P \times Q for prediction, as shown in Figure 11.23, and use the following notation for representing the prediction
[Figure 11.23: schematic of an M x N image, showing the rectangular region of summation of prediction errors up to the current pixel f(m, n), the column of new information at (m, n), and the ROSs of forward prediction only, backward prediction only, and both forward and backward prediction, with extents related to (P+1) and (Q+1).]

FIGURE 11.23
ROSs in adaptive LP by the 2D RLS method [338]. While the image is scanned from the position (m, n-1) to (m, n), a column of m pixels becomes available as new information that may be used to update the forward and backward predictors. Observe that a part of the column of new information is hidden by the ROS for both forward and backward prediction in the figure.
coefficients and the image data spanning the current ROS as vectors:

\mathbf{a}(m, n) = \left[ \mathbf{a}^T_0(m, n)\; \mathbf{a}^T_1(m, n)\; \cdots\; \mathbf{a}^T_P(m, n) \right]^T   (11.164)

where

\mathbf{a}_p(m, n) = \left[ a(m, n)(p, 0)\; a(m, n)(p, 1)\; \cdots\; a(m, n)(p, Q) \right]^T   (11.165)

with a(m, n)(0, 0) = 1, and

\mathbf{F}_{P+1}(m, n) = \left[ \mathbf{f}^T_m(n)\; \mathbf{f}^T_{m-1}(n)\; \cdots\; \mathbf{f}^T_{m-P}(n) \right]^T   (11.166)

with

\mathbf{f}_{m-p}(n) = \left[ f(m-p, n)\; f(m-p, n-1)\; \cdots\; f(m-p, n-Q) \right]^T.   (11.167)

Here, the subscripts P and P+1 represent the order (size) of the matrices and vectors, and the indices (m, n) indicate that the values of the parameters correspond to the pixel location (m, n). Observe that \mathbf{a}(m, n) represents the (P+1) \times (Q+1) array of coefficients, with a(m, n)(p, q) representing its element at (p, q). With this notation, the prediction error may be written as

e(m, n) = \mathbf{a}^T(m, n)\, \mathbf{F}_{P+1}(m, n).   (11.168)
The 2D RLS normal equations: The coefficients that minimize the weighted sum of the squared prediction errors \epsilon^2(m, n) given in Equation 11.163 are obtained as the solution to the 2D RLS normal equations, which are obtained as follows [338]. Let us perform partitioning of the matrices \mathbf{a}(m, n) and \mathbf{F}_{P+1}(m, n) as

\mathbf{a}(m, n) = \begin{bmatrix} 1 \\ \tilde{\mathbf{a}}(m, n) \end{bmatrix}   (11.169)

and

\mathbf{F}_{P+1}(m, n) = \begin{bmatrix} f(m, n) \\ \tilde{\mathbf{F}}_{P+1}(m, n) \end{bmatrix} = \begin{bmatrix} \mathbf{F}^{\#}_{P+1}(m, n) \\ f(m-P, n-Q) \end{bmatrix}.   (11.170)

Observe that the coefficient matrix \tilde{\mathbf{a}}(m, n) and the data matrix \tilde{\mathbf{F}}_{P+1}(m, n) consist of all of the 2D RLS coefficients a(m, n)(p, q) and all of the image pixels f(m-p, n-q) such that (p, q) \in QP ROS for a forward predictor. With partitioning as above, the prediction error in Equation 11.168 may be written as

e(m, n) = f(m, n) + \tilde{\mathbf{a}}^T(m, n)\, \tilde{\mathbf{F}}_{P+1}(m, n).   (11.171)

The sum of the squared prediction errors in Equation 11.163 may now be expressed as

\epsilon^2(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)}\, [e(p, q)]^2

= \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \left[ f(p, q) + \tilde{\mathbf{a}}^T(m, n)\, \tilde{\mathbf{F}}_{P+1}(p, q) \right] \left[ f(p, q) + \tilde{\mathbf{a}}^T(m, n)\, \tilde{\mathbf{F}}_{P+1}(p, q) \right]^T

= \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \left[ f^2(p, q) + 2\, \tilde{\mathbf{a}}^T(m, n)\, \tilde{\mathbf{F}}_{P+1}(p, q)\, f(p, q) + \tilde{\mathbf{a}}^T(m, n)\, \tilde{\mathbf{F}}_{P+1}(p, q)\, \tilde{\mathbf{F}}^T_{P+1}(p, q)\, \tilde{\mathbf{a}}(m, n) \right].   (11.172)
In order to determine the coefficients a(m, n)(p, q) that minimize \epsilon^2(m, n), we could differentiate the expression above for \epsilon^2(m, n) with respect to the coefficient matrix \tilde{\mathbf{a}}(m, n) and equate the result to the null matrix of size [(P+1)(Q+1) - 1] \times 1, which yields

\mathbf{0} = \frac{\partial\, \epsilon^2(m, n)}{\partial\, \tilde{\mathbf{a}}(m, n)} = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \left[ \tilde{\mathbf{F}}_{P+1}(p, q)\, f(p, q) + \tilde{\mathbf{F}}_{P+1}(p, q)\, \tilde{\mathbf{F}}^T_{P+1}(p, q)\, \tilde{\mathbf{a}}(m, n) \right].   (11.173)

Equation 11.173 may be expressed in matrix notation as

\sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \left[ \tilde{\mathbf{F}}_{P+1}(p, q)\, f(p, q) \;\;\; \tilde{\mathbf{F}}_{P+1}(p, q)\, \tilde{\mathbf{F}}^T_{P+1}(p, q) \right] \begin{bmatrix} 1 \\ \tilde{\mathbf{a}}(m, n) \end{bmatrix} = \mathbf{0}.   (11.174)

In addition to the above, using Equation 11.173 in Equation 11.172, we have

\epsilon^2(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \left[ f^2(p, q) + f(p, q)\, \tilde{\mathbf{F}}^T_{P+1}(p, q)\, \tilde{\mathbf{a}}(m, n) \right]   (11.175)

which may be written in matrix form as

\epsilon^2(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \left[ f(p, q)\, f(p, q) \;\;\; f(p, q)\, \tilde{\mathbf{F}}^T_{P+1}(p, q) \right] \begin{bmatrix} 1 \\ \tilde{\mathbf{a}}(m, n) \end{bmatrix}.   (11.176)

Combining Equations 11.174 and 11.176, we get

\sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)} \begin{bmatrix} f(p, q) \\ \tilde{\mathbf{F}}_{P+1}(p, q) \end{bmatrix} \left[ f(p, q) \;\;\; \tilde{\mathbf{F}}^T_{P+1}(p, q) \right] \begin{bmatrix} 1 \\ \tilde{\mathbf{a}}(m, n) \end{bmatrix} = \begin{bmatrix} \epsilon^2(m, n) \\ \mathbf{0} \end{bmatrix}   (11.177)
or

\sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)}\, \mathbf{F}_{P+1}(p, q)\, \mathbf{F}^T_{P+1}(p, q)\; \mathbf{a}(m, n) = \begin{bmatrix} \epsilon^2(m, n) \\ \mathbf{0} \end{bmatrix}   (11.178)

which may be expressed as

\mathbf{\Phi}_{P+1}(m, n)\, \mathbf{a}(m, n) = \mathbf{\epsilon}(m, n)   (11.179)

where

\mathbf{\epsilon}(m, n) = \left[ \epsilon^2(m, n)\; 0\; 0\; \cdots\; 0 \right]^T   (11.180)

and \mathbf{\Phi}_{P+1}(m, n) is the deterministic autocorrelation matrix of the weighted image, given by

\mathbf{\Phi}_{P+1}(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)}\, \mathbf{F}_{P+1}(p, q)\, \mathbf{F}^T_{P+1}(p, q).   (11.181)

Equation 11.179 represents the 2D RLS normal equations, solving which we can obtain the prediction coefficients a(m, n)(p, q) that adapt to the statistics of the image at the location (m, n).

Solving the 2D RLS normal equations: Direct inversion of the autocorrelation matrix in Equation 11.179 gives the desired matrix of prediction coefficients as

\mathbf{a}(m, n) = \mathbf{\Phi}^{-1}_{P+1}(m, n)\, \mathbf{\epsilon}(m, n).   (11.182)
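The weighted least-squares solution of Equations 11.173 and 11.182 can be sketched by brute force for a single location (an illustrative sketch, not Kuduvalli's recursive algorithm; the function name and test image are hypothetical, and out-of-image samples are set to zero as in the windowing assumption below):

```python
import numpy as np

def rls_coefficients(f, m, n, P, Q, lam):
    """Brute-force solution of the weighted normal equations at (m, n):
    minimize the sum over (p, q) of lam**(m-p+n-q) * e(p, q)**2, with
    e(p, q) = f(p, q) + a~^T F~(p, q) over a causal (P+1) x (Q+1) QP ROS
    that excludes (0, 0).  Returns the coefficient vector a~."""
    def ros_vector(p, q):
        # Stack f(p - r, q - s) over the ROS, skipping (r, s) = (0, 0);
        # locations outside the image are treated as zero (windowing).
        v = []
        for r in range(P + 1):
            for s in range(Q + 1):
                if (r, s) == (0, 0):
                    continue
                v.append(f[p - r, q - s] if p - r >= 0 and q - s >= 0 else 0.0)
        return np.array(v)

    d = (P + 1) * (Q + 1) - 1
    R = np.zeros((d, d))
    r = np.zeros(d)
    for p in range(m + 1):
        for q in range(n + 1):
            w = lam ** ((m - p) + (n - q))
            F = ros_vector(p, q)
            R += w * np.outer(F, F)      # weighted autocorrelation matrix
            r += w * F * f[p, q]         # weighted cross-correlation vector
    return -np.linalg.solve(R, r)

# Image generated exactly by a first-order row recursion f(m, n) = 0.9 f(m-1, n);
# the least-squares solution should then recover the coefficient -0.9.
f = np.array([[0.9 ** m * (1.0 + 0.1 * n) for n in range(8)] for m in range(8)])
a = rls_coefficients(f, 6, 6, 1, 0, 0.9)
```

The cost of this direct solve at every pixel is what motivates the recursive order-reduction procedure described next in the text.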
The matrix \mathbf{\Phi}_{P+1}(m, n) is of size (P+1)(Q+1) \times (P+1)(Q+1); the inversion of such a matrix at every pixel (m, n) of the image could be computationally intensive. Kuduvalli [338] developed the following procedure to reduce the size of the matrix to be inverted to (Q+1) \times (Q+1). The procedure starts with a recursive relationship expressing the solution for the normal equations at the pixel location (m, n) in terms of that at (m-1, n). Consider the expression

\mathbf{\Phi}_{P+1}(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)}\, \mathbf{F}_{P+1}(p, q)\, \mathbf{F}^T_{P+1}(p, q)

= \begin{bmatrix} \mathbf{\Phi}_{00}(m, n) & \mathbf{\Phi}^T_{01}(m, n) & \cdots & \mathbf{\Phi}^T_{0P}(m, n) \\ \mathbf{\Phi}_{01}(m, n) & \mathbf{\Phi}_{11}(m, n) & \cdots & \mathbf{\Phi}^T_{1P}(m, n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{\Phi}_{0P}(m, n) & \mathbf{\Phi}_{1P}(m, n) & \cdots & \mathbf{\Phi}_{PP}(m, n) \end{bmatrix}   (11.183)

where

\mathbf{\Phi}_{rs}(m, n) = \sum_{p=0}^{m} \sum_{q=0}^{n} \lambda^{(m-p+n-q)}\, \mathbf{f}_{p-r}(q)\, \mathbf{f}^T_{p-s}(q).   (11.184)
Observe that

\mathbf{\Phi}_{rs}(m, n) = \mathbf{\Phi}_{0(s-r)}(m-r, n)   (11.185)

which follows from the assumption that the image data have been windowed such that f(m, n) = 0 for m < 0 or n < 0.

The normal equations may now be expressed as

\begin{bmatrix} \mathbf{\Phi}_{00}(m, n) & \mathbf{\Phi}^T_{01}(m, n) & \cdots & \mathbf{\Phi}^T_{0P}(m, n) \\ \mathbf{\Phi}_{01}(m, n) & \mathbf{\Phi}_{11}(m, n) & \cdots & \mathbf{\Phi}^T_{1P}(m, n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{\Phi}_{0P}(m, n) & \mathbf{\Phi}_{1P}(m, n) & \cdots & \mathbf{\Phi}_{PP}(m, n) \end{bmatrix} \begin{bmatrix} \mathbf{a}_0(m, n) \\ \mathbf{a}_1(m, n) \\ \vdots \\ \mathbf{a}_P(m, n) \end{bmatrix} = \begin{bmatrix} \mathbf{\epsilon}(m, n) \\ \mathbf{0}_{Q+1} \\ \vdots \\ \mathbf{0}_{Q+1} \end{bmatrix}   (11.186)

where \mathbf{0}_{Q+1} is the null matrix of size (Q+1) \times 1.

Equation 11.186 may be solved in two steps. First, solve

\begin{bmatrix} \mathbf{\Phi}_{00}(m, n) & \mathbf{\Phi}^T_{01}(m, n) & \cdots & \mathbf{\Phi}^T_{0P}(m, n) \\ \mathbf{\Phi}_{01}(m, n) & \mathbf{\Phi}_{11}(m, n) & \cdots & \mathbf{\Phi}^T_{1P}(m, n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{\Phi}_{0P}(m, n) & \mathbf{\Phi}_{1P}(m, n) & \cdots & \mathbf{\Phi}_{PP}(m, n) \end{bmatrix} \begin{bmatrix} \mathbf{I}_{Q+1} \\ \mathbf{A}_1(m, n) \\ \vdots \\ \mathbf{A}_P(m, n) \end{bmatrix} = \begin{bmatrix} \mathbf{F}_{Q+1}(m, n) \\ \mathbf{0}_{Q+1} \\ \vdots \\ \mathbf{0}_{Q+1} \end{bmatrix}   (11.187)

for the (Q+1) \times (Q+1) matrices \mathbf{A}_p(m, n), p = 1, 2, \ldots, P, and \mathbf{F}_{Q+1}(m, n). Here, \mathbf{I}_{Q+1} is the identity matrix of size (Q+1) \times (Q+1), and \mathbf{0}_{Q+1} is the null matrix of size (Q+1) \times (Q+1). Then, obtain the solution to Equation 11.186 by solving

\mathbf{F}_{Q+1}(m, n)\, \mathbf{a}_0(m, n) = \mathbf{\epsilon}(m, n)   (11.188)

and using the relationship

\mathbf{A}_p(m, n)\, \mathbf{a}_0(m, n) = \mathbf{a}_p(m, n), \quad p = 1, 2, \ldots, P.   (11.189)

This approach is similar to the approach taken to solve the 2D Yule-Walker equations by the 2D Levinson method, described in Section 11.8.2, and leads to a recursive algorithm that is computationally efficient; the details of the algorithm are given by Kuduvalli [338].
Results of application to medical images: Kuduvalli [338] conducted preliminary studies on the application of the 2D RLS algorithm to predictive coding and compression of medical images. In the application to coding, the value of the pixel at the current location (m, n) is not available at the decoder before the prediction coefficient matrix \mathbf{a}(m, n) is computed. However, the prediction coefficient matrix \mathbf{a}(m-1, n) is available. Thus, for error-free decoding, the a priori prediction error computed using the prediction coefficient matrix \mathbf{a}(m-1, n) is encoded. These error values, which have a PDF that is close to a Laplacian PDF, may be efficiently encoded using methods such as the Huffman code. Using a QP ROS of size 4 x 4 and a forgetting factor of \lambda = 0.95, Kuduvalli [338] obtained an average bit rate of 2.71 b/pixel for two of the images listed in Table 11.5; this rate, however, is only marginally lower than the bit rate of 2.77 b/pixel for the same two images obtained by using the 2D Burg algorithm described in Section 11.8.2. Although the 2D RLS algorithm has the elegance of being a truly 2D algorithm that adapts to the changing statistics of the image on a pixel-by-pixel basis, the method did not yield appreciable advantages in image data compression. Regardless, the method has applications in other areas, such as spectrum estimation and filtering.
Kuduvalli and Rangayyan [174] performed a comparative analysis of several image compression techniques, including direct source coding, transform coding, interpolative coding, and predictive coding, applied to the high-resolution digitized medical images listed in Table 11.5. The average bit rates obtained using several coding and compression techniques are listed in Table 11.7. It should be observed that decorrelation can provide significant advantages over direct source encoding of the original pixel data. The adaptive predictive coding techniques have performed better than the transform and interpolative coding techniques tested.

In a study of the effect of sampling resolution on image data compression, Kuduvalli and Rangayyan [174] prepared low-resolution versions of the images listed in Table 11.5 by smoothing and downsampling. The results of the application of the 2D Levinson predictive coding algorithm yielded average bit rates of 3.5 to 5 b/pixel for 512 x 512 images, and 2.5 to 3 b/pixel for 4,096 x 4,096 images (with the original images at 10 b/pixel). This result indicates that high-resolution images possess more redundancy, and hence may be compressed by larger extents than their low-resolution counterparts. Therefore, increasing the resolution of medical images does not increase the amount of the related compressed data in direct proportion to the increase in matrix size, but by a lower factor. This result could be a motivating factor supporting the use of high resolution in medical imaging, without undue concerns related to significant increases in data-handling requirements.

See Aiazzi et al. [996] for a description of other methods for adaptive prediction and a comparative analysis of several methods for lossless image data compression.
11.9 Image Scanning Using the Peano-Hilbert Curve

Peano scanning is a method of scanning an image by following the path described by a space-filling curve [997, 998, 999, 1000, 1001, 1002, 1003, 1004]. Giuseppe Peano, an Italian mathematician, described the first space-filling curve in an attempt to map a line into a 2D space [997]. The term "Peano scanning" is used to refer to such a scanning scheme irrespective of the space-filling curve used to define the scan path. Peano's curve was modified by
TABLE 11.7
Average Bit Rates Obtained in the Lossless Compression of the Medical Images Listed in Table 11.5 Using Several Image Coding Techniques [174, 337, 338].

Coding method    Bits/pixel
Original          10.00
Entropy H0         7.94
Huffman            8.67
Arithmetic         8.58
LZW                5.57
DCT                5.26
Interpolative      3.45
2D LP              3.15
2D Levinson        3.02
2D Burg            2.81
2D RLS*            2.71

The Huffman code was used to encode the results of the transform, interpolative, and predictive coding methods. *Only two images were compressed with the 2D RLS method.
Hilbert [998], and the modified curve came to be known as the "Peano-Hilbert" curve.

Moore [1005] studied the geometric and analytical interpretation of continuous space-filling curves. Space-filling curves have aided in the development of fractals [462] (see Section 7.5 for a discussion on fractals). The Peano-Hilbert curve has been applied to display continuous-tone images [1006] in order to eliminate deficiencies of the ordered dither technique, such as Moiré fringes. Lempel and Ziv [1003] used the Peano-Hilbert curve to scan images and define the lowest bound of compressibility. Zhang et al. [1001, 1002] explored the statistical characteristics of medical images using Peano scanning.

Provine and Rangayyan [1000, 999] studied the application of Peano scanning for image data compression, with an additional step of decorrelation using differentiation, orthogonal transforms, or LP; the following paragraphs describe the basics of the methods involved and the results obtained in their work.
11.9.1 Definition of the Peano-scan path

If a physical scanner that can scan an image by following the Peano curve is not available, Peano scanning may be simulated by selecting pixels from a raster-scanned image by traversing the 2D data along the path described by the Peano-Hilbert curve. The reordered pixel data so obtained (in a 1D stream) may be subjected to decorrelation and encoding operations as desired. An inverse Peano-scanning operation would be required at the receiving end to reconstruct the original image. A general image compression scheme as above is summarized in Figure 11.24.

Raster-scanned
image
Peano-scan Decorrelation Encoder
operation transform

Compressed
data

Decoded image Inverse Peano- Inverse Decoder


scan operation transform

FIGURE 11.24
Image compression using Peano scanning.
The Peano-scan operation is recursive in nature, and spans a 2D space encountering a total of 2^i x 2^i points (where i is an integer and i > 1). From the
perspective of processing a 2D array containing the pixel values of an image, this would require that the dimensions of the array be an integral power of 2. In the scanning procedure, the given image of size 2^n x 2^n is divided into four quadrants, each of them forming a subimage; see Figure 11.25. Each of the subimages is further divided into four quadrants, and the procedure continues. The original image is divided into a total of T_i = 2^{2(n-i+1)} subimages, each of size 2^{i-1} x 2^{i-1}, where i = 1, 2, \ldots, n. In the following discussion, the subimages formed by the recursive subdivision procedure as above will be referred to as s_{i-1}(k), k = 1, 2, \ldots, T_i, where k increases along the direction of the scan path; the entire image will be referred to as s_n. Thus, each of the four quadrants formed by partitioning a subimage s_i, for any i, is of size 2^{i-1} x 2^{i-1}. The division of a given image into subimages is shown in Figure 11.25; the recursive division of subimages is performed until the s_1 subimages are formed. The four pixels within the smallest 2 x 2 subimage are denoted as p1, p2, p3, and p4, respectively, in the order of being scanned. As the scan path builds recursively, the path definition is based on the basic definitions for a 2 x 2 subimage, as well as the recursive definitions for subimages of larger size, until the entire image is scanned. The four basic definitions of the Peano-scanning operation are given in Figure 11.26.

The recursive definitions of the Peano-scanning operation are given in Figure 11.27, which inherently use the basic definitions (shown in Figure 11.26) to obtain further pixels from the subimages. The definitions go down recursively from i = n to i = 1, that is, from the image s_n down to the subimages s_1; the basic definitions are used to obtain the pixels from the s_1 subimages. The scan-path definition for an image or a subimage depends on i. At a higher level, that is, for an s_i, i > 1, the recursive definition is as follows:

- If i is odd, the recursive definition is "R" (see Figure 11.27).
- If i is even, the recursive definition is "D".

From the recursive definitions shown in Figure 11.27, the definition of the scan pattern in each of the subimages is obtained as follows:

- If s_i(k) has the recursive definition "R", s_{i-1}(4k-3) will follow the path given by "D"; s_{i-1}(4k-2) and s_{i-1}(4k-1), the path "R"; and s_{i-1}(4k), the path "U".
- If s_i(k) has the recursive definition "D", s_{i-1}(4k-3) will follow the path given by "R"; s_{i-1}(4k-2) and s_{i-1}(4k-1), the path "D"; and s_{i-1}(4k), the path "L".
- If s_i(k) has the recursive definition "L", s_{i-1}(4k-3) will follow the path given by "U"; s_{i-1}(4k-2) and s_{i-1}(4k-1), the path "L"; and s_{i-1}(4k), the path "D".
- If s_i(k) has the recursive definition "U", s_{i-1}(4k-3) will follow the path given by "L"; s_{i-1}(4k-2) and s_{i-1}(4k-1), the path "U"; and s_{i-1}(4k), the path "R".
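The recursive definitions above can be sketched in code as follows (an illustrative sketch; the function name and the particular coordinate convention for the four patterns are assumptions, chosen so that consecutive pixels on the path are always adjacent):

```python
def peano_scan(n):
    """Return the (row, col) pixel coordinates of a 2**n x 2**n image
    in Peano-Hilbert scan order, built from the four path types
    R (right), L (left), D (down), and U (up)."""
    path = []

    def visit(row, col, size, pattern):
        if size == 2:
            # Basic definitions: visiting order of the four pixels of a
            # 2 x 2 subimage for each path type.
            basic = {
                "R": [(0, 0), (0, 1), (1, 1), (1, 0)],
                "L": [(1, 1), (1, 0), (0, 0), (0, 1)],
                "D": [(0, 0), (1, 0), (1, 1), (0, 1)],
                "U": [(1, 1), (0, 1), (0, 0), (1, 0)],
            }
            path.extend((row + dr, col + dc) for dr, dc in basic[pattern])
            return
        h = size // 2
        # Recursive definitions: quadrant visiting order (as quadrant
        # offsets) and the sub-pattern assigned to each quadrant.
        rules = {
            "R": ([(0, 0), (0, 1), (1, 1), (1, 0)], ["D", "R", "R", "U"]),
            "D": ([(0, 0), (1, 0), (1, 1), (0, 1)], ["R", "D", "D", "L"]),
            "L": ([(1, 1), (1, 0), (0, 0), (0, 1)], ["U", "L", "L", "D"]),
            "U": ([(1, 1), (0, 1), (0, 0), (1, 0)], ["L", "U", "U", "R"]),
        }
        quads, pats = rules[pattern]
        for (qr, qc), p in zip(quads, pats):
            visit(row + qr * h, col + qc * h, h, p)

    # Top level: "R" if n is odd, "D" if n is even, as stated in the text.
    visit(0, 0, 2 ** n, "R" if n % 2 == 1 else "D")
    return path

order = peano_scan(2)      # visiting order for a 4 x 4 image
```

Every pixel is visited exactly once, and successive pixels on the path differ by one step in either the row or the column direction, which is the property that makes the reordered 1D stream highly correlated.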
[Figure 11.25: division of a 16 x 16 image into subimages for i = 1, 2, 3, 4, showing the pixels p1-p4 of a 2 x 2 subimage and subimage labels s_i(k) with k increasing along the scan path.]

FIGURE 11.25
Division of a 16 x 16 image into subimages during Peano scanning. Reproduced with permission from J.A. Provine and R.M. Rangayyan, "Lossless compression of Peanoscanned images", Journal of Electronic Imaging, 3(2): 176-181, 1994. (c) SPIE and IS&T.
[Figure 11.26: the four basic 2 x 2 scan patterns R, L, D, and U, each showing the visiting order of the pixels p1-p4.]

FIGURE 11.26
Basic definitions of the Peano-scanning operation. The points marked p1-p4 represent the four pixels in a 2 x 2 subimage. Each scan pattern shown visits four pixels in the order indicated by the arrows. R: right. L: left. D: down. U: up. Reproduced with permission from J.A. Provine and R.M. Rangayyan, "Lossless compression of Peanoscanned images", Journal of Electronic Imaging, 3(2): 176-181, 1994. (c) SPIE and IS&T.

FIGURE 11.27
Recursive definitions of the Peano-scanning operation. [Figure: the four level-i patterns R, L, D, and U, each composed of four level-(i-1) patterns.] Reproduced with permission from J.A. Provine and R.M. Rangayyan, "Lossless compression of Peanoscanned images", Journal of Electronic Imaging, 3(2): 176-181, 1994. © SPIE and IS&T.
The index k can take any value in the range 1, 2, ..., T_i, for i = 1, 2, ..., n. From the recursive definitions, it is evident that k cannot increase continuously in the horizontal or vertical direction, due to the nature of the scan. Furthermore, because the scan path takes its course recursively, except for i = n, the recursive definitions "L" and "U" (see Figure 11.27) are also possible for lower values of i. All the subimages are divided and defined recursively until all the subimages s1 are defined. The basic definitions of the Peano scan are then followed for each of the s1 subimages. The Peano-scan pattern for a 16 × 16 image is illustrated in Figure 11.28, where the heavy dots indicate the positions of the first 16 pixels scanned. Understanding the scan pattern is facilitated by viewing Figure 11.28 along with Figure 11.25.

FIGURE 11.28
Peano-scan pattern for a 16 × 16 image. The positions of the pixels on the scan pattern are shown by heavy dots in the first 4 × 4 subimage. Reproduced with permission from J.A. Provine and R.M. Rangayyan, "Lossless compression of Peanoscanned images", Journal of Electronic Imaging, 3(2): 176-181, 1994. © SPIE and IS&T.
Implementation of Peano scanning in software could use the recursive nature of the scan path efficiently to obtain the pixel stream. Recursive functions may call themselves within their body as the image is divided progressively into subimages until the s1 subimages are formed, and the pixels are obtained recursively as the function builds back from the s1 subimages to the full image s_n. Thus, the 2D image is unwrapped into a 1D data stream by following a continuous scan path.
The inverse Peano-scanning operation accomplishes the task of filling up the 2D array with the 1D data stream. This operation corresponds to the original works of Peano [997] and Hilbert [998], where the continuous mapping of a straight line into a 2D plane was described. Because the Peano-scanning operation is reversible, no loss of information is incurred.
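The forward and inverse operations can be sketched as a pair of simple list manipulations. In this illustrative sketch (the names are hypothetical), `order` is assumed to be a precomputed list of (row, column) coordinates along the Peano-Hilbert path:

```python
def scan(image, order):
    """Unwrap a 2D image into a 1D stream along the given scan order."""
    return [image[r][c] for r, c in order]

def inverse_scan(stream, order, size):
    """Refill a size x size array from the 1D stream: the lossless inverse."""
    image = [[0] * size for _ in range(size)]
    for value, (r, c) in zip(stream, order):
        image[r][c] = value
    return image

# Round trip over a 2 x 2 block using the basic "R" pattern of Figure 11.26.
order = [(0, 0), (0, 1), (1, 1), (1, 0)]
block = [[10, 20], [40, 30]]
assert inverse_scan(scan(block, order), order, 2) == block
```

Because the same `order` list drives both directions, reversibility is immediate and no information is lost.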

11.9.2 Properties of the Peano-Hilbert curve

The Peano-Hilbert curve has several interesting and useful properties. The curve is continuous but not differentiable: it does not have a tangent at any point. Moore [1005] gave an explanation of the Peano-Hilbert curve adhering to this property. This property motivated the development of several other curves with the same characteristic, which are used in the domain of fractals.
The Peano-Hilbert curve fills the 2D space continuously without passing through any point more than once. This feature enables the mapping of a 2D array into a 1D data stream. The recursive nature of the curve is useful in efficient implementation of the path of the curve. These two properties aid in scanning an image recursively quadrant by quadrant, leaving each quadrant only after having obtained every pixel within that quadrant, with each pixel visited only once in the process; see Figures 11.28 and 11.25. Preservation of the local 2D context in the scanning path could be expected to increase the correlation between successive elements in the 1D data stream. This aspect could facilitate improved image data compression.
Two other aspects of the Peano-Hilbert curve have proven to be useful in the bilevel display of continuous-tone images [1006, 1007]. Linearizing a 2D array along the path described by the Peano-Hilbert curve reduces the error between the sum of the bilevel values and the sum of the continuous-tone values of the original image, because 2D locality is maintained by the scan path, unlike in the 1D vector formed by concatenating the horizontal raster-scan lines of the image. The problem of long sections of scan lines running adjacent to one another is eliminated by following the Peano-scan path instead of the raster scan. Thus, Moiré patterns can be eliminated in regions of uniform intensity when presenting gray-level images on a bilevel display.

11.9.3 Implementation of Peano scanning

A practical problem that could arise in implementing the Peano-scanning operation on large images is the difficulty in allocating memory for the long linear array used within the body of the recursive function for storing the scanned data. Provine [999] suggested the following approach to address this problem, by using a symmetrical pattern exhibited by the Peano-Hilbert curve.
The Peano-Hilbert curve exhibits a symmetrical pattern, which may be described as follows: For any subimage s_i(k), the scan paths for the subimages s_{i-1}(4k-3) and s_{i-1}(4k-2) are the mirror reflections of the paths for the subimages s_{i-1}(4k) and s_{i-1}(4k-1), respectively. Two types of symmetry exist in the Peano-scan paths for a 2^i × 2^i subimage, depending on whether i is odd or even: If i is odd, the pattern for the upper half of the 2D space is reflected in the lower half; if i is even, the pattern in the left-hand half of the 2D space is reflected in the right-hand half; see Figure 11.29.
A symmetrical scan pattern exists for any subimage formed by the recursive division process. Hence, for the smallest subimage, the symmetrical pattern suggests that the basic definitions effectively obtain only two pixels, one after the other, either horizontally or vertically. In other words, the basic scan pattern effectively obtains only two pixels, p1 and p2, out of the four pixels in a 2 × 2 subimage; see Figure 11.30. The manner in which the remaining two pixels p3 and p4 are obtained follows the symmetry property stated above (substituting i = 1 and k = 1). The sequence in which the two pixels p3 and p4 are obtained is shown in Figure 11.30 in dashed lines.
In scanning large images, for any subimage s_i(k), the pattern of the Peano-scan path from the first pixel of s_{i-1}(4k-3) to the last pixel of s_{i-1}(4k-2) is the same as that from the last pixel of s_{i-1}(4k) to the first pixel of s_{i-1}(4k-1). Because the Peano-scan path does not leave any quadrant without visiting all the pixels within the quadrant, two equal sections of the image can be unwrapped independently, without affecting each other, into individual linear arrays by following the same scan path but in opposite directions. The resulting linear arrays, when concatenated appropriately, give the required 1D data stream. For the case illustrated in Figure 11.29 (a), the 64 pixels can be obtained as a linear sequence by tracing the Peano-scan path on pixels 1-32 in the upper half, followed by the pixels 32-1 in the lower half. Thus, several 1D arrays of a reasonable size may be used to hold the pixels obtained from different sections of the image. After all the subimages have been scanned (in parallel, if desired), the arrays may be concatenated appropriately to form the long sequence containing the entire image data.

11.9.4 Decorrelation of Peano-scanned data

The Peano-scanning operation scans the given picture recursively, quadrant by quadrant. Therefore, we could expect the 2D local statistics to be preserved in the resulting 1D data stream. Furthermore, we could also expect a higher correlation between pixels at larger lags in the Peano-scanned data than between the pixels obtained by concatenating the raster-scanned lines into a 1D array of pixels.
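A normalized ACF of the kind plotted in Figure 11.31 can be estimated from any 1D pixel stream with a few lines of code. This is a generic sketch, not the procedure used by Provine; the biased sample estimator is assumed here, and the function name is illustrative:

```python
def acf(stream, max_lag):
    """Normalized autocorrelation of a 1D pixel stream for lags 0..max_lag,
    estimated with the biased sample autocovariance (divide by n)."""
    n = len(stream)
    mean = sum(stream) / n
    var = sum((x - mean) ** 2 for x in stream) / n
    return [
        sum((stream[t] - mean) * (stream[t + lag] - mean)
            for t in range(n - lag)) / (n * var)
        for lag in range(max_lag + 1)
    ]
```

Applying this estimator to the raster-scanned and Peano-scanned versions of the same image produces the two curves compared in Figure 11.31; the value at lag 0 is unity by construction, and a slower decay with lag indicates higher inter-pixel correlation.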

[Figure: the scan order of the 8 × 8 subimage of panel (a), mirrored about the axis AB:

 1  2 15 16 17 20 21 22
 4  3 14 13 18 19 24 23
 5  8  9 12 31 30 25 26
 6  7 10 11 32 29 28 27
 6  7 10 11 32 29 28 27
 5  8  9 12 31 30 25 26
 4  3 14 13 18 19 24 23
 1  2 15 16 17 20 21 22 ]

FIGURE 11.29
Symmetrical patterns exhibited by the Peano-Hilbert curve for (a) an 8 × 8 subimage and (b) a 16 × 16 subimage. The line AB indicates the axis of symmetry in each case. The 64 pixels in the image in (a) are labeled in the order of being scanned [the pixels 1-32 in the upper half, followed by the pixels 32-1 in the lower half of the subimage in (a)]. Figure courtesy of J.A. Provine [999].
FIGURE 11.30
Symmetrical patterns in the basic definitions of the Peano scan. [Figure: the four 2 × 2 patterns R, L, D, and U, with the paths to pixels p3 and p4 shown in dashed lines.] Figure courtesy of J.A. Provine [999].

Figure 11.31 shows the ACFs obtained for the Lenna test image and a mammogram for raster-scanned and Peano-scanned pixel streams. As expected, Peano scanning has maintained higher inter-pixel correlation than raster scanning. Similar observations have been made by Zhang et al. [1001, 1002] in their study on the stochastic properties of medical images and data compression with Peano scanning.
The simplest method for decorrelating pixel data is to produce a data sequence containing the differences between successive pixels. As shown earlier in Section 11.5 and Figure 11.12, the differentiated data may be expected to have a Laplacian PDF, which is useful in compressing the data. Provine and Rangayyan [999, 1000] applied a simple first-difference operation to decorrelate Peano-scanned image data, and encoded the resulting values using the Huffman, arithmetic, and LZW coding schemes. In addition, they applied the 1D DCT and LP modeling procedures to raster-scanned and Peano-scanned data streams, as well as the 2D DCT and LP modeling procedures to the original image data. Some of the results obtained by Provine and Rangayyan are summarized in Tables 11.8, 11.9, and 11.10. The application of either the Huffman or the arithmetic code to the differentiated Peano-scanned data stream resulted in the lowest average bit rate in the study.
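The effect of first-difference decorrelation on the zero-order entropy H0 can be illustrated in a few lines of code. The stream below is a toy example, not data from the study; the seed value is retained in the difference sequence so that the operation remains reversible:

```python
from collections import Counter
from math import log2

def entropy(stream):
    """Zero-order entropy H0 in bits per symbol."""
    n = len(stream)
    return -sum(c / n * log2(c / n) for c in Counter(stream).values())

stream = [100, 102, 101, 103, 104, 104, 106, 105, 107, 108]
diffs = [stream[0]] + [b - a for a, b in zip(stream, stream[1:])]

# The differences cluster around a few small values, so H0 drops.
assert entropy(diffs) < entropy(stream)

# Reversibility: cumulative summation recovers the original stream.
rec = [diffs[0]]
for d in diffs[1:]:
    rec.append(rec[-1] + d)
assert rec == stream
```

For correlated pixel streams such as Peano-scanned data, the differences concentrate near zero (approximately Laplacian), which is what the Huffman and arithmetic coders exploit in Tables 11.9 and 11.10.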

11.10 Image Coding and Compression Standards

Two highly recognized international standards for the compression of still images are the Joint Bi-level Image experts Group (JBIG) standard [1008, 1009, 1010, 1011] and the Joint Photographic Experts Group (JPEG) standard [1012, 1011]. JBIG and JPEG are sanctioned by the International Organization for Standardization (ISO) and the Comité Consultatif International Téléphonique et Télégraphique (CCITT). Although JBIG was initially proposed for bilevel image compression, it may also be applied to continuous-

[Figure: ACF curves over lags of 0 to 60 pixels; dashed curve: raster-scanned pixels; '+' curve: Peano-scanned pixels. (a) Lenna image; (b) mammogram.]

FIGURE 11.31
ACF of raster-scanned and Peano-scanned pixels plotted as a function of the distance (lag) between the scanned pixels: (a) for the Lenna image and (b) for a mammogram. Figure courtesy of J.A. Provine [999].
TABLE 11.8
Average Bit Rate with the Application of the Huffman, Arithmetic, and LZW Coding Schemes to Raster-scanned and Peano-scanned Data Obtained from Eight Test Images [999].

Image          Size        Entropy H0   Average number of bits/pixel
(8 b/pixel)    (pixels)    (bits)       Huffman   Arith.   LZW        LZW
                                                           (Raster)   (Peano)
Airplane       512 × 512   6.49          6.84     6.65     7.47       9.06
Baboon         512 × 512   7.14          9.40     7.30     9.64       9.63
Cameraman      256 × 256   7.04          7.39     7.40     8.99       8.09
Lenna-256      256 × 256   7.57          8.12     7.95     9.10       8.97
Lenna-512      512 × 512   7.45         10.38     7.61     9.00       8.85
Peppers        512 × 512   7.37         10.89     7.54     7.94       8.06
Sailboat       512 × 512   7.27          9.35     7.43     8.69       9.56
Tiffany        512 × 512   6.38          7.74     6.54     7.51       9.51

Mean           -           7.09          8.76     7.30     8.54       8.97
SD             -           0.44          1.46     0.48     0.8        0.62

See also Tables 11.9 and 11.10. Note: Arith. = arithmetic coding.

tone images by treating bit planes as independent bilevel images. (Note: the term "continuous-tone" images is used to represent gray-level images, color images, and multicomponent images; whereas some authors use the term "m-ary" for the same purpose, the former is preferred as it is used by JPEG.) The efficiency of such an application depends upon preprocessing for bit-plane decorrelation. The Moving Picture Experts Group (MPEG) standard [1013, 1014] applies to the compression of video images. In the context of medical image data handling and PACS, the ACR and the US National Electrical Manufacturers Association (NEMA) proposed standards known as the ACR/NEMA and DICOM (Digital Imaging and Communications in Medicine) standards [1015, 1016, 1017, 1018]. The following sections provide brief reviews of the standards mentioned above.
TABLE 11.9
Average Bit Rate with Differentiated Peano-scanned Data (PD), Compared with the Results of 1D and 2D DPCM Encoding of Raster-scanned Data from Eight Test Images [999].

                            Average number of bits/pixel
             Huffman                    Arithmetic                 LZW
Image        PD    DPCM-1D  DPCM-2D    PD    DPCM-1D  DPCM-2D    PD    DPCM-1D  DPCM-2D
Airplane     4.52  5.49     4.60       4.48  5.50     4.58       5.13  5.33     5.09
Baboon       6.46  7.14     7.34       6.46  7.29     7.39       7.67  7.48     7.66
Cameraman    5.38  5.82     5.57       5.41  5.81     5.56       5.88  5.97     5.94
Lenna-256    5.66  6.35     5.74       5.64  6.40     5.71       6.24  6.60     6.02
Lenna-512    5.05  5.93     5.37       5.00  5.82     5.29       5.80  6.43     5.82
Peppers      5.06  5.82     6.20       4.97  5.73     6.09       5.63  5.72     5.88
Sailboat     5.55  6.34     6.61       5.54  6.39     6.59       6.50  6.51     6.65
Tiffany      4.63  5.59     5.02       4.60  5.52     4.91       5.26  5.83     5.52

Mean         5.29  6.06     5.81       5.26  6.06     5.77       6.01  6.23     6.07
SD           0.62  0.54     0.89       0.64  0.61     0.91       0.81  0.67     0.78

See also Tables 11.8 and 11.10.

11.10.1 The JBIG Standard

JBIG is a standard for progressive coding of bilevel images that supports three coding modes: progressive coding, compatible progressive/sequential coding, and single-layer coding. A review of the single-layer coding mode is presented in the following paragraphs.
For a bilevel image b(m, n), 0 ≤ m < M, 0 ≤ n < N, a typical JBIG coding scheme includes four main functional blocks as shown in Figure 11.32. The following items form important steps in the JBIG coding procedure:
- The typical prediction step is a line-skipping algorithm. A given line is marked as "typical" if it is identical to the preceding line. The encoder adds a special label to the encoded data stream for each typical line instead of encoding the line. The decoder generates the pixels of typical lines by line duplication.
TABLE 11.10
Average bit rates for eight test images with several decorrelation and encoding methods [999, 1000].

                       Average bit rate (bits/pixel)
             Direct        PD          PD         2D linear     2D DCT
             arith.        (Huffman)   (arith.)   predictive    coding
             (Peano        encoding    encoding   coding        (raster)
Image        or raster)                           (raster)
Airplane     6.65          4.52        4.48       4.50          5.94
Baboon       7.30          6.46        6.46       6.59          9.47
Cameraman    7.40          5.38        5.41       5.48          8.42
Lenna-256    7.95          5.66        5.64       5.49          8.00
Lenna-512    7.61          5.05        5.00       4.92          6.65
Peppers      7.54          5.06        4.97       5.30          7.12
Sailboat     7.43          5.55        5.54       5.72          7.70
Tiffany      6.54          4.63        4.60       4.76          6.55

Mean         7.30          5.29        5.26       5.35          7.48
SD           0.48          0.62        0.64       0.65          1.15

See also Tables 11.8 and 11.9. Note: arith. = arithmetic coding; PD = differentiated Peano-scanned data.

- The adaptive templates block provides substantial coding gain by looking for horizontal periodicity in the bilevel image. When a periodic template is changed, the encoder multiplexes a control sequence into the output data stream.
- The model templates block is a context arithmetic coder. The context is determined by ten particular neighboring pixels that are defined by two model templates: the three-line template and the two-line template, as shown in Figure 11.33 (labeled as 'X' and 'A', where 'A' is the adaptive pixel whose position could be varied during the coding process [1008, 1009]).
- The adaptive arithmetic encoder is an entropy coder that determines the necessity of coding a given pixel based upon the outputs of the typical prediction block and the model templates block. If necessary, the encoder notes the context and uses its internal probability estimator to estimate the conditional probability that the current pixel will be of a given value.
It should be noted that the JBIG coding algorithm includes at least three decorrelation steps for a given bilevel image (or bit plane).
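The line-skipping idea of the typical prediction step can be sketched as follows. This is an illustrative model only: the real JBIG bitstream signals typical lines through a dedicated control mechanism, not the string marker used here.

```python
def encode_typical_lines(image):
    """Replace each line that is identical to its predecessor with a marker,
    mimicking the typical-prediction (line-skipping) step of JBIG."""
    encoded = []
    prev = None
    for line in image:
        encoded.append("TYPICAL" if line == prev else list(line))
        prev = line
    return encoded

def decode_typical_lines(encoded):
    """Regenerate typical lines by duplicating the previous decoded line."""
    decoded = []
    for item in encoded:
        decoded.append(list(decoded[-1]) if item == "TYPICAL" else list(item))
    return decoded
```

For bilevel images with large uniform areas, many consecutive lines are identical, so entire lines are replaced by a single symbol before any arithmetic coding takes place.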

FIGURE 11.32
The single-layer JBIG encoder. [Block diagram: input bilevel image or bit plane → typical prediction → adaptive templates → model templates → adaptive arithmetic encoder → encoded data.] Reproduced with permission from L. Shen and R.M. Rangayyan, "Lossless compression of continuous-tone images by combined inter-bit-plane decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2): 198-207, 1997. © SPIE and IS&T.

FIGURE 11.33
Two context-model templates (the three-line model template and the two-line model template) used in the single-layer JBIG encoder. ?: pixel being encoded. X: pixels in the context model. A: adaptive pixel. Reproduced with permission from L. Shen and R.M. Rangayyan, "Lossless compression of continuous-tone images by combined inter-bit-plane decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2): 198-207, 1997. © SPIE and IS&T.

In a 1D form of JBIG coding, run-length coding is used to encode each line in the image by using a variable-length coding scheme; see Gonzalez and Woods [8] for further details of this and other JBIG procedures. See Sections 11.11, 11.12, and 11.13 for discussions on the results of application of JBIG.
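The run-length representation of a bilevel scan line, which would then be fed to a variable-length coder, can be sketched as follows (the function name is illustrative):

```python
def run_lengths(line):
    """Run-length code a bilevel scan line: return the starting value and
    the lengths of the alternating runs of identical values."""
    runs = []
    count = 1
    for prev, cur in zip(line, line[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return line[0], runs
```

Because the values of a bilevel line alternate between 0 and 1, the starting value and the run lengths suffice to reconstruct the line exactly.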
11.10.2 The JPEG Standard
JPEG is a continuous-tone image compression standard that supports both lossless and lossy coding [8, 1011, 1012]. The standard includes three systems [8]:
- A lossy baseline coding system based upon block-wise application of the DCT.
- An extended coding system for greater compression, higher precision, and progressive transmission and recovery.
- A lossless independent coding system for reversible compression.
In the baseline coding system, the pixel data are limited to a precision of 8 b. The image is broken into 8 × 8 blocks, shifted in gray level, and transformed using the DCT. The transform coefficients are quantized with variable-length code assignment (to a maximum of 11 b). Due to the presence of block artifacts and other errors in lossy compression, this mode of JPEG would not be suitable for the compression of medical images.
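The transform stage of the baseline system can be sketched with an orthonormal DCT-II matrix. This is a plain illustration rather than the fast DCT algorithms used in practice, and the quantization and entropy-coding stages are omitted:

```python
import math

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    m = []
    for k in range(n):
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def block_dct(block):
    """2D DCT of an 8 x 8 block, computed as C B C^T after the gray-level
    shift by -128 used in baseline JPEG."""
    n = len(block)
    c = dct_matrix(n)
    shifted = [[v - 128 for v in row] for row in block]
    tmp = [[sum(c[k][i] * shifted[i][j] for i in range(n)) for j in range(n)]
           for k in range(n)]
    return [[sum(tmp[k][j] * c[l][j] for j in range(n)) for l in range(n)]
            for k in range(n)]
```

For a flat block, all the energy collapses into the DC coefficient; quantizing the remaining small coefficients coarsely is what produces both the compression and the block artifacts of the lossy mode.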
The JPEG 2000 standard is based upon the wavelet transform [8, 1019]. JPEG 2000 offers the option of progressive coding (from lossy toward lossless), as well as the option of coding ROIs with higher quality than that for the other regions in the given image [1019], which may be of interest in some applications.
The lossless mode of JPEG uses a form of predictive (DPCM) coding. A linear combination of each pixel's neighbors at the left, upper, and upper-left positions is employed to predict the pixel's value, and then the difference between the true value of the pixel and its predicted value is coded through an entropy coder, such as the Huffman or arithmetic coder. Lossless JPEG defines seven linear combinations known as prediction selection values (PSV).
For a continuous-tone image f(m, n), 0 ≤ m < M, 0 ≤ n < N, the predictors used for an interior pixel f(m, n), 0 < m < M, 0 < n < N, in lossless JPEG coding are as follows:
- PSV=0: no prediction [or f~(m, n) = 0], which indicates entropy encoding of the original image directly;
- PSV=1: f~(m, n) = f(m-1, n);
- PSV=2: f~(m, n) = f(m, n-1);
- PSV=3: f~(m, n) = f(m-1, n-1);
- PSV=4: f~(m, n) = f(m-1, n) + f(m, n-1) - f(m-1, n-1);
- PSV=5: f~(m, n) = f(m-1, n) + [f(m, n-1) - f(m-1, n-1)]/2;
- PSV=6: f~(m, n) = f(m, n-1) + [f(m-1, n) - f(m-1, n-1)]/2;
- PSV=7: f~(m, n) = [f(m-1, n) + f(m, n-1)]/2;
where f~(m, n) is the predicted value for the pixel f(m, n). The boundary pixels could be treated in various possible ways, which will not make a significant difference in the final compression ratio due to their small population. A typical method is to use PSV=1 for the first row and PSV=2 for the first column, with special treatment of the first pixel at position (0, 0) using the value of 2^(K-1), where K is the number of precision bits of the pixels.
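The seven predictors can be written directly from the list above. This sketch uses integer floor division as a stand-in for the shift-based integer arithmetic of the standard, and the function name is illustrative:

```python
def psv_predict(f, m, n, psv):
    """Predicted value of the interior pixel f[m][n] for the lossless-JPEG
    prediction selection values (PSV) 0-7."""
    a = f[m - 1][n]      # neighbor in the previous row
    b = f[m][n - 1]      # neighbor in the previous column
    c = f[m - 1][n - 1]  # diagonal neighbor
    return {
        0: 0,
        1: a,
        2: b,
        3: c,
        4: a + b - c,
        5: a + (b - c) // 2,
        6: b + (a - c) // 2,
        7: (a + b) // 2,
    }[psv]
```

The residual f(m, n) - f~(m, n) is then passed to the Huffman or arithmetic coder; PSV=4 through PSV=7 use both directions of context and usually yield the smallest residuals on smooth images.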
Sung et al. [1020] evaluated the application of JPEG 2000 for the compression of mammograms, and suggested that compression ratios of up to 15:1 were possible without visual loss, "preserving significant medical information at a confidence level of 99%". It was also suggested that compression of up to 80:1 could be achieved "without affecting clinical diagnostic performance". See Sections 11.11, 11.12, and 11.13 for discussions on the results of application of lossless JPEG to several test images.

11.10.3 The MPEG Standard

The MPEG standard includes several schemes for the compression of video images for various applications, based upon combinations of the DCT and DPCM, including motion compensation [1013, 8, 1021]. The techniques exploit data redundancy and correlation within each frame as well as between frames; furthermore, they take advantage of certain psychovisual properties of the human visual system. Recent versions of MPEG include special features suitable for videoconference, multimedia, streaming-media, and video-game systems [1021]. Most of such special features are not of relevance in lossless compression of biomedical image data.

11.10.4 The ACR/NEMA and DICOM Standards

The proliferation of medical imaging technology and devices of several types in the 1970s and 1980s led to a situation where, due to the lack of standards, interconnection and communication of data between imaging and computing devices was not possible. In order to rectify this situation, the ACR and NEMA established the ACR/NEMA 300 standard [1017, 1018] on digital imaging and communication, specifying the desired hardware interface, a minimum set of software commands, and a consistent set of data formats to facilitate communication between imaging devices and computers across networks. This was followed by another standard on data compression, the ACR/NEMA PS 2 [1016], specifying the manner in which header data were to be provided such that a recipient of the compressed data could identify the data compression method and parameters used, and reconstruct the image data. The standard permits the use of several image decorrelation and data compression techniques, including transform (DCT), predictive (DPCM), Huffman, and Lempel-Ziv coding techniques. The DICOM standard [1015] includes a number of enhancements to the ACR/NEMA standard, including conformance levels and applicability to a networked environment.

11.11 Segmentation-based Adaptive Scanning

Shen and Rangayyan [321, 320] proposed a segmentation-based lossless image coding (SLIC) method based on a simple but efficient region-growing procedure. An embedded region-growing procedure was used to produce an adaptive scanning pattern for the given image with the help of a discontinuity-index map that required a small number of bits for encoding. The JBIG method was used for encoding both the error-image data and the discontinuity-index map data. The details of the SLIC method and the results obtained are described in the following paragraphs.

11.11.1 Segmentation-based coding

Kunt et al. [566] proposed a contour-texture approach to picture coding; they called such approaches "second-generation" image coding techniques. The main idea behind such techniques is to first segment the image into nearly homogeneous regions surrounded by contours, such that the contours correspond, as much as possible, to those of the objects in the image, and then to encode the contour and texture information separately. Because contours can be represented as 1D signals and the pixels within a region are highly correlated, such methods are expected to lead to high compression ratios. Although the idea appears to be promising, its implementation meets with a series of difficulties. A major problem exists at its very important first step, segmentation, which determines the final performance of the segmentation-based coding method. It is well recognized that there are no satisfactory segmentation algorithms for application to a wide variety of general images. Most of the available segmentation algorithms are sophisticated and give good performance only for specific types of images.
In order to overcome the problem mentioned above in relation to segmentation, Shen and Rangayyan [321, 320] proposed a simple region-growing method. In this procedure, instead of generating a contour set, a discontinuity map is obtained during the region-growing procedure. Concurrently, the method also produces a corresponding error image based upon the difference between each pixel and its corresponding "center pixel". The discontinuity map and the error image are then encoded separately.
11.11.2 Region-growing criteria
The aim of segmentation in image compression is not the identification of objects or the analysis of features; instead, the aim is to group spatially connected pixels lying within a small gray-level dynamic range. The region-growing procedure in SLIC starts with a single pixel, called the seed pixel (#0 in Figure 11.34). Each of the seed's 4-connected neighboring pixels, from #1 to #4 in the order shown in Figure 11.34, is checked with a region-growing (or inclusion) condition. If the condition is satisfied, the neighboring pixel is included in the region. The four neighbors of the newly added neighboring pixel are then checked for inclusion in the region. This recursive procedure is continued until no spatially connected pixel meets the growing condition. A new region-growing procedure is then started with the next pixel in the image that is not already a member of a region; the procedure ends when every pixel in the image has been included in one of the regions grown.
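A sketch of the growing procedure is given below, with the recursion replaced by an explicit queue, which is preferable for large images. The names are illustrative, and the error values and discontinuity indices of the full SLIC method are omitted to keep the example short:

```python
from collections import deque

def grow_regions(image, error_level):
    """Assign a region label to every pixel: grow from each unvisited seed,
    adding a 4-connected neighbor whenever the absolute difference between
    the neighbor and its center pixel is less than error_level."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    region = 0
    for sr in range(rows):
        for sc in range(cols):
            if labels[sr][sc]:
                continue  # already a member of a region
            region += 1
            labels[sr][sc] = region
            queue = deque([(sr, sc)])
            while queue:
                r, c = queue.popleft()  # (r, c) acts as the center pixel
                for dr, dc in ((-1, 0), (0, -1), (0, 1), (1, 0)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not labels[nr][nc]
                            and abs(image[nr][nc] - image[r][c]) < error_level):
                        labels[nr][nc] = region
                        queue.append((nr, nc))
    return labels
```

Each newly included pixel becomes the center pixel for its own neighbors, exactly as in the recursive description above, and the procedure ends when every pixel carries a label.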

FIGURE 11.34
Demonstration of a seed pixel (#0) and its 4-connected neighbors (#1 to #4) in region growing. Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301-306, 1997. © IEEE.
The region-growing conditions used in SLIC are the following:
1. The neighboring pixel is not a member of any of the regions already grown.
2. The absolute difference between the neighboring pixel and the corresponding center pixel is less than the limit error-level (to be defined later).
Figure 11.35 demonstrates the relationship between a neighboring pixel and its center pixel: at the specific stage illustrated in the figure, pixel A has become a member of the region (3) being grown, and its four neighbors, namely B, C, D, and E, are being checked for inclusion (E is already a member of the region). Under this circumstance, pixel A is the center pixel of the neighboring pixels B, C, D, and E. When a new neighboring pixel is included in the region being grown, its error-level-shift-up difference with respect to its center pixel is stored as the pixel's "error" value. If only the first of the two region-growing conditions is met, the discontinuity index of the pixel is incremented. By this process, after region growing, a "discontinuity-index image data part" and an "error-image data part" will be obtained. The maximum value of the discontinuity index is 4. Most of the segmentation-based coding algorithms reported in the literature include contour coding and region coding; instead of these steps, the SLIC method uses a discontinuity-index map and an error-image data part.
The error-level in SLIC is determined by a preselected parameter error-bits as

    error-level = 2^(error-bits - 1).                    (11.190)

For instance, if the error image is allowed to take up to 5 b/pixel (error-bits = 5), the corresponding error-level is 16; the allowed difference range is then [-16, 15]. The error value of the seed pixel of each region is defined as the value of its lower error-bits bits; the value of the higher (N - error-bits) bits of the pixel is stored in a "high-bits seed-data part", where N is the number of bits per pixel in the original image data.
The three data parts described above are used to fully recover the original image during the decoding process. The region-growing conditions during decoding are that the neighboring pixel under consideration for inclusion be not in any of the previously grown regions, and that its discontinuity index equal 0. When the conditions are met for a pixel, its value is restored as the sum of its error-level-shift-down error value and its center pixel value (except for the seed pixels of every region). If only the first of the two conditions is satisfied, the discontinuity index of that pixel is decremented. Thus, the discontinuity index generated during segmentation is used to guide region growing during decoding. The "high-bits seed-data part" is combined with the "error-image data part" to recover the seed pixel value of each region.
Figure 11.36 provides a simple example for illustration of the region-growing procedure and its result. The 8 × 8 image in the example, shown in Figure

seed seed

REGION 2
REGION 1 seed

B
REGION 3 C A D
(being grown) E

FIGURE 11.35
Demonstration of a neighboring pixel (B , C , D, or E ) being checked for inclu-
sion against the current center pixel (A) during region growing. Reproduced
with permission from L. Shen and R.M. Rangayyan, \A segmentation-based
lossless image coding method for high-resolution medical image compression",
IEEE Transactions on Medical Imaging, 16(3): 301 { 306, 1997.  c IEEE.
11.36 (a), is a section of an eye of the 512 × 512 Lenna image [shown in Figure 11.37 (a)]. The value of error-bits was set to 5 for this example. Figure 11.36 (b) shows the result of region growing. The corresponding three data parts, namely the discontinuity-index image data, error-image data, and high-bits seed data, are shown in Figure 11.36 (c), (d), and (e), respectively. The full 512 × 512 Lenna image and the corresponding discontinuity-index image data (scaled) and error-image data (scaled) are shown in Figure 11.37.

11.11.3 The SLIC procedure

The complete SLIC procedure is illustrated in Figure 11.38. At the encoding end, the original image is transformed into three parts: discontinuity-index image data, error-image data, and high-bits seed data, by the region-growing procedure. The first two data parts are encoded using the Gray code (see Table 11.1), broken down into bit planes, and finally encoded using JBIG. The last data part is stored or transmitted as is; it needs only N - error-bits bits per region.
At the decoding end, the JBIG-coded data files are JBIG-decoded first, and then the Gray-coded bit planes are composed back to binary code. Finally, the three parts are combined together by the same region-growing procedures as before to recover the original image.
A lossless compression method basically includes two major stages: one is image transformation, with the purpose of data decorrelation; the other is encoding of the transformed data. However, in the SLIC procedure, image transformation is achieved in both the region-growing procedure and, later, in the JBIG procedure, whereas encoding is accomplished within the JBIG procedure.
11.11.4 Results of image data compression with SLIC
Five mammograms and five chest radiographs were used to evaluate the performance of SLIC [320, 321]. The procedure was tested using 8 b/pixel and 10 b/pixel versions of the images obtained by direct mapping and by discarding the two least-significant bits, respectively, from the original 12 b images. The method was also tested with commonly used 8 b nonmedical test images. The performance of SLIC was compared with that of JBIG [1008, 1009], JPEG [1012], adaptive Lempel–Ziv (ALZ) coding [966], HINT [978], and 2D LP coding [174]. In using JBIG, for direct encoding of the image or for encoding the error-image data part and the discontinuity-index data part, parameters were selected so as to use three lines of the image (NLPS0 = 3) in the underlying model ("3D") and no progressive spatial resolution buildup. The lossless JPEG package used in the study includes features of automatic determination of the best prediction pattern and optimal Huffman table generation. The UNIX utility compress was used for ALZ compression. For the 10 b test images, the 2D Burg LP algorithm (see Section 11.8.1) followed by
1056 Biomedical Image Analysis
[Figure 11.36 appears here: (a) the 8 × 8 original image with seed pixels marked; (b) the result of region growing; (c) the discontinuity-index data; (d) the error-image data, in which, for example, the pixel of value 91 grown from its neighbor of value 84 is stored as 91 − 84 + 16 = 23, and the pixel of value 169 grown from its neighbor of value 176 is stored as 169 − 176 + 16 = 9; (e) the high-bits seed data.]
FIGURE 11.36
A simple example of the region-growing procedure and its result with error-bits set to 5. (a) Original image with the size of 8 rows by 8 columns (a section of the 512 × 512 Lenna image shown in Figure 11.37). (b) The result of region growing. The seed pixel of every region grown is identified; a few regions include only the corresponding seed pixels. (c) The discontinuity-index image data part. (d) The error-image data part. (e) The high-bits seed-data part (from left to right). Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301–306, 1997. © IEEE.
[Figure 11.37 panels (a), (b), and (c) appear here.]
FIGURE 11.37
The 512 × 512 Lenna image and the results with SLIC (with error-bits = 5): (a) Original image. (b) The discontinuity-index image data part (scaled). (c) The error-image data part (scaled). See also Figure 11.36. Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301–306, 1997. © IEEE.
[Figure 11.38 appears here: in the SLIC encoder, the region-growing procedure transforms the original image into error data and discontinuity-index data, which pass through binary-to-Gray conversion and the JBIG encoder, plus high-bits seed data, which is sent as is; after transmission or storage, the SLIC decoder applies the JBIG decoder, Gray-to-binary conversion, and the region-growing procedure to recover the original image.]
FIGURE 11.38
Illustration of the segmentation-based lossless image coding (SLIC) procedure. Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301–306, 1997. © IEEE.
Huffman error coding (2D-Burg-Huffman) was utilized instead of the JPEG package. (The 2D-Burg-Huffman program was designed specifically for 10 b images; the JPEG program permits only 8 b images.)
The SLIC method has a tunable parameter, error-bits. The data compression achieved was found to be almost the same for most of the 8 b medical images using a value of 4 or 5 for error-bits. Subsequently, error-bits = 5 was used in the remaining experiments with the 8 b versions of the medical images. The performance of the SLIC method is summarized in Table 11.11, along with the results of ALZ, JBIG, JPEG, and HINT compression. It is seen that SLIC outperformed all of the other methods studied with the test-image set used, except in the case of one mammogram and one chest radiograph for which JBIG gave negligibly better results. On the average, SLIC improved the bit rate by about 9%, 29%, and 13%, as compared with JBIG, JPEG, and HINT, respectively.
TABLE 11.11
Average Bits Per Pixel with SLIC (error-bits = 5), ALZ, JBIG, JPEG (Best Mode), and HINT Using Five 8 b Mammograms (m1 to m4 and m9) and Five 8 b Chest Radiographs (c5 to c8 and c10).

Image (8 b/pixel)      Entropy H0   ALZ    JBIG   JPEG   HINT   SLIC
m1 (4,096 × 1,990)     6.13         2.95   1.82   2.14   1.87   1.73
m2 (4,096 × 1,800)     6.09         2.89   1.84   2.18   1.92   1.78
m3 (3,596 × 1,632)     5.92         3.02   2.40   2.37   2.12   2.12
m4 (3,580 × 1,696)     5.90         2.45   1.96   1.89   1.61   1.44
c5 (3,536 × 3,184)     7.05         3.03   1.60   1.92   1.75   1.42
c6 (3,904 × 3,648)     7.46         3.32   1.88   2.15   2.00   1.76
c7 (3,120 × 3,632)     5.30         1.73   0.95   1.60   1.40   0.99
c8 (3,744 × 3,328)     7.45         3.13   1.50   1.89   1.64   1.35
m9 (4,096 × 2,304)     6.04         2.60   1.67   2.12   1.84   1.67
c10 (3,664 × 3,680)    7.17         2.95   1.63   1.97   1.73   1.50
Average                6.45         2.81   1.72   2.02   1.79   1.58
The lowest bit rate in each case is highlighted. Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301–306, 1997. © IEEE.
Whereas the SLIC procedure performed well with high-resolution medical images, its performance with low-resolution general images was comparable to that of JBIG and JPEG, as shown in Table 11.12. When SLIC used an optimized JBIG algorithm with the maximum number of lines, that is, NLPS0 = row, the number of rows of the image (the last column of Table 11.12), in the inherent model instead of only three lines (NLPS0 = 3 in Table 11.12), its performance was better than that of JBIG and comparable to that of JPEG.
TABLE 11.12
Average Bits Per Pixel with SLIC (error-bits = 8), ALZ, JBIG, and JPEG (Best Mode) Using Eight Commonly Used 8 b Images.

Image (8 b/pixel)        Entropy (order=0)  ALZ    JBIG   JPEG   SLIC (NLPS0=3)  SLIC (NLPS0=row)
Airplane (512 × 512)     6.49               5.71   4.24   4.20   4.22            4.11
Baboon (512 × 512)       7.14               7.84   6.37   6.18   6.55            6.44
Cameraman (256 × 256)    7.01               6.68   4.92   4.95   5.14            4.91
Lenna (256 × 256)        7.57               7.91   5.37   5.07   5.26            5.04
Lenna (512 × 512)        7.45               7.05   4.80   4.69   4.74            4.63
Peppers (512 × 512)      7.37               6.86   4.82   4.76   4.90            4.80
Sailboat (512 × 512)     7.27               7.07   5.34   5.26   5.46            5.36
Tiffany (512 × 512)      6.38               5.88   4.38   4.35   4.43            4.32
Average                  7.19               6.87   5.03   4.93   5.09            4.95
Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301–306, 1997. © IEEE.

In the studies of Shen and Rangayyan [321, 320], setting error-bits = 6 was observed to be a good choice for the compression of the 10 b versions of the medical images. Table 11.13 lists the results of compression with the ALZ, JBIG, 2D-Burg-Huffman, HINT, and SLIC methods. The SLIC technique provided lower bit rates than the other methods: the average bit rate is 2.92 b/pixel with SLIC, whereas HINT, JBIG, and 2D-Burg-Huffman have average bit rates of 3.03, 3.17, and 3.14 b/pixel, respectively.
TABLE 11.13
Average Bits Per Pixel with SLIC (error-bits = 6), ALZ, JBIG, 2D-Burg-Huffman (2DBH), and HINT Using Five 10 b Mammograms (m1 to m4 and m9) and Five 10 b Chest Radiographs (c5 to c8 and c10).

Image (10 b/pixel)     Entropy (order=0)  ALZ    JBIG   2DBH   HINT   SLIC
m1 (4,096 × 1,990)     7.27               5.45   2.89   2.92   2.75   2.73
m2 (4,096 × 1,800)     7.61               6.02   3.19   3.19   3.15   3.08
m3 (3,596 × 1,632)     6.68               4.73   3.13   2.97   2.81   2.81
m4 (3,580 × 1,696)     7.21               5.14   3.20   2.76   2.58   2.57
c5 (3,536 × 3,184)     8.92               7.21   3.33   3.31   3.28   2.99
c6 (3,904 × 3,648)     9.43               7.84   3.87   3.81   3.89   3.59
c7 (3,120 × 3,632)     7.07               4.67   2.30   2.58   2.34   2.27
c8 (3,744 × 3,328)     9.29               7.40   3.25   3.16   3.02   2.89
m9 (4,096 × 2,304)     7.81               5.93   3.18   3.41   3.29   3.17
c10 (3,664 × 3,680)    9.05               7.51   3.35   3.27   3.18   3.07
Average                8.03               6.19   3.17   3.14   3.03   2.92
The lowest bit rate in each case is highlighted. Reproduced with permission from L. Shen and R.M. Rangayyan, "A segmentation-based lossless image coding method for high-resolution medical image compression", IEEE Transactions on Medical Imaging, 16(3): 301–306, 1997. © IEEE.
Most of the previously reported segmentation-based coding techniques [566, 1022, 1023] involve three procedures: segmentation, contour coding, and texture coding. For segmentation, a sophisticated procedure is generally employed for the extraction of closed contours. In the SLIC method, a simple single-scan neighbor-pixel checking algorithm is used. The major problem after image segmentation with the other methods lies in the variance of pixel intensity within the regions, which has resulted in the application of such methods to lossy coding instead of lossless coding. In order to overcome this problem, the SLIC method uses the most-correlated neighboring pixels to generate a low-dynamic-range error image during segmentation. Contour coding is replaced by a coding step applied to a discontinuity-index map with a maximum of 5 levels, and texture coding is turned into the encoding of a low-dynamic-range error image. Further improvement of the performance of SLIC may be possible by the application of more efficient coding methods to the error-image and discontinuity-index data parts, and by modifying the region-growing procedure.
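The way a low-dynamic-range error image arises can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the published algorithm: it uses only the left causal neighbor, whereas SLIC checks several causal neighbors (B, C, D, or E in Figure 11.35), records the choice in the discontinuity-index map, and starts a new seed when no neighbor fits within the error-bits range.

```python
EB = 5                        # error-bits; errors are offset into [0, 2**EB)
OFFSET = 1 << (EB - 1)        # 16 for EB = 5, as in Figure 11.36

def error_row(row):
    """Simplified sketch of one row of the SLIC error image.

    The first pixel acts as its own seed; every other pixel stores the
    offset difference from its left neighbor, e.g. 91 - 84 + 16 = 23,
    which stays within 5 bits for smoothly varying regions.
    """
    return [OFFSET] + [row[i] - row[i - 1] + OFFSET
                       for i in range(1, len(row))]

row = [84, 84, 91, 83]            # start of row 1 of Figure 11.36 (a)
assert error_row(row)[2] == 23    # matches Figure 11.36 (d)
assert error_row(row)[3] == 8     # 83 - 91 + 16
```

In the full method, a pixel whose offset difference falls outside [0, 2^error-bits) with respect to every candidate neighbor becomes a new seed, whose high bits go into the high-bits seed-data part.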
The SLIC method was extended to the compression of 3D biomedical images by Lopes and Rangayyan [1024]. The performance of the method varied depending upon the nature of the image. However, the advantage of implementation of SLIC in 3D versus 2D on a slice-by-slice basis was not significant. For example, both 2D and 3D SLIC with the compression utility bzip2 [1025] (as the encoding scheme after decomposition of the image by the segmentation-based procedure) reduced the data in a 3D CT head examination from the original 16 b/voxel size to 5.4 b/voxel; the zeroth-order entropy of the image was 7.9 b/voxel. The inclusion of the SLIC procedure improved the performance of the compression utilities gzip and compress by 15–20%.
Acha et al. [1026] extended the SLIC method to the compression of color images in the application of diagnosis of burn injuries. Color images of size 832 × 624 pixels at 24 b/pixel in the RGB format were compressed to the rate of 7.7 b/pixel; application of the JPEG lossless method resulted in a rate of 8.2 b/pixel. In a further study, Serrano et al. [1027] converted the same images as in the study of Acha et al. [1026] from the RGB format to the YIQ (luminance, in-phase, and quadrature) system in a lossless manner, and showed that the bit rate could be further reduced by the use of SLIC to 4.6 b/pixel.
11.12 Enhanced JBIG Coding
The SLIC procedure demonstrates a successful example of the application of the JBIG method for coding the discontinuity-index map and error-data parts. We may, therefore, expect the incorporation of decorrelation procedures, such as predictive algorithms, into JBIG coding to provide better performance. Shen and Rangayyan [1028, 320] proposed a combination of multiple decorrelation procedures, including a lossless-JPEG-based predictor, a transform-based inter-bit-plane decorrelator, and a JBIG-based intra-bit-plane decorrelator; the details of their procedure are described in the following paragraphs.
Although JBIG includes an efficient intra-bit-plane decorrelation procedure, it needs an efficient preprocessing algorithm for inter-bit-plane decorrelation in order to achieve good compression efficiency with continuous-tone images.
There are several ways in which bit planes may be created from a continuous-tone image. One common choice is to use the bits of a folded-binary or Gray representation of intensity. The Gray code is the most common alternative to the binary code for representing digital numbers; see Table 11.1. The major advantage of the Gray code is that only one bit changes between each pair of successive code words, which helps to provide a good bit-plane representation for original image pixels: most neighboring pixels have highly correlated and close values, and thus most neighboring bits within each Gray-coded bit plane may be expected to have the same value. It has been shown that, by using the Gray representation, JBIG can obtain compression ratios at least comparable to those of lossless JPEG [1029].
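The Gray mapping, its inverse, and bit-plane splitting are simple bit manipulations; the following minimal Python sketch (function names are ours, not from any standard) illustrates the one-bit-change property:

```python
def to_gray(v):
    """Binary-reflected Gray code: successive integers differ in one bit."""
    return v ^ (v >> 1)

def from_gray(g, k=8):
    """Invert the Gray code by cascading XORs of shifted copies."""
    v = g
    shift = 1
    while shift < k:
        v ^= v >> shift
        shift <<= 1
    return v

def bit_planes(pixels, k=8):
    """Split a list of K-bit values into K binary planes (MSB first)."""
    return [[(p >> b) & 1 for p in pixels] for b in range(k - 1, -1, -1)]

pixels = [100, 101, 102, 103]            # close neighboring intensities
gray = [to_gray(p) for p in pixels]
assert [from_gray(g) for g in gray] == pixels
# Successive intensities differ in exactly one bit after Gray coding,
# so Gray-coded bit planes contain long uniform runs for smooth regions.
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(gray, gray[1:]))
```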
Coding schemes other than the Gray code may be derived based on specific requirements. For instance, for a K-bit image f(m, n), the prediction error e(m, n) could be represented as

    e(m, n) = f(m, n) − f~(m, n),    (11.191)

where f~(m, n) is the predicted value of the original pixel f(m, n). In general, up to K + 1 bits could be required to represent the difference between two K-bit numbers. However, because the major concern in compression is to retrieve f(m, n) from e(m, n) and f~(m, n), we could make use of the following binary arithmetic operation:

    e~(m, n) = e(m, n) & {0 11⋯1}b,    (11.192)

where the string of 1s is K bits long, & is the bit-wise AND operation, and the subscript b indicates binary representation. Then, only K bits are necessary to represent the prediction error e~(m, n). The original pixel value may be retrieved as

    f(m, n) = [e~(m, n) + f~(m, n)] & {0 11⋯1}b.    (11.193)

With the transformation as above, the value of (2^K − v) for e~(m, n) denotes a prediction error e(m, n) of either (2^K − v) or −v. In general, the lower and the higher ends of the value of the error e~(m, n) appear more frequently than mid-range values.
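The modulo-2^K arithmetic of Equations 11.192 and 11.193 can be sketched in a few lines of Python (K = 8 assumed; names are ours):

```python
K = 8
MASK = (1 << K) - 1            # the K-bit mask {0 11...1}_b of Eq. 11.192

def wrap_error(f, f_pred):
    """Eq. 11.192: keep only the low K bits of the (K+1)-bit difference."""
    return (f - f_pred) & MASK

def recover(e_wrapped, f_pred):
    """Eq. 11.193: modular addition recovers the original pixel exactly."""
    return (e_wrapped + f_pred) & MASK

f, f_pred = 3, 250             # signed error -247 would need K+1 bits
e = wrap_error(f, f_pred)      # 9, i.e. 2**K - 247; fits in K bits
assert recover(e, f_pred) == f
```

The wrapped error is ambiguous in isolation (9 could mean +9 or −247), but recovery is unambiguous because the decoder knows the same predicted value f~(m, n).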
The F1 transformation [1030] could make bit-plane coding more efficient by increasing the run length in the most-significant bits:

    v1 = F1(v) = { 0,             v = 0
                 { 2v − 1,        v ≤ 2^(K−1)    (11.194)
                 { 2(2^K − v),    v > 2^(K−1)

with the inverse transform given by

    v = F1⁻¹(v1) = { 0,          v1 = 0
                   { 2^K − p,    v1 = 2p (even)    (11.195)
                   { q,          v1 = 2q − 1 (odd).
The F2 transformation [1030, 1031] has a similar function, but through the reversal of the higher half of the value range:

    v2 = F2(v) = { v,                             v < 2^(K−1)    (11.196)
                 { v ⊕ {0 11⋯1}b (K−1 bits),      v ≥ 2^(K−1)

where ⊕ is the bit-wise exclusive-OR operation; its inverse transform is

    v = F2⁻¹(v2) = F2(v2).    (11.197)
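A Python sketch of the F1 and F2 transforms and their inverses (K = 8 assumed; an exhaustive round-trip check over all K-bit values confirms both mappings are reversible):

```python
K = 8
HALF = 1 << (K - 1)            # 2**(K-1)

def f1(v):
    """Eq. 11.194: small wrapped errors (near 0 or 2**K) map to small
    codes, so the most-significant bit planes contain long runs of 0s."""
    if v == 0:
        return 0
    return 2 * v - 1 if v <= HALF else 2 * ((1 << K) - v)

def f1_inv(v1):
    """Eq. 11.195: even codes come from the upper half, odd from the lower."""
    if v1 == 0:
        return 0
    return (1 << K) - v1 // 2 if v1 % 2 == 0 else (v1 + 1) // 2

def f2(v):
    """Eq. 11.196: XOR the low K-1 bits of the upper half; self-inverse."""
    return v if v < HALF else v ^ (HALF - 1)

for v in range(1 << K):
    assert f1_inv(f1(v)) == v
    assert f2(f2(v)) == v      # Eq. 11.197
```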
Shen and Rangayyan [1028] investigated the combined use of the PSV system in JPEG (see Section 11.10.2) and bit-plane coding using JBIG, along with one of the Gray, F1, and F2 transforms. In using JBIG (for direct coding of the original image or for coding the prediction-error image), the method was parameterized to use the three-line model template and a stripe size equal to the number of rows of the image, with no progressive spatial resolution buildup (see Section 11.10.1). The lossless JPEG scheme was set to generate the optimal Huffman table for each PSV value.
The methods were tested with a commonly used set of eight images (see Table 11.8). A comparison of the compression efficiencies of lossless JPEG, JBIG, and PSV-incorporated JBIG with one of the Gray, F1, or F2 transforms is shown in Figure 11.39, for one of the test images. Each group of vertical bars consists of five bars, corresponding to the entropy of the prediction-error image, followed by the actual bit rates with lossless JPEG coding, and PSV-incorporated JBIG bit-plane coding with the Gray, F1, and F2 transformations, respectively. In the figure, there are three horizontal lines representing the zeroth-order entropy of the original image, the bit rate by direct JBIG coding with the Gray transformation, and the best bit rate among all of the methods tested. The best performance for each test image was achieved with PSV-incorporated JBIG bit-plane coding with the F1 transformation and PSV = 7, except for the 256 × 256 Lenna image, for which the best rate was given with PSV = 6 and JBIG bit-plane coding of the prediction error after F2 transformation. The F1 and F2 transforms provided similar performance and performed better than the Gray transform, with the F1 transform giving slightly lower bit rates in most of the cases.
In the results obtained by Shen and Rangayyan [1028], it was observed that the zeroth-order entropies of the prediction-error images were much lower than those of the original images, that the bit rates by lossless JPEG compression were always higher than the zeroth-order entropies of the prediction-error images, and that PSV-incorporated JBIG bit-plane coding provided bit rates lower than the zeroth-order entropies of the prediction-error images (with the exceptions being the Baboon and Sailboat images). These observations show that a simple prediction procedure, such as the PSV scheme employed in lossless JPEG, is useful for decorrelation. In particular, prediction with PSV = 7 followed by the F1 transform and JBIG bit-plane coding achieves an average bit rate that is about 0.2 b/pixel lower than those achieved with direct Gray-coded JBIG compression and the optimal mode of lossless JPEG. This also indicates that the F1 transform is a better inter-bit-plane decorrelator than the Gray code for prediction-error images, and that the intra-bit-plane decorrelation steps within the JBIG algorithm are not redundant with prior decorrelation by the PSV system in JPEG.
[Figure 11.39 appears here: groups of five bars for PSV = 1 through PSV = 7, plotted as bit rates (bits/pixel) on a scale of roughly 3.5 to 7.5, with horizontal reference lines for the entropy of the original Airplane image, the bit rate of direct JBIG coding, and the best rate among all methods.]
FIGURE 11.39
Comparison of image compression efficiency with the enhanced JBIG scheme. The five bars in each case represent the zeroth-order entropy of the prediction-error image and actual bit rates with lossless JPEG, and PSV-incorporated JBIG with the Gray, F1, and F2 transformations, respectively (from left to right). Reproduced with permission from L. Shen and R.M. Rangayyan, "Lossless compression of continuous-tone images by combined inter-bit-plane decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2): 198–207, 1997. © SPIE and IS&T.
Shen and Rangayyan [1028] applied the best mode of the PSV-incorporated JBIG bit-plane coding scheme (the PSV = 7 predictor followed by JBIG bit-plane coding of the F1-transformed prediction error, denoted as PSV7-F1-JBIG) to the JPEG standard set of continuous-tone test images; the results are shown in Table 11.14. The table also shows the zeroth-order entropies of the prediction-error images, the results of the best mode of lossless JPEG coding (for 8 b component images only), the results of direct Gray-transformed JBIG coding, and the results of the best mode of the CREW technique (Compression with Reversible Embedded Wavelets, one of the methods proposed to the Committee of Next Generation Lossless Compression of Continuous-tone Still Pictures) [1032, 1033].
The results in Table 11.14 demonstrate that the enhanced JBIG bit-plane coding scheme for continuous-tone images performs the best among the four algorithms tested. In terms of the average bit rate, the scheme outperforms direct JBIG coding and the best mode of CREW by 0.13 and 0.12 b/pixel, respectively, and achieves bit rates lower than the zeroth-order entropies of the prediction-error images by an average of 0.46 b/pixel for the entire test set of images. If only the 8 b component images are considered, the enhanced JBIG technique provides better compression performance by 0.56, 0.08, 0.52, and 0.18 b/pixel, in terms of average bit rates, when compared with lossless JPEG coding, direct Gray-transform JBIG coding, the zeroth-order entropy of the prediction-error image, and the best mode of lossless CREW, respectively.
For comparison with SLIC in the context of radiographic images, the enhanced JBIG (PSV7-F1-JBIG) procedure was applied for compression of the five mammograms and five chest radiographs listed in Tables 11.11 and 11.13. The bit rates achieved for the 8 b and 10 b versions of the images are listed in Tables 11.15 and 11.16, respectively. The results indicate that the enhanced JBIG procedure provides an additional 5% improvement over SLIC on the 8 b image set, and that the method lowers the average bit rate to 2.75 b/pixel from the 2.92 b/pixel rate of SLIC for the 10 b images.
11.13 Lower-limit Analysis of Lossless Data Compression
There is, as yet, no practical technique available for the determination of the lowest limit of the bit rate in reversible compression of a given image, although such a number should exist, based upon information theory. It is, therefore, difficult to judge how good a compression algorithm is, other than by comparing its performance with those of other published methods, or with the zeroth-order entropy of the decorrelated data, if available; the latter approach is commonly used by researchers when different test-image sets are involved, and when other compression programs are not available. Either of the two approaches mentioned above can only analyze relatively how well a compression algorithm performs, in comparison with the other techniques available or the zeroth-order entropy. It should be apparent from the results presented in the preceding sections that the usefulness of comparative analysis is limited. For example, SLIC provided the best compression results among the methods
TABLE 11.14
Compression of the JPEG Test Image Set Using Enhanced JBIG (EJBIG), Lossless JPEG (8 b Component Images Only), Direct Gray-transformed JBIG Coding, and CREW Coding (Bits/Component), with the Best Bit Rate Highlighted in Each Case.

Image (cols × rows × comp. × bits)   Best JPEG  Direct JBIG  He0 (PSV=7)  EJBIG  Best CREW
hotel (720 × 576 × 3 × 8)            4.37       4.20         4.26         4.08   4.05
gold (720 × 576 × 3 × 8)             4.33       4.31         4.16         4.13   4.08
bike (2,048 × 2,560 × 4 × 8)         4.33       3.94         4.20         3.80   3.92
woman (2,048 × 2,560 × 4 × 8)        4.84       4.64         4.74         4.37   4.33
cafe (2,048 × 2,560 × 4 × 8)         5.63       5.27         5.60         5.17   5.17
tools (1,524 × 1,200 × 4 × 8)        5.69       5.38         5.60         5.37   5.47
bike3 (781 × 919 × 3 × 8)            5.15       4.58         5.07         4.73   5.11
water (3,072 × 2,048 × 3 × 8)        2.62       1.96         2.50         1.89   1.86
cats (3,072 × 2,048 × 3 × 8)         3.70       2.90         3.64         2.70   2.65
aerial1 (1,024 × 1,024 × 3 × 11)     —          8.96         8.75         8.79   8.71
aerial2 (720 × 1,024 × 3 × 8)        4.93       4.61         5.00         4.44   4.47
cmpnd1 (512 × 768 × 3 × 8)           2.51       1.32         2.24         1.76   2.26
cmpnd2 (1,024 × 1,400 × 3 × 8)       2.50       1.39         2.26         1.72   2.32
finger (512 × 512 × 1 × 8)           5.85       6.49         5.85         5.84   5.84
x ray (2,048 × 1,680 × 1 × 12)       —          6.55         6.26         6.19   6.10
cr (1,744 × 2,048 × 1 × 10)          —          5.61         5.32         5.43   5.38
ct (512 × 512 × 1 × 12)              —          4.66         5.32         4.01   4.08
us (512 × 488 × 1 × 8)               3.04       2.57         3.61         2.57   2.92
mri (256 × 256 × 1 × 11)             —          6.62         6.45         6.33   6.06
faxball (1,024 × 512 × 3 × 8)        1.50       0.67         1.38         0.51   1.15
graphic (2,644 × 3,046 × 3 × 8)      2.81       2.84         2.79         2.34   2.51
chart (1,752 × 2,375 × 3 × 8)        2.23       1.33         2.26         1.34   1.66
chart s (1,688 × 2,347 × 3 × 8)      3.86       2.87         3.97         3.02   3.26
Average (all)                        —          4.07         4.40         3.94   4.06
Average (8 b/comp. only)             3.88       3.40         3.84         3.32   3.50
Note: He0 = zeroth-order entropy of the PSV prediction error; comp. = number of components; cols = number of columns. Reproduced with permission from L. Shen and R.M. Rangayyan, "Lossless compression of continuous-tone images by combined inter-bit-plane decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2): 198–207, 1997. © SPIE and IS&T.
TABLE 11.15
Comparison of Enhanced JBIG (PSV7-F1-JBIG or EJBIG) with JBIG, JPEG (Best Lossless Mode), HINT, and SLIC (error-bits = 5) Using Five 8 b Mammograms (m1 to m4 and m9) and Five 8 b Chest Radiographs (c5 to c8 and c10), in Bits/Pixel [320].

Image (8 b/pixel)      H0     JBIG   JPEG   HINT   SLIC   EJBIG
m1 (4,096 × 1,990)     6.13   1.82   2.14   1.87   1.73   1.62
m2 (4,096 × 1,800)     6.09   1.84   2.18   1.92   1.78   1.66
m3 (3,596 × 1,632)     5.92   2.40   2.37   2.12   2.12   1.93
m4 (3,580 × 1,696)     5.90   1.96   1.89   1.61   1.44   1.34
c5 (3,536 × 3,184)     7.05   1.60   1.92   1.75   1.42   1.41
c6 (3,904 × 3,648)     7.46   1.88   2.15   2.00   1.76   1.70
c7 (3,120 × 3,632)     5.30   0.95   1.60   1.40   1.00   0.95
c8 (3,744 × 3,328)     7.45   1.50   1.89   1.64   1.35   1.40
m9 (4,096 × 2,304)     6.04   1.67   2.12   1.84   1.67   1.54
c10 (3,664 × 3,680)    7.17   1.63   1.97   1.73   1.50   1.44
Average                6.45   1.72   2.02   1.79   1.58   1.50

The lowest bit rate is highlighted in each case. See also Table 11.11. Note:
H0 = zeroth-order entropy.
TABLE 11.16
Comparison of Enhanced JBIG (PSV7-F1-JBIG or EJBIG) with JBIG, 2D-Burg-Huffman (2DBH), HINT, and SLIC (error-bits = 6) Using Five 10 b Mammograms (m1 to m4 and m9) and Five 10 b Chest Radiographs (c5 to c8 and c10), in Bits/Pixel [320].

Image (10 b/pixel)     H0     JBIG   2DBH   HINT   SLIC   EJBIG
m1 (4,096 × 1,990)     7.27   2.89   2.92   2.75   2.73   2.59
m2 (4,096 × 1,800)     7.61   3.19   3.19   3.15   3.08   2.88
m3 (3,596 × 1,632)     6.68   3.13   2.97   2.81   2.81   2.57
m4 (3,580 × 1,696)     7.21   3.20   2.76   2.58   2.57   2.39
c5 (3,536 × 3,184)     8.92   3.33   3.31   3.28   3.00   2.88
c6 (3,904 × 3,648)     9.43   3.87   3.81   3.89   3.59   3.38
c7 (3,120 × 3,632)     7.07   2.30   2.58   2.34   2.27   2.14
c8 (3,744 × 3,328)     9.29   3.25   3.16   3.02   2.89   2.81
m9 (4,096 × 2,304)     7.81   3.18   3.41   3.29   3.17   2.93
c10 (3,664 × 3,680)    9.05   3.35   3.27   3.18   3.07   2.90
Average                8.03   3.17   3.14   3.03   2.92   2.75

The lowest bit rate is highlighted in each case. See also Table 11.13. Note:
H0 = zeroth-order entropy.
tested in Section 11.11; however, it is seen in Section 11.12 that the enhanced JBIG scheme performs better than SLIC.
Even if it were not practical to achieve the lowest possible bit rate in the lossless compression of a given image, it is of interest to estimate an achievable lower-bound bit rate for an image. Information theory indicates that the lossless compression efficiency is bounded by high-order entropy values. However, the accuracy of estimating high-order statistics is limited by the length of the data (the number of samples available), the number of intensity levels, and the order. The highest possible order of entropy that can be estimated with high accuracy is limited due to the finite length of the data available. In spite of this limitation, high-order entropy values can provide a better estimate of the lower-bound bit rate than the commonly used zeroth-order entropy. Shen and Rangayyan [1028, 320] proposed methods for the estimation of high-order entropy and the lower-bound bit rate, which are described in the following paragraphs.
11.13.1 Memoryless entropy
A memoryless source is the simplest form of an information source, in which successive source symbols are statistically independent [1034]. Such a source is completely specified by its source alphabet A = {a0, a1, a2, …, an} and the associated probabilities of occurrence {p(a0), p(a1), p(a2), …, p(an)}. The memoryless entropy H(A), which is also known as the average amount of information per source symbol, is defined as

    H(A) = − Σ_{i=0}^{n} p(ai) log2 p(ai).    (11.198)

The mth-order extension entropy is defined as

    Hm(A) = Hm(a_{i0}, a_{i1}, a_{i2}, …, a_{im})
          = − Σ_{A^m} p(a_{i0}, a_{i1}, a_{i2}, …, a_{im}) log2 p(a_{i0}, a_{i1}, a_{i2}, …, a_{im}),    (11.199)

where p(a_{i0}, a_{i1}, a_{i2}, …, a_{im}) is the probability of a symbol string from the mth-order extension of the memoryless source, and A^m represents the set of all possible strings with m symbols following a_{i0}. For a memoryless source, using the property

    p(a_{i0}, a_{i1}, a_{i2}, …, a_{im}) = p(a_{i0}) p(a_{i1}) p(a_{i2}) ⋯ p(a_{im}),    (11.200)

we get

    H(A) = Hm(A) / (m + 1).    (11.201)
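The extension-entropy relation can be made concrete with a small Python check (a sketch with an invented three-symbol alphabet) that verifies Equation 11.201 for an independent source:

```python
import math
from itertools import product

def entropy(p):
    """Eq. 11.198: H(A) = -sum p(a) log2 p(a)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def extension_entropy(p, m):
    """Entropy of the m-th extension: strings of m + 1 independent
    symbols, whose probability is the product of symbol probabilities
    (Eq. 11.200)."""
    return -sum(math.prod(q) * math.log2(math.prod(q))
                for q in product(p, repeat=m + 1))

p = [0.5, 0.25, 0.25]
H = entropy(p)                 # 1.5 bits/symbol
for m in (1, 2, 3):
    # Eq. 11.201: the per-symbol entropy of the extension equals H(A).
    assert abs(extension_entropy(p, m) / (m + 1) - H) < 1e-9
```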
11.13.2 Markov entropy
A memoryless source model could be restrictive in many applications, due to the fact that successive source symbols can be significantly interdependent, which means that the source has memory. Image sources are such examples, in which there always exists some statistical dependence among neighboring pixels, even after the source symbol stream has been decorrelated. A source possessing dependence or memory as above may be modeled as a Markov source, in which the probability of occurrence of a source symbol a_{i0} depends upon the probabilities of occurrence of a finite number m of the preceding symbols [1034]. The corresponding mth-order Markov entropy H(a_{i0} | a_{i1}, a_{i2}, …, a_{im}) may be computed as

    H(a_{i0} | a_{i1}, a_{i2}, …, a_{im}) = − Σ_{A^m} p(a_{i0}, a_{i1}, a_{i2}, …, a_{im}) log2 p(a_{i0} | a_{i1}, a_{i2}, …, a_{im}),    (11.202)

where p(a_{i0}, a_{i1}, a_{i2}, …, a_{im}) is the probability of a particular state and p(a_{i0} | a_{i1}, a_{i2}, …, a_{im}) is the PDF of a_{i0} conditioned upon the occurrence of the string {a_{i1}, a_{i2}, …, a_{im}}. It may be shown that

    H(a_{i0} | a_{i1}, a_{i2}, …, a_{im}) ≤ H(a_{i0} | a_{i1}, a_{i2}, …, a_{i(m−1)}) ≤ ⋯ ≤ H(a_{i0} | a_{i1}) ≤ H(a_{i0}) = H(A),    (11.203)

where the equality is satisfied if and only if the source symbols are statistically independent.
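The conditional-entropy inequality of Equation 11.203 can be checked numerically. The following Python sketch estimates the mth-order Markov entropy of Equation 11.202 from a symbol sequence (a toy correlated sequence of our own construction, not image data):

```python
import math
from collections import Counter

def markov_entropy(seq, m):
    """Estimate Eq. 11.202 from data: H(x_t | previous m symbols)."""
    # Joint counts of (m+1)-symbol states, and counts of m-symbol contexts.
    joint = Counter(tuple(seq[i - m:i + 1]) for i in range(m, len(seq)))
    cond = Counter(tuple(seq[i - m:i]) for i in range(m, len(seq)))
    n = len(seq) - m
    return -sum((c / n) * math.log2(c / cond[s[:-1]])
                for s, c in joint.items())

# A strongly correlated binary source: long runs of 0s and 1s.
seq = ([0] * 50 + [1] * 50) * 20
h0 = markov_entropy(seq, 0)    # marginal entropy, close to 1 bit/symbol
h1 = markov_entropy(seq, 1)    # much lower: the next symbol is nearly
                               # determined by the previous one
assert h1 <= h0                # consistent with Eq. 11.203
```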
11.13.3 Estimation of the true source entropy
Although a given image or its prediction-error image could be modeled as a Markov source, the order of the model and the conditional PDFs will usually be unknown. However, it is seen from Equation 11.203 that the higher the order of the conditional probability, the lower is the resulting entropy, which is closer to the true source entropy. In order to maintain a reasonable level of accuracy in the estimation of the conditional probability, larger numbers of data strings are needed for higher-order functions; the estimation error ε is given by [1035]

    ε = (2^(mK) − 1) / (2 N ln 2)    (11.204)

for an mth-order Markov source model with 2^K intensity levels and N data samples. Thus, for a specific image with known K and N, the highest order of conditional Markov entropy that could be calculated within a practical estimation error ε of the conditional probability is bounded by

    max m = ⌊ log2(2 N ε ln 2 + 1) / K ⌋,    (11.205)
where ⌊x⌋ is the floor function that returns the largest integer less than or equal to the argument x. Therefore, given an error limit ε, the only way to derive a higher-order parameter is to decrease K by splitting data bytes, because the data length N cannot be extended.
For example, the highest orders of Markov entropy that may be calculated with a probability estimation error of less than 0.05 for a 512 × 512, 8 b/pixel image are 1, 3, 7, and 14, for the original data (one 8 b data part, K = 8), split data with two 4 b data parts (K = 4), split data with four 2 b data parts (K = 2), and split data with eight 1 b data parts (K = 1), respectively.
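The bound max m = ⌊log2(2Nε ln 2 + 1)/K⌋ can be evaluated directly; the Python sketch below reproduces the orders 1, 3, 7, and 14 quoted above for the 512 × 512, 8 b/pixel case:

```python
import math

def max_order(n_samples, k_bits, eps):
    """Largest Markov order estimable within probability error eps,
    for data of length n_samples split into k_bits-wide parts."""
    return int(math.floor(
        math.log2(2 * n_samples * eps * math.log(2) + 1) / k_bits))

N = 512 * 512                  # number of pixels in the Airplane image
orders = [max_order(N, k, 0.05) for k in (8, 4, 2, 1)]
assert orders == [1, 3, 7, 14]
```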
Figure 11.40 shows the Markov entropy values up to the maximum possible order max m with ε = 0.05 for the four forms of splitting, for the test image Airplane. It is evident that the entropy values become larger with splitting into more data parts, due to the high correlation present among the data parts, although the maximum order can go higher after splitting, and the entropy values decrease with increasing order for each form of splitting; see also Table 11.17. This indicates that decorrelation of the data bits is needed before splitting in order to get a good estimate of the entropy, because the source entropy is not changed by any reversible transformation.
From the results of the enhanced JBIG algorithm, it is seen that the Gray,
F1, and F2 transformations provide good decorrelation among bit planes
after PSV-based prediction. This is demonstrated with four plots of the
binary, Gray, F1, and F2 representations of PSV=7 prediction error data of
the Airplane image in Figure 11.41, as well as in Table 11.17. The binary
representation is seen to lead to poor decorrelation among the bits of the
prediction error data: it actually makes the maximum-order entropy increase
to 6.95 b/pixel when the error data are split into eight 1 b data parts (the
zeroth-order entropies of the original error data and the original image are
4.18 and 6.49 b/pixel, respectively). It is also seen that the highest-order
entropy values that could be estimated within the error limit specified
increase with increasing number of data parts when using the binary
representation. On the other hand, with the Gray, F1, or F2 transformation,
the situation is different. It is seen that the highest-order entropy values
with splitting become smaller than the entropy of the original data when one
of the three transformations is used, which shows their efficient decorrelation
effect among the bit planes of the prediction error image. Finally, it is seen
that the F1 transform is the best among the four representation schemes,
with the lowest estimated entropy value of 3.46 b/pixel achieved when the
prediction error is split into four 2 b data parts.
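Of the representation schemes compared, the Gray code is a standard
mapping (the F1 and F2 transformations are specific to the cited paper and
are not reproduced here). A sketch of the conversion follows: successive
integers differ in exactly one bit under the Gray code, so the bit planes
flip less often across smoothly varying data.

```python
def binary_to_gray(b):
    # Reflected binary (Gray) code of a nonnegative integer.
    return b ^ (b >> 1)

def gray_to_binary(g):
    # Inverse mapping: fold the running XOR back down.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Adjacent Gray codes differ in exactly one bit, and the mapping is
# reversible (a requirement for any lossless decorrelation step).
codes = [binary_to_gray(v) for v in range(256)]
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
assert all(gray_to_binary(binary_to_gray(v)) == v for v in range(256))
```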
The lowest estimated entropy value is not guaranteed to occur when the
prediction error is split into four 2 b data parts. The F1 transform was ob-
served to always provide the best or near-best performance. Table 11.18 lists
the lowest estimated Markov entropy values with PSV=7 prediction for the
test-image set used, together with their bit rates obtained with the enhanced
JBIG scheme (PSV7-F1-JBIG) and the zeroth-order entropies of the PSV=7
prediction error images. It is seen that the higher-order entropy values pro-
[Plot of Markov entropy (bits/pixel) versus order, with curves for one 8-bit
data part, two 4-bit data parts, four 2-bit data parts, and eight 1-bit data
parts.]
FIGURE 11.40
Markov entropy values up to the maximum order possible with error limit
ε = 0.05 for four forms of splitting, for the 512 × 512, 8 b/pixel Airplane
image (see also Table 11.17). Note: Order = 0 indicates memoryless entropy.
Reproduced with permission from L. Shen and R.M. Rangayyan, "Lossless
compression of continuous-tone images by combined inter-bit-plane
decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2):
198–207, 1997. © SPIE and IS&T.

TABLE 11.17
Estimated Values of the Highest Possible Order of Markov Entropy
(b/pixel) for the Airplane Image.

Data         Code/       One part   Two parts   Four parts   Eight parts
             transform   8 b        4 b/part    2 b/part     1 b/part
Original     Binary      4.06       4.21        4.52         5.10
Airplane     Gray        4.06       4.00        4.02         4.21
image        F1          4.06       4.26        4.43         4.93
             F2          4.06       4.21        4.50         4.97
PSV=7        Binary      3.75       4.21        5.04         6.95
prediction   Gray        3.75       3.63        3.58         3.68
error of     F1          3.75       3.61        3.46         3.60
Airplane     F2          3.75       3.63        3.54         3.61

Values are shown with and without prediction, combined with four different
code representation (transformation) schemes, and with error limit ε = 0.05.
Reproduced with permission from L. Shen and R.M. Rangayyan, "Lossless
compression of continuous-tone images by combined inter-bit-plane
decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2):
198–207, 1997. © SPIE and IS&T.
[Figure 11.41 (a) and (b): plots of Markov entropy (bits/pixel) versus order,
with curves for one 8-bit, two 4-bit, four 2-bit, and eight 1-bit data parts.]
[Figure 11.41 (c): plot of Markov entropy (bits/pixel) versus order, with
curves for one 8-bit, two 4-bit, four 2-bit, and eight 1-bit data parts.]
vide lower estimates of the bit-rate limit than the zeroth-order entropies.
The average entropy value decreases from 4.88 to 4.52 b/pixel with
higher-order entropy estimation using the binary representation of the error
data; by using the F1 transform instead, the average Markov entropy value
is further reduced to 4.15 b/pixel.

The disadvantage of using the zeroth-order entropy to measure the
performance of a data compression algorithm is clearly shown in Table 11.18:
the enhanced JBIG (PSV7-F1-JBIG) coding scheme achieves an average bit
rate of 4.70 b/pixel, compared with the average zeroth-order entropy of
4.88 b/pixel for the prediction error images. Considering the higher-order
entropy values shown, it appears that the compression efficiency of the
enhanced JBIG technique could be further improved. An important
application of high-order entropy estimation could be to provide a
potentially achievable lower bound on the bit rate for an original or
decorrelated image, if the high-order entropy is estimated with adequate
accuracy.
[Figure 11.41 (d): plot of Markov entropy (bits/pixel) versus order, with
curves for one 8-bit, two 4-bit, four 2-bit, and eight 1-bit data parts.]
FIGURE 11.41
Plots of the Markov entropy values up to the maximum order possible with
error limit ε = 0.05, with four forms of splitting, for PSV=7 prediction error
of the 512 × 512, 8 b/pixel Airplane image, with (a) binary representation;
(b) Gray representation; (c) F1 transformation; and (d) F2 transformation.
Note: Order = 0 indicates memoryless entropy. Reproduced with permission
from L. Shen and R.M. Rangayyan, "Lossless compression of continuous-tone
images by combined inter-bit-plane decorrelation and JBIG coding", Journal
of Electronic Imaging, 6(2): 198–207, 1997. © SPIE and IS&T.

TABLE 11.18
Lowest Estimated Markov Entropy Values with PSV=7 Prediction
for Eight 8 b Test Images.

                                    JPEG with PSV = 7
8 b image                          Bit rate    Lowest entropy
(columns × rows)            He0    (EJBIG)     F1       Binary
Airplane (512 × 512)        4.18   3.83        3.46     3.75
Baboon (512 × 512)          6.06   6.04        5.39     5.77
Cameraman (256 × 256)       4.90   4.67        3.79     4.28
Lenna-256 (256 × 256)       5.15   4.94        4.16     4.54
Lenna-512 (512 × 512)       4.65   4.45        4.09     4.41
Peppers (512 × 512)         4.66   4.54        4.10     4.43
Sailboat (512 × 512)        5.14   5.05        4.51     4.91
Tiffany (512 × 512)         4.27   4.11        3.74     4.05
Average                     4.88   4.70        4.15     4.52

Also shown are bit rates via enhanced JBIG bit-plane coding of
F1-transformed PSV=7 prediction error (PSV7-F1-JBIG or EJBIG) and the
zeroth-order entropies (He0) of the PSV=7 prediction error images (in
b/pixel). Reproduced with permission from L. Shen and R.M. Rangayyan,
"Lossless compression of continuous-tone images by combined inter-bit-plane
decorrelation and JBIG coding", Journal of Electronic Imaging, 6(2):
198–207, 1997. © SPIE and IS&T.

11.14 Application: Teleradiology


Teleradiology is commonly defined as the practice of radiology at a distance
[338, 1036, 1037, 1038, 1039, 1040, 1041]. Teleradiology offers a technological
approach to the problem of eliminating the delay in securing the consultation
of a radiologist for patients in rural and remote areas. The timely availability
of radiological diagnosis via telecommunication could potentially reduce
the morbidity, mortality, and costs of transportation to tertiary health-care
centers of patients in remotely situated areas, and in certain situations, in
developing countries as well.
In the military environment, a study in 1983 [1042] indicated that over 65%
of the medical facilities with radiographic equipment had no radiologists
assigned to them, and an additional 15% had only one radiologist. In such
cases, teleradiology could be a vehicle for redistributing the image-reading
workload from under-staffed sites to more adequately staffed central
locations [1043]. According to a study conducted in 1989, the province of
Alberta, Canada, had a total of 130 health-care centers with radiological
imaging facilities, out of which only 30 had resident radiologists [1044].
Sixty-one of the other centers depended upon visiting radiologists. The
remaining 39 centers used to send their radiographs to other centers for
interpretation, with a delay of 3 – 14 days in receiving the results [1044].
The situation was comparable in the neighboring provinces of Saskatchewan
and Manitoba, and it was observed that the three provinces could benefit
significantly from teleradiology. Even in the case of areas served by contract
radiologists, teleradiology can permit evaluation and consultation by other
radiologists at tertiary health-care centers in emergency situations as well as
in complicated cases.
Early attempts at teleradiology systems [1045, 1046] consisted of analog
transmission of slow-scan TV signals over existing telephone lines,
ultra-high-frequency (UHF) radio links, and other such analog channels
[1037]. Analog transmission and the concomitant slow transmission rates
were satisfactory for low-resolution images such as nuclear medicine images.
However, the transmission times were prohibitively high for high-resolution
images, such as chest radiographs. Furthermore, the quality of the images
received via analog transmission is a function of the distance, which could
result in an unpredictable performance of radiologists with the received
images. Thus, the natural progression of teleradiology systems was toward
digital transmission. The initial choice of the transmission medium was the
ordinary telephone line, operating at 300 – 1 200 bps (bits per second).
Several commercial teleradiology systems were based upon the use of
telephone lines for data transmission. Improvements in modem technology
allowing transmission speeds of up to 19.2 Kbps over standard telephone
lines, and the establishment of a number of 56 Kbps lines for commercial use
by telephone companies, made this medium viable for low-resolution images
[1047].
The major reason for users' reluctance in accepting early teleradiology
systems was the inability to meet the resolution of the original film. Spatial
resolution of even 2 048 × 2 048 pixels was found to be inadequate to capture
the sub-millimeter features found in chest radiographs and mammograms
[1048]. It was recommended that spatial resolution of the order of
4 096 × 4 096 pixels, with at least 1 024 shades of gray, would be required to
capture accurately the diagnostic information on radiographic images of the
chest and breast. This demand led to the development of high-resolution
laser digitizers capable of digitizing X-ray films to images of the order of
4 096 × 4 096 pixels, with 12 b/pixel, by the mid 1990s. Imaging equipment
capable of direct digital acquisition of radiographic images at the same
resolution was also developed in the late 1990s. Teleradiology system
designers were then faced with the problem of dealing with the immense
amount of data involved in such high-resolution digital images. The
transmission of such large amounts of data over ordinary telephone lines
involved large delays, which could be overcome to some extent by using
parallel lines for increased data transfer rates [1037]. The use of satellite
channels was also an option to speed up image data transmission [1049],
but problems associated with image data management and archival hindered
the anticipated widespread acceptance of high-resolution teleradiology
systems. Such difficulties motivated advanced research into image data
compression and encoding techniques.
The development of PACS and teleradiology systems share some common
historical ground. Although delayed beyond initial predictions, both PACS
and teleradiology established their presence and value in clinical practice by
the late 1990s. The following paragraphs provide a historical review of
teleradiology [338, 1041].

11.14.1 Analog teleradiology


The first instance of transmitting pictorial information for medical diagnosis
dates back to 1950, when Gershon-Cohen and Cooley used telephone lines,
and a facsimile system adapted to convert medical images into video signals,
for transmitting images between two hospitals 45 km apart in Philadelphia,
PA [1050]. In a pioneering project in 1959, Jutras [1051] conducted what is
perhaps the first teleradiology trial, by interlinking two hospitals, 8 km
apart, in Montreal, Québec, Canada, using a coaxial cable to transmit
telefluoroscopy examinations. The potential of teleradiology in the provision
of the services of a radiologist to remotely situated areas, and in the
redistribution of radiologists' workload from under-staffed centers to more
adequately staffed centers, was immediately recognized, and a number of
clinical evaluation projects were conducted [1037, 1045, 1046, 1052, 1053,
1054, 1055, 1056, 1057]. Most of the early attempts consisted of analog
transmission of medical images via standard telephone lines, dedicated
coaxial cables, UHF radio, microwave, and satellite channels, with display
on TV monitors at the receiving terminal. James et al. [1057] give a review
of the results of the early experiments. Andrus and
Bird [1036] describe the concept of a teleradiology system in which the
radiologist, stationed at a medical center, controls a video camera to zoom
in on selected areas of interest of an image at another site located far away,
and observes the results in real time on a TV screen. Steckel [1058]
conducted experiments with a system using an 875-line closed-circuit TV
system for transmitting radiographic images within a hospital for
educational purposes, and concluded that the system's utility far outweighed
disadvantages such as the inability to view a sequence of images belonging
to a single study.
In 1972, Webber and Corbus [1045] used existing telephone lines and
slow-scan TV for transmitting radiographs and nuclear medicine images.
The resolution achieved was satisfactory for nuclear medicine images, but
both the spatial resolution and the gray-scale dynamic range (radiometric
resolution) were found to be inadequate for radiographs. A similar
experiment using telephone lines and slow-scan TV by Jelasco et al. [1046]
resulted in 80% correct interpretation of radiographs. Other experiments
with slow-scan TV over telephone lines [1057] demonstrated the inadequacy
of this medium, and also that the diagnostic accuracy with such systems
varied with the nature of the images.
Webber et al. [1054] used UHF radio transmission, in 1973, for transmitting
nuclear medicine images and radiographs. While the system worked
satisfactorily for nuclear medicine images, evaluation of chest X-ray images
needed zoom and contrast manipulation of the TV monitor. Murphy et
al. [1053] used a microwave link for the transmission of images of chest
radiographs acquired with a remotely controlled video camera, over a
distance of about 4 km, and indicated that it would be an acceptable
method for providing health care to people in remote areas.
Andrus et al. [1052] transmitted X-ray images of the abdomen, chest, bone,
and skull over a 45 km round loop, using a 4 MHz, 512-line TV channel
including three repeater stations. The TV camera was remotely controlled
using push buttons and a joystick to vary the zoom, aperture, focus, and
direction of the camera. It was concluded that the TV interpretations were
of acceptable accuracy. Such real-time operation calls for special skills on
the part of the radiologist, requires coordination between the operator at the
image acquisition site and the radiologist at the receiving center, and could
take up a considerable amount of the radiologist's valuable time. Moreover,
practical microwave links exist only between and within major cities, and
cannot serve the communication needs of teleradiology terminals in rural
and remote areas. In addition, the operating costs over the duration of
interactive manipulations could reach high levels, and render such a scheme
uneconomical.
In 1973, Lester et al. [1055] used a satellite (ATS-1) for analog transmission
of video-taped radiologic information, and concluded that satisfactory
radiographic transmission is possible "if a satisfactory sensor of radiographic
images were constructed." In 1979, Carey et al. [1056] reported on the results
of an analog teleradiology experiment using the Hermes spacecraft. They
reported the effectiveness of TV fluoroscopy to be 90% of that with conventional
procedures. Page et al. [1059] used a two-way analog TV network with the
Canadian satellite ANIK-B to transmit radiographic images from northern
Québec to Montréal, and reported an initial accuracy in TV interpretation
of 81% with respect to film reading. The accuracy rose to 94% after a
3-month training of the participant radiologists in the use of the TV system.
The noise associated with analog transmission, the low resolution of the TV
monitors used, and the requirement on the part of the radiologists to
participate in real-time control of the image-acquisition cameras made the
concept of TV transmission of radiographic images unacceptable.
Furthermore, the noise associated with analog transmission is dependent
upon the distance. Not surprisingly, James et al. [1057] reported that their
teleradiology system, transmitting emergency department radiographs via a
satellite channel from a local TV studio, was unacceptable due to a decrease
in the accuracy of image interpretation to about 86% with respect to that
with standard protocols.

11.14.2 Digital teleradiology


Given the advantages of digital communication over analog methods [1060],
the natural progression of teleradiology was toward the use of digital data
transmission techniques. The advent of a number of digital medical imaging
modalities facilitated this trend [1061, 39, 1062]. Digital imaging also
allowed for image processing, enhancement, contrast scaling, and flexible
manipulation of images on the display monitors after acquisition. Many of
the initial attempts at digital teleradiology [1047, 1063, 1064, 1065, 1066,
1067] were based upon microcomputers and used low-resolution digitization,
display, and printers. The resolution was of the order of 256 × 256 to
512 × 512 pixels with 256 shades of gray, mostly because of the unavailability
of high-resolution equipment. Gayler et al. [1063] described a laboratory
evaluation of such a microcomputer-based teleradiology system, based upon
a 512 × 512 × 8 b format for image acquisition and display, and evaluated
radiologists' performance with routine radiographs. They found the
diagnostic performance to be significantly worse than that using the original
film radiographs. Nevertheless, they concluded that microcomputer-based
teleradiology systems "warrant further evaluation in a clinical environment."
In 1982, Rasmussen et al. [1067] compared the performance of radiologists
with images transmitted by analog and digital means and light-box viewing
of the original films. The resolution of digitization used was 512 × 256 pixels
with 6 b/pixel. The digital images were converted to analog signals for
analog transmission. It was concluded that the resolution used would
provide satisfactory radiographic images for gross pathological disorders, but
that subtle features would require higher resolution.
Gitlin [1065], Curtis et al. [1066], and Skinner et al. [1068] followed the
laboratory evaluation of Gayler et al. [1063] with field trials using standard
telephone lines at 9 600 bps for the transmission of 512 × 512 × 8 b images
from five medical-care facilities to a central hospital in Maryland. A relative
accuracy of 97% with video-image readings was reported [1066], as compared
to standard film interpretation. This was a substantially higher accuracy
than that obtained in the preceding laboratory study [1063]; the
improvement was attributed to the larger percentage of normal images used
in the field trial, and to the greater experience of the analysts in clinical
radiology.
In a field trial in 1984, Gitlin [1065] used a 1 024 × 1 024 matrix of pixels,
9 600 bps telephone lines, and lossy data compression to bring down the
transmission times. A relative accuracy with video interpretation of 87%,
with respect to standard film interpretation, was observed. The relative
accuracy was observed to be dependent upon the type of data compression
used, among other factors.
Gordon et al. [1069] presented an analysis of a number of scenarios and
tradeoffs for practical implementation of digital teleradiology. In related
papers, Rangayyan and Gordon [1070] and Rangaraj and Gordon [1071]
discussed the potential for providing advanced imaging services such as CT
through teleradiology.
In 1987, DiSantis et al. [1072] digitized excretory urographs to
1 024 × 1 024 matrices, and transmitted the images over standard telephone
lines, after data compression, to a receiving unit approximately 3 km away.
A panel of three radiologists interpreted the images on video monitors, and
the results were compared with the original film readings performed about a
week earlier. An agreement of 93% was found between the film and video
readings in the diagnosis of obstructions. However, only 64% of urethral
calculi detected with the original radiographs were also detected with the
video images. This result demonstrated clearly that, whereas a resolution of
1 024 × 1 024 pixels could be adequate for certain types of diagnosis, higher
resolution is required for capturing all of the diagnostic information that
could be present on the original film.
In 1987, Kagetsu et al. [1064] reported on the performance of a commercially
available teleradiology system using 512 × 512 × 8 b images and transmission
over 9 600 bps standard telephone lines after 2.5:1 data compression.
Experiments were conducted with a wide variety of radiographs over a
four-month period. An overall relative accuracy of 89% was reported
between the received images on video display and the original films. Based
on these results, Kagetsu et al. recommended a review of the original films
at some later date because of the superior spatial and contrast resolution of
film.
Several commercial systems were released for digital teleradiology in the
late 1980s. Although such systems were adequate for handling low-resolution
images in CT, MRI, and nuclear medicine, they were not suitable for
handling large-format images such as chest radiographs and mammograms.
Experiments with such systems demonstrated the inadequacy of
low-resolution digital teleradiology systems as an alternative to the physical
transportation of the films or the patients to centers with radiological
diagnostic facilities. Although higher resolution was required in the digitized
images, the substantial increase in the related volume of data and the
associated difficulties remained
a serious concern. Furthermore, the use of lossy data compression schemes to
remain within the data-rate limitation of telephone lines and other low-speed
communication channels was observed to be unacceptable.

11.14.3 High-resolution digital teleradiology


The development of high-resolution image digitizers and display equipment,
and the routine utilization of high-data-rate communication media, paved
the way for high-resolution digital teleradiology. In 1989, Carey et al. [1049]
reported on the performance of the DTR-2000 teleradiology system from
DuPont, consisting of a 1 684 × 2 048-pixel laser digitizer with 4 096
quantization levels, a T1 satellite transmission channel (at 1.544 Mbps), and
a DuPont laser film recorder with 256 possible shades of gray. A nonlinear
mapping was performed from the original 4 096 quantization levels to 256
levels on the film copy to make use of the fact that the eye is more sensitive
to contrast variations at lower density. With this mapping, at the lower end
of the gray scale, small differences in gray values correspond to larger
differences in optical densities than at the higher end of the gray scale.
Thus, the overall optical density range of the film is much larger than can be
obtained by linear mapping. Carey et al. [1049] transmitted radiographic
and ultrasonographic images over the system from Seaforth to London, in
Ontario, Canada, and reported a relative accuracy of 98% in reading the
laser-sensitive film as compared to the original film. It was concluded that
the laser-sensitive film "clearly duplicated the original film findings."
However, they also reported "contouring" on the laser-sensitive film, which
might have been due to the nonlinear mapping of the 4 096 original gray
levels to 256 levels on the film. Certain portions of the original gray scale
with rapidly changing gray levels could have been mapped into the same
optical density on the film, giving rise to contouring artifacts.
Barnes et al. [1047] suggested that the challenge of integrating the increasing
number of medical imaging technologies could be met by networked
multimodality imaging workstations. Cox et al. [1073] compared images
digitized to 2 048 × 2 048 × 12 b and displayed on monitors with
2 560 × 2 048 × 8 b pixels, digital laser film prints, and conventional film.
They reported significant differences in the performance of the three display
formats: digital hard copy performed as well as or better than conventional
film, whereas the interactive display failed to match the performance of the
other two. They suggested that although the differences could be eliminated
by training the personnel in reading from displays and by using image
enhancement techniques, it was premature to conclude either way.
In 1990, Batnitzky et al. [1074] conducted an assessment of the
then-available technologies for film digitization, display, generation of hard
copy, and data communication for application in teleradiology systems.
They concluded that 2 048 × 2 048 × 12 b laser digitizers, displays with scan
lines of 1 024 – 2 048 and 8 – 12 b/pixel, hard copiers that interpolate 2 048 × 2 048
matrices to 4 096 × 4 096 matrices, and the merger of computer and
communication technologies resulting in flexible wide-area networks, had
paved the way for the acceptance of "final-interpretation teleradiology,"
completely eliminating the need to go back to the original films. Gillespy et
al. [1075] described the installation of a DuPont Clinical Review System,
consisting of a laser film digitizer with 1 680 × 2 048 × 12 b pixels, and a
1 024 × 840 × 12 b display unit, and reported that "clinicians were generally
satisfied with the unit." Several studies on the contrast and resolution of
high-resolution digitizers [1076, 1077, 1078] demonstrated that the
resolution of the original film was maintained in the digitized images.
Several systems are now available for digital teleradiology, including
high-resolution laser digitizers that can provide images of the order of
4 000 × 5 000 × 12 b pixels with a spatial resolution of 50 μm or better;
high-luminance monitors that can display up to 2 560 × 2 048 pixels at
12 b/pixel and noninterlaced refresh rates of 70 – 80 fps; and laser-film
recorders with a spatial resolution of 50 μm that can print images of size
4 000 × 5 000 × 12 b pixels. Satellite, cable, or fiber-optic transmission
equipment and channels may be leased with transmission rates of several
Mbps. However, the large amount of data related to high-resolution images
can create huge demands in data transmission and archival capacity.
Lossless data compression techniques can bring down the amount of data,
and have a significant impact on the practical implementation of
teleradiology and related technologies.
The introduction of data compression, encoding, and decoding in digital
teleradiology systems raises questions on the overall throughput of the
system in transmission and reception, storage and retrieval of image data,
patient confidentiality, and information security. The compression of image
data removes the inherent redundancy in images, and makes the data more
sensitive to errors [1079]. In dedicated communication links, appropriate
error control should be provided for detecting and correcting such errors.
In the case of packet-switched communication links, the removal of
redundancy by data compression could result in increased retransmission
overhead. However, with sophisticated digital communication links
operating at typical bit-error rates of 1 in 10^9, and channel utilization
(throughput) efficiency of about 97% using high-level packet-switched
protocols [1080], the advantages of data compression far outweigh the
overheads due to the reasons mentioned above.

High-resolution digital teleradiology is now feasible without any sacri ce in


image quality, and can serve as an alternative to transporting patients or lms.
Distance should no longer be a limitation in providing reliable diagnostic
service by city-based expert radiologists to patients in remote or rural areas.

11.15 Remarks
Lossless data compression is desirable in medical image archival and
transmission. In this chapter, we studied several lossless data compression
techniques. A lossless compression scheme generally consists of two steps:
decorrelation and encoding. The success of a lossless compression method is
mainly based upon the efficiency of the decorrelation procedure used. In
practice, a decorrelation procedure could include several cascaded
decorrelation blocks, each of which could accomplish a different type of
decorrelation and facilitate further decorrelation by the subsequent blocks.
Some of the methods described in this chapter illustrate creative ways of
combining multiple decorrelators with different characteristics for achieving
better compression efficiency.
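As a toy illustration of the decorrelate-then-encode structure (a generic
sketch, not any particular scheme from this chapter), the code below applies
the simple horizontal predictor f~(m, n) = f(m, n-1) to a smooth synthetic
image and compares zeroth-order entropies: the prediction residual is far
more compressible than the raw data, which is what a subsequent encoder
exploits.

```python
from collections import Counter
from math import log2

def entropy(values):
    # Zeroth-order entropy (bits/sample) from relative frequencies.
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Smooth synthetic "image": a slow diagonal ramp.
rows, cols = 64, 64
image = [[(r + c) % 256 for c in range(cols)] for r in range(rows)]

# Decorrelation step: horizontal prediction f~(m, n) = f(m, n - 1);
# the first pixel of each row is passed through unchanged.
residual = [row[0] if c == 0 else row[c] - row[c - 1]
            for row in image for c in range(cols)]
flat = [v for row in image for v in row]

print(entropy(flat), entropy(residual))  # residual entropy is far lower
```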
Several information-theoretic concepts and criteria as applicable to data
compression were also discussed in this chapter. A practical method for the
estimation of high-order entropy was presented, which could aid in the
lower-limit analysis of lossless data compression. High-order entropy
estimation could aid in the design, analysis, and evaluation of cascaded
decorrelators.

A historical review of selected works in the development of PACS and
teleradiology systems was presented in the concluding section, in order to
demonstrate the need for image compression and data transmission in a
practical medical application. PACS, teleradiology, and telemedicine are
now well-established areas that are providing advanced technology for
improved health care [1039, 1040, 1081, 1082, 1083].

11.16 Study Questions and Problems


Selected data files related to some of the problems and exercises are
available at the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. The probabilities of occurrence of eight symbols are given as
(0.1, 0.04, 0.3, 0.15, 0.03, 0.2, 0.15, 0.03).
Derive the Huffman code for the source.
2. For a 4-symbol source, derive all possible sets of the Huffman code. (The
exact PDF of the source is not relevant.)
Create a few strings of symbols and generate the corresponding Huffman
codes. Verify that the result satisfies the basic requirements of a code,
including unique and instantaneous decodability.
3. For the 4 × 4 image shown below, design a Huffman coding scheme. Show
all steps of your design.
Compute the entropy of the image and the average bit rates using direct
binary coding and Huffman coding.

    [ 1 0 2 2
      0 2 1 1
      3 3 2 2        (11.206)
      2 2 2 2 ]
4. Discuss the similarities and differences between the Karhunen–Loève,
discrete Fourier, and Walsh–Hadamard transforms. Discuss the particular
aspects of each transform that are of importance in transform-domain
coding for image data compression.
5. For the 5 × 5 image shown below, compute the bit rate using
(a) direct binary coding;
(b) horizontal run-length coding; and
(c) predictive coding (or DPCM) using the model f~(m, n) = f(m, n-1).
Show and explain all steps. State your assumptions, if any, and explain your
procedures.

    [ 121 125 119 121 121
      121 119 125 125 125
      126 126 126 126 126        (11.207)
      130 135 135 135 135
      129 129 129 129 129 ]
6. For the 3 x 3 image shown below, prepare the bit planes using the direct
binary and Gray codes. Examine the bit planes for the application of
run-length coding.
Which code can provide better compression? Explain your observations and
results.

    | 0 2 2 |
    | 2 1 1 |     (11.208)
    | 3 2 2 |
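The binary-to-Gray conversion needed here is g = b XOR (b >> 1); a Python sketch that prepares both sets of bit planes for the image of Equation 11.208:

```python
# the 3 x 3 image of Equation 11.208, row by row
image = [0, 2, 2,
         2, 1, 1,
         3, 2, 2]

def bit_plane(values, b):
    """Extract bit b (0 = least significant) from each value."""
    return [(v >> b) & 1 for v in values]

gray = [v ^ (v >> 1) for v in image]    # binary-to-Gray conversion

for b in (1, 0):
    print("binary plane", b, bit_plane(image, b))
    print("gray   plane", b, bit_plane(gray, b))
```

Comparing the run structure of the corresponding planes indicates which code favors run-length coding for a given image.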
11.17 Laboratory Exercises and Projects
1. Write a program (in C, C++, or MATLAB) to compute the histogram and
the zeroth-order entropy of a given image. Apply the program to a few images
in your collection. Study the nature of the histograms and relate their
characteristics as well as entropy to the visual features present in the
corresponding images.
2. Write a program to compute the zeroth-order and first-order entropy of an
image considering pairs of gray-level values. Apply the program to a few
images in your collection and analyze the trends in the zeroth-order and
first-order entropy values.
What are the considerations, complexities, and limitations involved in
computing entropy of higher orders?
3. For the image in the file RajREye.dat with 3 b/pixel, create bit planes using
(a) the binary code, and (b) the Gray code. Compute the entropy of each bit
plane. Compute the average entropy over all of the bit planes for each code.
What is the expected trend?
Do your results meet your expectations? Explain.
4. Develop a program to compute the DFT of an image. Write steps to compute
the energy contained in concentric circles or squares of several sizes spanning
the full spectrum of the image and to plot the results.
Apply the program to a few images in your collection. Relate the spectral
energy distribution to the visual characteristics of the corresponding images.
Discuss the relevance of your findings in data compression.
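One way to organize this exercise is to shift the DC term to the center of the spectrum and accumulate the power within circles of increasing radius. A Python sketch using NumPy, with a random array standing in for a test image (replace it with an image from your collection):

```python
import numpy as np

rng = np.random.default_rng(7)
image = rng.random((64, 64))             # stand-in for an image from your collection

F = np.fft.fftshift(np.fft.fft2(image))  # DFT, with the DC term moved to the center
power = np.abs(F) ** 2

cy, cx = power.shape[0] // 2, power.shape[1] // 2
y, x = np.indices(power.shape)
r = np.hypot(y - cy, x - cx)             # radial distance from the DC term

radii = (4, 8, 16, 32)
fracs = [power[r <= radius].sum() / power.sum() for radius in radii]
for radius, frac in zip(radii, fracs):
    print(f"radius {radius:2d}: {100 * frac:5.1f} % of spectral energy")
```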
5. Develop a program to compute the error of prediction based upon a few simple
predictors, such as
(a) f~(m, n) = f(m - 1, n),
(b) f~(m, n) = f(m, n - 1),
(c) f~(m, n) = [f(m - 1, n) + f(m, n - 1) + f(m - 1, n - 1)]/3.
Derive the histograms and the entropies of the original image and the error of
prediction for a few images with each of the predictors listed above. Evaluate
the results and comment upon the relevance of your findings in image coding
and data compression.
12
Pattern Classification and Diagnostic Decision
The final purpose of biomedical image analysis is to classify a given image, or
the features that have been detected in the image, into one of a few known
categories. In medical applications, a further goal is to arrive at a diagnostic
decision regarding the condition of the patient. A physician or medical
specialist may achieve this goal via visual analysis of the image and data
presented: comparative analysis of the given image with others of known
diagnoses, or the application of established protocols and sets of rules,
assists in such a decision-making process. Images taken earlier of the same
patient may also be used, when available, for comparative or differential
analysis. Some measurements may also be made from the given image to assist in
the analysis. The basic knowledge, clinical experience, expertise, and
intuition of the physician play significant roles in this process.
When image analysis is performed via the application of computer algorithms,
the typical result is the extraction of a number of numerical features. When
the numerical features relate directly to measurements of organs or features
represented by the image, such as an estimate of the size of the heart or the
volume of a tumor, the clinical specialist may be able to use the features
directly in his or her diagnostic logic. However, when parameters such as
measures of texture and shape complexity are derived, a human analyst is not
likely to be able to analyze or comprehend the features. Furthermore, as the
number of the computed features increases, the associated diagnostic logic may
become too complicated and unwieldy for human analysis. Computer methods would
then be desirable to perform the classification and decision process.
At the outset, it should be borne in mind that a biomedical image forms but
one piece of information in arriving at a diagnosis: the classification of a
given image into one of many categories may assist in the diagnostic
procedure, but will almost never be the only factor. Regardless, pattern
classification based upon image analysis is indeed an important aspect of
biomedical image analysis, and forms the theme of the present chapter.
Remaining within the realm of CAD as introduced in Figure 1.33 and
Section 1.11, it would be preferable to design methods so as to aid a medical
specialist in arriving at a diagnosis rather than to provide a decision.
A generic problem statement for pattern classification may be expressed as
follows: A number of measures and features have been derived from a biomedical
image. Develop methods to classify the image into one of a few specified
categories. Investigate the relevance of the features and the classification
methods in arriving at a diagnostic decision about the patient.
Observe that the features mentioned above may have been derived manually or by
computer methods. Recognize the distinction between classifying the given
image and arriving at a diagnosis regarding the patient: the connection
between the two tasks or steps may not always be direct. In other words, a
pattern classification method may facilitate the labeling of a given image as
being a member of a particular class; arriving at a diagnosis of the condition
of the patient will most likely require the analysis of several other items of
clinical information. Although it is common to work with a prespecified number
of pattern classes, many problems do exist where the number of classes is not
known a priori. A special case is screening, where the aim is to simply decide
on the presence or absence of a certain type of abnormality or disease. The
initial decision in screening may be further focused on whether the subject
appears to be free of the specific abnormality of concern or requires further
investigation.
The problem statement and description above are rather generic. Several
considerations arise in the practical application of the concepts mentioned
above to medical images and diagnosis. Using the detection of breast cancer as
an example, the following questions illustrate some of the problems
encountered in practice.
- Is a mass or tumor present? (Yes/No)
- If a mass or tumor is present:
  - Give or mark its location.
  - Compare the density of the mass to that of the surrounding tissues:
    hypodense, isodense, hyperdense.
  - Describe the shape of its boundary: round, ovoid, irregular,
    macrolobulated, microlobulated, spiculated.
  - Describe its texture: homogeneous, heterogeneous, fatty.
  - Describe its edge: sharp (well-circumscribed), ill-defined (fuzzy).
  - Decide if it is a benign mass, a cyst (solid or fluid-filled), or a
    malignant tumor.
- Are calcifications present? (Yes/No)
- If calcifications are present:
  - Estimate their number per cm^2.
  - Describe their shape: round, ovoid, elongated, branching, rough,
    punctate, irregular, amorphous.
  - Describe their spatial distribution or cluster.
  - Describe their density: homogeneous, heterogeneous.
- Are there signs of architectural distortion? (Yes/No)
- Are there signs of bilateral asymmetry? (Yes/No)
- Are there major changes compared to the previous mammogram of the patient?
- Is the case normal? (Yes/No)
- If the case is abnormal:
  - Is the disease benign or malignant (cancer)?
The items listed above give a selection of the many features of mammograms
that a radiologist would investigate; see Ackerman et al. [1084] and the
BI-RADS(TM) manual [403] for more details. Figure 12.1 shows a graphical user
interface developed by Alto et al. [528, 1085] for the categorization of
breast masses related to some of the questions listed above. Figure 12.2
illustrates four segments of mammograms demonstrating masses and tumors of
different characteristics, progressing from a well-circumscribed and
homogeneous benign mass to a highly spiculated and heterogeneous tumor.
The subject matter of this book, image analysis and pattern classification,
can provide assistance in responding to only some of the questions listed
above. Even an entire set of mammograms may not lead to a final decision:
other modes of diagnostic imaging and means of investigation may be necessary
to arrive at a definite diagnosis.
In the following sections, a number of methods for pattern classification,
decision making, and evaluation of the results of classification are reviewed
and illustrated.
(Note: Parts of this chapter are reproduced, with permission, from R.M.
Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and
Wiley, New York, NY, 2002. (c) IEEE.)
12.1 Pattern Classification
Pattern recognition or classification may be defined as the categorization of
the input data into identifiable classes via the extraction of significant
features or attributes of the data from a background of irrelevant detail
[402, 721, 1086, 1087, 1088, 1089, 1090]. In biomedical image analysis, after
quantitative features have been extracted from the given images, each image
(or ROI) may be represented by a feature vector x = [x1, x2, ..., xn]T, which
is also known
FIGURE 12.1
Graphical user interface for the categorization of breast masses. Reproduced
with permission from H. Alto, R.M. Rangayyan, R.B. Paranjape, J.E.L.
Desautels, and H. Bryant, "An indexed atlas of digital mammograms for
computer-aided diagnosis of breast cancer", Annales des Telecommunications,
58(5): 820-835, 2003. (c) GET - Lavoisier. Figure courtesy of C. LeGuillou,
Ecole Nationale Superieure des Telecommunications de Bretagne, Brest, France.
(a) b145lc95   (b) b164ro94   (c) m51rc97   (d) m55lo97

fcc = 0.00   0.42   0.64   0.83
SI  = 0.04   0.18   0.49   0.61
cf  = 0.11   0.26   0.55   0.99
A   = 0.07   0.08   0.09   0.01
F8  = 8.11   8.05   8.15   8.29
FIGURE 12.2
Examples of breast mass regions and contours with the corresponding values of
fractional concavity fcc, spiculation index SI, compactness cf, acutance A,
and sum entropy F8. (a) Circumscribed benign mass. (b) Macrolobulated benign
mass. (c) Microlobulated malignant tumor. (d) Spiculated malignant tumor. Note
that the masses and their contours are of widely differing size, but have been
scaled to the same size in the illustration. The first letter of the case
identifier indicates a malignant diagnosis with 'm' and a benign diagnosis
with 'b' based upon biopsy. The symbols after the first numerical portion of
the identifier represent l: left, r: right, c: cranio-caudal view, o:
medio-lateral oblique view, x: axillary view. The last two digits represent
the year of acquisition of the mammogram. An additional character of the
identifier after the year (a-f), if present, indicates the existence of
multiple masses visible in the same mammogram. Reproduced with permission from
H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and
analysis of mammographic masses", Journal of Electronic Imaging, in press,
2005. (c) SPIE and IS&T.
as the measurement vector or a pattern vector. When the values xi are real
numbers, x is a point in an n-dimensional Euclidean space: vectors of similar
objects may be expected to form clusters, as illustrated in Figure 12.3.
[Figure: scatter plot of 'x' markers (class C1) and 'o' markers (class C2),
with the prototypes z1 and z2 and the linear decision function
d(x) = w1 x1 + w2 x2 + w3 = 0 drawn between the two clusters.]
FIGURE 12.3
Two-dimensional feature vectors of two classes, C1 and C2. The prototypes of
the two classes are indicated by the vectors z1 and z2. The linear decision
function d(x) shown (solid line) is the perpendicular bisector of the straight
line joining the two prototypes (dashed line). Reproduced with permission from
R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press
and Wiley, New York, NY, 2002. (c) IEEE.
For efficient pattern classification, measurements that could lead to disjoint
sets or clusters of feature vectors are desired. This point underlines the
importance of the appropriate design of the preprocessing and feature
extraction procedures. Features or characterizing attributes that are common
to all patterns belonging to a particular class are known as intraset or
intraclass features. Discriminant features that represent the differences
between pattern classes are called interset or interclass features.
The pattern classification problem is that of generating optimal decision
boundaries or decision procedures to separate the data into pattern classes
based on the feature vectors provided. Figure 12.3 illustrates a simple linear
decision function or boundary to separate 2D feature vectors into two classes.
12.2 Supervised Pattern Classification
The problem considered in supervised pattern classification may be stated as
follows: You are provided with a number of feature vectors with classes
assigned to them. Propose techniques to characterize and parameterize the
boundaries that separate the classes.
A given set of feature vectors of known categorization is often referred to as
a training set. The availability of a training set facilitates the development
of mathematical functions that can characterize the separation between the
classes. The functions may then be applied to new feature vectors of unknown
classes to classify or recognize them. This approach is known as supervised
pattern classification. A set of feature vectors of known categorization that
is used to evaluate a classifier designed in this manner is referred to as a
test set. After adequate testing and confirmation of the method with
satisfactory results, the classifier may be applied to new feature vectors of
unknown classes; the results may then be used to arrive at diagnostic
decisions. The following subsections describe a few methods that can assist in
the development of discriminant and decision functions.
12.2.1 Discriminant and decision functions
A general linear discriminant or decision function is of the form

d(x) = w1 x1 + w2 x2 + ... + wn xn + wn+1 = wT x, (12.1)

where x = [x1, x2, ..., xn, 1]T is the feature vector augmented by an
additional entry equal to unity, and w = [w1, w2, ..., wn, wn+1]T is a
correspondingly augmented weight vector. A two-class pattern classification
problem may be stated as

d(x) = wT x  > 0 if x in C1,
            <= 0 if x in C2,  (12.2)

where C1 and C2 represent the two classes. The discriminant function may be
interpreted as the boundary separating the classes C1 and C2, as illustrated
in Figure 12.3.
In the general case of an M-class pattern classification problem, we will need
M weight vectors and M decision functions to perform the following decisions:

di(x) = wiT x  > 0 if x in Ci, i = 1, 2, ..., M,
              <= 0 otherwise,  (12.3)

where wi = [wi1, wi2, ..., win, wi,n+1]T is the weight vector for the class
Ci. Three cases arise in solving this problem [1086]:
Case 1: Each class is separable from the rest by a single decision surface:

if di(x) > 0, then x in Ci. (12.4)

Case 2: Each class is separable from every other individual class by a
distinct decision surface; that is, the classes are pairwise separable. There
are M(M - 1)/2 decision surfaces, given by dij(x) = wijT x:

if dij(x) > 0 for all j != i, then x in Ci. (12.5)

[Note: dij(x) = -dji(x).]

Case 3: There exist M decision functions dk(x) = wkT x, k = 1, 2, ..., M,
with the property that

if di(x) > dj(x) for all j != i, then x in Ci. (12.6)

This is a special instance of Case 2. We may define

dij(x) = di(x) - dj(x) = (wi - wj)T x = wijT x. (12.7)

If the classes are separable under Case 3, they are separable under Case 2;
the converse is, in general, not true.
Patterns that may be separated by linear decision functions as above are said
to be linearly separable. In other situations, an infinite variety of complex
decision boundaries may be formulated by using generalized decision functions
based upon nonlinear functions of the feature vectors as

d(x) = w1 f1(x) + w2 f2(x) + ... + wK fK(x) + wK+1 (12.8)
     = sum over i = 1, 2, ..., K+1 of wi fi(x). (12.9)

Here, {fi(x)}, i = 1, 2, ..., K, are real, single-valued functions of x, and
fK+1(x) = 1. Whereas the functions fi(x) may be nonlinear in the n-dimensional
space of x, the decision function may be formulated as a linear function by
defining a transformed feature vector x' = [f1(x), f2(x), ..., fK(x), 1]T.
Then, d(x) = wT x', with w = [w1, w2, ..., wK, wK+1]T. Once evaluated,
{fi(x)} is just a set of numerical values, and x' is simply a K-dimensional
vector augmented by an entry equal to unity. Several methods exist for the
derivation of optimal linear discriminant functions [402, 738, 674].
Example of application: The ROIs of 57 breast masses are shown in Figure 12.4,
arranged in the order of decreasing acutance A (see Sections 2.15, 7.9.2, and
12.12). Figure 12.5 shows the contours of the 57 masses arranged in the
increasing order of fractional concavity fcc (see Section 6.4). Most of the
contours of the benign masses are seen to be smooth, whereas most of the
contours of the malignant tumors are rough and spiculated. Furthermore, most
of the benign masses have well-defined, sharp edges and are
well-circumscribed, whereas the majority of the malignant tumors possess
ill-defined and fuzzy borders. It is seen that the shape factor fcc
facilitates the ordering of the contours in terms of shape complexity.
However, the contours of a few benign masses and a few malignant tumors do not
follow the expected trend. In addition, the acutance measure has lower values
for most of the malignant tumors than for a majority of the benign masses.
The three shape factors cf, fcc, and SI (see Chapter 6); the 14 texture
measures as defined by Haralick [441, 442] (see Section 7.3); and four
measures of edge sharpness as defined by Mudigonda et al. [165] (see
Section 7.9.2) were computed for the ROIs and their contours. (Note: The
factor SI was divided by two in this example to reduce it to the range
[0, 1].) Figure 12.6 gives a plot of the 3D feature-vector space (fcc, A, F8)
for the 57 masses. The feature F8 shows poor separation between the benign and
malignant samples, whereas the feature A demonstrates some degree of
separation. A scatter plot of the three shape factors (fcc, cf, SI) of the 57
masses is given in Figure 12.7. Each of the three shape factors demonstrates
high discriminant capability.
Figure 12.8 shows a 2D plot of the shape-factor vectors [fcc, SI] for a
training set formed by selecting the vectors for 18 benign masses and 10
malignant tumors. The prototypes for the benign and malignant classes,
obtained by averaging the vectors over all the members of the two classes in
the training set, are marked as 'B' and 'M', respectively, on the plot. The
solid straight line is the perpendicular bisector of the line joining the two
prototypes (dashed line), and represents a linear discriminant function. The
equation of the straight line is SI + 0.6826 fcc - 0.5251 = 0. The decision
function is represented by the following rule:

    if SI + 0.6826 fcc - 0.5251 < 0 then
        benign mass
    else
        malignant tumor
    end.

It is seen that the rule given above will correctly classify all of the
training samples.
Figure 12.9 shows the result of application of the linear discriminant
function designed and shown in Figure 12.8 to a test set of 19 benign masses
and 10 malignant tumors. The test set does not include any of the cases from
the training set. It is seen that the classifier will lead to three false
negatives in the test set.
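The rule above amounts to evaluating the sign of the decision function at each feature vector. A Python sketch, applied to two hypothetical (fcc, SI) pairs chosen only to fall on either side of the boundary:

```python
def classify(fcc, SI):
    """Linear decision rule of the example: the sign of d(x) selects the class."""
    d = SI + 0.6826 * fcc - 0.5251
    return "benign mass" if d < 0 else "malignant tumor"

# hypothetical feature pairs for illustration
print(classify(0.05, 0.10))   # smooth, non-spiculated contour -> benign mass
print(classify(0.45, 0.60))   # rough, spiculated contour -> malignant tumor
```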
12.2.2 Distance functions
Consider M pattern classes represented by their prototype patterns z1, z2,
..., zM. The prototype of a class is typically computed as the average of all
the feature vectors belonging to the class. Figure 12.3 illustrates
schematically the prototypes z1 and z2 of the two classes shown.
m51rc97 0.088  b164ro94 0.085  b164rc94 0.085  b146ro96 0.084  b62lc97 0.084  b62lo97 0.080
b155ro95 0.080  m23lc97 0.079  b155rc95 0.078  m23lo97 0.074  m63ro97 0.073  b62lx97 0.072
b145lc95 0.071  b166lc94 0.069  b146rc96 0.068  b62rc97e 0.065  b62rc97d 0.064  m63rc97 0.064
b62rc97a 0.063  b62rc97b 0.063  b164rx94 0.063  b62ro97e 0.063  b145lo95 0.062  b62ro97a 0.059
b110rc95 0.059  b148ro97 0.058  b157lc96 0.057  b62ro97d 0.057  b157lo96 0.056  b110ro95 0.055
b62ro97c 0.054  b62rc97c 0.053  b64rc97 0.052  b161lc95 0.051  m22lo97 0.051  m62lx97 0.051
b62rc97f 0.051  b148rc97 0.050  b166lo94 0.050  m59lc97 0.049  b158lc95 0.047  m22lc97 0.046
b62ro97b 0.045  m58rm97 0.044  b161lo95 0.043  m59lo97 0.041  m61lc97 0.040  b158lo95 0.039
b62ro97f 0.036  m51ro97 0.033  m64lc97 0.029  m62lo97 0.029  m55lc97 0.027  m61lo97 0.024
m58ro97 0.021  m58rc97 0.014  m55lo97 0.012

FIGURE 12.4
ROIs of 57 breast masses, including 37 benign masses and 20 malignant tumors.
The ROIs are arranged in the order of decreasing acutance A. Note that the
masses are of widely differing size, but have been scaled to the same size in
the illustration. For details regarding the case identifiers, see Figure 12.2.
Reproduced with permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels,
"Content-based retrieval and analysis of mammographic masses", Journal of
Electronic Imaging, in press, 2005. (c) SPIE and IS&T.
b145lc95 0  b146rc96 0  b148rc97 0  b148ro97 0  b161lc95 0  b62rc97e 0
b62lo97 0.017  b164rx94 0.028  b62rc97a 0.03  b62rc97b 0.033  b110rc95 0.035  b62ro97e 0.041
b161lo95 0.05  b62lx97 0.052  b62rc97d 0.061  b158lo95 0.063  b164rc94 0.064  b145lo95 0.074
b62ro97b 0.091  b110ro95 0.094  b155rc95 0.098  b62ro97f 0.099  b157lo96 0.13  b62rc97c 0.15
b166lo94 0.15  b166lc94 0.17  b62ro97d 0.17  b157lc96 0.2  b62lc97 0.2  b155ro95 0.2
b62rc97f 0.2  b62ro97a 0.22  b146ro96 0.22  b62ro97c 0.24  b164ro94 0.24  b158lc95 0.26
m63ro97 0.29  m62lx97 0.29  b64rc97 0.31  m51rc97 0.37  m23lc97 0.39  m55lc97 0.41
m59lo97 0.42  m23lo97 0.42  m59lc97 0.43  m63rc97 0.44  m51ro97 0.44  m58rc97 0.45
m58ro97 0.46  m61lo97 0.47  m22lc97 0.47  m55lo97 0.48  m58rm97 0.5  m61lc97 0.5
m62lo97 0.5  m64lc97 0.5  m22lo97 0.57

FIGURE 12.5
Contours of 57 breast masses, including 37 benign masses and 20 malignant
tumors. The contours are arranged in the order of increasing fcc. Note that
the masses and their contours are of widely differing size, but have been
scaled to the same size in the illustration. For details regarding the case
identifiers, see Figure 12.2. See also Figure 12.30. Reproduced with
permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based
retrieval and analysis of mammographic masses", Journal of Electronic Imaging,
in press, 2005. (c) SPIE and IS&T.
[Figure: 3D scatter plot with axes Fractional Concavity, Acutance, and Sum
Entropy.]

FIGURE 12.6
Plot of the 3D feature-vector space (fcc, A, F8) for the set of 57 masses in
Figure 12.4. 'o': benign masses (37). '*': malignant tumors (20). Reproduced
with permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels,
"Content-based retrieval and analysis of mammographic masses", Journal of
Electronic Imaging, in press, 2005. (c) SPIE and IS&T.
[Figure: 3D scatter plot with axes Fractional Concavity, Compactness, and
Spiculation Index.]

FIGURE 12.7
Plot of the 3D feature-vector space (fcc, cf, SI) for the set of 57 contours
in Figure 12.5. 'o': benign masses (37). '*': malignant tumors (20). Figure
courtesy of H. Alto.
The Euclidean distance between an arbitrary pattern vector x and the ith
prototype is given as

Di = ||x - zi|| = [(x - zi)T (x - zi)]^(1/2). (12.10)

A simple rule to classify the pattern vector x would be to choose that class
for which the vector has the smallest distance:

if Di < Dj for all j != i, then x in Ci. (12.11)

(See Section 12.12 for the description of an application of the Euclidean
distance to the analysis of breast masses and tumors.)
A simple relationship may be established between discriminant functions and
distance functions as follows [1086]:

Di^2 = ||x - zi||^2 = (x - zi)T (x - zi)
     = xT x - 2 xT zi + ziT zi = xT x - 2 (xT zi - (1/2) ziT zi). (12.12)

Choosing the minimum of Di^2 is equivalent to choosing the minimum of Di
(because all Di > 0). Furthermore, from the equation above, it follows
[Figure: 2D scatter plot with axes Fractional concavity fcc and Spiculation
index SI.]

FIGURE 12.8
Plot of the 2D feature-vector space (fcc, SI) for the training set of 18
benign masses ('o') and 10 malignant tumors ('x') selected from the dataset in
Figure 12.5. The prototypes of the two classes are indicated by the vectors
marked 'B' and 'M'. The solid line shown is a linear decision function,
obtained as the perpendicular bisector of the straight line joining the two
prototypes (dashed line).
[Figure: 2D scatter plot with axes Fractional concavity fcc and Spiculation
index SI.]

FIGURE 12.9
Plot of the 2D feature-vector space (fcc, SI) for the test set of 19 benign
masses ('o') and 10 malignant tumors ('x') selected from the dataset in
Figure 12.5. The solid line shown is a linear decision function designed as
illustrated in Figure 12.8. Three malignant cases are misclassified by the
decision function shown.
that choosing the minimum of Di^2 is equivalent to choosing the maximum of
(xT zi - (1/2) ziT zi). Therefore, we may define the decision function

di(x) = xT zi - (1/2) ziT zi, i = 1, 2, ..., M. (12.13)

A decision rule may then be stated as

if di(x) > dj(x) for all j != i, then x in Ci. (12.14)

This is a linear discriminant function, which becomes obvious from the
following representation: if zij, j = 1, 2, ..., n, are the components of zi,
let wij = zij, j = 1, 2, ..., n; wi,n+1 = -(1/2) ziT zi; and
x = [x1, x2, ..., xn, 1]T. Then, di(x) = wiT x, i = 1, 2, ..., M, where
wi = [wi1, wi2, ..., wi,n+1]T. Therefore, distance functions may be formulated
as linear discriminant or decision functions.
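A sketch of this equivalence: the classifier below evaluates di(x) = xT zi - (1/2) ziT zi for each prototype and picks the maximum, which selects the same class as the minimum Euclidean distance. The two 2D prototypes are hypothetical values for illustration:

```python
def linear_distance_classifier(x, prototypes):
    """Assign x to the class whose prototype maximizes d_i(x) = xT z_i - 0.5 z_iT z_i,
    which is equivalent to minimizing the Euclidean distance to z_i."""
    def d(z):
        return sum(a * b for a, b in zip(x, z)) - 0.5 * sum(b * b for b in z)
    return max(range(len(prototypes)), key=lambda i: d(prototypes[i]))

# two hypothetical 2D class prototypes
z = [(0.1, 0.1), (0.5, 0.6)]
print(linear_distance_classifier((0.15, 0.05), z))  # -> 0 (closer to z1)
print(linear_distance_classifier((0.45, 0.70), z))  # -> 1 (closer to z2)
```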
12.2.3 The nearest-neighbor rule
Suppose that we are provided with a set of N sample patterns {s1, s2, ..., sN}
of known classification: each pattern belongs to one of M classes
{C1, C2, ..., CM}, with N >> M. We are then given a new feature vector x whose
class needs to be determined. Let us compute a distance measure D(si, x)
between the vector x and each sample pattern. Then, the nearest-neighbor rule
states that the vector x is to be assigned to the class of the sample that is
the closest to x:

x in Ci if D(si, x) = min{D(sl, x)}, l = 1, 2, ..., N. (12.15)

A major disadvantage of the above method is that the classification decision
is made based upon a single sample vector of known classification. The nearest
neighbor may happen to be an outlier that is not representative of its class.
It would be more reliable to base the classification upon several samples: we
may consider a certain number k of the nearest neighbors of the sample to be
classified, and then seek a majority opinion. This leads to the so-called
k-nearest-neighbor or k-NN rule: determine the k nearest neighbors of x, and
use the majority of equal classifications in this group as the classification
of x. See Section 12.12 for the description of an application of the k-NN
method to the analysis of breast masses and tumors.
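A minimal sketch of the k-NN rule with the Euclidean distance; the 2D samples and class labels are hypothetical values for illustration:

```python
from collections import Counter
from math import dist

def knn(x, samples, labels, k=3):
    """k-NN rule: take the majority class among the k samples nearest to x."""
    nearest = sorted(range(len(samples)), key=lambda i: dist(x, samples[i]))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# hypothetical 2D training samples of two classes
samples = [(0.1, 0.1), (0.2, 0.05), (0.15, 0.2),
           (0.6, 0.7), (0.7, 0.6), (0.65, 0.75)]
labels = ["benign", "benign", "benign",
          "malignant", "malignant", "malignant"]
print(knn((0.12, 0.12), samples, labels))   # -> benign
print(knn((0.68, 0.66), samples, labels))   # -> malignant
```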
12.3 Unsupervised Pattern Classification
Let us consider the situation where we are given a set of feature vectors with
no categorization or classes attached to them. No prior training information
is available. How may we group the vectors into multiple categories?
The design of distance functions and decision boundaries requires a training
set of feature vectors of known classes. The functions so designed may then be
applied to a new set of feature vectors or samples to perform pattern
classification. Such a procedure is known as supervised pattern classification
due to the initial training step. In some situations a training step may not
be possible, and we may be required to classify a given set of feature vectors
into either a prespecified or unknown number of categories. Such a problem is
labeled as unsupervised pattern classification, and may be solved by
cluster-seeking methods.
12.3.1 Cluster-seeking methods
Given a set of feature vectors, we may examine them for the formation of
inherent groups or clusters. This is a simple task in the case of 2D vectors,
where we may plot them, visually identify groups, and label each group with a
pattern class. Allowance may have to be made to assign the same class to
multiple disjoint groups. Such an approach may be used even when the number of
classes is not known at the outset. When the vectors have a dimension higher
than three, visual analysis will not be feasible. It then becomes necessary to
define criteria to group the given vectors on the basis of similarity,
dissimilarity, or distance measures. A few examples of such measures are
described below [1086]:
- Euclidean distance:

  DE^2 = ||x - z||^2 = (x - z)T (x - z) = sum over i = 1, ..., n of
  (xi - zi)^2. (12.16)

  Here, x and z are two feature vectors; the latter could be a class
  prototype, if available. A small value of DE indicates greater similarity
  between the two vectors than a large value of DE.

- Manhattan or city-block distance:

  DC = sum over i = 1, ..., n of |xi - zi|. (12.17)

  The Manhattan distance is the shortest path between x and z, with each
  segment being parallel to a coordinate axis [402].
- Mahalanobis distance:

  DM^2 = (x - m)T C^(-1) (x - m), (12.18)

  where x is a feature vector being compared to a pattern class for which m is
  the class mean vector and C is the covariance matrix. A small value of DM
  indicates a higher potential membership of the vector x in the class than a
  large value of DM. (See Section 12.12 for the description of an application
  of the Mahalanobis distance to the analysis of breast masses and tumors.)
- Normalized dot product (the cosine of the angle between the vectors x
  and z):

  Dd = (xT z) / (||x|| ||z||). (12.19)

  A large dot product value indicates a greater degree of similarity between
  the two vectors than a small value.
The covariance matrix is defined as

C = E[(y - m)(y - m)T], (12.20)

where the expectation operation is performed over all feature vectors y that
belong to the class. The covariance matrix provides the covariance of all
possible pairs of the features in the feature vector over all samples
belonging to the given class being considered. The elements along the main
diagonal of the covariance matrix provide the variance of the individual
features that make up the feature vector. The covariance matrix represents the
scatter of the features that belong to the given class. The mean and
covariance need to be updated as more samples are added to a given class in a
clustering procedure.
When the Mahalanobis distance needs to be calculated between a sample vector
and a number of classes represented by their mean and covariance matrices, a
pooled covariance matrix may be used if the numbers of members in the various
classes are unequal and low [1088]. If the covariance matrices of two classes
are C1 and C2, and the numbers of members in the two classes are N1 and N2,
the pooled covariance matrix is given by

C = [(N1 - 1) C1 + (N2 - 1) C2] / (N1 + N2 - 2). (12.21)
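A sketch of Equations 12.18 and 12.21 using NumPy; the covariance matrices, member counts, and vectors below are hypothetical values for illustration:

```python
import numpy as np

def mahalanobis2(x, m, C):
    """Squared Mahalanobis distance (x - m)T C^(-1) (x - m), as in Equation 12.18."""
    d = np.asarray(x, float) - np.asarray(m, float)
    return float(d @ np.linalg.inv(C) @ d)

def pooled_cov(C1, N1, C2, N2):
    """Pooled covariance of two classes, as in Equation 12.21."""
    return ((N1 - 1) * np.asarray(C1) + (N2 - 1) * np.asarray(C2)) / (N1 + N2 - 2)

# hypothetical 2D class covariances and member counts
C1 = np.array([[0.04, 0.0], [0.0, 0.01]])
C2 = np.array([[0.02, 0.0], [0.0, 0.03]])
C = pooled_cov(C1, 10, C2, 6)

d2 = mahalanobis2([0.3, 0.2], [0.1, 0.1], C)
print(f"pooled C diagonal: {C[0, 0]:.4f}, {C[1, 1]:.4f}; D^2 = {d2:.4f}")
```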
Various performance indices may be designed to measure the success of a
clustering procedure [1086]. A measure of the tightness of a cluster is the
sum of the squared errors performance index:

J = sum over j = 1, ..., Nc of (sum over x in Sj of ||x - mj||^2), (12.22)

where Nc is the number of cluster domains, Sj is the set of samples in the
jth cluster,

mj = (1/Nj) sum over x in Sj of x (12.23)

is the sample mean vector of Sj, and Nj is the number of samples in Sj.
A few other examples of performance indices are:
Pattern Classification and Diagnostic Decision
- Average of the squared distances between the samples in a cluster domain.
- Intracluster variance.
- Average of the squared distances between the samples in different cluster domains.
- Intercluster distances.
- Scatter matrices.
- Covariance matrices.
A simple cluster-seeking algorithm [1086]: Suppose we have N sample patterns {x_1, x_2, ..., x_N}.

1. Let the first cluster center z_1 be equal to any one of the samples, say z_1 = x_1.
2. Choose a nonnegative threshold θ.
3. Compute the distance D_21 between x_2 and z_1. If D_21 < θ, assign x_2 to the domain (class) of cluster center z_1; otherwise, start a new cluster with its center as z_2 = x_2. For the subsequent steps, let us assume that a new cluster with center z_2 has been established.
4. Compute the distances D_31 and D_32 from the next sample x_3 to z_1 and z_2, respectively. If D_31 and D_32 are both greater than θ, start a new cluster with its center as z_3 = x_3; otherwise, assign x_3 to the domain of the closer cluster.
5. Continue to apply Steps 3 and 4 by computing and checking the distance from every new (unclassified) pattern vector to every established cluster center and applying the assignment or cluster-creation rule.
6. Stop when every given pattern vector has been assigned to a cluster.
Observe that the procedure does not require a priori knowledge of the
number of classes. Recognize also that the procedure does not assign a real-
world class to each cluster: it merely groups the given vectors into disjoint
clusters. A subsequent step is required to label each cluster with a class related
to the actual problem. Multiple clusters may relate to the same real-world
class, and may have to be merged.
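The procedure can be sketched as follows (a minimal illustration with Euclidean distances; the function name and test data are my own):

```python
import numpy as np

def simple_cluster_seeking(samples, threshold):
    """Group samples into clusters: a sample joins the nearest existing
    cluster center if it lies within `threshold` of it; otherwise it
    seeds a new cluster. Centers are never updated in this scheme."""
    centers = [samples[0]]              # Step 1: first sample is the first center
    labels = [0]
    for x in samples[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] < threshold:            # assign to the domain of the closest center
            labels.append(j)
        else:                           # start a new cluster
            centers.append(x)
            labels.append(len(centers) - 1)
    return np.array(centers), labels
```

Note how the outcome depends on the first sample and on the ordering of the remaining samples, which is exactly the sensitivity listed below.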
A major disadvantage of the simple cluster-seeking algorithm is that the results depend upon:

- the first cluster center chosen for each domain or class,
- the order in which the sample patterns are considered,
- the value of the threshold θ, and
- the geometrical properties or distributions of the data, that is, the feature-vector space.
The maximin-distance clustering algorithm [1086]: This method is similar to the previous "simple" algorithm, but first identifies the cluster regions that are the farthest apart. The term "maximin" refers to the combined use of maximum and minimum distances between the given vectors and the centers of the clusters already formed.

1. Let x_1 be the first cluster center z_1.
2. Determine the farthest sample from x_1, and label it as cluster center z_2.
3. Compute the distance from each remaining sample to z_1 and to z_2. For every pair of these computations, save the minimum distance, and select the maximum of the minimum distances. If this "maximin" distance is an appreciable fraction of the distance between the cluster centers z_1 and z_2, label the corresponding sample as a new cluster center z_3; otherwise, stop forming new clusters and go to Step 5.
4. If a new cluster center was formed in Step 3, repeat Step 3 using a "typical" or the average distance between the established cluster centers for comparison.
5. Assign each remaining sample to the domain of its nearest cluster center.
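A sketch of the maximin procedure, under the assumption that "an appreciable fraction" is a user-chosen parameter (here `fraction`, my own name) and that the average inter-center distance is used as the reference once more than two centers exist:

```python
import numpy as np

def maximin_clustering(samples, fraction=0.5):
    """Pick cluster centers by the maximin rule: a sample becomes a new
    center if its distance to the nearest existing center exceeds
    `fraction` of the average inter-center distance."""
    X = np.asarray(samples, dtype=float)
    centers = [X[0]]                                              # Step 1
    centers.append(X[np.argmax(np.linalg.norm(X - centers[0], axis=1))])  # Step 2
    while True:
        # minimum distance from each sample to the established centers
        dmin = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        k = int(np.argmax(dmin))                                  # the "maximin" sample
        C = np.array(centers)
        ref = np.mean([np.linalg.norm(a - b) for i, a in enumerate(C)
                       for b in C[i + 1:]])                       # average center spacing
        if dmin[k] > fraction * ref:                              # Steps 3 and 4
            centers.append(X[k])
        else:
            break
    # Step 5: assign every sample to its nearest center
    labels = [int(np.argmin([np.linalg.norm(x - c) for c in centers])) for x in X]
    return np.array(centers), labels
```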
The K-means algorithm [1086]: The preceding "simple" and "maximin" algorithms are intuitive procedures. The K-means algorithm is based on iterative minimization of a performance index that is defined as the sum of the squared distances from all points in a cluster domain to the cluster center.

1. Choose K initial cluster centers z_1(1), z_2(1), ..., z_K(1). K is the number of clusters to be formed. The choice of the cluster centers is arbitrary, and could be the first K of the feature vectors available. The index in parentheses represents the iteration number.

2. At the k-th iterative step, distribute the samples {x} among the K cluster domains, using the relation

x \in S_j(k) \;\; \text{if} \;\; \|x - z_j(k)\| < \|x - z_i(k)\| \;\; \forall \, i = 1, 2, \ldots, K; \; i \neq j,   (12.24)

where S_j(k) denotes the set of samples whose cluster center is z_j(k).

3. From the results of Step 2, compute the new cluster centers z_j(k+1), j = 1, 2, ..., K, such that the sum of the squared distances from all points in S_j(k) to the new cluster center is minimized. In other words, the new cluster center z_j(k+1) is computed so that the performance index

J_j = \sum_{x \in S_j(k)} \|x - z_j(k+1)\|^2, \quad j = 1, 2, \ldots, K,   (12.25)

is minimized. The z_j(k+1) that minimizes this performance index is simply the sample mean of S_j(k). Therefore, the new cluster center is given by

z_j(k+1) = \frac{1}{N_j(k)} \sum_{x \in S_j(k)} x, \quad j = 1, 2, \ldots, K,   (12.26)

where N_j(k) is the number of samples in S_j(k). The name "K-means" is derived from the manner in which cluster centers are sequentially updated.

4. If z_j(k+1) = z_j(k) for j = 1, 2, ..., K, the algorithm has converged: terminate the procedure. Otherwise, go to Step 2.
The behavior of the K-means algorithm is influenced by:

- the number of cluster centers specified (K),
- the choice of the initial cluster centers,
- the order in which the sample patterns are considered, and
- the geometrical properties or distributions of the data, that is, the feature-vector space.
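The four steps can be sketched as follows (a minimal illustration; the empty-cluster guard is a practical assumption of mine, not discussed in the text):

```python
import numpy as np

def k_means(X, K, max_iter=100):
    """Plain K-means: assign each sample to its nearest center, recompute
    each center as the mean of its cluster, and stop when the centers no
    longer change."""
    X = np.asarray(X, dtype=float)
    centers = X[:K].copy()            # Step 1: first K samples as initial centers
    for _ in range(max_iter):
        # Step 2: nearest-center assignment
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Step 3: new centers are the cluster means (keep old center if empty)
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(K)])
        # Step 4: convergence test
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

Using the first K samples as initial centers matches the arbitrary choice mentioned in Step 1; other initializations can converge to different clusterings, as the sensitivity list above indicates.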
Example: Figures 12.10 to 12.14 show cluster plots of the shape factors f_cc and SI of the 57 breast-mass contours shown in Figure 12.5 (see Section 12.12 for details). Although the categories of the samples would be unknown in a practical situation, the samples are identified in the plots with the + symbol for the malignant tumors and a second symbol for the benign masses. (The categorization represents the ground truth, or true classification, of the samples based upon biopsy.)

The plots in Figures 12.10 to 12.14 show the progression of the K-means algorithm from its initial state to the converged state. K = 2 in this example, representing the benign and malignant categories. The only prior knowledge or assumption used is that the samples are to be split into two clusters; that is, there are two classes. Figure 12.10 shows two samples selected to represent the cluster centers, marked with the diamond and asterisk symbols. The straight line indicates the decision boundary, which is the perpendicular bisector of the straight line joining the two cluster centers. The K-means algorithm converged, in this case, at the fifth iteration (that is, there was no change in the cluster centers after the fifth iteration). The final decision boundary results in the misclassification of four of the malignant samples as being benign. It is interesting to note that even though the two initial cluster centers belong to the benign category, the algorithm has converged to a useful solution. (See Section 12.12.1 for examples of application of other pattern classification techniques to the same dataset.)

12.4 Probabilistic Models and Statistical Decision

Pattern classification methods such as discriminant functions are dependent upon the set of training samples provided. Their success when applied to new cases will depend upon the accuracy of the representation of the various pattern classes by the training samples. How can we design pattern classification techniques that are independent of specific training samples and are optimal in a broad sense?

Probability functions and probabilistic models may be developed to represent the occurrence and statistical attributes of classes of patterns. Such functions may be based upon large collections of data, historical records, or mathematical models of pattern generation. In the absence of such information, a training step with samples of known categorization will be required to estimate the required model parameters. It is common practice to assume a Gaussian PDF to represent the distribution of the features for each class, and to estimate the required mean and variance parameters from the training sets. When PDFs are available to characterize pattern classes and their features, optimal decision functions may be designed based upon statistical functions and decision theory. The following subsections describe a few methods in this category.

12.4.1 Likelihood functions and statistical decision

Let P(C_i) be the probability of occurrence of class C_i, i = 1, 2, ..., M; this is known as the a priori, prior, or unconditional probability. The a posteriori or posterior probability that an observed sample pattern x came from C_i is expressed as P(C_i | x). If a classifier decides that x comes from C_j when it actually came from C_i, the classifier is said to incur a loss L_ij, with L_ii = 0 or a fixed operational cost, and L_ij > L_ii ∀ j ≠ i.

Because x may belong to any one of the M classes under consideration, the expected loss, known as the conditional average risk or loss, in assigning x to C_j is [1086]

R_j(x) = \sum_{i=1}^{M} L_{ij} \, P(C_i | x).   (12.27)
[Figure: cluster plot; vertical axis: spiculation index SI/2; horizontal axis: fractional concavity f_cc.]

FIGURE 12.10
Initial state of the K-means algorithm. The symbols in the cluster plot represent the 2D feature vectors (f_cc, SI) for 37 benign and 20 malignant (+) breast masses. (See Figure 12.5 for the contours of the masses.) The cluster centers (class means) are indicated by the solid diamond and the * symbols. The straight line indicates the decision boundary between the two classes. Figure courtesy of F.J. Ayres.
[Figure: cluster plot; axes as in Figure 12.10.]

FIGURE 12.11
Second iteration of the K-means algorithm. Details as in Figure 12.10. Figure courtesy of F.J. Ayres.
[Figure: cluster plot; axes as in Figure 12.10.]

FIGURE 12.12
Third iteration of the K-means algorithm. Details as in Figure 12.10. Figure courtesy of F.J. Ayres.
[Figure: cluster plot; axes as in Figure 12.10.]

FIGURE 12.13
Fourth iteration of the K-means algorithm. Details as in Figure 12.10. Figure courtesy of F.J. Ayres.
[Figure: cluster plot; axes as in Figure 12.10.]

FIGURE 12.14
Final state of the K-means algorithm after the fifth iteration. Details as in Figure 12.10. Figure courtesy of F.J. Ayres.
A classifier could compute R_j(x), j = 1, 2, ..., M, for each sample x, and then assign x to the class with the smallest conditional loss. Such a classifier will minimize the total expected loss over all decisions, and is called the Bayes classifier. From a statistical point of view, the Bayes classifier represents the optimal classifier.
According to Bayes' rule, we have [721, 1086]

P(C_i | x) = \frac{p(x | C_i) \, P(C_i)}{p(x)},   (12.28)

where p(x | C_i) is called the likelihood function of class C_i, or the state-conditional PDF of x, and p(x) is the PDF of x regardless of class membership (unconditional). [Note: P(y) is used to represent the probability of occurrence of an event y; p(y) is used to represent the PDF of a random variable y. Probabilities and PDFs involving a multidimensional feature vector are multivariate functions with dimension equal to that of the feature vector.] Bayes' rule thus provides a mechanism to update the a priori probability P(C_i) to the a posteriori probability P(C_i | x) upon observation of the sample x. Then, we can express the expected loss as [1086]

R_j(x) = \frac{1}{p(x)} \sum_{i=1}^{M} L_{ij} \, p(x | C_i) \, P(C_i).   (12.29)

Because \frac{1}{p(x)} is common for all j, we could modify R_j(x) to

r_j(x) = \sum_{i=1}^{M} L_{ij} \, p(x | C_i) \, P(C_i).   (12.30)
In a two-class case with M = 2, we obtain the following expressions [1086]:

r_1(x) = L_{11} \, p(x | C_1) \, P(C_1) + L_{21} \, p(x | C_2) \, P(C_2),   (12.31)

r_2(x) = L_{12} \, p(x | C_1) \, P(C_1) + L_{22} \, p(x | C_2) \, P(C_2);   (12.32)

x \in C_1 \;\; \text{if} \;\; r_1(x) < r_2(x);   (12.33)

that is,

x \in C_1 \;\; \text{if} \;\; [L_{11} \, p(x | C_1) \, P(C_1) + L_{21} \, p(x | C_2) \, P(C_2)] < [L_{12} \, p(x | C_1) \, P(C_1) + L_{22} \, p(x | C_2) \, P(C_2)],   (12.34)

or equivalently,

x \in C_1 \;\; \text{if} \;\; (L_{21} - L_{22}) \, p(x | C_2) \, P(C_2) < (L_{12} - L_{11}) \, p(x | C_1) \, P(C_1).   (12.35)
This expression may be rewritten as

x \in C_1 \;\; \text{if} \;\; \frac{p(x | C_1)}{p(x | C_2)} > \frac{P(C_2)}{P(C_1)} \, \frac{(L_{21} - L_{22})}{(L_{12} - L_{11})}.   (12.36)

The left-hand side of the inequality above, which is a ratio of two likelihood functions, is often referred to as the likelihood ratio:

l_{12}(x) = \frac{p(x | C_1)}{p(x | C_2)}.   (12.37)

Then, Bayes' decision rule for M = 2 is [1086]:

1. Assign x to class C_1 if l_{12}(x) > θ_{12}, where θ_{12} is a threshold given by θ_{12} = \frac{P(C_2)}{P(C_1)} \, \frac{(L_{21} - L_{22})}{(L_{12} - L_{11})}.
2. Assign x to class C_2 if l_{12}(x) < θ_{12}.
3. Make an arbitrary or heuristic decision if l_{12}(x) = θ_{12}.

The rule may be generalized to the M-class case as [1086]:

x \in C_i \;\; \text{if} \;\; \sum_{k=1}^{M} L_{ki} \, p(x | C_k) \, P(C_k) < \sum_{q=1}^{M} L_{qj} \, p(x | C_q) \, P(C_q), \quad j = 1, 2, \ldots, M; \; j \neq i.   (12.38)
In most pattern classification problems, the loss is nil for correct decisions. The loss could be assumed to be equal to a certain quantity for all erroneous decisions. Then, L_ij = 1 - δ_ij, where

δ_ij = 1 if i = j, and δ_ij = 0 otherwise,   (12.39)

and

r_j(x) = \sum_{i=1}^{M} (1 - \delta_{ij}) \, p(x | C_i) \, P(C_i) = p(x) - p(x | C_j) \, P(C_j),   (12.40)

because

\sum_{i=1}^{M} p(x | C_i) \, P(C_i) = p(x).   (12.41)

The Bayes classifier will assign a pattern x to class C_i if

p(x) - p(x | C_i) \, P(C_i) < p(x) - p(x | C_j) \, P(C_j), \quad j = 1, 2, \ldots, M; \; j \neq i;   (12.42)
that is,

x \in C_i \;\; \text{if} \;\; p(x | C_i) \, P(C_i) > p(x | C_j) \, P(C_j), \quad j = 1, 2, \ldots, M; \; j \neq i.   (12.43)

This is nothing more than using the decision functions

d_i(x) = p(x | C_i) \, P(C_i), \quad i = 1, 2, \ldots, M,   (12.44)

where a pattern x is assigned to class C_i if d_i(x) > d_j(x) ∀ j ≠ i for that pattern. Using Bayes' rule, we get

d_i(x) = P(C_i | x) \, p(x), \quad i = 1, 2, \ldots, M.   (12.45)

Because p(x) does not depend upon the class index i, this can be reduced to

d_i(x) = P(C_i | x), \quad i = 1, 2, \ldots, M.   (12.46)

The different decision functions given above provide alternative yet equivalent approaches, depending upon whether p(x | C_i) or P(C_i | x) is used (or available). The estimation of p(x | C_i) would require a training set for each class C_i. It is common to assume a Gaussian distribution and estimate its mean and variance using the training set.
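A two-class decision based on Equation 12.44, with univariate Gaussian likelihoods, can be sketched as follows; the class names, parameter values, and priors are invented for illustration:

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, var):
    """Univariate normal PDF with the given mean and variance."""
    return exp(-0.5 * (x - mean) ** 2 / var) / sqrt(2 * pi * var)

def bayes_decide(x, params, priors):
    """Assign x to the class maximizing d_i(x) = p(x | C_i) P(C_i)
    (Equation 12.44). `params` maps each class to (mean, variance)."""
    scores = {c: gaussian_pdf(x, m, v) * priors[c] for c, (m, v) in params.items()}
    return max(scores, key=scores.get)

# Hypothetical class-conditional parameters (not from the text):
params = {"benign": (0.2, 0.01), "malignant": (0.5, 0.02)}
priors = {"benign": 0.5, "malignant": 0.5}
```

Raising the benign prior shifts the decision threshold toward higher feature values, which is the effect illustrated with Figures 12.15 and 12.16 in the example below.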

12.4.2 Bayes classifier for normal patterns

The univariate normal or Gaussian PDF for a single random variable x is given by

p(x) = \frac{1}{\sqrt{2\pi} \, \sigma} \exp\left[ -\frac{1}{2} \left( \frac{x - m}{\sigma} \right)^2 \right],   (12.47)

which is completely specified by two parameters: the mean

m = E[x] = \int_{-\infty}^{\infty} x \, p(x) \, dx,   (12.48)

and the variance

\sigma^2 = E[(x - m)^2] = \int_{-\infty}^{\infty} (x - m)^2 \, p(x) \, dx.   (12.49)
In the case of M pattern classes and pattern vectors x of dimension n governed by multivariate normal PDFs, we have

p(x | C_i) = \frac{1}{(2\pi)^{n/2} \, |C_i|^{1/2}} \exp\left[ -\frac{1}{2} (x - m_i)^T C_i^{-1} (x - m_i) \right],   (12.50)

i = 1, 2, ..., M, where each PDF is completely specified by its mean vector m_i and its n × n covariance matrix C_i, with

m_i = E_i[x]   (12.51)

and

C_i = E_i[(x - m_i)(x - m_i)^T].   (12.52)

Here, E_i[·] denotes the expectation operator over the patterns belonging to class C_i.

Normal distributions occur frequently in nature, and have the advantage of analytical tractability. A multivariate normal PDF reduces to a product of univariate normal PDFs when the elements of x are mutually independent (in which case the covariance matrix is a diagonal matrix).
We had earlier formulated the decision functions

d_i(x) = p(x | C_i) \, P(C_i), \quad i = 1, 2, \ldots, M;   (12.53)

see Equation 12.44. Given the exponential in the normal PDF, it is convenient to use

d_i(x) = \ln[p(x | C_i) \, P(C_i)] = \ln p(x | C_i) + \ln P(C_i),   (12.54)

which is equivalent in terms of classification performance because the natural logarithm ln is a monotonically increasing function. Then [1086],

d_i(x) = \ln P(C_i) - \frac{n}{2} \ln 2\pi - \frac{1}{2} \ln |C_i| - \frac{1}{2} (x - m_i)^T C_i^{-1} (x - m_i),   (12.55)

i = 1, 2, ..., M. The second term does not depend upon i; therefore, we can simplify d_i(x) to

d_i(x) = \ln P(C_i) - \frac{1}{2} \ln |C_i| - \frac{1}{2} (x - m_i)^T C_i^{-1} (x - m_i), \quad i = 1, 2, \ldots, M.   (12.56)

The decision functions above are hyperquadrics; hence, the best that a Bayes classifier for normal patterns can do is to place a general second-order decision surface between each pair of pattern classes. In the case of true normal distributions of patterns, the decision functions as above will be optimal on an average basis: they minimize the expected loss with the simplified loss function L_ij = 1 - δ_ij [1086].

If all the covariance matrices are equal, that is, C_i = C, i = 1, 2, ..., M, we get

d_i(x) = \ln P(C_i) + x^T C^{-1} m_i - \frac{1}{2} m_i^T C^{-1} m_i, \quad i = 1, 2, \ldots, M,   (12.57)

after omitting terms independent of i. The Bayesian classifier is now represented by a set of linear decision functions.

Before one may apply the decision functions as above, it would be appropriate to verify the Gaussian nature of the PDFs of the variables on hand by conducting statistical tests [168, 1087]. Furthermore, it would be necessary to derive or estimate the mean vector and covariance matrix for each class; sample statistics computed from a training set may serve this purpose.
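Equation 12.56 can be sketched directly (function names and test values are my own; `slogdet` supplies the log-determinant):

```python
import numpy as np

def quadratic_discriminant(x, mean, cov, prior):
    """d_i(x) of Equation 12.56 for one class: the log-prior, minus half
    the log-determinant of the covariance, minus half the squared
    Mahalanobis distance of x from the class mean."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    _, logdet = np.linalg.slogdet(cov)
    maha2 = diff @ np.linalg.inv(cov) @ diff
    return np.log(prior) - 0.5 * logdet - 0.5 * maha2

def classify(x, means, covs, priors):
    """Assign x to the class with the largest decision-function value."""
    scores = [quadratic_discriminant(x, m, c, p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))
```

With equal covariance matrices, the log-determinant term is the same for all classes and the comparison reduces to the linear form of Equation 12.57.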
Example: Figure 12.15 shows plots of Gaussian PDF models applied to the shape factor f_cc (fractional concavity) of the 57 breast-mass contours shown in Figure 12.5 (see Chapter 6 and Section 12.12 for details). The two Gaussians represent the state-conditional PDFs of f_cc for the benign and malignant categories. Also shown are the posterior probabilities that the class of a sample is benign or malignant given an observed value of f_cc. The posterior probability functions were derived using Bayes' rule as in Equation 12.28, with the values of the prior probabilities of the two classes being equal to 0.5. It is seen that the posterior probabilities are both equal to 0.5 at f_cc = 0.32; the probability of a malignant classification is higher than that of a benign classification for f_cc > 0.32. Due to the use of equal prior probabilities, the transition point is the same as the point where the two Gaussian models for the state-conditional PDFs cross each other.

Figure 12.16 shows the same Gaussian models for the state-conditional PDFs as in Figure 12.15. However, the posterior probability functions were derived using the prior probability value of 0.9 for the benign category; the prior probability for the malignant category is then 0.1. In this case, the probability of a malignant classification is higher than that of a benign classification for f_cc > 0.36. The prior assumption that 90% of all masses encountered will be benign has pushed the decision threshold on f_cc to a higher value.

Figure 12.17 illustrates the 2D cluster plot and Gaussian PDF models for the shape factors f_cc and SI for the same dataset as described above; see Chapter 6 and Section 12.12 for details. The decision boundary indicated by the solid line is the optimal boundary under the assumption of 2D Gaussian PDFs for the two features and the two classes. Two malignant samples are misclassified by the decision boundary shown. (See Section 12.12.1 for examples of application of other pattern classification techniques to the same dataset.)

An interesting point to note from the examples above is that the Gaussian PDF models used are not capable of accommodating the prior knowledge that the shape factors are limited to the range [0, 1]. Other PDF models, such as the Rayleigh distribution (see Section 3.1.2), should be used if this aspect is important.

[Figure: plots of the state-conditional PDFs p(f_cc | C_i) and the posterior probabilities P(C_i | f_cc) against f_cc.]

FIGURE 12.15
Plots of Gaussian state-conditional PDF models for the shape factor f_cc for benign (dashed line) and malignant (dotted line) breast masses. (See Figure 12.5 for the contours of the masses.) The f_cc values of the samples are indicated on the horizontal axis for the 37 benign masses and the 20 malignant tumors. The posterior probability functions for the benign (solid line) and malignant (dash-dot line) classes are also shown. Equal prior probabilities of 0.5 were used for the two classes. See also Figure 12.16. Figure courtesy of F.J. Ayres.

[Figure: as in Figure 12.15, with unequal prior probabilities.]

FIGURE 12.16
Same as in Figure 12.15, but with the prior probability of the benign class equal to 0.9. Figure courtesy of F.J. Ayres.

[Figure: cluster plot; vertical axis: spiculation index SI/2; horizontal axis: fractional concavity f_cc.]

FIGURE 12.17
Plots of 2D Gaussian state-conditional PDF models for the shape factors f_cc and SI for benign masses (dash-dot line) and malignant tumors (dashed line). See Figure 12.5 for the contours of the masses. Feature values are shown for the 37 benign masses and the 20 malignant tumors. The benign class prototype (mean) is indicated by the solid diamond; that for the malignant class is indicated by the * symbol. The dashed and dash-dot contours indicate two constant-Mahalanobis-distance contours (level sets) each for the two Gaussian PDF models (see Equation 12.18) for the malignant and benign classes, respectively. The solid contour indicates the decision boundary, as given by Equations 12.53 to 12.56, with the decision function being equal for the two classes. The prior probabilities for the two classes were assumed to be equal to 0.5. Figure courtesy of F.J. Ayres.

12.5 Logistic Regression

Logistic classification is a statistical technique based on a logistic regression model that estimates the probability of occurrence of an event [1091, 1092, 1093]. The technique is designed for problems where patterns are to be classified into one of two classes. When the response variable is binary, theoretical and empirical considerations indicate that the response function is often curvilinear. The typical response function is shaped as a forward- or backward-tilted "S", and is known as a sigmoidal function. The function has asymptotes at 0 and 1.
In logistic pattern classification, an event is defined as the membership of a pattern vector in one of the two classes of concern. The method computes a variable that depends upon the given parameters and is constrained to the range [0, 1], so that it may be interpreted as a probability. The probability of the pattern vector belonging to the second class is simply the difference between unity and the estimated value.

For the case of a single feature or parameter, the logistic regression model is given as

P(\text{event}) = \frac{\exp(b_0 + b_1 x)}{1 + \exp(b_0 + b_1 x)},   (12.58)

or equivalently,

P(\text{event}) = \frac{1}{1 + \exp[-(b_0 + b_1 x)]},   (12.59)

where b_0 and b_1 are coefficients estimated from the data, and x is the independent (feature) variable. The relationship between the independent variable and the estimated probability is nonlinear, and follows an S-shaped curve that closely resembles the integral of a Gaussian function. In the case of an n-dimensional feature vector x, the model can be written as

P(\text{event}) = \frac{1}{1 + \exp(-z)},   (12.60)

where z is the linear combination

z = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_n x_n = b^T x;   (12.61)

that is, z is the dot product of the augmented feature vector x with a coefficient vector or weight vector b.

In linear regression, the coefficients of the model are estimated using the method of least squares: the selected regression coefficients are those that result in the smallest sums of squared distances between the observed and the predicted values of the dependent variable. In logistic regression, the parameters of the model are estimated using the maximum-likelihood method [1087, 1091]: the coefficients that make the observed results "most likely" are selected. Because the logistic regression model is nonlinear, an iterative algorithm is necessary for the estimation of the coefficients [1092, 1093]. A training set is required to design a classifier based upon logistic regression. See Sections 5.5.2, 5.5.3, and 12.12.1 for illustrations of the application of logistic regression to the classification of breast masses and tumors.
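A minimal sketch of the model (Equations 12.60 and 12.61) and an iterative maximum-likelihood fit; plain gradient ascent on the log-likelihood is used here as one possible iterative scheme (statistical packages typically use Newton-type iterations), and the learning rate, iteration count, and data are invented:

```python
import numpy as np

def sigmoid(z):
    """Equation 12.60: the logistic response function."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    """Estimate the coefficient vector b by maximizing the likelihood
    of the observed binary labels y with gradient ascent."""
    Xa = np.hstack([np.ones((len(X), 1)), X])   # augment with 1 for the b0 term
    b = np.zeros(Xa.shape[1])
    for _ in range(n_iter):
        p = sigmoid(Xa @ b)                     # P(event) for each sample
        b += lr * Xa.T @ (y - p) / len(y)       # gradient of the log-likelihood
    return b

def predict_prob(X, b):
    Xa = np.hstack([np.ones((len(X), 1)), X])
    return sigmoid(Xa @ b)
```

The update direction `Xa.T @ (y - p)` is exactly the gradient of the log-likelihood of the logistic model, so the fitted coefficients are those that make the observed labels "most likely", as described above.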
12.6 The Training and Test Steps

In the situation when a limited number of sample vectors with known classification are available, questions arise as to how many of the samples may be used to design or train a classifier, with the understanding that the classifier so designed needs to be tested using an independent set of samples of known classification as well. When a sufficiently large number of samples are available, they may be randomly split into two approximately equal sets, one for use as the training set and the other to be used as the test set. The random-splitting procedure may be repeated a number of times to generate several classifiers. Finally, one of the classifiers so designed may be selected based upon its performance in both the training and test steps.

12.6.1 The leave-one-out method

The leave-one-out method [1087] is suitable for the estimation of the classification accuracy of a pattern classification technique, particularly when the number of available samples is small. In this method, one of the available samples is excluded, the classifier is designed with the remaining samples, and then the classifier is applied to the excluded sample. The validity of the classification so performed is noted. This procedure is repeated with each available sample: if N training samples are available, N classifiers are designed and tested. The training and test sets for any one classifier so designed and tested are independent. However, while the training set for each classifier has N - 1 samples, the test set has only one sample. In the final analysis, every sample will have served as a training sample (N - 1 times) as well as a test sample (once). An average classification accuracy is then computed using all of the test results.
Let us consider a simple case in which the covariances of the sample sets of two classes are equal. Assume that two sample sets, S_1 = \{x_1^{(1)}, \ldots, x_{N_1}^{(1)}\} from class C_1, and S_2 = \{x_1^{(2)}, \ldots, x_{N_2}^{(2)}\} from class C_2, are given. Here, N_1 and N_2 are the numbers of samples in the sets S_1 and S_2, respectively. Assume also that the prior probabilities of the two classes are equal to each other. Then, according to the Bayes classifier, and assuming x to be governed by a multivariate Gaussian PDF, a sample x is assigned to the class C_1 if

(x - \tilde{m}_1)^T (x - \tilde{m}_1) - (x - \tilde{m}_2)^T (x - \tilde{m}_2) > \theta,   (12.62)

where θ is a threshold, and the sample mean \tilde{m}_i is given by

\tilde{m}_i = \frac{1}{N_i} \sum_{j=1}^{N_i} x_j^{(i)}.   (12.63)
In the leave-one-out method, one sample x_k^{(i)} is excluded from the training set and then used as the test sample. The mean estimate for class C_i without x_k^{(i)}, labeled as \tilde{m}_{ik}, may be computed as

\tilde{m}_{ik} = \frac{1}{N_i - 1} \left[ \sum_{j=1}^{N_i} x_j^{(i)} - x_k^{(i)} \right],   (12.64)

which leads to

x_k^{(i)} - \tilde{m}_{ik} = \frac{N_i}{N_i - 1} \left( x_k^{(i)} - \tilde{m}_i \right).   (12.65)

Then, testing a sample x_k^{(1)} from C_1 can be carried out as

(x_k^{(1)} - \tilde{m}_{1k})^T (x_k^{(1)} - \tilde{m}_{1k}) - (x_k^{(1)} - \tilde{m}_2)^T (x_k^{(1)} - \tilde{m}_2) = \left( \frac{N_1}{N_1 - 1} \right)^2 (x_k^{(1)} - \tilde{m}_1)^T (x_k^{(1)} - \tilde{m}_1) - (x_k^{(1)} - \tilde{m}_2)^T (x_k^{(1)} - \tilde{m}_2) > \theta.   (12.66)

Observe that when x_k^{(1)} is tested, only \tilde{m}_1 is changed and \tilde{m}_2 is not changed. Likewise, when a sample x_k^{(2)} from C_2 is tested, the decision rule is

(x_k^{(2)} - \tilde{m}_1)^T (x_k^{(2)} - \tilde{m}_1) - (x_k^{(2)} - \tilde{m}_{2k})^T (x_k^{(2)} - \tilde{m}_{2k}) = (x_k^{(2)} - \tilde{m}_1)^T (x_k^{(2)} - \tilde{m}_1) - \left( \frac{N_2}{N_2 - 1} \right)^2 (x_k^{(2)} - \tilde{m}_2)^T (x_k^{(2)} - \tilde{m}_2) < \theta.   (12.67)

The leave-one-out method provides the least-biased (practically unbiased) estimate of the classification accuracy of a given classification method for a given training set, and is useful when the number of samples available with known classification is small.
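The leave-one-out procedure can be sketched generically; the nearest-class-mean classifier used here as the base method is an illustrative choice of mine, not the specific rule of Equations 12.62 to 12.67:

```python
import numpy as np

def leave_one_out_accuracy(X, y, fit, predict):
    """Leave-one-out estimate: train on all samples but one, test on the
    held-out sample, and average the results over all N trials."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for k in range(len(X)):
        mask = np.arange(len(X)) != k          # exclude sample k from training
        model = fit(X[mask], y[mask])
        correct += int(predict(model, X[k]) == y[k])
    return correct / len(X)

# A nearest-class-mean classifier as the base method (illustrative):
def fit_means(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_mean(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))
```

Each of the N trials retrains the model, so only the class mean of the held-out sample's class changes between trials, mirroring the observation about \tilde{m}_1 and \tilde{m}_2 above.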

12.7 Neural Networks

In many practical problems, we may have no knowledge of the prior probabilities of patterns belonging to one class or another. No general classification rules may exist for the patterns on hand. Clinical knowledge may not yield symbolic knowledge bases that could be used to classify patterns that demonstrate exceptional behavior. In such situations, conventional pattern classification methods as described in the preceding sections may not be well-suited for the classification of pattern vectors. Artificial neural networks (ANNs), with the properties of experience-based learning and fault tolerance, should be effective in solving such classification problems [274, 402, 1089, 1090, 1094, 1095, 1096].
Figure 12.18 illustrates a two-layer perceptron with one hidden layer and one output layer for pattern classification. The network learns the similarities among patterns directly from their instances in the training set that is provided initially. Classification rules are inferred from the training data without prior knowledge of the pattern-class distributions in the data. Training of an ANN classifier is typically achieved by the back-propagation algorithm [274, 402, 1089, 1090, 1094, 1095, 1096].

[Figure: schematic of a two-layer perceptron, with input nodes x_i (i = 1, ..., I), hidden-layer outputs x'_j (j = 1, ..., J), output nodes y_k (k = 1, ..., K), and weights w_ij and w'_jk connecting the layers.]

FIGURE 12.18
A two-layer perceptron. Reproduced with permission from L. Shen, R.M. Rangayyan, and J.E.L. Desautels, "Detection and classification of mammographic calcifications", International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1403-1416, 1993. © World Scientific Publishing Company.

The actual output of the ANN, y_k, is calculated as

y_k = f\left( \sum_{j=1}^{J} w'_{jk} \, x'_j - \theta'_k \right), \quad k = 1, 2, \ldots, K,   (12.68)

where

x'_j = f\left( \sum_{i=1}^{I} w_{ij} \, x_i - \theta_j \right), \quad j = 1, 2, \ldots, J,   (12.69)

and

f(\beta) = \frac{1}{1 + \exp(-\beta)}.   (12.70)

In the equations above, θ_j and θ'_k are node offsets; w_ij and w'_jk are node weights; x_i are the elements of the pattern vectors (input parameters); and I, J, and K are the numbers of nodes in the input, hidden, and output layers, respectively. The weights and offsets are updated by

w'_{jk}(n+1) = w'_{jk}(n) + \eta \, [y_k (1 - y_k)(d_k - y_k)] \, x'_j + \alpha \, [w'_{jk}(n) - w'_{jk}(n-1)],   (12.71)

\theta'_k(n+1) = \theta'_k(n) + \eta \, [y_k (1 - y_k)(d_k - y_k)] \, (-1) + \alpha \, [\theta'_k(n) - \theta'_k(n-1)],   (12.72)

w_{ij}(n+1) = w_{ij}(n) + \eta \left[ x'_j (1 - x'_j) \sum_{k=1}^{K} \{ y_k (1 - y_k)(d_k - y_k) \, w'_{jk} \} \right] x_i + \alpha \, [w_{ij}(n) - w_{ij}(n-1)],   (12.73)

and

\theta_j(n+1) = \theta_j(n) + \eta \left[ x'_j (1 - x'_j) \sum_{k=1}^{K} \{ y_k (1 - y_k)(d_k - y_k) \, w'_{jk} \} \right] (-1) + \alpha \, [\theta_j(n) - \theta_j(n-1)],   (12.74)

where d_k are the desired outputs, α is a momentum term, η is a gain term, and n refers to the iteration number. Equations 12.71 and 12.72 represent the back-propagation steps, with y_k (1 - y_k) \, x'_j being the sensitivity of y_k to w'_{jk}, that is, \partial y_k / \partial w'_{jk}.
The classifier training algorithm is repeated until the errors between the desired outputs and the actual outputs for the training data are smaller than a predetermined threshold value.
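Equations 12.68 to 12.74 can be sketched as follows (a minimal illustration; the class layout, initialization range, and training data are my own choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(beta):
    """Sigmoid activation, Equation 12.70."""
    return 1.0 / (1.0 + np.exp(-beta))

class TwoLayerPerceptron:
    """Two-layer perceptron trained by back-propagation with gain (eta)
    and momentum (alpha) terms, following Equations 12.68 to 12.74."""
    def __init__(self, I, J, K, eta=1.0, alpha=0.7):
        self.W = rng.uniform(-0.5, 0.5, (I, J))    # input-to-hidden weights w_ij
        self.Wp = rng.uniform(-0.5, 0.5, (J, K))   # hidden-to-output weights w'_jk
        self.th = np.zeros(J)                      # hidden offsets theta_j
        self.thp = np.zeros(K)                     # output offsets theta'_k
        self.eta, self.alpha = eta, alpha
        self.dW = np.zeros_like(self.W); self.dWp = np.zeros_like(self.Wp)
        self.dth = np.zeros_like(self.th); self.dthp = np.zeros_like(self.thp)

    def forward(self, x):
        xp = f(x @ self.W - self.th)               # Equation 12.69
        y = f(xp @ self.Wp - self.thp)             # Equation 12.68
        return xp, y

    def train_step(self, x, d):
        xp, y = self.forward(x)
        delta_o = y * (1 - y) * (d - y)                 # output error term
        delta_h = xp * (1 - xp) * (self.Wp @ delta_o)   # back-propagated term
        # Equations 12.71 to 12.74: gradient step plus momentum
        self.dWp = self.eta * np.outer(xp, delta_o) + self.alpha * self.dWp
        self.dthp = -self.eta * delta_o + self.alpha * self.dthp
        self.dW = self.eta * np.outer(x, delta_h) + self.alpha * self.dW
        self.dth = -self.eta * delta_h + self.alpha * self.dth
        self.Wp += self.dWp; self.thp += self.dthp
        self.W += self.dW; self.th += self.dth
```

In practice, `train_step` would be called repeatedly over the training set until the output errors fall below the predetermined threshold, as stated above; the momentum terms reuse the previous update, matching the α [w(n) - w(n-1)] terms of the equations.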
Example of application: In the work of Shen et al. [274, 320], the shape features (mf, ff, cf) (see Section 6.6) were employed as inputs (x_i, i = 1, 2, 3) to the ANN as above, and calcifications were classified into two groups: benign or malignant. Therefore, the numbers of input (I) and output (K) nodes are 3 and 2, respectively. Feature sets for training of the ANN were computed for 143 calcifications, of which 64 were benign and 79 malignant. The calcifications were obtained from 18 mammograms of biopsy-proven cases chosen from the Radiology Teaching Library of the Foothills Hospital, Calgary, Alberta, Canada. Boundaries of the calcifications were obtained by region growing after manual selection of seed pixels and tolerance [334]. Figure 6.25 provides a 3D plot of the feature vectors for the 143 calcifications used as the training data.

Pattern Classification and Diagnostic Decision 1129
Three parameters, namely, the number of hidden nodes J, the gain term η, and the momentum term α, need to be determined before training the two-layer perceptron. There is no general rule available for the selection of these parameters. One of the most common methods is trial and error: choosing the set of parameters with which the highest training speed (the smallest number of iterations) is achieved. However, the major disadvantage of this method is that the classification effectiveness after training is not considered. To use the training data set more efficiently and to overcome the above-mentioned shortcoming of the trial-and-error method, Shen et al. included a leave-one-out type of algorithm [1087, 1097] in the procedure for determining the three parameters (J, η, and α), as described next.
First, η and α were held at fixed values in order to determine the most suitable number of hidden nodes J. Table 12.1 and Table 12.2 list the results (the average number of iterations and the number of erroneous classifications) of the training and parameter determination method under two circumstances: (η = 1.0, α = 0.7) and (η = 2.0, α = 0.7), respectively. Based upon Tables 12.1 and 12.2, J = 10 could be selected, as it achieved the fewest classification errors in a reasonable number of iterations. (The use of a higher number of nodes did not result in a significant reduction in the number of iterations and did not reduce the error of classification.)

After determining the most suitable value for J, the best value for η was determined by fixing J = 10 and α = 0.7. The corresponding results are listed in Table 12.3. It is seen that η = 2.0 is the best value of those evaluated. Finally, the value for α was determined with J = 10 and η = 2.0. Table 12.4 provides the results of various trials. It is clear that the best value of those tried for α is 0.7.

After obtaining the most suitable values for the parameter set, the perceptron was trained again by using all of the training data with J = 10, η = 2.0, and α = 0.7. All of the 143 calcifications in the training set were correctly classified in 1,268 iterations. The weight set obtained by this procedure was utilized for the classification of calcifications in the test set.
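The one-parameter-at-a-time selection procedure described above can be sketched generically as follows. The `evaluate` callback is a hypothetical stand-in for training the perceptron with a given (J, η, α) setting and counting the leave-one-out classification errors; the function and variable names are illustrative.

```python
def select_parameters(evaluate, J_values, eta_values, alpha_values,
                      eta0=1.0, alpha0=0.7):
    """Sequential parameter selection: hold two parameters fixed, sweep
    the third, and keep the value giving the fewest classification
    errors, breaking ties by the smaller mean number of iterations.
    evaluate(J, eta, alpha) -> (mean_iterations, n_errors)."""
    def score(J, eta, alpha):
        mean_iters, n_errors = evaluate(J, eta, alpha)
        return (n_errors, mean_iters)        # fewest errors first, then fastest

    J = min(J_values, key=lambda J: score(J, eta0, alpha0))      # cf. Tables 12.1-12.2
    eta = min(eta_values, key=lambda e: score(J, e, alpha0))     # cf. Table 12.3
    alpha = min(alpha_values, key=lambda a: score(J, eta, a))    # cf. Table 12.4
    return J, eta, alpha
```

Note that such a one-at-a-time sweep does not explore all parameter combinations jointly; it merely reproduces the order used in the example (J first, then η, then α).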
Sections of size 1 024 × 768, 768 × 512, 512 × 768, and 512 × 768 pixels of four typical mammograms from complete images of up to 2 560 × 4 096 pixels with biopsy-proven calcifications were utilized for the test step. Two of the sections had a total of 58 benign calcifications, whereas the other two contained 241 ± 10 malignant calcifications. Based upon visual inspection by a radiologist, the detection rates of the multitolerance region-growing algorithm (see Section 5.4.9) were 81% with no false calcifications and 85 ± 3% with 29 false calcifications for the benign and malignant sections, respectively [274]. After the detection procedure, the calcifications were classified by the ANN classifier. The correct classification rate for the detected benign calcifications
TABLE 12.1
Results of the Training and ANN Parameter Determination Algorithm with η = 1.0 and α = 0.7.

Number of          Number of            Number of erroneous
hidden nodes (J)   iterations (mean)    classifications out of 143 samples
 1                 1,810                4
 3                 1,809                4
 5                 1,774                4
 7                 1,729                4
 9                 1,697                4
10                 1,697                3
15                 1,676                3
20                 1,675                3
30                 1,676                3

Reproduced with permission from L. Shen, R.M. Rangayyan, and J.E.L. Desautels, "Detection and classification of mammographic calcifications", International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1403-1416, 1993. © World Scientific Publishing Company.
TABLE 12.2
Results of the Training and ANN Parameter Determination Algorithm with η = 2.0 and α = 0.7.

Number of          Number of            Number of erroneous
hidden nodes (J)   iterations (mean)    classifications out of 143 samples
 5                 1,448                3
10                 1,391                3
12                 1,380                4
15                 1,387                3
20                 1,401                4

Reproduced with permission from L. Shen, R.M. Rangayyan, and J.E.L. Desautels, "Detection and classification of mammographic calcifications", International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1403-1416, 1993. © World Scientific Publishing Company.
TABLE 12.3
Results of the Training and ANN Parameter Determination Algorithm with J = 10 and α = 0.7.

Gain     Number of            Number of erroneous
(η)      iterations (mean)    classifications out of 143 samples
0.3      4,088                4
0.7      2,145                4
1.0      1,697                3
1.5      1,431                4
2.0      1,391                3
2.3      1,430                3
3.0      2,691                5

Reproduced with permission from L. Shen, R.M. Rangayyan, and J.E.L. Desautels, "Detection and classification of mammographic calcifications", International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1403-1416, 1993. © World Scientific Publishing Company.

TABLE 12.4
Results of the Training and ANN Parameter Determination
Algorithm with J = 10 and
= 2:0.
Momentum Number of Number of erroneous
iterations (mean) classi cations out of 143 samples
0.5 2,426 4
0.7 1,391 3
0.9 4,152 7

Reproduced with permission from L. Shen, R.M. Rangayyan, and J.E.L. De-
sautels, \Detection and classi cation of mammographic calci cations", Inter-
national Journal of Pattern Recognition and Articial Intelligence, 7(6):1403{
1416, 1993. c World Scienti c Publishing Company.
was 94%, whereas the correct classification rate for the correctly detected malignant calcifications was 87%.

Classification errors for benign calcifications arose mainly from overlapping calcifications. One possible solution to this problem is two-view analysis [1098]. A likely reason for erroneous classification of malignant calcifications is that not all calcifications within a malignant calcification cluster may possess rough contours; a cluster analysis procedure may assist in overcoming this situation.
12.8 Measures of Diagnostic Accuracy

Pattern recognition or classification decisions that are made in the context of medical diagnosis have implications that go beyond statistical measures of accuracy and validity. We need to provide a clinical or diagnostic interpretation of statistical or rule-based decisions made with pattern vectors.

Consider the simple situation of screening, which represents the use of a test to determine the presence or absence of a specific disease in a certain study population: the decision to be made is binary. Let us represent by A the event that a subject has the particular pathology (or is abnormal), and by N the event that the subject does not have the disease (which may not necessarily mean that the subject is normal). Let the prior probabilities P(A) and P(N) represent the fractions of subjects with the disease and the normal subjects, respectively, in the test population. Let T+ represent a positive screening test result (indicative of the presence of the disease) and T- a negative result. The following possibilities arise [1099]:

A true positive (TP) or a "hit" is the situation when the test is positive for a subject with the disease. The true-positive fraction (TPF) or sensitivity S+ is given as P(T+|A) or

S^+ = \frac{\text{number of TP decisions}}{\text{number of subjects with the disease}}.   (12.75)

The sensitivity of a test represents its capability to detect or identify the presence of the disease of concern.

A true negative (TN) represents the case when the test is negative for a subject who does not have the disease. The true-negative fraction (TNF) or specificity S- is given as P(T-|N) or

S^- = \frac{\text{number of TN decisions}}{\text{number of subjects without the disease}}.   (12.76)

The specificity of a test indicates its accuracy in recognizing the absence of the disease of concern.
A false negative (FN) or a "miss" is said to occur when the test is negative for a subject who has the disease of concern; that is, the test has missed the case. The probability of this error, known as the false-negative fraction (FNF), is P(T-|A).

A false positive (FP) or a false alarm is defined as the case where the result of the test is positive when the individual being tested does not have the disease. The probability of this type of error, known as the false-positive fraction (FPF), is P(T+|N).

Table 12.5 summarizes the classification possibilities. Observe that
FNF + TPF = 1,
FPF + TNF = 1,
S- = 1 - FPF = TNF, and
S+ = 1 - FNF = TPF.

A summary measure of accuracy may be defined as [1099]

\text{accuracy} = S^+ \, P(A) + S^- \, P(N),   (12.77)

where P(A) is the fraction of the study population that actually has the disease (that is, the prevalence of the disease) and P(N) is the fraction of the study population that is actually free of the disease.

TABLE 12.5
Schematic Representation of a Classification Matrix.

                 Predicted group
Actual group     Normal        Abnormal
Normal           S- = TNF      FPF
Abnormal         FNF           S+ = TPF

Reproduced with permission from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.
The efficiency of a test may also be indicated by its predictive values. The positive predictive value PPV of a test, defined as

PPV = 100 \, \frac{TP}{TP + FP},   (12.78)

represents the percentage of the cases labeled as positive by the test that are actually positive. The negative predictive value NPV, defined as

NPV = 100 \, \frac{TN}{TN + FN},   (12.79)

represents the percentage of cases labeled as negative by the test that are actually negative.
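The fractions and predictive values of Equations 12.75 to 12.79, along with the accuracy measure of Equation 12.77, follow directly from the four decision counts. A minimal sketch (the function name and dictionary layout are illustrative):

```python
def screening_metrics(TP, TN, FP, FN):
    """Sensitivity, specificity, and accuracy (Eq. 12.75-12.77), and the
    predictive values of Eq. 12.78 and 12.79, from raw decision counts."""
    n_disease = TP + FN              # subjects with the disease
    n_healthy = TN + FP              # subjects without the disease
    N = n_disease + n_healthy
    Sp = TP / n_disease              # S+ = TPF = sensitivity
    Sm = TN / n_healthy              # S- = TNF = specificity
    return {
        "sensitivity": Sp,
        "specificity": Sm,
        "FNF": 1.0 - Sp,
        "FPF": 1.0 - Sm,
        "accuracy": Sp * (n_disease / N) + Sm * (n_healthy / N),  # Eq. 12.77
        "PPV": 100.0 * TP / (TP + FP),                            # Eq. 12.78
        "NPV": 100.0 * TN / (TN + FN),                            # Eq. 12.79
    }
```

Note that the accuracy of Equation 12.77 weights sensitivity and specificity by the prevalence P(A) = (TP + FN)/N and its complement, so it equals the familiar (TP + TN)/N.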
When a new test or method of diagnosis is being developed and tested, it will be necessary to use another previously established method as a reference to confirm the presence or absence of the disease. Such a reference method is often called the gold standard. When computer-based methods need to be tested, it is common practice to use the diagnosis or classification provided by an expert in the field as the gold standard. Results of biopsy, other established laboratory or investigative procedures, or long-term clinical follow-up in the case of normal subjects may also serve this purpose. The term "actual group" in Table 12.5 indicates the result of the gold standard, and the term "predicted group" refers to the result of the test conducted.

Health-care professionals (and the general public) would be interested in knowing the probability that a subject with a positive test result actually has the disease: this is given by the conditional probability P(A|T+). The question could be answered by using Bayes rule [1087], using which we can obtain

P(A \,|\, T^+) = \frac{P(A)\, P(T^+|A)}{P(A)\, P(T^+|A) + P(N)\, P(T^+|N)}.   (12.80)

Observe that P(T+|A) = S+ and P(T+|N) = 1 - S-. In order to determine the posterior probability as above, the sensitivity and specificity of the test, as well as the prior probabilities of negative cases and positive cases (the rate of prevalence of the disease), should be known.
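Equation 12.80 is easily evaluated numerically. The sketch below (with illustrative names) also shows why a highly sensitive and specific test can still yield a modest posterior probability when the disease is rare.

```python
def posterior_given_positive(sensitivity, specificity, prevalence):
    """P(A | T+) via Bayes rule (Eq. 12.80), with P(T+|A) = S+ and
    P(T+|N) = 1 - S-."""
    p_pos_given_A = sensitivity           # P(T+|A) = S+
    p_pos_given_N = 1.0 - specificity     # P(T+|N) = 1 - S-
    pA, pN = prevalence, 1.0 - prevalence
    return (pA * p_pos_given_A) / (pA * p_pos_given_A + pN * p_pos_given_N)
```

For example, with S+ = S- = 0.95 and a prevalence of 1%, the posterior probability P(A|T+) is only about 0.16: most positive results come from the much larger disease-free population.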
A cost matrix may be defined, as in Table 12.6, to reflect the overall cost-effectiveness of a test or method of diagnosis. The cost of conducting the test and arriving at a TN decision is indicated by CN: this could be seen as the cost of subjecting an otherwise-normal subject to the test for the purposes of screening for a disease. The cost of the test when a TP is found is shown as CA: this might include the costs of further tests, treatment, follow-up, etc., which are secondary to the test itself, but part of the screening and health-care program. The value CFP indicates the cost of an FP result, that is, a false alarm: this represents the cost of erroneously subjecting an individual without the disease to further tests or therapy. Whereas it may be easy to identify the costs of clinical tests or treatment procedures, it is difficult to quantify the traumatic and psychological effects of an FP result and the consequent procedures on a normal subject. The cost CFN is the cost of an FN result: the presence of the disease in a patient is not diagnosed, the condition worsens with time, the patient faces more complications of the disease, and the health-care system or the patient has to bear the costs of further tests and delayed therapy.

A loss factor due to misclassification may be defined as

L = FPF \times C_{FP} + FNF \times C_{FN}.   (12.81)

The total cost of the screening program may be computed as

C_S = TPF \times C_A + TNF \times C_N + FPF \times C_{FP} + FNF \times C_{FN}.   (12.82)

Metz [1099] provides more details on the computation of the costs of diagnostic tests.

TABLE 12.6
Schematic Representation of the Cost Matrix of a Diagnostic Method.

                 Predicted group
Actual group     Normal    Abnormal
Normal           CN        CFP
Abnormal         CFN       CA

Reproduced with permission from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.

12.8.1 Receiver operating characteristics

Measures of overall correct classification of patterns as percentages provide limited indications of the accuracy of a diagnostic method. The provision of a separate correct classification rate for each category, such as sensitivity and specificity, can facilitate improved analysis. However, these measures do not indicate the dependence of the results upon the decision threshold. Furthermore, the effect of the rate of incidence or prevalence of the particular disease is not considered.
From another perspective, it is desirable to have a screening or diagnostic test that is both highly sensitive and highly specific. In reality, however, such a test is usually not achievable. Most tests are based on clinical measurements that can assume limited ranges of a variable (or a few variables) with an inherent trade-off between sensitivity and specificity. The relationship between sensitivity and specificity is illustrated by the receiver operating characteristics (ROC) curve, which facilitates improved analysis of the classification accuracy of a diagnostic method [1099, 1100, 1101].

Consider the situation illustrated in Figure 12.19. For a given diagnostic test with the decision variable z, we have predetermined state-conditional PDFs of the decision variable z for actually negative or normal cases, indicated as p(z|N), and for actually positive or abnormal cases, indicated as p(z|A). As indicated in Figure 12.19, the two PDFs will almost always overlap, given that no method can be perfect. The user or operator needs to determine a decision threshold (indicated by the vertical line) so as to strike a compromise between sensitivity and specificity. Lowering the decision threshold will increase TPF at the cost of increased FPF. (Observe that TNF and FNF may be derived easily from FPF and TPF, respectively.)

[Plot: the PDFs p(z|N) and p(z|A) versus the decision variable z, with the decision threshold marked by a vertical line and the TNF, TPF, FNF, and FPF regions indicated.]

FIGURE 12.19
State-conditional PDFs of a diagnostic decision variable z for normal and abnormal cases. The vertical line represents the decision threshold. Reproduced with permission from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.
An ROC curve is a graph that plots (FPF, TPF) points obtained for a range of decision thresholds or cut points of the decision method (see Figure 12.20). The cut point could correspond to the threshold of the probability of prediction. By varying the decision threshold, we get different decision fractions, within the range [0, 1]. An ROC curve describes the inherent detection (diagnostic or discriminant) characteristics of a test or method: a receiver (user) may choose to operate at any point along the curve. The ROC curve is independent of the prevalence of the disease or disorder being investigated because it is based upon normalized decision fractions. Because all cases may be simply labeled as negative or all may be labeled as positive, an ROC curve has to pass through the points (0, 0) and (1, 1).

[Plot: three ROC curves, labeled ROC1, ROC2, and ROC3, of TPF versus FPF, along with the ideal operating point (0, 1).]

FIGURE 12.20
Examples of ROC curves. Reproduced with permission from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.

In a diagnostic situation where a human operator or specialist is required to provide the diagnostic decision, ROC analysis is usually conducted by requiring the specialist to rank each case as one of five possibilities [1099]:

1. definitely or almost definitely negative (normal),
2. probably negative,
3. possibly positive,
4. probably positive,
5. definitely or almost definitely positive (abnormal).

Item 3 above may be replaced by "indeterminate", if desired. Various values of TPF and FPF are then calculated by varying the decision threshold from level 5 to level 1 according to the decision items listed above. The resulting (FPF, TPF) points are then plotted to form an ROC curve. The maximum likelihood estimation method [1102] is commonly used to fit a binormal ROC curve to data as above.

A summary measure of the effectiveness of a test is given by the area under the ROC curve, traditionally labeled as Az. It is clear from Figure 12.20 that Az is limited to the range [0, 1]. A test that gives a larger area under the ROC curve indicates a better method than one with a smaller area: in Figure 12.20, the method corresponding to ROC3 is better than the method corresponding to ROC2; both are better than the method represented by ROC1 with Az = 0.5. An ideal method will have an ROC curve that follows the vertical line from (0, 0) to (0, 1), and then the horizontal line from (0, 1) to (1, 1), with Az = 1: the method has TPF = 1 with FPF = 0, which is ideal. (Note: This would require the PDFs represented in Figure 12.19 to be nonoverlapping.) Examples of ROC curves are provided in Sections 12.10, 12.11, and 12.12.

In a form of ROC analysis known as the free-response ROC or FROC, the sensitivity is plotted against the number of false positives per image. See Section 8.10.7 and Figure 8.73 for an example of this type of analysis.
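The construction of an empirical ROC curve from five-category rating data, and a trapezoidal estimate of the area Az, can be sketched as follows. This is an illustrative sketch only; the binormal maximum-likelihood fit mentioned above is a separate, more involved procedure.

```python
import numpy as np

def roc_points_from_ratings(ratings_normal, ratings_abnormal,
                            levels=(1, 2, 3, 4, 5)):
    """(FPF, TPF) operating points obtained by sweeping the decision
    threshold from the highest rating category (only 'definitely
    positive' cases called positive) down to the lowest (all called
    positive)."""
    rn = np.asarray(ratings_normal)
    ra = np.asarray(ratings_abnormal)
    points = [(0.0, 0.0)]                      # all cases labeled negative
    for t in sorted(levels, reverse=True):     # positive if rating >= t
        points.append((float(np.mean(rn >= t)), float(np.mean(ra >= t))))
    points.append((1.0, 1.0))                  # all cases labeled positive
    return points

def area_under_roc(points):
    """Trapezoidal estimate of Az from (FPF, TPF) points."""
    pts = sorted(set(points))                  # order by FPF, drop duplicates
    return sum((f2 - f1) * (t1 + t2) / 2.0
               for (f1, t1), (f2, t2) in zip(pts, pts[1:]))
```

Perfectly separated rating distributions give Az = 1, and identical distributions for the normal and abnormal groups give the chance diagonal with Az = 0.5, matching the interpretation of Figure 12.20.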

12.8.2 McNemar's test of symmetry

Suppose we have available two methods to perform a certain diagnostic test. How may we compare the classification performance of one against that of the other?

Measures of overall classification accuracy, such as the percentage of correct classification or the area under the ROC curve, provide simple measures to compare two or more diagnostic methods. If more details are required as to how the classification of groups of cases varies from one method to another, McNemar's test of symmetry [1103, 1104] would be an appropriate tool.

McNemar's test is based on the construction of contingency tables that compare the results of two classification methods. The rows of a contingency table represent the outcomes of one of the methods used as the reference, possibly a gold standard (labeled as Method A in Table 12.7); the columns represent the outcomes of the other method, which is usually a new method (Method B) to be evaluated against the gold standard. The entries in the table are counts that correspond to particular diagnostic categories, which in Table 12.7 are labeled as normal, indeterminate, and abnormal. A separate contingency table should be prepared for each true category of the patterns; for example, normal and abnormal. (The class "indeterminate" may not be applicable as a true category.) The true category of each case may have to be determined by a third method (for example, biopsy or surgery).

TABLE 12.7
Schematic Representation of a Contingency Table for McNemar's Test of Symmetry.

                 Method B
Method A         Normal    Indeterminate    Abnormal    Total
Normal           a (1)     b (2)            c (3)       R1
Indeterminate    d (4)     e (5)            f (6)       R2
Abnormal         g (7)     h (8)            i (9)       R3
Total            C1        C2               C3          N

Reproduced with permission from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.

In Table 12.7, the variables a, b, c, d, e, f, g, h, and i denote the counts in each cell, and the numbers in parentheses denote the cell number. The variables C1, C2, and C3 denote the total numbers of counts in the corresponding columns; R1, R2, and R3 denote the total numbers of counts in the corresponding rows. The total number of cases in the true category represented by the table is N = C1 + C2 + C3 = R1 + R2 + R3.

Each cell in a contingency table represents a paired outcome. For example, in evaluating the diagnostic efficiency of Method B versus Method A, cell number 3 will contain the number of samples that were classified as normal by Method A but as abnormal by Method B. The row totals R1, R2, and R3, and the column totals C1, C2, and C3 may be used to determine the sensitivity and specificity of the two methods.

High values along the main diagonal (a, e, i) of a contingency table (see Table 12.7) indicate no significant change in diagnostic performance with Method B as compared to Method A. In a contingency table for truly abnormal cases, a high value in the upper-right portion (cell number 3) will indicate an improvement in diagnosis (higher sensitivity) with Method B as compared to Method A. In evaluating a contingency table for truly normal cases, Method B will have a higher specificity than Method A if a large value is found in cell 7. McNemar's method may be used to perform detailed statistical analysis of improvement in performance based upon contingency tables if large numbers of cases are available in each category [1103, 1104]. Examples of contingency tables are provided in Section 12.10.
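For contingency tables such as Table 12.7, the test-of-symmetry statistic compares each pair of off-diagonal cells. The sketch below uses the Bowker form of McNemar's test, which for a 2 × 2 table reduces to the familiar (b - c)²/(b + c) statistic on the two discordant cells; the details of the procedure are given in [1103, 1104].

```python
def mcnemar_symmetry_statistic(table):
    """Chi-square statistic for McNemar's (Bowker's) test of symmetry
    on an m x m contingency table, given as a list of m rows of m
    counts.  The statistic is referred to a chi-square distribution
    with m(m - 1)/2 degrees of freedom."""
    m = len(table)
    stat = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            nij, nji = table[i][j], table[j][i]
            if nij + nji > 0:             # skip empty off-diagonal pairs
                stat += (nij - nji) ** 2 / (nij + nji)
    return stat
```

A perfectly symmetric table (the two methods disagree in balanced ways) gives a statistic of zero; large values indicate a systematic shift of cases from one category to another between the two methods.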

12.9 Reliability of Features, Classifiers, and Decisions

In most practical applications of biomedical image analysis, the researcher is presented with the problem of designing a pattern classification and decision-making system using a small number of training samples (images), with no knowledge of the distributions of the features or parameters computed from the images. The size of the training set, relative to the number of features used in the pattern classification system, affects the accuracy and reliability of the decisions made [1105, 1106, 1107]. One should not increase the number of features to be used without a simultaneous increase in the number of training samples, as the two quantities together affect the bias and variance of the classifier. On the other hand, when the training set has a fixed number of samples, the addition of more features beyond a certain limit will lead to poorer performance of the classifier: this is known as the "curse of dimensionality". The situation leads to "over-training": the classifier is trained to recognize the idiosyncrasies of the training set, and does not generalize or extend well to other data. It is desirable to be able to analyze the bias and variance of a classification rule while isolating the effects of the functional form of the distributions of the features used.

Raudys and Jain [1106] give a rule-of-thumb table for the number of training samples required in relation to the number of features used in order to remain within certain limits of classification errors for five pattern classification methods. When the available features are ordered in terms of their individual classification performance, the optimal number of features to be used with a certain classification method and training set may be determined by obtaining unbiased estimates of the classification accuracy with the number of features increased one at a time in order. A point will be reached when the performance deteriorates, which will indicate the optimal number of features to be used. This method, however, cannot take into account the joint performance of various combinations of features: exhaustive combinations of all features may have to be evaluated to take this aspect into consideration. Software packages such as the Statistical Package for the Social Sciences (SPSS) [1092, 1093] provide programs to facilitate feature evaluation and selection, as well as the estimation of classification accuracies.

12.9.1 Statistical separability and feature selection

In practical applications of pattern classification, several parameters are often required in order to discriminate between multiple classes. Given the fact that most features provide for limited discrimination between classes due to the overlap in the ranges of their values for the various classes, it is natural to use several features. However, there would be costs associated with the measurement, derivation, and/or computation of each feature. It would be advantageous to be able to assess the contribution made by each feature toward the task of discriminating between the classes of interest, and to be able to select the feature set that provides the best separation between classes and the lowest classification error. The notion of statistical separability of features between classes is useful in addressing these concerns [1108].

Normalized distance between PDFs: Consider a feature x that has the means m1 and m2 and standard deviation values σ1 and σ2 for the two classes C1 and C2. Assuming that the PDFs p(x|C1) and p(x|C2) overlap, the area of overlap is related to the error of classification. If the variances are held constant, the overlap between the PDFs decreases as |m1 - m2| increases. If the means are held constant, the overlap increases as σ1 and σ2 increase (the dispersion of the features increases). These notions are captured by the normalized distance between the means, defined as [1108]

d_n = \frac{|m_1 - m_2|}{\sigma_1 + \sigma_2}.   (12.83)

The measure dn provides an indicator of the statistical separability of the PDFs. A limitation of dn, however, is that dn = 0 if m1 = m2, regardless of σ1 and σ2. Furthermore, the formulation above is valid only for a single feature x; a generalization to a feature vector x would be desirable.

Divergence: Let us rewrite the likelihood ratio in Equation 12.37 as

l_{ij}(\mathbf{x}) = \frac{p(\mathbf{x}|C_i)}{p(\mathbf{x}|C_j)}.   (12.84)

Applying the logarithm, we get

l'_{ij}(\mathbf{x}) = \ln[l_{ij}(\mathbf{x})] = \ln[p(\mathbf{x}|C_i)] - \ln[p(\mathbf{x}|C_j)].   (12.85)

The divergence Dij between the PDFs p(x|Ci) and p(x|Cj) is defined as [1108]

D_{ij} = E[\,l'_{ij}(\mathbf{x}) \,|\, C_i\,] + E[\,l'_{ji}(\mathbf{x}) \,|\, C_j\,],   (12.86)

where

E[\,l'_{ij}(\mathbf{x}) \,|\, C_i\,] = \int_{\mathbf{x}} l'_{ij}(\mathbf{x}) \, p(\mathbf{x}|C_i) \, d\mathbf{x}.   (12.87)

Divergence has the following properties [1108]:
- D_ij > 0;
- D_ii = 0;
- D_ij = D_ji; and
- if the individual features x_1, x_2, ..., x_n are statistically independent, D_{ij}(x_1, x_2, \ldots, x_n) = \sum_{k=1}^{n} D_{ij}(x_k).

It follows that adding more features that are statistically independent of one another will increase divergence and statistical separability.

In the case of multivariate Gaussian PDFs, we have [1108]

D_{ij} = \frac{1}{2} \mathrm{Tr}[(\mathbf{C}_i - \mathbf{C}_j)(\mathbf{C}_j^{-1} - \mathbf{C}_i^{-1})] + \frac{1}{2} \mathrm{Tr}[(\mathbf{C}_i^{-1} + \mathbf{C}_j^{-1})(\mathbf{m}_i - \mathbf{m}_j)(\mathbf{m}_i - \mathbf{m}_j)^T].   (12.88)

The second term in the equation above is similar to the normalized distance dn as defined in Equation 12.83, and becomes zero for PDFs with identical means; however, due to the first term, Dij ≠ 0 unless the covariance matrices are identical.

In the case of the existence of multiple classes Ci, i = 1, 2, ..., m, the pairwise divergence values may be averaged to obtain a single measure across all of the m classes as [1108]

D_{av} = \sum_{i=1}^{m} \sum_{j=1}^{m} p(C_i) \, p(C_j) \, D_{ij}.   (12.89)

A limitation of both dn and Dij is that they increase without an upper bound as the separation between the means increases. On the other hand, the error of classification is limited to the range 0 - 100% or [0, 1].
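For the Gaussian case, Equation 12.88 can be evaluated directly from the class means and covariance matrices. A sketch (function and variable names are illustrative):

```python
import numpy as np

def divergence_gaussian(m_i, C_i, m_j, C_j):
    """Divergence D_ij between two multivariate Gaussian class PDFs
    with means m_i, m_j and covariance matrices C_i, C_j (Eq. 12.88)."""
    m_i = np.atleast_1d(m_i).astype(float)
    m_j = np.atleast_1d(m_j).astype(float)
    C_i = np.atleast_2d(C_i).astype(float)
    C_j = np.atleast_2d(C_j).astype(float)
    Ci_inv, Cj_inv = np.linalg.inv(C_i), np.linalg.inv(C_j)
    dm = (m_i - m_j).reshape(-1, 1)
    term1 = 0.5 * np.trace((C_i - C_j) @ (Cj_inv - Ci_inv))
    term2 = 0.5 * np.trace((Ci_inv + Cj_inv) @ (dm @ dm.T))
    return float(term1 + term2)
```

When the covariance matrices are identical the first term vanishes, and the divergence reduces to the mean-separation (second) term, in line with the remark on Equation 12.88 above.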
Jeffries-Matusita distance: The Jeffries-Matusita (JM) distance provides an improved measure of the separation between PDFs than the normalized distance and divergence. The JM distance between the PDFs p(x|Ci) and p(x|Cj) is defined as [1108]

J_{ij} = \left\{ \int_{\mathbf{x}} \left[ \sqrt{p(\mathbf{x}|C_i)} - \sqrt{p(\mathbf{x}|C_j)} \right]^2 d\mathbf{x} \right\}^{1/2}.   (12.90)

In the case of multivariate Gaussian PDFs, we have [1108]

J_{ij} = \sqrt{2 \, [1 - \exp(-\alpha)]},   (12.91)

where

\alpha = \frac{1}{8} (\mathbf{m}_i - \mathbf{m}_j)^T \left( \frac{\mathbf{C}_i + \mathbf{C}_j}{2} \right)^{-1} (\mathbf{m}_i - \mathbf{m}_j) + \frac{1}{2} \ln \left[ \frac{\left| \frac{1}{2}(\mathbf{C}_i + \mathbf{C}_j) \right|}{(|\mathbf{C}_i| \, |\mathbf{C}_j|)^{1/2}} \right],   (12.92)

where |Ci| is the determinant of Ci.

An advantage of the JM distance is that it is limited to the range [0, √2]. Jij = 0 when the means of the PDFs are equal and the covariance matrices are identical. Pairwise JM distances may be averaged over multiple classes, similar to the averaging of divergence as in Equation 12.89. The JM distance determines the upper and lower bounds on the error of classification [1108].

It should be observed that divergence and the JM distance are defined for a given feature vector x. The measures would have to be computed for all combinations of features in order to select the best feature set for a particular pattern-classification problem.
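Equations 12.91 and 12.92 likewise translate directly into code for Gaussian class PDFs; a sketch under the same illustrative naming conventions:

```python
import numpy as np

def jm_distance_gaussian(m_i, C_i, m_j, C_j):
    """Jeffries-Matusita distance between two multivariate Gaussian
    class PDFs (Eq. 12.91 and 12.92)."""
    m_i = np.atleast_1d(m_i).astype(float)
    m_j = np.atleast_1d(m_j).astype(float)
    C_i = np.atleast_2d(C_i).astype(float)
    C_j = np.atleast_2d(C_j).astype(float)
    C_avg = 0.5 * (C_i + C_j)
    dm = m_i - m_j
    alpha = 0.125 * dm @ np.linalg.inv(C_avg) @ dm \
          + 0.5 * np.log(np.linalg.det(C_avg)
                         / np.sqrt(np.linalg.det(C_i) * np.linalg.det(C_j)))
    return float(np.sqrt(2.0 * (1.0 - np.exp(-alpha))))
```

The exponential in Equation 12.91 is what bounds Jij to [0, √2] even as the separation between the means grows without bound, in contrast to dn and the divergence.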

12.10 Application: Image Enhancement for Breast Cancer Screening

Accurate detection of breast cancer depends upon the quality of mammograms; in particular, on the visibility of small, low-contrast objects within the breast image. Unfortunately, the contrast between malignant tissue and normal tissue is often low in mammograms, making the detection of malignant patterns difficult. Contrast between malignant tissue and normal dense tissue may be present on a mammogram, but be below the threshold of human perception. As well, microcalcifications in a sufficiently dense mass may not be readily visible because of low contrast. Hence, the fundamental enhancement needed in mammography is an increase in contrast, especially for dense breasts. Although many enhancement techniques reported are able to enhance specific details, they may also produce disturbing artifacts; see Chapter 4, in particular, Section 4.11.

It is important to distinguish between the evaluation of the detection of the presence of features such as microcalcifications in an image, and the evaluation of the diagnostic conclusion about a subject. Whereas some enhancement techniques may enhance the visibility of features such as calcifications, they may also distort their appearance and shape characteristics, which may lead to misdiagnosis; see Morrow et al. [123] for a discussion on this topic. A similar observation was made by Kimme-Smith et al. [264], who stated that "Studies of digitally enhanced mammograms should examine the actual ability to form diagnostic conclusions from the enhanced images, rather than the ability merely to report the increased numbers of clusters of simulated microcalcifications that it is possible to detect. Radiologic evaluation obviously begins with the detection of an abnormality, but if the image of the abnormality is distorted, an incorrect diagnosis may result." ROC analysis and McNemar's method could assist in assessing the effect of enhancement on the diagnosis.
In their ROC study to evaluate the eects of digitization and unsharp-mask
ltering on the detection of calci cations, Chan et al. 254] used 12 images
with calci cations and 20 normal images. The digitization was performed
at a spatial resolution of 0:1 mm per pixel, and the enhanced images were
printed on lm. Nine radiologists interpreted the images. They found that
the detectability of calci cations in the digitized mammograms was improved
by unsharp-mask ltering, although both the unprocessed digitized and the
processed mammograms provided lower accuracy than the conventional mam-
mograms.
Kimme-Smith et al. [264] compared contact, magnified, and TV-enhanced
mammographic images of 31 breasts for diagnosis of calcifications. The
interpretation was performed by three experienced radiologists and three
radiology residents. The TV enhancement procedure used the Wallis filter, which
is similar to unsharp masking. They concluded that TV enhancement could
not replace microfocal spot magnification and could lead to misdiagnosis by
inexperienced radiologists. Experienced radiologists showed no significant
improvement in performance with the enhanced images.
Nab et al. [1109] performed ROC analysis comparing 270 mammographic
films with 2K × 2K, 12-bit digitized versions (at 0.1 mm per pixel) displayed
on monitors. The task for the two radiologists in the study was to indicate
the presence or absence of tumors or calcifications. No significant difference in
performance was observed between the use of films and their digitized versions.
Kallergi et al. [267] conducted an ROC study with 100 mammograms and
four radiologists, including the original films, digitized images (105 µm pixel
size) displayed on monitors, and wavelet-enhanced images displayed on
monitors (limited to 8-bit gray scale). The diagnostic task was limited to the
detection and classification of calcifications. While they observed a statistically
significant reduction in the area under the ROC curve with the digitized
images, the difference between reading the original films and the wavelet-enhanced
images displayed on monitors was not significant. They also noted
that interobserver variation was reduced with the use of the wavelet-enhanced
images. They concluded that filmless mammography with their wavelet-based
enhancement method is comparable to screen-film mammography for detecting
and classifying calcifications.
The ANCE method (described in Sections 4.9.1 and 4.11) was used in a
preference study comparing the performance of mammographic enhancement
algorithms [125]. The other methods used in the study were adaptive unsharp
masking, contrast-limited adaptive histogram equalization, and wavelet-based
enhancement. In a majority of the cases with microcalcifications, the ANCE
algorithm provided the most preferred results. In the set of images with
masses, the unenhanced images were preferred in most of the cases.
Rangayyan et al. [124, 266, 271, 322] evaluated the role of the ANCE technique
for enhancement of mammograms in a breast cancer screening program using
ROC and McNemar's methods. The methods and results of these works are
described in detail in the following sections.
Pattern Classification and Diagnostic Decision 1145
12.10.1 Case selection, digitization, and presentation
In order to evaluate the diagnostic utility of the ANCE technique, two ROC
studies were conducted using two different datasets: difficult cases and interval-cancer
cases. The difficult-cases dataset is a collection of cases for which the
radiologist had been unsure enough to call for a biopsy. The cases were difficult
in terms of both the detection of the abnormality present and the diagnosis as
normal, benign, or malignant. The investigation described in this section was
conducted to test if ANCE could be used to improve the distinction between
benign and malignant cases [266].
Interval-cancer cases are cases in a screening program where cancer is
detected prior to a scheduled return screening visit; they may be indicative of
the inability to detect an already present cancer or an unusually rapid-growing
cancer. In these cases, the radiologist had declared that there was no evidence
of cancer on the previous mammograms. The purpose of the study, described
in this section, was to test if interval cancers could be detected earlier with
appropriate digital enhancement and analysis [271]. The goal of interpretation
in the study was screening, and not the detection of signs such as calcifications
or masses; as such, no record was maintained of the number or sizes of the
signs, as done by Kallergi et al. [267] and Nishikawa et al. [1110]. Localization
of pathology was not required: the radiologists had to find lesions, if any, and
assess them for the likelihood of malignancy, but did not have to mark their
locations on the films.
Difficult cases: An experienced radiologist selected 21 difficult cases, related
to 14 subjects with benign breast disease and seven subjects with malignant
disease, from files over the period 1987–1992 at the Foothills Hospital,
Calgary, Alberta, Canada. Four films, including the MLO and CC views of
each breast, were available for each of 18 cases, but only two films of one
breast each were available for three cases, leading to a total of 78 screen-film
mammograms. Biopsy results were also available for each subject.
Each film was digitized using an Eikonix 1412 scanner (Eikonix Inc., Bedford,
MA) to 4,096 by about 2,048 pixels with 12-bit gray-scale resolution.
(The size of the digitized image differed from film to film depending upon
the size of the actual image in the mammogram.) Sampling as above
represents a spot size on the film of about 0.062 mm × 0.062 mm. Films were
illuminated by a Plannar 1417 light box (Gordon Instruments, Orchard Park,
NY). Although the light box is designed to have a uniform light intensity
distribution, it was necessary to correct for nonuniformities in illumination.
After correction, pixel gray levels were determined to be accurate to 10 bits,
with a dynamic range of approximately 0.02–2.52 OD [174].
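Correction for illumination nonuniformity can be sketched as a flat-field normalization, in which each scan is divided by a reference scan of the empty light box. The sketch below is a generic illustration under that assumption, not necessarily the exact procedure used in the study:

```python
import numpy as np

def correct_illumination(scan, lightbox_reference):
    """Correct scanner nonuniformity by normalizing against a reference
    scan of the empty light box (flat-field correction).

    Both arrays hold linear intensity values; the output is rescaled so
    that its range is comparable to that of the reference.
    """
    reference = lightbox_reference.astype(float)
    corrected = scan.astype(float) / np.maximum(reference, 1e-6)
    return corrected * reference.max()

# Example: a uniform film scanned under illumination that falls off to one side.
falloff = np.linspace(1.0, 0.5, 8).reshape(1, 8).repeat(8, axis=0)
scan = 1000.0 * falloff          # what the scanner records for a uniform film
flat = 1023.0 * falloff          # reference scan of the empty light box
corrected = correct_illumination(scan, flat)
```

After normalization, the recorded falloff cancels and the uniform film is rendered with a constant gray level.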
The digital images were down-sampled by a factor of two for processing
and display for interpretation on a Megascan 2111 monitor (Advanced Video
Products Inc., Littleton, MA). Although the memory buffer of the Megascan
system is of size 4,096 × 4,096 × 12 bits, the display buffer is limited to
2,560 × 2,048 × 8 bits, with panning and zooming facilities. The original
screen-film mammograms used in the study were presented to the radiologists
using a standard mammogram film viewer.
Six radiologists from the Foothills Hospital interpreted the original, the
unprocessed digitized, and the enhanced mammograms separately. Only one of
the radiologists had prior experience with digitized and enhanced mammographic
images. The images were presented in random order and the radiologists
were given no additional information about the patients. The radiologists
ranked each case as (1) definitely or almost definitely benign, (2) probably
benign, (3) possibly malignant, (4) probably malignant, or (5) definitely or
almost definitely malignant.
Interval-cancer cases: Two hundred and twenty-two screen-film
mammograms of 28 interval-cancer patients and six control patients with benign
breast disease were selected for this study from files over the period 1991–1995
at the Screen Test Centres of the Alberta Program for the Early Detection of
Breast Cancer [61]. Some of the cases of cancer were diagnosed by physical
exam or mammography performed after the preceding visit to the screening
program but prior to the next scheduled visit. The radiologists who interpreted
the mammograms taken prior to the diagnosis of the cancer had declared
that there was no evidence of cancer on the films. The small number of
benign cases were included to prevent "over-diagnosis"; the radiologists were
not informed of the proportion of benign to malignant cases in the dataset.
Most of the files included multiple sets of films taken at different times; all
sets except one included at least four films each (the MLO and CC views of
each breast) in the dataset. (More specifically, the dataset included fifty-two
4-film sets, one 3-film set, one 5-film set, and one 6-film set.) Previous films
of all of the interval-cancer cases had initially been reported as being normal.
Biopsy results were available for each subject.
The aim of this study was to investigate the possibility of earlier detection
of interval breast cancers with the aid of appropriate image processing
techniques. Because a few sets of films taken at different times were available
for each subject, each set of mammograms of each subject was labeled as a
separate case. All films of the subjects with malignant disease within the
selected period were labeled as being malignant, even though the cases had not
been previously interpreted as such. By this process, 55 cases were obtained,
of which 47 were malignant and eight were benign (the numbers of subjects
being 28 with malignant disease and six with benign disease).
The films were digitized as described previously in this section, and processed
using the ANCE technique with the full digitized resolution available.
The digitized version and the ANCE-processed version were printed on film
using a KODAK XL 7700 digital continuous-tone printer (Eastman Kodak
Company, Rochester, NY) with pixel arrays up to 2,048 × 1,536 (8-bit pixels).
Gray-level remapping (10 bits/pixel to 8 bits/pixel) and down-sampling
by a factor of two were applied before a digitized/enhanced mammogram
image was sent for printing with two different LUTs.
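The gray-level remapping and down-sampling steps can be sketched as follows; the linear default LUT and the 2 × 2 block-averaging down-sampler are illustrative assumptions (the text does not specify the remapping curve or the resampling method used):

```python
import numpy as np

def remap_and_downsample(image10, lut=None):
    """Remap 10-bit pixel values to 8 bits through a lookup table and
    down-sample by a factor of two using 2x2 block averaging.

    If no LUT is given, a linear 10-to-8-bit mapping is used; another
    curve (such as a brightening LUT) can be supplied in its place.
    """
    if lut is None:
        lut = (np.arange(1024) * 255 // 1023).astype(np.uint8)  # linear map
    img8 = lut[image10]                    # 10-bit values index the LUT
    h, w = img8.shape
    blocks = img8[:h - h % 2, :w - w % 2].astype(float)
    down = blocks.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return down.astype(np.uint8)

img10 = np.full((4, 4), 1023, dtype=np.int64)   # all-white 10-bit image
out = remap_and_downsample(img10)
```

Block averaging is one common choice of down-sampler; it trades some sharpness for reduced aliasing and noise.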
Three reference radiologists from the Screen Test Program separately
interpreted the original films of the involved side, their digitized versions, and their
ANCE-processed versions on a standard mammogram film viewer. Only one
of the radiologists had prior experience with digitized and enhanced
mammographic images. Interpretation of the digitized images (without ANCE
processing) was included in the test to evaluate the effect on diagnostic accuracy
of digitization and printing with the resolution and equipment used. The
images were presented in random order, and the radiologists were given no
additional information about the patients. The radiologists ranked each case as
(1) definitely or almost definitely benign, (2) probably benign, (3) indeterminate,
(4) probably malignant, or (5) definitely or almost definitely malignant.
Note that the diagnostic statement for rank (3) is different in this study from
that described previously in this section.
Images were interpreted in random order by the radiologists; images were
presented in the same random order to each radiologist individually. Each
radiologist interpreted all of the images in a single sitting. Multiple sets of
films of a given subject (taken at different times) were treated as different
cases and interpreted separately to avoid the development of familiarity and
bias. The original, digitized, and enhanced versions of any given case were
mixed for random ordering, treated as separate cases, and were interpreted
separately to prevent the development of familiarity and bias. All available
views of a case were read together as one set. It should be recognized that
the initial (original) diagnosis of the cases was performed by different teams
of radiologists experienced in the interpretation of screening mammograms,
which further limits the scope of bias in the study being described.
12.10.2 ROC and statistical analysis
ROC analysis [1100, 1101] was used to compare the radiologists' performance
in detecting abnormalities in the various images. The maximum likelihood
estimation method [1102] was used to fit a binormal ROC curve to each
radiologist's confidence rating data for each set of mammograms. The slope
and intercept parameters of the binormal ROC curve (when plotted on normal
probability scales) were calculated for each fitted curve. To estimate
the average performance of the group of radiologists on each set of images,
composite ROC curves [1101] were calculated by averaging the slope and the
intercept parameters of the individual ROC curves. Finally, the area under
the binormal ROC curve (as plotted in the unit square) was computed, which
represents the overall abnormality detection accuracy for each type of image.
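The quantities involved can be sketched numerically. For a binormal ROC curve TPF = Φ(a + b·Φ⁻¹(FPF)), with intercept a and slope b on normal-deviate axes, the area under the curve is Az = Φ(a/√(1 + b²)), and the composite curve is obtained by averaging a and b across readers. The sketch below illustrates only these closed-form relations (not the maximum-likelihood fitting procedure itself), and the reader parameters shown are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def binormal_az(a, b):
    """Area under the binormal ROC curve TPF = Phi(a + b * Phi^{-1}(FPF)),
    given its intercept a and slope b on normal-deviate axes:
    Az = Phi(a / sqrt(1 + b^2))."""
    return norm.cdf(a / np.sqrt(1.0 + b * b))

def composite_az(params):
    """Average the intercept and slope over readers to obtain the
    composite ROC curve, then return its area."""
    a_mean = np.mean([a for a, b in params])
    b_mean = np.mean([b for a, b in params])
    return binormal_az(a_mean, b_mean)

# Hypothetical (intercept, slope) pairs for three readers:
readers = [(0.8, 1.0), (1.0, 0.9), (0.6, 1.1)]
az = composite_az(readers)
```

A curve with a = 0 and b = 1 lies on the chance diagonal (Az = 0.5); larger intercepts push the curve toward the upper left corner and Az toward 1.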
In addition to ROC analysis of the interval-cancer cases, McNemar's test
of symmetry [1103, 1104] was performed on a series of 3 × 3 contingency tables
obtained by cross-tabulating (i) diagnostic confidence using the original
mammograms (categories 1 or 2, 3, and 4 or 5) against the diagnostic confidence
using the digitized mammograms, and (ii) diagnostic confidence using
the digitized mammograms against the diagnostic confidence using the enhanced
mammograms. Separate 3 × 3 tables, as illustrated in Figure 12.21,
were formed for the malignant cases and the benign cases. Cases in which
there is no change in the diagnostic confidence will fall on the diagonal (upper
left to lower right, labeled as D in Figure 12.21) of the table. For the
malignant cases, improvement in the diagnostic accuracy is illustrated by a
3 × 3 table with the majority of the cases in the three upper right-hand cells
(labeled as U in Figure 12.21). Conversely, for the benign cases, improvement
in the diagnostic accuracy will be illustrated by a 3 × 3 table with the majority
of the cases in the three lower left-hand cells (labeled as L in Figure 12.21).
The hypothesis of significant improvement can be tested statistically using
McNemar's test of symmetry [1103, 1104], namely that the probability of an
observation being classified into a cell [i, j] is the same as the probability of
being classified into the cell [j, i].
                              Level with Enhanced Methodology
                               1 or 2       3       4 or 5

 Level with       1 or 2         D          U          U
 Original         3              L          D          U
 Methodology      4 or 5         L          L          D
FIGURE 12.21
Illustration of the contingency table for McNemar's test. Figure courtesy of
L. Shen [320].
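The test of symmetry on such a 3 × 3 table can be sketched as follows, using Bowker's generalization of McNemar's test; the table entries below are hypothetical, not data from the study:

```python
import numpy as np
from scipy.stats import chi2

def symmetry_test(table):
    """Bowker's generalization of McNemar's test for a k x k table:
    test H0 that P(cell [i, j]) = P(cell [j, i]) for all i != j.

    The statistic sums (n_ij - n_ji)^2 / (n_ij + n_ji) over off-diagonal
    pairs; the degrees of freedom equal the number of pairs with nonzero
    counts (k(k-1)/2 when all pairs are nonzero).
    """
    t = np.asarray(table, dtype=float)
    k = t.shape[0]
    stat, df = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            if t[i, j] + t[j, i] > 0:
                stat += (t[i, j] - t[j, i]) ** 2 / (t[i, j] + t[j, i])
                df += 1
    return stat, chi2.sf(stat, df)

# Hypothetical table in which most cases move to the upper-right (U) cells,
# as expected for malignant cases whose confidence levels increase:
table = [[5, 8, 6],
         [1, 4, 9],
         [0, 2, 12]]
stat, p = symmetry_test(table)
```

Under the null hypothesis of symmetry the statistic is approximately chi-square distributed; a small p-value indicates a systematic shift of cases to one side of the diagonal.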
The validity of McNemar's test depends on the assumption that the cell
counts are at least moderately large. In order to avoid the limitations due to
this factor, and also to avoid the problem of excessive multiple comparisons,
the data across the individual radiologists were combined in two different
ways before applying McNemar's test. The first method (referred to as
"averaged") averaged the radiologists' diagnostic ratings before forming the 3 × 3
tables. In the second method (referred to as "combined"), the 3 × 3 tables for
each of the radiologists were formed first and then combined by summing the
corresponding cells.
Because this analysis involves multiple p-values, the Bonferroni correction
was used to adjust the p-values [1111]. When multiple p-values are produced,
the probability of making a Type I error increases. (Rejection of the null
hypothesis when it is true is called a Type I error.) The Bonferroni method for
adjusting multiple p-values requires that when k hypothesis tests are
performed, each p-value be multiplied by k, so that the adjusted p-value is
p′ = kp. In order to reduce the number of p-values, symmetry was tested
for each situation (malignant/benign and averaged/combined) for the two
tables original-to-digitized and digitized-to-enhanced, but not the original-to-
enhanced (which follows from the other two).
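The adjustment itself is a one-line computation; a sketch (the p-values shown are illustrative):

```python
def bonferroni(p_values, k=None):
    """Bonferroni adjustment: multiply each p-value by the number of
    hypothesis tests k (len(p_values) by default), capping at 1.0."""
    if k is None:
        k = len(p_values)
    return [min(1.0, p * k) for p in p_values]

# With k = 6 tests, for example, p = 0.004 adjusts to p' = 0.024.
adjusted = bonferroni([0.004, 0.16], k=6)
```

The cap at 1.0 is needed because a scaled value above 1 is not a valid probability.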
ROC analysis of difficult cases: Because the population involved in this
study was such that the original mammograms were sufficiently abnormal to
cause the initial attending radiologist to call for biopsy, the aim was to test
whether specificity could be improved with the ANCE method.
The composite ROC curves representing breast cancer diagnosis by the
six radiologists in this study are compared in Figure 12.22, which illustrates
several points: First, the process of digitization (and down-sampling to an
effective pixel size of 0.124 mm × 0.124 mm) degraded the quality of the
images and thereby made the radiologists' performance worse, especially in
the low-FPF range. However, better performance of the radiologists is seen
with the digitized images at high FPFs (better sensitivity with worse
specificity). Second, the ANCE method improved the radiologists' performance in
all ranges of FPF (more significantly in the low-FPF range) as compared with
the unprocessed digitized images, although it is still lower than that with the
original films in the low range of FPF. The Az values for the original,
digitized, and enhanced mammograms were computed to be 0.67, 0.63, and 0.67,
respectively. (Kallergi et al. [267] also observed a drop in the area under the
ROC curve when digitized images were interpreted from monitor display as
compared with the original films; their wavelet-based enhancement method
provided an improvement over the digitized version, although the enhanced
images did not provide any statistically significant benefit over the original
films.) The Az values are lower than those normally encountered in most
studies (in the range 0.85–0.95) due to the fact that the cases selected were
difficult enough to call for biopsy. The labeling of all mammograms taken
prior to the detection of cancer as abnormal will also have had a bearing on
this result. Regardless, the numerical results indicate that the ANCE
technique improved the radiologists' overall performance, especially over
unprocessed digitized mammograms, and allowed the radiologists to discriminate
between the two populations slightly better while interpreting the enhanced
mammograms as compared with the original films.
McNemar's tests on difficult cases: Table 12.8 and Table 12.9 contain
details of the variation of the radiologists' diagnostic performance for
malignant cases and benign cases, respectively, with the difficult-cases dataset.
(Note: In the tables, B refers to benign, U to undecided or indeterminate, and
M to malignant ratings.) For almost every table for individual readers, the
numbers were too small to perform the McNemar chi-square test (including
the average). In other cases, the numbers would be too small to detect a
[Plot of True-Positive Fraction versus False-Positive Fraction, with composite ROC curves labeled Enhanced, Original, and Digitized.]
FIGURE 12.22
Comparison of composite ROC curves for the detection of abnormalities by
interpreting the original, unprocessed digitized, and enhanced images of 21
difficult cases. Reproduced with permission from R.M. Rangayyan, L. Shen,
Y. Shen, J.E.L. Desautels, H. Bryant, T.J. Terry, N. Horeczko, and M.S.
Rose, "Improvement of sensitivity of breast cancer diagnosis with adaptive
neighborhood contrast enhancement of mammograms", IEEE Transactions
on Information Technology in Biomedicine, 1(3):161–170, 1997. © IEEE.
TABLE 12.8
Variation of Radiologists' Diagnostic Performance with Original,
Unprocessed Digitized, and ANCE-processed Digitized Mammograms
for the Seven Malignant Cases in the Difficult-cases Dataset.

                      Change in diagnostic confidence level
              B: level 1 or 2     U: level 3     M: level 4 or 5

Rad.  Images   B→U (U→B)   U→M (M→U)   B→M (M→B)   B→B   U→U   M→M
#1    O→D        1 (1)       0 (2)       0 (0)       1     0     2
      D→E        1 (0)       1 (0)       0 (0)       1     2     2
      O→E        0 (0)       0 (2)       1 (0)       1     1     2
#2    O→D        0 (0)       0 (1)       0 (0)       1     1     4
      D→E        0 (0)       0 (0)       0 (0)       1     2     4
      O→E        0 (0)       0 (1)       0 (0)       1     1     4
#3    O→D        1 (0)       1 (2)       0 (0)       0     2     1
      D→E        0 (1)       0 (0)       0 (0)       0     4     2
      O→E        1 (1)       1 (2)       0 (0)       0     1     1
#4    O→D        0 (1)       0 (1)       0 (1)       1     0     3
      D→E        1 (0)       1 (0)       0 (0)       2     0     3
      O→E        0 (0)       0 (0)       0 (1)       1     1     4
#5    O→D        0 (0)       1 (0)       0 (0)       1     2     3
      D→E        1 (0)       1 (1)       0 (0)       0     1     3
      O→E        1 (0)       2 (1)       0 (0)       0     1     2
#6    O→D        0 (1)       0 (0)       0 (2)       1     0     3
      D→E        0 (0)       0 (1)       2 (0)       2     0     2
      O→E        0 (0)       1 (1)       0 (1)       1     0     3
Av.   O→D        0 (0)       0 (2)       0 (0)       1     1     3
      D→E        0 (0)       1 (0)       0 (0)       1     2     3
      O→E        0 (0)       1 (2)       0 (0)       1     0     3

Rad. = Radiologist; O = original mammogram; D = unprocessed digitized
mammogram; E = ANCE-processed digitized mammogram. The average
(Av.) values were obtained by averaging the individual confidence levels [320].
TABLE 12.9
Variation of Radiologists' Diagnostic Performance with Original,
Unprocessed Digitized, and ANCE-processed Digitized Mammograms
for the 14 Benign Cases in the Difficult-cases Dataset.

                      Change in diagnostic confidence level
              B: level 1 or 2     U: level 3     M: level 4 or 5

Rad.  Images   U→B (B→U)   M→U (U→M)   M→B (B→M)   B→B   U→U   M→M
#1    O→D        3 (1)       2 (0)       0 (0)       2     3     3
      D→E        1 (2)       0 (1)       0 (1)       2     4     3
      O→E        2 (1)       2 (1)       0 (1)       1     3     3
#2    O→D        1 (0)       0 (1)       0 (0)       5     4     3
      D→E        0 (0)       0 (0)       0 (0)       6     4     4
      O→E        1 (0)       0 (1)       0 (0)       5     4     3
#3    O→D        3 (1)       1 (2)       0 (1)       4     1     1
      D→E        1 (1)       1 (0)       0 (0)       6     2     3
      O→E        2 (1)       1 (2)       0 (0)       5     2     1
#4    O→D        4 (0)       0 (1)       2 (0)       5     1     1
      D→E        0 (2)       0 (0)       0 (0)       9     1     2
      O→E        3 (1)       0 (1)       2 (0)       4     2     1
#5    O→D        3 (1)       1 (1)       1 (1)       2     2     2
      D→E        2 (0)       1 (1)       1 (1)       5     1     2
      O→E        3 (0)       1 (2)       1 (0)       4     1     2
#6    O→D        5 (0)       0 (1)       2 (0)       5     0     1
      D→E        0 (1)       0 (0)       0 (2)       9     0     2
      O→E        4 (1)       0 (2)       2 (1)       3     0     1
Av.   O→D        5 (1)       1 (1)       1 (0)       3     1     1
      D→E        1 (1)       0 (1)       0 (0)       8     1     2
      O→E        4 (0)       1 (2)       1 (0)       4     1     1

Rad. = Radiologist; O = original mammogram; D = unprocessed digitized
mammogram; E = ANCE-processed digitized mammogram. The average
(Av.) values were obtained by averaging the individual confidence levels [320].
statistically significant difference. Therefore, the data were combined for the
six readers by simply summing the corresponding matrices.
For the benign cases (combined), p-values of 0.004, 0.53, and 0.022 were
obtained for original-to-digitized, digitized-to-enhanced, and original-to-enhanced,
respectively. For the malignant cases (combined), the p-values were 0.16,
0.36, and 0.69 for original-to-digitized, digitized-to-enhanced, and original-to-enhanced,
respectively. The p-values represent no evidence of improvement
in the diagnostic accuracy for any of the three tables (original-to-digitized,
digitized-to-enhanced, and original-to-enhanced) for the malignant cases in
the difficult-cases dataset. However, for the benign cases, there was a
statistically significant improvement in the diagnostic accuracy (p = 0.004;
Bonferroni-adjusted value p′ = 0.024). There was no evidence of a significant
improvement from digitized to enhanced, and although there was a significant
improvement from the original to the enhanced category (not significant
after Bonferroni adjustment, p′ = 0.13), this was attributed to the
improvement in moving from the original to the digitized category.
ROC analysis of interval-cancer cases: Figure 12.23 shows the variation
of the ROC curves among the three radiologists who interpreted the
same set of unprocessed digitized mammograms of the interval-cancer cases.
Similar variation was observed with the sets of the original film mammograms
and the enhanced mammograms. Details of the variation of the radiologists'
diagnostic performance with the original mammograms, unprocessed digitized
mammograms, and ANCE-processed mammograms are listed in Table 12.10
and Table 12.11 for the 47 malignant and eight benign cases, respectively.
It is seen from Table 12.10 that, on the average (average of individual
diagnostic confidence levels), almost half (21) of the 47 malignant cases, which
were originally diagnosed as benign (average diagnostic confidence level of less
than 2.5) by the three radiologists with the original films, were relabeled as
malignant (average diagnostic confidence level of greater than 3.5) with the
ANCE-processed versions. Only three malignant cases whose original average
diagnostic confidence levels were greater than 3.5 had their average
confidence levels reduced to the range of 2.5 to 3.5 when interpreting the enhanced
mammograms. However, in general, no significant changes are observed for
the benign cases (Table 12.11) with the ANCE procedure.
Composite ROC curves for breast cancer diagnosis with the original,
unprocessed digitized, and enhanced images are plotted in Figure 12.24. The
following points may be observed in Figure 12.24: First, the radiologists'
performance with the enhanced versions is the best among the three, especially
for FPF > 0.3. This is reasonable, because most of the cancer cases in this
dataset were difficult and were initially diagnosed as normal when interpreting
the original films. Therefore, the FPF level has to be increased in order
to achieve good sensitivity (high TPF). Second, the digitized versions appear
to provide better diagnostic results when compared with the original films.
This is likely due to the fact that two printouts for each digitized image with
two different LUTs (unchanged and lighten2) were provided to the radiolo-
[Plot of True-Positive Fraction versus False-Positive Fraction, showing one conventional ROC curve per radiologist.]
FIGURE 12.23
Variation of conventional ROC curves among three radiologists interpreting
the same set of unprocessed digitized mammograms from the interval-cancer
cases dataset. Reproduced with permission from R.M. Rangayyan, L. Shen,
Y. Shen, J.E.L. Desautels, H. Bryant, T.J. Terry, N. Horeczko, and M.S.
Rose, "Improvement of sensitivity of breast cancer diagnosis with adaptive
neighborhood contrast enhancement of mammograms", IEEE Transactions
on Information Technology in Biomedicine, 1(3):161–170, 1997. © IEEE.
TABLE 12.10
Variation of Radiologists' Diagnostic Performance with Original,
Unprocessed Digitized, and ANCE-processed Digitized Mammograms
for the 47 Malignant Cases in the Interval-cancer Dataset.

                      Change in diagnostic confidence level
              B: level 1 or 2     U: level 3     M: level 4 or 5

Rad.  Images   B→U (U→B)   U→M (M→U)   B→M (M→B)   B→B   U→U   M→M
#1    O→D        8 (0)       4 (0)      15 (1)       2     1    16
      D→E        0 (0)       9 (0)       2 (0)       0     1    35
      O→E        1 (0)       5 (1)      24 (0)       0     0    16
#2    O→D        8 (0)       7 (0)       5 (0)       4     7    16
      D→E        1 (1)      12 (2)       3 (0)       0     3    25
      O→E        4 (1)      13 (1)      13 (0)       0     0    15
#3    O→D        7 (1)       0 (1)       4 (3)       9     6    16
      D→E        5 (2)       8 (3)       2 (1)       6     4    16
      O→E        6 (0)       4 (3)       8 (3)       6     3    14
Av.   O→D        9 (0)       2 (4)      10 (0)       4     4    14
      D→E        1 (0)      14 (2)       3 (0)       0     3    24
      O→E        2 (0)       5 (3)      21 (0)       0     1    15

Rad. = Radiologist; O = original mammogram; D = unprocessed digitized
mammogram; E = ANCE-processed digitized mammogram. The average
(Av.) values were obtained by averaging the individual confidence levels.
Reproduced with permission from R.M. Rangayyan, L. Shen, Y. Shen, J.E.L.
Desautels, H. Bryant, T.J. Terry, N. Horeczko, and M.S. Rose, "Improvement
of sensitivity of breast cancer diagnosis with adaptive neighborhood contrast
enhancement of mammograms", IEEE Transactions on Information Technology
in Biomedicine, 1(3):161–170, 1997. © IEEE.
TABLE 12.11
Variation of Radiologists' Diagnostic Performance with Original,
Unprocessed Digitized, and ANCE-processed Digitized Mammograms
for the Eight Benign Cases in the Interval-cancer Dataset.

                      Change in diagnostic confidence level
              B: level 1 or 2     U: level 3     M: level 4 or 5

Rad.  Images   U→B (B→U)   M→U (U→M)   M→B (B→M)   B→B   U→U   M→M
#1    O→D        0 (0)       1 (4)       0 (0)       1     0     2
      D→E        0 (1)       1 (0)       0 (0)       0     1     5
      O→E        0 (1)       1 (3)       0 (0)       0     1     2
#2    O→D        0 (0)       1 (0)       0 (0)       1     2     4
      D→E        0 (1)       0 (1)       0 (0)       0     2     4
      O→E        0 (1)       1 (1)       0 (0)       0     1     4
#3    O→D        0 (0)       1 (0)       0 (1)       2     1     1
      D→E        0 (0)       0 (1)       0 (1)       1     1     1
      O→E        0 (0)       0 (0)       0 (2)       1     1     2
Av.   O→D        0 (0)       1 (2)       0 (0)       1     1     3
      D→E        0 (1)       1 (1)       0 (0)       0     1     4
      O→E        0 (1)       1 (2)       0 (0)       0     1     3

Rad. = Radiologist; O = original mammogram; D = unprocessed digitized
mammogram; E = ANCE-processed digitized mammogram. The average
(Av.) values were obtained by averaging the individual confidence levels.
Reproduced with permission from R.M. Rangayyan, L. Shen, Y. Shen, J.E.L.
Desautels, H. Bryant, T.J. Terry, N. Horeczko, and M.S. Rose, "Improvement
of sensitivity of breast cancer diagnosis with adaptive neighborhood contrast
enhancement of mammograms", IEEE Transactions on Information Technology
in Biomedicine, 1(3):161–170, 1997. © IEEE.
gists; the lighten2 LUT (see Figure 12.25) provided by Kodak performs some
enhancement. Two print LUTs were used because, during initial setup tests,
the radiologists did not favor the use of the hyperbolic tangent (sigmoid)
function, which is an approximate model of an X-ray film system.
[Plot of True-Positive Fraction versus False-Positive Fraction, with composite ROC curves labeled Enhanced, Digitized, and Original.]
FIGURE 12.24
Comparison of composite ROC curves for the detection of abnormalities by
interpreting the original, unprocessed digitized, and enhanced images from
the interval-cancer dataset. Reproduced with permission from R.M.
Rangayyan, L. Shen, Y. Shen, J.E.L. Desautels, H. Bryant, T.J. Terry, N.
Horeczko, and M.S. Rose, "Improvement of sensitivity of breast cancer
diagnosis with adaptive neighborhood contrast enhancement of mammograms",
IEEE Transactions on Information Technology in Biomedicine, 1(3):161–170,
1997. © IEEE.
The Az values for the original, digitized, and enhanced mammograms were
computed to be 0.39, 0.47, and 0.54, respectively. These numbers are much
lower than the commonly encountered area values due to the fact that the
cases selected are difficult cases, and more importantly, due to the fact that
signs of earlier stages of the interval cancers were either not present on the
previous films or were not visible. The radiologists interpreting the mammograms
taken prior to the diagnosis of the cancer had declared that there
was no evidence of cancer on the films; hence, the improvement indicated by
the ROC curve is significant in terms of the diagnostic outcome. This also
explains why Az is less than 0.5 for the original and digitized mammograms.
[Plot of output gray level versus input gray level (0–255) for the unchanged LUT and the lighten2 LUT.]
FIGURE 12.25
The two LUTs used for printing mammograms. Reproduced with permission
from R.M. Rangayyan, L. Shen, Y. Shen, J.E.L. Desautels, H. Bryant, T.J.
Terry, N. Horeczko, and M.S. Rose, "Improvement of sensitivity of breast
cancer diagnosis with adaptive neighborhood contrast enhancement of
mammograms", IEEE Transactions on Information Technology in Biomedicine,
1(3):161–170, 1997. © IEEE.
The results indicate that the ANCE technique can improve sensitivity and
assist radiologists in detecting breast cancer at earlier stages.
McNemar's tests on interval-cancer cases: For the benign cases (averaged),
and for the original-to-enhanced, the numbers were too small to provide
a valid chi-square statistic for McNemar's test. Therefore, for the benign
cases, two tables (digitized-to-enhanced and original-to-enhanced) combined
over the three radiologists were tested. No significant difference in diagnostic
accuracy was found for the benign cases, for either the digitized-to-enhanced
table with p = 0.097 (Bonferroni-adjusted p′ = 0.58) or for the original-to-enhanced
table with p = 0.083 (p′ = 0.50).
For each of the four tables for the malignant cases, a significant improvement
was observed in the diagnostic accuracy, with the following p-values:
original-to-digitized (combined): p < 0.001, p′ < 0.001; digitized-to-enhanced
(combined): p = 0.0001, p′ = 0.0006; original-to-digitized (averaged): p = 0.002,
p′ = 0.012; digitized-to-enhanced (averaged): p = 0.0046, p′ = 0.028.
In summary, no significant changes were seen in the diagnostic accuracy for
the benign control cases. For the malignant cases, a significant improvement
was seen in the diagnostic accuracy in all four tables tested, even after using
a Bonferroni adjustment for the multiple p-values (k = 6).
12.10.3 Discussion
The results of the interval-cancer study indicate that the ANCE method had
a positive impact on the interpretation of mammograms in terms of early
detection of breast cancer (improved sensitivity). The ANCE-processed
mammograms increased the detectability of signs of malignancy at earlier stages
(of the interval-cancer cases) as compared with the original and unprocessed
digitized mammograms. In terms of the average diagnostic confidence levels
of three experts, 19 of 28 interval-cancer patients were not diagnosed during
their earlier mammography tests with the original films only. However, had
the ANCE procedure been used, all of these cases would have been diagnosed
as malignant at the corresponding earlier times. Only one of six patients
initially labeled as having benign disease with the original mammogram films
was interpreted as malignant after enhancement. Although the resultant high
sensitivity (TPF) comes with an increased FPF of over 0.3, such an improvement
in the detection of breast cancer at early stages is important.
The results obtained with the set of difficult cases are not as conclusive
as the results with the interval-cancer cases. Three reasons for this could be:
(i) the lack of familiarity of five of the six radiologists with digitized and enhanced
mammographic images; (ii) interpretation of the images on a monitor; and (iii) the use
of down-sampled images at a lower resolution of 124 µm. Better results may be
achieved if mammograms are digitized and processed with the desired spatial
resolution of 50 µm and dynamic range of 0–3.5 OD, and printed at the
same resolution on film. (No monitor is as yet available to display images of
the order of 4,096 × 4,096 pixels at 12 bits/pixel.)
The results of statistical analysis using McNemar's tests show (more con-
clusively than ROC analysis) that the ANCE procedure resulted in a sta-
tistically significant improvement in the diagnosis of interval-cancer cases,
with no significant effect on the benign control cases. Statistical tests such
as McNemar's test complement ROC analysis in certain circumstances, such
as those in the study being discussed, with small numbers of difficult cases.
Both methodologies are useful, as they analyze the results from different per-
spectives: ROC analysis provides a measure of the accuracy of the procedure
in terms of sensitivity and specificity, whereas McNemar's test analyzes the
statistical significance and consistency of the change (improvement) in perfor-
mance. ROC analysis could include a chi-square test of statistical significance,
if large numbers of cases are available.
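The logic of McNemar's test on paired readings can be sketched as follows; the discordant-pair counts below are hypothetical, and the exact binomial version shown here stands in for whichever variant of the test was used in the study.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test: b and c are the counts of
    discordant pairs (cases whose classification changed in one
    direction or the other between two reading conditions).
    Under the null hypothesis, b ~ Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(p, 1.0)

# Hypothetical counts: 14 cases improved after enhancement, 2 worsened.
p = mcnemar_exact(2, 14)
alpha = 0.05 / 6          # Bonferroni adjustment for k = 6 comparisons
print(round(p, 4), p < alpha)   # → 0.0042 True
```

Only the discordant pairs enter the test; concordant pairs (cases classified the same way in both conditions) carry no information about the direction of change.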
In the study with the difficult-cases dataset, both the ROC and statisti-
cal analyses using McNemar's tests have shown that the digital versions led
to some improvements in distinguishing benign cases from malignant cases
(specificity). However, the improvement with the unprocessed digitized mam-
mograms may have come from the availability of a zooming utility.
Although the ANCE algorithm includes procedures to control noise en-
hancement, increased noise was observed in the processed images. Improve-
ments in noise control could lead to better specificity while increasing the
sensitivity of breast cancer detection. The results could also be improved by
interpreting a combination of the original or digitized mammograms with their
enhanced versions; increased familiarity with the enhanced mammograms may
assist the radiologists in the detection of abnormalities. New laser printers
(such as the Kodak 8610) can print images of the order of 4 096 × 4 096 pix-
els with 12-bit gray scale on film; this could lead to improved quality in the
reproduction of the enhanced images and consequent improved interpretation
by radiologists.

Digital image enhancement has the potential to improve the accuracy of
breast cancer diagnosis and lead to earlier detection of breast cancer. Parallel
computing strategies may assist in the practical application of the ANCE
technique in a screening program [246, 1112, 1113, 1114].

12.11 Application: Classification of Breast Masses and Tumors via Shape Analysis
Based upon the differences in the shape characteristics of benign masses and
malignant tumors as observed on mammograms, several methods have been
proposed for their classification by using shape factors (see Chapter 6). Ack-
erman and Gose [615] analyzed breast lesions on xeroradiographs and in-
vestigated the use of four measures of malignancy: calcification, spiculation,
roughness, and area-to-perimeter ratio. Their spiculation and roughness mea-
sures required the location of the center of the lesion as a reference point for
computing radial projections. The center of the lesion was simply defined as
the average position of the left-to-right and top-to-bottom borders of the rect-
angle bounding the lesion. Given a suspicious area on a xeroradiograph, their
computer-aided classification methods obtained operating characteristic
curves similar to those of the radiologists. In another study on xeroradiographs,
Ackerman et al. [1084] used 36 radiographic properties of lesions to estimate
the probability of malignancy. Using the properties in an automated cluster-
ing scheme, they achieved an FN rate of zero at an FP rate of 45%.
Pohlman et al. [1115] used measures of tumor circularity and surface rough-
ness to classify breast tumors. By using a logistic regression model, they re-
ported an area (Az) of 0.9 under the ROC curve. The shape features were
based on the radial distances of a mass boundary from its centroid. The sur-
face roughness was calculated as the percentage of angles with multiple bound-
ary points. In another study, Pohlman et al. [407] segmented lesions from their
background using an adaptive region-growing technique and achieved a 97%
detection rate with a set of 51 mammograms. They also used six morpho-
logical descriptors for benign-versus-malignant classification of the detected
lesions, and achieved areas under the ROC curve ranging from 0.76 to 0.93.
Their detection method required manual selection of seed points for region
growing, and adequate segmentation was obtained over several trials.
Kilday et al. [1116] developed a set of seven shape features based on tu-
mor circularity and radial distance measures (RDM) from the centroid to the
points on the boundary. The features included compactness, the mean of the RDM,
the standard deviation of the RDM, the entropy of the RDM histogram, area ratio,
zero crossings, and boundary roughness. A three-group classification of breast tu-
mors as fibroadenoma, cyst, and cancer was performed by using the features
in a linear discriminant function. They reported a classification accuracy of
51% using the leave-one-out method.
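A minimal sketch of how a few such radial-distance features can be computed from a closed contour follows; the bin count and the normalization by the maximum distance are assumptions for illustration, not the exact formulation of Kilday et al.

```python
import numpy as np

def rdm_features(x, y, bins=16):
    """A few radial-distance-measure (RDM) features in the style of
    Kilday et al.: distances from the centroid to the boundary
    points, normalized by their maximum for scale invariance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = np.hypot(x - x.mean(), y - y.mean())
    d = d / d.max()
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    # Zero crossings of the radial-distance signal about its mean.
    s = np.sign(d - d.mean())
    zc = int(np.count_nonzero(np.diff(s[s != 0])))
    return {"mean": float(d.mean()), "std": float(d.std()),
            "entropy": entropy, "zero_crossings": zc}

# A circle gives std ~ 0 and zero entropy; an ellipse crosses its
# mean radial distance four times per revolution.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = rdm_features(10 + 5 * np.cos(t), 10 + 5 * np.sin(t))
ellipse = rdm_features(10 + 5 * np.cos(t), 10 + 2 * np.sin(t))
```

Rough boundaries spread the distance histogram over more bins and cross the mean more often, which is why the entropy and zero-crossing counts separate smooth from spiculated contours.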
Bruce and Kallergi [1117] studied the effect of the resolution of the images
on the detection and classification of mammographic mass shapes as round,
lobular, or irregular, using the same shape features as proposed by Kilday
et al. [1116], along with wavelet-based scalar energy features. Methods based
upon Markov random fields were employed to extract mass regions. Features
computed from the regions extracted in images at two different resolutions
(220 μm and 180 μm) resulted in similar classification trends. The best over-
all classification rate of 75.9% was obtained by using wavelet-based features
computed from manually segmented mass regions. Later on, Bruce and Ad-
hami [1118] classified manually segmented mass shapes as round, nodular,
or stellate using multiresolution shape features derived via the application of
the discrete wavelet transform modulus-maxima method. They reported to
have achieved at best 80% classification accuracy using the multiresolution
features in linear discriminant analysis, with boundaries of masses extracted
from 60 digitized mammograms. Their morphological description of benign
masses as round or oval, and of malignant tumors as nodular or stellate,
may not hold for all possible mass shapes. It is well known
that some benign masses possess stellate shapes, and that a small proportion of
malignant tumors are circumscribed [163, 345, 376]. The database used by
Bruce and Adhami [1118] did not represent a good mixture of shapes of all
types of masses.
Rangayyan et al. [163] used moments of distances of contour points from
the centroid, compactness of the boundary, Fourier descriptors, and chord-
length statistics to characterize the roughness of tumor boundaries. Whereas
circumscribed-versus-spiculated classification of masses was achieved at ac-
curacies of up to 94.4%, the benign-versus-malignant classification accuracy
obtained by using only shape factors based upon contours was limited to
about 76%, with a database of 54 masses (28 benign and 26 malignant).
Using the same dataset, Menut et al. [354] achieved a similar benign-versus-
malignant classification accuracy (76%) by performing parabolic modeling of
tumor boundaries and using the mean and variance values of the narrowness
and width of the individual parabolic segments for classification. The re-
sults mentioned above emphasize the difficulties involved in the benign-versus-
malignant classification of masses based only on morphological features.
Other methods developed to detect distortions in mammographic images
as a result of the presence of masses have included steps to follow the mor-
phological signs or orientations of masses during various stages of detec-
tion [339, 635, 1119].
In the work of Rangayyan et al. [166, 345], the shape parameters cf, fcc,
and SI (see Chapter 6) were applied to classify a set of contours of 28 benign
breast masses and 26 malignant tumors. Figure 6.27 shows the 54 contours
arranged in the order of increasing shape complexity as characterized by the
feature vector (cf, fcc, SI); Figure 6.28 shows a scatter plot of the three fea-
tures. The features were used in the BMDP 7M stepwise discriminant anal-
ysis program [674] to perform pattern classification. The program realizes a
jack-knife validation procedure using the leave-one-out algorithm. The clas-
sification performance of the features was validated using ROC methodology.
ROC plots were obtained by using the BMDP software package and varying
the cut points for the benign and malignant prior probabilities between 0 and
1 in steps of 0.1. The procedure does not affect the discriminant ratings of
the variables and influences only the computation of the constant term in the
discriminant function, thus resulting in varying classification accuracies. The
ROC curves for some of the feature combinations are shown in Figure 12.26.
The area Az under each ROC curve was computed using the trapezoidal rule.
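The trapezoidal-rule computation of Az can be sketched as follows; the operating points in the example are hypothetical, not those reported in the study.

```python
def trapezoidal_az(fpf, tpf):
    """Area under an empirical ROC curve by the trapezoidal rule.
    fpf and tpf are matched lists of operating points
    (1 - specificity, sensitivity); (0, 0) and (1, 1) anchor the
    two ends of the curve."""
    pts = sorted(zip([0.0] + list(fpf) + [1.0],
                     [0.0] + list(tpf) + [1.0]))
    az = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        az += (x1 - x0) * (y0 + y1) / 2.0   # one trapezoid per segment
    return az

# Hypothetical operating points obtained by varying the cut point.
az = trapezoidal_az([0.1, 0.25, 0.5], [0.6, 0.8, 0.95])
print(az)   # → 0.84125
```

With only a few operating points, the trapezoidal rule underestimates the area of a smooth, convex ROC curve; fitted (binormal) ROC models are often preferred when the number of cut points is small.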
Table 12.12 provides details on the benign-versus-malignant classification
performance of various combinations of the three features (cf, fcc, SI). All
of the features, individually and in different combinations, could effectively
discriminate circumscribed benign masses from spiculated malignant tumors.
The parameter cf, being a global measure of shape complexity, failed to clas-
sify almost all of the spiculated benign masses, and classified four out of seven cir-
cumscribed malignant tumors correctly.
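As a concrete illustration of such a global measure, a normalized compactness of the assumed form cf = 1 − 4πA/P² can be computed directly from a closed contour; the exact normalization used for cf in this book is given in Chapter 6, so this form is an assumption for illustration.

```python
import numpy as np

def compactness_cf(x, y):
    """Normalized compactness of a closed contour, computed here as
    cf = 1 - 4*pi*A / P**2: approximately 0 for a circle and
    approaching 1 for rough, spiculated, or elongated contours."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Shoelace formula for the enclosed area of the closed polygon.
    A = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter, including the closing segment from last to first point.
    P = np.hypot(np.diff(np.append(x, x[0])),
                 np.diff(np.append(y, y[0]))).sum()
    return 1.0 - 4.0 * np.pi * A / P ** 2

# A near-circular contour scores close to 0; an elongated one higher.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(round(compactness_cf(5 * np.cos(t), 5 * np.sin(t)), 3))   # → 0.0
```

Because cf depends only on the global area-to-perimeter relationship, two contours with very different local boundary detail can share the same value, which is consistent with its failure on the spiculated benign masses noted above.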
Concavity analysis is sensitive to the presence of spicules in a mass bound-
ary. Fractional concavity (fcc) increases with the number of spicules and their
length; however, it does not take into account the degree of spicularity of the
individual spicules present in the boundary. Circumscribed malignant tumors
typically contain a large number of microlobulations in their boundaries that
could appear as alternating concave and convex segments; hence, fcc could
distinguish six out of the seven circumscribed malignant tumors correctly, and
resulted in a high sensitivity of 88.5%. However, fcc does not represent the
characteristics of the spicules intricately in terms of their depth and narrow-
ness; hence, it failed to distinguish spiculated benign masses from spiculated
malignant tumors, and resulted in a poor specificity of 60.7%, with Az = 0.75.
Because the degree of spiculation of boundary segments is characterized by
the spiculation index, a significant portion of the spiculated benign cases (five
out of 12) with large convexities and a few narrow spicules were correctly clas-
sified as benign by SI. The boundary of one such benign mass is shown
in Figure 12.27 as an example. Both of the parameters cf and fcc computed
for this boundary are high, and misclassified the mass as malignant. How-
ever, a careful observation of the boundary reveals that although the mass is

FIGURE 12.26
ROC plots for SI, fcc, (SI, fcc), and (SI, fcc, cf). SI: spiculation index;
fcc: fractional concavity; C: modified compactness cf. The set of contours
used is illustrated in Figure 6.27. Reproduced with permission from R.M.
Rangayyan, N.R. Mudigonda, and J.E.L. Desautels, "Boundary modeling and
shape analysis methods for classification of mammographic masses", Medical
and Biological Engineering and Computing, 38:487–496, 2000. © IFMBE.

TABLE 12.12
Numbers of Masses and Tumors Correctly Classified as Benign or
Malignant by the Three Shape Factors cf, fcc, and SI.

                     Benign          Malignant        % Accuracy
Features           Circ.   Spic.   Circ.   Spic.    Ben.   Mal.   Total    Az
SI                 16/16   5/12    3/7     19/19    75.0   84.6   79.6    0.82
fcc                15/16   2/12    6/7     17/19    60.7   88.5   74.1    0.75
cf                 15/16   1/12    4/7     19/19    57.1   88.5   72.2    0.76
cf, SI             16/16   5/12    3/7     19/19    75.0   84.6   79.6    0.80
fcc, SI            16/16   4/12    3/7     19/19    71.4   84.6   77.8    0.80
cf, fcc            15/16   1/12    5/7     19/19    57.1   92.3   74.1    0.72
cf, fcc, SI        16/16   4/12    5/7     19/19    71.4   92.3   81.5    0.79

Circ.: circumscribed; Spic.: spiculated; Ben.: benign; Mal.: malignant. See
Figure 6.27 for an illustration of the contours. Reproduced with permission
from R.M. Rangayyan, N.R. Mudigonda, and J.E.L. Desautels, "Boundary
modeling and shape analysis methods for classification of mammographic
masses", Medical and Biological Engineering and Computing, 38:487–496,
2000. © IFMBE.
spiculated in nature, a major portion of its boundary does not possess sharp
and narrow spicules. Hence, the parameter SI, which is sensitive to narrow
spicules, correctly classified this mass as a benign mass.

FIGURE 12.27
Shape factors for the boundary of a spiculated benign mass: cf = 0.8, fcc =
0.53, and SI = 0.3. Only SI correctly classified the mass as benign [166].

SI provided an improved specificity of 75% in classifying the benign masses
in the database used. This is an encouraging result because none of the other
shape parameters in earlier studies [163, 354] could be effective in separating
the spiculated benign cases in the MIAS database. Even though SI failed to
correctly classify four of the seven circumscribed malignant cases (although
with narrow classification margins), it resulted in the best classification ac-
curacy among all combinations of the three features, with Az = 0.82. The
misclassified cases, although malignant, did not have prominent spicules in
their boundaries. It may be observed from Table 12.12 that combining SI
with the other features generally yielded improved classification results with
high values of Az; the ROC curves for some of the feature combinations are
shown in Figure 12.26. Because spicules are characterized and emphasized by
the narrowness of their angles, SI is particularly sensitive to the stellate or
star-like distortions in malignant tumors; the recognition of such distortion
has been the focus of several studies on tumor detection [339, 635, 644].

A benign-versus-malignant classification accuracy of 82% was obtained,
with Az = 0.79, by combining fcc and SI, which are sensitive to local varia-
tions in a boundary, with the global shape feature cf. A total of eight FPs
out of the 28 benign masses and two FNs out of the 26 malignant tumors were
observed with the combination of all three features.
SI, fcc, and cf individually resulted in benign-versus-malignant classifica-
tion accuracies of 80%, 74%, and 72%, respectively. Using the same dataset
of 54 contours, Rangayyan et al. [163] and Menut et al. [354] reported benign-
versus-malignant classification accuracies of no more than 76% using various
combinations of several other shape factors.

In the linear discriminant analysis model, the criterion to optimize the fea-
ture weights is based on maximizing the variance between the classes while
minimizing the variance within each class. Also, the size of each class influ-
ences the computation of the feature weights. Although the dataset in the
study being described here is evenly divided between benign masses (28) and
malignant tumors (26), it includes an unusual proportion of spiculated benign
masses from the MIAS database (12 out of the 28 benign masses in a total of 54
cases). Considering the above, the performance of the features as described
above could be regarded as good. The addition of other features based
upon density variations and textural information [163, 165, 275, 676] may
result in improved benign-versus-malignant discrimination; see Section 12.12.
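The optimization criterion described above is that of Fisher's linear discriminant; a minimal sketch with synthetic data (not the mass features) is:

```python
import numpy as np

def fisher_direction(Xa, Xb):
    """Fisher linear discriminant weights: maximize the between-class
    separation relative to the within-class scatter,
    w ∝ Sw^{-1} (ma - mb)."""
    Xa, Xb = np.asarray(Xa, float), np.asarray(Xb, float)
    Sw = (np.cov(Xa, rowvar=False) * (len(Xa) - 1)
          + np.cov(Xb, rowvar=False) * (len(Xb) - 1))   # pooled scatter
    w = np.linalg.solve(Sw, Xa.mean(axis=0) - Xb.mean(axis=0))
    return w / np.linalg.norm(w)

# Synthetic two-class data: the classes differ only along the first
# feature, so the discriminant should point mostly along that axis,
# down-weighting the noisy second feature.
rng = np.random.default_rng(0)
Xa = rng.normal([0.0, 0.0], [0.5, 3.0], size=(50, 2))
Xb = rng.normal([2.0, 0.0], [0.5, 3.0], size=(50, 2))
w = fisher_direction(Xa, Xb)
```

Note how the within-class scatter in the denominator is what makes a feature with a large spread receive a small weight, even when the classes are unbalanced in size.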

12.12 Application: Content-based Retrieval and Analysis of Breast Masses
Alto et al. [528, 529, 1120, 1121] applied combinations of the three shape fac-
tors cf, fcc, and SI; 14 texture measures as defined by Haralick [441, 442] (see
Section 7.3); and four measures of edge sharpness as defined by Mudigonda
et al. [165] (see Section 7.9.2) for content-based image retrieval (CBIR) and
pattern classification studies with a set of 57 ROIs of breast masses and tu-
mors. The cases were selected from Screen Test: Alberta Program for the
Early Detection of Breast Cancer [61], and include 20 cases (22 breasts af-
fected) exhibiting a total of 28 masses visible as 57 ROIs on 45 mammograms.
Twenty of the ROIs correspond to biopsy-proven malignant tumors, and the
remaining 37 to biopsy-proven benign masses. The film mammograms were
digitized using the Lumiscan 85 scanner at a resolution of 50 μm with 12 bits
per pixel. The 57 ROIs are shown in Figure 12.4, arranged in the order of de-
creasing acutance. Figure 12.5 shows the contours of the 57 masses arranged
in increasing order of fcc. Figures 12.4 and 12.5 demonstrate that a few of
the benign masses and malignant tumors may appear to be out of place if
their classification is based only on the features used in the two illustrations
of rank-ordering.
Figure 12.28 illustrates the region corresponding to a macrolobulated be-
nign mass; the contour of the mass with its concave and convex parts iden-
tified for the computation of fcc; the ribbon of pixels extracted to compute
the texture features; and the normals to the contour for the computation of
edge-sharpness measures. A single combined GCM was created from the pixel
values extracted in the four directions with respect to the pixel under con-
sideration. Fourteen texture features were computed according to Haralick's
definitions [441, 442] for the mass ribbons. In addition to the features for the
original ribbons at a resolution of 50 μm, the texture features were computed
using Gaussian-smoothed and down-sampled versions of the ribbons at pixel
resolutions of 200 μm and 800 μm.
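A simplified sketch of a combined GCM and of Haralick's sum entropy follows; the quantization to 8 gray levels, the unit pixel distance, and the base-2 logarithm are assumptions for illustration, not the exact settings used for the mass ribbons.

```python
import numpy as np

def combined_gcm(img, levels=8):
    """Gray-level co-occurrence matrix accumulated over the four
    standard directions (0, 45, 90, 135 degrees) at unit distance,
    symmetrized and normalized to sum to 1."""
    g = np.asarray(img, dtype=int)
    P = np.zeros((levels, levels))
    rows, cols = g.shape
    for dr, dc in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    P[g[r, c], g[r2, c2]] += 1
                    P[g[r2, c2], g[r, c]] += 1   # symmetric entry
    return P / P.sum()

def sum_entropy(P):
    """Haralick's sum entropy (F8): the entropy of p_{x+y}, the
    distribution of the sum of the two co-occurring gray levels."""
    levels = P.shape[0]
    p_xy = np.zeros(2 * levels - 1)
    for i in range(levels):
        for j in range(levels):
            p_xy[i + j] += P[i, j]
    p = p_xy[p_xy > 0]
    return float(-np.sum(p * np.log2(p)))

# Usage on a small random stand-in for a ribbon of pixels,
# quantized to 8 gray levels.
ribbon = np.random.default_rng(0).integers(0, 8, size=(32, 32))
f8 = sum_entropy(combined_gcm(ribbon))
```

A homogeneous region concentrates the GCM near its diagonal and yields a low sum entropy, whereas a heterogeneous internal texture spreads p_{x+y} and raises it.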

12.12.1 Pattern classification of masses


Pattern classification experiments were conducted using linear discriminant
analysis with the 14 texture features at pixel resolutions of 50, 200, and 800 μm.
The results indicated 200 μm to be the most suitable resolution for discrim-
ination between benign and malignant masses. Stepwise logistic regression
was performed using the SPSS software package [1092, 1093] to select a sub-
set of features from the three separate sets of shape, edge-sharpness, and
texture features. As a result of this evaluation, the shape factor of fractional
concavity fcc, the edge-sharpness feature of acutance A as defined in Equa-
tion 2.110, and the texture feature of sum entropy F8 were selected. (Chan
et al. [450] found that the three texture features of correlation, difference
entropy, and entropy performed better in the classification of breast masses
than other combinations of one to eight texture features selected in a specific
sequence.) Although the shape features, in particular fcc, gave high sensitiv-
ity and specificity, they are highly dependent on the accuracy of the contour,
which is not easily drawn even by an experienced radiologist. The results of
automatic segmentation methods still need to be confirmed by an expert ra-
diologist, and are often subject to errors and artifacts. It should be observed
that large populations of pixels around the given contour are used in the pro-
cedures to compute the texture and edge-sharpness measures. The definitions
of the ribbon for texture measures and of the set of normals to the contour for
edge-sharpness measures make these parameters less sensitive to inaccuracies in
the contour than the shape factors. These observations lend support to the
argument in favor of combining features representing multiple characteristics.
A scatter plot of the three features [fcc, A, F8] of the 57 masses is given
in Figure 12.6. It is seen that while fcc separates the benign and malignant
categories well, the texture feature F8 does not possess good discriminant
capability. The measure of acutance A indicates an intermediate degree of
discriminant capability. A scatter plot of the three shape factors [fcc, cf, SI]
of the 57 masses is given in Figure 12.7. Each of the three shape factors
demonstrates high discriminant capability.
The benign-versus-malignant discriminatory performance of the features
was validated using several approaches, including linear discriminant analysis,
logistic regression, the Mahalanobis distance, k-NN, and the ROC methodology.
In the pattern classification experiments conducted with the Mahalanobis dis-
tance, each mass was treated in turn as the sample to be classified. Mean vec-
tors and pooled covariance matrices were computed using the feature vectors
of the remaining benign and malignant samples. The Mahalanobis distance
was computed from the sample on hand to the mean vectors of the benign
and malignant classes; the sample was assigned to the class with the smaller
distance.

FIGURE 12.28
(a) ROI of the benign mass b164ro94 (see Figure 12.4). (b) ROI overlaid with
the contour, demonstrating concave parts in black and convex parts in white.
(c) Ribbon of pixels for the purpose of computing texture measures, derived by
dilating and eroding the contour in (b). (d) Normals to the contour, shown at
every tenth point on the contour, used for the computation of edge-sharpness
measures. See also Figures 7.24 and 7.25. Reproduced with permission from
H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and
analysis of mammographic masses", Journal of Electronic Imaging, in press,
2005. © SPIE and IS&T.
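The leave-one-out Mahalanobis-distance procedure described above can be sketched as follows; the synthetic feature vectors stand in for the actual mass features.

```python
import numpy as np

def mahalanobis_loo(X, y):
    """Leave-one-out classification with the Mahalanobis distance:
    each sample is held out, class mean vectors and a pooled
    covariance matrix are estimated from the remaining samples, and
    the held-out sample is assigned to the nearer class mean."""
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    pred = np.empty_like(y)
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        Xr, yr = X[keep], y[keep]
        # Pooled within-class scatter; its global scale factor does
        # not affect which class mean is nearer.
        resid = np.vstack([Xr[yr == c] - Xr[yr == c].mean(axis=0)
                           for c in classes])
        Sinv = np.linalg.pinv(np.cov(resid, rowvar=False))
        dists = [(X[i] - Xr[yr == c].mean(axis=0)) @ Sinv
                 @ (X[i] - Xr[yr == c].mean(axis=0)) for c in classes]
        pred[i] = classes[int(np.argmin(dists))]
    return pred

# Synthetic stand-in for the feature vectors: two well-separated
# classes should be classified perfectly under leave-one-out.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
accuracy = float(np.mean(mahalanobis_loo(X, y) == y))
```

Re-estimating the means and covariance without the held-out sample is what keeps the leave-one-out accuracy estimate unbiased by the test sample itself.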
The sensitivity and specificity values for the ROC plots were obtained with
the BMDP 7M stepwise discriminant analysis program [674] by varying the
cut points for the benign and malignant prior probabilities between 0 and 1 in
steps of 0.1. The program realizes a jack-knife validation procedure using
the leave-one-out algorithm. Figure 12.29 shows the ROC curves for fcc, A,
and F8, individually and combined. The area Az under each ROC curve was
computed using the trapezoidal rule. The results of all of the experiments
mentioned above are listed in Table 12.13. The sensitivity and specificity
values obtained at a prior probability value of 0.5 for both the benign and
malignant groups are also shown in Table 12.13 for the sake of illustration.

The shape factor fcc has demonstrated consistently high classification accu-
racy regardless of the pattern classification method. The classification perfor-
mance of the texture features is poor. The measure of acutance has provided
slightly better accuracy than the texture measures. The addition of texture
and edge-sharpness measures did not significantly alter the performance of fcc.
(See Sections 12.3.1 and 12.4.2 for examples of application of other pattern
classification methods to the same dataset.)

12.12.2 Content-based retrieval


Systems for CBIR from multimedia databases offer advantages over traditional
archiving systems such as single-media, text-based, relational, and hierarchi-
cal databases [1122, 1123, 1124, 1125, 1126, 1127, 1128, 1129, 1130, 1131].
The retrieval of relevant multimedia information (such as images, video clips,
attributes, and numerical data) from large databases should be accomplished
effectively and efficiently in order to assist both the novice and the expert
user. Content-based retrieval methods use image content, in the form of fea-
ture values, image attributes, and other image descriptors, to identify relevant
images or image-related information in response to a query. A query may be
an example image, a hand-drawn outline of a shape, a natural language query,
or a selection from a set of possible categories provided by the system's user
interface. A number of different retrieval methods have been proposed in the
literature: some have utilized shape factors, whereas others have used other
image attributes or textual annotation.

FIGURE 12.29
ROC plots for [fcc, A, F8], individually and combined. The ROC curves for
fcc and [fcc, A, F8] overlap completely. Reproduced with permission from H.
Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and
analysis of mammographic masses", Journal of Electronic Imaging, in press,
2005. © SPIE and IS&T.
TABLE 12.13
Accuracy of Classification of Masses as Benign or Malignant Using Combinations of the Shape Factor fcc, Acutance
A, and the Texture Measure Sum Entropy F8 with Pattern Classification Methods and CBIR (Precision) [529].

              Logistic regression   Mahalanobis distance   Linear discriminant analysis       k-NN        Precision
Features      Sens   Spec   Avg     Sens   Spec   Avg      Sens    Spec   Avg     Az      k=5    k=7    k=5    k=7
fcc           90     97.3   94.7    90     97.3   94.7     100.0   97.3   98.2    0.99    94.7   94.7   95.1   95.2
A             50     94.6   78.9    75     67.6   70.0     75.0    73.0   73.7    0.74    68.4   73.7   65.3   67.4
F8            30     86.5   66.7    65     56.8   59.6     75.0    54.1   61.4    0.68    63.2   54.4   58.2   60.9
fcc, A        90     97.3   94.7    90     97.3   94.7     95.0    97.3   96.5    0.98    96.5   94.7   93.0   91.2
fcc, F8       90     97.3   94.7    90     97.3   94.7     100.0   97.3   98.2    0.99    94.7   94.7   95.1   93.7
A, F8         55     86.5   75.4    60     70.3   66.7     75.0    73.0   73.7    0.76    75.4   75.4   68.4   68.4
fcc, F8, A    90     97.3   94.7    95     97.3   96.5     100.0   97.3   98.2    0.99    96.5   96.5   90.9   91.2
14 texture    *      *      *       70     50.0   64.9     65.0    64.9   64.9    0.67    #      #      #      #

*: Logistic regression identified the texture feature F8 as the only significant feature; results were computed for F8 only.
#: Experiments not conducted for this feature set. Sens = sensitivity; Spec = specificity; Avg = average accuracy, as
percentages. See Figures 12.4 and 12.5 for illustrations of the masses and their contours.

A review of CBIR systems by Gudivada and Raghavan [1123] outlines pre-
vious approaches to content-based retrieval, and expresses the need to utilize
features from a variety of approaches based on attributes, feature extrac-
tion, or object recognition, for information representation. Gudivada and
Raghavan [1123] and Yoshitaka and Ichikawa [1122] indicate that conventional
database systems are not well suited to handle multimedia data containing
images, video, and text. Hence, it is necessary to explore more flexible query
and retrieval methods.
Representation of breast mass images for CBIR: The first step in
the development of a CBIR system is to represent the data or information in a
meaningful way in the database, so that retrieval is facilitated for a given appli-
cation [529, 1085, 1121]. The representation of breast masses and tumors in a
database requires the design of a reasonable number of descriptors to represent
the image features of interest (or diagnostic value) with minimal loss of infor-
mation. It is well established that most benign masses have contours that are
well-circumscribed, smooth, and round or oval, and have a relatively ho-
mogeneous internal texture. On the other hand, malignant tumors typically
exhibit ill-differentiated and rough or spiculated contours, with a heteroge-
neous internal texture [54, 55]. For these reasons, shape factors and texture
measures have been proposed for differentiating between benign masses and
malignant tumors [163, 165, 275, 345, 354, 428, 451].

Various researchers have chosen to represent the contours of objects in a
variety of ways, some of which include: coding the object's contour as an
ordered sequence of points or high-curvature points [1124, 1125, 1126]; using
chain-code histograms [1125, 1126, 1127, 1128]; and using shape descriptors
such as compactness [163, 274, 345, 428, 1118, 1132], concavity/convexity
[345, 354, 1132], moments [163, 274, 1128, 1132], Fourier descriptors [274,
428, 1124, 1126], the spiculation index [345], and the wavelet transform modulus-
maxima [1118]. Loncaric [406] gives an overview of shape analysis techniques
from chain codes to fractal geometry. (See Chapter 6 for details on shape
analysis.)
Automatically extracted shapes and parameters are considered to be prim-
itive features, whereas logical features are abstract representations of images
at various levels of detail, and may be synthesized from primitive features.
Logical features require more human intervention and domain expertise;
therefore, there is a higher cost associated with preprocessing the data for
the database. CBIR approaches differ with respect to the image features
that are extracted, the level of abstraction of the features, and the degree of
desired domain independence [1123]. Depending upon the application, object-
based descriptors such as tumor shape may be preferred to attribute-based
descriptors such as color or texture; keywords may also be used where ap-
propriate [1129, 1130]. In the work of Alto et al. [529], the features used
are related to radiologically established attributes of breast masses. When
the images in a database are indexed with objective measures of diagnostic
features, the database may be referred to as an indexed atlas [1085, 1133].
The query process: Once the mammographic masses have an appropriate
representation in a database, the next step in the development of a CBIR
system is to design the query techniques to fit the needs of the end-user. In a
CAD application, the end-user could be a radiologist, a radiology intern, or a
physician. In standard text-based databases, queries are generally comprised
of keywords, natural language queries, or browsing procedures (that is, query
by subject). For image or multimedia databases, the same methods may
apply only if there are searchable keywords or textual descriptors associated
with the images. In a "query by example", the user specifies a condition by
giving examples of the desired image or object, either by cutting and pasting
an image or by sketching the example object's contour [1122]. One of the
best-known commercial CBIR systems is Query by Image Content (QBIC),
developed at IBM [1129]. QBIC uses visual content such as color percentages,
color layout, and texture extracted from images of art collections. A CBIR
system developed by Srihari [1130], known as Piction, contains images of
newspaper photos annotated with their associated captions. Queries based
on text and image features extracted from the photos may then be used to
identify human faces found in the newspaper photographs.
A comprehensive list of query classes is given by Gudivada and Ragha-
van [1123] as: color, texture, sketch, shape, volume, spatial constraints, brows-
ing, objective attributes, subjective attributes, motion, text, and domain con-
cepts. Fewer classes may be used when the database is highly domain-specific,
and more are needed when the database is of general scope.
When a query is made in a CBIR system, the retrieved results are typically
presented as a set or series of images that are rank-ordered by their degree
of similarity (or by a distance measure, such as the Euclidean, Manhattan, or
Mahalanobis distance, as an indicator of dissimilarity) with respect to the
query image. This is different from retrieval in text-based database systems,
which generally provide results with an exact match (that is, a single word,
set of words, or a phrase). The CBIR work of Alto et al. was focused on
retrieving masses similar to the query, of established diagnosis, to assist the radiologist by
suggesting a probable diagnosis for the query case on hand. The concept of
similarity is especially pertinent in such an application because no two breast
masses may be expected to be identical, and a perfect or exact match to a
query would be improbable in practice.
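Such distance-based rank-ordering can be sketched as follows; the feature vectors are hypothetical stand-ins for measures such as (fcc, A, F8).

```python
import numpy as np

def retrieve(query, features, k=5):
    """Rank-order database entries by Euclidean distance to the
    query in feature space and return the indices of the k nearest;
    a minimal sketch of distance-based retrieval for CBIR."""
    d = np.linalg.norm(np.asarray(features, float)
                       - np.asarray(query, float), axis=1)
    order = np.argsort(d, kind="stable")   # nearest first
    return list(order[:k])

# Hypothetical feature vectors; the query matches index 0 exactly,
# so it heads the ranked list, followed by its nearest neighbor.
db = [[0.10, 0.70, 1.9], [0.80, 0.30, 2.4],
      [0.15, 0.65, 2.0], [0.90, 0.20, 2.6]]
print(retrieve([0.10, 0.70, 1.9], db, k=2))   # → [0, 2]
```

In practice, the features would be normalized (or a Mahalanobis distance used) so that no single measure dominates the ranking merely because of its numeric range.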
Some of the shape-matching procedures suggested by Trimeche et al. [1125]
require a comparison of the vertices of polygonal models of the query contour
and the database contours. Each vertex is represented by a set of values (such
as scale, angle, ratio of consecutive segments, and the ratio to the overall
length). The feature vectors could be excessively lengthy if the contour has
many vertices, such as that of a spiculated mass. A matrix containing all possible
matches between the vertices of the query shape and each of the candidate
contours may then be created. The polygonal model method produced good
results with fish contours. The use of shape factors to represent the contours
1174 Biomedical Image Analysis
of masses, as in the work of Alto et al., simpli es the process of comparative
analysis.
Visualization of the query results is accomplished by rank-ordering the
retrieved results from the minimum to the maximum distance and presenting
the top k objects to the user, where k could be 3, 5, 7, ..., N. One suggested
method of visualization of the retrieved results that may enhance the user's
perception of the overall information presented was defined by Moghaddam
et al. [1134] as a Splat: the retrieved images were displayed in rank order of
their visual similarities, with their placement on the page with respect to the
query being dictated by their mutual similarities.
Evaluation of retrieval: An important step in developing a CBIR system
is to evaluate its efficiency with respect to the retrieval of relevant information.
Measures of precision and recall have been proposed to assess the performance
of general information retrieval systems, based upon the following
definitions [1131]:

Correct detections:
    A_k = \sum_{n=1}^{k} V_n,                                    (12.93)
where k is the number of retrieved objects, and V_n \in \{0, 1\}, with V_n = 1
if the retrieved object is relevant to the query and V_n = 0 if it is
irrelevant. In the present application, a relevant object is a retrieved
benign mass for a benign query sample; a retrieved malignant tumor
would be considered irrelevant. In the case of a malignant query sample,
a retrieved benign mass would be irrelevant and a malignant tumor
would be relevant.

False alarms:
    B_k = \sum_{n=1}^{k} (1 - V_n).                              (12.94)

Misses:
    M_k = \left( \sum_{n=1}^{N} V_n \right) - A_k,               (12.95)
where N is the total number of objects in the database.

Correct dismissals:
    D_k = \left( \sum_{n=1}^{N} (1 - V_n) \right) - B_k.         (12.96)

Recall, defined as the ratio of the number of relevant retrieved objects
to all relevant objects in the database, and computed as
    R_k = \frac{A_k}{A_k + M_k}.                                 (12.97)

Precision, defined as the ratio of the number of relevant retrieved objects
to all retrieved objects, and computed as
    P_k = \frac{A_k}{A_k + B_k}.                                 (12.98)

Fallout, defined as the ratio of the number of retrieved irrelevant objects
to all irrelevant objects in the database, and computed as
    F_k = \frac{B_k}{B_k + D_k}.                                 (12.99)
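The counts and ratios defined in Equations 12.93 through 12.99 follow directly from the rank-ordered relevance flags V_n. The sketch below is illustrative (the function and variable names are hypothetical); it assumes the relevance list covers the entire database in retrieval-rank order, with the first k entries forming the retrieved set:

```python
def retrieval_measures(relevance, k):
    """Compute recall, precision, and fallout (Equations 12.93-12.99)
    from a rank-ordered list of 0/1 relevance flags V_n over the whole
    database; the first k entries are the retrieved objects."""
    A_k = sum(relevance[:k])                        # correct detections
    B_k = k - A_k                                   # false alarms
    total_relevant = sum(relevance)                 # all relevant objects
    M_k = total_relevant - A_k                      # misses
    D_k = (len(relevance) - total_relevant) - B_k   # correct dismissals
    recall = A_k / (A_k + M_k)
    precision = A_k / (A_k + B_k)
    fallout = B_k / (B_k + D_k)
    return recall, precision, fallout

# Ten database objects, four relevant; the top-5 retrieval finds three of them.
V = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
r, p, f = retrieval_measures(V, k=5)   # recall 3/4, precision 3/5, fallout 2/6
```

Evaluating these measures over a range of k yields the precision-versus-recall and detection-versus-fallout curves discussed next.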
The following plots may be used to evaluate the effectiveness of CBIR
systems:
Retrieval effectiveness: precision versus recall.
ROC: correct detections versus false alarms.
Relative operating characteristics: correct detections versus fallout.
Response ratio: B_k / A_k versus A_k.
In general, an effective CBIR system will demonstrate high precision for all
values of recall [1131].
Results of CBIR with breast masses: Alto et al. [529] applied a
content-based retrieval algorithm to the 57 masses and their contours shown
in Figures 12.4 and 12.5. The retrieval algorithm uses the Euclidean distance
between a query sample's feature vector and the feature vector of each of the
remaining masses in the database, and rank-orders the masses corresponding
to the vectors that are most similar to the query vector (that is, the shortest
Euclidean distance). The rank-ordered masses are presented to the user,
annotated with the biopsy-proven diagnosis for each retrieved mass. The 57
masses were each used, in turn, as the query sample.
Figure 12.30 shows the contours of the 57 masses rank-ordered by the
Euclidean distance from the origin in the three-feature space of [fcc, cf, SI]. This
is equivalent to sorting the contours by the magnitudes of the feature vectors
[fcc, cf, SI]. Observe that the use of three shape factors has led to a more
comprehensive characterization of shape roughness than only one (fcc) as in
Figure 12.5, resulting in a different order of sorting.
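Sorting by the magnitude of the feature vector, as in Figure 12.30, amounts to ranking by Euclidean distance from the origin of the feature space. A minimal sketch, with made-up feature values:

```python
import numpy as np

# Each row is a hypothetical [fcc, cf, SI] feature vector for one contour;
# smaller magnitudes correspond to smoother shapes.
features = np.array([[0.10, 0.05, 0.02],   # nearly circular contour
                     [0.60, 0.40, 0.30],   # rough (spiculated) contour
                     [0.25, 0.20, 0.10]])  # mildly rough contour
magnitudes = np.linalg.norm(features, axis=1)  # distance from the origin
order = np.argsort(magnitudes)                 # smoothest to roughest
```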
Figures 12.31 through 12.34 show the retrieval results for the four masses
illustrated in Figure 12.2 using various feature vectors, including fcc, A, and
F8. (In each case, the first mass at the left is the query sample. The retrieved
samples are arranged in increasing order of distance from the query sample,
from left to right. The contour of the mass in each ROI is provided above
the ROI.) The results indicate that the masses retrieved and their sequence
b161lc95 0.12  b145lc95 0.12  b62rc97e 0.13  b158lo95 0.14  b164rx94 0.14  b145lo95 0.15
b62lx97 0.15   b148rc97 0.15  b62rc97a 0.15  b62ro97e 0.16  b62lo97 0.16   b148ro97 0.16
b62rc97d 0.17  b110rc95 0.18  b62rc97b 0.18  b62ro97b 0.19  b62ro97f 0.2   b164rc94 0.2
b161lo95 0.21  b157lo96 0.21  b110ro95 0.22  b166lo94 0.22  b155rc95 0.22  b62ro97a 0.27
b62ro97d 0.28  b62lc97 0.28   b62rc97f 0.28  b166lc94 0.29  b146ro96 0.29  b146rc96 0.31
b157lc96 0.31  b62rc97c 0.32  b158lc95 0.34  b62ro97c 0.37  m62lx97 0.38   b164ro94 0.4
m63ro97 0.42   b64rc97 0.45   m23lc97 0.56   b155ro95 0.57  m63rc97 0.64   m51rc97 0.82
m58rm97 0.9    m23lo97 0.94   m61lc97 1      m62lo97 1.1    m22lc97 1.2    m61lo97 1.2
m55lc97 1.2    m51ro97 1.2    m59lo97 1.2    m22lo97 1.2    m55lo97 1.3    m58ro97 1.3
m59lc97 1.3    m58rc97 1.3    m64lc97 1.4
FIGURE 12.30
Contours of 57 breast masses, including 37 benign masses and 20 malignant tumors.
The contours are arranged in the order of increasing magnitude of the feature vector
[fcc, cf, SI], which is given next to each sample. Note that the masses and their
contours are of widely differing size, but have been scaled to the same size in the
illustration. For details regarding the case identifiers, see Figure 12.2. See also
Figure 12.5. Figure courtesy of H. Alto [528].
depend upon the features used to characterize and index the masses. Regardless,
all of the results illustrated clearly lead to decisions that agree with the
known diagnoses of the query samples, except for the case in Figure 12.33 (b).
Although the texture and edge-sharpness measures resulted in poor
classification accuracies on their own, the results of retrieval using both of the
features with the shape factor fcc indicate the need to include these measures
so as to provide a broader scope of representation of radiographic features
than shape complexity alone.
The results of content-based retrieval, as illustrated in Figures 12.31 - 12.34,
lend themselves easily to pattern classification with the k-NN method. The k-NN
method may be applied by simple visual inspection of the first k cases in the
results of retrieval; the classification of the query sample is made based upon
the known classification of the majority of the first k objects. Alto et al.
applied the k-NN method to the retrieval results with k = 5, 7, 9, and 11.
Correct detections in these cases refer to the retrieval of at least 3, 4, 5, or 6,
respectively, correct cases by virtue of their diagnosis corresponding to the
known diagnosis of the query (test) sample. The results of k-NN analysis and
retrieval precision are presented in Table 12.13 for k = 5 and 7. The high
levels of classification accuracy and retrieval precision with the use of shape
factors indicate the importance of shape in the analysis of breast masses and
tumors. A study by Sahiner et al. [428] has also indicated the importance of
shape parameters in the classification of breast masses and tumors.
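The majority-vote step described above can be sketched as a generic k-NN vote over the retrieval output; the function name and the labels shown are made up for illustration:

```python
from collections import Counter

def knn_vote(retrieved_labels, k):
    """Classify a query by majority vote over the biopsy-proven labels
    of the first k retrieved cases."""
    votes = Counter(retrieved_labels[:k])
    return votes.most_common(1)[0][0]   # label with the most votes

# Hypothetical rank-ordered labels of the retrieved cases for one query.
labels = ['malignant', 'malignant', 'benign', 'malignant',
          'benign', 'malignant', 'benign']
decision = knn_vote(labels, k=5)   # 3 of the first 5 cases are malignant
```

With an odd k and two classes, ties cannot occur, which is one reason odd values such as 5, 7, 9, and 11 are used.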
See Zheng et al. [1135] for the application of CBIR to image analysis in
pathology (histology).
12.12.3 Extension to telemedicine
The concepts of an indexed atlas and CBIR may be combined with mobile
software agents for web-based retrieval and analysis of medical images,
telemedicine, and remote medical consultation applications [1085].
Software agents are autonomous, intelligent software objects that can
process, analyze, and make decisions about data [1136, 1137]. A mobile agent is
a self-contained software program that can move within a computer network
and perform tasks for the user. A mobile agent can reduce search time and
function with limited computational resources and low-bandwidth
communication links: this is accomplished by having the agent process or evaluate the
data at the source, and then transmit only the pertinent data to the user.
Mobile agents can serve a variety of functions, and may be used to [1136,
1137, 1138, 1139, 1140]:
search information residing at remote nodes and report back to the
source (information agent);
find under-utilized network resources to perform computationally
intensive processing tasks (computation agent); and
(a) b145lc95  b146rc96  b148rc97  b148ro97  b161lc95  b62rc97e
(b) b145lc95  b62lx97   b166lc94  m63ro97   b146rc96  m23lo97
(c) b145lc95  b62lx97   b164rx94  b148ro97  b110rc95  b155rc95
FIGURE 12.31
Content-based retrieval with the circumscribed benign query sample b145lc95:
(a) using the shape factor fcc only; (b) using acutance A only; and (c) using
the three features [fcc, A, F8]. For details of case identification, see
Figure 12.2. Reproduced with permission from H. Alto, R.M. Rangayyan, and
J.E.L. Desautels, "Content-based retrieval and analysis of mammographic
masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.
(a) b164ro94  b62ro97c  b146ro96  b62ro97a  b158lc95  b62rc97f
(b) b164ro94  b164rc94  b146ro96  b62lc97   m51rc97   b62lo97
(c) b164ro94  b146ro96  b62lc97   b155ro95  m63ro97   m51rc97
FIGURE 12.32
Content-based retrieval with the macrolobulated benign query sample
b164ro94: (a) using the shape factor fcc only; (b) using acutance A only; and
(c) using the three features [fcc, A, F8]. For details of case identification,
see Figure 12.2. Reproduced with permission from H. Alto, R.M. Rangayyan,
and J.E.L. Desautels, "Content-based retrieval and analysis of mammographic
masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.
(a) m51rc97  m23lc97   m55lc97   m59lo97   m23lo97   b64rc97
(b) m51rc97  b164ro94  b164rc94  b146ro96  b62lc97   b62lo97
(c) m51rc97  m23lo97   b164ro94  m63ro97   b146ro96  m23lc97
FIGURE 12.33
Content-based retrieval with the microlobulated malignant query sample
m51rc97: (a) using the shape factor fcc only; (b) using acutance A only; and
(c) using the three features [fcc, A, F8]. For details of case identification,
see Figure 12.2. Reproduced with permission from H. Alto, R.M. Rangayyan,
and J.E.L. Desautels, "Content-based retrieval and analysis of mammographic
masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.
(a) m55lo97  m22lc97  m61lo97  m58ro97  m58rm97  m58rc97
(b) m55lo97  m58rc97  m58ro97  m61lo97  m55lc97  m62lo97
(c) m55lo97  m58rc97  m61lo97  m55lc97  m64lc97  m62lo97
FIGURE 12.34
Content-based retrieval with the spiculated malignant query sample m55lo97:
(a) using the shape factor fcc only; (b) using acutance A only; and (c) using
the three features [fcc, A, F8]. For details of case identification, see
Figure 12.2. Reproduced with permission from H. Alto, R.M. Rangayyan, and
J.E.L. Desautels, "Content-based retrieval and analysis of mammographic
masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.
send messages back and forth between clients residing at various network
nodes (communication agent).
Figure 12.35 shows a schematic representation of the combined use of an
indexed atlas, CBIR, and mobile agents in the context of mammography and
CAD of breast cancer.
The major strength of mobile agents is that they permit program execution
near or at a distributed data source by moving to each site in order to perform
a computational task. In addition, mobile-agent systems typically have
the characteristics of low network traffic, load balancing, fault tolerance, and
asynchronous interaction. Agents can function independently of one another,
as well as cooperate to solve problems. The use of mobile agents with a CBIR
system brings specific benefits and difficulties. For example, mobile agents
can move to sites with better or more data, and faster computers. They can
replicate themselves and use the inherent power of parallelism to improve
productivity. The basic strengths of mobile-agent systems include the inherent
parallelism of multiple agents conducting simultaneous searches, parallel
searching with intelligent preprocessing, and agent-to-agent communication.
Specific difficulties with the mobile-agent paradigm include issues of
security, complexity, and control. Security is important when dealing with
patient information. Data encryption and restricting the agents to operations
within secure networks could address security concerns.
12.13 Remarks
We have studied how biomedical images may be processed and analyzed to
extract quantitative features that may be used to classify the images as well as
lead toward diagnostic decisions. The practical development and application
of such techniques is usually hampered by a number of limitations related
to the extent of discriminant information present in the images selected for
analysis, as well as the limitations of the features designed and computed.
Artifacts inherent in the images or caused by the image acquisition systems
impose further limitations.
The subject of pattern classification is a vast area by itself [402, 1086, 401,
1087]. The topics presented in this chapter provide a brief introduction to the
subject.
A pattern classification system that is designed with limited data and
information about the chosen images and features will provide results that should
be interpreted with due care. Above all, it should be borne in mind that
the final diagnostic decision requires far more information than that provided
by images and image analysis: this aspect is best left to the physician or
health-care specialist in the spirit of computer-aided diagnosis.
FIGURE 12.35
Use of an indexed atlas, CBIR, and mobile agents in the context of mammography
and CAD of breast cancer. Note: "U of C" = University of Calgary, Calgary,
Alberta, Canada. Reproduced with permission from H. Alto, R.M. Rangayyan,
R.B. Paranjape, J.E.L. Desautels, and H. Bryant, "An indexed atlas of digital
mammograms for computer-aided diagnosis of breast cancer", Annales des
Telecommunications, 58(5): 820-835, 2003. © GET - Lavoisier.
12.14 Study Questions and Problems
Selected data files related to some of the problems and exercises are available at
the site
www.enel.ucalgary.ca/People/Ranga/enel697
1. The prototype vectors of two classes of images are specified as Class 1:
[1, 0.5]^T, and Class 2: [3, 3]^T. A new sample vector is given as [2, 1]^T. Give
the equations for two measures of similarity or dissimilarity, compute the
measures for the sample vector, and classify the sample into Class 1 or Class 2
using each measure.
2. In a three-class pattern classification problem, the three decision boundaries
are d1(x) = -x1 + x2, d2(x) = x1 + x2 - 5, and d3(x) = -x2 + 1.
Draw the decision boundaries on a sheet of graph paper.
Classify the sample pattern vector x = [6, 5]^T using the decision functions.
3. Two pattern class prototype vectors are given to you as z1 = [3, 4]^T and
z2 = [10, 2]^T. Classify the sample pattern vector x = [4, 5]^T using (a) the
normalized dot product, and (b) the Euclidean distance.
4. A researcher makes two measurements per sample on a set of 10 normal and
10 abnormal samples.
The set of feature vectors for the normal samples is
{[2, 6]^T, [22, 20]^T, [10, 14]^T, [10, 10]^T, [24, 24]^T,
[8, 10]^T, [8, 8]^T, [6, 10]^T, [8, 12]^T, [6, 12]^T}.
The set of feature vectors for the abnormal samples is
{[4, 10]^T, [24, 16]^T, [16, 18]^T, [18, 20]^T, [14, 20]^T,
[20, 22]^T, [18, 16]^T, [20, 20]^T, [18, 18]^T, [20, 18]^T}.
Plot the scatter diagram of the samples in both classes in the feature-vector
space (on a sheet of graph paper). Design a linear decision function to classify
the samples with the lowest possible error of misclassification. Write the
decision function as a mathematical rule and draw the same on the scatter
diagram.
How many (if any) samples are misclassified by your decision function? Mark
the misclassified samples on the plot.
Two new observation sample vectors are provided to you as x1 = [12, 15]^T
and x2 = [14, 15]^T. Classify the samples using your decision rule.
Now, classify the samples x1 and x2 using the k-nearest-neighbor method,
with k = 7. Measure distances graphically on your graph-paper plot and
mark the neighbors used in this decision process for each sample.
Comment upon the results (whether the two methods resulted in the same
classification result or not) and provide reasons.
12.15 Laboratory Exercises and Projects
1. The file tumor shape1.dat gives the values of several shape factors for 28
benign masses and 26 malignant tumors; the contours are illustrated in
Figure 6.27. See Chapter 6 for details regarding the methods; see the file
tumor shape1.txt for details regarding the data file. Select the three shape
factors (SI, fcc, cf) and form feature vectors for each case. Using each feature
vector as the test case and the remaining vectors in the dataset as the training
set, classify each case as benign or malignant using (a) the Euclidean distance,
(b) the Manhattan distance, and (c) the Mahalanobis distance. Compare the
results with the classification provided in the data file and determine the TPF,
TNF, FPF, and FNF.
Comment upon the performance of the three distance measures.
Repeat the experiment with the data in the file tumor shape2.dat for 37 benign
masses and 20 malignant tumors; the contours are illustrated in Figure 12.5.
See the file tumor shape2.txt for details regarding the data file. Discuss the
differences in the results you obtain with the two datasets.
2. Using the data in the preceding problem, classify each case as benign or
malignant using the k-NN method, with k = 3, 5, and 7. Use the Euclidean
distance. Comment upon the results.
3. The files mfc ben.dat and mfc mal.dat give the values of the three shape
factors (mf, ff, cf) for 64 benign calcifications and 79 malignant calcifications,
respectively. (See Chapter 6 for details.) Design a pattern classification
system using the Mahalanobis distance and evaluate its performance in terms of
the TPF, TNF, FPF, and FNF.
References

1] Lathi BP. Signal Processing and Linear Systems. Berkeley-Cambridge,


Carmichael, CA, 1998.
2] Oppenheim AV, Willsky AS, and Nawab SH. Signals and Systems. Prentice
Hall, Englewood Clis, NJ, 2nd edition, 1997.
3] Barrett HH and Swindell W. Radiological Imaging { Volumes 1 and 2. Aca-
demic, New York, NY, 1981.
4] Cho ZH, Jones JP, and Singh M. Foundations of Medical Imaging. Wiley,
New York, NY, 1993.
5] Macovski A. Medical Imaging Systems. Prentice Hall, Englewood Clis, NJ,
1983.
6] Huda W and Slone R. Review of Radiologic Physics. Williams and Wilkins,
Baltimore, MD, 1995.
7] Oppenheim AV and Schafer RW. Discrete-time Signal Processing. Prentice
Hall, Englewood Clis, NJ, 1989.
8] Gonzalez RC and Woods RE. Digital Image Processing. Prentice Hall, Upper
Saddle River, NJ, 2nd edition, 2002.
9] Hall EL. Computer Image Processing and Recognition. Academic, New York,
NY, 1979.
10] Pratt WK. Digital Picture Processing. Wiley, New York, NY, 2nd edition,
1991.
11] Rosenfeld A and Kak AC. Digital Picture Processing. Academic, New York,
NY, 2nd edition, 1982.
12] Jain AK. Fundamentals of Digital Image Processing. Prentice Hall, Engle-
wood Clis, NJ, 2nd edition, 1989.
13] Sonka M, Hlavac V, and Boyle R. Image Processing, Analysis and Machine
Vision. Chapman & Hall Computing, London, UK, 1993.
14] Robb RA, editor. Three-Dimensional Biomedical Imaging, Volumes I and
II. CRC Press, Boca Raton, FL, 1985.
15] Cooper KE, Cranston WI, and Snell ES. Temperature regulation during
fever in man. Clinical Science, 27(3):345{356, 1964.
16] Cooper KE. Body temperature and its regulation. In Encyclopedia of Human
Biology, volume 2, pages 73{83. Academic, New York, NY, 1997.
17] Sickles EA. Breast thermography. In Feig SA and McLelland R, editors,
Breast Carcinoma: Current Diagnosis and Treatment, pages 227{231. Mas-
son, New York, NY, 1983.

1187
1188 Biomedical Image Analysis
18] Zhou X and Gordon R. Detection of early breast cancer: An overview and
future prospects. Critical Reviews in Biomedical Engineering, 17(3):203{255,
1989.
19] Keyserlingk JR, Ahlgren P, Yu E, Belliveau N, and Yassa M. Functional
infrared imaging of the breast. IEEE Engineering in Medicine and Biology
Magazine, 19(3):30{41, 2000.
20] Ohashi Y and Uchida I. Applying dynamic thermography in the diagno-
sis of breast cancer. IEEE Engineering in Medicine and Biology Magazine,
19(3):42{51, 2000.
21] Head JF, Wang F, Lipari CA, and Elliott RL. The important role of in-
frared imaging in breast cancer. IEEE Engineering in Medicine and Biology
Magazine, 19(3):52{57, 2000.
22] Keyserlingk JR, Yassa M, Ahlgren P, and Belliveau N. Preliminary evalu-
ation of preoperative chemohormonotherapy-induced reduction of the func-
tional infrared imaging score in patients with locally advanced breast cancer.
In CDROM Proceedings of the 23rd Annual International Conference of the
IEEE Engineering in Medicine and Biology Society, Istanbul, Turkey, Octo-
ber 2001.
23] Qi H and Head JF. Asymmetry analysis using automated segmentation
and classication for breast cancer detection in thermograms. In CDROM
Proceedings of the 23rd Annual International Conference of the IEEE Engi-
neering in Medicine and Biology Society, Istanbul, Turkey, October 2001.
24] Merla A, Ledda A, Di Donato L, Di Luzio S, and Romani GL. Use of infrared
functional imaging to detect impaired thermoregulatory control in men with
asymptomatic varicocele. Fertility and Sterility, 78(1):199{200, 2002.
25] Merla A, Ledda A, Di Donato L, and Romani GL. Diagnosis of sub-clinical
varicocele by means of infrared functional imaging. In CDROM Proceedings
of the 23rd Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, Istanbul, Turkey, October 2001.
26] Vlaisavljevic V. A comparative study of the diagnostic value of telethermog-
raphy and contact thermography in the diagnosis of varicocele. In Zorgniotti
AW, editor, Temperature and Environmental Eects on the Testis, pages
261{265. Plenum, New York NY, 1991.
27] Carlsen EN. Transmission spectroscopy: An improvement in light scanning.
RNM Images, 13(2):22{25, 1983.
28] Sickles EA. Breast CT scanning, heavy-ion mammography, NMR imaging,
and diaphanography. In Feig SA and McLelland R, editors, Breast Carci-
noma: Current Diagnosis and Treatment, pages 233{250. Masson, New York,
NY, 1983.
29] Kopans DB. Nonmammographic breast imaging techniques: Current status
and future developments. In Sickles EA, editor, The Radiologic Clinics of
North America, volume 25, number 5, pages 961{971. Saunders, Philadel-
phia, PA, September 1987.
30] Bozzola JJ and Russell LD. Electron Microscopy: Principles and Techniques
for Biologists. Jones and Bartlett, Sudbury, MA, 2nd edition, 1999.
References 1189
31] Rangayyan RM. Biomedical Signal Analysis { A Case-Study Approach. IEEE
and Wiley, New York, NY, 2002.
32] Frank C, Woo SLY, Amiel D, Harwood F, Gomez M, and Akeson W. Medial
collateral ligament healing { A multidisciplinary assessment in rabbits. The
American Journal of Sports Medicine, 11(6):379{389, 1983.
33] Frank C, McDonald D, Bray D, Bray R, Rangayyan R, Chimich D, and Shrive
N. Collagen bril diameters in the healing adult rabbit medial collateral
ligament. Connective Tissue Research, 27:251{263, 1992.
34] Gubler MC, Heidet L, and Antignac C. Alport's syndrome, thin base-
ment membrane nephropathy, nail-patella syndrome, and Type III collagen
glomerulopathy. In Jennette JC, Olson JL, Schwartz MM, and Silva FG,
editors, Heptinstall's Pathology of the Kidney, pages 1207{1230. Lippincott-
Raven, Philadelphia, PA, 5th edition, 1998.
35] Frank C, MacFarlane B, Edwards P, Rangayyan R, Liu ZQ, Walsh S, and
Bray R. A quantitative analysis of matrix alignment in ligament scars: A
comparison of movement versus immobilization in an immature rabbit model.
Journal of Orthopaedic Research, 9(2):219{227, 1991.
36] Chaudhuri S, Nguyen H, Rangayyan RM, Walsh S, and Frank CB. A Fourier
domain directional ltering method for analysis of collagen alignment in liga-
ments. IEEE Transactions on Biomedical Engineering, 34(7):509{518, 1987.
37] Liu ZQ, Rangayyan RM, and Frank CB. Statistical analysis of collagen align-
ment in ligaments by scale-space analysis. IEEE Transactions on Biomedical
Engineering, 38(6):580{588, 1991.
38] Robb RA. X-ray computed tomography: An engineering synthesis of
mulitscientic principles. CRC Critical Reviews in Biomedical Engineering,
7:264{333, 1982.
39] Nudelman S and Roehrig H. Photoelectronic-digital imaging for diagnos-
tic radiology. In Robb RA, editor, Three-Dimensional Biomedical Imaging,
Volume I, pages 5{60. CRC Press, Boca Raton, FL, 1985.
40] Yae MJ. Development of full eld digital mammography. In Karssemeijer
N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th
International Workshop on Digital Mammography, pages 3{10, Nijmegen,
The Netherlands, June 1998.
41] Cheung L, Bird R, Chitkara A, Rego A, Rodriguez C, and Yuen J. Ini-
tial operating and clinical results of a full eld mammography system. In
Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Pro-
ceedings of the 4th International Workshop on Digital Mammography, pages
11{18, Nijmegen, The Netherlands, June 1998.
42] Maidment ADA, Fahrig R, and Yae MJ. Dynamic range requirements in
digital mammography. Medical Physics, 20(6):1621{1633, 1993.
43] Herman GT. Image Reconstruction From Projections: The Fundamentals of
Computed Tomography. Academic, New York, NY, 1980.
44] Macovski A. Physical problems of computerized tomography. Proceedings of
the IEEE, 71(3):373{378, 1983.
1190 Biomedical Image Analysis
45] Andersson I. Mammography in clinical practice. Medical Radiography and
Photography, 62(2):1{41, 1986.
46] Larsson SA. Gamma camera emission tomography. Acta Radiologica Sup-
plementum 363, 1980.
47] Canadian Cancer Society. Facts on Breast Cancer, April 1989.
48] National Cancer Institute of Canada, Toronto. Annual Report of the National
Cancer Institute of Canada, 1987.
49] Spratt JS and Spratt JA. Growth rates. In Donegan WL and Spratt JS,
editors, Cancer of the Breast, chapter 10, pages 270{302. Saunders, Philadel-
phia, PA, 3rd edition, 1988.
50] Feig SA, Schwartz F, Nerlinger R, and Edeiken J. Prognostic factors of breast
neoplasms detected on screening by mammography and physical examina-
tion. Radiology, 133:577{582, 1979.
51] McLelland R. Screening for breast cancer: Opportunities, status and chal-
lenges. In Brunner S and Langfeldt B, editors, Recent Results in Cancer
Research, volume 119, pages 29{38. Springer-Verlag, Berlin, Germany, 1990.
52] Basset LW and Gold RH, editors. Breast cancer detection: Mammography
and other methods in breast imaging. Grune & Stratton, Orlando, FL, 2nd
edition, 1987.
53] Cardenosa G. Breast Imaging Companion. Lippincott-Raven, Philadelphia,
PA, 1997.
54] Homer MJ. Mammographic Interpretation: A Practical Approach. McGraw-
Hill, Boston, MA, 2nd edition, 1997.
55] Heywang-Kobrunner SH, Schreer I, and Dershaw DD. Diagnostic Breast
Imaging. Thieme, Stuttgart, Germany, 1997.
56] Haus AG. Recent trends in screen-lm mammography: Technical factors
and radiation dose. In Brunner S and Langfeldt B, editors, Recent Results
in Cancer Research, volume 105, pages 37{51. Springer-Verlag, Berlin, Ger-
many, 1987.
57] Warren SL. A roentgenologic study of the breast. American Journal of
Roentgenology and Radiation Therapy, 24:113{124, 1930.
58] Egan RL. Experience with mammography in a tumor institute. Evaluation
of 1000 studies. Radiology, 75:894{900, 1960.
59] Sickles EA and Weber WN. High-contrast mammography with a moving
grid: Assessment of clinical utility. American Journal of Radiology, 146:1137{
1139, 1986.
60] Sickles EA. The role of magnication technique in modern mammography.
In Brunner S and Langfeldt B, editors, Recent Results in Cancer Research,
volume 105, pages 19{24. Springer-Verlag, Berlin, Germany, 1987.
61] Alberta Cancer Board, Alberta, Canada. Screen Test: Alberta Program for
the Early Detection of Breast Cancer { 1999/2001 Biennial Report, 2001.
62] Edholm P. The tomogram { Its formation and content. Acta Radiologica,
Supplement No. 193:1{109, 1960.
References 1191
63] Rangayyan RM and Kantzas A. Image reconstruction. In Webster JG, editor,
Wiley Encyclopedia of Electrical and Electronics Engineering, Supplement 1,
pages 249{268. Wiley, New York, NY, 2000.
64] Rangayyan RM. Computed tomography techniques and algorithms: A tuto-
rial. Innovation et Technologie en Biologie et Medecine, 7(6):745{762, 1986.
65] Boyd DP, Gould RG, Quinn JR, and Sparks R. Proposed dynamic cardiac
3-D densitometer for early detection and evaluation of heart disease. IEEE
Transactions on Nuclear Science, NS-26(2):2724{2727, 1979.
66] Boyd DP and Lipton MJ. Cardiac computed tomography. Proceedings of the
IEEE, 71(3):298{307, 1983.
67] Radon J. Uber die bestimmung von funktionen durch ihre integralwerte
langs gewisser mannigfaltigkeiten. Berichte der Sachsischen Akadamie der
Wissenschaft, 69:262{277, 1917.
68] Radon J. On the determination of functions from their integral values along
certain manifolds (English translation by Parks PC). IEEE Transactions on
Medical Imaging, 5(4):170{176, 1986.
69] Cormack AM. Representation of a function by its line integrals, with some
radiological applications. Journal of Applied Physics, 34:2722{2727, 1963.
70] Cormack AM. Representation of a function by its line integrals, with some
radiological applications II. Journal of Applied Physics, 35:2908{2913, 1964.
71] Bracewell RN and Riddle AC. Inversion of fan-beam scans in radio astron-
omy. The Astrophysical Journal, 150:427{434, 1967.
72] Crowther RA, Amos LA, Finch JT, De Rosier DJ, and Klug A. Three
dimensional reconstructions of spherical viruses by Fourier synthesis from
electron micrographs. Nature, 226:421{425, 1970.
73] De Rosier DJ and Klug A. Reconstruction of three dimensional images from
electron micrographs. Nature, 217:130{134, 1968.
74] Ramachandran GN and Lakshminarayanan AV. Three-dimensional recon-
struction from radiographs and electron micrographs: Application of convo-
lutions instead of Fourier transforms. Proceedings of the National Academy
of Science, USA, 68:2236{2240, 1971.
75] Gordon R, Bender R, and Herman GT. Algebraic Reconstruction Techniques
(ART) for three-dimensional electron microscopy. Journal of Theoretical
Biology, 29:471{481, 1970.
76] Oldendorf WH. Isolated !ying spot detection of radio-density discontinu-
ities { Displaying the internal structural pattern of a complex object. IRE
Transactions on Bio-Medical Electronics, BME-8:68{72, 1961.
77] Hounseld GN. Computerized transverse axial scanning (tomography) Part
I: Description of system. British Journal of Radiology, 46:1016{1022, 1973.
78] Ambrose J. Computerized transverse axial scanning (tomography) Part II:
Cinical application. British Journal of Radiology, 46:1023{1047, 1973.
79] Robb RA, Homan EA, Sinak LJ, Harris LD, and Ritman EL. High-speed
three-dimensional x-ray computed tomography: The Dynamic Spatial Re-
constructor. Proceedings of the IEEE, 71(3):308{319, 1983.
1192 Biomedical Image Analysis
[80] Herman GT. Image Reconstruction From Projections: Implementation and Applications. Springer-Verlag, Berlin, Germany, 1979.
[81] Knoll GF. Single-photon emission computed tomography. Proceedings of the IEEE, 71(3):320–329, 1983.
[82] Kak AC and Slaney M. Principles of Computerized Tomographic Imaging. IEEE, New York, NY, 1988.
[83] Greenleaf JF. Computerized tomography with ultrasound. Proceedings of the IEEE, 71(3):330–337, 1983.
[84] Hinshaw WS and Lent AH. An introduction to NMR imaging: From the Bloch equation to the imaging equation. Proceedings of the IEEE, 71(3):338–350, 1983.
[85] Muller G, Chance B, Alfano R, Arridge S, Beuthan J, Gratton E, Kaschke M, Masters B, Svanberg S, and van der Zee P, editors. Medical Optical Tomography: Functional Imaging and Monitoring. SPIE, Bellingham, WA, 1993.
[86] Boulfelfel D. Restoration of Nuclear Medicine Images. PhD thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, November 1992.
[87] Boulfelfel D, Rangayyan RM, Hahn LJ, and Kloiber R. Use of the geometric mean of opposing planar projections in pre-reconstruction restoration of SPECT images. Physics in Medicine and Biology, 37(10):1915–1929, 1992.
[88] Ter-Pogossian MM. Positron emission tomography (PET). In Robb RA, editor, Three-Dimensional Biomedical Imaging, Volume II, pages 41–56. CRC Press, Boca Raton, FL, 1985.
[89] Seitz RJ and Roland PE. Vibratory stimulation increases and decreases the regional cerebral blood flow and oxidative metabolism: a positron emission tomography (PET) study. Acta Neurologica Scandinavica, 86:60–67, 1992.
[90] Ogawa M, Magata Y, Ouchi Y, Fukuyama H, Yamauchi H, Kimura J, Yonekura Y, and Konishi J. Scopolamine abolishes cerebral blood flow response to somatosensory stimulation in anesthetized cats: PET study. Brain Research, 650:249–252, 1994.
[91] Cloutier G, Chen D, and Durand LG. Performance of time-frequency representation techniques to measure blood flow turbulence with pulsed-wave Doppler ultrasound. Ultrasound in Medicine and Biology, 27(4):535–550, 2001.
[92] Fenster A and Downey DB. Three-dimensional ultrasound imaging of the prostate. In Proceedings SPIE 3659: Medical Imaging – Physics of Medical Imaging, pages 2–11, San Diego, CA, February 1999.
[93] Robinson BS and Greenleaf JF. Computerized ultrasound tomography. In Robb RA, editor, Three-Dimensional Biomedical Imaging, Volume II, pages 57–78. CRC Press, Boca Raton, FL, 1985.
[94] Lauterbur PC and Lai CM. Zeugmatography by reconstruction from projections. IEEE Transactions on Nuclear Science, 27(3):1227–1231, 1980.
References 1193
[95] Liang ZP and Lauterbur PC. Principles of Magnetic Resonance Imaging: A Signal Processing Perspective. IEEE, New York, NY, 2000.
[96] Hill BC and Hinshaw WS. Fundamentals of NMR imaging. In Robb RA, editor, Three-Dimensional Biomedical Imaging, Volume II, pages 79–124. CRC Press, Boca Raton, FL, 1985.
[97] Tompkins WJ. Biomedical Digital Signal Processing. Prentice Hall, Upper Saddle River, NJ, 1995.
[98] Cromwell L, Weibell FJ, and Pfeiffer EA. Biomedical Instrumentation and Measurements. Prentice Hall, Englewood Cliffs, NJ, 2nd edition, 1980.
[99] Aston R. Principles of Biomedical Instrumentation and Measurement. Merrill, Columbus, OH, 1990.
[100] Webster JG, editor. Medical Instrumentation: Application and Design. Wiley, New York, NY, 3rd edition, 1998.
[101] Bronzino JD. Biomedical Engineering and Instrumentation. PWS Engineering, Boston, MA, 1986.
[102] Bronzino JD, editor. The Biomedical Engineering Handbook. CRC Press, Boca Raton, FL, 1995.
[103] Bartlett J. Familiar Quotations. Little, Brown and Co., Boston, MA, 15th edition, 1980.
[104] Chakraborty D, Pfeiffer DE, and Brikman I. Perceptual noise measurement of displays. In Proceedings SPIE 1443: Medical Imaging V – Image Physics, pages 183–190, San Jose, CA, February 1991.
[105] Eckert MP and Chakraborty D. Quantitative analysis of phantom images in mammography. In Proceedings SPIE 2167: Medical Imaging – Image Processing, pages 887–899, Newport Beach, CA, February 1994.
[106] Chakraborty DP. Physical measures of image quality in mammography. In Proceedings SPIE 2708: Medical Imaging 1996 – Physics of Medical Imaging, pages 179–193, Newport Beach, CA, February 1996.
[107] Chakraborty DP and Eckert MP. Quantitative versus subjective evaluation of mammography accreditation phantom images. Medical Physics, 22(2):133–143, 1995.
[108] Chakraborty DP. Computer analysis of mammography phantom images (CAMPI): An application to the measurement of microcalcification image quality of directly acquired digital images. Medical Physics, 24(8):1269–1277, 1997.
[109] Bijkerk KR, Thijssen MAO, and Arnoldussen TJM. Modification of the CDMAM contrast-detail phantom for image quality evaluation of full-field digital mammography systems. In Yaffe MJ, editor, Proceedings of the 5th International Workshop on Digital Mammography, pages 633–640, Toronto, Canada, June 2000.
[110] Furuie S, Herman GT, Narayan TK, Kinahan PE, Karp JS, Lewitt RM, and Matej S. A methodology for testing statistically significant differences between fully 3D PET reconstruction algorithms. Physics in Medicine and Biology, 39:341–354, 1994.
[111] Barrett HH. Objective assessment of image quality: Effects of quantum noise and object variability. Journal of the Optical Society of America – A, 7(7):1266–1278, July 1990.
[112] Kayargadde V and Martens JB. Perceptual characterization of images degraded by blur and noise: model. Journal of the Optical Society of America – A, 13(6):1178–1188, June 1996.
[113] Kayargadde V and Martens JB. Perceptual characterization of images degraded by blur and noise: experiments. Journal of the Optical Society of America – A, 13(6):1166–1177, June 1996.
[114] Perrin FH. Methods of appraising photographic systems Part I – Historical review. Journal of the Society of Motion Picture and Television Engineers, 69(3):151–156, 1960.
[115] Higgins GC and Jones LA. The nature and evaluation of the sharpness of photographic images. Journal of the Society of Motion Picture and Television Engineers, 58(4):277–290, 1952.
[116] Rangayyan RM and Elkadiki S. A region-based algorithm for the computation of image edge profile acutance. Journal of Electronic Imaging, 4(1):62–70, 1995.
[117] Olabarriaga SD and Rangayyan RM. Subjective and objective evaluation of image sharpness – Behavior of the region-based image edge profile acutance measure. In Proceedings SPIE 2712: Medical Imaging 1996 – Image Perception, pages 154–162, Newport Beach, CA, February 1996.
[118] Lloyd SP. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[119] Max J. Quantizing for minimum distortion. IEEE Transactions on Information Theory, 6:7–12, 1960.
[120] von Gierke HE. Transmission of vibratory energy through human tissue. In Glasser O, editor, Medical Physics, Volume 3, pages 661–669. Year Book Medical, Chicago, IL, 1960.
[121] Phelps ME, Hoffman EJ, and Ter-Pogossian MM. Attenuation coefficients of various body tissues, fluids and lesions at photon energies of 18 to 136 keV. Radiology, 117:573–583, December 1975.
[122] Levine MD. Vision in Man and Machine. McGraw-Hill, New York, NY, 1985.
[123] Morrow WM, Paranjape RB, Rangayyan RM, and Desautels JEL. Region-based contrast enhancement of mammograms. IEEE Transactions on Medical Imaging, 11(3):392–406, 1992.
[124] Rangayyan RM, Shen L, Shen Y, Desautels JEL, Bryant H, Terry TJ, Horeczko N, and Rose MS. Improvement of sensitivity of breast cancer diagnosis with adaptive neighborhood contrast enhancement of mammograms. IEEE Transactions on Information Technology in Biomedicine, 1(3):161–170, 1997.
[125] Sivaramakrishna R, Obuchowski NA, Chilcote WA, Cardenosa G, and Powell KA. Comparing the performance of mammographic enhancement algorithms – a preference study. American Journal of Roentgenology, 175:45–51, 2000.
[126] Shannon CE. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.
[127] Shannon CE. Communication in the presence of noise. Proceedings of the IRE, 37:10–21, 1949.
[128] Papoulis A. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, NY, 1965.
[129] Gray RM. Entropy and Information Theory. Springer-Verlag, New York, NY, 1990.
[130] Jain AK. Image data compression: A review. Proceedings of the IEEE, 69(3):349–389, 1981.
[131] Goodman JW. Introduction to Fourier Optics. McGraw-Hill, New York, NY, 1968.
[132] Hon TC, Rangayyan RM, Hahn LJ, and Kloiber R. Restoration of gamma camera-based nuclear medicine images. IEEE Transactions on Medical Imaging, 8(4):354–363, 1989.
[133] Higashida Y, Baba Y, Hatemura M, Yoshida A, Takada T, and Takahashi M. Physical and clinical evaluation of a 2 048 × 2 048-matrix image intensifier TV digital imaging system in bone radiography. Academic Radiology, 3(10):842–848, 1996.
[134] Pateyron M, Peyrin F, Laval-Jeantet AM, Spanne P, Cloetens P, and Peix G. 3D microtomography of cancellous bone samples using synchrotron radiation. In Proceedings SPIE 2708: Medical Imaging 1996 – Physics of Medical Imaging, pages 417–426, Newport Beach, CA, February 1996.
[135] Schade OH. Image gradation, graininess and sharpness in television and motion-picture systems, Part IV, A & B: Image analysis in photographic and television systems (Definition and sharpness). Journal of the Society of Motion Picture and Television Engineers, 64(11):593–617, 1955.
[136] Schade OH. Optical and photoelectric analog of the eye. Journal of the Optical Society of America, 46(9):721–739, 1956.
[137] Schade OH. An evaluation of photographic image quality and resolving power. Journal of the Society of Motion Picture and Television Engineers, 73(2):81–119, 1964.
[138] Burke JJ and Snyder HL. Quality metrics of digitally derived imagery and their relation to interpreter performance. In Proceedings SPIE 310: Image Quality, pages 16–23, 1981.
[139] Tapiovaara MJ and Wagner RF. SNR and noise measurements for medical imaging: I. A practical approach based on statistical decision theory. Physics in Medicine and Biology, 38:71–92, 1993.
[140] Tapiovaara MJ. SNR and noise measurements for medical imaging: II. Application to fluoroscopic x-ray equipment. Physics in Medicine and Biology, 38:1761–1788, 1993.
[141] Hall CF. Subjective evaluation of a perceptual quality metric. In Proceedings SPIE 310: Image Quality, pages 200–204, 1981.
[142] Crane EM. An objective method for rating picture sharpness: SMT acutance. Journal of the Society of Motion Picture and Television Engineers, 73(8):643–647, 1964.
[143] Crane EM. Acutance and granulance. In Proceedings SPIE 310: Image Quality, pages 125–132, 1981.
[144] Yip KL. Imaging characteristics of CRT multiformat printers. In Proceedings SPIE 1653: Image Capture, Formatting, and Display, pages 477–487, 1992.
[145] Kriss MA. Image analysis of discrete and continuous systems – film and CCD sensors. In Proceedings SPIE 1398: CAN-AM Eastern, pages 4–14, 1990.
[146] Gendron RG. An improved objective method for rating picture sharpness: CMT acutance. Journal of the Society of Motion Picture and Television Engineers, 82(12):1009–1012, 1973.
[147] Westheimer G. Spatial frequency and light spread descriptions of visual acuity and hyperacuity. Journal of the Optical Society of America, 67(2):207–212, 1977.
[148] Westheimer G. The spatial sense of the eye. Investigative Ophthalmology and Visual Science, 18(9):893–912, 1979.
[149] Wolfe RN and Eisen FC. Psychometric evaluation of the sharpness of photographic reproductions. Journal of the Optical Society of America, 43(10):914–923, 1953.
[150] Perrin FH. Methods of appraising photographic systems, Part II – Manipulation and significance of the sine-wave response function. Journal of the Society of Motion Picture and Television Engineers, 69(4):239–250, 1960.
[151] Barten PGJ. Evaluation of subjective image quality with the square-root integral method. Journal of the Optical Society of America A, 7(10):2024–2031, 1990.
[152] Higgins GC. Methods for analyzing the photographic system, including the effects of nonlinearity and spatial frequency response. Photographic Science and Engineering, 15(2):106–118, 1971.
[153] Granger EM and Cupery KN. An optical merit function (SQF) which correlates with subjective image judgments. Photographic Science and Engineering, 16(3):221–230, 1972.
[154] Higgins GC. Image quality criteria. Journal of Applied Photographic Engineering, 3(2):53–60, 1977.
[155] Task HL, Pinkus AR, and Hornseth JP. A comparison of several television display image quality measures. Proceedings of the Society of Information Display, 19(3):113–119, 1978.
[156] Barten PGJ. Contrast Sensitivity of the Human Eye and its Effects on Image Quality. SPIE, Bellingham, WA, 1999.
[157] Carlson CR and Cohen RW. A simple psychophysical model for predicting the visibility of displayed information. Proceedings of the Society of Information Display, 21(3):229–246, 1980.
[158] Saghri JA, Cheatham PS, and Habibi A. Image quality measure based on a human visual system model. Optical Engineering, 28(7):813–818, 1989.
[159] Nill NB and Bouzas BH. Objective image quality measure derived from digital image power spectra. Optical Engineering, 31(4):813–825, 1992.
[160] Lukas FXJ and Budrikis ZL. Picture quality prediction based on a visual model. IEEE Transactions on Communications, 30(7):1679–1692, 1982.
[161] Budrikis ZL. Visual fidelity criterion and modeling. Proceedings of the IEEE, 60(7):771–779, 1972.
[162] Westerink JHDM and Roufs JAJ. A local basis for perceptually relevant resolution measures. In Society of Information Display Digest, pages 360–363, 1988.
[163] Rangayyan RM, El-Faramawy NM, Desautels JEL, and Alim OA. Measures of acutance and shape for classification of breast tumors. IEEE Transactions on Medical Imaging, 16(6):799–810, 1997.
[164] Rangayyan RM and Das A. Image enhancement based on edge profile acutance. Journal of the Indian Institute of Science, 78:17–29, 1998.
[165] Mudigonda NR, Rangayyan RM, and Desautels JEL. Gradient and texture analysis for the classification of mammographic masses. IEEE Transactions on Medical Imaging, 19(10):1032–1043, 2000.
[166] Mudigonda NR. Image Analysis Methods for the Detection and Classification of Mammographic Masses. PhD thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, December 2001.
[167] Papoulis A. Signal Analysis. McGraw-Hill, New York, NY, 1977.
[168] Bendat JS and Piersol AG. Random Data: Analysis and Measurement Procedures. Wiley, New York, NY, 2nd edition, 1986.
[169] Auñón JI and Chandrasekar V. Introduction to Probability and Random Processes. McGraw-Hill, New York, NY, 1997.
[170] Ramsey FL and Schafer DW. The Statistical Sleuth — A Course in Methods of Data Analysis. Wadsworth Publishing Company, Belmont, CA, 1997.
[171] Riffenburgh RH. Statistics in Medicine. Academic, San Diego, CA, 1993.
[172] Bailar III JC and Mosteller F, editors. Medical Uses of Statistics. NEJM Books, Boston, MA, 2nd edition, 1992.
[173] Peebles Jr. PZ. Probability, Random Variables, and Random Signal Principles. McGraw-Hill, New York, NY, 3rd edition, 1993.
[174] Kuduvalli GR and Rangayyan RM. Performance analysis of reversible image compression techniques for high-resolution digital teleradiology. IEEE Transactions on Medical Imaging, 11(3):430–445, 1992.
[175] Mascarenhas NDA. An overview of speckle noise filtering in SAR images. In Guyenne TD, editor, Proceedings of the First Latin American Seminar on Radar Remote Sensing: Image Processing Techniques, volume 1, pages 71–79. European Space Agency, Buenos Aires, Argentina, December 2-4 1996.
[176] Rabiner LR and Schafer RW. Digital Processing of Speech Signals. Prentice Hall, Englewood Cliffs, NJ, 1978.
[177] Trussell HJ and Hunt BR. Sectioned methods for image restoration. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(2):157–164, 1978.
[178] Rabie TF, Rangayyan RM, and Paranjape RB. Adaptive-neighborhood image deblurring. Journal of Electronic Imaging, 3(4):368–378, 1994.
[179] Jiang SS and Sawchuk AA. Noise updating repeated Wiener filter and other adaptive noise smoothing filters using local image statistics. Applied Optics, 25:2326–2337, July 1986.
[180] Barner KE and Arce GR. Optimal detection methods for the restoration of images degraded by signal-dependent noise. In Pearlman WA, editor, Proceedings of SPIE on Visual Communications and Image Processing IV, volume 1199, pages 115–124. SPIE, 1989.
[181] Froehlich GK, Walkup JF, and Krile TF. Estimation in signal-dependent film-grain noise. Applied Optics, 20:3619–3626, October 1981.
[182] Naderi F and Sawchuk AA. Estimation of images degraded by film-grain noise. Applied Optics, 17:1228–1237, April 1978.
[183] Downie JD and Walkup JF. Optimal correlation filters with signal-dependent noise. Journal of the Optical Society of America, 11:1599–1609, May 1994.
[184] Lee JS. Speckle analysis and smoothing of synthetic aperture radar images. Computer Graphics and Image Processing, 17:24–32, 1981.
[185] Schulze MA. An edge-enhancing nonlinear filter for reducing multiplicative noise. In Dougherty ER and Astola JT, editors, Proceedings of SPIE on Nonlinear Image Processing VIII, volume 3026, pages 46–56. SPIE, February 1997.
[186] Kuan DT, Sawchuk AA, Strand TC, and Chavel P. Adaptive restoration of images with speckle. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35:373–383, 1987.
[187] Lim JS and Nawab H. Techniques for speckle noise removal. Optical Engineering, 21:472–480, 1981.
[188] Kuan DT, Sawchuk AA, Strand TC, and Chavel P. Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-7:165–177, 1985.
[189] Arsenault HH, Gendron C, and Denis M. Transformation of film-grain noise into signal-independent Gaussian noise. Journal of the Optical Society of America, 71:91–94, January 1981.
[190] Arsenault HH and Levesque M. Combined homomorphic and local-statistics processing for restoration of images degraded by signal-dependent noise. Applied Optics, 23:845–850, March 1984.
[191] Kasturi R, Walkup JF, and Krile TF. Image restoration by transformation of signal-dependent noise to signal-independent noise. Applied Optics, 22:3537–3542, November 1983.
[192] Dougherty ER and Astola J. An Introduction to Nonlinear Image Processing. SPIE, Bellingham, WA, 1994.
[193] Pitas I and Venetsanopoulos AN. Order statistics in digital image processing. Proceedings of the IEEE, 80:1893–1923, 1992.
[194] Theilheimer F. A matrix version of the fast Fourier transform. IEEE Transactions on Audio and Electroacoustics, AU-17(2):158–161, 1969.
[195] Hunt BR. A matrix theory proof of the discrete convolution theorem. IEEE Transactions on Audio and Electroacoustics, AU-19(4):285–288, 1971.
[196] Hunt BR. The application of constrained least squares estimation to image restoration by digital computer. IEEE Transactions on Computers, C-22(9):805–812, 1973.
[197] Helstrom CW. Image restoration by the method of least squares. Journal of the Optical Society of America, 57(3):297–303, 1967.
[198] Wiener NE. Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications. MIT Press, Cambridge, MA, 1949.
[199] Lim JS. Two-dimensional Signal and Image Processing. Prentice Hall, Englewood Cliffs, NJ, 1990.
[200] Lee JS. Digital image enhancement and noise filtering by use of local statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-2:165–168, March 1980.
[201] Rangayyan RM, Ciuc M, and Faghih F. Adaptive neighborhood filtering of images corrupted by signal-dependent noise. Applied Optics, 37(20):4477–4487, 1998.
[202] Aghdasi F, Ward RK, and Palcic B. Restoration of mammographic images in the presence of signal-dependent noise. In Proceedings of SPIE vol. 1905 on Biomedical Image Processing and Biomedical Visualization, pages 740–751, San Jose, CA, 1993.
[203] The Math Works Inc., Natick, MA. Image Processing Toolbox for use with MATLAB: User's Guide, 2nd edition, 1997.
[204] Lee JS. Refined filtering of image noise using local statistics. Computer Graphics and Image Processing, 15:380–389, 1981.
[205] Chan P and Lim JS. One-dimensional processing for adaptive image restoration. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(1):117–129, 1985.
[206] Hadhoud MM and Thomas DW. The two-dimensional adaptive LMS (TDLMS) algorithm. IEEE Transactions on Circuits and Systems, 35(5):485–494, 1988.
[207] Song WJ and Pearlman WA. Restoration of noisy images with adaptive windowing and non-linear filtering. In Proceedings of SPIE on Visual Communications and Image Processing, volume 707, pages 198–206. SPIE, 1986.
[208] Song WJ and Pearlman WA. A minimum-error, minimum-correlation filter for images. In Proceedings of SPIE on Applications of Digital Image Processing IX, volume 697, pages 225–232. SPIE, 1986.
[209] Song WJ and Pearlman WA. Edge-preserving noise filtering based on adaptive windowing. IEEE Transactions on Circuits and Systems, 35(8):1046–1055, 1988.
[210] Mahesh B, Song WJ, and Pearlman WA. Adaptive estimators for filtering noisy images. Optical Engineering, 29(5):488–494, 1990.
[211] Paranjape RB, Rangayyan RM, and Morrow WM. Adaptive neighborhood mean and median filtering. Journal of Electronic Imaging, 3:360–367, October 1994.
[212] Paranjape RB, Rabie TF, and Rangayyan RM. Image restoration by adaptive neighborhood noise subtraction. Applied Optics, 33:1861–1869, May 1994.
[213] Rangayyan RM and Das A. Filtering multiplicative noise in images using adaptive region-based statistics. Journal of Electronic Imaging, 7:222–230, 1998.
[214] Ciuc M, Rangayyan RM, Zaharia T, and Buzuloiu V. Filtering noise in color images using adaptive-neighborhood statistics. Journal of Electronic Imaging, 9(4):484–494, 2000.
[215] Morrow WM. Region-based image processing with application to mammography. Master's thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, December 1990.
[216] Hunter CJ, Matyas JR, and Duncan NA. The three-dimensional architecture of the notochordal nucleus pulposus: novel observations on cell structures in the canine intervertebral disc. Journal of Anatomy, 202(3):279–291, 2003.
[217] University of California, Los Angeles, CA. The Confocal Microscope: http://www.gonda.ucla.edu/bri; core/confocal.htm, accessed April 2002.
[218] University of British Columbia, Vancouver, BC, Canada. Introduction to Laser Scanning Confocal Microscopy: http://www.cs.ubc.ca/spider/ladic/intro.html, accessed April 2002.
[219] Al-Kofahi KA, Lasek S, Szarowski DH, Pace CJ, Nagy G, Turner JN, and Roysam B. Rapid automated three-dimensional tracing of neurons from confocal image stacks. IEEE Transactions on Information Technology in Biomedicine, 6(2):171–187, 2002.
[220] Haralick RM, Sternberg SR, and Zhuang X. Image analysis using mathematical morphology. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(4):532–550, 1987.
[221] Giardina CR and Dougherty ER. Morphological Methods in Image and Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1988.
[222] Dougherty ER. An Introduction to Morphological Image Processing. SPIE, Bellingham, WA, 1992.
[223] Meijering EHW. Image Enhancement in Digital X-ray Angiography. PhD thesis, Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands, 2000.
[224] Meijering EHW, Zuiderveld KJ, and Viergever MA. Image registration for digital subtraction angiography. International Journal of Computer Vision, 31(2/3):227–246, 1999.
[225] Meijering EHW, Niessen WJ, Bakker J, van der Molen AJ, de Kort GAP, Lo RTH, Mali WPTM, and Viergever MA. Reduction of patient motion artifacts in digital subtraction angiography: Evaluation of a fast and fully automatic technique. Radiology, 219:288–293, 2001.
[226] MacMahon H. Improvement in detection of pulmonary nodules: Digital image processing and computer-aided diagnosis. RadioGraphics, 20:1169–1177, 2000.
[227] MacMahon H. Energy subtraction: The University of Chicago Hospitals' observer test. Insights & Images, Fujifilm Medical Systems, Stamford, CT, USA, Late Summer:1–2, 1999.
[228] Mazur AK, Mazur EJ, and Gordon R. Digital differential radiography: a new diagnostic procedure for locating neoplasms, such as breast cancers, in soft, deformable tissues. In Proceedings of SPIE vol. 1905 on Biomedical Image Processing and Biomedical Visualization, pages 443–455, San Jose, CA, 1993.
[229] Lindley CA. Practical Image Processing in C. Wiley, New York, NY, 1991.
[230] Paranjape RB, Morrow WM, and Rangayyan RM. Adaptive-neighborhood histogram equalization for image enhancement. CVGIP: Graphical Models and Image Processing, 54(3):259–267, May 1992.
[231] Ketchum DJ. Real-time image enhancement techniques. In Proceedings SPIE/OSA 74, pages 120–125, 1976.
[232] Pizer SM, Amburn EP, Austin JD, Cromartie R, Geselowitz A, Greer T, ter Haar Romeny B, Zimmerman JB, and Zuiderveld K. Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39:355–368, 1987.
[233] Leszczynski KW and Shalev S. A robust algorithm for contrast enhancement by local histogram modification. Image and Vision Computing, 7(3):205–209, 1989.
[234] Rehm K and Dallas WJ. Artifact suppression in digital chest radiographs enhanced with adaptive histogram equalization. In Proceedings SPIE 1092, pages 220–300, 1989.
[235] Buzuloiu V, Ciuc M, Rangayyan RM, and Vertan C. Adaptive-neighborhood histogram equalization of color images. Journal of Electronic Imaging, 10(4):445–459, 2001.
[236] Bogert BP, Healy MJR, and Tukey JW. The quefrency alanysis of time series for echoes: Cepstrum, pseudo-autocovariance, cross-cepstrum, and saphe cracking. In Rosenblatt M, editor, Proceedings of the Symposium on Time Series Analysis, pages 209–243. Wiley, New York, NY, 1963.
[237] Oppenheim AV, Schafer RW, and Stockham Jr. TG. Nonlinear filtering of multiplied and convolved signals. Proceedings of the IEEE, 56(8):1264–1291, 1968.
[238] Oppenheim AV and Schafer RW. Homomorphic analysis of speech. IEEE Transactions on Audio and Electroacoustics, AU-16(2):221–226, 1968.
[239] Childers DG, Skinner DP, and Kemerait RC. The cepstrum: A guide to processing. Proceedings of the IEEE, 65(10):1428–1443, 1977.
[240] Stockham Jr. TG. Image processing in the context of a visual model. Proceedings of the IEEE, 60(7):828–842, 1972.
[241] Yoon JH, Ro YM, Kim SI, and Park DS. Contrast enhancement of mammography image using homomorphic filter in wavelet domain. In Yaffe MJ, editor, Proceedings of the 5th International Workshop on Digital Mammography, pages 617–623, Toronto, Canada, June 2000.
[242] Gordon R and Rangayyan RM. Feature enhancement of film mammograms using fixed and adaptive neighborhoods. Applied Optics, 23(4):560–564, February 1984.
[243] Rangayyan RM and Nguyen HN. Pixel-independent image processing techniques for enhancement of features in mammograms. In Proceedings of the 8th IEEE Engineering in Medicine and Biology Conference, pages 1113–1117, 1986.
[244] Rangayyan RM and Nguyen HN. Pixel-independent image processing techniques for noise removal and feature enhancement. In IEEE Pacific Rim Conference on Communications, Computers, and Signal Processing, pages 81–84, Vancouver, June 1987. IEEE.
[245] Pavlidis T. Algorithms for Graphics and Image Processing. Computer Science Press, Rockville, MD, 1982.
[246] Rangayyan RM, Alto H, and Gavrilov D. Parallel implementation of the adaptive neighborhood contrast enhancement technique using histogram-based image partitioning. Journal of Electronic Imaging, 10:804–813, 2001.
[247] Dhawan AP, Buelloni G, and Gordon R. Enhancement of mammographic features by optimal adaptive neighborhood image processing. IEEE Transactions on Medical Imaging, 5(1):8–15, 1986.
[248] Dronkers DJ and Zwaag HV. Photographic contrast enhancement in mammography. Radiologia Clinica et Biologica, 43:521–528, 1974.
[249] McSweeney MB, Sprawls P, and Egan RL. Enhanced-image mammography. In Recent Results in Cancer Research, volume 90, pages 79–89. Springer-Verlag, Berlin, Germany, 1984.
[250] Askins BS, Brill AB, Rao GUV, and Novak GR. Autoradiographic enhancement of mammograms. Diagnostic Radiology, 130:103–107, 1979.
[251] Bankman IN, editor. Handbook of Medical Imaging: Processing and Analysis. Academic Press, London, UK, 2000.
[252] Ram G. Optimization of ionizing radiation usage in medical imaging by means of image enhancement techniques. Medical Physics, 9(5):733–737, 1982.
[253] Rogowska J, Preston K, and Sashin D. Evaluation of digital unsharp masking and local contrast stretching as applied to chest radiographs. IEEE Transactions on Biomedical Engineering, 35(10):817–827, 1988.
[254] Chan HP, Vyborny CJ, MacMahon H, Metz CE, Doi K, and Sickles EA. ROC studies of the effects of pixel size and unsharp-mask filtering on the detection of subtle microcalcifications. Investigative Radiology, 22:581–589, 1987.
[255] Dhawan AP and Le Royer E. Mammographic feature enhancement by computerized image processing. Computer Methods and Programs in Biomedicine, 27:23–35, 1988.
[256] Ji TL, Sundareshan MK, and Roehrig H. Adaptive image contrast enhancement based on human visual properties. IEEE Transactions on Medical Imaging, 13(4):573–586, 1994.
[257] Laine AF, Schuler S, Fan J, and Huda W. Mammographic feature enhancement by multiscale analysis. IEEE Transactions on Medical Imaging, 13(4):725–740, December 1994.
[258] Vuylsteke P and Schoeters E. Multiscale image contrast amplification (MUSICA). In Proceedings of SPIE on Medical Imaging 1994: Image Processing, volume 2167, pages 551–560, 1994.
[259] Belikova T, Lashin V, and Zaltsman I. Computer assistance in the digitized mammogram processing to improve diagnosis of breast lesions. In Proceedings of the 2nd International Workshop on Digital Mammography, pages 69–78, York, England, 10-12 July 1994.
[260] Qu G, Huda W, Laine A, Steinbach B, and Honeyman J. Use of accreditation phantoms and clinical images to evaluate mammography image processing algorithms. In Proceedings of the 2nd International Workshop on Digital Mammography, pages 345–354, York, England, 10-12 July 1994.
[261] Tahoces PG, Correa J, Souto M, Gonzalez C, Gomez L, and Vidal JJ. Enhancement of chest and breast radiographs by automatic spatial filtering. IEEE Transactions on Medical Imaging, 10(3):330–335, 1991.
[262] Qian W, Clarke LP, Kallergi M, and Clark RA. Tree-structured nonlinear filters in digital mammography. IEEE Transactions on Medical Imaging, 13(4):25–36, 1994.
[263] Chen J, Flynn MJ, and Rebner M. Regional contrast enhancement and data compression for digital mammographic images. In Proceedings of SPIE on Biomedical Image Processing and Biomedical Visualization, volume SPIE-1905, pages 752–758, San Jose, CA, February 1993.
[264] Kimme-Smith C, Gold RH, Bassett LW, Gormley L, and Morioka C. Diagnosis of breast calcifications: Comparison of contact, magnified, and television-enhanced images. American Journal of Roentgenology, 153:963–967, 1989.
[265] Simpson K and Bowyer KW. A comparison of spatial noise filtering techniques for digital mammography. In Proceedings of the 2nd International Workshop on Digital Mammography, pages 325–334, York, England, 10-12 July 1994.
[266] Rangayyan RM, Shen L, Paranjape RB, Desautels JEL, MacGregor JH, Morrish HF, Burrowes P, Share S, and MacDonald FR. An ROC evaluation of adaptive neighborhood contrast enhancement for digitized mammography. In Proceedings of the 2nd International Workshop on Digital Mammography, pages 307–314, York, England, 10-12 July 1994.
[267] Kallergi M, Clarke LP, Qian W, Gavrielides M, Venugopal P, Berman CG, Holman-Ferris SD, Miller MS, and Clark RA. Interpretation of calcifications in screen/film, digitized, and wavelet-enhanced monitor-displayed mammograms: A receiver operating characteristic study. Academic Radiology, 3:285–293, 1996.
[268] Laine A, Fan J, and Schuler S. A framework for contrast enhancement by dyadic wavelet analysis. In Proceedings of the 2nd International Workshop on Digital Mammography, pages 91–100, York, England, 10-12 July 1994.
[269] Laine A, Fan J, and Yan WH. Wavelets for contrast enhancement of digital mammography. IEEE Engineering in Medicine and Biology Magazine, 14(5):536–550, September/October 1995.
[270] Qian W, Clarke LP, and Zheng BY. Computer assisted diagnosis for digital mammography. IEEE Engineering in Medicine and Biology Magazine, 14(5):561–569, September/October 1995.
[271] Shen L, Shen Y, Rangayyan RM, Desautels JEL, Bryant H, Terry TJ, and Horeczko N. Earlier detection of interval breast cancers with adaptive neighborhood contrast enhancement of mammograms. In Proceedings of SPIE on Medical Imaging 1996: Image Processing, volume 2710, pages 940–949, Newport Beach, CA, February 1996.
[272] Palcic B, MacAulay C, Shlien A, Treurniet W, Tezcan H, and Anderson G. Comparison of three different methods for automated classification of cervical cells. Analytical Cellular Pathology, 4:429–441, 1992.
[273] Harauz G, Chiu DKY, MacAulay C, and Palcic B. Probabilistic inference in computer-aided screening for cervical cancer: an event covering approach to information extraction and decision rule formulation. Analytical Cellular Pathology, 6:37–50, 1994.
[274] Shen L, Rangayyan RM, and Desautels JEL. Detection and classification of mammographic calcifications. International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1403–1416, 1993.
[275] Mudigonda NR, Rangayyan RM, and Desautels JEL. Detection of breast masses in mammograms by density slicing and texture flow-field analysis. IEEE Transactions on Medical Imaging, 20(12):1215–1227, 2001.
[276] Guliato D, Rangayyan RM, Carnielli WA, Zuffo JA, and Desautels JEL. Segmentation of breast tumors in mammograms using fuzzy sets. Journal of Electronic Imaging, 12(3):369–378, 2003.
[277] Guliato D, Rangayyan RM, Carnielli WA, Zuffo JA, and Desautels JEL. Fuzzy fusion operators to combine results of complementary medical image segmentation techniques. Journal of Electronic Imaging, 12(3):379–389, 2003.
[278] Ferrari RJ, Rangayyan RM, Desautels JEL, Borges RA, and Frère AF. Automatic identification of the pectoral muscle in mammograms. IEEE Transactions on Medical Imaging, 23:232–245, 2004.
[279] Ferrari RJ, Rangayyan RM, Desautels JEL, Borges RA, and Frère AF. Identification of the breast boundary in mammograms using active contour models. Medical and Biological Engineering and Computing, 42:201–208, 2004.
[280] Ferrari RJ, Rangayyan RM, Borges RA, and Frère AF. Segmentation of the fibro-glandular disc in mammograms using Gaussian mixture modelling. Medical and Biological Engineering and Computing, 42:378–387, 2004.
[281] Marr D and Hildreth E. Theory of edge detection. Proceedings of the Royal Society of London, Series B, 207:187–217, 1980.
[282] Marr D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. WH Freeman, San Francisco, CA, 1982.
[283] Limb JO. A model of threshold vision incorporating inhomogeneity of the visual fields. Vision Research, 17(4):571–584, 1977.
[284] Witkin AP. Scale-space filtering. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1019–1022, Karlsruhe, Germany, August 1983.
[285] Yuille AL and Poggio TA. Scaling theorems for zero crossings. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):15–25, 1986.
[286] Babaud J, Witkin AP, Baudin M, and Duda RO. Uniqueness of the Gaussian kernel for scale-space filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):26–33, 1986.
[287] Mokhtarian F and Mackworth A. Scale-based description and recognition of planar curves and two-dimensional shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):34–43, 1986.
[288] Bischof WF and Caelli TM. Parsing scale-space and spatial stability analysis. Computer Vision, Graphics, and Image Processing, 42:192–205, 1988.
[289] Caelli TM, Bischof WF, and Liu ZQ. Filter-based approaches to pattern recognition. Pattern Recognition, 6(6):639–650, 1988.
[290] Hummel RA. Representations based on zero-crossings in scale-space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 204–209, Miami Beach, FL, 1986.
[291] Binford T. Image understanding: intelligent systems. In Proceedings of the DARPA Image Understanding Workshop, pages 18–31, Los Angeles, CA, 1987. Defense Advanced Research Projects Agency.
[292] Rotem D and Zeevi YY. Image reconstruction from zero-crossings. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34(5):1269–1277, 1986.
[293] Katz I. Coaxial stereo and scale-based matching. Department of Computer Science, University of British Columbia, Vancouver, BC, Canada, 1985.
[294] Clark JJ. Authenticating edges produced by zero crossing algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1):43–57, 1989.
[295] Liu ZQ, Rangayyan RM, and Frank CB. Directional analysis of images in scale space. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1185–1192, 1991.
[296] Huertas A and Medioni G. Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5):651–664, 1986.
[297] Berzins V. Accuracy of Laplacian edge detectors. Computer Vision, Graphics, and Image Processing, 27:195–210, 1984.
[298] Grimson WEL and Hildreth EC. Comments on "Digital step edges from zero crossings of second directional derivatives". IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(1):121–126, 1985.
[299] Chen JS and Medioni G. Detection, localization, and estimation of edges. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(2):191–198, 1989.
[300] Marr D and Poggio T. Some comments on a recent theory of stereopsis. MIT Artificial Intelligence Laboratory Memo, MIT, Cambridge, MA, 1980.
[301] Richter J and Ullman S. Non-linearities in cortical simple cells and the possible detection of zero-crossings. Biological Cybernetics, 53:195–202, 1986.
[302] Canny J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679–698, 1986.
[303] Davis LS. A survey of edge detection techniques. Computer Graphics and Image Processing, 4:248–270, 1975.
[304] Fu KS and Mui JK. A survey on image segmentation. Pattern Recognition, 13:3–16, 1981.
[305] Haralick RM and Shapiro LG. Image segmentation techniques. Computer Vision, Graphics, and Image Processing, 29:100–132, 1985.
[306] Sahoo PK, Soltani S, and Wong AKC. A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing, 41:233–260, 1988.
[307] Pal NR and Pal SK. A review on image segmentation techniques. Pattern Recognition, 26:1277–1294, 1993.
[308] Geman D, Graffigne C, and Dong P. Boundary detection by constrained optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):609–628, 1990.
[309] Zucker SW. Region growing: Childhood and adolescence. Computer Graphics and Image Processing, 5:382–399, 1976.
[310] Cheevasuvit F, Maitre H, and Vidal-Madjar D. A robust method for picture segmentation based on split-and-merge procedure. Computer Vision, Graphics, and Image Processing, 34:268–281, 1986.
[311] Adams R and Bischof L. Seeded region growing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(6):641–647, 1994.
[312] Chang YL and Li XB. Adaptive image region-growing. IEEE Transactions on Image Processing, 3(6):868–872, 1994.
[313] Besl PJ and Jain RC. Segmentation through variable-order surface fitting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10:167–192, 1988.
[314] Pavlidis T and Liow YT. Integrating region growing and edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(3):225–233, March 1990.
[315] Meyer F and Beucher S. Morphological segmentation. Journal of Visual Communication and Image Representation, 1:21–46, 1990.
[316] Haddon JF and Boyce JF. Image segmentation by unifying region and boundary information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10):929–948, 1990.
[317] Moigne JL and Tilton JC. Refining image segmentation by integration of edge and region data. IEEE Transactions on Geoscience and Remote Sensing, 33(3):605–615, 1995.
[318] LaValle SM and Hutchinson SA. A Bayesian segmentation methodology for parametric image models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2):211–217, 1995.
[319] Won CS and Derin H. Unsupervised segmentation of noisy and textured images using Markov random fields. CVGIP: Graphical Models and Image Processing, 54(4):308–328, 1992.
[320] Shen L. Region-based Adaptive Image Processing Techniques for Mammography. PhD thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, August 1998.
[321] Shen L and Rangayyan RM. A segmentation-based lossless image coding method for high-resolution medical image compression. IEEE Transactions on Medical Imaging, 16(3):301–307, 1997.
[322] Rangayyan RM, Shen L, Shen Y, Rose MS, Desautels JEL, Bryant HE, Terry TJ, and Horeczko N. Region-based contrast enhancement. In Strickland RN, editor, Image-Processing Techniques for Tumor Detection, pages 213–242. Marcel Dekker, New York, NY, 2002.
[323] Sezan MI, Yip KL, and Daly SJ. Uniform perceptual quantization: Applications to digital radiography. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17:622–634, 1987.
[324] Sickles EA. Mammographic features of malignancy found during screening. In Brunner S and Langfeldt B, editors, Recent Results in Cancer Research, volume 119, pages 88–93. Springer-Verlag, Berlin, Germany, 1990.
[325] Spiesberger W. Mammogram inspection by computer. IEEE Transactions on Biomedical Engineering, BME-26(4):213–219, 1979.
[326] Chan HP, Doi K, Galhotra S, Vyborny CJ, MacMahon H, and Jokich PM. Image feature analysis and computer-aided diagnosis in digital radiography: I. Automated detection of microcalcifications in mammography. Medical Physics, 14(4):538–548, 1987.
[327] Chan HP, Doi K, Vyborny CJ, Lam KL, and Schmidt RA. Computer-aided detection of microcalcifications in mammograms: Methodology and preliminary clinical study. Investigative Radiology, 23(9):664–671, 1988.
[328] Chan HP, Doi K, Vyborny CJ, Schmidt RA, Metz CE, Lam KL, Ogura T, Wu YZ, and MacMahon H. Improvement in radiologists' detection of
clustered microcalcifications on mammograms: The potential of computer-aided diagnosis. Investigative Radiology, 25(10):1102–1110, 1990.
[329] Davies DH and Dance DR. Automatic computer detection of clustered calcifications in digital mammograms. Physics in Medicine and Biology, 35(8):1111–1118, 1990.
[330] Fam BW, Olson SL, Winter PF, and Scholz FJ. Algorithm for the detection of fine clustered calcifications on film mammograms. Radiology, 169:333–337, 1988.
[331] Karssemeijer N. Adaptive noise equalization and recognition of microcalcification clusters in mammograms. International Journal of Pattern Recognition and Artificial Intelligence, 7:1357–1376, December 1993.
[332] Brzakovic D and Neskovic M. Mammogram screening using multiresolution-based image segmentation. International Journal of Pattern Recognition and Artificial Intelligence, 7:1437–1460, December 1993.
[333] Netsch T. A scale-space approach for the detection of clustered microcalcifications in digital mammograms. In Proceedings of the 3rd International Workshop on Digital Mammography, pages 301–306, Chicago, IL, 9-12 June 1996.
[334] Shen L, Rangayyan RM, and Desautels JEL. Application of shape analysis to mammographic calcifications. IEEE Transactions on Medical Imaging, 13(2):263–274, 1994.
[335] Bankman IN, Nizialek T, Simon I, Gatewood OB, Weinberg IN, and Brody WR. Segmentation algorithms for detecting microcalcifications in mammograms. IEEE Transactions on Information Technology in Biomedicine, 1(2):141–149, 1997.
[336] Serrano C, Trujillo JD, Acha B, and Rangayyan RM. Use of 2D linear prediction error to detect microcalcifications in mammograms. In CD-ROM Proceedings of the II Latin American Congress on Biomedical Engineering, Havana, Cuba, 23-25 May 2001.
[337] Kuduvalli GR and Rangayyan RM. An algorithm for direct computation of 2-D linear prediction coefficients. IEEE Transactions on Signal Processing, 41(2):996–1000, 1993.
[338] Kuduvalli GR. Image Data Compression for High-resolution Digital Teleradiology. PhD thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, May 1992.
[339] Karssemeijer N. Detection of stellate distortions in mammograms using scale space operators. In Bizais Y, Barillot C, and Paola PD, editors, Information Processing in Medical Imaging, pages 335–346. Kluwer Academic, Dordrecht, The Netherlands, 1995.
[340] Laine A, Huda W, Chen D, and Harris J. Segmentation of masses using continuous scale representations. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, 3rd International Workshop on Digital Mammography, pages 447–450, Chicago, IL, 9-12 June 1996.
[341] Miller L and Ramsey N. The detection of malignant masses by non-linear multiscale analysis. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA,
editors, 3rd International Workshop on Digital Mammography, pages 335–340, Chicago, IL, 9-12 June 1996.
[342] Zhang M, Giger ML, Vyborny CJ, and Doi K. Mammographic texture analysis for the detection of spiculated lesions. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, 3rd International Workshop on Digital Mammography, pages 347–350, Chicago, IL, 9-12 June 1996.
[343] Matsubara T, Fujita H, Endo T, Horita K, Ikeda M, Kido C, and Ishigaki T. Development of mass detection algorithm based on adaptive thresholding technique in digital mammograms. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, 3rd International Workshop on Digital Mammography, pages 391–396, Chicago, IL, 9-12 June 1996.
[344] Kupinski MA and Giger ML. Automated seeded lesion segmentation on digital mammograms. IEEE Transactions on Medical Imaging, 17(4):510–517, 1998.
[345] Rangayyan RM, Mudigonda NR, and Desautels JEL. Boundary modeling and shape analysis methods for classification of mammographic masses. Medical and Biological Engineering and Computing, 38:487–496, 2000.
[346] Cannon RL, Dave JV, and Bezdek JC. Fuzzy C-Means clustering algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(2):248–255, 1986.
[347] Clark MC, Hall DB, Goldgof DB, Clarke LP, Velthuizen RP, and Silbiger MS. MRI segmentation using fuzzy clustering techniques. IEEE Engineering in Medicine and Biology, pages 730–742, November/December 1994.
[348] Chen CH and Lee GG. On digital mammogram segmentation and microcalcification detection using multiresolution wavelet analysis. Graphical Models and Image Processing, 59(5):349–364, 1997.
[349] Sameti M and Ward RK. A fuzzy segmentation algorithm for mammogram partitioning. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, 3rd International Workshop on Digital Mammography, pages 471–474, Chicago, IL, 9-12 June 1996.
[350] Zadeh LA. Fuzzy sets. Information and Control, 8:338–353, 1965.
[351] Klir GJ and Yuan B. Fuzzy Sets and Fuzzy Logic. Prentice Hall, Englewood Cliffs, NJ, 1995.
[352] Guliato D. Combinação de Algoritmos de Segmentação por Operadores de Agregação (in Portuguese). PhD thesis, Department of Electrical Engineering, University of São Paulo, São Paulo, Brazil, August 1998.
[353] Guliato D, Rangayyan RM, Adorno F, and Ribeiro MMG. Analysis and classification of breast masses by fuzzy-set-based image processing. In Peitgen HO, editor, 6th International Workshop on Digital Mammography, pages 196–197, Bremen, Germany, 22-25 June 2002.
[354] Menut O, Rangayyan RM, and Desautels JEL. Parabolic modeling and classification of breast tumours. International Journal of Shape Modeling, 3(3 & 4):155–166, 1998.
[355] Udupa JK and Samarasekera S. Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation. Graphical Models and Image Processing, 58(3):246–261, May 1996.
[356] Saha PK, Udupa JK, Conant EF, Chakraborty DP, and Sullivan D. Breast tissue density quantification via digitized mammograms. IEEE Transactions on Medical Imaging, 20(8):792–803, 2001.
[357] Saha PK, Udupa JK, and Odhner D. Scale-based fuzzy connected image segmentation: Theory, algorithms, and validation. Computer Vision and Image Understanding, 77:145–174, 2000.
[358] Hough PVC. Method and means for recognizing complex patterns. US Patent 3,069,654, December 18, 1962.
[359] Duda RO and Hart PE. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15:11–15, 1972.
[360] Rangayyan RM and Krishnan S. Feature identification in the time-frequency plane by using the Hough-Radon transform. Pattern Recognition, 34:1147–1158, 2001.
[361] Pavlidis T and Horowitz SL. Segmentation of plane curves. IEEE Transactions on Computers, C-23:860–870, August 1974.
[362] Kass M, Witkin A, and Terzopoulos D. Snakes: active contour models. International Journal of Computer Vision, 1(4):321–331, 1988.
[363] Falcão AX, Udupa JK, Samarasekera S, Sharma S, Hirsch BE, and Lotufo RA. User-steered image segmentation paradigms: Live wire and live lane. Graphical Models and Image Processing, 60:233–260, 1998.
[364] Falcão AX, Udupa JK, and Miyazawa FK. An ultra-fast user-steered image segmentation paradigm: Live wire on the fly. IEEE Transactions on Medical Imaging, 19(1):55–62, 2000.
[365] Deglint HJ, Rangayyan RM, and Boag GS. Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma. In Fitzpatrick JM and Sonka M, editors, Proceedings of SPIE Medical Imaging 2004: Image Processing, volume 5370, pages 475–483, San Diego, CA, February 2004.
[366] Deglint HJ. Image processing algorithms for three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma. Master's thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, August 2004.
[367] Lou SL, Lin HD, Lin KP, and Hoogstrate D. Automatic breast region extraction from digital mammograms for PACS and telemammography applications. Computerized Medical Imaging and Graphics, 24:205–220, 2000.
[368] Bick U, Giger ML, Schmidt RA, Nishikawa RM, and Doi K. Density correction of peripheral breast tissue on digital mammograms. RadioGraphics, 16(6):1403–1411, November 1996.
[369] Byng JW, Critten JP, and Yaffe MJ. Thickness-equalization processing for mammographic images. Radiology, 203(2):564–568, 1997.
[370] Chandrasekhar R and Attikiouzel Y. A simple method for automatically locating the nipple on mammograms. IEEE Transactions on Medical Imaging, 16(5):483–494, 1997.
[371] Lau TK and Bischof WF. Automated detection of breast tumors using the asymmetry approach. Computers and Biomedical Research, 24:273–295, 1991.
[372] Miller P and Astley S. Automated detection of mammographic asymmetry using anatomical features. International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1461–1476, 1993.
[373] Méndez AJ, Tahoces PG, Lado MJ, Souto M, Correa JL, and Vidal JJ. Automatic detection of breast border and nipple in digital mammograms. Computer Methods and Programs in Biomedicine, 49:253–262, 1996.
[374] Bick U, Giger ML, Schmidt RA, Nishikawa RM, Wolverton DE, and Doi K. Automated segmentation of digitized mammograms. Academic Radiology, 2(1):1–9, 1995.
[375] Ferrari RJ, Rangayyan RM, Desautels JEL, and Frère AF. Segmentation of mammograms: Identification of the skin-air boundary, pectoral muscle, and fibro-glandular disc. In Yaffe MJ, editor, Proceedings of the 5th International Workshop on Digital Mammography, pages 573–579, Toronto, ON, Canada, June 2000.
[376] Suckling J, Parker J, Dance DR, Astley S, Hutt I, Boggis CRM, Ricketts I, Stamatakis E, Cerneaz N, Kok SL, Taylor P, Betal D, and Savage J. The Mammographic Image Analysis Society digital mammogram database. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, volume 1069 of Excerpta Medica International Congress Series, pages 375–378, York, England, July 1994.
[377] Mackiewich B. Intracranial boundary detection and radio frequency correction in magnetic resonance images. Master's thesis, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 1995.
[378] Lobregt S and Viergever MA. A discrete dynamic contour model. IEEE Transactions on Medical Imaging, 14(1):12–24, 1995.
[379] Williams DJ and Shah M. A fast algorithm for active contours and curvature estimation. Computer Vision, Graphics, and Image Processing: Image Understanding, 55(1):14–26, 1992.
[380] Mattis P and Kimball S. GIMP: GNU Image Manipulation Program, version 1.1.17. http://www.gimp.org, GNU General Public License (GPL), accessed May 2002.
[381] Ferrari RJ, Rangayyan RM, Desautels JEL, and Frère AF. Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets. IEEE Transactions on Medical Imaging, 20(9):953–964, 2001.
[382] Karssemeijer N. Automated classification of parenchymal patterns in mammograms. Physics in Medicine and Biology, 43(2):365–378, 1998.
[383] Aylward SR, Hemminger BH, and Pisano ED. Mixture modeling for digital mammogram display and analysis. In Karssemeijer N, Thijssen M, Hendriks
J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 305–312, Nijmegen, The Netherlands, June 1998.
[384] Manjunath BS and Ma WY. Texture features for browsing and retrieval of image data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):837–842, 1996.
[385] Lee TS. Image representation using 2-D Gabor wavelets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):959–971, 1996.
[386] Mallat S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674–693, 1989.
[387] Jahne B. Digital Image Processing. Springer, San Diego, CA, 4th edition, 1997.
[388] De Valois RL, Albrecht DG, and Thorell LG. Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 22:545–559, 1982.
[389] Daugman JG. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. Journal of the Optical Society of America, 2(7):1160–1169, 1985.
[390] Jones P and Palmer LA. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1233–1258, 1987.
[391] Tucker AK. Textbook of Mammography. Churchill Livingstone, New York, NY, 1993.
[392] Ma WY and Manjunath BS. EdgeFlow: A technique for boundary detection and image segmentation. IEEE Transactions on Image Processing, 9(8):1375–1388, 2000.
[393] Meyer F and Beucher S. Morphological segmentation. Journal of Visual Communication and Image Representation, 1(1):21–46, 1990.
[394] Asar H, Nandhakumar N, and Aggarwal JK. Pyramid-based image segmentation using multisensory data. Pattern Recognition, 23(6):583–593, 1990.
[395] Vincent L and Soille P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(6):583–598, 1991.
[396] Xiaohan Y and Yla-Jaaski J. Direct segmentation in 3D and its application to medical images. In Proceedings of SPIE: Image Processing, volume 1898, pages 187–192, 1993.
[397] Hadjarian A, Bala J, Gutta S, Trachiots S, and Pachowicz P. The fusion of supervised and unsupervised techniques for segmentation of abnormal regions. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, 4th International Workshop on Digital Mammography, pages 299–302, Nijmegen, The Netherlands, June 1998. Kluwer Academic Publishers.
[398] Yager RR. Connectives and quantifiers in fuzzy sets. Fuzzy Sets and Systems, 40:39–75, 1991.
[399] Bloch I. Information combination operators for data fusion: a comparative review with classification. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 26(1):52–67, 1996.
[400] Sahiner B, Petrick N, Chan HP, Hadjiiski LM, Paramagul C, Helvie MA, and Gurcan MN. Computer-aided characterization of mammographic masses: Accuracy of mass segmentation and its effects on characterization. IEEE Transactions on Medical Imaging, 20(12):1275–1284, 2001.
[401] Gonzalez RC and Thomason MG. Syntactic Pattern Recognition: An Introduction. Addison-Wesley, Reading, MA, 1978.
[402] Duda RO, Hart PE, and Stork DG. Pattern Classification. Wiley, New York, NY, 2nd edition, 2001.
[403] American College of Radiology, Reston, VA. Illustrated Breast Imaging Reporting and Data System (BI-RADS™), 3rd edition, 1998.
[404] van Otterloo PJ. A Contour-oriented Approach to Shape Analysis. Prentice Hall, New York, NY, 1991.
[405] Bookstein FL. The Measurement of Biological Shape and Shape Change. Springer-Verlag, New York, NY, 1978.
[406] Loncaric S. A survey of shape analysis techniques. Pattern Recognition, 31(8):983–1001, 1998.
[407] Pohlman S, Powell KA, Obuchowski NA, Chilcote WA, and Grundfest-Broniatowski S. Quantitative classification of breast tumors in digitized mammograms. Medical Physics, 23(8):1337–1345, 1996.
[408] Freeman H. On the encoding of arbitrary geometric configurations. IRE Transactions on Electronic Computers, EC-10:260–268, 1961.
[409] Pavlidis T and Ali F. Computer recognition of handwritten numerals by polygonal approximations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-5:610–614, November 1975.
[410] Ventura JA and Chen JM. Segmentation of two-dimensional curve contours. Pattern Recognition, 25(10):1129–1140, 1992.
[411] Suen CY and Zhang TY. A fast parallel algorithm for thinning digital patterns. Communications of the ACM: Image Processing and Computer Vision, 27(5):236–239, 1984.
[412] Blum H. A transformation for extracting new descriptors of shape. In Wathen-Dunn W, editor, Models for the Perception of Speech and Visual Form. MIT Press, Cambridge, MA, 1967.
[413] Lu HE and Wang PSP. A comment on "A fast parallel algorithm for thinning digital patterns". Communications of the ACM: Image Processing and Computer Vision, 29(3):239–242, 1986.
[414] Eng K, Rangayyan RM, Bray RC, Frank CB, Anscomb L, and Veale P. Quantitative analysis of the fine vascular anatomy of articular ligaments. IEEE Transactions on Biomedical Engineering, 39(3):296–306, 1992.
[415] Bray RC, Rangayyan RM, and Frank CB. Normal and healing ligament vascularity: a quantitative histological assessment in the adult rabbit medial collateral ligament. Journal of Anatomy, 188:87–95, 1996.
[416] Shen L. Shape analysis of mammographic calcifications. Master's thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, July 1992.
[417] Gupta L and Srinath MD. Contour sequence moments for the classification of closed planar shapes. Pattern Recognition, 20(3):267–272, 1987.
[418] Dudani SA, Breeding KJ, and McGhee RB. Aircraft identification by moment invariants. IEEE Transactions on Computers, C-26(1):39–45, 1977.
[419] Reeves AP, Prokop RJ, Andrews SE, and Kuhl F. Three dimensional shape analysis using moments and Fourier descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6):937–943, 1988.
[420] Hu MK. Visual pattern recognition by moment invariants. IRE Transactions on Information Theory, IT-8(2):179–187, 1962.
[421] You Z and Jain AK. Performance evaluation of shape matching via chord length distribution. Computer Vision, Graphics, and Image Processing, 28:185–198, 1984.
[422] Granlund GH. Fourier preprocessing for hand print character recognition. IEEE Transactions on Computers, C-21:195–201, February 1972.
[423] Persoon E and Fu KS. Shape discrimination using Fourier descriptors. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(3):170–179, 1977.
[424] Zahn CT and Roskies RZ. Fourier descriptors for plane closed curves. IEEE Transactions on Computers, C-21:269–281, March 1972.
[425] Lestrel PE, editor. Fourier Descriptors and their Applications in Biology. Cambridge University Press, Cambridge, UK, 1997.
[426] Kuhl FP and Giardina CR. Elliptic Fourier features of a closed contour. Computer Graphics and Image Processing, 18:236–258, 1982.
[427] Lin CC and Chellappa R. Classification of partial 2-D shapes using Fourier descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9:686–690, 1987.
[428] Sahiner BS, Chan HP, Petrick N, Helvie MA, and Hadjiiski LM. Improvement of mammographic mass characterization using spiculation measures and morphological features. Medical Physics, 28(7):1455–1465, 2001.
[429] Lee TK, McLean DI, and Atkins MS. Irregularity index: A new border irregularity measure for cutaneous melanocytic lesions. Medical Image Analysis, 7:47–64, 2003.
[430] Feig SA, Galkin BM, and Muir HD. Evaluation of breast microcalcifications by means of optically magnified tissue specimen radiographs. In Brunner S and Langfeldt B, editors, Recent Results in Cancer Research, volume 105, pages 111–123. Springer-Verlag, Berlin, Germany, 1987.
[431] Sickles EA. Breast calcifications: Mammographic evaluation. Radiology, 160:289–293, 1986.
[432] Rao AR. A Taxonomy for Texture Description and Identification. Springer-Verlag, New York, NY, 1990.
[433] Sutton RN and Hall EL. Texture measures for automatic classification of pulmonary disease. IEEE Transactions on Computers, 21(7):667–676, 1972.
[434] Paget RD and Longstaff D. Terrain mapping of radar satellite. Journal of Electronic Imaging, 6(2):6–7, 1996.
[435] Ojala T, Pietikainen M, and Nisula J. Determining composition of grain mixtures by texture classification based on feature distribution. International Journal of Pattern Recognition and Artificial Intelligence, 10(1):73–81, 1996.
[436] Swarnakar V, Acharya RS, Sibata C, and Shin K. Fractal-based characterization of structural changes in biomedical images. In Proceedings of SPIE, volume 2709, pages 444–455, 1996.
[437] Uppaluri R, Mitsa T, Hoffman EA, McLennan G, and Sonka M. Texture analysis of pulmonary parenchyma in normal and emphysematous lung. In Proceedings of SPIE, volume 2709, pages 456–467, 1996.
[438] Julesz B and Bergen JR. Textons, the fundamental elements in preattentive vision and perception of textures. The Bell System Technical Journal, 62(6):1619–1645, 1983.
[439] Wechsler H. Texture analysis – A survey. Signal Processing, 2:271–282, 1980.
[440] Haralick RM and Shapiro LG. Computer and Robot Vision. Addison-Wesley, Reading, MA, 1992.
[441] Haralick RM. Statistical and structural approaches to texture. Proceedings of the IEEE, 67(5):786–804, 1979.
[442] Haralick RM, Shanmugam K, and Dinstein I. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6):610–621, 1973.
[443] van Wijk JJ. Spot noise. Computer Graphics, 25(4):309–318, 1991.
[444] Martins ACG and Rangayyan RM. Texture element extraction via cepstral filtering in the Radon domain. IETE Journal of Research (India), 48(3,4):143–150, 2002.
[445] Lerski RA, Straughan K, Schad LR, Boyce D, Bluml S, and Zuna I. MR image texture analysis – An approach to tissue characterization. Magnetic Resonance Imaging, 11:873–887, 1993.
[446] Petrosian A, Chan H, Helvie MA, Goodsitt MM, and Adler DD. Computer-aided diagnosis in mammography: Classification of mass and normal tissue by texture analysis. Physics in Medicine and Biology, 39:2273–2288, 1994.
[447] Martins ACG, Rangayyan RM, and Ruschioni RA. Audification and sonification of texture in images. Journal of Electronic Imaging, 10(3):690–705, 2001.
[448] Byng JW, Boyd NF, Fishell E, Jong RA, and Yaffe MJ. Automated analysis of mammographic densities. Physics in Medicine and Biology, 41:909–923, 1996.
[449] Parkkinen J, Selkainaho M, and Oja E. Detecting texture periodicity from the cooccurrence matrix. Pattern Recognition Letters, 11:43–50, January 1990.
450] Chan HP, Wei D, Helvie MA, Sahiner B, Adler DD, Goodsitt MM, and
Petrick N. Computer-aided classication of mammographic masses and nor-
1216 Biomedical Image Analysis
mal tissue: linear discriminant analysis in texture feature space. Physics in
Medicine and Biology, 40(5):857{876, 1995.
[451] Sahiner BS, Chan HP, Petrick N, Helvie MA, and Goodsitt MM. Computerized characterization of masses on mammograms: The rubber band straightening transform and texture analysis. Medical Physics, 25(4):516–526, 1998.
[452] Laws KI. Rapid texture identification. In Proceedings of SPIE Vol. 238: Image Processing for Missile Guidance, pages 376–380, 1980.
[453] Pietikainen M, Rosenfeld A, and Davis LS. Experiments with texture classification using averages of local pattern matches. IEEE Transactions on Systems, Man, and Cybernetics, 13:421–426, 1983.
[454] Hsiao JY and Sawchuk AA. Supervised textured image segmentation using feature smoothing and probabilistic relaxation techniques. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(12):1279–1292, 1989.
[455] Miller P and Astley S. Classification of breast tissue by texture analysis. Image and Vision Computing, 10(5):277–282, 1992.
[456] Goldberger AL, Rigney DR, and West BJ. Chaos and fractals in human physiology. Scientific American, 262:42–49, February 1990.
[457] West BJ. Fractal forms in physiology. International Journal of Modern Physics B, 4(10):1629–1669, 1990.
[458] Liu SH. Formation and anomalous properties of fractals. IEEE Engineering in Medicine and Biology Magazine, 11(2):28–39, June 1992.
[459] Deering W and West BJ. Fractal physiology. IEEE Engineering in Medicine and Biology Magazine, 11(2):40–46, June 1992.
[460] Barnsley M. Fractals Everywhere. Academic, San Diego, CA, 1988.
[461] Peitgen HO and Saupe D, editors. The Science of Fractal Images. Springer-Verlag, New York, NY, 1988.
[462] Mandelbrot BB. Fractals. WH Freeman and Company, San Francisco, CA, 1977.
[463] Kantz H, Kurths J, and Mayer-Kress G, editors. Nonlinear Analysis of Physiological Data. Springer-Verlag, Berlin, Germany, 1998.
[464] Peleg S, Naor J, Hartley R, and Avnir D. Multiple-resolution texture analysis and classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(4):518–523, 1984.
[465] Pentland AP. Fractal-based description of natural scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):661–674, 1984.
[466] Lundahl T, Ohley W, Kay SM, and Siffert R. Fractional Brownian motion: A maximum likelihood estimator and its application to image texture. IEEE Transactions on Medical Imaging, 5(3):152–161, 1986.
[467] Schepers HE, van Beek JHGM, and Bassingthwaighte JB. Four methods to estimate the fractal dimension from self-affine signals. IEEE Engineering in Medicine and Biology Magazine, 11(2):57–64, June 1992.
[468] Fortin C, Kumaresan R, Ohley W, and Hoefer S. Fractal dimension in the analysis of medical images. IEEE Engineering in Medicine and Biology Magazine, 11(2):65–71, June 1992.
[469] Geraets WGM and van der Stelt PF. Fractal properties of bone. Dentomaxillofacial Radiology, 29:144–153, 2000.
[470] Chen CC, DaPonte JS, and Fox MD. Fractal feature analysis and classification in medical imaging. IEEE Transactions on Medical Imaging, 8(2):133–142, 1989.
[471] Burdett CJ, Longbotham HG, Desai M, Richardson WB, and Stoll JF. Nonlinear indicators of malignancy. In Proceedings of SPIE Vol. 1905 on Biomedical Image Processing and Biomedical Visualization, pages 853–860, San Jose, CA, Feb. 1993.
[472] Wu CM, Chen YC, and Hsieh KS. Texture features for classification of ultrasonic liver images. IEEE Transactions on Medical Imaging, 11(2):141–152, 1992.
[473] Lee WL, Chen YC, and Hsieh KS. Ultrasonic liver tissues classification by fractal feature vector based on M-band wavelet transform. IEEE Transactions on Medical Imaging, 22(3):382–392, 2003.
[474] Yaffe MJ, Byng JW, and Boyd NF. Quantitative image analysis for estimation of breast cancer risk. In Bankman IN, editor, Handbook of Medical Imaging: Processing and Analysis, chapter 21, pages 323–340. Academic Press, London, UK, 2000.
[475] Caldwell CB, Stapleton SJ, Holdsworth DW, Jong RA, Weiser WJ, Cooke G, and Yaffe MJ. Characterization of mammographic parenchymal pattern by fractal dimension. Physics in Medicine and Biology, 35(2):235–247, 1990.
[476] Iftekharuddin KM, Jia W, and Marsh R. Fractal analysis of tumor in brain MR images. Machine Vision and Applications, 13:352–362, 2003.
[477] Chaudhuri BB and Sarkar N. Texture segmentation using fractal dimension. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1):72–77, 1995.
[478] Zheng L and Chan AK. An artificial intelligent algorithm for tumor detection in screening mammogram. IEEE Transactions on Medical Imaging, 20(7):559–567, 2001.
[479] Saparin PI, Gowin W, Kurths J, and Felsenberg D. Quantification of cancellous bone structure using symbol dynamics and measures of complexity. Physical Review E, 58:6449–6459, 1998.
[480] Jennane R, Ohley WJ, Majumdar S, and Lemineur G. Fractal analysis of bone X-ray tomographic microscopy projections. IEEE Transactions on Medical Imaging, 20(5):443–449, 2001.
[481] Samarabandhu J, Acharya R, Hausmann E, and Allen K. Analysis of bone X-rays using morphological fractals. IEEE Transactions on Medical Imaging, 12(3):466–470, 1993.
[482] Sedivy R, Windischberger Ch, Svozil K, Moser E, and Breitenecker G. Fractal analysis: An objective method for identifying atypical nuclei in dysplastic lesions of the cervix uteri. Gynecologic Oncology, 75:78–83, 1999.
[483] Esgiar AN, Naguib RNG, Sharif BS, Bennett MK, and Murray A. Fractal analysis in the detection of colonic cancer images. IEEE Transactions on Information Technology in Biomedicine, 6(1):54–58, 2002.
[484] Penn AI and Loew MH. Estimating fractal dimension with fractal interpolation function models. IEEE Transactions on Medical Imaging, 16(6):930–937, 1997.
[485] Bankman IN, Spisz TS, and Pavlopoulos S. Two-dimensional shape and texture quantification. In Bankman IN, editor, Handbook of Medical Imaging: Processing and Analysis, chapter 14, pages 215–230. Academic Press, London, UK, 2000.
[486] Jernigan ME and D'Astous F. Entropy-based texture analysis in the spatial frequency domain. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(2):237–243, 1984.
[487] Liu S and Jernigan ME. Texture analysis and discrimination in additive noise. Computer Vision, Graphics, and Image Processing, 49:52–67, 1990.
[488] Laine A and Fan J. Frame representations for texture segmentation. IEEE Transactions on Image Processing, 5(5):771–780, 1996.
[489] McLean GF. Vector quantization for texture classification. IEEE Transactions on Systems, Man, and Cybernetics, 23(3):637–649, 1993.
[490] Bovik AC. Analysis of multichannel narrow-band filters for image texture segmentation. IEEE Transactions on Signal Processing, 39(9):2025–2043, 1991.
[491] Vilnrotter FM, Nevatia R, and Price KE. Structural analysis of natural textures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):76–89, 1986.
[492] He DC and Wang L. Textural filters based on the texture spectrum. Pattern Recognition, 24(12):1187–1195, 1991.
[493] Wang S, Velasco FRD, Wu AY, and Rosenfeld A. Relative effectiveness of selected texture primitive statistics for texture discrimination. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(5):360–370, 1981.
[494] Tomita F, Shirai Y, and Tsuji S. Description of textures by structural analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4(2):183–191, 1982.
[495] Turner MR. Texture discrimination by Gabor functions. Biological Cybernetics, 55:71–82, 1986.
[496] Bovik AC, Clark M, and Geisler WS. Multichannel texture analysis using localized spatial filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1):55–73, 1990.
[497] Porat M and Zeevi YY. Localized texture processing in vision: Analysis and synthesis in the Gaborian space. IEEE Transactions on Biomedical Engineering, 36(1):115–129, 1989.
[498] Reed TR and Wechsler H. Segmentation of textured images and Gestalt organization using spatial/spatial-frequency representations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1):1–12, 1990.
[499] Reed TR, Wechsler H, and Werman M. Texture segmentation using a diffusion region growing technique. Pattern Recognition, 23(9):953–960, 1990.
[500] Jain AK and Farrokhnia F. Unsupervised texture segmentation using Gabor filters. Pattern Recognition, 24(12):1167–1186, 1991.
[501] Unser M. Texture classification and segmentation using wavelet frames. IEEE Transactions on Image Processing, 4:1549–1560, 1995.
[502] Ravichandran G and Trivedi MM. Circular-Mellin features for texture segmentation. IEEE Transactions on Image Processing, 4:1629–1639, 1995.
[503] Tardif P and Zaccarin A. Multiscale autoregressive image representation for texture segmentation. In Proceedings of SPIE: Image Processing, volume 3026, pages 327–337, 1997.
[504] Unser M and Eden M. Multiresolution feature extraction and selection for texture segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):717–728, 1989.
[505] Martins ACG and Rangayyan RM. Complex cepstral filtering of images and echo removal in the Radon domain. Pattern Recognition, 30(11):1931–1938, 1997.
[506] Chambers JM, Mathews MV, and Moore FR. Auditory Data Inspection. Report TM 74-122-2, Bell Laboratories, New York, NY, 1974.
[507] Kramer G, editor. Auditory Display: Sonification, Audification, and Auditory Interfaces. Addison Wesley, Reading, MA, 1994.
[508] Meijer P. An experimental system for auditory image representation. IEEE Transactions on Biomedical Engineering, 39(2):112–121, 1992.
[509] Meijer PBL. Let's Make Vision Accessible. http://www.visualprosthesis.com/voicover.htm, accessed June 2004.
[510] Makhoul J. Linear prediction: A tutorial. Proceedings of the IEEE, 63(4):561–580, 1975.
[511] Martins ACG, Rangayyan RM, Portela LA, Amaro Jr. E, and Ruschioni RA. Auditory display and sonification of textured images. In Proceedings of the Third International Conference on Auditory Display, pages 9–11, Palo Alto, CA, Nov. 1996.
[512] Kinoshita SK, de Azevedo Marques PM, Slaets AFF, Marana HRC, Ferrari RJ, and Villela RL. Detection and characterization of mammographic masses by artificial neural network. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 489–490, Nijmegen, The Netherlands, June 1998.
[513] Sahiner B, Chan HP, Petrick N, Wei D, Helvie MA, Adler DD, and Goodsitt MM. Classification of mass and normal breast tissue: a convolution neural network classifier with spatial domain and texture images. IEEE Transactions on Medical Imaging, 15(5):598–611, 1996.
[514] Wei D, Chan HP, Helvie MA, Sahiner B, Petrick N, Adler DD, and Goodsitt MM. Classification of mass and normal breast tissue on digital mammograms: multiresolution texture analysis. Medical Physics, 22(9):1501–1513, 1995.
[515] Wei D, Chan HP, Petrick N, Sahiner B, Helvie MA, Adler DD, and Goodsitt MM. False-positive reduction technique for detection of masses on digital mammograms: global and local multiresolution texture analysis. Medical Physics, 24(6):903–914, 1997.
[516] Sahiner B, Chan HP, Petrick N, Helvie MA, and Goodsitt MM. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis. Physics in Medicine and Biology, 43(10):2853–2871, 1998.
[517] Kok SL, Brady JM, and Tarassenko L. The detection of abnormalities in mammograms. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 261–270, York, England, 10-12 July 1994.
[518] Huo Z, Giger ML, Vyborny CJ, Bick U, Lu P, Wolverton DE, and Schmidt RA. Analysis of spiculation in the computerised classification of mammographic masses. Medical Physics, 22(10):1569–1579, 1995.
[519] Giger ML, Lu P, Huo Z, Bick U, Vyborny CJ, Schmidt RA, Zhang W, Metz CE, Wolverton D, Nishikawa RM, Zouras W, and Doi K. CAD in digital mammography: computerized detection and classification of masses. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 281–288, York, England, 10-12 July 1994.
[520] Huo Z, Giger ML, Vyborny CJ, Wolverton DE, Schmidt RA, and Doi K. Computer-aided diagnosis: Automated classification of mammographic mass lesions. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 207–211, Chicago, IL, 9-12 June 1996.
[521] Highnam RP, Brady JM, and Shepstone BJ. A quantitative feature to aid diagnosis in mammography. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 201–206, Chicago, IL, 9-12 June 1996.
[522] Claridge E and Richter JH. Characterisation of mammographic lesions. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 241–250, York, England, 10-12 July 1994.
[523] Sahiner B, Chan HP, Petrick N, Helvie MA, Adler DD, and Goodsitt MM. Classification of masses on mammograms using rubber-band straightening transform and feature analysis. In Proceedings of SPIE vol. 2710 on Medical Imaging 1996 – Image Processing, pages 44–50, Newport Beach, CA, 1996.
[524] Hadjiiski L, Sahiner B, Chan HP, Petrick N, and Helvie MA. Classification of malignant and benign masses based on hybrid ART2LDA approach. IEEE Transactions on Medical Imaging, 18(12):1178–1187, 1999.
[525] Giger ML, Huo Z, Wolverton DE, Vyborny CJ, Moran C, Schmidt RA, Al-Hallaq H, Nishikawa RM, and Doi K. Computer-aided diagnosis of digital mammographic and ultrasound images of breast mass lesions. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 143–147, Nijmegen, The Netherlands, June 1998.
[526] Gonzalez RC and Woods RE. Digital Image Processing. Addison-Wesley, Reading, MA, 1992.
[527] Russ JC. The Image Processing Handbook. CRC Press, Boca Raton, FL, 1995.
[528] Alto H. Computer-aided Diagnosis of Breast Cancer. PhD thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, April 2003.
[529] Alto H, Rangayyan RM, and Desautels JEL. Content-based retrieval and analysis of mammographic masses. Journal of Electronic Imaging, 14: in press, 2005.
[530] Ojala T, Pietikainen M, and Harwood D. A comparative study of texture measures with classification based on feature distributions. Pattern Recognition, 29(1):51–59, 1996.
[531] Liu ZQ, Rangayyan RM, and Frank CB. Directional analysis of images in scale-space. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1185–1192, 1991.
[532] Dziedzic-Goclawska A, Rozycka M, Czyba JC, Sawicki W, Moutier R, Lenczowski S, and Ostrowski K. Application of the optical Fourier transform for analysis of the spatial distribution of collagen fibers in normal and osteopetrotic bone tissue. Histochemistry, 74:123–137, 1982.
[533] Komori M, Minato K, Nakano Y, Hirakawa Y, and Kuwahara M. Automatic measurement system for congenital hip dislocation using computed radiography. In Proceedings of the SPIE, Volume 914: Medical Imaging II, pages 665–668, 1988.
[534] Kurogi S, Jianqiang Y, and Matsuoka K. Measurement of the angle of rotated images using Fourier transform. Transactions of the Institute of Electronics, Information and Communication Engineers D-II, J73D-II(4):590–596, April 1990.
[535] Denslow S, Zhang Z, Thompson RP, and Lam CF. Statistically characterized features for directionality quantitation in patterns and textures. Pattern Recognition, 26:1193–1205, 1993.
[536] Goncharov AB and Gelfand MS. Determination of mutual orientation of identical particles from their projections by the moments method. Ultramicroscopy, 25(4):317–328, 1988.
[537] Abramson SB and Fay FS. Application of multiresolution spatial filters to long-axis tracking. IEEE Transactions on Medical Imaging, 9(2):151–158, 1990.
[538] Kronick PL and Sacks MS. Quantification of vertical-fiber defect in cattle hide by small-angle light scattering. Connective Tissue Research, 27:1–13, 1991.
[539] Sacks MS and Chuong CJ. Characterization of collagen fiber architecture in the canine diaphragmatic central tendon. Journal of Biomechanical Engineering, 114:183–190, 1992.
[540] Petroll WM, Cavanagh HD, Barry P, Andrews P, and Jester JV. Quantitative analysis of stress fiber orientation during corneal wound contraction. Journal of Cell Science, 104:353–363, 1993.
[541] Thackray BD and Nelson AC. Semi-automatic segmentation of vascular network images using a rotating structural element (ROSE) with mathematical morphology and dual feature thresholding. IEEE Transactions on Medical Imaging, 12(3):385–392, 1993.
[542] Rolston WA. Directional image analysis. Master's thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, April 1994.
[543] Rolston WA and Rangayyan RM. Directional analysis of images using multiresolution Gabor filters. In Proceedings of the International Conference on Robotics, Vision, and Parallel Processing for Industrial Vision, pages 307–313, Ipoh, Malaysia, 26-28 May 1994.
[544] Johnson RW. Characterization of fiber behavior in a nonwoven web through image analysis of tracer fibers. In TAPPI Proceedings – 1988 Nonwovens Conference, pages 217–221, Nashville, TN, April 1988. TAPPI Press.
[545] Villa KM and Buchanan DR. Image analysis and the structure of nonwoven fabrics. In INDA-TEC: The International Nonwovens Technological Conference, pages 83–101, Philadelphia, PA, June 1986. Association of the Nonwoven Fabrics Industry, New York, NY.
[546] Haley CS and Landoll LM. Image analysis of real and simulated nonwoven fabrics. In INDA-TEC: The International Nonwovens Technological Conference, pages 65–82, Philadelphia, PA, June 1986.
[547] Yuhara T, Hasuike M, and Murakami K. Fibre orientation measurement with the two-dimensional power spectrum of a high-resolution soft x-ray image. Journal of Pulp and Paper Science, 17(4):J110–J114, 1991.
[548] Yang CF, Crosby CM, Eusufzai ARK, and Mark RE. Determination of paper sheet fiber orientation by a laser optical diffraction method. Journal of Applied Polymer Science, 34:1145–1157, 1987.
[549] Bresee RR and Donelson DS. Small-angle light scattering for analysis of a single fiber. Journal of Forensic Sciences, 25(2):413–422, 1980.
[550] Embree P and Burg JP. Wide-band velocity filtering – the pie slice process. Geophysics, 28:948–974, 1963.
[551] Treitel S, Shanks JL, and Frasier CW. Some aspects of fan filtering. Geophysics, 32:789–806, 1967.
[552] Bezvoda V, Ježek J, and Segeth K. FREDPACK – A program package for linear filtering in the frequency domain. Computers & Geosciences, 16(8):1123–1154, 1990.
[553] Thorarinsson F, Magnusson SG, and Bjornsson A. Directional spectral analysis and filtering of geophysical maps. Geophysics, 53(12):1587–1591, 1988.
[554] Tashiro H, Yoshikawa K, Nomura T, and Hamabe A. Measurement of position and posture using image processing by projective pattern on an object. Journal of the Japan Society of Precision Engineering, 56(7):1286–1291, 1990.
[555] Kono H. Measurement of angle and side length on the rectangle using directional code. In Proceedings of the 10th International Conference on Assembly Automation, pages 413–420, 1989.
[556] Marra M, Dunlay R, and Mathis D. Terrain classification using texture for the ALV. In Proceedings of SPIE, volume 1289, pages 64–70, 1989.
[557] Jacobberger PA. Mapping abandoned river channels in Mali through directional filtering of thematic mapper data. Remote Sensing of Environment, 26(2):161–170, 1988.
[558] Arsenault HH, Sequin MK, and Brousseau N. Optical filtering of aeromagnetic maps. Applied Optics, 13:1013–1017, May 1974.
[559] Moore GK and Waltz FA. Objective procedures for lineament enhancement and extraction. Photogrammetric Engineering and Remote Sensing, 49(5):641–647, 1983.
[560] Duvernoy J and Chalasinska-Macukow K. Processing measurements of the directional content of Fourier spectra. Applied Optics, 20(1):136–144, 1981.
[561] Duggin M, Rowntree RA, and Odell AW. Application of spatial filtering methods to urban feature analysis using digital image data. International Journal of Remote Sensing, 9(3):543–553, 1988.
[562] Carrere V. Development of multiple source data processing for structural analysis at a regional scale. Photogrammetric Engineering and Remote Sensing, 56(5):587–595, 1990.
[563] Shlomot E, Zeevi Y, and Pearlman WA. The importance of spatial frequency and orientation in image decomposition and coding. In Proceedings of SPIE, Volume 845: Visual Communication and Image Processing, pages 152–158, 1987.
[564] Li H and He Z. Directional subband coding of images. In International Conference on Acoustics, Speech, and Signal Processing, volume III, pages 1823–1826, Glasgow, Scotland, May 1989.
[565] Ikonomopoulos A and Kunt M. Directional filtering, zero crossing, edge detection and image coding. In Schussler HW, editor, Signal Processing II: Theories and Applications, pages 203–206. Elsevier, New York, NY, 1983.
[566] Kunt M, Ikonomopoulos A, and Kocher M. Second-generation image-coding techniques. Proceedings of the IEEE, 73(4):549–574, 1985.
[567] Ikonomopoulos A and Kunt M. High compression image coding via directional filtering. Signal Processing, 8:179–203, 1985.
[568] Kunt M. Recent results in high-compression image coding. IEEE Transactions on Circuits and Systems, CAS-34:1306–1336, November 1987.
[569] Hou HS and Vogel MJ. Detection of oriented line segments using discrete cosine transform. In Intelligent Robots and Computer Vision: Seventh in a Series, volume 1002, pages 81–87. Proceedings of SPIE, 1988.
[570] Mardia KV. Statistics of Directional Data. Academic Press, New York, NY, 1972.
[571] Schiller P, Finlay B, and Volman S. Quantitative studies of single-cell properties of monkey striate cortex. I. Spatiotemporal organization of receptive fields. Journal of Neurophysiology, 6:1288–1319, 1976.
[572] Kass M and Witkin A. Analyzing oriented patterns. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 944–952, Los Angeles, CA, 1985.
[573] Zucker SW. Early orientation selection: Tangent fields and the dimensionality of their support. Computer Vision, Graphics, and Image Processing, 32:74–103, 1985.
[574] Low KC and Coggins JM. Multiscale vector fields for image pattern recognition. In Proceedings of SPIE, Volume 1192, Intelligent Robots and Computer Vision VIII: Algorithms and Techniques, pages 159–168, 1989.
[575] Allen T, Mead C, Faggin F, and Gribble G. Orientation-selective VLSI retina. In Proceedings of SPIE, Volume 1001: Visual Communications and Image Processing, pages 1040–1046, 1988.
[576] Bigun J, Granlund GH, and Wiklund J. Multidimensional orientation estimation with applications to texture analysis and optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(8):775–790, 1991.
[577] Bruton LT, Bartley NR, and Stein RA. The design of stable high-quality two-dimensional recursive filters for seismic signal processing. Advances in Geophysical Data Processing, 2:233–261, 1985.
[578] Bamberger RH and Smith MJT. A filter bank for the directional decomposition of images: Theory and design. IEEE Transactions on Signal Processing, 40(4):882–893, 1992.
[579] Ikonomopoulos A and Unser M. A directional filtering approach to texture discrimination. In Proceedings of the 7th International Conference of Pattern Recognition, volume 1, pages 87–89, Montréal, Québec, Canada, 1984.
[580] Wang TX. Three-dimensional filtering using Hilbert transforms. Chinese Science Bulletin, 35(2):123–127, January 1990.
[581] Bonnet C, Brettel H, and Cohen I. Visibility of the spatial frequency components predicts the perceived orientational structure of a visual pattern. In Proceedings of SPIE, Volume 1077: Human Vision, Visual Processing, and Digital Display, pages 277–284, 1989.
[582] Porat B and Friedlander B. A frequency domain algorithm for multiframe detection and estimation of dim targets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(4):398–401, 1990.
[583] O'Gorman L and Nickerson JV. Matched filter design for fingerprint image enhancement. In IEEE International Conference on Acoustics, Speech, and Signal Processing '88, pages 916–919, New York, NY, April 1988.
[584] Fowlow TJ and Bruton LT. Attenuation characteristics of three-dimensional planar-resonant recursive digital filters. IEEE Transactions on Circuits and Systems, 35(5):595–599, 1988.
[585] Freeman WT and Adelson EH. The design and use of steerable filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):891–906, 1991.
[586] Marquerre H. Optical preparation of image data with directional filter for automatic inspection. Optik, 79(2):47–52, March 1988.
[587] Bruton LT and Bartley NR. Using nonessential singularities of the second kind in two-dimensional filter design. IEEE Transactions on Circuits and Systems, 36:113–116, 1989.
[588] Goodman D. Some difficulties with the double bilinear transformation in 2-D recursive filter design. Proceedings of the IEEE, 66:796–797, 1978.
[589] Gonzalez RC and Wintz P. Digital Image Processing. Addison-Wesley, Reading, MA, 2nd edition, 1992.
[590] Ning SX, Fan YP, and Tong C. A new smooth filter for directional detection and enhancement. In 9th International Conference on Pattern Recognition, pages 628–630, Rome, Italy, 1988. IEEE Computer Society Press.
[591] Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62–66, 1979.
[592] Illingworth J and Kittler J. A parallel threshold selection algorithm. In Proceedings of the SPIE, Volume 596: Architectures and Algorithms for Digital Image Processing, pages 129–133, 1985.
[593] Clark M, Bovik AC, and Geisler WS. Texture segmentation using Gabor modulation/demodulation. Pattern Recognition Letters, 6(4):261–267, 1987.
[594] Bovik AC, Clark M, and Geisler WS. Computational texture analysis using localized spatial filtering. In Proceedings of the IEEE Computer Society Workshop on Computer Vision, pages 201–206, Miami Beach, FL, November 1987. IEEE.
[595] Ayres FJ and Rangayyan RM. Characterization of architectural distortion in mammograms. In CDROM Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Cancún, Mexico, September 2003.
[596] Gabor D. Theory of communication. Journal of the Institute of Electrical Engineers, 93:429–457, 1946.
[597] Zhou YT, Venkateswar V, and Chellappa R. Edge detection and linear feature extraction using a 2-D random field model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1):84–95, 1989.
[598] Chen J, Sato Y, and Tamura S. Orientation space filtering for multiple orientation line segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(5):417–429, 2000.
[599] Rangayyan RM and Rolston A. Directional image analysis with the Hough and Radon transforms. Journal of the Indian Institute of Science, 78:3–16, 1998.
[600] Deans SR. Hough transform from the Radon transform. IEEE Transactions on Pattern Analysis and Machine Intelligence, 3:185–188, 1981.
[601] Leavers VF and Boyce JF. The Radon transform and its application to shape parameterization in machine vision. Image and Vision Computing, 5:161–166, 1987.
[602] Nimni M. Collagen: Its structure and function in normal and pathological connective tissue. Seminars in Arthritis and Rheumatology, 4:95–150, 1974.
[603] Butler DL, Zernicke RF, Grood ES, and Noyes FR. Biomechanics of ligaments and tendons. In Hutton R, editor, Exercise and Sports Science Review, pages 125–182. Franklin Institute Press, Hillsdale, NJ, 1978.
[604] Frank C, Amiel D, Woo SLY, and Akeson W. Normal ligament properties and ligament healing. Clinical Orthopaedics, 196:15–25, 1985.
[605] Forrester JC, Zederfeldt BH, Hayes TL, and Hunt TK. Tape-closed and sutured wounds: A comparison by tensiometry and scanning electron microscopy. British Journal of Surgery, 57:729–737, 1970.
[606] Oegema T, An K, Weiland A, and Furcht L. Injury and repair of the musculoskeletal soft tissues. In Woo SLY and Buckwalter JA, editors, American Academy of Orthopaedic Surgeons Symposium, page 355. C.V. Mosby, St. Louis, MO, 1988.
[607] Arnoczky SP, Marshall JL, and Rubin RM. Microvasculature of the cruciate ligaments and its response to injury. Journal of Bone Joint Surgery, 61A:1221–1229, 1979.
[608] Arnoczky SP, Marshall JL, and Tarvin GB. Anterior cruciate ligament replacement using patellar tendon – An evaluation of graft revascularization in the dog. Journal of Bone Joint Surgery, 64A:217–224, 1982.
[609] Bray RC, Fisher AWF, and Frank CB. Fine vascular anatomy of adult rabbit knee ligaments. Journal of Anatomy, 172:69–79, 1990.
[610] Chaudhuri S. Digital image processing techniques for quantitative analysis of collagen fibril alignment in ligaments. Master's thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, 1987.
[611] Rosenfeld A and de la Torre P. Histogram concavity as an aid to threshold selection. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13:231–235, 1983.
[612] Walpole RE and Myers RH, editors. Probability and Statistics for Engineers and Scientists. Macmillan, New York, NY, 1985.
[613] Chimich DD, Bray RC, Frank CB, and Shrive NG. Contralateral knee ligaments may not be 'normal' after opposite knee surgery: A biomechanical study in the adult rabbit MCL complex. In Transactions of the Canadian Orthopaedic Research Society 24th Annual Meeting, Vancouver, BC, Canada, June 1990.
[614] Winsberg F, Elkin M, Macy JJ, Bordaz V, and Weymouth W. Detection of radiographic abnormalities in mammograms by means of optical scanning and computer analysis. Radiology, 89:211–215, 1967.
[615] Ackerman LV and Gose E. Breast lesion classification by computer and xeroradiograph. Cancer, 30:1025–1035, 1972.
[616] Kimme C, O'Loughlin BJ, and Sklansky J. Automatic detection of suspicious abnormalities in breast radiographs. In Klinger A, Fu KS, and Kunii TL, editors, Data Structures, Computer Graphics, and Pattern Recognition, pages 427–447. Academic Press, New York, NY, 1977.
[617] Hand W, Semmlow JL, Ackerman LV, and Alcorn FS. Computer screening of xeromammograms: A technique for defining suspicious areas of the breast. Computers and Biomedical Research, 12:445–460, 1979.
[618] Semmlow JL, Shadagoppan A, Ackerman LV, Hand W, and Alcorn FS. A fully automated system for screening xeromammograms. Computers and Biomedical Research, 13:350–362, 1980.
[619] Lai SM, Li XB, and Bischof WF. On techniques for detecting circumscribed masses in mammograms. IEEE Transactions on Medical Imaging, 8(4):377–386, 1989.
[620] Brzakovic D, Luo XM, and Brzakovic P. An approach to automated detection of tumours in mammograms. IEEE Transactions on Medical Imaging, 9(3):233–241, 1990.
[621] Barman H and Granlund GH. Computer aided diagnosis of mammograms using a hierarchical framework. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 271–280, York, England, 10-12 July 1994.
[622] Li HD, Kallergi M, Clarke LP, Jain VK, and Clark RA. Markov random field for tumor detection in digital mammography. IEEE Transactions on Medical Imaging, 14(3):565–576, 1995.
[623] Qian W, Li L, Clarke LP, Mao F, and Clark RA. Adaptive CAD modules for mass detection in digital mammography. In Chang HK and Zhang YT, editors, Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 1013–1016, Hong Kong, October 1998.
[624] Chang CM and Laine A. Coherence of multiscale features for enhancement of digital mammograms. IEEE Transactions on Information Technology in Biomedicine, 3(1):32–46, 1999.
[625] Woods KS and Bowyer KW. Computer detection of stellate lesions. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 221–230, York, England, 10-12 July 1994.
[626] Cerneaz NJ. Model-based Analysis of Mammograms. PhD thesis, Department of Engineering Science, University of Oxford, Oxford, England, 1994.
[627] Woods KS and Bowyer KW. A general view of detection algorithms. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 385–390, Chicago, IL, 9-12 June 1996.
[628] Kok SL, Brady M, and Highnam R. Comparing mammogram pairs for the detection of lesions. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 103–110, Nijmegen, The Netherlands, June 1998.
[629] Guissin R and Brady JM. Iso-intensity contours for edge detection. Technical report OUEL 1935/92. Department of Engineering Science, Oxford University, Oxford, England, 1992.
[630] Cerneaz N and Brady M. Enriching digital mammogram image analysis with a description of the curvi-linear structures. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 297–306, York, England, 10-12 July 1994.
[631] Lindeberg T. Detecting salient blob-like image structures and their scales with a scale-space primal sketch: a method for focus-of-attention. International Journal of Computer Vision, 11(3):283–318, 1993.
[632] Petrick N, Chan HP, Sahiner B, and Wei D. An adaptive density-weighted contrast enhancement filter for mammographic breast mass detection. IEEE Transactions on Medical Imaging, 15(1):59–67, 1996.
[633] Petrick N, Chan HP, Wei D, Sahiner B, Helvie MA, and Adler DD. Automated detection of breast masses on mammograms using adaptive contrast enhancement and texture classification. Medical Physics, 23(10):1685–1696, 1996.
[634] Kobatake H, Murakami M, Takeo H, and Nawano S. Computerized detection of malignant tumors on digital mammograms. IEEE Transactions on Medical Imaging, 18(5):369–378, 1999.
[635] Kegelmeyer Jr. WP. Evaluation of stellate lesion detection in a standard mammogram data set. International Journal of Pattern Recognition and Artificial Intelligence, 7(12):1477–1493, 1993.
[636] Gupta R and Undrill PE. The use of texture analysis to delineate suspicious masses in mammography. Physics in Medicine and Biology, 40(5):835–855, 1995.
[637] Priebe CE, Lorey RA, Marchette DJ, Solka JL, and Rogers GW. Nonparametric spatio-temporal change point analysis for early detection in mammography. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 111–120, York, England, 10-12 July 1994.
[638] Karssemeijer N. Recognition of stellate lesions in digital mammograms. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 211–220, York, England, 10-12 July 1994.
[639] Zhang M, Giger ML, Vyborny CJ, and Doi K. Mammographic texture analysis for the detection of spiculated lesions. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 347–350, Chicago, IL, 9-12 June 1996.
[640] Parr T, Astley S, and Boggis C. The detection of stellate lesions in digital mammograms. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 231–240, York, England, 10-12 July 1994.
[641] Karssemeijer N and te Brake GM. Detection of stellate distortions in mammograms. IEEE Transactions on Medical Imaging, 15(10):611–619, 1996.
[642] te Brake GM and Karssemeijer N. Comparison of three mass detection methods. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors,
Proceedings of the 4th International Workshop on Digital Mammography, pages 119–126, Nijmegen, The Netherlands, June 1998.
[643] te Brake GM and Karssemeijer N. Single and multiscale detection of masses in digital mammograms. IEEE Transactions on Medical Imaging, 18(7):628–639, 1999.
[644] Kobatake H and Yoshinaga Y. Detection of spicules on mammogram based on skeleton analysis. IEEE Transactions on Medical Imaging, 15(3):235–245, 1996.
[645] Polakowski WE, Cournoyer DA, Rogers SK, DeSimio MP, Ruck DW, Hoffmeister JW, and Raines RA. Computer-aided breast cancer detection and diagnosis of masses using difference of Gaussians and derivative-based feature saliency. IEEE Transactions on Medical Imaging, 16(6):811–819, 1997.
[646] Matsubara T, Fujita H, Hara T, Kasai S, Otsuka O, Hatanaka Y, and Endo T. Development of a new algorithm for detection of mammographic masses. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 139–142, Nijmegen, The Netherlands, June 1998.
[647] Parr T, Astley S, Taylor CJ, and Boggis CRM. Model based classification of linear structures in digital mammograms. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 351–356, Chicago, IL, 9-12 June 1996.
[648] Parr T, Taylor CJ, Astley S, and Boggis CRM. A statistical representation of pattern structure for digital mammography. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 357–360, Chicago, IL, 9-12 June 1996.
[649] Zwiggelaar R, Astley S, and Taylor C. Detecting the central mass of a spiculated lesion using scale-orientation signatures. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 63–70, Nijmegen, The Netherlands, June 1998.
[650] Parr T, Taylor CJ, Astley S, and Boggis CRM. Statistical modeling of oriented line patterns in mammograms. In Proceedings of SPIE, Volume 3034: Medical Imaging – Image Processing, pages 44–55, 1997.
[651] Parr T, Zwiggelaar R, Astley S, Boggis C, and Taylor C. Comparison of methods for combining evidence for spiculated lesions. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 71–78, Nijmegen, The Netherlands, June 1998.
[652] Zwiggelaar R, Parr TC, Schumm JE, Hutt IW, Taylor CJ, Astley SM, and Boggis CRM. Model-based detection of spiculated lesions in mammograms. Medical Image Analysis, 3(1):39–62, 1999.
[653] Rao AR and Schunck BG. Computing oriented texture fields. Computer Vision, Graphics, and Image Processing, 53(2):157–185, 1991.
[654] Giger ML, Yin FF, Doi K, Metz CE, Schmidt RA, and Vyborny CJ. Investigation of methods for the computerized detection and analysis of mammographic masses. In Proceedings of SPIE Volume 1233, Medical Imaging IV: Image Processing, pages 183–184, 1990.
[655] Yin FF, Giger ML, Doi K, Metz CE, Vyborny CJ, and Schmidt RA. Computerized detection of masses in digital mammograms: Analysis of bilateral subtraction images. Medical Physics, 18(5):955–963, 1991.
[656] Sallam M and Bowyer KW. Registering time sequences of mammograms using a two-dimensional image unwarping technique. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 121–130, York, England, 10-12 July 1994.
[657] Stamatakis EA, Cairns AY, Ricketts IW, Walker C, Preece PE, and Thompson AJ. A novel approach to aligning mammograms. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 355–364, York, England, 10-12 July 1994.
[658] Nishikawa RM, Giger ML, Doi K, Vyborny CJ, and Schmidt RA. Computer-aided detection and diagnosis of masses and microcalcifications from digital mammograms. In Bowyer KW and Astley S, editors, State of the Art in Digital Mammographic Image Analysis, pages 82–102. World Scientific, Singapore, 1994.
[659] Miller P and Astley S. Automated detection of mammographic asymmetry using anatomical features. In Bowyer KW and Astley S, editors, State of the Art in Digital Mammographic Image Analysis, pages 247–261. World Scientific, Singapore, 1994.
[660] Burrell HC, Sibbering DM, Wilson ARM, Pinder SE, Evans AJ, Yeoman LJ, Elston CW, Ellis IO, Blamey RW, and Robertson JFR. Screening interval breast cancers: Mammographic features and prognostic factors. Radiology, 199:811–817, 1996.
[661] Brzakovic D, Vujovic N, Neskovic M, Brzakovic P, and Fogarty K. Mammogram analysis by comparison with previous screenings. In Gale AG, Astley SM, Dance DR, and Cairns AY, editors, Proceedings of the 2nd International Workshop on Digital Mammography, pages 131–140, York, England, 10-12 July 1994.
[662] Sallam M and Bowyer KW. Detecting abnormal densities in mammograms by comparison to previous screenings. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 417–420, Chicago, IL, 9-12 June 1996.
[663] te Brake GM, Karssemeijer N, and Hendriks JHCL. Automated detection of breast carcinomas not detected in a screening program. Radiology, 207(2):465–471, 1998.
[664] Sameti M, Morgan-Parkes J, Ward RK, and Palcic B. Classifying image features in the last screening mammograms prior to detection of a malignant mass. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors,
Proceedings of the 4th International Workshop on Digital Mammography, pages 127–134, Nijmegen, The Netherlands, June 1998.
[665] Hadjiiski L, Chan HP, Sahiner B, Petrick N, Helvie MA, and Gopal SS. Automated identification of breast lesions in temporal pairs of mammograms for interval change analysis. Radiology, 213(P):229–230, 1999.
[666] Gopal SS, Chan HP, Wilson TE, Helvie MA, Petrick N, and Sahiner B. A regional registration technique for automated interval change analysis of breast lesions on mammograms. Medical Physics, 26:2669–2679, 1999.
[667] Petrick N, Chan HP, Sahiner B, Helvie MA, and Paquerault S. Evaluation of an automated computer-aided diagnosis system for the detection of masses on prior mammograms. In Proceedings of SPIE Volume 3979, Medical Imaging 2000: Image Processing, pages 967–973, 2000.
[668] Hadjiiski L, Chan HP, Sahiner B, Petrick N, Helvie MA, Paquerault S, and Zhou C. Interval change analysis in temporal pairs of mammograms using a local affine transformation. In Proceedings of SPIE Volume 3979, Medical Imaging 2000: Image Processing, pages 847–853, 2000.
[669] Sameti M. Detection of Soft Tissue Abnormalities in Mammographic Images for Early Diagnosis of Breast Cancer. PhD thesis, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada, November 1998.
[670] Ranganath S. Image filtering using multiresolution representations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(5):426–440, 1991.
[671] Rezaee MR, van der Zwet PMJ, Lelieveldt BPF, van der Geest RJ, and Reiber JHC. Multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Transactions on Image Processing, 9(7):1238–1248, 2000.
[672] Daubechies I. Ten lectures on wavelets. CBMS, SIAM, 61:198–202, 1995.
[673] Shiffman S, Rubin GD, and Napel S. Medical image segmentation using analysis of isolable-contour maps. IEEE Transactions on Medical Imaging, 19(11):1064–1074, 2000.
[674] Brown MB and Engelman L. BMDP Statistical Software Manual. University of California, Berkeley, CA, 1988.
[675] Groshong BR and Kegelmeyer Jr. WP. Evaluation of a Hough transform method for circumscribed lesion detection. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 361–366, Chicago, IL, 9-12 June 1996.
[676] Mudigonda NR, Rangayyan RM, and Desautels JEL. Segmentation and classification of mammographic masses. In Proceedings of SPIE Volume 3979, Medical Imaging 2000: Image Processing, pages 55–67, February 2000.
[677] Schunck BG. Gaussian filters and edge detection, Research Publication GMR-5586. Computer Science Department, General Motors Research Laboratories, Detroit, MI, 1986.
[678] Kass M and Witkin A. Analyzing oriented patterns. Computer Vision, Graphics, and Image Processing, 37:362–385, 1987.
[679] Ayres FJ and Rangayyan RM. Characterization of architectural distortion in mammograms via analysis of oriented texture. IEEE Engineering in Medicine and Biology Magazine, 24: in press, January 2005.
[680] Ayres FJ and Rangayyan RM. Detection of architectural distortion in mammograms using phase portraits. In Fitzpatrick JM and Sonka M, editors, Proceedings of SPIE Medical Imaging 2004: Image Processing, volume 5370, pages 587–597, San Diego, CA, February 2004.
[681] Rangayyan RM, Ferrari RJ, and Frère AF. Detection of asymmetry between left and right mammograms. In Proceedings of the 7th International Workshop on Digital Mammography, Chapel Hill, NC, June 2004.
[682] Yin FF, Giger ML, Doi K, Vyborny CJ, and Schmidt RA. Computerized detection of masses in digital mammograms: Automated alignment of breast images and its effect on bilateral-subtraction technique. Medical Physics, 21(3):445–452, 1994.
[683] Vujovic N and Brzakovic D. Establishing the correspondence between control points in pairs of mammographic images. IEEE Transactions on Image Processing, 6(10):1388–1399, 1997.
[684] Karssemeijer N and te Brake GM. Combining single view features and asymmetry for detection of mass lesions. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 95–102, Nijmegen, The Netherlands, June 1998.
[685] Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors. Proceedings of the 3rd International Workshop on Digital Mammography, Chicago, IL, June 1996. Elsevier.
[686] Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors. Proceedings of the 4th International Workshop on Digital Mammography, Nijmegen, The Netherlands, June 1998. Kluwer Academic Publishers.
[687] Yaffe MJ, editor. Proceedings of the 5th International Workshop on Digital Mammography, Toronto, ON, Canada, June 2000. Medical Physics Publishing.
[688] Peitgen HO, editor. Proceedings of the 6th International Workshop on Digital Mammography, Bremen, Germany, June 2002. Springer-Verlag.
[689] Wolfe JN. Risk for breast cancer development determined by mammographic parenchymal pattern. Cancer, 37:2486–2492, 1976.
[690] Matsubara T, Yamazaki D, Fujita H, Hara T, Iwase T, and Endo T. An automated classification method for mammograms based on evaluation of fibroglandular breast tissue density. In Yaffe MJ, editor, Proceedings of the 5th International Workshop on Digital Mammography, pages 737–741, Toronto, ON, Canada, June 2000.
[691] Zhou C, Chan HP, Petrick N, Helvie MA, Goodsitt MM, Sahiner B, and Hadjiiski LM. Computerized image analysis: Estimation of breast density on mammograms. Medical Physics, 28(6):1056–1069, 2001.
[692] Byng JW, Boyd NF, Fishell E, Jong RA, and Yaffe MJ. The quantitative analysis of mammographic densities. Physics in Medicine and Biology, 39:1629–1638, 1994.
[693] Tahoces PG, Correa J, Souto M, Gómez L, and Vidal JJ. Computer-assisted diagnosis: the classification of mammographic breast parenchymal patterns. Physics in Medicine and Biology, 40:103–117, 1995.
[694] Huo Z, Giger ML, Zhong W, and Olopade OI. Analysis of relative contributions of mammographic features and age to breast cancer risk prediction. In Yaffe MJ, editor, Proceedings of the 5th International Workshop on Digital Mammography, pages 732–736, Toronto, ON, Canada, June 2000.
[695] Sivaramakrishna R, Obuchowski NA, Chilcote WA, and Powell KA. Automatic segmentation of mammographic density. Academic Radiology, 8(3):250–256, 2001.
[696] Bassett LW and Gold RH. Breast Cancer Detection: Mammography and Other Methods in Breast Imaging. Grune & Stratton, Orlando, FL, 2nd edition, 1987.
[697] Caulkin S, Astley S, Asquith J, and Boggis C. Sites of occurrence of malignancies in mammograms. In Karssemeijer N, Thijssen M, Hendriks J, and van Erning L, editors, Proceedings of the 4th International Workshop on Digital Mammography, pages 279–282, Nijmegen, The Netherlands, June 1998.
[698] McLachlan GJ and Krishnan T. The EM Algorithm and Extensions. Wiley-Interscience, New York, NY, 1997.
[699] Rissanen J. Modeling by shortest data description. Automatica, 14:465–471, 1978.
[700] Bishop CM. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, England, 1995.
[701] Wolfe JN. Breast parenchymal patterns and their changes with age. Radiology, 121:545–552, 1976.
[702] Rutter CM, Mandelson MT, Laya MB, and Taplin S. Changes in breast density associated with initiation, discontinuation, and continuing use of hormone replacement therapy. Journal of the American Medical Association, 285(2):171–176, 2001.
[703] Dempster AP, Laird NM, and Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B 39(1):1–38, 1977.
[704] Liavas AP and Regalia PA. On the behavior of information theoretic criteria for model order selection. IEEE Transactions on Signal Processing, 49(8):1689–1695, 2001.
[705] Hansen MH and Yu B. Model selection and the principle of minimum description length. Journal of the American Statistical Association, 96(454):746–774, 2001.
[706] Carson C, Thomas M, Belongie S, Hellerstein JM, and Malik J. Blobworld: A system for region-based image indexing and retrieval. In Huijsmans DP and
Smeulders AWM, editors, Proceedings of the 3rd International Conference on Visual Information and Information Systems, pages 509–516, Amsterdam, The Netherlands, June 1999.
[707] Dance DR. Physical principles of breast imaging. In Doi K, Giger ML, Nishikawa RM, and Schmidt RA, editors, Proceedings of the 3rd International Workshop on Digital Mammography, pages 427–430, Chicago, IL, June 1996.
[708] Ueda N and Nakano R. Deterministic annealing EM algorithm. Neural Networks, 11:271–282, 1998.
[709] Masson P and Pieczynski W. SEM algorithm and unsupervised statistical segmentation of satellite images. IEEE Transactions on Geoscience and Remote Sensing, 31(3):618–633, 1993.
[710] Campbell FW and Robson JG. Application of Fourier analysis to the visibility of gratings. Journal of Physiology, 197:551–566, 1968.
[711] Marcelja S. Mathematical description of the response of simple cortical cells. Journal of the Optical Society of America, 70(11):1297–1300, 1980.
[712] Daugman JG. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(7):1169–1179, 1988.
[713] Malik J and Perona P. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America A, 7(2):923–932, 1990.
[714] Dunn D, Higgins WE, and Wakeley J. Texture segmentation using 2-D Gabor elementary functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(2):130–149, 1994.
[715] Chang T and Kuo CCJ. Texture analysis and classification with tree-structured wavelet transform. IEEE Transactions on Image Processing, 2(4):429–441, 1993.
[716] Li L, Mao F, Qian W, and Clarke LP. Wavelet transform for directional feature extraction in medical imaging. In Proceedings of the IEEE International Conference on Image Processing, volume 3, pages 500–503, Santa Barbara, CA, 1997.
[717] Qian W and Clarke LP. Hybrid M-channel wavelet transform methods: adaptive, automatic and digital X-ray sensor independent. Medical Physics, 22(6):983–984, 1995.
[718] Li L, Qian W, and Clarke LP. Digital mammography: CAD method for mass detection using multiresolution and multiorientation wavelet transforms. Academic Radiology, 4:724–731, 1997.
[719] Graham NVS. Visual Pattern Analyzers. Oxford University Press, New York, NY, 1989.
[720] Daubechies I. The wavelet transform, time-frequency localization and signal analysis. IEEE Transactions on Information Theory, 36(5):961–1004, 1990.
[721] Duda RO and Hart PE. Pattern Classification and Scene Analysis. Wiley, New York, NY, 1973.
[722] Schneider MA. Better detection: Improving our chances. In Yaffe MJ, editor, Digital Mammography: 5th International Workshop on Digital Mammography, pages 3–6, Toronto, ON, Canada, June 2000. Medical Physics Publishing.
[723] Bird RE, Wallace TW, and Yankaskas BC. Analysis of cancers missed at screening mammography. Radiology, 184:613–617, 1992.
[724] Baker JA, Rosen EL, Lo JY, Gimenez EI, Walsh R, and Soo MS. Computer-aided detection (CAD) in screening mammography: Sensitivity of commercial CAD systems for detecting architectural distortion. American Journal of Roentgenology, 181:1083–1088, 2003.
[725] van Dijck JAAM, Verbeek ALM, Hendriks JHCL, and Holland R. The current detectability of breast cancer in a mammographic screening program. Cancer, 72:1933–1938, 1993.
[726] Sickles EA. Mammographic features of 300 consecutive nonpalpable breast cancers. American Journal of Roentgenology, 146:661–663, 1986.
[727] Broeders MJM, Onland-Moret NC, Rijken HJTM, Hendriks JHCL, Verbeek ALM, and Holland R. Use of previous screening mammograms to identify features indicating cases that would have a possible gain in prognosis following earlier detection. European Journal of Cancer, 39:1770–1775, 1993.
[728] Kegelmeyer Jr. WP, Pruneda JM, Bourland PD, Hillis A, Riggs MW, and Nipper ML. Computer-aided mammographic screening for spiculated lesions. Radiology, 191:331–337, 1994.
[729] Sampat MP and Bovik AC. Detection of spiculated lesions in mammograms. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (CD-ROM), pages 810–813, Cancún, Mexico, September 2003.
[730] Mudigonda NR and Rangayyan RM. Texture flow-field analysis for the detection of architectural distortion in mammograms. In Proceedings of BioVision, pages 76–81, Bangalore, India, December 2001.
[731] Matsubara T, Ichikawa T, Hara T, Fujita H, Kasai S, Endo T, and Iwase T. Automated detection methods for architectural distortions around skinline and within mammary gland on mammograms. In Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, and Reiber JHC, editors, International Congress Series: Proceedings of the 17th International Congress and Exhibition on Computer Assisted Radiology and Surgery, pages 950–955, London, UK, June 2003. Elsevier.
[732] Burhenne LJW, Wood SA, D'Orsi CJ, Feig SA, Kopans DB, O'Shaughnessy KF, Sickles EA, Tabar L, Vyborny CJ, and Castellino RA. Potential contribution of computer-aided detection to the sensitivity of screening mammography. Radiology, 215:554–562, 2000.
[733] Evans WP, Burhenne LJW, Laurie L, O'Shaughnessy KF, and Castellino RA. Invasive lobular carcinoma of the breast: Mammographic characteristics and computer-aided detection. Radiology, 225(1):182–189, 2002.
[734] Birdwell RL, Ikeda DM, O'Shaughnessy KF, and Sickles EA. Mammographic characteristics of 115 missed cancers later detected with screening mammography and the potential utility of computer-aided detection. Radiology, 219(1):192–202, 2001.
[735] Wylie CR and Barrett LC. Advanced Engineering Mathematics. McGraw-Hill, New York, NY, 6th edition, 1995.
[736] Rao AR and Jain RC. Computerized flow field analysis: Oriented texture fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(7):693–709, 1992.
[737] Gershenfeld N. The Nature of Mathematical Modeling. Cambridge University Press, Cambridge, UK, 1999.
[738] Sweet SA. Data Analysis with SPSS. Allyn & Bacon, Boston, MA, 1999.
[739] Louis AK and Natterer F. Mathematical problems of computerized tomography. Proceedings of the IEEE, 71(3):379–389, 1983.
[740] Lewitt RM. Reconstruction algorithms: Transform methods. Proceedings of the IEEE, 71(3):390–408, 1983.
[741] Censor Y. Finite series-expansion reconstruction methods. Proceedings of the IEEE, 71(3):409–419, 1983.
[742] Gordon R. A tutorial on ART (Algebraic Reconstruction Techniques). IEEE Transactions on Nuclear Science, 21:78–93, 1974.
[743] Gordon R and Herman GT. Three-dimensional reconstruction from projections: A review of algorithms. International Review of Cytology, 38:111–151, 1974.
[744] Gordon R and Rangayyan RM. Geometric deconvolution: A meta-algorithm for limited view computed tomography. IEEE Transactions on Biomedical Engineering, 30:806–810, 1983.
[745] Gordon R, Dhawan AP, and Rangayyan RM. Reply to comments on geometric deconvolution: A meta-algorithm for limited view computed tomography. IEEE Transactions on Biomedical Engineering, 32:242–244, 1985.
[746] Soble PJ, Rangayyan RM, and Gordon R. Quantitative and qualitative evaluation of geometric deconvolution of distortion in limited-view computed tomography. IEEE Transactions on Biomedical Engineering, 32:330–335, 1985.
[747] Rangayyan RM, Gordon R, and Dhawan AP. Algorithms for limited-view computed tomography: An annotated bibliography and a challenge. Applied Optics, 24(23):4000–4012, 1985.
[748] Dhawan AP, Rangayyan RM, and Gordon R. Image restoration by Wiener deconvolution in limited-view computed tomography. Applied Optics, 24(23):4013–4020, 1985.
[749] Boulfelfel D, Rangayyan RM, Hahn LJ, and Kloiber R. Three-dimensional restoration of single photon emission computed tomography images. IEEE Transactions on Nuclear Science, 41(5):1746–1754, 1994.
[750] Boulfelfel D, Rangayyan RM, Hahn LJ, Kloiber R, and Kuduvalli GR. Restoration of single photon emission computed tomography images by the Kalman filter. IEEE Transactions on Medical Imaging, 13(1):102–109, 1994.
[751] Boulfelfel D, Rangayyan RM, Hahn LJ, and Kloiber R. Pre-reconstruction restoration of single photon emission computed tomography images. IEEE Transactions on Medical Imaging, 11(3):336–341, 1992.
[752] Herman GT, Lent A, and Rowland SW. ART: Mathematics and Applications – A report on the mathematical foundations and on the applicability to real data of the Algebraic Reconstruction Techniques. Journal of Theoretical Biology, 42:1–32, 1973.
[753] Rangayyan RM and Gordon R. Streak preventive image reconstruction via ART and adaptive filtering. IEEE Transactions on Medical Imaging, 1:173–178, 1982.
[754] Kaczmarz MS. Angenäherte Auflösung von Systemen linearer Gleichungen. Bulletin International de l'Académie Polonaise des Sciences et des Lettres, Série A, Sciences Mathématiques, pages 355–357, 1937.
[755] Guan H and Gordon R. Computed tomography using ART with different projection access schemes: a comparison study under practical situations. Physics in Medicine and Biology, 41:1727–1743, 1996.
[756] Lent A. A convergent algorithm for maximum entropy image restoration, with a medical x-ray application. In Shaw R, editor, Image Analysis and Evaluation, pages 249–257. Society of Photographic Scientists and Engineers, Washington, DC, 1977.
[757] Gilbert P. Iterative methods for the three-dimensional reconstruction of an object from projections. Journal of Theoretical Biology, 36:105–117, 1972.
[758] Fullerton GD. Fundamentals of CT tissue characterization. In Fullerton GD and Zagzebski JA, editors, Medical Physics of CT and Ultrasound: Tissue Imaging and Characterization, pages 125–162. American Association of Physicists in Medicine, New York, NY, 1980.
[759] Mategrano VC, Petasnick J, Clark J, Bin AC, and Weinstein R. Attenuation values in computed tomography of the abdomen. Radiology, 125:135–140, October 1977.
[760] Cruvinel PE, Cesareo R, Crestana S, and Mascarenhas S. X- and gamma-rays computerized minitomograph scanner for soil science. IEEE Transactions on Instrumentation and Measurements, 39(5):745–750, 1990.
[761] Vaz CMP, Crestana S, Mascarenhas S, Cruvinel PE, Reichardt K, and Stolf R. Using a computed tomography miniscanner for studying tillage induced soil compaction. Soil Technology, 2:313–321, 1989.
[762] Onoe AM, Tsao JW, Yamada H, Nakamura H, Kogure J, Kawamura H, and Yoshimatsu M. Computed tomography for measuring annual rings of a live tree. Proceedings of the IEEE, 71(7):907–908, 1983.
[763] Stock SR. X-ray microtomography of materials. International Materials Review, 44(4):141–164, 1999.
[764] Johnson RH, Karau KL, Molthen RC, Haworth ST, and Dawson CA. Micro-CT image-derived metrics quantify arterial wall distensibility reduction in a rat model of pulmonary hypertension. In Proceedings SPIE 3978: Medical Imaging 2000 – Physiology and Function from Multidimensional Images, pages 320–330, San Diego, CA, February 2000.
[765] Illman B and Dowd B. High-resolution microtomography for density and spatial information about wood structures. In Proceedings SPIE 3772: Developments in X-ray Tomography, pages 198–330, Denver, CO, July 1999.
[766] Shimizu K, Ikezoe J, Ikura H, Ebara H, Nagareda T, Yagi N, Umetani K, Uesugi K, Okada K, Sugita A, and Tanaka M. Synchrotron radiation microtomography of the lung specimens. In Proceedings SPIE 3977: Medical Imaging 2000 – Physics of Medical Imaging, pages 196–204, San Diego, CA, February 2000.
[767] Umetani K, Yagi N, Suzuki Y, Ogasawara Y, Kajiya F, Matsumoto T, Tachibana H, Goto M, Yamashita T, Imai S, and Kajihara Y. Observation and analysis of microcirculation using high-spatial-resolution image detectors and synchrotron radiation. In Proceedings SPIE 3977: Medical Imaging 2000 – Physics of Medical Imaging, pages 522–533, San Diego, CA, February 2000.
[768] Sasov A. High-resolution in-vivo micro-CT scanner for small animals. In Proceedings SPIE 4320: Medical Imaging 2001 – Physics of Medical Imaging, pages 705–710, San Diego, CA, February 2001.
[769] Shaler SM, Keane DT, Wang H, Mott L, Landis E, and Holzman L. Microtomography of cellulosic structures. In TAPPI Proceedings: Process and Product Quality Conference, pages 89–96, 1998.
[770] Machin K and Webb S. Cone-beam x-ray microtomography of small specimens. Physics in Medicine and Biology, 39:1639–1657, 1994.
[771] Boyd SK, Muller R, Matyas JR, Wohl GR, and Zernicke RF. Early morphometric and anisotropic change in periarticular cancellous bone in a model of experimental knee osteoarthritis quantified using microcomputed tomography. Clinical Biomechanics, 15:624–631, 2000.
[772] Boyd SK. Microstructural Bone Adaptation in an Experimental Model of Osteoarthritis. PhD thesis, Faculty of Kinesiology, University of Calgary, Calgary, Alberta, Canada, August 2001.
[773] Alexander F. Neuroblastoma. Urologic Clinics of North America, 27(3):383–392, 2000.
[774] Cotterill SJ, Pearson ADJ, Pritchard J, Foot ABM, Roald B, Kohler JA, and Imeson J. Clinical prognostic factors in 1277 patients with neuroblastoma: Results of The European Neuroblastoma Study Group 'Survey' 1982-1992. European Journal of Cancer, 36:901–908, 2000.
[775] Meza MP, Benson M, and Slovis TL. Imaging of mediastinal masses in children. Radiologic Clinics of North America, 31(3):583–604, 1993.
[776] Abramson SJ. Adrenal neoplasm in children. Radiologic Clinics of North America, 35(6):1415–1453, 1997.
[777] Castleberry RP. Neuroblastoma. European Journal of Cancer, 33(9):1430–1438, 1997.
[778] Goodman MT, Gurney JG, Smith MA, and Olshan AF. Cancer incidence and survival among children and adolescents: United States Surveillance, Epidemiology, and End Results (SEER) Program 1975-1995.
References 1239
Chapter IV Sympathetic nervous system tumors. National Cancer In-
stitute, http://seer.cancer.gov/publications/childhood/sympathetic.pdf, ac-
cessed May 2003.
779] Gurney JG, Ross JA, Wall DA, Bleyer WA, Severson RK, and Robison LL.
Infant cancer in the U.S.: Histology-specic incidence and trends. Journal
of Pediatric Hematology/Oncology, 19(5):428{432, 1997.
780] Parker L and Powell J. Screening for neuroblastoma in infants younger than
1 year of age: Review of the rst 30 years. Medical and Pediatric Oncology,
31:455{469, 1998.
781] Woods WG and Tuchman M. A population-based study of the usefulness of
screening for neuroblastoma. Lancet, 348(9043):1682{1687, 1998.
782] Bousvaros A, Kirks DR, and Grossman H. Imaging of neuroblastoma: An
overview. Pediatric Radiology, 16:89{106, 1986.
783] Brodeur GM, Pritchard J, Berthold F, Carlsen NLT, Castel V, Castleberry
RP, de Bernardi B, Evans AE, Favrot M, Hedborg F, Kaneko M, Kemshead
J, Lampert F, Lee REJ, Look T, Pearson ADJ, Philip T, Roald B, Sawada T,
Seeger RC, Tsuchida Y, and Voute PA. Revisions of the international criteria
for neuroblastoma diagnosis, staging, and response to treatment. Journal of
Clinical Oncology, 11(8):1466{1477, 1993.
784] Stark DD, Moss AA, Brasch RC, deLorimier AA, Albin AR, London DA, and
Gooding CA. Neuroblastoma: Diagnostic imaging and staging. Radiology,
148:101{105, July 1983.
785] Kirks DR, Merten DF, Grossman H, and Bowie JD. Diagnostic imaging
of pediatric abdominal masses: An overview. Radiologic Clinics of North
America, 19(3):527{545, 1981.
786] Cohen MD, Bugaieski EM, Haliloglu M, Faught P, and Siddiqui AR. Vi-
sual presentation of the staging of pediatric solid tumors. RadioGraphics,
16(3):523{545, 1996.
787] Boechat MI, Ortega J, Homan AD, Cleveland RH, Kangarloo H, and
Gilsanz V. Computed tomography in Stage III neuroblastoma. American
Journal of Radiology, 145:1456{1283, December 1985.
788] Corbett R, Olli J, Fairley N, Moyes J, Husband J, Pinkerton R, Carter
R, Treleaven J, McElwain T, and Meller S. A prospective comparison be-
tween magnetic resonance imaging, meta-iodobenzylguanidine scintigraphy
and marrow histology/cytology in neuroblastoma. European Journal of Can-
cer, 27(12):1560{1564, 1991.
789] Fletcher BD, Kopiwoda SY, Strandjord SE, Nelson AD, and Pickering SP.
Abdominal neuroblastoma: Magnetic resonance imaging and tissue charac-
terization. Radiology, 155(3):699{703, 1985.
790] Sofka CM, Semelka RC, Kelekis NL, Worawattanakul S, Chung CJ, Gold
S, and Fordham LA. Magnetic resonance imaging of neuroblastoma using
current techniques. Magnetic Resonance Imaging, 17(2):193{198, 1999.
791] Chezmar JL, Robbins SM, Nelson RC, Steinberg HV, Torres WE, and
Bernardino ME. Adrenal masses: Characterization with T1-weighted MR
imaging. Radiology, 166(2):357{359, 1988.
1240 Biomedical Image Analysis
792] Kornreich L, Horev G, Kaplinsky NZ, and Grunebaum M. Neuroblas-
toma: Evaluation with contrast enhanced MR imaging. Pediatric Radiology,
21:566{569, 1991.
793] Foglia RP, Fonkalsrud EW, Feig SA, and Moss TJ. Accuracy of diagnos-
tic imaging as determined by delayed operative intervention for advanced
neuroblastoma. Journal of Pediatric Surgery, 24(7):708{711, 1989.
794] Sonka M and Fitzpatrick JM, editors. Handbook of Medical Imaging, Volume
2: Medical Image Processing and Analysis. SPIE Press, Bellingham, WA,
2000.
795] Duncan JS and Ayache N. Medical image analysis: Progress over two decades
and the challenges ahead. IEEE Transactions on Pattern Analysis and Ma-
chine Intelligence, 22(1):85{106, January 2000.
796] Wheatley JM, Roseneld NS, Heller G, Feldstein D, and LaQuaglia MP.
Validation of a technique of computer-aided tumor volume determination.
Journal of Surgical Research, 59(6):621{626, 1995.
797] Hopper KD, Singapuri K, and Finkel A. Body CT and oncologic imaging.
Radiology, 215(1):27{40, 2000.
798] Ayres FJ. Segmenta(c~ao e estima(c~ao da composi(c~ao histol$ogica da massa
tumoral em imagens de CT de neuroblastomas. Master's thesis, Universidade
de S~ao Paulo, S~ao Paulo, Brazil, August 2001.
799] Ayres FJ, Zuo MK, Rangayyan RM, Odone Filho V, and Valente M. Seg-
mentation and estimation of the histological composition of the tumor mass
in computed tomographic images of neuroblastoma. In Proceedings of the
23rd Annual International Conference of the IEEE Engineering in Medicine
and Biology Society, Istanbul, Turkey, October 2001.
800] Ayres FJ, Zuo MK, Rangayyan RM, Boag GS, Odone Filho V, and Valente
M. Estimation of the tissue composition of the tumor mass in neuroblas-
toma using segmented CT images. Medical and Biological Engineering and
Computing, 42:366{377, 2004.
801] Rao PS and Gregg EC. Attenuation of monoenergetic gamma rays in tissues.
American Journal of Roentgenology, 123(3):631{637, 1975.
802] Wilson CR. Quantitative computed tomography. In Fullerton GD and Za-
gzebski JA, editors, Medical Physics of CT and Ultrasound: Tissue Imaging
and Characterization, pages 163{175. American Association of Physicists in
Medicine, New York, NY, 1980.
803] Brooks RA. A quantitative theory of the Hounseld unit and its applica-
tion to dual energy scanning. Journal of Computer Assisted Tomography,
1(4):487{493, 1977.
804] Alter AJ. Computerized tomography: A clinical perspective. In Fullerton
GD and Zagzebski JA, editors, Medical Physics of CT and Ultrasound: Tis-
sue Imaging and Characterization, pages 125{162. American Association of
Physicists in Medicine, New York, NY, 1980.
805] Williams G, Bydder GM, and Kreel L. The validity and use of computed to-
mography attenuation values. British Medical Bulletin, 36(3):279{287, 1980.
References 1241
806] Duerinckx AJ and Macovski A. Information and artifact in computed tomog-
raphy image statistics. Medical Physics, 7(2):127{134, March-April 1980.
807] Pullan BR, Fawcitt RA, and Isherwood I. Tissue characterization by an
analysis of the distribution of attenuation values in computed tomography
scans: A preliminary report. Journal of Computer Assisted Tomography,
2(1):49{54, 1978.
808] Kramer RA, Yoshikawa BM, Scheibe PO, and Janetos GP. Statistical proles
in computed tomography. Radiology, 125:145{147, October 1977.
809] Latchaw RE, Gold LHA, Moore JS, and Payne JT. The nonspecicity of
absorption coe"cients in the dierentiation of solid tumors and cystic lesions.
Radiology, 125:141{144, October 1977.
810] Goodenough DJ. Tomographic imaging. In Beutel J, Kundel H L, and Van
Metter R L, editors, Handbook of Medical Imaging, Volume 1: Physics and
Psychophysics, chapter 8, pages 511{554. SPIE Press, Bellingham, WA, 2000.
811] Jain AK, Duin RPW, and Mao J. Statistical pattern recognition: A review.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):4{
37, January 2000.
812] Tanner MA. Tools for Statistical Inference: Methods for the Exploration
of Posterior Distributions and Likelihood Functions. Springer-Verlag, New
York, NY, 3rd edition, 1996.
813] Dawant BM and Zijdenbos AP. Image segmentation. In Sonka M and Fitz-
patrick JM, editors, Handbook of Medical Imaging, Volume 2: Medical Image
Processing and Analysis, chapter 2, pages 71{127. SPIE Press, Bellingham,
WA, 2000.
814] Copsey K and Webb A. Bayesian approach to mixture models for discrimi-
nation. In Ferri FJ, I~nesta JM, Amin A, and Pudil P, editors, Advances in
Pattern Recognition, Joint IAPR International Workshops SSPR 2000 and
SPR 2000, 8th International Workshop on Structural and Syntactic Pattern
Recognition, 3rd International Workshop on Statistical Techniques in Pat-
tern Recognition], pages 491{500, Alicante, Spain, August 30 { September 1,
2000. Springer. Lecture Notes in Computer Science, Vol. 1876.
815] Richardson S and Green PJ. On Bayesian analysis of mixtures with an
unknown number of components. Journal of the Royal Statistical Society B,
59(4):731{792, 1997.
816] Meng XL and van Dyk D. The EM algorithm { and old folk-song sung to a
fast new tune. Journal of the Royal Statistical Society, 59(3):511{567, 1997.
817] Rangayyan RM, Ferrari RJ, Desautels JEL, and Fr%ere AF. Directional analy-
sis of images with Gabor wavelets. In Proceedings of SIBGRAPI 2000: XIII
Brazilian Symposium on Computer Graphics and Image Processing, pages
170{177, Gramado, Rio Grande do Sul, Brazil, 17-20 October 2000. IEEE
Computer Society Press.
818] Goldszal AF and Pham DL. Volumetric segmentation. In Bankman IN,
editor, Handbook of Medical Imaging: Processing and Analysis, chapter 12,
pages 185{194. Academic Press, London, UK, 2000.
1242 Biomedical Image Analysis
819] Laidlaw DH, Fleischer KW, and Barr AH. Partial volume segmentation
with voxel histograms. In Bankman IN, editor, Handbook of Medical Imag-
ing: Processing and Analysis, chapter 13, pages 185{194. Academic Press,
London, UK, 2000.
820] Chen EL, Chung PC, Chen CL, Tsai HM, and Chang CI. An automatic
diagnostic system for CT liver image classication. IEEE Transactions on
Biomedical Engineering, 45(6):783{794, 1998.
821] Srinivasa N, Ramakrishnan KR, and Rajgopal K. Detection of edges from
projections. IEEE Transactions on Medical Imaging, 11(1):76{80, 1992.
822] Stark H, editor. Image Recovery: Theory and Application. Academic, Or-
lando, FL, 1987.
823] Sezan MI, editor. Selected Papers on Digital Image Restoration. SPIE, Bel-
lignham, WA, 1992.
824] Jansson PA, editor. Deconvolution of Images and Spectra. Academic, San
Diego, CA, 2nd edition, 1984.
825] Andrews HC and Hunt BR. Digital Image Restoration. Prentice Hall, En-
glewood Clis, NJ, 1977.
826] Sondhi MM. Image restoration: The removal of spatially invariant degrada-
tions. Proceedings of the IEEE, 60:842{853, 1972.
827] Sawchuk AA. Space-variant motion degradation and restoration. Proceedings
of the IEEE, 60:854{861, 1972.
828] Robbins GM and Huang TS. Inverse ltering for space-variant imaging sys-
tems. Proceedings of the IEEE, 60:862{872, 1972.
829] Sezan MI and Tekalp AM. A survey of recent developments in digital image
restoration. Optical Engineering, 29:393{404, 1990.
830] Sanz JLC and Huang TS. Unied Hilbert space approach to iterative least-
squares linear signal restoration. Journal of the Optical Society of America,
73:1455{1465, 1983.
831] McGlamery BL. Restoration of turbulence-degraded images. Journal of the
Optical Society of America, 57(3):293{297, 1967.
832] Alonso Jr. M and Barreto AB. Pre-compensation for high-order aberrations
of the human eye using on-screen image deconvolution. In Proceedings of the
25th Annual International Conference of the IEEE Engineering in Medicine
and Biology Society, pages 556{559, Canc$un, Mexico, 2003.
833] Haykin S. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ,
4th edition, 2002.
834] King MA, Doherty PW, and Schwinger RB. A Wiener lter for nuclear
medicine images. Medical Physics, 10(6):876{880, 1983.
835] Honda N, Machida K, Tsukada J, Kaizu H, and Hosoba M. Optimal prepro-
cessing Butterworth-Wiener lter for Tl-201 myocardial SPECT. European
Journal of Nuclear Medicine, 13:404{407, 1987.
836] Tikhonov AN and Arsenin VY. Solutions of Ill-posed Problems. VH Winston
and Sons, Washington, DC, 1977.
References 1243
837] Frieden BR. Restoring with maximum likelihood and maximum entropy.
Journal of the Optical Society of America, 62:511{518, 1972.
838] Gull SF and Daniell GJ. Image reconstruction from incomplete and noisy
data. Nature, 272:686{690, 1978.
839] Leahy RM and Goutis CE. An optimal technique for constraint-based image
restoration and reconstruction. IEEE Transactions on Acoustics, Speech,
and Signal Processing, 34:1629{1642, 1986.
840] Biraud Y. A new approach for increasing the resolving power by data pro-
cessing. Astronomy and Astrophysics, 1:124{127, 1969.
841] Boas Jr. RP and Kac M. Inequalities for Fourier transforms of positive
functions. Duke Mathematics Journal, 12:189{206, 1945.
842] Webb S, Long AP, Ott RJ, Leach MO, and Flower MA. Constrained de-
convolution of SPECT liver tomograms by direct digital image restoration.
Medical Physics, 12:53{58, 1985.
843] Metz CE and Beck RN. Quantitative eects of stationary linear image pro-
cessing on noise and resolution of structure in radionuclide images. Journal
of Nuclear Medicine, 15:164{170, 1974.
844] King MA, Doherty PW, Schwinger RB, Jacobs DA, Kidder RE, and Miller
TR. Fast count-dependent digital ltering of nuclear medicine images: Con-
cise communication. Journal of Nuclear Medicine, 24:1039{1045, 1983.
845] King MA, Schwinger RB, Doherty PW, and Penney BC. Two-dimensional
ltering of SPECT images using the Metz and Wiener lters. Journal of
Nuclear Medicine, 25:1234{1240, 1984.
846] King MA, Schwinger RB, Penney BC, Doherty PW, and Bianco JA. Digital
restoration of Indium-111 and Iodine-123 SPECT images with optimized
Metz lters. Journal of Nuclear Medicine, 27:1327{1336, 1986.
847] King MA, Schwinger RB, and Penney BC. Variation of the count-dependent
Metz lter with imaging system modulation transfer function. Medical
Physics, 13(2):139{149, 1986.
848] King MA, Glick SJ, Penney BC, Schwinger RB, and Doherty PW. Interactive
visual optimization of SPECT prereconstruction ltering. Journal of Nuclear
Medicine, 28:1192{1198, 1987.
849] King MA, Penney BC, and Glick SJ. An image-dependent Metz lter for
nuclear medicine images. Journal of Nuclear Medicine, 29:1980{1989, 1988.
850] Gilland DR, Tsui BMW, McCartney WH, Perry JR, and Berg J. Determina-
tion of the optimum lter function for SPECT imaging. Journal of Nuclear
Medicine, 29:643{650, 1988.
851] Boulfelfel D, Rangayyan RM, Hahn LJ, and Kloiber R. Pre-reconstruction
restoration versus post-reconstruction restoration of single photon emission
computed tomography images. In Proceedings of the 1990 IEEE Colloquium
in South America, pages 112{118. IEEE, Piscataway, NJ, September, 1990.
852] Rabie TF, Paranjape RB, and Rangayyan RM. Iterative method for blind
deconvolution. Journal of Electronic Imaging, 3(3):245{250, 1994.
1244 Biomedical Image Analysis
853] Hunt BR. Digital image processing. In Oppenheim AV, editor, Applications
of Digital Signal Processing, pages 169{237. Prentice Hall, Englewood Clis,
NJ, 1978.
854] Oppenheim AV and Lim JS. The importance of phase in signals. Proceedings
of the IEEE, 69(5):529{541, 1981.
855] Hayes MH, Lim JS, and Oppenheim AV. Signal reconstruction from phase or
magnitude. IEEE Transactions on Acoustics, Speech, and Signal Processing,
28(6):672{680, 1980.
856] Huang TS, Burnett JW, and Deczky AD. The importance of phase in im-
age processing lters. IEEE Transactions on Acoustics, Speech, and Signal
Processing, 23(6):529{542, 1975.
857] Behar J, Porat M, and Zeevi YY. Image reconstruction from localized phase.
IEEE Transactions on Acoustics, Speech, and Signal Processing, 40(4):736{
743, 1992.
858] Oppenheim AV, Hayes MH, and Lim JS. Iterative procedures for signal
reconstruction from phase. In Proceedings of SPIE Vol. 231: International
Conference on Optical Computing, pages 121{129, 1980.
859] Hayes MH. The reconstruction of a multidimensional sequence from the
phase or magnitude of its Fourier transform. IEEE Transactions on Acous-
tics, Speech, and Signal Processing, 30(2):140{154, 1982.
860] Kermisch D. Image reconstruction from phase information only. Journal of
the Optical Society of America, 60(1):15{17, 1970.
861] Espy CY and Lim JS. Eects of additive noise on signal reconstruction
from Fourier transform phase. IEEE Transactions on Acoustics, Speech, and
Signal Processing, 31(4):894{898, 1983.
862] Stockham Jr. TG, Cannon TM, and Ingebretsen RB. Blind deconvolution
through digital signal processing. Proceedings of the IEEE, 63(4):678{692,
1975.
863] Cannon M. Blind deconvolution of spatially invariant image blurs with phase.
IEEE Transactions on Acoustics, Speech, and Signal Processing, 24(1):58{63,
1976.
864] Pohlig SC, Lim JS, Oppenheim AV, Dudgeon DE, and Filip AE. New tech-
nique for blind deconvolution. In Proceedings of SPIE Vol. 207: Applications
of Digital Image Processing III, pages 119{124, 1979.
865] Lim JS. Image restoration by short space spectral subtraction. IEEE Trans-
actions on Acoustics, Speech, and Signal Processing, 28(2):191{197, 1980.
866] Ghiglia DC and Romero LA. Robust two-dimensional weighted and un-
weighted phase unwrapping that uses fast transforms and iterative methods.
Journal of the Optical Society of America A, 11(1):107{117, 1994.
867] Ching NH, Rosenfeld D, and Braun M. Two-dimensional phase unwrapping
using a minimum spanning tree algorithm. IEEE Transactions on Image
Processing, 1(3):355{365, 1992.
References 1245
868] Secilla JP, Garcia N, and Carrascosa JL. Evaluation of two-dimensional un-
wrapped phase averaging for the processing of quasi-periodical noisy images.
Signal Processing IV: Theories and Applications, pages 255{258, 1988.
869] Hedley M and Rosenfeld D. A new two-dimensional phase unwrapping algo-
rithm for MRI images. Magnetic Resonance in Medicine, 24:177{181, 1992.
870] Taxt T. Restoration of medical ultrasound images using two-dimensional ho-
momorphic deconvolution. IEEE Transactions on Ultrasonics, Ferroelectrics,
and Frequency Control, 42(4):543{554, 1995.
871] Tribolet JM. Applications of short-time homomorphic signal analysis to
seismic wavelet estimation. Geoexploration, 16:75{96, 1978.
872] Oppenheim AV, editor. Applications of Digital Signal Processing. Prentice
Hall, Englewood Clis, NJ, 1978.
873] Dudgeon DE. The computation of two-dimensional cepstra. IEEE Transac-
tions on Acoustics, Speech, and Signal Processing, 25(6):476{484, 1977.
874] Bandari E and Little JJ. Visual echo analysis. In Proceedings of the 4th Inter-
national Conference on Computer Vision, pages 202{225, Berlin, Germany,
May 1993.
875] Skoneczny S. Homomorphic 2-D ltering in computer simulation of image
degradation. AMSE Review, 14(2):31{40, 1990.
876] Lee JK, Kabrisky M, Oxley ME, Rogers SK, and Ruck DW. The com-
plex cepstrum applied to two-dimensional images. Pattern Recognition,
26(10):1579{1592, 1993.
877] Yeshurun Y and Schwartz EL. Cepstral ltering on a columnar image archi-
tecture: a fast algorithm for binocular stereo segmentation. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence, 11(7):759{767, 1989.
878] Wahl FM. Digital Image Signal Processing. Artech House, Norwood, MA,
1987.
879] Biemond J, Lagendijk RL, and Mersereau RM. Iterative methods for image
deblurring. Proceedings of the IEEE, 78(5):856{883, 1990.
880] Welch PD. The use of fast Fourier transform for the estimation of power spec-
tra: A method based on time averaging over short, modied periodograms.
IEEE Transactions on Audio and Electroacoustics, AU-15:70{73, 1967.
881] Bingham C, Godfrey MD, and Tukey JW. Modern techniques of power
spectrum estimation. IEEE Transactions on Audio and Electroacoustics,
AU-15(2):56{66, 1967.
882] Rader CM. An improved algorithm for high speed autocorrelation with
applications to spectral estimation. IEEE Transactions on Audio and Elec-
troacoustics, AU-18(4):439{441, 1970.
883] Angel ES and Jain AK. Restoration of images degraded by spatially varying
point spread functions by a conjugate gradient method. Applied Optics,
17:2186{2190, 1978.
884] Strickland RN. Transforming images into block stationary behavior. Applied
Optics, 22(10):1462{1473, May 1983.
1246 Biomedical Image Analysis
885] Trussell HJ and Hunt BR. Image restoration of space-variant blurs by sec-
tioned methods. IEEE Transactions on Acoustics, Speech, and Signal Pro-
cessing, 26(6):608{609, 1978.
886] Rajala SA and De Figueiredo RJP. Adaptive nonlinear image restoration
by a modied Kalman ltering approach. IEEE Transactions on Acoustics,
Speech, and Signal Processing, 29(5):1033{1042, 1981.
887] Kalman RE. A new approach to linear ltering and prediction problems.
Transactions of the American Society of Mechanical Engineers: Journal of
Basic Engineering, 82:35{45, 1960.
888] Kalman RE and Bucy RS. New results in linear ltering and prediction the-
ory. Transactions of the American Society of Mechanical Engineers: Journal
of Basic Engineering, 83:95{108, 1961.
889] Sage AP and Melsa JL. Estimation Theory with Applications to Communi-
cations and Control. McGraw-Hill, New York, NY, 1971.
890] Grewal MS and Andrews AP. Kalman Filtering: Theory and Practice Using
MATLAB. Wiley Interscience, New York, NY, 2nd edition, 2001.
891] Woods JW and Radewan CH. Kalman ltering in two dimensions. IEEE
Transactions on Information Theory, IT-23(4):473{482, 1977.
892] Woods JW. Correction to \Kalman ltering in two dimensions". IEEE
Transactions on Information Theory, IT-23(5):628, 1979.
893] Woods JW and Ingle VK. Kalman ltering in two dimensions: Further
results. IEEE Transactions on Acoustics, Speech, and Signal Processing,
ASSP-29(2):188{197, 1981.
894] Tekalp AM, Kaufman H, and Woods J. Edge-adaptive Kalman ltering for
image restoration with ringing suppression. IEEE Transactions on Acoustics,
Speech, and Signal Processing, 37:892{898, 1989.
895] Tekalp AM and Pavlovic G. Space-variant and color image restoration us-
ing Kalman ltering. In Proceedings of IEEE International Symposium on
Circuits and Systems, pages 1029{1031, 1989.
896] Tekalp AM, Kaufman H, and Woods J. Model-based segmentation and space-
variant restoration of blurred images by decision-directed ltering. Signal
Processing, 15:259{269, 1988.
897] Adams R. The scintillation gamma camera. In Williams LE, editor, Nuclear
Medical Physics, pages 89{154. CRC Press, Boca Raton, FL, 1987.
898] Rosenthal MS, Cullom J, Hawkins W, Moore SC, Tsui BMW, and Yester
M. Quantitative SPECT imaging: A review and recommendations by the
Focus Committee of the Society of Nuclear Medicine Computer and Instru-
mentation Council. Journal of Nuclear Medicine, 36:1489{1513, 1995.
899] Brown BH, Smallwood RH, Barber DC, Lawford PV, and Hose DR. Medical
Physics and Biomedical Engineering. Institute of Physics Publishing, Bristol,
UK, 1999.
900] Todd-Pokropek A. Image processing in nuclear medicine. IEEE Transactions
on Nuclear Science, 27:1080{1094, 1980.
References 1247
901] Pizer SM and Todd-Pokropek A. Improvement of scintigrams by computer
processing. Seminars in Nuclear Medicine, VIII(2):125{146, 1978.
902] Jaszczak RJ, Coleman RE, and Lim CB. SPECT: Single-photon emission
computed tomography. IEEE Transactions on Nuclear Science, 27:1137{
1153, 1980.
903] Todd-Pokropek AE, Zurowski S, and Soussaline F. Non-uniformity and arti-
fact creation in emission tomography. Journal of Nuclear Medicine, 21:38{45,
1980.
904] Sorenson JA and Phelps ME. Physics in Nuclear Medicine. Grune & Strat-
ton, New York, NY, 1987.
905] Jaszczak RJ, Coleman RE, and Whitehead FR. Physical factors aecting
quantitative measurements using camera-based single photon computed to-
mography (SPECT). IEEE Transactions on Nuclear Science, 28:69{80, 1981.
906] Wicks R and Blau M. Eects of spatial distortion on Anger camera eld-
uniformity correction. Journal of Nuclear Medicine, 20:252{265, 1979.
907] Muehllehner G, Colsher JG, and Stoub EW. Correction for eld nonunifor-
mity in scintillation cameras through removal of spatial distortion. Journal
of Nuclear Medicine, 21:771{779, 1980.
908] Chandra R. Introductory Physics of Nuclear Medicine. Lea and Febiger,
Philadelphia, PA, 1987.
909] Croft BY. Single-Photon Emission Computed Tomography. Year Book Med-
ical Publishers, Chicago, IL, 1986.
910] Jaszczak RJ, Greer KL, and Floyd Jr. CE. Improved SPECT quantitation
using compensation for scattered photons. Journal of Nuclear Medicine,
25:893{906, 1984.
911] Egbert SD and May RS. An integral-transport method for Compton scatter
correction in emission computed tomography. IEEE Transactions on Nuclear
Science, 27:543{551, 1980.
912] Chang T. A method for attenuation correction in radionuclide computed
tomography. IEEE Transactions on Nuclear Science, 25:638{643, 1978.
913] Axelsson B, Msaki P, and Israelsson A. Subtraction of Compton scattered
photons in single-photon emission computerized tomography. Journal of
Nuclear Medicine, 25:490{494, 1984.
914] Floyd Jr. CE, Jaszczak RJ, and Harris CC. Monte Carlo evaluation of Comp-
ton scatter compensation by deconvolution in SPECT. Journal of Nuclear
Medicine, 25:71, 1984.
915] Floyd Jr. CE, Jaszczak RJ, Greer KL, and Coleman RE. Deconvolution of
Compton scatter in SPECT. Journal of Nuclear Medicine, 26:403{408, 1985.
916] Buvat I, Benali H, Todd-Pokropek A, and Di Paola R. Scatter correction in
scintigraphy: the state of the art. European Journal of Nuclear Medicine,
21:675{694, 1994.
917] Ljungberg M, King MA, Hademenos GJ, and Strand SE. Comparison of four
scatter correction methods using Monte Carlo simulated source distributions.
Journal of Nuclear Medicine, 35:143{151, 1994.
1248 Biomedical Image Analysis
918] Rogers L and Clinthorne NH. Single photon emission computed tomography
(SPECT). In Williams LE, editor, Nuclear Medical Physics, pages 1{48.
CRC Press, Boca Raton, FL, 1987.
919] Tsui ET and Budinger TF. A stochastic lter for transverse section recon-
struction. IEEE Transactions on Nuclear Science, 26:2687{2690, 1979.
920] Kay DB and Keyes Jr. JW. First-order corrections for absorption and reso-
lution compensation in radionuclide Fourier tomography. Journal of Nuclear
Medicine, 16:540{551, 1975.
921] Sorenson JA. Quantitative measurement of radiation in vivo by whole body
counting. In Hine GJ and Sorenson JA, editors, Instrumentation in Nuclear
Medicine, pages 311{365. Academic, New York, NY, 1984.
922] Walters TE, Simon W, Chesler DA, and Correia J. Attenuation correction
in gamma emission computed tomography. Journal of Computer Assisted
Tomography, 5:89{102, 1981.
923] Budinger TF and Gullberg GT. Three-dimensional reconstruction in nuclear
medicine emission imaging. IEEE Transactions on Nuclear Science, 21:2{16,
1974.
924] Gullberg GT and Budinger TF. The use of ltering methods to compensate
for constant attenuation in single-photon emission computed tomography.
IEEE Transactions on Biomedical Engineering, 28:142{153, 1981.
925] Moore SC, Brunelle JA, and Kirsch CM. An iterative attenuation correction
for a single photon scanning multidetector tomography system. Journal of
Nuclear Medicine, 22:65{76, 1981.
926] Faber TL, Lewis MH, and Corbett JR. Attenuation correction for SPECT:
An evaluation of hybrid approaches. IEEE Transactions on Medical Imaging,
3:101{109, 1984.
927] Censor Y, Gustafson DE, Lent A, and Tuy H. A new approach to the emission
computed tomography problem: Simultaneous calculation of attenuation and
activity coe"cients. IEEE Transactions on Nuclear Science, 26:146{154,
1979.
928] King MA, Tsui BMW, Pan TS, Glick SJ, and Soares EJ. Attenuation com-
pensation for cardiac single-photon emission computed tomographic imaging:
Part 2. Attenuation compensation algorithms. Journal of Nuclear Cardiol-
ogy, 3:55{63, 1996.
929] Boardman AK. Constrained optimization and its application to scintigraphy.
Physics in Medicine and Biology, 24:363{371, 1979.
930] Madsen MT and Park CH. Enhancement of SPECT images by Fourier
ltering the projection image set. Journal of Nuclear Medicine, 26:395{402,
1985.
931] Yanch JC, Flower MA, and Webb S. A comparison of deconvolution and
windowed subtraction techniques for scatter compensation in SPECT. IEEE
Transactions on Medical Imaging, 7:13{20, 1988.
932] Miller TR and Rollins ES. A practical method of image enhancement by
interactive digital ltering. Journal of Nuclear Medicine, 26:1075{1080, 1985.
References 1249
933] Cordier S, Biraud Y, Champailler A, and Voutay M. A study of the appli-
cation of a deconvolution method to scintigraphy. Physics in Medicine and
Biology, 24:577{582, 1979.
934] Maeda J and Murata K. Digital restoration of scintigraphic images by a two-
step procedure. IEEE Transactions on Medical Imaging, 6:320{324, 1987.
935] Rangayyan RM, Boulfelfel D, Hahn LJ, and Kloiber R. Two-dimensional and
three-dimensional restoration of SPECT images. Medical and Life Sciences
Engineering (Journal of the Biomedical Engineering Society of India), 14:82{
94, 1997.
936] Penney BC, Glick SJ, and King MA. Relative importance of the error sources
in Wiener restoration of scintigrams. IEEE Transactions on Medical Imaging,
9:60{70, 1990.
937] Iwata S, Yoshida C, and Nakajima M. Correction method for collimator
eect of ECT. Systems and Computers in Japan, 17:43{50, 1986.
938] Ra U, Stroud DN, and Hendee WR. Improvement of lesion detection in
scintigraphic images by SVD techniques for resolution recovery. IEEE Trans-
actions on Medical Imaging, 5:35{44, 1986.
939] Stritzke P, King MA, Vaknine R, and Goldsmith SL. Deconvolution using
orthogonal polynomials in nuclear medicine: A method for forming quanti-
tative functional images from kinetic studies. IEEE Transactions on Medical
Imaging, 9:11{23, 1990.
940] Dor$e S, Kearney RE, and de Guise J. Quantitative assessment of CT PSF
isotropicity and isoplanicity. In Proceedings of the Canadian Medical and
Biological Engineering Conference, pages 31{32. Canadian Medical and Bi-
ological Engineering Society, Winnipeg, MB, 1990.
941] Dor$e S, Kearney RE, and de Guise J. Experimental determination of CT
point spread function. In Proceedings of the 11th Annual International Con-
ference of the IEEE Engineering in Medicine and Biology Society, pages
620{621, Seattle, WA, 1989.
942] Glick SJ, King MA, and Penney BC. Characterization of the modulation
transfer function of discrete ltered backprojection. IEEE Transactions on
Medical Imaging, 8:203{213, 1989.
943] Glick SJ, King MA, and Knesaurek K. An investigation of the 3D modu-
lation transfer function used in 3D post-reconstruction restoration ltering
of SPECT imaging. In Ortendahl DA and Lllacer J, editors, Information
Processing in Medical Imaging, pages 107{122. Wiley-Liss, New York, NY,
1991.
944] Msaki P, Axelsson B, Dahl CM, and Larsson SA. Generalized scatter correc-
tion method in SPECT using point scatter distribution functions. Journal
of Nuclear Medicine, 28:1861{1869, 1987.
945] Coleman M, King MA, Glick SJ, Knesaurek K, and Penney BC. Investiga-
tion of the stationarity of the modulation transfer function and the scatter
fraction in conjugate view SPECT restoration ltering. IEEE Transactions
on Nuclear Science, 36:969{971, 1989.
1250 Biomedical Image Analysis
946] King MA, Coleman M, Penney BC, and Glick SJ. Activity quantitation in
SPECT: A study of prereconstruction Metz ltering and use of the scatter
degradation factor. Medical Physics, 18(2):184{189, 1991.
947] Glick SJ, King MA, Knesaurek K, and Burbank K. An investigation of
the stationarity of the 3D modulation transfer function of SPECT. IEEE
Transactions on Nuclear Science, 36:973{977, 1989.
948] Boulfelfel D, Rangayyan RM, Hahn LJ, and Kloiber R. Three-dimensional
restoration of single photon emission computed tomography images using the
Kalman-Metz lter. In Computerized Tomography: Proceedings of the Fourth
International Symposium, Novosibirsk, Siberia, Russia, 10-14 August 1993,
pages 98{105. VSP BV, Utrecht, The Netherlands, 1995.
[949] Sawchuk AA. Space-variant image restoration by coordinate transformations. Journal of the Optical Society of America, 64:138–144, 1974.
[950] Pizer SM and Todd-Pokropek AE. Noise character in processed scintigrams. In Proceedings of 4th International Conference on Information Processing in Scintigraphy, pages 1–16, 1975.
[951] Harris FJ. On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE, 66:51–83, 1978.
[952] Frieden BR. Image enhancement and restoration. In Huang TS, editor, Picture Processing and Digital Filtering, volume 6, pages 88–93. Springer-Verlag, Berlin, Germany, 1978.
[953] Aubert G and Kornprobst P. Mathematical Problems in Image Processing. Springer, New York, NY, 2002.
[954] Optical Society of America. Digest of Topical Meeting on Signal Recovery and Synthesis with Incomplete Information and Partial Constraints. Optical Society of America, Incline Village, NV, 1983.
[955] Optical Society of America. Signal Recovery: Journal of the Optical Society of America, Volume 73, Number 11. Optical Society of America, New York, NY, November 1983.
[956] SPIE International Symposium on Medical Imaging: PACS and Imaging Informatics. http://www.spie.org/conferences/calls/05/mi/, accessed June 2004.
[957] MathWorld, www.mathworld.wolfram.com/GrayCode.html. Gray Code, accessed February 2004.
[958] American Standard Code for Information Interchange, www.asciitable.com/ascii2.html. ASCII Table and Description, accessed February 2004.
[959] Huffman DA. A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40(10):1098–1101, 1952.
[960] Langdon Jr. GG. An introduction to arithmetic coding. IBM Journal of Research and Development, 28(2):135–149, 1984.
[961] Rissanen J and Langdon Jr. GG. Arithmetic coding. IBM Journal of Research and Development, 23(2):149–162, 1979.
[962] Langdon Jr. GG and Rissanen J. Compression of black-white images with arithmetic coding. IEEE Transactions on Communications, COM-29:858–867, 1981.
[963] Witten IH, Neal RM, and Cleary JG. Arithmetic coding for data compression. Communications of the ACM, 30(6):520–540, 1987.
[964] Pennebaker WB, Mitchell JL, Langdon Jr. GG, and Arps RB. An overview of the basic principles of the Q-coder adaptive binary arithmetic coder. IBM Journal of Research and Development, 32:717–726, 1988.
[965] Rabbani M and Jones PW. Image compression techniques for medical diagnostic imaging systems. Journal of Digital Imaging, 4(2):65–78, 1991.
[966] Ziv J and Lempel A. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, IT-23(3):337–343, 1977.
[967] Welch TA. A technique for high-performance data compression. IEEE Computer, pages 8–19, June 1984.
[968] Lo SC, Krasner B, and Mun SK. Noise impact on error-free image compression. IEEE Transactions on Medical Imaging, 9(2):202–206, 1990.
[969] Dainty JC. Image Science. Academic, London, UK, 1974.
[970] Ahmed N, Natarajan T, and Rao KR. Discrete cosine transform. IEEE Transactions on Computers, C-23:90–93, 1974.
[971] Ahmed N and Rao KR. Orthogonal Transforms for Digital Signal Processing. Springer-Verlag, New York, NY, 1975.
[972] Kuduvalli GR and Rangayyan RM. Error-free transform coding by maximum-error-limited quantization of transform coefficients. In Proceedings of SPIE on Visual Communications and Image Processing, volume 1818, pages 1458–1461. SPIE, 1992.
[973] Berger T. Rate Distortion Theory. Prentice Hall, Englewood Cliffs, NJ, 1971.
[974] Helstrom CW. Probability and Stochastic Processes for Engineers. Macmillan, New York, NY, 1991.
[975] Reininger RC and Gibson JD. Distribution of the two-dimensional DCT coefficients for images. IEEE Transactions on Communications, COM-31:835–839, 1983.
[976] Cox JR, Moore SM, Blaine GJ, Zimmerman JB, and Wallace GK. Optimization of trade-offs in error-free image transmission. In Proceedings of SPIE Vol. 1091. Medical Imaging III: Image Capture and Display, pages 19–30, 1989.
[977] Wang L and Goldberg M. Progressive image transmission by transform coefficient residual error quantization. IEEE Transactions on Communications, 36:75–76, 1988.
[978] Roos P, Viergever MA, VanDijke MCA, and Peters JH. Reversible intraframe compression of medical images. IEEE Transactions on Medical Imaging, 7:328–336, December 1988.
[979] Strobach P. Linear Prediction Theory: A Mathematical Basis of Adaptive Systems. Springer-Verlag, New York, NY, 1990.
[980] Maragos PA, Schafer RW, and Mersereau RM. Two-dimensional linear prediction and its application to adaptive predictive coding of images. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32:1213–1229, 1984.
[981] Wax M and Kailath T. Efficient inversion of Toeplitz-block Toeplitz matrix. IEEE Transactions on Acoustics, Speech, and Signal Processing, 31:1218–1221, 1983.
[982] Kalouptsidis N, Carayannis G, and Manolakis D. Fast algorithms for block Toeplitz matrices with Toeplitz entries. Signal Processing, 6:77–81, 1984.
[983] Therrien CW and El-Shaer HT. A direct algorithm for computing 2-D AR spectrum estimates. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37:1795–1797, 1989.
[984] Therrien CW and El-Shaer HT. Multichannel 2-D AR spectrum estimation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37:1798–1800, 1989.
[985] Therrien CW. Relations between 2-D and multichannel linear prediction. IEEE Transactions on Acoustics, Speech, and Signal Processing, 29:454–457, 1981.
[986] Strand ON. Multichannel complex maximum entropy (autoregressive) spectral analysis. IEEE Transactions on Automatic Control, 22:634–640, 1977.
[987] Levinson N. The Wiener rms (root mean square) error criterion in filter design and prediction. Journal of Mathematical Physics, 25:261–278, 1947.
[988] Wiggins RA and Robinson EA. Recursive solution to the multichannel filtering problem. Journal of Geophysical Research, 70:1885–1891, 1965.
[989] Graham A. Kronecker Products and Matrix Calculus with Applications. Wiley, Chichester, UK, 1981.
[990] Burg JP. A new analysis technique for time series data. NATO Advanced Study Institute on Signal Processing, Enschede, The Netherlands, 1968.
[991] Cioffi JM and Kailath T. Fast recursive-least-squares transversal filters for adaptive filtering. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32:304–337, 1984.
[992] Boutalis YS, Kollias SD, and Carayannis G. A fast multichannel approach to adaptive image estimation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37:1090–1098, 1989.
[993] Scott KE. Adaptive Equalization and Antenna Diversity in Digital Radio. PhD thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, May 1991.
[994] Ohki M and Hashiguchi S. Two-dimensional LMS adaptive filters. IEEE Transactions on Consumer Electronics, 37(1):66–72, 1991.
[995] Alexander ST and Rajala SA. Image compression results using the LMS adaptive algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33:712–714, 1985.
[996] Aiazzi B, Alparone L, and Baronti S. Trends in lossless image compression by adaptive prediction. Electronic Imaging: SPIE's International Technical Group Newsletter, 13(1):5,9, January 2003.
[997] Peano G. Sur une courbe, qui remplit toute une aire plane. Mathematische Annalen, 36:157–160, 1890.
[998] Hilbert D. Über die stetige Abbildung einer Linie auf ein Flächenstück. Mathematische Annalen, 38:459–460, 1891.
[999] Provine JA. Peano scanning for image compression. Master's thesis, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada, December 1992.
[1000] Provine JA and Rangayyan RM. Lossless compression of Peano scanned images. Journal of Electronic Imaging, 3(2):176–181, 1994.
[1001] Zhang YQ, Loew MH, and Pickholtz RL. A methodology for modeling the distributions of medical images and their stochastic properties. IEEE Transactions on Medical Imaging, 9(4):376–382, 1990.
[1002] Zhang YQ, Loew MH, and Pickholtz RL. A combined-transform coding (CTC) scheme for medical images. IEEE Transactions on Medical Imaging, 11(2):196–202, 1992.
[1003] Lempel A and Ziv J. Compression of two-dimensional data. IEEE Transactions on Information Theory, IT-32(1):2–8, 1986.
[1004] Bially T. Space-filling curves: Their generation and their application to bandwidth reduction. IEEE Transactions on Information Theory, IT-15(6):658–664, 1969.
[1005] Moore EH. On certain crinkly curves. Transactions of the American Mathematical Society, 1:72–90, 1900.
[1006] Witten IH and Neal RM. Using Peano curves for bilevel display of continuous-tone images. IEEE Computer Graphics and Applications, pages 47–52, May 1982.
[1007] Witten IH and Wyvill B. On the generation and use of space-filling curves. Software – Practice and Experience, 13:519–525, 1983.
[1008] International Telegraph and Telephone Consultative Committee (CCITT). Progressive bi-level image compression. Recommendation T.82, 1993.
[1009] International Organization for Standardization/International Electrotechnical Commission (ISO/IEC). Progressive bi-level image compression. International Standard 11544, 1993.
[1010] Hampel H, Arps RB, Chamzas C, Dellert D, Duttweiler DL, Endoh T, Equitz W, Ono F, Pasco R, Sebestyen I, Starkey CJ, Urban SJ, Yamazaki Y, and Yoshida T. Technical features of the JBIG standard for progressive bi-level image compression. Signal Processing: Image Communication, 4:103–111, 1992.
[1011] JPEG and JBIG website. http://www.jpeg.org/, accessed May 2004.
[1012] Wallace GK. The JPEG still picture compression standard. Communications of the ACM, 34(4):30–44, 1991.
[1013] Sikora T. MPEG video webpage. http://www.bs.hhi.de/mpeg-video/, accessed May 2004.
[1014] Chiariglione L. MPEG and multimedia communications. IEEE Transactions on Circuits and Systems for Video Technology, 7(1):5–18, 1997.
[1015] National Electrical Manufacturers Association, Rosslyn, VA. Digital Imaging and Communications in Medicine (DICOM) PS 3.1-2003, available at http://medical.nema.org/, accessed May 2004.
[1016] American College of Radiology and National Electrical Manufacturers Association, Washington, DC. ACR/NEMA Standards Publication PS 2: Data Compression Standard, 1989.
[1017] American College of Radiology and National Electrical Manufacturers Association, Washington, DC. ACR/NEMA Standards Publication No. 300: Digital Imaging and Communications, 1989.
[1018] Wang Y, Best DE, Hoffman JG, Horii SC, Lehr JL, Lodwick GS, Morse RR, Murphy LL, Nelson OL, Perry J, Thompson BG, and Zielonka JS. ACR-NEMA digital imaging and communications standards: minimum requirements. Radiology, 166:529–532, February 1988.
[1019] Rao KR and Huh Y. JPEG 2000. In Proceedings of IEEE Region 8 International Symposium on Video/Image Processing and Multimedia Communications, pages 1–6, Zadar, Croatia, June 2002.
[1020] Sung MM, Kim HJ, Kim EK, Kwak JY, Yoo JK, and Yoo HS. Clinical evaluation of JPEG2000 compression algorithm for digital mammography. IEEE Transactions on Nuclear Science, 49(3):827–832, 2002.
[1021] Fossbakk E, Manzanares P, Yago JL, and Perkis A. An MPEG-21 framework for streaming media. In Proceedings of IEEE 4th Workshop on Multimedia Signal Processing, pages 147–152, October 2001.
[1022] Vaisey J and Gersho A. Image compression with variable block size segmentation. IEEE Transactions on Signal Processing, 40(8):2040–2060, August 1992.
[1023] Leou FC and Chen YC. A contour-based image coding technique with its texture information reconstructed by polyline representation. Signal Processing, 25:81–89, 1991.
[1024] Lopes RD and Rangayyan RM. Lossless volumetric data compression via decomposition based upon region growing. In Proceedings SPIE 3658: Medical Imaging 1999 – Physics of Medical Imaging, pages 427–435, San Diego, CA, February 1999.
[1025] The bzip2 and libbzip2 official home page. http://sources.redhat.com/bzip2/, accessed May 2004.
[1026] Acha B, Serrano C, Rangayyan RM, and Roa LM. Lossless compression algorithm for colour images. Electronics Letters, 35(3):214–215, 4 February 1999.
[1027] Serrano C, Acha B, Rangayyan RM, and Roa LM. Segmentation-based lossless compression of burn wound images. Journal of Electronic Imaging, 10(3):720–726, 2001.
[1028] Shen L and Rangayyan RM. Lossless compression of continuous-tone images by combined inter-bit-plane decorrelation and JBIG coding. Journal of Electronic Imaging, 6(2):198–207, 1997.
[1029] Arps RB and Truong TK. Comparison of international standards for lossless still image compression. Proceedings of the IEEE, 82(6):889–899, 1994.
[1030] Wang Y. A set of transformations for lossless image compression. IEEE Transactions on Image Processing, 4(5):677–679, 1995.
[1031] Rabbani M and Melnychuck PW. Conditioning contexts for the arithmetic coding of bit planes. IEEE Transactions on Signal Processing, 40(1):232–236, 1992.
[1032] RICOH California Research Center. CREW lossless/lossy image compression – Contribution to ISO/IEC JTC 1.29.12, June 1995.
[1033] Zandi A, Allen JD, Schwartz EL, and Boliek M. CREW: Compression with reversible embedded wavelets. In Proceedings of IEEE Data Compression Conference, pages 212–221, Snowbird, UT, March 1995.
[1034] Rabbani M and Jones PW. Digital Image Compression Techniques. SPIE Optical Engineering Press, Bellingham, WA, 1991.
[1035] Basharin GP. On a statistical estimate for the entropy of a sequence of independent random variables. Theory of Probability and its Applications, 4:333–336, 1959.
[1036] Andrus WS and Bird KT. Teleradiology: evolution through bias to reality. Chest, 62:655–657, 1972.
[1037] Carey LS. Teleradiology: part of a comprehensive telehealth system. Radiologic Clinics of North America, 23(2):357–362, 1985.
[1038] House M. Use of telecommunications to meet health needs of rural, remote, and isolated communities. In Rangayyan RM, editor, Telecommunication for Health Care: Telemetry, Teleradiology, and Telemedicine, Proceedings of SPIE Vol. 1355, pages 2–9, Bellingham, WA, 1990. SPIE.
[1039] Brown JHU. Telecommunication for Health Care. CRC Press, Boca Raton, FL, 1985.
[1040] Rangayyan RM, editor. Telecommunication for Health Care: Telemetry, Teleradiology, and Telemedicine, Proceedings of SPIE Vol. 1355. SPIE, Bellingham, WA, 1990.
[1041] Kuduvalli GR, Rangayyan RM, and Desautels JEL. High-resolution digital teleradiology: A perspective. Journal of Digital Imaging, 4(4):251–261, 1991.
[1042] Allman R. Potential contribution of teleradiology to the management of military radiologist resources: A technical report in `Military Medicine'. Radiological Society of North America (RSNA), Chicago, IL, December 1983.
[1043] Goeringer F, Mun SK, and Kerlin BD. Digital medical imaging: implementation strategy for the defense medical establishment. In Proceedings of SPIE Vol. 1093. Medical Imaging III: PACS System Design and Evaluation, pages 429–437, 1989.
[1044] Drew PG. Market Study for a High-performance Teleradiology (in Alberta, Canada). Drew Consultants, Carlisle, MA, 1989.
[1045] Webber MM and Corbus HF. Image communications by telephone. Journal of Nuclear Medicine, 13:379–381, 1972.
[1046] Jelasco DV, Southworth G, and Purcell LH. Telephone transmission of radiographic images. Radiology, 127:147–149, 1978.
[1047] Barnes GT, Johnson GA, and Staab EV. Teleradiology: Fundamental considerations and clinical applications. Unpublished lecture notes, 1989.
[1048] Arenson RL, Sheshadri SB, Kundel HA, DeSimone D, Van der Voorde F, Gefter WB, Epstein DM, Miller WT, Aronchick JM, Simson MB, Lanken PN, Khalsa S, Brikman I, Davey M, and Brisbon N. Clinical evaluation of a medical image management system for chest images. American Journal of Roentgenology, 150:55–59, January 1988.
[1049] Carey LS, O'Connor BD, Bach DB, Hobbs BB, Hutton LC, Lefcoe MS, Lyons RO, Munro TG, Paterson RG, Rankin RN, and Rutt BK. Digital teleradiology: Seaforth–London network. Journal of Canadian Association of Radiology, 40:71–74, 1989.
[1050] Gershon-Cohen J and Cooley AG. Telegnosis. Radiology, 55:582–587, 1950.
[1051] Jutras A. Teleroentgen diagnosis by means of videotape recording (Editorial). American Journal of Roentgenology, 82:1099–1102, 1959.
[1052] Andrus WS, Dreyfuss JR, Jaffer F, and Bird KT. Interpretation of roentgenograms via interactive television. Radiology, 116:25–31, 1975.
[1053] Murphy RL, Barber D, Broadhurst A, and Bird KT. Microwave transmission of chest roentgenograms. American Review of Respiratory Disease, 102:771–777, 1972.
[1054] Webber MM, Wilk S, Pirrucello R, and Aiken J. Telecommunication of images in the practice of diagnostic radiology. Radiology, 109:71–74, 1973.
[1055] Lester RG, O'Foghludha F, Porter F, Friedman DS, and Pedolsky HR. Transmission of radiologic information by satellite. Radiology, 109:731–732, 1973.
[1056] Carey LS, Russell ES, Johnson EE, and Wilkins WW. Radiologic consultation to a remote Canadian hospital using Hermes spacecraft. Journal of Canadian Association of Radiology, 30:12–20, 1979.
[1057] James JJ, Grabowski W, and Mangelsdorff AD. The transmission and interpretation of emergency department radiographs. Annals of Emergency Medicine, 11:404–408, 1982.
[1058] Steckel RJ. Daily x-ray rounds in a large teaching hospital using high-resolution closed-circuit television. Radiology, 116:25–31, 1975.
[1059] Page G, Gregoire A, Galand C, Sylvestre J, Chahlaoui J, Fauteux P, Dussault R, Seguin R, and Roberge F. Teleradiology in Northern Québec. Radiology, 140:361–366, 1981.
[1060] Kretz F and Nasse D. Digital television: transmission and coding. Proceedings of the IEEE, 73:575–591, 1985.
[1061] Kuni CC. Introduction to Computers and Digital Processing in Medical Imaging. Year Book Medical Publishers, Chicago, IL, 1988.
[1062] Nudelman S. Historical perspectives on photoelectronic digital radiology. In James AE, Anderson JH, and Higgins CB, editors, Digital Image Processing in Radiology, pages 1–27. Williams & Wilkins, Baltimore, MD, 1985.
[1063] Gayler BW, Gitlin JN, Rappaport W, Skinner FL, and Cerva J. Teleradiology: An evaluation of a microcomputer-based system. Radiology, 140:355–360, 1981.
[1064] Kagetsu NJ, Zulauf DRP, and Ablow RC. Clinical trial of digital teleradiology in the practice of emergency room radiology. Radiology, 165:551–554, 1987.
[1065] Gitlin JN. Teleradiology. Radiologic Clinics of North America, 24:55–68, 1986.
[1066] Curtis DJ, Gayler BW, Gitlin JN, and Harrington MB. Teleradiology: results of a field trial. Radiology, 149:415–418, 1983.
[1067] Rasmussen W, Stevens I, Gerber FH, and Kuhlman JA. Teleradiology via the naval Remote Medical Diagnosis System (RMDS). In Proceedings of SPIE Vol. 318 (Part I). Picture Archiving and Communication Systems (PACS) for Medical Applications, pages 174–181, 1982.
[1068] Skinner FL, Cerva J, Kerlin B, and Millstone T. The teleradiology field demonstration. In Proceedings of SPIE Vol. 318 (Part I). Picture Archiving and Communication Systems (PACS) for Medical Applications, pages 168–173, 1982.
[1069] Gordon R, Rangayyan RM, Wardrop DH, and Beeman TM. Improving image quality in teleradiology and tele-computed tomography. In Proceedings of the IEEE Systems, Man, and Cybernetics Society Conference, pages 908–913, Bombay and New Delhi, India, December/January 1983/84.
[1070] Rangayyan RM and Gordon R. Computed tomography from ordinary radiographs for teleradiology. Medical Physics, 10:687–690, 1983.
[1071] Rangaraj MR and Gordon R. Computed tomography for remote areas via teleradiology. In Proceedings of SPIE Vol. 318 on Picture Archival and Communication Systems for Medical Applications, pages 182–185, Newport Beach, CA, January 1982.
[1072] DiSantis DJ, Cramer MS, and Scatarige JC. Excretory urography in the emergency department: utility of teleradiology. Radiology, 164:363–364, 1987.
[1073] Cox GG, Cook LT, McMillan JH, Rosenthal SJ, and Dwyer III SJ. High-resolution 2560 × 2048 × 12-bit digital displays for chest radiography – A comparison with conventional film and digital hardcopy. University of Kansas Medical Center, Kansas City, KS, 1988.
[1074] Batnitzky S, Rosenthal SJ, Siegel EL, Wetzel LH, Murphey MD, Cox GG, McMillan JH, Templeton AW, and Dwyer III SJ. Teleradiology: an assessment. Radiology, 177:11–17, 1990.
[1075] Gillespy T, Staab EV, Staab EW, and Lawrence E. Electronic imaging in a teaching hospital intensive care unit: evaluation of the clinical review system. Journal of Digital Imaging, 3:124–128, 1990.
[1076] Lo SC, Gaskill JW, Mun SK, and Krasner BH. Contrast information of digital imaging in laser film digitizer and display monitor. Journal of Digital Imaging, 3:119–123, May 1990.
[1077] Yip K, Lubinsky AR, Whiting BR, Muka E, and Cocher TE. Performance analysis of medical x-ray film digitizers. In Proceedings of SPIE Vol. 1231. Medical Imaging IV: Image Formation, pages 508–525, 1990.
[1078] Slasky BS, Gur D, Costa-Greco MA, and Harris KM. Receiver operating characteristic analysis of chest image interpretation with conventional, laser-printed, and high-resolution workstation images. Radiology, 174:775–780, 1990.
[1079] Proakis JG. Digital Communications. McGraw-Hill, New York, NY, 1989.
[1080] Telesat Canada, Gloucester, ON, Canada. Satellite Delay and Response Times (Product Literature), 1990.
[1081] Kim Y and Horii SC, editors. Handbook of Medical Imaging, Volume 3: Display and PACS. SPIE Press, Bellingham, WA, 2000.
[1082] Society for Computer Applications in Radiology (SCAR), Harrisburg, PA. Understanding Teleradiology, 1994.
[1083] Society for Computer Applications in Radiology (SCAR), Harrisburg, PA. Understanding PACS: Picture Archiving and Communications Systems, 1994.
[1084] Ackerman LV, Mucciardi AN, Gose EE, and Alcorn FS. Classification of benign and malignant breast tumours on the basis of 36 radiographic properties. Cancer, 31:342–352, 1973.
[1085] Alto H, Rangayyan RM, Paranjape RB, Desautels JEL, and Bryant H. An indexed atlas of digital mammograms for computer-aided diagnosis of breast cancer. Annales des Télécommunications, 58(5-6):820–835, 2003.
[1086] Tou JT and Gonzalez RC. Pattern Recognition Principles. Addison-Wesley, Reading, MA, 1974.
[1087] Fukunaga K. Introduction to Statistical Pattern Recognition. Academic, San Diego, CA, 2nd edition, 1990.
[1088] Johnson RA and Wichern DW. Applied Multivariate Statistical Analysis. Prentice Hall, Englewood Cliffs, NJ, 3rd edition, 1992.
[1089] Schürmann J. Pattern Classification – A Unified View of Statistical and Neural Approaches. Wiley, New York, NY, 1996.
[1090] Micheli-Tzanakou E. Supervised and Unsupervised Pattern Recognition. CRC Press, Boca Raton, FL, 2000.
[1091] Neter J, Kutner MH, Nachtsheim CJ, and Wasserman W. Applied Linear Statistical Models. Irwin, Chicago, IL, 4th edition, 1990.
[1092] SPSS Inc., Chicago, IL. SPSS Advanced Statistics User's Guide, 1990.
[1093] SPSS Inc., Chicago, IL. SPSS Base System User's Guide, 1990.
[1094] Pao YH. Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, Reading, MA, 1989.
[1095] Lippmann RP. An introduction to computing with neural nets. IEEE Signal Processing Magazine, pages 4–22, April 1987.
[1096] Nigrin A. Neural Networks for Pattern Recognition. MIT, Cambridge, MA, 1993.
[1097] Devijver PA and Kittler J. Pattern Recognition: A Statistical Approach. Prentice Hall, London, UK, 1982.
[1098] Shen L, Rangayyan RM, and Desautels JEL. A knowledge-based position matching technique for mammographic calcifications. In Proceedings of the 14th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 1936–1937, Paris, France, October 1992.
[1099] Metz CE. Basic principles of ROC analysis. Seminars in Nuclear Medicine, VIII(4):283–298, 1978.
[1100] Metz CE. ROC methodology in radiologic imaging. Investigative Radiology, 21:720–733, 1986.
[1101] Swets JA and Pickett RM. Evaluation of Diagnostic Systems: Methods from Signal Detection Theory. Academic, New York, NY, 1982.
[1102] Dorfman DD and Alf E. Maximum likelihood estimation of parameters of signal detection theory and determination of confidence intervals – rating method data. Journal of Mathematical Psychology, 6:487–496, 1969.
[1103] Fleiss JL. Statistical Methods for Rates and Proportions. Wiley, New York, NY, 2nd edition, 1981.
[1104] Zar JH. Biostatistical Analysis. Prentice Hall, Englewood Cliffs, NJ, 2nd edition, 1984.
[1105] Fukunaga K and Hayes RR. Effects of sample size in classifier design. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(8):873–885, 1989.
[1106] Raudys SJ and Jain AK. Small sample size effects in statistical pattern recognition: Recommendations for practitioners. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(3):252–264, 1991.
[1107] Sahiner BS, Chan HP, Petrick N, Wagner RF, and Hadjiiski L. Feature selection and classifier performance in computer-aided diagnosis: The effect of finite sample size. Medical Physics, 27(7):1509–1522, 2000.
[1108] Swain PH. Fundamentals of pattern recognition in remote sensing. In Swain PH and Davis SM, editors, Remote Sensing: The Quantitative Approach, pages 136–187. McGraw-Hill, New York, NY, 1978.
[1109] Nab HW, Karssemeijer N, van Erning LJTHO, and Hendriks JHCL. Comparison of digital and conventional mammography: A ROC study of 270 mammograms. Medical Informatics, 17:125–131, 1992.
[1110] Nishikawa RM, Giger ML, Doi K, Metz CE, Yin FF, Vyborny CJ, and Schmidt RA. Effect of case selection on the performance of computer-aided detection schemes. Medical Physics, 21:265–269, 1994.
[1111] Altman DG. Practical Statistics for Medical Research. Chapman and Hall, London, UK, 1991.
[1112] Morrow WM and Rangayyan RM. Implementation of adaptive neighborhood image processing algorithm on a parallel supercomputer. In Pelletier M, editor, Proceedings of the Fourth Canadian Supercomputing Symposium, pages 329–334, Montreal, PQ, Canada, 1990.
[1113] Paranjape RB, Rolston WA, and Rangayyan RM. An examination of three high performance computing systems for image processing operations. In Proceedings of the Supercomputing Symposium, pages 208–218, Montreal, PQ, Canada, 1992.
[1114] Alto H, Gavrilov D, and Rangayyan RM. Parallel implementation of the adaptive neighborhood contrast enhancement algorithm. In Proceedings of SPIE on Parallel and Distributed Methods for Image Processing, volume 3817, pages 88–97, 1999.
[1115] Pohlman SK, Powell KA, Obuchowski N, Chilcote W, and Grundfest-Broniatowski S. Classification of breast lesions based on quantitative measures of tumour morphology. In IEEE Engineering in Medicine and Biology Society 17th Annual International Conference, Montreal, Canada, page 2.4.2.3, 1995.
[1116] Kilday J, Palmieri F, and Fox MD. Classifying mammographic lesions using computerized image analysis. IEEE Transactions on Medical Imaging, 12(4):664–669, 1993.
[1117] Bruce LM and Kallergi M. Effects of image resolution and segmentation method on automated mammographic mass shape classification. In Proceedings of SPIE Vol. 3661 on Medical Imaging 1999: Image Processing, pages 940–947, San Diego, CA, February 1999.
[1118] Bruce LM and Adhami RR. Classifying mammographic mass shapes using the wavelet transform modulus-maxima method. IEEE Transactions on Medical Imaging, 18(12):1170–1177, 1999.
[1119] Tarassenko L, Hayton P, Cerneaz NJ, and Brady M. Novelty detection for the identification of masses in mammograms. In Proceedings of the 4th International Conference on Artificial Neural Networks, pages 442–447, Cambridge, UK, 26-28 June 1995.
[1120] Alto H, Rangayyan RM, and Desautels JEL. An indexed atlas of digital mammograms. In Proceedings of the 6th International Workshop on Digital Mammography, pages 309–311, Bremen, Germany, June 2002.
[1121] Alto H, Rangayyan RM, Solaiman B, Desautels JEL, and MacGregor JH. Image processing, radiological and clinical information fusion in breast cancer detection. In Proceedings of SPIE on Sensor Fusion: Architectures, Algorithms, and Applications VI, volume 4731, pages 134–144, 2002.
[1122] Yoshitaka A and Ichikawa T. A survey on content-based retrieval for multimedia databases. IEEE Transactions on Knowledge and Data Engineering, 11(1):81–93, 1999.
[1123] Gudivada VN and Raghavan VV. Content-based image retrieval systems. IEEE Computer, 28(9):18–22, 1995.
[1124] Mehrotra R and Gray JE. Similar-shape retrieval in shape data management. IEEE Computer, 28(9):57–62, 1995.
[1125] Trimeche M, Cheikh FA, and Gabbouj M. Similarity retrieval of occluded shapes using wavelet-based shape features. In Proceedings of SPIE 4210: International Symposium on Internet Multimedia Management Systems, pages 281–289, 2000.
[1126] Safar M, Shahabi C, and Sun X. Image retrieval by shape: a comparative study. In IEEE International Conference on Multimedia and Expo (ICME), volume 1, pages 141–144, 2000.
[1127] Iivarinen J and Visa A. Shape recognition of irregular objects. In Proceedings of SPIE 2904: Intelligent Robots and Computer Vision XV, pages 25–32, 1996.
[1128] Heesch D and Rüger S. Combining features for content-based sketch retrieval – a comparative evaluation of retrieval performance. In Crestani F, Girolami M, and van Rijsbergen CJ, editors, 24th BCS-IRSG European Colloquium on IR Research, volume LNCS 2291, pages 41–52, 2002.
[1129] Flickner M, Sawhney H, Niblack W, Ashley J, Huang Q, Dom B, Gorkani M, Hafner J, Lee D, Petkovic D, Steele D, and Yanker P. Query by image and video content: the QBIC system. IEEE Computer, 28(9):23–32, 1995.
[1130] Srihari RK. Automatic indexing and content-based retrieval of captioned images. IEEE Computer, 28(9):49–56, 1995.
[1131] Smith JR. Image retrieval evaluation. In IEEE Workshop on Content-based Access of Image and Video Libraries, pages 112–113, 1998.
[1132] Iivarinen J and Visa A. An adaptive texture and shape based defect classification. In Proceedings of the International Conference on Pattern Recognition, pages 117–123, 1998.
[1133] Cauvin JM, Le Guillou C, Solaiman B, Robaszkiewicz M, Le Beux P, and Roux C. Computer-assisted diagnosis system in digestive endoscopy. IEEE Transactions on Information Technology in Biomedicine, 7(4):256–262, 2003.
[1134] Moghaddam B, Tian Q, and Huang TS. Spatial visualization for content-based image retrieval. In International Conference on Multimedia and Expo (ICME '01), 2001.
[1135] Zheng L, Wetzel AW, Gilbertson J, and Becich MJ. Design and analysis of a content-based pathology image retrieval system. IEEE Transactions on Information Technology in Biomedicine, 7(4):249–255, 2003.
[1136] Wooldridge M. Agent-based software engineering. IEE Proceedings on Software Engineering, 144:26–37, 1997.
[1137] Paranjape RB and Smith KD. Mobile software agents for Web-based medical image retrieval. Journal of Telemedicine and Telecare, 6(2):53–55, 2000.
[1138] Wooldridge M and Jennings NR. Intelligent agents: theory and practice. Knowledge Engineering Review, 10(2):115–152, 1995.
[1139] Huhns MN and Singh MP. Agents on the Web. IEEE Internet Computing, pages 80–82, May/June 1997.
[1140] Ohsuga A, Nagai Y, Irie Y, Hattori M, and Honiden S. PLANGENT: An approach to making mobile agents intelligent. IEEE Internet Computing, pages 50–57, July/August 1997.
Index

abdomen, imaging of, 35 attenuation coe"cient, 15


ACR/ NEMA standard, 1050 attenuation correction, 923
active contour model, 444, 456 audication, 625
acuity, 139 auditory display of images, 625
acutance, 139, 1093, 1096, 1166, 1167, autocorrelation, 155, 205, 1006
1178 averaging
adaptive lters, 228 geometric, 926
adaptive-neighborhood of confocal microscopy images, 270
contrast enhancement, 338 of nuclear medicine images, 926
lter, 241 synchronized, 171, 271, 277
histogram equalization, 311
LLMMSE lter, 251
mean lter, 242 back-propagation algorithm, 1127
median lter, 242 backprojection, 801
noise subtraction, 244 convolution, 811
region growing, 242, 249 ltered, 804, 811
restoration, 894 backward prediction, 1014
additive noise, 115, 131, 170 bandpass lter, 646, 648, 660
agreement, 602 Bayes classier, 1116, 1118, 1125
algebraic reconstruction techniques, 813 Bayes rule, 840, 1116
additive, 820 Beer's law, 15
constrained, 821 Bessel function, 105, 197
multiplicative, 821 Bezier curves, 444
alpha-trimmed mean filter, 181 binarization, 291, 364, 456, 649, 690
Anger camera, see gamma camera
angular moments, 642 blind deblurring, 875, 877
anisotropy, 726 block-by-block processing, 167, 311,
architectural distortion, 775 313, 893, 998, 1001, 1008,
arithmetic coding, 969 1010
artifact, 151 blood vessels
block, 313 imaging of, 286, 684
edge, 322, 893 in ligaments, 679
grid, 21, 199 in the brain, 286
motion, 287 blur, 16, 90
other, 166 bone, imaging of, 831
periodic, 199 box-counting method, 610
physiological, 40, 62, 165 brain, imaging of, 32, 49, 940
ringing, 197, 316, 326 breast boundary
association, 602 detection of, 451, 720
asymmetry, 596, 704, 742 breast cancer, 3, 6, 22
pattern classification, 429, 434, transform, 984
1090, 1097, 1109, 1120, 1162, coherence, 726, 729, 734
1166, 1167 collagen fibers, 10, 14, 105, 110, 390,
screening, 27, 1143 442, 679
breast masses, see also tumors, 1092, collimator, 39
1093, 1098, 1099 color
content-based retrieval, 1166, 1171 pseudo-, 827
detection of, 699 comb filter, 890
pattern classification, 1171 compactness, 578, 1160, 1162
shape analysis, 578, 1097, 1162, complexity, 543, 551, 555, 558, 560,
1166 575, 578, 601, 608, 610
texture analysis, 627, 1166 compression, 955
breast, imaging of, 3, 25 lossy versus lossless, 957
Burg algorithm, 1019, 1021 segmentation based, 1051
Butterworth filter, 197, 325, 648, 807, Compton scattering, 920
811 computed radiography, 17, 287
computed tomography, 29
CAD, 53, 55 display of images, 825
calcifications with diffracting sources, 825
detection of, 410, 414 computer-aided diagnosis, see CAD
enhancement of, 344 concavity, 537, 569, 578, 691, 1096,
shape analysis, 575 1160, 1162
caliper method, 608 confocal microscopy, 270
Canny's method, 390 constrained least-squares restoration,
causality, 92 872
central limit theorem, 160 content-based image retrieval, 1169
centroid, 530, 563, 641 contingency table, 1138
centroidal angle, 642 contour coding, 977
cepstrum, 623, 885 contrast, 75, 129, 600, 734
chain coding, 530 for X-ray imaging, 286, 839
channel capacity, 959 contrast enhancement, 344
chest, imaging of, 17 adaptive-neighborhood, 338
chord-length statistics, 560 clinical evaluation, 1145
circle function, 105, 197 of mammograms, 350
circles contrast histogram, 346
detection of, 440, 450 contrast-to-noise ratio, 137
circulant matrix, 215 convolution, 91, 117, 220, 224, 315
block-, 221 backprojection, 811
diagonalization, 218 circular, 213
city-block distance, 1105 illustration, 215
cluster seeking, 1105 linear, 213, 215
K-means, 1108 periodic, 213
maximin-distance, 1108 convolution mask, 176, 314
coding, 955, 960 correlation, 118, 601
Huffman coding, 961 correlation coefficient, 119, 168
interpolative, 1001 covariance, 168, 205, 1007
predictive, 1004 covariance matrix, 1106
run-length, 969 CREW coding, 1066
source, 961 cross-correlation, 169
CT, see computed tomography directional filtering, 644
cyclo-stationary signal, 167 directional pattern analysis, 639
cyst, imaging of, 3, 6, 43, 46 discrete cosine transform, 987
discriminant analysis, 1095
deblurring, see also restoration of breast masses, 1162
blind, 877 distance, 1105
motion, 875 between probability density func-
decision function, 1095, 1101, 1118, tions, 1141
1119 city-block, 1105
decision making, 1091 Euclidean, 643, 1101, 1105, 1175
deconvolution, see also restoration Hamming, 959
homomorphic, 623, 885, 886 Jeffries–Matusita, 1142
decorrelation, 980 Mahalanobis, 1105
delta function, 78, 90, 95, 798 Manhattan, 643, 1105
sifting property, 90 distance function, 1097
density slicing, 292, 710, 827 distortion measure, 959
detection, see also segmentation divergence, 1141
of architectural distortion, 775 dominant angle, 641
of asymmetry, 742 dot product, 1124
of breast boundary, 451, 720 normalized, 1106
of breast masses, 699 DPCM, 984, 998, 1005, 1046, 1049
of calcifications, 410 DPCM, 984, 998, 1005, 1046, 1049
of circles, 440, 450 dynamic range, 73
of edges, 604 dynamic system, 165, 167
of isolated lines, 365
of isolated points, 365 eccentricity, 551
of lines, 780 ECG signal, 280
of pectoral muscle, 481 echo removal, 623, 886
of ripples, 604 echocardiography, 44
of spots, 604 edge detection, 367, 493, 604
of straight lines, 437 edge enhancement, 315, 322
of the fibroglandular disc, 743 ECG signal, 280
of the spinal canal, 449 edge linking, 392, 493, 495
of tumors, 417, 500 edge spread function, 93, 131, 139
of waves, 604 edge-flow propagation, 493
diagnostic accuracy, 1132 eight-connected neighborhood, 176
diagnostic decision, 56, 1089, 1090, electron microscopy, 10
1182 energy-subtraction imaging, 287
diagonalization of a circulant matrix, enhancement
218 in breast cancer screening, 1143
difference of Gaussians, 376 ensemble averages, 155, 167, 172
differentiation, 119, 367–369, 983 entropy, 84, 601, 643, 681, 687, 697,
in matrix representation, 224 734, 957, 983, 1070
difficulties in image acquisition, 61 conditional, 89
digital radiography higher-order, 89
MTF, 127 joint, 88
digital subtraction angiography, 286 Markov, 1071
directional distribution mutual information, 90
measures of, 641 ergodic process, 167, 176
error measures, 138 L, 181
Euclidean distance, 643, 1101, 1105, LMMSE, 225
1175 local LMMSE, 228
expectation maximization, 745, 754, local-statistics, 174
842 lowpass, 194
max, 181
F1 transform, 1063 mean, 176
F2 transform, 1064 median, 177
false negative, 1133 Metz, 874, 940
false positive, 1133 min, 181
fan filter, 646, 647, 651 min/max, 181
feature selection, 1141 motion deblurring, 875
feature vector, 1091 moving average, 121
fetal imaging, 46 multiresolution, 660
fibroblasts, image of, 7 noise-updating repeated Wiener,
fibroglandular disc, 743 234
fidelity criteria, 959 nonstationary, 231, 235
film-grain noise, 170, 254 notch, 199, 890
filter optimal, 224, 865, 872, 898
adaptive, 228 order-statistic, 177, 181
adaptive 2D LMS, 235 power spectrum equalization, 860,
adaptive rectangular window, 237 867, 877, 935
adaptive-neighborhood, 241 ramp, 648, 806
adaptive-neighborhood noise sub- sector, 646, 647, 651
traction, 244 space-invariant, 857
alpha-trimmed mean, 181 space-variant, 231, 235, 891
band-reject, 199 three-dimensional, 947
bandpass, 648, 660 Wiener, 225, 233, 863, 897, 935
blind deblurring, 877 filtered backprojection, 804, 811
Butterworth, 197, 325, 648, 807, finite impulse response filter, 213
811 fixed operators
comb, 199, 890 limitations of, 323
comparative analysis, 867 fluoroscopy, 17
constrained least-squares restora- focal spot, 19, 92, 127
tion, 872 four-connected neighborhood, 175
directional, 644 Fourier descriptors, 562
fan, 646, 647, 651 Fourier slice theorem, 798
finite impulse response, 213 Fourier transform, 99, 122, 206
for periodic artifacts, 199 convolution property, 117
frequency domain, 193 coordinates of, 105
Gabor, 657 display of, 116
generalized linear, 328 folding of, 116
high-emphasis, 325 inverse, 115
highpass, 325 linearity, 115
homomorphic, 328, 335, 885, 886 matrix representation, 206
ideal, 194, 325, 647, 648 multiplication property, 118
infinite impulse response, 213 of a circle, 105
inverse, 858, 867 of a line, 646
Kalman, 898, 943 of a rectangle, 104, 105
periodicity, 116 gray-scale windowing, 292, 827
properties, 110 grid artifact, 21, 199
rotation, 117 ground-truth, 1109
scaling property, 117
separability, 115 Hadamard matrices, 211
shift property, 116 Hamming distance, 959
shifting of, 116 Hamming window, 809, 894
symmetry, 115, 116 Haralick's measures of texture, 600
fractal analysis of texture, 605 head, imaging of, 32
fractals, 1035 heart, imaging of, 41, 44, 277, 834,
fractional Brownian motion, 609 937
frame averaging, see synchronized av- hexadecimal code, 961
eraging high-emphasis filter, 325
Freeman chain coding, 530, 977, 978 high-frequency emphasis, 315, 322
frequency-domain filters, 193 high-frequency noise, 194
functional imaging, 6, 36, 40, 43, 44, highpass filter, 325
49 histogram, 78
fusion operator, 426, 500 of contrast, 346
fuzzy connectivity, 449 histogram concavity, 691
fuzzy fusion, 426, 500 histogram equalization, 301, 478, 501
fuzzy region growing, 429 adaptive-neighborhood, 311
fuzzy segmentation, 421 local-area, 310
fuzzy sets, 417 histogram specification, 305
homogeneity, 601
Gabor function, 622, 657, 659, 755, homomorphic deconvolution, 623, 885,
757, 780 886
Gabor wavelets, 487, 663, 757, 780 homomorphic filtering, 328, 335
gamma, 73 Hough transform, 435, 450, 481
gamma camera, 38 Hounsfield units, 825, 826
spread function, 97 Huffman coding, 961
gamma correction, 294, 345 human–instrument system, 53
gated blood-pool imaging, 168, 277 Hurter–Driffield curve, 73
Gaussian mixture model, 744, 840
Gaussian noise, 154, 172, 180, 194, ideal filter, 194, 325, 647, 648
254 impulse function, 78, 90
Gaussian PDF, 159, 1110, 1118, 1120 impulse noise, 177
generalized linear filtering, 328 indexed atlas, 1172, 1177
geometric distortion, 804, 813, 822 infinite impulse response filter, 213
geometric mean, 868, 940 inflection
of planar images, 925, 926 points of, 534, 542
geometric transformation, 291 information content, 61
global operations joint, 87, 88
limitations of, 310 mutual, 90
gold standard, 1134, 1138 information theory, 956
gradient analysis, 632 infrared imaging, 3
of breast masses, 627, 702 inhomogeneity, 596
gradients, 367 integration, 121
Gray code, 961, 969, 1055, 1062 interference, 151
gray-level co-occurrence matrix, 597 physiological, 165
power-line, 164 line, Fourier transform of, 646
interpolation, 66 linear discriminant analysis, 1095, 1097,
interpolative coding, 1001 1104, 1166
interval cancer, 1145, 1146 linear prediction, 1005
intrinsic orientation angle, 726 linear regression, 692
inverse filter, 858 live wire, 449
isointensity contours, 710, 712, 725 liver, imaging of, 947
Lloyd–Max quantizer, 68
JBIG, 1046 LMMSE filter, 225
enhanced, 1062 LMS filter, 235
Jeffries–Matusita distance, 1142 local LMMSE filter, 228
JM distance, 1142 local-area histogram equalization, 310
joint information, 88 local-statistics filters, 174
JPEG, 1049 logarithmic transformation, 334, 456,
just-noticeable difference, 76, 343, 405 462
logistic regression, 429, 434, 1120
K-means clustering, 1108 loss function, 1110
k-NN method, 1104 lossless compression, 957
applied to breast masses, 1171, lowpass filter, 194
1177
Kaczmarz method, 813 magnetic resonance imaging, see MRI
Kalman filter, 898, 943 magnification, 27
Karhunen–Loève transform, 763, 769, Mahalanobis distance, 1105
989 applied to breast masses, 1167
kidney, imaging of, 10 mammograms
knee, imaging of, 49 analysis of, 410, 575, 578, 627,
kurtosis, 560, 596 699, 742, 775, 1160, 1166
enhancement of, 335, 344, 350,
L filter, 181 1143
Laplacian, 120, 138, 316 mammography, 22
subtracting, 316 Manhattan distance, 643, 1105
Laplacian MSE, 138 Markov entropy, 1071
Laplacian of Gaussian, 376 Markov source, 1071
Laplacian PDF, 163, 984, 1001 matched filter, 867
Laws' measures of texture, 603 matrix representation, 69, 202
leave-one-out method, 1125 differentiation in, 224
left-most-looking rule, 977 of convolution, 212
Lempel–Ziv coding, 974 of images, 203
Levinson algorithm, 1014, 1019, 1023 of transforms, 206
ligament tissue, imaging of, 7, 10, 14, max filter, 181
680, 684 maximin-distance clustering, 1108
likelihood function, 745, 840, 1110 maximum likelihood, 842, 1124, 1138,
limitations of 1147
fixed operators, 323 McNemar's test of symmetry, 1138
global operations, 310 in breast cancer screening, 1147
line detection, 780 mean, 152, 204, 1118
line function, 95 geometric, 868, 925, 926, 940
line spread function, 93, 127, 131 of planar images, 925
gamma camera, 97 mean lter, 176
adaptive-neighborhood, 242 neural networks, 1126
mean-squared error, 68, 138 neuroblastoma, 834
mean-squared value, 152 NMR, 47
measure of fuzziness, 426 noise, 131, 151
medial axis, 548 additive, 115, 131, 170
median filter, 177 film-grain, 170, 254
adaptive-neighborhood, 242 Gaussian, 154, 172, 180, 194, 254
Metz filter, 874, 940 high-frequency, 194
microscopy impulse, 177
confocal, 270 in nuclear medicine images, 271
electron, 10 multiplicative, 170
light, 6 PDFs, 159
microtomography, 831 photon detection, 21
min filter, 181 Poisson, 169, 180, 254, 271
min/max filter, 181 salt-and-pepper, 166, 177, 180,
minimum-description length, 746, 843 181, 259
mobile agents, 1177 signal-dependent, 169, 248
modulation transfer function, see MTF SNR, 131
moments, 601 speckle, 43, 170, 254
angular, 642 structured, 40, 164
first-order, 152 uniformly distributed, 254
second-order, 152 noiseless coding theorem, 957
motion artifact, 287 nonstationary filter, 231, 235
motion deblurring, 875 nonstationary process, 166, 230, 414
moving-average filter, 121 normal equations, 1006, 1029
moving-window processing, 167, 174, normalized error, 138
662 normalized MSE, 138
MPEG, 1050 notch filter, 890
MRI, 47 nuclear magnetic resonance, 47
MTF, 122, 129, 131, 140 nuclear medicine images
digital radiography, 127 restoration of, 919, 934
screen-film system, 127 nuclear medicine imaging, 36
multichannel linear prediction, 1009 attenuation correction, 923
multiframe averaging, see synchronized noise reduction in, 271
averaging quality control, 922
multiplication of images, 118, 328 scatter compensation, 922
multiplicative noise, 170
multiresolution analysis, 660, 707, 762 objectives of image analysis, 53
multiscale analysis, 380, 390, 491, 666, octal code, 961
700 optical density, 72
mutual information, 90 optical transfer function, 122
myocyte, image of, 7 optimal filter, 224, 865, 872, 898
order-statistic filters, 177, 181
nearest-neighbor rule, 1104 oriented pattern analysis, 639
negative predictive value, 1134 Otsu's method of thresholding, 649
neighborhood outliers, 181
eight-connected, 176
four-connected, 175 parabolic modeling of contours, 543
shapes, 175 parallel-ray geometry, 798
Parseval's theorem, 115, 993 probability density function, 80, 151
pattern classification, 1089, 1091 divergence, 1141
reliability, 1140 Gaussian, 159, 1118
supervised, 1095 Laplacian, 163
test set, 1095 normalized distance, 1141
test step, 1125 Poisson, 160
training set, 1095, 1140 Rayleigh, 163
training step, 1125 uniform, 160
unsupervised, 1104 projection
pectoral muscle fan-beam, 31
detection of, 481 geometry, 797
perceptron, 1127 image reconstruction from, 797
PET, 41 imaging, 15, 32, 40, 41
phantom, 64, 92, 97, 199, 271, 934, parallel-ray, 31, 797
935, 943 projection theorem, 798
phase, 104, 116 prototype, 157, 1097, 1123
linear component, 116, 886 pseudo-color, 827
unwrapping, 886 pyramidal decomposition, 707, 720
use in blind deblurring, 878
phase portraits, 779 quality of images, 61
photomultiplier tube, 39 characterization of, 64
photon detection noise, 21 quantitative analysis, 56
physiological imaging, see functional quantization, 66
imaging quasistationary process, 167, 176
physiological interference, 165
planar imaging, 15, 32, 40, 41 radiographic imaging, 15
point spread function, 91, 802, 805 Radon transform, 798, 886
Poisson noise, 169, 180, 254, 271 ramp filter, 648, 806
Poisson process, 15, 160 random noise, 151
polygonal modeling of contours, 537 random variable, 20, 87, 151
positive predictive value, 1134 ray integral, 798
positivity, 203, 874 Rayleigh PDF, 163
positron emission tomography, see PET receiver operating characteristics, 1135
power law, 609 applied to breast masses, 1147,
power spectral density, 159 1162, 1167
power spectrum equalization, 860, 877, free-response, 791
935 in breast cancer screening, 1145
power-line interference, 164 reconstruction
predictive coding, 1004, 1049 algebraic reconstruction techniques,
Prewitt operators, 368 813, 821
principal axis, 641 additive, 820
principal-component analysis, 769, 989, constrained, 821
991 multiplicative, 821
probabilistic models, 1110 backprojection, 801
Bayes rule, 1116 convolution backprojection, 804,
conditional probability, 1116 811
likelihood function, 1116 filtered backprojection, 804, 811
posterior probability, 1110 Fourier method, 97, 801
prior probability, 1110 from projections, 797
simultaneous iterative reconstruc- shape analysis, 529
tion technique, 822 of breast masses, 1097, 1162
with limited data, 804, 813, 822 of calcifications, 575, 1128
rectangle function, 104 of tumors, 578
region growing, 340, 393, 421, 429, shape factors, 549
1052 sharpness, 139
adaptive-neighborhood, 242, 249 short-time analysis, 167, 662
splitting and merging, 397 signal-dependent noise, 169, 248
region-based segmentation, 396 transformation of, 171
relative dispersion, 608 signal-to-noise ratio, see SNR
resolution, 99 signature of a contour, 530, 562
recovery, 924 simultaneous iterative reconstruction
restoration, 857 technique, 822
comparison of filters, 867 sinc function, 104, 646
information required, 875 single-photon emission computed to-
ringing artifact, 197, 316, 326 mography, see SPECT
ripple detection, 604 skeletonization, 548, 692
risk–benefit analysis, 54 skewness, 560, 596
Roberts operator, 369 snakes, 444, 456
root mean-squared value, 152 SNR, 131
rose diagram, 641, 678, 682, 683, 686, Sobel operators, 369, 604
696, 771, 772 sonification, 625
rotation, 105, 117, 655 source coding, 961
run-length coding, 969 space-invariant filter, 857
Rutherford–Appleton threshold, 691 space-variant filters, 231, 235, 891
space-variant systems
salt-and-pepper noise, 166, 177, 180, transformation to space-invariant
181, 259 systems, 893
sampling, 65 spatial statistics, 157
scale-space, 380 specificity, 1132
scaling, 117 speckle noise, 43, 170, 254
scanning geometry, 31 SPECT, 32, 40
scatter compensation, 922 restoration of, 919
scattering, 40, 43, 92 spread function, 99
Compton, 920 spectrum, 99
screen-film system, 16, 92 spiculation index, 570, 578, 1160, 1162
MTF, 127 spinal canal
screening, 1132 detection of, 449
breast cancer, 1143 splines, 444
sectioned filtering, 893, 897 spot detection, 604
sector filter, 646, 647, 651 spread functions, 90
segmentation, 393, 396, see also de- standard deviation, 152
tection stationary process, 166
improvement of, 444 block-wise, 167
of contours, 534 cyclo-, 167
of texture, 622 ergodic, 167
segmentation-based coding, 1051 non-, 167
sensitivity, 1132 quasi-, 167, 176
sequency, 210 statistical decision, 1110
statistical separability, 1141 Rutherford–Appleton, 691
statistically independent processes, 155 time averages, 157, 167
step function, 96 tissue characterization, 838
straight line, detection of, 437 Toeplitz matrix, 213, 1008
streaking, 804, 813 tomography, 27
structural analysis of texture, 621 transform coding, 984
structured noise, 164 transillumination, 6
subtracting Laplacian, 316 true negative, 1132
subtraction, 287 true positive, 1132
angiography, 286 tumor
energy, 287 in neuroblastoma, 834
temporal, 291 tumors, see also breast masses
synchronized averaging, 171, 277 detection of, 417, 500, 699
in confocal microscopy, 271 shape analysis, 578
telemedicine, 1177 ultrasonography, 43
teleradiology, 1079 uncertainty principle, 659
analog, 1080 uniform PDF, 160, 254
digital, 1082 uniformity, 596
high-resolution, 1084 unit step function, 163
temperature, 2, 4, 5 unsharp masking, 314, 322, 345
template matching, 119, 169
temporal statistics, see time averages variance, 135, 152, 560, 596, 601, 1118
temporal subtraction, 291 varicocele, detection of, 3
texton, 583, 621, 891 vector representation, see matrix rep-
texture, 583 resentation
Fourier spectrum, 612
fractal measures, 605 Walsh functions, 209
Haralick's measures, 600 Walsh–Hadamard transform, 209, 211
in liver image, 43 warping, 291
Laws' measures, 603 wave detection, 604
model for generation, 584 wavelets, 659, 663, 707, 756, 757
of breast masses, 627, 699, 701, Gabor, 487, 663, 757, 780
1166 Weber's law, 76, 343, 405
ordered, 589 Wiener filter, 225, 233, 863, 897, 935
oriented, 590, 639 noise-updating repeated, 234
periodic, 891 window
random, 589 Hamming, 809, 894
statistical analysis, 596 windowing of gray scale, 292, 827
structural analysis, 621
texture energy, 603 X ray
texture flow-field, 726 beam hardening, 20
thermal imaging, 2 contrast medium, 286, 839
thermography, 3 dual-energy imaging, 287
thinning, 548 energy, 19
three-dimensional filtering, 947 energy-subtraction imaging, 287
thresholding, 291, 364, 690, 722 exposure, 19
optimal, 395 grids, 20
Otsu's method, 649 imaging, 15
scatter, 20, 25
stopping, 22
xeromammography, 25
Yule–Walker equations, 1006, 1014
zero padding, 101, 193, 215
zero-crossings, 370, 380