
Ministry of Higher Education and Scientific Research
Baghdad University
Al-Khwarizmi College of Engineering
Department of Biomedical Engineering

Parallel Medical Imaging in Artificial Intelligence
for Medical Image Analysis

Submitted by: Ali Hadi Mohammed
To: Dr. Muhanad Sabir
Digital Image Processing, Course 1
5th stage / evening study

2019 / 2020

Table of contents
1. INTRODUCTION
2. PARALLEL MEDICAL IMAGING
   2.1 Image data collection
   2.2 Medical knowledge extraction
   2.3 Parallel evolution with parallel learning
3. CASE STUDY OF MAMMOGRAM
4. CONCLUSION
References

List of figures:
Figure 1: Proposed Parallel Medical Imaging: a data-knowledge-driven framework
Figure 2: Evolutionary framework of PMI with parallel learning
Figure 3: Skin lesion augmentation through rotation, flipping and cropping
Figure 4: Synthesis by insertion
Figure 5: Computed tomography (CT) synthesis through an XCAT phantom
Figure 6: Data-knowledge-driven decision pyramid in PMI
Figure 7: Clinical description in mammography lexicon with interpretability for breast mass analysis
Figure 8: GANs-based PMI framework for breast mass analysis
Figure 9: CNN architecture for the mass classification
Figure 10: Qualitative results for generating binary masks and corresponding mass images
1. INTRODUCTION
Medical image analysis aims at extracting clinically useful information
from computed tomography (CT), positron emission tomography (PET),
magnetic resonance (MR), ultrasound, X-ray and other modalities of
images with the assistance of computers for diagnostic decision support.
With the growing demands of medical imaging, medicine has entered a new era in which medical equipment, image data, domain knowledge, and humans, including physicians and patients, are coupled in large-scale cyber-physical-social spaces (CPSS). Hence, vision-based medical image analysis plays an increasingly prominent role at many clinical workflow stages, from screening and diagnosis to treatment delivery, especially in the domain of remote medical consultation. Recently, vision-based medical image analysis has achieved promising results for skin cancer diagnosis, red lesion detection in fundus images, mammography analysis, and pulmonary nodule detection. The ACP methodology was first proposed for modeling, managing, and controlling complex systems; it consists of Artificial societies, Computational experiments, and Parallel execution. ACP-based parallel intelligence is one form of intelligence generated from the interactions and executions between physical and artificial systems. As part of parallel intelligence, a parallel learning framework has been presented to address the issues of data collection and policy exploration in current machine learning frameworks. Different from conventional medical image analysis frameworks that solely perform data-to-knowledge extraction, we further introduce artificial imaging systems to select and generate specific medical image data for data collection in a knowledge-driven way. The data-knowledge-driven parallel evolution can enable effective large-scale data collection and enhance the interpretability of diagnosis.
many applications of artificial intelligence in medical imaging.

Fig. 1. Proposed Parallel Medical Imaging: A Data-Knowledge-Driven
Framework.

2. PARALLEL MEDICAL IMAGING.


Conventional medical image analysis frameworks extract clinical knowledge from image data in a bottom-up manner, where model learning is driven by data and ignores prior medical knowledge. However, in the field of medical imaging, domain knowledge plays a critical role in data collection and diagnostic decision support. Properly utilizing medical knowledge in a top-down manner can not only improve diagnosis but also enhance the interpretability of diagnostic decisions. Inspired by parallel intelligence and the framework of evolutionary systems, we propose a data-knowledge-driven framework termed parallel medical imaging (PMI) for medical image analysis. Its two major parts, image data collection and medical knowledge extraction, are coupled in PMI through parallel learning in an evolutionary way. The key point is to select and generate image data that are representative enough to extract the desired medical knowledge for the final diagnostic decision. In particular, raw images are collected first, followed by variation operators such as augmentation, selection, and reproduction with generation to build a large-scale image data collection. Computational experiments with predictive learning are then conducted for data-to-knowledge extraction. In this work, inspired by the key idea of evolutionary optimization through the interactions and executions between physical and artificial systems, we introduce artificial imaging systems (AIS) parallel to the physical ones. For clarity, the overall evolutionary framework of PMI is illustrated in Fig. 2.

Fig. 2. Evolutionary framework of PMI with parallel learning.


In AIS, prescriptive learning is adopted to guide data generation based on the predictively extracted or prior medical knowledge, thereby achieving knowledge-to-data. This step can also enhance the interpretability of decisions. In addition, descriptive learning is adopted in AIS to guide data selection and generation based on the captured data distribution and knowledge. As a result, an effective final diagnosis and prognosis can be achieved through the extracted knowledge with enhanced interpretability. More details are given in the following subsections.

2.1 Image data collection.


For medical imaging, large-scale image data with accurate annotations are critical to the performance of learning-based methods. A parallel imaging framework was previously introduced for image generation in parallel vision (PV) to tackle the problems of complex vision systems. However, compared with natural image analysis, medical image analysis requires a higher level of expertise for interpretation and labeling. In addition, it is not easy to collect image data from medical institutions or imaging communities, since collection must comply with specific security and privacy policies. Moreover, some lesion types and abnormalities have a very low rate of occurrence in the general population. It is therefore time consuming and costly to collect effective training data, which keeps medical imaging a challenging task. Through effective reproduction and variation operations such as conventional augmentation, active selection, and generation by the introduced artificial imaging systems, a set of 'big data' with real and synthetic images is formed for conducting computational experiments for medical knowledge extraction.
a. Augmentation and selection of real images:
In the image data collection step, a small and/or imbalanced set of real training images can be augmented. As in conventional methods, rotation, scaling, flipping, translation, and added noise can be applied for medical image augmentation. Examples of skin lesion augmentation are illustrated in Fig. 3.

Fig. 3. Skin lesion augmentation through rotation, flipping and cropping.
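As a minimal sketch of this augmentation step (written in NumPy; the specific operators and noise level are illustrative assumptions, not the exact settings used here), such transformations can be implemented as:

import numpy as np

def augment(image, rng=None):
    # Return simple augmented variants of a 2-D grayscale image.
    # Rotations (90-degree steps), flips, and additive Gaussian noise are
    # illustrative stand-ins for the augmentation operators mentioned above.
    rng = np.random.default_rng() if rng is None else rng
    variants = [np.rot90(image, k) for k in (1, 2, 3)]   # 90/180/270 degree rotations
    variants.append(np.fliplr(image))                     # horizontal flip
    variants.append(np.flipud(image))                     # vertical flip
    noisy = image + rng.normal(0.0, 0.02, size=image.shape)
    variants.append(np.clip(noisy, 0.0, 1.0))             # additive noise, kept in [0, 1]
    return variants

# Example: augment a single normalized 256 x 256 image into six variants
img = np.random.default_rng(0).random((256, 256))
print(len(augment(img)))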


The performance of learning-based methods for medical image analysis depends not only on the size but also on the representativeness of the labeled images. However, due to a lack of standardization in the imaging and acquisition of medical images, selecting representative training samples for computational experiments remains challenging. In this framework, a suitable selection of real images is performed to address this challenge. To this end, simple unsupervised or semi-supervised methods can be applied for data selection. In addition, active learning, which aims at using a limited number of medical images for disease classification, can be developed. Active learning iteratively selects the most informative samples through interaction between experts and the computer; the key is to design an uncertainty criterion for the sample selection process.
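A common choice for such an uncertainty criterion is predictive entropy. The sketch below (an assumption for illustration, not the criterion used in this work) ranks unlabeled samples by the entropy of their predicted class probabilities and returns the most uncertain ones for expert annotation:

import numpy as np

def select_most_uncertain(probs, k):
    # probs: (n_samples, n_classes) predicted class probabilities.
    # Returns the indices of the k samples with the highest predictive entropy,
    # i.e. the samples the current model is least sure about.
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:k]

# Example: five unlabeled samples with benign/malignant probabilities
probs = np.array([[0.95, 0.05], [0.50, 0.50], [0.70, 0.30], [0.55, 0.45], [0.99, 0.01]])
print(select_most_uncertain(probs, 2))   # the two most ambiguous samples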
b. Generation of synthetic images:
To utilize medical domain knowledge, we propose to apply descriptive learning and design artificial imaging systems, parallel to the real imaging systems, that can generate synthetic and specific medical images following the distribution of real ones. Many techniques can be applied for generating new synthetic medical images in the proposed artificial imaging systems; they typically fall into three categories. In the first category, new lesions are mathematically simulated based on various deformations and then inserted into raw projection data or reconstructed clinical images, for example in mammography and lung nodule imaging. An example is illustrated in Fig. 4.

Fig. 4. Synthesis by insertion. (a) Normal mammogram, (b) Insert mass, (c)
Simulated lesions by diffusion limited aggregation
To ensure the realism of the characteristics of the artificial samples, real lesions can be extracted and inserted into the same or different images. In the second category, virtual images are simulated through computer graphics based on an abstraction of prior medical knowledge. In particular, synthetic images are generated by selecting the simulation parameters of models under controlled hypothetical imaging conditions. A computerized phantom (eXtended CArdiac-Torso, XCAT) serves as a virtual patient and is fed into an artificial imaging system with an accurate computerized model, which can generate photorealistic, patient-quality CT image data, as shown in Fig. 5.

Fig. 5. Computed tomography (CT) synthesis through an XCAT phantom.


In the third category, generative models for image synthesis can be learned inside the artificial imaging systems. For example, a fully convolutional neural network has been proposed for MRI synthesis that learns to embed the input modalities into a shared modality-invariant latent space, which allows it to benefit from additional input modalities and to be robust to missing data. Recently, adversarial learning of generative models has been widely used for medical image synthesis.

2.2 Medical knowledge extraction.

Conventional methods of turning data into medical knowledge rely on visual analysis and interpretation by a domain expert or radiologist in order to find useful patterns in the data for decision support. As pointed out in radiomics, effective conversion of images into mineable data supports the diagnostic decision. In this work, after effective image collection, computational experiments with predictive learning are conducted to extract 'small' medical knowledge in PMI. Hence, medical knowledge extraction from images is also part of radiomics. For this research topic in parallel medical imaging, any information about the patient's ultrasonic signs, X-ray findings, and other related image-based medical descriptions is termed a 'symptom'. Computational experiments with predictive learning aim to perform effective diagnosis. To achieve this goal, we have to extract medical knowledge by studying the relationships of obligatory proving or excluding symptoms for diagnosis, both from textbooks and from practical experience. This information about the relationships that exist between symptoms and diagnoses, between symptoms, between diagnoses, and the more complex relationships from combinations of symptoms and diagnoses to a single symptom or diagnosis, is a formalization of what is called medical knowledge. Predictive learning was originally inspired by cognitive psychology studies of how children construct knowledge of the world by interacting with it. In the computational experiments step, we perform predictive learning of a diagnosis model from the collected image data for decision support; this can be viewed as part of medical knowledge extraction from image data. Conventional data-driven machine learning techniques, especially deep learning models, can be trained to address knowledge extraction in PMI. In general, the computational experiments in PMI include detection, segmentation, classification, or relationship captioning for clinical decision support. The detection model extracts the knowledge of the rough location and size of the lesion area. Subsequently, the segmentation model extracts the detailed shape and margin information of the lesion. Finally, the knowledge of pathological types and assessment categories is obtained through the classification task. Sometimes we also need to capture the relationship between symptoms and diagnosis.
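A rough sketch of this staged detection-segmentation-classification flow is given below; the model interfaces (detector, segmenter, classifier) are hypothetical placeholders introduced only to make the data-to-knowledge steps concrete:

import numpy as np
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LesionFinding:
    box: Tuple[int, int, int, int]   # rough location and size from the detection model
    mask: np.ndarray                 # detailed shape and margin from the segmentation model
    category: str                    # pathological type / assessment category

def extract_knowledge(image, detector, segmenter, classifier):
    # Staged data-to-knowledge extraction: detect -> segment -> classify.
    findings = []
    for box in detector(image):                            # rough lesion locations (x0, y0, x1, y1)
        roi = image[box[1]:box[3], box[0]:box[2]]
        mask = segmenter(roi)                              # detailed shape and margin
        category = classifier(roi, mask)                   # e.g. benign / malignant
        findings.append(LesionFinding(box, mask, category))
    return findings

# Example with trivial stand-in models
img = np.zeros((256, 256))
out = extract_knowledge(img,
                        detector=lambda im: [(10, 10, 60, 60)],
                        segmenter=lambda roi: (roi > 0.5).astype(np.uint8),
                        classifier=lambda roi, m: "benign")
print(out[0].category)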

2.3 Parallel evolution with parallel learning.

We have introduced some techniques that can be applied in artificial imaging systems to utilize domain knowledge. In this subsection, we further discuss the details of the parallel learning that is incorporated in PMI to achieve evolutionary optimization. As shown in Fig. 6, we introduce parallel learning to take advantage of the bidirectional relationship between medical image data and clinical descriptions/representations of medical knowledge. The predictive learning part of parallel learning, which achieves data-to-knowledge extraction in a bottom-up manner, was discussed in the last subsection. Different from traditional diagnosis, which treats medical images as pictures intended solely for visual interpretation, the extracted medical knowledge can conversely be used, through top-down inference, to guide image generation as well as to increase the interpretability of future diagnosis. As described earlier, we employ the descriptive and prescriptive learning parts of parallel learning to improve the model's generalization ability and to enhance the interpretation of medical diagnosis decisions.

Fig. 6. Data-knowledge-driven decision pyramid in PMI.

a) Descriptive learning:
Descriptive learning aims to devise models that explain and predict learning results. In this work, it drives the introduced artificial imaging system to generate new images that follow the distribution of the observed data. For PMI in this paper, the key idea of descriptive learning is to model the image distribution inside the designed artificial imaging systems, enabling perception and reasoning based on observations of the real world. The descriptive learning process allows features to be learned from unlabeled data in a semi-supervised or unsupervised manner. Adversarial learning with GANs for image generation can be seen as a special case in which the objective is to minimize the difference between the distributions of real and generated images.
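For reference, this adversarial special case corresponds to the standard GAN minimax objective (the generic formulation, not a loss specific to this work):

\min_{G}\max_{D} V(D,G) = \mathbb{E}_{y\sim p_{\mathrm{data}}(y)}\big[\log D(y)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]

where the generator G is driven to match the distribution of generated images to that of the observed real images y.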
b) Prescriptive learning:
Prescriptive learning is concerned with guidelines that describe what to do in order to produce specific outcomes. These guidelines are often based on descriptive theories or derived from prior knowledge. In this work, we achieve knowledge-to-data generation and enhance interpretability through the prescriptive learning part of parallel learning. According to the ACP methodology, we perform parallel execution with prescriptive learning to guide the artificial medical imaging systems to collect specific, representative image data based on the extracted or prior medical descriptions and knowledge. For instance, based on the prior medical knowledge that mammograms with spiculated and irregular masses are mostly malignant, we can prescriptively generate various irregular and spiculated mass images with associated pleomorphic calcifications for malignant breast cancer analysis in mammograms. As a result, the visual interpretation of diagnostic results is enhanced through prescriptive learning, which effectively captures the relationship between malignancy and interpretability.

3. CASE STUDY OF MAMMOGRAM.


To validate the effectiveness of the proposed PMI framework, we perform a case study of mammogram analysis in this section. The clinical descriptive details from the standard Breast Imaging Reporting and Data System (BI-RADS) are listed in Tables I and II.
Breast composition: (a) the breasts are almost entirely fatty; (b) there are scattered areas of fibroglandular density; (c) the breasts are heterogeneously dense, which may obscure small masses; (d) the breasts are extremely dense, which lowers the sensitivity of mammography.
Mass shape: Oval; Round; Irregular.
Margin: Circumscribed; Obscured; Microlobulated; Indistinct; Spiculated.
Density: High density; Equal density; Low density; Fat-containing.
TABLE I: CLINICAL DESCRIPTIONS FOR MAMMOGRAPHY (BREAST COMPOSITION, MASS SHAPE AND MARGIN, DENSITY)

Category  Description
0         Needs additional imaging evaluation and/or prior mammograms for comparison.
1         Negative.
2         Benign finding(s).
3         Probably benign finding(s); short-interval follow-up is suggested.
4         Suspicious abnormality; biopsy should be considered.
5         Highly suggestive of malignancy; appropriate action should be taken.
6         Biopsy-proven malignancy.
TABLE II: BREAST IMAGING REPORTING AND DATA SYSTEM (BI-RADS) ASSESSMENT CATEGORIES

Similar to previous work, as shown in Fig. 7, after capturing the relationship between malignancy and the clinical descriptions listed in Table II, diagnosis with interpretability can be enhanced.

Fig. 7. Clinical description in mammography lexicon with interpretability for breast mass analysis.
For visual results and diagnosis, visual diagnosis models are trained for visual information extraction such as detection, segmentation, and classification. Due to page limitations, we only study the problem of local X-ray breast mass classification (benign/malignant) for diagnosis. Built upon PMI, we perform an implementation based on GANs with image data collection, medical description of knowledge, and parallel evolutionary learning. The overall framework is illustrated in Fig. 8. More details are given in the following subsections.

Fig. 8. GANs-based PMI framework for breast mass analysis.


a. Dataset and Evaluation Criteria
Experiments are conducted on the publicly available INbreast dataset, which is one of the most widely used datasets for mammogram analysis. The INbreast dataset was created by the Breast Research Group, INESC Porto, Portugal, and consists of a total of 115 cases (410 images), including 107 images with cancer and 236 images of normal breasts. In this work, local ROIs of the 107 mass images with cancers are cropped to 256 × 256 pixels, and the corresponding masks are processed with the same operation. A total of 112 square mass images are obtained, because some of these cases contain more than one mass; they are annotated as benign or malignant according to the Breast Imaging Reporting and Data System (BI-RADS), a standard developed by the American College of Radiology (ACR), as listed in Table II. In this work, 36 masses with BI-RADS Category ∈ {2, 3} are categorized as benign, and 76 masses with BI-RADS Category ∈ {4, 5, 6} are categorized as malignant. The performance is analyzed with measurement metrics for the binary classification problem, including overall accuracy, TP and TPR, FN and FNR, TN and TNR, FP and FPR. TP, TN, FP, and FN are defined as the numbers of true positive, true negative, false positive, and false negative detections, respectively. The remaining metrics are defined as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
TPR = TP / (TP + FN),   FNR = FN / (TP + FN)
TNR = TN / (TN + FP),   FPR = FP / (TN + FP)
Good classification performance corresponds to high accuracy, TPR (TP), and TNR (TN), as well as low FNR (FN) and FPR (FP). Moreover, ROC (Receiver Operating Characteristic) curves and their AUCs (Area Under the Curve) are also used to evaluate the performance of the classification model. The ROC curve is plotted with FPR on the horizontal axis and TPR on the vertical axis; better performance corresponds to a larger AUC.
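As a minimal illustration of how these metrics and the ROC/AUC can be computed (using scikit-learn on hypothetical labels and scores; this is not the evaluation code of this work):

import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc

# Hypothetical ground-truth labels (1 = malignant) and predicted scores
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
tpr, tnr = tp / (tp + fn), tn / (tn + fp)      # sensitivity, specificity
fpr, fnr = fp / (tn + fp), fn / (tp + fn)

fpr_curve, tpr_curve, _ = roc_curve(y_true, y_score)
print(f"acc={accuracy:.2f} TPR={tpr:.2f} TNR={tnr:.2f} AUC={auc(fpr_curve, tpr_curve):.2f}")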
b. Implementation
1) Predictive learning for malignancy extraction:
First, conventional augmentation, including rotation and flipping, is performed on the real images for training. In particular, 64 pairs of real mass images (a mass image y and a corresponding mask x that incorporates the shape and margin information of the mass) are randomly selected and augmented into 512 pairs of images for training the descriptive models. The remaining 48 pairs of real images are used for testing. In this work, we introduce a CNN architecture and perform predictive learning to classify each mass image with its corresponding mask as malignant or benign in the computational experiments step. The CNN architecture is shown in Fig. 9.

Fig. 9. CNN architecture for the mass classification: Dropout (0.3) is used before the fully connected layer
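The exact layer configuration of Fig. 9 is not reproduced here; the PyTorch sketch below is an assumed small CNN that only follows the stated constraints, namely a 256 x 256 input (mass image plus mask as two channels), a benign/malignant output, and Dropout(0.3) before the fully connected layer:

import torch
import torch.nn as nn

class MassClassifier(nn.Module):
    # Illustrative CNN for mass classification; channel widths and depth are assumptions.
    def __init__(self, in_channels=2, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 256 -> 128
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),           # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),           # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.3),                         # dropout before the fully connected layer
            nn.Linear(64 * 32 * 32, num_classes),    # benign vs. malignant logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 4 image+mask pairs of size 256 x 256
print(MassClassifier()(torch.randn(4, 2, 256, 256)).shape)   # torch.Size([4, 2])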

2) Descriptive learning for synthetic data generation:


The quantity of available medical image data is always small. Combining large-scale generated synthetic data with the 'small' real data has been shown to be helpful in data-driven optimization problems. In this work, inspired by the idea that adversarial learning is a special case of parallel learning, we introduce a generative adversarial network structure for descriptive mass image generation in the artificial imaging system. Specifically, a conditional GAN (cGAN) structure is designed for generation from given binary masks x, which already incorporate the descriptive shape and margin information. The generator G and discriminator D of the GAN are trained to learn the distribution of mass images as well as a mapping G : {x, z} → y between masks x, random noise z, and real mass images y. Similar to existing image-to-image translation work, a U-Net structure is used as the generator and a PatchGAN architecture as the discriminator.
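For context, the mask-conditioned adversarial objective commonly used with such a U-Net generator and PatchGAN discriminator can be written as below; the L1 reconstruction term and its weight λ follow standard image-to-image translation practice and are assumptions rather than values reported here:

\mathcal{L}_{\mathrm{cGAN}}(G,D) = \mathbb{E}_{x,y}\big[\log D(x,y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x,z))\big)\big]

G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{\mathrm{cGAN}}(G,D) + \lambda\,\mathbb{E}_{x,y,z}\big[\lVert y - G(x,z)\rVert_{1}\big]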
3) Prescriptive learning for specific data generation and selection:
The data-to-knowledge workflow is bottom-up, with the medical visual and descriptive knowledge learned from existing training samples. Conversely, through knowledge-to-data inference in a top-down manner, the medical knowledge is used to guide image augmentation, selection, and generation, and to enhance the interpretability of diagnosis. In this case, prescriptive learning is adopted to generate specific malignant/benign mask images with the corresponding shape and margin, based on the descriptively extracted knowledge or prior medical knowledge. In this work, 37 benign and 75 malignant masks are augmented into 296 benign and 600 malignant masks for training the DCGAN model. A total of 262 benign and 409 malignant masks are then generated and used to produce the corresponding mass images through the cGAN model previously trained in the descriptive learning step. Some generated masks are shown in Fig. 10.

Fig. 10. Qualitative results for generating binary masks and corresponding mass
images by our proposed framework.
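A sketch of this prescriptive mask-then-image generation chain is given below; the generator interfaces are hypothetical placeholders for the trained DCGAN and cGAN models, introduced only to make the knowledge-to-data flow concrete:

import torch

def generate_specific_masses(mask_generator, image_generator, n, label, z_dim=100):
    # Prescriptive generation: sample binary masks of the desired class
    # (0 = benign, 1 = malignant), then map them to mass images.
    z = torch.randn(n, z_dim)
    labels = torch.full((n,), label)
    masks = (mask_generator(z, labels) > 0.5).float()        # class-specific shape/margin masks
    images = image_generator(masks, torch.randn(n, z_dim))   # mask-conditioned mass images
    return masks, images

# Example with trivial stand-ins for the trained generators
masks, images = generate_specific_masses(
    mask_generator=lambda z, y: torch.rand(z.shape[0], 1, 256, 256),
    image_generator=lambda m, z: torch.rand(m.shape[0], 1, 256, 256),
    n=4, label=1)
print(masks.shape, images.shape)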
By feeding the generated benign and/or malignant binary masks into the artificial imaging systems through descriptive learning, more specific realistic-looking lesion images, conditioned on interpretable properties such as mass margin and shape, can be collected. We can then extract more suitable medical knowledge through predictive learning in a data-driven way for the final diagnosis. The overall framework jointly employs image data collection and medical knowledge extraction in a closed loop through data-to-knowledge predictive learning and knowledge-to-data prescriptive learning, thereby achieving parallel data-knowledge-driven evolutionary optimization.

4. CONCLUSION.
In this paper, we propose an evolutionary data-knowledge-driven framework termed Parallel Medical Imaging for vision-based medical image analysis. Artificial imaging systems with descriptive learning allow large-scale collection of real and synthetic images for training and evaluating the models in the computational experiments. With knowledge-to-data inference in a top-down manner through prescriptive learning, we can select and generate specific image data based on prior or extracted medical domain knowledge. With data-to-knowledge inference in a bottom-up manner through predictive learning, we can extract medical knowledge for clinical diagnostic support systems. Through parallel evolution, a 'large' scale of medical image data is collected from a 'small' set of real images, followed by 'small' intelligence with interpretable medical knowledge extraction. Experimental results from the case study also demonstrate that the parallel data-knowledge-driven evolutionary scheme alleviates the limitation of the small quantity of available medical images and enhances the interpretability of the final diagnosis and prognosis with more descriptive information. Future work will focus on expanding the proposed PMI framework beyond diagnosis decision support in medical imaging. For the foreseeable future, the field of parallel medical imaging has tremendous potential to supplement and verify the work of clinicians, train radiologists to be more skilled, perform surgical planning, support intra-operative navigation, give personalized medicine recommendations, and visualize medical images with interpretable masks, particularly in the complex field of imaging analytics for complicated diseases.
References.
[1] C. Gou, T. Shen, W. Zheng, O. Kwan, and F.-Y. Wang.
[2] D. Shen, G. Wu, and H.-I. Suk, "Deep learning in medical image analysis," Annual Review of Biomedical Engineering, vol. 19, pp. 221–248, 2017.
[3] Z. Hu, J. Tang, Z. Wang, K. Zhang, L. Zhang, and Q. Sun, "Deep learning for image-based cancer detection and diagnosis: a survey," Pattern Recognition, 2018.
[4] W. Zhu, C. Liu, W. Fan, and X. Xie, "DeepLung: Deep 3D dual path nets for automated pulmonary nodule detection and classification," arXiv preprint arXiv:1801.09555, 2018.
