
Watershed Segmentation

Related terms:

Image Segmentation, Segmentation, Target Structure, Energy Concentration, Segmentation Method, Segmentation Technique


Image Analysis for Medical Visualization
Bernhard Preim, Charl Botha, in Visual Computing for Medicine (Second Edition), 2014

4.3.5 Watershed Segmentation


Watershed segmentation is another region-based method that has its origins in
mathematical morphology [Serra, 1982]. The general concept was introduced by
Digabel and Lantuejoul [1978]. A breakthrough in applicability was achieved by
Vincent and Soille [1991], who presented an algorithm that is orders of magnitude
faster and more accurate than previous ones (see also [Hahn, 2005] for a discussion
of the “watershed segmentation history”). Since then, it has been widely applied to
a variety of medical image segmentation tasks.

In watershed segmentation, an image is regarded as a topographic landscape with
ridges and valleys. The elevation values of the landscape are typically defined by the
gray values of the respective pixels or their gradient magnitude. Based on such a
3D representation, the watershed transform decomposes an image into catchment
basins. For each local minimum, a catchment basin comprises all points whose
path of steepest descent terminates at this minimum (see Fig. 4.13). Watersheds
separate basins from each other. The watershed transform decomposes an image
completely and thus assigns each pixel either to a region or a watershed. With noisy
medical image data, a large number of small regions arises. This is known as the
“over-segmentation” problem (see Fig. 4.14).
Figure 4.13. Principle of the watershed transform where the intensity values define
hills and basins. For segmentation purposes, basins may be flooded in order to
combine corresponding regions (From: [Hahn, 2005]).

Figure 4.14. Illustration of the over-segmentation problem of the watershed transform
applied to an axial slice of a CT image. Individual basins are merged to form
successively larger regions. (Courtesy of Thomas Schindewolf, Fraunhofer MEVIS Bremen)

The most widespread variant employs the gradient image (recall § 4.2.6) as the basis
for the watershed transform. Gradient magnitude, however, is strongly sensitive to
image noise. Therefore, appropriate filtering is essential. There are many variants of
how the watershed transform may be used as the basis for a general segmentation
approach. The “over-segmentation” problem may be solved by criteria for
merging regions, and the user must be provided with facilities to influence the
merging process. We describe the approach introduced by Hahn and Peitgen [2000]
and later refined by Hahn and Peitgen [2003].
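
As a rough illustration of this widespread variant (not the cited authors' implementation), the following sketch computes a watershed transform on a smoothed gradient-magnitude image with scikit-image; the example image and filter settings are purely illustrative.

```python
from skimage import data, filters, segmentation

image = data.coins()                          # illustrative grayscale image
smoothed = filters.gaussian(image, sigma=2)   # filtering: gradient magnitude is noise-sensitive
gradient = filters.sobel(smoothed)            # the topographic "landscape"

# Without markers, every local minimum of the gradient seeds a catchment
# basin; on noisy data the basin count exposes the over-segmentation problem.
labels = segmentation.watershed(gradient)
print("catchment basins:", labels.max())
```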

Merging Basins The decomposition of an image into regions is the basis for merging
them. In the metaphorical sense of a landscape, catchment basins are merged at
their watershed locations by flooding them. While some regions merge early (at a
low flooding level), other regions are merged later (see Fig. 4.15). In order to support
interactive merging, Hahn and Peitgen [2003] introduced a merge tree. This tree
consists of the original catchment basins as leaves and of intermediate nodes that
represent merging events. A merging event is characterized by the nodes that are
merged and by the flood level that is necessary for merging. As a first step, a certain
amount of flooding may be applied (“pre-flooding”), which may already be sufficient
for segmenting the target structure [Hahn and Peitgen, 2000].
Figure 4.15. Principle of the watershed transform where the intensity values define
hills and basins. For segmentation purposes, basins may be flooded in order to
combine corresponding regions. As a result, a merge tree arises consisting of merge
events (From: [Hahn and Peitgen, 2003]).

Marker-based Watershed Often, however, no examined flooding level is sufficient
to segment target structures. Therefore, the user may specify image locations that
belong to the target structure (include points), or that do not belong to the target
structure (exclude points). If the user specifies an include point and an exclude point,
an additional watershed is constructed at the maximum level between them (see
Fig. 4.16). The merge tree is traversed such that each region contains either include
points or exclude points, but not both. This interaction style is called marker-based
watershed segmentation. There are many variants of the watershed transform. For
example, merging may also consider gradient information or other homogeneity
criteria. A frequently used variant is to merge regions where the difference of the
mean gray values is below a threshold. This process can be carried out iteratively
and also results in a hierarchical merging tree.

Figure 4.16. The image data is considered as a landscape and flooded to merge
small basins. To prevent merging, a watershed is erected between an include and
an exclude point specified by the user. (Courtesy of Horst Hahn, Fraunhofer MEVIS Bremen)
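
A minimal sketch of this marker-based interaction style, assuming scikit-image; the include/exclude coordinates stand in for hypothetical user clicks.

```python
import numpy as np
from skimage import data, filters, segmentation

image = data.coins()
gradient = filters.sobel(filters.gaussian(image, sigma=2))

# Hypothetical user clicks: label 1 = include point, label 2 = exclude point.
markers = np.zeros(image.shape, dtype=np.int32)
markers[60, 125] = 1     # inside the target structure
markers[5, 5] = 2        # background

# Flooding starts only from the markers; a watershed is erected where the
# two floods meet, separating the target from the background.
labels = segmentation.watershed(gradient, markers)
target_mask = labels == 1
```

With more markers, each flooded label can be mapped back to its include or exclude status, emulating the merge-tree traversal described above.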

Applications The watershed transform has been successfully applied to a variety of
segmentation tasks. Hahn and Peitgen [2000] extracted the brain with a single watershed
transform from MRI data. Also, the cerebral ventricles were reliably segmented
with minimal interaction. Hahn and Peitgen [2003] demonstrated the application to
the challenging problem of delineating individual bones in the human wrist (see Fig.
4.17). Kuhnigk et al. [2003] employed the above-described variant of watershed
segmentation for the delineation of lung lobes in CT data. Ray et al. [2008] used the
iterative watershed transform for hepatic tumor segmentation (and volumetry).
Figure 4.17. Watershed segmentation was applied to segment individual bones from
CT data. While bones are relatively easy to distinguish from other structures, the
challenge is to delineate individual bones (From: [Hahn and Peitgen, 2003]).


Multitemplate-based multiview learning for Alzheimer’s disease diagnosis
M. Liu, ... D. Shen, in Machine Learning and Medical Imaging, 2016

9.2.4 Feature Extraction


Features are first extracted from each individual template space, and then integrated
together for a more complete representation. In Section 9.2.4.1, a set of
regions-of-interest (ROIs) in each template space is first adaptively determined by
performing watershed segmentation (Vincent and Soille, 1991; Grau et al., 2004)
on the correlation map obtained between the voxel-wise tissue density values and
the class labels from all training subjects. Then, to improve both discrimination and
robustness of the volumetric feature computed from each ROI, in Section 9.2.4.2
each ROI is further refined by keeping only voxels with reasonable representation
power. Finally, to show the consistency and difference of ROIs obtained in all
templates, in Section 9.2.4.3 some analysis is provided to demonstrate the capability
of the feature extraction method in extracting complementary features from
multiple templates for representing each subject's brain.

9.2.4.1 Watershed segmentation


For robust feature extraction, it is important to group voxel-wise morphometric
features into regional features. Voxel-wise morphometric features (such as Jacobian
determinants, voxel-wise displacement fields, and tissue density maps)
usually have very high dimensionality, which includes a large amount of
redundant/irrelevant information as well as noise due to registration errors.
Using regional features, on the other hand, can alleviate these issues and thus
provide more robust features for classification.

A traditional way to obtain regional features is to use prior knowledge, that is,
predefined ROIs, summarizing all voxel-wise features in each predefined ROI.
However, this method is inappropriate when using multiple templates for
complementary representation of brain images, since ROI features from
multiple templates will then be very similar (we use the volume-preserving measurement
to calculate the template-specific morphometric pattern of tissue density change
within the same ROI w.r.t. each different template). To capture different sets of
distinctive brain features from different templates, a clustering method (Fan et al.,
2007) is adopted for adaptive feature grouping. Since clustering is performed
on each template space separately, the complementary information from different
templates can be preserved for the same subject image. As indicated in Fan et
al. (2007), the clustering algorithm can improve the discriminative power of the
obtained regional features and reduce the negative impact of registration errors.

Let f_i^k(u) denote the voxel-wise tissue density value at voxel u in the kth template for the
ith training subject, i ∈ [1, N]. The ROI partition for the kth template is based on
the combined discrimination and robustness measure, DRM_k(u), computed from
all N training subjects, which takes into account both feature relevance and spatial
consistency as defined below:

(9.1) DRM_k(u) = P_k(u) · C_k(u)

where P_k(u) is the voxel-wise Pearson correlation (PC) between the tissue density set
{f_i^k(u), i ∈ [1, N]} and the label set {y_i ∈ {−1, +1}, i ∈ [1, N]} (+1 for AD and −1 for NC) from all N training
subjects, and C_k(u) denotes the spatial consistency among all features in the spatial
neighborhood (Fan et al., 2007).

Watershed segmentation is then performed on each calculated DRM_k map to obtain
the ROI partition for the kth template. Note that, before applying watershed
segmentation, we use a Gaussian kernel to smooth each map DRM_k to avoid
possible oversegmentation, as also suggested in Fan et al. (2007). As a result, the
kth template is partitioned into R_k nonoverlapping regions, with each region r_l^k
owning a set of voxels. It is worth noting that each template will yield
its own unique ROI partition, since different tissue density maps (of the same subject)
are generated in different template spaces.
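
The sketch below mimics this partition step on synthetic data: it computes a voxel-wise absolute Pearson correlation map (standing in for DRM_k; the spatial-consistency term C_k is omitted for brevity), smooths it with a Gaussian kernel, and watershed-partitions the result. All array names and parameters are hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import segmentation

def drm_partition(dens, y, sigma=2.0):
    """Correlate each voxel with the labels, smooth, and watershed-partition."""
    y = (y - y.mean()) / y.std()
    d = (dens - dens.mean(axis=0)) / (dens.std(axis=0) + 1e-8)
    pearson = np.abs((d * y[:, None, None]).mean(axis=0))  # voxel-wise |PC|
    drm = ndi.gaussian_filter(pearson, sigma)   # pre-smoothing against over-segmentation
    # Watershed on the negated map, so highly discriminative voxels sit at
    # the bottoms of the catchment basins.
    return segmentation.watershed(-drm)

rng = np.random.default_rng(0)
dens = rng.random((40, 64, 64))                 # 40 synthetic "subjects"
y = np.array([1] * 20 + [-1] * 20)              # +1 = AD, -1 = NC
rois = drm_partition(dens, y)
print("number of ROIs:", rois.max())
```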
Fig. 9.4 shows the partition results obtained from the same group of images registered
to the two different templates. It is clear that the obtained ROIs are very
different, in terms of both their structures and discriminative powers (as indicated
by different colors). Those differences will naturally guide the subsequent steps of
feature extraction and selection, and thus provide the complementary information
to represent each subject and also improve its classification.

Fig. 9.4. Watershed segmentation of the same group of subjects on two different
templates. Color indicates the discriminative power learned from the group of
subjects (with the hotter color denoting more discriminative regions). Upper row:
two different templates. Lower row: the corresponding partition results.

9.2.4.2 Regional feature aggregation


Instead of using all voxels in each region for the total regional volumetric measurement,
only a subregion of each region is aggregated, to further optimize the discriminative
power of the obtained regional feature, by employing an iterative voxel selection
algorithm. Specifically, one first selects the most relevant voxel, according to the
PC calculated between this voxel’s tissue density values and the class labels from all N
training subjects. Then the neighboring voxels are iteratively included to increase the
discriminative power of all selected voxels, until no increase is found when adding
new voxels. Note that this iterative voxel selection process finally leads to a voxel
set (called the optimal subregion) selected from the region r_l^k. In this way, for a
given subject i, its lth regional feature from the kth template can be computed by
aggregating the tissue density values over the optimal subregion:

(9.2) F_i^k(l) = Σ_{u ∈ r̃_l^k} f_i^k(u)

where r̃_l^k denotes the optimal subregion of r_l^k.
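
A greedy sketch of this voxel-selection step, reusing the synthetic dens and y arrays from the previous sketch; the helper name and the 4-neighbor growth rule are illustrative, not the authors' exact algorithm.

```python
import numpy as np

def grow_subregion(dens_std, y_std, roi_mask):
    """Greedily add neighboring voxels while the |PC| of the aggregated
    regional feature keeps increasing; returns the subregion mask."""
    corr = np.abs((dens_std * y_std[:, None, None]).mean(axis=0))
    corr[~roi_mask] = -np.inf
    seed = np.unravel_index(np.argmax(corr), corr.shape)  # most relevant voxel
    selected = np.zeros_like(roi_mask)
    selected[seed] = True
    best, improved = corr[seed], True
    while improved:
        improved = False
        grown = np.zeros_like(selected)      # 4-neighbors of current subregion
        grown[1:, :] |= selected[:-1, :]; grown[:-1, :] |= selected[1:, :]
        grown[:, 1:] |= selected[:, :-1]; grown[:, :-1] |= selected[:, 1:]
        for v in zip(*np.where(grown & roi_mask & ~selected)):
            trial = selected.copy(); trial[v] = True
            feat = dens_std[:, trial].sum(axis=1)      # regional feature, Eq. (9.2)
            feat = (feat - feat.mean()) / (feat.std() + 1e-8)
            score = abs((feat * y_std).mean())
            if score > best:
                best, selected, improved = score, trial, True
    return selected

mask = rois == 1       # refine the first ROI from the previous sketch
sub = grow_subregion((dens - dens.mean(0)) / (dens.std(0) + 1e-8),
                     (y - y.mean()) / y.std(), mask)
```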

Each regional feature is then normalized to have zero mean and unit variance across
all N training subjects. Finally, from each template, the M (out of R_k) most discriminative
features are selected using their PC. Thus, for each subject, its feature representation
from all K templates consists of M × K features, which will be further selected for
classification. Fig. 9.5 shows the top 100 regions selected using the regional feature
aggregation scheme, for the same image registered to two templates (as shown in
Fig. 9.4). It clearly shows the structural and discriminative differences of regional
features from different templates.

Fig. 9.5. Illustration of the top 100 regions identified using the regional feature
aggregation scheme, where the same subject is registered to two different templates.
The axial, sagittal, and coronal views of the original MR image of the subject after
warping to each of the two different templates are displayed. Color indicates the
discriminative power of the identified region (with the hotter color denoting more
discriminative region). Upper row: image registered to template 1. Lower row: image
registered to template 2. (For the definitions of both hetero-M and homo-M, please
refer to Section 9.2.4.3.)

9.2.4.3 Anatomical analysis


It is important to understand how the identified regions (ROIs) from different
templates are correlated with the target brain abnormality (ie, AD), in order to better
reveal the advantages of using multiple templates for morphometric pattern analysis
in comparison to using only a single template. Accordingly, we categorize the identified
regions (ROIs) into two classes: (1) the class with homogeneous measurements
(homo-M) and (2) the class with heterogeneous measurements (hetero-M) (see Fig.
9.5). The homo-M refers to the regions that are simultaneously identified from
different templates, whereas the hetero-M refers to the regions identified in a certain
template but not in other templates. In Fig. 9.5, it can be observed that a region
within the left corpus callosum is identified in both templates 1 and 2 (see the
coronal view). On the other hand, a region within the frontal lobe is only identified
in template 1, and a region within the temporal lobe is only identified in template 2
(see the sagittal view). When jointly considering all identified regions from different
templates in the classification, the integration of homo-M features helps improve
both robustness and generalization of feature extraction for unseen subjects, while
the combination of hetero-M features provides complementary information for
distinguishing subjects during classification.


Computational Automated System for Red Blood Cell Detection and Segmentation
Muhammad Mahadi Abdul Jamil, ... Mohd Farid Johan, in Intelligent Data Analysis for Biomedical Applications, 2019

8.6 Proposed Method for Red Blood Cell Detection and Segmentation
The controlled watershed method with a morphological operation is robust and
efficient for early segmentation algorithms, as reported in Refs. [13,18]. It is
fundamentally different from conventional segmentation tools in that it couples edge
detection with connected pixel component extraction. The under- and over-segmentation
problems are rectified by utilizing watershed segmentation to divide
images into unique regions based on their regional minima. For images of overlapping
blood cells, watershed segmentation is very effective when used with a marker [19].

Watershed segmentation increases the architectural complexity and computational
cost of the segmentation algorithm, but it successfully overcomes the problem of
highly overlapping RBCs. Fig. 8.3 shows the pseudocode of the developed marker-controlled
watershed method. The neighborhood of each single pixel is defined by
the Euclidean distance measurement based on the diameter of the structuring
elements, so that S_outer and S_inner return the sum of intensities of all pixels in
the neighborhood of the outer and inner circular areas of the structuring elements,
respectively. Two counters are employed to count the pixels covered by the corresponding
structuring elements during each pass. The boxed area specifies the computation
of the intensity.
Figure 8.3. The pseudocode of the algorithm.

The pseudocode of the connected-components-based marker-controlled watershed
algorithm is shown in Fig. 8.3, where p represents a pixel, I is the input preprocessed
image (processed by filtering and morphological operations), and L is the segmented
label image.

I(p) denotes the gray-level value of p, ne is a neighbor pixel of p, and I(ne) represents
the gray-level value of that neighboring pixel. The array d[p] is used to store the distance
from the lowest pixels or plateau, whereas the array l[p] is used to store the labels.
DMAX and LMAX indicate the maximum distance and the maximum label value
in the structure, respectively; DMAX corresponds to the distance from the first pixel
of the first row to the last pixel of the last row. Image scanning is repeated until the
d[p] array structure no longer changes, after which step two proceeds; likewise,
scanning in step two continues until the l[p] array structure no longer changes.
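
Since the published pseudocode is only available as a figure, here is a generic marker-controlled watershed sketch for overlapping round cells in the same spirit, assuming scikit-image; the Otsu threshold, minimum object size, and h value are illustrative choices, not the published parameters.

```python
from scipy import ndimage as ndi
from skimage import filters, measure, morphology, segmentation

def segment_cells(gray):
    # Foreground mask; Otsu thresholding is an illustrative choice.
    binary = gray > filters.threshold_otsu(gray)
    binary = morphology.remove_small_objects(binary, min_size=64)
    # Euclidean distance map: cell centers become peaks.
    dist = ndi.distance_transform_edt(binary)
    # One marker per cell via h-maxima, so touching cells are split at the
    # watershed between their distance peaks.
    markers = measure.label(morphology.h_maxima(dist, h=2))
    return segmentation.watershed(-dist, markers, mask=binary)
```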

Detection, Classification, and Estimation in the (t,f) Domain
In Time-Frequency Signal Analysis and Processing (Second Edition), 2016

12.6.3 Time-Frequency Image Features for Pattern Recognition


One approach to extract features from a given TFD is to interpret it as an image and
extract features from the TFD using image-processing techniques. This approach
exploits well-established image-processing methods to extract highly discriminating
features. The image-related features are discussed below.

• Geometric (t,f) features: signal components appear as regions of energy concentration
in the (t,f) domain. Image segmentation techniques such as watershed
segmentation can be used to locate such regions. Watershed segmentation
techniques can be adapted by interpreting the image as a topographic surface
where watershed boundaries separate each energy-concentration region [62].
Geometric features are then extracted from these segments to get information
regarding their geometry in the (t,f) plane. Let us assume that
a given TFD ρ(n,k) is segmented into L time-frequency regions. For
a given (t,f) image, the moments of order (p,q) can be extracted from each
segment of the image using the following expression [62]:

(12.6.18) m_pq^(l) = Σ_n Σ_k n^p k^q ρ_l(n,k), p,q = 0,1,2,…

where ρ_l(n,k) represents the lth segment of the (t,f) image, and m_pq^(l) represents the
moment of order (p,q) for the lth segment. The central moments are expressed as

(12.6.19) μ_pq^(l) = Σ_n Σ_k (n − n̄_l)^p (k − k̄_l)^q ρ_l(n,k)

where n̄_l and k̄_l are the coordinates of the centroid in the (t,f) plane of the lth
segment. The following shape features can be extracted from the moments [62]:
the (t,f) convex hull/area, the (t,f) perimeter, the (t,f) compactness, and the (t,f)
coordinates of the centroid of the segmented region. Newborn EEG seizure signals
appear as large connected regions in the (t,f) domain, while nonseizure signals
appear as small disconnected regions of energy concentration. So, the geometric
properties of EEG signals can be used as (t,f) features for detecting seizures.
• Texture (t,f) features: such features are related to the direction and shape of the
energy distribution in the (t,f) plane. They can be obtained by first convolving
a TFD with a set of convolution masks and then extracting features from the
convolved images [63]. The convolution masks are selected to detect certain
shapes in the time-frequency images, e.g., [−1,−2,0,1,2]^T can be used to detect
edges in the given image. Let ρ_j(n,k) represent a (t,f) image obtained as a result
of convolution with the mask H_j(n,k):

(12.6.20) ρ_j(n,k) = ρ(n,k) ∗∗ H_j(n,k)

where ∗∗ denotes 2D convolution. Statistical (t,f) features such as mean and
variance can then be extracted from ρ_j(n,k). Note that Eq. (12.6.20) can be
implemented in the ambiguity domain, therefore resulting in a product and two
FTs. The operation can be combined with the formulation of high-resolution
QTFDs (see Eqs. (2.7.36) and (3.2.13)), given that ρ_j(n,k) can be expressed as

(12.6.21) ρ_j(n,k) = W_z(n,k) ∗∗ γ(n,k) ∗∗ H_j(n,k)

where W_z(n,k) is the WVD and γ(n,k) is the TFD kernel. If we define
G_j(n,k) = γ(n,k) ∗∗ H_j(n,k), then the two 2D convolution operations can be
reduced to one, i.e., ρ_j(n,k) = W_z(n,k) ∗∗ G_j(n,k), thus reducing the computational cost.
• Histogram (t,f) features: the histogram is a graphical representation of the
distribution of data. Let us assume that the signal energy in a TFD is quantized
to L discrete levels. Then the histogram of a TFD gives the number of samples in
the TFD at each energy level i = 1,2,…,L; it can be estimated by counting, for
each level i, the (t,f) points whose quantized energy equals i (Eq. (12.6.22) in [64]).
Statistical quantities such as mean, standard deviation, skewness, and kurtosis
describing the shape of the (t,f) histogram can then be used as (t,f) features.
Contrast can also be used [65,66].
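
As a toy end-to-end illustration of the geometric (t,f) features, the sketch below uses a spectrogram as a stand-in QTFD, watershed-segments it, and evaluates segment moments in the spirit of Eq. (12.6.18); the chirp signal and mask threshold are arbitrary.

```python
import numpy as np
from scipy import signal
from skimage import segmentation

fs = 256.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * (20 * t + 10 * t ** 2))    # test chirp
_, _, tfd = signal.spectrogram(x, fs, nperseg=64)

# Watershed on the negated TFD: energy concentrations become basins.
labels = segmentation.watershed(-tfd, mask=tfd > 0.05 * tfd.max())

def tf_moment(tfd, labels, l, p, q):
    """Moment of order (p, q) of the lth (t,f) segment, cf. Eq. (12.6.18)."""
    n, k = np.nonzero(labels == l)
    rho = tfd[labels == l]
    return np.sum((n ** p) * (k ** q) * rho)

for l in range(1, labels.max() + 1):
    m00 = tf_moment(tfd, labels, l, 0, 0)
    if m00 == 0:
        continue
    n_bar = tf_moment(tfd, labels, l, 1, 0) / m00   # segment centroid
    k_bar = tf_moment(tfd, labels, l, 0, 1) / m00
    print(l, m00, n_bar, k_bar)
```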


Machine learning and its application in microscopic image analysis
F. Xing, L. Yang, in Machine Learning and Medical Imaging, 2016

4.3.1.2 Hierarchical image segmentation


Recently, the hierarchical strategy has been successfully applied to image segmentation
(Farabet et al., 2013; Uzunbaş et al., 2014). In general, hierarchical image
segmentation consists of two steps: candidate region generation and selection.
Specifically, a collection of segmentation candidates is first generated by running
existing segmentation algorithms with different parameters. Usually, an undirected
graph is constructed from these partition candidates, in which an edge exists
between two touching or overlapping regions. Next, based on some domain-specific
criteria, a subset of nonoverlapping regions is selected as the final segmentation
result. For example, Felzenszwalb’s method (Felzenszwalb and Huttenlocher, 2004)
with multiple levels is used to generate the segmentation candidate pool, and an
optimal purity cover algorithm (Farabet et al., 2013) can be adopted to select the
most representative regions. In Uzunbaş et al. (2014), the watershed segmentation
method with different thresholds gives a collection of partitions, and then a conditional
random field (CRF)-based learning algorithm is utilized to find the best
ensembles as the final segmentation.

In our implementation, a UCM (Arbelaez et al., 2011; Arbelaez, 2006), which defines
a duality between closed, nonself-intersecting weighted contours and a hierarchy
of regions, is used to generate a pool of segmentation candidates. Because of the
nice property of the UCM that segmentation results obtained with different thresholds
are nested into one another, we can construct a tree graph for this pool of segmentation
candidates. The final step is to solve this tree graph-based problem using dynamic
programming.

Given a set of segmentation candidates generated with different thresholds using
the UCM, an undirected and weighted tree graph, G = (V, E, w), is constructed, where
V = {v_i, i = 1, 2, …, n} represents the nodes, with each v_i corresponding to a segmented
region S_i, and E denotes the edges of the graph. The weight w(v_i) is learned via a general
random decision forest classifier to represent the likelihood of S_i being a real muscle cell.
An adjacency matrix A = {a_ij | i, j = 1, …, n} is then built, with a_ij = 1 if S_i ⊂ S_j or
S_j ⊂ S_i, and 0 otherwise. Denote by x ∈ {0, 1}^n the indicator vector, whose element is
1 if the corresponding node is selected and 0 otherwise. Finally, the constrained subset
selection problem is formulated as

(4.15) max_x Σ_i w(v_i) x_i, subject to x ∈ X

where X denotes all possible valid configurations of x (no two selected regions may
overlap, i.e., x_i + x_j ≤ 1 whenever a_ij = 1). Considering the special tree graph structure,
we can efficiently solve (4.15) via a dynamic programming approach with a bottom-up
strategy.
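
A compact sketch of that bottom-up dynamic program: because the UCM candidates are nested, selecting a node excludes its descendants, so each subtree contributes either its root's score or the best selection from its children. The tree encoding and the scores are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    w: float                              # cell-likelihood score of this region
    children: list = field(default_factory=list)

def best_selection(node):
    """Return (score, regions) maximizing total w with no nested overlap."""
    child_score, child_regions = 0.0, []
    for c in node.children:
        s, r = best_selection(c)
        child_score += s
        child_regions += r
    if node.w >= child_score:             # keeping the parent excludes descendants
        return node.w, [node]
    return child_score, child_regions

leaves = [Node(0.4), Node(0.7)]
root = Node(0.9, [Node(0.6, leaves), Node(0.3)])
score, picked = best_selection(root)      # score = 1.4: both leaves + the 0.3 region
```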

In order to ensure that solving (4.15) selects the desired regions, each candidate
region (node) must be assigned an appropriate muscle cell likelihood score w. In
our algorithm, each candidate region is discriminatively represented with a feature
vector that consists of a descriptor to model the convexity of the shape, and two
histograms to describe the gradient magnitudes of the pixels on the cell boundary
and inside the cell region. These morphological features are proposed based on the
following observations: (1) the shape of a muscle cell is nearly convex; (2) the cell
boundaries often exhibit higher gradient magnitudes; (3) the intensities within the
cell regions should be relatively homogeneous.


Detection and matching of object using proposed signature
Hany A. Elsalamony, in Emerging Trends in Image Processing, Computer Vision and Pattern Recognition, 2015

4 The proposed algorithm


The proposed object detection and matching framework is divided into three
parts: segmentation, construction of the objects' signatures in the image, and matching
those signatures to classify each object [13]. The segmentation process is the
cornerstone of this algorithm, giving initial hypotheses of object positions and
scales that support matching. These hypotheses are then refined through the object
signature classifier to obtain the final detection and signature matching results.
Figure 1 describes all steps of the proposed algorithm [22].

Figure 1. The proposed algorithm steps.

The figure starts with an example of an original RGB image containing different objects;
several morphological functions and filters (edge detection, erosion, dilation,
determination of the number of objects, watershed segmentation, etc.) are applied to
prepare the image. The areas, centroids, orientations, eccentricities, and
convex areas of every object can then easily be determined. Moreover, the boundary
points (x_ij, y_ij) of each object are calculated individually, where i represents the
object number and j the number of boundary points belonging to that object. These
boundary points and the objects' other properties are saved, and the proposed
signature is constructed for every object based on all this information. The
relation between this information and the Euclidean distance from the object's centroid
(x_tc, y_tc) is plotted and saved as an individual signature for each object, as shown
in Figure 1 for one object. The signatures of all objects are saved for matching against
any input object's signature, as in Section 5. Moreover, a contour
is drawn around each object, tracing its exterior boundary.

The third part of the proposed algorithm, after segmentation and signature construction,
is the matching process between the input and saved objects' signatures as described
above. Two different matching approaches are used, to compare their accuracy and
effectiveness: one uses statistical measures related to the signatures, and the second
uses SURF [22].

Additionally, all the steps shown in Figure 1 are applied to the input object to construct
its signature. The matching process depends on statistical measures computed on
both types of objects' signatures (saved ones and the input). Firstly, as shown in Figure 1,
not all objects in the example image have the same size, orientation, or even shape;
consequently, their signature data will not be equal in length or characteristics. For that
reason, and to improve matching accuracy, some preprocessing is carried
out: the data of all signatures are sorted, and the variability
in the dataset is then computed as the average of the absolute deviations of the data
points from their mean. The equation for the average deviation is

(4) AD_i = (1/n_i) Σ_j |x_ij − x̄_i|

where i indexes the objects in the image, j indexes an object's signature data, x_ij
represents a signature data point, x̄_i is their mean, and n_i is the number of signature
data rows. Secondly, Equation (4) is applied to all input and saved objects so that they
can be compared, the exact match being the one with the least error. Equation (5)
introduces the method for calculating the differences between the results of Equation (4):

(5) DIF = |AD_saved − AD_input|

The components of Equation (5) are the absolute value of the difference between the
two results of Equation (4) for the saved and input object signatures. The matching
decision is based on the least value of DIF, which identifies the exactly matched object.
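
A small sketch of this matching scheme with Eqs. (4) and (5); the signature arrays are synthetic placeholders.

```python
import numpy as np

def avg_deviation(sig):
    sig = np.sort(np.asarray(sig, dtype=float))   # preprocessing: sort the data
    return np.mean(np.abs(sig - sig.mean()))      # Eq. (4)

def match(input_sig, saved_sigs):
    ad_in = avg_deviation(input_sig)
    difs = [abs(avg_deviation(s) - ad_in) for s in saved_sigs]  # Eq. (5)
    return int(np.argmin(difs))                   # index of the matched object

saved = [np.random.rand(120), np.random.rand(95), np.random.rand(150)]
print("matched object:", match(saved[1] + 0.001, saved))   # -> 1
```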


Detecting distorted and benign blood cells using the Hough transform based on neural networks and decision trees
Hany A. Elsalamony, in Emerging Trends in Image Processing, Computer Vision and Pattern Recognition, 2015

2 Related work
In recent years, research on blood cell detection and the diagnosis of diseases using
image processing has grown rapidly. In 2010, Y.M. Hirimutugoda and Gamini Wijayarathna
presented a method to detect thalassaemia and malarial parasites in blood
sample images acquired from light microscopes, and investigated the possibility of
rapid and accurate automated diagnosis of red blood cell disorders. To evaluate
the accuracy of the classification in the recognition of medical image patterns,
they trained two back-propagation Artificial Neural Network models (3 layers and
4 layers) together with image analysis techniques on morphological features of
RBCs. The three-layer network had the best performance, with an error of 2.74545e-005 and
an 86.54% correct recognition rate. The trained three-layer ANN acts as the final detection
classifier to determine diseases [7].

In December 2012, D.K. Das, C. Chakraborty, B. Mitra, A.K. Maiti, and A.K.
Ray introduced a machine learning methodology for characterizing RBCs in anemia
based on microscopic images of peripheral blood smears. First, to reduce unevenness
of background illumination and noise, they preprocessed the peripheral blood smear
images using a geometric mean filter and the gray world assumption technique. The
watershed segmentation technique was then applied to the erythrocyte cells. Distorted
RBCs, such as sickle cells, echinocytes, tear drops, acanthocytes, and elliptocytes, as
well as benign cells, were classified based on their morphological shape changes. They
observed that when a small subset of features was selected using information gain
measures, the logistic regression classifier performed best, achieving an overall accuracy
of 86.87%, a sensitivity of 95.3%, and a specificity of 94.13% [8].

In May 2013, Thirusittampalam, Hossain, Ghita, and Whelan developed a novel
tracking algorithm that extracted cell motility indicators and determined cellular
division (mitosis) events in large time-lapse phase-contrast image sequences. Their
process of automatic, unsupervised cell tracking was carried out in a sequential
manner, with interframe cell association achieved by assessing the variation
in local cellular structures in consecutive frames of the image sequence. The
experimental results indicated that their algorithm achieved 86.10% overall tracking
accuracy and 90.12% mitosis detection accuracy [9].

Also in May 2013, Khan and Maruf presented an algorithm for cell segmentation and
counting via the detection of cell centroids in microscopic images. Their method was
specifically designed for counting circular cells with a high probability of occlusion.
The experimental results showed a cell-counting accuracy of 92%, even at around
60% overlap probability [10].

An algorithm presented by Mushabe, Dendere, and Douglas in July 2013 identified
and counted RBCs as well as parasites in order to perform a parasitemia calculation.
The authors employed morphological operations and histogram-based thresholds
to detect the RBCs, and they used boundary curvature calculations and Delaunay
triangulation to split overlapping cells. A Bayesian classifier with RGB pixel
values as features classified the parasites, and the results showed 98.5% sensitivity
and 97.2% specificity for detecting infected RBCs [11]. In 2014, Rashmi Mukherjee
presented an evaluation of the morphometric features of placental villi and capillaries
in preeclamptic and normal placentae. The study included light microscopic images
of placental tissue sections of 40 preeclamptic and 35 normotensive pregnant
women. The villi and capillaries were characterized based on preprocessing and
segmentation of these images. He applied principal component analysis (PCA),
Fisher’s linear discriminant analysis (FLDA), and hierarchical cluster analysis (HCA)
to identify the most significant placental (morphometric) features from the
microscopic images. FLDA identified five significant morphometric features (>90%
overall discrimination accuracy), and PCA returned three principal components
that cumulatively explained 98.4% of the total variance [12]. From this literature
survey, it was noticed that research work has been done towards characterizing
anemia-affected RBCs using computer vision approaches. The proposed methodology
is described in Section 6.


PDEs for Morphological Scale Spaces and Eikonal Applications
Petros Maragos, in Handbook of Image and Video Processing (Second Edition), 2005

7.4 Watershed Segmentation


Segmentation is one of the most important and difficult tasks in image analysis.
A powerful morphologic approach to image segmentation is the watershed [8, 83],
which transforms an image f(x,y) to the crest lines separating adjacent catchment
basins that surround regional minima or other “marker” sets of feature points. The
segmentation process starts with creating flooding waves that emanate from the set
of markers and flood the image gradient surface. The points where the emanating
waves meet each other form the segmentation boundaries. Very fast algorithms to
find a digital watershed via flooding have been developed on the basis of immersion
simulations [83] and hierarchic queues [8]. The simplest markers are the regional
minima of the gradient image, but very often they are extremely numerous and
lead to oversegmentation. Thus, in practice the flooding sources are a smaller set of
markers, which have been identified by a preliminary step as inside particles of the
regions or objects that need to be extracted via segmentation. The robust watershed
version using markers has been successfully used for both interactive and automated
segmentation.

There is also an eikonal interpretation of the watershed. Najman and Schmitt [59]
have established that (in the continuous domain and assuming that the image is
sufficiently smooth and has isolated critical points) the continuous watershed
is equivalent to finding a skeleton by influence zones with respect to a weighted
distance function that uses points in the regional minima of the image f as sources
and ||∇f|| as the field of indices. A similar result has been obtained by Meyer [54] for
digital images. In Maragos and Butt [47] the eikonal PDE modeling the watershed
segmentation of an image-related function f has been solved by finding a WDT
via level sets and curve evolution where the curve's normal speed is proportional
to 1/||∇f||. Specifically, if C(t) is a curve representing a marker boundary, then the
propagation of the corresponding flooding wave is modeled by

(81) ∂C/∂t = (1/||∇f||) N

where N denotes the unit normal of the curve. Thus, the curve speed is proportional
to 1/||∇f||. Related ideas can be found in [76]. Subsequently, Nguyen et al. [60]
proposed a PDE segmentation approach that combines the watershed ideas with
active contours. Namely, they used curve evolution as in (81) but with a velocity
V = c_0/||∇f|| − c_1 κ that contains both the eikonal speed 1/||∇f|| and a term
proportional to the curvature κ that smoothens the evolving boundaries. Further,
they related this combined curve evolution to minimizing an energy functional.

The stationary eikonal PDE ||∇T|| = 1/V corresponding to (81), where T is the curve's
arrival time, was solved in [47] using the FMM. Further, the results of this eikonal
PDE segmentation approach were compared with the digital watershed algorithm
via flooding [83] and with the eikonal approach solved via a discrete chamfer WDT
[54, 81]. In all three approaches, robust features are extracted first as markers
of the regions, and the original image I is transformed to another function f by
smoothing via alternating opening-closings, taking the gradient magnitude of the
filtered image, and changing (via morphologic reconstruction) the homotopy of the
gradient image so that its only minima occur at the markers. The segmentation is
done on the final outcome f of the above processing.
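
For the FMM step, a sketch along these lines can be written with the scikit-fmm package as a stand-in solver (not the implementation of [47]); the synthetic image, the smoothing, and the small constant guarding against division by zero are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
import skfmm   # scikit-fmm, a generic fast marching solver (assumed installed)

img = np.random.rand(128, 128)
f = ndi.gaussian_filter(img, 3)             # smoothed image-related function f
gy, gx = np.gradient(f)
speed = 1.0 / (1e-3 + np.hypot(gx, gy))     # V = 1 / ||grad f||, regularized

# phi is negative inside the marker and zero on its boundary, so the
# zero level set acts as the flooding source.
yy, xx = np.mgrid[:128, :128]
phi = np.hypot(yy - 64.0, xx - 64.0) - 5.0  # a disk-shaped marker

T = skfmm.travel_time(phi, speed)           # solves ||grad T|| = 1/V
```

With several markers, one travel-time map is computed per marker and each pixel is assigned to the marker with the earliest arrival; the meeting points of the fronts trace the watershed lines.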

In the standard digital watershed algorithm [8, 83], the flooding at each height
level is achieved by a planar distance propagation that uses the chessboard metric.
This kind of distance propagation is not isotropic and could give incorrect results,
particularly for images with large plateaus. Eikonal segmentation using chamfer
WDTs improves this situation a little but not entirely. In contrast, for images with
large plateaus/regions, segmentation via the eikonal PDE and curve evolution WDT
gives results close to ideal. As Fig. 11 shows, compared on a test image that is
difficult (because expanding wavefronts meet watershed lines at many angles
ranging from being perpendicular to almost parallel), the continuous segmentation
approach based on the eikonal PDE and curve evolution outperforms the discrete
segmentation results [47].

FIGURE 11. Performance of various segmentation algorithms on a test image (250
× 400 pixels). This image is the minimum of two potential functions. Its contour
plot (thin bright curves) is superimposed on all segmentation results. Markers are
the two source points of the potential functions. Segmentation results based on (A)
the digital watershed flooding algorithm; (B) a weighted distance transform (WDT) based
on the optimal 3 × 3 chamfer metric; and (C) a WDT based on curve evolution (the thick
bright curve shows the correct segmentation).

The standard watershed flooding, both in the digital algorithms [8, 83] and in
the eikonal curve evolution (81), corresponds to a uniform height flooding. This height
criterion corresponds to a contrast-based segmentation. Alternative criteria, such as
area (size-based segmentation) or volume (contrast- and size-based segmentation),
and the corresponding generalized floodings have been considered in [55]. In [75] the
uniform volume flooding was modeled via a time-dependent eikonal-type PDE and
level sets. In this new model, the evolution of a marker's boundary curve C is
described by

(82) ∂C/∂t = (1/(A(t) ||∇f||)) N

where A(t) denotes the instantaneous area enclosed by the evolving curve. The corresponding
stationary eikonal PDE is

(83) ||∇T|| = A(T) ||∇f||

where T is the arrival time. Whereas in the case of uniform height flooding the
FMM-based solution of the eikonal PDE was simple to implement, as done in
[47], in the case of uniform volume flooding solving the stationary PDE (83) has
the peculiarity of the time-varying term A(T). This was numerically solved in [75]
using a variation of the FMM that took into consideration the time-dependent area
variations of the fronts.
Experimental results of the PDE-based height and volume flooding applied to real
image segmentation are shown in Fig. 12. The volume flooding seems to perform in
a more balanced way. This is expected, since it exploits both the contrast and the (area)
size properties of the objects/regions present in an image. It is a useful segmentation
tool in cases where we want to keep a balance between the aforementioned image
properties, and it can give good results in cases in which contrast-driven segmentation
fails. In addition, the PDE implementation has the advantage of a more
isotropic flooding.

FIGURE 12. Watershed-type segmentation with various partial differential equation
(PDE)-based floodings. A: Original image. B: Markers (round black seeds) and two
stages of their evolution using the eikonal PDE corresponding to uniform height
flooding (contrast criterion). C: Final segmentation of the flooding in B. D: As in
B but using volume flooding (simultaneous contrast and area criteria). E: Final
segmentation of the flooding in D.



Machine learning as a means toward precision diagnostics and prognostics
A. Sotiras, ... C. Davatzikos, in Machine Learning and Medical Imaging, 2016

10.2.2 Spatial Grouping of Structural MRI


Structural imaging based on magnetic resonance provides information regarding
the integrity of gray and white matter structures in the brain, making it an integral
part of the clinical assessment of patients with dementia, such as AD and FTD.
Automated classification approaches applied to structural MRI data have shown
promise for the diagnosis of AD and the identification of whole-brain patterns of
disease-specific atrophy. In this scenario, when dimensionality reduction is performed
prior to a supervised machine learning task, such as patient classification,
it is appealing to adopt a supervised clustering approach. The goal is to exploit prior
information (ie, disease diagnosis) in order to generate regions of interest that are
adapted not only to the data, but also to the machine learning task, with the aim of
improving its performance.

This supervised approach was adopted by the COMPARE method (Fan et al., 2007),
which aims to perform classification of morphological patterns using adaptive regional
elements. COMPARE extracts spatially smooth clusters that can be used to train
a classifier to predict patient diagnosis by combining information stemming from
both the imaging signal and the subjects’ diagnoses. The two types of information are
integrated at each image location, p, in a multiplicative fashion through the score:

(10.1) s(p) = C(p) · P(p)

where C(p) measures the spatial consistency of the imaging signal, while P(p)
measures discriminative power. More precisely, P is calculated as the following
leave-one-out absolute Pearson correlation:

(10.2)

where ρ(p, i) denotes the Pearson correlation measured between the imaging signal
at p and the classification labels when excluding the ith subject/sample. The consistency
C(p) is the intraclass coefficient measuring the proportion of neighboring
feature variance that is explained by the inter-subject variability (McGraw and Wong,
1996; Fan et al., 2007). It takes values between 0 and 1, with higher values indicating
that the variance of the measurements across neighboring brain locations is small
with respect to the inter-subject variability of the imaging signal. As a result, the
score s(p) is bounded between 0 and 1, with values close to 1 indicating that the
imaging signal around p is simultaneously highly reliable and discriminative (ie,
highly correlated or anticorrelated with patient diagnosis).

This score map is subsequently smoothed, and its gradient is used in conjunction
with a watershed segmentation algorithm (Vincent and Soille, 1991) to partition the
brain into different regions (Fig. 10.1 presents brain regions generated by watershed
from white matter tissue density maps of demented and normally aging subjects
(Shen and Davatzikos, 2002; Fan et al., 2007)). These regions are then refined by
considering only locations that optimize the classification power of the extracted
features. This is performed in a region-growing fashion where, initially, only the node
of the region with the highest discriminative score is selected, and adjacent locations
are incrementally aggregated as long as the discriminative power does not decrease.
The previous steps are summarized in Algorithm 10.1. This approach extracts a single
connected component per watershed region. Each component comprises highly
discriminative elements whose average imaging signal may be used as a feature for
training a classifier, such as an SVM (Vapnik, 2000).

Fig. 10.1. Coronal and sagittal cross-sectional views of a watershed segmentation generated by COMPARE.

Algorithm 10.1

Pseudo-Code for Compare

The efficiency of this supervised dimensionality reduction scheme was demonstrated
in classifying patients with clinical dementia versus normal individuals, as
well as in distinguishing between schizophrenic patients and normal controls (Fan et
al., 2007). COMPARE is generic and can be readily extended to incorporate different
forms of prior information, such as those provided in regression and multiclass
classification settings.

10.2.2.1 Spatial grouping of rs-fMRI


Functional MRI is an imaging technique that reflects neural activity in the whole
brain by detecting changes in oxygen consumption. Resting-state fMRI reveals brain
networks (Biswal, 2012) by evaluating regional interactions that occur when the
subjects are relaxed and do not perform a particular mental task during the brain
scan. The dynamic nature of this imaging modality results in extremely voluminous
and complex datasets, underlining the need for efficient dimensionality reduction.

Clustering approaches have received considerable attention towards reducing the
dimensionality of functional data. This is due to the fact that clustering is not only an
efficient way to reduce the spatial dimension of rs-fMRI data, but also a biologically
meaningful one. Clustering sheds light on the mid-scale functional structure of the
brain, which is considered to follow a segregation and integration principle. In other
words, information is thought to be processed by compact groups of neurons in
the brain, or functional units, that collaborate towards addressing complex
tasks (Tononi et al., 1994).

Clustering approaches typically aim to divide the brain into spatially smooth areas
that are likely to correspond to the functional units that constitute the brain. This is
usually performed by first representing the brain in the form of a graph, where nodes
represent brain locations and edges connect nodes that correspond to spatially adjacent
locations. The weight of an edge represents the strength of the connectivity
between nodes and is estimated by computing the similarity between the rs-fMRI
signals measured at each node. The similarity is commonly measured by
the Pearson correlation or partial correlation (Smith et al., 2011). Once the graph
is constructed, adjacent brain locations that are strongly connected are grouped
together in the same parcel.
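
A toy sketch of this graph construction: 4-connected voxel nodes, with each edge weighted by the Pearson correlation of the two standardized time series. The array shapes and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((32, 32, 120))    # (x, y, time) toy rs-fMRI slab

# Standardize each voxel's time series so a mean of products is a correlation.
z = (ts - ts.mean(-1, keepdims=True)) / ts.std(-1, keepdims=True)

edges = {}
for dx, dy in ((1, 0), (0, 1)):            # 4-connectivity, each pair counted once
    a = z[:z.shape[0] - dx, :z.shape[1] - dy]
    b = z[dx:, dy:]
    w = (a * b).mean(-1)                   # Pearson correlation per edge
    for (i, j), wij in np.ndenumerate(w):
        edges[((i, j), (i + dx, j + dy))] = wij
```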

Numerous methods have been proposed for this task. Among the most popular
methods, one may cite hierarchical clustering (Cordes et al., 2002), normalized
clustering (Shen et al., 2010; Craddock et al., 2012), k-means (Bellec et al., 2010),
region growing (Blumensath et al., 2013; Heller et al., 2006), and Markov random
fields (MRFs) (Ryali et al., 2013; Golland et al., 2007; Honnorat et al., 2015). Different
methods exhibit distinct advantages and disadvantages. Generally, many of the
above methods are either initialization dependent (eg, region growing (Blumensath
et al., 2013; Heller et al., 2006) and k-means (Bellec et al., 2010)), or rely on complex
models that involve a large number of parameters (Golland et al., 2007). As a result,
they are sensitive to initialization and suffer from limitations related to the employed
heuristics (eg, hierarchical clustering may lead to the creation of poorly fit parcels
at coarser scales (Cordes et al., 2002; Honnorat et al., 2015)), and the large number
of inferred parameters that may negatively impact the quality of the locally optimal
solution that is obtained (Golland et al., 2007). Moreover, not all methods produce
contiguous parcels.

In order to address the aforementioned concerns, a discrete MRF approach, termed
GraSP (Graph-based Segmentation with Shape Priors), was recently introduced in
Honnorat et al. (2015). This approach adopts an exemplar-based clustering approach
that allows for the reduction of the number of parameters by representing the
rs-fMRI time series of each parcel by the signal of one of the nodes assigned
to it. Thus, the clustering framework is simplified through the encoding of the
parcels by their functional centers. Only one parameter needs to be chosen by the
user, the label cost K. This corresponds to the cost of introducing a new parcel into
the clustering result, which indirectly determines the size of the produced parcels
(Delong et al., 2012). Contrary to other MRF clustering methods (Ryali et al., 2013),
these parcels are connected (Fig. 10.2 presents a functional parcellation that was
produced for reducing the dimension of rs-fMRI scans from a neurodevelopmental
study (Satterthwaite et al., 2014)). Parcel connectedness is promoted without any
spatial smoothing by the inclusion of a shape prior term in the MRF energy
formulation (Veksler, 2008; Gulshan et al., 2010). Lastly, the energy is optimized in a
single step, thus removing the need for initialization and the specification of a stopping
criterion.

Fig. 10.2. Functional parcellation of the left hemisphere of the brain, projected on
an inflated brain surface.

The MRF energy is summarized in the following form:

E({l_p}) = Σ_p V_p(l_p) + Σ_p L_p({l_p}) + Σ_p S_p({l_p})

where p denotes a node of the brain graph, l_p the parcel that should contain this
node, V_p(l_p) is a cost that decreases when the node p is assigned to a parcel l_p
with a highly correlated rs-fMRI signal, L_p({l_p}) penalizes by a positive cost K the
introduction of a parcel with functional center p, and the S_p({l_p}) are the shape
priors that enforce the connectedness of each parcel. This energy is optimized
by exploiting advanced solvers (Delong et al., 2012) that could provide a substantial
advance over existing methods. Experimental results on large datasets demonstrated
that this approach is capable of generating parcels that are all highly coherent, while
the overall parcellation is slightly more reproducible than the results produced by
hierarchical clustering and normalized cuts (Honnorat et al., 2015).


Mathematical Morphology Applied to Circular Data
Allan Hanbury, in Advances in Imaging and Electron Physics, 2003

2 Segmentation
Morphological segmentation of a grayscale image is usually done by applying the
watershed algorithm to the gradient of the image. The circular centered gradient
operator allows one to segment an image containing circular data in the same
way. We present an example of the segmentation of an oriented texture. The aim
of this type of segmentation is to create regions in which the orientations are
homogeneous.

The steps in the segmentation algorithm, which are illustrated in Figure 20, are:

Figure 20. Steps in the segmentation of an oriented texture. (a) Initial image with size
420 × 1040 pixels. (b) Orientation image with size 50 × 125 pixels. (c) Morphological
circular centered gradient (with a square SE of size 2). (d) h-minima. (e) Watershed
segmentation of the circular centered gradient image (c) using the markers in (d).
The graylevel in each region encodes the mean orientation of the region. (Courtesy
of Scanwood System, Pont-à-Mousson, France)

(1) The Rao and Schunck algorithm is applied to the initial image (a plank of oak,
Figure 20(a)) to calculate the orientation image (Figure 20(b)). For the example,
the parameters σ1 = 1.4, σ2h = σ2v = 64, and Δh = Δv = 8 were used.
(2) The circular centered gradient of the orientation image is calculated (Figure 20(c)).
For the example, a square SE of size 2 was used.
(3) The minima are extracted. So as to avoid finding a large number of small
minima, which would result in an over-segmentation of the image, we close
the gradient image with a square SE of size 1 and then find the h-minima
(Soille, 1999) of height h = 5 (Figure 20(d)).
(4) The watershed is applied to the gradient image using the minima extracted
in the previous step as markers, producing the segmentation shown in
Figure 20(e). In this image, the watershed lines are shown in black, and the
graylevel of each region encodes the mean orientation of the region, calculated
using circular statistics. For visualization purposes, the segmentation obtained
is superimposed on the initial image in Figure 21(a), in which the black lines
represent the watershed lines. A code sketch of steps (2)–(4) follows this list.

Figure 21. Results of segmentations of oriented textures by the watershed algorithm
for four oak images. (Images courtesy of Scanwood System, Pont-à-Mousson, France.)
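
The sketch below illustrates steps (2)–(4), assuming scikit-image; a standard (noncircular) morphological gradient stands in for the circular centered gradient, which would additionally have to respect the wraparound of orientation values, and the SE sizes and h are illustrative.

```python
from scipy import ndimage as ndi
from skimage import data, measure, morphology, segmentation

# Stand-in "orientation image" (a real implementation would use the
# Rao and Schunck output and a circular centered gradient).
ori = ndi.gaussian_filter(data.camera().astype(float), 2)

se = morphology.square(5)
grad = morphology.dilation(ori, se) - morphology.erosion(ori, se)  # morphological gradient

closed = morphology.closing(grad, morphology.square(3))  # small closing against tiny minima
minima = morphology.h_minima(closed, h=5)                # step (3): h-minima of height h
markers = measure.label(minima)
labels = segmentation.watershed(grad, markers)           # step (4): marker-driven watershed
```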

Some further results of the segmentation of oriented textures are shown in Figures
21(b)–(d). In general, this algorithm manages to segment the textures into
homogeneous regions, with the more globally homogeneous textures segmented
into the fewest regions, as in Figure 21(b) for example. Some problems are nevertheless
present in the current formulation of the algorithm. The first is common to
almost all watershed segmentations and involves the choice of markers: if we take
all the minima of the gradient as markers, an over-segmentation (segmentation
into too many regions) is produced. With the current approach, a small closing
followed by the extraction of the h-minima, the number of regions is reduced,
but a change in the value of the parameter h can provoke a large difference in the
segmentation. The segmentation of an oriented texture can also be modified by
changing the scale parameter (σ1) in the calculation of the orientation image. A last
difficulty arises when the orientation variations are not localized enough to be detected
by the structuring element used in the gradient calculation, which can lead to the
presence of more than one orientation in one of the regions of the segmentation.
Several possible solutions to these problems remain to be studied: for example,
starting with an over-segmentation of the image and then fusing regions with
similar mean orientations using a graph of the partition (Meyer, 1999) so as to
eliminate over-segmented regions, or taking into account several partitions of the
same texture so as to find the most probable one (Nickels and Hutchinson, 1997).

3 Defect Detection with the Circular Centered Top-Hat


We show the application of the circular centered top-hat operator to the detection
of defects in oriented textures. The examples in this section were used by
Chetverikov and Hanbury (2002) in studying the contribution and the limits of using
the two most important perceptual properties of texture, regularity and isotropy,
in detecting texture defects. Here we show some of the examples which use the
orientation-based method, so as to illustrate the application of the circular centered
top-hat. For texture defects characterized by an orientation anomaly, the circular
centered top-hat is a good choice for creating an image in which the defects can be
detected by a threshold. To show the application of this operator, five images of size
256 × 256 pixels having a visible texture defect were chosen from the Brodatz (1966)
album. Their reference numbers are d050, d052, d053, d077, and d079.

The orientation images were calculated using the Rao and Schunck algorithm with
parameters σ1 = 1.75, Δh = Δv = 2, and σ2h = σ2v = 16, except for images d052 and
d079, for which σ2h = σ2v = 32 were used. The threshold for isolating the defect
was chosen by hand for each image. The results for the five images are shown in
Figure 22. In each line, the initial image, the orientation image, the result of the
circular centered top-hat, and the borders of the thresholded regions superimposed
on the initial image are shown. In textures d052 and d053, the defects are very
subtle modifications of the structure, yet they perturb the orientations enough to
be detected. The defect in texture d077 is easily seen and easily detectable in the
orientation image. In textures d050 and d079, the defects cause perturbations in
the orientation field, but the borders of these defects are not obvious, even to the
naked eye. Among these textures, the only one made up of oriented lines is d050,
but the others are anisotropic enough to have a uniform orientation field which is
perturbed by the defects. This approach is obviously not applicable to textures which
are not anisotropic, some examples of which are given by Chetverikov and Hanbury
(2002).
Figure 22. Results of defect detection by the circular centered top-hat applied to
some Brodatz textures. The images labeled “ini” are the initial images, “ori” the
orientation images, “ath” the top-hat operator results, and “det” the regions detected
by the threshold superimposed on the initial image.
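
An illustrative sketch of this detect-by-threshold idea: a standard grayscale top-hat stands in for the circular centered top-hat (which must additionally handle the wraparound of orientation values), and the image, SE size, and threshold are arbitrary.

```python
from scipy import ndimage as ndi
from skimage import data, morphology

# Stand-in orientation image; a faithful version would use the circular
# centered top-hat on a true orientation field.
ori = ndi.gaussian_filter(data.camera().astype(float), 2)

residue = morphology.white_tophat(ori, morphology.square(15))  # structures smaller than the SE
defects = residue > 20.0          # threshold chosen by hand, as in the text
labeled, n = ndi.label(defects)
print("candidate defect regions:", n)
```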

4 Defect Detection with the Labeled Opening


The labeled opening and its associated top-hat (Section II.G.5) have the advantage
of being extremely rapid, and are therefore attractive for high-speed industrial
inspection problems. In this section we show some examples of its application to the
important industrial problem of the automated detection of defects on wood boards
(Kim and Koivo, 1994; Silvén and Kaupinen, 1996; Niskanen et al., 2001), part of a
project done in collaboration with Scanwood System, Pont-à-Mousson, France.
In most of the existing algorithms, the defects are detected using color characteristics.
For example, the knots are considered to be the darkest objects on the boards.
We briefly consider the possibility of enriching the color information with a preliminary
detection which takes into account the fact that certain types of defects cause a
perturbation in the orientation of the veins in their neighborhood. This could allow
the detection of defects which are not completely discernible by their color.

For wood, the most important structural defects are the knots, some of which do
not have a color very different to that of the wood, but which nevertheless cause
perturbations in the surrounding vein orientations. The defects identifiable only by
a change in color are obviously not detectable by these orientational methods, the
same being applicable to defects due to external influences, such as insect stings.

For the experiments, we used a database of oak boards with a very high defect
occurrence. In order to speed up the calculation, we used a large separation between
the neighborhoods in the orientation image calculation. The parameters used were
σ1 = 1.4, σ2h = σ2v = 64, and Δh = Δv = 16. Some images and their corresponding
orientation images are shown in the first two columns of Figure 23. We then
calculated the top-hat based on the labeled opening of the orientation images. The
opening was done with a sector size of 45°, varying the orientation from 0° to 157.5°
in steps of 22.5°. A square SE of size 3 was used. The residue of this top-hat,
enlarged and superimposed on the initial image, is shown in the rightmost column
of Figure 23, in which the light regions correspond to the residues.
Figure 23. Results of the detection of regions having orientation perturbations using
the top-hat based on the labeled opening. The “ini” images are the initial images
of oak boards, on which the defects found by an expert are outlined in black (the
dark horizontal lines are red chalk marks on the board, and have no bearing on the
experiment). The “ori” images are the orientation images. The light regions of the
“eye” images correspond to the residues of the orientation images detected by the
top-hat. The size of the images, in pixels, is given below each image. (Courtesy of
Scanwood System, Pont-à-Mousson, France)

We briefly discuss the results shown in Figure 23:

• For image c005, the black vein is evidently not detected as it does not perturb
the orientation. The knot at the top right is detected, but the small knots at
the bottom left do not perturb the orientation enough to be detected.
• For image c007, the large knot is detected, but the fissures to the left of the
knot, which have similar orientations to the veins, are not detected. Some false
detections near the borders of the image are also present.
• Image c034 demonstrates that this method is not very useful on boards which
contain veins having elliptical forms. Their large curvature leads to many false
detections.

If one calculates the orientation image at a finer resolution, then smaller defects can
be detected. For oak, this is useful for detecting the small light patches, some of
which are indicated in Figure 24(a). Even if these are not classified as defects, their
detection can be important if one wishes to determine the aesthetic appearance of
the wood. The detection of these light patches based only on their color is rather
difficult, as their color is very similar to that of other structures on the wood. On
looking at the orientation image, one can see that because the light patches cut the
veins, they produce perturbations in the orientation field which can be detected by
a top-hat. The orientation image of Figure 24(a), calculated using the parameters
σ1 = 1.4, σ2h = σ2v = 16, and Δh = Δv = 8, is shown in Figure 24(b). The result of a
top-hat based on a labeled opening is shown in Figure 24(c). The parameters of this
operator are a sector size of 45°, a starting angle of 0°, and a step of 22.5°, and a
square SE of size 4 was used.

Figure 24. Detection of defects at a smaller scale. (a) An oak board with some of the
small white patches manually indicated (size 608 × 955 pixels). (b) Orientation image
(size 50 × 112 pixels). (c) Result of a top-hat based on a labeled opening. The light
pixels indicate the residue. (Courtesy of Scanwood System, Pont-à-Mousson, France)

Globally, it is clear that a perturbation in the vein orientation is not always associated
with a defect, and that a defect does not always perturb the surrounding veins. This
method of defect detection can therefore not function as a total solution to the defect
detection problem. The results can nevertheless be used in a defect classification
step, which takes color and other texture variations into account along with the
orientation perturbations in the calculation of the probability of a defect being
present at a certain position on the board.
