Automatic Segmentation of Bone Tissue in X-Ray Hand Images
1 Introduction
X-ray imaging is a widely available and low-cost imaging method [1]. Although it is a
popular imaging modality, computer-assisted analysis of X-ray images has some
difficulties which originate from the nature of X-rays, the X-ray machine, the X-ray film,
or the scanning method. Ogiela et al. [2] list some specific problems in the analysis
of 2D hand X-ray images: i) Some details become blurred because of
overlapping bones; as a result, some bones may disappear. The
segmentation algorithm must therefore estimate the form of the bone and its relation to the
other elements in the image. ii) Some patients may have extra bones, or missing bones,
that are not described a priori by anatomical maps. iii) Fractures
and displacements caused by injuries or pathological conditions can appear
in the image. In addition to these problems, the X-ray beams do not reach the examined
material with equal strength, due to the location of the X-ray source. As a result, a
non-uniform intensity distribution occurs in the resulting X-ray image.
X-ray bone images are used in areas such as bone age assessment [3-7], bone
mass assessment [8,9], and examination of bone fractures [10]. Extraction of bone
tissue from other tissues and background (segmentation) is one of the main steps in
such applications. Segmentation of medical images is a challenging problem and a
necessary first step in many image analysis and classification processes [10]. There
are three approaches to the segmentation step: manual segmentation, semi-automatic
segmentation, and fully automatic segmentation. In manual segmentation, the selection of
different anatomical structures is carried out by an operator; hence, it leads to
operator-dependent and subjective results [11]. In semi-automatic segmentation, some
parameters and initial conditions of the method are set under the supervision of an expert
user. Semi-automatic and manual methods generally perform better because of the
expert knowledge involved. However, non-automatic segmentation
methods are laborious and time-consuming [12].
In fully automatic segmentation systems, the image is segmented by algorithms
running on a computer, without the need for expert knowledge. In the literature,
there are many studies on segmenting X-ray images. However, a fully automatic
segmentation method that segments all X-ray images has not yet been reported. Most of
the studies were applied to a limited number of images rather than to general cases [4,6,11]. Also, a
number of automatic segmentation methods are not applicable to all bones in the
hand X-ray image [3,5,8,9,13]. In addition, some studies need user intervention in
some parts of the algorithm [13,14]. Automatic segmentation is the first step in
automatic bone age assessment studies [4-7]. Likewise, an automatic segmentation
step is needed in bone density assessment [8,9] and bone fracture analysis [10].

In this study, an automatic segmentation method for X-ray hand images was
presented. Here, X-ray images were segmented using the C-means algorithm,
incorporating some structural prior information.
2 Method
In this study, a new method for the segmentation of bones in X-ray hand images was
proposed. The segmentation procedure consists of three steps. In the first step, the
intensity non-uniformity of the X-ray image was removed. Then, some structural
information about the image was obtained in order to improve the feature extraction
step.

In the second step, the features which emphasize the bone boundaries were
analyzed. Here, two feature images were generated. The difference operation between
the two feature images emphasizes the bone edges. A feature improvement
method was also applied in this step.

In the third step, an unsupervised neural network was used for segmenting the bone
tissue in the decision process. In this step, the feature image was given to the neural network
in small sub-windows in order to account for local feature variations.
The segmented image contains only bone edges. Finally, the bone tissue was extracted
by using morphological image processing techniques.
2.1 Pre-processing
Fig. 2. Result of the intensity distribution correction procedure: 3-D representation of the original image (a) and the corrected image (b)
The locations of the fingertips were found by analyzing the binary hand mask image. This task was
realized by looking for peaks on the boundary of the hand mask. The wrist region was determined by
creating a hand silhouette from the X-ray image. To do this, the X-ray image is masked by the hand mask
and the extracted hand region is filtered with a CWT filter with a scaling factor of 0.5.
The obtained hand silhouette has its maximum gray-level value at the position of the
wrist bones, because their thickness causes brighter regions in X-ray images.
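For illustration, a minimal sketch of this wrist-localization idea is given below, assuming that the corrected X-ray image and the binary hand mask are already available as NumPy arrays; Gaussian smoothing is used here as a stand-in for the paper's CWT filter, and the function name and the sigma value are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def locate_wrist_row(img, hand_mask, sigma=25.0):
    """Return the row index where the smoothed hand silhouette is brightest."""
    # Heavily blurred hand region stands in for the CWT-filtered silhouette.
    silhouette = gaussian_filter(img * hand_mask, sigma)
    row_strength = silhouette.sum(axis=1)      # accumulated brightness per row
    return int(np.argmax(row_strength))        # thick wrist bones give the peak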
2.2 Feature Extraction

An X-ray image consists of three different regions: background, soft tissue, and bone
tissue. Accordingly, two types of boundaries can be considered: the background-soft tissue
boundary and the soft tissue-bone tissue boundary. The boundaries between two regions
form edges which give higher variance values than other areas.

The goal of the feature extraction step is to expose the features which emphasize
the bone edges. Generating a variance map is an easy way of focusing on sudden
feature differences, or edges. However, when the variance of the image is calculated
directly, the soft tissue-background edges are revealed as well as the bone tissue edges.
Therefore, two variance maps are calculated: i) a variance map emphasizing the bone
tissue-soft tissue edges, and ii) a variance map emphasizing the soft tissue-background
edges.
For the first variance map, an averaged square image is created by equation (1).
In this equation, the energy value of a pixel is calculated by averaging the squared values of
the pixels in its N-neighborhood window, where N was selected as 1 in this study. This
operation makes the edges between bone tissue and soft tissue stronger than the other
edges.
E(x, y) = \frac{1}{(2N+1)^2} \sum_{j=-N}^{N} \sum_{i=-N}^{N} I(x+i, y+j)^2                (1)
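As an illustration, a minimal sketch of equation (1) is given below; the (2N+1)x(2N+1) averaging is done with a uniform (box) filter, and the function name and default value of N are assumptions made for this example.

import numpy as np
from scipy.ndimage import uniform_filter

def energy_image(img, N=1):
    """Local energy map of Eq. (1): mean of squared intensities in a (2N+1)^2 window."""
    img = img.astype(np.float64)
    return uniform_filter(img ** 2, size=2 * N + 1)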
Next, the variance map of the energy image, σ²_E(x, y), is computed over the same N-neighborhood:

\sigma^2_E(x, y) = \frac{1}{(2N+1)^2} \sum_{j=-N}^{N} \sum_{i=-N}^{N} \left( E(x+i, y+j) - \mu_E(x, y) \right)^2                (2)

where μ_E(x, y) is the average of E inside the N-neighborhood of the pixel at position (x, y).
This variance map has high values at the bone edges and at the boundaries between soft
tissue and background. In order to eliminate the unwanted soft tissue-background
boundaries and obtain a result image containing only bone edges, another feature
image, which is dominated by the soft tissue-background borders, is created. For this
purpose, a logarithmic image was created and the variance map of this image was
obtained. The soft tissue-background boundary is dominant in the resulting logarithmic
variance image. The calculations of the logarithmic image and its variance map are
given in equations (3) and (4).
L(x, y) = \ln\!\left( \frac{1}{(2N+1)^2} \sum_{j=-N}^{N} \sum_{i=-N}^{N} I(x+i, y+j) \right)                (3)

\sigma^2_L(x, y) = \frac{1}{(2N+1)^2} \sum_{j=-N}^{N} \sum_{i=-N}^{N} \left( L(x+i, y+j) - \mu_L(x, y) \right)^2                (4)
where μ_L(x, y) is the average gray-level value of the pixels inside the N-neighborhood
of the pixel at position (x, y), and N is the neighborhood degree, which was selected as 1
in this study.
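The two maps of equations (3) and (4) can be sketched as follows; the small epsilon that guards the logarithm against zero-valued background pixels and the function names are illustrative additions, and the variance is computed with the equivalent E[x²] − E[x]² form.

import numpy as np
from scipy.ndimage import uniform_filter

def log_image(img, N=1, eps=1e-6):
    """Eq. (3): logarithm of the local mean intensity."""
    local_mean = uniform_filter(img.astype(np.float64), size=2 * N + 1)
    return np.log(local_mean + eps)

def local_variance(img, N=1):
    """Eq. (4): local variance over a (2N+1)x(2N+1) neighborhood."""
    img = img.astype(np.float64)
    size = 2 * N + 1
    mean = uniform_filter(img, size=size)          # local mean (mu)
    mean_sq = uniform_filter(img ** 2, size=size)  # local mean of squares
    return mean_sq - mean ** 2

# Example use: sigma2_L = local_variance(log_image(img)), and likewise for the energy image.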
Initially, the two images created by equations (2) and (4) were normalized so as to
have the same average gray-level value. Next, the difference image φ was obtained by
subtracting the σ²_L feature from the σ²_E feature, as given in equation (5). This
procedure makes the bone edges dominant while eliminating the soft tissue-background
borders in the image. The variance features of the energy image and of the logarithmic image,
together with the difference image φ, are depicted in Fig. 3.

\phi = \sigma^2_E - \sigma^2_L                (5)
Fig. 3. Obtained feature images: (a) variance map of the energy image, (b) variance map of the logarithmic image, (c) difference image
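One possible reading of the normalization and subtraction step is sketched below; the paper only states that the two variance maps are brought to the same average gray level, so scaling the logarithmic-variance map to the mean of the energy-variance map is an assumption.

import numpy as np

def difference_feature(var_energy, var_log):
    """phi = sigma^2_E - sigma^2_L after equalizing the average gray levels (Eq. (5))."""
    target = var_energy.mean()
    var_log_scaled = var_log * (target / var_log.mean())  # match the average level
    return var_energy - var_log_scaled                    # bone edges remain dominant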
The difference of the variance maps, φ, contains the bone edges with higher values and gives
good results when used as a feature for the segmentation of bones. However, segmentation with
this feature generally fails in some specific regions, such as the fingertips and the wrist
region, where the variance values are not high enough. Improving the features at these specific
locations increases the segmentation performance.
To improve the features, φ was multiplied by a mask which has bell-shaped
curves with peaks at the pre-determined locations of the fingertips and the wrist region. A 3-D
representation of the multiplication mask is shown in Fig. 4a. There are five curves
at the fingertips and one curve at the wrist region. The width and peak value of each curve
were predetermined by taking anatomical features into consideration. The improvement of
φ is depicted in Fig. 4b in 3-D coordinates, where the z coordinate stands for
the feature value at the related position.
Fig. 4. Improvement of the features: (a) 3-D representation of the multiplication mask, (b) feature values in 3-D after the improvement step
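A minimal sketch of such a multiplication mask is given below, with Gaussian bumps standing in for the bell-shaped curves; the landmark coordinates, peak heights, and widths are illustrative values, not the anatomically derived ones used in the paper.

import numpy as np

def improvement_mask(shape, centers, peak=2.0, sigma=40.0):
    """All-ones mask raised by bell-shaped bumps at the fingertip and wrist positions."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.ones(shape, dtype=np.float64)
    for r, c in centers:  # five fingertip centers plus one wrist center
        mask += (peak - 1.0) * np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return mask

# Example use: phi_improved = phi * improvement_mask(phi.shape, fingertip_and_wrist_centers)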
2.3 Segmentation
A segmentation method that analyzes the feature image in sub-windows was used in this
study. The sub-windows were segmented by the neural network, and local segmentation
results were obtained from all sub-windows. Then, a method was developed for
combining the local segmentation results and generating a segmentation result for the
whole image. Finally, the segmentation result was processed by a series of structural
image processing methods in order to obtain the final segmentation result. The segmentation
process can be examined in three steps: i) segmentation in sub-windows, ii) obtaining
the global segmentation result from the local segmentation results, iii) converting the
segmentation result into the final segmented bone tissue by structural image processing
methods.
Because the feature values of the bone tissue vary over the whole hand region in the
feature image, a segmentation method which analyzes the entire input image at once
will fail. So, there is a need for a method which is able to observe local feature
variations in the image. For this purpose, the segmentation process was realized by
examining the feature image inside small, randomly located sub-windows.
The square sub-window size was set to about the width of a finger. Sub-window locations
were selected randomly during the segmentation process. This approach generates
more reliable results because it provides many decisions for each pixel, from many
points of view. The sub-windows are shown in Fig. 5; in order to keep the image
comprehensible, only fifty sub-windows are shown.
The sub-windows were segmented by using a C-means classifier with two nodes. The
nodes were labeled as edge (high valued) and non-edge (low valued) according to
their positions in the one-dimensional feature space. The distribution of the feature values
inside the sub-window determines the node positions; the nodes of the classifier are
attracted to the class centers automatically.

After the training step of the classifier has been completed for the selected sub-window,
the segmentation process starts. In the segmentation process, each pixel in the sub-window
is given to the classifier as a one-dimensional feature vector. The label of this pixel
is defined by the label of the nearest node of the classifier (according to the Euclidean
distance in the feature space).
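A minimal sketch of this decision step is given below, implementing the two-node C-means idea as a plain one-dimensional two-cluster update; the initialization at the minimum and maximum feature values and the fixed iteration count are assumptions.

import numpy as np

def segment_subwindow(window, n_iter=20):
    """Label each pixel of a sub-window as edge (True) or non-edge (False)."""
    values = window.ravel().astype(np.float64)
    lo, hi = values.min(), values.max()        # initial non-edge / edge node positions
    for _ in range(n_iter):
        assign_hi = np.abs(values - hi) < np.abs(values - lo)
        if assign_hi.any():
            hi = values[assign_hi].mean()      # nodes are attracted to class centers
        if (~assign_hi).any():
            lo = values[~assign_hi].mean()
    return np.abs(window - hi) < np.abs(window - lo)   # nearest node decides the label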
Every pixel in the hand will be covered by a sub-window due to the high number
(1000) of sub-windows. Since many sub-windows cover the same pixel, different
labels may be obtained for that pixel from different sub-windows. Therefore, in order
to store the segmentation results, two label counters were assigned to each pixel in the
image. When a pixel is covered by a sub-window, only one of its counters is incremented
by one, according to the result of the neural network. This process is iterated until the
determined number of sub-windows has been selected and segmented. After the sub-window
segmentation step has been completed, the global segmentation result is generated by
examining the label counters of each pixel in the image: the label of the counter with the
highest value is assigned to the pixel.
In the previous step, only the bone edges were found. However, the extraction of the
whole bone tissue is needed. To do this, the bone tissue was obtained by using the bone edge
information together with morphological image processing operations.
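The paper does not specify which morphological operations are applied, so the sketch below shows only one plausible choice: a binary closing of the edge map followed by hole filling, which turns closed bone contours into filled bone regions.

import numpy as np
from scipy.ndimage import binary_closing, binary_fill_holes

def edges_to_bone_mask(edge_mask, closing_size=5):
    """Convert a binary bone-edge map into a filled bone-tissue mask (illustrative)."""
    closed = binary_closing(edge_mask, structure=np.ones((closing_size, closing_size)))
    return binary_fill_holes(closed)            # interiors of closed contours become bone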
In order to examine the validity of the proposed method, X-ray hand images of ten
different people were segmented, and quantitative performance values of the segmentation
process were investigated by applying a manually created test set of 300 points in total,
selected from inside and outside of the bone tissue. Then, the
segmentation performance for each image was evaluated as the ratio of true decisions
to the test size. The hand X-ray images used in this study were obtained from the X-ray
database of the Image Processing and Informatics Laboratory of the University of
Southern California [16].
The proposed segmentation method was coded in Matlab® Release 14 and executed
on a notebook with a 1.73 GHz Intel Centrino processor. Segmentation of an image of about
1000x1200 pixels takes approximately three minutes (pre-processing about 1 minute, feature
extraction about 0.5 minutes, and segmentation about 1.5 minutes).
The performance results for the segmentation of the ten images are given in Table 1. The original
images and the segmentation results for two of them are shown in Fig. 6.
Fig. 6. (a, b) Original X-ray hand images; (c, d) segmentation results of (a) and (b), respectively
3 Discussion
In this work, a method for segmenting bone tissue from X-ray hand images was
developed. The features of bone tissue vary over the whole hand image; because of this,
methods that segment the bone tissue by analyzing the whole image at once often fail. Better and
more reliable results were obtained by segmenting the image according to local
feature variations and generating the global segmentation result by interpreting the local
segmentation results as a kind of statistical data.

In this study, using some structural information about the hand improved the
segmentation performance. It was observed that, without adding the structural
information, the segmentation process gave poor performance values. Thus, using the
structural information of the hand together with the extracted features gives better results.

Segmenting bones automatically is an extremely challenging task. The steps of the
segmentation process take nearly three minutes for an image of size 1000x1200.
The method gave similar results for different people's images, which is evidence
of the generality of the method.
In future work, the segmentation of bones from different organs will be studied.
Also, in this work, the bones were segmented as a single tissue; a method for segmenting
the image bone by bone will be investigated.
References
1. Felisberto, M.K., Lopes, H.S., Centeno, T.M., Arruda, L.V.R.: Object detection and
recognition system for weld bead extraction from digital radiographs. Computer Vision
and Image Understanding 102, 238–249 (2006)
2. Ogiela, M.R., Tadeusiewicz, R., Ogiela, L.: Graph image language techniques supporting
radiological, hand image interpretations. Computer Vision and Image Understanding 103,
112–120 (2006)
3. Garcia, R.L., Fernandez, M.M., Arribas, J.I., Lopez, C.A.: A fully automatic algorithm for
contour detection of bones in hand radiographs using active contours. In: International
Conference on Image Processing, pp. 421–424 (2003)
4. Han, C.C., Lee, C.H., Peng, W.L.: Hand radiograph image segmentation using a coarse-to-
fine strategy. Pattern Recognition 40, 2994–3004 (2007)
5. Zhang, A., Gertych, A., Liu, B.J.: Automatic bone age assessment for young children from
newborn to 7-year-old using carpal bones. Computerized Medical Imaging and
Graphics 31, 299–310 (2007)
6. Mahmoodi, S., Sharif, B.S., Chester, E.G., Owen, J.P., Lee, R.: Skeletal growth estimation
using radiographic image processing and analysis. IEEE Transactions on Information
Technology in Biomedicine 4, 292–297 (2000)
7. Pietka, E., Kurkowska, S.P., Gertych, A., Cao, F.: Integration of computer assisted bone
age assessment with clinical PACS. Computerized Medical Imaging and Graphics 27,
217–228 (2003)
8. Sotoca, J.M., Inesta, J.M., Belmonte, M.A.: Hand bone segmentation in
radioabsorptiometry images for computerised bone mass assessment. Computerized
Medical Imaging and Graphics 27, 459–467 (2003)
9. Haidekker, M.A., Stevens, H.Y., Frangos, J.A.: Computerised methods for X-ray-based small
bone densitometry. Computer Methods and Programs in Biomedicine 73, 35–42 (2004)
10. Jiang, Y., Babyn, P.: X-ray bone fracture segmentation by incorporating global shape
model priors into geodesic active contours. In: Computer Assisted Radiology and Surgery,
pp. 219–224 (2004)
11. Sharif, B.S., Chester, E.G., Owen, J.P., Lee, E.J.: Bone edge detection in hand
radiographic images. In: Engineering Advances: New Opportunities for Biomedical
Engineers, pp. 514–515 (1994)
12. Kurnaz, M.N.: Artımsal Yapay Sinir Ağları Kullanılarak Ultrasonik Görüntülerin
Bölütlenmesi (Segmentation of Ultrasonic Images Using Incremental Artificial Neural
Networks, in Turkish). Ph.D. Thesis, İstanbul Technical University (2006)
13. Bocchi, L., Ferrara, F., Nicoletti, I., Valli, G.: An artificial neural network architecture for
skeletal age assessment. In: International Conference on Image Processing, vol. 1, pp.
1077–1080 (2003)
14. Behiels, G., Maes, F., Vandermeulen, D., Suetens, P.: Evaluation of image features and
search strategies for segmentation of bone structures in radiographs using Active Shape
Models. Medical Image Analysis 6, 47–62 (2002)
15. Yuksel, A., Dokur, Z., Korurek, M., Olmez, T.: Modeling of inhomogeneous intensity
distribution of X-ray source in radiographic images. In: 23rd International Symposium on
Computer and Information Sciences (ISCIS 2008), pp. 1–5 (2008)
16. Image Processing and Informatics Laboratory, http://www.ipilab.org/