Change Detection of Pulmonary Embolism Using Isomeric Cluster and Computer Vision
Mekala Srinivasa Rao1, Sagenela Vijaya Kumar2, Rambabu Pemula3, Anil Kumar Prathipati3
1Department of Computer Science and Engineering, Lakireddy Bali Reddy College of Engineering, Mylavaram, India
2Department of Computer Science and Engineering, School of Technology, GITAM (Deemed to be University), Hyderabad, India
3Department of Computer Science and Engineering, RAGHU Engineering College, Visakhapatnam, India
Corresponding Author:
Rambabu Pemula
Department of Computer Science and Engineering, RAGHU Engineering College
Visakhapatnam, Andhra Pradesh, India
Email: [email protected], [email protected]
1. INTRODUCTION
Advances in digital imaging technology have produced an unprecedented growth in X-rays, high-definition pictures, and digital image archives. The vast repositories of visual data available on the Internet have led to a severe case of information overload. To cope with such large amounts of visual data, robust image indexing and efficient retrieval systems have become essential. In a similar vein, it is critical to extract useful information from large amounts of X-ray data to maximize resource usage. Detecting changes in the motion of objects in X-rays is necessary to analyze and comprehend the material of interest automatically. Content-based image retrieval (CBIR) and change detection are two of the basic low-level tasks in many X-ray and computer vision processing applications, among them behavior analysis, biometrics, e-commerce product cataloging, medical diagnosis, object tracking, texture matching, and visual surveillance. One of the most critical stages in creating advanced pattern recognition systems is extracting features from a dataset. A feature extraction method's success depends on selecting a suitable feature descriptor for a particular picture or X-ray. Visual components such as texture, form, gradients, and color are often used as feature descriptors in the computer vision applications mentioned above. It is challenging to create a feature descriptor that remains stable in the presence of shadow, scale variations, rotation, noise, lighting shifts, and blur.
As noted above, environmental conditions and changing backdrops also pose challenges for effective motion identification (or background removal) in X-rays. Furthermore, the feature descriptors must be fast and low-dimensional to meet real-time demands. Different methods have been suggested in the literature to address these difficulties in various ways; for a thorough overview of CBIR and motion detection methods, readers are referred to the survey articles in [1]. When it comes to the composition of an image, one of the most apparent and essential elements is its texture. Several computer vision applications have been successfully implemented using hand-crafted texture descriptors. A local descriptor builds an image representation from the visual characteristics of a particular region or neighborhood. Recent advances in motion analysis, image retrieval, texture classification, and identification have been driven by the local binary pattern (LBP) [2] and the scale-invariant feature transform [3]. Many LBP variants have been developed to improve the discriminative capacity and resilience of the algorithm across its many applications. A successful adaptation of LBP variants to the CBIR problem described above is shown in [3]. When applied to X-rays, pixel-level LBP histograms may be used to build background models and detect foreground motion. Apart from textural and spatiotemporal characteristics, local features and color fusion are among the techniques that have been used to remove background information from images.
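For concreteness, the following is a minimal sketch of the classical 8-neighbour LBP code and its histogram (not the specific variants cited above); the neighbour ordering and normalization are illustrative choices.

```python
import numpy as np

def lbp_code(patch):
    """Classical 3x3 local binary pattern: threshold the eight neighbours
    against the centre pixel and pack the resulting bits into one byte."""
    centre = patch[1, 1]
    # clockwise neighbour order starting from the top-left pixel
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin normalized LBP histogram, a simple texture feature for retrieval or matching."""
    h, w = image.shape
    hist = np.zeros(256, dtype=np.int64)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            hist[lbp_code(image[r-1:r+2, c-1:c+2])] += 1
    return hist / hist.sum()
```

Two such histograms can then be compared with any standard distance (for example, chi-square) to rank images in a CBIR setting.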
With regard to feature selection methods, most approaches rely on the relationship between a reference pixel and its surrounding pixels to determine their features. Because the LBP and its variants often compute the neighboring pattern in a single direction, it may not be easy to extract the potentially isomeric information available in the immediate neighborhood with this technique. By the isomeric property, we refer to isomeric (multi-directional) information in the immediate vicinity. In X-ray streams, medical image change detection (MICD) is a critical computer vision problem with many applications, including visual surveillance, traffic monitoring, synopsis creation, human-machine interaction, behavior analysis, anomaly detection, object tracking, and action identification. The MICD method splits an X-ray picture into two distinct areas, referred to as the background and foreground. As noted above, pre-processed X-ray frames are often utilized in higher-level processes such as image analysis. Because the result of the MICD algorithm has a significant effect on the overall performance of the subsequent stages in high-level applications, understanding how the algorithm works is essential. As a result, the technique must provide the most precise foreground/background segmentation possible. A significant advantage of MICD algorithms is that they do not require the user to manually configure the target and object masks. This function is also in charge of background creation and maintenance to distinguish between foreground and background objects. MICD algorithms may also assist visual object tracking techniques in allocating target objects for further processing, as shown in the image.
The creation of a robust MICD technique, however, is complex because of the many real-world difficulties discussed above. As a consequence of deep learning advances, many computer vision applications for intelligent transportation systems, notably MICD in autonomous vehicles, have seen significant improvement. Because of their accessibility and lower cost, X-ray-based analytics are often chosen over other modalities (such as LIDAR) in developing information technology. Change detection, also known as moving object detection, is a low-level X-ray technique widely employed in traffic analysis, intelligent surveillance, autonomous driving, and anomaly detection. Different real-world scenarios, such as changing weather conditions, variable object motion caused by the variable frame rates of different cameras, shadow, intermittent object motion caused by illumination variation, heterogeneous object shapes, fluctuation in background regions, and camera jitter, make change detection difficult. Furthermore, for real-time applications on various mobile devices, MICD techniques must operate at a high rate while using the fewest resources possible. These difficulties have been addressed in part (either separately or jointly) in the literature. Our in-depth study and analysis of current deep MICD techniques is a critical addition to the field of ITS applications and should also be highlighted.
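To make the change-detection task concrete, below is a minimal background-subtraction sketch; the running-average update and the fixed threshold are simplifying assumptions and not the method proposed in this paper.

```python
import numpy as np

def detect_changes(frames, alpha=0.05, threshold=25.0):
    """Running-average background model: each new frame is compared with the
    current background estimate, and pixels whose absolute difference exceeds
    the threshold are marked as foreground (changed)."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        foreground = np.abs(frame - background) > threshold
        masks.append(foreground)
        # update the background only where no change was detected
        background = np.where(foreground, background,
                              (1 - alpha) * background + alpha * frame)
    return masks
```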
2. LITERATURE REVIEW
Many LBP variants have been suggested in the literature for the development of CBIR systems. Guo et al. [4] transformed LBP into a rotationally invariant version, which lowers the dimensionality of texture classification features by restricting the number of possible rotational transformations. The researchers also created a completed LBP technique that decomposes an image into a globally thresholded and sign-magnitude binary pattern to enhance discriminative power and reduce false positives, and they developed a distance transform-based matching method for extracting local ternary patterns (LTP), which improved the appearance of texture patterns in low-light environments. According to Zhang et al. [5], [6], high-order local patterns can be recovered by storing multiple-order local derivative directional changes in a local derivative space. Murala et al. [7] used first-order derivatives in the vertical and horizontal directions to estimate texture retrieval performance and suggested local tetra patterns that outperformed earlier work [8]. As an additional contribution to the field, they created local maximum edge binary patterns for texture retrieval and object tracking [9].
Vipparthi et al. [10] also addressed illumination change by creating mask maximum edge patterns that maximize the edge information that can be captured. A local mesh pattern (LMeP) was also suggested by Vipparthi et al. [10] for encoding the relationship among the neighbors of a pixel. Since these LMeP patterns were retrieved at various distances, they demonstrated better performance in biomedical image retrieval applications. Peak valley edge patterns, local ternary co-occurrence patterns, and directional binary wavelet patterns were all suggested and implemented by the authors for use in comparable situations. Sorensen et al. [11] utilized a variety of LBP-based patterns to conduct a thorough quantitative study of pulmonary emphysema, which was followed up by studies by others [12]–[18]. Additionally, the local bit-plane decoded pattern [19] is utilized in several biomedical image retrieval studies. Di Ruberto [20] developed the OT COM descriptor, which extracts and employs the Radon transform and texton matrix histogram in tandem to evaluate texture patterns for classification. Recently, Song et al. [6] presented a diagonal texture structure descriptor that describes edge information as a receptive field characteristic, which they believe is a novel way of representing edge information. Image retrieval has also benefited from deep learning-based techniques in recent years, with promising results. Lin et al. [21] used deep learning frameworks to develop binary hash codes for rapid picture retrieval and built an overall descriptor by integrating in-depth data from multiple convolutional neural networks (CNNs). In contrast to previous studies, this work uses a selective approach for convolutional descriptor aggregation that minimizes the noisy background and foreground while preserving the essential deep features [10]. A click feature was utilized to bridge the gap between deep features and enhance the retrieval rate; it combined high-level features from a CNN and low-level features from dot-diffused block truncation coding. The researchers also presented a new deep multimodal distance learning method for query-based picture ranking. Several recent studies have likewise used deep learning frameworks to improve the retrieval of medical images from databases. This technology is being used to develop content-based medical image retrieval systems based on CNNs, and stacked denoising autoencoders and CNNs are also employed in conjunction to help develop computer-aided diagnostic systems.
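The hashing idea referenced above can be illustrated with a small sketch in which a real-valued deep descriptor is binarized and compared by Hamming distance; the mean-thresholding rule used here is an illustrative assumption and not the exact scheme of [21].

```python
import numpy as np

def binarize(features):
    """Turn real-valued descriptors (e.g. CNN embeddings, one per row) into
    binary codes by thresholding each dimension at the row mean."""
    return (features > features.mean(axis=-1, keepdims=True)).astype(np.uint8)

def hamming_rank(query_code, database_codes):
    """Rank database items by Hamming distance to the query code (smaller is closer)."""
    distances = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(distances)
```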
As previously mentioned, numerous significant studies on PEDIC change detection have been published in peer-reviewed literature. This body of work contains many outstanding surveys of traditional and deep neural network methods, background subtraction methods for background initialization and foreground subtraction, detection with a moving camera, foreground detection, maritime surveillance, moving object detection, traffic monitoring, wide-area detection, and motion detection. Only a few studies focus explicitly on deep learning-based techniques for change detection, a small number compared to the overall body of work, and those studies cover little more than the classification of various kinds of networks. Furthermore, while presenting comparative performance assessment tables, the authors assume that all of the current techniques are evaluated in the same way. In particular, they fail to address two critical problems linked to the assessment frameworks discussed in the literature: i) the lack of a formal evaluation framework and ii) inconsistent training/testing data divisions across methods. Existing deep change detection techniques use different divisions for training and testing, and because of the disparate data-division methods used by various publications, the findings reported by different papers are not comparable. Moreover, when the same video frames appear in both the training and testing sets, the models gain an unfair advantage during testing. Recently, a small number of researchers have attempted to solve this problem by providing scene-independent evaluation (SIE) on videos that were never seen during training. Existing surveys do not include a comparative analysis of the various assessment techniques used by the deep learning approaches now available. In contrast to prior research, this study provides a thorough empirical review of the existing deep learning model designs (technical characteristics) and evaluation techniques. To the best of our knowledge, this is the first attempt to evaluate and contrast the different evaluation frameworks utilized by the numerous deep change detection methods now in use.
suggested three essential background model maintenance policies: a memoryless updating technique for long-term history, random background sample replacement to represent short-term history, and spatial dispersion through background sample propagation. Many of the most advanced change detection methods available today use some combination of these strategies to different degrees. Adaptive updating rules, previously unavailable, were introduced to update the decision thresholds and learning rates, and foreground segmentation was included in the model update as well. Furthermore, an adaptive feedback system was created to continuously check the accuracy of the background model and the entropy of the segmentation and to adjust the parameters as needed. Mandal and colleagues [10] presented a deterministic approach for updating background models that has been successfully implemented in practice.
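As a rough illustration of the random-sample-replacement policy described above (in the spirit of sample-based background models), the sketch below overwrites one stored sample per background pixel with a small probability; the sample count and replacement probability are illustrative assumptions.

```python
import numpy as np

def update_background_samples(samples, frame, foreground_mask, replace_prob=1.0 / 16):
    """samples: (N, H, W) stored background samples per pixel; frame: (H, W) new frame;
    foreground_mask: (H, W) boolean mask of detected changes. Pixels classified as
    background randomly overwrite one of their stored samples, which implements a
    memoryless (exponentially decaying) long-term history."""
    n, h, w = samples.shape
    update = (~foreground_mask) & (np.random.rand(h, w) < replace_prob)
    rows, cols = np.nonzero(update)
    chosen = np.random.randint(0, n, size=rows.size)
    samples[chosen, rows, cols] = frame[rows, cols]
    return samples
```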
Traditional learning methods are still in widespread use, and many learning-based methods have been introduced into the literature, such as support vector machines (SVM), principal component analysis (PCA), and neural networks (NN). The self-organizing background subtraction algorithm, a real-world application of neural networks, was developed using a 2D self-organizing neural network design. It learns in a self-organizing way and uses what it has learned to construct the neural background model from the picture sequence while preserving pixel spatial connections. A winner-take-all function and a technique for updating the local weights of neurons are implemented, allowing learning to be spatially restricted to the immediate surroundings of the most active neurons. As a result, it operates as a competitive neural network. Several enhancements compared to previous models have also been identified. Background removal has been accomplished at various levels via the use of SVM models. Cheng et al. [24] developed an online learning system that monitors temporal changes over time by using one-class support vector machines (1-SVMs) to regulate spatial interactions. Furthermore, Han et al. [1] calculated background probability vectors for a collection of characteristics and then used an SVM to eliminate the background probabilities from the model. Others have looked at using SVM models to detect changes in a similar manner. PCA has been used for subspace learning to deal with lighting variations across video sequences. Previously, discriminative models and mixed subspace learning were used in conjunction with one another. However, regular subspace models are susceptible to outliers, noise, and missing data; as a result, they are inappropriate for a wide range of applications. Robust principal component analysis-based models, which estimate the background as a low-rank component and the foreground as a sparse matrix, have been developed to address these concerns. It has also been shown how to create robust spatiotemporal subspaces for dynamic videos. Many incremental efforts have also been made to enhance the overall performance of PCA models. Various studies have merged several modalities of algorithms in order to enhance their overall performance. Bianco et al. [25] conducted many tests using genetic programming to integrate different change detection methods. They suggested that multiple background models, such as a fusion of the YCbCr and RGB color models, be used to determine the background probability of a change detection methodology. In a similar vein, segmentation inclusion is one of the other interesting hybrids that researchers have revealed. Puttinaovarat et al. [26] present the development of a toolbox for river classification and change detection from Landsat images using water index analysis and four machine learning algorithms: K-means, ISODATA, maximum likelihood classification (MLC), and support vector machine (SVM).
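The low-rank-plus-sparse decomposition mentioned above can be sketched with a simple alternating scheme: a truncated SVD fits the background and soft-thresholding extracts the foreground. This is an illustrative approximation under assumed parameters, not the exact robust-PCA solvers used in the cited works.

```python
import numpy as np

def lowrank_sparse_split(frames, rank=1, lam=25.0, iters=10):
    """frames: (T, H, W) grayscale video. Each frame becomes a column of matrix D.
    Alternately fit a rank-constrained background L and a soft-thresholded residual S
    so that D ~ L + S; S collects the moving (foreground) pixels."""
    t, h, w = frames.shape
    d = frames.reshape(t, -1).T.astype(np.float64)        # pixels x frames
    s = np.zeros_like(d)
    for _ in range(iters):
        u, sig, vt = np.linalg.svd(d - s, full_matrices=False)
        l = (u[:, :rank] * sig[:rank]) @ vt[:rank]        # low-rank background
        residual = d - l
        s = np.sign(residual) * np.maximum(np.abs(residual) - lam, 0.0)  # sparse foreground
    return l.T.reshape(t, h, w), s.T.reshape(t, h, w)
```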
In addition to traditional methods, several non-conventional approaches have also proven effective in background removal. Researchers have suggested another set of interesting techniques to address the difficulties in motion identification: edge-based foreground segmentation, local codebook-based models, motion modeling, physics-based change detection, graph cut, and optical flow. Fuzzy models have also been investigated in the literature, and a more in-depth classification of conventional change detection methods is available. Regarding foreground detection, the available research suggests that threshold-based segmentation combined with post-processing methods is the most frequently utilized approach in the field. In addition, a slew of strategies has been suggested to update the foreground segmentation criteria adaptively, and the fuzzy similarity between the current frame and the background models has been assessed using interval similarity and membership values. Rambabu et al. [27] proposed an optimal thresholding technique using the fuzzy Otsu (OT-FO) method to improve image quality.
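For context, the classical Otsu threshold on which the cited OT-FO method builds can be computed as below; this is the standard histogram-based formulation for 8-bit images, not the fuzzy variant of [27].

```python
import numpy as np

def otsu_threshold(image):
    """Return the grey level that maximizes the between-class variance of an 8-bit image."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```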
and discriminator networks. Instead of using the usual residual architecture with a single frame, as depicted in Figure 2(a), the encoding/decoding block is composed of a convolution/deconvolution layer operating on three frames, as shown in Figures 2(b) and 2(c), followed by a rectified linear unit (ReLU). We incorporate the inception module into the residual architecture in order to improve its learning capability, as depicted in Figure 2(d). We present ResINet, a generator network for frame-wise haze removal that incorporates both residual and inception ideas; ResINet is a contraction of the terms residual and inception. The architecture of the proposed ResINet is depicted in Figure 2(e).
Figure 2. The change detection methods with (a) single frame, (b) three frames Conv, (c) 3×3 DeConv, (d)
inception module, and (e) architectural flow
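A rough PyTorch-style sketch of the blocks described above is given here: a convolution followed by ReLU for encoding, and an inception-style block added back to its input in the residual manner. The channel counts, kernel sizes, and class names are illustrative assumptions rather than the exact ResINet configuration.

```python
import torch
import torch.nn as nn

class EncodeBlock(nn.Module):
    """Convolution + ReLU, as in the encoding half of the encoder/decoder."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x))

class InceptionResidualBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 branches (inception idea) fused and added back
    to the input (residual idea) to improve learning capability."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch // 2, kernel_size=1)
        self.b3 = nn.Conv2d(ch, ch // 4, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(ch, ch // 4, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(ch, ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return self.relu(x + self.fuse(y))

# three consecutive grayscale frames stacked along the channel axis as the network input
frames = torch.randn(1, 3, 256, 256)
features = EncodeBlock(3, 64)(frames)
output = InceptionResidualBlock(64)(features)
```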
In addition, we review ground-breaking deep learning methods and datasets for identifying changes in consecutive frames, and we provide the results of an empirical investigation into these deep learning techniques. Several researchers have recently utilized convolutional neural networks (CNNs) to divide video frames into foreground and background areas, a task known as change detection; CNNs are a kind of neural network that learns from experience. Creating CNN models for MICD presents a different set of difficulties than developing CNN models for video-based or other picture tasks. Tasks such as segmentation, object identification, and picture classification learn their characteristics only in the spatial domain. When dealing with single image-based decision-making problems, the spatial characteristics are adequate to meet the application's needs, and it is not essential to take the time dimension into account. As a result, the models developed for these tasks do not function correctly in the MICD environment.
In action recognition, the characteristics collected from both the temporal and spatial dimensions are utilized to predict high-level categorization labels. For MICD, on the other hand, a spatiotemporal feature learning framework and the prediction of low-level dense pixel-wise labels are necessary prerequisites. The combination of these requirements makes designing and developing deep learning models for MICD challenging. The features of the most recent deep MICD techniques are discussed in more detail in Table 1.
$$PEDIC_{p,r}(a,b) = \sum_{i=1}^{\theta} \operatorname{sgn}(\cdot)$$

$$\operatorname{sgn}(Y) = \begin{cases} 0, & \text{if } Y \le 0 \\ 1, & \text{if } Y > 0 \end{cases}$$

$$\xi(i,m) = \begin{cases} 1, & \text{if } i \in \left[1, \left\lfloor \tfrac{m+1}{2} \right\rfloor \right] \\ 0, & \text{otherwise} \end{cases}$$
2. Multiresolution patterns: PEDIC is integrated with a multiresolution Gaussian filter. It has been shown in the field that the multiresolution Gaussian filter can be used effectively, as local derivative patterns demonstrated for LBP [3].
3. Feature representation and matching methods are both critical in the development of any pattern recognition application.
$$Hist_{\sigma} = \sum_{a=1}^{M} \sum_{b=1}^{N} \delta\!\left( MPEDIC_{p,r}^{\sigma,\theta}(a,b) - (nd, vd, pd, hd) \right)$$

for $0 \le nd \le W-1$, $0 \le vd \le W-1$, $0 \le pd \le W-1$, $0 \le hd \le W-1$.
4. The histogram of the PEDIC feature response map is utilized as the feature vector to provide a robust picture representation. The intra-PEDIC for a given picture is calculated using the following equations (a worked sketch is given after this list):
$$\text{intra-}PEDIC(a,b) = \bigoplus_{i=1}^{p} D_t\!\left( x_{t,u},\; x_{t,i},\; x_{t,\operatorname{mod}(i,p)+1} \right)$$

$$D(z_1, z_2, z_3) = \begin{cases} 00, & \text{if } |z_2 - z_1| > \tau \cdot z_1 \text{ and } |z_3 - z_2| > \tau \cdot z_1 \\ 01, & \text{if } |z_2 - z_1| > \tau \cdot z_1 \text{ and } |z_3 - z_2| \le \tau \cdot z_1 \\ 10, & \text{if } |z_2 - z_1| \le \tau \cdot z_1 \text{ and } |z_3 - z_2| > \tau \cdot z_1 \\ 11, & \text{if } |z_2 - z_1| \le \tau \cdot z_1 \text{ and } |z_3 - z_2| \le \tau \cdot z_1 \end{cases}$$
5. The inter-PEDIC for a given picture is calculated using analogous mathematical equations.
6. The bit-stream, color, and intra-PEDIC of each color channel define the pixel-level background model described in detail.
7. Stop.
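As a worked illustration of the equations above, the sketch below encodes a 3×3 neighbourhood with the two-bit D(z1, z2, z3) rule and accumulates the response histogram of step 4. The neighbour ordering, threshold value, and histogram binning are illustrative assumptions rather than the exact PEDIC implementation.

```python
import numpy as np

def d_code(z1, z2, z3, tau=0.1):
    """Two-bit code D(z1, z2, z3): each bit is 1 when the corresponding
    absolute difference stays within the threshold tau * z1."""
    b1 = 0 if abs(z2 - z1) > tau * z1 else 1
    b2 = 0 if abs(z3 - z2) > tau * z1 else 1
    return (b1 << 1) | b2          # 0b00 .. 0b11

def intra_code(patch, tau=0.1):
    """Concatenate the two-bit codes of the centre pixel against each neighbour
    and its circular successor (one assumed reading of the intra-PEDIC operator)."""
    centre = float(patch[1, 1])
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(len(neigh)):
        code = (code << 2) | d_code(centre, float(neigh[i]),
                                    float(neigh[(i + 1) % len(neigh)]), tau)
    return code

def pattern_histogram(image, tau=0.1):
    """Normalized histogram of the pattern responses, used as the feature vector."""
    h, w = image.shape
    hist = np.zeros(4 ** 8, dtype=np.int64)   # eight two-bit codes per pixel
    for a in range(1, h - 1):
        for b in range(1, w - 1):
            hist[intra_code(image[a-1:a+2, b-1:b+2], tau)] += 1
    return hist / hist.sum()
```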
migrate from the bloodstream into diseased or infected tissues in response to cytokines, typically moving along a chemical gradient in a process known as chemotaxis. Infiltration is the term used to describe the presence of lymphocytes in greater numbers than usual in a tissue, as depicted in Figure 3.
Local anesthetics may be administered at more than one site to permeate a region before a surgical operation as part of a medical intervention [29]. Extravasation, or "tissuing," refers to the unintentional iatrogenic leaking of fluids following phlebotomy or intravenous drug administration [30]–[32]. Lung opacification is defined as a reduction in the ratio of gas to soft tissue (including blood, lung parenchyma, and stromal cells) inside the lung. As presented in Figure 4, when examining a chest radiograph or CT scan for increased attenuation (opacification), it is critical to identify its location. The patterns may be classified into three categories: opacification of the airspace, lines, and dots.
6. CONCLUSION
This paper examines current medical image change detection techniques in terms of model design and assessment frameworks. Change detection is investigated using a range of current deep learning architectures, and the efficacy of the different architectures for change detection is evaluated. The MICD techniques are split into main categories and their related subclasses to give a thorough evaluation. The paper also presents a complete feature descriptor for CBIR and change detection applications. Designed in the spirit of isomerism, PEDIC makes use of both the PEDIC and clustering characteristics to achieve its goals. The proposed texture descriptor is considered robust in part because of its ability to extract line and corner point information from the immediate neighborhood. Additionally, just four isomeric cluster patterns are required to extract all directional information.
REFERENCES
[1] B. Han and L. S. Davis, “Density-based multifeature background subtraction with support vector machine,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1017–1023, May 2012, doi: 10.1109/TPAMI.2011.243.
[2] M. Srinivasa Rao, V. Vijaya Kumar, and M. Krishna Prasad, “Texture classification based on local features using dual
neighborhood approach,” International Journal of Image, Graphics and Signal Processing, vol. 9, no. 9, pp. 59–67, Sep. 2017,
doi: 10.5815/ijigsp.2017.09.07.
[3] S. T. Cochran, K. Bomyea, and J. W. Sayre, “Trends in adverse events after IV administration of contrast media,” American
Journal of Roentgenology, vol. 176, no. 6, pp. 1385–1388, Jun. 2001, doi: 10.2214/ajr.176.6.1761385.
[4] E. Guo et al., “Learning to measure change: fully convolutional siamese metric networks for scene change detection,” Oct. 2018,
arXiv:1810.09111.
[5] J. Zhang et al., “X-Net: a binocular summation network for foreground segmentation,” IEEE Access, vol. 7, pp. 71412–71422,
2019, doi: 10.1109/ACCESS.2019.2919802.
[6] W. Song et al., “Taking advantage of multi-regions-based diagonal texture structure descriptor for image retrieval,” Expert
Systems with Applications, vol. 96, pp. 347–357, Apr. 2018, doi: 10.1016/j.eswa.2017.12.006.
[7] S. Murala, R. P. Maheshwari, and R. Balasubramanian, “Directional binary wavelet patterns for biomedical image indexing and
retrieval,” Journal of Medical Systems, vol. 36, no. 5, pp. 2865–2879, Oct. 2012, doi: 10.1007/s10916-011-9764-4.
[8] S. Murala and Q. M. J. Wu, “Local mesh patterns versus local binary patterns: biomedical image indexing and retrieval,” IEEE
Journal of Biomedical and Health Informatics, vol. 18, no. 3, pp. 929–938, May 2014, doi: 10.1109/JBHI.2013.2288522.
[9] P. W. Patil, S. Murala, A. Dhall, and S. Chaudhary, “MsEDNet: multi-scale deep saliency learning for moving object detection,”
in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct. 2018, pp. 1670–1675, doi:
10.1109/SMC.2018.00289.
[10] M. Mandal, L. K. Kumar, and S. K. Vipparthi, “MOR-UAV: a benchmark dataset and baselines for moving object recognition in
UAV videos,” in Proceedings of the 28th ACM International Conference on Multimedia, New York, NY, USA: ACM, 2020, pp.
2626–2635.
[11] L. Sørensen, S. B. Shaker, and M. de Bruijne, “Quantitative analysis of pulmonary emphysema using local binary patterns,” IEEE
Transactions on Medical Imaging, vol. 29, no. 2, pp. 559–569, Feb. 2010, doi: 10.1109/TMI.2009.2038575.
[12] P. D. Stein et al., “Multidetector computed tomography for acute pulmonary embolism,” New England Journal of Medicine, vol.
354, no. 22, pp. 2317–2327, Jun. 2006, doi: 10.1056/NEJMoa052367.
[13] M. Srinivasa Rao, V. Vijaya Kumar, and M. H. M. Krishna Prasad, “Texture classification based on statistical properties of local
units,” Journal of Theoretical and Applied Information Technology, vol. 93, no. 2, pp. 246–256, 2016.
[14] R. Tanaka et al., “Development of pulmonary blood flow evaluation method with a dynamic flat-panel detector: quantitative
correlation analysis with findings on perfusion scan,” Radiological Physics and Technology, vol. 3, no. 1, pp. 40–45, Jan. 2010,
doi: 10.1007/s12194-009-0074-1.
[15] R. Tanaka et al., “Pulmonary blood flow evaluation using a dynamic flat-panel detector: feasibility study with pulmonary
diseases,” International Journal of Computer Assisted Radiology and Surgery, vol. 4, no. 5, pp. 449–455, Sep. 2009, doi:
10.1007/s11548-009-0364-4.
[16] R. Tanaka et al., “Evaluation of pulmonary function using breathing chest radiography with a dynamic flat panel detector,”
Investigative Radiology, vol. 41, no. 10, pp. 735–745, Oct. 2006, doi: 10.1097/01.rli.0000236904.79265.68.
[17] R. Tanaka et al., “Detection of pulmonary embolism based on reduced changes in radiographic lung density during cardiac
beating using dynamic flat-panel detector: an animal-based study,” Academic Radiology, vol. 26, no. 10, pp. 1301–1308, Oct.
2019, doi: 10.1016/j.acra.2018.12.012.
[18] H. Watanabe et al., “Impact of earthquakes on risk for pulmonary embolism,” International Journal of Cardiology, vol. 129, no.
1, pp. 152–154, Sep. 2008, doi: 10.1016/j.ijcard.2007.06.039.
[19] S. R. Dubey, S. K. Singh, and R. K. Singh, “Local bit-plane decoded pattern: a novel feature descriptor for biomedical image
retrieval,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 4, pp. 1139–1147, Jul. 2016, doi:
10.1109/JBHI.2015.2437396.
[20] C. Di Ruberto, “Histogram of radon transform and texton matrix for texture analysis and classification,” IET Image Processing,
vol. 11, no. 9, pp. 760–766, Sep. 2017, doi: 10.1049/iet-ipr.2016.1077.
[21] K. Lin, H.-F. Yang, J.-H. Hsiao, and C.-S. Chen, “Deep learning of binary hash codes for fast image retrieval,” in 2015 IEEE
Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jun. 2015, pp. 27–35, doi:
10.1109/CVPRW.2015.7301269.
[22] Z. Zivkovic, “Improved adaptive gaussian mixture model for background subtraction,” in Proceedings of the 17th International
Conference on Pattern Recognition, 2004, vol. 2, pp. 28–31, doi: 10.1109/ICPR.2004.1333992.
[23] Y. Yamasaki, K. Abe, K. Hosokawa, and T. Kamitani, “A novel pulmonary circulation imaging using dynamic digital
radiography for chronic thromboembolic pulmonary hypertension,” European Heart Journal, vol. 41, no. 26, pp. 2506–2506, Jul.
2020, doi: 10.1093/eurheartj/ehaa143.
[24] L. Cheng and M. Gong, “Realtime background subtraction from dynamic scenes,” in 2009 IEEE 12th International
Conference on Computer Vision, Sep. 2009, pp. 2066–2073, doi: 10.1109/ICCV.2009.5459454.
[25] S. Bianco, G. Ciocca, and R. Schettini, “Combination of video change detection algorithms by genetic programming,” IEEE
Transactions on Evolutionary Computation, vol. 21, no. 6, pp. 914–928, Dec. 2017, doi: 10.1109/TEVC.2017.2694160.
[26] S. Puttinaovarat et al., “River classification and change detection from landsat images by using a river classification toolbox,”
IAES International Journal of Artificial Intelligence (IJ-AI), vol. 10, no. 4, pp. 948–959, Dec. 2021, doi:
10.11591/ijai.v10.i4.pp948-959.
[27] P. Rambabu and C. N. Raju, “The optimal thresholding technique for image segmentation using fuzzy Otsu method,” International
Journal of Applied Engineering Research, vol. 10, no. 13, pp. 33842–22846, 2015.
[28] S. V. Konstantinides et al., “2019 ESC guidelines for the diagnosis and management of acute pulmonary embolism developed in
collaboration with the european respiratory society (ERS),” European Respiratory Journal, vol. 54, no. 3, Sep. 2019, doi:
10.1183/13993003.01647-2019.
[29] H. Miyatake, T. Tabata, Y. Tsujita, K. Fujino, R. Tanaka, and Y. Eguchi, “Detection of pulmonary embolism using a novel
dynamic flat-panel detector system in monkeys,” Circulation Journal, vol. 85, no. 4, pp. 361–368, Mar. 2021, doi:
10.1253/circj.CJ-20-0835.
[30] K. M. Moser, “Frequent asymptomatic pulmonary embolism in patients with deep venous thrombosis,” JAMA: The Journal of the
American Medical Association, vol. 271, no. 3, Jan. 1994, doi: 10.1001/jama.1994.03510270069042.
[31] M. Sakuma et al., “Acute pulmonary embolism after an Earthquake in Japan,” Seminars in Thrombosis and Hemostasis, vol. 32,
no. 8, pp. 856–860, Nov. 2006, doi: 10.1055/s-2006-955468.
[32] S. Elyassami and A. Ait Kaddour, “Implementation of an incremental deep learning model for survival prediction of
cardiovascular patients,” IAES International Journal of Artificial Intelligence (IJ-AI), vol. 10, no. 1, pp. 101–109, Mar. 2021, doi:
10.11591/ijai.v10.i1.pp101-109.
BIOGRAPHIES OF AUTHORS