- Scanning Micro X-ray Fluorescence and Multispectral Imaging Fusion: A Case Study on Postage Stamps
- Neural Radiance Field-Inspired Depth Map Refinement for Accurate Multi-View Stereo
- Enhancing Deep Edge Detection through Normalized Hadamard-Product Fusion
- ControlFace: Feature Disentangling for Controllable Face Swapping
Journal Description
Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques published online monthly by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), PubMed, PMC, dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: CiteScore - Q1 (Computer Graphics and Computer-Aided Design)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.9 days after submission; acceptance to publication takes 3.4 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.7 (2023); 5-Year Impact Factor: 3.0 (2023)
Latest Articles
Automatic Segmentation of Mediastinal Lymph Nodes and Blood Vessels in Endobronchial Ultrasound (EBUS) Images Using Deep Learning
J. Imaging 2024, 10(8), 190; https://doi.org/10.3390/jimaging10080190 - 6 Aug 2024
Abstract
Endobronchial ultrasound (EBUS) is used in the minimally invasive sampling of thoracic lymph nodes. In lung cancer staging, the accurate assessment of mediastinal structures is essential but challenged by variations in anatomy, image quality, and operator-dependent image interpretation. This study aimed to automatically detect and segment mediastinal lymph nodes and blood vessels employing a novel U-Net architecture-based approach in EBUS images. A total of 1161 EBUS images from 40 patients were annotated. For training and validation, 882 images from 30 patients and 145 images from 5 patients were utilized. A separate set of 134 images was reserved for testing. For lymph node and blood vessel segmentation, the mean ± standard deviation (SD) values of the Dice similarity coefficient were 0.71 ± 0.35 and 0.76 ± 0.38, those of the precision were 0.69 ± 0.36 and 0.82 ± 0.22, those of the sensitivity were 0.71 ± 0.38 and 0.80 ± 0.25, those of the specificity were 0.98 ± 0.02 and 0.99 ± 0.01, and those of the F1 score were 0.85 ± 0.16 and 0.81 ± 0.21, respectively. The average processing and segmentation run-time per image was 55 ± 1 ms (mean ± SD). The new U-Net architecture-based approach (EBUS-AI) could automatically detect and segment mediastinal lymph nodes and blood vessels in EBUS images. The method performed well and was feasible and fast, enabling real-time automatic labeling.
Full article
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
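The Dice similarity coefficient, precision, and sensitivity reported in the abstract above are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch of the Dice computation (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy 4x4 masks: predicted vs. ground-truth segmentation
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
score = dice_coefficient(pred, gt)  # 2*3 / (4+3) = 6/7
```

The per-image scores are then averaged to obtain the mean ± SD values quoted in the abstract.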
Open Access Review
Insights into Ultrasound Features and Risk Stratification Systems in Pediatric Patients with Thyroid Nodules
by
Carla Gambale, José Vicente Rocha, Alessandro Prete, Elisa Minaldi, Rossella Elisei and Antonio Matrone
J. Imaging 2024, 10(8), 189; https://doi.org/10.3390/jimaging10080189 - 5 Aug 2024
Abstract
Thyroid nodules in pediatric patients are less common than in adults but show a higher malignancy rate. Accordingly, the management of thyroid nodules in pediatric patients becomes more complex the younger the patient is, requiring careful evaluation by physicians. In adult patients, specific ultrasound (US) features have been associated with an increased risk of malignancy (ROM) in thyroid nodules. Moreover, several US risk stratification systems (RSSs) combining the US features of the nodule were built to define the ROM. RSSs were developed for the adult population, and their use has not been fully validated in pediatric patients. This study aimed to evaluate the available data about US features of thyroid nodules in pediatric patients and to provide a summary of the evidence regarding the performance of RSSs in predicting malignancy. Moreover, insights into the management of thyroid nodules in pediatric patients will be provided.
Full article
(This article belongs to the Special Issue Clinical and Pathological Imaging in the Era of Artificial Intelligence: New Insights and Perspectives)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00189/article_deploy/html/images/jimaging-10-00189-g001-550.jpg?1722854032)
Figure 1
Open Access Article
Screening Mammography Diagnostic Reference Level System According to Compressed Breast Thickness: Dubai Health
by
Entesar Z. Dalah, Maryam K. Alkaabi, Hashim M. Al-Awadhi and Nisha A. Antony
J. Imaging 2024, 10(8), 188; https://doi.org/10.3390/jimaging10080188 - 5 Aug 2024
Abstract
Screening mammography is considered to be the most effective means for the early detection of breast cancer. However, epidemiological studies suggest that longitudinal exposure to screening mammography may raise the radiation-induced risk of breast cancer, which underscores the need for optimization and internal auditing. The present work aims to establish a comprehensive, well-structured Diagnostic Reference Level (DRL) system that can be confidently used to highlight healthcare centers in need of urgent action, as well as cases exceeding the dose notification level. Screening mammograms from a total of 2048 women examined at seven different healthcare centers were collected and retrospectively analyzed. The typical DRL for each healthcare center was established and defined as per (A) bilateral image view (left craniocaudal (LCC), right craniocaudal (RCC), left mediolateral oblique (LMLO), and right mediolateral oblique (RMLO)) and (B) structured compressed breast thickness (CBT) criteria. Following this, the local DRL value was established per the bilateral image views for each CBT group. Screening mammography data from a total of 8877 images were used to build this comprehensive DRL system (LCC: 2163, RCC: 2206, LMLO: 2288, and RMLO: 2220). CBTs were classified into nine groups: <20 mm, 20–29 mm, 30–39 mm, 40–49 mm, 50–59 mm, 60–69 mm, 70–79 mm, 80–89 mm, and 90–110 mm. Using the Kruskal–Wallis test, significant dose differences were observed between all seven healthcare centers offering screening mammography. The local DRL values defined per bilateral image views for the CBT group 60–69 mm were 1.24 (LCC), 1.23 (RCC), 1.34 (LMLO), and 1.32 (RMLO) mGy. The local DRL defined per bilateral image view for a specific CBT highlighted at least one healthcare center in need of optimization. Such a comprehensive DRL system is efficient, easy to use, and clinically very effective.
Full article
(This article belongs to the Special Issue Clinical and Pathological Imaging in the Era of Artificial Intelligence: New Insights and Perspectives)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00188/article_deploy/html/images/jimaging-10-00188-g001-550.jpg?1722854828)
Figure 1
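The Kruskal–Wallis test used in the study above compares dose distributions across centers by ranking all observations jointly. As an illustration only (the dose values below are made up, not data from the paper), here is a self-contained NumPy sketch of the H statistic without the tie-correction factor that library implementations such as scipy.stats.kruskal apply:

```python
import numpy as np

def kruskal_h(*groups) -> float:
    """Kruskal-Wallis H statistic over several samples.
    Ties receive average ranks; no tie-correction factor is applied."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n_total = data.size
    order = np.argsort(data, kind="stable")
    sorted_vals = data[order]
    ranks = np.empty(n_total)
    i = 0
    while i < n_total:
        j = i
        # extend the run of tied values
        while j + 1 < n_total and sorted_vals[j + 1] == sorted_vals[i]:
            j += 1
        # mean of the 1-based ranks i+1 .. j+1
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    h, start = 0.0, 0
    for g in groups:
        g = np.asarray(g)
        h += ranks[start:start + g.size].sum() ** 2 / g.size
        start += g.size
    return 12.0 / (n_total * (n_total + 1)) * h - 3.0 * (n_total + 1)

# illustrative per-view dose samples (mGy) from two hypothetical centers
doses_a = [1.1, 1.3, 1.2, 1.5]
doses_b = [1.6, 1.8, 1.7, 1.9]
h_stat = kruskal_h(doses_a, doses_b)
```

Comparing the statistic against a chi-squared distribution with (number of groups − 1) degrees of freedom then gives the p-value for the significance claim.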
Open Access Article
Semantic Segmentation in Large-Size Orthomosaics to Detect the Vegetation Area in Opuntia spp. Crop
by
Arturo Duarte-Rangel, César Camacho-Bello, Eduardo Cornejo-Velazquez and Mireya Clavel-Maqueda
J. Imaging 2024, 10(8), 187; https://doi.org/10.3390/jimaging10080187 - 1 Aug 2024
Abstract
This study focuses on semantic segmentation in Opuntia spp. crop orthomosaics, a significant challenge due to the inherent variability in the captured images. Manual measurement of Opuntia spp. vegetation areas can be slow and inefficient, highlighting the need for more advanced and accurate methods. For this reason, we propose to use deep learning techniques to provide a more precise and efficient measurement of the vegetation area. Our research focuses on the unique difficulties posed by segmenting high-resolution images exceeding 2000 pixels, a common problem in generating orthomosaics for agricultural monitoring. The research was carried out on an Opuntia spp. cultivation located in the agricultural region of Tulancingo, Hidalgo, Mexico. The images used in this study were obtained by drones and processed using advanced semantic segmentation architectures, including DeepLabV3+, UNet, and UNet Style Xception. The results offer a comparative analysis of the performance of these architectures in the semantic segmentation of Opuntia spp., thus contributing to the development and improvement of crop analysis techniques based on deep learning. This work sets a precedent for future research applying deep learning techniques in agriculture.
Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00187/article_deploy/html/images/jimaging-10-00187-g001-550.jpg?1722829609)
Figure 1
Open Access Article
Overlapping Shoeprint Detection by Edge Detection and Deep Learning
by
Chengran Li, Ajit Narayanan and Akbar Ghobakhlou
J. Imaging 2024, 10(8), 186; https://doi.org/10.3390/jimaging10080186 - 31 Jul 2024
Abstract
In the field of 2-D image processing and computer vision, accurately detecting and segmenting objects that overlap or are obscured remains a challenge. This difficulty is compounded in the analysis of shoeprints used in forensic investigations, because they are embedded in noisy environments such as the ground and can be indistinct. Traditional convolutional neural networks (CNNs), despite their success in various image analysis tasks, struggle to accurately delineate overlapping objects due to the complexity of segmenting intertwined textures and boundaries against a background of noise. This study introduces and employs the YOLO (You Only Look Once) model enhanced by edge detection and image segmentation techniques to improve the detection of overlapping shoeprints. By focusing on the critical boundary information between shoeprint textures and the ground, our method demonstrates improvements in sensitivity and precision, achieving confidence levels above 85% for minimally overlapped images and maintaining above 70% for extensively overlapped instances. Heatmaps of convolution layers were generated to show how the network converges towards successful detection using these enhancements. This research may provide a potential methodology for addressing the broader challenge of detecting multiple overlapping objects against noisy backgrounds.
Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications (2nd Edition))
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00186/article_deploy/html/images/jimaging-10-00186-g001-550.jpg?1722503227)
Figure 1
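The abstract above feeds boundary information into a detector; as a rough illustration of the kind of boundary map an edge detector produces (a classical Sobel gradient-magnitude sketch in NumPy, not the authors' actual edge-detection stage, and the toy image is invented):

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map from 3x3 Sobel kernels (zero padding)."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):  # correlate with both kernels
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy)

# toy image with a vertical step edge between columns 1 and 2
img = np.zeros((5, 5))
img[:, 2:] = 1.0
edges = sobel_edges(img)
```

In practice such a map would be thresholded or fused with the raw image before being passed to the detection network.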
Open Access Article
A Cortical-Inspired Contour Completion Model Based on Contour Orientation and Thickness
by
Ivan Galyaev and Alexey Mashtakov
J. Imaging 2024, 10(8), 185; https://doi.org/10.3390/jimaging10080185 - 31 Jul 2024
Abstract
An extended four-dimensional version of the traditional Petitot–Citti–Sarti model on contour completion in the visual cortex is examined. The neural configuration space is considered as the group of similarity transformations, denoted as M. The left-invariant subbundle of the tangent bundle models possible directions for establishing neural communication. The sub-Riemannian distance is proportional to the energy expended in interneuron activation between two excited border neurons. According to the model, the damaged image contours are restored via sub-Riemannian geodesics in the space M of positions, orientations and thicknesses (scales). We study the geodesic problem in M using geometric control theory techniques. We prove the existence of a minimal geodesic between arbitrary specified boundary conditions. We apply the Pontryagin maximum principle and derive the geodesic equations. In the special cases, we find explicit solutions. In the general case, we provide a qualitative analysis. Finally, we support our model with a simulation of the association field.
Full article
(This article belongs to the Special Issue Modelling of Human Visual System in Image Processing)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00185/article_deploy/html/images/jimaging-10-00185-g001-550.jpg?1722426604)
Figure 1
Open Access Article
The Usefulness of a Virtual Environment-Based Patient Setup Training System for Radiation Therapy
by
Toshioh Fujibuchi, Kosuke Kaneko, Hiroyuki Arakawa and Yoshihiro Okada
J. Imaging 2024, 10(8), 184; https://doi.org/10.3390/jimaging10080184 - 30 Jul 2024
Abstract
In radiation therapy, patient setup is important for improving treatment accuracy. The six-axis couch semi-automatically adjusts the patient's position; however, correcting for patient twist is difficult. In this study, we developed and evaluated a virtual reality setup training tool for medical students to understand and improve their patient setup skills for radiation therapy. First, we set up a simulated patient in a virtual space that reproduces the radiation treatment room. A gyro sensor was attached to the patient phantom in real space, and the twist of the phantom was linked to the patient in the virtual space. Training was conducted for 24 students, and their operation records were analyzed and evaluated. The training's efficacy was also evaluated through questionnaires provided at the end of the training. The total time required for patient setup tests before and after training decreased significantly from 331.9 s to 146.2 s. In the questionnaire on the training's usability, most trainees rated it highly. We found that training significantly improved students' understanding of the patient setup. With the proposed system, trainees can experience a simulated setup that can aid in deepening their understanding of radiation therapy treatments.
Full article
(This article belongs to the Special Issue Virtual Reality and Related Simulation Technologies in Medicine and Health Sciences)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00184/article_deploy/html/images/jimaging-10-00184-g001-550.jpg?1722360496)
Figure 1
Open Access Article
Optimized Crop Disease Identification in Bangladesh: A Deep Learning and SVM Hybrid Model for Rice, Potato, and Corn
by
Shohag Barman, Fahmid Al Farid, Jaohar Raihan, Niaz Ashraf Khan, Md. Ferdous Bin Hafiz, Aditi Bhattacharya, Zaeed Mahmud, Sadia Afrin Ridita, Md Tanjil Sarker, Hezerul Abdul Karim and Sarina Mansor
J. Imaging 2024, 10(8), 183; https://doi.org/10.3390/jimaging10080183 - 30 Jul 2024
Abstract
Agriculture plays a vital role in Bangladesh's economy. It is essential to ensure the proper growth and health of crops for the development of the agricultural sector. In the context of Bangladesh, crop diseases pose a significant threat to agricultural output and, consequently, food security. This necessitates the timely and precise identification of such diseases to ensure the sustainability of food production. This study focuses on building a hybrid deep learning model for the identification of three specific diseases affecting three major crops: late blight in potatoes, brown spot in rice, and common rust in corn. The proposed model leverages EfficientNetB0's feature extraction capabilities, known for achieving rapid high learning rates, coupled with the classification proficiency of SVMs, a well-established machine learning algorithm. This unified approach streamlines data processing and feature extraction, potentially improving model generalizability across diverse crops and diseases. It also aims to address the challenges of computational efficiency and accuracy that are often encountered in precision agriculture applications. The proposed hybrid model achieved 97.29% accuracy. A comparative analysis with other models (CNN, VGG16, ResNet50, Xception, MobileNet V2, autoencoders, Inception v3, and EfficientNetB0, which achieved accuracies of 86.57%, 83.29%, 68.79%, 94.07%, 90.71%, 87.90%, 94.14%, and 96.14%, respectively) demonstrated the superior performance of our proposed model.
Full article
(This article belongs to the Special Issue Imaging Applications in Agriculture)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00183/article_deploy/html/images/jimaging-10-00183-g001-550.jpg?1722493373)
Figure 1
Open Access Review
Special Types of Breast Cancer: Clinical Behavior and Radiological Appearance
by
Marco Conti, Francesca Morciano, Silvia Amodeo, Elisabetta Gori, Giovanna Romanucci, Paolo Belli, Oscar Tommasini, Francesca Fornasa and Rossella Rella
J. Imaging 2024, 10(8), 182; https://doi.org/10.3390/jimaging10080182 - 29 Jul 2024
Abstract
Breast cancer is a complex disease that includes entities with different characteristics, behaviors, and responses to treatment. Breast cancers are categorized into subgroups based on histological type and grade, and these subgroups affect clinical presentation and oncological outcomes. The subgroup of “special types” encompasses all those breast cancers with insufficient features to belong to the subgroup “invasive ductal carcinoma not otherwise specified”. These cancers account for around 25% of all cases, some of them having a relatively good prognosis despite high histological grade. The purpose of this paper is to review and illustrate the radiological appearance of each special type, highlighting insights and pitfalls to guide breast radiologists in their routine work.
Full article
(This article belongs to the Special Issue Clinical and Pathological Imaging in the Era of Artificial Intelligence: New Insights and Perspectives)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00182/article_deploy/html/images/jimaging-10-00182-g001-550.jpg?1722260418)
Figure 1
Open Access Article
Fiduciary-Free Frame Alignment for Robust Time-Lapse Drift Correction Estimation in Multi-Sample Cell Microscopy
by
Stefan Baar, Masahiro Kuragano, Naoki Nishishita, Kiyotaka Tokuraku and Shinya Watanabe
J. Imaging 2024, 10(8), 181; https://doi.org/10.3390/jimaging10080181 - 29 Jul 2024
Abstract
When analyzing microscopic time-lapse observations, frame alignment is an essential task to visually understand the morphological and translation dynamics of cells and tissue. While in traditional single-sample microscopy, the region of interest (RoI) is fixed, multi-sample microscopy often uses a single microscope that scans multiple samples over a long period of time by laterally relocating the sample stage. Hence, the relocation of the optics induces a statistical RoI offset and can introduce jitter as well as drift, which results in a misaligned RoI for each sample's time-lapse observation (stage drift). We introduce a robust approach to automatically align all frames within a time-lapse observation and compensate for frame drift. In this study, we present a sub-pixel precise alignment approach based on recurrent all-pairs field transforms (RAFT), a deep network architecture for optical flow. We show that the RAFT model pre-trained on the Sintel dataset performed with near-perfect precision for registration tasks on a set of ten contextually unrelated time-lapse observations containing 250 frames each. Our approach is robust for elastically undistorted and translation displaced (x,y) microscopic time-lapse observations and was tested on multiple samples with varying cell density, obtained using different devices. The approach only performed well for registration and not for tracking of the individual image components like cells and contaminants. We provide an open-source command-line application that corrects for stage drift and jitter.
Full article
(This article belongs to the Special Issue Medical Image Classification and Segmentation: Progress and Challenges)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00181/article_deploy/html/images/jimaging-10-00181-g001-550.jpg?1722314325)
Figure 1
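The study above recovers sub-pixel drift with RAFT optical flow; as a much simpler classical baseline for the same stage-drift problem, a whole-frame integer translation can be recovered from the peak of an FFT-based cross-correlation. A minimal NumPy sketch (illustrative only, not the authors' method; the synthetic frame stands in for a microscopy image):

```python
import numpy as np

def estimate_shift(ref: np.ndarray, moved: np.ndarray) -> tuple:
    """Estimate the integer (dy, dx) translation such that
    np.roll(ref, (dy, dx), axis=(0, 1)) best matches `moved`,
    via the peak of the FFT-based circular cross-correlation."""
    spectrum = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    corr = np.fft.ifft2(spectrum).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for axis, p in enumerate(peak):
        # wrap offsets larger than half the frame to negative shifts
        shift.append(int(p) - (ref.shape[axis] if p > ref.shape[axis] // 2 else 0))
    return tuple(shift)

# synthetic frame drifted by 3 px down and 5 px left
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
drifted = np.roll(frame, (3, -5), axis=(0, 1))
dy, dx = estimate_shift(frame, drifted)
```

Rolling the drifted frame by the negated shift then realigns it with the reference; unlike RAFT, this only handles rigid whole-frame translation at pixel resolution.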
Open Access Review
Image-Based 3D Reconstruction in Laparoscopy: A Review Focusing on the Quantitative Evaluation by Applying the Reconstruction Error
by
Birthe Göbel, Alexander Reiterer and Knut Möller
J. Imaging 2024, 10(8), 180; https://doi.org/10.3390/jimaging10080180 - 24 Jul 2024
Abstract
Image-based 3D reconstruction enables laparoscopic applications such as image-guided navigation and (autonomous) robot-assisted interventions, which require high accuracy. The review's purpose is to present the accuracy of different techniques and to label the most promising. A systematic literature search with PubMed and Google Scholar from 2015 to 2023 was applied, following the framework of "Review articles: purpose, process, and structure". Articles were considered when presenting a quantitative evaluation (root mean squared error and mean absolute error) of the reconstruction error (Euclidean distance between real and reconstructed surface). The search yielded 995 articles, which were reduced to 48 articles after applying exclusion criteria. From these, a reconstruction error data set could be generated for the techniques of stereo vision, Shape-from-Motion, Simultaneous Localization and Mapping, deep learning, and structured light. The reconstruction error varies from below one millimeter to more than ten millimeters, with deep learning and Simultaneous Localization and Mapping delivering the best results under intraoperative conditions. The high variance emerges from different experimental conditions. In conclusion, submillimeter accuracy is challenging, but promising image-based 3D reconstruction techniques could be identified. For future research, we recommend computing the reconstruction error for comparison purposes and using ex vivo/in vivo organs as reference objects for realistic experiments.
Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00180/article_deploy/html/images/jimaging-10-00180-g001-550.jpg?1721895485)
Figure 1
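The review's comparison metric, the reconstruction error, is the Euclidean distance between corresponding points of the real and reconstructed surfaces, summarized as RMSE and MAE. A minimal NumPy sketch (the toy points are invented, and the point clouds are assumed already registered and in one-to-one correspondence):

```python
import numpy as np

def reconstruction_errors(reference: np.ndarray, reconstructed: np.ndarray):
    """RMSE and MAE of the per-point Euclidean distances between a
    reference surface and its reconstruction; both inputs are (N, 3)
    point arrays assumed registered and in one-to-one correspondence."""
    distances = np.linalg.norm(reference - reconstructed, axis=1)
    rmse = float(np.sqrt(np.mean(distances ** 2)))
    mae = float(np.mean(distances))
    return rmse, mae

# two toy surface points: one reconstructed 5 mm off, one exact
reference = np.zeros((2, 3))
reconstructed = np.array([[3.0, 4.0, 0.0], [0.0, 0.0, 0.0]])
rmse, mae = reconstruction_errors(reference, reconstructed)
```

Because RMSE squares the distances, a few large outliers raise it well above the MAE, which is one reason the review reports both.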
Open Access Article
Deep Learning for Single-Shot Structured Light Profilometry: A Comprehensive Dataset and Performance Analysis
by
Rhys G. Evans, Ester Devlieghere, Robrecht Keijzer, Joris J. J. Dirckx and Sam Van der Jeught
J. Imaging 2024, 10(8), 179; https://doi.org/10.3390/jimaging10080179 - 24 Jul 2024
Abstract
In 3D optical metrology, single-shot deep learning-based structured light profilometry (SS-DL-SLP) has gained attention because of its measurement speed, simplicity of optical setup, and robustness to noise and motion artefacts. However, gathering a sufficiently large training dataset for these techniques remains challenging because of practical limitations. This paper presents a comprehensive DL-SLP dataset of over 10,000 physical data couples. The dataset was constructed by 3D-printing a calibration target featuring randomly varying surface profiles and storing the height profiles and the corresponding deformed fringe patterns. Our dataset aims to serve as a benchmark for evaluating and comparing different models and network architectures in DL-SLP. We performed an analysis of several established neural networks, demonstrating high accuracy in obtaining full-field height information from previously unseen fringe patterns. In addition, the network was validated on unique objects to test the overall robustness of the trained model. To facilitate further research and promote reproducibility, all code and the dataset are made publicly available. This dataset will enable researchers to explore, develop, and benchmark novel DL-based approaches for SS-DL-SLP.
Full article
(This article belongs to the Special Issue Deep Learning in Computer Vision)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00179/article_deploy/html/images/jimaging-10-00179-g001-550.jpg?1721813888)
Figure 1
Open Access Article
Iterative Tomographic Image Reconstruction Algorithm Based on Extended Power Divergence by Dynamic Parameter Tuning
by
Ryuto Yabuki, Yusaku Yamaguchi, Omar M. Abou Al-Ola, Takeshi Kojima and Tetsuya Yoshinaga
J. Imaging 2024, 10(8), 178; https://doi.org/10.3390/jimaging10080178 - 23 Jul 2024
Abstract
Computed tomography (CT) imaging plays a crucial role in various medical applications, but noise in projection data can significantly degrade image quality and hinder diagnostic accuracy. Iterative algorithms for tomographic image reconstruction outperform transform methods, especially in scenarios with severe noise in projections. In this paper, we propose a method to dynamically adjust two parameters included in the iterative rules during the reconstruction process. The algorithm, named the parameter-extended expectation-maximization based on power divergence (PXEM), aims to minimize the weighted extended power divergence between the measured and forward projections at each iteration. Our numerical and physical experiments showed that PXEM surpassed conventional methods such as maximum-likelihood expectation-maximization (MLEM), particularly in noisy scenarios. PXEM combines the noise suppression capabilities of power divergence-based expectation-maximization with static parameters at every iteration and the edge preservation properties of MLEM. The experimental results demonstrated significant improvements in image quality in metrics such as the structural similarity index measure and peak signal-to-noise ratio. PXEM improves CT image reconstruction quality under high noise conditions through enhanced optimization techniques.
Full article
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
![](https://pub.mdpi-res.com/jimaging/jimaging-10-00178/article_deploy/html/images/jimaging-10-00178-g001-550.jpg?1722412172)
Figure 1
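Of the two image-quality metrics cited in the abstract above, PSNR has a one-line definition from the mean squared error. A minimal NumPy sketch (the toy images are illustrative; `data_range` is the assumed peak intensity, and the value diverges as the images coincide):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction; undefined (infinite) for identical images."""
    mse = np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# uniform 0.1 error on a unit-range image -> MSE 0.01 -> PSNR 20 dB
reference = np.zeros((8, 8))
reconstruction = np.full((8, 8), 0.1)
value = psnr(reference, reconstruction)
```

SSIM, the other metric, additionally compares local luminance, contrast, and structure, so it is not reproduced here.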
Open Access Article
Influence of Examiner Experience on the Measurement of Bone-Loss by Low-Dose Cone-Beam Computed Tomography: An Ex Vivo Study
by
Maurice Ruetters, Korallia Alexandrou, Antonio Ciardo, Sinclair Awounvo, Holger Gehrig, Ti-Sun Kim, Christopher J. Lux and Sinan Sen
J. Imaging 2024, 10(8), 177; https://doi.org/10.3390/jimaging10080177 - 23 Jul 2024
Abstract
The aim of this study was to investigate the influence of examiner experience on measurements of bone-loss using high-dose (HD) and low-dose (LD) CBCT. Three diagnosticians with varying levels of CBCT interpretation experience measured bone-loss from CBCT scans of three cadaveric heads at 30 sites, conducting measurements twice. Between the first and second measurements, diagnostician 2 and diagnostician 3 received training in LD-CBCT diagnostics. The diagnosticians also classified the certainty of their measurements using a three-grade scale. The accuracy of bone-loss measurements was assessed using the absolute difference between observed and clinical measurements and compared among diagnosticians with different experience levels for both HD and LD-CBCT. At baseline, there was a significant difference in measurement accuracy between diagnostician 1 and diagnostician 2, and between diagnostician 1 and diagnostician 3, but not between diagnostician 2 and diagnostician 3. Training improved the accuracy of both HD-CBCT and LD-CBCT measurements in diagnostician 2, and of LD-CBCT measurements in diagnostician 3. Regarding measurement certainty, there was a significant difference among diagnosticians before training. Training enhanced the certainty for diagnosticians 2 and 3, with a significant improvement noted only for diagnostician 3. Examiner experience level significantly impacts the accuracy and certainty of bone-loss measurements using HD- and LD-CBCT.
Full article
Open Access Review
Deep Learning for Pneumonia Detection in Chest X-ray Images: A Comprehensive Survey
by
Raheel Siddiqi and Sameena Javaid
J. Imaging 2024, 10(8), 176; https://doi.org/10.3390/jimaging10080176 - 23 Jul 2024
Abstract
This paper addresses the significant problem of identifying the relevant background and contextual literature related to deep learning (DL) as an evolving technology in order to provide a comprehensive analysis of the application of DL to the specific problem of pneumonia detection via chest X-ray (CXR) imaging, which is the most common and cost-effective imaging technique available worldwide for pneumonia diagnosis. This paper in particular addresses the key period associated with COVID-19, 2020–2023, to explain, analyze, and systematically evaluate the limitations of approaches and determine their relative levels of effectiveness. The context in which DL is applied as both an aid to and an automated substitute for existing expert radiography professionals, who often have limited availability, is elaborated in detail. The rationale for the undertaken research is provided, along with a justification of the resources adopted and their relevance. This explanatory text and the subsequent analyses are intended to provide sufficient detail of the problem being addressed, existing solutions, and the limitations of these, ranging in detail from the specific to the more general. Indeed, our analysis and evaluation agree with the generally held view that the use of transformers, specifically, vision transformers (ViTs), is the most promising technique for obtaining further effective results in the area of pneumonia detection using CXR images. However, ViTs require extensive further research to address several limitations, specifically the following: biased CXR datasets, data and code availability, the ease with which a model can be explained, systematic methods of accurate model comparison, the notion of class imbalance in CXR datasets, and the possibility of adversarial attacks, the latter of which remains an area of fundamental research.
Full article
Open Access Article
Disclosure of a Concealed Michelangelo-Inspired Depiction in a 16th-Century Painting
by
Alice Dal Fovo, Margherita Morello, Anna Mazzinghi, Caterina Toso, Enrico Pampaloni and Raffaella Fontana
J. Imaging 2024, 10(8), 175; https://doi.org/10.3390/jimaging10080175 - 23 Jul 2024
Abstract
Some paintings may have hidden depictions beneath the visible surface, which can provide valuable insights into the artist’s creative process and the genesis of the artwork. Studies have shown that these covered paintings can be revealed through image-based techniques and integrated data processing. This study analyzes an oil painting by Beceri from the mid-16th century depicting the Holy Family, owned by the Uffizi Galleries. During the analysis of the materials, we discovered evidence of pictorial layers beneath the visible scene. To uncover the hidden figuration, we applied a multimodal approach that included microprofilometry, reflectance imaging spectroscopy, macro X-ray fluorescence, and optical coherence tomography. We analyzed the brushstrokes of the hidden painting, visualized the underdrawing, located the painted areas beneath the outermost painting, and quantified the thicknesses of the pictorial layers. The pigments used for the underpainting were identified through cross-analysis of X-ray fluorescence and spectral correlation maps. The underlying pictorial subject, Leda and the Swan, appears to be inspired by a long-lost and replicated work by Michelangelo. This information places Beceri and his production in a more defined context.
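The spectral correlation maps mentioned in the abstract pair each pixel's reflectance spectrum with a reference pigment spectrum. As a minimal illustrative sketch (assuming Pearson correlation as the similarity measure, which the abstract does not specify; the function name and toy data are hypothetical):

```python
import numpy as np

def spectral_correlation_map(cube, reference):
    """Pearson correlation of every pixel spectrum against a reference spectrum.

    cube: (H, W, B) reflectance image cube; reference: (B,) pigment spectrum.
    Returns an (H, W) map in [-1, 1]; values near 1 flag likely pigment matches.
    """
    x = cube - cube.mean(axis=-1, keepdims=True)   # center each pixel spectrum
    r = reference - reference.mean()               # center the reference
    num = (x * r).sum(axis=-1)
    den = np.sqrt((x ** 2).sum(axis=-1) * (r ** 2).sum())
    return num / np.maximum(den, 1e-12)            # guard against flat spectra

# Toy cube: first pixel is a scaled/offset copy of the reference (correlation 1),
# second is the reference reversed (strongly anti-correlated).
ref = np.array([0.10, 0.35, 0.60, 0.90])
cube = np.stack([2.0 * ref + 0.05, ref[::-1]]).reshape(2, 1, 4)
corr = spectral_correlation_map(cube, ref)
```

Thresholding such a map localizes regions whose spectra resemble a known pigment, which is how a correlation map can complement the XRF elemental data.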
Full article
(This article belongs to the Section Image and Video Processing)
Open Access Article
Acne Detection Based on Reconstructed Hyperspectral Images
by
Ali Mohammed Ridha, Nor Ashidi Mat Isa and Ayman Tawfik
J. Imaging 2024, 10(8), 174; https://doi.org/10.3390/jimaging10080174 - 23 Jul 2024
Abstract
Acne vulgaris is a common skin disease that affects more than 85% of teenagers and frequently persists into adulthood. While it is not a dangerous skin disease, it can significantly impact quality of life. Hyperspectral imaging (HSI), which captures a wide spectrum of light, has emerged as a tool for the detection and diagnosis of various skin conditions. However, the high cost of specialised HS cameras limits its use in clinical settings. In this research, a novel acne detection system that utilises hyperspectral (HS) images reconstructed from RGB images is proposed. A dataset of reconstructed HS images is created using the best-performing HS reconstruction model from our previous research. A new acne detection algorithm based on reconstructed HS images and the RetinaNet algorithm is introduced. The results indicate that the proposed algorithm surpasses other techniques based on RGB images. Additionally, reconstructed HS images offer a promising and cost-effective alternative to expensive HSI equipment for detecting conditions such as acne and other medical issues.
Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
Open Access Communication
High-Resolution Iodine-Enhanced Micro-Computed Tomography of Intact Human Hearts for Detailed Coronary Microvasculature Analyses
by
Joerg Reifart and Paul Iaizzo
J. Imaging 2024, 10(7), 173; https://doi.org/10.3390/jimaging10070173 - 18 Jul 2024
Abstract
Identifying the detailed anatomies of the coronary microvasculature remains an area of research; one needs to develop methods for non-destructive, high-resolution, three-dimensional imaging of these vessels for computational modeling. Currently employed Micro-Computed Tomography (Micro-CT) protocols for vasa vasorum analyses require organ dissection and, in most cases, non-clearable contrast agents. Here, we describe a method developed for a non-destructive, economical means to achieve high-resolution images of the human coronary microvasculature without organ dissection. Formalin-fixed human hearts were cannulated using venogram balloon catheters, which were then fixed into the specimen’s aortic root. The cannulated hearts, protected by a polyethylene bag, were placed in radiolucent containers filled with insulating polyurethane foam to reduce movement. For vasculature staining, iodine potassium iodide (IKI, Lugol’s solution; 6.3% Potassium Iodide, 4.1% Iodide) was injected. Contrast distributions were monitored using a North Star Imaging X3000 micro-CT scanner with low-radiation settings, followed by high-radiation scanning (3600 rad, 60 kV, 900 mA) for the final high-resolution imaging. We successfully imaged four intact human hearts presenting with chronic total coronary occlusions of the right coronary artery. This imaging enabled detailed analyses of the vasa vasorum surrounding stenosed and occluded segments. After imaging, the hearts were cleared of iodine and excess polyurethane foam and returned to their initial formalin-fixed state for indefinite storage. Conclusions: The described methodologies allow for the non-destructive, high-resolution micro-CT imaging of coronary microvasculature in intact human hearts, paving the way for detailed computational 3D microvascular reconstructions with a macrovascular context.
Full article
(This article belongs to the Section Medical Imaging)
Open Access Article
Reducing Manual Annotation Costs for Cell Segmentation by Upgrading Low-Quality Annotations
by
Serban Vădineanu, Daniël M. Pelt, Oleh Dzyubachyk and Kees Joost Batenburg
J. Imaging 2024, 10(7), 172; https://doi.org/10.3390/jimaging10070172 - 17 Jul 2024
Abstract
Deep-learning algorithms for cell segmentation typically require large data sets with high-quality annotations to be trained with. However, the annotation cost for obtaining such sets may prove to be prohibitively expensive. Our work aims to reduce the time necessary to create high-quality annotations of cell images by using a relatively small well-annotated data set for training a convolutional neural network to upgrade lower-quality annotations, produced at lower annotation costs. We investigate the performance of our solution when upgrading the annotation quality for labels affected by three types of annotation error: omission, inclusion, and bias. We observe that our method can upgrade annotations affected by high error levels from 0.3 to 0.9 Dice similarity with the ground-truth annotations. We also show that a relatively small well-annotated set enlarged with samples with upgraded annotations can be used to train better-performing cell segmentation networks compared to training only on the well-annotated set. Moreover, we present a use case where our solution can be successfully employed to increase the quality of the predictions of a segmentation network trained on just 10 annotated samples.
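The Dice similarity used above to quantify annotation quality can be computed directly from binary masks. A minimal sketch (the helper name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks agree by convention
    return 2.0 * inter / total

# Toy example: a noisy annotation vs. the ground truth.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True          # 16-pixel square cell
noisy = gt.copy()
noisy[2, 2] = False          # omission error: one pixel missed
noisy[0, 0] = True           # inclusion error: one background pixel labeled
score = dice(noisy, gt)      # 2*15 / (16+16) = 0.9375
```

An annotation "upgrade" in the paper's sense would raise this score toward 1.0, e.g. from roughly 0.3 for heavily corrupted labels to about 0.9 after correction.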
Full article
(This article belongs to the Special Issue Medical Image Classification and Segmentation: Progress and Challenges)
Open Access Article
Deep Efficient Data Association for Multi-Object Tracking: Augmented with SSIM-Based Ambiguity Elimination
by
Aswathy Prasannakumar and Deepak Mishra
J. Imaging 2024, 10(7), 171; https://doi.org/10.3390/jimaging10070171 - 16 Jul 2024
Abstract
Deep learning-based methods have recently been harnessed to address the multiple object tracking (MOT) problem. The tracking-by-detection approach to MOT involves two primary steps: object detection and data association. In the first step, objects of interest are detected in each frame of a video. The second step establishes correspondences between these detected objects across frames to track their trajectories. This paper proposes an efficient and unified data association method that utilizes a deep feature association network (deepFAN) to learn the associations. Additionally, the Structural Similarity Index Measure (SSIM) is employed to address uncertainties in the data association, complementing the deep feature association network. These combined association computations effectively link the current detections with the previous tracks, enhancing overall tracking performance. To evaluate the efficiency of the proposed MOT framework, we conducted a comprehensive analysis on popular MOT datasets, such as the MOT Challenge and UA-DETRAC. The results showed that our technique performed substantially better than the current state-of-the-art methods in terms of standard MOT metrics.
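The SSIM term used here to resolve association ambiguities compares image crops of candidate detections. A minimal single-window SSIM sketch over two grayscale patches (using the standard C1/C2 stabilizing constants; the paper's exact windowing and patch extraction are not specified, so this is only illustrative):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two equally sized grayscale patches."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Two crops of the "same" object should score near 1; a brightness-shifted
# crop scores lower, which can flag an ambiguous detection-track match.
patch = np.arange(64, dtype=np.float64).reshape(8, 8)
shifted = patch + 50.0
```

In a tracking-by-detection loop, such a score can break ties when the learned appearance features alone cannot decide between two candidate associations.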
Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
Topics
Topic in
Applied Sciences, Sensors, J. Imaging, MAKE, Optics
Applications in Image Analysis and Pattern Recognition
Topic Editors: Bin Fan, Wenqi Ren; Deadline: 31 August 2024
Topic in
Applied Sciences, Electronics, J. Imaging, MAKE, Remote Sensing
Computational Intelligence in Remote Sensing: 2nd Edition
Topic Editors: Yue Wu, Kai Qin, Maoguo Gong, Qiguang Miao; Deadline: 31 December 2024
Topic in
Future Internet, Information, J. Imaging, Mathematics, Symmetry
Research on Deep Neural Networks for Video Motion Recognition
Topic Editors: Hamad Naeem, Hong Su, Amjad Alsirhani, Muhammad Shoaib Bhutta; Deadline: 31 January 2025
Topic in
Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang; Deadline: 30 March 2025
Special Issues
Special Issue in
J. Imaging
Recent Advances in X-ray Imaging
Guest Editor: Silvia Cipiccia; Deadline: 31 August 2024
Special Issue in
J. Imaging
New Insights into Photoacoustic Imaging
Guest Editors: Yan Li, Min Wu; Deadline: 31 August 2024
Special Issue in
J. Imaging
Deep Learning in Computer Vision
Guest Editors: Dong Zhang, Rui Yan; Deadline: 15 September 2024
Special Issue in
J. Imaging
Novel Approaches to Image Quality Assessment
Guest Editors: Luigi Celona, Hanhe Lin; Deadline: 31 October 2024