- Considerations for a Micromirror Array Optimized for Compressive Sensing (VIS to MIR) in Space Applications
- A Mathematical Model for Wind Velocity Field Reconstruction and Visualization Taking into Account the Topography Influence
- Anatomical Characteristics of Cervicomedullary Compression on MRI Scans in Children with Achondroplasia
- Evaluating Brain Tumor Detection with Deep Learning Convolutional Neural Networks Across Multiple MRI Modalities
Journal Description
Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques published online monthly by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), PubMed, PMC, dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: CiteScore - Q1 (Computer Graphics and Computer-Aided Design)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.3 days after submission; acceptance to publication takes 3.3 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.7 (2023); 5-Year Impact Factor: 3.0 (2023)
Latest Articles
Establishing Diagnostic Reference Levels for Mammography Digital Breast Tomosynthesis, Contrast Enhanced, Implants, Spot Compression, Magnification and Stereotactic Biopsy in Dubai Health Sector
J. Imaging 2025, 11(3), 79; https://doi.org/10.3390/jimaging11030079 - 7 Mar 2025
Abstract
The aim of this patient dose review is to establish a thorough diagnostic reference level (DRL) system. This entails calculating a DRL value for each possible image technique/view considered to perform a diagnostic mammogram in our practice. Diagnostic mammograms from a total of 1191 patients who underwent a diagnostic mammogram study in our designated diagnostic mammography center were collected and retrospectively analyzed. The DRL representing our health sector was set as the median of the mean glandular dose (MGD) for each possible image technique/view, including the 2D standard bilateral craniocaudal (LCC/RCC) and mediolateral oblique (LMLO/RMLO), the 2D bilateral spot compression CC and MLO (RSCC/LSCC and RSMLO/LSMLO), the 2D bilateral spot compression with magnification (RMSCC/LMSCC and RMSMLO/LMSMLO), the 3D digital breast tomosynthesis CC and MLO (RCC/LCC and RMLO/LMLO), the 2D bilateral implant CC and MLO (RIMCC/LIMCC and RIMMLO/LIMMLO), the 2D bilateral contrast-enhanced CC and MLO (RCECC/LCECC and RCEMLO/LCEMLO) and the 2D bilateral stereotactic biopsy guided CC (SBRCC/SBLCC). This patient dose review revealed that the highest MGD was associated with the 2D bilateral spot compression with magnification (MSCC/MSMLO) image view. For the compressed breast thickness (CBT) group 60–69 mm, the median and 75th percentile of the MGD values obtained were MSCC: 3.35 and 3.96 mGy, and MSMLO: 4.14 and 5.25 mGy, respectively. Marked MGD variations were observed across the different possible views, even for the same CBT group. Our results are in line with published DRLs when using the same statistical quantity and CBT group.
Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
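The DRL arithmetic described in this abstract (the typical value as the median of the MGD, with the 75th percentile reported for comparison, per view and compressed-breast-thickness group) reduces to a grouped aggregation. A minimal sketch in pandas; the column names and dose values are illustrative, not taken from the paper:

```python
import pandas as pd

# Hypothetical dose-survey export: one row per acquired image.
# Column names are illustrative, not taken from the paper.
df = pd.DataFrame({
    "view":      ["MSCC", "MSCC", "MSMLO", "MSMLO", "LCC", "RCC"],
    "cbt_group": ["60-69"] * 6,
    "mgd_mGy":   [3.10, 3.60, 4.00, 4.30, 1.20, 1.25],
})

# Typical value (local DRL) = median MGD per view and CBT group;
# the 75th percentile is reported for comparison with published DRLs.
drl = (
    df.groupby(["view", "cbt_group"])["mgd_mGy"]
      .agg(median="median", p75=lambda s: s.quantile(0.75))
      .reset_index()
)
print(drl)
```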
Open Access Article
A Novel Method to Compute the Contact Surface Area Between an Organ and Cancer Tissue
by
Alessandra Bulanti, Alessandro Carfì, Paolo Traverso, Carlo Terrone and Fulvio Mastrogiovanni
J. Imaging 2025, 11(3), 78; https://doi.org/10.3390/jimaging11030078 - 6 Mar 2025
Abstract
The contact surface area (CSA) quantifies the interface between a tumor and an organ and is a key predictor of perioperative outcomes in kidney cancer. However, existing CSA computation methods rely on shape assumptions and manual annotation. We propose a novel approach using 3D reconstructions from computed tomography (CT) scans to provide an accurate CSA estimate. Our method includes a segmentation protocol and an algorithm that processes reconstructed meshes. We also provide an open-source implementation with a graphical user interface. Tested on synthetic data, the algorithm showed minimal error and was evaluated on data from 82 patients. We computed the CSA using both our approach and Hsieh's method, which relies on subjective CT scan measurements, in a double-blind study with two radiologists of different experience levels. We assessed the correlation between our approach and the expert radiologist's measurements, as well as the deviation of both our method and the less experienced radiologist from the expert's values. While the mean and variance of the differences between the less experienced radiologist and the expert were lower, our method exhibited only a slight deviation from the expert's values, demonstrating its reliability and consistency. These findings are further supported by the results obtained from synthetic data testing.
Full article
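The abstract does not detail the mesh algorithm, but a common heuristic for estimating a contact surface area from two reconstructed meshes is to sum the areas of tumor faces lying within a small tolerance of the organ surface. A rough sketch with trimesh, using synthetic spheres in place of segmented CT meshes; the tolerance and the centroid-distance criterion are assumptions, not the authors' published algorithm:

```python
import trimesh

def contact_surface_area(tumor, organ, tol_mm=1.0):
    """Approximate CSA: total area of tumor faces whose centroids lie
    within tol_mm of the organ surface (a generic heuristic)."""
    centroids = tumor.triangles_center                    # (n_faces, 3)
    _, dist, _ = trimesh.proximity.closest_point(organ, centroids)
    return tumor.area_faces[dist < tol_mm].sum()

# Synthetic stand-ins for segmented CT meshes: a small sphere ("tumor")
# centered on the surface of a large sphere ("kidney").
organ = trimesh.creation.icosphere(subdivisions=4, radius=30.0)
tumor = trimesh.creation.icosphere(subdivisions=4, radius=10.0)
tumor.apply_translation([0.0, 0.0, 30.0])
print(f"CSA ~ {contact_surface_area(tumor, organ):.1f} mm^2")
```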

Open Access Article
Detection of Chips on the Threaded Part of Cosmetic Glass Bottles
by
Daiki Tomita and Yue Bao
J. Imaging 2025, 11(3), 77; https://doi.org/10.3390/jimaging11030077 - 4 Mar 2025
Abstract
Recycled glass has attracted attention owing to its role in reducing plastic waste, which is further increasing the demand for glass containers. Cosmetic glass bottles require strict quality inspections because of frequent handling, safety concerns, and other factors. During manufacturing, glass bottles sometimes develop chips on the top surface, rim, or screw threads of the bottle mouth. Conventionally, these chips are visually inspected by inspectors; however, this process is time-consuming and prone to inaccuracies. To address these issues, automatic inspection using image processing has been explored. Existing methods, such as dynamic luminance value correction and ring-shaped inspection gates, have limitations: the former relies on visible light, which is strongly affected by natural light, and the latter acquires images directly from above, resulting in low accuracy in detecting chips on the lower part of screw threads. To overcome these challenges, this study proposes a method that combines infrared backlighting and image processing to determine the range of the screw threads and detect chips accurately. Experiments were conducted in an environment replicating an actual factory production line. The results confirmed a chip detection accuracy of 99.6% for both good and defective bottles. This approach reduces equipment complexity compared to conventional methods while maintaining high inspection accuracy, contributing to the productivity and quality control of glass bottle manufacturing.
Full article
(This article belongs to the Section Image and Video Processing)
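As a loose illustration of silhouette-based chip screening under backlighting (not the paper's actual pipeline), one can threshold the backlit image and flag deviations between the bottle contour and its convex hull; the threshold, defect criterion, and file path below are placeholders:

```python
import cv2

# Placeholder path: a grayscale image of a backlit bottle mouth.
img = cv2.imread("bottle_mouth_ir.png", cv2.IMREAD_GRAYSCALE)

# Under IR backlighting the glass appears dark on a bright background,
# so an inverted Otsu threshold isolates the silhouette.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# The largest external contour is the bottle-mouth silhouette.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
silhouette = max(contours, key=cv2.contourArea)

# A chip means missing material: compare the contour with its convex hull.
# (A real system would first restrict this test to the thread region.)
hull = cv2.convexHull(silhouette)
defect_area = cv2.contourArea(hull) - cv2.contourArea(silhouette)
print("chip suspected" if defect_area > 50.0 else "OK")
```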
Open Access Article
GM-CBAM-ResNet: A Lightweight Deep Learning Network for Diagnosis of COVID-19
by
Junjiang Zhu, Yihui Zhang, Cheng Ma, Jiaming Wu, Xuchen Wang and Dongdong Kong
J. Imaging 2025, 11(3), 76; https://doi.org/10.3390/jimaging11030076 - 3 Mar 2025
Abstract
COVID-19 can cause acute infectious diseases of the respiratory system and may lead to heart damage, which seriously threatens human health. Electrocardiograms (ECGs) have the advantages of being low cost, non-invasive, and radiation free, and are widely used for evaluating heart health status. In this work, a lightweight deep learning network named GM-CBAM-ResNet is proposed for diagnosing COVID-19 based on ECG images. GM-CBAM-ResNet is constructed by replacing the convolution module with the Ghost module (GM) and adding the convolutional block attention module (CBAM) in the residual module of ResNet. To reveal the superiority of GM-CBAM-ResNet, three other methods (ResNet, GM-ResNet, and CBAM-ResNet) are also analyzed from the following aspects: model performance, complexity, and interpretability. The model performance is evaluated using the open ‘ECG Images dataset of Cardiac and COVID-19 Patients’. The complexity is reflected by comparing the number of model parameters. The interpretability is analyzed by utilizing Gradient-weighted Class Activation Mapping (Grad-CAM). Parameter statistics indicate that, relative to ResNet19, the number of model parameters of GM-CBAM-ResNet19 is reduced by 45.4%. Experimental results show that, at lower model complexity, GM-CBAM-ResNet19 improves the diagnostic accuracy by approximately 5% in comparison with ResNet19. Additionally, the interpretability analysis shows that CBAM can suppress the interference of grid backgrounds and ensure higher diagnostic accuracy at lower model complexity. This work provides a lightweight solution for the rapid and accurate diagnosis of COVID-19 based on ECG images, which holds significant practical deployment value.
Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
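The two building blocks named in this abstract, the Ghost module and CBAM, are published, widely reimplemented designs. Minimal PyTorch sketches follow; the layer sizes are illustrative and not necessarily those used in GM-CBAM-ResNet19:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Half the output channels come from a regular conv; the rest from a
    cheap depthwise conv on those primary features (Han et al., GhostNet)."""
    def __init__(self, in_ch, out_ch, kernel=1):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, 3, padding=1,
                      groups=primary, bias=False),
            nn.BatchNorm2d(out_ch - primary), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al.)."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1))
        self.spatial = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        # Channel attention from pooled descriptors (avg + max).
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention from channel-wise avg and max maps.
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

x = torch.randn(1, 32, 56, 56)
print(CBAM(64)(GhostModule(32, 64)(x)).shape)  # torch.Size([1, 64, 56, 56])
```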
Open Access Article
Reduction of Uneven Brightness and Ghosts of Aerial Images Using a Prism in a Micromirror Array Plate
by
Kaito Shoji, Yuto Osada, Atsutoshi Kurihara and Yue Bao
J. Imaging 2025, 11(3), 75; https://doi.org/10.3390/jimaging11030075 - 3 Mar 2025
Abstract
A micro-mirror array plate is a type of aerial image display that allows an observer to touch the aerial image directly. The problem with this optical element is that it produces stray light, called a ghost, which reduces the visibility of the aerial image. Conventional methods can suppress the occurrence of ghosts; however, depending on the observation position, they produce uneven luminance in aerial images. In this study, we therefore proposed a method for eliminating ghosts while suppressing the unevenness in the luminance of an aerial image using a prism. In the proposed device, a prism is placed between the liquid crystal display and the diffuser, which is the light source of the aerial display. The experimental results showed that the proposed method suppresses the unevenness in the luminance of aerial images better than conventional ghost removal methods and reduces the formation of ghosts better than the micromirror array plate alone. The proposed method is therefore a ghost removal method that also suppresses unevenness in the brightness of aerial images.
Full article

Open Access Article
Typical Diagnostic Reference Levels of Radiation Exposure on Neonates Under 1 kg in Mobile Chest Imaging in Incubators
by
Ioannis Antonakos, Matina Patsioti, Maria-Eleni Zachou, George Christopoulos and Efstathios P. Efstathopoulos
J. Imaging 2025, 11(3), 74; https://doi.org/10.3390/jimaging11030074 - 28 Feb 2025
Abstract
The purpose of this study is to determine the typical diagnostic reference levels (DRLs) of radiation exposure values for chest radiographs in neonates (<1 kg) in mobile imaging at a University Hospital in Greece and compare these values with existing DRL values from the literature. Patient and dosimetry data, including age, sex, weight, tube voltage (kV), tube current (mA), exposure time (s), exposure index of a digital detector (S), and dose area product (DAP), were obtained from a total of 80 chest radiography examinations performed on neonates (<1 kg and <30 days old). All examinations were performed on a single X-ray system, and all data (demographic and dosimetry data) were collected from the PACS of the hospital. Typical radiation exposure values were determined as the median values of the DAP and entrance surface dose (ESD) distributions. Afterward, these typical values were compared with DRL values from other countries. Three radiologists reviewed the images to evaluate image quality for dose optimization in neonatal chest radiography. Across all examinations, the mean value and standard deviation of DAP were 0.13 ± 0.11 dGy·cm2 (range: 0.01–0.46 dGy·cm2), and ESD was measured at 11.55 ± 4.96 μGy (range: 4.01–30.4 μGy). The typical values in terms of DAP and ESD were estimated to be 0.08 dGy·cm2 and 9.87 μGy, respectively. The results show that the DAP value decreases as the exposure index increases. This study's typical values were lower than the DRLs reported in the literature because our population had lower weight and age. The subjective evaluation of image quality revealed that the vast majority of radiographs (over 80%) met the criteria for being diagnostic, as they received an excellent rating in terms of noise levels, contrast, and sharpness. This study contributes to the recording of typical dose values in a sensitive and rare category of patients (neonates weighing <1 kg) as well as information on the image quality of chest X-rays performed in this group.
Full article
(This article belongs to the Special Issue Learning and Optimization for Medical Imaging)
Open Access Article
Deepfake Media Forensics: Status and Future Challenges
by
Irene Amerini, Mauro Barni, Sebastiano Battiato, Paolo Bestagini, Giulia Boato, Vittoria Bruni, Roberto Caldelli, Francesco De Natale, Rocco De Nicola, Luca Guarnera, Sara Mandelli, Taiba Majid, Gian Luca Marcialis, Marco Micheletto, Andrea Montibeller, Giulia Orrù, Alessandro Ortis, Pericle Perazzo, Giovanni Puglisi, Nischay Purnekar, Davide Salvi, Stefano Tubaro, Massimo Villari and Domenico Vitulano
J. Imaging 2025, 11(3), 73; https://doi.org/10.3390/jimaging11030073 - 28 Feb 2025
Abstract
The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Using advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes are capable of producing highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential misuse. The proliferation of deepfakes has triggered phenomena like “Impostor Bias”, a growing skepticism toward the authenticity of multimedia content, further complicating trust in digital interactions. This paper is mainly based on the description of a research project called FF4ALL (Detection of Deep Fake Media and Life-Long Media Authentication) for the detection and authentication of deepfakes, focusing on areas such as forensic attribution, passive and active authentication, and detection in real-world scenarios. By exploring both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media.
Full article
(This article belongs to the Special Issue Advancements in Deepfake Technology, Biometry System and Multimedia Forensics)
Open Access Article
Concealed Weapon Detection Using Thermal Cameras
by
Juan D. Muñoz, Jesus Ruiz-Santaquiteria, Oscar Deniz and Gloria Bueno
J. Imaging 2025, 11(3), 72; https://doi.org/10.3390/jimaging11030072 - 26 Feb 2025
Abstract
In an era where security concerns are ever-increasing, the need for advanced technology to detect visible and concealed weapons has become critical. This paper introduces a novel two-stage method for concealed handgun detection, leveraging thermal imaging and deep learning, offering a potential real-world solution for law enforcement and surveillance applications. The approach first detects potential firearms at the frame level and subsequently verifies their association with a detected person, significantly reducing false positives and false negatives. Alarms are triggered only under specific conditions to ensure accurate and reliable detection, with precautionary alerts raised if no person is detected but a firearm is identified. Key contributions include a lightweight algorithm optimized for low-end embedded devices, making it suitable for wearable and mobile applications, and the creation of a tailored thermal dataset for controlled concealment scenarios. The system is implemented on a chest-worn Android smartphone with a miniature thermal camera, enabling hands-free operation. Experimental results validate the method's effectiveness, achieving an mAP@50-95 of 64.52% on our dataset, improving on state-of-the-art methods. By reducing false negatives and improving reliability, this study offers a scalable, practical solution for security applications.
Full article
(This article belongs to the Special Issue Object Detection in Video Surveillance Systems)
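The two-stage alarm logic the abstract describes (alarm only when a firearm can be associated with a detected person, precautionary alert otherwise) can be captured in a few lines. A sketch under assumptions: boxes are (x1, y1, x2, y2) tuples, and association is tested by center containment, which may differ from the authors' rule:

```python
def center_inside(box, region):
    """box/region = (x1, y1, x2, y2); is box's center inside region?"""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return region[0] <= cx <= region[2] and region[1] <= cy <= region[3]

def alarm_state(firearm_boxes, person_boxes):
    if not firearm_boxes:
        return "no alarm"
    if any(center_inside(f, p) for f in firearm_boxes for p in person_boxes):
        return "alarm"            # firearm associated with a person
    return "precautionary alert"  # firearm seen, but no person detected

print(alarm_state([(40, 60, 70, 90)], [(10, 20, 120, 260)]))  # alarm
print(alarm_state([(40, 60, 70, 90)], []))  # precautionary alert
```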
Open Access Article
Sowing, Monitoring, Detecting: A Possible Solution to Improve the Visibility of Cropmarks in Cultivated Fields
by
Filippo Materazzi
J. Imaging 2025, 11(3), 71; https://doi.org/10.3390/jimaging11030071 - 25 Feb 2025
Abstract
This study explores the integration of UAS-based multispectral remote sensing and targeted agricultural practices to improve cropmark detection in buried archaeological contexts. The research focuses on the Vignale plateau, part of the pre-Roman city of Falerii (Viterbo, Italy), where traditional remote sensing methods face challenges due to complex environmental and archaeological conditions. As part of the Falerii Project at Sapienza Università di Roma, a field was cultivated with barley (Hordeum vulgare L.), selected for its characteristics, enabling a controlled experiment to maximise cropmark visibility. The project employed high-density sowing, natural cultivation practices, and monitoring through a weather station and multispectral imaging to observe crop growth and detect anomalies. The results demonstrated enhanced crop uniformity, facilitating the identification and differentiation of cropmarks. Environmental factors, particularly rainfall and temperature, were shown to significantly influence crop development and cropmark formation. This interdisciplinary approach also engaged local stakeholders, including students from the Istituto Agrario Midossi, fostering educational opportunities and community involvement. The study highlights how tailored agricultural strategies, combined with advanced remote sensing technologies, can significantly improve the precision and efficiency of non-invasive archaeological investigations. These findings suggest potential developments for refining the methodology, offering a sustainable and integrative model for future research.
Full article
(This article belongs to the Special Issue Unmanned Aircraft Systems and Remote Sensing in Locating Covered Archaeological Remains)
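The abstract does not name a specific vegetation index, but UAS cropmark surveys of this kind typically screen indices such as NDVI computed from the red and near-infrared bands. A minimal sketch with synthetic rasters; the 2-sigma anomaly threshold is an assumption:

```python
import numpy as np

# Synthetic red and near-infrared reflectance rasters standing in for
# the UAS multispectral bands.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, (100, 100))
nir = rng.uniform(0.40, 0.60, (100, 100))

ndvi = (nir - red) / (nir + red + 1e-9)

# Buried structures stress the crop above them: flag pixels well below
# the field's typical NDVI as candidate negative cropmarks.
threshold = ndvi.mean() - 2 * ndvi.std()
candidates = ndvi < threshold
print(f"{candidates.sum()} candidate cropmark pixels")
```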
Open Access Article
Evaluation of Radiation Dose and Image Quality in Clinical Routine Protocols from Three Different CT Scanners
by
Thawatchai Prabsattroo, Jiranthanin Phaorod, Piyaphat Tathuwan, Khanitta Tongluan, Puengjai Punikhom, Tongjit Maharantawong and Waraporn Sudchai
J. Imaging 2025, 11(3), 70; https://doi.org/10.3390/jimaging11030070 - 25 Feb 2025
Abstract
Computed tomography examination plays a vital role in imaging, and its use has rapidly increased in radiology diagnosis. This study aimed to assess radiation doses of routine CT protocols of the brain, chest, and abdomen on three different CT scanners, together with a qualitative image quality assessment. Methods: A picture archiving and communication system (PACS) and Radimetrics software version 3.4.2 were used to retrospectively collect patients' radiation doses. Radiation doses were recorded as the CTDIvol, dose length product (DLP), and effective dose. CT images were acquired using the Catphan700 phantom to evaluate image quality. Results: The findings revealed that median values of the CTDIvol and DLP across the brain, chest, and abdomen protocols were lower than the national and international DRLs. Effective doses for brain, chest, and abdomen protocols were also below the median values reported by R. Smith-Bindman. Neusoft achieved higher spatial frequencies in brain protocols, while Siemens outperformed the others in chest protocols. Neusoft consistently exhibited superior high-contrast resolution. Siemens and Neusoft led in low-contrast detectability, and Siemens also led in the contrast-to-noise ratio. In addition, Siemens had the lowest image noise in brain protocols and high uniformity in chest and abdomen protocols. Neusoft showed the lowest noise in chest and abdomen protocols and high uniformity in the brain protocol. The noise power spectrum revealed that Philips had the highest noise magnitude, with different noise textures across protocols and scanners. Conclusions: This study provides a comprehensive evaluation of radiation doses and image quality for three different CT scanners using standard clinical protocols. Almost all CT protocols exhibited radiation doses below the DRLs and demonstrated varying image quality across each protocol and scanner. Selecting the right CT scanner for each protocol is essential to ensure that the CT images exhibit the best quality among a wide range of CT machines. The MTF, HCR, LCD, CNR, NPS, noise, and uniformity are suitable parameters for evaluating and monitoring image quality.
Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
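For readers unfamiliar with the dose quantities compared here: effective dose is conventionally estimated as E ≈ k · DLP, with region-specific k factors. A small sketch; the k values are the widely used adult coefficients (e.g., AAPM Report 96), and the DLP values are made up for illustration:

```python
# Region-specific conversion coefficients, mSv per mGy.cm (adult values).
K_FACTORS = {"brain": 0.0021, "chest": 0.014, "abdomen": 0.015}

def effective_dose(dlp_mgy_cm: float, region: str) -> float:
    """Effective dose (mSv) estimated as k * DLP."""
    return K_FACTORS[region] * dlp_mgy_cm

# Illustrative DLP values, not taken from the study.
for region, dlp in [("brain", 900.0), ("chest", 350.0), ("abdomen", 600.0)]:
    print(f"{region}: DLP {dlp:.0f} mGy.cm -> E = {effective_dose(dlp, region):.2f} mSv")
```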
Open Access Article
Improving Object Detection in High-Altitude Infrared Thermal Images Using Magnitude-Based Pruning and Non-Maximum Suppression
by
Yajnaseni Dash, Vinayak Gupta, Ajith Abraham and Swati Chandna
J. Imaging 2025, 11(3), 69; https://doi.org/10.3390/jimaging11030069 - 24 Feb 2025
Abstract
The advancement of technology has ushered in remote sensing with the adoption of high-altitude infrared thermal object detection to leverage the distinct advantages of high-altitude platforms. These technologies readily capture the thermal signatures of objects from an elevated point, generally unmanned aerial vehicles or drones, and thus allow for enhanced detection and monitoring of extensive areas. This study explores the application of YOLOv8's advanced architecture, together with dynamic magnitude-based pruning techniques paired with non-maximum suppression, for high-altitude infrared thermal object detection using UAVs. The current research addresses the complexities of processing high-resolution thermal imagery, where traditional methods fall short. We converted dataset annotations from the COCO and PASCAL VOC formats to YOLO's required format, enabling efficient model training and inference. The results demonstrate the proposed architecture's superior speed and accuracy, effectively handling thermal signatures and object detection. Precision-recall metrics indicate robust performance, though some misclassification, particularly for persons, suggests areas for further refinement. This work highlights the potential of YOLOv8's advanced architecture to enhance UAV-based thermal imaging applications, paving the way for more effective real-time object detection solutions.
Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
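Both post-processing ideas named in the title have direct library support. A sketch of L1 (magnitude-based) pruning via torch.nn.utils.prune and non-maximum suppression via torchvision.ops.nms; the toy layer and boxes stand in for a trained YOLOv8 detector, and the pruning amount and IoU threshold are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

# Magnitude-based pruning: zero out the 30% of weights with smallest |w|.
conv = nn.Conv2d(16, 32, 3)
prune.l1_unstructured(conv, name="weight", amount=0.3)
prune.remove(conv, "weight")  # make the pruning mask permanent
print(f"sparsity: {(conv.weight == 0).float().mean():.2%}")

# NMS over hypothetical thermal detections: (x1, y1, x2, y2) boxes + scores.
boxes = torch.tensor([[10., 10., 50., 50.],
                      [12., 12., 52., 52.],
                      [80., 80., 120., 120.]])
scores = torch.tensor([0.90, 0.75, 0.60])
keep = torchvision.ops.nms(boxes, scores, iou_threshold=0.5)
print("kept detections:", keep.tolist())  # overlapping duplicate suppressed
```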
Open Access Article
Machine Learning-Driven Radiomics Analysis for Distinguishing Mucinous and Non-Mucinous Pancreatic Cystic Lesions: A Multicentric Study
by
Neus Torra-Ferrer, Maria Montserrat Duh, Queralt Grau-Ortega, Daniel Cañadas-Gómez, Juan Moreno-Vedia, Meritxell Riera-Marín, Melanie Aliaga-Lavrijsen, Mateu Serra-Prat, Javier García López, Miguel Ángel González-Ballester, Maria Teresa Fernández-Planas and Júlia Rodríguez-Comas
J. Imaging 2025, 11(3), 68; https://doi.org/10.3390/jimaging11030068 - 20 Feb 2025
Abstract
The increasing use of high-resolution cross-sectional imaging has significantly enhanced the detection of pancreatic cystic lesions (PCLs), including pseudocysts and neoplastic entities such as IPMN, MCN, and SCN. However, accurate categorization of PCLs remains a challenge. This study aims to improve PCL evaluation by developing and validating a radiomics-based software tool leveraging machine learning (ML) for lesion classification. The model categorizes PCLs into mucinous and non-mucinous types using a custom dataset of 261 CT examinations, with 156 images for training and 105 for external validation. Three experienced radiologists manually delineated the images, extracting 38 radiological and 214 radiomic features using the Pyradiomics module in Python 3.13.2. Feature selection was performed using Least Absolute Shrinkage and Selection Operator (LASSO) regression, followed by classification with an Adaptive Boosting (AdaBoost) model trained on the optimized feature set. The proposed model achieved an accuracy of 89.3% in the internal validation cohort and demonstrated robust performance in the external validation cohort, with 90.2% sensitivity, 80% specificity, and 88.2% overall accuracy. Comparative analysis with existing radiomics-based studies showed that the proposed model either outperforms or performs on par with the current state-of-the-art methods, particularly in external validation scenarios. These findings highlight the potential of radiomics-driven machine learning approaches in enhancing PCL diagnosis across diverse patient populations.
Full article
(This article belongs to the Section Medical Imaging)
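The modeling pipeline the abstract describes (LASSO-based feature selection followed by an AdaBoost classifier) maps directly onto scikit-learn. A hedged sketch with synthetic data standing in for the 38 radiological + 214 radiomic features; hyperparameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 156 training cases, 252 features,
# binary label (mucinous vs non-mucinous).
X, y = make_classification(n_samples=156, n_features=252,
                           n_informative=15, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),  # keep nonzero-coef features
    AdaBoostClassifier(random_state=0),
)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")
```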
Open Access Article
An Efficient Forest Smoke Detection Approach Using Convolutional Neural Networks and Attention Mechanisms
by
Quy-Quyen Hoang, Quy-Lam Hoang and Hoon Oh
J. Imaging 2025, 11(2), 67; https://doi.org/10.3390/jimaging11020067 - 19 Feb 2025
Abstract
This study explores a method of detecting smoke plumes effectively as an early sign of a forest fire. Convolutional neural networks (CNNs) have been widely used for forest fire detection; however, they have not been customized or optimized for smoke characteristics. This paper proposes a CNN-based forest smoke detection model featuring a novel backbone architecture that can increase detection accuracy and reduce computational load. Since the proposed backbone detects the plume of smoke through different views using kernels of varying sizes, it can better detect smoke plumes of different sizes. By decomposing the traditional square kernel convolution into a depth-wise convolution of the coordinate kernel, it can not only better extract the features of the smoke plume spreading along the vertical dimension but also reduce the computational load. An attention mechanism was applied to allow the model to focus on important information while suppressing less relevant information. The experimental results show that our model outperforms other popular ones, achieving a detection accuracy of up to 52.9 average precision (AP) while significantly reducing the number of parameters and giga floating-point operations (GFLOPs) compared to the popular models.
Full article
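The kernel decomposition this abstract describes, replacing a square k x k convolution with depthwise "coordinate" convolutions (k x 1 vertical and 1 x k horizontal), can be sketched in a few lines of PyTorch; the channel count, kernel size, and the trailing pointwise mix are assumptions about the exact design:

```python
import torch
import torch.nn as nn

class CoordDepthwiseConv(nn.Module):
    """Depthwise k x 1 and 1 x k convolutions in place of a k x k kernel,
    cutting the per-channel cost from k*k to 2*k multiply-adds and favoring
    vertically spreading structures such as smoke plumes."""
    def __init__(self, ch: int, k: int = 7):
        super().__init__()
        self.vertical = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch)
        self.horizontal = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch)
        self.pointwise = nn.Conv2d(ch, ch, 1)  # mix channels after depthwise

    def forward(self, x):
        return self.pointwise(self.horizontal(self.vertical(x)))

x = torch.randn(1, 64, 128, 128)
print(CoordDepthwiseConv(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```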

Open Access Article
Direct Distillation: A Novel Approach for Efficient Diffusion Model Inference
by
Zilai Li and Rongkai Zhang
J. Imaging 2025, 11(2), 66; https://doi.org/10.3390/jimaging11020066 - 19 Feb 2025
Abstract
Diffusion models are among the most common techniques used for image generation, having achieved state-of-the-art performance by implementing auto-regressive algorithms. However, multi-step inference processes are typically slow and require extensive computational resources. To address this issue, we propose the use of an information bottleneck to reschedule inference using a new sampling strategy, which employs a lightweight distilled neural network to map intermediate stages to the final output. This approach reduces the number of iterations and FLOPS required for inference while ensuring the diversity of generated images. A series of validation experiments were conducted involving the COCO dataset as well as the LAION dataset and two proposed distillation models, requiring 57.5 million and 13.5 million parameters, respectively. Results showed that these models were able to bypass 40–50% of the inference steps originally required by a stable U-Net diffusion model, which included 859 million parameters. In the original sampling process, each inference step required 67,749 million multiply–accumulate operations (MACs), while our two distillate models only required 3954 million MACs and 3922 million MACs per inference step. In addition, our distillation algorithm produced a Fréchet inception distance (FID) of 16.75 in eight steps, which was remarkably lower than those of the progressive distillation, adversarial distillation, and DDIM solver algorithms, which produced FID values of 21.0, 30.0, 22.3, and 24.0, respectively. Notably, this process did not require parameters from the original diffusion model to establish a new distillation model prior to training. Information theory was used to further analyze primary bottlenecks in the FID results of existing distillation algorithms, demonstrating that both GANs and typical distillation failed to achieve generative diversity while implicitly studying incorrect posterior probability distributions. Meanwhile, we use information theory to analyze the latest distillation models including LCM-SDXL, SDXL-Turbo, SDXL-Lightning, DMD, and MSD, which reveals the basic reason for the diversity problem confronted by them, and compare those distillation models with our algorithm in the FID and CLIP Score.
Full article
(This article belongs to the Section AI in Imaging)
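Very schematically, the distillation idea is a small network trained to map an intermediate diffusion state directly to the final output so the sampler can skip the remaining steps. The sketch below uses a toy ConvNet and random tensors as placeholders for the distilled network and teacher trajectories; nothing here reflects the actual architecture:

```python
import torch
import torch.nn as nn

# Toy "distilled" network: intermediate latent in, final latent out.
student = nn.Sequential(
    nn.Conv2d(4, 64, 3, padding=1), nn.SiLU(),
    nn.Conv2d(64, 4, 3, padding=1),
)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(3):  # stand-in for the real training loop
    x_mid = torch.randn(8, 4, 32, 32)    # intermediate state from the teacher
    x_final = torch.randn(8, 4, 32, 32)  # teacher's fully sampled output
    loss = nn.functional.mse_loss(student(x_mid), x_final)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```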
Open Access Article
Impact of Data Capture Methods on 3D Reconstruction with Gaussian Splatting
by
Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
J. Imaging 2025, 11(2), 65; https://doi.org/10.3390/jimaging11020065 - 18 Feb 2025
Abstract
This study examines how different filming techniques can enhance the quality of 3D reconstructions with a particular focus on their use in indoor crime scene investigations. Using Neural Radiance Fields (NeRF) and Gaussian Splatting, we explored how factors like camera orientation, filming speed, data layering, and scanning path affect the detail and clarity of 3D reconstructions. Through experiments in a mock crime scene apartment, we identified optimal filming methods that reduce noise and artifacts, delivering clearer and more accurate reconstructions. Filming in landscape mode, at a slower speed, with at least three layers and focused on key objects produced the most effective results. These insights provide valuable guidelines for professionals in forensics, architecture, and cultural heritage preservation, helping them capture realistic high-quality 3D representations. This study also highlights the potential for future research to expand on these findings by exploring other algorithms, camera parameters, and real-time adjustment techniques.
Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
Open Access Systematic Review
Vision-Based Collision Warning Systems with Deep Learning: A Systematic Review
by
Charith Chitraranjan, Vipooshan Vipulananthan and Thuvarakan Sritharan
J. Imaging 2025, 11(2), 64; https://doi.org/10.3390/jimaging11020064 - 17 Feb 2025
Abstract
Timely prediction of collisions enables advanced driver assistance systems to issue warnings and initiate emergency maneuvers as needed to avoid collisions. With recent developments in computer vision and deep learning, collision warning systems that use vision as the only sensory input have emerged. They are less expensive than those that use multiple sensors, but their effectiveness must be thoroughly assessed. We systematically searched academic literature for studies proposing ego-centric, vision-based collision warning systems that use deep learning techniques. Thirty-one studies among the search results satisfied our inclusion criteria. Risk of bias was assessed with PROBAST. We reviewed the selected studies and answer three primary questions: What are the (1) deep learning techniques used and how are they used? (2) datasets and experiments used to evaluate? (3) results achieved? We identified two main categories of methods: Those that use deep learning models to directly predict the probability of a future collision from input video, and those that use deep learning models at one or more stages of a pipeline to compute a threat metric before predicting collisions. More importantly, we show that the experimental evaluation of most systems is inadequate due to either not performing quantitative experiments or various biases present in the datasets used. Lack of suitable datasets is a major challenge to the evaluation of these systems and we suggest future work to address this issue.
Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications (2nd Edition))
Open Access Review
Unraveling the Role of PET in Cervical Cancer: Review of Current Applications and Future Horizons
by
Divya Yadav, Elisabeth O’Dwyer, Matthew Agee, Silvina P. Dutruel, Sonia Mahajan and Sandra Huicochea Castellanos
J. Imaging 2025, 11(2), 63; https://doi.org/10.3390/jimaging11020063 - 17 Feb 2025
Abstract
FDG PET/CT provides complementary metabolic information with greater sensitivity and specificity than conventional imaging modalities for evaluating local recurrence, nodal, and distant metastases in patients with cervical cancer. PET/CT can also be used in radiation treatment planning, which is the mainstay of treatment. With the implementation of various oncological guidelines, FDG PET/CT has been utilized more frequently in patient management and prognostication. Newer PET tracers targeting the tumor microenvironment offer valuable biologic insights to elucidate the mechanism of treatment resistance and tumor aggressiveness and identify the high-risk patients. Artificial intelligence and machine learning approaches have been utilized more recently in metastatic disease detection, response assessment, and prognostication of cervical cancer.
Full article
(This article belongs to the Special Issue New Perspectives in Medical Image Analysis)
Open Access Article
Non-Hospitalized Long COVID Patients Exhibit Reduced Retinal Capillary Perfusion: A Prospective Cohort Study
by
Clayton E. Lyons, Jonathan Alhalel, Anna Busza, Emily Suen, Nathan Gill, Nicole Decker, Stephen Suchy, Zachary Orban, Millenia Jimenez, Gina Perez Giraldo, Igor J. Koralnik and Manjot K. Gill
J. Imaging 2025, 11(2), 62; https://doi.org/10.3390/jimaging11020062 - 17 Feb 2025
Abstract
The mechanism of post-acute sequelae of SARS-CoV-2 (PASC) is unknown. Using optical coherence tomography angiography (OCT-A), we compared retinal foveal avascular zone (FAZ), vessel density (VD), and vessel length density (VLD) in non-hospitalized Neuro-PASC patients with those in healthy controls in an effort to elucidate the mechanism underlying this debilitating condition. Neuro-PASC patients with a positive SARS-CoV-2 test and neurological symptoms lasting ≥6 weeks were included. Those with prior COVID-19 hospitalization were excluded. Subjects underwent OCT-A with segmentation of the full retinal slab into the superficial (SCP) and deep (DCP) capillary plexus. The FAZ was manually delineated on the full slab in ImageJ. An ImageJ macro was used to measure VD and VLD. OCT-A variables were analyzed using linear mixed-effects models with fixed effects for Neuro-PASC, age, and sex, and a random effect for patient to account for measurements from both eyes. The coefficient of Neuro-PASC status was used to determine statistical significance; p-values were adjusted using the Benjamini–Hochberg procedure. Neuro-PASC patients (N = 30; 60 eyes) exhibited a statistically significant (p = 0.005) reduction in DCP VLD compared to healthy controls (N = 44; 80 eyes). The sole reduction in DCP VLD in Neuro-PASC may suggest preferential involvement of the smallest blood vessels.
Full article
(This article belongs to the Section Medical Imaging)
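The statistical analysis described here, a linear mixed-effects model with a random intercept per patient (both eyes contribute measurements) plus Benjamini-Hochberg adjustment, is readily expressed with statsmodels. A sketch on simulated data; the column names and effect sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Simulated cohort: two eyes per patient, group/age/sex as fixed effects.
rng = np.random.default_rng(1)
n_pat = 74
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), 2),
    "neuro_pasc": np.repeat(rng.integers(0, 2, n_pat), 2),
    "age": np.repeat(rng.normal(45, 12, n_pat), 2),
    "sex": np.repeat(rng.integers(0, 2, n_pat), 2),
})
df["vld"] = 20 - 1.5 * df["neuro_pasc"] + rng.normal(0, 2, len(df))

# Random intercept per patient accounts for the two-eyes-per-subject design.
fit = smf.mixedlm("vld ~ neuro_pasc + age + sex", df,
                  groups=df["patient"]).fit()

# In practice one p-value per OCT-A variable would be collected here,
# then adjusted with the Benjamini-Hochberg (FDR) procedure.
pvals = [fit.pvalues["neuro_pasc"]]
print(fit.params["neuro_pasc"], multipletests(pvals, method="fdr_bh")[1])
```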
Open Access Article
Accurate Prostate Segmentation in Large-Scale Magnetic Resonance Imaging Datasets via First-in-First-Out Feature Memory and Multi-Scale Context Modeling
by
Jingyi Zhu, Xukun Zhang, Xiao Luo, Zhiji Zheng, Kun Zhou, Yanlan Kang, Haiqing Li and Daoying Geng
J. Imaging 2025, 11(2), 61; https://doi.org/10.3390/jimaging11020061 - 16 Feb 2025
Abstract
Prostate cancer, a prevalent malignancy affecting males globally, underscores the critical need for precise prostate segmentation in diagnostic imaging. However, accurate delineation via MRI still faces several challenges: (1) The distinction of the prostate from surrounding soft tissues is impeded by subtle boundaries in MRI images. (2) Regions such as the apex and base of the prostate exhibit inherent blurriness, which complicates edge extraction and precise segmentation. The objective of this study was to precisely delineate the borders of the prostate including the apex and base regions. This study introduces a multi-scale context modeling module to enhance boundary pixel representation, thus reducing the impact of irrelevant features on segmentation outcomes. Utilizing a first-in-first-out dynamic adjustment mechanism, the proposed methodology optimizes feature vector selection, thereby enhancing segmentation outcomes for challenging apex and base regions of the prostate. Segmentation of the prostate on 2175 clinically annotated MRI datasets demonstrated that our proposed MCM-UNet outperforms existing methods. The Average Symmetric Surface Distance (ASSD) and Dice similarity coefficient (DSC) for prostate segmentation were 0.58 voxels and 91.71%, respectively. The prostate segmentation results closely matched those manually delineated by experienced radiologists. Consequently, our method significantly enhances the accuracy of prostate segmentation and holds substantial significance in the diagnosis and treatment of prostate cancer.
Full article
(This article belongs to the Section Medical Imaging)
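A first-in-first-out feature memory can be sketched as a fixed-capacity queue of past feature vectors, with the oldest evicted as new ones arrive. The sketch below is a generic illustration of that mechanism; the capacity, feature size, and update rule are assumptions rather than the paper's design:

```python
from collections import deque

import torch

class FIFOFeatureMemory:
    """Fixed-capacity queue of feature vectors; oldest entries are
    evicted automatically as new ones are pushed."""
    def __init__(self, capacity: int = 32):
        self.buffer = deque(maxlen=capacity)

    def push(self, feat: torch.Tensor):
        self.buffer.append(feat.detach())

    def read(self) -> torch.Tensor:
        """Stack stored features, e.g. for use as attention keys/values."""
        return torch.stack(list(self.buffer))

memory = FIFOFeatureMemory(capacity=4)
for _ in range(6):            # pushing 6 evicts the 2 oldest
    memory.push(torch.randn(256))
print(memory.read().shape)    # torch.Size([4, 256])
```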
Open Access Article
Investigating Eye Movements to Examine Attachment-Related Differences in Facial Emotion Perception and Face Memory
by
Karolin Török-Suri, Kornél Németh, Máté Baradits and Gábor Csukly
J. Imaging 2025, 11(2), 60; https://doi.org/10.3390/jimaging11020060 - 16 Feb 2025
Abstract
Individual differences in attachment orientations may influence how we process emotionally significant stimuli. As facial expressions are one of the most important sources of emotional information, we examined whether there is an association between adult attachment styles (i.e., scores on the ECR questionnaire, which measures the avoidance and anxiety dimensions of attachment), facial emotion perception, and face memory in a neurotypical sample. Trait and state anxiety were also measured as covariates. Eye-tracking was used during the emotion decision task (happy vs. sad faces) and the subsequent facial recognition task; the length of fixations on different face regions was measured as the dependent variable. Linear mixed models suggested that differences during emotion perception may result from longer fixations in individuals with insecure (anxious or avoidant) attachment orientations. This effect was also influenced by individual state and trait anxiety measures. Eye movements during the recognition memory task, however, were not related to either of the attachment dimensions; only trait anxiety had a significant effect on the length of fixations in this condition. The results of our research may contribute to a more accurate understanding of facial emotion perception in the light of attachment styles and their interaction with anxiety characteristics.
Full article
(This article belongs to the Special Issue Human Attention and Visual Cognition (2nd Edition))
Topics
Topic in Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 30 March 2025

Topic in Applied Sciences, Computation, Entropy, J. Imaging, Optics
Color Image Processing: Models and Methods (CIP: MM)
Topic Editors: Giuliana Ramella, Isabella Torcicollo
Deadline: 30 July 2025

Topic in Applied Sciences, Bioengineering, Diagnostics, J. Imaging, Signals
Signal Analysis and Biomedical Imaging for Precision Medicine
Topic Editors: Surbhi Bhatia Khan, Mo Saraee
Deadline: 31 August 2025

Topic in Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025

Special Issues
Special Issue in J. Imaging
Approaches to Person Re-identification: Progress and Challenges
Guest Editor: Feng Liu
Deadline: 31 March 2025

Special Issue in J. Imaging
Hexagonal Image Processing in Computer Vision
Guest Editors: Danny Kowerko, Tobias Schlosser
Deadline: 31 March 2025

Special Issue in J. Imaging
Deep Learning in Biomedical Image Segmentation and Classification: Advancements, Challenges and Applications
Guest Editor: Ebrahim Karami
Deadline: 31 March 2025

Special Issue in J. Imaging
Advances in Biomedical Image Processing and Artificial Intelligence for Computer-Aided Diagnosis in Medicine
Guest Editors: Andrea Loddo, Albert Comelli, Cecilia Di Ruberto, Lorenzo Putzu, Alessandro Stefano
Deadline: 31 March 2025