Search Results (1,694)

Search Parameters:
Keywords = human visual system

16 pages, 5070 KiB  
Article
AI-Driven Insect Detection, Real-Time Monitoring, and Population Forecasting in Greenhouses
by Dimitrios Kapetas, Panagiotis Christakakis, Sofia Faliagka, Nikolaos Katsoulas and Eleftheria Maria Pechlivani
AgriEngineering 2025, 7(2), 29; https://doi.org/10.3390/agriengineering7020029 - 27 Jan 2025
Abstract
Insecticide use in agriculture has increased significantly over the past decades, reaching 774 thousand metric tons in 2022. This widespread reliance on chemical insecticides has substantial economic, environmental, and human health consequences, highlighting the urgent need for sustainable pest management strategies. Early detection, insect monitoring, and population forecasting through Artificial Intelligence (AI)-based methods can enable swift responses, allowing for reduced but more effective insecticide use and replacing traditional labor-intensive, error-prone practices. The main challenge is creating AI models that perform with speed and accuracy, enabling immediate farmer action. This study highlights the innovative potential of such an approach, focusing on the detection and prediction of black aphids using state-of-the-art Deep Learning (DL) models. A dataset of 220 sticky-paper images was captured. The detection system employs a YOLOv10 DL model that achieved an accuracy of 89.1% (mAP50). For insect population prediction, random forest, gradient boosting, LSTM, ARIMA, ARIMAX, and SARIMAX models were evaluated. The ARIMAX model performed best, with a Mean Squared Error (MSE) of 75.61, corresponding to an average deviation of 8.61 insects per day between predicted and actual insect counts. For visualization of the detection results, the DL model was embedded in a mobile application. This holistic approach supports early intervention strategies and sustainable pest management while offering a scalable solution for smart-agriculture environments.
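The forecasting step above is scored by MSE over daily counts. As a toy illustration of that evaluation setup (not the paper's ARIMAX implementation), the sketch below fits a simple AR(1) forecaster to invented daily counts and computes its one-step-ahead MSE:

```python
# Toy illustration of the evaluation setup: fit a simple AR(1) forecaster
# to daily insect counts and score it with Mean Squared Error. The counts
# below are made up; the study's real series came from sticky-paper images.

def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mse(pred, actual):
    return sum((p - t) ** 2 for p, t in zip(pred, actual)) / len(pred)

counts = [3, 5, 8, 12, 18, 26, 31, 40, 52, 61, 70, 84]  # hypothetical daily counts
a, b = fit_ar1(counts[:8])                # train on the first 8 days
preds = [a * x + b for x in counts[7:-1]]  # one-step-ahead forecasts
error = mse(preds, counts[8:])            # evaluate on the last 4 days
```

An ARIMAX model adds exogenous regressors (e.g., temperature) on top of this autoregressive core, which is what gave the paper its best score.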

28 pages, 5280 KiB  
Article
Chloride and Acetonitrile Ruthenium(IV) Complexes: Crystal Architecture, Chemical Characterization, Antibiofilm Activity, and Bioavailability in Biological Systems
by Agnieszka Jabłońska-Wawrzycka, Patrycja Rogala, Grzegorz Czerwonka, Maciej Hodorowicz, Justyna Kalinowska-Tłuścik and Marta Karpiel
Molecules 2025, 30(3), 564; https://doi.org/10.3390/molecules30030564 - 26 Jan 2025
Abstract
Due to the emergence of drug resistance, many antimicrobial medications are becoming less effective, complicating the treatment of infections. Therefore, it is crucial to develop new active agents. This article explores ruthenium(IV) complexes with the following formulas: (Hdma)2(HL)2[RuIVCl6]·2Cl·2H2O (1), where Hdma is protonated dimethylamine and L is 2-hydroxymethylbenzimidazole, and [RuIVCl4(AN)2]·H2O (2), where AN is acetonitrile. The paper delves into the physicochemical characteristics and crystal structures of these complexes, employing techniques such as spectroscopy (IR, UV–Vis), electrochemistry (CV, DPV), and X-ray crystallography. Hirshfeld surface analysis was also performed to visualize intermolecular interactions. Furthermore, the potential antibiofilm activity of the complexes against Pseudomonas aeruginosa PAO1 was investigated, and the effect of the compounds on the production of pyoverdine, one of the virulence factors of the Pseudomonas strain, was assessed. The results show that complex 1 in particular reduces biofilm formation and pyoverdine production. Additionally, the bioavailability of these complexes in biological systems (probed by fluorescence quenching of human serum albumin (HSA) and molecular docking studies) is discussed, examining how their chemical properties influence their interactions with biological molecules and their potential therapeutic applications.

21 pages, 1368 KiB  
Article
Radar Signal Processing and Its Impact on Deep Learning-Driven Human Activity Recognition
by Fahad Ayaz, Basim Alhumaily, Sajjad Hussain, Muhammad Ali Imran, Kamran Arshad, Khaled Assaleh and Ahmed Zoha
Sensors 2025, 25(3), 724; https://doi.org/10.3390/s25030724 - 25 Jan 2025
Abstract
Human activity recognition (HAR) using radar technology is becoming increasingly valuable for applications in areas such as smart security systems, healthcare monitoring, and interactive computing. This study investigates the integration of convolutional neural networks (CNNs) with conventional radar signal processing methods to improve the accuracy and efficiency of HAR. Three distinct two-dimensional radar processing techniques, namely range-fast Fourier transform (FFT)-based time-range maps, time-Doppler-based short-time Fourier transform (STFT) maps, and smoothed pseudo-Wigner–Ville distribution (SPWVD) maps, are evaluated in combination with four state-of-the-art CNN architectures: VGG-16, VGG-19, ResNet-50, and MobileNetV2. This study positions radar-generated maps as a form of visual data, bridging the radar signal processing and image representation domains while ensuring privacy in sensitive applications. In total, twelve CNN and preprocessing configurations are analyzed, focusing on the trade-offs between preprocessing complexity and recognition accuracy, both essential for real-time applications. Among these configurations, MobileNetV2 combined with STFT preprocessing showed an ideal balance, achieving high computational efficiency and an accuracy rate of 96.30%, with a spectrogram generation time of 220 ms and an inference time of 2.57 ms per sample. The comprehensive evaluation underscores the importance of interpretable visual features for resource-constrained environments, expanding the applicability of radar-based HAR systems to domains such as augmented reality, autonomous systems, and edge computing.
(This article belongs to the Special Issue Non-Intrusive Sensors for Human Activity Detection and Recognition)
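One of the three preprocessing routes above, the STFT map, can be sketched in a few lines: a 1-D signal is windowed, Fourier-transformed frame by frame, and stacked into a 2-D magnitude map that a CNN can consume as an image. The chirp input below is synthetic, not radar data:

```python
import numpy as np

# Minimal short-time Fourier transform (STFT) magnitude map: rows are
# frequency bins, columns are time frames. A rising-frequency chirp stands
# in for a radar micro-Doppler return.

def stft_map(signal, win=64, hop=32):
    """Windowed frames -> per-frame rFFT -> (freq_bins, time_frames) map."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

t = np.linspace(0, 1, 1024)
chirp = np.sin(2 * np.pi * (20 + 60 * t) * t)  # frequency rises over time
spec = stft_map(chirp)  # shape: (33 frequency bins, 31 time frames)
```

In the paper's pipeline a map like `spec` would be rendered as an image and fed to VGG, ResNet, or MobileNetV2.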

22 pages, 1855 KiB  
Article
Estimation of Pressure Pain in the Lower Limbs Using Electrodermal Activity, Tissue Oxygen Saturation, and Heart Rate Variability
by Youngho Kim, Seonggeon Pyo, Seunghee Lee, Changeon Park and Sunghyuk Song
Sensors 2025, 25(3), 680; https://doi.org/10.3390/s25030680 - 23 Jan 2025
Abstract
Quantification of pain or discomfort induced by pressure is essential for understanding human responses to physical stimuli and improving user interfaces. Pain research has investigated the physiological signals associated with discomfort and pain perception. This study analyzed changes in electrodermal activity (EDA), tissue oxygen saturation (StO2), heart rate variability (HRV), and Visual Analog Scale (VAS) scores under pressures of 10, 20, and 30 kPa applied for 3 min to the thigh, knee, and calf in a seated position. Twenty participants were tested, and the relationships between biosignals, pressure intensity, and pain levels were evaluated using Friedman tests and post hoc analyses. Multiple linear regression models were used to predict VAS and pressure, and five machine learning models (SVM, Logistic Regression, Random Forest, MLP, KNN) were applied to classify pain levels (no pain: VAS 0, low: VAS 1–3, moderate: VAS 4–6, high: VAS 7–10) and pressure intensity. The results showed that higher pressure intensity and pain levels affected sympathetic nervous system responses and tissue oxygen saturation. Most EDA features and StO2 changed significantly with pressure intensity and pain level, while the NN interval and HF component among the HRV features showed significant differences based on pressure intensity or pain level. Regression analysis combining biosignal features achieved a maximum R² of 0.668 in predicting VAS and pressure intensity. The four-level classification model reached an accuracy of 88.2% for pain levels and 81.3% for pressure intensity. These results demonstrate the potential of EDA, StO2, and HRV signals, and combinations of biosignal features, for pain quantification and prediction.
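The four-level labelling above (no pain: VAS 0; low: 1–3; moderate: 4–6; high: 7–10) and the classification step can be sketched with a toy nearest-neighbour classifier on two invented biosignal features; the study's actual models (SVM, Random Forest, MLP, etc.) and feature sets differ:

```python
# Sketch of the four-level pain labelling plus a toy k-nearest-neighbour
# classifier on two hypothetical normalized features (e.g., an EDA feature
# and StO2). All feature values here are invented for illustration.

def pain_level(vas):
    """Map a 0-10 VAS score to the paper's four classes."""
    if vas == 0:
        return "none"
    if vas <= 3:
        return "low"
    if vas <= 6:
        return "moderate"
    return "high"

def knn_predict(train, query, k=3):
    """train: list of ((f1, f2), label); return majority label of k nearest."""
    ranked = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2
                                         + (p[0][1] - query[1]) ** 2)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

train = [((0.10, 0.90), "none"), ((0.30, 0.80), "low"),
         ((0.60, 0.50), "moderate"), ((0.90, 0.20), "high"),
         ((0.20, 0.85), "low"), ((0.85, 0.25), "high")]
label = knn_predict(train, (0.88, 0.22))  # a strongly "high"-like sample
```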
12 pages, 1610 KiB  
Article
Rapid Detection of Alpha-Fetoprotein (AFP) with Lateral Flow Aptasensor
by Meijing Ma, Min Zhang, Jiahui Wang, Yurui Zhou, Xueji Zhang and Guodong Liu
Molecules 2025, 30(3), 484; https://doi.org/10.3390/molecules30030484 - 22 Jan 2025
Abstract
We present a lateral flow aptasensor for the visual detection of alpha-fetoprotein (AFP) in human serum. Leveraging the precise molecular recognition capabilities of aptamers and the distinct optical features of gold nanoparticles, a model system utilizing AFP as the target analyte, along with a pair of aptamer probes, is implemented to establish proof of concept on standard lateral flow test strips. This is the first report of an antibody-free lateral flow assay using aptamers as recognition probes for the detection of AFP. The analysis circumvents the numerous incubation and washing steps typically involved in most current aptamer-based protein assays. Qualitative analysis involves observing color changes in the test area, while quantitative data are obtained by measuring the optical response in the test zone using a portable strip reader. The biosensor exhibits a linear detection range for AFP concentrations between 10 and 100 ng/mL, with a minimum detection limit of 10 ng/mL. Additionally, it has been successfully applied to detect AFP in human serum samples. The use of aptamer-functionalized gold nanoparticle probes in a lateral flow assay offers great promise for point-of-care applications and fast, on-site detection.
(This article belongs to the Section Analytical Chemistry)
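Quantitative readout with a strip reader, as described above, typically rests on a calibration curve over the linear range (here 10–100 ng/mL). A least-squares sketch with invented reader intensities (not the paper's data):

```python
# Hypothetical calibration of a lateral-flow strip reader: fit a line to
# standards across the linear range, then invert it to estimate the AFP
# concentration of an unknown sample. Signal values are invented.

def linear_fit(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

conc = [10, 25, 50, 75, 100]          # ng/mL calibration standards
signal = [120, 290, 585, 880, 1170]   # invented test-zone intensities
slope, intercept = linear_fit(conc, signal)
estimate = (600 - intercept) / slope  # concentration for a reading of 600
```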

34 pages, 8852 KiB  
Article
A Biologically Inspired Model for Detecting Object Motion Direction in Stereoscopic Vision
by Yuxiao Hua, Sichen Tao, Yuki Todo, Tianqi Chen, Zhiyu Qiu and Zheng Tang
Symmetry 2025, 17(2), 162; https://doi.org/10.3390/sym17020162 - 22 Jan 2025
Abstract
This paper presents a biologically inspired model, the Stereoscopic Direction Detection Mechanism (SDDM), designed to detect motion direction in three-dimensional space. The model addresses two key challenges: the lack of biological interpretability in current deep learning models and the limited exploration of binocular functionality in existing biologically inspired models. Rooted in the fundamental concept of 'disparity', the SDDM is structurally divided into components representing the left and right eyes. Each component mimics the layered architecture of the human visual system, from the retinal layer to the primary visual cortex. By replicating the functions of the various cells involved in stereoscopic motion direction detection, the SDDM offers enhanced biological plausibility and interpretability. Extensive experiments were conducted to evaluate the model's detection accuracy for various objects and its robustness against different types of noise. Additionally, to ascertain whether the SDDM matches the performance of established deep learning models in three-dimensional motion direction detection, its performance was benchmarked against EfficientNet and ResNet under identical conditions. The results demonstrate that the SDDM not only exhibits strong performance and robust biological interpretability but also requires significantly lower hardware and time costs than advanced deep learning models.
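The 'disparity' the SDDM is rooted in is the horizontal offset between where a feature lands on the left and right images; larger disparity means a closer object. A minimal block-matching sketch of that idea, on tiny synthetic strips rather than the model's retinal layers:

```python
import numpy as np

# Classic disparity estimation by block matching: slide the right image
# horizontally and pick the shift that minimizes the sum of absolute
# differences (SAD) against the left image. Inputs here are 1x16 synthetic
# strips with a single bright feature, not the SDDM's actual inputs.

def disparity(left, right, max_shift=4):
    """Return the horizontal shift (0..max_shift) with the best match."""
    w = right.shape[1]
    errors = [np.abs(left[:, max_shift:-max_shift]
                     - right[:, max_shift - d:w - max_shift - d]).sum()
              for d in range(max_shift + 1)]
    return int(np.argmin(errors))

left = np.zeros((1, 16))
left[0, 8] = 1.0    # feature at column 8 in the left eye
right = np.zeros((1, 16))
right[0, 6] = 1.0   # same feature lands at column 6 in the right eye
d = disparity(left, right)  # recovered disparity: 2 columns
```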

20 pages, 1381 KiB  
Article
Ecological Trait Differences Are Associated with Gene Expression in the Primary Visual Cortex of Primates
by Trisha M. Zintel, John J. Ely, Mary Ann Raghanti, William D. Hopkins, Patrick R. Hof, Chet C. Sherwood, Jason M. Kamilar, Amy L. Bauernfeind and Courtney C. Babbitt
Genes 2025, 16(2), 117; https://doi.org/10.3390/genes16020117 - 22 Jan 2025
Abstract
Background/Objectives: Primate species differ drastically from most other mammals in how they visually perceive their environments, which is particularly important for foraging, predator avoidance, and detection of social cues. Although it is well established that primates display diversity in color vision and various ecological specializations, it is not understood how visual system characteristics and ecological adaptations may be associated with gene expression levels within the primary visual cortex (V1). Methods: We performed RNA-Seq on V1 tissue samples from 28 individuals, representing 13 species of primates, including hominoids, cercopithecoids, and platyrrhines. We explored trait-dependent differential expression (DE) by contrasting species with differing visual system phenotypes and ecological traits. Results: Between 4% and 25% of genes were determined to be differentially expressed in primates that varied in type of color vision (trichromatic or polymorphic di/trichromatic), habitat use (arboreal or terrestrial), group size (large or small), and primary diet (frugivorous, folivorous, or omnivorous). Conclusions: Interestingly, our DE analyses revealed that humans and chimpanzees showed the most marked differences between any two species, even though they are separated by only 6–8 million years of independent evolution. These results show a combination of species-specific and trait-dependent differences in the evolution of gene expression in the primate visual cortex.
(This article belongs to the Section Population and Evolutionary Genetics and Genomics)
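A trait-dependent expression contrast of the kind described can be sketched for a single hypothetical gene: a log2 fold change between two trait groups plus a Welch t statistic. All values below are invented; the study's DE analysis used full RNA-Seq pipelines, not this two-group arithmetic:

```python
import math

# Toy differential-expression contrast for one gene between two trait
# groups (e.g., arboreal vs terrestrial species). Expression values are
# invented normalized counts.

def log2_fold_change(a, b):
    """log2 ratio of group means."""
    return math.log2((sum(a) / len(a)) / (sum(b) / len(b)))

def t_statistic(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

arboreal = [220.0, 250.0, 240.0, 230.0]      # hypothetical group 1
terrestrial = [110.0, 130.0, 120.0, 125.0]   # hypothetical group 2
lfc = log2_fold_change(arboreal, terrestrial)  # ~1, i.e. ~2x expression
t = t_statistic(arboreal, terrestrial)
```

Real DE tools (e.g., negative-binomial count models) add dispersion estimation and multiple-testing correction on top of this basic contrast.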

17 pages, 248 KiB  
Review
Sustainable Architecture and Human Health: A Case for Effective Circadian Daylighting Metrics
by Bhaswati Mukherjee and Mohamed Boubekri
Buildings 2025, 15(3), 315; https://doi.org/10.3390/buildings15030315 - 21 Jan 2025
Abstract
The development of the fluorescent lamp and the air-conditioning system allowed buildings to be lit inexpensively without relying on daylighting to save energy, as was the case during the incandescent lamp era. Consequently, architects were able to design buildings with deep floor plates for maximum occupancy, placing workstations far from windows since daylighting was no longer a necessity. Floor-to-ceiling heights became lower to minimize the inhabitable volume that needed to be cooled or heated. With the rising cost of land in some major American cities such as New York City and Chicago at the beginning of the twentieth century, developers sought to optimize their investments by erecting tall structures, giving rise to densely inhabited city centers with massive street canyons that limit sunlight access in the streets. Today, there is growing awareness of the impact of the built environment on people's health, especially the health benefits of natural light. The fact that buildings, through their shapes and envelopes, filter a large amount of daylight, which may affect occupants' health and well-being, should prompt architects and building developers to take this issue seriously. The amount and quality of light we receive daily affect many of our bodily functions and consequently several aspects of our health and well-being. The human circadian rhythm is entrained by intrinsically photosensitive retinal ganglion cells (ipRGCs) in our eyes, which are responsible for non-visual responses due to the presence of a short-wavelength-sensitive pigment called melanopsin. The entrainment of the circadian rhythm depends on several factors, such as the intensity, wavelength, timing, and duration of light exposure. Recently, this field of research has gained popularity, and several researchers have tried to create metrics that translate photopic light, the standard way of measuring visible light, into a measure of circadian-effective lighting. This paper discusses the relationship between different parameters of daylighting and their non-visual effects on the human body. It also summarizes the existing metrics of daylighting, especially those focusing on its effects on the human circadian rhythm, and their shortcomings. Finally, it discusses areas of future research that could address these shortcomings and potentially pave the way for a universally accepted standardized metric.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
22 pages, 11667 KiB  
Article
Deep Learning-Enabled Visual Inspection of Gap Spacing in High-Precision Equipment: A Comparative Study
by Xiuling Li, Fusheng Li, Huan Yang and Peng Wang
Machines 2025, 13(2), 74; https://doi.org/10.3390/machines13020074 - 21 Jan 2025
Abstract
In the realm of industrial quality control, visual inspection plays a pivotal role in ensuring product precision and consistency. It also enables non-contact inspection, protecting products from potential damage, while timely monitoring facilitates quick decision-making. However, traditional methods, such as manual inspection using feeler gauges, are time-consuming, labor-intensive, and prone to human error. To address these limitations, this study proposes a deep learning-based visual inspection system for measuring gap spacing in high-precision equipment. Utilizing the DeepLSD algorithm, the system integrates traditional and deep learning techniques to enhance line segment detection, resulting in more robust and accurate inspection outcomes. Key performance improvements were realized, with the proposed system functioning as deep learning-enabled, high-precision mobile equipment for inspecting gap spacing in real time. Through a comparative analysis with the traditional feeler gauge method, the proposed system demonstrated significant improvements in inspection time, accuracy, and user experience, while reducing workload. Experimental results validate the effectiveness and efficiency of the proposed approach, highlighting its potential for widespread application in industrial quality inspection.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
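Once a line-segment detector such as DeepLSD has produced the two segments bounding a gap, the spacing measurement itself reduces to point-to-line geometry. A sketch with hypothetical endpoints in millimetres (the paper's actual pipeline is not reproduced here):

```python
import math

# Gap spacing from two detected line segments: the perpendicular distance
# from one edge's endpoints to the other edge's supporting line. Segment
# coordinates are invented, in millimetres.

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))  # |cross product|
    return num / math.hypot(bx - ax, by - ay)

def gap_spacing(seg1, seg2):
    """Mean distance from seg2's endpoints to seg1's supporting line."""
    a, b = seg1
    return sum(point_line_distance(p, a, b) for p in seg2) / 2

edge1 = ((0.0, 0.0), (100.0, 0.0))     # lower edge of the gap
edge2 = ((0.0, 0.42), (100.0, 0.38))   # upper edge, slightly tilted
spacing = gap_spacing(edge1, edge2)    # mean gap: 0.40 mm
```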

19 pages, 25413 KiB  
Article
No-Reference Image Quality Assessment with Moving Spectrum and Laplacian Filter for Autonomous Driving Environment
by Woongchan Nam, Taehyun Youn and Chunghun Ha
Abstract
The increasing integration of autonomous driving systems into modern vehicles heightens the significance of Image Quality Assessment (IQA), as it pertains directly to vehicular safety. In this context, the development of metrics that can emulate the Human Visual System (HVS) in assessing image quality assumes critical importance. Given that blur is often the primary aberration in images captured by aging or deteriorating camera sensors, this study introduces a No-Reference (NR) IQA model termed BREMOLA (Blind/Referenceless Model via Moving Spectrum and Laplacian Filter). The model is designed to respond sensitively to varying degrees of blur in images. BREMOLA employs the Fourier transform to quantify the decline in image sharpness associated with increased blur. Deviations in the Fourier spectrum arising from factors such as nighttime lighting or the presence of various objects are then normalized using the Laplacian filter. Experimental application of the BREMOLA model demonstrates its capability to differentiate between images processed with a 3 × 3 average filter and their unprocessed counterparts. Additionally, the model effectively mitigates the variance introduced in the Fourier spectrum by variables such as nighttime conditions, object count, and environmental factors. BREMOLA thus presents a robust approach to IQA in the specific context of autonomous driving systems.
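BREMOLA's two ingredients, a Fourier-spectrum sharpness measure and a Laplacian filter, can be illustrated in miniature: blur removes high-frequency spectral energy, and a Laplacian high-pass responds only to local intensity changes. This is a generic reimplementation of those ideas, not the paper's actual metric:

```python
import numpy as np

# (1) High-frequency energy ratio from the 2-D Fourier spectrum: drops as
#     blur increases. (2) A 3x3 Laplacian high-pass filter: zero on flat
#     regions, large on edges. Both operate on a synthetic random image.

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])

def laplacian_filter(img):
    """Valid-region 3x3 Laplacian convolution (borders left at zero)."""
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out[1:-1, 1:-1] += LAPLACIAN[dy, dx] * img[dy:dy + h - 2,
                                                       dx:dx + w - 2]
    return out

def high_freq_ratio(img, cutoff=8):
    """Fraction of FFT magnitude beyond `cutoff` cycles from the DC term."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    yy, xx = np.mgrid[:spec.shape[0], :spec.shape[1]]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 > cutoff ** 2
    return spec[mask].sum() / spec.sum()

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4  # crude 2x2 box blur
```

The sanity check is that `high_freq_ratio(blurred)` comes out below `high_freq_ratio(sharp)`, which is the monotone behaviour a blur-sensitive NR-IQA score needs.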

33 pages, 1112 KiB  
Review
A Comprehensive Review of Vision-Based Sensor Systems for Human Gait Analysis
by Xiaofeng Han, Diego Guffanti and Alberto Brunete
Sensors 2025, 25(2), 498; https://doi.org/10.3390/s25020498 - 16 Jan 2025
Abstract
Analysis of the human gait represents a fundamental area of investigation within the broader domains of biomechanics, clinical research, and numerous other interdisciplinary fields. The progression of visual sensor technology and machine learning algorithms has enabled substantial developments in the creation of human gait analysis systems. This paper presents a comprehensive review of the advancements and recent findings in the field of vision-based human gait analysis systems over the past five years, with a special emphasis on the role of vision sensors, machine learning algorithms, and technological innovations. The relevant papers were analyzed using the PRISMA method, and 72 articles that met the criteria for this research project were identified. The analysis details the most commonly used visual sensor systems, machine learning algorithms, human gait analysis parameters, optimal camera placements, and gait parameter extraction methods. The findings indicate that non-invasive depth cameras are gaining popularity within this field. Furthermore, deep learning algorithms, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, are being employed with increasing frequency. This review seeks to establish the foundations for future innovations that will facilitate the development of more effective, versatile, and user-friendly gait analysis tools, with the potential to significantly enhance human mobility, health, and overall quality of life. This work was supported by [GOBIERNO DE ESPANA/PID2023-150967OB-I00].
(This article belongs to the Special Issue Advanced Sensors in Biomechanics and Rehabilitation)

23 pages, 5966 KiB  
Article
Intelligent Human–Computer Interaction for Building Information Models Using Gesture Recognition
by Tianyi Zhang, Yukang Wang, Xiaoping Zhou, Deli Liu, Jingyi Ji and Junfu Feng
Inventions 2025, 10(1), 5; https://doi.org/10.3390/inventions10010005 - 16 Jan 2025
Abstract
Human–computer interaction (HCI) with three-dimensional (3D) Building Information Models (BIMs) is a crucial ingredient in enhancing the user experience and fostering the value of BIM. Current BIM software mostly uses the keyboard, mouse, or touchscreen as media for HCI. Using these hardware devices may lead to space constraints and a lack of visual intuitiveness. Somatosensory interaction, such as gesture interaction, is an emerging modality that requires no equipment or direct touch and presents a potential approach to solving these problems. This paper proposes a computer-vision-based gesture interaction system for BIM. First, a set of gestures for BIM model manipulation was designed, grounded in human ergonomics. These gestures include selection, translation, scaling, rotation, and restoration of the 3D model. Second, a gesture understanding algorithm dedicated to 3D model manipulation is introduced. An interaction system for 3D models based on machine vision and gesture recognition was then developed, and a series of systematic experiments was conducted to confirm its effectiveness. In various environments, including pure white backgrounds, offices, and conference rooms, and even when users wore gloves, the system achieved an accuracy rate of over 97% and maintained a frame rate between 26 and 30 frames per second. The final experimental results show that the method performs well, confirming its feasibility, accuracy, and fluidity. Somatosensory interaction with 3D models enhances the interaction experience and operational efficiency between the user and the model, further expanding the application scenarios of BIM.
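One plausible gesture-to-manipulation mapping of the kind described is to let the thumb-to-index "pinch" distance drive the model's scale operation. The 2-D landmark coordinates below are given directly and are purely hypothetical; the paper's machine-vision recognition step that would produce them is not reproduced:

```python
import math

# Toy pinch-to-scale mapping: the ratio of the current to the previous
# thumb-index distance becomes the zoom factor applied to the 3D model.
# Landmark coordinates are invented normalized image coordinates.

def pinch_distance(thumb, index):
    """Euclidean distance between thumb-tip and index-tip landmarks."""
    return math.hypot(thumb[0] - index[0], thumb[1] - index[1])

def scale_factor(prev_pinch, cur_pinch):
    """Widening the pinch zooms in; narrowing it zooms out."""
    return cur_pinch / prev_pinch

prev = pinch_distance((0.40, 0.50), (0.46, 0.58))  # frame t-1: fingers close
cur = pinch_distance((0.35, 0.45), (0.47, 0.61))   # frame t: fingers spread
zoom = scale_factor(prev, cur)  # pinch width doubled -> 2x zoom
```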

32 pages, 3661 KiB  
Systematic Review
Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It
by Yasir Hafeez, Khuhed Memon, Maged S. AL-Quraishi, Norashikin Yahya, Sami Elferik and Syed Saad Azhar Ali
Diagnostics 2025, 15(2), 168; https://doi.org/10.3390/diagnostics15020168 - 13 Jan 2025
Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not yet been able to work its way into diagnostic medicine and standard clinical practice. Although data scientists, researchers, and medical experts have been working toward designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnosis of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine precious human lives are on the line, and hence there is no room for even the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: We present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for the differential diagnosis of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases. We also present medical domain experts' opinions and summarize the challenges that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies are also discussed. In addition, the opinions of seven medical experts from around the world are presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to focus on enhancing the performance accuracies of DL regimens, with less attention paid to the authenticity and usefulness of explanations. A shortage of ground-truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough, human-professor-like explanations would be required to build the trust of healthcare professionals. Attention to these factors, along with the legal, ethical, safety, and security issues, can bridge the current gap between XAI and routine clinical practice.
12 pages, 4096 KiB  
Article
Benzo[1,2-b:6,5-b’]dithiophene-4,5-diamine: A New Fluorescent Probe for the High-Sensitivity and Real-Time Visual Monitoring of Phosgene
by Yingzhen Zhang, Jun Xiao, Ruiying Peng, Xueliang Feng, Haimei Mao, Kunming Liu, Zhenzhong Liu and Chunxin Ma
Sensors 2025, 25(2), 407; https://doi.org/10.3390/s25020407 - 11 Jan 2025
Viewed by 513
Abstract
The detection of highly toxic chemicals such as phosgene is crucial for addressing the severe threats to human health and public safety posed by terrorist attacks and industrial accidents. However, timely and precise monitoring of phosgene at low cost remains a significant challenge. This work is the first to report a novel fluorescent system based on the Intramolecular Charge Transfer (ICT) effect, which can rapidly detect phosgene in both solution and gas phases with high sensitivity by means of a benzo[1,2-b:6,5-b’]dithiophene-4,5-diamine (BDTA) probe. Among existing detection methods, this fluorescent system stands out: it responds to phosgene within a mere 30 s and has a detection limit as low as 0.16 μM in solution. Furthermore, the sensing mechanism was rigorously validated through high-resolution mass spectrometry (HRMS) and density functional theory (DFT) calculations. As a result, this fluorescent probe system can be adapted for real-time, high-sensitivity sensing and used as a test strip for visual monitoring without specialized equipment, and it also provides a new strategy for the fluorescent detection of other toxic materials. Full article
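Detection limits like the 0.16 μM reported above are conventionally derived from a calibration curve via the IUPAC rule LOD = 3σ_blank / slope. A generic sketch of that calculation with illustrative numbers (the calibration data below are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical calibration data: fluorescence intensity versus
# phosgene concentration (uM); values are illustrative only.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
intensity = np.array([2.0, 52.0, 103.0, 201.0, 399.0])

# Linear calibration fit: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

# Sample standard deviation of repeated blank measurements (illustrative).
blank_sd = np.std([1.8, 2.1, 2.0, 1.9, 2.2], ddof=1)

# IUPAC detection limit: LOD = 3 * sigma_blank / slope (in uM here)
lod = 3.0 * blank_sd / slope
```

With real data, `conc` and `intensity` would be replaced by the measured calibration series, and the same three-line calculation yields the reported detection limit.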
(This article belongs to the Collection Fluorescent Biosensors)
20 pages, 10203 KiB  
Article
Emotional State as a Key Driver of Public Preferences for Flower Color
by Juan She, Renwu Wu, Bingling Pi, Jie Huang and Zhiyi Bao
Horticulturae 2025, 11(1), 54; https://doi.org/10.3390/horticulturae11010054 - 7 Jan 2025
Viewed by 557
Abstract
Flowers, as integral elements of urban landscapes, are critical not only for aesthetic purposes but also for fostering human–nature interactions in green spaces. However, research on flower color preferences has largely been descriptive, with little exploration of the mechanisms influencing these preferences, such as economic and social factors. This study created visual samples through precise color adjustment techniques and used the L*, a*, and b* parameters of the CIELAB color system to quantify the flower colors of the survey samples, conducting an online survey with 354 Chinese residents. The driving factors of this complex aesthetic process were unveiled through a comprehensive analysis using a Generalized Additive Model (GAM), a piecewise Structural Equation Model (SEM), and linear regression models. The results show that the public’s flower color preference is primarily related to the a* and b* parameters, which represent the color-opponent dimensions of the CIELAB color space, and is not significantly related to L* (lightness). Factors such as age, annual household income level (AI), personal income sources (PI), nature experience, and emotional state (TMD) significantly influence color preferences, with emotional state identified as the most critical factor. Lastly, linear regression models further explain the potential mechanisms of the influencing factors. This study proposes a framework to assist urban planners in selecting flower colors that resonate with diverse populations, enhancing both the attractiveness of urban green spaces and their potential to promote pro-environmental behavior. By aligning flower color design with public preferences, this study contributes to sustainable urban planning practices aimed at improving human well-being and fostering deeper connections with nature. Full article
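The L*, a*, b* quantification used above follows the standard CIELAB colorimetric pipeline: undo the sRGB gamma, convert to CIE XYZ, then normalize by a reference white. A generic sketch of that conversion (D65 white point, 2° observer), not the authors' exact tooling:

```python
def srgb_to_cielab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point, 2-degree observer)."""
    # 1) undo sRGB gamma -> linear RGB in [0, 1]
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2) linear RGB -> CIE XYZ (sRGB/D65 conversion matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3) XYZ -> CIELAB, normalized by the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 \
            else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0            # lightness
    a_star = 500.0 * (fx - fy)       # green(-) to red(+)
    b_star = 200.0 * (fy - fz)       # blue(-) to yellow(+)
    return L, a_star, b_star
```

For example, `srgb_to_cielab(255, 255, 255)` gives L* ≈ 100 with a* and b* near zero, as expected for the reference white; a saturated flower color would land far from the origin in the a*–b* plane, which is exactly the dimension the study found to drive preference.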