Search Results (951)

Search Parameters:
Keywords = cross-modal

36 pages, 3356 KiB  
Systematic Review
Traditional and Complementary Medicine Use among Cancer Patients in Asian Countries: A Systematic Review and Meta-Analysis
by Soojeung Choi, Sangita Karki Kunwor, Hyeabin Im, Dain Choi, Junghye Hwang, Mansoor Ahmed and Dongwoon Han
Cancers 2024, 16(18), 3130; https://doi.org/10.3390/cancers16183130 - 11 Sep 2024
Viewed by 237
Abstract
Globally, cancer patients frequently use traditional and complementary medicine (T&CM) during their treatment for various reasons. The primary concerns regarding the use of T&CM among cancer patients are the potential risks associated with interactions between pharmaceuticals and T&CM, as well as the risk of noncompliance with conventional cancer treatments. Despite the higher prevalence of T&CM use in Asia, driven by cultural, historical, and resource-related factors, no prior review has estimated the prevalence and influencing factors of T&CM use and disclosure among cancer patients in this region. This study aims to examine the prevalence and disclosure rates of T&CM use among cancer patients in Asia and to assess the factors influencing its use across different cancer treatment settings. A systematic search on T&CM use was conducted using four databases (PubMed, EMBASE, Web of Science, and CINAHL) from inception to January 2023. Quality was assessed using the Appraisal Tool for Cross-Sectional Studies (AXIS). A random-effects model was used to estimate the pooled prevalence of T&CM use, and data analysis was performed using Stata Version 16.0. Among the 4849 records retrieved, 41 eligible studies conducted in 14 Asian countries were included, involving a total of 14,976 participants. The pooled prevalence of T&CM use was 49.3% (range 24.0% to 94.8%), and the disclosure rate of T&CM use was 38.2% (11.9% to 82.5%). The most commonly used T&CM modalities were herbal medicines and traditional medicine. Females were 22.0% more likely to use T&CM than males. A subgroup analysis revealed that the highest prevalence of T&CM use was found in studies conducted in East Asia (62.4%) and in those covered by both national and private insurance (55.8%). The disclosure rate of T&CM use to physicians remains low, and the factors influencing this disclosure are still insufficiently explored. Since the disclosure of T&CM use is a crucial indicator of patient safety and of the quality of cancer treatment, future research should focus on identifying the determinants of non-disclosure. Full article
(This article belongs to the Section Cancer Drug Development)
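As a rough illustration of the random-effects pooling step this abstract describes (the review itself used Stata 16), the following Python sketch computes a DerSimonian-Laird pooled prevalence on the logit scale; the study counts are invented for the example, not data from the review.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of prevalences.
# The event/total counts below are hypothetical, not data from the review.
import numpy as np

def pooled_prevalence_dl(events, totals):
    """DerSimonian-Laird random-effects pooled prevalence (logit scale)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                  # logit-transformed prevalences
    v = 1 / events + 1 / (totals - events)   # approximate within-study variances
    w = 1 / v                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)          # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    return 1 / (1 + np.exp(-y_re))           # back-transform to a proportion

print(pooled_prevalence_dl(events=[24, 60, 95], totals=[100, 125, 100]))
```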

8 pages, 1673 KiB  
Case Report
Anterior Segment Optical Coherence Tomography for the Tailored Treatment of Mooren’s Ulcer: A Case Report
by Luca Lucchino, Elvia Mastrogiuseppe, Francesca Giovannetti, Alice Bruscolini, Marco Marenco and Alessandro Lambiase
J. Clin. Med. 2024, 13(18), 5384; https://doi.org/10.3390/jcm13185384 - 11 Sep 2024
Viewed by 224
Abstract
Background: Mooren’s ulcer (MU) is a rare and debilitating form of peripheral ulcerative keratitis (PUK), characterized by a crescent-shaped ulcer with a distinctive overhanging edge at the corneal periphery. If left untreated, MU can lead to severe complications such as corneal perforation and blindness. Despite various treatment approaches, including anti-inflammatory and cytotoxic drugs, as well as surgical interventions, there is no clear evidence of the most effective treatment due to the lack of randomized controlled trials. Anterior segment optical coherence tomography (AS-OCT) is a non-invasive imaging technique that provides high-resolution cross-sectional images of the anterior segment, allowing for accurate evaluation of corneal ulcer characteristics, including depth, extent, and disease progression. Methods: We present the case of a 20-year-old male patient with MU managed using a stepladder approach, which included local and systemic corticosteroids, limbal conjunctival resection, and Cyclosporine A 1% eye drops. The patient underwent consecutive AS-OCT examinations and strict follow-up to tailor systemic and topical therapy. Results: Complete healing of the corneal ulcer with resolution of the inflammatory process was achieved. There was no recurrence of the disease at the 7-month follow-up. AS-OCT demonstrated progressive reorganization and thickening of the stromal tissue until the complete recovery of stromal thickness. Conclusions: AS-OCT imaging allowed for the accurate evaluation of corneal ulcer characteristics, facilitating informed decision-making regarding the use of systemic immunosuppression, surgical interventions, and local immunomodulation, and providing a detailed and precise assessment of disease progression. This enabled a tailored and effective treatment strategy for the patient and played a critical role in guiding therapy. Full article
(This article belongs to the Special Issue Clinical Utility of Optical Coherence Tomography in Ophthalmology)

14 pages, 4441 KiB  
Article
AI-Enabled Sensor Fusion of Time-of-Flight Imaging and mmWave for Concealed Metal Detection
by Chaitanya Kaul, Kevin J. Mitchell, Khaled Kassem, Athanasios Tragakis, Valentin Kapitany, Ilya Starshynov, Federica Villa, Roderick Murray-Smith and Daniele Faccio
Sensors 2024, 24(18), 5865; https://doi.org/10.3390/s24185865 - 10 Sep 2024
Viewed by 232
Abstract
In the field of detection and ranging, multiple complementary sensing modalities may be used to enrich information obtained from a dynamic scene. One application of this sensor fusion is in public security and surveillance, where efficacy and privacy protection measures must be continually evaluated. We present a novel deployment of sensor fusion for the discrete detection of concealed metal objects on persons whilst preserving their privacy. This is achieved by coupling off-the-shelf mmWave radar and depth camera technology with a novel neural network architecture that processes radar signals using convolutional Long Short-Term Memory (LSTM) blocks and depth signals using convolutional operations. The combined latent features are then magnified using deep feature magnification to reveal cross-modality dependencies in the data. We further propose a decoder, based on the feature extraction and embedding block, to learn an efficient upsampling of the latent space to locate the concealed object in the spatial domain through radar feature guidance. We demonstrate the ability to detect the presence and infer the 3D location of concealed metal objects. We achieve accuracies of up to 95% using a technique that is robust to multiple persons. This work provides a demonstration of the potential for cost-effective and portable sensor fusion with strong opportunities for further development. Full article
(This article belongs to the Section Radar Sensors)
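The architecture described above pairs recurrent processing of radar signals with convolutional processing of depth data. A minimal PyTorch sketch of that pattern follows; it uses a plain LSTM over convolutional radar features rather than the paper's convolutional LSTM blocks, and every layer size and name is an illustrative assumption.

```python
# Sketch only, not the authors' network: radar frames pass through a conv
# encoder then an LSTM; the depth frame through a conv encoder; the fused
# latent is classified for concealed-metal presence/absence.
import torch
import torch.nn as nn

class RadarDepthFusion(nn.Module):
    def __init__(self, latent=64):
        super().__init__()
        self.radar_conv = nn.Sequential(  # per-frame radar feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.radar_lstm = nn.LSTM(16 * 16, latent, batch_first=True)
        self.depth_conv = nn.Sequential(  # depth-image feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, latent))
        self.head = nn.Linear(2 * latent, 2)  # metal present / absent

    def forward(self, radar, depth):
        # radar: (B, T, 1, H, W) sequence; depth: (B, 1, H, W) single frame
        b, t = radar.shape[:2]
        frames = self.radar_conv(radar.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.radar_lstm(frames)        # last hidden state
        fused = torch.cat([h[-1], self.depth_conv(depth)], dim=1)
        return self.head(fused)

logits = RadarDepthFusion()(torch.randn(2, 8, 1, 32, 32), torch.randn(2, 1, 32, 32))
```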

23 pages, 4573 KiB  
Article
AVaTER: Fusing Audio, Visual, and Textual Modalities Using Cross-Modal Attention for Emotion Recognition
by Avishek Das, Moumita Sen Sarma, Mohammed Moshiul Hoque, Nazmul Siddique and M. Ali Akber Dewan
Sensors 2024, 24(18), 5862; https://doi.org/10.3390/s24185862 - 10 Sep 2024
Viewed by 378
Abstract
Multimodal emotion classification (MEC) involves analyzing and identifying human emotions by integrating data from multiple sources, such as audio, video, and text. This approach leverages the complementary strengths of each modality to enhance the accuracy and robustness of emotion recognition systems. However, one significant challenge is effectively integrating these diverse data sources, each with unique characteristics and levels of noise. Additionally, the scarcity of large, annotated multimodal datasets in Bangla limits the training and evaluation of models. In this work, we unveiled a pioneering multimodal Bangla dataset, MAViT-Bangla (Multimodal Audio Video Text Bangla dataset). This dataset, comprising 1002 samples across audio, video, and text modalities, is a unique resource for emotion recognition studies in the Bangla language. It features emotional categories such as anger, fear, joy, and sadness, providing a comprehensive platform for research. Additionally, we developed a framework for audio, video and textual emotion recognition (i.e., AVaTER) that employs a cross-modal attention mechanism among unimodal features. This mechanism fosters the interaction and fusion of features from different modalities, enhancing the model’s ability to capture nuanced emotional cues. The effectiveness of this approach was demonstrated by achieving an F1-score of 0.64, a significant improvement over unimodal methods. Full article
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
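The core mechanism named here, cross-modal attention among unimodal features, can be sketched with standard multi-head attention: one modality's sequence forms the query and another's the keys and values. The dimensions and residual fusion below are assumptions, not the published AVaTER configuration.

```python
# Hedged sketch of one cross-modal attention fusion step: query_mod attends
# over context_mod. Sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod, context_mod):
        # (B, T_q, D) attending over (B, T_c, D) -> (B, T_q, D)
        attended, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + attended)   # residual connection

# Example: text features querying audio features, then pooled for a classifier.
text, audio = torch.randn(2, 20, 128), torch.randn(2, 50, 128)
fused = CrossModalAttention()(text, audio).mean(dim=1)   # (2, 128)
```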

28 pages, 7195 KiB  
Article
MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion
by Yingjiang Xie, Zhennan Fei, Da Deng, Lingshuai Meng, Fu Niu and Jinggong Sun
Sensors 2024, 24(17), 5860; https://doi.org/10.3390/s24175860 - 9 Sep 2024
Viewed by 468
Abstract
Infrared and visible image fusion can integrate rich edge details and salient infrared targets, resulting in high-quality images suitable for advanced tasks. However, most available algorithms struggle to fully extract detailed features and overlook the interaction of complementary features across different modal images during the feature fusion process. To address this gap, this study presents a novel fusion method based on multi-scale edge enhancement and a joint attention mechanism (MEEAFusion). Initially, convolution kernels of varying scales were utilized to obtain shallow features with multiple receptive fields unique to the source image. Subsequently, a multi-scale gradient residual block (MGRB) was developed to capture the high-level semantic information and low-level edge texture information of the image, enhancing the representation of fine-grained features. Then, the complementary feature between infrared and visible images was defined, and a cross-transfer attention fusion block (CAFB) was devised with joint spatial attention and channel attention to refine the critical supplemental information. This allowed the network to obtain fused features that were rich in both common and complementary information, thus realizing feature interaction and pre-fusion. Lastly, the features were reconstructed to obtain the fused image. Extensive experiments on three benchmark datasets demonstrated that the MEEAFusion proposed in this research has considerable strengths in terms of rich texture details, significant infrared targets, and distinct edge contours, and it achieves superior fusion performance. Full article
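A hedged sketch of the multi-scale edge-enhancement idea (not the paper's MGRB): parallel convolutions at several kernel sizes provide multiple receptive fields, and a fixed Laplacian kernel supplies an explicit edge-gradient residual added back to the features.

```python
# Illustrative assumption-laden sketch of multi-scale features plus an
# explicit edge residual; channel counts and kernel choices are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEdgeBlock(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.scales = nn.ModuleList(
            nn.Conv2d(1, ch, k, padding=k // 2) for k in (3, 5, 7))
        self.mix = nn.Conv2d(3 * ch, ch, 1)
        # Fixed Laplacian kernel as a cheap edge/gradient extractor.
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("lap", lap.view(1, 1, 3, 3))
        self.edge_proj = nn.Conv2d(1, ch, 1)

    def forward(self, x):                      # x: (B, 1, H, W) source image
        feats = self.mix(torch.cat([F.relu(s(x)) for s in self.scales], dim=1))
        edges = F.conv2d(x, self.lap, padding=1)
        return feats + self.edge_proj(edges)   # edge-enhanced residual features

out = MultiScaleEdgeBlock()(torch.randn(2, 1, 64, 64))   # (2, 16, 64, 64)
```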

11 pages, 1506 KiB  
Article
Contemporary Outcomes of Infrainguinal Vein Bypass Surgery for Chronic Limb-Threatening Ischaemia: A Two-Centre Cross-Sectional Study
by Thomas Lovelock, Sharan Randhawa, Cameron Wells, Anastasia Dean and Manar Khashram
J. Clin. Med. 2024, 13(17), 5343; https://doi.org/10.3390/jcm13175343 - 9 Sep 2024
Viewed by 398
Abstract
Background/Objectives: Chronic limb-threatening ischaemia (CLTI) is a significant life- and limb-threatening condition. Two recent seminal trials, BEST-CLI and BASIL-2, have provided seemingly conflicting results concerning the optimal treatment modality for patients with CLTI. We sought to investigate the outcomes of patients undergoing infrainguinal bypass at two centres in Aotearoa New Zealand. Methods: A cross-sectional retrospective review of all patients who underwent infrainguinal bypass grafting for CLTI at Auckland City Hospital and Waikato Hospital between January 2020 and December 2021 was performed. The primary outcome was a composite of death, above-ankle amputation, and major limb reintervention. The secondary outcome was minor limb reintervention. Kaplan–Meier survival analysis was performed to determine time to the primary and secondary endpoints. Demographic factors were assessed using the log-rank test to determine their effect on the outcome. Results: One hundred and nineteen patients who underwent infrainguinal bypass for CLTI in the study period were identified. Of these, 93 patients had a bypass with ipsilateral or contralateral great saphenous vein (GSV). The median follow-up time was 1.85 years. The most common indication for surgery was tissue loss (69%, n = 63), with the most common distal bypass target being the below-knee popliteal artery (45%, n = 41). The primary composite outcome occurred in 42.8% of the cohort (n = 39). Death was the most common component of the primary outcome (26%, n = 24). Male sex (HR 0.48, 95% CI 0.26–0.88, p = 0.018) and statin use (HR 0.49, 95% CI 0.24–0.98, p = 0.044) were independent predictors of protection from the composite outcome on multivariate analysis. Dialysis dependence (HR 3.32, 95% CI 1.23–8.99, p = 0.018) was an independent predictor for patients meeting the composite outcome. Conclusions: This study’s results are consistent with the published outcomes of BEST-CLI. The patient cohorts examined, anatomical disease patterns, and conduit use may explain some of the differences observed between this study, BEST-CLI, and BASIL-2. Further work is required to define the specific patient populations who will benefit most from an open surgical or endovascular first approach to the management of CLTI. Full article
(This article belongs to the Special Issue Clinical Advances in Vascular and Endovascular Surgery)
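The time-to-event analysis described (Kaplan–Meier curves plus log-rank tests for demographic factors) looks roughly like the following sketch using the lifelines library; the durations and event flags are toy values, not study data.

```python
# Toy sketch of the study's survival analysis; invented data, not the cohort.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

years = [0.5, 1.2, 1.8, 2.0, 0.9, 1.5, 1.85, 0.3]   # time to composite endpoint
event = [1,   0,   1,   0,   1,   0,   0,    1]     # 1 = event, 0 = censored
male  = [1,   1,   0,   0,   1,   0,   1,    0]

kmf = KaplanMeierFitter()
kmf.fit(years, event_observed=event)    # overall Kaplan-Meier curve
print(kmf.median_survival_time_)

# Log-rank test for a demographic factor (sex), as in the study.
m = [i for i, s in enumerate(male) if s]
f = [i for i, s in enumerate(male) if not s]
res = logrank_test([years[i] for i in m], [years[i] for i in f],
                   event_observed_A=[event[i] for i in m],
                   event_observed_B=[event[i] for i in f])
print(res.p_value)
```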

18 pages, 1057 KiB  
Review
Advancing in RGB-D Salient Object Detection: A Survey
by Ai Chen, Xin Li, Tianxiang He, Junlin Zhou and Duanbing Chen
Appl. Sci. 2024, 14(17), 8078; https://doi.org/10.3390/app14178078 - 9 Sep 2024
Viewed by 236
Abstract
The human visual system can rapidly focus on prominent objects in complex scenes, significantly enhancing information processing efficiency. Salient object detection (SOD) mimics this biological ability, aiming to identify and segment the most prominent regions or objects in images or videos. This reduces the amount of data that needs to be processed while enhancing the accuracy and efficiency of information extraction. In recent years, SOD has made significant progress in many areas, such as deep learning, multi-modal fusion, and attention mechanisms, and has expanded into real-time detection, weakly supervised learning, and cross-domain applications. Depth images can provide three-dimensional structural information of a scene, aiding in a more accurate understanding of object shapes and distances. In SOD tasks, depth images enhance detection accuracy and robustness by providing additional geometric information, which is particularly crucial in complex scenes and occlusion situations. This survey reviews the substantial advancements in the field of RGB-Depth SOD, with a focus on the critical roles played by attention mechanisms and cross-modal fusion methods. It summarizes the existing literature, provides a brief overview of mainstream datasets and evaluation metrics, and quantitatively compares the discussed models. Full article
(This article belongs to the Special Issue Artificial Intelligence in Computer Vision and Object Detection)

13 pages, 1087 KiB  
Article
[18F]FDG PET/CT Imaging Is Associated with Lower In-Hospital Mortality in Patients with Pyogenic Spondylodiscitis—A Registry-Based Analysis of 29,362 Cases
by Siegmund Lang, Nike Walter, Stefanie Heidemanns, Constantin Lapa, Melanie Schindler, Jonas Krueckel, Nils Ole Schmidt, Dirk Hellwig, Volker Alt and Markus Rupp
Antibiotics 2024, 13(9), 860; https://doi.org/10.3390/antibiotics13090860 - 8 Sep 2024
Viewed by 455
Abstract
Background: While MRI is the primary diagnostic tool for the diagnosis of spondylodiscitis, the role of [18F]-fluorodeoxyglucose ([18F]FDG) PET/CT is gaining prominence. This study aimed to determine the frequency of [18F]FDG PET/CT usage and its impact on the in-hospital mortality rate in patients with spondylodiscitis, particularly in the geriatric population. Methods: We conducted a Germany-wide cross-sectional study from 2019 to 2021 using an open-access, Germany-wide database, analyzing cases with ICD-10 codes M46.2-, M46.3-, and M46.4- (‘Osteomyelitis of vertebrae’, ‘Infection of intervertebral disc (pyogenic)’, and ‘Discitis unspecified’). Diagnostic modalities were compared for their association with in-hospital mortality, with a focus on [18F]FDG PET/CT. Results: In total, 29,362 hospital admissions from 2019 to 2021 were analyzed. Of these, 60.1% were male and 39.9% were female, and 71.8% of the patients were aged 65 years and above. The overall in-hospital mortality rate was 6.5% for the entire cohort and 8.2% for the geriatric subgroup (p < 0.001). Contrast-enhanced (ce) MRI (48.1%) and native CT (39.4%) of the spine were the most frequently conducted diagnostic modalities. [18F]FDG PET/CT was performed in 2.7% of cases. CeCT was associated with increased in-hospital mortality (OR = 2.03, 95% CI: 1.90–2.17, p < 0.001). Cases with documented [18F]FDG PET/CT showed a lower frequency of in-hospital deaths (OR = 0.58, 95% CI: 0.18–0.50; p = 0.002). This finding was more pronounced in patients aged 65 and above (OR = 0.42, 95% CI: 0.27–0.65, p = 0.001). Conclusions: Despite its infrequent use, [18F]FDG PET/CT was associated with a lower in-hospital mortality rate in patients with spondylodiscitis, particularly in the geriatric cohort. This study is limited by only considering data on hospitalized patients and relying on the assumption of error-free coding. Further research is needed to optimize diagnostic approaches for spondylodiscitis. Full article
(This article belongs to the Section Antibiotic Therapy in Infectious Diseases)
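The reported effect sizes are odds ratios with 95% confidence intervals; for readers unfamiliar with how these come out of registry counts, here is a minimal sketch from a 2x2 table, with invented counts rather than the registry's.

```python
# Minimal sketch: odds ratio and Wald 95% CI from a 2x2 table.
# Counts are invented for illustration, not the registry data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = deaths/survivors with the modality; c,d = without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(a=20, b=780, c=1880, d=26680))
```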

24 pages, 7323 KiB  
Article
AID-YOLO: An Efficient and Lightweight Network Method for Small Target Detector in Aerial Images
by Yuwen Li, Jiashuo Zheng, Shaokun Li, Chunxi Wang, Zimu Zhang and Xiujian Zhang
Electronics 2024, 13(17), 3564; https://doi.org/10.3390/electronics13173564 - 8 Sep 2024
Viewed by 388
Abstract
The progress of object detection technology is crucial for obtaining extensive scene information from aerial perspectives based on computer vision. However, aerial image detection presents many challenges, such as large image background sizes, small object sizes, and dense distributions. This research addresses the specific challenges relating to small object detection in aerial images and proposes an improved YOLOv8s-based detector named Aerial Images Detector-YOLO (AID-YOLO). Specifically, this study adopts the General Efficient Layer Aggregation Network (GELAN) from YOLOv9 as a reference and designs a four-branch skip-layer connection and split operation module, Re-parameterization-Net with Cross-Stage Partial CSP and Efficient Layer Aggregation Networks (RepNCSPELAN4), to achieve a lightweight network while capturing richer feature information. To fuse multi-scale features and focus more on the target detection regions, a new multi-channel feature extraction module named Convolutional Block Attention Module with Two Convolutions Efficient Layer Aggregation Networks (C2FCBAM) is designed in the neck part of the network. In addition, to reduce the sensitivity to position bias of small objects, a new weight-adaptive loss function, Normalized Weighted Distance Complete Intersection over Union (NWD-CIoU_Loss), was designed in this study. We evaluate the proposed AID-YOLO method through ablation experiments and comparisons with other advanced models on the VEDAI (512, 1024) and DOTAv1.0 datasets. The results show that compared to the YOLOv8s baseline model, AID-YOLO improves the mAP@0.5 metric by 7.36% on the VEDAI dataset. Simultaneously, the parameters are reduced by 31.7%, achieving a good balance between accuracy and parameter quantity. The Average Precision (AP) for small objects has improved by 8.9% compared to the baseline model (YOLOv8s), making it one of the top performers among all compared models. Furthermore, the FPS metric is also well-suited for real-time detection in aerial image scenarios. The AID-YOLO method also demonstrates excellent performance on infrared images in the VEDAI1024 (IR) dataset, with a 2.9% improvement in the mAP@0.5 metric. We further validate the superior detection and generalization performance of AID-YOLO in multi-modal and multi-task scenarios through comparisons with other methods on different resolution images, SODA-A and the DOTAv1.0 datasets. In summary, the results of this study confirm that the AID-YOLO method significantly improves model detection performance while maintaining a reduced number of parameters, making it applicable to practical engineering tasks in aerial image object detection. Full article
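The NWD component of the loss builds on the normalized Wasserstein distance for tiny boxes, in which boxes are modelled as 2D Gaussians and compared via an exponentiated Wasserstein-2 distance, which degrades more gracefully than IoU when boxes are small. The sketch below follows that published formulation; the constant C and the (cx, cy, w, h) layout are assumptions, and the paper's full NWD-CIoU_Loss combines this term with CIoU in a way not detailed in the abstract.

```python
# Hedged sketch of the normalized Wasserstein distance between boxes
# modelled as 2D Gaussians; C = 12.8 is an assumed dataset-dependent constant.
import torch

def nwd(box1, box2, c=12.8):
    """box*: (N, 4) tensors of (cx, cy, w, h)."""
    g1 = torch.cat([box1[:, :2], box1[:, 2:] / 2], dim=1)  # Gaussian parameters
    g2 = torch.cat([box2[:, :2], box2[:, 2:] / 2], dim=1)
    w2 = torch.norm(g1 - g2, dim=1)        # Wasserstein-2 distance
    return torch.exp(-w2 / c)              # similarity in (0, 1]

a = torch.tensor([[10., 10., 4., 4.]])
b = torch.tensor([[11., 10., 4., 6.]])
print(nwd(a, b))   # near 1 for nearly identical tiny boxes
```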

12 pages, 951 KiB  
Article
Acute Effects of Soft Tissue Modalities on Muscular Ultrasound Characteristics and Isometric Performance
by Eric Sobolewski, William Topham, Ryan Hosey, Nora Waheeba and Thelen Rett
Appl. Sci. 2024, 14(17), 7994; https://doi.org/10.3390/app14177994 - 6 Sep 2024
Viewed by 448
Abstract
Prior to training, many athletes perform different soft-tissue preparation protocols, often involving stretching, foam rolling, and/or percussion massage. These modalities have been studied individually, but not as a group to observe muscle alterations and differences between males and females. In total, 40 participants (20 males, 20 females) performed five minutes of static stretching, foam rolling, and percussion massage. Isometric leg strength, muscle activation, and ultrasound assessments (cross-sectional area, echo intensity, pennation angle, fascicle length, and muscle thickness) were taken before and after each intervention. The results indicate that there is no significant difference among modalities and that they do not significantly alter any muscle characteristic or improve performance. There is a significant difference in size between males and females, with males having larger muscles and greater pennation angles, which allows males to generate significantly more muscle force. However, both sexes respond similarly to each modality. In conclusion, the muscle response to static stretching, foam rolling, and percussion massage does not differ among modalities and does not contribute to an increase or decrease in maximal isometric knee extension, with similar effects in males and females. Full article
(This article belongs to the Special Issue Sports Injuries and Physical Rehabilitation)

27 pages, 10427 KiB  
Article
UMMFF: Unsupervised Multimodal Multilevel Feature Fusion Network for Hyperspectral Image Super-Resolution
by Zhongmin Jiang, Mengyao Chen and Wenju Wang
Remote Sens. 2024, 16(17), 3282; https://doi.org/10.3390/rs16173282 - 4 Sep 2024
Viewed by 472
Abstract
Due to the inadequate use of complementary information from different modalities and biased estimates of the degradation parameters, unsupervised hyperspectral super-resolution algorithms suffer from low precision and limited applicability. To address this issue, this paper proposes an approach for hyperspectral image super-resolution, namely, the Unsupervised Multimodal Multilevel Feature Fusion network (UMMFF). The proposed approach employs a gated cross-retention module to learn shared patterns among different modalities. This module effectively eliminates intermodal differences while preserving spatial–spectral correlations, thereby facilitating information interaction. A multilevel spatial–channel attention and parallel fusion decoder are constructed to extract features at three levels (low, medium, and high), enriching the information of the multimodal images. Additionally, an independent prior-based implicit neural representation blind estimation network is designed to accurately estimate the degradation parameters. On the “Washington DC”, Salinas, and Botswana datasets, UMMFF exhibited superior performance compared to existing state-of-the-art methods on the primary metrics PSNR and ERGAS: the PSNR values improved by 18.03%, 8.55%, and 5.70%, respectively, while the ERGAS values decreased by 50.00%, 75.39%, and 53.27%, respectively. The experimental results indicate that UMMFF demonstrates excellent algorithm adaptability, resulting in high-precision reconstruction outcomes. Full article
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)
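The headline metrics, PSNR and ERGAS, are standard and easy to reproduce; a small NumPy sketch follows, assuming a (bands, H, W) cube, a peak value of 1.0, and the usual resolution-ratio convention for ERGAS.

```python
# Sketch of the two comparison metrics; shapes, peak value, and the ERGAS
# ratio convention are assumptions, not details from the paper.
import numpy as np

def psnr(ref, est, peak=1.0):
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ergas(ref, est, ratio=4):
    """ref, est: (bands, H, W); ratio = high/low spatial resolution ratio."""
    band_rmse = np.sqrt(((ref - est) ** 2).mean(axis=(1, 2)))
    band_mean = ref.mean(axis=(1, 2))
    return 100.0 / ratio * np.sqrt(np.mean((band_rmse / band_mean) ** 2))

ref = np.random.rand(31, 64, 64)
est = ref + 0.01 * np.random.randn(31, 64, 64)
print(psnr(ref, est), ergas(ref, est))
```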

25 pages, 23009 KiB  
Article
Exploring Reinforced Class Separability and Discriminative Representations for SAR Target Open Set Recognition
by Fei Gao, Xin Luo, Rongling Lang, Jun Wang, Jinping Sun and Amir Hussain
Remote Sens. 2024, 16(17), 3277; https://doi.org/10.3390/rs16173277 - 3 Sep 2024
Viewed by 341
Abstract
Current synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms primarily operate under the closed-set assumption, implying that all target classes have been previously learned during the training phase. However, in open scenarios, they may encounter target classes absent from the training set, thereby necessitating an open set recognition (OSR) challenge for SAR-ATR. The crux of OSR lies in establishing distinct decision boundaries between known and unknown classes to mitigate confusion among different classes. To address this issue, we introduce a novel framework termed reinforced class separability for SAR target open set recognition (RCS-OSR), which focuses on optimizing prototype distribution and enhancing the discriminability of features. First, to capture discriminative features, a cross-modal causal features enhancement module (CMCFE) is proposed to strengthen the expression of causal regions. Subsequently, regularized intra-class compactness loss (RIC-Loss) and intra-class relationship aware consistency loss (IRC-Loss) are devised to optimize the embedding space. In conjunction with joint supervised training using cross-entropy loss, RCS-OSR can effectively reduce empirical classification risk and open space risk simultaneously. Moreover, a class-aware OSR classifier with adaptive thresholding is designed to leverage the differences between different classes. Consequently, our method can construct distinct decision boundaries between known and unknown classes to simultaneously classify known classes and identify unknown classes in open scenarios. Extensive experiments conducted on the MSTAR dataset demonstrate the effectiveness and superiority of our method in various OSR tasks. Full article
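Abstracting away the paper's specific losses, the open-set decision rule it motivates, assign a sample to its nearest class prototype unless the distance exceeds a class-aware threshold, can be sketched in a few lines; the prototypes and thresholds below are toy values.

```python
# Toy sketch of prototype-based open-set classification with per-class
# (class-aware) distance thresholds; values are invented for illustration.
import numpy as np

def osr_predict(feature, prototypes, thresholds):
    """prototypes: (K, D); thresholds: per-class distance cutoffs (K,)."""
    d = np.linalg.norm(prototypes - feature, axis=1)   # distance to each class
    k = int(np.argmin(d))
    return k if d[k] <= thresholds[k] else -1          # -1 = unknown class

protos = np.array([[0., 0.], [5., 5.]])
ths = np.array([1.5, 1.5])
print(osr_predict(np.array([0.3, -0.2]), protos, ths))  # -> 0 (known class)
print(osr_predict(np.array([2.5, 2.5]), protos, ths))   # -> -1 (unknown)
```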

18 pages, 980 KiB  
Article
Tinnitus Prevalence, Associated Characteristics, and Treatment Patterns among Adults in Saudi Arabia
by Ahmad A. Alanazi
Audiol. Res. 2024, 14(5), 760-777; https://doi.org/10.3390/audiolres14050064 - 1 Sep 2024
Viewed by 391
Abstract
Tinnitus affects millions of people around the world and causes significant negative impacts on their quality of life (QoL). Tinnitus is rarely examined in Saudi Arabia. This study aimed to estimate the prevalence of tinnitus among adults, explore their experience with tinnitus, investigate the impact of tinnitus on their QoL, and discover their tinnitus management methods. A descriptive cross-sectional study design was performed utilizing a non-probability purposive sampling technique and a face-to-face in-person administered questionnaire. Descriptive statistics and a chi-square test were used to assess the data and find any correlation between the variables. Out of 4860 adults, 320 (males: n = 172; females: n = 148; age range = 18–90 years) had tinnitus, mainly described as a daily, gradual, continuous, whistling, and ringing tinnitus in both ears. Tinnitus prevalence was estimated at 6.54%, with a slight predominance in males (6.9%) compared with females (6.2%). Most of the participants were unaware of the cause of their tinnitus. The modal value of the severity of tinnitus signals was severe for both genders. The modal value of the impact of tinnitus on the QoL was moderate for males and severe for females. Sleep, social activities, quiet settings, and concentration were largely affected by tinnitus. Significant associations (p < 0.05) between the impact of tinnitus on the QoL and risk factors such as gender, age, hearing loss, and hyperacusis were determined. Also, the impact of tinnitus on the QoL was significantly associated (p < 0.05) with the duration of complaints and the severity of tinnitus signals. Approximately 61% of the participants did not use any tinnitus treatment, while the remaining participants usually used hearing aids, medications, and counseling to manage their tinnitus. By increasing awareness, establishing standard practice, developing guidelines for managing tinnitus, expanding access to suitable interventions, and carrying out additional research, adults living with tinnitus in Saudi Arabia will have better support and, ultimately, an enhancement of their overall well-being. Full article

16 pages, 6475 KiB  
Article
Exploring Inner Speech Recognition via Cross-Perception Approach in EEG and fMRI
by Jiahao Qin, Lu Zong and Feng Liu
Appl. Sci. 2024, 14(17), 7720; https://doi.org/10.3390/app14177720 - 1 Sep 2024
Viewed by 638
Abstract
Multimodal brain signal analysis has shown great potential in decoding complex cognitive processes, particularly in the challenging task of inner speech recognition. This paper introduces an innovative Inner Speech Recognition via Cross-Perception (ISRCP) approach that significantly enhances accuracy by fusing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data. Our approach comprises three core components: (1) multigranularity encoders that separately process EEG time series, EEG Markov Transition Fields, and fMRI spatial data; (2) a cross-perception expert structure that learns both modality-specific and shared representations; and (3) an attention-based adaptive fusion strategy that dynamically adjusts the contributions of different modalities based on task relevance. Extensive experiments on the Bimodal Dataset on Inner Speech demonstrate that our model outperforms existing methods in both accuracy and F1 score. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
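Component (3), the attention-based adaptive fusion, can be sketched as a learned gate that scores each modality embedding and takes a softmax-weighted sum, so modality contributions adapt per sample; sizes below are illustrative, not the published configuration.

```python
# Sketch of attention-based adaptive fusion over modality embeddings;
# the gate, dimensions, and modality names are assumptions.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one relevance score per modality

    def forward(self, modalities):
        # modalities: list of (B, D) embeddings, e.g. [eeg_ts, eeg_mtf, fmri]
        stack = torch.stack(modalities, dim=1)             # (B, M, D)
        weights = torch.softmax(self.score(stack), dim=1)  # (B, M, 1)
        return (weights * stack).sum(dim=1)                # (B, D) fused

eeg_ts, eeg_mtf, fmri = torch.randn(2, 128), torch.randn(2, 128), torch.randn(2, 128)
fused = AdaptiveFusion()([eeg_ts, eeg_mtf, fmri])   # (2, 128)
```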

17 pages, 3903 KiB  
Article
HDCCT: Hybrid Densely Connected CNN and Transformer for Infrared and Visible Image Fusion
by Xue Li, Hui He and Jin Shi
Electronics 2024, 13(17), 3470; https://doi.org/10.3390/electronics13173470 - 31 Aug 2024
Viewed by 484
Abstract
Multi-modal image fusion is a methodology that combines image features from multiple types of sensors, effectively improving the quality and content of fused images. However, most existing deep learning fusion methods integrate only global or local features, restricting the representation of feature information. To address this issue, a hybrid densely connected CNN and transformer (HDCCT) fusion framework is proposed. In the proposed HDCCT framework, the CNN-based blocks obtain the local structure of the input data, and the transformer-based blocks obtain the global structure of the original data, significantly improving the feature representation. An encoder–decoder architecture is designed for both the CNN and transformer blocks to reduce feature loss while preserving the characterization of all-level features in the fused image. In addition, the cross-coupled framework facilitates the flow of feature structures, retains the uniqueness of information, and lets the transformer model long-range dependencies based on the local features already extracted by the CNN. Meanwhile, to retain the information in the source images, hybrid structural similarity (SSIM) and mean square error (MSE) loss functions are introduced. Qualitative and quantitative comparisons on grayscale infrared and visible image fusion indicate that the suggested method outperforms related works. Full article
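The hybrid loss is described as SSIM plus MSE; a minimal sketch follows using a global (whole-image) SSIM term for brevity. The weighting factor and the omission of windowed SSIM are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of an MSE + (1 - SSIM) hybrid loss with a global SSIM term;
# the lambda weight and global (unwindowed) SSIM are simplifying assumptions.
import torch

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def hybrid_loss(fused, source, lam=0.5):
    mse = torch.mean((fused - source) ** 2)
    return mse + lam * (1 - global_ssim(fused, source))

a = torch.rand(1, 1, 32, 32)
b = (a + 0.05 * torch.randn_like(a)).clamp(0, 1)
print(hybrid_loss(b, a))
```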
