Search Results (1,065)

Search Parameters:
Keywords = real-world evidence

44 pages, 1051 KiB  
Review
Multimodal Emotion Recognition Using Visual, Vocal and Physiological Signals: A Review
by Gustave Udahemuka, Karim Djouani and Anish M. Kurien
Appl. Sci. 2024, 14(17), 8071; https://doi.org/10.3390/app14178071 - 9 Sep 2024
Abstract
The dynamic expressions of emotion convey both the emotional and functional states of an individual’s interactions. Recognizing the emotional states helps us understand human feelings and thoughts. Systems and frameworks designed to recognize human emotional states automatically can use various affective signals as inputs, such as visual, vocal and physiological signals. However, emotion recognition via a single modality can be affected by various sources of noise that are specific to that modality and the fact that different emotion states may be indistinguishable. This review examines the current state of multimodal emotion recognition methods that integrate visual, vocal or physiological modalities for practical emotion computing. Recent empirical evidence on deep learning methods used for fine-grained recognition is reviewed, with discussions on the robustness issues of such methods. This review elaborates on the profound learning challenges and solutions required for a high-quality emotion recognition system, emphasizing the benefits of dynamic expression analysis, which aids in detecting subtle micro-expressions, and the importance of multimodal fusion for improving emotion recognition accuracy. The literature was comprehensively searched via databases with records covering the topic of affective computing, followed by rigorous screening and selection of relevant studies. The results show that the effectiveness of current multimodal emotion recognition methods is affected by the limited availability of training data, insufficient context awareness, and challenges posed by real-world cases of noisy or missing modalities. The findings suggest that improving emotion recognition requires better representation of input data, refined feature extraction, and optimized aggregation of modalities within a multimodal framework, along with incorporating state-of-the-art methods for recognizing dynamic expressions. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

36 pages, 2275 KiB  
Review
Blockchain Forensics: A Systematic Literature Review of Techniques, Applications, Challenges, and Future Directions
by Hany F. Atlam, Ndifon Ekuri, Muhammad Ajmal Azad and Harjinder Singh Lallie
Electronics 2024, 13(17), 3568; https://doi.org/10.3390/electronics13173568 - 8 Sep 2024
Abstract
Blockchain technology has gained significant attention in recent years for its potential to revolutionize various sectors, including finance, supply chain management, and digital forensics. While blockchain’s decentralization enhances security, it complicates the identification and tracking of illegal activities, making it challenging to link blockchain addresses to real-world identities. Also, although immutability protects against tampering, it introduces challenges for forensic investigations as it prevents the modification or deletion of evidence, even if it is fraudulent. Hence, this paper provides a systematic literature review and examination of state-of-the-art studies in blockchain forensics to offer a comprehensive understanding of the topic. This paper provides a comprehensive investigation of the fundamental principles of blockchain forensics, exploring various techniques and applications for conducting digital forensic investigations in blockchain. Based on the selected search strategy, 46 articles (out of 672) were chosen for closer examination. The contributions of these articles were discussed and summarized, highlighting their strengths and limitations. This paper examines the selected papers to identify diverse digital forensic frameworks and methodologies used in blockchain forensics, as well as how blockchain-based forensic solutions have enhanced forensic investigations. In addition, this paper discusses the common applications of blockchain-based forensic frameworks and examines the associated legal and regulatory challenges encountered in conducting a forensic investigation within blockchain systems. Open issues and future research directions of blockchain forensics were also discussed. This paper provides significant value for researchers, digital forensic practitioners, and investigators by providing a comprehensive and up-to-date review of existing research and identifying key challenges and opportunities related to blockchain forensics. 
Full article

19 pages, 1114 KiB  
Article
AFTEA Framework for Supporting Dynamic Autonomous Driving Situation
by Subi Kim, Jieun Kang and Yongik Yoon
Electronics 2024, 13(17), 3535; https://doi.org/10.3390/electronics13173535 - 6 Sep 2024
Abstract
The accelerated development of AI technology has brought about revolutionary changes in various fields of society. Recently, it has been emphasized that fairness, accountability, transparency, and explainability (FATE) should be considered to support the reliability and validity of AI-based decision-making. However, in the case of autonomous driving technology, which is directly related to human life and requires real-time adaptation and response to various changes and risks in the real world, environmental adaptability must be considered in a more comprehensive and converged manner. In order to derive definitive evidence for each object in a convergent autonomous driving environment, it is necessary to transparently collect and provide various types of road environment information for driving objects and driving assistance and to construct driving technology that is adaptable to various situations by considering all uncertainties in the real-time changing driving environment. This allows for unbiased and fair results based on flexible contextual understanding, even in situations that do not conform to rules and patterns, by considering the convergent interactions and dynamic situations of various objects that are possible in a real-time road environment. The transparent, environmentally adaptive, and fairness-based outcomes provide the basis for the decision-making process and support clear interpretation and explainability of decisions. All of these processes enable autonomous vehicles to draw reliable conclusions and take responsibility for their decisions in autonomous driving situations. Therefore, this paper proposes an adaptability, fairness, transparency, explainability, and accountability (AFTEA) framework to build a stable and reliable autonomous driving environment in dynamic situations. 
This paper explains the definition, role, and necessity of AFTEA in artificial intelligence technology and highlights its value when applied and integrated into autonomous driving technology. The AFTEA framework with environmental adaptability will support the establishment of a sustainable autonomous driving environment in dynamic environments and aims to provide a direction for establishing a stable and reliable AI system that adapts to various real-world scenarios. Full article

19 pages, 7056 KiB  
Article
A Data-Centric Approach to Understanding the 2020 U.S. Presidential Election
by Satish Mahadevan Srinivasan and Yok-Fong Paat
Big Data Cogn. Comput. 2024, 8(9), 111; https://doi.org/10.3390/bdcc8090111 - 4 Sep 2024
Abstract
The application of analytics on Twitter feeds is a very popular field for research. A tweet with a 280-character limitation can reveal a wealth of information on how individuals express their sentiments and emotions within their network or community. Upon collecting, cleaning, and mining tweets from different individuals on a particular topic, we can capture not only the sentiments and emotions of an individual but also the sentiments and emotions expressed by a larger group. Using the well-known Lexicon-based NRC classifier, we classified nearly seven million tweets across seven battleground states in the U.S. to understand the emotions and sentiments expressed by U.S. citizens toward the 2020 presidential candidates. We used the emotions and sentiments expressed within these tweets as proxies for their votes and predicted the swing directions of each battleground state. When compared to the outcome of the 2020 presidential candidates, we were able to accurately predict the swing directions of four battleground states (Arizona, Michigan, Texas, and North Carolina), thus revealing the potential of this approach in predicting future election outcomes. The week-by-week analysis of the tweets using the NRC classifier corroborated well with the various political events that took place before the election, making it possible to understand the dynamics of the emotions and sentiments of the supporters in each camp. These research strategies and evidence-based insights may be translated into real-world settings and practical interventions to improve election outcomes. Full article
(This article belongs to the Special Issue Machine Learning in Data Mining for Knowledge Discovery)
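The lexicon-based classification described in this abstract can be sketched in a few lines: each tweet's tokens are looked up in a word-to-emotion lexicon, and the hits are summed into a group-level emotion profile. The tiny `TOY_LEXICON` and the `score_tweet`/`aggregate` helpers below are illustrative stand-ins, not the actual NRC lexicon or the authors' pipeline.

```python
from collections import Counter

# Toy word-to-emotion mappings in the spirit of the NRC lexicon;
# these entries are illustrative, not the real lexicon.
TOY_LEXICON = {
    "win": ["joy", "positive"],
    "hope": ["joy", "positive", "anticipation"],
    "fraud": ["anger", "fear", "negative"],
    "fear": ["fear", "negative"],
    "great": ["joy", "positive"],
}

def score_tweet(text: str) -> Counter:
    """Count emotion-category hits for each lexicon word in one tweet."""
    counts = Counter()
    for token in text.lower().split():
        token = token.strip(".,!?#@")  # crude punctuation stripping
        for emotion in TOY_LEXICON.get(token, []):
            counts[emotion] += 1
    return counts

def aggregate(tweets) -> Counter:
    """Sum per-tweet counts into a group-level emotion profile."""
    total = Counter()
    for t in tweets:
        total += score_tweet(t)
    return total

tweets = ["Great rally, so much hope!", "This is fraud, I fear the worst."]
profile = aggregate(tweets)
```

In the study's setting, profiles like this would be aggregated per state and per week and used as proxies for voting intention.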

10 pages, 578 KiB  
Article
A Retrospective Analysis of Intravenous Push versus Extended Infusion Meropenem in Critically Ill Patients
by Emory G. Johnson, Kayla Maki Ortiz, David T. Adams, Satwinder Kaur, Andrew C. Faust, Hui Yang, Carlos A. Alvarez and Ronald G. Hall
Antibiotics 2024, 13(9), 835; https://doi.org/10.3390/antibiotics13090835 - 2 Sep 2024
Abstract
Meropenem is a broad-spectrum antibiotic used for the treatment of multi-drug-resistant infections. Due to its pharmacokinetic profile, meropenem’s activity is optimized by maintaining a specific time the serum concentration remains above the minimum inhibitory concentration (MIC) via extended infusion (EI), continuous infusion, or intermittent infusion dosing strategies. The available literature varies regarding the superiority of these dosing strategies. This study’s primary objective was to determine the difference in time to clinical stabilization between intravenous push (IVP) and EI administration. We performed a retrospective pilot cohort study of 100 critically ill patients who received meropenem by IVP (n = 50) or EI (n = 50) during their intensive care unit (ICU) admission. There was no statistically significant difference in the overall achievement of clinical stabilization between IVP and EI (48% vs. 44%, p = 0.17). However, the median time to clinical stability was shorter for the EI group (20.4 vs. 66.2 h, p = 0.01). EI administration was associated with shorter hospital (13 vs. 17 days; p = 0.05) and ICU (6 vs. 9 days; p = 0.02) lengths of stay. Although we did not find a statistically significant difference in the overall time to clinical stabilization, the results of this pilot study suggest that EI administration may produce quicker clinical resolutions than IVP. Full article

12 pages, 1307 KiB  
Article
Real-World Analysis of Survival and Treatment Efficacy in Stage IIIA-N2 Non-Small Cell Lung Cancer
by Eleni Josephides, Roberta Dunn, Annie-Rose Henry, John Pilling, Karen Harrison-Phipps, Akshay Patel, Shahreen Ahmad, Michael Skwarski, James Spicer, Alexandros Georgiou, Sharmistha Ghosh, Mieke Van Hemelrijck, Eleni Karapanagiotou, Daniel Smith and Andrea Bille
Cancers 2024, 16(17), 3058; https://doi.org/10.3390/cancers16173058 - 2 Sep 2024
Abstract
Background: Stage IIIA-N2 non-small cell lung cancer (NSCLC) poses a significant clinical challenge, with low survival rates despite advances in therapy. The lack of a standardised treatment approach complicates patient management. This study utilises real-world data from Guy’s Thoracic Cancer Database to analyse patient outcomes, identify key predictors of overall survival (OS) and disease-free survival (DFS), and address the limitations of randomised controlled trials. Methods: This observational, single-centre, non-randomised study analysed 142 patients diagnosed with clinical and pathological T1/2 N2 NSCLC who received curative treatment from 2015 to 2021. Patients were categorised into three groups: Group A (30 patients) underwent surgery for clinical N2 disease, Group B (54 patients) had unsuspected N2 disease discovered during surgery, and Group C (58 patients) received radical chemoradiation or radiotherapy alone (CRT/RT) for clinical N2 disease. Data on demographics, treatment types, recurrence, and survival rates were analysed. Results: The median OS for the cohort was 31 months, with 2-year and 5-year OS rates of 60% and 30%, respectively. Group A had a median OS of 32 months, Group B 36 months, and Group C 25 months. The median DFS was 18 months overall, with Group A at 16 months, Group B at 22 months, and Group C at 17 months. Significant predictors of OS included ECOG performance status, lymphovascular invasion, and histology. No significant differences in OS were found between treatment groups (p = 0.99). Conclusions: This study highlights the complexity and diversity of Stage IIIA-N2 NSCLC, with no single superior treatment strategy identified. The findings underscore the necessity for personalised treatment approaches and multidisciplinary decision-making. Future research should focus on integrating newer therapeutic modalities and conducting multi-centre trials to refine treatment strategies. 
Collaboration and ongoing data collection are crucial for improving personalised treatment plans and survival outcomes for Stage IIIA-N2 NSCLC patients. Full article
(This article belongs to the Special Issue The Use of Real World (RW) Data in Oncology)

15 pages, 1168 KiB  
Systematic Review
Evaluating the Effectiveness of Proton Beam Therapy Compared to Conventional Radiotherapy in Non-Metastatic Rectal Cancer: A Systematic Review of Clinical Outcomes
by Kelvin Le, James Norton Marchant and Khang Duy Ricky Le
Medicina 2024, 60(9), 1426; https://doi.org/10.3390/medicina60091426 - 31 Aug 2024
Abstract
Background and Objectives: Conventional radiotherapies used in the current management of rectal cancer commonly cause iatrogenic radiotoxicity. Proton beam therapy has emerged as an alternative to conventional radiotherapy with the aim of improving tumour control and reducing off-set radiation exposure to surrounding tissue. However, the real-world treatment and oncological outcomes associated with the use of proton beam therapy in rectal cancer remain poorly characterised. This systematic review seeks to evaluate the radiation dosages and safety of proton beam therapy compared to conventional radiotherapy in patients with non-metastatic rectal cancer. Materials and Methods: A computer-assisted search was performed on the Medline, Embase and Cochrane Central databases. Studies that evaluated the adverse effects and oncological outcomes of proton beam therapy and conventional radiotherapy in adult patients with non-metastatic rectal cancer were included. Results: Eight studies were included in this review. There was insufficient evidence to determine the adverse treatment outcomes of proton beam therapy versus conventional radiotherapy. No current studies assessed radiotoxicities nor oncological outcomes. Pooled dosimetric comparisons between proton beam therapy and various conventional radiotherapies were associated with reduced radiation exposure to the pelvis, bowel and bladder. Conclusions: This systematic review demonstrates a significant paucity of evidence in the current literature surrounding adverse effects and oncological outcomes related to proton beam therapy compared to conventional radiotherapy for non-metastatic rectal cancer. Pooled analyses of dosimetric studies highlight greater predicted radiation-sparing effects with proton beam therapy in this setting. This evidence, however, is based on evidence at a moderate risk of bias and clinical heterogeneity. Overall, more robust, prospective clinical trials are required. Full article
(This article belongs to the Section Oncology)

17 pages, 382 KiB  
Article
Can a Transparent Machine Learning Algorithm Predict Better than Its Black Box Counterparts? A Benchmarking Study Using 110 Data Sets
by Ryan A. Peterson, Max McGrath and Joseph E. Cavanaugh
Entropy 2024, 26(9), 746; https://doi.org/10.3390/e26090746 - 31 Aug 2024
Abstract
We developed a novel machine learning (ML) algorithm with the goal of producing transparent models (i.e., understandable by humans) while also flexibly accounting for nonlinearity and interactions. Our method is based on ranked sparsity, and it allows for flexibility and user control in varying the shade of the opacity of black box machine learning methods. The main tenet of ranked sparsity is that an algorithm should be more skeptical of higher-order polynomials and interactions a priori compared to main effects, and hence, the inclusion of these more complex terms should require a higher level of evidence. In this work, we put our new ranked sparsity algorithm (as implemented in the open source R package, sparseR) to the test in a predictive model “bakeoff” (i.e., a benchmarking study of ML algorithms applied “out of the box”, that is, with no special tuning). Algorithms were trained on a large set of simulated and real-world data sets from the Penn Machine Learning Benchmarks database, addressing both regression and binary classification problems. We evaluated the extent to which our human-centered algorithm can attain predictive accuracy that rivals popular black box approaches such as neural networks, random forests, and support vector machines, while also producing more interpretable models. Using out-of-bag error as a meta-outcome, we describe the properties of data sets in which human-centered approaches can perform as well as or better than black box approaches. We found that interpretable approaches predicted optimally or within 5% of the optimal method in most real-world data sets. We provide a more in-depth comparison of the performances of random forests to interpretable methods for several case studies, including exemplars in which algorithms performed similarly, and several cases when interpretable methods underperformed. 
This work provides a strong rationale for including human-centered transparent algorithms such as ours in predictive modeling applications. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Inference for High Dimensional Data)

31 pages, 2237 KiB  
Article
Initial Trans-Arterial Chemo-Embolisation (TACE) Is Associated with Similar Survival Outcomes as Compared to Upfront Percutaneous Ablation Allowing for Follow-Up Treatment in Those with Single Hepatocellular Carcinoma (HCC) ≤ 3 cm: Results of a Real-World Propensity-Matched Multi-Centre Australian Cohort Study
by Jonathan Abdelmalak, Simone I. Strasser, Natalie L. Ngu, Claude Dennis, Marie Sinclair, Avik Majumdar, Kate Collins, Katherine Bateman, Anouk Dev, Joshua H. Abasszade, Zina Valaydon, Daniel Saitta, Kathryn Gazelakis, Susan Byers, Jacinta Holmes, Alexander J. Thompson, Jessica Howell, Dhivya Pandiaraja, Steven Bollipo, Suresh Sharma, Merlyn Joseph, Rohit Sawhney, Amanda Nicoll, Nicholas Batt, Myo J. Tang, Stephen Riordan, Nicholas Hannah, James Haridy, Siddharth Sood, Eileen Lam, Elysia Greenhill, John Lubel, William Kemp, Ammar Majeed, John Zalcberg and Stuart K. Roberts
Cancers 2024, 16(17), 3010; https://doi.org/10.3390/cancers16173010 - 29 Aug 2024
Abstract
Percutaneous ablation is recommended in Barcelona Clinic Liver Cancer (BCLC) stage 0/A patients with HCC ≤3 cm as a curative treatment modality alongside surgical resection and liver transplantation. However, trans-arterial chemo-embolisation (TACE) is commonly used in the real-world as an initial treatment in patients with single small HCC in contrast to widely accepted clinical practice guidelines which typically describe TACE as a treatment for intermediate-stage HCC. We performed this real-world propensity-matched multi-centre cohort study in patients with single HCC ≤ 3 cm to assess for differences in survival outcomes between those undergoing initial TACE and those receiving upfront ablation. Patients with a new diagnosis of BCLC 0/A HCC with a single tumour ≤3 cm first diagnosed between 1 January 2016 and 31 December 2020 who received initial TACE or ablation were included in the study. A total of 348 patients were included in the study, with 147 patients receiving initial TACE and 201 patients undergoing upfront ablation. After propensity score matching using key covariates, 230 patients were available for analysis with 115 in each group. There were no significant differences in overall survival (log-rank test p = 0.652) or liver-related survival (log-rank test p = 0.495) over a median follow-up of 43 months. While rates of CR were superior after ablation compared to TACE as a first treatment (74% vs. 56%, p < 0.004), there was no significant difference in CR rates when allowing for further subsequent treatments (86% vs. 80% p = 0.219). In those who achieved CR, recurrence-free survival and local recurrence-free survival were similar (log rank test p = 0.355 and p = 0.390, respectively). Our study provides valuable real-world evidence that TACE when offered with appropriate follow-up treatment is a reasonable initial management strategy in very early/early-stage HCC, with similar survival outcomes as compared to those managed with upfront ablation. 
Further work is needed to better define the role for TACE in BCLC 0/A HCC. Full article
(This article belongs to the Special Issue Radiology for Diagnosis and Treatment of Liver Cancer)
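Propensity score matching of the kind used in this cohort study can be sketched as: fit a model of treatment assignment on baseline covariates, then pair each treated patient with the untreated patient whose predicted propensity is closest. The single simulated confounder (`age`), the logistic model, and greedy 1:1 nearest-neighbour matching below are illustrative assumptions, not the study's actual matching protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
age = rng.normal(60, 10, n)  # one simulated confounder
# Treatment assignment depends on the confounder (older -> more treated),
# so a naive group comparison would be biased.
treated = (rng.random(n) < 1 / (1 + np.exp(-(age - 60) / 5))).astype(int)

# 1. Fit a propensity model: P(treated | covariates).
ps = LogisticRegression().fit(age.reshape(-1, 1), treated)
ps = ps.predict_proba(age.reshape(-1, 1))[:, 1]

# 2. Greedy 1:1 nearest-neighbour matching on the propensity score,
#    matching without replacement.
t_idx = np.where(treated == 1)[0]
c_idx = list(np.where(treated == 0)[0])
pairs = []
for i in t_idx:
    j = min(c_idx, key=lambda j: abs(ps[i] - ps[j]))
    pairs.append((i, j))
    c_idx.remove(j)
    if not c_idx:
        break

matched_t = np.array([i for i, _ in pairs])
matched_c = np.array([j for _, j in pairs])

# Covariate balance before vs after matching.
gap_before = abs(age[treated == 1].mean() - age[treated == 0].mean())
gap_after = abs(age[matched_t].mean() - age[matched_c].mean())
```

After matching, the mean-age gap between groups shrinks, which is the point: survival comparisons (e.g. the log-rank tests in this study) are then made between groups with similar baseline covariates.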

26 pages, 4486 KiB  
Article
Bleached Hair as Standard Template to Insight the Performance of Commercial Hair Repair Products
by Eva Martins, Pedro Castro, Alessandra B. Ribeiro, Carla F. Pereira, Francisca Casanova, Rui Vilarinho, Joaquim Moreira and Óscar L. Ramos
Cosmetics 2024, 11(5), 150; https://doi.org/10.3390/cosmetics11050150 - 28 Aug 2024
Abstract
The increasing demand for effective hair care products has highlighted the necessity for rigorous claims substantiation methods, particularly for products that target specific hair types. This is essential because the effectiveness of a product can vary significantly based on the hair’s condition and characteristics. A well-defined bleaching protocol is crucial for creating a standardized method to assess product efficacy, especially for products designed to repair damaged hair. The objective of this study was to create a practical bleaching protocol that mimics real-world consumer experiences, ensuring that hair samples exhibit sufficient damage for testing. This approach allows for a reliable assessment of how well various products can repair hair. The protocol serves as a framework for evaluating hair properties and the specific effects of each product on hair structure. Color, brightness, lightness, morphology, and topography were primarily used to understand the big differences in the hair fiber when treated with two repair benchmark products, K18® and Olaplex®, in relation to the Bleached hair. The devised bleaching protocol proved to be a fitting framework for assessing the properties of hair and the unique characteristics of each tested product within the hair fiber. This protocol offers valuable insights and tools for substantiating consumer claims, with morphological and mechanical methods serving as indispensable tools for recognizing and validating claims related to hair. The addition of K18® and Olaplex® demonstrated an increase in hair brightness (Y) and lightness (L* and a*) in relation to the Bleached samples, which were considered relevant characteristics for consumers. Olaplex®’s water-based nature creates a visible inner sheet, effectively filling empty spaces and improving the disulfide linkage network. This enhancement was corroborated by the increased number of disulfide bonds and evident changes in the FTIR profile. 
In contrast, K18®, owing to the lipophilic nature of its constituents, resulted in the formation of an external layer above the fiber. The composition of each of the products had a discrete impact on the fiber distribution, which was an outcome relevant to the determination of spreadability by consumers. Full article
(This article belongs to the Special Issue 10th Anniversary of Cosmetics—Recent Advances and Perspectives)

14 pages, 971 KiB  
Article
Early Clinical Experience of Finerenone in People with Chronic Kidney Disease and Type 2 Diabetes in Japan—A Multi-Cohort Study from the FOUNTAIN (FinerenOne mUltidatabase NeTwork for Evidence generAtIoN) Platform
by Atsuhisa Sato, Daloha Rodriguez-Molina, Kanae Yoshikawa-Ryan, Satoshi Yamashita, Suguru Okami, Fangfang Liu, Alfredo Farjat, Nikolaus G. Oberprieler, Csaba P. Kovesdy, Keizo Kanasaki and David Vizcaya
J. Clin. Med. 2024, 13(17), 5107; https://doi.org/10.3390/jcm13175107 - 28 Aug 2024
Abstract
Background: In the phase 3 clinical trials FIGARO-DKD and FIDELIO-DKD, finerenone reduced the risk of cardiovascular and kidney events among people with chronic kidney disease (CKD) and type 2 diabetes (T2D). Evidence regarding finerenone use in real-world settings is limited. Methods: A retrospective cohort study (NCT06278207) using two Japanese nationwide hospital-based databases provided by Medical Data Vision (MDV) and Real World Data Co., Ltd. (RWD Co., Kyoto Japan), converted to the OMOP common data model, was conducted. Persons with CKD and T2D initiating finerenone from 1 July 2021, to 30 August 2023, were included. Baseline characteristics were described. The occurrence of hyperkalemia after finerenone initiation was assessed. Results: 1029 new users of finerenone were included (967 from MDV and 62 from RWD Co.). Mean age was 69.5 and 72.4 years with 27.3% and 27.4% being female in the MDV and RWD Co. databases, respectively. Hypertension (92 and 95%), hyperlipidemia (59 and 71%), and congestive heart failure (60 and 66%) were commonly observed comorbidities. At baseline, 80% of persons were prescribed angiotensin-converting-enzyme inhibitors or angiotensin-receptor blockers. Sodium–glucose cotransporter 2 inhibitors and glucagon-like peptide 1 receptor agonists were prescribed in 72% and 30% of the study population, respectively. The incidence proportions of hyperkalemia were 2.16 and 2.70 per 100 persons in the MDV and RWD Co. databases, respectively. There were no hospitalizations associated with hyperkalemia observed in either of the two datasets. Conclusions: For the first time, we report the largest current evidence on the clinical use of finerenone in real-world settings early after the drug authorization in Japan. This early evidence from clinical practice suggests that finerenone is used across comorbidities and comedications. Full article
(This article belongs to the Special Issue Type 2 Diabetes: Epidemiology and Clinical Advances)
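The incidence proportions quoted in the abstract follow the standard definition: new cases divided by persons at risk, scaled to 100. A minimal sketch; the event counts below are hypothetical illustrations, not the study's actual counts:

```python
def incidence_proportion_per_100(events: int, persons: int) -> float:
    """New cases divided by persons at risk, scaled to 100 persons."""
    if persons <= 0:
        raise ValueError("persons at risk must be positive")
    return 100.0 * events / persons

# Hypothetical counts for illustration: 27 events among 1250 persons at risk.
print(incidence_proportion_per_100(27, 1250))  # 2.16
```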

11 pages, 630 KiB  
Systematic Review
Real-World Efficacy of Intravitreal Faricimab for Diabetic Macular Edema: A Systematic Review
by Safiullah Nasimi, Nasratullah Nasimi, Jakob Grauslund, Anna Stage Vergmann and Yousif Subhi
J. Pers. Med. 2024, 14(9), 913; https://doi.org/10.3390/jpm14090913 - 28 Aug 2024
Viewed by 311
Abstract
Background: Diabetic macular edema (DME) is a prevalent exudative maculopathy, and anti-vascular endothelial growth factor (anti-VEGF) therapy is the first-line treatment. Faricimab, a novel bispecific agent targeting both VEGF and angiopoietin-2, has recently been approved for the treatment of DME. In this study, we systematically reviewed the real-world evidence on the efficacy of faricimab for the treatment of DME. Methods: We searched 11 databases for eligible studies. Study selection and data extraction were performed independently and in duplicate by two authors. Eligible studies were reviewed qualitatively. Results: We identified 10 eligible studies summarizing data from a total of 6054 eyes, with mean follow-up ranging from 55 days to 12 months. Five studies reported outcomes in a mixed population of treatment-naïve and previously treated eyes, and five studies reported outcomes exclusively in previously treated eyes. Faricimab improved best-corrected visual acuity and macular thickness. Extension of the treatment interval was possible in 61–81% of treatment-naïve eyes and 36–78% of previously treated eyes. Conclusions: Faricimab for DME yields clinical outcomes similar to those of previous anti-VEGF treatments but with extended treatment intervals, thus lowering the burden of therapy for patients. Long-term real-world studies are warranted. Full article
(This article belongs to the Special Issue Personalized Diagnosis and Therapies in Retinal Diseases)

16 pages, 323 KiB  
Article
An Innovative Algorithm Based on Octahedron Sets via Multi-Criteria Decision Making
by Güzide Şenel
Symmetry 2024, 16(9), 1107; https://doi.org/10.3390/sym16091107 - 26 Aug 2024
Viewed by 604
Abstract
Octahedron sets, which extend the previously defined fuzzy set and soft set concepts for handling uncertainty, represent a hybrid set theory that combines three distinct systems: interval-valued fuzzy sets, intuitionistic fuzzy sets, and traditional fuzzy sets. This comprehensive set theory is designed to express all information provided by decision makers as interval-valued intuitionistic fuzzy decision matrices, addressing a broader range of demands than conventional fuzzy decision-making methods. Multi-criteria decision-making (MCDM) methods are essential tools for analyzing and evaluating alternatives across multiple dimensions, enabling informed decision making aligned with strategic objectives. In this study, we applied MCDM methods to octahedron sets for the first time, optimizing decision results under various constraints and preferences. By employing an MCDM algorithm, this study demonstrated how integrating MCDM with octahedron sets can significantly enhance decision-making processes. The algorithm allows for the systematic evaluation of alternatives, showcasing the practical utility and effectiveness of octahedron sets in real-world scenarios. This approach was validated through illustrative examples, underscoring the value of algorithms in leveraging the full potential of octahedron sets. Furthermore, the application of MCDM to octahedron sets revealed that this hybrid structure can handle a wider range of decision-making problems more effectively than traditional fuzzy set approaches. This study not only highlights the theoretical advancements brought by octahedron sets but also provides practical evidence of their application, demonstrating their usefulness in complex decision-making environments. Overall, the integration of octahedron sets and MCDM methods marks a significant step forward in decision science, offering a robust framework for addressing uncertainty and optimizing decision outcomes. This research paves the way for future studies to explore the full capabilities of octahedron sets, potentially transforming decision-making practices across various fields. Full article
(This article belongs to the Special Issue Recent Developments on Fuzzy Sets Extensions)
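As a generic illustration of the MCDM evaluation step the abstract describes, the sketch below uses simple additive weighting over a crisp decision matrix. This is not the paper's octahedron-set algorithm, and all matrix values, weights, and criterion types are hypothetical:

```python
def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting: min-max normalize each criterion column,
    then score each alternative as the weighted sum of normalized values.
    matrix[i][j] - score of alternative i on criterion j
    weights[j]   - criterion weight (assumed to sum to 1)
    benefit[j]   - True if larger is better, False for a cost criterion
    """
    n_alt, n_crit = len(matrix), len(matrix[0])
    scores = [0.0] * n_alt
    for j in range(n_crit):
        col = [matrix[i][j] for i in range(n_alt)]
        lo, hi = min(col), max(col)
        for i in range(n_alt):
            if hi == lo:
                norm = 1.0  # all alternatives tie on this criterion
            elif benefit[j]:
                norm = (col[i] - lo) / (hi - lo)
            else:
                norm = (hi - col[i]) / (hi - lo)
            scores[i] += weights[j] * norm
    return scores

# Three alternatives, two benefit criteria and one cost criterion.
scores = saw_rank(
    [[7, 9, 4], [8, 6, 2], [6, 8, 3]],
    [0.5, 0.3, 0.2],
    [True, True, False],
)
best = max(range(len(scores)), key=scores.__getitem__)
```

In an octahedron-set setting, each matrix entry would carry interval-valued and intuitionistic membership components rather than a single crisp score; the aggregation step above would then operate on a defuzzified or score-function value.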
15 pages, 8273 KiB  
Article
Tunable High-Static-Low-Dynamic Stiffness Isolator under Harmonic and Seismic Loads
by Giovanni Iarriccio, Antonio Zippo, Fatemeh Eskandary-Malayery, Sinniah Ilanko, Yusuke Mochida, Brian Mace and Francesco Pellicano
Vibration 2024, 7(3), 829-843; https://doi.org/10.3390/vibration7030044 - 25 Aug 2024
Viewed by 437
Abstract
High-Static-Low-Dynamic Stiffness (HSLDS) mechanisms exploit nonlinear kinematics to improve the effectiveness of isolators, preserving controlled static deflections while maintaining low natural frequencies. Although extensively studied under harmonic base excitation, there are still few applications considering real seismic signals and little experimental evidence of real-world performance. This study experimentally demonstrates the beneficial effects of HSLDS isolators over linear ones in reducing the vibrations transmitted to the suspended mass under near-fault earthquakes. A tripod mechanism isolator is presented, and a lumped parameter model is formulated considering a piecewise nonlinear–linear stiffness, with dissipation taken into account through viscous and dry friction forces. Experimental shake table tests are conducted considering harmonic base motion to evaluate the isolator transmissibility in the vertical direction. Excellent agreement is observed when comparing the model to the experimental measurements. Finally, the behavior of the isolator is investigated under earthquake inputs, and results are presented using vertical acceleration time histories and spectra, demonstrating the vibration reduction provided by the nonlinear isolator. Full article
(This article belongs to the Special Issue Nonlinear Vibration of Mechanical Systems)
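The lumped-parameter model the abstract describes (piecewise nonlinear–linear stiffness with viscous and dry-friction dissipation under base excitation) can be sketched as follows. All parameter values, the `simulate` helper, and the fixed-step integration scheme are hypothetical illustrations, not the authors' formulation:

```python
import math

def restoring_force(x, k_low, k_high, d):
    """Piecewise stiffness: low dynamic stiffness k_low inside |x| <= d,
    a stiffer linear branch k_high beyond the transition displacement d."""
    if abs(x) <= d:
        return k_low * x
    sign = 1.0 if x > 0 else -1.0
    return sign * (k_low * d + k_high * (abs(x) - d))

def simulate(m, c, f_dry, k_low, k_high, d, base_acc, dt, steps):
    """Relative coordinate z = x_mass - x_base under base acceleration:
    m*z'' + c*z' + f_dry*sign(z') + f_s(z) = -m*a_base(t).
    Crude Coulomb model: sign(0) is treated as +1. Semi-implicit Euler."""
    z, v = 0.0, 0.0
    history = []
    for n in range(steps):
        a = (-m * base_acc(n * dt) - c * v
             - f_dry * math.copysign(1.0, v)
             - restoring_force(z, k_low, k_high, d)) / m
        v += a * dt  # update velocity first, then position (semi-implicit)
        z += v * dt
        history.append(z)
    return history

# Hypothetical harmonic base excitation at 2 Hz, 5 s of simulated time.
zs = simulate(m=1.0, c=0.5, f_dry=0.05, k_low=50.0, k_high=500.0, d=0.02,
              base_acc=lambda t: 0.5 * math.sin(2 * math.pi * 2.0 * t),
              dt=1e-3, steps=5000)
```

Replacing the lambda with a recorded near-fault acceleration time history would reproduce the seismic-input case; transmissibility follows from comparing mass and base motion amplitudes over a frequency sweep.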

39 pages, 2298 KiB  
Article
Effects of the Minimum Wage (MW) on Income Inequality: Systematic Review and Analysis of the Spanish Case
by Manuela A. de Paz-Báñez, Celia Sánchez-López and María José Asensio-Coto
Economies 2024, 12(9), 223; https://doi.org/10.3390/economies12090223 - 23 Aug 2024
Viewed by 457
Abstract
The minimum wage has become a standard measure in the economic and social policies of countries all over the world. Its primary objective is to guarantee that workers receive a wage that allows them to lead a decent life, thereby reducing inequality and poverty. However, studies on the minimum wage have largely assessed its effects on employment rather than on these dimensions. The objective of this study is to address this research gap by analysing the effects of minimum wage increases on income inequality and poverty. To this end, we first conducted a systematic review of the empirical analyses using the PRISMA methodology, with a view to ensuring that all empirical evidence was available. Secondly, the Spanish case was examined. The significant increase in the minimum wage in Spain in 2019 (21.3% in real terms) presents an invaluable opportunity to use this event as a natural experiment to generate new evidence. A difference-in-differences approach was employed to assess the impact of this change in the period 2018–2019, using microdata from the European Statistics on Income and Living Conditions (EU-SILC for Spain). In doing so, two main scientific contributions were made. The first is a systematic, exhaustive, and up-to-date literature review (to June 2024), as there is, to our knowledge, no recent systematic review of the relationship between the minimum wage and inequality. The available evidence indicates a clear inverse relationship between the minimum wage and both inequality and poverty. The second concerns the Spanish case, on which there has been a dearth of scientific studies. This paper thus provides new scientific evidence demonstrating that a significant increase in the minimum wage can substantially improve the income of low-wage earners, thereby reducing income inequality and in-work poverty. Furthermore, there is evidence of a spillover effect towards income groups closer to the treatment group. Full article
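The difference-in-differences logic used in the abstract can be sketched in its canonical two-group, two-period form. The group labels and income figures below are hypothetical illustrations, not EU-SILC microdata:

```python
def diff_in_diff(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences estimator on group means:
    (treated post - treated pre) - (control post - control pre).
    The control group's change estimates the common trend; subtracting it
    isolates the treatment effect under the parallel-trends assumption."""
    def mean(xs):
        return sum(xs) / len(xs)
    return ((mean(y_treat_post) - mean(y_treat_pre))
            - (mean(y_ctrl_post) - mean(y_ctrl_pre)))

# Hypothetical monthly incomes: low-wage workers (treated by the minimum-wage
# rise) vs. a higher-wage control group, observed in 2018 and 2019.
effect = diff_in_diff(
    [900, 950, 1000], [1050, 1100, 1150],    # treated: +150 on average
    [1800, 1900, 2000], [1850, 1950, 2050],  # control: +50 common trend
)
# effect isolates the 100-unit gain attributable to the intervention
```

In practice the same estimator is usually obtained from a regression of income on treatment, period, and their interaction, which also yields standard errors.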
