Information, Volume 15, Issue 8 (August 2024) – 87 articles

Cover Story: The cover paper presents Darly, a QoS-, interference-, and heterogeneity-aware Deep Reinforcement Learning-based scheduler for serverless video analytics deployments on distributed Edge nodes. The proposed framework incorporates a DRL agent that exploits performance counters to identify the levels of interference and the degree of heterogeneity in the underlying Edge infrastructure. It combines this information with user-defined QoS requirements to improve resource allocations by deciding the placement, migration, or horizontal scaling of serverless functions.
62 pages, 1897 KiB  
Review
Construction of Knowledge Graphs: Current State and Challenges
by Marvin Hofer, Daniel Obraczka, Alieh Saeedi, Hanna Köpcke and Erhard Rahm
Information 2024, 15(8), 509; https://doi.org/10.3390/info15080509 - 22 Aug 2024
Viewed by 599
Abstract
With Knowledge Graphs (KGs) at the center of numerous applications such as recommender systems and question-answering, the need for generalized pipelines to construct and continuously update such KGs is increasing. While the individual steps that are necessary to create KGs from unstructured sources (e.g., text) and structured data sources (e.g., databases) are mostly well researched for their one-shot execution, their adoption for incremental KG updates and the interplay of the individual steps have hardly been investigated in a systematic manner so far. In this work, we first discuss the main graph models for KGs and introduce the major requirements for future KG construction pipelines. Next, we provide an overview of the necessary steps to build high-quality KGs, including cross-cutting topics such as metadata management, ontology development, and quality assurance. We then evaluate the state of the art of KG construction with respect to the introduced requirements for specific popular KGs, as well as some recent tools and strategies for KG construction. Finally, we identify areas in need of further research and improvement. Full article
(This article belongs to the Special Issue Knowledge Graph Technology and its Applications II)
25 pages, 5483 KiB  
Article
Automated Negotiation Agents for Modeling Single-Peaked Bidders: An Experimental Comparison
by Fatemeh Hassanvand, Faria Nassiri-Mofakham and Katsuhide Fujita
Information 2024, 15(8), 508; https://doi.org/10.3390/info15080508 - 22 Aug 2024
Viewed by 406
Abstract
During automated negotiations, intelligent software agents act based on the preferences of their proprietors, interdicting direct preference exposure. The agent can be armed with a component of an opponent’s modeling features to reduce the uncertainty in the negotiation, but how negotiating agents with a single-peaked preference direct our attention has not been considered. Here, we first investigate the proper representation of single-peaked preferences and implementation of single-peaked agents within bidder agents using different instances of general single-peaked functions. We evaluate the modeling of single-peaked preferences and bidders in automated negotiating agents. Through experiments, we reveal that most of the opponent models can model our benchmark single-peaked agents with similar efficiencies. However, the accuracies differ among the models and in different rival batches. The perceptron-based P1 model obtained the highest accuracy, and the frequency-based model Randomdance outperformed the other competitors in most other performance measures. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)
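For orientation, a single-peaked preference assigns its highest utility to one ideal value of an issue and strictly lower utility the further an offer moves from that peak. A minimal Python sketch of one such valuation function is given below; the linear fall-off, the peak value, and the issue bounds are illustrative assumptions, not the benchmark agents used in the paper.

```python
def single_peaked_utility(offer: float, peak: float, lo: float, hi: float) -> float:
    """Utility is 1.0 at the peak and falls off linearly toward the issue bounds."""
    if offer <= peak:
        return (offer - lo) / (peak - lo) if peak > lo else 1.0
    return (hi - offer) / (hi - peak) if hi > peak else 1.0

# Example: a bidder whose ideal price is 60 on an issue ranging from 0 to 100.
for price in (20, 60, 90):
    print(price, round(single_peaked_utility(price, peak=60, lo=0, hi=100), 2))
```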
33 pages, 6672 KiB  
Review
Advancements in Deep Learning Techniques for Time Series Forecasting in Maritime Applications: A Comprehensive Review
by Meng Wang, Xinyan Guo, Yanling She, Yang Zhou, Maohan Liang and Zhong Shuo Chen
Information 2024, 15(8), 507; https://doi.org/10.3390/info15080507 - 21 Aug 2024
Viewed by 1291
Abstract
The maritime industry is integral to global trade and heavily depends on precise forecasting to maintain efficiency, safety, and economic sustainability. Adopting deep learning for predictive analysis has markedly improved operational accuracy, cost efficiency, and decision-making. This technology facilitates advanced time series analysis, vital for optimizing maritime operations. This paper reviews deep learning applications in time series analysis within the maritime industry, focusing on three areas: ship operation-related, port operation-related, and shipping market-related topics. It provides a detailed overview of the existing literature on applications such as ship trajectory prediction, ship fuel consumption prediction, port throughput prediction, and shipping market prediction. The paper comprehensively examines the primary deep learning architectures used for time series forecasting in the maritime industry, categorizing them into four principal types. It systematically analyzes the advantages of deep learning architectures across different application scenarios and explores methodologies for selecting models based on specific requirements. Additionally, it analyzes data sources from the existing literature and suggests future research directions. Full article
(This article belongs to the Special Issue Deep Learning Approach for Time Series Forecasting)
16 pages, 2069 KiB  
Article
Trading Cloud Computing Stocks Using SMA
by Xianrong Zheng and Lingyu Li
Information 2024, 15(8), 506; https://doi.org/10.3390/info15080506 - 21 Aug 2024
Viewed by 411
Abstract
As cloud computing adoption becomes mainstream, the cloud services market offers vast profits. Moreover, serverless computing, the next stage of cloud computing, comes with huge economic potential. To capitalize on this trend, investors are interested in trading cloud stocks. As high-growth technology stocks, investing in cloud stocks is both rewarding and challenging. The research question here is how a trading strategy will perform on cloud stocks. As a result, this paper employs an effective method—Simple Moving Average (SMA)—to trade cloud stocks. To evaluate its performance, we conducted extensive experiments with real market data that spans over 23 years. Results show that SMA can achieve satisfying performance in terms of several measures, including MAE, RMSE, and R-squared. Full article
(This article belongs to the Special Issue Blockchain Applications for Business Process Management)
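To make the strategy concrete, the sketch below computes a simple-moving-average crossover signal with pandas; the window lengths, the long-only rule, and the example ticker are placeholders rather than the configuration evaluated in the paper.

```python
import pandas as pd

def sma_signals(close: pd.Series, short: int = 20, long: int = 50) -> pd.DataFrame:
    """Go long when the short SMA is above the long SMA; stay flat otherwise."""
    df = pd.DataFrame({"close": close})
    df["sma_short"] = close.rolling(short).mean()
    df["sma_long"] = close.rolling(long).mean()
    df["position"] = (df["sma_short"] > df["sma_long"]).astype(int)
    # Yesterday's position applied to today's price change gives the daily strategy return.
    df["strategy_ret"] = df["position"].shift(1) * close.pct_change()
    return df

# Usage with any daily close-price series of a cloud-computing stock, e.g.:
# signals = sma_signals(prices["MSFT"]); print(signals["strategy_ret"].sum())
```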
29 pages, 1854 KiB  
Article
Information Security Awareness in the Insurance Sector: Cognitive and Internal Factors and Combined Recommendations
by Morgan Djotaroeno and Erik Beulen
Information 2024, 15(8), 505; https://doi.org/10.3390/info15080505 - 21 Aug 2024
Viewed by 651
Abstract
Cybercrime is developing rapidly, creating an increased demand for information security knowledge. Attackers are becoming more sophisticated and complex in their assault tactics. Employees are a focal point since humans remain the ‘weakest link’ and are vital to prevention. This research investigates what cognitive and internal factors influence information security awareness (ISA) among employees, through quantitative empirical research using a survey conducted at a Dutch financial insurance firm. The research question of “How and to what extent do cognitive and internal factors contribute to information security awareness (ISA)?” has been answered, using the theory of situation awareness as the theoretical lens. The constructs of Security Complexity, Information Security Goals (InfoSec Goals), and SETA Programs (security education, training, and awareness) significantly contribute to ISA. The most important research recommendations are to seek novel explanatory variables for ISA, further investigate the roots of Security Complexity and what influences InfoSec Goals, and venture into qualitative and experimental research methodologies to seek more depth. The practical recommendations are to (1) minimize the complexity of information security topics (e.g., by contextualizing them more for specific employee groups) and (2) integrate these simplifications into various SETA methods (e.g., gamification and online training). Full article
26 pages, 10462 KiB  
Article
The Optimal Choice of the Encoder–Decoder Model Components for Image Captioning
by Mateusz Bartosiewicz and Marcin Iwanowski
Information 2024, 15(8), 504; https://doi.org/10.3390/info15080504 - 21 Aug 2024
Viewed by 489
Abstract
Image captioning aims at generating meaningful verbal descriptions of a digital image. This domain is rapidly growing due to the enormous increase in available computational resources. The most advanced methods are, however, resource-demanding. In our paper, we return to the encoder–decoder deep-learning model and investigate how replacing its components with newer equivalents improves overall effectiveness. The primary motivation of our study is to obtain the highest possible level of improvement of classic methods, which are applicable in less computational environments where most advanced models are too heavy to be efficiently applied. We investigate image feature extractors, recurrent neural networks, word embedding models, and word generation layers and discuss how each component influences the captioning model’s overall performance. Our experiments are performed on the MS COCO 2014 dataset. As a result of our research, replacing components improves the quality of generating image captions. The results will help design efficient models with optimal combinations of their components. Full article
(This article belongs to the Special Issue Information Processing in Multimedia Applications)
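The classic encoder–decoder captioner pairs a pretrained CNN feature extractor with a word-embedding layer, a recurrent decoder, and a word-generation layer, which are exactly the components the study swaps in and out. A highly condensed PyTorch sketch of that skeleton follows; the ResNet-50 encoder and the layer sizes are illustrative choices, not the combinations compared in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        cnn = models.resnet50(weights="IMAGENET1K_V2")
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])    # image -> 2048-d feature
        self.img_proj = nn.Linear(2048, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)             # word embedding layer
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # recurrent decoder
        self.out = nn.Linear(hidden_dim, vocab_size)                  # word generation layer

    def forward(self, images, captions):
        feats = self.img_proj(self.encoder(images).flatten(1)).unsqueeze(1)
        words = self.embed(captions[:, :-1])
        hidden, _ = self.rnn(torch.cat([feats, words], dim=1))
        return self.out(hidden)  # logits over the vocabulary at each step
```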
17 pages, 368 KiB  
Article
An Approach for Maximizing Computation Bits in UAV-Assisted Wireless Powered Mobile Edge Computing Networks
by Zhenbo Liu, Yunge Duan and Shuang Fu
Information 2024, 15(8), 503; https://doi.org/10.3390/info15080503 - 21 Aug 2024
Viewed by 440
Abstract
With the development of the Internet of Things (IoT), IoT nodes with limited energy and computing capability are no longer able to address increasingly complex computational tasks. To address this issue, an Unmanned Aerial Vehicle (UAV)-assisted Wireless Power Transfer (WPT) Mobile Edge Computing (MEC) system is proposed in this study. By jointly optimizing variables such as energy harvesting time, user transmission power, user offloading time, CPU frequency, and UAV deployment location, the system aims to maximize the number of computation bits by the users. This optimization yields a challenging non-convex optimization problem. To address these issues, a two-stage alternating method based on the Lagrangian dual method and the Successive Convex Approximation (SCA) method is proposed to decompose the initial problem into two sub-problems. Firstly, the UAV position is fixed to obtain the optimal values of other variables, and then the UAV position is optimized based on the solved variables. Finally, this iterative process continues until the algorithm convergences, and the optimal solution for the given problem is obtained. The simulation results indicate that the proposed algorithm exhibits good convergence. Compared to other benchmark solutions, the proposed approach performs optimally in maximizing computation bits. Full article
15 pages, 2185 KiB  
Article
Cost Estimation and Prediction for Residential Projects Based on Grey Relational Analysis–Lasso Regression–Backpropagation Neural Network
by Lijun Chen and Dejiang Wang
Information 2024, 15(8), 502; https://doi.org/10.3390/info15080502 - 21 Aug 2024
Viewed by 460
Abstract
In the early stages of residential project investment, accurately estimating the engineering costs of residential projects is crucial for cost control and management of the project. However, the current cost estimation of residential engineering in China is primarily carried out by cost personnel based on their own experience. This process is time-consuming and labour-intensive, and it involves subjective judgement, which can lead to significant estimation errors and fail to meet the rapidly developing market demands. Data collection for residential construction projects is challenging, with small sample sizes, numerous attributes, and complexity. This paper adopts a hybrid method combining grey relational analysis, Lasso regression, and a Backpropagation Neural Network (GRA-LASSO-BPNN). This method has significant advantages in handling high-dimensional small samples and multiple correlated variables. The grey relational analysis (GRA) is used to quantitatively identify cost-driving factors, and 14 highly correlated factors are selected as input variables. Then, regularization through Lasso regression (LASSO) is used to filter the final input variables, which are subsequently input into the Backpropagation Neural Network (BPNN) to establish the relationship between the unit cost of residential projects and 12 input variables. Compared to using the LASSO and BPNN methods individually, the GRA-LASSO-BPNN hybrid prediction method performs better in terms of error evaluation metrics. The research findings can provide quantitative decision support for cost estimators in the early estimation stages of residential project investment decision-making. Full article
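A compact scikit-learn sketch of the Lasso-then-network portion of such a pipeline is shown below; the grey relational analysis step is omitted and the fourteen synthetic features stand in for the real cost-driving factors, so treat it as an illustration of the idea rather than the paper's model.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 14))                 # 14 candidate cost-driving factors
y = X[:, :3] @ np.array([3.0, -2.0, 1.5]) + rng.normal(scale=0.1, size=80)

X_std = StandardScaler().fit_transform(X)

# Lasso regularization drops weakly related factors (zero coefficients).
lasso = LassoCV(cv=5).fit(X_std, y)
keep = np.flatnonzero(lasso.coef_ != 0)

# Backpropagation neural network on the retained factors only.
bpnn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
bpnn.fit(X_std[:, keep], y)
print("kept factors:", keep, "R^2:", round(bpnn.score(X_std[:, keep], y), 3))
```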
12 pages, 244 KiB  
Perspective
Reviewing the Horizon: The Future of Extended Reality and Artificial Intelligence in Neurorehabilitation for Brain Injury Recovery
by Khalida Akbar, Anna Passaro, Mariacarla Di Gioia, Elvira Martini, Mirella Dragone, Antonio Zullo and Fabrizio Stasolla
Information 2024, 15(8), 501; https://doi.org/10.3390/info15080501 - 21 Aug 2024
Viewed by 612
Abstract
People with disorders of consciousness, either as a consequence of an acquired brain injury or a traumatic brain injury, may pose serious challenges to medical and/or rehabilitative centers with an increased burden on caregivers and families. The objectives of this study were as follows: to explore the use of extended reality as a critical means of rehabilitative support in people with disorders of consciousness and brain injuries; to evaluate its impact on recovery processes; to assess the improvements in the participants’ quality of life, and to reduce the burden on families and caregivers by using extended reality and artificial-intelligence-based programs. A selective review of the newest empirical studies on the use of extended reality and artificial-intelligence-based interventions in patients with brain injuries and disorders of consciousness was conducted over the last decade. The potential for bias in this selective review is acknowledged. A conceptual framework was detailed. The data showed that extended reality and artificial-intelligence-based programs successfully enhanced the adaptive responding of the participants involved, and improved their quality of life. The burden on caregivers and families was reduced accordingly. Extended reality and artificial intelligence may be viewed as crucial means of recovery in people with disorders of consciousness and brain injuries. Full article
(This article belongs to the Special Issue Extended Reality and Cybersecurity)
25 pages, 514 KiB  
Article
Bridging Linguistic Gaps: Developing a Greek Text Simplification Dataset
by Leonidas Agathos, Andreas Avgoustis, Xristiana Kryelesi, Aikaterini Makridou, Ilias Tzanis, Despoina Mouratidis, Katia Lida Kermanidis and Andreas Kanavos
Information 2024, 15(8), 500; https://doi.org/10.3390/info15080500 - 20 Aug 2024
Viewed by 434
Abstract
Text simplification is crucial in bridging the comprehension gap in today’s information-rich environment. Despite advancements in English text simplification, languages with intricate grammatical structures, such as Greek, often remain under-explored. The complexity of Greek grammar, characterized by its flexible syntactic ordering, presents unique challenges that hinder comprehension for native speakers, learners, tourists, and international students. This paper introduces a comprehensive dataset for Greek text simplification, containing over 7500 sentences across diverse topics such as history, science, and culture, tailored to address these challenges. We outline the methodology for compiling this dataset, including a collection of texts from Greek Wikipedia, their annotation with simplified versions, and the establishment of robust evaluation metrics. Additionally, the paper details the implementation of quality control measures and the application of machine learning techniques to analyze text complexity. Our experimental results demonstrate the dataset’s initial effectiveness and potential in reducing linguistic barriers and enhancing communication, with initial machine learning models showing promising directions for future improvements in classifying text complexity. The development of this dataset marks a significant step toward improving accessibility and comprehension for a broad audience of Greek speakers and learners, fostering a more inclusive society. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
20 pages, 2982 KiB  
Article
Exploring Tourist Experience through Online Reviews Using Aspect-Based Sentiment Analysis with Zero-Shot Learning for Hospitality Service Enhancement
by Ibrahim Nawawi, Kurnia Fahmy Ilmawan, Muhammad Rifqi Maarif and Muhammad Syafrudin
Information 2024, 15(8), 499; https://doi.org/10.3390/info15080499 - 20 Aug 2024
Viewed by 474
Abstract
Hospitality services play a crucial role in shaping tourist satisfaction and revisiting intention toward destinations. Traditional feedback methods like surveys often fail to capture the nuanced and real-time experiences of tourists. Digital platforms such as TripAdvisor, Yelp, and Google Reviews provide a rich source of user-generated content, but the sheer volume of reviews makes manual analysis impractical. This study proposes integrating aspect-based sentiment analysis with zero-shot learning to analyze online tourist reviews effectively without requiring extensive annotated datasets. Using pretrained models like RoBERTa, the research framework involves keyword extraction, sentence segment detection, aspect construction, and sentiment polarity measurement. The dataset, sourced from TripAdvisor reviews of attractions, hotels, and restaurants in Central Java, Indonesia, underwent preprocessing to ensure suitability for analysis. The results highlight the importance of aspects such as food, accommodation, and cultural experiences in tourist satisfaction. The findings indicate a need for continuous service improvement to meet evolving tourist expectations, demonstrating the potential of advanced natural language processing techniques in enhancing hospitality services and customer satisfaction. Full article
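The zero-shot, aspect-based part of this framework can be approximated with an off-the-shelf NLI-style classifier from Hugging Face Transformers. The sketch below is a rough illustration only: the checkpoint, the aspect list, and the single example sentence are assumptions, not the study's exact keyword-extraction and segmentation pipeline.

```python
from transformers import pipeline

# Any NLI-finetuned checkpoint works for zero-shot classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

review = "The food at the hotel restaurant was wonderful, but check-in took forever."
aspects = ["food", "accommodation", "staff service", "cultural experience"]

# 1) Which aspects does the review mention?
aspect_scores = classifier(review, candidate_labels=aspects, multi_label=True)

# 2) Sentiment polarity for the review, again without task-specific training.
polarity = classifier(review, candidate_labels=["positive", "negative", "neutral"])

print(aspect_scores["labels"][:2], polarity["labels"][0])
```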
17 pages, 809 KiB  
Article
Intelligent Risk Evaluation for Investment Banking IPO Business Based on Text Analysis
by Lei Zhang, Chao Wang and Xiaoxing Liu
Information 2024, 15(8), 498; https://doi.org/10.3390/info15080498 - 20 Aug 2024
Viewed by 384
Abstract
By constructing a text quality analysis system and a company quality analysis system based on the prospectus, an intelligent analysis method for investment banking IPO business risk is proposed, based on machine learning and text analysis technology. Taking the Sci-Tech Innovation Board in China as a sample, the empirical analysis results show that the text quality and the company quality disclosed in the prospectus can affect the withdrawal rate of investment banking IPO business. By carrying out text analysis and machine learning on the text quality and company quality, the risk of investment banking IPO business can be predicted intelligently and effectively. The research results can not only improve the efficiency of investment banking IPO business and save resource costs, but also improve the standardization and authenticity of investment banking IPO business. Full article
(This article belongs to the Section Information Applications)
14 pages, 291 KiB  
Article
Minimum Mean Squared Error Estimation and Mutual Information Gain
by Jerry Gibson
Information 2024, 15(8), 497; https://doi.org/10.3390/info15080497 - 20 Aug 2024
Viewed by 453
Abstract
Information theoretic quantities such as entropy, entropy rate, information gain, and relative entropy are often used to understand the performance of intelligent agents in learning applications. Mean squared error has not played a role in these analyses, primarily because it is not felt to be a viable performance indicator in these scenarios. We build on a new quantity, the log ratio of entropy powers, to establish that minimum mean squared error (MMSE) estimation, prediction, and smoothing are directly connected to mutual information gain or loss in an agent learning system modeled by a Markov chain for many probability distributions of interest. Expressions for mutual information gain or loss are developed for MMSE estimation, prediction, and smoothing, and an example for fixed lag smoothing is presented. Full article
(This article belongs to the Special Issue Fundamental Problems of Information Studies)
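For readers new to the quantity, the entropy power of a random variable X with differential entropy h(X) is N(X) = e^{2h(X)}/(2πe), and the log ratio of entropy powers is just a difference of differential entropies. The lines below sketch the jointly Gaussian special case, where the link between mutual information and the minimum mean squared error is explicit; the paper's development extends this to a wider class of distributions.

```latex
N(X) = \frac{1}{2\pi e}\, e^{2h(X)}, \qquad
\tfrac{1}{2}\ln\frac{N(X)}{N(X-\hat{X})} = h(X) - h(X-\hat{X}).
% Jointly Gaussian (X, Y) with MMSE estimate \hat{X} = \mathbb{E}[X \mid Y]:
I(X;Y) = h(X) - h(X \mid Y)
       = \tfrac{1}{2}\ln\frac{\operatorname{Var}(X)}{\mathbb{E}\!\left[(X-\hat{X})^{2}\right]}.
```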
26 pages, 3537 KiB  
Article
From Data to Insight: Transforming Online Job Postings into Labor-Market Intelligence
by Giannis Tzimas, Nikos Zotos, Evangelos Mourelatos, Konstantinos C. Giotopoulos and Panagiotis Zervas
Information 2024, 15(8), 496; https://doi.org/10.3390/info15080496 - 20 Aug 2024
Viewed by 624
Abstract
In the continuously changing labor market, understanding the dynamics of online job postings is crucial for economic and workforce development. With the increasing reliance on Online Job Portals, analyzing online job postings has become an essential tool for capturing real-time labor-market trends. This paper presents a comprehensive methodology for processing online job postings to generate labor-market intelligence. The proposed methodology encompasses data source selection, data extraction, cleansing, normalization, and deduplication procedures. The final step involves information extraction based on employer industry, occupation, workplace, skills, and required experience. We address the key challenges that emerge at each step and discuss how they can be resolved. Our methodology is applied to two use cases: the first focuses on the analysis of the Greek labor market in the tourism industry during the COVID-19 pandemic, revealing shifts in job demands, skill requirements, and employment types. In the second use case, a data-driven ontology is employed to extract skills from job postings using machine learning. The findings highlight that the proposed methodology, utilizing NLP and machine-learning techniques instead of LLMs, can be applied to different labor market-analysis use cases and offer valuable insights for businesses, job seekers, and policymakers. Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
27 pages, 19187 KiB  
Article
Analyzing Tor Browser Artifacts for Enhanced Web Forensics, Anonymity, Cybersecurity, and Privacy in Windows-Based Systems
by Muhammad Shanawar Javed, Syed Muhammad Sajjad, Danish Mehmood, Khawaja Mansoor, Zafar Iqbal, Muhammad Kazim and Zia Muhammad
Information 2024, 15(8), 495; https://doi.org/10.3390/info15080495 - 19 Aug 2024
Viewed by 639
Abstract
The Tor browser is widely used for anonymity, providing layered encryption for enhanced privacy. Besides its positive uses, it is also popular among cybercriminals for illegal activities such as trafficking, smuggling, betting, and illicit trade. There is a need for Tor Browser forensics to identify its use in unlawful activities and explore its consequences. This research analyzes artifacts generated by Tor on Windows-based systems. The methodology integrates forensic techniques into incident responses per NIST SP (800-86), exploring areas such as registry, storage, network, and memory using tools like bulk-extractor, autopsy, and regshot. We propose an automated PowerShell script that detects Tor usage and retrieves artifacts with minimal user interaction. Finally, this research performs timeline analysis and artifact correlation for a contextual understanding of event sequences in memory and network domains, ultimately contributing to improved incident response and accountability. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
19 pages, 1050 KiB  
Article
Enhancing Biomedical Question Answering with Large Language Models
by Hua Yang, Shilong Li and Teresa Gonçalves
Information 2024, 15(8), 494; https://doi.org/10.3390/info15080494 - 19 Aug 2024
Viewed by 619
Abstract
In the field of Information Retrieval, biomedical question answering is a specialized task that focuses on answering questions related to medical and healthcare domains. The goal is to provide accurate and relevant answers to the posed queries related to medical conditions, treatments, procedures, medications, and other healthcare-related topics. Well-designed models should efficiently retrieve relevant passages. Early retrieval models can quickly retrieve passages but often with low precision. In contrast, recently developed Large Language Models can retrieve documents with high precision but at a slower pace. To tackle this issue, we propose a two-stage retrieval approach that initially utilizes BM25 for a preliminary search to identify potential candidate documents; subsequently, a Large Language Model is fine-tuned to evaluate the relevance of query–document pairs. Experimental results indicate that our approach achieves comparative performances on the BioASQ and the TREC-COVID datasets. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")
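A minimal sketch of the two-stage retrieve-then-rerank idea is given below, using the rank_bm25 package for the first pass; the tiny corpus and the placeholder rerank function stand in for the fine-tuned Large Language Model, so this shows the control flow rather than the authors' system.

```python
from rank_bm25 import BM25Okapi

corpus = [
    "Aspirin is used to reduce fever and relieve mild pain.",
    "ACE inhibitors are prescribed for hypertension and heart failure.",
    "The BM25 ranking function scores documents against a query.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "which drug treats high blood pressure"
scores = bm25.get_scores(query.lower().split())
candidates = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:2]

def llm_relevance(query: str, passage: str) -> float:
    """Placeholder for a fine-tuned LLM that scores query-passage relevance."""
    return float(len(set(query.split()) & set(passage.lower().split())))

reranked = sorted(candidates, key=lambda i: llm_relevance(query, corpus[i]), reverse=True)
print(corpus[reranked[0]])
```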
14 pages, 1516 KiB  
Article
Early Recurrence Prediction of Hepatocellular Carcinoma Using Deep Learning Frameworks with Multi-Task Pre-Training
by Jian Song, Haohua Dong, Youwen Chen, Xianru Zhang, Gan Zhan, Rahul Kumar Jain and Yen-Wei Chen
Information 2024, 15(8), 493; https://doi.org/10.3390/info15080493 - 17 Aug 2024
Viewed by 530
Abstract
Post-operative early recurrence (ER) of hepatocellular carcinoma (HCC) is a major cause of mortality. Predicting ER before treatment can guide treatment and follow-up protocols. Deep learning frameworks, known for their superior performance, are widely used in medical imaging. However, they face challenges due to limited annotated data. We propose a multi-task pre-training method using self-supervised learning with medical images for predicting the ER of HCC. This method involves two pretext tasks: phase shuffle, focusing on intra-image feature representation, and case discrimination, focusing on inter-image feature representation. The effectiveness and generalization of the proposed method are validated through two different experiments. In addition to predicting early recurrence, we also apply the proposed method to the classification of focal liver lesions. Both experiments show that the multi-task pre-training model outperforms existing pre-training (transfer learning) methods with natural images, single-task self-supervised pre-training, and DINOv2. Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning)
14 pages, 4112 KiB  
Article
A Feasibility Study of a Respiratory Rate Measurement System Using Wearable MOx Sensors
by Mitsuhiro Fukuda, Jaakko Hyry, Ryosuke Omoto, Takunori Shimazaki, Takumi Kobayashi and Daisuke Anzai
Information 2024, 15(8), 492; https://doi.org/10.3390/info15080492 - 16 Aug 2024
Viewed by 392
Abstract
Accurately obtaining a patient’s respiratory rate is crucial for promptly identifying any sudden changes in their condition during emergencies. Typically, the respiratory rate is assessed through a combination of impedance change measurements and electrocardiography (ECG). However, impedance measurements are prone to interference from body movements. Conversely, a capnometer coupled with a ventilator offers a method of measuring the respiratory rate that is unaffected by body movements. However, capnometers are mainly used to evaluate respiration when using a ventilator or an Ambu bag by measuring the CO2 concentration at the breathing circuit, and they are not used only to measure the respiratory rate. Furthermore, capnometers are not suitable as wearable devices because they require intubation or a mask that covers the nose and mouth to prevent air leaks during the measurement. In this study, we developed a reliable system for measuring the respiratory rate utilizing a small wearable MOx sensor that is unaffected by body movements and not connected to the breathing circuit. Subsequently, we conducted experimental assessments to gauge the accuracy of the rate estimation achieved by the system. In order to avoid the effects of abnormal states on the estimation accuracy, we also evaluated the classification performance for distinguishing between normal and abnormal respiration using a one-class SVM-based approach. The developed system achieved 80% for both true positive and true negative rates. Our experimental findings reveal that the respiratory rate can be precisely determined without being influenced by body movements. Full article
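The normal-versus-abnormal screening step described above is a one-class classification problem. A small scikit-learn sketch on synthetic per-window features follows; the feature choice (breath rate and amplitude) and the distributions are invented for illustration and are not the study's sensor data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Toy features per breathing window: [estimated rate (breaths/min), signal amplitude].
normal = np.column_stack([rng.normal(16, 2, 200), rng.normal(1.0, 0.1, 200)])
abnormal = np.column_stack([rng.normal(35, 4, 20), rng.normal(0.4, 0.1, 20)])

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)

# +1 = looks like normal respiration, -1 = flagged as abnormal.
print("normal windows kept:", (model.predict(normal) == 1).mean())
print("abnormal windows flagged:", (model.predict(abnormal) == -1).mean())
```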
18 pages, 1873 KiB  
Article
Beyond Supervised: The Rise of Self-Supervised Learning in Autonomous Systems
by Hamed Taherdoost
Information 2024, 15(8), 491; https://doi.org/10.3390/info15080491 - 16 Aug 2024
Viewed by 678
Abstract
Supervised learning has been the cornerstone of many successful medical imaging applications. However, its reliance on large labeled datasets poses significant challenges, especially in the medical domain, where data annotation is time-consuming and expensive. In response, self-supervised learning (SSL) has emerged as a promising alternative, leveraging unlabeled data to learn meaningful representations without explicit supervision. This paper provides a detailed overview of supervised learning and its limitations in medical imaging, underscoring the need for more efficient and scalable approaches. The study emphasizes the importance of the area under the curve (AUC) as a key evaluation metric in assessing SSL performance. The AUC offers a comprehensive measure of model performance across different operating points, which is crucial in medical applications, where false positives and negatives have significant consequences. Evaluating SSL methods based on the AUC allows for robust comparisons and ensures that models generalize well to real-world scenarios. This paper reviews recent advances in SSL for medical imaging, demonstrating their potential to revolutionize the field by mitigating challenges associated with supervised learning. Key results show that SSL techniques, by leveraging unlabeled data and optimizing performance metrics like the AUC, can significantly improve the diagnostic accuracy, scalability, and efficiency in medical image analysis. The findings highlight SSL’s capability to reduce the dependency on labeled datasets and present a path forward for more scalable and effective medical imaging solutions. Full article
(This article belongs to the Special Issue Emerging Research on Neural Networks and Anomaly Detection)
23 pages, 1044 KiB  
Article
Optimized Early Prediction of Business Processes with Hyperdimensional Computing
by Fatemeh Asgarinejad, Anthony Thomas, Ryan Hildebrant, Zhenyu Zhang, Shangping Ren, Tajana Rosing and Baris Aksanli
Information 2024, 15(8), 490; https://doi.org/10.3390/info15080490 - 16 Aug 2024
Viewed by 426
Abstract
There is a growing interest in the early prediction of outcomes in ongoing business processes. Predictive process monitoring distills knowledge from the sequence of event data generated and stored during the execution of processes and trains models on this knowledge to predict outcomes of ongoing processes. However, most state-of-the-art methods require the training of complex and inefficient machine learning models and hyper-parameter optimization as well as numerous input data to achieve high performance. In this paper, we present a novel approach based on Hyperdimensional Computing (HDC) for predicting the outcome of ongoing processes before their completion. We highlight its simplicity, efficiency, and high performance while utilizing only a subset of the input data, which helps in achieving a lower memory demand and faster and more effective corrective measures. We evaluate our proposed method on four publicly available datasets with a total of 12 binary prediction tasks. Our proposed method achieves an average 6% higher area under the ROC curve (AUC) and up to a 14% higher F1-score, while yielding a 20× earlier prediction than state-of-the-art conventional machine learning- and neural network-based models. Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
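For intuition, hyperdimensional computing encodes symbols as very wide random vectors and builds composite representations with elementwise multiplication (binding) and addition (bundling); prefixes of a process are then classified by similarity to class prototypes. The toy sketch below assumes a 10,000-dimensional bipolar encoding and made-up event names, not the encoding used in the paper.

```python
import numpy as np

D = 10_000                                   # hypervector dimensionality
rng = np.random.default_rng(42)
EVENTS = ["submit", "review", "approve", "reject"]
item = {e: rng.choice([-1, 1], D) for e in EVENTS}    # one random hypervector per event
pos = [rng.choice([-1, 1], D) for _ in range(8)]      # one hypervector per sequence position

def encode(trace):
    """Bind each event to its position (elementwise product) and bundle (sum) the result."""
    return np.sign(sum(item[e] * pos[i] for i, e in enumerate(trace)))

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Class prototypes built from example prefixes; an unseen prefix is labeled by similarity.
proto = {"accepted": encode(["submit", "review", "approve"]),
         "rejected": encode(["submit", "review", "reject"])}
query = encode(["submit", "review", "approve"])
print(max(proto, key=lambda label: cosine(query, proto[label])))
```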
15 pages, 956 KiB  
Article
Enhancing Brain Tumour Multi-Classification Using Efficient-Net B0-Based Intelligent Diagnosis for Internet of Medical Things (IoMT) Applications
by Amna Iqbal, Muhammad Arfan Jaffar and Rashid Jahangir
Information 2024, 15(8), 489; https://doi.org/10.3390/info15080489 - 16 Aug 2024
Viewed by 422
Abstract
Brain tumour disease develops due to abnormal cell proliferation. The early identification of brain tumours is vital for their effective treatment. Most currently available examination methods are laborious, require extensive manual instructions, and produce subpar findings. The EfficientNet-B0 architecture was used to diagnose brain tumours using magnetic resonance imaging (MRI). The fine-tuned EfficientNet-B0 model was proposed for the Internet of Medical Things (IoMT) environment. The fine-tuned EfficientNet-B0 architecture was employed to classify four different classes of brain tumours from the MRI images. The fine-tuned model showed 99% accuracy in the detection of the four classes (glioma, no tumour, meningioma, and pituitary). The proposed model performed very well in the detection of the pituitary class with a precision of 0.95, recall of 0.98, and F1 score of 0.96. The proposed model also performed very well in the detection of the no-tumour class with values of 0.99, 0.90, and 0.94 for precision, recall, and the F1 score, respectively. The precision, recall, and F1 scores for the glioma and meningioma classes were also high. The proposed solution has several implications for enhancing clinical investigations of brain tumours. Full article
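A minimal sketch of fine-tuning an ImageNet-pretrained EfficientNet-B0 head for the four tumour classes with torchvision is shown below; the frozen backbone, optimizer settings, and dummy batch are placeholders, not the paper's training protocol.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4   # glioma, meningioma, pituitary, no tumour

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 MRI slices.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```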
12 pages, 1035 KiB  
Article
A Methodology to Distribute On-Chip Voltage Regulators to Improve the Security of Hardware Masking
by Soner Seçkiner and Selçuk Köse
Information 2024, 15(8), 488; https://doi.org/10.3390/info15080488 - 16 Aug 2024
Viewed by 438
Abstract
Hardware masking is used to protect against side-channel attacks by splitting sensitive information into different parts, called hardware masking shares. Ideally, a side-channel attack would only work if all these parts were completely independent. But in real-world VLSI implementations, things are not perfect. Information from a hardware masking share can leak to another, making it possible for side-channel attacks to succeed without needing data from every hardware masking share. The theoretically supposed independence of these shares often does not hold up in practice. The effectiveness of hardware masking is reduced because of the parasitic impedance that stems from power delivery networks or the internal structure of the integrated circuit. When the coupling effect and noise spread among the hardware masking shares powered by the same power delivery network, side-channel attacks can be carried out with fewer measurements. To address this, we propose a new method of distributing on-chip voltage regulators to improve hardware masking security. The benefits of distributed on-chip voltage regulators are evident. Placing the regulators close to the load minimizes power loss due to resistive losses in the power delivery network. Localized regulation allows for more efficient adjustments to the varying power demands of different chip sections, improving overall power efficiency. Additionally, distributed regulators can quickly respond to power demand changes, maintaining stable voltage levels for high-performance circuits, leading to improved control over noise. We introduce a new DLDO voltage regulator that uses random clocking and randomizing limit cycle oscillations to enhance security. Our simulations show that with these distributed DLDO regulators, the t-test value can be as low as 2.019, and typically, a circuit with a t-test value below 4.5 is considered secure. Full article
(This article belongs to the Special Issue Hardware Security and Trust)
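The quoted t-test value is the kind of figure produced by TVLA-style leakage assessment, where two sets of power traces are compared with Welch's t-test and |t| below roughly 4.5 is read as no detectable leakage. The sketch below runs that computation on synthetic traces; the trace model is an assumption, not the paper's DLDO simulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic per-sample power measurements for fixed-input and random-input trace sets.
fixed_traces = rng.normal(loc=1.00, scale=0.05, size=5000)
random_traces = rng.normal(loc=1.00, scale=0.05, size=5000)

# Welch's t-test (unequal variances); |t| below ~4.5 is commonly read as "no detectable leakage".
t_stat, _ = stats.ttest_ind(fixed_traces, random_traces, equal_var=False)
print("t =", round(abs(t_stat), 3), "->", "secure" if abs(t_stat) < 4.5 else "leaky")
```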
35 pages, 8757 KiB  
Review
From Information to Knowledge: A Role for Knowledge Networks in Decision Making and Action Selection
by Jagmeet S. Kanwal
Information 2024, 15(8), 487; https://doi.org/10.3390/info15080487 - 15 Aug 2024
Viewed by 464
Abstract
The brain receives information via sensory inputs through the peripheral nervous system and stores a small subset as memories within the central nervous system. Short-term, working memory is present in the hippocampus whereas long-term memories are distributed within neural networks throughout the brain. Elegant studies on the mechanisms for memory storage and the neuroeconomic formulation of human decision making have been recognized with Nobel Prizes in Physiology or Medicine and in Economics, respectively. There is a wide gap, however, in our understanding of how memories of disparate bits of information translate into “knowledge”, and the neural mechanisms by which knowledge is used to make decisions. I propose that the conceptualization of a “knowledge network” for the creation, storage and recall of knowledge is critical to start bridging this gap. Knowledge creation involves value-driven contextualization of memories through cross-validation via certainty-seeking behaviors, including rumination or reflection. Knowledge recall, like memory, may occur via oscillatory activity that dynamically links multiple networks. These networks may show correlated activity and interactivity despite their presence within widely separated regions of the nervous system, including the brainstem, spinal cord and gut. The hippocampal–amygdala complex together with the entorhinal and prefrontal cortices are likely components of multiple knowledge networks since they participate in the contextual recall of memories and action selection. Sleep and reflection processes and attentional mechanisms mediated by the habenula are expected to play a key role in knowledge creation and consolidation. Unlike a straightforward test of memory, determining the loci and mechanisms for the storage and recall of knowledge requires the implementation of a naturalistic decision-making paradigm. By formalizing a neuroscientific concept of knowledge networks, we can experimentally test their functionality by recording large-scale neural activity during decision making in awake, naturally behaving animals. These types of studies are difficult but important also for advancing knowledge-driven as opposed to big data-driven models of artificial intelligence. A knowledge network-driven understanding of brain function may have practical implications in other spheres, such as education and the treatment of mental disorders. Full article
16 pages, 1563 KiB  
Article
Assessment in the Age of Education 4.0: Unveiling Primitive and Hidden Parameters for Evaluation
by Anil Verma, Parampreet Kaur and Aman Singh
Information 2024, 15(8), 486; https://doi.org/10.3390/info15080486 - 15 Aug 2024
Viewed by 466
Abstract
This study delves into the nuanced aspects that influence the quality of education within the Education 4.0 framework. Education 4.0 epitomizes a contemporary educational paradigm leveraging IoT devices, sensors, and actuators to facilitate real-time and continuous assessment, thereby enhancing student evaluation methodologies. Within this context, the study scrutinizes the pivotal role of infrastructure, learning environment, and faculty, acknowledged as fundamental determinants of educational excellence. Identifying five discrete yet crucial hidden parameters, awareness, accessibility, participation, satisfaction, and academic loafing, this paper meticulously examines their ramifications within the Education 4.0 landscape. Employing a comparative analysis encompassing pre- and post-implementation scenarios, the research assesses the transformative impact of Education 4.0 on the educational sector while dissecting the influence of these hidden parameters across these temporal contexts. The findings underscore the substantial enhancements introduced by Education 4.0, including the provision of real-time and continuous assessment mechanisms, heightened accessibility to educational resources, and amplified student engagement levels. Notably, the study advocates for bolstering stakeholders’ accountability as a strategic measure to mitigate academic loafing within an ambient educational milieu. In essence, this paper offers invaluable insights into the intricate interplay between hidden parameters and educational quality, elucidating the pivotal role of Education 4.0 in catalyzing advancements within the education industry. Full article
25 pages, 4636 KiB  
Article
Application of Multi-Source Remote Sensing Data and Machine Learning for Surface Soil Moisture Mapping in Temperate Forests of Central Japan
by Kyaw Win, Tamotsu Sato and Satoshi Tsuyuki
Information 2024, 15(8), 485; https://doi.org/10.3390/info15080485 - 15 Aug 2024
Viewed by 1006
Abstract
Surface soil moisture (SSM) is a key parameter for land surface hydrological processes. In recent years, satellite remote sensing images have been widely used for SSM estimation, and many methods based on satellite-derived spectral indices have also been used to estimate the SSM content in various climatic conditions and geographic locations. However, achieving an accurate estimation of SSM content at a high spatial resolution remains a challenge. Therefore, improving the precision of SSM estimation through the synergies of multi-source remote sensing data has become imperative, particularly for informing forest management practices. In this study, the integration of multi-source remote sensing data with random forest and support vector machine models was conducted using Google Earth Engine in order to estimate the SSM content and develop SSM maps for temperate forests in central Japan. The synergy of Sentinel-2 and terrain factors, such as elevation, slope, aspect, slope steepness, and valley depth, with the random forest model provided the most suitable approach for SSM estimation, yielding the highest accuracy values (overall accuracy for testing = 91.80%, Kappa = 87.18%, r = 0.98) for the temperate forests of central Japan. This finding provides more valuable information for SSM mapping, which shows promise for precision forestry applications. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
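To illustrate the winning combination reported above, the sketch below trains a random forest on stand-in Sentinel-2 and terrain features and reports overall accuracy and Cohen's kappa; the synthetic features and the two-class soil-moisture label are assumptions replacing the Google Earth Engine workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
# Columns: Sentinel-2 NDVI-like index, elevation, slope, aspect, valley depth.
X = np.column_stack([rng.uniform(0, 1, n), rng.uniform(200, 1500, n),
                     rng.uniform(0, 45, n), rng.uniform(0, 360, n), rng.uniform(0, 50, n)])
y = (X[:, 0] + X[:, 4] / 50 > 1.0).astype(int)        # toy "wet" vs "dry" SSM class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("overall accuracy:", round(accuracy_score(y_te, pred), 3),
      "kappa:", round(cohen_kappa_score(y_te, pred), 3))
```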
48 pages, 894 KiB  
Review
Earlier Decision on Detection of Ransomware Identification: A Comprehensive Systematic Literature Review
by Latifa Albshaier, Seetah Almarri and M. M. Hafizur Rahman
Information 2024, 15(8), 484; https://doi.org/10.3390/info15080484 - 14 Aug 2024
Viewed by 991
Abstract
Cybersecurity is normally defined as protecting systems against all kinds of cyberattacks; however, due to the rapid and permanent expansion of technology and digital transformation, the threats are also increasing. One of those new threats is ransomware, a form of malware that extorts money by encrypting a victim’s files; the attacker withholds the decryption key and demands a ransom from the victim to restore access to the data. This systematic literature review (SLR) highlights recent papers published between 2020 and 2024. This paper examines existing research on early ransomware detection methods, focusing on the signs, frameworks, and techniques used to identify and detect ransomware before it causes harm. By analyzing a wide range of academic papers, industry reports, and case studies, this review categorizes and assesses the effectiveness of different detection methods, including those based on signatures, behavior patterns, and machine learning (ML). It also looks at new trends and innovative strategies in ransomware detection, offering a classification of detection techniques and pointing out the gaps in current research. The findings provide useful insights for cybersecurity professionals and researchers, helping guide future efforts to develop strong and proactive ransomware detection systems. This review emphasizes the need for ongoing improvements in detection technologies to keep up with the constantly changing ransomware threat landscape. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
15 pages, 5521 KiB  
Article
A Historical Handwritten French Manuscripts Text Detection Method in Full Pages
by Rui Sang, Shili Zhao, Yan Meng, Mingxian Zhang, Xuefei Li, Huijie Xia and Ran Zhao
Information 2024, 15(8), 483; https://doi.org/10.3390/info15080483 - 14 Aug 2024
Viewed by 388
Abstract
Historical handwritten manuscripts pose challenges to automated recognition techniques due to their unique handwriting styles and cultural backgrounds. In order to solve the problems of complex text word misdetection, omission, and insufficient detection of wide-pitch curved text, this study proposes a high-precision text detection method based on improved YOLOv8s. Firstly, the Swin Transformer is used to replace C2f at the end of the backbone network to solve the shortcomings of fine-grained information loss and insufficient learning features in text word detection. Secondly, the Dysample (Dynamic Upsampling Operator) method is used to retain more detailed features of the target and overcome the shortcomings of information loss in traditional upsampling to realize the text detection task for dense targets. Then, the LSK (Large Selective Kernel) module is added to the detection head to dynamically adjust the feature extraction receptive field, which solves the cases of extreme aspect ratio words, unfocused small text, and complex shape text in text detection. Finally, in order to overcome the CIOU (Complete Intersection Over Union) loss in target box regression with unclear aspect ratio, insensitive to size change, and insufficient correlation between target coordinates, Gaussian Wasserstein Distance (GWD) is introduced to modify the regression loss to measure the similarity between the two bounding boxes in order to obtain high-quality bounding boxes. Compared with the State-of-the-Art methods, the proposed method achieves optimal performance in text detection, with the precision and mAP@0.5 reaching 86.3% and 82.4%, which are 8.1% and 6.7% higher than the original method, respectively. The advancement of each module is verified by ablation experiments. The experimental results show that the method proposed in this study can effectively realize complex text detection and provide a powerful technical means for historical manuscript reproduction. Full article
14 pages, 1356 KiB  
Article
Combined-Step-Size Affine Projection Andrew’s Sine Estimate for Robust Adaptive Filtering
by Yuhao Wan and Wenyuan Wang
Information 2024, 15(8), 482; https://doi.org/10.3390/info15080482 - 14 Aug 2024
Viewed by 355
Abstract
Recently, the affine-projection-like M-estimate (APLM) algorithm has gained popularity for its ability to handle impulsive background disturbances effectively. Nevertheless, the APLM algorithm's performance is degraded by steady-state misalignment. To address this issue while maintaining equivalent computational complexity, a robust cost function based on the Andrew's sine estimator (ASE) is introduced, and the corresponding affine-projection Andrew's sine estimator (APASE) algorithm is proposed in this paper. To further enhance the tracking capability and accelerate the convergence rate, we develop the combined-step-size APASE (CSS-APASE) algorithm, which combines two different step sizes. A series of simulation studies in system identification and echo cancellation scenarios confirms that the proposed algorithms attain lower misalignment than other currently available algorithms under impulsive noise. We also establish a bound on the learning rate to ensure the stability of the proposed algorithms. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning, 2nd Edition)
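For readers unfamiliar with the robust cost underlying the ASE, the sketch below shows the classic Andrews' sine influence function and how it might be used to down-weight large (likely impulsive) errors in an adaptive-filter update. It is a generic illustration of the M-estimation idea, not the APASE or CSS-APASE update derived in the paper; the parameter value and function names are assumptions.

```python
import numpy as np

def andrews_sine_psi(e: np.ndarray, xi: float = 1.0) -> np.ndarray:
    """Andrews' sine influence function: sin(e/xi) inside |e| <= xi*pi, zero outside."""
    psi = np.sin(e / xi)
    psi[np.abs(e) > xi * np.pi] = 0.0
    return psi

def robust_lms_step(w: np.ndarray, x: np.ndarray, d: float, mu: float = 0.01, xi: float = 1.0):
    """One LMS-style update in which the raw error is replaced by its Andrews' sine score,
    so impulsive errors (|e| > xi*pi) contribute nothing to the weight update."""
    e = d - np.dot(w, x)                          # a priori error
    score = andrews_sine_psi(np.array([e]), xi)[0]
    return w + mu * score * x, e
```

In a combined-step-size scheme such as the one described in the abstract, two updates of this kind with a large and a small step size would be mixed adaptively to balance convergence speed against steady-state accuracy.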
10 pages, 594 KiB  
Article
Performance Management Decision-Making Model: Case Study on Foreign Language Learning Curriculums
by Kuen-Suan Chen, Chun-Min Yu, Chun-Hung Yu and Yen-Po Chen
Information 2024, 15(8), 481; https://doi.org/10.3390/info15080481 - 14 Aug 2024
Viewed by 489
Abstract
Foreign language learning courses can be regarded as a service operation system, and a complete performance evaluation model for such courses can help improve the effectiveness of student learning. The performance evaluation matrix (PEM) is an excellent tool for evaluation and for resource management decision making: the administrator uses satisfaction and importance indices to establish evaluation coordinate points based on rules of statistical testing. The coordinate points of all service items are plotted in the PEM to grasp the full picture and to decide what to improve, or whether to transfer resources, so as to raise overall satisfaction with the entire service. However, plotting all of the coordinate points in the PEM can only be done programmatically, which limits its use in practice. Therefore, instead of the above evaluation rules, this article uses the confidence intervals of the decision-making indicators to form a validity evaluation table that determines which teaching service items should be improved, maintained, or have resources transferred away, in order to improve satisfaction with the entire service system. This form of performance evaluation can be completed with any commonly used word-processing software, so it is easy to apply and promote. Finally, an applied example illustrates the proposed method. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis II)
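To illustrate the kind of interval-based rule the abstract describes, the sketch below computes a normal-approximation confidence interval for each item's mean satisfaction score and compares it to a target value: items whose interval lies entirely below the target are candidates for improvement, items whose interval lies entirely above it are candidates for resource transfer, and the rest are maintained. The target value, significance level, and classification labels are illustrative assumptions, not the specific indices or validity table defined in the paper.

```python
import math
from statistics import mean, stdev

Z_975 = 1.959964  # two-sided 95% normal quantile (assumed significance level)

def classify_item(scores, target: float = 4.0) -> str:
    """Classify one service item from its individual satisfaction scores (e.g., a 1-5 scale)."""
    n = len(scores)
    m, s = mean(scores), stdev(scores)
    half_width = Z_975 * s / math.sqrt(n)
    lower, upper = m - half_width, m + half_width
    if upper < target:
        return "improve"              # confidently below the target
    if lower > target:
        return "transfer resources"   # confidently above the target
    return "maintain"

items = {
    "listening labs": [3.1, 3.4, 2.9, 3.6, 3.2, 3.0],
    "conversation practice": [4.5, 4.7, 4.4, 4.6, 4.8, 4.5],
}
for name, scores in items.items():
    print(name, "->", classify_item(scores))
```

Because such a table needs only per-item means, standard deviations, and interval bounds, it can indeed be filled in with ordinary word-processing or spreadsheet tools rather than custom plotting code.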
19 pages, 1196 KiB  
Article
AI-Driven QoS-Aware Scheduling for Serverless Video Analytics at the Edge
by Dimitrios Giagkos, Achilleas Tzenetopoulos, Dimosthenis Masouros, Sotirios Xydis, Francky Catthoor and Dimitrios Soudris
Information 2024, 15(8), 480; https://doi.org/10.3390/info15080480 - 13 Aug 2024
Viewed by 632
Abstract
Today, video analytics are becoming extremely popular due to the increasing need to extract valuable information from videos available in public sharing services through camera-driven streams in IoT environments. To avoid data communication overheads, a common practice is to place computation close to the data source rather than offloading to the Cloud. Typically, video analytics are organized as separate tasks, each with different resource requirements (e.g., computational- vs. memory-intensive tasks). The serverless computing paradigm forms a promising approach for mapping such applications, enabling fine-grained deployment and management in a per-function and per-device manner. However, there is a tradeoff between QoS adherence and resource efficiency. Performance variability due to function co-location and prevalent resource heterogeneity make maintaining QoS challenging. At the same time, resource efficiency is essential to avoid waste, such as unnecessary power consumption and CPU reservation. In this paper, we present Darly, a QoS-, interference- and heterogeneity-aware Deep Reinforcement Learning-based Scheduler for serverless video analytics deployments on top of distributed Edge nodes. The proposed framework incorporates a DRL agent that exploits performance counters to identify the levels of interference and the degree of heterogeneity in the underlying Edge infrastructure. It combines this information along with user-defined QoS requirements to improve resource allocations by deciding the placement, migration, or horizontal scaling of serverless functions. We evaluate Darly on a typical Edge cluster with a real-world workflow composed of commonly used serverless video analytics functions and show that our approach achieves efficient scheduling of the deployed functions by satisfying multiple QoS requirements for up to 91.6% (Profile-based) of the total requests under dynamic conditions. Full article
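As a rough illustration of the scheduling loop described above, the sketch below shows how a DRL-style agent could map a state built from performance counters and QoS slack to one of the three action types the abstract mentions (placement, migration, horizontal scaling). The state features, action set, and epsilon-greedy policy are illustrative assumptions; they are not Darly's actual architecture or reward design.

```python
import random
from dataclasses import dataclass

ACTIONS = ["keep_placement", "migrate_to_other_node", "scale_out_replica"]

@dataclass
class State:
    ipc: float            # instructions per cycle, read from performance counters
    llc_misses: float     # normalized last-level-cache miss rate (interference proxy)
    node_class: int       # coarse label for node heterogeneity (e.g., 0 = small, 1 = big)
    qos_slack: float      # (latency target - observed latency) / latency target

class EpsilonGreedyAgent:
    """Toy value-table agent choosing among placement, migration, and scaling actions."""
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.q = {}  # (discretized state, action) -> estimated value

    def _key(self, s: State):
        return (round(s.ipc, 1), round(s.llc_misses, 1), s.node_class, s.qos_slack > 0)

    def act(self, s: State) -> str:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        key = self._key(s)
        return max(ACTIONS, key=lambda a: self.q.get((key, a), 0.0))

    def learn(self, s: State, action: str, reward: float, lr: float = 0.1):
        key = (self._key(s), action)
        self.q[key] = self.q.get(key, 0.0) + lr * (reward - self.q.get(key, 0.0))
```

A reward that penalizes QoS violations and wasted reservations would push such an agent toward the behavior the abstract reports: meeting user-defined QoS targets while keeping resource usage low.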