Search Results (4,501)

Search Parameters:
Keywords = LSTM network

16 pages, 1587 KiB  
Article
Research on Predictive Analysis Method of Building Energy Consumption Based on TCN-BiGru-Attention
by Sijia Fu, Rui Zhu and Feiyang Yu
Appl. Sci. 2024, 14(20), 9373; https://doi.org/10.3390/app14209373 - 14 Oct 2024
Abstract
Building energy consumption prediction has always played a significant role in assessing building energy efficiency, building commissioning, and detecting and diagnosing building system faults. With the progress of society and economic development, building energy consumption is growing rapidly. Therefore, accurate and effective building energy consumption prediction is the basis of energy conservation. Although there are currently a large number of energy consumption research methods, each method has different applicability and advantages and disadvantages. This study proposes a Time Convolution Network model based on an attention mechanism, which combines the ability of the Time Convolution Network model to capture ultra-long time series information with the ability of the BiGRU model to integrate contextual information, improve model parallelism, and reduce the risk of overfitting. In order to tune the hyperparameters in the structure of this prediction model, such as the learning rate, the size of the convolutional kernel, and the number of recurrent units, this study chooses to use the Golden Jackal Optimization Algorithm for optimization. The study shows that this optimized model has better accuracy than models such as LSTM, SVM, and CNN. Full article
(This article belongs to the Special Issue Air Quality Prediction Based on Machine Learning Algorithms II)
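For background on the Temporal Convolutional Network component this abstract builds on: a TCN stacks causal, dilated 1-D convolutions so that the output at step t depends only on inputs at t, t-d, t-2d, and so on. A minimal NumPy sketch of that building block follows; the function name and toy weights are invented for illustration and are not the authors' implementation.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution: the output at step t sees only
    inputs at t, t-d, t-2d, ... (the sequence is left-padded with zeros,
    so no future information leaks into the output)."""
    k = len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(6, dtype=float)                 # [0, 1, 2, 3, 4, 5]
y = causal_conv1d(x, w=[1.0, 1.0], dilation=2)  # adds each value to the one 2 steps back
```

Stacking such layers with growing dilation (1, 2, 4, ...) is what lets a TCN cover very long input histories with few layers.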
18 pages, 954 KiB  
Article
A New Framework for Integrating DNN-Based Geographic Simulation Models within GISystems
by Peng Zhang, Wenzhou Wu, Cunjin Xue, Shaochen Shi and Fenzhen Su
ISPRS Int. J. Geo-Inf. 2024, 13(10), 361; https://doi.org/10.3390/ijgi13100361 - 14 Oct 2024
Abstract
As a crucial spatial decision support tool, Geographic Information Systems (GISystems) are widely used in fields such as digital watersheds, resource management, environmental assessment, and regional governance, with their core strength lying in the integration of geographic simulation models from various disciplines, enabling the analysis of complex geographical phenomena and the resolution of comprehensive spatial problems. With the rapid advancement of artificial intelligence, deep neural network-based geographic simulation models (DNN-GSMs) have increasingly replaced traditional models, offering significant advantages in simulation accuracy and inference speed, and have become indispensable components in GISystems. However, existing integration methods do not adequately account for the specific characteristics of DNN-GSMs, such as their formats and input/output data types. To address this gap, we propose a novel tight integration framework for DNN-GSMs, comprising four key interfaces: the data representation interface, the model representation interface, the data conversion interface, and the model application interface. These interfaces are designed to describe spatial data, the simulation model, the adaptation between spatial data and the model, and the model’s application process within the GISystem, respectively. To validate the proposed method, we construct a spatial morphology simulation model based on CNN-LSTM, integrate it into a GISystem using the proposed interfaces, and conduct a series of predictive experiments on island morphology evolution. The results demonstrate the effectiveness of the proposed integration framework for DNN-GSMs. Full article
27 pages, 7320 KiB  
Article
A Real-Time and Online Dynamic Reconfiguration against Cyber-Attacks to Enhance Security and Cost-Efficiency in Smart Power Microgrids Using Deep Learning
by Elnaz Yaghoubi, Elaheh Yaghoubi, Ziyodulla Yusupov and Mohammad Reza Maghami
Technologies 2024, 12(10), 197; https://doi.org/10.3390/technologies12100197 - 14 Oct 2024
Abstract
Ensuring the secure and cost-effective operation of smart power microgrids has become a significant concern for managers and operators due to the escalating damage caused by natural phenomena and cyber-attacks. This paper presents a novel framework focused on the dynamic reconfiguration of multi-microgrids to enhance system’s security index, including stability, reliability, and operation costs. The framework incorporates distributed generation (DG) to address cyber-attacks that can lead to line outages or generation failures within the network. Additionally, this work considers the uncertainties and accessibility factors of power networks through a modified point prediction method, which was previously overlooked. To achieve the secure and cost-effective operation of smart power multi-microgrids, an optimization framework is developed as a multi-objective problem, where the states of switches and DG serve as independent parameters, while the dependent parameters consist of the operation cost and techno-security indexes. The multi-objective problem employs deep learning (DL) techniques, specifically based on long short-term memory (LSTM) and prediction intervals, to effectively detect false data injection attacks (FDIAs) on advanced metering infrastructures (AMIs). By incorporating a modified point prediction method, LSTM-based deep learning, and consideration of technical indexes and FDIA cyber-attacks, this framework aims to advance the security and reliability of smart power multi-microgrids. The effectiveness of this method was validated on a network of 118 buses. The results of the proposed approach demonstrate remarkable improvements over PSO, MOGA, ICA, and HHO algorithms in both technical and economic indicators. Full article
21 pages, 4510 KiB  
Article
Pedestrian Trajectory Prediction in Crowded Environments Using Social Attention Graph Neural Networks
by Mengya Zong, Yuchen Chang, Yutian Dang and Kaiping Wang
Appl. Sci. 2024, 14(20), 9349; https://doi.org/10.3390/app14209349 - 14 Oct 2024
Abstract
Trajectory prediction is a key component in the development of applications such as mixed urban traffic management and public safety. Traditional models have struggled with the complexity of modeling dynamic crowd interactions, the intricacies of spatiotemporal dependencies, and environmental constraints. Addressing these challenges, this paper introduces the innovative Social Attention Graph Neural Network (SA-GAT) framework. Utilizing Long Short-Term Memory (LSTM) networks, SA-GAT encodes pedestrian trajectory data to extract temporal correlations, while Graph Attention Networks (GAT) are employed to precisely capture the subtle interactions among pedestrians. The SA-GAT framework boosts its predictive accuracy with two key innovations. First, it features a Scene Potential Module that utilizes a Scene Tensor to dynamically capture the interplay between crowds and their environment. Second, it incorporates a Transition Intention Module with a Transition Tensor, which interprets latent transfer probabilities from trajectory data to reveal pedestrians’ implicit intentions at specific locations. Based on AnyLogic modeling of the metro station on Line 10 of Chengdu Shuangliu Airport, China, numerical studies reveal that the SA-GAT model achieves a substantial reduction in ADE and FDE metrics by 34.22% and 38.04% compared to baseline models. Full article
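The Graph Attention Network component named above can be pictured as each pedestrian (node) aggregating its neighbours' features, weighted by learned attention scores. The following single-head NumPy toy illustrates the mechanism only; the names, weights, and LeakyReLU slope are assumptions, not the SA-GAT implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gat_layer(H, A, W, a):
    """Single-head graph attention: project node features with W, score
    each edge with attention vector a, and aggregate neighbour features
    weighted by softmax-normalised scores. A is the adjacency matrix
    (assumed to include self-loops)."""
    Z = H @ W                       # (N, F') projected node features
    out = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        nbrs = np.flatnonzero(A[i])
        # attention logit for neighbour j: LeakyReLU(a . [z_i || z_j])
        logits = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
        alpha = softmax(logits)     # attention coefficients sum to 1
        out[i] = alpha @ Z[nbrs]    # convex combination of neighbours
    return out

# With zero attention weights every neighbour gets equal weight,
# so each output row is the mean of its neighbours' features.
out = gat_layer(np.eye(3), np.ones((3, 3)), np.eye(3), np.zeros(6))
```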
24 pages, 13862 KiB  
Article
Depth Video-Based Secondary Action Recognition in Vehicles via Convolutional Neural Network and Bidirectional Long Short-Term Memory with Spatial Enhanced Attention Mechanism
by Weirong Shao, Mondher Bouazizi and Ohtuski Tomoaki
Sensors 2024, 24(20), 6604; https://doi.org/10.3390/s24206604 - 13 Oct 2024
Abstract
Secondary actions in vehicles are activities that drivers engage in while driving that are not directly related to the primary task of operating the vehicle. Secondary Action Recognition (SAR) in drivers is vital for enhancing road safety and minimizing accidents related to distracted driving. It also plays an important part in modern car driving systems such as Advanced Driving Assistance Systems (ADASs), as it helps identify distractions and predict the driver’s intent. Traditional methods of action recognition in vehicles mostly rely on RGB videos, which can be significantly impacted by external conditions such as low light levels. In this research, we introduce a novel method for SAR. Our approach utilizes depth-video data obtained from a depth sensor located in a vehicle. Our methodology leverages the Convolutional Neural Network (CNN), which is enhanced by the Spatial Enhanced Attention Mechanism (SEAM) and combined with Bidirectional Long Short-Term Memory (Bi-LSTM) networks. This method significantly enhances action recognition ability in depth videos by improving both the spatial and temporal aspects. We conduct experiments using K-fold cross validation, and the experimental results show that on the public benchmark dataset Drive&Act, our proposed method shows significant improvement in SAR compared to the state-of-the-art methods, reaching an accuracy of about 84% in SAR in depth videos. Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)
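Since Bi-LSTM layers like the one used above are just an LSTM run forward and backward over the sequence, the core computation is a single LSTM step. A minimal NumPy sketch of one step, with invented names and gate ordering, purely to show the gating arithmetic:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias, gates stacked in [input, forget, cell,
    output] order (an assumed convention)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0 * H:1 * H])    # input gate
    f = sigmoid(z[1 * H:2 * H])    # forget gate
    g = np.tanh(z[2 * H:3 * H])    # candidate cell state
    o = sigmoid(z[3 * H:4 * H])    # output gate
    c_new = f * c + i * g          # forget old memory, write new
    h_new = o * np.tanh(c_new)     # expose gated memory as output
    return h_new, c_new

# With all-zero weights each gate is sigmoid(0) = 0.5, so the cell
# state simply halves and the hidden state is 0.5 * tanh(c_new).
h1, c1 = lstm_step(np.zeros(3), np.zeros(2), np.ones(2),
                   np.zeros((8, 3)), np.zeros((8, 2)), np.zeros(8))
```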
29 pages, 6269 KiB  
Article
Malware Detection Based on API Call Sequence Analysis: A Gated Recurrent Unit–Generative Adversarial Network Model Approach
by Nsikak Owoh, John Adejoh, Salaheddin Hosseinzadeh, Moses Ashawa, Jude Osamor and Ayyaz Qureshi
Future Internet 2024, 16(10), 369; https://doi.org/10.3390/fi16100369 - 13 Oct 2024
Abstract
Malware remains a major threat to computer systems, with a vast number of new samples being identified and documented regularly. Windows systems are particularly vulnerable to malicious programs like viruses, worms, and trojans. Dynamic analysis, which involves observing malware behavior during execution in a controlled environment, has emerged as a powerful technique for detection. This approach often focuses on analyzing Application Programming Interface (API) calls, which represent the interactions between the malware and the operating system. Recent advances in deep learning have shown promise in improving malware detection accuracy using API call sequence data. However, the potential of Generative Adversarial Networks (GANs) for this purpose remains largely unexplored. This paper proposes a novel hybrid deep learning model combining Gated Recurrent Units (GRUs) and GANs to enhance malware detection based on API call sequences from Windows portable executable files. We evaluate our GRU–GAN model against other approaches like Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Gated Recurrent Unit (BiGRU) on multiple datasets. Results demonstrated the superior performance of our hybrid model, achieving 98.9% accuracy on the most challenging dataset. It outperformed existing models in resource utilization, with faster training and testing times and low memory usage. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
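For readers comparing the GRU with the LSTMs elsewhere in these results: a GRU has only two gates and no separate cell state. A NumPy sketch of one step (names and weight layout are assumptions, not the paper's model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z interpolates between the old
    state h and a candidate state built from the reset-gated h."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))    # candidate state
    return (1.0 - z) * h + z * h_cand

# All-zero weights: z = 0.5 and the candidate is 0, so the
# state is halved at every step.
h1 = gru_step(np.zeros(3), np.ones(2),
              np.zeros((2, 3)), np.zeros((2, 2)),
              np.zeros((2, 3)), np.zeros((2, 2)),
              np.zeros((2, 3)), np.zeros((2, 2)))
```

The fewer parameters per unit are one reason GRU-based models often train faster than LSTM equivalents, consistent with the resource-utilization results reported above.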
28 pages, 7586 KiB  
Article
A Comprehensive Hybrid Deep Learning Approach for Accurate Status Predicting of Hydropower Units
by Liyong Ma, Siqi Chen, Dali Wei, Yanshuo Zhang and Yinuo Guo
Appl. Sci. 2024, 14(20), 9323; https://doi.org/10.3390/app14209323 - 13 Oct 2024
Abstract
Hydropower units are integral to sustainable energy production, and their operational reliability hinges on accurate status prediction. This paper introduces an innovative hybrid deep learning model that synergistically integrates a Temporal Convolutional Network (TCN), a Residual Short-Term LSTM (REST-LSTM) network, a Gated Recurrent Unit (GRU) network, and the tuna swarm optimization (TSO) algorithm. The model was meticulously designed to capture and utilize temporal features inherent in time series data, thereby enhancing predictive performance. Specifically, the TCN effectively extracts critical temporal features, while the REST-LSTM, with its residual connections, improves the retention of short-term memory in sequence data. The parallel incorporation of GRU further refines temporal dynamics, ensuring comprehensive feature capture. The TSO algorithm was employed to optimize the model’s parameters, leading to superior performance. The model’s efficacy was empirically validated using three datasets—unit flow rate, guide vane opening, and maximum guide vane water temperature—sourced from the Huadian Electric Power Research Institute. The experimental results demonstrate that the proposed model significantly reduces both the maximum and average prediction errors, while also offering substantial improvements in forecasting accuracy compared with the existing methodologies. This research presents a robust framework for hydropower unit operation prediction, advancing the application of deep learning in the hydropower sector. Full article
23 pages, 7794 KiB  
Article
Efficient Method for Photovoltaic Power Generation Forecasting Based on State Space Modeling and BiTCN
by Guowei Dai, Shuai Luo, Hu Chen and Yulong Ji
Sensors 2024, 24(20), 6590; https://doi.org/10.3390/s24206590 - 13 Oct 2024
Abstract
As global carbon reduction initiatives progress and the new energy sector rapidly develops, photovoltaic (PV) power generation is playing an increasingly significant role in renewable energy. Accurate PV output forecasting, influenced by meteorological factors, is essential for efficient energy management. This paper presents an optimal hybrid forecasting strategy, integrating bidirectional temporal convolutional networks (BiTCN), dynamic convolution (DC), bidirectional long short-term memory networks (BiLSTM), and a novel mixed-state space model (Mixed-SSM). The mixed-SSM combines the state space model (SSM), multilayer perceptron (MLP), and multi-head self-attention mechanism (MHSA) to capture complementary temporal, nonlinear, and long-term features. Pearson and Spearman correlation analyses are used to select features strongly correlated with PV output, improving the prediction correlation coefficient (R2) by at least 0.87%. The K-Means++ algorithm further enhances input data features, achieving a maximum R2 of 86.9% and a positive R2 gain of 6.62%. Compared with BiTCN variants such as BiTCN-BiGRU, BiTCN-transformer, and BiTCN-LSTM, the proposed method delivers a mean absolute error (MAE) of 1.1%, root mean squared error (RMSE) of 1.2%, and an R2 of 89.1%. These results demonstrate the model’s effectiveness in forecasting PV power and supporting low-carbon, safe grid operation. Full article
(This article belongs to the Section Intelligent Sensors)
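The Pearson correlation screening used above for feature selection is a standard computation: covariance normalised by both standard deviations. A minimal NumPy version (illustrative only; the abstract does not give the authors' code):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient: covariance of x and y divided
    by the product of their standard deviations; +1 or -1 indicates a
    perfect linear relationship, 0 no linear relationship."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))
```

Features whose |r| against PV output exceeds a chosen threshold would be kept as model inputs; Spearman correlation applies the same formula to the ranks of the data, capturing monotonic rather than strictly linear relationships.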
20 pages, 10975 KiB  
Article
Numerical Weather Prediction of Sea Surface Temperature in South China Sea Using Attention-Based Context Fusion Network
by Hailun He, Benyun Shi, Yuting Zhu, Liu Feng, Conghui Ge, Qi Tan, Yue Peng, Yang Liu, Zheng Ling and Shuang Li
Remote Sens. 2024, 16(20), 3793; https://doi.org/10.3390/rs16203793 - 12 Oct 2024
Abstract
Numerical weather prediction of sea surface temperature (SST) is crucial for regional operational forecasts. Deep learning offers an alternative approach to traditional numerical general circulation models for numerical weather prediction. In our previous work, we developed a sophisticated deep learning model known as the Attention-based Context Fusion Network (ACFN). This model integrates an attention mechanism with a convolutional neural network framework. In this study, we applied the ACFN model to the South China Sea to evaluate its performance in predicting SST. The results indicate that for a 1-day lead time, the ACFN model achieves a Mean Absolute Error of 0.215 °C and a coefficient of determination (R2) of 0.972. In addition, in situ buoy data were utilized to validate the forecast results. The Mean Absolute Error for forecasts using these data increased to 0.500 °C for a 1-day lead time, with a corresponding R2 of 0.590. Comparative analyses show that the ACFN model surpasses traditional models such as ConvLSTM and PredRNN in terms of accuracy and reliability. Full article
27 pages, 920 KiB  
Article
AI-Generated Spam Review Detection Framework with Deep Learning Algorithms and Natural Language Processing
by Mudasir Ahmad Wani, Mohammed ElAffendi and Kashish Ara Shakil
Computers 2024, 13(10), 264; https://doi.org/10.3390/computers13100264 - 12 Oct 2024
Abstract
Spam reviews pose a significant challenge to the integrity of online platforms, misleading consumers and undermining the credibility of genuine feedback. This paper introduces an innovative AI-generated spam review detection framework that leverages Deep Learning algorithms and Natural Language Processing (NLP) techniques to identify and mitigate spam reviews effectively. Our framework utilizes multiple Deep Learning models, including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Gated Recurrent Unit (GRU), and Bidirectional LSTM (BiLSTM), to capture intricate patterns in textual data. The system processes and analyzes large volumes of review content to detect deceptive patterns by utilizing advanced NLP and text embedding techniques such as One-Hot Encoding, Word2Vec, and Term Frequency-Inverse Document Frequency (TF-IDF). By combining three embedding techniques with four Deep Learning algorithms, a total of twelve exhaustive experiments were conducted to detect AI-generated spam reviews. The experimental results demonstrate that our approach outperforms the traditional machine learning models, offering a robust solution for ensuring the authenticity of online reviews. Among the models evaluated, those employing Word2Vec embeddings, particularly the BiLSTM_Word2Vec model, exhibited the strongest performance. The BiLSTM model with Word2Vec achieved the highest performance, with an exceptional accuracy of 98.46%, a precision of 0.98, a recall of 0.97, and an F1-score of 0.98, reflecting a near-perfect balance between precision and recall. Its high F2-score (0.9810) and F0.5-score (0.9857) further highlight its effectiveness in accurately detecting AI-generated spam while minimizing false positives, making it the most reliable option for this task. Similarly, the Word2Vec-based LSTM model also performed exceptionally well, with an accuracy of 97.58%, a precision of 0.97, a recall of 0.96, and an F1-score of 0.97. 
The CNN model with Word2Vec similarly delivered strong results, achieving an accuracy of 97.61%, a precision of 0.97, a recall of 0.96, and an F1-score of 0.97. This study is unique in its focus on detecting spam reviews specifically generated by AI-based tools rather than solely detecting spam reviews or AI-generated text. This research contributes to the field of spam detection by offering a scalable, efficient, and accurate framework that can be integrated into various online platforms, enhancing user trust and the decision-making processes. Full article
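The F2 and F0.5 scores reported above are instances of the general F-beta score, which weights recall more heavily for beta > 1 and precision more heavily for beta < 1. A small self-contained sketch (note: plugging in the rounded precision/recall above will not exactly reproduce the paper's F2/F0.5 values, which come from unrounded figures):

```python
def f_beta(precision, recall, beta):
    """F-beta score: weighted harmonic mean of precision and recall.
    beta = 1 gives the familiar F1; beta = 2 emphasises recall;
    beta = 0.5 emphasises precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Rounded values from the BiLSTM_Word2Vec results above.
p, r = 0.98, 0.97
f1 = f_beta(p, r, 1.0)
```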
18 pages, 3339 KiB  
Article
Prediction of Rock Bursts Based on Microseismic Energy Change: Application of Bayesian Optimization–Long Short-Term Memory Combined Model
by Xing Fu, Shiwei Chen and Tuo Zhang
Appl. Sci. 2024, 14(20), 9277; https://doi.org/10.3390/app14209277 - 11 Oct 2024
Abstract
The prediction of rock bursts is of paramount importance in ensuring the safety of coal mine production. In order to enhance the precision of rock burst prediction, this paper utilizes a working face of the Gengcun Coal Mine as a case study. The paper employs a three-year microseismic monitoring data set from the working face and employs a sensitivity analysis to identify three monitoring indicators with a higher correlation with rock bursts: daily total energy, daily maximum energy, and daily frequency. Three subsets are created from the 10-day monitoring data: daily frequency, daily maximum energy, and daily total energy. The impact risk score of the next day is assessed as the sample label by the expert assessment system. Sample input and sample label define the data set. The long short-term memory (LSTM) neural network is employed to extract the features of time series. The Bayesian optimization algorithm is introduced to optimize the model, and the Bayesian optimization–long short-term memory (BO-LSTM) combination model is established. The prediction effect of the BO-LSTM model is compared with that of the gated recurrent unit (GRU) and the convolutional neural network (1DCNN). The results demonstrate that the BO-LSTM combined model has a practical application value because the four evaluation indexes of the model are mean absolute error (MAE), mean absolute percentage error (MAPE), variance accounted for (VAF), and mean squared error (MSE) of 0.026272, 0.226405, 0.870296, and 0.001102, respectively. These values are better than those of the other two single models. The rock explosion prediction model can make use of the research findings as a guide. Full article
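The four evaluation indexes quoted above (MAE, MAPE, VAF, MSE) have standard definitions; a compact NumPy sketch under the usual conventions (VAF as a 0-to-1 fraction here, though it is often reported as a percentage):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MAPE, VAF, MSE. VAF (variance accounted for) is
    1 - Var(residual) / Var(y_true); a constant bias therefore
    still yields VAF = 1 even though MAE and MSE are nonzero."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.abs(err).mean()            # mean absolute error
    mape = np.abs(err / y_true).mean()  # mean absolute percentage error
    vaf = 1.0 - err.var() / y_true.var()
    mse = (err ** 2).mean()             # mean squared error
    return mae, mape, vaf, mse

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [2.0, 3.0, 4.0, 5.0]   # constant +1 bias
mae, mape, vaf, mse = regression_metrics(y_true, y_pred)
```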
21 pages, 9299 KiB  
Article
Implementing PSO-LSTM-GRU Hybrid Neural Networks for Enhanced Control and Energy Efficiency of Excavator Cylinder Displacement
by Van-Hien Nguyen, Tri Cuong Do and Kyoung-Kwan Ahn
Mathematics 2024, 12(20), 3185; https://doi.org/10.3390/math12203185 - 11 Oct 2024
Abstract
In recent years, increasing attention has been given to reducing energy consumption in hydraulic excavators, resulting in extensive research in this field. One promising solution has been the integration of hydrostatic transmission (HST) and hydraulic pump/motor (HPM) configurations in parallel systems. However, these systems face challenges such as noise, throttling losses, and leakage, which can negatively impact both tracking accuracy and energy efficiency. To address these issues, this paper introduces an intelligent real-time prediction framework for system positioning, incorporating particle swarm optimization (PSO), long short-term memory (LSTM), a gated recurrent unit (GRU), and proportional–integral–derivative (PID) control. The process begins by analyzing real-time system data using Pearson correlation to identify hyperparameters with medium to strong correlations to the positioning parameters. These selected hyperparameters are then used as inputs for forecasting models. Independent LSTM and GRU models are subsequently developed to predict the system’s position, with PSO optimizing four key hyperparameters of these models. In the final stage, the PSO-optimized LSTM-GRU models are employed to perform real-time intelligent predictions of motion trajectories within the system. Simulation and experimental results show that the model achieves a prediction deviation of less than 3 mm, ensuring precise real-time predictions and providing reliable data for system operators. Compared to traditional PID and LSTM-GRU-PID controllers, the proposed controller demonstrated superior tracking accuracy while also reducing energy consumption, achieving energy savings of up to 10.89% and 2.82% in experimental tests, respectively. Full article
(This article belongs to the Special Issue Multi-objective Optimization and Applications)
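The PID control baseline referenced above follows the textbook discrete form, u = Kp*e + Ki*integral(e) + Kd*de/dt. A minimal sketch (class and gain values are invented for illustration; the paper's controller parameters are not given in the abstract):

```python
class PID:
    """Discrete PID controller with rectangular integration and a
    backward-difference derivative term."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Pure proportional controller: output is Kp times the error.
u = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.1).step(setpoint=1.0, measured=0.0)
```

The hybrid scheme in the paper feeds PSO-tuned LSTM-GRU position predictions into such a loop rather than relying on the raw measurement alone.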
22 pages, 3942 KiB  
Article
Countering Social Media Cybercrime Using Deep Learning: Instagram Fake Accounts Detection
by Najla Alharbi, Bashayer Alkalifah, Ghaida Alqarawi and Murad A. Rassam
Future Internet 2024, 16(10), 367; https://doi.org/10.3390/fi16100367 - 11 Oct 2024
Abstract
An online social media platform such as Instagram has become a popular communication channel that millions of people are using today. However, this media also becomes an avenue where fake accounts are used to inflate the number of followers on a targeted account. Fake accounts tend to alter the concepts of popularity and influence on the Instagram media platform and significantly impact the economy, politics, and society, which is considered cybercrime. This paper proposes a framework to classify fake and real accounts on Instagram based on a deep learning approach called the Long Short-Term Memory (LSTM) network. Experiments and comparisons with existing machine and deep learning frameworks demonstrate considerable improvement in the proposed framework. It achieved a detection accuracy of 97.42% and 94.21% on two publicly available Instagram datasets, with F-measure scores of 92.17% and 89.55%, respectively. Further experiments on the Twitter dataset reveal the effectiveness of the proposed framework by achieving an impressive accuracy rate of 99.42%. Full article
21 pages, 4502 KiB  
Article
An Analytical Approach for IGBT Life Prediction Using Successive Variational Mode Decomposition and Bidirectional Long Short-Term Memory Networks
by Kaitian Deng, Xianglian Xu, Fang Yuan, Tianyu Zhang, Yuli Xu, Tunzhen Xie, Yuanqing Song and Ruiqing Zhao
Electronics 2024, 13(20), 4002; https://doi.org/10.3390/electronics13204002 - 11 Oct 2024
Abstract
The precise estimation of the operational lifespan of insulated gate bipolar transistors (IGBT) holds paramount significance for ensuring the efficient and uncompromised safety of industrial equipment. However, numerous methodologies and models currently employed for this purpose often fall short of delivering highly accurate predictions. The analytical approach that combines the Pattern Optimization Algorithm (POA) with Successive Variational Mode Decomposition (SVMD) and Bidirectional Long Short-term Memory (BiLSTM) network is introduced. Firstly, SVMD is employed as an unsupervised feature learning method to partition the data into intrinsic modal functions (IMFs), which are used to eliminate noise and preserve the essential signal. Secondly, the BiLSTM network is integrated for supervised learning purposes, enabling the prediction of the decomposed sequence. Additionally, the hyperparameters of BiLSTM and the penalty coefficients of SVMD are optimized utilizing the POA technique. Subsequently, the various modal functions are predicted utilizing the trained prediction model, and the individual mode predictions are subsequently aggregated to yield the model’s definitive final life prediction. Through case studies involving IGBT aging datasets, the optimal prediction model was formulated and its lifespan prediction capability was validated. The superiority of the proposed method is demonstrated by comparing it with benchmark models and other state-of-the-art methods. Full article
21 pages, 6513 KiB  
Article
A Monitoring Device and Grade Prediction System for Grain Mildew
by Lei Xu, Yane Li, Xiang Weng, Jiankai Shi, Hailin Feng, Xingquan Liu and Guoxin Zhou
Sensors 2024, 24(20), 6556; https://doi.org/10.3390/s24206556 - 11 Oct 2024
Abstract
Mildew infestation is a significant cause of loss during grain storage. The growth and metabolism of mildew leads to changes in gas composition and temperature within granaries. Recent advances in sensor technology and machine learning enable the prediction of grain mildew during storage. Current research primarily focuses on predicting mildew occurrence or grading using simple machine learning methods, without in-depth exploration of the time series characteristics of mildew process data. A monitoring device was designed and developed to capture high-quality microenvironment parameters and image data during a simulated mildew process experiment. Using the “Yongyou 15” rice varieties from Zhejiang Province, five simulation experiments were conducted under varying temperature and humidity conditions between January and May 2023. Mildew grades were defined through manual analysis to construct a multimodal dataset for the rice mildew process. This study proposes a combined model (CNN–LSTM–A) that integrates convolutional neural networks (CNN), long short-term memory (LSTM) networks, and attention mechanisms to predict the mildew grade of stored rice. The proposed model was compared with LSTM, CNN–LSTM, and LSTM–Attention models. The results indicate that the proposed model outperforms the others, achieving a prediction accuracy of 98%. The model demonstrates superior accuracy and more stable performance. The generalization performance of the prediction model was evaluated using four experimental datasets with varying storage temperature and humidity conditions. The results show that the model achieves optimal prediction stability when the training set contains similar storage temperatures, with prediction accuracy exceeding 99.8%. This indicates that the model can effectively predict the mildew grades in rice under varying environmental conditions, demonstrating significant potential for grain mildew prediction and early warning systems. Full article
(This article belongs to the Section Sensor Networks)