
Online addiction analysis and identification of students by applying gd-LSTM algorithm to educational behaviour data

  • Shuang Zhang and Huisi Yu
Published/Copyright: March 8, 2024

Abstract

The Internet has become the primary source of extracurricular entertainment for college students in today's information age. However, excessive Internet addiction (IA) can negatively affect students' daily lives and academic performance. To identify and analyse the degree of IA among university students, this study applied stochastic models to campus educational behaviour data to extract temporal characteristics of student behaviour and built a gated-dropout long short-term memory (gd-LSTM) network by fusing the Dropout and LSTM algorithms. The model was then used to make predictions from the gathered multidimensional vectors and, finally, to identify and evaluate the extent of university students' Internet addiction. According to the experimental findings, Internet-dependent students accounted for 4.23% of the 5,861 university students surveyed, and 95.66% of those students were male. The study evaluated the model on four dimensions, and the results showed that the proposed predictive model clearly outperformed other models, scoring 0.73, 0.72, 0.74, and 0.74 on the respective dimensions. In a performance comparison with other similar models, the prediction model performed better overall and more evenly across the four evaluation dimensions, demonstrating the superiority of the research model.

1 Introduction

Internet addiction (IA) describes the behaviour of users who rely excessively on the Internet and exhibit long-term, unchecked addiction to the online world [1]. If a student's level of IA is too high, it can negatively affect their ability to interact with others as well as their physical and mental development; in extreme cases, it can even cause them to waste their education and forfeit promising career prospects. As a result, identifying and analysing the IA levels of students in higher education (HE) has become a crucial topic in the educational field [2]. Numerous international studies have examined IA identification, yet there is still no widely recognised benchmark, and questionnaires remain the most common instrument for Internet addiction identification (IAI) [3]. However, questionnaire surveys carry high labour costs and are difficult to generalise broadly [4,5]. To evaluate the level of IA, this study therefore applied stochastic models (SM) to the educational behaviour data of college students and constructed a gated-dropout long short-term memory (gd-LSTM) model by integrating the LSTM and Dropout algorithms. The gd-LSTM model is used to identify and predict from the gathered multidimensional vectors in order to identify and examine the IA levels of university students.

The remainder of the study is organised as follows: Section 2 summarises and analyses the current state of international research on LSTM algorithms and IA analysis, from which the IA analysis model employed in the study is derived. Sections 3–5 explain the principles of the LSTM method, the gd-LSTM IA analysis prediction model that combines the Dropout algorithm with the LSTM algorithm, and the extraction of time-series features (TSF) of university student activity. Section 6 evaluates the performance of the gd-LSTM IA analysis prediction model and examines the experimental results. Section 7 summarises the experimental findings and identifies the study's limitations.

2 Related works

The Internet has been incorporated into every part of people's everyday lives in this era of information explosion, and it has replaced television as the primary form of entertainment on college campuses. Many professionals and academics from around the world have undertaken research on the crucial subject of IAI analysis and have produced conclusions intended to help university students use the Internet sensibly and prevent IA. To extract spatial and short-term temporal information, Zheng et al. created an attention-based LSTM neural network. By automatically allocating different weights to reflect the trend of traffic flow in the forward and backward directions, the significance of flow sequences at various times was recognised, and the efficiency of the algorithm was confirmed by experimental findings [6]. Shrestha et al. suggested a novel technique based on recursive LSTM and bi-directional LSTM (Bi-LSTM) network architecture. Their results demonstrate that although distance-domain data achieve only about 76% accuracy on average, Doppler-domain data using Bi-LSTM networks and appropriate learning rates achieve an average accuracy of over 90% [7]. Shen et al. created a framework combining Bi-LSTM and data sequencing to forecast, in real time, the diameter of jet grout columns in soft soils. The model was evaluated on an example study of jet grouting treatments in soft soils, and the experimental findings showed that the suggested strategy could successfully estimate the column diameter with depth [8]. By simultaneously modelling behavioural activities at the individual and group levels, Shu et al. presented an LSTM algorithm with residual connectivity that learns temporal and static properties of person-level residuals to achieve group activity recognition. The usefulness of the approach was confirmed by experimental findings on two open datasets [9]. Convolutional neural networks (CNN) and LSTM algorithms were combined in Sun et al.'s hybrid deep learning technique to estimate the short-term degradation of a 110 kW fuel cell system for commercial vehicles. Sliding windows are used to extract non-linear, non-smooth voltage sequences, which are then decomposed into modal sequences with various characteristic time scales and fed into the corresponding CNN-LSTM [10].

Multivariate analysis of variance (MANOVA) was employed by Jin Jeong et al. to examine the statistical differences among 12 risk factors for addiction. Regarding the differences in addiction risk factors between IA and smartphone addiction, the experimental findings revealed that smartphone addiction was greater than IA [11]. Suresh and Biswas collected and analysed data from 202 respondents over the course of 7 months in Bangalore; the findings revealed that excessive internet shopping was positively correlated with rising IA [12]. To investigate the relationship between IA and obesity, Aghasi et al. reviewed nine cross-sectional studies. By combining 11 effect sizes from the nine studies, they demonstrated that those who used the Internet the most had a significantly higher likelihood of being overweight or obese than those who used it the least [13]. You et al. applied the Pittsburgh Sleep Quality Index to a cross-sectional sample of college students to investigate the impact of IA on sleep quality. According to the findings, college students with high levels of IA were 2.35 times more likely than those with normal levels to report poor subjective sleep quality [14].

In conclusion, even though many experts have suggested numerous techniques and forecasting models for the detection and analysis of IAI, they have rarely investigated the behavioural traits of university students. This study therefore combines educational behaviour data, extracts behavioural TSFs using SM, and develops an IAI analysis model based on the LSTM algorithm combined with the Dropout algorithm. As a result, the study introduces fresh perspectives and references to the IAI field.

3 Applying the gd-LSTM algorithm to construct a student IA analysis recognition model

To identify and analyse students' IA levels more efficiently and accurately, the study adopts a statistical approach to classify the TSF of university students during their school years into multi-dimensional vectors for inductive analysis, and it uses the LSTM algorithm combined with the Dropout algorithm to build an IA analysis model, thereby completing the identification and analysis of the IA risks of university students.

3.1 Student IA analysis under educational behaviour data

To quantitatively analyse the IA level of university students, the study used big data techniques combined with statistical methods to map educational behaviour data onto the SM and to assess the overall data using the central limit theorem. The SM is built from a combination of mutually independent random variables; it can faithfully reflect the relationships between the random parameters in the system and characterise real-life behaviour well. The sample mean probability statistics of the SM are shown in Figure 1.

Figure 1: Sample mean probability statistical chart.

σ in Figure 1 indicates the standard error of the SM. The probability of a sample falling within the range of ±1σ is 68%, i.e. the confidence level is 68%. As can be seen from Figure 1, the SM satisfies the central limit theorem, i.e. the sample mean approaches the population mean, and the sample means are approximately normally distributed around it [15,16]. Therefore, the sample mean and standard deviation can be used to estimate the population mean and standard deviation, and thus to analyse the level of IA of college students. The next step is to construct the TSF of college students from their behavioural data during their school years. The study starts from four dimensions, namely behavioural patterns, consumption behaviour, academic performance, and gender; the resulting TSF system of college students is shown in Figure 2.
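As a minimal illustration of this estimation step, the following sketch computes the sample mean, the sample standard deviation, and the ±1 standard-error (68%) interval from an array of daily online-time values; the file name and variable names are assumptions for illustration only.

```python
# A minimal sketch, assuming daily online-time values (minutes) are available as a
# one-dimensional array; the file name and variable names are hypothetical.
import numpy as np

online_minutes = np.loadtxt("daily_online_minutes.csv")   # hypothetical input file

sample_mean = online_minutes.mean()
sample_std = online_minutes.std(ddof=1)                    # sample standard deviation
standard_error = sample_std / np.sqrt(len(online_minutes))

# By the central limit theorem, roughly 68% of sample means fall within one
# standard error of the population mean.
ci_68 = (sample_mean - standard_error, sample_mean + standard_error)
print(f"mean = {sample_mean:.3f}, 68% interval = ({ci_68[0]:.3f}, {ci_68[1]:.3f})")
```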

Figure 2: Time series characteristic system diagram of college students.

The behavioural patterns of students in HE are mostly chaotic but repetitive, and the concept of information entropy (IE) is used to characterise them. IE quantifies the uncertainty of a random event: the greater the uncertainty of the event, the greater the IE [17]. By counting the frequency of different behaviours of university students per unit of time and calculating the entropy of student behaviour, the behavioural patterns of university students can be quantified.

A time interval of 1 h is specified, and a day is divided into a 24-dimensional time vector. The frequency of behaviour v in each time period per person per month is counted and the frequency of occurrence of the event is calculated. The equation for calculating frequency p is shown in equation (1).

(1) \( p_v(T = t_i) = \dfrac{n_v(t_i)}{\sum_{i=1}^{n} n_v(t_i)} \),

where \(T\) is the time interval, \(t_i\) denotes the \(i\)th time interval of the day, \(v\) denotes a given student behaviour, and \(n_v(t_i)\) denotes the total number of times behaviour \(v\) occurs during time interval \(t_i\). Further calculation gives the expression for the behavioural entropy of behaviour \(v\) in a month, as in equation (2).

(2) \( E_v = -\sum_{i=1}^{n} p_v(T = t_i)\,\log p_v(T = t_i) \).

By solving equation (2), the behavioural entropy of behaviour \(v\) for each month is obtained. Next, taking the whole semester as a unit, the mean of the monthly behavioural entropies within each semester is calculated; the result is the behavioural entropy of the student's behaviour \(v\). The larger the behavioural entropy value, the more time periods in which behaviour \(v\) occurs and the more irregular the behaviour. Conducting the same behavioural entropy analysis on the consumption behaviour of college students yields the behavioural entropy of their consumption behaviour. By solving for the behavioural entropy, the TSF of each dimension can be extracted from the huge amount of educational behaviour data.
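As a concrete illustration of equations (1) and (2), the sketch below computes the monthly behavioural entropy from per-hour event counts and then averages it over a semester; the count data, array shapes, and function names are hypothetical.

```python
# A minimal sketch of the behavioural-entropy calculation in equations (1)-(2),
# assuming per-hour event counts for one behaviour are available per month.
import numpy as np

def behaviour_entropy(hourly_counts):
    """hourly_counts: length-24 array of occurrences of one behaviour per hour slot in a month."""
    counts = np.asarray(hourly_counts, dtype=float)
    total = counts.sum()
    if total == 0:
        return 0.0                          # behaviour never occurred this month
    p = counts / total                      # equation (1): frequency per time slot
    p = p[p > 0]                            # skip empty slots (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())    # equation (2): Shannon entropy

# Semester-level feature: mean of the monthly entropies, as described in the text.
monthly_counts = np.random.randint(0, 5, size=(5, 24))   # 5 months of hypothetical counts
semester_entropy = np.mean([behaviour_entropy(m) for m in monthly_counts])
print(round(semester_entropy, 3))
```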

4 TSF analysis based on LSTM algorithm

To better analyse the TSF of university students in each dimension, the study uses the LSTM algorithm to build a predictive model for recognising and analysing students' IA levels. The LSTM model is a recurrent neural network (RNN)-based temporal model that can remember both long- and short-term information. Figure 3 depicts the structural layout of the RNN.

Figure 3: RNN structure diagram.

In Figure 3, \(h_t\) is the hidden layer state, \(x_t\) is the input vector at moment \(t\), and Tanh is the activation function (AF). As shown in Figure 3, the RNN has a directed graph at its core, and the directed graphs are linked in a chain-like manner to form recurrent units [18]. Many identical recurrent units connected in a chained fashion make up the RNN. The state expression of an RNN is given in equation (3).

(3) \( x_t = F(x_{t-1}, \mu_t, \theta) \),

where \(x_t\) is the system state of the RNN at moment \(t\), \(\mu_t\) is the system input, and \(\theta\) is the weight coefficient inside the recurrent unit. In addition to the recurrent unit, the RNN is generally set up with an output node defined as a linear function. The expression for the output node \(o_t\) is shown in equation (4).

(4) \( o_t = v x_t + c \),

where \(v\) and \(c\) are the corresponding weighting coefficients. For an RNN with specified parameters, the state is generally expressed in terms of weights; the weight expression of an RNN is given in equation (5).

(5) \( x_t = W_{\mathrm{rec}}\,\sigma(x_{t-1}) + W_{\mathrm{in}}\,\mu_t + b \),

where \(W_{\mathrm{rec}}\) denotes the recurrence weight matrix of the neural network, \(W_{\mathrm{in}}\) denotes the input weight, \(b\) is the bias of the neural network, and \(\sigma\) is the AF. As a chain-structured neural network, the RNN can process input sequences of variable length [19]. In comparison, the LSTM neural network has the same chain structure but four network layers inside the recurrent cell. Through three gates – the input gate, output gate, and forget gate – it controls the amount of information passed about the cell state. The cell structure of the LSTM neural network is shown schematically in Figure 4.
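Before turning to the LSTM cell in Figure 4, the sketch below steps a plain RNN state through equations (4) and (5) using NumPy; all weights, dimensions, and inputs are random placeholders chosen only for illustration.

```python
# A minimal sketch of the RNN recurrence in equations (4)-(5); the sizes and
# random values are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
W_rec, W_in = rng.normal(size=(8, 8)), rng.normal(size=(8, 3))
v, b, c = rng.normal(size=8), np.zeros(8), 0.0

x = np.zeros(8)                                   # initial state
for mu_t in rng.normal(size=(5, 3)):              # five input vectors
    x = W_rec @ np.tanh(x) + W_in @ mu_t + b      # equation (5), with sigma = tanh
    o_t = v @ x + c                               # equation (4): linear output node
print(o_t)
```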

Figure 4: Schematic diagram of the cell structure of the LSTM neural network.

In Figure 4, each yellow box indicates a neural network layer, each pink circle indicates an element-level operation, and \(C_t\) indicates the cell state. The three gates selectively screen the information passed through the cell state. The study proposes an LSTM-based gated Dropout algorithm, referred to as the gd-LSTM algorithm, in which each of the three LSTM gates is randomly disabled with a certain probability; this applies dropout to the hidden layer. In the LSTM, \(W\) denotes a weight matrix, and the expression of the forget gate \(f_t\) is shown in equation (6).

(6) \( f_t = \sigma(b_f + w_f[h_{t-1}, x_t]) \),

where \([h_{t-1}, x_t]\) represents the concatenation of the hidden state and the input, and \(b_f\) is the bias vector of the forget gate. The information allowed to pass through is sent to the input gate for a data update. The expression for the input gate \(i_t\) is given in equation (7).

(7) \( i_t = \sigma(w_i[h_{t-1}, x_t] + b_i) \),

where \(b_i\) is the bias vector of the input gate. The input gate determines which information is added to the cell state. The candidate cell information \(\tilde{C}_t\) is obtained by filtering the information from the hidden state \(h_{t-1}\) and the input vector \(x_t\), and is calculated as shown in equation (8).

(8) \( \tilde{C}_t = \tanh(b_c + W_c[h_{t-1}, x_t]) \),

where \(W_c\) represents the cell state matrix and \(b_c\) is the cell state bias vector. The input gate updates the candidate information \(\tilde{C}_t\) into the cell state via the tanh AF to obtain the updated cell information \(C_t\). The equation for the updated cell information \(C_t\) is given in equation (9).

(9) \( C_t = i_t \times \tilde{C}_t + f_t \times C_{t-1} \),

where \(C_{t-1}\) in equation (9) represents the old cell information. The data in the updated cell information \(C_t\) are delivered to the output gate, where the state characteristics of the output cell are determined by the tanh AF. The expression for the output gate \(o_t\) is shown in equation (10).

(10) \( o_t = \sigma(w_o[h_{t-1}, x_t] + b_o) \),

where \(b_o\) in equation (10) is the bias vector of the output gate. The cell state characteristics of the output gate are passed through the tanh layer to obtain a vector with values in [−1, 1]. The expression for the output \(h_t\) of the recurrent cell is given in equation (11).

(11) \( h_t = o_t \times \tanh(C_t) \).

In the model, the parameter gate_dropout controls the gd-LSTM algorithm: each of the three LSTM gates is randomly disabled with probability gate_dropout. gate_dropout takes a value between 0 and 1; a value of 0 means no gate dropout is applied, and a value of 1 means every gate is disabled with probability 1, i.e. the cell fails. Typically, gate_dropout is set to 0.2 or 0.4.
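To make the gate-dropout idea concrete, the sketch below implements one step of a gd-LSTM-style cell in PyTorch, zeroing each of the three gates with probability gate_dropout during training; it follows equations (6)–(11) but is a hypothetical reimplementation under the stated assumptions, not the authors' released code.

```python
# A minimal sketch of one gd-LSTM step: during training each gate is randomly
# disabled (zeroed) with probability gate_dropout, as described in the text.
import torch
import torch.nn as nn

class GateDropoutLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, gate_dropout=0.3):
        super().__init__()
        self.gate_dropout = gate_dropout
        # One linear map produces the forget, input, candidate, and output pre-activations.
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x_t, h_prev, c_prev):
        z = self.linear(torch.cat([h_prev, x_t], dim=-1))
        f, i, g, o = z.chunk(4, dim=-1)
        f, i, o = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o)   # equations (6), (7), (10)
        g = torch.tanh(g)                                                # candidate, equation (8)
        if self.training:
            # Randomly disable each gate with probability gate_dropout.
            keep = lambda gate: gate * (torch.rand(()) >= self.gate_dropout).float()
            f, i, o = keep(f), keep(i), keep(o)
        c_t = i * g + f * c_prev                                         # equation (9)
        h_t = o * torch.tanh(c_t)                                        # equation (11)
        return h_t, c_t
```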

5 Predictive model for IA analysis combining Dropout algorithm and LSTM algorithm

Although LSTM prediction models have shown strong computational performance when analysing student TSF data, they are prone to overfitting [20]. This occurs because the LSTM prediction model is complex relative to the dataset, so its decision criteria become overly strict and under-regularised when dealing with simple data. The study therefore regularises the LSTM at the hidden layer by applying the Dropout algorithm, randomly disabling one of the three gates with a set probability, which greatly improves the regularisation of the model. The study uses the gd-LSTM algorithm to construct a prediction model and adds an attention mechanism after the output of the hidden layer to enhance the influence of important features and improve model performance. The model is a bidirectional LSTM structure with 71 input units and a binary-classification output, with 60 neurons per hidden layer; the AF is ReLU, the optimisation method is the Adam algorithm, and the gate dropout parameter is set to 0.3 to prevent overfitting. During training, 10-fold cross-validation is used and the training set is randomly divided into validation and training data at a 3:7 ratio; the number of iterations is 5 and the batch size is 50. The hyperparameters are adjusted to optimise the model. Figure 5 depicts the final structure of the gd-LSTM IA analysis model.
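Before examining Figure 5, the configuration above can be sketched as follows; the layer sizes (71 inputs, 60 hidden units), the ReLU head, the Adam optimiser, and the binary output follow the text, while the attention wiring and the use of a standard nn.LSTM (a gate-dropout cell such as the previous sketch would replace it in a full implementation) are assumptions.

```python
# A minimal sketch of the model configuration described above; the exact wiring
# of the attention layer and classifier head is an assumption.
import torch
import torch.nn as nn

class GdLSTMPredictor(nn.Module):
    def __init__(self, input_size=71, hidden_size=60):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_size, 1)      # attention scores over time steps
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, 2),                 # binary classification: IA risk / no risk
        )

    def forward(self, x):                               # x: (batch, time, 71)
        out, _ = self.lstm(x)
        weights = torch.softmax(self.attn(out), dim=1)  # emphasise informative time steps
        context = (weights * out).sum(dim=1)
        return self.head(context)

model = GdLSTMPredictor()
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
```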

Figure 5: Structure diagram of the gd-LSTM IA analysis model.

As shown in Figure 5, \(x_0\) is the current cell state and \(y_0\) is the updated cell state information. To prevent the loss of important information due to excessive sequence length, the study adds an Attention mechanism after the output of the hidden layer of the model, using the correlation between IA risk and each feature as the attention weight to enhance the influence of important features on the IA level. To assess the performance of the model, the study builds a confusion matrix for the binary classification, which compares the predicted class with the actual class. Figure 6 provides a schematic illustration of the confusion matrix's elements.

Figure 6: Component diagram of the confusion matrix.

According to Figure 6, the confusion matrix ultimately yields four components: TP denotes positive samples that are correctly predicted, FP denotes negative samples that are incorrectly predicted as positive, TN denotes negative samples that are correctly predicted, and FN denotes positive samples that are incorrectly predicted as negative. To avoid the data imbalance that could result from evaluating a single dimension, the study evaluates the prediction results of the model in four dimensions: accuracy, recall, precision, and the harmonic mean (F1). The accuracy expression is shown in equation (12).

(12) \( \mathrm{Accuracy} = \dfrac{TP + TN}{TP + FN + TN + FP} \),

where Accuracy is the performance measure of the learning algorithm, i.e. the proportion of correctly classified samples. When the sample data are unbalanced, accuracy alone can remain high even for a poor model, so to avoid distorted results the study introduces other dimensions for a comprehensive evaluation of the model. Recall is given in equation (13).

(13) \( \mathrm{Recall} = \dfrac{TP}{TP + FN} \),

where Recall denotes the proportion of positive samples that are correctly predicted. The expression for the precision of the model is given in equation (14).

(14) \( \mathrm{Precision} = \dfrac{TP}{TP + FP} \),

where Precision indicates the proportion of true positive samples among all samples predicted as positive. Finally, precision and recall are combined, giving the harmonic mean expression in equation (15).

(15) \( F_1 = \dfrac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} \),

where the harmonic mean \(F_1\) balances precision and recall; a smaller value of either component results in a smaller \(F_1\), thus preventing imbalanced sample data from distorting the evaluation.
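A small sketch of the four evaluation dimensions in equations (12)–(15), computed directly from binary labels, is given below; the example label lists are hypothetical.

```python
# A minimal sketch computing accuracy, recall, precision, and F1 from binary
# predictions, following equations (12)-(15); the example data are placeholders.
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / (tp + fn + tn + fp)             # equation (12)
    recall = tp / (tp + fn) if tp + fn else 0.0            # equation (13)
    precision = tp / (tp + fp) if tp + fp else 0.0         # equation (14)
    f1 = (2 * recall * precision / (recall + precision)    # equation (15)
          if recall + precision else 0.0)
    return accuracy, recall, precision, f1

print(evaluate([1, 0, 0, 1, 1, 0], [1, 0, 1, 1, 0, 0]))
```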

6 Performance testing of IA analysis models based on gd-LSTM algorithm

The study used the educational behaviour data of a university's class of 2020 college students as the training set, obtaining multidimensional TSF data for 5,861 college students over four semesters, of which 214 were high-IA-risk students (the positive class) and 5,647 were low-IA-risk students (the negative class); the training set data were randomly divided into training data and validation data at a 7:3 ratio using 10-fold cross-validation. The IA analysis model was then created using the gd-LSTM method, with the gate dropout parameter set to 0.3, 85 input units, 73 neurons per hidden layer, and the ReLU function as the AF. The output was binary classification, the batch size was 50, and the number of iterations was 5.
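For illustration, the sketch below prepares a class-imbalanced dataset of the stated size (214 positive and 5,647 negative samples) and performs a stratified 7:3 train/validation split with scikit-learn; the random feature matrix, the 85-feature width, and the use of a single train_test_split call rather than an explicit 10-fold loop are assumptions made only for this example.

```python
# A minimal sketch, assuming a feature matrix of 85 TSF values per student;
# the placeholder data and the single stratified split are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(5861, 85)            # placeholder features (85 per student)
y = np.array([1] * 214 + [0] * 5647)    # 1 = high IA risk, 0 = low IA risk

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
print(X_train.shape, X_val.shape)       # sizes of the 7:3 training/validation split
```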

The study computed the total amount of time each student spent online, in minutes, based on the start and stop times of each session, to determine how long each student spent online while enrolled in HE. Each individual's daily online time was then summed to obtain each student's total online time for each month. The statistical results were applied to the SM, and the resulting histogram of the sample mean frequency of Internet access hours of university students is shown in Figure 7.

Figure 7: Sample mean frequency histogram.

As shown in Figure 7, the sample data broadly follow a normal distribution, and the mean of the sample means was 43.742. The study used a range of two standard errors as the criterion for distinguishing the level of IA among college students, and further calculation gave a corresponding online time of 274.692 min. Therefore, the study used 275 min as the threshold for judging IA: students who spent more than 275 min on the Internet in a single day were regarded as IA students. According to the data in Figure 7, IA students accounted for 4.23% of the total number of students. The next step was to analyse the behavioural TSF of HE students; the annual behavioural entropy changes obtained are shown in Figure 8.

Figure 8: Annual behaviour entropy change chart. (a) Entropy changes in the gymnasium from January to December, (b) entropy changes in the cafeteria from January to December, (c) entropy changes in the library from January to December, and (d) entropy changes in the bathing centres from January to December.

Figure 8 displays the changes in behavioural entropy of college students in the gymnasium, canteen, library, and bathing centre, respectively. The behavioural entropy in every subplot is almost zero in February and August because these months correspond to the winter and summer breaks, when the vast majority of students have left for the holidays and only a minority remain on campus. Figure 8a shows that the behavioural entropy of IA students spiked in May. This spike was attributed to the school's PE classes being held in the gymnasium, which increased the number of times IA students visited the facility even though they did so infrequently on average. Figure 8 also demonstrates that the behavioural entropy of non-IA students is lower and more stable than that of IA students, indicating that non-IA students have a more regular routine. The following phase compared the grades of college students for each semester in 2020, calculating the grades and the resulting GPA and using those numbers as the grade attributes for each subject. Figure 9 depicts the final box plots of the grade characteristics for each semester.

Figure 9: Characteristic maps of grades for each semester. (a) First semester, (b) second semester, (c) third semester, and (d) fourth semester.

Figure 9 shows box plots in which the upper and lower horizontal lines denote the upper and lower limits of the data, the blue box in the middle denotes the 25–75% range of the data distribution (the data between the lower and upper quartiles), the green horizontal line in the middle denotes the median, and outliers are values that fall outside the upper and lower limits. From Figure 9, IA students have a lower mean GPA and lower minimum values than non-IA students, while non-IA students generally have higher upper-limit values and somewhat higher outlier values than IA students. This suggests that non-IA students generally perform better academically and confirms the negative impact of IA on academic performance. Finally, the relationship between gender and IA was examined for students in HE, and the resulting student gender ratios are displayed in Table 1.

Table 1

Student sex ratio table

Group            Sex      Proportion
Raw data         Male     68.37%
                 Female   31.63%
Non-IA students  Male     70.23%
                 Female   29.77%
IA students      Male     95.66%
                 Female   4.34%

As can be seen from Table 1, the proportions of male and female students in the original data were 68.37 and 31.63%, respectively. Among the students without IA, the proportions of male and female students were 70.23 and 29.77%, which basically matched the original data. Among IA students, the proportions of male and female students were 95.66 and 4.34%, which differed greatly from the original proportions, indicating that the vast majority of IA students are male and that female students are far less likely to become IA students.

Next, the study conducted a comprehensive evaluation of the prediction results of the gd-LSTM IA analysis model in four dimensions: accuracy, recall, precision, and harmonic mean. The LSTM algorithm, gd-LSTM algorithm, and CNN algorithm were compared, and the resulting performance graphs of the three algorithms are shown in Figure 10.

Figure 10: Performance comparison chart of the three algorithms.

According to Figure 10, the gd-LSTM algorithm scores 0.73, 0.72, 0.74, and 0.74 in the respective dimensions, significantly higher than the scores of the other two algorithms in the same dimensions. This indicates that the gd-LSTM algorithm has the best performance and stability among the three algorithms, thus proving the effectiveness of the optimisation. The study then compared the performance of several commonly used algorithmic models with that of the gd-LSTM algorithm; the final performance comparison is shown in Figure 11.

Figure 11: Comparison of the performance of common algorithm models.

In Figure 11, GDBT stands for gradient boosting decision tree, LR for logistic regression, SVM for support vector machine, and NBC for naive Bayesian classifier. The scores of the gd-LSTM algorithm were 0.746, 0.749, 0.745, and 0.746 in the respective dimensions, the highest values in the accuracy and F1 dimensions, indicating that the predictions of the research algorithm are more accurate. Although the SVM model achieved the single highest score of 0.875 in one dimension, it did not perform as well as the gd-LSTM algorithm in the other dimensions. Therefore, considering all dimensions together, the gd-LSTM algorithm performs better, demonstrating the superiority of the model.

7 Conclusion

The study used SM to extract student behavioural feature vectors from educational behaviour data, applied the gated dropout technique to the LSTM model, and then developed the IAI analysis model to identify and analyse the level of IA among university students. The experiments found that IA students accounted for 4.23% of the 5,861 university students surveyed. Additionally, the percentages of male and female students among the IA students were 95.66 and 4.34%, respectively, showing that male students made up the vast majority of IA students. The multidimensional behavioural TSF calculation revealed that IA students lead more erratic lives and have higher behavioural entropy, and IA students typically performed worse than non-IA students across all academic performance measures. The gd-LSTM algorithm proposed in the study scores 0.73, 0.72, 0.74, and 0.74 in the respective dimensions, higher than the LSTM algorithm and CNN algorithm in the same dimensions, demonstrating the effectiveness of the optimisation. In performance comparison tests with other models of the same type, the gd-LSTM algorithm scored 0.746, 0.749, 0.745, and 0.746 in the respective dimensions, with a more balanced performance across dimensions and the highest values in the accuracy and F1 dimensions, proving the accuracy of the algorithm's predictions. There is still no globally recognised diagnostic standard for IA, hence the study can only be used as a method of evaluating the risk of IA and not as a medical tool for IA diagnosis.

  1. Funding information: The authors state no funding involved.

  2. Author contributions: All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Shuang Zhang and Huisi Yu. The first draft of the manuscript was written by Shuang Zhang and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

  3. Conflict of interest: The authors report there are no competing interests to declare.

  4. Data availability statement: The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References

[1] Bi J, Zhang X, Yuan H, Zhang J, Zhou M. A hybrid prediction method for realistic network traffic with temporal convolutional network and LSTM. IEEE Trans Autom Sci Eng. 2021;19(3):1869–79. doi:10.1109/TASE.2021.3077537.

[2] Hamayel MJ, Owda AY. A novel cryptocurrency price prediction model using GRU, LSTM and bi-LSTM machine learning algorithms. AI. 2021;2(4):477–96. doi:10.3390/ai2040030.

[3] Oslund S, Washington C, So A, Chen T, Ji H. Multiview robust adversarial stickers for arbitrary objects in the physical world. J Comput Cognit Eng. 2022;1(4):152–8. doi:10.47852/bonviewJCCE2202322.

[4] Wang X, Cheng M, Eaton J, Hsieh CJ, Wu SF. Fake node attacks on graph convolutional networks. J Comput Cognit Eng. 2022;1(4):165–73. doi:10.47852/bonviewJCCE2202321.

[5] Moghar A, Hamiche M. Stock market prediction using LSTM recurrent neural network. Procedia Comput Sci. 2020;170:1168–73. doi:10.1016/j.procs.2020.03.049.

[6] Zheng H, Lin F, Feng X, Chen Y. A hybrid deep learning model with attention-based conv-LSTM networks for short-term traffic flow prediction. IEEE Trans Intell Transp Syst. 2020;22(11):6910–20. doi:10.1109/TITS.2020.2997352.

[7] Shrestha A, Li H, Le Kernec J, Fioranelli F. Continuous human activity classification from FMCW radar with Bi-LSTM networks. IEEE Sens J. 2020;20(22):13607–19. doi:10.1109/JSEN.2020.3006386.

[8] Shen SL, Atangana Njock PG, Zhou A, Lyu HM. Dynamic prediction of jet grouted column diameter in soft soil using Bi-LSTM deep learning. Acta Geotech. 2021;16(1):303–15. doi:10.1007/s11440-020-01005-8.

[9] Shu X, Zhang L, Sun Y, Tang J. Host–parasite: Graph LSTM-in-LSTM for group activity recognition. IEEE Trans Neural Netw Learn Syst. 2020;32(2):663–74. doi:10.1109/TNNLS.2020.2978942.

[10] Sun B, Liu X, Wang J, Wei X, Yuan H, Dai H. Short-term performance degradation prediction of a commercial vehicle fuel cell system based on CNN and LSTM hybrid neural network. Int J Hydrog Energy. 2023;48(23):8613–28. doi:10.1016/j.ijhydene.2022.12.005.

[11] Jin Jeong Y, Suh B, Gweon G. Is smartphone addiction different from Internet addiction? Comparison of addiction-risk factors among adolescents. Behav Inf Technol. 2020;39(5):578–93. doi:10.1080/0144929X.2019.1604805.

[12] Suresh AS, Biswas A. A study of factors of internet addiction and its impact on online compulsive buying behaviour: Indian millennial perspective. Glob Bus Rev. 2020;21(6):1448–65. doi:10.1177/0972150919857011.

[13] Aghasi M, Matinfar A, Golzarand M, Salari-Moghaddam A, Ebrahimpour-Koujan S. Internet use in relation to overweight and obesity: A systematic review and meta-analysis of cross-sectional studies. Adv Nutr. 2020;11(2):349–56. doi:10.1093/advances/nmz073.

[14] You Z, Mei W, Ye N, Zhang L, Andrasik F. Mediating effects of rumination and bedtime procrastination on the relationship between internet addiction and poor sleep quality. J Behav Addict. 2021;9(4):1002–10. doi:10.1556/2006.2020.00104.

[15] Solly JE, Hook RW, Grant JE, Cortese S, Chamberlain SR. Structural gray matter differences in problematic usage of the internet: A systematic review and meta-analysis. Mol Psychiatry. 2022;27(2):1000–9. doi:10.1038/s41380-021-01315-7.

[16] Buneviciene I, Bunevicius A. Prevalence of internet addiction in healthcare professionals: Systematic review and meta-analysis. Int J Soc Psychiatry. 2021;67(5):483–91. doi:10.1177/0020764020959093.

[17] Goslar M, Leibetseder M, Muench HM, Hofmann SG, Laireiter AR. Treatments for internet addiction, sex addiction and compulsive buying: A meta-analysis. J Behav Addict. 2020;9(1):14–43. doi:10.1556/2006.2020.00005.

[18] Tras Z, Gökçen G. Academic procrastination and social anxiety as predictive variables internet addiction of adolescents. Int Educ Stud. 2020;13(9):23–35. doi:10.5539/ies.v13n9p23.

[19] Xia X. Internet addiction among college students in China and its underlying causes. Sci Insights Educ Front. 2023;16(1):2457–73. doi:10.15354/sief.23.re127.

[20] Tayhan Kartal F, Yabancı Ayhan N. Relationship between eating disorders and internet and smartphone addiction in college students. Eat Weight Disord. 2021;26:1853–62. doi:10.1007/s40519-020-01027-x.

Received: 2023-07-27
Revised: 2023-10-17
Accepted: 2024-01-09
Published Online: 2024-03-08

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
