Research Article
Design of Machine Learning Algorithm for Tourism Demand Prediction

Nan Yu¹ and Jiaping Chen²

¹School of Tourism, Yellow River Conservancy Technical Institute, Kaifeng, Henan 475000, China
²School of Tourism, Henan Vocational & Technical College, Zhengzhou, Henan 450000, China
Received 21 April 2022; Revised 16 May 2022; Accepted 20 May 2022; Published 8 June 2022
Copyright © 2022 Nan Yu and Jiaping Chen. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Unused hotel rooms, unused event tickets, and unsold items are all examples of wasted expenses and earnings. Governments require accurate tourism demand forecasting in order to make informed decisions on topics such as infrastructure development and lodging site planning; accurate tourism demand forecasting is therefore vital. With the rapid advancement of artificial intelligence (AI), AI models such as neural networks and support vector regression (SVR) have been used effectively in tourism demand forecasting. This paper constructs a tourism demand forecasting model based on machine learning on the basis of existing forecasting model research. The completed work is as follows: (1) It reviews a large body of domestic and foreign literature on tourism volume forecasting and sets out the research content of this paper. (2) It proposes stacking long short-term memory- (LSTM-) based autoencoders into a deep network, adopting a greedy layer-wise pretraining method in place of the random weight initialization used in deep networks and combining this pretraining stage with a fine-tuning network to form the SAE-LSTM prediction model, thereby improving the performance of the deep learning model. (3) It uses the monthly search engine intensity data for city A's monthly tourist volume and its related influencing factors as the data set; processes the data set so that it fits the model input; adopts mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and other model evaluation indicators; and conducts comparative experiments with LSTM and the constructed SAE-LSTM model to predict the number of tourist arrivals over four years. The prediction results of the model proposed in this paper are better than those of the LSTM model, and the experimental results demonstrate the superiority of the proposed LSTM-based unsupervised pretraining method.
...standards and build a sound system to predict the tourist flow and prevent problems before they occur. While avoiding excessive tourist flow, it is also necessary to avoid the loss of tourism in the off-season, especially in hotels and transportation, which would waste resources and affect the sustainable development of scenic spots [4, 5]. It can be seen that the development of the tourism industry is affected by many factors, which cause seasonal and cyclical changes. It is necessary to judge the tourism volume in a timely and accurate manner, improve the utilization rate of resources, improve the management ability of scenic spots, and ensure that the tourism industry develops sustainably. In order to prevent the various problems caused by excessive visitor flow, it is necessary to predict the passenger flow in advance, improve the carrying capacity of the scenic spot, and formulate effective preventive measures, so as to improve the utilization rate of resources and distribute the passenger flow evenly. Therefore, research on the tourism industry must focus on the forecasting of tourism volume, which promotes market value and academic value simultaneously and has attracted attention from all walks of life. For the problem of tourism demand forecasting, statistical methods have been used for modeling, and good results have been achieved [6]. To begin, linear regression was employed to predict tourist demand: a forecasting model could be developed from the link between historical demand data and estimated model parameters. Because the cyclical variation in tourist demand is not taken into account, however, the error in predicting tourism demand is rather substantial. Some researchers have proposed methods based on moving averages, exponential smoothing, and other time series analysis techniques to address the shortcomings of the linear regression model, but these techniques are still essentially linear modeling techniques, so their limitations are also obvious [7]. With the advancement of neural network research in recent years, some researchers have proposed neural network-based tourism demand forecasting models; such nonlinear modeling can not only describe the cyclical characteristics of tourism demand but also track time-varying travel demand and produce good travel demand forecast results.

It has also been suggested that the SVR model be employed in tourism forecasting. SVR is suited to small-sample forecasting problems: it retains the strong forecasting performance of a neural network while overcoming the disadvantage of overfitting. A neural network needs a large number of tourism demand samples, and tourism demand is a small-sample prediction problem, so the neural network is prone to overfitting during the learning phase even though its fitting accuracy is rather good. The capacity of deep learning in tourism prediction is demonstrated by a tourism prediction model based on LSTM and an attention mechanism [8-10]. Accordingly, this paper's main research goal is to present a tourism volume forecasting model that improves forecasting accuracy. The proposed model will enable the tourism department to comprehend the passenger flow distribution in advance so that scientific decisions can be made, and the strategy will also save a lot of tourism resources, which has far-reaching practical implications.

The paper is structured as follows: Section 2 discusses the related work. Section 3 defines the various methods. Section 4 presents the experiment and analysis. Section 5 concludes the article.

2. Related Work

Nowadays, the social economy is in a period of rapid development, and the proportion of tourism in the entire economy is increasing, so there is an urgent need for more in-depth research on tourism volume forecasting. A review of the relevant literature shows that foreign research on tourism volume forecasting falls into three stages: traditional econometric model research, artificial intelligence model research, and hybrid model research, with increasingly accurate forecasting effects. The earliest traditional econometric models take the form of time series. Through long-term use, their structure has become relatively mature, focusing on the characteristics and variable parameters of the data associated with the model, which are determined by the changing trend and shape of the time series; the change of the target time series is then determined through the analysis of the econometric model. The observation object of a time series model is only the historical data of the predicted variable, so data collection is simple and the application cost is very low, but the prediction results carry a certain deviation. For a long time, time series models have been used in the forecasting of tourism volume and have been popularized in many fields. The most famous is the autoregressive integrated moving average (ARIMA) model proposed by reference [11] in 1970. After that, the use of time series models made a breakthrough in 2000, when simple ARIMA and SARIMA models were constructed and were recognized and popularized by the academic community. Since tourism volume is greatly affected by seasonality, these models can effectively capture this feature and improve the prediction accuracy [6]. Many innovations were then built on this foundation. Reference [12] conducted an extensive study on tourism volume forecasting, built a multivariate GARCH model with three variables, and examined the corresponding factors impacting tourism volume through empirical verification, showing that market demand is influenced by different types of markets. In addition, reference [13] analyzed and compared the forecast differences between econometric models, examining the Indian tourism market through the forecast results obtained by the X-12-ARIMA model and the ARFIMA model. The traditional econometric model is also typical and is one of the more mature forecasting models. Its core foundation is statistics, which integrates the knowledge of various disciplines: actual parameters are added to a mathematical model in a stochastic setting, and the relationship between variables and influencing factors is then analyzed through the application of the model. At present, there are many econometric methods in tourism volume forecasting, mainly including the vector autoregressive model (VAR), the autoregressive distributed lag model (ADLM), the error correction model (ECM), and the time-varying parameter (TVP) model, all of which have been applied effectively [14-16].
Reference [17] used the VAR method to successfully predict the number of tourists in popular scenic spots in a certain region of France in recent years. Then, taking the characteristics of different countries as the analysis point, it concluded which country would be the largest source market in the future, compared this with the actual situation for verification, and summarized the finding that the VAR model can effectively predict medium- and long-term tourism volume. In addition, extensive practical analysis shows that VAR models can be constructed relatively simply because they are mainly based on theoretical analysis. After that, reference [18] introduced a new method, which advanced the development of inbound tourism forecasting. Reference [19] carried out error analysis on the existing basis and optimized the prediction model, so that British outbound tourism volume could be analyzed effectively. With the continuous development of research methods, the academic community gradually realized that, due to the problems of time series, the earlier prediction models often fail to achieve the expected results. To this end, artificial intelligence models were proposed through a large number of innovations, and AI-based technology developed greatly [20, 21]. At the end of the last century, the concept of the artificial neural network (ANN) was introduced into tourism volume prediction models, which greatly advanced the research. In this regard, reference [22] summarizes the ANN model for tourism volume prediction and concludes through analysis that it is superior to traditional models. Reference [23] argues that the ANN model has great advantages in tourism volume forecasting: compared with the traditional Naïve model, the exponential smoothing (ES) method, multiple regression, and other models, it has many favorable characteristics, and its prediction results are more effective. Reference [24] optimizes on the basis of the ANN, compares multiple prediction models, derives many rules, and summarizes a multilayer perceptron prediction model. With continuing research on tourism volume forecasting, the accuracy of forecasts based on various models has improved continuously, but the transparency of the models is low, so users cannot understand their mechanisms and the models cannot gain users' trust. It is of great significance to study the interpretability of tourism forecasting models, but at this stage, there is no or very little work focused on interpretable tourism demand forecasting [25-27]. Many machine learning models are widely used in time series forecasting. If a traditional ANN with a shallow structure becomes too complex, for example, when the network contains many layers and parameters, it is difficult to train. Deep neural networks (DNNs) have shown better performance than traditional neural networks in many time series forecasting applications. Deep learning can train DNNs with many hidden layers and easily learns features from raw data, so it is very popular in the field of machine learning.

3. Method

This section presents the AE-LSTM prediction model construction, the SAE-LSTM prediction model construction, and the selection of model evaluation indicators.

3.1. AE-LSTM Prediction Model Construction. This subsection examines the LSTM network and autoencoders and then describes the construction of the LSTM-based autoencoder pretraining model.

3.1.1. LSTM Network. RNNs have been used to solve problems such as tourism demand prediction because they can model dependencies between sequence data through recurrent loops. However, during forward propagation the influence of earlier time steps on later ones diminishes as the sequence grows, so an RNN cannot recall long-term memory. The LSTM not only has a recurrent learning unit inside the network but also uses designed gates to carry longer- and shorter-term states from the first unit to the last unit, so its recall of long-term memory is superior to that of the RNN; the design of the LSTM can effectively handle long-term memory. Figure 1 shows the LSTM model used in this paper. The LSTM processes time series data sequentially and uses the output of the last time step to feed a linear regression layer that produces the prediction. The LSTM memory cell is controlled by three gates that regulate the messages passed along the sequence, thereby accurately capturing long-term dependencies. The three gates control the interaction between different memory cells: the input gate controls whether the input signal can modify the state of the memory cell, the forget gate controls whether to remember the previous state, and the output gate controls the output of the memory cell.

At each time step $t$, the hidden state $h_t$ is updated from the input $x_t$ and the previous hidden state $h_{t-1}$; the input gate is $i_t$, the output gate is $o_t$, the forget gate is $f_t$, and the storage unit is $s_t$. The relational equations are as follows:

$i_t = \lambda(w_i x_t + v_i h_{t-1} + a_i)$,  (1)
$f_t = \lambda(w_f x_t + v_f h_{t-1} + a_f)$,  (2)
$o_t = \lambda(w_o x_t + v_o h_{t-1} + a_o)$,  (3)
$s_t = f_t \times s_{t-1} + i_t \times \tanh(w_s x_t + v_s h_{t-1} + a_s)$,  (4)
$h_t = o_t \times \tanh(s_t)$,  (5)

where $w$, $v$, and $a$ are model parameters that are learned continuously during training; $\lambda$ and $\tanh$ are excitation functions, responsible for mapping the input of the neuron to the output; and $\times$ denotes the element-wise product of the corresponding positions of two matrices. The linear regression layer is

$y_i = w_r h_{t_i}$,  (6)

where $w_r$ is the weight parameter of the linear regression layer; formula (6) produces the prediction of the linear regression layer.
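To make equations (1)-(6) concrete, the following is a minimal NumPy sketch of a single LSTM step and of sequence prediction. It is an illustration, not the authors' implementation: the logistic sigmoid is assumed for the excitation function $\lambda$, and the dictionary `params` is a hypothetical container for the weights $w$, $v$ and biases $a$.

```python
import numpy as np

def sigmoid(z):
    # Assumed form of the paper's excitation function lambda (logistic sigmoid).
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, s_prev, params):
    """One LSTM time step following equations (1)-(5)."""
    p = params
    i_t = sigmoid(p["w_i"] @ x_t + p["v_i"] @ h_prev + p["a_i"])  # (1) input gate
    f_t = sigmoid(p["w_f"] @ x_t + p["v_f"] @ h_prev + p["a_f"])  # (2) forget gate
    o_t = sigmoid(p["w_o"] @ x_t + p["v_o"] @ h_prev + p["a_o"])  # (3) output gate
    # (4) storage unit: forget part of the old state, add the gated candidate.
    s_t = f_t * s_prev + i_t * np.tanh(p["w_s"] @ x_t + p["v_s"] @ h_prev + p["a_s"])
    h_t = o_t * np.tanh(s_t)                                      # (5) hidden state
    return h_t, s_t

def lstm_predict(xs, params, w_r):
    """Process the sequence step by step, then apply the linear regression layer (6)."""
    h = np.zeros_like(params["a_i"])
    s = np.zeros_like(params["a_i"])
    for x_t in xs:
        h, s = lstm_step(x_t, h, s, params)
    return w_r @ h  # (6): prediction from the last time step's hidden state
```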
[Figure 1: The LSTM prediction model: the input sequence x_1, x_2, ..., x_T is processed step by step, and the final hidden state h_T feeds a linear regression layer with regression target y.]

[Figure: LSTM-based autoencoder structure: inputs x_3, ..., x_n pass through LSTM layers with hidden state s(t) and are reconstructed as x_o3, ..., x_on.]
3.1.2. Autoencoders. The autoencoder (AE) consists of three parts, namely, the input layer, the hidden layer, and the output layer, and is an unsupervised learning algorithm. The autoencoder training process used in this paper is divided into two stages: the encoding stage compresses the input data, and the decoding stage reconstructs the output from the compressed data. The autoencoder structure is shown in Figure 2.

Given the unlabeled data set $x_n$, $n = 1, 2, \cdots, N$, the two stages of the autoencoder used in this paper can be expressed as follows:

$h_x = f(w_1 x + a_1)$,  (7)
$x_o = g(w_2 h_x + a_2)$,  (8)

where $w_1$ is the weight parameter from the input layer to the hidden layer, $w_2$ is the weight parameter from the hidden layer to the output layer, and $a_1$ and $a_2$ are the bias vectors of the two transformations, respectively. Encoding refers to the conversion from the input layer to the hidden layer, and decoding refers to the conversion from the hidden layer to the output layer; $f$ is the encoding function, $g$ is the decoding function, and $h_x$ represents the hidden encoding layer vector calculated from the input vector $x$. The relationship between the input vector $x$ and the output vector $x_o$ is

$x_o \approx x$.  (9)

The autoencoder is trained so that $x_o$ approximates $x$, thereby achieving a compressed representation of $x$. Autoencoders are often used to extract nonlinear features.

3.1.3. LSTM-Based Autoencoder Pretraining Model Construction. The forecasting of tourism demand is a time series forecasting problem, and an RNN-based structure is well suited to modeling time series data. This paper therefore proposes a new structure based on the LSTM and the autoencoder, which can extract features from time series problems: the encoding layer and decoding layer of the autoencoder are replaced with LSTM network layers, yielding an LSTM-based autoencoder pretraining model, as shown in Figure 3.

The LSTM-based autoencoder pretraining model consists of two LSTM layers, an LSTM encoding layer and an LSTM decoding layer. Given an input sequence $(x_1, x_2, \cdots, x_n)$, the encoder accepts the input sequence and encodes it into a learned representation vector; the decoding layer then takes this representation vector as input and tries to reconstruct the input sequence $(x_{o1}, x_{o2}, \cdots, x_{on})$. This structure is an unsupervised learning algorithm.

3.2. SAE-LSTM Prediction Model Construction. This subsection examines stacked autoencoders, the construction of the LSTM-based stacked autoencoder pretraining model, and the construction of the SAE-LSTM prediction model.
3.2.1. Stacked Autoencoders. Deep learning is a machine learning method based on learning from data. Data features are extracted through multiple layers of nonlinear processing units; each layer uses the output of the preceding layer as input, and the data is transformed from the bottom layer to the top layer. By stacking layers on top of each other, deep networks can learn richer representations and better identify correlations in the data. Stacked autoencoders (SAE) are composed of multiple autoencoders and essentially use the output of the hidden layer of the previous autoencoder as the input of the next autoencoder. Each autoencoder is utilized as a hidden layer, and many hidden layers are piled successively from the bottom to produce the stacked autoencoder deep network. In this structure, multiple autoencoders are stacked, each autoencoder layer is trained so that its input error is locally optimal, and the output of its hidden layer is used as the input layer of the next autoencoder layer. A single autoencoder learns a feature representation through the three-layer network of equation (10):

$x \rightarrow H_1 \rightarrow x_o$,  (10)
$H_1 = f_\mu(x)$.  (11)

The stacked autoencoder obtains $H_1$ through the training of the first autoencoder, then uses $H_1$ as the input to train the next autoencoder to obtain $H_2$, and then continues to train this deep learning structure. That is, equation (10) is first trained to obtain the transformation of equation (12); then, equation (13) is trained to obtain the transformation of equation (14); and finally, the SAE is stacked layer by layer:

$x \rightarrow H_1$,  (12)
$H_1 \rightarrow H_2 \rightarrow H_1$,  (13)
$H_1 \rightarrow H_2$.  (14)

3.2.2. LSTM-Based Stacked Autoencoder Pretraining Model Construction. A progressively, unsupervisedly pretrained stacked autoencoder can learn the features of the original data layer by layer, which makes it more suitable for complex features. Therefore, this paper proposes an LSTM-based stacked autoencoder pretraining model. The LSTM-based stacked autoencoder also uses greedy layer-wise pretraining, and its construction process is divided into the following three steps (a code sketch follows Section 3.2.3):

(1) First, train the first LSTM-based autoencoder; then, save its LSTM encoding layer and its learned network parameters, and use the first LSTM encoding layer as the input of the second LSTM-based autoencoder

(2) To train the second LSTM-based autoencoder, load the previously saved encoding layer and use it to encode the input; the autoencoder is trained to recreate the original input data, not the encoded input data, which allows the encoder to pick up characteristics of the original data and improves its performance. Save the learned network parameters and the second LSTM-based encoding layer, using the second LSTM-based encoding layer as an input to the third LSTM-based autoencoder

(3) Load the two saved encoder layers, use them to encode the input twice, and then continue to train the third LSTM-based autoencoder on this encoded version, thereby reconstructing the original input, and save the third LSTM-based autoencoder's encoding layer and its learned network parameters. And so on; this model can also be generalized to more than three layers

3.2.3. SAE-LSTM Prediction Model Construction. The SAE-LSTM tourism volume prediction model is constructed by using the LSTM-based stacked autoencoder to replace the random weight initialization used in the LSTM network. Taking the training of a stack of three LSTM-based autoencoders as an example, the SAE-LSTM model is divided into a pretraining stage and a fine-tuning stage; the three encoding layers and the optimized network parameters are saved in the pretraining stage. The fine-tuning stage is divided into two steps:

(1) Use the three LSTM encoding layers and their learned network parameters saved in the pretraining stage

(2) Add an output layer on top of the three hidden layers; it has only one node and is utilized to solve the tourism volume forecasting problem
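A compact sketch of how this two-stage procedure could look in Keras is given below. The helper `pretrain_layer`, the layer widths, the epoch counts, and the dense reconstruction head are illustrative assumptions, and the random arrays stand in for the real search-intensity windows; this is one plausible reading of the procedure, not the authors' code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def pretrain_layer(H, units):
    """Train one LSTM-based autoencoder on H; return a model exposing its encoding layer."""
    timesteps, n_features = H.shape[1], H.shape[2]
    inp = keras.Input(shape=(timesteps, n_features))
    enc = layers.LSTM(units, return_sequences=True)(inp)   # LSTM encoding layer
    dec = layers.LSTM(units, return_sequences=True)(enc)   # LSTM decoding layer
    out = layers.TimeDistributed(layers.Dense(n_features))(dec)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(H, H, epochs=10, verbose=0)                     # reconstruct the input
    return keras.Model(inp, enc)                           # saved encoder + parameters

X_train = np.random.rand(48, 12, 168).astype("float32")   # placeholder windows
y_train = np.random.rand(48, 1).astype("float32")          # placeholder arrivals

# Pretraining stage: three LSTM encoders, each trained on the previous encoding.
units = [128, 64, 32]                                      # assumed layer widths
encoders, H = [], X_train
for u in units:
    enc_model = pretrain_layer(H, u)
    encoders.append(enc_model)
    H = enc_model.predict(H, verbose=0)                    # input for the next autoencoder

# Fine-tuning stage: stack the three encoding layers and add a one-node output layer.
model = keras.Sequential([
    layers.LSTM(units[0], return_sequences=True),
    layers.LSTM(units[1], return_sequences=True),
    layers.LSTM(units[2]),             # same weights; only the last state is kept here
    layers.Dense(1),                   # one-node output layer for the volume forecast
])
model.build(input_shape=(None,) + X_train.shape[1:])
for lstm, enc in zip(model.layers[:3], encoders):
    lstm.set_weights(enc.layers[1].get_weights())          # replace random initialization
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=30, verbose=0)
```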
3.3. Selection of Model Evaluation Indicators. In order to evaluate the performance of the prediction model, this paper uses three evaluation indicators to evaluate the prediction effect: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The predicted values $y_p$ and the actual values $y$ are given as follows:

$y_p = \{y_{p1}, y_{p2}, \cdots, y_{pn}\}$,  (15)
$y = \{y_1, y_2, \cdots, y_n\}$.  (16)

The three index equations are as follows:

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_{pi} - y_i\right|$,  (17)
$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{pi} - y_i\right)^2}$,  (18)
$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_{pi} - y_i}{y_i}\right|$,  (19)

where MAE measures the average magnitude of a set of errors and is the sum of the absolute differences between $y$ and $y_p$ divided by the number of test samples; RMSE also measures the average magnitude of a set of errors and is the square root of the average of the squared differences between $y$ and $y_p$; and MAPE is the mean of the absolute value of each error divided by $y$. All three indicators are calculated from the predicted values and the actual values, and the larger the three values, the larger the error.

4. Experiment and Analysis

This section covers the data preprocessing, the experimental design and model parameter selection, and the analysis of the experimental results.

4.1. Data Preprocessing. This subsection describes the data sources and the data preprocessing.

4.1.1. Data Sources. The data set used in this paper consists of the monthly search engine intensity data for the number of monthly passenger arrivals and tourism-related influencing factors in a region from January 2014 to December 2019; the region is denoted city A. The monthly tourist arrivals are provided by the tourism bureau of the city government. The tourist arrivals collected from the DSEC website in this article are the tourist arrivals from the global market. The experimental data adopts the search engine intensity data of 168 influencing factors related to tourism in city A, of which 38 monthly search engine series are from Baidu and 130 are from Google. The influencing factors are expanded from seven tourism categories, and Table 1 lists some of the influencing factors used in the experimental data.

Table 1: Influencing factors of tourism in city A.

Travel category    Influencing factors
Dining             Gourmet food, snacks, famous restaurants
Lodging            Hotels, homestays, city bus
Tour               Tourist volume, tourism index
Clothing           Weather, air quality
Shopping           Shopping malls, shopping streets
Recreation         Bars, concerts
Transportation     Yachts, motorboats

To sum up, the experimental data in this paper consists of the monthly search intensity of 168 tourism keywords and the tourist arrivals in city A. This time series data is a list of ordered values.

The sliding window moves one month to the right at a time, and each window is always 12 months in length. In other words, the time series data employed in this study is cyclical, and the impact of data periodicity on the forecasting problem can be reduced by using a sliding window to convert the data to a fixed length. The time series is made up of sequences ordered chronologically, and the data for this study is collected monthly; this paper therefore converts the original data into time series data based on sliding windows. Given a time series $T$ and a window of length 12, $T = (x_1, x_2, \cdots, x_n)$, where $n$ is 72, representing the 72 months of data collected from January 2014 to December 2019; each $x$ is a 168-dimensional vector representing the monthly search intensity of the 168 tourism-related features. In this paper, the data is shifted with the shift function of pandas: the time window is first placed at the starting position of $T$ and is then moved forward one month at a time, so the second window starts at the second month of $T$, giving the second sequence of length 12, and so on. In total there are 60 sequences of length 12, $c_1, c_2, \cdots, c_{60}$, where $c_1 = (x_1, x_2, \cdots, x_{12})$ and $c_2 = (x_2, x_3, \cdots, x_{13})$. The converted data is as follows:

$W(c) = \{c_i \mid i = 1, 2, \cdots, 60\}$.  (21)
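As a sketch (the paper does not give its preprocessing code), the window construction of equation (21) can be reproduced as follows; the paper mentions pandas' shift function, and the direct slicing used here is equivalent. Random values stand in for the real search-intensity data, the target column is hypothetical, and the stop index assumes that each window is paired with the following month as its prediction target, which would explain the paper's count of 60 rather than 61 windows.

```python
import numpy as np
import pandas as pd

# 72 monthly observations (January 2014 - December 2019) of 168 features.
months = pd.date_range("2014-01-01", periods=72, freq="MS")
T = pd.DataFrame(np.random.rand(72, 168), index=months)  # placeholder data

window = 12  # each window spans 12 months
# Slide one month to the right at a time: c_1 = (x_1..x_12), c_2 = (x_2..x_13), ...
# The range stops at len(T) - window so each window still has a following month
# available as its target, giving 60 windows as in the paper (an assumption).
windows = [T.iloc[i:i + window].to_numpy() for i in range(len(T) - window)]
targets = [T.iloc[i + window, 0] for i in range(len(T) - window)]  # hypothetical target column

W = np.stack(windows)   # shape (60, 12, 168): the set {c_i | i = 1, ..., 60} of eq. (21)
print(W.shape)
```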
(1) [...] and then, the test set of this paper is the 12-month data set of 2016. When predicting the 12-month tourism arrivals of 2017, the first three years of data are used as the training set, the first 10% of the data are split from the training set as the validation set, and the 12-month data set of 2017 is used as the test set; and so on, to forecast the tourism volume from 2016 to 2019

(2) Use the training set to train each model separately and predict the total tourism volume with each model. Once a prediction model has predicted the tourism volume for a month, the corresponding real tourist volume is collected from the test set to produce new training data; then, the next step of training and prediction is carried out, and so on, to predict the number of tourists in city A from 2016 to 2019. The hyperparameters involved in this paper are set based on past experience, and a brute-force grid search is performed to obtain the best experimental results

(3) Save the experimental results of each model and analyze the experimental results

Table 2: SAE-LSTM experimental results with different numbers of hidden layers.

Number of hidden layers    MAPE
1                          3.158
2                          3.125
3                          3.388
4                          6.753
5                          6.923

[Figure: MAPE comparison plot; only the axis label and ticks survived extraction.]
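Steps (2) and (3), together with the indicators of Section 3.3, amount to a rolling (walk-forward) evaluation in which each month's true value joins the training data before the next prediction. A minimal sketch under that reading is shown below; `model` is any object exposing fit/predict methods (a hypothetical interface standing in for LSTM or SAE-LSTM).

```python
import numpy as np

def mae(y, yp):  return np.mean(np.abs(yp - y))          # equation (17)
def rmse(y, yp): return np.sqrt(np.mean((yp - y) ** 2))  # equation (18)
def mape(y, yp): return np.mean(np.abs((yp - y) / y))    # equation (19)

def walk_forward(model, windows, targets, n_test=12):
    """Rolling one-month-ahead evaluation over the last n_test months.

    windows: array of 12-month input windows; targets: next-month arrivals.
    model is assumed to expose fit(X, y) and predict(X) (hypothetical interface).
    """
    preds = []
    for t in range(len(windows) - n_test, len(windows)):
        model.fit(windows[:t], targets[:t])   # retrain on all data up to month t
        preds.append(float(model.predict(windows[t:t + 1])[0]))
        # targets[t] then becomes part of the training data on the next iteration
    yp = np.array(preds)
    y = np.asarray(targets[-n_test:], dtype=float)
    return mae(y, yp), rmse(y, yp), mape(y, yp)
```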
References

[3] P. Su, Study on Scenic Area Tourism Passenger Flow Short-Term Forecast Method, [Ph.D. thesis], Hefei University of Technology, 2013.
[4] C. Vu, "Effect of demand volume on forecasting accuracy," Tourism Economics, vol. 12, no. 2, pp. 263–276, 2006.
[5] W. Bi, Y. Liu, and H. Li, "Daily tourism volume forecasting for tourist attractions," Annals of Tourism Research, vol. 83, article 102923, 2020.
[6] S. Sakhuja, V. Jain, S. Kumar, C. Chandra, and S. K. Ghildayal, "Genetic algorithm based fuzzy time series tourism demand forecast model," Industrial Management & Data Systems, vol. 116, no. 3, pp. 483–507, 2016.
[7] L. Wu and J. Zhang, "The variance-covariance method using IOWGA operator for tourism forecast combination," International Journal of Supply and Operations Management, vol. 1, no. 2, pp. 152–166, 2014.
[8] X. Yang, B. Pan, A. Evans, and B. Lv, "Forecasting Chinese tourist volume with search engine data," Tourism Management, vol. 46, pp. 386–397, 2015.
[9] R. Kubicki and M. Kulbaczewska, "Modeling and forecasting of the volume of touristic flows in Poland," Zeszyty Naukowe Uniwersytetu Szczecińskiego. Ekonomiczne Problemy Turystyki, vol. 3, pp. 57–70, 2014.
[10] R. Law, G. Li, C. Fong, and X. Han, "Tourism demand forecasting: a deep learning approach," Annals of Tourism Research, vol. 75, pp. 410–423, 2019.
[11] H. Shen, Q. Wang, C. Ye, and J. S. Liu, "The evolution of holiday system in China and its influence on domestic tourism demand," Journal of Tourism Futures, vol. 4, no. 2, pp. 139–151, 2018.
[12] W. Chang and Y. Liao, "A seasonal ARIMA model of tourism forecasting: the case of Taiwan," Asia Pacific Journal of Tourism Research, vol. 15, no. 2, pp. 215–221, 2010.
[13] H. Aladag, E. Egrioglu, and C. Kadilar, "Improvement in forecasting accuracy using the hybrid model of ARFIMA and feed forward neural network," American Journal of Intelligent Systems, vol. 2, no. 2, pp. 12–17, 2012.
[14] G. Li, H. Song, and S. F. Witt, "Recent developments in econometric modeling and forecasting," Journal of Travel Research, vol. 44, no. 1, pp. 82–99, 2005.
[15] H. Song and G. Li, "Tourism demand modelling and forecasting–a review of recent research," Tourism Management, vol. 29, no. 2, pp. 203–220, 2008.
[16] G. Li, F. Wong, H. Song, and S. F. Witt, "Tourism demand forecasting: a time varying parameter error correction model," Journal of Travel Research, vol. 45, no. 2, pp. 175–185, 2006.
[17] U. Gunter and I. Önder, "Forecasting international city tourism demand for Paris: accuracy of uni- and multivariate models employing monthly data," Tourism Management, vol. 46, pp. 123–135, 2015.
[18] M. Uysal and J. L. Crompton, "An overview of approaches used to forecast tourism demand," Journal of Travel Research, vol. 23, no. 4, pp. 7–15, 1985.
[19] R. Rivera, "A dynamic linear model to forecast hotel registrations in Puerto Rico using Google Trends data," Tourism Management, vol. 57, pp. 12–20, 2016.
[20] S. Collins and M. Moons, "Reporting of artificial intelligence prediction models," The Lancet, vol. 393, no. 10181, pp. 1577–1579, 2019.
[21] Z. Wang and R. S. Srinivasan, "A review of artificial intelligence based building energy use prediction: contrasting the capabilities of single and ensemble prediction models," Renewable and Sustainable Energy Reviews, vol. 75, pp. 796–808, 2017.
[22] A. Palmer, J. J. Montano, and A. Sesé, "Designing an artificial neural network for forecasting tourism time series," Tourism Management, vol. 27, no. 5, pp. 781–790, 2006.
[23] R. Law, "Back-propagation learning in improving the accuracy of neural network-based tourism demand forecasting," Tourism Management, vol. 21, no. 4, pp. 331–340, 2000.
[24] W. Höpken, T. Eberle, M. Fuchs, and M. Lexhagen, "Improving tourist arrival prediction: a big data and artificial neural network approach," Journal of Travel Research, vol. 60, no. 5, pp. 998–1017, 2021.
[25] B. Ji, Y. Li, D. Cao, C. Li, S. Mumtaz, and D. Wang, "Secrecy performance analysis of UAV assisted relay transmission for cognitive network with energy harvesting," IEEE Transactions on Vehicular Technology, vol. 69, no. 7, pp. 7404–7415, 2020.
[26] X. Lin, J. Wu, S. Mumtaz, S. Garg, J. Li, and M. Guizani, "Blockchain-based on-demand computing resource trading in IoV-assisted smart city," IEEE Transactions on Emerging Topics in Computing, vol. 9, no. 3, pp. 1373–1385, 2021.
[27] J. Pei, K. Zhong, M. A. Jan, and J. Li, "Personalized federated learning framework for network traffic anomaly detection," Computer Networks, vol. 209, p. 108906, 2022.