Abstract
Deep artificial neural networks have become a strong alternative to classical forecasting methods for solving forecasting problems. Popular deep neural networks classically use additive aggregation functions in their cell structures. It is known from the literature that the use of multiplicative aggregation functions in shallow artificial neural networks produces successful results for the forecasting problem. One type of high-order shallow artificial neural network that uses multiplicative aggregation functions is the dendritic neuron model artificial neural network, which has successful forecasting performance. In this study, the dendritic neuron model is transformed into a multi-output architecture. A new dendritic cell based on the multi-output dendritic neuron model and a new deep artificial neural network are proposed. The training of this new deep dendritic artificial neural network is carried out with the differential evolution algorithm. The forecasting performance of the deep dendritic artificial neural network is compared with basic classical forecasting methods and some recent shallow and deep artificial neural networks over stock market time series. As a result, the deep dendritic artificial neural network is observed to produce very successful results for the forecasting problem.
1 Introduction
In time series forecasting, the past movements and realisations of the time series are taken into account and explained by statistical models. Classical forecasting methods are usually based on linear models of the lagged variables of the time series. Artificial neural networks can successfully solve the forecasting problem by combining flexible nonlinear functions and using lagged variables as inputs. Multilayer perceptron (MLP) artificial neural networks, proposed by Rumelhart et al. (1986), are the most frequently used shallow artificial neural network type in the literature for solving forecasting and classification problems. In recent studies, Borhani and Wong (2023) used an MLP artificial neural network to predict students' achievement. Shams et al. (2023) used an MLP artificial neural network to estimate air quality indexes. Park et al. (2023) used MLP to predict arsenate toxicity. Arumugam et al. (2024) trained an MLP artificial neural network with a crossover smell agent algorithm and used it with a convolutional neural network to detect brain tumours. Kumar et al. (2024) used MLP in the detection of vector-borne diseases. Shafiq et al. (2024) used MLP models to predict Darcy–Forchheimer tangent hyperbolic flow parameters. Chen et al. (2024b) proposed an MLP-based model for grey gas emissivity and absorptivity. In Chen et al. (2024a), MLP is one of the machine learning methods used for breast cancer diagnosis. Mariia (2024) used a three-layer MLP for yield forecasting and the control of microclimate parameters. Jiang et al. (2024) presented a method that produces interval predictions by combining MLP with a deep neural network built with CNN.
While MLP is based only on the additive aggregation function, artificial neural networks based on the multiplicative aggregation function have also been proposed in the literature. The single multiplicative neuron model (SMN) artificial neural network proposed by Yadav et al. (2007) can solve the forecasting problem with a single neuron as successfully as MLP. This has led to the investigation of artificial neural networks based on the multiplicative aggregation function, and many networks using the multiplicative aggregation function have been proposed in the literature. Zhao and Yang (2009) used a particle swarm optimization (PSO) algorithm; Burse et al. (2011) used an improved backpropagation algorithm; Worasucheep (2012) used a harmony search algorithm; Chatterjee et al. (2013) used standard backpropagation; Wu et al. (2013b) used online training algorithms; Cui et al. (2015) used an improved glowworm swarm optimization algorithm; Gundogdu et al. (2016) used PSO; Bas (2016) used a differential evolution algorithm; Nigam (2019) used the standard backpropagation learning algorithm; Kolay (2019) used the sine cosine algorithm; Yu et al. (2020) used a spherical search algorithm; Bas et al. (2020) used a hybrid algorithm based on artificial bat and backpropagation algorithms; and Egrioglu et al. (2023c) used a new genetic algorithm based on statistical replacement in the training of SMNM-ANN. Aladag (2013) used a multiplicative neuron model to establish fuzzy logic relationships. Wu et al. (2013a) proposed novel techniques based on the SMN model with iterated nonlinear filtering online training algorithms for engine systems reliability prediction. Velásquez et al. (2013) proposed a hybrid model based on SARIMA and a multiplicative neuron model for electricity demand forecasting. Wu et al. (2015) used a single multiplicative neuron model with nonlinear filters for hourly wind speed prediction. Basiouny et al. (2017) proposed a Wi-Fi fingerprinting indoor positioning system which utilizes the single multiplicative neuron. Yildirim et al. (2021) proposed a threshold single multiplicative neuron artificial neural network based on PSO and the harmony search algorithm. Wu et al. (2021) used an online nonlinear state space forecasting model based on the SMN model. Pan et al. (2021) used a modified double multiplicative neuron network for time-series interval prediction. Nigam and Bhatt (2023) proposed a single multiplicative neuron model for predicting crude oil prices and analyzing lag effects. Egrioglu and Bas (2023a) proposed a hybrid neural network combining simple exponential smoothing and the single multiplicative neuron model. Egrioglu et al. (2023a) proposed a new nonlinear causality test based on a single multiplicative neuron model artificial neural network. Kolay and Tunç (2023) proposed a new hybrid neural network classifier based on adaptive neurons and multiplicative neurons.
Shin and Ghosh (1991) proposed the Pi-Sigma artificial neural network, which is similar to MLP but, unlike MLP, uses a multiplicative aggregation function in the output layer. In the Pi-Sigma neural network, the weights between the hidden layer and the output layer are taken as fixed; in Egrioglu and Bas (2023b), where these weights are taken as trainable, it is shown that this provides an improvement over the Pi-Sigma ANN for the solution of the forecasting problem. Another artificial neural network that uses the multiplicative aggregation function is the Sigma-Pi artificial neural network, developed in Rumelhart and McClelland (1988) and Gurney (1989). The training of the Sigma-Pi artificial neural network with the grey wolf optimisation algorithm was proposed in Sarıkaya et al. (2023), with applications to the forecasting problem. Nie and Deng (2008) proposed a hybrid genetic learning algorithm for the Pi-Sigma neural network. Hussain et al. (2008) proposed a recurrent Pi-Sigma neural network for physical time series prediction. Ghazali and Al-Jumeily (2009) applied Pi-Sigma neural networks to financial time series prediction. Husaini et al. (2011) used a backpropagation algorithm on historical temperature data of Batu Pahat. Husaini et al. (2012) studied the effects of parameters on the Pi-Sigma neural network for temperature forecasting. Panigrahi et al. (2013) used a modified differential evolution algorithm in the training of a Pi-Sigma neural network for pattern classification. Nayak et al. (2014) used a hybrid training algorithm based on PSO and GA for the Pi-Sigma neural network. Nayak et al. (2015) used gradient descent and genetic algorithm methods in the training of Pi-Sigma artificial neural networks. Akdeniz et al. (2018) proposed a new recurrent architecture for Pi-Sigma artificial neural networks. Egrioglu et al. (2019) proposed a new intuitionistic fuzzy time series method based on Pi-Sigma artificial neural networks trained by an artificial bee colony. Nayak (2020) used a fireworks algorithm in the training of PS-ANN. Panda and Majhi (2020) used an improved spotted hyena optimizer algorithm in the training of the Pi-Sigma neural network. Pattanayak et al. (2020) used hybrid chemical reaction optimization in the training of the Pi-Sigma neural network. Panda and Majhi (2021) used the Salp swarm algorithm in the training of the Pi-Sigma neural network. Bas et al. (2021) used a sine cosine optimization algorithm in the training of a Pi-Sigma artificial neural network. Yılmaz et al. (2021) used a differential evolution algorithm in the training of Pi-Sigma artificial neural networks for forecasting. Kumar (2022) used a Lyapunov-stability-based context-layered recurrent Pi-Sigma neural network for the identification of nonlinear systems. Dash et al. (2023) used a shuffled differential evolution in the training of PS-ANN. Fan et al. (2023) proposed a new algorithm for Pi-Sigma neural networks with entropy error functions based on \(L_0\) regularization. Arslan and Cagcag Yolcu (2022) proposed an intuitionistic fuzzy time series forecasting model based on a hybrid Sigma-Pi neural network. Bas et al. (2023) proposed a robust algorithm based on PSO in the training of Pi-Sigma artificial neural networks for forecasting problems. Bas and Egrioglu (2023) proposed a new recurrent Pi-Sigma artificial neural network inspired by an exponential smoothing feedback mechanism for forecasting.
Another artificial neural network using the multiplicative neuron model is the dendritic neuron model artificial neural network (DNM-ANN) proposed in Todo et al. (2014), which takes different nonlinear transformations of all raw inputs and works on the transformed inputs, thus incorporating a data augmentation step into the process. The dendritic neuron model artificial neural network has been used in many studies in the literature for time series forecasting. Yu et al. (2016) used a dendritic neuron model for forecasting the house price index of China. Zhou et al. (2016) used a dendritic neuron model for time series forecasting. Chen et al. (2017) proposed a novel dendritic neuron model for tourism demand forecasting. Gao et al. (2018) used some popular artificial intelligence optimization algorithms in DNM-ANN for classification, approximation, and prediction. Song et al. (2020) used a dendritic neuron model for wind speed time series forecasting. Jia et al. (2018) proposed a flexible methodology combining a dendritic neuron model with a statistical test for forecasting. Qian et al. (2019) proposed a novel mutual information-based dendritic neuron model for classification. Song et al. (2019) used a social learning particle swarm optimization algorithm in the training of the dendritic neuron model. Jia et al. (2020) used backpropagation, biogeography-based optimization and a competitive swarm optimizer in the training of DNM-ANN for classification. Han et al. (2020) used a whale optimization algorithm in the training of the dendritic neuron model for classification. Wang et al. (2020b) proposed a dendritic neuron model with adaptive synapses trained by a differential evolution algorithm. Wang et al. (2020a) used states of matter search in their proposed median dendritic neuron model for forecasting. Yu et al. (2021) used a dynamic scale-free network-based differential evolution in the training of DNM-ANN. Luo et al. (2021) proposed a decision-tree-initialized dendritic neuron model for classification. Xu et al. (2021) used an information feedback-enhanced differential evolution algorithm in the training of a dendritic neuron model. In He et al. (2021), the time series were decomposed by the seasonal trend decomposition method and forecasts were obtained over the decomposed series with DNM-ANN. Tang et al. (2021) used an artificial immune system algorithm in the training of DNM-ANN. Nayak et al. (2022a) used chemical reaction optimization in the training of DNM-ANN for forecasting. Al-Qaness et al. (2022b) used a seagull optimization algorithm in the training of DNM-ANN for forecasting. Yilmaz and Yolcu (2022) used modified particle swarm optimization in the training of DNM-ANN for forecasting. He et al. (2022) used a coyote optimization algorithm in the training of DNM-ANN. Wang et al. (2022) proposed a novel dendritic convolutional neural network which considers the nonlinear information processing functions of dendrites in a single neuron. In Al-Qaness et al. (2022a), DNM-ANN was used for crude-oil-production forecasting. In Nayak et al. (2022b), an improved chemical reaction optimisation algorithm-based dendritic neuron model was proposed for financial time series forecasting. In Egrioglu et al. (2022), a recurrent dendritic neuron model artificial neural network was proposed for the first time in the literature; it has a structure in which the error of the network is fed back. Although Egrioglu et al. (2022) produces successful prediction results, it is not a deep neural network and does not have the advantage of increasing the number of hidden layers for more successful modelling. Wang et al. (2023) used the Levenberg–Marquardt algorithm with error selection in the training of DNM-ANN. Yılmaz and Yolcu (2023) proposed a robust algorithm based on Huber's loss function in the training of the dendritic neuron model artificial neural network for forecasting problems. Egrioglu et al. (2023b) proposed a robust algorithm with Tukey's weight loss function based on particle swarm optimization (PSO) in the training of a winsorized dendritic neuron model artificial neural network for forecasting problems. Olmez et al. (2023) proposed a bootstrapped dendritic neuron model artificial neural network based on PSO for forecasting problems. Gul et al. (2023) proposed statistical learning algorithms for dendritic neuron model artificial neural networks for forecasting. Zhang et al. (2023) proposed a dendritic neuron model optimized by meta-heuristics for financial time-series forecasting. Yuan et al. (2023) proposed a dendritic neuron model trained by an improved state-of-matter heuristic algorithm for forecasting. Cao et al. (2023) used an improved Adam optimizer to train a dendritic neuron model for water quality prediction. Bas et al. (2024) proposed a robust training of median dendritic artificial neural networks for time series forecasting. Deep artificial neural networks, which have produced very useful results, especially in the field of image processing, have started to be preferred in solving the forecasting problem in recent years.
Recurrent deep neural networks such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks have been the most frequently used deep neural networks in the field of forecasting, thanks to their structure using time steps. A summary of the literature on deep neural networks can be given as follows. Jiang and Hu (2018) used an LSTM model for day-ahead price forecasting for the electricity market. Chung and Shin (2018) used LSTM based on a genetic algorithm for stock market prediction. Tian et al. (2018) used LSTM and convolutional neural network (CNN) methods for load forecasting. In Bendali et al. (2020), the GRU-GA model was proposed for the estimation of photovoltaic energy production. Veeramsetty et al. (2021) carried out a study on load forecasting using factor analysis and LSTM. Liu et al. (2021) combined the LSTM model with online social networks for stock price prediction. In Guo and Mao (2020), the GRU-GA model was proposed for the charging estimation of electric vehicles. Gundu and Simon (2021) used LSTM based on PSO for the short-term forecasting of heterogeneous time series electricity prices. Inteha (2021) performed day-ahead short-term load forecasting with the GRU-GA model. Ning et al. (2022) compared the performance of ARIMA, LSTM and Prophet methods for oil production forecasting. Karasu and Altan (2022) used the LSTM method for oil time series prediction. Bilgili et al. (2022) performed electricity consumption forecasting using LSTM. Liu et al. (2022) proposed a new deep-learning forecasting method for satellite network traffic prediction by developing the GRU artificial neural network. Du et al. (2022) used LSTM based on particle swarm optimization for urban water demand. Gong et al. (2022) proposed an improved LSTM for the state-of-health estimation of lithium-ion batteries. Huang et al. (2022) used an LSTM model for well-performance prediction. In Liu et al. (2022), satellite network traffic prediction was performed using GRU-PSO. In Song et al. (2022), GRU-PSO was used for terminal cooling load estimation. Li et al. (2022) proposed a novel ensemble method based on Bidirectional-GRU and a sparrow search algorithm to forecast production. Lin et al. (2022) used gated recurrent unit deep neural networks for time series-based groundwater level forecasting.
When the literature on forecasting with artificial neural networks is examined, it is seen that the application and development of both shallow and deep artificial neural networks for the solution of the forecasting problem continues. These deep ANNs find wide application especially because the modular implementations of networks such as CNN, LSTM and GRU in ready-made packages and libraries offer convenience to practitioners. However, in recent years, shallow artificial neural networks based on different neuron models and different architectures have been shown to produce more successful prediction results than deep neural networks. As a result, the introduction of deep artificial neural networks built from different neuron models is likely to be the subject of future studies for researchers producing artificial neural network architectures and models.
The motivation of this study is to contribute to the solution of the forecasting problem by proposing a deep recurrent artificial neural network based on the dendritic neuron model, which has achieved successful forecasting results in recent years. The contributions of the work are as follows. The proposed new deep recurrent artificial neural network is named the deep dendritic artificial neural network (DeepDenT). To create the DeepDenT deep artificial neural network, a new "dendritic cell" structure is created. Dendritic cells, just like LSTM and GRU cells, can work like a mini neural network that receives many inputs and produces many outputs. For the generation of the dendritic cell, the DNM-ANN proposed by Todo et al. (2014) is modified into a multivariate dendritic neuron model (MDNM). DeepDenT is designed around a new architecture consisting of a hierarchical arrangement of dendritic cells. A training algorithm based on the differential evolution optimisation method proposed by Storn and Price (1997) is presented for training the DeepDenT neural network. The proposed training algorithm can escape local optimum traps more easily thanks to the restart strategy it contains, and can mitigate overfitting thanks to its early stopping condition. Since the proposed training algorithm does not require derivatives of the objective function, it does not suffer from the exploding or vanishing gradient problems seen in networks such as LSTM.
The rest of the paper is organized as follows. In the second part of the study, the newly proposed MDNM artificial neural network will be introduced. In the third part of the study, "dendritic cell" will be introduced. In the fourth section, the DeepDenT artificial neural network and its training algorithm will be introduced. In the fifth section, applications of stock market time series and comparison results with other methods in the literature will be presented. In the last section, the advantages, improvements and limitations of the DeepDenT artificial neural network will be discussed by considering the findings obtained in the application.
2 MDNM artificial neural network
DNM-ANN was proposed by Todo et al. (2014) in a multi-input, single-output structure and can be used to obtain forecasts of a single time series. In this section, the structure of the DNM-ANN is made multi-output to allow it to form a cell structure in a deep neural network, and the formulas for calculating the output of the network and the architecture of the MDNM are introduced. The architecture of the MDNM artificial neural network is given in Fig. 1. As can be seen from the figure, since more than one neuron is used in the output layer of the network, the number of parameters increases by two for each additional output. In addition, for the training of such a network, the total error over all outputs must be taken into account. The training problem of this network is outside the scope of this study, since the network is proposed here only as the building block of the "dendritic cell" structure.
What is important for this study is how to generate the outputs of the MDNM neural network for a given input set. The output of MDNM is given by the following equations. Synaptic functions for an MDNM with \(p\) inputs, \(m\) dendrites and \(n\) outputs are calculated as in Eq. (1).
In Eq. (1), \({w}_{ij}\) and \({\theta }_{ij}\) denote the weights and biases, respectively, and \(k\) is the slope parameter of the synaptic function.
Dendritic functions are calculated by multiplying synaptic functions as in Eq. (2). The values of the dendrite function are products of different nonlinear transformations of the inputs.
Membrane functions are calculated by summing the dendritic functions as in Eq. (3). Finally, the output of the network is calculated as in Eq. (4).
In Eq. (4), \({k}_{soma}^{l}\) and \({\theta }_{soma}^{l}\) denote the slope and centralization parameters, respectively. The membrane function value is the same input signal for all outputs, but since different parameter values are used in the activation function for each output, different output values are obtained. Here, the activation function parameters are what enable the network to produce different outputs. The total number of parameters in the MDNM artificial neural network is \(2pm+2n+1\), and the parameters of the network are listed in Table 1 together with the number of elements each contains.
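For reference, the computations referenced as Eqs. (1)–(4) can be sketched as follows, assuming the standard dendritic neuron model form of Todo et al. (2014) with sigmoidal synapses; the extension to \(n\) soma outputs is inferred from the description above, and the exact notation of the paper's equations may differ:

```latex
% Eq. (1): synaptic function for input i on dendrite j
Y_{ij} = \frac{1}{1 + e^{-k\,(w_{ij} x_i - \theta_{ij})}}, \qquad i = 1,\dots,p,\; j = 1,\dots,m
% Eq. (2): dendritic function (product of the synaptic outputs on dendrite j)
Z_j = \prod_{i=1}^{p} Y_{ij}
% Eq. (3): membrane function (sum of the dendritic outputs)
V = \sum_{j=1}^{m} Z_j
% Eq. (4): soma output l with its own slope and centralization parameters
O_l = \frac{1}{1 + e^{-k_{soma}^{l}\,(V - \theta_{soma}^{l})}}, \qquad l = 1,\dots,n
```

Counting these parameters gives \(pm\) weights plus \(pm\) biases, \(n\) soma slopes plus \(n\) soma centers, and one synaptic slope \(k\), matching the total \(2pm+2n+1\).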
3 Dendritic cell
To create the DeepDenT deep artificial neural network, a new "dendritic cell" (DnC) structure is created. Just like LSTM and GRU cells, a DnC can work like a mini neural network that receives many inputs and produces many outputs. The architectural structure of the DnC is given in Fig. 2.
In a DnC, given the number of features \(p\), the number of hidden layer units \(h\) and the number of dendrites \(m\), the output of the DnC is calculated by the following equations.
In Eq. (6), \({Y}^{\left(1\right)}\) represents the synaptic function values calculated for the inputs, and \(\sigma (\cdot)\) represents the logistic activation function.
In Eq. (8), \({Y}^{\left(2\right)}\) represents the synaptic function values calculated for the recurrent connections.
In Eqs. (6) and (8), \(\odot\) represents elementwise multiplication operation or Hadamard product. An example is given as follows for this product:
Dendritic function value is calculated by using Eqs. (10) or (11).
\(\circledast\) represents the column-wise cumulative product operation, i.e. the product of the elements in each column. An example of this product is given as follows:
The membrane function value is calculated as follows:
The elements of the output of the DnC are calculated by using Eq. (14).
The number of parameters is \(2pm+2k+1\) in a DnC. The dimensions of weights and biases in a dendritic cell are listed in Table 2.
The dimensions of calculated vectors and matrices in a DnC are given in Table 3.
- \({H}_{t}\), \({X}_{t-1}\): \(1\times p\)
- \({Y}^{\left(1\right)}\): \(p\times m\)
- \({Y}^{\left(2\right)}\): \(h\times m\)
- \(Z\): \(1\times m\)
- \(V\): \(1\times 1\)
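The DnC computations described above can be sketched in Python as follows. This is an illustrative sketch, not the authors' Matlab implementation: the function name `dnc_forward`, the argument layout, and the interpretation of \(\odot\) and \(\circledast\) as elementwise and column-wise products are assumptions based on the description above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnc_forward(x, h_prev, W1, T1, W2, T2, k, k_soma, theta_soma):
    """Sketch of a dendritic cell (DnC) forward pass.

    x          : (p,)   lagged inputs X_{t-1}
    h_prev     : (h,)   recurrent state H_t from the previous cell
    W1, T1     : (p, m) input weights and biases
    W2, T2     : (h, m) recurrent weights and biases
    k          : scalar synaptic slope
    k_soma, theta_soma : (k,) per-output soma slope and centralization parameters
    """
    # Eq. (6): synaptic functions for the inputs (elementwise products)
    Y1 = sigmoid(k * (W1 * x[:, None] - T1))        # (p, m)
    # Eq. (8): synaptic functions for the recurrent connections
    Y2 = sigmoid(k * (W2 * h_prev[:, None] - T2))   # (h, m)
    # Eqs. (10)-(11): dendritic function as the column-wise product
    Z = np.prod(Y1, axis=0) * np.prod(Y2, axis=0)   # (m,)
    # Eqs. (12)-(13): membrane function sums the dendrite values
    V = np.sum(Z)                                   # scalar
    # Eq. (14): one soma output per (k_soma, theta_soma) pair
    return sigmoid(k_soma * (V - theta_soma))       # (k,)
```

Because every soma output is a sigmoid of the same scalar \(V\), the outputs differ only through their slope and centralization parameters, as noted for the MDNM above.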
4 DeepDenT artificial neural network and its training algorithm
The DeepDenT artificial neural network is a deep recurrent artificial neural network that combines DnCs. In the output layer of DeepDenT, there is a classical fully connected (FC) layer based on the additive aggregation function. DeepDenT is a partially connected artificial neural network with DnCs in a sequential and hierarchical structure. The architectural structure of DeepDenT is given in Fig. 3.
The DnC drawn with a dark cell wall is the last cell calculated before the output. The input of a DnC in DeepDenT is a lagged-variable vector \({x}_{t}=({y}_{t},{y}_{t-1},\dots ,{y}_{t-p+1})\) according to the number of hidden layer nodes in Fig. 3. The input for the DnC in the lower left corner of the architecture is \({x}_{t-h}=({y}_{t-h-1},{y}_{t-h-2},\dots ,{y}_{t-h-p})\) in Fig. 3. The output of the DeepDenT deep recurrent artificial neural network is the one-step-ahead forecast of the time series. The architecture in Fig. 3 has \(h\) time steps, \(q\) hidden layers, \(m\) dendrites and \(p\) inputs or features. The number of neurons in each hidden layer is equal to \(h\). The weight and bias values of all DnCs in the same hidden layer are taken to be equal. This parameter sharing reduces the number of parameters and yields a common DnC that implements the same mathematical model in all time steps. These weight and bias values change across hidden layers; that is, increasing the number of hidden layers increases the number of parameters of the network, while the number of time steps does not affect the number of parameters, as in LSTM and GRU. The output of the DeepDenT is calculated with the following formulas. For ease of illustration, the parameters of DeepDenT for a hidden layer are combined into a single parameter set given in (17). As can be seen in (17), the parameters change from hidden layer to hidden layer but not across the units, i.e. time steps, of the same hidden layer: the parameters are shared.
The output of the first hidden layer of DeepDenT is calculated by Eq. (18). Here, \(f\) is a representative function, and the calculations are performed with the formulas given in the DnC section.
In Eq. (18), \({h}_{t-k}^{1}\) indicates the output obtained in the first hidden layer for the \(k\)th time step at time \(t\). Starting from the second hidden layer, the calculations are performed with Eq. (19) until the output of DeepDenT is obtained.
The computation of DnCs in DeepDenT is performed in hidden-layer order and from left to right within the same hidden layer. The output of the DnC shown in dark colour in Fig. 3 is \({h}_{t-1}^{q}\). The final output of DeepDenT is calculated as the output of the FC layer with the formula given in Eq. (21).
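The layer-by-layer, left-to-right computation with parameter sharing can be sketched as follows. Here `toy_cell` is a hypothetical stand-in for the DnC computation, and the equal input/state dimension `d` is a simplifying assumption for illustration, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def toy_cell(x, h_prev, params):
    """Stand-in for a DnC: any map (input, previous state, params) -> state works."""
    W, U = params
    return sigmoid(W @ x + U @ h_prev)

def deepdent_forward(windows, layer_params, fc_w, fc_b, cell=toy_cell):
    """Sketch of the DeepDenT forward pass.

    windows      : h lagged input vectors (oldest first), each of dimension d
    layer_params : one parameter set per hidden layer, shared by all time steps
    fc_w, fc_b   : fully connected output layer producing the one-step forecast
    """
    states = list(windows)
    d = windows[0].shape[0]
    for params in layer_params:            # hidden layers, bottom to top
        h = np.zeros(d)                    # initial recurrent state
        new_states = []
        for x in states:                   # time steps, left to right
            h = cell(x, h, params)         # same shared params at every step
            new_states.append(h)
        states = new_states                # feed the next hidden layer
    # FC output layer (Eq. (21)): one-step-ahead forecast
    return float(fc_w @ states[-1] + fc_b)
```

Note how the parameter count depends on the number of hidden layers (one `params` set per layer) but not on the number of time steps, matching the parameter-sharing description above.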
After presenting the computational formulas of DeepDenT, the most important problem is to propose the training algorithm for this network. The training algorithm of DeepDenT based on the differential evolution optimisation (DEO) method is given in steps in the following Algorithm.
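A minimal sketch of differential evolution training with a restart strategy and an early stopping condition, in the spirit of the algorithm above, is given below. The DE/rand/1/bin scheme, the population size, and the exact restart and stopping rules shown here are illustrative assumptions, not the authors' precise algorithm.

```python
import numpy as np

def de_train(loss, dim, pop_size=30, F=0.5, CR=0.8, max_gen=200,
             patience=20, restart_every=50, seed=0):
    """Differential evolution (DE/rand/1/bin) sketch for training DeepDenT
    weights; `loss` maps a weight vector to a (validation) error."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, (pop_size, dim))
    fit = np.array([loss(ind) for ind in pop])
    best, best_fit, stall = pop[fit.argmin()].copy(), fit.min(), 0
    for gen in range(max_gen):
        for i in range(pop_size):
            # mutation: v = a + F * (b - c); index i not excluded, for brevity
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            # binomial crossover with at least one mutant component
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            f = loss(trial)
            if f < fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, f
        if fit.min() < best_fit:
            best, best_fit, stall = pop[fit.argmin()].copy(), fit.min(), 0
        else:
            stall += 1
        if stall >= patience:                              # early stopping
            break
        if (gen + 1) % restart_every == 0:                 # restart strategy:
            pop = rng.uniform(-1, 1, (pop_size, dim))      # reseed population,
            pop[0] = best                                  # keep best-so-far
            fit = np.array([loss(ind) for ind in pop])
    return best, best_fit
```

Because only loss evaluations are needed, no derivative of the network output is ever computed, which is the derivative-free property emphasised above.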
The algorithm of the proposed method was coded in Matlab and shared publicly on GitHub at https://github.com/erole1977/DeepDenT. These codes can be used to reproduce the obtained results. Numerical differences may occur when the results are recalculated, since the initial random weights are generated according to the system clock of the computer at the time the codes are executed. However, the ranking of the methods is not expected to change.
The computation time of a time series with 250 observations for a single architecture varies between 2.07 s and 5.02 s. The computation time can be affected by random initial values. When considered together with hyperparameter optimization, the total computation time of a time series with 250 observations takes between 20 and 22 min. A personal computer (12th Gen Intel(R) Core(TM) i5-12500H 2.50 GHz processor with 16 GB RAM) was used in the calculations.
5 Applications
In the application, the performance of the proposed method is investigated for a total of 20 time series from two stock market indices, one from Turkey and one from the USA. The first time series analysed is the S&P500 index (S&P 500 (GSPC), SNP—SNP Real-Time Price, currency in USD). The time series given in Table 4, consisting of opening values between 2014 and 2018, were randomly selected for use in the application. The lengths of the time series were taken as 250 and 500 to cover approximately one and two years, respectively. Thanks to the random selection, opening values from different periods of the year can be used as test data in the comparison.
The performance of the proposed method is compared with some popular and recent ANN methods and some classical forecasting methods. The LSTM proposed in Hochreiter and Schmidhuber (1997), the pi-sigma ANN (PSGM) proposed in Shin and Ghosh (1991) and the bootstrapped hybrid ANN (B-HANN) proposed in Egrioglu and Fildes (2022) were used in the comparison. As classical forecasting methods, the random walk and Holt's linear trend exponential smoothing method were used.
For all time series used in the application, the data set is divided into three parts, training, validation and test data, in a block structure. The parameters of the methods were estimated on the training data, while the forecasting performances were calculated on the validation and test data. For all methods, similar candidate values for the hyperparameters were considered, and the best hyperparameter values were selected on the validation set. Using the best hyperparameter values, 30 different test-set performances were obtained by training the methods on the data other than the test set with 30 different random initialisations. The statistics of the RMSE values obtained for the test-set performance are presented in the tables.
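The block (chronological) split described above can be sketched as follows; the 70/15/15 fractions are illustrative assumptions, not the paper's exact proportions.

```python
import numpy as np

def block_split(series, train_frac=0.7, val_frac=0.15):
    """Chronological split into training, validation and test blocks,
    preserving time order (no shuffling), as in the experimental design."""
    n = len(series)
    i = int(n * train_frac)
    j = int(n * (train_frac + val_frac))
    return series[:i], series[i:j], series[j:]
```

Keeping the blocks in time order matters here: a random split would leak future observations into the training data and inflate the apparent forecasting accuracy.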
The following direction accuracy (DA) criterion is used to assess the directional accuracy of the methods. The DA criterion is calculated for the best architecture in the tables and given alongside the RMSE statistics.
where \({x}_{t}\) is the value of the time series at time t and \({\widehat{x}}_{t}\) is the forecast value at time t.
The RMSE statistics for the test data forecasting performance obtained for the time series given in Table 4 are given in Table 5, and the best hyperparameter values of all methods are given in Table 6.
Table 5 shows that DeepDenT has superior forecasting performance compared to all other methods, having a lower average RMSE value in 8 out of 10 time series, i.e. 80%. In particular, it produces RMSE results with both a lower mean and a lower standard deviation on the test set than the LSTM method, which is the most popular deep ANN for the forecasting problem. It can be concluded that DeepDenT is more successful than all other methods for the S&P500 time series and should be preferred.
When the LSTM, PSGM, B-HANN and DeepDenT methods are compared according to the DA criterion, none of the methods shows a clear superiority over the others; in general, the methods have a directional accuracy between 50 and 65%. DeepDenT has the highest directional accuracy in 30% of the S&P500 series and is the second-best method after LSTM in terms of directional accuracy. It should be noted, however, that the DA criterion is not meaningful on its own, as it only measures the direction of the forecast.
The second application was carried out on the opening values of the Borsa Istanbul 100 index (BIST100) between 01/02/2014 and 09/02/2018. The random series obtained for BIST100 are given in Table 7.
The RMSE statistics for the test-set forecasting performance of the time series given in Table 7 are given in Table 8, and the best hyperparameter values of all methods are given in Table 9.
Table 8 shows that DeepDenT achieves a lower average RMSE value than all other methods in 5 of the 10 time series (50%). In particular, it produces RMSE results with both a lower mean and a lower standard deviation on the test set than the LSTM method, the most popular deep ANN for the forecasting problem. It can be concluded that DeepDenT is more successful than the other methods for the BIST100 time series and should be preferred.
When the LSTM, PSGM, BHANN and DeepDenT methods are compared according to the DA criterion, no method shows a clear superiority over the others; in general, the methods have a directional accuracy between 50 and 65%. DeepDenT has the highest directional accuracy in 40% of the BIST100 series and, together with LSTM, is the best method in terms of directional accuracy.
In Fig. 4, box plots of the RMSE values calculated over the test set for both stock market data sets are given separately for all methods. Although the results of the methods do not follow a normal distribution, it can be seen that DeepDenT has the lowest median.
6 Conclusions and discussion
In this study, a new deep artificial neural network, DeepDenT, is proposed to solve the forecasting problem. In addition, a training algorithm based on the differential evolution algorithm is proposed for DeepDenT. Since the proposed training algorithm includes a restart strategy and early stopping conditions, it can produce successful training results for DeepDenT. The performance of DeepDenT is investigated on 20 time series obtained from two stock exchanges. As a result of the application, it was observed that the proposed method produces successful results compared with both popular and recent artificial neural networks and classical forecasting methods. The performance of the proposed neural network under training algorithms based on different artificial intelligence optimisation methods is one of the topics to be investigated in the future. Another future study is to transform the proposed neural network into a fully automatic forecasting method. For this purpose, input significance tests and different statistical tools are planned to be used for the proposed ANN.
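The kind of training scheme described here, differential evolution combined with a restart strategy and early stopping, can be sketched generically as below. This is an illustrative DE/rand/1/bin loop under assumed control parameters, not the authors' exact algorithm.

```python
import numpy as np

def de_train(loss, dim, pop_size=30, max_gen=200, F=0.8, CR=0.9,
             patience=60, restart_after=25, seed=0):
    """Generic DE/rand/1/bin with a simple restart strategy and early
    stopping. Illustrative sketch; all control parameters are assumptions."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fitness = np.array([loss(ind) for ind in pop])
    best = pop[fitness.argmin()].copy()
    best_f = float(fitness.min())
    stall = 0
    for _ in range(max_gen):
        for i in range(pop_size):
            # DE/rand/1 mutation with three distinct population members
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutant = a + F * (b - c)
            # binomial crossover, forcing at least one mutant component
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = loss(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
        if fitness.min() < best_f:
            best = pop[fitness.argmin()].copy()
            best_f = float(fitness.min())
            stall = 0
        else:
            stall += 1
        if stall >= patience:                     # early stopping condition
            break
        if stall and stall % restart_after == 0:  # restart strategy
            pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
            pop[0] = best                         # keep the best solution found
            fitness = np.array([loss(ind) for ind in pop])
    return best, best_f
```

In a network-training setting, `loss` would map a flattened weight vector to a training error such as RMSE; here any scalar objective can be plugged in.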
Data availability
Data will be made available on request.
References
Akdeniz E, Egrioglu E, Bas E, Yolcu U (2018) An ARMA type pi-sigma artificial neural network for nonlinear time series forecasting. J Artif Intell Soft Comput Res 8(2):121–132
Aladag CH (2013) Using multiplicative neuron model to establish fuzzy logic relationships. Expert Syst Appl 40(3):850–853
Al-Qaness MA, Ewees AA, Abualigah L, AlRassas AM, Thanh HV, Abd Elaziz M (2022a) Evaluating the applications of dendritic neuron model with metaheuristic optimization algorithms for crude-oil-production forecasting. Entropy 24(11):1674
Al-Qaness MA, Ewees AA, Elaziz MA, Samak AH (2022b) Wind power forecasting using optimized dendritic neural model based on seagull optimization algorithm and aquila optimizer. Energies 15(24):9261
Arslan SN, CagcagYolcu O (2022) A hybrid sigma-pi neural network for combined intuitionistic fuzzy time series prediction model. Neural Comput Appl 34(15):12895–12917
Arumugam M, Thiyagarajan A, Adhi L, Alagar S (2024) Crossover smell agent optimized multilayer perceptron for precise brain tumor classification on MRI images. Expert Syst Appl 238:121453
Bas E (2016) The training of multiplicative neuron model based artificial neural networks with differential evolution algorithm for forecasting. J Artif Intell Soft Comput Res 6(1):5–11
Bas E, Egrioglu E (2023) A new recurrent pi-sigma artificial neural network inspired by exponential smoothing feedback mechanism. J Forecast 42(4):802–812
Bas E, Egrioglu E, Cansu T (2024) Robust training of median dendritic artificial neural networks for time series forecasting. Expert Syst Appl 238:122080
Bas E, Egrioglu E, Karahasan O (2021) A Pi-Sigma artificial neural network based on sine cosine optimization algorithm. Granular Comput 7:813–820
Bas E, Egrioglu E, Yolcu U (2020) A hybrid algorithm based on artificial bat and backpropagation algorithms for multiplicative neuron model artificial neural networks. J Ambient Intell Human Comput 1–9
Bas E, Egrioglu E, Yolcu U, Chen MY (2023) A robust learning algorithm based on particle swarm optimization for Pi-Sigma artificial neural networks. Big Data 11(2):105–116
Basiouny Y, Arafa M, Sarhan AM (2017) Enhancing Wi-Fi fingerprinting for indoor positioning system using single multiplicative neuron and PCA algorithm. In 2017 12th international conference on computer engineering and systems, pp 295–305
Bendali W, Saber I, Bourachdi B, Boussetta M, Mourad Y (2020) Deep learning using genetic algorithm optimization for short term solar irradiance forecasting. In 2020 fourth international conference on intelligent computing in data sciences, pp 1–8
Bilgili M, Arslan N, Şekertekin A, Yaşar A (2022) Application of long short-term memory (LSTM) neural network based on deep learning for electricity energy consumption forecasting. Turk J Electr Eng Comput Sci 30(1):140–157
Borhani K, Wong RTK (2023) An artificial neural network for exploring the relationship between learning activities and students’ performance. Decision Anal J 9:100332
Burse K, Manoria M, Kirar VPS (2011) Improved back propagation algorithm to avoid local minima in multiplicative neuron model. In international conference on advances in information technology and mobile communication, pp 67–73
Cao J, Zhao D, Tian C, Jin T, Song F (2023) Adopting improved Adam optimizer to train dendritic neuron model for water quality prediction. Math Biosci Eng 20(5):9489–9510
Chatterjee S, Singh JB, Nigam S, Upadhyaya LN (2013) A study of a single multiplicative neuron (SMN) model for software reliability prediction. Innov Intell Mach-3: Contemp Ach Intell Syst 89–102
Chen T, Zhou X, Wang G (2024a) Using an innovative method for breast cancer diagnosis based on extreme gradient boost optimized by simplified memory bounded. Biomed Signal Process Control 87:105450
Chen W, Ren T, Zhao C (2024b) A machine learning based model for gray gas emissivity and absorptivity of H2O-CO2-CO-N2 mixtures. J Quant Spectrosc Radiat Transfer 312:108798
Chen W, Sun J, Gao S, Cheng JJ, Wang J, Todo Y (2017) Using a single dendritic neuron to forecast tourist arrivals to Japan. IEICE Trans Inf Syst 100(1):190–202
Chung H, Shin KS (2018) Genetic algorithm-optimized long short-term memory network for stock market prediction. Sustainability 10(10):3765
Cui H, Feng J, Guo J, Wang T (2015) A novel single multiplicative neuron model trained by an improved glowworm swarm optimization algorithm for time series prediction. Knowl-Based Syst 88:195–209
Dash R, Rautray R, Dash R (2023) Utility of a shuffled differential evolution algorithm in designing of a Pi-sigma neural network based predictor model. Appl Comput Inform 19(1/2):22–40
Du B, Huang S, Guo J, Tang H, Wang L, Zhou S (2022) Interval forecasting for urban water demand using PSO optimized KDE distribution and LSTM neural networks. Appl Soft Comput 122:108875
Egrioglu E, Bas E (2023a) A new hybrid recurrent artificial neural network for time series forecasting. Neural Comput Appl 35(3):2855–2865
Egrioglu E, Bas E (2023b) Modified pi sigma artificial neural networks for forecasting. Gran Comput 8(1):131–135
Egrioglu E, Bas E, Cansu T, Kara MA (2023a) A new nonlinear causality test based on single multiplicative neuron model artificial neural network: a case study for Turkey’s macroeconomic indicators. Gran Comput 8(2):391–396
Egrioglu E, Bas E, Chen MY (2022) Recurrent dendritic neuron model artificial neural network for time series forecasting. Inf Sci 607:572–584
Egrioglu E, Bas E, Karahasan O (2023b) Winsorized dendritic neuron model artificial neural network and a robust training algorithm with Tukey’s biweight loss function based on particle swarm optimization. Gran Comput 8(3):491–501
Egrioglu E, Fildes R (2022) A new bootstrapped hybrid artificial neural network approach for time series forecasting. Comput Econ 59:1355–1383
Egrioglu E, Grosan C, Bas E (2023c) A new genetic algorithm method based on statistical-based replacement for the training of multiplicative neuron model artificial neural networks. J Supercomput 79(7):7286–7304
Egrioglu E, Yolcu U, Bas E (2019) Intuitionistic high-order fuzzy time series forecasting method based on pi-sigma artificial neural networks trained by artificial bee colony. Gran Comput 4:639–654
Fan Q, Zheng F, Huang X, Xu D (2023) Convergence analysis for sparse Pi-sigma neural network model with entropy error function. Int J Mach Learn Cybern 14:4405–4416
Gao S, Zhou M, Wang Y, Cheng J, Yachi H, Wang J (2018) Dendritic neuron model with effective learning algorithms for classification, approximation, and prediction. IEEE Trans Neural Netw Learn Syst 30(2):601–614
Ghazali R, Al-Jumeily D (2009) Application of pi-sigma neural networks and ridge polynomial neural networks to financial time series prediction. In artificial higher order neural networks for economics and business, pp 271–293
Gong Y, Zhang X, Gao D, Li H, Yan L, Peng J, Huang Z (2022) State-of-health estimation of lithium-ion batteries based on improved long short-term memory algorithm. J Energy Storage 53:105046
Gul HH, Egrioglu E, Bas E (2023) Statistical learning algorithms for dendritic neuron model artificial neural network based on sine cosine algorithm. Inf Sci 629:398–412
Gundogdu O, Egrioglu E, Aladag CH, Yolcu U (2016) Multiplicative neuron model artificial neural network based on Gaussian activation function. Neural Comput Appl 27:927–935
Gundu V, Simon SP (2021) PSO–LSTM for short-term forecast of heterogeneous time series electricity price signals. J Ambient Intell Humaniz Comput 12:2375–2385
Guo X, Mao J (2020) Prediction of gas concentration based on PSO optimized GRU Neural network. In 2020 IEEE international conference on ınformation technology, big data and artificial ıntelligence, pp 1126–1130
Gurney K (1989) Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK
Han Z, Shi J, Todo Y, Gao S (2020) Training dendritic neuron model with whale optimization algorithm for classification. In 2020 IEEE international conference on progress in informatics and computing, pp 11–15
He H, Gao S, Jin T, Sato S, Zhang X (2021) A seasonal-trend decomposition-based dendritic neuron model for financial time series prediction. Appl Soft Comput 108:107488
He H, Xue C, Li Q, Gao S. (2022) A coyote optimization algorithm-trained dendritic neuron model for prediction tasks. In 2022 joint 12th international conference on soft computing and intelligent systems and 23rd international symposium on advanced intelligent systems, pp 1–6
Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
Huang R, Wei C, Wang B, Yang J, Xu X, Wu S, Huang S (2022) Well performance prediction based on long short-term memory (LSTM) neural network. J Pet Sci Eng 208:109686
Husaini NA, Ghazali R, Nawi NM, Ismail LH (2011) Pi-Sigma neural network for temperature forecasting in Batu Pahat. In software engineering and computer systems: second international conference, pp 530–541.
Husaini NA, Ghazali R, Nawi NM, Ismail LH (2012) The effect of network parameters on pi-sigma neural network for temperature forecasting. In international journal of modern physics: conference series, pp 440–447
Hussain AJ, Liatsis P, Tawfik H, Nagar AK, Al-Jumeily D (2008) Physical time series prediction using recurrent pi-sigma neural networks. Int J Artif Intell Soft Comput 1(1):130–145
Inteha A (2021) A GRU-GA hybrid model based technique for short term electrical load forecasting. In 2021 2nd International conference on robotics, electrical and signal processing techniques, pp 515–519
Jia D, Fujishita Y, Li C, Todo Y, Dai H (2020) Validation of large-scale classification problem in dendritic neuron model using particle antagonism mechanism. Electronics 9(5):792
Jia D, Zheng S, Yang L, Todo Y, Gao S (2018) A dendritic neuron model with nonlinearity validation on Istanbul stock and Taiwan futures exchange indexes prediction. In 2018 5th IEEE international conference on cloud computing and intelligence systems, pp 242–246
Jiang L, Hu G (2018) Day-ahead price forecasting for electricity market using long-short term memory recurrent neural network. In: proceedings of IEEE international conference on control, automation, robotics and vision, pp 949–954
Jiang M, Chen W, Xu H, Liu Y (2024) A novel interval dual convolutional neural network method for interval-valued stock price prediction. Pattern Recogn 145:109920
Karasu S, Altan A (2022) Crude oil time series prediction model based on LSTM network with chaotic Henry gas solubility optimization. Energy 242:122964
Kolay E (2019) A novel multiplicative neuron model based on sine cosine algorithm for time series prediction. Eskişehir Techn Univ J Sci Technol A-Appl Sci Eng 20(2):153–160
Kolay E, Tunç T (2023) A new hybrid neural network classifier based on adaptive neuron and multiplicative neuron. Soft Comput 27(3):1797–1808
Kumar S, Srivastava A, Maity R (2024) Modeling climate change impacts on vector-borne disease using machine learning models: case study of Visceral leishmaniasis (Kala-azar) from Indian state of Bihar. Expert Syst Appl 237:121490
Kumar R (2022) A Lyapunov-stability-based context-layered recurrent pi-sigma neural network for the identification of nonlinear systems. Appl Soft Comput 122:108836
Li X, Ma X, Xiao F, Xiao C, Wang F, Zhang S (2022) Time-series production forecasting method based on the integration of bidirectional gated recurrent unit (Bi-GRU) network and sparrow search algorithm (SSA). J Petrol Sci Eng 208:109309
Lin H, Gharehbaghi A, Zhang Q, Band SS, Pai HT, Chau KW, Mosavi A (2022) Time series-based groundwater level forecasting using gated recurrent unit deep neural networks. Eng Appl Comput Fluid Mech 16(1):1655–1672
Liu K, Zhou J, Dong D (2021) Improving stock price prediction using the long short-term memory model combined with online social networks. J Behav Exp Finance 30:100507
Liu Z, Li W, Feng J, Zhang J (2022) Research on satellite network traffic prediction based on improved GRU neural network. Sensors 22(22):8678
Luo X, Wen X, Zhou M, Abusorrah A, Huang L (2021) Decision-tree-initialized dendritic neuron model for fast and accurate data classification. IEEE Trans Neural Netw Learn Syst 33(9):4173–4183
Mariia M (2024) Methodology for controlling greenhouse microclimate parameters and yield forecast using neural network technologies. Stud Syst Decis Control 439:245–277
Nayak J, Naik B, Behera HS (2014) A hybrid PSO-GA based Pi sigma neural network (PSNN) with standard back propagation gradient descent learning for classification. In 2014 international conference on control, instrumentation, communication and computational technologies (iccicct), pp 878–885
Nayak SC (2020) A fireworks algorithm-based Pi-Sigma neural network (FWA-PSNN) for modelling and forecasting chaotic crude oil price time series. EAI Endorsed Trans Energy Web 7(28):e2–e2
Nayak SC, Dehuri S, Cho SB (2022a) CRODNM: Chemical Reaction optimization of dendritic neuron models for forecasting net asset values of mutual funds. In international conference on innovations in intelligent computing and communications, pp 299–312
Nayak SC, Dehuri S, Cho SB (2022b) Intelligent financial forecasting with an improved chemical reaction optimization algorithm based dendritic neuron model. IEEE Access 10:130921–130943
Nayak SC, Misra BB, Behera HS (2015) A pi-sigma higher order neural network for stock index forecasting. In computational intelligence in data mining, pp 311–319
Nie Y, Deng W (2008) A hybrid genetic learning algorithm for Pi-sigma neural network and the analysis of its convergence. In 2008 fourth international conference on natural computation, pp 19–23
Nigam S (2019) Single multiplicative neuron model in reinforcement learning. harmony search and nature inspired optimization algorithms: theory and applications. ICHSA, pp 889–895
Nigam S, Bhatt V (2023) Single multiplicative neuron model in predicting crude oil prices and analyzing lag effects. In computer vision and robotics: proceedings of CVR 2022, pp 517-526
Ning Y, Kazemi H, Tahmasebi P (2022) A comparative machine learning study for time series oil production forecasting: ARIMA, LSTM, and Prophet. Comput Geosci 164:105126
Olmez E, Egrioglu E, Bas E (2023) Bootstrapped dendritic neuron model artificial neural network for forecasting. Gran Comput 8:1689–1699
Pan W, Feng L, Zhang L, Cai L, Shen C (2021) Time-series interval prediction under uncertainty using modified double multiplicative neuron network. Expert Syst Appl 184:115478
Panda N, Majhi SK (2020) Improved spotted hyena optimizer with space transformational search for training Pi-sigma higher order neural network. Comput Intell 36(1):320–350
Panda N, Majhi SK (2021) Effectiveness of swarm-based metaheuristic algorithm in data classification using pi-sigma higher order neural network. In progress in advanced computing and intelligent engineering, pp 77–88
Panigrahi S, Bhoi AK, Karali Y (2013) A modified differential evolution algorithm trained pi-sigma neural network for pattern classification. Int J Soft Comput Eng 3(5):133–136
Park J, Yang JH, Jung J, Kwak IS, Choe JK, An J (2023) Comparative analysis of the capability of the extended biotic ligand model and machine learning approaches to predict arsenate toxicity. Chemosphere 344:140350
Pattanayak RM, Behera HS, Panigrahi S (2020) A multi-step-ahead fuzzy time series forecasting by using hybrid chemical reaction optimization with pi-sigma higher-order neural network. Computational intelligence in pattern recognition: proceedings of CIPR 1029–1041
Qian X, Wang Y, Cao S, Todo Y, Gao S (2019) Mr DNM: A novel mutual information-based dendritic neuron model. Comput Intell Neurosci 7362931
Rumelhart DE, McClelland JL (1988) PDP Research Group, Parallel distributed processing. IEEE, New York, pp 354–362
Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. Chapter 8. The M.I.T. Press, Cambridge, pp 318–362
Sarıkaya C, Bas E, Egrioglu E (2023) Training Sigma-Pi neural networks with the grey wolf optimization algorithm. Gran Comput 8:981–989
Shafiq A, Çolak AB, Sindhu TN (2024) Comparative analysis to study the Darcy-Forchheimer Tangent hyperbolic flow towards cylindrical surface using artificial neural network: An application to Parabolic Trough Solar Collector. Math Comput Simul 216:213–230
Shams SR, Kalantary S, Jahani A, Shams SMP, Kalantari B, Singh D, Moeinnadini M, Choi Y (2023) Assessing the effectiveness of artificial neural networks (ANN) and multiple linear regressions (MLR) in forecasting AQI and PM10 and evaluating health impacts through AirQ+ (case study: Tehran). Environ Pollut 338:122623
Shin Y, Ghosh J (1991) The Pi-Sigma network: an efficient higher order neural network for pattern classification and function approximation, In: Proceedings of the international joint conference on neural networks, pp 13–18
Song L, Gao W, Yang Y, Zhang L, Li Q, Dong Z (2022) Terminal cooling load forecasting model based on particle swarm optimization. Sustainability 14(19):11924
Song S, Chen X, Tang C, Song S, Tang Z, Todo Y (2019) Training an approximate logic dendritic neuron model using social learning particle swarm optimization algorithm. IEEE Access 7:141947–141959
Song Z, Zhou T, Yan X, Tang C, Ji J (2020). Wind speed time series prediction using a single dendritic neuron model. In 2020 2nd international conference on machine learning, big data and business intelligence, pp 140–144
Storn R, Price K (1997) Differential evolution a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
Tang C, Todo Y, Ji J, Lin Q, Tang Z (2021) Artificial immune system training algorithm for a dendritic neuron model. Knowl-Based Syst 233:107509
Tian C, Ma J, Zhang C, Zhan P (2018) A deep neural network model for short-term load forecast based on long short-term memory network and convolutional neural network. Energies 11(12):3493
Todo Y, Tamura H, Yamashita K, Tang Z (2014) Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Netw 60:96–103
Veeramsetty V, Chandra DR, Salkuti SR (2021) Short-term electric power load forecasting using factor analysis and long short-term memory for smart cities. Int J Circuit Theory Appl 49(6):1678–1703
Velásquez HJD, Rueda MV, Franco CCJ (2013) Electricity demand forecasting using a SARIMA-multiplicative single neuron hybrid model. Dyna 80(180):4–8
Wang RL, Lei Z, Zhang Z, Gao S (2022) Dendritic convolutional neural network. IEEJ Trans Electr Electron Eng 17(2):302–304
Wang S, Yu Y, Zou L, Li S, Yu H, Todo Y, Gao S (2020a) A novel median dendritic neuron model for prediction. IEEE Access 8:192339–192351
Wang Z, Gao S, Wang J, Yang H, Todo Y (2020b) A dendritic neuron model with adaptive synapses trained by differential evolution algorithm. Comput Intell Neurosci 2710561
Wang Z, Wang Y, Huang H (2023) Error selection based training of fully complex-valued dendritic neuron model. In chinese intelligent automation conference, pp 683–690
Worasucheep C. (2012) Training a single multiplicative neuron with a harmony search algorithm for prediction of S&P500 index-An extensive performance evaluation. In knowledge and smart technology, pp 1–5
Wu X, Chang Y, Mao J, Du Z (2013a) Predicting reliability and failures of engine systems by single multiplicative neuron model with iterated nonlinear filters. Reliab Eng Syst Saf 119:244–250
Wu X, Mao J, Du Z, Chang Y (2013b) Online training algorithms based single multiplicative neuron model for energy consumption forecasting. Energy 59:126–132
Wu X, Wang Y, Bai Y, Zhu Z, Xia A (2021) Online short-term load forecasting methods using hybrids of single multiplicative neuron model, particle swarm optimization variants and nonlinear filters. Energy Rep 7:683–692
Wu X, Zhu Z, Su X, Fan S, Du Z, Chang Y, Zeng Q (2015) A study of single multiplicative neuron model with nonlinear filters for hourly wind speed prediction. Energy 88:194–201
Xu Z, Wang Z, Li J, Jin T, Meng X, Gao S (2021) Dendritic neuron model trained by information feedback-enhanced differential evolution algorithm for classification. Knowl-Based Syst 233:107536
Yadav RN, Kalra PK, John J (2007) Time series prediction with single multiplicative neuron model. Appl Soft Comput 7(4):1157–1163
Yildirim AN, Bas E, Egrioglu E (2021) Threshold single multiplicative neuron artificial neural networks for non-linear time series forecasting. J Appl Stat 48(13–15):2809–2825
Yilmaz A, Yolcu U (2022) Dendritic neuron model neural network trained by modified particle swarm optimization for time-series forecasting. J Forecast 41(4):793–809
Yilmaz A, Yolcu U (2023) A robust training of dendritic neuron model neural network for time series prediction. Neural Comput Appl 35:10387–10406
Yılmaz O, Bas E, Egrioglu E (2021) The training of Pi-Sigma artificial neural networks with differential evolution algorithm for forecasting. Comput Econ 59:1699–1711
Yu J, Shi J, Li Z, He H, Gao S (2020) Single dendritic neuron model trained by spherical search algorithm for classification. In 2020 IEEE international conference on progress in informatics and computing, pp 30–33
Yu Y, Lei Z, Wang Y, Zhang T, Peng C, Gao S (2021) Improving dendritic neuron model with dynamic scale-free network-based differential evolution. IEEE/CAA J Autom Sin 9(1):99–110
Yu Y, Song S, Zhou T, Yachi H, Gao S (2016) Forecasting house price index of China using dendritic neuron model. In 2016 international conference on progress in informatics and computing, pp 37–41
Yuan Z, Gao S, Wang Y, Li J, Hou C, Guo L (2023) Prediction of PM2.5 time series by seasonal trend decomposition-based dendritic neuron model. Neural Comput Appl 35:15397–15413
Zhang Y, Yang Y, Li X, Yuan Z, Todo Y, Yang H (2023) A dendritic neuron model optimized by meta-heuristics with a power-law-distributed population interaction network for financial time-series forecasting. Mathematics 11(5):1251
Zhao L, Yang Y (2009) PSO-based single multiplicative neuron model for time series prediction. Expert Syst Appl 36(2):2805–2812
Zhou T, Gao S, Wang J, Chu C, Todo Y, Tang Z (2016) Financial time series prediction using a dendritic neuron model. Knowl-Based Syst 105:214–224
Acknowledgements
This study is supported by the Turkish Science and Technological Researches Foundation (TUBITAK) with Award Number:122F133, Recipient: Erol Egrioglu and Eren Bas.
Author information
Contributions
Erol Egrioglu: Conceptualization, Methodology, Software, Writing – original draft, Writing – review & editing, Visualization, Investigation, Data curation.
Eren Bas: Conceptualization, Methodology, Writing – original draft, Writing – review & editing, Visualization, Investigation, Data curation.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Egrioglu, E., Bas, E. A new deep neural network for forecasting: Deep dendritic artificial neural network. Artif Intell Rev 57, 171 (2024). https://doi.org/10.1007/s10462-024-10790-7