Elite GA-based Feature Selection of LSTM For Earthquake Prediction
Zhiwei Ye
Hubei University of Technology
Wuyang Lan
Hubei University of Technology
Wen Zhou ( [email protected] )
Hubei University of Technology
Qiyi He
Hubei University of Technology
Liang Hong
Hubei Provincial Geographical National Conditions Monitoring Center
Xinguo Xu
Hubei Provincial Geographical National Conditions Monitoring Center
Yunxuan Gao
Hubei University of Technology
Research Article
Keywords: Earthquake magnitude prediction, Elite genetic algorithm, Long short-term memory, AETA
DOI: https://doi.org/10.21203/rs.3.rs-3049982/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
Earthquake magnitude prediction is an extremely difficult task that
has been studied by various machine learning researchers. However,
redundant features and time series properties hinder the development
of prediction models. The Elite Genetic Algorithm (EGA) has advantages
in searching for optimal feature subsets; meanwhile, Long Short-Term
Memory (LSTM) is well suited to processing time series and complex
data. Therefore, we propose an EGA-based feature selection LSTM
model (EGA-LSTM) for time series earthquake prediction. First, the
acoustic and electromagnetic data of the AETA system we developed
are fused and preprocessed by EGA, aiming to find strongly correlated
indicators. Second, LSTM is introduced to perform magnitude pre-
diction with the selected features. Specifically, the RMSE of LSTM
and the ratio of selected features are chosen as fitness components of
EGA. Finally, we test the proposed EGA-LSTM on the AETA data
of Sichuan province, including the influence of data in different peri-
ods (timePeriod) and of the fitness function weights (ω_a and ω_F) on the
prediction results. Linear Regression (LR), Support Vector Regression
(SVR), Adaboost, Random Forest (RF), standard GA (SGA), steadyGA,
1 Introduction
Earthquakes are serious, sudden-onset natural disasters. Since 1970, at least 3.3
million people have died from such natural disasters, and 226 million people are
directly affected each year [1]. Earthquakes cause serious human injuries and
economic losses due to the destruction of buildings and other rigid structures.
Therefore, earthquake prediction is essential and necessary to reduce human
casualties and economic loss.
Earthquake Moment Magnitude (MW) prediction is one of the important
issues in the earthquake prediction field. Many studies have been carried out to
find relationships in earthquakes and to predict MW. However, the
complex and non-linear relationships in earthquakes make MW prediction
a challenging task. Machine Learning (ML) methods have been adopted for
MW prediction due to their high classification accuracy. Adeli et al. [2] cal-
culated seismic historical indicators, including Gutenberg-Richter b-values,
time lag, earthquake energy, and mean magnitude, based on ML methods to
predict MW. Asim et al. [3] investigated the Cyprus earthquake catalog tem-
porally and computed sixty seismic features. Then, these features served as
the input instances of Support Vector Machines (SVM) and RF to predict five
days ahead, one week ahead, ten days ahead, and fifteen days ahead with dif-
ferent MW thresholds. Crustal movement is a continuous process, which makes
MW prediction a time-series issue. Hence, in [4], Berhich et al. calculated an
appropriate feature from historical data to enhance the time-series task and imple-
mented LSTM to predict MW with the enhanced features. Cai et al. [5] used
three groups of real seismic data (gravity, georesistivity, and water-level datasets)
to predict MW based on LSTM. They considered the time series of precur-
sors and earthquakes; however, there are irrelevant or even redundant features
within the precursors, resulting in low prediction accuracy.
Feature Selection (FS) is critical to improving the classification
accuracy of ML methods. FS can be regarded as a combinatorial optimization
problem, and EGA has an advantage in searching for optimal feature subsets.
Kadam et al. [6] proposed an EGA for selecting features in arrhythmia classifica-
tion and achieved satisfactory performance. Thus, we adopt EGA for FS in MW
prediction and design a novel fitness function, which is calculated from the RMSE
of LSTM and the ratio of selected features. Finally, considering the time-series
effect of the electromagnetic and acoustic data from AETA and the abruptness of
earthquakes, we introduce LSTM to predict magnitude with the optimal feature
subset selected by EGA.
To verify the performance of the proposed EGA-LSTM, 95 features
derived from the electromagnetic and acoustic data collected by AETA are
adopted as our experimental dataset. After the feature selection process by
EGA, the chosen features serve as the input data of the LSTM model. Besides,
five Evolutionary Algorithms (EAs) and four different ML methods are adopted
as our baselines: SGA, steadyGA, and three DEs on the one hand, and LR, SVR,
Adaboost, and RF on the other. Five evaluation indicators are adopted to assess
the performance of these methods. The experimental results demonstrate that our
proposed EGA-LSTM is superior to the state-of-the-art methods.
The main contributions of this study are as follows:
• The original dataset is collected by the AETA system developed by our
team, which detects electromagnetic and acoustic signals to predict earthquakes
in Sichuan and its surroundings.
• To eliminate redundant features from the original 95 features, we propose
an EGA with a novel fitness function for this specific seismic scene to find the
optimal solution.
• Considering the time series effect of earthquake magnitude and the abruptness
of earthquakes, we choose LSTM as the prediction model. Other ML methods,
such as LR, SVR, Adaboost, and RF, are our baselines.
• Statistical measures (MAE, MSE, RMSE, R-square) are used to measure
the performance of the suggested ML models.
The remainder of the study is arranged as follows. Section 2 reviews the
latest related works. The method is fully described in Section 3. In
Section 4, the experiments on EGA-LSTM and the other EAs and ML methods
are illustrated in detail. The conclusion and future works are presented in
Section 5.
2 Related Work
Various approaches have been implemented in earthquake prediction, including
feature selection methods and prediction methods.
3 Methodology
This section describes the proposed model to perform earthquake prediction.
EGA-LSTM is presented to predict MW.
AETA provides 51 features for electromagnetic signals and 44 features for acoustic signals. How-
ever, MW is not detected by AETA. Hence, the MW data is taken from the China
Earthquake Networks Center. As a result, the algorithm merges the 95 features
and MW according to time. Since the sampling interval is short, MW prediction
over such a short period makes little sense. Hence, we select a typical value of every
feature to represent a whole day. That means the size of the original one-day
data is 144×96 (144 ten-minute samples of the 95 features plus MW); after the fusing
phase, the size becomes 1×96. In addition, because electromagnetic
and acoustic signals may be affected by human activities, we need
to choose a suitable timePeriod to represent a whole day. Considering the scarcity
of nighttime activities, we choose the data from the timePeriod 0:00 to 8:00.
Then, the algorithm transforms the time series data into a supervised learning problem.
In this algorithm, the input variables are the electromagnetic and acoustic signals and
the output variable is the MW of the following day. The step size is equal to
1, which means the algorithm predicts the MW of the next day.
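As a concrete illustration, the following is a minimal Python sketch of this transformation (the function name and the stand-in data are ours, not part of the AETA release); it pairs each day's fused feature row with the MW of the following day:

```python
import numpy as np

def to_supervised(features, mw, step=1):
    """Pair day t's fused features with the MW of day t + step."""
    X = features[:-step]   # inputs: one fused 95-feature row per day
    y = mw[step:]          # targets: next-day magnitude when step = 1
    return X, y

# Stand-in data: 365 days x 95 fused features, plus a daily MW series.
rng = np.random.default_rng(0)
X, y = to_supervised(rng.random((365, 95)), rng.random(365))
print(X.shape, y.shape)    # (364, 95) (364,)
```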
After processing the original data, the algorithm executes feature selection.
This model adopts EGA to select the optimal feature subset; each individual of
EGA uses binary encoding and has dimension 95.
If a feature is selected, the corresponding dimension is 1, otherwise 0.
With comprehensive consideration of prediction accuracy and time complexity,
we propose a novel fitness function defined as Eq. (1), where ω_a and ω_F are
Algorithm 1 EGA-LSTM
Input: electromagnetic and acoustic signals from AETA
Output: MW of the next day
1: Maxiter: the maximum number of iterations
2: OFS: the selected optimal feature subset
3: Use the maximum value of the data between 0:00 and 8:00 to represent the
whole day
4: Transform the time series into a supervised learning sequence with step size
equal to 1
5: Initialize population and parameters
6: while t < Maxiter do
7: Calculate fitness for each individual
8: for each individual do
9: Retain the best individual (elitism)
10: Selection operation
11: Crossover operation
12: Mutation operation
13: end for
14: Generate new population
15: end while
16: OFS = the individual with the best fitness
17: MW = LSTM(OFS)
18: return MW
the weight factors, F is the number of selected features, and P is the number
of all features. The first part of Eq. (1) represents the prediction accuracy
and the second part indicates the complexity of the model. In addition, the Root
Mean Squared Error (RMSE) is an evaluation indicator defined as
Eq. (2), where ŷ is the predicted value and y is the true value. This fitness
function achieves considerable accuracy while significantly reducing the number
of selected features.
$$\mathrm{fitness} = \omega_a \cdot \mathrm{RMSE} + \omega_F \cdot \frac{F}{P} \quad (1)$$

$$\mathrm{RMSE}(y, \hat{y}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \lVert y_i - \hat{y}_i \rVert_2^2} \quad (2)$$
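To make Eq. (1) concrete, here is a sketch of the fitness evaluation for one binary individual; `train_and_score_lstm` is a hypothetical helper standing in for training the LSTM on the selected columns and returning its test RMSE per Eq. (2), and the default weights are illustrative only:

```python
import numpy as np

def fitness(mask, X, y, w_a=0.9, w_f=0.1):
    """Eq. (1): w_a * RMSE + w_f * (F / P) for one binary individual.

    mask: length-95 array of 0/1 values; 1 selects the corresponding feature.
    w_a, w_f: illustrative default weights, not values from the paper.
    """
    F, P = int(mask.sum()), mask.size
    cols = mask.astype(bool)
    rmse = train_and_score_lstm(X[:, cols], y)  # hypothetical helper (Eq. 2)
    return w_a * rmse + w_f * (F / P)
```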
The algorithm first defines the maximum number of iterations, the crossover
rate, the mutation rate, and the number of individuals. Then the model randomly
initializes the population and begins to iterate. The first step of each iteration is
calculating the fitness of every individual. Thereafter, the algorithm executes
the selection, crossover, and mutation operators to generate
a new subpopulation. The loop continues until the number of iterations reaches
the maximum. After the iteration phase, the best individual is the selected
optimal feature subset. The last step of the algorithm
is predicting MW. At the beginning, the selected optimal feature subsets are
normalized by the Min-Max scaler, a transform calculated
by Eq. (3). In this model, the optimal feature subsets are mapped into the same
range, between 0 and 1, which ensures that no single feature dominates the
others. The next step consists of dividing the data into a 70% training set and a 30%
testing set. Then the LSTM is trained, supported by RMSE for error
calculation and evaluation. Finally, after the model is trained, the LSTM predicts
MW on the testing set, and the error between the real MW and the predicted
MW is calculated by different evaluation indicators, which are widely used to
evaluate regression models and are applied to earthquake prediction. The detailed
evaluation indicators are provided in Section 4.2.
$$z = \frac{x - \min(x)}{\max(x) - \min(x)} \quad (3)$$
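A short sketch of this normalization and split using scikit-learn's MinMaxScaler (the library choice is our assumption; fitting the scaler on the training portion only is also our addition, to avoid information leakage, since the paper does not state this detail):

```python
from sklearn.preprocessing import MinMaxScaler

def scale_and_split(X, y, train_frac=0.7):
    """Apply Eq. (3) feature-wise and split 70%/30% chronologically."""
    cut = int(len(X) * train_frac)
    scaler = MinMaxScaler()                  # z = (x - min) / (max - min)
    X_train = scaler.fit_transform(X[:cut])  # learn min/max on training data
    X_test = scaler.transform(X[cut:])       # reuse them on the test data
    return X_train, X_test, y[:cut], y[cut:]
```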
best individual; otherwise, the fitness is recalculated. The best individual's gene (output
OFS = {of_1, of_2, ..., of_n}) is the optimal feature subset.
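For readers who prefer code to pseudocode, the following is a compact Python sketch of the elitist loop of Algorithm 1; the tournament selection and one-point crossover operators are common choices we assume here, since the paper does not pin them down:

```python
import numpy as np

def ega_select(fitness_fn, pop_size=10, dim=95, max_iter=20,
               pc=0.7, pm=0.01, seed=0):
    """Elitist GA over binary feature masks (parameter values from Sect. 4)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, (pop_size, dim))
    best, best_fit = pop[0].copy(), np.inf
    for _ in range(max_iter):
        fits = np.array([fitness_fn(ind) for ind in pop])
        if fits.min() < best_fit:               # elitism: remember the best
            best, best_fit = pop[fits.argmin()].copy(), fits.min()
        # tournament selection of size 2 (fitness is minimized)
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fits[i] < fits[j])[:, None], pop[i], pop[j])
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):     # one-point crossover
            if rng.random() < pc:
                cut = rng.integers(1, dim)
                children[k, cut:] = parents[k + 1, cut:]
                children[k + 1, cut:] = parents[k, cut:]
        flip = rng.random(children.shape) < pm  # bit-flip mutation
        children = np.where(flip, 1 - children, children)
        children[0] = best                      # reinsert the elite
        pop = children
    return best                                 # OFS as a binary mask
```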
The selected features are processed by the Min-Max scaler, and the squashed
data pass through the input gate, which takes the relevant features from the squashed
input data by multiplying them with a sigmoid function. This function maps
the relevant features to the range 0 to 1. If the value is 0, the network removes
the feature; otherwise, the feature passes through the network. The next step
is to decide how much memory to store in the cell state. A
tanh layer then creates a vector of new candidate values. The memory gate
takes the information stored in the previous state and adds it to the input
gate. Since the memory operator is addition instead of multiplication, LSTM
avoids the vanishing gradient problem. Moreover, the forget gate decides which state
information needs to be memorized or forgotten based on a sigmoid function.
Finally, the output gate decides what the algorithm is going to output based on a
sigmoid function. Then, we put the cell state through a tanh layer to push the
values to between -1 and 1 and multiply it by the output of the sigmoid gate,
so that we only output the parts we are interested in. In the last layer, the
output is MW. These operations are described by Eqs. (4-9).
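Eqs. (4-9) are not reproduced in this extract; for completeness we restate the standard LSTM gate equations from [31], which the description above follows (the mapping of the numbers (4)-(9) onto these six equations is our assumption):

$$\begin{aligned}
f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) &&(4)\\
i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) &&(5)\\
\tilde{c}_t &= \tanh(W_c \cdot [h_{t-1}, x_t] + b_c) &&(6)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&(7)\\
o_t &= \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) &&(8)\\
h_t &= o_t \odot \tanh(c_t) &&(9)
\end{aligned}$$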
4 Experiment
4.1 Data process
The earthquake information of the regions monitored by AETA is intercepted from
January 1, 2017 to December 31, 2022. AETA is a multi-component seismic
monitoring system co-developed by the IMS Laboratory of Peking
University and our research group at the School of Computer Science of Wuhan
University for earthquake monitoring and prediction. The AETA system mainly
collects two categories of data, electromagnetic and acoustic. It has
the advantages of strong system stability, high environmental adaptability,
and strong anti-interference ability [32]. To better verify earthquake predic-
tion, the dataset consists of four regions: DS1 ((98°E, 34°N), (101°E, 30°N)),
DS2 ((101°E, 34°N), (104°E, 30°N)), DS3 ((98°E, 30°N), (101°E, 26°N)), and
DS4 ((101°E, 30°N), (104°E, 26°N)). The training data starts on January 1,
2017 and ends on February 1, 2022, and the testing data starts on February 1,
2022 and ends on December 30, 2022. In AETA, electromagnetic and acoustic
signals are detected every 10 minutes, so we selected more than 20000 samples
over the four years at a single station alone. Each catalog contains a list of records:
time of earthquake, 51 electromagnetic features, and 44 acoustic features. MW
data is maintained by the China Earthquake Networks Center (CENC), whose
catalog is available over the internet at http://www.ceic.ac.cn/. Then, we
merge the two tables on the time of earthquake. Table 1 shows partial
earthquake data of DS2 after merging.
After merging the dataset, we fuse multiple pieces of data from one day
into one piece of data. The specific operation consists of two steps: selecting the
catalogs of a given period as the representative data, and taking the maxi-
mum of each feature and of MW. In this experiment, we choose the timePeriods
0:00-8:00, 0:00-12:00, and 0:00-24:00. Then, we transform the time-series
data into supervised learning data based on MW, with a time step of one day.
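A pandas sketch of this fusing step follows; the file and column names are hypothetical, since the released AETA tables use their own schema:

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
signals = pd.read_csv("aeta_signals.csv", parse_dates=["time"])  # 95 features
catalog = pd.read_csv("cenc_catalog.csv", parse_dates=["time"])  # time, MW

# Keep only the chosen timePeriod (here 0:00-8:00), then take the daily
# maximum of every feature, fusing the 144 ten-minute rows into one per day.
daily = (signals.set_index("time")
                .between_time("00:00", "08:00")
                .resample("1D").max())

# Daily maximum MW from the CENC catalog, joined by date.
mw = catalog.set_index("time")["MW"].resample("1D").max()
merged = daily.join(mw)   # one row per day: 95 fused features + MW
```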
Fig. 4 shows the kurtosis of the acoustic signal, which is one of the 95 features.
Fig. 5 shows the relationships between some of the features. From Fig.
5(b), we find that the electromagnetic absolute mean value and the electromagnetic
absolute maximum 5% position have a strong correlation. Beyond these two
features, many other characteristics correlate with each other.
To compare with other EAs, we choose two types of GA (SGA and SteadyGA) and three different mutation
strategies of DE: the base vector is the best vector and the crossover opera-
tion is a random crossover (DE_best_1_b); the base vector is the best vector
and the crossover operation is a linear order crossover (DE_best_1_L); and the base
vector is a random vector and the crossover operation is a random crossover
(DE_rand_1_b).
All six EAs adopt binary encoding, and the dimension of each individual is set to
95, corresponding to the features: 1 means the corresponding dimension's feature is
selected, and 0 means it is not. The number of individuals is set
to 10 and the maximum number of iterations is set to 20. The stagnation threshold
is set to 0.000001 and the maximum evolutionary stagnation counter is set to
10. In the GAs, the crossover rate and mutation rate are set to 0.7 and 0.01,
respectively. In the DEs, the scaling factor and crossover rate are both set to 0.5.
The RF combines 100 decision trees. Other parameters are set
as follows: max_depth=10, min_samples_split=2, min_samples_leaf=1. The
Adaboost's base learner is a decision tree, the number of boosting rounds is 100, the
learning rate is set to 1, and the boosting algorithm is based on the probability
of prediction error.
The LR's loss function adopts the least squares method. In SVR, the ker-
nel function is the radial basis function, the kernel factor is set to the
reciprocal of the product of the number of features and the variance of the
eigenvector, and the penalty parameter is set to 1.
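These baseline settings map naturally onto scikit-learn; a sketch under that assumption (the library is not named in the paper) is:

```python
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

baselines = {
    "RF": RandomForestRegressor(n_estimators=100, max_depth=10,
                                min_samples_split=2, min_samples_leaf=1),
    "Adaboost": AdaBoostRegressor(n_estimators=100, learning_rate=1.0),
    "LR": LinearRegression(),  # ordinary least squares
    # gamma="scale" is 1 / (n_features * X.var()), matching the kernel
    # factor described above; C is the penalty parameter.
    "SVR": SVR(kernel="rbf", gamma="scale", C=1.0),
}
```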
The LSTM in our experiment has a hidden layer with 100 units, one output layer, and
the corresponding activation functions. The learning rate is set to 0.00045, the batch size is
set to 32, and the number of training epochs is set to 120. The initial
weights are random values and the biases are set to 0.
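A minimal Keras sketch consistent with these hyperparameters (the optimizer choice and the one-day-per-time-step input shape are our assumptions, as the paper does not state them):

```python
import tensorflow as tf

def build_lstm(n_features):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1, n_features)),  # one fused day per time step
        tf.keras.layers.LSTM(100),              # 100 hidden units
        tf.keras.layers.Dense(1),               # output layer: predicted MW
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=4.5e-4),
                  loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# model = build_lstm(n_features=len(selected_features))
# model.fit(X_train, y_train, batch_size=32, epochs=120)
```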
Table 2 Fitness and RMSE for different EAs, parameters, and timePeriod (0:00-8:00) in
DS2
Table 3 Fitness and RMSE for different EAs, parameters, and timePeriod (0:00-12:00) in
DS2
Table 4 Fitness and RMSE for different EAs, parameters, and timePeriod (0:00-24:00) in
DS2
With reference to DS3 (see Table 7), it is worth noting that LSTM
without EGA performs better than EGA-LSTM on certain indicators. However, EGA-LSTM's results
are still superior to the others on the remaining indicators. Intuitively, the dif-
ference between EGA-LSTM and the other algorithms is not obvious, especially
for EGA-SVR. More statistical comparison details can be seen in Section 4.5.
The particular results for DS4 are shown in Table 8. EGA-LSTM performs better
than the other nine methods on all four metrics.
From a joint analysis of the four tables, a conclusion can easily be drawn:
for the four indicators, EGA-LSTM is the best among them. Therefore,
it can be concluded that EGA-LSTM is the most precise and stable algorithm
across all datasets.
Fig. 8 shows the fitting curves of EGA-LSTM. Since we choose suitable
parameters and a suitable loss function, the model is neither overfitting nor underfitting.
The real MW and the MW predicted by the different methods are shown in Fig. 9,
and the results demonstrate that all methods perform well in the low-MW
range. However, high MW values can rarely be predicted accurately.
µ_LSTM and µ_EGA-LSTM represent the RMSE of the two models. H0
means that LSTM performs better than EGA-LSTM, and H1 means the opposite. The
descriptive and quartile statistics of the two methods are presented in Table 9.
Sub-Table 9(a) shows the basic test statistics, such as the maximum, minimum,
standard deviation, and other values, for the pairwise comparison of EGA-
LSTM and LSTM on each evaluation indicator separately. It can be
observed from Table 11 that, over 20 independent replicate experiments, the
RMSE of EGA-LSTM is better than that of its counterpart LSTM.
In sub-Table 9(b), the positive and negative ranks of the Wilcoxon signed-rank test
are presented for each region and measure corresponding to EGA-LSTM and
LSTM. The Wilcoxon signed-rank test shows that the results differ across regions.
Checking the Wilcoxon signed-rank boundary table, the critical
value for a one-tailed test at the 0.01 confidence level is 43, and at the
0.05 confidence level it is 60. In DS1, the test statistic is 28; therefore we
reject the null hypothesis and conclude with 99% certainty that EGA-LSTM is superior to
LSTM. DS4 behaves the same as DS1. The test statistic of DS2 is 58, which is greater
than 43, so we cannot reject the null hypothesis at the 0.01 confidence level.
However, 58 is less than 60, the critical value at the 0.05 confidence level.
DS3 behaves the same as DS2. In summary, all results indicate strong evidence
against the null hypothesis, i.e., that EGA-LSTM is superior to the standard LSTM.
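The test itself is a one-liner in SciPy; the RMSE arrays below are stand-in values for illustration, not the paper's per-run results:

```python
from scipy.stats import wilcoxon

# Paired RMSEs from replicate runs of the two models (stand-in values).
rmse_lstm = [0.52, 0.49, 0.55, 0.51, 0.50, 0.53]
rmse_ega  = [0.47, 0.45, 0.50, 0.48, 0.46, 0.49]

# One-sided test: H1 says EGA-LSTM's RMSE is lower than LSTM's.
stat, p = wilcoxon(rmse_ega, rmse_lstm, alternative="less")
print(f"W = {stat}, p = {p:.4f}")  # reject H0 when p < the chosen alpha
```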
Applying our method to more complicated earthquake prediction scenarios can also
be part of future work.
6 Ethical Approval
Not applicable.
7 Competing interests
Not applicable.
8 Authors’ contributions
Zhiwei Ye, Wuyang Lan, and Wen Zhou: wrote the main manuscript
text. Qiyi He: supervision, writing (review and editing), funding acquisi-
tion. Liang Hong, Xinguo Yu, and Yunxuan Gao: reviewing. All authors
reviewed the manuscript.
10 Funding
The authors want to thank the NSFC (http://www.nsfc.gov.cn/) for support
through Grant Numbers 61877045 and 62202147, the Fundamental Research
Project of the Shenzhen Science and Technology Program for support through
Grant Number JCYJ20160428153956266, and the Research Project of the Natu-
ral Resources Department of Hubei Province for support through Grant
Number ZRZY2023KJ13.
References
[1] World Bank, United Nations: Natural Hazards, UnNatural Disasters: The
Economics of Effective Prevention. The World Bank, Washington, DC (2010)
[2] Adeli, H., Panakkat, A.: A probabilistic neural network for earthquake
magnitude prediction. Neural Networks 22(7), 1018–1024 (2009)
[3] Asim, K.M., Moustafa, S.S., Niaz, I.A., Elawadi, E.A., Iqbal, T., Martínez-
Álvarez, F.: Seismicity analysis and machine learning models for short-
term low magnitude seismic activity predictions in Cyprus. Soil Dynamics
and Earthquake Engineering 130, 105932 (2020)
[4] Berhich, A., Belouadha, F.-Z., Kabbaj, M.I.: LSTM-based earthquake pre-
diction: enhanced time feature and data representation. International
Journal of High Performance Systems Architecture 10(1), 1–11 (2021)
[5] Cai, Y., Shyu, M.-L., Tu, Y.-X., Teng, Y.-T., Hu, X.-X.: Anomaly detec-
tion of earthquake precursor data using long short-term memory networks.
Applied Geophysics 16, 257–266 (2019)
[6] Kadam, V.J., Yadav, S.S., Jadhav, S.M.: Soft-margin SVM incorporating
feature selection using improved elitist GA for arrhythmia classification.
In: Intelligent Systems Design and Applications: 18th International Con-
ference on Intelligent Systems Design and Applications (ISDA 2018) Held
in Vellore, India, December 6-8, 2018, Volume 2, pp. 965–976 (2020).
Springer
[7] John, G.H., Kohavi, R., Pfleger, K.: Irrelevant features and the subset
selection problem. In: Machine Learning Proceedings 1994, pp. 121–129.
Elsevier (1994)
[10] Chen, Y., Zhang, J., He, J.: Research on application of earthquake pre-
diction based on chaos theory. In: 2010 International Conference on
Intelligent Computing and Integrated Systems, pp. 753–756 (2010). IEEE
[11] Cekim, H.O., Tekin, S., Özel, G.: Prediction of the earthquake magni-
tude by time series methods along the East Anatolian Fault, Turkey. Earth
Science Informatics 14(3), 1339–1348 (2021)
[12] Panakkat, A., Adeli, H.: Neural network models for earthquake magnitude
prediction using multiple seismicity indicators. International Journal of
Neural Systems 17(01), 13–33 (2007)
[13] Asim, K., Martínez-Álvarez, F., Basit, A., Iqbal, T.: Earthquake magni-
tude prediction in Hindukush region using machine learning techniques.
Natural Hazards 85(1), 471–486 (2017)
[14] Chanda, S., Raghucharan, M., Reddy, K.K., Chaudhari, V., Somala, S.N.:
Duration prediction of Chilean strong motion data using machine learning.
[16] Muhammad, A., Külahcı, F., Birel, S.: Investigating radon and TEC
anomalies relative to earthquakes via AI models. J. Atmos. Sol. Terr.
Phys. 245(106037), 106037 (2023)
[17] Yang, F., Kefalas, M., Koch, M., Kononova, A.V., Qiao, Y., Bäck, T.:
Auto-rep: An automated regression pipeline approach for high-efficiency
earthquake prediction using LANL data. In: 2022 14th International Confer-
ence on Computer and Automation Engineering (ICCAE), pp. 127–134
(2022). IEEE
[18] Asim, K.M., Idris, A., Iqbal, T., Martı́nez-Álvarez, F.: Seismic indica-
tors based earthquake predictor system using genetic programming and
adaboost classification. Soil Dynamics and Earthquake Engineering 111,
1–7 (2018)
[19] Zhou, W., Liang, Y., Dong, H., Tan, C., Xiao, Z., Liu, W.: A numerical dif-
ferentiation based dendritic cell model. In: 2017 IEEE 29th International
Conference on Tools with Artificial Intelligence (ICTAI), pp. 1092–1098
(2017). IEEE
[20] Zhou, W., Dong, H., Liang, Y.: The deterministic dendritic cell algo-
rithm with Haskell in earthquake magnitude prediction. Earth Science
Informatics 13(2), 447–457 (2020)
[21] Zhou, W., Zhang, K., Ming, Z., Chen, J., Liang, Y.: Immune optimization
inspired artificial natural killer cell earthquake prediction method. The
Journal of Supercomputing 2022, 1–23 (2022)
[22] Zhou, W., Liang, Y., Wang, X., Ming, Z., Xiao, Z., Fan, X.: Introduc-
ing macrophages to artificial immune systems for earthquake prediction.
Applied Soft Computing 122, 108822 (2022)
[23] Moustra, M., Avraamides, M., Christodoulou, C.: Artificial neural net-
works for earthquake prediction using time series magnitude data or
seismic electric signals. Expert Systems with Applications 38(12), 15032–
15039 (2011)
[24] Asim, K.M., Idris, A., Iqbal, T., Martínez-Álvarez, F.: Earthquake pre-
diction model using support vector regressor and hybrid neural networks.
PLoS ONE 13(7), e0199004 (2018)
[25] Jain, R., Nayyar, A., Arora, S., Gupta, A.: A comprehensive analysis
and prediction of earthquake magnitude based on position and depth
parameters using machine and deep learning models. Multimedia Tools
and Applications 80(18), 28419–28438 (2021)
[26] Draz, M.U., Shah, M., Jamjareegulgarn, P., Shahzad, R., Hasan, A.M.,
Ghamry, N.A.: Deep machine learning based possible atmospheric and
ionospheric precursors of the 2021 Mw 7.1 Japan earthquake. Remote
Sensing 15(7) (2023). https://doi.org/10.3390/rs15071904
[27] Berhich, A., Belouadha, F.-Z., Kabbaj, M.I.: LSTM-based models for earth-
quake prediction. In: Proceedings of the 3rd International Conference on
Networking, Information Systems & Security, pp. 1–7 (2020)
[29] Kavianpour, P., Kavianpour, M., Jahani, E., Ramezani, A.: A cnn-bilstm
model with attention mechanism for earthquake prediction. arXiv preprint
arXiv:2112.13444 (2021)
[30] Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975)
[31] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Com-
putation 9(8), 1735–1780 (1997)
[32] Wang, J., Yong, S., et al.: An AETA electromagnetic disturbance anomaly
extraction method based on sample entropy. In: 2021 IEEE 5th Advanced
Information Technology, Electronic and Automation Control Conference
(IAEAC), vol. 5, pp. 2265–2269 (2021). IEEE
[33] Adeli, H., Panakkat, A.: A probabilistic neural network for earthquake
magnitude prediction. Neural Networks 22(7), 1018–1024 (2009)
Supplementary Files
The following supplementary files are associated with this preprint:
EQTransformermaster.zip
areafeature.rar