Article
Static Malware Analysis Using Low-Parameter Machine
Learning Models
Ryan Baker del Aguila, Carlos Daniel Contreras Pérez, Alejandra Guadalupe Silva-Trujillo * , Juan C. Cuevas-Tello
and Jose Nunez-Varela
Abstract: Recent advancements in cybersecurity threats and malware have brought into question the
safety of modern software and computer systems. As a direct result of this, artificial intelligence-based
solutions have been on the rise. The goal of this paper is to demonstrate the efficacy of memory-
optimized machine learning solutions for the task of static analysis of software metadata. The study
comprises an evaluation and comparison of the performance metrics of three popular machine
learning solutions: artificial neural networks (ANNs), support vector machines (SVMs), and gradient
boosting machines (GBMs). The study provides insights into the effectiveness of memory-optimized
machine learning solutions when detecting previously unseen malware. We found that the ANN shows
the best performance, achieving 93.44% accuracy when classifying programs as either malware or
legitimate, even under extreme memory constraints.
Keywords: malware detection; data representation; static analysis; classification; machine learning;
deep learning
methods may have the answer [3,7,8]. ML has demonstrably succeeded in capturing the
essence of malware samples, effectively combating the growing threat [9]. These methods can
adapt to identify previously unknown threats [4], offering a promising general-case solution
for protecting against malware.
In this paper, we present an analysis of three widely used ML algorithms: artificial
neural networks (ANNs) [10], support vector machines (SVMs), and gradient boosting
machines (GBMs), for malware detection with computational resource constraints. Our
research is based on the evaluation of these algorithms’ key result metrics using a publicly
available dataset of malware samples [11].
We used program metadata, obtained from our VirusShare dataset, to train and test the
algorithms. The metadata provides valuable information on the behavior and characteristics
of malware, which enables the algorithms to detect new or previously unknown malware.
We also incorporated strategies for reducing each model's memory footprint in the hope
that they can be adopted on the low-resource hardware frequently found in IoT devices [1].
The structure of this paper is as follows: Section 2 details the state of the art, previous
research, and the modern technologies adjacent to our work. Section 3 covers the materials
and methods implemented to develop the experiments and how the results are obtained.
Section 4 presents the produced results. Then, Section 5 and onward offer a comprehensive
analysis of the meaning and significance of our results, where our research is situated,
and our conclusions.
Topic: Authors
Portable executable (PE) file-based detection: Mithal et al. [12], Malik et al. [13], Vinayakumar et al. [14], Baldangombo et al. [15]
Android malware detection: Amin et al. [5], Milosevic et al. [16], Agrawal et al. [17], Feng et al. [22], Pan et al. [23]
Combining static and dynamic analysis: Santos et al. [18], Mangialardo et al. [24], Jain et al. [25]
Feature extraction and reduction: Rathore et al. [19]
Comparative analysis and literature review: Fleshman et al. [20], Vinayakumar et al. [21]
General overview and robustness in dynamic analysis: Ijaz et al. [9], Or-Meir et al. [26]
Behavioral data and short-term predictions: Rhode et al. [27]
IoT device malware detection: Baek et al. [28]
Vulnerabilities and evasive techniques in IoT: Fang et al. [29]
3.1. Dataset
The dataset used in the experiment was generated from parameters of PE format files,
whose content was analyzed and classified as malicious software on the basis of
VirusShare's public data and evaluation techniques. The files were collected from
the malware collection at virusshare.com, accessed on 8 November 2022 [11], from which
only PE files with extractable characteristics were retained. They were then analyzed
using the Pefile tool, a cross-platform Python library for parsing and working with PE
files. In PE files, most of the information contained in the headers is accessible, as
well as details and data for the executable sections. Information relevant to malware
identification, such as section entropy for packer detection, was extracted from
these files.
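To illustrate, a minimal sketch of this extraction step is shown below, assuming the standard Pefile API; the helper function is ours, and the fields chosen are a small subset of the full attribute set given in Table 2.

```python
# A minimal sketch of PE metadata extraction with pefile; the helper and the
# chosen fields are illustrative, the full attribute set is given in Table 2.
import pefile

def extract_features(path: str) -> dict:
    pe = pefile.PE(path)
    features = {
        "SizeOfOptionalHeader": pe.FILE_HEADER.SizeOfOptionalHeader,
        "MajorLinkerVersion": pe.OPTIONAL_HEADER.MajorLinkerVersion,
        "MinorLinkerVersion": pe.OPTIONAL_HEADER.MinorLinkerVersion,
        "AddressOfEntryPoint": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        "ImageBase": pe.OPTIONAL_HEADER.ImageBase,
        "SectionAlignment": pe.OPTIONAL_HEADER.SectionAlignment,
        "FileAlignment": pe.OPTIONAL_HEADER.FileAlignment,
        "NumberOfSections": pe.FILE_HEADER.NumberOfSections,
        # Mean section entropy is a common packer-detection signal.
        "MeanSectionEntropy": sum(s.get_entropy() for s in pe.sections)
        / max(len(pe.sections), 1),
    }
    pe.close()
    return features
```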
In total, the dataset contains 57 attributes. The attribute 'legitimate', obtained from
VirusShare's data, is used as the ground truth of the experiment (see Table 2). The dataset
consists of 152,227 samples of program metadata, of which 138,047 are used in our
experiments, while 14,180 samples were discarded due to empty, corrupted, or incomplete
data. Among the 14,180 discarded samples, 13,289 corresponded to corrupted data and
891 were too obfuscated to use. The decision to remove excessively obfuscated samples
was motivated by an interest in avoiding the sparsity they introduce; the missing entries
are unnecessary for the purposes of this experiment. Of the 138,047 remaining program
metadata samples, 96,724 represent malware metadata and 41,323 represent legitimate
program metadata, a ratio of approximately 2.3:1 of malware to legitimate program metadata.
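As a hypothetical sketch of this filtering step (the file name and the obfuscation heuristic are our assumptions; column names follow Table 2):

```python
# A hypothetical sketch of the sample-filtering step; the CSV name and the
# obfuscation heuristic are assumptions, column names follow Table 2.
import pandas as pd

df = pd.read_csv("malware_metadata.csv")   # 152,227 raw metadata samples
df = df.dropna()                           # drop empty/corrupted/incomplete rows
# Drop overly obfuscated samples: rows whose numeric features are almost all zero.
numeric = df.drop(columns=["Name", "MD5", "legitimate"])
df = df[(numeric != 0).mean(axis=1) > 0.1]
print(len(df), df["legitimate"].value_counts())  # expected: 138,047 samples
```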
A complete and comprehensive analysis of the dataset is available in our project
repository [35]. It contains information regarding the meaning of the metadata, as well as a
broader statistical analysis of the elements contained within.
Table 2. Categories of the attributes of an executable file on the basis of metadata from the dataset.

Category: Attribute(s)
Name: Name of the executable
MD5: MD5 checksum of the executable
Header: Optional header size
Features: Linker major version, Linker minor version, Entry point address, Image base, Section alignment, File alignment, Loader flags, Rva number and sizes, Subsystem, DLL features, Backup stack size, Commit stack size, Heap commit size, Nb sections, Imports Nb DLL, Imports Nb Ordinal, Imports Nb
Size: Code size, Initialized data size, Uninitialized data size
Codebase: Code base
Database: Data base
Operating system related: Operating System Major Version, Operating System Minor Version
Configuration size: Configuration Load Size, Version Information Size
Legitimacy: Legitimacy flag
Table 3. Summarized statistics of some of the attributes in the dataset (mean, std, min).
3.2. Representation
We also analyzed each attribute to determine the distribution of its data, with the
objective of identifying which normalization strategy would most accurately depict the
information. In this step, we discard string-based fields and focus entirely on a
numerical approach. To pre-emptively select the normalization strategy, we employ an
algorithm that weighs the basic statistical tendencies of each column, as sketched below.
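A minimal sketch of such a selection rule follows, assuming scikit-learn scalers; the 1.5 × IQR outlier fence and the 5% threshold are illustrative assumptions, not the paper's exact criteria.

```python
# A minimal sketch of the statistics-driven scaler selection described above;
# the 5% outlier threshold is an illustrative assumption, not the exact rule.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler

def choose_scaler(column: np.ndarray):
    q1, q3 = np.percentile(column, [25, 75])
    iqr = q3 - q1
    fence_low, fence_high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outlier_ratio = np.mean((column < fence_low) | (column > fence_high))
    if outlier_ratio > 0.05:
        return RobustScaler()     # frequent outliers: scale by median and IQR
    if abs(column.mean() - np.median(column)) < column.std():
        return StandardScaler()   # roughly symmetric: z-score scaling
    return MinMaxScaler()         # otherwise squash into [0, 1]
```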
As evidenced in Table 6, the algorithm most frequently selects a robust scaling strategy.
This strategy is crucial for handling outliers, which is logical for this dataset because
values frequently fall outside a simple norm.
After the algorithm evaluates the basic tendencies of the data, we manually analyze
column histograms to determine the most suitable normalization strategy. One common
observation is that bimodal or trimodal clusters manifest frequently in the data. By human
inspection, we determine where to remap bimodal distributions into simpler integer values
such as binary or ternary (a sketch of such a remapping follows). Once these decisions
are factored in, we produce an attribute-by-attribute normalization of the data on the
basis of machine and human inspection. We believe this effectively normalizes the data
for the subsequent models to train on.
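As an illustration of this remapping, a bimodal attribute can be thresholded at the valley between its two modes; the automatic valley search below is an assumed stand-in for our manual histogram inspection.

```python
# Illustrative remapping of a bimodal attribute to a binary value; the valley
# threshold found here automatically stands in for manual histogram inspection.
import numpy as np

def binarize_bimodal(column: np.ndarray, bins: int = 50) -> np.ndarray:
    counts, edges = np.histogram(column, bins=bins)
    # Take the emptiest bin between the two highest peaks as the split point.
    peaks = np.argsort(counts)[-2:]
    lo, hi = sorted(peaks)
    valley = lo + np.argmin(counts[lo:hi + 1])
    threshold = edges[valley + 1]
    return (column > threshold).astype(int)
```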
When an ANN is deployed, there must be a direct relationship between $x_1$ and the
input parameters $P$ such that $P = Q x_1$ for some positive real value of $Q$. Next, we
define $x_n$ as the final layer output of the ANN. Depending on the type of answer we
want, we modify its size: roughly speaking, each possible answer that the network can
give corresponds to another neuron in this layer. If we are trying to classify an input
into one of three classes, then $x_n = 3$.
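As a brief sketch of this sizing rule (using scikit-learn's MLPClassifier as an assumed stand-in for our implementation):

```python
# Sketch of the input/output sizing rule above; scikit-learn's MLPClassifier
# is an assumed stand-in, with hidden sizes taken from an architecture A.
from sklearn.neural_network import MLPClassifier

A = [32, 16, 8]  # example inner layers a_1..a_m
clf = MLPClassifier(hidden_layer_sizes=tuple(A))
# The input layer size x_1 is inferred from the feature matrix at fit time,
# and the output layer gets one unit per class (x_n = number of classes).
```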
Let $A$ be the architecture of the network such that:

$$A = \{a_1, a_2, \ldots, a_m\}$$

where each $a_i$ denotes the number of neurons in the $i$-th inner layer.
We can see that $A$ simply represents the inner layers of the ANN $X$. Designing the
inner layers is significantly more challenging than designing the input and output layers.
We need to consider several aspects, such as: (i) the length of $A$; (ii) every value
$a_i$; and (iii) how each one plays a meaningful role in the convergence of the output.
To determine an optimal value of $A$, a genetic algorithm is used. We define a genetic
algorithm $G$, a population $P$, a chromosome $C$, a fitness function $F$, and a mutation
rate $M$ as a tuple $G = (C, P, M, F)$.
Let $P = \{X_1, X_2, \ldots, X_l\}$ such that every $X_i$ conforms to the previously
given ANN definition.
Let $C = \{A\}/x_1$, where the resulting operation yields:

$$C = \left\{ \left\lfloor \frac{a_1}{x_1} \right\rfloor, \left\lfloor \frac{a_2}{x_1} \right\rfloor, \ldots, \left\lfloor \frac{a_m}{x_1} \right\rfloor \right\}$$
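A simplified sketch of this genetic search over the inner-layer sizes $a_1, \ldots, a_m$ is given below; the population size, mutation rate, selection scheme, and value ranges are illustrative assumptions, and the fitness function $F$ (e.g., validation accuracy penalized by parameter count) is supplied by the caller.

```python
# A simplified sketch of a genetic search over inner-layer sizes; population
# size, rates, ranges, and the selection scheme are illustrative assumptions.
import random

def random_architecture(max_layers=4, max_neurons=64):
    return [random.randint(2, max_neurons)
            for _ in range(random.randint(1, max_layers))]

def mutate(arch, rate=0.1, max_neurons=64):
    return [random.randint(2, max_neurons) if random.random() < rate else a
            for a in arch]

def crossover(a, b):
    cut = random.randint(0, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(fitness, pop_size=20, generations=30):
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]           # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```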
GBM hyperparameters and values:
Number of estimators: 20, 30, 50, 150, 250
Learning rate: 0.01, 0.1, 0.2
Maximum depth: 3, 5, 7
Minimum samples split: 2, 6, 10
Number of iterations without change: 10, 15, 25

SVM hyperparameters and values:
C: 0.01, 0.1, 1, 10, 100, 1000, 10,000
Gamma: 10, 1, 0.1, 0.01, 0.001, 0.0001, 'scale', 'auto'
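These grids map plausibly onto a scikit-learn grid search such as the following; the estimator classes and the use of 5-fold cross-validation are our assumptions.

```python
# A plausible reconstruction of the hyperparameter searches implied by the
# grids above; estimator choices and 5-fold CV are assumptions.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

svm_grid = {
    "C": [0.01, 0.1, 1, 10, 100, 1000, 10000],
    "gamma": [10, 1, 0.1, 0.01, 0.001, 0.0001, "scale", "auto"],
}
gbm_grid = {
    "n_estimators": [20, 30, 50, 150, 250],
    "learning_rate": [0.01, 0.1, 0.2],
    "max_depth": [3, 5, 7],
    "min_samples_split": [2, 6, 10],
    "n_iter_no_change": [10, 15, 25],
}
svm_search = GridSearchCV(SVC(kernel="rbf"), svm_grid, cv=5, n_jobs=-1)
gbm_search = GridSearchCV(GradientBoostingClassifier(), gbm_grid, cv=5, n_jobs=-1)
# svm_search.fit(X_train, y_train); gbm_search.fit(X_train, y_train)
```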
4. Results
In this section, we present the results of the tests performed on each of the proposed ML
models (ANN, SVM, and GBM). Our objective is to compare and evaluate the performance
of the models in terms of accuracy and loss, as well as the resource requirements for
classification.
Table 9 presents the best raw results for each of the three ML models implemented; the
best results are shown in bold. The results clearly demonstrate that each of the three
models can be effectively employed in some capacity towards the goal of classification.
We also introduce a false positive ratio (FPR) and a false negative ratio (FNR), defined,
respectively, as:
$$\mathrm{FPR} = \frac{FP}{FP + TN}$$

$$\mathrm{FNR} = \frac{FN}{FN + TP}$$
where FP denotes false positives, TN denotes true negatives, FN denotes false negatives,
and TP denotes true positives. These metrics are essential as they allow us to further
compare the rates and the nature of the models’ failures. FPR denotes the likelihood that a
model will produce a false positive classification. Meanwhile, FNR denotes the likelihood
that a model will produce a false negative classification. The values range from 0 to 1,
where 0 indicates a perfect model and 1 indicates an inverse relationship to ground truth.
Though it would be ideal for models to have values close to 0, our target is to ensure that
models have an FNR below 0.05 and an FPR below 0.1. Loosely speaking, this would
suggest that the models are twice as likely to accidentally flag false positives as they are to
flag false negatives. In the case of malware analysis, this is consistent with a preference
towards caution when handling foreign software.
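For concreteness, the two metrics can be computed directly from a confusion matrix; the counts below are taken from the first confusion matrix reported in this section.

```python
# Computing the FPR and FNR defined above; the counts come from the first
# confusion matrix reported below (rows: actual, columns: predicted).
def fpr_fnr(tp, fn, fp, tn):
    return fp / (fp + tn), fn / (fn + tp)

fpr, fnr = fpr_fnr(tp=5947, fn=297, fp=403, tn=3353)
print(f"FPR = {fpr:.3f}, FNR = {fnr:.3f}")  # FPR = 0.107, FNR = 0.048
```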
Table 9. Summary of the experimental results of the machine learning models used. The
optimal model, the ANN, is shown in bold.
                    Predicted Positive    Predicted Negative
Actual Positive           5947                   297
Actual Negative            403                  3353
                    Predicted Positive    Predicted Negative
Actual Positive           6128                   342
Actual Negative            558                  2972
The preceding confusion matrix indicates that this model is less likely to correctly
identify a legitimate program than the ANN when accounting for accuracy. With a sharp
skew towards false positives, this suggests the model would be better suited as a first
line of analysis via filtering, as opposed to serving as the conclusive classifier.
                    Predicted Positive    Predicted Negative
Actual Positive           6048                   317
Actual Negative            483                  3152
5. Discussion
The comparative analysis of the ANN, SVM, and GBM models in the context of
classifying program metadata as malware or legitimate provides valuable insights into the
strengths and trade-offs of each approach. This discussion synthesizes these findings and
offers practical considerations for their application.
The ANN model demonstrates the highest level of accuracy found, particularly when
the alpha parameter is finely tuned. However, there is a noticeable trade-off between
accuracy and computational efficiency. The diminishing returns observed beyond an alpha
value of 0.6 suggest that this might be the optimal setting for balancing accuracy with
resource utilization. This balance is crucial in the context of malware analysis, where both
precision and efficiency are paramount. However, the lengthy training time (over 13 h on
Google Colab) highlights a significant area for optimization in future research, perhaps
through more efficient training algorithms or parallel processing techniques. The ANN
model is best suited for environments where high accuracy is needed and computational
resources, especially time, are not a primary constraint. The alpha value of 0.6 should be
considered as a starting point for achieving a balance between efficiency and accuracy.
On top of offering the leading accuracy among the competing models, its FPR and FNR
scores are strong indicators that the model generalized most correctly from the dataset.
This suggests that the ANN might be the best-rounded tool for the broader purpose of
malware detection among the three models explored.
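One plausible form of the alpha-weighted trade-off discussed here, offered as an assumption rather than the exact objective of our genetic search, is the following; such a function could also serve as the fitness $F$ in the search of Section 3.

```python
# One plausible form of the alpha-weighted accuracy/size trade-off discussed
# above; the exact objective used in the genetic search is not reproduced here.
def alpha_fitness(accuracy, n_params, max_params, alpha=0.6):
    # alpha = 1.0 optimizes accuracy only; lower values increasingly
    # penalize the network's parameter count.
    return alpha * accuracy - (1.0 - alpha) * (n_params / max_params)
```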
The SVM, particularly with the RBF kernel, stands out for its lower need for parameter
tuning and its competent performance, achieving an accuracy of 91.07%. This makes it an
appealing choice for scenarios where rapid model deployment is essential, or resources for
extensive model tuning are limited. Its advantageous performance in both training and
inference, compared to the more complex ANN and GBM models, also makes it suitable
for applications where computational resources are a constraint. The SVM model is ideal
for rapid deployment scenarios and where model simplicity and lower computational
overhead are valued. This model is particularly useful when the data are not excessively
large or complex. For more difficult datasets, the FNR and FPR scores indicate that the
model will not perform as favorably. Given that this model is the least flexible of the
three considered, its use should be limited to low-resource contexts only.
The GBM model exhibits a direct relationship between its complexity (as indicated
by the number of estimators and tree depth) and accuracy. This model reaches efficient
accuracy at an accuracy preference value of 0.79, suggesting a preferable trade-off point for
this specific task. The higher memory requirement associated with this setting is justified
by the substantial improvements in accuracy, which is particularly critical in malware
detection scenarios where the cost of false negatives can be high. This is corroborated by an
FPR of 0.133, which indicates a skew towards false positives anyway. This result signifies
that the GBM might be suited as an initial filtering technique as opposed to providing the
final verdict.
Among the three models tested, the ANN outperformed the others. The genetic
algorithm used for model selection not only enforces a memory optimization strategy
but also ensures that the best parameters are retained. This combination of the ANN and
the genetic algorithm gives us an optimized architecture for generalizing beyond the
dataset without the risk of overfitting and bias. This supports our belief that our ANN
architecture is an optimal candidate for resource-constrained static malware analysis. We
have effectively demonstrated the efficacy of lightweight models in this context, and we
hope that they can be applied to better address emerging IoT safety concerns.
6. Limitations
With an ever-changing software environment, it is difficult to predict the full scope of
advancements in malware. As such, a critical limitation of this research is in anticipating
state-of-the-art malware and future adjacent technologies. Our dataset, although providing
a comprehensive snapshot of malware in recent years, cannot and should not attempt to
account for future malware, technologies, and cybercriminal strategies.
With research centered around binary classification, we do not make an attempt to
further classify detected malware into its respective categories. Although an important
field, it sits beyond the intended goals of this study because it would require far more
robust models.
More sophisticated, obfuscated, or otherwise disguised malware is omnipresent and a
growing threat [12]. It is known that more robust ML strategies are required to handle
such adversity [7], but ML tailored towards IoT and other low-resource hardware is not
designed for this task.
7. Conclusions
The application of ML and deep learning models for malware detection has gained sig-
nificant traction in recent years, as is evident from the state-of-the-art literature. With sharp
results across various domains, it is clear that this topic is an important field of study
moving forward.
We assert that our study contributes to this domain of research by focusing on the effi-
cacy of ANN, SVM, and GBM models in static malware detection. The focus on lightweight
models augments some of the state of the art while primarily focusing on delivering results
within the constraints of broader IoT devices and subsequent applications.
Our experiment found that our ANN architecture performed favorably in malware
detection, with an accuracy of over 94% when classifying programs as malware or
legitimate. However, to adhere to the constraints of IoT, our projected architecture may
instead be sufficiently effective with an accuracy of 93.44% at an alpha value of 0.6,
roughly reducing the parameter count of the neural network by 40% relative to competing
ANNs while preserving proper generalization.
The SVM and GBM architectures proposed, though less effective than our ANN archi-
tecture, offer useful insight into the behavior of machine learning for malware classification.
On one hand, the SVM is significant because of its resource efficacy. With fewer parame-
ters than the competing models, it offers a relatively effective first line of defense for the
most resource constrained devices in IoT. The GBM, on the other hand, is a well-rounded
alternative to our ANN architecture with potential use in conjunction with other models.
While other works report extraordinary classification results [3,22], they do not
operate under resource constraints as strict as ours. Additionally, when compared
against current implementations of malware detection, where results can be as low as
63% and 70% [25], we consider our 93% accuracy a strong indicator of the viability of
our architecture. Of course, the results depend on the dataset used; without more
comprehensive testing on modern software suites, it is difficult to determine how
closely they align with contemporary data in practice.
In summary, our experiment presents a compelling case for the use of lightweight
machine learning models in malware detection. We believe that we offer researchers and
practitioners a viable and efficient alternative for combating the growing sophistication of
malware via lightweight ML models. Our findings reaffirm the potential of machine learn-
ing in cybersecurity and encourage further exploration and innovation in this crucial field.
If one removes the experimental constraint requiring lightweight models, then more
sophisticated deep learning models, such as convolutional neural networks [7], can be
tested. We expect an improvement in accuracy, but high computational resources are
required to train these kinds of deep learning models. As such, although significant,
these experimental conditions sit outside the scope of this research and represent a
fundamental limitation of machine learning aimed at low-parameter models.
References
1. Wang, H.; Zhang, W.; He, H.; Liu, P.; Luo, D.X.; Liu, Y.; Jiang, J.; Li, Y.; Zhang, X.; Liu, W.; et al. An evolutionary study of IoT
malware. IEEE Internet Things J. 2021, 8, 15422–15440. [CrossRef]
2. Gregorio, L.D. Evolution and Disruption in Network Processing for the Internet of Things: The Internet of Things (Ubiquity
symposium). Ubiquity 2015, 2015, 1–14. [CrossRef]
3. Vidyarthi, D.; Kumar, C.; Rakshit, S.; Chansarkar, S. Static malware analysis to identify ransomware properties. Int. J. Comput.
Sci. Issues 2019, 16, 10–17.
4. Sihwail, R.; Omar, K.; Ariffin, K.Z. A survey on malware analysis techniques: Static, dynamic, hybrid and memory analysis. Int.
J. Adv. Sci. Eng. Inf. Technol. 2018, 8, 1662–1671. [CrossRef]
5. Amin, M.; Tanveer, T.A.; Tehseen, M.; Khan, M.; Khan, F.A.; Anwar, S. Static malware detection and attribution in android
byte-code through an end-to-end deep system. Future Gener. Comput. Syst. 2020, 102, 112–126. [CrossRef]
6. Balram, N.; Hsieh, G.; McFall, C. Static malware analysis using machine learning algorithms on APT1 dataset with string and PE
header features. In Proceedings of the 2019 International Conference on Computational Science and Computational Intelligence
(CSCI), Las Vegas, NV, USA, 5–7 December 2019; pp. 90–95.
7. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [CrossRef]
8. Murray, A.F. Applications of Neural Networks; Springer: Berlin/Heidelberg, Germany, 1995.
9. Ijaz, M.; Durad, M.H.; Ismail, M. Static and dynamic malware analysis using machine learning. In Proceedings of the 2019
16th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan, 8–12 January 2019;
pp. 687–691.
10. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958,
65, 386. [CrossRef]
11. Virus Share. Available online: https://virusshare.com/ (accessed on 30 November 2022).
12. Mithal, T.; Shah, K.; Singh, D.K. Case studies on intelligent approaches for static malware analysis. In Proceedings of the
Emerging Research in Computing, Information, Communication and Applications, Bangalore, India, 11–13 September 2015;
Volume 3, pp. 555–567.
13. Malik, K.; Kumar, M.; Sony, M.; Mukhraiya, R.; Girdhar, P.; Sharma, B. Static Malware Detection and Analysis Using Machine Learning Methods. Adv. Appl. Math. Sci. 2022, 21, 4183–4196.
14. Vinayakumar, R.; Soman, K. DeepMalNet: Evaluating shallow and deep networks for static PE malware detection. ICT Express
2018, 4, 255–258.
15. Baldangombo, U.; Jambaljav, N.; Horng, S.J. A static malware detection system using data mining methods. arXiv 2013,
arXiv:1308.2831.
16. Milosevic, N.; Dehghantanha, A.; Choo, K.K.R. Machine learning aided Android malware classification. Comput. Electr. Eng.
2017, 61, 266–274. [CrossRef]
17. Agrawal, P.; Trivedi, B. Machine learning classifiers for Android malware detection. In Data Management, Analytics and Innovation;
Springer: Singapore, 2021; Volume 1174, pp. 311–322.
18. Santos, I.; Devesa, J.; Brezo, F.; Nieves, J.; Bringas, P.G. Opem: A static-dynamic approach for machine-learning-based malware
detection. In Proceedings of the International Joint Conference CISIS’12-ICEUTE’12-SOCO’12 Special Sessions, Ostrava, Czech
Republic, 5–7 September 2013; pp. 271–280.
19. Rathore, H.; Agarwal, S.; Sahay, S.K.; Sewak, M. Malware detection using machine learning and deep learning. In Proceedings of
the Big Data Analytics: 6th International Conference, BDA 2018, Warangal, India, 18–21 December 2018; pp. 402–411.
20. Fleshman, W.; Raff, E.; Zak, R.; McLean, M.; Nicholas, C. Static malware detection & subterfuge: Quantifying the robustness of
machine learning and current anti-virus. In Proceedings of the 2018 13th International Conference on Malicious and Unwanted
Software (MALWARE), Nantucket, MA, USA, 22–24 October 2018; pp. 1–10.
21. Vinayakumar, R.; Alazab, M.; Soman, K.; Poornachandran, P.; Venkatraman, S. Robust intelligent malware detection using deep
learning. IEEE Access 2019, 7, 46717–46738. [CrossRef]
22. Feng, J.; Shen, L.; Chen, Z.; Wang, Y.; Li, H. A two-layer deep learning method for android malware detection using network
traffic. IEEE Access 2020, 8, 125786–125796. [CrossRef]
23. Pan, Y.; Ge, X.; Fang, C.; Fan, Y. A systematic literature review of android malware detection using static analysis. IEEE Access
2020, 8, 116363–116379. [CrossRef]
24. Mangialardo, R.J.; Duarte, J.C. Integrating static and dynamic malware analysis using machine learning. IEEE Lat. Am. Trans.
2015, 13, 3080–3087. [CrossRef]
25. Jain, A.; Singh, A.K. Integrated Malware analysis using machine learning. In Proceedings of the 2017 2nd International Conference
on Telecommunication and Networks (TEL-NET), Noida, India, 10–11 August 2017; pp. 1–8.
26. Or-Meir, O.; Nissim, N.; Elovici, Y.; Rokach, L. Dynamic malware analysis in the modern era—A state of the art survey. ACM
Comput. Surv. 2019, 52, 88. [CrossRef]
27. Rhode, M.; Burnap, P.; Jones, K. Early-stage malware prediction using recurrent neural networks. Comput. Secur. 2018, 77, 578–594.
[CrossRef]
28. Baek, S.; Jeon, J.; Jeong, B.; Jeong, Y.S. Two-stage hybrid malware detection using deep learning. Hum.-Centric Comput. Inf. Sci.
2021, 11, 10-22967.
29. Fang, Y.; Zeng, Y.; Li, B.; Liu, L.; Zhang, L. DeepDetectNet vs. RLAttackNet: An adversarial method to improve deep
learning-based static malware detection model. PLoS ONE 2020, 15, e0231626. [CrossRef] [PubMed]
30. Tayyab, U.e.H.; Khan, F.B.; Durad, M.H.; Khan, A.; Lee, Y.S. A Survey of the Recent Trends in Deep Learning Based Malware
Detection. J. Cybersecur. Priv. 2022, 2, 800–829. [CrossRef]
31. Prayudi, Y.; Riadi, I.; Yusirwan, S. Implementation of malware analysis using static and dynamic analysis method. Int. J. Comput.
Appl. 2015, 117, 11–15.
32. Chikapa, M.; Namanya, A.P. Towards a fast off-line static malware analysis framework. In Proceedings of the 2018 6th
International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), Barcelona, Spain, 6–8 August 2018;
pp. 182–187.
33. Aslan, Ö. Performance comparison of static malware analysis tools versus antivirus scanners to detect malware. In Proceedings
of the International Multidisciplinary Studies Congress (IMSC), Antalya, Turkey, 25–26 November 2017.
34. Martín, A.; Lara-Cabrera, R.; Camacho, D. A new tool for static and dynamic Android malware analysis. In Data Science and
Knowledge Engineering for Sensing Decision Support, Proceedings of the 13th International FLINS Conference (FLINS 2018), Belfast, UK,
21–24 August 2018; World Scientific: Singapore, 2018; pp. 509–516.
35. Contreras, C.; Baker, R.; Gutiérrez, A.; Cerda, J. Machine Learning Malware Detection. Available online: https://github.com/
CarlosConpe/Machine-Learning-Malware-Detection/ (accessed on 18 December 2023).
36. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
[CrossRef]
37. Kapanova, K.; Dimov, I.; Sellier, J. A genetic approach to automatic neural network architecture optimization. Neural Comput.
Appl. 2018, 29, 1481–1492. [CrossRef]
38. Bukhtoyarov, V.V.; Semenkin, E. A comprehensive evolutionary approach for neural network ensembles automatic design. Sib.
Aerosp. J. 2010, 11, 14–19.
39. Miller, G.F.; Todd, P.M.; Hegde, S.U. Designing Neural Networks Using Genetic Algorithms. In Proceedings of the ICGA, Fairfax,
VA, USA, 4–7 June 1989; pp. 379–384.
40. Schaffer, J.D.; Whitley, D.; Eshelman, L.J. Combinations of genetic algorithms and neural networks: A survey of the state of the
art. In Proceedings of the International Workshop on Combinations of Genetic Algorithms and Neural Networks, Baltimore, MD,
USA, 6 June 1992; pp. 1–37.