
International Journal of Mechanical Engineering
ISSN: 0974-5823, Vol. 7, No. 6, June 2022

An Efficient Genetic Boruta (GenBoruta) Algorithm Based Feature Selection on Brain Tumor Dataset
R. Vinayaga Moorthy1, Dr. R. Balasubramanian2
Research Scholar1, Professor2,
Department of Computer Science & Engineering,
Manonmaniam Sundaranar University, Tirunelveli-627 012

Abstract
The World Health Organization has identified brain tumors as one of the most severe illnesses, since they affect people of all ages, including children, worldwide. A system that identifies brain tumors at an early stage would help save many lives. Much research has been carried out on systems for detecting brain tumors; however, their accuracy still needs to be improved, and feature selection methods are expected to improve such systems. The main goal of feature selection in machine learning (ML) is to select a suitable subset of features. Wrapper methods are commonly classified into four categories: forward selection, backward elimination, exhaustive feature selection, and recursive feature elimination. In recent years, brain tumor disease has affected more people; it attacks the brain and can sometimes spread to other parts of the body. In this work, 55 features, such as image roughness, uniformity (energy), and local homogeneity, are extracted to show the quality differences between methods. Searching the space of possible feature subsets is the central difficulty of feature selection, and it is addressed here using Boruta, a feature selection algorithm based on random forests, combined with a genetic algorithm. We introduce this hybrid feature selection technique as GenBoruta, an algorithm for finding all relevant variables: it iteratively eliminates features that a statistical test shows to be less significant than random probes. The proposed technique performed well compared to existing techniques such as Forward Selection, Backward Elimination, Boruta, and the Genetic algorithm.
Keywords: Forward Selection, Backward Elimination, Recursive Feature Elimination, Genetic Algorithm, Boruta, GenBoruta

1. Introduction
Brain tumors are a very dangerous disease because of their effect on the brain. Feature selection techniques are used to select particular features: brain tumor features are attributes of the tumor that are helpful in solving the detection problem, and selecting the most important features for the downstream techniques is called feature selection. Without such automation, image processing needs human intervention.

2. Feature Selection Methods


There are two types of feature selection techniques: supervised and unsupervised. Supervised techniques are classified into three categories: filter methods, wrapper methods, and embedded methods.

The following Table 1 compares the three types of feature selection method.

Filter: a generic set of techniques that does not incorporate a particular machine learning (ML) algorithm. Much faster than wrapper techniques in terms of time complexity. Less prone to over-fitting.

Wrapper: evaluates a particular ML algorithm to find the optimal features. High computation time for a dataset with numerous features. Highest chance of over-fitting, since it involves training ML models with various combinations of features.

Embedded: embeds feature selection within the model-building process; selection is performed at every iteration of the model training stage. Sits between filter and wrapper techniques in terms of time complexity. Mostly used to reduce over-fitting by penalizing model coefficients that become excessively large.

Table 1: Comparison of feature selection methods

2.1 Filter Method
The filter method assesses features using statistical measures, independently of any learning algorithm. Feature selection has become increasingly important for machine learning, data mining, and data analysis. Especially for high-dimensional datasets, it is important to filter out insignificant and redundant features by selecting an appropriate subset of important features, in order to avoid over-fitting and tackle the curse of dimensionality. For datasets from the medical domain in particular, feature selection permits identifying the features significant for the medical process of interest [3].
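
As an illustration, the sketch below is a minimal filter-method example in scikit-learn, assuming a synthetic stand-in for the 55 extracted features: each feature is scored with mutual information independently of any classifier, and the top-scoring ones are kept.

```python
# A minimal filter-method sketch (not the paper's exact pipeline): score each
# feature with mutual information, independently of any classifier, keep top k.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for the 55 extracted features and tumor labels.
X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_reduced = selector.fit_transform(X, y)
print("kept feature indices:", selector.get_support(indices=True))
```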

2.2 Wrapper Method

Wrapper techniques depend on greedy search algorithms. They evaluate candidate combinations of the features and select the combination that delivers the best outcome for a particular machine learning algorithm. The wrapper method is thus used to assess feature subsets, and the interaction between the feature subset and the prediction model is taken into account.

2.3 Embedded Method

In embedded methods, the feature selection algorithm is integrated as part of the learning algorithm. Embedded techniques combine the characteristics of filter and wrapper methods: the learning algorithm exploits its own variable selection process and simultaneously performs feature selection and classification/regression.
In [20], embedded methods are valued for their lower computational cost, making this a fast approach. Features are selected inductively inside the algorithm, i.e., the classification function and the selection of feature subsets are learned jointly. Generally, these methods optimize objective functions that trade off classification accuracy against the use of additional features. In [24], embedded methods assess the feature subset by inserting feature selection into the cycle of classifier construction. All of these feature selection methods are intended to perform the same task with different design methodologies; each has its benefits and weaknesses from perspectives such as model complexity, computational effectiveness, and time efficiency.
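
As an illustration, the sketch below shows the embedded idea with an L1-penalized logistic regression in scikit-learn (a choice made here for illustration; the cited works use other learners): the penalty drives uninformative coefficients to zero during training, so selection happens inside the model fit itself.

```python
# A minimal embedded-method sketch: an L1 penalty drives uninformative
# coefficients to zero during training, so selection happens inside the fit.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector = SelectFromModel(l1_model).fit(X, y)
print("features kept by the embedded model:", selector.get_support().sum())
```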

3. Literature Review
3.1 Wrapper Method
The wrapper method approach treats feature selection as a decision-making problem. An optimization approach based on hybrid wrapper-filter feature selection and ensemble classification has been used to minimize the impact of an imbalanced brain tumor dataset and to assess accuracy [21]. In [10], a learning algorithm is used to identify the most suitable subset of features and to assess each subset's accuracy in predicting the target.

3.2 Forward Selection

In [8], the forward selection process begins with no variables and adds them one at a time: each step adds the variable that most reduces the error, until adding a further variable no longer reduces the error notably. Forward selection is a simple ranking-based feature selection method. It is restricted to the features with the largest individual contribution rates; in practical cases, however, the optimal combination of features is typically not composed of these features alone, so the feature subset obtained by forward selection is likely to lead to lower classification accuracy. In [16], the forward selection (FS) algorithm starts from a null model with no covariates and at each step adds a new variable to the existing model based on some criterion, such as the decrease in the residual sum of squares (RSS).

3.3 Backward Elimination


S.Vanaja et al. [19] Select a feature using the backward eliminations method to select and eliminate the
appropriate feature. The feature selection is a three-stage process in a particular search, evaluation, and stop.
The wrapper method uses backward elimination to eliminate the irrelevant features from the subset. In [15],
backward elimination can detect models where all the elements are genuine. If no noise variables are present,
the model is found to be valid. A second unfounded assumption is that classic least squares estimation results
can be applied to the final model.

3.4 Recursive Feature Elimination

B. Sathees Kumar [4] applies the recursive feature elimination method to identify the important features before even conducting the classification task; the technique is parameterized by its iteration count and subset size, and it uses the random forest principle with k-fold validation. In [23], a recursive feature elimination algorithm is used to remove a single feature at a time and to solve a multiclass classification problem; the weight vector guiding recursive removal is computed on each subsample and fed into every individual classifier.

3.5 Genetic Algorithm

Sourabh Katoch et al. [22] select features using a genetic algorithm (GA) to assess the best features. In addition to describing each component of GAs, their paper surveys recent research on GAs, encouraging researchers to understand GA fundamentals, apply them in their own work, and expand the range of possible users. A GA consists of encoding, selection, crossover, and mutation. In [13], GAs have been used efficiently in the feature selection problem to reduce high-dimensional datasets. One disadvantage of this technique is that it does not consider the associations among features when selecting the final set; accordingly, the probability of choosing a subset with redundancy increases.
L. Haldurai et al. [11] note that the selection operation involves choosing elite individuals from the current population that can produce offspring; individuals are judged on their fitness values to determine whether they are elitists. In [1], crossover in a GA exchanges information between good solutions to form new and hopefully better solutions, facilitating rapid convergence towards a good solution. Crossover variants include single-point, two-point, k-point, uniform, partially matched, order, precedence-preserving, shuffle, reduced surrogate, and cycle crossover. In [6], mutation is the process of changing chromosome genomes, resulting in the flipping of bits (genes) of chromosomes; mutation variants include displacement, simple inversion, and scramble mutation.
3.6 Boruta Algorithm
In [25], the Boruta algorithm is a wrapper algorithm that performs robust, statistically grounded feature selection; a random forest (RF)-based Boruta algorithm was applied to select the optimal feature groups based on HCR and DLR features. Johannes Haubold et al. [9] select features using the Boruta algorithm to reduce the noise added by redundant features: a subset of features with the highest predictive power must be chosen from among the extracted features, and to achieve this they combine the Boruta wrapper method with gradient boosting (XGBoost). In [17], the Boruta package lets the user select the most important features of an information system based on unbiased criteria, whereas manual methods are more likely to introduce errors; the Boruta wrapper algorithm selects features without bias towards particular attributes, and supervised learning is then used to train on the engineered features.

3.7 Hybrid Genetic Algorithm

Ahmed Kharrat and Mahmoud Neji [2] proposed a hybrid SA-GA algorithm for selecting optimal feature subsets from a large number of features, capable of acting as a tool for computer-aided diagnosis in the segmentation of brain tumors on magnetic resonance images. Bain Khusnul Khotimah et al. [5] select features using a hybrid algorithm that produces an effective, smaller feature set and strengthens the weakest features; since Naive Bayes classification achieves higher accuracy on several heterogeneous datasets, the GA works well in finding optimal feature values, achieving imputation results with an error rate of 10%.
Joans and Sandhiya [18] proposed a genetic-algorithm-based feature selection technique to classify MRI images using a random forest classifier. For a hybrid medical image retrieval system, a GA approach is introduced to choose a dimensionality-reduced set of features. The framework has three stages. In the first stage, three distinct algorithms are used to extract the essential features from the images: the texton-based intrinsic pattern extraction algorithm, the contour gradient extraction algorithm, and the shift-invariant feature transformation algorithm. In the second stage, GA-based feature selection identifies the most promising feature vector using a hybrid of the Branch and Bound algorithm and the Artificial Bee Colony algorithm, applied to brain tumor, thyroid, and breast cancer images. The Chi-square distance is used to estimate the similarity between query images and database images, and a fitness function based on the minimum description length rule serves as the starting requirement for the GA. In the third stage, a diverse density-based relevance feedback technique improves the performance of the hybrid content-based medical image retrieval system. The term hybrid is used because this framework can retrieve any kind of clinical image, such as brain tumor, breast cancer, thyroid disease, or lung cancer images [7].

4. Methodology
Characterizing pictorial information requires identifying the significant features present in images that lead to a classification. Such features can be grouped into simple and complex features, and selecting among them can also improve classifier performance. In machine learning and statistics, feature selection, also known as variable selection, attribute selection, or variable subset selection, is the process of choosing a subset of relevant features (variables, predictors) for model construction.
The wrapper method uses a predictive model to score feature subsets. Each new subset is used to train a model, which is then tested on a hold-out set. Because wrapper methods train a new model for every subset, they are computationally intensive, but they usually provide the best-performing feature set for that specific type of model or typical problem.

Fig 1- Feature Selection Proposed Architecture

4.1 Feature Extraction

This section describes how the dataset's dimensionality is reduced using feature extraction techniques: shape descriptors, (first-order) histogram-based metrics, and the grey-level co-occurrence matrix (GLCM).
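
As an illustration, GLCM texture features can be computed with scikit-image; the sketch below is a minimal example assuming a 2D grey-level slice, and scikit-image exposes only a handful of the 27 GLCM features used in this work (a full radiomics library would be needed for the complete set).

```python
# A minimal GLCM sketch with scikit-image; the random array stands in for a
# grey-level MRI slice quantized to 8-bit intensities.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in slice
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```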

The following Tables 2, 3, and 4 list the three groups of extracted features.


Shape Descriptors
1. Mesh Volume
2. Voxel Volume
3. Surface Area
4. Surface Area to Volume Ratio
5. Sphericity
6. Compactness 1
7. Compactness 2
8. Spherical Disproportion
9. Major Axis Length
10. Minor Axis Length
11. Least Axis Length
12. Elongation
13. Flatness
Table 2: Shape descriptor features

(First-Order) Histogram-Based Metrics
1. Energy
2. Total Energy
3. Entropy
4. Max Intensity
5. Min Intensity
6. Mean Value
7. Mean Absolute Deviation
8. Robust Mean Absolute Deviation (rMAD)
9. Range
10. Root Mean Square (RMS)
11. Standard Deviation
12. Uniformity
13. Variance
14. Skewness
15. Kurtosis
Table 3: (First-order) histogram-based metric features

Grey-Level Co-occurrence Matrix (GLCM)
1. Autocorrelation
2. Joint Average
3. Cluster Prominence
4. Cluster Shade
5. Cluster Tendency
6. Contrast
7. Correlation
8. Difference Average
9. Difference Entropy
10. Difference Variance
11. Dissimilarity
12. Joint Energy
13. Joint Entropy
14. Homogeneity
15. Information Measure of Correlation 1
16. Information Measure of Correlation 2
17. Inverse Difference Moment
18. Maximum Correlation Coefficient
19. Inverse Difference Moment Normalized (IDMN)
20. Inverse Difference (ID)
21. Inverse Difference Normalized (IDN)
22. Inverse Variance
23. Maximum Probability
24. Sum Average
25. Sum Variance
26. Sum Entropy
27. Sum of Squares
Table 4: GLCM features

4.2 Forward Selection Algorithm
Forward selection is an iterative technique that starts with no features in the model. In each iteration we add the feature that best improves the model, until adding another variable no longer improves its performance. To make the optimization tractable, it is important to reduce the system's degrees of freedom so that the problem remains compatible with the most common optimization procedures.

Fig 2- Forward Selection Architecture
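
As a concrete illustration, the sketch below is one possible forward-selection implementation using scikit-learn's SequentialFeatureSelector (not the paper's actual code); the synthetic data and the target of 25 features are assumptions made for illustration.

```python
# A minimal forward-selection sketch: greedily add the feature that most
# improves cross-validated accuracy until 25 features are selected.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

sfs = SequentialFeatureSelector(RandomForestClassifier(random_state=0),
                                n_features_to_select=25,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("selected feature indices:", sfs.get_support(indices=True))
```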

4.3 Backward Elimination Algorithm

Backward elimination is also an iterative approach, but it works in the opposite direction to forward selection. As a feature selection technique for building machine learning models, the process starts by considering all the features and then removes the least significant one. It continues until removing further features no longer improves the model's performance.

Fig 3- Backward Elimination Architecture
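
A matching backward sketch, under the same synthetic-data assumption, starts from all 55 features and drops the least useful one at each step; stopping at 22 features mirrors the count later reported in Table 6 and is an illustrative choice.

```python
# A minimal backward-elimination sketch: start from all features and drop the
# one whose removal hurts cross-validated accuracy the least.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

sbs = SequentialFeatureSelector(RandomForestClassifier(random_state=0),
                                n_features_to_select=22,
                                direction="backward", cv=5)
sbs.fit(X, y)
print("remaining feature indices:", sbs.get_support(indices=True))
```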

4.4 Recursive Feature Elimination (RFE)

RFE is a feature selection technique that fits a model and eliminates the weakest feature (or features) until the specified number of features is reached. Recursive feature elimination with cross-validation reveals which features are significant, along with their ranking, which enables us to build the model with the ideal dimensions.

Fig 4- Recursive Feature Elimination Architecture
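
The sketch below is a minimal RFE-with-cross-validation example, again under the same synthetic-data assumption, using a random forest and 5-fold validation as the section describes.

```python
# A minimal RFECV sketch: fit a forest, rank features by importance, drop the
# weakest, and refit; cross-validation picks the subset size with best accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

rfecv = RFECV(RandomForestClassifier(random_state=0), step=1,
              cv=StratifiedKFold(5), scoring="accuracy")
rfecv.fit(X, y)
print("optimal number of features:", rfecv.n_features_)
print("feature ranking (1 = selected):", rfecv.ranking_)
```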

4.5 Genetic Algorithm
A GA is a heuristic search technique used in artificial intelligence (AI) and computing. It finds optimized solutions to search problems based on the theory of natural selection and evolutionary biology, and it is excellent for searching large and complex datasets.

Fig 5- Genetic Algorithm Architecture
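
The sketch below is a minimal, self-contained GA for feature selection, assuming bitmask individuals and cross-validated random-forest accuracy as fitness; the population size, generation count, and mutation rate are illustrative choices, not the paper's settings.

```python
# A minimal GA feature-selection sketch: individuals are feature bitmasks,
# fitness is 3-fold cross-validated random-forest accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)
n_feat, pop_size, n_gen = X.shape[1], 20, 10

def fitness(mask):
    if mask.sum() == 0:
        return 0.0  # an empty subset cannot classify anything
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection: the fitter of two random individuals becomes a parent.
    winners = [max(rng.choice(pop_size, 2), key=lambda i: scores[i])
               for _ in range(pop_size)]
    parents = pop[winners]
    # Single-point crossover between consecutive parent pairs.
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_feat)
        children[i, cut:] = parents[i + 1, cut:]
        children[i + 1, cut:] = parents[i, cut:]
    # Bit-flip mutation with a small per-gene probability.
    flip = rng.random(children.shape) < 0.02
    children[flip] ^= 1
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("GA-selected feature indices:", np.flatnonzero(best))
```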

4.6 Boruta Algorithm


Perform rearranging of predictors qualities, go along with them with the first predictors, and afterward
construct a random forest on the consolidated dataset. Then, at that point, make the correlation of unique
factors with the randomized factors to quantify variable significance. Just factors having higher significance
than that of the randomized factors are viewed as significant.

Fig 6- Boruta Algorithm Architecture
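
A minimal Boruta sketch using the third-party BorutaPy package is shown below (the package choice is an assumption; the paper does not name its implementation).

```python
# A minimal Boruta sketch with BorutaPy (pip install Boruta); the synthetic
# data is a stand-in for the 55 extracted features and tumor labels.
import numpy as np
from boruta import BorutaPy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

rf = RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=0)
# Boruta shuffles a copy of every column ("shadow" features), fits the forest
# on originals plus shadows, and confirms only the real features whose
# importance beats the best shadow importance.
boruta = BorutaPy(rf, n_estimators="auto", max_iter=50, random_state=0)
boruta.fit(X, y)
print("confirmed feature indices:", np.flatnonzero(boruta.support_))
```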

4.7 Proposed (GenBoruta) Feature Selection Technique

The technique proposed in this work builds a hybrid model that combines a GA and Boruta, with the aim of classifying samples after selecting a small number of important variables. The hybrid combination is intended to overcome the disadvantages of each individual method.

Fig 7- Architecture of proposed GenBoruta Feature Selection Technique
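
The paper gives no pseudocode for GenBoruta, so the sketch below is only one plausible reading of the hybrid: Boruta first screens out features that cannot beat random probes, and a GA then searches for the best subset within the confirmed features. BorutaPy and every parameter value here are illustrative assumptions.

```python
# A speculative GenBoruta-style sketch (the paper gives no pseudocode):
# Stage 1 screens features with Boruta, Stage 2 refines them with a small GA.
import numpy as np
from boruta import BorutaPy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=55,
                           n_informative=10, random_state=0)

# Stage 1 (Boruta): keep only features that beat their shuffled shadow copies.
rf = RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=0)
keep = BorutaPy(rf, n_estimators="auto", max_iter=50,
                random_state=0).fit(X, y).support_
Xs = X[:, keep]

# Stage 2 (GA): evolve bitmasks over the screened features only.
def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, Xs[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, Xs.shape[1]))
for _ in range(10):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]        # elitist: fittest first
    for i in range(10, 20):                    # breed over the weaker half
        a, b = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]
        cut = rng.integers(1, Xs.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])  # single-point crossover
        child ^= (rng.random(child.size) < 0.05).astype(child.dtype)  # mutate
        pop[i] = child

best = pop[np.argmax([fitness(m) for m in pop])]
print("GenBoruta-style subset:", np.flatnonzero(keep)[best.astype(bool)])
```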

5. Experimental Setup
5.1 Dataset Description
A brain tumor dataset was used for training and evaluating our approach. The dataset, obtained from the Kaggle website, comprises 3264 brain MRI images classified into four classes: glioma, meningioma, pituitary, and no tumor.
The following Table 5 lists the 55 features.

Shape descriptors (attributes 1-13): Mesh Volume, Voxel Volume, Surface Area, Surface Area to Volume Ratio, Sphericity, Compactness 1, Compactness 2, Spherical Disproportion, Major Axis Length, Minor Axis Length, Least Axis Length, Elongation, Flatness.
First-order histogram-based metrics (attributes 14-28): Energy, Total Energy, Entropy, Max Intensity, Min Intensity, Mean Value, Mean Absolute Deviation, Robust Mean Absolute Deviation (rMAD), Range, Root Mean Square (RMS), Standard Deviation, Uniformity, Variance, Skewness, Kurtosis.
GLCM features (attributes 29-55): Autocorrelation, Joint Average, Cluster Prominence, Cluster Shade, Cluster Tendency, Contrast, Correlation, Difference Average, Difference Entropy, Difference Variance, Dissimilarity, Joint Energy, Joint Entropy, Homogeneity, Information Measure of Correlation 1, Information Measure of Correlation 2, Inverse Difference Moment, Maximum Correlation Coefficient, Inverse Difference Moment Normalized (IDMN), Inverse Difference (ID), Inverse Difference Normalized (IDN), Inverse Variance, Maximum Probability, Sum Average, Sum Variance, Sum Entropy, Sum of Squares.
Table 5: The 55 features (attributes 1-55)

5.2 Results
Performance is evaluated on features extracted as shape descriptors, (first-order) histogram-based metrics, and grey-level co-occurrence matrix (GLCM) features. This part shows the experimental results of the proposed GenBoruta framework: 1) the method selects more essential and relevant features than existing methods for detecting brain tumors; 2) compared to existing methods, the proposed method produces better results. The following Table 6 lists the selected features.

1. Forward Selection (25 features): Surface Area to Volume Ratio, Least Axis Length, Robust Mean Absolute Deviation (rMAD), Cluster Tendency, Joint Average, Joint Entropy, Joint Energy, Kurtosis, Skewness, Uniformity, Difference Entropy, Contrast, Standard Deviation, Inverse Difference Normalized (IDN), Difference Variance, Sum Average, Sum of Squares, Least Axis Length, Minor Axis Length, Sphericity, Mesh Volume, Maximum Correlation Coefficient, Maximum Probability, Mean Value, Min Intensity.
2. Backward Elimination (22 features): Mesh Volume, Surface Area, Sphericity, Spherical Disproportion, Minor Axis Length, Elongation, Energy, Total Energy, Max Intensity, Mean Value, Robust Mean Absolute Deviation (rMAD), Root Mean Square (RMS), Uniformity, Skewness, Autocorrelation, Cluster Prominence, Cluster Tendency, Correlation, Difference Entropy, Dissimilarity, Joint Entropy, Information Measure of Correlation 1.
3. Recursive Feature Elimination (21 features): Mesh Volume, Voxel Volume, Surface Area, Surface Area to Volume Ratio, Sphericity, Uniformity, Variance, Skewness, Kurtosis, Autocorrelation, Maximum Probability, Sum Average, Sum Variance, Sum Entropy, Sum of Squares, Max Intensity, Min Intensity, Mean Value, Mean Absolute Deviation, Robust Mean Absolute Deviation (rMAD), Cluster Shade.
4. Boruta (14 features): Voxel Volume, Surface Area to Volume Ratio, Uniformity, Variance, Kurtosis, Autocorrelation, Maximum Probability, Sum Average, Sum Variance, Sum of Squares, Max Intensity, Min Intensity, Mean Value, Cluster Shade.
5. Genetic (15 features): Mesh Volume, Surface Area, Surface Area to Volume Ratio, Sphericity, Uniformity, Variance, Skewness, Kurtosis, Autocorrelation, Maximum Probability, Sum Average, Sum Entropy, Max Intensity, Mean Value, Robust Mean Absolute Deviation (rMAD).
6. GenBoruta (12 features): Surface Area, Flatness, Skewness, Uniformity, Contrast, Correlation, Range, Cluster Shade, Dissimilarity, Maximum Probability, Variance, Max Intensity.
Table 6: Features selected by each feature selection algorithm

Feature Selection Technique       Features Selected   Accuracy (%)   Error Rate (%)
Forward Selection                 25/55               94             11
Backward Elimination              22/55               96             9
Recursive Feature Elimination     21/55               95.5           8.2
Boruta                            14/55               96             9
Genetic                           15/55               96.5           8
Proposed (GenBoruta)              12/55               97.5           7
Table 7: Performance analysis of various feature selection techniques and the proposed technique

Fig 8- Graphical representation of the features selected, accuracy, and error rate for each technique

6. Conclusion
The proposed hybrid algorithm combines the genetic algorithm and the Boruta algorithm. The hybrid GenBoruta algorithm inherits the benefits of both existing algorithms when choosing an optimal feature subset from a large number of features. The feature selection method is applied to the extracted features to select the most appropriate ones, and the crucial features are identified using GenBoruta. GenBoruta obtained the best performance among the compared methods, reducing the error rate while increasing the accuracy. The proposed approach gives a significant improvement over four closely related techniques, accomplishing an accuracy of 97.5% with an error rate of 7%. Our approach proved its efficiency in feature selection on the brain tumor dataset. The study was implemented in Jupyter Notebook version 6.3.0, with Python used for coding. Our model was chosen because of its high predictive accuracy.

References
1. Ahmed Kharrat and Mahmoud Neji, "A Hybrid Feature Selection for MRI Brain Tumor Classification", International Conference on Innovations in Bio-Inspired Computing and Applications, Springer, pp. 329-338, March 2018.
2. Ahmed Kharrat and Mahmoud Neji, "Feature selection based on hybrid optimization for magnetic resonance imaging brain tumor classification and segmentation", Applied Medical Informatics, Vol. 41, Issue 1, pp. 9-23, March 2019.
3. Andrea Bommert, Xudong Sun, Bernd Bischl, Jörg Rahnenführer, and Michel Lang, "Benchmark for filter methods for feature selection in high-dimensional classification data", Vol. 143, pp. 1-22, March 2020.
4. B. Sathees Kumar, "Identification and Classification of Brain Tumor Images Using Efficient Classifier", International Journal of Engineering and Advanced Technology, Vol. 8, Issue 6, pp. 3677-3683, August 2019.
5. Bain Khusnul Khotimah, Miswanto Miswanto, and Herry Suprajitno, "Optimization of Feature Selection Using Genetic Algorithm in Naïve Bayes Classification for Incomplete Data", International Journal of Intelligent Engineering and Systems, Vol. 13, Issue 1, pp. 334-343, December 2020.
6. S. Mary Joans and J. Sandhiya, "A Genetic Algorithm Based Feature Selection for Classification of Brain MRI Scan Images Using Random Forest Classifier", International Journal of Advanced Engineering Research and Science, Vol. 4, Issue 5, pp. 124-130, May 2017.
7. G. Nagarajan, R. I. Minu, B. Muthukumar, V. Vedanarayanan, and S. D. Sundarsingh, "Hybrid Genetic Algorithm for Medical Image Feature Extraction and Selection", Procedia Computer Science, Vol. 85, pp. 455-462, 2016.
8. Jianli Ding and Liyang Fu, "A Hybrid Feature Selection Algorithm Based on Information Gain and Sequential Forward Floating Search", Journal of Intelligent Computing, Vol. 9, Issue 3, pp. 93-103, September 2018.
9. Johannes Haubold, René Hosch, Vicky Parmar, Martin Glas, Nika Guberina, Onofrio Antonio Catalano, Daniela Pierscianek, Karsten Wrede, Cornelius Deuschl, Michael Forsting, Felix Nensa, Nils Flaschel, and Lale Umutlu, "Fully Automated MR Based Virtual Biopsy of Cerebral Gliomas", Cancers, MDPI, Vol. 13, Issue 24, pp. 1-13, December 2021.
10. Katkoori Arun Kumar, Ravi Boda, Praveen Kumar, and Vijendar Amgothu, "Feature Selection using Multi-Verse Optimization for Brain Tumor Classification", Annals of the Romanian Society for Cell Biology, Vol. 25, Issue 6, pp. 3970-3982, May 2021.
11. L. Haldurai, T. Madhubala, and R. Rajalakshmi, "A Study on Genetic Algorithm and its Applications", International Journal of Computer Sciences and Engineering, Vol. 4, Issue 10, pp. 139-143, October 2016.
12. Maryam Bahojb Imani, Mohammad Reza Keyvanpour, and Reza Azmi, "A Novel Embedded Feature Selection Method: A Comparative Study in the Application of Text Categorization", Applied Artificial Intelligence, Vol. 27, pp. 408-427, 2013.
13. Mehrdad Rostami, Kamal Berahmand, and Saman Forouzandeh, "A novel community detection based genetic algorithm for feature selection", Journal of Big Data, Vol. 8, Issue 2, pp. 2-27, December 2021.
14. Methaq Kadhum, Saher Manaseer, and Abdel Latif Abu Dalhoum, "Evaluation Feature Selection Technique on Classification by Using Evolutionary ELM Wrapper Method with Features Priorities", Journal of Advances in Information Technology, Vol. 12, Issue 1, pp. 21-28, February 2021.
15. Milan Bašta, "Properties of Backward Elimination and Forward Selection in Linear Regression", International Days of Statistics and Economics, Prague, pp. 114-124, September 2018.
16. Naveen Naidu Narisetty, "Bayesian model selection for high-dimensional data", in Handbook of Statistics, Elsevier, Vol. 43, pp. 207-248, 2020.
17. Nazmun Nahar, Ferdous Ara, Md. Arif Istiek Neloy, Anik Biswas, Mohammad Shahadat Hossain, and Karl Andersson, "Feature Selection Based Machine Learning to Improve Prediction of Parkinson Disease", International Conference on Brain Informatics, Springer, pp. 496-508, September 2021.
18. Nilesh Bhaskarrao Bahadure, Arun Kumar Ray, and Har Pal Thethi, "Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM", International Journal of Biomedical Imaging, Vol. 2017, pp. 1-12, March 2017.
19. S. Vanaja and K. Ramesh Kumar, "Analysis of Feature Selection Algorithms on Classification: A Survey", International Journal of Computer Applications (0975-8887), Vol. 96, No. 17, June 2014.
20. Samaneh Liaghat and Eghbal G. Mansoori, "Filter-based unsupervised feature selection using Hilbert-Schmidt independence criterion", International Journal of Machine Learning and Cybernetics, Vol. 10, Issue 9, pp. 2313-2328, August 2018.
21. Shamsul Huda, John Yearwood, Herbert F. Jelinek, Mohammad Mehedi Hassan, Giancarlo Fortino, and Michael Buckland, "A Hybrid Feature Selection with Ensemble Classification for Imbalanced Healthcare Data: A Case Study for Brain Tumor Diagnosis", IEEE Access, Vol. 4, pp. 1-13, January 2017.
22. Sourabh Katoch, Sumit Singh Chauhan, and Vijay Kumar, "A review on genetic algorithm: past, present, and future", Multimedia Tools and Applications, Springer, Vol. 80, Issue 5, pp. 8091-8126, October 2020.
23. V. Panca and Z. Rustam, "Application of machine learning on brain cancer multiclass classification", AIP Conference Proceedings, Vol. 1862, Issue 1, pp. 030133-030139, July 2017.
24. Yee Ching Saw, Zeratul Izzah Mohd Yusoh, Azah Kamilah Muda, and Ajith Abraham, "Ensemble Filter-Embedded Feature Ranking Technique (FEFR) for 3D ATS Drug Molecular Structure", International Journal of Computer Information Systems and Industrial Management Applications, Vol. 9, pp. 124-134, 2017.
25. Zhiyuan Liu, Zekun Jiang, Li Meng, Jun Yang, Ying Liu, Yingying Zhang, Haiqin Peng, Jiahui Li, Gang Xiao, Zijian Zhang, and Rongrong Zhou, "Handcrafted and Deep Learning-Based Radiomic Models Can Distinguish GBM from Brain Metastasis", Journal of Oncology, Hindawi, Vol. 2021, pp. 1-10, June 2021.

