A Guide to 21 Feature Importance Methods and Packages in Machine Learning (with Code)
by Theophano Mitsa, Towards Data Science, December 2023
This article is about feature importance, which measures the contribution of a feature to the predictive ability of a model. We need to understand feature importance for several essential reasons:
Time. Having too many features slows down model training and also model deployment. The latter is particularly important in edge applications (mobile, sensors, medical diagnostics).
Overfitting. If our features are not carefully selected, our model may overfit, i.e., learn the noise as well.
Curse of dimensionality. Many features mean many dimensions, which makes data analysis exponentially more difficult. For example, k-NN classification, a widely used algorithm, is greatly affected by increasing dimensionality.
Adaptability and transfer learning. This is my favorite reason and actually the reason for writing this article. In transfer learning, a model trained on one task can be used on a second task with some fine-tuning. Having a good understanding of your features in both tasks can greatly reduce the fine-tuning you need to do.
We will focus on tabular data and discuss twenty-one ways to assess feature importance. One might wonder: ‘Why twenty-one techniques? Isn’t one enough?’ Each technique has unique characteristics that are worth learning about, and I will point them out in two ways: (a) sections titled “Why this is important” and (b) highlighting the word unique when a technique has a special, distinguishing characteristic.
The techniques we will discuss come from two distinct areas of machine learning:
interpretability and feature selection. Specifically, we will discuss the following:
Feature selection methods. These methods focus on reducing the model’s features by identifying the most informative ones, and they generally fall into the filter, wrapper, and embedded categories.
Data
To demonstrate the above feature-importance-computation techniques, we will use
tabular data related to heart failure prediction from Kaggle:
https://www.kaggle.com/datasets/fedesoriano/heart-failure-prediction/data
The dataset has 918 rows and 12 columns: the features ‘Age,’ ‘Sex,’ ‘ChestPainType,’ ‘RestingBP,’ ‘Cholesterol,’ ‘FastingBS,’ ‘RestingECG,’ ‘MaxHR,’ ‘ExerciseAngina,’ ‘Oldpeak,’ and ‘ST_Slope,’ plus the target variable ‘HeartDisease.’
The dataset has no missing values, and the target variable is relatively balanced, with 410 ‘0’ instances and 508 ‘1’ instances. Five of the features are categorical: ‘Sex,’ ‘ChestPainType,’ ‘ExerciseAngina,’ ‘RestingECG,’ and ‘ST_Slope.’ These features are one-hot encoded with the Pandas ‘get_dummies’ method:
import pandas as pd

heart2 = pd.get_dummies(heart, columns=['Sex', 'ChestPainType', 'RestingECG',
                                        'ExerciseAngina', 'ST_Slope'])
Then, the data is split into training and test sets. Finally, scikit-learn’s StandardScaler is applied to the numerical columns of the training and test sets, as shown in the sketch below.
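A minimal sketch of the split-and-scale step, assuming the target column is ‘HeartDisease’ and the numerical column list below; this is not necessarily the author’s exact code.

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Assumed numerical columns of the heart-failure dataset
num_cols = ['Age', 'RestingBP', 'Cholesterol', 'FastingBS', 'MaxHR', 'Oldpeak']

X = heart2.drop(columns=['HeartDisease'])
y = heart2['HeartDisease']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Fit the scaler on the training data only, then transform both sets
scaler = StandardScaler()
X_train[num_cols] = scaler.fit_transform(X_train[num_cols])
X_test[num_cols] = scaler.transform(X_test[num_cols])

Now, we are ready to proceed to feature importance assessment.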
A. Interpretability Packages
Why they are important
In recent years, the interpretability of machine learning algorithms has attracted
significant attention. Machine learning algorithms have recently found use in many areas, such as finance, medicine, and environmental modeling. This broad use of ML algorithms by people who are not necessarily ML experts calls for more transparency because:
Trust issues. Black boxes make people nervous and unsure as to whether they
should trust them.
Figure 1, from Shapash’s application interface, shows the feature importances. The
length of the horizontal bar corresponds to the importance of the feature. Thus,
‘ST_Slope_up’ is the most important feature, and ‘chest_pain_typeTA’ is the least
important.
Figure 3 shows the local explanations for the instance at index 131, where the predicted class is 1 with a probability of 0.8416. Bars to the right show a positive contribution, and bars to the left show a negative contribution to the result. ‘St_Slope_up’ has the highest positive contribution, while ‘max_heart_rate’ has the highest negative contribution.
In summary, Shapash is a very useful package to know because (a) it offers a great interface where the user can gain a deep understanding of global and local explanations, and (b) it offers the unique feature of displaying feature contributions across cases.
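As a rough sketch of how such a Shapash analysis can be set up (the fitted classifier ‘model_rf’ and the instance index are assumptions here, not the article’s exact code):

import pandas as pd
from shapash import SmartExplainer

# 'model_rf' is an assumed, already-fitted classifier (e.g., a random forest)
y_pred = pd.Series(model_rf.predict(X_test), index=X_test.index)

xpl = SmartExplainer(model=model_rf)
xpl.compile(x=X_test, y_pred=y_pred)

xpl.plot.features_importance()                 # global importances, as in Figure 1
xpl.plot.local_plot(index=X_test.index[131])   # local explanation of one instance
xpl.run_app()                                  # launches the interactive web application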
The code below shows the creation of an OMNIXAI explainer. The essential steps are: (a) creation of an OMNIXAI-specific data type (‘Tabular’) to hold the data; (b) data pre-processing through the ‘TabularTransform’; (c) data splitting into training and test sets; (d) training of an XGBClassifier model; (e) inversion of the data back to its original format; (f) setting up a ‘TabularExplainer’ of the XGBClassifier with both SHAP and LIME methods, applied to ‘test_instances’ [130–135]; and (g) generation and display of the predictions.
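A minimal sketch that follows these steps is shown below; the hyper-parameters and split settings are assumptions, not necessarily the article’s exact code.

from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from omnixai.data.tabular import Tabular
from omnixai.preprocessing.tabular import TabularTransform
from omnixai.explainers.tabular import TabularExplainer

# (a) Wrap the dataframe in OMNIXAI's Tabular data type
tabular_data = Tabular(heart,
                       categorical_columns=['Sex', 'ChestPainType', 'RestingECG',
                                            'ExerciseAngina', 'ST_Slope'],
                       target_column='HeartDisease')

# (b) Pre-process, (c) split (the transformed target is the last column)
transformer = TabularTransform().fit(tabular_data)
x = transformer.transform(tabular_data)
x_train, x_test, y_train, y_test = train_test_split(
    x[:, :-1], x[:, -1], test_size=0.2, random_state=42)

# (d) Train the model
model = XGBClassifier()
model.fit(x_train, y_train)

# (e) Invert the transformed data back to its original Tabular form
train_data = transformer.invert(x_train)
test_data = transformer.invert(x_test)

# (f) Explainer with both SHAP and LIME
explainer = TabularExplainer(explainers=['shap', 'lime'], mode='classification',
                             data=train_data, model=model,
                             preprocess=lambda z: transformer.transform(z))

# (g) Explain a handful of test instances and display the results
test_instances = test_data[130:135]
local_explanations = explainer.explain(X=test_instances)
local_explanations['shap'].ipython_plot()
local_explanations['lime'].ipython_plot()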
Figure 4 shows the aggregate local explanations for the instances in [130:135] using LIME. The green bars on the right show a positive contribution to class 1, whereas the red bars on the left show a negative contribution to class 1. The longer the bar, the more significant the contribution.
Figure 5 shows the aggregate local explanations for the instances in [130:135] using SHAP. The meaning of the green/red bars is the same as in the graph above.
The InterpretML interpretability package [4] has the unique feature of ‘glass-box models,’ which are inherently explainable models.
Figure 7 shows the computed local explanations for the instance at index 43. Most features contribute positively to the prediction of class 1; only ‘Cholesterol’ and ‘FastingBS’ contribute negatively.
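A minimal sketch of a glass-box model in InterpretML; an Explainable Boosting Machine is assumed here, which is not necessarily the exact glass-box model used in the article.

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Train an inherently explainable ('glass-box') model
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Global feature/term importances
show(ebm.explain_global())

# Local explanation for a single instance (index 43, as in Figure 7)
show(ebm.explain_local(X_test.iloc[43:44], y_test.iloc[43:44]))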
The Aspects module. This allows us to explain a model taking into account
feature inter-dependencies.
import dalex as dx
from sklearn.linear_model import LogisticRegression

# Fit the model first, then wrap it in a dalex Explainer
model_dx = LogisticRegression(C=0.25, max_iter=50)
model_dx.fit(X_train, y_train)

heart_exp = dx.Explainer(model_dx, X_train, y_train,
                         label="Heart Logistic Pipeline")

# Permutation-based variable importance
mp_rf = heart_exp.model_parts()
mp_rf.result
mp_rf.plot()
It works with text data. Specifically, it provides a ‘TextExplainer’ that can explain
predictions of text classifiers.
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.svm import SVC

# Permutation importance of an SVC estimator, cross-validated over 5 folds
perm = PermutationImportance(SVC(), cv=5)
perm.fit(X_train, y_train)
importances = perm.feature_importances_
Figure 9 shows the computed feature importances for the ‘svc’ estimator.
Figure 9.
The code snippet below shows the implementation of SFS wrapped around a
‘KNeighborsClassifier’ model. It also shows how to output the selected features and
their names.
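A minimal sketch of such a wrapper, using scikit-learn’s SequentialFeatureSelector (the article may rely on a different SFS implementation, e.g., mlxtend; the number of selected features and the scoring metric are assumptions):

from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

# Forward sequential feature selection wrapped around a k-NN classifier
sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                n_features_to_select=5, direction='forward',
                                scoring='accuracy', cv=5)
sfs.fit(X_train, y_train)

# Names of the selected features
print(X_train.columns[sfs.get_support()].tolist())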
The snippet below shows the implementation of Boruta using the BorutaPy package
and the selected features.
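A rough sketch of Boruta with the BorutaPy package, assuming a random-forest estimator (the exact hyper-parameters are assumptions):

from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier

# Boruta compares each real feature against randomly shuffled 'shadow' features
rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced', max_depth=5)
boruta = BorutaPy(rf, n_estimators='auto', random_state=42)
boruta.fit(X_train.values, y_train.values)   # BorutaPy expects NumPy arrays

selected = X_train.columns[boruta.support_]
print('Selected Features:', len(selected))
print(selected.tolist())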
Selected Features: 15
['Cholesterol' 'FastingBS' 'MaxHR' 'Sex_F' 'Sex_M'
 'ChestPainType_ASY' 'ChestPainType_ATA' 'ChestPainType_NAP'
 'ChestPainType_TA' 'RestingECG_LVH' 'ExerciseAngina_N'
 'ExerciseAngina_Y' 'ST_Slope_Down'
 'ST_Slope_Flat' 'ST_Slope_Up']
Embedded Methods
These refer to algorithms that have the built-in ability to compute feature
importances or select features, such as Random Forest and lasso regression,
respectively. An important note about these methods is that they do not directly select features. Instead, they compute feature importances, which can then be used in a post hoc process to choose features. One such post hoc process is scikit-learn’s ‘SelectFromModel,’ discussed in Section B.9.
High-dimensional data are very common today in the form of unstructured text, images, and time series, especially in the fields of bioinformatics, environmental monitoring, and finance. The greatest advantage of embedded methods is their
ability to handle high-dimensional data. The reason for this ability is that they do
not have separate modeling and feature selection steps. Feature selection and
modeling are combined in one single step, which leads to a significant speed-up.
B.4 Logistic Regression
Logistic Regression is a statistical method used for binary classification. The coefficients of the model relate to the importance of the features. Each weight indicates the direction (positive or negative) and the strength of the feature’s effect on the log odds of the target variable. A larger absolute value of a weight indicates that the corresponding feature is more important in predicting the outcome. The code snippet below shows the creation of the logistic regression model. The hyper-parameters ‘C’ (regularization strength) and ‘max_iter’ are tuned with scikit-learn’s ‘GridSearchCV.’
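A minimal sketch of this step; the grid values for ‘C’ and ‘max_iter’ are assumptions, not the author’s exact settings:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Tune the regularization strength and the iteration budget
param_grid = {'C': [0.01, 0.1, 0.25, 1.0, 10.0], 'max_iter': [50, 100, 500]}
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5, scoring='accuracy')
grid.fit(X_train, y_train)
best_lr = grid.best_estimator_

# One coefficient per feature: the sign gives the direction, the magnitude the strength
coefs = pd.DataFrame({'coef': best_lr.coef_[0]}, index=X_train.columns)
print(coefs.sort_values('coef', ascending=False))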
                        coef
ChestPainType_ASY   1.091887
FastingBS           0.863633
ST_Slope_Flat       0.829780
Sex_M               0.463849
ExerciseAngina_Y    0.382341
Oldpeak             0.294517
ST_Slope_Down       0.277960
RestingECG_LVH      0.207710
Age                 0.027053
RestingBP           0.000638
RestingECG_ST      -0.097332
RestingECG_Normal  -0.110399
ChestPainType_TA   -0.193232
MaxHR              -0.298876
ExerciseAngina_N   -0.382362
ChestPainType_ATA  -0.388795
Sex_F              -0.463869
Cholesterol        -0.475150
ChestPainType_NAP  -0.509880
ST_Slope_Up        -1.107760
The code for the computation and display of feature importances is shown below.
The computed feature importances are shown in Figure 10.
import numpy as np
import matplotlib.pyplot as plt

# 'model_rf' is the fitted random forest; its built-in importances are plotted
importances = model_rf.feature_importances_
forest_importances = pd.Series(importances, index=column_headers)
sortindices = np.argsort(forest_importances)

plt.title('Random Forest Feature Importances')
plt.barh(range(len(sortindices)), forest_importances.iloc[sortindices],
         color='lime', align='center')
plt.yticks(range(len(sortindices)), [column_headers[i] for i in sortindices])
plt.show()
import lightgbm as lgb

model_lgb = lgb.LGBMClassifier(device="gpu", learning_rate=0.05, max_depth=12,
                               n_estimators=100)  # n_estimators truncated in the original; 100 assumed
model_lgb.fit(X_train, y_train)
Figure 11.
It can effectively use all available CPU cores or clusters to build the trees in parallel. It also utilizes cache optimization.
Figure 12 shows the feature importances. Note that some importances are set to
zero because of L1 regularization.
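A minimal sketch of an XGBoost classifier with L1 regularization (‘reg_alpha’), which can push some feature importances to zero; the hyper-parameter values here are assumptions:

import pandas as pd
from xgboost import XGBClassifier

# reg_alpha adds an L1 penalty on leaf weights; heavily penalized features may never be used in splits
model_xgb = XGBClassifier(n_estimators=500, learning_rate=0.05, max_depth=6,
                          reg_alpha=1.0, eval_metric='logloss')
model_xgb.fit(X_train, y_train)

xgb_importances = pd.Series(model_xgb.feature_importances_, index=X_train.columns)
print(xgb_importances.sort_values(ascending=False))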
Categorical features must be declared as type ‘category.’ Then, as shown in the snippet below, the categorical features are provided as input to the model’s fit function.
import catboost as cb

categoricalcolumns = Xcat_train.select_dtypes(include=["category"]).columns.tolist()
cat_feat = [Xcat_train.columns.get_loc(col) for col in categoricalcolumns]

# Constructor arguments were truncated in the original snippet; the closing parenthesis is assumed
model7 = cb.CatBoostClassifier(loss_function="Logloss", iterations=1000,
                               eval_metric="AUC")
model7.fit(Xcat_train, ycat_train, cat_features=cat_feat, plot=True)
importances = model7.feature_importances_
B.10 Mutual Information
The mutual information measures the reduction in uncertainty (entropy) in one
variable, given knowledge of the other. The mutual information between the
predictors and the target variable is computed using scikit-learn’s
mutual_info_classif. The mutual information score of each predictor is shown in
Figure 14.
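A minimal sketch of this computation (the ‘random_state’ value and the sorting are assumptions; the bar plot of Figure 14 is not reproduced):

import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Mutual information between each predictor and the binary target
mi_scores = pd.Series(mutual_info_classif(X_train, y_train, random_state=42),
                      index=X_train.columns)
print(mi_scores.sort_values(ascending=False))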
The code snippet below shows the implementation of MRMR with the ‘mrmr’
Python library.
from mrmr import mrmr_classif

# Select the top K=5 features by Minimum Redundancy - Maximum Relevance
selected_features = mrmr_classif(X, y, K=5)
print(selected_features)
Figure 15 below shows the scores (importances) of the features according to the
scoring function ‘f_classif.’ Note that although we chose K=5, Figure 15 displays the
scores for all features.
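A minimal sketch of SelectKBest with the ‘f_classif’ scoring function and K=5; the plotting of Figure 15 is not reproduced:

import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

# Keep the 5 features with the highest ANOVA F-scores
skb = SelectKBest(score_func=f_classif, k=5)
skb.fit(X_train, y_train)

# Scores are available for all features, not only the selected ones
scores = pd.Series(skb.scores_, index=X_train.columns)
print(scores.sort_values(ascending=False))
print(X_train.columns[skb.get_support()].tolist())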
['ChestPainType_NAP', 'ExerciseAngina_Y',
 'ChestPainType_ASY', 'ST_Slope_Flat', 'Sex_F']
Figure 16.
Let us take a look at Table 1 below. This table has two columns corresponding to the features ‘ST_Slope_Up’ and ‘ST_Slope_Flat.’ The rows correspond to the algorithms and packages we used in the article. The numbers 1, 2, and 3 indicate whether the feature was ranked best, second best, or third best by the algorithm.
Table 1.
As discussed in the article, some algorithms simply output a set of features without any order. In this case, an X in the table indicates that the algorithm selected the feature. A gap in the table means that the feature was not among the three best features selected by the corresponding algorithm. For logistic regression, the absolute value of the coefficients was considered. For CatBoost, we assigned a 1 to both ‘ST_Slope_Up’ and ‘ST_Slope_Flat’ because CatBoost selected ‘ST_Slope’ as the most important feature. Finally, the OMNIXAI package results were not included because it provided only local explanations for a few rows.
An interesting fact emerges from Table 1. Except for LightGBM, the feature ‘ST_Slope_Up’ had the highest or second-highest importance in the algorithms that report feature importances. It was also selected by most algorithms that report selected features without importances. The feature ‘ST_Slope_Flat’ also performed quite well: it was either among the three highest-importance features or in the selected feature groups for most algorithms.
Now, let us delve into another interesting insight. These two features had the highest and second-highest mutual information scores. As we saw in Section B.10, mutual information is a straightforward measure computed with one line of code. So, with one line of code, we gained insight into the most important features of our data, as also identified by the other, significantly more computationally complex algorithms.
This article discussed twenty-one packages and methods that compute feature
importance, a measure of a feature’s contribution to a model’s predictive ability. For
further reading, I recommend [23], which discusses another role of features,
specifically their error contribution to a model.
Footnotes:
Dataset license: Open Data Commons Open Database License (ODbL) v1.0, as mentioned in https://www.kaggle.com/datasets/fedesoriano/heart-failure-prediction and described in https://opendatacommons.org/licenses/odbl/1-0/
References
1. The Shapash package, https://shapash.readthedocs.io/en/latest/
7. Mazzanti, S., Boruta Explained Exactly How You Wished Someone Explained to
You, Medium: Towards Data Science, March 2020.
13. Raju, S.K., Evaluation of Mutual Information and Feature Selection for SARS-
CoV-2 Respiratory Infection, Bioengineering (Basel), vol. 10, no. 7, July 2023.
14. Mazzanti, S., “MRMR” Explained Exactly How You Wished Someone Explained
to You, Medium: Towards Data Science, February 2021.
16. Kavya, D., Optimizing Performance: SelectKBest for Efficient Feature Selection
in Machine Learning, Medium, February 2023.
19. Sharma, H., Featurewiz: Fast Way to Select the Best Features in Data, Medium: Towards Data Science, Dec. 2020.
23. Mazzanti, S., Your Features Are Important? It Doesn’t Mean They Are Good,
Medium: Towards Data Science, August 2023.