Forest Fire Prediction Using Machine Learning
A forest, bush, or vegetation fire can be described as any uncontrolled and non-prescribed combustion or burning of plants in a natural setting such as a forest, grassland, etc. In this article we are not determining whether a forest fire will take place or not; we are predicting the confidence of the forest fire based on some attributes.
Image 1
Well, the first question that arises is: why do we even need machine learning to predict forest fires in a particular area? The question is valid: despite having an experienced forest department that has been dealing with these issues for a long time, why is there a need for ML? The answer is quite simple: an experienced forest officer can keep track of 3-4 parameters in their head, whereas ML can handle numerous parameters, whether latitude, longitude, satellite, version, and whatnot. To deal with this multi-way relationship among the parameters responsible for a fire in the forest, we certainly do need ML!
Importing libraries
import datetime as dt
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error  # the metrics import is truncated in the source; mean_squared_error assumed
forest = pd.read_csv('fire_archive.csv')
forest.head()
Output:
Data exploration
forest.shape
Output:
(36011, 15)
Here we can see that we have 36011 rows and 15 columns in our dataset. Obviously we will have to do a lot of data cleaning, but first let us look at which columns the dataset holds.
forest.columns
Output:
Checking for the null values in the forest fire prediction dataset
forest.isnull().sum()
Output:
latitude 0 longitude 0 brightness 0 scan 0 track 0 acq_date 0 acq_time 0 satellite 0 instrument 0 confidence
forest.describe()
Output:
plt.figure(figsize=(10, 10))
sns.heatmap(forest.corr(), annot=True, cmap='viridis', linewidths=.5)
Output:
Data cleaning
forest = forest.drop(['track'], axis = 1)
Note: By the way, from this dataset we are not finding whether a forest fire happens or not; we are trying to find the confidence of a forest fire happening. They may seem to be the same thing, but there is a subtle difference between them; try to spot it.
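The per-column counts shown next were presumably produced by looping over the candidate columns and printing their value counts; a minimal sketch under that assumption (the column list is taken from the headers visible in the output):

# Inspect how many distinct values each candidate column holds
for col in ['scan', 'acq_time', 'satellite', 'instrument', 'version', 'daynight']:
    print(f'The {col} column')
    print(forest[col].value_counts())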
Output:
The scan column
1.0: 8284, 1.1: 6000, 1.2: 3021, 1.3: 2412, 1.4: 1848, 1.5: 1610, 1.6: 1451, 1.7: 1281, 1.8: 1041, 1.9: 847, 2.0: 707, 2.2: 691, 2.1: 649, 2.3: 608, 2.5: 468, 2.4: 433, 2.8: 422, 3.0: 402, 2.7: 366, 2.9: 361, 2.6: 347, 3.1: 259, 3.2: 244, 3.6: 219, 3.4: 203, 3.3: 203, 3.8: 189, 3.9: 156, 4.7: 149, 4.3: 137, 3.5: 134, 3.7: 134, 4.1: 120, 4.6: 118, 4.5: 116, 4.2: 108, 4.0: 103, 4.4: 100, 4.8: 70
Name: scan, dtype: int64

The acq_time column
506: 851, 454: 631, 122: 612, 423: 574, 448: 563, ..., 1558: 1, 635: 1, 1153: 1, 302: 1, 1519: 1
Name: acq_time, Length: 662, dtype: int64

The satellite column
Aqua: 20541, Terra: 15470
Name: satellite, dtype: int64

The instrument column
MODIS: 36011
Name: instrument, dtype: int64

The version column
6.3: 36011
Name: version, dtype: int64

The daynight column
D: 28203, N: 7808
Name: daynight, dtype: int64
From the above data, we can see that some columns have just a single value recurring in them, meaning they are not valuable to us, so we will drop them altogether (see the snippet just below).

That leaves satellite and daynight as the only categorical columns. Having said that, we can also restructure the scan column into a categorical type, which we will do in just a while.
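The drop itself is not preserved in this extract; a minimal sketch, assuming the single-valued instrument and version columns (seen in the counts above) are the ones removed:

# Drop columns that hold only one recurring value
forest = forest.drop(['instrument', 'version'], axis=1)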
forest.head()
Output:
daynight_map = {"D": 1, "N": 0} satellite_map = {"Terra": 1, "Aqua": 0} forest['daynight'] =
forest['daynight'].map(daynight_map) forest['satellite'] = forest['satellite'].map(satellite_map)
forest.head()
Output:
forest['type'].value_counts()
Output:
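The transformation applied to the type column between these two outputs is not preserved in this extract; a purely hypothetical sketch using one-hot encoding (the article's actual handling of type may differ):

# Hypothetical one-hot encoding of the fire type column
types = pd.get_dummies(forest['type'], prefix='type')
forest = pd.concat([forest.drop('type', axis=1), types], axis=1)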
Output:
Renaming columns for better understanding
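The renaming code is also missing here; a purely hypothetical illustration of the idea (the actual column names chosen in the article may differ):

# Hypothetical rename to a more descriptive column name
forest = forest.rename(columns={'brightness': 'brightness_temp'})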
Binning Method
As I mentioned, we will be converting the scan column to a categorical type, and we will do this using the binning method (a sketch follows).
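The binning code itself is not preserved in this extract; a minimal sketch, assuming scan values between roughly 1 and 5 (as in the value counts above) are grouped into labelled bins with pd.cut (the bin edges and the new column name are assumptions):

bins = [0, 1, 2, 3, 4, 5]          # assumed bin edges covering the observed scan range
labels = [1, 2, 3, 4, 5]           # one label per bin
forest['scan_binned'] = pd.cut(forest['scan'], bins=bins, labels=labels)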
forest.head()
Output:
forest['acq_date'] = pd.to_datetime(forest['acq_date'])
Now we will drop the scan column and handle the date-type data; we can extract useful information from these datatypes just as we do with categorical data.
forest = forest.drop(['scan'], axis = 1)
Output:
As we have added the year column, we will similarly add the month and day columns (sketched below).
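Neither the year nor the month/day extraction code survives in this extract; a minimal sketch covering all three with the pandas .dt accessor on the converted acq_date column:

forest['year'] = forest['acq_date'].dt.year    # year of acquisition
forest['month'] = forest['acq_date'].dt.month  # month of acquisition
forest['day'] = forest['acq_date'].dt.day      # day of acquisition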
forest.shape
Output:
(36011, 17)
Now, as we can see, two more columns have been added, which are a breakdown of the date column.
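How the final frame fin (used below) was assembled is not shown in this extract; a minimal sketch, assuming it is simply the cleaned frame with the raw date column dropped now that its parts have been extracted:

# Hypothetical: 'fin' taken to be the final, fully numeric frame
fin = forest.drop(['acq_date'], axis=1)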
fin.head()
Output:
Splitting the clean data into training and testing dataset
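The split code is not preserved here; a minimal sketch, assuming confidence is the target and the Xtrain/Xtest/ytrain/ytest names used later (the split ratio and seed are assumptions):

y = fin['confidence']                 # target: confidence of a fire detection
X = fin.drop(['confidence'], axis=1)  # features
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=42)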
Model building
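The model-building code is also missing from this extract; a minimal sketch of a random forest regressor whose settings match the parameter dump shown in the tuning section (n_estimators=300, n_jobs=-1, random_state=42), producing the training and testing scores reported below:

from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=300, random_state=42, n_jobs=-1)
rf.fit(Xtrain, ytrain)

print(round(rf.score(Xtrain, ytrain) * 100, 2), '%')  # training score (R^2 as a percentage)
print(round(rf.score(Xtest, ytest) * 100, 2), '%')    # testing score (R^2 as a percentage)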
Output:
95.32 %
Output:
65.32 %
Model Tuning
So we use RandomizedSearchCV.
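The parameter dump below presumably comes from inspecting the base model's current settings before tuning; assuming the rf regressor from the model-building sketch above, it could be produced with:

rf.get_params()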
Output:
{'bootstrap': True, 'ccp_alpha': 0.0, 'criterion': 'mse', 'max_depth': None, 'max_features': 'auto',
'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None,
'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 300,
'n_jobs': -1, 'oob_score': False, 'random_state': 42, 'verbose': 0, 'warm_start': False}
""" n_estimators = number of trees in the forest max_features = max number of features considered for
splitting a node max_depth = max number of levels in each decision tree min_samples_split = min number
of data points placed in a node before the node is split min_samples_leaf = min number of data points
allowed in a leaf node bootstrap = method for sampling data points (with or without replacement) """
Number of trees in random forest n_estimators = [int(x) for x in np.linspace(start = 300, stop = 500, num =
20)] Number of features to consider at every split max_features = ['auto', 'sqrt'] Maximum number of levels
in tree max_depth = [int(x) for x in np.linspace(15, 35, num = 7)] max_depth.append(None) Minimum number of
samples required to split a node min_samples_split = [2, 3, 5] Minimum number of samples required at each
leaf node min_samples_leaf = [1, 2, 4] Create the random grid random_grid = {'n_estimators': n_estimators,
'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split,
Output:
{'n_estimators': [300, 310, 321, 331, 342, 352, 363, 373, 384, 394, 405, 415, 426, 436, 447, 457, 468, 478,
489, 500], 'max_features': ['auto', 'sqrt'], 'max_depth': [15, 18, 21, 25, 28, 31, 35, None],
'min_samples_split': [2, 3, 5], 'min_samples_leaf': [1, 2, 4]}
We perform a random search over the parameters, using 3-fold cross-validation, searching across 100 different combinations and using all available cores. Here n_iter controls the number of different combinations to try, and cv is the number of folds to use for cross-validation (see the sketch below).
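The search code is not preserved in this extract; a minimal sketch, assuming the rf_random name whose best_params_ is read below:

from sklearn.model_selection import RandomizedSearchCV

rf_random = RandomizedSearchCV(estimator=RandomForestRegressor(),
                               param_distributions=random_grid,
                               n_iter=100, cv=3, verbose=2,
                               random_state=42, n_jobs=-1)
rf_random.fit(Xtrain, ytrain)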
Output:
Like the snippet above, numerous such fold-by-fold fits will be logged by this RandomizedSearchCV.
rf_random.best_params_
Output:
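How random_new (used in the next line) was built is not shown; presumably it is a regressor refit with the best parameters found by the search. A sketch under that assumption:

# Rebuild a regressor with the tuned parameters (rf_random.best_estimator_ would also work)
random_new = RandomForestRegressor(**rf_random.best_params_)
random_new.fit(Xtrain, ytrain)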
y_pred1 = random_new.predict(Xtest)
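The two percentages below presumably come from scoring the tuned model on the training and testing splits, printed with a '%' suffix; a sketch under that assumption:

print(round(random_new.score(Xtrain, ytrain) * 100, 2), '%')  # training score of the tuned model
print(round(random_new.score(Xtest, ytest) * 100, 2), '%')    # testing score of the tuned model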
Output:
95.31 %
Output:
67.39 %
Saving the tuned model with the pickle module in a serialized format
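The serialization code is not preserved here; a minimal sketch, assuming the tuned random_new model and the ForestModel.pickle filename used in the compression step further below:

import pickle

# Serialize the tuned model to disk
with open('ForestModel.pickle', 'wb') as f:
    pickle.dump(random_new, f)

# Re-open the file so it can be deserialized below
saved_model = open('ForestModel.pickle', 'rb')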
reg_from_pickle = pickle.load(saved_model)
bz2file
Here comes the cherry on the cake part (bonus of this article). Let’s understand what this bz2file module is
all about. Let’s get started!
What is bz2file
bz2file is one of the Python modules responsible for the compression and decompression of files. It can shrink a serialized (or deserialized) file to a much smaller size, which is very helpful in the long run when we work with large datasets and models.

Our dataset is 2.7+ MB, but our random forest model is a whopping 700+ MB, so we need to compress it so that the model does not become a storage headache.

Hence I installed bz2file, which is used to compress data. It is a life-saver for anyone who has little disk space but wants to store or use large datasets. The pickled file was over 700 MB in size, and bz2 compressed it into a file of about 93 MB or less.
import bz2

compressionLevel = 9
source_file = 'ForestModel.pickle'   # this file can be in a different format, like .csv or others...
destination_file = 'ForestModel.bz2'

with open(source_file, 'rb') as data:
    tarbz2contents = bz2.compress(data.read(), compressionLevel)

fh = open(destination_file, "wb")
fh.write(tarbz2contents)
fh.close()
This code will compress the size of the tuned pickled model.
Endnotes
Here you can access my other articles, which are published on Analytics Vidhya as part of the Blogathon (link).

If you have any queries, you can connect with me on LinkedIn; refer to this link.
About me
Greetings, everyone! I’m currently working at TCS, and I previously worked as a Data Science Associate Analyst at Zorba Consulting India. Along with full-time work, I have an immense interest in Data Science and the other subsets of Artificial Intelligence such as Computer Vision, Machine Learning, and Deep Learning. Feel free to collaborate with me on any project in the above-mentioned domains (LinkedIn).
Image Source-
1. Image 1 – https://www.theleader.info/wp-content/uploads/2017/08/forest-fire.jpg
The media shown in this article on forest fire prediction is not owned by Analytics Vidhya and is used at the Author’s discretion.