Enhanced Wildfire Detection Using AI/ML: Harnessing Multi-spectral Satellite Imagery with Convolutional Neural Networks
January 8, 2024
Posted on 8 Jan 2024 — CC-BY 4.0 — https://doi.org/10.36227/techrxiv.24438904.v2 — e-Prints posted on TechRxiv are preliminary reports that are not peer reviewed.
Abstract
This research paper titled “Enhanced Wildfire Detection using AI/ML: Harnessing Multi-spectral Satellite Imagery with Convolutional Neural Networks” aims to advance the capabilities of wildfire detection by employing Artificial Intelligence (AI),
specifically Convolutional Neural Networks (CNNs). Given the escalating threat of wildfires exacerbated by climate change
and human activity, traditional detection methods, though effective, are both costly and time-consuming. To counter these
limitations, the study taps into multi-resolution satellite imagery, particularly from the VIIRS and Sentinel-2 satellites. The
primary data source, VIIRS, offers comprehensive spectral bands and frequent global coverage. In contrast, Sentinel-2 provides
high-resolution optical image data vital for detailed wildfire detection. The research processes the collected data, refining and
categorizing them for training and testing. A Convolutional Neural Network is then employed to classify images as either “fire”
or “nofire.” Two main architectures, Deep CNN and a simplified MobileNet-like CNN, were explored. Among the models
tested, the Deep CNN using the Adam optimizer was found to be the most accurate, although it hinted at possible overfitting.
The paper also points out several limitations, such as reliance on the visible spectrum that could be obstructed by atmospheric
conditions and the temporal gaps in image captures that could delay real-time detection. The study concludes by emphasizing
the transformative potential of integrating AI with satellite technology for early wildfire detection. Future advancements could
harness multispectral bands and refine spatial and temporal resolutions to further enhance the early detection and intervention
of wildfires. The research received support from the Network of Resources (NoR) at ESA, which facilitated expanded access to
the SentinelHub platform.
Enhanced Wildfire Detection using AI/ML: Harnessing
Multi-spectral Satellite Imagery with Convolutional
Neural Networks
Arya Prince, October 25, 2023
Abstract
Wildfires, made worse by climate change and human activities, have emerged as a formidable
challenge to global ecosystems, infrastructure, and human safety. Traditional wildfire detection
mechanisms, while effective, are often hampered by high costs and latency. This study
investigates the efficacy of Artificial Intelligence (AI), particularly Convolutional Neural Networks
(CNNs), in tandem with multi-resolution satellite imagery, as a novel approach for prompt and
efficient wildfire detection. Our findings demonstrate that AI-powered models, when trained with
high-resolution satellite data, can significantly enhance the speed and accuracy of wildfire
detection, offering a promising alternative to traditional methods.
Index Terms - Wildfires, AI, Convolutional Neural Networks, image processing, satellite imagery
1. Introduction
Wildfires pose an escalating threat to human lives, property, ecosystems, and the environment,
with their frequency and intensity amplified by climate change and human activities. In 2022, the
United States grappled with approximately 69,000 wildfires [1], which
devoured over 7.5 million acres, incurred a staggering $14 billion in damages, and
released roughly 66 megatonnes of CO2 into the atmosphere. Detecting wildfires promptly is
vital for limiting their devastation. However, conventional methods heavily reliant on human
surveillance are not only costly but also time-consuming. The severity of this problem is
underscored by incidents like the 2018 Camp Fire in California [2], where it's estimated that at its
peak, the fire consumed an area larger than a football field (about 1.32 acres) every second. The
protracted duration required to pinpoint a fire amplifies the challenge in containing it,
emphasizing the vital role of early detection in effectively managing and extinguishing fires.
Moreover, human-induced incidents continue to constitute the majority of wildfire occurrences,
further underscoring the need for a more adept, accurate, and timely wildfire detection system. With
the advancements in AI and satellite technology, there lies an opportunity to revolutionize the
way we detect and respond to wildfires, potentially saving countless lives and preserving our
natural ecosystems.
2. Data Acquisition
VIIRS: Onboard the Suomi National Polar-orbiting Partnership (Suomi NPP) and NOAA-20
satellites, Visible Infrared Imaging Radiometer Suite (VIIRS) [3] has 22 spectral bands covering
visible, near-infrared, and thermal infrared wavelengths. These bands are essential for various
environmental monitoring tasks, including wildfire detection. They offer global coverage
approximately every 12 hours. VIIRS data serves as the primary input for identifying
candidate wildfire locations, owing to its comprehensive spectral bands and
frequent global coverage, which make it a reliable source for near-real-time monitoring.
Sentinel-2 [4]: The Sentinel-2 mission provides the high-resolution optical imagery used for
detailed inspection of each candidate location, and its products come at two processing levels.
Level-1C (L1C) offers "top-of-atmosphere" (TOA) reflectance. It represents the data as captured by
the satellite, which includes the influence of atmospheric conditions such as haze, clouds, and
other particles. It is essentially what the satellite "sees" from its vantage point in space.
Level-2A (L2A) provides "bottom-of-atmosphere" (BOA) reflectance. The data at this level has
undergone atmospheric correction, removing the effects of the atmosphere to give a clearer
representation of the Earth's surface. Because BOA reflectance makes visible-spectrum wildfire
detection comparatively easier, this research used many images from Sentinel-2 L2A.
In Figure 1, the unique vantage points of VIIRS onboard Suomi NPP and NOAA-20 satellites are
depicted. This comparative visualization underscores the robust capabilities of these satellites
in achieving global coverage, crucial for tasks such as wildfire detection.
Figure 1. Comparative views of Earth from VIIRS on Suomi NPP and NOAA-20
satellites
Latitude   Longitude    bright_ti4  scan  track  acq_date   acq_time  satellite  confidence  version  bright_ti5  frp    daynight
46.04395   -73.13679    295.29      0.45  0.39   9/30/2023  637       N          nominal     2.0NRT   283.9       1.06   N
46.04107   -73.14352    296.86      0.45  0.39   9/30/2023  637       N          nominal     2.0NRT   283.76      1.06   N
46.57838   -80.79401    367         0.33  0.55   9/30/2023  637       N          high        2.0NRT   283.54      10.15  N
30.49405   -109.62815   367         0.36  0.57   9/30/2023  824       N          high        2.0NRT   293.72      13.89  N
30.22569   -108.82178   314.02      0.73  0.76   9/30/2023  1004      N          nominal     2.0NRT   287.71      3.77   N
37.27805   -87.55209    327.49      0.38  0.59   9/30/2023  1802      N          low         2.0NRT   297.51      2.12   D
30.22973   -108.82793   310.18      0.73  0.76   9/30/2023  1004      N          nominal     2.0NRT   287.24      2.76   N
Sample active fire data from VIIRS (bright_ti4 and bright_ti5 are I-4/I-5 channel brightness temperatures in Kelvin; frp is fire radiative power in MW)
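To illustrate this step, the following is a minimal sketch of filtering such a VIIRS export with pandas before image retrieval; the file name and the exact filtering criteria are illustrative assumptions rather than the study's actual code.

```python
# A sketch of filtering a VIIRS active-fire CSV (column names as in the
# sample above) with pandas. File name and filter are assumptions.
import pandas as pd

viirs = pd.read_csv("viirs_active_fires.csv")

# Keep only the more trustworthy detections for image retrieval.
filtered = viirs[viirs["confidence"].isin(["nominal", "high"])]

# Parse acquisition dates so a time window can be built around each detection.
filtered = filtered.assign(acq_date=pd.to_datetime(filtered["acq_date"]))

print(filtered[["Latitude", "Longitude", "acq_date", "frp"]].head())
```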
Using the filtered data, the program fetches Sentinel-2 images using SentinelHub API for the
time surrounding each fire incident [6], including before, during, and after, if images exist for
those intervals. Subsequently, these images are reviewed and classified as 'fire' or 'nofire',
readying them for further analysis. In addition, images from other known non-fire locations and
times are extracted and added to the “nofire” dataset.
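As a concrete illustration, here is a minimal sketch of such a request using the sentinelhub Python package; the bounding-box size, time window, resolution, and evalscript below are illustrative assumptions, not the study's exact settings.

```python
# A sketch of fetching a Sentinel-2 L2A true-color chip around a VIIRS
# detection via the SentinelHub API. Window sizes and dates are assumptions.
from datetime import timedelta
from sentinelhub import (SHConfig, BBox, CRS, DataCollection, MimeType,
                         SentinelHubRequest, bbox_to_dimensions)

config = SHConfig()  # expects SentinelHub credentials to be configured

def fetch_chip(lat, lon, acq_date, days=3, half_size_deg=0.05, resolution=10):
    """Fetch a true-color image around (lat, lon) near acq_date."""
    bbox = BBox([lon - half_size_deg, lat - half_size_deg,
                 lon + half_size_deg, lat + half_size_deg], crs=CRS.WGS84)
    start = (acq_date - timedelta(days=days)).isoformat()
    end = (acq_date + timedelta(days=days)).isoformat()
    evalscript = """
    //VERSION=3
    function setup() {
      return {input: ["B04", "B03", "B02"], output: {bands: 3}};
    }
    function evaluatePixel(s) {
      return [2.5 * s.B04, 2.5 * s.B03, 2.5 * s.B02];  // brightness gain
    }
    """
    request = SentinelHubRequest(
        evalscript=evalscript,
        input_data=[SentinelHubRequest.input_data(
            data_collection=DataCollection.SENTINEL2_L2A,
            time_interval=(start, end),
            mosaicking_order="leastCC")],  # prefer the least cloudy scenes
        responses=[SentinelHubRequest.output_response("default", MimeType.PNG)],
        bbox=bbox,
        size=bbox_to_dimensions(bbox, resolution=resolution),
        config=config)
    return request.get_data()[0]  # numpy array of shape (H, W, 3)
```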
3. Methodology
In this research, a Convolutional Neural Network (CNN) [7] was utilized to categorize images
into fire and no-fire classifications. The data was allocated with 60% dedicated to training, while
the remaining 40% was evenly divided between validation and testing. Evaluation metrics such as
accuracy and loss, among others, were employed to assess the model's performance, with further
details provided in the following sections.
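A minimal sketch of this 60/20/20 split using scikit-learn follows; load_image_paths_and_labels is a hypothetical helper standing in for however the labeled images are enumerated.

```python
# A sketch of the 60/20/20 train/validation/test split described above.
# `load_image_paths_and_labels` is a hypothetical helper, not from the paper.
from sklearn.model_selection import train_test_split

paths, labels = load_image_paths_and_labels()  # hypothetical helper

# First hold out 40%, then split that 40% evenly into validation and test.
train_x, rest_x, train_y, rest_y = train_test_split(
    paths, labels, test_size=0.4, stratify=labels, random_state=42)
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.5, stratify=rest_y, random_state=42)
```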
The study concentrated on exploring a Deep CNN and a simpler MobileNet-like CNN, as an
SVM-based method could not complete training given the constrained computing resources
at hand.
A variety of satellite images were used to train the models, with data augmentation techniques
such as rotation, width shift, height shift, shear, zoom, and horizontal flip applied to broaden the
data variety. These techniques aimed to bolster the model's generalization ability. The model
showcasing the lowest validation loss was recognized as the best-performing model in this
endeavor.
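For illustration, the augmentations listed above map directly onto Keras' ImageDataGenerator; the specific ranges and directory layout below are assumptions, as the paper does not state them.

```python
# A sketch of the described augmentation pipeline using Keras'
# ImageDataGenerator. The specific ranges are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=20,        # random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    shear_range=0.1,          # random shearing
    zoom_range=0.1,           # random zoom
    horizontal_flip=True,     # random horizontal flips
)

# A directory layout with 'fire' and 'nofire' subfolders is assumed.
train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(128, 128),
    class_mode="binary", batch_size=32)
```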
This model comprises three convolutional layers (each succeeded by max-pooling), a flattening
layer, a dense layer, a dropout layer for regularization, and a final dense layer with a sigmoid
activation function for binary classification. The model is compiled employing binary
cross-entropy as the loss function, with the flexibility to specify either Adam or RMSprop as the
optimizer. An ImageDataGenerator is deployed for real-time data augmentation to bolster the
model's generalization capability. During training, the model's performance is monitored using a
callback that saves the model whenever the validation loss improves. This architecture and its
training results, using the two optimizers, are illustrated in Figure 3.
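The following Keras sketch matches that description; the filter counts, input size, dense width, and dropout rate are illustrative assumptions, since the paper does not list exact hyperparameters.

```python
# A sketch of the described CNN: three conv/max-pool blocks, flatten, dense,
# dropout, and a sigmoid output. Hyperparameter values are assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint

def build_model(optimizer="adam", input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                    # regularization
        layers.Dense(1, activation="sigmoid"),  # binary fire / nofire output
    ])
    # Binary cross-entropy loss; optimizer may be "adam" or "rmsprop".
    model.compile(optimizer=optimizer, loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Save the model whenever validation loss improves, as described above.
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss",
                             save_best_only=True)
```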
The models were trained using the acquired and pre-processed data. Their performance
was evaluated using metrics such as accuracy, loss, F1 score, precision, and recall, as illustrated
in Figure 5, which compares the two trained models; a sketch of this evaluation follows.
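These comparison metrics can be computed along the following lines with scikit-learn; `model` and `test_gen` are assumed from the training pipeline above, with the test generator created with shuffle=False so labels align with predictions.

```python
# A sketch of computing accuracy, precision, recall, F1, and the confusion
# matrix for the binary classifier; `model` and `test_gen` are assumptions.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

probs = model.predict(test_gen)             # sigmoid outputs in [0, 1]
preds = (probs.ravel() >= 0.5).astype(int)  # threshold at 0.5
y_true = test_gen.classes                   # labels (generator not shuffled)

print("accuracy :", accuracy_score(y_true, preds))
print("precision:", precision_score(y_true, preds))
print("recall   :", recall_score(y_true, preds))
print("f1 score :", f1_score(y_true, preds))
print(confusion_matrix(y_true, preds))
```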
Figure 5. Comparison of performance metrics between two trained models
showcasing their accuracy, F1 score, precision, recall, and respective confusion
matrices.
[Figure 5 results: Deep CNN and simplified MobileNet-like CNN, each trained with the Adam and RMSprop optimizers]
In this study, the Deep CNN with the Adam optimizer emerged as the frontrunner, although the
accuracy/loss graph indicates it may be suffering from some overfitting. While the simplified CNN
offers advantages in terms of computational efficiency, there is a compromise in detection
performance. The choice of optimizer also played a pivotal role, with Adam delivering slightly
better results than RMSprop. Notably, the model size of the simplified CNN is almost 1,000 times
smaller than that of the Deep CNN with its multiple convolutional and max-pooling layers.
4. Limitations
The primary limitation of this study is the reliance on the visible spectrum for wildfire detection,
which can be obstructed by adverse conditions like cloud cover or heavy smoke. The temporal
resolution of satellite imagery may also hinder real-time wildfire detection due to the gaps
between image captures. Additionally, the spectral resolution could be improved by
incorporating a broader range of the electromagnetic spectrum. The spatial resolution of the
imagery and the dependency on external platforms like SentinelHub for data acquisition are
other notable constraints that could affect the scalability and replicability of the research.
Labeling was another challenge, as was limited access to computing resources. Finally, skew in the
number of fire vs. nofire images may cause some overfitting.
5. Future Work
In future endeavors, the model can benefit from the integration of multispectral imagery,
especially bands like NIR (Near InfraRed) and SWIR (Short Wave InfraRed), to address
challenges from adverse atmospheric conditions. Incorporating real-time satellite data streams
will bolster the model's real-world applicability [8]. Exploring data fusion techniques and
leveraging satellites with higher temporal resolution can lead to a more precise wildfire
detection system. Enhancing both spatial and temporal resolutions by utilizing additional
satellite data, including near-earth orbiting satellites, offers a comprehensive approach to
wildfire detection and monitoring. Addressing the data skew between nofire and fire images would further improve model robustness.
Furthermore, integrating feedback loops for continuous model improvement and exploring
ensemble methods could further enhance the model's robustness and accuracy. Collaborative
efforts with firefighting agencies could also be explored to ensure the practical applicability of
the model in real-world scenarios.
6. Conclusion
This research underscores the potential of Convolutional Neural Networks paired with
high-resolution satellite imagery for early wildfire detection. The accuracy achieved indicates the
power of machine learning in addressing wildfire challenges. Future work could benefit from
integrating additional multispectral bands like SWIR and NIR and refining spatial and temporal
resolutions for the early detection and prevention of wildfires. While future advancements may
make natural disasters like lightning-induced fires more predictable and preventable, the early
detection of human-caused incidents remains crucial. Emphasizing the adage, "prevention is
better than cure," it's vital to recognize that when prevention fails, early detection and
intervention become our best defense to safeguard lives and ecosystems. The integration of AI
with satellite technology not only offers a promising solution to the wildfire detection challenge
[9] but also shows the transformative potential of technology in addressing pressing global
issues.
7. Acknowledgments
This research was significantly aided by the generous sponsorship of approximately
EUR 4,000 from the Network of Resources (NoR) at ESA. This funding notably expanded
access to the SentinelHub platform, facilitating the use of high-resolution, multi-spectral
imagery which was crucial for the study. Additionally, the Sentinel-Hub forum proved to
be instrumental in resolving queries related to the imagery.
8. References
[1] National Interagency Fire Center, wildfire statistics. https://www.nifc.gov/fire-information/statistics
[2] CNN, coverage of the 2018 California wildfires. https://www.cnn.com/2018/11/09/us/california-wildfires-superlatives-wcx/index.html
[3] NASA LP DAAC, "S-NPP NASA VIIRS Overview." https://lpdaac.usgs.gov/data/get-started-data/collection-overview/missions/s-npp-nasa-viirs-overview/
[4] "Sentinel-2 - ESA's Optical High-Resolution Mission for GMES Operational Services," European Space Agency (ESA). https://sentinel.esa.int/web/sentinel/missions/sentinel-2
[5] "Using VIIRS Active Fire Product to Monitor Wildfire in the United States," NASA Earth Observing System Data and Information System (EOSDIS). https://www.earthdata.nasa.gov/sensors/viirs
[7] MathWorks, "What Is a Convolutional Neural Network?" https://www.mathworks.com/discovery/convolutional-neural-network-matlab.html
[8] "VIIRS Active Fire Detections Data," NOAA Office of Satellite and Product Operations. https://www.ospo.noaa.gov/Products/land/afiband.html
[9] "NASA Tracks Wildfires From Above to Aid Firefighters Below," NASA. https://www.nasa.gov/missions/aqua/nasa-tracks-wildfires-from-above-to-aid-firefighters-below
9. Appendix:
9.1 Wildfire statistics
Year    Total Fires    Human-Caused    Human-Caused %    Million Acres
Source: https://www.nifc.gov/fire-information/statistics
9.2. Sample Images
[Sample satellite images labeled "fire" and "nofire"]
GitHub: https://github.com/kingaryaprince/wildfiredetect