
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“JNANA SANGAMA” BELAGAVI-590 018, KARNATAKA

PROJECT REPORT

ON

“Detection of Deforestation from Satellite Images”
SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENT
FOR THE AWARD OF THE DEGREE,

BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE & ENGINEERING
Submitted By
1. Anusha S 2. Hemashree C L

[1CG17CS008] [1CG17CS035]
3. Kavana A R 4. Kavitha K
[1CG17CS041] [1CG17CS042]

Under the guidance of: HOD:

Dr. Shantala C P Ph.D Dr. Shantala C P, Ph.D


Prof & Head, Dept. of CSE,CIT, Prof & Head, Dept. of CSE,
Gubbi, Tumakuru. CIT, Gubbi, Tumakuru.

Channabasaveshwara Institute of Technology


(NAAC Accredited & ISO 9001:2015 Certified Institution)
NH 206 (B.H. Road), Gubbi, Tumakuru – 572 216. Karnataka.

(Affiliated to Visvesvaraya Technological University, Belagavi & Recognized by AICTE New Delhi)
2020-21
Channabasaveshwara Institute of Technology
(NAAC Accredited & ISO 9001:2015 Certified Institution)
NH 206 (B.H. Road), Gubbi, Tumakuru – 572 216. Karnataka.

(Affiliated to Visvesvaraya Technological University, Belagavi & Recognized by AICTE New Delhi)
2020-21

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the project work entitled “Detection of Deforestation from Satellite
Images” has been successfully carried out by Anusha S [1CG17CS008], Hemashree C L
[1CG17CS035], Kavana A R [1CG17CS041], Kavitha K [1CG17CS042], bonafide
students of CHANNABASAVESHWARA INSTITUTE OF TECHNOLOGY, GUBBI,
TUMAKURU, under our supervision and guidance and submitted in partial fulfillment of the
requirements for the award of the degree of Bachelor of Engineering by Visvesvaraya Technological
University, Belagavi during the academic year 2020-21. It is certified that all
corrections/suggestions indicated for internal assessment have been incorporated in the report deposited
in the departmental library. The project report has been approved as it satisfies the academic
requirements for the above said degree.

Guide: H.O.D:

Dr. Shantala C P, Ph.D Dr. Shantala C P, Ph.D


Prof & Head, Dept. of CSE, Prof & Head, Dept. of CSE,
CIT, Gubbi, Tumakuru. CIT, Gubbi,Tumakuru.

Principal: Examiners:

1.
Dr. Suresh D S Ph.D
CIT, Gubbi, Tumakuru. 2.
Channabasaveshwara Institute of Technology
(NAAC Accredited & ISO 9001:2015 Certified Institution)
NH 206 (B.H. Road), Gubbi, Tumakuru – 572 216. Karnataka.

(Affiliated to Visvesvaraya Technological University, Belagavi & Recognized by AICTE New Delhi)
2020-21

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

UNDERTAKING

We, the students Anusha S [1CG17CS008], Hemashree C L [1CG17CS035], Kavana
A R [1CG17CS041], Kavitha K [1CG17CS042] of VIII semester B.E. Computer
Science and Engineering of CHANNABASAVESHWARA INSTITUTE OF
TECHNOLOGY, GUBBI, TUMAKURU, declare that the project work entitled “Detection
of Deforestation from Satellite Images” has been carried out and submitted in partial
fulfillment of the requirements for the award of the degree of Bachelor of Engineering in
Computer Science and Engineering by the Visvesvaraya Technological University
during the academic year 2020-21.

1. Anusha S 3. Kavana A R
[1CG17CS008] [1CG17CS041]

2. Hemashree C L 4. Kavitha K
[1CG17CS035] [1CG17CS042]
Channabasaveshwara Institute of Technology
(NAAC Accredited & ISO 9001:2015 Certified Institution)
NH 206 (B.H. Road), Gubbi, Tumakuru – 572 216. Karnataka.

(Affiliated to Visvesvaraya Technological University, Belagavi & Recognized by AICTE New Delhi)
2020-21

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

BONAFIDE CERTIFICATE

This is to certify that the project work entitled “Detection of Deforestation from
Satellite Images” is a bonafide work of Anusha S [1CG17CS008], Hemashree C L
[1CG17CS035], Kavana A R [1CG17CS041], Kavitha K [1CG17CS042], students
of VIII semester B.E. Computer Science and Engineering, carried out at
Channabasaveshwara Institute of Technology, Gubbi, Tumakuru, in partial
fulfillment of the requirements for the award of the degree of B.E. in Computer Science and
Engineering of Visvesvaraya Technological University, Belagavi, under my supervision
and guidance. It is certified that, to the best of my knowledge, the work reported herein does
not form part of any other thesis on the basis of which a degree or award was conferred on
an earlier occasion on this or any other candidate.

Guide:

Dr. Shantala C P Ph.D


Prof & Head, Dept. of CSE,
CIT, Gubbi, Tumakuru.
ACKNOWLEDGEMENT
A great deal of time and effort has gone into completing this project and
documenting it. The hours spent going through various books and other materials
related to the topic we chose have reaffirmed its power and utility in doing this
project.
Several special people have contributed significantly to this effort. First of all, we
are grateful to our institution, Channabasaveshwara Institute of Technology, Gubbi,
which provided us the opportunity to fulfill our most cherished desire of reaching our
goal.
We acknowledge and express our sincere thanks to our beloved Director and
Principal, Dr. Suresh D S, for his many valuable suggestions and continued
encouragement and support in our academic endeavors.

We wish to express our deep sense of gratitude to our guide, Dr. Shantala C P,
Head of the Department of Computer Science and Engineering, who has been a constant
driving force, motivating us with innovative ideas and offering tireless support, advice,
and helpful suggestions throughout the project.

This would never have been possible without the support and technical supervision of all
the faculty members of CITRIS, and we thank them for all their guidance.

We would also like to express our gratitude towards our parents and friends for their kind
cooperation and encouragement, which helped us in the completion of this project.

Finally, we would like to thank all the teaching and non-teaching staff of the Dept. of
CSE for their cooperation.

Thanking everyone….

Anusha S [1CG17CS008]

Hemashree C L [1CG17CS035]

Kavana A R [1CG17CS041]

Kavitha K [1CG17CS042]
ABSTRACT

Forest monitoring is an effort to maintain forest sustainability. Forest monitoring is accomplished by
observing the existence of forest classes, for example primary forest, secondary forest, or plantation forest. The
existence of each forest class supports environmental sustainability and stability. Since forests are spread
widely across the land cover, a reliable monitoring technique is required to help the government monitor the
forests. Forest classification applications require very fine mapping of forest types. A promising technique for the
direct mapping of forest types is the use of satellite imagery. This technology enables faster data acquisition
and wider data coverage compared with field observations.
Mapping deforestation is an essential step in the process of managing tropical rainforests. It lets us understand
and monitor both legal and illegal deforestation and its implications, which include the effect deforestation may
have on climate change through greenhouse gas emissions. Given that there is ample room for improvements
when it comes to mapping deforestation using satellite imagery, in this study, we aimed to test and evaluate the
use of algorithms belonging to the growing field of deep learning (DL), particularly convolutional neural
networks (CNNs), to this end. Although studies have been using DL algorithms for a variety of remote sensing
tasks for the past few years, they are still relatively unexplored for deforestation mapping.

The forests on Earth are rapidly shrinking due to human urbanization. In fact, the world loses almost fifty
football fields of forest area every minute. Deforestation in the Amazon Basin has been particularly noticeable,
leading to reduced biodiversity, climate change, and habitat loss. However, satellites have taken thousands of
pictures of the forests, and analyzing them could provide significant insight into the causes and effects of
deforestation. Consequently, algorithms interpreting satellite images of these locations are necessary to help
groups respond to deforestation quickly and effectively.
TABLE OF CONTENTS

Contents Page no
1. INTRODUCTION 1-3
1.1 Objective 2
1.2 Problem Statement 2
1.3 Scope of the Project 3

2. LITERATURE SURVEY 4-13

3. SYSTEM ANALYSIS 14-15


3.1 Existing System 14
3.2 Proposed System 15
4. SYSTEM DESIGN 16-20

4.1 System Architecture 16


4.2 Flowchart 17
4.3 Sequence diagram 19

4.4 Use case diagram 20

5. IMPLEMENTATION 21-30
5.1 Violation detection 21
5.1.1 Vehicle Classification 22
5.1.2 Initial User Interface View 23
5.1.3 Opening Video Footage From Storage 23
5.1.4 Region Of Interest 23
5.1.5 YOLO V3 Model 24

5.2. Number plate Detection 25

6. SYSTEM TESTING 31
7. RESULTS 32-39

8. CONCLUSION 40

REFERENCES

CHAPTER 1

INTRODUCTION

Deforestation is one of the primary sources of concern regarding climate change, as it is one of the largest
sources of greenhouse gas emissions in the world, second only to the burning of fossil fuels. Within the region
of the Brazilian Amazon, studies have shown that deforestation, in conjunction with forest fires, can make up to
48% of the total emissions. It also bears substantial implications regarding the conservation of ecosystems and
their biodiversity in the region, and it has been linked to the loss of species and general loss of ecosystem
stability through fragmentation. Locally, estimates also show that unchecked deforestation could lead to
reductions in seasonal rainfall and to the savannization of the environment.

Remote sensing imagery has been instrumental in the process of keeping track of deforestation in the Amazon.
The Brazilian National Institute for Space Research (INPE) releases annual deforestation
and land use information derived from satellite imagery data through its Program for Deforestation
Monitoring (PRODES) and TerraClass projects, which have been widely used for monitoring, research, and
policymaking. Carbon emission estimates from deforestation are also dependent on land use and land-use
change data. However, they are likely to be underestimated due to the omission of illegal logging data in
official reports.

The development of remote sensing technology was followed by the development of digital image analysis
and classification. The challenge in addressing this task is to duplicate human abilities in image understanding,
so that the computer can recognize an object the way a human does. Deep learning algorithms have seen a
massive rise in popularity for remote sensing image analysis over the past few years. Deep learning has been
employed for satellite image analysis including object classification, object detection, and land use and land
cover classification.
The deep learning algorithm that has been widely adopted is the Convolutional Neural Network
(CNN). This algorithm was designed to imitate the image recognition system of the human visual cortex.
Earlier research explained that the CNN model was designed as a hierarchy of convolutional and pooling layers. The
hierarchical structure of CNN makes it possible to learn the hierarchical features of the input data. That research
was used for visual object recognition and vision navigation for off-road mobile robots. Multispectral data was
not explored in that research.


Technology is required to support and address the forest monitoring problem. One main problem in forest
monitoring in Indonesia is the wide area and wide distribution of forests. One potential technology that
could be employed to address this problem is the usage of a classification method based on deep learning and
satellite imagery data. The usage of satellite imagery data enables faster data acquisition and wider data
coverage compared with the field observation approach.
Several studies have proposed CNNs for land cover classification; to the best of our knowledge, little has been
said about the use of spectral satellite imagery, including spectral features, spectral indices and spatial features
together as input features.

1.1 OBJECTIVES
The goal of the project is to detect and track changes from a large number of images.
A CNN-based change detection method will be exploited for detecting changes in the satellite
images. On successful completion of the project, it will deliver functionality that enables
detection of deforestation-related changes from satellite images.

1.2 PROBLEM STATEMENT

Deforestation is an urgent problem in our world today, as it contributes to reduced
biodiversity, habitat loss, climate change, and other devastating effects. We tackle this problem by
using satellite data. With advances in satellite imagery, detection of deforestation has become
faster, more convenient, and more accurate than before. An ongoing effort is the Real-Time
System for Detection of Deforestation (DETER), which has been credited with reducing
deforestation.

1.3 SCOPE
Convolutional Neural Networks (CNNs) have been very successful in recent years for a large
number of visual tasks, such as image recognition and video analysis, because of their ability to
capture structured representations of data. This presents two advantages.

First, this drastically reduces the number of parameters that need to be learnt by the model,
which allows the model to be trained faster.

Secondly, because the filter uses the same set of parameter weights while convolving across
different parts of the image, this gives CNNs a translational invariance property. The implication
is that identical objects appearing in different parts of the image can be recognized as being
identical. Other types of layers are also present in the model, such as the ReLU layer that applies a
threshold to neuron activations, the pooling layer that performs downsampling operations
along the spatial dimensions, and dropout layers that randomly deactivate neurons in order to
reduce overfitting.
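
As a rough illustration of these layer types, the sketch below builds a small CNN with convolution,
ReLU, pooling, and dropout layers using Keras. The layer sizes, input shape, and the 10-class output
are illustrative assumptions, not the model actually used in this project.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                          # small RGB patches (assumed size)
    layers.Conv2D(32, 3, activation="relu", padding="same"),  # convolving filter with shared weights + ReLU
    layers.MaxPooling2D(),                                    # downsampling along the spatial dimensions
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),                                     # randomly deactivate neurons to reduce overfitting
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                   # e.g. 10 hypothetical land-cover classes
])
model.summary()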

A feedforward Artificial Neural Network based on OCR algorithms is used. For
this purpose, the MATLAB matrix library is used. MATLAB is a high-level language and
interactive environment for numerical computation, visualization, and programming. Using
MATLAB, you can analyze data, develop algorithms, and create models and applications. The
language, tools, and built-in math functions enable you to explore multiple approaches and reach
a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or
Java.


CHAPTER 2

LITERATURE SURVEY

For the past several years, there has been increasing interest among
researchers in the problem of extracting information from satellite imagery data, for example
through programs for deforestation monitoring, which is evident from the large number of
technical papers.

Deforestation is an urgent problem in our world today, as it contributes to reduced biodiversity, habitat
loss, climate change, and other devastating effects. In the Amazon river basin, a rainforest that covers
40% of continental South America and spans nine countries, 0.2% of the forest is lost to
deforestation each year. We hope to tackle this problem by using satellite data to track
deforestation and help researchers better understand where, how and why deforestation happens, and
how to respond to it. With advances in satellite imagery, detection of deforestation
has become faster, more convenient, and more accurate than before. An example of an ongoing effort
is the Real Time System for Detection of Deforestation (DETER), which has been credited with reducing
the deforestation rate in Brazil by almost 80% since 2004 by alerting environmental police to large-
scale forest clearing [20]. Current tracking efforts within rainforests largely depend on coarse-
resolution imagery from Landsat (30 meter pixels) or MODIS (250 meter pixels). The challenges
faced by these methods are their limited effectiveness in detecting small-scale deforestation and in
differentiating between human and natural causes of forest loss. Planet, a designer and builder of
Earth-imaging satellites, has a labelled dataset of land surfaces at 3-5 meter resolution, and we
propose leveraging modern deep learning techniques to identify activities happening within the
images. We treat this as a multi-label classification problem, and we aim to label satellite image chips
with one or more of 17 labels that indicate atmospheric conditions,
land cover, and land use.

One such application is Convolutional Neural Networks

Convolutional Neural Networks have been very successful in recent years for a large
number of visual tasks, such as image recognition and video analysis, because of their ability to
capture structured representations of data. CNNs are in fact an adaptation of a machine-learning model
called Neural Networks for visual tasks, and they make use of the special structure present in visual
data to improve the efficiency and effectiveness of the Neural Network model.

In CNNs, a convolving filter is used across the image and neurons in one layer are only
connected locally to the neurons in the preceding layer. For each neuron in the current layer, a dot
product is computed between the parameter weights of the convolving filter and the local region in
the preceding layer.

As a challenging topic, satellite image classification usually involves huge amounts of data and large
variations. Traditional machine learning algorithms such as random forests cannot handle the extraction
of a huge number of features. Recently, researchers have begun to focus on deep learning, CNNs and
Deep Belief Networks for the classification of satellite images. Muhammad et al. [1] extract attributes
including the organization of color pixels and pixel intensity using a decision tree for training, although
the test is only conducted on a very limited data set. Goswami et al. [2] used satellite images to train
a multi-layer perceptron (MLP) model which employs the error back-propagation (EBP) learning algorithm
for a two-class waterbody object detection problem. Although the problem itself is simplified
because there are only two classes, they achieve comparatively high test accuracy and
demonstrate the applicability of neural networks to satellite image classification. Basu et al. [13]
build a classification framework which extracts features from input images and feeds the
normalized feature vectors to a Deep Belief Network for classification. The authors show that their
framework outperforms CNNs by over 10% on two datasets they built. Penatti et al. [14] use aerial
and remote sensing images to train CNNs, make comparisons between the performance of the
ConvNets and other low-level color descriptors, and show the advantages CNNs have. They also show
the possibility of combining different CNNs with other descriptors or fusing multiple CNNs.
Nogueira et al. [15] further explore CNNs for the classification of remote sensing images. They extract
features using CNNs, conduct experiments over 3 remote sensing datasets using six popular
CNN models, and achieve state-of-the-art results. Another important topic directly related to the satellite
image classification problem is object detection, because there are usually multiple labels for a single
satellite image and the contents of the image often need to be identified. A naive approach for
locating objects in images is to slide window boxes of different sizes over selected locations of the
image and classify those window boxes [3, 4]. This can lead to satisfying accuracy but incurs
considerable computational cost [5]. By replacing the fully connected layer with a convolutional neural
network, the computational cost is reduced significantly [5]. The regional convolutional neural network
(R-CNN) has risen as an efficient and accurate approach to addressing object detection problems [6]. It
first employs a regional proposal method to locate regions of possible interest in the image, then
applies a neural network to classify the proposed objects [7, 8]. Segmentation works as another state-of-
the-art approach for object detection [9, 10, 11]. The basic idea of the segmentation approach is to
generate a label map for each pixel in the image.

Residual Networks, or “ResNets”, are a variant of CNNs. A ResNet is used as the base for our CNN model, as it
is a state-of-the-art architecture known for its superior performance. High layer depth is of central
importance for many visual recognition tasks, and ResNets are notable for being able to achieve this at a
lower complexity than other architectures.

They do so by using special skip connections, which also help solve the degradation problem
associated with networks of high layer depth, in which accuracy gets saturated and then degrades rapidly.
We experiment with this ResNet architecture as our base, and add our own prediction layer at the
end.
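
The sketch below illustrates the skip connection at the heart of a residual block, assuming Keras is
available; the filter counts and the 1×1 projection on the shortcut are illustrative choices, not the
exact blocks of any particular ResNet variant.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = x
    if x.shape[-1] != filters:                        # project the shortcut when channel counts differ
        shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                   # skip connection: output of the chain plus its input
    return layers.ReLU()(y)

inputs = layers.Input(shape=(64, 64, 3))              # assumed input size
outputs = residual_block(inputs, 32)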

U-Net

U-Net is a semantic segmentation fully convolutional neural network model proposed by


[Ronneberger et al. 2015], modified and extended to work with fewer images during training in order
to generate precise segmentation. Fully convolutional networks have an encoder-decoder
architecture, in which features are learned by the encoder through convolution layers, while the
decoder converts these features into a pixel-level classification through up-sampling layers. The
model used stands out for using intermediate outputs of the encoder concatenated to the decoder.
This reduces the negative effects of the dimensionality reduction performed by max-pooling functions
in the encoder, improving segmentation. The original architecture is presented in Figure 1. In
order to improve the result and shorten the time required for training convolutional layer models, the
use of Residual Blocks is state of the art [He et al. 2016]. Such blocks are composed of a chain of
convolutional layers, where the output of this chain is added to its input. The advantage of this

structure is that it creates a free path for the gradient which, in a structure without the residual block,
can become vanishingly small after successive derivatives and activation functions in the backpropagation
step.

In the present work, we chose to fine-tune the U-Net model architecture with Residual Blocks to
get better accuracy for the proposed task by changing the network's hyperparameters, but without creating
a deeper network than the original U-Net [Ronneberger et al. 2015]. Using the ADAM optimizer
[Kingma and Ba 2014] with a decaying learning rate of 0.001, our best network under these
assumptions was as follows (Table 1). The ReLU activation function was used in every convolution
layer, and softmax on the output layer.
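
As a rough sketch of the kind of architecture described above, the code below builds a small U-Net-style
encoder-decoder with residual blocks, skip concatenations, the Adam optimizer with an initial learning
rate of 0.001, ReLU inside and softmax on the output. The depths, filter counts, and input shape are
illustrative assumptions and are not the hyperparameters of Table 1.

import tensorflow as tf
from tensorflow.keras import layers, Model

def res_block(x, filters):
    s = layers.Conv2D(filters, 1, padding="same")(x)                   # projection shortcut
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([s, y]))

def build_unet(input_shape=(256, 256, 3), n_classes=2):
    inp = layers.Input(input_shape)
    e1 = res_block(inp, 32)                                            # encoder
    p1 = layers.MaxPooling2D()(e1)
    e2 = res_block(p1, 64)
    p2 = layers.MaxPooling2D()(e2)
    b = res_block(p2, 128)                                             # bottleneck
    u2 = layers.UpSampling2D()(b)                                      # decoder with skip concatenations
    d2 = res_block(layers.Concatenate()([u2, e2]), 64)
    u1 = layers.UpSampling2D()(d2)
    d1 = res_block(layers.Concatenate()([u1, e1]), 32)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(d1)        # per-pixel class probabilities
    return Model(inp, out)

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy")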


R-CNN [1]
To bypass the problem of selecting a huge number of regions, Ross Girshick et al. proposed a
method which uses selective search to extract just 2000 regions from the image, called
region proposals. Therefore, instead of trying to classify a huge number of regions,
you can just work with 2000 regions. These 2000 region proposals are generated using the
selective search algorithm described below.
Selective Search:
1. Generate initial sub-segmentation, we generate many candidate regions
2. Use greedy algorithm to recursively combine similar regions into larger ones
3. Use the generated regions to produce the final candidate region proposals.

These 2000 candidate region proposals are warped into a square and fed into a convolution
neural network that produces a 4096-dimensional feature vector as output. The CNN acts as a
feature extractor and the output dense layer consists of the features extracted from the image and
the extracted features are fed into an SVM to classify the presence of the object within that
candidate region proposal. In addition to predicting the presence of an object within the region
proposals, the algorithm also predicts four values which are offset values to increase the
precision of the bounding box. For example, given a region proposal, the algorithm would have
predicted the presence of a person but the face of that person within that region proposal could
have been cut in half. Therefore, the offset values help in adjusting the bounding box of the
region proposal.

Problems with R-CNN


 It still takes a huge amount of time to train the network as you would have to classify
2000 region proposals per image.
 It cannot be implemented real time as it takes around 47 seconds for each test image.
 The selective search algorithm is a fixed algorithm. Therefore, no learning is happening
at that stage. This could lead to the generation of bad candidate region proposals.


FAST R-CNN: Fast Region-based Convolutional Network method (Fast R-CNN) for object
detection [2].

Fast R-CNN builds on previous work to efficiently classify object proposals using deep
convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to
improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains
the very deep VGG16 network 9× faster than R-CNN, is 213× faster at test time, and achieves a
higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3× faster,
tests 10× faster, and is more accurate. Fast R-CNN is implemented in Python and C++.

The Fast RCNN method has several advantages:

1. Higher detection quality (mAP) than R-CNN, SPPnet

2. Training is single-stage, using a multi-task loss

3. Training can update all network layers

4. No disk storage is required for feature caching

Fast R-CNN architecture and training:

A Fast R-CNN network takes as input an entire image and a set of object proposals. The
network first processes the whole image with several convolution (conv) and max pooling layers
to produce a conv feature map. Then, for each object proposal, a region of interest (RoI) pooling
layer extracts a fixed-length feature vector from the feature map. Each feature vector is fed into a
sequence of fully connected (fc) layers that finally branch into two sibling output layers: one that
produces softmax probability estimates over K object classes plus a catch-all “background” class,
and another layer that outputs four real-valued numbers for each of the K object classes. Each set
of 4 values encodes refined bounding-box positions for one of the K classes.
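
A minimal sketch of the two sibling output heads described above is given below, assuming Keras:
one softmax over K object classes plus a background class, and one layer of 4 box-regression values
per class. K, the RoI feature size, and the fc widths are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers, Model

K = 20                                              # number of object classes (assumed)
roi_features = layers.Input(shape=(7, 7, 512))      # fixed-length RoI feature map (assumed size)
x = layers.Flatten()(roi_features)
x = layers.Dense(4096, activation="relu")(x)        # fully connected (fc) layers
x = layers.Dense(4096, activation="relu")(x)
cls_probs = layers.Dense(K + 1, activation="softmax", name="class_probs")(x)   # K classes + background
bbox_deltas = layers.Dense(4 * K, name="bbox_regression")(x)                   # 4 refined box values per class
head = Model(roi_features, [cls_probs, bbox_deltas])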

Conclusion:
The reason “Fast R-CNN” is faster than R-CNN is because you do not have to feed 2000
region proposals to the convolution neural network every time. Instead, the convolution
operation is done only once per image and a feature map is generated from it. Fast R-CNN is
significantly faster in training and testing sessions over R-CNN.


YOLO: You Only Look Once [3]


All the previous object detection algorithms use regions to localize the object within the
image. The network does not look at the complete image; instead, it looks at parts of the image which
have high probabilities of containing the object. YOLO, or You Only Look Once, is an object
detection algorithm much different from the region-based algorithms seen above. In YOLO, a
single convolutional network predicts the bounding boxes and the class probabilities for these
boxes [1]. The unified architecture is extremely fast. The base YOLO model processes images in
real time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an
astounding 155 frames per second while still achieving double the mAP of other real-time
detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors
but is less likely to predict false positives on background. Finally, YOLO learns very general
representations of objects. It outperforms other detection methods, including DPM and R-CNN,
when generalizing from natural images to other domains like artwork.

Limitations of YOLO:
YOLO imposes strong spatial constraints on bounding box predictions since each grid
cell only predicts two boxes and can only have one class. This spatial constraint limits the
number of nearby objects that the model can predict. The model struggles with small objects that
appear in groups, such as flocks of birds. Since the model learns to predict bounding boxes from
data, it struggles to generalize to objects in new or unusual aspect ratios or configurations. The
model also uses relatively coarse features for predicting bounding boxes since the architecture
has multiple downsampling layers from the input image. Finally, while the network is trained on a loss
function that approximates detection performance, the loss function treats errors the same in
small bounding boxes and large bounding boxes. A small error in a large box is generally
benign, but a small error in a small box has a much greater effect on IOU. The main source of
error is incorrect localization.

Architecture:
It is possible to frame object detection as a single regression problem, straight from
image pixels to bounding box coordinates and class probabilities. Using the system, you only
look once (YOLO) at an image to predict what objects are present and where they are.


A single convolutional network simultaneously predicts multiple bounding boxes and class
probabilities for those boxes. YOLO trains on full images and directly optimizes detection
performance. This unified model has several benefits over traditional methods of object
detection. Current detection systems repurpose classifiers to perform detection. To detect an
object, these systems take a classifier for that object and evaluate it at various locations and
scales in a test image. Systems like deformable parts models (DPM) use a sliding window
approach where the classifier is run at evenly spaced locations over the entire image.
First, YOLO is extremely fast. Since it frames detection as a regression problem it does
not need a complex pipeline. It simply runs our neural network on a new image at test time to
predict detections. Our base network runs at 45 frames per second with no batch processing on a
Titan X GPU and a fast version runs at more than 150 fps. This means we can process streaming
video in real-time with less than 25 milliseconds of latency. Furthermore, YOLO achieves more
than twice the mean average precision of other real-time systems.
Second, YOLO reasons globally about the image when making predictions. Unlike
sliding window and region proposal-based techniques, YOLO sees the entire image during
training and test time so it implicitly encodes contextual information about classes as well as
their appearance. Fast R-CNN, a top detection method, mistakes background patches in an image
for objects because it cannot see the larger context. YOLO makes less than half the number of
background errors compared to Fast R-CNN.
Third, YOLO learns generalizable representations of objects. When trained on natural
images and tested on artwork, YOLO outperforms top detection methods like DPM and R-CNN
by a wide margin. Since YOLO is highly generalizable it is less likely to break down when
applied to new domains or unexpected inputs. YOLO still lags behind state-of-the-art detection
systems in accuracy. While it can quickly identify objects in images it struggles to precisely
localize some objects, especially small ones.

YOLO V1:

You Only Look Once (YOLO v1) [6] is an object detection system targeted at real-time
processing. YOLO divides the input image into an S×S grid. Each grid cell predicts only one
object. Each grid cell predicts a fixed number of boundary boxes; for example, a grid cell may make
two boundary box predictions to locate where the object is.


For each grid cell,


 It predicts B boundary boxes and each box has one box confidence score,
 It detects one object only regardless of the number of boxes B,
 It predicts C conditional class probabilities (one per class for the likeliness of the object
class).
Each boundary box contains 5 elements: (x, y, w, h) and a box confidence score. The confidence
score reflects how likely it is that the box contains an object and how accurate the boundary box is. The
bounding box width w and height h are normalized by the image width and height. x and y are
offsets to the corresponding cell. Hence, x, y, w and h are all between 0 and 1. Each cell has 20
conditional class probabilities. The conditional class probability is the probability that the
detected object belongs to a particular class (one probability per category for each cell). So,
YOLO’s prediction has a shape of (S, S, B×5 + C) = (7, 7, 2×5 + 20) = (7, 7, 30).
The major concept of YOLO is to build a CNN network to predict a (7, 7, 30) tensor. It uses a
CNN network to reduce the spatial dimension to 7×7 with 1024 output channels at each location.
YOLO performs a linear regression using two fully connected layers to make the 7×7×2 boundary
box predictions. To make a final prediction, we keep those with high box
confidence scores (greater than 0.25) as the final predictions.
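
The output shape above can be checked with a couple of lines of Python; S, B and C follow the values
quoted in the text.

S, B, C = 7, 2, 20                     # grid size, boxes per cell, number of classes
prediction_shape = (S, S, B * 5 + C)   # each box carries (x, y, w, h, confidence)
print(prediction_shape)                # (7, 7, 30)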

YOLO V2:
SSD is a strong competitor for YOLO, which at one point demonstrated higher accuracy
for real-time processing. Compared with region-based detectors, YOLO has higher localization
error, and its recall (a measure of how well it locates all objects) is lower.
YOLOv2 [7] is the second version of YOLO, with the objective of improving the
accuracy significantly while making it faster.

YOLO V3: Class Prediction:


Most classifiers assume output labels are mutually exclusive. This holds if the outputs are
mutually exclusive object classes. Therefore, YOLO applies a softmax function to convert scores
into probabilities that sum up to one.
YOLOv3 [8] uses multi-label classification. For example, the output labels may be
“pedestrian” and “child”, which are not mutually exclusive (the sum of the outputs can be greater
than 1 now). YOLOv3 replaces the softmax function with independent logistic classifiers to
calculate the likelihood that the input belongs to a specific label. Instead of using mean square error
in calculating the classification loss, YOLOv3 uses binary cross-entropy loss for each label. This
also reduces the computational complexity by avoiding the softmax function.
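
A small sketch of this multi-label scheme is shown below, assuming TensorFlow: independent sigmoid
scores per label and a binary cross-entropy loss computed from the raw logits. The class scores and
target labels are made-up values for illustration.

import tensorflow as tf

logits = tf.constant([[2.0, -1.0, 0.5]])   # raw class scores for one box (illustrative)
labels = tf.constant([[1.0, 0.0, 1.0]])    # e.g. "pedestrian" and "child" both true
probs = tf.sigmoid(logits)                 # independent logistic scores; their sum may exceed 1
loss = tf.keras.losses.binary_crossentropy(labels, logits, from_logits=True)
print(probs.numpy(), loss.numpy())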

Bounding box prediction and cost function classification:


YOLOv3 predicts an objectness score for each bounding box using logistic regression.
YOLOv3 changes the way the cost function is calculated. If the bounding box prior (anchor)
overlaps a ground truth object more than others, the corresponding objectness score should be 1.
For other priors with overlap greater than a predefined threshold (default 0.5), they incur no cost.
Each ground truth object is associated with one boundary box prior only. If a bounding box prior
is not assigned, it incurs no classification or localization loss, just a confidence loss on objectness.
We use tx and ty (instead of bx and by) to compute the loss.
YOLOv3 makes predictions at 3 different scales (similar to the FPN):
 In the last feature map layer.
 Then it goes back 2 layers and upsamples the feature map by 2. YOLOv3 then takes a feature
map with higher resolution and merges it with the upsampled feature map using
element-wise addition. YOLOv3 applies convolution filters on the merged map to
make the second set of predictions.
 Step 2 is repeated so that the resulting feature map layer has good high-level structural
(semantic) information and good spatial resolution information on object
locations.


Some of the other literature reviews are as follows:


Analysis of deforestation patterns and drivers in Swaziland using efficient Bayesian
multivariate classifiers

This work presents a machine learning-based method which automatically identifies key drivers and makes
predictions from available spatial data in Swaziland during the post-millennium period.

The efficient Bayesian multivariate classifiers (EBMC) are used to learn feature-selected Bayesian network
(BN) models of deforestation from multisource data. The EBMC models, learned using the K2 and BDeu
algorithms, were also used to predict the probability or risk of deforestation, in addition to providing a directed
acyclic graphical view of the key interacting factors.

These were compared with constraint- and knowledge-based BNs developed using the common EBMC-selected
variables. All the models performed consistently well (log loss < 0.3, AUC > 0.8) when evaluated against
observed deforestation patterns.

A land cover change detection and classification protocol

The method is designed to characterize the main land cover changes associated with different drivers, including the
conversion of forests to shrub and grassland primarily as a result of wildland fire and forest harvest, the
vegetation successional processes after disturbance, and changes of surface water extent and glacier ice/snow
associated with weather and climate changes. For natural vegetated areas, a component named AKUP11-VEG
was developed for updating the land cover; it involves four major steps:
1) identify the disturbed and successional areas using Landsat images and ancillary datasets;
2) update the land cover status for these areas using a SKILL model (System of Knowledge-based
Integrated-trajectory Land cover Labeling);
3) perform decision tree classification; and
4) develop a final land cover and land cover change product through post-processing modeling.
For water and ice/snow areas, another component named AKUP11-WIS was developed for initial land cover
change detection, removal of terrain shadow effects, and exclusion of ephemeral snow changes using a
3-year MODIS snow extent dataset from 2010 to 2012.


The overall approach was tested in three pilot study areas in Alaska, with each area consisting of four Landsat
image footprints. The results from the pilot study show that the overall accuracy in detecting change and
no-change is 90%, and the overall accuracy of the updated land cover label for 2011 is 86%. The method provided
a robust, consistent, and efficient means for capturing major disturbance events and updating land cover for
Alaska. The method has subsequently been applied to generate the land cover and land cover change products
for the entire state of Alaska.

A near-real-time approach for monitoring forest disturbance using Landsat time series: stochastic
continuous change detection

There is a need for near-real-time and high-performance remote sensing tools for monitoring abrupt and subtle
forest disturbances. This study presents a new approach called ‘Stochastic Continuous Change Detection
(S-CCD)’ using a dense Landsat data time series.

The quantitative accuracy assessment is evaluated based on 3782 Landsat-based disturbance reference plots (30
m) from a probability sample distributed throughout the Conterminous United States.

Validation results show that the overall accuracy (best F1 score) of S-CCD is 0.793, with 20% omission error
and 21% commission error.

Continuous Change Detection and Classification of Land Cover Using All Available Landsat Data

A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available
Landsat data is developed. It is capable of detecting many kinds of land cover change continuously as new
images are collected and providing land cover maps for any given time.

We applied the CCDC algorithm to one Landsat scene in New England (WRS Path 12 and Row 31). All
available (a total of 519) Landsat images acquired between 1982 and 2011 were used.
The accuracy assessment shows that CCDC results were accurate for detecting land surface change, with a
producer's accuracy of 98%.


Continuous monitoring of forest disturbance using all available Landsat imagery

The Continuous Monitoring of Forest Disturbance Algorithm (CMFDA) flags forest disturbance by
differencing the predicted and observed Landsat images.
Two algorithms, 1) single-date differencing and 2) multi-date differencing, were tested for detecting
forest disturbance at a Savannah River site. The map derived from the multi-date differencing algorithm
was chosen as the final CMFDA result. It determines a disturbance pixel by the number of times
“change” is observed consecutively.
The accuracy assessment shows that CMFDA results were accurate for detecting forest disturbance, with both
producer’s and user’s accuracies higher than 95% in the spatial domain.

Land use/land cover change detection and prediction in the north-western coastal desert of
Egypt using Markov-CA

We studied land use/land cover (LULC) changes in part of the northwestern desert of Egypt
and used the integrated Markov-CA approach to predict future changes. We mapped the LULC
distribution of the desert landscape for 1988, 1999, and 2011. Landsat Thematic Mapper 5 data and
ancillary data were classified using the random forests approach.

The use of a spatially explicit land use change modeling approach, such as the Markov-CA approach,
provides a way to project different future scenarios. Markov-CA was used to predict land use
change in 2011.

The technique produced LULC maps with an overall accuracy of more than 90%.


CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

There has been a lot of work in object detection using traditional computer vision
techniques. However, these lack the accuracy of deep learning based techniques.

In the current systems, the development of remote sensing technology has been followed by the
development of digital image analysis and classification. The challenge in addressing this task is
to duplicate human abilities in image understanding, so that the computer can
recognize an object the way a human does. Deep learning algorithms have seen a massive rise in
popularity for remote sensing image analysis over the past few years [5, 6]. Deep learning has
been employed for satellite image analysis including object classification, object detection, and land
use and land cover (LULC) classification.


3.2 PROPOSED SYSTEM
The proposed system is a fast and accurate method designed to improve performance, so that the
application can be implemented as an optimally performing system. The deep learning
algorithm that has been widely adopted is the Convolutional Neural Network (CNN). This algorithm
was designed to imitate the image recognition system of the human visual cortex. Our approach is to
build a deforestation detection system on this basis.

One potential technology that could be employed to address this problem is a
classification method based on deep learning and satellite imagery data. The usage of satellite
imagery data enables faster data acquisition and wider data coverage compared with the field
observation approach. Several studies have proposed CNNs for land cover classification; to the best
of our knowledge, little has been said about the use of spectral satellite imagery, including spectral
features, spectral indices and spatial features together as input features. Therefore, the objective of this
study was to develop a classification method based on CNN and Sentinel satellite imagery to address
the forest monitoring problem.
Deforestation detection has 2 main stages:
 image detection
 image recognition

The image taken from the satellite used for this system is first localized; this localization is
known as the detection step.
CNN refers to a neural network model which has been particularly used to process grid-structured
data, namely two-dimensional images. CNN is a development of the multilayer perceptron (MLP) and
it is designed to process two-dimensional data in image form [2]. CNN is one of the most
broadly used deep learning models.
This model is suitable for processing multiband remote sensing image data where pixels are
organized regularly. CNN consists of three different types of hierarchical structures:
 namely convolution layers,
 pooling layers,
 and fully connected layers.
In each layer, the input image is convolved with a set of K kernels W = {W_1, W_2, ..., W_K} and added
biases b = {b_1, ..., b_K}, each kernel generating a new feature map X_k.


These feature maps are then subjected to an element-wise nonlinear transform sigma(.), i.e.
X_k = sigma(W_k * X + b_k), and the same process is repeated for every convolutional layer l.
This approach achieved greater accuracy. Further exploration of feature extraction and selection could be
employed to increase the accuracy.
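
A minimal numerical sketch of this per-layer operation, X_k = sigma(W_k * X + b_k), is shown below using
SciPy's 2-D convolution with ReLU as the nonlinearity; the input band, the kernel values, and K = 4 are
illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d

X = np.random.rand(8, 8)                                # one input band (assumed size)
kernels = [np.random.randn(3, 3) for _ in range(4)]     # K = 4 kernels W_1 .. W_K
biases = np.zeros(4)                                    # biases b_1 .. b_K

feature_maps = [
    np.maximum(convolve2d(X, W, mode="same") + b, 0)    # ReLU as the element-wise nonlinearity
    for W, b in zip(kernels, biases)
]
print(len(feature_maps), feature_maps[0].shape)         # 4 feature maps, each 8x8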


CHAPTER 4

SYSTEM DESIGN

The design of the system deals with how the system is developed. It explains the flow of
functionalities in brief. This section contains the system architecture, flowchart, sequence
diagram, and use case diagram described below.
4.1 System Architecture
Fig 4.1: System Architecture

The system architecture follows a simple pipeline: input satellite image → pre-processing →
visualization of data → model training → predictions → performance evaluation.


4.2 Flowchart:
A flowchart is a type of diagram that represents an algorithm, workflow or process.
Flowchart can also be defined as a diagrammatic representation of an algorithm (step by step
approach to solve a task).

The flowchart proceeds as follows: start → input satellite image → transform the image, pre-process
it, and perform the final classification → if a change is detected in the image, deforestation is
detected; otherwise the process ends.

Fig 4.2: Flowchart of Deforestation Detection

The input to our algorithm is a satellite image. We then use a CNN to output multiple
predicted labels. We investigated using a pretrained convolutional neural network (CNN)
to detect and track changes of forest. The label set includes 17 specific
labels in total; among them are Annual Crop, Forest,
Herbaceous Vegetation, Highway, Industrial, Pasture, Permanent Crop, Residential, River,
and Sea/Lake. ResNet50 is used; a common technique in remote sensing imagery that facilitates
the classification of land cover is the definition of various indexes.
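
A minimal sketch of a pretrained ResNet50 base with a multi-label prediction head is shown below,
assuming Keras applications are available. The 17-output head follows the label count mentioned in the
text; the input size, pooling, and training settings are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(17, activation="sigmoid")(x)     # one independent score per label
model = Model(base.input, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])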


4.3 Sequence Diagram

A sequence diagram shows object interactions arranged in time sequence. It depicts the
objects and classes involved in the scenario and the sequence of messages exchanged between
the objects needed to carry out the functionality of the scenario. Sequence diagrams are typically
associated with use case realizations in the Logical View of the system under development.
Sequence diagrams are sometimes called event diagrams or event scenarios.
A sequence diagram shows, as parallel vertical lines (lifelines), different processes or
objects that live simultaneously, and, as horizontal arrows, the messages exchanged between
them, in the order in which they occur. This allows the specification of simple runtime scenarios
in a graphical manner.

Figure 4.4 Sequence diagram


4.4 USE CASE DIAGRAM

Use cases are used during the analysis phase of a project to identify system functionality.
They separate the system into actors and use cases. Actors represent roles that are played by
users of the system. Users may be humans, other computers, or even other software systems.

Use case diagrams are used to gather the requirements of a system including internal and
external influences. These requirements are mostly design requirements. Hence, when a system
is analyzed to gather its functionalities, use cases are prepared and actors are identified.


CHAPTER 5

IMPLEMENTATION

Our approach to building this traffic rule violation model is discussed in 2 modules

 Data Collection

 Model Building
5.1 Data Collection

The EuroSAT dataset from Kaggle is used. This dataset is the first large-scale, patch-based land
use and land cover classification dataset based on Sentinel-2 satellite images, and exists thanks to the
work of Helber et al. (2018).
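
A short sketch of loading the EuroSAT RGB images for training is given below, assuming the Kaggle
archive has been extracted into a local folder named EuroSAT/ with one sub-folder per class; the path,
split, and image size are assumptions.

import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "EuroSAT",                 # hypothetical local path to the extracted dataset
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=(64, 64),       # EuroSAT patches are 64x64 pixels
    batch_size=32,
)
print(train_ds.class_names)    # e.g. AnnualCrop, Forest, HerbaceousVegetation, ...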

From the given video footage, moving objects are detected. An object detection model
YOLOv3 is used to classify those moving objects into their respective classes. YOLOv3 is the third
object detection algorithm in the YOLO (You Only Look Once) family. It improves accuracy
with many tricks and is more capable of detecting objects. The classifier model is built with
the ResNet-50 architecture. Table 1 shows how the neural network architecture is designed.

The vehicles are detected using YOLOv3 model. After detecting the vehicles, violation
cases are checked. A traffic line is drawn over the road in the preview of the given video footage
by the user. The line specifies that the traffic light is red. Violation happens if any vehicle
crosses the traffic line in red state. The detected objects have a green bounding box. If any
vehicle passes the traffic light in red state, violation happens. After detecting violation, the
bounding box around the vehicle becomes red.

OpenCV is an open source computer vision and machine learning software library
which is used in this project for image processing purpose. Tensorflow is used for implementing
the vehicle classifier.


5.1.1 Vehicle Classification


From the given video footage, moving objects are detected. An object detection model,
YOLOv3, is used to classify those moving objects into their respective classes. YOLOv3 is the third
object detection algorithm in the YOLO (You Only Look Once) family. It improves accuracy
with many tricks and is more capable of detecting objects. The classifier model is built with
the Darknet-53 architecture. Table 1 shows how the neural network architecture is designed.

Features:
1. Bounding Box Predictions:
YOLOv3 is a single network, so the loss for objectness and classification needs to be
calculated separately but from the same network. YOLOv3 predicts the objectness score using
logistic regression, where 1 means complete overlap of the bounding box prior with the ground truth
object. It predicts only 1 bounding box prior for one ground truth object, and any error in this
counts toward both the classification and the detection loss. There may also be other bounding
box priors whose objectness score is above the threshold but below the best
one. These errors count only toward the detection loss and not the classification loss.

2. Class Prediction:
YOLOv3 uses independent logistic classifiers for each class instead of a regular softmax
layer. This is done to turn the problem into a multi-label classification. Each box predicts the
classes the bounding box may contain using multi-label classification.

3. Predictions across scales:


To support detection at varying scales, YOLOv3 predicts boxes at 3 different scales. Then
features are extracted from each scale using a method similar to that of feature pyramid
networks. YOLOv3 gains the ability to better predict at varying scales using the above method.
The bounding box priors generated using dimension clusters are divided into 3 scales, so that
there are 3 bounding box priors per scale and thus a total of 9 bounding box priors.


4. Feature Extractor:
YOLOv3 uses a new network, Darknet-53. Darknet-53 has 53 convolutional layers; it is
deeper than the network used in YOLOv2 and it also has residual, or shortcut, connections. It is more
powerful than Darknet-19 and more efficient than ResNet-101 or ResNet-152.

5.1.2 INITIAL USER INTERFACE VIEW

Primarily, to start using the project, the administrator needs to open a video
footage using the ‘Open’ item that can be found under ‘File’. The administrator can open any video
footage from the storage files.

5.1.3 OPENING VIDEO FOOTAGE FROM STORAGE

After opening a video footage from storage, the system will show a preview of the footage.
The preview contains a frame from the given video footage. The preview is used to identify
roads and draw a traffic line over the road. The traffic line drawn by the administrator will act as a
traffic signal line. To enable the line drawing feature, we need to select the ‘Region of interest’ item
from the ‘Analyze’ option. After that, the administrator will need to select two points to draw a line
that specifies the traffic signal.

5.1.4 REGION OF INTEREST

Selecting the region of interest will start the violation detection system. The coordinates of
the line drawn will be shown on the console. The violation detection system will start immediately
after the line is drawn. At first the weights will be loaded. Then the system will detect objects
and check for violations.

Fig 5.1: Line coordinates.



5.1.5 YOLO V3 MODEL

The “You Only Look Once”, or YOLO, family of models is a series of end-to-end deep
learning models designed for fast object detection. YOLO is a single network trained end to end
to perform a regression task predicting both the object bounding box and the object class. This network
is extremely fast; it processes images in real time at 45 frames per second. A smaller version of
the network, Fast YOLO, processes an astounding 155 frames per second.

First, it divides the image into a 13×13 grid of cells. The size of these 169 cells varies
depending on the size of the input. For the 416×416 input size that we used in our experiments, the
cell size was 32×32. Each cell is then responsible for predicting a number of boxes in the image.

For each bounding box, the network also predicts the confidence that the bounding box
actually encloses an object, and the probability of the enclosed object being a particular class.

• The vehicles are detected using the YOLO v3 model, as sketched below.

• YOLO can detect objects belonging to the classes present in the dataset used to
train the network.
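A minimal sketch of running YOLOv3 on one frame with OpenCV's dnn module is given below; the config/weights file names, the placeholder frame path, the class-confidence threshold of 0.5, and the NMS threshold of 0.4 are assumptions for illustration, not necessarily the project's exact settings.

import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')   # assumed file names
output_layers = net.getUnconnectedOutLayersNames()

frame = cv2.imread('frame.jpg')                                    # placeholder frame
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(output_layers)

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = detection[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression keeps only the best box for each detected object.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)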


5.1.6 CREATING GUI USING TKINTER

Tkinter is the standard GUI library for Python. Combined with Tkinter, Python provides a
fast and easy way to create GUI applications, since Tkinter exposes a powerful object-oriented
interface to the Tk GUI toolkit.

Creating a GUI application using Tkinter is an easy task. All you need to do is perform the
following steps:

• Import the Tkinter module.

• Create the GUI application main window.

• Add one or more of the above-mentioned widgets to the GUI application.

• Enter the main event loop to take action against each event triggered by the user.

The main window of this project is created as follows (the icon path is machine-specific):

from tkinter import Tk, Label

top = Tk()
label = Label(top, background='cyan', font=('algerian', 25, 'bold'))
label.pack()

top.geometry('1500x800')
top.title('Traffic rule violation')
top.iconbitmap("C:\\Users\\Bilva Prasad\\Downloads\\cit.ico")  # machine-specific icon path
top.mainloop()

5.2 NUMBER PLATE DETECTION

It has three main stages:

• License plate detection

• Character segmentation

• Character recognition


5.2.1 LICENSE PLATE DETECTION

The image taken from the web camera used by this system is first localized; this
localization can be done using an edge detection technique.

After that, the plate region has to be extracted from the scene or image. Plate region extraction is
the first stage of this algorithm. The image captured from the camera is first converted to a binary
image consisting only of 1s and 0s (black and white) by thresholding the pixel values: pixels with
luminance less than the threshold become 0 (black) and all other pixels become 1 (white).
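A minimal thresholding sketch, assuming gray holds the grayscale image (obtained as described in the next subsection) and using 127 as an illustrative threshold value:

import cv2

# Pixels with luminance below 127 become 0 (black); all others become 255 (white).
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)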

Fig 5.2: License plate detection pipeline (input image → BGR to gray → blur → binarization → segmentation)

• BGR to GRAY Conversion:

We need to import the cv2 module, which provides the functionality needed to read the
original image and convert it to gray scale.

import cv2
To read the original image, simply call the imread function of the cv2 module, passing as input
the path to the image, as a string.

from tkinter import filedialog   # used to browse for the image file

image_path = filedialog.askopenfilename()
image = cv2.imread(image_path)


Next, we need to convert the image to gray scale. To do that, we call the cvtColor function,
which converts the image from one color space to another.

As its first input, this function receives the original image; as its second input, it receives the
color space conversion code. Since we want to convert our original image from the BGR color
space to gray, we use the code COLOR_BGR2GRAY.

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

Now, to display the images, we simply call the imshow function of the cv2 module. It
receives as its first argument a string with the name to assign to the window, and as its second
argument the image to show.
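For example, assuming image and gray from the snippets above (the window names are arbitrary):

cv2.imshow('Original image', image)
cv2.imshow('Gray image', gray)
cv2.waitKey(0)            # wait for a key press before closing the windows
cv2.destroyAllWindows()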

• BILATERAL FILTER

A bilateral filter smooths an image and reduces noise while preserving edges. OpenCV
provides a function called bilateralFilter() with the following arguments:
o d: diameter of each pixel neighbourhood.
o sigmaColor: value of sigma in the color space. The greater the value, the more that
colors farther from each other start to get mixed.
o sigmaSpace: value of sigma in the coordinate space. The greater its value, the more
distant pixels mix together, provided their colors lie within the sigmaColor range.

image_path = filedialog.askopenfilename()
image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray, 11, 90, 90)
plot_images(gray, blur)   # plot_images is a project helper that shows the two images side by side

• CANNY EDGE DETECTION

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to
detect a wide range of edges in images.


The main stages are:

1. Filtering out noise using a Gaussian blur.

2. Finding the strength and direction of edges using Sobel filters.

3. Isolating the strongest edges and thinning them to one-pixel-wide lines by applying
non-maximum suppression.

4. Using hysteresis to isolate the best edges.

image_path = filedialog.askopenfilename()
image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray, 11, 90, 90)
edges = cv2.Canny(blur, 30, 200)

We pass the image to the cv2.Canny() function, which finds edges in the input image and
marks them in the output map edges using the Canny algorithm.

The smaller of threshold1 and threshold2 is used for edge linking, while the larger value is used
to find the initial segments of strong edges.

5.2.2 CHARACTER SEGMENTATION

In character segmentation, the number plate is split into its constituent parts so that the
characters are obtained individually. First, the image is filtered to enhance it and remove noise
and unwanted spots. Then a dilation operation is applied to separate characters that are close to
each other. After these operations there are two options for segmenting the characters: either use
the regionprops() function to get the bounding box of each character and crop it, or
simply take the extreme rows and columns of each character and crop it. Each cropped character
is then resized and stored in a row matrix.

Segmentation is one of the most important steps in the automatic identification of license
plates, because every subsequent step depends on it. If segmentation fails, the recognition phase
will not be correct, so preliminary processing has to be performed to ensure proper segmentation.
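The following is a minimal contour-based sketch of this idea (assuming OpenCV 4 and that plate already holds the binarized plate image; the size-filter values are illustrative, and the project may instead use regionprops() as noted above):

import cv2

contours, _ = cv2.findContours(plate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
characters = []
for cnt in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):   # left to right
    x, y, w, h = cv2.boundingRect(cnt)
    if h > 10 and w > 2:                      # crude size filter to drop noise blobs
        char = plate[y:y + h, x:x + w]
        characters.append(cv2.resize(char, (20, 20)))   # normalize to 20x20 for recognition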

Fig 5.3: Character segmentation

5.2.3 CHARACTER RECOGNITION

After segmentation, the last step is character recognition, which takes the output of the
segmentation process as its input. The matrix of segmented characters is fed to the neural
network, which processes it and returns the result as text. Before the recognition algorithm runs,
the characters are normalized: each character is refined into a block containing no extra white
pixels on any of its four sides.

To match characters against the database, the input images must be the same size as the
database characters; here each character is resized to 20 × 20 pixels, so the characters cut from
the plate and the characters in the database are equal-sized. Because some characters look
similar, errors can occur during recognition; the commonly confused characters are B and 8, E
and F, D and O, S and 5, and Z and 2. To increase the recognition rate, the system applies
additional criteria tests for these confused characters based on their distinguishing features.


With these character features and the applied tests, the recognition rate is increased and errors
are minimized.
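As a minimal illustration of the matching step, the sketch below compares a segmented character against stored 20×20 templates; the templates dictionary (mapping each character label to a grayscale template) is an assumed structure, and the project's neural-network classifier would replace this simple template matcher.

import cv2

def recognize_character(char_img, templates):
    # Compare the normalized 20x20 character against every stored template and
    # return the label with the highest normalized correlation score.
    char_img = cv2.resize(char_img, (20, 20))
    scores = {label: cv2.matchTemplate(char_img, template, cv2.TM_CCOEFF_NORMED).max()
              for label, template in templates.items()}
    return max(scores, key=scores.get)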


CHAPTER 6

SYSTEM TESTING

Traffic signal violation detection systems are effective tools that help traffic administrators
monitor traffic conditions. They can detect violations such as running red lights and speeding in
real time.

Input          Expected output    Obtained output    Pass or Fail
1.  IMG_42     BAI-7690           BAI-7690           Partial
2.  IMG_761    DoD-8863           DoD-8863           Pass
3.  IMG_711    AUY-1857           AUY-1857           Partial
4.  IMG_316    AHX-7300           AHX-7300           Partial
5.  IMG_140    AWP-1117           AWP-117            Fail
6.  IMG_104    AJO-8004           AJO-8004           Partial
7.  IMG_870    AYE-3368           AYE-3368           Pass
8.  IMG_104    AJO-8004           AJO-8004           Pass
9.  IMG_751    AYO-1952           AYO-1952           Partial
10. IMG_838    EOX-3445           EOX-3445           Pass

• If all of the characters are detected and recognized, the result is considered a pass.

• If the characters are only partially detected and recognized, the result is considered
partial.

• If the characters fail to be detected and recognized, the result is considered a fail.


CHAPTER 7

RESULTS


CONCLUSION

This study evaluated, for the first time, the potential of freely available dense Landsat
Time Series (LTS) data for deforestation detection in tropical rainforests of Kalimantan,
Indonesia, at sub-annual time scales. Firstly, regarding the data, the cloud-free observation
density provided by combined LTS data from the TM, ETM+, and OLI sensors indicated that
sub-annual deforestation mapping and monitoring is feasible for the Kalimantan mega-island
despite the region's persistent cloud cover. Secondly, regarding deforestation detection, the pilot
validation indicated a promising spatial accuracy. The presented methodology is therefore
promising for use cases in which high spatial accuracy is a priority, such as improving the quality
of the Activity Data needed for calculating emissions (and removals) from land use, land-use
change, and forestry (LULUCF) under UNFCCC requirements, in the context of REDD+ at a
subnational level. The promising spatial accuracy motivates further studies to collect a larger
reference sample, representative of the larger Kalimantan mega-island forest area, in order to
evaluate the robustness of the presented methodology for producing large-area estimates of
deforested area as well as for large-area deforestation monitoring. Such a large-scale assessment
would only be feasible with the help of crowdsourcing activities for collecting the reference data
through visual interpretation of LTS and VHSR imagery. At the same time, reference data on the
remnant clouds in the satellite images can be built up to further improve the cloud-masking
algorithm and hence reduce noise in the LTS. Further work is also needed to assess the
performance of the proposed methodology in detecting forest degradation, i.e., non-stand-
replacing forest disturbances, for which sizable reference data were not available in this study.
Detecting various types of forest disturbances, with their different ranges of spectral change
magnitude, would likely benefit from the use of multiple spectral variables (indices and
individual spectral bands) beyond the NDMI used primarily for deforestation detection in this
study. The achieved temporal accuracy (a median temporal lag of 40–112 days) was deemed not
adequate for near-real-time monitoring. Looking forward, however, the freely available data
stream from the now fully operational Sentinel-2A and Sentinel-2B can be expected to improve
the cloud-free observation density, and thus the temporal accuracy of deforestation detection.


Finally, beyond deforestation mapping and monitoring, the results indicated the potential of
utilizing the spectral-temporal features in the dense LTS data to automate the task of mapping
natural forests and plantations. Results from this are expected to be useful not only for the many
stakeholders interested in forest management and palm oil production, but also for the local
communities affected by the consequences of deforestation.


