
Computers and Electrical Engineering 102 (2022) 108197


Agriculture monitoring system based on internet of things by deep learning feature fusion with classification
K. Sita Kumari a, S.L. Abdul Haleem b,*, G. Shivaprakash c, M. Saravanan d, B. Arunsundar e, Thandava Krishna Sai Pandraju f

a Department of Information Technology, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
b Department of Information and Communication Technology, Faculty of Technology, South Eastern University of Sri Lanka, Oluvil, Sri Lanka
c Department of Electronics and Instrumentation Engineering, Ramaiah Institute of Technology, Bengaluru, India
d Department of Computer Science and Engineering, KPR Institute of Engineering and Technology, Coimbatore, India
e Department of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamilnadu, India
f Department of EEE, Dhanekula Institute of Engineering and Technology, Vijayawada, Ganguru, India

Keywords: UAV, Crop monitoring system, IoT, Live data, Classification, Machine learning

Abstract: This research proposes a novel technique for a crop monitoring system using machine learning-based classification with UAVs. UAVs extend the freedom of operation to monitor and operate activities from remote locations, and exploiting UAV prospects is significant for smart farming. On the other hand, the cost and convenience of UAVs for smart farming may be a major factor in farmers' decisions to adopt them. An IoT-based module is used to update the database with monitored data, so that live data are updated promptly and can help in crop cultivation identification. The research also monitors climatic conditions using live satellite data. The collected data are classified to detect crop abnormality based on climatic conditions and historical cultivation data for the field; the monitoring system also differentiates weeds from crops. Simulation results show the accuracy, precision, and specificity obtained on the trained data when detecting crop abnormality.

1. Introduction

Crops necessitate the use of herbicides as a strategy for preserving and securing crop quality and quantity. Because weeds are frequently distributed geographically in patches, herbicides are usually broadcast over entire fields, even if there are weed-free sections [1]. Overuse of pesticides, on the other hand, poses clear economic and environmental hazards, prompting the adoption of European legislation on the sustainable use of pesticides, which includes guidelines for reducing these substances [2]. In this context, patch spraying has made it possible to apply site-specific weed management (SSWM) based on weed coverage. Remote sensing has been shown to greatly enhance the reliability of SSWM, provided that the equipment's spatial and spectral resolutions are sufficient for identifying spectral reflectance changes [3]. However, in the early phases of growth, crop and weed appearances are remarkably similar. To address this, past research has used piloted aircraft or QuickBird satellite data to map weeds at late stages of growth (e.g., flowering).

This paper is for special section VSI-sacs. Reviews were processed by Guest Editor Dr. Antonio Zuorro and recommended for publication.
* Corresponding author.
E-mail address: [email protected] (S.L.A. Haleem).

https://doi.org/10.1016/j.compeleceng.2022.108197
Received 21 January 2022; Received in revised form 19 June 2022; Accepted 21 June 2022
Available online 16 July 2022
0045-7906/© 2022 Published by Elsevier Ltd.
Owing to their limited spatial resolution, these technologies cannot be used for early detection. UAVs have been demonstrated in various studies to have advantages over airborne or satellite missions, such as lower costs, greater flexibility in flight scheduling, and the capability to acquire ultra-high spatial resolution images. UAVs are thus a prospective tool for multi-temporal research in early crop and weed mapping [4], which addresses one of the conventional constraints of remote sensing technologies. Image analysis and machine learning are increasingly used in recent works on precision agriculture, yet the area remains largely undeveloped. In this regard, employing manually defined rules to construct a weed management strategy with UAVs is a popular option. Nonetheless, new image analysis and machine learning methods are expected to advance remote sensing considerably. This type of technology has been successful with on-ground photographs, which encourages more research in this area. Proximal sensing, on the other hand, has some drawbacks that make it difficult to employ in practice (computational resource constraints because it is frequently done in real time, equipment vibration, variations in brightness, and so on) [5]. In contrast, with remote sensing an analysis can be performed before broadcasting, and it can also be effective for predicting the required amount of herbicide and optimizing the field path that the broadcasting equipment should take. The most common challenges with UAVs have now largely been overcome, and the cost of this technique is now acceptable, making it ready for implementation. Various studies have examined the benefits and feasibility of this approach, besides offering novel techniques for weed monitoring that have been tested in diverse experimental setups. This method has shown significant potential in detecting weeds between crop rows, but differentiating weeds within crop rows remains a difficult task. The main distinction between this method and the rest of the literature is that we integrate a variety of machine learning algorithms [6].
The contributions of this research are as follows:

• To propose a crop monitoring system using machine learning-based classification with UAVs.
• To monitor crops in remote areas where cultivation is below average, and also to oversee the climatic conditions of the region.
• To design an IoT-based module that updates monitored data to the database, so that live data are updated promptly and can help in identifying crop cultivation in the field. Live satellite data are also used for monitoring climatic conditions.

The paper is organized as follows: related works are described in Section 2, the proposed methodology is detailed in Section 3, Section 4 presents the experimental analysis, and Section 5 concludes the paper.

2. Related works

The majority of recent UAV research has focused on vertical applications, ignoring the issues that UAVs face inside vertical domains
as well as across application areas. Furthermore, many studies do not address practical solutions to problems that have the potential to
impact numerous application areas.
From a communication and networking standpoint, the authors in [7] discuss the characteristics and requirements of UAV networks for envisioned civil applications over the period 2000–2015. Finally, they offer experimental findings from a range of initiatives and look into the appropriateness of current communications technology for supporting reliable aerial networks. The authors
of [8] seek to concentrate their research on routing, smooth handover, and energy efficiency. In [9], the authors look at FANETs, which
are ad-hoc networks that connect UAVs. They begin by distinguishing between FANETs, MANETs, and VANETs. Then they go over
major FANET model problems as well as open research questions. The authors introduce basic networking architecture and essential
channel features in [10], which provides an overview of UAV-aided wireless communications. They also highlight significant design
considerations and fresh possibilities to investigate [11]. Authors of [12] look at applications that use cooperative swarms of UAVs to
act as a distributed processing system. They categorize distributed processing applications as follows: object detection, tracking,
general-purpose distributed processing applications, data gathering, path planning, navigation, collision avoidance, coordination, and
environmental monitoring. However, this assessment ignores the obstacles that UAVs face in these applications, as well as the potential significance of emerging technology in UAV applications. The authors of [13] present a thorough examination of UAVs,
emphasizing their potential for delivering IoT services from the skies. They outline their proposed UAV-based structure as well as
significant obstacles and requirements that come with it. Despite its enormous potential, UAV imaging faces several practical chal­
lenges. When traditional pixel-based techniques are used for classification, the ultra-high spatial resolution of UAV images frequently creates noise effects due to the enhanced observable targets [14]. Using spatial contextual data or applying an object-oriented classification strategy is a typical way to reduce these noise effects. In the spatial contextual approach, texture data are taken from a GLCM (gray-level co-occurrence matrix) [15] and paired with spectral data for classification. The use of such texture data can help mitigate the effects of isolated pixels in pixel-based approaches. Using multi-resolution segmentation [16], the object-oriented technique first recovers meaningful objects, after which classification is performed on object units. These two methods are known to deliver higher classification accuracy than the pixel-based method based solely on spectral data [17]. A second problem is that data pre-treatment and processing impose a substantial computational cost. Most UAV images are captured with a small field of view, necessitating mosaicking of numerous sub-images to create a complete image set. Radiometric calibration should be applied during mosaicking if the sub-images are captured under varying solar conditions and flight altitudes. The ultra-high spatial resolution of UAV photos makes pre-processing difficult and classification time-consuming [18]. Another significant challenge is that constructing a time-series UAV image set for crop categorization is not always viable. Although UAV photographs are less affected by atmospheric conditions than satellite images, taking UAV images during certain seasons [19], mainly the rainy season, which corresponds with the growing season of crops in Korea, may be difficult.


Fig. 1. Proposed architecture.

From a practical standpoint, collecting time-series UAV photos for crop categorization necessitates that operators visit the region of interest multiple times. In practice, it is essential to gather suitable images at specific moments to achieve classification accuracy similar to that of a complete time-series image set. Crop categorization using UAV pictures is generally done with a single image [20], but accuracy comparisons with a time-series image set have yet to be properly examined. To obtain trustworthy crop classification findings, in addition to data collection concerns, a correct classification approach must be used. Machine learning methods such as RF (random forest) and SVM (support vector machine) have been routinely used to classify crops from remote sensing data since the 2000s [21].
Deep models generally bring higher computing, memory, and network requirements; hence cloud computing is a common solution to increase efficiency with high scalability and low cost, but at the expense of high latency and pressure on network bandwidth. The emergence of edge computing brings computation to the edge of the network, close to the data sources. AI combined with edge computing further yields edge intelligence, providing a promising solution for efficient intelligent UAV remote sensing applications. In terms of hardware, typical computing solutions include CPUs, GPUs, and FPGAs. From the algorithmic perspective, lightweight model design derived from model compression techniques, especially model pruning and quantization, is one of the most significant and widely used approaches.

3. Methodology

Recent technology advancements, such as the utilization of UAVs and deep learning, make this monitoring approach possible. The UAV and IoT systems are highly effective for gathering real-time data from sensors and satellite sources. The collected data are fed to a trained deep learning system, such as an ANN, for prediction, and the outcome is extremely useful in determining the best crop to plant in the given field. This section describes the UAV design, IoT design, pre-processing phase, feature extraction, and classification of abnormal data. The proposed architecture is given in Fig. 1.
The data are first pre-processed for resizing, noise removal, and data cleaning. The data are then segmented with image enhancement, edge normalization, and smoothing. The segmented image is passed through a pre-trained CNN for feature extraction, through which crop abnormality is detected. Once a deformity in the input data is identified, that data is classified to predict the stage of crop abnormality; here, fast convolutional neural network (FCNN)-based classification is used. The following sections briefly discuss feature extraction and data classification.
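The processing chain described above (pre-processing, segmentation, CNN feature extraction, classification) can be sketched as the simple composition below. The individual steps are deliberately naive stand-ins (nearest-neighbour resize, mean-filter denoising, mean thresholding, random "CNN" features), shown only to make the data flow explicit; the real system would plug in its trained DRNN/FCNN models.

```python
import numpy as np

def preprocess(img, size=(224, 224)):
    """Resize (nearest-neighbour) and denoise (3x3 mean filter) a grayscale image."""
    rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    resized = img[np.ix_(rows, cols)].astype(float)
    padded = np.pad(resized, 1, mode="edge")
    return sum(padded[i:i + size[0], j:j + size[1]]
               for i in range(3) for j in range(3)) / 9.0

def segment(img):
    """Crude foreground mask: pixels brighter than the image mean."""
    return (img > img.mean()).astype(float)

def extract_features(mask, rng):
    """Placeholder for the pre-trained CNN feature extractor."""
    pooled = mask.reshape(56, 4, 56, 4).mean(axis=(1, 3)).ravel()    # 56x56 summary
    return pooled @ rng.standard_normal((pooled.size, 64))           # 64-D embedding

def classify(features, weights, bias):
    """Placeholder for the FCNN classifier: returns class probabilities."""
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((480, 640))                    # stand-in UAV frame
    feats = extract_features(segment(preprocess(frame)), rng)
    probs = classify(feats, rng.standard_normal((64, 3)), np.zeros(3))
    print("P(normal, abnormal, weed) =", np.round(probs, 3))
```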

3.1. DRNN based feature fusion

One type of machine learning is reinforcement learning (RL); the most familiar type is supervised learning. In supervised learning, algorithms are trained to produce outputs that closely resemble the labels in the training set. Using a simple example, the key ingredients of the Markov Decision Process and the DRNN are described below. When the probability of moving to the next state $S_{t+1}$ depends only on the present state $S_t$, and not on the previous states $S_1, S_2, \ldots, S_{t-1}$, the state sequence is Markov, as shown in Eq. (1):

$$P[S_{t+1} \mid S_t] = P[S_{t+1} \mid S_1, S_2, \ldots, S_t] \tag{1}$$
In RL, we usually consider a time-homogeneous Markov chain, in which the transition probability is independent of the time t, as in Eq. (2):

$$P[S_{t+1} = s' \mid S_t = s] = P[S_t = s' \mid S_{t-1} = s]$$

$$\begin{aligned}
v_\pi(s) &= \mathbb{E}_\pi[G_t \mid S_t = s] \\
&= \mathbb{E}_\pi\big[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s\big] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s] \\
&= \mathbb{E}_\pi[R_{t+1} \mid S_t = s] + \mathbb{E}_\pi[\gamma v_\pi(S_{t+1}) \mid S_t = s]
\end{aligned} \tag{2}$$
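As a small illustration of Eq. (2), the sketch below computes the discounted return $G_t$ for a reward sequence and checks the recursive form $G_t = R_{t+1} + \gamma G_{t+1}$; the reward values and the discount factor are arbitrary assumptions.

```python
# Discounted return per Eq. (2): G_t = R_{t+1} + gamma * R_{t+2} + gamma^2 * R_{t+3} + ...
def discounted_return(rewards, gamma=0.9):
    g = 0.0
    for r in reversed(rewards):      # work backwards: G_t = r + gamma * G_{t+1}
        g = r + gamma * g
    return g

rewards = [1.0, 0.0, 2.0, 5.0]       # R_{t+1}, R_{t+2}, ... (illustrative values)
direct = sum(r * 0.9 ** k for k, r in enumerate(rewards))
print(discounted_return(rewards), direct)   # both print (approximately) the same value
```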

The energy function E in an RBM has the general form with weight matrix W as in Eq. (3), where v and h denote a pair of visible and hidden vectors:

$$E(v, h) = -a^T v - b^T h - v^T W h \tag{3}$$
Bias weights for visible and hidden units are denoted by a and b, respectively. Eq. (4) gives the probability distribution P over v and h in terms of E:

$$P(v, h) = \frac{1}{Z} e^{-E(v,h)} \tag{4}$$
Here the normalizing constant Z is given by Eq. (5):

$$Z = \sum_{v', h'} e^{-E(v', h')} \tag{5}$$

Furthermore, Eq. (6) gives the probability of v as the sum over the hidden units:

$$P(v) = \frac{1}{Z} \sum_{h} e^{-E(v,h)} \tag{6}$$

Eq. (7) gives the derivative of the log-likelihood of the training data with respect to W:

$$\sum_{n=1}^{N} \frac{\partial \log P(v_n)}{\partial W_{ij}} = \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}} \tag{7}$$
The expectations under the data and the model distribution are represented by $\langle v_i h_j \rangle_{\text{data}}$ and $\langle v_i h_j \rangle_{\text{model}}$, respectively. The learning rate $\varepsilon$ is used to update the network weights during log-likelihood training, as shown in Eq. (8):

$$\Delta W_{ij} = \varepsilon \big( \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}} \big) \tag{8}$$

Because neurons are not coupled within the hidden or visible layers, unbiased samples can be obtained for $\langle v_i h_j \rangle_{\text{data}}$. Furthermore, for a given h or v, the activations of the hidden or visible units are conditionally independent. Eq. (9) describes the conditional probability of h for a given v:

$$P(h \mid v) = \prod_j P(h_j \mid v) \tag{9}$$

where $h_j \in \{0, 1\}$ and the probability of $h_j = 1$ is given in Eq. (11):

$$P(h_j = 1 \mid v) = \sigma\Big(b_j + \sum_i v_i W_{ij}\Big) \tag{11}$$

The logistic function σ is specified as in Eq. (12):

$$\sigma(x) = (1 + e^{-x})^{-1} \tag{12}$$
Likewise, the conditional probability of $v_i = 1$ for a given h is evaluated by Eq. (13):

$$P(v_i = 1 \mid h) = \sigma\Big(a_i + \sum_j W_{ij} h_j\Big) \tag{13}$$

In general, unbiased sampling of $\langle v_i h_j \rangle_{\text{model}}$ is not straightforward, but it can be approximated by reconstruction. Every unit of the hidden and visible layers is updated in parallel using Gibbs sampling. Finally, the sample is computed from $\langle v_i h_j \rangle$ as the product of the expected and updated values of h and v. The feed-forward neural network (FFNN) is then initialized with the RBM weights.
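A hedged sketch of the contrastive-divergence style update implied by Eqs. (4)–(13) is given below: the hidden probabilities follow Eq. (11), the visible reconstruction follows Eq. (13), and the weight update follows Eq. (8). The data, layer sizes, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):                       # Eq. (12)
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, eps=0.1):
    """One CD-1 update of an RBM on a batch of binary visible vectors v0."""
    ph0 = sigmoid(v0 @ W + b)                         # Eq. (11): P(h=1|v)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # Gibbs sample of hidden units
    pv1 = sigmoid(h0 @ W.T + a)                       # Eq. (13): P(v=1|h), reconstruction
    ph1 = sigmoid(pv1 @ W + b)                        # hidden probabilities of the reconstruction
    pos = v0.T @ ph0                                  # <v_i h_j>_data
    neg = pv1.T @ ph1                                 # <v_i h_j>_model (1-step approximation)
    W += eps * (pos - neg) / v0.shape[0]              # Eq. (8)
    a += eps * (v0 - pv1).mean(axis=0)
    b += eps * (ph0 - ph1).mean(axis=0)
    return W, a, b

# toy run: 16 visible units, 8 hidden units, batch of 32 binary vectors
W = 0.01 * rng.standard_normal((16, 8))
a, b = np.zeros(16), np.zeros(8)
batch = (rng.random((32, 16)) < 0.3).astype(float)
for _ in range(100):
    W, a, b = cd1_step(batch, W, a, b)
recon = sigmoid(sigmoid(batch @ W + b) @ W.T + a)
print("reconstruction error:", np.mean((batch - recon) ** 2))
```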

3.2. Fast convolutional neural network based classification

A 224 × 224 × 3 image is used as the neural network's input. The filters are 3 × 3 matrices, each applied with a stride of 1. The padding size is always 1, and max-pooling is performed over a 2 × 2 pixel window. A CNN is a stack of convolutional and pooling layers that extract the key features from images that best match the final goal. The convolution product on a volume is defined by first defining the convolution product on a 2D matrix, which is a sum of element-wise products. In general, an image is defined mathematically as a tensor with dimensions stated in Eq. (14):

$$\dim(\text{image}) = (n_H, n_W, n_C) \tag{14}$$

where $n_H$ is the height, $n_W$ is the width, and $n_C$ is the number of channels.


For example, $n_C = 3$ for an RGB image (red, green, and blue channels). When using the convolution product, the filter/kernel K must have the same number of channels as the image; this allows us to apply a separate filter to each channel. The filter dimension is given by Eq. (15):

$$\dim(\text{filter}) = (f, f, n_C) \tag{15}$$

For a given image I and filter K, the convolution is given by Eq. (16):

$$\text{conv}(I, K)_{x,y} = \sum_{i=1}^{n_H} \sum_{j=1}^{n_W} \sum_{k=1}^{n_C} K_{i,j,k} \, I_{x+i-1,\, y+j-1,\, k} \tag{16}$$

Keeping the same notation as before, Eq. (17) gives the output dimensions:

$$\dim(\text{conv}(I, K)) = \left( \left\lfloor \frac{n_H + 2p - f}{s} \right\rfloor + 1,\ \left\lfloor \frac{n_W + 2p - f}{s} \right\rfloor + 1 \right),\quad s > 0$$
$$\dim(\text{conv}(I, K)) = (n_H + 2p - f,\ n_W + 2p - f),\quad s = 0 \tag{17}$$

where $\lfloor x \rfloor$ is the floor function of x.
Same convolution: output size = input size, which requires $p = \frac{f-1}{2}$.
1 × 1 convolutions (f = 1) can be useful to shrink the number of channels $n_C$ without changing the spatial dimensions $n_H, n_W$. In the illustrations the filter is filled with fixed numbers for the sake of explanation, but in a CNN the $f \times f \times n_C$ filter parameters are learned through backpropagation.
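A quick numeric check of Eq. (17): the helper below computes the convolution output size, and with the settings stated at the start of this section (224 × 224 input, f = 3, p = 1, s = 1) it confirms that a "same" convolution preserves the spatial size. The stride-2 call is an extra illustration, not a setting from the paper.

```python
def conv_output_size(n, f, p, s):
    """Spatial output size per Eq. (17): floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# 224x224 input, 3x3 filter, padding 1, stride 1 -> stays 224 ("same" convolution)
print(conv_output_size(224, f=3, p=1, s=1))   # 224
# the same input with stride 2 roughly halves the spatial size
print(conv_output_size(224, f=3, p=1, s=2))   # 112
```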

3.3. Pooling

Down-sampling is the process of combining data and reducing the size of an image's features. The operation only affects the spatial dimensions $(n_H, n_W)$ and leaves $n_C$ unchanged because it is performed within each channel. A filter is slid across the image with a defined stride, and a function is applied to the selected elements; there are no parameters to learn, as represented by Eq. (18):

$$\dim(\text{pooling}(\text{image})) = \left( \left\lfloor \frac{n_H + 2p - f}{s} \right\rfloor + 1,\ \left\lfloor \frac{n_W + 2p - f}{s} \right\rfloor + 1,\ n_C \right),\quad s > 0$$
$$\dim(\text{pooling}(\text{image})) = (n_H + 2p - f,\ n_W + 2p - f,\ n_C),\quad s = 0 \tag{18}$$

As an example, consider s = 2 and f = 2 with a square filter of size f.
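The sketch below implements that 2 × 2, stride-2 max-pooling example for a single-channel input; per Eq. (18) it reduces only the spatial dimensions. The 4 × 4 test array is arbitrary.

```python
import numpy as np

def max_pool(x, f=2, s=2):
    """Max-pooling over an (H, W) array with window f and stride s (no padding)."""
    h_out = (x.shape[0] - f) // s + 1
    w_out = (x.shape[1] - f) // s + 1
    out = np.empty((h_out, w_out), dtype=x.dtype)
    for i in range(h_out):
        for j in range(w_out):
            out[i, j] = x[i * s:i * s + f, j * s:j * s + f].max()
    return out

x = np.arange(16.0).reshape(4, 4)
print(max_pool(x))   # [[ 5.  7.] [13. 15.]] -- a 4x4 map shrinks to 2x2
```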

3.4. Convolutional layer

As previously described, the convolutional layer applies convolution products to its input, this time using many filters and an activation function ψ.
More precisely, at the l-th layer:

• Input: $a^{[l-1]}$ with size $(n_H^{[l-1]}, n_W^{[l-1]}, n_C^{[l-1]})$, where $a^{[0]}$ is the input image
• Padding: $p^{[l]}$, stride: $s^{[l]}$
• Number of filters: $n_C^{[l]}$, where every $K^{(n)}$ has dimension $(f^{[l]}, f^{[l]}, n_C^{[l-1]})$
• Bias of the n-th convolution: $b_n^{[l]}$
• $\psi^{[l]}$ is the activation function
• Output: $a^{[l]}$ with size $(n_H^{[l]}, n_W^{[l]}, n_C^{[l]})$

Eq. (19) shows:

$$\forall n \in [1, 2, \ldots, n_C^{[l]}]:$$
$$\text{conv}\big(a^{[l-1]}, K^{(n)}\big)_{x,y} = \psi^{[l]}\left( \sum_{i=1}^{n_H^{[l-1]}} \sum_{j=1}^{n_W^{[l-1]}} \sum_{k=1}^{n_C^{[l-1]}} K^{(n)}_{i,j,k}\, a^{[l-1]}_{x+i-1,\, y+j-1,\, k} + b_n^{[l]} \right) \tag{19}$$
$$\dim\big(\text{conv}\big(a^{[l-1]}, K^{(n)}\big)\big) = \big(n_H^{[l]}, n_W^{[l]}\big)$$

Thus, by Eq. (20):

$$a^{[l]} = \Big[ \psi^{[l]}\big(\text{conv}\big(a^{[l-1]}, K^{(1)}\big)\big),\ \psi^{[l]}\big(\text{conv}\big(a^{[l-1]}, K^{(2)}\big)\big),\ \ldots,\ \psi^{[l]}\big(\text{conv}\big(a^{[l-1]}, K^{(n_C^{[l]})}\big)\big) \Big] \tag{20}$$
$$\dim\big(a^{[l]}\big) = \big(n_H^{[l]}, n_W^{[l]}, n_C^{[l]}\big)$$

With Eq. (21):

$$n_{H/W}^{[l]} = \left\lfloor \frac{n_{H/W}^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} \right\rfloor + 1,\quad s > 0$$
$$n_{H/W}^{[l]} = n_{H/W}^{[l-1]} + 2p^{[l]} - f^{[l]},\quad s = 0 \tag{21}$$
$$n_C^{[l]} = \text{number of filters}$$

The learned parameters at the l-th layer are:

• Filters with $(f^{[l]} \times f^{[l]} \times n_C^{[l-1]}) \times n_C^{[l]}$ parameters
• Biases with $(1 \times 1 \times 1) \times n_C^{[l]}$ parameters

Convolution is an important analytical mathematical operation. This operator generates a third function from two functions f and g, where the third function represents the overlapping area between f and a translated and flipped g, and the calculation is given by Eq. (22):

$$z(t) \stackrel{\text{def}}{=} f(t) * g(t) = \sum_{\tau=-\infty}^{+\infty} f(\tau)\, g(t-\tau) \tag{22}$$

The integral form of the above equation is given by Eq. (23):

$$z(t) = f(t) * g(t) = \int_{-\infty}^{+\infty} f(\tau)\, g(t-\tau)\, d\tau = \int_{-\infty}^{+\infty} f(t-\tau)\, g(\tau)\, d\tau \tag{23}$$

In image classification, a digital image is considered as a discrete function f(x, y) over a 2D space. Given g(x, y), a 2D convolution function, the output image z(x, y) is given by Eq. (24):

$$z(x, y) = f(x, y) * g(x, y) \tag{24}$$

Here, the convolution operation is employed to extract the features of an image. Likewise, in deep learning applications, a color image given as input is a three-dimensional array with dimensions 3 × image width × image height; the convolutional kernel in a CNN is accordingly also a high-dimensional array of parameters. For the given 2D image, the respective convolution operation is given by Eq. (25):

$$z(x, y) = f(x, y) * g(x, y) = \sum_t \sum_h f(t, h)\, g(x-t,\ y-h) \tag{25}$$

The integral form is given in Eq. (26):

$$z(x, y) = f(x, y) * g(x, y) = \iint f(t, h)\, g(x-t,\ y-h)\, dt\, dh \tag{26}$$

For a given convolution kernel of size m × n, Eq. (27) gives:


$$z(x, y) = f(x, y) * g(x, y) = \sum_{t=0}^{m} \sum_{h=0}^{n} f(t, h)\, g(x-t,\ y-h) \tag{27}$$

Here f indicates the input and g the convolutional kernel of size m × n. In a computer, convolution is generally realized as a matrix product.
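A direct implementation of the discrete convolution in Eq. (27) is sketched below for a small kernel; the nested sums mirror the equation (with the kernel flipped, as convolution requires), and the remark about realizing convolution as a matrix product corresponds to what frameworks do via an im2col-style rearrangement. The test image and kernel values are arbitrary.

```python
import numpy as np

def conv2d(f, g):
    """Discrete 2D convolution of image f with flipped kernel g (cf. Eq. 27), 'valid' region only."""
    m, n = g.shape
    H, W = f.shape
    out = np.zeros((H - m + 1, W - n + 1))
    g_flipped = g[::-1, ::-1]                 # convolution flips the kernel
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(f[x:x + m, y:y + n] * g_flipped)
    return out

rng = np.random.default_rng(0)
image = rng.random((5, 5))                    # toy 5x5 "image"
kernel = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])             # Laplacian-like 3x3 kernel
print(conv2d(image, kernel))                  # 3x3 feature map
```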

3.5. Pooling layer

As previously stated, the pooling layer down-samples the input's features without reducing the number of channels. The following notation is used:

• Input: $a^{[l-1]}$ with size $(n_H^{[l-1]}, n_W^{[l-1]}, n_C^{[l-1]})$, where $a^{[0]}$ is the input image
• Padding: $p^{[l]}$, stride: $s^{[l]}$
• Pooling filter size: $f^{[l]}$
• Pooling function: $\phi^{[l]}$
• Output: $a^{[l]}$ with size $(n_H^{[l]}, n_W^{[l]}, n_C^{[l]} = n_C^{[l-1]})$

We can then assert Eq. (28):

$$a^{[l]}_{x,y,z} = \text{pool}\big(a^{[l-1]}\big)_{x,y,z} = \phi^{[l]}\Big( \big(a^{[l-1]}_{x+i-1,\, y+j-1,\, z}\big)_{(i,j) \in [1, 2, \ldots, f^{[l]}]^2} \Big) \tag{28}$$
$$\dim\big(a^{[l]}\big) = \big(n_H^{[l]}, n_W^{[l]}, n_C^{[l]}\big)$$

with Eq. (29):

$$n_{H/W}^{[l]} = \left\lfloor \frac{n_{H/W}^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} \right\rfloor + 1,\quad s > 0$$
$$n_{H/W}^{[l]} = n_{H/W}^{[l-1]} + 2p^{[l]} - f^{[l]},\quad s = 0 \tag{29}$$
$$n_C^{[l]} = n_C^{[l-1]}$$

There are no parameters to learn in the pooling layer.

3.6. Fully connected layer

It is made up of a number of neurons that take in a vector and output another vector.
In general, the j-th node of the i-th layer is given by Eq. (30):

$$z_j^{[i]} = \sum_{l=1}^{n_{i-1}} w_{j,l}^{[i]} a_l^{[i-1]} + b_j^{[i]}$$
$$a_j^{[i]} = \psi^{[i]}\big(z_j^{[i]}\big) \tag{30}$$

To plug into the fully connected layer, the tensor is flattened to a 1D vector with dimension $\big(n_H^{[i-1]} \times n_W^{[i-1]} \times n_C^{[i-1]},\ 1\big)$; thus, by Eq. (31):

$$n_{i-1} = n_H^{[i-1]} \times n_W^{[i-1]} \times n_C^{[i-1]} \tag{31}$$
The learned parameters at the l-th layer are:

• Biases with $n_l$ parameters
• Weights $w_{j,l}$ with $n_{l-1} \times n_l$ parameters

Forward propagation: The input is sent through the network either in its entirety or in batches, and the loss function is evaluated for each batch; this is nothing more than the sum of the errors committed at the predicted output for each row. The network has (L+1) layers with L = 4, containing $n_1$ input units in the input layer, $n_5$ output units in the output layer, and numerous hidden units in the C2, M3, and F4 layers. Let $x_i$ be the input of the i-th layer as well as the output of the (i-1)-th layer; then $x_{i+1}$ is computed by Eq. (32):

$$x_{i+1} = f_i(u_i) \tag{32}$$

where $u_i = W_i^T x_i + b_i$, $W_i^T$ represents the weight matrix applied to the input, $b_i$ is an additive bias vector, and $f_i(\cdot)$ is the activation function of the i-th layer. Here tanh(u), the hyperbolic tangent function, is chosen as the activation function for the C1 and F3 layers, and max(u), the maximizing function, is used in the M2 layer. As the CNN classifier involved here is of multiclass type, the output of the F3 layer is fed to an $n_5$-way softmax, which produces a distribution over the $n_5$ class labels. Softmax regression is described in Eq. (33):

$$y = \frac{1}{\sum_{k=1}^{n_5} e^{w_{L,k}^T x_L + b_{L,k}}}
\begin{bmatrix}
e^{w_{L,1}^T x_L + b_{L,1}} \\
e^{w_{L,2}^T x_L + b_{L,2}} \\
\vdots \\
e^{w_{L,n_5}^T x_L + b_{L,n_5}}
\end{bmatrix} \tag{33}$$

The output vector $y = x_{L+1}$ of the output layer gives the final probability of every class in the current iteration.
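The dense-plus-softmax step of Eqs. (30)–(33) can be sketched as below: the feature map is flattened per Eq. (31), passed through an affine layer per Eq. (30), and normalized with a numerically stable softmax per Eq. (33). The feature-map size and the number of classes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Eq. (33): numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fully_connected(feature_map, W, b):
    """Flatten (Eq. 31), apply the affine map of Eq. (30), then softmax (Eq. 33)."""
    x = feature_map.reshape(-1)           # (n_H * n_W * n_C,)
    return softmax(W.T @ x + b)           # (n_classes,)

feature_map = rng.random((7, 7, 64))             # output of the last pooling layer (assumed size)
W = 0.01 * rng.standard_normal((7 * 7 * 64, 5))  # 5 output classes (illustrative)
b = np.zeros(5)
probs = fully_connected(feature_map, W, b)
print(probs, probs.sum())                        # class probabilities summing to 1
```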
Backpropagation: It entails determining the gradients of the cost function with respect to the various parameters, then updating them using a descent procedure.
The epoch number is the number of times we repeat the same process. After establishing the architecture, the learning method is written as follows:

• Initialize the model parameters, which is essentially the same as introducing noise into the model.
• For epoch i = 1, 2, ..., N:
• Execute forward propagation:
• For every training example i, evaluate the predicted value of $x_i$ through the network: $\hat{y}_i^{\theta}$
• Evaluate the cost function: $J(\theta) = \frac{1}{m} \sum_{i=1}^{m} L(\hat{y}_i^{\theta}, y_i)$

where m is the training set size, θ are the model parameters, and L(·) is the cost function.

• Execute backpropagation:
• Apply a descent method to update the parameters: $\theta := G(\theta)$

The loss function is given as Eq. (34):

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{n_5} 1\{j = Y^{(i)}\} \log y_j^{(i)} \tag{34}$$

where m is the number of training samples, $Y^{(i)}$ is the label of the i-th training sample, and $y_j^{(i)}$ is the j-th value of the predicted output for that sample; the size of the output vector is $n_5$. For the true output of sample i, the probability is 1 for the label class and 0 for the other classes. The indicator $1\{j = Y^{(i)}\}$ equals 1 when j equals the label of training sample i and 0 otherwise. The minus sign is placed in front for convenient computation. The gradient of the loss function with respect to $u_i$ is derived as Eq. (35):

$$\delta_i = \frac{\partial J}{\partial u_i} =
\begin{cases}
-(Y - y) \cdot f'(u_i), & i = L \\
\big(W_i^T \delta_{i+1}\big) \cdot f'(u_i), & i < L
\end{cases} \tag{35}$$
Thus, for every iteration, the update is done by Eq. (36):

$$\theta = \theta - \alpha \cdot \nabla_\theta J(\theta) \tag{36}$$
where α represents the learning rate (here α = 0.01), and, from Eq. (37):

$$\nabla_\theta J(\theta) = \left\{ \frac{\partial J}{\partial \theta_1}, \frac{\partial J}{\partial \theta_2}, \ldots, \frac{\partial J}{\partial \theta_L} \right\} \tag{37}$$
As the training iterations increase, the cost function decreases, indicating that the predicted output approaches the true output. Training stops when the difference between them is sufficiently small.
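Putting Eqs. (32)–(37) together, the sketch below trains a tiny softmax classifier with the cross-entropy loss of Eq. (34) and the update rule of Eq. (36) (α = 0.01, as stated above). The synthetic data and model size are assumptions; a real run would use the extracted crop features.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# synthetic 2-feature, 3-class data (illustrative stand-in for crop features)
X = rng.standard_normal((300, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int) + (X[:, 0] > 1).astype(int)  # labels 0..2
Y = np.eye(3)[y]                                   # one-hot targets

W = np.zeros((2, 3))
b = np.zeros(3)
alpha = 0.01                                       # learning rate, as in Eq. (36)

for epoch in range(200):
    probs = softmax(X @ W + b)                     # forward pass, Eqs. (32)-(33)
    loss = -np.mean(np.sum(Y * np.log(probs + 1e-12), axis=1))   # Eq. (34)
    grad_logits = (probs - Y) / len(X)             # gradient of the cross-entropy loss
    W -= alpha * (X.T @ grad_logits)               # Eq. (36): theta <- theta - alpha * grad
    b -= alpha * grad_logits.sum(axis=0)
    if epoch % 50 == 0:
        print(f"epoch {epoch}: loss {loss:.3f}")

print("train accuracy:", np.mean(np.argmax(softmax(X @ W + b), axis=1) == y))
```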
Let S = (S(1), ..., S(N)) and M = (M(1), ..., M(N)) be the input frame images and their associated label-map frame images, respectively. This is modeled as a probabilistic technique by learning the distribution over labels given in Eq. (38):

$$P\big(n(M, i, w_m) \mid n(S, i, w_s)\big) \tag{38}$$

where n(I, i, w) is a w × w patch of image I centered on pixel i. Here $w_s$ should be set larger so that additional contextual data may be collected. The model has the following functional form f, Eq. (39):

$$f_i(s) = \sigma(a_i(s)) = P(m_i = 1 \mid s) \tag{39}$$
The total input to the i-th output unit and the value of the i-th output unit are represented by $a_i$ and $f_i$, respectively. The logistic function σ(x) is written as Eq. (40):

$$\sigma(x) = \frac{1}{1 + \exp(-x)} \tag{40}$$


Fig. 2. FCNN architecture for classification.

The softmax output is an L-dimensional vector that gives the distribution over the potential labels for pixel i. If the path from pixel i to output unit l is considered for multi-class labeling, the recomposed expression is given by Eq. (41):

$$f_{il}(s) = \frac{\exp(a_{il}(S))}{Z} = P(m_i = l \mid s) \tag{41}$$

where $f_{il}(s)$ is the predicted probability that pixel i is mapped to label l.
The architecture of the FCNN is shown in Fig. 2.
The following are the benefits of the proposed method:

• First, the FCNN can handle a massive volume of labelled data from multiple domains.
• Second, when parallelized on a Graphics Processing Unit (GPU), it is faster; as a result, it can handle a larger number of pixels. The suggested method simplifies training by lowering the kernel size through the learning process.
• Every patch in the training data is delivered to the network. Because there are so many training patches, optimization becomes difficult; a binary classifier with a minimum number of patches can be used to address this. Only a few of the hyper-parameters were changed. The hyper-parameters were used to define the sensitivity analysis, allowing them to be fine-tuned with more precision.

The inputs are the data obtained from sensors such as the temperature, humidity, and moisture sensors. The number of hidden layers is directly related to the achievable accuracy. The output is the decision on crop abnormality detection and on which crop is suitable to be grown in that area under present and future climatic scenarios.

4. Performance analysis

The experiment was conducted by giving the present weather conditions, such as temperature, humidity, rain, and moisture sensor values, as input. The 32 ground-truth frames are taken into account when analyzing the results. We calculate the approximate percentage of soil, crop, and weed pixels for each of these frames and compare them with the ones obtained by our algorithm. Simulation results are presented below.
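For reference, the evaluation metrics reported later in Table 1 can be computed from a confusion matrix as sketched below. The per-class (one-vs-rest) definitions with macro averaging are one common convention; the paper does not state its exact averaging, so this is an assumption, and the sample labels are made up.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision, recall, and F-1 from per-class counts."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # TP / (TP + FP) per class
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # TP / (TP + FN) per class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return {
        "accuracy": tp.sum() / cm.sum(),
        "precision": precision.mean(),
        "recall": recall.mean(),
        "f1": f1.mean(),
    }

# toy example with classes {0: soil, 1: crop, 2: weed}
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 1, 2, 2, 0, 2, 1]
print(classification_metrics(y_true, y_pred, n_classes=3))
```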
We selected 100 UAVs because, as can be seen in the literature on UAV communication systems, this number is regarded as standard for UAV networks. For all V2V exchanges and for transmitting direct impressions, we set a timeout period of 1000 s, as this period is typically used in the literature on cooperative UAVs.
In a simulated scenario, our system is compared against benchmarks for continuous variable mapping. Simulations were run in MATLAB software on a system with a 1.8 GHz Intel i7 processor and 16 GB of RAM.
Datasets:
GitHub dataset: The dataset published on GitHub consists of an orthomosaic image, a training-validation dataset, and a demo dataset. The orthomosaic image is stitched from a series of nadir-like view UAV images. The dataset provides 13 images of consecutive growth stages, imaged in 2018, 2019, and 2020. All the images are georeferenced in the TWD97/TM2 zone 121 (EPSG: 3826) projected coordinate system.


Table 1
Comparative analysis of the proposed method with existing methods.

Dataset            Method       Accuracy   Recall   Precision   F-1 score
GitHub dataset     GLCM         97.6       81.9     75.2        47.1
                   RF-SVM       97.8       86.1     76.9        64.1
                   DRNN-FCNN    98.0       87.6     77.0        64.5
USC-GRAD-STDdb     GLCM         91.5       86.0     74.5        53.6
                   RF-SVM       95.2       87.1     77.8        55.1
                   DRNN-FCNN    96.5       87.6     78.0        60.2
Microsoft COCO     GLCM         93.0       83.2     76.5        51.9
                   RF-SVM       93.8       87.8     77.0        54.2
                   DRNN-FCNN    97.0       87.9     77.9        64.5

Fig. 3. Simulation scenario.



USC-GRAD-STDdb: For small target detection, USC-GRAD-STDdb, an annotated video dataset, is employed. It is one of the few public datasets with drone imagery and focuses on small objects that are hard for people to recognize.
Microsoft COCO: For object recognition, this popular, public, and large dataset is employed. It consists of 91 object categories with 328k images and 2500k labels. It mainly targets object bounding-box localization, semantic segmentation, and image classification. For precise object localization, objects are labeled using per-instance segmentation.
Parametric analysis has been carried out in terms of recall, accuracy, precision, and F-1 score. The proposed technique is DRNN-FCNN, and the existing methods compared are GLCM and RF-SVM.
Table 1 shows the comparative analysis of the proposed and existing techniques across the datasets and parameters. The comparison has been carried out for various datasets, specifically the GitHub dataset, USC-GRAD-STDdb, and the Microsoft COCO dataset. The graphical representation of the simulation scenario is shown in Fig. 3.
The comparison of parameters in terms of recall, accuracy, precision, and F-1 score for the GitHub dataset is shown in Fig. 4: the proposed technique achieves an accuracy of 98%, recall of 87.6%, precision of 77%, and F-1 score of 64.5%. Based on this analysis, the proposed strategy yielded the best results on the GitHub dataset.
Fig. 5 shows the comparison of parameters in terms of recall, accuracy, precision, and F-1 score, where each graph plots the parameter percentage against the number of epochs. For USC-GRAD-STDdb, the proposed method achieves an accuracy of 96.5%, recall of 87.6%, precision of 78%, and F-1 score of 60.2%, which is the improved result for this dataset.
Fig. 6 represents the comparison of parameters in terms of recall, accuracy, precision, and F-1 score for the Microsoft COCO dataset: the proposed technique achieves an accuracy of 97%, recall of 87.9%, precision of 77.9%, and F-1 score of 64.5%, obtaining the best results on this dataset.

5. Conclusion

Self-driving agricultural machines, temperature and moisture sensors, aerial photos and UAVs, multi-spectral and hyper-spectral imaging equipment, and GPS and other location technologies will all be used in the future of agriculture.


Fig. 4. Comparison of parameters for GitHub Dataset (a) Accuracy, (b) Recall, (c) Precision, (d) F-1score.

The large amount of data collected by these new techniques, along with recent advances in parallel and GPU computing, has allowed researchers to embrace data-driven analysis and decision-making approaches such as deep learning in the agricultural domain. These breakthroughs paved the path for precision agriculture to solve exceedingly complicated classification and segmentation problems. Low-altitude flight, small size, high resolution, light weight, and portability are all advantages of the UAV. In scientific research, UAVs and machine learning have enormous potential. This study compiled work on the use of UAVs and ML. Here, an IoT-based module is used for collecting data, and then DRNN-based feature fusion and machine learning-based classification are carried out for detecting weeds and crops. The system also monitors abnormality in crops, detects it through feature extraction, and then predicts its level using the FCNN-based classification technique. The simulation results show optimal accuracy, precision, specificity, and MAE for the proposed design when compared with the existing techniques. The potential for automatic feature extraction by learning the time correlation of numerous images is the key advantage of our architecture, which reduces the human feature engineering and crop modeling steps.

Declaration

We declare that this work has not been submitted anywhere before and has not been published in other journals. It does not contain anything that is outrageous, indecent, deceptive, plagiarized, defamatory, or otherwise contrary to the rules. We have followed the Journal's accepted "Publication ethics and malpractice" statement provided on the journal's website and are responsible for the correctness (or plagiarism) and the genuineness of the article.


Fig. 5. Comparison of parameters for USC-GRAD-STDdb (a) Accuracy, (b) Recall, (c) Precision, (d) F-1score.

Availability of data and materials

Data and code will be shared whenever required for the review.

Author contributions

All the authors contributed their skills and effort equally to produce this article.

Statements and declarations

Ethical approval: This article does not contain any studies with animals performed by any of the authors.

Funding

No Funding.
Informed consent: Informed consent was obtained from all individual participants included in the study.
Availability of data and material: All data are available in the manuscript.
Code availability: All code is available in the manuscript (custom mode).

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to
influence the work reported in this paper.


Fig. 6. Comparison for Microsoft COCO (a) Accuracy, (b) Recall, (c) Precision, (d) F-1score.

Data availability

The data that has been used is confidential.

References

[1] Maimaitijiang M, et al. Crop monitoring using satellite/UAV data fusion and machine learning. Remote Sens 2020;12(9):1357.
[2] Khan NA, Jhanjhi NZ, Brohi SN, Usmani RSA, Nayyar A. Smart traffic monitoring system using unmanned aerial vehicles (UAVs). Comput Commun 2020;157:
434–43.
[3] Mazzia V, et al. UAV and machine learning based refinement of a satellite-driven vegetation index for precision agriculture. Sensors 2020;20(9):2530.
[4] Su J, et al. Machine learning-based crop drought mapping system by UAV remote sensing RGB imagery. Unmanned Syst 2020;8(01):71–83.
[5] Zhou X, et al. UAV data as an alternative to field sampling to monitor vineyards using machine learning based on UAV/sentinel-2 data fusion. Remote Sens
2021;13(3):457.
[6] Almusaylim ZA, Zaman N. A review on smart home present state and challenges: linked to context-awareness internet of things (IoT). Wirel Netw 2019;25(6):
3193–204.
[7] Ge X, et al. Combining UAV-based hyperspectral imagery and machine learning algorithms for soil moisture content monitoring. PeerJ 2019;7:e6926.
[8] Guo Y, et al. Scaling effects on chlorophyll content estimations with RGB camera mounted on a UAV platform using machine-learning methods. Sensors 2020;20
(18):5130.
[9] Eskandari R, et al. Meta-analysis of unmanned aerial vehicle (UAV) imagery for agro-environmental monitoring using machine learning and statistical models.
Remote Sens 2020;12(21):3511.
[10] Lottes P, et al. UAV-based crop and weed classification for smart farming. In: Proceedings of the IEEE international conference on robotics and automation
(ICRA). IEEE; 2017.
[11] Ezuma M, et al. Micro-UAV detection and classification from RF fingerprints using machine learning techniques. In: Proceedings of the IEEE aerospace
conference. IEEE; 2019.
[12] Zhou X, et al. Predicting within-field variability in grain yield and protein content of winter wheat using UAV-based multispectral imagery and machine learning
approaches. Plant Prod Sci 2020:1–15.
[13] Radoglou-Grammatikis P, et al. A compilation of UAV applications for precision agriculture. Comput Netw 2020;172:107148.
[14] Böhler J, Schaepman M, Kneubühler M. Crop classification in a heterogeneous arable landscape using uncalibrated UAV data. Remote Sens 2018;10:1282.
[15] Hall O, Dahlin S, Marstorp H, Archila Bustos M, Öborn I, Jirström M. Classification of maize in complex smallholder farming systems using UAV imagery. Drones 2018;2:22.
[16] Nijhawan R, Sharma H, Sahni H, Batra A. A deep learning hybrid CNN framework approach for vegetation cover mapping using deep features. In: Proceedings of the 13th international conference on signal-image technology and internet-based systems, SITIS 2017; 2018. p. 192–6.


[17] Baeta R, Nogueira K, Menotti D, Dos Santos JA. Learning deep features on multiple scales for coffee crop recognition. In: Proceedings of the 30th conference on
graphics, patterns and images, SIBGRAPI; 2017. p. 262–8.
[18] Bah MD, Hafiane A, Canal R. Weeds detection in UAV imagery using SLIC and the Hough transform. In: Proceedings of the 7th international conference on image processing theory, tools and applications (IPTA). IEEE; 2017. p. 1–6.
[19] dos Santos Ferreira A, Matte Freitas D, Gonçalves da Silva G, Pistori H, Theophilo Folhes M. Weed detection in soybean crops using ConvNets. Comput Electron Agric 2017;143:314–24.
[20] Rahnemoonfar M, Sheppard C. Real-time yield estimation based on deep learning. In: Proceedings of the autonomous air and ground sensing systems for
agricultural optimization and phenotyping II. 10218; 2017, 1021809.
[21] Huang H, Deng J, Lan Y, Yang A, Zhang L, Wen S, Zhang H, Zhang Y, Deng Y. Detection of helminthosporium leaf blotch disease based on UAV imagery. Appl Sci
2019;9(3):558.

Dr. K. Sita Kumari is currently working as an Associate Professor in the Department of Information Technology, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India. She has participated in many international and national conferences and has published many articles in reputed Scopus- and SCIE-indexed journals.

Sulaima Lebbe Abdul Haleem is currently serving as a Senior Lecturer in Information & Communication Technology at the Department of Information & Communication Technology, Faculty of Technology, South Eastern University of Sri Lanka. He is the founding Head of the Department and has served in many academic administrative posts, such as Chairman of the Curriculum Development Committee and Chairman of the IT Advisory Committee. He is the prime architect of many honours degree curricula at the Faculty of Technology and at various other faculties of the South Eastern University of Sri Lanka. He has published various articles in peer-reviewed journals indexed in PubMed, SCI, Scopus, and Web of Science, and has served as a reviewer for many international journals from top publishers such as Inderscience, Springer, Elsevier, Wiley, and IEEE. Email: [email protected].

Dr. G. Shivaprakash is currently working as an Associate Professor in the Department of Electronics and Instrumentation Engineering, Ramaiah Institute of Technology, Bengaluru, India. His areas of interest are signal processing, biomedical instrumentation, VLSI, programmable logic controllers, digital signal processing, and processors.

Dr. M. Saravanan is currently working as an Associate Professor in the Department of CSE at KPR Institute of Engineering and Technology, Coimbatore, India. He has more than 11 years of experience. He completed his research in the area of VANETs and received his doctoral degree from Anna University in 2020. He has published various articles in SCI-, Scopus-, and Web of Science-indexed journals.

Dr. B. Arunsundar completed his Ph.D. in the field of wireless communication at the College of Engineering Guindy, Anna University, Chennai. Currently, he is working as an Assistant Professor in the Department of ECE, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai. His areas of interest include wireless communication, networking, and image processing.

Thandava Krishna Sai Pandraju has received the best teacher award and the best mentor award, and he has more than 10 years of teaching experience. He is an expert in the renewable energy systems, control systems, and power systems research areas.

