Draft_Copy_Implementation of the Proposed Method

Contents

Abstract
Introduction
Literature Study
Implementation of the Proposed Method (Methodology)
MODEL-1
Adding Temporal Analysis on Weight Factor
MODEL-2
Proposed Algorithm
Model Parameters and Settings
Temperature Prediction (ForestFires)
Results
Comparative Results
Conclusion
Pseudo Code
Reference

Abstract
This study presents a comparative analysis of two predictive modeling
approaches for time series forecasting: a Long Short-Term Memory (LSTM)
network and a Feedforward Neural Network (FNN). We begin by
preprocessing the dataset through normalization and sequence creation,
where sequences of data are prepared for model input. The LSTM model is
designed with a single hidden layer of 50 units and ReLU activation, and is
trained using mean squared error (MSE) as the loss function. In parallel,
an FNN model with a three-layer architecture, including 64 and 32 units in
the hidden layers and a single output unit, is trained under similar
conditions. Both models are evaluated on their predictive performance
using the MSE metric, with the results highlighting the comparative
effectiveness of each approach in time series forecasting. This analysis
provides insights into the strengths and limitations of LSTM and FNN
models, contributing to the understanding of their applicability in
predictive analytics.

Introduction
The rapid advancement of the Internet of Things (IoT) has ushered in
transformative changes across various industries, with precision
agriculture, or smart farming, standing out as a key beneficiary [1][2]. By
integrating IoT technologies, traditional farming practices have evolved
into more efficient and data-driven processes, significantly enhancing
productivity and resource management [3]. This shift has been
instrumental in optimizing operations such as soil moisture monitoring,
climate tracking, and precision irrigation, thereby boosting crop yields and
reducing waste.

In smart farming, the deployment of IoT sensors is crucial for gathering
essential data on environmental conditions and crop health [4]. These
sensors enable real-time data collection and analysis, providing farmers
with actionable insights to make informed decisions. However, the
widespread use of IoT devices in agriculture also brings challenges,
particularly in ensuring the accuracy and reliability of the data collected.
Sensors deployed in outdoor environments are prone to errors due to
harsh conditions and are vulnerable to security breaches, including cyber-
attacks and tampering [5][6].

Given these challenges, it is essential to develop robust methods for
detecting anomalies in sensor data. Anomalies can stem from various
sources, including hardware malfunctions, environmental interferences, or
malicious activities. Accurate detection and classification of these
anomalies are vital for maintaining the integrity of the data and ensuring
the smooth operation of IoT systems in agriculture. Traditional methods of
anomaly detection, often designed for Wireless Sensor Networks (WSNs),
may not be fully applicable to IoT networks due to differences in data
characteristics and system requirements [7].

In this study, we propose two innovative models for enhancing anomaly
detection in IoT sensor networks within the smart farming domain. The
first model leverages Long Short-Term Memory (LSTM) networks to
incorporate temporal analysis into the detection process, addressing both
spatial and temporal anomalies [10]. The second model builds on this by
providing a more nuanced analysis of sensor data, including first-order
and second-order spatial and temporal correlations. These models aim to
provide a comprehensive framework for identifying and distinguishing
between normal, faulty, and malicious sensor behaviors [8].

Our research not only addresses a critical gap in the existing literature but
also offers practical solutions for improving the reliability of IoT systems in
agriculture. By enhancing anomaly detection capabilities, we contribute to
the broader goal of achieving more efficient and secure smart farming
practices. The novelty and Contribution of the proposed work is given
below:

• Developed two innovative models for anomaly detection in IoT sensor
networks, specifically tailored for smart farming applications.
• Integrated Long Short-Term Memory (LSTM) networks to analyze
temporal anomalies, enhancing the detection of both spatial and
temporal outliers.
• Introduced a refined spatial correlation calculation that includes
temporal components, providing a more comprehensive anomaly
detection framework.
• Proposed a second-order spatial-temporal correlation model that
captures complex patterns in sensor data, utilizing both first-order
and second-order correlations.
• Demonstrated the effectiveness of the proposed models using real-
world datasets, showcasing improved accuracy in detecting
legitimate, faulty, and malicious sensor behaviours.
• Addressed the limitations of traditional Wireless Sensor Networks
(WSN) techniques by adapting them for the unique characteristics of
IoT environments in agriculture.
• Provided practical solutions for real-time detection and classification
of sensor anomalies, which are crucial for maintaining data integrity
and system reliability.
• Contributed new knowledge to the field of IoT in agriculture, with
potential applications beyond smart farming, including other IoT-
driven domains.
Literature Study
The literature on the Internet of Things (IoT) and its applications reveals
significant advancements and ongoing challenges in the field, particularly
concerning security and anomaly detection. Butun et al. (2019) offer a
detailed review of the vulnerabilities, attacks, and countermeasures
associated with IoT systems. Their study highlights various security
threats, including Distributed Denial of Service (DDoS) attacks, which pose
a substantial risk due to the resource constraints typical of many IoT
devices. This review underscores the necessity for effective security
measures to safeguard IoT infrastructures against such attacks [1].

Sonar and Upadhyay (2014) specifically address the impact of DDoS
attacks on IoT environments. Their survey emphasizes the unique
challenges posed by these attacks within the IoT context and the need for
targeted mitigation strategies. They argue that the conventional
approaches to handling DDoS attacks may not be fully applicable to IoT
systems due to their distinct characteristics [2]. In the realm of smart
farming, Dhanaraju et al. (2022) examine how IoT technologies contribute
to sustainable agriculture. Their research demonstrates how IoT
applications can optimize resource use, enhance crop management, and
drive overall efficiency in farming practices. This work highlights the
potential for IoT to revolutionize agriculture by making it more sustainable
and productive [3].

Jayaraman et al. (2016) focus on the practical aspects of deploying IoT
platforms for smart farming. Their experiences and lessons learned
provide valuable insights into the implementation of IoT solutions in
agricultural settings. They discuss the benefits and challenges associated
with these platforms, contributing to a deeper understanding of how IoT
technologies can be effectively integrated into farming practices [4].
Sontowski et al. (2020) further explore the cybersecurity risks specific to
smart farming infrastructures. Their study identifies various cyber threats
and attacks that can compromise the integrity of smart farming systems.
They highlight the importance of developing robust security measures to
protect against these threats and ensure the resilience of IoT-based
agricultural solutions [5].

Yazdinejad et al. (2021) review the security aspects of smart farming and
precision agriculture, discussing the different types of attacks, threats,
and countermeasures relevant to this field. Their work emphasizes the
need for ongoing research and development to address the evolving
security challenges faced by IoT applications in agriculture [6].
Mohammad et al. (2019) explore security weaknesses and attacks on IoT
applications more broadly, providing a comprehensive overview of the
types of threats that can affect various IoT systems. Their research
highlights the need for improved security protocols to protect against
these vulnerabilities [7].

Sood et al. (2021) contribute to the understanding of IoT sensor behaviors
by proposing methods to accurately detect legitimate, faulty, and
compromised sensor scenarios. Their approach offers a novel perspective
on anomaly detection within IoT networks, focusing on the specific
challenges faced in smart farming contexts [8]. Chen (2013) introduces
new approaches for calculating Moran’s index of spatial autocorrelation,
which can be applied to analyze spatial patterns in IoT sensor data. This
methodological advancement supports more accurate detection of spatial
anomalies in IoT networks [9].

Karim et al. (2017) propose LSTM fully convolutional networks for time
series classification, offering a robust method for analyzing temporal
patterns in sensor data. Their work is relevant for enhancing the detection
of temporal anomalies in IoT systems, particularly in the context of smart
farming [10]. Xu, He, and Li (2014) provide a comprehensive survey on
the deployment of IoT in industrial contexts. Their work details the various
industrial applications of IoT, emphasizing how these technologies
enhance operational efficiency, automate processes, and facilitate real-
time monitoring. The survey covers a wide range of industrial sectors,
illustrating the diverse applications and potential benefits of IoT
integration in industrial environments [11].

Liu et al. (2021) explore the transition from Industry 4.0 to Agriculture 4.0,
examining the current state of IoT applications in agriculture. They identify
enabling technologies, such as sensors and data analytics, that drive the
shift towards smarter agricultural practices. The paper also highlights key
research challenges, including the need for robust systems to handle large
volumes of data and ensure the reliability of IoT solutions in agricultural
settings [12].

Brewster et al. (2017) discuss the design and implementation of a large-
scale IoT pilot project in Europe focused on agriculture. Their work outlines
the architecture of the pilot, the technologies used, and the objectives of
the project. They address the practical aspects of deploying IoT solutions
across diverse agricultural environments, providing insights into the
complexities and benefits of large-scale IoT implementations in agriculture
[13]. Roy, Das, and Das (2017) present a temperature and humidity
monitoring system designed for industrial storage rooms. Their system
utilizes IoT technologies to continuously monitor environmental
conditions, ensuring that storage conditions remain optimal for preserving
goods. This work demonstrates a specific application of IoT in industrial
environments, highlighting how these technologies can enhance
operational management and quality control [14].

Implementation of the Proposed Method (Methodology)
MODEL-1
Model 1 focuses on enhancing the detection of anomalies in sensor
measurements by incorporating temporal analysis using Long Short-Term
Memory (LSTM) networks, which are well-suited for analyzing time series
data. The model aims to detect both spatial and temporal anomalies by
modifying the spatial correlation calculation to include temporal
components. The refined spatial correlation is represented by the equation
$w_{ij} = \frac{1}{d_{ij}^2 + \alpha (t_i - t_j)^2}$, where $t_i$ and $t_j$ are the time indices of measurements at
sensors $i$ and $j$, respectively, and $\alpha$ is a weighting factor for temporal
distance. This adjustment improves the detection of anomalies by
considering not only the spatial distance between sensors but also the
temporal distance, which can reveal patterns not apparent when looking
solely at spatial data. The model also modifies the calculation of the local
Moran's I index and the weighted variance to incorporate temporal
weights, thus addressing challenges related to static sensors and low-
density deployments [9].

Adding Temporal Analysis on Weight Factor


LSTM networks are well-suited for analyzing time series data. We use an
LSTM to detect temporal anomalies in the sensor measurements, and we
modify the spatial correlation calculation to include temporal components,
improving the detection of both spatial and temporal outliers.

Refined Spatial Correlation Calculation

$$ w_{ij} = \frac{1}{d_{ij}^2 + \alpha (t_i - t_j)^2} $$

where $t_i$ and $t_j$ are the time indices of measurements at sensors $i$ and $j$,
and $\alpha$ is a weighting factor for temporal distance.
Calculation of the local Moran's I index considering both spatial and
temporal components:

$$ I_i = \frac{(x_i - \bar{x}) \sum_{j=1, j \neq i}^{N} w_{ij} (x_j - \bar{x})}{\delta_i^2} $$

Modification of the weighted variance to include temporal weights:

$$ \delta_i^2 = \frac{\sum_{j=1, j \neq i}^{N} w_{ij} (x_j - \bar{x})^2}{N - 1} $$

This helps resolve the issues of static sensors as well as low-density
deployments.
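
To make the computation concrete, the following is a minimal NumPy sketch of the temporally weighted local Moran's I described above. The function names, data layout, and the default value of alpha are illustrative assumptions; the paper leaves the weighting factor open.

import numpy as np

def temporal_weight(d_ij, t_i, t_j, alpha=0.5):
    # Refined spatial correlation: w_ij = 1 / (d_ij^2 + alpha * (t_i - t_j)^2).
    # alpha = 0.5 is a placeholder, not a value fixed by the paper.
    return 1.0 / (d_ij ** 2 + alpha * (t_i - t_j) ** 2)

def local_morans_i(x, dist, t, alpha=0.5):
    # x: measurements of the N sensors, dist: N x N pairwise distances,
    # t: time indices of the measurements. Returns I_i for every sensor.
    N = len(x)
    x_bar = x.mean()
    scores = np.zeros(N)
    for i in range(N):
        others = [j for j in range(N) if j != i]
        w = np.array([temporal_weight(dist[i, j], t[i], t[j], alpha)
                      for j in others])
        xj = x[others]
        var_i = np.sum(w * (xj - x_bar) ** 2) / (N - 1)  # weighted variance delta_i^2
        scores[i] = (x[i] - x_bar) * np.sum(w * (xj - x_bar)) / var_i
    return scores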

MODEL-2
Model 2 expands on this approach by introducing a more detailed
breakdown of spatial and temporal correlations. It includes first-order
spatial correlation ($S_i$), calculated as $S_i = \sum_{j=1, j \neq i}^{N} w_{ij} (x_j - \bar{x})$, and first-order
temporal correlation ($T_i$), defined as the difference between consecutive
time points, $T_i = x_i(t) - x_i(t-1)$. Additionally, second-order correlations are
introduced: second-order spatial correlation ($S_i^{(2)}$) and second-order
temporal correlation ($T_i^{(2)}$), capturing more complex patterns in the data.
These are combined into a second-order spatial-temporal correlation
equation: $C_i = \alpha S_i + \beta T_i + \gamma S_i^{(2)} + \delta T_i^{(2)}$, where $\alpha$, $\beta$, $\gamma$, and $\delta$ are weighting factors.

The final anomaly detection equation, $A_i = (x_i(t) - \bar{x})[\alpha S_i + \beta T_i + \gamma S_i^{(2)} + \delta T_i^{(2)}]$, integrates
these components to provide a comprehensive metric for identifying
anomalies. This approach builds upon the neural network architecture
discussed earlier by combining the outputs of spatial and temporal
analysis, offering a more robust method for anomaly detection in time
series data.

First-Order Spatial Correlation:

$$ S_i = \sum_{j=1, j \neq i}^{N} w_{ij} (x_j - \bar{x}) $$

where $w_{ij} = \frac{1}{d_{ij}^2}$.
First-Order Temporal Correlation:

$$ T_i = x_i(t) - x_i(t-1) $$

Second-Order Spatial Correlation:

$$ S_i^{(2)} = \sum_{j=1, j \neq i}^{N} w_{ij} (x_j - x_i)^2 $$

Second-Order Temporal Correlation:

$$ T_i^{(2)} = x_i(t) - 2 x_i(t-1) + x_i(t-2) $$

Combined Second-Order Spatial-Temporal Correlation:

$$ C_i = \alpha S_i + \beta T_i + \gamma S_i^{(2)} + \delta T_i^{(2)} $$

where $\alpha$, $\beta$, $\gamma$, and $\delta$ are weighting factors.

Second-Order Equation

Combining the above components, the second-order equation for
detecting anomalies can be written as:

$$ A_i = (x_i(t) - \bar{x}) \left[ \alpha S_i + \beta T_i + \gamma S_i^{(2)} + \delta T_i^{(2)} \right] $$
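
As an illustrative sketch, the combined score $A_i$ can be computed directly from a (time steps × sensors) measurement matrix. The function name, the equal weighting factors of 0.25, and scoring only the latest time step are assumptions for illustration; the paper does not fix $\alpha$, $\beta$, $\gamma$, $\delta$.

import numpy as np

def anomaly_scores(X, dist, alpha=0.25, beta=0.25, gamma=0.25, delta=0.25):
    # X: array of shape (T, N) holding T >= 3 time steps for N sensors;
    # dist: N x N pairwise sensor distances. Scores the latest time step.
    t = X.shape[0] - 1
    x_bar = X[t].mean()
    with np.errstate(divide='ignore'):
        w = 1.0 / dist ** 2                      # w_ij = 1 / d_ij^2
    np.fill_diagonal(w, 0.0)                     # exclude j == i
    S1 = w @ (X[t] - x_bar)                      # first-order spatial S_i
    T1 = X[t] - X[t - 1]                         # first-order temporal T_i
    S2 = (w * (X[t][None, :] - X[t][:, None]) ** 2).sum(axis=1)  # S_i^(2)
    T2 = X[t] - 2 * X[t - 1] + X[t - 2]          # second-order temporal T_i^(2)
    C = alpha * S1 + beta * T1 + gamma * S2 + delta * T2         # combined C_i
    return (X[t] - x_bar) * C                    # anomaly score A_i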

The figure below shows the neural network settings: 100 training cycles, a
learning rate of 0.005, and training on a CPU without a learned optimizer.
The network architecture consists of an input layer with 70 features,
followed by two dense layers with 20 and 10 neurons, respectively, and a
single output layer. This configuration corresponds to a basic feedforward
neural network for a supervised learning task.
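
A minimal Keras sketch of this configuration might look as follows. The 70-feature input, 20/10 hidden neurons, learning rate of 0.005, and 100 epochs come from the settings above; the single regression output and the variable names are assumptions.

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.optimizers import Adam

# Feedforward network matching the described settings.
model = Sequential([
    Input(shape=(70,)),            # input layer with 70 features
    Dense(20, activation='relu'),  # first hidden layer
    Dense(10, activation='relu'),  # second hidden layer
    Dense(1),                      # single output unit (assumed regression head)
])
model.compile(optimizer=Adam(learning_rate=0.005), loss='mean_squared_error')
# model.fit(X_train, y_train, epochs=100)  # 100 training cycles, as in the figure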

Proposed Algorithm
Algorithm 1 describes the process for developing and evaluating a
predictive LSTM model. The initial step involves importing essential
libraries such as `numpy`, `pandas`, `tensorflow`, and `keras`, along with
modules for data scaling and performance evaluation. The dataset is
then loaded from a CSV file using `read_csv(file_path)`. In the data
preprocessing phase, features (`X`) and target variables (`y`) are
extracted from the dataset. The features are normalized with
`StandardScaler()` to standardize the input values, which helps in
improving model performance. Sequences of data are then created using
a defined sequence length (`SEQ_LENGTH`), which structures the data into
a format suitable for LSTM input.
For model development, the dataset is divided into training and testing
sets using `train_test_split()`, reserving 20% of the data for testing. An
LSTM model is then defined and compiled with the Adam optimizer and
mean squared error loss function, setting the stage for training. During the
training phase, the model is trained using `model1.fit()` with a specified
number of epochs (100) and a validation split of 10%. This training process
adjusts the model's parameters to minimize the loss on the training data.
In the evaluation phase, the model's performance is assessed by making
predictions on the test set and calculating the Mean Squared Error (MSE)
between the predicted and actual values. This MSE is printed to provide a
measure of the model's accuracy.

___________________________________________________________________________
Algorithm 1: Model 1
1 Initialization
   Import necessary libraries:
      numpy, pandas, tensorflow, keras, StandardScaler,
      mean_squared_error, train_test_split
   Load dataset D from CSV:
      D ← read_csv(file_path)
2 Data Preprocessing
   Define features X and target y:
      X ← D[features], y ← D['area']
   Normalize features using StandardScaler:
      scaler ← StandardScaler()
      X_scaled ← scaler.fit_transform(X)
   Define sequence length SEQ_LENGTH and create sequences:
      SEQ_LENGTH ← 1
      (X_seq, y_seq) ← create_sequences(X_scaled, SEQ_LENGTH)
      where:
         X_seq[i] = [X_scaled[i], ..., X_scaled[i + SEQ_LENGTH − 1]]
         y_seq[i] = y[i + SEQ_LENGTH]
3 Model Development
   Split data into training and testing sets:
      (X_train, X_test, y_train, y_test) ← train_test_split(X_seq, y_seq,
         test_size = 0.2, random_state = 42)
   Define and compile the LSTM model:
      model1.compile(optimizer = 'adam', loss = 'mean_squared_error')
4 Training
   Train the model:
      history1 ← model1.fit(X_train, y_train, epochs = 100, validation_split = 0.1)
5 Evaluation
   Predict and evaluate the model:
      y_pred1 ← model1.predict(X_test)
      y_pred1 ← flatten(y_pred1)
      y_test ← flatten(y_test)
      MSE1 ← mean_squared_error(y_test, y_pred1)
   Print the Mean Squared Error (MSE):
      print(MSE1)

___________________________________________________________________________
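
Translated into Python, Algorithm 1 could be sketched as follows. The file name `forestfires.csv` and the numeric-feature selection are assumptions (the pseudocode only fixes 'area' as the target), and the 50-unit ReLU LSTM follows the description in the abstract.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LENGTH = 1

def create_sequences(X_scaled, y, seq_length):
    # X_seq[i] = [X_scaled[i], ..., X_scaled[i + seq_length - 1]],
    # y_seq[i] = y[i + seq_length], as in Algorithm 1.
    X_seq = np.array([X_scaled[i:i + seq_length]
                      for i in range(len(X_scaled) - seq_length)])
    y_seq = np.array([y[i + seq_length]
                      for i in range(len(X_scaled) - seq_length)])
    return X_seq, y_seq

D = pd.read_csv('forestfires.csv')                    # assumed file name
X = D.drop(columns=['area']).select_dtypes('number')  # numeric features (assumed)
y = D['area'].values
X_scaled = StandardScaler().fit_transform(X)
X_seq, y_seq = create_sequences(X_scaled, y, SEQ_LENGTH)
X_train, X_test, y_train, y_test = train_test_split(
    X_seq, y_seq, test_size=0.2, random_state=42)

model1 = Sequential([
    LSTM(50, activation='relu', input_shape=(SEQ_LENGTH, X_train.shape[2])),
    Dense(1),
])
model1.compile(optimizer='adam', loss='mean_squared_error')
history1 = model1.fit(X_train, y_train, epochs=100, validation_split=0.1)

y_pred1 = model1.predict(X_test).flatten()
print(mean_squared_error(y_test, y_pred1))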

Algorithm 2 outlines a process for implementing and evaluating two
models: an LSTM model and a Feedforward Neural Network (FNN) for time
series prediction. The process begins with data preparation, where
sequences are generated using the `create_sequences` function. This
function creates sequences of input data (`X_seq`) and their
corresponding target values (`y_seq`) for LSTM input. Each sequence
`X_seq[i]` includes a single data point from `X_scaled`, with
`y_seq[i]` being the value immediately following this data point, since
the sequence length is set to 1.

Data is then split into training and testing sets using train_test_split,
allocating 20% of the data for testing while ensuring reproducibility with a
fixed random state.

For the LSTM model, the architecture includes 50 units with ReLU
activation and an input shape based on the sequence length and feature
dimensions. This model also has a final dense layer with one output unit.
The model is compiled with the Adam optimizer and mean squared error
loss function, and trained for 100 epochs with a 10 % validation split. After
training, predictions are made on the test set, and the Mean Squared Error
(MSE) is calculated to assess the model's performance.

For the Feedforward Neural Network (FNN) model, the architecture
features three dense layers: the first with 64 units and ReLU activation,
the second with 32 units and ReLU activation, and the final output layer
with one unit. The FNN model is compiled similarly with the Adam
optimizer and mean squared error loss, trained for 100 epochs with a 10 %
validation split, and the input data is reshaped to fit the model's expected
input shape. Performance is evaluated by comparing the predictions to the
actual test values and calculating the MSE.

___________________________________________________________________________
Algorithm 2: Model 2
1 Data Preparation
   Sequence Creation
      Define a function create_sequences to generate sequences for LSTM:
         create_sequences(X_scaled, SEQ_LENGTH) = (X_seq, y_seq)
      where:
         X_seq[i] = [X_scaled[i], ..., X_scaled[i + SEQ_LENGTH − 1]]
         y_seq[i] = X_scaled[i + SEQ_LENGTH]
   Sequence Length
      Set the sequence length:
         SEQ_LENGTH ← 1
   Data Splitting
      Split data into training and testing sets:
         (X_train, X_test, y_train, y_test) ← train_test_split(X_seq, y_seq,
            test_size = 0.2, random_state = 42)
2 Model 1: LSTM
   Model Architecture
      Define the LSTM model:
         model1 ← [ LSTM(50, activation = ReLU,
            input_shape = (SEQ_LENGTH, X_train.shape[2])), Dense(1) ]
      Compile the model:
         model1.compile(optimizer = Adam, loss = mean_squared_error)
   Training
      Train the model:
         history1 ← model1.fit(X_train, y_train, epochs = 100, validation_split = 0.1)
   Evaluation
      Predict and evaluate:
         y_pred1 ← model1.predict(X_test)
         MSE1 ← mean_squared_error(y_test, y_pred1)
3 Model 2: Feedforward Neural Network
   Model Architecture
      Define the feedforward model:
         model2 ← [ Dense(64, activation = ReLU,
            input_shape = (X_train.shape[1],)),
            Dense(32, activation = ReLU), Dense(1) ]
      Compile the model:
         model2.compile(optimizer = Adam, loss = mean_squared_error)
   Training
      Train the model:
         history2 ← model2.fit(X_train.reshape(X_train.shape[0], −1), y_train,
            epochs = 100, validation_split = 0.1)
   Evaluation
      Predict and evaluate:
         y_pred2 ← model2.predict(X_test.reshape(X_test.shape[0], −1))
         MSE2 ← mean_squared_error(y_test, y_pred2)
___________________________________________________________________________
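
The FNN branch of Algorithm 2 could be sketched as below, reusing the train/test split (X_train, X_test, y_train, y_test) from the LSTM sketch earlier; the flattening step mirrors the reshape in the pseudocode, and the variable names are illustrative.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from sklearn.metrics import mean_squared_error

# Flatten the (SEQ_LENGTH, n_features) sequences for the feedforward model.
X_train_flat = X_train.reshape(X_train.shape[0], -1)
X_test_flat = X_test.reshape(X_test.shape[0], -1)

model2 = Sequential([
    Dense(64, activation='relu', input_shape=(X_train_flat.shape[1],)),
    Dense(32, activation='relu'),
    Dense(1),
])
model2.compile(optimizer='adam', loss='mean_squared_error')
history2 = model2.fit(X_train_flat, y_train, epochs=100, validation_split=0.1)

y_pred2 = model2.predict(X_test_flat).flatten()
print(mean_squared_error(y_test, y_pred2))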

Model Parameters and Settings

Temperature Prediction (ForestFires)


The image displays a scatter plot illustrating the on-device performance of
a regression model. The plot uses two colors: green for correct predictions
and red for incorrect predictions. The majority of the data points are
clustered together, with most points marked as correct (green) and a few
marked as incorrect (red). The presence of outliers in the upper-right area,
all marked as correct, indicates some data points with significantly
different values or predictions. This visualization helps assess the model's
performance and identify areas where it may be making errors.
Results

The image displays the results of a regression model, showing a high
accuracy rate of 97.06%, which indicates that the majority of predictions
are correct. The Mean Squared Error (MSE) is 0.04, a low value that
signifies the model's predictions are close to the actual values on average.
Additionally, the Mean Absolute Error (MAE) is 0.14, suggesting the
average magnitude of the errors is relatively small. However, the
Explained Variance Score is -8.76, which is unusual as this metric typically
measures the proportion of variance captured by the model. A negative
score can imply that the model is underperforming or that there may be
an issue with the data or model specification.
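
For reference, these metrics can be reproduced with scikit-learn; `y_test` and `y_pred` below stand for the actual and predicted values from the run described above.

from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             explained_variance_score)

mse = mean_squared_error(y_test, y_pred)        # 0.04 in the reported run
mae = mean_absolute_error(y_test, y_pred)       # 0.14 in the reported run
evs = explained_variance_score(y_test, y_pred)  # negative values signal a poor fit
print(mse, mae, evs)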

The "Feature Explorer" section provides a visual representation of the


model's predictions using a 3D scatter plot. This plot maps the data points
based on three variables: "X Average," "Y Average," and "FFMC Average."
The points are color-coded, with green representing correct predictions
and red representing incorrect ones. The visualization shows a
concentration of correct predictions, with a few incorrect ones scattered
among them. This indicates that while the model performs well overall,
there are still instances where it does not predict accurately, highlighting
potential areas for further refinement or data analysis.
The code is a TensorFlow/Keras implementation for training a neural
network aimed at a regression task. It begins by importing necessary
modules from TensorFlow, including components for defining the model
architecture (`Sequential`, `Dense`, etc.) and configuring the Adam
optimizer. Constants are then defined to set the number of training epochs
(`EPOCHS`), the initial learning rate (`LEARNING_RATE`), and batch size
(`BATCH_SIZE`), which can be adjusted via command-line arguments
(`args`).

To ensure variability in training data, the code includes an option
(`ENSURE_DETERMINISM`) to shuffle the training dataset (`train_dataset`)
if set to `False`. Both training and validation datasets are subsequently
batched according to the specified `BATCH_SIZE`.

The neural network model is constructed using the `Sequential` API,
comprising two hidden `Dense` layers with ReLU activation functions and
an output `Dense` layer that produces predictions (`y_pred`). The model
is optimized using the Adam optimizer with user-defined parameters, and
callbacks may be appended to enhance training monitoring and
functionality.

For training, the model is compiled with a mean squared error loss
function (`loss='mean_squared_error'`) and the configured optimizer
(`opt`). Training occurs over multiple epochs (`EPOCHS`), utilizing the
training dataset (`train_dataset`) and validating performance on the
validation dataset (`validation_dataset`). The verbosity during training
(`verbose=2`) controls the amount of output displayed.

The code snippet concludes with a comment regarding
`disable_per_channel_quantization`, suggesting an optional setting that
could optimize memory usage in convolutional models but isn't directly
utilized in the current implementation.

The provided implementation showcases a structured approach to
constructing and training a neural network model using TensorFlow/Keras,
emphasizing configuration flexibility and performance monitoring through
callbacks and configuration parameters.

Comparative Results
The comparative analysis of Mean Squared Error (MSE) values for the
existing method, Model 1, and Model 2 reveals notable improvements in
prediction accuracy across different environmental factors. For rainfall
prediction, both Model 1 and Model 2 outperform the existing method,
with Model 2 achieving the lowest MSE of 0.2716 compared to 0.352 for
the existing method. This indicates that the new models are better at
capturing rainfall patterns, with Model 2 showing a slight edge over Model
1.

In the case of wind speed prediction, both models offer improvements
over the existing method's MSE of 1.2456. Model 1 performs slightly
better with an MSE of 0.9785 compared to Model 2’s 0.9812. This
suggests that Model 1 provides a marginally more accurate prediction for
wind speed, although both models contribute to enhanced performance.

For temperature predictions, both new models show a reduction in MSE
from the existing method's 1.02, with Model 1 achieving 0.8076 and
Model 2 close behind at 0.8098. This reduction indicates that both models
offer better accuracy, with Model 1 showing a slight improvement over
Model 2.

The analysis demonstrates that Model 1 and Model 2 both improve
prediction accuracy for rain, wind, and temperature compared to the
existing method. Model 1 generally provides slightly better performance,
especially for wind and temperature predictions, highlighting the
effectiveness of the modifications implemented in these models.

Table 1: Comparative Mean Squared Error (MSE) Results for Rain, Wind,
and Temperature Predictions Using Existing Methods, Model 1, and Model
2

              MSE (Existing Method)   MSE (Model 1)   MSE (Model 2)
Rain          0.352                   0.2742          0.2716
Wind          1.2456                  0.9785          0.9812
Temperature   1.02                    0.8076          0.8098

Conclusion
This study evaluates the efficacy of Long Short-Term Memory (LSTM)
networks and Feedforward Neural Networks (FNN) for time series
forecasting. The comparative analysis revealed that while both models
exhibit robust performance, there are notable differences in their
predictive accuracy and efficiency. The LSTM model demonstrated
superior performance in capturing temporal dependencies and trends
within the data, as evidenced by its lower mean squared error (MSE)
compared to the FNN. This can be attributed to LSTM's ability to retain
information over long sequences, which is particularly advantageous for
time series data. On the other hand, the FNN model, with its simpler
architecture, provided competitive results but lacked the temporal context
sensitivity that LSTM models offer.

These findings underscore the importance of selecting the appropriate
model based on the specific characteristics of the data and the forecasting
requirements. For applications where capturing temporal dynamics is
crucial, LSTM networks are preferred due to their enhanced capability to
model sequential dependencies. Conversely, for less complex time series
data, FNNs can offer a viable and computationally efficient alternative.
Future work could explore hybrid approaches or additional model
variations to further improve forecasting accuracy and model adaptability.

Pseudo Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers.legacy import Adam

# EPOCHS, LEARNING_RATE and BATCH_SIZE fall back to defaults when not
# supplied via command-line arguments (args); train_dataset,
# validation_dataset, callbacks, classes and train_sample_count are
# provided by the surrounding training harness.
EPOCHS = args.epochs or 100
LEARNING_RATE = args.learning_rate or 0.005
# If True, non-deterministic functions (e.g. shuffling batches) are not used.
# This is False by default.
ENSURE_DETERMINISM = args.ensure_determinism
# This controls the batch size; alternatively, manipulate the
# tf.data.Dataset objects yourself.
BATCH_SIZE = args.batch_size or 32

if not ENSURE_DETERMINISM:
    train_dataset = train_dataset.shuffle(buffer_size=BATCH_SIZE * 4)
train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False)

# Model architecture: two hidden dense layers with ReLU activation and a
# linear output layer (classes equals 1 for this regression task).
model = Sequential()
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(classes, name='y_pred'))

# This controls the learning rate.
opt = Adam(learning_rate=LEARNING_RATE, beta_1=0.9, beta_2=0.999)
callbacks.append(BatchLoggerCallback(BATCH_SIZE, train_sample_count,
                                     epochs=EPOCHS,
                                     ensure_determinism=ENSURE_DETERMINISM))

# Train the neural network.
model.compile(loss='mean_squared_error', optimizer=opt, metrics=None)
model.fit(train_dataset, epochs=EPOCHS,
          validation_data=validation_dataset, verbose=2, callbacks=callbacks)

# Use this flag to disable per-channel quantization for a model.
# This can reduce RAM usage for convolutional models, but may have
# an impact on accuracy.
disable_per_channel_quantization = False

Reference
1. Butun, I., Österberg, P., & Song, H. (2019). Security of the Internet of
Things: Vulnerabilities, attacks, and countermeasures. IEEE
Communications Surveys & Tutorials, 22(1), 616-644.
2. Sonar, K., & Upadhyay, H. (2014). A survey: DDOS attack on Internet
of Things. International Journal of Engineering Research and
Development, 10(11), 58-63.
3. Dhanaraju, M., Chenniappan, P., Ramalingam, K., Pazhanivelan, S., &
Kaliaperumal, R. (2022). Smart farming: Internet of Things (IoT)-
based sustainable agriculture. Agriculture, 12(10), 1745.
4. Jayaraman, P. P., Yavari, A., Georgakopoulos, D., Morshed, A., &
Zaslavsky, A. (2016). Internet of Things platform for smart farming:
Experiences and lessons learned. Sensors, 16(11), 1884.
5. Sontowski, S., Gupta, M., Chukkapalli, S. S. L., Abdelsalam, M., Mittal, S., Joshi, A.,
& Sandhu, R. (2020, December). Cyber attacks on smart farming infrastructure. In
2020 IEEE 6th International Conference on Collaboration and Internet Computing
(CIC) (pp. 135-143). IEEE.
6. Yazdinejad, A., Zolfaghari, B., Azmoodeh, A., Dehghantanha, A.,
Karimipour, H., Fraser, E., ... & Duncan, E. (2021). A review on
security of smart farming and precision agriculture: Security
aspects, attacks, threats and countermeasures. Applied Sciences,
11(16), 7518.
7. Mohammad, Z., Qattam, T. A., & Saleh, K. (2019, April). Security
weaknesses and attacks on the internet of things applications. In
2019 IEEE Jordan International Joint Conference on Electrical
Engineering and Information Technology (JEEIT) (pp. 431-436). IEEE.
8. Sood, K., Nosouhi, M. R., Kumar, N., Gaddam, A., Feng, B., & Yu, S.
(2021). Accurate detection of IoT sensor behaviors in legitimate,
faulty and compromised scenarios. IEEE Transactions on
Dependable and Secure Computing, 20(1), 288-300.
9. Chen, Y. (2013). New approaches for calculating Moran's index of
spatial autocorrelation. PLoS ONE, 8(7), e68336.
10. Karim, F., Majumdar, S., Darabi, H., & Chen, S. (2017). LSTM
fully convolutional networks for time series classification. IEEE
Access, 6, 1662-1669.
11. Xu, L. D., He, W., & Li, S. (2014). Internet of Things in industries: A
survey. IEEE Transactions on Industrial Informatics, 10(4), 2233-2243.
12. Liu, Y., Ma, X., Shu, L., Hancke, G. P., & Abu-Mahfouz, A. M. (2021).
From Industry 4.0 to Agriculture 4.0: Current status, enabling
technologies, and research challenges. IEEE Transactions on
Industrial Informatics, 17(6), 4322-4334.
13. Brewster, C., Roussaki, I., Kalatzis, N., Doolin, K., & Ellis, K. (2017).
IoT in agriculture: Designing a Europe-wide large-scale pilot. IEEE
Communications Magazine, 55(9), 26-33.
14. Roy, A., Das, P., & Das, R. (2017). Temperature and humidity
monitoring system for storage rooms of industries. In Proceedings of
the International Conference on Computing and Communication
Technologies for Smart Nation (pp. 99-103).
