
e-ISSN: 2582-5208

International Research Journal of Modernization in Engineering Technology and Science


( Peer-Reviewed, Open Access, Fully Refereed International Journal )
Volume:03/Issue:07/July-2021 Impact Factor- 5.354 www.irjmets.com

ANIMAL DETECTION IN FARM AREA


Nagashree K*1, Devadiga Varshini Vasantha*2, Deekshitha*3, Mehnaz*4,
Aishwarya D Shetty*5
*1,2,3,4Student, Dept Of ISE, Yenepoya Institute Of Technology Moodbidri, Karnataka, India.
*5Assistant Professor, Dept Of ISE, Yenepoya Institute Of Technology Moodbidri, Karnataka, India.
ABSTRACT
Animal attacks in farm areas are a major threat that reduces crop yield; the main reason for this is the expansion of cultivated land. Human-wildlife conflict through crop raiding is common these days. Farmers in India face huge losses through natural calamities, animal attacks, etc. The age-old methods practiced by farmers are not efficient, and it is practically impossible to appoint guards to monitor the farm area. The main aim of this project is to help farmers save their crops without harming the animals: crops are protected from animal attack by playing an appropriate sound to keep the animal away, without killing or injuring it. To reach this goal, we use a machine learning technique, a convolutional neural network, to detect animals entering the farm area. In this project, the entire farm area is monitored at regular intervals of time through a camera, which records the entire surroundings of the farm. A machine learning model is designed to detect an animal entering the farm and play the appropriate sound to shoo the animal away, so that the crops are protected from damage. Different packages and concepts of the convolutional neural network are used to design the model and achieve the desired aim of the project.
Keywords: Convolutional Neural Network, Machine Learning, Training and Validation, Prediction, Play Sound.
I. INTRODUCTION
In agriculture, one of the main social issues existing at present is the damaging of crops by wild animals. Wild animal intrusion has always been a persistent problem for the agriculturalist. Some of the animals that act as a threat to crops are monkeys, elephants, cows and others. These animals may feed on the crops, and they also run around the field in the absence of the farmer, causing damage to the crops. This may result in a significant loss of yield and place an additional financial burden on the farmer in order to deal with the aftermath of the damage.
Every farmer, while protecting his produce, should also be aware that animals live in the same place and need to be secured from any probable suffering. This problem needs to be attended to immediately, and an effective solution must be created and implemented. Thus, this project aims to address this problem faced by the farmer. Animal detection is one application of the deep learning technique called the Convolutional Neural Network [1]. The rapid growth of the human population and continuous economic development are causing over-exploitation of mineral deposits, producing fast, novel and remarkable changes to ecosystems. A large amount of the land surface has been converted by human action, causing changes in wildlife populations, habitats and behavior. A more serious result is that many wild animals on Earth have been driven to extinction, and many species have entered new areas where they can disturb both natural and human systems. Therefore, observing wild animals is essential, as it provides evidence that informs researchers' conservation and management decisions for maintaining diverse, balanced and sustainable ecosystems [2].
II. METHODOLOGY
In these situations, the main aim is to drive away the wild animals automatically without causing loss of human or animal life. Manual methods require manpower, and hardware-based detection systems require maintenance every now and then; the proposed model overcomes these difficulties, as it involves only the maintenance of software code.
2.1 Data Flow Diagram

e-ISSN: 2582-5208
International Research Journal of Modernization in Engineering Technology and Science
( Peer-Reviewed, Open Access, Fully Refereed International Journal )
Volume:03/Issue:07/July-2021 Impact Factor- 5.354 www.irjmets.com
A data-flow diagram (DFD) is used in an information system to represent the "flow" of data graphically.
The flow diagram describes:
Start: The initial stage of the overall process.
Run the code: Implementing and running the code.
Get input from camera: Taking input from the camera for the overall process.
Wild animal detected: Checking whether a wild animal from the dataset appears in the input.
Produce the appropriate sound through the speaker: Producing the sound if a wild animal is detected.
Continue to run till detected: Continuing the process if no wild animal is detected.
Stop: Stopping the overall process.
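The steps above can be sketched as a plain Python loop. This is a hypothetical sketch, not the project's actual code: the frame source, the detector and the speaker callback (`frames`, `detect_animal`, `play_sound`) are illustrative names injected as arguments, so the control flow can be shown without camera or speaker hardware.

```python
def monitor_farm(frames, detect_animal, play_sound):
    """Run the flow-diagram loop: read frames, detect, alert.

    `frames` is any iterable of camera frames; `detect_animal` returns
    the detected animal's name or None; `play_sound` triggers the speaker.
    """
    for frame in frames:                 # "Get input from camera"
        animal = detect_animal(frame)    # "Wild animal detected?"
        if animal is not None:
            play_sound(animal)           # "Produce the appropriate sound"
        # otherwise: "Continue to run till detected"
    # the loop ends when the frame source stops ("Stop")
```

In a real deployment, `frames` would wrap an OpenCV capture loop and `detect_animal` would call the trained CNN, but the loop structure stays the same.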

Figure 2.1. Data Flow Diagram


2.2 Convolutional Neural Networks

Figure 2.2. The structure of VGG-6


The Convolutional Neural Network (CNN) has been one of the greatest advancements in the field of artificial intelligence in recent years. It has achieved state-of-the-art results in computer vision, speech detection, natural language processing, intelligent agents, and several other areas. CNNs have proven very valuable at extracting a large number of features from several types of datasets and actually learning from them. The CNN came as an improvement over the limitations and shortcomings of the simple Neural Network (NN), or Multilayer Perceptron (MLP). The MLP only accepts vectors and is made up of fully connected layers; this requires a lot of parameters (computationally expensive) and cannot capture spatial features in a complex image. A CNN, on the other hand, can accept matrices of several dimensions, and uses sparsely connected layers with feature maps and pooling to extract features and reduce the number of parameters necessary to learn. The baseline CNN model follows the general architectural principles of the VGG [3] models. This involves stacking convolutional layers with small 3×3 filters, each followed by a max-pooling layer. These layers form blocks; the number of filters in each block is increased, and the blocks are repeated along the depth of the network, with 32, 64, 128 and 128 filters for the first four blocks of the model. Padding is used in the convolutional layers to ensure the height and width of the output feature maps match the input. The rectified linear activation function is used in each layer. The model is fit with the RMSprop optimizer, which is similar to the gradient descent algorithm.
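The baseline architecture described above can be sketched in Keras roughly as follows. This is a minimal sketch based on the description (four 3×3 convolutional blocks with 32, 64, 128 and 128 filters, 'same' padding, ReLU, max pooling, RMSprop); the input shape, the Dense head width and the number of classes are assumptions, not values given in the paper.

```python
from tensorflow.keras import layers, models, optimizers

def build_baseline(input_shape=(200, 200, 3), num_classes=5):
    """VGG-style baseline: four 3x3 conv blocks (32, 64, 128, 128 filters),
    each followed by max pooling, then a small classifier head."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 128):
        # 'same' padding keeps feature-map height/width equal to the input
        model.add(layers.Conv2D(filters, (3, 3),
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))  # head width assumed
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=optimizers.RMSprop(),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```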
2.3 Dataset and Features
The dataset collected for this project is from Kaggle [4], which provides 25,000 labeled photos. We have taken images of sheep, elephants, cows, squirrels and horses. Of all the photos, 20% are used for validation and testing, and the remaining 80% for training. In another folder, Kaggle provides unlabeled photos for testing. Additionally, the view varies from the face alone to the entire body; in some images the animal is partially obstructed from view, and other images contain more than one animal of the same kind.
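The 80/20 split described above can be illustrated with a small helper. This sketch only shows the split arithmetic on a list of file names; the function name and the fixed seed are illustrative, not taken from the paper.

```python
import random

def split_dataset(filenames, train_fraction=0.8, seed=42):
    """Shuffle file names and split them into train/validation lists.

    A fixed seed makes the split reproducible across runs.
    """
    files = list(filenames)
    random.Random(seed).shuffle(files)
    cut = int(len(files) * train_fraction)
    return files[:cut], files[cut:]
```

With the 25,000 Kaggle photos this yields 20,000 training images and 5,000 for validation and testing.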
III. SYSTEM REQUIREMENTS
A system requirement specification, also known as a software requirement specification, refers to the document or set of documentation that describes the characteristics and features of a software application. It includes a variety of elements that attempt to define the functionality required by the customer to satisfy the application's different users.
3.1 Functional Requirements
A functional requirement defines a function of a system or of a system's component. The function specifies the behavior between inputs and outputs.
3.2 Non-Functional Requirements
Non-functional requirements are attributes or characteristics of the system that can be used to judge its operation; they define system attributes rather than specific behaviors.
3.3 Hardware Requirements
The hardware requirements specify the hardware devices used in the system and determine the complete set of functional, interface, operational, quality and test requirements for the system.
• System requires 8 GB of RAM or above
• System requires 20 GB of hard disk space or above
• System requires a graphics card
3.4 Software Requirements
• PyCharm: PyCharm is an extremely popular Python IDE. An IDE, or Integrated Development Environment, features a code editor together with tools for writing, running and debugging programs in one or more programming languages.
• TensorFlow 2.0: For developers and researchers who want to push the state of the art in machine learning and build scalable ML-powered applications, TensorFlow 2.0 provides a complete ecosystem of tools.
• Keras: Keras can be used to design deep models for smartphones (iOS and Android), for the JVM, or for the web. It also permits distributed training of deep learning models on clusters of graphics processing units (GPUs) and tensor processing units (TPUs).

• CUDA 10: CUDA (Compute Unified Device Architecture) is a computing platform and application programming interface (API). It lets software developers and engineers use a CUDA-enabled GPU (graphics processing unit) for general-purpose processing, an approach referred to as GPGPU (General-Purpose computing on Graphics Processing Units).
• OpenCV: OpenCV stands for Open Source Computer Vision. It is a machine learning software library originally developed by Intel, and it provides a common platform for applications related to computer vision and its associated fields.
• Flask: Flask is a web framework. It provides the libraries, technologies and tools required for building a web application, which may take any form: a blog, a web page, a commercial website, a web-based calendar, etc.
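As a hedged sketch of how Flask could expose the classifier as a web service: the route name, the form field name and the `classify_image` stub below are all hypothetical, and a real implementation would call the trained CNN instead of the stub.

```python
from flask import Flask, request

app = Flask(__name__)

def classify_image(file_storage):
    # Placeholder for the CNN prediction step; a real implementation
    # would decode the uploaded image and call model.predict().
    return "elephant"

@app.route("/classify", methods=["POST"])
def classify():
    image = request.files["image"]          # uploaded image file
    return {"animal": classify_image(image)}  # Flask serializes dicts to JSON
```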
IV. ACCURACY AND LOSS
Overfitting is observed in the initial VGG-6 model. Hence, the baseline model is modified with Dropout and data augmentation. The training accuracy is above 80% and the validation accuracy is around 86%, with a loss lower than 0.4 (red curves in Fig. 4.1). We could keep tuning the network, lowering the Dropout rate a little and training longer, to improve the training accuracy.
The VGG-16 base model with data augmentation (black curves in Fig. 4.1) significantly improves the training performance. Reviewing the learning curves shows that the model fits the dataset quickly within the first 20 epochs. The overall training accuracy is around 90% and the validation accuracy is 91%. To further improve the accuracy, we can fine-tune the weights of some layers in the feature-detector part of the model. In this project, we unfreeze from the layer 'block5_conv1' along with our Dense layers, resulting in more than 95% validation accuracy (blue curves in Fig. 4.1). Applying fine-tuning makes it possible to use pre-trained networks to identify classes they were not originally trained on, and it reduces the loss to 0.1.
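Unfreezing from 'block5_conv1' can be expressed as a small helper that walks the model's layer list in order. The function name is illustrative; applying it to a loaded Keras VGG16 base (e.g. `unfreeze_from(conv_base, "block5_conv1")`) would match the fine-tuning step described above, with the Dense head kept trainable separately.

```python
def unfreeze_from(model, start_layer_name):
    """Freeze every layer before `start_layer_name`; unfreeze it and all
    layers after it, so only the later feature-detector layers are tuned."""
    trainable = False
    for layer in model.layers:
        if layer.name == start_layer_name:
            trainable = True
        layer.trainable = trainable
    return model

# Hypothetical usage with a pre-trained base (not run here):
# from tensorflow.keras.applications import VGG16
# conv_base = VGG16(weights="imagenet", include_top=False)
# unfreeze_from(conv_base, "block5_conv1")
```

Note that the model must be recompiled after changing `trainable` flags for the change to take effect in training.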

Figure 4.1. Accuracy of training and validation

Figure 4.2. Loss of training and validation


V. RESULTS


Figure 5.1. Home Page


Figure 5.1 shows the home page. Using this page, an existing user can log in, and a new user can register.

Figure 5.2. Login Page


Fig. 5.2 shows the login page. To log in, the user has to provide a username and password. A new user can register with the system using "Register Here".

Figure 5.3. Sign up Page


Fig. 5.3 shows the registration page for new users. To register with the system, the user has to provide some basic details such as name, mobile number and email, and can also set a password for the login credentials.


Figure 5.4. Services Page


Fig. 5.4 shows the services page. After a successful login, the user can use the services: the user has to start the CNN and click to start the process.

Figure 5.5. Animal Classification Page1

Figure 5.6. Animal Classification Page2


Figs. 5.5 and 5.6 show the animal classification pages. By clicking "Upload an image" the user uploads an image, and by clicking "Classify image" the system classifies the image into the appropriate class and displays the name of the animal.

VI. CONCLUSION
The project addresses crop damage caused by wild animal attacks, which has been among the main hazards in recent years. The major concern of the project is that the farmer should be able to save the crops from damage while also ensuring that animals are not harmed or killed. This issue is among the major concerns, and it is essential to find an appropriate solution. The project carries great social relevance: farmers can protect their yields and avoid huge financial losses, since by using suitable algorithms and methods we detect the animals and produce sound to drive them away.
VII. REFERENCES
[1] Sabeenian, R. S., Deivanai, N. & Mythili, B. 'Wild Animals Intrusion Detection using Deep Learning Techniques'. Received: 12.04.20; Revised: 24.05.20; Accepted: 04.06.20.
[2] Santhiya, S., Dhamodharan, Y., Kavi Priya, N. E., Santhosh, C. S. & Surekha, M. 'A Smart Farmland Using Raspberry Pi Crop Prevention and Animal Intrusion Detection System'. International Research Journal of Engineering and Technology (IRJET), 2018; 05(03).
[3] Simonyan, K. & Zisserman, A. (2014) 'Very Deep Convolutional Networks for Large-Scale Image Recognition'. https://arxiv.org/abs/1409.1556
[4] Duhart, C., Dublon, G., Mayton, B. & Paradiso, J. 'Deep Learning Locally Trained Wildlife Sensing in Real Acoustic Wetland Environment'. In Thampi, S. M., Marques, O., Krishnan, S., Ciuonzo, D. & Kolekar, M. H. (eds.), Advances in Signal Processing and Intelligent Recognition Systems, 2019: 3–14.
