
Indonesian Journal of Electrical Engineering and Computer Science

Vol. 30, No. 1, April 2023, pp. 341~349


ISSN: 2502-4752, DOI: 10.11591/ijeecs.v30.i1.pp341-349

Weed detection by using image processing

Vijaykumar Bidve1, Sulakshana Mane2, Pradip Tamkhade3, Ganesh Pakle4


1 Department of Information Technology, Marathwada Mitra Mandal’s College of Engineering, Pune, India
2 Department of Computer Engineering, Bharati Vidyapeeth College of Engineering, Navi Mumbai, India
3 Department of Mechanical Engineering, Marathwada Mitra Mandal’s College of Engineering, Pune, India
4 Department of Information Technology, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India

Article Info

Article history:
Received Jul 9, 2022
Revised Oct 31, 2022
Accepted Nov 23, 2022

Keywords:
Convolutional neural network
Image processing
Region of interest
Shape features
Weed classification

ABSTRACT

In agricultural regions, the procedure of weed removal is crucial. Weed removal in the classic way takes longer and requires greater physical effort. The idea is to eliminate weeds from agricultural fields automatically. The proposed study uses a deep learning algorithm to detect weeds growing between crops. Deep learning is used to analyse the main properties of agricultural photographs, and weeds and crops have been identified using the dataset. A convolutional neural network (CNN) uses fully connected layers with rectified linear units (ReLU) to differentiate weed and crop. It extracts the features of the crop using deep learning and uses the features of the processed image to extract the region of interest (ROI). The deep learning network features are used to identify the crop. In total, 1,280 images are used for testing the system, and 10 images are used to find the confidence score.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Vijaykumar Bidve
Department of Information Technology, Marathwada Mitra Mandal’s College of Engineering
Pune, Maharashtra, India
Email: [email protected]

1. INTRODUCTION
The need for food in the world is increasing day by day, while water, land, and labour for farming are in short supply, so there is a need to improve the agricultural outcomes of farmers. Appropriate herbicides are required to remove weeds and increase crop production. Crops and weeds must be differentiated so that weeds can be removed with less effort, and this requires correct identification of the weed. The herbicides should not affect the crop in the field. A convolutional neural network (CNN) is a technique used to identify weeds correctly. Images are captured using a camera, and weeds are detected with the help of deep learning techniques; after detection, the weeds are removed from the crop using herbicides [1]–[7].
The methods proposed in the literature for the detection of weeds are either specific to a particular crop, or the algorithms stated are not as efficient as the proposed work [7]–[15]. The use of drones and robots is also included in a few of the papers in the literature [16]–[24]; robots and drones increase the cost overhead of the project. The proposed system is affordable and efficient compared to the existing systems in the literature. Factors such as higher cost, less efficient algorithms, and specificity to a particular crop are the motivation behind the proposed system. The proposed system uses a CNN algorithm to distinguish weeds from crops, and the main features of the weed images are identified using deep learning. The subsequent sections of this paper describe the literature survey and methodology, followed by result analysis [25]–[31].
Yu et al. [1] stated that spot application of herbicide can notably reduce herbicide input and the cost of weed management in turf. Spot spraying relies on machine-vision-based detectors for autonomous weed control, so the approach depends entirely on correct identification of the weed. Osorio et al. [2] stated that weed management is an important concern when working with weed images; their paper describes strategies for weed estimation based on deep learning image processing, validated through visual tests by experts. Tiwari et al. [3] stated that weeds present within the harvest cause a decrease in crop production because the weeds soak up the nutrients meant for the crop, so a way needs to be found to detect the weed and spray herbicide to remove it from the crop.
Umamaheswari et al. [4] stated that people are increasingly aware of the economic and ecological problems of the pesticides used against weeds, while there is a continuously growing demand for food to be met by agribusiness producers. To reduce the ecological problems and address food security, IoT-based precision farming is used; precision agriculture reduces capital expenditure and increases the quality of the product and its use. Badhan et al. [5] proposed a real-time AI-based weed detection framework that uses a stereo-vision setup and 3-D crop reconstruction; a motion-based technique is used to create the 3-D points, and the AI model for cucumber and onion crops is trained with the help of a suitable dataset. Sarvini et al. [6] described weed identification as essential because weeds are dangerous for crops; the ordinary techniques of weed identification are time consuming, ineffective, and require a lot of effort, whereas AI-based techniques are more suitable and accurate for weed detection.
Jin et al. [7] proposed another approach that combines deep learning and image processing. A trained CenterNet model was applied to detect the vegetables and draw bounding boxes around them; thereafter, the remaining green objects falling outside the bounding boxes were considered weeds. The model focuses on recognising only the vegetables and therefore does not attempt to address individual weed species.
Asad and Bais [8] described the increasing use of chemicals in farming to increase production under different weather and environmental conditions. To avoid detrimental effects, precision in the agricultural process is required, and analysis using advanced computer-vision techniques such as deep learning calls for large labelled agricultural datasets. Liu and Bruch [9] described weed detection frameworks as a major solution to one of the current agricultural problems, namely unmechanised weed control; weed detection also offers a way of reducing or eliminating herbicide use, alleviating the ecological and health impact of agriculture and further increasing sustainability. Le et al. [10] described the FT_BRC photo dataset (distributed online with more than three thousand weed images), collected with a digital camera mounted on a compact trolley under practical field conditions on a farm. The effects of weeds are damaging for the crop, so weed detection is required.
In summary of the existing work, the literature either uses several deep convolutional neural networks, drone technology, IoT-based architectures, colour images, or machine learning algorithms, restricts weed detection to the soybean crop, labels pixels manually, or performs autonomous weed control using robots. All the techniques suggested in the literature are limited, carry an increased cost overhead, or are not efficient; a complete, efficient, and automated solution for weed detection is not provided by any of the authors. The summary of the literature survey is given in Table 1 in the Appendix.
Weed is harmful to the crop because it absorbs the required nutrients from the soil. Visual inspection is the basic method of manual weed detection, but the manual process needs more labour and time. Excessive use of herbicides is dangerous for the environment, health, and the crop. The solution is to use image processing and deep learning techniques to distinguish weeds from crops correctly. The proposed system describes a method of weed identification and classification.

2. METHOD
2.1. Image acquisition
The proposed methodology is developed using a dataset of weed images. The method detects weeds through a sequence of tasks: image capture, edge identification, and image type identification. Weed images are captured using a high-quality camera and compared with the images stored in the dataset. New images are continuously added to the dataset, and the accuracy of the proposed system increases as the number of records in the dataset grows.

2.2. Image pre-processing


The image pre-processing module performs basic operations to obtain an image suitable for further processing. To obtain an accurate and clear image, the algorithm performs operations such as grayscale conversion, sharpening, filtering, edge detection, smoothing, and image segmentation. The quality of the image is improved in the pre-processing phase by enhancing image features and reducing noise. A grayscale photo consists of different shades of gray, and the value of each pixel of the grayscale image is measured.
The sharpness and smoothness of the image are improved through several pre-processing tasks: the images are sharpened with the help of filters, and noise is minimised with the help of smoothing. Multiple algorithms are used to enhance the quality and sharpness of the image during pre-processing, which is a fundamental and essential step in image identification.
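
The sketch below illustrates these pre-processing operations with OpenCV in Python. The paper does not specify the exact filters or parameters, so the file name field_image.jpg, the Gaussian kernel size, the unsharp-masking weights, and the Canny thresholds are assumptions made only for illustration.

```python
# Illustrative pre-processing sketch (not the authors' exact pipeline).
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Grayscale conversion, smoothing, sharpening, and edge identification."""
    image = cv2.imread(path)                          # load the captured BGR image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # grayscale conversion
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)      # smoothing (noise reduction)
    # Unsharp masking: emphasise detail by subtracting the blurred image
    sharpened = cv2.addWeighted(gray, 1.5, smoothed, -0.5, 0)
    edges = cv2.Canny(sharpened, 50, 150)             # edge identification
    return edges

edges = preprocess("field_image.jpg")                 # hypothetical input file
```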

2.3. Image segmentation


Image recognition involves a set of operations and has a diversified set of applications, including number plate recognition and CCTV surveillance. In the image segmentation process, the actual image is translated into a binary-valued image using a maximum-value method. The digital image is divided into several parts depending on the values of its components; pixels with similar values are grouped together, and these values are used to identify different portions of the image. In short, this image characterisation process extracts the key features of the object for further analysis.
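
A minimal segmentation sketch is given below, assuming that the maximum-value binarisation described above can be approximated with Otsu thresholding in OpenCV and that pixels with similar values are grouped using connected components; this is an illustration rather than the authors' exact implementation.

```python
# Binarise the grayscale image and group similar pixels into segments.
import cv2

gray = cv2.imread("field_image.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file
# Otsu picks the threshold automatically; pixels split into two groups
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Connected components club neighbouring pixels with the same binary value
num_labels, labels = cv2.connectedComponents(binary)
print(f"Segments found: {num_labels - 1}")                   # label 0 is background
```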

2.4. Feature extraction


In this module the system performs further operations on the segmented picture. The module deals with feature extraction, extracting the overall information of the weed image; image processing and machine learning then help to classify the weed images. The important features of the weed images are extracted and used for classification based on their range of values. The main point in image classification is to identify the most discriminative features, and feature extraction is also key to handling the image for operations such as dimensionality reduction.
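
The paper does not enumerate the extracted features, so the sketch below only illustrates the idea with common shape features (area, perimeter, and aspect ratio) computed from the binary mask of the previous step; the choice of features is an assumption.

```python
# Illustrative shape-feature extraction from the segmented (binary) image.
import cv2

gray = cv2.imread("field_image.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

features = []
for contour in contours:
    area = cv2.contourArea(contour)                  # region size
    perimeter = cv2.arcLength(contour, True)         # boundary length
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = w / h if h else 0.0               # elongation of the region
    features.append((area, perimeter, aspect_ratio))
```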

2.5. CNN algorithm


The weed detection is done using the CNN algorithm. The layers of the CNN (input, processing, and output) perform the weed image identification, and identification in turn leads to image classification. Massive progress in closing the gap between human and machine capabilities has been achieved using machine learning and artificial intelligence, and CNN results are remarkable in the area of image processing and classification. Machine vision is one of several such disciplines.
The aim of this area is to allow machines to see and understand the world in the same manner that people do, and to apply that understanding to a number of tasks, including image recognition. Feature extraction, creation of the feature vector, and classification with training and testing are the major components of the CNN algorithm. The image captured by the high-quality camera is the input for the CNN; weights are assigned to the various factors of the image with the help of the extracted features, and images are distinguished depending on the feature-vector values.
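
A hedged sketch of a CNN of the kind described here, convolutional layers followed by fully connected layers with ReLU and a single weed/crop output, is shown below. The layer sizes, the 128x128 input resolution, and the optimizer are placeholders, not the authors' actual architecture.

```python
# Small CNN: convolution + ReLU feature extraction, fully connected head.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_weed_classifier(input_shape=(128, 128, 3)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),                 # normalise pixel values
        layers.Conv2D(32, 3, activation="relu"),     # convolution + ReLU
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),        # fully connected layer with ReLU
        layers.Dense(1, activation="sigmoid"),       # weed vs. crop probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_weed_classifier()
model.summary()
```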

2.6. Classification
In the classification section, a deep learning algorithm is used for the actual classification on the basis of the image features. Weed identification is based on the characteristics of the weed, i.e., a set of feature values, and these values decide the type of weed. The feature vector is used to identify the weed with the CNN algorithm, whose training and testing phases perform the actual classification. The classification and identification of the weed is the end result of this work. Figure 1 shows the block diagram of the proposed system.
As shown in Figure 1, the image is captured using a camera, the image is pre-processed, features are extracted, and classification is done; classification leads to the identification of the image. Training of the model is performed in three stages: input image, image pre-processing, and feature extraction. The pre-processing phase makes the image clearer, and all borders and pixel values extracted in this phase are used for training the module.
The feature extraction phase is mainly used to identify the key features of the images, and on the basis of these key features the training and testing of the images are enriched. The training phase stores the data of all such weed images for further identification of weed images. More data records in the dataset increase the accuracy of the training set, which is used as the input for the classification of weed images. The proposed module, shown in Figure 1, gives an accurate platform for weed detection, and the end result of this module is identification of the weed.
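
To illustrate the training, testing, and confidence-scoring steps, the following sketch builds on the build_weed_classifier sketch from section 2.5. The directory layout weed_dataset/train and weed_dataset/test, the number of epochs, and the file sample.jpg are hypothetical.

```python
# Training the classifier and scoring the confidence of a single image.
import tensorflow as tf

model = build_weed_classifier()   # model sketch from section 2.5

train_ds = tf.keras.utils.image_dataset_from_directory(
    "weed_dataset/train", image_size=(128, 128), batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "weed_dataset/test", image_size=(128, 128), batch_size=32, label_mode="binary")

model.fit(train_ds, validation_data=test_ds, epochs=10)

# Confidence score for one image, reported as a percentage as in Table 2
img = tf.keras.utils.load_img("sample.jpg", target_size=(128, 128))
batch = tf.expand_dims(tf.keras.utils.img_to_array(img), 0)
prob = float(model.predict(batch)[0][0])         # probability of the weed class
confidence = max(prob, 1.0 - prob) * 100         # confidence in the predicted class
print(f"weed probability: {prob:.2f}, confidence: {confidence:.2f}%")
```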


Figure 1. Block diagram of system

3. RESULTS AND DISCUSSION


This section discusses the results obtained from the proposed system. Figure 2 shows the user interface of the working weed detection software and is divided into six components, described in the following paragraphs. Figure 2(a) shows the home page of the weed detection system; the proposed system uses the CNN algorithm for weed detection. Figure 2(b) shows the registration page, where the user enters a username, password, and password confirmation to register with the system. Figure 2(c) shows the login page, where the user enters the username and password to log in. Figure 2(d) shows the image upload page, where the user chooses the input image in which a weed is to be detected. Figure 2(e) shows one result of weed detection: the system takes the image as input and, using the CNN classifier and the dataset, decides whether the given image contains a weed; in Figure 2(e) the resulting image is not a weed. Figure 2(f) shows another result of weed detection, in which the CNN classifier accurately detects the weed in the image.
As shown in Figure 3, a total of 1,280 images are used for testing the system, and the system detected a weed successfully in 1,260 of those images, which means the accuracy of the system is more than 98%. The number in the green box shows the number of correctly identified weed images, whereas the faint red box shows the number of images not identified. Figure 4 shows the performance of the proposed system. Figure 4(a) compares the accuracy of an existing system and the proposed system; the existing systems use weed detection techniques based on image processing only, without the CNN algorithm, and the proposed system has better accuracy. The confusion matrices for the Class 1 and Class 2 training modules are shown in Figure 3. For Class 1 there are 909 input photos, and the system achieved an accuracy of 98.46% and a precision of 0.99 while training the classifier with the supplied input database; because the classifier failed to classify 8 of the 909 photos as Class 1, recall is reduced to 0.99 and the F1 score to 0.99. For Class 2 there are 391 input photos, and the system achieved an accuracy of 98.46% and a precision of 0.97; as the classifier failed to detect 12 of the 391 photos as Class 2, recall is reduced to 0.98 and the F1 score to 0.97. Looking at these performance parameters, it can be concluded that the system performs well, with 98.46% accuracy.
Figure 4(b) shows the performance of the system in terms of accuracy, precision, recall, and F1 score, and Figure 4(a) shows that the overall performance of the system is 97% or above for both classes. Figure 4(b) refers to the Class 1 and Class 2 images mentioned in Figure 3; the accuracy, precision, recall, and F1 score are calculated by the system for Class 1 and Class 2, respectively, as shown in Figure 4(b). Table 2 gives the confidence score of each image; the score shows the accuracy of weed detection for that particular image.
In Figure 5, the confidence scores of the images are shown as a graph plotted for ten images. These ten random images were not used for training, and the system successfully identified the weed in them. The results show the ability of the system to capture low-level features and thereby improve the detection of small objects; the average confidence score of the system is nearly 98%.
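
The per-class figures quoted above can be reconstructed approximately from the reported counts (909 Class 1 images with 8 misses and 391 Class 2 images with 12 misses). The confusion matrix in the sketch below is an assumption based on those counts, not the matrix published in Figure 3.

```python
# Recompute accuracy, precision, recall, and F1 from an assumed confusion matrix.
import numpy as np

# rows = true class, columns = predicted class
cm = np.array([[901, 8],     # Class 1: 909 images, 8 misclassified
               [12, 379]])   # Class 2: 391 images, 12 misclassified

accuracy = np.trace(cm) / cm.sum()
for i, name in enumerate(["Class 1", "Class 2"]):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()
    recall = tp / cm[i, :].sum()
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
print(f"accuracy={accuracy:.4f}")   # ~0.9846, matching the reported 98.46%
```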


Figure 2. System user interface: (a) home page, (b) registration page, (c) login page, (d) input image, (e) result weed not detected, and (f) result weed detected


Figure 3. Dataset for testing the system

Figure 4. Performance of proposed system: (a) accuracy comparison and (b) system performance

Table 2. Confidence score of weed images


Weed image    Confidence score (%)
Image 1 98.67
Image 2 99.46
Image 3 97.54
Image 4 99.28
Image 5 94.49
Image 6 97.54
Image 7 99.67
Image 8 96.24
Image 9 98.15
Image 10 97.48


Figure 5. Confidence score of images

4. CONCLUSION
Weed removal plays a vital role in improving the production of farmers, and it requires distinguishing weed from crop. The proposed work uses a CNN to extract the key features of weed images; image processing and feature extraction using the CNN form the basis of the proposed weed identification. A deep learning approach is used to process the captured image and assign values to key attributes according to the extracted features, and on the basis of these attribute values the weed images are distinguished and identified. Because the proposed system uses a CNN-based approach for image characterisation, the accuracy of the system is higher. Future work includes automation of the process of weed removal from the crop.

APPENDIX
Table 1. Summary of literature survey
1. "Weed detection in perennial ryegrass with deep learning convolutional neural network", Yu et al., 2019: CNN techniques are used for the detection of a certain type of weed.
2. "A deep learning approach for weed detection in lettuce crops using multispectral", Osorio et al., 2020: three methods based on deep learning are suggested for weed identification.
3. "An experimental set up for utilizing convolutional neural network in automated weed detection", Tiwari et al., 2019: drone technology and deep learning are used in association with a CNN.
4. "Weed detection in farm crops using parallel image processing", Umamaheswari et al., 2018: IoT systems are used for agricultural work.
5. "Real-time weed detection using machine learning and stereo-vision", Badhan et al., 2021: a motion-based technique is used to create the 3-D points.
6. "Performance comparison of weed detection algorithms", Sarvini et al., 2019: AI-based techniques are used for weed detection.
7. "Weed detection in soybean crops using convnets", Ferreira et al., 2017: weed identification in the soybean crop is done using a CNN, with images of the soybean crop as the base.
8. "Weed detection in canola fields using maximum likelihood classification and deep convolutional neural network", Asad and Bais, 2019: a pixel-labelling technique with human help is used; the process is carried out in two parts.
9. "Weed detection for selective spraying: a review", Liu and Bruch, 2020: the work aims to reduce the use of herbicides by identifying the weed accurately; health, environment, and sustainability parameters are also considered.
10. "Detecting weeds from crops under complex field environments based on faster RCNN", Le et al., 2021: a photo dataset is collected with a digital camera mounted on a compact trolley under practical field conditions on a farm.


ACKNOWLEDGEMENTS
We acknowledge all the people who directly or indirectly guided the development of this work. All the authors contributed to the development of the work. We are grateful to the family members, friends, and colleagues who helped and guided, directly and indirectly, the development of this system and the preparation of this paper. Our special thanks to our employers for their full support in the publication of this paper.

REFERENCES
[1] J. Yu, A. W. Schumann, Z. Cao, S. M. Sharpe, and N. S. Boyd, “Weed detection in perennial ryegrass with deep learning
convolutional neural network,” Frontiers in Plant Science, vol. 10, Oct. 2019, doi: 10.3389/fpls.2019.01422.
[2] K. Osorio, A. Puerto, C. Pedraza, D. Jamaica, and L. Rodríguez, “A deep learning approach for weed detection in lettuce crops
using multispectral,” AgriEngineering, vol. 2, no. 3, pp. 471–488, Aug. 2020, doi: 10.3390/agriengineering2030032.
[3] O. Tiwari, V. Goyal, P. Kumar, and S. Vij, “An experimental set up for utilizing convolutional neural network in automated weed
detection,” in 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Apr. 2019, pp. 1–
6, doi: 10.1109/IoT-SIU.2019.8777646.
[4] S. Umamaheswari, R. Arjun, and D. Meganathan, “Weed detection in farm crops using parallel image processing,” in 2018
Conference on Information and Communication Technology (CICT), Oct. 2018, pp. 1–4, doi:
10.1109/INFOCOMTECH.2018.8722369.
[5] S. Badhan, K. Desai, M. Dsilva, R. Sonkusare, and S. Weakey, “Real-time weed detection using machine learning and stereo-
vision,” in 2021 6th International Conference for Convergence in Technology (I2CT), Apr. 2021, pp. 1–5, doi:
10.1109/I2CT51068.2021.9417989.
[6] T. Sarvini, T. Sneha, G. G. S. Sukanya, S. Sushmitha, and R. Kumaraswamy, “Performance comparison of weed detection
algorithms,” in 2019 International Conference on Communication and Signal Processing (ICCSP), Apr. 2019, pp. 0843–0847, doi:
10.1109/ICCSP.2019.8698094.
[7] X. Jin, J. Che, and Y. Chen, “Weed identification using deep learning and image processing in vegetable plantation,” IEEE Access,
vol. 9, pp. 10940–10950, 2021, doi: 10.1109/ACCESS.2021.3050296.
[8] M. H. Asad and A. Bais, “Weed detection in canola fields using maximum likelihood classification and deep convolutional neural
network,” Information Processing in Agriculture, vol. 7, no. 4, pp. 535–545, Dec. 2020, doi: 10.1016/j.inpa.2019.12.002.
[9] B. Liu and R. Bruch, “Weed detection for selective spraying: a review,” Current Robotics Reports, vol. 1, no. 1, pp. 19–26, Mar.
2020, doi: 10.1007/s43154-020-00001-w.
[10] V. N. T. Le, G. Truong, and K. Alameh, “Detecting weeds from crops under complex field environments based on faster RCNN,”
in 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE), Jan. 2021, pp. 350–355, doi:
10.1109/ICCE48956.2021.9352073.
[11] B. Alipanahi, A. Delong, M. T. Weirauch, and B. J. Frey, “Predicting the sequence specificities of DNA- and RNA-binding proteins
by deep learning,” Nature Biotechnology, vol. 33, no. 8, pp. 831–838, Aug. 2015, doi: 10.1038/nbt.3300.
[12] A. Bordes, S. Chopra, and J. Weston, “Question answering with subgraph embeddings,” Jun. 2014, [Online]. Available:
http://arxiv.org/abs/1406.3676.
[13] P. Busey, “Cultural management of weeds in turfgrass,” Crop Science, vol. 43, no. 6, pp. 1899–1911, Nov. 2003, doi:
10.2135/cropsci2003.1899.
[14] Q. Chen, J. Xu, and V. Koltun, “Fast image processing with fully-convolutional networks,” in 2017 IEEE International Conference
on Computer Vision (ICCV), Oct. 2017, pp. 2516–2525, doi: 10.1109/ICCV.2017.273.
[15] R. Collobert and J. Weston, “A unified architecture for natural language processing,” in Proceedings of the 25th international
conference on Machine learning - ICML ’08, 2008, pp. 160–167, doi: 10.1145/1390156.1390177.
[16] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (Almost) from
scratch,” The Journal of Machine Learning Research, vol. 12, pp. 2493–2537, 2011.
[17] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in 2009 IEEE
Conference on Computer Vision and Pattern Recognition, Jun. 2009, pp. 248–255, doi: 10.1109/CVPR.2009.5206848.
[18] A. S. Ferreira, D. M. Freitas, G. G. da Silva, H. Pistori, and M. T. Folhes, “Weed detection in soybean crops using ConvNets,”
Computers and Electronics in Agriculture, vol. 143, pp. 314–324, Dec. 2017, doi: 10.1016/j.compag.2017.10.027.
[19] S. A. Fennimore, D. C. Slaughter, M. C. Siemens, R. G. Leon, and M. N. Saber, “Technology for automation of weed control in
specialty crops,” Weed Technology, vol. 30, no. 4, pp. 823–837, Dec. 2016, doi: 10.1614/WT-D-16-00070.1.
[20] E. Gawehn, J. A. Hiss, and G. Schneider, “Deep learning in drug discovery,” Molecular Informatics, vol. 35, no. 1, pp. 3–14, Jan.
2016, doi: 10.1002/minf.201501008.
[21] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The KITTI dataset,” The International Journal of Robotics
Research, vol. 32, no. 11, pp. 1231–1237, Sep. 2013, doi: 10.1177/0278364913491297.
[22] S. Ghosal, D. Blystone, A. K. Singh, B. Ganapathysubramanian, A. Singh, and S. Sarkar, “An explainable deep machine vision
framework for plant stress phenotyping,” Proceedings of the National Academy of Sciences, vol. 115, no. 18, pp. 4613–4618, May
2018, doi: 10.1073/pnas.1716999115.
[23] G. L. Grinblat, L. C. Uzal, M. G. Larese, and P. M. Granitto, “Deep learning for plant identification using vein morphological
patterns,” Computers and Electronics in Agriculture, vol. 127, pp. 418–424, Sep. 2016, doi: 10.1016/j.compag.2016.07.003.
[24] J. Gu et al., “Recent advances in convolutional neural networks,” Pattern Recognition, vol. 77, pp. 354–377, May 2018, doi:
10.1016/j.patcog.2017.10.013.
[25] M. Havaei et al., “Brain tumor segmentation with deep neural networks,” Medical Image Analysis, vol. 35, pp. 18–31, Jan. 2017,
doi: 10.1016/j.media.2016.05.004.
[26] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), Jun. 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.
[27] G. Hinton et al., “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,”
IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, Nov. 2012, doi: 10.1109/MSP.2012.2205597.
[28] D. Hoiem, Y. Chodpathumwan, and Q. Dai, “Diagnosing error in object detectors,” in Computer Vision – ECCV 2012. ECCV 2012.
Lecture Notes in Computer Science, Berlin: Springer, 2012, pp. 340–353.
[29] Y. Jia et al., “Caffe: convolutional architecture for fast feature embedding,” in Proceedings of the 22nd ACM international
conference on Multimedia, Nov. 2014, pp. 675–678, doi: 10.1145/2647868.2654889.


[30] C. R. Johnston, J. Yu, and P. E. McCullough, “Creeping bentgrass, perennial ryegrass, and tall fescue tolerance to topramezone
during establishment,” Weed Technology, vol. 30, no. 1, pp. 36–44, Mar. 2016, doi: 10.1614/WT-D-15-00072.1.
[31] M. I. Jordan and T. M. Mitchell, “Machine learning: trends, perspectives, and prospects,” Science, vol. 349, no. 6245, pp. 255–260,
Jul. 2015, doi: 10.1126/science.aaa8415.

BIOGRAPHIES OF AUTHORS

Dr. Vijaykumar Bidve is Associate Professor and Dean Academics at Marathwada Mitra Mandal’s College of Engineering, Pune, Maharashtra, India. He holds a PhD degree in Computer Science & Engineering with a specialization in Software Engineering. His research areas are software engineering and machine learning. Dr. Vijaykumar has published a number of patents. He works as an expert for various subjects and has served as a reviewer for various conferences and journals. He can be contacted at email: [email protected].

Ms. Sulakshana Mane is Assistant Professor at Bharati Vidyapeeth College of Engineering, Navi Mumbai, Maharashtra, India. She is pursuing a PhD degree in Computer Science & Engineering with a specialization in Information Security. Her research areas are information security and machine learning. Ms. Sulakshana has published two Indian patents. She works as an expert for various subjects and has served as a reviewer for various conferences and journals. She can be contacted at email: [email protected].

Dr. Pradip Tamkhade is Assistant Professor and Dean Student Affairs at Marathwada Mitra Mandal’s College of Engineering, Pune, Maharashtra, India. He holds a PhD degree in Mechanical Engineering with a specialization in Thermal Engineering. His research areas are heat transfer and heat exchangers. Dr. Pradip has published a number of patents. He works as an expert for various subjects and has served as a reviewer for various conferences and journals. He can be contacted at email: [email protected].

Dr. Ganesh Pakle is Assistant Professor and Dean IT Services at Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India. He holds a PhD degree in Computer Science and Engineering with a specialization in computer networks. His research areas are software engineering and computer networks. He works as an expert for various subjects and has served as a reviewer for various conferences and journals. He can be contacted at email: [email protected].
