
International Journal of Innovative Science and Research Technology, Volume 8, Issue 4, April 2023, ISSN No: 2456-2165

Automatic Classification of Mechanical Components of Engines using Deep Learning Techniques

1. Philip O. Adejumobi, Department of Electronic and Electrical Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. Orcid: 0000-0003-3220-1547
2. Oluwadare A. Adebisi, Department of Mechanical and Mechatronics Engineering, First Technical University, Ibadan, Nigeria. Orcid: 0000-0001-7009-563X
3. Iyabo I. Adeaga, Department of Computer Engineering, The Polytechnic Ibadan, Ibadan, Nigeria
4. Abiodun A. Baruwa, Department of Electrical and Electronic Engineering, Osun State College of Technology, Esa Oke, Nigeria
5. Kolawole M. Ajala, Department of Electrical Engineering, The Polytechnic Ibadan, Ibadan, Nigeria

Abstract:- Mechanical parts of engines help to reduce friction and carry weight for linear or rotating motion. Modern engines are complex systems with structural elements, mechanisms, and mechanical parts. The building blocks of the engines are joined together using several mechanical components that are similar in shape and size. During the assembly and disassembly of these complex engines, the mechanical components get mixed up. The traditional classification techniques for components are laborious with high costs. Existing research for classifying mechanical components uses algorithms that work based on shape descriptors and geometric similarity, thereby resulting in low accuracies. Hence, there is a need to develop an automatic classification technique with high accuracy. This study classified four mechanical components (bearing, nut, gear, and bolt) using four deep learning models (AlexNet, DenseNet-121, ResNet-50, and SqueezeNet). In the result, DenseNet-121 achieved the highest performance at an accuracy of 98.3%, sensitivity of 95.8%, specificity of 98.5%, and Area under Curve (AUC) of 98.5%.

Keywords: Mechanical Components, Deep Learning, Convolutional Neural Network, Engine, Transfer Learning.

I. INTRODUCTION

Mechanical design involves the knowledge of numerous machine elements that could be assembled together using mechanical components [1]. Mechanical components ensure the reliability and safety of engines [2]. Basic mechanical components such as the screw, bearing, gear, washer, bolt, and nut can be used for connecting mechanical building blocks into complex systems in the mechanical industry [3]. Bearing and gear are structural components; the basic purpose of a gear is to change the speed or direction of transmitted motion, while the friction between the thread and the compression in nuts and bolts work together to form the fastener [4]. Assembling and disassembling mechanical components is a routine and important process in industries using machines [5].

Most mechanical engines contain tens or hundreds of mechanical components. The classification of mechanical components can be defined as a Fine Grained Visual Categorization (FGVC) problem. A significant number of these mechanical components need to be identified and classified [6]. However, some of these mechanical components are similar in shape and size, thereby making the manual extraction of distinguishing features difficult [7]. Also, mechanical components with an unrecognizable part number or without a part number make manual classification costly and time consuming for technicians [8]. Hence, an automatic classification technique would reduce costs and save time [9].

Computer vision [1] is the technology and science that focuses on the theory behind artificial systems that extract information from images using an automated approach. It has great potential in the sorting, inspection, classification and quality control of mechanical parts during the assembling, manufacturing and disassembling stages [10]. Computer vision has been combined with machine learning techniques for achieving automatic image classification. The major constraint of machine learning is that it cannot extract differentiating features from the training set of data. However, this limitation has been remedied by the use of deep learning techniques [11].

Deep Learning (DL) is a branch of machine learning that uses algorithms for processing information [11]. It is implemented using the architecture of a neural network [12]. Various images have been successfully classified using deep learning techniques.
Convolutional Neural Network (CNN) is a commonly used deep learning model that has the advantages of parameter sharing, comparable representations and sparse interactions [13]. The first convolutional neural network was presented by LeCun et al. in 1990 [14]. Since then, researchers have developed several CNN models for improved performance [15]. A CNN architecture basically contains the convolutional layer, the fully connected layer and the max pooling layer [16]. Common CNN models designed for image classification tasks are LeNet, AlexNet, ZFNet, VGGNet, GoogLeNet, ResNet, and DenseNet. The network depth and the number of neurons describe the network topology [17].

 AlexNet
AlexNet is a convolutional neural network that was developed by Alex Krizhevsky et al. in 2012 [16]. It is a deep CNN with five convolutional layers, three sub-sampling layers and three fully connected layers [11]. The training of AlexNet was done using 1.3 million high-resolution images for identifying 1000 different objects. AlexNet participated in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012 [18], and the network attained a 15.3% top-5 error, 10.8 percentage points lower than the second position [19]. AlexNet has achieved great improvements in performance in the classification of various medical images [20].

 ResNet
Residual neural network (ResNet) was developed in 2015 by Kaiming et al. [21]. The network was developed by piling residual blocks on top of each other. It was realized by skipping connections over two or three layers containing batch normalization and Rectified Linear Unit (ReLU) between the architectures [22]. Compared to other CNN models, ResNet's training ability is better and the computations are lighter. The available ResNet models are ResNet-18, ResNet-34, ResNet-50, ResNet-101 and ResNet-152 [23]. ResNet-50 won ILSVRC in 2015 with a 3.6% error rate. Its architecture contains 48 convolution layers, one maxpool layer and one average pool layer [24].

 SqueezeNet
SqueezeNet was developed in 2016 by researchers at DeepScale, the University of California, Berkeley, and Stanford University [25]. The architecture of SqueezeNet achieved AlexNet-level accuracy on ImageNet with less than 5 MB of parameters and less computational time [26]. The architecture contains two convolutional layers, eight fire modules and one softmax layer [26].

 DenseNet
Densely connected convolutional networks (DenseNet) was developed by Zhuang Liu, Gao Huang and their team in 2017 [27]. Each DenseNet layer has direct access to the original input image and it does not learn redundant feature maps [28]. The available variants of DenseNet are DenseNet-121, DenseNet-160, and DenseNet-201. DenseNet-121 is a 24-layer convolutional neural network containing five convolution layers, five pooling layers, three transition layers and one hundred and twenty-one perceptrons [27].

These classical CNN models have similarities in the mode of image recognition, model training and classification of output. However, they differ in architecture design.

 Transfer Learning
Transfer learning [29] is a machine learning method wherein a model that was trained and developed for a task is re-used for a related task with reduced time, reduced computational requirements and satisfactory results [30]. It is suitable when there is a new dataset that is smaller than the dataset of the initial pre-trained model [31]. Transfer learning allows starting with the features learned on the larger dataset and adjusting the architecture of the model to suit the new dataset, rather than starting the learning procedure from scratch [25].
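To make the idea concrete, the following is a minimal transfer-learning sketch in PyTorch, assuming torchvision's pretrained DenseNet-121 (the layer names follow torchvision's implementation, newer torchvision versions pass a weights argument instead of pretrained, and the optional freezing step is illustrative rather than part of this study's procedure).

# Minimal transfer-learning sketch (assumptions: torchvision provides the
# pretrained backbone; attribute names follow torchvision's DenseNet-121).
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # bearing, bolt, gear, nut

# Start from weights learned on the larger ImageNet dataset.
model = models.densenet121(pretrained=True)

# Optionally freeze the convolutional feature extractor so that only the
# new head is updated during fine-tuning.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a 4-neuron classifier.
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

For the other networks the attribute holding the final layer differs (for example, torchvision's ResNet-50 exposes it as fc and AlexNet keeps it inside classifier), so the same idea applies with a different attribute name.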
The training algorithm aims to lessen the training error between the actual labels and the predicted values by updating the parameters of the neural network. The error is quantified by the loss function. The optimization algorithm is used for updating the biases and weights (the internal parameters of a model) to reduce the error [32]. Figure 1 shows the flowchart for training the pretrained CNN model.


Fig 1 Flowchart for Training the CNN Model (Start → Import PyTorch Libraries → Define Transformations → Import Training Dataset → Import CNN Model → Set Training Parameters → Train Model: perform forward pass, compute loss and gradient, back-propagate the error and update parameters; repeat until the network has converged → Validate the Model → Save Model → Stop)
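The sketch below follows the steps of Fig 1 as a stand-alone PyTorch function; the loaders and save path are assumptions (the model and data loaders are defined as in the methodology section, and the SGD/cross-entropy settings mirror the ones reported there), not code released with this paper.

# Sketch of the Fig 1 training procedure as a reusable function.
import torch
import torch.nn as nn
import torch.optim as optim

def train_model(model, train_loader, val_loader, epochs=10, lr=0.001,
                momentum=0.9, device=torch.device("cpu"), save_path="model.pth"):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()                     # quantifies the training error
    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)

    for epoch in range(epochs):                           # repeat until convergence / max epochs
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = model(images)                       # perform forward pass
            loss = criterion(outputs, labels)             # compute loss
            loss.backward()                               # back-propagate the error
            optimizer.step()                              # update weights and biases

        model.eval()                                      # validate the model after each epoch
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        print(f"epoch {epoch + 1}: validation accuracy = {correct / total:.3f}")

    torch.save(model.state_dict(), save_path)             # save the trained model
    return model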

The remaining part of this paper is organized as follows: Section 2 presents the literature survey, Section 3 contains the proposed methodology, Section 4 discusses the results, and finally Section 5 presents the conclusion and future scope.

II. LITERATURE SURVEY

In this section, we reviewed some of the work done on the classification of mechanical components. [33] developed a system for classifying fasteners automatically using computer vision and machine learning on a created dataset. In the result, the work classified 20 bolts and 14 washers at an accuracy of 99.4%. [34] designed a system that recognised nuts and bolts in the automotive and mechanical industries. [35] built a method to recognize bolts and nuts using an artificial neural network (ANN). The process started with image acquisition. The images were captured using a high resolution camera. At the pre-processing stage, the images were normalized, converted to gray scale and resized. Principal Component Analysis (PCA) was used for the feature extraction. The system classified the objects accurately. [1] used machine learning for automatic classification of mechanical components. Each object was represented with a bag of features. The dataset was formed by 2354 images and 875 features for 15 sub-categories. The test dataset contained 606 images. In the result, the work achieved average area under ROC curves by similarity coefficients. [36] performed data analysis for the automated classification of mechanical components. The system outperformed the Light Field Descriptor classifier. [37] investigated the methodologies for part classification using deep learning technologies. A 2D-CNN model was trained using csv files and picture data while the 3D-CNN was trained using voxel data. In the result, the 2D-CNN model generated the highest accuracy. [38] recognized bolts and nuts in real time using an image processing algorithm. The system achieved an accuracy of 92%. [4] evaluated machine learning classification methods using Neural Networks (NN), random forests and Ensemble Decision Tree (EDT) algorithms. In the result, the EDT method outperformed the neural network. [5] classified mechanical components based on their lateral shape and their head. The study achieved a mAP@0.5 of 0.996 for the classification of components. In view of the reviewed works, there is a need for classifying mechanical components with improved accuracy.
 Proposed System

Fig 2 Flow Diagram for the Classification of Mechanical Components

 Dataset
The dataset is very critical for successful automatic image classification. Unlike traditional methods, deep learning models succeed on huge datasets because the classification technique depends on features extracted from the images [39]. Without a standard dataset, it would be hard to compare learning algorithms on mechanical components. For this work, the online mechanical parts_coco_json dataset was used.

 Image Pre-Processing
Image classification tasks are affected by the presence of noise, scale variation, viewpoint variation, poor quality, and illumination. Image preprocessing is used to remove the noise that is present in the image so that a noise-free image is used for the feature extraction. For our work, the images were enhanced using histogram equalization and de-noised by median filtering. The images in the dataset were 640 x 640 pixels for uniformity in size. Our database contained 1,100 images of Bearing (1,000 for training and 100 for testing), 1,100 images of Bolt (1,000 for training and 100 for testing), 1,100 images of Gear (1,000 for training and 100 for testing), and 1,100 images of Nut (1,000 for training and 100 for testing). The total number of images in the training dataset equals 6,000, and the total number of images in the test dataset equals 600. Figure 3 shows samples of the images in our dataset.

Fig 3 (a) Bearing (b) Bolt (c) Gear (d) Nut
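The paper does not state which image-processing library performed these steps; the sketch below assumes OpenCV and shows one common way to apply histogram equalization (on the luminance channel), median filtering and resizing to 640 x 640.

# Pre-processing sketch (assumption: OpenCV; the paper only states histogram
# equalization, median filtering and a uniform 640 x 640 image size).
import cv2

def preprocess(path: str):
    img = cv2.imread(path)                              # BGR image
    # Histogram equalization on the luminance channel to enhance contrast.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    img = cv2.medianBlur(img, 3)                        # de-noise with a median filter
    img = cv2.resize(img, (640, 640))                   # uniform size used in the dataset
    return img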
 Feature Extraction and Image Classification
Feature extraction is the process of obtaining features from an image, while image classification predicts the category of the input image using its features. For this work, CNN models will be used to perform the feature extraction and the image classification automatically.

 Training the CNN Models
Our work aims to compare the performances of four CNN models on our dataset. To achieve this, we downloaded four pre-trained classical CNN models (AlexNet, DenseNet-121, ResNet-50, and SqueezeNet). The last fully connected layer of each of these models was modified to contain four neurons, which is our number of target classes. To train our CNN models in PyTorch, we started by importing the PyTorch libraries; we then transformed the input images by resizing to 255 pixels, center-cropping the images to 224 pixels and using ToTensor to convert the images into PyTorch's usable format. Afterwards, we normalized the images. After the transformation of the images, we divided the training dataset in the ratio 70:30 for the training and validation datasets correspondingly. We loaded the images into the model via a DataLoader, then moved the model onto the device (central processing unit).

With both the model and the training data defined, we configured the learning process by setting the training parameters for the models as a learning rate of 0.001, a batch size of 32, a momentum of 0.9, 10 epochs, cross-entropy loss as the loss function and Stochastic Gradient Descent (SGD) as the optimization function. We set the model to training mode and initiated the training process. For the four classical models, the transfer learning technique was used to retrain these modified models on our database. The validation dataset was used to evaluate the performance of the models during the training process. After the completion of the training for each model, we saved each trained model.
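The following sketch mirrors this data pipeline and training configuration using torchvision; the dataset folder name and the ImageNet normalization statistics are assumptions (the paper only states that the images were normalized), and ResNet-50 stands in for any of the four modified models.

# Sketch of the data pipeline and training configuration described above
# (assumptions: an ImageFolder-style directory layout; path is illustrative).
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize(255),                  # resize the shorter side to 255 pixels
    transforms.CenterCrop(224),              # center-crop to 224 x 224
    transforms.ToTensor(),                   # convert to a PyTorch tensor
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("mechanical_parts/train", transform=transform)
n_train = int(0.7 * len(dataset))            # 70:30 train/validation split
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

model = models.resnet50(pretrained=True)     # any of the four models can be substituted
model.fc = nn.Linear(model.fc.in_features, 4)    # four output neurons

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)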

 Testing the CNN Model
In this section, we tested the performance of our CNN models. To achieve this, we set each model to evaluation mode. We programmed the settings for the testing as follows: the test images were pre-processed by resizing to 255 pixels and center-cropping to 224 pixels. The images were then converted into PyTorch's usable format, followed by normalization. The batch size was set to 1.
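A corresponding inference sketch is shown below; it assumes the model and transform objects from the training sketch above and an illustrative test folder path.

# Inference sketch for the testing stage (assumptions: `model` and `transform`
# come from the training sketch; the test folder path is illustrative).
import torch
from torch.utils.data import DataLoader
from torchvision import datasets

test_set = datasets.ImageFolder("mechanical_parts/test", transform=transform)
test_loader = DataLoader(test_set, batch_size=1)         # batch size of 1, as in the paper

model.eval()                                             # evaluation mode
all_preds, all_labels = [], []
with torch.no_grad():                                    # no gradients needed at test time
    for image, label in test_loader:
        output = model(image)
        all_preds.append(output.argmax(dim=1).item())    # predicted class index
        all_labels.append(label.item())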

 Evaluation Metrics
Evaluation metrics are indicators for assessing the performance of an experiment [40]. In this work, sensitivity, accuracy, specificity and Area under Curve (AUC) were selected as the quantitative evaluation metrics. The accuracy, sensitivity, and specificity were calculated using Equations (1), (2), and (3) correspondingly.

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

Sensitivity = TP / (TP + FN)    (2)

Specificity = TN / (TN + FP)    (3)

where TN, FN, FP, and TP are true negative, false negative, false positive, and true positive correspondingly. A Receiver Operating Characteristic (ROC) curve is a probability curve that plots the True Positive Rate (TPR) against the False Positive Rate (FPR) at several threshold values [41]. AUC serves as a summary of the ROC curve.

This work was implemented with the Python programming language using the PyTorch library on an Intel(R) Core(TM) i3-2330 CPU, 8GB RAM laptop running Microsoft Windows 10.
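As an illustration, the per-class metrics of equations (1)-(3) can be computed from a confusion matrix and then averaged over the four classes, as in the sketch below (NumPy and scikit-learn are assumed to be available, and all_labels and all_preds come from the testing sketch). AUC is computed separately from the predicted class probabilities, for example with scikit-learn's roc_auc_score in one-vs-rest mode.

# Sketch of equations (1)-(3) computed per class from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(all_labels, all_preds)             # rows: true class, cols: predicted

accuracies, sensitivities, specificities = [], [], []
for k in range(cm.shape[0]):                             # one-vs-rest counts per class
    TP = cm[k, k]
    FN = cm[k, :].sum() - TP
    FP = cm[:, k].sum() - TP
    TN = cm.sum() - TP - FN - FP
    accuracies.append((TP + TN) / (TP + TN + FP + FN))   # equation (1)
    sensitivities.append(TP / (TP + FN))                 # equation (2)
    specificities.append(TN / (TN + FP))                 # equation (3)

print("average accuracy:    %.3f" % np.mean(accuracies))
print("average sensitivity: %.3f" % np.mean(sensitivities))
print("average specificity: %.3f" % np.mean(specificities))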

 Performance Analysis
This section presents the results obtained during the training and the testing of the CNN models. Figures 4, 5, 6, and 7 show the results obtained.

Fig 4 (a) Training Plot, (b) Confusion Matrix, (c) ROC Plot for the AlexNet Model
Fig 5 (a) Training Plot, (b) Confusion Matrix, (c) ROC Plot for the DenseNet-121 Model
Fig 6 (a) Training Plot, (b) Confusion Matrix, (c) ROC Plot for the ResNet-50 Model
Fig 7 (a) Training Plot, (b) Confusion Matrix, (c) ROC Plot for the SqueezeNet Model

 Result of Training the CNN Models
From the training plots, it was discovered that the training loss and validation loss were high at the start but decreased as the number of epochs increased, while the training accuracy and validation accuracy were low at the beginning but increased as the number of epochs increased. Table 1 shows the accuracy, loss, and elapsed time at the 15th epoch during the training of the CNN models.

Table 1 The Result of Accuracy, Loss, and the Elapsed Time at the 15th Epoch when Training the CNN Models

Model                                    AlexNet     DenseNet-121   ResNet-50   SqueezeNet
Validation accuracy at 15th epoch (%)    97.893      96.975         96.179      96.504
Training accuracy at 15th epoch (%)      98.520      98.849         97.867      97.533
Validation loss at 15th epoch            0.00026     0.00169        0.00243     0.00129
Training loss at 15th epoch              0.00014     0.00112        0.00110     0.00104
Training time (hh:mm:ss)                 02:18:37    42:08:03       18:49:28    02:31:28

From Table 1, we can observe that all the models had high training accuracies, indicating that the models learnt the features of the images correctly. The DenseNet-121 model had the highest training accuracy of 98.849% while the SqueezeNet model had the least training accuracy of 97.533%. The AlexNet model had the highest validation accuracy of 97.893% while the ResNet-50 model had the least validation accuracy of 96.179%. It was observed that the validation accuracy for all the models was lower than the training accuracy; this indicated that the models did not over-fit and they generalized well on the validation dataset. Considering the elapsed time for the training, the SqueezeNet model took the least time, while DenseNet-121 took the longest time.

 Result of Testing the CNN Models
This section presents the result of testing the CNN models in classifying the test dataset. It was observed from the confusion matrices that some of the mechanical components were misclassified, which indicated similarities among mechanical components. From the confusion matrix for each CNN model, the evaluation metrics for each mechanical component were calculated using equations (1), (2), and (3) correspondingly. The performance of each CNN model was determined by calculating the average accuracy, average sensitivity, average specificity, and average AUC. The result obtained is as shown in Table 2.

Table 2 The Performance of AlexNet, DenseNet-121, ResNet-50, and SqueezeNet

Model                      AlexNet     DenseNet-121   ResNet-50   SqueezeNet
Accuracy (%)               94.5        98.3           97.5        96.5
Sensitivity (%)            94.5        95.8           94.8        92.8
Specificity (%)            96.2        98.5           98.0        97.5
AUC (%)                    96.0        98.5           99.0        93.5
Testing time (hh:mm:ss)    00:00:40    00:04:02       00:02:46    00:00:28

From Table 2, the test performances for all the models were good. The DenseNet-121 model had the highest performance at 98.3% accuracy, 95.8% sensitivity, and 98.5% specificity. AlexNet had the lowest accuracy (94.5%) and the lowest specificity (96.2%), while SqueezeNet had the lowest sensitivity (92.8%) and the lowest AUC (93.5%). ResNet-50 had the highest AUC of 99.0%. Considering the time elapsed for the testing, the SqueezeNet model took the least test time, while DenseNet-121 took the longest time.

 Comparison of CNN Architectures and their Performances
Table 3 shows the configuration of the CNN models used in this work, in order to make inferences about their performances in the classification of mechanical components.

Table 3 Configuration of the CNN Architectures

Model         Year   Activation function   Architecture                                              Parameters
AlexNet       2012   ReLU                  Convolutional layers = 5, Fully connected layers = 3      60 million
ResNet-50     2015   ReLU                  Convolution layers = 48, Fully connected layers = 3       23 million
SqueezeNet    2016   ReLU                  Convolutional layers = 2, Fully connected layers = 1      1.25 million
DenseNet      2017   ReLU                  Convolution layers = 5, Fully connected layers = 1        48 million

From Table 3, we can observe that the AlexNet model has the highest number of parameters while the SqueezeNet model has the lowest number of parameters. The ResNet-50 model has the deepest architecture while the SqueezeNet model has the shallowest architecture. All the models were trained and tested under the same conditions. Comparing the model architectures and their performances, we observed that the DenseNet-121 model took the longest time for the training and testing while the SqueezeNet model took the shortest training and testing time. This indicated that the network architecture could affect the training and testing time for CNN models.

The classification performances for all the considered CNN models are good; this indicates that deep learning techniques are good image classifiers. DenseNet-121 has the highest performance in terms of accuracy, sensitivity, and specificity. However, it came second in the AUC value. This suggests that the use of more than one evaluation metric is important in comparing and validating the performance of deep learning models as classifiers. From this work, it can be established that, under the same training and testing conditions, the performance of CNN models differs due to differences in their network architecture. It can also be deduced that the performance of CNN models in image classification has improved over the years through the variation of network architectures.

III. CONCLUSION AND FUTURE SCOPE

This work compares the performance of four state-of-the-art convolutional neural network models (AlexNet, DenseNet-121, ResNet-50, and SqueezeNet). Each network was trained on the same dataset using the same hyperparameters. The models were also tested on the same dataset and the results were compared using accuracy, sensitivity, specificity, and AUC as performance metrics. It was observed that DenseNet-121 has the highest performance at 98.3% accuracy, 95.8% sensitivity, and 98.5% specificity, while ResNet-50 has the highest AUC of 99.0%.

In future work, the performance of the CNN models can be improved by increasing the number of training epochs and the size of the dataset. The developed method can also be extended to classify more mechanical parts.

REFERENCES

[1]. R. Matteo, G. Franca, L. Katia, and M. Marina, "A methodology for part classification with supervised machine learning," Artificial Intelligence for Engineering Design, Analysis and Manufacturing, https://www.researchgate.net/publication/327272531, pp. 1-28, 2019.
[2]. L. Zhijun, L. Yingjie, and J. Qineng, "A bolt defect recognition algorithm based on attention model," Fuzzy Systems and Data Mining VII, pp. 86-93, 2021.
[3]. Z.J. Li, K. Adamu, K. Yan, X.L. Xu, P. Shao, X.H. Li, and H.M. Bashir, "Detection of nut–bolt loss in steel bridges using deep learning techniques," Sustainability, vol. 14, pp. 1-18, 2022.
[4]. J.O. Steven, J.C. Armida, G.P. Matthew, and D.E. Corey, "Machine learning classification and reduction of CAD parts for rapid design to simulation," pp. 1-13, 2021.
[5]. M. Faisel, R. Kaki, D. Sandip, R. Tathagata, P. Chandu, T. Praveen, and K.J. Pramod, "Nuts and bolts: yolo-v5 and image processing based component identification system," vol. 118, pp. 235-246, 2023.
[6]. H.L. Thai, T.S. Hai, and T.T. Nguyen, "Image classification using support vector machine and artificial neural network," International Journal of Information Technology and Computer Science, vol. 5, no. 2, pp. 32-38, 2012.
[7]. H. Qian, "High precision quality inspection for screws using artificial intelligence technology," pp. 94-101, 2020.
[8]. E.L. Secco, C. Deters, H.A. Wurdemann, H.K. Lam, and L.D. Seneviratne, "A neural network clamping force model for bolt tightening of wind turbine hubs," pp. 1-9, 2015.
[9]. C. Junwen, W. Hongrui, and H. Zhiwei, "Automatic defect detection of fasteners on the catenary support device using deep convolutional neural network," IEEE Transactions on Instrumentation and Measurement, vol. 67, no. 2, pp. 257-269, 2018.
[10]. M.Z. Omid, D. Mark, W. Stephen, and P. Cecile, "Image captioning using facial expression and attention," Journal of Artificial Intelligence Research, vol. 68, pp. 661-689, 2020.
[11]. M.K. Manoj, M. Neelima, M. Harshali, and M.G.R. Venu, "Image classification using deep learning," International Journal of Engineering and Technology, vol. 7, no. 2.7, pp. 614-617, 2018.
[12]. O.A. Adebisi, J.A. Ojo, and O.M. Oni, "Comparative analysis of deep learning models for detection of COVID-19 from chest x-ray images," International Journal of Scientific Research in Computer Science and Engineering (IJSRCSE), vol. 8, no. 5, pp. 28-35, 2020.
[13]. L. Qing, Z. Suzhen, and W. Yuechun, "Deep learning model of image classification using machine learning," Advances in Multimedia, pp. 1-12, 2022.
[14]. R. Muthukrishnan, M.V. Anand, and H. Shanmugasundaram, "Image classification using convolutional neural networks," International Journal of Pure and Applied Mathematics, vol. 119, no. 17, pp. 1307-1319, 2018.
[15]. S. Parul, P.S.B. Yash, and G. Wiqas, "Performance analysis of deep learning CNN models for disease detection in plants using image segmentation," Information Processing in Agriculture, pp. 1-9, 2019.
[16]. K. Alex, S. Ilya, and E.H. Geoffrey, "ImageNet classification with deep convolutional neural networks," https://www.researchgate.net/publication/330729785, pp. 1-9, 2012.
[17]. D. Meriam, and K.S.B. Ahmed, "Optimization of CNN model for image classification," https://www.researchgate.net/publication/352284493, pp. 1-7, 2021.
[18]. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, and M. Berstein, "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, pp. 211-252, 2015.
[19]. A.J. Fareed, and J. Atiya, "Image classification using AlexNet with SVM classifier and transfer learning," Journal of Information Communication Technologies and Robotics Applications (JICTRA), vol. 10, no. 1, pp. 44-51, 2019.
[20]. O.A. Adebisi, and J.A. Ojo, "A review of various segmentation methods for ultrasound thyroid images," International Journal of Advanced Research in Science, Engineering and Technology (IJARSET), vol. 7, no. 8, pp. 14577-14582, 2020.
[21]. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 4, no. 7, pp. 34-39, 2016.
[22]. S. Devvi, H.P. Radifa, B. Alhadi, and A. Pinkie, "Deep learning in image classification using residual network (ResNet) variants for detection of colorectal cancer," 5th International Conference on Computer Science and Computational Intelligence, vol. 179, pp. 423-431, 2021.
[23]. H. Kaiming, Z. Xiangyu, R. Shaoqing, and S. Jian, "Deep residual learning for image recognition," Computer Vision Foundation, IEEE Xplore, vol. 5, pp. 770-778, 2015.
[24]. H.H. Hong, and H.T. Hoang, "Improvement for convolutional neural networks in image classification using long skip connection," Applied Sciences, vol. 11, no. 2092, pp. 1-4, 2021.
[25]. B. Varsha, and K. Arun, "Analysis of convolutional neural network using pre-trained SqueezeNet model for classification of thermal fruit images," https://www.researchgate.net/publication/341162911, pp. 759-768, 2020.
[26]. A.A. Deny, and A. Suryaperdana, "Cervical cancer image classification using CNN transfer learning," Advances in Engineering Research, Proceedings of the 2nd International Seminar of Science and Applied Technology (ISSAT 2021), vol. 207, pp. 145-149, 2021.
[27]. H. Najmul, B. Yukun, and S. Ashadullah, "DenseNet convolutional neural networks application for predicting COVID-19 using CT image," SN Computer Science, vol. 2, no. 389, pp. 1-14, 2021, Springer.
[28]. A. Neda, I. Tehran, T. Omid, and A. Mohammad, "Flower image classification using deep convolutional neural network," https://www.researchgate.net/publication/352093979, pp. 1-6, 2021.
[29]. T. Srikanth, "Transfer learning using VGG-16 with deep convolutional neural network for classifying images," International Journal of Scientific and Research Publications, vol. 9, no. 10, pp. 143-150, 2019.
[30]. O.A. Adebisi, J.A. Ojo, and T.O. Bello, "Computer aided diagnosis system for classification of abnormalities in thyroid nodules ultrasound images using deep learning," IOSR Journal of Computer Engineering (IOSR-JCE), vol. 22, no. 3, pp. 60-66, 2020.
[31]. R. Ramya, K. Ece, D. Debadeepta, H. Eric, and S. Julie, "Blind spot detection for safe sim-to-real transfer," Journal of Artificial Intelligence Research, vol. 67, pp. 191-234, 2020.
[32]. C. Marc, and L.R.D. Bart, "Hyperparameter search in machine learning," https://www.researchgate.net/publication/276978774, pp. 23-32, 2019.
[33]. T. Sajjad, H. Juan, and B. Bernd, "Fine-grained visual categorization of fasteners in overhaul processes," pp. 1-7, 2017.
[34]. A.K. Amit, S.B. Dhiraj, and M.J. Prashil, "An application for the separation of nuts and bolts using image processing and artificial intelligence," International Journal of Latest Technology in Engineering, Management and Applied Science (IJLTEMAS), vol. 7, no. 1, pp. 1-3, 2018.
[35]. D. Amol, A.S. Khobragade, and S. Ambarish, "Mechanical nut-bolt sorting using principle component analysis and artificial neural network," International Journal of Applied Information Systems (IJAIS), pp. 1-3, 2018.
[36]. M. Rucco, F. Giannini, K. Lupinetti, and M. Monti, "A methodology for part classification with supervised machine learning," Artificial Intelligence for Engineering Design, Analysis and Manufacturing, vol. 33, pp. 100-113, 2019.
[37]. N. Fangwei, S. Yan, and X. Weiqing, "Various realization methods of machine part classification based on deep learning," Journal of Intelligent Manufacturing, vol. 31, no. 12, pp. 2019-2032, 2020.
[38]. M.J. Teuku, and S.P. Anthon, "Recognition of bolt and nut using artificial neural network," https://www.researchgate.net/publication/352213979, pp. 1-13, 2011.
[39]. A.M. Seyyid, "A comparative study of feature extraction methods in image classification," International Journal of Image, Graphics and Signal Processing, vol. 3, pp. 16-23, 2015.
[40]. L. Niklas, and D. Paul, "Evaluating learning algorithms and classifiers," International Journal of Intelligent Information and Database Systems, vol. 1, no. 1, pp. 27-52, 2007.
[41]. J.H. David, and J.T. Robert, "A simple generalization of the area under the ROC curve for multiple class classification problems," https://www.researchgate.net/publication/338228279, vol. 45, pp. 172-186, 2001.
