
IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 12, No. 3, September 2023, pp. 1439∼1447


ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i3.pp1439-1447

Store product classification using convolutional neural network

I Made Wiryana1, Suryadi Harmanto2, Alfharizky Fauzi3, Imam Bil Qisthi3, Zalita Nadya Utami3
1Management Information System, Gunadarma University, Jakarta, Indonesia
2Information Technology, Gunadarma University, Jakarta, Indonesia
3Specialization in Information Technology, Master of Electrical Engineering, Gunadarma University, Jakarta, Indonesia

Article Info

Article history:
Received Sep 16, 2022
Revised Sep 30, 2022
Accepted Oct 11, 2022

Keywords:
Classification
Convolutional neural network
Deep learning
Image processing
Store product

ABSTRACT

Stores sell consumer goods, mainly food products and other household products, at retail. The products sold in stores vary greatly, and classifying them efficiently in today's fast-paced, technology-driven era calls for artificial intelligence. Within artificial intelligence there is a specialized learning approach known as deep learning, and one of its branches is the convolutional neural network (CNN). This research employs a CNN architecture to reduce the time and cost of the store's product-sorting process. The experiment uses 1,050 product images divided into 35 labels and split into three subsets: 80% training data, 10% validation data, and 10% test data. The images are preprocessed to a size of 256×256 pixels. The data was trained with six convolution layers for 50 epochs, with an execution time of 33 minutes, achieving an accuracy of 91.37%.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Alfharizky Fauzi
Specialization in Information Technology, Master of Electrical Engineering, Gunadarma University
Pabuaran Asri A10/28, Cibinong, West Java, Indonesia
Email: [email protected]

1. INTRODUCTION
Trading is one of the ways people meet their diverse needs in life. Presidential Decree 112 of 2007 explains that a market is an area where traders and buyers carry out buying and selling activities, or a place where traders and buyers meet. A traditional market is characterized mainly by price bargaining in the buying and selling process, while a modern market is a trading area with fixed prices. Modern markets are divided into modern shopping centers and shops.
A modern store is a one-stop shop that sells various types of goods at retail. Modern stores are divided into shops, minimarkets, wholesalers, supermarkets, department stores, and hypermarkets, distinguished by floor area and the range of goods sold. Wholesalers sell consumer goods in bulk rather than at retail. Department stores sell consumer goods, mainly clothing and equipment, at retail. Minimarkets, supermarkets, and hypermarkets sell consumer goods, mainly food products and other household products [1].
The products sold in stores are highly varied, and classifying them efficiently in today's fast-paced, technological era calls for artificial intelligence that adopts human processes and ways of thinking. Within artificial intelligence there is a specialized learning approach known as deep learning. Deep learning permits machines to solve complex problems even from large amounts of unstructured data [2]. The deep learning approach is a breakthrough algorithm for solving problems in human life today [3]. Deep learning has proven to excel in image processing [4].
One of the uses of deep learning is in the field of image processing. An image processing system is intended to help humans recognize or classify objects efficiently, that is, quickly, precisely, and with the ability to handle large amounts of data at once. Algorithms used in image processing include the support vector machine, Naïve Bayes, and the neural network. One of the most frequently used algorithms is the neural network, which is developed based on how neural networks work in the human brain.
One of the most common deep learning applications in the field of image classification is the convolutional neural network (CNN) [5]. A CNN is a special kind of multi-layer neural network inspired by the mechanisms of the visual system of living beings [6]. A CNN is a deep learning model consisting of several layers: convolutional, pooling, and fully connected layers [7]. CNNs are widely used in many fields, applications, and problems in computer vision, such as image recognition, segmentation, detection, and classification [8], [9].
Therefore, this study employs a CNN as the solution, using image objects of 35 types of products. The CNN architecture employed in this study is expected to classify images of store products and produce the best accuracy. Section 2 presents previous research on the development of CNNs that is used as a reference.

2. RELATED WORKS
The CNN is an established algorithm that continues to be developed by many researchers in the field of deep learning. This study builds on previous research as a reference, and several earlier studies have focused on the development of CNNs for various objects.
Lu et al. [10] classified fruits using CNNs. The authors designed a six-layer CNN consisting of convolution layers, pooling layers, and a fully connected layer. The experiment achieved promising performance with an accuracy of 91.44%, better than three advanced approaches: voting-based support vector machines, wavelet entropy, and genetic algorithms.
Liu et al. [11] built a large dataset of flower images with 79 classes and proposed a new CNN-based framework to classify flowers. The neural network consists of five convolutional layers with small receptive fields, some of which are followed by a max-pooling layer, and three fully connected layers ending in a 79-way softmax. Their approach reached 76.54% classification accuracy on their own challenging flower dataset. In addition, the algorithm was tested on the Oxford 102 flowers dataset, where it outperformed previously known methods and achieved 84.02% classification accuracy.
Attokaren et al. [12] classified food images using CNNs. The authors also applied the max-pooling function to the data, and the features extracted by this operation were used to train the network. The proposed implementation achieved 86.97% accuracy on the Food-101 dataset.
Dyrmann et al. [13] presented a CNN-based method capable of recognizing plant species in colour images. The network was built from scratch and was trained and tested on a total of 10,413 images containing 22 weed and crop species in early stages of growth. For these 22 species, the network achieved a classification accuracy of 86.2%.
Rathi et al. [14] proposed a method for automatic classification of fish species, which is needed for a better understanding of fish behaviour by ichthyologists and marine biologists. The method uses new techniques based on CNNs, deep learning, and image processing to achieve 96.29% accuracy, a substantial improvement in classification accuracy over previously proposed methods.
Prasad et al. [15] classified flower images using CNNs. The dataset consists of 9,500 flower images categorized into four related classes. The CNN training is carried out in five batches, and testing on all four datasets achieves an accuracy of 97.78%.
Gustisyaf and Sinaga [16] noted that fingerprint identification is a significant and simple biometric technique that is inexpensive, with the actual process carried out by a dactyloscopy expert. They classified gender from fingerprints using the CNN method; three models were created to determine sex from a total of 49,270 images, including training and test data, across two classes, male and female. Of the three models, the best accuracy obtained was 99.9667%.
Khaing et al. [17] developed an object recognition control system based on CNNs. The CNN was applied to fruit detection and recognition tasks through parameter optimization. The test results show a classification accuracy close to 94% for 30 classes and 971 images.
Liew et al. [18] proposed a CNN-based approach for real-time gender classification from face images. They achieved classification accuracies of 98.75% and 99.38% on the SUMS and AT&T databases, respectively. Images of 32×32 pixels are processed at a speed of 0.27 ms, which corresponds to about 3,700 images per second, and training takes fewer than 20 epochs.
Elhoseiny et al. [19] performed weather classification of images using CNNs. Their approach outperforms the state of the art by a large margin in weather classification tasks, reaching 82.2% normalized classification accuracy versus 53.1% for the state of the art (a 54.8% relative improvement). Section 3 explains the CNN method that will be developed and the processing of the dataset to be used.

3. METHOD
Deep learning has emerged as a major tool for perception problems such as understanding images, recognizing human speech, and enabling robots to explore the world. A CNN consists of one or more convolutional layers followed by one or more fully connected layers, as in a standard multilayer neural network [20]. Understanding CNNs and applying them to an image recognition system is the goal of the proposed model. A CNN extracts feature maps from the 2D image using filters. CNNs map image pixels to their surrounding regions instead of fully connecting entire layers of neurons. CNNs have proven to be very powerful and promising tools in image processing. Even within areas of computer vision such as handwriting recognition, object classification, and segmentation, CNNs have become a far better tool than any previously employed [21]. In this work, CNNs are used with the Keras framework on Google Colaboratory with a TensorFlow back end.
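As a point of reference, the snippet below is a minimal check of this assumed environment (Keras on Google Colaboratory with a TensorFlow back end); the TensorFlow version and GPU availability are not stated in the paper and are printed here only for illustration.

```python
# Minimal environment check for the assumed Keras/TensorFlow setup on
# Google Colaboratory; versions and GPU availability are not stated in the paper.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPU devices:", tf.config.list_physical_devices("GPU"))
```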

3.1. Dataset

The image data were collected manually by shooting with a phone camera and were also taken from Google Images. The collected data comprise 1,050 product images divided into 35 classes as labels, with different pixel resolutions and formats. At the data preprocessing stage, the resolution is changed to 256×256 pixels and the format is converted to Joint Photographic Experts Group (JPG), so that all inputs are uniform. The images are divided into three parts: 80% training, 10% validation, and 10% testing. Not all of the image data are displayed; only six randomly selected products are shown in Figure 1.
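As an illustration, the following is a minimal sketch of how such a split could be produced with the Keras/TensorFlow utilities mentioned in section 3; the directory name, batch size, and random seed are assumptions, since the paper does not describe its data-loading code.

```python
import tensorflow as tf

IMG_SIZE = (256, 256)            # target resolution from the paper
BATCH = 32                       # assumed batch size (not stated in the paper)
DATA_DIR = "store_products/"     # hypothetical folder with one subfolder per label

# Read the 1,050 images, resized to 256x256, with labels inferred from the
# 35 class subfolders; batch_size=None keeps the dataset unbatched for splitting.
full_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, batch_size=None, shuffle=True, seed=42)

# 80/10/10 split into training, validation, and test subsets.
n = int(full_ds.cardinality().numpy())
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_ds = full_ds.take(n_train).batch(BATCH)
val_ds = full_ds.skip(n_train).take(n_val).batch(BATCH)
test_ds = full_ds.skip(n_train + n_val).batch(BATCH)
```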
Figure 1(a) is Ultra Milk Strawberry, a dairy product; other dairy products in the dataset include Frisian Flag Strawberry and Ultra Milk Cokelat. Figure 1(b) is Samyang, an instant noodle product. Figure 1(c) is Amidis and Figure 1(d) is Aqua, which are mineral water products; other mineral water products include Crystalline, Le Minerale, and Vit. Figure 1(e) is Lemonilo Mie Goreng, an instant noodle product; besides Samyang and Lemonilo Mie Goreng, the other instant noodle products are Indomie Goreng, Lemonilo Ayam Bawang, Lemonilo Kari Ayam, and Mie Sedap Goreng. Figure 1(f) is Bodrex Extra, a medicinal product; the other medicinal products are Antimo, Bodrex Flu dan Batuk Berdahak, Bodrex Flu dan Batuk PE, Bodrex Migraine, Promag, Tolak Angin, and Kayu Putih Cap Lang.

Figures 1(a) to 1(f) are part of the image data collected manually with a phone camera or taken from Google Images, with the resolution adjusted to 256×256 and the format converted to JPG. Only six randomly chosen products are shown; the dataset also contains several snacks, including Chitato Sapi Panggang, Garuda Rosta Bawang, and Lays Classic. Kopi Kapal Api and Luwak White Coffee are coffees. Paseo is a tissue. Pepsodent and Pepsodent Herbal are toothpastes. Teh Sariwangi is a tea. Zen Body Wash Pink is a shower gel. There are also several types of sauce, including Saus ABC Extra Pedas, Saus Dua Belibis, and Saus Jawara.




Figure 1. Dataset for (a) Ultra Milk Strawberry, (b) Samyang, (c) Amidis, (d) Aqua, (e) Lemonilo, and
(f) Bodrex Extra

3.2. Proposed Convolutional Neural Network


The convolutional layer is the central part of a CNN. Images are usually stationary, meaning that the statistics of one part of the image are the same as those of the rest, so the features learned in one region can match similar patterns in another. The filters are then trained using the back-propagation technique [6].
CNNs are currently among the most efficient models for classifying images [22]. Their computing speed continues to increase with hardware progress, and their range of applications has grown steadily as their superior performance has been demonstrated [23]. CNNs have shown good performance in the classification of objects [24]. Almost all CNN architectures follow the same general design principle of successively applying convolutional layers to the input while periodically downsampling the spatial dimensions and increasing the number of feature maps. There are also fully connected layers, activation functions, and a loss function; however, among all CNN operations, the convolutional, pooling, and fully connected layers are the most important ones.
The convolutional layer is the very first layer that extracts features from the images. Because pixels are related only to adjacent and nearby pixels, convolution preserves the relationship between different parts of an image. Convolution filters the image with a smaller pixel filter to decrease its size without losing the relationship between pixels. Applying convolution to a 7×7 image with a 3×3 filter and a stride of 1 yields a 5×5 output.
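For clarity, the standard output-size relation (not written out in the paper) for an input of width $W$, filter size $F$, padding $P$, and stride $S$, evaluated for the 7×7 example above, is:

$$o = \frac{W - F + 2P}{S} + 1 = \frac{7 - 3 + 2 \cdot 0}{1} + 1 = 5$$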
When constructing a CNN, it is common to insert a pooling layer after each convolution layer in order to reduce the spatial size of the representation. Pooling layers reduce the number of parameters and the complexity, and they also help with the overfitting problem. A fully connected network is any architecture in which every parameter is connected to every other parameter to determine the relationship and impact of each parameter on the label. The time and space complexity is reduced by using convolution and pooling layers.
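As a toy illustration of this downsampling (not taken from the paper), 2×2 max pooling with its default stride reduces a 4×4 feature map to 2×2 by keeping the largest value in each non-overlapping window:

```python
import tensorflow as tf

# 4x4 toy feature map, reshaped to (batch, height, width, channels).
x = tf.constant([[1., 3., 2., 0.],
                 [4., 6., 5., 1.],
                 [7., 2., 9., 3.],
                 [0., 8., 4., 2.]])
x = tf.reshape(x, (1, 4, 4, 1))

# 2x2 max pooling halves each spatial dimension.
pooled = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
print(tf.reshape(pooled, (2, 2)).numpy())   # [[6. 5.]
                                            #  [8. 9.]]
```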
In general, a CNN has two stages: feature learning and classification. Feature learning is a technique that allows a system to automatically determine the representation of an image as features in the form of numbers that represent the image. The classification stage is where the results of feature learning are used for the next process, based on the intended sorting. The image input to the CNN model is 256×256 in size. Feature learning consists of two layer types: the convolutional layer and the pooling layer [25]. The input image is first processed through a convolution process and a pooling process at the feature learning stage. Each convolution has a different number of filters and a different kernel size. A flattening process, that is, converting the feature map produced by the pooling layer, is then carried out. The built CNN model is depicted in Figure 2.

Figure 2. Segmentation model convolutional neural network

The structure of the CNN consists of inputs, feature extraction, classification, and outputs [26]. The CNN model built here consists of six convolution layers with 3×3 filters and the rectified linear unit (ReLU) activation function, with max pooling of size 2×2. A flattening process is then carried out, converting the matrix output of the convolution process into a vector, which is passed to the classification stage: a multi-layer perceptron with a predetermined number of neurons in the hidden layer. The image class is then determined from the values of the neurons in the hidden layer using the softmax activation function; the output of the final fully connected layer is fed to the softmax function [27]. Section 4 presents the results of applying the method described in this section.
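The following is a minimal Keras sketch of such a model. Only the elements stated above (six 3×3 convolution layers with ReLU, 2×2 max pooling, flattening, a hidden dense layer, and a 35-way softmax) come from the paper; the filter counts, the hidden layer size, and the input rescaling are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 35  # 35 product labels

model = models.Sequential([
    layers.Input(shape=(256, 256, 3)),
    layers.Rescaling(1.0 / 255),               # pixel normalization (assumed)
])
# Six convolution blocks: 3x3 kernels, ReLU, each followed by 2x2 max pooling.
for filters in (32, 32, 64, 64, 128, 128):     # assumed filter counts
    model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))        # assumed hidden layer size
model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
model.summary()
```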

4. RESULTS AND DISCUSSION


The CNN model training process is carried out for 50 epochs. Testing is performed using the test data for each class of the CNN model, which was created using the Adam optimizer. The resulting training and validation accuracy is depicted in Figure 3.
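Continuing the sketches above, the training setup could look like the following; the Adam optimizer and the 50 epochs are stated in the paper, while the loss function and metric are assumptions consistent with integer class labels.

```python
# Compile and train the sketched model on the datasets defined earlier.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=50)

# Evaluate on the held-out 10% test split.
test_loss, test_acc = model.evaluate(test_ds)
print(f"Test accuracy: {test_acc:.4f}")
```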

Figure 3. Accuracy chart




Figure 3 shows the training and validation accuracy: the more epochs that are run, the closer the accuracy gets to perfect. The training and validation curves rise together and remain close, and a rising accuracy curve is desirable, although further improvements to the accuracy are still possible. Figure 4 shows the corresponding training and validation loss: as the number of epochs grows, the loss decreases, and the training loss is lower than the validation loss.

Figure 4. Loss chart

Figure 4 shows training and validation loss curves that move almost in line and decrease over time; a decreasing loss curve is desirable, and the fact that accuracy and loss move in opposite directions indicates that training ran as expected. After training with the training and validation data, testing is carried out using the test data, showing that low loss and near-perfect accuracy can be produced, as demonstrated by the test and validation results described in Figure 5.

Figure 5. Results




From the test results in Figure 5, it can be seen that the proposed model successfully classified Ultra Milk Cokelat with an accuracy of 96.6%, Saus Jawara with 99.82%, Samyang with 100%, Mie Sedap Goreng with 99.99%, Vit mineral water with 71.11%, and Saus ABC Extra Pedas with 99.6%, achieving very good and appropriate accuracy and obtaining a final accuracy of 91.37% with an execution time of 33 minutes. With the CNN model built here, very good accuracy can be obtained for classifying mini-market products, and it is hoped that the methods used in this study can be applied in practice to facilitate the sorting of goods in mini markets. Section 5 presents the final conclusions and directions for further research.

5. CONCLUSION
This study classified store products using the CNN method and achieved an accuracy of 91.37%. The CNN model used consists of six convolution layers with a 3×3 filter size, the ReLU activation function, softmax, and 2×2 max pooling. The 1,050 images used are divided into 35 labels, each 256×256 pixels in size, and were trained on a sequential model for 50 epochs. This research can be developed further, for example with data augmentation, which has not yet been applied.

ACKNOWLEDGEMENTS
We thank Gunadarma University and the UG DGX Development Team. We would like to thank the entire team and our families for their support and motivation throughout the research period. We also thank our colleagues, who have always supported us with motivation and knowledge during this research, as well as our supervisors, who helped us with discussion and writing with great patience and enthusiasm to complete this research. We are also grateful to the previous research that provided the knowledge on which this work builds. This research is intended for future technology development so that it becomes even better and more advanced. We thank all parties who cannot be mentioned one by one.

REFERENCES
[1] N. K. Seminari, N. M. Rastini, and E. Sulistyawati, “The impact of modern retail on traditional retail traders in The Mengwi, Badung
district,” IRCS UNUD Journals, vol. 1, no. 1, pp. 35–43, Feb. 2017, doi: 10.24843/ujossh.2017.v01.i01.p05.
[2] M. I. H. Razali, M. A. Hairuddin, A. H. Jahidin, M. H. M. Som, and M. S. A. M. Ali, “Classification of nutrient deficiency in oil
palms from leaf images using convolutional neural network,” IAES International Journal of Artificial Intelligence, vol. 11, no. 4,
pp. 1314–1322, Dec. 2022, doi: 10.11591/ijai.v11.i4.pp1314-1322.
[3] S. Krishnan, P. Magalingam, and R. Ibrahim, “Hybrid deep learning model using recurrent neural network and gated recurrent unit
for heart disease prediction,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 6, pp. 5467–5476,
Dec. 2021, doi: 10.11591/ijece.v11i6.pp5467-5476.
[4] M. Al-Smadi, M. Hammad, Q. B. Baker, S. K. Tawalbeh, and S. A. Al-Zboon, “Transfer deep learning approach for detecting
coronavirus disease in X-ray images,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 6,
pp. 4999–5008, Dec. 2021, doi: 10.11591/ijece.v11i6.pp4999-5008.
[5] J. Li et al., “A shallow convolutional neural network for apple classification,” IEEE Access, vol. 8, pp. 111683–111692, 2020,
doi: 10.1109/ACCESS.2020.3002882.
[6] F. Sultana, A. Sufian, and P. Dutta, “Advancements in image classification using convolutional neural network,” in Proceedings
- 2018 4th IEEE International Conference on Research in Computational Intelligence and Communication Networks, ICRCICN
2018, Nov. 2018, pp. 122–129, doi: 10.1109/ICRCICN.2018.8718718.
[7] Y. Gulzar, Y. Hamid, A. B. Soomro, A. A. Alwan, and L. Journaux, “A convolution neural network-based seed classification system,”
Symmetry, vol. 12, no. 12, pp. 1–18, 2020, doi: 10.3390/sym12122018.
[8] E. Maggiori, Y. Tarabalka, G. Charpiat, and P. Alliez, “Convolutional neural networks for large-scale remote-sensing
image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 645–657, 2017,
doi: 10.1109/TGRS.2016.2612821.
[9] G. Cheng, C. Yang, X. Yao, L. Guo, and J. Han, “When deep learning meets metric learning: remote sensing image scene classi-
fication via learning discriminative CNNs,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 5, pp. 2811–2821,
May 2018, doi: 10.1109/TGRS.2017.2783902.
[10] S. Lu, Z. Lu, S. Aok, and L. Graham, “Fruit classification based on six layer convolutional neural network,” in
2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Nov. 2018, vol. 2018-Novem, pp. 1–5,
doi: 10.1109/ICDSP.2018.8631562.
[11] Y. Liu, F. Tang, D. Zhou, Y. Meng, and W. Dong, “Flower classification via convolutional neural network,” in Proceedings -
2016 IEEE International Conference on Functional-Structural Plant Growth Modeling, Simulation, Visualization and Applications
(FSPMA), Nov. 2016, pp. 110–116, doi: 10.1109/FSPMA.2016.7818296.
[12] D. J. Attokaren, I. G. Fernandes, A. Sriram, Y. V. S. Murthy, and S. G. Koolagudi, “Food classification from images using convolu-
tional neural networks,” in TENCON 2017-2017 IEEE Region 10 Conference. IEEE, Nov. 2017, vol. 2017-Decem, pp. 2801–2806,
doi: 10.1109/TENCON.2017.8228338.




[13] M. Dyrmann, H. Karstoft, and H. S. Midtiby, “Plant species classification using deep convolutional neural network,” Biosystems
Engineering, vol. 151, pp. 72–80, Nov. 2016, doi: 10.1016/j.biosystemseng.2016.08.024.
[14] D. Rathi, S. Jain, and S. Indu, “Underwater fish species classification using convolutional neural network and deep learning,”
in 2017 9th International Conference on Advances in Pattern Recognition (ICAPR), Dec. 2017, pp. 1–6,
doi: 10.1109/ICAPR.2017.8593044.
[15] M. V.D. Prasad et al., “An efficient classification of flower images with convolutional neural networks,” International Journal of
Engineering and Technology (UAE), vol. 7, no. 1.1, pp. 384–391, Dec. 2018, doi: 10.14419/ijet.v7i1.1.9857.
[16] A. I. Gustisyaf and A. Sinaga, “Implementation of convolutional neural network to classification gender based on fin-
gerprint,” International Journal of Modern Education and Computer Science, vol. 13, no. 4, pp. 55–67, Aug. 2021,
doi: 10.5815/IJMECS.2021.04.05.
[17] Z. M. Khaing, Y. Naung, and P. H. Htut, “Development of control system for fruit classification based on convolutional neural
network,” in Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering,
ElConRus 2018, Jan. 2018, vol. 2018-January, pp. 1805–1807, doi: 10.1109/EIConRus.2018.8317456.
[18] S. S. Liew, M. Khalil-Hani, S. Ahmad Radzi, and R. Bakhteri, “Gender classification: a convolutional neural network approach,”
Turkish Journal of Electrical Engineering and Computer Sciences, vol. 24, no. 3, pp. 1248–1264, 2016, doi: 10.3906/elk-1311-58.
[19] M. Elhoseiny, S. Huang, and A. Elgammal, “Weather classification with deep convolutional neural networks,” in 2015
IEEE international conference on image processing (ICIP), Sep. 2015, vol. 2015-Decem, pp. 3349–3353,
doi: 10.1109/ICIP.2015.7351424.
[20] S. Vishnupriya and K. Meenakshi, “Automatic music genre classification using convolution neural network,” in 2018 International
Conference on Computer Communication and Informatics, ICCCI 2018, Jan. 2018, pp. 1–4, doi: 10.1109/ICCCI.2018.8441340.
[21] M. A. Hossain and M. S. Alam Sajib, “Classification of image using convolutional neural network (CNN),” Global Journal of
Computer Science and Technology, pp. 13–18, May 2019, doi: 10.34257/gjcstdvol19is2pg13.
[22] A. Bouti, M. A. Mahraz, J. Riffi, and H. Tairi, “A robust system for road sign detection and classification using LeNet
architecture based on convolutional neural network,” Soft Computing, vol. 24, no. 9, pp. 6721–6733, Sep. 2020,
doi: 10.1007/s00500-019-04307-6.
[23] J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Networks, vol. 61, pp. 85–117, Jan. 2015,
doi: 10.1016/j.neunet.2014.09.003.
[24] S. Lim, S. Kim, and D. Kim, “Performance effect analysis for insect classification using convolutional neural network,” in 2017
7th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Nov. 2017, vol. 2017-Novem,
pp. 210–215, doi: 10.1109/ICCSCE.2017.8284406.
[25] T. Guo, J. Dong, H. Li, and Y. Gao, “Simple convolutional neural network on image classification,” in 2017 IEEE 2nd International
Conference on Big Data Analysis, ICBDA, 2017, pp. 721–724, doi: 10.1109/ICBDA.2017.8078730.
[26] E. Carabez, M. Sugi, I. Nambu, and Y. Wada, “Identifying single trial event-related potentials in an earphone-based auditory brain-
computer interface,” Applied Sciences (Switzerland), vol. 7, no. 11, p. 1197, 2017, doi: 10.3390/app7111197.
[27] E. J. Kim and R. J. Brunner, “Star-galaxy classification using deep convolutional neural networks,” Monthly Notices of the Royal
Astronomical Society, p. stw2672, Oct. 2016, doi: 10.1093/mnras/stw2672.

BIOGRAPHIES OF AUTHORS

I Made Wiryana holds degrees from the University of Indonesia, Gunadarma University, Edith Cowan University (Perth), and Bielefeld University (Germany). His research areas are fuzzy neural networks, socio-technical approaches to system development, and artificial intelligence. He is a faculty member at Gunadarma University, appointed as Director of International Collaboration and Relationship, Coordinator of the Incubator Business Centre at Gunadarma University, and a member of the Management Team for High-Performance Computing at Gunadarma University. He has led the development of asset management systems for various municipal governments in Indonesia, as well as system integration for the Ministry of Research and Technology, the Ministry of Foreign Affairs, the Ministry of Health, and also the Ministry of Government Officer. He also developed a system to support citizen science for biodiversity informatics. He can be contacted at email: [email protected].

Suryadi Harmanto has an educational background in Mathematics (Bachelor's degree, University of Indonesia) and Information Systems (Master's degree). He is a Professor at Gunadarma University, as well as a lecturer, researcher, and member of the management team for High-Performance Computing at Gunadarma University. His research topics are information systems, artificial intelligence, machine learning, deep learning, databases, and mathematics. He can be contacted at email: [email protected].




Alfharizky Fauzi holds a Bachelor of Engineering (S.T.) in Informatics Engineering and is currently pursuing a Master of Engineering (M.T.) in Electrical Engineering, in addition to several certificates and professional skills. He is currently a staff member at Gunadarma University. His research areas of interest include artificial intelligence, image processing, and data science. He can be contacted at email: [email protected].

Imam Bil Qisthi holds a Bachelor of Engineering (S.T.) in Informatics Engineering and is currently pursuing a Master of Engineering (M.T.) in Electrical Engineering, in addition to several certificates and professional skills. He is currently a staff member at Gunadarma University. His research areas of interest include artificial intelligence, image processing, natural language processing (NLP), and data science. He can be contacted at email: [email protected].

Zalita Nadya Utami holds a Bachelor of Engineering (S.T.) in Informatics Engineering and is currently pursuing a Master of Engineering (M.T.) in Electrical Engineering, in addition to several certificates and professional skills. She is currently a staff member at Gunadarma University. Her research areas of interest include web development, mobile development, artificial intelligence, image processing, and data science. She can be contacted at email: [email protected].

