Research Paper CCS7

This paper aims to classify images of "Etag", a traditional Filipino cured meat product, using convolutional neural networks (CNNs) and image augmentation techniques. It will collect images of "Etag" preserved through sun drying, smoking, and storing in pots. A CNN model will be trained on the original images and images augmented through flipping, rotating, and changing hue. The model will then be tested to consistently detect "Etag" images classified by preservation method, in order to better understand and identify this important cultural product amid a lack of scientific research.


Studying Classification of the Preservation Methods of “Etag” through Convolutional Neural Networks

INTRODUCTION

“Etag” is a native delicacy of the Cordillera, a region in the northern Philippines known for its indigenous products. It is made from smoked and salted meat preserved for periods ranging from two weeks to several months or a year, and it is highly prized in the region. According to Cadiogan et al. (2021), it is also known as “Igorot ham”: the chopped meat is rehydrated and served as an ingredient in different viands, and an Igorot version of sisig uses it as a flavor enhancer. Etag originated in Sagada, Mountain Province, and is traditionally processed by curing pork from native pigs; the meat is hung and allowed to dry in an open space before sun drying or passive solar drying for preservation (Nukulwar and Tungkiar, 2021). Etag has a strong odor and often contains maggots, which appear when the meat is left exposed to air and flies (Li, Rahimnejad, Wang, Lu, Song, and Zhang, 2019). Another traditional method of preservation is to store the meat undisturbed inside a pot for a year, a practice that reflects the region's rich culture and tradition. Across these three methods, the product changes in color, texture such as roughness, and scale (Smaoui, Hlima, Braiek, Ennouri, Ennouri, Mellouli, and Khaneghah, 2021). Traditionally preserved Etag turns dark, almost bark-like, after several days; its color and texture continue to change over long aging, and the preserved meat appears partly dark brown with a noticeable whitish-yellow fat color (Lorenzo, Munekata, Sant’Ana, Carvalho, Barba, Toldra, Mora, and Trindade, 2018).

Image classification is the task of extracting information classes from an image. It is achieved by collecting a large number of characteristics, referred to as features (Momeny, Sarram, Latif, Sheikhpour, and Zhang, 2021). Images of “Etag” from the different preservation methods will be differentiated by the distinct colors within the meat. Each image can contain thousands of pixels, and each pixel is represented by a set of ranged numbers; the size of that range, referred to as the bit depth, determines how many levels of the primary colors Red, Green, and Blue (RGB) are available (Wang, Zhang, Han, Liu, and Xu, 2020). In the case of Etag, the bit depth indicates the number of colors that can be used in an image. The images are converted into thousands of labeled features and categorized according to the meat's preservation method (Nanda and Goswami, 2021). To identify each preserved object effectively, each class in image recognition uses a training model that can be further improved by supplying a consistent number of images, extended through image augmentation (Zhenyang, Ruifeng, XiaoXing, and Hongliang, 2021). The Convolutional Neural Network (CNN) is extensively used in image recognition because of its self-learning ability, identifying each unique object with highly accurate results; it is therefore the supervised model chosen here to differentiate the meat preservation methods, and it belongs to the field of Deep Learning (Han-Cheng, Ge-Wen, and Zhi-Heng, 2021).
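To make the pixel and bit-depth idea concrete, the following sketch (hypothetical, using NumPy; not part of the paper's pipeline) builds a single 224x224 RGB image and counts the numbers and color levels it involves:

```python
import numpy as np

# A hypothetical 224x224 RGB photo at 8-bit depth: every pixel is three
# values (Red, Green, Blue), each ranging over 0-255, i.e. 2**8 = 256
# levels per primary color.
image = np.zeros((224, 224, 3), dtype=np.uint8)
image[:, :, 0] = 139   # a reddish-brown fill, loosely like cured meat
image[:, :, 1] = 69
image[:, :, 2] = 19

height, width, channels = image.shape            # (224, 224, 3)
values_per_image = image.size                    # 224 * 224 * 3 = 150528
levels_per_channel = 2 ** (8 * image.itemsize)   # 256 at 8-bit depth
```

A classifier never sees "an image" as such; it sees these 150,528 numbers, which is why the distinct colors of each preservation method become usable features.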

A CNN is built from multiple building blocks and is essentially composed of three major kinds of layers: convolutional layers, pooling layers, and fully connected layers (Shen, Yang, and Zhang, 2021). CNNs suit the meat preservation problem because of their accuracy and their ability to remain functional for image classification and recognition even with highly limited data sources; the technique focuses on the class level according to its condition (Yu, Yang, Zhang, Armstrong, and Deen, 2021). Practical guidelines have been summarized for using transfer learning to build classification models with limited data. This approach is also an alternative for the Cordilleran “Etag” and its different preservation techniques, for which samples are relatively hard to obtain. The assessment of very low-quality visual data is known to be difficult (Zhu, Braun, Chiang, and Romagnoli, 2021). Etag classification faces a similar problem: the objects may share similar traits, a common example being the overlapping features of the differently preserved meat, which makes the data hard to separate when it is poorly defined. In particular, the human ability to recognize encrypted visual data is currently impossible to determine computationally (Hofbauer, Autrusseau, and Uhl, 2021). Deep learning offers excellent potential for hyperspectral image (HSI) classification, but it is infamous for requiring a large number of labeled samples, while collecting high-quality labels for HSIs is extremely expensive and time-consuming (Fang, Li, Zhang, and Chan, 2020). Limited training samples may cause deep learning methods to overfit, so a model may perform poorly because of its massive number of parameters and the complexity of its network structure.
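The three layer types can be illustrated with a tiny NumPy forward pass. This is a didactic sketch only; the shapes, kernel, and weights below are made up and untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in CNN practice)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = rng.random((8, 8))                 # toy grayscale 8x8 "image"
kernel = rng.random((3, 3))              # one learnable filter

feat = np.maximum(conv2d(img, kernel), 0.0)  # convolutional layer + ReLU: 6x6
pooled = max_pool(feat)                      # pooling layer: 3x3
flat = pooled.ravel()                        # flatten for the dense layer
weights = rng.random((3, flat.size))         # fully connected layer: 3 classes
logits = weights @ flat
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: class probabilities
```

Stacking many such convolution-plus-pooling stages, each with many filters, is what lets a real CNN learn the color and texture cues that distinguish the preservation methods.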

Liu, Pu, and Sun (2021) describe image augmentation as a useful technique in convolutional neural networks: existing data are modified to generate additional training data, enhancing the size and quality of the dataset without acquiring new images. Bang, Baek, Park, Kim, and Kim (2020) apply CNNs to the recognition and detection of food images and evaluate their performance on training datasets; deep neural networks have also been successfully applied to image classification, segmentation, and object detection tasks in computer vision. According to Liu, Pu, and Sun (2021), the CNN has proven an efficient and promising tool for feature extraction, is considered the most popular deep learning architecture, and has been increasingly applied to the analysis and detection of complex food matrices, including meat and other products. To create image diversity in a model, Oyelade and Ezugwu (2021) present augmentation techniques for images and computer vision such as translation, cropping, rotation, flipping, color, contrast, brightness, and scaling augmentation. Augmentation is successfully used to prepare deep learning models because it increases the diversity of the data, as augmentation methods carry a degree of randomness. Bang, Baek, Park, Kim, and Kim (2020) introduce three techniques that achieve better performance than a network trained only on the original dataset: removing-and-inpainting, which removes objects from the original images to produce finished data; cut-and-paste, which extracts objects from the original data and recombines them; and image-variation, which applies three transformations, intensity-, blur-, and scale-variation, to the images.
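The three augmentations used in this study are simple array operations. A minimal NumPy sketch follows; note that the hue shift below is a crude channel cycle used as a stand-in, not the violet hue shift actually applied, which would operate in HSV space:

```python
import numpy as np

def flip_horizontal(img):
    """Mirror the image left-to-right."""
    return img[:, ::-1, :]

def rotate_right_90(img):
    """Rotate the image 90 degrees clockwise."""
    return np.rot90(img, k=-1)

def shift_hue(img):
    """Crude hue change: cycle the R, G, B channels. A real pipeline would
    convert to HSV and offset the H channel; this stand-in only illustrates
    that hue augmentation recolors the image without moving any pixels."""
    return np.roll(img, shift=1, axis=2)

# A synthetic 224x224 RGB image standing in for one Etag photo.
img = (np.arange(224 * 224 * 3) % 256).astype(np.uint8).reshape(224, 224, 3)
augmented = [flip_horizontal(img), rotate_right_90(img), shift_hue(img)]
# One original photo yields four training images, matching the growth of
# the dataset described later in this paper (151 * 4 = 604 images).
```

Because each transform is deterministic and cheap, the whole augmented dataset can be regenerated from the originals at any time.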

In this study, the researchers intend to identify the ways of preserving an indigenous product of the northern Cordillera region referred to as “Etag”. The study proposes a comparison between the traditional products, supported by a convolutional neural network together with rotation, flipping, and changing of hue as augmentation techniques. Through this, a machine learning model will be trained on a dataset to determine whether a product is categorized as sun-dried, smoked, or pot-preserved “Etag”. Image augmentation is used to expand the training dataset in order to improve the model's performance and generalizability. This problem is widespread in the Philippine community, as people are less knowledgeable about identifying Cordilleran meat, specifically “Etag”, its methods of preservation, and the length of time it is preserved. The lack of scientific studies on, and support for, the indigenous products of the northern Cordillera region is a major issue. To address this problem, the researchers intend to study and analyze the similarities between the objects and classify them accordingly. This will be achieved using supervised learning, processing the images through a series of CNN models.
● The paper aims to collect the data sources, pre-format them, and apply three defined augmentation techniques: flipping, rotating, and changing hue.
● The proposed model uses a pre-trained CNN model, Teachable Machine, to extract features from both the original and augmented images, in order to obtain more detailed features for all Etag images.
● The CNN structure is analyzed, and consistent detection of Etag is tested with datasets unfamiliar to the trained model.
Methodologies

[Figure 1 shows sample photos of each preservation class (Sun-Dried Preservation, Smoked Preservation, and Pot Preservation) in four versions: Original (224x224 px), Flipping (Horizontal), Rotating (Right 90°), and Changing Hue (Violet Hue).]

Figure 1. Original photos and augmentation techniques.

Qualitative research was used to gather the data: data were collected from video analysis and actual photos, gathering all the image samples of “Etag” for rotating, flipping, and changing of hue. The data augmentation techniques were adopted during the training phase to avoid overfitting. Figure 1 shows the files divided and categorized into three methods: sun-dried, smoked, and pot preservation. These served as the original photos before the augmentation techniques were applied. The researchers collected and pre-formatted all 151 images of the Cordilleran Etag at a dataset resolution of 224 x 224 pixels, with flipping set to horizontal, rotation set to 90°, and the hue shifted to violet. The first labeled file, “Sun-dried Preservation”, contained 51 images of sun-dried Etag; the second, “Smoked Preservation”, contained 50 images of smoked dried Etag; and the third, “Pot Preservation”, contained 50 images of pot-preserved Etag.
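The pre-formatting step above can be sketched as follows. This is a minimal illustration assuming hypothetical folder labels and a nearest-neighbour resize; the actual tooling the researchers used is not specified in the paper:

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbour resize to the fixed 224x224 input resolution."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output col
    return img[rows][:, cols]

# Hypothetical raw photos of differing sizes, one list per preservation class.
dataset = {
    "sun_dried": [np.zeros((480, 640, 3), dtype=np.uint8)],
    "smoked":    [np.zeros((300, 400, 3), dtype=np.uint8)],
    "pot":       [np.zeros((224, 224, 3), dtype=np.uint8)],
}

# Every photo is brought to the same 224x224 resolution before training.
formatted = {label: [resize_nearest(im) for im in photos]
             for label, photos in dataset.items()}
```

A fixed input resolution matters because a CNN's fully connected layers expect a fixed number of input values per image.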

Figure 2. Machine Learning Process using Teachable Machine.

Figure 2 shows the procedure for gathering the images from the different preservation methods of Etag. A supervised machine learning model, Teachable Machine, is proposed in this paper and was implemented to execute the first experiment. The dataset comprises 12 classes, with data augmentation applied before training. To collect all unique images, specified images from each preservation technique were extracted before the model's training process.

Figure 3 shows the training setup. The researchers uploaded 604 images to the Teachable Machine system and divided the data into three categories, training with 3 epochs, a batch size of 16, and a learning rate of 0.001, with an approximate processing time of 2-3 seconds. In addition, to inspect the inner workings of the model and obtain valuable information, Teachable Machine provides an “Under the hood” option where the details of the model are presented: the accuracy per class, the confusion matrix, the accuracy per epoch, and the loss per epoch.
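As a rough illustration of the reported settings (3 epochs, batch size 16, learning rate 0.001), the sketch below trains a plain softmax classifier on synthetic stand-in features. Teachable Machine's actual model is a pre-trained CNN, so this mirrors only the hyperparameters, not the architecture or the data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for 604 flattened image feature vectors spread over
# 12 classes, as in the Teachable Machine dataset described above.
n_samples, n_features, n_classes = 604, 64, 12
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, n_classes, n_samples)

W = np.zeros((n_features, n_classes))
lr, epochs, batch_size = 0.001, 3, 16   # the reported training settings

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

losses = []
for epoch in range(epochs):
    order = rng.permutation(n_samples)
    for start in range(0, n_samples, batch_size):
        idx = order[start:start + batch_size]
        probs = softmax(X[idx] @ W)
        probs[np.arange(len(idx)), y[idx]] -= 1.0     # dCE/dlogits
        W -= lr * X[idx].T @ probs / len(idx)         # one SGD step
    # cross-entropy over the whole set, recorded once per epoch
    epoch_loss = -np.log(softmax(X @ W)[np.arange(n_samples), y]).mean()
    losses.append(epoch_loss)
```

The per-epoch loss list corresponds to what the "Under the hood" loss-per-epoch chart plots for the real model.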

Findings and Discussion

Experiment    Number of Photos    Number of extracted Photos    Success rate of testing labeled Etag    Overall success rate
1             151                 30                            97-100%                                 100%
2             453                 90                            98-100%                                 100%
3             453                 90                            98-100%                                 100%
4             453                 90                            98-100%                                 100%
5             604                 120                           97-100%                                 100%

Table 1. Accuracy rate of each experiment


Table 2. Accuracy per class table

Table 1 shows the accuracy rates of testing and training the data. It has five columns: experiment, number of photos, number of extracted photos, success rate of testing labeled Etag, and overall success rate.

Experiment 1 contains the 151 original photos of the three preservation methods, with 30 extracted photos. Experiment 2 contains 453 photos, combining the original photos with rotation and changing hue. Experiment 3 likewise contains 453 photos combining the original photos with rotation and changing hue, and Experiment 4 contains 453 photos combining the original photos with flipping and changing hue. Experiment 5 contains 604 photos combining all three augmentation techniques with the original photos, and has 120 extracted photos.

Experiments 2, 3, and 4 each have 90 extracted photos and a total accuracy rate of 98-100%, while Experiments 1 and 5 have a success rate of 97-100%. Furthermore, a study of the Teachable Machine model revealed the accuracy rate of the labeled data based on 8 relatable samples. The lowest rate was recorded for rotating Etag in pot, with a 38% accuracy rate. The classes Etag smoked, Etag sun-dried, rotating smoked Etag, and rotating sun-dried Etag each showed an overall success rate of 50%. Etag pot showed 63%, while changing hue Etag pot and flipping smoked Etag each reached 75%. Changing hue smoked Etag reached 88%, and changing hue sun-dried and flipping sun-dried Etag demonstrated 100% accuracy. However, the researchers used a non-fixed dataset resolution and found that the model was still unable to recognise the correct object even at the precise resolution of 224 x 224.
Figure 4. Accuracy per epoch. Figure 5. Loss per epoch.

Figure 4 shows that the training accuracy starts at 0.3503 with a test accuracy of 0.4270, rises to 0.7086 (test accuracy 0.5937) at the second epoch, and increases to 0.8307 (test accuracy 0.5145) at the third epoch. There were 604 samples for training the model and 151 validation samples. Figure 5 shows the loss per epoch: the first loss is 1.7964 with a test loss of 1.2979; it decreases to 0.8522 with a test loss of 1.0223, and finally decreases a third time to 0.5669 with a test loss of 0.9413.

Conclusions

REFERENCES
[1] D. J. Cadiogan, S. C. Dy, C. J. L. Opaco, R. Rodriguez, J. T. Tan, K. Villanueva,
J. M. Mercado (2021). Manyisig: The Culinary Heritage Significance of Sisig in
Angeles City, Pampanga, Philippines. International Journal of Gastronomy and
Food Science
https://www.sciencedirect.com/science/article/abs/pii/S1878450X21000469

[2] M. R. Nukulwar, V. B. Tungkiar (2021). A review on performance evaluation of
solar dryer and its material for drying agricultural products. Materials Today:
Proceedings
https://www.sciencedirect.com/science/article/pii/S221478532036226X

[3] S. Smaoui, H. B. Hlima, O. B. Braiek, K. Ennouri, K. Ennouri, L. Mellouli, A. M.


Khaneghah (2021). Recent advancements in encapsulation of bioactive
compounds as a promising technique for meat preservation. Meat Science
https://www.sciencedirect.com/science/article/abs/pii/S0309174021001613
[4] J. M. Lorenzo, P. E. S. Munekata, A. S. Sant’Ana, R. B. Carvalho, F. J. Barba, F.
Toldra, L. Mora, M. A. Trindade (2018). Main characteristics of peanut skin and its
role for the preservation of meat products: Trends in Food Science & Technology
https://www.sciencedirect.com/science/article/abs/pii/S0924224417305010

[5] X. Li, S. Rahimnejad, L. Wang, K. Lu, K. Song, C. Zhang (2019). Substituting fish
meal with housefly (Musca domestica) maggot meal in diets for bullfrog Rana
(Lithobates) catesbeiana: Effects on growth, digestive enzymes activity,
antioxidant capacity and gut health. Aquaculture
https://www.sciencedirect.com/science/article/abs/pii/S0044848618312717

[6] M. Momeny, M. A. Sarram, A. M. Latif, R. Sheikhpour, Y. D. Zhang (2021) A


Noise Robust Convolutional Neural Network for Image Classification. Results in
Engineering
https://www.sciencedirect.com/science/article/pii/S2590123021000268

[7] J. Wang, H. Zhang, P. Han, C. Liu, Y. Xu (2020). Pixel re-representations for


better classification of images. Pattern Recognition Letters
https://www.sciencedirect.com/science/article/abs/pii/S0167865520301495

[8] P. Nanda, L. Goswami (2021). Image processing application in character


recognition. Materials Today: Proceedings
https://www.sciencedirect.com/science/article/pii/S2214785321027991

[9] Z. Shen, H. Yang, S. Zhang (2021). Neural network approximation: Three


hidden layers are enough. Neural Networks
https://www.sciencedirect.com/science/article/abs/pii/S0893608021001465
[10] H. Yu, L. Yang, Q. Zhang, D. Armstrong, M. J. Deen (2021). Convolutional
neural networks for medical image analysis: State-of-the-art, comparisons,
improvement and perspectives. Neurocomputing
https://www.sciencedirect.com/science/article/abs/pii/S0925231221001314

[11] W. Zhu, B. Braun, L. Chiang, J. Romagnoli (2021). Investigation of transfer


learning for image classification and impact on training sample size.
Chemometrics and Intelligent Laboratory Systems
https://www.sciencedirect.com/science/article/abs/pii/S016974392100037X

[12] H. Hofbauer, F. Autrusseau, A. Uhl (2021). To recognize or not to recognize –


A database of encrypted images with subjective recognition ground truth.
Information Sciences
https://www.sciencedirect.com/science/article/pii/S0020025520311415

[13] B. Fang, Y. Li, H. Zhang, J. C. Chan (2020). Collaborative learning of


lightweight convolutional neural networks and deep clustering for hyperspectral
image semi-supervised classification with limited training samples. ISPRS Journal
of Photogrammetry and Remote Sensing
https://www.sciencedirect.com/science/article/abs/pii/S0924271620300125

[14] S. Bang, F. Baek, S. Park, W. Kim, H. Kim (2020). Image augmentation to


improve construction resource detection using generative adversarial networks,
cut-and-paste, and image transformation techniques. Automation in
Construction
https://www.sciencedirect.com/science/article/abs/pii/S0926580519311653

[15] Y. Liu, H. Pu, D. Sun (2021). Efficient extraction of deep image features using
convolutional neural networks (CNN) for applications in detecting and analysing
complex food matrices. Trends in Food Science & Technology
https://www.sciencedirect.com/science/article/abs/pii/S0924224421003022
[16] O. N. Oyelade, A. E. Ezugwu (2021). A deep learning model using data
augmentation for detection of architectural distortion in whole and patches of
images. Biomedical Signal Processing and Control.
https://www.sciencedirect.com/science/article/abs/pii/S1746809420304730

[17] Z. Wang, R. Guo, H. Wang, X. Zhang (2021). A new model for small target
adult image recognition. Procedia Computer Science
https://www.sciencedirect.com/science/article/pii/S1877050921005731

[18] D. Sarwinda, R. H. Paradisa, A. Bustamam, P. Anggia (2021). Deep Learning
in Image Classification using Residual Network (ResNet) Variants for Detection of
Colorectal Cancer. Procedia Computer Science
https://www.sciencedirect.com/science/article/pii/S1877050921000284



Labeled data      Success rate of testing labeled Etag    Success rates across methods (smoked to sun-dried, sun-dried to pot, pot to smoked)    Overall success rate
Smoked Etag       97-100%                                 99-100%, 99-100%                                                                       99%
Sun-dried Etag    99-100%                                 100%, 100%                                                                             99%
Pot Etag          98-100%                                 99-100%, 98-100%                                                                       100%

The remaining images were equally distributed into three classes, each containing one specified preservation method. The sample images were then trained in the machine model, and the extracted data were inserted sequentially, with observed recognition rates ranging from 97-100% to 99-100%.
