International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538; Volume 11, Issue IX, Sep 2023 - Available at www.ijraset.com

Indian Food Image Classification Using Convolutional Neural Network

R. Shailaja, P. Rohith, R. Prem Kumar, K. Kannaki
B.E., Independent Researcher, Tamil Nadu, India

Abstract: Food classification using Convolutional Neural Networks (CNNs) has gained significant attention due to its potential applications in dietary analysis, food recommendation systems, and nutrition monitoring. In this study, we present a CNN-based approach for food classification, leveraging image data augmentation to enhance the model's ability to generalize to new and unseen food images. We utilize a dataset containing diverse food categories and preprocess the images by resizing and normalization. The proposed CNN architecture consists of multiple convolutional and pooling layers, along with dropout and batch normalization for regularization. To prevent overfitting, we employ early stopping as a custom callback during model training. The experimental results demonstrate a promising training accuracy of 90.13% and a relatively low training loss of 0.3066, indicating effective learning from the training data; the validation accuracy is 69.00% and the validation loss is 1.270.

Keywords: Food Classification, Convolutional Neural Network, Indian Food, Food recognition, Food detection, TensorFlow, Deep Learning.

I. INTRODUCTION
Indian food classification using Convolutional Neural Networks (CNNs) has gained significant attention for its potential impact on dietary analysis, personalized nutrition, and food recommendation systems. With the exponential growth of food-related images shared on social media, blogs, and restaurant review platforms, automated food recognition has become crucial for understanding eating habits and promoting healthier choices. However, food classification poses challenges due to the wide variation in appearance caused by different cooking styles, ingredient combinations, and food presentations.

CNNs have shown great promise in image classification tasks, including food recognition, but they require large and diverse datasets for effective learning. Obtaining such datasets can be challenging, leading to potential overfitting on limited data. To address these issues, image data augmentation techniques are employed to artificially increase the diversity of the training dataset. By applying random transformations such as rotations, zooming, and flips, the CNN is exposed to varied versions of the same food images, reducing the risk of overfitting and improving generalization to unseen data.

In this work, we propose a CNN-based food classification approach that leverages image data augmentation to enhance the model's accuracy on diverse food categories. We use a carefully preprocessed dataset of food images and design a CNN architecture with specialized layers for effective feature extraction and regularization. To prevent overfitting, we introduce an early stopping mechanism during training, implemented as a custom callback; in this work, the callback value for the training loss is set at 0.3066.
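As a reading aid, the following minimal sketch shows one way such a target-loss early-stopping callback could be written in TensorFlow/Keras; the class name TargetLossStopping and the exact coding are illustrative assumptions, not the authors' code.

```python
import tensorflow as tf

class TargetLossStopping(tf.keras.callbacks.Callback):
    """Halt training once the training loss drops to a target value."""

    def __init__(self, target_loss=0.3066):
        super().__init__()
        self.target_loss = target_loss

    def on_epoch_end(self, epoch, logs=None):
        # Keras passes the epoch's metrics in `logs`; "loss" is the training loss.
        loss = (logs or {}).get("loss")
        if loss is not None and loss <= self.target_loss:
            print(f"\nTraining loss {loss:.4f} reached target "
                  f"{self.target_loss}; stopping training.")
            self.model.stop_training = True
```

The threshold is a constructor argument here, so the same class also covers the target loss of 0.3129 reported later in Section III-E.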
The structure of this paper is as follows: Section 2 provides an overview of related work in the field of food classification using CNNs and image data augmentation techniques. Section 3 details the methodology, including the dataset, preprocessing steps, CNN architecture, and data augmentation strategy. Section 4 presents the experimental setup and analyzes the results obtained from model training and evaluation. Section 5 concludes the paper, outlining future research directions to advance the field of food classification using Convolutional Neural Networks.

II. LITERATURE SURVEY
Convolutional Neural Networks (CNNs) have become widely popular in food recognition applications, surpassing traditional machine learning methods in terms of performance, and numerous studies have successfully utilized CNNs for food recognition. In [1], the authors altered the architecture of the AlexNet model, first described in [2], and created a deep CNN using images from the Food-101 dataset; this strategy attained a top-1 accuracy of 56.4%. The use of CNNs for food recognition and identification across ten food groups in [3] is another noteworthy contribution: the results demonstrated CNNs' strong performance, with a detection accuracy of 73.7% compared to traditional techniques. On the UEC-FOOD-100 dataset [5], which contains 100 Japanese food classes, CNNs were the only feature extractors employed in [4]; this method achieved an accuracy of 72.3%, with feature maps created by combining the outputs of a pre-trained AlexNet model with manually crafted features. A new food dataset was introduced in [6], comprising photos gathered from open social media sites such as Instagram; it produced a CNN classification performance comparable to previous datasets, with a remarkable accuracy of 99.1%.

III. METHODOLOGY
A. Dataset
The food classification model is trained on a dataset named Indian Food Classification, which contains 6,269 images belonging to 20 food categories. The 20 food categories are burger, butter_naan, chai, chapati, chole_bhature, dal_makhani, dhokla, fried_rice, idli, jalebi, kaathi_rolls, kadai_paneer, kulfi, masala_dosa, momos, paani_puri, pakode, pav_bhaji, pizza, and samosa. In this dataset, 80% of the images are used for training and 20% for validation.

Fig. 1 Dataset sample

B. Data Preprocessing
Before feeding the images into the CNN model, several preprocessing steps are applied to standardize the data and facilitate model learning. The images are resized to a fixed size of 64x64 pixels to ensure uniformity of the input, and pixel values are normalized to the range [0, 1] by dividing each pixel value by 255. This normalization helps stabilize the training process and improve convergence.
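A minimal sketch of this loading and preprocessing step, assuming TensorFlow/Keras and a directory named indian_food_dataset with one sub-folder per class (both are assumptions, not details given in the paper):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

DATA_DIR = "indian_food_dataset"  # hypothetical path; one sub-folder per food class

# Rescale pixel values to [0, 1] and reserve 20% of the images for validation.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    DATA_DIR,
    target_size=(64, 64),      # resize every image to 64x64 pixels
    batch_size=64,
    class_mode="categorical",  # one-hot labels for the 20 categories
    subset="training",
)
val_gen = datagen.flow_from_directory(
    DATA_DIR,
    target_size=(64, 64),
    batch_size=64,
    class_mode="categorical",
    subset="validation",
)
```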
C. Convolutional Neural Network Architecture
The proposed CNN architecture is designed to effectively extract features from food images for accurate classification. The architecture consists of multiple layers, including convolutional layers, pooling layers, dropout layers, and batch normalization layers. The CNN model starts with a convolutional layer with 32 filters of size (3, 3) and a ReLU activation function. The stride is set to 1 and padding is applied to maintain the spatial dimensions. A max-pooling layer with a pool size of (2, 2) and a stride of 2 follows the first convolutional layer to downsample the feature maps and reduce computational complexity. Subsequently, another convolutional layer with 64 filters and a ReLU activation function is applied, and dropout with a rate of 0.2 is introduced after this second convolutional layer to prevent overfitting during training. A max-pooling layer is then employed with the same specifications as before. The third convolutional layer consists of 128 filters and a ReLU activation function, followed by another max-pooling layer to downsample the feature maps further. Following the convolutional and pooling layers, the feature maps are flattened into a one-dimensional vector, which is passed through a fully connected dense layer with 512 units and a ReLU activation function to capture higher-level features and representations of the input images. To further prevent overfitting, a dropout layer with a rate of 0.3 is introduced after the dense layer. Finally, the output layer contains a number of units equal to the number of food classes, with a softmax activation function to produce a probability distribution over the classes.

D. Image Data Augmentation
Image data augmentation is a crucial technique employed to enhance the model's generalization and prevent overfitting. During model training, the ImageDataGenerator class from Keras is used to apply various transformations to the input images. The transformations include random rotations, zooming, horizontal and vertical shifts, and horizontal flips. These augmented images, along with the original images, create a more diverse training dataset, which helps the model learn more robust and representative features for accurate classification.

Fig. 2 Processed dataset

E. Model Compilation and Training
The model is compiled using the Adam optimizer, a popular choice for optimizing deep learning models. The categorical cross-entropy loss function is used as the training objective, as it is well suited for multi-class classification problems. During training, the model is fed batches of images generated by the ImageDataGenerator. The training process runs for a specified number of epochs (in this case, 100) with a batch size of 64. Early stopping is applied as a custom callback with a target loss value of 0.3129. This early stopping mechanism allows training to halt once the specified target loss is achieved, helping to prevent unnecessary computation and potential overfitting.
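The sketch below assembles the architecture, augmentation, and compilation described above in TensorFlow/Keras. The specific augmentation ranges, the default 'valid' padding after the first layer, and the omission of batch-normalization layers (whose placement is not detailed in Section III-C) are assumptions for illustration, not the authors' exact configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 20  # the 20 Indian food categories

# Layer stack following the description in Section III-C.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu",
                  input_shape=(64, 64, 3)),
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Dropout(0.2),
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Augmented training generator along the lines of Section III-D
# (the specific ranges below are illustrative assumptions).
augmented_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    zoom_range=0.2,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training sketch: 100 epochs, batch size 64, halted early by the
# target-loss callback sketched in Section I (TargetLossStopping),
# using the generators from the data-loading sketch in Section III-B.
# history = model.fit(train_gen, validation_data=val_gen, epochs=100,
#                     callbacks=[TargetLossStopping(target_loss=0.3129)])
```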
IV. EVALUATION
The model's performance is evaluated using several metrics: training accuracy, training loss, validation accuracy, and validation loss. The training accuracy and training loss measure the model's performance on the training dataset, while the validation accuracy and validation loss assess its generalization to unseen data from the validation dataset. Additionally, a confusion matrix is computed to analyse the model's performance on individual food classes. This matrix provides insight into the number of true positive, true negative, false positive, and false negative predictions for each class, enabling a detailed analysis of classification accuracy for different food categories.

Fig. 3 Comparison of training accuracy and validation accuracy

Fig. 4 Comparison of training loss and validation loss

Fig. 5 Confusion matrix of the proposed system

The above figures confirm the experimental results of the proposed system.

V. CONCLUSION
In this research study, a Convolutional Neural Network, a deep learning technique, is used to classify Indian food images into their classes. The dataset considered here is the Indian Food Classification dataset. The experiment yields a promising training accuracy of 90.13% and a relatively low training loss of 0.3066, indicating effective learning from the training data; the validation accuracy is 69.00% and the validation loss is 1.270.

REFERENCES
[1] L. Bossard, M. Guillaumin, and L. Van Gool, "Food-101: Mining discriminative components with random forests," in European Conference on Computer Vision, 2014, pp. 445-461.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[3] H. Kagaya, K. Aizawa, and M. Ogawa, "Food detection and recognition using convolutional neural network," in Proceedings of the 22nd ACM International Conference on Multimedia, 2014, pp. 1085-1088.
[4] Y. Kawano and K. Yanai, "Food image recognition with deep convolutional features," in Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, 2014, pp. 589-593.
[5] Y. Matsuda, H. Hoashi, and K. Yanai, "Recognition of multiple-food images by detecting candidate regions," in Multimedia and Expo (ICME), 2012 IEEE International Conference on, 2012, pp. 25-30.
[6] H. Kagaya and K. Aizawa, "Highly accurate food/non-food image classification based on a deep convolutional neural network," in International Conference on Image Analysis and Processing, 2015, pp. 350-357.
