
MINOR PROJECT REPORT

on

Calorie Measurement for Raw Vegan Diet Using Deep Learning Networks

For
18ECP107L / Minor Project

Submitted by
Shivani Saraf (RA1911004010644)

Ram Kumar Bagaria (RA1911004010629)

Semester – VII
Academic Year: 2022-23

Under the supervision of

Guide Name: Dr. Harisudha K., Assistant Professor, Department of ECE

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

College of Engineering and Technology, SRM Institute of Science and Technology
SRM Nagar, Kattankulathur – 603203, Kancheepuram District, Tamil Nadu.

NOV 2022

SRM Institute of Science and Technology
(Under Section 3 of UGC Act, 1956)

BONAFIDE CERTIFICATE

Certified that this project report titled “Calorie Measurement for Raw Vegan Diet Using
Deep Learning Networks” is the bonafide work of SHIVANI SARAF [Reg No:
RA1911004010644] and RAM KUMAR BAGARIA [Reg No: RA1911004010629], who
carried out the project work under my supervision. Certified further that, to the best of my
knowledge, the work reported herein does not form part of any other project report or
dissertation on the basis of which a degree or award was conferred on an earlier occasion
on this or any other candidate.

SIGNATURE (Guide)
Dr. Harisudha K., Assistant Professor, Dept. of Electronics & Communication Engineering

SIGNATURE (Prof In-charge)
Dr. J. Subhashini, Dept. of Electronics & Communication Engineering

TABLE OF CONTENTS

ABSTRACT iv

ACKNOWLEDGEMENTS v

LIST OF TABLES vi

LIST OF FIGURES vii

1. INTRODUCTION 1
1.1 Problem Context..........................................................................................1

2. LITERATURE SURVEY 2

3. PROPOSED SYSTEM 4

ABSTRACT
One of the essential needs of every living thing on earth is food, and people around the world are becoming
increasingly conscious of their diets. The fight against obesity, weight gain, diabetes, and related conditions
requires accurate methods for measuring food and energy intake. A practical system that helps users and
patients track their food consumption and collect dietary data could offer valuable insight for long-term
prevention and effective treatment programs. In this report, we present a calorie-measurement method that
can help patients and medical professionals fight diet-related diseases. With the proposed method, the user
can snap a photo of a meal and instantly determine how many calories it contains. We train a deep
convolutional neural network on 80 high-resolution food photos per class to classify foods and precisely
identify the food components in the user's camera image, and we deploy the Faster R-CNN algorithm to
detect food items and label them appropriately.

ACKNOWLEDGEMENTS

We want to express our deepest gratitude to our guide, Dr. Harisudha K., for her valuable guidance,
consistent encouragement, personal attention, and timely help, and for providing us with an excellent
atmosphere for doing the project. Throughout the work, despite her busy schedule, she extended
cheerful and cordial support to us in completing this project.

LIST OF TABLES

Literature Survey....................................................................................................3

Result....................................................................................................................9

LIST OF FIGURES

Design Methodology..................................................................6
Result............................................................................................8
Implemented Code.....................................................................12
Implemented Code......................................................................13

CHAPTER 1

INTRODUCTION

A calorie is a unit of heat energy: the amount of heat needed to raise the temperature of one gram of water
by one degree Celsius. Calories are essential for the body because they provide energy. However, it is
generally agreed that anything in excess is harmful, and this holds for calorie intake as well. Worldwide
obesity has nearly tripled since 1975, and both adults and children are affected by this global epidemic. In
2016, there were more than 1.9 billion overweight adults, of whom more than 650 million were obese.
Obesity and overweight are primarily caused by an energy imbalance between calories consumed and
calories burned, typically driven by excessive food intake combined with a lack of exercise. As a result, it is
critical to monitor one's diet correctly. Studies among teenagers indicate that creative use of technology may
improve the integrity of young people's dietary information, and many teenagers aged 13 to 17 worldwide
rely extensively on digital technology in their day-to-day lives. Additionally, as people adapt to sedentary
lifestyles, they unintentionally lose awareness of how much food and energy they consume.

Patients who are obese run a high risk of developing many comorbid diseases, including cardiovascular
disease (CVD), gastrointestinal problems, type 2 diabetes (T2D), joint and muscle problems, respiratory
problems, and psychological concerns. These ailments can significantly affect patients' day-to-day lives and
increase their risk of death. Most people are familiar with how diet and health are related, and consumers
have access to a wide range of dietary requirements and information at their fingertips. Such knowledge,
however, has not been enough to shield patients from diet-related ailments or encourage them to eat wisely.
Most of the time, individuals find it challenging to analyze all the dietary and nutritional facts available.
Due to poor health awareness, erratic eating habits, or a lack of self-control, people are also often unaware
of how to measure or regulate their daily caloric intake. Innovative techniques that enable patients to make
long-lasting changes to their calorie intake and diet quality are needed to provide effective long-term
treatment.

Keeping a food journal on a computer or mobile device is practical for adults and teenagers alike: most
could capture before-and-after pictures of their meals with only basic skills. Additionally, more precise
dietary assessment techniques will improve researchers' capacity to recognize associations between diet and
disease and between diet and genes. These factors have led numerous scientists to propose assistive
calorie-tracking systems that run on smartphones and allow the user to take a picture of the food while the
system automatically calculates the calorie consumption.

We examine the application of a deep learning neural network object detection method in this paper.
Multiple processing layers are used in deep learning, a type of machine learning based on artificial neural
networks, to extract ever-more-complex features from data. We demonstrate that the identification and
classification of food can be much more accurate when deep learning is used.

Here we investigated two different neural network models. The first is a Convolutional Neural Network
(CNN), a classification model; the other is the Faster R-CNN model, an object detection model. In contrast
to classification algorithms, object detection algorithms try to locate the object of interest inside the image
by drawing a bounding box around it. Furthermore, in an object detection scenario, multiple bounding boxes
may be drawn to represent the various objects of interest in the image rather than just one. For Faster R-CNN,
we created a weighted file by training on the dataset using a GPU; this file is then used for detecting and
labeling objects in the input image. Object detection is a hybrid technique that combines informative region
selection, feature extraction, and classification to provide a suitable outcome with high accuracy.

CHAPTER 2

LITERATURE SURVEY

1. 24-Hour Dietary Recall (24HR)
   Authors: M. Livingstone, P. Robson and J. Wallace; L. Bandini, A. Must, H. Cyr, S. Anderson, J. Spadano and W. Dietz
   Journal: The American Journal of Clinical Nutrition, vol. 78, pp. 480–484, 2003.
   Inference: This procedure lists the daily food intake using a special format over a period of 24 hours.
   Food portion sizes are estimated using standardized cups and spoons, and the recorded food amounts
   are converted into nutrient intakes using food composition tables.

2. Food Frequency Questionnaire (FFQ)
   Authors: W. Luo, H. Morrison, M. de Groh, C. Waters, M. DesMeules, E. Jones-McLean, A.-M. Ugnat, S. Desjardins and M. L. a. Y. Ma
   Journal: Chronic Diseases in Canada, vol. 27, no. 4, pp. 135-144, 2007.
   Inference: The FFQ focuses on describing dietary patterns or food habits, but not calorie intake, and
   uses external verification based on doubly labeled water and urinary nitrogen. The main disadvantages
   of the 24HR and FFQ are: the delay in reporting the eaten food, underreporting of food portion sizes,
   reliance on memory, the need for skilled interviewers who can estimate how many calories and
   nutrients the person has taken, failure to quantify usual dietary intake, and the complex calculations
   needed to estimate frequencies.

3. A web application for an obesity prevention system based on individual lifestyle analysis
   Authors: Y. Kato, T. Suzuki, K. Kobayashi, Y. Nakauchi
   Journal: IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1718-1723, Oct. 2012.
   Inference: The application acquires and registers data about diet, exercise, sleep, and fat mass using a
   web application and health information sensors.

4. Image-based Calorie Content Estimation for Dietary Assessment
   Authors: T. Miyazaki, G. C. De Silva, K. Aizawa
   Journal: IEEE International Symposium on Multimedia (ISM), pp. 363-368, 5-7 Dec. 2011.
   Inference: The images in the dictionary are used for dietary assessment, but with only 6512 images,
   the accuracy of the approach is low.

5. 3D/2D model-to-image registration for quantitative dietary assessment
   Authors: H. C. Chen, W. Jia, Z. Li, Y. Sun, M. Sun
   Journal: 38th Annual Northeast Bioengineering Conference (NEBEC), pp. 95-96, March 2012.
   Inference: The food is segmented from the background image using morphological operations, while
   the size of the food is estimated based on a user-selected 3D shape model.

6. Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment
   Authors: Fengqing Zhu, Marc Bosch, Nitin Khanna, Carol J. Boushey
   Journal: IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 377-389, January 2015.
   Inference: A set of segmented objects is partitioned into similar object classes based on their features;
   different segmentation methods were applied, and the automatically segmented regions were classified
   using a multichannel feature classification system.

CHAPTER 3

PROPOSED SYSTEM

Faster R-CNN is an object detection model that improves on Fast R-CNN by utilising a region proposal
network (RPN) with the CNN model. The RPN shares full-image convolutional features with the detection
network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously
predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate
high-quality region proposals, which are used by Fast R-CNN for detection. RPN and Fast R-CNN are
merged into a single network by sharing their convolutional features: the RPN component tells the unified
network where to look.

As a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network
that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.
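The two-module structure described above can be sketched schematically in plain Python. This is an illustrative skeleton only, not a real network: the real backbone, RPN, and detector head are convolutional networks, and every function name and value here is hypothetical.

```python
# Schematic sketch of Faster R-CNN's two-module structure (illustrative only:
# real modules are convolutional networks; all names and values are hypothetical).

def extract_features(image):
    # Stand-in for the shared backbone CNN: both modules reuse this output,
    # which is what makes region proposals nearly cost-free.
    return {"feature_map": image}

def region_proposal_network(features):
    # Stand-in for the RPN: returns candidate boxes (x1, y1, x2, y2)
    # together with objectness scores.
    return [((10, 10, 50, 50), 0.9), ((0, 0, 5, 5), 0.1)]

def fast_rcnn_detector(features, proposals, objectness_threshold=0.5):
    # Stand-in for the Fast R-CNN head: classifies each surviving proposal.
    return [(box, "apple") for box, score in proposals
            if score >= objectness_threshold]

def faster_rcnn(image):
    features = extract_features(image)   # computed once, shared by both modules
    proposals = region_proposal_network(features)
    return fast_rcnn_detector(features, proposals)

detections = faster_rcnn(image="raw pixels")
print(detections)
```

The key design point the sketch preserves is that `extract_features` runs once and both modules consume its output, which is why the unified network avoids recomputing features per proposal.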

CHAPTER 4

DESIGN METHODOLOGY

A deep convolutional network called Faster R-CNN, employed for object detection, appears to the
user as a single, end-to-end, unified network. The network can forecast the positions of various
items rapidly and accurately [17]. The initial step in our method is to use the CNN network to
create a pre-trained weighted model file. This is accomplished by first taking a collection of
photographs for three distinct classes (Apple, Orange, and Banana, 80 images per category) and
then labeling the food items in those images by drawing rectangular boxes around them with the
LabelImg tool. The background of these images does not affect the model's outcome, as Faster
R-CNN uses a region proposal network that identifies only the food objects in the images. Faster
R-CNN uses neural networks for region selection. The selective search employed in earlier methods
produces approximately 300 to 400 region proposals, whereas the region proposal network produces
only a few, allowing it to understand the image and give relatively modest region proposals; the
computation therefore takes relatively little time. Faster R-CNN differs significantly from its
predecessors in this way. The program loads the model file once it has been trained on the training
dataset, and the model is then evaluated against photos taken with a camera.
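The LabelImg tool saves each image's boxes as a Pascal VOC XML file. The standard-library sketch below shows how one such annotation can be read; the XML sample is a hypothetical example of LabelImg's output format, not a file from the project's dataset.

```python
# Reading a LabelImg (Pascal VOC) annotation with the standard library.
# The XML sample below is hypothetical, mimicking LabelImg's output format.
import xml.etree.ElementTree as ET

sample_annotation = """
<annotation>
  <filename>apple_01.jpg</filename>
  <object>
    <name>Apple</name>
    <bndbox><xmin>34</xmin><ymin>40</ymin><xmax>210</xmax><ymax>230</ymax></bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_text):
    # Returns a list of (class name, (xmin, ymin, xmax, ymax)) tuples.
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((name, tuple(int(bb.findtext(t))
                                  for t in ("xmin", "ymin", "xmax", "ymax"))))
    return boxes

print(parse_voc(sample_annotation))  # [('Apple', (34, 40, 210, 230))]
```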

The following diagram illustrates the Faster R-CNN architecture. It consists of two modules:
1. RPN: produces region proposals.
2. Fast R-CNN: detects objects in the proposed regions.

Region proposals are created by the RPN module, which uses neural networks to implement the idea
of attention: it tells the detection module where to look for objects in the picture. Each proposal is
parametrized relative to a reference box called an anchor box. For each region proposal, the
convolutional layer produces a vector with two elements. A region proposal whose first element is 1
and second element is 0 is classified as background; if the first element is 0 and the second element
is 1, the region represents an object. When training the RPN, each anchor is given a positive or
negative objectness score based on the intersection-over-union (IoU). The IoU is the area where the
ground-truth box and the anchor box intersect, divided by the area covered by the union of the two
boxes. The IoU lies between 0.0 and 1.0: it is 0.0 if there is no intersection, and it rises toward 1.0
as the two boxes overlap more, reaching 1.0 when the boxes coincide exactly.
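The IoU definition above can be written directly as a small function over corner-format boxes. This is a minimal sketch using hypothetical box values, not code from the project.

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); IoU = intersection area / union area.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 for identical boxes
print(iou((0, 0, 1, 1), (5, 5, 6, 6)))  # 0.0 when there is no overlap
```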

If an anchor has an IoU overlap greater than 0.7 with any ground-truth box, it is given a positive
objectness label. If no anchor has an IoU overlap greater than 0.7 with a given ground-truth box,
the anchor or anchors with the highest IoU overlap for that box are labeled positive instead. An
anchor whose IoU overlap is less than 0.3 for all ground-truth boxes is given a negative objectness
label and is classified as background. Anchors that fall into neither the positive nor the negative
category do not contribute to the training objective. The RPN processes the image using the same
convolutional layers as the Fast R-CNN detection network; as a result, compared to algorithms
like selective search, the RPN needs almost no extra time to create the proposals. Because the RPN
and Fast R-CNN use the same convolutional layers, they can be combined, or unified, into one
network, so training is performed only once. Each of the image's region proposals is used to create
a fixed-length feature vector in the ROI pooling layer, and the Fast R-CNN head then classifies the
extracted feature vectors. The class probabilities of the identified items are returned together with
their bounding boxes. If a label has a probability greater than a predefined threshold, the user sees
the output along with the object name in the dialogue box (we set the threshold at 0.7). The system
calculates the calories after verifying the object's name, and the output is then printed to the user
with the corresponding calories.
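The final step, keeping only detections whose class probability exceeds the 0.7 threshold and looking up calories per item, can be sketched as follows. The table values and function names are illustrative, not the project's actual data.

```python
# Hypothetical sketch of the reporting step: filter detections by the 0.7
# probability threshold, then look up calories for each surviving label.
CALORIE_TABLE = {"Apple": 95, "Orange": 62, "Banana": 111}  # illustrative values

def report(detections, threshold=0.7):
    # detections: list of (label, probability) pairs from the detector.
    lines = []
    for label, prob in detections:
        if prob > threshold and label in CALORIE_TABLE:
            lines.append(f"{label}: {CALORIE_TABLE[label]} Cal (p={prob:.2f})")
    return lines

print(report([("Apple", 0.95), ("Orange", 0.40), ("Banana", 0.81)]))
```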

(a) Before threshold (b) After threshold (0.7)

CHAPTER 5

RESULT

The Faster R-CNN allows us to calculate food calories with greater accuracy and improve food
classification and recognition. The initial step in our approach is to use the CNN network to create
a pre-trained weight model file.

Figure 7 shows the sample raw food and the calories calculated for each fruit. To complete this
phase, we first took a collection of photographs belonging to a single class as well as photographs
containing multiple types of food items (for instance, 80 images of the apple class), after which we
labeled the food pieces in them with their names. The system is trained on these image sets once
they have been collected. In our case, we also trained the system with varied background photos so
that it could identify and classify food portions within them, helping to boost model accuracy on
real-world data. We load the model file, formed from the training dataset, into the Faster R-CNN
algorithm and then test the model on photos captured by the user. The user is presented with the
labels that have the highest likelihood (above 0.7) in the dialogue box. Our system recognized the
food portions with high accuracy within 0.2 seconds.

The graph above shows that Faster R-CNN is substantially faster than its forerunners; as a
result, it can be utilized for real-time object detection. Table 1 shows the output of the Faster R-CNN
algorithm on food items.

TABLE 1 FOOD WEIGHT & CALORIES MEASUREMENT

No.  Food Name              Quantity  Weight (g)  Calories Measured (Cal)
1    Apple                  1         180         95
2    Orange                 3         393         186
3    Banana                 2         250         222
4    Egg                    1         150         17
5    Tomato                 3         333         60
6    Potato                 2         426         328
7    Carrot, Tomato         3, 2      233         95
8    Mango, Apple, Orange   3, 3, 1   1679        1077

CHAPTER 6

CONCLUSION

Our objective in this research is to empower the user by equipping them with a practical, sensible, and accessible
technique that enables informed decisions about daily caloric intake. We proposed a measurement approach that
predicts the number of calories from a food's image by measuring the size of the food components in the image
and using nutritional data tables to determine the number of calories in the meal; a calorie estimate is then
presented in the results. To accurately classify and identify various foods in a single image, we used a Faster
R-CNN deep learning network. We demonstrated that the technique offers a potent tool, with our system
achieving 99% precision in food recognition.

CHAPTER 7

FUTURE WORK

Our goal is to provide the user with a practical, well-informed, and optimal method that enables them
to make wise decisions regarding their calorie consumption. Future studies will involve expanding our
image library and testing mixed food portions using the YOLOv7 algorithm, which achieves the highest
accuracy among current real-time object detection models. To give users more precise information about
their nutritional needs, we will also estimate the weight and volume of food portions and employ a
precision sensor to measure the calories in liquid foods such as milk, sauce, tea, juices, and other
beverages.

CHAPTER 8

APPENDIX

1. Detecto Module –
Detecto is a Python package that allows you to build fully-functioning computer vision and object
detection models with just 5 lines of code. Inference on still images and videos, transfer learning on
custom datasets, and serialization of models to files are just a few of Detecto's features.
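A typical Detecto workflow, per the package's documented API, looks like the sketch below. This assumes Detecto is installed; the directory and file names are placeholders, not the project's actual paths, and `Dataset` expects images alongside their LabelImg (Pascal VOC) XML annotations.

```python
# Hedged sketch of training and using a Detecto model; 'images/' and
# 'test_photo.jpg' are placeholder paths, not the project's actual files.
from detecto.core import Dataset, Model
from detecto.utils import read_image

dataset = Dataset('images/')                  # images + LabelImg XML annotations
model = Model(['Apple', 'Orange', 'Banana'])  # classes to detect
model.fit(dataset)                            # transfer-learn on the dataset
model.save('model_weights.pth')               # serialize the weights file

labels, boxes, scores = model.predict(read_image('test_photo.jpg'))
```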

2. Access to Google Drive –


The training dataset has been uploaded to Google Drive. Since we are using Google Colab, the data
is accessed by mounting Google Drive in the notebook.

3. Creating Weighted Model –
The weighted file is created from the training dataset. This weighted model is later used to find
objects in test images. Creating a weighted file for prediction reduces the running time of the
system, as the model is trained only once.
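The train-once-and-cache idea can be illustrated with a toy example: the first run "trains" and writes the parameters to disk, and every later run just loads the file. This is a stand-in using pickle and a dictionary, not the project's actual network or file format.

```python
# Toy illustration (not the real network) of caching a trained weights file:
# train once, write the parameters to disk, and reuse them on later runs.
import os
import pickle
import tempfile

def train():
    # Stand-in for the expensive GPU training step; values are hypothetical.
    return {"class_names": ["Apple", "Orange", "Banana"], "weights": [0.1, 0.2]}

def load_or_train(path):
    if os.path.exists(path):           # cached: skip retraining
        with open(path, "rb") as f:
            return pickle.load(f)
    model = train()                    # first run: train and cache
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return model

path = os.path.join(tempfile.gettempdir(), "demo_weights.pkl")
model = load_or_train(path)   # trains and saves on the first call
model = load_or_train(path)   # subsequent calls just load the file
print(model["class_names"])
```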

4. Testing User Image --

5. Calories –
The system calculates the calories after verifying the object's name. A CSV file of calorie
values is given as input, and the output is then printed to the user with the corresponding calories.
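Reading such a calorie table from CSV can be sketched with the standard library. The column names and values below are hypothetical, since the report does not specify the exact CSV layout.

```python
# Sketch of loading the calorie table from CSV input (column names and
# values are hypothetical; the report does not specify the exact layout).
import csv
import io

csv_text = "food,calories_per_item\nApple,95\nOrange,62\nBanana,111\n"

def load_calories(f):
    # Maps each food name to its per-item calorie value.
    return {row["food"]: int(row["calories_per_item"])
            for row in csv.DictReader(f)}

calories = load_calories(io.StringIO(csv_text))
print(calories["Apple"])  # 95
```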

REFERENCES
[1] World Health Organization. (9 June 2021) Obesity Study. [Online]. https://www.who.int/news-room/fact-sheets/detail/obesity-and-overweight

[2] World Health Organization. (2022) World Health Statistics 2021. [Online]. https://www.who.int/data/gho/publications/world-health-statistics

[3] Nurul Naimah Rose, Nor Hafizan Habib Sultan, Aida Shakila Ishak, Fauziah Ismail, “Effect of Digital Technology on Adolescents”, 2022.

[4] Sharon M. Fruh, “Obesity: Risk factors, complications, and strategies for sustainable long-term weight management,” published online 12 Oct. 2017. DOI: 10.1002/2327-6924.12510.

[5] Bethany L. Daugherty, TusaRebecca E. Schap, Reynolette Ettienne-Gittens, Fengqing M. Zhu, Marc Bosch, Edward J. Delp, David S. Ebert, Deborah A. Kerr, Carol J. Boushey, “Novel Technologies for Assessing Dietary Intake: Evaluating the Usability of a Mobile Telephone Food Record Among Adults and Adolescents”, published online 2012.

[6] M. Livingstone, P. Robson and J. Wallace, “Issues in dietary intake assessment of children and adolescents,” British Journal of Nutrition, vol. 92, pp. 213–222, 2004.

[7] P.-Y. Chi, J.-H. Chen, H.-H. Chu and J.-L. Lo, “Enabling Calorie-Aware Cooking in a Smart Kitchen,” Springer-Verlag Berlin Heidelberg, vol. 5033, pp. 116-127, 2008.

[8] M. S. Westerterp-Plantenga, “Eating behavior in humans, characterized by cumulative food intake curves - a review,” Neuroscience and Biobehavioral Reviews, vol. 24, pp. 239–248, 2000.

[9] Y. Kato, T. Suzuki, K. Kobayashi, Y. Nakauchi, “A web application for an obesity prevention system based on individual lifestyle analysis,” IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1718-1723, Oct. 2012.

[10] T. Miyazaki, G. C. De Silva, K. Aizawa, “Image-based Calorie Content Estimation for Dietary Assessment,” IEEE International Symposium on Multimedia (ISM), pp. 363-368, 5-7 Dec. 2011.

[11] H. C. Chen, W. Jia, Z. Li, Y. Sun, M. Sun, “3D/2D model-to-image registration for quantitative dietary assessment,” 38th Annual Northeast Bioengineering Conference (NEBEC), pp. 95-96, March 2012.

[12] C. K. Martin, S. Kaya and B. K. Gunturk, “Quantification of food intake using food image analysis,” IEEE International Conference of Engineering in Medicine and Biology Society, vol. 2009, pp. 6869-6872, 2009.

[13] Junqing Shang, Eric Pepin, Eric Johnson, David Hazel, Ankur Sardesai, Alan Kristal, and Alexander Mamishev, “Dietary Intake Assessment using Integrated Sensors and Software”, SPIE Digital Library, pp. 1-11, 2015.

[14] Nikha Dharman, Shafna P. S., Shahana C. M., Simi Shanmughan, Nicy Johnson, “Image2 Calories”, International Journal of Computer Trends and Technology, pp. 144-148, 2015.

[15] J. Dehais, S. Shevchik, P. Diem, S. G. Mougiakakou, “Food Volume Computation for Self Dietary Assessment Applications”, IEEE 13th International Conference on Bioinformatics and Bioengineering (BIBE), pp. 1-4, 2013.

[16] Girshick, Ross. “Fast R-CNN.” Proceedings of the IEEE International Conference on Computer Vision, 2015.

[17] Ren, Shaoqing, et al. “Faster R-CNN: Towards real-time object detection with region proposal networks.” Advances in Neural Information Processing Systems, 2015.

[18] H. Kuresan, D. Samiappan, and S. Masunda, “Fusion of WPT and MFCC feature extraction in Parkinson's disease diagnosis,” Technology and Health Care, vol. 27, no. 4, pp. 363–372, 2019. DOI: 10.3233/THC-181306.

[19] S. Barui, S. Latha, D. Samiappan, P. Muthu, “SVM pixel classification on colour image segmentation”, J. Phys. Conf. Ser., 1000 (1), Article 012110, 2018.

[20] S. Dhanalakshmi and C. Venkatesh, “Classification of ultrasound carotid artery images using texture features”, International Review on Computers and Software (IRECOS), vol. 8, no. 4, pp. 933–940, 2013.

[21] Krupa, Abel Jaba Deva, Samiappan Dhanalakshmi, and R. Kumar, “An improved parallel sub-filter adaptive noise canceler for the extraction of fetal ECG,” Biomedical Engineering/Biomedizinische Technik, vol. 66, no. 5, pp. 503-514, 2021.

[22] Mahima Thakur et al., “Soft Attention Based DenseNet Model for Parkinson's Disease Classification Using SPECT Images”, Frontiers in Aging Neuroscience, vol. 14, 2022. DOI: 10.3389/fnagi.2022.908143.

[23] Mahima Thakur et al., “Automated restricted Boltzmann machine classifier for early diagnosis of Parkinson's disease using digitized spiral drawings”, Journal of Ambient Intelligence and Humanized Computing, 2022. https://link.springer.com/article/10.1007/s12652-022-04361-3
