
SMARTBITE: AI-POWERED FOOD RECOGNITION

AND CALORIE ESTIMATION FOR


PERSONALIZED DIET MONITORING

PROJECT REPORT
PHASE II

Submitted by

RATCHAYA R [21CS135]
SUJAY B K [21CS161]
SURENDHAR A [21CS162]
VINEETH K [21CS179]

in partial fulfillment for the award of the degree


of

BACHELOR OF ENGINEERING
in
COMPUTER SCIENCE AND ENGINEERING

MUTHAYAMMAL ENGINEERING COLLEGE

(AUTONOMOUS)

RASIPURAM – 637 408

ANNA UNIVERSITY::CHENNAI- 600 025

APRIL 2025
MUTHAYAMMAL ENGINEERING COLLEGE
(AUTONOMOUS)
RASIPURAM

BONAFIDE CERTIFICATE

Certified that this Report “SMARTBITE: AI-POWERED FOOD RECOGNITION AND CALORIE ESTIMATION FOR PERSONALIZED DIET MONITORING” is the bonafide work of “RATCHAYA R [21CS135], SUJAY B K [21CS161], SURENDHAR A [21CS162], VINEETH K [21CS179]” who carried out the work under my supervision.

SIGNATURE
Dr. G. KAVITHA, M.S (By Research), Ph.D.,
PROFESSOR
HEAD OF THE DEPARTMENT
Department of Computer Science and Engineering,
Muthayammal Engineering College (Autonomous),
Rasipuram-637 408.

SIGNATURE
Mrs. S. NAZEEMA, M.E.,
ASSISTANT PROFESSOR
SUPERVISOR
Department of Computer Science and Engineering,
Muthayammal Engineering College (Autonomous),
Rasipuram-637 408.

Submitted for the Project Work Phase-II Viva-Voce examination held on

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

We would like to thank our College Chairman Shri. R. KANDASAMY and our Secretary Dr. K. GUNASEKARAN, M.E., Ph.D., F.I.E., who encouraged us in all our activities.

We would like to record our deep sense of gratitude to our beloved Principal Dr. M. MADHESWARAN, M.E., Ph.D., MBA., for providing us the facilities required to complete our project successfully.

We extend our sincere thanks and gratitude to our Head of the Department Dr. G. KAVITHA, M.S (By Research), Ph.D., Department of Computer Science and Engineering, for her valuable suggestions throughout the project.

It is a pleasure to acknowledge the contribution made by our Project Coordinator Mr. S. R. SRIDHAR, M.E., (Ph.D)., Assistant Professor, Department of Computer Science and Engineering, for his efforts to complete our project successfully.

We gratefully acknowledge the support provided by our Project Guide Mrs. S. NAZEEMA, M.E., Assistant Professor, Department of Computer Science and Engineering, for her guidance to complete our project successfully.

We are very much thankful to our Parents, Friends and all Faculty Members of the Department of Computer Science and Engineering, who helped us in the successful completion of the project.

Vision of the Institute
To be a Centre of excellence in Engineering, Technology and Management on par with
International standards
Mission of the Institute
• To prepare the students with high professional skills and ethical values
• To impart knowledge through best practices
• To instill the spirit of innovation through training, research and development
• To undertake continuous assessment and remedial measures
• To achieve academic excellence through intellectual, emotional and social stimulation

Vision of the Department


To produce Computer Science and Engineering graduates with the innovative and entrepreneurial skills to face the challenges ahead
Mission of the Department
M1: To impart knowledge of state-of-the-art technologies in Computer Science and Engineering
M2: To inculcate analytical and logical skills in the field of Computer Science and Engineering
M3: To prepare the graduates with ethical values to become successful Entrepreneurs
Program Educational Objectives (PEOs)
PEO1: Graduates will be able to practice as IT Professionals in multinational companies
PEO2: Graduates will be able to gain the necessary skills and pursue higher education for career growth
PEO3: Graduates will be able to exhibit leadership skills and ethical values in day-to-day life

Program Outcomes (POs)
PO1 - Engineering knowledge: Apply the knowledge of mathematics, science,
engineering fundamentals, and an engineering specialization to the solution of complex
engineering problems.
PO2 - Problem analysis: Identify, formulate, review research literature, and analyze
complex engineering problems reaching substantiated conclusions using first principles
of mathematics, natural sciences, and engineering sciences.
PO3 - Design/development of solutions: Design solutions for complex engineering
problems and design system components or processes that meet the specified needs
with appropriate consideration for the public health and safety, and the cultural,
societal, and environmental considerations.
PO4 - Conduct investigations of complex problems: Use research-based knowledge
and research methods including design of experiments, analysis and interpretation of
data, and synthesis of the information to provide valid conclusions.
PO5 - Modern tool usage: Create, select, and apply appropriate techniques, resources,
and modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
PO6 - The engineer and society: Apply reasoning informed by the contextual
knowledge to assess societal, health, safety, legal and cultural issues and the
consequent responsibilities relevant to the professional engineering practice.
PO7 - Environment and sustainability: Understand the impact of the professional
engineering solutions in societal and environmental contexts, and demonstrate the
knowledge of, and need for sustainable development.
PO8 - Ethics: Apply ethical principles and commit to professional ethics and
responsibilities and norms of the engineering practice.

PO9 - Individual and team work: Function effectively as an individual, and as a
member or leader in diverse teams, and in multidisciplinary settings.
PO10 - Communication: Communicate effectively on complex engineering activities
with the engineering community and with society at large, such as, being able to
comprehend and write effective reports and design documentation, make effective
presentations, and give and receive clear instructions.
PO11 - Project management and finance: Demonstrate knowledge and
understanding of the engineering and management principles and apply these to one’s
own work, as a member and leader in a team, to manage projects and in
multidisciplinary environments.
PO12 - Life-long learning: Recognize the need for, and have the preparation and
ability to engage in independent and life-long learning in the broadest context of
technological change.
Program Specific Outcomes (PSOs)
PSO1: Graduates should be able to design and analyze algorithms to develop Intelligent Systems
PSO2: Graduates should be able to apply the acquired skills to provide efficient solutions for real-time problems
PSO3: Graduates should be able to exhibit an understanding of System Architecture, Networking and Information Security

COURSE OUTCOMES:
At the end of the course, the student will be able to
21CSP02.CO1 Understand the technical concepts of project area.
21CSP02.CO2 Identify the problem and formulation
21CSP02.CO3 Design the Problem Statement
21CSP02.CO4 Formulate the algorithm by using the design
21CSP02.CO5 Develop the Module

PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔

SIGNATURE OF STUDENTS SIGNATURE OF GUIDE

INDEX

CHAPTER NO. TABLE OF CONTENTS PAGE NO.


ABSTRACT x
LIST OF FIGURES xi
LIST OF ABBREVIATIONS xii
1. INTRODUCTION 1
1.1 PROJECT OVERVIEW 1
1.2 OBJECTIVE 2
1.3 DEEP LEARNING 2
1.4 ADVANTAGES 3
2. LITERATURE SURVEY 4
2.1 REVIEW I 4
2.2 REVIEW II 5
2.3 REVIEW III 6
2.4 REVIEW IV 7
2.5 REVIEW V 8
3. SYSTEM ANALYSIS 9
3.1 EXISTING SYSTEM 9
3.1.1 LIMITATIONS 10
3.2 PROPOSED SYSTEM 11
4. SYSTEM REQUIREMENTS 12
4.1 HARDWARE REQUIREMENTS 12
4.2 SOFTWARE REQUIREMENTS 12
4.3 HARDWARE DESCRIPTION 13
4.4 SOFTWARE DESCRIPTION 14

5. PROJECT DESIGN 18
5.1 BLOCK DIAGRAM FOR FOOD DETECTION 18
5.2 DATASET 18
5.3 PREPROCESSING 20
5.4 EXPLORATORY DATA ANALYSIS 23
5.5 MODEL IMPLEMENTATION 23
5.6 CONVOLUTIONAL NEURAL NETWORKS 24
6. MODULE LIST 27
6.1 FOOD DATA GATHERING 27
6.2 FOOD IMAGE ENHANCEMENT 27
6.3 FOOD RECOGNITION MODEL DEPLOYMENT 28
6.4 FINAL IMPLEMENTATION AND DEPLOYMENT 28
7. CONCLUSION AND FUTURE ENHANCEMENT 29
7.1 CONCLUSION 29
7.2 FUTURE ENHANCEMENT 29
APPENDIX 30
A.1 SOURCE CODE 30
A.1.1 MAIN FILE 30
A.1.2 DATABASE 35
A.2 SCREENSHOTS 36
REFERENCES 39
PUBLICATIONS 41

ABSTRACT

SmartBite utilizes deep learning techniques with Python and the MobileNet
architecture to identify food items and estimate calorie content in real-time. It offers
users intelligent and personalized diet monitoring, enabling healthier eating habits.
By analyzing images within seconds on a web framework, the system achieves high
precision in food recognition and calorie estimation. SmartBite empowers users to
monitor daily food intake, gain insights into nutritional patterns, and set
personalized health objectives. With its advanced food recognition algorithm, it can
distinguish between a wide variety of dishes, snacks, and beverages, providing
detailed calorie breakdowns for each. The platform also supports meal logging,
enabling users to track their eating habits over time. The system equips users with
essential dietary information, fostering well-informed decisions about food choices.
Additionally, it integrates recommendations for portion control and balanced
nutrition based on individual health goals, such as weight management or
maintaining a specific diet plan. With its advanced capabilities and user-friendly
interface, SmartBite encourages healthier lifestyles and supports balanced nutrition.
Its real-time analysis and feedback mechanisms make it a valuable tool for those
looking to adopt mindful eating practices, improve their dietary habits, and enhance
overall health outcomes.

LIST OF FIGURES

FIGURE NO. FIGURE NAME PAGE NO.

1 Working of Python Interpreter 15


2 Block Diagram of Food Detection 18
3 Data collection 20
4 Data Preprocessing 22
5 Convolutional Neural Network 26

LIST OF ABBREVIATIONS

ABBREVIATION TERM

AI Artificial Intelligence
DL Deep Learning
ML Machine Learning
IEEE Institute of Electrical and Electronics Engineers
RAD Rapid Application Development
PyPI Python Package Index
PyPL Python Programming Language
ReLU Rectified Linear Unit
CNN Convolutional Neural Network
FIC Food Image Dataset
USDA United States Department of Agriculture
SGD Stochastic Gradient Descent
NLP Natural Language Processing
LifeTech Life Sciences and Technologies
EDA Exploratory Data Analysis
MSE Mean Squared Error
GloVe Global Vectors for Word Representation
Word2Vec Word to Vector

CHAPTER 1

INTRODUCTION

1.1 PROJECT OVERVIEW

SmartBite is an innovative web-based application that leverages advanced deep learning techniques to revolutionize diet monitoring and calorie estimation. At its core, the project utilizes the MobileNet architecture and Python to accurately identify food items from images and provide real-time calorie estimations. This intelligent system is designed to offer users a seamless and personalized approach to managing their dietary habits and making informed nutritional choices.

The platform processes food images uploaded by users, instantly identifying the food items and breaking down their calorie content with high precision. SmartBite caters to individuals with varying health goals, such as weight management, maintaining specific dietary plans, or simply improving their overall nutrition. By offering detailed insights into daily food intake and nutritional patterns, the system allows users to track their progress toward health objectives effectively.
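The upload–identify–estimate flow described above can be sketched in a few lines of Python. The classifier is stubbed out here, and the food labels and per-serving calorie figures are illustrative placeholders, not values from the actual SmartBite model or database.

```python
# Illustrative sketch of the SmartBite flow: classify an uploaded image, then
# map the predicted label to an approximate per-serving calorie figure.
# The classifier is a stub; labels and calorie values are placeholders.

CALORIE_TABLE = {   # kcal per typical serving (illustrative numbers only)
    "idli": 58,
    "dosa": 133,
    "apple": 95,
}

def classify_image(image_bytes: bytes) -> str:
    """Stand-in for the MobileNet classifier; always returns a fixed label."""
    return "apple"

def estimate_calories(image_bytes: bytes) -> dict:
    """Classify the image, then look up its calorie content."""
    label = classify_image(image_bytes)
    return {"food": label, "calories_kcal": CALORIE_TABLE.get(label)}

result = estimate_calories(b"<raw image bytes>")
```

In the real system, `classify_image` would run the trained network and the lookup table would be backed by a nutrition database rather than a hard-coded dictionary.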

A key feature of SmartBite is its ability to personalize recommendations. By analyzing users' dietary history and health goals, the platform suggests portion control strategies, balanced meal options, and healthier food alternatives. This level of customization ensures that the system caters to the unique needs of each user, making it an invaluable tool for adopting mindful eating habits.

The user-friendly interface of SmartBite makes it accessible to all, allowing users to log meals, view detailed reports, and set goals effortlessly. Its integration with real-time image processing technology ensures quick and accurate feedback, fostering a smooth user experience. Additionally, the platform's database is enriched with a wide variety of food items, ensuring comprehensive recognition capabilities.

SmartBite goes beyond traditional calorie-counting apps by incorporating AI-driven insights into nutrition and health. It empowers users to take control of their diet, understand the impact of their food choices, and make sustainable changes toward a healthier lifestyle. By blending cutting-edge technology with practical functionality, SmartBite is poised to be a game-changer in the realm of personalized diet monitoring and nutritional health management.

1.2 OBJECTIVE

SmartBite, an AI-powered platform, aims to revolutionize dietary habits by accurately recognizing food items through image analysis and estimating calorie intake. By leveraging deep learning and the MobileNet architecture, the platform offers personalized dietary insights, fosters user-friendly meal logging, and provides recommendations for healthier choices. It also contributes to public health by offering valuable data on nutrition trends. SmartBite's ultimate goal is to empower users to make informed decisions about their diet, leading to healthier lifestyles and advancing digital health.

1.3 DEEP LEARNING

Deep learning plays a pivotal role in the SmartBite project, enabling real-time food recognition and accurate calorie estimation. By leveraging advanced algorithms and architectures, such as the MobileNet framework, the system can process and analyze food images with high precision and efficiency. MobileNet, a lightweight and efficient deep learning model, is particularly well-suited for real-time applications, as it provides fast inference without compromising accuracy. This ensures that users can receive instant feedback on their food intake, making the system practical and user-friendly.

The core of the project revolves around Convolutional Neural Networks (CNNs), which are highly effective for image recognition tasks. These networks extract meaningful features from food images, such as shape, color, and texture, enabling the system to identify various food items accurately. By training the CNNs on a vast and diverse dataset of labeled food images, the model learns to recognize a wide array of dishes and ingredients, from simple fruits to complex multi-component meals.

To enhance the system's usability, deep learning-based optimization techniques are applied to ensure the model performs efficiently on web and mobile platforms. This includes minimizing latency and computational requirements, making the application accessible even on devices with limited processing power.

In summary, the deep learning component of SmartBite integrates sophisticated techniques to deliver an intelligent, scalable, and accurate food recognition and calorie estimation system. By combining CNNs, transfer learning, and lightweight architectures like MobileNet, the project ensures a seamless and impactful user experience, encouraging healthier lifestyles and fostering a deeper understanding of nutrition.
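MobileNet's efficiency comes from replacing standard convolutions with depthwise separable convolutions. The arithmetic below reproduces the standard cost comparison from the MobileNet paper (kernel size Dk, input channels M, output channels N, feature map size Df); the layer sizes in the example are chosen only for illustration.

```python
# Multiply-accumulate (MAC) cost of a standard convolution vs. a depthwise
# separable convolution, following the MobileNet analysis:
#   standard:  Dk*Dk*M*N*Df*Df
#   separable: Dk*Dk*M*Df*Df (depthwise)  +  M*N*Df*Df (1x1 pointwise)

def standard_conv_macs(dk, m, n, df):
    return dk * dk * m * n * df * df

def separable_conv_macs(dk, m, n, df):
    return dk * dk * m * df * df + m * n * df * df

# Example layer (sizes illustrative): 3x3 kernel, 32 -> 64 channels, 112x112 map
std = standard_conv_macs(3, 32, 64, 112)
sep = separable_conv_macs(3, 32, 64, 112)
ratio = sep / std   # theoretical reduction factor is 1/N + 1/Dk**2
```

For this layer the separable form needs roughly an eighth of the computation, which is why MobileNet can run inference in real time on modest hardware.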

1.4 ADVANTAGES
• Real-time food recognition and feedback
• Accurate calorie estimation with minimal input
• Lightweight MobileNet ensures fast processing
• Supports diverse food items and cuisines
• Streamlined user experience on all devices

CHAPTER 2

LITERATURE SURVEY

2.1 REVIEW I

TITLE : A STUDY ON FOOD VALUE ESTIMATION FROM IMAGES: TAXONOMIES, DATASETS AND TECHNIQUES

PUBLISHER : IEEE

VOLUME NO : 11

YEAR : 2023

The existing problem in the realm of automated food nutritional value estimation lies in the complexity of accurately analyzing food items from images. While advancements in computer vision and deep learning have enabled progress, significant challenges remain. Current systems struggle with the diversity and variability of food items, including differences in appearance, texture, size, and presentation. Moreover, conventional approaches often require manual inputs like portion size or additional annotations, which reduces the automation level and usability of such systems.

The high computational demands of existing solutions make them unsuitable for deployment on mobile devices or in resource-constrained settings, limiting accessibility for a broader audience. Additionally, these systems often fail to provide real-time results or meaningful insights for complex or mixed food items. Addressing these limitations is crucial for creating an efficient, accurate, and accessible food nutrition estimation system capable of promoting healthier eating habits and personalized diet monitoring.

2.2 REVIEW II

TITLE : THE IMPACT OF PATIENT CLINICAL

INFORMATION ON AUTOMATED SKIN CANCER

DETECTION

PUBLISHER : COMPUTERS IN BIOLOGY AND MEDICINE

VOLUME NO : 116

YEAR : 2020

Automated skin cancer detection has significantly advanced with the rise of artificial intelligence (AI) and deep learning techniques. This research examines the impact of incorporating patient metadata into machine learning models for skin cancer detection. The study utilizes diverse datasets, comparing model performance with and without additional clinical information. Findings indicate that AI-driven diagnostic systems, when supplemented with contextual patient data, achieve higher sensitivity and specificity, reducing false positives and false negatives. The study highlights that patient demographics and medical history contribute to refining risk assessment models, leading to more personalized and precise diagnoses.
Future advancements in automated skin cancer detection should focus on real-time patient monitoring, federated learning approaches for secure data sharing, and AI models capable of self-learning from continuous patient interactions. Additionally, integrating electronic health records (EHRs) with dermatological imaging can further enhance decision-making capabilities, paving the way for more robust, patient-centered skin cancer diagnosis and treatment planning.

This research underscores the critical role of patient clinical information in bridging the gap between AI-driven diagnostics and precision medicine, fostering a more holistic approach to skin cancer detection and management.

2.3 REVIEW III

TITLE : RECENT STUDIES ON SEGMENTATION

TECHNIQUES FOR FOOD RECOGNITION

PUBLISHER : ARCHIVES OF COMPUTATIONAL METHODS IN

ENGINEERING

VOLUME NO : 29

YEAR : 2022

Food recognition is an emerging field in computer vision with applications in


health monitoring, dietary assessment, and automated food tracking. Accurate food
recognition relies on robust image segmentation techniques that effectively isolate
food items from complex backgrounds. This survey explores recent advancements in
segmentation methodologies used in food recognition, covering traditional image
processing approaches, machine learning-based models, and deep learning techniques.
The study categorizes segmentation methods into thresholding, edge detection,
region-based techniques, and deep learning-driven approaches such as Fully
Convolutional Networks (FCNs), U-Net, and Mask R-CNN. It highlights the
advantages and limitations of these methods in handling challenges like varying
lighting conditions, occlusions, and intra-class variations in food appearance.
Future research should focus on developing lightweight models for real-time
applications, enhancing generalization across diverse food datasets, and integrating
multi-modal approaches using depth sensing and hyperspectral imaging. This survey
provides a foundation for future developments in automated food recognition,
contributing to advancements in health monitoring, personalized nutrition, and smart
food tracking systems.

2.4 REVIEW IV

TITLE : MOBILE COMPUTER VISION – BASED

APPLICATIONS FOR FOOD RECOGNITION AND

VOLUME AND CALORIFIC ESTIMATION

PUBLISHER : HEALTHCARE (MDPI)

VOLUME NO : 11

YEAR : 2023

With the increasing prevalence of diet-related health issues, mobile computer vision-based applications are gaining importance in food recognition and dietary monitoring. This systematic review examines recent advancements in mobile-based food recognition technologies that leverage computer vision for accurate volume estimation and calorific analysis. The study categorizes various methodologies, including traditional image processing, deep learning models, and hybrid approaches for food segmentation, classification, and nutrient estimation. It highlights the role of convolutional neural networks (CNNs), object detection frameworks like Faster R-CNN and YOLO, and depth estimation techniques for precise portion size evaluation.

Findings indicate that while mobile applications demonstrate high accuracy in recognizing common food items, challenges remain in handling occlusions, varying lighting conditions, and complex multi-food scenarios. Future research should focus on developing lightweight deep learning models optimized for real-time performance on mobile devices, enhancing cross-cultural food recognition capabilities, and integrating personalized dietary recommendations through AI-driven insights. This study provides a foundation for improving digital nutrition tracking, supporting healthier lifestyle choices, and advancing mobile health (mHealth) applications.

2.5 REVIEW V

TITLE : CHARACTERISTICS OF SMARTPHONE-BASED DIETARY ASSESSMENT

PUBLISHER : HEALTH PSYCHOLOGY REVIEW

VOLUME NO : 16

YEAR : 2022

Smartphone-based dietary assessment tools are transforming nutrition


monitoring by offering accessible, real-time tracking of food intake. This systematic
review examines the characteristics, accuracy, usability, and effectiveness of mobile
dietary assessment applications, evaluating various approaches such as image-based
food recognition, barcode scanning, manual entry, and AI-driven automatic tracking.
Findings highlight key features of these tools, including real-time feedback,
integration with health tracking apps, and user engagement strategies. While deep
learning and computer vision techniques enhance automation in image-based methods,
challenges persist in portion size estimation, misclassification, and user compliance.
Additionally, behavioural factors affecting adoption and long-term usability are
discussed, emphasizing the importance of intuitive design and personalization.
With the rise of diet-related health concerns such as obesity, diabetes, and cardiovascular diseases, accurate dietary assessment has become a crucial aspect of nutrition monitoring and personalized healthcare. Traditional methods, including food diaries, 24-hour recall surveys, and nutritionist consultations, are often time-consuming, prone to recall bias, and reliant on user compliance. This review explores how these tools leverage technology to enhance nutrition tracking, enabling users to gain better insights into their dietary habits. Future research should focus on improving AI-based food recognition accuracy, enhancing interoperability with healthcare systems, and incorporating gamification and behavioural science techniques.
CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM


Current food intake estimation systems leverage computer vision and machine
learning to analyze food images for calorie and nutritional value estimation. These
systems use pre-trained models or custom-built neural networks to identify food items
and estimate their quantities. Despite these advancements, the existing systems have
notable limitations. The recognition accuracy of the current models is restricted, with
an average of 73.29% during training and 78.7% during testing. This level of accuracy
often leads to errors in identifying food items, particularly in complex or mixed dishes,
which reduces the reliability of calorie and nutrition estimations.
While existing systems can provide basic insights into eating habits and help medical
professionals monitor dietary patterns, their scope is limited. They lack the ability to
integrate health insights with personalized data, such as user-specific dietary
requirements, allergies, or fitness goals. This creates a gap in providing a
comprehensive solution for users seeking tailored health and nutrition advice.
Moreover, current systems face challenges in accurately estimating portion
sizes from images, a critical factor for precise calorie calculations. These systems often
rely on user inputs or predefined assumptions about portion sizes, which can lead to
significant inaccuracies. Additionally, the systems are typically designed for static
environments and struggle to adapt to real-world variability, such as changes in
lighting, image angles, or food presentation styles.
While some frameworks offer mobile compatibility, the computational demands of these systems often compromise their usability in real-time scenarios. They may also lack intuitive user interfaces, making them less accessible to a broader audience. Overall, the existing systems, though beneficial, fall short in delivering an accurate, personalized, and user-friendly approach to food intake and nutrition estimation. Addressing these limitations is crucial to building a next-generation solution that seamlessly integrates accuracy, personalization, and usability.
3.1.1 LIMITATIONS

• Current models achieve only around 73.29% accuracy during training and 78.7% during testing, leading to potential errors in food identification.
• The system struggles with complex or mixed dishes where multiple food items are present, reducing its ability to provide accurate calorie estimations.
• Current systems rely on assumptions or user input for portion sizes, leading to significant errors in calorie calculation.
• Existing systems do not integrate personalized health data such as user-specific dietary needs, allergies, or fitness goals, limiting their effectiveness for individualized diet management.
• The system's performance is sensitive to variations in lighting, image angles, and food presentation styles, which can negatively impact recognition accuracy.
• The system may perform well with common foods but show reduced accuracy with less frequently encountered or unusual food items.
• The computational demands of current systems often hinder real-time processing, affecting their usability in everyday, on-the-go scenarios.
• Many existing systems lack intuitive user interfaces, making it difficult for non-tech-savvy users to navigate and effectively use the system.
• Achieving high accuracy requires access to large and diverse food image datasets for training, which can be time-consuming and difficult to acquire.

3.2 PROPOSED SYSTEM

This project aims to develop an expert system that provides personalized nutrition advice. By analysing users' health profiles and dietary needs, the system generates tailored recommendations on nutrient intake, meal planning, and exercise routines. It aims to improve nutritional awareness, reduce consultation time with nutritionists, and promote healthier lifestyles.

The system's accuracy and precision are ensured by a vast dataset of food types, nutritional information, and health parameters. It provides insights into general wellness and nutrition, but future iterations can incorporate specific needs, such as pregnancy or lactation.

Ultimately, this expert system offers a user-friendly platform for accessing reliable nutrition advice and making informed decisions about dietary choices.

CHAPTER 4

SYSTEM REQUIREMENTS

4.1 HARDWARE REQUIREMENTS

• RAM : 6 GB
• Processor : i5 and above
• Hard disk space : 2 GB (minimum) free space available
• Screen resolution : 1024 x 768 or higher
• Mouse : Logitech Mouse
• Keyboard : Standard Keyboard

4.2 SOFTWARE REQUIREMENTS

• Operating System : Windows 7 or later
• Development tools : HTML, CSS, Python 3.10, Flask
• Documentation : MS-Office
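Since the stack above pairs Python with Flask, the server side needs to validate each uploaded food image before passing it to the recognition model. The helper below is a framework-agnostic sketch that a Flask route could call on the uploaded filename; the extension whitelist is an assumption for illustration.

```python
# Validation helper for an uploaded food image. The extension whitelist is an
# illustrative assumption; a Flask route would call this on request.files'
# filename before handing the file to the recognition model.

ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png"}

def is_allowed_upload(filename: str) -> bool:
    """Accept only filenames with a whitelisted image extension."""
    if "." not in filename:
        return False
    return filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS
```

Checking the extension case-insensitively keeps uploads from cameras that emit `.JPG` working, while rejecting non-image files before any model inference is attempted.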

4.3 HARDWARE DESCRIPTION
A well-structured hardware setup is crucial for ensuring the smooth operation and efficiency of any system. The performance of a system largely depends on its hardware configuration, which plays a key role in handling various tasks seamlessly. One of the primary components is memory (RAM), which directly impacts the system's ability to manage multiple processes simultaneously. In this case, 6 GB of RAM ensures that applications run smoothly without lag or interruption, providing a stable working environment.

Another important aspect is processing power. A Core i5 processor or higher allows for faster execution of instructions, ensuring quick responsiveness and improved system efficiency. A powerful processor enhances computational capabilities, making the system more suitable for handling resource-intensive applications and multitasking.

Storage capacity is another critical factor. The system requires at least 2 GB of free disk space to accommodate essential software installations, updates, and file storage, ensuring smooth operation without space limitations. Display quality also contributes to a better user experience. A screen resolution of 1024 x 768 or higher ensures clarity and sharpness, making it easier to view data, graphics, and other visual elements effectively.

Additionally, peripherals like input devices play a crucial role in user interaction. A Logitech mouse and a standard keyboard provide a comfortable and reliable input experience, ensuring ease of navigation and efficient workflow. A well-optimized hardware setup not only enhances productivity but also prevents performance bottlenecks. By meeting these essential hardware requirements, the system can function effectively, ensuring reliability, speed, and overall operational excellence.

4.4 SOFTWARE DESCRIPTION

• Python
• Python Features

Python

Python is commonly used for developing websites and software, task automation, data analysis, and data visualization. Since it's relatively easy to learn, Python has been adopted by many non-programmers, such as accountants and scientists, for a variety of everyday tasks, like organizing finances.

"Writing programs is a very creative and rewarding activity," says University of Michigan and Coursera instructor Charles R. Severance in his book Python for Everybody. "You can write programs for many reasons, ranging from making a living to solving a difficult data analysis problem to having fun to helping someone else solve a problem."
Uses:
• Data analysis and machine learning
• Web development
• Automation or scripting
• Software testing and prototyping
• Everyday tasks
What is Python? Executive Summary
Python is a versatile, high-level programming language known for its simplicity and
readability. Its dynamic typing and dynamic binding, combined with high-level data
structures, make it ideal for rapid application development and scripting. Python's
modularity and extensive standard library promote code reuse and efficient
development.
Python's interpreted nature simplifies the development process, as there's no
compilation step. Debugging is straightforward, with exceptions and stack traces
providing clear error information. The interactive interpreter and debugger further
enhance the debugging experience.
Python's combination of productivity, readability, and powerful features makes it a
popular choice for a wide range of applications.

Figure 1. Working of the Python interpreter


Python Use Cases

 Creating web applications on a server

 Building workflows that can be used in conjunction with software

 Connecting to database systems

 Reading and modifying files

 Performing complex mathematics and processing big data

 Fast prototyping

 Developing production-ready software


Features and Benefits of Python

 Compatible with a variety of platforms including Windows, Mac, Linux, Raspberry Pi, and others

 Uses a simple syntax comparable to the English language that lets developers use fewer lines than other programming languages

 Operates on an interpreter system that allows code to be executed immediately, fast-tracking prototyping

 Can be handled in a procedural, object-oriented, or functional way


Python Syntax

Python Flexibility
Python, a dynamically typed language, is especially flexible, eliminating
hard rules for building features and offering more problem-solving flexibility with a
variety of methods. It also allows users to run programs right up to a
problematic area, because it uses run-time type checking rather than compile-time
checking.
The Less Great Parts of Python
On the downside, Python isn't easy to maintain. One command can have
multiple meanings depending on context because Python is a dynamically typed
language. Maintaining a Python app as it grows in size and complexity can be
increasingly difficult, especially when finding and fixing errors. Users need experience
to design code and write unit tests that make maintenance easier.
Python and AI
AI researchers are fans of Python. Google TensorFlow, as well as other
libraries (scikit-learn, Keras), establish a foundation for AI development because of
the usability and flexibility they offer Python users. These libraries, and their
availability, are critical because they enable developers to focus on growth and building.

Good to Know
The Python Package Index (PyPI) is a repository of software for the Python
programming language. PyPI helps users find and install software developed and
shared by the Python community.
Applications
The Python Package Index (PyPI) hosts thousands of third-party modules for
Python. Both Python's standard library and the community-contributed modules allow
for endless possibilities.

 Web and Internet Development

 Database Access

 Desktop GUIs

 Scientific & Numeric

 Education

 Network Programming

 Software & Game Development


Getting Started
Python can be easy to pick up whether you're a first-time programmer or you're
experienced with other languages. The following pages are a useful first step to get on
your way writing programs with Python: the Beginner's Guide for Programmers, the
Beginner's Guide for Non-Programmers, the Download & Installation guide, and code
samples and snippets for beginners.
Friendly & Easy to Learn
The community hosts conferences and meet ups, collaborates on code, and
much more. Python's documentation will help you along the way, and the mailing lists
will keep you in touch.

CHAPTER 5

PROJECT DESIGN

5.1 BLOCK DIAGRAM OF FOOD DETECTION

Figure 2. Block Diagram of Food Detection


5.2 DATASET

For the SmartBite AI-powered food recognition and calorie estimation system,
an essential component is the dataset used to train and evaluate the deep learning
model. The dataset should contain images of various food items along with their
associated nutritional information such as calorie content, macronutrients (proteins,
fats, carbohydrates), and micronutrients (vitamins, minerals). A diverse,
comprehensive dataset is critical for achieving high accuracy in food recognition and
calorie estimation.

18
A comprehensive food dataset for the SmartBite system should cover a wide range of
food categories, including fruits, vegetables, grains, dairy, proteins (meat, fish, tofu,
etc.), and beverages, ensuring representation of different cuisines and food
preparations. The dataset must feature high-quality, well-labeled images with
variations in lighting conditions, angles, and presentations to enhance the deep
learning model's generalization to real-world scenarios. Each image should be
accurately annotated with the food name, serving size, and detailed nutritional
information, including calories, carbohydrates, fats, proteins, and micronutrient
content. To ensure diversity, food images should be sourced from different
geographical regions and cultures, incorporating datasets such as Food-101, UCI
Food-50, FIC, and ETHZ Food-101. Data augmentation techniques, including random
rotations, flipping, and scaling, will be applied to increase diversity and improve
model robustness.
The dataset should also integrate nutritional information from reputable sources like
the USDA Food Database, Nutritionix, or MyFitnessPal for precise calorie and
macronutrient estimations. Additionally, real-time image data from users, captured
through a mobile app, can enhance the dataset by incorporating food images taken
from different angles and lighting conditions, along with user-provided serving size
information. The dataset will be systematically split into training, validation, and test
sets to optimize model accuracy and robustness. Crowdsourced data, where users
contribute images of their meals along with nutritional details, will further enrich the
dataset, ensuring broad coverage of food items.
Ethical considerations must be prioritized by obtaining user consent for data
collection and anonymizing personal information. By leveraging this extensive and
continuously updated dataset, the SmartBite system can employ deep learning
techniques to accurately recognize food items and estimate their nutritional values,
adapting to emerging food trends and user-generated content for improved
functionality and precision.
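The train/validation/test split described above can be sketched in plain Python. The records, filenames, class names, and the 70/15/15 ratios below are illustrative assumptions, not the project's actual data:

```python
import random

# Hypothetical (image_path, food_label, calories) records standing in
# for the real SmartBite dataset; values are for illustration only.
records = [(f"img_{i:04d}.jpg", f"class_{i % 10}", 50 + i % 400)
           for i in range(1000)]

def split_dataset(data, train=0.7, val=0.15, seed=42):
    """Shuffle the records and split them into training, validation, and test sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(records)
print(len(train_set), len(val_set), len(test_set))  # 700 150 150
```

Fixing the random seed keeps the split reproducible, so model results remain comparable across training runs.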

Figure 3. Dataset Collection

5.3 PREPROCESSING

Preprocessing is a crucial step in the development of the SmartBite AI-powered
food recognition and calorie estimation system. It involves transforming raw data into
a format suitable for deep learning models, ensuring the system can effectively
interpret and analyze food images. The preprocessing pipeline typically includes
several stages:

1. Image Resizing: Images from the dataset are resized to a consistent resolution,
typically 224x224 or 299x299 pixels, depending on the architecture (e.g.,
MobileNet). This ensures uniformity in input size, which is essential for model
training.
2. Data Normalization: Image pixel values are scaled to a range of 0 to 1, or -1
to 1, by dividing by 255 or using a normalization technique like mean
subtraction. This helps standardize the input data and speeds up the
convergence of the model during training.
3. Image Augmentation: To enhance the robustness of the model, various
augmentation techniques are applied, such as random rotations, flips,
translations, and zooms. This simulates different scenarios, helping the model
generalize better by exposing it to a variety of food presentations, angles, and
lighting conditions.
4. Label Encoding: The food items in the images are labeled with their
corresponding nutritional information. For multi-class classification, the labels
are encoded into a one-hot format, where each food category is represented by
a binary vector. For regression tasks like calorie estimation, labels are treated
as continuous values.
5. Noise Reduction: To improve the quality of the images, preprocessing may
include techniques like image denoising or removing unwanted artifacts (such
as borders, watermarks, or irrelevant backgrounds). This ensures the food item
is the focus of the image, making it easier for the model to recognize.
6. Bounding Box Detection: If the food items are not isolated in the image, object
detection techniques may be used to identify the food’s region using bounding
boxes. This step involves detecting and cropping the food items from the
background, making it easier for the model to focus on the relevant portions of
the image.
7. Feature Extraction (for NLP/Hybrid Models): In cases where text data is
involved (e.g., food descriptions or ingredient lists), natural language
processing (NLP) techniques may be applied to extract relevant features.
Tokenization, word embeddings (like Word2Vec or GloVe), and sentiment
analysis are common methods used to preprocess textual data for the system.

By applying these preprocessing steps, the SmartBite system ensures that the
input data is clean, consistent, and ready for model training, leading to improved
performance in food recognition and calorie estimation tasks. Proper preprocessing
also aids in reducing overfitting, improving generalization, and accelerating the
training process.
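A minimal NumPy sketch of the normalization, augmentation, and label-encoding steps described above. The toy 4x4 image and the four class names are assumptions for illustration, not the system's real inputs:

```python
import numpy as np

# A dummy 4x4 RGB "image" with pixel values in the 0-255 range
# stands in for a real food photo.
image = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3) * 5

# Data normalization: scale pixel values from [0, 255] to [0, 1].
normalized = image / 255.0

# A simple augmentation: horizontal flip along the width axis.
flipped = normalized[:, ::-1, :]

# Label encoding: one-hot encode a class label for multi-class classification.
classes = ["Apple", "Banana", "Peanuts", "Pizza"]
label = "Banana"
one_hot = np.eye(len(classes))[classes.index(label)]
print(one_hot)  # [0. 1. 0. 0.]
```

In practice the same operations would be applied on the fly by the training pipeline so that each epoch sees freshly augmented variants of every image.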

Figure 4. Data Preprocessing

5.4 EXPLORATORY DATA ANALYSIS (EDA)

Exploratory Data Analysis (EDA) is a crucial step in the development of the
SmartBite system. It involves visualizing data distributions, calculating summary
statistics, and identifying patterns and relationships between variables. EDA helps
uncover class imbalances, outliers, and missing data, ensuring that the data fed into
the deep learning models is clean and reliable. By analyzing the distribution of food
categories and calories, the system can identify potential biases and adjust the dataset
accordingly. Visualizations like histograms, box plots, and scatter plots help uncover
relationships between nutritional features, such as calorie content and food type.

Through EDA, the SmartBite system can refine features, select relevant
information, and improve the accuracy and robustness of its food recognition and
calorie estimation models.
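The class-imbalance check described above can be sketched with the standard library alone. The label list below is a made-up example, not the actual dataset:

```python
from collections import Counter

# Hypothetical label list standing in for the dataset's annotations.
labels = ["Apple"] * 500 + ["Banana"] * 450 + ["Pizza"] * 50

counts = Counter(labels)
total = sum(counts.values())
for food, n in counts.most_common():
    print(f"{food:8s} {n:4d}  ({100 * n / total:.1f}%)")

# Quantify imbalance as the ratio of the largest class to the smallest.
imbalance = max(counts.values()) / min(counts.values())
print(f"imbalance ratio: {imbalance:.1f}")  # 10.0
```

A high ratio like this one would signal that the minority class (here "Pizza") needs oversampling, extra augmentation, or class-weighted training.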

5.5 MODEL IMPLEMENTATION

The model implementation for the SmartBite AI-powered food recognition and
calorie estimation system leverages deep learning techniques, particularly
Convolutional Neural Networks (CNNs), for image classification and feature
extraction. The overall goal of the model is to identify food items from images and
accurately estimate their nutritional values, specifically the calorie content.

The model architecture begins with preprocessing, where input images are
resized and normalized to ensure consistency and improve the training efficiency. The
images are then passed through a series of convolutional layers that extract key
features from the images, such as edges, textures, and shapes, which are important for
food identification. These convolutional layers are followed by pooling layers to
reduce dimensionality and retain only the most essential features. The CNN model
typically employs activation functions like ReLU (Rectified Linear Unit) to introduce
non-linearity, allowing the model to learn complex patterns.

The extracted features are flattened and passed through fully connected layers
for food category prediction and nutritional value estimation. A softmax layer handles
multi-class classification, while regression techniques provide continuous calorie
estimates. The model is trained on a diverse dataset of labeled food images, including
food categories and nutritional values. Parameters are updated using backpropagation
and optimizers like Adam or SGD to minimize the loss between predicted and actual
calorie values.

After training, the model is validated and tested using a separate set of images
to evaluate its performance. Metrics such as accuracy, precision, recall, and Mean
Squared Error (MSE) are calculated to assess both the food classification accuracy
and the effectiveness of calorie estimation.

The final implementation allows the SmartBite system to take an image as
input, predict the food item, and provide an estimated calorie count in real-time,
making it a valuable tool for personalized diet monitoring and healthier eating habits.
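The evaluation metrics mentioned above (classification accuracy for food recognition, Mean Squared Error for calorie regression) reduce to simple NumPy expressions. The prediction arrays below are made-up values for illustration:

```python
import numpy as np

# Hypothetical predictions vs. ground truth for 5 test images.
true_classes = np.array([0, 1, 2, 1, 0])   # food category indices
pred_classes = np.array([0, 1, 2, 0, 0])   # model's predicted indices
true_kcal = np.array([52.0, 89.0, 266.0, 89.0, 52.0])
pred_kcal = np.array([55.0, 80.0, 250.0, 95.0, 50.0])

# Classification accuracy: fraction of correctly predicted food items.
accuracy = np.mean(true_classes == pred_classes)

# Mean Squared Error between predicted and actual calorie values.
mse = np.mean((true_kcal - pred_kcal) ** 2)
print(f"accuracy={accuracy:.2f}  MSE={mse:.2f}")  # accuracy=0.80  MSE=77.20
```

Precision, recall, and F1-score follow the same pattern, counting true/false positives per class rather than overall matches.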

5.6 CONVOLUTIONAL NEURAL NETWORKS

Convolutional Neural Networks (CNNs) are a class of deep learning algorithms
that are particularly effective for analyzing visual data, such as images and videos. In
the context of the SmartBite project, CNNs play a crucial role in food recognition and
calorie estimation. These networks are designed to automatically learn and extract
important features from images without the need for manual feature extraction. A CNN
consists of multiple layers that work together to process the input image. The key
layers include:

1. Convolutional Layers: These layers apply convolutional filters (also called
kernels) to the input image to extract local features like edges, textures, and
patterns. By sliding the filter over the image, the convolutional layer creates
feature maps that highlight important characteristics of the image. Multiple
filters can be used at each layer to detect different types of features.
2. Activation Function (ReLU): After the convolution operation, the feature
maps are passed through an activation function, typically the Rectified Linear
Unit (ReLU). ReLU introduces non-linearity into the network, allowing the
model to learn more complex patterns and make better predictions.
3. Pooling Layers: Pooling layers are used to down-sample the feature maps,
reducing their dimensions while retaining important information. The most
common type of pooling is max pooling, where the maximum value within a
defined region of the feature map is selected. This helps reduce computational
complexity and prevents overfitting by removing unnecessary details.
4. Fully Connected Layers (Dense Layers): After the convolutional and pooling
layers, the CNN typically includes one or more fully connected layers. These
layers connect every neuron to every neuron in the previous layer, enabling the
model to make final predictions based on the features learned throughout the
network. In the case of SmartBite, these layers are used to classify the food and
estimate its nutritional content.
5. Softmax Layer (for Classification): The final layer of a CNN is often a
softmax layer, which is used for classification tasks. It converts the output of
the last fully connected layer into probabilities for each class (food item). The
model then selects the class with the highest probability as the predicted food
item.
In the SmartBite project, the CNN is trained on a large dataset of labeled food
images. Additionally, the CNN helps estimate the calorie content of the food
based on learned associations between food categories and their typical
nutritional values. CNNs are highly effective for food recognition due to their
ability to automatically learn hierarchical features from raw pixel data.
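To make the layer roles above concrete, here is a minimal NumPy forward pass through the named operations (convolution, ReLU, max pooling, softmax). The toy image, kernel, and the flatten-as-dense shortcut are illustrative simplifications, not the SmartBite model itself:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified Linear Unit: zero out negative activations."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Down-sample by taking the max in each size x size block."""
    h, w = x.shape
    h, w = h // size * size, w // size * size  # drop ragged edges
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Convert raw scores into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 6x6 grayscale "image" and a 3x3 vertical-edge-style kernel.
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]])

features = max_pool(relu(conv2d(img, kernel)))   # conv -> ReLU -> pool
logits = features.flatten()[:4]                  # stand-in for a dense layer
probs = softmax(logits)                          # class probabilities, sum to 1
print(features.shape, probs.sum())               # (2, 2) 1.0
```

A real CNN stacks many such layers with learned kernels; frameworks like TensorFlow perform the same arithmetic, just vectorized and differentiable.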

Figure 5. Convolutional Neural Network

CHAPTER 6

MODULE LIST

SMARTBITE: Food Classification and Calorie Estimation is
divided into multiple modules to ensure a systematic approach to food image
recognition and nutritional analysis. Each module plays a crucial role in the pipeline,
from data acquisition to deep learning-based prediction and deployment.

6.1 FOOD DATA GATHERING


This module focuses on the initial stage of data acquisition for food
classification and calorie estimation. A diverse dataset of food images is collected
from multiple sources, including open-access food databases, manually captured
images, and crowd-sourced repositories. The dataset includes a wide variety of food
items, different portion sizes, and various lighting conditions to enhance model
robustness. Each image is annotated with corresponding food labels and nutritional
information, including calorie content, macronutrient breakdown, and serving sizes.
6.2 FOOD IMAGE ENHANCEMENT
In this module, raw food images undergo preprocessing techniques to
improve image clarity and enhance model performance. Several image processing
techniques are applied, including image resizing, which standardizes images to a
fixed resolution for consistency, and normalization, which scales pixel values to a
common range for better convergence during model training. Contrast enhancement
is implemented through histogram equalization to improve image visibility.
Segmentation techniques are used to identify and isolate food objects from the
background, ensuring the model focuses only on relevant portions. Additionally,
data augmentation techniques such as rotation, flipping, and brightness adjustments
are applied to enhance dataset diversity and improve model generalization.
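The contrast-enhancement step mentioned above can be sketched as standard histogram equalization in NumPy. The 8x8 low-contrast toy image is an assumption for illustration:

```python
import numpy as np

def equalize_histogram(img):
    """Contrast enhancement via histogram equalization on an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level so the cumulative distribution becomes ~uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast toy image: values squeezed into the 100-120 range.
img = np.random.default_rng(0).integers(100, 121, size=(8, 8)).astype(np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

After equalization the narrow 100-120 band is stretched across the full 0-255 range, which is the visibility improvement the module relies on.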
6.3 FOOD RECOGNITION MODEL DEVELOPMENT
This module involves performing statistical and visual analysis of the dataset to
gain insights into food class distributions, calorie variations, and potential biases.
Various techniques, such as histogram plotting, correlation analysis, and principal
component analysis (PCA), are utilized to understand data trends. Heatmaps and bar
charts are generated to identify class imbalances, ensuring that the dataset is well-
structured for deep learning model training.
A deep learning model is implemented for food classification and calorie estimation.
The key steps include selecting pre-trained architectures such as ResNet, VGG, or
MobileNet for transfer learning, utilizing convolutional layers for feature extraction,
and feeding the labeled dataset into the model while optimizing it using techniques
like Adam or SGD optimizers. The model's effectiveness is evaluated using metrics
such as accuracy, precision, recall, and F1-score to ensure high-performance
classification. The model consists of multiple convolutional layers for feature
extraction, followed by fully connected layers for classification. Dropout
regularization and batch normalization are employed to improve model stability and
prevent overfitting. The CNN model is trained with large-scale food datasets and
validated using cross-validation techniques to ensure robust performance.
6.4 FINAL IMPLEMENTATION AND DEPLOYMENT
This module involves integrating the trained model into a real-world application
for practical use. Users can upload food images for instant classification and calorie
estimation, ensuring seamless user input handling. The system provides real-time
predictions by displaying classified food items along with their estimated calorie
values and nutritional breakdown. Additionally, performance optimization techniques
are applied to ensure smooth and efficient model inference, minimizing latency and
enhancing accuracy for real-time applications.

CHAPTER 7

CONCLUSION AND FUTURE ENHANCEMENT

7.1 CONCLUSION
Food detection and calorie estimation are two important tasks in the fields of
computer vision and health, with significant implications for dietary monitoring,
weight management, and healthcare. Recent advances in deep learning, particularly
the use of convolutional neural networks, have shown promising results in
accurately detecting food types and estimating their calorie content from images.
There are challenges in food image analysis, including variations in portion size,
lighting, and presentation. Inconsistent color representation and varying portion
sizes can hinder accurate recognition and calorie estimation. Incorporating
contextual information like ingredient lists, serving sizes, and user data can improve
accuracy and personalization. Future models could combine image, text, and sensor
data for even more precise predictions.
7.2 FUTURE ENHANCEMENT
To further enhance the SmartBite system, future developments could focus on
multi-modal inputs, improved handling of food variations, and personalized health
data integration. By combining image, text, and sensor data, the system can achieve
more precise calorie estimation and tailored recommendations. Advanced techniques
like 3D image reconstruction and data augmentation can improve the system's ability
to recognize food variations. Incorporating personalized health data can lead to more
customized and effective dietary advice. With these enhancements, SmartBite can
become a comprehensive and accurate tool for promoting healthy eating habits and
improving overall well-being.

APPENDIX
A.1 SOURCE CODE
A.1.1 MAIN FILE
from flask import Flask, request, render_template, redirect
import tensorflow as tf
import numpy as np
import pickle
import contextlib
import re
import sqlite3
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError
from create_database import setup_database
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from utils import login_required, set_session

app = Flask(__name__)

# Load the trained model
model = tf.keras.models.load_model('models/food_classifier.h5')

database = "users.db"
setup_database(name=database)
app.secret_key = 'xpSm7p5bgJY8rNoBjGWiz5yjxM-NEBlW6SIBI62OkLc='

# Load class labels
with open('models/class_labels.pkl', 'rb') as f:
    class_labels = pickle.load(f)

# Calorie dictionary (calories per 100 g)
calorie_dict = {
    'Apple': 52,
    'Banana': 89,
    'Peanuts': 567,
    'Pizza': 266,
    # Add more classes as needed
}

# Homepage route
@app.route('/')
def index():
    return render_template('index.html')

@app.route('/about')
def about():
    return render_template('about.html')

@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'GET':
        return render_template('login.html')

    # Set data to variables
    username = request.form.get('username')
    password = request.form.get('password')

    # Attempt to query associated user data
    query = 'select username, password, email from users where username = :username'
    with contextlib.closing(sqlite3.connect(database)) as conn:
        with conn:
            account = conn.execute(query, {'username': username}).fetchone()
    if not account:
        return render_template('login.html', error='Username does not exist')

    # Verify password
    try:
        ph = PasswordHasher()
        ph.verify(account[1], password)
    except VerifyMismatchError:
        return render_template('login.html', error='Incorrect password')

    # Set cookie for user session
    set_session(
        username=account[0],
        email=account[2],
        remember_me='remember-me' in request.form
    )
    return redirect('/predict_page')

@app.route('/register', methods=['GET', 'POST'])
def register():
    if request.method == 'GET':
        return render_template('register.html')

    # Store data to variables
    password = request.form.get('password')
    confirm_password = request.form.get('confirm-password')
    username = request.form.get('username')
    email = request.form.get('email')

    # Verify data
    if len(password) < 8:
        return render_template('register.html', error='Your password must be 8 or more characters')
    if password != confirm_password:
        return render_template('register.html', error='Passwords do not match')
    if not re.match(r'^[a-zA-Z0-9]+$', username):
        return render_template('register.html', error='Username must only be letters and numbers')

    query = 'select username from users where username = :username;'
    with contextlib.closing(sqlite3.connect(database)) as conn:
        with conn:
            result = conn.execute(query, {'username': username}).fetchone()
    if result:
        return render_template('register.html', error='Username already exists')

    # Create password hash
    pw = PasswordHasher()
    hashed_password = pw.hash(password)

    query = 'insert into users(username, password, email) values (:username, :password, :email);'
    params = {
        'username': username,
        'password': hashed_password,
        'email': email
    }
    with contextlib.closing(sqlite3.connect(database)) as conn:
        with conn:
            conn.execute(query, params)
    return render_template('login.html')

@app.route('/predict', methods=['POST'])
def predict_food_and_calories():
    # Preprocess the uploaded image and run it through the classifier
    file = request.files['image']
    img = load_img(file, target_size=(224, 224))
    arr = img_to_array(img) / 255.0
    prediction = model.predict(np.expand_dims(arr, axis=0))
    food = class_labels[int(np.argmax(prediction))]
    calories = calorie_dict.get(food, 'Unknown')
    return render_template('result.html', food=food, calories=calories)

if __name__ == '__main__':
    app.run(debug=True)
A.1.2 DATABASE
import sqlite3
import contextlib
from pathlib import Path

def create_connection(db_file: str) -> None:
    """ Create a database connection to a SQLite database """
    conn = sqlite3.connect(db_file)
    conn.close()

def create_table(db_file: str) -> None:
    """ Create a table for users """
    query = '''
        CREATE TABLE IF NOT EXISTS users (
            username TEXT PRIMARY KEY,
            password TEXT NOT NULL,
            email TEXT
        );
    '''
    with contextlib.closing(sqlite3.connect(db_file)) as conn:
        with conn:
            conn.execute(query)

def setup_database(name: str) -> None:
    if Path(name).exists():
        return
    create_connection(name)
    create_table(name)
    print('\033[91m', 'Creating new example database "users.db"', '\033[0m')
A.2 SCREENSHOTS

Figure A.2.1 Opening page

Figure A.2.2 About page

Figure A.2.3 Register page

Figure A.2.4 Existing user Login

Figure A.2.5 Image upload page

Figure A.2.6 Result page

REFERENCES
1. A. G. C. Pacheco and R. A. Krohling, ''The impact of patient clinical
information on automated skin cancer detection,'' Comput. Biol. Med., vol. 116,
Jan. 2020, Art. no. 103545.

2. A. G. C. Pacheco, G. R. Lima, A. S. Salomão, B. Krohling, I. P. Biral, G. G. de
Angelo, F. C. R. Alves Jr., J. G. M. Esgario, A. C. Simora, P. B. C. Castro,
F. B. Rodrigues, P. H. L. Frasson, R. A. Krohling, H. Knidel, M. C. S. Santos,
R. B. do Espírito Santo, T. L. S. G. Macedo, T. R. P. Canuto, and L. F. S. de
Barros, ''PAD-UFES-20: A skin lesion dataset composed of patient data and
clinical images collected from smartphones,'' Data Brief, vol. 32, Oct. 2020,
Art. no. 106221.

3. C. P. Davis, Rosacea, Acne, Shingles, Covid-19 Rashes: Common Adult Skin
Diseases, 2020.

4. D. C. Malo, M. M. Rahman, J. Mahbub, and M. M. Khan, ''Skin cancer detection
using convolutional neural network,'' in Proc. 2022 IEEE 12th Annual Computing
and Communication Workshop and Conference (CCWC), 2022, pp. 0169-0176.

5. K. V. Dalakleidi, M. Papadelli, I. Kapolos, and K. Papadimitriou, ''Applying
image-based food-recognition systems on dietary assessment: A systematic
review,'' Adv. Nutrition, vol. 13, no. 6, pp. 2590-2619, Nov. 2022.

6. L. M. Amugongo, A. Kriebitz, A. Boch, and C. Lutge, ''Mobile computer
vision-based applications for food recognition and volume and calorific
estimation: A systematic review,'' in Healthcare, vol. 11. Basel, Switzerland:
Multidisciplinary Digital Publishing Institute, 2023, p. 59.

7. L. M. König, M. Van Emmenis, J. Nurmi, A. Kassavou, and S. Sutton,
''Characteristics of smartphone-based dietary assessment tools: A systematic
review,'' Health Psychol. Rev., vol. 16, no. 4, pp. 526-550, Oct. 2022.

8. M. Chen, P. Zhou, D. Wu, L. Hu, M. M. Hassan, and A. Alamri, ''AI-skin: Skin
disease recognition based on self-learning and wide data collection through a
closed-loop framework,'' Inf. Fusion, vol. 54, pp. 1-9, Feb. 2020.

9. M. Chopra and A. Purwar, ''Recent studies on segmentation techniques for food
recognition: A survey,'' Arch. Comput. Methods Eng., vol. 29, no. 2, pp.
865-878, Mar. 2022.

10. M. Goyal, T. Knackstedt, S. Yan, and S. Hassanpour, ''Artificial
intelligence-based image classification methods for diagnosis of skin cancer:
Challenges and opportunities,'' Comput. Biol. Med., vol. 127, Dec. 2020, Art.
no. 104065.

11. S. S. Chaturvedi, J. V. Tembhurne, and T. Diwan, ''A multi-class skin cancer
classification using deep convolutional neural networks,'' Multimedia Tools
Appl., vol. 79, nos. 39-40, pp. 28477-28498, Oct. 2020.

12. Y. Jusman, I. M. Firdiantika, D. A. Dharmawan, and K. Purwanto, ''Performance
of multi layer perceptron and deep neural networks in skin cancer
classification,'' in Proc. 2021 IEEE 3rd Global Conference on Life Sciences and
Technologies (LifeTech), 2021, pp. 534-538.

PUBLICATIONS

Mrs. S. Nazeema, M.E., R. Ratchaya, B. K. Sujay, A. Surendhar, K. Vineeth,
"SMARTBITE: AI-Powered Food Recognition and Calorie Estimation for
Personalized Diet Monitoring", International Conference on Innovative Technologies
in Engineering and Science (ICITES 2025), organized by Bannari Amman Institute
of Technology, Sathyamangalam, Erode - 638401, Tamil Nadu, India, 21 March 2025.
