
FACIAL EMOTION DETECTION

A Major Project-1 Report


Submitted in partial fulfilment of the requirement for the award of Degree of
Bachelor of Engineering in CSE-Artificial Intelligence and Machine
Learning

Submitted to
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA
BHOPAL (M.P.)

Submitted by
VATSALYA KATARIYA (0111AL211176)
YAMAN MAHTHA (0111AL211183)

Under the supervision of


Dr. Sumit Vashishtha
Head of the Department,
CSE-Artificial Intelligence and Machine Learning

ISO: 9001-2000

TECHNOCRATS INSTITUTE OF TECHNOLOGY, BHOPAL


Session 2023-2024
TECHNOCRATS INSTITUTE OF TECHNOLOGY,
BHOPAL
CSE-ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

DECLARATION

We, Vatsalya Katariya and Yaman Mahtha, students of Bachelor of
Engineering, CSE-Artificial Intelligence and Machine Learning,
TECHNOCRATS INSTITUTE OF TECHNOLOGY, Bhopal, hereby
declare that the work presented in this Major Project is the outcome of our
own work and is correct to the best of our knowledge. This work has been
carried out with due regard for engineering ethics. The work presented does
not infringe any patented work and has not been submitted to any other
University / Institute for the award of any degree / diploma or any
professional certificate.

Vatsalya Katariya (0111AL211176)


Yaman Mahtha (0111AL211183)

Date: 25/11/2024
TECHNOCRATS INSTITUTE OF TECHNOLOGY,
BHOPAL

CSE-ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

CERTIFICATE

This is to certify that the work embodied in this Project entitled “FACIAL
EMOTION DETECTION” has been satisfactorily completed by Vatsalya
Katariya (0111AL211176) and Yaman Mahtha (0111AL211183). The work
has been carried out under my supervision and guidance in the Department of
CSE-Artificial Intelligence and Machine Learning, TECHNOCRATS
INSTITUTE OF TECHNOLOGY, Bhopal, in partial fulfilment of the
Bachelor of Engineering Degree during the academic year 2023-2024.

Approved By
Dr. Sumit Vashishtha
Professor and Head of the Department
CSE-Artificial Intelligence and Machine Learning,
Forwarded by:

Dr. Shashi Kumar Jain


Director

TECHNOCRATS INSTITUTE OF TECHNOLOGY, Bhopal


TECHNOCRATS INSTITUTE OF TECHNOLOGY
BHOPAL
CSE-ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

CERTIFICATE OF APPROVAL

The project work submitted is hereby approved as a systematic study of
an engineering subject. It is presented in a satisfactory manner. It is also
accepted as a prerequisite to the degree for which it has been submitted. It is
understood that by this approval the undersigned does not necessarily endorse
or approve any statement made, opinion expressed or conclusions drawn
therein, but approves the project only for the purpose for which it has been
submitted.

(Internal Examiner) (External Examiner)


CONTENTS

I. Abstract
II. Introduction
III. Literature Review
IV. Feasibility Study
V. Objective
VI. Problem Statement
VII. Scope of the Project
VIII. Software Model Used for Project
IX. Methodology
X. Software and Hardware Requirements
XI. Project Description Details
XII. Benefits of Project to Society
XIII. Outcomes of Project
XIV. Limitations
XV. Future Scope
XVI. References
ABSTRACT
A key requirement for developing any innovative system in a computing
environment is an interface that is sufficiently friendly to the average end user.
Accurate design of such a user-centered interface, however, means more than
just the ergonomics of the panels and displays. It also requires that designers
precisely define what information to use and how, where, and when to use it.
Facial expression, as a natural, non-intrusive, and efficient channel of
communication, has been considered one of the potential inputs of such interfaces.
This work aims at designing a robust Facial Expression Recognition (FER)
system by combining various techniques from computer vision and pattern
recognition. Expression recognition is closely related to face recognition, where a
great deal of research has been done and a vast array of algorithms introduced.
FER can also be considered a special case of a pattern recognition problem, for
which many techniques are available. In designing an FER system, we can take
advantage of these resources and use existing algorithms as building blocks of
our system, so a major part of this work is to determine the optimal combination
of algorithms. To do this, we first divide the system into three modules, i.e.,
Preprocessing, Feature Extraction, and Classification; for each, some candidate
methods are implemented, and eventually the optimal configuration is found by
comparing the performance of different combinations.
Another issue of great interest to designers of facial expression recognition
systems is the classifier, which is the core of the system. Conventional
classification algorithms assume the image is a single-variable function of an
underlying class label. However, this is not true in face recognition, where the
appearance of a face is influenced by multiple factors: identity, expression,
illumination, and so on.
INTRODUCTION
Project Objective:
The primary goal of this research is to design, implement, and evaluate a novel facial expression
recognition system using various statistical learning techniques. This goal will be realized
through the following objectives:
1. System-level design: In this stage, we will use existing techniques in related areas as
building blocks to design our system.
a) A facial expression recognition system usually consists of multiple components, each of
which is responsible for one task. We first need to review the literature and decide the
overall architecture of our system, i.e., how many modules it has, the responsibility of
each, and how they should cooperate with each other.
b) Implement and test various techniques for each module and find the best
combination by comparing their accuracy, speed, and robustness.
2. Algorithm-level design: Focus on the classifier, which is the core of a recognition
system, and try to design new algorithms that hopefully perform better than
existing ones.
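The modular architecture in objective 1(a) can be sketched as a simple pipeline. The snippet below is a minimal illustration only: the placeholder logic inside each module is invented for demonstration, and real implementations would substitute the techniques discussed later in the report.

```python
# Sketch of the three-module FER architecture: Preprocessing ->
# Feature Extraction -> Classification. The logic inside each module
# is a trivial placeholder, not the report's actual algorithms.

def preprocess(image):
    """Normalize a raw face image (placeholder: scale pixels to [0, 1])."""
    return [px / 255.0 for px in image]

def extract_features(image):
    """Reduce the image to a feature vector (placeholder: mean intensity)."""
    return [sum(image) / len(image)]

def classify(features):
    """Map a feature vector to an expression label (placeholder threshold)."""
    return "happy" if features[0] > 0.5 else "neutral"

def recognize_expression(image):
    """Run the full preprocessing -> features -> classification chain."""
    return classify(extract_features(preprocess(image)))
```

Swapping the body of any one module (e.g. replacing the threshold classifier with an SVM) leaves the other two untouched, which is what makes the per-module comparison in objective 1(b) practical.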

Motivation:

In today's networked world, the need to maintain the security of information
or physical property is becoming both increasingly important and increasingly difficult. In
countries like Nepal, crime rates are increasing day by day, yet there are no automatic
systems that can track a person's activity. If we are able to track people's facial
expressions automatically, we can find criminals more easily, since facial expressions
change while doing different activities. We therefore decided to build a Facial Expression
Recognition System. We became interested in this project after going through a few papers
in this area, which describe how accurate and reliable facial expression recognition systems
were designed and built. As a result, we are highly motivated to develop a system that
recognizes facial expressions and tracks a person's activity.

Problem Statement

Human emotions and intentions are expressed through facial expressions, and deriving an
efficient and effective feature representation is the fundamental component of a facial
expression system. Face recognition is important for the interpretation of facial expressions
in applications such as intelligent man-machine interfaces and communication, intelligent
visual surveillance, teleconferencing, and real-time animation from live motion images.
Facial expressions are useful for efficient interaction. Most research and systems in facial
expression recognition are limited to the six basic expressions (joy, sadness, anger, disgust,
fear, surprise), which have been found insufficient to describe all facial expressions; richer
expressions are instead categorized based on facial actions. Detecting a face and recognizing
its expression is a very complicated task, since it is vital to pay attention to primary
components such as face configuration, orientation, and the location of the face.

Problem Definition :

Human facial expressions can be easily classified into 7 basic emotions: happy, sad, surprise,
fear, anger, disgust, and neutral. Our facial emotions are expressed through activation of
specific sets of facial muscles. These sometimes subtle, yet complex, signals in an expression
often contain an abundant amount of information about our state of mind. Through facial
emotion recognition, we are able to measure the effects that content and services have on
the audience/users through an easy and low-cost procedure. For example, retailers may use
these metrics to evaluate customer interest. Health care providers can provide better service
by using additional information about patients' emotional states during treatment.
Entertainment producers can monitor audience engagement in events to consistently create
desired content.
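In practice a classifier outputs an integer class index, which is then mapped back to one of the seven emotion names above. The index ordering below follows the FER-2013 dataset convention; this ordering is an assumption for illustration, since the report itself does not fix one.

```python
# Seven basic emotion classes, indexed per the FER-2013 convention
# (an assumed ordering; any fixed ordering would work).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def label_of(class_index: int) -> str:
    """Translate a model's predicted class index into an emotion name."""
    return EMOTIONS[class_index]
```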
LITERATURE REVIEW
Facial Emotion Detection (FED) systems have become a crucial research area in
artificial intelligence and computer vision. They aim to recognize human emotions
from facial expressions, leveraging various computational techniques to analyze
visual data. These systems find applications in diverse fields such as healthcare,
education, marketing, and security. This review provides an overview of the key
approaches, methodologies, challenges, and advancements in facial emotion
detection.

Key Components of Facial Emotion Detection Systems

 Face Detection: Identifying the face in an image or video is the first step. Techniques like Haar
cascades, Histogram of Oriented Gradients (HOG), and deep learning models (e.g., YOLO, SSD)
are commonly used.
 Feature Extraction: Extracting meaningful features from facial data is critical for emotion
recognition. Popular approaches include:
 Geometric-based Methods: Analyze facial landmarks (e.g., eyes, eyebrows, mouth) to
determine movements and positions.
 Appearance-based Methods: Use texture and pixel intensity data, often employing
methods like Local Binary Patterns (LBP) and Gabor filters.
 Classification: Machine learning models classify emotions based on extracted features. Traditional
models include Support Vector Machines (SVMs) and Random Forests, while modern FED systems
leverage deep learning networks such as Convolutional Neural Networks (CNNs).

 Face Registration: After detection, faces are normalized so they can be compared
consistently. Faces are first located in the image using a set of landmark points, a step called
“face localization” or “face detection”. The detected faces are then geometrically normalized
to match a template image in a process called “face registration”.
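Of the appearance-based methods named above, Local Binary Patterns are simple enough to sketch directly. The pure-Python function below computes the 8-bit LBP code of a single pixel; a production system would use a vectorized library routine (e.g. scikit-image's `local_binary_pattern`), and the bit ordering chosen here is one common convention, not the only one.

```python
# Local Binary Pattern of one pixel: compare the 8 neighbours of (r, c),
# clockwise from the top-left, against the centre pixel and pack the
# comparison results into an 8-bit code. Texture descriptors are built
# by histogramming these codes over regions of the face image.

def lbp_code(image, r, c):
    """8-bit LBP code of pixel (r, c) in a 2-D list of intensities."""
    center = image[r][c]
    neighbours = [image[r-1][c-1], image[r-1][c], image[r-1][c+1],
                  image[r][c+1],   image[r+1][c+1], image[r+1][c],
                  image[r+1][c-1], image[r][c-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:       # neighbour at least as bright as centre
            code |= 1 << bit      # set the corresponding bit
    return code
```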
FEASIBILITY STUDY
This feasibility study evaluates the practicality and viability of implementing the Facial Emotion
Detection System, which aims to identify emotions based on facial expressions using advanced
machine learning techniques.
1. Project Overview
Project Name: Facial Emotion Detection System
Description: A system to detect and classify human emotions in real-time based on facial
expressions.
Objective: To leverage AI for emotion detection with applications in healthcare,
education, and marketing.

2. Technical Feasibility
Technology Requirements: Python, TensorFlow, OpenCV, and GPU-enabled systems.
Scalability: Designed to handle real-time processing and increased user demand.
Integration: Compatible with existing software platforms.

3. Economic Feasibility
Estimated Cost: Includes hardware, software development, and personnel.
ROI: Expected to enhance decision-making and user engagement.
Budget: Within available financial resources.

4. Conclusion
Based on the analysis, the Facial Emotion Detection System is deemed feasible, given the
availability of required technologies, budget, and alignment with ethical considerations.
OBJECTIVE
The Facial Emotion Detection System is designed to identify and classify human
emotions by analyzing facial expressions using advanced AI and computer vision
techniques. This innovative system has applications in diverse fields such as
healthcare, education, and marketing, enabling enhanced user interaction and data-
driven decision-making while ensuring ethical and responsible usage.

Objectives:

1. To develop a system that accurately identifies and classifies human emotions
based on facial expressions.
2. To leverage advanced machine learning and computer vision techniques for
real-time emotion detection.
3. To enhance user experience in applications like healthcare, education,
marketing, and entertainment.
4. To ensure ethical data usage by adhering to privacy laws and minimizing
biases in emotion classification.
5. To integrate the system seamlessly with existing platforms and enable
scalability for diverse use cases.
PROBLEM STATEMENT

Emotions play a crucial role in human interaction and decision-making. However,
in many applications, understanding emotional states accurately remains a
challenge, particularly in non-verbal communication. Current systems lack robust
mechanisms to identify emotions effectively, leading to gaps in areas such as
personalized services, mental health monitoring, and adaptive learning
environments.

Key Challenges:

1. Lack of Real-Time Emotion Analysis: Many existing systems struggle
with real-time processing, limiting their usability in dynamic environments.
2. Cultural and Contextual Variability: Differences in emotion expression
across cultures and contexts hinder the generalization of existing models.
3. Data Privacy Concerns: Handling sensitive facial data raises concerns
about privacy and security.
4. Integration Difficulties: Current emotion detection systems often lack
compatibility with diverse platforms and applications.

Problem Statement:

There is a need for a reliable, scalable, and ethical Facial Emotion Detection
System that can accurately analyze emotions in real-time while addressing cultural
variability, ensuring data privacy, and seamlessly integrating with various domains.
This system should enhance decision-making and user engagement across multiple
fields, including healthcare, education, and marketing.
SCOPE OF THE PROJECT

The Facial Emotion Detection System aims to provide a robust solution for identifying and
classifying human emotions based on facial expressions. Its scope includes technical,
operational, and application aspects, ensuring wide usability across diverse domains.

1. Technical Scope:

 Development of a machine learning-based model capable of real-time emotion detection.
 Integration of facial recognition and emotion classification algorithms using advanced
computer vision techniques.
 Scalability to handle large datasets and support multi-platform deployment (web, mobile,
and desktop).
 Implementation of secure data handling practices to ensure privacy compliance.

2. Operational Scope:

 Real-time analysis of facial expressions for applications in dynamic environments such as
live monitoring or interactive systems.
 Customizable features to adapt to specific industry needs, such as healthcare, education,
and marketing.
 User-friendly interface design for ease of use by non-technical personnel.

3. Application Scope:

 Healthcare: Assist in monitoring mental health conditions and emotional well-being.
 Education: Enhance e-learning platforms by analyzing student engagement and adjusting
teaching methods accordingly.
 Marketing: Evaluate customer reactions to products and advertisements for improved
strategies.
 Entertainment: Enhance interactive experiences in gaming and virtual reality systems.
 Security: Support surveillance systems to detect suspicious behaviors through emotional
cues.
SOFTWARE MODEL USED FOR PROJECT

Agile is an iterative and incremental software development methodology focused on
collaboration, adaptability, and customer satisfaction. The development of the Facial Emotion
Detection System will follow this model, breaking the process into short sprints, with each sprint
delivering a functional and deployable product increment.

Key Principles:

1. Iterative Development:
The project will be divided into sprints lasting two to four weeks. Each sprint will result in a
functional component of the system, such as face detection, feature extraction, or emotion
classification.

2. Customer Collaboration:
Frequent collaboration with stakeholders, such as users and domain experts, will ensure the
system aligns with their needs. Continuous feedback will help refine features and improve the
system’s usability.

3. Adaptability to Change:
Agile’s flexibility allows for changes in requirements even during later stages of development.
This is crucial for the Facial Emotion Detection System, as AI advancements and user
preferences may evolve throughout the project.

4. Cross-Functional Teams:
The team will consist of developers, designers, data scientists, and ethics specialists. This
diversity facilitates continuous communication, ensuring that the technical, ethical, and
user-experience aspects of the system are well integrated and aligned.
SOFTWARE AND HARDWARE REQUIREMENTS
Software Requirements:

1. Programming Languages, Libraries, and Frameworks:

 Python: For implementing machine learning algorithms and facial emotion detection.
 OpenCV: For facial recognition and image processing.
 TensorFlow/Keras/PyTorch: For training and deploying machine learning
models for emotion classification.
 Scikit-learn: For additional machine learning tools and utilities.
 Dlib: For facial landmark detection.
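As a minimal sketch of how these libraries fit together, the scikit-learn snippet below trains an SVM classifier on invented two-dimensional "feature vectors". The data and labels are fabricated purely for illustration; a real system would feed features extracted from face images with OpenCV or Dlib.

```python
from sklearn.svm import SVC

# Fabricated toy features: two well-separated clusters standing in for
# two emotion classes. Real features would be LBP histograms, landmark
# geometry, etc., extracted from face images.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["neutral", "neutral", "happy", "happy"]

# A linear SVM separates the clusters; predict() then maps any new
# feature vector to one of the trained emotion labels.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.85, 0.90]])[0])  # a point near the "happy" cluster
```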

2. Databases:

 MySQL/PostgreSQL: For storing user data and emotion analysis logs (if
required).
 NoSQL Database (MongoDB): For handling unstructured data (e.g., images,
model weights, etc.).

3. Development Tools:

 IDE/Code Editor: VS Code, PyCharm, or Jupyter Notebook for Python.
 Version Control: Git and GitHub for version control and collaboration.

4. Operating System:

 Windows/Linux/MacOS: Compatible with development tools and libraries.
Linux is preferred for production deployment.

Hardware Requirements:

1. Processing Unit:

 CPU: Intel i5/i7 or equivalent for general development tasks.
 GPU (Recommended): Nvidia GTX 1060 or higher (for training machine
learning models on large datasets).
 RAM: 8GB or more for smooth execution, especially when working with large
image datasets or machine learning models.
 Storage: SSD with at least 256GB for faster read/write operations, especially
when dealing with large datasets.
BENEFITS OF PROJECT TO SOCIETY

The Facial Emotion Detection System offers significant advantages, enhancing emotional
awareness and improving interactions across various sectors. Below are four key benefits:

1. Mental Health Monitoring and Support:

The system helps detect early signs of emotional distress, such as anxiety or stress, by analyzing
facial expressions. This enables timely intervention and personalized care, especially in
healthcare settings, offering remote monitoring for patients and enhancing treatment
effectiveness.

2. Enhanced Customer Experience:

By recognizing emotions in real-time, the system can tailor customer interactions, improving
satisfaction. In customer service, it can identify frustration or contentment, allowing businesses
to respond accordingly. Marketers can use emotional insights to create more targeted and
engaging campaigns.

3. Empowering Education and Learning:

In educational environments, the system helps teachers recognize students' emotional states,
enabling personalized learning experiences. It fosters better engagement and support, enhancing
student performance by addressing emotional needs during lessons.

4. Promoting Public Safety and Security:

The system improves safety by detecting signs of distress or aggression, helping security
personnel identify potential threats early. In workplaces, it can monitor employee emotions,
preventing stress-related incidents and promoting overall well-being.
OUTCOMES OF PROJECT

The Facial Emotion Detection System aims to achieve several impactful outcomes, benefiting
various sectors by providing real-time emotional insights. The primary outcomes include:

1. Improved Emotional Awareness:

The system will provide users with an enhanced understanding of emotional states by accurately
detecting facial expressions. This will enable individuals, caregivers, and organizations to
respond more empathetically and effectively to emotional cues, fostering better interpersonal
interactions.

2. Enhanced User Experience:

Through real-time emotion detection, the system will enhance the user experience in multiple
applications, including customer service, education, and healthcare. It will enable adaptive
environments that respond to user emotions, leading to more personalized and engaging
interactions.

3. Support for Mental Health Monitoring:

The system will offer tools for monitoring emotional well-being, helping to identify early signs
of mental health issues like anxiety or depression. This will support proactive mental health
interventions and provide valuable insights for mental health professionals.
FUTURE SCOPE
The Facial Emotion Detection System has significant potential for growth. Here are some key
areas for its future development:

1. Improved Accuracy and Multimodal Detection:

Future versions could combine facial recognition with other data sources, such as voice tone and
physiological signals, to enhance accuracy and provide a more comprehensive understanding of
emotions.

2. Cultural and Regional Adaptation:

To improve global applicability, the system can be trained on diverse datasets that account for
cultural differences in emotional expressions, enabling better interpretation across different
regions.

3. Real-Time Feedback and Adaptation:

The system could be enhanced to provide real-time emotional feedback, adjusting content or
interactions in applications like education and healthcare to create personalized, responsive
experiences.

4. Privacy and Security Enhancements:

Future systems could integrate advanced encryption and user consent features to ensure privacy,
allowing users to control their emotional data securely and ethically.

5. AI and Mental Health Integration:

Integrating the system with AI-driven mental health platforms could provide deeper emotional
insights, enabling early detection and personalized support in therapeutic settings.

6. Expansion into VR/AR:

The system could be integrated into VR and AR environments to create adaptive experiences that
respond to users' emotions, improving training simulations and entertainment applications.
REFERENCES

 Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and
emotion. Journal of Personality and Social Psychology, 17(2), 124–129.

 Pantic, M., & Rothkrantz, L. J. M. (2000). Facial expression recognition: The
state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence,
22(12), 1424–1445.

 Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S. (2009). A survey of affect
recognition methods: Audio, visual, and spontaneous expressions. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39–58.

 Mollahosseini, A., Chan, D., & Mahoor, M. H. (2016). Going deeper in facial
expression recognition using deep neural networks. IEEE Winter Conference on
Applications of Computer Vision (WACV), 1–10.