CPP Final Report Final
INTRODUCTION
Emotion-based mood detection is one of the current topics across various fields and offers solutions to a range of challenges. Beyond the traditional challenges of recognizing faces captured under uncontrolled settings, such as varying poses, lighting, and expressions, emotion recognition must also cope with different sound frequencies. For any face and mood detection system, the database is the most important component, since it is used to compare facial features and Mel-frequency components of sound. To create the database, features of the face are computed and stored; this database is then used to evaluate the face and emotion with different algorithms.
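As a minimal sketch of this idea, the stored database can be a matrix of per-label feature vectors, and evaluation reduces to finding the nearest stored vector to a probe. The feature values and labels below are illustrative placeholders, not the project's actual data:

```python
import numpy as np

# Hypothetical sketch: store per-label feature vectors and match a probe
# against them by Euclidean distance. Feature extraction itself is assumed
# to happen elsewhere (e.g. facial landmarks flattened into a vector).

def build_database(named_features):
    """named_features: dict of label -> 1-D feature vector (np.ndarray)."""
    labels = list(named_features)
    matrix = np.stack([named_features[l] for l in labels])
    return labels, matrix

def match(labels, matrix, probe):
    """Return the label whose stored features are closest to the probe."""
    dists = np.linalg.norm(matrix - probe, axis=1)
    return labels[int(np.argmin(dists))]

labels, matrix = build_database({
    "happy": np.array([0.9, 0.1, 0.2]),
    "sad":   np.array([0.1, 0.8, 0.3]),
})
print(match(labels, matrix, np.array([0.85, 0.15, 0.25])))  # closest stored label
```

A real system would replace the toy vectors with features extracted from face images or Mel-frequency components of audio.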
Emotional aspects have a strong impact on social intelligence, including communication, understanding, and decision making, and they also help in understanding the behavioral attitude of humans. Emotion plays an important role during communication. Emotion recognition can be carried out in diverse ways, verbal or non-verbal: voice (audio) is a verbal form of communication, while facial expressions, actions, body postures, and gestures are non-verbal forms. Humans can recognize emotions without any meaningful delay or effort, but recognition of facial expressions by a machine is a big challenge.
One of the most interesting areas of human-computer interaction is face detection and identification. The differences between facial features are comparatively small, which makes observing them an interesting task. Detecting and identifying facial features from a face is a challenging task.
Finding a human emotion from a human face can be one of the most challenging assignments. The face is the best way to detect and recognize a human, and no recognition algorithm will work without a face detection step. The rate of detection affects the recognition stage. With all this noise, it is a very intriguing task to detect and localize an unknown face region in a still image.
Face emotion detection is still a challenging task, since face images may be affected by changes in the scene, such as pose variation, facial expression, or illumination. The main goal of the proposed system is to determine the human mood from a face image given as input, and then use the detected emotion to play a matching audio file. A face recognition technique is used here to match the trained face images against the original input face image.
The proposed approach is simple, efficient, and accurate, and gives more accurate results than the existing approach. The system plays a very important role in recognition and detection related fields; that is, it produces useful results very quickly compared to traditional methods.
PROJECT SCHEDULE
Aug: System Engineering
Sep-Oct: Requirement Analysis
Nov-Dec: Design
Jan-Feb: Coding
Mar: Testing
REQUIREMENTS ANALYSIS
2. Objectives
Software Requirements
The operating system used is Windows 10, which will support the functions that we are going to build in our project.
2. Programming Language -
3. Technical Analysis
For developing the software we used Python as the programming language, because the functionality required by the project's modules can be implemented easily with it.
DESIGN
Analysis Modeling
Fig. 1.1: Analysis model - input human face image.
Characteristics of AI & DL:
1. Artificial Intelligence
Refers to the ability of a machine or computer system to perform tasks that typically
require human intelligence, such as speech recognition, problem-solving, decision-
making, and learning from experience.
Can be categorized into two types: Narrow AI (also known as Weak AI) which is
designed for a specific task, and General AI (also known as Strong AI) which has the
capability to perform any intellectual task that a human can do.
Can be used in various applications including virtual assistants, autonomous vehicles,
fraud detection, recommendation systems, and healthcare, among others.
Can operate autonomously or in collaboration with humans, and can continuously learn and improve from data and feedback.
2. Deep Learning:
Is a subset of machine learning that involves training artificial neural networks with
multiple layers (deep neural networks) to process and analyze large amounts of data.
Relies on a hierarchical structure of layers, where each layer processes data and passes it
onto the next layer for further processing, allowing for the extraction of complex patterns
and representations from the data.
Can handle unstructured data such as images, speech, text, and sensor data, and has
shown remarkable success in tasks such as image and speech recognition, natural
language processing, and playing games.
Requires a large amount of labeled data for training, as well as significant computing
power and storage resources.
Has the ability to automatically learn features from raw data, reducing the need for
handcrafted features, and can adapt and improve its performance with more data and
iterations.
In summary, AI encompasses the broader concept of machines or systems exhibiting
human-like intelligence, while DL is a specific approach within machine learning that
uses deep neural networks to process and analyze complex data for pattern recognition
and decision-making.
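The hierarchical layer structure described above can be sketched in plain NumPy. This is an illustrative toy network, not the project's actual model: each layer transforms its input and passes the result to the next, and the final layer produces scores over three hypothetical emotion classes:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Three layers: 10 input features -> 8 hidden -> 5 hidden -> 3 emotion classes.
# Weights are random here; training would adjust them from labeled data.
layers = [(rng.normal(size=(10, 8)), np.zeros(8)),
          (rng.normal(size=(8, 5)), np.zeros(5)),
          (rng.normal(size=(5, 3)), np.zeros(3))]

def forward(x):
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        # hidden layers use ReLU; the final layer produces class probabilities
        x = relu(x) if i < len(layers) - 1 else softmax(x)
    return x

probs = forward(rng.normal(size=10))
print(probs)  # probabilities over the 3 classes, summing to 1
```

The layer count, sizes, and activations are assumptions chosen for illustration; the key point is the layer-by-layer data flow.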
IMPLEMENTATION
Proposed Architecture:
Language:
This project is based on the concepts of AI & Deep Learning, so we
chose the Python programming language to do our project.
Software requirements
Microsoft Visual Studio - Python
Overall Description
The proposed approach uses the human face to detect emotion and then uses this result to play an audio file related to the human's emotion. First, the system takes a human face image as input. In the image preprocessing step, we remove noise from the image and then convert it to grayscale. After that, face detection is carried out. Feature extraction techniques are then used to recognize the human face for emotion detection. These techniques help to detect the human's emotion: through feature detection of the lips, mouth, eyes, and eyebrows, the relevant feature points are found. If the input face matches a face in the emotion-based dataset exactly, we can detect the human's exact emotion and play the related audio. We also recommend music based on the detected mood. Detection under different environmental conditions can be achieved by training on a limited number of characteristic faces.
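The preprocessing step described above (noise removal, then grayscale conversion) can be sketched in plain NumPy. A real pipeline would typically use OpenCV (e.g. `cv2.blur` and `cv2.cvtColor`) instead; this stand-in shows what those operations compute:

```python
import numpy as np

def mean_filter(gray):
    """3x3 box blur with edge padding, a simple form of noise removal."""
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / 9.0

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image using ITU-R BT.601 luminance weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# Tiny synthetic image standing in for a captured face frame.
rgb = np.random.default_rng(1).uniform(0, 255, size=(4, 4, 3))
gray = to_grayscale(rgb)
denoised = mean_filter(gray)
```

Face detection and feature extraction would then operate on the denoised grayscale image.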
TESTING REPORT
1. Unit Testing
Unit testing focuses on each module individually, ensuring that it functions properly as a unit. In this testing we tested each module of our software to ensure maximum error detection. It helps to remove bugs from sub-modules and prevents larger bugs from appearing afterwards.
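A unit test in this style exercises one small function in isolation. The helper `label_from_scores` below is an illustrative stand-in, not the project's actual code; it shows the module-by-module approach with Python's built-in `unittest`:

```python
import unittest

LABELS = ["happy", "sad", "neutral"]

def label_from_scores(scores):
    """Map a list of class scores to an emotion label (hypothetical helper)."""
    if len(scores) != len(LABELS):
        raise ValueError("expected one score per label")
    return LABELS[scores.index(max(scores))]

class TestLabelFromScores(unittest.TestCase):
    def test_picks_highest_score(self):
        self.assertEqual(label_from_scores([0.1, 0.7, 0.2]), "sad")

    def test_rejects_wrong_length(self):
        with self.assertRaises(ValueError):
            label_from_scores([0.5])

# Run the tests programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLabelFromScores)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each module (Data Collection, Data Training, Interface) would get its own test case class along these lines.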
Functional Partitioning:
1. Data Collection
2. Data Training
3. Interface
Functional Description:
1) Name: Data Collection
Input:
The type of emotion data we need to collect for our project.
Output:
Data stored in ".npy" format.
Specification:
This module is used to store the type of data we need in our project. After running "data_collection.py", it asks the user to "Enter the name of Data"; once the name is entered, it accesses the camera and records the data.
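The collection step can be sketched as follows. This is a hedged stand-in: the real script reads frames and facial feature points from the webcam, while here `capture_frame_features` generates placeholder vectors so the save-to-`.npy` flow can be shown end to end:

```python
import numpy as np

def capture_frame_features(rng, size=100):
    # Stand-in for one frame's worth of facial feature points
    # (the real script would extract these from a camera frame).
    return rng.uniform(-1, 1, size=size)

def collect(name, frames=50, seed=0):
    """Gather per-frame feature vectors and save them as '<name>.npy'."""
    rng = np.random.default_rng(seed)
    data = np.stack([capture_frame_features(rng) for _ in range(frames)])
    np.save(f"{name}.npy", data)  # one row per recorded frame
    return data.shape

print(collect("happy"))  # shape of the saved array: (frames, features)
```

Running this once per emotion name produces the per-emotion `.npy` files the training module consumes.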
2) Name: Data Training
Specification:
This module is used to train on the emotion data so that the system can clearly identify the emotions on the user's face.
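As a minimal illustration of what "training" on the collected `.npy` data could look like, the sketch below computes a per-emotion mean feature vector and classifies new samples by nearest centroid. This is deliberately the simplest possible stand-in; the actual project would likely train a neural network instead:

```python
import numpy as np

def train(datasets):
    """datasets: dict of label -> array of shape (frames, features)."""
    return {label: data.mean(axis=0) for label, data in datasets.items()}

def predict(centroids, sample):
    """Classify a sample by its nearest per-class centroid."""
    return min(centroids, key=lambda l: np.linalg.norm(centroids[l] - sample))

# Synthetic stand-ins for the arrays loaded from the .npy files.
rng = np.random.default_rng(2)
datasets = {
    "happy": rng.normal(loc=1.0, size=(50, 10)),
    "sad":   rng.normal(loc=-1.0, size=(50, 10)),
}
model = train(datasets)
print(predict(model, np.full(10, 0.9)))  # sample lies near the "happy" centroid
```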
3) Name: Interface
Input: It records the user's face continuously and displays the type of emotion at the
top-left corner of the window.
Output:
Records the user's face and shows the type of emotion.
Specification:
This module uses the data provided by both the "Data Collection" and "Data
Training" modules to display the type of emotion the user is showing on screen.
2. Integration Testing
Integration testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with interfacing. The
objective is to take unit tested components and build a program structure that has been
dictated by the design.
We prefer top-down integration testing as the testing approach for our project. Top-down integration testing is an incremental approach to the construction of the program structure: modules are integrated by moving downward through the control hierarchy, beginning with the main control module.
Modules subordinate to the main module are incorporated into the structure in
depth-first order.
Depth-first integration would integrate all components on a major control path of a
structure. Selection of a major path is somewhat arbitrary and depends on application-
specific characteristics.
3. Validation testing
The purpose of validation testing is to ensure that all expectations of the customer have
been satisfied. The testing is done by the project group members by inspecting the
requirements and validating each requirement.
We prefer the alpha testing technique for validation. In this case, a group of students is called on to test the services of the software. The group members are present on site, observing the working of the system and collecting the bugs that occur. Changes are made according to the requirements, and testing is done again to uncover further errors if present.
4. System Testing
In system testing, the system undergoes various exercises to verify that it fulfills the system requirements. These tests include: a. Security tests: designed to ensure that no user can access documents that do not belong to them; b. Performance tests: conducted to check the performance of the system.
PERFORMANCE ANALYSIS
INSTALLATION GUIDE
4. To host the web application on localhost, use the following command -
streamlit run .\music.py
15. Finally, YouTube will open with recommendations based on your emotion + singer + language.
USER MANUAL
Step 2: Give the language of the song and the name of the singer you want as input.
Step 3: Allow the web camera to capture your emotions and click on "Recommend songs".
Step 4: You will now be redirected to YouTube, which will recommend songs based on your captured real-time emotions.
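The redirection in Step 4 amounts to building a YouTube search URL from the detected emotion plus the user's language and singer inputs. The function name and query format below are illustrative assumptions, not the project's actual code:

```python
import urllib.parse

def youtube_search_url(emotion, language, singer):
    """Combine detected emotion with user inputs into a YouTube search URL."""
    query = f"{language} {emotion} songs {singer}"
    return ("https://www.youtube.com/results?search_query="
            + urllib.parse.quote(query))

url = youtube_search_url("happy", "hindi", "Arijit Singh")
print(url)
# webbrowser.open(url) would then perform the redirect, as in Step 4.
```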
RESULT
– Capturing Emotions:
1. Mood is Happy
2. Mood is Sad
3. Mood is Neutral
APPLICATIONS