
NeuroQuantology | December 2022 | Volume 20 | Issue 20 | Page 2945-2954 | doi: 10.48047/NQ.2022.20.20.NQ109290
Prince Kumar et al / Face Expression and Emotion Detection by using Machine Learning and Music Recommendation

Face Expression and Emotion Detection by using Machine Learning and Music Recommendation

Prince Kumar1, Gouri Sankar Mishra2, Tarun Maini3, Pradeep Kumar Mishra4, Shubham Dubey5, Shivam Sharma6

1,4,5,6 Department of Computer Science & Application, Sharda School of Engineering & Technology, Sharda University, Greater Noida, UP, India
2,3 Department of Computer Science & Engineering, Sharda School of Engineering & Technology, Sharda University, Greater Noida, UP, India

1 [email protected], 2 [email protected], 3 [email protected], 4 [email protected], 5 [email protected], 6 [email protected]

ABSTRACT
Most of us listen to music to feel emotions, and music can lift a negative mood. Currently existing music systems let you listen to chosen music and suggest songs in categories depending on your interests or the tastes of other users. Because such systems are not designed with the elicited emotions in mind, music fans cannot completely depend on them and therefore do not prefer to listen to music on the radio or online. In this work, we present a sentiment-based music system. Our Raspberry Pi-based system plays tunes matching the ambience of the room using a speaker, a microphone, and a Raspberry Pi. The emotion of the recorded background sounds is assessed as a machine-learning classification problem; for this categorisation we use a simple (naive) Bayesian classifier. A song's beats-per-minute (BPM) tempo is then used to identify songs with comparable emotional content.
Keywords: face extraction, music suggestion, emotion recognition, real-time image capture.
DOI Number: 10.48047/NQ.2022.20.20.NQ109290  NeuroQuantology 2022; 20(20): 2945-2954

1. Introduction
By identifying and recording the user's emotions in real time, this approach suggests music to the user. We present a method to categorise various types of music into distinct moods, such as joyful, sad, or angry. Previous techniques employed collaborative filtering, using data from prior user sessions to select music; however, these techniques require a great deal of manual work. The Emotion-Based Music Player is a music player that uses Chrome as its front end and a machine learning algorithm written in Python to recognise emotions on the user's face. The user is then shown and recommended songs based on the mood that has been identified for them.
In this application, a person's picture is taken using a real-time device that can access the nearby camera equipment. Beyond that, the player offers some common features: a queue playlist, so the user can maintain a personal playlist, and a random mode. It uses the Python Eel library so that it can pick a random song without any fixed order. Based on the captured image, the data sets previously saved on the local device are compared, and after processing, the system determines the user's current mood in numerical form,
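The random-playback feature mentioned above amounts to a uniform pick from the queue. A minimal sketch in plain Python follows; in the actual app such a function would be exposed to the Chrome front end via Eel's @eel.expose decorator, and the playlist file names here are hypothetical:

```python
import random

def pick_random_song(playlist):
    """Return a random song from the queue, with no fixed order."""
    if not playlist:
        return None
    return random.choice(playlist)

# Hypothetical queue playlist; in the real app this function would be
# decorated with @eel.expose so the Chrome UI can call it.
queue = ["song_a.mp3", "song_b.mp3", "song_c.mp3"]
print(pick_random_song(queue))
```

Keeping the selection logic in one function like this makes it easy to swap the random mode for an emotion-driven mode later.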


dependent on which the music is to be played. For this, we used tools such as OpenCV, Eel, and NumPy.
This method focuses mostly on music suggestion, which has become an essential tool for reducing stress in modern society. Since facial expressions frequently convey emotion, we use faces as our major source of information for identifying emotion. We then provide music that can alter the user's mindset in line with that user's mood.

2. Literature review
2.1. System for Detecting Faces and Recognizing Expressions on the Face
Anagha S. Dhavalikar [1] proposed a technique for automatically recognising facial expressions. The system consists of three phases: face detection, feature extraction, and expression recognition. An RGB colour model is used for face detection, with lighting compensation while acquiring the face and morphological operations to retain the desired facial features, such as the lips and eyes. The system also uses the Active Appearance Model (AAM) technique to extract facial features. In this approach, model points on facial landmarks, including the lips, brows, and eyes, are located, and a file holding details about the identified model points is produced. The method then detects faces in new input and uses the observed expression to determine how the AAM model should change.

2.2. Bezier Curve Fitting for Emotional Identification from Face Expression Analysis
Bezier curve fitting is the basis of the approach proposed by Youngseop Kim, Woori Han, and Yong-Hwan Lee [2]. The first stage of this system identifies the face: colour still images are processed considering skin-colour pixels with spatial filtering, and feature maps are then used to determine the position of the lips and eyes as well as the angle of the face. The second stage identifies and analyses the facial landmarks from the original input photo in order to validate the facial expression from characteristics in the region of interest. When applying a Bezier curve to the eye and mouth, the approach first extracts the targeted area before taking the feature map's points for extraction. The method comprehends emotion by training on and measuring the Hausdorff distance between the input face picture and the database image using the Bezier curves.

2.3. Using animated mood images to suggest music
A technique for recommending music using animated mood pictures was proposed by Arto Lehtiniemi and Jukka Holm [3]. Using a library of pictures, the user of this system obtains music recommendations based on the genre associated with each image. The Nokia Research Center created this technique for making music recommendations. Audio signal processing and textual meta tags are used in this system to describe the genre.

2.4. Emotion identification from facial expressions in human-computer interaction
F. Abdat, C. Maaoui, and A. Pruski [4] suggested a fully automated facial-emotion identification system based on three steps: face detection, facial feature extraction, and classification of facial expressions. The methodology couples the Shi and Tomasi method with an anthropometric model to identify the facial feature points. It uses a vector of 21 distances to characterise facial deformation relative to a neutral face and classifies the data using an SVM.

2.5. Emotion-based music recommendation by association discovery from film music
Fang-Fei Kuo, Suh-Yin Lee, et al. [5] note that the spread of digital music has driven the growth of music recommendation for consumers. Current recommendation methods are based on the consumers' general preferences for music; nonetheless, there are occasions when selecting music based on mood is necessary. Using association discovery from film music, they provide an approach for recommending music based on emotion. The work examines musical feature extraction and modifies the affinity graph in order to uncover associations between emotions and musical qualities. According to the experimental findings, the technique achieves roughly 85% accuracy on average.

2.6. Interactive Music Search and Recommendation Based on Mood
As Ivana Andjelkovic, John O'Donovan, et al. [6] observe, research on recommender systems has been


heavily concentrated on improving prediction and ranking. Recent studies, however, have highlighted the importance of other aspects of recommendation, such as transparency, flexibility, and overall user experience. On the basis of these features, they propose MoodPlay, a hybrid music recommendation system with a user-friendly interface that combines content- and mood-based filtering. MoodPlay lets users explore a music collection through latent emotional dimensions and blends user input with predictions made from a prior user profile when making recommendations. Findings from a user study (N=240) that examined four conditions with varying levels of visibility, interaction, and control are discussed.

2.7. A Reliable Face-Expression-Based Music Playlist Generation Algorithm
Anukriti Dureha and colleagues [7] observe that manual playlist segmentation and annotating songs with the user's present state of mind are labour- and time-intensive. Many algorithms have been suggested to automate this procedure, but the existing ones are less precise and require additional equipment (such as EEG sensors), which drives up the cost of the system as a whole. Generating an audio playlist from a participant's facial expressions avoids this time-consuming manual labour. The algorithm put forward in that research aims to cut down on both the system's total cost and its computation time, while also improving accuracy. The facial expression recognition module of the proposed algorithm is tested against user-dependent and user-independent datasets to validate its accuracy.

2.8. Enhancing Music Recommender Systems with Personality Information and Emotional States
Bruce Ferwerda and Markus Schedl [8] proposed the basic study hypothesis that music recommendation can be improved by incorporating personality and emotional states into music choices. They argue that by taking these psychological factors into account, recommendation accuracy may be improved. The work focuses on the relationship between a person's personality and how they use music to regulate their emotional states [9].

It is essential to be able to discern an individual's emotions from their face in order to capture the essential data about that person. Among other things, the captured input may be used to extract information for estimating a person's mood, and songs are then selected using the "feeling" acquired from that input. A playlist appropriate for a particular person's emotional state may thus be produced with less effort than manually sorting songs into numerous categories. Upon scanning and interpreting the data, the Facial Expression Based Music Player produces a playlist that meets the specified criteria. Our proposed method focuses on constructing an emotion-based music player driven by personal emotions. This paper explains how our music player detects human emotions, how other music players currently on the market sense emotions, and how to use our technology for emotion detection to its fullest potential. A brief explanation of playlist generation, emotion categorisation, and the operation of our algorithms is also provided. We used the PyCharm tool for development in this project [10].

The research is divided into two phases:
1. Using Python, create a program that can identify a user's emotion from their facial expression.
2. Play music based on the detected emotion and the user's preferences by connecting the Python code to the web service.


Fig-1. Various face expressions.


3. Proposed methodology
A web camera captures the user's face, and frames are created from the captured video. Using preprocessing, the webcam picture is transformed into a series of Action Units (AUs); the Facial Action Coding System uses combinations of the 64 AUs to characterise every facial emotion. Following feature extraction, the faces' emotions—such as happiness, anger, sadness, and surprise—are categorised. The results are connected with web services, which might be SaaS, IaaS, or PaaS. The music is played based on the emotions that are recognised, and the feelings are communicated [11].

3.1. Fisherface Algorithm
This image-processing approach employs principal component analysis (PCA) to reduce the dimensionality of the face space before obtaining the image features using Fisher's linear discriminant (FLD), also known as linear discriminant analysis (LDA). We adopt this approach specifically because it maximises the separation between classes during the training phase. While the minimal Euclidean distance is used for matching faces, this approach aids in image identification and helps us categorise the facial expressions that indicate user mood [20].

3.2. Haar Cascade Algorithm
The Haar cascade algorithm is a machine learning tool for classifying the various elements of a captured picture; its major use is object detection. The cascade classifier chains several stages built from weak learners. These weak classifiers, combined by boosting, are the most basic kind of classifiers. If a stage assigns a positive label, the process advances to the next stage and ultimately to the step where the outcome is shown. The classifiers recognise images in accordance with labels that are both positive and negative: at each level, a set of positive images is trained against the negative ones. Clearer and more numerous images are favoured since they produce better outcomes.
In this case, the objects in the image are found using the haarcascade_frontalface_default.xml model; the objects within the face are the nose, eyes, ears, and lips [13]. The frontal face is detected using the Haar cascade shipped with OpenCV, which can also recognise the source's characteristics. It works by training on positive images superimposed against negative ones. Positive images contain only the object we want our classifier to detect; negative images are all other images, none of which contain the object we are looking for [14].


[Figure: block diagram — Face Webcam → Preprocessing → Edging → Segmentation → Face Detection → Feature Extraction → web services (SaaS / IaaS / PaaS) → Music System]

Fig-2. Schematic of the Emotion-Based Music Recommendation System's overall architecture.

4. Experiments and results
4.1. Face recognition
By eliminating extraneous noise and other elements, the major goal of the face-identification approach is to recognise the face (or faces) in the picture. The face detection method has the following steps:
1. Image pyramid
2. Histogram of oriented gradients (HOG)
3. Linear classifier
Using an image pyramid with several scales, the sample image is divided into the resulting data; this approach extracts features while lowering noise and other interference. The Gaussian pyramid, also known as the low-pass-filter image pyramid technique, subsamples the frame by decreasing its resolution and smoothing it. To obtain the intended result—a frame with lower resolution and a greater amount of smoothing than the original—the operation must be performed numerous times.

[Figure: flowchart — Face Webcam → Initialising the face detection process → Face detected → Emotion analysed → Stop]

Fig-3. Flowchart for the Face Detection Module.


The term "HOG" refers to a feature descriptor used in image processing that counts the occurrences of gradient orientations in a localised region of an image; it is widely applied to the detection of objects in photographs. The basic goal of this approach is to characterise the face in the image by a collection of distributions of intensity gradients.


Fig-4. Facial recognition method.


The final stage of the face detection procedure is linear classification. We substituted a linear predictor for the SVM in order to shorten the computation required for classification and thus provide a quicker face identification operation [15][16].

4.2. Classification of emotions
Once the face has been properly recognised, a bounding box is overlaid on the image in order to extract the ROI (the face) for further analysis. The 68 facial feature points are then retrieved from the extracted ROI using the "Predictor" function and saved in an array. A PCA reduction step is then used to compress the data from the feature array, removing correlated coordinates of the key points and leaving only the major components. The 68x2 array contains the x- and y-coordinates of each of the 68 points; it is reshaped into a single 136-element column vector. The face landmark extractor "Predictor" is trained on a series of photos with a landmark map for each image [17]. The methodology uses regression trained with a gradient boosting approach to learn how to obtain the facial landmark map from a given face shot using just the intensity values of the pixels indexed at each point. After the PCA reduction procedure, the data are categorised [18]: using a multiclass SVM with a linear kernel, the given data are compared with the stored data to decide which class (emotion) they belong to. If any of the three emotions—anger, fear, or surprise—is detected, a speed-reducing command is sent to reduce the wheelchair's speed in order to keep the user safe [19].

4.3. Music suggestion
Since the input is captured in real time, the webcam is used to capture the video before the frame processing is finished. The processed frame images are classified using a hidden Markov chain. For the purpose of emotion categorisation, all frame and pixel formats from the collected frames are taken into account. Each facial landmark's value is determined and recorded for further use. Most classifiers are successful to a degree of 90-95%, such that they hold up even if the face changes as a result of outside influences.
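The classification pipeline described above—flatten the 68 (x, y) landmarks into a 136-value vector, compress with PCA, then classify with a linear-kernel multiclass SVM—can be sketched with scikit-learn. The landmark data below is synthetic (real input would come from the landmark predictor), and the class shift is only there to make the toy data separable:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 60 faces, each a 68x2 landmark array flattened
# to a 136-element vector, labelled with one of four emotions.
emotions = ["happy", "angry", "sad", "surprise"]
X = rng.normal(size=(60, 136))
y = np.repeat(emotions, 15)
for i in range(4):
    X[i * 15:(i + 1) * 15] += i * 5.0  # shift each class so it is separable

# PCA compresses the 136-dim vectors; the linear-kernel SVC assigns
# each compressed vector to one of the emotion classes.
model = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
model.fit(X, y)

new_face = X[0].reshape(1, -1)     # one flattened landmark vector
print(model.predict(new_face)[0])  # one of the four emotion labels
```

In the real system, X would hold the PCA-ready landmark vectors extracted from the training photos and new_face the vector from the current webcam frame.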


[Figure: flowchart — Face Detected → Segmentation → Perform SVM Classification → Emotion Classification → Stop]

Figure 5. Emotion Classification Module Flow Diagram.


The system still has to recognise the face and the sentiment conveyed. The retrieved values are then used to determine the feelings: the value of the received pixels is compared with the thresholds stored in the code. The values are transmitted from the user to the online service, and the music is played based on the feeling detected. Every song is assigned a certain set of emotions, and the appropriate music plays when the corresponding feeling is conveyed. Happy, angry, sad, and surprised are the four emotions that can be employed [15]. The music is played in line with the feelings that are detected; in other words, when a joyous sensation is recognised, the songs allocated to that particular emotion are played, and likewise for the other emotions [20].

Fig-6. Music selection and emotion recognition.
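The per-emotion song assignment described above amounts to a lookup table keyed by the four detected emotions. A minimal sketch follows, with hypothetical file names standing in for the real library:

```python
import random

# Hypothetical emotion-to-playlist mapping; each detected emotion
# has its own set of allocated songs.
PLAYLISTS = {
    "happy":    ["upbeat_1.mp3", "upbeat_2.mp3"],
    "angry":    ["calming_1.mp3", "calming_2.mp3"],
    "sad":      ["uplifting_1.mp3"],
    "surprise": ["energetic_1.mp3"],
}

def song_for_emotion(emotion):
    """Pick a song from the playlist allocated to the detected emotion."""
    songs = PLAYLISTS.get(emotion)
    if not songs:
        raise ValueError(f"no playlist for emotion: {emotion!r}")
    return random.choice(songs)

print(song_for_emotion("happy"))
```

Raising on an unknown label keeps the player from silently doing nothing when the classifier emits an emotion that has no songs assigned.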

5. Conclusion
In this study, we proposed a model to select music based on facial expressions that indicate emotion: an emotion-based music recommendation system employing facial recognition technology. Music has the ability to relieve tension and all types of emotions, and the potential for constructing emotion-based music recommendation systems has recently grown. In order to recognise emotions and play the appropriate music, the proposed system provides face-based emotion recognition. In today's society, a music player with facial recognition technology is useful for everyone, and this system offers features that can be upgraded in the future. The mechanism for automatic music playback uses facial expression recognition; the RPi camera's programming interface allows for the detection of facial expressions. An alternative approach could be built on feelings such as disgust and fear, which are not recognised by our system; adding them would further assist the automated playing of music.

References
[1] Anagha S. Dhavalikar and R. K. Kulkarni, "Face Detection and Facial Expression Recognition System", 2014 International Conference on Electronics and Communication Systems (ICECS-2014).
[2] Yong-Hwan Lee, Woori Han and Youngseop Kim, "Emotional Recognition from Facial Expression Analysis using Bezier Curve Fitting", 2013 16th International Conference on Network-Based Information Systems.
[3] Arto Lehtiniemi and Jukka Holm, "Using Animated Mood Pictures in Music Recommendation", 2012 16th International Conference on Information Visualisation.
[4] F. Abdat, C. Maaoui and A. Pruski, "Human-computer interaction using emotion recognition from facial expression", 2011 UKSim 5th European Symposium on Computer Modeling and Simulation.
[5] T.-H. Wang and J.-J. J. Lien, "Facial Expression Recognition System Based on Rigid and Non-Rigid Motion Separation and 3D Pose Estimation", J. Pattern Recognition, vol. 42, no. 5, pp. 962-977, 2009.
[6] Renuka R. Londhe and Vrushsen P. Pawar, "Analysis of Facial Expression and Recognition Based On Statistical Approach", International Journal of Soft Computing and Engineering (IJSCE), vol. 2, May 2012.
[7] Anukriti Dureha, "An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions", IJCA, 2014.
[8] Bruce Ferwerda and Markus Schedl, "Enhancing Music Recommender Systems with Personality Information and Emotional States: A Proposal", 2014.
[9] S. Mithen, "The Singing Neanderthals: The Origins of Music, Language, Mind and Body". London, England: Harvard University Press, 2006.
[10] F. Randri, "Emotion-based music recommendation system using a deep reinforcement learning approach," Analytics Vidhya, 26-Jan-2021. [Online]. Available: https://medium.com/analytics-vidhya/emotion-based-music-recommendation-system-using-a-deep-reinforcement-learning-approach-6d23a24d3044.
[11] S. A. Nash, "Charles Darwin on the expression of human emotions," Brain World, 12-Feb-2020. [Online]. Available: https://brainworldmagazine.com/charles-darwin-on-the-appearance-of-human-emotions/.
[12] K. Cherry, "The 6 types of basic emotions and their effect on human behavior," Verywell Mind, 03-May-2018. [Online]. Available: https://www.verywellmind.com/an-overview-of-the-types-of-emotions-4163976.
[13] J. van Wyhe, "Darwin, C. R. 1872. The expression of the emotions in man and animals. London: John Murray. First edition," Darwin Online. [Online]. Available: http://darwin-online.org.uk/content/contentblock?itemID=F1142&basepage=1&hitpage=1&viewtype=text.
[14] "LDA vs. PCA," Towards AI, 26-Jan-2022.
[15] X.-C. Yuan and C.-M. Pun, "Feature extraction and local Zernike moments based geometric invariant watermarking," Multimed. Tools Appl., vol. 72, no. 1, pp. 777-799, 2014.
[16] J. K. Nuamah, Y. Seong, and S. Yi, "Electroencephalography (EEG) classification of cognitive tasks based on task engagement index," in 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), 2017, pp. 1-6.
[17] Gouri Sankar Mishra, Pradeep Kumar Mishra, Parma Nand, Rani Astya, and Amrita, "User Authentication: A Three Level Password Authentication Mechanism", in Journal of Physics: Conference Series 1712, 2020, 012005, doi:10.1088/1742-6596/1712/1/012005.
[18] Rachna Jain, Abhishek Sharma, Gouri Sankar Mishra, Parma Nand, and Sudeshna Chakraborty,


"Named Entity Recognition in English Text", in Journal of Physics: Conference Series 1712, 2020, 012013, doi:10.1088/1742-6596/1712/1/012013.
[19] G. S. Mishra, P. Nand and Pooja, "English text to Indian Sign Language Machine Translation: A Rule Based Approach", International Journal of Innovative Technology and Exploring Engineering (IJITEE), 2019, 8(10).
[20] Gouri Sankar Mishra, K. K. Ravulakollu and A. K. Sahoo, "Word based statistical machine translation from English text to Indian Sign Language", Journal of Engineering and Applied Sciences, 12(2), 2017, pp. 481-488.
[21] S. Tyagi and Gouri Sankar Mishra, "POS Tagging Using Support Vector Machines and Neural Classifier", International Journal of Computer Science And Technology, vol. 7, issue 2, 2016, ISSN: 2229-4333.
[22] Pradeep Kumar Mishra, Ali Imam Abidi and Gouri Sankar Mishra, "Improved Methodology For Personality Assessment Using Handwritten Documents", Journal of Positive School Psychology, pp. 3263-3273, 2022.
[23] Pradeep Kumar Mishra, Ali Imam Abidi and Gouri Sankar Mishra, "Baseline and its Slant Based Personality Assessment from Handwritten Documents", International Journal of Mechanical Engineering, pp. 2947-2953, 2022.
[24] Samiksha Kumari, Karan Kumar Singh, Gouri Sankar Mishra and Parma Nand, "A Comparative Study Of Security Issues And Attacks On Underwater Sensor Network", accepted, to be published in Lecture Notes in Networks and Systems, 2021. https://www.springer.com/series/15179
[25] S. Parhar, A. Roy, K. Kumar, A. Kumar and G. S. Mishra, "Lung Field Segmentation of X-ray Images by Normalized Gradient Gaussian Filter and Snake Segmentation," 2021 2nd International Conference for Emerging Technology (INCET), Belagavi, India, 2021, pp. 1-5, doi: 10.1109/INCET51464.2021.9456146.
[26] Syed Faraz Ali, Gouri Sankar Mishra and Ashok Kumar Sahoo, "Domain bounded English to Indian sign language translation model", International Journal of Computer Science and Informatics, 2013, pp. 41-45.
