Paper 7
NQ109290
Prince Kumar et al. / Face Expression and Emotion Detection by using Machine Learning and Music Recommendation
1,4,5,6 Department of Computer Science & Application, Sharda School of Engineering & Technology, Sharda University, Greater Noida, UP, India
2,3 Department of Computer Science & Engineering, Sharda School of Engineering & Technology, Sharda University, Greater Noida, UP, India
ABSTRACT
Most of us listen to music to feel emotions; music can lift a negative mood. Existing music systems let you play chosen tracks and suggest songs in categories based on your own interests or the tastes of other users. Because such systems are not designed around the emotions the music elicits, listeners cannot depend on them fully and often prefer not to rely on station or online playback. In this work, we present a sentiment-based music system. Built on a Raspberry Pi with a microphone and a speaker, it plays tunes that match the ambience of the room. The emotion of the recorded background sound is assessed as a machine-learning classification problem, for which we use a simple (naive) Bayesian classifier. The song's Beats per Minute (BPM) tempo is then used to identify songs with comparable emotional content.
Keywords: face extraction, music suggestion, emotion recognition, real-time image capture.
DOI Number: 10.48047/NQ.2022.20.20.NQ109290 NeuroQuantology2022;20(20):2945-2954
eISSN1303-5150 www.neuroquantology.com
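The two steps described in the abstract (naive Bayes classification of the room's mood, then BPM-based song matching) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the audio features, mood labels, tempo targets, and song names are all fabricated stand-ins.

```python
# Sketch of the abstract's pipeline: (1) classify the ambient mood from
# audio features with a naive Bayes classifier, (2) pick the song whose
# BPM is closest to that mood's target tempo. All data here is made up.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: rows are [loudness, zero-crossing rate]
# extracted from background recordings; labels are mood classes.
X_train = np.array([[0.2, 0.10], [0.3, 0.15], [0.8, 0.70], [0.9, 0.80]])
y_train = np.array(["calm", "calm", "lively", "lively"])

clf = GaussianNB().fit(X_train, y_train)
mood = clf.predict(np.array([[0.85, 0.75]]))[0]  # features near "lively"

# Map each mood to a target tempo and rank a toy catalogue by BPM distance.
target_bpm = {"calm": 70, "lively": 130}
catalogue = {"Song A": 68, "Song B": 125, "Song C": 90}
best = min(catalogue, key=lambda s: abs(catalogue[s] - target_bpm[mood]))
print(mood, best)
```

The same BPM-distance ranking would apply to any catalogue whose tracks carry tempo metadata; only the mood classifier needs training data.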
dependent on how the music is going to be played. For this, we used tools such as OpenCV, Eel, and NumPy. This method focuses mostly on music suggestion, which has evolved into an essential tool for reducing stress in modern society. Since facial expressions frequently convey emotion, we use faces as our major source of information for identifying emotion. We then provide music that can alter the user's mindset in line with that user's mood.

2. Literature review
2.1. System for Detecting Faces and Recognizing Expressions on the Face
Anagha S. Dhavalikar [1] suggested a technique for automatically recognising facial expressions. The system consists of three phases: face detection, feature extraction, and expression recognition. An RGB color model is used for face detection, with lighting compensation during face acquisition and morphological operations to retain the desired facial regions, such as the lips and eyes. The system also uses the Active Appearance Model (AAM) technique to extract face features. In this approach, the model's facial landmarks, including the lips, brows, and eyes, are located, and a file holding details about the identified model points is produced. The method also detects faces and uses an input expression to determine how the AAM model should change.

2.2. Bezier Curve Fitting for Emotion Identification from Facial Expression Analysis
Bezier curve fitting is the basis of the approach provided by Youngseop Kim, Woori Han, and Yong-Hwan Lee [2]. The first stage of this system determines the facial expression; to validate certain characteristics in the region of interest, the second step identifies and analyzes the facial landmarks in the original input photo. To determine the position of the lips and eyes as well as the angle of the face, feature maps are employed after an initial face-identification step that works on color still images, considers skin-color pixels, and applies spatial filtering. In the process of applying a Bezier curve to the eye and mouth, the approach first extracts the targeted area and then the feature map's points. To comprehend emotion, the method trains on and measures the Hausdorff distance between the input face picture and the database image using Bezier curves.

2.3. Using Animated Mood Pictures to Suggest Music
A technique for recommending music using animated mood pictures was proposed by Arto Lehtiniemi and Jukka Holm [3]. Using a library of pictures, the user of this system obtains music recommendations based on the genre associated with each image. The Nokia Research Center created this technique for making music recommendations. Audio signal processing and textual meta tags are used in this system to describe the genre.

2.4. Utilizing Emotion Identification from Facial Expressions in Human-Computer Interaction
F. Abdat, C. Maaoui, and A. Pruski [4] suggested a completely automatic facial expression recognition system based on three steps: face detection, facial feature extraction, and classification of facial expressions. The methodology couples the Shi and Tomasi method with an anthropometric model to identify the facial feature points, uses a set of 21 distances measured relative to a neutral face to characterize facial features, and classifies the data using an SVM.

2.5. Emotion-Based Music Recommendation by Association Discovery from Film Music
Fang-Fei Kuo, Suh-Yin Lee, et al. [5] observe that the growth of music recommendation for consumers is a result of the spread of digital music. Consumers' stated preferences are the basis of current recommendation methods; nonetheless, there are occasions when selecting music based on mood is necessary. Using association discovery from film music, they provide an approach for recommending music based on emotion. The work examines musical feature extraction and modifies the affinity graph in order to uncover associations between emotions and musical qualities. According to the experimental findings, the suggested technique averages 85% accuracy.

2.6. Interactive Music Search and Recommendation Based on Mood
According to Ivana Andjelkovic, John O'Donovan, et al. [6], on improving prediction and ranking, recommender system research has been
heavily concentrated. The importance of other aspects of recommendation, such as accessibility, flexibility, and overall user experience, has, however, been highlighted by recent studies. On the basis of these features, the authors propose MoodPlay, a hybrid music recommendation system with a user-friendly interface that combines content- and mood-based filtering. They walk users through searching a music collection by latent emotional dimensions and show how to blend user input with predictions made from a prior user profile when making recommendations. Findings from a user study (N = 240) that looked at four conditions with various levels of visibility, engagement, and control are discussed.

2.7. An Accurate Algorithm for Music Playlist Generation Based on Facial Expressions
Anukriti Dureha et al. [7] observe that manually segregating playlists and annotating music according to the user's present state of mind is labor- and time-intensive. Many algorithms have been suggested to automate this procedure. The present algorithms, however, are less precise and require extra equipment (such as EEG sensors), which drives up the cost of the system as a whole. Generating an audio playlist from a participant's facial expressions saves the time and labor of manual sorting. The algorithm put forward in that research aims to cut down on both the system's total cost and its computation time, and it also seeks to improve accuracy. The facial-expression recognition module of the proposed algorithm is tested against user-dependent and user-independent datasets to validate its accuracy.

2.8. Enhancing Music Recommender Systems with Personality Information and Emotional States
Bruce Ferwerda and Markus Schedl [8] suggested basic research hypotheses for improving music recommendations by including personality and emotional states in music choices: by taking these psychological elements into account, the accuracy of recommendations may be improved. Their proposal focuses on the relationship between a person's personality and how they use music to regulate their emotional states [9].

It is essential to be able to discern an individual's emotions from their face in order to capture the essential data from a person's face. Among other things, this input may be used to extract information for estimating the person's mood, and songs are then chosen using the "feeling" acquired from that input. A playlist appropriate for a particular person's emotional qualities may be made with far less effort than manually sorting songs into numerous categories: upon scanning and understanding the data, the Facial Expression Based Music Player produces a playlist that meets the specified criteria. Our proposed method focuses on constructing an emotion-based music player driven by personal emotions. It explains how our music player detects human emotions, how other music players currently on the market sense emotions, and how to use our technology for emotion detection to its fullest potential. A brief explanation of playlist generation, emotion categorization, and the operation of our algorithms is also provided. We used the PyCharm tool for development in this project [10].

The results of the research are divided into two phases:
1. Using Python, create a program that can identify a user's emotion from their expression.
2. Connect the Python code to the web service so that music is played based on the user's detected mood.
[Figure: system pipeline: webcam face capture → preprocessing → edging → segmentation → face detection → feature extraction → music system]
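The staged flow in the figure above can be written as a chain of functions, one per box. The stage names follow the figure; the function bodies are deliberately dummy stand-ins so only the data flow is illustrated.

```python
# Sketch of the figure's pipeline: each stage is a stand-in function, so
# the flow (webcam frame -> music choice) is explicit. Bodies are dummies.
def preprocess(frame):          # e.g. grayscale conversion, resizing
    return {"frame": frame, "prepared": True}

def edge_and_segment(data):     # edging + segmentation of the face region
    return {**data, "region": "face"}

def detect_face(data):          # locate the face within the segmented region
    return {**data, "face_found": data["region"] == "face"}

def extract_features(data):     # landmark / feature extraction
    return {**data, "features": [0.1, 0.4]} if data["face_found"] else data

def music_system(data):         # map extracted features to a track decision
    return "play_track" if data.get("features") else "idle"

def pipeline(frame):
    return music_system(
        extract_features(detect_face(edge_and_segment(preprocess(frame)))))

print(pipeline("webcam_frame"))   # -> play_track
```

Keeping each stage as a separate function mirrors the figure and makes it easy to swap any single box (say, a different segmentation step) without touching the rest.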
[Figure: run loop: initialising the process → face detected → emotion analysed → stop]
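The control flow in the figure above (initialise, wait for a face, analyse the emotion, stop) can be sketched as a small loop. The capture and analysis calls below are hypothetical stubs, not the system's real API.

```python
# Sketch of the run loop in the figure: initialise, wait until a face is
# detected, analyse the emotion once, then stop. All calls are stand-ins.
def get_frame(frames):            # stand-in for reading the camera
    return frames.pop(0) if frames else None

def has_face(frame):              # stand-in for the face-detection stage
    return frame == "face"

def analyse_emotion(frame):       # stand-in for the analysis stage
    return "happy"

def run(frames):
    emotion = None                # "initialising the process"
    while (frame := get_frame(frames)) is not None:
        if has_face(frame):       # "face detected"
            emotion = analyse_emotion(frame)  # "emotion analysed"
            break                 # "stop"
    return emotion

print(run(["empty", "empty", "face"]))   # -> happy
```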
[Figure: classification flow: face detected → segmentation → SVM classification → emotion classification → stop]
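The SVM classification stage shown above can be sketched with scikit-learn. The distance features and labels below are fabricated stand-ins for the landmark-distance vectors of the kind described in section 2.4, so this shows only the shape of the step, not trained behaviour.

```python
# Sketch of the SVM stage: train on vectors of facial distances (fabricated
# here) measured relative to a neutral face, then predict an emotion label.
import numpy as np
from sklearn.svm import SVC

# Toy training set: each row imitates a short vector of landmark distances.
X = np.array([[0.90, 0.20, 0.10], [1.00, 0.25, 0.10],   # "happy"-like geometry
              [0.20, 0.90, 0.80], [0.15, 1.00, 0.90]])  # "sad"-like geometry
y = np.array(["happy", "happy", "sad", "sad"])

clf = SVC(kernel="linear").fit(X, y)

# A new face whose distance vector sits near the "happy" cluster.
pred = clf.predict(np.array([[0.95, 0.20, 0.12]]))[0]
print(pred)
```

A real system would use the full distance vector per face (e.g. the 21 distances of [4]) and far more training samples; the classifier call itself is unchanged.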
relieve tension and all types of emotions. The potential for constructing emotion-based music recommendation systems has recently increased. In order to recognise emotions and play the appropriate music, the recommended system provides face-based emotion recognition. In today's society, a music player with facial-recognition technology is useful for everyone. The system has been built so that further features can be added in the future. The mechanism for automatic music playback uses facial-expression recognition: the RPi camera's programming interface allows facial expressions to be detected. An alternative approach could be built on feelings, such as revulsion and terror, that are not recognised by our current system; introducing those feelings would further assist the automated playing of music.

References
[1] Anagha S. Dhavalikar and R. K. Kulkarni, "Face Detection and Facial Expression Recognition System", 2014 International Conference on Electronics and Communication Systems (ICECS 2014).
[2] Yong-Hwan Lee, Woori Han and Youngseop Kim, "Emotional Recognition from Facial Expression Analysis using Bezier Curve Fitting", 2013 16th International Conference on Network-Based Information Systems.
[3] Arto Lehtiniemi and Jukka Holm, "Using Animated Mood Pictures in Music Recommendation", 2012 16th International Conference on Information Visualisation.
[4] F. Abdat, C. Maaoui and A. Pruski, "Human-computer interaction using emotion recognition from facial expression", 2011 UKSim 5th European Symposium on Computer
[5] T.-H. Wang and J.-J. J. Lien, "Facial Expression Recognition System Based on Rigid and Non-Rigid Motion Separation and 3D Pose Estimation", J. Pattern Recognition, vol. 42, no. 5, pp. 962-977, 2009.
[6] Renuka R. Londhe and V. P. Pawar, "Analysis of Facial Expression and Recognition Based On Statistical Approach", International Journal of Soft Computing and Engineering (IJSCE), vol. 2, May 2012.
[7] Anukriti Dureha, "An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions", IJCA, 2014.
[8] Bruce Ferwerda and Markus Schedl, "Enhancing Music Recommender Systems with Personality Information and Emotional States: A Proposal", 2014.
[9] S. Mithen, The Singing Neanderthals: The Origins of Music, Language, Mind and Body. London, England: Harvard University Press, 2006.
[10] F. Randri, "Emotion-based music recommendation system using a deep reinforcement learning approach", Analytics Vidhya, 26-Jan-2021. [Online]. Available: https://medium.com/analytics-vidhya/emotion-based-music-recommendation-system-using-a-deep-reinforcement-learning-approach-6d23a24d3044.
[11] S. A. Nash, "Charles Darwin on the expression of human emotions", Brain World, 12-Feb-2020. [Online]. Available: https://brainworldmagazine.com/charles-darwin-on-the-appearance-of-human-emotions/.
[12] K. Cherry, "The 6 types of basic emotions and their effect on human behavior", Verywell Mind, 03-May-2018. [Online]. Available: https://www.verywellmind.com/an-overview-of-the-types-of-emotions-4163976.
[13] J. van Wyhe (ed.), "Darwin, C. R. 1872. The expression of the emotions in man and animals. London: John Murray. First edition", darwin-online.org.uk. [Online]. Available: http://darwin-online.org.uk/content/contentblock?itemID=F1142&basepage=1&hitpage=1&viewtype=text.
[14] "LDA vs. PCA", Towards AI, 26-Jan-2022.
[15] X.-C. Yuan and C.-M. Pun, "Feature extraction and local Zernike moments based geometric invariant watermarking", Multimed. Tools Appl., vol. 72, no. 1, pp. 777-799, 2014.
[16] J. K. Nuamah, Y. Seong, and S. Yi, "Electroencephalography (EEG) classification of cognitive tasks based on task engagement index", in 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), 2017, pp. 1-6.
[17] Gouri Sankar Mishra, Pradeep Kumar Mishra, Parma Nand, Rani Astya, and Amrita, "User Authentication: A Three Level Password Authentication Mechanism", Journal of Physics: Conference Series 1712, 2020, 012005, doi:10.1088/1742-6596/1712/1/012005.
[18] Rachna Jain, Abhishek Sharma, Gouri Sankar Mishra, Parma Nand, and Sudeshna Chakraborty,