Music Recommendation Based On Facial Expressions and Mood Detection Using CNN
Authorized licensed use limited to: Zhejiang University. Downloaded on November 19,2023 at 13:23:57 UTC from IEEE Xplore. Restrictions apply.
2023 International Conference on Computer Communication and Informatics (ICCCI ), Jan. 23 – 25, 2023, Coimbatore, INDIA
one group, the similarity of features across all the groups can be avoided. Eventually, different features can be extracted easily by grouping similar features together so that the features can be differentiated. Adiyansjah, Alexander A. S. Gunawan and Derwin Suhartono [4] proposed convolutional recurrent neural networks for music recommendation; the task is to create an automatic playlist by recognizing the emotion of a human in a particular situation. A music recommendation system can be divided into three parts: users, items and a user-item matching algorithm. Bu et al. used social media information to give more accurate music recommendations. Bogdanov et al. [5] exploited genre metadata to increase listener satisfaction based on collaborative filtering and content-based filtering. Collaborative filtering works on a sparse assessment matrix, since users may listen to only some songs or some libraries; as a result, most of the assessments are not determined. Content-based filtering is a two-stage approach.

III. METHODOLOGY

The music recommendation is based on facial expression and implements mood detection. It consists of two parts: 1. Facial Expression Detection, 2. Music Recommendation.

The dataset is classified as per the person's mood and contains songs according to each mood. The facial expression data was taken from the Cohn-Kanade (CK) database, covering various emotions. The data has been stored in a cloud storage module and is arranged as per the user's request. In this process, AWS and Google Cloud were considered but rejected because they are costly and offer limited storage; after considerable research, Firebase was chosen as the back-end server. It can be used in an Android app as well. The music is played according to the mood detected from the picture, using the playlist recommended for that mood. When the surprise emotion is detected, the songs assigned to that emotion are played, and songs are played for the other emotions in the same way, just by detecting the facial expression and emotion of the user. The result is a recommended playlist for the user. We used a library called pygame for playing the audio; functions of this library are used to work with the music player. For developing the GUI, Tkinter can be used. Figure 1 shows the workflow of the proposed system.

C. Convolutional Neural Network

CNN is one of the well-known neural network algorithms, and it detects face emotion in an efficient manner [6]. It helps to predict moods based on the features; even though it is not good at learning abstract features, it is good at extracting features such as edges, colour and texture.

Figure 1: Flow Diagram of the System (Start → camera → image recognition → face detection, looping back on "No")

IV. RESULT AND DISCUSSION

Across the whole world many people have similar faces, and it is very hard to detect human emotion efficiently. Using facial expressions, the emotion can be detected up to a certain level. This can be used in Android apps and also in computer systems through suitable applications. The camera captures the image, recognizes the face in it and then detects the emotion. Various faces with emotions are shown in Figure 2.
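Section C credits the CNN with extracting features such as edges, colour and texture; the operation behind this is a 2-D convolution. A minimal NumPy illustration follows, where the Sobel-style kernel is a standard edge filter, not something specified in the paper:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image with a vertical edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# Sobel-style vertical-edge kernel.
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

# Output columns that straddle the brightness edge respond strongly;
# flat regions give zero.
edges = conv2d(img, sobel_x)
```

A trained CNN learns such kernels from data rather than using hand-written ones, but the sliding-window computation is the same.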
Performance metrics:

From Figure 4 it can be seen that our CNN method is more suitable for detecting the mood of a person, with 92% validation accuracy, whereas SVM yields 65% and ELM yields 64%. With the OpenCV implementation we showed that the emotion can be detected, and the music playlist categorized based on the mood of the captured images, in an efficient manner with CNN.
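The 92%, 65% and 64% figures above are validation accuracy, i.e. the fraction of held-out face images whose mood is predicted correctly across the seven classes. A small sketch of the metric; the emotion labels follow the paper, the toy predictions are made up:

```python
# The seven mood classes used in the paper.
EMOTIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def accuracy(y_true, y_pred):
    """Fraction of images whose predicted mood matches the ground truth."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy example: 4 of 5 predictions correct -> accuracy 0.8.
truth = ["happy", "sad", "anger", "neutral", "fear"]
pred = ["happy", "sad", "anger", "neutral", "sad"]
```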
Figure 3: Real facial expressions and their recommended music playlists

The CNN model detects the facial expressions of various faces, as shown in Figure 3. Based on the expression and mood, the recommended music list is displayed; from this list the playlist can be categorized to find music trends for people of all ages.

V. CONCLUSION

Human emotions are very difficult to categorize, and detecting the mood based on facial expressions and emotions is definitely a challenge, but with the help of neural networks and machine learning this model can be trained to get maximum accuracy. Expressions differ from person to person, so it is difficult to detect a person's emotion; the model has to be trained with a larger number of images, and it becomes more accurate with larger image datasets. The major moods, such as anger, disgust, fear, happy, sad, surprise and neutral, can be detected with up to 92% accuracy. Face recognition is the primary step of the project. Without the face
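Since face recognition is the primary step, a hedged sketch of how it is commonly done with OpenCV (the paper mentions an OpenCV implementation; the stock Haar cascade file and detection parameters here are standard defaults, assumed rather than taken from the paper):

```python
def largest_face(boxes):
    """Pick the largest detected face (x, y, w, h) -- the presumed user."""
    return max(boxes, key=lambda b: b[2] * b[3]) if boxes else None

def detect_faces(image_path):
    """Face detection with OpenCV's bundled frontal-face Haar cascade."""
    import cv2
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    return list(cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5))
```

The cropped face region returned here would then be resized and fed to the CNN for mood classification.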
VI. REFERENCES

[1] Anima Majumder, Laxmidhar Behera, Venkatesh K. Subramanian, "Emotion recognition from geometric facial features using self-organizing map", Pattern Recognition, Vol. 47, No. 3, pp. 1282-1293, 2014.

[6] Rabia Qayyum, Vishwesh Akre, Talha Hafeez, Sheeraz Ahmed, Asif Nawaz, Hasan Ali Khattak, Pankaj Mohindru, Doulat Khan, Khalil Ur Rahman, "Android based Emotion Detection Using Convolutional Neural Networks", 2021 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), 2021.