
International Research Journal on Advanced Engineering and Management
e-ISSN: 2584-2854 | Volume: 02 | Issue: 04 | April 2024 | Page No: 1149-1156
https://goldncloudpublications.com
https://doi.org/10.47392/IRJAEM.2024.0152

Emotion-Based Music Player


Netravathi K S¹, Bibi Haleema N², Priyanka³, Madhushree⁴, Priyanka R V⁵
¹,²,³,⁴Computer Science and Engineering, The National Institute of Engineering, Mysuru, Karnataka, India.
⁵Assistant Professor, Computer Science and Engineering, The National Institute of Engineering, Mysuru, Karnataka, India.
Emails: [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract
Songs have always been a popular medium for communicating and understanding human emotions, and reliable emotion-based categorization systems can be quite helpful in understanding their relevance. However, the outcomes of research on emotion-based music classification have not been the greatest. Here, we introduce EMP, a cross-platform emotional music player that plays songs in accordance with the user's feelings at the time. EMP provides intelligent mood-based music playback by incorporating emotion-context reasoning abilities into our adaptive music engine. EMP revolutionizes how users interact with music, fostering deeper connections between emotions and musical experiences. Our music player is composed of three modules: the emotion module, the classification module, and the queue-based module. The Emotion Module analyses a picture of the user's face and uses the VGG16 algorithm to detect their mood with a precision exceeding 95%. The Music Classification Module achieves an outstanding result by utilizing aural criteria to classify music into seven different mood groups. The Queue Module plays the songs directly from the mapped folders in the order they are stored, ensuring alignment with the user's mood and preferences.
Keywords: VGG16 Algorithm, Emotion Context, Intelligent, EMP.

1. Introduction
The world of music has always been an integral element of our lives, and it has the power to evoke emotions and feelings that are unique to everyone. In recent years, the field of music technology has seen tremendous growth, and there have been numerous advancements in the utilization of machine learning algorithms to develop intelligent music systems. One such system is the emotion-based music player, which uses VGG16 to detect the user's emotion and then plays a song suited to the identified emotional state. In this project, we explore the development of an emotion-based music player in Python that uses VGG16 for emotion detection. The system is designed to make use of a pre-trained VGG16 to analyze the facial features of the user and predict the emotion. The predicted emotion is then utilized to select and play the most appropriate songs from a pre-defined playlist associated with that emotional state. The main goal of this project is to provide a personalized and emotionally engaging music experience for the user. The potential applications of this system extend far beyond music players and could be incorporated into a range of industries, including healthcare and entertainment. Human beings exhibit diverse music preferences tailored to their varying emotional states and activities. Whether engaged in physical exertion or seeking relaxation, individuals seek out specific genres and rhythms to suit their needs. It is within this context that the concept of an emotion-based music player system emerges, offering tailored musical experiences across a spectrum of scenarios including physical labor, stress management, music therapy, and academic endeavors. We introduce an emotion-based music player system tailored to address the
intricate emotional preferences of users, playing music aligned with their emotional states [1].
1.1 Related Work
In this study, researchers propose a novel approach to music recommendation based on emotions. They leverage deep learning models to analyze user preferences and emotional responses to music, enabling more personalized recommendations. By integrating emotion recognition techniques, the system can accurately capture the user's mood and tailor recommendations accordingly. The paper leverages CNNs, which possess the capability to autonomously discern pertinent features from images and eliminate the need for manual feature crafting [2]. The research in [3] introduces a system that identifies users' emotional states and recommends music tracks accordingly. By analyzing factors like tempo, pitch, and lyric sentiment, the system tailors recommendations to match the user's current mood. Through empirical evaluation, the study showcases the effectiveness of the proposed approach in enhancing user experience and satisfaction with music recommendation services, and it underscores the importance of incorporating emotional cues into recommendation systems to provide more personalized and engaging user experiences on music streaming platforms. Another work introduced a dynamic framework for music recommendations grounded in human emotions. By training song selections for distinct emotional states derived from individual listening patterns, the researchers established a personalized approach to music curation. Employing a fusion of feature extraction methodologies and machine learning algorithms, the system adeptly discerns the emotional nuances of human faces depicted in input images. Once the mood is ascertained, the system plays music tailored to the identified emotional state, thereby enhancing user engagement and satisfaction. The paper [4] proposes an emotion-based music player system utilizing facial recognition to detect the user's emotions, achieving high accuracy with SVM classification aided by PCA and a polynomial kernel. It effectively integrates Haar features and PCA for dimensionality reduction and employs SVM classification with polynomial kernels for high-accuracy emotion prediction. Real-time prediction involves considering 20 samples of the user's current emotion, enabling seamless music selection based on predominant emotional states. The research in [5] utilizes deep learning mechanisms, particularly focusing on facial expression recognition. By analyzing facial traits such as expressions, color, posture, and orientation, the system automatically creates music playlists in consideration of a person's real-time mental state. Two classifiers, CNN and SVM, are employed for emotion detection together with Haar Cascade face detection, with comparative studies conducted on trained datasets. The model comprises face discovery and facial component extraction components, enabling the system to identify emotions. Kundeti Naga Prasanthi et al. [6] proposed an audio player which involves Haar cascade classification for face segmentation, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for feature extraction, and Euclidean distance calculation for emotion classification. The system aims to provide a more accurate and efficient method of selecting music tailored to the user's emotional state. The paper [7] proposes a 'smart music player' system employing artificial intelligence (AI) and facial expression recognition to recommend music based on the user's mood. It employs convolutional neural networks (CNNs) for facial expression detection and analysis, categorizing emotions into seven groups: happy, sad, neutral, surprise, fear, disgust, and angry. The system's architecture incorporates training deep neural networks to recognize facial features and recommend music tracks accordingly. It uses the Streamlit framework for the user interface and connects to the Spotify API for song recommendations, achieving 76% accuracy in emotion recognition. The system in [8] utilizes technologies such as React JS, Node JS, and Firebase for the frontend and backend, leveraging Support Vector Machines (SVM) and OpenCV for facial recognition. Through algorithmic design, the system follows a step-by-step process from image upload to emotion detection to song
selection. The user interface is intuitive, allowing users to easily upload images, detect emotions, and select songs. By providing a user manual, the system ensures seamless user interaction.
1.2 Existing System
While the concept of generating a playlist of songs in accordance with facial expressions using Convolutional Neural Network (CNN) algorithms [9] seems innovative, it comes with several drawbacks. Firstly, relying solely on facial gestures to determine emotions may not always accurately reflect the user's true mood. Additionally, CNN algorithms for emotion detection may not always be reliable or consistent. They can be prone to errors, especially in scenarios with fluctuating lighting conditions, facial angles, or cultural differences in facial expressions. This could lead to misinterpretations of the user's emotions, resulting in inappropriate song recommendations [10]. Furthermore, the automated generation of playlists based on detected emotions may lack the personal touch and customization that users desire. Music preferences are highly subjective and influenced by individual tastes, memories, and associations. Relying solely on facial expressions to curate playlists may overlook these nuances, resulting in a generic and potentially unsatisfying listening experience for the user. Additionally, there are privacy considerations associated with using facial recognition technology in this manner. Users may be uncomfortable with their emotions being continuously monitored and analyzed, provoking concerns about data security and consent. Overall, while the idea of leveraging facial expressions to tailor music playlists is intriguing, the drawbacks related to accuracy, personalization, and privacy must be carefully considered and addressed for such a system to be truly effective and user-friendly.
1.3 Proposed System
Figure 1 displays the proposed application's system overview. The program will employ face detection to identify the user's emotion and assess the user's current mood before playing music from a music folder that was manually classified while the application was being created.

Figure 1 System Architecture

1.4 Dataset Collection
We collected an emotion dataset from reputable sources such as Kaggle, a popular platform for hosting datasets and machine learning competitions. The dataset comprises a diverse range of images depicting facial expressions representing various emotions, including happiness, sadness, anger, surprise, fear, disgust, and neutrality (Table 1). Every image is tagged with the corresponding emotion category, providing valuable ground-truth annotations for training and evaluating our emotion recognition model [11]. To ensure the dataset's quality and diversity, we conducted thorough screening and selection processes, prioritizing datasets with high-resolution images, balanced class distributions, and annotations provided by expert annotators or crowdsourcing platforms. Additionally, we verified the credibility and licensing of the datasets to comply with ethical and legal considerations regarding data usage. The collected dataset serves as a critical component in training and validating our emotion recognition model based on deep learning techniques. By leveraging this rich dataset, we aim to enhance the accuracy and robustness of our model in recognizing facial expressions across different individuals, demographics, and environmental conditions. This dataset acquisition process aligns with best practices in machine learning research, ensuring transparency, reproducibility, and ethical data handling throughout the project lifecycle.
Table 1 Collected Datasets

Class      Dataset Counts
Happy      7164
Sad        4938
Neutral    4982
Angry      3993
Fear       4103
Disgust    436
Surprise   3205
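These per-class folders map naturally onto Keras's directory-based loaders. As an illustrative sketch only (the paper does not describe its loading pipeline; the dataset/ path and the 80/20 split below are our assumptions), the images in Table 1 could be loaded as:

import tensorflow as tf

# Assumed layout: dataset/<ClassName>/*.jpg, one sub-folder per class in Table 1
train_ds = tf.keras.utils.image_dataset_from_directory(
    'dataset', validation_split=0.2, subset='training', seed=42,
    image_size=(224, 224),        # resize to the VGG16 input resolution
    batch_size=32, label_mode='categorical')
val_ds = tf.keras.utils.image_dataset_from_directory(
    'dataset', validation_split=0.2, subset='validation', seed=42,
    image_size=(224, 224), batch_size=32, label_mode='categorical')

Note the imbalance visible in Table 1 (Disgust has 436 images against 7164 for Happy); in practice, class weighting or augmentation would likely be needed.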
2. Method
2.1. Collect and Preprocess the Dataset
Collecting and pre-processing the dataset is an important step in developing an emotion-based music player that uses VGG16 for detecting emotions in live video stream input. Additionally, the model can be trained using live video stream data, which can be collected from various sources such as webcams. Before the collected data is used, it needs to be pre-processed to remove any noise or disturbances that may inhibit the emotion recognition process. For example, video data can be pre-processed using techniques such as image resizing, normalization, and feature extraction from individual frames.
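As a minimal sketch of such per-frame preprocessing (assuming OpenCV's default BGR frames and the 224x224 RGB input VGG16 expects; the helper name is ours):

import cv2
import numpy as np
from tensorflow.keras.applications.vgg16 import preprocess_input

def preprocess_frame(frame):
    resized = cv2.resize(frame, (224, 224))         # match VGG16's input size
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)  # OpenCV captures in BGR order
    # Apply the same channel normalization VGG16 was trained with
    return preprocess_input(np.expand_dims(rgb.astype('float32'), axis=0))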
2.2. Build the VGG16 Model
The VGG16 model for the emotion-based music player will be built using the Keras deep learning library in Python. The model will consist of multiple convolutional layers (Figure 2) with ReLU activation, followed by max-pooling layers to reduce dimensionality. The output will then be flattened and passed through fully connected layers with dropout regularization to prevent overfitting [12].
Pseudo Code:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout

# Load the pre-trained VGG16 model without the top (fully connected) layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the pre-trained layers
for layer in base_model.layers:
    layer.trainable = False

# Create a new model and add the VGG16 base
model = Sequential()
model.add(base_model)

# Add additional layers for recognizing emotional state
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))  # 7 classes for the seven emotions

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model with your dataset
# (assuming X_train, y_train and X_val, y_val for training and validation)
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10, batch_size=32)

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)

This pseudo code assumes you have preprocessed the data to fit the input shape of the VGG16 model (224x224x3). Replace X_train, y_train, X_val, y_val, X_test, and y_test with your actual training, validation, and test data.

Figure 2 VGG16

2.3. Stream Video Input
Streaming video input is a fundamental aspect of the emotion-based music player that uses VGG16 for detecting emotions. The system requires a real-time video input to analyze the emotions of the person in the video stream and then selects music based on the detected emotion [13]. To achieve this, the system uses a video stream input from a webcam, which captures the live video feed of the user. The video stream is then processed using
OpenCV in Python to extract the features required for emotion detection. OpenCV offers a range of preprocessing functionalities, including standardizing, scaling, and noise reduction, all of which contribute to enhancing the precision of the VGG16 model. The obtained features are then fed into the VGG16 model to estimate the user's emotional state accurately [14]. To ensure a smooth streaming experience, the system also utilizes a buffer to store the video input. The buffer absorbs any latency or lag that might occur during the streaming process, thereby ensuring that the emotion detection and music selection process is not affected. Overall, the use of real-time video stream input is essential for the emotion-based music player's functioning and ensures that the music selection accurately reflects the user's emotional state.
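A minimal capture-and-predict loop along these lines might look as follows; it assumes the trained model from Section 2.2, the preprocess_frame helper sketched earlier, and alphabetically ordered class labels (all three are assumptions, not the authors' stated implementation):

import cv2
import numpy as np

EMOTIONS = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']

cap = cv2.VideoCapture(0)                      # open the default webcam
try:
    while True:
        ok, frame = cap.read()                 # grab one frame from the live stream
        if not ok:
            break
        probs = model.predict(preprocess_frame(frame), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)       # overlay the detected emotion
        cv2.imshow('EMP', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()

Averaging predictions over several consecutive frames, as in the 20-sample scheme of [4], would smooth out transient misclassifications.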
2.4. Play Music Based on Emotion
After detecting the emotions from the video input stream, the next step is to play music that matches the detected emotions. The emotion-based music player can be combined with the PyVLC media player to play music in real time according to the detected emotions [15]. PyVLC is a powerful media player library in Python that can play various types of media files and supports different video and audio codecs. By integrating the emotion-based music player with PyVLC, we can easily play the appropriate music file based on the emotions detected from the video input stream. For instance, if the model detects that the emotional state is happy, the music player can select upbeat and joyful music from a playlist, while sad emotions can trigger the player to select mellow and calming music.
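A sketch of this queue-style playback with the python-vlc bindings (the songs/<emotion> folder layout is an assumption mirroring the manually classified folders of Section 1.3):

import os
import vlc

def play_for_emotion(emotion, songs_dir='songs'):
    folder = os.path.join(songs_dir, emotion)
    # Queue the tracks in stored (sorted) order, as the queue module describes
    tracks = sorted(os.path.join(folder, f) for f in os.listdir(folder)
                    if f.lower().endswith(('.mp3', '.wav')))
    instance = vlc.Instance()
    player = instance.media_list_player_new()
    player.set_media_list(instance.media_list_new(tracks))
    player.play()
    return player                 # keep a reference so playback continues

player = play_for_emotion('happy')   # e.g. after a 'happy' detection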
2.5. User Emotion Classification
Face Detection: The primary objective of face detection is to locate human faces within images. This process typically involves identifying facial features such as the nose, mouth, and eyes, which serve as initial steps in face detection. Utilizing the sophisticated VGG16 algorithm for facial detection ensures reliable results. This approach employs a machine learning-based object detection method, which requires a substantial number of positive photos for training the classifier; negative images depicting objects without faces are also utilized.
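The positive/negative-image training scheme described above is the one behind OpenCV's pretrained Haar cascade detector; as one concrete illustration (not necessarily the authors' implementation), the face region can be cropped before classification:

import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def crop_face(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                     # no face in this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return frame[y:y + h, x:x + w]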

Feature Extraction using the VGG16 Method: Convolutional neural networks (CNNs) are a prevalent type of deep neural network fundamentally used for visual perception tasks in deep learning. CNNs operate based on a shared-weight architecture of convolution kernels or filters, which slide across input features to produce translation-equivariant responses known as feature maps. VGG16 is a type of CNN, with multilayer perceptrons adapted into its architecture. Multilayer perceptrons typically refer to fully connected networks, where each neuron in a layer is connected to every neuron in the layer above. However, such networks are prone to overfitting due to their high connectivity. VGG16 employs a regularization strategy that leverages the hierarchical structure of data to construct patterns of increasing complexity from the smaller and simpler patterns imprinted in its filters.
User Emotion Recognition: Many platforms utilize facial expression recognition as a method for emotion analysis. Fisherface is a technique rooted in principal component analysis and linear discriminant analysis principles. It involves categorizing and reducing photographic data before allocating it into appropriate groups, ultimately recording statistical values.
Emotion Mapping: Facial expressions can be categorized into basic emotions such as anger, happiness, fear, neutrality, sadness, disgust, and surprise. The user's expression is compared to expressions in the dataset, thereby enabling emotion mapping.
2.6. VGG16 Working
Detecting faces is a popular topic with many practical applications. In today's smartphones and PCs, face detection software is already built in to help validate the user's identity. In addition to determining the user's age and gender and applying some remarkable filters, several applications can record, recognize, and process faces instantaneously. For feature extraction, VGG16 is utilized. For the emotion recognition module, we must train the system using datasets of seven emotions. VGG16 has the special ability to apply automatic learning to extract traits from dataset images for model building. VGG16 can provide an internal, two-dimensional visual representation, and operations are carried out on this matrix for training and testing purposes.
Five-Layer Model: As its name indicates, this model has five layers (Figure 3). A convolutional and a max-pooling layer make up each of the first three stages, followed by a fully connected layer with 1024 neurons and an output layer with 7 neurons and a softmax activation function. The initial convolutional layers used 32, 32, and 64 kernels of sizes 5x5, 4x4, and 5x5, respectively. Max-pooling layers come after the convolutional layers; each employed 3x3 kernels with a stride of 2, together with the ReLU activation function.

Figure 3 Emotion Recognition using VGG16
The final layer will use softmax activation to output the predicted probabilities for each emotion class. The model will be trained using the dataset described previously, with a batch size of 32 and an Adam optimizer. The accuracy of the model will be evaluated using the validation set, and the best-performing model will be used to predict emotions in the live video stream input and play music accordingly.
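Read literally, the five-layer description above could be rendered in Keras as follows; the 48x48 grayscale input shape and the valid padding are our assumptions, since the text does not state them:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Three conv + max-pool stages with 32, 32, 64 kernels of 5x5, 4x4, 5x5
five_layer = Sequential([
    Conv2D(32, (5, 5), activation='relu', input_shape=(48, 48, 1)),
    MaxPooling2D(pool_size=(3, 3), strides=2),
    Conv2D(32, (4, 4), activation='relu'),
    MaxPooling2D(pool_size=(3, 3), strides=2),
    Conv2D(64, (5, 5), activation='relu'),
    MaxPooling2D(pool_size=(3, 3), strides=2),
    Flatten(),
    Dense(1024, activation='relu'),   # fully connected layer with 1024 neurons
    Dense(7, activation='softmax'),   # one output per emotion class
])
five_layer.compile(optimizer='adam', loss='categorical_crossentropy',
                   metrics=['accuracy'])   # Adam as described; batch size 32 goes to fit()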
3. Results and Discussion
3.1. Results
Figure 4 indicates that the VGG16 model achieved a high level of accuracy in predicting emotions, reaching above 90%. The model demonstrated a strong ability to classify emotions such as happiness, sadness, anger, neutrality, disgust, fear, and surprise with a significant level of precision. This high accuracy suggests that the model has effectively learned meaningful patterns and features associated with different emotions.

Figure 4 Capturing Image and Detecting Emotion
Figure 5 Happy Emotion Detection
Figure 6 Neutral Emotion Detection
Figure 7 Surprised Emotion Detection
3.2. Discussion
The discussion highlights the practical implications of such accurate emotion detection in the music player. As shown in Figure 5, Figure 6, and Figure 7, by reliably predicting the user's emotional state (e.g., happy, neutral, surprised, sad, angry, disgust, and scared), the music player can provide a highly personalized and enjoyable experience. It can automatically select music tracks that precisely match the user's emotions, creating a seamless and immersive listening experience. This approach eliminates the need for the user to actively search for music that aligns with their mood, greatly enhancing convenience and user satisfaction. However, it is important to address the limitation of the current system regarding real-time input-based categorization. While achieving high accuracy in emotion detection, the system does not incorporate real-time indicators, such as continuous facial expression analysis, to capture the user's changing emotional state. Integrating real-time emotion detection techniques could significantly enhance the system's ability to adapt and respond to the user's evolving emotions and preferences, leading to an even more refined and tailored music selection.
Conclusion
This study looked at an innovative method of classifying music based on the emotions and facial expressions of the listeners. It is advised to use neural networks and visual processing to categorize the seven fundamental universal emotions conveyed by music: happiness, sadness, anger, disgust, surprise, fear, and neutrality. First, the input image is run through a face detection algorithm. A feature extraction method based on image processing is then used to recover the feature points. Finally, the values obtained by analyzing the acquired feature points are supplied to a neural network to identify the emotion they represent. Although the research is still in its early stages, success in the field of emotion identification and playing music from the supplied dataset is anticipated.
Acknowledgements
The authors express their sincerest gratitude to Dr. Rohini Nagapadma, the Principal, and Dr. Anitha R, Professor and HOD of the Department of CSE at NIE, for their relentless support and encouragement. Additionally, we take immense pleasure in thanking Ms. Priyanka R V, Assistant Professor in the Department of CSE at NIE, for her valuable suggestions and guidance during the course of this project study.
References
[1]. Jaladi Sam Joel, B. Ernest Thompson, Steve Renny Thomas, T. Revanth Kumar, Bini D. Emotion based Music Recommendation System using Deep Learning Model. International Conference on Inventive Computation Technologies (ICICT). Doi: 10.1109/ICICT57646.2023.10134389
[2]. Vijay Prakash Sharma et al. Emotion-Based Music Recommendation System (2021). 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). Doi: 10.1109/ICRITO51393.2021.9596276
[3]. ShanthaShalini K et al. Facial Emotion Based Music Recommendation System using Computer Vision and Machine Learning Techniques (2021). Turkish Journal of Computer and Mathematics Education, vol. 12, no. 2, pp. 912-917, Apr.
[4]. Seshaayini K et al. Emotion Recognition Based Music Player (2023). Fifth International Conference on Electrical, Computer and Communication Technologies (ICECCT). Doi: 10.1109/ICECCT56650.2023.10179716
[5]. Prachi Vijayeeta, Parthasarathi Pattnayak. A Deep Learning approach for Emotion Based Music Player (2022). OITS International Conference on Information Technology (OCIT). Doi: 10.1109/OCIT56763.2022.00060
[6]. Kundeti Naga Prasanthi et al. Machine Learning Techniques based Audio Player to Soothe Human Emotions (2022). International Conference on Sustainable
Computing and Data Communication Systems (ICSCDS). Doi: 10.1109/ICSCDS53736.2022.9760912
[7]. Anushree K et al. Artificial Intelligence (AI) Enabled Music Player System for User Facial Recognition (2023). 4th International Conference for Emerging Technology (INCET), Belgaum, India, May 26-28, 2023. Doi: 10.1109/INCET57972.2023.10170476
[8]. Vinay P et al. Facial Expression Based Music Recommendation System (2021). International Journal of Advanced Research in Computer and Communication Engineering. Doi: 10.17148/IJARCCE.2021.10682
[9]. Serhat Hizlisoy et al. Music emotion recognition using convolutional long short term memory deep neural networks (2021). Engineering Science and Technology, an International Journal, Volume 24, Issue 3. Doi: 10.1016/j.jestch.2020.10.009
[10]. Sulaiman Muhammad et al. Real Time Emotion Based Music Player Using CNN Architectures. 6th International Conference for Convergence in Technology (I2CT). Doi: 10.1109/I2CT51068.2021.9417949
[11]. Sreenivas, V., Namdeo, V. & Kumar, E.V. Group based emotion recognition from video sequence with hybrid optimization based recurrent fuzzy neural network (2020). J Big Data 7, 56. Doi: 10.1186/s40537-020-00326-5
[12]. Soumya K, Suja Palaniswamy. Emotion Recognition from Partially Occluded Facial Images using Prototypical Networks (2020). Second International Conference on Innovative Mechanisms for Industry Applications. Doi: 10.1109/ICIMIA48430.2020.9074962
[13]. Ishwar More et al. Melomaniac - Emotion Based Music Recommendation System (2021). IJARIIE, no. 3, pp. 1323-1329.
[14]. Madhuri Athavle et al. Music Recommendation Based on Face Emotion Recognition (2021). Journal of Informatics Electrical and Electronics Engineering, Vol. 02, Iss. 02, S. No. 018, pp. 1-11.
[15]. Rahul Arya, Chandradeep Bhatt, Mudit Mittal. Music Player Based on Emotion Detection Using CNN. IEEE North Karnataka Subsection Flagship International Conference (NKCon). Doi: 10.1109/NKCon56289.2022.10126761
