Emotion-Based Music Player
International Research Journal on Advanced Engineering and Management (IRJAEM)
e ISSN: 2584-2854 | Volume: 02 | Issue: 04, April 2024 | Page No: 1149 - 1156
https://goldncloudpublications.com
https://doi.org/10.47392/IRJAEM.2024.0152
Abstract
Songs have always been a popular medium for communicating and understanding human emotions, and reliable emotion-based categorization systems can be quite helpful in understanding their relevance. However, the outcomes of research on emotion-based music classification have not been the greatest. Here, we introduce EMP, a cross-platform emotional music player that plays songs in accordance with the user's feelings at the time. EMP provides an intelligent, mood-based music player by incorporating emotion-context reasoning abilities into our adaptive music engine. EMP revolutionizes how users interact with music, fostering deeper connections between emotions and musical experiences. Our music player is composed of three modules: the emotion module, the classification module, and the queue-based module. The Emotion Module analyzes a picture of the user's face and uses the VGG16 algorithm to detect the mood with a precision exceeding 95%. The Music Classification Module achieves outstanding results by utilizing aural criteria to classify music into 7 different mood groups. The Queue Module plays the songs directly from the mapped folders in the order they are stored, ensuring alignment with the user's mood and preferences.
Keywords: VGG16 Algorithm, Emotion Context, Intelligent, EMP.
1. Introduction
The world of music has always been an integral element of our lives, and it has the power to evoke emotions and feelings that are unique to everyone. In recent years, the field of music technology has seen tremendous growth, and there have been numerous advancements in the utilization of machine learning algorithms to develop intelligent music systems. One such system is the emotion-based music player, which uses VGG16 to detect the user's emotion and then plays a song suited to the identified emotional state. In this project, we explore the development of an emotion-based music player that uses VGG16 for emotion detection, implemented in Python. The system is designed to make use of a pre-trained VGG16 network to analyze the facial features of the user and predict the emotion. The predicted emotion is then utilized to select and play the most appropriate songs from a pre-defined playlist associated with that emotional state. The main goal of this project is to provide a personalized and emotionally engaging music experience for the user. The potential applications of this system extend far beyond music players and could be incorporated into a range of industries, including healthcare and entertainment. Human beings exhibit diverse music preferences tailored to their varying emotional states and activities. Whether engaged in physical exertion or seeking relaxation, individuals seek out specific genres and rhythms to suit their needs. It is within this context that the concept of an emotion-based music player system emerges, offering tailored musical experiences across a spectrum of scenarios including physical labor, stress management, music therapy, and academic endeavors. We introduce an emotion-based music player system tailored to address the intricate emotional preferences of users, playing music aligned with their emotional states [1].
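As a concrete illustration of this flow, the following minimal Python sketch captures a face image, predicts the emotion with a fine-tuned VGG16 model, and queues songs from an emotion-mapped folder. The model file name (emotion_model.h5), the songs/<emotion>/ folder layout, and the preprocessing steps are illustrative assumptions, not the paper's exact implementation.

    import os
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # The seven mood groups used throughout the paper; this ordering is an assumption.
    EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    def capture_frame(camera_index=0):
        # Grab a single frame from the webcam as a BGR image.
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("could not read from camera")
        return frame

    def predict_emotion(model, face):
        # Resize to VGG16's 224x224 input, scale to [0, 1], and take the arg-max class.
        img = cv2.resize(face, (224, 224)).astype("float32") / 255.0
        probs = model.predict(img[np.newaxis, ...])  # shape (1, 7)
        return EMOTIONS[int(np.argmax(probs))]

    def queued_songs(emotion, root="songs"):
        # Queue module: list the files of the mapped folder in the order they are stored.
        folder = os.path.join(root, emotion)
        return [os.path.join(folder, name) for name in sorted(os.listdir(folder))]

    if __name__ == "__main__":
        model = load_model("emotion_model.h5")  # hypothetical fine-tuned VGG16 weights
        mood = predict_emotion(model, capture_frame())
        for track in queued_songs(mood):
            print("now playing:", track)  # hand the path to any audio backend here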
1.1 Related Work
In this study, researchers propose a novel approach to music recommendation based on emotions. They leverage deep learning models to analyze user preferences and emotional responses to music, enabling more personalized recommendations. By integrating emotion recognition techniques, the system can accurately capture the user's mood and tailor recommendations accordingly. This paper leverages CNNs, which possess the capability to autonomously discern pertinent features from images, eliminating the need for manual feature crafting [2]. The research introduces a system that identifies users' emotional states and recommends music tracks accordingly. By analyzing factors like tempo, pitch, and lyric sentiment, the system tailors recommendations to match the user's current mood. Through empirical evaluation, the study showcases the effectiveness of the proposed approach in enhancing user experience and satisfaction with music recommendation services. This research underscores the importance of incorporating emotional cues into recommendation systems to provide more personalized and engaging user experiences in the realm of music streaming platforms [3]. This work introduced a dynamic framework for music recommendations grounded in human emotions. By training song selections for distinct emotional states derived from individual listening patterns, the researchers established a personalized approach to music curation. Employing a fusion of feature extraction methodologies and machine learning algorithms, the system adeptly discerns the emotional nuances of human faces depicted in input images. Once the mood is ascertained, the system seamlessly integrates by playing music tailored to the identified emotional state, thereby enhancing user engagement and satisfaction. [4] The paper proposes an emotion-based music player system utilizing facial recognition to detect users' emotions, achieving high accuracy with SVM classification aided by PCA and a polynomial kernel. It effectively integrates Haar features and PCA for dimensionality reduction and employs SVM classification with polynomial kernels for high-accuracy emotion prediction. Real-time prediction involves considering 20 samples of the user's current emotion, enabling seamless music selection based on predominant emotional states. [5] This research paper utilizes deep learning mechanisms, particularly focusing on facial expression recognition. By analyzing facial traits such as expressions, color, posture, and orientation, the system automatically creates music playlists in consideration of the real-time mental state of a person. Haar Cascade detection is combined with two classifiers, CNN and SVM, for emotion detection, with comparative studies conducted on trained datasets. The model comprises face discovery and facial component extraction components, enabling the system to identify emotions [6]. Kundeti Naga Prasanthi et al. proposed an audio player which involves Haar cascade classification for face segmentation, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for feature extraction, and Euclidean distance calculation for emotion classification. The system aims to provide a more accurate and efficient method of selecting music tailored to the user's emotional state. [7] This paper proposes a 'smart music player' system employing artificial intelligence (AI) and facial expression recognition to recommend music based on the user's mood. It employs convolutional neural networks (CNNs) for facial expression detection and analysis, categorizing emotions into seven groups: happy, sad, neutral, surprise, fear, disgust, and angry. The system's architecture incorporates training deep neural networks to recognize facial features and recommend music tracks accordingly. It uses the Streamlit framework for the user interface and connects to the Spotify API for song recommendations. The system achieves a 76% accuracy in emotion recognition. [8] This paper utilizes technologies such as React JS, Node JS, and Firebase for the frontend and backend, leveraging algorithms such as Support Vector Machines (SVM) and OpenCV for facial recognition. Through algorithmic design, the system follows a step-by-step process from image upload to emotion detection to song recommendation.
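Several of the systems surveyed above locate the face with a Haar-cascade classifier before classifying the emotion. As a hedged sketch of that common first step (not the exact code of any cited work), OpenCV's bundled frontal-face cascade can be applied as follows:

    import cv2

    # OpenCV ships its Haar cascade XML files; cv2.data.haarcascades points at them.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def crop_largest_face(frame):
        # Detect faces in a BGR frame and return the largest crop, or None if no face.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # largest by area
        return frame[y:y + h, x:x + w]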
The system applies automatic learning to extract traits from images for model building. VGG16 can provide an internal, two-dimensional visual representation; on this matrix, operations in three dimensions are carried out for training and testing purposes. Five-Layer Model: as its name indicates, this model has five layers (Figure 3). Each of the first three stages is made up of a convolutional layer and a max-pooling layer, followed by a fully connected layer with 1024 neurons and an output layer with 7 neurons and a soft-max activation function. For the three initial convolutional layers, 32, 32, and 64 kernels of sizes 5×5, 4×4, and 5×5, respectively, were used. Max-pooling layers come after the convolutional layers, and each employed a kernel of 3×3 dimensions, a stride of 2, and the ReLU activation function. The model trained on the dataset effectively learned meaningful patterns and features associated with different emotions.
Figure 4 Capturing Image and Detecting Emotion
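A minimal Keras sketch of the five-layer model described above, under the assumption of 48×48 grayscale face crops (the input size is not stated in this excerpt):

    from tensorflow.keras import layers, models

    def five_layer_model(input_shape=(48, 48, 1), num_classes=7):
        # Three conv + max-pool stages: 32 5x5, 32 4x4, and 64 5x5 kernels with ReLU,
        # each followed by 3x3 max pooling with stride 2, as described in the text.
        return models.Sequential([
            layers.Conv2D(32, (5, 5), activation="relu", padding="same",
                          input_shape=input_shape),
            layers.MaxPooling2D(pool_size=(3, 3), strides=2),
            layers.Conv2D(32, (4, 4), activation="relu", padding="same"),
            layers.MaxPooling2D(pool_size=(3, 3), strides=2),
            layers.Conv2D(64, (5, 5), activation="relu", padding="same"),
            layers.MaxPooling2D(pool_size=(3, 3), strides=2),
            layers.Flatten(),
            layers.Dense(1024, activation="relu"),           # fully connected layer
            layers.Dense(num_classes, activation="softmax")  # 7-neuron soft-max output
        ])

Note that the text attaches the ReLU activation to the pooling stage; the sketch places it on the convolutions, which is the conventional reading.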
References
Computing and Data Communication Systems (ICSCDS). Doi: 10.1109/ICSCDS53736.2022.9760912
[7]. Anushree K et al. Artificial Intelligence (AI) Enabled Music Player System for User Facial Recognition (2023). 4th International Conference for Emerging Technology (INCET), Belgaum, India, May 26-28, 2023. Doi: 10.1109/INCET57972.2023.10170476
[8]. Vinay P et al. Facial Expression Based Music Recommendation System (2021). International Journal of Advanced Research in Computer and Communication Engineering. Doi: 10.17148/IJARCCE.2021.10682
[9]. Serhat Hizlisoy et al. Music emotion recognition using convolutional long short term memory deep neural networks (2021). Engineering Science and Technology, an International Journal, Volume 24, Issue 3. Doi: 10.1016/j.jestch.2020.10.009
[10]. Sulaiman Muhammad et al. Real Time Emotion Based Music Player Using CNN Architectures. 6th International Conference for Convergence in Technology (I2CT). Doi: 10.1109/I2CT51068.2021.9417949
[11]. Sreenivas, V., Namdeo, V. & Kumar, E.V. Group based emotion recognition from video sequence with hybrid optimization based recurrent fuzzy neural network (2020). J Big Data 7, 56. Doi: 10.1186/s40537-020-00326-5
[12]. Soumya K, Suja Palaniswamy. Emotion Recognition from Partially Occluded Facial Images using Prototypical Networks (2020). Second International Conference on Innovative Mechanisms for Industry Applications. Doi: 10.1109/ICIMIA48430.2020.9074962
[13]. Ishwar More et al. Melomaniac - Emotion Based Music Recommendation System (2021). IJARIIE, no. 3, pp. 1323-1329.
[14]. Madhuri Athavle et al. Music Recommendation Based on Face Emotion Recognition (2021). Journal of Informatics Electrical and Electronics Engineering, Vol. 02, Iss. 02, S. No. 018, pp. 1-11.
[15]. Rahul Arya, Chandradeep Bhatt, Mudit Mittal. Music Player Based on Emotion Detection Using CNN. IEEE North Karnataka Subsection Flagship International Conference (NKCon). Doi: 10.1109/NKCon56289.2022.10126761