
Volume 8, Issue 12, December – 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Harmonic Fusion: AI-Driven Music Personalization via Emotion-Enhanced Facial Expression Recognition Using Python, OpenCV, TensorFlow, and Flask
Moeez Rajjan 1, Prajwal Deore 2, Yashraj Mohite 3, Yash Desai 4
1,2,3,4 Student, Department of Computer Engineering, Ramrao Adik Institute of Technology

Abstract:- The rise of big data in recent years has drawn a great deal of attention to the realm of deep learning. Convolutional Neural Networks (CNNs), a key component of deep learning, have demonstrated their worth, particularly in the field of facial recognition [3]. This research presents a novel technique that combines CNN-based micro-expression detection with an autonomous music recommendation system [3] [1]. Our algorithm detects subtle facial micro-expressions and then selects music that matches the emotional states these expressions represent.

Our micro-expression recognition model performs admirably on the FER2013 dataset, with a recognition rate of 62.1% [3]. Once the specific facial emotion has been deciphered, a content-based music recommendation algorithm extracts song feature vectors, and the cosine similarity algorithm ranks candidate songs for recommendation [3]. Beyond improving music recommendation systems, this study also investigates how such systems may assist us in managing our emotions [2] [1]. The findings offer a great deal of promise, pointing to interesting prospects for incorporating emotion-aware music recommendation algorithms into numerous facets of our lives.

Keywords:- Deep Learning, Facial Micro-Expression Recognition, Convolutional Neural Network (CNN), FER2013 Dataset, Music Recommendation Algorithm, Emotion Recognition, Emotion Recognition in Conversation (ERC), Recommender Systems, Music Information Retrieval, Artificial Neural Networks, Multi-Layer Neural Network.

I. INTRODUCTION

Deep learning has seen considerable use in today's information technology era, ranging from image identification to image processing, with a special emphasis on facial expression recognition [3]. Facial recognition, a rising field of study within human-computer interaction, has made remarkable progress. However, its practical applicability frequently encounters limits when image processing advances are carried into real-world contexts. Image research usually focuses on improving recognition accuracy while ignoring the downstream processes that have the ability to unlock the full value of image data [3]. This uncharted terrain highlights the necessity for a comprehensive approach that goes beyond mere accuracy enhancement and delves into real-world applicability.

This research offers a comprehensive strategy that combines deep learning techniques with a music recommendation system to bridge the gap between image analysis and practical effects. The strategy employs convolutional neural networks (CNNs) for facial micro-expression identification, not only to distinguish emotions but also to improve music recommendations [3]. The great potential of music to alter emotional states has increased its significance in many aspects of human existence [2]. Recognizing the strong relationship between emotions and music, our technique aims to offer musical selections that correspond to and influence the observed emotional state [2]. This combination of facial expression detection and music suggestion promises a more immersive and enhanced user experience.

In tandem with this strategy, meticulous effort goes into curating music databases, including playlist crawls and manual annotations obtained from top music platforms [3]. By leveraging these datasets, we broaden the reach of image processing outcomes, allowing for a more tailored and engaging user experience. This study aims to lay the groundwork for an integrated system that improves the user's emotional journey by smoothly combining facial expression detection and music suggestion. The following sections delve into the methodology underpinning our approach, detailing the design and training of the expression recognition model, the fusion of image processing results with the music recommendation algorithm, and the broader implications of our findings.

II. LITERATURE SURVEY

An extensive exploration of methods to discern users' behavioral and emotional states reveals a diverse landscape of research [2]. Various techniques, including facial expressions, gestures, body language, and speech analysis, have been employed to decode these emotional signals. Efforts to categorize emotional expressions on users' faces have spurred the development of different methodologies involving feature extraction and classification algorithms [2].

The foundational work by Ekman and Friesen introduced Action Units (AUs), capturing both fleeting and enduring facial traits to elucidate the direct correlation between facial muscle movements and expressed emotions [2]. This led to the Facial Action Coding System, which delineates 44 action units representing emotions at varying intensities [2]. The pursuit of algorithms based on distinctive features aligns with Ekman's recommendations [2].

In parallel, geometric approaches to emotional analysis have surfaced [2]. These rely on facial markers such as eye corners, lip contours, and brow movements to extract defining characteristics [2]. The distances between these points generate a feature vector that changes with shifting emotional states [2]. This vector becomes instrumental in identifying emotions using Support Vector Machines (SVM) and Radial Basis Function Neural Networks (RBFNN) [2].

Efforts to classify music based on lyrical analysis have faced challenges due to language barriers and the nuanced emotional expressions conveyed through music [2].

The literature also features endeavors to merge facial emotion recognition with music recommendation systems [1, 5]. These algorithms strive to personalize music choices by analyzing facial expressions collected from images, thereby reducing the time users spend managing extensive playlists [1]. This integration holds promise for enhancing the music experience by aligning it with the user's emotional journey [1]. Recent research has developed systems for emotional identification and music recommendation based on users' facial expressions, utilizing artificial neural networks for emotion classification and customized playlist suggestions [5].

Moreover, sophisticated methodologies such as Convolutional Neural Networks (CNNs) have emerged as potent tools for recognizing emotions [2]. Leveraging deep learning, these networks demonstrate an aptitude for grasping subtle emotional cues and have gained traction in emotion recognition applications [2]. The interplay between facial expression analysis and music recommendation systems clearly presents significant potential for augmenting user engagement and delivering tailored experiences [1].

III. PROPOSED SYSTEM AND METHODOLOGY

 Proposed System Overview
This paper marks a stride in the digital era, where deep learning reshapes domains such as image recognition and facial expression analysis. It introduces an integrated system that unites emotion recognition with music recommendation. Powered by Convolutional Neural Networks (CNNs), the system captures and interprets users' emotional expressions from facial photos in real time. This capability serves as a bridge, merging emotions with auditory experiences and unlocking novel ways to connect with users and enhance their emotional well-being.

The central aim of this system is to intuitively engage users by uplifting their emotional state through a customized music playlist. Emotions, whether happiness, sadness, neutrality, or surprise, profoundly influence human responses to music. Leveraging facial micro-expression recognition, our system aims to automatically tailor music selections to match users' current emotional state [4].

The system operates through a comprehensive set of integrated modules:
 Real-Time Capture: This module acts as the initial step, capturing users' facial expressions through cameras and laying the groundwork for emotion recognition.
 Face Recognition: Using CNNs, this module extracts features from captured facial images, delving into the nuances of facial expressions to capture emotional subtleties.
 Emotion Detection: Central to the system's cognitive abilities, this module analyzes the extracted features to discern emotional nuances from facial expressions, providing the emotional context for music recommendation.
 Music Recommendation: This module curates music recommendations based on recognized emotions, merging the emotional journey with auditory preferences for an enriched user experience [4].

 Methodology

 Database Description
The emotion detection process is anchored on a Convolutional Neural Network (CNN) model trained on the FER2013 dataset. This dataset, comprising 30,219 grayscale facial images sized 48x48 pixels, covers emotions such as happy, sad, angry, surprise, and neutral for comprehensive training. Using this dataset, the system leverages deep learning for accurate facial expression identification and interpretation [4].

The Emotion Extraction Module orchestrates image capture, feature extraction, and CNN-based analysis to reveal users' emotional states. Grayscale images, captured via cameras or webcams, undergo feature extraction to isolate facial landmarks and expressions. A trained network interprets these features to determine and label the user's emotional state.

Once emotions are decoded, the Audio Extraction Module recommends music or audio aligned with the user's expressed emotions. A personalized list of emotion-matched songs is presented to users, taking their preferences into account for an engaging musical experience.

The Emotion-Audio Integration Module combines emotion-matched songs with the user's current emotional disposition. This module operates through a dynamic web interface built with technologies such as PHP, MySQL, HTML, CSS, and JavaScript. It acts as the bridge for emotion-based audio selection, creating a harmonious blend between user sentiment and auditory pleasure [5].
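To ground the training setup described above, the following is a minimal sketch of such a CNN in TensorFlow/Keras. The layer sizes are illustrative assumptions rather than our final architecture, and `train_images`/`train_labels` stand in for the preprocessed FER2013 arrays:

```python
# Minimal sketch of a FER2013-style CNN in TensorFlow/Keras.
# Layer sizes are illustrative; train_images/train_labels are assumed to be
# preprocessed FER2013 data: (N, 48, 48, 1) float32 images, 5 emotion labels.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_EMOTIONS = 5  # happy, sad, angry, surprise, neutral

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, (3, 3), activation='relu'),  # broad strokes: edges, lines
    layers.MaxPooling2D((2, 2)),                   # pooling for small-shift invariance
    layers.Conv2D(64, (3, 3), activation='relu'),  # deeper, finer-grained features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(NUM_EMOTIONS, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(train_images, train_labels, epochs=30, validation_split=0.1)
```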

IV. MODULE DETECTION AND RECOMMENDATION

A. Emotion Detection Module: Deciphering the Language of Faces
In computer vision, face detection mirrors the complexity of deciphering the intricate language embedded in human expressions. It is a fundamental application, employing algorithms that identify faces or objects within images [4]. Consider these algorithms digital detectives, akin to Sherlock Holmes, finely tuned to spot faces amidst the visual cacophony. Face detection relies heavily on classifiers, the equivalent of the detective's magnifying glass. Their primary mission is distinguishing whether an element within an image is a face (denoted as 1) or something else (denoted as 0). It is far from a trivial pursuit: these classifiers undergo rigorous training on extensive image datasets to achieve their precision.

Enter OpenCV, our ally in this investigative journey, armed with two principal types of classifiers: Local Binary Patterns (LBP) and Haar cascades [6]. Haar classifiers dominate facial detection. Trained on diverse facial data, their goal is clear: spotting faces within a frame while filtering out distractions and noise. The secret lies in machine learning. Cascade functions, trained on input files, apply Haar wavelet techniques that break image pixels into squares and use machine-learned thresholds to decide whether a region contains a face, a process driven by the training data.

B. Feature Extraction: Unearthing the Hidden Gems
In deep learning, feature extraction resembles excavating precious gems from a chest of treasures. Envision a pre-trained network as an art connoisseur, discerning certain strokes over others. Input images traverse this network, pausing at set layers whose outputs are taken as features, an artistic appreciation of a masterpiece layer by layer. The initial layers of the convolutional network focus on broader strokes, extracting high-level features through a limited filter set.

As we venture deeper, the layers act like the connoisseur's magnifying glass, revealing intricate details. These deeper filters specialize in capturing fine-grained features, albeit at added computational cost [6].

C. Emotion Detection: Cracking the Emotion Code
Emotion detection is where the Convolutional Neural Network (CNN) architecture takes the spotlight. Picture it as an ensemble of artists scrutinizing an image. The feature detectors meticulously examine input images, unveiling distinct emotional strokes: edges, lines, and curves are all dissected and analyzed.

Next comes pooling, which is akin to stepping back to appreciate the broader canvas. Pooling ensures consistent outcomes even with minor input variations; it is about discerning patterns amidst noise. Different pooling methods exist, such as min, average, and max, but max-pooling stands out for its adeptness at retaining the most salient details.

Following this analysis, the flattened features pass through a fully connected deep neural network, like handing the artistic insights to a master storyteller proficient in deciphering the subject's emotional state [6].

D. Music Recommendation Module: Crafting Musical Harmony
Music is the universal language of emotions. To craft the musical experience, we curate a database of Bollywood Hindi songs, with 100 to 150 songs per emotion category. Music isn't just a backdrop; it's a driving force behind emotions. When the emotion module detects a user's mood (say they're feeling a bit blue), the system recommends a curated playlist that resonates with that mood, effectively lifting their spirits.

Real-time emotion detection is the linchpin: it labels emotions as Happy, Sad, Angry, Surprise, or Neutral, and these labels guide the selection. The songs are organized into per-emotion folders enumerated with Python's `os.listdir()` method, and each song in the playlist is ordered by how often the user listens to it.

The GUI of the music player is the stage where emotions and music come together. Pygame, a multimedia library, handles audio playback; functions such as `playsong`, `pausesong`, `resumesong`, and `stopsong` manage the music, while Tkinter handles GUI development [4].

E. Eigenfaces Approach: Recognizing the Unique You
For perceiving facial expressions, the Eigenfaces approach is a trusted ally. It zeroes in on the most significant parts of the face, the eyes, nose, cheeks, and forehead, because these areas change relative to one another when emotions are expressed, much as one recognizes a friend by a unique smile or a twinkle in the eye. The Eigenfaces algorithm captures these crucial facial features and uses eigenvalues and eigenvectors to tell faces apart, capturing the maximum variation in facial features across different faces.
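As a concrete illustration of the Haar-cascade detection described in Section A, the following minimal OpenCV sketch uses the stock frontal-face cascade bundled with OpenCV; the input file name `photo.jpg` is a placeholder:

```python
# Minimal Haar-cascade face detection sketch with OpenCV.
# 'photo.jpg' is a placeholder input image.
import cv2

cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread('photo.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) boxes; the face/non-face (1/0)
# decisions happen inside the cascade's stages.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('faces.jpg', img)
```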
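Section D names `playsong`, `pausesong`, `resumesong`, and `stopsong` as the player's controls. A minimal sketch of how such controls can be built on pygame's mixer follows; the per-emotion `songs/` folder layout is an assumption for illustration:

```python
# Minimal sketch of the pygame-based playback controls named in Section D.
# The songs/<Emotion>/ folder layout is an illustrative assumption.
import os
import pygame

pygame.mixer.init()

def playsong(path):
    """Load and start a track."""
    pygame.mixer.music.load(path)
    pygame.mixer.music.play()

def pausesong():
    pygame.mixer.music.pause()

def resumesong():
    pygame.mixer.music.unpause()

def stopsong():
    pygame.mixer.music.stop()

# Songs are organised into per-emotion folders, listed with os.listdir():
emotion = 'Happy'
playlist = sorted(os.listdir(os.path.join('songs', emotion)))
if playlist:
    playsong(os.path.join('songs', emotion, playlist[0]))
```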
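The Eigenfaces idea in Section E, capturing the directions of maximum variation among faces with eigenvectors, can be sketched in a few lines of NumPy; the `faces` matrix of flattened 48x48 images below is randomly generated stand-in data:

```python
# Eigenfaces sketch: principal components of a set of flattened face images.
# 'faces' is an assumed (num_faces, 48*48) matrix of grayscale images.
import numpy as np

def eigenfaces(faces, k=10):
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data: rows of vt are the eigenfaces, i.e. the
    # directions of maximum variance in face space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(face, mean_face, components):
    """Represent a face by its weights on the top-k eigenfaces."""
    return components @ (face - mean_face)

rng = np.random.default_rng(0)
faces = rng.random((100, 48 * 48))   # stand-in data for the sketch
mean_face, components = eigenfaces(faces)
weights = project(faces[0], mean_face, components)
print(weights.shape)                 # (10,)
```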

F. Face Detection and Recognition: The Art of Reading Faces
Recognizing facial expressions is about reading the intricate stories etched on faces. To decode these tales, we process images containing human faces and detect the emotions conveyed. Different algorithms step into the spotlight here: Eigenfaces, Local Binary Patterns, Discrete Cosine Transform, and Gabor wavelets. They work like literary experts dissecting the nuances of facial expressions. OpenCV and the Eigenfaces algorithm are our companions, detecting faces within images and unveiling the emotional cues hidden within them, the unspoken language of faces.

G. Music Feature and Recommendation: Crafting Musical Stories
Music recommendation isn't just about songs; it's about weaving musical stories that resonate with the listener. We analyze factors such as artist, album, and mood. Artificial Neural Networks (ANNs) classify songs into various categories based on diverse criteria. The Million Song Dataset on Kaggle serves as our training ground, offering metadata and triplet files with song information and user interactions, a treasure trove of musical insights. With ANN-based methods, we deliver accurate classifications and recommendations, crafting musical journeys that match the listener's emotions.

Emotion Detection Module
Equation for the ReLU activation function:
f(x) = max(0, x)

Feature Extraction
Pooling operation equation:
```
Pooled Feature = max(Pooling Area)
```

Music Recommendation Module
Code for real-time emotion detection using a CNN:
```python
# Sample code for real-time emotion detection
import cv2
import numpy as np

# Emotion labels in the order the model outputs them
emotions = ['Happy', 'Sad', 'Angry', 'Surprise', 'Neutral']

# Placeholder paths to the network definition and trained weights
protoTxt = 'deploy.prototxt'
modelFile = 'emotion.caffemodel'

# Load pre-trained model
model = cv2.dnn.readNetFromCaffe(protoTxt, modelFile)

# Capture video from webcam
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    model.setInput(blob)
    detections = model.forward()

    # Extract emotion from detection results
    emotion = emotions[np.argmax(detections)]

    cv2.putText(frame, f'Emotion: {emotion}', (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.imshow('Emotion Detection', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

V. RESULTS

The fusion of facial expression recognition with music recommendation systems is a captivating endeavor in recent research. Researchers in this field employ Convolutional Neural Networks (CNNs) to create solutions that bridge facial expression recognition and music recommendation.

Athavle et al. [4] designed a system that not only recognizes emotions such as happiness, sadness, anger, surprise, and neutrality but also curates music playlists that match the user's mood.

Yu et al. [3] explored the subtleties of facial micro-expressions, achieving a 62.1% recognition rate, and integrated this model into their music recommendation algorithm.

Krupa et al. [2] elevated the CNN approach with a two-level CNN model, achieving recognition accuracies of up to 88% through careful optimization and emphasizing the importance of multi-level feature extraction.

Nareen Sai et al. [7] reported substantial recognition accuracies using a CNN architecture, highlighting the rising importance of deep learning in facial expression analysis and its integration with music recommendation frameworks.

Metilda Florence and Uma [5] conducted emotion recognition experiments based on user facial expressions. In one set, explicit instructions led to perfect accuracy when inner emotions matched facial expressions; in another set without guidance, accuracy varied widely, reflecting the diversity of human emotions.

This research underscores the growing significance of emotion recognition through facial expressions across disciplines. While accurate classification of user emotions is well within reach, with accuracy rates exceeding 80% in various scenarios, challenges persist, such as acquiring suitable image data and ensuring well-lit environments for precise predictions.
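Returning to the recommendation step: the abstract and Section G describe extracting song feature vectors and ranking them by cosine similarity. A minimal sketch of that ranking follows; the feature vectors and the emotion-derived query are toy values, not our actual song features:

```python
# Content-based ranking by cosine similarity, as described in the abstract.
# The song feature vectors and the emotion-derived query are toy assumptions.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Assumed per-song features, e.g. (valence, energy, tempo) scaled to 0-1.
songs = {
    'Track A': np.array([0.9, 0.8, 0.7]),
    'Track B': np.array([0.2, 0.3, 0.4]),
    'Track C': np.array([0.8, 0.6, 0.9]),
}

# Assumed target vector for the detected emotion (e.g. 'Happy').
query = np.array([1.0, 0.9, 0.8])

# Rank songs by similarity to the emotion query, most similar first.
ranked = sorted(songs.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
for name, _ in ranked:
    print(name)
```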

Table 1

Fig 1 Model Accuracy

Fig 2 Model Loss

Fig 3

Fig 4

VI. CONCLUSION

In the realm of music recommendation systems, integration with facial emotion recognition techniques has sparked substantial advancements. This technological marriage not only refines the precision of recognizing users' feelings through facial cues but also deepens the emotional resonance of music recommendations.

Take, for instance, Yu et al. [3], who introduced a model based on convolutional neural networks (CNNs) to decode facial micro-expressions. The model reached a recognition rate of 62.1%, laying the groundwork for a recommendation algorithm whose real-time emotion recognition crafts music suggestions tailored to users' current emotional states, not just their past history.

Then there is Krupa et al. [2] and their emotion-aware smart music recommender. It extends beyond music, delving into a deeper understanding of users: by integrating chatbot interactions and facial expression-based emotion detection, they fashioned a system reaching beyond songs into areas such as driver assistance, lie detection, surveillance, and mood-based learning.

Gilda et al. [1] offer a smart music player that blends facial emotion recognition into mood-based music recommendations. Its striking 97.69% accuracy minimizes user effort in playlist creation, a case of technology adapting to emotions.

Athavle et al. [4] built a music recommendation system that does not merely shuffle tracks but orchestrates mood transformations. From happiness to surprise, it gauges emotions and crafts playlists in sync, demonstrating music's influence on moods and enhancing user experiences.

Collectively, these studies highlight the potential of merging facial emotion recognition with music recommendation. While they pave remarkable paths, challenges persist: the precision of micro-expression recognition demands enhancement, and issues such as adverse lighting conditions cast shadows. Nonetheless, these hurdles are stepping stones toward more personalized and emotionally synchronized music experiences.

In essence, the intersection of emotions and music yields something special. These advancements extend beyond melodies, influencing domains such as mental health therapy and gaming. As researchers refine these systems, their promise to enhance user well-being and satisfaction grows, ensuring technology resonates more profoundly with the human heart.

FUTURE SCOPE

The evolution of uniting emotions with music recommendation systems continues, paving the way for unexplored territory ripe with innovation:

Exploring Nuanced Emotions: What if the system could discern even the subtlest shades of disgust and fear? Future research could expand the spectrum of recognized emotions to incorporate these intricate sentiments, technology comprehending not only smiles but the full tapestry of human emotion.

Illuminating Dark Spaces: Adverse lighting and low-quality camera resolutions present hurdles. Future systems should excel in any setting, ensuring emotions are never concealed in the dark and perceiving the user's feelings even in dimly lit environments.

Personalizing Melodies: Collaborative filtering techniques hold the promise of a more personalized musical journey. These systems could tune in not just to the user's mood but also to their musical preferences, creating a symphony that deeply resonates with the inner self.

Healing Through Harmony: Beyond entertainment, these systems may step into the domain of music therapy. Picture therapists using emotion detection to craft sessions that alleviate stress, anxiety, depression, or trauma: the fusion of technology and mental well-being.

As these advancements unfold, it is crucial to strike a balance between refining algorithms and practical application. The goal is clear: to create technology that not only comprehends us but also touches the strings of our emotions, enhancing our experiences and addressing our emotional well-being.

The future promises a symphony of possibilities where innovation and emotions coalesce, ensuring a more enriching and harmonious tomorrow.

LIMITATIONS

The system has its share of limitations, which contribute to a nuanced understanding of its capabilities. Primarily, the system's emotional understanding remains confined to the limits of its dataset. This constraint restricts its capacity to discern the entire breadth of human emotions, emphasizing the critical role of comprehensive and diverse training data.

Another significant factor influencing performance is lighting. As with photography, the system thrives in well-lit environments where it can accurately detect and interpret facial expressions; in dimly lit surroundings its precision may be compromised, underscoring the importance of optimized lighting conditions.

Image quality also greatly impacts the system's accuracy in emotional interpretation. It prefers higher-resolution images, ideally at least 320p, to capture and decode emotional nuances; clearer and sharper images contribute to a more accurate portrayal of emotions.

Recognizing these limitations propels ongoing efforts toward system enhancement, seeking improvements that surmount these challenges and foster a deeper, more precise alignment with human emotional expressions.

REFERENCES

[1]. S. Gilda, H. Zafar, C. Soni and K. Waghurdekar, "Smart music player integrating facial emotion recognition and music mood recommendation," 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, India, 2017, pp. 154-158, doi: 10.1109/WiSPNET.2017.8299738.

[2]. K. S. Krupa, G. Ambara, K. Rai and S. Choudhury,
"Emotion aware Smart Music Recommender System
using Two Level CNN," 2020 Third International
Conference on Smart Systems and Inventive
Technology (ICSSIT), Tirunelveli, India, 2020, pp.
1322-1327, doi: 10.1109/ICSSIT48917.2020.9214164.
[3]. Z. Yu, M. Zhao, Y. Wu, P. Liu and H. Chen, "Research
on Automatic Music Recommendation Algorithm
Based on Facial Micro-expression Recognition," 2020
39th Chinese Control Conference (CCC), Shenyang,
China, 2020, pp. 7257-7263, doi:
10.23919/CCC50068.2020.9189600.
[4]. M. Athavle, D. Mudale, U. Shrivastav, and M. Gupta, "Music Recommendation Based on Face Emotion Recognition," Journal of Informatics Electrical and Electronics Engineering, Vol. 02, Iss. 02, S. No. 018, pp. 1-11, 2021, doi: https://doi.org/10.54060/JIEEE/002.02.018.
[5]. S. Metilda Florence and M. Uma, "Emotional Detection
and Music Recommendation System based on User
Facial Expression," presented at the 3rd International
Conference on Advances in Mechanical Engineering
(ICAME 2020), IOP Conf. Series: Materials Science
and Engineering 912 (2020) 062007, IOP Publishing,
doi: 10.1088/1757-899X/912/6/062007.
[6]. Samuvel, D. J., Perumal, B., & Elangovan, M. (2020).
Music recommendation system based on facial emotion
recognition. 3C Tecnología. Glosas de innovación
aplicadas a la pyme. Edición Especial, Marzo 2020,
261-271.
[7]. Nareen Sai, B., Sai. Vamshi, D., Pogakwar, P.,
Seetharama Rao, V., Srinivasulu, Y. "Music
Recommendation System Using Facial Expression
Recognition Using Machine Learning." International
Journal for Research in Applied Science & Engineering
Technology (IJRASET), Volume 10, Issue VI, June
2022. ISSN: 2321-9653. DOI:
https://doi.org/10.22214/ijraset.2022.44396.
