
Visvesvaraya Technological University, Belagavi

Rajeev Institute of Technology


Department of Computer Science and Engineering
Phase-1 Project Presentation on

“Music Player System”


Submitted by

AMULYA H V [4RA20CS007]
ANUSHA K M [4RA20CS009]
BHOOMIKA S J [4RA20CS014]
HARSHITHA H C [4RA20CS035]

Under the Guidance of


Mrs. Swetha B R
Assistant Professor
Department of Computer Science and Engineering
RIT, Hassan
ABSTRACT
This project proposes an intelligent agent that segregates songs and plays
them according to the user's current mood. Facial emotion recognition, a form of
image processing, converts the movements of a person's face into a digital
representation using various image processing techniques and identifies the emotion
the face expresses. The user's music collection is first clustered by the emotion
each song conveys, calculated from both its lyrics and its melody. Whenever the
user wishes to generate a mood-based playlist, they take a picture of themselves
at that instant; the image is subjected to face detection and emotion recognition,
and the playlist that best matches the recognised emotion is recommended to the user.
INTRODUCTION

• Emotion recognition is a feature of artificial intelligence that is increasingly
used to automate processes that are relatively exhausting to perform manually.
The human face is an essential part of the human body, especially when it comes
to extracting a person's emotional state and behaviour in a given situation.
Recognising a person's mood or state of mind from the emotions they show is an
important part of making systematic decisions best suited to the person in
question, across a diversity of applications.
• In day-to-day life, every person faces many problems, and music is one of the
best helpers for the stress, anxiety, tension, and worry they encounter. Music
plays a vital role in building up and enhancing the life of every individual,
as it is an important medium of entertainment for music lovers and listeners.
• In today's world, with ever-increasing advances in technology and multimedia,
several music players have been developed with functions such as fast reverse,
fast forward, variable playback speed (speeding up or slowing down the audio),
streaming playback, volume modulation, and genre classification. Although these
functions satisfy the user's basic requirements, the user still has to manually
scroll through the playlist and choose songs based on their current mood and
behaviour.
LITERATURE SURVEY

[1] R. Ramanathan et al. propose an emotion-based music player whose first
component is emotion recognition. The software captures the emotion of the person
in the image taken by the webcam using various image processing and segmentation
techniques, and extracts features from the person's face.

[2] Charles Darwin was the first scientist to recognise that facial expression is one
of the most powerful and immediate means for human beings to communicate their
emotions, intentions, and opinions to each other.

[3] Rosalind Picard (1997) describes why emotions are important to the
computing community. There are two aspects to affective computing: giving the
computer the ability to detect emotions and giving it the ability to express
emotions.
[4] Ligang Zhang and Dian T (2011) developed a facial emotion recognition (FER)
system. Using a dynamic 3D Gabor feature approach, they obtained the highest
correct recognition rate (CRR) on the JAFFE database, and their FER is among the
top performers on the Cohn-Kanade (CK) database with the same approach. They
attested to the effectiveness of the approach through recognition performance,
computational time, and comparison with the state of the art.

[5] R. A. Patil, Vineet Sahula, and A. S. Mandal (CEERI Pilani), working on
expression recognition, divide the problem into three subproblems: face
detection, feature extraction, and facial expression classification. Most
existing systems assume that the presence of a face in the scene is ensured,
dealing only with feature extraction and classification on the assumption that
the face has already been detected.
PROBLEM STATEMENT

▪ Develop a system that presents a cross-platform music player which recommends
music based on the real-time mood of the user, captured through a web camera,
using machine learning algorithms.
OBJECTIVE OF THE PROJECT
• Revolutionise the way users interact with and experience music by developing an
intelligent and adaptive platform that analyses and understands the emotional
states of users.

• Through advanced emotion recognition algorithms, user feedback, and behavioural
analysis, the system aims to accurately deliver a personalised music playlist
that resonates with the user's current emotional state. This innovative music
player seeks to enhance user engagement, satisfaction, and well-being by
fostering a deeper emotional connection between users and their music,
ultimately creating a more immersive and enjoyable listening experience.

• The system will prioritise user privacy, ensuring that emotional data is handled
with the utmost sensitivity and security, promoting trust and confidence among
users.
METHODOLOGY
➢ Face Capturing: We use Open Source Computer Vision (OpenCV), a library of
programming functions aimed mainly at real-time computer vision, which integrates
easily with other libraries such as NumPy. When the process starts, the camera
stream is accessed and about 10 photos are taken for further processing and
emotion recognition; an algorithm then categorises these photos.

➢ Face Detection: The principal component analysis (PCA) method is used to reduce
the dimensionality of the face space. Linear discriminant analysis (LDA) is then
applied to obtain image feature characteristics. Faces are matched by minimal
Euclidean distance; this algorithm aids image recognition processing and helps
categorise user expressions that suggest emotions.
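A minimal sketch of the PCA, LDA, and minimal-Euclidean pipeline, using scikit-learn as an assumed stand-in for whatever implementation the project uses (the function names and `n_pca` parameter are hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_fisher_space(face_vectors, labels, n_pca=50):
    """Reduce flattened face images with PCA, then project with LDA."""
    pca = PCA(n_components=min(n_pca, len(face_vectors) - 1))
    reduced = pca.fit_transform(face_vectors)
    lda = LinearDiscriminantAnalysis()
    features = lda.fit_transform(reduced, labels)
    return pca, lda, features

def match_expression(pca, lda, features, labels, query_vector):
    """Label a query face by minimal Euclidean distance in LDA space."""
    q = lda.transform(pca.transform(query_vector.reshape(1, -1)))
    dists = np.linalg.norm(features - q, axis=1)
    return labels[int(np.argmin(dists))]
```

PCA before LDA is the usual ordering because LDA needs more samples than dimensions to estimate its scatter matrices reliably.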
➢ Facial Emotion Recognition: A Python script fetches images containing faces
along with their emotional descriptor values. The images are contrast-enhanced
by contrast-limited adaptive histogram equalisation (CLAHE) and converted to
grayscale to maintain uniformity and increase the effectiveness of the
classifiers. A cascade classifier trained on face images is used for face
detection, with the image split into fixed-size windows. Fisher's face
recognition method proves efficient because it copes better with additional
features such as spectacles and facial hair, and it is relatively invariant to
lighting.

➢ Feature Extraction: The features considered while detecting emotion can be
static, dynamic, point-based geometric, or region-based appearance features. To
obtain real-time performance and reduce time complexity, only the eyes and mouth
are considered for expression recognition; the combination of these two features
is adequate to convey emotions accurately. Finally, a point detection algorithm
is used to identify and segregate feature points on the face.
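One simple way to restrict attention to the eyes and mouth is to carve heuristic sub-regions out of each detected face rectangle. The proportions below are illustrative assumptions, not the project's actual point-detection algorithm:

```python
def eye_mouth_rois(face_rect):
    """Heuristic eye and mouth sub-regions of a detected face box.

    face_rect is (x, y, w, h); the band proportions are illustrative.
    """
    x, y, w, h = face_rect
    eyes = (x, y + h // 5, w, h // 4)                      # upper band
    mouth = (x + w // 4, y + 2 * h // 3, w // 2, h // 4)   # lower-centre band
    return eyes, mouth
```

These rectangles would then be cropped from the enhanced grayscale image before point detection runs on each region separately.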
➢ Eyebrow Extraction: Two rectangular regions in the edge image lying directly
above each of the eye regions are selected as the eyebrow regions. The Sobel
method is used to obtain the edge image, as it detects more edges than the
Roberts method. The resulting edge images are then dilated and their holes filled.

➢ Mouth Extraction: The points in the top region, bottom region, and the right
and left corners of the mouth are extracted, and the centroid of the mouth is
calculated.
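Given the four extracted mouth points, the centroid is simply their mean; a small helper (with hypothetical names) might also report the mouth's opening and width:

```python
import numpy as np

def mouth_features(top, bottom, left, right):
    """Centroid plus vertical opening and width from four mouth points.

    Each argument is an (x, y) point; names and outputs are illustrative.
    """
    pts = np.array([top, bottom, left, right], dtype=float)
    centroid = pts.mean(axis=0)
    opening = abs(bottom[1] - top[1])   # vertical mouth opening
    width = abs(right[0] - left[0])     # horizontal mouth width
    return centroid, opening, width
```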

➢ Music Recommendation: The emotion detected by the image processing stage is
given as input to the music clusters to select a specific cluster. To avoid
interfacing with a music app or module, which would require extra installation,
operating-system support is used to play the music file. The playlist selected
from the cluster is played by creating a forked subprocess that returns control
to the Python script when it finishes, so that other songs can be played. This
lets the programme play music on any system, regardless of its music player.
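The OS-level playback idea can be sketched as follows; the per-platform commands are standard file launchers, though the exact invocation used in the project is not specified:

```python
import subprocess
import sys

def build_play_command(path, platform=sys.platform):
    """Pick the OS command that opens a file with its default handler."""
    if platform.startswith("win"):
        return ["cmd", "/c", "start", "/wait", "", path]   # Windows
    if platform == "darwin":
        return ["open", "-W", path]                        # macOS, wait for exit
    return ["xdg-open", path]                              # Linux desktops

def play_playlist(paths):
    """Play each song in a child process; control returns to the script
    when the player exits so the next song can start."""
    for path in paths:
        subprocess.run(build_play_command(path))
```

Note that `xdg-open` typically returns immediately rather than waiting for the player, so sequential playback on Linux may need an explicit player command instead.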
Flow Chart:
Fig: Gray-scale image conversion
Fig: Video animation of face recognition
Software and Hardware Components

Software Components:
- Operating system: Windows 10
- Programming language: Python
- Framework: Python IDLE
- Tools: PyCharm

Minimum Hardware Requirements:
- Processor: Intel i5, 2.4 GHz
- Hard disk: 40 GB
- RAM: 4 GB or above
REFERENCES
[1] R. Ramanathan, R. Kumaran, R. Ram Rohan, R. Gupta and V. Prabhu, "An
Intelligent Music Player Based on Emotion Recognition," 2017 2nd International
Conference on Computational.

[2] S. Deebika, K. A. Indira and Jesline, "A Machine Learning Based Music Player by
Detecting Emotions," 2019 Fifth International Conference on Science Technology
Engineering and Mathematics (ICONSTEM), Chennai, India, 2019

[3] S. G. Kamble and A. H. Kulkarni, "Facial expression based music player," 2016
International Conference on Advances in Computing, Communications and Informatics
(ICACCI), Jaipur, India, 2016

[4] C. Darwin, The Expression of the Emotions in Man and Animals, 3rd edn (ed.
P. Ekman), London: Harper Collins; New York: Oxford University Press, 1998.
