Diabetic Retinopathy
AMULYA H V [4RA20CS007]
ANUSHA K M [4RA20CS009]
BHOOMIKA S J [4RA20CS014]
HARSHITHA H C [4RA20CS035]
[1] R. Ramanathan et al. proposed that the first part of an emotion-based music
player system is emotion recognition. The software captures the emotion of a person
in the image captured by the webcam using various image processing and
segmentation techniques. It extracts features from the face of the person.
[2] Charles Darwin was the first scientist to recognise that facial expression is one
of the most powerful and immediate means for human beings to communicate their
emotions, intentions, and opinions to each other.
[3] Rosalind Picard (1997) describes why emotions are important to the
computing community. There are two aspects to affective computing: giving the
computer the ability to detect emotions and giving it the ability to express
emotions.
[4] Ligang Zhang and Dian Tjondronegoro (2011) developed a facial emotion
recognition (FER) system. They used a dynamic 3D Gabor feature approach and
obtained the highest correct recognition rate (CRR) on the JAFFE database, and
their FER system is among the top performers on the Cohn-Kanade (CK) database
using the same approach. They attested to the effectiveness of the proposed
approach through recognition performance, computational time, and comparison
with the state of the art.
[5] R. A. Patil, Vineet Sahula, and A. S. Mandal (CEERI Pilani) worked on expression
recognition, dividing the problem into three subproblems: face detection, feature
extraction, and facial expression classification. Most existing systems assume
that the presence of a face in the scene is ensured; they deal only with feature
extraction and classification, assuming that the face has already been detected.
PROBLEM STATEMENT
• The system will prioritize user privacy, ensuring that emotional data is handled
with the utmost sensitivity and security, promoting trust and confidence among
users.
METHODOLOGY
➢ Face Capturing We use OpenCV (Open Source Computer Vision), a library of
programming functions aimed mainly at real-time computer vision, which also makes
it easy to combine with other libraries such as NumPy. When the first process
starts, the stream from the camera is accessed and about 10 photos are taken for
further processing and emotion recognition. An algorithm is then used to
categorise the photos.
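A minimal sketch of this capture step, assuming a frame source with an OpenCV-style `read()` interface. A stub generator stands in for the webcam here so the sketch is self-contained; with OpenCV you would pass `cv2.VideoCapture(0).read` instead.

```python
import numpy as np

def capture_frames(read_frame, n_frames=10):
    """Grab up to n_frames from a frame source.

    read_frame is any zero-argument callable returning (ok, frame),
    mirroring cv2.VideoCapture.read().
    """
    frames = []
    for _ in range(n_frames):
        ok, frame = read_frame()
        if not ok:  # stop early if the stream ends
            break
        frames.append(frame)
    return frames

# Stub source standing in for a webcam: yields blank 480x640 grayscale frames.
def stub_source():
    return True, np.zeros((480, 640), dtype=np.uint8)

frames = capture_frames(stub_source, n_frames=10)
print(len(frames))  # 10 frames queued for emotion recognition
```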
➢ Face Detection The principal component analysis (PCA) method is used to reduce
the dimensionality of the face space. Following that, the linear discriminant
analysis (LDA) method is used to obtain image feature characteristics. Faces are
then matched using the minimal Euclidean distance; this algorithm aids in image
recognition and helps categorise user expressions that suggest emotions.
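The PCA projection and minimal-Euclidean-distance matching can be sketched on toy data as follows. The random vectors stand in for flattened face images, the two classes stand in for two expressions, and the LDA rotation is omitted for brevity; all names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: two classes of 10 samples in a 64-dim space
# (a stand-in for flattened face crops).
class0 = rng.normal(0.0, 1.0, (10, 64))
class1 = rng.normal(3.0, 1.0, (10, 64))
X = np.vstack([class0, class1])
y = np.array([0] * 10 + [1] * 10)

# PCA: centre the data and project onto the top principal components.
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:5]          # keep 5 dimensions of "face space"
X_red = Xc @ components.T

# (An LDA step would further rotate X_red toward class-discriminative
# axes; it is omitted to keep the sketch short.)

# Minimal-Euclidean-distance matching: assign a probe face to the class
# whose mean is nearest in the reduced space.
means = np.array([X_red[y == c].mean(axis=0) for c in (0, 1)])

def classify(face):
    probe = (face - mean) @ components.T
    return int(np.argmin(np.linalg.norm(means - probe, axis=1)))
```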
➢ Facial Emotion Recognition A Python script is used to fetch images containing
faces along with their emotional descriptor values. The images are contrast-enhanced
by contrast-limited adaptive histogram equalisation (CLAHE) and converted to
grayscale in order to maintain uniformity and increase the effectiveness of the
classifiers. A cascade classifier, trained with face images, is used for face
detection, where the image is split into fixed-size windows. Fisher's face
recognition method proves efficient, as it copes better with additional features
such as spectacles and facial hair and is relatively invariant to lighting.
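The preprocessing step can be illustrated with a simplified, NumPy-only stand-in: global histogram equalisation rather than the contrast-limited adaptive variant (in practice `cv2.cvtColor` and `cv2.createCLAHE` would do this tile-by-tile with a clip limit).

```python
import numpy as np

def to_grayscale(rgb):
    # Standard luminance weights; cv2.cvtColor would be the OpenCV route.
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def equalize(gray):
    # Global histogram equalisation: remap intensities through the
    # normalised cumulative histogram. CLAHE applies the same idea per
    # tile with a clip limit to avoid over-amplifying noise.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[gray].astype(np.uint8)

# Low-contrast synthetic image: values squeezed into [100, 120].
img = np.random.default_rng(1).integers(100, 121, (64, 64, 3)).astype(np.uint8)
eq = equalize(to_grayscale(img))
print(eq.min(), eq.max())  # stretched toward the full 0-255 range
```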
➢ Feature Extraction The features considered while detecting emotion can be
static, dynamic, point-based geometric, or region-based appearance. To obtain
real-time performance and reduce time complexity, only the eyes and mouth are
considered for expression recognition. The combination of these two features is
adequate to convey emotions accurately. Finally, a point detection algorithm is
used to identify and segregate feature points on the face.
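Once the eye and mouth points are located, simple geometric descriptors can be derived from them. The coordinates and feature names below are hypothetical, purely to show the kind of point-based geometric features the text refers to.

```python
import numpy as np

# Hypothetical landmark coordinates in pixels, as supplied by the
# point-detection step.
left_eye_top, left_eye_bottom = np.array([30.0, 40.0]), np.array([30.0, 46.0])
mouth_left, mouth_right = np.array([45.0, 80.0]), np.array([75.0, 80.0])
mouth_top, mouth_bottom = np.array([60.0, 74.0]), np.array([60.0, 88.0])

# Eye openness: vertical eyelid distance.
eye_openness = np.linalg.norm(left_eye_bottom - left_eye_top)

# Mouth aspect ratio: height over width (larger when the mouth is open).
mouth_aspect = (np.linalg.norm(mouth_bottom - mouth_top)
                / np.linalg.norm(mouth_right - mouth_left))

features = np.array([eye_openness, mouth_aspect])
```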
➢ Eyebrow Extraction Two rectangular regions in the edge image, lying directly
above each of the eye regions, are selected as the eyebrow regions. The Sobel
method is used to obtain the edge image, as it detects more edges than Roberts'
method. The obtained edge images are then dilated and the holes are filled.
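A NumPy-only sketch of the Sobel edge and dilation steps on a synthetic eyebrow-like bar (in practice `cv2.Sobel`, `cv2.dilate`, and `scipy.ndimage.binary_fill_holes` would be used; the threshold and image below are illustrative):

```python
import numpy as np

def sobel_edges(img):
    # 3x3 Sobel kernels; the gradient magnitude marks edges. (The Roberts
    # cross uses 2x2 kernels and tends to find fewer edge pixels.)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def dilate(mask):
    # 3x3 binary dilation: OR each pixel with its 8 neighbours.
    out = mask.copy()
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= np.roll(np.roll(mask, di, 0), dj, 1)
    return out

# Synthetic eyebrow region: a bright horizontal bar on a dark background.
img = np.zeros((20, 40))
img[8:11, 5:35] = 1.0
edges = sobel_edges(img) > 1.0
mask = dilate(edges)  # hole filling would follow here
```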
➢ Mouth Extraction The points in the top region, bottom region, and the right and
left corners of the mouth are extracted, and the centroid of the mouth is
calculated.
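The centroid computation is simply the mean of the extracted point coordinates; the landmark values below are hypothetical.

```python
import numpy as np

# Hypothetical extracted mouth landmarks as (x, y) pixel coordinates.
mouth_points = np.array([
    [60.0, 40.0],  # top
    [60.0, 52.0],  # bottom
    [45.0, 46.0],  # left corner
    [75.0, 46.0],  # right corner
])

# Centroid: coordinate-wise mean of the points.
centroid = mouth_points.mean(axis=0)
print(centroid)  # [60. 46.]
```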
➢ Music Recommendation The emotion detected from the image processing is given
as input to the clusters of music to select a specific cluster. To avoid
interfacing with a music app or music module, which would involve extra
installation, support from the operating system is used instead to play the music
file. The playlist selected by the clusters is played by creating a forked
subprocess that returns control to the Python script on completion of its
execution so that other songs can be played. This makes the programme play music
on any system, regardless of its music player.
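A sketch of this playback step using the standard-library `subprocess` module. The opener names are the usual per-platform choices, but which player the OS actually launches is system-dependent; the demonstration runs a harmless child process instead of a real song.

```python
import subprocess
import sys

def play_with_os(path):
    """Hand a music file to the OS default opener and wait for it to exit.

    Typical openers: 'xdg-open' (Linux) and 'open' (macOS); on Windows
    'start' is a cmd built-in and would need shell=True instead.
    subprocess.run blocks until the child exits, so control returns to
    the script and the next song can be queued.
    """
    opener = {"linux": "xdg-open", "darwin": "open"}.get(sys.platform, "xdg-open")
    return subprocess.run([opener, path]).returncode

# Demonstration with a harmless child process standing in for a song:
rc = subprocess.run([sys.executable, "-c", "print('now playing')"]).returncode
print(rc)  # 0 on success
```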
Flow Chart:
Fig: Gray-Scale Image Conversion
Video Animation of Face Recognition
Software and Hardware Components
[2] S. Deebika, K. A. Indira and Jesline, "A Machine Learning Based Music Player by
Detecting Emotions," 2019 Fifth International Conference on Science Technology
Engineering and Mathematics (ICONSTEM), Chennai, India, 2019.
[3] S. G. Kamble and A. H. Kulkarni, "Facial expression based music player," 2016
International Conference on Advances in Computing, Communications and Informatics
(ICACCI), Jaipur, India, 2016.
[4] C. Darwin, The Expression of the Emotions in Man and Animals, 3rd edn (ed.
P. Ekman). London: HarperCollins; New York: Oxford University Press, 1998.