
ISSN 2278-3091
Sanaullah Memon et al., International Journal of Advanced Trends in Computer Science and Engineering, 10(3), May - June 2021, 1700 – 1704
Volume 10, No.3, May - June 2021
International Journal of Advanced Trends in Computer Science and Engineering
Available Online at http://www.warse.org/IJATCSE/static/pdf/file/ijatcse301032021.pdf
https://doi.org/10.30534/ijatcse/2021/301032021

Recognition of Human Face Emotions Detection Using Computer Vision Based Smart Images

Sanaullah Memon1*, Noor Hassan Bhangwar1, Dr. Abdul Hanan Sheikh2, Najamuddin Budh3
1*,1 Department of Information Technology, Shaheed Benazir Bhutto University Shaheed Benazirabad, Campus Naushahro Feroze, Sindh, Pakistan
2 Department of Mathematics and Statistics, Institute of Business Management Karachi, Sindh, Pakistan
3 Department of English, Shaheed Benazir Bhutto University Shaheed Benazirabad, Campus Naushahro Feroze, Sindh, Pakistan

ABSTRACT

Humans share a set of basic and essential emotions that are expressed through facial expressions which appear consistent across people. An algorithm that detects, extracts, and evaluates these facial expressions makes automated identification of human emotion in images possible. The Face detector and recognizer application is a desktop application that recognizes human face emotions from computer vision based smart images. It consists of picture boxes for human face detection and treats one image as the original image. It highlights the face skin colour, locates the high impact area, and identifies the different emotions of the face in an image. The results depend on the separation of the eye and lip movements of the person, which is done by comparing face embedding vectors. The application analyses smart photos with computer vision to identify facial emotions across expressions such as smiling, surprise, and crying.

Key words: Computer Vision, Images, Recognition, Detection, Emotions, Human Face, Machine Learning

1. INTRODUCTION

Computer vision is a field of artificial intelligence that teaches computers to interpret and understand the visual world. It is concerned with the theory behind artificial systems that extract information from images. Using digital pictures from cameras and videos together with deep learning models, machines can accurately recognize and classify objects and then respond to what they "see". All such transformation of information can be performed to achieve some specific objective [1]. One concern of computer vision is the design and development of algorithms that enable computers to improve their performance over time based on data, for example from databases. Learning denotes changes in a system that enable it to perform the same task more efficiently in the future. Machine learning is applicable in both image processing and computer vision, although it is more visible in computer vision: it gives systems the capacity to learn and improve from experience automatically, without being explicitly programmed. A gentle introduction to computer vision is shown in Figure 1.

Over the last decades, human face detection has been investigated widely owing to recent advances in its applications, for example security access control, information retrieval in unstructured media databases, and advanced Human Computer Interaction (HCI). Input pictures can be captured by several devices, such as cameras, and then processed by various computer vision techniques. Face detection is one of the most significant steps in many image processing applications, particularly in face recognition, because the face must be located first in order to recognize it and summarize data about the given frame in real-time applications [2]. Face detection techniques can be classified as feature-based, template-based, or appearance-based. Feature-based techniques search regions of the image for distinctive features, for instance the eyes, nose, and mouth, and then check whether these features lie in a plausible geometrical arrangement [3]. Template-based techniques, for example active appearance models (AAMs), can handle a wide range of pose and expression variability; however, they typically require good initialization close to a real face and are therefore not suitable as fast face detectors. Appearance-based techniques scan small overlapping rectangular patches of the image for likely face candidates, which can then be refined using a cascade of progressively more expensive but more selective detection algorithms [4].

Figure 1: A gentle introduction to computer vision


2. LITERATURE REVIEW

Computer vision is a field that provides systems to process, analyse, and understand pictures. It aims to replicate the capability of human vision by capturing an image. Among the significant obstacles to applications of computer vision are motion estimation and shape estimation and analysis.

Sign language is a computer vision based, complete, multi-faceted language that combines signs formed by hand movements with facial expressions and postures. It maps a natural method of communication onto human signs and gestures, enabling hearing-impaired individuals to communicate among themselves [5]. Smart photography promises quality improvement and enhanced functionality in producing aesthetically appealing pictures. Most existing solutions use a post-processing approach to enhance an image; one developed tool instead adds the novel capability of recommending a good look before the photograph is captured. Given an input face picture, the tool automatically estimates a pose-based style score, finds the most appealing view of the face, and suggests how the pose should be adjusted [6].

Semantic features describe intrinsic attributes of activities. Accordingly, semantics make the recognition task more robust, particularly when similar activities appear visually different because of the variety of ways they are executed. A semantic space includes the most popular semantic features of an activity, namely the human body (pose and poselets), attributes, related objects, and scene context. Techniques exploiting these semantic features have been presented to recognize activities from still pictures and video data, covering four groups of activities: atomic actions, person-to-person interactions, human–object interactions, and group activities [7]. A head-pose recommendation system has been presented that guides a user in how best to pose while taking an image. Given an input face picture, the system finds the most appealing angle of the face and suggests how the pose ought to be adjusted. The recommendations are determined adaptively from the appearance and initial pose of the input face. A user study shows that the recommendation performance of the system is moderately correlated with the level of agreement among the photographers' suggestions [8].

Self-Trackam analyses video frames captured in real time to localize human faces in each frame. The system uses this information to orient the phone, which is mounted on two servos capable of panning horizontally and tilting vertically; the region of interest is automatically positioned in the centre of the frame using controls derived from the face positions [9]. A statistical human shape model describes a body shape with shape parameters; a novel approach automatically estimates these parameters from a single input shape silhouette using semi-supervised learning, by using silhouette features that encode local and global properties robust to noise, pose, and view changes and projecting them to lower-dimensional spaces spanned via multi-view learning with canonical correlation analysis [10].

Today machines are an important audience for any image. Rettberg looks at how facial recognition algorithms examine our images for surveillance, confirmation of identity, and better-tailored business services, and relates this to understandings of machine vision as post-optical and non-representational [11].

3. PROBLEM STATEMENT

Recognition of facial expressions in the real world is a long-standing problem, which makes the feature extraction process more complex. There are several major problems, such as face detection, face recognition, and image transformation; one of them is facial emotion detection, which interprets facial expressions and identifies pictures automatically across different expression styles, such as smile, surprise, sad, cry, and normal, by using computer vision based smart images.

4. AIM AND OBJECTIVES

The aim of the project is to investigate facial expressions at the time of smiling, crying, surprise, and normal states by using computer vision technology based on smart images.
Objectives of the study are:

 To gather background knowledge from the literature on image detection and image recognition.
 To study image processing algorithms such as the facial feature recognition algorithm.
 To calculate the facial expression results during the project analysis.
 To evaluate the reasons and factors behind the measured results on the basis of the results of the selected published algorithms.

5. RESEARCH METHODOLOGY

The Face detector and recognizer application is a desktop application used for human face detection and recognition. The research methodology is experimental during the application development phase. The application takes an original image and identifies the high impact area for face detection. It extracts facial features, detects emotions, and analyses the expressions in an image. After that it compiles all the expressions into the desired output. The application is developed in Visual Studio using the OpenCV library.

 Integrate the OpenCV library with a programming language such as C#, Java, or Python.
 Perform various operations such as face recognition and facial expression analysis.
 Implement image capturing with facial expressions such as smiling, weeping, and crying.
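As a rough illustration of the "high impact area" step in this methodology, the following Python/NumPy sketch locates a skin-coloured region in an image and returns its bounding box. This is an illustrative stand-in, not the application's actual C# implementation: the RGB thresholds in `skin_mask` are a common heuristic chosen for demonstration, and `high_impact_area` is a name of our own invention.

```python
import numpy as np

def skin_mask(rgb):
    # Crude RGB skin-colour heuristic (an illustrative assumption,
    # not the application's actual rule): red dominant, green and
    # blue above minimal levels.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def high_impact_area(rgb):
    # Bounding box (top, left, bottom, right) of the skin-coloured
    # pixels, used here as the "high impact area" for face detection.
    mask = skin_mask(rgb)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1

# Synthetic 100x100 RGB image: grey background, one skin-coloured patch.
img = np.full((100, 100, 3), 60, dtype=np.uint8)
img[20:60, 35:65] = (200, 140, 110)

box = high_impact_area(img)  # -> (20, 35, 60, 65)
```

A real pipeline would hand this region to a face detector rather than treat it as the face itself; the sketch only shows how a colour cue can narrow the search area.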

 Test the system and release the deliverables in a beta version.

Figure 2: Methodology for computer vision based smart images

A Viola-Jones cascade face detector is used first to extract the overall face from the image. Using simple features known as Haar-like features, the Viola-Jones detection framework attempts to recognize the features of a face. The procedure involves passing feature boxes over an image and calculating the difference between the summed pixel values of adjacent regions. The difference is then compared to a threshold, which determines whether or not an object has been detected. This requires thresholds that have been pre-trained for the various feature boxes and features. The idea is that most faces, and the features inside them, satisfy general constraints, so specific feature boxes for facial features are used. The Haar-like feature method is extremely fast because the integral image of the input can be computed in a single pass and stored as a summed-area table. Then, using just four values, the sum of the pixels in any rectangle of the original image can be calculated. This enables several passes of various features to be completed quickly. A selection of features is applied in order to detect certain aspects of a face, if one exists, and the face is declared detected if enough thresholds are reached. Once found, the faces are extracted and resized to a predetermined standard dimension. The average image over all of the training faces is then determined; the entire training set consists of faces that convey the basic emotions. The mean image is subtracted from the entire training collection of images, and the scatter matrix S is generated from the mean-subtracted training set. The goal is to find a change of basis that allows the face data to be expressed in fewer dimensions, so that most of the data can be retained as a linear combination of a much smaller set of basis vectors. PCA does this by finding the directions of maximum variance.

6. TOOLS AND TECHNOLOGIES

The following tools and technologies are used during the development of the project.

C#: C# is a general purpose programming language that incorporates strong scripting, declarative, procedural, abstract, object-oriented, and component-oriented disciplines. It was developed within the context of the .NET project by a Microsoft team headed by Anders Hejlsberg and was accepted by ECMA and ISO.

SQL Server: Microsoft SQL Server is an RDBMS that serves a wide range of enterprise IT database management, business intelligence, and analytics applications. Along with Oracle Database and IBM's DB2, Microsoft SQL Server is one of the three leading database systems in the industry.

OpenCV Library: OpenCV is an open source computer vision and machine learning software library. OpenCV was designed to provide a shared platform for the implementation of computer vision and to promote the use of machine perception in consumer products. Being BSD-licensed software, OpenCV makes the use and alteration of the code simple for businesses.

7. EXPERIMENTS AND RESULTS

In this facial recognition project, computer vision based smart images of a person are captured with different facial expressions to identify the facial emotions. This is done by comparing face embedding vectors. Emotion detection classifies the emotions on the face of a person into distinct categories such as happy, angry, sad, normal, surprise, disgust, or fear. AForge.NET is an open source C# framework developed for developers and researchers in computer vision and artificial intelligence: neural networks, genetic algorithms, fuzzy logic, machine learning, robotics, and so on. The framework consists of a collection of libraries and sample applications which demonstrate their features:
 AForge.Imaging - library with image processing routines and filters;
 AForge.Vision - computer vision library;
 AForge.Video - set of libraries for video processing;
 AForge.Neuro - neural networks computation library;
 AForge.Genetic - evolution programming library;
 AForge.Fuzzy - fuzzy computations library;
 AForge.Robotics - library providing support of some robotics kits;
 AForge.MachineLearning - machine learning library.
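The "just four values" property of the summed-area table is easy to demonstrate. The following NumPy sketch (a Python illustration rather than the project's C# code) builds an integral image with one row and column of zero padding and recovers the sum of any rectangle from four table lookups:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(np.int64)  # toy greyscale image

# Integral image with zero padding, so sat[y, x] = sum of img[:y, :x].
sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] from just four table lookups,
    # regardless of the rectangle's size.
    return int(sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0])

# A Haar-like feature is then a difference of two such rectangle sums,
# e.g. left half minus right half of the image.
feature = rect_sum(0, 0, 8, 4) - rect_sum(0, 4, 8, 8)
```

Because every rectangle sum costs the same four lookups, evaluating thousands of Haar-like features per window stays cheap, which is what makes the cascade fast.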

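The mean-subtraction and change-of-basis steps described for the training faces can be sketched as follows. This NumPy illustration uses random stand-in data in place of real face images, and the names `eigenfaces` and `retained` are ours, not the application's; the SVD route shown is equivalent to eigen-decomposing the scatter matrix S.

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 256))  # 20 stand-in "faces", 16x16 flattened

mean_face = faces.mean(axis=0)
centered = faces - mean_face        # subtract the mean image

# Principal components via SVD of the centered data; equivalent to an
# eigen-decomposition of the scatter matrix S = centered.T @ centered.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenfaces = Vt[:k]                 # top-k basis vectors ("eigenfaces")

# Each face becomes a k-dimensional coefficient vector; the fraction of
# variance retained is the ratio of the kept squared singular values.
coeffs = centered @ eigenfaces.T
retained = float((s[:k] ** 2).sum() / (s ** 2).sum())
```

Classifying an expression then reduces to comparing k-dimensional coefficient vectors instead of full images, which is the dimensionality reduction the text describes.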

The face recognition application is analysed through human face emotions. It consists of three human face detection picture boxes. The first picture box holds the image treated as the original image; the second picture box highlights the face skin colour and finds the high impact area of the image; and the third picture box searches the high impact face area and identifies a face in the image. Figure 3 shows the main screen of human face detection.

Figure 3: Human Face Detection Main Screen

Browsing plays an important role in computer vision and machine learning. An image browser is a piece of software designed specifically to enable viewing of digital image files. Modern digital photographers often have catalogues of tens of thousands of digital photographs, and an image browser can be used to find, organize, and delete image files quickly. An image browser allows images to be organized on the fly using the information stored in the metadata of the image file. It makes it possible to find specific images, or types of images, extremely quickly, a much better solution than looking through thousands of images one at a time. The main screen of the project shows the overall functionality for an image. First, the image is browsed into a picture box. After browsing, the other picture boxes cover the high impact area of the image and identify the human face. The result for the image depends upon the eye and lip movements of the person: the application separates the eye and lip movements and shows a result in terms of smile, cry, surprise, etc. A smile typically expresses an emotional state characterized by feelings of happiness, pleasure, accomplishment, and well-being. Figure 4 shows the result when a person is smiling.

Figure 4: Smiling Human Face Expression

Surprise is one of the seven universal emotions, and shows up when we perceive sudden and unexpected sounds or gestures. Its role as the briefest of the universal emotions is to concentrate our attention on deciding what is going on and whether or not it is dangerous. Figure 5 shows the result when a person is surprised.

Figure 5: Surprising Human Face Expression

Sadness is a form of emotion characterized by feelings of anger, hopelessness, disinterest, and gloom. Sadness is a universal emotion that everyone goes through at some point in their lives. In certain cases, people can experience persistent and severe sadness, which may lead to depression. Figure 6 shows the result when a person is sad.

Figure 6: Sad Human Face Expression

8. TESTING AND EVALUATION

Primary data: Primary data was obtained firsthand, resulting in a collection of English-language questions related to the application.

Secondary data: Secondary data was used for the literature review, which situates our research work in the light of publications by various scholars.

Sampling: The data was collected using a random sampling method from 160 respondents, including both teachers and students of Shaheed Benazir Bhutto University, Campus Naushahro Feroze.
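The comparison of face embedding vectors mentioned earlier can be sketched as a nearest-reference lookup. In this NumPy illustration the random 128-dimensional reference vectors are stand-ins for embeddings that a trained face model would produce; `classify` and the expression names are ours, chosen for the sketch.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
# Hypothetical 128-d reference embeddings, one per expression.
references = {name: rng.normal(size=128)
              for name in ["smile", "surprise", "sad", "normal"]}

def classify(embedding):
    # Label an input embedding with the most similar reference expression.
    return max(references,
               key=lambda name: cosine_similarity(embedding, references[name]))

# A probe close to the "smile" reference is labelled "smile".
probe = references["smile"] + rng.normal(scale=0.05, size=128)
label = classify(probe)  # -> "smile"
```

In a real system the references would be averaged embeddings of labelled expression images, and a distance threshold would reject faces that match no reference well.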


Statistical methods: At the start, we checked the reliability of the instrument using Cronbach's alpha in SPSS 26.

8.1 VALIDITY/RELIABILITY ANALYSIS

Peer review and expert opinion provided validity for the research tool. After testing, several changes were made to the items of the analysis instrument. To test the effectiveness of the tool, the final draft of the questionnaire shown in Table 1 was used.

Table 1: Questionnaire for application

Qno: 1 Is this application reliable and easily accessible?
Qno: 2 Which feature do you think is missing and should be added to this project?
Qno: 3 Is this application providing accurate and optimized results?
Qno: 4 Do you think this application is efficient in the marketplace?
Qno: 5 Does this application cover the needs that you want to fulfil?
Qno: 6 Is this application effective, efficient, and highly scoped in modern world technology?
Qno: 7 Does this application contain new ideas and contributions beyond previous studies?
Qno: 8 Are the factors behind the measured results evaluated on the basis of the results of the selected published algorithms?

Total Cronbach's alpha: α = .78

9. CONCLUSION

Computer vision provides the capability to recognize an item as a particular object, such as a human being. In this study, it has been found that the face detector and recognizer application works entirely on computer vision based smart images. It identifies the high impact area of the face that contains the facial expressions in images, such as happy, sad, and surprise. It concludes that this application provides an effective approach for detection and recognition of expressions from images.

REFERENCES

[1] Bradski, Gary, and Adrian Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., 2008. https://dl.acm.org/doi/book/10.5555/1461412
[2] Sharifara, Ali. Enhanced Face Detection Framework Based on Skin Color and False Alarm Rejection. Diss. Universiti Teknologi Malaysia, 2015. http://eprints.utm.my/id/eprint/77598
[3] Yang, Ming-Hsuan, David J. Kriegman, and Narendra Ahuja. "Detecting faces in images: A survey." IEEE Transactions on Pattern Analysis and Machine Intelligence 24.1 (2002): 34-58. https://ieeexplore.ieee.org/document/982883
[4] Szeliski, Richard. Computer Vision: Algorithms and Applications. Springer Science & Business Media, 2010. https://link.springer.com/book/10.1007/978-1-84882-935-0
[5] Rao, G. Ananth, and P. V. V. Kishore. "Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera." International Journal of Electrical & Computer Engineering (2088-8708) 6.5 (2016). http://ijece.iaescore.com/index.php/IJECE/article/view/1144
[6] Hu, Chuan-Shen, et al. "Virtual portraitist: An intelligent tool for taking well-posed selfies." ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 15.1s (2019): 1-17. https://dl.acm.org/doi/10.1145/3288760
[7] Ziaeefard, M., and R. Bergevin. "Semantic human activity recognition: A literature review." Pattern Recognition 48.8 (2015): 2329-2345. https://www.sciencedirect.com/science/article/abs/pii/S0031320315000953
[8] Hsieh, Yi-Tsung, and Mei-Chen Yeh. "Head pose recommendation for taking good selfies." Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes. 2017. https://dl.acm.org/doi/pdf/10.1145/3132515.3132518
[9] binti Ariffin, Mariatul Kiptiah. "Self-Trackam." International Conference on Computational Intelligence in Information System. Springer, Cham, 2018. https://link.springer.com/chapter/10.1007/978-3-030-03302-6_20
[10] Dibra, Endri, et al. "Shape from selfies: Human body shape estimation using CCA regression forests." European Conference on Computer Vision. Springer, Cham, 2016. https://link.springer.com/chapter/10.1007/978-3-319-46493-0_6
[11] Rettberg, Jill Walker. "Biometric citizens: Adapting our selfies to machine vision." Selfie Citizenship. Palgrave Macmillan, Cham, 2017. 89-96. https://link.springer.com/chapter/10.1007/978-3-319-45270-8_10