Emotion Detection For Hotel Industry Feedback System Using Machine Learning
https://doi.org/10.22214/ijraset.2023.51108
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 11 Issue V May 2023- Available at www.ijraset.com
Abstract: This study investigates the extent to which hotel employees are conscious of their abilities to identify facial expressions
and emotions conveyed by customers while interacting with those customers. Facial expressions and emotions may have
significant implications when interpreted as a sign of satisfaction and the quality of the service that was provided. According to
the findings, a sizeable portion of hotel staff members do not have an accurate understanding of their abilities to recognise facial
expressions and emotions, and a large number of them tend to exaggerate their capabilities. This study has
significant ramifications for employee task engagement (such as effort and concentration), employee self-development
and training, and employee willingness to take risks in their respective service encounters.
Keywords: Emotion recognition, facial expressions.
I. INTRODUCTION
Feedback is a vital part of the process of self-improvement in any service sector. This study focuses on the hotel and restaurant
service sector. In the conventional model, feedback is collected by means of questionnaires (whether manual or digital) and through
online screens or kiosks that display various messages related to the customer's overall experience of the service.
This proposed application can be technologically advanced by incorporating digital video cameras to
capture the mood or emotion of the customers at various key strategic places within the vicinity of the particular hotel or restaurant
outlet and by using artificial intelligence and machine learning to extract frames from the continuous video stream to record the
exact emotion expressed by the customer in reaction to the various services offered by the soliciting
organization.
These frames, which include emotions displayed by the consumer, are passed through various established human emotion detection
algorithms to extract the exact feeling felt by the customer in response to the efficiency of the service provided.
For instance, when a user passes in front of such a camera-and-display device, the device may display a message such as "Please
rate the ambience of the hotel with respect to your experience" or "Please indicate through a gesture how you felt about the food".
When the response is recorded, based on the intensity of the feeling expressed by the user, the algorithm may classify the response as
"excellent", "outstanding", "happy", "not satisfactory", etc.
This system thus delivers the customer feedback to the service-provider organization, which can then use the information to take
corrective measures to improve the quality of service to the end customer.
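As a rough illustration of this capture-and-classify loop, the following Python sketch (using OpenCV and a hypothetical pretrained emotion classifier) shows how a kiosk might grab a frame, locate a face, run an emotion model, and map the result to a feedback label. The model file "emotion_cnn.h5", the label set, the 48x48 input size, and the emotion-to-rating mapping are assumptions for illustration, not components defined by this paper.

```python
# Sketch of the proposed kiosk loop: capture a frame, detect a face, classify the
# emotion, and map it to a feedback label. "emotion_cnn.h5" and the label list are
# assumptions for illustration only.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]   # assumed label order
RATING = {"happy": "excellent", "surprise": "outstanding",
          "neutral": "satisfactory", "sad": "not satisfactory",
          "angry": "not satisfactory"}

model = load_model("emotion_cnn.h5")                           # hypothetical pretrained model
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                                      # kiosk camera
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0   # assumed model input size
        probs = model.predict(roi.reshape(1, 48, 48, 1), verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probs))]
        print("Detected emotion:", emotion, "-> feedback:", RATING[emotion])
```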
3) This research study is based on the facial expressions of a person; their emotions and sentiment are identified, and the work is
conducted on an available database using Matlab. Gaussian filtering is used to remove noise from the image, the Canny detector is
used to find strong edges, and geometric ratios remove errors, which improves the performance of the image processing (a minimal
sketch of this filtering-and-edge-detection step appears after this survey). Expressions such as happy, sad, angry, and confused are
classified based on sentiment recognition.
4) In this study, convolutional neural networks are used to determine different emotions such as fear, happiness, anger, sadness,
and surprise. This type of classification helps in applications where an automatic tag predictor is used, usually in social media,
where the sentiment of a person is understood from their emotions. Two pretrained models are compared, and the accuracy of
image sentiment is determined. First, a sentiment is determined, and then emotions such as happy, sad, fear, surprise, and no
emotion are predicted (a small CNN classifier of this kind is sketched after this survey).
5) This research intends to improve the signal-to-noise ratio of seismic data by employing median filtering and mean filtering,
both of which have previously yielded high signal-to-noise ratios, and by putting forward a convolutional neural network
noise-reduction framework. The article presents a deep-learning convolutional neural network that achieves high performance by
using fault-noise suppression as its primary training objective. The input layer, the convolution layer, the activation layer,
the normalisation layer, and the output layer are all components of the network. The use of residual learning, the ReLU
function, and batch normalisation has improved the accuracy of the FD-Net model, preserved the original datasets with a high
signal-to-noise ratio, and retained the fault information associated with the image.
6) This project reports a sewing-defect detection method based on a CNN with a pre-trained VGG-16. The goal of this method is
to find broken stitches in a picture of a sewing operation. A set of rotated, normal, and sewing images was used to evaluate the
method, whose accuracy was found to be 92.3%. Computing devices and deep-learning libraries were investigated so that the
computation time could be reduced. The test outcome demonstrates that the technology is useful for garment manufacturing
(a transfer-learning sketch with a pre-trained VGG-16 appears after this survey).
7) The primary objective of this work is to develop an alternative to the traditional pooling method, represented here by the
combpool layer, which has three different architectures. It has been observed that combining the arithmetic mean and the
maximum value produces more accurate results (a sketch of such a combined pooling layer appears after this survey). Integrating
the Sugeno integral into the combination led to an improvement in performance. When used in conjunction with other functions,
the DV Sugeno integral produces satisfactory results. Since the generalisation of the Sugeno integral produced positive results,
the performance of models that include the Choquet integral and its generalisations as a combination layer was tested. Combpool
is also utilised in contemporary architectures.
8) This work presents the detection of food spoilage from the production stage to the consumption stage. The freshness of fruit is
detected by a computer vision-based technique that uses deep learning with CNN. Later, the model is analysed with the
available dataset of fresh and rotten fruit images from Kaggle.
9) This paper presents a large-scale analysis of facial responses, in which eyebrow raises and sad, happy, disgusted, positive,
and negative expressions were measured frame by frame and mapped to ad liking and brand purchase, and the effectiveness was
measured. A model was built to predict the emotional responses, and the results showed high liking scores. Another model was
created to predict changes in purchase intent from automatic facial responses, and the results show good effectiveness of the
model. The work tested in this paper concerns short-term purchasing decisions, such as chocolate. This approach can be
combined with content analysis of audio and visual ads.
10) This study demonstrates a few difficulties associated with sentiment analysis. The approaches used can be broken down into
three categories: lexicon-based, machine learning, and hybrid. The machine-learning approaches in turn fall into supervised,
unsupervised, and deep-learning techniques. The posts and reviews on social networks, which take the form of textual data, are
analysed during the decision-making process based on the individuals' emotional behaviour, and this analysis proceeds through
pre-processing, feature extraction, and classification or clustering. Comparing the results of all three methods for sentiment
analysis reveals a high rate of accuracy.
11) This study demonstrates that the image pre-processing and augmentation functionality already present in HistoClean can
support deep learning with CNNs. The open-source software HistoClean is utilised to implement the image-preprocessing
techniques. As a result, time is saved, and both transparency and data integrity are maintained.
HistoClean gives users a quick, powerful, and reproducible experience without requiring them to know how to code. In addition,
it reduces the amount of time spent running and rerunning scripts. The programme was designed to simplify the procedural steps,
making it as user-friendly as possible.
12) In this study, a comparison of different sentiment analysis classifiers is proposed using three distinct methods: machine
learning, deep learning, and an evolutionary strategy referred to as EvoMSA. The comparison is carried out by developing two
different corpora of expressions within the programming domain. This determines the emotional state of the student in relation
to teachers, exams, homework, and academic projects. A corpus known as sentiTEXT contains polarity, that is, both positive
and negative labels, whereas a corpus known as eduSERE contains both positive and negative learning-centred emotions,
including labels such as happy, sad, exhausted, excited, bored, and frustrated. After examining the three approaches, it was
determined that the evolutionary algorithm (EvoMSA) produced the best results for the corpus sentiTEXT dataset, which had
an accuracy of 93%, and the corpus eduSERE dataset, which had an accuracy of 84%.
13) This work presents a comprehensive study of various face-recognition methods that are based on deep learning. Deep learning
techniques are applied throughout face recognition and have played a vital role in addressing challenges in FR such as pose
variation, illumination, age, facial expression, and heterogeneous face matching. The use of deep-learning-based methods has led
to improved results in the processing of RGB-D, video, and heterogeneous face data.
14) The primary objective of this work is to develop a method for implementing a fully connected deep neural network and a
convolutional neural network inference system on a field-programmable gate array (FPGA). The CNN makes use of the
systolic-array architecture and its parallel-processing potential. Fixed-point trained parameters require only the minimum amount
of memory necessary for algorithmic analysis. According to the findings, selecting block memory over distributed memory saves
approximately 62% of the lookup tables for the DNN, while selecting distributed memory over block memory saves approximately
30% of the BRAM for the LeNet-5 CNN unit. This study provides some insight into the process of developing FPGA-based
digital systems for applications that require DNNs and CNNs.
15) Sentiment analysis makes use of a wide variety of machine learning and deep learning methods, such as SVM, NB, Haar
cascade, LBPH, CNN, and so on. This allows guests in hotels to express their opinion on the food, ambience, and other aspects
of the experience, which can be a helpful resource for obtaining feedback from customers. The proposed work performs sentiment
analysis on pictures of users and their faces taken from hotel reviews and proves highly efficient in locating and categorising
emotions from images.
16) This work discusses the analysis of textual sentiment in addition to visual sentiment for a given image, classifying the image
as positive, negative, or neutral. Using deep learning techniques such as CNNs, visual sentiment analysis can be recast as image
classification. However, the mood conveyed by the captured image is influenced by three different factors: the image factor, the
user factor, and the item factor, covering both the products and the people using them. The CNN was designed so that image
characteristics can be swapped out for different user expressions at any given time. The findings indicate that pictures taken in
restaurants were more successful than others in categorising respondents' feelings.
17) This paper argues that sentiment should be classified using facial expressions and visual analysis to determine whether an
image is optimistic or pessimistic based on the emotions it conveys. A CNN is used to construct a model for predicting the
sentiment of images. Additionally, the opinions are categorised effectively, which increases the accuracy achieved on hotel
pictures posted on social media. Compared with naive Bayes, this ML technique produces the best performance in analysing
reviews from images.
18) Adjective-noun pairs, obtained by automatically finding the tags of a picture, are beneficial for locating the emotions and
sentiment associated with an image. Since deep learning techniques are able to effectively learn the polarity of images, they are
used for sentiment analysis, which allows for more accurate results. Deep neural networks, convolutional neural networks,
region-based convolutional neural networks, and fast recurrent neural networks are all proposed in this paper for specific
applications in image sentiment analysis.
19) The purpose of this research is to determine whether a CNN can accurately forecast a range of feelings, including happiness,
surprise, sadness, fear, excitement, exhaustion, and neutrality. A number of applications can automatically tag predictions for
visual data on social media and help in comprehending feelings. The authors used the VGG-16 model, the ResNet model, and a
customised CNN framework with seven layers to predict the emotion and sentiment of a given image. The prediction was both
more accurate and more efficient as a result of using the CNN.
20) The trained model was used to find the emotion in a live photo by loading a captured image using a GUI.
21) This project discusses sentiment analysis using facial recognition, utilising the Viola-Jones algorithm to locate the face in an
image and local binary patterns (LBP) for expression recognition. The classification of expressions is accomplished with the help
of support vector machines (a face-detection and LBP-plus-SVM sketch appears after this survey). A person's expression reveals
whether they are happy, sad, or excited just by looking at their face. This framework initially locates and reads the face of a
person; then, after finding the face, it computes a variety of facial expressions; finally, after finding the expressions, it
categorises them as happy, sad, exhausted, and so on.
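The following is a minimal sketch of the Gaussian-filtering and Canny edge-detection step summarised in item 3, written with OpenCV in Python rather than the Matlab used in the cited work; the file name, kernel size, and threshold values are assumptions chosen for illustration.

```python
# Denoise a face image with a Gaussian filter, then extract strong edges with Canny.
# "face.jpg" and the threshold values are placeholders for illustration.
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)          # hypothetical input image
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)         # Gaussian filtering removes noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)   # keep only strong ("heavy") edges

cv2.imwrite("face_edges.png", edges)                        # edge map used by later geometric checks
```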
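Items 4 and 19 both describe CNN-based emotion classifiers. The sketch below shows what such a network might look like in Keras; the 48x48 grayscale input, layer widths, and five-class label set are assumptions for illustration and do not reproduce the architectures evaluated in those papers.

```python
# A small CNN for facial-emotion classification, along the lines described in items 4 and 19.
# Input size, layer sizes, and the number of classes are illustrative assumptions.
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g. angry, happy, neutral, sad, surprise (assumed label set)

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                  # 48x48 grayscale face crops
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per emotion
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would use (image, one-hot emotion) pairs, e.g. model.fit(x_train, y_train, ...)
```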
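Item 6 uses a pre-trained VGG-16 for defect detection. A generic transfer-learning sketch of that idea in Keras is shown below; the binary head, image size, and class names are assumptions and do not reproduce the cited paper's exact setup.

```python
# Transfer learning with a pre-trained VGG-16: freeze the convolutional base and
# train a small binary head (e.g. "normal stitch" vs. "broken stitch" - assumed classes).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # keep the pre-trained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of a defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```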
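Item 7 replaces standard pooling with a layer that combines several pooling functions. The sketch below implements one simple variant of that idea, a learnable mixture of average and max pooling, in Keras; it illustrates the concept only and is not the combpool layer or the Sugeno/Choquet aggregations from the cited paper.

```python
# A pooling layer that outputs a learnable combination of average and max pooling:
# alpha * avg + (1 - alpha) * max, with alpha learned during training.
import tensorflow as tf
from tensorflow.keras import layers

class CombinedPooling2D(layers.Layer):
    def __init__(self, pool_size=2, **kwargs):
        super().__init__(**kwargs)
        self.avg = layers.AveragePooling2D(pool_size)
        self.max = layers.MaxPooling2D(pool_size)

    def build(self, input_shape):
        # One scalar mixing weight, learned with the rest of the network.
        self.alpha = self.add_weight(name="alpha", shape=(),
                                     initializer="zeros", trainable=True)

    def call(self, x):
        a = tf.sigmoid(self.alpha)              # keep the mixing weight in (0, 1)
        return a * self.avg(x) + (1.0 - a) * self.max(x)

# Usage: drop it in wherever MaxPooling2D would normally appear.
inputs = layers.Input(shape=(48, 48, 1))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = CombinedPooling2D()(x)
```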
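Item 21 combines Viola-Jones face detection, local binary pattern (LBP) features, and an SVM classifier. The sketch below strings those pieces together with OpenCV, scikit-image, and scikit-learn; the training data, label names, LBP parameters, and file names are placeholders, not values from the cited paper.

```python
# Viola-Jones (Haar cascade) face detection + LBP histogram features + SVM classifier.
# X_train / y_train are assumed to be pre-collected face crops and expression labels.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    """Uniform LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_describe(image_path):
    """Return LBP features for every face found in the image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return [lbp_histogram(cv2.resize(gray[y:y + h, x:x + w], (64, 64)))
            for (x, y, w, h) in faces]

# Training on previously collected crops (placeholders):
# X_train = [lbp_histogram(face) for face in face_crops]; y_train = ["happy", "sad", ...]
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# predictions = clf.predict(detect_and_describe("guest_photo.jpg"))
```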
Summary of the surveyed techniques and reported results:

Author(s) | Year | Technique | Result
Ruchika Prakash Nagekar | 2022 | Natural language processing (NLP) | Custom model with a web camera provides better results.
Barot, V., & Gavhane | 2020 | Convolutional neural network | CNN provides 80% accuracy on the testing set and 60% on the validation set.
Buket Kaya, Ayten Geçmez | 2020 | Gaussian filtering, Canny, geometric ratio | Percentage of noise in the image is reduced.
M. M., S. Shivakumar, V. R. Sanjay | 2022 | Visual analysis using CNN | Yields the best performance in analysing reviews from images compared to naive Bayes in the ML technique.
Valentino, F., Cenggoro, T. | 2021 | Computer-vision-based technique using CNN | Fresh and rotten fruit images from Kaggle are classified.
Viratham Pulsawatdi, A., Javier I. Quezada-Marín | 2021 | Deep learning by CNN and HistoClean | Suggests applications of deep learning technologies in hospitality.
Guo, G., & Zhang, N. | 2019 | Deep learning techniques for face recognition | Deep learning methods yield the best results in processing RGB-D, video, and heterogeneous face data.
Nyoung Kim, Hyun Rim Choi | 2015 | DNN and CNN inference system on an FPGA | Block memory over distributed memory saves ≈62% of the look-up tables (LUTs) for the DNN.
Yash Gherkar, Parth Gujar | 2022 | ML and deep learning methods (Haar cascade, LBPH, CNN) | Results show high efficiency in finding and classifying emotions from images.
Quoc-Tuan Truong, Hady W. Lauw | 2017 | Sentiment analysis using CNN | Images from restaurants have been more effective in classifying the sentiments.
IV. CONCLUSIONS
This paper presents a solution for obtaining proper feedback from customers, which helps in the proper development of the
business. Here, the issue is collecting reviews or feedback from the customers who visit the place (hotel). Those who are
interested in giving feedback regularly do so, but a large percentage of people think it is a waste of time, so they will not give a
review or feedback. This project helps us obtain proper feedback from the customers. In simple terms, we have developed a
technique in which customers need not spend time holding a pen and paper and filling out forms. We only need a brief glimpse of
the customer looking at an image: the camera automatically captures the expression and updates the database accordingly. That is,
a question about a particular attribute, e.g., service, cost, or food, is presented and analysed using this technology, and the
response is automatically recorded in the database as good, excellent, or bad based on the expression the customer gives for the
question asked by our server.
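As a small illustration of this final step, the sketch below records a classified expression against the question that was shown, using Python's built-in sqlite3 module; the table name, columns, file name, and rating mapping are assumptions for illustration, not a specification from this paper.

```python
# Store one piece of camera-derived feedback: which question was shown,
# which emotion was detected, and the rating it maps to.
# Table and column names are illustrative assumptions.
import sqlite3
from datetime import datetime

RATING = {"happy": "excellent", "surprise": "excellent",
          "neutral": "good", "sad": "bad", "angry": "bad"}

def save_feedback(db_path, question, emotion):
    rating = RATING.get(emotion, "good")
    with sqlite3.connect(db_path) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS feedback (
                           ts TEXT, question TEXT, emotion TEXT, rating TEXT)""")
        con.execute("INSERT INTO feedback VALUES (?, ?, ?, ?)",
                    (datetime.now().isoformat(), question, emotion, rating))

# Example: the kiosk showed the food question and the classifier returned "happy".
save_feedback("feedback.db", "Please rate the food", "happy")
```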
REFERENCES
[1] Chandrasekaran G., Antoanela N., Andrei G., Monica C., & Hemanth J., "Visual Sentiment Analysis Using Deep Learning Models with Social Media Data", Applied Sciences (Switzerland), 2022.
[2] Swapnil Sanjay Nagvenkar, Ruchika Prakash Nagekar, Tanmay Naik, Amey Kerkar, "Emotion Detection Based on Facial Expression", IJCRT, 2022.
[3] Aynur Sevinç, Buket Kaya, Ayten Geçmez, "A Sentiment Analysis Study on Recognition of Facial Expressions: Gauss and Canny Methods", IEEE, 2020.
[4] Doshi U., Barot V., & Gavhane S., "Emotion Detection and Sentiment Analysis of Static Images", IEEE, 2020.
[5] Gherkar Y., Gujar P., Gaziyani A., & Kadu S., "Sentiment Analysis of Images Using Machine Learning Techniques", ICCDW, 2022.
[6] Kim H., Jung W. K., Park Y. C., Lee J. W., & Ahn S. H., "Broken Stitch Detection Method for Sewing Operation Using CNN Feature Map and Image-Processing Techniques", Expert Systems with Applications, 2021.
[7] Rodriguez Martinez I., Lafuente J., Santiago R. H. N., Dimuro G. P., Herrera F., & Bustince H., "Replacing Pooling Functions in Convolutional Neural Networks by Linear Combinations of Increasing Functions", Neural Networks, 2022.
[8] Valentino F., Cenggoro T. W., & Pardamean B., "A Design of Deep Learning Experimentation for Fruit Freshness Detection", IOP Conference Series: Earth and Environmental Science, 2021.
[9] D. McDuff, R. El Kaliouby, J. F. Cohn, and R. Picard, "Predicting Ad Liking and Purchase Intent: Large-Scale Analysis of Facial Responses to Ads", IEEE, 2015.
[10] Shaha Al-Otaibi and Amal Al-Rasheed, "A Review and Comparative Analysis of Sentiment Analysis Techniques", Informatica, 2022.
[11] Kris D. McCombe, Stephanie G. Craig, Amélie Viratham Pulsawatdi, Javier I. Quezada-Marín, "Open-Source Software for Histological Image Pre-Processing and Augmentation to Improve Development of Robust Convolutional Neural Networks", IEEE, 2021.
[12] Estrada M. L. B., Cabada R. Z., Bustillos R. O., & Graff M., "Opinion Mining and Emotion Recognition Applied to Learning Environments", Expert Systems with Applications, 2020.
[13] Guo G., & Zhang N., "A Survey on Deep Learning-Based Face Recognition", Computer Vision and Image Understanding, 2019.
[14] Mukhopadhyay A. K., Majumder S., & Chakrabarti I., "Systematic Realization of a Fully Connected Deep and Convolutional Neural Network Architecture on a Field Programmable Gate Array", IEEE, 2022.
[15] Yash Gherkar, Parth Gujar, Siddhi Kadu, "Sentiment Analysis of Images Using Machine Learning Techniques", ITM Web of Conferences, 2022.
[16] Quoc-Tuan Truong, Hady W. Lauw, "Visual Sentiment Analysis for Review Images with Item-Oriented and User-Oriented CNN", Corpus, 2022.
[17] S. Shivakumar, V. R. Sanjay, "Visual Sentiment Classification of Restaurant Review Images Using Deep Convolutional Neural Networks", Corpus, 2022.
[18] Namita Mittal, Divya Sharma, M. Joshi, "Image Sentiment Analysis Using Deep Learning", Corpus, 2018.
[19] Udit Doshi, Vaibhav Barot, Sachin Gavhane, "Emotion Detection and Sentiment", IJESC, 2017.
[20] R. N. Devendra Kumar, Dr. C. Arvind, "Facial Expression Recognition System 'Sentiment Analysis'", Journal of Advanced Research in Dynamical & Control Systems, 2017.
[21] Chandrappa S., Dharmanna L., Basavaraj Anami, "A Novel Approach for Early Detection of Neovascular Glaucoma Using Fractal Geometry", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 14, No. 1, pp. 26-39, 2022. DOI: 10.5815/ijigsp.2022.01.03
[22] Dharmanna L., Chandrappa S., T. C. Manjunath, Pavithra G., "A Novel Approach for Diagnosis of Glaucoma through Optic Nerve Head (ONH) Analysis Using Fractal Dimension Technique", International Journal of Modern Education and Computer Science (IJMECS), Vol. 8, No. 1, pp. 55-61, 2016. DOI: 10.5815/ijmecs.2016.01.08
[23] Guru Prasad M. S., Naveen Kumar H. N., Raju K., et al., "Glaucoma Detection Using Clustering and Segmentation of the Optic Disc Region from Retinal Fundus Images", SN Computer Science, 4, 192, 2023. https://doi.org/10.1007/s42979-022-01592-1
[24] Chandrappa S., Guruprasad M. S., Kumar H. N. N., et al., "An IoT-Based Automotive and Intelligent Toll Gate Using RFID", SN Computer Science, 4, 154, 2023. https://doi.org/10.1007/s42979-022-01569-0
[25] Chandrappa S., Dharmanna L., and K. I. R. Neetha, "Automatic Elimination of Noises and Enhancement of Medical Eye Images through Image Processing Techniques for Better Glaucoma Diagnosis", 2019 1st International Conference on Advances in Information Technology (ICAIT), Chikmagalur, India, 2019, pp. 551-557. DOI: 10.1109/ICAIT47043.2019.8987312
[26] Chandrappa S., Dharmanna L., Shyama Srivatsa Bhatta U. V., Sudeeksha Chiploonkar M., Suraksha M. N., Thrupthi S., "Design and Development of IoT Device to Measure Quality of Water", International Journal of Modern Education and Computer Science (IJMECS), Vol. 9, No. 4, pp. 50-56, 2017. DOI: 10.5815/ijmecs.2017.04.06
[27] Chandrappa S., Dharmanna Lamani, Shubhada Vital Poojary, Meghana N. U., "Automatic Control of Railway Gates and Destination Notification System Using Internet of Things (IoT)", International Journal of Education and Management Engineering (IJEME), Vol. 7, No. 5, pp. 45-55, 2017. DOI: 10.5815/ijeme.2017.05.05