MUSCLE MOVEMENT AND FACIAL EMOJI DETECTION VIA RASPBERRY PI

Authors

Abstract:
In the human body, the facial muscles consume 70% of energy in making expressions for various emotions. Most importantly, the face has around 42 striated muscles and 14 facial bones, which together perform an important function in the daily life cycle. Facial emotions can be represented by emoji reactions, for example Sadness, Contempt, Surprise, Anger, Fear, and Happiness. Expression recognition is a task that humans perform effortlessly, but it is not easily performed by computers. Recent methods have presented AI technology for facial skeleton marking to create apps and avatars under certain conditions (frontal face, controlled environments, and high-resolution images). This paper explains technologies developed for automatic capture of facial reactions: a 5.0 MP Pi digicam is interfaced with the Raspberry Pi over CSI (Camera Serial Interface) using IDLE Python 3.11.2. Raspberry Pi OS is used to display the photographs clearly, as it works faster than the existing model; it is also widely used in wireless printing servers, robot controllers, and standard tripod stop-motion cameras. The paper covers the following applications: lie detection as used by crime-branch departments, and monitoring of intensive-care patients in hospitals. This model can support several domains such as hospitals and anti-crime units. Its output can be monitored 24/7 for tracking purposes and provides a detailed data log sheet with emojis.

Keywords: Facial Muscle, Emotions, Skeleton Marking, Raspberry Pi, 5.0 MP Digicam, Hospitals, Anti-Crime Units.

Introduction:
Recognition of facial expressions is essential for human-machine interaction. An automatic facial expression recognition system is useful in many fields, including law, marketing, medicine, e-learning, surveillance, and entertainment, and there are various uses for this technology within each field alone. For instance, in the field of medicine, while dealing with a patient who suffers from depression, AI must make highly precise use of emotion recognition technology to develop the necessary treatment and process of improvement. The experience would be fully customized if everything could be adapted to our feelings and mood. Hence, a system for recognizing human emotions is crucial if we are to live comfortably in a world where everything is entirely automated and run by artificial intelligence.

Workflow:
A 5.0 MP camera captures the images and the emotions are processed, interfaced with the Raspberry Pi using IDLE Python 3.11.2, as follows:

Input → Pre-processing → CSI Interfacing → Facial Detection → Facial Feature Extraction → Emotion Classification → Output Expression
Block Diagram

[Block diagram: Target User → Digicam Capture → Raspberry Pi loads the trained model (trained dataset) → OpenCV image processing → captured image compared and mapped emoji-to-emoji against the trained model → Emoji Display]
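The stages in the workflow and block diagram can be sketched as a chain of functions. This is a schematic outline only: each stub below stands in for the real component (Pi camera capture, OpenCV detection, the trained classifier), and the function names are illustrative, not the actual implementation.

```python
# Schematic sketch of the workflow; each function is a placeholder
# for the real component in the system.

def preprocess(image):
    # grayscale conversion / resizing would happen here
    return image

def detect_face(frame):
    # OpenCV face detection in the real system
    return frame

def extract_features(face):
    # facial feature (landmark) extraction
    return face

def classify_emotion(features):
    # comparison against the trained dataset; fixed label here
    return "Happiness"

def pipeline(image):
    """Input -> Pre-processing -> Facial Detection ->
    Feature Extraction -> Emotion Classification -> Output."""
    return classify_emotion(extract_features(detect_face(preprocess(image))))
```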

RASPBERRY PI CONNECTION
Step 1: Setup installation
Raspbian is the OS for all Raspberry Pi boards. Flash the OS to the SD card, connecting the card via an adapter.

Step 2: Creating an SSH file
Create the SSH file in the main directory on the SD card.

Step 3: Creating a configuration
Log in to the Raspberry Pi once it gets connected, then type the command sudo raspi-config to open the interfacing options.
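Step 2 can also be scripted. A minimal sketch, assuming the flashed SD card's boot partition is mounted at a path supplied by the caller (Raspberry Pi OS enables the SSH server on next boot when it finds an empty file named ssh there):

```python
from pathlib import Path

def enable_ssh(boot_dir):
    """Create the empty 'ssh' marker file on the SD card's boot
    partition so the SSH server is enabled on the next boot.
    boot_dir is the mount point of the flashed card (assumption:
    supplied by the caller, e.g. after flashing the OS image)."""
    marker = Path(boot_dir) / "ssh"
    marker.touch(exist_ok=True)
    return marker
```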

PI CAMERA
All Raspberry Pi versions are compatible with a high-definition camera module. High sensitivity, minimal crosstalk, and noise-free image capture are all provided in a remarkably compact and lightweight architecture. The camera module is linked to the Pi board through the CSI connector. The CSI bus, which can transfer data at extraordinarily high rates, is used exclusively to send pixel data to the processor.
Step 4: VNC Viewer
Enable VNC from the interfacing options. It is a one-time setup, and the Raspberry Pi can then be accessed easily through the VNC Viewer.

METHODOLOGY
1. Programming
For programming, IDLE Python 3.11.2 is used, and the OpenCV and Emot libraries are imported.

Open CV:
Open CV is a sizable open-source library for computer vision, machine learning, and image processing, and it currently plays a large part in real-time operation, which is crucial in modern systems. It may be used to analyse pictures and videos to recognize things like faces, objects, and even human handwriting. Python can process the Open CV array structure for analysis when combined with a variety of libraries, like NumPy. We use vector space and apply mathematical operations to the features of a visual pattern in order to identify it.

Emot:
Emot is a Python module that allows you to extract emoticons and emojis from text (strings). All emojis and emoticons were taken from a reputable source, which is credited in the source section. The goal of the Emot release is to provide a high-performance detection framework for data science, particularly for text datasets of significant size.

2. Pi Camera Module Interface with Raspberry Pi
Connect the Pi camera module's ribbon cable to the CSI connector on the Raspberry Pi: the white connector located closer to the Ethernet and USB ports.

3. Enabling Camera
Certain functions must be included, and certain steps followed, to enable the Pi camera.

Functions used:
To use the picamera Python library, we must import picamera into our program:

import picamera

class:
On the Raspberry Pi we can use the PiCamera class, which provides many APIs for the camera's capabilities. To use the Pi camera, we must build an object of the PiCamera class:

camera = picamera.PiCamera()

We may access the various member variables and functions of the PiCamera class simply by adding a dot (.) between the object name and the member name.

resolution (width, height):
This determines the resolution of the camera for picture capture and preview display, given as a tuple:

camera.resolution = (x, y)

where x and y are the width and height of the image respectively, for example camera.resolution = (1080, 648).

capture():
capture() is used for image capturing:

camera.capture("/home/pi/image.jpeg")

EXECUTION
The proposed system works in three steps: face detection, recognition, and classification. In the first step, the Pi camera is used to capture a real-time human face, and the OpenCV library is used for face detection. During this phase, a human face is detected and its features are extracted and saved in a database for face recognition. Face detection is followed by recognition, using the database features and matching them with the datasets. Human emotions are classified using the Emot library. Finally, the recognised human face is classified as angry, fearful, disgusted, happy, neutral, or surprised based on the expression in real time.

OUTPUT & RESULTS
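The kind of output the system produces (an emotion label mapped to an emoji, logged to the data sheet) can be illustrated with a minimal pure-Python sketch. The feature vectors, labels, and nearest-neighbour matching below are hypothetical stand-ins; the real system extracts features with OpenCV and stores them in a database.

```python
import math

# Hypothetical labelled feature vectors standing in for the trained
# dataset held in the database (illustrative values only).
DATASET = {
    "happy": [0.9, 0.1, 0.8],
    "angry": [0.1, 0.9, 0.2],
    "surprised": [0.5, 0.2, 0.9],
}

EMOJI = {"happy": "😊", "angry": "😠", "surprised": "😮"}

def classify(features):
    """Return the label of the closest stored feature vector
    (nearest neighbour by Euclidean distance)."""
    return min(DATASET, key=lambda label: math.dist(DATASET[label], features))

def log_entry(timestamp, features):
    """One line of the data log sheet: time, emotion label, emoji."""
    emotion = classify(features)
    return f"{timestamp} {emotion} {EMOJI[emotion]}"
```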
APPLICATIONS
Health and Medicine:
To determine the emotions of a patient undergoing surgery or treatment and notify the appropriate doctor about them. Using these as a foundation, a doctor can recommend a more appropriate treatment for the patient, easing their difficulties or pain [4].

Psychology:
Because psychology is so closely related to emotions, this system has a wide range of applications. Learning human emotions in various situations can provide us with a much better understanding of a person's mental psychology [4].

Lie Detection:
The machine developed for lie detection employs sensors to detect any abnormalities in the user's blood flow. This product can help by identifying any suspicious changes in the expressions/emotions of the person being tested [4].

CONCLUSION
Our project is capable of successfully recognising human expressions/emotions. It is in good working order and can be used in real-world situations without issue. Deep learning techniques were used to map emojis in order to classify facial emotions over static facial images. Emojis are a method of representing nonverbal cues. These cues have become an important part of online chatting, product reviews, logo emotion, and a variety of other activities, and they have led to the expansion of advanced studies on emoji-driven storytelling.

REFERENCES
1. Praveen, T., Reddy, S. S., Reddy, T. V. K., Reddy, N. S., & Chandrashekhar, S. N. Emoji creation with facial emotion detection using CNN.
2. Lopes, A. T., De Aguiar, E., De Souza, A. F., & Oliveira-Santos, T. (2017). Facial expression recognition with convolutional neural networks: coping with few data and the training sample order. Pattern Recognition, 61, 610-628.
3. Srivastava, S., Gupta, P., & Kumar, P. (2021, June). Emotion recognition based emoji retrieval using deep learning. In 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI) (pp. 1182-1186). IEEE.
4. https://github.com/ajinkyabedekar/Face-to-Emoji
5. https://github.com/abhijeet3922/FaceEmotion_ID/blob/master/csv_to_images.py
6. https://github.com/omar178/Emotion-recognition
7. https://github.com/oarriaga/face_classification