Muscle Movement and Facial Emoji Detection Via Raspberry Pi
Authors
Figure: System design overview (Programming, Target User, Raspberry Pi loading, Open CV Re-Design)
RASPBERRY PI CONNECTION
Step 1: Setup installation
Raspbian is the operating system used on all Raspberry Pi boards and is installed first.
Step 4: VNC Viewer
This is a one-time setup, after which the Raspberry Pi can be accessed easily through VNC.
PI CAMERA
All Raspberry Pi versions are compatible with the high-definition camera module. High sensitivity, minimal crosstalk, and noise-free image capture are provided in a remarkably compact and lightweight design. The camera module is linked to the Raspberry Pi through the CSI connector on the board. The CSI bus, which can transfer data at extraordinarily high rates, is used exclusively to send pixel data to the processor.
METHODOLOGY
1. Programming
For programming, the IDLE environment with Python 3.11.2 is used. The OpenCV and Emot libraries are imported.
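As a brief illustration, and assuming the usual PyPI package names (opencv-python provides cv2, the Emot module is installed as emot, and the camera library as picamera), the imports at the top of the script would look like this:

# Libraries used by the project: OpenCV for vision, Emot for emoji handling,
# and picamera for the Raspberry Pi camera module.
import cv2
import emot
import picamera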
Open CV:
OpenCV is a sizable open-source library for computer vision, machine learning, and image processing, and it currently plays a large part in real-time operation, which is crucial in modern systems. It can be used to analyse pictures and videos to recognise things such as faces, objects, and even human handwriting. When combined with other libraries such as NumPy, Python can process the OpenCV array structure for analysis. To identify a visual pattern, we treat its features as vectors and apply mathematical operations to them.
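As a sketch of how face detection with OpenCV is commonly done (the cascade file, image path, and parameter values below are assumptions, not the project's exact code):

import cv2

# Load the pre-trained frontal-face Haar cascade bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Read a captured image and convert it to grayscale, since the
# cascade classifier works on single-channel images.
image = cv2.imread("/home/pi/image.jpeg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; scaleFactor and minNeighbors are typical starting values.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)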
Emot:
Emot is a Python module that allows you to extract emoticons and emojis from text (strings). The source information is provided in the source section, and all emojis and emoticons were taken from a reputable source. The goal of the Emot release is to provide a high-performance detection framework for data science, particularly for text datasets of significant size.
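As a minimal usage sketch (the exact call style differs between Emot releases; the object-based API shown here follows the emot 3.x documentation and should be treated as an assumption):

import emot

# Build an Emot object and extract emojis from a sample string.
emot_obj = emot.core.emot()
result = emot_obj.emoji("Face detected 😀👍")

# The result is a dictionary containing the matched emojis ('value'),
# their positions ('location'), and their textual meanings ('mean').
print(result["value"])
print(result["mean"])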
2. Pi Camera module interface with Raspberry Pi
Connect the Pi Camera module's ribbon cable to the CSI connector on the Raspberry Pi. The camera plugs into the white connector located closer to the Ethernet and USB ports.
3. Enabling Camera
Certain functions must be included and steps followed to enable the Pi Camera.

Functions used:

import picamera
To use the picamera Python library, we must import picamera into our program.

class
On the Raspberry Pi we use the PiCamera class, which provides many APIs for the camera's capabilities. To use it, we must build a PiCamera object:
Camera = picamera.PiCamera()
The member variables and functions of the PiCamera class are accessed by adding a dot (.) between the object name and the member name, for example:
Camera.resolution = (1080, 648)

resolution (width, height)
This determines the resolution of the camera for picture capture and preview display. It can be given as a (width, height) tuple or as the name of a frequently used display resolution:
Camera.resolution = (x, y)
where x and y are the width and height of the image respectively.

capture()
capture() is used for image capture:
Camera.capture("/home/pi/image.jpeg")
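Putting these functions together, a minimal capture script (a sketch assuming the legacy picamera library and an enabled camera interface) looks like this:

import time
import picamera

# Create the camera object and set the capture resolution.
camera = picamera.PiCamera()
camera.resolution = (1080, 648)

# Give the sensor a moment to adjust exposure, then capture one frame.
time.sleep(2)
camera.capture("/home/pi/image.jpeg")
camera.close()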
EXECUTION
The proposed system works in three steps: face detection, recognition, and classification. In the first step, the Pi Camera captures a real-time human face, and the OpenCV library is used for face detection. During this phase, a human face is detected and its features are extracted and saved in a database for face recognition. Face detection is followed by recognition, in which the stored database features are matched against the datasets, and human emotions are classified using the Emot library. Finally, the recognised human face is classified as angry, fearful, disgusted, happy, neutral, or surprised based on its expression in real time.
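The three execution steps can be sketched end to end as follows; classify_emotion() is a hypothetical placeholder for the recognition and classification stage, not a function from OpenCV or Emot:

import cv2
import picamera

# Step 1: capture a real-time frame with the Pi Camera.
camera = picamera.PiCamera()
camera.resolution = (1080, 648)
camera.capture("/home/pi/frame.jpeg")
camera.close()

# Step 2: detect faces in the captured frame with OpenCV.
frame = cv2.imread("/home/pi/frame.jpeg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Step 3: classify each detected face region (placeholder implementation).
def classify_emotion(face_region):
    return "neutral"

for (x, y, w, h) in faces:
    label = classify_emotion(gray[y:y + h, x:x + w])
    print("Detected face classified as:", label)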
OUTPUT & RESULTS

Psychology:
Because psychology is so closely related to emotions, this system has a wide range of applications. Learning human emotions in various situations can provide a much better understanding of a person's mental state [4].
Lie Detection:
The machine developed for lie detection employs sensors to detect abnormalities in the user's blood flow. This system can help by identifying any suspicious changes in the expressions and emotions of the person being tested [4].
CONCLUSION
Our project is capable of successfully recognising human expressions and emotions. It is in good working order and can be used in real-world situations without issue. Deep learning techniques were used to map emojis to facial emotions classified from static facial images. Emojis are a way of representing nonverbal cues; these cues have become an important part of online chatting, product reviews, logo emotion, and a variety of other activities, and they have driven the expansion of advanced studies on emoji-driven storytelling.
REFERENCES
1. Praveen, T., Reddy, S. S., Reddy, T. V. K., Reddy, N. S., & Chandrashekhar, S. N. Emoji Creation with Facial Emotion Detection Using CNN.