Project Report On Emoji-Fy: Emoji Creation Using Python
Abstract:- Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication. Emojis and avatars are ways to convey non-verbal cues, and these cues have become an essential part of online chatting, product reviews, brand sentiment, and much more. They have also led to a growing body of data science research dedicated to emoji-driven storytelling. With advancements in computer vision and deep learning, it is now possible to detect human emotions from images. In this deep learning project, we classify human facial expressions in order to filter and map them to corresponding emojis or avatars. The project is not intended to solve a real-world problem; rather, it makes the chatting world a little more colourful. Emoji-Fy is a software application that deals with the creation of emojis and avatars.
In today's generation, people usually tend to communicate with each other using emoticons. So, we thought of making our own customized emojis.
Introduction:-
People use emojis every day. Emojis have become a new language that can express an idea or emotion more effectively than text alone. This visual language is now a standard for online communication, available not only on Twitter but also on other large online platforms such as Facebook and Instagram. Since people in today's generation usually tend to communicate with each other using emoticons, we thought of making our own customized emojis. Emoji-Fy is a software application that deals with the creation of emojis and avatars.
The neural network has become an emerging tool in numerous and diverse areas and is a prime example of end-to-end learning. This paper describes a system that uses a Convolutional Neural Network (CNN) trained on the FER2013 dataset to detect emotions from facial expressions and convert them into personalised emojis.
We build a convolutional neural network to recognize facial emotions and train the model on the FER2013 dataset. We then map the predicted emotions to the corresponding emojis or avatars. FER2013 contains approximately 35,000 grayscale facial images of different expressions, each restricted to 48×48 pixels, and its labels are divided into 7 classes: 0 = Angry, 1 = Disgust, 2 = Fear, 3 = Happy, 4 = Sad, 5 = Surprise, 6 = Neutral. The Disgust expression has the fewest images (around 600), while the other labels have nearly 5,000 samples each.
Methodology:-
Proposed System
Here we import all the libraries required for our model and then initialise the training and validation generators, i.e., we first rescale all the images needed to train the model and read them in as grayscale.
• Imports
• Initializing the training and validation generators (a sketch is shown below)
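A minimal sketch of the imports and the two generators, assuming the FER2013 images have been extracted into data/train and data/test directories; the paths and batch size are assumptions, not the project's exact values:

# Imports assumed by the rest of the report: TensorFlow/Keras for the model,
# OpenCV for face detection, NumPy for array handling.
import numpy as np
import cv2
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1] and read the images as 48x48 grayscale.
train_datagen = ImageDataGenerator(rescale=1.0 / 255)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    'data/train',              # assumed location of the FER2013 training images
    target_size=(48, 48),
    batch_size=64,
    color_mode='grayscale',
    class_mode='categorical')

validation_generator = val_datagen.flow_from_directory(
    'data/test',               # assumed location of the validation images
    target_size=(48, 48),
    batch_size=64,
    color_mode='grayscale',
    class_mode='categorical')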
Here we train the network on all the images in the FER2013 dataset and save the learned weights in the model for future predictions. Then, using OpenCV's Haar cascade XML classifier, we detect the bounding boxes of faces in the webcam feed and predict the emotions, as sketched below.
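The following is a minimal sketch of the CNN, the training step, and the Haar cascade webcam loop, continuing from the generators above. The layer sizes, epoch count, and the model.h5 file name are assumptions rather than the project's exact configuration:

# A small CNN over 48x48x1 inputs, ending in a 7-way softmax (one unit per emotion).
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(7, activation='softmax')])

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=0.0001),
              metrics=['accuracy'])

# Train on the generators defined earlier, then save the weights for later use.
model.fit(train_generator, epochs=50, validation_data=validation_generator)
model.save_weights('model.h5')

# Detect faces in the webcam feed with OpenCV's bundled Haar cascade and
# classify the emotion of each detected face. The index order below is assumed
# to match the generator's class indices; verify with train_generator.class_indices.
emotion_dict = {0: 'Angry', 1: 'Disgust', 2: 'Fear', 3: 'Happy',
                4: 'Sad', 5: 'Surprise', 6: 'Neutral'}
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop the face, resize to the network's input size, and normalise.
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        roi = roi.reshape(1, 48, 48, 1)          # batch of one grayscale image
        label = emotion_dict[int(np.argmax(model.predict(roi)))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
    cv2.imshow('Emoji-Fy', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()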
Create a folder named emojis and save the emoji image corresponding to each of the seven emotions in the dataset. The trained model is then tested on a set of images: random images are fed to the network and the output label is compared to the known label of each image. The parameters used for evaluation are the F1 score, precision, and recall. Precision is the proportion of predicted positives that are truly positive (TP / (TP + FP)), recall is the proportion of actual positives that the model correctly identifies (TP / (TP + FN)), and the F1 score is the harmonic mean of the two.
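A brief sketch of how these metrics can be computed with scikit-learn; the y_true and y_pred arrays below are hypothetical placeholders for the known and predicted labels of the test images:

from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical example: known labels vs. labels predicted by the network.
y_true = [3, 0, 5, 6, 4, 3, 2]   # known labels of the test images
y_pred = [3, 0, 5, 6, 4, 0, 2]   # labels predicted by the network

# Macro averaging treats all emotion classes equally.
print('Precision:', precision_score(y_true, y_pred, average='macro'))
print('Recall:   ', recall_score(y_true, y_pred, average='macro'))
print('F1 score: ', f1_score(y_true, y_pred, average='macro'))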
Testing
• Final Output
Here are some images of how this project looks. The dataset consists of facial emotions in the following categories: 0: angry, 1: disgust, 2: fear, 3: happy, 4: sad, 5: surprise, 6: neutral.
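A minimal sketch of the mapping from predicted class indices to the emoji images in the emojis folder; the file names below are assumptions, so use whatever images were actually saved there:

# Map each predicted class index to an emoji/avatar image in the emojis folder.
emoji_dict = {
    0: 'emojis/angry.png',
    1: 'emojis/disgust.png',
    2: 'emojis/fear.png',
    3: 'emojis/happy.png',
    4: 'emojis/sad.png',
    5: 'emojis/surprise.png',
    6: 'emojis/neutral.png'}

# After prediction, the matching emoji can be loaded and shown, e.g.:
# emoji_img = cv2.imread(emoji_dict[predicted_label])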
Block Diagram
Tools Used:
● Google Colab supports both GPU and TPU instances, which makes it a perfect tool for training the model on large datasets.
● We have also used various data science libraries such as Keras, TensorFlow, OpenCV, Matplotlib, NumPy, etc. For building the Keras model we used the Sequential modelling technique.
● VS Code is used as the standard platform for overall development.
Conclusion:-
As today's generation loves the trend of communicating with non-verbal cues such as emoticons, we thought: why not bring out our own emojis?
With advancements in computer vision and deep learning, we are now able to detect human emotions from images. In this deep learning project, we classify human facial expressions to filter and map them to corresponding emojis or avatars.
The result we expect is the use of Emoji-Fy in the chatting world. We want people to communicate with their own customisable emoticons. The project recognizes one's current emotion and converts it into that emotion's emoji, so that users get an emoji of their own face and can use it while chatting.