DSMP Ty Report 2023
1. INTRODUCTION
2) The Melting Face Emoji Has Already Won Us Over, New York Times
- Anna P. Kambhampaty
There are times when words feel inadequate, when one’s dread, shame, exhaustion or
discomfort seems too immense to be captured in written language. The melting face was
conceived back in 2019 by Jennifer Daniel and Neil Cohn, who connected over their
mutual appreciation for visual language. Ms. Daniel, who uses the pronouns they and
them, is an emoji subcommittee chair for Unicode and a creative director at Google; Mr.
Cohn is an associate professor of cognition and communication at Tilburg University. Mr.
Cohn had published some work on representations of emotion in Japanese Visual
Language that caught the eye of Ms. Daniel. Among the ideas in Mr. Cohn’s research was “paperification,”
which, according to him, is “what happens in a manga sometimes when people become
embarrassed, they will turn into a piece of paper and flutter away.”
3) "DeepMoji”
- Bjarke Felbo
Throughout the work, Felbo provides insights into the research and
experimentation process behind DeepMoji, explaining the techniques and
methodologies used to train the model. He discusses the challenges faced when working
with emotional data and outlines the steps taken to overcome them. The author also
offers practical examples of how DeepMoji can be applied in various fields, such as
sentiment analysis, social media monitoring, and market research.
4) "EmojiNet"
- Sanjaya Wijeratne
Through engaging storytelling and in-depth analysis, the author sheds light on the
origins of emojis and their transformation from simple emoticons to a diverse and
universally recognized language. Wijeratne discusses the rise of emojis as a powerful
means of expression, allowing individuals to convey emotions, attitudes, and ideas in a
concise and visually appealing manner.
Moreover, the work examines the societal and psychological implications of emojis.
Wijeratne explores how emojis influence human behavior, shape online conversations,
and contribute to the formation of digital identities.
The way we communicate in the digital era is constantly evolving, and there is a growing
need for effective methods to convey emotions in online conversations. Because text-based
communication lacks the nuances of face-to-face interactions, emojis have emerged as a
popular means of expressing emotions in digital conversations. However, manually
selecting emojis to match the intended emotion can be time-consuming and subjective,
leading to potential misunderstandings or misinterpretations.
Emojis for Accessibility: There may be opportunities to create emojis that are
specifically designed for people with disabilities, such as emojis that represent sign
language or emojis that are designed to be more inclusive of people with diverse
abilities.
Emojis for Social Causes: There may be opportunities to create emojis that are designed
to raise awareness or promote social causes, such as emojis that represent climate change,
social justice, or mental health awareness.
The "EmotionSense" project falls within the realm of computer vision, machine
learning, and human-computer interaction. It intersects the fields of emotion
recognition, facial expression analysis, and user experience research. The web
application can find applications in many domains, such as psychology, market research,
virtual reality, gaming, and customer feedback analysis.
Hard Disk: 20 GB
RAM: 4 GB
4. PROJECT PLAN
Feb 2023: Introduction to Mini Project - search the topic, collect the related project
information, and create the synopsis.
5. SOFTWARE DESIGN
5.1 DFD
A Data Flow Diagram (DFD) illustrates the flow of data within the EmotionSense
software application, highlighting the interactions between different components and
the data transformations. Here's a high-level representation of the DFD for
EmotionSense:
[Data Flow Diagram: facial expression images flow from image capture through facial
expression processing and facial expression analysis to emotion detection.]
[Flowchart: Start; if a trained model is not available, preprocess the images and train the
deep CNN; load the trained model; input a facial expression image; preprocess the image;
if an expression is detected, output the corresponding emoji.]
The architecture diagram illustrates the high-level structure and organization of the
EmotionSense software application. It demonstrates the components and their
interactions within the system. Here's a simplified representation of the architecture
diagram for EmotionSense:
[Architecture diagram: a <<Main Tkinter Window>> Root component holding Username and
Password fields, associated with one or more (1..*) dependent components.]
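As a hedged illustration of the structure suggested by this diagram (the widget layout and callback names below are assumptions, not project code), a minimal Tkinter login window with Username and Password fields might look like this:

```python
# Hypothetical sketch of the main Tkinter window suggested by the diagram;
# widget names, layout, and the login callback are illustrative only.
import tkinter as tk

def build_login_window():
    root = tk.Tk()                      # the <<Main Tkinter Window>> Root
    root.title("EmotionSense - Login")

    tk.Label(root, text="Username").grid(row=0, column=0, padx=5, pady=5)
    username_entry = tk.Entry(root)
    username_entry.grid(row=0, column=1, padx=5, pady=5)

    tk.Label(root, text="Password").grid(row=1, column=0, padx=5, pady=5)
    password_entry = tk.Entry(root, show="*")
    password_entry.grid(row=1, column=1, padx=5, pady=5)

    def on_login():
        # In the real application this would check the credentials
        # against the users table of the user_detail database.
        print("Login attempt:", username_entry.get())

    tk.Button(root, text="Login", command=on_login).grid(
        row=2, column=0, columnspan=2, pady=10)
    return root

if __name__ == "__main__":
    build_login_window().mainloop()
```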
Use-case diagrams describe the high-level functions and scope of a system. These diagrams
also identify the interactions between the system and its actors. The use cases and actors in
use-case diagrams describe what the system does and how the actors use it, but not how the
system operates internally.
[Use-case diagram: the User actor interacts with the use cases Automatic Face Detection,
Manual Face Detection, Perform Face Detection, and Request Image Classification.]
Database: user_detail
Table: users
1. Username: It stores the unique username for each user. The data type of this field is
VARCHAR.
2. Password: It stores the password associated with the user's account. The data type of this
field is VARCHAR.
3. Data: It is a BLOB field that stores the binary data of the image.
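As an illustrative sketch only (column sizes and constraints are assumptions, not taken from the project), the users table described above could be created with Python's built-in sqlite3 module as follows:

```python
# Sketch of the users table in the user_detail database; VARCHAR lengths
# and the primary-key choice are assumptions for illustration.
import sqlite3

conn = sqlite3.connect("user_detail.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS users (
        username VARCHAR(50) PRIMARY KEY,  -- unique username for each user
        password VARCHAR(50) NOT NULL,     -- password for the user's account
        data     BLOB                      -- binary data of the captured image
    )
    """
)
conn.commit()
conn.close()
```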
6. IMPLEMENTATION DETAILS
The implementation details of the EmotionSense software application involve the usage
of various modules.
1. Data Processing: The module uses the Keras `ImageDataGenerator` to preprocess the
training and validation data by rescaling the pixel values.
2. Model Architecture: The CNN model is created using the Sequential API from Keras. It
contains several Convolution, Pooling, Dropout, and Dense layers to learn features and
classify emotions.
3. Model Training: The model is compiled with categorical cross-entropy loss, Adam
optimizer, and accuracy metrics. If a pre-trained model weights file exists, it is loaded and
evaluated on the training data. Otherwise, the model is trained using the `fit_generator`
function. The module assumes the presence of the emotion labels and emoji images in
specific directories. It requires the installation of OpenCV and Keras libraries with
dependencies.
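A minimal sketch of the three steps above is shown below; the 48x48 grayscale input size, directory names, layer sizes, and number of epochs are assumptions for illustration rather than the exact EmotionSense configuration:

```python
# Illustrative data pipeline, CNN architecture, and training loop; not the
# project's exact settings.
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# 1. Data processing: rescale pixel values into [0, 1]
train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=64)
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/validation", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=64)

# 2. Model architecture: stacked Convolution, Pooling, Dropout, and Dense layers
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(1024, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),    # one output per emotion class
])

# 3. Model training: categorical cross-entropy loss, Adam optimizer, accuracy metric
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit_generator(train_gen, epochs=50, validation_data=val_gen)
model.save_weights("emotionsModel.h5")
```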
It loads pre-trained weights for the model from a file named "emotionsModel.h5". The
emotions are detected by applying Haar cascades for face detection and using the trained
model to predict the emotion from the detected faces. The predicted emotion is then displayed
as text above the face and as an emoji image on the GUI. Additionally, the module includes
functions for capturing and displaying the live video feed from the default camera, as well as
saving the predicted emotion and corresponding emoji to the database. It also provides a
registration function for adding new users to the user_details.db database. The module uses
various image files located in the "images/emojis" and "images" directories to display the
emoji images and GUI background.
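The following sketch illustrates the detection loop described above, combining Haar cascade face detection on the live camera feed with emotion prediction from the trained model. It assumes the `model` built in the previous sketch; the emotion label list and the simplified OpenCV display window are illustrative stand-ins for the Tkinter GUI, emoji overlay, and database logic of the full application:

```python
# Hedged sketch of the live detection loop; label names and paths are assumptions.
import cv2
import numpy as np

emotions = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

model.load_weights("emotionsModel.h5")   # CNN built as in the sketch above
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        probs = model.predict(face.reshape(1, 48, 48, 1) / 255.0)
        label = emotions[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
        # In the full application the matching emoji from images/emojis is
        # shown on the Tkinter GUI and the result is saved to the database.
    cv2.imshow("EmotionSense", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```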
7. TESTING
Introduction
Testing plays a crucial role in ensuring the reliability, accuracy, and effectiveness of the
EmotionSense software application. It involves various techniques and strategies to verify that
the system meets the specified requirements and functions as intended. This section outlines
the different types of testing performed on EmotionSense, including system testing, white box
testing, black box testing, unit testing, integration testing, and performance testing. It also
discusses the test items, test plan, and criteria for determining the pass/fail status of each test.
System Testing
System testing focuses on evaluating the entire EmotionSense system as a whole. It involves
testing the integrated components and their interactions to ensure that they work together
seamlessly. System testing verifies that the application functions correctly, user interface
elements are responsive, and all features are working as expected. Test scenarios include
logging in and registering, capturing video input, performing real-time emotion detection, and
displaying accurate feedback. The aim is to validate that the system meets the defined
requirements and operates reliably under different scenarios.
Test Items
The test items for EmotionSense encompass various components and functionalities,
including the login and registration flow, live video capture, real-time emotion detection,
emoji feedback display, and database storage of results.
Test Plan
The test plan outlines the approach and strategies for testing the EmotionSense application. It
includes a detailed description of test scenarios, test objectives, testing techniques, and the
timeline for each testing phase. The plan defines the roles and responsibilities of the testing
team, the resources required, and the environments in which the tests will be conducted. It
also identifies any specific tools or frameworks used for testing purposes, such as test
automation frameworks or mock data generators.
Unit Testing
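Unit testing exercises individual EmotionSense modules in isolation, such as the image preprocessing step, before they are combined. As an illustration only (the helper `preprocess_face` and its expected output shape are assumptions rather than project code), a minimal pytest-style unit test might look like this:

```python
# Hypothetical unit test for an image preprocessing helper; the function and
# its (1, 48, 48, 1) output shape are assumptions for illustration.
import cv2
import numpy as np

def preprocess_face(face_img):
    """Resize a grayscale face crop to 48x48 and scale pixels to [0, 1]."""
    face = cv2.resize(face_img, (48, 48)) / 255.0
    return face.reshape(1, 48, 48, 1)

def test_preprocess_face_shape_and_range():
    fake_face = np.random.randint(0, 256, size=(120, 96), dtype=np.uint8)
    result = preprocess_face(fake_face)
    assert result.shape == (1, 48, 48, 1)
    assert result.min() >= 0.0 and result.max() <= 1.0
```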
Integration Testing
Integration testing verifies the interactions and communication between different components
of the EmotionSense application. It ensures that the integrated components work together
seamlessly and produce the expected results. Integration tests validate the integration of the
user interface, login system, emotion detection module, feedback generation, and database
management. The goal is to identify any inconsistencies or issues that may arise due to the
interactions between these components.
Performance Testing
Performance testing evaluates the EmotionSense application's performance under various load
conditions and measures its responsiveness and scalability. It tests the system's ability to
handle a large number of concurrent users and process video input in real-time. Performance
testing also assesses the application's resource usage, response times, and stability under
different workloads. The objective is to ensure that the application performs optimally and
meets the performance requirements defined for EmotionSense.
By conducting thorough testing using a combination of system testing, white box testing, black
box testing, unit testing, integration testing, and performance testing, the EmotionSense
application can be validated for its functionality, reliability, accuracy, and performance. The
testing process ensures a robust and user-friendly application that accurately detects and
analyzes user emotions.
8. SNAPSHOTS
1. Login Page
9. CONCLUSION
The EmotionSense project has successfully addressed the challenge of accurately conveying
emotions in facial-expression-based communication. By leveraging computer vision and
machine learning techniques, we have developed a web application that predicts and suggests
appropriate emojis based on the emotional content of a given facial expression. The
EmotionSense system offers users an intuitive and interactive web application, enabling
them to effortlessly enhance their facial communication with expressive emojis. The
EmotionSense web application enhances the user experience, allowing individuals to
personalize their communication style and better convey their intended emotions. The
EmotionSense project has not only fulfilled the goal of bridging the gap between facial
expressions and the emotions they convey but also contributed to fostering better emotional
understanding in online interactions. Automating the selection of appropriate emojis lets
users convey their emotions effectively, reducing the chances of misinterpretations and
misunderstandings. The EmotionSense project has made significant strides in improving
emotional expression and understanding in digital communication, enhancing the overall
communication experience.
10. FUTURE SCOPE
While the current version of EmotionSense successfully detects and analyzes emotions from
facial expressions, there are several areas that can be explored to enhance the application and
expand its capabilities:
Multi-platform support: Currently, the application is developed in Python.
Expanding the application to other platforms, such as mobile devices (iOS, Android), would
increase its accessibility and reach.
Enhanced emotion detection: Continual improvement of the emotion detection algorithms can
be pursued to enhance the accuracy and robustness of the application. Incorporating state-of-
the-art machine learning techniques, such as deep learning models, could lead to improved
emotion recognition results.
11. REFERENCES
2. OpenCV. (n.d.). OpenCV: Open Source Computer Vision Library. Retrieved from
https://opencv.org/
4. NumPy. (n.d.). NumPy: The fundamental package for scientific computing with Python.
Retrieved from https://numpy.org/
6. Keras. (n.d.). Keras: Deep Learning for humans. Retrieved from https://keras.io/
7. The Python Software Foundation. (n.d.). The Python Standard Library - SQLite3.
Retrieved from https://docs.python.org/3/library/sqlite3.html
10. Das, A. (2020). A Comprehensive Guide to Convolutional Neural Networks — the ELI5
way. Retrieved from https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53