
Literature Survey of s7 Project

This document compiles literature surveys of ten papers on detecting emotions and sentiment from text, centered on BERT-based deep learning models. The surveys cover the BERT-CNN hybrid for emotion classification, fine-grained and aspect-based sentiment classification with BERT, comparative analyses of transformer variants such as RoBERTa, DistilBERT, and XLNet, and emotion detection systems for dialogue and social media. Each survey summarizes the reviewed methods, the reported results, and directions for future research.


Paper 1

Literature Survey of BERT-CNN: A Deep Learning Model for Detecting Emotions from Text
The literature survey in the paper "BERT-CNN: A Deep Learning Model for Detecting Emotions
from Text" provides a comprehensive overview of the advancements in the field of emotion
recognition from text. Here are the key points from the paper:
Categories of Emotion Recognition: The existing studies in emotion recognition can be broadly
categorized into two main approaches:
Traditional methods that rely on rule-based or statistical techniques.
Deep learning approaches that utilize neural networks for more sophisticated analysis and
classification of emotions from text [1].
Importance of Emotion Detection: The paper emphasizes that detecting emotions in text is a
significant challenge in Natural Language Processing (NLP). Emotions are integral to human
behavior and have been extensively studied in psychology and behavioral sciences, highlighting
their relevance in understanding user sentiments [2].
Advancements in Deep Learning: The literature review discusses how deep learning models,
particularly those combining different architectures, have shown promising results in emotion
detection tasks. The proposed BERT-CNN model is positioned within this context, showcasing
its innovative approach by integrating Bidirectional Encoder Representations from Transformers
(BERT) with Convolutional Neural Networks (CNN) for improved textual classification [2].
Comparative Analysis: The paper also mentions that the BERT-CNN model was evaluated
against various state-of-the-art models in the literature, demonstrating its superior performance on
datasets like SemEval-2019 Task 3 and ISEAR. This comparative analysis is crucial for
establishing the effectiveness of the proposed model in the context of existing research [2].
Future Directions: The literature survey hints at future research directions, including the
exploration of other pretrained transformer models and methodologies to enhance emotion
detection capabilities. This indicates an ongoing evolution in the field, where new techniques and
models are continuously being developed to address the complexities of emotion recognition
from text [3].
In summary, the literature survey in this paper not only reviews the current state of emotion
detection methodologies but also sets the stage for the introduction of the BERT-CNN model as a
significant contribution to the field.

Paper 2
BERT-CNN: A Deep Learning Model for Detecting Emotions from Text
Ahmed R. Abas, Ibrahim Elhenawy, Mahinda Zidan and Mahmoud Othman
Literature Survey of the BERT-CNN Paper
The literature survey in the paper "BERT-CNN: A Deep Learning Model for Detecting Emotions
from Text" provides a comprehensive overview of the advancements in emotion recognition,
particularly focusing on two main categories:
Traditional Methods: The paper discusses earlier approaches to emotion recognition that relied
on conventional techniques. These methods often utilized rule-based systems or simple machine
learning algorithms, which may not capture the complexities of human emotions effectively.
Deep Learning Approaches: The survey highlights the shift towards deep learning methods,
which have gained prominence due to their ability to learn complex patterns from large datasets.
The paper emphasizes that recent studies have shown significant improvements in emotion
detection accuracy through the use of deep learning models, particularly those that leverage
neural networks.
The literature review serves as a foundation for the proposed BERT-CNN model, which combines
Bidirectional Encoder Representations from Transformers (BERT) with Convolutional Neural
Networks (CNN) for enhanced textual classification. This model aims to address the limitations
of previous methods by utilizing BERT's capability to generate dynamic semantic representations
of words based on their context, which is then processed by CNN for emotion
classification [1] [2] [3].
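To make the architecture concrete, the sketch below shows one way such a hybrid can be wired in PyTorch with the Hugging Face transformers library; the checkpoint name, kernel widths, and filter count are illustrative assumptions, not the authors' exact configuration.

    # Hypothetical BERT-CNN sketch; hyperparameters are illustrative
    # assumptions, not the configuration reported in the paper.
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class BertCNN(nn.Module):
        def __init__(self, num_emotions, kernel_sizes=(2, 3, 4), num_filters=100):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert.config.hidden_size  # 768 for bert-base
            # One 1-D convolution per kernel width, applied over the token axis.
            self.convs = nn.ModuleList(
                nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
            )
            self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_emotions)

        def forward(self, input_ids, attention_mask):
            # Dynamic, context-dependent token embeddings from BERT.
            states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
            x = states.transpose(1, 2)  # (batch, hidden, seq_len) for Conv1d
            # Convolve, apply ReLU, then max-pool each feature map over time.
            pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
            return self.classifier(torch.cat(pooled, dim=1))  # emotion logits
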
Additionally, the paper notes that the proposed model has been evaluated against existing state-
of-the-art models, demonstrating superior performance on datasets such as SemEval-2019 Task 3
and ISEAR. This comparative analysis underscores the effectiveness of the BERT-CNN model in
the context of emotion detection from text, further validating the need for advanced
methodologies in this field [3].
In summary, the literature survey in this paper not only reviews the evolution of emotion
recognition techniques but also sets the stage for the introduction of the BERT-CNN model,
which aims to push the boundaries of what is achievable in emotion detection from textual data.

Paper 3
Fine-grained Sentiment Classification using BERT
Literature Survey of Fine-grained Sentiment Classification using BERT
The literature survey in this paper highlights significant advancements and methodologies in
sentiment classification, particularly focusing on fine-grained sentiment analysis. Here are the
key points from the paper:
Popularity of Sentiment Classification: Sentiment classification is a widely researched area in
Natural Language Processing (NLP), with many studies aimed at improving accuracy in this task.
The paper notes that most research has concentrated on binary sentiment classification due to the
availability of large datasets like the IMDb movie review dataset [1].
Embedding Techniques: The initial step in sentiment classification involves converting text into
fixed-size vectors through embedding. Early approaches included learning word embeddings,
with notable contributions from Mikolov et al. and Pennington et al., who developed methods for
creating semantic representations of words from large text corpora [1].
Context-Free vs. Contextual Embeddings: Traditional methods generated context-free
embeddings, meaning that words had the same representation regardless of their context (e.g.,
"bank" in different phrases). Recent advancements have shifted towards contextual embeddings,
which consider the surrounding words to provide more accurate representations. This shift is
exemplified by the work of Peters et al. and Devlin et al., who introduced BERT, a model that
generates deep bidirectional representations [1].
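The difference is easy to demonstrate: a static lookup table returns one vector per word type, whereas BERT returns different vectors for the same word in different sentences. A minimal illustration follows; the sentences and checkpoint name are arbitrary choices.

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    def bank_vector(sentence):
        # Return BERT's contextual embedding of the token "bank".
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            states = model(**enc).last_hidden_state[0]  # (seq_len, 768)
        idx = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids("bank"))
        return states[idx]

    v1 = bank_vector("She deposited the money at the bank.")
    v2 = bank_vector("They sat on the grassy bank of the river.")
    # A context-free embedding would make this exactly 1.0; BERT does not.
    print(torch.cosine_similarity(v1, v2, dim=0))
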
Fine-grained Sentiment Classification: The Stanford Sentiment Treebank (SST) dataset is
highlighted as a significant resource for fine-grained sentiment classification. Various models,
including LSTM networks and CNNs, have been applied to this dataset, but the paper emphasizes
that most existing approaches have not utilized BERT for the SST-5 dataset, which is the focus of
their research [1] [2].
Transfer Learning in NLP: The paper discusses the effectiveness of transfer learning in NLP,
particularly with BERT, which has shown promising results across various tasks. The authors aim
to explore its application in fine-grained sentiment classification, motivated by the success of
BERT in other sentiment analysis tasks [2].
Comparison with Existing Models: The results section of the paper indicates that their model,
despite its simplicity, outperforms many sophisticated models in terms of accuracy on the SST
datasets, showcasing the potential of BERT in this domain [3].
This literature survey provides a comprehensive overview of the evolution of sentiment
classification techniques, emphasizing the transition from traditional methods to advanced models
like BERT, and sets the stage for the authors' contributions to fine-grained sentiment analysis.

Paper 4
Target-Dependent Sentiment Classification With BERT
Literature Survey of Target-Dependent Sentiment Classification With BERT

Aspect-Based Sentiment Analysis (ABSA): The paper discusses three critical tasks in ABSA,
which include representing the entire context where a target appears, generating a representation
of the target itself, and identifying the significant parts of the context that influence sentiment
judgment for that specific target. This highlights the importance of text representation in natural
language processing tasks [1].
Representation Techniques: The paper emphasizes that effective representation of both target
and context is crucial for the subsequent classification model, which assigns sentiment labels.
Various models, ranging from traditional machine learning to advanced neural networks, are
explored, indicating that the performance of these models heavily relies on the quality of text
representation [1].
BERT's Role: The introduction of BERT (Bidirectional Encoder Representations from
Transformers) is significant in the literature, as it has set new benchmarks in multiple NLP tasks,
including sentiment classification. The paper notes that while BERT has shown remarkable
performance in sentence-level sentiment classification, its application in aspect-level sentiment
analysis is less explored [2][3].
Challenges with Existing Models: The authors point out that simply integrating BERT with
existing neural network models does not necessarily enhance performance. Instead, they found
that these models often perform better with context-independent representations. This suggests a
need for models that can effectively incorporate target information to improve sentiment
classification outcomes [2][3].
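One plausible reading of this idea, sketched below, is to classify from BERT's contextual states at the target's own token positions rather than from the generic [CLS] vector; the pooling choice and label count are assumptions for illustration, not a verbatim reproduction of TD-BERT.

    import torch
    import torch.nn as nn
    from transformers import BertModel

    class TargetPooledClassifier(nn.Module):
        def __init__(self, num_labels=3):  # positive / negative / neutral
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            self.head = nn.Linear(self.bert.config.hidden_size, num_labels)

        def forward(self, input_ids, attention_mask, target_mask):
            # target_mask is 1 at the target's token positions, 0 elsewhere.
            states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
            masked = states.masked_fill(target_mask.unsqueeze(-1) == 0, -1e9)
            pooled = masked.max(dim=1).values  # max-pool over target tokens only
            return self.head(pooled)
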
Multi-Target Sentiment Analysis: The paper also addresses the complexities of multi-target
sentiment analysis, where multiple targets within a single text can influence sentiment. The
authors argue that previous models have not adequately tackled the inter-correlations among
multiple targets, which presents a significant challenge in fine-grained sentiment analysis. They
propose that their TD-BERT model can be extended to handle these multi-target scenarios
effectively [4][5].
Empirical Results: The experiments conducted in the paper demonstrate that the TD-BERT
model outperforms existing models, particularly in complex scenarios with inconsistent sentiment
polarities. This empirical evidence supports the effectiveness of their approach in advancing the
field of aspect-based sentiment analysis [5][3].
In summary, the literature survey within this paper highlights the evolution of sentiment analysis
techniques, the pivotal role of BERT, and the challenges faced in multi-target sentiment scenarios,
ultimately leading to the development of the TD-BERT model.

Paper 5
BERT: Pre-training of Deep Bidirectional Transformers for
Language Understanding
Literature Survey of BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding
The paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
presents a significant advancement in the field of natural language processing (NLP) by
introducing the BERT model. Here is a survey of the key points from the paper:
Historical Context of Language Representation: The paper acknowledges a long history of
pre-training general language representations, indicating that various approaches have been
explored prior to BERT. This sets the stage for understanding the evolution of language models
and the limitations of earlier methods [1].
Limitations of Previous Models: The authors critique existing techniques, particularly those that
employ unidirectional language models, such as OpenAI GPT, which only allow tokens to attend
to previous tokens. This unidirectionality is seen as a significant limitation for tasks requiring
comprehensive context, such as question answering [2]. The paper emphasizes that these
restrictions hinder the effectiveness of fine-tuning approaches, especially for token-level tasks [2].
Introduction of BERT: BERT stands for Bidirectional Encoder Representations from
Transformers. It is designed to overcome the limitations of unidirectional models by using a
masked language model (MLM) objective, which allows the model to consider both left and right
contexts during training. This approach is inspired by the Cloze task and enables the model to
learn richer representations of language [3].
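The Cloze-style objective can be seen directly with a fill-mask demonstration; the sentence below is an arbitrary example, and the point is that the model must use context on both sides of the mask to rank candidates.

    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")
    for cand in fill("The doctor prescribed some [MASK] for the pain."):
        print(cand["token_str"], round(cand["score"], 3))
    # Plausible completions such as "medication" rank highly only because
    # the model attends to the words both before and after [MASK].
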
Pre-training and Fine-tuning Framework: The paper outlines a two-step framework consisting
of pre-training on unlabeled data and fine-tuning on labeled data for specific tasks. This method
allows BERT to achieve state-of-the-art results across various NLP tasks with minimal task-
specific modifications [4] [5].
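In code, the fine-tuning step amounts to loading the pre-trained weights, adding one task head, and training everything end to end on labeled examples; the toy data and label count below are placeholders.

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2  # placeholder task: binary sentiment
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    texts, labels = ["a great movie", "a dull movie"], [1, 0]  # toy data
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(labels))
    out.loss.backward()  # gradients flow into every pre-trained layer
    optimizer.step()     # all parameters are fine-tuned, not just the head
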
Performance Metrics: BERT has demonstrated significant improvements over previous models,
achieving new state-of-the-art results on eleven NLP tasks, including substantial gains in GLUE
scores and question answering benchmarks like SQuAD [5].
Comparison with Other Models: The paper discusses alternatives like ELMo, which uses
separate left-to-right (LTR) and right-to-left (RTL) models. However, BERT's single bidirectional
model is presented as more efficient and powerful, as it utilizes context from both directions at
every layer, enhancing its performance on various tasks [6].
In summary, the literature survey highlights the evolution of language representation models,
critiques their limitations, and positions BERT as a transformative approach that leverages
bidirectional context to improve performance across a range of NLP tasks.

Paper 6
Comparative Analyses of BERT, RoBERTa, DistilBERT, and XLNet for
Text-based Emotion Recognition

Literature Survey of the Paper


The literature survey in this paper focuses on the advancements in emotion recognition,
particularly through the use of transformer models. Here are the key points from the paper:
Emotion Recognition Overview: The paper emphasizes that emotion detection is a nuanced
extraction of user sentiments, with text-based emotion recognition being a significant sub-branch.
This method aims to derive fine-grained emotions from textual data, which is crucial for
understanding user sentiments more accurately [1].
Challenges in Sentiment Analysis: It highlights the limitations of traditional sentiment analysis
(SA), which often lacks the granularity needed for effective user profiling. The paper argues that
a more detailed approach is necessary to capture the subtleties of user emotions, especially in the
context of social media [1].
Advancements in Model Architecture: The introduction of transformer models, particularly
BERT, RoBERTa, DistilBERT, and XLNet, is noted as a breakthrough in addressing the
challenges of long-term dependencies in text and parallel processing. These models have shown
significant improvements in various natural language processing (NLP) tasks, including emotion
recognition [1] [2].
Comparative Analysis of Models: The paper aims to analyze the efficacy of the aforementioned
transformer models on the International Survey on Emotion Antecedents and Reactions (ISEAR)
dataset. It seeks to fill a gap in the literature by providing a comparative study of these models
regarding their performance metrics such as accuracy, precision, and recall [3].
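These metrics are computed in the standard way; the sketch below shows how such a comparison is typically scored with scikit-learn, using toy labels rather than results from the paper.

    from sklearn.metrics import accuracy_score, precision_recall_fscore_support

    y_true = ["joy", "fear", "anger", "joy", "sadness"]  # placeholder labels
    y_pred = ["joy", "anger", "anger", "joy", "fear"]    # placeholder predictions

    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
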
Previous Works: The literature review also references previous studies, such as the work by
Polignano et al., which combined Bi-LSTM, Self-Attention, and CNNs for emotion recognition.
Their findings indicated that using robust pre-trained word embeddings, like fastText, can
significantly enhance model performance across various datasets, including ISEAR [4] [5].
Research Gap: The paper identifies a lack of comparative analyses specifically focusing on the
performance of BERT, RoBERTa, DistilBERT, and XLNet on the ISEAR dataset, which
underscores the novelty and relevance of the current research [3].
This literature survey sets the stage for the paper's contributions by contextualizing the research
within existing studies and highlighting the need for further exploration in the field of emotion
recognition using advanced transformer models.

Paper 7
Emotion and sentiment analysis of tweets using BERT
Literature Survey of the Paper
The paper "Emotion and sentiment analysis of tweets using BERT" explores the growing field of
sentiment analysis, particularly in the context of user-generated content on social media platforms
like Twitter. Here’s a summary of the relevant literature discussed in the paper:
Increasing Interest in Sentiment Analysis: The paper highlights that the rise of user-generated
content has led to a surge in research focused on automatic sentiment analysis. This area has
gained significant attention due to the vast amount of data available online, which presents both
opportunities and challenges for analysis [1].
Deep Learning Techniques: It notes that deep learning methods have become prevalent in
sentiment analysis, with various surveys documenting their effectiveness. These techniques often
outperform traditional methods, showcasing the evolution of sentiment analysis approaches over
recent years [1].
Word Embeddings: A critical aspect of sentiment analysis is the vector representation of words,
typically achieved through word embeddings. The paper discusses popular methods like
Word2Vec and GloVe, which transform words into continuous vector representations. These
embeddings are essential for capturing semantic meanings in text [1].
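A toy example of learning such static vectors with gensim's Word2Vec follows; the two-sentence corpus is a placeholder, whereas real embeddings are trained on corpora with billions of tokens.

    from gensim.models import Word2Vec

    corpus = [
        ["the", "movie", "was", "great"],
        ["the", "film", "was", "terrible"],
    ]
    model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1)
    print(model.wv["movie"].shape)  # one fixed 50-d vector per word type
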
Refinements in Embedding Techniques: The literature also covers advancements in embedding
techniques, such as sentiment-specific word embeddings (SSWE) that incorporate sentiment
information into the learned vectors. This refinement aims to enhance the performance of
sentiment analysis models by embedding both semantic and sentiment-related information [1].
BERT and Its Applications: The paper emphasizes the use of Bidirectional Encoder
Representations from Transformers (BERT) for sentiment analysis and emotion recognition.
BERT's architecture allows for better understanding of context in text, making it a suitable choice
for analyzing tweets, which often contain informal language and abbreviations [2].
Real-World Applications: The analysis of customer opinions through sentiment analysis is
highlighted as a valuable tool for businesses. It helps in identifying issues and suggesting
improvements, thereby driving innovation and enhancing customer satisfaction [3].
Performance Evaluation: The paper concludes with a discussion on the performance of the
proposed BERT-based models, achieving high accuracy rates in sentiment and emotion analysis,
which underscores the effectiveness of using advanced language models in this domain [4].
This literature survey illustrates the evolution of sentiment analysis techniques and the significant
role of deep learning and advanced models like BERT in enhancing the accuracy of emotion and
sentiment detection in social media data.

Paper 8
EmoDet2: Emotion Detection in English Textual Dialogue using BERT and
BiLSTM Models
Literature Survey of EmoDet2
The paper "EmoDet2: Emotion Detection in English Textual Dialogue using BERT and BiLSTM
Models" presents a comprehensive overview of previous research in the field of emotion
detection, highlighting various methodologies and models that have been explored. Here are the
key points from the literature survey:
Definition of Emotions: Emotions are described as complex states of feeling influenced by
physical and psychological changes, which vary based on the mood and personality of the speaker.
This foundational understanding sets the stage for emotion detection research [1].
Basic Emotions: The work of Ekman is referenced, identifying six basic emotions: anger, disgust,
fear, happiness, sadness, and surprise. This classification has been pivotal in developing
algorithms for emotion recognition [1].
Machine Learning Approaches: Various researchers have employed machine learning
techniques to understand emotions. For instance, Chatterjee et al. utilized an LSTM model with
two types of word embeddings: semantic (GloVe) and sentiment-specific (SSWE) [1].
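A hedged sketch of that kind of model is shown below: two embedding channels concatenated and fed to an LSTM; all dimensions are placeholder assumptions rather than the configuration used by Chatterjee et al.

    import torch
    import torch.nn as nn

    class DualEmbeddingLSTM(nn.Module):
        def __init__(self, vocab_size, sem_dim=300, sent_dim=50,
                     hidden=128, num_emotions=4):
            super().__init__()
            self.sem = nn.Embedding(vocab_size, sem_dim)    # e.g. GloVe-initialized
            self.sent = nn.Embedding(vocab_size, sent_dim)  # e.g. SSWE-initialized
            self.lstm = nn.LSTM(sem_dim + sent_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, num_emotions)

        def forward(self, token_ids):
            # Concatenate semantic and sentiment-specific views of each token.
            x = torch.cat([self.sem(token_ids), self.sent(token_ids)], dim=-1)
            _, (h, _) = self.lstm(x)
            return self.out(h[-1])  # logits over emotion classes
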
Previous Models: The EmoDet model, built using data from the SemEval-2019 workshop,
combined fully connected neural networks with LSTM architectures. This model served as a
benchmark for subsequent research [1].
Sentiment Detection in Other Languages: The SEDAT model focused on detecting sentiments
and emotions in Arabic tweets, employing CNN-LSTM architectures and various embeddings.
This indicates the broader applicability of emotion detection models across languages [1].
Emotional Chatbots: Research on EmoNet aimed at creating emotional chatbots, which
underscores the importance of understanding human emotions for better interaction in
conversational agents [1].
Feature Extraction Techniques: The literature also discusses the use of different feature
extraction methods, such as fastText embeddings and attention mechanisms in BiLSTM models,
which enhance the model's ability to capture emotional nuances [1].
Traditional Machine Learning: Some studies have utilized traditional machine learning
methods like Logistic Regression and Support Vector Machines, showcasing a range of
approaches in the field [1].
This literature survey illustrates the evolution of emotion detection methodologies, emphasizing
the transition from traditional machine learning to advanced deep learning techniques, which the
EmoDet2 system builds upon. The paper positions its contributions within this rich context of
prior research, demonstrating how it advances the state of the art in emotion detection in textual
dialogue.

Paper 9
Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing
Auxiliary Sentence

Literature Survey of Utilizing BERT for Aspect-Based Sentiment Analysis

Sentiment Analysis Overview: The paper discusses sentiment analysis (SA) as a crucial task in
natural language processing, focusing on the computational processing of opinions, emotions, and
subjectivity. It highlights the importance of SA in both academia and industry, particularly in
analyzing online reviews to gather customer opinions on products and services [1].
Aspect-Based Sentiment Analysis (ABSA): The authors introduce aspect-based sentiment
analysis (ABSA), which aims to identify fine-grained opinion polarity towards specific aspects of
a product or service. This is essential for users to evaluate sentiments related to different aspects,
such as quality and price, within a single comment [1].
Targeted Aspect-Based Sentiment Analysis (TABSA): The paper further narrows down to
targeted aspect-based sentiment analysis (TABSA), which identifies opinion polarity towards
specific aspects associated with given targets. This task is divided into two steps: determining
aspects related to each target and resolving the polarity of these aspects [1].
Evolution of Methods: The literature review indicates that early works on TABSA relied heavily
on feature engineering, while more recent approaches have utilized neural network-based
methods to achieve higher accuracy. The authors note the incorporation of commonsense
knowledge into deep learning models to enhance performance [1].
Pre-trained Language Models: The paper emphasizes the effectiveness of pre-trained language
models like BERT, ELMo, and OpenAI GPT in reducing the effort required for feature
engineering. However, it points out that the direct application of BERT in TABSA has not yielded
significant improvements, suggesting that the model's potential is not fully utilized in this
context [1].
Proposed Methodology: The authors propose a novel approach by converting TABSA into a
sentence-pair classification task, akin to question answering (QA) and natural language inference
(NLI). This transformation allows for better utilization of BERT's capabilities, leading to state-of-
the-art results on benchmark datasets [1] [2].
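The transformation itself is mechanical; the sketch below constructs one auxiliary question per (target, aspect) pair and encodes it with the review as a BERT sentence pair. The question template is an assumption for illustration, not necessarily the paper's exact wording.

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

    review = "LOCATION1 is central but the streets feel unsafe at night."
    targets, aspects = ["LOCATION1"], ["price", "safety"]

    for t in targets:
        for a in aspects:
            auxiliary = f"what do you think of the {a} of {t} ?"
            # BERT sees [CLS] review [SEP] auxiliary [SEP]; a classifier over
            # the [CLS] vector then predicts none / positive / negative.
            enc = tokenizer(review, auxiliary, return_tensors="pt")
            print(a, enc.input_ids.shape)
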
Comparative Experiments: The paper includes comparative experiments that demonstrate the
superiority of their sentence-pair classification method over traditional single-sentence
classification approaches, indicating that the improvements stem from both the BERT model and
the proposed methodology [1] [3].
This literature survey encapsulates the evolution of sentiment analysis techniques, the challenges
faced in TABSA, and the innovative solutions proposed by the authors to leverage BERT
effectively.

Paper 10
Exploiting BERT for End-to-End Aspect-based Sentiment Analysis

Literature Survey of the Paper


The paper "Exploiting BERT for End-to-End Aspect-based Sentiment Analysis" explores the
advancements in Aspect-based Sentiment Analysis (ABSA) by leveraging the capabilities of
BERT, a pre-trained language model. Here’s a summary of the relevant literature and context
surrounding this research:
Aspect-based Sentiment Analysis (ABSA): ABSA aims to identify users' sentiments towards
specific aspects mentioned in texts. It can be categorized into different tasks, including original
ABSA, Aspect-oriented Opinion Words Extraction (AOWE), and End-to-End Aspect-based
Sentiment Analysis (E2E-ABSA) [1]. The original ABSA focuses on sentiment classification,
while E2E-ABSA combines aspect detection and sentiment prediction into a single task [1].
Existing Models and Limitations: Prior models for ABSA often utilized task-agnostic pre-
trained word embeddings like Word2Vec or GloVe, which provided context-independent features.
This approach has shown limitations in capturing complex semantic dependencies, leading to a
bottleneck in performance improvements [1]. The paper notes that while some models have
attempted to integrate contextualized embeddings with deep learning architectures, the focus has
often been on task-specific designs rather than exploring the potential of contextualized
embeddings like BERT for E2E-ABSA [1].
BERT and Contextualized Embeddings: The paper emphasizes the significance of BERT,
which offers deep contextualized embeddings that can enhance the performance of E2E-ABSA
tasks. The authors build on previous work that utilized BERT for ABSA but aim to investigate its
modeling power without developing a task-specific architecture [1]. They propose a series of
simple neural baselines that utilize BERT as a feature extractor or fine-tune it for the task [1].
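A minimal sketch of such a baseline appears below: BERT's per-token states followed by a single linear layer that emits one tag per token (a unified tag set such as O, B-POS, I-POS, B-NEG is assumed here for illustration).

    import torch.nn as nn
    from transformers import BertModel

    class BertLinearTagger(nn.Module):
        def __init__(self, num_tags=7):  # assumed unified tag inventory
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            self.head = nn.Linear(self.bert.config.hidden_size, num_tags)

        def forward(self, input_ids, attention_mask):
            states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
            # Per-token logits: (batch, seq_len, num_tags); aspect spans and
            # their sentiment are decoded jointly from the tag sequence.
            return self.head(states)
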
Experimental Validation: The authors conducted experiments on datasets from SemEval,
demonstrating that even with a simple linear classification layer, their BERT-based architecture
outperformed existing state-of-the-art methods. This highlights the effectiveness of using BERT
for E2E-ABSA and establishes a benchmark for future research in this area [2].
Standardization in Comparative Studies: The paper also addresses the need for standardized
methodologies in comparative studies, advocating for the consistent use of hold-out development
datasets for model selection, which has been largely overlooked in previous works [2].
This literature survey illustrates the evolution of ABSA methodologies, the limitations of earlier
models, and the promising role of BERT in enhancing sentiment analysis tasks.
