Sentiment Analysis
Abstract. This paper covers the two main approaches to sentiment analysis: i) the lexicon-based method; ii) the machine learning method. We describe several techniques to implement these approaches and discuss how they can be adopted for sentiment classification of Twitter messages. We present a comparative study of different lexicon combinations and show that enhancing sentiment lexicons with emoticons, abbreviations and social-media slang expressions increases the accuracy of lexicon-based classification for Twitter. We discuss the importance of feature generation and feature selection processes for machine learning sentiment classification. To quantify the performance of the main sentiment analysis methods over Twitter we ran these algorithms on a benchmark Twitter dataset from the SemEval-2013 competition, task 2-B. The results show that the machine learning method based on SVM and Naive Bayes classifiers outperforms the lexicon method. We present a new ensemble method that uses a lexicon-based sentiment score as an input feature for the machine learning approach. The combined method proved to produce more precise classifications. We also show that employing a cost-sensitive classifier for highly unbalanced datasets yields an improvement of sentiment classification performance of up to 7%.
Keywords: sentiment analysis, social media, Twitter, natural language processing, lex-
icon, emoticons
1 Introduction
Sentiment analysis is an area of research that investigates people's opinions towards different matters: products, events, organisations (Bing, 2012). The role of sentiment anal-
ysis has been growing significantly with the rapid spread of social networks, microblog-
ging applications and forums. Today, almost every web page has a section for the users
to leave their comments about products or services, and share them with friends on
Facebook, Twitter or Pinterest - something that was not possible just a few years ago.
Mining this volume of opinions provides information for understanding collective hu-
man behaviour and it is of valuable commercial interest. For instance, an increasing
amount of evidence points out that by analysing sentiment of social-media content it
might be possible to predict the size of the markets (Bollen et al., 2010) or unemploy-
ment rates over time (Antenucci et al., 2014).
One of the most popular microblogging platforms is Twitter. It has been growing
steadily for the last several years and has become a meeting point for a diverse range of
people: students, professionals, celebrities, companies and politicians. This popularity
of Twitter results in the enormous amount of information being passed through the
service, covering a wide range of topics from people well-being to the opinions about
the brands, products, politicians and social events. In this context Twitter becomes a powerful tool for predictions. For example, Asur and Huberman (2010) were able to predict from Twitter analytics the amount of opening-weekend ticket sales for movies with 97.3% accuracy, higher than that achieved by the Hollywood Stock Exchange, a well-known prediction tool for movies.
In this paper, we present a step-by-step approach for two main methods of senti-
ment analysis: lexicon based approach (Taboada et al., 2011), (Ding et al., 2008) and
machine learning approach (Pak and Paroubek, 2010). We show that the accuracy of sentiment analysis for Twitter can be improved by combining the two approaches: dur-
ing the first stage a lexicon score is calculated based on the polarity of the words which
compose the text, during the second stage a machine learning model is learnt that uses
the lexicon score as one of the features. The results showed that the combined approach outperforms both individual approaches. We demonstrate the use of our algorithm on a dataset
from a popular Twitter sentiment competition SemEval-2013, task 2-B (Nakov et al.,
2013). In (Souza et al., 2015) our algorithm for sentiment analysis is also successfully
applied to 42,803,225 Twitter messages related to companies from the retail sector to
predict the stock price movements.
2 Literature Review

The field of text categorization was initiated a long time ago (Salton and McGill, 1983); however, categorization based on sentiment was introduced more recently in (Das and Chen, 2001; Morinaga et al., 2002; Pang et al., 2002; Tong, 2001; Turney, 2002; Wiebe, 2000).
The standard approach for text representation (Salton and McGill, 1983) has been the bag-of-words method (BOW). According to the BOW model, the document is represented as a vector of words in Euclidean space where each word is independent of the others. This bag of individual words is commonly called a collection of unigrams. The BOW is easy to understand and achieves high performance (for example, the best results of multi-label categorization for the Reuters-21578 dataset were produced using the BOW approach (Dumais et al., 1998; Weiss et al., 1999)).
The two main methods of sentiment analysis, the lexicon-based method (unsupervised approach) and the machine learning based method (supervised approach), both rely on the bag-of-words. In the supervised machine learning method the classifiers use unigrams or their combinations (N-grams) as features. In the lexicon-based method the unigrams found in the lexicon are assigned a polarity score, and the overall polarity score of the text is then computed as the sum of the polarities of the unigrams.
When deciding which lexicon elements of a message should be considered for
sentiment analysis, different parts-of-speech were analysed (Pak and Paroubek, 2010;
Kouloumpis et al., 2011). Benamara et al. proposed the Adverb-Adjective Combina-
tions (AACs) approach that demonstrates the use of adverbs and adjectives to detect
sentiment polarity (Benamara et al., 2007). In recent years the role of emoticons has
been investigated (Pozzi et al., 2013a; Hogenboom et al., 2013; Liu et al., 2012; Zhao
et al., 2012). In a recent study, Fersini et al. (2015) further explored the use of (i)
adjectives, (ii) emoticons, emphatic and onomatopoeic expressions and (iii) expressive
lengthening as expressive signals in sentiment analysis of microblogs. They showed
that the above signals can enrich the feature space and improve the quality of sentiment
classification.
Advanced algorithms for sentiment analysis have been developed (see (Jacobs, 1992;
Vapnik, 1998; Basili et al., 2000; Schapire and Singer, 2000)) to take into consideration
not only the message itself, but also the context in which the message is published, who
is the author of the message, who are the friends of the author, what is the underlying
structure of the network. For instance, (Hu et al., 2013) investigated how social relations can help sentiment analysis by introducing a Sociological Approach to handling Noisy and short Texts (SANT), while (Zhu et al., 2014) showed that the quality of sentiment
clustering for Twitter can be improved by joint clustering of tweets, users, and features.
In the work by (Pozzi et al., 2013b) the authors looked at friendship connections and
estimated user polarities about a given topic by integrating post contents with approval
relations. Quanzeng You and Jiebo Luo improved sentiment classification accuracy by
adding a visual content in addition to the textual information (You and Luo, 2013).
Aisopos et al. significantly increased the accuracy of sentiment classification by using
content-based features along with context-based features (Aisopos et al., 2012). Saif et al. achieved improvements by growing the feature space with semantic features (Saif
et al., 2012).
While many research works focused on finding the best features, some efforts have
been made to explore new methods for sentiment classification. Wang et al. evaluated
the performance of ensemble methods (Bagging, Boosting, Random Subspace) and em-
pirically proved that ensemble models can produce better results than the base learners
(Wang et al., 2014). Fersini et al. proposed to use Bayesian Model Averaging ensem-
ble method which outperformed both traditional classification and ensemble methods
(Fersini et al., 2014). Carvalho et al. employed genetic algorithms to find subsets of
words from a set of paradigm words that led to improvement of classification accuracy
(Carvalho et al., 2014).
3 Data Pre-processing

Before applying any sentiment extraction method, it is common practice to perform data pre-processing. Pre-processing improves the quality of text classification and reduces the computational complexity. A typical pre-processing procedure includes the following steps:
Part-of-Speech Tagging (POS). Part-of-speech tagging automatically labels each word of a text with the part of speech it belongs to: noun, pronoun, adverb, adjective, verb, interjection, intensifier, etc. The goal is to extract patterns in text based on analysis of frequency distributions of these parts of speech. The importance of part-of-speech tagging for correct sentiment analysis was demonstrated by (Manning and Schutze, 1999). Statistical properties of texts, such as adherence to Zipf's law, can also be used (Piantadosi, 2014). Pak and Paroubek analysed the distribu-
tion of POS tagging specifically for Twitter messages and identified multiple patterns
(Pak and Paroubek, 2010). For instance, they found that subjective texts (carrying the
sentiment) often contain more pronouns, rather than common and proper nouns; sub-
jective messages often use past simple tense and contain many verbs in a base form and
many modal verbs.
There is no common opinion about whether POS tagging improves the results of
sentiment classification. Barbosa and Feng reported positive results using POS tagging
(Barbosa and Feng, 2010), while (Kouloumpis et al., 2011) reported a decrease in per-
formance.
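To make the tagging step concrete, here is a minimal Python sketch using NLTK's general-purpose Penn Treebank tagger as a stand-in (the study itself uses a Twitter-specific tagger, described later); the example tweet is invented:

```python
# Requires: nltk.download('averaged_perceptron_tagger')
from collections import Counter

import nltk

tweet = "I really do not like the new design of this app"
tags = nltk.pos_tag(tweet.split())        # [(word, tag), ...]
print(tags)
print(Counter(tag for _, tag in tags))    # tag frequencies, usable as features
```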
Stemming and lemmatisation. Stemming is a procedure of replacing words with their stems, or roots. The dimensionality of the BOW is reduced when root-related words, such as 'read', 'reader' and 'reading', are mapped into one word 'read'. However, one should be careful when applying stemming, since it might increase bias. For example, the biased effect of stemming appears when distinct words such as 'experiment' and 'experience' are merged into one stem 'exper', or when words which ought to be merged together (such as 'adhere' and 'adhesion') remain distinct after stemming. These are examples of over-stemming and under-stemming errors respectively. Over-stemming lowers precision and under-stemming lowers recall. The overall impact of stemming depends on the dataset and the stemming algorithm. The most popular stemming algorithm is the Porter stemmer (Porter, 1980).
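A short sketch of stemming with NLTK's Porter implementation, run on the examples above (exact stems differ between algorithms such as Porter and Lovins, so the outputs are not guaranteed to match the stems quoted in the text):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["read", "reader", "reading",
             "experiment", "experience",   # may over-stem to a shared root
             "adhere", "adhesion"]:        # may stay distinct (under-stemming)
    print(word, "->", stemmer.stem(word))
```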
Stop-words removal. Stop words are words which carry a connecting function in the sentence, such as prepositions, articles, etc. (Salton and McGill, 1983). There is no definitive list of stop words, but some search engines use lists of the most common short function words, such as 'the', 'is', 'at', 'which' and 'on'. These words can be removed from the text before classification since they have a high frequency of occurrence in the text but do not affect the final sentiment of the sentence.
Negations Handling. Negation refers to the process of conversion of the sentiment of the text from positive to negative or from negative to positive by using special words: 'no', 'not', 'don't', etc. These words are called negations. Examples of negation words are presented in Table 1.
Handling negation in the sentiment analysis task is a very important step as the
whole sentiment of the text may be changed by the use of negation. It is important to
identify the scope of negation (for more information see (Councill et al., 2010)). The simplest approach to handling negation is to revert the polarity of all words that are found between the negation and the first punctuation mark following it. For instance, in the text 'I don't want to go to the cinema' the polarity of the whole phrase 'want to go to the cinema' will be reverted.
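A minimal Python sketch of this simple scope rule (the negation list and the word polarities below are hypothetical):

```python
NEGATIONS = {"no", "not", "don't", "dont", "never"}
PUNCTUATION = set(".,!?:;")

def apply_negation(tokens, polarities):
    """Flip the polarity of every token between a negation word and the
    next clause-level punctuation mark."""
    out = list(polarities)
    in_scope = False
    for i, tok in enumerate(tokens):
        if tok.lower() in NEGATIONS:
            in_scope = True
            continue
        if any(ch in PUNCTUATION for ch in tok):
            in_scope = False
        if in_scope:
            out[i] = -out[i]
    return out

tokens = "I don't want to go to the cinema".split()
scores = [0, 0, 0.8, 0, 0.1, 0, 0, 0]     # hypothetical word polarities
print(apply_negation(tokens, scores))      # scores after "don't" are flipped
```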
Other researchers introduced the concept of the contextual valence shifter (Polanyi and Zaenen, 2006), which consists of negations, intensifiers and diminishers. Contextual valence shifters can flip the polarity of a sentimental term, or increase or decrease the degree to which it is positive or negative.
But-clauses. Phrases like 'but', 'with the exception of', 'except that' and 'except for' generally change the polarity of the part of the sentence following them. In order to handle these clauses, the opinion orientation of the text before and after these phrases should be set opposite to each other. For example, without handling the but-type clauses the polarity of the sentence may be set as follows: 'I don't like[-1] this mobile, but the screen has high[0] resolution.' When the but-clause is processed, the sentence polarity will be changed to: 'I don't like[-1] this mobile, but the screen has high[+1] resolution.' Notice that even neutral adjectives obtain a polarity that is opposite to the polarity of the phrase before the but-clause.
However, the solution described above does not work for every situation. For example, in the sentence 'Not only is he smart, but also very kind' the word 'but' does not carry a contrary meaning, and reversing the sentiment score of the second half of the sentence would be incorrect. These situations need to be handled separately.
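A sketch of this but-clause rule in Python, following the example above: neutral words after 'but' receive the orientation opposite to the clause before it, and a crude guard skips the 'not only ... but' construction (the heuristics and scores are illustrative):

```python
def handle_but(tokens, scores):
    """tokens: lower-cased word list; scores: parallel polarity list."""
    if "but" not in tokens or "only" in tokens:   # crude 'not only ... but' guard
        return scores
    i = tokens.index("but")
    pre_sign = 1 if sum(scores[:i]) > 0 else -1
    # Neutral words after 'but' take the opposite orientation, mirroring
    # the high[0] -> high[+1] example in the text.
    return scores[:i + 1] + [s if s != 0 else -pre_sign for s in scores[i + 1:]]

tokens = "i don't like this mobile but the screen has high resolution".split()
scores = [0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0]
print(handle_but(tokens, scores))   # zeros after 'but' become +1
```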
Tokenisation into N-grams. Tokenisation is the process of creating a bag-of-words from the text. The incoming string gets broken into its component words and other elements, for example URL links. The common separator for identifying individual words is whitespace, although other symbols can also be used. Tokenisation of social-media data is considerably more difficult than tokenisation of general text since it contains numerous emoticons, URL links and abbreviations that cannot be easily separated as whole entities.
It is a general practice to combine accompanying words into phrases or n-grams,
which can be unigrams, bigrams, trigrams, etc. Unigrams are single words, while bi-
grams are collections of two neighbouring words in a text, and trigrams are collections
of three neighbouring words. The N-grams method can decrease bias, but may increase statistical sparseness. It has been shown that the use of n-grams can improve the quality of text classification (Raskutti et al., 2001; Zhang, 2003; Diederich et al., 2003); however, there is no unique solution for the size of the n-gram. Caropreso et al. conducted
an experiment of text categorization on the Reuters-21578 benchmark dataset (Caro-
preso et al., 2001). They reported that in general the use of bigrams helped to produce better results than the use of unigrams; however, when using the Rocchio classifier (Rocchio, 1971) the use of bigrams led to a decrease of classification quality in 28 out
of 48 experiments. Tan et al. reported that the use of bigrams on the Yahoo-Science dataset (Tan et al., 2002) improved the performance of text classification with a Naive Bayes classifier from a 65% to a 70% break-even point; however, on the Reuters-21578 dataset the increase in accuracy was not significant. Conversely, trigrams were reported to generate poor performance (Pak and Paroubek, 2010).
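A minimal sketch of whitespace tokenisation into unigrams and bigrams (social-media tokenisers additionally protect emoticons and URLs, which this toy version does not):

```python
def ngrams(text, n):
    tokens = text.lower().split()           # whitespace tokenisation
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "the screen has high resolution"
print(ngrams(text, 1))   # unigrams
print(ngrams(text, 2))   # bigrams: ['the screen', 'screen has', ...]
```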
4 Lexicon-Based Method

The lexicon-based approach calculates the sentiment of a given text from the polarity of the words or phrases in that text (Turney, 2002). For this method a lexicon (a dictionary) of words with polarities assigned to them is required. Examples of existing lexicons include: Opinion Lexicon (Hu and Liu, 2004), SentiWordNet (Esuli and Sebastiani, 2006), AFINN Lexicon (Nielsen, 2011), Loughran-McDonald Lexicon, NRC-Hashtag (Mohammad et al., 2013) and the General Inquirer Lexicon3 (Stone and Hunt, 1963).
The sentiment score Score of the text T can be computed as the average of the
polarities conveyed by each of the words in the text. The methodology for the senti-
ment calculation is schematically illustrated in Figure 1 and can be described with the
following steps:
Pre-processing. The text undergoes pre-processing steps that were described in the
previous section: POS tagging, stemming, stop-words removal, negation handling,
tokenisations into N-grams. The outcome of the pre-processing is a set of tokens or
a bag-of-words.
Checking each token for its polarity in the lexicon. Each word from the bag-of-words is compared against the lexicon. If the word is found in the lexicon, the polarity W_i of that word is added to the sentiment score of the text. If the word is not found in the lexicon, its polarity is considered to be equal to zero.
Calculating the sentiment score of the text. After assigning polarity scores to all words comprising the text, the final sentiment score of the text is calculated by dividing the sum of the scores of the words carrying sentiment by the number of such words:
Score_{AVG} = \frac{1}{m} \sum_{i=1}^{m} W_i .   [1]
3 http://www.wjh.harvard.edu/~inquirer/
Averaging the score yields a sentiment value in the range between -1 and 1, where 1 means a strong positive sentiment, -1 means a strong negative sentiment and 0 means that the text is neutral. For example, for the text:
'A masterful[+0.92] film[0.0] from a master[+1] filmmaker[0.0], unique[+1] in its deceptive[0.0] grimness[0.0], compelling[+1] in its fatalist[-0.84] world[0.0] view[0.0].'

the sentiment score is calculated over the five sentiment-carrying words as follows:

Score_{AVG} = \frac{0.92 + 1 + 1 + 1 - 0.84}{5} = 0.616 .
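The same computation as a Python sketch; the toy dictionary below simply reuses the word scores from the example and is not one of the published lexicons:

```python
LEXICON = {"masterful": 0.92, "master": 1.0, "unique": 1.0,
           "compelling": 1.0, "fatalist": -0.84}

def lexicon_score(tokens):
    """Average polarity over the sentiment-carrying tokens, as in Eq. (1)."""
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

tokens = ("a masterful film from a master filmmaker unique in its "
          "deceptive grimness compelling in its fatalist world view").split()
print(round(lexicon_score(tokens), 3))   # 0.616
```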
The quality of classification highly depends on the quality of the lexicon. Lexicons
can be created using different techniques:
Manually constructed lexicons. The straightforward approach, but also the most
time consuming, is to manually construct a lexicon and tag words in it as positive or
negative. For example, (Das and Chen, 2001) constructed their lexicon by reading several thousand messages and manually selecting words that carried sentiment. They then used a discriminant function to identify words from a training dataset which could be used for sentiment classification purposes. The remaining words were expanded to include all potential forms of each word in the final lexicon. Another example of
hand-tagged lexicon is The Multi-Perspective-Question-Answering (MPQA) Opinion
Corpus4 constructed by (Wiebe et al., 2005). MPQA is publicly available and consists
of 8,222 subjective expressions along with their POS-tags, polarity classes and intensity.
Another resource is SentiWordNet, created by (Esuli and Sebastiani, 2006). SentiWordNet extracted words from WordNet5 and assigned them probabilities of belonging to the positive, negative or neutral classes, together with a subjectivity score. Ohana and Tierney
demonstrated that SentiWordNet can be used as an important resource for sentiment
calculation (Ohana and Tierney, 2009).
Constructing a lexicon from training data. This approach belongs to the category of supervised methods, because a training dataset of labelled sentences is needed. With this method the sentences from the training dataset are tokenised and a bag-of-words is created. The words are then filtered to exclude some parts-of-speech that do not carry sentiment, such as prepositions. The prior polarity of each word is calculated according to the occurrence of the word in positive and negative sentences. For example, if the word 'success' appears more often in sentences labelled as positive in the training dataset, its prior polarity will be assigned a positive value.
Extending a small lexicon using bootstrapping techniques. Hatzivassiloglou and McKeown proposed extending a small lexicon of adjectives by adding new adjectives that were conjoined with the words from the original lexicon (Hatzivassiloglou and McKeown, 1997). The technique is based on the syntactic relationship
4 available at nrrc.mitre.org/NRRC/publications.htm
5 http://wordnet.princeton.edu/
between two adjectives conjoined with 'and': it is established that 'and' usually joins words with the same semantic orientation. Example: 'The weather yesterday was nice and inspiring.' Since the words 'nice' and 'inspiring' are conjoined with 'and', it is considered that both of them carry a positive sentiment. If only the word 'nice' were present in the lexicon, the new word 'inspiring' would be added to it. Similarly, (Hatzivassiloglou and McKeown, 1997) and (Kim and Hovy, 2004) suggested expanding a small manually constructed lexicon with synonyms and antonyms obtained from NLP resources such as WordNet. The process can be repeated iteratively until no new synonyms and antonyms can be found. Moilanen and Pulman also created their lexicon by semi-automatically expanding the WordNet 2.1 lexicon (Moilanen and Pulman, 2007). Other approaches include extracting polar sentences using structural clues from HTML documents (Kaji and Kitsuregawa, 2007) and recognising opinionated text based on the density of other clues in the text (Wiebe and Wilson, 2002). After applying a bootstrapping technique it is important to conduct a manual inspection of the newly added words to avoid errors.
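A sketch of such bootstrapping with WordNet synonyms and antonyms (a simplified variant of the ideas cited above, not a reproduction of any one method; requires nltk.download('wordnet'); the size cap keeps the toy run short):

```python
from nltk.corpus import wordnet as wn

def expand(seed, max_size=500):
    """Grow a seed lexicon: synonyms keep the polarity, antonyms invert it."""
    lexicon = dict(seed)
    frontier = list(seed.items())
    while frontier and len(lexicon) < max_size:
        word, polarity = frontier.pop()
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                candidates = [(lemma.name(), polarity)]
                candidates += [(a.name(), -polarity) for a in lemma.antonyms()]
                for cand, pol in candidates:
                    if cand not in lexicon:
                        lexicon[cand] = pol
                        frontier.append((cand, pol))
    return lexicon

print(len(expand({"nice": 1.0})))   # a one-word seed grows considerably
```

As the text notes, the output of such a procedure still needs manual inspection.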
5 Machine Learning Method

1. Data Pre-processing. Before training the classifiers, each text needs to be pre-processed and presented as an array of tokens. This step is performed according to the process described in section 3.
2. Feature generation. Features are text attributes that are useful for capturing pat-
terns in data. The most popular features used in machine learning classification
are the presence or the frequency of n-grams extracted during the pre-processing
step. In the presence-based representation for each instance a binary vector is cre-
ated in which 1 means the presence of a particular n-gram and 0 indicates its
absence. In the frequency-based representation the number of occurrences of a par-
ticular n-gram is used instead of a binary indication of presence. In cases where text length varies greatly, it might be important to use term frequency (TF) and inverse document frequency (IDF) measures (Rajaraman and Ullman, 2011). However, in short messages like tweets, words are unlikely to repeat within one instance, making the binary measure of presence as informative as the counts (Ikonomakis et al., 2005); a minimal sketch of presence-based features follows the feature list below.
Apart from the n-grams, additional features can be created to improve the over-
all quality of text classification. The most common features that are used for this
purpose include:
Number of words with positive/negative sentiment;
Number of negations;
Length of a message;
Number of exclamation marks;
Number of different parts-of-speech in a text (for example, number of nouns,
adjectives, verbs);
Number of comparative and superlative adjectives.
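Here is the presence-based representation as a minimal scikit-learn sketch (a stand-in for the WEKA pipeline used in the study; the tweets are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["I love this phone", "I don't love this battery", "great great screen"]
vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True)
X = vectorizer.fit_transform(tweets)       # n instances x m n-gram features
print(vectorizer.get_feature_names_out()[:5])
print(X.toarray())   # 0/1 presence matrix ('great' counts once with binary=True)
```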
3. Feature selection. Since the main features of a text classifier are N-grams, the dimensionality of the feature space grows with the size of the dataset. This dramatic growth makes it in most cases computationally infeasible to use all the features of a sample. Many features are redundant or irrelevant and do not significantly improve the results. Feature selection is the process of identifying a subset of features that have the highest predictive power. This step is crucial for the classification process: eliminating irrelevant and redundant features reduces the size of the feature space, increases the speed of the algorithm, helps avoid overfitting and contributes to improved classification quality.
There are three basic steps in the feature selection process (Dash and Liu, 1997):
(a) Search procedure. A process that generates a subset of features for evaluation.
A procedure can start with no variables and add them one by one (forward se-
lection) or with all variables and remove one at each step (backward selection),
or features can be selected randomly (random selection).
(b) Evaluation procedure. A process of calculating a score for a selected subset of
features. The most common metrics for evaluation procedure are: Chi-squared,
Information Gain, Odds Ratio, Probability Ratio, Document Frequency, Term
Frequency. An extensive overview of search and evaluation methods is pre-
sented in (Ladha and Deepa, 2011a; Forman, 2003).
(c) Stopping criterion. The process of feature selection can be stopped based on: i) the search procedure, if a predefined number of features has been selected or a predefined number of iterations has been performed; ii) the evaluation procedure, if changing the feature space does not produce a better subset or if an optimal subset has been found according to the value of the evaluation function.
4. Learning an Algorithm. After feature generation and feature selection steps the
text is represented in a form that can be used to train an algorithm. Even though
many classifiers have been tested for sentiment analysis purposes, the choice of
the best algorithm is still not easy since all methods have their advantages and
disadvantages (see (Marsland, 2011) for more information on classifiers).
Decision Trees (Mitchell, 1996). A decision tree text classifier is a tree in which non-leaf nodes represent a conditional test on a feature, branches denote the outcomes of the test, and leaves represent class labels. Decision trees can be easily adapted to classifying textual data and have a number of useful qualities: they are relatively transparent, which makes them simple to understand; and they give direct information about which features are important in making decisions, especially near the top of the tree. However, decision trees also have a few disadvantages. One problem is that trees can easily overfit. The reason is that each branch in the decision tree splits the training data, so the amount of training data available to train nodes located at the bottom of the tree decreases. This problem can be addressed by tree pruning. The second weakness of the method is that decision trees require features to be checked in a specific order. This limits the ability of the algorithm to exploit features that are relatively independent of one another.
Naive Bayes (Narayanan et al., 2013) is frequently used for sentiment analysis purposes because of its simplicity and effectiveness. The basic concept of the Naive Bayes classifier is to determine the class (positive, negative, neutral) to which a text belongs using probability theory. In the case of sentiment analysis there will be three hypotheses, one for each sentiment class; the hypothesis with the highest probability is selected as the class of the text. A potential problem with this approach emerges if some word in the training set appears in only one class and does not appear in any other class. In this case, the classifier will always assign texts containing that word to that particular class. To avoid this undesirable effect the Laplace smoothing technique may be applied.
Another very popular algorithm is Support Vector Machines (SVMs) (Cortes and Vapnik, 1995; Vapnik, 1995). For linearly separable two-class data, the basic idea is to find a hyperplane that not only separates the documents into classes, but for which the Euclidean distance to the closest training example, or margin, is as large as possible. In a three-class sentiment classification scenario, there will be three pair-wise classifications: positive-negative, negative-neutral, positive-neutral. The method has proved to be very successful for the task of text categorization (Joachims, 1999; Dumais et al., 1998) since it handles large feature spaces very well; however, it has low interpretability and is computationally expensive, because it involves discretisation, normalisation and dot-product operations.
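For illustration, a compact scikit-learn sketch of the three classifiers on toy presence features (the data and labels are invented; the study itself trained WEKA implementations):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

texts = ["good great film", "awful boring plot", "fine nothing special",
         "great acting", "terrible ending", "average movie"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

X = CountVectorizer(binary=True).fit_transform(texts)
# MultinomialNB applies Laplace smoothing by default (alpha=1.0);
# SVC trains the three pairwise (one-vs-one) classifiers mentioned above.
for clf in (MultinomialNB(), SVC(kernel="linear"), DecisionTreeClassifier()):
    print(type(clf).__name__, clf.fit(X, labels).predict(X))
```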
5. Model Evaluation. After the model is trained using a classifier it should be vali-
dated, typically, using a cross-validation technique, and tested on a hold-out dataset.
There are several metrics defined in information retrieval for measuring the effec-
tiveness of classification, among them are:
Accuracy: as described by (Kotsiantis, 2007), accuracy is the fraction of the
number of correct predictions over the total number of predictions.
Error rate: measures the number of incorrectly predicted instances against the total number of predictions.
Precision: the proportion of instances the model assigned to a class that truly belong to that class, i.e. the number of true positives over the sum of true positives and false positives. In other words, precision shows the exactness of the classifier with respect to each class.
Recall: represents the proportion of how many instances the model classified
correctly to the total number of true positives and false negatives. Recall shows
the completeness of the classifier with respect to each class.
F-score: (Rijsbergen, 1979) defined the F1-score as the harmonic mean of pre-
cision and recall:
F\text{-}Score = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} .   [2]
Depending on the nature of the task, one may use accuracy, error rate, precision, recall or F-score as a metric, or some mixture of them. For example, for unbalanced datasets it was shown that precision and recall can be better metrics for measuring classifier performance (Manning and Schutze, 1999). However, sometimes one of these metrics can increase at the expense of the other; in the extreme case the recall can reach 100% while precision is very low. In such situations the F-score is a more appropriate measure.
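A short sketch of these metrics with scikit-learn, restricting the averaged F-score to the positive and negative classes in the SemEval style used later in this paper (the labels are invented):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["pos", "neg", "neutral", "pos", "neg"]
y_pred = ["pos", "neg", "pos", "pos", "neutral"]

print(accuracy_score(y_true, y_pred))           # fraction of correct predictions
p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=["pos", "neg"], average="macro")
print(p, r, f)                                  # averaged over pos/neg only
```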
Fig. 2: Statistics of a) the training dataset and b) the test dataset from the SemEval-2013 competition, Task 2-B (Nakov et al., 2013). The dark grey bar on the left represents the proportion of positive tweets in the dataset, the grey bar in the middle shows the proportion of negative tweets and the light grey bar on the right reflects the proportion of neutral sentences.
6 Implementation and Experiments

We performed the pre-processing steps as described in section 3. For most of the steps we used the machine learning software WEKA7. WEKA was developed at the University of Waikato and provides implementations of many machine learning algorithms. Since it is an open-source tool and has an API, WEKA algorithms can be easily embedded within other applications.
Stemming and lemmatisation. The overall impact of stemming depends on the dataset and the stemming algorithm. WEKA contains implementations of the SnowballStemmer (Porter, 2002) and the LovinsStemmer (Lovins, 1968). After testing both implementations we discovered that the accuracy of sentiment classification decreased with either stemming algorithm; stemming was therefore omitted from the final implementation of the sentiment analysis algorithm.
Stop-words Removal. WEKA provides a file with a list of words which should be considered stop-words; the file can be adjusted to one's needs. In our study we used the default WEKA stop-list file.
7 http://www.cs.waikato.ac.nz/ml/weka/
Table 3: Example of ArkTweetNLP (Gimpel et al., 2011) tagger in practice.
Sentence:
ikr smh he asked fir yo last name so he can add u on fb lololol
word tag confidence
ikr ! 0.8143
smh G 0.9406
he O 0.9963
asked V 0.9979
fir P 0.5545
yo D 0.6272
last A 0.9871
name N 0.9998
so P 0.9838
he O 0.9981
can V 0.9997
add V 0.9997
u O 0.9978
on P 0.9426
fb 0.9453
lololol ! 0.9664
used abbreviations (see Table 2 for some tag examples). An example15 of how the ArkTweetNLP tagger works in practice is presented in Table 3.
As a result of POS-tagging, in our study we filtered out all words that did not belong to one of the following categories: N (common noun), V (verb), A (adjective), R (adverb), ! (interjection), E (emoticon), G (abbreviations, foreign words, possessive endings).
Negations Handling. We implemented negation handling using a simple but effective strategy: if a negation word was found, the sentiment score of every word appearing between the negation and a clause-level punctuation mark (.,!?:;) was reversed (Pang et al., 2002). There are, however, some grammatical constructions in which a negation term does not have a scope. We implemented some of these situations as exceptions:
Exception Situation 1: Whenever a negation term is part of a phrase that does not carry a negation sense, we consider that the scope for negation is absent and the polarity of words is not reversed. Examples of these special phrases include 'not only', 'not just', 'no question', 'not to mention' and 'no wonder'.
15 http://www.ark.cs.cmu.edu/TweetNLP/
Exception Situation 2: A negation term does not have a scope when it occurs in a negative rhetorical question. A negative rhetorical question is identified by the following heuristic: (1) it is a question; and (2) it has a negation term within the first three words of the question. For example:
'Did not I enjoy it?'
'Wouldn't you like going to the cinema?'
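A sketch of this two-part heuristic in Python (the negation list is illustrative):

```python
NEGATIONS = {"no", "not", "don't", "dont", "didn't", "wouldn't"}

def is_negative_rhetorical_question(sentence):
    """A question with a negation term among its first three words."""
    words = sentence.lower().rstrip("?").split()
    return sentence.strip().endswith("?") and \
        any(w in NEGATIONS for w in words[:3])

print(is_negative_rhetorical_question("Did not I enjoy it?"))   # True
print(is_negative_rhetorical_question("I did not enjoy it."))   # False
```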
Tokenisation into N-grams. We used the WEKA tokeniser to extract unigrams and bigrams from the Twitter dataset.
1. Pre-processing of the dataset: POS tags were assigned to all words in the dataset; words were converted to lower case; a BOW was created by tokenising the sentences in the dataset.
2. The number of occurrences of each word in positive and negative sentences from
the training dataset was calculated.
3. The positive polarity of each word was calculated by dividing the number of occur-
rences in positive sentences by the number of all occurrences:
positiveSentScore = \frac{\#Positive\ sentences}{\#Positive\ sentences + \#Negative\ sentences} .   [3]
For example, we calculated that the word 'pleasant' appeared 122 times in positive sentences and 44 times in negative sentences. According to the formula, the positive sentiment score of the word 'pleasant' is

positiveSentScore = \frac{122}{122 + 44} = 0.73 .
Similarly, the negative score for the word 'pleasant' can be calculated by dividing the number of occurrences in negative sentences by the total number of mentions:

negativeSentScore = \frac{\#Negative\ sentences}{\#Positive\ sentences + \#Negative\ sentences} ,   [4]

negativeSentScore = \frac{44}{122 + 44} = 0.27 .
Based on the positive score of a word we can make a decision about its polarity: the word is considered positive if its positive score is above 0.6; neutral if its positive score is in the range [0.4; 0.6]; and negative if the positive score is below 0.4. Since the positive score of the word 'pleasant' is 0.73, it is considered to carry positive sentiment. Sentiment scores of some other words from the experiment are presented in Table 4.
We can observe from the table that the words 'GOOD' and 'BAD' have strongly defined positive and negative scores, as we would expect. The word 'LIKE' has polarity scores ranging between 0.4 and 0.6, indicating its neutrality. To understand why the neutral label for the word 'LIKE' was assigned, we investigate the semantic roles of this word in the English language:
(a) Being a verb to express preference. For example: 'I like ice-cream.'
(b) Being a preposition for the purpose of comparison. For example: 'This town looks like Brighton.'
The first sentence has positive sentiment, but it can easily be transformed into a negative sentence: 'I don't like ice-cream.' This demonstrates that the word 'LIKE' can be used with equal frequency for expressing positive and negative opinions. In the second example the word 'LIKE' plays the role of a preposition and does not affect the overall polarity of the sentence. Thus, the word 'LIKE' is a neutral word and was correctly assigned a neutral label using the approach described above.
In our study all words from the bag-of-words with a polarity in the range [0.4; 0.6] were removed, since they do not help to classify the text as positive or negative. The sentiment scores of the remaining words were then mapped into the range [-1; 1].
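A sketch of this scoring in Python; the thresholds follow the text, while the final mapping to [-1; 1] shown here is a simple linear rescaling assumed for illustration:

```python
def word_polarity(pos_count, neg_count):
    """Polarity from occurrence counts, per Eqs. (3)-(4); returns None for
    neutral words (positive score in [0.4, 0.6]), which are removed."""
    p = pos_count / (pos_count + neg_count)   # positiveSentScore, Eq. (3)
    if 0.4 <= p <= 0.6:
        return None
    return 2.0 * p - 1.0                      # assumed mapping of [0,1] -> [-1,1]

print(word_polarity(122, 44))   # 'pleasant': 0.73 -> about 0.47
print(word_polarity(50, 50))    # neutral -> None
```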
Lexicon Combinations. Since the role of emoticons in expressing opinion online is continuously increasing, it is crucial to incorporate emoticons into the lexicons used for sentiment analysis. Hogenboom et al. showed that incorporating emoticons into a lexicon can significantly improve the accuracy of classification (Hogenboom et al., 2013). Apart from emoticons, new slang words and abbreviations are constantly emerging and need to be accounted for when performing sentiment analysis. However, most of the existing public lexicons do not contain emoticons and social-media slang; on the contrary, emoticons and abbreviations are often removed as typographical symbols during the first stages of pre-processing.
Table 5: Example of tokens from our EMO lexicon along with their polarity. Tokens
represent emoticons, abbreviations and slang words that are used in social-media to
express emotions.
Emoticon Score Emoticon Score Abbreviation Score Abbreviation Score
l-) 1 [-( -1 lol 1 dbeyr -1
:-} 1 TT -1 ilum 1 iwiam -1
x-d 1 :-(( -1 iyqkewl 1 nfs -1
;;-) 1 :-[ -1 iwalu 1 h8ttu -1
=] 1 :((( -1 koc 1 gtfo -1
We performed Machine Learning based sentiment analysis. For this purpose we used
the machine learning package WEKA16 .
Pre-processing/cleaning the data. Before training the classifiers the data needed to be pre-processed; this step was performed according to the general process described in section 3. Some additional steps also had to be performed:
Filtering. Some syntactic constructions used in Twitter messages are not useful
for sentiment detection. These constructions include URLs, @-mentions, hashtags,
RT-symbols and they were removed during the pre-processing step.
Tokens replacement. The words that appeared to be under the effect of negation words were modified by adding the suffix _NEG to the end of those words. For example, the phrase 'I don't want.' was modified to 'I don't want_NEG.' This modification is important, since each word in a sentence serves as a feature during the classification step. Words with _NEG suffixes increase the dimensionality of the feature space, but allow the classifier to distinguish between words used in positive and in negative contexts.
When performing tokenisation, the symbols ():; among others are considered to be delimiters, so most emoticons could be lost after tokenisation. To avoid this problem positive emoticons were replaced with pos_emo and negative emoticons with neg_emo. Since there are many variations of emoticons representing the same emotions depending on the language and community, replacing all positive emoticons by pos_emo and all negative emoticons by neg_emo also achieved the goal of significantly reducing the number of features.
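A sketch of these cleaning steps (the regular expression and the emoticon lists are illustrative, not the exact ones used in the study):

```python
import re

POSITIVE_EMOTICONS = [":)", ":-)", "=]", ":d"]
NEGATIVE_EMOTICONS = [":(", ":-(", ":-[", ":'("]

def clean_tweet(text):
    # Strip URLs, @-mentions, hashtags and RT markers.
    text = re.sub(r"https?://\S+|@\w+|#\w+|\bRT\b", "", text)
    # Protect emoticons from the tokeniser by replacing them with placeholders.
    for emo in POSITIVE_EMOTICONS:
        text = text.replace(emo, " pos_emo ")
    for emo in NEGATIVE_EMOTICONS:
        text = text.replace(emo, " neg_emo ")
    return " ".join(text.split())

print(clean_tweet("RT @user I love this :) http://t.co/x #happy"))
# -> 'I love this pos_emo'
```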
Feature Generation. The following features were constructed for the purpose of train-
ing a classifier:
16
http://www.cs.waikato.ac.nz/ml/weka/
N-grams: we transformed the training dataset into the bag-of-ngrams taking into
account only the presence/absence of unigrams. Using n-grams frequency would
not be logical in this particular experiment, since Twitter messages are very short,
and a term is unlikely to appear in the same message more than once;
Lexicon Sentiment: the sentiment score obtained during the lexicon-based sentiment analysis as described in section 6;
Elongated words number: the number of words with one character repeated more than 2 times, e.g. 'soooo';
Emoticons: presence/absence of positive and negative emoticons at any position in
the tweet;
Last token: whether the last token is a positive or negative emoticon;
Negation: the number of negated contexts;
POS: the number of occurrences for each part-of-speech tag: verbs, nouns, adverbs,
at-mentions, abbreviations, URLs, adjectives and others
Punctuation marks: the number of occurrences of punctuation marks in a tweet;
Emoticons number: the number of occurrences of positive and negative emoti-
cons;
Negative tokens number: total count of tokens in the tweet with logarithmic score
less than 0;
Positive tokens number: total count of tokens in the tweet with logarithmic score
greater than 0;
Feature Selection. After performing the feature generation step described above, a feature space comprising 1826 features was produced. The next important step for improving classification accuracy is the selection of the most relevant features from this feature space. For this purpose we used the Information Gain evaluation algorithm and a Ranker search method (Ladha and Deepa, 2011b). Information Gain measures the decrease in entropy when the feature is present versus absent, while Ranker ranks the features by the amount of reduction in the objective function. We kept the features for which the value of information gain was above zero. As a result, a subset of 528 features was selected.
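A sketch of this selection step using mutual information (the information-theoretic quantity behind Information Gain) in place of the WEKA InfoGain/Ranker pair; the data are synthetic:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 20))     # toy binary presence features
y = (X[:, 0] | X[:, 3]).astype(int)        # class driven by features 0 and 3

scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
selected = np.flatnonzero(scores > 0)      # keep features scoring above zero
print(selected)   # MI estimates are noisy, so a few spurious features may appear
```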
Table 9: Scenario 1: 5-fold cross-validation test on a movie reviews dataset using only N-grams as features.
Method          Tokens Type   Folds Number  Accuracy  Precision  Recall  F-Score
Naive Bayes     uni/bigrams   5             81.5%     0.82       0.82    0.82
Decision Trees  uni/bigrams   5             80.57%    0.81       0.81    0.81
SVM             uni/bigrams   5             86.62%    0.87       0.87    0.87
17 http://www.cs.cornell.edu/people/pabo/movie-review-data/
Table 10: Scenario 2: 5-fold cross-validation test on a movie reviews dataset using traditional N-grams features in combination with manually constructed features: lexicon sentiment score, number of different parts-of-speech, number of emoticons, number of elongated words, etc.
Method          Tokens Type   Folds Number  Accuracy  Precision  Recall  F-Score
Naive Bayes     uni/bigrams   5             88.54%    0.89       0.86    0.86
Decision Trees  uni/bigrams   5             89.9%     0.90       0.90    0.90
SVM             uni/bigrams   5             91.17%    0.91       0.91    0.91
Training the Model, Validation and Testing. The supervised machine learning approach requires a labelled training dataset. We used a publicly available training dataset (Figure 2a) from the SemEval-2013 competition, Task 2-B (Nakov et al., 2013).
Each of the tweets from the training set was expressed in terms of its attributes. As a result, an n-by-m binary matrix was created, where n is the number of training instances and m is the number of features. This matrix was used for training different classifiers: Naive Bayes, Support Vector Machines and Decision Trees. It is important to note that the training dataset was highly unbalanced, with a majority of neutral messages (Figure 2a). In order to account for this imbalance we trained a cost-sensitive SVM model (Ling and Sheng, 2007). A cost-sensitive classifier minimises the total cost of classification by putting a higher cost on a particular type of error (in our case, misclassifying positive and negative messages as neutral).
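A sketch of cost-sensitive training with scikit-learn, where per-class weights play the role of WEKA's cost matrix (an analogy, not the exact classifier used here; the data and weights are illustrative):

```python
import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[1, 0], [0, 1], [1, 1], [0, 0], [1, 0], [0, 1]])
y = np.array(["positive", "negative", "positive",
              "neutral", "neutral", "neutral"])

# Heavier penalty for errors on the minority positive/negative classes.
weights = {"positive": 3.0, "negative": 3.0, "neutral": 1.0}
clf = LinearSVC(class_weight=weights).fit(X, y)
print(clf.predict(X))
```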
As the next step we tested the models on a previously unseen test set (Figure 2b) from the SemEval-2013 Competition (Nakov et al., 2013) and compared our results against the results of the 44 teams that took part in the competition. While the classification was performed for 3 classes (positive, negative, neutral), the evaluation metric was the F-score (Equation 2) between the positive and negative classes.
Table 11: F-score results of our algorithm using different classifiers. The test was performed on the test dataset from the SemEval-2013 Competition, Task 2-B (Nakov et al., 2013).
Classifier  Naive Bayes  Decision Trees  SVM   Cost-Sensitive SVM
F-SCORE     0.64         0.62            0.66  0.73
Table 12: F-score results of the SemEval-2013 Competition, Task 2-B (Nakov et al., 2013).
TEAM NAME     F-SCORE
NRC-Canada    0.6902
GUMLTLT       0.6527
TEREGRAM      0.6486
BOUNCE        0.6353
KLUE          0.6306
AMI and ERIC  0.6255
FBM           0.6117
AVAYA         0.6084
SAIL          0.6014
UT-DB         0.5987
FBK-irst      0.5976
Our results for the different classifiers are presented in Table 11. We can observe that the Decision Tree algorithm had the lowest F-score of 62%. The reason may lie in the large size of the tree needed to incorporate all of the features. Because of the tree size, the algorithm needs to traverse multiple nodes until it reaches a leaf and predicts the class of the instance. This long path increases the probability of mistakes and thus decreases the accuracy of the classifier. Naive Bayes and SVM produced better scores of 64% and 66% respectively. The best model was the cost-sensitive SVM, which achieved an F-measure of 73%. This is an important result, providing evidence that accounting for the imbalance in the training dataset improves model performance significantly. Comparing our results with the results of the competition (Table 12), we can conclude that our algorithm based on the cost-sensitive SVM would have produced the best result, scoring 4 points higher than the winner of that competition.
7 Conclusion
In this paper we have presented a review of the two main approaches to sentiment analysis: a lexicon-based method and a machine learning method.
In the lexicon based approach we compared the performance of three lexicons: i) an
Opinion lexicon (OL); ii) an Opinion lexicon enhanced with manually created corpus
of emoticons, abbreviations and social-media slang expressions (OL + EMO); iii) OL
+ EMO further enhanced with automatically generated lexicon (OL + EMO + AUTO).
We showed that on a benchmark Twitter dataset the OL + EMO lexicon outperforms both the traditional OL and the larger OL + EMO + AUTO lexicon. These results demonstrate the importance of incorporating expressive signals such as emoticons, abbreviations and social-media slang phrases into lexicons for Twitter analysis. The results also show that larger lexicons may yield a decrease in performance due to the ambiguity of word polarities and increased model complexity (in agreement with (Ghiassi et al., 2013)).
In the machine learning approach we propose to use the lexicon sentiment obtained during the lexicon-based classification as an input feature for training classifiers. The ranking of all features by information gain score during the feature selection process revealed that the lexicon feature appeared at the top of the list, confirming its relevance for sentiment classification. We also demonstrated that in the case of highly unbalanced datasets the use of cost-sensitive classifiers improves the accuracy of class prediction: on the benchmark Twitter dataset a cost-sensitive SVM yielded a 7% increase in performance over a standard SVM.
Acknowledgments
We thank the two anonymous reviewers for their valuable feedback. T.A. acknowl-
edges support of the UK Economic and Social Research Council (ESRC) in funding
the Systemic Risk Centre (ES/K002309/1). O.K. acknowledges support from the com-
pany Certona Corporation. T.T.P.S. acknowledges financial support from CNPq - The
Brazilian National Council for Scientific and Technological Development.
Bibliography
Aisopos, F., Papadakis, G., Tserpes, K., and Varvarigou, T. (2012). Content vs. context for sentiment analysis: A comparative analysis over microblogs. In Proceedings of the 23rd ACM Conference on Hypertext and Social Media, HT '12, pages 187–196, New York, NY, USA. ACM.
Antenucci, D., Cafarella, M., Levenstein, M. C., Ré, C., and Shapiro, M. (2014). Using social media to measure labor market flows. http://www.nber.org/papers/w20010. Accessed: 2015-04-10.
Asur, S. and Huberman, B. A. (2010). Predicting the future with social media. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 01, WI-IAT '10, pages 492–499, Washington, DC, USA. IEEE Computer Society.
Barbosa, L. and Feng, J. (2010). Robust sentiment detection on twitter from biased and noisy data. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 36–44, Stroudsburg, PA, USA. Association for Computational Linguistics.
Basili, R., Moschitti, A., and Pazienza, M. T. (2000). Language-Sensitive Text Classification. In Proceedings of RIAO-00, 6th International Conference "Recherche d'Information Assistée par Ordinateur", pages 331–343, Paris, France.
Benamara, F., Irit, S., Cesarano, C., Federico, N., and Reforgiato, D. (2007). Sentiment Analysis: Adjectives and Adverbs are better than Adjectives Alone. In Proceedings of the International Conference on Weblogs and Social Media.
Bing, L. (2012). Sentiment analysis: A fascinating problem. In Sentiment Analysis and Opinion Mining, pages 7–143. Morgan and Claypool Publishers.
Bollen, J., Mao, H., and Zeng, X. (2010). Twitter mood predicts the stock market. In CoRR, volume abs/1010.3003.
Caropreso, M. F., Matwin, S., and Sebastiani, F. (2001). A learner-independent evaluation of the usefulness of statistical phrases for automated text categorization. In Chin, A. G., editor, Text Databases and Document Management, pages 78–102, Hershey, PA, USA. IGI Global.
Carvalho, J., Prado, A., and Plastino, A. (2014). A statistical and evolutionary approach to sentiment analysis. In Proceedings of the 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT) - Volume 02, WI-IAT '14, pages 110–117, Washington, DC, USA. IEEE Computer Society.
Cortes, C. and Vapnik, V. (1995). Support-vector networks. In Machine Learning, volume 20, pages 273–297, Hingham, MA, USA. Kluwer Academic Publishers.
Councill, I. G., McDonald, R., and Velikovich, L. (2010). What's great and what's not: Learning to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, NeSp-NLP '10, pages 51–59, Stroudsburg, PA, USA. Association for Computational Linguistics.
Das, S. and Chen, M. (2001). Yahoo! for amazon: Extracting market sentiment from stock message boards. In Asia Pacific Finance Association Annual Conf. (APFA).
Dash, M. and Liu, H. (1997). Feature selection for classification. In Intelligent Data Analysis, volume 1, pages 131–156. No longer published by Elsevier.
Diederich, J., Kindermann, J., Leopold, E., and Paass, G. (2003). Authorship attribution with support vector machines. In Applied Intelligence, volume 19, pages 109–123, Hingham, MA, USA. Kluwer Academic Publishers.
Ding, X., Liu, B., and Yu, P. S. (2008). A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 International Conference on Web Search and Data Mining, WSDM '08, pages 231–240, New York, NY, USA. ACM.
Dumais, S., Platt, J., Heckerman, D., and Sahami, M. (1998). Inductive learning algorithms and representations for text categorization. In CIKM '98: Proceedings of the Seventh International Conference on Information and Knowledge Management, pages 148–155, New York, NY, USA. ACM.
Esuli, A. and Sebastiani, F. (2006). Sentiwordnet: A publicly available lexical resource for opinion mining. http://www.bibsonomy.org/bibtex/25231975d0967b9b51502fa03d87d106b/mkroell. Accessed: 2014-07-07.
Fersini, E., Messina, E., and Pozzi, F. (2014). Sentiment analysis: Bayesian ensemble learning. In Decision Support Systems, volume 68, pages 26–38.
Fersini, E., Messina, E., and Pozzi, F. (2015). Expressive signals in social media languages to improve polarity detection. In Information Processing and Management.
Forman, G. (2003). An extensive empirical study of feature selection metrics for text classification. In J. Mach. Learn. Res., volume 3, pages 1289–1305. JMLR.org.
Ghiassi, M., Skinner, J., and Zimbra, D. (2013). Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network. In Expert Syst. Appl., volume 40, pages 6266–6282, Tarrytown, NY, USA. Pergamon Press, Inc.
Gimpel, K., Schneider, N., O'Connor, B., Das, D., Mills, D., Eisenstein, J., Heilman, M., Yogatama, D., Flanigan, J., and Smith, N. A. (2011). Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 42–47, Stroudsburg, PA, USA. Association for Computational Linguistics.
Hall, M. (2012). Twitter labelled dataset. http://markahall.blogspot.co.uk/2012/03/sentiment-analysis-with-weka.html. Accessed: 06-Feb-2013.
Hatzivassiloglou, V. and McKeown, K. (1997). Predicting the semantic orientation of adjectives. pages 174–181, Madrid, Spain.
Hogenboom, A., Bal, D., Frasincar, F., Bal, M., de Jong, F., and Kaymak, U. (2013). Exploiting emoticons in sentiment analysis. In Proceedings of the 28th Annual ACM Symposium on Applied Computing, SAC '13, pages 703–710, New York, NY, USA. ACM.
Hu, M. and Liu, B. (2004). Opinion lexicon. http://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html. Accessed: 2014-03-20.
Hu, X., Tang, L., Tang, J., and Liu, H. (2013). Exploiting social relations for sentiment analysis in microblogging. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, WSDM '13, pages 537–546, New York, NY, USA. ACM.
Ikonomakis, M., Kotsiantis, S., and Tampakas, V. (2005). Text classification using machine learning techniques. In WSEAS Transactions on Computers, volume 4, pages 966–974.
Jacobs, P. S. (1992). Joining statistics with nlp for text categorization. http://dblp.uni-trier.de/db/conf/anlp/anlp1992.html. Accessed: 2014-05-07.
Joachims, T. (1999). Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML '99, pages 200–209, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Kaji, N. and Kitsuregawa, M. (2007). Building lexicon for sentiment analysis from massive collection of html documents. In EMNLP-CoNLL, pages 1075–1083. ACL.
Kim, S.-M. and Hovy, E. (2004). Determining the sentiment of opinions. pages 1367–1373, Geneva, Switzerland.
Kotsiantis, S. B. (2007). Supervised machine learning: A review of classification techniques. In Proceedings of the 2007 Conference on Emerging Artificial Intelligence Applications in Computer Engineering: Real Word AI Systems with Applications in eHealth, HCI, Information Retrieval and Pervasive Technologies, pages 3–24, Amsterdam, The Netherlands. IOS Press.
Kouloumpis, E., Wilson, T., and Moore, J. (2011). Twitter sentiment analysis: The good the bad and the omg! In Adamic, L. A., Baeza-Yates, R. A., and Counts, S., editors, ICWSM. The AAAI Press.
Ladha, L. and Deepa, T. (2011a). Feature selection methods and algorithms. In International Journal on Computer Science and Engineering, volume 3, pages 1787–1797.
Ladha, L. and Deepa, T. (2011b). Feature selection methods and algorithms. In International Journal on Computer Science and Engineering, volume 3, pages 1787–1800.
Ling, C. X. and Sheng, V. S. (2007). Cost-sensitive Learning and the Class Imbalanced Problem. In Sammut, C., editor, Encyclopedia of Machine Learning.
Liu, K.-L., Li, W.-J., and Guo, M. (2012). Emoticon smoothed language models for twitter sentiment analysis. In Proceedings of the National Conference on Artificial Intelligence, volume 2, pages 1678–1684.
Lovins, J. B. (1968). Development of a stemming algorithm. In Mechanical Translation and Computational Linguistics, volume 11, pages 22–31.
Manning, C. D. and Schutze, H. (1999). Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, Massachusetts.
Marsland, S. (2011). Machine Learning: An Algorithmic Perspective. CRC Press.
Mitchell, T. M. (1996). Machine Learning. McGraw-Hill, New York, NY, USA.
Mohammad, S. M., Kiritchenko, S., and Zhu, X. (2013). Nrc-canada: Building the state-of-the-art in sentiment analysis of tweets. In Proceedings of the Seventh International Workshop on Semantic Evaluation Exercises (SemEval-2013).
Moilanen, K. and Pulman, S. (2007). Sentiment composition. In Proceedings of Recent
Advances in Natural Language Processing (RANLP 2007), pages 378382.
Morinaga, S., Yamanishi, K., Tateishi, K., and Fukushima, T. (2002). Mining product
reputations on the web. In Proceedings of the Eighth ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, KDD 02, pages 341349,
New York, NY, USA. ACM.
Nakov, P., Kozareva, Z., Ritter, A., Rosenthal, S., Stoyanov, V., and Wilson, T. (2013).
Semeval-2013 task 2: Sentiment analysis in twitter. In Seventh International Work-
shop on Semantic Evaluation (SemEval 2013), volume 2, pages 312320.
Narayanan, V., Arora, I., and Bhatia, A. (2013). Fast and accurate sentiment classifica-
tion using an enhanced Naive Bayes model. In Yin, H., Tang, K., Gao, Y., Klawonn,
F., Lee, M., Weise, T., Li, B., and Yao, X., editors, Intelligent Data Engineering and
Automated Learning – IDEAL 2013, volume 8206 of Lecture Notes in Computer Sci-
ence, pages 194–201. Springer Berlin Heidelberg.
Nielsen, F. Å. (2011). A new ANEW: Evaluation of a word list for sentiment analysis in
microblogs. In Proceedings of the ESWC2011 Workshop on Making Sense of Mi-
croposts: Big things come in small packages, volume 718 of CEUR Workshop
Proceedings, pages 93–98.
Ohana, B. and Tierney, B. (2009). Sentiment classification of reviews using SentiWord-
Net. http://www.bibsonomy.org/bibtex/2443c5ba60fab3ce8bb93a6e74c8cf87d/bsc.
Pak, A. and Paroubek, P. (2010). Twitter as a corpus for sentiment analysis and opin-
ion mining. In Calzolari, N. (Conference Chair), Choukri, K., Maegaard, B., Mariani, J.,
Odijk, J., Piperidis, S., Rosner, M., and Tapias, D., editors, Proceedings of the Seventh
International Conference on Language Resources and Evaluation (LREC '10), Valletta,
Malta. European Language Resources Association (ELRA).
Pang, B., Lee, L., and Vaithyanathan, S. (2002). Thumbs up?: Sentiment classification
using machine learning techniques. In Proceedings of the ACL-02 Conference on
Empirical Methods in Natural Language Processing - Volume 10, EMNLP '02, pages
79–86, Stroudsburg, PA, USA. Association for Computational Linguistics.
Piantadosi, S. (2014). Zipf's word frequency law in natural language: A critical review
and future directions. In Psychonomic Bulletin and Review, volume 21, pages
1112–1130. Springer US.
Polanyi, L. and Zaenen, A. (2006). Contextual Valence Shifters. In Croft, W. B.,
Shanahan, J., Qu, Y., and Wiebe, J., editors, Computing Attitude and Affect in Text:
Theory and Applications, volume 20 of The Information Retrieval Series, chapter 1,
pages 1–10. Springer Netherlands.
Porter, M. (2002). Snowball: Quick introduction.
http://snowball.tartarus.org/texts/quickintro.html. Accessed: 2014-10-16.
Porter, M. F. (1980). An algorithm for suffix stripping. In Program, volume 14, pages
130–137.
Pozzi, F. A., Fersini, E., Messina, E., and Blanc, D. (2013a). Enhance polarity clas-
sification on social media through sentiment-based feature expansion. In Baldoni,
M., Baroglio, C., Bergenti, F., and Garro, A., editors, WOA@AI*IA, volume 1099 of
CEUR Workshop Proceedings, pages 78–84. CEUR-WS.org.
Pozzi, F. A., Maccagnola, D., Fersini, E., and Messina, E. (2013b). Enhance user-level
sentiment analysis on microblogs with approval relations. In Baldoni, M., Baroglio,
C., Boella, G., and Micalizio, R., editors, AI*IA, volume 8249 of Lecture Notes in
Computer Science, pages 133–144. Springer.
Rajaraman, A. and Ullman, J. D. (2011). Mining of Massive Datasets. Cambridge
University Press, New York, NY, USA.
Raskutti, B., Ferra, H. L., and Kowalczyk, A. (2001). Second order features for max-
imising text classification performance. In Proceedings of the 12th European Confer-
ence on Machine Learning, EMCL '01, pages 419–430, London, UK. Springer-
Verlag.
Rijsbergen, C. J. V. (1979). Information Retrieval. Butterworth-Heinemann, Newton,
MA, USA, 2nd edition.
Rocchio, J. J. (1971). Relevance feedback in information retrieval. In Salton, G., editor,
The Smart retrieval system - experiments in automatic document processing, pages
313–323. Englewood Cliffs, NJ: Prentice-Hall.
Saif, H., He, Y., and Alani, H. (2012). Semantic sentiment analysis of Twitter. In
Proceedings of the 11th International Conference on The Semantic Web - Volume
Part I, ISWC '12, pages 508–524, Berlin, Heidelberg. Springer-Verlag.
Salton, G. and McGill, M. J. (1983). Introduction to Modern Information Retrieval.
McGraw-Hill Book Co.
Schapire, R. E. and Singer, Y. (2000). BoosTexter: A Boosting-based System for Text
Categorization. In Machine Learning, volume 39, pages 135–168.
Souza, T. T. P., Kolchyna, O., Treleaven, P. C., and Aste, T. (2015). Twit-
ter sentiment analysis applied to finance: A case study in the retail industry.
http://arxiv.org/abs/1507.00784.
Stone, P. J. and Hunt, E. B. (1963). A computer approach to content analysis: Studies
using the general inquirer system. In Proceedings of the May 21-23, 1963, Spring
Joint Computer Conference, AFIPS '63 (Spring), pages 241–256, New York, NY,
USA. ACM.
Taboada, M., Brooke, J., Tofiloski, M., Voll, K., and Stede, M. (2011). Lexicon-based
methods for sentiment analysis. In Computational Linguistics, volume 37, pages
267–307, Cambridge, MA, USA. MIT Press.
Tan, C.-M., Wang, Y.-F., and Lee, C.-D. (2002). The use of bigrams to enhance text
categorization. In Information Processing and Management, pages 529–546.
Tong, R. (2001). An operational system for detecting and tracking opinions in on-line
discussions. In Working Notes of the SIGIR Workshop on Operational Text Classifi-
cation, pages 1–6, New Orleans, Louisiana.
Turney, P. D. (2002). Thumbs up or thumbs down?: Semantic orientation applied to
unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on
Association for Computational Linguistics, ACL '02, pages 417–424, Stroudsburg,
PA, USA. Association for Computational Linguistics.
Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Springer-Verlag New
York, Inc., New York, NY, USA.
Vapnik, V. N. (1998). Statistical Learning Theory. Wiley, New York, NY, USA.
Wang, G., Sun, J., Ma, J., Xu, K., and Gu, J. (2014). Sentiment classification: The
contribution of ensemble learning. In Decision Support Systems, volume 57, pages
77–93.
Weiss, S. M., Apte, C., Damerau, F. J., Johnson, D. E., Oles, F. J., Goetz, T., and Hampp,
T. (1999). Maximizing text-mining performance. In IEEE Intelligent Systems, vol-
ume 14, pages 63–69, Piscataway, NJ, USA. IEEE Educational Activities Depart-
ment.
Wiebe, J. (2000). Learning subjective adjectives from corpora. In Proceedings of the
Seventeenth National Conference on Artificial Intelligence and Twelfth Conference
on Innovative Applications of Artificial Intelligence, pages 735–740. AAAI Press.
Wiebe, J. and Wilson, T. (2002). Learning to disambiguate potentially subjective ex-
pressions. In Proceedings of the 6th Conference on Natural Language Learning
(CoNLL-2002), pages 112–118, Taipei, Taiwan.
Wiebe, J., Wilson, T., and Cardie, C. (2005). Annotating expressions of opinions and
emotions in language. In Language Resources and Evaluation, volume 39, pages
165–210.
Wilson, T., Wiebe, J., and Hoffmann, P. (2005). Recognizing contextual polarity
in phrase-level sentiment analysis. In Proceedings of HLT/EMNLP 2005, pages
347–354, Vancouver, Canada.
You, Q. and Luo, J. (2013). Towards social imagematics: Sentiment analysis in social
multimedia. In Proceedings of the Thirteenth International Workshop on Multimedia
Data Mining, MDMKDD '13, pages 3:1–3:8, New York, NY, USA. ACM.
Zhang, D. (2003). Question classification using support vector machines. In Pro-
ceedings of the 26th Annual International ACM SIGIR Conference on Research and
Development in Information Retrieval, pages 26–32. ACM Press.
Zhao, J., Dong, L., Wu, J., and Xu, K. (2012). MoodLens: An emoticon-based sentiment
analysis system for Chinese tweets. In The 18th ACM SIGKDD International Confer-
ence on Knowledge Discovery and Data Mining, KDD '12, Beijing, China, August
12-16, 2012, pages 1528–1531.
Zhu, L., Galstyan, A., Cheng, J., and Lerman, K. (2014). Tripartite graph clustering
for dynamic sentiment analysis on social media. In Proceedings of the 2014 ACM
SIGMOD International Conference on Management of Data, SIGMOD '14, pages
1531–1542, New York, NY, USA. ACM.