
Face2Emoji: Using Facial Emotional Expressions to Filter Emojis

Abdallah El Ali, University of Oldenburg, Germany
Wilko Heuten, OFFIS - Institute for IT, Oldenburg, Germany
Torben Wallbaum, OFFIS - Institute for IT, Oldenburg, Germany
Susanne CJ Boll, University of Oldenburg, Germany
Merlin Wasmann, OFFIS - Institute for IT, Oldenburg, Germany

Abstract
One way to indicate nonverbal cues is by sending emoji (e.g., ), which requires users to make a selection from large lists. Given the growing number of emojis, this can incur user frustration, and instead we propose Face2Emoji, where we use a user's facial emotional expression to filter out the relevant set of emoji by emotion category. To validate our method, we crowdsourced 15,155 emoji to emotion labels across 308 website visitors, and found that our 202 tested emojis can indeed be classified into seven basic (including Neutral) emotion categories. To recognize facial emotional expressions, we use deep convolutional neural networks, where early experiments show an overall accuracy of 65% on the FER-2013 dataset. We discuss our future research on Face2Emoji, addressing how to improve our model performance, what type of usability test to run with users, and what measures best capture the usefulness and playfulness of our system.

Author Keywords
Permission to make digital or hard copies of part or all of this work for personal or Face2Emoji; emoji; crowdsourcing; emotion recognition;
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation facial expression; input; keyboard; text entry
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s). Copyright is held by the
author/owner(s).
ACM Classification Keywords
CHI’17 Extended Abstracts, May 06–11, 2017, Denver, CO, USA. H.5.m [Information interfaces and presentation (e.g., HCI)]:
ACM 978-1-4503-4656-6/17/05.
http://dx.doi.org/10.1145/3027063.3053086
Miscellaneous
Introduction
Nonverbal behavior conveys affective and emotional information, to communicate ideas, manage interactions, and disambiguate meaning to improve the efficiency of conversations [14, 25]. One way to indicate nonverbal cues is by sending emoji, which are graphic icons (e.g., , , ) managed by the Unicode Consortium1 that are identified by unicode characters and rendered according to a platform's font package.

Emojis enable people to express themselves richly, and while shown as screen graphics, they can be manipulated as text structures. Apart from Pohl et al.'s EmojiZoom [22], which proposes a zooming-based interface, entering emoji on smartphone keyboards currently requires users to make a selection from large lists (one list per category of emoji), e.g., the Apple© iOS 10 emoji keyboard2 in Fig. 1. This makes emoji entry "a linear search task" [22], and given the growing number of emojis, we assume this can incur user frustration. While no prior work explicitly addresses this, efforts such as Emojipedia3 highlight the need for better emoji search.

Figure 1: Apple© iOS 10 emoji keyboard within iMessage.

To address this, we propose Face2Emoji, a system and method that uses users' facial emotional expressions as system input to filter emojis by emotional category. Although emojis can represent actions, objects, nature, and other symbols, the most commonly used emojis are faces which express emotion [3, 17, 24]. Moreover, previous work has shown that emojis can be ranked by sentiment (cf. the Emoji Sentiment Ranking by Novak et al. [15]), that textual notifications containing emojis exhibit differences in 3-valued sentiment across platforms [23], and that face emojis can be ranked by valence and arousal [24].

Motivation & Research Questions
Face2Emoji is motivated by two findings from the literature: that a primary function of emojis is to express emotion, and that most emojis used are face emojis. Cramer et al. [3] found that 60% (139/228) of the messages they analyzed from US participants contained emoji used for expressing emotion. In an Instagram emoji study4, faces accounted for 6 of the top 10 emojis used, providing further evidence that people frequently use emoji to express emotion. Furthermore, according to a 2015 SwiftKey report5, faces accounted for close to 60 percent of emoji use in their analysis of billions of messages. Finally, in a qualitative study on emoticon sticker usage, Lee et al. [17] found that these stickers were used mainly for expressing emotions.

The study of nonverbal communication via emotions originated with Darwin's claim that emotion expressions evolved in humans from pre-human nonverbal displays [4]. Furthermore, according to Ekman [6, 7], there are six basic emotions which have acquired a special status among the scientific community: Anger, Disgust, Fear, Happiness, Sadness, and Surprise. Here, we draw on these six basic emotions, and additionally include the Neutral facial expression. By using computer vision and machine learning techniques for analyzing and recognizing emotional expressions, the user's face can be used as a natural interaction filter6. To test the validity of our proposed method, we used crowdsourcing to firstly identify whether a natural mapping between emojis and the seven facial expressions exists, and if so, what this mapping distribution looks like.

1 http://unicode.org/emoji/ ; last retrieved: 14-02-2017
2 Source: https://support.apple.com/en-us/HT202332 ; last retrieved: 14-02-2017
3 http://emojipedia.org/ ; last retrieved: 14-02-2017
4 https://www.tumblr.com/dashboard/blog/instagram-engineering/117889701472 ; last retrieved: 14-02-2017
5 https://blog.swiftkey.com/americans-love-skulls-brazilians-love-cats-swiftkey-emoji-meanings-report/ ; last retrieved: 14-02-2017
6 A filter, according to Wikipedia (https://en.wikipedia.org/wiki/Filter_(higher-order_function)), is defined as "a higher-order function that processes a data structure (usually a list) in some order to produce a new data structure containing exactly those elements of the original data structure for which a given predicate returns the boolean value true."
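To make the filtering idea concrete, the sketch below shows a minimal, hypothetical emoji filter in Python: given a detected facial expression label and an emoji-to-emotion mapping (such as the crowdsourced mapping described later), a higher-order filter keeps only the emojis whose emotion classes match. The mapping entries and function names are illustrative, not the actual Face2Emoji implementation.

```python
# Minimal sketch of emoji filtering by detected emotion (illustrative only).
# The emoji-to-emotion mapping would come from the crowdsourced labels;
# the entries below are hypothetical examples.
EMOJI_TO_EMOTIONS = {
    "\U0001F600": {"Happy"},             # grinning face
    "\U0001F622": {"Sad"},               # crying face
    "\U0001F620": {"Angry"},             # angry face
    "\U0001F3C6": {"Happy", "Neutral"},  # trophy: may belong to two classes
}

def filter_emojis(detected_emotion: str, mapping=EMOJI_TO_EMOTIONS) -> list:
    """Return only the emojis whose emotion classes include the detected one."""
    return [emoji for emoji, emotions in mapping.items()
            if detected_emotion in emotions]

print(filter_emojis("Happy"))  # the keyboard would then show only these emojis
```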
We address the following questions: Do the most frequently used emojis naturally map to the six basic (+ Neutral) facial emotional expressions? Can we achieve reasonable facial emotional expression recognition for these emotions using deep convolutional neural networks? The rest of the paper addresses related work on natural, multimodal user interfaces and emoji communication, presents our crowdsourcing approach and results as well as our early emotion recognition experiments using deep convolutional neural networks, and sketches our future research steps and open questions.

Related Work
Multimodal User Interfaces and Emoji Entry
Related to our approach, Filho et al. [8] augmented text chatting on mobile phones by adding automatically detected facial expression reactions using computer vision techniques, resulting in an emotion-enhanced mobile chat. For using the user's face as input, Anand et al. [1] explored a use case of an eBook reader application wherein the user performs certain facial expressions naturally to control the device. With respect to emoji entry, Pohl et al. [22] proposed a new zooming keyboard for emoji entry, EmojiZoom, where users can see all emoji at once. Their technique, which was tested in a usability study against the Google keyboard, showed 18% faster emoji entry.

Emoji and Emotion Communication
The compactness of emojis reduces the effort of input to express not only emotions, but also serves to adjust message tone, increase message engagement, manage conversations, and maintain social relationships [3]. Moreover, emojis do not have language barriers, making it possible for users across countries and cultural backgrounds to communicate [18]. In a study by Barbieri et al. [2], they found that the overall semantics of the subset of the emojis they studied is preserved across US English, UK English, Spanish, and Italian. As validation of the usefulness of mapping emojis to emotions, preliminary investigations reported by Jaeger et al. [13] suggest that emoji may have potential as a method for direct measurement of emotional associations to foods and beverages.

Emoji (Mis-)interpretation
Recently, Miller et al. [20] demonstrated how the same emoji looks different across devices (iPhone, Android, Samsung) and is therefore interpreted differently across users. Even when participants were exposed to the same emoji rendering, they disagreed on whether the sentiment was positive, neutral, or negative around 25% of the time. In a related preliminary study, Tigwell et al. [24] found clear differences in emoji valence and arousal ratings between platform pairs due to differences in their design, as well as variations in ratings for emoji within a platform. In the context of our work, this highlights the need to account for multiple interpretations, where an emoji (as we show later) can be classified as belonging to one or more emotion categories.

Figure 2: Snapshot of the Face2Emoji crowdsourcing website (showing female faces here).

Crowdsourcing Emoji to Emotion Mappings
Approach
To validate whether emojis, irrespective of function, can be categorized into one of the six basic emotional expressions (+ Neutral), and what such a mapping looks like, we adopted a crowdsourcing approach. Since, as of Unicode 9.0, there are 1,394 emojis (not including modified emojis or sequences)7, we decided to test only a subset. We selected emojis with greater than 100 occurrences from the Emoji Sentiment Ranking V1 [15] dataset, which resulted in a total of 202 emojis.

7 http://emojipedia.org/stats/ ; last retrieved: 14-02-2017
We built a website to crowdsource emoji to emotion labels (shown in Fig. 2). On the website, an emoji is shown and a visitor has to choose one of seven emotion faces8: Afraid, Angry, Disgusted, Neutral, Happy, Sad, Surprised. Additionally, a 'Skip' option was provided in case the emoji was not displayed correctly. We tracked emojis and emotion labels using cookie session IDs, where the emoji index and associated unicode were used for all subsequent analysis. We additionally tracked a visitor's operating system, but not the browser type (which can be a limitation). IP addresses were not tracked to avoid data privacy issues. Furthermore, we chose to render the unicode and not create images from them, in order to ensure users across platforms can provide emotion labels, irrespective of rendering. The website was distributed via online forums (e.g., Reddit) and the authors' social networks. Our datasets (raw and ranked) are publicly available9 for research purposes here: https://github.com/abdoelali/f2e_dataset

Descriptive Statistics
We collected a total of 15,155 labels across 308 unique website visitors. Each emoji received an average of 75.0 labels (Md = 74.5, s = 5.3). From the total set, 1,621 (10.7%) were 'skipped' (or labeled as NAs), where the 10% of respondents who labeled NAs made up 73.3% (1188/1621) of all NAs in our dataset. The distribution of operating systems used to access the website is shown in Table 1.

Table 1: Operating systems used to access the Face2Emoji website across all visitors (N=15,155).

Operating System    Labels
Win32               7113
MacIntel            3033
iPhone              2347
Android             1269
Linux               517
iPad                449
Win64               427

Annotation Quality
As a test of annotation quality, we (N=2) independently rated each emoji by classifying it into one of the emotion categories, and computed unweighted Cohen's Kappa. Our ratings reached moderate agreement on classifying emojis into emotions (κ=0.55, CI: [0.46, 0.65]), where we agreed on 71.3% (144/202) of emojis. These joint labels were then compared with the top ranked (majority voted) emojis from the crowd, which gave an almost perfect agreement (κ=0.85, CI: [0.77, 0.93]).
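As a rough illustration of this agreement check (not our actual analysis script), unweighted Cohen's Kappa between two raters can be computed with scikit-learn; the rater label vectors below are hypothetical.

```python
# Hedged sketch: unweighted Cohen's Kappa between two raters over the
# tested emojis. The rater label lists here are made-up examples.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["Happy", "Sad", "Neutral", "Angry"]  # hypothetical labels
rater_2 = ["Happy", "Sad", "Happy", "Angry"]    # hypothetical labels

kappa = cohen_kappa_score(rater_1, rater_2)     # unweighted by default
print(round(kappa, 2))
```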
Classification Results
The distribution of the most frequent (by majority vote) emotion labels, as well as the next top labels, across the 202 tested emojis is shown in Fig. 3. It is interesting to observe here that for the majority of labels, none of the emojis tested were skipped due to unicode rendering. From our labeled data, it became clear that an emoji can be classified under two emotions (following a bimodal or at times multimodal distribution). For example, the trophy emoji was nearly equally labeled as Happy (N=32), since a trophy, a sign of achievement, can evoke happiness, and Neutral (N=34), since it is an object with no direct mapping to a facial expression. Therefore, to account for this variability, we classified whether an emoji belongs to an emotion label using our Emotion Class (EC) function:

\[
EC = \frac{x_{ij}}{\max(x_i)} =
\begin{cases}
1 & \text{if } EC > 0.5 \\
0 & \text{if } EC \leq 0.5
\end{cases}
\tag{1}
\]

where x_i ∈ [1, 202] indexes the tested emojis and x_j ∈ [1, 8] indexes the emotion labels (including NA). We chose a cutoff threshold of 0.5, where an emoji is classified as belonging to an emotion class if EC > 0.5. The result of applying our EC function to our data is shown in Table 2, where the emojis per emotion category are sorted by label count in ascending order.

Figure 3: Distribution of top and next top crowdsourced majority voting of 202 emojis across emotion categories, including NAs.

8 Female or male faces randomly chosen on page refresh.
9 Our datasets contain no sensitive information and therefore comply with user privacy.
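As a minimal sketch of how the EC rule can be applied to the crowdsourced counts (an illustrative reimplementation, not our analysis code; the Sad and Surprised counts for the trophy example are hypothetical, while the Happy and Neutral counts are taken from the text above):

```python
# Sketch of the Emotion Class (EC) rule: an emoji belongs to every emotion
# class whose label count exceeds half of its maximum label count.
def emotion_classes(label_counts: dict, threshold: float = 0.5) -> list:
    """label_counts maps emotion label -> number of crowd votes for one emoji."""
    max_count = max(label_counts.values())
    return [emotion for emotion, count in label_counts.items()
            if count / max_count > threshold]

# Trophy emoji example: Happy (N=32) and Neutral (N=34) from the text;
# the remaining counts are made up for illustration.
trophy_counts = {"Happy": 32, "Neutral": 34, "Sad": 2, "Surprised": 1}
print(emotion_classes(trophy_counts))  # ['Happy', 'Neutral'] -> two classes
```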
Table 2: Resulting emojis per emotion class distribution after applying our EC function.

Figure 4: Training and validation data distribution across emotion categories.

Deep CNN for Emotion Recognition
To build our emotion recognition module, we used deep Convolutional Neural Networks (CNNs). Deep Learning-based approaches, particularly those using CNNs, have been very successful at image-related tasks in recent years, due to their ability to extract good representations from data [12]. We chose to build our own recognition system instead of using available APIs (such as Microsoft's Emotion API10) because: (a) it allows us greater flexibility in inspecting the classification accuracies ourselves and determining why certain emotions are not correctly classified, (b) we can ensure user privacy by running all predictions directly on the device, and (c) it is free.

Dataset & Architecture
We used the FER-2013 facial expression dataset [9] for training and validation, which comprises 32,298 grayscale 48x48 pixel images of facial expressions, collected from the web using 184 emotion-related keywords. We implemented our network with TFLearn11, a deep learning library featuring a higher-level API for TensorFlow12. Our implementation and training procedure followed recent work by Gudi et al. [10], who used CNNs for emotion recognition. All faces were detected with OpenCV's Viola & Jones face detector (frontal) [26], resulting in a final training sample of 21,039 and a validation sample of 1,546 images. The distribution across emotion labels is shown in Fig. 4, where it can be seen that most of the emotions are Happy, Neutral, Sad, and Surprised.

10 https://www.microsoft.com/cognitive-services/en-us/emotion-api ; last retrieved: 14-02-2017
11 http://tflearn.org/ ; last retrieved: 14-02-2017
12 https://www.tensorflow.org/ ; last retrieved: 14-02-2017
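The preprocessing step could look roughly like the following sketch (a simplified reconstruction under stated assumptions, not our exact pipeline): detect a frontal face with OpenCV's pre-trained Viola & Jones Haar cascade, crop it, and resize the grayscale crop to 48x48 pixels. The cascade path and function names are examples.

```python
# Hedged sketch: detect a frontal face with OpenCV's Viola & Jones (Haar
# cascade) detector, then crop and resize to 48x48 grayscale as in FER-2013.
# The cascade XML ships with OpenCV; the path below is an example location.
import cv2

CASCADE_PATH = "haarcascade_frontalface_default.xml"  # adjust to your install
face_detector = cv2.CascadeClassifier(CASCADE_PATH)

def preprocess_face(image_path):
    """Return a 48x48 grayscale face crop, or None if no face is found."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None                       # image dropped from the sample
    x, y, w, h = faces[0]                 # take the first detected face
    face = gray[y:y + h, x:x + w]
    return cv2.resize(face, (48, 48))     # network input size (Table 3)
```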
After experimenting with different architectures and hyperparameters, our final network architecture is shown in Table 3. Training was done with a batch size of 32, using stochastic gradient descent with momentum=0.9, learning rate=0.001, and weight decay=0.0005, where loss was computed using categorical cross-entropy, and run on an NVIDIA GeForce GTX 970 GPU for 100 epochs.

Table 3: Our current CNN architecture.

Layer            Output Size
Input            48 x 48 x 1
Convolution      5 x 5 x 64 (activation = ReLU)
Max Pooling      3 x 3 (strides = 2)
Convolution      5 x 5 x 64 (activation = ReLU)
Max Pooling      3 x 3 (strides = 2)
Convolution      4 x 4 x 128 (activation = ReLU)
Dropout          value = 0.3
Fully Connected  3072
Softmax          7
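A hedged TFLearn sketch of this architecture and training setup follows. It is our reading of Table 3 and the hyperparameters above, not the exact training script: layer and variable names are ours, the dropout "value = 0.3" is passed as TFLearn's keep probability, weight decay is applied as per-layer L2 regularization, and the activation of the 3072-unit layer is assumed to be ReLU.

```python
# Hedged TFLearn sketch of the network in Table 3 (some argument
# interpretations are assumptions, as noted in the comments).
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.optimizers import Momentum

net = input_data(shape=[None, 48, 48, 1])                 # 48x48 grayscale input
net = conv_2d(net, 64, 5, activation='relu',
              regularizer='L2', weight_decay=0.0005)      # 5x5x64 convolution
net = max_pool_2d(net, 3, strides=2)                      # 3x3 pooling, stride 2
net = conv_2d(net, 64, 5, activation='relu',
              regularizer='L2', weight_decay=0.0005)      # 5x5x64 convolution
net = max_pool_2d(net, 3, strides=2)                      # 3x3 pooling, stride 2
net = conv_2d(net, 128, 4, activation='relu',
              regularizer='L2', weight_decay=0.0005)      # 4x4x128 convolution
net = dropout(net, 0.3)                                   # "value = 0.3" (keep prob assumed)
net = fully_connected(net, 3072, activation='relu')       # FC 3072 (activation assumed)
net = fully_connected(net, 7, activation='softmax')       # 7 emotion classes
net = regression(net,
                 optimizer=Momentum(learning_rate=0.001, momentum=0.9),
                 loss='categorical_crossentropy')

model = tflearn.DNN(net)
# X/Y and X_val/Y_val would be the preprocessed FER-2013 training/validation arrays:
# model.fit(X, Y, validation_set=(X_val, Y_val), batch_size=32, n_epoch=100)
```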
Early Experiments & Results
Accuracy and validation accuracy plots across 100 epochs for our CNN model are shown in Fig. 5. Our network converged on a validation accuracy of 65%, which is comparable to human-level performance on this dataset [9]. To evaluate our network performance, we tested our predictions on 1,523 FER-2013 test images. Additionally, we used the Radboud Faces Database (RaFD) [16], which consists of 8,000 high resolution faces, as well as the Karolinska Directed Emotional Faces (KDEF) [19], which consists of 4,900 pictures of human facial expressions of emotion.

Figure 5: Accuracy and validation accuracy of our final network model across 100 epochs.

The datasets differ in quantity, quality, and how much posing is involved. In this respect, the FER-2013 dataset shows emotions 'in the wild'. We took only the frontal images of the RaFD, and after face detection, we had a test set of 1,407 images. For the KDEF dataset, after preprocessing, we had 980 images. The performance of our network is 68% on the FER-2013 test set, 55% on the RaFD, and 46% on KDEF. We additionally experimented with a ResNet-32 [11] deep residual network (an architecture that has recently shown great promise on image recognition tasks), where we achieved up to 70% validation accuracy; however, the network appeared to overfit and performed poorly on our FER-2013 test set (32.5%). For this reason, we leave further experiments with this type of network for future work.

Our model prediction performance matrix is shown in Fig. 6. Best results were for the Happy, Surprised, and Neutral emotions. This is in part due to the amount of training data used, but also due to the difficulty of detecting certain emotions (e.g., surprise and fear are easily conflated due to similar coarse facial features). Given this, in our future work we intend to test our Face2Emoji app with users for only those three emotions: Happy, Surprised, and Neutral.

Figure 6: Performance matrix for our deep CNN model (rows: real emotion; columns: predicted emotion).

           angry  disgusted  afraid  happy  sad   surprised  neutral
neutral    0.03   0.01       0.02    0.04   0.10  0.02       0.78
surprised  0.01   0.01       0.08    0.03   0.03  0.81       0.04
sad        0.03   0.04       0.05    0.06   0.43  0.04       0.34
happy      0.01   0.01       0.00    0.86   0.03  0.03       0.07
afraid     0.11   0.04       0.43    0.01   0.12  0.12       0.17
disgusted  0.09   0.74       0.03    0.00   0.11  0.00       0.03
angry      0.46   0.05       0.08    0.03   0.16  0.03       0.20
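A row-normalized performance matrix like the one in Fig. 6 could be computed along the following lines (a sketch using scikit-learn; it assumes the trained TFLearn model from the earlier sketch and hypothetical test arrays X_test/y_test):

```python
# Sketch: build a row-normalized confusion matrix (real vs. predicted emotion).
# Assumes `model` is the trained TFLearn model, X_test holds preprocessed test
# images, and y_test holds integer emotion labels (0..6).
import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = np.argmax(model.predict(X_test), axis=1)    # most likely class per image
cm = confusion_matrix(y_test, y_pred)                # raw counts, 7x7
cm_normalized = cm / cm.sum(axis=1, keepdims=True)   # each row sums to 1.0
print(np.round(cm_normalized, 2))
```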
Next Steps & Research Agenda
Our next steps are to experiment further with deep learning approaches for emotion recognition (e.g., using transfer learning to deal with our small dataset [21]). Furthermore, we are currently completing development of the Face2Emoji keyboard prototype for Android (shown in Fig. 7), and planning a usability test with users. In this usability test, we want to test Face2Emoji against a baseline custom keyboard we call EmoTabs, where the emojis are organized according to emotion labels (instead of the default categories). Our current plan is to evaluate Face2Emoji on selection time performance for individual emojis, but more importantly on whether emoji filtering using one's own emotional facial expressions is fun and natural.

Figure 7: Early Android-based Face2Emoji prototype showing emojis (with Apple's© unicode rendering).

For this work in progress, many open questions remain which steer our future work: How would users perceive such a system (i.e., does it raise social acceptability issues)? How should we give feedback to users on their currently detected emotion? Should activating face recognition remain user-driven, or should it be system-driven (i.e., continuous recognition)? How would users react to such machine learning predictions, especially when they are incorrect or exhibit bias? How should misclassifications be explained and visualized to users? Since we have shown that emojis can be classified into emotion categories, can NLP methods be used to automate the classification of new emojis? Finally, what other smartphone applications can benefit from facial emotional expression shortcuts? In more distant future work, we intend to explore a personalized form of Face2Emoji, where we would integrate contextual cues using word embedding models (including emoji2vec [5] pre-trained emoji embeddings) for personalized ranking and filtering.
References
[1] B. Anand, B. B. Navathe, S. Velusamy, H. Kannan, A. Sharma, and V. Gopalakrishnan. 2012. Beyond touch: Natural interactions using facial expressions. In Proc. CCNC '12. 255-259. DOI: http://dx.doi.org/10.1109/CCNC.2012.6181097
[2] Francesco Barbieri, German Kruszewski, Francesco Ronzano, and Horacio Saggion. 2016. How Cosmopolitan Are Emojis?: Exploring Emojis Usage and Meaning over Different Languages with Distributional Semantics. In Proc. MM '16. ACM, New York, NY, USA, 531-535. DOI: http://dx.doi.org/10.1145/2964284.2967278
[3] Henriette Cramer, Paloma de Juan, and Joel Tetreault. 2016. Sender-intended Functions of Emojis in US Messaging. In Proc. MobileHCI '16. ACM, New York, NY, USA, 504-509. DOI: http://dx.doi.org/10.1145/2935334.2935370
[4] Charles Darwin. 1872/2009. The Expression of the Emotions in Man and Animals (anniversary ed.). Harper Perennial. http://www.worldcat.org/isbn/0195158067
[5] Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bosnjak, and Sebastian Riedel. 2016. emoji2vec: Learning Emoji Representations from their Description. CoRR abs/1609.08359 (2016). http://arxiv.org/abs/1609.08359
[6] Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion (1992), 169-200.
[7] Paul Ekman and Wallace V. Friesen. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17, 2 (1971), 124-129. http://search.ebscohost.com/login.aspx?direct=true&db=pdh&AN=psp-17-2-124&site=ehost-live
[8] Jackson Feijó Filho, Thiago Valle, and Wilson Prata. 2014. Non-verbal Communications in Mobile Text Chat: Emotion-enhanced Mobile Chat. In Proc. MobileHCI '14. ACM, New York, NY, USA, 443-446. DOI: http://dx.doi.org/10.1145/2628363.2633576
[9] Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron C. Courville, Mehdi Mirza, Benjamin Hamner, William Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu-Tudor Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Chuang Zhang, and Yoshua Bengio. 2013. Challenges in Representation Learning: A Report on Three Machine Learning Contests. In ICONIP (3), Minho Lee, Akira Hirose, Zeng-Guang Hou, and Rhee Man Kil (Eds.), Vol. 8228. Springer, 117-124.
[10] Amogh Gudi. 2015. Recognizing Semantic Features in Faces using Deep Learning. CoRR abs/1512.00743 (2015). http://arxiv.org/abs/1512.00743
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In Proc. CVPR '16.
[12] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. http://goodfeli.github.io/dlbook/ Book in preparation for MIT Press.
[13] Sara R. Jaeger, Leticia Vidal, Karrie Kam, and Gastón Ares. 2017. Can emoji be used as a direct method to measure emotional associations to food names? Preliminary investigations with consumers in USA and China. Food Quality and Preference 56, Part A (2017), 38-48. DOI: http://dx.doi.org/10.1016/j.foodqual.2016.09.005
[14] Mark L. Knapp and Judith A. Hall. 2010. Nonverbal Communication in Human Interaction (7 ed.). Wadsworth: Cengage Learning, Boston, USA.
[15] Petra Kralj Novak, Jasmina Smailović, Borut Sluban, and Igor Mozetič. 2015. Sentiment of Emojis. PLOS ONE 10, 12 (12 2015), 1-22. DOI: http://dx.doi.org/10.1371/journal.pone.0144296
[16] Oliver Langner, Ron Dotsch, Gijsbert Bijlstra, Daniel H. J. Wigboldus, Skyler T. Hawk, and Ad van Knippenberg. 2010. Presentation and validation of the Radboud Faces Database. Cognition and Emotion 24, 8 (2010), 1377-1388. DOI: http://dx.doi.org/10.1080/02699930903485076
[17] Joon Young Lee, Nahi Hong, Soomin Kim, Jonghwan Oh, and Joonhwan Lee. 2016. Smiley Face: Why We Use Emoticon Stickers in Mobile Messaging. In Proc. MobileHCI '16. ACM, New York, NY, USA, 760-766. DOI: http://dx.doi.org/10.1145/2957265.2961858
[18] Xuan Lu, Wei Ai, Xuanzhe Liu, Qian Li, Ning Wang, Gang Huang, and Qiaozhu Mei. 2016. Learning from the Ubiquitous Language: An Empirical Analysis of Emoji Usage of Smartphone Users. In Proc. UbiComp '16. ACM, New York, NY, USA, 770-780. DOI: http://dx.doi.org/10.1145/2971648.2971724
[19] Daniel Lundqvist, Anders Flykt, and Arne Öhman. 1998. The Karolinska directed emotional faces (KDEF). CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet (1998), 91-630.
[20] Hannah Miller, Jacob Thebault-Spieker, Shuo Chang, Isaac Johnson, Loren Terveen, and Brent Hecht. 2016. "Blissfully happy" or "ready to fight": Varying interpretations of emoji. In Proc. ICWSM 2016. AAAI Press, 259-268.
[21] Hong-Wei Ng, Viet Dung Nguyen, Vassilios Vonikakis, and Stefan Winkler. 2015. Deep Learning for Emotion Recognition on Small Datasets Using Transfer Learning. In Proc. ICMI '15. ACM, New York, NY, USA, 443-449. DOI: http://dx.doi.org/10.1145/2818346.2830593
[22] Henning Pohl, Dennis Stanke, and Michael Rohs. 2016. EmojiZoom: Emoji Entry via Large Overview Maps 😄🔍. In Proc. MobileHCI '16. ACM, New York, NY, USA, 510-517. DOI: http://dx.doi.org/10.1145/2935334.2935382
[23] Channary Tauch and Eiman Kanjo. 2016. The Roles of Emojis in Mobile Phone Notifications. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct (UbiComp '16). ACM, New York, NY, USA, 1560-1565. DOI: http://dx.doi.org/10.1145/2968219.2968549
[24] Garreth W. Tigwell and David R. Flatla. 2016. Oh That's What You Meant!: Reducing Emoji Misunderstanding. In Proc. MobileHCI '16. ACM, New York, NY, USA, 859-866. DOI: http://dx.doi.org/10.1145/2957265.2961844
[25] Jessica L. Tracy, Daniel Randles, and Conor M. Steckler. 2015. The nonverbal communication of emotions. Current Opinion in Behavioral Sciences 3 (2015), 25-30. DOI: http://dx.doi.org/10.1016/j.cobeha.2015.01.001
[26] Paul Viola and Michael J. Jones. 2004. Robust Real-Time Face Detection. Int. J. Comput. Vision 57, 2 (May 2004), 137-154. DOI: http://dx.doi.org/10.1023/B:VISI.0000013087.49260.fb
