
Conference Paper · January 2010 · Source: DBLP


Predicting the Importance of Newsfeed Posts and Social Network Friends
Tim Paek, Michael Gamon, Scott Counts, David Maxwell Chickering, Aman Dhesi*

Microsoft Research, One Microsoft Way, Redmond, WA 98052 USA
{timpaek|mgamon|counts|dmax}@microsoft.com

*Indian Institute of Technology Kanpur, Kanpur 208016, Uttar Pradesh, India
[email protected]

Abstract

As users of social networking websites expand their network of friends, they are often flooded with newsfeed posts and status updates, most of which they consider to be "unimportant" and not newsworthy. In order to better understand how people judge the importance of their newsfeed, we conducted a study in which Facebook users were asked to rate the importance of their newsfeed posts as well as their friends. We learned classifiers of newsfeed and friend importance to identify predictive sets of features related to social media properties, the message text, and shared background information. For classifying friend importance, the best performing model achieved 85% accuracy and 25% error reduction. By leveraging this model for classifying newsfeed posts, the best newsfeed classifier achieved 64% accuracy and 27% error reduction.

Introduction

According to market research (Morgan Stanley, 2009), social networking is a global phenomenon; Facebook alone has over 350 million active users with 137% year-to-year growth. Indeed, over the last 3 years, users spent more global Internet minutes on Facebook than on any other website. As more people join social networking sites, and users expand their network of friends, they are often confronted with a triage problem: their accounts are flooded with newsfeed posts and status updates, most of which they consider to be "unimportant" and not newsworthy (as we demonstrate later in our data analysis). In this paper, we explore to what extent we can accurately predict users' perceived importance of newsfeed posts and of their friends. We employ machine learning not only to learn classifiers for newsfeed posts and friends, but also to gain insight into the kinds of features related to social media properties, the message text, and shared background information that are indicative of importance. Such models and insight could be used to develop intelligent user interfaces that filter or re-rank newsfeeds.

This paper consists of four sections. First, we provide background on social media and related research. Second, we describe a study in which Facebook users were asked to rate the importance of their newsfeed posts and friends. Third, we delineate all the features we engineered, and relate the results of model selection experiments in which we learned support vector machine (SVM) classifiers using different combinations of features. Finally, we discuss the results with an eye towards future research, including what benefit might be possible with personalization.

Background

Most content in popular social media websites takes the form of status updates or posts that are contributed by users and subsequently pushed out to others who are friends or followers of that user. Facebook utilizes this concept by allowing users to post text status updates, as well as to share links, photos, and videos. Once posted, this content is pushed to the newsfeeds of friends in the post sender's social network, where it is presented in reverse chronological order. Based on usage statistics from Facebook [1], a rough estimate shows that the typical Facebook user receives well over 1,000 items per week from 130 friends. Despite an average of 55 minutes per day spent on the site, given the sheer number of items and chronological presentation, users are likely to miss some potentially interesting content.

This highlights the need for better tools to surface the most important newsfeed posts. Facebook itself has implemented a system for distinguishing the more important or interesting content (the "News Feed") from the stream of all content (the "Live Feed"). The details of the algorithm for identifying News Feed content are not publicly known, but it appears to use a heuristic approach that includes metrics like what type of content was posted (e.g., a status update, link, photo) and how many comments it has received [2]. The system does not appear to take into account the message text or any historical information such as how frequently users have corresponded with the post sender. Furthermore, there is relatively little functionality in Facebook to help users triage feed content explicitly. Other than gross-level settings like blocking and hiding (specifying friends and applications you do not want to receive content from) and specifying individuals from whom you would like to see more content, there is no functionality for nuanced content triage such as

Copyright © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
[1] http://www.facebook.com/press/info.php?statistics
[2] http://www.facebook.com/help/?page=408#/help/?faq=16162
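The feed-volume figures cited in the Background section imply a very small attention budget per item, which can be made concrete with a quick back-of-the-envelope calculation (the 1,000 items/week and 55 minutes/day figures are from the text; the per-item time is derived, not from the paper):

```python
# Rough attention budget per newsfeed item, using the usage figures
# cited in the Background section (1,000+ items/week, 55 min/day on site).
items_per_week = 1000
minutes_per_day = 55

items_per_day = items_per_week / 7                      # about 143 items/day
seconds_per_item = minutes_per_day * 60 / items_per_day  # about 23 s/item

print(f"{items_per_day:.0f} items/day, {seconds_per_item:.0f} s of site time per item")
```

Even if all site time went to reading the feed, an average user could spend well under half a minute per item, which is why some potentially interesting content is likely to be missed.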
Figure 1: Screenshots of the Newsfeed Tagger Facebook application showing how participants rated (a) news feed posts and (b) friends.

preferentially weighting content based on keywords, or otherwise helping users rank their feed content.

Related Research

Given that Facebook data is not entirely public, little research has examined methods for content ranking in Facebook. However, several efforts have demonstrated the predictive qualities of Facebook data. First, in terms of leveraging social media to uncover relationships, Gilbert and Karahalios (2009) showed that properties such as the number of intimacy words exchanged between two users on their Facebook walls, and days since their last communication, can predict "tie strength" (Granovetter, 1973), or the strength of the relationship between any two users, with moderate to strong accuracy. In our study, although we had users rate the "importance" of their friends, it was in terms of how interested they were in knowing about their daily activities. Because it is possible to have weak tie strength and a strong interest in knowing about a friend's daily life (e.g., a boss), our focus here is more about news and less about social relationships, though we consider all such variables to be useful features for classification.

In addition to predicting tie strength, Facebook has been analyzed statistically to better understand a variety of properties of users and their behavior. For example, new Facebook users' photo posting behavior can be predicted by the photo posting behavior of friends in their network (Burke et al., 2009). The number of friends has been shown to have a curvilinear relationship with the social attractiveness and extraversion of the user (Tong et al., 2008). And Sun et al. (2009) demonstrated that information diffuses in small chains of users that may then merge, rather than starting at a single point.

In terms of triaging content more broadly, research in the email domain has demonstrated a number of benefits to leveraging social metadata. For example, Venolia et al. (2001) highlight a variety of social attributes of email that contribute to perceived importance, including whether it was addressed directly to the user as well as the relationship of the sender to the user (e.g., whether the email came from a manager). Given the potential usefulness of social metadata, prototype systems such as DriftCatcher (Lockerd, 2002), Bifrost (Balter & Sidner, 2002), and SNARF (Neustadter et al., 2005) have incorporated social relationship information when organizing and presenting email to the user in order to facilitate triage. Finally, of notable relevance, Horvitz et al. (1999) demonstrated that machine learning could be leveraged to rank email content for near-automated triage.

In summary, the sheer number of posts most users see in Facebook highlights the need for better content ranking. While relatively little research exists specifically on ranking social network feed content, prior work has demonstrated that user behavior and Facebook content are predictive of a number of phenomena, and thus are good candidates on which to train statistical models for classification. From work in the email domain, we know that both social metadata and machine learning have been successfully leveraged to help triage incoming content. Here, we take a similar approach in what is, to our knowledge, the first research to apply machine learning to build predictive models of newsfeed importance, which in turn can be used to build interfaces that help users triage their flood of posts.

User Study

In order to obtain importance ratings for newsfeed posts and friends, we conducted a user study. We recruited 24 participants through an email solicitation sent to our organization. Participants were required to be active Facebook users who checked their newsfeed on a daily basis. All participants had at least 200 friends in their social network. They were also financially compensated for their involvement.

Data Collection Method

Participants were asked to download a Facebook application we developed called "Newsfeed Tagger" under Facebook's Terms of Service agreement. As shown in Figure 1, the application consists of two tabs: one to Rate News Feed and another to Rate Friends. For newsfeed posts, the application retrieved and displayed posts using the same markup language style as the newsfeed on the Facebook home page (Figure 1(a)). Participants were instructed to rate the importance of each post using a slider next to the post, where "the far right of the slider means that this item is very important and the far left means that you would skip the item." The sliders provided a continuous value from 0 to 100. For rating friends, participants received a list of friends in their network ranked according to a simple heuristic that took into account the last time users interacted with that friend and how frequently. As shown in Figure 1(b), because users
Figure 2: (a) Newsfeed ratings histogram; (b) Friend ratings histogram; (c) Scatter plot of time since post creation by newsfeed rating.

had over 200 friends, we also included a search box so that users could find friends. Participants were instructed to use the adjacent sliders to rate how "close" they were to the friend, where closeness was defined as "interest in knowing what is going on in their daily lives". Because many of the participants found it onerous to rate all their friends, we asked them to rate at least 100 friends.

Participants were asked to do the rating every day for a full business week. Because we allowed participants to submit their ratings at their own leisure, not all participants actively rated their newsfeed and friends. In all, we received 4989 newsfeed ratings and 4238 friend ratings.

Upon initiating the study, we downloaded whatever information was programmatically available for the participant's Facebook account through the beta version of the Facebook Open Stream API per the Terms of Service agreement. Because participants had extensive social networks, we did not download information about all the friends in their networks but only those they remembered enough to rate in the Rate Friends tab. Because participants rated friends who had not sent posts during the week of the study, and not all post senders were rated by the participants, only 3241 out of the 4989 posts (65%) had ratings for the sender, along with other downloaded information. We used this smaller dataset for model selection so that we could compare the effects of using different sets of features.

Data Analysis

In order to validate the need for newsfeed triage, we first examined descriptive statistics for the ratings. Figure 2(a) displays a histogram of all the newsfeed ratings. The mode of the ratings was 0 – hence the large spike at the left of the histogram. The average rating was 37.3 and the median 36. Note that ratings greater than 80 comprised the two smallest bins in the histogram. ¾ of the ratings were below 60. In short, the descriptive statistics demonstrate that most participants regarded the majority of the newsfeed posts they received as unimportant, though participants varied in their rating distributions, as we revisit later.

Figure 2(b) displays a histogram of the friend ratings. Similar to the newsfeed ratings, the two smallest bins consist of ratings 80 and above. The mode was 0 and ¾ of the ratings were below 60. The average friend rating was 42.4 and the median was 40. Hence, our participants considered the majority of their friends to be people for whom they had little to moderate interest in knowing about their daily affairs. This does not include the friends they could not remember.

Finally, because Facebook utilizes reverse chronological ordering of the newsfeed, we assessed to what extent timeliness, or urgency, was correlated with the ratings. In other words, we investigated whether participants considered the most recent newsfeed posts to be the most important. Figure 2(c) shows a scatter plot where the x-axis represents the time since the post was created in minutes and the y-axis represents the newsfeed rating. Note that instead of a left-leaning slope, the scatter plot shows more of a vertical column; indeed, the Pearson correlation (r=.01) was not statistically significant. In short, for our participants, reverse chronological ordering did not suffice to surface the most important newsfeed posts.

While many Facebook users would have suspected much of the data analysis reported in this section, no prior research has, to our knowledge, provided any such empirical validation.

Model Selection Experiments

We conducted model selection experiments with two goals in mind: first, we sought to identify what kinds of features were predictive of the perceived importance of newsfeed posts and friends, and second, we sought to attain the maximum classification accuracy possible on the data. Given the successful track record of linear kernel SVM classifiers in the area of text classification (Joachims, 1998), and the fact that they can be trained relatively quickly over a very large number of features (e.g., n-grams), we decided to learn linear SVM classifiers using the Sequential Minimal Optimization (SMO) algorithm (Platt, 1999). For performance reasons, we discretized the values of the continuous predictor variables into 5 bins containing roughly the same number of cases in each bin. For our primary target variable, newsfeed rating, which is also continuous, we split the ratings into 2 bins, Important and Not Important, for several reasons. First, we intended to employ the models as a type of spam filter, which is typically binary. Second, finer-grained classification would have been difficult given the size of our dataset (3241 cases). Furthermore, although we could have set the target
variable threshold to the midpoint of the sliders (i.e., 50), given the skewed histogram in Figure 2(b) we decided to use the median rating (i.e., 35) instead. This allowed us to avoid modeling complications due to unbalanced classes.

Feature Engineering

Having downloaded all programmatically available content from participants' Facebook accounts, we engineered features from three types of information: social media properties, the message text and corpus, and shared background information.

Social media properties. Social media properties included any properties related to the newsfeed post and sender, excluding the actual text. In particular, we extracted: Whether the post was a wall post or feed post; Whether the post contained photos, links, and/or videos; Total number of comments by everyone; Total number of comments by friends (including multiple comments); Total number of comments by distinct friends; Total number of likes by everyone; Total number of likes by friends; Time elapsed since the post was created; Total number of words exchanged between the user and the sender on their respective walls (including comments); Total number of posts from the user to the sender; Total number of posts from the sender to the user; Time since the first exchange; Time since the most recent exchange; Total number of photos in which both the user and sender are tagged together; Total number of photos the user has of the friend and vice versa; Total number of friends overlapping in their respective networks.

For every post, we also had lists of Facebook account IDs that had provided comments, likes, etc. We created a set of features based on knowing the importance rating of the account IDs; in particular, the maximum friend rating of people who posted comments, put likes, or are otherwise tagged in photos. The intuition here is that even if users do not find the post content to be important, it may become important if someone they know and track with great interest commented on it. For mutual friends between the user and sender, we also extracted the maximum, minimum and average friend rating, along with its variance.

Message text and corpus. For text analysis features, we looked at two sources: the post and the corpus of all posts exchanged between the user and the sender. Because Facebook maintains only the most recent posts, for the corpus we were only able to retrieve posts up to roughly 2-3 months prior to the date of retrieval. In order to capture the linguistic content of the post and corpus, we extracted both n-gram features, with n ranging from 1 to 3, and features based on the Linguistic Inquiry and Word Count dictionary (LIWC; Pennebaker et al., 2007). N-gram features had binary values depending on whether the n-gram was present or not, whereas the LIWC features consisted of counts. Binary features that were observed 3 times or less in the corpus were eliminated. The LIWC features correspond to the counts of words in a text belonging to each of 80 categories in the LIWC dictionary. Note that Gilbert and Karahalios (2009) did not utilize any n-gram features and only looked at 13 emotion- and intimacy-related LIWC categories: Positive Emotion, Negative Emotion, Family, Friends, Home, Sexual, Swears, Work, Leisure, Money, Body, Religion and Health. Given our focus on news, we decided to include all other categories, such as Insight (e.g., "think", "know"), Assent (e.g., "agree", "OK") and Fillers (e.g., "you know", "I mean").

In addition, we also extracted a number of other text-oriented features from the post and corpus: Whether there were embedded URLs; Total number of stop words; Ratio of stop words to total words; Ratio of non-punctuation, non-alphanumeric characters to total characters; Sum and average of tf.idf (term frequency × inverse document frequency) of all words in a post or corpus, where tf.idf scores were computed on all the posts in the entire dataset; Sum and average of tf.idf of all words in a post or corpus, where tf.idf scores were computed on Wikipedia; Delta of the previous two tf.idf measures; Message length in tokens and characters.

Shared background information. Finally, for every participant and rated friend, we compared shared background information in terms of the following self-disclosed categories: Affiliations, Hometowns, Religion, Political Views, Current Location, Activities, Stated Interests, Music, Television, Movies, Books, Pre-College Education, College and Post-College Education, and "About Me" Profile. For each of these categories, after removing category-specific stop words (e.g., "high school" for Pre-College Education), we extracted the number of common words as well as the percent overlap.

Experimental Setup

All of our model selection experiments were conducted in the following manner. First, the dataset was split into five folds of training and test data. The training set of the first fold was then utilized to tune the optimal classifier parameter settings, as measured on the test set of the first fold. These settings were then used to learn SVM classifiers on the training files for the remaining four folds. Evaluation was performed on the test sets of the four folds. We conducted a grid search on the first fold to determine optimal values for the SVM cost parameter c, which trades off training error against model complexity (Joachims, 2002). For feature reduction of binary features, even after imposing a count cutoff, the number of n-gram features was in the tens of thousands. So, we reduced the number of features to the top 3K features in terms of log likelihood ratios (Dunning, 1993) as determined on the training set.
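The evaluation protocol described in the Experimental Setup can be sketched as follows. This is a minimal illustration, not the authors' code: scikit-learn's LinearSVC stands in for the SMO-trained linear SVM, and synthetic data replaces the Facebook features; the grid of cost values is also an assumption.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import LinearSVC

# Synthetic stand-in for the feature matrix and binary importance labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0)

folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))

# Tune the cost parameter c on the first fold only, as in the paper.
tune_train, tune_test = folds[0]
best_c = max(
    [0.01, 0.1, 1.0, 10.0],
    key=lambda c: LinearSVC(C=c).fit(X[tune_train], y[tune_train])
                                .score(X[tune_test], y[tune_test]),
)

# Train and evaluate with the tuned setting on the remaining four folds.
scores = [LinearSVC(C=best_c).fit(X[tr], y[tr]).score(X[te], y[te])
          for tr, te in folds[1:]]
print(f"C={best_c}, mean accuracy over 4 folds: {np.mean(scores):.3f}")
```

Tuning on one fold and reporting on the other four, as done here, keeps the reported accuracy independent of the parameter search.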
Figure 3: (a) Newsfeed importance classification; (b) Friend importance classification; (c) Histogram of the maximum rating differences
between participants for the same newsfeed post. Error bars represent standard errors about the mean.
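The preprocessing described in the Model Selection Experiments section, equal-frequency binning of continuous predictors and a median-threshold split of the target ratings, can be sketched as below. The use of pandas and the randomly generated stand-in data are assumptions for illustration; the paper does not name its tooling.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
ratings = pd.Series(rng.integers(0, 101, size=3241))          # stand-in 0-100 slider ratings
predictor = pd.Series(rng.exponential(scale=30.0, size=3241))  # stand-in continuous feature

# Continuous predictors: 5 equal-frequency bins
# (roughly the same number of cases in each bin).
binned = pd.qcut(predictor, q=5, labels=False, duplicates="drop")

# Target: binary Important / Not Important, split at the median rating
# rather than the slider midpoint, to keep the classes balanced.
important = ratings > ratings.median()

print(binned.value_counts().sort_index().tolist())
print(f"positive class share: {important.mean():.2f}")
```

Splitting at the median rather than at 50 is what keeps the two classes near 50/50 despite the heavily left-skewed rating distributions shown in Figure 2.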

Results

For newsfeed importance, the first model we learned was a classifier using all of the features, including the friend rating. Because we do not have access to the friend rating in a deployed setting and can only infer it, the performance of this model provides an upper bound for classification accuracy. The baseline accuracy, based on predicting the majority class, is 51.3%. As shown in Figure 3(a), using all features achieved the highest classification accuracy at 69.7% with 37.7% relative reduction in error rate. In analyzing the top 50 selected features ranked by their learned SVM weights, we observed a number of findings. First, not surprisingly, friend rating was the top feature, though because friend rating (as a predictor variable) was discretized into 5 bins, only the top and bottom bins were selected. We computed the Pearson correlation to measure the relationship between newsfeed rating and friend rating, and found it to be statistically significant (r=0.38, p(two-tail)<.01). Second, we found that the majority (34/50) of the top features were message text and corpus features. Examples included the count of LIWC Home words, the count of LIWC Ingestion words, and having a trigram consisting of "with" and two capitalized words (i.e., a proper name). Note that because some n-gram features (e.g., "beer") are also incorporated into LIWC counts (e.g., Ingest), the SVM algorithm splits the weight between the features. Hence, the top message features may be more predictive than what is indicated by their SVM weights, and some redundant message features that are very predictive may be missing from the top 50. Finally, in the rest of the features (16/50), we found social media properties, such as the average friend rating for mutual friends, and shared background information, such as the number of common words about Music.

Having confirmed a significant correlation between newsfeed rating and friend rating, we explored how well we could predict friend importance. Given that having a very high or low friend rating was an important predictor of newsfeed rating, we learned a multi-class classifier consisting of an SVM to predict the top 10% of the friend ratings (ratings > 77), an SVM to predict the bottom 10% (ratings < 7), and an SVM to predict the middle 80%. For classification, the multi-class classifier utilizes the prediction of the SVM with the highest class probability. As shown in Figure 3(b), compared to the majority class baseline of 80.3%, the multi-class classifier achieved an accuracy of 85.0%, a 24.9% relative error reduction. Inspecting the top 50 selected features of the 3 SVM classifiers, the majority were corpus features (i.e., features based on analyzing all exchanged messages between the participant and sender). This again highlights the importance of textual features, even for friend importance. Indeed, as shown in Figure 3(b), if we remove all message text and corpus features ("All minus text"), then the accuracy dips to 81.9%, which is close to the baseline (but statistically different by McNemar's test, p<.01).

Turning back to newsfeed classification, if we remove friend rating and all of its related features, such as the maximum friend rating of people who commented on a post, the accuracy drops to 63.4% (see "All minus friend rating" in Figure 3(a)). However, just because we will not have access to the friend rating of a sender in a deployed system does not mean that we cannot infer and incorporate it as a feature. As such, we integrated the multi-class classifier for friend importance into a classifier for newsfeed importance as follows: [equation not legible in this copy], where p(friend_i) is a normalized probability iterating over how likely it is that the sender of the newsfeed post is a top 10%, middle 80%, or bottom 10% rated friend. This combined model achieved a slightly higher accuracy at 64.4% (a 26.6% error reduction), but was not statistically different from the "All minus friend rating" model.

Looking at other combinations of features, if we have no knowledge at all about the sender – i.e., if we remove all social media properties about the sender, all corpus-related message features and all shared background features – then the accuracy drops dramatically to 57.5% (see "No friend info"). If instead we remove just the message text and corpus features ("All minus text"), then the accuracy is comparable to "All minus friend rating" at 63.2%. Finally, we investigated the contribution of the feature types just by themselves. As shown on the rightmost side of Figure 3(a), the "text alone" model was not statistically worse than any
of our other previous models, but was significantly better than both "Social media only" and "Background only" (p < .01). This again highlights the importance of message text and corpus features.

Discussion and Future Research

Overall, the best classification performance of any model is the combined model at 64% accuracy. Even if we had the friend ratings for all post senders, the upper bound is near 70%, which may be sufficient for re-ranking newsfeed posts but is probably not good enough for triage. In fact, we tried to evaluate our models with respect to ranking (precision/recall), but discovered that most of the participants did not rate their newsfeeds in distinct sessions. As such, we could not accurately identify what newsfeed posts were available to participants at the time of rating, which we would need to know for ranking.

In our experiments using different combinations of features, and in perusing the top selected features for the best performing models, textual features consistently stood out. In fact, a classifier using just text features was statistically no different from one which added other types of features. Even for predicting friend rating, corpus features contributed significantly to accuracy. This suggests that for prediction focused on news, having access to the message text and corpus features is vital.

In terms of limitations, our results are limited by the relatively small size of the data. Because of privacy restrictions, we were not able to collect data at a larger scale. However, we plan to conduct another user study, one where participants will rate their newsfeed at distinct times of the day so that we can evaluate ranking performance.

As for future research, one very promising direction is personalization. In our data analysis, we noticed 44 cases in which more than 1 participant had rated the same newsfeed post. We computed the difference between the highest rating and the lowest rating for identical posts (40 cases had only 2 ratings). Figure 3(c) displays a histogram of the differences in newsfeed ratings. If users generally find the same kinds of newsfeed posts to be important or unimportant, we would expect to see a heavily skewed distribution leaning towards the left. Here, we see that 36/44 (82%) of the cases differ in rating by more than 10 points, with the average and median difference being roughly 41 points. In short, it was not unusual for two participants to give very different ratings to the same post, suggesting that importance ratings can be quite subjective, although it is hard to generalize with only 44 cases. As such, we decided to explore whether we could improve classification accuracy through personalization. In our data, only one participant had about 400 or more newsfeed ratings. For this user, we learned the "All minus friend rating" model. Compared to a majority class baseline of 53.6%, the personalized classifier achieved 69.6% accuracy, a 34.6% error reduction. With such an auspicious result, as we continue to collect more data, we plan to conduct more personalization experiments.

In this paper, we provided empirical validation of the need for triaging newsfeeds and moving beyond the standard reverse chronological ordering of posts. Having engineered a large set of programmatically available features for predicting newsfeed and friend importance, we conducted classification experiments using different combinations of features and identified predictive features. To our knowledge, this research constitutes the first of what is likely to be many papers on triaging newsfeeds.

Acknowledgements

We thank Roger Booth for assistance in automating LIWC feature generation.

References

Balter, O., & Sidner, C. (2002). Bifrost Inbox Organizer: Giving users control over the inbox. In Proc. NordiCHI 2002, ACM Press.

Burke, M., Marlow, C., & Lento, T. (2009). Feed me: Motivating newcomer contribution in social network sites. In Proc. CHI '09, ACM Press.

Gilbert, E., & Karahalios, K. (2009). Predicting Tie Strength with Social Media. In Proc. CHI '09, ACM Press.

Granovetter, M. (1973). The Strength of Weak Ties. American Journal of Sociology, 78(6), 1360-1380.

Horvitz, E., Jacobs, A., & Hovel, D. (1999). Attention-Sensitive Alerting. In Proc. UAI '99, 305-313.

Joachims, T. (1998). Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. ECML 1998, 137-142.

Joachims, T. (2002). Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms. Kluwer/Springer.

Lockerd, A. (2002). Understanding Implicit Social Context in Electronic Communication. MIT Master's Thesis.

Morgan Stanley (2009). Mobile Internet Report. Published Dec. 15, 2009.

Neustadter, C., Brush, A., Smith, M., & Fisher, D. (2005). The Social Network and Relationship Finder: Social Sorting for Email Triage. In Proc. CEAS 2005.

Pennebaker, J. W., Chung, C. K., Ireland, M., Gonzales, A., & Booth, R. J. (2007). The LIWC2007 Application. http://www.liwc.net/liwcdescription.php.

Platt, J. (1999). Fast training of SVMs using sequential minimal optimization. In B. Schoelkopf, C. Burges, & A. Smola (Eds.), Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA, 185-208.

Tong, S. T., van der Heide, B., Langwell, L., & Walther, J. (2008). Too Much of a Good Thing? The Relationship Between Number of Friends and Interpersonal Impressions on Facebook. Journal of Computer-Mediated Communication, 13(3), 531-549.

Venolia, G., Dabbish, L., Cadiz, J., & Gupta, A. (2001). Supporting Email Workflow. Microsoft Technical Report TR-2001-88.
