Project Notes - II (Capstone Project) - Facebook Comments Volume Prediction - YS
Submitted to
Submitted by:
Yogesh Sharma
I would like to convey my sincere gratitude to my mentor, Mr. Anirban Dey, for his able guidance and mentorship. I expect that his deep understanding of the use case and business intellect will help me in charting the right approach and deploying the appropriate models for the analytics problem at hand.
I would also like to thank the Great Lakes management for giving me the opportunity to work on a real case scenario, which will surely help me apply the learning practically.
NOTE
This is a ‘work in progress’ document, submitted in partial fulfilment of the requirements of ‘Project Notes-II’ only. Furthermore, this document builds on the analysis done in Project Notes-I and shall be enriched further in subsequent phases and as per mentor/evaluator comments.
Table of Contents
1. Introduction
2. Project Background
3. Project Objective
6. Exploratory Data Analysis
7. Model Building
1. Introduction:
The leading trend towards social networking has drawn high public attention over the past two decades. For both small businesses and large corporations, social media plays a key role in brand building and customer communication. Facebook is one of the social networking sites that firms can use to make themselves real for customers. It is estimated that Facebook's advertising revenues in 2018 stand at 14.89 billion USD in the United States against 18.95 billion USD outside it. Other categories like news, communication, commenting, marketing, banking, entertainment, etc. are also generating huge volumes of social media content every minute.
As per a Forbes survey in 2018, there are 2 billion active users on Facebook, making it the largest social media platform.
2. Project Background:
In this project, we use the most active social networking service, Facebook, and in particular ‘Facebook Pages’, for analysis. Our research is oriented towards estimating the volume of comments that a post is expected to receive in the next few hours. Before continuing to the problem of comment volume prediction, some domain-specific concepts are discussed below:
- Public Group/Facebook Page: a public profile created specifically for businesses, brands, celebrities, etc.
- Post/Feed: the individual stories published on a page by the page's administrators.
- Comment: an important activity on social sites that gives a post the potential to become a discussion forum; the extent to which readers are inspired to leave comments on a document/post is one measure of the popularity of, or interest in, that post.
3. Project Objective:
Based on the ‘Facebook comment volume prediction’ training dataset provided, the goal is to predict how many comments a user-generated post is expected to receive in a given set of hours. We need to model the user comment pattern over the set of variables provided and arrive at the right number of comments for each post with the minimum error possible.
Here, user comment volume prediction is made based on page category, i.e., posts on a particular category of page will get a certain number of comments. In order to predict the comment volume for each page and to find which page category gets the highest number of comments, I shall use decision tree and regression techniques to make the prediction effective. I shall also model the user comment pattern with respect to page likes and popularity, page category and time.
As part of Project Notes-II (covering the Notes-I analysis), we shall focus on the following:
Since the number of comments in the Facebook dataset is continuous data, we shall perform regression analysis to determine the relationship between the target variable and its predictors. We shall also look at the distribution and spread of the variables using histograms and box plots.
• Use of techniques like Decision Tree, LASSO, K-Nearest Neighbor (KNN), Random Forest and
Linear Regression
• The error will be further quantified using the RMSE (Root Mean Square Error) metric; a minimal sketch of this metric follows the list
• Then, concluding that the K-Nearest Neighbours algorithm performs well and gives an effective prediction
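A minimal sketch of the RMSE metric in R (the vectors 'actual' and 'predicted' are hypothetical stand-ins for the observed and predicted comment counts):

# Root Mean Square Error: typical magnitude of the prediction error
rmse <- function(actual, predicted) {
  sqrt(mean((actual - predicted)^2))
}
# Example: rmse(c(3, 10, 0), c(5, 8, 1)) is sqrt(mean(c(4, 4, 1))), about 1.73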
In our experimental setup, the dataset is split into training and testing sets before modelling, then changed into vector form so it can be pushed to the prediction models; the results are generated with respect to the minimal error obtained. (A sketch of such a split appears in Section 6.5.)
The structure of the process carried out in each phase is shown in the diagram below:
Figure - 1
6. Exploratory Data Analysis:
The dataset used is a ‘Facebook comment volume’ record captured over a period, containing 32,759 rows and 43 variables.
Of the 43 variables, one is the target value for each post; the remaining features are categorized below based on their relation to the target variable.
1) Page Features: these describe the popularity/likes of a page, its check-ins and its category. Page Likes: this feature describes user interest specific to the page category, such as statuses, wall posts, photos, profile pictures, shares or pages.
2) Essential Features: the pattern of comments from different users on the post at various time intervals with respect to a randomly selected base time/date. Variables CC1 to CC5 cover this.
Figure - 2
3) Weekday Features: features covering the complete week, used to identify the weekday on which the post was published and the weekday of the selected base time/date.
4) Other Basic Features: the remaining features that help to predict the comment volume for each page category; they document the source of the page and the date/time for the next H hours.
5) Without specifying parameters: the prediction behaves as expected when performed without specifying any parameters; the regression result obtained this way is treated as the best among the results obtained with specified parameters.
In order to study the data better, we performed a preliminary variable reduction at the very beginning. At this stage, we reduced the variables on the following criteria:
▪ Redundant Variables
▪ Business relevance
▪ Correlated Variables
▪ Target Variable
Variable name          | Type of variable
Page Popularity/likes  | Business relevance
6.3 Data Validation and Analysis (outliers, skip patterns, missing inputs)
setwd("C:/Users/Yogesh Sharma/Desktop/Capstone")   # set the working directory
getwd()                                            # confirm the working directory
# Assumes the dataset has been read into 'Comments', e.g. Comments <- read.csv("...")
names(Comments)     # variable names
str(Comments)       # structure and type of each variable
summary(Comments)   # summary statistics for each variable
OBSERVATIONS:
- Base.DateTime.weekday
- CC5, derived features
- The maximum value for some key variables is high compared to the 3rd quartile (a possibility of outliers?): CC5, Post.Share.Count
attach(Comments)          # make the columns accessible by name
hist(Target.Variable)     # distribution of the number of comments
boxplot(Target.Variable)  # spread and outliers of the number of comments
OBSERVATIONS:
Most of the comments are at the lower end; one outlier lies very far out.
library(dplyr)
# Remove extreme outliers: keep posts with fewer than 1,100 comments
Comments <- filter(Comments, Target.Variable < 1100)
hist(Comments$Target.Variable)   # distribution after outlier removal
OBSERVATIONS:
The number of observations reduced from 32,760 to 32,757; therefore, there were 3 outliers.
## Let us now examine the integer independent variables using the original dataset
names(Comments)
# Histograms of the integer predictors
hist(Page.likes)
hist(Page.talking.about)
hist(Page.Category)
hist(CC1)   # CC1-CC5: comment counts over the various time windows
hist(CC2)
hist(CC3)
hist(CC4)
hist(CC5)
hist(H.local)
boxplot(Page.Category)   # spread of the page category codes
table(Post.published.weekday)   # frequency of posts by publishing weekday
plot(Post.published.weekday)
table(Base.DateTime.weekday)    # frequency of base date/times by weekday
plot(Base.DateTime.weekday)
6.4 Visual representation
6.5 Data Modelling and Experimental Settings
For further analysis, the Facebook Page data with the user comment pattern for each page is taken for training and testing. The sorted data is cleaned, and the cleaned corpus is divided into two subsets using a temporal split: (1) training data (80%, 32,757 observations) and (2) testing data (20%, 8,190 observations).
A. Training Dataset - the training dataset goes through variant selection and calculation followed by vectorization; together these steps are termed pre-processing.
B. Testing Dataset - the testing data is likewise vectorized, i.e., the 8,190 observations are formed into vectors of 100 each, as modelled. A sketch of the split follows.
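A minimal sketch of such a split in R (a random 80/20 split of the cleaned 'Comments' data frame is shown for illustration; the split described above is temporal):

set.seed(42)   # for reproducibility
# Hold out 20% of the cleaned data for testing
train_idx  <- sample(seq_len(nrow(Comments)), size = floor(0.8 * nrow(Comments)))
train_data <- Comments[train_idx, ]
test_data  <- Comments[-train_idx, ]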
• Looking at the structure of the training data, we can see that the data is now in ‘integer’ or ‘float’ format, which makes our mathematical calculations easier. Next, we go through the dataset to understand how the data is distributed. To understand on which day of the week each post was published, we plot a graph.
From the graph, we can see that the frequency of posts increases through the week, reaches its maximum on Wednesday, and then declines gradually.
Now we must understand how the comments on these posts arrive relative to the base time.
• Now, we must understand the characteristics of the length of the post.
• With the count and mean shown, we can clearly understand how the data is distributed. Similarly, we must understand the characteristics of ‘Post_share_count’.
• Again, the count and mean show how the data is distributed. Similarly, we must understand the characteristics of CC1, CC2 and CC3.
With the count and mean shown, we can clearly understand how the data is distributed (a sketch of how these summaries are produced follows).
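A minimal sketch of these descriptive summaries in R (the column names follow those used earlier in this document):

summary(Comments$Post.Share.Count)                  # min, quartiles, mean, max of share counts
sapply(Comments[, c("CC1", "CC2", "CC3")], summary) # summaries of the three comment-count windows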
There are other columns in our dataset (the feature columns) whose content we do not fully understand: the data dictionary mentions only generic terms, saying that they represent the mean, minimum value, maximum value, average, median, standard deviation, etc. So it is better to understand the correlation between these columns by drawing heat maps, as sketched below.
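A minimal sketch of drawing such a correlation heat map in base R (assuming the feature columns are numeric):

# Correlation matrix of the numeric feature columns
num_cols <- sapply(Comments, is.numeric)
corr_mat <- cor(Comments[, num_cols], use = "pairwise.complete.obs")
heatmap(corr_mat, symm = TRUE, main = "Correlation heat map of the feature columns")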
From the heat map, we can see that ‘c21’ has the lowest values compared to all other columns. ‘c24’ also has very low values, though slightly higher than ‘c21’. ‘c11’ has the highest values of any column, and ‘c6’ also contains high values, though not as high as ‘c11’. The data in columns ‘CC1’, ‘CC2’, ‘CC3’ and ‘CC4’ are evenly distributed, with no column consistently high or low compared to the others.
We shall detail the analysis covering the following techniques/models in the Project Notes-III document (an illustrative R sketch follows the list):
1) Linear Regression: a common regression technique that helps in forecasting results. In this model the dependent variable is continuous, while the independent variables may be discrete or continuous depending on the values given. The model is fitted from the line equation, and the fit is assessed with the mean squared error (the mean of the squared residuals, i.e., the differences between the observed and fitted values).
2) K-Nearest Neighbours: KNN is another effective algorithm; it is used for analysis without specifying parametric assumptions and predicts based on the similarity of the data.
3) Decision Tree: a tree-structured model. It selects the splitting nodes by itself from the given input and forms a tree. For regression it differs from classification in that it averages the target value within each partition, and the most influential variable forms the root node.
4) Random Forest: well suited to large datasets; it randomly picks the variables that fit. If the response is a factor, randomForest performs classification; if the response is continuous, it performs regression. (Unsupervised data is generally called unlabelled data.) It randomly picks subsets of predictors to build a group of decision trees that together form the model.
5) Least Absolute Shrinkage and Selection Operator (LASSO): creates a regression model penalized with the L1-norm, the sum of the absolute values of the coefficients. This has the effect of shrinking the coefficients.
6) Statistical testing for choosing the best predictors: we shall run hypothesis tests, for example Chi-square, ANOVA, the F-test and Pearson correlation. For a large dataset with different groups of parameters and both continuous and categorical values, we shall go for the ANOVA test.
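As an illustrative sketch of how these models could be fitted in R (assuming the train_data/test_data split and the rmse() function sketched earlier, and that the rpart, randomForest, glmnet and FNN packages are installed; the formulas and tuning values such as k = 5 and ntree = 100 are placeholders, not final choices):

library(rpart)         # decision trees
library(randomForest)  # random forests
library(glmnet)        # LASSO
library(FNN)           # k-nearest-neighbour regression

# 1) Linear regression on all predictors
lm_fit <- lm(Target.Variable ~ ., data = train_data)

# 2) KNN regression on the numeric design matrices; k = 5 is illustrative
x_train <- model.matrix(Target.Variable ~ . - 1, data = train_data)
x_test  <- model.matrix(Target.Variable ~ . - 1, data = test_data)
knn_pred <- knn.reg(train = x_train, test = x_test,
                    y = train_data$Target.Variable, k = 5)$pred

# 3) Regression tree (method = "anova" averages the target within each leaf)
tree_fit <- rpart(Target.Variable ~ ., data = train_data, method = "anova")

# 4) Random forest: regression, since the response is continuous
rf_fit <- randomForest(Target.Variable ~ ., data = train_data, ntree = 100)

# 5) LASSO: cross-validated L1-penalized regression (alpha = 1)
lasso_fit <- cv.glmnet(x_train, train_data$Target.Variable, alpha = 1)

# Compare the models on the test set via RMSE
rmse(test_data$Target.Variable, predict(lm_fit, test_data))
rmse(test_data$Target.Variable, knn_pred)
rmse(test_data$Target.Variable, predict(tree_fit, test_data))
rmse(test_data$Target.Variable, predict(rf_fit, test_data))
rmse(test_data$Target.Variable,
     as.vector(predict(lasso_fit, newx = x_test, s = "lambda.min")))

# 6) Example hypothesis test for predictor selection: one-way ANOVA
summary(aov(Target.Variable ~ Post.published.weekday, data = train_data))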
Appendix: