Lab Manual Big Data Analytics
CERTIFICATE
BRANCH : CSE-DS
REGULATION : R18
Course Outcomes
1. Use Excel as an analytical and visualization tool.
2. Ability to program using Hadoop and MapReduce.
3. Ability to perform data analytics using ML in R.
4. Use Cassandra to perform social media analytics.
List of Experiments
1. Implement a simple map-reduce job that builds an inverted index on the set of input
documents (Hadoop)
2. Process big data in HBase
3. Store and retrieve data in Pig
4. Perform social media analysis using Cassandra
5. Buyer event analytics using Cassandra on suitable product sales data.
6. Using Power Pivot (Excel), perform the following on any dataset:
a) Big Data Analytics
b) Big Data Charting
7. Use R-Project to carry out statistical analysis of big data
8. Use R-Project for data visualization of social media data
TEXT BOOKS:
1. Big Data Analytics, Seema Acharya, Subhashini Chellappan, Wiley, 2015.
2. Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today's Business, Michael Minelli, Michele Chambers, Ambiga Dhiraj, 1st Edition, Wiley CIO Series, 2013.
3. Hadoop: The Definitive Guide, Tom White, 3rd Edition, O'Reilly Media, 2012.
4. Big Data Analytics: Disruptive Technologies for Changing the Game, Arvind Sathi, 1st Edition, IBM Corporation, 2012.
REFERENCES:
1. Big Data and Business Analytics, Jay Liebowitz, Auerbach Publications, CRC Press, 2013.
2. Using R to Unlock the Value of Big Data: Big Data Analytics with Oracle R Enterprise and Oracle R Connector for Hadoop, Tom Plunkett, Mark Hornick, McGraw-Hill/Osborne Media, Oracle Press, 2013.
3. Professional Hadoop Solutions, Boris Lublinsky, Kevin T. Smith, Alexey Yakubovich, Wiley, ISBN: 9788126551071, 2015.
4. Understanding Big Data, Chris Eaton, Dirk deRoos et al., McGraw-Hill, 2012.
5. Intelligent Data Analysis, Michael Berthold, David J. Hand, Springer, 2007.
6. Taming the Big Data Tidal Wave: Finding Opportunities in Huge Data Streams with Advanced Analytics, Bill Franks, 1st Edition, Wiley and SAS Business Series, 2012.
INDEX
Experiment 1 : Implement a simple map-reduce job that builds an inverted index on the
set of input documents (Hadoop)
exit
3. Install Hadoop by navigating to the following link and downloading the tar.gz
file for Hadoop version 3.3.0 (or a later version if you wish). (478 MB)
https://hadoop.apache.org/release/3.3.0.html
13. Now, run the following commands on the terminal to create a directory for
hadoop space, name node and data node.
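The commands themselves are not reproduced in this copy of the manual. On a typical single-node setup they look like the following (the directory names under the home directory are an assumption):
mkdir -p ~/hadoopspace/hdfs/namenode
mkdir -p ~/hadoopspace/hdfs/datanode
These paths must match the dfs.namenode.name.dir and dfs.datanode.data.dir properties configured in hdfs-site.xml.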
15. Before starting the Hadoop Distributed File System (HDFS), we need to make sure that the rcmd type is "ssh", not "rsh". Check it by typing the following command:
pdsh -q -w localhost
16. If the rcmd type is “rsh” as in the above figure, type the following commands:
export PDSH_RCMD_TYPE=ssh
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
Run the command from Step 15 again to check that the rcmd type is now ssh. (If the type was already ssh, you can skip this step.)
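Steps 17 and 18 are not reproduced in this copy of the manual. On a typical single-node installation they format the NameNode and start the Hadoop daemons, for example:
hdfs namenode -format
start-dfs.sh
start-yarn.sh
Formatting is done only once, before the first start; the exact commands may differ depending on your installation.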
19. Type the following command. You should see an output similar to the one in
the following figure.
jps
20. Go to localhost:9870 in the browser. You should see the Hadoop NameNode web interface.
2. Create a directory on the Desktop named Lab and inside it create two folders: one called "Input" and the other called "tutorial_classes".
[You can do this step using the GUI or through terminal commands]
cd Desktop
mkdir Lab
mkdir Lab/Input
mkdir Lab/tutorial_classes
3. Add the file attached with this document, "WordCount.java", to the Lab directory.
4. Add the file attached with this document, "input.txt", to the Lab/Input directory.
5. Type the following command to export the hadoop classpath into bash.
export HADOOP_CLASSPATH=$(hadoop classpath)
Make sure it is now exported.
echo $HADOOP_CLASSPATH
6. It is time to create these directories on HDFS rather than locally. Type the
following commands.
hadoop fs -mkdir /WordCountTutorial
hadoop fs -mkdir /WordCountTutorial/Input
hadoop fs -put Lab/Input/input.txt /WordCountTutorial/Input
7. Go to localhost:9870 in the browser, open "Utilities → Browse File System", and you should see the directories and files we placed in the file system.
8. Then, go back to the local machine, where we will compile the WordCount.java file. Assuming we are currently in the Desktop directory:
cd Lab
javac -classpath $HADOOP_CLASSPATH -d tutorial_classes WordCount.java
Put the output files in one jar file (note the dot at the end of the command):
jar -cvf WordCount.jar -C tutorial_classes .
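The manual does not show the step that actually submits the job. Assuming the main class in WordCount.java is named WordCount (with no package) and that the output should go to /WordCountTutorial/Output, a typical run and result check look like this:
hadoop jar WordCount.jar WordCount /WordCountTutorial/Input /WordCountTutorial/Output
hadoop fs -cat /WordCountTutorial/Output/part-r-00000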
Program:
First, create the IndexMapper.java class.
package mr03.inverted_index;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;
import java.util.StringTokenizer;

public class IndexMapper extends Mapper<LongWritable, Text, Text, Text> {

    private static final Text ONE_STRING = new Text("1");
    private final Text wordAtFileNameKey = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // The name of the file this split belongs to becomes part of the key.
        FileSplit split = (FileSplit) context.getInputSplit();
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            String fileName = split.getPath().getName().split("\\.")[0];
            // Remove special characters using
            // tokenizer.nextToken().replaceAll("[^a-zA-Z]", "").toLowerCase()
            // and check for empty words if required.
            wordAtFileNameKey.set(tokenizer.nextToken() + "@" + fileName);
            // Emit <word@fileName, "1"> for the combiner to aggregate.
            context.write(wordAtFileNameKey, ONE_STRING);
        }
    }
}
IndexReducer.java
package mr03.inverted_index;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
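The manual's IndexReducer listing stops after the imports. A minimal sketch of the complete class is given below, assuming the reducer simply concatenates the fileName:count pairs emitted by IndexCombiner into one posting list per word; the " | " separator is an arbitrary choice, not taken from the original listing.
package mr03.inverted_index;

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IndexReducer extends Reducer<Text, Text, Text, Text> {

    private final Text postingList = new Text();

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Merge every "fileName:count" value for this word into one posting list.
        StringBuilder builder = new StringBuilder();
        for (Text value : values) {
            if (builder.length() > 0) {
                builder.append(" | ");
            }
            builder.append(value.toString());
        }
        postingList.set(builder.toString());
        // Emit <word, file1:count1 | file2:count2 | ...>.
        context.write(key, postingList);
    }
}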
IndexDriver.java
package mr03.inverted_index;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IndexDriver {

    public static void main(String[] args) throws Exception {
        // Input and output paths are supplied on the command line.
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "inverted index");
        job.setJarByClass(IndexDriver.class);

        // Delete the output directory if it already exists.
        FileSystem fs = FileSystem.get(conf);
        Path outputPath = new Path(args[1]);
        if (fs.exists(outputPath)) {
            fs.delete(outputPath, true);
        }

        job.setMapperClass(IndexMapper.class);
        job.setCombinerClass(IndexCombiner.class);
        job.setReducerClass(IndexReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, outputPath);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
IndexCombiner.java
package mr03.inverted_index;

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IndexCombiner extends Reducer<Text, Text, Text, Text> {

    private final Text fileAtWordFreqValue = new Text();

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Sum the "1" counts emitted by the mapper for this word@fileName key.
        int sum = 0;
        for (Text value : values) {
            sum += Integer.parseInt(value.toString());
        }
        // Split "word@fileName" back into the word and the file name.
        int splitIndex = key.toString().lastIndexOf('@');
        fileAtWordFreqValue.set(key.toString().substring(splitIndex + 1) + ":" + sum);
        key.set(key.toString().substring(0, splitIndex));
        // Emit <word, fileName:count> for the reducer to merge.
        context.write(key, fileAtWordFreqValue);
    }
}
Output:
Experiment 2 : Process big data in HBase
HBase commands
Step 5: hbase(main):001:0>version
hbase(main):011:0> create 'newtbl','knowledge'
hbase(main):011:0> describe 'newtbl'
hbase(main):011:0> status
1 servers, 0 dead, 15.0000 average load
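The scan output reproduced below presupposes that some rows were first inserted with put. A typical sequence (row keys and values taken from the output that follows) would be:
hbase(main):012:0> put 'newtbl','r1','knowledge:sports','cricket'
hbase(main):013:0> put 'newtbl','r1','knowledge:science','physics'
hbase(main):014:0> put 'newtbl','r2','knowledge:economics','macroeconomics'
hbase(main):015:0> put 'newtbl','r2','knowledge:music','songs'
hbase(main):016:0> scan 'newtbl'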
ROW    COLUMN+CELL
r1     column=knowledge:science, timestamp=1678807827189, value=physics
r1     column=knowledge:sports, timestamp=1678807791753, value=cricket
r2     column=knowledge:economics, timestamp=1678807854590, value=macroeconomics
r2     column=knowledge:music, timestamp=1678807877340, value=songs
2 row(s) in 0.0250 seconds
Retrieving data
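The cell listing below is the kind of output produced by reading a single row with get; assuming row r1 of the newtbl table, the command is:
hbase(main):017:0> get 'newtbl','r1'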
Output:
COLUMN CELL
knowledge:science timestamp=1678807827189, value=physics
knowledge:sports timestamp=1678807791753, value=cricket
2 row(s) in 0.0150 seconds.
Verification
After disabling the table, you can still verify its existence through the list and exists commands, but you cannot scan it. Attempting to scan a disabled table gives an error.
is_disabled
This command is used to find whether a table is disabled. Its syntax is as follows:
hbase> is_disabled 'table name'
disable_all
This command is used to disable all the tables matching the given regex. The syntax for
disable_all command is given below.
hbase> disable_all 'r.*'
Suppose there are 5 tables in HBase, namely raja, rajani, rajendra, rajesh, and raju. The
following code will disable all the tables starting with raj.
Verification
After enabling the table, scan it. If you can see the schema, your table is successfully
enabled.
is_enabled
This command is used to find whether a table is enabled. Its syntax is as follows:
hbase> is_enabled 'table name'
The following code verifies whether the table named emp is enabled. If it is enabled, it
will return true and if not, it will return false.
describe
This command returns the description of the table. Its syntax is as follows:
hbase(main):006:0> describe 'newtbl'
DESCRIPTION
ENABLED
Experiment 3 : Store and retrieve data in Pig
Aim: To perform storing and retrieval of big data using Apache Pig
Operator    Description
LOAD        To load data from the file system (local/HDFS) into a relation.
STORE       To save a relation to the file system (local/HDFS).
Filtering
Sorting
Diagnostic Operators
For the given Student dataset and Employee dataset, perform Relational operations
like Loading, Storing, Diagnostic Operations (Dump, Describe, Illustrate & Explain) in
Hadoop Pig framework using Cloudera
Student ID First Name Age City CGPA
001 Jagruthi 21 Hyderabad 9.1
002 Praneeth 22 Chennai 8.6
003 Sujith 22 Mumbai 7.8
004 Sreeja 21 Bengaluru 9.2
005 Mahesh 24 Hyderabad 8.8
006 Rohit 22 Chennai 7.8
007 Sindhu 23 Mumbai 8.3
Employee ID  Name  Age  City
001 Angelina 22 LosAngeles
002 Jackie 23 Beijing
003 Deepika 22 Mumbai
004 Pawan 24 Hyderabad
005 Rajani 21 Chennai
006 Amitabh 22 Mumbai
Step-1: Create a directory in HDFS with the name pigdir in the required path using mkdir:
$ hdfs dfs -mkdir /bdalab/pigdir
Step-2: The input file of Pig contains each tuple/record on an individual line, with the fields separated by a comma (",") delimiter.
In the local file system, create an input file student_data.txt containing the data shown below.
001,Jagruthi,21,Hyderabad,9.1
002,Praneeth,22,Chennai,8.6
003,Sujith,22,Mumbai,7.8
004,Sreeja,21,Bengaluru,9.2
005,Mahesh,24,Hyderabad,8.8
006,Rohit,22,Chennai,7.8
007,Sindhu,23,Mumbai,8.3
In the local file system, create an input file employee_data.txt containing the data shown below.
001,Angelina,22,LosAngeles
002,Jackie,23,Beijing
003,Deepika,22,Mumbai
004,Pawan,24,Hyderabad
005,Rajani,21,Chennai
006,Amitabh,22,Mumbai
Step-3: Move the files from the local file system to HDFS using the put (or copyFromLocal) command and verify using the -cat command.
To get the full path of the file student_data.txt, type the command below:
readlink -f student_data.txt
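The actual commands for Step-3, and the missing Step-4 that loads the files into Pig relations named student and employee, are not shown in this copy of the manual. Assuming the paths from Step-1, they typically look like this (the schema field names are illustrative):
$ hdfs dfs -put student_data.txt /bdalab/pigdir/
$ hdfs dfs -put employee_data.txt /bdalab/pigdir/
$ hdfs dfs -cat /bdalab/pigdir/student_data.txt
grunt> student = LOAD '/bdalab/pigdir/student_data.txt' USING PigStorage(',') AS (id:chararray, firstname:chararray, age:int, city:chararray, cgpa:double);
grunt> employee = LOAD '/bdalab/pigdir/employee_data.txt' USING PigStorage(',') AS (id:chararray, name:chararray, age:int, city:chararray);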
Step-5: Apply Relational Operator – STORE to store the relations in the HDFS directory "/pig_output/" as shown below. Each relation is stored into its own sub-directory, since STORE fails if the target directory already exists.
grunt> STORE student INTO '/bdalab/pigdir/pig_output/student' USING PigStorage(',');
grunt> STORE employee INTO '/bdalab/pigdir/pig_output/employee' USING PigStorage(',');
Step-7: Apply Relational Operator – Diagnostic Operator – DUMP to print the contents of the relation, as shown below.
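The DUMP commands are not printed in this copy of the manual; following the pattern of the other diagnostic operators, they are:
grunt> DUMP student;
grunt> DUMP employee;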
Step-9: Apply Relational Operator – Diagnostic Operator – EXPLAIN to display the logical, physical, and MapReduce execution plans of a relation using the Explain operator.
grunt> EXPLAIN student;
grunt> EXPLAIN employee;
Step-10: Apply Relational Operator – Diagnostic Operator – ILLUSTRATE to show the step-by-step execution of a sequence of statements.
grunt> ILLUSTRATE student;
grunt> ILLUSTRATE employee;
Experiment 4 : Perform social media analysis using Cassandra
Procedure:
Capture
This command captures the output of a command and adds it to a file. For
example, take a look at the following code that captures the output to a file named
Outputfile.
cqlsh> CAPTURE '/home/hadoop/CassandraProgs/Outputfile'
When we type any command in the terminal, the output will be captured by the file
given. Given below is the command used and the snapshot of the output file.
cqlsh:tutorialspoint> select * from emp;
Consistency
This command shows the current consistency level, or sets a new consistency level.
cqlsh:tutorialspoint> CONSISTENCY
Current consistency level is 1.
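To set a new consistency level instead of just displaying it, pass the desired level as an argument, for example:
cqlsh:tutorialspoint> CONSISTENCY QUORUM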
Copy
This command copies data between Cassandra and a file. Given below is an example that copies the table named emp to the file myfile.
cqlsh:tutorialspoint> COPY emp (emp_id, emp_city, emp_name, emp_phone, emp_sal) TO 'myfile';
4 rows exported in 0.034 seconds.
If you open and verify the file given, you can find the copied data as shown below.
Describe
This command describes the current cluster of Cassandra and its objects. The
variants of this command are explained below.
Describe cluster − This command provides information about the cluster.
cqlsh:tutorialspoint> describe cluster;
Range ownership:
-658380912249644557 [127.0.0.1]
-2833890865268921414 [127.0.0.1]
-6792159006375935836 [127.0.0.1]
Describe Keyspaces − This command lists all the keyspaces in a cluster. Given below
is the usage of this command.
cqlsh:tutorialspoint> describe keyspaces;
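The listing of keyspaces itself is not reproduced here. The column definitions and table properties shown next are the tail end of describing a single table; the corresponding command, using the emp table from the earlier examples, is:
cqlsh:tutorialspoint> describe table emp;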
emp_city text,
emp_name text,
emp_phone varint,
emp_sal varint
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class':
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
'max_threshold': '32'}
Expand
This command is used to expand the output. Before using this command, you have to
turn the expand command on. Given below is the usage of this command.
cqlsh:tutorialspoint> expand on;
cqlsh:tutorialspoint> select * from emp;
@ Row 1
-----------+------------
emp_id | 1
emp_city | Hyderabad
emp_name | ram
emp_phone | 9848022338
emp_sal | 50000
@ Row 2
-----------+------------
emp_id | 2
emp_city | Delhi
emp_name | robin
emp_phone | 9848022339
emp_sal | 50000
@ Row 3
-----------+------------
emp_id | 4
emp_city | Pune
emp_name | rajeev
emp_phone | 9848022331
emp_sal | 30000
@ Row 4
-----------+------------
emp_id | 3
emp_city | Chennai
emp_name | rahman
emp_phone | 9848022330
emp_sal | 50000
(4 rows)
Note − You can turn the expand option off using the following command.
cqlsh:tutorialspoint> expand off;
Disabled Expanded output.
Exit
This command is used to terminate the cql shell.
Show
This command displays the details of current cqlsh session such as Cassandra version,
host, or data type assumptions. Given below is the usage of this command.
cqlsh:tutorialspoint> show host;
Connected to Test Cluster at 127.0.0.1:9042.
Source
Using this command, you can execute the CQL statements saved in a file. Create the file first, then execute it as shown below.
cqlsh:tutorialspoint> source '/home/hadoop/CassandraProgs/inputfile';
Experiment 5 : Buyer event analytics using Cassandra on suitable product sales data
Aim: To perform buyer event analytics using Cassandra on sales data
Procedure:
db.employee.insert(
  {
    empid: '1',
    firstname: 'FN',
    lastname: 'LN',
    gender: 'M'
  }
)
(1 rows)
(0 rows)
'Insert query');
(2 rows)
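The statements that produced the row counts above are not included in this copy of the manual. For the learn_cassandra.todo_by_user_email table used in the next command, a typical insert-and-select sequence (the column names are inferred from that command) looks like this:
cqlsh> INSERT INTO learn_cassandra.todo_by_user_email (user_email, creation_date, name) VALUES ('[email protected]', toTimestamp(now()), 'Insert query');
cqlsh> SELECT * FROM learn_cassandra.todo_by_user_email WHERE user_email = '[email protected]';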
Let’s only update if an entry already exists, by using IF EXISTS:
cqlsh> UPDATE learn_cassandra.todo_by_user_email SET name = 'Update query with
LWT' WHERE user_email = '[email protected]' AND creation_date = '2021-03-14
16:07:19.622+0000' IF EXISTS;
[applied]
True
[applied]
True
Experiment 6 : Using Power Pivot (Excel), perform Big Data Analytics and Big Data Charting
Aim: To perform big data analytics using Power Pivot in Excel
Procedure:
Open Microsoft Excel, go to the Data menu, and click Get Data.
Next, click Create Connection and select the check box Add to the Data Model.
Next, click Manage Data Model, verify that all the Twitter data has been loaded into the model, and close the Power Pivot window.
Click the Diagram View and define the relationships between the tables in the data set.
Aim: To create variety of charts using Excel for the given data
Procedure:
Click the OK button. New worksheet gets created in Excel window and an empty Power PivotTable
appears.
As you can observe, the layout of the Power PivotTable is similar to that of PivotTable.
The PivotTable Fields List appears on the right side of the worksheet. Here, you will find
some differences from PivotTable. The Power PivotTable Fields list has two tabs − ACTIVE
and ALL, that appear below the title and above the fields list. ALL tab is highlighted. The ALL
tab displays all the data tables in the Data Model and ACTIVE tab displays all the data tables
that are chosen for the Power PivotTable at hand.
Click the table names in the PivotTable Fields list
under ALL. The corresponding fields with check boxes will
appear.
Each table name will have the symbol on the left side.
If you place the cursor on this symbol, the Data Source and the Model Table Name of
that data table will be displayed.
Click the OK button. Sort the column labels in the ascending order.
Click on the Home tab on the Ribbon in the Power Pivot window.
Click on PivotTable.
Click on PivotChart in the dropdown list.
Click the OK button. An empty PivotChart gets created on a new worksheet in the Excel
window. In this chapter, when we say PivotChart, we are referring to Power PivotChart.
As you can observe, all the tables in the data model are displayed in the PivotChart Fields list.
You can remove the legend and the value field buttons for a tidier look of the PivotChart.
Click on the button at the top right corner of the PivotChart.
Deselect Legend in the Chart Elements. Right-click a value field button and select Hide Value Field Buttons on Chart; the value field buttons on the chart will then be hidden.
Note that display of Field Buttons and/or Legend depends on the context of the PivotChart.
You need to decide what is required to be displayed.
As in the case of Power PivotTable, Power PivotChart Fields list also contains two tabs −
ACTIVE and ALL. Further, there are 4 areas −
AXIS (Categories)
LEGEND (Series)
∑ VALUES
FILTERS
As you can observe, Legend gets populated with ∑ Values. Further, Field Buttons get added
to the PivotChart for the ease of filtering the data that is being displayed. You can click on
the arrow on a Field Button and select/deselect values to be displayed in the Power
PivotChart.
You can have the following Table and Chart Combinations in Power Pivot.
Chart and Table (Horizontal) - you can create a Power PivotChart and a Power
PivotTable, one next to another horizontally in the same worksheet.
Chart and Table (Vertical) - you can create a Power PivotChart and a Power PivotTable, one
below another vertically in the same worksheet.
These combinations and some more are available in the dropdown list that appears when
you click on PivotTable on the Ribbon in the Power Pivot window.
Click on the PivotChart to develop a variety of charts.
Output:
Experiment 7 : Use R-Project to carry out statistical analysis of big data
Procedure:
Step 1: Download RStudio Desktop from https://posit.co/download/rstudio-desktop/#download
Step 2: wget -c https://download1.rstudio.org/desktop/jammy/amd64/rstudio-2022.07.2-576-amd64.deb
Step 3: sudo apt install ./rstudio-2022.07.2-576-amd64.deb
Step 4: rstudio
This launches RStudio.
Procedure:
-->install.packages("gapminder")
-->library(gapminder)
-->data(gapminder)
output:
A tibble: 1,704 × 6
-->summary(gapminder)
output:
(Other) 1632
-->x<-mean(gapminder$gdpPercap)
-->x
output:[1] 7215.327
-->attach(gapminder)
-->median(pop)
output:[1] 7023596
-->hist(lifeExp)
-->plot(lifeExp ~ gdpPercap)
-->install.packages("dplyr")
-->gapminder %>%
+ filter(year == 2007) %>%
+ group_by(continent) %>%
+ summarise(lifeExp = median(lifeExp))
output:
# A tibble: 5 × 2
continent lifeExp
<fct> <dbl>
1 Africa 52.9
2 Americas 72.9
3 Asia 72.4
4 Europe 78.6
5 Oceania 80.7
-->install.packages("ggplot2")
--> library("ggplot2")
-->ggplot(gapminder, aes(x = continent, y = lifeExp)) +
   geom_boxplot(outlier.colour = "hotpink") +
   geom_jitter(position = position_jitter(width = 0.1, height = 0), alpha = 1/4)
output:
-->head(country_colors, 4)
output:
Nigeria Egypt Ethiopia
"#7F3B08" "#833D07" "#873F07"
Congo, Dem. Rep.
"#8B4107"
-->head(continent_colors)
mtcars
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
> Data_Cars <- mtcars
> rownames(Data_Cars)[which.max(Data_Cars$hp)]
[1] "Maserati Bora"
> rownames(Data_Cars)[which.min(Data_Cars$hp)]
[1] "Honda Civic"
> median(Data_Cars$wt)
[1] 3.325
> names(sort(-table(Data_Cars$wt)))[1]
[1] "3.44"
# Values of weight.
63, 81, 56, 91, 47, 57, 76, 72, 62, 48
lm() Function
This function creates the relationship model between the predictor and the response vari-
able.
Syntax
The basic syntax for lm() function in linear regression is −
lm(formula,data)
Following is the description of the parameters used −
formula is a symbol representing the relation between x and y.
data is the vector on which the formula will be applied.
Create Relationship Model & get the Coefficient
x <- c(151, 174, 138, 186, 128, 136, 179, 163, 152, 131)
y <- c(63, 81, 56, 91, 47, 57, 76, 72, 62, 48)
relation <- lm(y ~ x)
print(relation)
Result:
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
-38.4551 0.6746
print(summary(relation))
Result:
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-6.3002 -1.6629 0.0412 1.8944 3.9775
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -38.45509 8.04901 -4.778 0.00139 **
x 0.67461 0.05191 12.997 1.16e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
predict() Function
Syntax
The basic syntax for predict() in linear regression is −
predict(object, newdata)
Following is the description of the parameters used −
object is the formula which is already created using the lm() function.
newdata is the vector containing the new value for predictor variable.
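The call that produces the result shown below is omitted in this copy of the manual. Given the model fitted above, the value 76.22869 corresponds to predicting the response for a new predictor value of x = 170, for example:
a <- data.frame(x = 170)
result <- predict(relation, a)
print(result)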
Result:
1
76.22869
Experiment 8 : Use R-Project for data visualization of social media data
Procedure:
Step 1: Register as a Facebook developer.
Step 2: Click on Tools.
install.packages("httpuv")
install.packages("Rfacebook")
install.packages("RcolorBrewer”)
install.packages("Rcurl")
install.packages("rjson)
install.packages("httr")
library(Rfacebook)
library(httpuv)
library(RColorBrewer)
access_token="EAATgfMOrIRoBAOR9XUl3VGzbLMuWGb9FqGkTK3PFBuRyUVZ
AAL7ZBw0xN3AijCsPiZBylucovck4YUhUfkWLMZBo640k2ZAupKgsaKog9736lec
P8E52qkl5de8M963oKG8KOCVUXqqLiRcI7yIbEONeQt0eyLI6LdoeZA65Hyxf8so
1 UMbywAdZCZAQBpNiZAPPj7G3UX5jZAvUpRLZCQ5SIG"
options(RCurlOptions=list(verbose=FALSE, capath=system.file("CurlSSL","cacert.pem", package="RCurl"), ssl.verifypeer=FALSE))
me<-getUsers("me", token=access_token)
View(me)
myFriends<-getFriends(access_token, simplify=FALSE)
table(myFriends)
pie(table(myFriends$gender))
output