
CS ELEC 4 - Analytics Techniques & Tools/Machine Learning


MODULE NO.: 1 (Prelim)
Module Title: Managing Data with R
WRITER: Richard N. Monreal, MIT

To do well in this module, you need to remember the following:

1. Pause and pray before starting this module.


2. Read and go through the module at your own time and pace.
3. You may open suggested references for supplemental activities and exercises.
4. Honestly answer the activities and sample exercises. The answers are provided on the succeeding
pages.

OPENING PRAYER

May God the Father bless us. May God the Son heal us. May God the Holy Spirit enlighten
us, and give us eyes to see with, ears to hear with, hands to do the work of God with, feet
to walk with, a mouth to preach the word of salvation with, and the angel of peace to watch
over us and lead us at last, by our Lord's gift, to the Kingdom. Amen.

Learning Outcomes

MANAGING DATA WITH R

Subtopic Titles:

 Saving and Loading R Data Structures
 Importing and Saving Data from CSV Files
 Exploring the Structure of Data
 Exploring Numeric Variables
 Measuring the Central Tendency - mean, median, mode
 Measuring Spread - quartiles and the five-number summary
 Visualizing Numeric Variables - boxplots
 Visualizing Numeric Variables - histograms
 Understanding Numeric Data - uniform and normal distributions
 Measuring Spread - variance and standard deviation
 Exploring Categorical Variables
 Exploring Relationships Between Variables
 Missing Data
 Parsing webpages and visualizing tabular HTML data

Learning Outcomes ("I SHOULD BE ABLE TO"):

 Present some basic statistics, e.g., for measuring central tendency (mean, median, mode) or dispersion (variance, quartiles, range),
 explore simple plots,
 demonstrate the uniform and normal distributions,
 contrast numerical and categorical types of variables,
 present strategies for handling incomplete (missing) data, and
 show the need for cohort-rebalancing when comparing imbalanced groups of subjects, cases or units.

Estimated time: 25 Hrs.
MODULE INTRODUCTION AND FOCUS QUESTION(S):

In this chapter, we will discuss strategies to import data and export results. Also, we are going to learn the
basic tricks we need to know about processing different types of data. Specifically, we will illustrate
common R data structures and strategies for loading (ingesting) and saving (regurgitating) data.

Pretest

To further gauge your level of understanding and where you currently stand in this topic, please answer the
following pre-test questions honestly. Take note of the items that you were not able to correctly answer and
look for the right answer as you go through this module.

A. Give the purpose of the following code/ functions: (25 points)

i. read.csv()
ii. c()
iii. max()
iv. min()
v. write.csv()
vi. summary()
vii. plot()
viii. median()
ix. library()
x. range()
xi. def()
xii. quantile()
xiii. boxplot()
xiv. hist()
xv. lines()

B. Enumeration (10 points)

i. What is the formula for the variance? Give an example and solution.
ii. What is the formula for the standard deviation? Give an example and solution.

C. Lab/ Actual Activities (400 points)


i. Perform the following activities in your machine (create your own dataset):
1. Week 1
i. Saving and Loading R Data Structures
ii. Importing and Saving Data from CSV Files
iii. Exploring the Structure of Data
2. Week 2
i. Exploring Numeric Variables
ii. Measuring the Central Tendency - mean, median, mode
iii. Measuring Spread - quartiles and the five-number summary
3. Week 3
i. Visualizing Numeric Variables - boxplots
ii. Visualizing Numeric Variables – histograms
4. Week 4
i. Understanding Numeric Data - uniform and normal distributions
ii. Measuring Spread - variance and standard deviation
Study Time

Managing Data with R

Saving and Loading R Data Structures

Let’s start by extracting the Edgar Anderson’s Iris Data from the package datasets. The iris dataset
quantifies the morphologic variation of 50 flowers from each of three related Iris species - Iris setosa, Iris
virginica and Iris versicolor. Four shape features were measured from each sample - the length and the
width of the sepals and petals (in centimetres). These data were used by Ronald Fisher in his 1936
linear discriminant analysis paper.

As an I/O (input/output) demonstration, after we load the iris data and examine its class type, we can
save it into a file named “myData.RData” and then reload it back into R.
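
A minimal sketch of this I/O workflow (the file name "myData.RData" follows the text; everything else is standard base R):

    data(iris)                         # load the built-in iris dataset
    class(iris)                        # examine its class type ("data.frame")
    save(iris, file = "myData.RData")  # save the object to an .RData file
    rm(iris)                           # remove it from the workspace
    load("myData.RData")               # reload it back into R
    head(iris)                         # confirm the data are back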

Importing and Saving Data from CSV Files

We import the data from "CaseStudy07_WorldDrinkingWater_Data.csv" from these case-studies and
save it into an R dataset named “water”. The variables in the dataset are as follows:

 Time: Years (1990, 1995, 2000, 2005, 2010, 2012)


 Demographic: Country (across the world)
 Residence Area Type: Urban, rural, or total
 WHO Region
 Population using improved drinking-water sources: The percentage of the population
using an improved drinking water source.
 Population using improved sanitation facilities: The percentage of the population using an
improved sanitation facility.
Generally, the separator of a CSV file is a comma. By default, the command read.csv() uses the option
sep = ",". Also, we can use colnames() to rename the column variables.

This code loads CSV files that already include a header line listing the names of the variables. If the
dataset does not have a header, we can set the option header = FALSE, and R will assign
default names to the column variables of the dataset.
To save a data frame to a CSV file, we can use the write.csv() function. The option file =
"a/local/file/path" allows us to specify the path of the saved file.

Exploring the Structure of Data

We can use the command str() to explore the structure of a dataset.

We can see that this World Drinking Water dataset has 3331 observations and 6 variables. The
output also gives us the class of each variable and the first few elements of each variable.
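
For example, applied to the water data frame created above:

    str(water)   # 3331 obs. of 6 variables, with each variable's class and first values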
Exploring Numeric Variables

Summary statistics for numeric variables in the dataset can be obtained using the command
summary().

The output contains the six summary statistics (minimum, 1st quartile, median, mean, 3rd quartile,
maximum) along with the count of NA’s (missing values).
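
For instance, with the assumed column name improved_water:

    summary(water$improved_water)
    #  Min. 1st Qu. Median  Mean 3rd Qu.  Max.  NA's   <- labels printed by summary()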
Measuring the Central Tendency - mean, median, mode

Mean and median are two common measures of central tendency. The mean is “the sum of all
values divided by the number of values”. The median is the number in the middle of an ordered list of
values. In R, the mean() and median() functions provide these two measurements.
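
A small illustration with a made-up numeric vector, followed by the water column (name assumed as above):

    x <- c(75, 98, 99, 100, 100)
    mean(x)      # 94.4
    median(x)    # 99
    mean(water$improved_water, na.rm = TRUE)     # na.rm = TRUE drops missing values
    median(water$improved_water, na.rm = TRUE)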

The mode is the value that occurs most often in the dataset. It is often used in categorical data,
where mean and median are inappropriate measurements.

We can have one or more modes. In the water dataset, we have “Europe” and “Urban” as the modes
for the region and residence area variables, respectively. These two variables are unimodal - each has a
single mode. For the year variable, we have two modes, 2000 and 2005; both categories have 570
counts. The year variable is an example of a bimodal variable. We also have multimodal data, which
have two or more modes.

The mode is one of the measures of central tendency. The best way to use it is to compare the
count of the mode to the counts of the other values. This helps us judge whether one or several
categories dominate all others in the data. After that, we are able to analyze the story behind these
common values.

In numeric datasets, we can think of the mode as the highest bin in the histogram, since it is unlikely to
have many repeated measurements for continuous variables. In this way, we can also examine whether
the numeric data are multimodal.
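
Base R has no dedicated function for the statistical mode (mode() reports the storage mode), so a common approach is to tabulate and pick the most frequent value; the column names below are the assumed ones from above:

    table(water$region)                          # counts per category
    names(which.max(table(water$region)))        # the modal category, e.g., "Europe"
    sort(table(water$year), decreasing = TRUE)   # reveals the two modes 2000 and 2005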
Measuring Spread - quartiles and the five-number summary

The five-number summary describes the spread of a dataset. The five numbers are:

 Minimum (Min.), representing the smallest value in the data
 First quartile/Q1 (1st Qu.), representing the 25th percentile, which splits off the lowest 25% of data from the highest 75%
 Median/Q2 (Median), representing the 50th percentile, which splits off the lowest 50% of data from the top 50%
 Third quartile/Q3 (3rd Qu.), representing the 75th percentile, which splits off the lowest 75% of data from the top 25%
 Maximum (Max.), representing the largest value in the data.

Min and Max can be obtained by using min() and max() respectively.

The difference between the maximum and the minimum is known as the range. In R, the range() function
gives us both the minimum and maximum. A combination of range() and diff() does the trick of getting
the actual range value.
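
For example (column name assumed):

    min(water$improved_water, na.rm = TRUE)
    max(water$improved_water, na.rm = TRUE)
    range(water$improved_water, na.rm = TRUE)         # returns c(min, max)
    diff(range(water$improved_water, na.rm = TRUE))   # max - min, the actual range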

Q1 and Q3 are the 25th and 75th percentiles of the data, and the median (Q2) lies between them.
The difference between Q3 and Q1 is called the interquartile range (IQR). The middle half of the data,
free of extreme values, lies within the IQR.
In R, we use IQR() to calculate the interquartile range. If the data contain NA’s, we add the option
na.rm = TRUE so that the NA’s are ignored by the function.
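
A one-line example (column name assumed):

    IQR(water$improved_water, na.rm = TRUE)   # Q3 - Q1, ignoring missing values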

Similar to the command summary() discussed earlier in this chapter, the function
quantile() can be used to obtain the five-number summary.

We can also calculate specific percentiles in the data. For example, if we want the 20th and 60th
percentiles, we can do the following.

Combined with the seq() function, we can generate percentiles at evenly-spaced values.
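
For example:

    quantile(water$improved_water, na.rm = TRUE)                        # five-number summary
    quantile(water$improved_water, probs = c(0.2, 0.6), na.rm = TRUE)   # 20th and 60th percentiles
    quantile(water$improved_water, probs = seq(0, 1, by = 0.2), na.rm = TRUE)  # every 20%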

Let’s re-examine the five-number summary for the improved_water variable. When we ignore the
NA’s, the difference between the minimum and Q1 is 74, while the difference between Q3 and the maximum
is only 1. The interquartile range is 22%. Combining these facts, the first quarter is much more widely spread
than the middle 50 percent of values, and the last quarter is the most condensed one, containing only the two
percentages 99% and 100%. Also, notice that the mean is smaller than the median. The
mean is more sensitive to extreme values than the median: the very small minimum, which
makes the range of the first quarter very large, pulls the mean down more than it affects the
median.
Visualizing Numeric Variables - boxplots

We can visualize the five-number summary with a boxplot (box-and-whiskers plot). With the boxplot()
function we can set the title (main="") and the labels for the x (xlab="") and y (ylab="") axes.
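
A hedged example (column and label text are assumptions):

    boxplot(water$improved_water,
            main = "Boxplot of improved water sources",   # plot title
            ylab = "% of population")                     # y-axis label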

In the boxplot we have five horizontal lines, each representing the corresponding value in the five-
number summary. The box in the middle represents the middle 50 percent of values, and the bold line
inside the box is the median. The mean value is not shown on the graph.

The whiskers of a boxplot extend at most 1.5 times the IQR beyond the box (below Q1 or above Q3).
Any value that falls outside of this range is drawn as a circle or dot and is considered an outlier.
We can see that there are a lot of outliers with small values at the low end of the graph.
Visualizing Numeric Variables - histograms

A histogram is another way to show the spread of a numeric variable (see Chapter 3 for additional
information). It divides the original data into a predetermined number of bins that serve as containers
for values, and the height of each bin indicates its frequency.
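
For example:

    hist(water$improved_water,
         main = "Histogram of improved water sources",   # assumed title
         xlab = "% of population")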
We can see that the shapes of the two graphs are somewhat similar; both show a left-skewed pattern
(mean < median). Other common skew patterns are shown in the following picture.

These plots are generated by R and the code is provided in the appendix.

You can see the density plots of over 80 different probability distributions using the SOCR Java
Distribution Calculators or the Distributome HTML5 Distribution Calculators.
Understanding Numeric Data - uniform and normal distributions

If the data follow a uniform distribution, then all values are equally likely to occur. The histogram of
uniformly distributed data would have roughly equal heights for each bin, like the following graph.

Often, but not always, real-world processes lead to normally distributed data. A normal distribution
has a higher frequency for middle values and lower frequencies for more extreme values. It has
a symmetric, bell-curved shape, just like the following diagram generated by R. Many parametric
statistical approaches assume normality of the data. In cases where this parametric
assumption is violated, variable transformations or distribution-free tests may be more appropriate.
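
A small simulation sketch contrasting the two shapes:

    set.seed(123)
    u <- runif(10000)                      # uniform on [0, 1]: roughly flat histogram
    n <- rnorm(10000, mean = 0, sd = 1)    # standard normal: bell-shaped histogram
    par(mfrow = c(1, 2))
    hist(u, main = "Uniform distribution")
    hist(n, main = "Normal distribution")
    par(mfrow = c(1, 1))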
Measuring Spread - variance and standard deviation

A distribution is a great way to characterize data using only a few parameters. For example, the normal
distribution can be defined by only two parameters - center and spread, or statistically, the mean and the
standard deviation.

The way to get the mean value is to divide the sum of the data values by the number of values, so we
have the following formula.
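
In standard notation, for n data values x_1, ..., x_n the mean is

    \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i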

The variance is the average of the squared deviations from the mean, and the standard deviation is the square root of the variance.
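
In the same notation, the population and sample versions are (note that R's var() and sd() use the sample versions, with an n - 1 denominator):

    \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad \sigma = \sqrt{\sigma^2}

    s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad s = \sqrt{s^2}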

Since the water dataset is non-normal, we use a new dataset about the demographics of baseball
players to illustrate normal distribution properties. The "01_data.txt" file in our class files has the following
variables:

 Name
 Team
 Position
 Height
 Weight
 Age
We first check the histograms for approximate normality.
These plots allow us to visually inspect the normality of the players' height and weight. We can also
obtain the mean and standard deviation of the weight and height variables.
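
A hedged sketch (the data frame name baseball is an assumption; adjust the read.table() options to match the actual file format):

    baseball <- read.table("01_data.txt", header = TRUE)
    hist(baseball$Height, main = "Histogram of player height")
    hist(baseball$Weight, main = "Histogram of player weight")
    mean(baseball$Weight, na.rm = TRUE); sd(baseball$Weight, na.rm = TRUE)
    mean(baseball$Height, na.rm = TRUE); sd(baseball$Height, na.rm = TRUE)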

A larger standard deviation, or variance, suggests that the data are more spread out from the mean.
Therefore, the weight variable is more spread out than the height variable.
Given the first two moments (mean and standard deviation), we can easily estimate how extreme a
specific value is. Assuming a normal distribution, the values follow the 68-95-99.7 rule: 68% of the data
lie within the interval [μ−σ, μ+σ], 95% of the data lie within [μ−2σ, μ+2σ], and 99.7% of the data lie within
[μ−3σ, μ+3σ]. The following graph plotted by R illustrates the 68-95-99.7 rule.

Applying the 68-95-99.7 rule to our baseball weight variable, we find that 68% of the players
weigh between 180.7168 and 222.7164 pounds, 95% of the players weigh between
159.7170 and 243.7162 pounds, and 99.7% of the players weigh between 138.7172
and 264.7160 pounds.
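
These intervals come directly from the sample mean and standard deviation of the weight variable; a minimal sketch (data frame name assumed as above):

    m <- mean(baseball$Weight, na.rm = TRUE)
    s <- sd(baseball$Weight, na.rm = TRUE)
    c(m - s,   m + s)     # ~68% interval
    c(m - 2*s, m + 2*s)   # ~95% interval
    c(m - 3*s, m + 3*s)   # ~99.7% interval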
Exploring Categorical Variables

Back to our water dataset, we can treat the year variable as categorical rather than numeric.
Since the year variable only has six distinct values, it is reasonable to treat it as a
categorical variable, where each value is a category that can apply to multiple WHO regions.
Moreover, the region and residence area variables are also categorical.
Unlike numeric variables, categorical variables are better examined with tables than with
summary statistics. A one-way table represents a single categorical variable and gives us the counts of
the different categories. The table() function can create one-way tables for our water dataset:
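
For example (column names assumed):

    table(water$region)            # counts per WHO region
    table(water$residence_area)    # counts per residence area type
    table(water$year)              # counts per year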

Given that we have a total of 3331 observations, the WHO region table tells us that about 27%
(910/3331) of the areas examined in the study are in Europe.
R can directly give us the table proportions with the prop.table() function. The proportions
can be transformed into percentages and rounded to a chosen number of digits.
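
For example:

    prop.table(table(water$region))                             # proportions per region
    round(prop.table(table(water$region)) * 100, digits = 1)    # as percentages, 1 decimal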

Exploring Relationships Between Variables

So far, the methods and statistics that we have gone through are at the univariate level. Sometimes we want
to examine the relationship between two or more variables. For example, did the percentage of the
population that uses improved drinking-water sources increase over time? To address such
questions we need to look at bivariate or multivariate relationships.

Visualizing Relationships - scatterplots

Let’s look at the bivariate case first. A scatterplot is a good way to visualize bivariate relationships. The
x axis and the y axis each represent one of the variables, and each observation is drawn as a dot on the
graph. If the graph shows a clear pattern, rather than a cloud of scattered dots or a horizontal line,
the two variables may be correlated with each other.
In R we can use the plot() function to create scatterplots. We have to define the variables for the x-axis and
y-axis, and the labels in the graph are editable.
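
For example (column names assumed):

    plot(water$year, water$improved_water,
         xlab = "Year", ylab = "% using improved water sources",
         main = "Improved drinking-water sources over time")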

We can see from the scatterplot that there is an increasing pattern. In later years, the percentages
are more concentrated near one hundred. In particular, in 2012 none of the regions had less than 20% of
people using improved water sources, while in the early years there used to be some regions with such
low percentages.
Examining Relationships - two-way cross-tabulations

A scatterplot is a useful tool for examining the relationship between two variables when at least one of
them is numeric. When both variables are nominal, a two-way cross-tabulation (also known as a crosstab
or contingency table) is a better choice.

The function CrossTable() is available in R under the package gmodels. Let’s install it first.
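
For example:

    install.packages("gmodels")   # only needed once
    library(gmodels)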

We are interested in investigating the relationship between WHO region and residence area type in
the water study. We might want to know if there is a difference in terms of residence area type
between the African WHO region and all other WHO regions.

To address this problem, we need to create an indicator variable for the African WHO region first.

Let’s revisit the table() function to see how many WHO regions are in Africa.
Now, let’s create a two-way cross-tabulation using CrossTable().
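
A hedged sketch, assuming the column names from above and that the African region label is "Africa":

    # indicator: TRUE if the WHO region is Africa, FALSE otherwise
    water$africa <- water$region == "Africa"      # region label "Africa" is assumed
    table(water$africa)                           # how many observations are African regions
    CrossTable(x = water$residence_area, y = water$africa)   # two-way cross-tabulation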

Each cell in the table contains five numbers. The first one, N, gives us the count that falls into the
corresponding category. The Chi-square contribution provides information about the cell’s
contribution to the Pearson’s Chi-squared test for independence between the two variables; this statistic
measures how likely it is that the differences in cell counts are due to chance alone.
The numbers of most interest are N / Col Total, the counts divided by the column total. In this case, these
numbers represent the distribution of residence area types among African regions and the regions in
the rest of the world. We can see that the numbers are very close between African and non-African
regions for each type of residence area. Therefore, we can conclude that the African WHO regions do not
differ in terms of residence area types compared to the rest of the world.

Missing Data

In the previous sections, we simply ignored the incomplete observations in our water dataset (na.rm =
TRUE). Is this an appropriate strategy to handle incomplete data? Could the missingness pattern of
those incomplete observations be important? It is possible that the arrangement of the missing
observations reflects an important factor that was not accounted for in our statistics or our
models.

Missing Completely at Random (MCAR) is an assumption about the probability of missingness
being equal for all cases; Missing at Random (MAR) assumes the probability of missingness has a
known but random mechanism (e.g., different rates for different groups); Missing Not at Random
(MNAR) suggests a missingness mechanism linked to the values of the predictors and/or response, e.g.,
some participants may drop out of a drug trial when they have side-effects.

There are a number of strategies to impute missing data. The expectation maximization (EM)
algorithm provides one example of handling missing data. The SOCR EM tutorial, activity, and
documentation provide the theory, applications and practice for effective (multidimensional) EM
parameter estimation.

The simplest way to handle incomplete data is to substitute each missing value with its (feature or
column) average. When the missingness proportion is small, substituting the means
for the missing values will have little effect on the mean, variance, or other important statistics of the
data. Also, this preserves the non-missing values of the same observation or row.
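
A minimal mean-imputation sketch (column name assumed):

    # replace NA's in improved_water with the column mean
    col_mean <- mean(water$improved_water, na.rm = TRUE)
    water_imp <- water
    water_imp$improved_water[is.na(water_imp$improved_water)] <- col_mean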
A more sophisticated way of resolving missing data is to use a model (e.g., linear regression) to
predict the missing feature and impute its missing values. This is called the predictive mean matching
approach. This method works well for data with multivariate normality. However, a disadvantage is
that it can only predict one value at a time, which is very time consuming. Also, the multivariate
normality assumption might not be satisfied, and there may be important multivariate relations that are
not accounted for. We will use the mi package for the predictive mean matching procedure.

Let’s install the mi package first.

Then we need to build the missing-data information object (a missing_data.frame). We use the imputation method
pmm (predictive mean matching) for both missing variables.
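
A hedged sketch of this setup, assuming the two incomplete columns are named improved_water and sanitation:

    install.packages("mi")    # only needed once
    library(mi)
    mdf <- missing_data.frame(water)    # enhanced data frame with missingness metadata
    show(mdf)                           # inspect suggested types and imputation methods
    mdf <- change(mdf, y = c("improved_water", "sanitation"),
                  what = "imputation_method", to = "pmm")   # use predictive mean matching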
Notes:

 Converting the input data.frame to a missing_data.frame allows us to include in the data frame enhanced metadata about each variable, which is essential for the subsequent modeling, interpretation and imputation of the initial missing data.
 show() displays all missing variables and their class-labels (e.g., continuous), along with meta-data. The missing_data.frame constructor suggests the most appropriate classes for each missing variable; however, the user often needs to correct, modify or change these meta-data using change().
 Use the change() function to change/correct meta-data in the constructed missing_data.frame object that show(mdf) reveals to be incorrect.
 To get a sense of the raw data, look at the summary, image, or hist of the missing_data.frame.
 The mi vignettes provide many useful examples of handling missing data.

We can now perform the initial imputation. Here we impute 3 times, which creates 3 different datasets
with slightly different imputed values.

Next, we extract the multiply imputed data.frames from the imputations object. Finally, we
compare the summary statistics between the original dataset and the imputed datasets.
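
A hedged sketch of these steps (object and column names follow the assumed setup above):

    imputations <- mi(mdf, n.iter = 10, n.chains = 3)   # 3 imputation chains
    completed <- complete(imputations, m = 3)            # list of 3 completed data frames
    summary(water$improved_water)                        # original, with NA's
    lapply(completed, function(d) summary(d$improved_water))   # imputed versions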
...
This is just a brief introduction to handling incomplete datasets. In later chapters, we will discuss
missing data further, including different imputation methods and how to evaluate the completed (imputed)
results.
Simulate some real multivariate data

Suppose we would like to generate a synthetic dataset:

sim_data={y,x1,x2,x3,x4,x5,x6,x7,x8,x9,x10}.

Then, we can introduce a method that takes a dataset and a desired proportion of missingness and
wipes out that proportion of the data, i.e., introduces random patterns of missingness. Note that
there are already R functions that automate the introduction of missingness, e.g.,
missForest::prodNA(); however, writing such a method from scratch is also useful.

Next, let’s synthetically generate (simulate) 1,000 cases including all 11 features in the data
({y, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10}).
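
A hedged sketch of both steps - simulating the 11 features and wiping out a chosen proportion of values at random (the names, distributions and proportions here are illustrative assumptions):

    set.seed(1234)
    n <- 1000
    sim_data <- as.data.frame(matrix(rnorm(n * 10), ncol = 10))   # x1, ..., x10
    colnames(sim_data) <- paste0("x", 1:10)
    sim_data$y <- rowSums(sim_data) + rnorm(n)    # outcome depends on the predictors

    # introduce a given proportion of missing values completely at random
    create_missing <- function(data, prop = 0.1) {
      n_cells <- nrow(data) * ncol(data)
      idx <- sample(n_cells, size = round(prop * n_cells))
      data[arrayInd(idx, dim(data))] <- NA
      data
    }
    sim_miss <- create_missing(sim_data, prop = 0.3)   # 30% missingness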
The histogram plots display the distributions of:

 The observed data (in blue color),


 The imputed data (in red color), and
 The completed values (observed plus imputed, in gray color).
...
Let’s check imputation convergence (details provided below).
...
Finally, we pool over the m = 3 completed datasets when we fit the “model”, pooling across the 3 chains
in order to estimate a linear regression model.
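
A hedged sketch using mi's pooling utility (the object names are assumptions based on the simulated data above):

    library(mi)
    mdf_sim <- missing_data.frame(sim_miss)
    imputations_sim <- mi(mdf_sim, n.chains = 3)            # impute the simulated data
    fit_pooled <- pool(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10,
                       data = imputations_sim, m = 3)        # pooled linear regression
    fit_pooled                                               # pooled coefficient estimates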
Parsing webpages and visualizing tabular HTML data

In this section, we will utilize the Earthquakes dataset on the SOCR website. It records information about
earthquakes that happened between 1969 and 2007 with magnitudes larger than 5 on the Richter scale.
Here is how we parse the data on the source webpage and ingest the information into R:
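
One possible sketch using the rvest package (the URL placeholder and the index of the table on the page are assumptions; the original module may use a different HTML parser):

    library(rvest)
    url <- "<SOCR Earthquakes data page URL>"    # placeholder, use the actual SOCR page
    page <- read_html(url)
    tables <- html_table(html_nodes(page, "table"), fill = TRUE)
    earthquake <- tables[[1]]    # pick the table that holds the earthquake records
    str(earthquake)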

In this dataset, Magt (magnitude type) may be used as a grouping variable. We will draw a “Longitude
vs Latitude” line plot from this dataset. The function we are using is ggplot(), from the ggplot2 package;
its input is usually a data frame, and aes() specifies the axes.
The most important line of code is made up of 2 parts. The first part,
ggplot(earthquake, aes(Longitude, Latitude, group=Magt, color=Magt)), specifies the setting of the plot:
dataset, grouping and color. The second part specifies that we are going to draw lines between data points.
In later chapters we will frequently use the ggplot2 package, and the structure of calls in this package is
always function1 + function2.
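
Putting it together (the column names Longitude, Latitude and Magt follow the text):

    library(ggplot2)
    ggplot(earthquake, aes(Longitude, Latitude, group = Magt, color = Magt)) +
      geom_line()    # draw lines between points, one line per magnitude type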

We can visualize the distribution of different variables using density plots. The following chunk of
code plots the distribution of Latitude among the different magnitude types, again using the ggplot()
function but combined with geom_density().
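
For example:

    ggplot(earthquake, aes(Latitude, fill = Magt)) +
      geom_density(alpha = 0.5)    # overlaid, semi-transparent density curves per Magt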

We can also compute and display 2D kernel densities and 3D surface plots. Plotting 2D kernel
densities and 3D surfaces is very important and useful in multivariate exploratory data analytics.

We will use the plot_ly() function from the plotly package, which takes values from a data frame.

To create a surface plot, we use two vectors, x and y, with lengths m and n respectively. We also need
a matrix z of size m×n.

This z matrix is created from matrix multiplication between x and y. However, we need to register on the
plotly website to publish our own plots online. Still, there are some built-in datasets that can be used to
demonstrate this type of graph.
The kde2d() function (from the MASS package) is needed for 2D kernel density estimation.

Here z is an estimate of the kernel density function. Then we apply plot_ly to the list kernal_density
via the with() function.
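
A hedged sketch of the 2D kernel density surface (the x/y columns are assumptions; kernal_density keeps the object name used in the text):

    library(MASS)      # for kde2d()
    library(plotly)
    kernal_density <- kde2d(earthquake$Longitude, earthquake$Latitude, n = 50)
    with(kernal_density, plot_ly(x = x, y = y, z = z, type = "surface"))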

Note that we used the option "surface"; however, you can experiment with the type option.
Alternatively, one can plot 1D, 2D or 3D plots:
Importing Data from SQL Databases

We can also import SQL databases into R. First, we need to install and load the RODBC (R Open
Database Connectivity) package.

Then, we can open a connection to the SQL server database with a Data Source Name (DSN), e.g., via
Microsoft Access.
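
A hedged sketch (the DSN, credentials and table name are placeholders):

    install.packages("RODBC")    # only needed once
    library(RODBC)
    con <- odbcConnect("my_dsn", uid = "user", pwd = "password")   # DSN configured on the system
    water_sql <- sqlQuery(con, "SELECT * FROM water_table")        # run an SQL query
    odbcClose(con)                                                 # close the connection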

Research:

List down new examples of the following:

 data sets using input and output


 csv file

Analysis
Choose at least 2 values each from the inputs you listed, then write and perform the variable inspection and
conversion.

_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________.

Action:

Design your own data selection and manipulation.


POST TEST

A. Give the purpose of the following code/ functions: (25 points)

i. read.csv()
ii. c()
iii. max()
iv. min()
v. write.csv()
vi. summary()
vii. plot()
viii. median()
ix. library()
x. range()
xi. def()
xii. quantile()
xiii. boxplot()
xiv. hist()
xv. lines()

B. Enumeration (10 points)

i. What is the formula for the variance? Give an example and solution.
ii. What is the formula for the standard deviation? Give an example and solution.

C. Lab/ Actual Activities (400 points)


i. Perform the following activities in your machine (create your own dataset):
1. Week 1
i. Saving and Loading R Data Structures
ii. Importing and Saving Data from CSV Files
iii. Exploring the Structure of Data
2. Week 2
i. Exploring Numeric Variables
ii. Measuring the Central Tendency - mean, median, mode
iii. Measuring Spread - quartiles and the five-number summary
3. Week 3
i. Visualizing Numeric Variables - boxplots
ii. Visualizing Numeric Variables – histograms
4. Week 4
i. Understanding Numeric Data - uniform and normal distributions
ii. Measuring Spread - variance and standard deviation
CLOSING PRAYER

May God the Father bless us.


May God the Son heal us.
May God the Holy Spirit enlighten us,
and give us eyes to see with,
ears to hear with,
hands to do the work of God with,
feet to walk with,
a mouth to preach the word of salvation with,
and the angel of peace to watch over us and lead us at last,
by our Lord's gift,
to the Kingdom.
Amen.

RUBRICS

Programming Rubrics
Mathematics Rubrics

Category: Neatness and organization
4 - The work is presented in a neat, clear, organized fashion that is easy to read.
3 - The work is presented in a neat and organized fashion that is usually easy to read.
2 - The work is presented in an organized fashion but may be hard to read at times.
1 - The work appears sloppy and unorganized. It is hard to know what information goes together.

Category: Understanding
4 - I got it!! I did it in new ways and showed you how it worked. I can tell you what math concepts are used.
3 - I got it. I understood the problem and have an appropriate solution. All parts of the problem are addressed.
2 - I understood parts of the problem. I got started, but I couldn't finish.
1 - I did not understand the problem.

Category: Strategy & Procedures
4 - Typically uses an efficient and effective strategy to solve the problem(s).
3 - Typically uses an effective strategy to solve the problem(s).
2 - Sometimes uses an effective strategy to solve problems, but does not do it consistently.
1 - Rarely uses an effective strategy to solve problems.

Category: Mathematical Errors
4 - 90-100% of the steps and solutions have no mathematical errors.
3 - Almost all (85-89%) of the steps and solutions have no mathematical errors.
2 - Most (75-84%) of the steps and solutions have no mathematical errors.
1 - More than 75% of the steps and solutions have mathematical errors.

Category: Completion
4 - All problems are completed.
3 - All but one of the problems are completed.
2 - All but two of the problems are completed.
1 - Several of the problems are not completed.

This module was developed based on the following references:

1. Dinov, ID. (2018) Data Science and Predictive Analytics: Biomedical and Health Applications using R,
Springer (ISBN 978-3-319-72346-4).
2. DSPA Book downloads (5M, as of May 2020).
