Data Mining and Analysis
Data mining is the process of discovering insightful, interesting, and novel patterns, as
well as descriptive, understandable, and predictive models from large-scale data. We
begin this chapter by looking at basic properties of data modeled as a data matrix. We
emphasize the geometric and algebraic views, as well as the probabilistic interpreta-
tion of data. We then discuss the main data mining tasks, which span exploratory data
analysis, frequent pattern mining, clustering, and classification, laying out the roadmap
for the book.
1.1 DATA MATRIX

Data can often be represented or abstracted as an n × d data matrix, with n rows and
d columns, where rows correspond to entities in the dataset, and columns represent
attributes or properties of interest. Each row in the data matrix records the observed
attribute values for a given entity. The n × d data matrix is given as
$$\mathbf{D} = \begin{pmatrix}
 & X_1 & X_2 & \cdots & X_d \\
\mathbf{x}_1 & x_{11} & x_{12} & \cdots & x_{1d} \\
\mathbf{x}_2 & x_{21} & x_{22} & \cdots & x_{2d} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\mathbf{x}_n & x_{n1} & x_{n2} & \cdots & x_{nd}
\end{pmatrix}$$

where the i-th row is the d-tuple of attribute values for the i-th entity,

$$\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id})$$

and the j-th column is the n-tuple of values of the j-th attribute over all entities,

$$X_j = (x_{1j}, x_{2j}, \ldots, x_{nj})$$

The number of entities n is referred to as the size of the
data, whereas the number of attributes d is called the dimensionality of the data. The
analysis of a single attribute is referred to as univariate analysis, whereas the simultane-
ous analysis of two attributes is called bivariate analysis and the simultaneous analysis
of more than two attributes is called multivariate analysis.
Example 1.1. Table 1.1 shows an extract of the Iris dataset; the complete data forms
a 150 × 5 data matrix. Each entity is an Iris flower, and the attributes include sepal
length, sepal width, petal length, and petal width in centimeters, and the type
or class of the Iris flower. The first row is given as the 5-tuple
x1 = (5.9, 3.0, 4.2, 1.5, Iris-versicolor).
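For readers who want to follow along computationally, the following is a minimal sketch (an illustration added here, assuming NumPy and scikit-learn are available) that builds the numeric portion of this data matrix from scikit-learn's bundled copy of the Iris data; note that the row ordering of that copy may differ from Table 1.1.

```python
# Minimal sketch: build the Iris data matrix (assumes NumPy and scikit-learn are installed).
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
D = iris.data                             # numeric n x d data matrix, shape (150, 4)
labels = iris.target_names[iris.target]   # the categorical class attribute, as strings

n, d = D.shape                            # n = 150 entities, d = 4 numeric attributes
x1 = D[0]                                 # a row: the attribute values of one flower
X1 = D[:, 0]                              # a column: attribute sepal length over all flowers
print(n, d, x1, labels[0])
```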
Not all datasets are in the form of a data matrix. For instance, more complex
datasets can be in the form of sequences (e.g., DNA and protein sequences), text,
time-series, images, audio, video, and so on, which may need special techniques for
analysis. However, in many cases even if the raw data is not a data matrix it can
usually be transformed into that form via feature extraction. For example, given a
database of images, we can create a data matrix in which rows represent images
and columns correspond to image features such as color, texture, and so on. Some-
times, certain attributes may have special semantics associated with them requiring
special treatment. For instance, temporal or spatial attributes are often treated dif-
ferently. It is also worth noting that traditional data analysis assumes that each entity
or instance is independent. However, given the interconnected nature of the world
we live in, this assumption may not always hold. Instances may be connected to
other instances via various kinds of relationships, giving rise to a data graph, where
a node represents an entity and an edge represents the relationship between two
entities.
1.2 ATTRIBUTES
Attributes may be classified into two main types depending on their domain, that is,
depending on the types of values they take on.
Numeric Attributes
A numeric attribute is one that has a real-valued or integer-valued domain. For
example, Age with domain(Age) = N, where N denotes the set of natural numbers
(non-negative integers), is numeric, and so is petal length in Table 1.1, with
domain(petal length) = R+ (the set of all positive real numbers). Numeric attributes
that take on a finite or countably infinite set of values are called discrete, whereas those
that can take on any real value are called continuous. As a special case of discrete,
if an attribute has as its domain the set {0, 1}, it is called a binary attribute. Numeric
attributes can be classified further into two types:
• Interval-scaled: Only differences between values are meaningful, because the scale has no absolute zero. For example, temperature measured in °C is interval-scaled: a drop of 10 °C is meaningful, but it is not meaningful to say that 20 °C is twice as hot as 10 °C.
• Ratio-scaled: Both differences and ratios between values are meaningful, because the scale has an absolute zero. For example, Age is ratio-scaled: someone who is 20 years old is twice as old as someone who is 10 years old.

Categorical Attributes
A categorical attribute is one that has a set-valued domain composed of a set of
symbols. For example, Sex and Education could be categorical attributes, with domains
such as domain(Sex) = {M, F} and domain(Education) = {High School, BS, MS, PhD}.
Categorical attributes may be classified into two types:
• Nominal: The attribute values in the domain are unordered, and thus only equality
comparisons are meaningful. That is, we can check only whether the value of the
attribute for two given instances is the same or not. For example, Sex is a nominal
attribute. Also class in Table 1.1 is a nominal attribute, with domain(class) =
{iris-setosa, iris-versicolor, iris-virginica}.
• Ordinal: The attribute values are ordered, and thus both equality comparisons (is one
value equal to another?) and inequality comparisons (is one value less than or greater
than another?) are allowed, though it may not be possible to quantify the difference
between values. For example, Education is an ordinal attribute, because its domain
values are ordered by increasing educational qualification.
1.3 DATA: ALGEBRAIC AND GEOMETRIC VIEW
If the d attributes or dimensions in the data matrix D are all numeric, then each row
can be considered as a d-dimensional point:
$$\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id}) \in \mathbb{R}^d$$
Each numeric attribute Xj corresponds to the j-th coordinate axis, represented by the
standard basis vector ej of Rd, whose j-th component is 1 and all other components are 0:

$$\mathbf{e}_j = (0, \ldots, 1_j, \ldots, 0)^T$$
Any other vector in Rd can be written as a linear combination of the standard basis
vectors. For example, each of the points xi can be written as the linear combination
$$\mathbf{x}_i = x_{i1}\mathbf{e}_1 + x_{i2}\mathbf{e}_2 + \cdots + x_{id}\mathbf{e}_d = \sum_{j=1}^{d} x_{ij}\,\mathbf{e}_j$$

where the scalar value x_{ij} is the coordinate value along the j-th axis or attribute.
Example 1.2. Consider the Iris data in Table 1.1. If we project the entire data
onto the first two attributes, then each row can be considered as a point or
a vector in 2-dimensional space. For example, the projection of the 5-tuple
x1 = (5.9, 3.0, 4.2, 1.5, Iris-versicolor) on the first two attributes is shown in
Figure 1.1a. Figure 1.2 shows the scatterplot of all the n = 150 points in the 2-
dimensional space spanned by the first two attributes. Likewise, Figure 1.1b shows
x1 as a point and vector in 3-dimensional space, by projecting the data onto the first
three attributes. The point (5.9, 3.0, 4.2) can be seen as specifying the coefficients in
the linear combination of the standard basis vectors in R3 :
$$\mathbf{x}_1 = 5.9\,\mathbf{e}_1 + 3.0\,\mathbf{e}_2 + 4.2\,\mathbf{e}_3
= 5.9\begin{pmatrix}1\\0\\0\end{pmatrix} + 3.0\begin{pmatrix}0\\1\\0\end{pmatrix} + 4.2\begin{pmatrix}0\\0\\1\end{pmatrix}
= \begin{pmatrix}5.9\\3.0\\4.2\end{pmatrix}$$
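The same linear combination can be verified numerically; a minimal NumPy sketch (added here, not part of the example):

```python
import numpy as np

# Standard basis vectors e1, e2, e3 of R^3 are the rows of the identity matrix.
e1, e2, e3 = np.eye(3)

# Reconstruct the projected point from its coordinates along each axis.
x1 = 5.9 * e1 + 3.0 * e2 + 4.2 * e3
print(x1)                                               # [5.9 3.  4.2]

# Equivalently, sum_j x_{1j} e_j is a matrix-vector product with the identity matrix.
assert np.allclose(x1, np.eye(3) @ np.array([5.9, 3.0, 4.2]))
```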
Figure 1.1. Row x1 as a point and vector: (a) projected onto the first two attributes, x1 = (5.9, 3.0); (b) projected onto the first three attributes, x1 = (5.9, 3.0, 4.2).
Figure 1.2. Scatterplot: sepal length versus sepal width. The solid circle shows the mean point.
If all attributes are numeric, the data matrix D can be viewed either as a collection of
n row vectors (the points), or as a collection of d column vectors (the attributes):

$$\mathbf{D} = \begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1d} \\
x_{21} & x_{22} & \cdots & x_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & \cdots & x_{nd}
\end{pmatrix}
= \begin{pmatrix} -\ \mathbf{x}_1^T\ - \\ -\ \mathbf{x}_2^T\ - \\ \vdots \\ -\ \mathbf{x}_n^T\ - \end{pmatrix}
= \begin{pmatrix} | & | & & | \\ X_1 & X_2 & \cdots & X_d \\ | & | & & | \end{pmatrix}$$
Treating data instances and attributes as vectors, and the entire dataset as a matrix,
enables one to apply both geometric and algebraic methods to aid in the data mining
and analysis tasks.
Let a, b ∈ Rm be two m-dimensional vectors given as
$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix}
\qquad\qquad
\mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$
Dot Product
The dot product between a and b is defined as the scalar value
$$\mathbf{a}^T\mathbf{b} = \begin{pmatrix} a_1 & a_2 & \cdots & a_m \end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}
= a_1 b_1 + a_2 b_2 + \cdots + a_m b_m = \sum_{i=1}^{m} a_i b_i$$
Length
The Euclidean norm or length of a vector a ∈ Rm is defined as
$$\|\mathbf{a}\| = \sqrt{\mathbf{a}^T\mathbf{a}} = \sqrt{a_1^2 + a_2^2 + \cdots + a_m^2} = \sqrt{\sum_{i=1}^{m} a_i^2}$$

The unit vector in the direction of a is given as u = a/‖a‖.
By definition u has length ‖u‖ = 1, and it is also called a normalized vector, which can
be used in lieu of a in some analysis tasks.
The Euclidean norm is a special case of a general class of norms, known as
Lp-norm, defined as

$$\|\mathbf{a}\|_p = \big(|a_1|^p + |a_2|^p + \cdots + |a_m|^p\big)^{1/p} = \left(\sum_{i=1}^{m} |a_i|^p\right)^{1/p}$$

for any p ≠ 0. Thus, the Euclidean norm corresponds to the case when p = 2.
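As an illustrative sketch (added here, with arbitrary example vectors, assuming NumPy is available), the dot product, Euclidean norm, unit vector, and general Lp-norm can be computed directly from these definitions:

```python
import numpy as np

a = np.array([5.0, 3.0])
b = np.array([1.0, 4.0])

dot = a @ b                        # a^T b = sum_i a_i b_i
length = np.sqrt(a @ a)            # Euclidean norm ||a|| (same as np.linalg.norm(a))
u = a / length                     # normalized (unit) vector, with ||u|| = 1

def lp_norm(v, p):
    """General Lp-norm: (sum_i |v_i|^p)^(1/p), for p != 0."""
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

print(dot, length, np.linalg.norm(u), lp_norm(a, 3))
```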
Distance
From the Euclidean norm we can define the Euclidean distance between a and b, as
follows
$$\delta(\mathbf{a},\mathbf{b}) = \|\mathbf{a}-\mathbf{b}\| = \sqrt{(\mathbf{a}-\mathbf{b})^T(\mathbf{a}-\mathbf{b})} = \sqrt{\sum_{i=1}^{m}(a_i - b_i)^2} \tag{1.1}$$

Thus, the length of a vector is simply its distance from the zero vector 0, all of whose
elements are 0, that is, ‖a‖ = ‖a − 0‖ = δ(a, 0).
From the general Lp-norm we can define the corresponding Lp-distance function,
given as follows
$$\delta_p(\mathbf{a},\mathbf{b}) = \|\mathbf{a}-\mathbf{b}\|_p \tag{1.2}$$
Angle
The cosine of the smallest angle between vectors a and b, also called the cosine
similarity, is given as
$$\cos\theta = \frac{\mathbf{a}^T\mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|}
= \left(\frac{\mathbf{a}}{\|\mathbf{a}\|}\right)^{T}\left(\frac{\mathbf{b}}{\|\mathbf{b}\|}\right) \tag{1.3}$$

Thus, the cosine of the angle between a and b is given as the dot product of the unit
vectors a/‖a‖ and b/‖b‖.
The Cauchy–Schwartz inequality states that for any vectors a and b in Rm

$$|\mathbf{a}^T\mathbf{b}| \le \|\mathbf{a}\| \cdot \|\mathbf{b}\|$$

It follows immediately that

$$-1 \le \cos\theta \le 1$$

Figure 1.3. Distance and angle. Unit vectors are shown in gray.
Because the smallest angle θ ∈ [0◦ , 180◦ ] and because cos θ ∈ [−1, 1], the cosine similar-
ity value ranges from +1, corresponding to an angle of 0◦ , to −1, corresponding to an
angle of 180◦ (or π radians).
Orthogonality
Two vectors a and b are said to be orthogonal if and only if aT b = 0, which in turn
implies that cos θ = 0, that is, the angle between them is 90◦ or π/2 radians. In this case,
we say that they have no similarity.
Example 1.3 (Distance and Angle). Figure 1.3 shows the two vectors
$$\mathbf{a} = \begin{pmatrix}5\\3\end{pmatrix} \qquad \text{and} \qquad \mathbf{b} = \begin{pmatrix}1\\4\end{pmatrix}$$

The distance between a and b using Eq. (1.2) for the Lp-norm with p = 3 is given as

$$\|\mathbf{a}-\mathbf{b}\|_3 = \big\|(4,-1)^T\big\|_3 = \big(|4|^3 + |{-1}|^3\big)^{1/3} = (65)^{1/3} = 4.02$$
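The numbers in Example 1.3 are easy to re-derive numerically; a minimal NumPy sketch (added here) computes the Euclidean (L2) and L3 distances between a and b, and the cosine of the angle between them:

```python
import numpy as np

a = np.array([5.0, 3.0])
b = np.array([1.0, 4.0])

d2 = np.linalg.norm(a - b)                      # Euclidean distance, Eq. (1.1): sqrt(17) ~ 4.12
d3 = np.sum(np.abs(a - b) ** 3) ** (1.0 / 3.0)  # L3 distance, Eq. (1.2): 65**(1/3) ~ 4.02
cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))  # Eq. (1.3): ~ 0.707, i.e. theta = 45 degrees

print(round(d2, 2), round(d3, 2), round(cos_theta, 3))
```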
Mean
The mean of the data matrix D is the vector obtained as the average of all the row-
vectors:
$$\text{mean}(\mathbf{D}) = \boldsymbol{\mu} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i$$
Total Variance
The total variance of the data matrix D is the average squared distance of each point
from the mean:
$$var(\mathbf{D}) = \frac{1}{n}\sum_{i=1}^{n}\delta(\mathbf{x}_i,\boldsymbol{\mu})^2 = \frac{1}{n}\sum_{i=1}^{n}\|\mathbf{x}_i - \boldsymbol{\mu}\|^2 \tag{1.4}$$
Expanding the squared norm and simplifying, we obtain

$$var(\mathbf{D}) = \frac{1}{n}\sum_{i=1}^{n}\big(\|\mathbf{x}_i\|^2 - 2\,\mathbf{x}_i^T\boldsymbol{\mu} + \|\boldsymbol{\mu}\|^2\big)$$
$$= \frac{1}{n}\left(\sum_{i=1}^{n}\|\mathbf{x}_i\|^2 - 2n\,\boldsymbol{\mu}^T\Big(\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i\Big) + n\,\|\boldsymbol{\mu}\|^2\right)$$
$$= \frac{1}{n}\left(\sum_{i=1}^{n}\|\mathbf{x}_i\|^2 - 2n\,\boldsymbol{\mu}^T\boldsymbol{\mu} + n\,\|\boldsymbol{\mu}\|^2\right)$$
$$= \frac{1}{n}\left(\sum_{i=1}^{n}\|\mathbf{x}_i\|^2\right) - \|\boldsymbol{\mu}\|^2$$
The total variance is thus the difference between the average of the squared magnitude
of the data points and the squared magnitude of the mean (average of the points).
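A minimal NumPy sketch (added here, on a toy data matrix) computes the mean vector and checks that the direct and simplified forms of the total variance agree:

```python
import numpy as np

def total_variance(D):
    """Total variance of data matrix D: average squared distance of the points from the mean."""
    mu = D.mean(axis=0)                                      # mean(D): average of the row vectors
    direct = np.mean(np.sum((D - mu) ** 2, axis=1))          # (1/n) sum_i ||x_i - mu||^2, Eq. (1.4)
    simplified = np.mean(np.sum(D ** 2, axis=1)) - mu @ mu   # (1/n) sum_i ||x_i||^2 - ||mu||^2
    assert np.isclose(direct, simplified)
    return direct

D = np.array([[5.9, 3.0], [6.9, 3.1], [6.6, 2.9], [4.6, 3.2]])   # toy 4 x 2 data matrix
print(D.mean(axis=0), total_variance(D))
```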
Often in data mining we need to project a point or vector onto another vector, for
example, to obtain a new point after a change of the basis vectors. Let a, b ∈ Rm be two
m-dimensional vectors. An orthogonal decomposition of the vector b in the direction
of another vector a, illustrated in Figure 1.4, splits b into a component parallel to a and
a component perpendicular to a:
Figure 1.4. Orthogonal projection.
$$\mathbf{b} = \mathbf{b}_{\parallel} + \mathbf{b}_{\perp} = \mathbf{p} + \mathbf{r} \tag{1.6}$$

where the parallel component p (the orthogonal projection of b onto a) and the
perpendicular component r are given as

$$\mathbf{p} = \mathbf{b}_{\parallel} = \left(\frac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}}\right)\mathbf{a}
\qquad\qquad
\mathbf{r} = \mathbf{b}_{\perp} = \mathbf{b} - \mathbf{p}$$
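The decomposition is straightforward to compute; a minimal NumPy sketch (added here) projects b onto the direction of a and verifies both properties:

```python
import numpy as np

def orthogonal_decomposition(b, a):
    """Split b into p (parallel to a) and r (perpendicular to a), so that b = p + r."""
    p = ((a @ b) / (a @ a)) * a      # orthogonal projection of b onto a
    r = b - p                        # residual component
    return p, r

a = np.array([5.0, 3.0])
b = np.array([1.0, 4.0])
p, r = orthogonal_decomposition(b, a)

assert np.isclose(a @ r, 0.0)        # r is orthogonal to a
assert np.allclose(p + r, b)         # the decomposition reconstructs b, Eq. (1.6)
print(p, r)
```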
Example 1.4. Restricting the Iris dataset to the first two dimensions, sepal length
and sepal width, the mean point is given as
$$\text{mean}(\mathbf{D}) = \begin{pmatrix} 5.843 \\ 3.054 \end{pmatrix}$$

which is shown as the black circle in Figure 1.2. The corresponding centered data
is shown in Figure 1.5, and the total variance is var(D) = 0.868 (centering does not
change this value).

Figure 1.5. The centered Iris data (sepal length versus sepal width) projected onto the line ℓ.
Figure 1.5 shows the projection of each point onto the line ℓ, which is the line that
maximizes the separation between the class iris-setosa (squares) and the other
two classes (circles and triangles). The line ℓ is given as the set of all points (x1, x2)^T
satisfying the constraint

$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c \begin{pmatrix} -2.15 \\ 2.75 \end{pmatrix}$$

for all scalars c ∈ R.
Given the data matrix D, we are often interested in the linear combinations of the rows (points) or the
columns (attributes). For instance, different linear combinations of the original d
attributes yield new derived attributes, which play a key role in feature extraction and
dimensionality reduction.
Given any set of vectors v1 , v2 , . . . , vk in an m-dimensional vector space Rm , their
linear combination is given as
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$$
where ci ∈ R are scalar values. The set of all possible linear combinations of the k
vectors is called the span, denoted as span(v1 , . . . , vk ), which is itself a vector space
being a subspace of Rm . If span(v1, . . . , vk ) = Rm , then we say that v1 , . . . , vk is a spanning
set for Rm .
The set of all linear combinations of the n points, that is, their span, is called the row
space of D:

$$row(\mathbf{D}) = span(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$$

By definition row(D) is a subspace of Rd. Note also that the row space of D is the
column space of D^T:

$$row(\mathbf{D}) = col(\mathbf{D}^T)$$
Linear Independence
We say that the vectors v1 , . . . , vk are linearly dependent if at least one vector can be
written as a linear combination of the others. Alternatively, the k vectors are linearly
dependent if there are scalars c1 , c2 , . . . , ck , at least one of which is not zero, such that
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$

On the other hand, the vectors v1, . . . , vk are linearly independent if and only if

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0} \;\text{ implies }\; c_1 = c_2 = \cdots = c_k = 0$$
Simply put, a set of vectors is linearly independent if none of them can be written as a
linear combination of the other vectors in the set.
Given a subspace S of Rm, a basis for S is a set of linearly independent vectors that spans S.
Any two bases for S must have the same number of vectors, and the number of vectors
in a basis for S is called the dimension of S, denoted as dim(S). Because S is a subspace
of Rm, we must have dim(S) ≤ m.
It is a remarkable fact that, for any matrix, the dimension of its row and column
space is the same, and this dimension is also called the rank of the matrix. For the data
matrix D ∈ Rn×d, we have rank(D) ≤ min(n, d), which follows from the fact that the
column space can have dimension at most d, and the row space can have dimension at
most n. Thus, even though the data points are ostensibly in a d-dimensional attribute
space (the extrinsic dimensionality), if rank(D) < d, then the data points reside in a
lower dimensional subspace of Rd, and in this case rank(D) gives an indication about
the intrinsic dimensionality of the data. In fact, with dimensionality reduction methods
it is often possible to approximate D ∈ Rn×d with a derived data matrix D′ ∈ Rn×k,
which has much lower dimensionality, that is, k ≪ d. In this case k may reflect the
“true” intrinsic dimensionality of the data.
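In practice, one way to probe the intrinsic dimensionality (a sketch added here; the text itself does not prescribe a method at this point) is to compute the rank of the data matrix, or to inspect how quickly its singular values decay:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: points that ostensibly live in R^3 but are generated from a 2-dimensional subspace.
basis = rng.standard_normal((2, 3))
D = rng.standard_normal((100, 2)) @ basis          # n = 100 points, extrinsic dimensionality d = 3

print(np.linalg.matrix_rank(D))                    # rank(D) = 2 <= min(n, d)

# Singular values of the centered data: the number of "large" values suggests
# the intrinsic dimensionality; here the third value is numerically zero.
singular_values = np.linalg.svd(D - D.mean(axis=0), compute_uv=False)
print(np.round(singular_values, 4))
```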
Example 1.5. The line ℓ in Figure 1.5 is given as ℓ = span((−2.15, 2.75)^T), with
dim(ℓ) = 1. After normalization, we obtain the orthonormal basis for ℓ as the unit
vector

$$\frac{1}{\sqrt{12.19}}\begin{pmatrix} -2.15 \\ 2.75 \end{pmatrix} = \begin{pmatrix} -0.615 \\ 0.788 \end{pmatrix}$$
1.4 DATA: PROBABILISTIC VIEW

Table 1.2. Attribute sepal length (in centimeters) for the n = 150 flowers in the Iris dataset.
5.9 6.9 6.6 4.6 6.0 4.7 6.5 5.8 6.7 6.7 5.1 5.1 5.7 6.1 4.9
5.0 5.0 5.7 5.0 7.2 5.9 6.5 5.7 5.5 4.9 5.0 5.5 4.6 7.2 6.8
5.4 5.0 5.7 5.8 5.1 5.6 5.8 5.1 6.3 6.3 5.6 6.1 6.8 7.3 5.6
4.8 7.1 5.7 5.3 5.7 5.7 5.6 4.4 6.3 5.4 6.3 6.9 7.7 6.1 5.6
6.1 6.4 5.0 5.1 5.6 5.4 5.8 4.9 4.6 5.2 7.9 7.7 6.1 5.5 4.6
4.7 4.4 6.2 4.8 6.0 6.2 5.0 6.4 6.3 6.7 5.0 5.9 6.7 5.4 6.3
4.8 4.4 6.4 6.2 6.0 7.4 4.9 7.0 5.5 6.3 6.8 6.1 6.5 6.7 6.7
4.8 4.9 6.9 4.5 4.3 5.2 5.0 6.4 5.2 5.8 5.5 7.6 6.3 6.4 6.3
5.8 5.0 6.7 6.0 5.1 4.8 5.7 5.1 6.6 6.4 5.2 6.4 7.7 5.8 4.9
5.4 5.1 6.0 6.5 5.5 7.2 6.9 6.2 6.5 6.0 5.4 5.5 6.7 7.7 5.1
The probabilistic view of the data assumes that each numeric attribute X is a random
variable, defined as a function that assigns a real number to each outcome of an exper-
iment (i.e., some process of observation or measurement). Formally, X is a function
X : O → R, where O, the domain of X, is the set of all possible outcomes of the experi-
ment, also called the sample space, and R, the range of X, is the set of real numbers. If
the outcomes are numeric, and represent the observed values of the random variable,
then X : O → O is simply the identity function: X (v) = v for all v ∈ O. The distinc-
tion between the outcomes and the value of the random variable is important, as we
may want to treat the observed values differently depending on the context, as seen in
Example 1.6.
A random variable X is called a discrete random variable if it takes on only a finite
or countably infinite number of values in its range, whereas X is called a continuous
random variable if it can take on any value in its range.
Example 1.6. Consider the sepal length attribute (X 1 ) for the Iris dataset in
Table 1.1. All n = 150 values of this attribute are shown in Table 1.2, which lie in
the range [4.3, 7.9], with centimeters as the unit of measurement. Let us assume that
these constitute the set of all possible outcomes O.
By default, we can consider the attribute X 1 to be a continuous random variable,
given as the identity function X 1 (v) = v, because the outcomes (sepal length values)
are all numeric.
On the other hand, if we want to distinguish between Iris flowers with short and
long sepal lengths, with long being, say, a length of 7 cm or more, we can define a
discrete random variable A as follows:
$$A(v) = \begin{cases} 0 & \text{if } v < 7 \\ 1 & \text{if } v \ge 7 \end{cases}$$
In this case the domain of A is [4.3, 7.9], and its range is {0, 1}. Thus, A assumes
nonzero probability only at the discrete values 0 and 1.
Example 1.7 (Bernoulli and Binomial Distribution). In Example 1.6, A was defined
as discrete random variable representing long sepal length. From the sepal length
data in Table 1.2 we find that only 13 Irises have sepal length of at least 7 cm. We can
thus estimate the probability mass function of A as follows:
$$f(1) = P(A = 1) = \frac{13}{150} = 0.087 = p$$

and

$$f(0) = P(A = 0) = \frac{137}{150} = 0.913 = 1 - p$$
In this case we say that A has a Bernoulli distribution with parameter p ∈ [0, 1], which
denotes the probability of a success, that is, the probability of picking an Iris with a
long sepal length at random from the set of all points. On the other hand, 1 − p is the
probability of a failure, that is, of not picking an Iris with long sepal length.
Let us consider another discrete random variable B, denoting the number of
Irises with long sepal length in m independent Bernoulli trials with probability of
success p. In this case, B takes on the discrete values 0, 1, . . . , m, and its probability mass
function is given by the Binomial distribution

$$f(k) = P(B = k) = \binom{m}{k} p^k (1-p)^{m-k}$$
The formula can be understood as follows. There are $\binom{m}{k}$ ways of picking k long sepal
length Irises out of the m trials. For each selection of k long sepal length Irises, the
total probability of the k successes is $p^k$, and the total probability of the m − k failures is
$(1-p)^{m-k}$. For example, because p = 0.087 from above, the probability of observing
exactly k = 2 Irises with long sepal length in m = 10 trials is given as
$$f(2) = P(B = 2) = \binom{10}{2}(0.087)^2(0.913)^8 = 0.164$$
Figure 1.6 shows the full probability mass function for different values of k for m = 10.
Because p is quite small, the probability of k successes in so few trials falls off
rapidly as k increases, becoming practically zero for values of k ≥ 6.
Figure 1.6. Binomial distribution: probability mass function (m = 10, p = 0.087).
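The estimates in Example 1.7 can be reproduced with a few lines of Python; the sketch below is added here for illustration and assumes the 150 values of Table 1.2 are available in a list named `sepal_length` (a name introduced only for this sketch):

```python
import math

def binomial_pmf(k, m, p):
    """P(B = k): probability of k successes in m independent Bernoulli trials with success probability p."""
    return math.comb(m, k) * p**k * (1 - p) ** (m - k)

def estimate_long_sepal_probability(sepal_length, threshold=7.0):
    """Estimate p = P(A = 1), the fraction of Irises with sepal length >= threshold."""
    n_long = sum(1 for v in sepal_length if v >= threshold)
    return n_long / len(sepal_length)

# With the Table 1.2 values loaded into sepal_length, we would expect:
# p_hat = estimate_long_sepal_probability(sepal_length)    # ~ 13/150 = 0.087
# binomial_pmf(2, 10, p_hat)                                # ~ 0.164, as in Example 1.7
print(binomial_pmf(2, 10, 0.087))
```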
When X is a continuous random variable, its probability distribution is described via a
probability density function f, which specifies the probability of observing a value of X
in any given interval [a, b]:

$$P\big(X \in [a, b]\big) = \int_{a}^{b} f(x)\, dx$$
As before, the density function f must satisfy the basic laws of probability:

$$f(x) \ge 0 \;\text{ for all } x \in \mathbb{R}$$

and

$$\int_{-\infty}^{\infty} f(x)\, dx = 1$$
We can interpret the density f(x) by considering the probability of X taking a value in
a small interval of width 2ǫ centered at x, namely [x − ǫ, x + ǫ]:

$$P\big(X \in [x-\epsilon, x+\epsilon]\big) = \int_{x-\epsilon}^{x+\epsilon} f(x)\, dx \simeq 2\epsilon \cdot f(x)$$

so that

$$f(x) \simeq \frac{P\big(X \in [x-\epsilon, x+\epsilon]\big)}{2\epsilon} \tag{1.8}$$
f (x) thus gives the probability density at x, given as the ratio of the probability mass
to the width of the interval, that is, the probability mass per unit distance. Thus, it is
important to note that P(X = x) ≠ f(x).
Even though the probability density function f (x) does not specify the probability
P(X = x), it can be used to obtain the relative probability of one value x 1 over another
x 2 because for a given ǫ > 0, by (1.8), we have
$$\frac{P(X \in [x_1-\epsilon,\, x_1+\epsilon])}{P(X \in [x_2-\epsilon,\, x_2+\epsilon])} \simeq \frac{2\epsilon \cdot f(x_1)}{2\epsilon \cdot f(x_2)} = \frac{f(x_1)}{f(x_2)} \tag{1.9}$$
Thus, if f (x 1 ) is larger than f (x 2 ), then values of X close to x 1 are more probable than
values close to x 2 , and vice versa.
Example 1.8 (Normal Distribution). Consider again the sepal length values from
the Iris dataset, as shown in Table 1.2. Let us assume that these values follow a
Gaussian or normal density function, given as
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ \frac{-(x-\mu)^2}{2\sigma^2} \right\}$$
There are two parameters of the normal density distribution, namely, µ, which rep-
resents the mean value, and σ 2 , which represents the variance of the values (these
parameters are discussed in Chapter 2). Figure 1.7 shows the characteristic “bell”
shape plot of the normal distribution. The parameters, µ = 5.84 and σ 2 = 0.681, were
estimated directly from the data for sepal length in Table 1.2.
Whereas $f(x = \mu) = f(5.84) = \frac{1}{\sqrt{2\pi \cdot 0.681}} \exp\{0\} = 0.483$, we emphasize that the
probability of observing X = µ is zero, that is, P(X = µ) = 0. Thus, P(X = x) is not
given by f (x), rather, P(X = x) is given as the area under the curve for an infinitesi-
mally small interval [x − ǫ, x + ǫ] centered at x, with ǫ > 0. Figure 1.7 illustrates this
with the shaded region centered at µ = 5.84. From Eq. (1.8), we have

$$P\big(X \in [\mu-\epsilon, \mu+\epsilon]\big) \simeq 2\epsilon \cdot f(\mu) = 0.966\epsilon$$

As ǫ → 0, we get P(X = µ) → 0. However, based on Eq. (1.9) we can claim that the
probability of observing values close to the mean value µ = 5.84 is 2.69 times the
probability of observing values close to x = 7, as
$$\frac{f(5.84)}{f(7)} = \frac{0.483}{0.18} = 2.69$$
Figure 1.7. Normal distribution: probability density function (µ = 5.84, σ 2 = 0.681).
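The density values quoted in Example 1.8 can be checked with SciPy; a minimal sketch added here (note that scipy.stats.norm is parameterized by the standard deviation, so the square root of the variance is passed):

```python
import numpy as np
from scipy.stats import norm

mu, var = 5.84, 0.681
rv = norm(loc=mu, scale=np.sqrt(var))    # normal density with the parameters of Example 1.8

f_mu = rv.pdf(mu)                        # ~ 0.483, the density at the mean
f_7 = rv.pdf(7.0)                        # ~ 0.18
print(round(f_mu, 3), round(f_7, 3), round(f_mu / f_7, 2))   # ratio ~ 2.69
```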
For any random variable X, whether discrete or continuous, the cumulative distribution
function (CDF) F : R → [0, 1] gives the probability of observing a value at most x, that is,
F(x) = P(X ≤ x) for all −∞ < x < ∞.

Example 1.9 (Cumulative Distribution Function). Figure 1.8 shows the cumulative
distribution function for the binomial distribution in Figure 1.6. It has the character-
istic step shape (right continuous, non-decreasing), as expected for a discrete random
variable. F(x) has the same value F(k) for all x ∈ [k, k + 1) with 0 ≤ k < m, where m
is the number of trials and k is the number of successes. The closed (filled) and open
circles demarcate the corresponding closed and open interval [k, k + 1). For instance,
F(x) = 0.404 = F(0) for all x ∈ [0, 1).
Figure 1.9 shows the cumulative distribution function for the normal density
function shown in Figure 1.7. As expected, for a continuous random variable, the
CDF is also continuous, and non-decreasing. Because the normal distribution is
symmetric about the mean, we have F(µ) = P(X ≤ µ) = 0.5.
Figure 1.8. Cumulative distribution function for the binomial distribution.
Figure 1.9. Cumulative distribution function for the normal distribution.
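Both CDFs in Example 1.9 can also be evaluated numerically; a minimal sketch added here, using SciPy's binomial and normal distributions:

```python
import numpy as np
from scipy.stats import binom, norm

m, p = 10, 13 / 150
F_binom = binom(m, p).cdf(0)              # F(0) = P(B <= 0) ~ 0.404, the first step in Figure 1.8

mu, var = 5.84, 0.681
F_norm = norm(mu, np.sqrt(var)).cdf(mu)   # F(mu) = 0.5 by symmetry, as in Figure 1.9

print(round(F_binom, 3), round(F_norm, 2))
```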
Instead of considering each attribute as a random variable, we can also perform pair-
wise analysis by considering a pair of attributes, X 1 and X 2 , as a bivariate random
variable:
$$\mathbf{X} = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$$

X : O → R² is a function that assigns to each outcome in the sample space a pair of
real numbers, that is, a 2-dimensional vector (x1, x2)^T ∈ R². As in the univariate case,
if the outcomes are numeric, then the default is to assume X to be the identity
function.
If X1 and X2 are both continuous, then X has a joint probability density function f,
such that

$$P(\mathbf{X} \in W) = \iint_{\mathbf{x} \in W} f(x_1, x_2)\, dx_1\, dx_2$$

where W ⊂ R2 is some subset of the 2-dimensional space of reals. f must also satisfy
the following two conditions:
$$f(\mathbf{x}) = f(x_1, x_2) \ge 0 \quad\text{for all } -\infty < x_1, x_2 < \infty$$

$$\int_{\mathbb{R}^2} f(\mathbf{x})\, d\mathbf{x} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x_1, x_2)\, dx_1\, dx_2 = 1$$
As in the univariate case, the probability mass P(x) = P((x1, x2)^T) = 0 for any
particular point x. However, we can use f to compute the probability density at x.
Consider the square region W = ([x1 − ǫ, x1 + ǫ], [x2 − ǫ, x2 + ǫ]), that is, a 2-dimensional
window of width 2ǫ centered at x = (x1, x2)^T. The probability of X falling in this window is

$$P(\mathbf{X} \in W) = P\big(\mathbf{X} \in ([x_1-\epsilon, x_1+\epsilon],[x_2-\epsilon, x_2+\epsilon])\big)
= \int_{x_1-\epsilon}^{x_1+\epsilon}\int_{x_2-\epsilon}^{x_2+\epsilon} f(x_1, x_2)\, dx_1\, dx_2
\simeq 2\epsilon \cdot 2\epsilon \cdot f(x_1, x_2)$$

which implies that the probability density at x can be approximated as

$$f(x_1, x_2) \simeq \frac{P(\mathbf{X} \in W)}{(2\epsilon)^2}$$
The relative probability of one value (a1 , a2 ) versus another (b1 , b2 ) can therefore be
computed via the probability density function:
$$\frac{P\big(\mathbf{X} \in ([a_1-\epsilon, a_1+\epsilon],[a_2-\epsilon, a_2+\epsilon])\big)}{P\big(\mathbf{X} \in ([b_1-\epsilon, b_1+\epsilon],[b_2-\epsilon, b_2+\epsilon])\big)}
\simeq \frac{(2\epsilon)^2 \cdot f(a_1, a_2)}{(2\epsilon)^2 \cdot f(b_1, b_2)} = \frac{f(a_1, a_2)}{f(b_1, b_2)}$$
Example 1.10 (Bivariate Distributions). Consider the sepal length and sepal
width attributes in the Iris dataset, plotted in Figure 1.2. Let A denote the Bernoulli
random variable corresponding to long sepal length (at least 7 cm), as defined in
Example 1.7.
Define another Bernoulli random variable B corresponding to long sepal width,
say, at least 3.5 cm. Let X = (A, B)^T be the discrete bivariate random variable; then the
joint probability mass function of X can be estimated from the data as follows:
$$f(0, 0) = P(A = 0, B = 0) = \frac{116}{150} = 0.773$$
$$f(0, 1) = P(A = 0, B = 1) = \frac{21}{150} = 0.140$$
$$f(1, 0) = P(A = 1, B = 0) = \frac{10}{150} = 0.067$$
$$f(1, 1) = P(A = 1, B = 1) = \frac{3}{150} = 0.020$$
Figure 1.10 shows a plot of this probability mass function.
Treating attributes X1 and X2 in the Iris dataset (see Table 1.1) as continuous
random variables, we can define a continuous bivariate random variable X = (X1, X2)^T.
Assuming that X follows a bivariate normal distribution, its joint probability density
function is given as

$$f(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{2\pi\sqrt{|\boldsymbol{\Sigma}|}} \exp\left\{ -\frac{(\mathbf{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})}{2} \right\}$$
Here µ and Σ are the parameters of the bivariate normal distribution, representing
the 2-dimensional mean vector and covariance matrix, which are discussed in detail
Figure 1.10. Joint probability mass function: X1 (long sepal length), X2 (long sepal width).
Figure 1.11. Bivariate normal density function: sepal length versus sepal width.
in Chapter 2. Further, |Σ| denotes the determinant of Σ. The plot of the bivariate
normal density is given in Figure 1.11, with mean

$$\boldsymbol{\mu} = (5.843, 3.054)^T$$

and covariance matrix Σ estimated from the data.
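As an illustration of the bivariate view (a sketch added here; the variable names `sepal_length` and `sepal_width` and the covariance values are assumptions for illustration, not from the text), the joint probability mass function of Example 1.10 is just a normalized 2 × 2 contingency table, and a bivariate normal density can be evaluated with SciPy:

```python
import numpy as np
from scipy.stats import multivariate_normal

def joint_pmf(sepal_length, sepal_width, len_thresh=7.0, wid_thresh=3.5):
    """Estimate f(a, b) = P(A = a, B = b) for the two Bernoulli indicators of Example 1.10."""
    A = (np.asarray(sepal_length) >= len_thresh).astype(int)
    B = (np.asarray(sepal_width) >= wid_thresh).astype(int)
    pmf = np.zeros((2, 2))
    for a, b in zip(A, B):
        pmf[a, b] += 1
    return pmf / len(A)          # entries ~ [[0.773, 0.140], [0.067, 0.020]] for the Iris data

# Bivariate normal density with the mean reported in the text; the covariance matrix below is
# only an assumed placeholder -- in the book it is estimated from the data (see Chapter 2).
mu = np.array([5.843, 3.054])
Sigma = np.array([[0.681, 0.0], [0.0, 0.19]])
print(multivariate_normal(mean=mu, cov=Sigma).pdf(mu))
```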
Statistical Independence
Two random variables X 1 and X 2 are said to be (statistically) independent if, for every
W1 ⊂ R and W2 ⊂ R, we have
P(X 1 ∈ W1 and X 2 ∈ W2 ) = P(X 1 ∈ W1 ) · P(X 2 ∈ W2 )
Furthermore, if X 1 and X 2 are independent, then the following two conditions are also
satisfied:
$$F(\mathbf{x}) = F(x_1, x_2) = F_1(x_1) \cdot F_2(x_2)$$
$$f(\mathbf{x}) = f(x_1, x_2) = f_1(x_1) \cdot f_2(x_2)$$

where F1, F2 are the cumulative distribution functions, and f1, f2 the probability mass
or density functions, of X1 and X2, respectively.
In the general d-dimensional case, the multivariate random variable X = (X1, X2, . . . , Xd)^T
is a function that assigns a vector of d real numbers to each outcome in the sample space.
If all Xj are discrete, then X is jointly discrete and its joint probability mass function is
given as

$$f(\mathbf{x}) = P(\mathbf{X} = \mathbf{x})$$
$$f(x_1, x_2, \ldots, x_d) = P(X_1 = x_1, X_2 = x_2, \ldots, X_d = x_d)$$
If all X j are continuous, then X is jointly continuous and its joint probability density
function is given as
$$P(\mathbf{X} \in W) = \int\cdots\int_{\mathbf{x} \in W} f(\mathbf{x})\, d\mathbf{x}$$
$$P\big((X_1, X_2, \ldots, X_d)^T \in W\big) = \int\cdots\int_{(x_1, x_2, \ldots, x_d)^T \in W} f(x_1, x_2, \ldots, x_d)\, dx_1\, dx_2 \cdots dx_d$$
The joint cumulative distribution function of X is given as

$$F(\mathbf{x}) = P(\mathbf{X} \le \mathbf{x})$$
$$F(x_1, x_2, \ldots, x_d) = P(X_1 \le x_1, X_2 \le x_2, \ldots, X_d \le x_d)$$
As in the bivariate case, the random variables X1, X2, . . . , Xd are said to be independent
if and only if the joint CDF and the joint probability mass or density function factorize
over the individual variables:

$$F(\mathbf{x}) = F(x_1, \ldots, x_d) = F_1(x_1) \cdot F_2(x_2) \cdot \ldots \cdot F_d(x_d)$$
$$f(\mathbf{x}) = f(x_1, \ldots, x_d) = f_1(x_1) \cdot f_2(x_2) \cdot \ldots \cdot f_d(x_d) \tag{1.11}$$
The probability mass or density function of a random variable X may follow some
known form, or as is often the case in data analysis, it may be unknown. When the
probability function is not known, it may still be convenient to assume that the values
follow some known distribution, based on the characteristics of the data. However,
even in this case, the parameters of the distribution may still be unknown. Thus, in
general, either the parameters, or the entire distribution, may have to be estimated
from the data.
In statistics, the word population is used to refer to the set or universe of all entities
under study. Usually we are interested in certain characteristics or parameters of the
entire population (e.g., the mean age of all computer science students in the United
States). However, looking at the entire population may not be feasible or may be
too expensive. Instead, we try to make inferences about the population parameters by
drawing a random sample from the population, and by computing appropriate statis-
tics from the sample that give estimates of the corresponding population parameters of
interest.
Univariate Sample
Given a random variable X, a random sample of size n from X is defined as a set of n
independent and identically distributed (IID) random variables S1 , S2 , . . . , Sn , that is, all
of the Si ’s are statistically independent of each other, and follow the same probability
mass or density function as X.
If we treat attribute X as a random variable, then each of the observed values of
X, namely, x i (1 ≤ i ≤ n), are themselves treated as identity random variables, and the
observed data is assumed to be a random sample drawn from X. That is, all x i are
considered to be mutually independent and identically distributed as X. By (1.11) their
joint probability function is given as
$$f(x_1, \ldots, x_n) = \prod_{i=1}^{n} f_X(x_i)$$
Multivariate Sample
For multivariate parameter estimation, the n data points xi (with 1 ≤ i ≤ n) constitute
a d-dimensional multivariate random sample drawn from the vector random vari-
able X = (X 1 , X 2 , . . . , X d ). That is, xi are assumed to be independent and identically
distributed, and thus their joint distribution is given as
$$f(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n) = \prod_{i=1}^{n} f_{\mathbf{X}}(\mathbf{x}_i) \tag{1.12}$$
Statistic
We can estimate a parameter of the population by defining an appropriate sample
statistic, which is defined as a function of the sample. More precisely, let S1, S2, . . . , Sm denote
a random sample of size m drawn from a (multivariate) random variable X. A statistic
θ̂ is a function θ̂ : (S1, S2, . . . , Sm) → R. The statistic is an estimate of the corresponding
population parameter θ . As such, the statistic θ̂ is itself a random variable. If we use
the value of a statistic to estimate a population parameter, this value is called a point
estimate of the parameter, and the statistic is called an estimator of the parameter. In
Chapter 2 we will study different estimators for population parameters that reflect the
location (or centrality) and dispersion of values.
Example 1.11 (Sample Mean). Consider attribute sepal length (X 1 ) in the Iris
dataset, whose values are shown in Table 1.2. Assume that the mean value of X 1
is not known. Let us assume that the observed values {x i }ni=1 constitute a random
sample drawn from X 1 .
The sample mean is a statistic, defined as the average
$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

For the sepal length values in Table 1.2, this yields the point estimate µ̂ = 5.843 (the
mean value used, rounded to 5.84, in Example 1.8).
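A final sketch (added here for illustration): the sample mean as a function of the observed values, together with a bootstrap resample to emphasize that the statistic is itself a random variable whose value changes from sample to sample. Synthetic stand-in data is used so the sketch is self-contained.

```python
import numpy as np

def sample_mean(x):
    """Sample mean statistic: the average of the observed sample values."""
    x = np.asarray(x, dtype=float)
    return x.sum() / len(x)

# With the 150 sepal length values of Table 1.2 in a list, sample_mean(...) would give ~ 5.843.
rng = np.random.default_rng(1)
sample = rng.normal(loc=5.84, scale=np.sqrt(0.681), size=150)   # synthetic stand-in data
resample = rng.choice(sample, size=150, replace=True)           # a bootstrap resample

print(round(sample_mean(sample), 3), round(sample_mean(resample), 3))
```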
Data mining comprises the core algorithms that enable one to gain fundamental
insights and knowledge from massive data. It is an interdisciplinary field merging
concepts from allied areas such as database systems, statistics, machine learning, and
pattern recognition. In fact, data mining is part of a larger knowledge discovery pro-
cess, which includes pre-processing tasks such as data extraction, data cleaning, data
fusion, data reduction and feature construction, as well as post-processing steps such