
Data Mining: Data

Lecture Notes for Chapter 2

Introduction to Data Mining, 2nd Edition

by Tan, Steinbach, Karpatne, Kumar

Outline

● Attributes and Objects

● Types of Data

● Data Quality

● Similarity and Distance


● Data Preprocessing

What is Data?

● Collection of data objects and their attributes

● An attribute is a property or characteristic of an object
– Examples: eye color of a person, temperature, etc.
– Attribute is also known as variable, field, characteristic, dimension, or feature

● A collection of attributes describes an object
– Object is also known as record, point, case, sample, entity, or instance

Attribute Values

● Attribute values are numbers or symbols assigned to an attribute for a particular object

● Distinction between attributes and attribute values
– Same attribute can be mapped to different attribute values
◆ Example: height can be measured in feet or meters
– Different attributes can be mapped to the same set of values
◆ Example: Attribute values for ID and age are integers
– But properties of an attribute can be different than the properties of the values used to represent the attribute

Types of Attributes

● There are different types of attributes
– Nominal
◆ Examples: ID numbers, eye color, zip codes
– Ordinal
◆ Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height {tall, medium, short}
– Interval
◆ Examples: calendar dates, temperatures in Celsius or Fahrenheit
– Ratio
◆ Examples: temperature in Kelvin, length, counts, elapsed time (e.g., time to run a race)

Properties of Attribute Values

● The type of an attribute depends on which of the following properties/operations it possesses:
– Distinctness: =, ≠
– Order: <, >
– Meaningful differences: +, -
– Meaningful ratios: *, /

– Nominal attribute: distinctness
– Ordinal attribute: distinctness & order
– Interval attribute: distinctness, order & meaningful differences
– Ratio attribute: all 4 properties/operations

Difference Between Ratio and Interval

● Is it physically meaningful to say that a temperature of 10° is twice that of 5° on
– the Celsius scale?
– the Fahrenheit scale?
– the Kelvin scale?

● Consider measuring height above average
– If Bill's height is three inches above average and Bob's height is six inches above average, then would we say that Bob is twice as tall as Bill?
– Is this situation analogous to that of temperature?

This categorization of attributes is due to S. S. Stevens

Discrete and Continuous Attributes

● Discrete Attribute
– Has only a finite or countably infinite set of values
– Examples: zip codes, counts, or the set of words in a collection of documents
– Often represented as integer variables
– Note: binary attributes are a special case of discrete attributes

● Continuous Attribute
– Has real numbers as attribute values
– Examples: temperature, height, or weight
– Practically, real values can only be measured and represented using a finite number of digits
– Continuous attributes are typically represented as floating-point variables

Asymmetric Attributes

● Only presence (a non-zero attribute value) is regarded as important
◆ Words present in documents
◆ Items present in customer transactions

● If we met a friend in the grocery store, would we ever say the following?
"I see our purchases are very similar since we didn't buy most of the same things."

Key Messages for Attribute Types

● The types of operations you choose should be "meaningful" for the type of data you have
– Distinctness, order, meaningful intervals, and meaningful ratios are only four (among many possible) properties of data

– The data type you see – often numbers or strings – may not capture all the properties or may suggest properties that are not present

– Analysis may depend on these other properties of the data
◆ Many statistical analyses depend only on the distribution

– In the end, what is meaningful can be specific to the domain

Important Characteristics of Data

– Dimensionality (number of attributes)
◆ High-dimensional data brings a number of challenges

– Sparsity
◆ Only presence counts

– Resolution
◆ Patterns depend on the scale

– Size
◆ Type of analysis may depend on size of data

Types of Data Sets

● Record
– Data Matrix
– Document Data
– Transaction Data

● Graph
– World Wide Web
– Molecular Structures

● Ordered
– Spatial Data
– Temporal Data
– Sequential Data
– Genetic Sequence Data

Record Data

● Data that consists of a collection of records, each of which consists of a fixed set of attributes

Document Data

● Each document becomes a 'term' vector
– Each term is a component (attribute) of the vector
– The value of each component is the number of times the corresponding term occurs in the document
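
To make the term-vector idea concrete, here is a minimal Python sketch; the two documents and the whitespace tokenization are illustrative assumptions, not part of the original notes.

```python
# Illustrative sketch: build term-count vectors for a tiny document
# collection. Real systems would add stemming, stop-word removal, etc.
from collections import Counter

docs = ["the cat sat on the mat", "the dog chased the cat"]

# Fixed vocabulary: every distinct term in the collection, in sorted order
vocab = sorted({term for doc in docs for term in doc.split()})

def term_vector(doc):
    counts = Counter(doc.split())
    # One component per vocabulary term; value = occurrence count
    return [counts[term] for term in vocab]

for doc in docs:
    print(term_vector(doc))
```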
Transaction Data

● A special type of data, where
– Each transaction involves a set of items
– For example, consider a grocery store. The set of products purchased by a customer during one shopping trip constitutes a transaction, while the individual products that were purchased are the items.
– Can represent transaction data as record data

Graph Data

● Examples: generic graph, a molecule, and webpages

(Figure: a generic graph, the benzene molecule C6H6, and linked webpages)

Ordered Data

● Genomic sequence data

● Spatio-temporal data
– Example: average monthly temperature of land and ocean

Data Quality

● Poor data quality negatively affects many data processing efforts

● Data mining example: a classification model for detecting people who are loan risks is built using poor data
– Some credit-worthy candidates are denied loans
– More loans are given to individuals that default

Data Quality …

● What kinds of data quality problems?
● How can we detect problems with the data?
● What can we do about these problems?

● Examples of data quality problems:
– Noise and outliers
– Wrong data
– Fake data
– Missing values
– Duplicate data

Noise

● For objects, noise is an extraneous object
● For attributes, noise refers to modification of original values
– Examples: distortion of a person's voice when talking on a poor phone and "snow" on a television screen
– The original figures show two sine waves of the same magnitude and different frequencies, the waves combined, and the two sine waves with random noise
◆ The magnitude and shape of the original signal is distorted

Outliers

● Outliers are data objects with characteristics that are considerably different from most of the other data objects in the data set
– Case 1: Outliers are noise that interferes with data analysis
– Case 2: Outliers are the goal of our analysis
◆ Credit card fraud
◆ Intrusion detection

● Causes?

Missing Values

● Reasons for missing values
– Information is not collected
(e.g., people decline to give their age and weight)
– Attributes may not be applicable to all cases
(e.g., annual income is not applicable to children)

● Handling missing values
– Eliminate data objects or variables
– Estimate missing values
◆ Example: time series of temperature
◆ Example: census results
– Ignore the missing value during analysis

Duplicate Data

● Data set may include data objects that are duplicates, or almost duplicates, of one another
– Major issue when merging data from heterogeneous sources

● Examples:
– Same person with multiple email addresses

● Data cleaning
– Process of dealing with duplicate data issues

● When should duplicate data not be removed?

Similarity and Dissimilarity Measures

● Similarity measure
– Numerical measure of how alike two data objects are
– Is higher when objects are more alike
– Often falls in the range [0,1]

● Dissimilarity measure
– Numerical measure of how different two data objects are
– Lower when objects are more alike
– Minimum dissimilarity is often 0
– Upper limit varies

● Proximity refers to a similarity or dissimilarity

Similarity/Dissimilarity for Simple Attributes

The following table shows the similarity and dissimilarity between two objects, x and y, with respect to a single, simple attribute.

Attribute Type      Dissimilarity                         Similarity
Nominal             d = 0 if x = y, d = 1 if x ≠ y        s = 1 if x = y, s = 0 if x ≠ y
Ordinal             d = |x − y| / (n − 1)                 s = 1 − d
                    (values mapped to integers 0 to n−1)
Interval or Ratio   d = |x − y|                           s = −d, s = 1/(1 + d), or
                                                          s = 1 − (d − min_d)/(max_d − min_d)

Euclidean Distance

● Euclidean Distance

d(x, y) = sqrt( Σ_{k=1}^{n} (x_k − y_k)^2 )

where n is the number of dimensions (attributes) and x_k and y_k are, respectively, the kth attributes (components) of data objects x and y.

● Standardization is necessary if scales differ.

(Figure: four example points p1-p4, their coordinates, and the resulting distance matrix)
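
A minimal Python sketch of this formula; the two points are illustrative only.

```python
# Euclidean distance between two data objects given as lists of
# attribute values.
import math

def euclidean(x, y):
    # Square root of the sum of squared per-attribute differences
    return math.sqrt(sum((xk - yk) ** 2 for xk, yk in zip(x, y)))

print(euclidean([0, 2], [2, 0]))  # 2.828...
```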
Minkowski Distance

● Minkowski Distance is a generalization of Euclidean Distance

d(x, y) = ( Σ_{k=1}^{n} |x_k − y_k|^r )^(1/r)

where r is a parameter, n is the number of dimensions (attributes), and x_k and y_k are, respectively, the kth attributes (components) of data objects x and y.

Minkowski Distance: Examples

● r = 1. City block (Manhattan, taxicab, L1 norm) distance
– A common example of this for binary vectors is the Hamming distance, which is just the number of bits that are different between two binary vectors

● r = 2. Euclidean distance

● r → ∞. "supremum" (Lmax norm, L∞ norm) distance
– This is the maximum difference between any component of the vectors

● Do not confuse r with n, i.e., all these distances are defined for all numbers of dimensions.
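
The three special cases can be captured in one short sketch, assuming equal-length numeric vectors; the points are illustrative.

```python
# Minkowski distance for a parameter r; r = inf gives the supremum distance.
def minkowski(x, y, r):
    if r == float("inf"):
        # L-infinity: maximum per-component difference
        return max(abs(xk - yk) for xk, yk in zip(x, y))
    return sum(abs(xk - yk) ** r for xk, yk in zip(x, y)) ** (1.0 / r)

x, y = [0, 2], [2, 0]
print(minkowski(x, y, 1))             # 4.0   city block (L1)
print(minkowski(x, y, 2))             # 2.83  Euclidean (L2)
print(minkowski(x, y, float("inf")))  # 2     supremum (Lmax)
```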
(Figure: distance matrices for the example points under the L1, L2, and L∞ distances)


Mahalanobis Distance

mahalanobis(x, y) = (x − y)ᵀ Σ⁻¹ (x − y)

where Σ is the covariance matrix of the input data

● For the red points in the original figure, the Euclidean distance is 14.7 and the Mahalanobis distance is 6.

● Example:

Covariance Matrix:
Σ = [ 0.3  0.2
      0.2  0.3 ]

A: (0.5, 0.5)
B: (0, 1)
C: (1.5, 1.5)

Mahal(A,B) = 5
Mahal(A,C) = 4
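
A short NumPy sketch that reproduces the example above, assuming the covariance matrix shown.

```python
# Mahalanobis distance, defined here (as on the slide) without a
# square root: (x - y)^T Sigma^{-1} (x - y).
import numpy as np

cov = np.array([[0.3, 0.2],
                [0.2, 0.3]])
cov_inv = np.linalg.inv(cov)

def mahalanobis(x, y):
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ cov_inv @ d)

A, B, C = (0.5, 0.5), (0, 1), (1.5, 1.5)
print(mahalanobis(A, B))  # 5.0
print(mahalanobis(A, C))  # 4.0
```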
Common Properties of a Distance

● Distances, such as the Euclidean distance, have some well-known properties.

1. d(x, y) ≥ 0 for all x and y, and d(x, y) = 0 if and only if x = y.
2. d(x, y) = d(y, x) for all x and y. (Symmetry)
3. d(x, z) ≤ d(x, y) + d(y, z) for all points x, y, and z. (Triangle Inequality)

where d(x, y) is the distance (dissimilarity) between points (data objects) x and y.

● A distance that satisfies these properties is a metric

Common Properties of a Similarity

● Similarities also have some well-known properties.

1. s(x, y) = 1 (or maximum similarity) only if x = y.
(does not always hold, e.g., cosine)
2. s(x, y) = s(y, x) for all x and y. (Symmetry)

where s(x, y) is the similarity between points (data objects) x and y.

Similarity Between Binary Vectors

● Common situation is that objects, x and y, have only binary attributes

● Compute similarities using the following quantities:
f01 = the number of attributes where x was 0 and y was 1
f10 = the number of attributes where x was 1 and y was 0
f00 = the number of attributes where x was 0 and y was 0
f11 = the number of attributes where x was 1 and y was 1

● Simple Matching and Jaccard Coefficients

SMC = number of matches / number of attributes
    = (f11 + f00) / (f01 + f10 + f11 + f00)

J = number of 11 matches / number of non-zero attributes
  = (f11) / (f01 + f10 + f11)

SMC versus Jaccard: Example

x = 1 0 0 0 0 0 0 0 0 0
y = 0 0 0 0 0 0 1 0 0 1

f01 = 2 (the number of attributes where x was 0 and y was 1)
f10 = 1 (the number of attributes where x was 1 and y was 0)
f00 = 7 (the number of attributes where x was 0 and y was 0)
f11 = 0 (the number of attributes where x was 1 and y was 1)

SMC = (f11 + f00) / (f01 + f10 + f11 + f00) = (0 + 7) / (2 + 1 + 0 + 7) = 0.7

J = (f11) / (f01 + f10 + f11) = 0 / (2 + 1 + 0) = 0
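
The same computation as a Python sketch.

```python
# Count the four (x, y) bit combinations, then form SMC and Jaccard.
x = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]

pairs = list(zip(x, y))
f01 = pairs.count((0, 1))
f10 = pairs.count((1, 0))
f00 = pairs.count((0, 0))
f11 = pairs.count((1, 1))

smc = (f11 + f00) / (f01 + f10 + f11 + f00)
jaccard = f11 / (f01 + f10 + f11)
print(smc, jaccard)  # 0.7 0.0
```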
Cosine Similarity

● If d1 and d2 are two document vectors, then
cos(d1, d2) = <d1, d2> / (||d1|| ||d2||),
where <d1, d2> indicates the inner product or vector dot product of vectors d1 and d2, and ||d|| is the length of vector d.

● Example:
d1 = 3 2 0 5 0 0 0 2 0 0
d2 = 1 0 0 0 0 0 0 1 0 2

<d1, d2> = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
||d1|| = (3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 = 6.481
||d2|| = (1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2)^0.5 = (6)^0.5 = 2.449

cos(d1, d2) = 5 / (6.481 * 2.449) = 0.3150
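
The same example as a Python sketch.

```python
# Cosine similarity: dot product divided by the product of lengths.
import math

d1 = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
d2 = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]

dot = sum(a * b for a, b in zip(d1, d2))   # 5
len1 = math.sqrt(sum(a * a for a in d1))   # 6.481
len2 = math.sqrt(sum(b * b for b in d2))   # 2.449
print(dot / (len1 * len2))                 # 0.3150
```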

Correlation

● Correlation measures the linear relationship between objects

● For two data objects x and y, the (Pearson) correlation is

corr(x, y) = covariance(x, y) / ( std(x) * std(y) )

Visually Evaluating Correlation

(Figure: scatter plots showing the similarity from -1 to 1)

Drawback of Correlation

● x = (-3, -2, -1, 0, 1, 2, 3)
● y = (9, 4, 1, 0, 1, 4, 9)

y_i = x_i^2

● mean(x) = 0, mean(y) = 4
● std(x) = 2.16, std(y) = 3.74
● corr = [(-3)(5) + (-2)(0) + (-1)(-3) + (0)(-4) + (1)(-3) + (2)(0) + (3)(5)] / (6 * 2.16 * 3.74) = 0

● If X and Y are independent, then they are also uncorrelated. However, if X and Y are uncorrelated, they can still be dependent.
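
A quick NumPy check of this example.

```python
# y is a deterministic function of x, yet the linear correlation is zero.
import numpy as np

x = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
y = x ** 2

print(np.corrcoef(x, y)[0, 1])  # 0.0 up to floating-point error
```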
Correlation vs Cosine vs Euclidean Distance

● Compare the three proximity measures according to their behavior under variable transformation
– scaling: multiplication by a value
– translation: adding a constant

Property                               Cosine   Correlation   Euclidean Distance
Invariant to scaling (multiplication)  Yes      Yes           No
Invariant to translation (addition)    No       Yes           No

● Consider the example
– x = (1, 2, 4, 3, 0, 0, 0), y = (1, 2, 3, 4, 0, 0, 0)
– ys = y * 2 (scaled version of y), yt = y + 5 (translated version)

Measure             (x, y)    (x, ys)   (x, yt)
Cosine              0.9667    0.9667    0.7940
Correlation         0.9429    0.9429    0.9429
Euclidean Distance  1.4142    5.8310    14.2127
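
A small NumPy sketch of the invariance properties in the table above, using random vectors for illustration rather than the slide's data.

```python
# Cosine is invariant to scaling; correlation to scaling and translation;
# Euclidean distance to neither.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
x, y = rng.random(7), rng.random(7)

print(np.isclose(cosine(x, y), cosine(x, 2 * y)))                        # True
print(np.isclose(np.corrcoef(x, y)[0, 1], np.corrcoef(x, y + 5)[0, 1]))  # True
print(np.isclose(np.linalg.norm(x - y), np.linalg.norm(x - 2 * y)))      # False
```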

Correlation vs Cosine vs Euclidean Distance

● Choice of the right proximity measure depends on the domain

● What is the correct choice of proximity measure for the following situations?
– Comparing documents using the frequencies of words
◆ Documents are considered similar if the word frequencies are similar

– Comparing the temperature in Celsius of two locations
◆ Two locations are considered similar if the temperatures are similar in magnitude

– Comparing two time series of temperature measured in Celsius
◆ Two time series are considered similar if their "shape" is similar, i.e., they vary in the same way over time, achieving minimums and maximums at similar times, etc.

Information Based Measures

● Information theory is a well-developed and fundamental discipline with broad applications

● Some similarity measures are based on information theory
– Mutual information in various versions
– Maximal Information Coefficient (MIC) and related measures
– General and can handle non-linear relationships
– Can be complicated and time-intensive to compute

Information and Probability

● Information relates to possible outcomes of an event
– transmission of a message, flip of a coin, or measurement of a piece of data

● The more certain an outcome, the less information it contains, and vice-versa
– For example, if a coin has two heads, then an outcome of heads provides no information
– More quantitatively, the information is related to the probability of an outcome
◆ The smaller the probability of an outcome, the more information it provides, and vice-versa
– Entropy is the commonly used measure

Entropy

● For a variable (event) X with n possible values (outcomes) x_1, ..., x_n, each with probability p_1, ..., p_n, entropy is defined as

H(X) = − Σ_{i=1}^{n} p_i log2(p_i)

● Entropy is measured in bits and is maximized, at log2(n), when all outcomes are equally likely

Entropy Examples

● Fair coin (p_heads = p_tails = 0.5): H = 1 bit
● Two-headed coin (p_heads = 1): H = 0 bits, consistent with a certain outcome carrying no information

Entropy for Sample Data: Example

Hair Color   Count   p      -p log2 p
Black        75      0.75   0.3113
Brown        15      0.15   0.4105
Blond         5      0.05   0.2161
Red           0      0.00   0
Other         5      0.05   0.2161
Total       100      1.00   1.1540

Maximum entropy is log2(5) = 2.3219

Entropy for Sample Data

● For a sample with m items, where the ith value occurs m_i times, estimate p_i = m_i / m and compute

H = − Σ_{i=1}^{n} (m_i / m) log2(m_i / m)
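
The hair-color entropy as a Python sketch.

```python
# Entropy of the hair-color sample; the 0-count category contributes 0.
import math

counts = {"Black": 75, "Brown": 15, "Blond": 5, "Red": 0, "Other": 5}
total = sum(counts.values())

entropy = -sum((c / total) * math.log2(c / total)
               for c in counts.values() if c > 0)
print(round(entropy, 4))  # 1.154
```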
Mutual Information

● Mutual information measures how much information one variable provides about another

I(X, Y) = H(X) + H(Y) − H(X, Y)

where H(X, Y) is the joint entropy of X and Y:

H(X, Y) = − Σ_i Σ_j p_ij log2(p_ij)

and p_ij is the probability that X takes its ith value and Y takes its jth value.

Mutual Information Example

Student Status   Count   p
Undergrad        45      0.45
Grad             55      0.55
Total            100     1.00

Grade   Count   p
A       35      0.35
B       50      0.50
C       15      0.15
Total   100     1.00

Student Status   Grade   Count   p      -p log2 p
Undergrad        A       5       0.05   0.2161
Undergrad        B       30      0.30   0.5211
Undergrad        C       10      0.10   0.3322
Grad             A       30      0.30   0.5211
Grad             B       20      0.20   0.4644
Grad             C       5       0.05   0.2161
Total                    100     1.00   2.2710

H(Student Status) = 0.9928, H(Grade) = 1.4406, H(Student Status, Grade) = 2.2710

Mutual information of Student Status and Grade = 0.9928 + 1.4406 - 2.2710 = 0.1624
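
The same mutual-information computation as a Python sketch, starting from the joint counts.

```python
# I(X, Y) = H(X) + H(Y) - H(X, Y), estimated from sample counts.
import math
from collections import Counter

joint = {("Undergrad", "A"): 5, ("Undergrad", "B"): 30, ("Undergrad", "C"): 10,
         ("Grad", "A"): 30, ("Grad", "B"): 20, ("Grad", "C"): 5}
total = sum(joint.values())

def entropy(counts):
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

status, grade = Counter(), Counter()
for (s, g), c in joint.items():
    status[s] += c
    grade[g] += c

mi = entropy(status.values()) + entropy(grade.values()) - entropy(joint.values())
print(round(mi, 4))  # 0.1624
```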
Using Weights to Combine Similarities

● May not want to treat all attributes the same
– Use non-negative weights w_k that sum to 1, and combine the per-attribute similarities s_k as

similarity(x, y) = Σ_{k=1}^{n} w_k s_k(x, y)

● Distances can be weighted analogously, e.g., a weighted Minkowski distance

d(x, y) = ( Σ_{k=1}^{n} w_k |x_k − y_k|^r )^(1/r)

Data Preprocessing

● Aggregation
● Sampling
● Discretization and Binarization
● Attribute Transformation
● Dimensionality Reduction
● Feature subset selection
● Feature creation

Aggregation

● Combining two or more attributes (or objects) into a single attribute (or object)

● Purpose
– Data reduction: reduce the number of attributes or objects
– Change of scale
◆ Cities aggregated into regions, states, countries, etc.
◆ Days aggregated into weeks, months, or years
– More "stable" data: aggregated data tends to have less variability

Customer Name   Date       Item      Quantity   Price
John Smith      1/1/2022   Apples    2          1.50
John Smith      1/1/2022   Bananas   1          0.50
Mary Jones      1/2/2022   Oranges   3          2.00
Mary Jones      1/2/2022   Bread     1          3.00
Mary Jones      1/2/2022   Milk      1          2.50
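
A pandas sketch aggregating the table above to one row per customer; pandas is an assumption here, and any group-by tool would do.

```python
# Aggregate transactions: total quantity and total spend per customer.
import pandas as pd

df = pd.DataFrame({
    "Customer Name": ["John Smith", "John Smith",
                      "Mary Jones", "Mary Jones", "Mary Jones"],
    "Quantity": [2, 1, 3, 1, 1],
    "Price": [1.50, 0.50, 2.00, 3.00, 2.50],
})

print(df.groupby("Customer Name").agg(
    total_quantity=("Quantity", "sum"),
    total_spend=("Price", "sum"),
))
```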
Example: Precipitation in Australia

● This example is based on precipitation in Australia from the period 1982 to 1993. The original figure shows
– A histogram for the standard deviation of average monthly precipitation for 3,030 0.5° by 0.5° grid cells in Australia, and
– A histogram for the standard deviation of the average yearly precipitation for the same locations.

● The average yearly precipitation has less variability than the average monthly precipitation.

● All precipitation measurements (and their standard deviations) are in centimeters.

(Figure: standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation)

Sampling

● Sampling is the main technique employed for data reduction.
– It is often used for both the preliminary investigation of the data and the final data analysis.

● Statisticians often sample because obtaining the entire set of data of interest is too expensive or time consuming.

● Sampling is typically used in data mining because processing the entire set of data of interest is too expensive or time consuming.

Sampling …

● The key principle for effective sampling is the following:
– Using a sample will work almost as well as using the entire data set, if the sample is representative
– A sample is representative if it has approximately the same properties (of interest) as the original set of data

(Figure: the same data set sampled at 8000, 2000, and 500 points)

Types of Sampling

● Simple Random Sampling
– There is an equal probability of selecting any particular item
– Sampling without replacement
◆ As each item is selected, it is removed from the population
– Sampling with replacement
◆ Objects are not removed from the population as they are selected for the sample
◆ In sampling with replacement, the same object can be picked more than once

● Stratified sampling
– Split the data into several partitions; then draw random samples from each partition
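
A standard-library sketch of the three schemes; the population and strata are illustrative.

```python
import random

population = list(range(100))

without_replacement = random.sample(population, 10)  # no repeats
with_replacement = random.choices(population, k=10)  # repeats possible

# Stratified: partition the data (here by parity), sample each partition
strata = [[v for v in population if v % 2 == 0],
          [v for v in population if v % 2 == 1]]
stratified = [v for stratum in strata for v in random.sample(stratum, 5)]

print(without_replacement, with_replacement, stratified, sep="\n")
```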
Sample Size

● What sample size is necessary to get at least one object from each of 10 equal-sized groups?

(Figure: probability of a sample containing points from each of the 10 groups, as a function of sample size)
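
The question can be answered by simulation; a sketch under the assumption of simple random sampling with replacement.

```python
# Estimate P(sample of size s contains all 10 equal-sized groups).
import random

def prob_all_groups(s, groups=10, trials=10_000):
    hits = sum(
        1 for _ in range(trials)
        if len({random.randrange(groups) for _ in range(s)}) == groups
    )
    return hits / trials

for s in (10, 20, 40, 60):
    print(s, prob_all_groups(s))  # probability rises toward 1 with s
```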
Discretization

● Discretization is the process of converting a continuous attribute into an ordinal attribute
– A potentially infinite number of values are mapped into a small number of categories
– Discretization is used in both unsupervised and supervised settings

Binarization

● Binarization maps a continuous or categorical attribute into one or more binary variables
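
A small sketch of both operations; equal-width bins and one-hot encoding are just two common choices, and the data is illustrative.

```python
# Unsupervised equal-width discretization into 4 bins
values = [0.1, 0.4, 1.2, 2.7, 3.3, 3.9]
lo, hi, bins = min(values), max(values), 4
width = (hi - lo) / bins
labels = [min(int((v - lo) / width), bins - 1) for v in values]
print(labels)  # [0, 0, 1, 2, 3, 3]: an ordinal attribute

# Binarization of a categorical attribute: one binary variable per category
categories = ["low", "medium", "high"]
def one_hot(value):
    return [1 if value == c else 0 for c in categories]
print(one_hot("medium"))  # [0, 1, 0]
```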
Attribute Transformation

● An attribute transform is a function that maps the entire set of values of a given attribute to a new set of replacement values such that each old value can be identified with one of the new values
– Simple functions: x^k, log(x), e^x, |x|
– Normalization
◆ Refers to various techniques to adjust to differences among attributes in terms of frequency of occurrence, mean, variance, range
◆ Take out unwanted, common signal, e.g., seasonality
– In statistics, standardization refers to subtracting off the means and dividing by the standard deviation
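
A sketch of standardization as defined above, with illustrative data.

```python
# Standardize: subtract the mean, divide by the standard deviation.
import statistics

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = statistics.mean(x)    # 5.0
std = statistics.pstdev(x)   # 2.0 (population standard deviation)

z = [(v - mean) / std for v in x]
print(z)  # transformed values have mean 0 and standard deviation 1
```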
Curse of Dimensionality

● When dimensionality increases, data becomes increasingly sparse in the space that it occupies

● Definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful

● Illustration:
– Randomly generate 500 points
– Compute the difference between the maximum and minimum distance between any pair of points
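
A sketch of that experiment; SciPy's pdist is assumed for the pairwise distances.

```python
# As dimensionality grows, the max-min spread of pairwise distances
# shrinks relative to the minimum distance.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    points = rng.random((500, dim))
    d = pdist(points)  # all pairwise Euclidean distances
    print(dim, (d.max() - d.min()) / d.min())  # ratio decreases with dim
```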
Dimensionality Reduction

● Purpose:
– Avoid curse of dimensionality
– Reduce amount of time and memory required by data mining algorithms
– Allow data to be more easily visualized
– May help to eliminate irrelevant features or reduce noise

● Techniques
– Principal Components Analysis (PCA)
– Singular Value Decomposition
– Others: supervised and non-linear techniques

Dimensionality Reduction: PCA

● Goal is to find a projection that captures the largest amount of variation in data

(Figure: a 2-D point cloud with its first principal component indicated)
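
A sketch of PCA via the eigenvectors of the covariance matrix, using synthetic 2-D data for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[3, 2], [2, 2]], size=200)

Xc = X - X.mean(axis=0)                 # center the data
cov = np.cov(Xc, rowvar=False)          # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

pc1 = eigvecs[:, -1]        # direction of largest variance
projection = Xc @ pc1       # 1-D representation along that direction
print(pc1, projection.var())
```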
Feature Subset Selection

● Another way to reduce dimensionality of data

● Redundant features
– Duplicate much or all of the information contained in one or more other attributes
– Example: purchase price of a product and the amount of sales tax paid

● Irrelevant features
– Contain no information that is useful for the data mining task at hand
– Example: students' ID is often irrelevant to the task of predicting students' GPA

● Many techniques developed, especially for classification

Feature Creation

● Create new attributes that can capture the important information in a data set much more efficiently than the original attributes

● Three general methodologies:
– Feature extraction
◆ Example: extracting edges from images
– Feature construction
◆ Example: dividing mass by volume to get density
– Mapping data to new space
◆ Example: Fourier and wavelet analysis

Mapping Data to a New Space

● Fourier and wavelet transform

(Figure: two sine waves plus noise in the time domain, and the corresponding frequency-domain magnitudes)
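
A sketch of the Fourier example: two sine waves plus noise become two clear peaks in the frequency domain. The specific frequencies and noise level are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 7 * t)      # 7 Hz sine
          + np.sin(2 * np.pi * 17 * t)   # 17 Hz sine
          + 0.5 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

# The two dominant frequencies stand out despite the noise
print(sorted(freqs[np.argsort(spectrum)[-2:]]))  # ~[7.0, 17.0]
```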
