Module 1 - Aug 2024

DATA MINING

Introduction
What is data mining?
• After years of data mining there is still no unique
answer to this question.

• A tentative definition:

Data mining is the use of efficient techniques for the analysis of very large collections of data and the extraction of useful and possibly unexpected patterns in data.
Why do we need data mining?
• Really, really huge amounts of raw data!!
• In the digital age, TBs of data are generated every second
• Mobile devices, digital photographs, web documents.
• Facebook updates, Tweets, Blogs, User-generated content
• Transactions, sensor data, surveillance data
• Queries, clicks, browsing
• Cheap storage has made it possible to maintain this data
• Need to analyze the raw data to extract knowledge
Why do we need data mining?
• “The data is the computer”
• Large amounts of data can be more powerful than
complex algorithms and models
• Google has solved many Natural Language Processing problems,
simply by looking at the data
• Example: misspellings, synonyms
• Data is power!
• Today, the collected data is one of the biggest assets of an online
company
• Query logs of Google
• The friendship and updates of Facebook
• Tweets and follows of Twitter
• Amazon transactions
• We need a way to harness the collective intelligence
KDD Process
Data Mining Architecture
The data is also very complex
• Multiple types of data: tables, time series,
images, graphs, etc

• Spatial and temporal aspects

• Interconnected data of different types:
• From the mobile phone we can collect the location of the user, friendship information, check-ins to venues, opinions through Twitter, images through cameras, and queries to search engines
Relational Database
Queries:
• Show me a list of all items that were sold in the last quarter.
• Show me the total sales of the last month, grouped by branch.
• How many sales transactions occurred in the month of December?
• Which salesperson had the highest amount of sales?

Data mining, in contrast, is applied to search for data patterns or trends.
For example: data mining systems can analyze customer data to predict the credit risk of new customers based on their income, age and previous credit information.
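As an illustrative sketch (not part of the slides), the queries above can be expressed with Python's built-in sqlite3 module; the sales table, its columns, and the rows are hypothetical:

```python
# Hypothetical "sales" table used only to illustrate the example queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales (
    item TEXT, branch TEXT, salesperson TEXT,
    amount REAL, sale_date TEXT)""")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?, ?)", [
    ("Milk", "North", "Ann", 3.5, "2024-12-02"),
    ("Bread", "South", "Bob", 2.0, "2024-12-15"),
    ("Beer", "North", "Ann", 6.0, "2024-11-20"),
])

# Total sales of the last month, grouped by branch
for row in conn.execute("""SELECT branch, SUM(amount) FROM sales
                           WHERE sale_date >= '2024-12-01'
                           GROUP BY branch"""):
    print(row)

# Which salesperson had the highest amount of sales?
print(conn.execute("""SELECT salesperson, SUM(amount) AS total
                      FROM sales GROUP BY salesperson
                      ORDER BY total DESC LIMIT 1""").fetchone())
```

Such queries return exact answers from the stored records; data mining, as noted above, instead looks for patterns that are not stated explicitly in any single query.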
Data Warehouses
Data warehouse framework

Data are stored to provide information from a historical perspective and are typically
summarized.
Example: transaction data
• Billions of real-life customers:
• WALMART: 20M transactions per day
• AT&T 300 M calls per day
• Credit card companies: billions of transactions per day.

• The point cards allow companies to collect information about specific users
Example: document data
• Web as a document repository: an estimated 50 billion web pages

• Wikipedia: 4 million articles (and counting)

• Online news portals: steady stream of 100’s of new articles every day

• Twitter: ~300 million tweets every day
Example: network data
• Web: 50 billion pages linked via hyperlinks

• Facebook: 500 million users

• Twitter: 300 million users

• Instant messenger: ~1 billion users

• Blogs: 250 million blogs worldwide; presidential candidates run blogs
Example: genomic sequences
• http://www.1000genomes.org/page.php

• Full sequence of 1000 individuals

• 3×10^9 nucleotides per person → 3×10^12 nucleotides in total

• Lots more data in fact: medical history of the persons, gene expression data
Example: environmental data
• Climate data (just an example)
http://www.ncdc.gov/oa/climate/ghcn-monthly/index.php

• “a database of temperature, precipitation and pressure records managed by the National Climatic Data Center, Arizona State University and the Carbon Dioxide Information Analysis Center”

• “6000 temperature stations, 7500 precipitation stations, 2000 pressure stations”
• Spatiotemporal data
Behavioral data
• Mobile phones today record a large amount of information about the user
behavior
• GPS records position
• Camera produces images
• Communication via phone and SMS
• Text via facebook updates
• Association with entities via check-ins

• Amazon collects all the items that you browsed, placed into your basket,
read reviews about, purchased.

• Google and Bing record all your browsing activity via toolbar plugins.
They also record the queries you asked, the pages you saw and the
clicks you did.

• Data collected for millions of users on a daily basis


So, what is Data?
• Collection of data objects and their attributes
• An attribute is a property or characteristic of an object
• Examples: eye color of a person, temperature, etc.
• Attribute is also known as variable, field, characteristic, or feature
• A collection of attributes describes an object
• Object is also known as record, point, case, sample, entity, or instance

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

(Rows are the objects; columns are the attributes.)
Size: Number of objects
Dimensionality: Number of attributes
Sparsity: Number of populated object-attribute pairs
Types of Attributes
• There are different types of attributes
• Categorical
• Examples: eye color, zip codes, words, rankings (e.g., good, fair, bad), height in {tall, medium, short}
• Nominal (no order or comparison) vs Ordinal (ordered, but differences between values are not meaningful)
• Numeric
• Examples: dates, temperature, time, length, value, count.
• Discrete (counts) vs Continuous (temperature)
• Special case: Binary attributes (yes/no, exists/not exists)
Numeric Record Data
• If data objects have the same fixed set of numeric
attributes, then the data objects can be thought of as
points in a multi-dimensional space, where each
dimension represents a distinct attribute

• Such a data set can be represented by an n-by-d data matrix, where there are n rows, one for each object, and d columns, one for each attribute

Projection of x Load  Projection of y Load  Distance  Load  Thickness
10.23                 5.27                  15.22     2.7   1.2
12.65                 6.25                  16.22     2.2   1.1
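As a concrete sketch, the 2-by-5 matrix above maps directly onto a NumPy array (NumPy assumed available):

```python
import numpy as np

# n = 2 objects (rows), d = 5 attributes (columns)
X = np.array([[10.23, 5.27, 15.22, 2.7, 1.2],
              [12.65, 6.25, 16.22, 2.2, 1.1]])
n, d = X.shape
print(n, d)            # 2 5
print(X.mean(axis=0))  # per-attribute mean across the objects
```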
Categorical Data
• Data that consists of a collection of records, each
of which consists of a fixed set of categorical
attributes
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          High            No
2    No      Married         Medium          No
3    No      Single          Low             No
4    Yes     Married         High            No
5    No      Divorced        Medium          Yes
6    No      Married         Low             No
7    Yes     Divorced        High            No
8    No      Single          Medium          Yes
9    No      Married         Medium          No
10   No      Single          Medium          Yes
Document Data
• Each document becomes a `term' vector,
• each term is a component (attribute) of the vector,
• the value of each component is the number of times the
corresponding term occurs in the document.
• Bag-of-words representation – no ordering

            team  coach  play  ball  score  game  win  lost  timeout  season
Document 1   3     0      5     0     2      6     0    2      0        2
Document 2   0     7      0     2     1      0     0    3      0        0
Document 3   0     1      0     0     1      2     2    0      3        0
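A minimal sketch of the bag-of-words construction using only the standard library; the two toy documents are invented for illustration:

```python
from collections import Counter

docs = ["team win game team score game",
        "coach lost timeout season"]

# Shared vocabulary: one component (attribute) per distinct term
vocab = sorted({w for doc in docs for w in doc.split()})
# Each document becomes a vector of term counts; word order is discarded
vectors = [[Counter(doc.split())[w] for w in vocab] for doc in docs]
print(vocab)
for v in vectors:
    print(v)
```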
Transaction Data
• Each record (transaction) is a set of items.
TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk

• A set of items can also be represented as a binary vector, where each attribute is an item.
• A document can also be represented as a set of words (no counts)
Sparsity: average number of products bought by a customer
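A small sketch of the binary-vector representation of the five transactions above:

```python
transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]
# One binary attribute per distinct item
items = sorted(set().union(*transactions))
print(items)
for t in transactions:
    print([1 if item in t else 0 for item in items])
```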
Ordered Data
• Genomic sequence data

GGTTCCGCCTTCAGCCCCGCGCC
CGCAGGGCCCGCCCCGCGCCGTC
GAGAAGGGCCCGCCTGGCGGGCG
GGGGGAGGCGGGGCCGCCCGAGC
CCAACCGAGTCCGACCAGGTGCC
CCCTCTGCTCGGCCTAGACCTGA
GCTCATTAGGCGGCAGCGGACAG
GCCAAGTAGAACACGCGAAGCGC
TGGGCTGCCTGCTGCGACCAGGG

• Data is a long ordered string


Ordered Data
• Time series
• Sequence of ordered (over “time”) numeric values.
Graph Data
• Examples: Web graph and HTML links

<a href="papers/papers.html#bbbb">Data Mining</a>
<li>
<a href="papers/papers.html#aaaa">Graph Partitioning</a>
<li>
<a href="papers/papers.html#aaaa">Parallel Solution of Sparse Linear System of Equations</a>
<li>
<a href="papers/papers.html#ffff">N-Body Computation and Dense Linear System Solvers</a>
Types of data
• Numeric data: Each object is a point in a
multidimensional space
• Categorical data: Each object is a vector of
categorical values
• Set data: Each object is a set of values (with or
without counts)
• Sets can also be represented as binary vectors, or
vectors of counts
• Ordered sequences: Each object is an ordered
sequence of values.
• Graph data
What can we do with data mining?
• Some examples:
• Frequent itemsets and Association Rules extraction
• Coverage
• Clustering
• Classification
• Ranking
• Exploratory analysis
What can you do with the data?
• Suppose that you are the owner of a supermarket
and you have collected billions of market basket
data. What information would you extract from it
and how would you use it?
TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Possible uses:
• Product placement
• Catalog creation
• Recommendations

• What if this was an online store?
Association Rule Mining
• Association Rule Mining is used when you want to find associations between different objects in a set: frequent patterns in a transaction database, relational databases, or any other information repository. It can tell you which items customers frequently buy together by generating a set of rules called Association Rules.
Association rule mining

• Changing the store layout according to trends
• Customer behavior analysis
• Catalogue design
• Cross marketing on online stores
• What are the trending items customers buy
• Customized emails with add-on sales
What Is An Itemset?

A set of items together is called an itemset. An itemset containing k items is called a k-itemset. An itemset that occurs frequently is called a frequent itemset. Thus, frequent itemset mining is a data mining technique for identifying the items that often occur together.

For example: bread and butter, laptop and antivirus software, etc.
Frequent Item Set
• A set of items is called frequent if it satisfies minimum threshold values for support and confidence.
• Support measures how often the items are purchased together in a single transaction.
• Confidence measures how often the rule's consequent is purchased in transactions that contain its antecedent.
Frequent Item Set
• For frequent itemset mining, we consider only those itemsets which meet the minimum support and confidence thresholds. Insights from these mining algorithms offer many benefits, including cost-cutting and improved competitive advantage.
• There is a tradeoff between the time taken to mine the data and the volume of data for frequent mining. Efficient frequent-mining algorithms uncover the hidden patterns of itemsets in a short time and with low memory consumption.
Support and Confidence
Association Rule Mining

“Let I = {i1, i2, …, in} be a set of n binary attributes called items. Let D = {t1, t2, …, tm} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A rule is defined as an implication of the form X → Y, where X, Y ⊆ I and X ∩ Y = ∅. The sets of items X and Y are called the antecedent and consequent of the rule, respectively.”
Association Rule Mining

Learning of association rules is used to find relationships between attributes in large databases. An association rule, A ⇒ B, will be of the form: “for a set of transactions, some value of itemset A determines the values of itemset B under the condition in which minimum support and confidence are met”.
Words to know
Itemset: a collection of one or more items.
k-itemset: an itemset that contains k items.
Support count (s): the frequency of occurrence of an itemset.
Support: the fraction of transactions that contain the itemset.
Confidence: the probability that itemset B occurs given that itemset A occurs in the transaction.
Association Rule: relationship discovered between two item
sets.
Frequent Itemset: an itemset whose support is greater than
or equal to a support threshold value
Strong Association Rules: rules whose confidence is
greater than or equal to a confidence threshold value
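In this notation, with D the set of transactions, the two key measures can be written compactly:

support(X) = s(X) / |D|, where s(X) is the support count of X
confidence(X → Y) = support(X ∪ Y) / support(X)

For example, in the five-transaction table shown earlier, {Bread} appears in 3 of 5 transactions and {Bread, Milk} in 2, so support({Bread} → {Milk}) = 2/5 and confidence({Bread} → {Milk}) = (2/5) / (3/5) ≈ 0.67.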
Support & Confidence

minsup: the minimal support used as a threshold.
minconf: the minimal confidence used as a threshold.
Frequent Itemset: an itemset whose support is greater than or equal to the minsup threshold.
Strong Association Rules: rules whose confidence is greater than or equal to the minconf threshold.
Association Rule Mining uses these thresholds to reduce the time complexity of the computations and find strong associations.
Association Rule Mining
1. Frequent Itemset Generation: find all itemsets whose support is greater than or equal to minsup.
2. Rule Generation: generate strong association rules from the frequent itemsets, i.e., rules whose confidence is greater than or equal to minconf.
Apriori Algorithm

• The Apriori algorithm is a classic rule-mining algorithm, and also the most commonly used algorithm for mining frequent itemsets. The main steps are as follows:
• Discover all frequent itemsets: all itemsets with support ≥ minsup
• Count the support of each candidate k-itemset and find the frequent k-itemsets L_k
• Use the frequent k-itemsets to generate the candidate (k+1)-itemsets C_{k+1}, set k = k + 1, and repeat
• Generate possible association rules from each frequent itemset
Apriori
Minimum Support Threshold = 2
Minimum Confidence Threshold = 70%
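The worked example's tables are not reproduced here; as a stand-in, the following from-scratch sketch runs both phases on the five-transaction table shown earlier with the thresholds above (support count ≥ 2, confidence ≥ 70%). It is illustrative, not an optimized implementation (real Apriori joins (k-1)-itemsets that share k-2 items):

```python
from itertools import combinations

transactions = [
    {"Bread", "Coke", "Milk"}, {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"}, {"Coke", "Diaper", "Milk"},
]
minsup, minconf = 2, 0.7  # support count >= 2, confidence >= 70%

def count(itemset):
    # Support count: number of transactions containing the itemset
    return sum(itemset <= t for t in transactions)

# Phase 1: level-wise frequent itemset generation (L1, C2, L2, ...)
items = sorted(set().union(*transactions))
Lk = [frozenset([i]) for i in items if count(frozenset([i])) >= minsup]
frequent, k = list(Lk), 2
while Lk:
    candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
    Lk = [c for c in candidates if count(c) >= minsup]
    frequent += Lk
    k += 1

# Phase 2: rule generation from each frequent itemset
for itemset in (f for f in frequent if len(f) > 1):
    for r in range(1, len(itemset)):
        for X in map(frozenset, combinations(itemset, r)):
            conf = count(itemset) / count(X)
            if conf >= minconf:
                print(set(X), "->", set(itemset - X), f"conf={conf:.2f}")
```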
FP-Growth Algorithm
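The FP-Growth figure is not reproduced here. As a sketch of the same mining task, the mlxtend library (an assumption: install with pip install mlxtend) offers fpgrowth(), which finds the same frequent itemsets as Apriori but builds an FP-tree instead of generating candidates:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

transactions = [["Bread", "Coke", "Milk"], ["Beer", "Bread"],
                ["Beer", "Coke", "Diaper", "Milk"],
                ["Beer", "Bread", "Diaper", "Milk"],
                ["Coke", "Diaper", "Milk"]]

# One-hot encode the transactions into a boolean DataFrame
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions),
                  columns=te.columns_)

# Frequent itemsets with support >= 2 of 5 transactions
print(fpgrowth(df, min_support=2/5, use_colnames=True))
```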
Frequent Itemsets: Applications
• Text mining: finding associated phrases in text
• There are lots of documents that contain the phrases
“association rules”, “data mining” and “efficient
algorithm”

• Recommendations:
• Users who buy this item often buy this item as well
• Users who watched James Bond movies, also watched
Jason Bourne movies.

• Recommendations make use of item and user similarity
Association Rule Discovery: Application
• Supermarket shelf management.
• Goal: To identify items that are bought together by
sufficiently many customers.
• Approach: Process the point-of-sale data collected
with barcode scanners to find dependencies among
items.
• A classic rule --
• If a customer buys diaper and milk, then he is very likely to
buy beer.
• So, don’t be surprised if you find six-packs stacked next to
diapers!

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining


Clustering Definition
• Given a set of data points, each having a set of
attributes, and a similarity measure among them,
find clusters such that
• Data points in one cluster are more similar to one
another.
• Data points in separate clusters are less similar to
one another.
• Similarity Measures?
• Euclidean Distance if attributes are continuous.
• Other Problem-specific Measures.

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining


Illustrating Clustering
Euclidean Distance Based Clustering in 3-D space.

Intracluster distances are minimized; intercluster distances are maximized.

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining
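A minimal sketch of Euclidean-distance clustering with scikit-learn's KMeans (scikit-learn and NumPy assumed available; the three 3-D blobs are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three synthetic blobs of 50 points each in 3-D space
points = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 3))
                    for c in ([0, 0, 0], [5, 5, 5], [0, 5, 0])])

# KMeans minimizes intracluster (squared Euclidean) distances
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
print(labels[:10])
```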


Clustering: Application 1
• Document Clustering:
• Goal: To find groups of documents that are similar to
each other based on the important terms appearing in
them.
• Approach: To identify frequently occurring terms in
each document. Form a similarity measure based on
the frequencies of different terms. Use it to cluster.
• Gain: Information Retrieval can utilize the clusters to
relate a new document or search term to clustered
documents.

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining


Coverage
• Given a set of customers and items and the
transaction relationship between the two, select a
small set of items that “covers” all users.
• For each user there is at least one item in the set that
the user has bought.

• Application:
• Create a catalog to send out that has at least one item
of interest for every customer.
Classification: Definition
• Given a collection of records (training set )
• Each record contains a set of attributes, one of the
attributes is the class.
• Find a model for class attribute as a function of
the values of other attributes.

• Goal: previously unseen records should be assigned a class as accurately as possible.
• A test set is used to determine the accuracy of the
model. Usually, the given data set is divided into
training and test sets, with training set used to build
the model and test set used to validate it.
Classification Example

Training Set (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Test Set:

Refund  Marital Status  Taxable Income  Cheat
No      Single          75K             ?
Yes     Married         50K             ?
No      Married         150K            ?
Yes     Divorced        90K             ?
No      Single          40K             ?
No      Married         80K             ?

Training Set → Learn Classifier → Model

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining
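A sketch of the learn-then-predict loop above using a scikit-learn decision tree (library assumed installed); the one-hot encoding of the categorical attributes is an illustrative choice, not part of the slides:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame({
    "Refund":  ["Yes","No","No","Yes","No","No","Yes","No","No","No"],
    "Marital": ["Single","Married","Single","Married","Divorced",
                "Married","Divorced","Single","Married","Single"],
    "Income":  [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],
    "Cheat":   ["No","No","No","No","Yes","No","No","Yes","No","Yes"],
})
X = pd.get_dummies(train[["Refund", "Marital", "Income"]])
model = DecisionTreeClassifier(random_state=0).fit(X, train["Cheat"])

test = pd.DataFrame({
    "Refund":  ["No","Yes","No","Yes","No","No"],
    "Marital": ["Single","Married","Married","Divorced","Single","Married"],
    "Income":  [75, 50, 150, 90, 40, 80],
})
# Align the test set's one-hot columns with the training columns
X_test = pd.get_dummies(test).reindex(columns=X.columns, fill_value=0)
print(model.predict(X_test))  # predicted Cheat labels for the test set
```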


How is the derived model presented?
Classification: Application 1
• Ad Click Prediction
• Goal: Predict if a user that visits a web page will click
on a displayed ad. Use it to target users with high
click probability.
• Approach:
• Collect data for users over a period of time and record who
clicks and who does not. The {click, no click} information
forms the class attribute.
• Use the history of the user (web pages browsed, queries
issued) as the features.
• Learn a classifier model and test on new users.
Classification: Application 2
• Fraud Detection
• Goal: Predict fraudulent cases in credit card
transactions.
• Approach:
• Use credit card transactions and the information on its
account-holder as attributes.
• When does a customer buy, what does he buy, how often he pays on
time, etc
• Label past transactions as fraud or fair transactions. This
forms the class attribute.
• Learn a model for the class of the transactions.
• Use this model to detect fraud by observing credit card
transactions on an account.

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining


Prediction
• Whereas classification predicts categorical (discrete,
unordered) labels, prediction models continuous-valued
functions. That is, it is used to predict missing or
unavailable numerical data values rather than class
labels.

• Although the term prediction may refer to both numeric prediction and class label prediction, here we use it to refer primarily to numeric prediction.

• Regression analysis is a statistical methodology that is most often used for numeric prediction, although other methods exist as well.
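A minimal numeric-prediction sketch: fitting a least-squares line with NumPy (the spend/revenue numbers are invented for illustration):

```python
import numpy as np

ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # predictor
revenue  = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # continuous target

# Least-squares fit of a degree-1 polynomial (a regression line)
slope, intercept = np.polyfit(ad_spend, revenue, deg=1)
print(f"predicted revenue at spend=6: {slope * 6 + intercept:.2f}")
```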
Example
• Suppose, as sales manager of AllElectronics, you would like to classify a
large set of items in the store, based on three kinds of responses to a sales
campaign: good response, mild response, and no response.

• Classification: Derive a model for each of these three classes based on the
descriptive features of the items, such as price, brand, place made, type, and
category.
• Decision tree: The decision tree may identify price as being the single factor that best distinguishes the three classes. The tree may reveal that, after price, other features that help further distinguish objects of each class include brand and place made. Such a decision tree may help you understand the impact of the given sales campaign and design a more effective campaign in the future.
• Prediction: Suppose you would like to predict the amount of revenue that each item will generate during an upcoming sale at AllElectronics, based on previous sales data. This is an example of (numeric) prediction because the model constructed will predict a continuous-valued function, or ordered value.
Outlier Analysis
• A database may contain data objects that do not comply with
the general behavior or model of the data. These data
objects are outliers.

• Most data mining methods discard outliers as noise or exceptions. However, in some applications such as fraud detection, the rare events can be more interesting than the more regularly occurring ones. The analysis of outlier data is referred to as outlier mining.

• Outliers may be detected using statistical tests that assume a distribution or probability model for the data, or using distance measures where objects that are a substantial distance from any cluster are considered outliers.
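A small sketch of the statistical approach: flag points more than three standard deviations from the mean, which assumes the data are roughly normal (the injected extremes are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.append(rng.normal(0, 1, 100), [8.5, -9.0])  # two planted outliers

z = (x - x.mean()) / x.std()   # z-score of every point
print(x[np.abs(z) > 3])        # flags the planted extreme values
```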
Evolution Analysis
• Data evolution analysis describes and models regularities or
trends for objects whose behavior changes over time.

• It may include characterization, discrimination, association and correlation analysis, classification, prediction, or clustering of time-related data. Distinct features of such an analysis include time-series data analysis, sequence or periodicity pattern matching, and similarity-based data analysis.
Link Analysis Ranking
• Given a collection of web pages that are linked to
each other, rank the pages according to
importance (authoritativeness) in the graph
• Intuition: A page gains authority if it is linked to by
another page.

• Application: When retrieving pages, the authoritativeness is factored into the ranking.
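A toy power-iteration sketch of this idea, in the spirit of PageRank (the 4-page link structure and the damping factor 0.85 are assumptions for illustration):

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # page -> pages it links to
n, d = 4, 0.85
rank = np.full(n, 1.0 / n)

for _ in range(50):
    new = np.full(n, (1 - d) / n)
    for page, outs in links.items():
        for target in outs:
            # Each page passes its authority to the pages it links to
            new[target] += d * rank[page] / len(outs)
    rank = new

print(rank)  # page 2, with the most in-links, gets the highest score
```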
Exploratory Analysis
• Trying to understand the data as a physical
phenomenon, and describe them with simple metrics
• What does the web graph look like?
• How often do people repeat the same query?
• Are friends in facebook also friends in twitter?

• The important thing is to find the right metrics and ask the right questions

• It helps our understanding of the world, and can lead to models of the phenomena we observe.
Exploratory Analysis: The Web
• What is the structure and the properties of the
web?
Exploratory Analysis: The Web
• What is the distribution of the incoming links?
Connections of Data Mining with other areas
• Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems
• Traditional techniques may be unsuitable due to
• Enormity of data
• High dimensionality of data
• Heterogeneous, distributed nature of data
• Emphasis on the use of data

(Figure: data mining at the intersection of Statistics, Machine Learning/AI/Pattern Recognition, and Database Systems.)

Tan, M. Steinbach and V. Kumar, Introduction to Data Mining

Cultures
• Databases: concentrate on large-scale (non-
main-memory) data.
• AI (machine-learning): concentrate on complex
methods, small data.
• In today’s world data is more important than algorithms
• Statistics: concentrate on models.

CS345A Data Mining on the Web: Anand Rajaraman, Jeff Ullman



Models vs. Analytic Processing

• To a database person, data mining is an extreme form of analytic processing – queries that examine large amounts of data.
• Result is the query answer.
• To a statistician, data mining is the inference of models.
• Result is the parameters of the model.

CS345A Data Mining on the Web: Anand Rajaraman, Jeff Ullman



(Way too Simple) Example


• Given a billion numbers, a DB person would
compute their average and standard deviation.
• A statistician might fit the billion points to the best
Gaussian distribution and report the mean and
standard deviation of that distribution.

CS345A Data Mining on the Web: Anand Rajaraman, Jeff Ullman


Data Mining: Confluence of Multiple Disciplines

• Database Technology
• Statistics
• Machine Learning
• Visualization
• Pattern Recognition
• Algorithm
• Distributed Computing
• Other Disciplines
Single-node architecture

(Figure: a single machine with CPU, memory, and disk, as assumed by machine learning/statistics and by "classical" data mining.)
Commodity Clusters
• Web data sets can be very large
• Tens to hundreds of terabytes
• Cannot mine on a single server
• Standard architecture emerging:
• Cluster of commodity Linux nodes, Gigabit ethernet interconnect
• Google GFS; Hadoop HDFS; Kosmix KFS
• Typical usage pattern
• Huge files (100s of GB to TB)
• Data is rarely updated in place
• Reads and appends are common
• How to organize computations on this architecture?
• Map-Reduce paradigm
Cluster Architecture
(Figure: racks of commodity nodes connected by switches; 1 Gbps between any pair of nodes in a rack, 2-10 Gbps backbone between racks; each rack contains 16-64 nodes, each with CPU, memory, and disk.)

Map-Reduce paradigm
• Map the data into key-value pairs
• E.g., map a document to word-count pairs
• Group by key
• Group all pairs of the same word, with lists of counts
• Reduce by aggregating
• E.g. sum all the counts to produce the total count.
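A single-process sketch of the word-count example, mirroring the three steps above (map, group by key, reduce):

```python
from itertools import groupby

docs = ["the cat sat", "the dog sat"]

mapped = [(word, 1) for doc in docs for word in doc.split()]  # map
mapped.sort(key=lambda kv: kv[0])                             # group by key
counts = {key: sum(v for _, v in group)                       # reduce
          for key, group in groupby(mapped, key=lambda kv: kv[0])}
print(counts)  # {'cat': 1, 'dog': 1, 'sat': 2, 'the': 2}
```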
The data analysis pipeline
• Mining is not the only step in the analysis process

Data → Preprocessing → Data Mining → Post-processing → Result

• Preprocessing: real data is noisy, incomplete and inconsistent. Data cleaning is required to make sense of the data
• Techniques: Sampling, Dimensionality Reduction, Feature selection.
• Dirty work, but it is often the most important step for the analysis.
• Post-processing: make the data actionable and useful to the user
• Statistical analysis of importance
• Visualization.

• Pre- and post-processing are often data mining tasks as well
Data Quality
• Examples of data quality problems:
• Noise and outliers
• missing values
• duplicate data
Sampling
• Sampling is the main technique employed for data
selection.
• It is often used for both the preliminary investigation of the
data and the final data analysis.

• Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming.

• Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming.
Sampling …
• The key principle for effective sampling is the
following:
• using a sample will work almost as well as using the
entire data sets, if the sample is representative

• A sample is representative if it has approximately the same property (of interest) as the original set of data
Types of Sampling
• Simple Random Sampling
• There is an equal probability of selecting any particular item

• Sampling without replacement
• As each item is selected, it is removed from the population

• Sampling with replacement
• Objects are not removed from the population as they are selected for the sample.
• In sampling with replacement, the same object can be picked more than once

• Stratified sampling
• Split the data into several partitions; then draw random samples from each partition (see the sketch below)
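A minimal sketch of the three schemes with Python's standard library:

```python
import random

random.seed(0)
population = list(range(100))

without_repl = random.sample(population, 10)   # no repeats possible
with_repl = random.choices(population, k=10)   # same item may repeat

# Stratified: partition the data, then sample from each stratum
strata = [population[:50], population[50:]]
stratified = [x for s in strata for x in random.sample(s, 5)]

print(without_repl, with_repl, stratified, sep="\n")
```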
Sample Size

(Figure: the same data set sampled at 8000, 2000, and 500 points.)


Sample Size
• What sample size is necessary to get at least one object from each of 10 groups? (A simulation sketch follows.)
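A quick simulation sketch of this coupon-collector question; with 10 equally likely groups, the expected number of uniform random draws is 10·(1 + 1/2 + … + 1/10) ≈ 29.3:

```python
import random

def draws_until_all_seen(groups=10):
    # Draw uniformly at random until every group has appeared once
    seen, n = set(), 0
    while len(seen) < groups:
        seen.add(random.randrange(groups))
        n += 1
    return n

random.seed(0)
trials = [draws_until_all_seen() for _ in range(10_000)]
print(sum(trials) / len(trials))  # empirically close to 29.3
```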
A data mining challenge
• You are reading a stream of integers, and you want to
sample one integer uniformly at random but you do
not know the size (N) of the stream in advance. You
can only keep a constant amount of integers in
memory

• How do you sample?
• Hint: the last integer in the stream should have probability 1/N to be selected.

• Reservoir Sampling:
• Standard interview question (a sketch follows)
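A sketch of size-1 reservoir sampling, matching the hint: keep the i-th element with probability 1/i, which leaves every element selected with probability exactly 1/N:

```python
import random

def sample_one(stream):
    choice = None
    for i, x in enumerate(stream, start=1):
        if random.randrange(i) == 0:  # true with probability 1/i
            choice = x                # replace the current sample
    return choice

random.seed(0)
print(sample_one(range(1000)))  # uniform over the 1000 stream items
```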

Meaningfulness of Answers
• A big data-mining risk is that you will “discover”
patterns that are meaningless.
• Statisticians call it Bonferroni’s principle:
(roughly) if you look in more places for
interesting patterns than your amount of data
will support, you are bound to find crap.
• The Rhine Paradox: a great example of how
not to conduct scientific research.

CS345A Data Mining on the Web: Anand Rajaraman, Jeff Ullman



Rhine Paradox – (1)

• Joseph Rhine was a parapsychologist who hypothesized that some people had Extra-Sensory Perception (ESP). He devised an experiment in which subjects guessed the colors of 10 hidden cards, and found that roughly 1 in 1000 guessed all 10 correctly; he declared that these people had ESP.

Rhine Paradox – (2)

• He told these people they had ESP and called them in for another test of the same type.
• Alas, he discovered that almost all of them had lost their ESP.
• What did he conclude?
• Answer on next slide.

CS345A Data Mining on the Web: Anand Rajaraman, Jeff Ullman



Rhine Paradox – (3)


• He concluded that you shouldn’t tell people they
have ESP; it causes them to lose it.

CS345A Data Mining on the Web: Anand Rajaraman, Jeff Ullman
