DM UNIT-I Notes
UNIT I
The major reason that data mining has attracted a great deal of attention in the information
industry in recent years is the wide availability of huge amounts of data and the
imminent need for turning such data into useful information and knowledge. The
information and knowledge gained can be used for applications ranging from business
management, production control, and market analysis, to engineering design and science
exploration.
What is data mining?
Data mining refers to extracting or “mining” knowledge from large amounts of data. There
are many other terms related to data mining, such as knowledge mining, knowledge
extraction, data/pattern analysis, data archaeology, and data dredging. Many people treat data
mining as a synonym for another popularly used term, “Knowledge Discovery in
Databases”, or KDD.
Data mining is the process of discovering interesting knowledge from large amounts of data
stored either in databases, data warehouses, or other information repositories. Based on this
view, the architecture of a typical data mining system may have the following major
components:
1. A database, data warehouse, or other information repository: one or a set of databases, data warehouses, spreadsheets, or other kinds of information repositories containing the data to be mined.
2. A database or data warehouse server: responsible for fetching the relevant data, based on the user’s data mining request.
3. A knowledge base: the domain knowledge that is used to guide the search or to evaluate the interestingness of resulting patterns.
4. A data mining engine: a set of functional modules for tasks such as characterization, association analysis, classification, cluster analysis, and evolution analysis.
5. A pattern evaluation module: interacts with the data mining modules so as to focus the search towards interesting patterns.
6. A graphical user interface that allows the user an interactive approach to the data mining system.
How is a data warehouse different from a database? How are they similar?
Data mining can be performed on a variety of data repositories, including flat files, relational databases, data warehouses, transactional databases, and advanced database systems such as spatial databases, time-series databases, text databases, and multimedia databases.
Flat files: Flat files are actually the most common data source for data mining
algorithms, especially at the research level. Flat files are simple data files in text or
binary format with a structure known by the data mining algorithm to be applied. The
data in these files can be transactions, time-series data, scientific measurements, etc.
The most commonly used query language for relational databases is SQL, which allows
retrieval and manipulation of the data stored in the tables, as well as the calculation of
aggregate functions such as average, sum, min, max, and count. For instance, an SQL
query could select the videos grouped by category, as sketched below.
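The notes do not reproduce the query itself, so the following is a minimal sketch of such a GROUP BY query, run through Python's built-in sqlite3 module against a hypothetical videos(video_id, title, category) table; the table name, columns, and rows are assumptions, not taken from the notes.

```python
import sqlite3

# Build a small in-memory table standing in for a video store's catalog.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE videos (video_id INTEGER, title TEXT, category TEXT)")
conn.executemany(
    "INSERT INTO videos VALUES (?, ?, ?)",
    [(1, "Alien", "SciFi"), (2, "Heat", "Action"), (3, "Blade Runner", "SciFi")],
)

# The SQL query: one row per category with the number of videos in it.
for category, n in conn.execute(
    "SELECT category, COUNT(*) FROM videos GROUP BY category"
):
    print(category, n)
```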
Data mining algorithms using relational databases can be more versatile than data
mining algorithms specifically written for flat files, since they can take advantage of the
structure inherent to relational databases. While data mining can benefit from SQL for
data selection, transformation and consolidation, it goes beyond what SQL could
provide, such as
predicting, comparing, detecting deviations, etc.
Data warehouses
In order to facilitate decision making, the data in a data warehouse are organized
around major subjects, such as customer, item, supplier, and activity. The data are
stored to provide information from a historical perspective and are typically
summarized.
The data cube structure that stores the primitive or lowest level of information is called
a base cuboid. Its corresponding higher level multidimensional (cube) structures are
called (non-base) cuboids. A base cuboid together with all of its corresponding higher
level cuboids form a data cube. By providing multidimensional data views and the
precomputation of summarized data, data warehouse systems are well suited for On-
Line Analytical Processing, or OLAP. OLAP operations make use of background
knowledge regarding the domain of the data being studied in order to allow the
presentation of data at different levels of abstraction. Such operations accommodate
different user viewpoints. Examples of OLAP operations include drill-down and roll-up,
which allow the user to view the data at differing degrees of summarization.
Transactional databases
In general, a transactional database consists of a flat file where each record represents a
transaction. A transaction typically includes a unique transaction identity number (trans ID), and a
list of the items making up the transaction (such as items purchased in a store) as shown below:
SALES
Trans-ID List of item_ID’s
T100 I1,I3,I8
…….. ………
• Time-Series Databases: Time-series databases contain time-related data such as stock
market data or logged activities. These databases usually have a continuous flow of new
data coming in, which sometimes calls for challenging real-time analysis.
Data mining in such databases commonly includes the study of trends and correlations
between evolutions of different variables, as well as the prediction of trends and
movements of the variables in time.
• A text database is a database that contains text documents or other word descriptions
in the form of long sentences or paragraphs, such as product specifications, error or bug
reports, warning messages, summary reports, notes, or other documents.
• A multimedia database stores images, audio, and video data, and is used in
applications such as picture content-based retrieval, voice-mail systems, video-on-
demand systems, the World Wide Web, and speech-based user interfaces.
• The World-Wide Web provides rich, world-wide, on-line information services, where
data objects are linked together to facilitate interactive access. Some examples of
distributed information services associated with the World-Wide Web include America
Online, Yahoo!, AltaVista, and Prodigy.
Data mining functionalities/Data mining tasks: what kinds of patterns
can be mined?
Data mining functionalities are used to specify the kind of patterns to be found in data
mining tasks. In general, data mining tasks can be classified into two categories:
• Descriptive
• Predictive
Descriptive mining tasks characterize the general properties of the data in the database.
Predictive mining tasks perform inference on the current data in order to make
predictions.
Describe data mining functionalities, and the kinds of patterns they can
discover (or)
Define each of the following data mining functionalities: characterization,
discrimination, association and correlation analysis, classification, prediction, clustering,
and evolution analysis. Give examples of each data mining functionality, using a real-life
database that you are familiar with.
Concept/class description: characterization and discrimination
Data can be associated with classes or concepts. A class/concept description describes a given set of data in a concise and summarative manner, presenting interesting general properties of the data. These descriptions can be derived via data characterization, data discrimination, or both.
Data characterization is a summarization of the general characteristics or features of a target class of data. Example: the general characteristics of students with high GPA’s may be summarized to produce a general profile of those students.
Data Discrimination is a comparison of the general features of target class data objects
with the general features of objects from one or a set of contrasting classes.
Example
The general features of students with high GPA’s may be compared with the general
features of students with low GPA’s. The resulting description could be a general
comparative profile of the students such as 75% of the students with high GPA’s are
fourth-year computing science students, while 65% of the students with low GPA’s are
not.
Association analysis
Association analysis is the discovery of association rules showing attribute-value conditions that occur together frequently in a given set of data. For example, a data mining system may find a rule such as
major(X, “computing science”) ⇒ owns(X, “personal computer”) [support = 12%, confidence = 98%]
where X is a variable representing a student. The rule indicates that of the students
under study, 12% (support) major in computing science and own a personal computer.
There is a 98% probability (confidence, or certainty) that a student in this group owns a
personal computer.
Example:
A grocery store retailer wants to decide whether to put bread on sale. To help determine the
impact of this decision, the retailer generates association rules that show what other
products are frequently purchased with bread. He finds that 60% of the time that bread is
sold, pretzels are also sold, and that 70% of the time jelly is also sold. Based on these facts, he
tries to capitalize on the association between bread, pretzels, and jelly by placing some
pretzels and jelly at the end of the aisle where the bread is placed. In addition, he
decides not to place either of these items on sale at the same time.
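A minimal sketch (not from the notes) showing how the support and confidence of a rule such as {bread} ⇒ {pretzels} can be computed from a small, invented list of market-basket transactions:

```python
# Each transaction is the set of items bought together (invented data).
transactions = [
    {"bread", "pretzels", "jelly"},
    {"bread", "pretzels"},
    {"bread", "jelly"},
    {"milk", "eggs"},
    {"bread", "pretzels", "milk"},
]

def support(itemset):
    """Fraction of all transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Of the transactions containing `lhs`, the fraction that also contain `rhs`."""
    return support(lhs | rhs) / support(lhs)

print(support({"bread", "pretzels"}))       # 0.6
print(confidence({"bread"}, {"pretzels"}))  # 0.75
```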
Classification and prediction
Classification:
Classification is used in applications such as:
o credit approval
o target marketing
o medical diagnosis
o treatment effectiveness analysis
Classification can be defined as the process of finding a model (or function) that
describes and distinguishes data classes or concepts, for the purpose of being able to use
the model to predict the class of objects whose class label is unknown. The derived
model is based on the analysis of a set of training data (i.e., data objects whose class
label is known).
Example:
An airport security screening station is used to determine if passengers are potential
terrorists or criminals. To do this, the face of each passenger is scanned and its basic
pattern (distance between the eyes, size and shape of the mouth, shape of the head, etc.) is identified. This
pattern is compared to entries in a database to see if it matches any patterns that are
associated with known offenders.
The derived model may be represented in various forms, such as:
1) IF-THEN (classification) rules,
2) Decision trees, or
3) Neural networks.
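A minimal sketch (not from the notes): training a decision tree classifier on a tiny invented credit-approval data set with scikit-learn and printing its learned IF-THEN-like structure. The feature names, values, and class labels are assumptions for illustration only.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [income (in $1000s), years_employed]  -- invented training data
X = [[20, 1], [35, 4], [60, 10], [18, 0], [80, 12], [40, 2]]
y = ["reject", "approve", "approve", "reject", "approve", "reject"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(clf, feature_names=["income", "years_employed"]))  # tree as rules
print(clf.predict([[55, 8]]))  # predicted class label for a new applicant
```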
Prediction:
Finding some missing or unavailable data values rather than class labels is referred to as
prediction. Although prediction may refer to both data value prediction and class label
prediction, it is usually confined to data value prediction and thus is distinct from
classification. Prediction also encompasses the identification of distribution trends
based on the available data.
Example:
Predicting flooding is a difficult problem. One approach uses monitors placed at various
points in the river. These monitors collect data relevant to flood prediction: water level,
rain amount, time, humidity, etc. The water level at a potential flooding point in the
river can be predicted based on the data collected by the sensors upriver from this
point. The prediction must be made with respect to the time the data were collected.
Classification differs from prediction in that the former is to construct a set of models
(or functions) that describe and distinguish data class or concepts, whereas the latter is
to predict some missing or unavailable, and often numerical, data values. Their
similarity is that they are both tools for prediction: Classification is used for predicting
the class label of data objects and prediction is typically used for predicting missing
numerical data values.
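A minimal sketch (not from the notes) of numeric prediction by simple linear regression, using invented upstream/downstream water-level readings in the spirit of the flooding example:

```python
import numpy as np

upstream = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])    # upstream sensor reading (m)
downstream = np.array([0.8, 1.2, 1.7, 2.1, 2.6, 3.0])  # level later observed downstream (m)

# Fit downstream = slope * upstream + intercept.
slope, intercept = np.polyfit(upstream, downstream, deg=1)

new_reading = 2.8
print(slope * new_reading + intercept)  # predicted downstream water level
```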
Clustering analysis
The objects are clustered or grouped based on the principle of maximizing the intraclass
similarity and minimizing the interclass similarity.
Clustering can also facilitate taxonomy formation, that is, the organization of
observations into a hierarchy of classes that group similar events together.
Example:
A certain national department store chain creates special catalogs targeted to various
demographic groups based on attributes such as income, location and physical
characteristics of potential customers (age, height, weight, etc). To determine the target
mailings of the various catalogs and to assist in the creation of new, more specific
catalogs, the company performs a clustering of potential customers based on the
determined attribute values. The results of the clustering exercise are then used by
management to create special catalogs and distribute them to the correct target
population based on the cluster for that catalog.
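A minimal sketch (not from the notes) clustering hypothetical customers by income and age with k-means from scikit-learn; the customer values and the choice of two clusters are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [25, 23], [27, 25], [30, 28],   # lower-income, younger customers
    [85, 45], [90, 50], [95, 48],   # higher-income, older customers
])  # columns: income ($1000s), age

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # cluster assignment for each customer
print(kmeans.cluster_centers_)  # centroid of each cluster
```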
Outlier analysis: A database may contain data objects that do not comply with the
general behavior or model of the data. These data objects are outliers. In other words, data objects
which do not fall within any cluster are called outlier data objects. Noisy or
exceptional data are also called outlier data. The analysis of outlier data is referred to
as outlier mining.
Example
Outlier analysis may uncover fraudulent usage of credit cards by detecting purchases of
extremely large amounts for a given account number in comparison to regular charges
incurred by the same account. Outlier values may also be detected with respect to the
location and type of purchase, or the purchase frequency.
Evolution analysis
Data evolution analysis describes and models regularities or trends for objects whose behavior changes over time.
Example:
The results data of a college for the last several years would give an idea of the trend in the quality of the graduates produced by it.
Correlation analysis
A correlation coefficient of 0.0 indicates no relationship between the two variables. That
is, one cannot use the scores on one variable to tell anything about the scores on the
second variable.
What makes a pattern interesting?
Answer:
A pattern is interesting if it is (1) easily understood by humans, (2) valid on new or test data with some degree of certainty, (3) potentially useful, and (4) novel.
A pattern is also interesting if it validates a hypothesis that the user sought to confirm.
An interesting pattern represents knowledge.
There are many data mining systems available or being developed. Some are specialized
systems dedicated to a given data source or confined to limited data mining
functionalities, while others are more versatile and comprehensive. Data mining systems can
be categorized according to various criteria; among other classifications are the following:
• Classification according to the data model drawn on: this classification categorizes
data mining systems based on the data model involved such as relational database,
object- oriented database, data warehouse, transactional, etc.
• Task-relevant data: This primitive specifies the data upon which mining is to be
performed. It involves specifying the database and tables or data warehouse containing
the relevant data, conditions for selecting the relevant data, the relevant attributes or
dimensions for exploration, and instructions regarding the ordering or grouping of the
data retrieved.
• Knowledge type to be mined: This primitive specifies the specific data mining
function to be performed, such as characterization, discrimination, association,
classification, clustering, or evolution analysis. As well, the user can be more specific and
provide pattern templates that all discovered patterns must match. These templates or
meta patterns (also called meta rules or meta queries), can be used to guide the
discovery process.
• Background knowledge: This primitive allows users to specify knowledge they have
about the domain to be mined. Such knowledge can be used to guide the knowledge
discovery process and evaluate the patterns that are found. Of the several kinds of
background knowledge, this chapter focuses on concept hierarchies.
• No coupling:
The data mining system uses sources such as flat files to obtain the initial data set to be
mined since no database system or data warehouse system functions are implemented
as part of the process. Thus, this architecture represents a poor design choice.
• Loose coupling:
The data mining system is not integrated with the database or data warehouse system
beyond their use as the source of the initial data set to be mined, and possible use in
storage of the results. Thus, this architecture can take advantage of the flexibility,
efficiency and features such as indexing that the database and data warehousing
systems may provide. However, it is difficult for loose coupling to achieve high
scalability and good performance with large data sets as many such systems are
memory-based.
• Semitight coupling:
Some of the data mining primitives, such as aggregation, sorting, or precomputation of
statistical functions, are efficiently implemented in the database or data warehouse
system, for use by the data mining system during mining-query processing. Also, some
frequently used intermediate mining results can be precomputed and stored in the
database or data warehouse system, thereby enhancing the performance of the data
mining system.
• Tight coupling:
The database or data warehouse system is fully integrated as part of the data mining
system and thereby provides optimized data mining query processing. Thus, the data
mining subsystem is treated as one functional component of an information system.
This is a highly desirable architecture as it facilitates efficient implementations of data
mining functions, high system performance, and an integrated information processing
environment.
From the descriptions of the architectures provided above, it can be seen that tight
coupling is the best alternative without respect to technical or implementation issues.
However, as much of the technical infrastructure needed in a tightly coupled system is
still evolving, implementation of such a system is non-trivial. Therefore, the most
popular architecture is currently semi tight coupling as it provides a compromise
between loose and tight coupling.
Major issues in data mining
• Data mining query languages and ad-hoc data mining: Just as relational query
languages (such as SQL) allow users to pose ad-hoc queries for data retrieval, high-level
data mining query languages need to be developed to allow users to describe ad-hoc data mining tasks.
• Handling outlier or incomplete data: The data stored in a database may reflect
outliers: noise, exceptional cases, or incomplete data objects. These objects may confuse
the analysis process, causing overfitting of the data to the knowledge model
constructed. As a result, the accuracy of the discovered patterns can be poor. Data
cleaning methods and data analysis methods that can handle outliers are required.
• Handling of relational and complex types of data: Since relational databases and
data warehouses are widely used, the development of efficient and effective data mining
systems for such data is important.
Data preprocessing
Data preprocessing describes any type of processing performed on raw data to prepare
it for another processing procedure. Commonly used as a preliminary data mining
practice, data preprocessing transforms the data into a format that will be more easily
and effectively processed for the purpose of the user.
Data in the real world is dirty: it can be incomplete, noisy, and inconsistent.
Such data needs to be preprocessed in order to improve the quality of the data and,
consequently, the quality of the mining results.
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and resolve
inconsistencies
Data integration
Integration of multiple databases, data cubes, or files into a coherent data store.
Data transformation
Normalization and aggregation of the data into forms appropriate for mining.
Data reduction
Obtains a reduced representation of the data set that is much smaller in volume but produces the same (or almost the same) analytical results.
Data discretization
Part of data reduction, but with particular importance, especially for numerical data: raw values are replaced by interval or concept labels.
Descriptive Data Measures
A measure of central tendency is a single value that attempts to describe a set of data by
identifying the central position within that set of data. As such, measures of central
tendency are sometimes called measures of central location.
Mean: The mean, or average, of n numbers x1, x2, …, xn is the sum of the numbers divided by n. That is:
mean = (x1 + x2 + … + xn) / n
Example 1
The marks of seven students in a mathematics test with a maximum possible mark of
20 are given below:
15 13 18 16 14 17 12
Solution:
mean = (15 + 13 + 18 + 16 + 14 + 17 + 12) / 7 = 105 / 7 = 15
The mean mark is 15.
Midrange
The midrange of a data set is the average of the minimum and maximum values.
Median: The median of a set of numbers is the middle number when the numbers are written in
order. If n is even, the median is the average of the two middle numbers.
Example 2
The marks of nine students in a geography test that had a maximum possible mark of
50 are given below:
47 35 37 32 38 39 36 34 35
Solution:
Arrange the data values in order from the lowest value to the highest
value: 32 34 35 35 36 37 38 39 47
The fifth data value, 36, is the middle value in this arrangement.
Note:
In general:
If the number of values in the data set is even, then the median is the average of the
two middle values.
Example 3
Find the median of the following data set: 10 12 13 16 17 18 19 21
Solution:
Arrange the data values in order from the lowest value to the highest
value: 10 12 13 16 17 18 19 21
The number of values in the data set is 8, which is even. So, the median is the
average of the two middle values: (16 + 17) / 2 = 16.5.
Trimmed mean
The trimmed mean is the mean obtained after chopping off values at the high and low extremes. For example, a 2% trimmed mean discards the lowest 2% and highest 2% of the values before computing the mean of the rest.
Mode of numbers is the number that occurs most frequently. If two numbers tie for
most frequent occurrence, the collection has two modes and is called bimodal.
The mode has applications in printing. For example, it is important to print more of
the most popular books; because printing different books in equal numbers would
cause a shortage of some books and an oversupply of others.
Example 4
Find the mode of the following data set: 48 44 48 45 42 49 48
Solution:
The mode is 48, since it occurs most often (three times).
It is possible for a set of data values to have more than one mode.
If there are two data values that occur most frequently, we say that the set of
data values is bimodal.
If there are three data values that occur most frequently, we say that the set of
data values is trimodal.
If two or more data values occur most frequently, we say that the set of
data values is multimodal.
If no data value occurs more frequently than the others, we say that
the set of data values has no mode.
The mean, median and mode of a data set are collectively known as measures of
central tendency as these three measures focus on where the data is centered or
clustered. To analyze data using the mean, median and mode, we need to use the
most appropriate measure of central tendency. The following points should be
remembered:
The mean is useful for predicting future results when there are no extreme
values in the data set. However, the impact of extreme values on the mean may
be important and should be considered. E.g. the impact of a stock market crash
on average investment returns.
The median may be more useful than the mean when there are extreme
values in the data set as it is not affected by the extreme values.
The mode is useful when the most common item, characteristic or value of a data
set is required.
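A minimal sketch (not from the notes) computing these measures with Python's standard statistics module, using the marks from Example 1 and the data from Example 4:

```python
import statistics

marks = [15, 13, 18, 16, 14, 17, 12]

print(statistics.mean(marks))                          # 15
print(statistics.median(marks))                        # 15
print((min(marks) + max(marks)) / 2)                   # midrange: (12 + 18) / 2 = 15
print(statistics.mode([48, 44, 48, 45, 42, 49, 48]))   # 48, the mode from Example 4
```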
Measures of Dispersion
Measures of dispersion measure how spread out a set of data is. The two most
commonly used measures of dispersion are the variance and the standard deviation.
Rather than showing how data are similar, they show how the data differ: their
variation, spread, or dispersion.
Other measures of dispersion that may be encountered include the quartiles, the
interquartile range (IQR), the five-number summary, the range, and box plots.
Variance and Standard Deviation
Very different sets of numbers can have the same mean. You will now study two
measures of dispersion, which give you an idea of how much the numbers in a set
differ from the mean of the set. These two measures are called the variance of the set
and the standard deviation of the set.
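The notes omit the formulas, so as a sketch based on the standard definitions (an assumption about what was intended): the population variance is the average of the squared deviations from the mean, and the standard deviation is its square root, here computed for the marks from Example 1.

```python
import statistics

marks = [15, 13, 18, 16, 14, 17, 12]
mean = statistics.mean(marks)

variance = sum((x - mean) ** 2 for x in marks) / len(marks)
print(variance)                     # 4.0
print(variance ** 0.5)              # standard deviation = 2.0
print(statistics.pvariance(marks))  # same result via the standard library
```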
Percentile
Percentiles are values that divide a sample of data into one hundred groups containing
(as far as possible) equal numbers of observations.
The pth percentile of a distribution is the value such that p percent of the observations
fall at or below it.
The most commonly used percentiles other than the median are the 25th percentile
and the 75th percentile.
The 25th percentile demarcates the first quartile, the median or 50th percentile
demarcates the second quartile, the 75th percentile demarcates the third quartile, and
the 100th percentile demarcates the fourth quartile.
Quartiles
Quartiles are numbers that divide an ordered data set into four portions, each
containing approximately one-fourth of the data. Twenty-five percent of the data values
come before the first quartile (Q1). The median is the second quartile (Q2); 50% of the
data values come before the median. Seventy-five percent of the data values come
before the third quartile (Q3).
Q1 = 25th percentile ≈ the (n × 25/100)th value of the ordered data; Q3 = 75th percentile ≈ the (n × 75/100)th value.
The interquartile range (IQR) is the length of the interval between the lower quartile (Q1)
and the upper quartile (Q3). This interval indicates the central, or middle, 50% of a data
set:
IQR = Q3 − Q1
Range
The range of a set of data is the difference between its largest (maximum) and smallest
(minimum) values. In the statistical world, the range is reported as a single number, the
difference between the maximum and the minimum. Sometimes, the range is instead reported as
“from (the minimum) to (the maximum),” i.e., two numbers.
Example 1:
If the minimum value of a data set is 3 and the maximum is 8, the range can be reported as 3–8 (or as the single number 8 − 3 = 5). The range gives only minimal information about the spread
of the data, by defining the two extremes. It says nothing about how the data are
distributed between those two endpoints.
Example2:
In this example we demonstrate how to find the minimum value, maximum value, and
range of the following data: 29, 31, 24, 29, 30, 25
1. Find the minimum value: 24.
2. Find the maximum value: 31.
3. Calculate the range: 31 − 24 = 7.
Five-Number Summary
The Five-Number Summary of a data set is a five-item list comprising the minimum
value, first quartile, median, third quartile, and maximum value of the set.
Box plots
A box plot is a graph used to represent the range, median, quartiles and inter quartile
range of a set of data values.
(i) Draw a box to represent the middle 50% of the observations of the data set.
(ii) Show the median by drawing a vertical line within the box.
(iii) Draw the lines (called whiskers) from the lower and upper ends of the box to
the minimum and maximum values of the data set respectively, as shown in the
following diagram.
Example:
Draw a box plot for the following set of test scores: 76 79 76 74 75 71 85 82 82 79 81
Step 1: Arrange the scores in order of magnitude: 71 74 75 76 76 79 79 81 82 82 85 (n = 11)
Step 2: Q1 = 25th percentile = the 11 × (25/100) ≈ 3rd value = 75
Step 3: Q2 (median) = 50th percentile = the 11 × (50/100) ≈ 6th value = 79
Step 4: Q3 = 75th percentile = the 11 × (75/100) ≈ 9th value = 82
Step 5: Min X = 71
Step 6: Max X = 85
Since these quartiles and the median represent the middle points of the data, they split it into four roughly equal parts.
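A minimal sketch (not part of the notes) that computes this five-number summary in Python, using the same rank-based convention as the steps above; note that library functions such as numpy.percentile may use slightly different quartile conventions.

```python
import math

scores = sorted([76, 79, 76, 74, 75, 71, 85, 82, 82, 79, 81])

def nearest_rank(data, p):
    """Value at the ceil(n * p / 100)-th position of the sorted data."""
    return data[math.ceil(len(data) * p / 100) - 1]

five_number = (scores[0], nearest_rank(scores, 25), nearest_rank(scores, 50),
               nearest_rank(scores, 75), scores[-1])
print(five_number)  # (71, 75, 79, 82, 85): min, Q1, median, Q3, max
```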
Outliers
Outlier data are data values that fall outside the expected range of the rest of the data. Outliers will be any points below
Q1 − 1.5×IQR or above Q3 + 1.5×IQR.
Example:
10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6, 14.7, 14.7, 14.7, 14.9, 15.1, 15.9, 16.4
To find out if there are any outliers, we first have to find the IQR. There are fifteen
data points, so the median is the 8th value, 14.6. That is, Q2 = 14.6.
Q1 is the fourth value in the list and Q3 is the twelfth: Q1 = 14.4 and Q3 = 14.9, so IQR = 14.9 − 14.4 = 0.5.
The values for Q1 − 1.5×IQR = 13.65 and Q3 + 1.5×IQR = 15.65 are the “fences” that mark off the
“reasonable” values from the outlier values. Outliers lie outside the fences; here 10.2, 15.9, and 16.4 are outliers.
Graphic Displays of Basic Descriptive Data Summaries
1 Histogram
A histogram is a way of summarizing data that are measured on an interval scale (either
discrete or continuous). It is often used in exploratory data analysis to illustrate the
major features of the distribution of the data in a convenient form. It divides up the
range of possible values in a data set into classes or groups. For each group, a rectangle
is constructed with a base length equal to the range of values in that specific group, and
an area proportional to the number of observations falling into that group. This means
that the rectangles may have non-uniform heights.
The histogram is only appropriate for variables whose values are numerical and
measured on an interval scale. It is generally used when dealing with large data sets
(>100 observations)
A histogram can also help detect any unusual observations (outliers), or any gaps in
the data set.
2 Scatter Plot
A scatter plot is a useful summary of a set of bivariate data (two variables), usually
drawn before working out a linear correlation coefficient or fitting a regression line. It
gives a good visual picture of the relationship between the two variables, and aids the
interpretation of the correlation coefficient or regression model.
Each unit contributes one point to the scatter plot, on which points are plotted but
not joined. The resulting pattern indicates the type and strength of the relationship
between the two variables.
A scatter plot will also show up a non-linear relationship between the two variables and
whether or not there exist any outliers in the data.
3 Loess curve
It is another important exploratory graphic aid that adds a smooth curve to a scatter
plot in order to provide better perception of the pattern of dependence. The word loess
is short for “local regression.”
4 Box plot
The picture produced consists of the most extreme values in the data set (maximum and
minimum values), the lower and upper quartiles, and the median.
5 Quantile plot
Displays all of the data (allowing the user to assess both the overall behavior and
unusual occurrences)
Plots quantile information
For data xi sorted in increasing order, fi indicates that approximately
100·fi% of the data are below or equal to the value xi.
The f quantile is the data value below which approximately a decimal fraction f of the
data is found. That data value is denoted q(f). Each data point can be assigned an f-value.
Let the data x1, x2, …, xn be sorted in increasing order, so that xi has rank i (i = 1, 2, …, n). The f-value of xi is then computed as fi = (i − 0.5) / n.
6 Quantile–Quantile (Q–Q) plot
Comparing two distributions quantile by quantile in this way is much more detailed than a simple comparison of their means or medians.
A normal distribution is often a reasonable model for the data. Without inspecting the
data, however, it is risky to assume a normal distribution. There are a number of graphs
that can be used to check the deviations of the data from the normal distribution. The
most useful tool for assessing normality is a quantile or QQ plot. This is a scatter plot
with the quantiles of the scores on the horizontal axis and the expected normal scores
on the vertical axis.
In other words, it is a graph that shows the quantiles of one univariate distribution
against the corresponding quantiles of another. It is a powerful visualization tool in that
it allows the user to view whether there is a shift in going from one distribution to
another.
First, we sort the data from smallest to largest. A plot of these scores against the
expected normal scores should reveal a straight line.
The expected normal scores are calculated by taking the z-scores of (i − ½)/n, where i is
the rank of the observation in increasing order.
Curvature of the points indicates departures from normality. This plot is also useful for
detecting outliers: the outliers appear as points that are far away from the overall
pattern of points.
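As an illustrative sketch (not from the notes, and assuming SciPy is available), the expected normal scores can be computed exactly as described, as z-scores of (i − ½)/n; plotting the sorted data against them gives the normal Q–Q plot. The data values are invented.

```python
import numpy as np
from scipy.stats import norm

data = np.sort(np.array([10.2, 14.1, 14.4, 14.4, 14.5, 14.6, 14.7, 14.9, 15.1, 15.9]))
n = len(data)
ranks = np.arange(1, n + 1)
expected = norm.ppf((ranks - 0.5) / n)   # expected normal scores (z-scores)

for x, z in zip(data, expected):
    print(f"{x:6.1f}  {z:6.2f}")         # roughly a straight line if the data are normal
```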
Data Cleaning
Data cleaning routines attempt to fill in missing values, smooth out noise while
identifying outliers, and correct inconsistencies in the data.
Missing Values
The various methods for handling the problem of missing values in data tuples include:
(a) Ignoring the tuple: This is usually done when the class label is missing (assuming
the mining task involves classification or description). This method is not very effective
unless the tuple contains several
attributes with missing values. It is especially poor when the percentage of missing
values per attribute
varies considerably.
(b) Manually filling in the missing value: In general, this approach is time-consuming
and may not be a reasonable task for large data sets with many missing values,
especially when the value to be filled in is not easily determined.
(c) Using a global constant to fill in the missing value: Replace all missing attribute
values by the same constant, such as a label like “Unknown,” or −∞. If missing values are
replaced by, say, “Unknown,” then the mining program may mistakenly think that they
form an interesting concept, since they all have a value in common — that of
“Unknown.” Hence, although this method is simple, it is not recommended.
(d) Using the attribute mean for quantitative (numeric) values or attribute mode
for categorical (nominal) values, for all samples belonging to the same class as the
given tuple: For example, if classifying customers according to credit risk, replace the
missing value with the average income value for customers in the same credit risk
category as that of the given tuple.
(e) Using the most probable value to fill in the missing value: This may be
determined with regression, inference-based tools using Bayesian formalism, or
decision tree induction. For example, using the other customer attributes in your data
set, you may construct a decision tree to predict the missing values for income.
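A minimal sketch (not from the notes) of strategies (c) and (d) using pandas on a hypothetical customer table; the column names and values are invented for illustration.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "credit_risk": ["low", "low", "high", "high"],
    "income":      [52000, np.nan, 23000, 25000],
})

# (c) fill with a global constant
filled_const = df["income"].fillna(-1)
print(filled_const.tolist())

# (d) fill with the mean income of customers in the same credit-risk class
class_mean = df.groupby("credit_risk")["income"].transform("mean")
df["income"] = df["income"].fillna(class_mean)
print(df)
```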
Noisy data:
Noise is a random error or variance in a measured variable. Data smoothing tech is used
for removing such noisy data.
1 Binning methods: Binning methods smooth a sorted data value by consulting its
“neighborhood”, that is, the values around it. The sorted values are distributed into a number of
'buckets', or bins. Because binning methods consult the neighborhood of values, they
perform local smoothing.
In this technique, the data are first sorted and then partitioned into (equal-frequency) bins. The values in each bin can then be smoothed by the bin means, bin medians, or bin boundaries, as follows:
a. Smoothing by bin means: Each value in the bin is replaced by the mean
value of the bin.
b. Smoothing by bin medians: Each value in the bin is replaced by the bin
median.
c. Smoothing by boundaries: The min and max values of a bin are identified
as the bin boundaries. Each bin value is replaced by the closest boundary
value.
Example: Binning Methods for Data Smoothing
o Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
o Partition into (equal-frequency) bins, each containing four values:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
In smoothing by bin means, each value in a bin is replaced by the mean value of the bin.
For example, the mean of the values 4, 8, 9, and 15 in Bin 1 is 9. Therefore, each original
value in this bin is replaced by the value 9. Similarly, smoothing by bin medians can be
employed, in which each bin value is replaced by the bin median. In smoothing by bin
boundaries, the minimum and maximum values in a given bin are identified as the bin
boundaries. Each bin value is then replaced by the closest boundary value.
Suppose that the data for analysis include the attribute age. The age values for the data
tuples are (in
increasing order): 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35,
35, 36, 40, 45, 46, 52, 70.
(a) Use smoothing by bin means to smooth the above data, using a bin depth of 3.
Illustrate your steps.
Comment on the effect of this technique for the given data.
The following steps are required to smooth the above data using smoothing by bin
means with a bin
depth of 3.
• Step 1: Sort the data. (This step is not required here as the data are already sorted.)
• Step 2: Partition the data into equal-frequency bins of depth 3:
Bin 1: 13, 15, 16 Bin 2: 16, 19, 20 Bin 3: 20, 21, 22
Bin 4: 22, 25, 25 Bin 5: 25, 25, 30 Bin 6: 33, 33, 35
Bin 7: 35, 35, 35 Bin 8: 36, 40, 45 Bin 9: 46, 52, 70
• Step 3: Calculate the arithmetic mean of each bin.
• Step 4: Replace each of the values in each bin by the arithmetic mean calculated for the bin.
Bin 1: 14, 14, 14 Bin 2: 18, 18, 18 Bin 3: 21, 21, 21
Bin 4: 24, 24, 24 Bin 5: 26, 26, 26 Bin 6: 33, 33, 33
Bin 7: 35, 35, 35 Bin 8: 40, 40, 40 Bin 9: 56, 56, 56
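A minimal sketch (not from the notes) that reproduces this smoothing by bin means in plain Python; note that the notes list the bin means rounded to whole numbers, whereas the exact means are fractional.

```python
ages = [13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25,
        30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70]

depth = 3
smoothed = []
for i in range(0, len(ages), depth):
    bin_values = ages[i:i + depth]                 # one equal-frequency bin
    mean = sum(bin_values) / len(bin_values)
    smoothed.extend([round(mean, 2)] * len(bin_values))

print(smoothed[:6])  # [14.67, 14.67, 14.67, 18.33, 18.33, 18.33]
```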
2 Clustering: Outliers in the data may be detected by clustering, where similar values
are organized into groups, or ‘clusters’. Values that fall outside of the set of clusters may
be considered outliers.
3 Regression: Linear regression involves finding the best line to fit two variables, so
that one variable can be used to predict the other.
Using regression to find a mathematical equation to fit the data helps smooth out the noise.
A unique rule says that each value of the given attribute must be different from
all other values of that attribute.
A consecutive rule says that there can be no missing values between the lowest
and highest values of the attribute and that all values must also be unique.
A null rule specifies the use of blanks, question marks, special characters, or other
strings that may indicate the null condition, and how such values should be handled.
Data Integration
It combines data from multiple sources into a coherent store. There are a number of issues
to consider during data integration.
Issues:
1. Correlation analysis
Redundancy between two numerical attributes A and B can be detected by computing their correlation coefficient, rA,B = Σ (ai − Ā)(bi − B̄) / (n σA σB), where n is the number of tuples, Ā and B̄ are the respective means, and σA and σB the respective standard deviations of A and B. The result is interpreted as follows:
• If the result of the equation is > 0, then A and B are positively correlated, which
means the values of A increase as the values of B increase. The higher the value, the more likely the attributes are redundant, so that one of them may be removed.
• If the result of the equation is = 0, then A and B are independent and there is
no correlation between them.
• If the resulting value is < 0, then A and B are negatively correlated: the
values of one attribute increase as the values of the other decrease, meaning that each attribute discourages the other.
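As a minimal sketch (not from the notes), the correlation coefficient between two numeric attributes can be computed with NumPy and interpreted as above; the attribute names and values are invented.

```python
import numpy as np

a = np.array([2, 4, 6, 8, 10])       # e.g. years of experience
b = np.array([30, 45, 58, 72, 90])   # e.g. salary in $1000s

r = np.corrcoef(a, b)[0, 1]
print(r)  # close to +1: strongly positively correlated, so one attribute may be redundant
```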
Data Transformation
Normalization
In normalization, the attribute data are scaled so as to fall within a small, specified range. This is useful for classification
algorithms involving neural networks and for distance-based methods such as nearest-neighbor
classification and clustering. There are 3 methods for data normalization.
They are:
1) min-max normalization
2) z-score normalization
3) normalization by decimal scaling
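The notes omit the formulas, so the following sketch (an assumption based on the standard definitions, not taken from the notes) shows all three methods in NumPy:

```python
# min-max:         v' = (v - min) / (max - min) * (new_max - new_min) + new_min
# z-score:         v' = (v - mean) / std
# decimal scaling: v' = v / 10**j, with the smallest j such that max(|v'|) < 1
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 900.0])

min_max = (v - v.min()) / (v.max() - v.min())        # scaled to [0, 1]
z_score = (v - v.mean()) / v.std()
j = int(np.ceil(np.log10(np.abs(v).max())))          # here j = 3
decimal_scaled = v / 10 ** j                         # divide by 1000

print(min_max, z_score, decimal_scaled, sep="\n")
```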
Data Reduction
Data reduction techniques can be applied to obtain a reduced representation of the data set
that is much smaller in volume, yet closely maintains the integrity of the original data. Data
reduction includes:
1. Data cube aggregation, where aggregation operations are applied to the data in
the construction of a data cube.
2. Attribute subset selection, where irrelevant, weakly relevant or
redundant attributes or dimensions may be detected and removed.
3. Dimensionality reduction, where encoding mechanisms are used to reduce the
data set size. Examples: Wavelet Transforms Principal Components Analysis
4. Numerosity reduction, where the data are replaced or estimated by
alternative, smaller data representations such as parametric models (which
need store only the model parameters instead of the actual data) or
nonparametric methods such as clustering, sampling, and the use of histograms.
5. Discretization and concept hierarchy generation, where raw data values for
attributes are replaced by ranges or higher conceptual levels. Data Discretization is a
form of numerosity reduction that is very useful for the automatic generation of
concept hierarchies.
Data cube aggregation: Reduce the data to the concept level needed in the
analysis. Queries regarding aggregated information should be answered using
data cubes when possible. Data cubes store multidimensional aggregated
information. A data cube can, for example, be used for multidimensional
analysis of sales data with respect to annual sales per item type for each branch.
Each cell holds an aggregate data value, corresponding to a data point in
multidimensional space.
Data cubes provide fast access to pre computed, summarized data, thereby benefiting
on- line analytical processing as well as data mining.
The cube created at the lowest level of abstraction is referred to as the base
cuboid. A cube for the highest level of abstraction is the apex cuboid. The lowest level of
a data cube (base cuboid). Data cubes created for varying levels of abstraction are
sometimes referred to as cuboids, so that a “data cube" may instead refer to a lattice of
cuboids. Each higher level of abstraction further reduces the resulting data size.
Consider, for example, a database of sales per quarter for the years 1997-1999.
Suppose the analyst is interested in the annual sales rather than the sales per quarter; the
data can be aggregated so that the resulting data summarize the total sales per
year instead of per quarter. The resulting data set is smaller in volume, without loss of the
information necessary for the analysis task.
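A minimal sketch (not from the notes) of this quarter-to-year roll-up using pandas; the sales figures are invented.

```python
import pandas as pd

quarterly = pd.DataFrame({
    "year":    [1997, 1997, 1997, 1997, 1998, 1998, 1998, 1998],
    "quarter": ["Q1", "Q2", "Q3", "Q4", "Q1", "Q2", "Q3", "Q4"],
    "sales":   [224, 408, 350, 586, 310, 420, 390, 610],
})

# Roll up from the quarter level to the year level.
annual = quarterly.groupby("year", as_index=False)["sales"].sum()
print(annual)   # one row per year with the total annual sales
```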
Dimensionality Reduction
It reduces the data set size by removing irrelevant attributes. For this, methods of
attribute subset selection are applied. Heuristic methods of attribute subset
selection are explained here:
Feature selection is a must for any data mining product. That is because, when you
build a data mining model, the dataset frequently contains more information than is
needed to build the model. For example, a dataset may contain 500 columns that
describe characteristics of customers, but perhaps only 50 of those columns are used
to build a particular model. If you keep the unneeded columns while building the
model, more CPU and memory are required during the training process, and more
storage space is required for the completed model.
The goal is to select a minimum set of features such that the probability distribution
of the different classes, given the values for those features, is as close as possible to the
original distribution given the values of all the features.
1. Step-wise forward selection: The procedure starts with an empty set of attributes.
The best of the original attributes is determined and added to the set. At each
subsequent iteration or step, the best of the remaining original attributes is added to
the set.
2. Step-wise backward elimination: The procedure starts with the full set of
attributes. At each step, it removes the worst attribute remaining in the set.
If the mining algorithm itself is used to determine the attribute subset, the method is called a
wrapper approach; otherwise it is a filter approach. The wrapper approach generally leads to greater accuracy
since it optimizes the evaluation measure of the algorithm while removing attributes.
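A minimal sketch (not from the notes) of wrapper-style step-wise forward selection. It uses scikit-learn's SequentialFeatureSelector on synthetic data, which is one possible implementation rather than the notes' own procedure; the data set, estimator, and number of selected features are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# 10 candidate attributes, only a few of which are actually informative.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=3,
                                     direction="forward", cv=5)
selector.fit(X, y)
print(selector.get_support())   # boolean mask marking the selected attributes
```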
Data compression
In data compression, data encoding or transformations are applied so as to obtain a reduced or “compressed” representation of the original data. Two popular and effective methods of lossy data compression are wavelet transforms and principal components analysis.
The discrete wavelet transform is applied with a hierarchical pyramid algorithm that halves the data at each iteration:
1. The length, L, of the input data vector must be an integer power of two. This
condition can be met by padding the data vector with zeros, as necessary.
2. Each transform involves applying two functions: the first applies some data smoothing (such as a sum or weighted average), and the second performs a weighted difference, which acts to bring out the detailed features of the data.
3. The two functions are applied to pairs of the input data, resulting in two sets of data
of length L/2.
4. The two functions are recursively applied to the sets of data obtained in the previous
loop, until the resulting data sets obtained are of desired length.
5. A selection of values from the data sets obtained in the above iterations are
designated the wavelet coefficients of the transformed data.
Wavelet coefficients larger than some user-specified threshold can be retained; the remaining coefficients are set to 0.
The principal components (new set of axes) give important information about variance.
Using the strongest components one can reconstruct a good approximation of the
original signal.
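A minimal sketch (not from the notes) of principal components analysis with scikit-learn, projecting invented data with five features onto its first two principal components:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                              # 100 samples, 5 features
X[:, 1] = X[:, 0] * 2 + rng.normal(scale=0.1, size=100)    # make feature 1 depend on feature 0

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)         # reduced representation: 100 x 2
print(pca.explained_variance_ratio_)     # share of the variance kept by each component
```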
Numerosity Reduction
Data volume can be reduced by choosing alternative, smaller forms of data representation. These techniques may be:
Parametric methods
Non-parametric methods
Parametric: Assume the data fits some model, then estimate model parameters, and
store only the parameters, instead of actual data.
Non parametric: In which histogram, clustering and sampling is used to store
reduced form of data.
2 Histogram
Divide data into buckets and store average (sum) for each bucket
A bucket represents an attribute-value/frequency pair
It can be constructed optimally in one dimension using dynamic programming
It divides up the range of possible values in a data set into classes or groups. For
each group, a rectangle (bucket) is constructed with a base length equal to the
range of values in that specific group, and an area proportional to the number of
observations falling into that group.
The buckets are displayed in a horizontal axis while height of a bucket
represents the average frequency of the values.
Example:
The following data are a list of prices of commonly sold items. The numbers have
been sorted.
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18,
18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30.
Draw a histogram plot for price where each bucket has an equal width of 10.
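A minimal sketch (not from the notes) that draws the requested equal-width histogram with matplotlib, using buckets 1-10, 11-20, and 21-30:

```python
import matplotlib.pyplot as plt

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15,
          15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20,
          20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

plt.hist(prices, bins=[1, 11, 21, 31], edgecolor="black")  # width-10 buckets
plt.xlabel("price ($)")
plt.ylabel("count")
plt.show()
```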
The buckets can be determined based on several partitioning rules, including equal-width, equal-frequency (equi-depth), V-Optimal, and MaxDiff partitioning.
V-Optimal and MaxDiff histograms tend to be the most accurate and practical.
Histograms are highly effective at approximating both sparse and dense data, as well as
highly skewed and uniform data.
Clustering techniques consider data tuples as objects. They partition the objects into
groups or clusters, so that objects within a cluster are “similar" to one another and
“dissimilar" to objects in other clusters. Similarity is commonly defined in terms of how
“close" the objects are in space, based on a distance function.
Quality of clusters measured by their diameter (max distance between any two objects
in the cluster) or centroid distance (avg. distance of each cluster object from its
centroid)
Sampling
Sampling can be used as a data reduction technique since it allows a large data set to be
represented by a much smaller random sample (or subset) of the data. Suppose that a
large data set, D, contains N tuples. Let's have a look at some possible samples for D.
1. Simple random sample without replacement (SRSWOR) of size n: created by drawing n of the N tuples from D (n < N), where the probability of drawing any tuple in D is 1/N, i.e., all tuples are equally likely.
2. Simple random sample with replacement (SRSWR) of size n: similar to SRSWOR, except that each time a tuple is drawn from D, it is recorded and then replaced. That is,
after a tuple is drawn, it is placed back in D so that it may be drawn again.
3. Cluster sample: If the tuples in D are grouped into M mutually disjoint “clusters",
then a SRS of m clusters can be obtained, where m < M. For example, tuples in a
database are usually retrieved a page at a time, so that each page can be considered a
cluster. A reduced data representation can be obtained by applying, say, SRSWOR to the
pages, resulting in a cluster sample of the tuples.
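A minimal sketch (not from the notes) of SRSWOR and SRSWR using Python's random module on an invented set of tuple identifiers:

```python
import random

D = list(range(1, 101))            # pretend these are the N = 100 tuple identifiers
random.seed(0)

srswor = random.sample(D, k=10)    # without replacement: no repeats possible
srswr = random.choices(D, k=10)    # with replacement: the same tuple may recur
print(srswor)
print(srswr)
```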
Advantages of sampling
The cost of obtaining a sample is proportional to the size of the sample, s, rather than to the data set size, N, so sampling complexity is potentially sublinear to the size of the data.
Discretization:
Discretization techniques can be used to reduce the number of values for a given
continuous attribute, by dividing the range of the attribute into intervals. Interval
labels can then be used to replace actual data values.
Concept Hierarchy
A concept hierarchy for a given numerical attribute defines a discretization of the attribute, replacing low-level concepts (such as numerical values for an attribute like age) with higher-level concepts (such as young, middle-aged, or senior).
Discretization and Concept hierarchy for numerical data:
There are five methods for numeric concept hierarchy generation. These include:
1. binning,
2. histogram analysis,
3. clustering analysis,
4. entropy-based Discretization, and
5. data segmentation by “natural partitioning".
Example:
Suppose that profits at different branches of a company for the year 1997 cover a
wide range, from -$351,976.00 to $4,700,896.50. A user wishes to have a concept
hierarchy for profit automatically generated.
Suppose that the data within the 5%-tile and 95%-tile are between -$159,876 and
$1,838,761. The results of applying the 3-4-5 rule are described in the following steps.
Step 1: Based on the above information, the minimum and maximum values are MIN = -$351,976.00 and MAX = $4,700,896.50. The low (5%-tile) and high (95%-tile) values to be considered for the top or first level of segmentation are LOW = -$159,876 and HIGH = $1,838,761.
Step 2: Given LOW and HIGH, the most significant digit is at the million-dollar digit position (i.e., msd = 1,000,000). Rounding LOW down to the million-dollar digit, we get LOW' = -$1,000,000; rounding HIGH up to the million-dollar digit, we get HIGH' = +$2,000,000.
Step 3: Since this interval ranges over 3 distinct values at the most significant digit, i.e., (2,000,000 - (-1,000,000))/1,000,000 = 3, the segment is partitioned into 3 equi-width sub-segments according to the 3-4-5 rule: (-$1,000,000 - $0], ($0 - $1,000,000], and ($1,000,000 - $2,000,000]. This represents the top tier of the hierarchy.
Step 4: We now examine the MIN and MAX values to see how they “fit” into the first-level partitions. Since the first interval, (-$1,000,000 - $0], covers the MIN value, i.e., LOW' < MIN, we can adjust the left boundary of this interval to make the interval smaller. The most significant digit of MIN is at the hundred-thousand-dollar digit position. Rounding MIN down to this position, we get MIN' = -$400,000.
Therefore, the first interval is redefined as (-$400,000 - 0]. Since the last interval,
($1,000,000-$2,000,000] does not cover the MAX value, i.e., MAX > HIGH’, we need to
create a new interval to cover it. Rounding up MAX at its most significant digit position,
the new interval is ($2,000,000 - $5,000,000]. Hence, the top most level of the hierarchy
contains four partitions, (-$400,000 - $0], ($0 - $1,000,000], ($1,000,000 - $2,000,000],
and ($2,000,000 - $5,000,000].
Step 5: Recursively, each interval can be further partitioned according to the 3-4-5 rule
to form the next lower level of the hierarchy:
- The first interval (-$400,000 - $0] is partitioned into 4 sub-intervals: (-$400,000 - -
$300,000], (-$300,000 - -$200,000], (-$200,000 - -$100,000], and (-$100,000 - $0].
- The second interval, ($0- $1,000,000], is partitioned into 5 sub-intervals: ($0 -
$200,000], ($200,000 - $400,000], ($400,000 - $600,000], ($600,000 - $800,000],
and ($800,000 -$1,000,000].
- The third interval, ($1,000,000 - $2,000,000], is partitioned into 5 sub-intervals:
($1,000,000 - $1,200,000], ($1,200,000 - $1,400,000], ($1,400,000 - $1,600,000],
($1,600,000 - $1,800,000], and ($1,800,000 - $2,000,000].
- The last interval, ($2,000,000 - $5,000,000], is partitioned into 3 sub-intervals:
($2,000,000 - $3,000,000], ($3,000,000 - $4,000,000], and ($4,000,000 - $5,000,000].
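As an illustrative sketch (not from the notes), the top-level step of the 3-4-5 rule can be coded roughly as follows. The function name is hypothetical, msd is taken from the larger of |LOW| and |HIGH|, and the 7-distinct-values case (which the rule groups as 2-3-2) is simplified to equal widths.

```python
import math

def partition_3_4_5(low, high):
    """Rough top-level 3-4-5 segmentation (sketch; equal-width intervals only)."""
    msd = 10 ** int(math.floor(math.log10(max(abs(low), abs(high)))))
    low_r = math.floor(low / msd) * msd    # round LOW down at the msd position
    high_r = math.ceil(high / msd) * msd   # round HIGH up at the msd position
    n = int((high_r - low_r) / msd)        # distinct msd values covered by the range
    if n in (3, 6, 7, 9):
        parts = 3
    elif n in (2, 4, 8):
        parts = 4
    else:                                  # 1, 5, or 10 distinct values
        parts = 5
    width = (high_r - low_r) / parts
    return [(low_r + i * width, low_r + (i + 1) * width) for i in range(parts)]

# For LOW = -159,876 and HIGH = 1,838,761 this returns the three top-level
# intervals (-1,000,000, 0], (0, 1,000,000], (1,000,000, 2,000,000].
print(partition_3_4_5(-159876, 1838761))
```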