UNIT-1 Why We Need Data Mining?
Some people treat data mining as a synonym for Knowledge Discovery from Data (KDD), while others view data mining as merely an essential step in the process of knowledge discovery. Here is the list of steps involved in the knowledge discovery process −
• Data Cleaning − In this step, noise and inconsistent data are removed.
• Data Integration − In this step, multiple data sources are combined.
• Data Selection − In this step, data relevant to the analysis task are retrieved from the database.
• Data Transformation − In this step, data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations.
• Data Mining − In this step, intelligent methods are applied in order to extract data patterns.
• Pattern Evaluation − In this step, the discovered data patterns are evaluated.
• Knowledge Presentation − In this step, the mined knowledge is presented to the user.
3. Data Warehouse
• A data warehouse is defined as a collection of data integrated from multiple sources that supports querying and decision making.
• There are three types of data warehouse: Enterprise Data Warehouse, Data Mart, and Virtual Warehouse.
• Two approaches can be used to integrate and update data in a data warehouse: the query-driven approach and the update-driven approach.
• Applications: business decision making, data mining, etc.
4. Transactional Databases
• A transactional database is a collection of records organized by time stamps, dates, etc., where each record represents a transaction.
• This type of database has the capability to roll back or undo an operation when a transaction is not completed or committed.
• It is a highly flexible system in which users can modify information without changing any sensitive information.
• It follows the ACID properties of a DBMS.
• Applications: banking, distributed systems, object databases, etc.
5. Multimedia Databases
• Multimedia databases consist of audio, video, image, and text media.
• They can be stored in object-oriented databases.
• They are used to store complex information in pre-specified formats.
• Applications: digital libraries, video-on-demand, news-on-demand, music databases, etc.
6. Spatial Databases
• Spatial databases store geographical information.
• They store data in the form of coordinates, topology, lines, polygons, etc.
• Applications: maps, global positioning systems, etc.
7. Time-Series Databases
• Time-series databases contain data such as stock exchange data and user-logged activities.
• They handle arrays of numbers indexed by time, date, etc.
• They often require real-time analysis.
• Applications: eXtremeDB, Graphite, InfluxDB, etc.
8. World Wide Web (WWW)
• The World Wide Web (WWW) is a collection of documents and resources such as audio, video, and text, which are identified by Uniform Resource Locators (URLs), linked by HTML pages, accessed through web browsers, and available over the Internet.
• It is the most heterogeneous repository, as it collects data from multiple sources.
• It is dynamic in nature, as the volume of data is continuously increasing and changing.
• Applications: online shopping, job search, research, studying, etc.
a) Descriptive Function
The descriptive function deals with the general properties of data in the database.
Here is the list of descriptive functions −
1. Class/Concept Description
2. Mining of Frequent Patterns
3. Mining of Associations
4. Mining of Correlations
5. Mining of Clusters
1. Class/Concept Description
Class/Concept refers to the data to be associated with classes or concepts. For example, in a company, the classes of items for sale include computers and printers, and concepts of customers include big spenders and budget spenders. Such descriptions of a class or a concept are called class/concept descriptions. These descriptions can be derived in the following two ways −
• Data Characterization − This refers to summarizing the data of the class under study. The class under study is called the target class.
• Data Discrimination − This refers to comparing the target class with one or more contrasting (predefined) classes.
4. Mining of Correlations
It is a kind of additional analysis performed to uncover interesting statistical correlations between associated attribute-value pairs or between two item sets, to analyze whether they have a positive, negative, or no effect on each other.
5. Mining of Clusters
Cluster refers to a group of similar objects. Cluster analysis refers to forming groups of objects that are very similar to each other but highly different from the objects in other clusters.
1. Classification (IF-THEN) Rules
2. Prediction
3. Decision Trees
4. Mathematical Formulae
5. Neural Networks
6. Outlier Analysis
7. Evolution Analysis
3. Decision Trees − A decision tree is a structure that includes a root node, branches,
and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes
the outcome of a test, and each leaf node holds a class label.
6. Outlier Analysis − Outliers may be defined as the data objects that do not comply
with the general behavior or model of the data available.
Data Mining Task Primitives
Note − These primitives allow us to communicate in an interactive manner with the data
mining system. Here is the list of Data Mining Task Primitives −
• Set of task relevant data to be mined.
• Kind of knowledge to be mined.
• Background knowledge to be used in discovery process.
• Interestingness measures and thresholds for pattern evaluation.
• Representation for visualizing the discovered patterns.
1. Statistics:
• It uses mathematical analysis to derive representations, models, and summaries of empirical data or real-world observations.
• Statistical analysis involves a collection of methods applicable to large amounts of data in order to draw conclusions and report trends.
2. Machine learning
• Arthur Samuel defined machine learning as a field of study that gives computers the ability to learn without being explicitly programmed.
• When new data are entered into the computer, machine learning algorithms allow the learned model to grow or change accordingly.
• In machine learning, an algorithm is constructed to make predictions from the available database (predictive analysis).
• It is related to computational statistics.
Intrusion Detection
Intrusion refers to any kind of action that threatens the integrity, confidentiality, or availability of network resources. In this world of connectivity, security has become a major issue. The increased usage of the Internet and the availability of tools and tricks for intruding into and attacking networks have prompted intrusion detection to become a critical component of network administration.
Major Issues in data mining:
Data mining is a dynamic and fast-expanding field with great strengths. The major issues can be divided into five groups:
a) Mining Methodology
b) User Interaction
c) Efficiency and scalability
d) Diverse Data Types Issues
e) Data mining society
a) Mining Methodology:
It refers to the following kinds of issues −
• Mining different kinds of knowledge in databases − Different users may be interested in different kinds of knowledge. Therefore, it is necessary for data mining to cover a broad range of knowledge discovery tasks.
• Mining knowledge in multidimensional space − When searching for knowledge in large data sets, we can explore the data in multidimensional space.
• Handling noisy or incomplete data − Data cleaning methods are required to handle noise and incomplete objects while mining the data regularities. If data cleaning methods are not available, the accuracy of the discovered patterns will be poor.
• Pattern evaluation − The patterns discovered may be uninteresting because they either represent common knowledge or lack novelty.
b) User Interaction:
• Interactive mining of knowledge at multiple levels of abstraction − The data
mining process needs to be interactive because it allows users to focus the search for
patterns, providing and refining data mining requests based on the returned results.
• Incorporation of background knowledge − To guide discovery process and to
express the discovered patterns, the background knowledge can be used. Background
knowledge may be used to express the discovered patterns not only in concise terms
but at multiple levels of abstraction.
• Data mining query languages and ad hoc data mining − Data Mining Query
language that allows the user to describe ad hoc mining tasks, should be integrated
with a data warehouse query language and optimized for efficient and flexible data
mining.
• Presentation and visualization of data mining results − Once the patterns are discovered, they need to be expressed in high-level languages and visual representations. These representations should be easily understandable.
2. Binary Attributes: Binary data has only 2 values/states, for example yes or no, affected or unaffected, true or false.
i) Symmetric: Both values are equally important (e.g., gender).
ii) Asymmetric: Both values are not equally important (e.g., result).
3. Ordinal Attributes: Ordinal attributes contain values that have a meaningful sequence or ranking (order) between them, but the magnitude between values is not actually known; the order of values shows what is important but does not indicate how important it is.
Attribute Values
Grade O, S, A, B, C, D, F
5. Discrete: Discrete data have a finite or countably infinite set of values; they can be numerical or categorical.
Example:
Attribute      Values
Profession     Teacher, Businessman, Peon
ZIP Code       521157, 521301
6. Continuous: Continuous data have an infinite number of possible values and are typically of floating-point type. For example, there can be many values between 2 and 3.
Example:
Attribute Values
Height 5.4, 5.7, 6.2, etc.,
Weight 50, 65, 70, 73, etc.,
The data values can be represented as bar charts, pie charts, line graphs, etc.
Quantile plots:
➢ A quantile plot is a simple and effective way to have a first look at a univariate data
distribution.
➢ Plots quantile information
• For data values xi sorted in increasing order, fi indicates that approximately 100·fi% of the data are below or equal to the value xi.
➢ Note that
• the 0.25 quantile corresponds to quartile Q1,
• the 0.50 quantile is the median, and
• the 0.75 quantile is Q3.
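As a rough illustration (not part of the original notes), the following Python sketch computes the fi values and the three quartiles for a small univariate sample; the data values are made up for demonstration.

```python
import numpy as np

# Hypothetical univariate data (illustrative values only)
data = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])

x = np.sort(data)                        # sort in increasing order
N = len(x)
f = (np.arange(1, N + 1) - 0.5) / N      # f_i: roughly the fraction of data <= x_i

# The 0.25, 0.50 and 0.75 quantiles correspond to Q1, the median, and Q3
q1, median, q3 = np.percentile(x, [25, 50, 75])
print(list(zip(f.round(3), x)))
print("Q1 =", q1, " median =", median, " Q3 =", q3)
```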
Scatter Plot:
➢ Scatter plot
• Is one of the most effective graphical methods for determining if there appears to
be a relationship, clusters of points, or outliers between two numerical attributes.
➢ Each pair of values is treated as a pair of coordinates and plotted as points in the plane
Data Visualization:
Visualization is the use of computer graphics to create visual images which aid in the
understanding of complex, often massive representations of data.
Categorization of visualization methods:
a) Pixel-oriented visualization techniques
b) Geometric projection visualization techniques
c) Icon-based visualization techniques
d) Hierarchical visualization techniques
e) Visualizing complex data and relations
a) Pixel-oriented visualization techniques
➢ For a data set of m dimensions, create m windows on the screen, one for each
dimension
➢ The m dimension values of a record are mapped to m pixels at the corresponding
positions in the windows
➢ The colors of the pixels reflect the corresponding values
➢ To save space and show the connections among multiple dimensions, space filling is
often done in a circle segment
Examples of hierarchical visualization techniques (figures omitted): InfoCube and Worlds-within-Worlds.
a) Euclidean Distance
Assume that we have measurements xik, i = 1, … , N, on variables k = 1, … , p (also
called attributes).
The Euclidean distance between the ith and jth objects is
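(The distance expression itself was an image in the original; a standard reconstruction consistent with the notation above, written in LaTeX form, is:)

$$ d_E(i,j) = \left(\sum_{k=1}^{p} (x_{ik} - x_{jk})^2\right)^{1/2}, \qquad d_{\lambda}(i,j) = \left(\sum_{k=1}^{p} |x_{ik} - x_{jk}|^{\lambda}\right)^{1/\lambda} $$

Here d_E is the Euclidean distance and d_λ is the more general Minkowski distance: λ = 2 gives the Euclidean distance and λ = 1 the Manhattan distance.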
Note that λ and p are two different parameters. Dimension of the data matrix remains
finite.
UNIT-2: DATA PREPROCESSING
1. Preprocessing
Real-world databases are highly susceptible to noisy, missing, and inconsistent data
due to their typically huge size (often several gigabytes or more) and their likely origin from
multiple, heterogeneous sources. Low-quality data will lead to low-quality mining results, so data preprocessing techniques are applied before mining to improve data quality.
Data Preprocessing Techniques
* Data cleaning can be applied to remove noise and correct inconsistencies in the data.
* Data integration merges data from multiple sources into a coherent data store, such as a data warehouse.
* Data reduction can reduce the data size by aggregating, eliminating redundant features, or clustering, for instance. These techniques are not mutually exclusive; they may work together.
* Data transformations, such as normalization, may be applied.
Need for preprocessing
➢ Incomplete, noisy, and inconsistent data are commonplace properties of large real-world databases and data warehouses.
➢ Incomplete data can occur for a number of reasons:
• Attributes of interest may not always be available
• Relevant data may not be recorded due to misunderstanding, or because of equipment
malfunctions.
• Data that were inconsistent with other recorded data may have been deleted.
• Missing data, particularly for tuples with missing values for some attributes, may
need to be inferred.
• The data collection instruments used may be faulty.
• There may have been human or computer errors occurring at data entry.
• Errors in data transmission can also occur.
• There may be technology limitations, such as limited buffer size for coordinating
synchronized data transfer and consumption.
• Data cleaning routines work to "clean" the data by filling in missing values, smoothing noisy data, identifying or removing outliers, and resolving inconsistencies.
• Data integration is the process of integrating multiple databases, data cubes, or files. Attributes representing a given concept may have different names in different databases, causing inconsistencies and redundancies.
• Data transformation operations, such as normalization and aggregation, are additional data preprocessing procedures that contribute toward the success of the mining process.
• Data reduction obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results.
2. DATA CLEANING
Real-world data tend to be incomplete, noisy, and inconsistent. Data cleaning (or data
cleansing) routines attempt to fill in missing values, smooth out noise while identifying
outliers and correct inconsistencies in the data.
Missing Values
Many tuples may have no recorded value for one or more attributes, such as customer income, so we need to fill in the missing values for these attributes.
The following methods can be used to fill in the missing values:
1. Ignore the tuple: This is usually done when the class label is missing (assuming the mining task involves classification). This method is not very effective, unless the
tuple contains several attributes with missing values. It is especially poor when the
percentage of the missing values per attribute varies considerably.
2. Fill in the missing values manually: This approach is time-consuming and may not be feasible given a large data set with many missing values.
3. Use a global constant to fill in the missing value: Replace all missing attribute values by the same constant, such as a label like "unknown" or −∞.
4. Use the attribute mean to fill in the missing value: For example, suppose that the
average income of customers is $56,000. Use this value to replace the missing value
for income.
5. Use the most probable value to fill in the missing value: This may be determined with regression, inference-based tools using a Bayesian formalism, or decision tree induction. For example, using the other customer attributes in the data set, a decision tree can be constructed to predict the missing values for income.
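As a hedged illustration (not from the notes), here is a minimal pandas sketch of three of the strategies above: ignoring tuples, filling with a global constant, and filling with the attribute mean. The DataFrame and its column names are made up for the example.

```python
import numpy as np
import pandas as pd

# Hypothetical customer data with missing income values (illustrative only)
df = pd.DataFrame({
    "customer": ["A", "B", "C", "D"],
    "income":   [45_000, np.nan, 62_000, np.nan],
})

dropped      = df.dropna(subset=["income"])               # 1. ignore the tuple
filled_const = df["income"].fillna(-1)                    # 3. global constant (e.g., -1 / "unknown")
filled_mean  = df["income"].fillna(df["income"].mean())   # 4. attribute mean

print(len(dropped), filled_const.tolist(), filled_mean.tolist())
```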
Noisy Data
Noise is a random error or variance in a measured variable. Noise is removed using
data smoothing techniques.
Binning: Binning methods smooth a sorted data value by consulting its "neighborhood," that is, the values around it. The sorted values are distributed into a number of "buckets" or "bins". Because binning methods consult the neighborhood of values, they perform local smoothing.
Sorted data for price (in dollars): 3,7,14,19,23,24,31,33,38.
Example 1: Partition into (equal-frequency) bins:
Bin 1: 3,7,14
Bin 2: 19,23,24
Bin 3: 31,33,38
In the above method the data for price are first sorted and then partitioned into equal-
frequency bins of size 3.
Smoothing by bin means:
Bin 1: 8,8,8
Bin 2: 22,22,22
Bin 3: 34,34,34
In smoothing by bin means, each value in a bin is replaced by the mean value of the bin. For example, the mean of the values 3, 7 and 14 in Bin 1 is 8 [(3+7+14)/3 = 8].
Smoothing by bin boundaries:
Bin 1: 3,3,14
Bin 2: 19,24,24
Bin 3: 31,31,38
In smoothing by bin boundaries, the minimum and maximum values in a given bin are identified as the bin boundaries. Each bin value is then replaced by the closest boundary value.
In general, the larger the width, the greater the effect of the smoothing. Alternatively, bins may be equal-width, where the interval range of values in each bin is constant.
Example 2: Remove the noise in the following data using smoothing techniques:
8, 4, 9, 21, 25, 24, 29, 26, 28, 15, 21, 34
Sorted data for price (in dollars):4,8,9,15,21,21,24,25,26,28,29,34
Partition into equal-frequency (equi-depth) bins:
Bin 1: 4, 8,9,15
Bin 2: 21,21,24,25
Bin 3: 26,28,29,34
Smoothing by bin means:
Bin 1: 9,9,9,9
Bin 2: 23,23,23,23
Bin 3: 29,29,29,29
Smoothing by bin boundaries:
Bin 1: 4, 4,4,15
Bin 2: 21,21,25,25
Bin3: 26,26,26,34
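The following Python sketch (not part of the original notes) reproduces the equal-frequency binning and the two smoothing strategies for the Example 2 data; the bin size of 4 matches the worked example above.

```python
# Equal-frequency binning with smoothing by bin means and bin boundaries
data = [8, 4, 9, 21, 25, 24, 29, 26, 28, 15, 21, 34]
bin_size = 4

values = sorted(data)
bins = [values[i:i + bin_size] for i in range(0, len(values), bin_size)]

# Smoothing by bin means: every value is replaced by the (rounded) mean of its bin
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: every value is replaced by the closer of
# the bin's minimum or maximum value
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins]

print("bins:      ", bins)        # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print("bin means: ", by_means)    # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print("boundaries:", by_bounds)   # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```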
Regression: Data can be smoothed by fitting the data to a function, such as with regression. Linear regression involves finding the "best" line to fit two attributes (or variables), so that one attribute can be used to predict the other. Multiple linear regression is an extension of linear regression, where more than two attributes are involved and the data are fit to a multidimensional surface.
Clustering: Outliers may be detected by clustering, where similar values are organized into groups, or "clusters." Intuitively, values that fall outside of the set of clusters may be considered outliers.
Inconsistent Data
Inconsistencies may exist in the recorded transaction data. They occur due to errors during data entry, violations of functional dependencies between attributes, and missing values. The inconsistencies can be detected and corrected either manually or with knowledge engineering tools.
Data cleaning as a process
a) Discrepancy detection
b) Data transformations
a) Discrepancy detection
The first step in data cleaning is discrepancy detection. It uses knowledge of metadata and examines the following rules for detecting discrepancies.
Unique rules- each value of the given attribute must be different from all other values for that
attribute.
Consecutive rules – Implies no missing values between the lowest and highest values for the
attribute and that all values must also be unique.
Null rules – specify the use of blanks, question marks, special characters, or other strings that may indicate the null condition.
Discrepancy detection Tools:
❖ Data scrubbing tools – use simple domain knowledge (e.g., knowledge of postal addresses, and spell-checking) to detect errors and make corrections in the data.
❖ Data auditing tools – analyze the data to discover rules and relationships, and detect data that violate such conditions.
b) Data transformations
This is the second step in data cleaning as a process. After detecting discrepancies, we
need to define and apply (a series of) transformations to correct them.
Data Transformations Tools:
❖ Data migration tools – allow simple transformations to be specified, such as replacing the string "gender" with "sex".
❖ ETL (Extraction/Transformation/Loading) tools – allow users to specify transformations through a graphical user interface (GUI).
3. Data Integration
Data mining often requires data integration – the merging of data from multiple data stores into a coherent data store, as in data warehousing. These sources may include multiple databases, data cubes, or flat files.
Issues in Data Integration
a) Schema integration & object matching.
b) Redundancy.
c) Detection & Resolution of data value conflict
a) Schema Integration & Object Matching
Schema integration and object matching can be tricky because the same entity can be represented in different forms in different tables. This is referred to as the entity identification problem. Metadata can be used to help avoid errors in schema integration. The metadata may also be used to help transform the data.
b) Redundancy:
Redundancy is another important issue. An attribute (such as annual revenue, for instance) may be redundant if it can be "derived" from another attribute or set of attributes. Inconsistencies in attribute or dimension naming can also cause redundancies in the resulting data set. Some redundancies can be detected by correlation analysis and covariance analysis.
For nominal data, we use the χ² (chi-square) test.
For numeric attributes, we can use the correlation coefficient and covariance.
χ² correlation analysis for nominal data:
For nominal data, a correlation relationship between two attributes, A and B, can be discovered by a χ² (chi-square) test. Suppose A has c distinct values, namely a1, a2, ..., ac, and B has r distinct values, namely b1, b2, ..., br. The data tuples are described by a contingency table, with the c values of A making up the columns and the r values of B making up the rows.
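The χ² statistic itself was an image in the original; in standard form, consistent with the description above, it is

$$ \chi^2 = \sum_{i=1}^{c}\sum_{j=1}^{r}\frac{(o_{ij}-e_{ij})^2}{e_{ij}}, \qquad e_{ij} = \frac{\mathrm{count}(A=a_i)\times \mathrm{count}(B=b_j)}{n} $$

where o_ij is the observed frequency of the joint event (A = a_i, B = b_j), e_ij is its expected frequency, and n is the number of data tuples. A χ² value larger than the critical value for the corresponding degrees of freedom indicates that A and B are correlated.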
c) Detection and Resolution of Data Value Conflicts.
A third important issue in data integration is the detection and resolution of data value
conflicts. For example, for the same real–world entity, attribute value from different sources
may differ. This may be due to difference in representation, scaling, or encoding.
For instance, a weight attribute may be stored in metric units in one system and
British imperial units in another. For a hotel chain, the price of rooms in different cities may
involve not only different currencies but also different services (such as free breakfast) and
taxes. An attribute in one system may be recorded at a lower level of abstraction than the "same" attribute in another.
Careful integration of the data from multiple sources can help to reduce and avoid redundancies and inconsistencies in the resulting data set. This can help to improve the accuracy and speed of the subsequent mining process.
4. Data Reduction:
Data reduction obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results.
Example (data aggregation): quarterly sales data are aggregated to yield total annual sales.

Year/Quarter    2014   2015   2016   2017
Quarter 1        200    210    320    230
Quarter 2        400    440    480    420
Quarter 3        480    480    540    460
Quarter 4        560    580    680    640

Year    Annual Sales
2014    1640
2015    1710
2016    2020
2017    1750
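A minimal pandas sketch (not from the notes) of the same quarterly-to-annual aggregation; the column names are chosen only for illustration.

```python
import pandas as pd

# Quarterly sales from the table above, in long form
sales = pd.DataFrame({
    "year":    [2014] * 4 + [2015] * 4 + [2016] * 4 + [2017] * 4,
    "quarter": [1, 2, 3, 4] * 4,
    "amount":  [200, 400, 480, 560,
                210, 440, 480, 580,
                320, 480, 540, 680,
                230, 420, 460, 640],
})

# Aggregate (reduce) the quarterly rows into one annual total per year
annual = sales.groupby("year", as_index=False)["amount"].sum()
print(annual)   # 2014: 1640, 2015: 1710, 2016: 2020, 2017: 1750
```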
globally optimal solution. Many other attribute evaluation measures can be used, such as the information gain measure used in building decision trees for classification.
1. Stepwise forward selection: The procedure starts with an empty set of attributes as the reduced set. The best of the original attributes is determined and added to the reduced set. At each subsequent iteration or step, the best of the remaining original attributes is added to the set (a simple sketch of this greedy procedure is given after this list).
2. Stepwise backward elimination: The procedure starts with full set of attributes. At each
step, it removes the worst attribute remaining in the set.
3. Combination of forward selection and backward elimination: The stepwise forward
selection and backward elimination methods can be combined so that, at each step, the
procedure selects the best attribute and removes the worst from among the remaining
attributes.
4. Decision tree induction: Decision tree induction constructs a flowchart-like structure where each internal node denotes a test on an attribute, each branch corresponds to an outcome of the test, and each leaf node denotes a class prediction. At each node, the algorithm chooses the "best" attribute to partition the data into individual classes. A tree is constructed from the given data. All attributes that do not appear in the tree are assumed to be irrelevant. The set of attributes appearing in the tree forms the reduced subset of attributes. A threshold measure may be used as the stopping criterion.
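The following Python sketch (not from the notes) illustrates the greedy stepwise forward selection described in item 1, under the assumption that some scoring function for an attribute subset is available (for example, the validation accuracy of a classifier); the toy scoring function used here is purely a stand-in.

```python
def forward_selection(attributes, score, max_attrs=None):
    """Greedy stepwise forward selection.

    attributes: list of candidate attribute names
    score:      callable mapping a list of attributes to a number (higher is better),
                e.g. validation accuracy of a model built on those attributes
    """
    selected, remaining = [], list(attributes)
    best_score = float("-inf")
    while remaining and (max_attrs is None or len(selected) < max_attrs):
        # Try adding each remaining attribute and keep the best candidate
        candidate, cand_score = max(
            ((a, score(selected + [a])) for a in remaining),
            key=lambda pair: pair[1],
        )
        if cand_score <= best_score:
            break                       # no improvement: stop
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = cand_score
    return selected

# Toy example: each attribute contributes a fixed amount to the "score" (illustrative only)
weights = {"age": 0.4, "income": 0.3, "student": 0.2, "zip": -0.05}
print(forward_selection(list(weights), lambda s: sum(weights[a] for a in s)))
# -> ['age', 'income', 'student']
```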
Numerosity Reduction:
Numerosity reduction is used to reduce the data volume by choosing alternative, smaller
forms of the data representation
Techniques for Numerosity reduction:
➢ Parametric - In this model only the data parameters need to be stored, instead of the
actual data. (e.g.,) Log-linear models, Regression
Parametric model
1. Regression
• Linear regression
➢ In linear regression, the data are modeled to fit a straight line. For example, a random variable Y (called a response variable) can be modeled as a linear function of another random variable X (called a predictor variable), with the equation Y = αX + β,
➢ where the variance of Y is assumed to be constant. The coefficients α and β (called regression coefficients) specify the slope of the line and the Y-intercept, respectively.
• Multiple- linear regression
➢ Multiple linear regression is an extension of (simple) linear regression, allowing a
response variable Y, to be modeled as a linear function of two or more predictor
variables.
2. Log-Linear Models
➢ Log-Linear Models can be used to estimate the probability of each point in a
multidimensional space for a set of discretized attributes, based on a smaller
subset of dimensional combinations.
Nonparametric Model
1. Histograms
A histogram for an attribute A partitions the data distribution of A into disjoint
subsets, or buckets. If each bucket represents only a single attribute-value/frequency pair, the
buckets are called singleton buckets.
Ex: The following data are a list of prices of commonly sold items at AllElectronics. The numbers have been sorted:
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30
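A short Python sketch (illustrative, not from the notes) that builds an equal-width histogram with buckets of width 10 over the price list above:

```python
from collections import Counter

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15,
          18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21,
          25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

# Equal-width buckets of width 10: 1-10, 11-20, 21-30
width = 10
buckets = Counter((p - 1) // width for p in prices)
for b in sorted(buckets):
    lo, hi = b * width + 1, (b + 1) * width
    print(f"{lo}-{hi}: {buckets[b]} values")   # 1-10: 13, 11-20: 23, 21-30: 15
```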
2. Clustering
Clustering techniques consider data tuples as objects. They partition the objects into groups, or clusters, so that objects within a cluster are similar to one another and dissimilar to
objects in other clusters. Similarity is defined in terms of how close the objects are in space,
based on a distance function. The quality of a cluster may be represented by its diameter, the
maximum distance between any two objects in the cluster. Centroid distance is an alternative
measure of cluster quality and is defined as the average distance of each cluster object from
the cluster centroid.
3. Sampling:
Sampling can be used as a data reduction technique because it allows a large data set
to be represented by a much smaller random sample (or subset) of the data. Suppose that a large data set D contains N tuples. One possible sample is a Simple Random Sample WithOut Replacement (SRSWOR) of size n: this is created by drawing n of the N tuples from D (n < N), where the probability of drawing any tuple in D is 1/N, i.e., all tuples are equally likely to be sampled.
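A minimal Python sketch (not from the notes) of simple random sampling without replacement (SRSWOR) and, for contrast, with replacement (SRSWR):

```python
import random

D = list(range(1, 101))   # a hypothetical data set of N = 100 tuples
n = 10

srswor = random.sample(D, n)                    # without replacement: n distinct tuples
srswr  = [random.choice(D) for _ in range(n)]   # with replacement: duplicates possible

print("SRSWOR:", srswor)
print("SRSWR: ", srswr)
```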
Dimensionality Reduction:
In dimensionality reduction, data encoding or transformations are applied so as to obtain a reduced or "compressed" representation of the original data.
Dimension Reduction Types
➢ Lossless - If the original data can be reconstructed from the compressed data without any
loss of information
➢ Lossy - If only an approximation of the original data can be reconstructed from the compressed data, then the data reduction is called lossy.
Effective methods of lossy dimensionality reduction:
a) Wavelet transforms
b) Principal components analysis.
a) Wavelet transforms:
The discrete wavelet transform (DWT) is a linear signal processing technique that,
when applied to a data vector, transforms it to a numerically different vector, of wavelet
coefficients. The two vectors are of the same length. When applying this technique to data
reduction, we consider each tuple as an n-dimensional data vector, that is,
X=(x1,x2,…………,xn), depicting n measurements made on the tuple from n database
attributes.
For example, all wavelet coefficients larger than some user-specified threshold can be retained, and all other coefficients are set to 0. The resulting data representation is therefore very sparse, so that operations that can take advantage of data sparsity are computationally very fast if performed in wavelet space.
The number next to a wavelet name is the number of vanishing moments of the wavelet; this is a set of mathematical relationships that the coefficients must satisfy and is related to the number of coefficients.
1. The length, L, of the input data vector must be an integer power of 2. This condition
can be met by padding the data vector with zeros as necessary (L >=n).
2. Each transform involves applying two functions
• The first applies some data smoothing, such as a sum or weighted average.
• The second performs a weighted difference, which acts to bring out the detailed
features of data.
3. The two functions are applied to pairs of data points in X, that is, to all pairs of
measurements (X2i , X2i+1). This results in two sets of data of length L/2. In general,
these represent a smoothed or low-frequency version of the input data and high
frequency content of it, respectively.
4. The two functions are recursively applied to the sets of data obtained in the previous
loop, until the resulting data sets obtained are of length 2.
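To make steps 2-4 concrete, here is a small Python sketch (not from the notes) of one level of a simple Haar-style transform: pairwise averages give the smoothed, low-frequency half and pairwise differences give the detail, high-frequency half.

```python
def haar_step(x):
    """One level of a simple Haar-style wavelet transform.

    x must have even length; returns (smooth, detail), each of length len(x) // 2.
    """
    smooth = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]   # data smoothing (average)
    detail = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]   # weighted difference (detail)
    return smooth, detail

# Example on a data vector whose length is a power of 2
x = [2, 2, 0, 2, 3, 5, 4, 4]
smooth, detail = haar_step(x)
print(smooth)   # [2.0, 1.0, 4.0, 4.0]   -> low-frequency version of the input
print(detail)   # [0.0, -1.0, -1.0, 0.0] -> detail coefficients
```

Applying haar_step recursively to the smooth half (step 4) yields the full multi-level decomposition.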
In the figure (omitted here), Y1 and Y2 are the first two principal components for the given set of data originally mapped to the axes X1 and X2. This information helps identify groups or patterns within the data. The sorted axes are such that the first axis shows the most variance among the data, the second axis shows the next highest variance, and so on.
• The size of the data can be reduced by eliminating the weaker components.
Advantage of PCA
• PCA is computationally inexpensive
• Multidimensional data of more than two dimensions can be handled by reducing the
problem to two dimensions.
• Principal components may be used as inputs to multiple regression and cluster analysis.
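A small numpy sketch (not from the notes) of the basic PCA computation: center the data, take the eigenvectors of the covariance matrix, and keep the components with the largest variance. The data values are made up for illustration.

```python
import numpy as np

# Hypothetical 2-dimensional data (rows = tuples, columns = attributes X1, X2)
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

Xc = X - X.mean(axis=0)                  # center each attribute
cov = np.cov(Xc, rowvar=False)           # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigen-decomposition (ascending eigenvalues)

order = np.argsort(eigvals)[::-1]        # sort components by decreasing variance
components = eigvecs[:, order]           # columns = Y1, Y2 (the principal components)

# Keep only the strongest component -> reduced 1-D representation of the data
reduced = Xc @ components[:, :1]
print(eigvals[order])                    # variance captured by each component
print(reduced.ravel())
```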
a) Min-Max Normalization
Min-max normalization performs a linear transformation on the original data. Suppose that minA and maxA are the minimum and maximum values of an attribute A. Min-max normalization maps a value v of A to v' in the new range [new_minA, new_maxA] by computing
v' = ((v − minA) / (maxA − minA)) × (new_maxA − new_minA) + new_minA
Min-max normalization preserves the relationships among the original data values. It will encounter an "out-of-bounds" error if a future input case for normalization falls outside of the original data range for A.
Example (min-max normalization): Suppose that the minimum and maximum values for the attribute income are $12,000 and $98,000, respectively. We would like to map income to the range [0.0, 1.0]. By min-max normalization, a value of $73,600 for income is transformed to
(73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0.0) + 0.0 = 0.716
b) Z-Score Normalization
The values for an attribute, A, are normalized based on the mean (i.e., average) and standard deviation of A. A value, vi, of A is normalized to vi' by computing
vi' = (vi − Ā) / σA
where Ā and σA are the mean and standard deviation, respectively, of attribute A.
Example (z-score normalization): Suppose that the mean and standard deviation of the values for the attribute income are $54,000 and $16,000, respectively. With z-score normalization, a value of $73,600 for income is transformed to
(73,600 − 54,000) / 16,000 = 1.225
c) Normalization by Decimal Scaling
Decimal scaling normalizes by moving the decimal point of the values of attribute A. A value v is normalized to v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1.
Example (decimal scaling): Suppose that the recorded values of A range from -986 to 917. The maximum absolute value of A is 986. To normalize by decimal scaling, we therefore divide each value by 1,000 (i.e., j = 3) so that -986 normalizes to -0.986 and 917 normalizes to 0.917.
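The three normalization methods above can be sketched in a few lines of Python (illustrative only; the numbers follow the worked examples):

```python
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mean_a, std_a):
    return (v - mean_a) / std_a

def decimal_scaling(v, j):
    return v / (10 ** j)

print(round(min_max(73_600, 12_000, 98_000), 3))          # 0.716
print(round(z_score(73_600, 54_000, 16_000), 3))          # 1.225
print(decimal_scaling(-986, 3), decimal_scaling(917, 3))  # -0.986 0.917
```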
State their partial ordering. The system can then try to automatically generate the
attribute ordering so as to construct a meaningful concept hierarchy.
4. Specification of only a partial set of attributes: Sometimes a user can be careless when defining a hierarchy, or have only a vague idea about what should be included in a hierarchy. Consequently, the user may have included only a small subset of the relevant attributes in the hierarchy specification.
✓ Data cleaning routines attempt to fill in missing values, smooth out noise while identifying outliers, and correct inconsistencies in the data.
✓ Data integration combines data from multiple sources to form a coherent data store. The resolution of semantic heterogeneity, metadata, correlation analysis, tuple duplication detection, and data conflict detection contribute to smooth data integration.
✓ Data reduction techniques obtain a reduced representation of the data while minimizing the loss of information content. These include methods of dimensionality reduction, numerosity reduction, and data compression.
✓ Data transformation routines convert the data into appropriate forms for mining. For example, in normalization, attribute data are scaled so as to fall within a small range such as 0.0 to 1.0. Other examples are data discretization and concept hierarchy generation.
✓ Data discretization transforms numeric data by mapping values to interval or concept labels. Such methods can be used to automatically generate concept hierarchies for the data, which allows for mining at multiple levels of granularity.
UNIT-3: DATA CLASSIFICATION
Classification is a form of data analysis that extracts models describing important data
classes. Such models, called classifiers, predict categorical (discrete, unordered) class labels.
For example, we can build a classification model to categorize bank loan applications as
either safe or risky. Such analysis can help provide us with a better understanding of the data
at large. Many classification methods have been proposed by researchers in machine learning,
pattern recognition, and statistics.
Why Classification?
A bank loans officer needs analysis of her data to learn which loan applicants are
“safe” and which are “risky” for the bank. A marketing manager at AllElectronics needs data
analysis to help guess whether a customer with a given profile will buy a new computer.
A medical researcher wants to analyze breast cancer data to predict which one of three
specific treatments a patient should receive. In each of these examples, the data analysis task
is classification, where a model or classifier is constructed to predict class (categorical)
labels, such as “safe” or “risky” for the loan application data; “yes” or “no” for the marketing
data; or “treatment A,” “treatment B,” or “treatment C” for the medical data.
Suppose that the marketing manager wants to predict how much a given customer will
spend during a sale at AllElectronics. This data analysis task is an example of numeric
prediction, where the model constructed predicts a continuous-valued function, or ordered
value, as opposed to a class label. This model is a predictor.
Regression analysis is a statistical methodology that is most often used for numeric
prediction; hence the two terms tend to be used synonymously, although other methods for
numeric prediction exist. Classification and numeric prediction are the two major types of
prediction problems.
General Approach for Classification:
Data classification is a two-step process, consisting of a learning step (where a
classification model is constructed) and a classification step (where the model is used to
predict class labels for given data).
• In the first step, a classifier is built describing a predetermined set of data classes or
concepts. This is the learning step (or training phase), where a classification algorithm
builds the classifier by analyzing or “learning from” a training set made up of database
tuples and their associated class labels.
• Each tuple/sample is assumed to belong to a predefined class, as determined by the class
label attribute
• In the second step, the model is used for classification. First, the predictive accuracy of
the classifier is estimated. If we were to use the training set to measure the classifier’s
accuracy, this estimate would likely be optimistic, because the classifier tends to overfit
the data.
• Accuracy rate is the percentage of test set samples that are correctly classified by the
model
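As an illustration of the two-step process (learning on a training set, then estimating accuracy on an independent test set), here is a hedged scikit-learn sketch; the dataset and classifier are stand-ins, not the loan or AllElectronics data discussed in the notes.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Step 1 (learning): build the classifier from class-labeled training tuples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2 (classification): predict labels for unseen tuples and estimate the
# accuracy rate on a test set independent of the training set (to avoid an
# optimistic, overfitted estimate)
y_pred = clf.predict(X_test)
print("accuracy rate:", accuracy_score(y_test, y_pred))
```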
During tree construction, attribute selection measures are used to select the attribute
that best partitions the tuples into distinct classes. When decision trees are built, many of the
branches may reflect noise or outliers in the training data. Tree pruning attempts to identify
and remove such branches, with the goal of improving classification accuracy on unseen data.
❖ During the late 1970s and early 1980s, J. Ross Quinlan, a researcher in machine
learning, developed a decision tree algorithm known as ID3 (Iterative Dichotomiser).
❖ This work expanded on earlier work on concept learning systems, described by E. B. Hunt, J.
Marin, and P. T. Stone. Quinlan later presented C4.5 (a successor of ID3), which became
a benchmark to which newer supervised learning algorithms are often compared.
❖ In 1984, a group of statisticians (L. Breiman, J. Friedman, R. Olshen, and C. Stone) published the book Classification and Regression Trees (CART), which described the generation of binary decision trees.
Algorithm: Generate decision tree. Generate a decision tree from the training tuples of data
partition, D.
Input:
▪ Data partition, D, which is a set of training tuples and their associated class labels;
▪ attribute list, the set of candidate attributes;
▪ Attribute selection method, a procedure to determine the splitting criterion that “best”
partitions the data tuples into individual classes. This criterion consists of a splitting
attribute and, possibly, either a split-point or splitting subset.
Output: A decision tree.
Method:
1) create a node N;
2) if tuples in D are all of the same class, C, then
3) return N as a leaf node labeled with the class C;
4) if attribute list is empty then
5) return N as a leaf node labeled with the majority class in D; // majority voting
6) apply Attribute selection method(D, attribute list) to find the “best” splitting
criterion;
7) label node N with splitting criterion;
8) if splitting attribute is discrete-valued and
multiway splits allowed then // not restricted to binary trees
9) attribute list ← attribute list − splitting attribute; // remove splitting attribute
10) for each outcome j of splitting criterion
// partition the tuples and grow subtrees for each partition
11) let Dj be the set of data tuples in D satisfying outcome j; // a partition
12) if Dj is empty then
13) attach a leaf labeled with the majority class in D to node N;
14) else attach the node returned by Generate decision tree(Dj , attribute list) to node N;
endfor
15) return N;
Binary Attributes: The test condition for a binary attribute generates two potential
outcomes.
Nominal Attributes: These can have many values. These can be represented in two ways.
Ordinal attributes: These can produce binary or multi way splits. The values can be
grouped as long as the grouping does not violate the order property of attribute values.
Information Gain
ID3 uses information gain as its attribute selection measure. Let node N represent or
hold the tuples of partition D. The attribute with the highest information gain is chosen as the
splitting attribute for node N. This attribute minimizes the information needed to classify the
tuples in the resulting partitions and reflects the least randomness or “impurity” in these
partitions. Such an approach minimizes the expected number of tests needed to classify a
given tuple and guarantees that a simple (but not necessarily the simplest) tree is found.
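The expression itself was an image in the original; the standard form of the expected information (entropy) needed to classify a tuple in D, consistent with the sentence that follows, is

$$ Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i) $$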
where pi is the nonzero probability that an arbitrary tuple in D belongs to class Ci and is estimated by |Ci,D|/|D|. A log function to the base 2 is used because the information is encoded in bits. Info(D) is also known as the entropy of D.
Information gain is defined as the difference between the original information requirement
(i.e., based on just the proportion of classes) and the new requirement (i.e., obtained after
partitioning on A). That is,
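(Reconstruction of the missing expressions, in standard form consistent with the text: the information still required after partitioning D on attribute A into v partitions, and the resulting gain, are)

$$ Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j), \qquad Gain(A) = Info(D) - Info_A(D) $$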
The attribute A with the highest information gain, Gain(A), is chosen as the splitting attribute at node N. This is equivalent to saying that we want to partition on the attribute A that would do the "best classification," so that the amount of information still required to finish classifying the tuples is minimal.
Gain Ratio
C4.5, a successor of ID3, uses an extension to information gain known as gain ratio, which attempts to overcome the bias of information gain toward attributes with many distinct values. It applies a kind of normalization to information gain using a "split information" value defined analogously with Info(D) as
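(Reconstruction of the missing expression, in standard form:)

$$ SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \log_2\!\left(\frac{|D_j|}{|D|}\right) $$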
This value represents the potential information generated by splitting the training data set, D,
into v partitions, corresponding to the v outcomes of a test on attribute A. Note that, for each
outcome, it considers the number of tuples having that outcome with respect to the total
number of tuples in D. It differs from information gain, which measures the information with
respect to classification that is acquired based on the same partitioning. The gain ratio is
defined as
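(Reconstruction of the missing expression, in standard form:)

$$ GainRatio(A) = \frac{Gain(A)}{SplitInfo_A(D)} $$

The attribute with the maximum gain ratio is selected as the splitting attribute.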
Gini Index
The Gini index is used in CART. Using the notation previously described, the Gini
index measures the impurity of D, a data partition or set of training tuples, as
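(Reconstruction of the missing expression, in standard form, for m classes:)

$$ Gini(D) = 1 - \sum_{i=1}^{m} p_i^2 $$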
where pi is the nonzero probability that an arbitrary tuple in D belongs to class Ci and is estimated by |Ci,D|/|D|, over m classes.
Note: The Gini index considers a binary split for each attribute.
When considering a binary split, we compute a weighted sum of the impurity of each resulting partition. For example, if a binary split on A partitions D into D1 and D2, the Gini index of D given that partitioning is
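(Reconstruction of the missing expression, in standard form:)

$$ Gini_A(D) = \frac{|D_1|}{|D|} Gini(D_1) + \frac{|D_2|}{|D|} Gini(D_2) $$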
➢ For each attribute, each of the possible binary splits is considered. For a discrete-valued
attribute, the subset that gives the minimum Gini index for that attribute is selected as its
splitting subset.
➢ For continuous-valued attributes, each possible split-point must be considered. The
strategy is similar to that described earlier for information gain, where the midpoint
between each pair of (sorted) adjacent values is taken as a possible split-point.
➢ The reduction in impurity that would be incurred by a binary split on a discrete- or
continuous-valued attribute A is
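(Reconstruction of the missing expression, in standard form:)

$$ \Delta Gini(A) = Gini(D) - Gini_A(D) $$

The attribute that maximizes the reduction in impurity (equivalently, has the minimum Gini index) is selected as the splitting attribute.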
Tree Pruning:
➢ When a decision tree is built, many of the branches will reflect anomalies in the training
data due to noise or outliers.
➢ Tree pruning methods address this problem of over fitting the data. Such methods
typically use statistical measures to remove the least-reliable branches.
➢ Pruned trees tend to be smaller and less complex and, thus, easier to comprehend.
➢ They are usually faster and better at correctly classifying independent test data (i.e., of
previously unseen tuples) than unpruned trees.
“How does tree pruning work?” There are two common approaches to tree pruning:
prepruning and postpruning.
➢ In the prepruning approach, a tree is “pruned” by halting its construction early. Upon
halting, the node becomes a leaf. The leaf may hold the most frequent class among the
subset tuples or the probability distribution of those tuples.
➢ If partitioning the tuples at a node would result in a split that falls below a prespecified threshold, then further partitioning of the given subset is halted. There are difficulties, however, in choosing an appropriate threshold.
➢ In the postpruning approach, subtrees are removed from a "fully grown" tree. A subtree at a given node is pruned by removing its branches and replacing it with a leaf. The leaf is labeled with the most frequent class among the subtree being replaced.
➢ This set is independent of the training set used to build the unpruned tree and of any test set used for accuracy estimation.
➢ The algorithm generates a set of progressively pruned trees. In general, the smallest decision tree that minimizes the cost complexity is preferred.
➢ C4.5 uses a method called pessimistic pruning, which is similar to the cost complexity method in that it also uses error rate estimates to make decisions regarding subtree pruning.
Scalability of Decision Tree Induction:
“What if D, the disk-resident training set of class-labeled tuples, does not fit in memory? In other words, how scalable is decision tree induction?” The efficiency of existing decision tree algorithms, such as ID3, C4.5, and CART, has been well established for relatively small data sets. Efficiency becomes an issue of concern when these algorithms are applied to the mining of very large real-world databases. The pioneering decision tree algorithms that we have discussed so far have the restriction that the training tuples should reside in memory.
In data mining applications, very large training sets of millions of tuples are common. Most often, the training data will not fit in memory! Therefore, decision tree construction becomes inefficient due to swapping of the training tuples in and out of main and cache memories. More scalable approaches, capable of handling training data that are too large to fit in memory, are required. Earlier strategies to “save space” included discretizing continuous-valued attributes and sampling data at each node. These techniques, however, still assume that the training set can fit in memory.
Several scalable decision tree induction methods have been introduced in recent studies. RainForest, for example, adapts to the amount of main memory available and applies to any decision tree induction algorithm. The method maintains an AVC-set (where “AVC” stands for “Attribute-Value, Classlabel”) for each attribute, at each tree node, describing the training tuples at the node. The AVC-set of an attribute A at node N gives the class label counts for each value of A for the tuples at N. The set of all AVC-sets at a node N is the AVC-group of N. The size of an AVC-set for attribute A at node N depends only on the number of distinct values of A and the number of classes in the set of tuples at N. Typically, this size should fit in memory, even for real-world data. RainForest also has techniques, however, for handling the case where the AVC-group does not fit in memory. Therefore, the method has high scalability for decision tree induction in very large data sets.
Solution:
Here the target class is buys_computer, with the values yes and no. Using the ID3 algorithm, we construct the decision tree.
For the ID3 algorithm we have to calculate the information gain attribute selection measure.
CLASS                     Count
P: buys_computer (yes)    9
N: buys_computer (no)     5
TOTAL                     14
Info(D) = Entropy(buys_computer) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
For the attribute age:
I(2,3) = -(2/5) log2(2/5) - (3/5) log2(3/5) = 0.971
I(4,0) = -(4/4) log2(4/4) - (0/4) log2(0/4) = 0
I(3,2) = -(3/5) log2(3/5) - (2/5) log2(2/5) = 0.971
Info_age(D) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.694
Gain(age) = Info(D) - Info_age(D) = 0.940 - 0.694 = 0.246

Age            P   N   Total   I(P,N)
youth          2   3   5       I(2,3) = 0.971
middle_aged    4   0   4       I(4,0) = 0
senior         3   2   5       I(3,2) = 0.971
Finally, age has the highest information gain among the attributes (Gain(age) = 0.246, compared with Gain(income) = 0.029, Gain(student) = 0.151, and Gain(credit_rating) = 0.048), so it is selected as the splitting attribute. Node N is labeled with age, and branches are grown for each of the attribute's values. The tuples are then partitioned accordingly (the resulting tree figure is omitted; the calculation is sketched in code below).
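A small Python sketch (not from the notes) that reproduces the entropy and information-gain arithmetic above from the class counts of the buys_computer example:

```python
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def info_gain(class_counts, partitions):
    """class_counts: class counts in D; partitions: class counts per attribute value."""
    total = sum(class_counts)
    info_d = entropy(class_counts)
    info_a = sum(sum(p) / total * entropy(p) for p in partitions)
    return info_d - info_a

# buys_computer: 9 yes, 5 no; partitions of D by age = youth / middle_aged / senior
print(round(entropy([9, 5]), 3))                               # 0.940
print(round(info_gain([9, 5], [[2, 3], [4, 0], [3, 2]]), 3))   # 0.247 (0.246 with the rounded intermediate values used above)
```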
Algorithm for Decision Tree Induction
A skeleton decision tree induction algorithm called TreeGrowth takes as input the training records E and the attribute set F. The algorithm works by recursively selecting the best attribute to split the data and expanding the leaf nodes of the tree (the pseudocode figure is omitted here).
EVALUATING THE PERFORMANCE OF A CLASSIFIER
The estimated error helps the learning algorithm to do model selection; i.e., to find a model of the
right complexity that is not susceptible to overfitting. Once the model has been constructed, it can
be applied to the test set to predict the class labels of previously unseen records.
Limitations of the holdout method: First, fewer labeled examples are available for training because some of the records are withheld for testing. As a result, the induced model may not be as good as when all the labeled examples are used for training.
Second, the smaller the training set size, the larger the variance of the model. On the other hand,
if the training set is too large, then the estimated accuracy computed from the smaller test set is
less reliable.
Random Subsampling
The holdout method can be repeated several times to improve the estimate of a classifier's performance; this approach is known as random subsampling. Let acci be the model accuracy during the ith iteration. The overall accuracy is given by acc_sub = (1/k) Σ_{i=1}^{k} acc_i.
Random subsampling still encounters some of the problems associated with the holdout method because it does not utilize as much data as possible for training. It also has no control over the
number of times each record is used for testing and training. Consequently, some records might be
used for training more often than others.
Cross-Validation
An alternative to random subsampling is cross-validation. In this approach, each record is used the
same number of times for training and exactly once for testing. To illustrate this method, suppose we partition the data into two equal-sized subsets.
First, we choose one of the subsets for training and the other for testing. We then swap the roles of the subsets so that the previous training set becomes the test set and vice versa. This approach is called a two-fold cross-validation.
In this example, each record is used exactly once for training and once for testing. The k-fold cross-validation method generalizes this approach by segmenting the data into k equal-sized
partitions. During each run, one of the partitions is chosen for testing, while the rest of them are used
for training. This procedure is repeated k times so that each partition is used for testing exactly once.
The total error is found by summing up the errors for all k runs.
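A short scikit-learn sketch (not from the notes) of k-fold cross-validation; the dataset and classifier are placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# k = 5: each record is used for testing exactly once and for training k - 1 times
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)
print(scores)          # accuracy of each of the k runs
print(scores.mean())   # overall cross-validated accuracy
```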
Bootstrap
In the bootstrap approach, the training records are sampled with replacement; i.e., a record already chosen for training is put back into the original pool of records so that it is equally likely to be redrawn. If the original data has N records, it can be shown that, on average, a bootstrap sample of size N contains about 63.2% of the records in the original data.
The probability that a record is chosen by a bootstrap sample is 1 − (1 − 1/N)^N. When N is sufficiently large, this probability asymptotically approaches 1 − e^(−1) = 0.632. Records that are not included in the bootstrap sample become part of the test set. The model induced from the training set is then applied to the test set to
obtain an estimate of the accuracy of the bootstrap sample, εi. The overall accuracy is then
acc_boot = (1/b) Σ_{i=1}^{b} (0.632 × εi + 0.368 × acc_s),
where b is the number of bootstrap samples and acc_s is the accuracy of a model trained on the full set of labeled examples.
MODEL OVERFITTING
The errors committed by a classification model are generally divided into two types: training
errors and generalization errors.
Training error, also known as resubstitution error or apparent error, is the number of misclassification errors committed on training records, whereas generalization error is the expected error of the model on previously unseen records.
A good classification model must not only fit the training data well; it must also accurately classify records it has never seen before. A good model must have a low training error as well as a low
generalization error. This is important because a model that fits the training data too well can have
a poorer generalization error than a model with a higher training error. Such a situation is
known as model overfitting.
Overfitting Example in Two-Dimensional Data
Consider a two-dimensional data set (the figure is omitted here) containing data points that belong to two different classes, denoted as class o and class +, respectively.
The data points for the o class are generated from a mixture of three Gaussian distributions, while a uniform distribution is used to generate the data points for the + class. There are altogether 1200 points belonging to the o class and 1800 points belonging to the + class. 30% of the points are chosen for training, while the remaining 70% are used for testing. To investigate the effect of overfitting, different levels of pruning are applied to the initial, fully-grown tree, and the training and test error rates of the resulting decision trees are compared (figure omitted).
The training and test error rates of the model are large when the size of the tree is very small.
This situation is known as model underfitting. As the number of nodes in the decision tree
increases, the tree will have fewer training and test errors. However, once the tree becomes
too large, its test error rate begins to increase even though its training error rate continues to
decrease. This phenomenon is known as model overfitting.
Comparing the structure of two decision trees with different numbers of nodes (figure omitted here), the tree that contains the smaller number of nodes has a higher training error rate, but a lower test error rate compared to the more complex tree.
An example training set for classifying mammals. Class labels with asterisk symbols represent mislabeled records.
Model 1 misclassifies humans and dolphins as non mammals. Model 2 has a lower test error
rate (10%) even though its training error rate is higher (20%).
Humans, elephants, and dolphins are misclassified because the decision tree classifies all warm-
blooded vertebrates that do not hibernate as non-mammals. The tree arrives at this classification
decision because there is only one training record, which is an eagle, with such characteristics.
This example clearly demonstrates the danger of making wrong predictions when there are not
enough representative examples at the leaf nodes of a decision tree.
UNIT-4: Classification
Syllabus:
Classification: Alternative Techniques, Bayes’ Theorem, Naïve Bayesian Classification, Bayesian Belief Networks
Bayesian Classifiers:
In many applications the relationship between the attribute set and the class variable is non-deterministic.
The class label of a test record cannot be predicted with certainty even though its attribute set is identical to some of the training examples.
For example, consider the task of predicting whether a person is at risk for heart disease based on the person’s diet and workout frequency.
Although most people who eat healthily and exercise regularly have less chance of developing heart disease, some may still develop it because of other factors such as heredity, excessive smoking, and alcohol abuse.
Determining whether a person’s diet is healthy or the workout frequency is sufficient is also subject to interpretation, which in turn may introduce uncertainties into the learning problem.
Bayesian classification is an approach for modeling probabilistic relationships between the attribute set and the class variable.
It is based on the Bayes theorem, a statistical principle for combining prior knowledge of the classes with new evidence gathered from data.
The use of the Bayes theorem for solving classification problems is explained first, followed by a description of two implementations of Bayesian classifiers: naive Bayes and the Bayesian belief network.
Bayesian Classification:
Bayesian classifiers are statistical classifiers.
They can predict class membership probabilities, such as the probability that a given tuple
belongs to a particular class.
Bayesian classification is based on Bayes’ theorem
Bayes Theorem
Let X be a data tuple. In Bayesian terms, X is considered "evidence" and it is described by measurements made on a set of n attributes.
Let H be some hypothesis, such as that the data tuple X belongs to a specified class C.
For classification problems, we want to determine P(H|X), the probability that the hypothesis H holds given the "evidence" or observed data tuple X.
P(H|X) is the posterior probability, or a posteriori probability, of H conditioned on X.
Bayes’ theorem is useful in that it provides a way of calculating the posterior probability,
P(H|X), from P(H), P(X|H), and P(X).
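The theorem itself (its equation was an image in the original) states, in standard form,

$$ P(H \mid X) = \frac{P(X \mid H)\, P(H)}{P(X)} $$

where P(H) is the prior probability of H, P(X|H) is the likelihood of the evidence given H, and P(X) is the prior probability of the evidence.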
Problem:
Consider a football game between two rival teams: Team 0 and Team 1. Suppose Team 0 wins
65% of the time and Team 1 wins the remaining matches.
Among the games won by Team 0, only 30% of them come from playing on Team 1’s football
field. On the other hand, 75% of the victories for Team 1 are obtained while playing at home.
If Team 1 is to host the next match between the two teams, which team will most likely emerge
as the winner?
Solution:
The Bayes theorem can be used to solve the prediction problem.
For notational convenience, let A be the random variable that represents the team hosting the
match and V be the random variable that represents the winner of the match. Both A and V can take on
values from the set {0, 1}.
The probability that Team 0 wins is P(V = 0) = 0.65.
The probability that Team 1 wins is P(V = 1) = 1 − P(V = 0) = 0.35.
The probability that Team 1 hosted the match it won is P(A = 1 | V = 1) = 0.75.
The probability that Team 1 hosted the match won by Team 0 is P(A = 1 | V = 0) = 0.3.
Our objective is to compute P(V = 1 | A = 1), the conditional probability that Team 1 wins the
next match it will be hosting, and to compare it against P(V = 0 | A = 1).
Using the Bayes theorem, we obtain
P(V = 1 | A = 1) = P(A = 1 | V = 1) P(V = 1) / [P(A = 1 | V = 1) P(V = 1) + P(A = 1 | V = 0) P(V = 0)]
= (0.75 × 0.35) / (0.75 × 0.35 + 0.3 × 0.65) = 0.2625 / 0.4575 ≈ 0.57,
and therefore P(V = 0 | A = 1) = 1 − 0.57 ≈ 0.43.
Since Team 1 has the higher posterior probability of winning, Team 1 is more likely to emerge as the winner of the match it hosts.
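The same computation can be written as a short Python sketch (added here purely as a numerical check, not part of the original problem statement):

# Bayes-theorem check for the football example; the probabilities below are
# the ones stated in the problem.
p_v0 = 0.65            # P(V = 0): Team 0 wins
p_v1 = 1 - p_v0        # P(V = 1): Team 1 wins
p_a1_given_v1 = 0.75   # P(A = 1 | V = 1): Team 1 hosted the matches it won
p_a1_given_v0 = 0.30   # P(A = 1 | V = 0): Team 1 hosted the matches Team 0 won

# Total probability that Team 1 hosts the match, P(A = 1)
p_a1 = p_a1_given_v1 * p_v1 + p_a1_given_v0 * p_v0

# Bayes' theorem: posterior probabilities of each winner given that Team 1 hosts
p_v1_given_a1 = p_a1_given_v1 * p_v1 / p_a1
p_v0_given_a1 = p_a1_given_v0 * p_v0 / p_a1
print(round(p_v1_given_a1, 3))  # ~0.574 -> Team 1 is the more likely winner
print(round(p_v0_given_a1, 3))  # ~0.426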
We wish to predict the class label of a tuple using naïve Bayesian classification, given the
same training data above.
The training data were shown above in Table.
The data tuples are described by the attributes age, income, student, and credit rating.
The class label attribute, buys computer, has two distinct values (namely, {yes, no}).
Let C1 correspond to the class buys computer=yes and C2 correspond to buys computer=no.
The tuple we wish to classify is
X={age= “youth”, income= “medium”, student= “yes”, credit_rating= “fair”}
We need to maximize P(X|Ci)P(Ci), for i=1,2. P(Ci), the prior probability of each class, can be
computed based on the training tuples:
P(buys computer = yes) = 9/14 = 0.643
P(buys computer = no) = 5/14 = 0.357
To compute P(X|Ci), for i = 1, 2, we compute the following conditional probabilities:
P(age = youth | buys computer = yes) = 2/9 = 0.222
P(income = medium | buys computer = yes) = 4/9 = 0.444
P(student = yes | buys computer = yes) = 6/9 = 0.667
P(credit rating = fair | buys computer = yes) = 6/9 = 0.667
P(age = youth | buys computer = no) = 3/5 = 0.600
P(income = medium | buys computer = no) = 2/5 = 0.400
P(student = yes | buys computer = no) = 1/5 = 0.200
P(credit rating = fair | buys computer = no) = 2/5 = 0.400
Using these probabilities, we obtain
P(X | buys computer=yes) =
P(age=youth | buys computer=yes) × P(income=medium | buys computer=yes) × P(student=yes | buys
computer=yes) × P(credit rating=fair | buys computer=yes) = 0.222 × 0.444 × 0.667 × 0.667
= 0.044.
Similarly, P(X | buys computer=no) = 0.600 × 0.400 × 0.200 × 0.400 = 0.019.
To find the class, Ci, that maximizes P(X|Ci)P(Ci),
we compute
P(X | buys computer=yes) P(buys computer=yes) = 0.044 × 0.643 = 0.028
P(X | buys computer=no) P(buys computer=no) = 0.019 × 0.357 = 0.007
Therefore, the naïve Bayesian classifier predicts buys computer = yes for tuple X.
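The same arithmetic can be checked with a few lines of Python (a sketch added for illustration; the probabilities are exactly the ones computed above from the 14-tuple training set):

# Naive Bayes scoring for X = {age=youth, income=medium, student=yes, credit_rating=fair}
priors = {"yes": 9 / 14, "no": 5 / 14}
likelihoods = {
    "yes": {"age=youth": 2 / 9, "income=medium": 4 / 9,
            "student=yes": 6 / 9, "credit_rating=fair": 6 / 9},
    "no":  {"age=youth": 3 / 5, "income=medium": 2 / 5,
            "student=yes": 1 / 5, "credit_rating=fair": 2 / 5},
}

scores = {}
for label in priors:
    # Naive (conditional independence) assumption: multiply the per-attribute
    # likelihoods, then weight by the class prior.
    p_x_given_c = 1.0
    for p in likelihoods[label].values():
        p_x_given_c *= p
    scores[label] = p_x_given_c * priors[label]

print(scores)                       # {'yes': ~0.028, 'no': ~0.007}
print(max(scores, key=scores.get))  # 'yes' -> buys computer = yes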
Bayes error rate:
Bayesian classification methods allow us to determine the ideal decision boundary for a
classification task; the error rate attained by this ideal boundary, which is the lowest error rate achievable by any classifier on the problem, is known as the Bayes error rate.
K nearest neighbour:
• K-Nearest Neighbour is one of the simplest Machine Learning algorithms based on
Supervised Learning technique.
• The K-NN algorithm assumes similarity between the new case/data and the available cases
and puts the new case into the category that is most similar to the available categories.
Why do we need a K-NN Algorithm?
• Suppose there are two categories, i.e., Category A and Category B, and we have a new
data point x1; to which of these categories will this data point belong?
• To solve this type of problem, we need a K-NN algorithm.
• With the help of K-NN, we can easily identify the category or class of a particular
data point.
Consider the below diagram:
• Suppose we choose K = 5. By calculating the Euclidean distance we get the five nearest
neighbors: three of them in Category A and two in Category B.
• Since the majority of the nearest neighbors are from Category A, the new data point
is assigned to Category A.
How to select the value of K in the K-NN Algorithm?
• Below are some points to remember while selecting the value of K in the K-NN
algorithm:
• There is no particular way to determine the best value for "K", so we need to try
some values to find the best out of them. The most preferred value for K is 5.
• A very low value for K such as K=1 or K=2, can be noisy and lead to the effects of
outliers in the model.
• Large values for K can reduce the effect of noise, but they make the class boundaries
less distinct and increase the amount of computation required.
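To make the procedure concrete, here is a minimal Python sketch (illustrative only; the training points and the helper name knn_predict are hypothetical, not part of the notes) that classifies a new point by a majority vote among its K nearest neighbours under Euclidean distance:

import math
from collections import Counter

def knn_predict(train, new_point, k=5):
    # train: list of (point, label) pairs; point is a tuple of coordinates
    dists = [(math.dist(p, new_point), label) for p, label in train]
    dists.sort(key=lambda d: d[0])                 # nearest first
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]              # majority class among the K nearest

train = [((1, 1), "A"), ((1, 2), "A"), ((2, 2), "A"),
         ((6, 6), "B"), ((7, 7), "B")]
print(knn_predict(train, (2, 3), k=3))  # 'A' -> majority of the 3 nearest neighbours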
Association Analysis:
Association analysis is useful for discovering interesting relationships hidden in large
data sets. The uncovered relationships can be represented in the form of association rules or
sets of frequent items. For example, the following rule can be extracted from the data set
shown in the below table:
{Diapers} → {Beer}
The rule suggests that a strong relationship exists between the sale of diapers and
beer because many customers who buy diapers also buy beer. There are two key issues that
need to be addressed when applying association analysis to market basket data. First,
discovering patterns from a large transaction data set can be computationally expensive. Second,
some of the discovered patterns are potentially spurious because they may happen simply by
chance.
1. PROBLEM DEFINITION
The basic terminology used in association analysis:
Binary Representation: Market basket data can be represented in a binary format as shown
in below Table, where each row corresponds to a transaction and each column corresponds to
an item. An item can be treated as a binary variable whose value is one if the item is present in
a transaction and zero otherwise.
Itemset and Support Count Let I = {i1,i2,….,id} be the set of all items in a market basket
data and T = {t1,t2,….,tN} be the set of all transactions. In association analysis, a collection of
zero or more items is termed an itemset. If an itemset contains k items, it is called a k-itemset.
For instance, {Beer, Diapers, Milk} is an example of a 3-itemset. The null (or empty) set is an
itemset that does not contain any items.
The transaction width is defined as the number of items present in a transaction. A transaction tj
is said to contain an itemset X if X is a subset of tj. For example, the second transaction shown in
the above table contains the itemset {Bread, Diapers} but not {Bread, Milk}. An important property
of an itemset is its support count, which refers to the number of transactions that contain the
itemset. Mathematically, the support count σ(X) of an itemset X can be stated as
σ(X) = | { ti | X ⊆ ti, ti ∈ T } |,
where the symbol |·| denotes the number of elements in a set. In the data set shown in the above
table, the support count for {Beer, Diapers, Milk} is equal to two because there are only two
transactions that contain all three items.
Association Rule: An association rule is an implication expression of the form X → Y, where X and Y
are disjoint itemsets. The strength of an association rule can be measured in terms of its support
and confidence. Support determines how often a rule is applicable to a given data set, while
confidence determines how frequently items in Y appear in transactions that contain X. The formal
definitions of these metrics are
support, s(X → Y) = σ(X ∪ Y) / N
confidence, c(X → Y) = σ(X ∪ Y) / σ(X),
where N is the total number of transactions.
Consider the rule {Milk, Diapers} →{Beer}. Since the support count for {Milk, Diapers,
Beer} is 2 and the total number of transactions is 5, the rule’s support is 2/5 = 0.4. The rule’s
confidence is obtained by dividing the support count for {Milk, Diapers, Beer} by the support
count for {Milk, Diapers}. Since there are 3 transactions that contain milk and diapers, the
confidence for this rule is 2/3 = 0.67.
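To make the arithmetic concrete, here is a short Python sketch (not part of the original notes) that recomputes the support and confidence of {Milk, Diapers} → {Beer}; the five transactions are an illustrative reconstruction consistent with the counts quoted above.

# Illustrative market basket data (5 transactions, consistent with the counts above)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diapers", "Beer", "Eggs"},
    {"Milk", "Diapers", "Beer", "Cola"},
    {"Bread", "Milk", "Diapers", "Beer"},
    {"Bread", "Milk", "Diapers", "Cola"},
]

def support_count(itemset):
    # sigma(X): number of transactions that contain every item of X
    return sum(1 for t in transactions if itemset <= t)

X, Y = {"Milk", "Diapers"}, {"Beer"}
support = support_count(X | Y) / len(transactions)    # 2/5 = 0.4
confidence = support_count(X | Y) / support_count(X)  # 2/3 ~= 0.67
print(round(support, 2), round(confidence, 2))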
Why Use Support and Confidence?
Support is an important measure because a rule that has very low support may occur simply by
chance. A low-support rule is also likely to be uninteresting from a business perspective because
it may not be profitable to promote items that customers seldom buy together.
Confidence, on the other hand, measures the reliability of the inference made by a rule. For a given
rule X —› Y, the higher the confidence, the more likely it is for Y to be present in transactions that
contain X. Confidence also provides an estimate of the conditional probability of Y given X.
Formulation of Association Rule Mining Problem The association rule mining problem
can be formally stated as follows:
Association Rule Discovery Given a set of transactions T, find all the rules having support ≥
minsup and confidence ≥ minconf, where minsup and minconf are the corresponding support and
confidence thresholds. A brute-force approach for mining association rules is to compute the
support and confidence for every possible rule.
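It can be shown that for a data set containing d items, the total number of possible association rules is R = 3^d − 2^(d+1) + 1; even for d = 6 items this already amounts to 602 rules, which is why the support-based pruning strategies described next are needed instead of a brute-force search.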
An itemset lattice.
After counting their supports, the candidate itemsets {Cola} and {Eggs} are discarded
because they appear in fewer than three transactions.
In the next iteration, candidate 2-itemsets are generated using only the frequent 1-
itemsets because of the Apriori principle.
Two of these six candidates, {Beer, Bread} and {Beer, Milk}, are subsequently found
to be infrequent after computing their support values.
The remaining four candidates are frequent, and thus will be used to generate
candidate 3-itemsets.
With the Apriori principle, we only need to keep candidate 3-itemsets whose subsets
are frequent.
The only candidate that has this property is {Bread, Diapers, Milk}.
The effectiveness of the Apriori pruning strategy can be shown by counting the
number of candidate itemsets generated. A brute-force strategy that enumerates all itemsets up to
size 3 would produce 6C1 + 6C2 + 6C3 = 6 + 15 + 20 = 41 candidates, whereas with the Apriori
principle only 6 + 6 + 1 = 13 candidates are generated, which represents a 68% reduction in the
number of candidate itemsets even in this simple example.
Example 2:
1. The join step: To find Lk, a set of candidate k-itemsets is generated by joining Lk-1
with itself. This set of candidates is denoted Ck.
2. The prune step: Ck is a superset of Lk, that is, its members may or may not be
frequent, but all of the frequent k-itemsets are included in Ck. A database scan to
determine the count of each candidate in Ck would result in the determination of Lk.
Apriori algorithm:
The pseudo code for the frequent itemset generation part of the Apriori algorithm is
shown in the below Algorithm.
Let Ck denote the set of candidate k-itemsets and Fk denote the set of frequent k-
itemsets:
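Since the algorithm itself is not reproduced here, the following Python sketch (an illustration, not the textbook's pseudocode) shows one way the join, prune, and support-counting steps fit together; the transaction list is the same illustrative five-transaction data set used earlier.

from itertools import combinations

def apriori(transactions, minsup_count):
    # F1: frequent 1-itemsets
    items = {item for t in transactions for item in t}
    Fk = {frozenset([i]) for i in items
          if sum(1 for t in transactions if i in t) >= minsup_count}
    frequent = set(Fk)
    k = 2
    while Fk:
        # Join step: candidate k-itemsets from unions of frequent (k-1)-itemsets
        Ck = {a | b for a in Fk for b in Fk if len(a | b) == k}
        # Prune step (Apriori principle): drop candidates with an infrequent (k-1)-subset
        Ck = {c for c in Ck
              if all(frozenset(s) in Fk for s in combinations(c, k - 1))}
        # Support counting: keep candidates contained in at least minsup_count transactions
        Fk = {c for c in Ck
              if sum(1 for t in transactions if c <= t) >= minsup_count}
        frequent |= Fk
        k += 1
    return frequent

# Illustrative data, consistent with the counts quoted earlier (minsup count = 3)
transactions = [{"Bread", "Milk"},
                {"Bread", "Diapers", "Beer", "Eggs"},
                {"Milk", "Diapers", "Beer", "Cola"},
                {"Bread", "Milk", "Diapers", "Beer"},
                {"Bread", "Milk", "Diapers", "Cola"}]
for itemset in sorted(apriori(transactions, minsup_count=3), key=len):
    print(set(itemset))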
Generating Association Rules from Frequent Item sets:
Once the frequent itemsets from transactions in a database D have been found, it is
straightforward to generate strong association rules from them (where strong
association rules satisfy both minimum support and minimum confidence).
This can be done using the equation for confidence, shown again here for completeness:
confidence(A ⇒ B) = P(B | A) = support_count(A ∪ B) / support_count(A).
A second method for frequent itemset generation is the Frequent Pattern (FP) Growth algorithm.
Now, all the Ordered-Item sets are inserted into a Trie Data Structure.
a) Inserting the set {K, E, M, O, Y}:
Here, all the items are simply linked one after the other in the order of occurrence in the set
and initialize the support count for each item as 1.
b) Inserting the set {K, E, O, Y}:
Till the insertion of the elements K and E, simply the support count is increased by 1. On
inserting O we can see that there is no direct link between E and O, therefore a new node for
the item O is initialized with the support count as 1 and item E is linked to this new node. On
inserting Y, we first initialize a new node for the item Y with support count as 1 and link the
new node of O with the new node of Y.
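A small Python sketch of this insertion step (illustrative; the FPNode class is hypothetical, not the notes' code) shows how the two ordered item sets share the common prefix K → E and how the support counts are incremented:

class FPNode:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}

def insert(root, ordered_items):
    node = root
    for item in ordered_items:
        # Reuse the existing child if this path already exists, otherwise create
        # a new node, then increment the support count along the path.
        node = node.children.setdefault(item, FPNode(item))
        node.count += 1

def show(node, depth=0):
    for child in node.children.values():
        print("  " * depth + f"{child.item}:{child.count}")
        show(child, depth + 1)

root = FPNode(None)
insert(root, ["K", "E", "M", "O", "Y"])
insert(root, ["K", "E", "O", "Y"])
show(root)
# Expected structure: K:2, under it E:2, and under E two branches:
# M:1 -> O:1 -> Y:1 and O:1 -> Y:1.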
Now, for each item, the Conditional Pattern Base is computed, which is the set of path labels of all the
paths that lead to any node of the given item in the frequent-pattern tree. Note that the items
in the below table are arranged in ascending order of their frequencies.
Now, for each item, the Conditional Frequent Pattern Tree is built. It is done by taking the
set of elements that is common to all the paths in the Conditional Pattern Base of that item
and calculating its support count by summing the support counts of all the paths in the
Conditional Pattern Base.
From the Conditional Frequent Pattern Tree, the Frequent Pattern rules are generated by
pairing the items of the Conditional Frequent Pattern Tree set with the corresponding item,
as given in the below table.
For each row, two types of association rules can be inferred; for example, for the first row,
which contains the itemset {K, Y}, the rules K → Y and Y → K can be inferred. To determine
the valid rule, the confidence of both rules is calculated, and the one with confidence
greater than or equal to the minimum confidence value is retained.
This algorithm needs to scan the database only twice, compared to Apriori,
which scans the transactions once for each iteration.
Candidate generation (the pairing of items) is not performed in this algorithm, which makes it faster.
The database is stored in a compact version in memory.
It is efficient and scalable for mining both long and short frequent patterns.
Applications:
Group related documents for browsing; group genes and proteins that have similar functionality.
What is not cluster analysis?
Supervised classification: uses class label information.
Simple segmentation: dividing students into different registration groups alphabetically, by last name.
Results of a query: groupings are the result of an external specification.
Graph partitioning.
Types of clustering:
A clustering is a set of clusters.
1) Partitional clustering: a division of data objects into non-overlapping subsets (clusters).
2) Hierarchical clustering:
A set of nested clusters organized as a hierarchical tree.
Distinctions between different sets of clusterings:
Exclusive versus non-exclusive:
Exclusive clustering: each point is assigned to exactly one cluster.
Non-exclusive clustering: a data object may belong to multiple clusters.
Complete versus partial:
Complete clustering: every object is assigned to a cluster.
Partial clustering: not every object needs to be assigned.
Fuzzy versus non-fuzzy:
In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1.
Heterogeneous versus homogeneous:
Clusters may be of widely different sizes, shapes, and densities.
Partitional vs. hierarchical:
Partitional clustering: a division of data into non-overlapping clusters, such that each data object is in
exactly one subset.
Hierarchical clustering: a set of nested clusters organized as a hierarchical tree.
• Each node (cluster) is the union of its children (subclusters).
• The root of the tree is the cluster containing all data objects.
• The leaves of the tree are singleton clusters.
Types of clusters:
Well-separated clusters: a cluster is a set of points such that any point in a cluster is closer to every other
point in the cluster than to any point not in the cluster.
Center-based clusters: a cluster is a set of objects such that an object in a cluster is closer to the center
of its own cluster than to the center of any other cluster.
The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most
representative point of the cluster.
Contiguous clusters: a point in a cluster is closer to one or more other points in the cluster than to any
point not in the cluster.
Density-based clustering:
Density-based clustering groups data points by how densely populated they are: a cluster is a dense region of
points, separated from other regions of high density by regions of low density.
It is used when the clusters are irregular or intertwined, and when noise and outliers are present.
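As an illustration, DBSCAN is a widely used density-based algorithm; the following usage sketch assumes scikit-learn is installed and uses hypothetical two-dimensional points. Points in dense regions are grouped into clusters, while sparse points are labelled -1 (noise/outliers).

import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical data: two dense regions plus one isolated outlier
X = np.array([[1.0, 1.0], [1.1, 1.0], [0.9, 1.1],    # dense region 1
              [8.0, 8.0], [8.1, 7.9], [7.9, 8.1],    # dense region 2
              [4.0, 15.0]])                          # isolated outlier
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1 -1]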
Clustering algorithms:
K-means algorithm:
K-means follows the partitional clustering approach: each cluster is associated with a centroid (center point), and each
point is assigned to the cluster with the closest centroid. The basic algorithm is simple.
K-means clustering aims to partition n observations into k clusters in which each observation belongs to
the cluster with the nearest mean.
Algorithm:
Step 1: Select K initial centroids (for example, K points chosen at random).
Step 2: Assign each point to the cluster with the nearest centroid, then recompute each centroid as the mean of the points assigned to it.
Step 3: Repeat step 2 until the centroids (means) no longer change.
Time complexity = O(I × K × m × n), where:
I = number of iterations (often small and safely bounded),
K = number of clusters (significantly less than m),
m = number of points,
n = number of attributes.
Thus K-means is linear in m, the number of points.
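A minimal NumPy sketch of these steps (illustrative only; the toy data and the function name kmeans are hypothetical, not the notes' implementation):

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick K distinct points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: index of the nearest centroid for each point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):   # Step 3: stop when means stabilise
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [8.5, 9.0]])  # toy data
labels, centroids = kmeans(X, k=2)
print(labels)
print(centroids)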
K-Means Algorithm has a few limitations which are as follows:
• It only identifies spherical-shaped clusters, i.e., it cannot identify clusters that are
non-spherical or of varying size and density.
• It suffers from local minima and has a problem when the data contains outliers.
Bisecting K-means :
• Bisecting K-Means is a modification (variant) of the K-Means algorithm that can produce a
partitional or a hierarchical clustering.
• It can recognize clusters of any shape and size.
• Bisecting K-Means is like a combination of K-Means and hierarchical clustering.
• Instead of partitioning the data into k clusters in each iteration, bisecting K-Means splits
one cluster into two sub-clusters at each bisecting step (by using K-Means) until k clusters are
obtained.
Basic Bisecting K-means Algorithm for finding K clusters
1. Pick a cluster to split.
2. Find 2 sub-clusters using the basic K-means algorithm. (Bisecting step)
3. Repeat step 2, the bisecting step, for ITER times and take the split that produces the clustering with
the highest overall similarity.
4. Repeat steps 1, 2 and 3 until the desired number of clusters is reached.
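A compact sketch of this procedure (illustrative only; it uses scikit-learn's KMeans for the 2-way split, picks the largest cluster to split, and for brevity performs each bisection once rather than ITER times):

import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, k):
    clusters = [X]                      # start with one cluster holding all points
    while len(clusters) < k:
        # Step 1: pick a cluster to split (here: the one with the most points)
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        to_split = clusters.pop(idx)
        # Step 2: bisect it with 2-means
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(to_split)
        clusters.append(to_split[labels == 0])
        clusters.append(to_split[labels == 1])
    return clusters

X = np.random.default_rng(0).random((30, 2))   # hypothetical data
for c in bisecting_kmeans(X, k=3):
    print(len(c))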
Agglomerative clustering works in a “bottom-up” manner. That is, each object is initially considered as a
single-element cluster (leaf). At each step of the algorithm, the two clusters that are the most similar are
combined into a new, bigger cluster (node). This procedure is iterated until all points are members of just
one single big cluster (the root); see the figure below.
The inverse of agglomerative clustering is divisive clustering, also known as DIANA (DIvisive
ANAlysis), which works in a “top-down” manner. It begins with the root, in which all objects are included
in a single cluster. At each step of the iteration, the most heterogeneous cluster is divided into two. The
process is iterated until each object is in its own cluster.
Example: Agglomerative Hierarchical Clustering
Clustering starts by computing a distance between every pair of units that you want to cluster. A distance
matrix will be symmetric (because the distance between x and y is the same as the distance between y and
x) and will have zeroes on the diagonal (because every item is distance zero from itself). The table below
is an example of a distance matrix. Only the lower triangle is shown, because the upper triangle can be
filled in by reflection.
Now let’s start clustering. The smallest distance is between three and five, so they get linked up or
merged first into the cluster “35”.
To obtain the new distance matrix, we need to remove the 3 and 5 entries and replace them with an entry “35”.
Since we are using complete linkage clustering, the distance between “35” and every other item is the
maximum of the distances between that item and 3 and that item and 5. For example, d(1,3) = 3 and
d(1,5) = 11, so D(1,“35”) = 11. This gives us the new distance matrix. The items with the smallest
distance get clustered next; this will be 2 and 4.
Continuing in this way, after 6 steps, everything is clustered. This is summarized below. On this plot, the
y-axis shows the distance between the objects at the time they were clustered. This is called the cluster
height. Different visualizations use different measures of cluster height.
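The same steps can be reproduced with SciPy's complete-linkage implementation (assumed installed). The distance matrix below is an illustrative reconstruction consistent with the distances quoted above (d(1,3) = 3, d(1,5) = 11, with 3 and 5 the closest pair and 2 and 4 merging next); note that SciPy numbers the observations from 0 rather than 1.

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Symmetric 5x5 distance matrix for items 1..5 (zero diagonal), illustrative values
D = np.array([[ 0,  9,  3,  6, 11],
              [ 9,  0,  7,  5, 10],
              [ 3,  7,  0,  9,  2],
              [ 6,  5,  9,  0,  8],
              [11, 10,  2,  8,  0]], dtype=float)

# Complete linkage: distance between clusters is the maximum pairwise distance
Z = linkage(squareform(D), method="complete")
print(Z)  # each row: the two clusters merged and the height (distance) of the merge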
Time and Space Complexity:
The basic agglomerative hierarchical clustering algorithm just presented uses a proximity matrix. This
requires the storage of (1/2)·m² proximities, where m is the number of data points, so the total space
complexity is O(m²).
The overall time required for a hierarchical clustering based on this algorithm is O(m² log m).