Data Warehousing Notes
A data warehouse is a collection of data marts representing historical data from different
operations in the company. This data is stored in a structure optimized for querying and
data analysis. Table design, dimensions and organization should be consistent throughout
a data warehouse so that reports or queries across the data warehouse are consistent. A
data warehouse can also be viewed as a database for historical data from different
functions within a company.
The term Data Warehouse was coined by Bill Inmon in 1990, who defined it in
the following way: "A data warehouse is a subject-oriented, integrated, time-variant and non-
volatile collection of data in support of management's decision making process." He
defined the terms in the sentence as follows:
Subject Oriented: Data that gives information about a particular subject instead of
about a company's ongoing operations.
Integrated: Data that is gathered into the data warehouse from a variety of sources and
merged into a coherent whole.
Time-variant: All data in the data warehouse is identified with a particular time period.
Non-volatile: Data is stable in a data warehouse. More data is added but data is never removed.
Data Mart: Departmental subsets that focus on selected subjects. A data mart is a segment
of a data warehouse that can provide data for reporting and analysis on a section, unit,
department or operation in the company, e.g. sales, payroll, production. Data marts are
sometimes complete individual data warehouses which are usually smaller than the
corporate data warehouse.
Drill-down: Traversing the summarization levels from highly summarized data to the
underlying current or old detail
• Data warehouses are designed to perform well with aggregate queries running on large amounts of data.
• The structure of data warehouses is easier for end users to navigate, understand and query against, unlike relational databases, which are primarily designed to handle large volumes of transactions.
• Data warehouses enable queries that cut across different segments of a company's operation, e.g. production data can be compared against inventory data even if they were originally stored in different databases with different structures.
• Queries that would be complex in highly normalized databases can be easier to build and maintain in data warehouses, decreasing the workload on transaction systems.
• Data warehousing is an efficient way to manage and report on data that comes from a variety of sources and is non-uniform and scattered throughout a company.
• Data warehousing is an efficient way to manage demand for large amounts of information from large numbers of users.
• Data warehousing provides the capability to analyze large amounts of historical data.
• Operational Data:
□ Focuses on transactional functions such as bank card withdrawals and deposits
□ Detailed
□ Updateable
□ Reflects current data
• Informational Data:
□ Focuses on providing answers to problems posed by decision makers
□ Summarized
□ Non-updateable
The major distinguishing features between OLTP and OLAP systems are summarized as follows.
1. Users and system orientation: An OLTP system is customer-oriented and is used for
transaction and query processing by clerks, clients, and information technology professionals. An
OLAP system is market-oriented and is used for data analysis by knowledge workers, including
managers, executives, and analysts.
2. Data contents: An OLTP system manages current data that, typically, are too detailed to be
easily used for decision making. An OLAP system manages large amounts of historical data,
provides facilities for summarization and aggregation, and stores and manages information at
different levels of granularity. These features make the data easier for use in informed decision
making.
3. Database design: An OLTP system usually adopts an entity-relationship (ER) data model and
an application oriented database design. An OLAP system typically adopts either a star or
snowflake model and a subject-oriented database design.
4. View: An OLTP system focuses mainly on the current data within an enterprise or department,
without referring to historical data or data in different organizations. In contrast, an OLAP
system often spans multiple versions of a database schema and deals with information that
originates from different organizations, integrating information from many data stores.
5. Access patterns: The access patterns of an OLTP system consist mainly of short, atomic
transactions. Such a system requires concurrency control and recovery mechanisms. However,
accesses to OLAP systems are mostly read-only operations although many could be complex
queries.
The most popular data model for data warehouses is a multidimensional model. This
model can exist in the form of a star schema, a snowflake schema, or a fact constellation schema.
Let's have a look at each of these schema types.
Star schema: The star schema is a modeling paradigm in which the data warehouse
contains (1) a large central table (fact table), and (2) a set of smaller attendant tables
(dimension tables), one for each dimension. The schema graph resembles a starburst,
with the dimension tables displayed in a radial pattern around the central fact table.
Snowflake schema: The snowflake schema is a variant of the star schema in which some
dimension tables are normalized, thereby further splitting the data into additional tables.
The resulting schema graph forms a shape similar to a snowflake.
Fact constellation: Sophisticated applications may require multiple fact tables to share
dimension tables. This kind of schema can be viewed as a collection of stars, and hence is
called a galaxy schema or a fact constellation.
Figure: Fact constellation schema of a data warehouse for sales and shipping.
A Concept Hierarchy
A concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-
level, more general concepts. Concept hierarchies allow data to be handled at varying levels of
abstraction.
OLAP operations on multidimensional data.
1. Roll-up: The roll-up operation performs aggregation on a data cube, either by climbing-up a
concept hierarchy for a dimension or by dimension reduction. Figure shows the result of a roll-up
operation performed on the central cube by climbing up the concept hierarchy for location. This
hierarchy was defined as the total order street < city < province or state < country.
2. Drill-down: Drill-down is the reverse of roll-up. It navigates from less detailed data to more
detailed data. Drill-down can be realized by either stepping-down a concept hierarchy for a
dimension or introducing additional dimensions. Figure shows the result of a drill-down
operation performed on the central cube by stepping down a concept hierarchy for time defined
as day < month < quarter < year. Drill-down occurs by descending the time hierarchy from the
level of quarter to the more detailed level of month.
3. Slice and dice: The slice operation performs a selection on one dimension of the given cube,
resulting in a subcube. Figure shows a slice operation where the sales data are selected from the
central cube for the dimension time using the criterion time = "Q2". The dice operation defines a
subcube by performing a selection on two or more dimensions.
4. Pivot (rotate): Pivot is a visualization operation which rotates the data axes in view in order
to provide an alternative presentation of the data. Figure shows a pivot operation where the item
and location axes in a 2-D slice are rotated.
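The four OLAP operations above can be illustrated on a tiny sales table with pandas. This is only an informal sketch (pandas is assumed to be available, and the country/city/quarter/item columns and the figures are invented for the example); a real OLAP server would evaluate these operations against the cube itself rather than a flat table.

import pandas as pd

# A tiny fact table: one row per (city, quarter, item) with a sales amount.
sales = pd.DataFrame({
    "country": ["Canada", "Canada", "USA", "USA"],
    "city":    ["Toronto", "Vancouver", "Chicago", "New York"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "item":    ["phone", "phone", "tv", "tv"],
    "amount":  [100, 150, 200, 250],
})

# Roll-up: climb the location hierarchy from city up to country.
rollup = sales.groupby(["country", "quarter"])["amount"].sum()

# Drill-down would go the other way (e.g. quarter -> month) and therefore
# needs the more detailed month-level data to be available.

# Slice: select on a single dimension, here time = "Q2".
slice_q2 = sales[sales["quarter"] == "Q2"]

# Dice: select on two or more dimensions.
dice = sales[(sales["quarter"] == "Q2") & (sales["country"] == "Canada")]

# Pivot (rotate): present a 2-D view with item and city as the axes.
pivot = sales.pivot_table(index="item", columns="city", values="amount", aggfunc="sum")

print(rollup, slice_q2, dice, pivot, sep="\n\n")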
The bottom tier is the warehouse database server, which is almost always a relational
database system. The middle tier is an OLAP server, which is typically implemented using either
(1) a Relational OLAP (ROLAP) model or (2) a Multidimensional OLAP (MOLAP) model. The
top tier is a client, which contains query and reporting tools, analysis tools, and/or data mining
tools (e.g., trend analysis, prediction, and so on).
From the architecture point of view, there are three data warehouse models: the enterprise
warehouse, the data mart, and the virtual warehouse.
Data mart: A data mart contains a subset of corporate-wide data that is of value to a
specific group of users. The scope is connected to specific, selected subjects. For
example, a marketing data mart may connect its subjects to customer, item, and sales.
The data contained in data marts tend to be summarized. Depending on the source of
data, data marts can be categorized into the following two classes:
(i).Independent data marts are sourced from data captured from one or more
operational systems or external information providers, or from data generated locally
within a particular department or geographic area.
(ii).Dependent data marts are sourced directly from enterprise data warehouses.
The four processes from extraction through loading are often referred to collectively as Data Staging.
EXTRACT
Some of the data elements in the operational database can reasonably be expected to be useful
in decision making, but others are of less value for that purpose. For this reason, it is
necessary to extract the relevant data from the operational database before bringing it into the data
warehouse. Many commercial tools are available to help with the extraction process. Data
Junction is one such commercial product. The user of one of these tools typically has an easy-
to-use windowed interface by which to specify the following:
(i) Which files and tables are to be accessed in the source database?
(ii) Which fields are to be extracted from them? This is often done internally by an SQL SELECT statement.
(iii) What are those fields to be called in the resulting database?
(iv) What is the target machine and database format of the output?
(v) On what schedule should the extraction process be repeated?
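As a rough illustration of points (i)-(v), the same specification can be written in a few lines of Python instead of through a tool's windowed interface. Everything here is hypothetical: the operational.db file, the orders table and its fields, and the staging file name are invented for the sketch.

import sqlite3

# (i) source database and table, (ii) fields to extract (an internal SQL SELECT).
conn = sqlite3.connect("operational.db")            # hypothetical operational database
rows = conn.execute("SELECT cust_id, order_date, amount FROM orders")

# (iii) names in the resulting database/file, (iv) target format (a CSV staging file here).
with open("staging_orders.csv", "w") as out:
    out.write("customer_id,order_date,sales_amount\n")   # renamed target fields
    for cust_id, order_date, amount in rows:
        out.write(f"{cust_id},{order_date},{amount}\n")

conn.close()
# (v) the extraction schedule would be handled outside the script, e.g. by cron.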
TRANSFORM
The operational databases may have been developed based on different sets of priorities, which keep
changing with the requirements. Therefore, those who develop a data warehouse based on these
databases are typically faced with inconsistencies among their data sources. The transformation
process deals with rectifying such inconsistencies, if any.
CLEANSING
Information quality is the key consideration in determining the value of the information. The
developer of the data warehouse is not usually in a position to change the quality of the
underlying historical data, though a data warehousing project can put a spotlight on the data
quality issues and lead to improvements for the future. It is, therefore, usually necessary to go
through the data entered into the data warehouse and make it as error free as possible. This
process is known as Data Cleansing.
Data cleansing must deal with many types of possible errors. These include missing data and
incorrect data at one source, and inconsistent data and conflicting data when two or more sources
are involved. There are several algorithms for cleaning the data, which will be discussed in the
coming lecture notes.
LOADING
Loading often implies physical movement of the data from the computer(s) storing the source
database(s) to that which will store the data warehouse database, assuming it is different. This
takes place immediately after the extraction phase. The most common channel for data
movement is a high-speed communication link. For example, Oracle Warehouse Builder is a tool
from Oracle that provides features to perform the ETL tasks on an Oracle data warehouse.
Single-source problems
The data quality of a source largely depends on the degree to which it is governed by schema and
integrity constraints controlling permissible data values. For sources without schema, such as
files, there are few restrictions on what data can be entered and stored, giving rise to a high
probability of errors and inconsistencies. Database systems, on the other hand, enforce
restrictions of a specific data model (e.g., the relational approach requires simple attribute values,
referential integrity, etc.) as well as application-specific integrity constraints. Schema-related
data quality problems thus occur because of the lack of appropriate model-specific or
application-specific integrity constraints, e.g., due to data model limitations or poor schema
design. Instance-specific problems, on the other hand, relate to errors and inconsistencies that
cannot be prevented at the schema level (e.g., misspellings).
For both schema- and instance-level problems we can differentiate different problem scopes:
attribute (field), record, record type and source; examples for the various cases are shown in
Tables 1 and 2. Note that uniqueness constraints specified at the schema level do not prevent
duplicated instances, e.g., if information on the same real world entity is entered twice with
different attribute values (see example in Table 2).
Multi-source problems
The problems present in single sources are aggravated when multiple sources need to be
integrated. Each source may contain dirty data and the data in the sources may be represented
differently, overlap or contradict. This is because the sources are typically developed, deployed
and maintained independently to serve specific needs. This results in a large degree of
heterogeneity w.r.t. data management systems, data models, schema designs and the actual data.
At the schema level, data model and schema design differences are to be addressed by the
steps of schema translation and schema integration, respectively. The main problems w.r.t.
schema design are naming conflicts and structural conflicts.
A main problem for cleaning data from multiple sources is to identify overlapping data,
in particular matching records referring to the same real-world entity (e.g., customer). This
problem is also referred to as the object identity problem, duplicate elimination or the
merge/purge problem. Frequently, the information is only partially redundant and the sources
may complement each other by providing additional information about an entity. Thus duplicate
information should be purged out and complementing information should be consolidated and
merged in order to achieve a consistent view of real world entities.
The two sources in the example of Fig. 3 are both in relational format but exhibit schema and
data conflicts. At the schema level, there are name conflicts (synonyms Customer/Client,
Cid/Cno, Sex/Gender) and structural conflicts (different representations for names and
addresses). At the instance level, we note that there are different gender representations ("0"/"1"
vs. "F"/"M") and presumably a duplicate record (Kristen Smith). The latter observation also
reveals that while Cid/Cno are both source-specific identifiers, their contents are not comparable
across the sources.
Definition of transformation workflow and mapping rules: Depending on the number of data
sources, their degree of heterogeneity and the "dirtiness" of the data, a large number of data
transformation and cleaning steps may have to be executed. Sometimes, a schema translation is
used to map sources to a common data model; for data warehouses, typically a relational
representation is used. Early data cleaning steps can correct single-source instance problems and
prepare the data for integration. Later steps deal with schema/data integration and cleaning multi-
source instance problems, e.g., duplicates.
For data warehousing, the control and data flow for these transformation and cleaning steps
should be specified within a workflow that defines the ETL process (Fig. 1).
The schema-related data transformations as well as the cleaning steps should be specified
by a declarative query and mapping language as far as possible, to enable automatic generation
of the transformation code. In addition, it should be possible to invoke user-written cleaning code
and special purpose tools during a data transformation workflow. The transformation steps may
request user feedback on data instances for which they have no built-in cleaning logic.
Transformation: Execution of the transformation steps, either by running the ETL workflow for
loading and refreshing a data warehouse or when answering queries on multiple sources.
Data analysis
Metadata reflected in schemas is typically insufficient to assess the data quality of a source,
especially if only a few integrity constraints are enforced. It is thus important to analyse the
actual instances to obtain real (reengineered) metadata on data characteristics or unusual value
patterns. This metadata helps in finding data quality problems. Moreover, it can effectively
contribute to identifying attribute correspondences between source schemas (schema matching),
based on which automatic data transformations can be derived.
There are two related approaches for data analysis, data profiling and data mining. Data
profiling focuses on the instance analysis of individual attributes. It derives information such as
the data type, length, value range, discrete values and their frequency, variance, uniqueness,
occurrence of null values, typical string pattern (e.g., for phone numbers), etc., providing an
exact view of various quality aspects of the attribute.
Table: Examples of how this metadata can help in detecting data quality problems.
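A minimal data-profiling pass of the kind described above can be sketched with pandas; the customers.csv file and its columns are assumptions made only for the illustration.

import pandas as pd

df = pd.read_csv("customers.csv")          # hypothetical source extract

# Per-attribute metadata: data type, null counts, number of distinct values,
# and the value range for numeric attributes.
profile = pd.DataFrame({
    "dtype":    df.dtypes.astype(str),
    "non_null": df.notnull().sum(),
    "nulls":    df.isnull().sum(),
    "distinct": df.nunique(),
    "min":      df.min(numeric_only=True),
    "max":      df.max(numeric_only=True),
})
print(profile)

# Discrete values and their frequency for one attribute, e.g. to spot odd codes.
print(df["gender"].value_counts(dropna=False))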
Metadata repository
Metadata are data about data. When used in a data warehouse, metadata are the data that define
warehouse objects. Metadata are created for the data names and definitions of the given
warehouse. Additional metadata are created and captured for time stamping any extracted data,
the source of the extracted data, and missing fields that have been added by data cleaning or
integration processes. A metadata repository should contain:
Cube Operation
The cube operation can be expressed in a SQL-like language (with a new operator, cube by, introduced by Gray et al. '96):

SELECT item, city, year, SUM (amount)
FROM SALES
CUBE BY item, city, year
Computation
Partition arrays into chunks (a small sub cube which fits in memory).
Compressed sparse array addressing: (chunk_id, offset)
Compute aggregates in "multiway" fashion by visiting cube cells in an order that
minimizes the number of times each cell must be visited, thereby reducing memory
access and storage costs.
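The chunking idea can be sketched with NumPy: the cube is visited one chunk at a time, and each cell contributes to all of the lower-dimensional aggregates while its chunk is in memory. The cube size, chunk size and the three 2-D aggregates below are made up for the example.

import numpy as np

cube = np.random.rand(8, 8, 8)     # toy 3-D cube with dimensions A, B, C
chunk = 4                          # chunk edge length (a chunk fits in memory)

agg_AB = np.zeros((8, 8))          # aggregate over C
agg_AC = np.zeros((8, 8))          # aggregate over B
agg_BC = np.zeros((8, 8))          # aggregate over A

# Multiway aggregation: each chunk is read once and updates all three aggregates.
for a in range(0, 8, chunk):
    for b in range(0, 8, chunk):
        for c in range(0, 8, chunk):
            block = cube[a:a+chunk, b:b+chunk, c:c+chunk]
            agg_AB[a:a+chunk, b:b+chunk] += block.sum(axis=2)
            agg_AC[a:a+chunk, c:c+chunk] += block.sum(axis=1)
            agg_BC[b:b+chunk, c:c+chunk] += block.sum(axis=0)

assert np.allclose(agg_AB, cube.sum(axis=2))   # sanity check against a direct sum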
The bitmap index is an alternative representation of the record ID (RID) list. In the
bitmap index for a given attribute, there is a distinct bit vector, Bv, for each value v in the
domain of the attribute. If the domain of a given attribute consists of n values, then n bits are
needed for each entry in the bitmap index.
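A bitmap index is easy to sketch in plain Python: one bit vector Bv per distinct value v of the indexed attribute, with bit i set when record i has that value. The five records below are invented for the illustration.

# Records of a small table; the attribute being indexed is "city".
cities = ["Toronto", "Chicago", "Toronto", "New York", "Chicago"]

# Build one bit vector per value in the domain of the attribute.
bitmap = {}
for rid, value in enumerate(cities):
    bits = bitmap.setdefault(value, [0] * len(cities))
    bits[rid] = 1

print(bitmap["Toronto"])      # [1, 0, 1, 0, 0]

# Selections become bitwise operations, e.g. city = Toronto OR city = Chicago:
either = [t | c for t, c in zip(bitmap["Toronto"], bitmap["Chicago"])]
print(either)                 # [1, 1, 1, 0, 1]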
The join indexing method gained popularity from its use in relational database query
processing. Traditional indexing maps the value in a given column to a list of rows having that
value. In contrast, join indexing registers the joinable rows of two relations from a relational
database. For example, if two relations R(RID;A) and S(B; SID) join on the attributes A and B,
then the join index record contains the pair (RID; SID), where RID and SID are record identifiers
from the R and S relations, respectively.
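A join index can likewise be sketched as the list of (RID, SID) pairs for rows of R and S that join on A = B; the two toy relations below are invented for the illustration.

# R(RID, A) and S(B, SID): joinable rows are those sharing A = B.
R = [(0, "TV"), (1, "phone"), (2, "TV")]        # (RID, A)
S = [("TV", 10), ("laptop", 11), ("TV", 12)]    # (B, SID)

join_index = [(rid, sid)
              for rid, a in R
              for b, sid in S
              if a == b]

print(join_index)   # [(0, 10), (0, 12), (2, 10), (2, 12)]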
1. Determine which operations should be performed on the available cuboids. This involves
transforming any selection, projection, roll-up (group-by) and drill-down operations specified in
the query into corresponding SQL and/or OLAP operations. For example, slicing and dicing of a
data cube may correspond to selection and/or projection operations on a materialized cuboid.
2. Determine to which materialized cuboid(s) the relevant operations should be applied. This
involves identifying all of the materialized cuboids that may potentially be used to answer the
query.
1. Information processing
2. Analytical processing
3. Data mining
Note:
The move from On-Line Analytical Processing (OLAP) to On-Line Analytical Mining (OLAM)
is sometimes described as the move from data warehousing to data mining.
Most data mining tools need to work on integrated, consistent, and cleaned data, which
requires costly data cleaning, data transformation and data integration as preprocessing steps. A
data warehouse constructed by such preprocessing serves as a valuable source of high quality
data for OLAP as well as for data mining.
Effective data mining needs exploratory data analysis. A user will often want to traverse
through a database, select portions of relevant data, analyze them at different granularities, and
present knowledge/results in different forms. On-line analytical mining provides facilities for
data mining on different subsets of data and at different levels of abstraction, by drilling,
pivoting, filtering, dicing and slicing on a data cube and on some intermediate data mining
results.
By integrating OLAP with multiple data mining functions, on-line analytical mining
provides users with the flexibility to select desired data mining functions and swap data mining
tasks dynamically.
A metadata directory is used to guide the access of the data cube. The data cube can be
constructed by accessing and/or integrating multiple databases and/or by filtering a data
warehouse via a Database API which may support OLEDB or ODBC connections. Since an
OLAM engine may perform multiple data mining tasks, such as concept description, association,
classification, prediction, clustering, time-series analysis, etc., it usually consists of multiple,
integrated data mining modules and is more sophisticated than an OLAP engine.
UNIT II
The major reason that data mining has attracted a great deal of attention in the information
industry in recent years is the wide availability of huge amounts of data and
the imminent need for turning such data into useful information and knowledge.
The information and knowledge gained can be used for applications ranging from
business management, production control, and market analysis, to engineering design
and science exploration.
Data collection and database creation (1960s and earlier): the evolution of database technology began with primitive file processing.
Data mining refers to extracting or "mining" knowledge from large amounts of data.
There are many other terms related to data mining, such as knowledge mining, knowledge
extraction, data/pattern analysis, data archaeology, and data dredging. Many people treat
data mining as a synonym for another popularly used term, "Knowledge Discovery
in Databases", or KDD. Data mining is an essential step in the process of knowledge
discovery in databases.
Data mining is the process of discovering interesting knowledge from large amounts
of data stored either in databases, data warehouses, or other information repositories.
Based on this view, the architecture of a typical data mining system may have the
following major components:
Flat files: Flat files are actually the most common data source for data mining
algorithms, especially at the research level. Flat files are simple data files in text or binary
format with a structure known by the data mining algorithm to be applied. The data in
these files can be transactions, time-series data, scientific measurements, etc.
Data mining algorithms using relational databases can be more versatile than data mining
algorithms specifically written for flat files, since they can take advantage of the
structure inherent to relational databases. While data mining can benefit from SQL for
data selection, transformation and consolidation, it goes beyond what SQL could
provide, such as predicting, comparing, detecting deviations, etc.
Data warehouses
The data cube structure that stores the primitive or lowest level of information is called a base cuboid.
Transactional databases
In general, a transactional database consists of a flat file where each record represents a
transaction. A transaction typically includes a unique transaction identity number (trans
ID), and a list of the items making up the transaction (such as items purchased in a store)
as shown below:
SALES
Trans-ID    List of item_IDs
T100        I1, I3, I8
...         ...
• A spatial database contains spatial-related data, which may be represented in the form
of raster or vector data. Raster data consists of n-dimensional bit maps or pixel maps,
and vector data are represented by lines, points, polygons or other kinds of
geometric primitives. Some examples of spatial databases include geographical (map)
databases, VLSI chip design databases, and medical and satellite image databases.
• Time-Series Databases: Time-series databases contain time-related data such as stock
market data or logged activities. These databases usually have a continuous flow of
new data coming in.
• A text database is a database that contains text documents or other word descriptions in
the form of long sentences or paragraphs, such as product specifications, error or
bug reports, warning messages, summary reports, notes, or other documents.
• A multimedia database stores images, audio, and video data, and is used in
applications such as picture content-based retrieval, voice-mail systems, video-on-demand
systems, the World Wide Web, and speech-based user interfaces.
1.4 Data mining functionalities/Data mining tasks: what kinds of patterns can
be mined?
Data mining functionalities are used to specify the kind of patterns to be found in
data mining tasks. In general, data mining tasks can be classified into two categories:
• Descriptive
• Predictive
Descriptive mining tasks characterize the general properties of the data in the database.
Predictive mining tasks perform inference on the current data in order to
make predictions.
Describe data mining functionalities and the kinds of patterns they can discover
(or)
Define each of the following data mining functionalities: characterization,
discrimination, association and correlation analysis, classification, prediction,
clustering, and evolution analysis. Give examples of each data mining functionality,
using a real-life database that you are familiar with.
Data characterization is a summarization of the general characteristics or features of a target class of data.
Data Discrimination is a comparison of the general features of target class data objects
with the general features of objects from one or a set of contrasting classes.
Example
The general features of students with high GPAs may be compared with the
general features of students with low GPAs. The resulting description could be
a general comparative profile of the students, such as: 75% of the students with high
GPAs are fourth-year computing science students, while 65% of the students with low
GPAs are not.
Association analysis is the discovery of association rules showing attribute-value conditions
that occur frequently together in a given set of data. For example, the rule
major(X, "computing science") => owns(X, "personal computer") [support = 12%, confidence = 98%]
may be mined, where X is a variable representing a student. The rule indicates that of the students under
study, 12% (support) major in computing science and own a personal computer. There is a
98% probability (confidence, or certainty) that a student in this group owns a personal
computer.
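Support and confidence for a rule of this form can be computed directly from a list of records. The tiny student table below is invented just to make the computation concrete; it does not reproduce the 12%/98% figures above.

# Each record: (major, owns_pc) for one student (hypothetical data).
students = [
    ("computing science", True),
    ("computing science", True),
    ("biology", False),
    ("computing science", False),
    ("history", True),
]

n = len(students)
both = sum(1 for major, pc in students if major == "computing science" and pc)
antecedent = sum(1 for major, pc in students if major == "computing science")

support = both / n               # fraction of all students covered by the whole rule
confidence = both / antecedent   # fraction of CS majors who own a personal computer

print(f"support = {support:.0%}, confidence = {confidence:.0%}")
# support = 40%, confidence = 67%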
Example:
A grocery store retailer wants to decide whether to put bread on sale. To help determine the
impact of this decision, the retailer generates association rules that show what other
products are frequently purchased with bread. He finds that 60% of the time that bread is sold,
pretzels are also sold, and that 70% of the time jelly is also sold. Based on these facts, he tries
to capitalize on the association between bread, pretzels, and jelly by placing some
pretzels and jelly at the end of the aisle where the bread is placed. In addition, he
decides not to place either of these items on sale at the same time.
Classification:
Classification can be defined as the process of finding a model (or function) that
describes and distinguishes data classes or concepts, for the purpose of being able to use
the model to predict the class of objects whose class label is unknown. The derived
model is based on the analysis of a set of training data (i.e., data objects whose class label
is known).
Example:
The derived model may be represented in various forms, such as:
1) IF-THEN rules
2) Decision trees
3) Neural networks
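As a small, hedged sketch of classification (assuming scikit-learn is installed; the training tuples and the credit-risk labels are invented), a decision tree can be trained on labeled data and then used to predict the class of an unlabeled object:

from sklearn.tree import DecisionTreeClassifier

# Training tuples described by [age, income], each with a known class label.
X_train = [[25, 30000], [40, 90000], [35, 60000], [50, 120000], [23, 20000]]
y_train = ["high", "low", "low", "low", "high"]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# Predict the class label of an object whose label is unknown.
print(model.predict([[30, 40000]]))    # e.g. ['high']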
Prediction:
Prediction is used to find missing or unavailable data values rather than class labels.
Although prediction may refer to both data value prediction and class label prediction,
it is usually confined to data value prediction and is thus distinct from classification.
Prediction also encompasses the identification of distribution trends based on the available data.
Classification differs from prediction in that the former is to construct a set of models
(or functions) that describe and distinguish data class or concepts, whereas the latter
is to predict some missing or unavailable, and often numerical, data values. Their
similarity is that they are both tools for prediction: Classification is used for predicting the
class label of data objects and prediction is typically used for predicting missing numerical
data values.
Clustering analysis
Clustering analyzes data objects without consulting a known class label. The objects are
clustered or grouped based on the principle of maximizing the intra-class similarity and
minimizing the inter-class similarity.
Each cluster that is formed can be viewed as a class of objects.
Clustering can also facilitate taxonomy formation, that is, the organization of
observations into a hierarchy of classes that group similar events together.
A certain national department store chain creates special catalogs targeted to various
demographic groups based on attributes such as income, location and
physical characteristics of potential customers (age, height, weight, etc). To determine
the target mailings of the various catalogs and to assist in the creation of new, more
specific catalogs, the company performs a clustering of potential customers based
on the determined attribute values. The results of the clustering exercise are then used
by management to create special catalogs and distribute them to the correct target
population based on the cluster for that catalog.
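A customer clustering of this kind can be sketched with k-means (scikit-learn assumed available; the two-attribute customer records are invented):

from sklearn.cluster import KMeans

# Each row: [annual_income_in_thousands, age] for one potential customer.
customers = [[30, 25], [32, 27], [31, 24],     # a younger, lower-income group
             [85, 52], [90, 55], [88, 50]]     # an older, higher-income group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

print(kmeans.labels_)            # cluster assignment of each customer
print(kmeans.cluster_centers_)   # the centroid of each cluster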
Outlier analysis: A database may contain data objects that do not comply with the general
model of the data. These data objects are outliers. In other words, data objects which do
not fall within any cluster are called outlier data objects. Noisy or exceptional
data are also called outlier data. The analysis of outlier data is referred to as outlier
mining.
Example
Outlier analysis may uncover fraudulent usage of credit cards by detecting purchases
of extremely large amounts for a given account number in comparison to regular
charges incurred by the same account. Outlier values may also be detected with
respect to the location and type of purchase, or the purchase frequency.
Evolution analysis describes and models regularities or trends for objects whose behavior changes over time.
Example:
The examination results of a college over the last several years would give an idea of the
quality of the graduates it produces and of how that quality has changed over time.
Correlation analysis
Correlation analysis is a technique used to measure the association between two variables.
A correlation coefficient (r) is a statistic used for measuring the strength of a
supposed linear association between two variables. Correlations range from -1.0 to +1.0 in
value.
A correlation coefficient of 0.0 indicates no relationship between the two variables. That
is, one cannot use the scores on one variable to tell anything about the scores on the
second variable.
1.5 Are all of the patterns interesting? / What makes a pattern interesting?
A pattern is interesting if it is (1) easily understood by humans, (2) valid on new or test data
with some degree of certainty, (3) potentially useful, and (4) novel.
A pattern is also interesting if it validates a hypothesis that the user sought to confirm.
An interesting pattern represents knowledge.
There are many data mining systems available or being developed. Some are
specialized systems dedicated to a given data source or confined to limited
data mining functionalities, while others are more versatile and comprehensive. Data mining
systems can be categorized according to various criteria; among other classifications are the
following:
• Task-relevant data: This primitive specifies the data upon which mining is to
be performed. It involves specifying the database and tables or data warehouse containing
the relevant data, conditions for selecting the relevant data, the relevant attributes
or dimensions for exploration, and instructions regarding the ordering or grouping of the
data retrieved.
• Knowledge type to be mined: This primitive specifies the specific data mining
function to be performed, such as characterization, discrimination, association,
classification, clustering, or evolution analysis. As well, the user can be more specific and
provide pattern templates that all discovered patterns must match. These templates or
meta patterns (also called meta rules or meta queries), can be used to guide the discovery
process.
• Pattern interestingness measure: This primitive allows users to specify functions that are
used to separate uninteresting patterns from knowledge. They may be used to guide the
mining process or, after discovery, to evaluate the discovered patterns.
The differences between the following architectures for the integration of a data mining
system with a database or data warehouse system are as follows.
• No coupling:
The data mining system uses sources such as flat files to obtain the initial data set to
be mined since no database system or data warehouse system functions are implemented
as part of the process. Thus, this architecture represents a poor design choice.
• Loose coupling:
The data mining system is not integrated with the database or data warehouse system
beyond their use as the source of the initial data set to be mined, and possible use
in storage of the results. Thus, this architecture can take advantage of the flexibility,
efficiency and features such as indexing that the database and data warehousing
systems may provide. However, it is difficult for loose coupling to achieve high
scalability and good performance with large data sets as many such systems are memory-
based.
• Semi-tight coupling:
Some of the data mining primitives, such as aggregation, sorting or pre-computation
of statistical functions are efficiently implemented in the database or data warehouse
system, for use by the data mining system during mining-query processing. Also, some
frequently used inter mediate mining results can be pre computed and stored in the
database or data warehouse system, thereby enhancing the performance of the data mining
system.
• Tight coupling:
The database or data warehouse system is fully integrated as part of the data mining
system and thereby provides optimized data mining query processing. Thus, the data
mining sub system is treated as one functional component of an information system. This
is a highly desirable architecture as it facilitates efficient implementations of data
mining functions, high system performance, and an integrated information processing environment.
From the descriptions of the architectures provided above, it can be seen that tight
coupling is the best alternative without regard to technical or implementation issues.
However, as much of the technical infrastructure needed in a tightly coupled system is
still evolving, implementation of such a system is non-trivial. Therefore, the most
popular architecture is currently semi tight coupling as it provides a compromise
between loose and tight coupling.
_ Data mining query languages and ad-hoc data mining: Just as relational query languages
(such as SQL) allow users to pose ad-hoc queries for data retrieval, high-level data mining
query languages are needed to allow users to pose ad-hoc mining queries.
_ Handling outlier or incomplete data: The data stored in a database may reflect
outliers: noise, exceptional cases, or incomplete data objects. These objects may confuse
the analysis process, causing overfitting of the data to the knowledge model
constructed. As a result, the accuracy of the discovered patterns can be poor. Data
cleaning methods and data analysis methods which can handle outliers are required.
_ Handling of relational and complex types of data: Since relational databases and
data warehouses are widely used, the development of efficient and effective data
mining systems for such data is important.
Data preprocessing
Data preprocessing describes any type of processing performed on raw data to prepare it
for another processing procedure. Commonly used as a preliminary data mining
practice, data preprocessing transforms the data into a format that will be more
easily and effectively processed for the purpose of the user
Data in the real world is dirty: it can be incomplete, noisy and inconsistent.
Such data needs to be preprocessed in order to help improve the quality of the data and, in
turn, the quality of the mining results.
If there is no quality data, then there can be no quality mining results; quality decisions must
always be based on quality data.
If there is much irrelevant and redundant information present, or noisy and
unreliable data, then knowledge discovery during the training phase becomes more difficult.
The major tasks in data preprocessing are:
o Data cleaning
o Data integration
o Data transformation
o Data reduction
o Data discretization: part of data reduction, of particular importance for numerical data.
Measures of Central Tendency
A measure of central tendency is a single value that attempts to describe a set of data
by identifying the central position within that set of data. As such, measures of
central tendency are sometimes called measures of central location.
Mean: The mean, or average, of n numbers is the sum of the numbers divided by n. That is: mean = (x1 + x2 + ... + xn) / n.
Example 1
The marks of seven students in a mathematics test with a maximum possible mark of 20
are given below:
15 13 18 16 14 17 12
Solution:
Mean = (15 + 13 + 18 + 16 + 14 + 17 + 12) / 7 = 105 / 7 = 15.
The midrange of a data set is the average of the minimum and maximum values.
Median: The median of n numbers is the middle number when the numbers are written in order.
If n is even, the median is the average of the two middle numbers.
Example 2
The marks of nine students in a geography test that had a maximum possible mark of 50
are given below:
47 35 37 32 38 39 36 34 35
Solution:
Arrange the data values in order from the lowest value to the highest value:
32 34 35 35 36 37 38 39 47
The fifth data value, 36, is the middle value in this arrangement.
Note:
In general:
If the number of values in the data set is even, then the median is the average of the
two middle values.
Solution:
Arrange the data values in order from the lowest value to the highest value:
10 12 13 16 17 18 19 21
The number of values in the data set is 8, which is even. So, the median is the average
of the two middle values: median = (16 + 17) / 2 = 16.5.
Trimmed mean: the mean obtained after removing a small percentage (e.g. 2%) of the highest and lowest values before computing the mean, which reduces the influence of extreme values.
Mode: The mode of a set of numbers is the number that occurs most frequently. If two numbers
tie for most frequent occurrence, the collection has two modes and is called bimodal.
The mode has applications in printing. For example, it is important to print more of
the most popular books, because printing different books in equal numbers would
cause a shortage of some books and an oversupply of others.
Example: For the data values 48, 44, 48, 45, 42, 49, 48, the mode is 48, since it occurs most frequently (three times).
It is possible for a set of data values to have more than one mode.
If there are two data values that occur most frequently, we say that the set of data
values is bimodal.
If three data values occur most frequently, we say that the set of data values is trimodal.
If two or more data values occur with the same greatest frequency, we say that the set of
data values is multimodal.
If no data value occurs more frequently than the others, we say that
the set of data values has no mode.
The mean, median and mode of a data set are collectively known as measures of
central tendency as these three measures focus on where the data is centered or
clustered. To analyze data using the mean, median and mode, we need to use the most
appropriate measure of central tendency. The following points should be remembered:
The mean is useful for predicting future results when there are no extreme
values in the data set. However, the impact of extreme values on the mean may
be important and should be considered. E.g. the impact of a stock market crash
on average investment returns.
The median may be more useful than the mean when there are extreme
values in the data set as it is not affected by the extreme values.
The mode is useful when the most common item, characteristic or value of a
data set is required.
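The mean, median and mode can be computed with a few lines of standard-library Python; the numbers below reuse the marks from Example 1 and the mode example above.

from statistics import mean, median
from collections import Counter

marks = [15, 13, 18, 16, 14, 17, 12]
print(mean(marks), median(marks))      # 15 15

values = [48, 44, 48, 45, 42, 49, 48]
counts = Counter(values)
top = max(counts.values())
modes = [v for v, c in counts.items() if c == top]   # handles multimodal sets too
print(modes)                           # [48]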
Measures of Dispersion
Measures of dispersion measure how spread out a set of data is. The two most
commonly used measures of dispersion are the variance and the standard deviation.
Rather than showing how data are similar, they show how data differ, describing the
variation, spread, or dispersion of the data.
Other measures of dispersion that may be encountered include the quartiles, the interquartile
range (IQR), the five-number summary, the range, and box plots.
Variance and Standard Deviation
Very different sets of numbers can have the same mean. You will now study two
measures of dispersion, which give you an idea of how much the numbers in a set differ
from the mean of the set. These two measures are called the variance of the set and the
standard deviation of the set.
Percentile
Percentiles are values that divide a sample of data into one hundred groups containing (as
far as possible) equal numbers of observations.
The pth percentile of a distribution is the value such that p percent of the observations fall
at or below it.
The most commonly used percentiles other than the median are the 25th percentile and
the 75th percentile.
The 25th percentile demarcates the first quartile, the median or 50th percentile
demarcates the second quartile, the 75th percentile demarcates the third quartile, and the
100th percentile demarcates the fourth quartile.
Quartiles
Quartiles are numbers that divide an ordered data set into four portions, each containing
approximately one-fourth of the data. Twenty-five percent of the data values come
before the first quartile (Q1). The median is the second quartile (Q2); 50% of the data
values come before the median, and 75% of the data values come before the third quartile (Q3).
Q1 = 25th percentile = the (n*25/100)th value, where n is the total number of data values in the given data set
Q2 = median = 50th percentile = the (n*50/100)th value
Q3 = 75th percentile = the (n*75/100)th value
The interquartile range is the length of the interval between the lower quartile (Q1) and
the upper quartile (Q3). This interval contains the central, or middle, 50% of a data set.
IQR = Q3 - Q1
Range
The range of a set of data is the difference between its largest (maximum) and
smallest (minimum) values. In the statistical world, the range is reported as a single
number, the difference between maximum and minimum. Sometimes, the range is
reported as "from (the minimum) to (the maximum)", i.e., as two numbers.
Example 1:
For a data set whose smallest value is 3 and largest value is 8, the range is 3 to 8, i.e., 5.
The range gives only minimal information about the spread of the data, by defining the two
extremes. It says nothing about how the data are distributed between those two endpoints.
Example 2:
In this example we demonstrate how to find the minimum value, maximum value,
and range of the following data: 29, 31, 24, 29, 30, 25. The minimum value is 24, the
maximum value is 31, and the range is 31 - 24 = 7.
Five-Number Summary
The Five-Number Summary of a data set is a five-item list comprising the minimum
value, first quartile, median, third quartile, and maximum value of the set.
Box plots
A box plot is a graph used to represent the range, median, quartiles and inter quartile range
of a set of data values.
(i) Draw a box to represent the middle 50% of the observations of the data set.
(ii) Show the median by drawing a vertical line within the box.
(iii) Draw the lines (called whiskers) from the lower and upper ends of the box to the
minimum and maximum values of the data set respectively, as shown in the following
diagram.
Example: Construct a box plot for the following data set:
76 79 76 74 75 71 85 82 82 79 81
Step 1: Arrange the data values in order from the lowest value to the highest value:
71 74 75 76 76 79 79 81 82 82 85
Step 2: Q1 = the (11 * 25/100)th, i.e. 3rd, value = 75
Step 3: Q2 = median = the 6th value = 79
Step 4: Q3 = the (11 * 75/100)th, i.e. 9th, value = 82
Step 5: Min = 71, Max = 85
Outliers
Outlier data is data that falls outside the expected range. Outliers will be any points below Q1
- 1.5×IQR or above Q3 + 1.5×IQR.
Example:
10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6, 14.7, 14.7, 14.7, 14.9, 15.1, 15.9, 16.4
To find out if there are any outliers, I first have to find the IQR. There are fifteen data
points, so the median will be at position (15 + 1)/2 = 8, i.e., the 8th value, 14.6. That is, Q2 = 14.6.
Q1 is the fourth value in the list and Q3 is the twelfth: Q1 = 14.4 and Q3 = 14.9, so IQR = 14.9 - 14.4 = 0.5.
The values Q1 - 1.5×IQR = 14.4 - 0.75 = 13.65 and Q3 + 1.5×IQR = 14.9 + 0.75 = 15.65 are the
"fences" that mark off the "reasonable" values from the outlier values. Outliers lie outside the
fences; here 10.2, 15.9 and 16.4 lie outside the fences and are therefore outliers.
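The computation above can be reproduced with a short function that follows the same position convention used in these notes (the k-th value with k = ceil(n*p/100), 1-indexed); this is a sketch, not a general-purpose percentile routine.

import math

def percentile(sorted_data, p):
    # p-th percentile as the ceil(n*p/100)-th value, 1-indexed.
    k = math.ceil(len(sorted_data) * p / 100)
    return sorted_data[k - 1]

data = sorted([10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6,
               14.7, 14.7, 14.7, 14.9, 15.1, 15.9, 16.4])

q1, q2, q3 = percentile(data, 25), percentile(data, 50), percentile(data, 75)
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_fence or x > upper_fence]

print(q1, q2, q3, round(iqr, 2))                     # 14.4 14.6 14.9 0.5
print(round(lower_fence, 2), round(upper_fence, 2))  # 13.65 15.65
print(outliers)                                      # [10.2, 15.9, 16.4]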
1 Histogram
The histogram is only appropriate for variables whose values are numerical and measured
on an interval scale. It is generally used when dealing with large data sets
(>100 observations). A histogram can also help detect any unusual observations (outliers),
or any gaps in the data set.
A histogram can also help detect any unusual observations (outliers), or any gaps in the
data set.
2 Scatter Plot
A scatter plot is a useful summary of a set of bivariate data (two variables), usually
drawn before working out a linear correlation coefficient or fitting a regression line. It
gives a good visual picture of the relationship between the two variables, and aids the
interpretation of the correlation coefficient or regression model.
Each unit contributes one point to the scatter plot, on which points are plotted but not
joined. The resulting pattern indicates the type and strength of the relationship between
the two variables.
A scatter plot will also show up a non-linear relationship between the two variables and
whether or not there exist any outliers in the data.
3 Loess curve
It is another important exploratory graphic aid that adds a smooth curve to a scatter plot in
order to provide better perception of the pattern of dependence. The word loess is short
for "local regression".
4 Box plot
The picture produced consists of the most extreme values in the data set (maximum and
minimum values), the lower and upper quartiles, and the median.
5 Quantile plot
□ Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)
□ Plots quantile information
□ For data xi sorted in increasing order, fi indicates that approximately 100*fi% of the data are below or equal to the value xi
This kind of comparison is much more detailed than a simple comparison of means
or medians.
A normal distribution is often a reasonable model for the data. Without inspecting the
data, however, it is risky to assume a normal distribution. There are a number of graphs
that can be used to check the deviations of the data from the normal distribution. The
most useful tool for assessing normality is a quantile-quantile (Q-Q) plot. This is a scatter plot
with the quantiles of the scores on the horizontal axis and the expected normal
scores on the vertical axis.
In other words, it is a graph that shows the quantiles of one univariate distribution against
the corresponding quantiles of another. It is a powerful visualization tool in that it allows
the user to view whether there is a shift in going from one distribution to another.
First, we sort the data from smallest to largest. A plot of these scores against the
expected normal scores should reveal a straight line.
Curvature of the points indicates departures from normality. This plot is also useful
for detecting outliers; the outliers appear as points that are far away from the overall
pattern of points.
A quantile plot is a graphical method used to display quantile information for all the data:
each value xi measured for the independent variable is plotted against its corresponding
quantile fi, which indicates the approximate percentage of values below or equal to xi.
Data Cleaning
Data cleaning routines attempt to fill in missing values, smooth out noise while
identifying outliers, and correct inconsistencies in the data.
Missing Values
The various methods for handling the problem of missing values in data tuples include:
(a) Ignoring the tuple: This is usually done when the class label is missing (assuming
the mining task involves classification or description). This method is not very effective
unless the tuple contains several attributes with missing values. It is especially poor when
the percentage of missing values per attribute varies considerably.
(b) Manually filling in the missing value: In general, this approach is time-
consuming and may not be a reasonable task for large data sets with many missing
values, especially when the value to be filled in is not easily determined.
(c) Using a global constant to fill in the missing value: Replace all missing
attribute values by the same constant, such as a label like "Unknown" or -∞. If missing
values are replaced by, say, "Unknown", then the mining program may mistakenly
think that they form an interesting concept, since they all have a value in common,
that of "Unknown". Hence, although this method is simple, it is not recommended.
(d) Using the attribute mean for quantitative (numeric) values or attribute mode for
categorical (nominal) values, for all samples belonging to the same class as the given
tuple: For example, if classifying customers according to credit risk, replace the
missing value with the average income value for customers in the same credit risk
category as that of the given tuple.
(e) Using the most probable value to fill in the missing value: This may be determined
with regression, inference-based tools using Bayesian formalism, or decision
tree induction. For example, using the other customer attributes in your data set, you
may construct a decision tree to predict the missing values for income.
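Strategies (c) and (d) are straightforward with pandas; the column names and the credit-risk grouping below are assumptions made for the sketch only.

import pandas as pd

df = pd.DataFrame({
    "risk":   ["low", "low", "high", "high", "low"],
    "income": [52000, None, 21000, None, 48000],
    "phone":  ["555-0101", None, "555-0199", "555-0142", None],
})

# (c) Fill a categorical attribute with a global constant.
df["phone"] = df["phone"].fillna("Unknown")

# (d) Fill a numeric attribute with the mean income of tuples in the same risk class.
df["income"] = df["income"].fillna(df.groupby("risk")["income"].transform("mean"))

print(df)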
Noisy data:
Noise is a random error or variance in a measured variable. Data smoothing techniques are used
to remove such noise.
1 Binning: First sort the data and partition it into (equi-depth) bins; then the values in each bin are smoothed. For example, for a bin containing the values 4, 8 and 15:
o Smoothing by bin means: Bin 1: 9, 9, 9
o Smoothing by bin boundaries: Bin 1: 4, 4, 15
In smoothing by bin means, each value in a bin is replaced by the mean value of the bin.
For example, the mean of the values 4, 8, and 15 in Bin 1 is 9. Therefore, each original
value in this bin is replaced by the value 9. Similarly, smoothing by bin medians can be
employed, in which each bin value is replaced by the bin median. In smoothing by bin
boundaries, the minimum and maximum values in a given bin are identified as the bin
boundaries, and each bin value is then replaced by the closest boundary value.
The following steps are required to smooth the given data using smoothing by bin means
with a bin depth of 3.
• Step 1: Sort the data. (This step is not required here as the data are already sorted.)
• Step 2: Partition the data into equi-depth bins of depth 3.
• Step 3: Calculate the arithmetic mean of each bin.
• Step 4: Replace each of the values in each bin by the arithmetic mean calculated for the bin.
Bin 1: 14, 14, 14   Bin 2: 18, 18, 18   Bin 3: 21, 21, 21
Bin 4: 24, 24, 24   Bin 5: 26, 26, 26   Bin 6: 33, 33, 33
Bin 7: 35, 35, 35   Bin 8: 40, 40, 40   Bin 9: 56, 56, 56
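The same smoothing can be written as a short function: partition the sorted values into equi-depth bins and replace each value by its bin mean. The input list below is illustrative only (its first bin matches the 4, 8, 15 example above); the original data behind the nine-bin result is not reproduced here.

def smooth_by_bin_means(sorted_values, depth):
    # Equi-depth binning followed by smoothing with the (rounded) bin mean.
    smoothed = []
    for start in range(0, len(sorted_values), depth):
        bin_values = sorted_values[start:start + depth]
        bin_mean = round(sum(bin_values) / len(bin_values))
        smoothed.extend([bin_mean] * len(bin_values))
    return smoothed

data = [4, 8, 15, 21, 21, 24, 25, 28, 34]   # already sorted, illustrative values
print(smooth_by_bin_means(data, depth=3))
# [9, 9, 9, 22, 22, 22, 29, 29, 29]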
2 Clustering: Outliers in the data may be detected by clustering, where similar values
are organized into groups, or "clusters". Values that fall outside of the set of clusters
may be considered outliers.
□ Regression: Linear regression involves finding the best line to fit two variables, so
that one variable can be used to predict the other.
Using regression to find a mathematical equation to fit the data helps smooth out the noise.
Unique rule: a rule that says each value of the given attribute must be different from
all other values for that attribute.
Consecutive rule: a rule that says there can be no missing values between the lowest and
highest values for the attribute, and that all values must also be unique.
Null rule: specifies the use of blanks, question marks, special characters or other strings
that may indicate the null condition, and how such values should be handled.
Issues:
Some redundancy can be identified by correlation analysis. The correlation between two
numerical attributes A and B can be measured by the correlation coefficient
r(A,B) = Σ (a_i - mean_A)(b_i - mean_B) / (n * σ_A * σ_B)
where n is the number of tuples, mean_A and mean_B are the respective means of A and B,
and σ_A and σ_B are the respective standard deviations of A and B.
□ If the result of the equation is > 0, then A and B are positively correlated, which
means the values of A increase as the values of B increase. The higher the value,
the more likely the attributes are redundant, so one of them may be removed.
□ If the result of the equation is = 0, then A and B are independent and there is
no correlation between them.
□ If the resulting value is < 0, then A and B are negatively correlated: the
values of one attribute increase as the values of the other attribute decrease, which means
each attribute discourages the other.
This coefficient is also called Pearson's product-moment coefficient.
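The coefficient can be computed directly from the definition above; the two short attribute lists are invented for the sketch.

import math

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    # Population standard deviations, to match the n in the denominator.
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a) / n)
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b) / n)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
    return cov / (sd_a * sd_b)

# Two attributes that rise together: strongly positive, hence likely redundant.
years_employed = [1, 3, 5, 7, 9]
salary_k       = [30, 41, 52, 60, 75]
print(round(pearson(years_employed, salary_k), 3))   # 0.996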
Normalization
Normalization scales the data to fall within a small, specified range. It is useful for
classification algorithms involving neural networks and for distance measurements such as
nearest-neighbor classification and clustering. There are three methods for data normalization:
Min-max normalization performs a linear transformation on the original data values. It can
be defined as
v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A
where min_A and max_A are the minimum and maximum values of attribute A, and
[new_min_A, new_max_A] is the new range.
Z-score (zero-mean) normalization: the values of attribute A are normalized based on the
mean and standard deviation of A:
v' = (v - mean_A) / stand_dev_A
This method is useful when the actual minimum and maximum values of attribute A are unknown,
or when there are outliers that dominate min-max normalization.
Normalization by decimal scaling: normalizes by moving the decimal point of the values of A:
v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1.
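All three normalizations reduce to one-liners once min, max, mean and standard deviation are known; the income values below are invented for the sketch.

from statistics import mean, pstdev

income = [12000, 35000, 54000, 73600, 98000]
min_a, max_a = min(income), max(income)
mean_a, sd_a = mean(income), pstdev(income)

# Min-max normalization to the new range [0.0, 1.0].
minmax = [(v - min_a) / (max_a - min_a) * (1.0 - 0.0) + 0.0 for v in income]

# Z-score normalization.
zscore = [(v - mean_a) / sd_a for v in income]

# Decimal scaling: divide by 10^j so that all absolute values fall below 1.
j = len(str(int(max(abs(v) for v in income))))
decimal = [v / 10 ** j for v in income]

print([round(x, 3) for x in minmax])
print([round(x, 3) for x in zscore])
print(decimal)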
These techniques can be applied to obtain a reduced representation of the data set that is
much smaller in volume, yet closely maintains the integrity of the original data. Data
reduction includes:
o Data cube aggregation
o Dimensionality reduction (attribute subset selection)
o Data compression
o Numerosity reduction
o Discretization and concept hierarchy generation, which is a form of numerosity reduction
that is very useful for the automatic generation of concept hierarchies.
Data cube aggregation: Reduce the data to the concept level needed in the
analysis. Queries regarding aggregated information should be answered using data
cube when possible. Data cubes store multidimensional aggregated information. The
following figure shows a data cube for multidimensional analysis of sales data with
respect to annual sales per item type for each branch.
Each cell holds an aggregate data value, corresponding to a data point in
multidimensional space.
Data cubes provide fast access to pre computed, summarized data, thereby benefiting
on-line analytical processing as well as data mining.
The cube created at the lowest level of abstraction is referred to as the base cuboid.
A cube for the highest level of abstraction is the apex cuboid. The lowest level of a data
cube (base cuboid). Data cubes created for varying levels of abstraction are sometimes
referred to as cuboids, so that a "data cube" may instead refer to a lattice of cuboids. Each
higher level of abstraction further reduces the resulting data size.
Suppose the analyst is interested in annual sales rather than sales per quarter. The above
data can be aggregated so that the resulting data summarize the total sales per year instead
of per quarter. The resulting data set is smaller in volume, without loss of the information
necessary for the analysis task.
Dimensionality Reduction
It reduces the data set size by removing irrelevant attributes. For this, methods of attribute
subset selection are applied. A heuristic method of attribute subset selection is
explained here:
Feature selection is a must for any data mining product. That is because, when you build
a data mining model, the dataset frequently contains more information than is needed to
build the model. For example, a dataset may contain 500 columns that describe
characteristics of customers, but perhaps only 50 of those columns are used to build a
particular model. If you keep the unneeded columns while building the model, more CPU
and memory are required during the training process, and more storage space is required
for the completed model.
The goal is to select a minimum set of features such that the probability distribution of the
different classes, given the values of those features, is as close as possible to the original
distribution given the values of all features.
1. Step-wise forward selection: The procedure starts with an empty set of attributes. At each
step, the best of the remaining original attributes is added to the set.
2. Step-wise backward elimination: The procedure starts with the full set of attributes.
At each step, it removes the worst attribute remaining in the set.
If the mining algorithm itself is used to determine the attribute subset, the method is called a
wrapper approach; otherwise it is a filter approach. The wrapper approach generally leads to
greater accuracy, since it evaluates attribute subsets using the same mining algorithm that will
ultimately be applied, although it is computationally more expensive.
Data compression
□ Wavelet transforms
□ Principal components analysis.
1. The length, L, of the input data vector must be an integer power of two. This
condition can be met by padding the data vector with zeros, as necessary.
2. Each transform involves applying two functions: the first applies data smoothing (such as a
sum or weighted average), and the second performs a weighted difference, which brings out
the detailed features of the data.
3. The two functions are applied to pairs of the input data, resulting in two sets of data
of length L/2.
4. The two functions are recursively applied to the sets of data obtained in the previous
loop, until the resulting data sets obtained are of desired length.
5. A selection of values from the data sets obtained in the above iterations are designated
the wavelet coefficients of the transformed data.
Wavelet coefficients larger than some user-specified threshold are retained; the remaining
coefficients are set to 0.
The principal components (new set of axes) give important information about variance.
Using the strongest components one can reconstruct a good approximation of the original
signal.
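A bare-bones PCA can be sketched with NumPy by centring the data and taking a singular value decomposition; the 2-D toy data set is invented, and a real application would use far more attributes.

import numpy as np

# Toy data: 6 tuples described by 2 correlated attributes.
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])

X_centered = X - X.mean(axis=0)              # centre each attribute on zero
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Rows of Vt are the principal components (the new orthogonal axes).
# Keep only the strongest component and project the data onto it.
reduced = X_centered @ Vt[:1].T              # shape (6, 1): the reduced representation
approx = reduced @ Vt[:1] + X.mean(axis=0)   # reconstruction from that single component

print(S ** 2)                 # squared singular values: relative strength of each component
print(np.round(approx, 2))    # close to the original X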
Numerosity Reduction
Data volume can be reduced by choosing alternative, smaller forms of data representation.
These techniques may be:
□ Parametric methods
□ Non-parametric methods
Parametric: Assume the data fits some model, then estimate model parameters, and store
only the parameters, instead of actual data.
Non-parametric: Histograms, clustering and sampling are used to store a reduced form of the data.
2 Histogram
□ Divide the data into buckets and store the average (sum) for each bucket.
□ A bucket represents an attribute-value/frequency pair.
□ Histograms can be constructed optimally in one dimension using dynamic programming.
□ A histogram divides up the range of possible values in a data set into classes or groups. For each group, a rectangle (bucket) is constructed with a base length equal to the range of values in that specific group, and an area proportional to the number of observations falling into that group.
□ The buckets are displayed on a horizontal axis, while the height of a bucket represents the average frequency of the values.
Example:
The following data are a list of prices of commonly sold items. The numbers have
been sorted.
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18,
The buckets can be determined based on the following partitioning rules:
1. Equi-width: a histogram with buckets of the same width (value range).
2. Equi-depth: a histogram with buckets of roughly the same frequency, so the bars have the same height.
3. V-Optimal: the histogram with the least variance (weighted by count_b * value_b per bucket).
4. MaxDiff: bucket boundaries are placed between pairs of adjacent values having the largest differences.
V-Optimal and MaxDiff histograms tend to be the most accurate and practical.
Histograms are highly effective at approximating both sparse and dense data, as well as
highly skewed, and uniform data.
Clustering techniques consider data tuples as objects. They partition the objects into
groups, or clusters, so that objects within a cluster are "similar" to one another
and "dissimilar" to objects in other clusters. Similarity is commonly defined in terms
of how "close" the objects are in space, based on a distance function.
The quality of clusters may be measured by their diameter (the maximum distance between any
two objects in the cluster) or by centroid distance (the average distance of each cluster object
from its centroid).
Sampling allows a large data set to be represented by a much smaller random sample (or subset)
of the data. Suppose that a large data set, D, contains N tuples. Let's look at some possible
ways of sampling D.
Advantages of sampling
1. An advantage of sampling for data reduction is that the cost of obtaining a sample
is proportional to the size of the sample, n, as opposed to N, the size of the data set.
Hence, sampling complexity is potentially sub-linear in the size of the data.
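The sketch below illustrates two common ways of drawing such a sample, simple random sampling without and with replacement; the toy data set and the function names are assumptions.

# Sketch: simple random sample without replacement (SRSWOR) and with replacement (SRSWR).
import random

def srswor(D, n):
    return random.sample(D, n)                    # n distinct tuples, no replacement

def srswr(D, n):
    return [random.choice(D) for _ in range(n)]   # tuples may repeat

D = list(range(1, 101))                           # a toy data set with N = 100 tuples
print(srswor(D, 10))
print(srswr(D, 10))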
Discretization:
Discretization techniques can be used to reduce the number of values for a given
continuous attribute, by dividing the range of the attribute into intervals. Interval labels
can then be used to replace actual data values.
Concept Hierarchy
Example:
Suppose that the data within the 5th and 95th percentiles lie between -$159,876 and
$1,838,761. The results of applying the 3-4-5 rule are shown in the following figure.
Step 1: Based on the above information, the minimum and maximum values are MIN = -$351,976.00
and MAX = $4,700,896.50. The low (5th percentile) and high (95th percentile) values to be
considered for the top or first level of segmentation are LOW = -$159,876 and HIGH = $1,838,761.
Step 2: Given LOW and HIGH, the most significant digit is at the million-dollar position
(i.e., msd = 1,000,000). Rounding LOW down to the million-dollar digit, we get
LOW′ = -$1,000,000; rounding HIGH up to the million-dollar digit, we get HIGH′ = +$2,000,000.
Step 3: Since this interval ranges over 3 distinct values at the most significant digit, i.e.,
($2,000,000 - (-$1,000,000)) / $1,000,000 = 3, the segment is partitioned into 3 equi-width
sub-segments according to the 3-4-5 rule: (-$1,000,000 - $0], ($0 - $1,000,000], and
($1,000,000 - $2,000,000]. This represents the top tier of the hierarchy.
Step 4: We now examine the MIN and MAX values to see how they "fit" into the first-level
partitions. Since the first interval, (-$1,000,000 - $0], covers the MIN value, i.e.,
LOW′ < MIN, we can adjust the left boundary of this interval to make the interval smaller.
The most significant digit of MIN is the hundred-thousand-dollar position. Rounding MIN down
to this position, we get MIN′ = -$400,000.
Therefore, the first interval is redefined as (-$400,000 - $0]. Since the last
interval, ($1,000,000 - $2,000,000], does not cover the MAX value, i.e., MAX > HIGH′, we
need to create a new interval to cover it. Rounding MAX up at its most significant digit
position, the new interval is ($2,000,000 - $5,000,000]. Hence, the topmost level of the
hierarchy contains four partitions: (-$400,000 - $0], ($0 - $1,000,000], ($1,000,000
- $2,000,000], and ($2,000,000 - $5,000,000].
Step 5: Recursively, each interval can be further partitioned according to the 3-4-5 rule to
form the next lower level of the hierarchy:
- The first interval (-$400,000 - $0] is partitioned into 4 sub-intervals: (-$400,000 -
-$300,000], (-$300,000 - -$200,000], (-$200,000 - -$100,000], and (-$100,000 -
$0].
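A hedged Python sketch that reproduces Steps 2 and 3 above (rounding at the most significant digit and choosing a 3-, 4- or 5-way split); the variable names are assumptions, and only the top level of the hierarchy is computed.

# Sketch: top-level segmentation of [LOW, HIGH] by the 3-4-5 rule.
import math

LOW, HIGH = -159_876, 1_838_761

msd = 10 ** int(math.floor(math.log10(max(abs(LOW), abs(HIGH)))))   # 1,000,000
low_r  = math.floor(LOW / msd) * msd      # round LOW down  -> -1,000,000
high_r = math.ceil(HIGH / msd) * msd      # round HIGH up   -> +2,000,000
distinct = int((high_r - low_r) / msd)    # 3 distinct values at the msd position

if distinct in (3, 6, 7, 9):              # the 3-4-5 rule's choice of partition count
    parts = 3
elif distinct in (2, 4, 8):
    parts = 4
else:                                     # 1, 5, 10
    parts = 5

width = (high_r - low_r) // parts
intervals = [(low_r + i * width, low_r + (i + 1) * width) for i in range(parts)]
print(msd, low_r, high_r, distinct, intervals)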
Rule support and confidence are two measures of rule interestingness. They respectively
reflect the usefulness and certainty of discovered rules. A support of 2% for an association rule
means that 2% of all the transactions under analysis show that computer and financial
management software are purchased together. A confidence of 60% means that 60% of the
customers who purchased a computer also bought the financial management software.
• Given: (1) database of transactions, (2) each transaction is a list of items (purchased by a
customer in a visit)
• Find: all rules that correlate the presence of one set of items with that of another set of
items
– E.g., 98% of people who purchase tires and auto accessories also get automotive
services done
• Applications
– * ⇒ Maintenance Agreement (What should the store do to boost Maintenance
Agreement sales?)
– Home Electronics ⇒ * (What other products should the store stock up on?)
– Attached mailing in direct marketing
– Detecting "ping-pong"ing of patients, faulty "collisions"
• Find all rules X ∧ Y ⇒ Z with minimum support and confidence
– support, s: the probability that a transaction contains {X, Y, Z}
– confidence, c: the conditional probability that a transaction containing {X, Y} also
contains Z
With minimum support 50% and minimum confidence 50%, we have
– A ⇒ C (50%, 66.6%)
– C ⇒ A (50%, 100%)
Transaction ID   Items Bought
2000             A, B, C
1000             A, C
4000             A, D
5000             B, E, F
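The figures quoted above can be checked with a small Python sketch against this transaction table; the function names support and confidence are assumptions.

# Sketch: verifying the A => C and C => A support/confidence figures.
transactions = {
    2000: {"A", "B", "C"},
    1000: {"A", "C"},
    4000: {"A", "D"},
    5000: {"B", "E", "F"},
}

def support(itemset):
    return sum(itemset <= t for t in transactions.values()) / len(transactions)

def confidence(lhs, rhs):
    return support(lhs | rhs) / support(lhs)

print(support({"A", "C"}))            # 0.5   -> 50% support for A => C and C => A
print(confidence({"A"}, {"C"}))       # 0.666 -> 66.6% confidence for A => C
print(confidence({"C"}, {"A"}))       # 1.0   -> 100% confidence for C => A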
Association Rule Mining: A Road Map
Apriori: the method that mines the complete set of frequent itemsets with candidate generation.
Apriori property: all non-empty subsets of a frequent itemset must also be frequent.
FP-growth: the method that mines the complete set of frequent itemsets without candidate
generation.
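A minimal Python sketch of the Apriori idea (candidate generation followed by support-based pruning using the Apriori property); it is a teaching aid under simplifying assumptions, not the full algorithm with hash trees or transaction reduction.

# Sketch: Apriori-style frequent itemset mining.
from itertools import combinations

def apriori(transactions, min_support):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    Lk = {frozenset([i]) for i in items
          if sum(i in t for t in transactions) / n >= min_support}
    frequent = set(Lk)
    k = 2
    while Lk:
        # join step: candidate k-itemsets from frequent (k-1)-itemsets
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
        # prune step (Apriori property): every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k - 1))}
        Lk = {c for c in candidates
              if sum(c <= t for t in transactions) / n >= min_support}
        frequent |= Lk
        k += 1
    return frequent

T = [{"A", "B", "C"}, {"A", "C"}, {"A", "D"}, {"B", "E", "F"}]
print(apriori(T, min_support=0.5))    # {A}, {B}, {C}, {A, C}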
Header Table
• Completeness:
– never breaks a long pattern of any transaction
– preserves complete information for frequent pattern mining
• Compactness
– reduce irrelevant information—infrequent items are gone
– frequency descending ordering: more frequent items are more likely to be shared
– never larger than the original database (not counting node-links and counts)
– Example: For Connect-4 DB, compression ratio could be over 100
Multilevel association rule: Multilevel association rules can be defined as applying association
rules over different levels of data abstraction
Ex:
Step 3: find the items at the lower level (expected to have lower support)
Reduced Support:
Multi-dimensional association rule: a rule whose statement contains two or more
predicates/dimensions.
**A multi-dimensional association rule is also called an inter-dimensional association rule.
We can perform the following kinds of association rules on relational databases and data
warehouses:
1) Boolean dimensional association rule: compares existing predicates/dimensions with
non-existing predicates/dimensions (i.e., presence versus absence).
2) Single-dimensional association rule: a rule whose statement contains only a single
predicate/dimension.
**A single-dimensional association rule is also called an intra-dimensional association rule.
** Multi-dimensional association rules can be applied to different types of attributes; here
the attributes are
1. Categorical attributes
2. Quantitative attributes
In a relational database we use concept hierarchies, i.e. generalization, in order to find the
frequent itemsets.
Generalization: replacing low-level attributes with high-level attributes.
Note 2: a data warehouse can be viewed in the form of a multidimensional data model
(using data cubes) in order to find frequent patterns.
*Correlation analysis means that one frequent item is dependent on another frequent item.
1) support; and
2) confidence
For each frequent itemset we consider the two measures above to perform mining and
correlation analysis.
Note: Association mining: Association mining can be defined as finding frequent patterns,
associations, correlations, or causal structures among sets of items or objects in transaction
databases, relational databases, and other information repositories
3. Dimension/level constraints:
in relevance to region, price, brand, customer category.
4. Rule constraints:
On the form of the rules to be mined (e.g., number of predicates, etc.), e.g.,
small sales (price < $10) triggers big sales (sum > $200).
UNIT-IV
Classification and Prediction
What is classification? What is prediction?
Classification:
*Used for prediction (future analysis) to determine unknown attribute values, by using
classifier algorithms and decision trees (in data mining).
*Constructs models (such as decision trees) which then classify the attributes.
*We already know that attributes can be 1. categorical or 2. numerical.
*Classification can work on both of the above-mentioned attribute types.
Prediction: prediction is also used to determine unknown or missing values.
1. It also uses models in order to predict attribute values.
2. Models such as neural networks, if-then rules and other mechanisms.
Issues (1): Data preparation. Issues of data preparation include the following:
1) Data cleaning
*Preprocess data in order to reduce noise and handle missing values (refer
preprocessing techniques i.e. data cleaning notes)
2) Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes (refer unit-iv AOI Relevance
analysis)
3) Data transformation: generalize and/or normalize the data (refer to preprocessing
techniques, i.e. data cleaning notes)
Decision tree
– A flow-chart-like tree structure
– Internal node denotes a test on an attribute
– Branch represents an outcome of the test
– Leaf nodes represent class labels or class distribution
Training Dataset
• Example (the sample training table and the resulting decision tree appear as figures in the
original notes)
• Validation:
– Using the training set as a test set gives an optimistic (best-case) estimate of
classification accuracy.
– Expected accuracy on a separate test set will generally be lower.
– 10-fold cross-validation is more robust than testing on the training set.
• Divide the data into 10 sets with about the same proportion of class label values
as in the original set.
• Run classification 10 times independently, each time testing on one of the 10 sets
and training on the remaining 9/10 of the data.
• Average the accuracy over the 10 runs.
– Ratio validation: 67% training set / 33% test set.
– Best: having a separate training set and test set.
• Results:
– Classification accuracy (correctly classified instances).
– Errors (absolute mean, root squared mean, …)
– Kappa statistic (measures agreement between predicted and observed classification,
with chance agreement excluded; it ranges from -100% to 100%, where 0% means the
agreement is no better than chance)
• Results:
– TP (True Positive) rate per class label
– FP (False Positive) rate
– Precision = TP / (TP + FP) * 100%
– Recall = TP rate = TP / (TP + FN) * 100%
– F-measure = 2 * recall * precision / (recall + precision)
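A small sketch computing the measures above from raw confusion-matrix counts; the counts passed in the example are made up.

# Sketch: per-class evaluation measures from TP, FP, FN, TN counts.
def metrics(tp, fp, fn, tn):
    tp_rate   = tp / (tp + fn)                      # also the recall (sensitivity)
    fp_rate   = fp / (fp + tn)
    precision = tp / (tp + fp)
    recall    = tp_rate
    f_measure = 2 * precision * recall / (precision + recall)
    return tp_rate, fp_rate, precision, recall, f_measure

print(metrics(tp=40, fp=10, fn=5, tn=45))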
• ID3 characteristics:
– Requires nominal values
– Improved into C4.5
• Dealing with numeric attributes
• Dealing with missing values
• Dealing with noisy data
• Generating rules from trees
• Methods:
Information Gain
Gain ratio
Gini Index
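As a hedged illustration of the first of these measures, the sketch below computes information gain for a single attribute by counting; the toy rows and labels are assumptions.

# Sketch: entropy and information gain for one candidate splitting attribute.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    n = len(labels)
    gain = entropy(labels)                          # entropy before the split
    for value in set(r[attr_index] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr_index] == value]
        gain -= (len(subset) / n) * entropy(subset) # weighted entropy after the split
    return gain

rows   = [("sunny",), ("sunny",), ("rain",), ("rain",)]
labels = ["no", "yes", "yes", "yes"]
print(information_gain(rows, labels, 0))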
Discarding one or more sub trees and replacing them with leaves simplify a decision tree, and
that is the main task in decision-tree pruning. In replacing the sub tree with a leaf, the algorithm
expects to lower the predicted error rate and increase the quality of a classification model. But
computation of error rate is not simple. An error rate based only on a training data set does not
provide a suitable estimate. One possibility to estimate the predicted error rate is to use a new,
additional set of test samples if they are available, or to use the cross-validation techniques. This
technique divides initially available samples into equal sized blocks and, for each block, the tree
is constructed from all samples except this block and tested with a given block of samples. With
the available training and testing samples, the basic idea of decision tree-pruning is to remove
parts of the tree (sub trees) that do not contribute to the classification accuracy of unseen testing
samples, producing a less complex and thus more comprehensible tree. There are two ways in
which the recursive-partitioning method can be modified:
1. Deciding not to divide a set of samples any further under some conditions. The stopping
criterion is usually based on some statistical tests, such as the χ2 test: If there are no
significant differences in classification accuracy before and after division, then represent
a current node as a leaf. The decision is made in advance, before splitting, and therefore
this approach is called pre pruning.
2. Removing retrospectively some of the tree structure using selected accuracy criteria. The
decision in this process of post pruning is made after the tree has been built.
C4.5 follows the post pruning approach, but it uses a specific technique to estimate the predicted
error rate. This method is called pessimistic pruning. For every node in a tree, the estimation of
the upper confidence limit ucf is computed using the statistical tables for binomial distribution
Let us illustrate this procedure with one simple example. A sub tree of a decision tree is
given in Figure, where the root node is the test x1 on three possible values {1, 2, 3} of the
attribute A. The children of the root node are leaves denoted with corresponding classes and
(∣Ti∣/E) parameters. The question is to estimate the possibility of pruning the sub tree and
replacing it with its root node as a new, generalized leaf node.
To analyze the possibility of replacing the sub tree with a leaf node it is necessary to
compute a predicted error PE for the initial tree and for a replaced node. Using default
confidence of 25%, the upper confidence limits for all nodes are collected from statistical tables:
U25%(6, 0) = 0.206, U25%(9, 0) = 0.143, U25%(1, 0) = 0.750, and U25%(16, 1) = 0.157. Using these
values, the predicted errors for the initial tree and the replaced node are
PE(tree) = 6 · 0.206 + 9 · 0.143 + 1 · 0.750 = 3.273, and
PE(node) = 16 · 0.157 = 2.512.
Since the existing sub tree has a higher value of predicted error than the replaced node, it
is recommended that the decision tree be pruned and the sub tree replaced with the new leaf
node.
Bayesian Classification:
• Probabilistic learning: Calculate explicit probabilities for hypothesis, among the most
practical approaches to certain types of learning problems
• Incremental: Each training example can incrementally increase/decrease the probability
that a hypothesis is correct. Prior knowledge can be combined with observed data.
• Probabilistic prediction: Predict multiple hypotheses, weighted by their probabilities
• Standard: Even when Bayesian methods are computationally intractable, they can provide
a standard of optimal decision making against which other methods can be measured
Bayesian Theorem
P(Cj | V) ∝ P(Cj) · Π i=1..n P(vi | Cj)
• Greatly reduces the computation cost: only the class distribution needs to be counted.
Bayesian classification
• Bayes theorem:
P(C|X) = P(X|C)· P(C) / P(X)
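A minimal naive Bayesian classifier along the lines of the formula above, estimating P(C) and P(vi | C) by counting; the tiny weather-style data set is made up, and smoothing is omitted for brevity.

# Sketch: naive Bayes by counting - pick the class maximizing P(C) * prod_i P(v_i | C).
from collections import Counter, defaultdict

def train(rows, labels):
    n = len(labels)
    prior = {c: k / n for c, k in Counter(labels).items()}
    cond = defaultdict(Counter)                 # cond[(attr_index, class)][value] = count
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, c)][v] += 1
    return prior, cond

def classify(prior, cond, row):
    def score(c):
        p = prior[c]
        for i, v in enumerate(row):
            counts = cond[(i, c)]
            p *= counts[v] / sum(counts.values())   # zero counts would need smoothing in practice
        return p
    return max(prior, key=score)

rows   = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "yes", "yes", "no"]
prior, cond = train(rows, labels)
print(classify(prior, cond, ("sunny", "mild")))     # -> "yes" on this toy data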
The CPT (conditional probability table) shows the conditional probability for each possible combination of values of its parents.
Association-Based Classification
Classification by Backpropagation
Backpropagation: A neural network learning algorithm
Started by psychologists and neurobiologists to develop and test computational analogues
of neurons
A neural network: A set of connected input/output units where each connection has a
weight associated with it
During the learning phase, the network learns by adjusting the weights so as to be able
to predict the correct class label of the input tuples
Also referred to as connectionist learning due to the connections between units
Neural Network as a Classifier
Weakness
Long training time
Require a number of parameters that are typically best determined empirically, e.g., the
network topology or "structure"
Poor interpretability: difficult to interpret the symbolic meaning behind the
learned weights and of the "hidden units" in the network
Strength
High tolerance to noisy data
Ability to classify untrained patterns
Well-suited for continuous-valued inputs and outputs
Successful on a wide array of real-world data
Algorithms are inherently parallel
Techniques have recently been developed for the extraction of rules from trained
neural networks
A Neuron (= a perceptron)
The inputs to the network correspond to the attributes measured for each training tuple
Inputs are fed simultaneously into the units making up the input layer
They are then weighted and fed simultaneously to a hidden layer
The number of hidden layers is arbitrary, although usually only one
The weighted outputs of the last hidden layer are input to units making up the output
layer, which emits the network's prediction
The network is feed-forward in that none of the weights cycles back to an input unit or to
an output unit of a previous layer
From a statistical point of view, networks perform nonlinear regression: Given enough
hidden units and enough training samples, they can closely approximate any function
Backpropagation
Iteratively process a set of training tuples & compare the network's prediction with the
actual known target value
For each training tuple, the weights are modified to minimize the mean squared error
between the network's prediction and the actual target value
Modifications are made in the "backwards" direction: from the output layer, through each
hidden layer down to the first hidden layer, hence "backpropagation"
Steps
Initialize weights (to small random #s) and biases in the network
Propagate the inputs forward (by applying activation function)
Back propagate the error (by updating weights and biases)
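A hedged sketch of a single backpropagation step for a tiny 2-2-1 sigmoid network, following the initialize / propagate forward / backpropagate-error outline above; the initial weights, the learning rate and the training tuple are made up.

# Sketch: one forward + backward pass for a 2-input, 2-hidden-unit, 1-output network.
import math, random

random.seed(0)
learning_rate = 0.5
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]  # weights into 2 hidden units
b_hidden = [0.0, 0.0]
w_out    = [random.uniform(-0.5, 0.5) for _ in range(2)]
b_out    = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(x, target):
    global b_out
    # forward pass: hidden activations, then the output
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w_hidden, b_hidden)]
    o = sigmoid(sum(w * hi for w, hi in zip(w_out, h)) + b_out)
    # backward pass: output error, then hidden errors (using the pre-update output weights)
    err_o = o * (1 - o) * (target - o)
    err_h = [hi * (1 - hi) * err_o * w for hi, w in zip(h, w_out)]
    # weight and bias updates
    for j, hj in enumerate(h):
        w_out[j] += learning_rate * err_o * hj
    b_out += learning_rate * err_o
    for j in range(2):
        for i in range(2):
            w_hidden[j][i] += learning_rate * err_h[j] * x[i]
        b_hidden[j] += learning_rate * err_h[j]
    return o

print(step([1.0, 0.0], target=1.0))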
SVM—Linearly Separable
A separating hyper plane can be written as
W · X + b = 0
where W={w1, w2, …, wn} is a weight vector and b a scalar (bias)
For 2-D it can be written as
w0 + w1 x1 + w2 x2 = 0
The hyper plane defining the sides of the margin:
H1: w0 + w1 x1 + w2 x2 ≥ 1 for yi = +1, and
H2: w0 + w1 x1 + w2 x2 ≤ – 1 for yi = –1
Any training tuples that fall on hyper planes H1 or H2 (i.e., the
sides defining the margin) are support vectors
This becomes a constrained (convex) quadratic optimization problem: a quadratic
objective function with linear constraints, solved using Quadratic Programming (QP)
and Lagrangian multipliers.
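As a small illustration (not a QP solver), the sketch below checks on which side of a given 2-D hyperplane a tuple falls relative to the margin hyperplanes H1 and H2; the weight values are made up.

# Sketch: classify 2-D points against a fixed hyperplane w0 + w1*x1 + w2*x2 = 0.
w0, w1, w2 = -3.0, 1.0, 1.0

def side(x1, x2):
    value = w0 + w1 * x1 + w2 * x2
    if value >= 1:
        return +1          # on or beyond H1 (class yi = +1)
    if value <= -1:
        return -1          # on or beyond H2 (class yi = -1)
    return 0               # falls inside the margin

print(side(3, 3), side(1, 1), side(2, 1.5))   # +1, -1, 0 (inside the margin)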
Associative Classification
Associative classification
Association rules are generated and analyzed for use in classification
Search for strong associations between frequent patterns (conjunctions of
attribute-value pairs) and class labels
Classification: Based on evaluating a set of rules in the form of
p1 ∧ p2 ∧ ... ∧ pl ⇒ "Aclass = C" (conf, sup)
Why effective?
It explores highly confident associations among multiple attributes and may
overcome some constraints introduced by decision-tree induction, which
considers only one attribute at a time
In many studies, associative classification has been found to be more accurate than some
traditional classification methods, such as C4.5.
Associative Classification May Achieve High Accuracy and Efficiency (Cong et al.
SIGMOD05)
Figure: A rough set approximation of the set of tuples of the class C using lower and upper
approximation sets of C. The rectangular regions represent equivalence classes.
Linear Regression
Linear regression: involves a response variable y and a single predictor variable x
y = w0 + w1 x
where w0 (y-intercept) and w1 (slope) are regression coefficients
Method of least squares: estimates the best-fitting straight line
Multiple linear regression: involves more than one predictor variable
Training data is of the form (X1, y1), (X2, y2),…, (X|D|, y|D|)
Ex. For 2-D data, we may have: y = w0 + w1 x1+ w2 x2
Solvable by extension of least square method or using SAS, S-Plus
Many nonlinear functions can be transformed into the above
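A short sketch of the method of least squares for the single-predictor case y = w0 + w1 x; the toy (x, y) data are made up.

# Sketch: closed-form least-squares estimates of the regression coefficients.
def least_squares(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    w1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))     # slope
    w0 = mean_y - w1 * mean_x                       # y-intercept
    return w0, w1

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
print(least_squares(xs, ys))    # (1.0, 2.0) for this toy data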
Nonlinear Regression
Some nonlinear models can be modeled by a polynomial function
A polynomial regression model can be transformed into linear regression model. For
example,
y = w0 + w1 x + w2 x^2 + w3 x^3
is convertible to linear form with the new variables x2 = x^2 and x3 = x^3:
y = w0 + w1 x + w2 x2 + w3 x3
Other functions, such as power function, can also be transformed to linear model
Some models are intractable nonlinear (e.g., sum of exponential terms)
Cluster Analysis
• Pattern Recognition
• Spatial Data Analysis
– create thematic maps in GIS by clustering feature spaces
– detect spatial clusters and explain them in spatial data mining
• Image Processing
• Economic Science (especially market research)
• WWW
– Document classification
– Cluster Weblog data to discover groups of similar access patterns
Examples of Clustering Applications
• Marketing: Help marketers discover distinct groups in their customer bases, and then use
this knowledge to develop targeted marketing programs
• Land use: Identification of areas of similar land use in an earth observation database
• Insurance: Identifying groups of motor insurance policy holders with a high average
claim cost
• City-planning: Identifying groups of houses according to their house type, value, and
geographical location
• Earth-quake studies: Observed earth quake epicenters should be clustered along continent
faults
Interval-valued variables
• Standardize data
– Calculate the mean absolute deviation:
sf = (1/n)(|x1f - mf| + |x2f - mf| + ... + |xnf - mf|),
where mf = (1/n)(x1f + x2f + ... + xnf).
• Distances are normally used to measure the similarity or dissimilarity between two data
objects
• Some popular ones include the Minkowski distance:
d(i, j) = (|xi1 - xj1|^q + |xi2 - xj2|^q + ... + |xip - xjp|^q)^(1/q)
where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and q
is a positive integer
If q = 1, d is the Manhattan distance; if q = 2, d is the Euclidean distance:
d(i, j) = sqrt(|xi1 - xj1|^2 + |xi2 - xj2|^2 + ... + |xip - xjp|^2)
– Properties
• d(i,j) ≥ 0
• d(i,i) = 0
• d(i,j) = d(j,i)
• Also one can use weighted distance, parametric Pearson product moment correlation, or
other dissimilarity measures.
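A small sketch of the standardization and Minkowski distance formulas above; the z-score step zif = (xif - mf)/sf and the toy vectors are assumptions used for illustration.

# Sketch: mean-absolute-deviation standardization and the Minkowski distance.
def standardize(column):
    n = len(column)
    m = sum(column) / n                                   # mean m_f
    s = sum(abs(x - m) for x in column) / n               # mean absolute deviation s_f
    return [(x - m) / s for x in column]                  # standardized measurements z_if

def minkowski(i, j, q):
    return sum(abs(a - b) ** q for a, b in zip(i, j)) ** (1.0 / q)

i, j = (1.0, 7.0, 3.0), (4.0, 3.0, 2.0)
print(minkowski(i, j, 1))    # q = 1: Manhattan distance
print(minkowski(i, j, 2))    # q = 2: Euclidean distance
print(standardize([8.0, 9.0, 13.0, 10.0]))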
Binary Variables
                Object j
                1        0        sum
Object i   1    a        b        a + b
           0    c        d        c + d
Nominal Variables
A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow,
blue, green
Ordinal Variables
• discrete or continuous
• order is important, e.g., rank
• Can be treated like interval-scaled variables:
– replace xif by its rank rif ∈ {1, …, Mf}
– map the range of each variable onto [0, 1] by replacing the i-th object in the f-th
variable by
zif = (rif - 1) / (Mf - 1)
– compute the dissimilarity using the methods for interval-scaled variables
Ratio-Scaled Variables
For variables of mixed types, the contributions of all p variables are combined into a single
dissimilarity:
d(i, j) = [ Σ f=1..p δij(f) · dij(f) ] / [ Σ f=1..p δij(f) ]
– f is binary or nominal:
dij(f) = 0 if xif = xjf, otherwise dij(f) = 1
– f is interval-based: use the normalized distance
– f is ordinal or ratio-scaled:
• compute the ranks rif and zif = (rif - 1) / (Mf - 1)
• and treat zif as interval-scaled
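A hedged sketch of the mixed-type dissimilarity above, treating every indicator δij(f) as 1 and mixing one nominal with one interval-scaled attribute; the attribute names, types and the range used for normalization are assumptions.

# Sketch: combined dissimilarity for objects described by mixed attribute types.
def mixed_dissimilarity(obj_i, obj_j, types, ranges):
    total, count = 0.0, 0
    for f, kind in enumerate(types):
        xi, xj = obj_i[f], obj_j[f]
        if kind == "nominal":
            d = 0.0 if xi == xj else 1.0              # simple mismatch contribution
        else:                                         # interval-scaled: normalized distance
            d = abs(xi - xj) / ranges[f]
        total += d
        count += 1                                    # delta_ij(f) assumed to be 1 for all f
    return total / count

objects = [("red", 35.0), ("blue", 50.0)]
print(mixed_dissimilarity(objects[0], objects[1],
                          types=("nominal", "interval"),
                          ranges=(None, 100.0)))      # (1 + 0.15) / 2 = 0.575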
1. Partitioning algorithms: Construct various partitions and then evaluate them by some
criterion
2. Hierarchy algorithms: Create a hierarchical decomposition of the set of data (or objects)
using some criterion
3. Density-based: based on connectivity and density functions
4. Grid-based: based on a multiple-level granularity structure
5. Model-based: a model is hypothesized for each of the clusters and the idea is to find the
best fit of the data to the given model
• Example
– (Figure omitted: the iterative assignment and centroid-recomputation steps on a small set
of 2-D points in a 10 × 10 space.)
• Strength
– Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is #
iterations. Normally, k, t << n.
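A minimal k-means sketch for 2-D points, illustrating the assign-to-nearest-centroid and recompute-centroid loop behind the O(tkn) cost noted above; the toy points and the fixed iteration count are assumptions.

# Sketch: k-means on 2-D points (t iterations, k clusters, n points).
import random

def kmeans(points, k, iterations=10):
    random.seed(1)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                               # assign each point to the nearest centroid
            idx = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                              + (p[1] - centroids[c][1]) ** 2)
            clusters[idx].append(p)
        for c, members in enumerate(clusters):         # recompute each centroid as the cluster mean
            if members:
                centroids[c] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, clusters

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
print(kmeans(pts, k=2))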
(Figures omitted: 10 × 10 scatter plots, with points labelled i, j, h and t, illustrating the
cases considered when evaluating the cost of swapping a non-medoid point for a medoid in a
k-medoids-style method.)
Hierarchical Clustering
Use distance matrix as clustering criteria. This method does not require the number of clusters k
as an input, but needs a termination condition
Decompose the data objects into several levels of nested partitioning (a tree of clusters),
called a dendrogram.
Rock: Algorithm
•Algorithm
– Draw random sample
– Cluster with links
– Label data in disk
CHAMELEON
Algorithms of hierarchical cluster analysis are divided into two categories: divisive
algorithms and agglomerative algorithms. A divisive algorithm starts from the entire set of
samples X and divides it into a partition of subsets, then divides each subset into smaller sets,
and so on. Thus, a divisive algorithm generates a sequence of partitions that is ordered from a
coarser one to a finer one. An agglomerative algorithm first regards each object as an initial
cluster. The clusters are merged into a coarser partition, and the merging process proceeds until
the trivial partition is obtained: all objects are in one large cluster. This process of clustering is a
bottom-up process, where the partitions proceed from a finer one to a coarser one.
The basic steps of the agglomerative clustering algorithm are the same. These steps are
1. Place each sample in its own cluster. Construct the list of inter-cluster distances for all
distinct unordered pairs of samples, and sort this list in ascending order.
2. Step through the sorted list of distances, forming for each distinct threshold value dk a
graph of the samples in which pairs of samples closer than dk are connected into a new cluster
by a graph edge. If all the samples are members of a connected graph, stop. Otherwise,
repeat this step.
3. The output of the algorithm is a nested hierarchy of graphs, which can be cut at the
desired dissimilarity level forming a partition (clusters) identified by simple connected
components in the corresponding sub graph.
Let us consider five points {x1, x2, x3, x4, x5} with the following coordinates as a two-
dimensional sample for clustering:
The distances between these points using the Euclidian measure are
d(x1 , x2 ) =2, d(x1, x3) = 2.5, d(x1, x4) = 5.39, d(x1, x5) = 5
The distances between points as clusters in the first iteration are the same for both single-
link and complete-link clustering. Further computation for these two algorithms is different.
Using agglomerative single-link clustering, the following steps are performed to create a cluster
and to represent the cluster structure as a dendrogram.
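A hedged sketch of agglomerative single-link clustering driven by a distance matrix; the four distances listed above are kept, while the remaining pairwise distances below are made-up values used only for illustration.

# Sketch: single-link agglomerative clustering over a pairwise distance matrix.
def single_link(labels, dist, target_clusters):
    clusters = [{l} for l in labels]
    while len(clusters) > target_clusters:
        # pick the pair of clusters with the smallest single-link (minimum) distance
        best = min(((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
                   key=lambda ab: min(dist[(min(p, q), max(p, q))]
                                      for p in clusters[ab[0]] for q in clusters[ab[1]]))
        a, b = best
        clusters[a] |= clusters[b]    # merge the closer pair
        del clusters[b]
    return clusters

labels = ["x1", "x2", "x3", "x4", "x5"]
dist = {("x1", "x2"): 2.0, ("x1", "x3"): 2.5, ("x1", "x4"): 5.39, ("x1", "x5"): 5.0,
        ("x2", "x3"): 1.5, ("x2", "x4"): 5.0, ("x2", "x5"): 5.39,   # made-up values
        ("x3", "x4"): 3.5, ("x3", "x5"): 4.03, ("x4", "x5"): 2.0}   # made-up values
print(single_link(labels, dist, target_clusters=2))   # e.g. [{x1, x2, x3}, {x4, x5}]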
There are two main types of clustering techniques, those that create a hierarchy of
clusters and those that do not. The hierarchical clustering techniques create a hierarchy of
clusters from small to big. The main reason for this is that, as was already stated, clustering is
an unsupervised learning technique, and as such, there is no absolutely correct answer. For this
reason and depending on the particular application of the clustering, fewer or greater numbers of
clusters may be desired. With a hierarchy of clusters defined it is possible to choose the number
of clusters that are desired. At the extreme it is possible to have as many clusters as there are
records in the database. In this case the records within each cluster are optimally similar to each
other (since there is only one) and certainly different from the other clusters. But of course such
a clustering provides no useful summarization of the data.
The hierarchy of clusters is usually viewed as a tree where the smallest clusters merge
together to create the next highest level of clusters and those at that level merge together to
create the next highest level of clusters. Figure 1.5 below shows how several clusters might form
a hierarchy. When a hierarchy of clusters like this is created the user can determine what the
right number of clusters is that adequately summarizes the data while still providing useful
information (at the other extreme a single cluster containing all the records is a great
summarization but does not contain enough specific information to be useful).
This hierarchy of clusters is created through the algorithm that builds the clusters. There are
two main types of hierarchical clustering algorithms: agglomerative (bottom-up, merging
clusters) and divisive (top-down, splitting clusters).
Non-Hierarchical Clustering
There are two main non-hierarchical clustering techniques. Both of them are very fast to
compute on the database but have some drawbacks. The first are the single pass methods. They
derive their name from the fact that the database must only be passed through once in order to
create the clusters (i.e. each record is only read from the database once). The other class of
techniques are called reallocation methods. They get their name from the movement or
"reallocation" of records from one cluster to another in order to create better clusters. The
reallocation techniques do use multiple passes through the database but are relatively fast in
comparison to the hierarchical techniques.
Hierarchical Clustering
Hierarchical clustering has the advantage over non-hierarchical techniques in that the
clusters are defined solely by the data (not by the user predetermining the number of clusters)
and that the number of clusters can be increased or decreased by simply moving up and down the
hierarchy.
The hierarchy is created by starting either at the top (one cluster that includes all records)
and subdividing (divisive clustering) or by starting at the bottom with as many clusters as there
are records and merging (agglomerative clustering). Usually the merging and subdividing are
done two clusters at a time.
The main distinction between the techniques is their ability to favor long, scraggly
clusters that are linked together record by record, or to favor the detection of the more classical,
compact or spherical clusters that were shown at the beginning of this section. It may seem strange
to want to form these long snaking chain-like clusters, but in some cases they are the patterns that
the user would like to have detected in the database. These are the times when the underlying
space looks quite different from the spherical clusters and the clusters that should be formed are
not based on the distance from the center of the cluster but instead on the records being
"linked" together. Consider the example shown in Figure 1.6 or in Figure 1.7. In these cases
there are two clusters that are not very spherical in shape but could be detected by the single-link
technique.
When looking at the layout of the data in Figure 1.6 there appear to be two relatively flat
clusters running parallel to each other along the income axis. Neither the complete-link nor Ward's
method would, however, return these two clusters to the user. These techniques rely on creating
a "center" for each cluster and picking these centers so that the average distance of each record
from this center is minimized. Points that are very distant from these centers would necessarily
fall into a different cluster.
Figure 1.6: An example of elongated clusters which would not be recovered by the complete-link
or Ward's methods but would be by the single-link method.
• Two parameters:
– Eps: maximum radius of the neighbourhood
– MinPts: minimum number of points in an Eps-neighbourhood of that point
• NEps(p): {q belongs to D | dist(p, q) <= Eps}
• Directly density-reachable: A point p is directly density-reachable from a point q wrt.
Eps, MinPts if
– 1) p belongs to NEps(q)
– 2) core point condition:
|NEps (q)| >= MinPts
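A small sketch of the Eps-neighbourhood, core-point and directly-density-reachable definitions above, using Euclidean distance; the toy points and the parameter values are assumptions.

# Sketch: the neighbourhood and core-point tests underlying density-based clustering.
import math

def eps_neighbourhood(D, p, eps):
    return [q for q in D if math.dist(p, q) <= eps]

def is_core_point(D, p, eps, min_pts):
    return len(eps_neighbourhood(D, p, eps)) >= min_pts

def directly_density_reachable(D, p, q, eps, min_pts):
    # p is directly density-reachable from q if p lies in NEps(q) and q is a core point
    return p in eps_neighbourhood(D, q, eps) and is_core_point(D, q, eps, min_pts)

D = [(1, 1), (1, 2), (2, 1), (2, 2), (8, 8)]
print(is_core_point(D, (1, 1), eps=1.5, min_pts=4))            # True
print(directly_density_reachable(D, (8, 8), (1, 1), 1.5, 4))   # False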
• Uses grid cells but only keeps information about grid cells that do actually contain data
points and manages these cells in a tree-based access structure.
• Influence function: describes the impact of a data point within its neighborhood.
• Overall density of the data space can be calculated as the sum of the influence function of
all data points.
• Clusters can be determined mathematically by identifying density attractors.
• Density attractors are local maxima of the overall density function.
Grid-Based Methods
Using multi-resolution grid data structure
• Several interesting methods
– STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz
(1997)
– WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB‘98)
• A multi-resolution clustering approach using wavelet method
– CLIQUE: Agrawal, et al. (SIGMOD‘98)
WaveCluster (1998)
• Sheikholeslami, Chatterjee, and Zhang (VLDB‘98)
• A multi-resolution clustering approach which applies wavelet transform to the feature
space
– A wavelet transform is a signal processing technique that decomposes a signal
into different frequency sub-bands.
• Both grid-based and density-based
• Input parameters:
– # of grid cells for each dimension
– the wavelet, and the # of applications of wavelet transform.
Assume a model of the underlying distribution that generates the data set (e.g., a normal
distribution)
• Use discordance tests depending on
– data distribution
– distribution parameter (e.g., mean, variance)
– number of expected outliers
• Drawbacks
– most tests are for a single attribute
– in many cases, the data distribution may not be known
Outlier Discovery: Distance-Based Approach
• Introduced to counter the main limitations imposed by statistical methods
– We need multi-dimensional analysis without knowing data distribution.
• Distance-based outlier: A DB(p, D)-outlier is an object O in a dataset T such that at least
a fraction p of the objects in T lies at a distance greater than D from O
• Algorithms for mining distance-based outliers
– Index-based algorithm
– Nested-loop algorithm
– Cell-based algorithm
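A hedged sketch of the DB(p, D)-outlier definition above using a simple nested-loop test; the toy points and parameter values are assumptions.

# Sketch: flag objects for which at least a fraction p of T lies farther than D away.
import math

def db_outliers(T, p_fraction, D):
    outliers = []
    for o in T:
        far = sum(1 for x in T if x != o and math.dist(o, x) > D)
        if far / len(T) >= p_fraction:
            outliers.append(o)
    return outliers

T = [(1, 1), (1, 2), (2, 1), (2, 2), (9, 9)]
print(db_outliers(T, p_fraction=0.75, D=3.0))   # [(9, 9)]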
Outlier Discovery: Deviation-Based Approach
• Identifies outliers by examining the main characteristics of objects in a group
• Objects that "deviate" from this description are considered outliers
• sequential exception technique
– simulates the way in which humans can distinguish unusual objects from among a
series of supposedly like objects
• OLAP data cube technique
– uses data cubes to identify regions of anomalies in large multidimensional data