Data Mining Applications & Tools
SCHOOL OF COMPUTING
SCSA3001 Data Mining And Data Warehousing
DATA MINING
Introduction - Steps in KDD - System Architecture - Types of Data - Data Mining Functionalities - Classification of Data Mining Systems - Integration of a Data Mining System with a Data Warehouse - Issues - Data Preprocessing - Data Mining Applications.
INTRODUCTION
What is Data?
• Data is a collection of data objects and their attributes.
• An attribute is a property or characteristic of an object − for example, the eye color of a person, or temperature. An attribute is also known as a variable, field, characteristic, or feature.
• A collection of attributes describes an object. An object is also known as a record, point, case, sample, entity, or instance.
Data sets are made up of data objects. A data object represents an entity—in a sales database, the
objects may be customers, store items, and sales; in a medical database, the objects may be
patients; in a university database, the objects may be students, professors, and courses. Data
objects are typically described by attributes. Data objects can also be referred to as samples,
examples, instances, data points, or objects. If the data objects are stored in a database, they
are data tuples. That is, the rows of a database correspond to the data objects, and the columns
correspond to the attributes.
Attribute:
An attribute is a data field that represents a characteristic or feature of a data object. For a customer object, attributes can be customer ID, address, etc.
A set of attributes used to describe a given object is known as an attribute vector (or feature vector).
Type of attributes:
Identifying attribute types is the first step of data preprocessing: we differentiate between the different types of attributes and then preprocess the data accordingly. The attribute types are (a short illustrative sketch follows this list):
1. Qualitative (Nominal (N), Ordinal (O), Binary (B))
2. Quantitative (Discrete, Continuous)
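The distinction can be made concrete with a small Python sketch (pandas is assumed to be available; the column names and values are invented for illustration):

import pandas as pd

df = pd.DataFrame({
    "eye_color": ["brown", "blue", "green"],   # nominal (qualitative)
    "smoker": [True, False, False],            # binary (qualitative)
    "t_shirt_size": ["S", "M", "L"],           # ordinal (qualitative)
    "num_children": [0, 2, 1],                 # discrete (quantitative)
    "temperature": [36.6, 37.1, 36.9],         # continuous (quantitative)
})

# An ordinal attribute carries a meaningful order (S < M < L) but no known
# magnitude between values; pandas can record that order explicitly.
df["t_shirt_size"] = pd.Categorical(df["t_shirt_size"],
                                    categories=["S", "M", "L"], ordered=True)
print(df.dtypes)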
Ordinal Attributes: An ordinal attribute contains values that have a meaningful sequence or ranking (order) between them, but the magnitude between the values is not actually known; the order of the values shows what is important but does not indicate how important it is. For example, t-shirt size (small < medium < large) is ordinal.
Data mining, also known as Knowledge Discovery in Databases (KDD), refers to the nontrivial extraction of implicit, previously unknown, and potentially useful information from data stored in databases.
3. Data Selection: Data selection is defined as the process where data relevant to the analysis is decided upon and retrieved from the data collection.
Data selection using neural networks.
Data selection using decision trees.
Data selection using naive Bayes.
Data selection using clustering, regression, etc.
4. Data Transformation: Data transformation is defined as the process of transforming data into the appropriate form required by the mining procedure.
Data transformation is a two-step process:
1. Data Mapping: assigning elements from the source base to the destination to capture the transformations.
2. Code Generation: creation of the actual transformation program.
SYSTEM ARCHITECTURE
Data mining is a very important process where potentially useful and previously unknown
information is extracted from large volumes of data. There are a number of components involved
in the data mining process. These components constitute the architecture of a data mining system.
Data Mining Architecture
The major components of any data mining system are the data source, data warehouse server, data mining engine, pattern evaluation module, graphical user interface, and knowledge base.
a) Data Source
The actual sources of data are databases, data warehouses, the World Wide Web, text files, and other documents; before reaching the server, this data needs to be cleaned and integrated. Again, more data than required will be collected from the different data sources, and only the data of interest needs to be selected and passed to the server. These processes are not as simple as we might think: a number of techniques may be performed on the data as part of cleaning, integration, and selection.
b) Database or Data Warehouse Server
The database or data warehouse server contains the actual data that is ready to be processed.
Hence, the server is responsible for retrieving the relevant data based on the data mining request
of the user.
c) Data Mining Engine
The data mining engine is the core component of any data mining system. It consists of a number
of modules for performing data mining tasks including association, classification,
characterization, clustering, prediction, time-series analysis etc.
d) Pattern Evaluation Module
The pattern evaluation module is mainly responsible for the measure of interestingness of the
pattern by using a threshold value. It interacts with the data mining engine to focus the search
towards interesting patterns.
e) Graphical User Interface
The graphical user interface module communicates between the user and the data mining system.
This module helps the user use the system easily and efficiently without knowing the real
complexity behind the process. When the user specifies a query or a task, this module interacts
with the data mining system and displays the result in an easily understandable manner.
f) Knowledge Base
The knowledge base is helpful in the whole data mining process. It might be useful for guiding the
search or evaluating the interestingness of the result patterns. The knowledge base might even
contain user beliefs and data from user experiences that can be useful in the process of data
mining. The data mining engine might get inputs from the knowledge base to make the result
more accurate and reliable. The pattern evaluation module interacts with the knowledge base on a
regular basis to get inputs and also to update it.
Summary
Each component of a data mining system has its own role and importance in completing data mining efficiently.
Data mining draws on, and finds application in, many fields:
Pattern Recognition
Image Analysis
Signal Processing
Computer Graphics
Web Technology
Business
Bioinformatics
DATA MINING SYSTEM CLASSIFICATION
A data mining system can be classified according to the following criteria −
Database Technology
Statistics
Machine Learning
Information Science
Visualization
Other Disciplines
Apart from these, a data mining system can also be classified based on the kind of (a) databases
mined, (b) knowledge mined, (c) techniques utilized, and (d) applications adapted.
Classification Based on the Databases Mined
We can classify a data mining system according to the kind of databases mined. Database systems can be classified according to different criteria, such as data models or types of data, and the data mining system can be classified accordingly.
For example, if we classify a database according to the data model, then we may have a
relational, transactional, object-relational, or data warehouse mining system.
Data Mining Task Primitives
The set of task-relevant data to be mined: This specifies the portions of the database or the set of data in which the user is interested, including the database attributes or data warehouse dimensions of interest (the relevant attributes or dimensions).
The kind of knowledge to be mined: This specifies the data mining functions to be performed, such as characterization, discrimination, association or correlation analysis, classification, prediction, clustering, outlier analysis, or evolution analysis.
The background knowledge to be used in the discovery process: This knowledge about the domain to be mined is useful for guiding the knowledge discovery process and for evaluating the patterns found. Concept hierarchies are a popular form of background knowledge, which allow data to be mined at multiple levels of abstraction. User beliefs regarding relationships in the data are another form of background knowledge.
The interestingness measures and thresholds for pattern evaluation: These may be used to guide the mining process or, after discovery, to evaluate the discovered patterns. Different kinds of knowledge may have different interestingness measures. For example, interestingness measures for association rules include support and confidence; rules whose support and confidence values are below user-specified thresholds are considered uninteresting.
The expected representation for visualizing the discovered patterns: This refers to the form in which discovered patterns are to be displayed, which may include rules, tables, charts, graphs, decision trees, and cubes.
A data mining query language can be designed to incorporate these primitives, allowing users to flexibly interact with data mining systems. Having a data mining query language provides a foundation on which user-friendly graphical interfaces can be built.
data into partitions, which are then processed in parallel; the results from the partitions are merged. Incremental algorithms update databases without mining the data again from scratch.
Diverse Data Types Issues:
Handling of relational and complex types of data − The database may contain complex data objects, multimedia data objects, spatial data, temporal data, etc. It is not possible for one system to mine all these kinds of data.
Mining information from heterogeneous databases and global information systems − The data is available at different data sources on a LAN or WAN. These data sources may be structured, semi-structured, or unstructured. Therefore, mining knowledge from them adds challenges to data mining.
DATA PREPROCESSING
Data preprocessing is a data mining technique that involves transforming raw data into an
understandable format. Real-world data is often incomplete, inconsistent, and/or lacking in certain
behaviors or trends, and is likely to contain many errors. Data preprocessing is a proven method
of resolving such issues. Data preprocessing prepares raw data for further processing.
Data preprocessing is used in database-driven applications such as customer relationship management, as well as in model-based applications such as neural networks.
Data goes through a series of steps during preprocessing:
Data Cleaning: Data is cleansed through processes such as filling in missing values,
smoothing the noisy data, or resolving the inconsistencies in the data.
Data Integration: Data with different representations are put together and conflicts within
the data are resolved.
Data Transformation: Data is normalized, aggregated and generalized.
Data Reduction: This step aims to present a reduced representation of the data in a data
warehouse.
Data Discretization: Involves reducing the number of values of a continuous attribute by dividing its range into intervals.
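As a rough illustration of these steps, the following Python sketch (pandas assumed; the data and column names are hypothetical) performs simple cleaning, transformation, and discretization:

import pandas as pd

raw = pd.DataFrame({"age": [23, None, 45, 31],
                    "income": [48000, 52000, None, 61000]})

# Data cleaning: fill in missing values with the column mean.
clean = raw.fillna(raw.mean(numeric_only=True))

# Data transformation: min-max normalization of each column to [0, 1].
normalized = (clean - clean.min()) / (clean.max() - clean.min())

# Data discretization: reduce the continuous 'age' attribute to interval labels.
clean["age_group"] = pd.cut(clean["age"], bins=[0, 30, 45, 120],
                            labels=["young", "middle", "senior"])
print(normalized)
print(clean)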
Integration of a data mining system with a data warehouse:
When a data mining (DM) system is integrated with database (DB) and data warehouse (DW) systems, the possible integration schemes include no coupling, loose coupling, semi-tight coupling, and tight coupling. We examine each of these schemes, as follows:
1. No coupling: No coupling means that a DM system will not utilize any function of a DB or
DW system. It may fetch data from a particular source (such as a file system), process data using
some data mining algorithms, and then store the mining results in another file.
2. Loose coupling: Loose coupling means that a DM system will use some facilities of a DB or
DW system, fetching data from a data repository managed by these systems, performing data
mining, and then storing the mining results either in a file or in a designated place in a database or
data Warehouse. Loose coupling is better than no coupling because it can fetch any portion of data
stored in databases or data warehouses by using query processing, indexing, and other system
facilities.
However, many loosely coupled mining systems are main memory-based. Because mining does
not explore data structures and query optimization methods provided by DB or DW systems, it is
difficult for loose coupling to achieve high scalability and good performance with large data sets.
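For instance, a loosely coupled program might use the DB system only to fetch data and then mine in main memory; a minimal Python sketch (sqlite3 is from the standard library; the database file, the sales table, and the threshold are hypothetical):

import sqlite3

con = sqlite3.connect("warehouse.db")   # hypothetical database file

# Fetch the portion of interest through the DB system's query processing.
rows = con.execute(
    "SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id"
).fetchall()

# Trivial in-memory 'mining': flag high-value customers above a threshold.
patterns = [(cid, total) for cid, total in rows if total > 1000]

# Store the mining results in a designated place in the database.
con.execute("CREATE TABLE IF NOT EXISTS mining_results (customer_id, total)")
con.executemany("INSERT INTO mining_results VALUES (?, ?)", patterns)
con.commit()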
3. Semi-tight coupling: Semi-tight coupling means that besides linking a DM system to a
DB/DW system, efficient implementations of a few essential data mining primitives (identified by
the analysis of frequently encountered data mining functions) can be provided in the DB/DW
system. These primitives can include sorting, indexing, aggregation, histogram analysis, multiway join, and precomputation of some essential statistical measures, such as sum, count, max, min, and standard deviation.
4. Tight coupling: Tight coupling means that a DM system is smoothly integrated into the
DB/DW system. The data mining subsystem is treated as one functional component of the information system. Data mining queries and functions are optimized based on mining query
analysis, data structures, indexing schemes, and query processing methods of a DB or DW
system.
Retail Industry
Data mining has great application in the retail industry because this industry collects large amounts of data on sales, customer purchasing history, goods transportation, consumption, and services. It is natural that the quantity of data collected will continue to expand rapidly because of the increasing ease, availability, and popularity of the web.
Data mining in the retail industry helps in identifying customer buying patterns and trends, which leads to improved quality of customer service and good customer retention and satisfaction. Here is a list of examples of data mining in the retail industry −
Design and Construction of data warehouses based on the benefits of data mining.
Multidimensional analysis of sales, customers, products, time and region.
Analysis of effectiveness of sales campaigns.
Customer Retention.
Product recommendation and cross-referencing of items.
Telecommunication Industry
Today the telecommunication industry is one of the most rapidly emerging industries, providing various services such as fax, pager, cellular phone, internet messenger, images, e-mail, and web data transmission. Due to the development of new computer and communication technologies, the telecommunication industry is expanding rapidly. This is the reason why data mining has become very important in helping to understand the business.
Data mining in the telecommunication industry helps in identifying telecommunication patterns, catching fraudulent activities, making better use of resources, and improving quality of service. Here is a list of examples for which data mining improves telecommunication services −
Multidimensional Analysis of Telecommunication data.
Fraudulent pattern analysis.
Identification of unusual patterns.
Multidimensional association and sequential patterns analysis.
Mobile Telecommunication services.
Use of visualization tools in telecommunication data analysis.
Biological Data Analysis
In recent times, we have seen tremendous growth in fields of biology such as genomics, proteomics, functional genomics, and biomedical research. Biological data mining is a very
important part of bioinformatics. Following are the aspects in which data mining contributes to biological data analysis −
Semantic integration of heterogeneous, distributed genomic and proteomic databases.
Alignment, indexing, similarity search, and comparative analysis of multiple nucleotide sequences.
Discovery of structural patterns and analysis of genetic networks and protein pathways.
Association and path analysis.
Visualization tools in genetic data analysis.
Other Scientific Applications
The applications discussed above tend to handle relatively small and homogeneous data sets, for which statistical techniques are appropriate. Huge amounts of data have been collected from scientific domains such as geosciences and astronomy. Large data sets are also being generated by fast numerical simulations in fields such as climate and ecosystem modelling, chemical engineering, and fluid dynamics. Following are the applications of data mining in the field of scientific applications −
Data Warehouses and data preprocessing.
Graph-based mining.
Visualization and domain specific knowledge.
Intrusion Detection
Intrusion refers to any kind of action that threatens the integrity, confidentiality, or availability of network resources. In this world of connectivity, security has become a major issue. The increased usage of the internet and the availability of tools and tricks for intruding and attacking networks have prompted intrusion detection to become a critical component of network administration. Here is a list of areas in which data mining technology may be applied for intrusion detection −
Development of data mining algorithms for intrusion detection.
Association and correlation analysis, aggregation to help select and build discriminating
attributes.
Analysis of Stream data.
Distributed data mining.
Visualization and query tools.
There are many data mining system products and domain-specific data mining applications. New data mining systems and applications are being added to the existing ones, and efforts are being made to standardize data mining languages.
Choosing a Data Mining System
The selection of a data mining system depends on the following features −
Data Types − The data mining system may handle formatted text, record-based data, and relational data. The data could also be in ASCII text, relational database, or data warehouse format. Therefore, we should check what exact formats the data mining system can handle.
System Issues − We must consider the compatibility of a data mining system with different
operating systems. One data mining system may run on only one operating system or on several.
There are also data mining systems that provide web-based user interfaces and allow XML data as
input.
Data Sources − Data sources refer to the data formats on which the data mining system will operate. Some data mining systems may work only on ASCII text files, while others work on multiple relational sources. The data mining system should also support ODBC connections or OLE DB for ODBC connections.
Data mining functions and methodologies − Some data mining systems provide only one data mining function, such as classification, while others provide multiple data mining functions, such as concept description, discovery-driven OLAP analysis, association mining, linkage analysis, statistical analysis, classification, prediction, clustering, outlier analysis, and similarity search.
Coupling data mining with databases or data warehouse systems − Data mining systems need
to be coupled with a database or a data warehouse system. The coupled components are integrated
into a uniform information processing environment. Here are the types of coupling listed below −
o No coupling
o Loose Coupling
o Semi tight Coupling
o Tight Coupling
Scalability − There are two scalability issues in data mining −
o Row (Database size) Scalability − A data mining system is considered row scalable if, when the number of rows is enlarged 10 times, it takes no more than 10 times as long to execute a query.
o Column (Dimension) Scalability − A data mining system is considered column scalable if the mining query execution time increases linearly with the number of columns.
Visualization Tools − Visualization in data mining can be categorized as follows −
o Data Visualization
o Mining Results Visualization
o Mining process visualization
o Visual data mining
Data mining query language and graphical user interface − An easy-to-use graphical user interface is important to promote user-guided, interactive data mining. Unlike relational database systems, data mining systems do not share an underlying data mining query language.
Trends in Data Mining
Data mining concepts are still evolving, and here are the latest trends that we get to see in this field −
Application Exploration.
Scalable and interactive data mining methods.
Integration of data mining with database systems, data warehouse systems and web database
systems.
Standardization of data mining query language.
Visual data mining.
New methods for mining complex types of data.
Biological data mining.
Data mining and software engineering.
Web mining.
Distributed data mining.
Real time data mining.
Multi database data mining.
Privacy protection and information security in data mining
PART-A
1. Define data mining. List out the steps in data mining. (Remember, BTL-1)
7. Define an efficient procedure for cleaning the noisy data. (Remember, BTL-1)
PART-B
1. ii) Describe in detail the applications of data mining. (6) (Remember, BTL-1)
2. i) State and explain the various classifications of data mining systems with examples. (7)
   ii) Explain the various data mining functionalities in detail. (6) (Analyze, BTL-4)
3. i) Describe the steps involved in Knowledge Discovery in Databases (KDD). (7)
   ii) Draw the diagram and describe the architecture of a data mining system. (6) (Remember, BTL-1)
DATA WAREHOUSING
This integration helps in the effective analysis of data. Consistency in naming conventions, attribute measures, encoding structures, etc. has to be ensured.
Time-Variant
The time horizon for a data warehouse is quite extensive compared with operational systems. The data collected in a data warehouse is recognized over a particular period and offers information from a historical point of view. It contains an element of time, explicitly or implicitly. One place where data warehouse data displays time variance is in the structure of the record key: every primary key contained in the DW should have, either implicitly or explicitly, an element of time, such as the day, week, or month. Another aspect of time variance is that, once data is inserted in the warehouse, it can't be updated or changed.
Non-volatile
A data warehouse is also non-volatile, meaning the previous data is not erased when new data is entered into it. Data is read-only and periodically refreshed. This also helps to analyze historical data and understand what happened and when. It does not require transaction processing, recovery, or concurrency control mechanisms.
Activities like delete, update, and insert, which are performed in an operational application environment, are omitted in the data warehouse environment. Only two types of data operations are performed in data warehousing:
1. Data loading
2. Data access
Data Warehouse Architectures
Single-tier architecture
The objective of a single layer is to minimize the amount of data stored; this goal is achieved by removing data redundancy. This architecture is not frequently used in practice.
Two-tier architecture
A two-layer architecture physically separates the available sources from the data warehouse. This architecture is not expandable and does not support a large number of end-users. It also has connectivity problems because of network limitations.
Three-tier architecture
This is the most widely used architecture.
It consists of the Top, Middle and Bottom Tier.
1. Bottom Tier: The database of the data warehouse serves as the bottom tier. It is usually a relational database system. Data is cleansed, transformed, and loaded into this layer using back-end tools.
2. Middle Tier: The middle tier in a data warehouse is an OLAP server, implemented using either the ROLAP or the MOLAP model. For a user, this application tier presents an abstracted view of the database. This layer also acts as a mediator between the end-user and the database.
3. Top Tier: The top tier is a front-end client layer. It holds the tools and APIs used to connect to the data warehouse and get data out of it: query tools, reporting tools, managed query tools, analysis tools, and data mining tools.
DATA WAREHOUSE COMPONENTS
for data warehousing. For instance, ad-hoc queries, multi-table joins, and aggregates are resource-intensive and slow down performance.
Hence, alternative approaches to the standard database are used, as listed below:
In a data warehouse, relational databases are deployed in parallel to allow for scalability. Parallel relational databases also allow shared-memory or shared-nothing models on various multiprocessor configurations or massively parallel processors.
New index structures are used to bypass relational table scans and improve speed.
Multidimensional databases (MDDBs) are used to overcome any limitations placed by the relational data model. Example: Essbase from Oracle.
Sourcing, Acquisition, Clean-up and Transformation Tools (ETL)
The data sourcing, transformation, and migration tools are used for performing all the
conversions, summarizations, and all the changes needed to transform data into a unified format in
the data warehouse. They are also called Extract, Transform and Load (ETL) Tools.
Their functionality includes:
Anonymizing data as per regulatory stipulations.
Eliminating unwanted data from operational databases so that it is not loaded into the data warehouse.
Searching for and replacing common names and definitions for data arriving from different sources.
Calculating summaries and derived data.
In case of missing data, populating it with defaults.
De-duplicating repeated data arriving from multiple data sources.
These Extract, Transform, and Load tools may generate cron jobs, background jobs, COBOL programs, shell scripts, etc. that regularly update data in the data warehouse. These tools are also helpful in maintaining the metadata.
These ETL tools have to deal with the challenges of database and data heterogeneity.
Metadata
The name metadata suggests some high-level technological concept; however, it is quite simple. Metadata is data about data that defines the data warehouse. It is used for building, maintaining, and managing the data warehouse.
In the data warehouse architecture, metadata plays an important role, as it specifies the source, usage, values, and features of data warehouse data. It also defines how data can be changed and processed. It is closely connected to the data warehouse.
Consider implementing an ODS model when information retrieval needs are near the bottom of the data abstraction pyramid, or when there are multiple operational sources required to be accessed.
One should make sure that the data model is integrated and not just consolidated; in that case, a 3NF data model should be considered. Such a model is also ideal for acquiring ETL and data cleansing tools.
Summary:
A data warehouse is an information system that contains historical and cumulative data from single or multiple sources.
A data warehouse is subject-oriented, as it offers information regarding a subject instead of an organization's ongoing operations.
In a data warehouse, integration means the establishment of a common unit of measure for all similar data from different databases.
A data warehouse is also non-volatile, meaning the previous data is not erased when new data is entered into it.
A data warehouse is time-variant, as the data in a DW has a long shelf life.
There are five main components of a data warehouse: 1) Database 2) ETL tools 3) Metadata 4) Query tools 5) Data marts.
There are four main categories of query tools: 1. Query and reporting tools 2. Application development tools 3. Data mining tools 4. OLAP tools.
The data sourcing, transformation, and migration tools are used for performing all the
conversions and summarizations.
In the Data Warehouse Architecture, meta-data plays an important role as it specifies the
source, usage, values, and features of data warehouse data.
BUILDING A DATA WAREHOUSE
In general, building any data warehouse consists of the following steps:
1. Extracting the transactional data from the data sources into a staging area
2. Transforming the transactional data
3. Loading the transformed data into a dimensional database
4. Building pre-calculated summary values to speed up report generation
5. Building (or purchasing) a front-end reporting tool
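A toy sketch of the first three steps, assuming Python's sqlite3 and invented source/table names (dates are assumed to be ISO strings such as '2024-01-15'):

import sqlite3

src = sqlite3.connect("operational.db")   # hypothetical source system
dw = sqlite3.connect("warehouse.db")      # hypothetical warehouse

# 1. Extract transactional data into a staging area (here, a Python list).
staged = src.execute("SELECT order_id, order_date, amount FROM orders").fetchall()

# 2. Transform: derive a surrogate date key (YYYYMMDD) for the time dimension.
transformed = [(oid, d.replace("-", ""), amt) for oid, d, amt in staged]

# 3. Load the transformed rows into the fact table of the dimensional database.
dw.execute("CREATE TABLE IF NOT EXISTS fact_sales (order_id, date_key, amount)")
dw.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", transformed)
dw.commit()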
parts produced per hour or the number of cars rented per day). Dimensions, on the other hand, are
what your business users expect in the reports—the details about the measures. For example, the
time dimension tells the user that 2000 parts were produced between 7 a.m. and 7 p.m. on the
specific day; the plant dimension specifies that these parts were produced by the Northern plant.
Like any modeling exercise, dimensional modeling is not to be taken lightly. Figuring out the needed dimensions is a matter of discussing the business requirements with your users over and over again. When you first talk to the users they have very minimal requirements: "Just give me those reports that show me how each portion of the company performs." Figuring out what "each portion of the company" means is your job as a DW architect. The company may consist of regions, each of which reports to a different vice president of operations. Each region, on the other hand, might consist of areas, which in turn might consist of individual stores. Each store could
have several departments. When the DW is complete, splitting the revenue among the regions
won't be enough. That's when your users will demand more features and additional drill-down
capabilities. Instead of waiting for that to happen, an architect should take proactive measures to
get all the necessary requirements ahead of time.
It's also important to realize that not every field you import from each data source may fit into the
dimensional model. Indeed, if you have a sequential key on a mainframe system, it won't have
much meaning to your business users. Other columns might have had significance eons ago when
the system was built. Since then, the management might have changed its mind about the
relevance of such columns. So don't worry if all of the columns you imported are not part of your
dimensional model.
Loading the Data:
After you've built a dimensional model, it's time to populate it with the data in the staging
database. This step only sounds trivial. It might involve combining several columns together or
splitting one field into several columns. You might have to perform several lookups before
calculating certain values for your dimensional model.
Keep in mind that such data transformations can be performed at either of the two stages: while
extracting the data from their origins or while loading data into the dimensional model. I wouldn't
recommend one way over the other—make a decision depending on the project. If your users need
to be sure that they can extract all the data first, wait until all data is extracted prior to
transforming it. If the dimensions are known prior to extraction, go on and transform the data
while extracting it.
Generating Precalculated Summary Values:
The next step is generating the precalculated summary values which are commonly referred to
as aggregations. This step has been tremendously simplified by SQL Server Analysis Services (or
OLAP Services, as it is referred to in SQL Server 7.0). After you have populated your
dimensional database, SQL Server Analysis Services does all the aggregate generation work for
you. However, remember that depending on the number of dimensions you have in your DW,
building aggregations can take a long time. As a rule of thumb, the more dimensions you have, the
more time it'll take to build aggregations. However, the size of each dimension also plays a
significant role.
Prior to generating aggregations, you need to make an important choice about which dimensional
model to use: ROLAP (Relational OLAP), MOLAP (Multidimensional OLAP), or HOLAP
(Hybrid OLAP). The ROLAP model builds additional tables for storing the aggregates, but this
takes much more storage space than a dimensional database, so be careful! The MOLAP model
stores the aggregations as well as the data in multidimensional format, which is far more efficient
than ROLAP. The HOLAP approach keeps the data in the relational format, but builds
aggregations in multidimensional format, so it's a combination of ROLAP and MOLAP.
Regardless of which dimensional model you choose, ensure that SQL Server has as much memory
as possible. Building aggregations is a memory-intensive operation, and the more memory you
provide, the less time it will take to build aggregate values.
Building (or Purchasing) a Front-End Reporting Tool
After you've built the dimensional database and the aggregations you can decide how
sophisticated your reporting tools need to be. If you just need the drill-down capabilities, and
your users have Microsoft Office 2000 on their desktops, the Pivot Table Service of Microsoft
Excel 2000 will do the job. If the reporting needs are more than what Excel can offer, you'll have
to investigate the alternative of building or purchasing a reporting tool. The cost of building a
custom reporting (and OLAP) tool will usually outweigh the purchase price of a third-party tool.
That is not to say that OLAP tools are cheap.
There are several major vendors on the market that have top-notch analytical tools. In addition to
the third-party tools, Microsoft has just released its own tool, Data Analyzer, which can be a cost-
effective alternative. Consider purchasing one of these suites before delving into the process of
developing your own software because reinventing the wheel is not always beneficial or
affordable. Building OLAP tools is not a trivial exercise by any means.
MULTIDIMENSIONAL DATA MODEL
A multidimensional data model stores data in the form of a data cube; data warehouses commonly support cubes of two, three, or more dimensions.
A data cube allows data to be viewed in multiple dimensions. Dimensions are entities with respect to which an organization wants to keep records. For example, in a store's sales records, dimensions allow the store to keep track of things like monthly sales of items and the branches and locations.
A multidimensional database helps to provide data-related answers to complex business queries quickly and accurately. Data warehouses and Online Analytical Processing (OLAP) tools are based on a multidimensional data model. OLAP in data warehousing enables users to view data from different angles and dimensions.
The multidimensional data model is a method for ordering data in the database, with good arrangement and assembly of the database contents.
The multidimensional data model allows users to pose analytical questions associated with market or business trends, unlike relational databases, which allow users to
access data in the form of queries. It allows users to rapidly receive answers to their requests by creating and examining the data comparatively quickly.
OLAP (online analytical processing) and data warehousing use multidimensional databases, which are used to show multiple dimensions of the data to users.
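A small cube-like view can be sketched with a pandas pivot table (the sales figures are invented; pandas assumed):

import pandas as pd

sales = pd.DataFrame({
    "branch": ["A", "A", "B", "B"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "amount": [100, 150, 200, 250],
})

# View the 'amount' measure along two dimensions (branch x quarter);
# margins=True adds the aggregated totals along each dimension.
cube_2d = sales.pivot_table(values="amount", index="branch",
                            columns="quarter", aggfunc="sum", margins=True)
print(cube_2d)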
Working on a Multidimensional Data Model
The following stages should be followed by every project for building a Multi-Dimensional Data
Model:
Stage 1: Assembling data from the client: In the first stage, a multidimensional data model collects the correct data from the client. Mostly, software professionals give the client clarity about the range of data that can be obtained with the selected technology, and collect the complete data in detail.
Stage 2: Grouping different segments of the system: In the second stage, the multidimensional data model recognizes and classifies all the data according to the respective section it belongs to, which also makes the model problem-free to apply step by step.
Stage 3: Noticing the different proportions: The third stage is the basis on which the design of the system rests. In this stage, the main factors are recognized according to the user's point of view. These factors are also known as "dimensions".
Stage 4: Preparing the actual-time factors and their respective qualities: In the fourth stage, the factors recognized in the previous step are used for identifying the related qualities. These qualities are also known as "attributes" in the database.
Stage 5: Finding the actuality of the factors listed previously and their qualities: In the fifth stage, a multidimensional data model separates and differentiates the actuality (the facts) from the factors collected by it. These facts play a significant role in the arrangement of a multidimensional data model.
Stage 6: Building the schema to place the data, with respect to the information collected in the steps above: In the sixth stage, a schema is built on the basis of the data that was collected previously.
For Example:
1. Let us take the example of a firm. The revenue cost of a firm can be recognized on the basis of different factors, such as the geographical location of the firm's workplace, the firm's products, the advertisements done, and the time utilized to develop a product.
Since OLAP servers are based on a multidimensional view of data, we will discuss OLAP operations on multidimensional data.
Here is the list of OLAP operations −
Roll-up
Drill-down
Slice and dice
Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in any of the following ways −
By climbing up a concept hierarchy for a dimension
By dimension reduction
The following diagram illustrates how roll-up works.
Roll-up is performed by climbing up a concept hierarchy for the dimension location.
Initially the concept hierarchy was "street < city < province < country".
On rolling up, the data is aggregated by ascending the location hierarchy from the level of
city to the level of country.
The data is grouped into countries rather than cities.
When roll-up is performed, one or more dimensions from the data cube are removed.
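In code, a roll-up can be imitated by aggregating at the coarser level of the hierarchy; a minimal pandas sketch with invented figures:

import pandas as pd

sales = pd.DataFrame({
    "city": ["Chicago", "New York", "Toronto", "Vancouver"],
    "country": ["USA", "USA", "Canada", "Canada"],
    "units": [440, 1560, 395, 555],
})

# Roll-up: ascend the hierarchy city -> country by aggregating over cities.
rolled_up = sales.groupby("country", as_index=False)["units"].sum()
print(rolled_up)
# Drill-down is the reverse operation: descending to the finer 'city' level.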
Generally, a data warehouse adopts a three-tier architecture. Following are the three tiers of the data warehouse architecture.
These 3 tiers are:
1. Bottom Tier (Data warehouse server)
2. Middle Tier (OLAP server)
3. Top Tier (Front end tools)
Middle Tier − In the middle tier, we have the OLAP server, which can be implemented in either of the following ways:
By Relational OLAP (ROLAP), an extended relational database management system. ROLAP maps operations on multidimensional data to standard relational operations.
By the Multidimensional OLAP (MOLAP) model, which directly implements multidimensional data and operations.
Top-Tier − This tier is the front-end client layer. This layer holds the query tools and reporting
tools, analysis tools and data mining tools.
The following diagram depicts the three-tier architecture of data warehouse −
Data Warehouse Models
From the perspective of data warehouse architecture, we have the following data warehouse
models
Virtual Warehouse
Data mart
Enterprise Warehouse
Virtual Warehouse
The view over an operational data warehouse is known as a virtual warehouse. It is easy to build a virtual warehouse, but it requires excess capacity on operational database servers.
Data Mart
Data mart contains a subset of organization-wide data. This subset of data is valuable to specific
groups of an organization.
In other words, we can claim that data marts contain data specific to a particular group. For
example, the marketing data mart may contain data related to items, customers, and sales. Data
marts are confined to subjects.
Points to remember about data marts −
Windows-based or Unix/Linux-based servers are used to implement data marts. They are implemented on low-cost servers.
The implementation cycle of a data mart is measured in short periods of time, i.e., in weeks rather than months or years.
The life cycle of a data mart may be complex in the long run if its planning and design are not organization-wide.
Data marts are small in size.
Data marts are customized by department.
The source of a data mart is a departmentally structured data warehouse.
Data marts are flexible.
Enterprise Warehouse
An enterprise warehouse collects all of the information and subjects spanning an entire organization.
It provides us enterprise-wide data integration.
The data is integrated from operational systems and external information providers.
This information can vary from a few gigabytes to hundreds of gigabytes, terabytes or
beyond.
SCHEMAS FOR MULTI-DIMENSIONAL DATA MODEL
Schema is a logical description of the entire database. It includes the name and description of records of all record types, including all associated data items and aggregates. Much like a database, a data warehouse also requires a schema to be maintained. A database uses the relational model, while a data warehouse uses the Star, Snowflake, and Fact Constellation schemas. In this chapter, we will discuss the schemas used in a data warehouse.
Star Schema
In a star schema, each dimension is represented with only one dimension table containing a set of attributes, and a central fact table contains the measures together with foreign keys to each of the dimension tables.
Snowflake Schema
The snowflake schema is an extension of the star schema in which some dimension tables are normalized, splitting the data into additional tables. For example:
Now the item dimension table contains the attributes item_key, item_name, type, brand, and supplier_key.
The supplier_key is linked to the supplier dimension table, which contains the attributes supplier_key and supplier_type.
Note − Due to the normalization in the snowflake schema, redundancy is reduced; therefore, it becomes easy to maintain and saves storage space.
Fact Constellation Schema
A fact constellation has multiple fact tables. It is also known as galaxy schema.
The following diagram shows two fact tables, namely sales and shipping.
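Conceptually, querying such schemas means joining each fact table to the dimension tables on their key columns; a minimal pandas sketch (tables, keys, and figures are all hypothetical):

import pandas as pd

# Fact table: measures plus foreign keys into the dimension tables.
sales = pd.DataFrame({"item_key": [1, 2, 1], "time_key": [10, 10, 11],
                      "units_sold": [5, 3, 7]})
item_dim = pd.DataFrame({"item_key": [1, 2], "item_name": ["pen", "book"]})
time_dim = pd.DataFrame({"time_key": [10, 11], "quarter": ["Q1", "Q2"]})

# Join facts to dimensions, then aggregate the measure by dimension attributes.
report = (sales.merge(item_dim, on="item_key")
               .merge(time_dim, on="time_key")
               .groupby(["quarter", "item_name"])["units_sold"].sum())
print(report)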
Relational OLAP
ROLAP servers are placed between relational back-end server and client front-end tools. To store
and manage warehouse data, ROLAP uses relational or extended-relational DBMS.
ROLAP includes the following −
Implementation of aggregation navigation logic.
Optimization for each DBMS back end.
Additional tools and services.
Multidimensional OLAP
MOLAP uses array-based multidimensional storage engines for multidimensional views of data.
With multidimensional data stores, the storage utilization may be low if the data set is sparse.
Therefore, many MOLAP servers use two levels of data storage representation to handle dense and sparse data sets.
Hybrid OLAP
Hybrid OLAP is a combination of both ROLAP and MOLAP. It offers the higher scalability of ROLAP and the faster computation of MOLAP. HOLAP servers allow storing large volumes of detailed data, while the aggregations are stored separately in a MOLAP store.
Specialized SQL Servers
Specialized SQL servers provide advanced query language and query processing support for
SQL queries over star and snowflake schemas in a read-only environment.
INTEGRATED OLAP AND OLAM ARCHITECTURE
Online analytical mining (OLAM) integrates online analytical processing (OLAP) with data mining, mining knowledge in multidimensional databases. Here is the diagram that shows the integration of both OLAP and OLAM.
OLAM is important for the following reasons −
High quality of data in data warehouses − Data mining tools are required to work on integrated, consistent, and cleaned data. These preprocessing steps are very costly; the data warehouses constructed by such preprocessing are valuable sources of high-quality data for OLAP and data mining alike.
Available information processing infrastructure surrounding data warehouses − Information
processing infrastructure refers to accessing, integration, consolidation, and transformation of
multiple heterogeneous databases, web accessing and service facilities, and reporting and OLAP analysis tools.
Online selection of data mining functions − Integrating OLAP with multiple data mining functions and online analytical mining provides users with the flexibility to select desired data mining functions and to swap data mining tasks dynamically.
Features of OLTP and OLAP:
The major distinguishing features between OLTP and OLAP are summarized as follows.
1. Users and system orientation: An OLTP system is customer-oriented and is used for
transaction and query processing by clerks, clients, and information technology professionals. An
OLAP system is market-oriented and is used for data analysis by knowledge workers, including
managers, executives, and analysts.
2. Data contents: An OLTP system manages current data that, typically, are too detailed to be
easily used for decision making. An OLAP system manages large amounts of historical data,
provides facilities for summarization and aggregation, and stores and manages information at
different levels of granularity. These features make the data easier for use in informed decision
making.
3. Database design: An OLTP system usually adopts an entity-relationship (ER) data model and
an application oriented database design. An OLAP system typically adopts either a star or
snowflake model and a subject-oriented database design.
4. View: An OLTP system focuses mainly on the current data within an enterprise or department,
without referring to historical data or data in different organizations. In contrast, an OLAP system
often spans multiple versions of a database schema. OLAP systems also deal with information that
originates from different organizations, integrating information from many data stores. Because of
their huge volume, OLAP data are stored on multiple storage media.
5. Access patterns: The access patterns of an OLTP system consist mainly of short, atomic
transactions. Such a system requires concurrency control and recovery mechanisms. However,
accesses to OLAP systems are mostly read-only operations although many could be complex
queries.
PART-A
6. How would you evaluate the goals of data mining? (Evaluate, BTL-5)
7. Can you list the categories of tools in business analysis? (Remember, BTL-1)
PART-B
Examples
Rule form: Body ⇒ Head [support, confidence]
buys(x, "diapers") ⇒ buys(x, "beers") [0.5%, 60%]
major(x, "CS") ^ takes(x, "DB") ⇒ grade(x, "A") [1%, 75%]
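For reference, the two bracketed measures have the standard definitions (P(A ∪ B) here denotes the fraction of transactions containing every item in both A and B):
support(A ⇒ B) = P(A ∪ B)
confidence(A ⇒ B) = P(B | A) = support(A ∪ B) / support(A)
So in the first rule above, 0.5% of all transactions contain both diapers and beers, and 60% of the transactions that contain diapers also contain beers.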
ASSOCIATIONS AND CORRELATIONS
Association Rule: Basic Concepts
Given: (1) database of transactions, (2) each transaction is a list of items (purchased by a customer
in a visit)
Find: all rules that correlate the presence of one set of items with that of another set of items
E.g., 98% of people who purchase tires and auto accessories also get automotive services done
Applications
* ⇒ Maintenance Agreement (What should the store do to boost Maintenance Agreement sales?)
– Home Electronics ⇒ * (What other products should the store stock up on?)
Association Rule Mining: A Road Map
• Boolean vs. quantitative associations (based on the types of values handled)
– buys(x, "SQLServer") ^ buys(x, "DMBook") ⇒ buys(x, "DBMiner") [0.2%, 60%]
– age(x, "30..39") ^ income(x, "42..48K") ⇒ buys(x, "PC") [1%, 75%]
• Single-dimension vs. multiple-dimensional associations (see examples above)
• Single-level vs. multiple-level analysis
– What brands of beers are associated with what brands of diapers?
• Various extensions
– Correlation, causality analysis
• Association does not necessarily imply correlation or causality
– Maxpatterns and closed itemsets
– Constraints enforced
• E.g., do small sales (sum < 100) trigger big buys (sum > 1,000)?
MINING METHODS
• Mining Frequent Pattern with candidate generation
• Mining Frequent Pattern without candidate generation
MINING FREQUENT PATTERNS WITH CANDIDATE GENERATION
This method mines the complete set of frequent itemsets using candidate generation, based on the Apriori property and the Apriori algorithm.
Apriori property
• All nonempty subsets of a frequent itemset must also be frequent.
– If an itemset I does not satisfy the minimum support threshold min_sup, then I is not frequent, i.e., support(I) < min_sup.
– If an item A is added to the itemset I, then the resulting itemset (I ∪ A) cannot occur more frequently than I.
• Monotonic functions are functions that move in only one direction.
• This property is called anti-monotonic.
• If a set cannot pass a test, all its supersets will fail the same test as well.
• This property is monotonic in failing the test.
The Apriori Algorithm
• Join Step: Ck is generated by joining Lk-1 with itself
• Prune Step: Any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset
Method
1) L1 = find_frequent_1-itemsets(D);
2) for (k = 2; Lk-1 ≠ ∅; k++) {
3)   Ck = apriori_gen(Lk-1, min_sup);
4)   for each transaction t ∈ D { // scan D for counts
5)     Ct = subset(Ck, t); // get the subsets of t that are candidates
6)     for each candidate c ∈ Ct
7)       c.count++;
8)   }
9)   Lk = {c ∈ Ck | c.count ≥ min_sup}
10) }
11) return L = ∪k Lk;
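A compact Python rendering of this method is sketched below (illustrative only, not the textbook's code; transactions are assumed to be given as sets of items):

from itertools import combinations

def apriori(transactions, min_sup):
    """Return a dict mapping each frequent itemset to its support count."""
    freq = {}
    items = {frozenset([i]) for t in transactions for i in t}
    # L1: the frequent 1-itemsets.
    Lk = {c for c in items if sum(c <= t for t in transactions) >= min_sup}
    k = 1
    while Lk:
        for c in Lk:
            freq[c] = sum(c <= t for t in transactions)
        # Join step: form (k+1)-candidates by joining Lk with itself.
        Ck = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        # Prune step: a candidate with an infrequent k-subset cannot be frequent.
        Ck = {c for c in Ck
              if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Scan the transactions to keep the candidates meeting min_sup.
        Lk = {c for c in Ck if sum(c <= t for t in transactions) >= min_sup}
        k += 1
    return freq

# Tiny usage example with min_sup = 2.
T = [frozenset(t) for t in (["beer", "diaper"], ["beer", "diaper", "milk"],
                            ["diaper", "milk"], ["beer", "milk"])]
print(apriori(T, 2))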
which contain multiple occurrences of some predicates. These rules are called hybrid-dimensional association rules.
An example of such a rule is the following, where the predicate buys is repeated:
age(X, "20...29") ∧ buys(X, "laptop") ⇒ buys(X, "HP printer")
Database attributes can be nominal or quantitative. The values of nominal (or categorical)
attributes are “names of things.” Nominal attributes have a finite number of possible values, with
no ordering among the values (e.g., occupation, brand, color)
Quantitative attributes are numeric and have an implicit ordering among values (e.g., age, income,
price). Techniques for mining multidimensional association rules can be categorized into two
basic approaches regarding the treatment of quantitative attributes. In the first approach,
quantitative attributes are discretized using predefined concept hierarchies. This discretization
occurs before mining. For instance, a concept hierarchy for income may be used to replace the
original numeric values of this attribute by interval labels such as “0..20K,” “21K..30K,”
“31K..40K,” and so on.
Here, discretization is static and predetermined. Chapter 3 on data preprocessing gave several
techniques for discretizing numeric attributes. The discretized numeric attributes, with their
interval labels, can then be treated as nominal attributes (where each interval is considered
a category).
Mining Quantitative Association Rules
• Determine the number of partitions for each quantitative attribute
• Map values/ranges to consecutive integer values such that the order is preserved
• Find the support of each value of the attributes, and combine values when the support is less than MaxSup; find the frequent itemsets, whose support is larger than MinSup
• Use the frequent sets to generate association rules
• Prune out uninteresting rules
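The first two steps amount to discretization; a short pandas sketch (attribute values and interval edges are invented):

import pandas as pd

age = pd.Series([23, 27, 34, 38, 45, 52])

# Partition the quantitative attribute into intervals, then map each interval
# to a consecutive integer so that the original order is preserved.
codes = pd.cut(age, bins=[20, 30, 40, 50, 60], labels=False)
print(codes.tolist())   # [0, 0, 1, 1, 2, 3]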
Partial Completeness
• R : rules obtained before partition
• R’: rules obtained after partition
• Partial Completeness measures the maximum distance between a rule in R and its closest
generalization in R’
• X̂ is a generalization of itemset X: if
– Dimension/level constraints:
– Rule constraints
• small sales (price < $10) trigger big sales (sum > $200).
– Interestingness constraints:
• sum(LHS) < 100 ^ min(LHS) > 20 ^ count(LHS) > 3 ^ sum(RHS) > 1000
– 1-var: A constraint confining only one side (L/R) of the rule, e.g., as shown above.
Categories of Constraints
1. Anti-monotone and Monotone Constraints
• A constraint Ca is anti-monotone iff, for any pattern S not satisfying Ca, none of the super-patterns of S can satisfy Ca
• A constraint Cm is monotone iff, for any pattern S satisfying Cm, every super-pattern of S also satisfies it
2. Succinct Constraint
• A subset of items Is is a succinct set if it can be expressed as σp(I) for some selection predicate p, where σ is a selection operator
• SP ⊆ 2^I is a succinct power set if there is a fixed number of succinct sets I1, …, Ik ⊆ I, s.t. SP can be expressed in terms of the strict power sets of I1, …, Ik using union and minus
• A constraint Cs is succinct provided SATCs(I) is a succinct power set
3. Convertible Constraint
• Suppose all items in patterns are listed in a total order R
• A constraint C is convertible anti-monotone iff a pattern S satisfying the constraint implies that
each suffix of S w.r.t. R also satisfies C
• A constraint C is convertible monotone iff a pattern S satisfying the constraint implies that each
pattern of which S is a suffix w.r.t. R also satisfies C
Property of Constraints: Anti-Monotone
• Anti-monotonicity: If a set S violates the constraint, any superset of S violates the constraint.
• Examples:
– sum(S.price) ≤ v is anti-monotone
– sum(S.price) ≥ v is not anti-monotone
– sum(S.price) = v is partly anti-monotone
• Application:
– Push "sum(S.price) ≤ 1000" deeply into iterative frequent set computation.
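To make the idea concrete, here is a small hypothetical Python sketch of that pruning (prices and candidate itemsets are invented):

# Anti-monotone pruning: once sum(S.price) exceeds the bound, every superset
# of S also exceeds it, so S can be discarded before any support counting.
price = {"tv": 600, "pc": 900, "pen": 5, "book": 30}   # hypothetical prices

def within_budget(itemset, bound=1000):
    return sum(price[i] for i in itemset) <= bound

candidates = [{"pen"}, {"pen", "book"}, {"tv", "pen"}, {"tv", "pc"}]
pruned = [s for s in candidates if within_budget(s)]
print(pruned)   # {'tv', 'pc'} is dropped: 600 + 900 > 1000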
Property of Constraints: Succinctness
• Succinctness:
– For any sets S1 and S2 satisfying C, S1 ∪ S2 satisfies C
– Given A1, the set of size-1 itemsets satisfying C, any set S satisfying C is based on A1, i.e., S contains a subset belonging to A1
• Examples:
– sum(S.price) ≥ v is not succinct
– min(S.price) ≤ v is succinct
Optimization:
– If C is succinct, then C is pre-counting prunable: the satisfaction of the constraint alone is not affected by the iterative support counting.
• Supervised learning (classification): new data is classified based on the training set
• Unsupervised learning (clustering): the class labels of the training data are unknown; given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
PART-A
2. List the ways in which interesting patterns should be mined. (Remember, BTL-1)
3. Are all patterns generated interesting and useful? Give reasons to justify. (Understand, BTL-2)
4. Compare the advantages of the FP-growth algorithm over the Apriori algorithm. (Analyze, BTL-4)
5. How will you apply the FP-growth algorithm in data mining? (Apply, BTL-3)
6. How will you apply pattern mining in multilevel space? (Apply, BTL-3)
PART-B
5. Interpretability:
Understanding and insight provided by the model
6. Goodness of rules
Decision tree size
Compactness of classification rules
Comparing Classification Methods
Classification and prediction methods can be compared and evaluated according to the following
criteria:
Predictive Accuracy: This refers to the ability of the model to correctly predict the class label of
new or previously unseen data.
Speed: This refers to the computation costs involved in generating and using the model.
Robustness: This is the ability of the model to make correct predictions given noisy data or data
with missing values.
Scalability: This refers to the ability to construct the model efficiently given a large amount of data.
Interpretability: This refers to the level of understanding and insight that is provided by the model.
CLASSIFICATION BY DECISION TREE INDUCTION
Decision tree
– A flow-chart-like tree structure
– Internal node denotes a test on an attribute
– Branch represents an outcome of the test
– Leaf nodes represent class labels or class distribution
• Decision tree generation consists of two phases
– Tree construction
• At start, all the training examples are at the root
• Partition examples recursively based on selected attributes
– Tree pruning
• Identify and remove branches that reflect noise or outliers
• Use of decision tree: Classifying an unknown sample
– Test the attribute values of the sample against the decision tree
Training Dataset
This follows an example from Quinlan’s ID3
Pre pruning:
Halt tree construction early—do not split a node if this would result in the goodness measure
falling below a threshold
Difficult to choose an appropriate threshold
Post pruning:
Remove branches from a “fully grown” tree—get a sequence of progressively pruned trees
Use a set of data different from the training data to decide which is the “best pruned tree”
1. Deciding not to divide a set of samples any further under some conditions. The stopping
criterion is usually based on some statistical tests, such as the χ2 test: If there are no
significant differences in classification accuracy before and after division, then represent a
current node as a leaf. The decision is made in advance, before splitting, and therefore this
approach is called pre pruning.
2. Removing retrospectively some of the tree structure using selected accuracy criteria. The
decision in this process of post pruning is made after the tree has been built.
C4.5 follows the post pruning approach, but it uses a specific technique to estimate the predicted
error rate. This method is called pessimistic pruning. For every node in a tree, the estimation of
the upper confidence limit ucf is computed using the statistical tables for the binomial distribution
(given in most textbooks on statistics). The parameter ucf is a function of |Ti| and E for a given node.
C4.5 uses the default confidence level of 25%, and compares U25%(|Ti|, E) for a given node Ti
with a weighted confidence of its leaves. Weights are the total number of cases for every leaf. If
the predicted error for a root node in a sub tree is less than weighted sum of U25% for the leaves
(predicted error for the sub tree), then a sub tree will be replaced with its root node, which
becomes a new leaf in a pruned tree.
Let us illustrate this procedure with a simple example. A subtree of a decision tree is given in
the figure, where the root node is the test x1 on three possible values {1, 2, 3} of the attribute A. The
children of the root node are leaves denoted with corresponding classes and (|Ti|, E) parameters.
The question is to estimate the possibility of pruning the sub tree and replacing it with its root
node as a new, generalized leaf node.
To analyze the possibility of replacing the sub tree with a leaf node it is necessary to compute a
predicted error PE for the initial tree and for a replaced node. Using default confidence of 25%,
the upper confidence limits for all nodes are collected from statistical tables: U25% (6, 0) = 0.206,
U25%(9, 0) = 0.143, U25%(1, 0) = 0.750, and U25%(16, 1) = 0.157. Using these values, the
predicted errors for the initial tree and the replaced node are
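Reconstructing the computation from the limits quoted above:
PE(subtree) = 6 × 0.206 + 9 × 0.143 + 1 × 0.750 = 3.273
PE(replaced node) = 16 × 0.157 = 2.512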
Since the existing subtree has a higher value of predicted error than the replaced node, it is
recommended that the decision tree be pruned and the subtree replaced with the new leaf node.
BAYESIAN CLASSIFICATION
• Probabilistic learning: Calculate explicit probabilities for hypothesis, among the most
practical approaches to certain types of learning problems
• Incremental: Each training example can incrementally increase/decrease the probability that a
hypothesis is correct. Prior knowledge can be combined with observed data.
• Probabilistic prediction: Predict multiple hypotheses, weighted by their probabilities
• Standard: Even when Bayesian methods are computationally intractable, they can provide a
standard of optimal decision making against which other methods can be measured
87
SCSA3001 Data Mining And Data Warehousing
BAYESIAN THEOREM
• Given training data D, posteriori probability of a hypothesis h, P(h|D) follows the Bayes
theorem
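In symbols:
P(h|D) = P(D|h) P(h) / P(D)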
The naive independence assumption greatly reduces the computation cost, since only the class
distribution needs to be counted.
Naive Bayesian Classifier (II)
Given a training set, we can compute the probabilities
[Table: conditional probabilities for each value of Outlook, Temperature, Humidity, and Windy,
with one column for class P and one for class N]
BAYESIAN CLASSIFICATION
• The classification problem may be formalized using a-posteriori probabilities:
• P(C|X) = prob. that the sample tuple
• X=<x1,…,xk> is of class C.
• E.g. P(class=N | outlook=sunny, windy=true,…)
• Idea: assign to sample X the class label C such that P(C|X) is maximal
Estimating a-posteriori probabilities
• Bayes theorem:
P(C|X) = P(X|C)·P(C) / P(X)
• P(X) is constant for all classes
• P(C) = relative freq of class C samples
• C such that P(C|X) is maximum = C such that P(X|C)·P(C) is maximum
• Problem: computing P(X|C) directly is infeasible!
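The standard way around this is the naive conditional-independence assumption
P(X|C) = P(x1|C) · … · P(xk|C), which reduces the problem to counting. A minimal sketch, using a
hypothetical toy version of the weather data (the tuples below are illustrative, not the course's table):

from collections import Counter, defaultdict

data = [
    ({"outlook": "sunny", "windy": True}, "N"),
    ({"outlook": "sunny", "windy": False}, "N"),
    ({"outlook": "overcast", "windy": False}, "P"),
    ({"outlook": "rain", "windy": False}, "P"),
    ({"outlook": "rain", "windy": True}, "N"),
]

class_counts = Counter(label for _, label in data)
# cond_counts[class][attribute][value] = frequency of that value within the class
cond_counts = defaultdict(lambda: defaultdict(Counter))
for features, label in data:
    for attr, value in features.items():
        cond_counts[label][attr][value] += 1

def predict(features):
    best_class, best_score = None, -1.0
    for c, n_c in class_counts.items():
        # P(C) times the product of the per-attribute relative frequencies P(xi|C)
        score = n_c / len(data)
        for attr, value in features.items():
            score *= cond_counts[c][attr][value] / n_c
        if score > best_score:
            best_class, best_score = c, score
    return best_class

print(predict({"outlook": "sunny", "windy": True}))  # expected: "N"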
Association-Based Classification
• Several methods for association-based classification
– ARCS: Quantitative association mining and clustering of association rules (Lent et
al’97)
• It beats C4.5 in (mainly) scalability and also accuracy
– Associative classification: (Liu et al’98)
• It mines high support and high confidence rules in the form of “cond_set => y”, where y is a
class label
– CAEP (Classification by aggregating emerging patterns) (Dong et al’99)
Emerging patterns (EPs): the item sets whose support increases significantly from
one class to another
Mine EPs based on minimum support and growth rate
One rule is created for each path from the root to a leaf
Each attribute-value pair along a path forms a conjunction: the leaf holds the class
prediction
Rules are mutually exclusive and exhaustive
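For instance (an illustrative rule in the spirit of the weather example used earlier, not copied from
a figure):
IF outlook = sunny AND humidity = high THEN class = N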
CLASSIFICATION BY BACKPROPAGATION
Back propagation: A neural network learning algorithm
Started by psychologists and neurobiologists to develop and test computational analogues of
neurons
A neural network: A set of connected input/output units where each connection has a weight
associated with it
During the learning phase, the network learns by adjusting the weights so as to be able to
predict the correct class label of the input tuples
Also referred to as connectionist learning due to the connections between units
Neural network as a classifier
Weakness
Long training time
Require a number of parameters typically best determined empirically, e.g., the
network topology or “structure”
Poor interpretability: Difficult to interpret the symbolic meaning behind the
learned weights and of “hidden units” in the network
Strength
High tolerance to noisy data
Ability to classify untrained patterns
Well-suited for continuous-valued inputs and outputs
Algorithms are inherently parallel
Techniques have recently been developed for the extraction of rules from trained
neural networks
A Neuron (= a perceptron)
The n-dimensional input vector x is mapped into variable y by means of the scalar product
and a nonlinear function mapping
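In one standard form (stated here since the figure's equation is not available),
y = f(Σ i=1..n wi xi + b)
where the wi are the connection weights, b is a bias term, and f is a nonlinear activation function
such as the sign or sigmoid function.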
A multi-layer feed-forward neural network
Initialize weights (to small random #s) and biases in the network
Propagate the inputs forward (by applying activation function)
Back propagate the error (by updating weights and biases)
Terminating condition (when error is very small, etc.)
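A minimal numpy sketch of these four steps for a one-hidden-layer network (the layer sizes,
learning rate, and tiny XOR-style dataset are illustrative choices, not from the text):

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize weights (small random numbers) and biases
W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    # Step 2: propagate the inputs forward through the activation function
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Step 3: back propagate the error, updating weights and biases
    err_out = (out - y) * out * (1 - out)   # delta at the output layer
    err_h = (err_out @ W2.T) * h * (1 - h)  # delta at the hidden layer
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

    # Step 4: terminating condition (error is very small)
    if np.mean((out - y) ** 2) < 1e-3:
        break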
Efficiency of backpropagation: Each epoch (one iteration through the training set)
takes O(|D| * w), with |D| tuples and w weights, but the number of epochs can be exponential in n,
the number of inputs, in the worst case
Rule extraction from networks: network pruning
Simplify the network structure by removing weighted links that have the least
effect on the trained network
Then perform link, unit, or activation value clustering
The set of input and activation values are studied to derive rules describing the
relationship between the input and hidden unit layers
Sensitivity analysis: assess the impact that a given input variable has on a network
output. The knowledge gained from this analysis can be represented in rules
SVM—SUPPORT VECTOR MACHINES
A new classification method for both linear and nonlinear data
It uses a nonlinear mapping to transform the original training data into a higher dimension
With the new dimension, it searches for the linear optimal separating hyperplane (i.e.,
“decision boundary”)
With an appropriate nonlinear mapping to a sufficiently high dimension, data from two
classes can always be separated by a hyperplane
SVM finds this hyperplane using support vectors (“essential” training tuples) and margins
(defined by the support vectors)
Features: training can be slow but accuracy is high owing to their ability to model complex
nonlinear decision boundaries (margin maximization)
Used both for classification and prediction
Applications
Object recognition
Speaker identification,
SVM—Linearly Separable
Any training tuples that fall on hyperplanes H1 or H2 (i.e., the sides defining
the margin) are support vectors
This becomes a constrained (convex) quadratic optimization problem: Quadratic
objective function and linear constraints Quadratic Programming (QP)
Lagrangian multipliers
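In the standard linearly separable formulation, with weight vector W and bias b:
Separating hyperplane: W · X + b = 0
H1: W · Xi + b ≥ +1 for yi = +1
H2: W · Xi + b ≤ −1 for yi = −1
The margin between H1 and H2 is 2/||W||, and training maximizes it subject to these linear
constraints, which is exactly the quadratic program mentioned above.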
The number of support vectors found can be used to compute an (upper) bound on the
expected error rate of the SVM classifier, which is independent of the data dimensionality
Thus, an SVM with a small number of support vectors can have good generalization, even
when the dimensionality of the data is high
PREDICTION
(Numerical) prediction is similar to classification
construct a model
use model to predict continuous or ordered value for a given input
Prediction is different from classification
Classification refers to predict categorical class label
Prediction models continuous-valued functions
Major method for prediction: regression
model the relationship between one or more independent or predictor variables
and a dependent or response variable
Regression analysis
Linear and multiple regression
Non-linear regression
Other regression methods: generalized linear model, Poisson
regression, log-linear models, regression trees
LINEAR REGRESSION
Linear regression: involves a response variable y and a single predictor variable x
y = w0 + w1 x
Where w0 (y-intercept) and w1 (slope) are regression coefficients
Method of least squares: estimates the best-fitting straight line
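For the single-predictor case, the least-squares estimates are the standard closed forms
w1 = Σ i=1..|D| (xi − x̄)(yi − ȳ) / Σ i=1..|D| (xi − x̄)²
w0 = ȳ − w1 x̄
where x̄ and ȳ are the means of the xi and yi in the training data.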
Multiple linear regression: involves more than one predictor variable
Training data is of the form (X1, y1), (X2, y2),…, (X|D|, y|D|)
Ex. For 2-D data, we may have: y = w0 + w1 x1+ w2 x2
Solvable by extension of least square method or using SAS, S-Plus
Many nonlinear functions can be transformed into the above
Nonlinear Regression
Some nonlinear models can be modeled by a polynomial function
A polynomial regression model can be transformed into linear regression model. For
example,
y = w0 + w1 x + w2 x² + w3 x³
Convertible to linear with new variables: x2 = x², x3 = x³
y = w0 + w1 x + w2 x2 + w3 x3
Other functions, such as power function, can also be transformed to linear model
Some models are intractable nonlinear (e.g., sum of exponential terms)
Possible to obtain least square estimates through extensive calculation on more
complex formulae
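A quick sketch of this transformation trick in code (the sample points are illustrative; numpy's
polyfit is shown only as a cross-check):

import numpy as np

# Fit y = w0 + w1*x + w2*x^2 + w3*x^3 by ordinary least squares on the
# transformed variables x2 = x**2, x3 = x**3 (linear in w0..w3).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 9.0, 34.0, 89.0])  # roughly cubic, illustrative values

A = np.column_stack([np.ones_like(x), x, x**2, x**3])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w)                           # w0, w1, w2, w3
print(np.polyfit(x, y, 3)[::-1])   # same fit, via polyfit (highest degree first)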
PART-A
Q. No Questions Competence BT Level
9. What inference can you formulate with Bayes theorem? Create BTL-6
10. Define Lazy learners and eager learners with an example. Remember BTL-1
PART-B
The process of grouping a set of physical objects into classes of similar objects is called
clustering.
Cluster – collection of data objects
– Objects within a cluster are similar and objects in different clusters are dissimilar.
Cluster applications – pattern recognition, image processing and market research.
- helps marketers to discover the characterization of customer groups based on purchasing
patterns
- Categorize genes in plant and animal taxonomies
- Identify groups of houses in a city according to house type, value and geographical location
- Classify documents on WWW for information discovery
Clustering is a preprocessing step for other data mining steps like classification, characterization.
Clustering – Unsupervised learning – does not rely on predefined classes with class labels.
Typical requirements of clustering in data mining
1. Scalability – Clustering algorithms should work for huge databases
2. Ability to deal with different types of attributes – Clustering algorithms should work not only
for numeric data, but also for other data types.
3. Discovery of clusters with arbitrary shape – Clustering algorithms (based on distance measures)
should work for clusters of any shape.
4. Minimal requirements for domain knowledge to determine input parameters – Clustering results
are sensitive to input parameters to a clustering algorithm (example – number of desired clusters).
Determining the value of these parameters is difficult and requires some domain knowledge.
5. Ability to deal with noisy data – Outlier, missing, unknown and erroneous data detected by a
clustering algorithm may lead to clusters of poor quality.
6. Insensitivity to the order of input records – Clustering algorithms should produce the same
results even if the order of the input records is changed.
7. High dimensionality – Data in high dimensional space can be sparse and highly skewed, hence
it is challenging for a clustering algorithm to cluster data objects in high dimensional space.
8. Constraint-based clustering – In real-world scenarios, clustering may need to be performed under
various constraints. It is a challenging task to find groups of data with good clustering behavior
that also satisfy the various constraints.
9. Interpretability and usability – Clustering results should be interpretable, comprehensible and
usable. So we should study how an application goal may influence the selection of clustering
methods.
TYPES OF DATA IN CLUSTERING ANALYSIS
1. Data Matrix: (object-by-variable structure)
Represents n objects (such as persons) with p variables or attributes (such as age, height,
weight, gender, race and so on). The structure is in the form of a relational table, or an n x p matrix.
2. Dissimilarity Matrix: (object-by-object structure)
Stores the dissimilarities between all pairs of the n objects, where d (i, j) is the dissimilarity
between the objects i and j; d (i, j) = d (j, i) and d (i, i) = 0. Both structures are sketched below.
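Data matrix (n x p):
x11 … x1p
…   …   …
xn1 … xnp

Dissimilarity matrix (n x n, lower triangular):
0
d(2,1) 0
d(3,1) d(3,2) 0
…      …      …
d(n,1) d(n,2) … 0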
Many clustering algorithms use the Dissimilarity Matrix, so data represented as a Data Matrix is
converted into a Dissimilarity Matrix before applying such clustering algorithms.
Clustering of objects is done based on their similarities or dissimilarities. Similarity coefficients or
dissimilarity coefficients are derived from correlation coefficients.
CATEGORIZATION OF MAJOR CLUSTERING METHODS
The choice of many available clustering algorithms depends on type of data available and the
application used.
Major Categories are:
1. Partitioning Methods:
- Construct k-partitions of the n data objects, where each partition is a cluster and k <= n.
- Each partition should contain at least one object & each object should belong to exactly one
partition.
- Iterative Relocation Technique – attempts to improve partitioning by moving objects from one
group to another.
- Good Partitioning – Objects in the same cluster are “close” / related and objects in the different
clusters are “far apart” / very different.
Uses the following algorithms:
K-means Algorithm: - Each cluster is represented by the mean value of the objects in the
cluster.
K-medoids Algorithm: - Each cluster is represented by one of the objects located near the
center of the cluster.
These work well on small to medium-sized databases.
2. Hierarchical Methods:
- Creates hierarchical decomposition of the given set of data objects.
- Two types – Agglomerative and Divisive
- Agglomerative Approach: (Bottom-Up Approach):
Each object forms a separate group
Successively merges groups close to one another (based on distance between clusters)
Done until all the groups are merged to one or until a termination condition holds.
(Termination condition can be desired number of clusters)
PARTITIONING METHODS
Database has n objects and k partitions where k<=n; each partition is a cluster.
Partitioning criterion = Similarity function:
Objects within a cluster are similar; objects of different clusters are dissimilar.
Classical Partitioning Methods: k-means and k-medoids:
(A) Centroid-based technique: The k-means method:
- Cluster similarity is measured using mean value of objects in the cluster (or clusters center of
gravity)
- Randomly select k objects. Each object is a cluster mean or center.
- Each of the remaining objects is assigned to the most similar cluster – based on the distance
between the object and the cluster mean.
- Compute new mean for each cluster.
- This process iterates until all the objects are assigned to a cluster and the partitioning criterion is
met.
- This algorithm determines k partitions that minimize the squared error function.
- Square Error Function is defined as:
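E = Σ i=1..k Σ x ∈ Ci |x − mi|²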
Where x is the point representing an object, mi is the mean of the cluster Ci.
Algorithm
K-Means Algorithm
Given k, the k-means algorithm is implemented in four steps:
1. Partition the objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition. The centroid is the
center (mean point) of the cluster.
3. Assign each object to the cluster with the nearest seed point.
4. Go back to step 2; stop when no more reassignment occurs.
Here, E is the sum of the square error for all objects in the data set. x is the point in space
representing a given object, and mi is the mean of cluster Ci (both x and mi are multidimensional).
In other words, for each object in each cluster, the distance from the object to its cluster center is
squared, and the distances are summed.
This criterion tries to make the resulting k clusters as compact and as separate as possible.
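A minimal numpy sketch of this procedure (k, the sample points, and the iteration cap are
illustrative; empty-cluster handling is omitted for brevity):

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Steps 1-2: pick k initial centers from the data
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign each object to the nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2 (repeated): recompute each cluster mean
        new_centers = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        # Step 4: stop when the means no longer change
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # E: the squared-error criterion defined in the text
    E = ((X - centers[labels]) ** 2).sum()
    return labels, centers, E

X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 8.5]])
print(kmeans(X, k=2))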
To determine whether a non-medoid object Orandom is a good replacement for a current
medoid Oj, the following four cases are examined for each of the non-medoid objects P.
HIERARCHICAL METHODS
This method creates the hierarchical decomposition of the given set of data objects.
Agglomerative Approach
Divisive Approach
Agglomerative Approach
This approach is also known as the bottom-up approach. In this we start with each object forming a
separate group. It keeps on merging the objects or groups that are close to one another, and it keeps
on doing so until all of the groups are merged into one or until the termination condition holds.
Divisive Approach
This approach is also known as the top-down approach. In this we start with all of the objects in the
same cluster. In each successive iteration, a cluster is split into smaller clusters. This is done until
each object forms its own cluster or the termination condition holds.
Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase
clustering
Phase1: scan DB to build an initial in-memory CF tree (a multi-level compression of the data that
tries to preserve the inherent clustering structure of the data)
Phase2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
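Here a clustering feature summarizes a subcluster as the triple CF = (N, LS, SS): the number of
points, their linear sum, and their square sum. CFs are additive, which is what makes the
incremental, single-scan construction of the CF tree possible.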
An extension to k-means
Assign each object to a cluster according to a weight (prob. distribution)
New means are computed based on weighted measures
General idea
Starts with an initial estimate of the parameter vector
Iteratively rescores the patterns against the mixture density produced by the parameter
vector
The rescored patterns are used to update the parameter estimates
Patterns belong to the same cluster if they are placed by their scores in the same
mixture component
The algorithm converges quickly but may not reach the global optimum
COBWEB (Fisher’87)
A popular and simple method of incremental conceptual learning
Creates a hierarchical clustering in the form of a classification tree
Each node refers to a concept and contains a probabilistic description of that concept
Neurons compete in a “winner-takes-all” fashion for the object currently being presented
SOMs, also called topological ordered maps or Kohonen Self-Organizing Feature Maps
(KSOMs)
It maps all the points in a high-dimensional source space into a 2- to 3-D target space, such that the
distance and proximity relationships (i.e., topology) are preserved as much as possible
Similarity to k-means: cluster centers tend to lie in a low-dimensional manifold in the feature space
Clustering is performed by having several units competing for the current object
The unit whose weight vector is closest to the current object wins
The winner and its neighbors learn by having their weights adjusted
SOMs are believed to resemble processing that can occur in the brain
Useful for visualizing high-dimensional data in 2-or3-D space
CONSTRAINT-BASED METHOD
Proposed approach
Find an initial “solution” by partitioning the data set into k groups, satisfying the user
constraints
Iteratively refine the solution by micro-clustering relocation (e.g., moving δ micro-clusters from
cluster Ci to Cj) and “deadlock” handling (break the micro-clusters when necessary)
Efficiency is improved by micro-clustering
How to handle more complicated constraints?
E.g., having approximately the same number of valued customers in each cluster? Can you
solve it?
WHAT IS OUTLIER DISCOVERY?
Complexity of Web pages − Web pages lack a unifying structure and are far more complex than
traditional text documents. There is a huge number of documents in the digital libraries of the web,
and these libraries are not arranged in any particular sorted order.
The web is a dynamic information source − The information on the web is rapidly updated. Data
such as news, stock markets, weather, sports and shopping are updated regularly.
Diversity of user communities − The user community on the web is rapidly expanding. These
users have different backgrounds, interests, and usage purposes. More than 100 million
workstations are connected to the Internet, and the number is still rapidly increasing.
Relevancy of Information − A particular person is generally interested in only a small portion of
the web, while the rest of the web contains information that is not relevant to the user and may
swamp desired results.
Mining web page layout structure
The basic structure of the web page is based on the Document Object Model (DOM). The DOM
structure refers to a tree-like structure in which each HTML tag in the page corresponds to a node in
the DOM tree. We can segment the web page by using predefined tags in HTML. The HTML
syntax is flexible; therefore, web pages do not always follow the W3C specifications, and not
following the specifications may cause errors in the DOM tree structure.
The DOM structure was initially introduced for presentation in the browser and not for description
of semantic structure of the web page. The DOM structure cannot correctly identify the semantic
relationship between the different parts of a web page.
Vision-based page segmentation (VIPS)
• The purpose of VIPS is to extract the semantic structure of a web page based on its visual
presentation.
• Such a semantic structure corresponds to a tree structure. In this tree each node corresponds to a
block.
• A value is assigned to each node. This value is called the Degree of Coherence. This value is
assigned to indicate the coherent content in the block based on visual perception.
• The VIPS algorithm first extracts all the suitable blocks from the HTML DOM tree. After that it
finds the separators between these blocks.
• The separators refer to the horizontal or vertical lines in a web page that visually cross with no
blocks.
• The semantics of the web page is constructed on the basis of these blocks.
Text databases consist of huge collections of documents, gathered from several sources such as
news articles, books, digital libraries, e-mail messages, web pages, etc. Due to the increase in the
amount of information, text databases are growing rapidly. In many of the text databases, the data
is semi-structured.
For example, a document may contain a few structured fields, such as title, author, publishing
date, etc. But along with the structured data, the document also contains unstructured text
components, such as the abstract and contents. Without knowing what could be in the documents, it is
difficult to formulate effective queries for analyzing and extracting useful information from the
data. Users require tools to compare the documents and rank their importance and relevance.
Therefore, text mining has become popular and an essential theme in data mining.
Information Retrieval
Information retrieval deals with the retrieval of information from a large number of text-based
documents. Because information retrieval systems and database systems handle different kinds of
data, some features of database systems are usually not present in information retrieval systems.
Examples of information retrieval systems include −
A spatial database stores a large amount of space-related data, such as maps, preprocessed remote
sensing or medical imaging data, and VLSI chip layout data. Spatial databases have many features
distinguishing them from relational databases. They carry topological and/or distance information,
usually organized by sophisticated, multidimensional spatial indexing structures that are accessed
by spatial data access methods and often require spatial reasoning, geometric computation, and
spatial knowledge representation techniques.
Spatial data mining refers to the extraction of knowledge, spatial relationships, or other interesting
patterns not explicitly stored in spatial databases. Such mining demands an integration of data
mining with spatial database technologies. It can be used for understanding spatial data,
discovering spatial relationships and relationships between spatial and non-spatial data,
constructing spatial knowledge bases, reorganizing spatial databases, and optimizing spatial
queries. It is expected to have wide applications in geographic information systems,
geomarketing, remote sensing, image database exploration, medical imaging, navigation, traffic
control, environmental studies, and many other areas where spatial data are used. A crucial
challenge to spatial data mining is the exploration of efficient spatial data mining techniques due
to the huge amount of spatial data and the complexity of spatial data types and spatial access
methods.
“What about using statistical techniques for spatial data mining?” Statistical spatial data analysis
has been a popular approach to analyzing spatial data and exploring geographic information. The
term geostatistics is often associated with continuous geographic space, whereas the term spatial
statistics is often associated with discrete space. In a statistical model that handles non-spatial
data, one usually assumes statistical independence among different portions of data. However,
different from traditional data sets, there is no such independence among spatially distributed data
because in reality, spatial objects are often interrelated, or more exactly spatially co-located, in the
sense that the closer two objects are located, the more likely they are to share similar properties. For
example, natural resources, climate, temperature, and economic situations are likely to be similar in
geographically closely located regions. People even consider this as the first law of geography:
“Everything is related to everything else, but nearby things are more related than distant things.”
Such a property of close interdependency across nearby space leads to the notion of spatial
autocorrelation. Based on this notion, spatial statistical modeling methods have been developed
with good success. Spatial data mining will further develop spatial statistical analysis methods and
extend them for huge amounts of spatial data, with more emphasis on efficiency, scalability,
cooperation with database and data warehouse systems, improved user interaction, and the
discovery of new types of knowledge.
There are three types of dimensions in a spatial data cube:
A non-spatial dimension contains only nonspatial data; examples are the dimensions temperature
and precipitation.
A spatial-to-nonspatial dimension is a dimension whose primitive-level data are spatial but whose
generalization, starting at a certain high level, becomes nonspatial
A spatial-to-spatial dimension is a dimension whose primitive level and all of its high level
generalized data are spatial.
We distinguish two types of measures in a spatial data cube:
A numerical measure contains only numerical data. For example, one measure in a spatial data
warehouse could be the monthly revenue of a region, so that a roll-up may compute the total
revenue by year, by county, and so on. Numerical measures can be further classified into
distributive, algebraic, and holistic, as discussed in
A spatial measure contains a collection of pointers to spatial objects. For example, in a
generalization (or roll-up) in the spatial data cube of Example 10.5, the regions with the same
range of temperature and precipitation will be grouped into the same cell, and the measure so
formed contains a collection of pointers to those regions.
PART-A
Q. No Questions Competence BT Level
1. Identify what changes you make to solve the problem in cluster analysis. Remember BTL-1
7. Evaluate the different types of data used for cluster analysis. Create BTL-6
PART-B
9. Describe in detail spatial mining and time-series mining. Remember BTL-1