BI Unit 1: Data Warehouse and Data Mining
Course Overview
• The course: what and
how
• 0. Introduction
• I. Data Warehousing
• II. Decision Support and
OLAP
• III. Data Mining
• IV. Looking Ahead
2
0. Introduction
• Data Warehousing, OLAP and
data mining: what and
why (now)?
• Relation to OLTP
• A case study
• demos, labs
3
A producer wants to know….
• Which are our lowest/highest margin customers?
• Who are my customers and what products are they buying?
• What is the most effective distribution channel?
• What product promotions have the biggest impact on revenue?
• Which customers are most likely to go to the competition?
• What impact will new products/services have on revenue?
4
Data, Data everywhere
• yet ... I can't find the data I need
– data is scattered over the network
– many versions, subtle differences
[Barry Devlin]
6
What are the users saying...
• Data should be integrated across
the enterprise
• Summary data has a real value
to the organization
• Historical data holds the key to
understanding data over time
• What-if capabilities are required
7
What is Data Warehousing?
A process of transforming data into information and making it available to users in a timely enough manner to make a difference
(Diagram: Data transformed into Information)
8
Evolution
9
Warehouses are Very Large
Databases
(Chart: percentage of respondents by warehouse size, initial vs. projected 2Q96, in buckets from 5 GB up to 500 GB-1 TB. Source: META Group, Inc.)
10
Very Large Data Bases
• Terabytes -- 10^12 bytes: Walmart -- 24 Terabytes
11
Data Warehousing --
It is a process
• A technique for assembling and managing data from various sources for the purpose of answering business questions, thus making decisions that were not previously possible
• A decision support database
maintained separately from the
organization’s operational database
12
Data Warehouse
• A data warehouse is a
– subject-oriented
– integrated
– time-varying
– non-volatile
collection of data that is used primarily in
organizational decision making.
-- Bill Inmon, Building the Data Warehouse 1996
13
Explorers, Farmers and Tourists
Explorers: Look for new, unanticipated patterns in the data
Farmers: Harvest known information with regular, predictable queries
Tourists: Browse information harvested by farmers
14
Data Warehouse Architecture
Sources: Relational Databases, ERP Systems, Purchased Data, Legacy Data
  → Extraction and Cleansing → Optimized Loader
  → Data Warehouse Engine (with Metadata Repository)
  → Analyze / Query tools
15
Data Warehouse for Decision Support
& OLAP
• Putting information technology to work to help the knowledge worker make faster and better decisions
– Which of my customers are most likely to go to the
competition?
– What product promotions have the biggest impact on
revenue?
– How did the share price of software companies correlate
with profits over last 10 years?
16
Decision Support
• Used to manage and control business
• Data is historical or point-in-time
• Optimized for inquiry rather than update
• Use of the system is loosely defined and can be
ad-hoc
• Used by managers and end-users to understand the
business and make judgements
17
Data Mining works with Warehouse
Data
• Data Warehousing provides the Enterprise with a memory
• Data Mining provides the Enterprise with intelligence
Industry                  Application
Finance                   Credit Card Analysis
Insurance                 Claims, Fraud Analysis
Telecommunication         Call record analysis
Transport                 Logistics management
Consumer goods            Promotion analysis
Data Service providers    Value added data
Utilities                 Power usage analysis
20
Data Mining in Use
• The US Government uses Data Mining to track fraud
• A Supermarket becomes an information broker
• Basketball teams use it to track game strategy
• Cross Selling
• Warranty Claims Routing
• Holding on to Good Customers
• Weeding out Bad Customers
21
What makes data mining possible?
• Advances in the following areas are making
data mining deployable:
– data warehousing
– better and more data (i.e., operational,
behavioral, and demographic)
– the emergence of easily deployed data mining
tools and
– the advent of new data mining techniques.
– -- Gartner Group
22
Why Separate Data Warehouse?
• Performance
– Op dbs designed & tuned for known txs & workloads.
– Complex OLAP queries would degrade perf. for op txs.
– Special data organization, access & implementation methods needed for
multidimensional views & queries.
• Function
  – Missing data: Decision support requires historical data, which op dbs do not typically maintain.
  – Data consolidation: Decision support requires consolidation (aggregation, summarization) of data from many heterogeneous sources: op dbs, external sources.
  – Data quality: Different sources typically use inconsistent data representations, codes, and formats which have to be reconciled.
23
What are Operational Systems?
• They are OLTP systems
• Run mission critical
applications
• Need to work with stringent
performance requirements
for routine tasks
• Used to run a business!
24
RDBMS used for OLTP
• Database Systems have been used
traditionally for OLTP
– clerical data processing tasks
– detailed, up to date data
– structured repetitive tasks
– read/update a few records
– isolation, recovery and integrity are critical
25
Operational Systems
• Run the business in real time
• Based on up-to-the-second data
• Optimized to handle large numbers of
simple read/write transactions
• Optimized for fast response to
predefined transactions
• Used by people who deal with
customers, products -- clerks,
salespeople etc.
• They are increasingly used by customers
26
Examples of Operational Data
Data                Industry            Usage                         Technology                                              Volumes
Customer File       All                 Track Customer Details        Legacy application, flat files, mainframes              Small-medium
Account Balance     Finance             Control account activities    Legacy applications, hierarchical databases, mainframe  Large
Point-of-Sale data  Retail              Generate bills, manage stock  ERP, Client/Server, relational databases                Very Large
Call Record         Telecommunications  Billing                       Legacy application, hierarchical database, mainframe    Very Large
Production Record   Manufacturing       Control Production            ERP, relational databases, AS/400                       Medium
27
So, what’s different?
Application-Orientation vs.
Subject-Orientation
Application-Orientation (Operational Database): Loans, Credit Card, Trust, Savings
Subject-Orientation (Data Warehouse): Customer, Vendor, Product, Activity
29
OLTP vs. Data Warehouse
• OLTP systems are tuned for known transactions and
workloads while workload is not known a priori in a
data warehouse
• Special data organization, access methods and
implementation methods are needed to support data
warehouse queries (typically multidimensional
queries)
– e.g., average amount spent on phone calls between
9AM-5PM in Pune during the month of December
30
OLTP vs Data Warehouse
• OLTP • Warehouse (DSS)
– Application Oriented – Subject Oriented
– Used to run business – Used to analyze business
– Detailed data – Summarized and refined
– Current up to date – Snapshot data
– Isolated Data – Integrated Data
– Repetitive access – Ad-hoc access
– Clerical User – Knowledge User (Manager)
31
OLTP vs Data Warehouse
32
OLTP vs Data Warehouse
• OLTP: Transaction throughput is the performance metric; thousands of users; managed in entirety
• Data Warehouse: Query throughput is the performance metric; hundreds of users; managed by subsets
33
To summarize ...
• OLTP Systems are
used to “run” a business
34
Why Now?
• Data is being produced
• ERP provides clean data
• The computing power is available
• The computing power is affordable
• The competitive pressures are strong
• Commercial products are available
35
Myths surrounding OLAP Servers and
Data Marts
• Data marts and OLAP servers are departmental solutions
supporting a handful of users
• Million dollar massively parallel hardware is needed to deliver fast response times for complex queries
• OLAP servers require massive and unwieldy indices
• Complex OLAP queries clog the network with data
• Data warehouses must be at least 100 GB to be effective
» Source -- Arbor Software Home Page
36
Wal*Mart Case Study
• Founded by Sam Walton
• One of the largest supermarket chains in the US
37
Old Retail Paradigm
• Wal*Mart
  – Inventory Management
  – Merchandise Accounts Payable
  – Purchasing
  – Supplier Promotions: National, Region, Store Level
• Suppliers
  – Accept Orders
  – Promote Products
  – Provide special Incentives
  – Monitor and Track the Incentives
  – Bill and Collect Receivables
  – Estimate Retailer Demands
38
New (Just-In-Time) Retail Paradigm
• No more deals
• Shelf-Pass Through (POS Application)
– One Unit Price
• Suppliers paid once a week on ACTUAL items sold
– Wal*Mart Manager
• Daily Inventory Restock
• Suppliers (sometimes SameDay) ship to Wal*Mart
• Warehouse-Pass Through
– Stock some Large Items
• Delivery may come from supplier
– Distribution Center
• Supplier’s merchandise unloaded directly onto Wal*Mart Trucks
39
Wal*Mart System
40
Course Overview
• 0. Introduction
• I. Data Warehousing
• II. Decision Support and
OLAP
• III. Data Mining
• IV. Looking Ahead
41
I. Data Warehouses:
Architecture, Design & Construction
• DW Architecture
• Loading, refreshing
• Structuring/Modeling
• DWs and Data Marts
• Query Processing
• demos, labs
42
Data Warehouse Architecture
Sources: Relational Databases, ERP Systems, Purchased Data, Legacy Data
  → Extraction and Cleansing → Optimized Loader
  → Data Warehouse Engine (with Metadata Repository)
  → Analyze / Query tools
43
Components of the Warehouse
• Data Extraction and Loading
• The Warehouse
• Analyze and Query -- OLAP Tools
• Metadata
44
Loading the Warehouse
46
Data Quality - The Reality
47
Data Quality - The Reality
• Legacy systems no longer documented
• Outside sources with questionable quality
procedures
• Production systems with no built in integrity checks
and no integration
– Operational systems are usually designed to solve a specific business problem and are rarely developed to a corporate plan
• “And get it done quickly, we do not have time to worry about
corporate standards...”
48
Data Integration Across Sources
Savings Loans Trust Credit card
49
Data Transformation Example
Different source representations must be transformed to a single warehouse representation, e.g.:
• encoding: appl A - m,f; appl B - 1,0; appl C - x,y; appl D - male,female
• unit: appl A - pipeline in cm; appl B - pipeline in inches; appl C - pipeline in feet; appl D - pipeline in yards
• field name: appl A - balance; appl B - bal; appl C - currbal; appl D - balcurr
50
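For illustration only, a minimal SQL sketch of such a standardizing transformation; the staging table and column names (staging_pipeline_data, gender_code, source_appl, pipeline_len) are assumptions, not part of the deck:

-- Hedged sketch: unify gender codes and pipeline units from a hypothetical staging table
select cust_id,
       case gender_code                       -- single warehouse encoding (simplified mapping)
            when 'm' then 'M' when '1' then 'M' when 'x' then 'M' when 'male' then 'M'
            else 'F'
       end as gender,
       case source_appl                       -- single warehouse unit: centimetres
            when 'A' then pipeline_len            -- already in cm
            when 'B' then pipeline_len * 2.54     -- inches -> cm
            when 'C' then pipeline_len * 30.48    -- feet   -> cm
            when 'D' then pipeline_len * 91.44    -- yards  -> cm
       end as pipeline_cm
from   staging_pipeline_data;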
Data Integrity Problems
• Same person, different spellings
– Agarwal, Agrawal, Aggarwal etc...
• Multiple ways to denote company name
– Persistent Systems, PSPL, Persistent Pvt. LTD.
• Use of different names
– mumbai, bombay
• Different account numbers generated by different
applications for the same customer
• Required fields left blank
• Invalid product codes collected at point of sale
– manual entry leads to mistakes
– “in case of a problem use 9999999”
51
Data Transformation Terms
• Extracting • Enrichment
• Conditioning • Scoring
• Scrubbing • Loading
• Merging • Validating
• Householding • Delta Updating
52
Data Transformation Terms
• Extracting
– Capture of data from operational source in “as is” status
– Sources for data generally in legacy mainframes in VSAM,
IMS, IDMS, DB2; more data today in relational databases
on Unix
• Conditioning
– The conversion of data types from the source to the
target data store (warehouse) -- always a relational
database
53
Data Transformation Terms
• Householding
– Identifying all members of a household (living at
the same address)
– Ensures only one mail is sent to a household
– Can result in substantial savings: 1 lakh catalogues
at Rs. 50 each costs Rs. 50 lakhs. A 2% savings
would save Rs. 1 lakh.
54
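A minimal SQL sketch of householding, assuming a hypothetical customer table whose address column has already been standardized:

-- Hedged sketch: one catalogue mailing per household (same cleaned address)
select address,
       min(customer_id) as mailing_contact,   -- pick one representative per household
       count(*)         as household_size
from   customer
group by address;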
Data Transformation Terms
• Enrichment
– Bring data from external sources to
augment/enrich operational data. Data sources include Dun & Bradstreet, A. C. Nielsen, CMIE, IMRA, etc.
• Scoring
– computation of the probability of an event, e.g.,
chance that a customer will defect to AT&T from
MCI, chance that a customer is likely to buy a new
product
55
Loads
• After extracting, scrubbing, cleaning,
validating etc. need to load the data into the
warehouse
• Issues
– huge volumes of data to be loaded
– small time window available when warehouse can be taken off line
(usually nights)
– when to build index and summary tables
– allow system administrators to monitor, cancel, resume, change load
rates
– Recover gracefully -- restart after failure from where you were and
without loss of data integrity
56
Load Techniques
• Use SQL to append or insert new data
– record at a time interface
– will lead to random disk I/O’s
• Use batch load utility
57
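As an illustration of the two approaches (a sketch, not a prescribed method), row-at-a-time SQL inserts versus a bulk load utility, here shown with PostgreSQL's COPY; table, column and file names are assumptions:

-- Row-at-a-time interface: simple, but per-row logging and random disk I/O
insert into sales_fact (sale_date, custno, prodno, amount)
values (date '1996-12-01', 1001, 55, 450.00);

-- Batch load utility (PostgreSQL COPY as an example): reads a whole file in one pass
copy sales_fact (sale_date, custno, prodno, amount)
from '/staging/sales_19961201.csv' with (format csv);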
Load Taxonomy
• Incremental versus Full loads
• Online versus Offline loads
58
Refresh
• Propagate updates on source data to the
warehouse
• Issues:
– when to refresh
– how to refresh -- refresh techniques
59
When to Refresh?
• periodically (e.g., every night, every week) or after
significant events
• on every update: not warranted unless warehouse
data require current data (up to the minute stock
quotes)
• refresh policy set by administrator based on user
needs and traffic
• possibly different policies for different sources
60
Refresh Techniques
• Full Extract from base tables
– read entire source table: too expensive
– may be the only choice for legacy systems
61
How To Detect Changes
• Create a snapshot log table to record ids of
updated rows of source data and timestamp
• Detect changes by:
– Defining after row triggers to update snapshot log
when source table changes
– Using regular transaction log to detect changes to
source data
62
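A hedged sketch of the snapshot-log approach above, written in Oracle-style trigger syntax; the source table customer and the log table customer_snapshot_log are assumptions:

-- Record the id and timestamp of every changed source row
create table customer_snapshot_log (
  custid     number,
  changed_at date
);

create or replace trigger trg_customer_log
after insert or update or delete on customer   -- hypothetical source table
for each row
begin
  insert into customer_snapshot_log (custid, changed_at)
  values (nvl(:new.custid, :old.custid), sysdate);
end;
/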
Data Extraction and Cleansing
• Extract data from existing operational and
legacy data
• Issues:
– Sources of data for the warehouse
– Data quality at the sources
– Merging different data sources
– Data Transformation
– How to propagate updates (on the sources) to the
warehouse
– Terabytes of data to be loaded
63
Scrubbing Data
• Sophisticated transformation
tools.
• Used to improve the quality of the data
• Clean data is vital for the
success of the warehouse
• Example
– Seshadri, Sheshadri, Sesadri,
Seshadri S., Srinivasan Seshadri,
etc. are the same person
64
Scrubbing Tools
• Apertus -- Enterprise/Integrator
• Vality -- IPE
• Postal Soft
65
Structuring/Modeling Issues
Data -- Heart of the Data Warehouse
• Heart of the data warehouse is the data itself!
• Single version of the truth
• Corporate memory
• Data is organized in a way that represents
business -- subject orientation
67
Data Warehouse Structure
• Subject Orientation -- customer, product,
policy, account etc... A subject may be
implemented as a set of related tables. E.g.,
customer may be five tables
68
Data Warehouse Structure
– base customer (1985-87): custid, from date, to date, name, phone, dob
– base customer (1988-90): custid, from date, to date, name, credit rating, employer
– customer activity (1986-89) -- monthly summary
– customer activity detail (1987-89): custid, activity date, amount, clerk id, order no
– customer activity detail (1990-91): custid, activity date, amount, line item no, order no
(Time is part of the key of each table)
69
Data Granularity in Warehouse
• Summarized data stored
– reduce storage costs
– reduce cpu usage
– increases performance since smaller number of
records to be processed
– design around traditional high level reporting
needs
– tradeoff with volume of data to be stored and
detailed usage of data
70
Granularity in Warehouse
• Can not answer some questions with
summarized data
– Did Anand call Seshadri last month? Not possible
to answer if total duration of calls by Anand over a
month is only maintained and individual call
details are not.
• Detailed data too voluminous
71
Granularity in Warehouse
• Tradeoff is to have dual level of granularity
– Store summary data on disks
• 95% of DSS processing done against this data
– Store detail on tapes
• 5% of DSS processing against this data
72
Vertical Partitioning
Frequently accessed: Acct. No, Name, Balance
Rarely accessed: Acct. No, Address, Date Opened, Interest Rate
The smaller, frequently accessed table means less I/O
73
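A minimal SQL sketch of this vertical split, assuming a hypothetical account table: frequently accessed columns go in one narrow table, the rest in another, both keyed by acct_no:

-- Frequently accessed columns: small rows, less I/O per scan
create table account_hot (
  acct_no integer primary key,
  name    varchar(60),
  balance decimal(12,2)
);

-- Rarely accessed columns, joined back by acct_no only when needed
create table account_cold (
  acct_no       integer primary key references account_hot(acct_no),
  address       varchar(200),
  date_opened   date,
  interest_rate decimal(5,2)
);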
Derived Data
• Introduction of derived (calculated) data may often help
• Have seen this in the context of dual levels of
granularity
• Can keep auxiliary views and indexes to speed
up query processing
74
Schema Design
• Database organization
– must look like business
– must be recognizable by business user
– approachable by business user
– Must be simple
• Schema Types
– Star Schema
– Fact Constellation Schema
– Snowflake schema
75
Dimension Tables
• Dimension tables
– Define business in terms already familiar to users
– Wide rows with lots of descriptive text
– Small tables (about a million rows)
– Joined to fact table by a foreign key
– heavily indexed
– typical dimensions
• time periods, geographic region (markets, cities),
products, customers, salesperson, etc.
76
Fact Table
• Central table
– mostly raw numeric items
– narrow rows, a few columns at most
– large number of rows (millions to a billion)
– Access via dimensions
77
Star Schema
• A single fact table and for each dimension one
dimension table
• Does not capture hierarchies directly
(Star schema diagram: a central fact table with columns date, custno, prodno, cityname, ...; dimension tables time, product, customer, city)
78
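A minimal star-schema sketch in SQL, using hypothetical table and column names consistent with the diagram (time, product, customer, city dimensions around a sales fact table):

create table time_dim     (date_key integer primary key, cal_date date, month integer, year integer);
create table product_dim  (prodno   integer primary key, prod_name varchar(60), category varchar(30));
create table customer_dim (custno   integer primary key, cust_name varchar(60), segment varchar(30));
create table city_dim     (cityname varchar(40) primary key, state varchar(40), region varchar(40));  -- hierarchy kept denormalized

-- Fact table: narrow rows, mostly numeric measures, one foreign key per dimension
create table sales_fact (
  date_key integer     references time_dim(date_key),
  custno   integer     references customer_dim(custno),
  prodno   integer     references product_dim(prodno),
  cityname varchar(40) references city_dim(cityname),
  dollars  decimal(12,2),
  units    integer
);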
Snowflake schema
• Represent dimensional hierarchy directly by
normalizing tables.
• Easy to maintain and saves storage
(Snowflake schema diagram: fact table with date, custno, prodno, cityname, ...; dimension tables time, product, customer, city, with city further normalized into a region table)
79
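A hedged sketch of snowflaking the same (assumed) city dimension: the city-to-region hierarchy is normalized into its own table, which the city dimension references:

-- Snowflake: normalize the city -> region hierarchy into a separate table
create table region_dim (
  region_id   integer primary key,
  region_name varchar(40)
);

create table city_dim_sf (
  cityname  varchar(40) primary key,
  state     varchar(40),
  region_id integer references region_dim(region_id)   -- hierarchy held by reference
);
-- The fact table still references the city dimension; queries on region need one extra join.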
Fact Constellation
• Fact Constellation
– Multiple fact tables that share many dimension
tables
– Booking and Checkout may share many dimension
tables in the hotel industry
(Diagram: Booking and Checkout fact tables sharing the dimensions Promotion, Hotels, Travel Agents, Room Type, Customer)
80
De-normalization
• Normalization in a data warehouse may lead
to lots of small tables
• Can lead to excessive I/O’s since many tables
have to be accessed
• De-normalization is the answer especially
since updates are rare
81
Creating Arrays
• Many times each occurrence of a sequence of data is in a
different physical location
• Beneficial to collect all occurrences together and store as
an array in a single row
• Makes sense only if there are a stable number of
occurrences which are accessed together
• In a data warehouse, such situations arise naturally due
to time based orientation
– can create an array by month
82
Selective Redundancy
• Description of an item can be stored
redundantly with order table -- most often
item description is also accessed with order
table
• Updates have to be careful
83
Partitioning
• Breaking data into several
physical units that can be
handled separately
• Not a question of whether to do
it in data warehouses but how to
do it
• Granularity and partitioning are
key to effective implementation
of a warehouse
84
Why Partition?
• Flexibility in managing data
• Smaller physical units allow
– easy restructuring
– free indexing
– sequential scans if needed
– easy reorganization
– easy recovery
– easy monitoring
85
Criterion for Partitioning
• Typically partitioned by
– date
– line of business
– geography
– organizational unit
– any combination of above
86
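For illustration of partitioning by date, a sketch in PostgreSQL's declarative partitioning syntax (one partition per year); the calls_fact table and its columns are assumptions, and this shows DBMS-level partitioning only:

-- Hedged sketch: partition the fact table by date so each year can be managed separately
create table calls_fact (
  call_date  date not null,
  custno     integer,
  duration_s integer,
  charge     decimal(10,2)
) partition by range (call_date);

create table calls_fact_1996 partition of calls_fact
  for values from ('1996-01-01') to ('1997-01-01');
create table calls_fact_1997 partition of calls_fact
  for values from ('1997-01-01') to ('1998-01-01');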
Where to Partition?
• Application level or DBMS level
• Makes sense to partition at application level
– Allows different definition for each year
• Important since warehouse spans many years and as
business evolves definition changes
– Allows data to be moved between processing
complexes easily
87
Data Warehouse vs. Data Marts
(Diagram: data ranges from individually structured, to departmentally structured, to organizationally structured data warehouse data; history, normalization, and detail increase toward the data warehouse)
89
Data Warehouse and Data Marts
Data Mart: OLAP, lightly summarized, departmentally structured
Data Warehouse: organizationally structured, atomic, detailed data
90
Characteristics of the Departmental
Data Mart
• OLAP
• Small
• Flexible
• Customized by Department
• Source is departmentally
structured data warehouse
91
Techniques for Creating
Departmental Data Mart
• OLAP (e.g., separate Finance, Sales, Marketing marts)
• Subset
• Summarized
• Superset
• Indexed
• Arrayed
92
Data Mart Centric
Data Sources
Data Marts
Data Warehouse
93
Problems with Data Mart Centric
Solution
94
True Warehouse
Data Sources
Data Warehouse
Data Marts
95
Query Processing
• Indexing
• Pre-computed views/aggregates
• SQL extensions
96
Indexing Techniques
• Exploiting indexes to reduce scanning of data
is of crucial importance
• Bitmap Indexes
• Join Indexes
• Other Issues
– Text indexing
– Parallelizing and sequencing of index builds and
incremental updates
97
Indexing Techniques
• Bitmap index:
– A collection of bitmaps -- one for each distinct
value of the column
– Each bitmap has N bits where N is the number of
rows in the table
– A bit corresponding to a value v for a row r is set if and only if r has the value v for the indexed attribute
98
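In Oracle-style syntax, creating bitmap indexes on low-cardinality columns of an assumed customer table (the table and columns are illustrative):

-- Hedged sketch: one bitmap per distinct value of each indexed column
create bitmap index customer_sex_bix     on customer (sex);
create bitmap index customer_married_bix on customer (married);

-- Low-cardinality predicates can then be answered by ANDing/ORing bitmaps
select count(*)
from   customer
where  sex = 'F' and married = 'Y';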
BitMap Indexes
• An alternative representation of RID-list
• Especially advantageous for low-cardinality domains
• Represent each row of a table by a bit and the table
as a bit vector
• There is a distinct bit vector Bv for each value v for
the domain
• Example: the attribute sex has values M and F. A
table of 100 million people needs 2 lists of 100
million bits
99
Bitmap Index
sex  flag   B(sex=F)  B(flag=Y)  B(F) AND B(Y)
M    Y         0          1            0
F    Y         1          1            1
F    N         1          0            0
M    N         0          0            0
F    Y         1          1            1
F    N         1          0            0
101
BitMap Indexes
• Comparison, join and aggregation operations are reduced
to bit arithmetic with dramatic improvement in processing
time
• Significant reduction in space and I/O (30:1)
• Adapted for higher cardinality domains as well.
• Compression (e.g., run-length encoding) exploited
• Products that support bitmaps: Model 204, TargetIndex
(Redbrick), IQ (Sybase), Oracle 7.3
102
Join Indexes
• Pre-computed joins
• A join index between a fact table and a dimension
table correlates a dimension tuple with the fact
tuples that have the same value on the common
dimensional attribute
– e.g., a join index on city dimension of calls fact table
– correlates for each city the calls (in the calls table)
from that city
103
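One way to picture a join index is as a precomputed table of (dimension value, fact row) pairs; this sketch materializes it with plain SQL on assumed city and calls tables (products such as Red Brick maintain comparable structures internally):

-- Hedged sketch: precompute which calls rows belong to each city
create table city_calls_jix as
select c.cityname, f.call_id
from   city_dim c
join   calls_fact f on f.cityname = c.cityname;

-- A query restricted to one city fetches only the matching fact rows
select f.*
from   city_calls_jix j
join   calls_fact f on f.call_id = j.call_id
where  j.cityname = 'Pune';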
Join Indexes
• Join indexes can also span multiple dimension
tables
– e.g., a join index on city and time dimension of
calls fact table
104
Star Join Processing
• Use join indexes to join dimension and fact table
(Diagram: the Calls fact table is joined step by step with the Time, Location and Plan dimensions using join indexes: C+T, then C+T+L, then C+T+L+P)
105
Optimized Star Join Processing
(Diagram: the virtual cross product of the Time, Location and Plan dimensions is joined with the Calls fact table)
106
Bitmapped Join Processing
(Diagram: bitmaps from the Time, Location and Plan dimensions each select rows of the Calls fact table; the bitmaps are ANDed to find the qualifying fact rows)
107
Intelligent Scan
• Piggyback multiple scans of a relation
(Redbrick)
– piggybacking also done if second scan starts a
little while after the first scan
108
Parallel Query Processing
• Three forms of parallelism
– Independent
– Pipelined
– Partitioned and “partition and replicate”
• Deterrents to parallelism
– startup
– communication
109
Parallel Query Processing
• Partitioned Data
– Parallel scans
– Yields I/O parallelism
• Parallel algorithms for relational operators
– Joins, Aggregates, Sort
• Parallel Utilities
– Load, Archive, Update, Parse, Checkpoint, Recovery
• Parallel Query Optimization
110
Pre-computed Aggregates
111
Pre-computed Aggregates
• Aggregated table can be maintained by the
– warehouse server
– middle tier
– client applications
• Pre-computed aggregates -- special case of
materialized views -- same questions and
issues remain
112
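A sketch of a pre-computed aggregate kept as a materialized view (syntax as in Oracle/PostgreSQL); the underlying sales_fact and time_dim tables are assumptions:

-- Hedged sketch: monthly sales by product, maintained as a materialized view
create materialized view sales_by_prod_month as
select t.year, t.month, s.prodno, sum(s.dollars) as tot_dollars
from   sales_fact s
join   time_dim t on t.date_key = s.date_key
group by t.year, t.month, s.prodno;

-- Queries at month/product granularity read this small table instead of the raw fact table
select prodno, tot_dollars
from   sales_by_prod_month
where  year = 1997 and month = 6;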
SQL Extensions
113
SQL Extensions
• Reporting features
– running total, cumulative totals
• Cube operator
– group by on all subsets of a set of attributes
(month,city)
– redundant scan and sorting of data can be avoided
114
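Two hedged sketches of these extensions in standard SQL (table and column names assumed); the Red Brick cume() form follows on the next slide. GROUP BY CUBE produces subtotals for every subset of (month, city) in one pass, and a window function gives a running total:

-- Cube operator: totals by (month, city), by month, by city, and the grand total
select month, city, sum(dollars) as dollars
from   sales_fact
group by cube (month, city);

-- Reporting feature: cumulative (running) total of sales by month
select month,
       sum(dollars)                            as dollars,
       sum(sum(dollars)) over (order by month) as cume_dollars
from   sales_fact
group by month;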
Red Brick has Extended set of
Aggregates
select month, dollars, cume(dollars) as run_dollars,
       weight, cume(weight) as run_weights
from sales, market, product, period t
where year = 1993
  and product like 'Columbian%'
  and city like 'San Fr%'
order by t.perkey
115
RISQL (Red Brick Systems) Extensions
• Aggregates • Calculating Row
– CUME Subtotals
– MOVINGAVG – BREAK BY
– MOVINGSUM • Sophisticated Date Time
– RANK Support
– TERTILE – DATEDIFF
– RATIOTOREPORT • Using SubQueries in
calculations
116
Using SubQueries in Calculations
select product, dollars as jun97_sales,
  (select sum(si.dollars)
   from market mi, product pi, period ti, sales si
   where pi.product = product.product
     and ti.year = period.year
     and mi.city = market.city) as total97_sales,
  100 * dollars /
  (select sum(si.dollars)
   from market mi, product pi, period ti, sales si
   where pi.product = product.product
     and ti.year = period.year
     and mi.city = market.city) as percent_of_yr
from market, product, period, sales
where year = 1997
  and month = 'June' and city like 'Ahmed%'
order by product;
117
Course Overview
• The course: what and
how
• 0. Introduction
• I. Data Warehousing
• II. Decision Support and
OLAP
• III. Data Mining
• IV. Looking Ahead
118
II. On-Line Analytical Processing (OLAP)
Making Decision
Support Possible
Limitations of SQL
“A Freshman in
Business needs a
Ph.D. in SQL”
-- Ralph Kimball
120
Typical OLAP Queries
• Write a multi-table join to compare sales for each product line
YTD this year vs. last year.
• Repeat the above process to find the top 5 product
contributors to margin.
• Repeat the above process to find the sales of a product line to
new vs. existing customers.
• Repeat the above process to find the customers that have had
negative sales growth.
121
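As an illustration of the first query, a sketch on assumed star-schema tables comparing year-to-date sales per product line for this year versus last year (prod_line is an assumed dimension attribute):

-- Hedged sketch: YTD sales by product line, 1997 vs. 1996
select p.prod_line,
       sum(case when t.year = 1997 then s.dollars else 0 end) as ytd_1997,
       sum(case when t.year = 1996 then s.dollars else 0 end) as ytd_1996
from   sales_fact s
join   product_dim p on p.prodno   = s.prodno
join   time_dim    t on t.date_key = s.date_key
where  t.year in (1996, 1997)
  and  t.month <= 6              -- "year to date" through June, for example
group by p.prod_line
order by p.prod_line;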
What Is OLAP?
• Online Analytical Processing - coined by E. F. Codd in a 1993 paper contracted by Arbor Software*
• Generally synonymous with earlier terms such as Decision Support, Business Intelligence, Executive Information System
• OLAP = Multidimensional Database
• MOLAP: Multidimensional OLAP (Arbor Essbase, Oracle
Express)
• ROLAP: Relational OLAP (Informix MetaCube, Microstrategy DSS
Agent)
* Reference:
http://www.arborsoft.com/essbase/wht_ppr/coddTOC.html
122
The OLAP Market
• Rapid growth in the enterprise market
– 1995: $700 Million
– 1997: $2.1 Billion
• Significant consolidation activity among major DBMS
vendors
– 10/94: Sybase acquires ExpressWay
– 7/95: Oracle acquires Express
– 11/95: Informix acquires Metacube
– 1/97: Arbor partners up with IBM
– 10/96: Microsoft acquires Panorama
• Result: OLAP shifted from small vertical niche to
mainstream DBMS category
123
Strengths of OLAP
• It is a powerful visualization paradigm
• It provides fast, interactive response times
• It is good for analyzing time series
• It can be useful to find some clusters and outliers
• Many vendors offer OLAP tools
124
OLAP Is FASMI
• Fast
• Analysis
• Shared
• Multidimensional
• Information
(Diagram: a multidimensional cube with Product, Region and Time dimensions; Region drills down to Office, Month drills down to Day)
126
Data Cube Lattice
• Cube lattice
– ABC
AB AC BC
A B C
none
• Can materialize some groupbys, compute others on demand
• Question: which groupbys to materialize?
• Question: what indices to create?
• Question: how to organize data (chunks, etc.)?
127
Visualizing Neighbors is simpler
128
A Visual Operation: Pivot (Rotate)
(Diagram: pivoting/rotating a sales cube of Product x Region x Month; e.g., Juice 10, Cola 47, Milk 30, Cream 12 across NY, LA, SF; the rotated view shows Household, Telecomm, Video, Audio across India, Far East, Europe)
Drill-Down / Roll Up
• Region
• Country
• State
• Location Address
• Sales Representative
(Drill-down moves toward the low-level details; roll-up moves back toward the higher-level summaries)
131
Nature of OLAP Analysis
• Aggregation -- (total sales,
percent-to-total)
• Comparison -- Budget vs.
Expenses
• Ranking -- Top 10, quartile
analysis
• Access to detailed and aggregate
data
• Complex criteria specification
• Visualization
132
Organizationally Structured Data
• Different Departments look at the same detailed
data in different ways. Without the detailed,
organizationally structured data as a foundation,
there is no reconcilability of data
marketing
sales
finance
manufacturing
133
Multidimensional Spreadsheets
• Analysts need spreadsheets that
support
– pivot tables (cross-tabs)
– drill-down and roll-up
– slice and dice
– sort
– selections
– derived attributes
• Popular in retail domain
134
OLAP - Data Cube
135
SQL Extensions
• Front-end tools require
– Extended Family of Aggregate Functions
• rank, median, mode
– Reporting Features
• running totals, cumulative totals
– Results of multiple group by
• total sales by month and total sales by product
– Data Cube
136
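A hedged sketch of the ranking extension using the standard RANK() window function (table and column names assumed): top five product lines by total sales.

-- Rank product lines by total dollars; keep the top 5 in an outer query
select prod_line, tot_dollars, sales_rank
from (
  select p.prod_line,
         sum(s.dollars) as tot_dollars,
         rank() over (order by sum(s.dollars) desc) as sales_rank
  from   sales_fact s
  join   product_dim p on p.prodno = s.prodno
  group by p.prod_line
) ranked
where sales_rank <= 5;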
Relational OLAP: 3 Tier DSS
Data Warehouse ROLAP Engine Decision Support Client
140
Metadata Repository .. 2
• Business data
– business terms and definitions
– ownership of data
– charging policies
• operational metadata
– data lineage: history of migrated data and sequence of
transformations applied
– currency of data: active, archived, purged
– monitoring information: warehouse usage statistics,
error reports, audit trails.
141
Recipe for a Successful
Warehouse
For a Successful Warehouse
From Larry Greenfield, http://pwp.starnetinc.com/larryg/index.html
• From day one establish that warehousing is a joint
user/builder project
• Establish that maintaining data quality will be an
ONGOING joint user/builder responsibility
• Train the users one step at a time
• Consider doing a high level corporate data model in
no more than three weeks
143
For a Successful Warehouse
• Look closely at the data extracting, cleaning, and
loading tools
• Implement a user accessible automated directory to
information stored in the warehouse
• Determine a plan to test the integrity of the data in
the warehouse
• From the start get warehouse users in the habit of
'testing' complex queries
144
For a Successful Warehouse
• Coordinate system roll-out with network
administration personnel
• When in a bind, ask others who have done the same
thing for advice
• Be on the lookout for small, but strategic, projects
• Market and sell your data warehousing systems
145
Data Warehouse Pitfalls
• You are going to spend much time extracting, cleaning, and
loading data
• Despite best efforts at project management, data
warehousing project scope will increase
• You are going to find problems with systems feeding the data
warehouse
• You will find the need to store data not being captured by any
existing system
• You will need to validate data not being validated by
transaction processing systems
146
Data Warehouse Pitfalls
• Some transaction processing systems feeding the
warehousing system will not contain detail
• Many warehouse end users will be trained and never or
seldom apply their training
• After end users receive query and report tools, requests for IS
written reports may increase
• Your warehouse users will develop conflicting business rules
• Large scale data warehousing can become an exercise in data
homogenizing
147
Data Warehouse Pitfalls
• 'Overhead' can eat up great amounts of disk space
• The time it takes to load the warehouse will expand to the
amount of the time in the available window... and then some
• Assigning security cannot be done with a transaction
processing system mindset
• You are building a HIGH maintenance system
• You will fail if you concentrate on resource optimization to the
neglect of project, data, and customer management issues
and an understanding of what adds value to the customer
148
DW and OLAP Research Issues
• Data cleaning
– focus on data inconsistencies, not schema differences
– data mining techniques
• Physical Design
– design of summary tables, partitions, indexes
– tradeoffs in use of different indexes
• Query processing
– selecting appropriate summary tables
– dynamic optimization with feedback
– acid test for query optimization: cost estimation, use of transformations,
search strategies
– partitioning query processing between OLAP server and backend server.
149
DW and OLAP Research Issues .. 2
• Warehouse Management
– detecting runaway queries
– resource management
– incremental refresh techniques
– computing summary tables during load
– failure recovery during load and refresh
– process management: scheduling queries, load and
refresh
– Query processing, caching
– use of workflow technology for process management
150
Products, References, Useful
Links
Reporting Tools
• Andyne Computing -- GQL
• Brio -- BrioQuery
• Business Objects -- Business Objects
• Cognos -- Impromptu
• Information Builders Inc. -- Focus for Windows
• Oracle -- Discoverer2000
• Platinum Technology -- SQL*Assist, ProReports
• PowerSoft -- InfoMaker
• SAS Institute -- SAS/Assist
• Software AG -- Esperant
• Sterling Software -- VISION:Data
152
OLAP and Executive Information
Systems
• Andyne Computing -- Pablo
• Microsoft -- Plato
153
Other Warehouse Related Products
• Data extract, clean, transform, refresh
– CA-Ingres replicator
– Carleton Passport
– Prism Warehouse Manager
– SAS Access
– Sybase Replication Server
– Platinum Inforefiner, Infopump
154
Extraction and Transformation Tools
• Carleton Corporation -- Passport
• Evolutionary Technologies Inc. -- Extract
• Informatica -- OpenBridge
• Information Builders Inc. -- EDA Copy Manager
• Platinum Technology -- InfoRefiner
• Prism Solutions -- Prism Warehouse Manager
• Red Brick Systems -- DecisionScape Formation
155
Scrubbing Tools
• Apertus -- Enterprise/Integrator
• Vality -- IPE
• Postal Soft
156
Warehouse Products
• Computer Associates -- CA-Ingres
• Hewlett-Packard -- Allbase/SQL
• Informix -- Informix, Informix XPS
• Microsoft -- SQL Server
• Oracle -- Oracle7, Oracle Parallel Server
• Red Brick -- Red Brick Warehouse
• SAS Institute -- SAS
• Software AG -- ADABAS
• Sybase -- SQL Server, IQ, MPP
157
Warehouse Server Products
• Oracle 8
• Informix
– Online Dynamic Server
– XPS --Extended Parallel Server
– Universal Server for object relational applications
• Sybase
– Adaptive Server 11.5
– Sybase MPP
– Sybase IQ
158
Warehouse Server Products
• Red Brick Warehouse
• Tandem Nonstop
• IBM
– DB2 MVS
– Universal Server
– DB2 400
• Teradata
159
Other Warehouse Related Products
• Connectivity to Sources
– Apertus
– Information Builders EDA/SQL
– Platinum Infohub
– SAS Connect
– IBM Data Joiner
– Oracle Open Connect
– Informix Express Gateway
160
Other Warehouse Related Products
• Query/Reporting Environments
– Brio/Query
– Cognos Impromptu
– Informix Viewpoint
– CA Visual Express
– Business Objects
– Platinum Forest and Trees
161
4GL's, GUI Builders, and PC Databases
• Information Builders -- Focus
• Lotus -- Approach
• Microsoft -- Access, Visual Basic
• MITI -- SQR/Workbench
• PowerSoft -- PowerBuilder
• SAS Institute -- SAS/AF
162
Data Mining Products
• DataMind -- neurOagent
• Information Discovery -- IDIS
• SAS Institute -- SAS/Neuronets
163
Data Warehouse
• W.H. Inmon, Building the Data Warehouse, Second
Edition, John Wiley and Sons, 1996
• W.H. Inmon, J. D. Welch, Katherine L. Glassey,
Managing the Data Warehouse, John Wiley and Sons,
1997
• Barry Devlin, Data Warehouse from Architecture to
Implementation, Addison Wesley Longman, Inc 1997
164
Data Warehouse
• W.H. Inmon, John A. Zachman, Jonathan G. Geiger,
Data Stores Data Warehousing and the Zachman
Framework, McGraw Hill Series on Data
Warehousing and Data Management, 1997
• Ralph Kimball, The Data Warehouse Toolkit, John
Wiley and Sons, 1996
165
OLAP and DSS
• Erik Thomsen, OLAP Solutions, John Wiley and Sons
1997
• Microsoft TechEd Transparencies from Microsoft
TechEd 98
• Essbase Product Literature
• Oracle Express Product Literature
• Microsoft Plato Web Site
• Microstrategy Web Site
166
Data Mining
• Michael J.A. Berry and Gordon Linoff, Data Mining
Techniques, John Wiley and Sons 1997
• Peter Adriaans and Dolf Zantinge, Data Mining,
Addison Wesley Longman Ltd. 1996
• KDD Conferences
167
Other Tutorials
• Donovan Schneider, Data Warehousing Tutorial, Tutorial at
International Conference for Management of Data (SIGMOD
1996) and International Conference on Very Large Data Bases
97
• Umeshwar Dayal and Surajit Chaudhuri, Data Warehousing
Tutorial at International Conference on Very Large Data Bases
1996
• Anand Deshpande and S. Seshadri, Tutorial on
Datawarehousing and Data Mining, CSI-97
168
Useful URLs
• Ralph Kimball’s home page
– http://www.rkimball.com
• Larry Greenfield’s Data Warehouse Information
Center
– http://pwp.starnetinc.com/larryg/
• Data Warehousing Institute
– http://www.dw-institute.com/
• OLAP Council
– http://www.olapcouncil.com/
169