DWBI Unit-1
The major task of online operational database systems is to perform online transaction and query
processing. These systems are called online transaction processing (OLTP) systems. They cover
most of the day-to-day operations of an organization, such as purchasing, inventory, banking,
payroll, and accounting.
Reference: Data Mining – Concepts and Techniques – 3rd Edition, Jiawei Han, Micheline Kamber & Jian
Pei-Elsevier
Users and system orientation: An OLTP system is customer-oriented and is used for transaction and
query processing by clerks, clients, and information technology professionals. An OLAP system is
market-oriented and is used for data analysis by knowledge workers, including managers, executives,
and analysts.
Data contents: An OLTP system manages current data that, typically, are too detailed to be easily used
for decision making. An OLAP system manages large amounts of historic data, provides facilities for
summarization and aggregation, and stores and manages information at different levels of granularity.
Database design: An OLTP system usually adopts an entity-relationship (ER) data model and an
application-oriented database design. An OLAP system typically adopts either a star or a snowflake
model and a subject-oriented database design.
View: An OLTP system focuses mainly on the current data within an enterprise or department,
without referring to historic data or data in different organizations. In contrast, an OLAP system often
spans multiple versions of a database schema, due to the evolutionary process of an organization.
Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions.
Such a system requires concurrency control and recovery mechanisms. However, accesses to OLAP
systems are mostly read-only operations (because most data warehouses store historic rather than
up-to-date information), although many could be complex queries.
1. The bottom tier is a warehouse database server that is almost always a relational database
system. Back-end tools and utilities are used to feed data into the bottom tier from operational
databases or other external sources.
Data Warehouse Models: Enterprise Warehouse, Data Mart and Virtual Warehouse
From the architecture point of view, there are three data warehouse models: the enterprise warehouse,
the data mart, and the virtual warehouse.
Enterprise warehouse: An enterprise warehouse collects all of the information about subjects
spanning the entire organization. It provides corporate-wide data integration, usually from one or more
operational systems or external information providers, and is cross-functional in scope. It typically
contains detailed data as well as summarized data, and can range in size from a few gigabytes to
hundreds of gigabytes, terabytes, or beyond. An enterprise data warehouse may be implemented on
traditional mainframes, computer superservers, or parallel architecture platforms. It requires extensive
business modeling and may take years to design and build.
Data mart: A data mart contains a subset of corporate-wide data that is of value to a specific group of
users. The scope is confined to specific selected subjects. For example, a marketing data mart may
confine its subjects to customer, item, and sales. The data contained in data marts tend to be
summarized.
Data marts are usually implemented on low-cost departmental servers that are Unix/Linux or
Windows based. The implementation cycle of a data mart is more likely to be measured in weeks
rather than months or years. However, it may involve complex integration in the long run if its design
and planning were not enterprise-wide.
Data marts are of two types:
1. Independent data mart
2. Dependent data mart
1. Independent data marts are sourced from data captured from one or more operational systems
or external information providers, or from data generated locally within a particular department
or geographic area.
2. Dependent data marts are sourced directly from enterprise data warehouses.
Virtual warehouse: A virtual warehouse is a set of views over operational databases. For efficient
query processing, only some of the possible summary views may be materialized. A virtual warehouse
is easy to build but requires excess capacity on operational database servers.
“What are the pros and cons of the top-down and bottom-up approaches to data warehouse
development?”
A recommended method for the development of data warehouse systems is to implement the
warehouse in an incremental and evolutionary manner. First, a high-level corporate data model is
defined within a reasonably short period (such as one or two months) that provides a corporate-wide,
consistent, integrated view of data among different subjects and potential usages. Second, independent
data marts can be implemented in parallel with the enterprise warehouse based on the same corporate
data model set noted before. Third, distributed data marts can be constructed to integrate different data
marts via hub servers. Finally, a multitier data warehouse is constructed where the enterprise
warehouse is the sole custodian of all warehouse data, which is then distributed to the various
dependent data marts.
Metadata Repository
Metadata are data about data. When used in a data warehouse, metadata are the data that define
warehouse objects. Metadata are created for the data names and definitions of the given warehouse.
Data warehouses and OLAP tools are based on a multidimensional data model. This model views
data in the form of a data cube.
“What is a data cube?” A data cube allows data to be modeled and viewed in multiple dimensions. It
is defined by dimensions and facts.
In general terms, dimensions are the perspectives or entities with respect to which an organization
wants to keep records. For example, AllElectronics may create a sales data warehouse in order to keep
records of the store’s sales with respect to the dimensions time, item, branch, and location. Each
dimension may have a table associated with it, called a dimension table, which further describes the
dimension. For example, a dimension table for item may contain the attributes item name, brand, and
type.
A multidimensional data model is typically organized around a central theme, such as sales. This
theme is represented by a fact table. Facts are numeric measures. Examples of facts for a sales data
warehouse include dollars_sold (sales amount in dollars), units_sold (number of units sold), and
amount_budgeted. The fact table contains the names of the facts, or measures, as well as keys to each
of the related dimension tables.
In 2-D representation, the sales for Vancouver are shown with respect to the time dimension
(organized in quarters) and the item dimension (organized according to the types of items sold). The
fact or measure displayed is dollars_sold (in thousands). Now, suppose that we would like to view the
sales data with a third dimension. For instance, suppose we would like to view the data according to
time and item, as well as location.
Each of these cubes is often referred to as a cuboid. Given a set of dimensions, we can generate a cuboid for each
of the possible subsets of the given dimensions. The result would form a lattice of cuboids, each
showing the data at a different level of summarization, or group-by. The lattice of cuboids is then
referred to as a data cube.
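The lattice described above can be enumerated directly: every subset of the dimension set yields one cuboid, so n dimensions give 2^n cuboids. A minimal Python sketch, using the four dimensions from the sales example (the function name and structure are illustrative, not from the text):

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """Enumerate every cuboid (group-by subset) of the given dimensions.

    The empty subset is the apex cuboid (total aggregation); the full
    set is the base cuboid (lowest level of summarization).
    """
    lattice = []
    for k in range(len(dimensions) + 1):
        for subset in combinations(dimensions, k):
            lattice.append(subset)
    return lattice

dims = ["time", "item", "branch", "location"]
cuboids = cuboid_lattice(dims)
print(len(cuboids))  # 2^4 = 16 cuboids in the lattice
```

The count grows exponentially with the number of dimensions, which is why full materialization of the lattice is often impractical.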
Stars, Snowflakes, and Fact Constellations: Schemas for Multidimensional Data Models
A data warehouse, however, requires a concise, subject-oriented schema that facilitates online data
analysis.
The most popular data model for a data warehouse is a multidimensional model, which can exist in
the form of a star schema, a snowflake schema, or a fact constellation schema.
Star schema: The most common modeling paradigm is the star schema, in which the data warehouse
contains (1) a large central table (fact table) containing the bulk of the data, with no redundancy, and
(2) a set of smaller attendant tables (dimension tables), one for each dimension.
Example 4.1 Star schema. A star schema for AllElectronics sales is shown in Figure 4.6. Sales are
considered along four dimensions: time, item, branch, and location.
Notice that in the star schema, each dimension is represented by only one table, and each table
contains a set of attributes. For example, the location dimension table contains the attribute set
{location_key, street, city, province_or_state, country}. This constraint may introduce some
redundancy. For example, “Urbana” and “Chicago” are both cities in the state of Illinois, USA. Entries
for such cities in the location dimension table will create redundancy among the attributes
province_or_state and country; that is, (..., Urbana, IL, USA) and (..., Chicago, IL, USA). Moreover,
the attributes within a dimension table may form either a hierarchy (total order) or a lattice (partial
order).
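The star schema above can be sketched as SQL tables; the following uses an in-memory SQLite database via Python. Table and column names follow the text's AllElectronics example, but the sample rows and keys are invented for illustration:

```python
import sqlite3

# In-memory database: a sketch of the AllElectronics star schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE location (          -- dimension table (denormalized)
    location_key      INTEGER PRIMARY KEY,
    street            TEXT,
    city              TEXT,
    province_or_state TEXT,
    country           TEXT
);
CREATE TABLE item (
    item_key  INTEGER PRIMARY KEY,
    item_name TEXT,
    brand     TEXT,
    type      TEXT
);
CREATE TABLE sales (             -- central fact table
    time_key     INTEGER,
    item_key     INTEGER REFERENCES item(item_key),
    location_key INTEGER REFERENCES location(location_key),
    dollars_sold REAL,
    units_sold   INTEGER
);
""")
# The redundancy noted above: two Illinois cities repeat state and country.
conn.executemany(
    "INSERT INTO location VALUES (?, ?, ?, ?, ?)",
    [(1, "Main St", "Urbana", "IL", "USA"),
     (2, "State St", "Chicago", "IL", "USA")],
)
rows = conn.execute(
    "SELECT province_or_state, country FROM location"
).fetchall()
print(rows)  # both rows repeat ('IL', 'USA')
```

Note how the single denormalized location table stores ('IL', 'USA') once per city row, which is exactly the redundancy the snowflake schema removes.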
Snowflake schema: The snowflake schema is a variant of the star schema model, where some
dimension tables are normalized, thereby further splitting the data into additional tables.
The major differences between the snowflake and star schema models are:
1. Dimension tables of the snowflake model may be kept in normalized form to reduce
redundancies.
2. The normalized tables are easier to maintain.
3. Normalization saves storage space.
Disadvantages
1. The snowflake structure can reduce the effectiveness of browsing.
2. More joins are needed to execute a query.
3. System performance may be adversely impacted.
Hence, although the snowflake schema reduces redundancy, it is not as popular as the star schema in
data warehouse design.
In data warehousing, there is a distinction between a data warehouse and a data mart. A data
warehouse collects information about subjects that span the entire organization, such as customers,
items, sales, assets, and personnel, and thus its scope is enterprise-wide.
Measures can be organized into three categories—distributive, algebraic, and holistic— based on the
kind of aggregate functions used.
Most large data cube applications require efficient computation of distributive and algebraic measures.
Many efficient techniques for this exist. In contrast, it is difficult to compute holistic measures
efficiently. Efficient techniques to approximate the computation of some holistic measures, however,
do exist.
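The three categories can be demonstrated with plain Python (the partition values here are made up for illustration). A distributive measure like sum can be computed from per-partition sums; an algebraic measure like average can be derived from a fixed number of distributive measures (sum and count); a holistic measure like median cannot, in general, be reconstructed from partition-level results:

```python
import statistics

part1, part2 = [1, 2, 3], [4, 5, 6, 7, 8]
data = part1 + part2

# Distributive: the sum of partial sums equals the sum of the whole.
assert sum([sum(part1), sum(part2)]) == sum(data)

# Algebraic: average derives from two distributive measures (sum, count).
partials = [(sum(p), len(p)) for p in (part1, part2)]
total, count = map(sum, zip(*partials))
assert total / count == sum(data) / len(data)

# Holistic: the median of partition medians is NOT the overall median.
median_of_medians = statistics.median(
    [statistics.median(part1), statistics.median(part2)]
)
true_median = statistics.median(data)
print(median_of_medians, true_median)  # 4.0 vs 4.5 -- they differ
```

This is precisely why distributive and algebraic measures can be computed efficiently by combining partition-level aggregates, while holistic measures resist that strategy.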
Roll-up: The roll-up operation (also called the drill-up operation by some vendors) performs
aggregation on a data cube, either by climbing up a concept hierarchy for a dimension or by dimension
reduction.
Consider a location hierarchy defined as the total order "street < city < province_or_state < country."
The roll-up operation aggregates the data by ascending the location hierarchy from the level of city to
the level of country.
When roll-up is performed by dimension reduction, one or more dimensions are removed from the
given cube. For example, consider a sales data cube containing only the location and time dimensions.
Roll-up may be performed by removing, say, the time dimension, resulting in an aggregation of the
total sales by location, rather than by location and by time.
Drill-down: Drill-down is the reverse of roll-up. It navigates from less detailed data to more detailed
data. Drill-down can be realized by either stepping down a concept hierarchy for a dimension or
introducing additional dimensions. Consider a concept hierarchy for time defined as "day < month <
quarter < year." Drill-down occurs by descending the time hierarchy from the level of quarter to the
more detailed level of month. Because a drill-down adds more detail to the given data, it can also be
performed by adding new dimensions to a cube.
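Both variants of roll-up can be sketched in a few lines of Python over base data kept at the (city, quarter) level; the figures and the city-to-country map are invented for illustration. Drill-down is simply the reverse direction: it requires data already stored at the finer level.

```python
from collections import defaultdict

# dollars_sold at the (city, quarter) level -- illustrative numbers only.
sales = {
    ("Vancouver", "Q1"): 1000, ("Vancouver", "Q2"): 1200,
    ("Toronto",   "Q1"):  800, ("Toronto",   "Q2"):  900,
    ("Chicago",   "Q1"):  700, ("Chicago",   "Q2"):  600,
}
city_to_country = {"Vancouver": "Canada", "Toronto": "Canada",
                   "Chicago": "USA"}

# Roll-up by climbing the location hierarchy: city -> country.
by_country = defaultdict(int)
for (city, quarter), amount in sales.items():
    by_country[(city_to_country[city], quarter)] += amount
print(dict(by_country))   # e.g. ('Canada', 'Q1') -> 1800

# Roll-up by dimension reduction: drop the time dimension entirely.
by_location_only = defaultdict(int)
for (city, _quarter), amount in sales.items():
    by_location_only[city] += amount
print(dict(by_location_only))
```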
Slice and dice: The slice operation performs a selection on one dimension of the given cube, resulting
in a subcube. Figure 4.12 shows a slice operation where the sales data are selected from the central
cube for the dimension time using the criterion time = “Q1.” The dice operation defines a subcube by
performing a selection on two or more dimensions. Figure 4.12 shows a dice operation on the central
cube based on the following selection criteria that involve three dimensions: (location = “Toronto” or
“Vancouver”) and (time = “Q1” or “Q2”) and (item = “home entertainment” or “computer”).
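Treating the cube's cells as records, slice and dice reduce to filtering on one versus several dimensions. A minimal sketch with invented sample cells, using the selection criteria quoted above:

```python
# Cells of a small cube, one record per (location, time, item) combination.
cube = [
    {"location": "Toronto",   "time": "Q1", "item": "computer",           "dollars_sold": 500},
    {"location": "Toronto",   "time": "Q2", "item": "home entertainment", "dollars_sold": 400},
    {"location": "Vancouver", "time": "Q1", "item": "computer",           "dollars_sold": 600},
    {"location": "New York",  "time": "Q1", "item": "phone",              "dollars_sold": 300},
]

# Slice: a selection on ONE dimension -> time = "Q1".
slice_q1 = [c for c in cube if c["time"] == "Q1"]

# Dice: a selection on TWO OR MORE dimensions at once.
dice = [c for c in cube
        if c["location"] in ("Toronto", "Vancouver")
        and c["time"] in ("Q1", "Q2")
        and c["item"] in ("home entertainment", "computer")]

print(len(slice_q1), len(dice))
```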
Pivot (rotate): Pivot (also called rotate) is a visualization operation that rotates the data axes in view
to provide an alternative data presentation. Figure 4.12 shows a pivot operation where the item and
location axes in a 2-D slice are rotated.
OLAP offers analytical modeling capabilities, including a calculation engine for deriving ratios,
variance, and so on, and for computing measures across multiple dimensions. OLAP also supports
functional models for forecasting, trend analysis, and statistical analysis.
These are intermediate servers that stand between a relational back-end server and client front-end tools.
They use a relational or extended-relational DBMS to store and manage warehouse data, and OLAP middleware
to provide missing pieces.
ROLAP servers include optimizations for each DBMS back end, implementation of aggregation navigation logic, and
additional tools and services.
ROLAP technology tends to have higher scalability than MOLAP technology.
ROLAP systems work primarily from the data that resides in a relational database, where the base data and
dimension tables are stored as relational tables. This model permits the multidimensional analysis of data.
This technique relies on manipulating the data stored in the relational database to give the appearance of
traditional OLAP's slicing and dicing functionality. In essence, each slicing and dicing action is equivalent
to adding a "WHERE" clause to the SQL statement.
ROLAP Architecture
o Database server.
o ROLAP server.
o Front-end tool.
Some products in this segment have supported reliable SQL engines to handle the complexity of multidimensional
analysis. This includes creating multiple SQL statements to handle user requests, being 'RDBMS' aware, and
being capable of generating SQL statements tuned to the optimizer of the DBMS engine.
Advantages
The data size limitation of ROLAP technology depends on the data size of the underlying RDBMS, so ROLAP
itself does not restrict the data amount.
An RDBMS already comes with a lot of features, so ROLAP technologies (which work on top of the RDBMS) can
leverage these functionalities.
Disadvantages
Performance can be slow: each ROLAP report is a SQL query (or multiple SQL queries) against the relational
database, and query time can be prolonged if the underlying data size is large.
Limited by SQL functionalities: ROLAP technology relies on generating SQL statements to query the relational
database, and SQL statements do not suit all needs.
One of the significant distinctions between MOLAP and ROLAP is that in MOLAP, data are summarized and stored
in an optimized format in a multidimensional cube, instead of in a relational database. In the MOLAP model,
data are structured into proprietary formats according to clients' reporting requirements, with the
calculations pre-generated on the cubes.
MOLAP Architecture
o Database server.
o MOLAP server.
o Front-end tool.
This can be very useful for organizations with performance-sensitive multidimensional analysis requirements
that have built, or are in the process of building, a data warehouse architecture that contains multiple
subject areas.
An example would be the creation of sales data measured by several dimensions (e.g., product and sales region)
to be stored and maintained in a persistent structure. This structure is provided to reduce the application
overhead of performing calculations and building aggregations during initialization. These structures can be
automatically refreshed at predetermined intervals set by an administrator.
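The core MOLAP idea of pre-generating every aggregation when the structure is built can be sketched in pure Python; the product/region facts are invented, and '*' is used here as a hypothetical wildcard meaning "all values of this dimension":

```python
from itertools import product
from collections import defaultdict

# Base facts: (product, region) -> units_sold. Illustrative numbers only.
facts = {("tv", "east"): 10, ("tv", "west"): 7,
         ("pc", "east"): 4,  ("pc", "west"): 9}

# Build the cube ONCE, pre-aggregating every combination; '*' stands
# for "all values of this dimension".
cube = defaultdict(int)
for (prod, region), units in facts.items():
    for p, r in product((prod, "*"), (region, "*")):
        cube[(p, r)] += units

# Queries are now constant-time lookups, not scans.
print(cube[("tv", "*")])   # 17 (all regions)
print(cube[("*", "*")])    # 30 (grand total)
```

The advantages and disadvantages listed below both follow from this design: lookups are instant because everything is precomputed, but the precomputation limits how much data the cube can hold.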
Advantages
1. Excellent Performance: A MOLAP cube is built for fast information retrieval, and is optimal for slicing and
dicing operations.
2. Can perform complex calculations: all calculations are pre-generated when the cube is created. Hence,
complex calculations are not only possible, they also return quickly.
Disadvantages
1. Limited in the amount of information it can handle: because all calculations are performed when the cube is
built, the cube itself cannot contain a large amount of data.
2. Requires additional investment: cube technology is generally proprietary and does not already exist in the
organization. Therefore, to adopt MOLAP technology, additional investments in human and capital resources are
likely needed.
Advantages of HOLAP
1. HOLAP provides the benefits of both MOLAP and ROLAP.
2. It provides fast access at all levels of aggregation.
3. HOLAP balances the disk space requirement, as it stores only the aggregate information on the OLAP
server while the detail records remain in the relational database, so no duplicate copy of the detail
records is maintained.
Disadvantages of HOLAP
1. HOLAP architecture is very complicated because it supports both MOLAP and ROLAP servers.