Data Mining: What Is Data Mining?: Oracle
Overview
Generally, data mining (sometimes called data or knowledge discovery) is the process of
analyzing data from different perspectives and summarizing it into useful information -
information that can be used to increase revenue, cut costs, or both. Data mining software is one
of a number of analytical tools for analyzing data. It allows users to analyze data from many
different dimensions or angles, categorize it, and summarize the relationships identified.
Technically, data mining is the process of finding correlations or patterns among dozens of fields
in large relational databases.
Continuous Innovation
Although data mining is a relatively new term, the technology is not. Companies have used
powerful computers to sift through volumes of supermarket scanner data and analyze market
research reports for years. However, continuous innovations in computer processing power, disk
storage, and statistical software are dramatically increasing the accuracy of analysis while
driving down the cost.
Example
For example, one Midwest grocery chain used the data mining capacity of Oracle software to
analyze local buying patterns. They discovered that when men bought diapers on Thursdays and
Saturdays, they also tended to buy beer. Further analysis showed that these shoppers typically
did their weekly grocery shopping on Saturdays. On Thursdays, however, they only bought a few
items. The retailer concluded that they purchased the beer to have it available for the upcoming
weekend. The grocery chain could use this newly discovered information in various ways to
increase revenue. For example, they could move the beer display closer to the diaper display.
And, they could make sure beer and diapers were sold at full price on Thursdays.
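To make the mechanics of such a finding concrete, the sketch below shows the kind of co-occurrence counting that underlies it. The transactions and item names are invented for illustration; this is not Oracle's tooling, just a minimal Python example of measuring how often beer appears in baskets that contain diapers.

    # Minimal co-occurrence sketch on made-up market-basket data.
    from collections import Counter

    transactions = [
        {"day": "Thu", "items": {"diapers", "beer", "milk"}},
        {"day": "Thu", "items": {"diapers", "beer"}},
        {"day": "Sat", "items": {"diapers", "bread", "beer"}},
        {"day": "Thu", "items": {"bread", "milk"}},
    ]

    # Count how often diapers appear, and how often beer appears alongside them.
    counts = Counter()
    for t in transactions:
        if "diapers" in t["items"]:
            counts["diapers"] += 1
            if "beer" in t["items"]:
                counts["diapers_and_beer"] += 1

    confidence = counts["diapers_and_beer"] / counts["diapers"]
    print(f"P(beer | diapers) = {confidence:.2f}")  # strength of the association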
Data
Data are any facts, numbers, or text that can be processed by a computer. Today, organizations
are accumulating vast and growing amounts of data in different formats and different databases.
This includes:
operational or transactional data, such as sales, cost, inventory, payroll, and accounting
nonoperational data, such as industry sales, forecast data, and macroeconomic data
metadata (data about the data itself), such as logical database design or data dictionary
definitions
Information
The patterns, associations, or relationships among all this data can provide information. For
example, analysis of retail point of sale transaction data can yield information on which products
are selling and when.
Knowledge
Information can be converted into knowledge about historical patterns and future trends. For
example, summary information on retail supermarket sales can be analyzed in light of
promotional efforts to provide knowledge of consumer buying behavior. Thus, a manufacturer or
retailer could determine which items are most susceptible to promotional efforts.
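As an illustration of that last point, the short sketch below compares sales with and without promotion. It assumes the pandas library, and the items and figures are invented for illustration.

    # Comparing sales with and without promotion, per item (made-up numbers).
    import pandas as pd

    sales = pd.DataFrame({
        "item":     ["cola", "cola", "chips", "chips", "soup", "soup"],
        "promoted": [True,   False,  True,    False,   True,   False],
        "units":    [950,    400,    700,     650,     210,    200],
    })

    # Average units sold with and without promotion, per item.
    lift = sales.pivot_table(index="item", columns="promoted", values="units")
    lift["promo_lift"] = lift[True] / lift[False]
    print(lift.sort_values("promo_lift", ascending=False))
    # Items with the highest ratio are the most susceptible to promotional efforts.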
Data Warehouses
Dramatic advances in data capture, processing power, data transmission, and storage capabilities
are enabling organizations to integrate their various databases into data warehouses. Data
warehousing is defined as a process of centralized data management and retrieval. Data
warehousing, like data mining, is a relatively new term although the concept itself has been
around for years. Data warehousing represents an ideal vision of maintaining a central repository
of all organizational data. Centralization of data is needed to maximize user access and analysis.
Dramatic technological advances are making this vision a reality for many companies. And,
equally dramatic advances in data analysis software are allowing users to access this data freely.
The data analysis software is what supports data mining.
Data mining is primarily used today by companies with a strong consumer focus - retail,
financial, communication, and marketing organizations. It enables these companies to determine
relationships among "internal" factors such as price, product positioning, or staff skills, and
"external" factors such as economic indicators, competition, and customer demographics. And, it
enables them to determine the impact on sales, customer satisfaction, and corporate profits.
Finally, it enables them to "drill down" into summary information to view detail transactional
data.
With data mining, a retailer could use point-of-sale records of customer purchases to send
targeted promotions based on an individual's purchase history. By mining demographic data
from comment or warranty cards, the retailer could develop products and promotions to appeal to
specific customer segments.
For example, Blockbuster Entertainment mines its video rental history database to recommend
rentals to individual customers. American Express can suggest products to its cardholders based
on analysis of their monthly expenditures.
WalMart is pioneering massive data mining to transform its supplier relationships. WalMart
captures point-of-sale transactions from over 2,900 stores in 6 countries and continuously
transmits this data to its massive 7.5 terabyte Teradata data warehouse. WalMart allows more
than 3,500 suppliers to access data on their products and perform data analyses. These suppliers
use this data to identify customer buying patterns at the store display level. They use this
information to manage local store inventory and identify new merchandising opportunities. In
1995, WalMart computers processed over 1 million complex data queries.
The National Basketball Association (NBA) is exploring a data mining application that can be
used in conjunction with image recordings of basketball games. The Advanced Scout software
analyzes the movements of players to help coaches orchestrate plays and strategies. For example,
an analysis of the play-by-play sheet of the game played between the New York Knicks and the
Cleveland Cavaliers on January 6, 1995 reveals that when Mark Price played the Guard position,
John Williams attempted four jump shots and made each one! Advanced Scout not only finds
this pattern, but explains that it is interesting because it differs considerably from the average
shooting percentage of 49.30% for the Cavaliers during that game.
By using the NBA universal clock, a coach can automatically bring up the video clips showing
each of the jump shots attempted by Williams with Price on the floor, without needing to comb
through hours of video footage. Those clips show a very successful pick-and-roll play in which
Price draws the Knicks' defense and then finds Williams for an open jump shot.
While large-scale information technology has been evolving separate transaction and analytical
systems, data mining provides the link between the two. Data mining software analyzes
relationships and patterns in stored transaction data based on open-ended user queries. Several
types of analytical software are available: statistical, machine learning, and neural networks.
Generally, any of four types of relationships are sought:
Classes: Stored data is used to locate data in predetermined groups. For example, a
restaurant chain could mine customer purchase data to determine when customers visit
and what they typically order. This information could be used to increase traffic by
having daily specials.
Clusters: Data items are grouped according to logical relationships or consumer
preferences. For example, data can be mined to identify market segments or consumer
affinities.
Associations: Data can be mined to identify associations. The beer-and-diapers example
above is an example of associative mining.
Sequential patterns: Data is mined to anticipate behavior patterns and trends. For
example, an outdoor equipment retailer could predict the likelihood of a backpack being
purchased based on a consumer's purchase of sleeping bags and hiking shoes.
Different levels of analysis are available for finding these relationships (a short sketch of
the decision tree and nearest neighbor techniques follows this list):
Artificial neural networks: Non-linear predictive models that learn through training and
resemble biological neural networks in structure.
Decision trees: Tree-shaped structures that represent sets of decisions. These decisions
generate rules for the classification of a dataset. Specific decision tree methods include
Classification and Regression Trees (CART) and Chi Square Automatic Interaction
Detection (CHAID) . CART and CHAID are decision tree techniques used for
classification of a dataset. They provide a set of rules that you can apply to a new
(unclassified) dataset to predict which records will have a given outcome. CART
segments a dataset by creating 2-way splits while CHAID segments using chi square tests
to create multi-way splits. CART typically requires less data preparation than CHAID.
Nearest neighbor method: A technique that classifies each record in a dataset based on a
combination of the classes of the k record(s) most similar to it in a historical dataset
(where k ≥ 1). Sometimes called the k-nearest neighbor technique.
Rule induction: The extraction of useful if-then rules from data based on statistical
significance.
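The sketch below illustrates two of the techniques named above, a CART-style decision tree and the k-nearest neighbor method, on a synthetic dataset. It assumes the scikit-learn library and is only a minimal illustration, not any vendor's implementation.

    # Decision tree (CART-style 2-way splits) vs. k-nearest neighbor on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # CART: recursive 2-way splits chosen to best separate the classes.
    tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

    # k-nearest neighbor: classify each record by the classes of the k most
    # similar historical records (k >= 1).
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

    print("decision tree accuracy:", tree.score(X_test, y_test))
    print("k-nearest neighbor accuracy:", knn.score(X_test, y_test))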
Today, data mining applications are available on systems of all sizes, from mainframe and
client/server platforms to PCs. System prices range from several thousand dollars for the smallest
applications up to $1 million a terabyte for the largest. Enterprise-wide applications generally
range in size from 10 gigabytes to over 11 terabytes. NCR has the capacity to deliver
applications exceeding 100 terabytes. There are two critical technological drivers:
Size of the database: the more data being processed and maintained, the more powerful
the system required.
Query complexity: the more complex the queries and the greater the number of queries
being processed, the more powerful the system required.
Relational database storage and management technology is adequate for many data mining
applications less than 50 gigabytes. However, this infrastructure needs to be significantly
enhanced to support larger applications. Some vendors have added extensive indexing
capabilities to improve query performance. Others use new hardware architectures such as
Massively Parallel Processors (MPP) to achieve order-of-magnitude improvements in query
time. For example, MPP systems from NCR link hundreds of high-speed Pentium processors to
achieve performance levels exceeding those of the largest supercomputers.
Overview
Data mining, the extraction of hidden predictive information from large databases, is a powerful
new technology with great potential to help companies focus on the most important information
in their data warehouses. Data mining tools predict future trends and behaviors, allowing
businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses
offered by data mining move beyond the analyses of past events provided by retrospective tools
typical of decision support systems. Data mining tools can answer business questions that
traditionally were too time consuming to resolve. They scour databases for hidden patterns,
finding predictive information that experts may miss because it lies outside their expectations.
Most companies already collect and refine massive quantities of data. Data mining techniques
can be implemented rapidly on existing software and hardware platforms to enhance the value of
existing information resources, and can be integrated with new products and systems as they are
brought on-line. When implemented on high performance client/server or parallel processing
computers, data mining tools can analyze massive databases to deliver answers to questions such
as, "Which clients are most likely to respond to my next promotional mailing, and why?"
This white paper provides an introduction to the basic technologies of data mining, examples of
profitable applications that illustrate its relevance to today’s business environment, and a basic
description of how data warehouse architectures can evolve to deliver the value of data mining to
end users.
Data mining techniques are the result of a long process of research and product development.
This evolution began when business data was first stored on computers, continued with
improvements in data access, and more recently, generated technologies that allow users to
navigate through their data in real time. Data mining takes this evolutionary process beyond
retrospective data access and navigation to prospective and proactive information delivery. Data
mining is ready for application in the business community because it is supported by three
technologies that are now sufficiently mature:
Massive data collection
Powerful multiprocessor computers
Data mining algorithms
Commercial databases are growing at unprecedented rates. A recent META Group survey of data
warehouse projects found that 19% of respondents are beyond the 50 gigabyte level, while 59%
expect to be there by second quarter of 1996.[1] In some industries, such as retail, these numbers
can be much larger. The accompanying need for improved computational engines can now be
met in a cost-effective manner with parallel multiprocessor computer technology. Data mining
algorithms embody techniques that have existed for at least 10 years, but have only recently been
implemented as mature, reliable, understandable tools that consistently outperform older
statistical methods.
In the evolution from business data to business information, each new step has built upon the
previous one. For example, dynamic data access is critical for drill-through in data navigation
applications, and the ability to store large databases is critical to data mining. From the user’s
point of view, the four steps listed in Table 1 were revolutionary because they allowed new
business questions to be answered accurately and quickly.
Data Collection "What was my total Computers, tapes, IBM, CDC Retrospective,
revenue in the last five disks static data
(1960s) years?" delivery
Data Access "What were unit sales in Relational databases Oracle, Retrospective,
New England last (RDBMS), Sybase, dynamic data
March?" Structured Query Informix, delivery at
(1980s) Language (SQL), IBM, record level
ODBC Microsoft
(1990s)
The core components of data mining technology have been under development for decades, in
research areas such as statistics, artificial intelligence, and machine learning. Today, the maturity
of these techniques, coupled with high-performance relational database engines and broad data
integration efforts, make these technologies practical for current data warehouse environments.
Data mining derives its name from the similarities between searching for valuable business
information in a large database — for example, finding linked products in gigabytes of store
scanner data — and mining a mountain for a vein of valuable ore. Both processes require either
sifting through an immense amount of material, or intelligently probing it to find exactly where
the value resides. Given databases of sufficient size and quality, data mining technology can
generate new business opportunities by providing these capabilities:
Automated prediction of trends and behaviors. Data mining automates the process of
finding predictive information in large databases. Questions that traditionally required
extensive hands-on analysis can now be answered directly from the data — quickly. A
typical example of a predictive problem is targeted marketing. Data mining uses data on
past promotional mailings to identify the targets most likely to maximize return on
investment in future mailings. Other predictive problems include forecasting bankruptcy
and other forms of default, and identifying segments of a population likely to respond
similarly to given events.
Automated discovery of previously unknown patterns. Data mining tools sweep
through databases and identify previously hidden patterns in one step. An example of
pattern discovery is the analysis of retail sales data to identify seemingly unrelated
products that are often purchased together. Other pattern discovery problems include
detecting fraudulent credit card transactions and identifying anomalous data that could
represent data entry keying errors.
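As a minimal illustration of the second capability, the sketch below flags values that lie far from the rest of a made-up list of transaction amounts, the kind of simple screen that can surface data entry keying errors. Production fraud detection is considerably more sophisticated.

    # Flag anomalous values (possible keying errors) with a simple z-score rule.
    import statistics

    amounts = [23.5, 19.9, 25.0, 21.7, 2350.0, 24.1, 22.8]  # 2350.0 looks like a keying error

    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)

    # Anything more than two standard deviations from the mean is set aside for review.
    outliers = [a for a in amounts if abs(a - mean) > 2 * stdev]
    print("flagged for review:", outliers)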
Data mining techniques can yield the benefits of automation on existing software and hardware
platforms, and can be implemented on new systems as existing platforms are upgraded and new
products developed. When data mining tools are implemented on high performance parallel
processing systems, they can analyze massive databases in minutes. Faster processing means that
users can automatically experiment with more models to understand complex data. High speed
makes it practical for users to analyze huge quantities of data. Larger databases, in turn, yield
improved predictions.
More columns. Analysts must often limit the number of variables they examine when
doing hands-on analysis due to time constraints. Yet variables that are discarded because
they seem unimportant may carry information about unknown patterns. High
performance data mining allows users to explore the full depth of a database, without
preselecting a subset of variables.
More rows. Larger samples yield lower estimation errors and variance, and allow users
to make inferences about small but important segments of a population.
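The short simulation below, which is not from the paper, illustrates the "more rows" point: the spread of an estimated average shrinks roughly as one over the square root of the sample size.

    # Spread of a sample mean vs. sample size (synthetic population, mean 100, sd 15).
    import random
    import statistics

    random.seed(0)

    def spread_of_sample_mean(n, trials=2000):
        """Standard deviation of the mean of n draws from a fixed population."""
        means = [statistics.mean(random.gauss(100, 15) for _ in range(n)) for _ in range(trials)]
        return statistics.stdev(means)

    for n in (10, 100, 1000):
        print(n, round(spread_of_sample_mean(n), 2))
    # Expect roughly 4.7, 1.5, and 0.5: a hundred times the rows, about a tenth the error.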
A recent Gartner Group Advanced Technology Research Note listed data mining and artificial
intelligence at the top of the five key technology areas that "will clearly have a major impact
across a wide range of industries within the next 3 to 5 years."[2] Gartner also listed parallel
architectures and data mining as two of the top 10 new technologies in which companies will
invest during the next 5 years. According to a recent Gartner HPC Research Note, "With the
rapid advance in data capture, transmission and storage, large-systems users will increasingly
need to implement new and innovative ways to mine the after-market value of their vast stores of
detail data, employing MPP [massively parallel processing] systems to create new sources of
business advantage (0.9 probability)."[3]
The most commonly used techniques in data mining are:
Artificial neural networks: Non-linear predictive models that learn through training and
resemble biological neural networks in structure.
Decision trees: Tree-shaped structures that represent sets of decisions. These decisions
generate rules for the classification of a dataset. Specific decision tree methods include
Classification and Regression Trees (CART) and Chi Square Automatic Interaction
Detection (CHAID) .
Genetic algorithms: Optimization techniques that use processes such as genetic
combination, mutation, and natural selection in a design based on the concepts of
evolution.
Nearest neighbor method: A technique that classifies each record in a dataset based on a
combination of the classes of the k record(s) most similar to it in a historical dataset
(where k ≥ 1). Sometimes called the k-nearest neighbor technique.
Rule induction: The extraction of useful if-then rules from data based on statistical
significance.
Many of these technologies have been in use for more than a decade in specialized analysis tools
that work with relatively small volumes of data. These capabilities are now evolving to integrate
directly with industry-standard data warehouse and OLAP platforms. The appendix to this white
paper provides a glossary of data mining terms.
How exactly is data mining able to tell you important things that you didn't know or what is
going to happen next? The technique that is used to perform these feats in data mining is called
modeling. Modeling is simply the act of building a model in one situation where you know the
answer and then applying it to another situation where you don't. For instance, if you were looking
for a sunken Spanish galleon on the high seas the first thing you might do is to research the times
when Spanish treasure had been found by others in the past. You might note that these ships
often tend to be found off the coast of Bermuda and that there are certain characteristics to the
ocean currents, and certain routes that have likely been taken by the ship’s captains in that era.
You note these similarities and build a model that includes the characteristics that are common to
the locations of these sunken treasures. With these models in hand you sail off looking for
treasure where your model indicates it most likely might be given a similar situation in the past.
Hopefully, if you've got a good model, you find your treasure.
This act of model building is thus something that people have been doing for a long time,
certainly before the advent of computers or data mining technology. What happens on
computers, however, is not much different than the way people build models. Computers are
loaded up with lots of information about a variety of situations where an answer is known and
then the data mining software on the computer must run through that data and distill the
characteristics of the data that should go into the model. Once the model is built it can then be
used in similar situations where you don't know the answer. For example, say that you are the
director of marketing for a telecommunications company and you'd like to acquire some new
long distance phone customers. You could just randomly go out and mail coupons to the general
population, just as you could randomly sail the seas looking for sunken treasure. In neither case
would you achieve the results you desire. Of course, you have the opportunity to do much
better than random: you could use the business experience stored in your database to build a
model.
As the marketing director you have access to a lot of information about all of your customers:
their age, sex, credit history, and long distance calling usage. The good news is that you also have
a lot of information about your prospective customers: their age, sex, credit history, and so on. Your
problem is that you don't know the long distance calling usage of these prospects (since they are
most likely now customers of your competition). You'd like to concentrate on those prospects
who have large amounts of long distance usage. You can accomplish this by building a model.
Table 2 illustrates the data used for building a model for new customer prospecting in a data
warehouse.
 | Customers | Prospects
General information (e.g. demographic data) | Known | Known
Proprietary information (e.g. customer transactions) | Known | Target
The goal in prospecting is to make some calculated guesses about the information in the lower
right hand quadrant based on the model that we build going from Customer General Information
to Customer Proprietary Information. For instance, a simple model for a telecommunications
company might be:
98% of my customers who make more than $60,000/year spend more than $80/month on long
distance
This model could then be applied to the prospect data to try to tell something about the
proprietary information that this telecommunications company does not currently have access to.
With this model in hand, new customers can be selectively targeted.
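A minimal sketch of that workflow follows, assuming pandas and entirely made-up field names and figures: derive the income-and-spending rule from known customers, then apply it to prospects whose spending is unknown.

    # Learn a simple rule from known customers, then apply it to prospects.
    import pandas as pd

    customers = pd.DataFrame({
        "income":        [75000, 62000, 41000, 90000, 38000],
        "monthly_spend": [95.0,  88.0,  22.0,  120.0, 18.0],
    })

    prospects = pd.DataFrame({
        "name":   ["Alice", "Bob", "Carol"],
        "income": [82000,   35000, 64000],
    })

    # "Model": among customers earning over $60,000/year, what share spend over $80/month?
    high_income = customers[customers["income"] > 60000]
    confidence = (high_income["monthly_spend"] > 80).mean()
    print(f"{confidence:.0%} of customers earning > $60k spend > $80/month")

    # Apply the rule to prospects: target the ones the model says are likely heavy users.
    targets = prospects[prospects["income"] > 60000]
    print("prospects to target:", list(targets["name"]))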
Test marketing is an excellent source of data for this kind of modeling. Mining the results of a
test market representing a broad but relatively small sample of prospects can provide a
foundation for identifying good prospects in the overall market. Table 3 shows another common
scenario for building models: predict what is going to happen in the future.
If someone told you that he had a model that could predict customer usage how would you know
if he really had a good model? The first thing you might try would be to ask him to apply his
model to your customer base - where you already knew the answer. With data mining, the best
way to accomplish this is by setting aside some of your data in a vault to isolate it from the
mining process. Once the mining is complete, the results can be tested against the data held in
the vault to confirm the model’s validity. If the model works, its observations should hold for the
vaulted data.
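The sketch below shows one common way to implement the vault idea, holding out a portion of the records from model building and scoring the finished model against them. It assumes scikit-learn and synthetic data rather than any particular vendor's tooling.

    # Hold out a "vault" of records and validate the mined model against it.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

    # Set aside 25% of the records as the vault; the model never sees them.
    X_mine, X_vault, y_mine, y_vault = train_test_split(X, y, test_size=0.25, random_state=1)

    model = DecisionTreeClassifier(max_depth=5).fit(X_mine, y_mine)

    # If the model is any good, its observations should hold for the vaulted data.
    print("accuracy on mined data:  ", model.score(X_mine, y_mine))
    print("accuracy on vaulted data:", model.score(X_vault, y_vault))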
To best apply these advanced techniques, they must be fully integrated with a data warehouse as
well as flexible interactive business analysis tools. Many data mining tools currently operate
outside of the warehouse, requiring extra steps for extracting, importing, and analyzing the data.
Furthermore, when new insights require operational implementation, integration with the
warehouse simplifies the application of results from data mining. The resulting analytic data
warehouse can be applied to improve business processes throughout the organization, in areas
such as promotional campaign management, fraud detection, new product rollout, and so on.
Figure 1 illustrates an architecture for advanced analysis in a large data warehouse.
The ideal starting point is a data warehouse containing internal data that tracks all
customer contact, coupled with external market data about competitor activity. Background
information on potential customers also provides an excellent basis for prospecting. This
warehouse can be implemented in a variety of relational database systems: Sybase, Oracle,
Redbrick, and so on, and should be optimized for flexible and fast data access.
This design represents a fundamental shift from conventional decision support systems. Rather
than simply delivering data to the end user through query and reporting software, the Advanced
Analysis Server applies users’ business models directly to the warehouse and returns a proactive
analysis of the most relevant information. These results enhance the metadata in the OLAP
Server by providing a dynamic metadata layer that represents a distilled view of the data.
Reporting, visualization, and other analysis tools can then be applied to plan future actions and
confirm the impact of those plans.
Profitable Applications
A wide range of companies have deployed successful applications of data mining. While early
adopters of this technology have tended to be in information-intensive industries such as
financial services and direct mail marketing, the technology is applicable to any company
looking to leverage a large data warehouse to better manage their customer relationships. Two
critical factors for success with data mining are: a large, well-integrated data warehouse and a
well-defined understanding of the business process within which data mining is to be applied
(such as customer prospecting, retention, campaign management, and so on).
A pharmaceutical company can analyze its recent sales force activity and its results to
improve targeting of high-value physicians and determine which marketing activities will
have the greatest impact in the next few months. The data needs to include competitor
market activity as well as information about the local health care systems. The results can
be distributed to the sales force via a wide-area network that enables the representatives
to review the recommendations from the perspective of the key attributes in the decision
process. The ongoing, dynamic analysis of the data warehouse allows best practices from
throughout the organization to be applied in specific sales situations.
A credit card company can leverage its vast warehouse of customer transaction data to
identify customers most likely to be interested in a new credit product. Using a small test
mailing, the attributes of customers with an affinity for the product can be identified.
Recent projects have indicated more than a 20-fold decrease in costs for targeted mailing
campaigns over conventional approaches.
A diversified transportation company with a large direct sales force can apply data
mining to identify the best prospects for its services. Using data mining to analyze its
own customer experience, this company can build a unique segmentation identifying the
attributes of high-value prospects. Applying this segmentation to a general business
database such as those provided by Dun & Bradstreet can yield a prioritized list of
prospects by region.
A large consumer package goods company can apply data mining to improve its sales
process to retailers. Data from consumer panels, shipments, and competitor activity can
be applied to understand the reasons for brand and store switching. Through this analysis,
the manufacturer can select promotional strategies that best reach its target customer
segments.
Each of these examples has a clear common ground: they leverage the knowledge about
customers implicit in a data warehouse to reduce costs and improve the value of customer
relationships. These organizations can now focus their efforts on the most important (profitable)
customers and prospects, and design targeted marketing strategies to best reach them.
Conclusion
Comprehensive data warehouses that integrate operational data with customer, supplier, and
market information have resulted in an explosion of information. Competition requires timely
and sophisticated analysis on an integrated view of the data. However, there is a growing gap
between more powerful storage and retrieval systems and the users’ ability to effectively analyze
and act on the information they contain. Both relational and OLAP technologies have
tremendous capabilities for navigating massive data warehouses, but brute force navigation of
data is not enough. A new technological leap is needed to structure and prioritize information for
specific end-user problems. Data mining tools can make this leap. Quantifiable business
benefits have been proven through the integration of data mining with current information
systems, and new products are on the horizon that will bring this integration to an even wider
audience of users.
[1] META Group Application Development Strategies: "Data Mining for Data Warehouses:"
[2] Gartner Group Advanced Technologies and Applications Research Note, 2/1/95.
[3] Gartner Group High Performance Computing Research Note, 1/31/95.
Glossary of Data Mining Terms
analytical model A structure and process for analyzing a dataset. For example, a decision tree
is a model for the classification of a dataset.
anomalous data Data that result from errors (for example, data entry keying errors) or that
represent unusual events. Anomalous data should be examined carefully
because it may carry important information.
artificial neural networks Non-linear predictive models that learn through training and
resemble biological neural networks in structure.
CART Classification and Regression Trees. A decision tree technique used for
classification of a dataset. Provides a set of rules that you can apply to a new
(unclassified) dataset to predict which records will have a given outcome.
Segments a dataset by creating 2-way splits. Requires less data preparation
than CHAID.
CHAID Chi Square Automatic Interaction Detection. A decision tree technique used
for classification of a dataset. Provides a set of rules that you can apply to a
new (unclassified) dataset to predict which records will have a given
outcome. Segments a dataset by using chi square tests to create multi-way
splits. Preceded, and requires more data preparation than, CART.
classification The process of dividing a dataset into mutually exclusive groups such that the
members of each group are as "close" as possible to one another, and
different groups are as "far" as possible from one another, where distance is
measured with respect to specific variable(s) you are trying to predict. For
example, a typical classification problem is to divide a database of companies
into groups that are as homogeneous as possible with respect to a
creditworthiness variable with values "Good" and "Bad."
clustering The process of dividing a dataset into mutually exclusive groups such that the
members of each group are as "close" as possible to one another, and
different groups are as "far" as possible from one another, where distance is
measured with respect to all available variables.
data cleansing The process of ensuring that all values in a dataset are consistent and
correctly recorded.
data mining The extraction of hidden predictive information from large databases.
data navigation The process of viewing different dimensions, slices, and levels of detail of a
multidimensional database. See OLAP.
decision tree A tree-shaped structure that represents a set of decisions. These decisions
generate rules for the classification of a dataset. See CART and CHAID.
exploratory data analysis The use of graphical and descriptive statistical techniques to learn
about the structure of a dataset.
linear model An analytical model that assumes linear relationships in the coefficients of
the variables being studied.
linear regression A statistical technique used to find the best-fitting linear relationship between
a target (dependent) variable and its predictors (independent variables).
logistic regression A linear regression that predicts the proportions of a categorical target
variable, such as type of customer, in a population.
nearest neighbor A technique that classifies each record in a dataset based on a combination of
the classes of the k record(s) most similar to it in a historical dataset (where k
≥ 1). Sometimes called a k-nearest neighbor technique.
non-linear model An analytical model that does not assume linear relationships in the
coefficients of the variables being studied.
outlier A data item whose value falls outside the bounds enclosing most of the other
corresponding values in the sample. May indicate anomalous data. Should be
examined carefully; may carry important information.
parallel processing The coordinated use of multiple processors to perform computational tasks.
Parallel processing can occur on a multiprocessor computer or on a network
of workstations or PCs.
predictive model A structure and process for predicting the values of specified variables in a
dataset.
prospective data analysis Data analysis that predicts future trends, behaviors, or events based on
historical data.
retrospective data analysis Data analysis that provides insights into trends, behaviors, or events
that have already occurred.
rule induction The extraction of useful if-then rules from data based on statistical
significance.
time series analysis The analysis of a sequence of measurements made at specified time intervals.
Time is usually the dominating dimension of the data.