Data Mining: What Is Data Mining?: Oracle
Overview
Generally, data mining (sometimes called data or knowledge discovery) is the process
of analyzing data from different perspectives and summarizing it into useful
information - information that can be used to increase revenue, cut costs, or both.
Data mining software is one of a number of analytical tools for analyzing data. It
allows users to analyze data from many different dimensions or angles, categorize it,
and summarize the relationships identified. Technically, data mining is the process of
finding correlations or patterns among dozens of fields in large relational databases.
Continuous Innovation
Although data mining is a relatively new term, the technology is not. Companies have
used powerful computers to sift through volumes of supermarket scanner data and
analyze market research reports for years. However, continuous innovations in
computer processing power, disk storage, and statistical software are dramatically
increasing the accuracy of analysis while driving down the cost.
Example
For example, one Midwest grocery chain used the data mining capacity of Oracle
software to analyze local buying patterns. They discovered that when men bought
diapers on Thursdays and Saturdays, they also tended to buy beer. Further analysis
showed that these shoppers typically did their weekly grocery shopping on Saturdays.
On Thursdays, however, they only bought a few items. The retailer concluded that
they purchased the beer to have it available for the upcoming weekend. The grocery
chain could use this newly discovered information in various ways to increase
revenue. For example, they could move the beer display closer to the diaper display.
And, they could make sure beer and diapers were sold at full price on Thursdays.
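To make the arithmetic behind this kind of association concrete, here is a minimal Python sketch that computes the support, confidence, and lift of the rule "diapers implies beer". The baskets are invented for illustration, not actual scanner data.

    # Hypothetical Thursday baskets: each set is one shopper's purchases.
    baskets = [
        {"diapers", "beer", "milk"},
        {"diapers", "beer"},
        {"diapers", "bread"},
        {"beer", "chips"},
        {"diapers", "beer", "chips"},
        {"milk", "bread"},
    ]

    def rule_stats(baskets, antecedent, consequent):
        n = len(baskets)
        both = sum(1 for b in baskets if antecedent <= b and consequent <= b)
        ante = sum(1 for b in baskets if antecedent <= b)
        cons = sum(1 for b in baskets if consequent <= b)
        support = both / n              # P(antecedent and consequent together)
        confidence = both / ante        # P(consequent | antecedent)
        lift = confidence / (cons / n)  # how much the rule beats chance
        return support, confidence, lift

    s, c, l = rule_stats(baskets, {"diapers"}, {"beer"})
    print(f"diapers -> beer: support={s:.2f} confidence={c:.2f} lift={l:.2f}")

A lift above 1.0 is what would justify moving the beer display closer to the diapers.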
Data
Data are any facts, numbers, or text that can be processed by a computer. Today,
organizations are accumulating vast and growing amounts of data in different formats
and different databases. This includes:
operational or transactional data, such as sales, cost, inventory, payroll, and
accounting
nonoperational data, such as industry sales, forecast data, and macroeconomic
data
metadata - data about the data itself, such as logical database design or data
dictionary definitions
Information
The patterns, associations, or relationships among all this data can provide
information. For example, analysis of retail point of sale transaction data can yield
information on which products are selling and when.
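As a small illustration of that step from data to information, a few lines of Python can summarize raw point-of-sale records into which products sell on which days. The transactions here are invented.

    from collections import Counter

    # (product, weekday) pairs from a hypothetical point-of-sale log.
    sales = [
        ("beer", "Thu"), ("diapers", "Thu"), ("beer", "Sat"),
        ("milk", "Mon"), ("beer", "Thu"), ("diapers", "Sat"),
    ]

    by_product_day = Counter(sales)
    for (product, day), count in by_product_day.most_common():
        print(f"{product:8s} {day}: {count} sold")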
Knowledge
Information can be converted into knowledge about historical patterns and future
trends. For example, summary information on retail supermarket sales can be
analyzed in light of promotional efforts to provide knowledge of consumer buying
behavior. Thus, a manufacturer or retailer could determine which items are most
susceptible to promotional efforts.
Data Warehouses
Dramatic advances in data capture, processing power, data transmission, and storage
capabilities are enabling organizations to integrate their various databases into data
warehouses. Data warehousing is defined as a process of centralized data
management and retrieval. Data warehousing, like data mining, is a relatively new
term although the concept itself has been around for years. Data warehousing
represents an ideal vision of maintaining a central repository of all organizational
data. Centralization of data is needed to maximize user access and analysis. Dramatic
technological advances are making this vision a reality for many companies. And,
equally dramatic advances in data analysis software are allowing users to access this
data freely. The data analysis software is what supports data mining.
Data mining is primarily used today by companies with a strong consumer focus -
retail, financial, communication, and marketing organizations. It enables these
companies to determine relationships among "internal" factors such as price, product
positioning, or staff skills, and "external" factors such as economic indicators,
competition, and customer demographics. And, it enables them to determine the
impact on sales, customer satisfaction, and corporate profits. Finally, it enables them
to "drill down" into summary information to view detail transactional data.
With data mining, a retailer could use point-of-sale records of customer purchases to
send targeted promotions based on an individual's purchase history. By mining
demographic data from comment or warranty cards, the retailer could develop
products and promotions to appeal to specific customer segments.
For example, Blockbuster Entertainment mines its video rental history database to
recommend rentals to individual customers. American Express can suggest products
to its cardholders based on analysis of their monthly expenditures.
In a sports setting, the National Basketball Association's Advanced Scout
software mines game data in conjunction with video recordings. By using the NBA
universal clock, a coach can automatically bring up the video clips showing each
of the jump shots attempted by Cleveland's John Williams with Mark Price on the
floor, without needing to comb through hours of video footage. Those clips show
a very successful pick-and-roll play in which Price draws the Knicks' defense
and then finds Williams for an open jump shot.
While large-scale information technology has been evolving separate transaction and
analytical systems, data mining provides the link between the two. Data mining
software analyzes relationships and patterns in stored transaction data based on open-
ended user queries. Several types of analytical software are available: statistical,
machine learning, and neural networks. Generally, any of four types of relationships
are sought:
Classes: stored data is used to locate data in predetermined groups. For
example, a restaurant chain could mine customer purchase data to determine when
customers visit and what they typically order.
Clusters: data items are grouped according to logical relationships or consumer
preferences. For example, data can be mined to identify market segments or
consumer affinities.
Associations: data can be mined to identify associations. The beer-diaper
example is an example of associative mining.
Sequential patterns: data is mined to anticipate behavior patterns and trends.
For example, an outdoor equipment retailer could predict the likelihood of a
backpack being purchased based on a consumer's purchase of sleeping bags and
hiking shoes.
Data mining consists of five major elements:
Extract, transform, and load transaction data onto the data warehouse system.
(A small sketch of these elements follows the lists below.)
Store and manage the data in a multidimensional database system.
Provide data access to business analysts and information technology
professionals.
Analyze the data by application software.
Present the data in a useful format, such as a graph or table.
Different levels of analysis are available:
Artificial neural networks: non-linear predictive models that learn through
training and resemble biological neural networks in structure.
Genetic algorithms: optimization techniques that use processes such as genetic
combination, mutation, and natural selection.
Decision trees: tree-shaped structures that represent sets of decisions; these
decisions generate rules for the classification of a dataset.
Nearest neighbor method: a technique that classifies each record in a dataset
based on the records most similar to it in a historical dataset.
Rule induction: the extraction of useful if-then rules from data based on
statistical significance.
Data visualization: the visual interpretation of complex relationships in
multidimensional data.
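As a minimal sketch of the first and last of these elements - extract, transform, load, and present - the following Python uses sqlite3 as a stand-in for a warehouse system; the records are hypothetical.

    import sqlite3

    raw = "2024-03-01,beer,4.50\n2024-03-01,diapers,9.99\n2024-03-02,beer,4.50"

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (day TEXT, product TEXT, price REAL)")

    # Extract and transform: parse each line and coerce the price to a number.
    rows = [(d, p, float(x)) for d, p, x in
            (line.split(",") for line in raw.splitlines())]
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

    # Analyze and present: revenue per product, the summary an analyst would see.
    for product, revenue in conn.execute(
            "SELECT product, SUM(price) FROM sales GROUP BY product"):
        print(f"{product}: ${revenue:.2f}")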
Today, data mining applications are available on systems of all sizes, from
mainframe and client/server platforms down to PCs. System prices range from
several thousand dollars for the smallest applications up to $1 million a
terabyte for the largest. Enterprise-wide
applications generally range in size from 10 gigabytes to over 11 terabytes. NCR has
the capacity to deliver applications exceeding 100 terabytes. There are two critical
technological drivers:
Size of the database: the more data being processed and maintained, the more
powerful the system required.
Query complexity: the more complex the queries and the greater the number
of queries being processed, the more powerful the system required.
Relational database storage and management technology is adequate for many data
mining applications smaller than 50 gigabytes. However, this infrastructure needs to be
significantly enhanced to support larger applications. Some vendors have added
extensive indexing capabilities to improve query performance. Others use new
hardware architectures such as Massively Parallel Processors (MPP) to achieve order-
of-magnitude improvements in query time. For example, MPP systems from NCR
link hundreds of high-speed Pentium processors to achieve performance levels
exceeding those of the largest supercomputers.
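The principle behind these parallel gains can be sketched in a few lines of Python: partition the data, let each worker aggregate its own partition, and combine the partial results. This illustrates the divide-and-aggregate idea only, not NCR's actual architecture.

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker aggregates its partition, like one node of an MPP system.
        return sum(chunk)

    if __name__ == "__main__":
        amounts = list(range(1_000_000))   # stand-in for transaction values
        n_workers = 4
        step = len(amounts) // n_workers
        chunks = [amounts[i:i + step] for i in range(0, len(amounts), step)]
        with Pool(n_workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(f"total = {total}")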
Overview
Data mining, the extraction of hidden predictive information from large databases, is
a powerful new technology with great potential to help companies focus on the most
important information in their data warehouses. Data mining tools predict future
trends and behaviors, allowing businesses to make proactive, knowledge-driven
decisions. The automated, prospective analyses offered by data mining move beyond
the analyses of past events provided by retrospective tools typical of decision support
systems. Data mining tools can answer business questions that traditionally were too
time consuming to resolve. They scour databases for hidden patterns, finding
predictive information that experts may miss because it lies outside their expectations.
Most companies already collect and refine massive quantities of data. Data mining
techniques can be implemented rapidly on existing software and hardware platforms
to enhance the value of existing information resources, and can be integrated with
new products and systems as they are brought on-line. When implemented on high
performance client/server or parallel processing computers, data mining tools can
analyze massive databases to deliver answers to questions such as, "Which clients are
most likely to respond to my next promotional mailing, and why?"
This white paper provides an introduction to the basic technologies of data
mining. Examples of profitable applications illustrate its relevance to today's
business environment, and a basic description shows how data warehouse
architectures can evolve to deliver the value of data mining to end users.
Data mining techniques are the result of a long process of research and product
development. This evolution began when business data was first stored on computers,
continued with improvements in data access, and more recently, generated
technologies that allow users to navigate through their data in real time. Data mining
takes this evolutionary process beyond retrospective data access and navigation to
prospective and proactive information delivery. Data mining is ready for application
in the business community because it is supported by three technologies that are now
sufficiently mature:
Massive data collection
Powerful multiprocessor computers
Data mining algorithms
In the evolution from business data to business information, each new step has built
upon the previous one. For example, dynamic data access is critical for drill-through
in data navigation applications, and the ability to store large databases is critical to
data mining. From the user’s point of view, the four steps listed in Table 1 were
revolutionary because they allowed new business questions to be answered accurately
and quickly.
Table 1. Steps in the evolution of data mining.

Data Collection (1960s)
  Business question: "What was my total revenue in the last five years?"
  Enabling technologies: computers, tapes, disks
  Product providers: IBM, CDC
  Characteristics: retrospective, static data delivery

Data Access (1980s)
  Business question: "What were unit sales in New England last March?"
  Enabling technologies: relational databases (RDBMS), Structured Query
  Language (SQL), ODBC
  Product providers: Oracle, Sybase, Informix, IBM, Microsoft
  Characteristics: retrospective, dynamic data delivery at record level

Data Warehousing & Decision Support (1990s)
  Business question: "What were unit sales in New England last March? Drill
  down to Boston."
  Enabling technologies: on-line analytic processing (OLAP), multidimensional
  databases, data warehouses
  Product providers: Pilot, Comshare, Arbor, Cognos, Microstrategy
  Characteristics: retrospective, dynamic data delivery at multiple levels

Data Mining (emerging today)
  Business question: "What's likely to happen to Boston unit sales next month?
  Why?"
  Enabling technologies: advanced algorithms, multiprocessor computers, massive
  databases
  Product providers: Pilot, Lockheed, IBM, SGI, numerous startups
  Characteristics: prospective, proactive information delivery
The core components of data mining technology have been under development for
decades, in research areas such as statistics, artificial intelligence, and machine
learning. Today, the maturity of these techniques, coupled with high-performance
relational database engines and broad data integration efforts, makes them
practical for current data warehouse environments.
Data mining derives its name from the similarities between searching for valuable
business information in a large database — for example, finding linked products in
gigabytes of store scanner data — and mining a mountain for a vein of valuable ore.
Both processes require either sifting through an immense amount of material, or
intelligently probing it to find exactly where the value resides. Given
databases of sufficient size and quality, data mining technology can generate
new business opportunities by providing these capabilities:
Automated prediction of trends and behaviors. Data mining automates the process
of finding predictive information in large databases, so questions that
traditionally required extensive hands-on analysis can be answered directly
from the data. A typical example of a predictive problem is targeted marketing:
data on past promotional mailings can identify the targets most likely to
maximize return on investment in future mailings.
Automated discovery of previously unknown patterns. Data mining tools sweep
through databases and identify previously hidden patterns in one step, such as
seemingly unrelated products that are often purchased together, fraudulent
credit card transactions, or anomalous data that could represent data entry
keying errors.
Data mining techniques can yield the benefits of automation on existing software and
hardware platforms, and can be implemented on new systems as existing platforms
are upgraded and new products developed. When data mining tools are implemented
on high performance parallel processing systems, they can analyze massive databases
in minutes. Faster processing means that users can automatically experiment with
more models to understand complex data. High speed makes it practical for users to
analyze huge quantities of data. Larger databases, in turn, yield improved predictions.
More columns. Analysts must often limit the number of variables they
examine when doing hands-on analysis due to time constraints. Yet variables
that are discarded because they seem unimportant may carry information about
unknown patterns. High performance data mining allows users to explore the
full depth of a database, without preselecting a subset of variables.
More rows. Larger samples yield lower estimation errors and variance, and
allow users to make inferences about small but important segments of a
population.
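The "more rows" point can be made concrete with a short simulation: the standard error of a sample mean shrinks roughly as one over the square root of the sample size. The population of monthly spends below is simulated purely for illustration.

    import random
    import statistics

    random.seed(0)
    # Simulated population of monthly long distance spends, in dollars.
    population = [random.gauss(80, 30) for _ in range(100_000)]

    for n in (100, 1_000, 10_000):
        sample = random.sample(population, n)
        se = statistics.stdev(sample) / n ** 0.5   # estimation error shrinks with n
        print(f"n={n:6d}  mean={statistics.mean(sample):6.2f}  std.err={se:5.2f}")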
A recent Gartner Group Advanced Technology Research Note listed data mining and
artificial intelligence at the top of the five key technology areas that "will clearly have
a major impact across a wide range of industries within the next 3 to 5 years." [2]
Gartner also listed parallel architectures and data mining as two of the top 10 new
technologies in which companies will invest during the next 5 years. According to a
recent Gartner HPC Research Note, "With the rapid advance in data capture,
transmission and storage, large-systems users will increasingly need to implement
new and innovative ways to mine the after-market value of their vast stores of detail
data, employing MPP [massively parallel processing] systems to create new sources
of business advantage (0.9 probability)." [3]
The most commonly used techniques in data mining include artificial neural
networks, decision trees, genetic algorithms, the nearest neighbor method, and
rule induction - the extraction of useful if-then rules from data based on
statistical significance.
Many of these technologies have been in use for more than a decade in specialized
analysis tools that work with relatively small volumes of data. These capabilities are
now evolving to integrate directly with industry-standard data warehouse and OLAP
platforms. The appendix to this white paper provides a glossary of data mining terms.
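As a minimal sketch of one of these techniques, the following builds a tiny decision tree, assuming the scikit-learn library is installed; the features (income, age) and the labels are invented for illustration.

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training data: [annual income, age] per customer.
    X = [[65_000, 34], [30_000, 25], [85_000, 45], [28_000, 52], [72_000, 29]]
    y = [1, 0, 1, 0, 1]   # 1 = spends more than $80/month on long distance

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(tree.predict([[70_000, 40]]))   # classify a new, unseen record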
How exactly is data mining able to tell you important things that you didn't
know, or predict what is going to happen next? The technique used to perform these feats in data
mining is called modeling. Modeling is simply the act of building a model in one
situation where you know the answer and then applying it to another situation that you
don't. For instance, if you were looking for a sunken Spanish galleon on the
high seas, the first thing you might do is research the times when Spanish
treasure had been found by others in the past. You might note that these ships
often tend to be found off the coast of Bermuda, that the ocean currents there
have certain characteristics, and that certain routes were likely taken by the
ships' captains of that era. You
note these similarities and build a model that includes the characteristics that are
common to the locations of these sunken treasures. With these models in hand you
sail off looking for treasure where your model indicates it most likely might be given
a similar situation in the past. Hopefully, if you've got a good model, you find your
treasure.
This act of model building is thus something that people have been doing for a long
time, certainly before the advent of computers or data mining technology. What
happens on computers, however, is not much different than the way people build
models. Computers are loaded up with lots of information about a variety of situations
where an answer is known and then the data mining software on the computer must
run through that data and distill the characteristics of the data that should go into the
model. Once the model is built it can then be used in similar situations where you
don't know the answer. For example, say that you are the director of marketing for a
telecommunications company and you'd like to acquire some new long distance phone
customers. You could just randomly go out and mail coupons to the general
population - just as you could randomly sail the seas looking for sunken
treasure. In neither case would you achieve the results you desire. Of course,
you can do much better than random: you could use the business experience
stored in your database to build a model.
As the marketing director you have access to a lot of information about all of your
customers: their age, sex, credit history and long distance calling usage. The good
news is that you also have a lot of information about your prospective customers: their
age, sex, credit history etc. Your problem is that you don't know the long distance
calling usage of these prospects (since they are most likely now customers of your
competition). You'd like to concentrate on those prospects who have large amounts of
long distance usage. You can accomplish this by building a model. Table 2 illustrates
the data used for building a model for new customer prospecting in a data warehouse.
Table 2. Data mining for prospecting.

                                                 Customers   Prospects
General information (e.g., demographic data)     Known       Known
Proprietary information (e.g., customer
transactions)                                    Known       Targeted
The goal in prospecting is to make some calculated guesses about the information in
the lower right hand quadrant based on the model that we build going from Customer
General Information to Customer Proprietary Information. For instance, a simple
model for a telecommunications company might be:
98% of my customers who make more than $60,000/year spend more than $80/month
on long distance
This model could then be applied to the prospect data to try to tell something about
the proprietary information that this telecommunications company does not currently
have access to. With this model in hand new customers can be selectively targeted.
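A minimal sketch of that targeting step follows; the rule and the prospect records are hypothetical, but the selection logic is exactly what applying the model to prospect data means.

    prospects = [
        {"name": "A. Smith", "income": 72_000},
        {"name": "B. Jones", "income": 41_000},
        {"name": "C. Wu",    "income": 95_000},
    ]

    # From the mined model: income > $60,000/year implies a ~98% chance of
    # spending > $80/month on long distance.
    targets = [p for p in prospects if p["income"] > 60_000]
    for p in targets:
        print(f"target mailing: {p['name']}")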
Test marketing is an excellent source of data for this kind of modeling. Mining the
results of a test market representing a broad but relatively small sample of prospects
can provide a foundation for identifying good prospects in the overall market.
Table 3 shows another common scenario for building models: predicting what is
going to happen in the future.

Table 3. Data mining for predictions.

                                                 Yesterday   Today   Tomorrow
Static information and current plans
(e.g., demographic data, marketing plans)        Known       Known   Known
Dynamic information (e.g., customer
transactions)                                    Known       Known   Targeted
If someone told you that he had a model that could predict customer usage how would
you know if he really had a good model? The first thing you might try would be to ask
him to apply his model to your customer base - where you already knew the answer.
With data mining, the best way to accomplish this is by setting aside some of your
data in a vault to isolate it from the mining process. Once the mining is complete, the
results can be tested against the data held in the vault to confirm the model’s validity.
If the model works, its observations should hold for the vaulted data.
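A minimal sketch of this vault idea, with simulated customer records: lock away a fraction of the data, then check how often the mined rule holds on the untouched holdout.

    import random

    random.seed(1)

    def make_record():
        # Hypothetical labeled history with ~5% label noise; heavy means
        # spending more than $80/month on long distance.
        income = random.randint(20_000, 120_000)
        heavy = (income > 60_000) if random.random() < 0.95 else (income <= 60_000)
        return income, heavy

    records = [make_record() for _ in range(1_000)]
    vault, mining_set = records[:200], records[200:]   # 20% locked in the vault

    # Suppose mining the mining_set yielded the rule "income > $60,000 -> heavy".
    # Validate that rule against the vaulted records only.
    hits = sum((income > 60_000) == heavy for income, heavy in vault)
    print(f"rule holds on {hits / len(vault):.0%} of vaulted records")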
To best apply these advanced techniques, they must be fully integrated with a data
warehouse as well as flexible interactive business analysis tools. Many data mining
tools currently operate outside of the warehouse, requiring extra steps for extracting,
importing, and analyzing the data. Furthermore, when new insights require
operational implementation, integration with the warehouse simplifies the application
of results from data mining. The resulting analytic data warehouse can be applied to
improve business processes throughout the organization, in areas such as promotional
campaign management, fraud detection, new product rollout, and so on. Figure 1
illustrates an architecture for advanced analysis in a large data warehouse.
The ideal starting point is a data warehouse containing a combination of internal data
tracking all customer contact coupled with external market data about competitor
activity. Background information on potential customers also provides an excellent
basis for prospecting. This warehouse can be implemented in a variety of
relational database systems - Sybase, Oracle, Red Brick, and so on - and should
be optimized for flexible and fast data access.
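As a minimal sketch of such a starting point, the following uses sqlite3 purely as a stand-in for Oracle, Sybase, or Red Brick, with invented rows: internal contact data and external market data live in one store and can be joined in a single flexible query.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE contacts (customer TEXT, region TEXT, last_purchase REAL);
    CREATE TABLE market   (region TEXT, competitor_share REAL);
    INSERT INTO contacts VALUES ('acme', 'NE', 120.0), ('zenco', 'NE', 80.0),
                                ('orbit', 'SW', 45.0);
    INSERT INTO market   VALUES ('NE', 0.35), ('SW', 0.60);
    """)

    # One query spanning internal customer contact and external market data.
    for row in conn.execute("""
            SELECT c.customer, c.last_purchase, m.competitor_share
            FROM contacts c JOIN market m ON c.region = m.region
            WHERE m.competitor_share > 0.5"""):
        print(row)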
Profitable Applications
A pharmaceutical company can analyze its recent sales force activity and its
results to improve targeting of high-value physicians and determine which
marketing activities will have the greatest impact in the next few months. The
data needs to include competitor market activity as well as information about
the local health care systems. The results can be distributed to the sales force
via a wide-area network that enables the representatives to review the
recommendations from the perspective of the key attributes in the decision
process. The ongoing, dynamic analysis of the data warehouse allows best
practices from throughout the organization to be applied in specific sales
situations.
A credit card company can leverage its vast warehouse of customer
transaction data to identify customers most likely to be interested in a new
credit product. Using a small test mailing, the attributes of customers with an
affinity for the product can be identified. Recent projects have indicated more
than a 20-fold decrease in costs for targeted mailing campaigns over
conventional approaches.
A diversified transportation company with a large direct sales force can apply
data mining to identify the best prospects for its services. Using data mining to
analyze its own customer experience, this company can build a unique
segmentation identifying the attributes of high-value prospects. Applying this
segmentation to a general business database such as those provided by Dun &
Bradstreet can yield a prioritized list of prospects by region.
A large consumer package goods company can apply data mining to improve
its sales process to retailers. Data from consumer panels, shipments, and
competitor activity can be applied to understand the reasons for brand and
store switching. Through this analysis, the manufacturer can select
promotional strategies that best reach its target customer segments.
Each of these examples has a clear common ground. They all leverage the knowledge
about customers implicit in a data warehouse to reduce costs and improve the value of
customer relationships. These organizations can now focus their efforts on the most
important (profitable) customers and prospects, and design targeted marketing
strategies to best reach them.
Conclusion
Comprehensive data warehouses that integrate operational data with customer,
supplier, and market information have resulted in an explosion of information.
Competition requires timely and sophisticated analysis on an integrated view of
the data. However, there is a growing gap between more powerful storage and
retrieval systems and users' ability to effectively analyze and act on the
information they contain. Both relational and OLAP technologies have tremendous
capabilities for navigating massive data warehouses, but brute-force navigation
of data is not enough. A new technological leap is needed to structure and
prioritize information for specific end-user problems. Data mining tools can
make this leap.
Footnotes
[2] Gartner Group Advanced Technologies and Applications Research Note, 2/1/95.
[3] Gartner Group High Performance Computing Research Note, 1/31/95.
Appendix: Glossary of Data Mining Terms

analytical model: A structure and process for analyzing a dataset. For example,
a decision tree is a model for the classification of a dataset.
anomalous data: Data that result from errors (for example, data entry keying
errors) or that represent unusual events. Anomalous data should be examined
carefully because it may carry important information.
artificial neural networks: Non-linear predictive models that learn through
training and resemble biological neural networks in structure.
data cleansing: The process of ensuring that all values in a dataset are
consistent and correctly recorded.
data navigation: The process of viewing different dimensions, slices, and
levels of detail of a multidimensional database. See OLAP.
data warehouse: A system for storing and delivering massive quantities of data.
exploratory data analysis: The use of graphical and descriptive statistical
techniques to learn about the structure of a dataset.
non-linear model: An analytical model that does not assume linear relationships
in the coefficients of the variables being studied.
outlier: A data item whose value falls outside the bounds enclosing most of the
other corresponding values in the sample. May indicate anomalous data. Should
be examined carefully; may carry important information.
predictive model: A structure and process for predicting the values of
specified variables in a dataset.
prospective data analysis: Data analysis that predicts future trends,
behaviors, or events based on historical data.
retrospective data analysis: Data analysis that provides insights into trends,
behaviors, or events that have already occurred.
rule induction: The extraction of useful if-then rules from data based on
statistical significance.