Data Analysis Using SQL and Excel
Ebook, 1,485 pages, 17 hours

About this ebook

A practical guide to data mining using SQL and Excel

Data Analysis Using SQL and Excel, 2nd Edition shows you how to leverage the two most popular tools for data query and analysis—SQL and Excel—to perform sophisticated data analysis without the need for complex and expensive data mining tools. Written by a leading expert on business data mining, this book shows you how to extract useful business information from relational databases. You'll learn the fundamental techniques before moving into the "where" and "why" of each analysis, and then learn how to design and perform these analyses using SQL and Excel. Examples include SQL and Excel code, and the appendix shows how non-standard constructs are implemented in other major databases, including Oracle and IBM DB2/UDB. The companion website includes datasets and Excel spreadsheets, and the book provides hints, warnings, and technical asides to help you every step of the way.

Data Analysis Using SQL and Excel, 2nd Edition shows you how to perform a wide range of sophisticated analyses using these simple tools, sparing you the significant expense of proprietary data mining tools like SAS.

  • Understand core analytic techniques that work with SQL and Excel
  • Ensure your analytic approach gets you the results you need
  • Design and perform your analysis using SQL and Excel

Data Analysis Using SQL and Excel, 2nd Edition shows you how to best use the tools you already know to achieve expert results.

Language: English
Publisher: Wiley
Release date: Dec 3, 2015
ISBN: 9781119021445

    Book preview

    Data Analysis Using SQL and Excel - Gordon S. Linoff

    Foreword

    Gordon Linoff and I have written three and a half books together. (Four, if we get to count the second edition of Data Mining Techniques as a whole new book; it didn't feel like any less work.) Neither of us has written a book without the other before, so I must admit to a tiny twinge of regret upon first seeing the cover of this one without my name on it next to Gordon's. The feeling passed very quickly as recollections of the authorial life came flooding back—vacations spent at the keyboard instead of in or on the lake, opportunities missed, relationships strained. More importantly, this is a book that only Gordon Linoff could have written. His unique combination of talents and experiences informs every chapter.

    I first met Gordon at Thinking Machines Corporation, a now long-defunct manufacturer of parallel supercomputers where we both worked in the late eighties and early nineties. Among other roles, Gordon managed the implementation of a parallel relational database designed to support complex analytical queries on very large databases. The design point for this database was radically different from other relational database systems available at the time in that no trade-offs were made to support transaction processing. The requirements for a system designed to quickly retrieve or update a single record are quite different from the requirements for a system to scan and join huge tables. Jettisoning the requirement to support transaction processing made for a cleaner, more efficient database for analytical processing. This part of Gordon's background means he understands SQL for data analysis literally from the inside out.

    Just as a database designed to answer big important questions has a different structure from one designed to process many individual transactions, a book about using databases to answer big important questions requires a different approach to SQL. Many books on SQL are written for database administrators. Others are written for users wishing to prepare simple reports. Still others attempt to introduce some particular dialect of SQL in every detail. This one is written for data analysts, data miners, and anyone who wants to extract maximum information value from large corporate databases. Jettisoning the requirement to address all the disparate types of database users makes this a better, more focused book for the intended audience. In short, this is a book about how to use databases the way we ourselves use them.

    Even more important than Gordon's database technology background are his many years of experience as a data mining consultant. This has given him a deep understanding of the kinds of questions businesses need to ask and of the data they are likely to have available to answer them. Years spent exploring corporate databases have given Gordon an intuitive feel for how to approach the kinds of problems that crop up time and again across many different business domains:

    How to take advantage of geographic data. A zip code field looks much richer when you realize that from zip code you can get to latitude and longitude, and from latitude and longitude you can get to distance. It looks richer still when you realize that you can use it to join in Census Bureau data to get at important attributes, such as population density, median income, percentage of people on public assistance, and the like.

    How to take advantage of dates. Order dates, ship dates, enrollment dates, birth dates. Corporate data is full of dates. These fields look richer when you understand how to turn dates into tenures, analyze purchases by day of week, and track trends in fulfillment time. They look richer still when you know how to use this data to analyze time-to-event problems such as time to next purchase or expected remaining lifetime of a customer relationship.

    How to build data mining models directly in SQL. This book shows you how to do things in SQL that you probably never imagined possible, including generating association rules for market basket analysis, building regression models, and implementing naïve Bayesian models and scorecards.

    How to prepare data for use with data mining tools. Although more can be done using just SQL and Excel than most people realize, eventually you will want to use more specialized data mining tools. These tools need data in a specific format known as a customer signature. This book shows you how to create these data mining extracts.

    The book is rich in examples and they all use real data. This point is worth saying more about. Unrealistic datasets lead to unrealistic results. This is frustrating to the student. In real life, the more you know about the business context, the better your data mining results will be. Subject matter expertise gives you a head start. You know what variables ought to be predictive and have good ideas about new ones to derive. Fake data does not reward these good ideas because patterns that should be in the data are missing and patterns that shouldn't be there have been introduced inadvertently. Real data is hard to come by, not least because real data may reveal more than its owners are willing to share about their business operations. As a result, many books and courses make do with artificially constructed datasets. Best of all, the datasets used in the book are all available for download at www.wiley.com/go/dataanalysisusingsqlandexcel2e.

    I reviewed the chapters of this book as they were written. This process was very beneficial to my own use of SQL and Excel. The exercise of thinking about the fairly complex queries used in the examples greatly increased my understanding of how SQL actually works. As a result, I have lost my fear of nested queries, multi-way joins, giant case statements, and other formerly daunting aspects of the language. In well over a decade of collaboration, I have always turned to Gordon for help using SQL and Excel to best advantage. Now, I can turn to this book. And you can, too.

    —Michael J. A. Berry

    Introduction

    The first edition of this book set out to explain data analysis from an eminently practical perspective, using the familiar tools of SQL and Excel. The guiding principle of the book was to start with questions and guide the reader through the solutions, both from a business perspective and a technical perspective. This approach proved to be quite successful.

    Much has changed in the ten years since I started writing the first edition. The tools themselves have changed. In those days, Excel did not have a Ribbon, for instance. And, window functions were rare in databases. The world that analysts inhabit has also changed, with tools such as Python and R and NoSQL databases becoming more common. However, relational databases are still in widespread use, and SQL is, if anything, even more relevant today as technology spreads through businesses big and small. Excel still seems to be the reporting and presentation tool of choice for many business users. Big data is no longer a future frontier; it is a problem, a challenge, and an opportunity that we face on a daily basis.

    The second edition has been revised and updated to reflect the changes in the underlying software, with more examples and more techniques, and an additional chapter on database performance. In doing so, I have strived to keep the strengths from the first edition. The book is still organized around the principles of data, analysis, and presentation—three capabilities that are rarely treated together. Examples are organized around questions, with a discussion of both the business relevance and the technical approaches to the problems. The examples carry through to actual code. The data, the code, and the Excel examples are all available on the companion website.

    The motivation for this approach originally came from a colleague, Nick Drake, who is a statistician by training. Once upon a time, he was looking for a book that would explain how to use SQL for the complex queries needed for data analysis. Books on SQL tend to cover either basic query constructs or the details of how databases work. None come strictly from a perspective of analyzing data, and none are structured around answering questions about data. Of the many books on statistics, none address the simple fact that most of the data being used resides in relational databases. This book fills that gap.

    My other books on data mining, written with Michael Berry, focus on advanced algorithms and case studies. By contrast, this book focuses on the how-to. It starts by describing data stored in databases and continues through preparing and producing results. Interspersed are stories based on my experience in the field, explaining how results might be applied and why some things work and other things do not. The examples are so practical that the data used for them is available on the book’s companion website (www.wiley.com/go/dataanalysisusingsqlandexcel2e).

    One of the truisms about data warehouses and analysis databases in general is that they don’t actually do anything. Yes, they store data. Yes, they bring together data from different sources, cleansing and clarifying along the way. Yes, they define business dimensions, store transactions about customers, and, perhaps, summarize important data. (And, yes, all these are very important!) However, data in a database resides on many spinning disks and in complex data structures in a computer’s memory. So much data, so little information.

    How can we exploit this data, particularly data that describes customers? The many fancy algorithms for statistical modeling and data mining all have a simple rule: garbage-in, garbage-out. The results of even the most sophisticated techniques are only as good as the data being used (and the assumptions being fed into the model). Data is central to the task of understanding customers, understanding products, and understanding markets.

    The chapters in this book cover different aspects of data and several important analytic techniques that are readily supported by SQL and Excel. The analytic techniques range from exploratory data analysis to survival analysis, from market basket analysis to naïve Bayesian models, and from simple animations to regression. Of course, the potential range of possible techniques is much larger than can be presented in one book. These methods have proven useful over time and are applicable in many different areas.

    And finally, data and analysis are not enough. Data must be analyzed, and the results must be presented to the right audience. To fully exploit its value, we must transform data into stories and scenarios, charts and metrics and insights.

    Overview of the Book and Technology

    This book focuses on three key technological areas used for transforming data into actionable information:

    Relational databases store data. The basic language for retrieving data is SQL. (Note that variants of SQL are also used for NoSQL databases.)

    Excel spreadsheets are the most popular tool for presenting data. Perhaps the most powerful feature of Excel is its charting capability, which turns columns of numbers into pictures.

    Statistics is the foundation of data analysis.

    These three technologies are presented together because they are all interrelated. SQL answers the question How do we access data? Statistics answers the question How is it relevant? And Excel makes it possible to convince other people of the veracity of our findings and to provide them results that they can play with.

    The description of data processing is organized around the SQL language. Databases such as Oracle, Postgres, MySQL, IBM DB2, and Microsoft SQL Server are common in the business world, storing the vast majority of business data transactions. The good news is that all relational databases support SQL as a query language. However, just as England and the United States have been described as two countries separated by a common language, each database supports a slightly different dialect of SQL. The Appendix contains a list of how commonly used functionality is represented in the various dialects.

    Similarly, beautiful presentation tools and professional graphics packages are available. However, rare and exceptional is the workplace computer that does not have Excel or an equivalent spreadsheet.

    Statistics and data mining techniques do not always require advanced tools. Some very important techniques are readily available using the combination of SQL and Excel, including survival analysis, look-alike models, naïve Bayesian models, and association rules. In fact, the methods in this book are often more powerful than the methods available in such tools, precisely because they are close to the data and readily customizable. The explanation of the techniques covers both the basic ideas and the extensions that may not be available in other tools.

    The chapters describing the various techniques provide a solid introduction to modeling and data exploration, in the context of familiar tools and data. They also highlight when more advanced tools are useful because the problem exceeds the capabilities of the simpler tools.

    How This Book Is Organized

    The 14 chapters in this book fall into four parts. The first three introduce key concepts of SQL, Excel, and statistics. The seven middle chapters discuss various methods of exploring data and analytic techniques specifically suited to SQL and Excel. More formal ideas about modeling, in the sense of statistics and data mining, are in the next three chapters. And, finally, a new chapter discusses performance issues when writing SQL queries.

    Each chapter explains some aspect of data analysis using SQL and Excel from several different perspectives, including:

    Business examples for using the analysis

    Questions the analysis answers

    Explanations about how the analytic techniques work

    SQL syntax for implementing the techniques

    Results as tables or charts and how to create them in Excel

    Examples in the chapters are generally available in Excel at www.wiley.com/go/dataanalysisusingsqlandexcel2e.

    SQL is a concise language that is sometimes difficult to follow. Dataflows, graphical representations of data processing, are used to illustrate how SQL works. These dataflow diagrams are a reasonable approximation of how SQL engines actually process the data, although the details necessarily differ based on the underlying engine.

    Results are presented in charts and tables, sprinkled throughout the book. In addition, important features of Excel are highlighted, and interesting uses of Excel graphics are explained. Each chapter has technical asides, typically explaining some aspect of a technique or an interesting bit of history associated with the methods described in the chapter.

    Introductory Chapters

    Chapter 1, A Data Miner Looks at SQL, introduces SQL from the perspective of data analysis. This is the querying part of the SQL language, used to extract data from databases using SELECT queries.

    This chapter introduces entity-relationship diagrams to describe the structure of the data—the tables and columns and how they relate to each other. It also introduces dataflow diagrams to describe the processing of queries; dataflow diagrams give a visual explanation of how data is processed. This chapter introduces the important functionality used throughout the book—such as joins, aggregations, and window functions.

    Furthermore, the first chapter describes the datasets used for examples throughout the book (and which are also available for downloading). This data includes tables describing retail purchases, tables describing mobile telephone customers, and reference tables that describe zip codes and the calendar.

    Chapter 2, What’s in a Table? Getting Started with Data Exploration, introduces Excel for exploratory data analysis and presentation. Of many useful capabilities in Excel, perhaps the most useful are charts. As the ancient Chinese saying goes, A picture paints a thousand words, and Excel charting paints pictures using data. Such charts are not only useful aesthetically, but also more practically for Word documents, PowerPoint presentations, email, the Web, and so on.

    Charts are not an end unto themselves. They are one aspect of exploratory data analysis. In addition, this chapter discusses summarizing columns in a table, as well as the interesting idea of using SQL to generate SQL queries.

    Chapter 3, How Different Is Different? explains some key concepts of descriptive statistics, such as averages, p-values, and the chi-square test. The purpose of this chapter is to show how to use such statistics on data residing in tables. The particular statistics and statistical tests are chosen for their practicality, and the chapter focuses on applying the methods, not explaining the underlying theory. Conveniently, most of the statistical tests that we want to do are feasible in Excel and even in SQL.

    SQL Techniques

    Several techniques are very well suited for the combination of SQL and Excel.

    Chapter 4, Where Is It All Happening? Location, Location, Location, explains geography and how to incorporate geographic information into data analysis. Geography starts with locations, described by latitude and longitude. Locations are also described by various levels of geography, such as census blocks, zip code tabulation areas, and the more familiar counties and states, all of which have information available from the Census Bureau (or an equivalent organization in other countries). This chapter also discusses methods for comparing results at different levels of geography. And, finally, no discussion of geography would be complete without maps. Using basic Excel, it is possible to build very rudimentary maps.

    Chapter 5, It’s a Matter of Time, discusses another key attribute of customer behavior: when things occur. This chapter describes how to access features of dates and times in databases, and then how to use this information to understand customers.

    The chapter includes examples for accurately making year-over-year comparisons, for summarizing by day of the week, for measuring durations in days, weeks, and months, and for calculating the number of active customers by day, historically. The chapter ends with a simple animation in Excel—the only use of Visual Basic in the book.

    Chapters 6 and 7 explain one of the most important analytic techniques for understanding customers over time. Survival analysis has its roots in traditional statistics, and it is very well suited to problems related to customers.

    Chapter 6, How Long Will Customers Last? Survival Analysis to Understand Customers and Their Value, introduces the basic ideas of hazard probabilities and survival, explaining how to calculate them easily using the combination of SQL and Excel. Perhaps surprisingly, sophisticated statistical tools are not needed to get started using survival analysis. Chapter 6 then explains how important ideas in survival analysis, such as average customer lifetime, can be used in a business context. It continues by explaining how to put these pieces together into forecasts and customer value calculations.

    Chapter 7, Factors Affecting Survival: The What and Why of Customer Tenure, extends the discussion in three different areas. First, it addresses a key problem in many customer-oriented databases: left truncation. Second, it explains a very interesting idea in survival analysis called competing risks. This idea incorporates the fact that customers leave for different reasons. The third idea is to use survival analysis for before-and-after analysis. That is, how can we quantify what happens to customers when something happens during their lifetime—such as quantifying the effect of enrollment in a loyalty program or of a major billing fiasco?

    Chapters 8, 9, and 10 explain how to understand what customers are purchasing using SQL and Excel.

    Chapter 8, Customer Purchases and Other Repeated Events, covers everything about the purchase—when it occurs, where it occurs, how often—except for the particular items being purchased. This chapter covers RFM, a traditional technique for understanding customer purchase behavior, and various issues with recognizing customers over time. Purchases contain a lot of information, even before we dive into the details of the items.

    The products become the focus in Chapter 9, What’s in a Shopping Cart? Market Basket Analysis, which covers exploratory analysis of purchase behaviors over time. This chapter includes identifying products that may be important drivers of customer behavior. It also covers some interesting visualization methods available in Excel.

    Chapter 10, Association Rules and Beyond, then moves to the formal discussion of association rules, which are combinations of products purchased at the same time or in sequence. Building association rules in SQL is rather sophisticated. The methods in this chapter extend traditional association rule analysis, introducing alternative measures that make them more useful, and show how to produce combinations of different things, such as clicks that result in a purchase (to use an example from the Web). The association rule techniques explained in this chapter are more powerful than techniques available in data mining tools because the techniques are extensible and use additional measures beyond support, confidence, and lift.

    Modeling Techniques

    The next three chapters discuss statistical and data mining modeling techniques and methods.

    Chapter 11, Data Mining Models in SQL, introduces the idea of data mining modeling and the terminology associated with building such models. It also discusses some important types of models that are well suited to business problems and the SQL environment. Look-alike models find things similar to a given example. Lookup models use a lookup table to find model scores.

    This chapter also discusses a more sophisticated modeling technique called naïve Bayesian models. This technique combines information along various business dimensions to estimate an unknown quantity.

    Chapter 12, The Best-Fit Line: Linear Regression Models, covers a more traditional statistical technique: linear regression. Several variants of linear regression are introduced, including polynomial regression, weighted regression, multiple regression, and exponential regression. These are explained graphically, using Excel charts, along with the R² value that measures how well the model fits the data.

    Regression is explained using both Excel and SQL. Although Excel has several built-in functions for regression, an additional method using Solver is more powerful than the built-in functions. This chapter introduces Solver (which is free and bundled with Excel) in the context of linear regression.

    Chapter 13, Building Customer Signatures for Further Analysis, introduces the customer signature. This is a data structure that summarizes what a customer looks like at a particular point in time. Customer signatures are very powerful for modeling.

    This chapter recognizes that although SQL and Excel are quite powerful, more sophisticated tools are sometimes necessary. The customer signature is the right way to summarize customer information under many circumstances. And, SQL is a very powerful tool for this summarization.

    Performance

    One reason for writing SQL queries is performance—by doing at least some of the analytic work close to the data and in an environment that can exploit the resources available to a relational database. The downside to writing a book that is more generally about SQL and not specifically about a particular database is that some tricks and tips are only relevant to a single database.

    Happily, many good practices for writing SQL queries improve performance across many different databases. Chapter 14, "Performance Is the Issue: Using SQL Effectively," is devoted to this topic. In particular, it discusses indexes and how to best take advantage of them. It also covers different ways of writing queries—and why some of them are better from a performance perspective.

    Who Should Read This Book

    This book is designed for several audiences, with varying degrees of technical skills.

    On the less technical side are managers, particularly those with a quantitative bent who are responsible for understanding customers or a business unit. Such people are often quite proficient in Excel, but, alas, much of the data they need resides in relational databases. To help them, this book provides examples of business scenarios where analysis provides useful results. These scenarios are detailed, showing not only the business problem but also the technical approach and the results.

    Another part of the audience consists of people whose job is to understand data and customers, often having a job title including the word analyst. These individuals typically use Excel and other tools, sometimes having direct access to the data warehouse or to some customer-centric database. This book can help by improving SQL querying skills, showing good examples of charts, and introducing survival analysis and association rules for understanding customers and the business.

    An important audience is data scientists who are proficient in tools such as R or Python but who discover that they need to learn about other tools. In the business world, more programming-oriented tools may not be sufficient, and analysts may find themselves having to deal with data residing in relational databases and users who want to see results in Excel.

    At the more technical end are statisticians, who typically use special-purpose tools such as SAS, SPSS, R, and S-plus. However, the data resides in databases. This book can help the very technical with their SQL skills and also provides examples of using analysis to solve particular business problems.

    In addition, database administrators, database designers, and information architects should find this book interesting. The queries shown in the various chapters illustrate what people really want to do with the data and should encourage database administrators and designers to create efficient databases that support these needs.

    I encourage all readers, even the technically proficient, to read (or at least skim) the first three chapters. These chapters introduce SQL, Excel, and statistics all from the perspective of analyzing large quantities of data. This perspective is different from how these subjects are usually introduced. Certain ideas in these chapters, such as the example data, dataflows, SQL syntax and formatting conventions, and good chart design, are used throughout the book.

    Tools You Will Need

    This book is designed to be stand-alone—that is, readers should be able to learn the ideas and gain understanding directly from the text.

    All the SQL in the book has been tested (in Microsoft SQL Server and, with slight variations, Postgres). The datasets and results are available at www.wiley.com/go/dataanalysisusingsqlandexcel2e. Readers who want hands-on experience are encouraged to download the data and run the examples in the book.

    Most examples in the book are vendor-neutral, so they should run with only minor modification on almost any fully functional relational database. I do not recommend Microsoft Access or MySQL, because they lack window functions—key functionality for analytic queries.

    If you do not have a database, several packages are available for downloading; database vendors often have stand-alone versions of their software available at no cost. Some examples: SQL Server Express, a free version of SQL Server, is available from Microsoft. A free version of Oracle is available from Oracle. Postgres is available at www.postgres.org. And other database products are available at no charge.

    What’s on the Website

    The companion website (at www.wiley.com/go/dataanalysisusingsqlandexcel2e) contains datasets used in the book. These datasets contain the following information:

    Reference tables. There are three reference tables, two containing census information (from the 2000 Census) and one containing calendar information about dates.

    Subscribers dataset. This is data describing a subset of customers in a mobile telephone company.

    Purchases dataset. This is data describing customer purchase patterns.

    This data is available for download, along with instructions for loading it into SQL Server and other databases.

    In addition, the companion website has pages with additional information, such as scripts for loading the data into common databases, spreadsheets containing the SQL queries, and all the tables and charts in the book that were generated using Excel.

    Summary

    The idea for this book originated with a colleague’s question about a reference book for SQL for data analysis queries. However, another reference book on SQL is not needed, even one focused on the practical aspects of using the language for querying purposes.

    For analyzing data, SQL cannot be learned in a vacuum. A SQL query, no matter how deftly crafted, is usually not the entire solution to a business problem. The business problem needs to be transformed into a question, which can be answered via a query. The results then need to be presented, often as tables or Excel charts.

    I would extend this further. In the real world, statistics also cannot be learned in a vacuum. Once upon a time, collecting data was a time-consuming and difficult process. Now, data is plentiful. The companion website for this book, for example, puts dozens of megabytes of data just a few clicks away. The problem of analyzing data now extends beyond the realm of a few statistical methods to the processes for managing and extracting data as well.

    This book combines three key ideas into a single thread of solving problems. Throughout my work as a data miner, I have found SQL, Excel, and statistics to be critical tools for analyzing data. More important than the specific technical skills, though, I hope this book helps readers improve their analytic skills and gives them ideas so they can better understand their customers and their businesses.

    CHAPTER 1

    A Data Miner Looks at SQL

    Data is being collected everywhere. Every transaction, every web page visit, every payment—and much more—is filling databases, relational and otherwise, with raw data. Computing power and storage have become cost effective, to the point where today’s smartphones are more powerful than the supercomputers of yesteryear. Databases are no longer merely platforms for storing data; they are powerful engines for transforming data into useful information about customers and products and business practices.

    The focus on data mining has historically been on complex algorithms developed by statisticians and machine-learning specialists. Once upon a time, data mining required downloading source code from a research lab or university, compiling the code to get it to run, and sometimes even debugging it. By the time the data and software were ready, the business problem had lost urgency.

    This book takes a different approach because it starts with the data. The billions of transactions that occur every day—credit card swipes, web page visits, telephone calls, and so on—are now often stored in relational databases. Relational database engines count among the most powerful and sophisticated software products in the business world, so they are well suited for the task of extracting useful information. And the lingua franca of relational databases is SQL.

    The focus of this book is more on data and what to do with data and less on theory. Instead of trying to squeeze every last iota of information from a small sample—the goal of much statistical analysis—the goal is instead to find something useful in the gigabytes and terabytes of data stored by the business. Instead of asking programmers to learn data analysis, the goal is to give data analysts—and others—a solid foundation for using SQL to learn from data.

    This book strives to assist anyone facing the problem of analyzing data stored in large databases, by describing the power of data analysis using SQL and Excel. SQL, which stands for Structured Query Language, is a language for extracting information from data. Excel is a popular and useful spreadsheet for analyzing smaller amounts of data and presenting results.

    The various chapters of this book build skill in and enthusiasm for SQL queries and the graphical presentation of results. Throughout the book, the SQL queries are used for more and more sophisticated types of analyses, starting with basic summaries of tables, and moving to data exploration. The chapters continue with methods for understanding time-to-event problems, such as when customers stop, and market basket analysis for understanding what customers are purchasing. Data analysis is often about building models, and—perhaps surprisingly to most readers—some models can be built directly in SQL, as described in Chapter 11, Data Mining Models in SQL. An important part of any analysis, though, is constructing the data in a format suitable for modeling—customer signatures.

    The final chapter takes a step back from analysis to discuss performance. This chapter is an overview of a topic, concentrating on good performance practices that work across different databases.

    This chapter introduces SQL for data analysis and data mining. Admittedly, this introduction is heavily biased because the purpose is for querying databases rather than building and managing them. SQL is presented from three different perspectives, some of which may resonate more strongly with different groups of readers. The first perspective is the structure of the data, with a particular emphasis on entity-relationship diagrams. The second is the processing of data using dataflows, which happen to be what is under the hood of most relational database engines. The third, and strongest thread through subsequent chapters, is the syntax of SQL itself. Although data is well described by entities and relationships, and processing by dataflows, the ultimate goal is to express the transformations in SQL and present the results often through Excel.

    Databases, SQL, and Big Data

    Collecting and analyzing data is a major activity, so many tools are available for this purpose. Some of these focus on big data (whatever that might mean). Some focus on consistently storing the data quickly. Some on deep analysis. Some have pretty visual interfaces; others are programming languages.

    SQL and relational databases are a powerful combination that is useful in any arsenal of tools for analysis, particularly ad hoc analyses:

    A mature and standardized language for accessing data

    Multiple vendors, including open source

    Scalability over a very broad range of hardware

    A non-programming interface for data manipulations

    Before continuing with SQL, it is worth looking at SQL in the context of other tools.

    What Is Big Data?

    Big data is one of those concepts whose definition changes over time. In the 1800s, when statistics was first being invented, researchers worked with dozens or hundreds of rows of data. That might not seem like a lot, but if you have to add everything up with a pencil and paper, and do long division by hand or using a slide rule, then it certainly seems like a lot of data.

    The concept of big data has always been relative, at least since data processing was invented. The difference is that now data is measured in gigabytes and terabytes—enough bytes to fit the text in all the books in the Library of Congress—and we can readily carry it around with us. The good news is that analyzing big data no longer requires trying to get data to fit into very limited amounts of memory. The bad news is that simply scrolling through big data is not sufficient to really understand it.

    This book does not attempt to define big data. Relational databases definitely scale well into the tens of terabytes of data—big by anyone’s definition. They also work efficiently on smaller datasets, such as the ones accompanying this book.

    Relational Databases

    Relational databases, which were invented in the 1970s, are now the storehouse of mountains of data available to businesses. To a large extent, the popularity of relational databases rests on what are called ACID properties of transactions:

    Atomicity

    Consistency

    Isolation

    Durability

    These properties basically mean that when data is stored or updated in a database, it really is changed. The databases have transaction logs and other capabilities to ensure that changes really do happen and that modified data is visible when the data modification step completes. (The data should even survive major failures such as the operating system crashing.) In practice, databases support transactions, logs, replication, concurrent access, stored procedures, security, and a host of features suitable for designing real-world applications.

    From our perspective, a more important attribute of relational databases is their ability to take advantage of the hardware they are running on—multiple processors, memory, and disk. When you run a query, the optimization engine first translates the SQL query into the appropriate lower-level algorithms that exploit the available resources. The optimization engine is one of the reasons why SQL is so powerful: the same query running on a slightly different machine or slightly different data might have very different execution plans. The SQL remains the same; it is the optimization engine that chooses the best way to execute the code.

    Hadoop and Hive

    One of the technologies highly associated with big data is Hadoop in conjunction with MapReduce. Hadoop is an open-source project, meaning that the code is available for free online, with the goal of developing a framework for reliable, scalable, distributed computing. (The SQL world has free open-source databases such as MySQL, Postgres, and SQLite; in addition, several commercial databases have free versions.) In practice, Hadoop is a platform for processing humongous amounts of data, particularly data from sources such as web logs, high-energy physics, high volumes of streaming images, and other voluminous data sources.

    The roots of MapReduce go back to the 1960s and a language called Lisp. In the late 1990s, Google developed a parallel framework around MapReduce, and now it is a framework for programming data-intensive tasks on large grid computers. It became popular because both Google and Yahoo developed MapReduce engines; and, what big successful internet companies do must be interesting.

    Hadoop is actually a family of technologies, and MapReduce is only one application. Built on Hadoop are other tools, all with colorful names such as Hive, Mahout, Cassandra, and Pig. Although the underlying technology is different from relational databases, there are similarities in the problems these technologies are trying to solve. Within the Hadoop world are languages, such as CQL, that are based on SQL syntax. Hive, in particular, is being developed into a fully functional SQL engine and can run many of the queries in this book.

    NoSQL and Other Types of Databases

    NoSQL refers to a type of database that, at first sight, might seem to be the antithesis of SQL. Actually, the No stands for Not Only. This terminology can be used to refer to a variety of different database technologies:

    Key-value pairs, where the columns of data can vary between rows—and, quite importantly—the columns themselves can contain lists of things

    Graph-based databases, which specialize in representing and handling problems from graph theory

    Document databases, which are used for analyzing documents and other texts

    Geographic information systems (GIS), which are used for geographic analysis

    These types of databases are often specialized for particular functions. For instance, key-value pair databases provide excellent performance in a web environment for managing data about online sessions.

    These technologies are really complementary technologies to traditional relational databases rather than replacement technologies. For instance, key-value databases are often used on a website in conjunction with relational databases that store history. Graph and document databases are often used in conjunction with data warehouses that support more structured information.

    Further, good ideas are not limited to a single technology. One of the motivations for writing a second edition of this book is that database technology is improving. SQL and the underlying relational database technology increasingly support functionality similar to NoSQL databases. For example, recursive common table expressions provide functionality for traversing graphs. Full text indexes provide functionality for working with text. Most databases offer extensions for geographic data. And, increasingly databases are providing better functionality for nested tables and portable data formats, such as XML and JSON.

    SQL

    SQL was designed to work on structured data—think tables with well-defined columns and rows, much like an Excel spreadsheet. Much of the power of SQL comes from the power of the underlying database engine and the optimizer. Many people use databases running on powerful computers, without ever thinking about the underlying hardware. That is the power of SQL: The same query that runs on a mobile device can run on the largest grid computer, taking advantage of all available hardware with no changes to the query.

    The part of the SQL language used for analysis is the SELECT statement. Much of the rest of the language is about getting data into databases. Our concern is getting information out of them to solve business problems. The SELECT statement describes what the results look like, freeing the analyst to think about what to do, instead of how to do it.

    TIP

    SQL (when used for querying) is a descriptive language rather than a procedural language. It describes what needs to be done, letting the SQL engine optimize the code for the particular data, hardware, and database layout where the query is running, and freeing the analyst to think more about the business problem.
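
    For example, here is a minimal query in that spirit (a sketch using the ZipCensus table and its zcta5 and stab columns, which are introduced later in this chapter). It describes a result, zip codes in Massachusetts along with their state abbreviation, without saying anything about how the engine should find the rows:

    SELECT zc.zcta5, zc.stab
    FROM ZipCensus zc
    WHERE zc.stab = 'MA'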

    Picturing the Structure of the Data

    In the beginning, there is data. Although data may seem to be without form and chaotic, there is an organization to it, an organization based on tables and columns and relationships between and among them. Relational databases store structured data—that is, tables with well-defined rows and columns.

    This section describes databases by the data they contain. It introduces entity-relationship diagrams, in the context of the datasets (and associated data models) used with this book. These datasets are not intended to represent all the myriad different ways that data might be stored in databases; instead, they are intended as practice data for the ideas in the book. They are available on the companion website, along with all the examples in the book.

    What Is a Data Model?

    The definition of the tables, the columns, and the relationships among them constitute the data model for the database. A well-designed database actually has two data models. The logical data model explains the database in terms that business users understand. The logical data model communicates the contents of the database because it defines many business terms and how they are stored in the database.

    The physical data model explains how the database is actually implemented. In many cases, the physical data model is identical to or very similar to the logical data model. That is, every entity in the logical data model corresponds to a table in the database; every attribute corresponds to a column. This is true for the datasets used in this book.

    On the other hand, the logical and physical data models can differ. For instance, in more complicated databases, certain performance issues might drive physical database design. A single entity might have rows split into several tables to improve performance, enhance security, enable backup-restore functionality, or facilitate database replication. Multiple similar entities might be combined into a single table, especially when they have many attributes in common. Or, a single entity could have different columns in different tables, with the most commonly used columns in one table and less commonly used ones in another table (this is called vertical partitioning, which some databases support directly without having to resort to multiple tables). Often these differences are masked through the use of views and other database constructs.

    The logical model is quite important for analytic purposes because it provides an understanding of the data from the business perspective. However, queries actually run on the database represented by the physical model, so it is convenient that the logical and physical structures are often quite similar.

    What Is a Table?

    A table is a set of rows and columns that describe multiple instances of something. Each row represents one instance—such as a single purchase made by a customer, or a single visit to a web page, or a single zip code with its demographic details. Each column contains one attribute for one instance. SQL tables represent unordered sets, so the table does not have a first row or a last row—unless a specific column such as an id or creation date provides that information.

    Any given column contains the same genre of information for all rows. So a zip code column should not be the sent-to zip code in one row and the billed-to zip code in another. Although these are both zip codes, they represent two different uses, so they belong in two different columns.

    Columns, unless declared otherwise, are permitted to take on the value NULL, meaning that the value is not available or is unknown. For instance, a row describing customers might contain a column for birthdate. This column would take on the value of NULL for all rows where the birthdate is not known.

    A table can have as many columns as needed to describe an instance, although for practical purposes tables with more than a few hundred columns are rare (and most relational databases do have an upper limit on the number of columns in a single table, typically in the low thousands). A table can have as many rows as needed; here the numbers easily rise to the millions and even billions.
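
    As a small sketch (this is not one of the tables from the book’s datasets; the table and column names are invented for illustration), a table definition spells out the columns, their types, and whether they allow NULL values:

    CREATE TABLE SampleCustomers (
        CustomerId INT NOT NULL,
        FirstName VARCHAR(50) NOT NULL,
        BirthDate DATE NULL
    )

    Here the BirthDate column allows NULL values, so a row can still be stored for a customer whose birthdate is unknown.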

    As an example, Table 1.1 shows a few rows and columns from ZipCensus (which is available on the companion website). This table shows that each zip code is assigned to a particular state, which is the abbreviation in the stab column (STate ABbreviation). The pctstate column is an indicator that zip codes sometimes span state boundaries. For instance, 10004 is a zip code in New York City that covers Ellis Island. In 1998, the Supreme Court split jurisdiction of the island between New York and New Jersey, but the Post Office did not change the zip code. So, 10004 has a portion in New York and a smaller, unpopulated portion in New Jersey.

    Table 1.1: Some Rows and Columns from ZipCensus

    Each zip code also has an area, measured in square miles and recorded in the landsqmi column. This column contains a number, and the database does not know what this number means. It could be area in acres, or square kilometers, or square inches, or pyongs (a Korean unit for area). What the number really means depends on information not stored in the tables. The term metadata is used to describe such information about what the values in columns mean. Similarly, fipco is a numeric value that encodes the state and county, with the smallest value being 1001, for Autauga County in Alabama.

    Databases typically have some metadata information about each column. Conveniently, there is often a label or description (and it is a good idea to fill this in when creating a table). More importantly, each column has a data type and a flag specifying whether NULL values are allowed. The next two sections discuss these two topics because they are quite important for analyzing data.

    Allowing NULL Values

    Nullability is whether or not a column may contain the NULL value. By default in SQL, a column in any row can contain a special value that says that the value is empty or unknown. Although this is quite useful, NULLs have unexpected side effects. Almost every comparison returns unknown if any argument is NULL, and unknown is treated as false.

    The following very simple query looks like it is counting all the rows in the ZipCensus table where the FIPCo column is not NULL. (<> is the SQL operator for not equals.)

    SELECT COUNT(*)

    FROM ZipCensus zc

    WHERE zc.fipco <> NULL

    Alas, this query always returns zero. When a NULL value is involved in a comparison—even not equals—the result is almost always NULL, which is treated as false.

    Of course, determining which rows have NULL values is quite useful, so SQL provides the special operators IS NULL and IS NOT NULL. These behave as expected, with the preceding query returning 32,845 instead of 0.
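
    Rewriting the comparison with IS NOT NULL produces the intended count:

    SELECT COUNT(*)
    FROM ZipCensus zc
    WHERE zc.fipco IS NOT NULL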

    The problem is more insidious when comparing column values, either within a single table or between tables. For instance, the column fipco contains the primary county of a zip code and fipco2 contains the second county, if any. The following query counts the number of zip codes in total and the number where these two county columns have different values. This query uses conditional aggregation, which is when a conditional statement (CASE) is the argument to an aggregation function such as SUM():

    SELECT COUNT(*),

          SUM(CASE WHEN fipco <> fipco2 THEN 1 ELSE 0 END) as numdifferent

    FROM ZipCensus zc

    Or does it? The columns fipco and fipco2 should always have different values, so the two counts should be the same. In reality, the query returns the values 32,989 and 8,904. And changing the not-equals to equals shows that there are 0 rows where the values are equal. What is happening on the other 32,989 − 8,904 = 24,085 rows? Once again, the problem is NULL values. When fipco2 is NULL, the test always fails.
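
    One way to see where the remaining rows go is to count the NULL cases explicitly. The following sketch adds a conditional sum for rows where fipco2 is NULL (rows where fipco itself is NULL also fail the comparison):

    SELECT COUNT(*),
          SUM(CASE WHEN fipco <> fipco2 THEN 1 ELSE 0 END) as numdifferent,
          SUM(CASE WHEN fipco2 IS NULL THEN 1 ELSE 0 END) as numnull2
    FROM ZipCensus zc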

    When a table is created, there is the option to allow NULL values on each column in the table. This is a relatively minor decision when creating the table. However, making mistakes on columns with NULL values is easy.

    WARNING

    Designing databases is different from analyzing the data inside them. For example, NULL columns can cause unexpected—and inaccurate—results when analyzing data and make reading queries difficult. Be very careful when using columns that allow them.

    NULL values may seem troublesome, but they solve an important problem: how to represent values that are not present. One alternative method is to use a special value, such as -99 or 0. However, the database would just treat this as a regular value, so calculations (such as MIN(), MAX(), and SUM()) would be incorrect.

    Another alternative would be to have separate flags indicating whether or not a value is NULL. That would make even simple calculations cumbersome. A + B, for instance, would have to be written as something like (CASE WHEN A_flag = 1 AND B_flag = 1 THEN A + B END). Given the alternatives, having NULLs in the database is a practical approach to handling missing values.
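
    When a default value really is wanted for a particular calculation, it can be supplied at query time rather than stored in the data. The standard COALESCE() function returns its first non-NULL argument; for example, this sketch reports zero in place of a missing second county code:

    SELECT zc.zcta5, COALESCE(zc.fipco2, 0) as fipco2_with_default
    FROM ZipCensus zc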

    Column Types

    The second important attribute of a column is its type, which tells the database exactly how to store values. A well-designed database usually has parsimonious columns, so if two characters suffice for a code, there is no reason to store eight. There are a few important aspects of column types and the roles that columns play.

    Primary key columns uniquely identify each row in the table. That is, no two rows have the same value for the primary key and the primary key is never NULL. Databases guarantee that primary keys are unique by refusing to insert rows with duplicate primary keys. Chapter 2, What’s in a Table? Getting Started with Data Exploration, shows techniques to determine whether this condition holds for any given column. Typically the primary key is a single column, although SQL does allow composite primary keys, which consist of multiple columns.
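
    In the table definition, the primary key is usually declared explicitly so the database can enforce uniqueness. The following is a brief sketch (the table and column names are invented for illustration) showing a single-column key and a composite key:

    CREATE TABLE SampleOrders (
        OrderId INT NOT NULL PRIMARY KEY,
        OrderDate DATE NOT NULL
    )

    CREATE TABLE SampleOrderLines (
        OrderId INT NOT NULL,
        LineNumber INT NOT NULL,
        PRIMARY KEY (OrderId, LineNumber)
    )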

    Numeric values are values that support arithmetic and other mathematical operations. In SQL, these can be stored in different ways, such as floating-point numbers, integers, and decimals. The details of how these formats differ are much less important than what can be done with numeric data types.

    Within the category of numeric types, one big difference is between integers, which have no fractional part, and real numbers, which do. When doing arithmetic on integers, the result might be an integer or it might be a real number, depending on the database. So 5/2 might evaluate to 2 rather than 2.5, and the average of 1 and 2 might turn out to be 1 instead of 1.5, depending on the database. To avoid this problem, examples in this book multiply integer values by 1.0 to convert them to decimal values when necessary.
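
    For instance, the following sketch (written for databases that allow a SELECT without a FROM clause) shows both behaviors side by side; on SQL Server, the first column returns 2 and the second returns 2.5:

    SELECT 5 / 2 as integer_ratio,
          5 * 1.0 / 2 as decimal_ratio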

    Of course, just because it walks like a duck and talks like a duck does not mean that it is a duck. Some values look like numbers, but really are not. Zip codes (in the United States) are an example, as are primary key columns stored as numbers. What is the sum of two zip codes? What does it mean to multiply a primary key value by 2? These questions yield nonsense results (although the values can be calculated). Zip codes and primary keys happen to look like numbers, but they do not behave like numbers.

    The datasets used in this book use character strings for zip codes and numbers for primary keys. To distinguish such false numeric values from real numbers, the values are often left padded with zeros to get a fixed length. After all, the zip code for Harvard Square in Cambridge, MA, is 02138, not 2,138.
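
    If such codes were stored as numbers, the display form could be reconstructed by converting to a character string and left padding with zeros. Here is a sketch in SQL Server syntax (string functions vary by database; the Appendix lists equivalents), using a hypothetical table and numeric column:

    SELECT RIGHT('00000' + CAST(zipnum AS VARCHAR(5)), 5) as zipcode
    FROM SomeTableWithNumericZips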

    Dates and date-times are exactly what their names imply. SQL provides several functions for common operations, such as determining the number of days between two dates, extracting the year and month, and comparing two times. Unfortunately, these functions often differ between databases. The Appendix provides a list of equivalent functions in different databases for functions used in this book, including date and time functions.
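
    As a small illustration of these operations, shown here in SQL Server-style syntax (the Appendix lists the equivalents in other databases):

        SELECT DATEDIFF(day, '2015-01-01', '2015-12-31') AS days_between,   -- 364
               YEAR('2015-12-31') AS yr,                                    -- 2015
               MONTH('2015-12-31') AS mon                                   -- 12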

    Another type of data is character string data. These are commonly codes, such as the state abbreviation in the zip code table, or descriptions, such as a product name or the full state name. SQL has some rudimentary functions for handling character strings, which support basic text processing. Spaces at the end of a character string are ignored, so the condition 'NY' = 'NY ' evaluates to TRUE. However, spaces at the beginning of a character string are counted, so 'NY' = ' NY' evaluates to FALSE. When working with data in character columns, it is worth checking whether values have spaces at the beginning, a topic discussed in Chapter 2.
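
    A quick sketch of this behavior, using the widely available LTRIM() function to strip leading spaces (string functions do vary somewhat across databases):

        SELECT (CASE WHEN 'NY' = ' NY' THEN 'match' ELSE 'no match' END) AS untrimmed,      -- 'no match'
               (CASE WHEN 'NY' = LTRIM(' NY') THEN 'match' ELSE 'no match' END) AS trimmed  -- 'match'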

    What Is an Entity-Relationship Diagram?

    The relational in the name relational databases refers to the fact that different tables relate to each other via keys, and to the fact that columns in a given row relate to their values via the column names. For instance, a zip code column in any table can link (that is, relate) to the zip code table. The key makes it possible to look up information available in the zip code table. Figure 1.1 shows the relationships between tables in the purchases dataset.

    [Figure 1.1 diagram: Customers, Calendar, and Campaigns connect to Orders; Orders connects to OrderLines and ZipCensus; OrderLines connects to Products; ZipCensus connects to ZipCounty. Each entity box shows its columns, with the primary key on the left.]

    Figure 1.1: This entity-relationship diagram shows the relationship among entities in the purchase dataset. Each entity corresponds to one table.

    These relationships have a characteristic called cardinality, which is the number of items related on each side. For instance, the relationship between Orders and ZipCensus is a zero/one-to-many relationship. This specifies that every row in Orders has at most one zip code, and that every zip code has zero, one, or more orders. Typically, this relationship is implemented by having a column in the first table contain the zip code; this column is called a foreign key. A foreign key is just a column whose contents are the primary key of another table (ZipCode in Orders is a foreign key; zcta5 in ZipCensus is a primary key). To indicate no match, the foreign key column would typically be NULL.
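
    This relationship is what makes a join possible. The following sketch looks up zip code information for each order, using the ZipCode, zcta5, and totpop columns described in this chapter:

        SELECT o.OrderId, o.ZipCode, zc.totpop
        FROM Orders o LEFT OUTER JOIN
             ZipCensus zc
             ON o.ZipCode = zc.zcta5
        -- The LEFT OUTER JOIN keeps orders whose zip code is NULL or has no match in ZipCensus.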

    The zero/one-to-one relationship says that there is at most one match between two tables. This is often a subsetting relationship. For instance, a database might contain sessions of web visits, some of which result in a purchase. Any given session would have zero or one purchases. Any given purchase would have exactly one session.
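
    A sketch of how such a relationship might be queried, using hypothetical Sessions and Purchases tables that are not part of the book's datasets:

        SELECT s.SessionId, p.PurchaseId
        FROM Sessions s LEFT OUTER JOIN
             Purchases p
             ON p.SessionId = s.SessionId
        -- Every session appears exactly once; PurchaseId is NULL for sessions with no purchase.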

    Another relationship is a many-to-many relationship. A customer might purchase many different products and any given product might be purchased by many different customers. In fact, the purchase dataset does have a many-to-many relationship between Orders and Products; this relationship is represented by the OrderLines entity, which has a zero/one-to-many relationship with each of those.
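
    Resolving a many-to-many relationship means joining through the intermediate table. A sketch, assuming the product id column on OrderLines is called ProductId:

        SELECT o.CustomerId, ol.ProductId
        FROM Orders o JOIN
             OrderLines ol
             ON ol.OrderId = o.OrderId
        -- Each customer can appear with many products, and each product with many customers.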

    Some relationships hold only one-at-a-time. For example, a customer resides in a particular zip code but might move over time. Or, at any given time, a customer might have a particular handset or billing plan, but these can change over time.

    With this brief introduction to entity-relationship diagrams, the following sections describe the datasets used in this book.

    The Zip Code Tables

    The ZipCensus table consists of more than one hundred columns describing each zip code, or, strictly speaking, each zip code tabulation area (ZCTA) defined by the Census Bureau. The column zcta5 is the zip code. This information was gathered from the Missouri Census Data Center, based on US Census data, specifically the American Community Survey.

    The first few columns consist of overview information about each zip code, such as the state, the county, population (totpop), latitude, and longitude. There is a column for additional zip codes because the zip-code tabulation area does not necessarily match 100% with actual zip codes. In addition to population, there are four more counts: the number of households (tothhs), the number of families (famhhs), the number of housing units (tothus), and the number of occupied housing units (occhus).

    The following information is available for the general population:

    Proportion and counts in various age groups

    Proportion and counts by gender

    Proportion and counts in various racial categories

    Proportion and counts of households and families by income

    Information about occupation categories and income sources

    Information about marital status

    Information about educational attainment

    And more

    Information on the columns and exact definitions of terms such as ZCTA are available at http://mcdc.missouri.edu/data/georef/zcta_master.Metadata.html.

    The second zip code table is ZipCounty, a companion table that maps zip codes to counties. It contains information such as the following:

    County name

    Post office name

    Population of county

    Number of households in county

    County land area

    This table has one row for each zip code, so it can be joined to ZipCensus and to other tables using the ZipCode column. The two tables are from different time frames and sources so not all zip codes match between the two tables—a common problem when working with data.
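
    One way to see the mismatch is a full outer join on the zip code, keeping only rows that appear in one table but not the other (FULL OUTER JOIN is not available in every database):

        SELECT zc.zcta5, zco.ZipCode
        FROM ZipCensus zc FULL OUTER JOIN
             ZipCounty zco
             ON zc.zcta5 = zco.ZipCode
        WHERE zc.zcta5 IS NULL OR zco.ZipCode IS NULL
        -- The rows returned are zip codes present in only one of the two tables.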

    Subscription Dataset

    The subscription data has only two entities, shown in Figure 1.2. This dataset paints a picture of a subscriber at a given point in time (the date when the snapshot was created).

    [Figure 1.2 diagram: two entity boxes, Subscribers and Calendar, whose primary keys are SubscriberId and Date, respectively, shown alongside their other columns.]

    Figure 1.2: An entity-relationship diagram with only two entities describes the data in the customer snapshot dataset.

    The Subscribers table describes customers in a telephone company. It is a snapshot that shows what customers (and former customers) look like as of a particular date. The columns in this table describe customers as they start and as they stop. This particular snapshot table has no intermediate behavior information.

    The Calendar table is a general-purpose table that has information about dates, including:

    Year

    Month number

    Month name

    Day of month

    Day of week

    Day of year

    Holiday information

    This table has the date as a primary key, and covers dates from 1950 through 2050.
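
    The Calendar table is typically used by joining a date column in another table to its primary key to pick up attributes of that date. A sketch, where StartDate is an assumed column name on Subscribers used only for illustration:

        SELECT s.SubscriberId, cal.Date
        FROM Subscribers s JOIN
             Calendar cal
             ON cal.Date = s.StartDate   -- StartDate is an assumption, not a documented column
        -- The query could also select the year, month, day-of-week, and holiday columns listed above.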

    Purchases Dataset

    The purchases dataset contains entities typical of retail purchases; the entities in this dataset and their relationships are shown in Figure 1.1 (page 11):

    Customers

    Orders

    OrderLines

    Products

    Campaigns

    ZipCensus

    ZipCounty

    Calendar

    This data captures the important entities associated with retail purchases. The most detailed information is in OrderLines, which describes each of the items in an order. To understand the name of the table, think of a receipt. Each line on the receipt represents a different item in the purchase. In addition, the line has other information such as the product id, the price, and the number of items, which are all in this table.

    The Products table provides information such as the product group name and the full price of a product. The table does not contain detailed product names. These were removed as part of the effort to anonymize the data.

    To tie all the items in a single purchase together, each row of OrderLines has an OrderId. Each OrderId, in turn, represents one row in the Orders table, which has information such as the date and time of the purchase, where the order was shipped to, and the type of payment. It also contains the total dollar amount of the purchase, summed up from the individual items. Each order line is in exactly one order and each order has one or more order lines. This relationship is described as a one-to-many relationship between these tables.
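
    The one-to-many relationship shows up directly in queries that aggregate order lines up to the order level, such as this sketch that counts the lines in each order:

        SELECT o.OrderId, COUNT(*) AS numlines
        FROM Orders o JOIN
             OrderLines ol
             ON ol.OrderId = o.OrderId
        GROUP BY o.OrderId
        -- Every order line matches exactly one order, so COUNT(*) is the number of lines per order.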

    Just as the OrderId ties multiple order lines into an order, the CustomerId assigns orders made at different points in time to the same customer. The existence of the CustomerId prompts the question of how it is created. In one sense, it makes no difference how it is created; the CustomerId is simply a given, defining the customer in the database. Is it doing a good job? That is, are a single customer’s purchases being tied together most of the time? The aside The Customer ID: Identifying Customers Over Time discusses the creation of customer IDs.
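
    Whatever its origin, the CustomerId is what ties a customer’s orders together, as in this sketch that counts orders per customer (assuming, as Figure 1.1 implies, that Orders carries the CustomerId column):

        SELECT o.CustomerId, COUNT(*) AS numorders
        FROM Orders o
        GROUP BY o.CustomerId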

    THE CUSTOMER ID: IDENTIFYING CUSTOMERS OVER TIME

    The CustomerId column combines transactions over time into a single grouping, the customer (or household or
