Database normalization
In the field of relational database design, normalization is a systematic way of ensuring that a database structure is
suitable for general-purpose querying and free of certain undesirable characteristics—insertion, update, and deletion
anomalies—that could lead to a loss of data integrity.[1]
Edgar F. Codd, the inventor of the relational model, introduced the concept of normalization and what we now know
as the First Normal Form (1NF) in 1970.[2] Codd went on to define the Second Normal Form (2NF) and Third
Normal Form (3NF) in 1971,[3] and Codd and Raymond F. Boyce defined the Boyce-Codd Normal Form (BCNF) in
1974.[4] Higher normal forms were defined by other theorists in subsequent years, the most recent being the sixth normal form (6NF), introduced by Chris Date, Hugh Darwen, and Nikos Lorentzos in 2002.[5]
Informally, a relational database table (the computerized representation of a relation) is often described as
"normalized" if it is in the Third Normal Form.[6] Most 3NF tables are free of insertion, update, and deletion
anomalies, i.e. in most cases 3NF tables adhere to BCNF, 4NF, and 5NF (but typically not 6NF).
A standard piece of database design guidance is that the designer should create a fully normalized design; selective
denormalization can subsequently be performed for performance reasons.[7] However, some modeling disciplines,
such as the dimensional modeling approach to data warehouse design, explicitly recommend non-normalized
designs, i.e. designs that in large part do not adhere to 3NF.[8]
Objectives of normalization
A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated
using a "universal data sub-language" grounded in first-order logic.[9] (SQL is an example of such a data
sub-language, albeit one that Codd regarded as seriously flawed.)[10] Querying and manipulating the data within an
unnormalized data structure, such as the following non-1NF representation of customers' credit card transactions,
involves more complexity than is really necessary:
[Table: each customer (Jones, Wilkins, Stevens) is paired with a nested, repeating group of that customer's transactions]
To each customer there corresponds a repeating group of transactions. The automated evaluation of any query
relating to customers' transactions therefore would broadly involve two stages:
1. Unpacking one or more customers' groups of transactions, allowing the individual transactions in a group to be examined, and
2. Deriving a query result based on the results of the first stage.
For example, in order to find out the monetary sum of all transactions that occurred in October 2003 for all
customers, the system would have to know that it must first unpack the Transactions group of each customer, then
sum the Amounts of all transactions thus obtained where the Date of the transaction falls in October 2003.
One of Codd's important insights was that this structural complexity could always be removed completely, leading to
much greater power and flexibility in the way queries could be formulated (by users and applications) and evaluated
(by the DBMS). The normalized equivalent of the structure above would look like this:
[Table: one row per individual transaction, with the customer, transaction ID, date, and amount each in its own column]
Now each row represents an individual credit card transaction, and the DBMS can obtain the answer of interest,
simply by finding all rows with a Date falling in October, and summing their Amounts. All of the values in the data
structure are on an equal footing: they are all exposed to the DBMS directly, and can directly participate in queries,
whereas in the previous situation some values were embedded in lower-level structures that had to be handled
specially. Accordingly, the normalized design lends itself to general-purpose query processing, whereas the
unnormalized design does not.
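As a concrete sketch of this idea in SQL (the table and column names here are illustrative assumptions, not taken from the original example), the normalized design and the October 2003 query might look like this:

    -- One row per individual credit card transaction.
    CREATE TABLE transactions (
        customer VARCHAR(50)    NOT NULL,  -- e.g. 'Jones', 'Wilkins', 'Stevens'
        tr_id    INTEGER        NOT NULL,
        tr_date  DATE           NOT NULL,
        amount   DECIMAL(10, 2) NOT NULL,
        PRIMARY KEY (customer, tr_id)
    );

    -- Sum of all transactions that occurred in October 2003, for all customers.
    SELECT SUM(amount)
    FROM transactions
    WHERE tr_date >= DATE '2003-10-01'
      AND tr_date <  DATE '2003-11-01';

Because every value occupies a column of its own, the DBMS needs no special unpacking step; an ordinary filter-and-aggregate query suffices.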
The objectives of normalization beyond 1NF were stated as follows by Codd:
1. To free the collection of relations from undesirable insertion, update and deletion dependencies;
2. To reduce the need for restructuring the collection of relations as new types of data are introduced,
and thus increase the life span of application programs;
3. To make the relational model more informative to users;
4. To make the collection of relations neutral to the query statistics, where these statistics are liable to
change as time goes by.
—E.F. Codd, "Further Normalization of the Data Base Relational Model"[11]
The sections below give details of each of these objectives.
Multivalued dependency
A multivalued dependency is a constraint according to which the presence of certain rows in a table implies the presence of certain other rows.
Join dependency
A table T is subject to a join dependency if T can always be recreated by joining multiple tables each having a
subset of the attributes of T.
Superkey
A superkey is an attribute or set of attributes that uniquely identifies rows within a table; in other words, two distinct rows always differ in at least one attribute of any superkey. {Employee ID, Employee Address, Skill}
would be a superkey for the "Employees' Skills" table; {Employee ID, Skill} would also be a superkey.
Candidate key
A candidate key is a minimal superkey, that is, a superkey for which we can say that no proper subset of it is
also a superkey. {Employee ID, Skill} would be a candidate key for the "Employees' Skills" table.
Non-prime attribute
A non-prime attribute is an attribute that does not occur in any candidate key. Employee Address would be a
non-prime attribute in the "Employees' Skills" table.
Primary key
Most DBMSs require a table to be defined as having a single unique key, rather than a number of possible
unique keys. A primary key is a key which the database designer has designated for this purpose.
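As an illustration (a minimal SQL sketch; the identifier names are assumptions), the "Employees' Skills" table described above could be declared with its candidate key designated as the primary key:

    -- "Employees' Skills": the candidate key {Employee ID, Skill} serves as
    -- the primary key; Employee Address is a non-prime attribute.
    CREATE TABLE employees_skills (
        employee_id      INTEGER      NOT NULL,
        skill            VARCHAR(50)  NOT NULL,
        employee_address VARCHAR(100) NOT NULL,
        PRIMARY KEY (employee_id, skill)
    );

Note that {Employee ID, Employee Address, Skill} remains a superkey of this table, but because its proper subset {Employee ID, Skill} is also a superkey, it is not minimal and therefore not a candidate key.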
Normal forms
The normal forms (abbrev. NF) of relational database theory provide criteria for determining a table's degree of
vulnerability to logical inconsistencies and anomalies. The higher the normal form applicable to a table, the less
vulnerable it is to inconsistencies and anomalies. Each table has a "highest normal form" (HNF): by definition, a
table always meets the requirements of its HNF and of all normal forms lower than its HNF; also by definition, a
table fails to meet the requirements of any normal form higher than its HNF.
The normal forms are applicable to individual tables; to say that an entire database is in normal form n is to say that
all of its tables are in normal form n.
Newcomers to database design sometimes suppose that normalization proceeds in an iterative fashion, i.e. a 1NF
design is first normalized to 2NF, then to 3NF, and so on. This is not an accurate description of how normalization
typically works. A sensibly designed table is likely to be in 3NF on the first attempt; furthermore, if it is 3NF, it is
overwhelmingly likely to have an HNF of 5NF. Achieving the "higher" normal forms (above 3NF) does not usually
require an extra expenditure of effort on the part of the designer, because 3NF tables usually need no modification to
meet the requirements of these higher normal forms.
The main normal forms are summarized below.
Normal form | Defined by | Brief definition
First normal form (1NF) | Two versions: E.F. Codd (1970), C.J. Date (2003) [12] | Table faithfully represents a relation and has no repeating groups
Fourth normal form (4NF) | Ronald Fagin (1977) [17] | Every non-trivial multivalued dependency in the table is a dependency on a superkey
Fifth normal form (5NF) | Ronald Fagin (1979) [18] | Every non-trivial join dependency in the table is implied by the superkeys of the table
Sixth normal form (6NF) | C.J. Date, Hugh Darwen, and Nikos Lorentzos (2002) [5] | Table features no non-trivial join dependencies at all (with reference to the generalized join operator)
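For example (an illustrative SQL sketch, not drawn from the source), a table that records a course's recommended books and its lecturers in the same relation has non-trivial multivalued dependencies on {course}, which is not a superkey, and so violates 4NF; decomposing it restores 4NF:

    -- Violates 4NF: course ->> book and course ->> lecturer are non-trivial
    -- multivalued dependencies, and {course} is not a superkey.
    CREATE TABLE course_offering (
        course   VARCHAR(50) NOT NULL,
        book     VARCHAR(50) NOT NULL,
        lecturer VARCHAR(50) NOT NULL,
        PRIMARY KEY (course, book, lecturer)
    );

    -- 4NF decomposition: each independent many-to-many fact in its own table.
    CREATE TABLE course_book (
        course VARCHAR(50) NOT NULL,
        book   VARCHAR(50) NOT NULL,
        PRIMARY KEY (course, book)
    );
    CREATE TABLE course_lecturer (
        course   VARCHAR(50) NOT NULL,
        lecturer VARCHAR(50) NOT NULL,
        PRIMARY KEY (course, lecturer)
    );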
Denormalization
Databases intended for online transaction processing (OLTP) are typically more normalized than databases intended
for online analytical processing (OLAP). OLTP applications are characterized by a high volume of small
transactions such as updating a sales record at a supermarket checkout counter. The expectation is that each
transaction will leave the database in a consistent state. By contrast, databases intended for OLAP operations are
primarily "read mostly" databases. OLAP applications tend to extract historical data that has accumulated over a
long period of time. For such databases, redundant or "denormalized" data may facilitate business intelligence
applications. Specifically, dimensional tables in a star schema often contain denormalized data. The denormalized or
redundant data must be carefully controlled during extract, transform, load (ETL) processing, and users should not
be permitted to see the data until it is in a consistent state. The normalized alternative to the star schema is the
snowflake schema. In many cases, the need for denormalization has waned as computers and RDBMS software have
become more powerful, but since data volumes have generally increased along with hardware and software
performance, OLAP databases often still use denormalized schemas.
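As an illustrative SQL sketch (the table and column names are assumptions, not from any particular warehouse), a denormalized star-schema dimension repeats attributes that the normalized snowflake alternative factors into separate tables:

    -- Star schema: the product dimension is denormalized, so the category
    -- name is repeated on every product row in that category.
    CREATE TABLE dim_product (
        product_id    INTEGER PRIMARY KEY,
        product_name  VARCHAR(100) NOT NULL,
        category_name VARCHAR(50)  NOT NULL  -- redundant across products
    );

    -- Snowflake schema: the category is normalized into its own table.
    CREATE TABLE dim_category (
        category_id   INTEGER PRIMARY KEY,
        category_name VARCHAR(50) NOT NULL
    );
    CREATE TABLE dim_product_sf (
        product_id   INTEGER PRIMARY KEY,
        product_name VARCHAR(100) NOT NULL,
        category_id  INTEGER NOT NULL REFERENCES dim_category (category_id)
    );

The star form trades redundancy for fewer joins at query time, which is why the ETL process must keep the repeated values consistent.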
Denormalization is also used to improve performance on smaller computers, such as computerized cash registers and mobile devices, since these may use the data for look-up only (e.g. price lookups). Denormalization may also be used when no RDBMS exists for a platform (such as Palm), or when no changes are to be made to the data and a swift response is crucial.
Non-first normal form (NF² or N1NF)
In the following 1NF table, each row records one person together with one of that person's favorite colors:
Person | Favorite Color
Bob | blue
Bob | red
Jane | green
Jane | yellow
Jane | red
Assume a person has several favorite colors. The favorite colors form a set, which the given table models with one row per (person, color) pair. To transform such a 1NF table into an NF² table, a "nest" operator is required, which extends the relational algebra of the higher normal forms. Applying the "nest" operator to the 1NF table yields the following NF² table:
Person | Favorite Colors
Bob | { blue, red }
Jane | { green, yellow, red }
To transform this NF² table back into 1NF, an "unnest" operator is required, which likewise extends the relational algebra of the higher normal forms (alternatively, one could give "colors" a table of its own).
Although "unnest" is the mathematical inverse of "nest", "nest" is not always the mathematical inverse of "unnest". A further constraint is required for the operators to be bijective, and this is captured by the Partitioned Normal Form (PNF).
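Standard SQL has no "nest" and "unnest" operators, but PostgreSQL's array functions can approximate them; the following sketch assumes a favorite_colors table shaped like the 1NF table above:

    -- 1NF table: one row per (person, color) pair.
    CREATE TABLE favorite_colors (
        person VARCHAR(50) NOT NULL,
        color  VARCHAR(50) NOT NULL,
        PRIMARY KEY (person, color)
    );

    -- "Nest": collapse each person's colors into one array-valued row.
    SELECT person, array_agg(color) AS colors
    FROM favorite_colors
    GROUP BY person;

    -- "Unnest": expand the nested rows back into the 1NF form.
    SELECT person, unnest(colors) AS color
    FROM (SELECT person, array_agg(color) AS colors
          FROM favorite_colors
          GROUP BY person) AS nested;

In this example the round trip recovers the original table, because every person's set of colors is non-empty and duplicate-free, in line with the PNF-style constraint mentioned above.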
Further reading
• Litt's Tips: Normalization [20]
• Date, C. J. (1999). An Introduction to Database Systems [21] (8th ed.). Addison-Wesley Longman. ISBN 0-321-19784-4.
• Kent, W. (1983). A Simple Guide to Five Normal Forms in Relational Database Theory [22]. Communications of the ACM, vol. 26, pp. 120–125.
• Date, C. J., Darwen, H., & Pascal, F. Database Debunkings [23].
• Schek, H.-J., & Pistor, P. Data Structures for an Integrated Data Base Management and Information Retrieval System.
• Jaeschke, G., & Schek, H.-J. "Non First Normal Form Relations". IBM Heidelberg Scientific Center. A paper studying the normalization and denormalization operators "nest" and "unnest" described above.
See also
• Aspect (computer science)
• Business rule
• Canonical form
• Cross-cutting concern
• Optimization (computer science)
• Refactoring
External links
• Database Normalization Basics (http://databases.about.com/od/specificproducts/a/normalization.htm) by
Mike Chapple (About.com)
• Database Normalization Intro (http://www.databasejournal.com/sqletc/article.php/1428511), Part 2 (http://www.databasejournal.com/sqletc/article.php/26861_1474411_1)
• An Introduction to Database Normalization (http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html) by Mike Hillyer.
• Normalization (http://www.utexas.edu/its/windows/database/datamodeling/rm/rm7.html) by ITS,
University of Texas.
• A tutorial on the first 3 normal forms (http://phlonx.com/resources/nf3/) by Fred Coulson
• DB Normalization Examples (http://www.dbnormalization.com/)
• Description of the database normalization basics (http://support.microsoft.com/kb/283878) by Microsoft
• Database Normalization and Design Techniques (http://www.barrywise.com/2008/01/database-normalization-and-design-techniques/) by Barry Wise, recommended reading for the Harvard MIS.
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/