Hierarchical Model
The hierarchical database model is one of the oldest database models, dating from the late 1950s. One of the first hierarchical databases, Information Management System (IMS), was developed jointly by North American Rockwell and IBM. This model is like the structure of a tree, with the records forming the nodes and the fields forming the branches of the tree.
The hierarchical model organizes data elements as tabular rows, one for each instance of an entity. Consider a company's organizational structure. At the top we have a General
Manager (GM). Under him we have several Deputy General Managers (DGMs). Each
DGM looks after a couple of departments and each department will have a manager and
many employees. When represented in hierarchical model, there will be separate rows
for representing the GM, each DGM, each department, each manager and each
employee. The row position implies a relationship to other rows. A given employee
belongs to the department that is closest above it in the list and the department belongs
to the manager that is immediately above it in the list and so on as shown.
In the hierarchical data model, records are linked to the superior records on which they depend, and also to the records that depend on them. A tree structure may establish a one-to-many relationship. The figure illustrates the structure of a family, with the great-grandparent as the root of the structure. Parents can have many children, exhibiting one-to-many relationships. The great-grandparent record is known as the root of the tree. The grandparents and children are the nodes, or dependents, of the root. In
general, a root may have any number of dependents. Each of these dependents may have any number of lower-level dependents, and so on, with no restriction on levels.
The different elements (e.g., records) present in the hierarchical tree structure have a parent-child relationship. A parent element can have many child elements, but a child element cannot have more than one parent element. That is, the hierarchical model cannot represent many-to-many relationships among records.
Another example of the hierarchical model is shown: a Customer-Loan database, in which a customer can take multiple loans, and there is also a provision for joint loans, where more than one person takes a loan together. As shown, customer C1 takes a single loan L1 of amount 10000 jointly with customer C2. Customer C3 takes two loans: L2 of amount 15000 and L3 of amount 25000.
Sample Database
In order to understand the hierarchical data model better, let us take the example of a sample database consisting of suppliers, parts and shipments. The record structure and some sample records for the supplier, part and shipment elements are given in the following tables.
For example, supplier S3 supplies 300 units of part P2. Note that the set of supplier occurrences for a given part occurrence may contain any number of members, including zero (as in the case of part P4). Part P1 is supplied by two suppliers, S1 and S2. Part P2 is supplied by three suppliers, S1, S2 and S3, and part P3 is supplied only by supplier S1, as shown in the figure.
Query 1: Find the supplier numbers for suppliers who supply part P2.
Solution: First, the parts are searched to locate the single record for P2. Then, a loop is constructed to search all suppliers under this part, and the supplier numbers are printed for all such suppliers.
Algorithm
get [next] part where PNO=P2;
do until no more shipments under this part;
get next supplier under this part;
print SNO;
end;
Query2: Find part numbers for parts supplied by supplier S2.
Solution: In order to get the required part numbers, we have to search for S2 under each part. If supplier S2 is found under a part, then the corresponding part number is printed; otherwise we go on to the next part, until all the parts have been searched for supplier S2.
Algorithm
do until no more parts;
get next part;
get [next] supplier under this part where SNO=S2;
if found then print PNO;
end;
In the above algorithms, "next" is interpreted relative to the current position (normally the row most recently accessed; for the initial case we assume it to be just prior to the first row of the table). We have placed square brackets around "next" in those statements where we expect at most one occurrence to satisfy the specified conditions.
Since the two queries involve different logic, and both are complicated, we can conclude that the retrieval operation in this model is complex and asymmetric.
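The asymmetry of the two retrievals can be sketched in Python, with the part-to-supplier hierarchy modeled as a plain nested dictionary. The part and supplier numbers follow the sample database in the text; the quantities (other than the 300 for S3 under P2 and the 250 for S1 under P1) and the dictionary layout itself are illustrative, not actual IMS storage.

```python
# Hierarchical sample database: each part record "owns" the supplier
# occurrences beneath it, so a supplier is reachable only through its part.
parts = {
    "P1": [("S1", 250), ("S2", 200)],
    "P2": [("S1", 100), ("S2", 150), ("S3", 300)],
    "P3": [("S1", 400)],
    "P4": [],  # a part with no supplier occurrences
}

# Query 1: suppliers of part P2 -- a direct walk down one subtree.
suppliers_of_p2 = [sno for sno, qty in parts["P2"]]

# Query 2: parts supplied by S2 -- every subtree must be scanned,
# which is why the two retrievals are asymmetric in this model.
parts_of_s2 = [pno for pno, shipments in parts.items()
               if any(sno == "S2" for sno, qty in shipments)]
```

Query 1 touches a single subtree, while Query 2 must visit every part, which is exactly the asymmetry described above.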
Conclusion: As explained earlier, we can conclude that the hierarchical model suffers from insertion, update and deletion anomalies; in addition, the retrieval operation is complex and asymmetric. Thus, the hierarchical model is not suitable for all cases.
Record
A collection of field or data item values that provides information about an entity. Each field has a certain data type, such as integer, real or string. Records of the same type are grouped into a record type.
Parent Child Relationship Type
It is a 1:N relationship between two record types. The record type on the 1 side is called the parent record type, and the one on the N side is called the child record type of the PCR type.
Advantages
1. Simplicity
Data naturally have a hierarchical relationship in most practical situations. Therefore, it is easier to view data arranged in this manner, which makes this type of database more suitable for such purposes.
2. Security
These database systems can enforce varying degrees of security, unlike flat-file systems.
3. Database Integrity
Because of its inherent parent-child structure, database integrity is highly promoted in
these systems.
4. Efficiency: The hierarchical database model is a very efficient one when the database contains a large number of 1:N (one-to-many) relationships and when the users require a large number of transactions using data whose relationships are fixed.
Disadvantages
1. Complexity of Implementation: The actual implementation of a hierarchical
database depends on the physical storage of data. This makes the implementation
complicated.
2. Difficulty in Management: The movement of a data segment from one location to another causes all the accessing programs to be modified, making database management a complex affair.
3. Complexity of Programming: Programming a hierarchical database is relatively
complex because the programmers must know the physical path of the data items.
4. Poor Portability: The database is not easily portable mainly because there is little
or no standard existing for these types of database.
Network Model
The popularity of the network data model coincided with the popularity of the
hierarchical data model. Some data were more naturally modeled with more than one
parent per child. So, the network model permitted the modeling of many-to-many
relationships in data.
In 1971, the Conference on Data Systems Languages (CODASYL) formally defined the
network model. The basic data modeling construct in the network model is the set
construct. A set consists of an owner record type, a set name, and a member record type.
A member record type can have that role in more than one set; hence the multiparent
concept is supported.
An owner record type can also be a member or owner in another set. The data model is a
simple network, and link and intersection record types (called junction records by
IDMS) may exist, as well as sets between them. Thus, the complete network of
relationships is represented by several pairwise sets; in each set some (one) record type
is owner (at the tail of the network arrow) and one or more record types are members
(at the head of the relationship arrow).
Usually, a set defines a 1:M relationship, although 1:1 is permitted. The CODASYL
network model is based on mathematical set theory.
The network model is a collection of data in which records are physically linked through linked lists. A DBMS is said to be a network DBMS if the relationships among the data in the database are of the many-to-many type. Such many-to-many relationships appear in the form of a network.
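As a rough illustration of the set construct described above, the owner/member pairing can be sketched in Python. The class, the DEPT-EMP and PROJ-EMP set names, and the sample records are all invented for the illustration; this is not CODASYL DDL.

```python
# A CODASYL set occurrence sketched as a plain Python object: a set type
# has a name, one owner record, and member records chained under the owner.
class SetOccurrence:
    def __init__(self, set_name, owner):
        self.set_name = set_name
        self.owner = owner    # owner record: the tail of the arrow
        self.members = []     # member records: the head of the arrow

# One record can be a member of sets owned by different record types,
# which is how the model supports more than one "parent" per record.
dept = {"dname": "Accounts"}
project = {"pname": "Audit-2024"}
emp = {"ename": "Ravi"}

dept_emp = SetOccurrence("DEPT-EMP", dept)
proj_emp = SetOccurrence("PROJ-EMP", project)
dept_emp.members.append(emp)
proj_emp.members.append(emp)  # the same employee under a second owner
```

Here the single employee record participates as a member in two different sets, the multiparent situation a strict hierarchy cannot express.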
Relational Model
The Relational Model was the first theoretically founded and well-thought-out data model, proposed in 1970 by E. F. Codd, then a researcher at IBM. It has been the foundation of most database software and theoretical database research ever since.
The Relational Model is a depiction of how each piece of stored information relates to
the other stored information. It shows how tables are linked, what types of links exist between tables, what keys are used, and what information is referenced between tables. It is an essential part of developing a normalized database structure that prevents repeated and redundant data storage.
The basic idea behind the relational model is that a database consists of a series of
unordered tables (or relations) that can be manipulated using non-procedural
operations that return tables. This model was in vast contrast to the more traditional database theories of the time, which were much more complicated, less flexible and dependent on the physical storage methods of the data. The relational database model is based on relational algebra, set theory and predicate logic.
It is commonly thought that the word relational in the relational model comes from the
fact that you relate together tables in a relational database. Although this is a convenient
way to think of the term, it's not accurate. Instead, the word relational has its roots in
the terminology that Codd used to define the relational model. The table in Codd's
writings was actually referred to as a relation (a related set of information).
In fact, Codd (and other relational database theorists) used the terms relations, attributes and tuples where most of us use the more common terms tables, columns and rows, respectively (or the more physical, and thus less preferable for discussions of database design theory, files, fields and records).
The relational model can be applied to both databases and database management
systems (DBMS) themselves. The relational fidelity of database programs can be
compared using Codd's 12 rules (since Codd's seminal paper on the relational model, the
number of rules has been expanded to 300) for determining how DBMS products
conform to the relational model.
When compared with other database management programs, Microsoft Access fares
quite well in terms of relational fidelity. Still, it has a long way to go before it meets all
twelve rules completely.
Object-Oriented Model
An object-oriented database management system (OODBMS, sometimes just called an object database or ODBMS) is a DBMS that stores data in a logical model closely aligned with an application program's object model. Of course, an OODBMS will have a physical data model optimized for the kinds of logical data model it expects.
Object-oriented database models have been around since the seventies, when the concept of object-oriented programming was first explored. It is only in the last ten or fifteen years that companies have been utilizing object-oriented DBMSs (OODBMSs). The major problem for OODBMSs was that relational DBMSs (RDBMSs) were already implemented industry-wide.
An OODBMS should be used when there is a business need, high performance is required, and complex data is being used. Due to the object-oriented nature of the database model, it is much simpler to approach a problem with these needs in terms of objects. The result can be a performance increase of ten to one thousand times while writing as little as 40% of the code (this is because no intermediate language such as SQL is required; everything is programmed in the OO language of choice). This code can be applied directly to the database, and thus saves time and money in development and maintenance.
An object-oriented database interface standard is being developed by an industry group,
the Object Data Management Group (ODMG). The Object Management Group (OMG)
has already standardized an object-oriented data brokering interface between systems in
a network.
What is a Database View?
A database view is a subset of the database, sorted and displayed in a particular way. A view can join information from several tables together, for example adding the ename field to the order information. A database view displays one or more database records on the same page, and a view can display some or all of the database fields.
Views have filters to determine which records they show. Views can be sorted to
control the record order and grouped to display records in related sets. Views have
other options such as totals and subtotals. A query returns information from a table or
set of tables that matches particular criteria.
Most users interact with the database using the database views. A key to creating a
useful database is a well-chosen set of views. Luckily, while views are powerful, they are
also easy to create. Create custom views of a database to organize, filter and sort records.
Database views allow you to easily reduce the complexity of the end user experience and
limit their ability to access data contained in database tables by limiting the data
presented to the end user. Essentially, a view uses the results of a database query to
dynamically populate the contents of an artificial database table.
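This "query that behaves like a table" idea can be sketched with Python's built-in sqlite3 module. The table, column and view names below are invented for the illustration (the ename join echoes the example mentioned above).

```python
import sqlite3

# A view stores a query, not data: reading the view re-runs the query, so
# the "artificial table" always reflects the current base-table contents.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (eno INTEGER, ename TEXT)")
con.execute("CREATE TABLE orders (ono INTEGER, eno INTEGER, amount INTEGER)")
con.execute("INSERT INTO employees VALUES (1, 'Asha'), (2, 'Vikram')")
con.execute("INSERT INTO orders VALUES (10, 1, 500), (11, 2, 750)")

# The view joins ename onto the order information, hiding the join from
# whoever queries it.
con.execute("""CREATE VIEW order_info AS
               SELECT o.ono, e.ename, o.amount
               FROM orders o JOIN employees e ON o.eno = e.eno""")
rows = con.execute("SELECT ono, ename, amount FROM order_info "
                   "ORDER BY ono").fetchall()
```

Anyone querying order_info never needs to know that the data actually lives in two base tables.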
You can use views to:
Focus users on the data that interests them and on the tasks for which they are responsible. Data that is not of interest to a user can be left out of the view.
Define frequently used joins, projections and selections as views, so that users do not have to specify all the conditions and qualifications each time an operation is performed on that data.
Display different data for different users, even when they are using the same data at the same time. This advantage is particularly important when users of many different interests and skill levels share the same database.
Advantages:
1. Provide an additional level of table security by restricting access to a predetermined set of rows or columns of a table.
2. Hide data complexity: For example, a single view might be defined with a join, which is a collection of related columns or rows in multiple tables. However, the view hides the fact that this information actually originates from several tables.
3. Simplify statements for the user: Views allow users to select information from multiple tables without actually knowing how to perform a join.
4. Present data from a different perspective: Columns of views can be renamed without affecting the tables on which the views are based.
5. Isolate applications from changes in the definitions of base tables: If a view references three columns of a four-column table, then when a fifth column is added or the fourth column is changed, the view and its associated applications are unaffected.
6. Express queries that cannot be expressed without a view: For example, a view can be defined that joins a GROUP BY view with a table, or a view can be defined that joins a UNION view with a table.
7. Saving of complex queries for later reuse.
Disadvantages:
Rows available through a view are not sorted or ordered.
DML operations cannot always be used on a view.
When a table is dropped, the view becomes inactive, as it depends on the table objects.
It can affect performance: querying from a view takes more time than querying the table directly.
Relational Model
BY DINESH THAKUR
The relational model stores data in the form of tables. This concept was proposed by Dr. E. F. Codd, a researcher at IBM, in 1970. The relational model consists of three major components:
1. The set of relations and set of domains that defines the way data can be represented
(data structure).
2. Integrity rules that define the procedure to protect the data (data integrity).
3. The operations that can be performed on data (data manipulation).
A relational model database is defined as a database that allows you to group its data items into one or more independent tables that can be related to one another by using fields common to each related table.
Tuples of a Relation
Each row of data is a tuple. Actually, each row is an n-tuple, but the "n-" is
usually dropped.
Cardinality of a relation: The number of tuples in a relation determines its
cardinality. In this case, the relation has a cardinality of 4.
Degree of a relation: Each column in the tuple is called an attribute. The number of
attributes in a relation determines its degree. The relation in figure has a degree of 3.
Domains: A domain definition specifies the kind of data represented by the attribute. More particularly, a domain is the set of all possible values that an attribute may validly contain. Domains are often confused with data types, but this is inaccurate. Data type is a physical concept, while domain is a logical one. "Number" is a data type and "Age" is a domain. To give another example, "StreetName" and "Surname" might both be represented as text fields, but they are obviously different kinds of text fields; they belong to different domains.
A domain is also a broader concept than a data type, in that a domain definition includes a more specific description of the valid data. Take, for example, the domain DegreeAwarded, which represents the degrees awarded by a university. In the database schema, this attribute might be defined as Text[3], but it is not just any three-character string; it is a member of the set {BA, BS, MA, MS, PhD, LLB, MD}. Of course, not all domains can be
defined by simply listing their values. Age, for example, contains a hundred or so values
if we are talking about people, but tens of thousands if we are talking about museum
exhibits. In such instances it's useful to define the domain in terms of the rules, which
can be used to determine the membership of any specific value in the set of all valid
values.
For example, Person Age could be defined as "an integer in the range 0 to 120," whereas Exhibit Age (the age of any object for exhibition) might simply be "an integer greater than or equal to 0."
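The rule-based view of a domain can be sketched as simple validation functions in Python. The function names are our own invention; the range limits come straight from the text.

```python
# A domain is a logical rule layered over a physical data type: both ages
# below are plain integers, but they belong to different domains.
def in_person_age_domain(value):
    return isinstance(value, int) and 0 <= value <= 120

def in_exhibit_age_domain(value):
    return isinstance(value, int) and value >= 0

# The same value can be valid in one domain and invalid in another.
ok_person = in_person_age_domain(45)
ok_exhibit = in_exhibit_age_domain(4500)
bad_person = in_person_age_domain(4500)
```

A 4500-year-old museum exhibit is perfectly valid, while a 4500-year-old person is not, even though both values have the same data type.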
Body of a Relation: The body of the relation consists of an unordered set of zero or more tuples. There are some important concepts here. First, the relation is unordered: record numbers do not apply to relations. Second, a relation with no tuples still qualifies as a relation. Third, a relation is a set, and the items in a set are, by definition, uniquely identifiable. Therefore, for a table to qualify as a relation, each record must be uniquely identifiable and the table must contain no duplicate records.
Keys of a Relation
It is a set of one or more columns whose combined values are unique among all
occurrences in a given table. A key is the relational means of specifying uniqueness.
Some different types of keys are:
Primary key: an attribute or a set of attributes of a relation that possesses the properties of uniqueness and irreducibility (no proper subset should be unique). For example, the supplier number in the S table is a primary key, the part number in the P table is a primary key, and the combination of supplier number and part number in the SP table is a primary key.
Foreign key: an attribute (or set of attributes) of a table that refers to the primary key of another table. A foreign key permits only those values that appear in the primary key of the table to which it refers, or a null (unknown) value. For example, SNO in the SP table refers to SNO in the S table, which is the primary key of the S table, so SNO in the SP table is a foreign key. Similarly, PNO in the SP table refers to PNO in the P table, which is the primary key of the P table, so PNO in the SP table is also a foreign key.
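The S, P and SP keys described above can be sketched in SQL through Python's sqlite3 module. The extra columns (city, qty) and the sample values are assumptions for the illustration.

```python
import sqlite3

# The S-P-SP keys sketched in SQL: SNO and PNO are primary keys of their
# tables, and SP's composite primary key (sno, pno) doubles as two
# foreign keys back to S and P.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when on
con.execute("CREATE TABLE S (sno TEXT PRIMARY KEY, city TEXT)")
con.execute("CREATE TABLE P (pno TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE SP (
    sno TEXT REFERENCES S(sno),
    pno TEXT REFERENCES P(pno),
    qty INTEGER,
    PRIMARY KEY (sno, pno))""")
con.execute("INSERT INTO S VALUES ('S1', 'Qadian')")
con.execute("INSERT INTO P VALUES ('P1')")
con.execute("INSERT INTO SP VALUES ('S1', 'P1', 250)")

# A shipment for an unknown supplier violates the foreign key constraint.
try:
    con.execute("INSERT INTO SP VALUES ('S9', 'P1', 10)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
n_shipments = con.execute("SELECT COUNT(*) FROM SP").fetchone()[0]
```

The rejected row shows the foreign key rule in action: SP may only mention supplier numbers that exist in S.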
The database of Customer-Loan, which we discussed earlier for hierarchical model and
network model, is now represented for Relational model as shown.
It can easily be understood that this model is very simple and has no redundancy. The total database is divided into two tables. The Customer table contains the information about the customers, with CNO as the primary key. The Customer_Loan table stores the information about CNO, LNO and AMOUNT, and has the primary key combination of CNO and LNO. Here, CNO also acts as a foreign key and refers to CNO of the Customer table. This means that only those customer numbers that have an entry in the master Customer table are allowed in the transaction table Customer_Loan.
The four basic operations Insert, Update, Delete and Retrieve operations are shown
below on the sample database in relational model:
Insert Operation: The information of a supplier who does not supply any part can be inserted into the S table without any anomaly; e.g., S4 can be inserted into the S table. Similarly, the information of a new part that is not supplied by any supplier can be inserted into the P table. If a supplier starts supplying a new part, this information can be stored in the shipment table SP with the supplier number, part number and supplied quantity. So, we can say that insert operations can be performed in all cases without any anomaly.
Update Operation: Suppose supplier S1 has moved from Qadian to Jalandhar. In that case we need to make changes in the record, so that the supplier table is up to date. Since the supplier number is the primary key in the S (supplier) table, there is only a single entry for S1, which needs a single update, and the problem of data inconsistency does not arise. Similarly, part and shipment information can be updated by a single modification in the tables P and SP respectively, without the problem of inconsistency. The update operation in the relational model is thus very simple and free of anomalies.
Delete Operation: Suppose supplier S3 stops the supply of part P2. Then we have to delete the shipment connecting part P2 and supplier S3 from the shipment table SP. This information can be deleted from the SP table without affecting the details of supplier S3 in the supplier table or of part P2 in the part table. Similarly, we can delete the information of parts in the P table and their shipments in the SP table, and the information of suppliers in the S table and their shipments in the SP table.
Record Retrieval: Record retrieval methods for relational model are simple and
symmetric which can be clarified with the following queries:
Query1: Find the supplier numbers for suppliers who supply part P2.
Solution: In order to get this information we have to search the information of part P2
in the SP table (shipment table). For this a loop is constructed to find the records of P2
and on getting the records, corresponding supplier numbers are printed.
Algorithm
do until no more shipments;
get next shipment where PNO=P2;
print SNO;
end;
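In SQL, the two retrievals really are symmetric single statements, which can be sketched with Python's sqlite3 module. The shipment quantities other than the 300 for S3-P2 are illustrative.

```python
import sqlite3

# Both retrievals are single, symmetric SELECT statements over the SP
# table -- contrast with the asymmetric hierarchical algorithms.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SP (sno TEXT, pno TEXT, qty INTEGER)")
con.executemany("INSERT INTO SP VALUES (?, ?, ?)", [
    ("S1", "P1", 250), ("S2", "P1", 200),
    ("S1", "P2", 100), ("S2", "P2", 150), ("S3", "P2", 300),
    ("S1", "P3", 400),
])

# Query 1: supplier numbers for suppliers who supply part P2.
q1 = [r[0] for r in con.execute(
    "SELECT sno FROM SP WHERE pno = 'P2' ORDER BY sno")]
# Query 2: part numbers for parts supplied by S2 -- same shape, roles swapped.
q2 = [r[0] for r in con.execute(
    "SELECT pno FROM SP WHERE sno = 'S2' ORDER BY pno")]
```

Swapping the roles of supplier and part changes only which column is selected and which is filtered; the logic is identical in both directions.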
Ad hoc query capability: The presence of a very powerful, flexible and easy-to-use query capability is one of the main reasons for the immense popularity of the relational database model. The query language of the relational database model, Structured Query Language or SQL, makes ad hoc queries a reality. SQL is a fourth-generation language (4GL). A 4GL allows the user to specify what must be done without specifying how it must be done. So, using SQL, users can specify what information they want and leave the details of how to get that information to the database.
But as we have said, all these issues are minor when compared to the advantages, and all of them can be avoided if the organization has a properly designed database and has enforced good database standards.
Network Model
The network model replaces the hierarchical tree with a graph, thus allowing more general connections among the nodes. The main difference between the network model and the hierarchical model is its ability to handle many-to-many (N:N) relations; in other words, it allows a record to have more than one parent. Suppose an employee works for two departments. A strict hierarchical arrangement is not possible here, and the tree becomes a more generalized graph: a network. The network model evolved specifically to handle non-hierarchical relationships. As shown below, data can belong to more than one parent. Note that there are lateral connections as well as top-down connections. A network structure thus allows 1:1 (one-to-one), 1:M (one-to-many) and M:M (many-to-many) relationships among entities.
In network database terminology, a relationship is a set. Each set is made up of at least
two types of records: an owner record (equivalent to parent in the hierarchical model)
and a member record (similar to the child record in the hierarchical model).
The database of Customer-Loan, which we discussed earlier for hierarchical model, is
now represented for Network model as shown.
It can easily be seen that the information about the joint loan L1 now appears a single time, whereas in the hierarchical model it appeared twice. Thus, the network model reduces redundancy and is better in this respect than the hierarchical model.
All connector occurrences for a given supplier are placed on a chain. The chain starts from a supplier and finally returns to that supplier. Similarly, all connector occurrences for a given part are placed on a chain starting from the part and finally returning to the same part.
Operations on Network Model
Detailed description of all basic operations in Network Model is as under:
Insert Operation: To insert a new record containing the details of a new supplier, we
simply create a new record occurrence. Initially, there will be no connector. The new
supplier's chain will simply consist of a single pointer starting from the supplier to itself.
For example, supplier S4, who does not supply any part, can be inserted in the network model as a new record occurrence with a single pointer from S4 to itself. This is not possible in the hierarchical model. Similarly, a new part that is not supplied by any supplier can be inserted.
Consider another case: if supplier S1 now starts supplying part P3 with quantity 100, then a new connector containing 100 as the supplied quantity is added to the model, and the pointers of S1 and P3 are modified as shown below.
We can conclude that there are no insert anomalies in the network model, unlike the hierarchical model.
Update Operation: Unlike the hierarchical model, where updating was carried out by searching and had many inconsistency problems, updating a record in the network model is a much easier process. We can change the city of S1 from Qadian to Jalandhar without search or inconsistency problems, because the city for S1 appears at just one place in the network model. The same operation can be performed to change any attribute of a part.
Delete Operation: If we wish to delete the information of any part, say P1, then that record occurrence can be deleted by removing the corresponding pointers and connectors, without affecting the suppliers who supply that part; the model is modified as shown. The same operation can be performed to delete the information of a supplier.
In order to delete the shipment information, the connector for that shipment and
its corresponding pointers are removed without affecting supplier and part information.
For example, if supplier S1 stops the supply of part P1 with 250 quantity, the model is modified as shown below without affecting the P1 and S1 information.
Retrieval Operation: Record retrieval methods for the network model are symmetric but complex. In order to understand this, consider the following example queries:
Query 1. Find supplier number for suppliers who supply part P2.
Solution: In order to retrieve the required information, first we search for the required part, i.e. P2; we will get only one occurrence of P2 from the entire database. Then a loop is constructed to visit each connector under this part. For each connector we check the supplier over that connector, and the supplier number of the concerned supplier record occurrence is printed, as shown in the algorithm below.
Algorithm
get [next] part where PNO=P2;
do until no more connectors under this part;
get next connector under this part;
get supplier over this connector;
print SNO;
end;
Query 2. Find part number for parts supplied by supplier S2.
Solution: In order to retrieve the required information, the same procedure is adopted. First we search for the required supplier, i.e. S2; we will get only one occurrence of S2 from the entire database. Then a loop is constructed to visit each connector under this supplier. For each connector we check the part over that connector, and the part number of the concerned part record occurrence is printed, as shown in the algorithm below.
Algorithm
get [next] supplier where SNO=S2;
do until no more connectors under this supplier;
get next connector under this supplier;
get part over this connector;
print PNO;
end;
From both of the above algorithms, we can conclude that the retrieval algorithms are symmetric, but they are complex because they involve a lot of pointers.
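The connector-based traversal can be sketched in Python. Modeling connectors as a flat list of link records is an illustration of the idea only, not actual CODASYL pointer-chain storage; the quantities other than the 300 for S3-P2 are made up.

```python
# Connectors sketched as explicit link records: each connector points to
# one supplier and one part, mimicking the pointer chains in the text.
connectors = [
    {"sno": "S1", "pno": "P2", "qty": 100},
    {"sno": "S2", "pno": "P2", "qty": 150},
    {"sno": "S3", "pno": "P2", "qty": 300},
    {"sno": "S2", "pno": "P1", "qty": 200},
]

# Query 1: visit every connector under part P2, then follow each one
# "up" to its supplier.
q1 = [c["sno"] for c in connectors if c["pno"] == "P2"]
# Query 2: the mirror-image walk under supplier S2 -- symmetric logic.
q2 = [c["pno"] for c in connectors if c["sno"] == "S2"]
```

The two walks are mirror images of one another, which is the symmetry the text points out; the complexity in a real system comes from following the pointer chains that this flat list hides.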
Conclusion: As explained earlier, we can conclude that the network model does not suffer from insert, update or deletion anomalies, and the retrieval operation is symmetric, as compared to the hierarchical model; but its main disadvantage is complexity. Since each of the above operations involves the modification of pointers, the whole model becomes complicated.
The Network model retains almost all the advantages of the hierarchical model while
eliminating some of its shortcomings.
The main advantages of the network model are:
Conceptual simplicity: Just like the hierarchical model, the network model is also conceptually simple and easy to design.
Capability to handle more relationship types: The network model can handle the one-to-many (1:N) and many-to-many (N:N) relationships, which is a real help in modeling real-life situations.
Ease of data access: Data access is easier and more flexible than in the hierarchical model.
Data Integrity: The network model does not allow a member to exist without an
owner. Thus, a user must first define the owner record and then the member record.
This ensures the data integrity.
Data independence: The network model is better than the hierarchical model in
isolating the programs from the complex physical storage details.
Database Standards: One of the major drawbacks of the hierarchical model was the
non-availability of universal standards for database design and modeling. The network model is based on the standards formulated by the DBTG and augmented by ANSI/SPARC (American National Standards Institute/Standards Planning and Requirements Committee) in the 1970s. All network database management systems conformed to these standards. These standards included a Data Definition Language (DDL) and a Data Manipulation Language (DML), thus greatly enhancing database administration and portability.
Lack of structural independence: Since data access in the network model is navigational, any change in the database structure requires changes to all the application programs that can access the data. Thus, even though the network database model succeeds in achieving data independence, it still fails to achieve structural independence.
Because of the disadvantages mentioned and the implementation and administration
complexities, the relational database model replaced both the hierarchical and network
database models in the 1980s. The evolution of the relational database model is
considered as one of the greatest events-a major breakthrough in the history of database
management.
What are Strong and Weak Entity Sets in DBMS
An entity set that does not have sufficient attributes to form a primary key is called a weak entity set. An entity set that has a primary key is called a strong entity set.
Consider an entity set Payment, which has three attributes: payment_number, payment_date and payment_amount. Although each payment entity is distinct, payments for different loans may share the same payment number. Thus, this entity set does not have a primary key: it is a weak entity set. Each weak entity set must be part of a one-to-many relationship set.
A member of a strong entity set is called a dominant entity, and a member of a weak entity set is called a subordinate entity. A weak entity set does not have a primary key, but we need a means of distinguishing among all those entries in the entity set that depend on one particular strong entity. The discriminator of a weak entity set is a set of attributes that allows this distinction to be made. For example, payment_number acts as the discriminator for the Payment entity set. It is also called the partial key of the entity set.
The primary key of a weak entity set is formed by the primary key of the strong entity set
on which the weak entity set is existence dependent plus the weak entity sets
discriminator. In the above example {loan_number, payment_number} acts as primary
key for payment entity set.
The relationship between a weak entity set and a strong entity set is called an identifying relationship. In the example, loan-payment is the identifying relationship for the Payment entity set. A weak entity set is represented by a doubly outlined box, and the corresponding identifying relationship by a doubly outlined diamond, as shown in the figure. Here the double lines indicate total participation of the weak entity set in the relationship: every payment must be related via loan-payment to some loan. The arrow from loan-payment to loan indicates that each payment is for a single loan. The discriminator of a weak entity set is underlined with a dashed rather than a solid line.
Let us consider another scenario, where we want to store the information of employees and their dependents. Every employee may have zero to n dependents, and every dependent has an id number and a name.
Now let us consider the following database:
There are three employees having E# as 1, 2, and 3 respectively.
Employee having E# 1, has two dependents as 1, Rahat and 2, Chahat.
Employee having E# 2, has no dependents.
Employee having E# 3, has three dependents as 1, Raju; 2, Ruhi; 3 Raja.
Now, in the case of the Dependent entity, id cannot act as the primary key because it is not unique. Thus, Dependent is a weak entity set having id as its discriminator. It has total participation in the relationship "has" because no dependent can exist without an employee (the company is concerned only with employees). The E-R diagram for the employee-dependent database is shown.
Two tables need to be created for the above E-R diagram: Employee, having E# as its single column, which acts as the primary key; and Dependent, having the columns E#, id and name, where the primary key is the combination (E#, id).
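The two tables just described can be sketched in SQL, here via Python's sqlite3 (E# is renamed eno, since # is not a legal SQL identifier character; the sample data follows the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Strong entity set: Employee, identified by its own primary key.
conn.execute("CREATE TABLE Employee (eno INTEGER PRIMARY KEY)")

# Weak entity set: Dependent. Its primary key combines the owning
# employee's key (eno) with the discriminator (id).
conn.execute("""
    CREATE TABLE Dependent (
        eno  INTEGER NOT NULL REFERENCES Employee(eno),
        id   INTEGER NOT NULL,
        name TEXT,
        PRIMARY KEY (eno, id)
    )
""")

conn.executemany("INSERT INTO Employee VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO Dependent VALUES (?, ?, ?)", [
    (1, 1, "Rahat"), (1, 2, "Chahat"),
    (3, 1, "Raju"), (3, 2, "Ruhi"), (3, 3, "Raja"),
])

# id alone is not unique (two dependents share id 1), but (eno, id) is.
rows = conn.execute("SELECT COUNT(*) FROM Dependent WHERE id = 1").fetchone()
print(rows[0])  # 2
```

Attempting to insert a second dependent with the same (eno, id) pair is rejected by the composite primary key, which is exactly the weak-entity identification rule.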
The tabular comparison between Strong Entity Set and Weak Entity Set is as follows:
Database Normalization
BY DINESH THAKUR
Normalization is the process of removing redundant data from your tables in order to
improve storage efficiency, data integrity and scalability. This improvement is balanced
against an increase in complexity and potential performance losses from the joining of
the normalized tables at query-time. There are two goals of the normalization process:
eliminating redundant data (for example, storing the same data in more than one table)
and ensuring data dependencies make sense (only storing related data in a table). Both
of these are worthy goals as they reduce the amount of space a database consumes and
ensure that data is logically stored.
WHY DO WE NEED NORMALIZATION?
Normalization is the aim of a well-designed Relational Database Management System (RDBMS). It is a step-by-step set of rules by which data is put in its simplest form. We normalize a relational database for the following reasons:
Data should be consistent throughout the database, i.e. it should not suffer from the following anomalies.
Insertion Anomaly - Occurs when certain data cannot be inserted without the presence of other data; null values in key attributes must be avoided. This kind of anomaly can seriously damage a database.
Deletion Anomaly - Deleting a row removes data that is not stored elsewhere, so it can result in the loss of vital data.
The resulting relations (tables) obtained on normalization should possess properties such as: each row is identified by a unique key, there are no repeating groups, columns are homogeneous, and each column has a unique name.
ADVANTAGES OF NORMALIZATION
The following are the advantages of normalization:
A more flexible data structure, i.e. new rows and data values can be added easily.
An easier-to-maintain data structure, i.e. operations are easy to perform and complex queries can be handled easily.
DISADVANTAGES OF NORMALIZATION
The following are the disadvantages of normalization:
You cannot start building the database before you know what the user needs.
On normalizing relations to higher normal forms, i.e. 4NF and 5NF, performance degrades.
Normalizing relations of higher degree is a very time-consuming and difficult process.
Careless decomposition may lead to a bad database design, which may in turn lead to serious problems.
How many normal forms are there?
The forms most commonly used are First (1NF), Second (2NF), Third (3NF), Boyce-Codd (BCNF), Fourth (4NF) and Fifth (5NF) normal forms.
(For simplicity we are working with few columns, but in a real-world scenario there could be columns like a friend's phone number, email and address, and a favorite artist's albums, awards received, country, etc. In that case having two different tables would make complete sense.)
FID is a foreign key in the FavoriteArtist table which refers to FID in our Friends table.
Now we can say that our table is in first normal form.
Remember, for First Normal Form:
Column values should be atomic and scalar, i.e. each should hold a single value.
There should be no repetition of information or values across multiple columns.
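The atomic-values rule can be illustrated with a small sketch; the column names below (favorite_artists, fid) are illustrative, not taken from the original tables:

```python
# A non-1NF design packs several favorite artists into one column.
friends = [
    {"fid": 1, "name": "Asha", "favorite_artists": "Artist A, Artist B"},
    {"fid": 2, "name": "Ravi", "favorite_artists": "Artist C"},
]

# Bringing it to 1NF: one atomic value per column, one row per artist.
favorite_artist_rows = [
    {"fid": f["fid"], "artist": artist.strip()}
    for f in friends
    for artist in f["favorite_artists"].split(",")
]

for row in favorite_artist_rows:
    print(row)
# Each row now holds a single, atomic artist value.
```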
In the FavoriteArtist table, FID and Album together form a composite primary key.
So what do we need to do to bring it into second normal form? Here again we will break the table in two.
Again we need to make sure that the non-key columns depend upon the primary key and not on any other non-key column.
Although the above table looks fine, there is still something in it because of which we will normalize it further.
Album is the primary key of the above table.
Artist and No. of tracks are functionally dependent on Album (the primary key).
But can we say the same of Country?
In the above table the Country value is repeated because of the Artist. So the Country column depends on the Artist column, which is a non-key column. We will therefore move that information into another table and save the table from redundancy, i.e. from repeating values in the Country column.
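The decomposition just described can be sketched as follows (the sample albums and countries are invented for illustration; the column names follow the discussion):

```python
# Album table with a transitive dependency: Country depends on Artist,
# a non-key column, rather than on the primary key (Album).
albums = [
    {"album": "Album X", "artist": "Artist A", "tracks": 10, "country": "USA"},
    {"album": "Album Y", "artist": "Artist A", "tracks": 8,  "country": "USA"},
    {"album": "Album Z", "artist": "Artist B", "tracks": 12, "country": "UK"},
]

# Move the Artist -> Country fact into its own table, removing the
# repeated Country values from the album rows.
artist_table = {row["artist"]: row["country"] for row in albums}
album_table = [
    {"album": r["album"], "artist": r["artist"], "tracks": r["tracks"]}
    for r in albums
]

print(artist_table)
print(album_table)
```

Each country value is now stored once per artist, so changing an artist's country is a single update instead of one per album.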
A typical structure of a DBMS, with its components and the relationships between them, is shown. The DBMS software is partitioned into several modules. Each module or component is assigned a specific operation to perform. Some of the functions of the DBMS are supported by the operating system (OS), which provides basic services on top of which the DBMS is built. The physical data and the system catalog are stored on a physical disk. Access to the disk is controlled primarily by the OS, which schedules disk input/output. Therefore, while designing a DBMS, its interface with the OS must be taken into account.
Components of a DBMS
The DBMS accepts the SQL commands generated from a variety of user interfaces,
produces query evaluation plans, executes these plans against the database, and returns
the answers. As shown, the major software modules or components of DBMS are as
follows:
(i) Query processor: The query processor transforms user queries into a series of low-level instructions. It interprets the online user's query and converts it into an efficient series of operations in a form capable of being sent to the run-time data manager for execution. The query processor uses the data dictionary to find the structure of the relevant portion of the database, and uses this information in modifying the query and preparing an optimal plan to access the database.
(ii) Run-time database manager: The run-time database manager is the central software component of the DBMS, which interfaces with user-submitted application programs and queries. It handles database access at run time. It converts operations in users' queries, coming directly via the query processor or indirectly via an application program, from the user's logical view to the physical file system. It accepts queries and examines the external and conceptual schemas to determine what conceptual records are required to satisfy the user's request. It enforces constraints to maintain the consistency and integrity of the data, as well as its security. It also performs backup and recovery operations. The run-time database manager is sometimes referred to as the database control system and has the following components:
Authorization control: The authorization control module checks the authorization of users in terms of the various privileges granted to them.
Command processor: The command processor processes the queries passed by the authorization control module.
Integrity checker: It checks the integrity constraints so that only valid data can be entered into the database.
Query optimizer: The query optimizer determines an optimal strategy for the query execution.
Transaction manager: The transaction manager ensures that the transaction properties are maintained by the system.
As shown, conceptually the following logical steps are followed while executing a user's request to access the database system:
(i) Users issue a query using a particular database language, for example, SQL commands.
(ii) The parsed query is presented to a query optimizer, which uses information about how the data is stored to produce an efficient execution plan for evaluating the query.
(iii) The DBMS accepts the user's SQL commands and analyses them.
(iv) The DBMS produces query evaluation plans, that is, the external schema for the user, the corresponding external/conceptual mapping, the conceptual schema, the conceptual/internal mapping, and the storage structure definition. Thus, an evaluation plan is a blueprint for evaluating a query.
(v) The DBMS executes these plans against the physical database and returns the
answers to the user.
Using components such as transaction manager, buffer manager, and recovery manager,
the DBMS supports concurrency and recovery.
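The notion of an execution plan can be observed directly in many systems. SQLite, for instance (here via Python's sqlite3), exposes the optimizer's chosen access path with EXPLAIN QUERY PLAN; the table below is a made-up supplier table for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE supplier (sno INTEGER PRIMARY KEY, sname TEXT, city TEXT)")
conn.execute("CREATE INDEX idx_city ON supplier(city)")

# Ask the query processor how it would evaluate the query: the optimizer
# chooses an access path (here, the index on city) before execution.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT sname FROM supplier WHERE city = ?", ("Delhi",)
).fetchall()
for row in plan:
    print(row)
```

The reported plan names the index (idx_city) that will be used to search the table, which is exactly step (iv) above made visible.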
What is the E-R Model? Advantages and Disadvantages of the E-R Model.
BY DINESH THAKUR
There are two techniques used for designing a database from the system requirements. These are:
The top-down approach, known as Entity-Relationship modeling.
The bottom-up approach, known as Normalization.
We will focus on the top-down approach of designing a database. It is a graphical technique used to convert the requirements of the system into a graphical representation, so that they become easily understandable. It also provides a framework for the design of the database.
The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 as a way to unify the network and relational database views. Simply stated, the ER model is a
conceptual data model that views the real world as entities and relationships. A basic
component of the model is the Entity-Relationship diagram, which is used to visually
represent data objects. For the database designer, the utility of the ER model is:
It maps well to the relational model. The constructs used in the ER model can easily be
transformed into relational tables.
It is simple and easy to understand with a minimum of training. Therefore, the model
can be used by the database designer to communicate the design to the end user.
In addition, the model can be used as a design plan by the database developer to
implement a data model in specific database management software.
The purpose of a data model is to represent data and to make the data understandable.
There have been many data models proposed in the literature. They fall into three broad categories: object-based data models, record-based data models and physical data models.
The object-based and record-based data models are used to describe data at the conceptual and external levels, while the physical data model is used to describe data at the internal level.
There are not as many physical data models as logical data models, the most common
one being the Unifying Model.
In a functional dependency A → B, for each value of A there is associated one and only one value of B.
Example
Sname - Supplier name
City - City of the supplier
Status - Status of the city, e.g. A grade cities may have status 10, B grade cities may have status 20, and so on.
Here, Sname is FD on Sno, because Sname can take only one value for a given value of Sno (e.g. S1); in other words, there is exactly one Sname for supplier number S1.
The FD is represented as:
Sno → Sname
The FD is shown by an arrow (→), which means that Sname is functionally dependent on Sno.
Similarly, City and Status are also FD on Sno, because for each value of Sno there is only one City and one Status.
These FDs are represented as:
Sno → City
Sno → Status
S.Sno → S.(Sname, City, Status)
Consider another database of shipments with the following attributes: Sno, Pno and Qty.
In this case Qty is FD on the combination of Sno and Pno, because each combination of Sno and Pno results in only one quantity:
SP.(Sno, Pno) → SP.Qty
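Whether a functional dependency X → Y actually holds in a given table instance can be checked mechanically: it holds if no two rows agree on X but differ on Y. A small sketch, using invented sample data for the SP table:

```python
def holds(rows, X, Y):
    """Return True if the functional dependency X -> Y holds in rows."""
    seen = {}
    for row in rows:
        x_val = tuple(row[a] for a in X)
        y_val = tuple(row[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False  # same X value, different Y value: FD violated
        seen[x_val] = y_val
    return True

sp = [
    {"Sno": "S1", "Pno": "P1", "Qty": 300},
    {"Sno": "S1", "Pno": "P2", "Qty": 200},
    {"Sno": "S2", "Pno": "P1", "Qty": 400},
]

print(holds(sp, ["Sno", "Pno"], ["Qty"]))  # True: each (Sno, Pno) has one Qty
print(holds(sp, ["Sno"], ["Qty"]))         # False: S1 maps to two quantities
```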
Dependency Diagrams
A dependency diagram consists of the attribute names and all functional dependencies in a given table. The dependency diagram of the Supplier table shows:
Sno → Sname
Sno → City
Sno → Status
Here Y is FD on X, but X has two proper subsets, Sno and Status; City is FD on one proper subset of X, i.e. Sno:
Sno → City
According to the FFD definition, Y must not be FD on any proper subset of X, but here City is FD on one subset of X, i.e. Sno, so City is not FFD on (Sno, Status).
Consider another case, the SP table. Here, Qty is FD on the combination of Sno and Pno:
(Sno, Pno) → Qty
where (Sno, Pno) is X and Qty is Y.
Let F denote a given set of functional dependencies. F+, the closure, is the set of all the functional dependencies including F and those that can be deduced from F. The closure is important and may, for example, be needed in finding one or more candidate keys of the relation.
For example, the student relation has the following functional dependencies:
sno → sname
cno → cname
sno → address
cno → instructor
instructor → office
Let these dependencies be denoted by F. The closure of F, denoted by F+, includes F and all functional dependencies that are implied by F.
To determine F+, we need rules for deriving all functional dependencies that are implied by F. A set of rules that may be used to infer additional dependencies was proposed by Armstrong in 1974. These rules (or axioms) are a complete set of rules, in that all possible functional dependencies may be derived from them. The rules are:
1. Reflexivity Rule - If X is a set of attributes and Y is a subset of X, then X → Y holds.
The reflexivity rule is the simplest (almost trivial) rule. It states that each subset of X is
functionally dependent on X. In other words trivial dependence is defined as follows:
Trivial functional dependency: A trivial functional dependency is a functional
dependency of an attribute on a superset of itself.
For example: {Employee ID, Employee Address} → {Employee Address} is trivial; here {Employee Address} is a subset of {Employee ID, Employee Address}.
2. Augmentation Rule - If X → Y holds and W is a set of attributes, then WX → WY holds.
The augmentation rule is also quite simple. It states that if Y is determined by X, then a set of attributes W and Y together will be determined by W and X together. Note that we use the notation WX to mean the collection of all attributes in W and X, and write WX rather than the more conventional (W, X) for convenience.
For example: Rno → Name; {Class, Marks} is a set of attributes and acts as W. Then {Rno, Class, Marks} → {Name, Class, Marks}.
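Armstrong's rules can be applied mechanically to compute the closure of a set of attributes (everything that set determines), which in turn decides whether a dependency belongs to F+. A sketch using the student relation's dependencies listed above:

```python
def attribute_closure(attrs, fds):
    """Closure of attrs under fds, where fds is a list of
    (left-hand set, right-hand set) pairs."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for left, right in fds:
            # If the whole left side is already determined, add the right side.
            if left <= closure and not right <= closure:
                closure |= right
                changed = True
    return closure

F = [
    ({"sno"}, {"sname"}),
    ({"cno"}, {"cname"}),
    ({"sno"}, {"address"}),
    ({"cno"}, {"instructor"}),
    ({"instructor"}, {"office"}),
]

print(attribute_closure({"sno"}, F))  # sno determines sname and address
print(attribute_closure({"cno"}, F))  # cno pulls in office transitively
```

Note that office lands in the closure of cno only through instructor, i.e. the transitivity implied by repeated application of the rules.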
One approach is to find what attributes depend on a given set of attributes and therefore ought to be together. The other approach is to find the minimal covers.
Minimal Functional Dependencies or Irreducible Set of Dependencies
In discussing the concept of equivalent FDs, it is useful to define the concept of minimal
functional dependenciesor minimal cover which is useful in eliminating necessary
functional dependencies so that only the minimal numbers of dependencies need to be
enforced by the system. The concept of minimal cover of F is sometimes
called irreducible Set of F.
A functional depending set S is irreducible if the set has three following properties:
Each right set of a functional dependency of S contains only one attribute.
Each left set of a functional dependency of S is irreducible. It means that reducing
anyone attribute from left set will change the content of S (S will lose
some information).
Reducing any functional dependency will change the content of S.
Sets of functional dependencies with these properties are also called canonical or minimal.
E-R NOTATION
BY DINESH THAKUR
The E-R model can result in problems due to limitations in the way the entities are related in relational databases. These problems are called connection traps, and they often occur due to a misinterpretation of the meaning of certain relationships.
The two main types of connection traps are called fan traps and chasm traps.
Fan Trap
A fan trap occurs when two or more one-to-many relationships fan out from a single entity.
For example: Consider a database of Department, Site and Staff, where one site can contain a number of departments but a department is situated at only a single site. There are multiple staff members working at a single site, and a staff member can work at only a single site. This case is represented in the E-R diagram shown.
The problem with the above E-R diagram is that the question of which staff work in a particular department remains unanswered. The solution is to restructure the original E-R model to represent the correct association, as shown.
In other words, the two entities should have a direct relationship between them to provide the necessary information. Another way to solve the problem of the E-R diagram of the figure is to introduce a direct relationship between DEPT and STAFF, as shown in the figure.
Another example: Let us consider another case, where one branch contains multiple staff members and cars, as represented.
The problem with the above E-R diagram is that it is unable to tell which member of staff uses a particular car; for example, it is not possible to tell which member of staff uses car SH34.
The solution is to show the relationship between STAFF and CAR, as shown.
With this relationship the fan trap is resolved, and it is now possible to tell that car SH34 is used by S1500, as shown in the figure. It means it is now possible to tell which car is used by which staff member.
Chasm Trap
As discussed earlier, a chasm trap occurs when a model suggests the existence of a
relationship between entity types, but the pathway does not exist between certain entity
occurrences.
It occurs where there is a relationship with partial participation, which forms part of the
pathway between entities that are related.
For example: Let us consider a database where a single branch is allocated many staff who handle the management of properties for rent. Not all staff members handle property, and not all properties are managed by a member of staff. This case is represented in the E-R diagram.
Now, the above E-R diagram is not able to represent which properties are available at a branch. The partial participation of Staff and Property in the SP relation means that some properties cannot be associated with a branch office through a member of staff. We need to add the missing relationship, called BP, between the Branch and Property entities, as shown.
Another example: Consider another case, where a branch has multiple cars but a car can be associated with only a single branch. A car is handled by a single staff member, and a staff member can use only a single car. Some staff members have no car available for their use. This case is represented in an E-R diagram with appropriate connectivity and cardinality.
The problem with the above E-R diagram is that it is not possible to tell in which branch staff member S0003 works, as shown. It means the E-R diagram is not able to represent the relationship between BRANCH and STAFF, due to the partial participation of the CAR and STAFF entities. We need to add the missing relationship, called BS, between the BRANCH and STAFF entities, as shown.
With this relationship the chasm trap is resolved, and it is now possible to represent which branch each member of staff works at, as for our example of staff S0003, as shown.
The DBMS can be classified according to the number of users and the database site locations. These are:
On the basis of the number of users:
Single-user DBMS
Multi-user DBMS
On the basis of the site location
Centralized DBMS
Parallel DBMS
Distributed DBMS
Client/server DBMS
We will discuss some of the important types of DBMS systems which are presently being used.
Client-Server DBMS
The client/server architecture of a database system has two logical components, namely the client and the server. Clients are generally personal computers or workstations, whereas the server is a large workstation, mini-range computer system or mainframe computer system. The applications and tools of the DBMS run on one or more client platforms, while the DBMS software resides on the server. The server computer is called the back end and the client's computer is called the front end. The server and client computers are connected through a network. The applications and tools act as clients of the DBMS, making requests for its services. The DBMS, in turn, processes these requests and returns the results to the client(s). The client handles the Graphical User Interface (GUI) and does computations and other programming of interest to the end user. The server handles the parts of the job that are common to many clients, for example, database access and updates.
Multi-Tier client server computing models
In a single-tier system the database is centralized, which means the DBMS software and the data reside in one location, and dumb terminals were used to access the DBMS, as shown.
The rise of personal computers in businesses during the 1980s and the increased reliability of networking hardware caused two-tier and three-tier systems to become common. In a two-tier system, different software is required for the server and for the client. The figure illustrates the two-tier client/server model. In the early stages, the client/server computing model was called the two-tier computing model, in which the client was considered the data capture and validation tier and the server was considered the data storage tier. This scenario is depicted.
Problems of two-tier architecture
The need for enterprise scalability challenged this traditional two-tier client-server model. In the mid-1990s, as applications became more complex and could be deployed to hundreds or thousands of end-users, the client side came to suffer from the following problems:
A 'fat' client, requiring considerable resources on the client's computer to run effectively, including disk space, RAM and CPU.
Client machines require administration, which results in overhead.
Three-tier architecture
By 1995, the three-tier architecture appeared as an improvement over the two-tier architecture. It has three layers:
First Layer: User Interface, which runs on the end-user's computer (the client).
Second Layer: Application Server, a business logic and data processing layer. This middle tier runs on a server called the application server.
Third Layer: Database Server, a DBMS that stores the data required by the middle tier. This tier may run on a separate server called the database server.
As described earlier, the client is now responsible only for the application's user interface; since it requires fewer computational resources, clients are called 'thin clients', and they require less maintenance.
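The three layers can be sketched as cooperating components. This is a simplified in-process illustration under assumed names (ThinClient, ApplicationServer, DatabaseServer), not a real networked deployment; SQLite stands in for the database server:

```python
import sqlite3

class DatabaseServer:
    """Third tier: stores and retrieves data."""
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
        self.conn.execute("INSERT INTO account VALUES (1, 100.0)")

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

class ApplicationServer:
    """Middle tier: business logic and data processing."""
    def __init__(self, db):
        self.db = db

    def get_balance(self, account_id):
        rows = self.db.query("SELECT balance FROM account WHERE id = ?", (account_id,))
        return rows[0][0] if rows else None

class ThinClient:
    """First tier: user interface only; no data-access logic of its own."""
    def __init__(self, app):
        self.app = app

    def show_balance(self, account_id):
        return f"Balance: {self.app.get_balance(account_id):.2f}"

client = ThinClient(ApplicationServer(DatabaseServer()))
print(client.show_balance(1))  # Balance: 100.00
```

The point of the separation is that the client never touches SQL: swapping the database server or the business rules leaves the user-interface tier untouched.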
Advantages of Client/Server Database System
A client/server system provides less expensive platforms to support applications that had previously run only on large and expensive mini or mainframe computers.
Clients offer icon-based, menu-driven interfaces, which are superior to the traditional command-line, dumb-terminal interfaces typical of mini and mainframe computer systems.
A client/server environment facilitates more productive work by users and makes better use of existing data.
Metadata (also called the data dictionary) is data about the data. It is the self-describing nature of the database that provides program-data independence. It is also called the system catalog. For each data element in the database, it normally includes:
+ Name
+ Type
+ Range of values
+ Source
+ Access authorization
+ Indicates which application programs use the data, so that when a change in a data structure is contemplated, a list of the affected programs can be generated.
The data dictionary is used to actually control the database operation, data integrity and accuracy. Metadata is used by developers to develop the programs, queries, controls and procedures to manage and manipulate the data. Metadata is available to database administrators (DBAs), designers and authorized users as on-line system documentation. This improves the control of database administrators (DBAs) over the information system and the users' understanding and use of the system.
Entering keywords would produce a list of possible identifiers and their definitions. Using keywords, one can search the dictionary to locate the proper identifier to use in a program.
These days, commercial data dictionary packages are available to facilitate the entry, editing and use of data elements.
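The system catalog can be inspected directly in most DBMSs. SQLite, for example, exposes its metadata through the sqlite_master table and the table_info pragma; the employee table below is a made-up example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (eno INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# The catalog records each object's type and name (plus its defining SQL).
catalog = conn.execute("SELECT type, name FROM sqlite_master").fetchall()
print(catalog)

# Column-level metadata: name, declared type, and whether it is part of the key.
for cid, name, ctype, notnull, default, pk in conn.execute("PRAGMA table_info(employee)"):
    print(name, ctype, "PK" if pk else "")
```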
What is DBA?
BY DINESH THAKUR
The DBA is also responsible for applying SQL Server service packs. A service pack is not
a true upgrade, but an installation of the current version of software with various bug
fixes and patches that have been resolved since the product's release.
Monitoring the Database Server's Health and Tuning Accordingly
Monitoring the health of the database server means making sure that the following is
done:
Making sure that the backup schedules meet the recovery requirements
Data warehousing provides new and interesting challenges to the DBA, and in some companies a new career as a warehouse specialist.
Scheduling Events
The database administrator is responsible for setting up and scheduling various events
using Windows NT and SQL Server to aid in performing many tasks such as backups
and replication.
Providing 24-Hour Access
The database server must stay up, and the databases must always be protected and
online. Be prepared to perform some maintenance and upgrades after hours. Also be
prepared to carry that dreaded beeper. If the database server should go down, be ready
to get the server up and running. After all, that's your job.
Learning Constantly
To be a good DBA, you must continue to study and practice your mission-critical
procedures, such as testing your backups by recovering to a test database. In this
business, technology changes very fast, so you must continue learning about SQL Server,
available client/servers, and database design tools. It is a never-ending process.
The DBA should possess the following skills:
(1) A good knowledge of the operating system(s)
(2) A good knowledge of physical database design
(3) The ability to perform both Oracle and operating-system performance monitoring and make the necessary adjustments.
(4) Be able to provide a strategic database direction for the organization.
(5) Excellent knowledge of Oracle backup and recovery scenarios.
(6) Good skills in all Oracle tools.
(7) A good knowledge of Oracle security management.
(8) A good knowledge of how Oracle acquires and manages resources.
(9) Sound knowledge of the applications at your site.
(10) Experience and knowledge in migrating code, database changes, data and
What is Data Independence of DBMS?
BY DINESH THAKUR
The conceptual/internal mapping maps the conceptual view (conceptual records) to the internal view and hence to the stored data in the database (physical records).
If there is a need to change the file organization or the type of physical device used as a
result of growth in the database or new technology, a change is required in the
conceptual/ internal mapping between the conceptual and internal levels. This change is
necessary to maintain the conceptual level invariant. The physical data independence
criterion requires that the conceptual level does not specify storage structures or the
access methods (indexing, hashing etc.) used to retrieve the data from the physical
storage medium. Making the conceptual schema physically data independent means
that the external schema, which is defined on the conceptual schema, is in turn
physically data independent.
Logical data independence is more difficult to achieve than physical data independence, as
it requires flexibility in the design of the database, and the programmer has to foresee
future requirements or modifications in the design.
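A minimal sketch of physical data independence, using SQLite via Python (an illustrative choice; table and column names are hypothetical): changing the access method at the physical level, here by adding an index, leaves the query and its result untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ravi", "Sales"), (2, "Meena", "Accounts")])

query = "SELECT name FROM employee WHERE dept = 'Sales'"
before = conn.execute(query).fetchall()

# A change at the physical level: add an access method (an index) on dept.
conn.execute("CREATE INDEX idx_dept ON employee (dept)")

# The query text and its answer are unchanged: physical data independence.
after = conn.execute(query).fetchall()
print(before == after)
```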
What are the Difference Between DDL, DML and DCL Commands?
SQL statements are divided into three major categories: data definition language (DDL),
data manipulation language (DML), and transaction control language (TCL).
Data Definition Language (DDL) statements are used to define the database
structure or schema. Some examples:
* CREATE - create objects in the database
* ALTER - alters the structure of the database
* DROP - delete objects from the database
* TRUNCATE - remove all records from a table, including all spaces allocated for the records
* COMMENT - add comments to the data dictionary
* RENAME - rename an object
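A brief runnable sketch of some DDL statements, executed through SQLite from Python (table and column names are illustrative; SQLite supports CREATE, ALTER, and DROP, though not every command listed above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE: define a new schema object (a table).
conn.execute("CREATE TABLE dept (dept_no INTEGER, dept_name TEXT)")

# ALTER: change the structure of the existing table.
conn.execute("ALTER TABLE dept ADD COLUMN location TEXT")

# The data dictionary now reflects both DDL changes.
cols = [row[1] for row in conn.execute("PRAGMA table_info(dept)")]
print(cols)  # ['dept_no', 'dept_name', 'location']

# DROP: remove the object from the database.
conn.execute("DROP TABLE dept")
conn.close()
```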
Data Manipulation Language (DML) statements are used for managing data
within schema objects. Some examples:
* SELECT - retrieve data from a database
* INSERT - insert data into a table
* UPDATE - updates existing data within a table
* DELETE - deletes all records from a table; the space for the records remains
* MERGE - UPSERT operation (insert or update)
* CALL - call a PL/SQL or Java subprogram
* EXPLAIN PLAN - explain access path to data
* LOCK TABLE - control concurrency
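The core DML statements can be sketched with SQLite via Python (an illustrative choice; names are hypothetical). Note how DELETE removes rows but leaves the table itself in place, as described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER, name TEXT, marks INTEGER)")

# INSERT: add rows to the table.
conn.execute("INSERT INTO student VALUES (1, 'Arun', 70)")
conn.execute("INSERT INTO student VALUES (2, 'Bina', 85)")

# UPDATE: modify existing data within the table.
conn.execute("UPDATE student SET marks = 75 WHERE roll_no = 1")

# SELECT: retrieve data from the database.
rows = conn.execute("SELECT name, marks FROM student ORDER BY roll_no").fetchall()
print(rows)  # [('Arun', 75), ('Bina', 85)]

# DELETE: remove the records; the (now empty) table remains.
conn.execute("DELETE FROM student")
count = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
print(count)  # 0
```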
Transaction Control (TCL) statements are used to manage the changes made by
DML statements. It allows statements to be grouped together into logical transactions.
* COMMIT - save work done
* SAVEPOINT - identify a point in a transaction to which you can later roll back
* ROLLBACK - restore database to its state since the last COMMIT
* SET TRANSACTION - change transaction options like isolation level and what
rollback segment to use
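COMMIT and ROLLBACK can be demonstrated with SQLite from Python (illustrative table and values). A change made after the last COMMIT is undone by ROLLBACK:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (acc_no INTEGER, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.commit()  # COMMIT: make the work done so far permanent

# A change made inside a new, not-yet-committed transaction...
conn.execute("UPDATE account SET balance = 0 WHERE acc_no = 1")

# ROLLBACK: restore the database to its state as of the last COMMIT.
conn.rollback()

balance = conn.execute("SELECT balance FROM account WHERE acc_no = 1").fetchone()[0]
print(balance)  # 100 -- the uncommitted update was undone
```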
Type of Database System
The DBMS can be classified according to the number of users and the database
site locations. These are:
On the basis of the number of users:
Single-user DBMS
Multi-user DBMS
On the basis of the site location
Centralized DBMS
Parallel DBMS
Distributed DBMS
Client/server DBMS
We will discuss some of the important types of DBMS systems that are presently
in use.
The database system may be multi-user or single-user. The configuration of the
hardware and the size of the organization will determine whether it is a multi-user
system or a single user system.
In a single-user system, the database resides on one computer and is accessed by only one
user at a time. This one user may design, maintain, and write database programs.
Due to the large amount of data to be managed, most systems are multi-user. In this situation
the data are both integrated and shared. A database is integrated when the
same information is not recorded in two places. For example, both the Library
department and the Account department of the college database may need student
addresses. Even though both departments may access different portions of the database,
the students' addresses should only reside in one place. It is the job of the DBA to make
sure that the DBMS makes the correct addresses available from one central storage area.
When the central site computer or database system goes down, every user
is blocked from using the system until it comes back up.
Communication costs from the terminals to the central site can be expensive.
Client-Server DBMS
Client/server architecture of a database system has two logical components, namely the client
and the server. Clients are generally personal computers or workstations, whereas servers are
large workstations, mini-range computer systems, or mainframe computer systems. The
applications and tools of the DBMS run on one or more client platforms, while the DBMS
software resides on the server. The server computer is called the backend and the client
computer is called the frontend. These server and client computers are connected by a
network. The applications and tools act as clients of the DBMS, making requests for its
services. The DBMS, in turn, processes these requests and returns the results to the
client(s). The client handles the Graphical User Interface (GUI) and
does computations and other programming of interest to the end user. The server
handles the parts of the job that are common to many clients, for example, database access
and updates.
Multi-Tier client server computing models
In a single-tier system the database is centralized: the DBMS software and
the data reside in one location, and dumb terminals are used to access the DBMS, as
shown.
The rise of personal computers in businesses during the 1980s and the increased reliability
of networking hardware caused two-tier and three-tier systems to become common. In a
two-tier system, different software is required for the server and for the client.
The figure illustrates the two-tier client-server model. In its early stages, the client-server
computing model was called the two-tier computing model, in which the client was considered
the data capture and validation tier and the server the data storage tier. This scenario
is depicted.
Problems of two-tier architecture
The need for enterprise scalability challenged this traditional two-tier client-server
model. In the mid-1990s, as applications became more complex and could be deployed to
hundreds or thousands of end-users, the client side suffered from the following
problems:
A 'fat' client, requiring considerable resources on the client's computer to run effectively,
including disk space, RAM, and CPU.
Client machines that require administration, which results in overhead.
Three-tier architecture
By 1995, three-tier architecture appears as improvement over two-tier architecture. It
has three layers, which are:
First Layer: User Interface, which runs on the end-user's computer (the client).
Second Layer: Application Server. This is the business logic and data processing layer.
This middle tier runs on a server called the application server.
Third Layer: Database Server. This is the DBMS, which stores the data required by the
middle tier. This tier may run on a separate server called the database server.
As described earlier, the client is now responsible only for the application's user interface,
so it requires fewer computational resources; such clients are called 'thin clients' and
require less maintenance.
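The three layers can be sketched as cooperating components. Below is a minimal single-process illustration in Python (all names and the tax rule are hypothetical; an embedded SQLite database stands in for the database server): the user-interface tier calls the application tier, which in turn calls the database tier.

```python
import sqlite3

# Third layer: database server, which stores the data the middle tier needs.
class DatabaseTier:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE product (name TEXT, price INTEGER)")
        self.conn.execute("INSERT INTO product VALUES ('pen', 10)")

    def fetch_price(self, name):
        row = self.conn.execute(
            "SELECT price FROM product WHERE name = ?", (name,)).fetchone()
        return row[0] if row else None

# Second layer: application server holding the business logic.
class ApplicationTier:
    TAX_RATE = 0.1  # hypothetical business rule

    def __init__(self, db):
        self.db = db

    def price_with_tax(self, name):
        price = self.db.fetch_price(name)
        return round(price * (1 + self.TAX_RATE), 2)

# First layer: user interface running on the client.
def user_interface(app, name):
    return f"{name}: {app.price_with_tax(name)}"

app = ApplicationTier(DatabaseTier())
print(user_interface(app, "pen"))  # pen: 11.0
```

Because the user-interface function only formats results, the client stays "thin"; all data processing lives in the middle tier and all storage in the database tier.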
Advantages of Client/Server Database System
Client/server systems use less expensive platforms to support applications that had
previously run only on large and expensive mini or mainframe computers.
Clients offer an icon-based, menu-driven interface, which is superior to the traditional
command-line, dumb-terminal interface typical of mini and mainframe computer
systems.
The client/server environment facilitates more productive work by users and
better use of existing data.