Unit 4: Database Management System

DBMS software functions as an interface between users and databases, managing data, databases, and schemas. The purposes of DBMS include reducing data redundancy and inconsistencies, improving data access, ensuring data isolation and integrity, and handling failures and concurrent access anomalies. DBMS use data definition, manipulation, and control languages to create, retrieve, update, and delete data from databases.

Uploaded by

Poojitha Reddy

Unit 4

Database Management System


DBMS
Meaning
DBMS software primarily functions as an
interface between the end user and the
database, simultaneously managing the data,
the database engine, and the database
schema in order to facilitate the organization
and manipulation of data.
Purpose of DBMS
• Data redundancy and inconsistency: Data
redundancy occurs when the same piece of data
is stored in two or more separate places and is a
common occurrence in many businesses.
• This redundancy leads to higher storage and
access
cost. In addition, it may lead to data
inconsistency; that is, the various copies
of the same data may no longer agree.
For example, if a student has a double major
(say,
music and mathematics) the address and
telephone number of that student
may appear in a file that consists of student
records of students in the Music
department and in a file that consists of
student records of students in the
Mathematics department.
• Difficulty in accessing data:
Conventional file-processing environments do
not
allow needed data to be retrieved in a
convenient and efficient manner. More
responsive data-retrieval systems are required
for general use.
• Data isolation: Isolation is the database-level
property that controls how and when changes
are made and if they become visible to each
other, users, and systems. One of the goals of
isolation is to allow multiple transactions to
occur at the same time without adversely
affecting the execution of the others.
• Integrity problems: The data values stored in
the database must satisfy certain
types of consistency constraints.
• Atomicity problems: A computer system, like any other
device, is subject to failure. In many applications, it is crucial
that, if a failure occurs, the data
be restored to the consistent state that existed prior to the
failure. Consider a program to transfer $500 from the account
balance of department A to
the account balance of department B. If a system failure
occurs during the execution of the program, it is possible
that the $500 was removed from the
balance of department A but was not credited to the balance
of department B, resulting in an inconsistent database state.
• Concurrent-access anomalies: For the sake of overall
performance of the system
and faster response, many systems allow multiple users
to update the
data simultaneously. Indeed, today, the largest Internet
retailers may have
millions of accesses per day to their data by shoppers. In
such an environment,
interaction of concurrent updates is possible and may
result in inconsistent
data.
• Security problems: Not every user of the
database system should be able
to access all the data. For example, in a
university, payroll personnel need
to see only that part of the database that has
financial information. They do
not need access to information about
academic records.
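The account-transfer scenario described under atomicity above can be sketched with Python's built-in sqlite3 module (the accounts table, department names, and balances are invented for illustration): both updates are wrapped in a single transaction, so the $500 either moves completely or not at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (dept TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("A", 1000), ("B", 1000)])
conn.commit()

try:
    # Both updates form one atomic unit: debit A, then credit B.
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE dept = 'A'")
    # A failure between these two statements would leave the database
    # inconsistent without transactions; with them, the debit is undone too.
    conn.execute("UPDATE accounts SET balance = balance + 500 WHERE dept = 'B'")
    conn.commit()
except sqlite3.Error:
    conn.rollback()

print(dict(conn.execute("SELECT dept, balance FROM accounts")))
# → {'A': 500, 'B': 1500}
```

If an exception is raised between the two UPDATEs, the rollback restores both balances to 1000, which is exactly the "consistent state that existed prior to the failure".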
Data
• Computer data is information processed or
stored by a computer. This information may be
in the form of text documents, images, audio
clips, software programs, or other types of
data. Computer data may be processed by the
computer's CPU and is stored
in files and folders on the computer's hard
disk.
Database
• A database is a collection of information that
is organized so that it can be easily accessed,
managed and updated. Computer databases
typically contain aggregations of data records
or files, containing information about sales
transactions or interactions with specific
customers.
Types of databases
• 1. Operational databases
• An operational database is a database that stores
data inside of an enterprise. They can contain things
like payroll records, customer information and
employee data. They are critical to data
warehousing and business analytics operations. The
key characteristic of operational databases is their
orientation toward real-time operations, compared
with conventional databases that rely on batch
processing.
• 2. End-User databases:
• These databases are shared by users and
contain information meant for use by the end-
users like managers at different levels. These
managers may not be concerned about the
individual transactions as found in operational
databases. Rather, they would be more
interested in summary information.
• Centralised Database
• A Database which is located and stored in a
single location is called a Centralised
Database. The centralised database’s location
is generally a server, desktop, or mainframe
computer, which is accessed by users through
a computer network like a LAN or WAN.
• Distributed databases:
• These databases have contributions from the
common databases as well as the data captured
from the local operations. The data remains
distributed at various sites in the organisation.
As the sites are linked to each other with the
help of communication links, the entire
collection of data at all the sites constitutes the
logical database of the organisation.
• Personal databases:
• The personal databases are maintained,
generally, on Personal computers. They
contain information that is meant for use only
among a limited number of users, generally
working in the same department.
• Commercial databases:
• The database to which access is provided to
users as a commercial venture is called a
commercial or external database. These
databases contain information that external
users would require but by themselves would
not be able to afford maintaining such huge
databases.
Database Language
Data Definition Language (DDL)
DDL is used for specifying the database schema. It is used for
creating tables, schemas, indexes, constraints, etc. in a database.
Let's see the operations that we can perform on a database using
DDL:
• To create the database instance – CREATE
• To alter the structure of the database – ALTER
• To drop database instances – DROP
• To remove all records from a table – TRUNCATE
• To rename database objects – RENAME
• To add comments to the data dictionary – COMMENT
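The DDL commands can be tried out with Python's built-in sqlite3 module. Note that SQLite supports CREATE, ALTER, and DROP but not TRUNCATE or COMMENT (those exist in systems such as Oracle or MySQL); the student table below is an invented example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE: define a new table (schema definition).
conn.execute("CREATE TABLE student (roll_no INTEGER PRIMARY KEY, name TEXT)")

# ALTER: change the structure of an existing table.
conn.execute("ALTER TABLE student ADD COLUMN dept TEXT")

# The schema now reflects both DDL statements.
cols = [row[1] for row in conn.execute("PRAGMA table_info(student)")]
print(cols)  # → ['roll_no', 'name', 'dept']

# DROP: remove the table (and all its data) entirely.
conn.execute("DROP TABLE student")
```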
Data Manipulation Language (DML)
DML is used for accessing and manipulating
data in a database. The following operations
on a database come under DML:
• To read records from table(s) – SELECT
• To insert record(s) into the table(s) – INSERT
• To update the data in table(s) – UPDATE
• To delete record(s) from the table(s) – DELETE
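The four DML commands can be demonstrated with sqlite3 (the student table and names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER PRIMARY KEY, name TEXT)")

# INSERT: add new records to the table.
conn.executemany("INSERT INTO student VALUES (?, ?)",
                 [(1, "Asha"), (2, "Ravi")])

# UPDATE: modify existing records.
conn.execute("UPDATE student SET name = 'Ravi K' WHERE roll_no = 2")

# SELECT: read records back.
rows = conn.execute("SELECT roll_no, name FROM student ORDER BY roll_no").fetchall()
print(rows)  # → [(1, 'Asha'), (2, 'Ravi K')]

# DELETE: remove the records that match the condition.
conn.execute("DELETE FROM student WHERE roll_no = 1")
```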
• Data Control language (DCL)
• DCL is used for granting and revoking user
access on a database:
• To grant access to user – GRANT
• To revoke access from user – REVOKE
• Transaction Control Language(TCL)
• The changes made to the database using DML
commands are either made permanent or
rolled back using TCL.
• To persist the changes made by DML
commands in database – COMMIT
• To rollback the changes made to the database
– ROLLBACK
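COMMIT and ROLLBACK can be seen side by side with sqlite3; in Python these map to the connection's commit() and rollback() calls (the table t is an invented example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

# COMMIT: persist the changes made by DML commands.
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# ROLLBACK: undo the changes made since the last commit.
conn.execute("INSERT INTO t VALUES (2)")
conn.rollback()

# Only the committed row survives; the rolled-back insert is gone.
print(conn.execute("SELECT x FROM t").fetchall())  # → [(1,)]
```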
Data Model
• A data model refers to the logical inter-
relationships and data flow between different
data elements involved in the information world.
It also documents the way data is stored and
retrieved. Data models facilitate communication
between business and technical development by
accurately representing the requirements of the
information system and by designing the
responses needed for those requirements.
Types of data models
1. The conceptual model establishes the
entities, their attributes, and their
relationships.
2. The logical data model defines the structure
of the data elements and sets the
relationships between them.
3. The physical data model describes the
database-specific implementation of the data
model.
• A conceptual model is developed to present
an overall picture of the system by recognizing
the business objects involved. It defines what
entities exist, NOT which tables.
• Logical data model
• Logical ERD is a detailed version of a
Conceptual ERD. A logical ER model is
developed to enrich a conceptual model by
defining explicitly the columns in each entity
and introducing operational and transactional
entities.
• Physical data model
• Physical ERD represents the actual design blueprint
of a relational database. A physical data model
elaborates on the logical data model by assigning
each column with type, length, nullable, etc. Since a
physical ERD represents how data should be
structured and related in a specific DBMS it is
important to consider the convention and restriction
of the actual database system in which the database
will be created. 
Transaction management
• A Database Transaction is a logical unit of
processing in a DBMS which entails one or
more database access operations. In a nutshell,
database transactions represent real-world
events of any enterprise. During a transaction
the database may be temporarily inconsistent;
only once the transaction is committed does it
move from one consistent state to another.
States of Transactions
• Active State – A transaction enters the active state when
the execution process begins. During this state, read or
write operations can be performed.
• Partially Committed – A transaction goes into the partially
committed state after its final operation has been executed.
• Committed State – When a transaction is in the committed
state, it has already completed its execution successfully.
Moreover, all of its changes are recorded to the database
permanently.
• Failed State – A transaction is considered failed when any
one of the checks fails or if the transaction is aborted while
it is in the active state.
• Terminated State – A transaction reaches the terminated
state when it leaves the system and cannot be restarted.
ACID properties
• Atomicity
By this, we mean that either the entire transaction takes
place at once or doesn’t happen at all. There is no midway
i.e. transactions do not occur partially. Each transaction is
considered as one unit and either runs to completion or is
not executed at all. It involves the following two operations.
—Abort: If a transaction aborts, changes made to database
are not visible.
—Commit: If a transaction commits, changes made are
visible.
Atomicity is also known as the ‘All or nothing rule’.
• Consistency
This means that integrity constraints must be
maintained so that the database is consistent
before and after the transaction. It refers to
the correctness of a database.
• Isolation
This property ensures that multiple transactions can occur
concurrently without leading to the inconsistency of
database state. Transactions occur independently without
interference. Changes occurring in a particular transaction
will not be visible to any other transaction until that
particular change in that transaction is written to memory
or has been committed. This property ensures that the
execution of transactions concurrently will result in a state
that is equivalent to a state achieved if these were executed
serially in some order.
• Durability:
This property ensures that once the
transaction has completed execution, the
updates and modifications to the database are
stored in and written to disk and they persist
even if a system failure occurs. These updates
now become permanent and are stored in
non-volatile memory. The effects of the
transaction, thus, are never lost.
• Serializability
• Serial schedule both by definition and execution means
that the transactions bestowed upon it will take place
serially, that is, one after the other. This leaves no place for
inconsistency within the database. But, when a set of
transactions are scheduled non-serially, they are
interleaved leading to the problem of concurrency within
the database. Non-serial schedules do not wait for one
transaction to complete for the other one to begin.
Serializability in DBMS decides if an interleaved non-serial
schedule is serializable or not.
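The "lost update" problem that serializability checking guards against can be sketched in plain Python (an invented illustration, not from the text): two interleaved read-modify-write "transactions" each add 100 to a balance, but because both read before either writes, one update is lost, a result no serial order could produce.

```python
balance = 1000

# Serial schedule: T1 runs to completion, then T2. Each adds 100.
b = balance                  # T1 reads 1000
serial_after_t1 = b + 100    # T1 writes 1100
serial_result = serial_after_t1 + 100  # T2 reads 1100, writes 1200
print(serial_result)  # → 1200

# Interleaved (non-serial) schedule: both transactions read
# the original value before either one writes.
t1_read = balance            # T1 reads 1000
t2_read = balance            # T2 reads 1000 (before T1 writes)
balance = t1_read + 100      # T1 writes 1100
balance = t2_read + 100      # T2 writes 1100, overwriting T1's update
print(balance)  # → 1100 (lost update: matches no serial result)
```

Serializability testing asks exactly this question: is the interleaved schedule equivalent to some serial one? Here it is not, so the schedule would be rejected.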
Storage management
• Storage management refers to the
management of the data storage equipment
that is used to store the user/computer
generated data. Hence it is a tool or set of
processes used by an administrator to keep
data and storage equipment safe.
• Primary Storage − The memory storage that is
directly accessible to the CPU comes under this
category. CPU's internal memory (registers), fast
memory (cache), and main memory (RAM) are
directly accessible to the CPU, as they are all placed
on the motherboard or CPU chipset. This storage is
typically very small, ultra-fast, and volatile. Primary
storage requires continuous power supply in order
to maintain its state. In case of a power failure, all
its data is lost.
• Secondary Storage − Secondary storage
devices are used to store data for future use
or as backup. Secondary storage includes
memory devices that are not a part of the CPU
chipset or motherboard, for example,
magnetic disks, optical disks (DVD, CD, etc.),
hard disks, flash drives, and magnetic tapes.
• Tertiary Storage − Tertiary storage is used to
store huge volumes of data. Since such storage
devices are external to the computer system,
they are the slowest in speed. These storage
devices are mostly used to take the back up of
an entire system. Optical disks and magnetic
tapes are widely used as tertiary storage.
Features of storage management
• 1. Volatility
• Non-volatile memory
• Will retain the stored information even if it is not constantly
supplied with electric power. It is suitable for long-term
storage of information. Nowadays used for most of
secondary, tertiary, and off-line storage.
• Volatile memory
• Requires constant power to maintain the stored information.
The fastest memory technologies of today are volatile ones.
Since primary storage is required to be very fast, it
predominantly uses volatile memory.
• 2. Differentiation
• Dynamic random access memory
• A form of volatile memory, which also requires the
stored information to be periodically re-read and re-
written, or refreshed, otherwise it would vanish.
• Static memory 
• A form of volatile memory similar to DRAM with the
exception that it never needs to be refreshed as long
as power is applied. (It loses its content if power is
removed).
• 3. Mutability
• Read/write storage or mutable storage 
• Allows information to be overwritten at any time. A computer without some amount
of read/write storage for primary storage purposes would be useless for many tasks.
Modern computers typically use read/write storage also for secondary storage.
• Read only storage 
• Retains the information stored at the time of manufacture, and write once
storage (Write Once Read Many) allows the information to be written only once at
some point after manufacture. These are called immutable storage. Immutable
storage is used for tertiary and off-line storage. Examples include CD-ROM and CD-R.
• Slow write, fast read storage 
• Read/write storage, which allows information to be overwritten multiple times, but
with the write operation being much slower than the read operation. Examples
include CD-RW and flash memory.
• 4. Accessibility
• Random access 
• Any location in storage can be accessed at any moment in
approximately the same amount of time. Such characteristic
is well suited for primary and secondary storage.
• Sequential access
• The accessing of pieces of information will be in a serial order,
one after the other; therefore the time to access a particular
piece of information depends upon which piece of
information was last accessed. Such characteristic is typical of
off-line storage.
• 5. Addressability
• Location-addressable 
• Each individually accessible unit of information in storage is selected with its numerical memory
address. In modern computers, location-addressable storage usually limits to primary storage,
accessed internally by computer programs, since location-addressability is very efficient, but
burdensome for humans.
• File addressable 
• Information is divided into files of variable length, and a particular file is selected with human-
readable directory and file names. The underlying device is still location-addressable, but the
operating system of a computer provides the file system abstraction to make the operation more
understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
• Content-addressable 
• Each individually accessible unit of information is selected on the basis of (part of) the
contents stored there. Content-addressable storage can be implemented using software (computer
program) or hardware (computer device), with hardware being faster but more expensive option.
Hardware content addressable memory is often used in a computer's CPU cache.
Database administrator
• Database administrators (DBAs) use
specialized software to store and organize
data. The role may include capacity planning,
installation, configuration, database design,
migration, performance monitoring, security,
troubleshooting, as well as backup and data
recovery.
Roles of a database administrator
• 1. Software installation and Maintenance
• A DBA often collaborates on the initial installation
and configuration of a new database (e.g. Oracle or
SQL Server). The system administrator sets up
hardware and deploys the operating system for
the database server, then the DBA installs the
database software and configures it for use. As
updates and patches are required, the DBA
handles this on-going maintenance.
• 2. Data Extraction, Transformation, and Loading
• Known as ETL, data extraction, transformation,
and loading refers to efficiently importing large
volumes of data that have been extracted from
multiple systems into a data warehouse
environment.
• This external data is cleaned up and
transformed to fit the desired format so that it
can be imported into a central repository.
• 3. Specialised Data Handling
• Today’s databases can be massive and may
contain unstructured data types such as
images, documents, or sound and video files.
Managing a very large database (VLDB) may
require higher-level skills and additional
monitoring and tuning to maintain efficiency.
• 4. Database Backup and Recovery
• DBAs create backup and recovery plans and
procedures based on industry best practices,
then make sure that the necessary steps are
followed. Backups cost time and money, so
the DBA may have to persuade management
to take necessary precautions to preserve
data.
• 5. Security
• A DBA needs to know potential weaknesses of
the database software and the company’s
overall system and work to minimise risks. No
system is one hundred per cent immune to
attacks, but implementing best practices can
minimise risks.
• 6. Authentication
• Setting up employee access is an important
aspect of database security. DBAs control who
has access and what type of access they are
allowed. For instance, a user may have
permission to see only certain pieces of
information, or they may be denied the ability
to make changes to the system.
• 7. Capacity Planning
• The DBA needs to know how large the
database currently is and how fast it is
growing in order to make predictions about
future needs. Storage refers to how much
room the database takes up in server and
backup space. Capacity refers to usage level.
• 8. Performance Monitoring
• Monitoring databases for performance issues is part
of the on-going system maintenance a DBA
performs. If some part of the system is slowing
down processing, the DBA may need to make
configuration changes to the software or add
additional hardware capacity. Many types of
monitoring tools are available, and part of the DBA’s
job is to understand what they need to track to
improve the system. 
• 9. Database Tuning
• Performance monitoring shows where the
database should be tweaked to operate as
efficiently as possible. The physical
configuration, the way the database is
indexed, and how queries are handled can all
have a dramatic effect on database
performance.
• 10. Troubleshooting
• DBAs are on call for troubleshooting in case of
any problems. Whether they need to quickly
restore lost data or correct an issue to
minimise damage, a DBA needs to quickly
understand and respond to problems when
they occur.
Types of database users
• Database users are categorized based upon their interaction with the
database.
• There are six types of database users in DBMS.
1. Database Administrator (DBA) :
Database Administrator (DBA) is a person/team who defines the schema
and also controls the 3 levels of the database.
The DBA will then create a new account ID and password for the user if
he/she needs to access the database.
The DBA is also responsible for providing security to the database and
allows only authorized users to access/modify the database.
– The DBA also monitors recovery and backup and provides technical support.
– The DBA has a DBA account in the DBMS, which is called a system or superuser
account.
– The DBA repairs damage caused by hardware and/or software failures.
[Figure: 3-level database architecture]
2. Naive / Parametric End Users :
Parametric end users are unsophisticated users
who don’t have any DBMS knowledge but
frequently use database applications in their
daily life to get the desired results. For example,
railway ticket booking users are naive users.
Clerks in any bank are naive users because they
don’t have any DBMS knowledge but they still
use the database and perform their given tasks.
3. Sophisticated Users :
Sophisticated users can be engineers,
scientists, or business analysts who are familiar
with the database. They can develop their
own database applications according to their
requirements. They don’t write the program
code, but they interact with the database by
writing SQL queries directly through the query
processor.
4. Database Designers :
Database designers are the users who design
the structure of the database, which includes
tables, indexes, views, constraints, triggers, and
stored procedures. He/she controls what data
must be stored and how the data items are to
be related.
5. Application Programmers :
Application programmers are the back-end
programmers who write the code for the
application programs. They are computer
professionals.
6. Casual Users / Temporary Users :
Casual users are the users who occasionally
use/access the database, but each time they
access the database they require new
information; for example, middle- or
higher-level managers.
Overall structure of DBMS
• Applications: – It can be considered as a user-friendly web page where the
user enters the requests. Here he simply enters the details that he needs and
presses buttons to get the data.
• End User: – They are the real users of the database. They can be developers,
designers, administrators, or the actual users of the database.
• DDL: – Data Definition Language (DDL) is a query fired to create database,
schema, tables, mappings, etc in the database. These are the commands
used to create objects like tables, indexes in the database for the first time. In
other words, they create the structure of the database.
• DDL Compiler: – This part of the database is responsible for processing the
DDL commands. That means this compiler actually breaks down the
command into machine-understandable codes. It is also responsible for
storing the metadata information like table name, space used by it, number
of columns in it, mapping information, etc.
• DML Compiler: – When the user inserts, deletes, updates or retrieves the
record from the database, he will be sending requests which he
understands by pressing some buttons. But for the database to
work/understand the request, it should be broken down to object code.
This is done by this compiler. One can imagine this as when a person is
asked some question, how this is broken down into waves to reach the
brain!
• Query Optimizer: – When a user fires some requests, he is least bothered
how it will be fired on the database. He is not all aware of the database or
its way of performance. But whatever be the request, it should be
efficient enough to fetch, insert, update, or delete the data from the
database. The query optimizer decides the best way to execute the user
request which is received from the DML compiler. It is similar to selecting
the best nerve to carry the waves to the brain!
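The optimizer's decision can actually be observed in SQLite with the EXPLAIN QUERY PLAN command (a sketch using sqlite3; the student table and idx_roll index are invented): once an index exists, the plan switches from a full table scan to an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER, name TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports how the optimizer will run the query;
    # the human-readable detail is the last column of the result row.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT name FROM student WHERE roll_no = 7"
before = plan(query)   # without an index: a full scan of the table
conn.execute("CREATE INDEX idx_roll ON student (roll_no)")
after = plan(query)    # with the index: the optimizer searches idx_roll
print(before)
print(after)
```

The exact wording of the plan text varies between SQLite versions, but the pre-index plan contains "SCAN" and the post-index plan names the idx_roll index.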
• Stored Data Manager: – This is also known as
Database Control System. It is one of the main
central systems of the database. It is
responsible for various tasks
– It converts the requests received from query
optimizer to machine-understandable form.  It
makes actual requests inside the database. It is
like fetching the exact part of the brain to answer.
– It helps to maintain consistency and integrity by applying the
constraints. That means it does not allow inserting/updating/
deleting any data if it has a child entry. Similarly, it does not allow
entering any duplicate value into database tables.
– It controls concurrent access. If there are multiple users accessing
the database at the same time, it makes sure all of them see
correct data. It guarantees that no data loss or data
mismatch happens between the transactions of multiple users.
– It helps to back up the database and recover data whenever
required. In a huge database, when a transaction fails
unexpectedly, reverting the changes is not easy; it therefore
maintains a backup of all data so that the data can be
recovered.
• Data Files: – It has the real data stored in it. It can be
stored as magnetic tapes, magnetic disks, or optical disks.
• Compiled DML: – Some of the processed DML statements
(insert, update, delete) are stored in it so that if there are
similar requests, it will be re-used.
• Data Dictionary: – It contains all the information about
the database. As the name suggests, it is the dictionary of
all the data items. It contains a description of all
the tables, view,  constraints, indexes, triggers, etc.
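In SQLite, the data dictionary is exposed as the sqlite_master catalog, which can be queried like any other table (a sketch; the student table, index, and view are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON student (name)")
conn.execute("CREATE VIEW v_names AS SELECT name FROM student")

# sqlite_master describes every table, index, and view in the database.
catalog = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY type, name"
).fetchall()
print(catalog)
# → [('index', 'idx_name'), ('table', 'student'), ('view', 'v_names')]
```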
Types of Database models
1. Flat file database
2. Hierarchical database
3. Network database
4. Relational database
5. No SQL database
Flat file database
• A flat file database is a database that stores
data in a plain text file. Each line of the text
file holds one record, with fields separated by
delimiters, such as commas or tabs. While it
uses a simple structure, a flat file database
cannot contain multiple tables like a relational
database can.
• Ex: Spreadsheet
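A flat file database can be sketched with Python's csv module (the file contents and field names are invented for illustration): one record per line, fields separated by commas, and no second table to relate to.

```python
import csv
import io

# One record per line, comma-delimited: this text IS the whole database.
flat_file = io.StringIO(
    "roll_no,name,dept\n"
    "1,Asha,Music\n"
    "2,Ravi,Mathematics\n"
)

rows = list(csv.DictReader(flat_file))
print(rows[0]["name"])  # → Asha

# A "query" is just a linear scan over every record.
maths = [r for r in rows if r["dept"] == "Mathematics"]
print([r["name"] for r in maths])  # → ['Ravi']
```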
Advantages
• All records are stored in one place.
• Easy to understand and configure using
various standard office applications.
• It is an excellent option for small databases.
• It requires less hardware and software
components.
Disadvantages
• A flat file database is harder to update.
• It is harder to change the data format.
• It is a poor database in terms of complex
queries.
• It increases redundancy and inconsistency.
Hierarchical database
• The hierarchical model organizes data into a
tree-like structure, where each record has a
single parent or root. Sibling records are
sorted in a particular order. That order is used
as the physical order for storing the database.
This model is good for describing many real-
world relationships.
Advantages
• Data can be retrieved easily due to the explicit
links present between the table structures.
• Referential integrity is always maintained i.e. any
changes made in the parent table are
automatically updated in a child table.
• Promotes data sharing.
• It is conceptually simple due to the parent-child
relationship.
• Database security is enforced.
Disadvantages
• Because it is designed as a one-to-many
relationship, a child can have only a single
parent, so redundancy comes into the picture
because of data duplication.
• When we want to access the data in this
model we need to travel from the root  node
step by step to the desired level, which will be
time consuming.
• If you want to add a new level in between
the existing levels, the entire tree structure
has to be reconstructed, which is a tedious
and time-consuming process.
Network database model
• Network database management systems
(Network DBMSs) are based on a network
data model that allows each record to have
multiple parents and multiple child records. A
network database allows flexible relationship
model between entities.
Advantages
• Data access is fast with the network model.
• The network model allows creating more
complex and more powerful queries than a
database with a hierarchical
database model. A user can execute a variety
of database queries when selecting the
network model.
Disadvantages
• The network model is a very complex
database model, so the user must be very
familiar with the overall structure of the
database.
• Updating inside this database is quite a
difficult and tedious task. We need the help of
the application programs that are being used
to navigate the data.
Relational database
• A relational database is a type of database. It
uses a structure that allows us to identify and
access data in relation to another piece of
data in the database. Often, data in a
relational database is organized into tables.
• Relational databases store data in tables.
Tables can grow large and have a multitude of
columns and records. Relational database
management systems (RDBMSs) use SQL to
manage the data in these large tables. 
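Accessing data "in relation to another piece of data" is done by joining tables on a shared key; a sketch in Python's sqlite3 (the dept and student tables are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept    (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE student (roll_no INTEGER PRIMARY KEY, name TEXT,
                      dept_id INTEGER REFERENCES dept(dept_id));
INSERT INTO dept    VALUES (1, 'Music'), (2, 'Mathematics');
INSERT INTO student VALUES (1, 'Asha', 1), (2, 'Ravi', 2);
""")

# The relationship between rows is expressed through the shared dept_id key,
# so department names are stored once instead of repeated per student.
rows = conn.execute("""
    SELECT s.name, d.dept_name
    FROM student s JOIN dept d ON s.dept_id = d.dept_id
    ORDER BY s.roll_no
""").fetchall()
print(rows)  # → [('Asha', 'Music'), ('Ravi', 'Mathematics')]
```

Note how this avoids the redundancy problem from the flat-file and hierarchical models: renaming a department means updating one row in dept, not every student record.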
Advantages
• It is easy to use.
• It is secured in nature.
• The data manipulation can be done.
• It limits redundancy and replication of the data.
• It offers better data integrity.
• It provides better data independence.
• It provides better backup and recovery procedures.
• It provides multiple interfaces.
• Multiple users can access the database
concurrently, which is not possible in a plain DBMS.
Disadvantages
• Software is expensive.
• Complex software refers to expensive
hardware and hence increases overall cost to
avail the RDBMS service.
• It requires skilled human resources to
implement.
• Certain applications are slow in processing.
• It is difficult to recover the lost data.
NoSQL Database
• A NoSQL database (originally referring to "non-
SQL" or "non-relational") is a database that
provides a mechanism for storage and retrieval
of data. This data is modeled in means other
than the tabular relations used in relational databases.
• The data structures used by NoSQL databases
are different from those used by default in
relational databases which makes some
operations faster in NoSQL.
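One such non-tabular structure, the key-value/document model, can be sketched in plain Python (a purely illustrative toy, not a real NoSQL engine): each record is a free-form document looked up by key, with no fixed columns.

```python
# A toy document store: keys map to schema-free documents (dicts).
store = {}

def put(key, doc):
    store[key] = doc

def get(key):
    return store.get(key)

# Documents in the same store need not share fields,
# unlike rows in a relational table.
put("user:1", {"name": "Asha", "majors": ["Music", "Mathematics"]})
put("user:2", {"name": "Ravi", "city": "Hyderabad"})

print(get("user:1")["majors"])  # → ['Music', 'Mathematics']
print(get("user:2")["city"])    # → Hyderabad
```

Lookup by key is a single hash access, which is why such operations can be faster than the equivalent relational query; the trade-off is that there is no declarative query language or join support.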
Advantages
• NoSQL is Non-relational
• Non-relational, or in other words table-less,
NoSQL databases differ
from SQL databases. In this sense, they
provide ease of management while
ensuring a high level of flexibility with data
models that are new.
• NoSQL is Low Cost
• While being low cost, NoSQL is also an open-
source database, which provides an excellent
solution for smaller enterprises to opt for it
at affordable prices.
• Scalability is Easier
• NoSQL has been gaining popularity because of
the elasticity and scalability that it offers over
the other kinds of databases that are
available. It has been designed to perform
exceptionally well under any conditions
including low-cost hardware.
Disadvantages
• Less Community Support
• Though NoSQL has been expanding at a rapid pace,
community support is relatively limited because the technology is newer.
• Standardization
• NoSQL lacks a standardized query language like SQL, which hinders
further adoption and creates concerns during migration.
Standardization is what helps the database industry to unify.
• Interfaces and Interoperability
• Interfaces and interoperability are another concern
faced by NoSQL databases that still needs to be addressed.
Advantages of DBMS
• 1. Improved data sharing:
• The DBMS helps create an environment in which end users have
better access to more and better-managed data.
• Such access makes it possible for end users to respond quickly to
changes in their environment.
• 2. Improved data security:
• The more users who access the data, the greater the risk of data
security breaches. Corporations invest considerable amounts of
time, effort, and money to ensure that corporate data are used
properly.
• A DBMS provides a framework for better enforcement of data
privacy and security policies.
• 3. Better data integration:
• Wider access to well-managed data promotes an integrated view of the
organization’s operations and a clearer view of the big picture.
• It becomes much easier to see how actions in one segment of the company affect
other segments.
• 4. Minimized data inconsistency:
• Data inconsistency exists when different versions of the same data appear in
different places.
• For example, data inconsistency exists when a company’s sales department stores a
sales representative’s name as “Bill Brown” and the company’s personnel
department stores that same person’s name as “William G. Brown,” or when the
company’s regional sales office shows the price of a product as $45.95 and its
national sales office shows the same product’s price as $43.95.
• The probability of data inconsistency is greatly reduced in a properly designed
database.
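The price example above can be sketched with sqlite3: because the price is stored in exactly one product row that both offices reference, the two-price inconsistency cannot arise. Table and column names are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Each product's price is stored exactly once.
cur.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
cur.execute("CREATE TABLE sales (office TEXT, product_id INTEGER REFERENCES products(id))")
cur.execute("INSERT INTO products VALUES (1, 'Widget', 45.95)")

# The regional and national offices reference the same product row
# instead of keeping their own copies of the price.
cur.executemany("INSERT INTO sales VALUES (?, 1)", [("regional",), ("national",)])

# Both offices necessarily see the same price.
cur.execute("SELECT DISTINCT p.price FROM sales s JOIN products p ON p.id = s.product_id")
prices = cur.fetchall()
print(prices)  # → [(45.95,)]
conn.close()
```

A single `UPDATE products SET price = ...` would then change the price everywhere at once, which is the essence of minimizing inconsistency through proper design.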
• 5. Improved data access:
• The DBMS makes it possible to produce quick answers to
ad hoc queries.
• From a database perspective, a query is a specific request
issued to the DBMS for data manipulation—for example, to
read or update the data. Simply put, a query is a question,
and an ad hoc query is a spur-of-the-moment question.
• The DBMS sends back an answer (called the query result
set) to the application.
• For example, end users can pose spur-of-the-moment
questions about the data and receive the result set
without any special programming.
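An ad hoc query can be sketched with Python's sqlite3 module; the sales table and figures are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (rep TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("Bill", 120.0), ("Ann", 300.0), ("Bill", 80.0)])

# A spur-of-the-moment question ("how much has each rep sold?")
# needs no application changes, just a query. The rows returned
# are the query result set.
cur.execute("SELECT rep, SUM(amount) FROM sales GROUP BY rep ORDER BY rep")
result_set = cur.fetchall()
print(result_set)  # → [('Ann', 300.0), ('Bill', 200.0)]
conn.close()
```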
• 6. Improved decision making:
• Better-managed data and improved data access make it possible to
generate better-quality information, on which better decisions are based.
• The quality of the information generated depends on the quality of the
underlying data.
• Data quality is a comprehensive approach to promoting the accuracy,
validity, and timeliness of the data. While the DBMS does not guarantee
data quality, it provides a framework to facilitate data quality initiatives.
• 7. Increased end-user productivity:
• The availability of data, combined with the tools that transform data into
usable information, empowers end users to make quick, informed
decisions that can make the difference between success and failure in the
global economy.
Disadvantages of DBMS
• 1. Increased costs:
• Database systems require sophisticated hardware and software and highly
skilled personnel.
• The cost of maintaining the hardware, software, and personnel required to
operate and manage a database system can be substantial. Training, licensing,
and regulation compliance costs are often overlooked when database systems
are implemented.
• 2. Management complexity:
• Database systems interface with many different technologies and have a
significant impact on a company’s resources and culture.
• The changes introduced by the adoption of a database system must be
properly managed to ensure that they help advance the company’s objectives.
Given the fact that database systems hold crucial company data that are
accessed from multiple sources, security issues must be assessed constantly.
• 3. Maintaining currency:
• To maximize the efficiency of the database system, you must keep your system
current.
• Therefore, you must perform frequent updates and apply the latest patches and
security measures to all components.
• Because database technology advances rapidly, personnel training costs tend to be
significant.
• 4. Vendor dependence:
• Given the heavy investment in technology and personnel training, companies might
be reluctant to change database vendors.
• 5. Frequent upgrade/replacement cycles:
• DBMS vendors frequently upgrade their products by adding new functionality. Such
new features often come bundled in new upgrade versions of the software.
• Some of these versions require hardware upgrades. Not only do the upgrades
themselves cost money, but it also costs money to train database users and
administrators to properly use and manage the new features.