DatabaseManagementSystemSandeep PDF
DATABASE MANAGEMENT SYSTEM
By Sandeep Yadav
MARCH 3, 2020
ISMT
Tinkune,KTM
Database Management System 2020
Contents
Introduction
Conclusion
Introduction
Designs for a relational database management system to meet client requirements
Attribute(s)
Other Advantages
Development of a fully functional system which meets client and system requirements, using an open source language
Dashboard
Transactions
Database Security
Critical evaluation of the effectiveness of the system design and development against client and system requirements
Conclusion
Part 3
Introduction
Demonstration of the tools available in the system to monitor and optimize system performance, and examine the audit logs
Extended Events
Demonstration of the tools available in the system to manage security and authorizations
Assessment of the effectiveness of the system administration and management tools available on the platform, identifying any shortcomings of the tools
Assessment of any future improvements that may be required to ensure the continued effectiveness of the database system
Part 1: Produce presentation slides which analyse different types of database management systems.
• Assess how relational database models and the process of normalization can provide reliable
and efficient data structures.
Introduction
Cosmos International College, a newly established educational organization in the heart of Mahendrapul, Pokhara, provides courses related to Management. Since there is only a limited number of students, the college has been maintaining all its information in an Excel file. However, it has now also set up a library containing a large number of books, so the college has decided to develop a web application to manage its library. The requirements of the library management software are as follows:
• An appropriate book entry system with relevant information about the books and their categories.
• Appropriate tracking of the books which have been issued and the books that need to be returned.
• Appropriate search facilities for books, members, books issued and books returned.
• An appropriate dashboard for the admin which gives an overview of how many book categories exist, the total number of available books, the total number of members, the total number of books issued, and the total penalty collected over the fiscal year.
• A secure and effective login system for the admin and the librarian, with the admin having full rights to access the system and the librarian being able only to view stock, issue books and accept book returns.
In Part 1, I am going to produce presentation slides which analyse different types of database management systems. I will compare and contrast the different types of database models. Then I will assess how relational database models and the process of normalization can provide reliable and efficient data structures. Finally, I will critically evaluate the different database management systems available, in relation to open source and vendor-specific platforms, justifying the criteria used in the evaluation.
Presentation Slides
Conclusion:
In this way, I have produced presentation slides which analyse different types of database management systems. I have compared and contrasted the different types of database models. I have assessed how relational database models and the process of normalization can provide reliable and efficient data structures. Finally, I have critically evaluated the different database management systems available, in relation to open source and vendor-specific platforms, justifying the criteria used in the evaluation.
Part 2:
Design a database management system using a relational model to meet client requirements
and develop a database management system using a suitable platform.
• Produce a design for a relational database management system to meet client requirements
• Develop a fully functional system which meets client and system requirements, using an open
source language
• Critically evaluate the effectiveness of the system design and development against client and
system requirements.
Introduction:
Cosmos International College, a newly established educational organization in the heart of Mahendrapul, Pokhara, provides courses related to Management. Since there is only a limited number of students, the college has been maintaining all its information in an Excel file. However, it has now also set up a library containing a large number of books, so the college has decided to develop a web application to manage its library. The requirements of the library management software are as follows:
• An appropriate book entry system with relevant information about the books and their categories.
• Appropriate tracking of the books which have been issued and the books that need to be returned.
• Appropriate search facilities for books, members, books issued and books returned.
• An appropriate dashboard for the admin which gives an overview of how many book categories exist, the total number of available books, the total number of members, and the total number of books issued.
• A secure and effective login system for the admin and the librarian, with the admin having full rights to access the system and the librarian being able only to view stock, issue books and accept book returns.
In this Part 2, I will design a database management system using a relational model to meet the client requirements and develop it using a suitable platform. I will produce a design for a relational database management system to meet the client requirements and analyze how the design will optimize system performance. I will then develop a fully functional system which meets the client and system requirements, using an open source language, and test the system for functionality and performance. I will then implement effective features in the solution to handle concurrency, security, user authorizations and data recovery. Finally, I will critically evaluate the effectiveness of the system design and development against the client and system requirements.
Attribute(s):
Attributes are the properties which define an entity type. For example, Roll_No, Name, DOB, Age, Address and Mobile_No are the attributes which define the entity type Student. In an ER diagram, an attribute is represented by an oval.
Key Attribute –
The attribute which uniquely identifies each entity in the entity set is called the key attribute. For example, Roll_No will be unique for each student. In an ER diagram, a key attribute is represented by an oval with the attribute name underlined.
Composite Attribute –
An attribute composed of many other attributes is called a composite attribute. For example, the Address attribute of the Student entity type consists of Street, City, State and Country. In an ER diagram, a composite attribute is represented by an oval comprising other ovals.
Multivalued Attribute –
An attribute consisting of more than one value for a given entity. For example, Phone_No (a student can have more than one). In an ER diagram, a multivalued attribute is represented by a double oval.
Derived Attribute –
An attribute which can be derived from other attributes of the entity type is known as a derived attribute, e.g. Age (can be derived from DOB). In an ER diagram, a derived attribute is represented by a dashed oval.
The complete entity type Student with its attributes can be represented as:
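As a sketch of how the Student entity above could map to relational tables (the table and column names follow the examples in this section, and Python's built-in sqlite3 stands in for the actual DBMS), the key attribute becomes the primary key, the derived attribute is not stored, and the multivalued attribute gets its own table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Roll_No is the key attribute, so it becomes the primary key.
# Age is a derived attribute (computable from DOB), so it is not stored.
cur.execute("""
    CREATE TABLE Student (
        Roll_No INTEGER PRIMARY KEY,
        Name    TEXT NOT NULL,
        DOB     TEXT,
        Address TEXT
    )
""")

# A multivalued attribute like Phone_No goes in its own table,
# allowing many phone numbers per student.
cur.execute("""
    CREATE TABLE Student_Phone (
        Roll_No  INTEGER REFERENCES Student(Roll_No),
        Phone_No TEXT,
        PRIMARY KEY (Roll_No, Phone_No)
    )
""")

cur.execute("INSERT INTO Student VALUES (1, 'Asha', '2001-04-12', 'Pokhara')")
cur.execute("INSERT INTO Student_Phone VALUES (1, '9800000001')")
cur.execute("INSERT INTO Student_Phone VALUES (1, '9800000002')")
conn.commit()

phones = cur.execute(
    "SELECT COUNT(*) FROM Student_Phone WHERE Roll_No = 1").fetchone()[0]
print(phones)  # one student, two phone numbers
```

The sample student and phone numbers are made up for illustration only.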
A set of relationships of the same type is known as a relationship set. The following relationship set depicts that S1 is enrolled in C2, S2 is enrolled in C1 and S3 is enrolled in C3.
Unary Relationship –
When only ONE entity set participates in a relationship, the relationship is called a unary relationship. For example, one person is married to only one person.
Binary Relationship –
When TWO entity sets participate in a relationship, the relationship is called a binary relationship. For example, Student is enrolled in Course.
n-ary Relationship –
When n entity sets participate in a relationship, the relationship is called an n-ary relationship.
Cardinality:
The number of times an entity of an entity set participates in a relationship set is known as its cardinality.
One to one – When each entity in each entity set can take part only once in the relationship, the cardinality is one to one. Let us assume that a male can marry one female and a female can marry one male. So the relationship will be one to one.
Many to one – When entities in one entity set can take part only once in the relationship set and entities in the other entity set can take part more than once, the cardinality is many to one. Let us assume that a student can take only one course but one course can be taken by many students. So the cardinality will be n to 1: for one course there can be n students, but for one student there will be only one course.
Using sets, it can be represented as:
In this case, each student is taking only 1 course but 1 course has been taken by many students.
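A sketch of this many-to-one cardinality in a relational schema (table and column names are illustrative, using Python's built-in sqlite3): the foreign key sits on the "many" side, so each student row points at exactly one course while a course can appear in many student rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the reference
cur = conn.cursor()

cur.execute("CREATE TABLE Course (course_id TEXT PRIMARY KEY)")
# NOT NULL + foreign key: every student takes exactly one course.
cur.execute("""
    CREATE TABLE Student (
        roll_no   INTEGER PRIMARY KEY,
        course_id TEXT NOT NULL REFERENCES Course(course_id)
    )
""")

cur.executemany("INSERT INTO Course VALUES (?)", [("C1",), ("C2",)])
cur.executemany("INSERT INTO Student VALUES (?, ?)",
                [(1, "C1"), (2, "C1"), (3, "C2")])

# Two students share course C1: many students, one course.
c1_students = cur.execute(
    "SELECT COUNT(*) FROM Student WHERE course_id = 'C1'").fetchone()[0]
print(c1_students)
```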
Many to many – When entities in all entity sets can take part more than once in the relationship, the cardinality is many to many. Let us assume that a student can take more than one course and one course can be taken by many students. So the relationship will be many to many.
In this example, student S1 is enrolled in C1 and C3, and course C3 is enrolled in by S1, S3 and S4. So it is a many-to-many relationship.
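The many-to-many 'Enrolled in' set above can be sketched relationally with a junction table whose rows are (student, course) pairs; the data below mirrors the S1/S3/S4 and C1/C3 example, with sqlite3 standing in for the DBMS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE Student (roll_no TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE Course (course_id TEXT PRIMARY KEY)")
# The junction table represents the 'Enrolled in' relationship set:
# each row is one (student, course) pairing.
cur.execute("""
    CREATE TABLE Enrolled_In (
        roll_no   TEXT REFERENCES Student(roll_no),
        course_id TEXT REFERENCES Course(course_id),
        PRIMARY KEY (roll_no, course_id)
    )
""")

cur.executemany("INSERT INTO Student VALUES (?)",
                [("S1",), ("S3",), ("S4",)])
cur.executemany("INSERT INTO Course VALUES (?)", [("C1",), ("C3",)])
# S1 takes C1 and C3; C3 is taken by S1, S3 and S4.
cur.executemany("INSERT INTO Enrolled_In VALUES (?, ?)",
                [("S1", "C1"), ("S1", "C3"), ("S3", "C3"), ("S4", "C3")])

s1_courses = cur.execute(
    "SELECT COUNT(*) FROM Enrolled_In WHERE roll_no = 'S1'").fetchone()[0]
c3_students = cur.execute(
    "SELECT COUNT(*) FROM Enrolled_In WHERE course_id = 'C3'").fetchone()[0]
print(s1_courses, c3_students)
```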
Participation Constraint:
Participation Constraint is applied on the entity participating in the relationship set.
Total Participation – Each entity in the entity set must participate in the relationship. If each student must enroll in a course, the participation of Student will be total. Total participation is shown by a double line in an ER diagram.
Partial Participation – An entity in the entity set may or may NOT participate in the relationship. If some courses are not enrolled in by any student, the participation of Course will be partial.
The diagram depicts the 'Enrolled in' relationship set, with the Student entity set having total participation and the Course entity set having partial participation. Every student in the Student entity set participates in the relationship, but there exists a course C4 which does not take part in it.
Weak Entity Type –
An entity type that does not have a key attribute of its own is called a weak entity type. For example, a company may store the information of the dependents (parents, children, spouse) of an employee, but the dependents have no existence without the employee. So Dependent will be a weak entity type and Employee will be the identifying entity type for Dependent.
A weak entity type is represented by a double rectangle. The participation of weak entity type is
always total. The relationship between weak entity type and its identifying strong entity type is called
identifying relationship and it is represented by double diamond (geeksforgeeks, n.d.).
Following the notation above, I have also created the ER diagram of the Library Management System, which has the following entities.
2. Data Flow Diagram
A data flow diagram (DFD) shows how information enters and leaves the system, what changes the information and where information is stored. The purpose of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a communication tool between a systems analyst and any person who plays a part in the system, and it acts as the starting point for redesigning a system.
A DFD usually begins with a context diagram, the level 0 DFD, which is a simple representation of the whole system. To elaborate further, we drill down to a level 1 diagram with lower-level functions decomposed from the major functions of the system. This can continue to evolve into a level 2 diagram when further analysis is required. Progression to levels 3, 4 and so on is possible, but anything beyond level 3 is not very common. Bear in mind that the level of detail for decomposing a particular function depends on the complexity of that function.
Now I'd like to briefly introduce a few diagram notations which you'll see in the tutorial below.
External Entity
An external entity can represent a human, system or subsystem. It is where certain data comes from or goes to. It is external to the system we study in terms of the business process. For this reason, external entities are usually drawn on the edge of a diagram.
Process
A process is a business activity or function where the manipulation and transformation of data take
place. A process can be decomposed to a finer level of details, for representing how data is being
processed within the process.
Data Store
A data store represents the storage of persistent data required and/or produced by the process. Here
are some examples of data stores: membership forms, database tables, etc.
Data Flow
A data flow represents the flow of information, with its direction indicated by an arrowhead shown at the end(s) of the flow connector.
0-level DFD:
It is also known as the context diagram. It is designed to be an abstract view, showing the system as a single process with its relationships to external entities. It represents the entire system as a single bubble, with input and output data indicated by incoming and outgoing arrows.
1-level DFD:
In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this level we highlight the main functions of the system and break the high-level process of the 0-level DFD down into subprocesses.
2-level DFD:
A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan or record the specific detail of the system's functioning (geeksforgeeks, n.d.).
By following the above, I have created the data flow diagrams for the Library Management System, shown below.
This is the zero-level data flow diagram, which shows that there are two types of user: the admin and the member. Both communicate with the system.
This is the first-level data flow diagram, which illustrates the mechanism between the admin and the database.
3. Class Diagram
Class diagrams are the main building blocks of every object-oriented method. A class diagram can be used to show classes, relationships, interfaces, associations and collaborations. Class diagrams are standardized in UML. Since classes are the building blocks of an application based on OOP, the class diagram has an appropriate structure to represent the classes, inheritance, relationships, and everything else that OOP has in its context. It describes the various kinds of objects and the static relationships between them.
The main purposes of using class diagrams are:
• It is the only UML diagram which can appropriately depict various aspects of the OOP concept.
• Proper design and analysis of an application can be made faster and more efficient.
• It is the basis for deployment and component diagrams.
There are several software tools, available online and offline, which can be used to draw these diagrams, like Edraw Max, Lucidchart, etc. There are several points to keep in focus while drawing a class diagram; these can be considered its syntax:
Each class is represented by a rectangle subdivided into three compartments: name, attributes and operations.
There are three types of modifiers which are used to decide the visibility of attributes and operations.
After following the above tutorial about class diagrams, I have created the class diagram for the Library Management System, which is shown below.
4. Schema Diagram
At its most basic level, the schema serves as a container for data assets. However, different database
vendors operationalize schemas in different ways. Oracle, for example, treats every schema as a user
account. To create a new schema, a database administrator creates a new database user with the
intended schema name.
Because schemas constitute a basic structural feature of a database, most database environments
apply access permissions to objects on a schema level.
For example, a company database might contain a series of users. Each user owns a schema, but access to different schemas can be granted individually, with granular permissions, to users outside of the home schema.
Most database management tools don't list schemas; instead, they list databases and users.
For example, a company creates user accounts (schemas) for Bob and Jane. It also creates accounts
for departments like HR and marketing. Then, it gives an analyst in each department access to the
department's schema account.
The HR analyst creates tables and views within the HR schema and grants access to Bob to read (but
not write to) a table that lists employee names and employee ID numbers. Also, the HR analyst may
grant access to Jane to read and write to a table that lists employee phone numbers.
By granting access this way, only the right roles and users can read, write, or modify the data in a
self-contained data asset within the larger database.
Every database engine looks to schemas as the foundational method of segregating data in a multi-user environment (quora, n.d.).
Following the points above about schemas, I have also designed the schema for the Library Management System.
Entity Relationship Diagrams are excellent tools for communicating about an entire system. These diagrams are graphical representations of the flow of data and information, and they are most commonly used in business organizations to make data easy to follow. This conceptual database model is an effective way of communicating with individuals at all levels. To apply it effectively, it is essential to have a good knowledge of ER diagram notation, for example as implemented in Lucidchart; this will help you use each feature of the diagram effectively. To gain appropriate knowledge about these relationship diagrams, you can search for an ER diagram tutorial online.
The most common use of this diagram is to present the relation of the various tables present in a
database. Some key benefits of Entity Relationship Diagrams are further discussed in this article
(quora, n.d.).
Visual Representation
The most crucial benefit of an ERD is that it offers a visual representation of the layout. An effective design assists the database designers in determining the flow of data and the working of the complete system. An ERD combined with data-flow diagrams results in an effective visual representation.
Effective communication
The clear representation of the data listed under proper headings and tables results in the effective
flow of information and communication. The readers can easily understand the relationship between
different fields. The information is represented via different symbols. There are various symbols for
representing different information like relationships are represented by diamond shaped boxes,
attributes are represented by ovals and entities are represented by rectangular boxes. These symbols
allow the designer to have a proper understanding of the working of the database after completion.
Easy To Understand
Entity relationship diagrams can easily be created by expert designers. They are designed in a simple manner so that all individuals can understand them easily. Before actually designing the database, the designers are required to get the design confirmed and approved by the representatives who are to use the data. The representatives have the right to give suggestions for rectifying issues related to the design, and their contribution can play an important role in enhancing the overall design.
High flexibility
This is yet another feature of ERD models. Though the complete database is linked to information in
different tables, the readers can easily make out the relationship between various tables. There are
several other mathematical formulae which can be used to determine the relationships.
Entity relationship diagrams are an essential part of the business organizations as they prove to be
beneficial in managing wide data in an easy and effective manner. It acts as a blueprint of the
existing database and allows the designers to create an accurate design as per the needs and
requirements of the company and the project. The ERD model makes data flow more efficient. These
creative and simple diagrams serve as the best tool for the business organizations allowing them to
maintain their database effectively.
• It gives you a clear model of your final product that you can show a non-technical client to make sure you're on the right track, before you've done any coding.
• It provides a chance to recognize errors in logic or gaps in understanding, and correct them, before you've done any coding.
• It provides a blueprint to work with when you do start creating the actual database, and gives you something to point back to when questions arise about the design.
• It gives a clear breakdown for anyone who wants a quick overview of the structure of the database and how tables are related to each other.
• It gives you a chance to work out the multiplicity of relationships and various other constraints before you've done any coding.
(quora, n.d.)
How data flow diagrams optimize system performance
Data flow diagrams are a classic technique for software modelling. Devised originally by Ed Yourdon, they enable a model of the flow of information within existing systems, prior to their redesign. Data analysis enables an understanding of the transformation of data by various processes. Its strength is the ability to model that transformation and follow the flow of information through a system.
Where data analysis alone is insufficient for modelling is in the area of states and timing. Some variations of the Yourdon model, such as the Ward-Mellor extensions, are intended to address this limitation, but Yourdon models alone are not sufficient for exploring steady-state systems in real time.
Data flow diagrams are particularly good at modelling and design. They lack the rigour for
modelling low level implementations, which are usually expressed in pseudo code or similar
notations.
(quora, n.d.)
Class diagrams are at the heart of UML. They are based on the principles of object orientation and can be used in various phases of a project. During analysis they appear as the domain model, where they attempt to create a representation of reality. During the design phase the diagram is used to model the software, and during the implementation phase it can be used to generate source code. Class diagrams are a vital part of any software development project, and they form the foundation of all software products.
Class diagrams give you the ability to create models with the help of UML using attributes,
relationships, operations and intersections. A class diagram visualizes the paths between classes in
the form of aggregations and associations as well as through the passing on of properties and
behavior between classes. These take the form of generalizations.
Class diagrams are the most important kind of UML diagram and are vitally important in software
development. Class diagrams are the best way to illustrate a system’s structure in a detailed way,
showing its attributes, operations as well as its inter-relationships. Classes play a significant role in
object orientated programming languages – they are indispensable when it comes to software
modelling. (quora, n.d.)
In other words, schemas are very similar to separate namespaces or containers that are used to store database objects. Security permissions can be applied to schemas; hence schemas are an important tool for separating and protecting database objects on the basis of user access rights. This improves flexibility for security-related administration of the database.
Before SQL Server 2005, database object owners and users were the same thing, and database objects (tables, indexes, views and so on) were owned by the user. In other words, database objects were directly linked to the user, and a user could not be dropped without removing the database objects that were associated with that user. In SQL Server 2005, schema separation was introduced: now a database object is no longer owned by a user, group or role. Instead, the schema can be owned by a user, group or role, a schema can have multiple owners, and schema ownership is transferable. Database objects are created within the schema. Now a user can be dropped without dropping the database objects owned by that user, but a schema cannot be deleted if it contains a database object.
Default schema
The default schema is the first schema searched when resolving object names. A default schema can be defined for each user. Using the SCHEMA_NAME function we can determine the default schema for the database.
A schema can be made the default for a user by specifying DEFAULT_SCHEMA with CREATE USER or ALTER USER. If no default schema is defined, SQL Server assumes "dbo" as the default schema. Note that there is no default schema associated with a user if the user is authenticated as a member of a group in the Windows operating system; in this case a new schema will be created with the same name as the user.
Other Advantages
• A single schema can be shared among multiple databases and database users.
• A database user can be dropped without dropping database objects.
• Manipulation of and access to the objects is now more controlled and more secure; the schema acts as an additional layer of security.
• Database objects can be moved among schemas.
• The ownership of schemas is transferable.
A schema is a very useful database concept and helps us to separate database users from the database
object owners and also helps to create a logical grouping of database objects.
Development of a fully functional system which meets client and system requirements, using an
open source language
Screenshots of all the forms of the application designed are included below.
Login: provides the way to enter the library management system after entering a username and password.
Dashboard:
It contains the details of all functionalities through which we can enter to desired page
Accession Mapping
It contains the records of accession mappings. We can edit, delete and view the details of an accession mapping.
Author
It contains the records of authors. We can edit, delete and view the details of an author.
Book
It contains the records of books. We can edit, delete and view the details of a book.
Book Categories
It contains the records of book categories. We can edit, delete and view the details of a book category.
Book Issue Return
It contains the records of book issue returns. We can edit, delete and view the details of a book issue return.
Fine
It contains the records of fines. We can edit, delete and view the details of a fine.
Member
It contains the records of members. We can edit, delete and view the details of a member.
Member Category
It contains the records of member categories. We can edit, delete and view the details of a member category.
Subject
It contains the records of subjects. We can edit, delete and view the details of a subject.
Action: when an accession mapping is created. Output: the record of the created accession mapping is displayed.
Action: when a book issue return is created. Output: the record of the book issue return is saved.
Action: when a book issue return is edited. Output: the book issue return is updated.
Action: when a fine is created. Output: the fine record is saved and displayed.
Action: when a member is created. Output: the member record is saved and displayed.
Action: when a member is edited. Output: the member record is updated and displayed.
Action: when a subject is created. Output: the subject record is saved and displayed.
Database Concurrency
Database concurrency is the ability of the database to support multiple users and processes working on the database at the same time; for example, an airline reservation system supporting thousands of active users at any given time. Concurrency improves system performance and throughput, but not without side effects. In this lesson, we will learn about concurrency in database systems and how to commit and rollback transactions. We will also learn how to lock records to maintain database integrity.
Transactions
Transactions group a set of tasks into a single execution unit. Each transaction begins with a specific
task and ends when all the tasks in the group successfully complete. If any of the tasks fail, the
transaction fails. Therefore, a transaction has only two results: success or failure.
Incomplete steps result in the failure of the transaction. A database transaction, by definition, must be atomic, consistent, isolated and durable; these are popularly known as the ACID properties.
How to implement transactions using SQL?
The following commands are used to control transactions. It is important to note that these statements cannot be used while creating tables; they are only used with DML commands such as INSERT, UPDATE and DELETE.
Following is an example which deletes those records from the table which have age = 20 and then COMMITs the changes to the database.
Queries:
DELETE FROM Sample_table1 WHERE age = 20;
COMMIT;
Output:
Thus, the two rows with age = 20 would be deleted from the table, and a SELECT statement would show only the remaining rows.
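The same delete-then-COMMIT sequence can be sketched with Python's built-in sqlite3 module; the table name and sample rows here are assumed for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Illustrative sample table: two rows with age = 20, one with age = 22.
cur.execute("CREATE TABLE Sample (name TEXT, age INTEGER)")
cur.executemany("INSERT INTO Sample VALUES (?, ?)",
                [("Ram", 20), ("Sita", 22), ("Hari", 20)])
conn.commit()

# Delete the rows with age = 20, then make the change permanent.
cur.execute("DELETE FROM Sample WHERE age = 20")
conn.commit()

remaining = cur.execute("SELECT COUNT(*) FROM Sample").fetchone()[0]
print(remaining)  # only the age-22 row is left
```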
ROLLBACK: If any error occurs with any of the grouped SQL statements, all changes need to be aborted. The process of reversing changes is called rollback. This command can only be used to undo transactions since the last COMMIT or ROLLBACK command was issued.
Syntax:
ROLLBACK;
Example:
Using the sample table Sample_table1 from the example above, delete those records from the table which have age = 20 and then ROLLBACK the changes in the database.
Queries:
DELETE FROM Sample_table1 WHERE age = 20;
ROLLBACK;
Output:
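The rollback behaviour can be sketched with sqlite3 (table name and rows assumed for illustration): the uncommitted DELETE is undone, so both rows survive.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Sample (name TEXT, age INTEGER)")
cur.executemany("INSERT INTO Sample VALUES (?, ?)",
                [("Ram", 20), ("Sita", 22)])
conn.commit()

# Delete the age-20 row, then undo the uncommitted change.
cur.execute("DELETE FROM Sample WHERE age = 20")
conn.rollback()

remaining = cur.execute("SELECT COUNT(*) FROM Sample").fetchone()[0]
print(remaining)  # both rows survive the rollback
```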
SAVEPOINT: A savepoint is a point in a transaction to which you can later roll back without undoing the entire transaction.
Syntax:
SAVEPOINT SAVEPOINT_NAME;
This command is used only for the creation of a SAVEPOINT within a transaction; in general, ROLLBACK is used to undo a group of transactions.
Syntax for rolling back to a savepoint:
ROLLBACK TO SAVEPOINT_NAME;
You can ROLLBACK to any SAVEPOINT at any time to return the appropriate data to its state at that point.
Example:
Using the Sample table1 from the example above, delete those records which have age = 20 and then
ROLLBACK the changes in the database using savepoints.
Queries:
SAVEPOINT SP1;
//Savepoint created.
//deleted
SAVEPOINT SP2;
//Savepoint created.
Here SP1 is the first SAVEPOINT, created before the deletion. In this example, one deletion has taken
place; after the deletion, a second SAVEPOINT, SP2, is created.
Output:
The deletion has taken place. Let us assume that you have changed your mind and decided to
ROLLBACK to the SAVEPOINT that you identified as SP1, which was created before the deletion.
The deletion is undone by this statement:
ROLLBACK TO SP1;
//Rollback completed.
Notice that the deletion is undone, because SP1 is the SAVEPOINT created before the deletion took place.
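Putting the whole savepoint example together as a sketch (the table name Sample_table1 is assumed, as before):

```sql
SAVEPOINT SP1;                            -- savepoint before the deletion
DELETE FROM Sample_table1 WHERE AGE = 20;
SAVEPOINT SP2;                            -- savepoint after the deletion
ROLLBACK TO SP1;                          -- undoes the deletion made after SP1
```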
RELEASE SAVEPOINT: This command is used to remove a SAVEPOINT that you have created.
Syntax:
RELEASE SAVEPOINT SAVEPOINT_NAME;
Once a SAVEPOINT has been released, you can no longer use the ROLLBACK command to undo
transactions performed since that SAVEPOINT.
SET TRANSACTION: This command is used to initiate a database transaction and to specify
characteristics of the transaction that follows.
Triggers
A trigger is a stored procedure in a database that is automatically invoked whenever a special
event occurs in the database. For example, a trigger can be invoked when a row is inserted into a
specified table or when certain table columns are being updated.
Syntax:
create trigger [trigger_name]
[before | after]
{insert | update | delete}
on [table_name]
[for each row]
[trigger_body]
Explanation of syntax:
create trigger [trigger_name]: Creates or replaces an existing trigger with the given trigger_name.
[before | after]: Specifies whether the trigger fires before or after the triggering event.
{insert | update | delete}: Specifies the DML operation the trigger responds to.
on [table_name]: Specifies the name of the table associated with the trigger.
[for each row]: Specifies a row-level trigger, i.e., the trigger will be executed for each row being
affected.
[trigger_body]: The operation to perform when the trigger fires.
Example:
Given a Student Report database in which students' marks assessments are recorded, create a
trigger so that the total and average of the specified marks are automatically inserted whenever a
record is inserted.
Here, as the trigger must be invoked before the record is inserted, the BEFORE tag is used: a
BEFORE INSERT trigger is created on the Student table, so that whenever subject marks
are entered, before this data is inserted into the database, the trigger will compute the two derived
values and store them along with the entered values, e.g.,
mysql> insert into Student values(0, "ABCDE", 20, 20, 20, 0, 0);
+-----+-------+-------+-------+-------+-------+------+
| 100 | ABCDE |  20   |  20   |  20   |  60   |  36  |
+-----+-------+-------+-------+-------+-------+------+
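The trigger described above could be sketched as follows (the trigger name and the mark-column names are illustrative assumptions, as the original statement is not fully shown):

```sql
-- Hedged sketch of a BEFORE INSERT trigger (MySQL syntax);
-- the trigger name and column names are assumptions
CREATE TRIGGER stud_marks
BEFORE INSERT ON Student
FOR EACH ROW
SET NEW.total = NEW.sub1 + NEW.sub2 + NEW.sub3,
    NEW.per   = NEW.total * 60 / 100;
```

With marks of 20 each, this computes total = 60 and per = 36, which matches the row shown in the output above.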
Stored Procedures
Stored procedures are created to perform one or more DML operations on the database. A stored
procedure is a group of SQL statements that accepts input in the form of parameters, performs some
task, and may or may not return a value.
Syntax (PL/SQL):
CREATE OR REPLACE PROCEDURE procedure_name (parameters)
IS
variables;
BEGIN
//statements;
END;
The most important part is the parameters, which are used to pass values to the procedure. There
are three different types of parameters:
IN:
This is the default parameter mode for the procedure. It always receives a value from the calling program.
OUT:
This parameter always sends a value back to the calling program.
IN OUT:
This parameter performs both operations: it receives a value from, as well as sends a value back to,
the calling program.
Example:
Imagine a table named emp_table stored in the database. We write a procedure to update an
employee's salary by 1000.
IS
BEGIN
COMMIT;
END;
VARIABLE v NUMBER;
PRINT :v
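A complete version of the procedure could be sketched as follows (Oracle PL/SQL; the procedure name and the emp_table column names are assumptions, since only fragments of the original appear above):

```sql
-- Hedged sketch; column names (emp_id, salary) are assumed
CREATE OR REPLACE PROCEDURE raise_salary (p_emp_id IN NUMBER)
IS
BEGIN
    UPDATE emp_table
       SET salary = salary + 1000
     WHERE emp_id = p_emp_id;
    COMMIT;
END;
```

It could then be invoked from SQL*Plus with, for example, EXEC raise_salary(101);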
Database Security
Database security refers to the collective measures used to protect and secure a database or database
management software from illegitimate use and malicious threats and attacks.
It is a broad term that includes a multitude of processes, tools and methodologies that ensure security
within a database environment.
Database security covers and enforces security on all aspects and components of databases. This
includes:
• Restricting unauthorized access and use by implementing strong and multifactor access and
data management controls
• Load/stress testing and capacity testing of a database to ensure it does not crash under a
distributed denial of service (DDoS) attack or user overload
• Physical security of the database server and backup equipment against theft and natural
disasters
• Reviewing the existing system for any known or unknown vulnerabilities, and defining and
implementing a road map/plan to mitigate them
Database security is generally planned, implemented and maintained by a database administrator
and/or other information security professionals.
Backup and recovery refers to the process of backing up data in case of a loss, and setting up systems
that allow data recovery after such a loss. Backing up data requires copying and archiving
computer data so that it is accessible in case of deletion or corruption. Data from an earlier time
can only be recovered if it has been backed up.
Data backup is a form of disaster recovery and should be part of any disaster recovery plan.
Data backup cannot always restore all of a system's data and settings. For example, computer
clusters, active directory servers, or database servers may need additional forms of disaster recovery
because a backup and recovery may not be able to reconstitute them fully.
Today, a great deal of data can be backed up when using cloud storage, which means archiving on a
local system's hard drive or using external storage is not necessary. Mobile devices, in particular, can
be set up using cloud technologies, allowing data to be recovered automatically.
1. Scheduled Backups
Backup scheduling is one of the most important features of data backup software. It allows
manual backups (with all the filtering, compressing, transferring to storage, and other steps) to be
eliminated completely.
In this example, the LibraryMsystem database will be backed up to disk at the default backup
location.
1. After connecting to the appropriate instance of the Microsoft SQL Server Database Engine,
in Object Explorer, expand the server tree.
2. Expand Databases, right-click LibraryMsystem, point to Tasks, and then click Back Up....
3. Click OK.
4. When the backup completes successfully, click OK to close the SQL Server Management
Studio dialog box.
In this example, the LibraryMsystem database will be backed up to disk at a location of your choice.
1. After connecting to the appropriate instance of the Microsoft SQL Server Database Engine,
in Object Explorer, expand the server tree.
2. Expand Databases, right-click LibraryMsystem, point to Tasks, and then click Back Up....
3. On the General page, in the Destination section, select Disk from the Back up to: drop-down
list.
4. Select Remove until all existing backup files have been removed.
5. Select Add; the Select Backup Destination dialog box will open.
6. Enter a valid path and file name in the File name text box, using .bak as the extension to
simplify the classification of this file.
7. Click OK, and then click OK again to initiate the backup.
8. When the backup completes successfully, click OK to close the SQL Server Management
Studio dialog box.
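The same backup can also be taken directly in T-SQL; a minimal sketch (the destination path is an illustrative assumption):

```sql
-- Hedged sketch; the destination path is an assumption
BACKUP DATABASE LibraryMsystem
TO DISK = N'C:\Backups\LibraryMsystem.bak'
WITH NAME = N'LibraryMsystem full backup';
```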
To enable SQL Server and Windows Authentication mode:
1. In Object Explorer, right-click the server, and then click Properties.
2. On the Security page, under Server authentication, select SQL Server and Windows Authentication
mode, and then click OK.
3. In Object Explorer, right-click your server, and then click Restart. If the SQL Server Agent is
running, it must also be restarted.
Critical evaluation of the effectiveness of the system design and development against client and
system requirements
The Library management system was designed using C# as the programming language on the .NET
Framework, with Visual Studio as the IDE. Similarly, SQL Server was used as the database
management tool for storing the data. The effectiveness of the designed system was evaluated by
comparing it against the client requirements.
This library management system is mainly used by the librarian and the library admin. A normal
librarian is able to manage the member maintenance module, the book maintenance module, and the
most important module in a library, the book transaction module. Besides that, the system also
allows users to manage the publisher and lost book modules. On the other hand, the other type of
user, an admin-level staff member, can additionally handle the staff module and view the report
module.
The methodology I used to develop this system is the waterfall model. Thus, the report's chapters
cover system planning, requirement analysis, system design, programming, system testing, and
evaluation of the project. For system planning, the outcomes are the project objectives and
aims, as well as the defined project scope. Requirement analysis is the stage for gathering user
requirements, both functional and non-functional. Next, system design is mainly used to design
the user interface and the database. The stage after system design is programming, where the
coding takes place. After the coding is complete, we proceed to system testing to minimize
system bugs.
Library Management System is an application modelled on other library systems and is suitable for
use by small and medium-sized libraries. It is used by the librarian and library admin to manage the
library with a computerized system. The system was developed and designed to help the librarian
record every book transaction so that problems such as missing files or missing records do not
happen again. A barcode reader is supported by this system, so users can enjoy the convenience of
not needing to key in the barcode of the book themselves. It is convenient and time-saving, as
users can directly scan each book's barcode when a member borrows several books at one time.
Book and member maintenance modules are also included in the Library Management System. Users can
register or edit members and books in the system. With this computerized maintenance, the library
will not lose book or member records, which often happens when no computerized system is
used.
In addition, a report module is also included in the Library Management System. If the user's
position is Admin, the user is able to view different kinds of reports. The first type covers rentals
and returns, so the user can check the rental and return transactions that happened on a particular
day. Besides that, the user can check the Top 10 books borrowed by members in a day, month or
year, by category. Moreover, an activity log report is also provided, so the admin can check which
processes have been carried out, such as registering a new book, editing member information, and
login/logout activity. When a member loses a book, the Lost Book module can be used to register
the lost book and levy a fine of double the price of that book. All these modules help the
librarian manage the library more conveniently and efficiently compared to a library without a
computerized system.
Prerequisite Specifications
The Library management system has many requirements that need to be fulfilled. After analyzing
all the collected information using various methods and procedures, we explored the various
requirements needed in the library management system. The manual system had various flaws and
limitations that needed to be overcome. The users and the relational database system included the
following requirements:
A proper evaluation of the system was carried out to verify whether all the modules would work
properly. All the modules in the Library management system, such as login, admin and users,
functioned properly and smoothly. No errors or exceptions were present in the system that would
compromise the requirements of the clients. Comparing the system requirements with the client
requirements, it was clear that most of the client requirements were well addressed by the designed
system. Whenever the necessary details are filled into any form, a message pops up confirming the
successful entry of the information; entry of invalid information shows an error. This can be
clearly seen in the figures above. In the database software, the creation of separate tables and the
use of different validations, such as data type validation and constraint validation, prevented data
redundancy and anomalies, aiding the normalization process. Referential integrity has been
maintained using foreign keys. All the tables are linked with each other and duplication is
prohibited. Data consistency and integrity are routinely examined using various maintenance tools,
and security measures are applied well. All such activities provide a proper demonstration of
system effectiveness, fulfilling the user and system requirements successfully. The results obtained
from the designed system show that the objectives have been reached; hence the system allows
admins, members and users to communicate in a better way about books and other library
materials. The different modules meet the expectations of the administration and fulfil all the
requirements as planned.
After a detailed examination of the user and system requirements, the software was designed to
keep the necessary information of the system. The benefits of the system are:
• The system is secure, as a correct username and password are required to access it.
• Different tables and modules are available to keep the records of books, members, subjects,
fines and accession mapping, so there is data consistency and integrity.
• It is easy to access, as both user and technical documentation are available along with the
designed software.
• The designed software is free of the old flaws and there is no redundancy.
• Generation of reports and storage of details for future reference are possible.
Measure of Performance
The performance of the system has been measured based on various factors, described below:
i. Speed
The Library management system works at normal speed with good accuracy and efficiency. The
records are saved in the database instantly after the confirmation message is displayed.
ii. Reliability
The software can operate with different hardware and software. The application is reliable, as the
records of fines, books, members, accession mapping and subjects can be stored smoothly.
iii. Security
The designed application contains security features for protection against unauthorized access,
malicious programs, threats, and accidental or intentional harm. There is provision for backup and
recovery measures to protect the data and keep it safe for future use. Similarly, a correct username
and password are required to get access to the dashboard of the system. A user authentication
policy is in place, and views are created in the database software for protection. Various data
validation and security mechanisms are used to maintain data integrity and consistency.
Conclusion:
In this way, in Part 2, I have designed a database management system using a relational model to
meet client requirements and developed the system on a suitable platform. I have produced a design
for a relational database management system to meet client requirements and analyzed how the
design will optimize system performance. I have developed a fully functional system which meets
client and system requirements using an open source language, and then tested the system for
functionality and performance. I have implemented effective features in the solution to handle
concurrency, security, user authorizations and data recovery. Finally, I have critically evaluated the
effectiveness of the system design and development against client and system requirements.
Part 3
Create a lab report: Demonstrate the system administration and management tools available
on the chosen platform
• Demonstrate the tools available in the system to monitor and optimize system performance,
and examine the audit logs.
• Demonstrate the tools available in the system to manage security and authorizations.
• Assess the effectiveness of the system administration and management tools available on the
platform identifying any shortcomings of the tools.
• Assess any future improvements that may be required to ensure the continued effectiveness of
the database system.
Introduction:
Cosmos International College, a newly established educational organization established in the heart
of Mahendrapul, Pokhara provides the courses related to Management. Since there are limited
number of students they are maintaining all the information in a excel file. But currently they have
set up the library too and there is large number of books in the library. So the company has decided
to develop a web application to manage its library. The following are the requirements of the
software for managing the library:
• An appropriate book entry system with relevant information about the books and their categories.
• Appropriate tracking of the books which have been issued and the books that need to be returned.
• Appropriate search facilities for the books, members, books issued and books returned.
• An appropriate dashboard for the admin which gives an overview of how many book categories exist,
the total number of available books, the total number of members, the total number of books issued,
and the total penalty collected over the fiscal year.
• Secure and effective login system for the admin and the librarian with admin having full right to
access the system and the librarian being able only to view stock, issue books and accept book
returns.
In this Part 3, I am going to create a lab report. In the lab report, I will demonstrate the system
administration and management tools available on the chosen platform: I will demonstrate the tools
available in the system to monitor and optimize system performance and examine the audit logs,
and then demonstrate the tools available in the system to manage security and authorizations. I will
then assess the effectiveness of the system administration and management tools available on the
platform, identifying any shortcomings of the tools. Finally, I will assess any future improvements
that may be required to ensure the continued effectiveness of the database system.
Demonstration of the tools available in the system to monitor and optimize system
performance, and examine the audit logs
System Monitoring and optimizing tools
SQL Server works with objects and counters, with each object comprising one or more counters. For
example, the SQL Server Locks object has counters called Number of Deadlocks/sec or Lock
Timeouts/sec.
• Access Methods – Full Scans/sec: higher numbers (> 1 or 2) may mean you are not using
indexes and are resorting to table scans instead.
• Buffer Manager – Buffer Cache Hit Ratio: the percentage of requests serviced by the data
cache. When the cache is properly used, this should be over 90%. The counter can be improved
by adding more RAM.
• Memory Manager – Target Server Memory (KB): indicates how much memory SQL Server
"wants". If this is the same as the SQL Server: Memory Manager – Total Server Memory
(KB) counter, then you know SQL Server has all the memory it needs.
• Memory Manager – Total Server Memory (KB): how much memory SQL Server is actually
using. If this is the same as SQL Server: Memory Manager – Target Server Memory (KB),
then SQL Server has all the memory it wants; if smaller, SQL Server could benefit from
more memory.
• Locks – Average Wait Time: this counter shows the average time needed to acquire a lock.
This value should be as low as possible. If it is unusually high, you may need to look for
processes blocking other processes. You may also need to examine your users' T-SQL
statements, and check for any other I/O bottlenecks.
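These counters can also be read from T-SQL rather than Performance Monitor; a sketch (the counter names are as exposed by the DMV):

```sql
-- Hedged sketch: reading memory and buffer counters from the DMV
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Buffer cache hit ratio',
                       N'Target Server Memory (KB)',
                       N'Total Server Memory (KB)');
```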
Error Log
The Error Log, the most important log file, is used to troubleshoot system problems. SQL Server
retains backups of the previous six logs, naming each archived log file sequentially. The current error
log file is named ERRORLOG. To view the error log, which is located in the
%ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory, open SSMS, expand
a server node, expand Management, and click SQL Server Logs.
2. In the Configure SQL Server Error Logs dialog box, choose from the following options:
   a. Log files count
   b. Log file size
3. When the error log is configured, expand the Management section, right-click SQL Server Logs,
select View, and then choose SQL Server Log.
4. The Log File Viewer appears with a list of the logs we want to view.
Extended Events
Extended Events gives you the ability to monitor and collect different events and system information
from SQL Server and correlate these events for later analysis. It is much easier to perform analysis
using Extended Events as all the data is collected in a single source. Also, as newer features have
been added to SQL Server over the years, such as AlwaysON and ColumnStore indexes as examples,
events to monitor these features are only available in extended events and are not available in SQL
Profiler or SQL Trace. SQL Server 2016 has over 1320 Extended Events whereas every version
since SQL Server 2008 has had only 180 SQL Trace events. Also, since it was designed from
scratch, monitoring using Extended Events has been shown to have less resource overhead than its
older counterparts.
The steps to create an Extended Events session are:
1. In Object Explorer, expand the instance of SQL Server, then expand Management; Extended
Events can be seen.
2. Expand Extended Events; the sessions can be seen.
3. Right-click Sessions and select New Session Wizard.
4. The Session Wizard helps us create the events for tuning, troubleshooting and performance
analysis.
5. In Set Session Properties, specify a name for the session.
6. In Choose Template, select the "do not use a template" option and press Next.
7. In Select Events, choose the events we want to capture.
8. In Capture Global Fields, select the fields to be included when we monitor activities.
9. In Set Session Event Filters, we can apply filters.
10. Session storage allows you to store the events in a file.
11. The summary displays all the options set.
12. The final wizard window shows the success message.
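An equivalent session can also be defined directly in T-SQL; a minimal sketch (the session name, the chosen event, and the target file name are illustrative assumptions):

```sql
-- Hedged sketch: a simple Extended Events session; names are assumptions
CREATE EVENT SESSION [MonitorStatements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'MonitorStatements.xel');

ALTER EVENT SESSION [MonitorStatements] ON SERVER STATE = START;
```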
SQL Server Profiler
While SQL Server Profiler is a robust tool, many of its features are being deprecated by Microsoft,
because most developers and DBAs feel a server-side trace is a more robust option.
Profiler works by giving DBAs and developers a high-level view of the operation of a system. Users
create traces to capture data and monitor errors and other problems. They then use the profiler to
store, retrieve, and view the results of many traces graphically for purposes of troubleshooting and
repair. This all happens on the client side, meaning it uses resources on the same machine it is
monitoring.
The ways to start a trace are:
1. In the SQL Server Management Studio Tools menu, click SQL Server Profiler.
2. In Query Editor, right-click and then select Trace Query in SQL Server Profiler, and click Run.
3. In Activity Monitor, click the Processes pane, right-click the process that you want to profile,
and then click Trace Process in SQL Server Profiler.
Audit Log
An audit log is a chronological record of security-relevant data that documents the sequence of
activities affecting an operation, procedure, event, file or document. Audit logs are used to track the
date, time and activity of each user, including the pages that have been viewed. Audit logs vary
between applications, devices, systems, and operating systems but are similar in that they capture
events that can show “who” did “what” activity and “how” the system behaved. An administrator or
developer will want to examine all types of log files to get a complete picture of normal and
abnormal events on their network. A log file event will indicate what action was attempted and if it
was successful. This is critical to check during routine activities like updates and patching, and to
determine when a system component is failing or incorrectly configured.
Demonstration of the tools available in the system to manage security and authorizations.
Security can be one of the most complex issues to contend with when managing a SQL Server
instance, yet it’s also one of the most important, especially when sensitive and personal data are on
the line. In fact, for many organizations, security is their number one priority, which should come as
no surprise, given what’s at stake.
Fortunately, SQL Server includes a variety of tools for protecting data from theft, destruction, and
other types of malicious behavior. This section introduces many of these tools, with the goal of
providing an overview of the options available for safeguarding the data in a SQL Server instance.
Authentication and authorization are achieved in SQL Server through a combination of security
principals, securables, and permissions. Before I get into these, however, it’s important to note that
SQL Server supports two authentication modes: Windows Authentication, sometimes referred to
as integrated security, and SQL Server and Windows Authentication, sometimes referred to
as mixed mode.
Windows authentication is integrated with Windows user and group accounts, making it possible to
use a local or domain Windows account to log into SQL Server. When a Windows user connects to a
SQL Server instance, the database engine validates the login credentials against the Windows
principal token, eliminating the need for separate SQL Server credentials. Microsoft recommends
that you use Windows Authentication whenever possible.
In some cases, however, you might require SQL Server Authentication. For example, users might
connect from non-trusted domains, or the server on which SQL Server is hosted is not part of a
domain, in which case, you can use the login mechanisms built into SQL Server, without linking to
Windows accounts. Under this scenario, the user supplies a username and password to connect to the
SQL Server instance, bypassing Windows Authentication altogether.
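Under SQL Server Authentication, a login and a corresponding database user can be created as in this sketch (the login, user and table names and the password are illustrative assumptions):

```sql
-- Hedged sketch; all names and the password are assumptions
CREATE LOGIN librarian_login WITH PASSWORD = 'Str0ng!Passw0rd';
GO
USE LibraryMsystem;
CREATE USER librarian FOR LOGIN librarian_login;
-- grant the librarian only the rights the scenario requires
GRANT SELECT, INSERT, UPDATE ON dbo.Book TO librarian;
```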
Examples of the security tooling available in SSMS include:
• Working with the security features in Object Explorer
• Viewing a user's permissions granted on a schema object
• Viewing a security policy definition in SSMS
• Running the Always Encrypted wizard
• Accessing the Surface Area Configuration facets for a SQL Server 2017 instance
• Viewing a SQL Vulnerability Assessment report
• Viewing the audited actions defined in a database audit specification
Encryption is not an access-control mechanism, that is, it does not prevent unauthorized users from
accessing data. However, encryption can limit the exposure of sensitive data should unauthorized
users manage to break through SQL Server’s access-control defenses. For example, if cybercriminals
acquire encrypted credit card information from a database, they will not be able to make sense of that
data unless they’ve also figured out a way to decrypt it.
SQL Server supports several approaches to encryption to accommodate different types of data and
workloads. For example, you can encrypt data at the column level by taking advantage of SQL
Server’s built-in encryption hierarchy and key management infrastructure. Under this model, each
layer encrypts the layer below it, using a layered architecture made up of a public key certificate and
several symmetric keys. In this way, the column data is always protected until it is specifically
decrypted.
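The column-level encryption hierarchy described above can be sketched as follows (the key, certificate, table and column names are illustrative assumptions, and the encrypted column is assumed to be varbinary):

```sql
-- Hedged sketch of SQL Server column-level encryption; names are assumptions
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!MasterKey';
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Credit card protection';
CREATE SYMMETRIC KEY CardKey WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;

-- encrypt an assumed CardNo column into an assumed varbinary CardNoEnc column
OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
UPDATE Payments SET CardNoEnc = EncryptByKey(Key_GUID('CardKey'), CardNo);
CLOSE SYMMETRIC KEY CardKey;
```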
Assessment of the effectiveness of the system administration and management tools available
on the platform, identifying any shortcomings of the tools
Effectiveness of database management tools
Various database management tools have been used for designing the database and securing it.
The database maintenance tools have been used to check for faults and errors in the database, to
check that it runs smoothly, and to back up files for future use. The database security tools have been
used to protect the database and stored procedures from unauthorized access and from intentional or
accidental threats. The database maintenance tools used encompass transactions, triggers, stored
procedures, scheduled jobs and various database-monitoring tools. The database security tools used
in the system include database backup and recovery, and authentication and authorization. These
tools perform their specific operations and help the database run smoothly. However, they have
their own shortcomings, which must be addressed for the proper design and development of the
application.
Stored Procedures
• Security: stored procedures not only secure the data and access code but also apply
security within the application code. They also limit direct access to tables; securing our
data is exactly what stored procedures do.
• Testing: we can test a stored procedure without any dependency on the application.
• Speed: stored procedures are fast because they are kept in the cache memory, so we
don't need to parse them from scratch every time; we can reuse them through this
cache on the server.
• Replication: we can replicate a stored procedure from one database to another. Also, we
can revise the policies on a central server rather than on individual servers.
Transactions
The primary benefit of using transactions is data integrity. Many database uses require storing data to
multiple tables, or multiple rows to the same table in order to maintain a consistent data set. Using
transactions ensures that other connections to the same database see either all the updates or none of
them. This also applies in the case of interrupted connections: if the power goes off in the middle
of a transaction, the database engine will roll back the transaction so it is as if it had never started. If
each statement were committed independently, other connections might see partial updates, and
there would be no opportunity for automatic rollback on error.
A secondary benefit of using transactions is speed. There is often an overhead associated with
actually committing the data to the database. If you've got 1000 rows to insert, committing after
every row can cause quite a performance hit compared to committing once after all the inserts. Of
course, this can work the other way too — if you do too much work in a transaction then the
database engine can consume lots of space storing the not-yet-committed data or caching data for use
by other database connections in order to maintain consistency, which causes a performance hit. As
with every optimisation, if you're changing the boundaries of your transactions to gain performance,
then it is important to measure both before and after the change.
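The speed point can be sketched as follows, committing once after a batch of inserts rather than once per row (the table and column names are illustrative assumptions):

```sql
-- Hedged sketch: one commit for the whole batch instead of a commit per row
BEGIN TRANSACTION;
INSERT INTO Book (title) VALUES ('Database Systems');
INSERT INTO Book (title) VALUES ('Operating Systems');
INSERT INTO Book (title) VALUES ('Computer Networks');
-- ... remaining rows ...
COMMIT;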
Triggers
1) Triggers can be used as an alternative method for implementing referential integrity constraints.
2) By using triggers, business rules and transactions are easy to store in database and can be used
consistently even if there are future updates to the database.
3) When a change happens in a database, a trigger can apply the corresponding change across the entire database.
SQL Profiler
• Clarity: it can reveal how an instance works when it is interacting with a client.
• Troubleshooting: it can help zero in on trouble spots by allowing us to capture and
replay key events. This function also helps with stress testing and identifying slowly
executing queries.
• Secure tracing for non-administrator users: it can cater to the needs of DBAs,
developers, database designers, business intelligence specialists, IT professionals, and even
accountants.
• Comparing activity to baselines: it lets users save trace data and compare it to newer data to
spotlight new trouble spots.
• Capturing traces for Transact-SQL, SSIS, and Analysis Services.
Saves Time- Manual backup requires at least one person to do the job. Since remote data backup involves automation, you
won’t need to worry about taking the time to back it up with a CD or a USB drive, and you’ll
always know where the backups are.
Greater Security- When you employ remote data backup, you store your data in a secure
location, making it physically safe. This is typically done via advanced encryption tools that
are used at both the hardware and software level.
Saves Money- Think about all of the equipment you need when you manually back up your
data. You’ll need a lot of physical storage solutions for your computers, and if you have
many computers with large amounts of data, that can be a costly and burdensome solution.
Transactions
We want to keep transactions short: begin them as late as possible and end them as early as
possible. Otherwise concurrency suffers, and we get more lock waits and deadlocks.
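A minimal sketch of this pattern (Python's sqlite3 module; the `log` table and `record` helper are illustrative): any slow preparation happens before the transaction opens, so locks are held only for the actual writes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")
conn.commit()

def record(messages):
    # Prepare (and validate) the data *before* opening the transaction,
    # so locks are held only for the duration of the writes.
    rows = [(m.strip(),) for m in messages]      # slow work stays outside
    with conn:                                   # begin as late as possible
        conn.executemany("INSERT INTO log VALUES (?)", rows)
    # transaction ends here: locks released as early as possible

record(["  started ", "done "])
print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 2
```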
Triggers
1) It is easy to view table relationships, constraints, indexes and stored procedures in a database,
but triggers are difficult to view.
2) Triggers execute invisibly to the client application. They cannot be seen or traced while
debugging code.
3) It is hard to follow their logic, as they can be fired before or after the database insert/update
happens.
4) It is easy to forget about triggers, and if there is no documentation it will be difficult for
new developers to discover that they exist.
5) Triggers run every time the database fields are updated, which is an overhead on the system.
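Points 1 and 4 can be partly mitigated by querying the schema catalogue. A sketch in SQLite (via Python's sqlite3 module; the `items` table and trigger are illustrative) that lists every trigger and the table it fires on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL);
CREATE TRIGGER items_price_check
BEFORE INSERT ON items
WHEN NEW.price < 0
BEGIN
    SELECT RAISE(ABORT, 'negative price');
END;
""")

# List every trigger and its definition from the schema catalogue, so
# "hidden" triggers are at least easy to discover and document.
for name, table, sql in conn.execute(
        "SELECT name, tbl_name, sql FROM sqlite_master "
        "WHERE type = 'trigger'"):
    print(f"{name} on {table}")
```

In SQL Server the equivalent catalogue view is `sys.triggers`.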
SQL Profiler
SQL Server Profiler is a GUI that uses SQL Server Trace from the client side. Because of this,
it can have a mild to severe performance impact depending on the environment.
SQL Server Profiler is suited mainly for a quick glimpse at what is happening on the server
(provided the database server can handle the extra latency). It is not intended to be run for long
periods of time. For longer traces, use a server-side trace or Extended Events.
Assessment of any future improvements that may be required to ensure the continued
effectiveness of the database system
These are various ways in which we can improve the overall performance of our database
management system.
We should upgrade our motherboards to improve the performance of our database management
system. Motherboards are the backbone of any computer: upgrading them allows a faster
CPU, faster RAM, faster data transfer and more. A high-end system is highly recommended
to improve the performance of the DBMS.
Using separate disks to store our files. The saying "never put all your eggs in one basket"
very much applies here. When setting up a database management system, we need to use a set
of drives to avoid slowing the system down by keeping everything on a single disk. Besides
slow performance, we also risk losing our files in events such as hacking, or physical
incidents like theft and fire. The separate disks can act as a backup from which we can
recover lost files.
Optimizing the cache feature. Another way of ensuring that our database management system
is not prone to slow performance is to set a reasonable cache size. We can start with 10 MB
and increase it, keeping it below around 200 MB, depending on our database performance.
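How the cache size is set is engine-specific. As one concrete sketch, SQLite exposes it through `PRAGMA cache_size` (shown here via Python's sqlite3 module), where a negative value is interpreted as a size in KiB, so -10000 requests roughly the 10 MB starting point suggested above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Negative cache_size means "size in KiB": -10000 is roughly a 10 MB cache.
conn.execute("PRAGMA cache_size = -10000")
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -10000
```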
Investing in a stronger Central Processing Unit. If we want to have an efficient DBMS, we
can achieve this by ensuring that we have a powerful CPU. With such a CPU, we can easily
multitask or run multiple applications without the system slowing down or stalling running
processes. An underperforming CPU could render our system useless, so if it starts having
performance issues we should consider upgrading it.
Working on the indexing strategies. Our data structures should be designed in a way that
allows us to select and manipulate rows efficiently. Indexing is a key strategy for tuning
the database, giving records a systematic order for fast access.
Paying attention to the network metrics. This is yet another database solution: we should
ensure that our routers, cables and network interfaces are operating just as they should,
failing which the DBMS could slow down or shut down altogether.
(small-bizsense, n.d.)
Conclusion:
In this lab report I have demonstrated the system administration and management tools
available on the chosen platform, demonstrated the tools available for monitoring and
optimizing system performance and examining the audit logs, and demonstrated the tools
available for managing security and authorizations. I have then assessed the effectiveness
of the system administration and management tools available on the platform, identifying
any shortcomings of the tools, and finally assessed any future improvements that may be
required to ensure the continued effectiveness of the database system.
References
geeksforgeeks, n.d. geeksforgeeks. [Online]