DBMS Assignment
Contents
Part 1: Produce presentation slides which analyze different types of database management systems
• Assess how relational database models and the process of normalization can provide reliable and efficient data structures
Introduction
Database models
Normalization
Justification
Conclusion
Part 2: Design a database management system using a relational model to meet client requirements and develop a database management system using a suitable platform
• Produce a design for a relational database management system to meet client requirements
• Develop a fully functional system which meets client and system requirements, using an open source language (with an application software e.g. MySQL with front end Microsoft Access)
• Critically evaluate the effectiveness of the system design and development against client and system requirements
Introduction
Anomalies
ER Diagram
Data dictionaries
Testing
Importance of Testing
Comparing system design and system analysis against system and user requirement
Part 3: Create a lab report: Demonstrate the system administration and management tools available on the chosen platform
• Demonstrate the tools available in the system to monitor and optimize system performance, and examine the audit logs
• Demonstrate the tools available in the system to manage security and authorizations
• Assess the effectiveness of the system administration and management tools available on the platform identifying any shortcomings of the tools
• Assess any future improvements that may be required to ensure the continued effectiveness of the database system
Lab Report on: Demonstrating the system administration and management tools available
Title
Introduction
Materials
Data
Part 1: Produce presentation slides which analyze different types of database management systems.
• Assess how relational database models and the process of normalization can provide reliable and efficient data structures.
A. Object Oriented:
(SearchSQLServer 2019) This model defines a database as a collection of objects, or
reusable software elements, with associated features and methods. There are several kinds
of object-oriented databases: A multimedia database incorporates media, such as images,
that could not be stored in a relational database. A hypertext database allows any object to
link to any other object. It’s useful for organizing lots of disparate data, but it’s not ideal
for numerical analysis.
The object-oriented database model is the best known post-relational database model, since
it incorporates tables, but isn’t limited to tables. Such models are also known as hybrid
database models.
B. Relational Model:
(SearchSQLServer 2019) The most common model, the relational model sorts data into
tables, also known as relations, each of which consists of columns and rows. Each column
lists an attribute of the entity in question, such as price, zip code, or birth date. Together,
the attributes in a relation are called a domain. A particular attribute or combination of
attributes is chosen as a primary key that can be referred to in other tables, when it’s called
a foreign key.
Each row, also called a tuple, includes data about a specific instance of the entity in
question, such as a particular employee.
The model also accounts for the types of relationships between those tables, including one-to-one, one-to-many, and many-to-many relationships.
D. Network Model:
(SearchSQLServer 2019) The network model builds on the hierarchical model by allowing
many-to-many relationships between linked records, implying multiple parent records.
Based on mathematical set theory, the model is constructed with sets of related records.
Each set consists of one owner or parent record and one or more member or child records.
A record can be a member or child in multiple sets, allowing this model to convey complex
relationships.
Hierarchical Model | Network Model | Relational Model
A structure of data organized in a tree-like model using parent-child relationships. | A database model that allows multiple records to be linked to the same owner file. | A database model that manages data as tuples grouped into relations (tables).
Arranges data in a tree-like structure. | Organizes data in a graph structure. | Arranges data in tables.
Represents "one to many" relationships. | Represents "many to many" relationships. | Represents both "one to many" and "many to many" relationships.
Difficult to access data. | Easier to access data. | Easier to access data.
Less flexible. | Flexible. | Flexible.
Comparison between Object oriented and Relational model:
(SearchSQLServer 2019) To the best of my knowledge, a relational database stores data in tables consisting of rows and columns. Each table requires a primary key to identify different records, and the relationships between data entries are represented by columns (attributes) which we call foreign keys. When we design a relational data model, such as the ER model, we only consider the relationships between the data, which we call entities. Because the data in a relational database is stored in separate tables, when we want to do query processing we need to join those tables together, which can have a high cost if the amount of data in each table is large.
However, a relational database cannot represent real-world data very well, because data in the real world is more complex and cannot be represented with relationships alone. So, the object-oriented database emerged.
An object-oriented database applies the same concepts as object-oriented programming languages like Java. It can represent more complex data because it encapsulates inheritance and the operations of the data object. Each piece of data is stored as an object. An object-oriented database can handle greater data complexity, but its search capability is lower.
Normalization
(SearchSQLServer 2019) Database normalization is the process of organizing data into tables in
such a way that the results of using the database are always unambiguous and as intended. Such
normalization is intrinsic to relational database theory. It may have the effect of duplicating data
within the database and often results in the creation of additional tables.
By normalizing a database, we can arrange data into tables and columns, ensuring each table contains only related data. If data is not directly related, we have to create a new table for that data. For example, if there is a customer table we would normally create a product table for the products they have ordered, and another table is created to store each order made by the customer; these tables are linked by their primary keys, which helps to easily update, delete and search the relevant data across tables in the database.
Benefits of Normalization
- Minimizes data redundancy (duplicate data).
- Minimizes null values.
- Results in a more compact database (due to less data redundancy/null values).
- Minimizes/avoids data modification issues.
- Simplifies queries.
- The database structure is cleaner and easier to understand. We can learn a lot about a relational database just by looking at its schema.
- We can extend the database without necessarily impacting the existing data.
- Searching, sorting, and creating indexes can be faster, since tables are narrower, and more rows
fit on a data page.
The schema shown alongside separates the database into three different tables, each holding its own data: the Assignment, Student and Teacher tables. Because an RDBMS helps us define the relationships between tables, we can find out which table references which through its keys.
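As a rough, hedged sketch of what such a normalized schema could look like in SQL (the table and column names are assumed for illustration and are not taken from the actual project design), the three tables might be declared as follows:

-- Hypothetical normalized schema: teacher, student and assignment data split
-- into separate tables linked through primary and foreign keys.
CREATE TABLE Teacher (
    Teacher_Id   INT PRIMARY KEY,
    Teacher_Name VARCHAR(50)
);

CREATE TABLE Student (
    Student_Id   INT PRIMARY KEY,
    Student_Name VARCHAR(50)
);

CREATE TABLE Assignment (
    Assignment_Id INT PRIMARY KEY,
    Title         VARCHAR(50),
    Date_Released DATE,
    Teacher_Id    INT REFERENCES Teacher (Teacher_Id),   -- foreign key to Teacher
    Student_Id    INT REFERENCES Student (Student_Id)    -- foreign key to Student
);

Each teacher and student name is stored only once, and the assignment table refers to them by key instead of repeating the names.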
If we de-normalize the above tables we get:
Id | Teacher Name | Assignment | Date Released | Student name
1 | Navaraj Bhandari | Database | 29th Sep 2019 | Anoop
2 | Navaraj Bhandari | Project Management | 29th Sep 2019 | Anoop
3 | Binod Saha | Programming | 29th Sep 2019 | Anoop
4 | Rahul Kumar | Java | 29th Sep 2019 | Anoop
In the above database the teacher name and student name are repeated, which requires more storage space than a normalized database. It can also lead to problems when inserting, deleting and updating the database.
Basically, it can result in the following three anomalies (these are problems that can occur due to a poorly planned, unnormalized database where all the data is stored in a single table):
Update anomaly: If we need to update a teacher's name, we'll need to update multiple rows. This could result in errors. If we update some rows, but not others, we'll end up with inaccurate data. This is known as an update anomaly.
Insertion anomaly: It is possible that some teacher has not released an assignment yet, so in this case we cannot add the teacher's name unless we set three fields to null. When the teacher finally releases an
assignment, we might insert a new row, but in this case we'll end up with two rows for that teacher – one of which is pretty much useless, and could cause confusion. While we could write scripts in an attempt to deal with this scenario, it's not an ideal situation, and the scripts themselves could contain errors.
Deletion anomaly: If we need to delete an assignment, we cannot do it without deleting the teacher. If a teacher has only one assignment and we delete that assignment, we will end up deleting the teacher from our database, and we won't have any record relating to that teacher.
So, we have to normalize the database carefully to avoid such anomalies in our database.
Levels of Normalization
1. UNF (Unnormalized Form):
A database is in UNF if it is not normalized at all.
Teacher Name | Student Name | Subject Name | Credit | Release Date | Deadline | Submitted Date
Nawarj Bhandari | Subash Lama, Subash Dhakal | DBMS | 4 | 25th September, 2019 | 28th December, 2019 | 28th December, 2019
Binod Saha | Samrit Basnet | Programming | 5 | 25th September, 2019 | 28th December, 2019 | 28th December, 2019
Rahul Kumar | Susan Thapa, Saurav Adhikari | Java | 4 | 25th September, 2019 | 28th December, 2019 | 28th December, 2019
Savita Havalkod | Suresh Dhakal | AI | 5 | 25th September, 2019 | 28th December, 2019 | 28th December, 2019
The above database table is not normalized at all, as there are multiple values in the student name attribute. So, we have to take it to the First Normal Form to normalize it.
2. 1NF (First Normal Form):
As the above table is normalized so that each record has a unique id and none of the attributes contains multiple values, the table is in first normal form.
3. 2NF (Second Normal Form):
Rules:
• It should be in first normal form
• It should not have partial dependency
Teacher ID | Teacher Name | Subject ID | Receive Date | Feedback Date
01 | Nawaraj Bhandari | 110 | 29th December, 2019 | 20th January, 2020
02 | Binod Saha | 111 | 29th December, 2019 | 20th January, 2020
03 | Rahul Kumar | 112 | 29th December, 2019 | 20th January, 2020
04 | Savita Havalkod | 113 | 29th December, 2019 | 20th January, 2020
From the above table I have removed all the columns that are not dependent on the primary key, which illustrates that the table is in second normal form.
4. 3NF (Third Normal Form):
A table is said to be in third normal form if the relation fulfills the requirements of the second normal form and, in addition, all transitive dependencies are removed from the table. Third normalization is done mostly to reduce data dependency in the database; attributes that are not dependent on the primary key are moved out of the table.
Rules:
• It is in the Second Normal form
• There shouldn’t be Transitive Dependency
Teacher ID | Teacher Name | Receive Date | Feedback Date
01 | Nawaraj Bhandari | 29th December, 2019 | 20th January, 2020
02 | Binod Saha | 29th December, 2019 | 20th January, 2020
03 | Rahul Kumar | 29th December, 2019 | 20th January, 2020
04 | Savita Havalkod | 29th December, 2019 | 20th January, 2020
In the above table all the transitive dependencies have been removed, hence the above table is in third normal form.
(Tutorialspoint.com 2019) A database is an organized collection of data that is stored and accessed electronically from a computer system. To store data, different types of databases such as network, hierarchical and relational databases are used, depending on the requirements of the project.
Having different types of database management systems available, I prefer to use a relational database management system for my project. A relational database is a type of database that stores and provides access to data points that are related to one another. The relational model means that the logical data structures (the data tables, views, and indexes) are separate from the physical storage structures. The main reason behind choosing an RDBMS is security. All database models have their own benefits in their own fields. In an RDBMS, data is stored in multiple tables, which makes it easier for users to access the data. Those tables are connected with each other with the help of primary keys and foreign keys. The basis of forming relational database tables is normalization, so that data redundancy and data duplication in the database are avoided.
• Data Model:
A data model is the logical structure of a database, defining how data is connected and how it is processed and stored inside the system. For that type of modeling an RDBMS provides a clear view of the data flow in the system.
• Data consistency and data security:
The most valuable thing for any project is data security. In an RDBMS, data is encrypted and stronger protections such as data encapsulation, data protection and data hiding are used to secure the data.
• Manageability:
For starters, an RDB is easy to manipulate. Each table of data can be updated without disrupting
the others. We can also share certain sets of data with one group, but limit their access to others
such as confidential information about employees.
• Flexibility:
If you need to update your data, you only have to do it once – so no more having to change
multiple files one at a time.
And it’s pretty simple to extend your database. If your records are growing, a relational
database is easily scalable to grow with your data.
• Avoid Errors: There’s no room for mistakes in a relational database because it’s easy to check
for mistakes against the data in other parts of the records. And since each piece of information
is stored at a single point, you don’t have the problem of old versions of data clouding the
picture.
• Implementation and service cost: An RDBMS is used for higher-level and middle-level projects whose completion budgets are larger, so the cost of implementing an RDBMS is higher.
Justification
(Tutorialspoint.com 2019) A relational database management system is the better and right database system for this kind of project because the data is stored in a structured form of rows and columns. If we use an RDBMS for data storage we get more security for the data. In an RDBMS, the data that needs to be stored is kept in tables. By using an RDBMS we can also create multiple users; in my assignment submission project we have to create a different user for each teacher, which is only possible with an RDBMS. The main reason for using an RDBMS to store data is its support for ACID (Atomicity, Consistency, Isolation and Durability).
Comparison between DBMS and RDBMS:
DBMS | RDBMS
DBMS applications store data as files. | RDBMS applications store data in a tabular form.
Data is generally stored in either a hierarchical form or a navigational form. | The tables have an identifier called a primary key and the data values are stored in the form of tables.
Normalization is not present. | Normalization is present.
Does not support distributed databases. | Supports distributed databases.
Uses a file system to store data, so there will be no relation between the tables. | Data values are stored in the form of tables, so a relationship between these data values will be stored in the form of a table as well.
Examples of DBMS are file systems, XML etc. | Examples of RDBMS are MySQL, PostgreSQL, SQL Server, Oracle etc.
Conclusion:
In this part I have compared and contrasted the different types of database models, shown how relational database models and the process of normalization can provide reliable and efficient data structures, and finally critically evaluated the different database management systems available in relation to open source and vendor-specific platforms, justifying the criteria used in the evaluation.
Part 2: Design a database management system using a relational model to meet client
requirements and develop a database management system using a suitable platform.
• Develop a fully functional system which meets client and system requirements, using
an open source language (with an application software e.g. MySQL with front end
Microsoft Access)
• Critically evaluate the effectiveness of the system design and development against
client and system requirements.
Introduction:
In this part I am going to produce a design for a relational database management system to meet client requirements and analyze how the design will optimize system performance. I am also going to develop a fully functional system which meets client and system requirements, using an open source language (with an application software e.g. MySQL with front end Microsoft Access), test the system for functionality and performance, and implement effective features in the solution to handle concurrency, security, user authorizations and data recovery. Finally, I am going to critically evaluate the effectiveness of the system design and development against client and system requirements.
Anomalies:
Jhigh.co.uk. (2019) Anomalies are problems that can occur in poorly planned, un-normalized databases where all the data is stored in one table (a flat-file database).
For example: if no foreign key constraint is declared, values can be added to the primary table that have no relationship to the values of the related table.
There are three different types of anomalies:
There are three different types of the Anomalies they are:
• Update Anomalies:
Jhigh.co.uk. (2019) It happens when the person charged with the task of keeping all the records current and accurate is asked, for example, to change an employee's title due to a promotion. If the data is stored redundantly in the same table and the person misses any of the copies, then there will be multiple titles associated with the employee. The end user has no way of knowing which is the correct title.
• Insertion Anomalies:
Jhigh.co.uk. (2019) It happens when inserting vital data into the database is not possible
because other data is not already there. For example, if a system is designed to require that
a customer be on file before a sale can be made to that customer, but you cannot add a
customer until they have bought something, then you have an insert anomaly. It is the
classic "catch-22" situation.
• Deletion Anomalies:
Jhigh.co.uk. (2019) It happens when the deletion of unwanted information causes desired
information to be deleted as well. For example, if a single database record contains
information about a particular product along with information about a salesperson for the
company and the salesperson quits, then information about the product is deleted along
with salesperson information.
Example of anomaly
The schema shown alongside separates the database into three different tables, each holding its own data: the Assignment, Student and Teacher tables. Because an RDBMS helps us define the relationships between tables, we can find out which table references which through its keys.
If we de-normalize the above tables we get:
Id | Teacher Name | Assignment | Date Released | Student name
1 | Navaraj Bhandari | Database | 29th Sep 2019 | Anoop
2 | Navaraj Bhandari | Project Management | 29th Sep 2019 | Anoop
3 | Binod Saha | Programming | 29th Sep 2019 | Anoop
4 | Rahul Kumar | Java | 29th Sep 2019 | Anoop
In the above database the teacher name and student name are repeated, which requires more storage space than a normalized database. It can also lead to problems when inserting, deleting and updating the database.
• Update anomaly: If we need to update a teacher's name, we'll need to update multiple rows. This could result in errors. If we update some rows, but not others, we'll end up with inaccurate data. This is known as an update anomaly.
• Insertion anomaly: It is possible that some teacher has not released an assignment yet, so in this case we cannot add the teacher's name unless we set three fields to null. When the teacher finally releases an assignment, we might insert a new row, but then we'll end up with two rows for that teacher – one of which is pretty much useless and could cause confusion. While we could write scripts in an attempt to deal with this scenario, it's not an ideal situation, and the scripts themselves could contain errors.
• Deletion anomaly: If we need to delete an assignment, we cannot do it without deleting the teacher. If a teacher has only one assignment and we delete that assignment, we will end up deleting the teacher from our database, and we won't have any record relating to that teacher.
Importance of anomaly detection:
Jhigh.co.uk. (2019) Some of the importance of anomaly detection are:
1. Helps to reduce data redundancy and dependency in the database.
2. Provides more flexibility in the database.
3. Helps to save time.
4. Makes insertion, updating and deletion of data in the database tables easier.
Normalization table of Assignment management:
• UNF (unnormalized form)
Teacher Name | Student Name | Subject Name | Credit | Release Date | Deadline | Submitted Date
Nawarj Bhandari | Subash Lama, Subash Dhakal | DBMS | 4 | 25th September, 2019 | 28th December, 2019 | 28th December, 2019
Binod Saha | Samrit Basnet | Programming | 5 | 25th September, 2019 | 28th December, 2019 | 28th December, 2019
2. Level 1 DFD:
(Lucidchart, 2018) The Level 0 DFD is broken down into a more specific Level 1 DFD. The Level 1 DFD depicts the basic modules in the system and the flow of data among the various modules. It also mentions basic processes and sources of information.
• It provides a more detailed view of the Context Level Diagram.
• Here, the main functions carried out by the system are highlighted as we break into
its sub-processes.
3. Level 2 DFD:
(Lucidchart, 2018) A level 2 data flow diagram (DFD) offers a more detailed look at the
processes that make up an information system than a level 1 DFD does. It can be used to
plan or record the specific makeup of a system.
ER Diagram:
(Lucidchart, 2018) An entity relationship diagram (ERD) shows the relationships of entity sets
stored in a database. An entity in this context is an object, a component of data. An entity set is a
collection of similar entities. These entities can have attributes that define their properties.
By defining the entities, their attributes, and showing the relationships between them, an ER
diagram illustrates the logical structure of databases.
The entity relation diagram for assignment management system is shown below in the design:
Data dictionaries:
(Lucidchart, 2018) A data dictionary contains metadata i.e. data about the database. The data
dictionary is very important as it contains information such as what is in the database, who is
allowed to access it, where is the database physically stored etc. The users of the database normally
don't interact with the data dictionary, it is only handled by the database administrators.
Some of the points that help to reduce anomalies and redundancy are:
1. Query optimization:
The query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans. The goal of query optimization is to reduce the system resources required to fulfill a query, and ultimately to provide the user with the correct result set faster. By making queries flexible, one query can return more versatile results. This makes the database easier to manage: the code is less messy and easier for developers to understand, and by optimizing the queries we can reduce anomalies because testers and users do not get confused about the data flow of the database while fetching data. A small sketch follows below.
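As a hedged illustration (the index, table and column names are assumed and are not taken from the actual project), Oracle lets us create an index for a frequent lookup and then inspect the plan the optimizer chooses:

-- Assumed example: speed up lookups of assignments by teacher.
CREATE INDEX idx_assignment_teacher ON Assignment (Teacher_Id);

-- Ask the optimizer how it intends to execute the query ...
EXPLAIN PLAN FOR
SELECT Title, Date_Released
FROM   Assignment
WHERE  Teacher_Id = 1;

-- ... and display the chosen plan, which should now show an index range scan.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);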
2. Data fragmentation:
Dividing a table's data into smaller chunks and storing them in different databases of a DDBMS is called data fragmentation. Data fragmentation helps to manage the data in the database: with its help we can store different data in different fragments, and by examining those fragments we can check whether each kind of data has its own table, which also helps us get rid of data duplication and of the dependency of data on other tables. Another benefit is security, because fragmentation does not let unauthorized users see the full details of the data. We can also access the same data from different locations, which is possible due to data fragmentation, so it is an important factor in database design and development that helps to reduce anomalies and data dependency. A short sketch is given below.
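One common way to fragment data horizontally in Oracle is range partitioning. The sketch below is only an assumption for illustration (the table, column and partition names are made up, and partitioning is not available in every Oracle edition, e.g. Express Edition):

-- Assumed example: horizontally fragment submissions by submission date.
CREATE TABLE Submission (
    Submission_Id  INT PRIMARY KEY,
    Assignment_Id  INT,
    Student_Id     INT,
    Submitted_Date DATE
)
PARTITION BY RANGE (Submitted_Date) (
    PARTITION p_2019 VALUES LESS THAN (DATE '2020-01-01'),   -- 2019 submissions
    PARTITION p_2020 VALUES LESS THAN (DATE '2021-01-01')    -- 2020 submissions
);

Queries that filter on Submitted_Date then only touch the relevant fragment.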
3. Data de-normalization:
Denormalization is a database optimization technique in which we add redundant data to one or more tables. This can help us avoid costly joins in a relational database. Normalization is the process of managing data in the database by dividing tables into smaller forms; denormalization does not mean not doing normalization, it is an optimization technique that is applied after normalization has been done. By normalizing a database we arrange data into tables and columns, ensuring each table contains only related data, and if data is not directly related we create a new table for it. For a well-managed database, normalization is the key process, because by normalizing the data we remove data redundancy and duplication. A well-normalized database can then be selectively denormalized, and we can check whether the anomalies reappear or not through insertion, deletion and updating of the database. A small sketch follows.
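As a hedged sketch (the table and column names are assumed), a denormalized reporting table could be built once from the normalized tables so that read-heavy queries avoid the join:

-- Assumed example: a denormalized copy combining teacher and assignment data,
-- trading extra storage and update effort for faster, join-free reads.
CREATE TABLE Assignment_Report AS
SELECT a.Assignment_Id,
       a.Title,
       a.Date_Released,
       t.Teacher_Name
FROM   Assignment a
JOIN   Teacher    t ON t.Teacher_Id = a.Teacher_Id;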
4. Data security and data management:
The major thing we need to do while designing the database for the project is to provide data security and data management. If the database is not secure, unauthorized persons could breach it and interrupt the database. For better security of the database we need to provide authentication and also define different roles for handling the database. The data should be well managed by using different data management techniques such as entity relationship diagrams, normalization, use-case diagrams and activity diagrams, so that the database designer and analyst have a better understanding of the data flow in the database. A small sketch of role-based access follows.
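A minimal, hedged sketch of role-based access in Oracle SQL (the role, table and account names are assumed, not the project's real ones):

-- Assumed example: give teachers a role that can read assignments and
-- update submissions, but cannot drop or alter any tables.
CREATE ROLE teacher_role;
GRANT SELECT          ON Assignment TO teacher_role;
GRANT SELECT, UPDATE  ON Submission TO teacher_role;

-- The role is then granted to an individual (hypothetical) teacher account.
GRANT teacher_role TO nawaraj;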
Screenshot of program:
Login Form
User Management Form
Student Management Form
Teacher Management Form
Assignment Management Form
Login Table: Create table Login (username varchar (50), password varchar (50));
Teacher Table: create table teacher (Teacher_Id int primary key, Teacher_Name varchar (50), Qualification varchar (50), Received_Date Date, Assignment_Id int);
User Table: Create table user (Username varchar (50), password varchar (50));
Subject Table: create table subject (Assignment_Id int primary key, Released_Date date, Deadline date);
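Under the schema above, a hedged example of how the front end might pull a teacher together with the subject/assignment they received (purely illustrative; the Teacher_Id value is assumed):

-- Join the teacher and subject tables through the shared Assignment_Id column.
SELECT t.Teacher_Name,
       s.Released_Date,
       s.Deadline,
       t.Received_Date
FROM   teacher t
JOIN   subject s ON s.Assignment_Id = t.Assignment_Id
WHERE  t.Teacher_Id = 1;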
Declaring Username and Password
Connection Name
return "system";
}
public String Password(){
return "champ";
}
}
Creation of User:
Add
Update
Delete
Students Management:
Add
Update
Delete
Teachers Management:
Add
Update
Delete
Assignments Management:
Add
Update
Delete
Testing:
(Software Testing Material. 2015) Software testing is defined as an activity to check whether the
actual results match the expected results and to ensure that the software system is defect free. It
involves execution of a software component or system component to evaluate one or more
properties of interest.
Software testing also helps to identify errors, gaps or missing requirements in contrast to the actual requirements. It can be done either manually or using automated tools. Some classify software testing into White Box and Black Box Testing.
Importance of Testing:
(Software Testing Material. 2015) Some of the valuable points on why it is necessary for the program to be tested are listed below:
• Cost effectiveness:
Testing has many benefits and one of the most important ones is cost-effectiveness. Having
testing in your project can save money in the long run. Software development consists of
many stages and if bugs are caught in the earlier stages it costs much less to fix them. That
is why it’s important to get testing done as soon as possible. Getting testers or QA’s who
are technically educated and experienced for a software project is just like an investment
and your project will benefit budget-wise.
• Security:
Another important point to add is security. This is probably the most sensitive and yet most
vulnerable part.
There have been many situations where user information has been stolen or hackers have
gotten to it and used it for their benefit. That is the reason people are looking for trusted
products that they can rely on. As a user of many products and apps, I am always looking
for products that I would give my information to with confidence and know that it will be
safe; perhaps so do you. Our personal information and what we do with it should stay as
private as possible, especially using services where it is a vulnerability to us, for example,
banking information, security details etc.
Customer trust is not easy to earn, especially if your product is glitching and functioning only 60% of the time. As users of many products, most of us have had horrible experiences that made us delete an app and tell others not to use it. These days the market is so saturated that the first impression is important; otherwise users will find another product that meets their needs.
Types of software testing:
Basically, there are two different types of the software testing which are as follows:
• White Box testing
• Black Box testing
White Box Testing:
(Guru99.com 2019) White box testing techniques analyze the internal structures: the data structures used, the internal design, the code structure and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing or structural testing.
Working process of white box testing:
i. Input: Requirements, Functional specifications, design documents, source code.
ii. Processing: Performing risk analysis for guiding through the entire process.
iii. Proper test planning: Designing test cases so as to cover entire code. Execute rinse-repeat
until error-free software is reached. Also, the results are communicated.
iv. Output: Preparing final report of the entire testing process.
Black Box Testing:
(Guru99.com 2019) Black Box Testing, also known as Behavioral Testing, is a software testing
method in which the internal structure/design/implementation of the item being tested is not known
to the tester. These tests can be functional or non-functional, though usually functional.
This method is named so because the software program, in the eyes of the tester, is like a black
box; inside which one cannot see. This method attempts to find errors in the following categories:
i. Incorrect or missing functions
ii. Interface errors
a) Functional Testing:
(Guru99.com 2019) FUNCTIONAL TESTING is a type of software testing whereby the
system is tested against the functional requirements/specifications.
Functions (or features) are tested by feeding them input and examining the output.
Functional testing ensures that the requirements are properly satisfied by the application.
This type of testing is not concerned with how processing occurs, but rather, with the results
of processing. It simulates actual system usage but does not make any system structure
assumptions.
Typically, functional testing involves the following steps:
-Identify functions that the software is expected to perform.
In the above test, the student id should be in integer format but the input was of type varchar, so we got an error due to the wrong data input.
In the above test, the username and password that I provided were incorrect, due to which the authentication failed.
• Integration testing:
(Guru99.com 2019) Integration testing is a level of software testing where individual units
are combined and tested as a group. The purpose of this level of testing is to expose faults
in the interaction between integrated units. Test drivers and test stubs are used to assist in
Integration Testing.
In the above test I inserted the username as varchar, but it has to be an integer, due to which a data type error was found.
b) Non-functional testing:
(Guru99.com 2019) Non-functional testing is defined as a type of Software testing to check
non-functional aspects (performance, usability, reliability, etc.) of a software application.
It is designed to test the readiness of a system as per nonfunctional parameters which are
never addressed by functional testing.
An excellent example of non-functional test would be to check how many people can
simultaneously login into a software.
Non-functional testing is equally important as functional testing and affects client
satisfaction.
- Performance testing: Performance testing is the process of determining the speed,
responsiveness and stability of a computer, network, software program or device under a
workload. For the performance testing of the software I will use stress testing, which provides error information at the time of system failure. For stress testing to be successful, a system should display an appropriate error message while it is under extreme conditions.
- Compatibility Testing: Compatibility Testing is a type of Software testing to check
whether your software is capable of running on different hardware, operating systems,
applications, network environments or Mobile devices.
- Accessibility Testing: Accessibility Testing is defined as a type of Software Testing
performed to ensure that the application being tested is usable by people with disabilities
like hearing, color blindness, old age and other disadvantaged groups. It is a subset of
Usability Testing.
Concurrency Control:
(Guru99.com 2019) Concurrency control is the procedure in a DBMS for managing simultaneous operations without them conflicting with one another. Concurrent access is quite easy if all users are just reading data, as there is no way they can interfere with one another. However, any practical database has a mix of READ and WRITE operations, and hence concurrency is a challenge.
Concurrency control is used to address such conflicts which mostly occur with a multi-user system.
It helps you to make sure that database transactions are performed concurrently without violating
the data integrity of respective databases.
Therefore, concurrency control is a most important element for the proper functioning of a system
where two or multiple database transactions that require access to the same data, are executed
simultaneously.
Potential problems of Concurrency:
Here are some issues which you are likely to face while using the concurrency control method:
a) Lost Updates:
It occurs when multiple transactions select the same row and update the row based on the
value selected
b) Uncommitted dependency:
These issues occur when the second transaction selects a row which is updated by another
transaction (dirty read)
c) Non-Repeatable Read:
It occurs when a second transaction is trying to access the same row several times and reads
different data each time.
d) Incorrect Summary issue: It occurs when one transaction takes summary over the value of
all the instances of a repeated data-item, and second transaction update few instances of
that specific data-item. In that situation, the resulting summary does not reflect a correct
result.
2) Two-Phase Locking Protocol:
(Guru99.com 2019) The Two-Phase Locking protocol is also known as the 2PL protocol. In this type of locking protocol, a transaction must not acquire any new lock after it has released one of its locks.
This locking protocol divides the execution of a transaction into three different parts.
- In the first phase, when the transaction begins to execute, it requires permission for the locks it needs.
- The second part is where the transaction obtains all the locks. When a transaction releases its first lock, the third phase starts.
- In this third phase, the transaction cannot demand any new locks. Instead, it only releases the acquired locks.
It is true that the 2PL protocol offers serializability. However, it does not ensure that deadlocks do not happen.
- Centralized 2PL
In Centralized 2PL, a single site is responsible for the lock management process. It has only one lock manager for the entire DBMS.
- Primary copy 2PL
In the primary copy 2PL mechanism, many lock managers are distributed to different sites, and a particular lock manager is responsible for managing the locks for a set of data items. When the primary copy has been updated, the change is propagated to the slaves.
- Distributed 2PL
In this kind of two-phase locking mechanism, lock managers are distributed to all sites, and each is responsible for managing the locks for the data at its site. If no data is replicated, it is equivalent to primary copy 2PL. The communication costs of distributed 2PL are higher than those of primary copy 2PL. A small SQL-level sketch of locking follows below.
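In practice, lock-based protocols surface in SQL through explicit row locks. The following is only a hedged sketch of pessimistic locking on the teacher table defined earlier (the Teacher_Id value is assumed), illustrating the spirit of the growing and shrinking phases rather than a full 2PL implementation:

-- Growing phase: acquire a row lock on the teacher record being updated.
SELECT Received_Date
FROM   teacher
WHERE  Teacher_Id = 1
FOR UPDATE;                     -- other transactions must now wait to modify this row

-- Work is done while the lock is held.
UPDATE teacher
SET    Received_Date = SYSDATE
WHERE  Teacher_Id = 1;

-- Shrinking phase: COMMIT releases all acquired locks at once.
COMMIT;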
3) Timestamp-based Protocols:
The timestamp-based algorithm uses a timestamp to serialize the execution of concurrent
transactions. This protocol ensures that every conflicting read and write operations are
executed in timestamp order. The protocol uses the System Time or Logical Count as a
Timestamp.
The older transaction is always given priority in this method. It uses system time to
determine the time stamp of the transaction. This is the most commonly used concurrency
protocol.
Lock-based protocols help you to manage the order between the conflicting transactions
when they will execute. Timestamp-based protocols manage conflicts as soon as an
operation is created.
4) Validation based protocol:
The validation-based protocol is also known as the optimistic concurrency control technique. In the validation-based protocol, the transaction is executed in the following three phases:
- Read phase: In this phase, the transaction T is read and executed. It is used to read the
value of various data items and stores them in temporary local variables. It can perform all
the write operations on temporary variables without an update to the actual database.
- Validation phase: In this phase, the temporary variable value will be validated against the
actual data to see if it violates the serializability.
- Write phase: If the validation of the transaction is validated, then the temporary results
are written to the database or system otherwise the transaction is rolled back.
Used protocol for development of project:
(Guru99.com 2019) For the development of the project I will be using validation-based concurrency control. In this approach the data is first handled in the read-only phase, where the data coming from the interface is stored in a temporary variable; it is then validated against the actual data in the database, and only when the two match is the requested operation executed.
In the assignment management system, for the admin login the data entered into the login GUI is first stored in a temporary login record, which is then validated against the actual data stored in the login table of the database. If the two match each other, the request is completed and the database of the assignment management system is opened.
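A hedged SQL-level sketch of the same idea (a Version_No column is assumed here and is not part of the original subject table): the version is read first, and the update only succeeds if nobody changed the row in between.

-- Read phase: remember the current version of the row being edited.
SELECT Deadline, Version_No
FROM   subject
WHERE  Assignment_Id = 110;      -- suppose this returns Version_No = 3

-- Validation and write phase: the update applies only if the version is unchanged.
UPDATE subject
SET    Deadline   = DATE '2020-01-15',
       Version_No = Version_No + 1
WHERE  Assignment_Id = 110
AND    Version_No    = 3;        -- 0 rows updated means another transaction got there first

COMMIT;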
Next, we have to ensure the user has the privileges to connect to the database:
GRANT CREATE SESSION TO student;
GRANT ANY PRIVILEGE TO student;
Table privileges: generally, providing these privileges is not necessary in newer versions of Oracle, but it is sometimes required. If we want our student user to have the ability to perform SELECT, UPDATE, INSERT, and DELETE operations on the student table, we might execute the following GRANT statement:
GRANT SELECT, INSERT, UPDATE, DELETE ON Schema.student TO Student;
The above query ensures that the student can perform the four basic operations on the student table.
I have presented the above table to show how we can grant privileges and authorization to a user in the database. Now I will discuss the security I will provide on the database below:
Database security:
(Guru99.com 2019) Database security covers and enforces security on all aspects and components
of databases. This includes:
-Data stored in database
-Database server
-Database management system (DBMS)
-Other database workflow applications
Database security is generally planned, implemented and maintained by a database administrator and/or other information security professionals.
For securing the database I have to implement the following database security measures:
1. Access authorization:
(SearchSecurity 2019) Authentication is the process of validating credentials such as a username and password to verify the user's identity. Access authorizations may allow general access to a service or access to all records of a particular type, but more often, access will be restricted based on the context of the collaboration.
Example: if the student or teacher enters the right username and password then they will be allowed to access the particular file, otherwise they cannot access it.
2. Access Control:
(SearchSecurity 2019) Access control is a security technique that regulates who or what can
view or use resources in a computing environment. It is a fundamental concept in security
that minimizes risk to the business or organization.
There are two types of access control: physical and logical. Physical access control limits
access to campuses, buildings, rooms and physical IT assets. Logical access control limits
connections to computer networks, system files and data.
3. Views:
(SearchSecurity 2019) A database view is a searchable object in a database that is defined by a query. Though a view doesn't store data (some refer to a view as a "virtual table"), you can query a view like a table. A view can combine data from two or more tables, using joins,
and also just contain a subset of information. This makes them convenient to abstract, or
hide, complicated queries. Just as a function (in programming) can provide abstraction, so
can a database view. In another parallel with functions, database users can manipulate
nested views, thus one view can aggregate data from other views. Without the use of views,
the normalization of databases above second normal form would become much more
difficult. Views can make it easier to create lossless join decomposition, as in the sketch below.
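A hedged example of such a view over the tables defined earlier (the view name is assumed): it hides the join and exposes only non-sensitive columns.

-- A view exposing which teacher released which assignment, without
-- revealing the rest of the teacher record.
CREATE OR REPLACE VIEW v_assignment_overview AS
SELECT t.Teacher_Name,
       s.Released_Date,
       s.Deadline
FROM   teacher t
JOIN   subject s ON s.Assignment_Id = t.Assignment_Id;

-- The view is then queried like an ordinary table.
SELECT * FROM v_assignment_overview;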
4. Data integrity:
(SearchSecurity 2019) Data integrity is the overall completeness, accuracy and consistency
of data. This can be indicated by the absence of alteration between two instances or between
two updates of a data record, meaning data is intact and unchanged. The concept of data
integrity ensures that all data in a database can be traced and connected to other data. This
ensures that everything is recoverable and searchable. Having a single, well-defined and
well-controlled data integrity system increases stability, performance, reusability and
maintainability.
5. Backup and recovery of data:
(SearchSecurity 2019) Backup and recovery describe the process of creating and storing
copies of data that can be used to protect organizations against data loss. This is sometimes
referred to as operational recovery. Recovery from a backup typically involves restoring
the data to the original location, or to an alternate location where it can be used in place of
the lost or damaged data.
A proper backup copy is stored in a separate system or medium, such as tape, from the
primary data to protect against the possibility of data loss due to primary hardware or
software failure.
Screenshots related to the backup and recovery of data are:
Run backup file Backup.bat in the folder
C:\oraclexe\app\oracle\product\11.2.0\server\bin
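As a complementary, hedged sketch (this is an assumption and not necessarily what Backup.bat itself does), Oracle's flashback query can recover recently deleted or modified rows from undo data, provided the undo retention window has not passed:

-- See the teacher row as it existed ten minutes ago (interval chosen arbitrarily).
SELECT *
FROM   teacher AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE  Teacher_Id = 1;

-- Re-insert the row if it was deleted by mistake.
INSERT INTO teacher
SELECT *
FROM   teacher AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE  Teacher_Id = 1;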
Comparing system design and system analysis against system and user requirement
(SearchSecurity 2019) Systems development is a systematic process which includes phases such as planning, analysis, design, deployment, and maintenance. Here, we will primarily focus on:
-Systems analysis
-Systems design
• System Analysis:
It is a process of collecting and interpreting facts, identifying the problems, and
decomposition of a system into its components.
System analysis is conducted for the purpose of studying a system or its parts in order to
identify its objectives. It is a problem-solving technique that improves the system and
ensures that all the components of the system work efficiently to accomplish their purpose.
Analysis specifies what the system should do.
• System Design:
It is a process of planning a new business system or replacing an existing system by
defining its components or modules to satisfy the specific requirements. Before planning,
you need to understand the old system thoroughly and determine how computers can best
be used in order to operate efficiently.
System Design focuses on how to accomplish the objective of the system.
System Analysis and Design (SAD) mainly focuses on −
-Systems
-Processes
-Technology
(EDUCBA.com 2019) Differentiating system analysis and system design in the below
table:
I. Planning:
(SearchSecurity 2019) Planning is the process of thinking about the activities required to achieve a desired goal. It is the first and foremost activity needed to achieve the desired results. It involves the creation and maintenance of a plan, including psychological aspects that require conceptual skills.
For the development of the assignment management system I have planned to use the following things:
- Oracle for the back end of the project to store the data
- Java programing language for the interface designing and coding
- Visio tools for the designing
- Faculty teacher as the mentor for the development of the project
• User requirement:
Requirements | Use
Log in | A fully authenticated and validated login panel for the user who manages the database of the assignment management system.
Level 1 DFD
Level 2 DFD
ER diagram
IV. Testing:
(SearchSecurity 2019) Testing is defined as an activity to check whether the actual results match the expected results and to ensure that the software system is defect free. I have already performed the different types of testing, along with their output, in the questions above.
V. Implementation:
(SearchSecurity 2019) After finishing the testing process it is time for the implementation process. It is necessary to launch the application in the ISMT college. After launching the application, I can get feedback from different people. The feedback that I have received is as follows:
VI. Feedbacks:
(SearchSecurity 2019) As the application was tested by different people at the ISMT college, the feedback of each person is as follows:
Date:2019/12/27
The project is excellent it helps the college to manage the assignment in the appropriate
format. Student can easily submit the assignment as well as the teacher can evaluate the
assignment easily and can provide the feedback of the assignment to the particular
students.
-Nawaraj Bhandari (Teacher of DBMS)
Date:2019/12/27
The application is good. It is easy to access. It consumes less time. The management of the assignments is easier. Junior students can use the assignments of senior students as references. The software is mesmerizing and eye-catching due to its color combination and development, as it easily attracts the eye of the user.
-Rahul kumar (teacher of Java)
Date:2019/12/27
The application is good, but core Java has been used for the development of the project, which is not widely used nowadays and leaves the system vulnerable to attacks by malicious code. So, I suggest using one of the better frameworks such as Spring MVC, JSF or Grails, as per the needs of the project, for the development.
-Binod Saha (Teacher of programming)
After collecting this feedback from teachers who are experts in different fields, I have concluded that I have done good work, but it could have been better if I had improved the security a little. As a whole, I found my application to be good.
Conclusion:
As required by the scenario, in this part I have produced a design for a relational database management system to meet client requirements and developed a fully functional system which meets client and system requirements using an open source language (with an application software e.g. MySQL). I have also analyzed how the design will optimize system performance, implemented effective features in the solution to handle concurrency, security, user authorizations and data recovery, and finally critically evaluated the effectiveness of the system design and development against client and system requirements.
Part 3: Create a lab report: Demonstrate the system administration and management tools
available on the chosen platform
• Demonstrate the tools available in the system to monitor and optimize system
performance, and examine the audit logs.
• Demonstrate the tools available in the system to manage security and authorizations.
• Assess the effectiveness of the system administration and management tools available
on the platform identifying any shortcomings of the tools.
• Assess any future improvements that may be required to ensure the continued
effectiveness of the database system.
Lab Report on: Demonstrating the system administration and management tools available
Date:2019/12/28
Submitted by: Anup Sapkota
Submitted to: Nawaraj Bhandari
Title:
The title on which I am going to make the lab reports are as follows:
• Demonstrating the tools available in the system to monitor and optimize
system performance, and examine the audit logs.
• Demonstrating the tools available in the system to manage security and
authorizations.
• Assessing the effectiveness of the system administration and management
tools available on the platform identifying any shortcomings of the tools.
• Assessing any future improvements that may be required to ensure the
continued effectiveness of the database system.
Introduction:
The main purpose of my experiment is to demonstrate the system administration and management tools available. In this experiment I am going to demonstrate the tools available in the system to monitor and optimize system performance and examine the audit logs, demonstrate the tools available in the system to manage security and authorizations, and assess the effectiveness of the system administration and management tools available on the platform while identifying any shortcomings of the tools. Finally, I am going to assess any future improvements that may be required to ensure the continued effectiveness of the database system.
Materials:
To do the experiment I am going to use the internet as well as my programs as a material. Also,
I am going to use some screenshots of my programs to demonstrate the experiment.
Data:
Audit log:
(Audit Log 2004) “An audit trail (also called audit log) is a security-relevant chronological
record, set of records, and/or destination and source of records that provide documentary
evidence of the sequence of activities that have affected at any time a specific operation,
procedure, or event.”
An audit log in its most primitive form would be a pen and paper a person would use to make
entries to accompany changes made to a system. Signing your kids in and out of daycare would
be an example of an audit log. To track the log of the database we can obtain it from the database DBA, which I will show below:
• Increased Security:
Having detailed audit logs can protect a business from liability during legal battles. They
also help companies monitor data for any potential security breaches or internal misuses
of information. They are also a great way to ensure that proper document protocols are
followed consistently, and to prevent (and track down) fraud.
• Risk Management:
Audit logs can play an important part in a business’ overall risk management strategy,
demonstrating to customers, business partners and regulators that an organization has
made a thorough effort to protect against and prevent potential problems before they
occur.
• Demonstrate Compliance
Organizations must comply with tax regulations and federal laws such as the Sarbanes-
Oxley act and the Gramm-Leach-Bliley Act, in addition to industry-specific regulations.
Audit logs can be used as proof of regulatory compliance during an audit and can help
a company fulfill its record-keeping requirements for compliance purposes.
• Detailed Insight:
Because an audit log tracks how long and how frequently individual users access a
document, it can be used to gain insight into which investors or potential partners are
most interested in a business, enabling the company to be more strategic with its
negotiations.
Tracking all user activities with an audit log can offer both startups and established
companies the insight and oversight abilities they need to increase efficiency and
security in a reliable, provable way.
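In Oracle, traditional auditing can be switched on with SQL. The following is a hedged sketch (the audited user is assumed, and it presumes the AUDIT_TRAIL initialization parameter is set so that audit records are written to the database):

-- Record every SELECT, INSERT, UPDATE and DELETE issued by the admin user.
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY admin BY ACCESS;

-- The resulting audit trail can then be examined from the data dictionary.
SELECT username, action_name, obj_name, timestamp
FROM   dba_audit_trail
ORDER  BY timestamp DESC;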
Audit logs are a special procedure which execute in specific event, also called trigger
(Essentialsql.com. 2019) A database trigger is special stored procedure that is run when specific
actions occur within a database. Most triggers are defined to run when changes are made to a
table’s data. Triggers can be defined to run instead of or after DML (Data Manipulation
Language) actions such as INSERT, UPDATE, and DELETE.
Triggers help the database designer ensure certain actions, such as maintaining an audit file, are
completed regardless of which program or user makes changes to the data.
The programs are called triggers since an event, such as adding a record to a table, fires their execution.
Events:
Triggers can fire AFTER or INSTEAD OF a DML action. They are associated with the DML actions INSERT, UPDATE, and DELETE, and are defined to run when these actions are executed on a specific table.
Trigger creation on the database:
Insert trigger on the database:
After compiling the insert trigger, running an insert query on the database fires the trigger before the insertion completes.
After creating the update trigger, updating the table fires the trigger before the update completes.
After creating the delete trigger, deleting from the table fires the trigger before the delete operation completes.
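As a rough sketch of the kind of audit trigger shown in the screenshots above (the table and column names here are assumptions for illustration, not the exact ones in my database), a single trigger can record every insert, update, or delete on the login table:

-- Illustrative audit table to hold the trail of changes
CREATE TABLE login_audit (
    action_type VARCHAR2(10),
    changed_by  VARCHAR2(30),
    changed_on  DATE
);

-- Trigger that fires after any DML action on the login table
CREATE OR REPLACE TRIGGER trg_login_audit
AFTER INSERT OR UPDATE OR DELETE ON login
FOR EACH ROW
BEGIN
    INSERT INTO login_audit (action_type, changed_by, changed_on)
    VALUES (CASE
                WHEN INSERTING THEN 'INSERT'
                WHEN UPDATING  THEN 'UPDATE'
                ELSE 'DELETE'
            END,
            USER,      -- the database user who made the change
            SYSDATE);  -- when the change happened
END;
/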
Here I have provided some SQL queries to grant roles and privileges to the user created in the database, and below I will show the GUI for user creation and the system roles and privileges for the user.
First, we have to connect to the database by logging in as SYSTEM and providing the password; we then get an interface that looks like this:
Interface after login as system:
After expanding the assignment connection, we can see the users node and, on right-click, the Create User option, as shown in the screenshot above.
Then we can create a user; I have created the admin user and provided a password for login.
Then the following roles and privileges are granted to the admin user:
Roles: connect, DBA
Privileges:
CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE, SELECT ANY TABLE, UNLIMITED TABLESPACE, UPDATE ANY TABLE
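A sketch of the SQL behind these GUI steps (the password is a placeholder, and the plain username admin assumes a non-container or pluggable database):

-- Create the user (password is a placeholder, not the one actually used)
CREATE USER admin IDENTIFIED BY "placeholder_password";

-- Grant the roles shown above
GRANT CONNECT, DBA TO admin;

-- Grant the system privileges shown above
GRANT CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE,
      SELECT ANY TABLE, UPDATE ANY TABLE TO admin;
GRANT UNLIMITED TABLESPACE TO admin;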
After connecting as admin, I can select any table inside the assignment database, as shown in the screenshot below:
Connecting the admin user to the assignment database
As admin is connected, I can select any table from the database; here I have selected the login table, shown below:
Then I have inserted some values into the login table, which completed successfully.
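The statements behind these steps would look roughly like this (the schema prefix, column names, and values are assumptions for illustration):

-- Select the login table while connected as admin
SELECT * FROM system.login;

-- Insert a sample row into the login table
INSERT INTO system.login (username, password)
VALUES ('test_user', 'test_pass');
COMMIT;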
When connecting with a user that has not been created in the database, the connection is denied. To demonstrate this, I tried connecting as Anup, which is not a valid user, and got the following error:
Above, I have discussed creating users for the database so that it can be managed more easily, and assigning their privileges and roles so that they cannot damage the database.
Single sign-on:
(Searchsecurity.com 2019) Single sign-on (SSO) is a session and user authentication service
that permits a user to use one set of login credentials (e.g., name and password) to access
multiple applications. SSO can be used by enterprises, smaller organizations, and individuals to
mitigate the management of various usernames and passwords.
In a basic web SSO service, an agent module on the application server retrieves the specific
authentication credentials for an individual user from a dedicated SSO policy server, while
authenticating the user against a user repository such as a lightweight directory access protocol
(LDAP) directory. The service authenticates the end user for all the applications the user has
been given rights to and eliminates future password prompts for individual applications during
the same session.
Multifactor authentication:
(Searchsecurity.com 2019) Multi-factor authentication (MFA) is an authentication method in
which a computer user is granted access only after successfully presenting two or more pieces
of evidence (or factors) to an authentication mechanism: knowledge (something the user and
only the user knows), possession (something the user and only the user has), and inherence
(something the user and only the user is). The goal of MFA is to create a layered defense and
make it more difficult for an unauthorized person to access a target such as a physical location,
computing device, network or database. If one factor is compromised or broken, the attacker
still has at least one more barrier to breach before successfully breaking into the target.
Data storage on cloud:
(Searchsecurity.com 2019) Cloud storage is a model of computer data storage in which the
digital data is stored in logical pools. The physical storage spans multiple servers (sometimes in
multiple locations), and the physical environment is typically owned and managed by a hosting
company. These cloud storage providers are responsible for keeping the data available and
accessible, and the physical environment protected and running. People and organizations buy
or lease storage capacity from the providers to store user, organization, or application data.
Shortcomings of security management:
• Technology is always changing so users must always purchase upgraded information
security.
• Since technology is always changing nothing will ever be completely secure.
• If a user misses one single area that should be protected the whole system could be
compromised.
• It can be extremely complicated and users might not totally understand what they are
dealing with.
• It can slow down productivity if a user is constantly having to enter passwords.
Database version:
Another major factor in database performance is the version of the database software we are currently running. Staying up to date with the latest version can have a significant impact on overall database performance. It is possible that one query may perform better in an older version than in a new one, but when looking at overall performance, new versions tend to perform better. Although I have described some methods for improving database performance, there are plenty of others. We can test the different methods, see which have the greater impact on the database, and use them for further development.
Optimize queries:
In most cases performance issues are caused by poor SQL queries. While developing the database we are often stuck choosing between IN and EXISTS, or between writing a subquery and a join. Being stuck in this dilemma can lead to poorly performing queries, so to optimize queries we can use a query optimizer such as EverSQL Query Optimizer, which will both speed up the query and explain its recommendations.
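As a small illustration of the kind of rewrite such an optimizer might suggest (the table and column names are assumptions), an IN subquery can often be expressed with EXISTS, which the database can frequently evaluate more efficiently on large tables:

-- Using IN with a subquery (illustrative table and column names)
SELECT s.student_id, s.name
FROM   student s
WHERE  s.student_id IN (SELECT l.student_id FROM login l);

-- Equivalent query using EXISTS, which can stop probing as soon as a match is found
SELECT s.student_id, s.name
FROM   student s
WHERE  EXISTS (SELECT 1 FROM login l WHERE l.student_id = s.student_id);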
Create optimal indexes:
If indexing is done properly, it helps to optimize query execution time and increases overall performance. Indexes accomplish this by implementing a data structure that keeps things organized and makes locating information easier; indexing speeds up the data retrieval process and makes it more efficient, thereby saving time and effort. (EverSql.com 2018)
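For example, assuming the login table is frequently searched by username (an assumed column name), a simple index on that column lets the database locate matching rows without scanning the whole table:

-- Index on the column used in WHERE clauses (names are illustrative)
CREATE INDEX idx_login_username ON login (username);

-- Queries filtering on username can now use the index instead of a full table scan
SELECT * FROM login WHERE username = 'test_user';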
Conclusion:
In this experiment I demonstrated the tools available in the system to monitor and optimize system performance and examined the audit logs. I also demonstrated the tools available in the system to manage security and authorizations, assessed the effectiveness of the system administration and management tools available on the platform, identifying shortcomings of the tools, and finally assessed future improvements that may be required to ensure the continued effectiveness of the database system. All the demonstrations in the lab were successful.
References:
1. SearchSQLServer. (2019). What is a Database Management System? - Definition from
WhatIs.com. [online] Available at: https://searchsqlserver.techtarget.com/definition/database-
management-system [Accessed 20 Dec. 2019].
2. Geol-amu.org. (2019). Types of Database Management Systems. [online] Available at:
http://www.geol-amu.org/notes/be1b-3-3.htm [Accessed 20 Dec. 2019].
3. SearchSQLServer. (2019). What is Database Normalization? [online] Available at:
https://searchsqlserver.techtarget.com/definition/normalization [Accessed 20 Dec. 2019].
4. Tutorialspoint.com. (2019). DBMS - Data Models - Tutorialspoint. [online] Available at:
https://www.tutorialspoint.com/dbms/dbms_data_models.htm [Accessed 20 Dec. 2019].
5. Sisense. (2019). What is a Relational Database Management System? | Sisense Glossary.
[online] Available at: https://www.sisense.com/glossary/relational-database/ [Accessed 20 Dec.
2019].
6. Guru99.com. (2019). DBMS vs RDBMS: Complete Difference between DBMS and RDBMS.
[online] Available at: https://www.guru99.com/difference-dbms-vs-rdbms.html [Accessed 20
Dec. 2019].
7. Jhigh.co.uk. (2019). Data Anomalies. [online] Available at:
http://jhigh.co.uk/Higher/dbases/anomalies.html [Accessed 21 Dec. 2019].
8. Guru99.com. (2019). DBMS Concurrency Control: Two Phase, Timestamp, Lock-Based
Protocol. [online] Available at: https://www.guru99.com/dbms-concurrency-control.html
[Accessed 22 Dec. 2019].
9. W3schools. (2019). Database Security. [online] Available at:
https://www.w3schools.in/dbms/database-security/ [Accessed 23 Dec. 2019].
10. Techopedia.com. (2019). What is Database Security? - Definition from Techopedia. [online]
Available at: https://www.techopedia.com/definition/29841/database-security [Accessed 23 Dec.
2019].
11. Differencebetween.net. (2019). Difference between Authentication and Authorization |
Difference Between. [online] Available
at:http://www.differencebetween.net/technology/difference-between-authentication-and-
authorization/ [Accessed 23 Dec. 2019].
12. SearchSecurity. (2019). What is access control? - Definition from WhatIs.com. [online]
Available at: https://searchsecurity.techtarget.com/definition/access-control [Accessed 24 Dec.
2019].
13. Software, D. (2017). Oracle Database Data: Backup and Restore. [online] Hetman Software.
Available at: https://hetmanrecovery.com/recovery_news/backing-up-and-restoring-the-database-
oracle-database.htm [Accessed 24 Dec. 2019].
14. Software Testing Material. (2015). Software Testing - Definition, Types, Methods,
Approaches. [online] Available at: https://www.softwaretestingmaterial.com/software-testing/
[Accessed 24 Dec. 2019].
15. Admin, S. (2014). Difference between Black Box Testing and White Box Testing. [online]
Software Testing Class. Available at: https://www.softwaretestingclass.com/difference-between-
black-box-testing-and-white-box-testing/ [Accessed 24 Dec. 2019].
16. Tutorialspoint.com. (2019). System Analysis and Design - Overview - Tutorialspoint. [online]
Available at:
https://www.tutorialspoint.com/system_analysis_and_design/system_analysis_and_design_overv
iew.htm [Accessed 24 Dec. 2019].
17. EDUCBA. (2019). System Analysis And Design | Top 11 Differences You Should Know. [online] Available at: https://www.educba.com/system-analysis-and-design/ [Accessed 27 Dec. 2019].
18. What is Single Sign-On (SSO) and How Does It Work? (2019). Available at: https://searchsecurity.techtarget.com/definition/single-sign-on (Accessed: 27 December 2019).
19. Audit Log (2004). Available at: https://martinfowler.com/eaaDev/AuditLog.html (Accessed: 27 December 2019).
20. Essentialsql.com (2019). Available at: https://www.essentialsql.com/what-is-a-database-trigger/ (Accessed: 27 December 2019).
21. 5 Easy Ways To Improve Your Database Performance (2018). Available at: https://www.eversql.com/5-easy-ways-to-improve-your-database-performance/ (Accessed: 27 December 2019).