
Data Base Administration Level IV: Shashemene Poly Technique College

The document provides information about testing a database system. It discusses preparing a test environment, different types of database testing including black box and white box testing, what aspects of the database should be tested like data mapping and integrity, business rules, and transactions. It also outlines the database testing process and different components that can be tested such as transactions, data integrity, and business rules.


SHASHEMENE POLY TECHNIQUE COLLEGE

Database Administration


Level IV

Unit of Competence: Perform Database System Test

Module Title: Performing Database System Test

LG Code: EIS DBA4 LG08 M08 LO1-02

TTLM Code: EIS DBA4TTLM 0811v01

LO1: Prepare for test

LO2: Conduct test


Information EIS DBA4-LG08 M08 LO1 – IS01
Prepare test

1.1 Prepare the test environment in line with work guidelines.

Database Testing

Database testing usually follows a layered process covering the user interface (UI) layer, the
business layer, the data access layer and the database itself. The UI layer deals with the interface
design of the database application, while the business layer includes databases supporting
business strategies. The most critical layer is the data access layer, which deals with databases
directly during the communication process. Database testing mainly takes place at this layer and
involves testing strategies such as quality control and quality assurance of the product databases.
Testing at these different layers is frequently used to maintain the consistency of database systems.

Test environment: everything the tester needs for testing, including the technical environment,
data, work area, and interfaces used in testing, must be prepared before the test is conducted.

Types of testing and processes

Black box and white box testing in database test

Black Box testing in database testing

Black box testing involves testing interfaces and the integration of the database, which includes:

 Mapping of data (including metadata)

 Verifying incoming data

 Verifying outgoing data from query functions

 Various techniques such as cause-effect graphing, equivalence partitioning and
boundary-value analysis.

With the help of these techniques, the functionality of the database can be tested thoroughly.

Pros and cons of black box testing include: test case generation in black box testing is fairly
simple. Test cases can be generated completely independently of software development and can
be written at an early stage of development. As a consequence, the programmer gains better
knowledge of how to design the database application and spends less time debugging. The cost
of developing black box test cases is lower than that of white box test cases. The major drawback
of black box testing is that it is unknown how much of the program is being tested. Also, certain
kinds of errors cannot be detected.
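The techniques named above can be sketched quickly. The snippet below is a minimal illustration of equivalence partitioning and boundary-value analysis for a hypothetical AGE field whose requirements say it must lie between 18 and 60 (the range, function names and rule are invented for illustration):

```python
# Hypothetical rule under test: an AGE field that must be between 18 and 60.
# Equivalence partitioning gives three classes (below, inside, above the range);
# boundary-value analysis adds the values on and just outside each boundary.

def partition_cases(low, high):
    """Return representative test inputs for a numeric range constraint."""
    return {
        "invalid_low": low - 1,        # just below the valid partition
        "boundary_low": low,           # lowest valid value
        "valid_mid": (low + high) // 2,  # a representative of the valid class
        "boundary_high": high,         # highest valid value
        "invalid_high": high + 1,      # just above the valid partition
    }

def is_valid_age(age, low=18, high=60):
    """The validation rule the tester wants to exercise."""
    return low <= age <= high

cases = partition_cases(18, 60)
results = {name: is_valid_age(value) for name, value in cases.items()}
print(results)
```

Running the rule against these five inputs exercises both valid boundaries and both invalid partitions with a handful of cases instead of testing every possible age.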

White Box testing in database testing

White box testing mainly deals with the internal structure of the database; the specification
details are hidden from the end user.

 It involves the testing of database triggers and logical views which are going to support
database refactoring.

 It performs module testing of database functions, triggers, views, SQL queries etc.

 It validates database tables, data models, database schema etc.

 It checks rules of Referential integrity.

 It selects default table values to check on database consistency.

 The techniques used in white box testing are condition coverage, decision coverage,
statement coverage, cyclomatic complexity.

The main advantage of white box testing in database testing is that coding errors are detected, so
internal bugs in the database can be eliminated.

Why do we test a database?

1) Data Mapping: In software systems, data often travels back and forth from the UI (user
interface) to the backend DB and vice versa. So the following are the aspects to look for:

 Check whether the fields in the UI/front-end forms are mapped consistently to the
corresponding DB table (and also the fields within it). Typically this mapping information
is defined in the requirements documents.

 Whenever a certain action is performed in the front end of an application, a
corresponding CRUD (Create, Retrieve, Update and Delete) action gets invoked at the
back end. A tester has to check whether the right action is invoked and whether the
invoked action itself succeeds.

2) ACID properties validation: atomicity, consistency, isolation and durability. Every
transaction a DB performs has to adhere to these four properties.
 Atomicity means that a transaction either fails or passes as a whole: even if a
single part of the transaction fails, the entire transaction has failed. This is usually
called the "all-or-nothing" rule.
 Consistency: a transaction always results in a valid state of the DB.
 Isolation: if there are multiple transactions and they are executed all at once, the
result/state of the DB should be the same as if they were executed one after the other.
 Durability: once a transaction is done and committed, no external factor such as power loss
or a crash should be able to change it.
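Atomicity in particular is easy to check with a small script. The sketch below uses SQLite (via Python's sqlite3 module) purely for illustration; the table and column names are invented, and a real test would target the actual DBMS. A deliberate error in the second statement must undo the first one too:

```python
import sqlite3

# Minimal atomicity check: a two-statement transfer where the second
# statement fails (misspelled column), so the first must be rolled back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 100)")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        # This statement fails (no such column), so the whole unit must fail
        conn.execute("UPDATE accounts SET balanse = balanse + 50 WHERE id = 2")
except sqlite3.OperationalError:
    pass

balances = [row[0] for row in conn.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # both balances unchanged: all-or-nothing held
```

If atomicity were violated, the first account would show 50 while the second still showed 100.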

Terms of DB transactions

 Active – the initial state; the transaction stays in this state while it is executing.
 Partially committed – after the final statement has been executed.
 Failed – after the discovery that normal execution can no longer proceed.
 Aborted – after the transaction has been rolled back and the database restored to its state
prior to the start of the transaction. Two options after it has been aborted:
o restart the transaction (possible only if there was no internal logical error)
o kill the transaction
 Committed – after successful completion.

3) Data integrity: This means that following any of the CRUD operations (Create, Retrieve,
Update and Delete), the updated and most recent values/status of shared data should appear on all
the forms and screens. A value should not be updated on one screen while an older value is
displayed on another. So design your DB tests to check the data in all the places it appears and
confirm it is consistently the same.

4) Business rule conformity: More complex databases mean more complicated components
such as relational constraints, triggers, and stored procedures. So testers have to come up with
appropriate SQL queries to validate these complex objects.

How to test – Database Testing Process


The general test process for DB testing is not very different from any other application. The
following are the steps:

Step #1) Prepare the environment
Step #2) Run a test
Step #3) Check the test result
Step #4) Validate according to the expected results
Step #5) Report the findings to the respective stakeholders
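The five steps above can be sketched as a tiny harness. This is a minimal illustration using SQLite; the table, data and expected values are invented:

```python
import sqlite3

# A minimal sketch of the five-step DB test process (SQLite, invented data).

def prepare_environment():
    """Step 1: set up a disposable test database with known data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, age INTEGER)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, 32), (2, 25), (3, 25)])
    conn.commit()
    return conn

def run_test(conn):
    """Step 2: run the operation under test."""
    conn.execute("DELETE FROM customers WHERE age = 25")
    conn.commit()

def check_result(conn):
    """Step 3: collect the actual result."""
    return conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

def validate(actual, expected):
    """Step 4: compare against the expected result."""
    return actual == expected

def report(passed):
    """Step 5: report the findings (here, just stdout)."""
    print("PASS" if passed else "FAIL")

conn = prepare_environment()
run_test(conn)
passed = validate(check_result(conn), expected=1)
report(passed)
```

A real test suite would repeat steps 2 to 5 for each test case against the environment prepared in step 1.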

What to test – different components

1) Transactions:

When testing transactions it is important to make sure that they satisfy the ACID properties.

A transaction is a unit of work that is performed against a database. Transactions are units or
sequences of work accomplished in a logical order, whether in a manual fashion by a user or
automatically by some sort of a database program.

A transaction is the propagation of one or more changes to the database. For example, if you are
creating, updating or deleting a record in a table, then you are performing a transaction on that
table. It is important to control transactions to ensure data integrity and to handle database errors.

Practically, you will group many SQL queries together and execute all of them as part of one
transaction.

Properties of Transactions:
Transactions have the following four standard properties, usually referred to by the acronym
ACID:

 Atomicity: ensures that all operations within the work unit are completed successfully;
otherwise, the transaction is aborted at the point of failure, and previous operations are
rolled back to their former state.
 Consistency: ensures that the database properly changes states upon a successfully
committed transaction.
 Isolation: enables transactions to operate independently of and transparent to each other.
 Durability: ensures that the result or effect of a committed transaction persists in case of a
system failure.

Transaction Control:

The following commands are used to control transactions:

 COMMIT: saves the changes.
 ROLLBACK: rolls back the changes.
 SAVEPOINT: creates a point within a group of transactions to which you can ROLLBACK.
 SET TRANSACTION: places a name on a transaction.

Transactional control commands are used only with the DML commands INSERT, UPDATE
and DELETE. They cannot be used while creating or dropping tables because those operations
are automatically committed in the database.

The COMMIT Command:

The COMMIT command is the transactional command used to save changes invoked by a
transaction to the database.
The COMMIT command saves all transactions to the database since the last COMMIT or
ROLLBACK command.

The syntax for the COMMIT command is as follows:

COMMIT;

Example:

Consider the CUSTOMERS table having the following records:

ID NAME AGE ADDRESS SALARY


1 Rahel 32 AA 2000.00
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00
Following is an example which deletes the records having AGE = 25 from the table and then
COMMITs the changes in the database.

DELETE FROM CUSTOMERS
WHERE AGE = 25;
COMMIT;

As a result, two rows from the table would be deleted and a SELECT statement would produce the
following result:

ID NAME AGE ADDRESS SALARY


1 Rahel 32 AA 2000.00
3 Kasa 23 Kofele 2000.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00

The ROLLBACK Command:

The ROLLBACK command is the transactional command used to undo transactions that have
not already been saved to the database.
The ROLLBACK command can only be used to undo transactions since the last COMMIT or
ROLLBACK command was issued.

The syntax for ROLLBACK command is as follows:

ROLLBACK;
Example:
Consider the CUSTOMERS table having the following records:
ID NAME AGE ADDRESS SALARY
1 Rahel 32 AA 2000.00
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00
Following is an example which deletes the records having AGE = 25 from the table and then
ROLLBACKs the changes.

DELETE FROM CUSTOMERS
WHERE AGE = 25;
ROLLBACK;

As a result, the delete operation does not affect the table, and a SELECT statement would produce
the following result:
ID NAME AGE ADDRESS SALARY
1 Rahel 32 AA 2000.00
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00
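The COMMIT and ROLLBACK examples above can be replayed against a throwaway database. The sketch below uses SQLite via Python's sqlite3 module for illustration (the document's examples use SQL Server syntax, but the commands behave the same way here):

```python
import sqlite3

# Re-creating the CUSTOMERS example in SQLite to contrast COMMIT and ROLLBACK.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manual transaction control with BEGIN/COMMIT/ROLLBACK
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
rows = [(1, "Rahel", 32), (2, "Kedir", 25), (3, "Kasa", 23), (4, "Chaltu", 25),
        (5, "Hasen", 27), (6, "Kebede", 22), (7, "Mulu", 24)]
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)

# ROLLBACK: the delete is undone, all 7 rows survive
conn.execute("BEGIN")
conn.execute("DELETE FROM customers WHERE age = 25")
conn.execute("ROLLBACK")
after_rollback = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

# COMMIT: the delete is made permanent, the two age-25 rows are gone
conn.execute("BEGIN")
conn.execute("DELETE FROM customers WHERE age = 25")
conn.execute("COMMIT")
after_commit = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

print(after_rollback, after_commit)  # 7 5
```

This is exactly the check a tester performs: the same DELETE leaves the table untouched after ROLLBACK but removes two rows after COMMIT.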

The SAVEPOINT Command:

A SAVEPOINT is a point in a transaction when you can roll the transaction back to a certain
point without rolling back the entire transaction.

The syntax for SAVEPOINT command is as follows:

SAVEPOINT SAVEPOINT_NAME;

This command serves only in the creation of a SAVEPOINT among transactional statements.
The ROLLBACK command is used to undo a group of transactions.

The syntax for rolling back to a SAVEPOINT is as follows:

ROLLBACK TO SAVEPOINT_NAME;

Following is an example where you plan to delete three different records from the
CUSTOMERS table. You create a SAVEPOINT before each delete, so that you can
ROLLBACK to any SAVEPOINT at any time to return the data to its original state:

Example:
Consider the CUSTOMERS table having the following records:

ID NAME AGE ADDRESS SALARY


1 Rahel 32 AA 2000.00
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00
Now, here is the series of operations:

SAVEPOINT SP1;
Savepoint created.
DELETE FROM CUSTOMERS WHERE ID=1;
1 row deleted.
SAVEPOINT SP2;
Savepoint created.
DELETE FROM CUSTOMERS WHERE ID=2;
1 row deleted.
SAVEPOINT SP3;
Savepoint created.
DELETE FROM CUSTOMERS WHERE ID=3;
1 row deleted.

Now that the three deletions have taken place, say you have changed your mind and decided to
ROLLBACK to the SAVEPOINT that you identified as SP2. Because SP2 was created after the
first deletion, the last two deletions are undone:

ROLLBACK TO SP2;
Rollback complete.

Notice that only the first deletion took place since you rolled back to SP2:

SELECT * FROM CUSTOMERS;


ID NAME AGE ADDRESS SALARY
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00

6 rows selected.
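SQLite also supports SAVEPOINT and ROLLBACK TO, so the sequence above can be reproduced and verified in a short script (for illustration only; the column layout is trimmed to id and name):

```python
import sqlite3

# The SP1/SP2/SP3 sequence above, reproduced in SQLite. Rolling back to SP2
# undoes the deletes of id 2 and id 3, leaving only the first delete in effect.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manual transaction control
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
names = ["Rahel", "Kedir", "Kasa", "Chaltu", "Hasen", "Kebede", "Mulu"]
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 list(enumerate(names, start=1)))

conn.execute("BEGIN")
conn.execute("SAVEPOINT SP1")
conn.execute("DELETE FROM customers WHERE id = 1")
conn.execute("SAVEPOINT SP2")
conn.execute("DELETE FROM customers WHERE id = 2")
conn.execute("SAVEPOINT SP3")
conn.execute("DELETE FROM customers WHERE id = 3")

conn.execute("ROLLBACK TO SP2")  # undoes the deletes made after SP2
conn.execute("COMMIT")

ids = [r[0] for r in conn.execute("SELECT id FROM customers ORDER BY id")]
print(ids)  # [2, 3, 4, 5, 6, 7] -- only the first deletion survived
```

The result matches the worked example: six rows remain, with only customer 1 gone.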

The RELEASE SAVEPOINT Command:

The RELEASE SAVEPOINT command is used to remove a SAVEPOINT that you have created.
The syntax for RELEASE SAVEPOINT is as follows:

RELEASE SAVEPOINT SAVEPOINT_NAME;

Once a SAVEPOINT has been released, you can no longer use the ROLLBACK command to
undo transactions performed since the SAVEPOINT.

The SET TRANSACTION Command:

The SET TRANSACTION command can be used to initiate a database transaction. This
command is used to specify characteristics for the transaction that follows.

For example, you can specify a transaction to be read only or read write.

The syntax for SET TRANSACTION is as follows:

SET TRANSACTION [READ WRITE | READ ONLY];

2) Database schema:

Database schema is nothing but a formal definition of how the data is going to be organized
in a DB. To test it:

 Identify the requirements based on which the database operates. Sample requirements:

 Primary keys to be created before any other fields are created.
 Foreign keys should be completely indexed for easy retrieval and searching.
 Field names starting or ending with certain characters.
 Fields with a constraint that certain values can or cannot be inserted.

 Use one of the following, according to relevance:

 The SQL query DESC <table name> to validate the schema.
 Regular expressions for validating the names of the individual fields and their values.
 Tools like SchemaCrawler.
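As a sketch of the schema checks above, the snippet below uses SQLite's PRAGMA table_info in place of DESC <table name>, plus a regular expression for a field-naming rule. The table, columns and naming convention are invented for illustration:

```python
import re
import sqlite3

# Schema validation sketch: verify a primary key exists and that field names
# follow a (hypothetical) lowercase_with_underscores convention.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    amount REAL
)""")

# PRAGMA table_info plays the role of DESC <table name> in SQLite:
# each row is (cid, name, type, notnull, dflt_value, pk).
columns = conn.execute("PRAGMA table_info(orders)").fetchall()
names = [col[1] for col in columns]
pk_cols = [col[1] for col in columns if col[5]]

has_pk = pk_cols == ["order_id"]                             # primary-key requirement
names_ok = all(re.fullmatch(r"[a-z_]+", n) for n in names)   # naming requirement
print(has_pk, names_ok)
```

The same idea extends to any rule from the requirements list: query the catalog, then assert the rule against what comes back.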

3) Trigger:

Triggers are a special type of stored procedure. Instead of being executed by the user, triggers are
executed by the database server when certain operations are performed on a table:
 An insert trigger runs whenever a new record is inserted into a table.
 A delete trigger runs whenever an existing record is deleted from a table.
 An update trigger runs whenever an existing record in a table is changed.
A trigger is a stored procedure that is invoked when a particular event occurs. For example, you
can install a stored procedure that is triggered each time a record is deleted from a transaction
table and that automatically deletes the corresponding customer from a customer table when all
his transactions are deleted. Note that triggers can slow down operations, even queries for which
they are not needed.

1. Insert triggers: The following insert trigger updates a record in the employee table whenever a
new row is inserted:

CREATE TRIGGER emp_trigger ON employee
FOR INSERT
AS
UPDATE employee
SET Name = 'New'
WHERE E_id = '01'

2. Delete triggers: This trigger protects against any delete action on the employee table when the
employee name is TEKLE and the id is '02':

CREATE TRIGGER emp_del ON employee
FOR DELETE
AS
IF EXISTS (SELECT * FROM deleted WHERE Name = 'TEKLE' AND E_id = '02')
BEGIN
    PRINT 'Cannot remove Employee From the Table'
    PRINT 'Transaction has been cancelled'
    ROLLBACK TRANSACTION
END

USE exercise
DELETE FROM employee WHERE E_id = '02'

If you execute this code, you should see the error messages:
Cannot remove Employee From the Table
Transaction has been cancelled

3. Update Triggers
CREATE TRIGGER Upcnum ON Customer
FOR UPDATE
AS
IF UPDATE(cname)
BEGIN
    PRINT 'Cannot Update cname From Customer'
    PRINT 'Transaction Has Been Cancelled'
    ROLLBACK TRANSACTION
END

UPDATE Customer
SET cname = 'xx' WHERE cid = 1

If you execute the above UPDATE command, the following error message will be displayed:
Cannot Update cname From Customer
Transaction Has Been Cancelled

Ex. 2. Assume the number of products in stock is 530 for Product Id = 2, and an update trigger
(not shown) on the products table cancels any update that would oversell the product:

USE sales
UPDATE products
SET Instok = (Instok - 600)
WHERE Product_Id = 2

If you try executing this code, you should see the following error messages:
Cannot over sell product
Transaction has been cancelled

Ex. 3.
CREATE TRIGGER checkPN ON customers
FOR UPDATE
AS
IF UPDATE(phone)
BEGIN
    PRINT 'Cannot change phone numbers'
    PRINT 'Transaction has been cancelled'
    ROLLBACK TRANSACTION
END

Note: The IF UPDATE statement can be used in insert triggers as well as in update triggers.

When a certain event takes place on a certain table, a piece of code (a trigger) can be set up to
execute automatically.

For example, a new student joins a school and takes two classes: math and science. The student
is added to the student table. A trigger could then add the student to the corresponding subject
tables once he is added to the student table.

The common method of testing is first to execute the SQL query embedded in the trigger
independently and record the result, then execute the trigger as a whole and compare the results.

These are tested during both the black box and white box testing phases.

Black-box testing is a way of testing software without much knowledge of its internal workings.
Black box testing is often referred to as behavioral testing, in the sense that you want to test how
the software behaves as a whole. It is usually done with the actual users of the software in mind,
who usually have no knowledge of the actual code.

White box (aka clear box) testing, on the other hand, tests the structural internals of the code – it
gets down to the for loops, if statements, etc. It allows one to peek inside the 'box'. Tasks typical
of white box testing include boundary tests, use of assertions, and logging.

 White box testing: The basic idea is to test the DB alone, even before the
integration with the front end (UI) is made.
 Black box testing:
a) Since the UI and DB integration is now available, we can insert/delete/update data
from the front end in a way that invokes the trigger. Following that, SELECT
statements can be used to retrieve the DB data to see whether the trigger was
successful in performing the intended operation.
b) A second way to test this is to directly load the data that would invoke the trigger and
see whether it works as intended.
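The student example and the trigger test method above can be combined into one short sketch. It uses SQLite trigger syntax (which differs from SQL Server's) and invented table names:

```python
import sqlite3

# Trigger test sketch: a trigger auto-enrols each new student in math, and
# the test verifies the trigger fired by querying the subject table directly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE math_class (student_id INTEGER);
CREATE TRIGGER enrol_math AFTER INSERT ON student
BEGIN
    INSERT INTO math_class (student_id) VALUES (NEW.id);
END;
""")

# First, record what the trigger's embedded SQL should produce on its own
expected = [(1,)]
# Then perform the operation that fires the trigger as a whole, and compare
conn.execute("INSERT INTO student (id, name) VALUES (1, 'Abebe')")
actual = conn.execute("SELECT student_id FROM math_class").fetchall()
print(actual == expected)
```

This mirrors the directly-load-the-data approach: the insert is issued straight against the table, and the trigger's side effect is validated with a SELECT.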

4) Stored Procedures:

Stored procedures are more or less similar to user-defined functions. They can be invoked by a
call procedure/execute procedure statement, and the output is usually in the form of result sets.

A stored procedure is a set of SQL commands that can be compiled and stored in the server.
Once this has been done, clients don't need to keep reissuing the entire query but can refer to the
stored procedure. This provides better performance because the query has to be parsed only once,
and less information needs to be sent between the server and the client. You can also raise the
conceptual level by having libraries of functions in the server.

A stored procedure is an SQL program stored inside the database. Stored procedures can offer
performance gains when used instead of regular queries. Stored procedures differ from ordinary
SQL statements and from batches of SQL statements in that they are precompiled.
The first time you run a procedure, SQL Server's query processor analyzes it and prepares an
execution plan that is ultimately stored in a system table. Subsequent executions of the
procedure follow the stored plan. Stored procedures are extremely similar to the
constructs seen in other programming languages.

Defining Stored Procedures

We use the CREATE PROCEDURE statement to create a stored procedure. The maximum
stored procedure name length is thirty characters.
The syntax to define a new procedure is as follows:
CREATE PROCEDURE [procedure_name]
AS
code

Creating and Running a Stored Procedure


E.g. 1
CREATE PROCEDURE sp_allemployees
AS SELECT * FROM employees
GO

This stored procedure is called "sp_allemployees". All it contains is a SELECT statement. All
stored procedures that SQL Server provides start with "sp_" (and "xp_" for extended stored
procedures). If you call a stored procedure whose name starts with "sp_", SQL Server will first
search the MASTER database before searching the current database.
You execute a stored procedure by typing its name or using the EXECUTE statement. To
execute our stored procedure you can type:
Execute [procedure name][parameters].

To execute the procedure we write above, we will write:

EXECUTE sp_allemployees

This will execute the stored procedure and return the results. If you are calling this procedure
from an ASP page (or other client) you can use the EXECUTE statement as your SQL string to
execute. In this case, our stored procedure will return a record set.
When you submit a stored procedure to the system, SQL Server compiles and verifies the
routines within it. If any problems are found, the procedure is rejected and you'll need to
determine what the problem is prior to re-submitting the routine.

Creating a Stored Procedure with Input Parameters


E.g. 2
CREATE PROCEDURE proc1 (@p1 int, @p2 char(20), @p3 char(5))
AS
INSERT INTO Orders
VALUES (@p1, @p2, @p3)
GO
EXECUTE proc1 456, 'Scanner', '01'
GO
SELECT * FROM Orders
WHERE product = 'Scanner';

E.g.3 let us assume that we have a table named Inventory:

This information is updated in real-time and store managers are constantly checking the levels of
products stored at their store house and available for shipment. In the past, each manager would
run queries similar to the following:

SELECT Product, Quantity
FROM Inventory
WHERE Storehouse = 'FL'

This resulted in a very inefficient performance at the SQL Server. Each time a store manager
executed the query, the database server was forced to recompile the query and execute it from
scratch. It also required the store manager to have knowledge of SQL and appropriate
permissions to access the table information. We can simplify this process through the use of a
stored procedure. Let's create a procedure called sp_GetInventory that retrieves the inventory
levels for a given storehouse. Here's the SQL code:

CREATE PROCEDURE sp_GetInventory
@location CHAR(2)
AS
SELECT Product, Quantity
FROM Inventory
WHERE Storehouse = @location

A store manager can then simply run: EXECUTE sp_GetInventory 'FL'

Making Changes and Dropping Stored Procedures


Two closely related tasks that you'll no doubt have to perform are making changes to existing
stored procedures and removing no longer used stored procedures.

Changing an Existing Stored Procedure

Stored procedures cannot be modified in place, so you're forced to first drop the procedure, and
then create it again. Although SQL Server can produce the code that was used to create the
stored procedure, you should always maintain a backup copy. You can pull the text associated
with a stored procedure by using the sp_helptext system stored procedure.

The syntax of sp_helptext is as follows:

sp_helptext procedure_name

Removing Existing Stored Procedures

We use the DROP PROCEDURE statement to drop a stored procedure that is created. Multiple
procedures can be dropped with a single DROP PROCEDURE statement by listing multiple
procedures separated by commas after the keywords DROP PROCEDURE in the syntax:

DROP PROCEDURE procedure_name_1,..., procedure_name_n

These are also tested during:

 White box testing: Stubs are used to invoke the stored procedures, and the results
are validated against the expected values.
 Black box testing: Perform an operation from the front end (UI) of the application and
check for the execution of the stored procedure and its results.

5. Field constraints – Default value, unique value and foreign key:

Constraints are the rules enforced on the data columns of a table. They are used to limit the type
of data that can go into a table, which ensures the accuracy and reliability of the data in the database.
Constraints can be column level or table level: column-level constraints are applied only to
one column, whereas table-level constraints are applied to the whole table.
The following constraints are commonly available in SQL:
 NOT NULL Constraint: ensures that a column cannot have a NULL value.
 DEFAULT Constraint: provides a default value for a column when none is specified.
 UNIQUE Constraint: ensures that all values in a column are different.
 PRIMARY Key: uniquely identifies each row/record in a database table.
 FOREIGN Key: references a row/record in another database table.
 CHECK Constraint: ensures that all values in a column satisfy certain conditions.
 INDEX: used to create and retrieve data from the database very quickly.

DEFAULT Constraint:
The DEFAULT constraint provides a default value to a column when the INSERT INTO
statement does not provide a specific value.

Example:
For example, the following SQL creates a new table called CUSTOMERS with five columns.
Here, the SALARY column defaults to 5000.00, so if an INSERT INTO statement does not
provide a value for this column, it is set to 5000.00 by default.

CREATE TABLE CUSTOMERS(
ID INT NOT NULL,
NAME VARCHAR (20) NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR (25) ,
SALARY DECIMAL (18, 2) DEFAULT 5000.00,
PRIMARY KEY (ID)
);

If the CUSTOMERS table has already been created, then to add a DEFAULT constraint to the
SALARY column, you would write a statement similar to the following:

ALTER TABLE CUSTOMERS
MODIFY SALARY DECIMAL (18, 2) DEFAULT 5000.00;

Drop Default Constraint:

To drop a DEFAULT constraint, use the following SQL:

ALTER TABLE CUSTOMERS
ALTER COLUMN SALARY DROP DEFAULT;
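A quick way to test a DEFAULT constraint is to insert a row that omits the column and read the stored value back. The sketch below does this in SQLite (whose syntax differs slightly from the examples above); the trimmed table mirrors the CUSTOMERS example:

```python
import sqlite3

# DEFAULT-constraint test: omit SALARY on insert, then confirm 5000.00 was stored.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    salary REAL DEFAULT 5000.00
)""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Rahel')")
salary = conn.execute("SELECT salary FROM customers WHERE id = 1").fetchone()[0]
print(salary)  # 5000.0 -- the default was applied
```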

UNIQUE Constraint:
The UNIQUE constraint prevents two records from having identical values in a particular
column. In the CUSTOMERS table, for example, you might want to prevent two or more people
from having an identical age.

Example:
For example, the following SQL creates a new table called CUSTOMERS with five
columns. Here, the AGE column is set to UNIQUE, so you cannot have two records with the same
age:

CREATE TABLE CUSTOMERS (
ID INT NOT NULL,
NAME VARCHAR (20) NOT NULL,
AGE INT NOT NULL UNIQUE,
ADDRESS CHAR (25) ,
SALARY DECIMAL (18, 2),
PRIMARY KEY (ID)
);
If the CUSTOMERS table has already been created, then to add a UNIQUE constraint to the AGE
column, you would write a statement similar to the following:

ALTER TABLE CUSTOMERS
MODIFY AGE INT NOT NULL UNIQUE;

You can also use following syntax, which supports naming the constraint in multiple columns as
well:

ALTER TABLE CUSTOMERS
ADD CONSTRAINT myUniqueConstraint UNIQUE(AGE, SALARY);

DROP a UNIQUE Constraint:

To drop a UNIQUE constraint, use the following SQL:

ALTER TABLE CUSTOMERS
DROP CONSTRAINT myUniqueConstraint;

If you are using MySQL, then you can use the following syntax:

ALTER TABLE CUSTOMERS
DROP INDEX myUniqueConstraint;
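Testing the UNIQUE constraint means attempting an insert that violates the rule and confirming it is rejected. A minimal SQLite sketch for illustration, using the AGE rule from the example:

```python
import sqlite3

# UNIQUE-constraint test: a second row with the same age must be rejected.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, age INTEGER NOT NULL UNIQUE)")
conn.execute("INSERT INTO customers VALUES (1, 25)")
try:
    conn.execute("INSERT INTO customers VALUES (2, 25)")  # duplicate age
    violation_rejected = False
except sqlite3.IntegrityError:
    violation_rejected = True
print(violation_rejected)
```

A passing test here means the database, not just the application code, enforces the rule.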

FOREIGN Key:

A foreign key is a key used to link two tables together; it is sometimes called a referencing
key.
A foreign key is a column or a combination of columns whose values match a primary key in a
different table.
The relationship between two tables matches the primary key in one of the tables with a
foreign key in the second table.
If a table has a primary key defined on any field(s), then you cannot have two records with the
same value in that field(s).

Example:

Consider the structure of the two tables as follows:


CUSTOMERS table:
CREATE TABLE CUSTOMERS(
ID INT NOT NULL,
NAME VARCHAR (20) NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR (25) ,
SALARY DECIMAL (18, 2),
PRIMARY KEY (ID)
);
ORDERS table:
CREATE TABLE ORDERS (
ID INT NOT NULL,
DATE DATETIME,
CUSTOMER_ID INT references CUSTOMERS(ID),
AMOUNT double,
PRIMARY KEY (ID)
);
If ORDERS table has already been created, and the foreign key has not yet been set, use the
syntax for specifying a foreign key by altering a table.
ALTER TABLE ORDERS
ADD FOREIGN KEY (Customer_ID) REFERENCES CUSTOMERS (ID);

DROP a FOREIGN KEY Constraint:

To drop a FOREIGN KEY constraint, use the following SQL:

ALTER TABLE ORDERS
DROP FOREIGN KEY;

 Perform a front-end operation which violates the database object's constraint.
 Validate the results with an SQL query.

Checking the default value for a certain field is quite simple; it is part of business rule
validation. You can do it manually or use other options. Manually, you can perform an action
from the front end that adds a value other than the default value into the field and see if it
results in an error.

Checking the unique value can be done exactly the way we did for the default values. Try
entering values from the UI that will violate this rule and see if an error gets displayed.

For foreign key constraint validation, use data loads that directly input data violating the
constraint and see whether the application rejects it. Along with the back-end data load,
perform front-end UI operations that would violate the constraints and see whether the relevant
error is displayed.
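The back-end data-load check described above can be sketched as follows, using SQLite (where foreign-key enforcement must be switched on explicitly) and table shapes trimmed from the CUSTOMERS/ORDERS example:

```python
import sqlite3

# FOREIGN KEY test: loading a row that references a missing customer must fail.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),
    amount REAL
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Rahel')")
conn.execute("INSERT INTO orders VALUES (1, 1, 99.50)")   # valid reference
try:
    conn.execute("INSERT INTO orders VALUES (2, 42, 10.00)")  # no customer 42
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)
```

If the violating row is accepted, the constraint is missing or disabled and the test should fail loudly.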

1.2 Determine the software life cycle based on work principles.

The system development life cycle is the overall process of developing, implementing, and
retiring information systems through a multistep process from initiation, analysis, design,
implementation, and maintenance to disposal. There are many different SDLC models and
methodologies, but each generally consists of a series of defined steps or phases.
The software life cycle refers to the period of time in which a software product is in use, from
design until the termination of its use. The software life cycle depends on many factors, including
technology progression, the changing needs of an organization and the usefulness of the
software after a period of time.

Lifecycle is the period of time the software or database can reasonably be expected to be in use.
The Database Lifecycle is a subset of the Systems Development Lifecycle. It occurs within
the development of the entire system. The database lifecycle begins with a problem within the
context of an organization.

Many of the processes parallel each other.

The Systems Development Lifecycle – Diagrams the steps involved in the SDLC.
The Database Lifecycle – Diagrams the steps involved in the DBLC.
Requirements Analysis – Outlines the planning and analysis phase of database development.
This phase identifies the problems that exist in a particular context and the users' requirements
of the system.
Database Design – Describes the process of analyzing the information needs of an organization
and developing a conceptual data model that reflects those needs.
Implementation – This phase involves the coding, testing and debugging of the programs
associated with the database files, along with the creation of the files themselves.
Testing and Evaluation – Involves testing the various database components and fine-tuning the
database.
Maintenance – Changes to the system generate the various maintenance activities.
The Systems Development Lifecycle

The Systems Development Lifecycle is a methodology used in the development and maintenance
of any information system, whether to be transactional, management information, decision
support or an expert system.

The Database Design Lifecycle


The Database Development Lifecycle parallels the Systems Development Lifecycle in the
activities that are performed. However, the tasks within those activities are specific to the
planning, analysis, development and maintenance of a database system.

Requirements Analysis

The Requirements Analysis phase outlines the planning and initial analysis phase of database
development. This phase identifies the problems that exist in a particular context and the users'
requirements of the system.

The main activities of this phase are listed below.

 Project Identification, Selection and Planning


 Purpose - understand the needs of the organization that prompted the request for an
information system solution
 Purpose - identify and describe the data required by the organization.

Database Design

Database design is the process of analyzing the information needs of an organization and
developing a conceptual data model that reflects those needs. It can also be described as a
process that develops database structures and how they interact from the user's requirements for
data.

The process starts with the Requirements Analysis which identifies the users' needs and then
translates those needs into Conceptual, then Logical and finally Physical design.

Implementation

A new database implementation requires the creation of special storage constructs depending on
the particular DBMS. Oracle, Microsoft SQL server, and DB2 require constructs such as storage
group and table space to be created. Other DBMS may automatically create these. For each
DBMS there are a particular set of steps to follow.

Testing and Evaluation

Testing and evaluation of the system takes place throughout systems development. The reason
behind testing is to turn up unforeseen problems prior to implementation.

Maintenance

Maintenance occurs throughout the life of the database. Apart from routine tasks that keep the
information system running and useful, there will be a constantly changing environment in which
the database is operating. The necessity to add additional tables, reports, and applications will
evolve out of the changing needs of the organization for which it was built.

Evolve Database
Relating the software and database life cycle
A life cycle model captures the development, deployment, and maintenance of a typical software
element.
 Development is the construction phase.
 Deployment is a formal release of the software.
 Maintenance consists of testing and tracking activities.
Different models may apply to the various stages as well as to different software components.
Formal models become crucial as a risk reduction method for complex systems.
Waterfall and spiral life cycle approaches are common.
 The waterfall method consists of discrete steps.
 Spiral model is iterative.

1.3-Define test plan and appropriate test tools

The Test plan

The test plan is a mandatory document. You can’t test without one. For simple, straightforward
projects the plan doesn’t have to be elaborate, but it must address certain items. As identified by
the “American National Standards Institute and Institute for Electrical and Electronic Engineers
Standard 829/1983 for Software Test Documentation”, the following components should be
covered in a software test plan.

Items Covered by a Test Plan

Responsibilities: Specific people and their assignments. Purpose: assigns responsibilities and
keeps everyone on track and focused.

Assumptions: Code and systems status and availability. Purpose: avoids misunderstandings
about schedules.

Test: Testing scope, schedule, duration, and prioritization. Purpose: outlines the entire process
and maps specific tests.

Communication: Communications plan (who, what, when, how). Purpose: everyone knows what
they need to know when they need to know it.

Risk Analysis: Critical items that will be tested. Purpose: provides focus by identifying areas
that are critical for success.

Defect Reporting: How defects will be logged and documented. Purpose: tells how to document
a defect so that it can be reproduced, fixed, and retested.

Environment: The technical environment, data, work area, and interfaces used in testing.
Purpose: reduces or eliminates misunderstandings and sources of potential delay.

A test plan can be a formal, detailed document that describes:


 Scope, objectives, and the approach to testing,
 People and equipment dedicated/allocated to testing
 Tools that will be used
 Dependencies and risks
 Categories of defects
 Test entry and exit criteria
 Measurements to be captured
 Reporting and communication processes
 Schedules and milestones

There are several phases to the testing process, which are described below.

Phase 1 - Program Testing

This phase of testing includes:


 unit testing - testing of each unit of code for errors in syntax and structure
 module testing - testing of various modules to ensure they work together
In this phase of testing, the programmer creates test data. Test data should include valid test data
as well as invalid test data. Invalid test data would include incorrect data in format and content
that should produce errors. This is done to ensure the program works as planned and to flush out
any programming errors.
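As a hedged illustration of valid versus invalid test data, the sketch below unit-tests a hypothetical input-validation routine; the function and its rules are invented for the example:

```python
# A hypothetical unit under test: validates an order-quantity field.
def validate_quantity(value):
    """Return True only if value is an integer between 1 and 999."""
    return isinstance(value, int) and 1 <= value <= 999

# Valid test data: correct in format and content, should be accepted.
for good in (1, 50, 999):
    assert validate_quantity(good), f"valid input rejected: {good}"

# Invalid test data: incorrect format or content, should produce an error.
for bad in (0, -5, 1000, "ten", None):
    assert not validate_quantity(bad), f"invalid input accepted: {bad}"

print("all test data checks passed")
```

The point is that each unit is exercised with both data that should pass and data that should fail, so both the normal path and the error path are proven.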

Phase 2 - Link Testing

In this phase of testing, programs that work independently of one another are tested together
using test data.
Testing in this phase checks whether the system can handle normal transaction processing.
If it can, variations of the test data are added, including invalid data, to ensure the system can
detect errors. Testing in this phase generally takes many passes.

Phase 3 - Full System Testing with Test Data

In this phase, testing of the entire system is undertaken. At this stage outside users may become
involved. This testing phase, also called "alpha" testing, still uses test data.
This phase of testing should test for errors, timeliness, ease of use, correct ordering of
transactions and acceptable down time.
In addition, there are some factors that need to be considered when testing the system with test
data:
 seeing whether users have adequate documentation to correctly use the system
 checking that procedural and other manuals are written clearly and accurately for users
to follow
 determining whether output is correct and whether users understand what the output
should be and how it should look

Phase 4 - Full System Testing with Live Data


In this phase, also called "beta" testing, testing is done with live data, that is, data that has been
successfully processed through the existing system, but not through the one being tested. In this
phase, a comparison of the new system's output to the old system's output should indicate
whether the new system is operating as it was designed.
Again, factors to look out for are ease of use, user interaction to system feedback, including error
messages, system response time, how users view the system. Also tested is accuracy and
readability of the information contained in user and procedure manuals.

Phase 5 - Auditing or QA Testing

Auditing is another way to ensure the system functions as it was designed to function. In this
phase an expert who has not been involved in setting up and using the system tests the system to
determine its reliability.
Auditors that work for the organization that created the system are called "internal auditors"
whereas those hired outside the organization are "external auditors". Internal auditors test the
controls, including security controls, to ensure the system does what it should. External auditors
are used when the system processes data that influences the organization’s financial statements.

The most common test tools are categorized into three types:

1. Unit test

Series of stand-alone tests are conducted during Unit Testing. Each test examines an individual
component that is new or has been modified. A unit test is also called a module test because it
tests the individual units of code that comprise the application.

 Testing a single unit (or module) of code


 A unit test calls the methods of a class, passing suitable parameters, and verifies that
the returned value is what you expect.
 You can code unit tests by hand.
 Database unit test to test stored procedures, functions, triggers and any other types of
database object.
 Unit tests focus on functionality and reliability, and the entry and exit criteria can be
the same for each module or specific to a particular module.
 Unit testing is done in a test environment prior to system integration. If a defect is
discovered during a unit test, the severity of the defect will dictate whether or not it
will be fixed before the module is approved.
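A database unit test of this kind can be sketched with Python's unittest and sqlite3 modules. SQLite has no stored procedures, so this hypothetical example tests a trigger instead; all object names are illustrative:

```python
import sqlite3
import unittest

class AuditTriggerTest(unittest.TestCase):
    """Unit test for a single database object: an insert-audit trigger."""

    def setUp(self):
        # A fresh in-memory database for every test keeps tests isolated.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript("""
            CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL);
            CREATE TABLE audit (product_id INTEGER, action TEXT);
            CREATE TRIGGER trg_product_insert AFTER INSERT ON product
            BEGIN
                INSERT INTO audit VALUES (NEW.id, 'INSERT');
            END;
        """)

    def test_insert_writes_audit_row(self):
        self.conn.execute("INSERT INTO product VALUES (1, 9.99)")
        rows = self.conn.execute("SELECT * FROM audit").fetchall()
        self.assertEqual(rows, [(1, "INSERT")])

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AuditTriggerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method calls the database object with suitable inputs and verifies that the returned or stored values are what you expect, which is exactly the pattern described above.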

2. System Test
 System Testing tests all components and modules that are new, changed, affected by
a change, or needed to form the complete application.
 The system test may require involvement of other systems but this should be
minimized as much as possible to reduce the risk of externally-induced problems.
 Testing the overall requirements of the system
3. Integration Test
 Integration testing examines all the components and modules that are new, changed,
affected by a change, or needed to form a complete system.
 Testing of interfaces between subsystems
 Integration testing has a number of sub-types of tests that may or may not be used,
depending on the application being tested or expected usage patterns.

i. Compatibility Testing – Compatibility tests ensure that the application
works with differently configured systems based on what the users have or
may have. When testing a web interface, this means testing for compatibility
with different browsers and connection speeds.

ii. Performance Testing – Performance tests are used to evaluate and
understand the application’s scalability when, for example, more users are
added or the volume of data increases. This is particularly important for
identifying bottlenecks in high usage applications. The basic approach is to
collect timings of the critical business processes while the test system is
under a very low load (a ‘quiet box’ condition) and then collect the same
timings with progressively higher loads until the maximum required load is
reached. For a data retrieval application, reviewing the performance pattern
may show that a change needs to be made in a stored SQL procedure or that
an index should be added to the database design.

iii. Stress Testing – Stress Testing is performance testing at higher than normal
simulated loads. Stressing runs the system or application beyond the limits of
its specified requirements to determine the load under which it fails and how
it fails. A gradual performance slow-down leading to a non-catastrophic
system halt is the desired result, but if the system will suddenly crash and
burn it’s important to know the point where that will happen. Catastrophic
failure in production means beepers going off, people coming in after hours,
system restarts, frayed tempers, and possible financial losses. This test is
arguably the most important test for mission-critical systems.

iv. Load Testing – Load tests are the opposite of stress tests. They test the
capability of the application to function properly under expected normal
production conditions and measure the response times for critical
transactions or processes to determine if they are within limits specified in
the business requirements and design documents or that they meet Service
Level Agreements. For database applications, load testing must be executed
on a current production-size database. If some database tables are forecast to
grow much larger in the foreseeable future then serious consideration should
be given to testing against a database of the projected size.
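The timing-under-increasing-load approach described above can be sketched as follows. This is an illustrative Python/SQLite example, not a production load test; the table, row counts, and query are invented:

```python
import sqlite3
import time

# Time a critical query at progressively higher data volumes to see how
# response time grows with load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

for batch in (1_000, 10_000, 100_000):
    # Add another batch of rows, simulating growth in data volume.
    conn.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((i * 0.5,) for i in range(batch)),
    )
    rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    # Time the critical business query at this load level.
    start = time.perf_counter()
    total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"rows={rows:>7}  sum-query time={elapsed_ms:.2f} ms")
```

Reviewing how the timing column grows as the row count rises is what reveals whether a query, procedure, or missing index will become a bottleneck.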

1.4-Recognize and separate systems into runnable modules mirroring live scenarios.

Test Scenario: - A chronological record of the details of the execution of a test script.
 Captures the specifications, tester activities, and outcomes. Used to identify defects.

The Modular Approach

The design of large databases is usually handled by a modular approach. That is the database
project is divided into discrete functional sub-systems or modules. For example a database
project for a retail operation may be divided into warehouse, sales and finance.
Each of these databases would be designed separately with attention given to the transfer of data
between the sub-systems.
The sectioning of the database may follow different operations or processes within the
organization. (A data flow diagram is a handy tool in defining these sub-systems). Different
members of the design and development team may be responsible for a different sub-system.

There are a number of advantages to a modular approach.

 It simplifies the design process: - A large database may have a great many entities. Each
design sub-system contains a fewer number of entities. This tends to be more manageable
for handling tasks such as determining attributes belonging to entities, primary keys,
foreign keys and the relationships between entities.
 It delegates sections to design groups within team:-This speeds up development as
several parts of the database are designed in parallel.
 It is prototyped quickly: - Prototyping allows the identification of potential trouble spots
in application development and in implementation.
 One or more modules may be implemented before final completion: - This
demonstrates progress and also serves end users by having a portion of the entire system
up and running early. Large projects may take several years to implement fully.

1.5-Gather and prepare logs and result sheets

Test Case: - A document that defines a test item and specifies a set of test inputs or data,
execution conditions, and expected results. The inputs/data used by a test case should be both
normal and intended to produce a ‘good’ result and intentionally erroneous and intended to
produce an error. A test case is generally executed manually but many test cases can be
combined for automated execution.
It is a good idea to have some performance test cases designed for each release to ensure that
each release maintains adequate performance. It contains the following sections:
 Before Release / After Release – Notice that you will record timings before the code is
merged and after, that way you can determine if the release is faster or slower than
before.
 Preparation – To correctly achieve timings, you must clear your cache, reboot your
machine, etc., before doing the timings.
 Area Testing – Test each critical area of your software, logged in with different accounts
(accounts with small amounts of data, accounts with large amounts of data, etc).
1.6-Announce scheduled test to ensure preparedness and understanding of
implications for operations.
Schedule: A sequence of test runs and resets. The test runs and reset operations are carried out
one at a time; there is no concurrency in the framework.

Maintenance Task Sequence

There is some sequence of steps which should be followed to achieve the maintenance task in
SQL server Express Edition.
 SQL Script - There are some SQL scripts for each process (Backup, Index, etc.,)
which will perform the maintenance task against the database.
 Batch Process File - A Batch file is available which will invoke all the TSQL scripts
and execute in order against the database.
 Windows Task Scheduler - A windows task scheduler is scheduled in the system
which will automatically perform the maintenance task using the batch process and
TSQL scripts.
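The same run-scripts-in-order pattern can be sketched in Python. This is a stand-in illustration using SQLite maintenance statements rather than the T-SQL scripts the text describes; the script names and contents are assumptions:

```python
import sqlite3

# Maintenance scripts run in a fixed order, mimicking the batch process that
# invokes each SQL script in turn against the database.
MAINTENANCE_SCRIPTS = [
    ("integrity check", "PRAGMA integrity_check;"),
    ("rebuild statistics", "ANALYZE;"),
    ("reclaim space", "VACUUM;"),
]

def run_maintenance(db_path):
    conn = sqlite3.connect(db_path)
    completed = []
    for name, sql in MAINTENANCE_SCRIPTS:
        conn.executescript(sql)  # each script executes in order
        completed.append(name)
        print(f"completed: {name}")
    conn.close()
    return completed

# In production this function would be invoked by a scheduler (e.g. Windows
# Task Scheduler or cron) rather than called by hand.
done = run_maintenance(":memory:")
```

The scheduler supplies the automation; the script supplies the ordered sequence, matching the three-part arrangement described above.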

1.7-Prepare test script (online test) or test run (batch test) for running

Test Case Creation and Traceability

When creating test cases, make sure you create solid steps so that the person running the test case
will fully understand how to run the test case:
 Always tie Test Cases to one or more Requirements to ensure traceability.
 Always name the test case Title as the Requirement Number, plus the section of the
Requirement, plus a description of the test case.
 Always include Login info to let the tester know what application to run
 Make sure test case is assigned to the correct person
 Make sure the Test Type is correct
 Make sure the status is correct.
 Make sure you have great STEPS

Test Script:-Step-by-step procedures for using a test case to test a specific unit of code,
function, or capability.
Test Run: - A series of logically related groups of test cases or conditions.
 A sequence of requests that are always executed in the same order. For instance, a test
run tests a specific business process that is composed of several actions (login, view
product catalog, place order, specify payment, etc.).
 The test run is the unit in which failures are reported. It is assumed that the test database
is in a known state at the beginning of the execution of a test run. During the execution of
a test run the state may change due to the execution of requests with side effects.
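A test run as an ordered sequence of requests might be sketched like this; the business actions are illustrative stubs, and the shared state dictionary stands in for the test database state that may change through side effects:

```python
# A test run: logically related actions always executed in the same order,
# modeling one business process (login -> view catalog -> place order).
def login(state):
    state["user"] = "tester"

def view_catalog(state):
    assert state.get("user"), "must be logged in before browsing"
    state["catalog"] = ["widget", "gadget"]

def place_order(state):
    state["order"] = state["catalog"][0]

TEST_RUN = [login, view_catalog, place_order]

def execute_test_run(steps):
    state = {}       # starts in a known state; steps mutate it via side effects
    failures = []    # the test run is the unit in which failures are reported
    for step in steps:
        try:
            step(state)
        except AssertionError as exc:
            failures.append((step.__name__, str(exc)))
    return state, failures

state, failures = execute_test_run(TEST_RUN)
print("failures:", failures)
```

Because the steps always execute in the same order, any failure can be traced to a specific point in the business process.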

1.8-Review expected results against acceptance criteria (walkthrough) and
system requirements documentation

Test Case Review


Once all test cases are defined for a Requirement, the tester should setup a meeting to review all
the test cases for the Requirement with their team members (programmers, project manager,
other testers, etc). To maximize use of the meeting, bring a projector and display the test case
listing screen. If all team members review the test cases ahead of time (and make notes of
missing test cases, test cases that were not clear, etc), this meeting can be done in about 30
minutes. If everyone is not prepared, it can take hours.
In the meeting, go to the Test Library listing screen and filter on the specific Requirement in
which test cases are being reviewed. Then go through each test case, to ensure that each test case
is understood by the team members, that each test case is adequate, that there are no missing test
cases for negative tests, etc.
If any new test cases are needed, or if any test cases should be changed, update them on the spot.
Once completed, mark the Requirement as Test Case Reviewed (custom field) in your
Requirements area; this signifies that all test cases for the requirement have been fully reviewed
and approved.

What is a Requirements Document?

A requirements document explains why a product is needed, puts the product in context, and
describes what the finished product will be like. A large part of the requirements document is the
formal list of requirements.
Requirements documents usually include user, system, and interface requirements; other classes
of requirements are included as needed.
 User requirements are written from the point of view of end users, and are generally
expressed in narrative form:
 System requirements are detailed specifications describing the functions the system
needs to do. These are usually more technical in nature
 Interface requirements specify how the interface (the part of the system that users see
and interact with) will look and behave.

Writing the Document


Armed with your list of prioritized, classified requirements, along with any other pertinent
project-related documents, you are ready to write the requirements document.

The documentation includes:

 The introduction. The document should start with an introduction which explains the
purpose of the requirements document and how to use it. State any technical background
that is needed to understand the requirements and point less-technical readers to overview
sections which will be easier to understand. Explain the scope of the product to define
boundaries and set expectations. Include the business case for the product (why is it a
good idea?). Finally, include a summary or a list of the most important requirements, to
give a general understanding of the requirements and focus attention on the most critical
ones.
 General description. Devote the next section of the document to a description of the
product in nontechnical terms. This is valuable for readers who want to know what the
project is about without wading through the technical requirements. Include a narrative
which gives readers an idea of what the product will do. Identify the product perspective:
the need(s) the product will serve, the primary stakeholders, and the developers. Describe
the main functions of the product (what activities will users be able to do with it?) and
describe typical user characteristics. Explain any constraints you will face during
development, and list your assumptions and any technical dependencies.
 Specific requirements. Include your requirements list here, divided into whatever
groupings are convenient for you.
 Appendices. Include, if you wish, any source documents such as personas and scenarios,
transcripts of user interviews, and so on.
 Glossary. Explain any unusual terms, acronyms, or abbreviations (either technical or
domain-specific).
 References. List references and source documents, if any.
 Index. If your document is long, consider including an index.

Validating Requirements

Go back to end-users and customers with the draft requirements document and have them read it
(mostly the narrative sections). Ask if this is what they had in mind, or if they think any changes
are in order. It's much easier to make changes now, while your project is still words on paper, than
after development has begun. Take their comments and make changes; you may need to revisit
the previous stages (recording, organizing, and prioritizing), but it will be easier this time. The
vital thing is to come up with a requirements specification that meets the needs of the major
stakeholders. Repeat the process until there is agreement, and get representatives of each
stakeholder group to actually sign off on the document.

During Product Development

During development, refer to the document for guidance. Interface designers, instructional
designers, and developers should use it as a map to direct their efforts. If someone's work is
called into question, check with the requirements document to decide which direction to proceed.
This isn't to say that you have to treat it as inviolate. It's possible that small requirements changes
will continue to occur. Just be sure to get stakeholder agreement before altering the specification.

During Product Testing

Continue to check the ongoing development process against the requirements; does the emerging
product do what it ought? When the product is in testing, check it against the requirements again.
Was anything omitted? If you're having problems with a feature, check with the requirements
document to see if there is a way around the problem. Sometimes developers add features which
don't need to be there, and in a time or budget crunch, you can use the requirements specification
as a reason to put off an extra feature.
Information Sheet- EISDBA4-LG08
M08 LO2-IS02
Conduct test

2.1- Run test scripts and document results in line with test and acceptance processes.

After the test scripts are run in the test environment and the results are documented, the tester
can submit the results to the user and the organization. The results are then accepted according
to organizational rules and standards.

User Acceptance Testing (UAT)

User Acceptance Testing is also called Beta testing, application testing, and end-user testing.
Whatever you choose to call it, it’s where testing moves from the hands of the IT department
into those of the business users.
Software vendors often make extensive use of Beta testing, some more formally than others,
because they can get users to do it for free. By the time UAT is ready to start, the IT staff has
resolved in one way or another all the defects they identified. Regardless of their best efforts,
though, they probably don’t find all the flaws in the application.

 Representatives from the group work with Testing Coordinators to design and conduct
tests that reflect activities and conditions seen in normal business usage.
 Business users also participate in evaluating the results. This ensures that the application
is tested in real-world situations and that the tests cover the full range of business usage.
 Integration testing signoff was obtained
 Business requirements have been met or renegotiated with the Business Sponsor or
representative
 UAT test scripts are ready for execution
 The testing environment is established
 Security requirements have been documented and necessary user access obtained
 UAT has been completed and approved by the user community in a transition meeting
 Change control is managing requested modifications and enhancements
 Business sponsor agrees that known defects do not impact a production release—no
remaining defects are rated 3, 2, or 1
2.2-Perform required quality benchmarks or comparisons in readiness for
acceptance testing

Quality benchmarks are done after

 UAT testing has been completed and approved by all necessary parties ,
 Known defects have been documented
 Migration package documentation has been completed, reviewed, and approved by the
production systems manager
 Package migration is complete
 Installation testing has been performed and documented and the results have been signed
off

Production verification testing

 Production verification testing is a final opportunity to determine if the software is ready
for release. Its purpose is to simulate the production cutover as closely as possible and for
a period of time simulate real business activity.
 The application should be completely removed from the test environment and then
completely reinstalled exactly as it will be in the production implementation.
 Then mock production runs will verify that the existing business process flows,
interfaces, and batch processes continue to run correctly.
 Mock testing has been documented, reviewed, and approved
 All tests show that the application will not adversely affect the production environment
 A System Change Record with approvals has been prepared

2.3-Adopt organization/ industry standards, where appropriate.

After production verification testing, the organization can adopt the product in line with
industry standards and its own rules and regulations, where appropriate.

Prior to launching into User Acceptance Testing (UAT), supply your client with a document.
This document describes Installation Procedures, Defect Entry and Resolution, Roles and
Responsibilities, Drop Schedule, and describes any open Defects.
2.4-Compare actual results to expected results on completion of each system unit and
complete result sheets.

Regression testing

Regression testing is also known as validation testing and provides a consistent, repeatable
validation of each change to an application under development or being modified.
Each time a defect is fixed, the potential exists to inadvertently introduce new errors, problems,
and defects.
Regression testing is the selective retesting of an application or system that has been
modified to ensure that no previously working components, functions, or features fail as a result
of the repairs.
Regression testing is conducted in parallel with other tests and can be viewed as a quality control
tool to ensure that the newly modified code still complies with its specified requirements and that
unmodified code has not been affected by the change.
 It is important to understand that regression testing doesn’t test that a specific defect has
been fixed.
 Regression testing tests that the rest of the application up to the point of repair was not
adversely affected by the fix.

Entry criteria:
 The defect is repeatable and has been properly documented
 A change control or defect tracking record was opened to identify and track the
regression testing effort
 A regression test specific to the defect has been created, reviewed, and accepted

Exit criteria:
 Results of the test show no negative impact to the application
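A minimal regression-suite sketch follows, assuming a hypothetical discount function whose defect was just fixed; the suite re-runs cases that passed before the fix to confirm previously working behavior is unaffected. All names and cases are invented:

```python
# Unit under repair: the (fictional) defect was treating rate 10 as 0.10%.
def apply_discount(price, rate):
    """Return price after applying a percentage discount, rounded to cents."""
    return round(price * (1 - rate / 100), 2)

# Regression suite: cases that passed before the fix and must still pass.
REGRESSION_CASES = [
    ((100.0, 0), 100.0),   # no discount
    ((100.0, 10), 90.0),   # normal discount
    ((200.0, 25), 150.0),  # larger discount
]

failures = [
    (args, expected, apply_discount(*args))
    for args, expected in REGRESSION_CASES
    if apply_discount(*args) != expected
]
assert not failures, f"regression detected: {failures}"
print("regression suite passed")
```

Note that the suite deliberately does not test the fixed defect itself; a separate defect-specific test covers that, while this suite guards everything around it.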
