Data Base Administration Level IV: Shashemene Poly Technique College
Database testing usually consists of a layered process, including the user interface (UI) layer, the
business layer, the data access layer and the database itself. The UI layer deals with the interface
design of the database, while the business layer includes databases supporting
business strategies. The most critical layer is the data access layer, which deals with databases
directly during the communication process. Database testing mainly takes place at this layer and
involves testing strategies such as quality control and quality assurance of the product databases.
Testing at these different layers is frequently used to maintain the consistency of database systems.
Test environment: everything the tester must prepare for testing, including the technical
environment, data, work area, and interfaces used in testing, must be ready before testing is conducted.
Black box testing involves testing interfaces and the integration of the database, using
techniques such as cause-effect graphing, equivalence partitioning and boundary-value analysis.
With the help of these techniques, the functionality of the database can be tested thoroughly.
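The two partitioning techniques above can be sketched as follows. This is a minimal illustration, assuming a hypothetical AGE field whose valid range is 18 to 60; the field and limits are not from the original text.

```python
# Black-box test design for a hypothetical AGE field whose valid
# equivalence partition is assumed to be 18..60 (for illustration only).

def is_valid_age(age):
    """System under test: accepts ages in the valid partition 18..60."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below_valid": (10, False),   # invalid partition: age < 18
    "valid":       (35, True),    # valid partition: 18..60
    "above_valid": (70, False),   # invalid partition: age > 60
}

# Boundary-value analysis: values at and just beyond each boundary.
boundaries = [(17, False), (18, True), (19, True),
              (59, True), (60, True), (61, False)]

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name

for value, expected in boundaries:
    assert is_valid_age(value) == expected, value

print("all partition and boundary cases passed")
```

The boundary cases are where off-by-one defects cluster, which is why each boundary is tested at, just below, and just above the limit.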
Pros and cons of black box testing: test case generation in black box testing is fairly
simple. Test cases can be generated completely independently of software development, and at
an early stage of development. As a consequence, the programmer gains a better understanding of how
to design the database application and spends less time debugging. The cost of developing
black box test cases is lower than that of developing white box test cases. The major drawback of
black box testing is that it is unknown how much of the program is being tested. Also, certain
errors cannot be detected.
White box testing mainly deals with the internal structure of the database. The specification
details are hidden from the user.
It involves the testing of database triggers and logical views which are going to support
database refactoring.
It performs module testing of database functions, triggers, views, SQL queries etc.
The techniques used in white box testing are condition coverage, decision coverage,
statement coverage, cyclomatic complexity.
The main advantage of white box testing in database testing is that coding errors are detected, so
internal bugs in the database can be eliminated.
1) Data Mapping: In software systems, data often travels back and forth from the UI (user
interface) to the backend DB and vice versa. So the following are the aspects to look for:
Check whether the fields in the UI/front-end forms are mapped consistently with the
corresponding DB table (and also the fields within it). Typically this mapping information
is defined in the requirements documents.
2) States of DB transactions
Active – the initial state; the transaction stays in this state while it is executing.
Partially committed – after the final statement has been executed.
Failed – after the discovery that normal execution can no longer proceed.
Aborted – after the transaction has been rolled back and the database restored to its state
prior to the start of the transaction. There are two options after a transaction has been aborted:
o restart the transaction (possible only if there is no internal logical error)
o kill the transaction
Committed – after successful completion.
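The transaction states listed above form a small state machine. The following is a sketch of those states and their legal transitions; the function and dictionary names are illustrative, not part of any standard API.

```python
# Sketch of the transaction states described above as a state machine.
# Allowed transitions follow the list: Active, Partially committed,
# Committed, Failed, Aborted.

TRANSITIONS = {
    "active":              {"partially_committed", "failed"},
    "partially_committed": {"committed", "failed"},
    "failed":              {"aborted"},
    "aborted":             set(),   # terminal: restart or kill the transaction
    "committed":           set(),   # terminal
}

def next_state(current, target):
    """Move to `target` only if the transition is legal."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

# A successful run: active -> partially committed -> committed.
state = "active"
state = next_state(state, "partially_committed")
state = next_state(state, "committed")
assert state == "committed"
```

Note that a committed transaction can never move to any other state, which is exactly what durability requires.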
3) Data integrity: This means that following any of the CRUD operations (Create, Retrieve,
Update and Delete), the updated and most recent values/status of shared data should appear on all
the forms and screens. A value should not be updated on one screen while an older value is
displayed on another. So design your DB tests to include checking the data in all the places it
appears, to see that it is consistently the same.
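A minimal sketch of such a data-integrity check follows, using an in-memory SQLite database; the view stands in for a second screen or form showing the same shared data, and the table and column names are invented for illustration.

```python
# Data-integrity check: after an UPDATE, the shared value must read back
# identically from every place it appears (here a base table and a view
# standing in for two different screens).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, salary REAL)")
conn.execute("INSERT INTO customers VALUES (1, 2000.00)")
# The view plays the role of a second screen/form showing the same data.
conn.execute("CREATE VIEW customer_screen AS SELECT id, salary FROM customers")

conn.execute("UPDATE customers SET salary = 2500.00 WHERE id = 1")

from_table = conn.execute("SELECT salary FROM customers WHERE id = 1").fetchone()[0]
from_view  = conn.execute("SELECT salary FROM customer_screen WHERE id = 1").fetchone()[0]
assert from_table == from_view == 2500.00  # same value everywhere it appears
```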
4) Business rule conformity: More complex databases means more complicated components
like relational constraints, triggers, stored procedures, etc. So testers will have to come up with
appropriate SQL queries in order to validate these complex objects.
1) Transactions:
When testing transactions it is important to make sure that they satisfy the ACID properties.
A transaction is a unit of work that is performed against a database. Transactions are units or
sequences of work accomplished in a logical order, whether in a manual fashion by a user or
automatically by some sort of a database program.
A transaction is the propagation of one or more changes to the database. For example, if you are
creating, updating or deleting a record in a table, then you are performing a transaction on that
table. It is important to control transactions to ensure data integrity
and to handle database errors.
In practice, you will group many SQL queries together and execute all of them as part of a
single transaction.
Properties of Transactions:
Transactions have the following four standard properties, usually referred to by the acronym
ACID:
Atomicity: ensures that all operations within the work unit are completed successfully;
otherwise, the transaction is aborted at the point of failure, and previous operations are
rolled back to their former state.
Consistency: ensures that the database properly changes states upon a successfully
committed transaction.
Isolation: enables transactions to operate independently of, and transparently to, each other.
Durability: ensures that the result or effect of a committed transaction persists even in the
event of a system failure.
Transaction Control:
Transactional control commands are used only with the DML commands INSERT, UPDATE
and DELETE. They cannot be used while creating or dropping tables because those
operations are automatically committed in the database.
The COMMIT command is the transactional command used to save changes invoked by a
transaction to the database.
The COMMIT command saves all transactions to the database since the last COMMIT or
ROLLBACK command.
Example (deleting the records having AGE = 25 and making the change permanent):
DELETE FROM CUSTOMERS WHERE AGE = 25;
COMMIT;
As a result, the two rows with AGE = 25 would be deleted from the table, and a SELECT
statement would return only the remaining rows.
The ROLLBACK command is the transactional command used to undo transactions that have
not already been saved to the database.
The ROLLBACK command can only be used to undo transactions since the last COMMIT or
ROLLBACK command was issued.
ROLLBACK;
Example:
Consider the CUSTOMERS table having the following records:
ID NAME AGE ADDRESS SALARY
1 Rahel 32 AA 2000.00
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00
Following is an example which deletes the records having AGE = 25 from the table and then
ROLLBACKs the changes:
DELETE FROM CUSTOMERS WHERE AGE = 25;
ROLLBACK;
As a result, the delete operation would not impact the table, and a SELECT statement would
produce the following result:
ID NAME AGE ADDRESS SALARY
1 Rahel 32 AA 2000.00
2 Kedir 25 Dire 1500.00
3 Kasa 23 Kofele 2000.00
4 Chaltu 25 Mojo 6500.00
5 Hasen 27 Bahirdar 8500.00
6 Kebede 22 Shashe 4500.00
7 Mulu 24 Adama 10000.00
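The COMMIT and ROLLBACK examples above can be reproduced as a runnable sketch against an in-memory SQLite database; SQLite is used only so the example runs anywhere, and the commands behave the same way in other engines.

```python
# The DELETE / ROLLBACK / COMMIT examples above, against SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT, age INTEGER)")
cur.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                [(1, "Rahel", 32), (2, "Kedir", 25), (3, "Kasa", 23),
                 (4, "Chaltu", 25), (5, "Hasen", 27), (6, "Kebede", 22),
                 (7, "Mulu", 24)])

cur.execute("BEGIN")
cur.execute("DELETE FROM customers WHERE age = 25")   # removes Kedir and Chaltu
cur.execute("ROLLBACK")                               # undo: nothing committed
count = cur.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
assert count == 7   # the table is unchanged, as in the ROLLBACK example

cur.execute("BEGIN")
cur.execute("DELETE FROM customers WHERE age = 25")
cur.execute("COMMIT")                                 # this time the change is saved
count = cur.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
assert count == 5   # two rows are gone for good
```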
A SAVEPOINT is a point in a transaction when you can roll the transaction back to a certain
point without rolling back the entire transaction.
SAVEPOINT SAVEPOINT_NAME;
This command serves only to create a SAVEPOINT among transactional statements. To undo the
group of transactions performed since a savepoint, roll back to it:
ROLLBACK TO SAVEPOINT_NAME;
Following is an example where you plan to delete the three different records from the
CUSTOMERS table. You want to create a SAVEPOINT before each delete, so that you can
ROLLBACK to any SAVEPOINT at any time to return the appropriate data to its original state:
Example:
Consider the CUSTOMERS table having the following records:
SAVEPOINT SP1;
Savepoint created.
DELETE FROM CUSTOMERS WHERE ID=1;
1 row deleted.
SAVEPOINT SP2;
Savepoint created.
DELETE FROM CUSTOMERS WHERE ID=2;
1 row deleted.
SAVEPOINT SP3;
Savepoint created.
DELETE FROM CUSTOMERS WHERE ID=3;
1 row deleted.
Now that the three deletions have taken place, say you have changed your mind and decided to
ROLLBACK to the SAVEPOINT that you identified as SP2. Because SP2 was created after the
first deletion, the last two deletions are undone:
ROLLBACK TO SP2;
Rollback complete.
Notice that only the first deletion took place since you rolled back to SP2:
6 rows selected.
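The SAVEPOINT walkthrough above can be run end to end with SQLite, which supports SAVEPOINT and ROLLBACK TO with the semantics shown:

```python
# The SP1/SP2/SP3 savepoint walkthrough above, as a runnable sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                list(enumerate(["Rahel", "Kedir", "Kasa", "Chaltu",
                                "Hasen", "Kebede", "Mulu"], 1)))

cur.execute("BEGIN")
cur.execute("SAVEPOINT sp1")
cur.execute("DELETE FROM customers WHERE id = 1")
cur.execute("SAVEPOINT sp2")
cur.execute("DELETE FROM customers WHERE id = 2")
cur.execute("SAVEPOINT sp3")
cur.execute("DELETE FROM customers WHERE id = 3")

# SP2 was created after the first deletion, so rolling back to it
# undoes the last two deletions only.
cur.execute("ROLLBACK TO sp2")
rows = cur.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
assert rows == 6   # only the first deletion (id = 1) remains in effect
cur.execute("COMMIT")
```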
The RELEASE SAVEPOINT command is used to remove a SAVEPOINT that you have created.
The syntax for RELEASE SAVEPOINT is as follows:
RELEASE SAVEPOINT SAVEPOINT_NAME;
Once a SAVEPOINT has been released, you can no longer use the ROLLBACK command to
undo transactions performed since that SAVEPOINT.
The SET TRANSACTION command can be used to initiate a database transaction. This
command is used to specify characteristics for the transaction that follows.
For example, you can specify a transaction to be read only or read write.
2) Database schema:
A database schema is nothing but a formal definition of how the data is going to be organized
in a DB. To test it, identify the requirements based on which the database operates. Sample requirements:
3) Triggers:
Triggers are a special type of stored procedure. Instead of being executed by the user, triggers are
executed by the database server when certain operations are performed on a table:
An insert trigger runs whenever a new record is inserted into a table.
A delete trigger runs whenever an existing record is deleted from a table.
An update trigger runs whenever an existing record in a table is changed.
A trigger is a stored procedure that is invoked when a particular event occurs. For example, you
can install a stored procedure that is triggered each time a record is deleted from a transaction
table and that automatically deletes the corresponding customer from a customer table when all
his transactions are deleted. The planned update language will be able to handle stored
procedures, but without triggers. Triggers usually slow down everything, even queries for which
they are not needed.
1. Insert triggers: this insert trigger updates values in the product table when a customer orders
a product and the order is recorded in the order table.
2. Delete triggers: this trigger can protect against any delete action on the employee table if the
employee name is TEKLE and the id is '02'.
If you execute this code you should see an error message: cannot remove customers from Addis.
Transaction has been cancelled.
3. Update Triggers
CREATE TRIGGER Upcnum ON Customer
FOR UPDATE
AS
IF UPDATE(cname)
BEGIN
    PRINT 'Cannot update cname on Customer'
    PRINT 'Transaction has been cancelled'
    ROLLBACK TRANSACTION
END

UPDATE Customer
SET cname = 'xx' WHERE cid = 1

If you execute the above UPDATE, the following error message will be displayed:
Cannot update cname on Customer
Transaction has been cancelled
Ex. 3:
CREATE TRIGGER checkPN ON customers
FOR UPDATE
AS
IF UPDATE(phone)
BEGIN
    PRINT 'Cannot change phone numbers'
    PRINT 'Transaction has been cancelled'
    ROLLBACK TRANSACTION
END

Note: the IF UPDATE statement can be used in insert triggers as well as in update triggers.
When a certain event takes place on a certain table, a piece of code (a trigger) can be set up to
execute automatically.
For example, suppose a new student joins a school and is taking two classes: math and science.
The student is added to the student table. A trigger could add the student to the
corresponding subject tables once he is added to the student table.
The common method of testing is to execute the SQL query embedded in the trigger independently
first and record the result. Follow this up with executing the trigger as a whole. Compare the results.
These are tested during both the black box and white box testing phases.
Black-box testing is a way of testing software without having much knowledge of the internal
workings of the software itself. Black box testing is often referred to as behavioral testing, in the
sense that you want to test how the software behaves as a whole. It is usually done with the
actual users of the software in mind, who usually have no knowledge of the actual code itself.
White box (aka clear box), on the other hand, is testing of the structural internals of the code – it
gets down to the for loops, if statements, etc. It allows one to peek inside the ‘box’. Tasks that
are typical of white box testing include boundary tests, use of assertions, and logging
White box testing: the basic idea is to test the DB alone, even before the
integration with the front end (UI) is made.
Black box testing:
a) Since the UI and DB integration is now available, we can insert/delete/update data
from the front end in a way that invokes the trigger. Following that, SELECT
statements can be used to retrieve the DB data to see if the trigger was successful in
performing the intended operation.
b) A second way to test this is to directly load data that would invoke the trigger and
see if it works as intended.
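The direct-load approach in (b) can be sketched with SQLite, which supports triggers; the student/math/science tables follow the school example above, and the trigger name is invented.

```python
# A runnable trigger test: load data that invokes the trigger, then
# SELECT to verify the trigger performed the intended operation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE math    (student_id INTEGER)")
conn.execute("CREATE TABLE science (student_id INTEGER)")
# Insert trigger: enrolling a new student also adds them to both subject tables.
conn.execute("""
    CREATE TRIGGER enroll AFTER INSERT ON student
    BEGIN
        INSERT INTO math    VALUES (NEW.id);
        INSERT INTO science VALUES (NEW.id);
    END
""")

conn.execute("INSERT INTO student VALUES (1, 'Tekle')")  # invokes the trigger
in_math    = conn.execute("SELECT COUNT(*) FROM math    WHERE student_id = 1").fetchone()[0]
in_science = conn.execute("SELECT COUNT(*) FROM science WHERE student_id = 1").fetchone()[0]
assert in_math == 1 and in_science == 1   # trigger performed the intended inserts
```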
4) Stored Procedures:
Stored procedures are more or less similar to user-defined functions. They can be invoked by
CALL PROCEDURE or EXECUTE PROCEDURE statements, and the output is usually in the form of result sets.
A stored procedure is a set of SQL commands that can be compiled and stored in the server.
Once this has been done, clients don't need to keep reissuing the entire query but can refer to the
stored procedure. This provides better performance because the query has to be parsed only once,
and less information needs to be sent between the server and the client. You can also raise the
conceptual level by having libraries of functions in the server.
A stored procedure is an SQL program stored inside the database. Stored procedures can offer
performance gains when used instead of regular queries. They differ from ordinary
SQL statements and from batches of SQL statements in that they are precompiled.
The first time you run a procedure, SQL Server's query processor analyzes it and prepares an
execution plan that is ultimately stored in a system table. Subsequent executions of the
procedure follow the stored plan. Stored procedures are very similar to the
constructs seen in other programming languages.
We use the CREATE PROCEDURE statement to create a stored procedure. The maximum
stored procedure name length is thirty characters.
The syntax to define a new procedure is as follows:
CREATE PROCEDURE procedure_name
AS
    sql_statements
This stored procedure is called "all_employees". All it contains is a SELECT statement. All
stored procedures that SQL Server provides start with "sp_" (and "xp_" for extended stored
procedures). If you try to call a stored procedure that starts with "sp_" SQL Server will first
search the MASTER database before searching the current database.
You execute a stored procedure by typing its name or using the EXECUTE statement. To
execute our stored procedure you can type:
EXECUTE procedure_name [parameters]
This will execute the stored procedure and return the results. If you are calling this procedure
from an ASP page (or other client) you can use the EXECUTE statement as your SQL string to
execute. In this case, our stored procedure will return a record set.
When you submit a stored procedure to the system, SQL Server compiles and verifies the
routines within it. If any problems are found, the procedure is rejected and you'll need to
determine what the problem is prior to resubmitting the routine.
This information is updated in real-time and store managers are constantly checking the levels of
products stored at their store house and available for shipment. In the past, each manager would
run queries similar to the following:
This resulted in very inefficient performance at the SQL Server. Each time a store manager
executed the query, the database server was forced to recompile the query and execute it from
scratch. It also required the store manager to have knowledge of SQL and appropriate
permissions to access the table information. We can simplify this process through the use of a
stored procedure. Let's create a procedure called sp_GetInventory that retrieves the inventory
levels for a given storehouse. Here's the SQL code:
Stored procedures cannot be modified in place, so you're forced to first drop the procedure, and
then create it again. Although SQL Server can produce the code that was used to create the
stored procedure, you should always maintain a backup copy. You can pull the text associated
with a stored procedure by using the sp_helptext system stored procedure.
sp_helptext procedure_name
We use the DROP PROCEDURE statement to drop a stored procedure that has been created. Multiple
procedures can be dropped with a single DROP PROCEDURE statement by listing the procedures,
separated by commas, after the keywords DROP PROCEDURE:
DROP PROCEDURE procedure_name1, procedure_name2, ...;
White box testing: Stubs are used to invoke the stored procedures and then the results
are validated against the expected values.
Black box testing: Perform an operation from the frontend (UI) of the application and
check for the execution of the stored procedure and its results.
Constraints are the rules enforced on the data columns of a table. They are used to limit the type of
data that can go into a table. This ensures the accuracy and reliability of the data in the database.
Constraints can be column level or table level: column level constraints are applied only to
one column, whereas table level constraints are applied to the whole table.
Following are commonly used constraints available in SQL:
NOT NULL Constraint: ensures that a column cannot have a NULL value.
DEFAULT Constraint: provides a default value for a column when none is specified.
UNIQUE Constraint: ensures that all values in a column are different.
PRIMARY Key: uniquely identifies each row/record in a database table.
FOREIGN Key: identifies a row/record in another database table.
CHECK Constraint: ensures that all values in a column satisfy
certain conditions.
INDEX: used to create and retrieve data from the database very quickly.
DEFAULT Constraint:
The DEFAULT constraint provides a default value to a column when the INSERT INTO
statement does not provide a specific value.
Example:
For example, the following SQL creates a new table called CUSTOMERS and adds five
columns. Here, the SALARY column defaults to 5000.00, so if an INSERT INTO
statement does not provide a value for this column, it is set to 5000.00.
If the CUSTOMERS table has already been created, then to add a DEFAULT constraint to the SALARY
column, you would write a statement similar to the following:
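The DEFAULT behaviour described above can be demonstrated with SQLite; the CREATE TABLE below is a sketch of the CUSTOMERS table from the examples, with SALARY defaulting to 5000.00.

```python
# DEFAULT constraint: when INSERT omits SALARY, the column takes 5000.00.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,
        name    TEXT    NOT NULL,
        age     INTEGER NOT NULL,
        address TEXT,
        salary  REAL    DEFAULT 5000.00
    )
""")
conn.execute("INSERT INTO customers (id, name, age) VALUES (8, 'Abebe', 30)")
salary = conn.execute("SELECT salary FROM customers WHERE id = 8").fetchone()[0]
assert salary == 5000.00   # default applied because INSERT gave no value
```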
UNIQUE Constraint:
The UNIQUE Constraint prevents two records from having identical values in a particular
column. In the CUSTOMERS table, for example, you might want to prevent two or more people
from having identical age.
Example:
For example, the following SQL creates a new table called CUSTOMERS and adds five
columns. Here, the AGE column is set to UNIQUE, so that you cannot have two records with the same
age.
You can also use the following syntax, which supports naming the constraint and applying it to
multiple columns as well:
If you are using MySQL, then you can use the following syntax:
FOREIGN Key:
A foreign key is a key used to link two tables together. This is sometimes called a referencing
key.
Foreign Key is a column or a combination of columns whose values match a Primary Key in a
different table.
The relationship between 2 tables matches the Primary Key in one of the tables with a
Foreign Key in the second table.
If a table has a primary key defined on any field(s), then you cannot have two records having the
same value of that field(s).
Example:
Perform a front-end operation which violates the database object's constraint, then
validate the results with a SQL query.
Checking the default value for a certain field is quite simple. It is a part of business rule
validation. You can do it manually or you can use some other options to do so. Manually, you
can perform an action that will add a value other than the default value into the field from the
front end and see if it results in an error.
Checking the unique value can be done exactly the way we did for the default values. Try
entering values from the UI that will violate this rule and see if an error gets displayed.
For the foreign key constraint validation use data loads that directly input data that violates the
constraint and see if the application restricts the same or not. Along with the back end data load,
perform the front end UI operations too in a way that are going to violate the constraints and see
if the relevant error is displayed.
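The back-end data-load checks described above can be sketched with SQLite: each deliberately bad insert should be rejected with an integrity error. The table layout and values are invented for illustration, and note that SQLite enforces foreign keys only when the pragma is enabled.

```python
# Data loads that deliberately violate constraints; each bad insert
# should be rejected by the database with an integrity error.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite needs this for FK enforcement
conn.execute("""CREATE TABLE customers (
                    id  INTEGER PRIMARY KEY,
                    age INTEGER UNIQUE CHECK (age >= 18))""")
conn.execute("""CREATE TABLE orders (
                    id          INTEGER PRIMARY KEY,
                    customer_id INTEGER REFERENCES customers(id))""")
conn.execute("INSERT INTO customers VALUES (1, 32)")

def violates(sql):
    """Return True if the statement is rejected by a constraint."""
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True

assert violates("INSERT INTO customers VALUES (2, 32)")   # UNIQUE on age
assert violates("INSERT INTO customers VALUES (3, 10)")   # CHECK age >= 18
assert violates("INSERT INTO orders VALUES (1, 99)")      # FK: no customer 99
assert not violates("INSERT INTO orders VALUES (2, 1)")   # valid reference
```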
The system development life cycle is the overall process of developing, implementing, and
retiring information systems through a multistep process from initiation, analysis, design,
implementation, and maintenance to disposal. There are many different SDLC models and
methodologies, but each generally consists of a series of defined steps or phases.
Software life cycle refers to the period of time in which a software product is in use, from
design until the termination of its use. The software life cycle depends on many factors, some of
which include technology progression, the changing needs of an organization and the usefulness of
the software after a period of time.
Lifecycle is the period of time the software or database can reasonably be expected to be in use.
The Database Lifecycle is a subset of the Systems Development Lifecycle. It occurs within
the development of the entire system. The database lifecycle begins with a problem within the
context of an organization.
Lifecycle
Requirements Analysis – outlines the planning and analysis phase of database development.
This phase identifies the problems that exist in a particular context and
the users' requirements of the system.
Database Design – describes the process of analyzing the information needs of an
organization and developing a conceptual data model that reflects those needs.
Implementation – involves the coding, testing and debugging of the programs associated
with the database files, along with the creation of the files themselves.
Testing and Evaluation – involves testing the various database components and fine-tuning
the database.
Maintenance – changes to the system generate the various maintenance activities.
The Systems Design Lifecycle
The Systems Development Lifecycle is a methodology used in the development and maintenance
of any information system, whether it be a transactional, management information, decision
support or expert system.
Requirements Analysis
The Requirements Analysis phase outlines the planning and initial analysis phase of database
development. This phase identifies the problems that exist in a particular context and the users'
requirements of the system.
Database Design
Database design is the process of analyzing the information needs of an organization and
developing a conceptual data model that reflects those needs. It can also be described as a
process that develops database structures and how they interact from the user's requirements for
data.
The process starts with the Requirements Analysis which identifies the users' needs and then
translates those needs into Conceptual, then Logical and finally Physical design.
Implementation
A new database implementation requires the creation of special storage constructs depending on
the particular DBMS. Oracle, Microsoft SQL server, and DB2 require constructs such as storage
group and table space to be created. Other DBMS may automatically create these. For each
DBMS there are a particular set of steps to follow.
Testing and evaluation of the system takes place throughout systems development. The reason
behind testing is to turn up unforeseen problems prior to implementation.
Maintenance
Maintenance occurs throughout the life of the database. Apart from routine tasks that keep the
information system running and useful, there will be a constantly changing environment in which
the database is operating. The necessity to add additional tables, reports, and applications will
evolve out of the changing needs of the organization for which it was built.
Relating the software and database life cycle
A life cycle model captures the development, deployment, and maintenance of a typical software
element.
Development is the construction phase.
Deployment is a formal release of the software.
Maintenance consists of testing and tracking activities.
Different models may apply to the various stages as well as to different software components.
Formal models become crucial as a risk reduction method for complex systems.
Waterfall and spiral life cycle approaches are common.
The waterfall method consists of discrete steps.
Spiral model is iterative.
The test plan is a mandatory document; you can't test without one. For simple, straightforward
projects the plan doesn't have to be elaborate, but it must address certain items. As identified by
ANSI/IEEE Standard 829-1983 for Software Test Documentation, the following components should be
covered in a software test plan.
There are several phases to the testing process, which are described below.
In this phase of testing, programs that work independently of one another are tested together
using test data.
Testing in this phase checks to see whether the system can handle normal transaction processing.
If it can, variations to the test data are added, including invalid data, to ensure the system can
detect errors. Testing in this phase generally takes many passes.
In this phase, testing of the entire system is undertaken. At this stage outside users may become
involved. This testing phase, also called "alpha" testing, still uses test data.
This phase of testing should test for errors, timeliness, ease of use, correct ordering of
transactions and acceptable down time.
In addition, there are some factors that need to be considered when testing the system with test
data:
seeing whether users have adequate documentation to correctly use the system
checking that procedural and other manuals are written clearly and accurately for users
to follow
determining whether output is correct and whether users understand what the output
should be and how it should look
Auditing is another way to ensure the system functions as it was designed to function. In this
phase an expert who has not been involved in setting up and using the system tests the system to
determine its reliability.
Auditors who work for the organization that created the system are called "internal auditors"
whereas those hired outside the organization are "external auditors". Internal auditors test the
controls, including security controls, to ensure the system does what it should. External auditors
are used when the system processes data that influences the organization’s financial statements.
The most common test tools are categorized into three parts; these are:
1. Unit test
A series of stand-alone tests is conducted during unit testing. Each test examines an individual
component that is new or has been modified. A unit test is also called a module test because it
tests the individual units of code that comprise the application.
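A minimal stand-alone unit test in this spirit follows; the function under test (`net_salary`) and its expected values are invented purely to illustrate the technique.

```python
# A self-contained unit test of one component, using Python's unittest.
import unittest

def net_salary(gross, tax_rate):
    """Component under test: salary after tax (illustrative only)."""
    if not 0 <= tax_rate < 1:
        raise ValueError("tax rate must be in [0, 1)")
    return gross * (1 - tax_rate)

class NetSalaryTest(unittest.TestCase):
    def test_normal_case(self):
        # Valid input: check the computed value.
        self.assertAlmostEqual(net_salary(2000.00, 0.15), 1700.00)

    def test_invalid_rate_detected(self):
        # Invalid input: the component must reject it.
        with self.assertRaises(ValueError):
            net_salary(2000.00, 1.5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(NetSalaryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Note the pattern: one normal (valid) case and one intentionally erroneous case, as required by the test-case definition later in this section.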
2. System Test
System Testing tests all components and modules that are new, changed, affected by
a change, or needed to form the complete application.
The system test may require involvement of other systems but this should be
minimized as much as possible to reduce the risk of externally-induced problems.
Testing overall requirements of the system.
3. Integration Test
Integration testing examines all the components and modules that are new, changed,
affected by a change, or needed to form a complete system.
Testing of interfaces between subsystems
Integration testing has a number of sub-types of tests that may or may not be used,
depending on the application being tested or expected usage patterns.
iii. Stress Testing – Stress Testing is performance testing at higher than normal
simulated loads. Stressing runs the system or application beyond the limits of
its specified requirements to determine the load under which it fails and how
it fails. A gradual performance slow-down leading to a non-catastrophic
system halt is the desired result, but if the system will suddenly crash and
burn it’s important to know the point where that will happen. Catastrophic
failure in production means beepers going off, people coming in after hours,
system restarts, frayed tempers, and possible financial losses. This test is
arguably the most important test for mission-critical systems.
iv. Load Testing – Load tests are the opposite of stress tests. They test the
capability of the application to function properly under expected normal
production conditions and measure the response times for critical
transactions or processes to determine if they are within limits specified in
the business requirements and design documents or that they meet Service
Level Agreements. For database applications, load testing must be executed
on a current production-size database. If some database tables are forecast to
grow much larger in the foreseeable future then serious consideration should
be given to testing against a database of the projected size.
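A load-test measurement can be sketched as follows: populate a table to a production-like size, time a critical query, and compare against the response-time limit. The 100,000-row size and the 2-second limit are assumed stand-ins for real production sizing and a real Service Level Agreement.

```python
# Minimal load-test sketch: time a critical query against a
# production-size row count and check it stays within a limit.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 ((i, float(i % 100)) for i in range(100_000)))

start = time.perf_counter()
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
elapsed = time.perf_counter() - start

print(f"query took {elapsed:.4f}s, total = {total}")
# The 2-second limit is a hypothetical SLA, not a real requirement.
assert elapsed < 2.0
```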
1.4 - Recognize and separate systems into runnable modules mirroring live scenarios.
Test Scenario: - A chronological record of the details of the execution of a test script.
Captures the specifications, tester activities, and outcomes. Used to identify defects.
The design of large databases is usually handled by a modular approach; that is, the database
project is divided into discrete functional sub-systems or modules. For example, a database
project for a retail operation may be divided into warehouse, sales and finance.
Each of these databases would be designed separately with attention given to the transfer of data
between the sub-systems.
The sectioning of the database may follow different operations or processes within the
organization. (A data flow diagram is a handy tool in defining these sub-systems). Different
members of the design and development team may be responsible for a different sub-system.
It simplifies the design process: a large database may have a great many entities. Each
design sub-system contains a smaller number of entities. This tends to be more manageable
for handling tasks such as determining the attributes belonging to entities, primary keys,
foreign keys and the relationships between entities.
It delegates sections to design groups within the team: this speeds up development, as
several parts of the database are designed in parallel.
It can be prototyped quickly: prototyping allows the identification of potential trouble spots
in application development and in implementation.
One or more modules may be implemented before final completion: this
demonstrates progress and also serves end users by having a portion of the entire system
up and running early. Large projects may take several years to implement fully.
Test Case: - A document that defines a test item and specifies a set of test inputs or data,
execution conditions, and expected results. The inputs/data used by a test case should be both
normal and intended to produce a ‘good’ result and intentionally erroneous and intended to
produce an error. A test case is generally executed manually but many test cases can be
combined for automated execution.
It is a good idea to have some performance test cases designed for each release, to ensure that
each release maintains adequate performance. A performance test case contains the following sections:
Before Release / After Release – Notice that you will record timings before the code is
merged and after, that way you can determine if the release is faster or slower than
before.
Preparation – to achieve correct timings, you must clear your cache, reboot your
machine, etc. before doing the timings.
Area Testing – Test each critical area of your software, logged in with different accounts (accounts with small amounts of data, accounts with large amounts of data, etc.).
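The before/after timing comparison can be sketched as follows. This is an illustrative assumption, not from the original: `load_account_dashboard` stands in for any critical area of the software.

```python
import time

def time_operation(operation, runs=3):
    """Average wall-clock time of an operation over several runs."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        total += time.perf_counter() - start
    return total / runs

def load_account_dashboard():
    # Placeholder for a critical area, e.g. loading a large account's data.
    sum(range(100_000))

# Record this value before the release is merged...
before_release = time_operation(load_account_dashboard)
# ...and again after, to decide whether the release is faster or slower.
after_release = time_operation(load_account_dashboard)
print(f"before={before_release:.4f}s after={after_release:.4f}s")
```

Averaging over several runs, after clearing caches and rebooting as described above, reduces the noise in any single measurement.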
1.6-Announce scheduled test to ensure preparedness and understanding of
implications for operations.
Schedule: A sequence of test runs and resets. The test runs and reset operations are carried out
one at a time; there is no concurrency in the framework.
A sequence of steps should be followed to carry out the maintenance task in SQL Server Express Edition.
SQL Script - A SQL script exists for each process (backup, indexing, etc.) that performs the maintenance task against the database.
Batch Process File - A batch file is available that invokes all the T-SQL scripts and executes them in order against the database.
Windows Task Scheduler - A Windows scheduled task is configured on the system to automatically perform the maintenance task using the batch process and T-SQL scripts.
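The batch process above amounts to running each T-SQL script in a fixed order with the `sqlcmd` utility. A minimal sketch, in which the script names, server name, and database name are illustrative assumptions:

```python
# Ordered maintenance scripts, as a batch file would list them.
# Script names, server, and database below are illustrative assumptions.
MAINTENANCE_SCRIPTS = ["backup.sql", "rebuild_indexes.sql", "update_statistics.sql"]

def build_commands(server, database, scripts=MAINTENANCE_SCRIPTS):
    """Return the sqlcmd invocations in the order the batch file executes them."""
    return [
        ["sqlcmd", "-S", server, "-d", database, "-i", script]
        for script in scripts
    ]

commands = build_commands(r".\SQLEXPRESS", "SalesDB")
for cmd in commands:
    print(" ".join(cmd))
    # In the real batch process each command would be executed in turn
    # (e.g. subprocess.run(cmd, check=True)); Windows Task Scheduler then
    # runs the batch file automatically on its schedule.
```

Keeping the scripts in one ordered list mirrors the batch file: backup runs before index maintenance, and each step completes before the next starts.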
1.7-Prepare test script (online test) or test run (batch test) for running
When creating test cases, make sure you write solid steps so that the person running the test case will fully understand how to run it:
Always tie Test Cases to one or more Requirements to ensure traceability.
Always name the test case Title as the Requirement Number, plus the section of the
Requirement, plus a description of the test case.
Always include login info to let the tester know what application to run.
Make sure test case is assigned to the correct person
Make sure the Test Type is correct
Make sure the status is correct.
Make sure the steps are clear and complete.
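The naming and traceability guidance above can be captured in a simple test-case record. All field values here are illustrative assumptions; the point is the title built from the requirement number, requirement section, and description.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal test-case record; field values are illustrative assumptions."""
    requirement: str      # requirement number, e.g. "REQ-101", for traceability
    section: str          # section of the requirement, e.g. "3.2"
    description: str
    login_info: str       # tells the tester what application to run
    assigned_to: str
    test_type: str
    status: str
    steps: list = field(default_factory=list)

    @property
    def title(self):
        # Requirement number + requirement section + description.
        return f"{self.requirement}-{self.section}: {self.description}"

tc = TestCase(
    requirement="REQ-101",
    section="3.2",
    description="Verify order total is recalculated after discount",
    login_info="Log in to the OrderEntry app as test_user",
    assigned_to="tester1",
    test_type="Functional",
    status="Ready",
    steps=["Log in", "Open order 1001", "Apply 10% discount", "Verify total"],
)
print(tc.title)
```

A record like this makes each checklist item above a concrete field that a test-management tool (or a reviewer) can verify is filled in.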
Test Script:-Step-by-step procedures for using a test case to test a specific unit of code,
function, or capability.
Test Run: - A series of logically related groups of test cases or conditions.
A sequence of requests that are always executed in the same order. For instance, a test
run tests a specific business process that is composed of several actions (login, view
product catalog, place order, specify payment, etc.).
The test run is the unit in which failures are reported. It is assumed that the test database is in a known state at the beginning of the execution of a test run. During the execution of a test run the state may change due to the execution of requests with side-effects.
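A test run as described above, a fixed sequence of actions executed in order against a fresh state, with failures reported at the level of the whole run, can be sketched as follows. The action names follow the business-process example in the text; their bodies are illustrative assumptions.

```python
# Actions of one business process, executed in this fixed order.
def login(state):
    state["user"] = "customer1"

def view_product_catalog(state):
    state["catalog"] = ["widget", "gadget"]

def place_order(state):
    # Side-effect of an earlier request: ordering requires a logged-in user.
    assert "user" in state, "must be logged in before ordering"
    state["order"] = {"item": state["catalog"][0], "qty": 1}

def specify_payment(state):
    state["order"]["paid"] = True

TEST_RUN = [login, view_product_catalog, place_order, specify_payment]

def execute_test_run(actions):
    """Execute actions in order against a known starting state; report per run."""
    state = {}  # the test database is assumed to be in a known state at the start
    for action in actions:
        try:
            action(state)
        except Exception as exc:
            # The run, not the individual request, is the unit of failure.
            return {"passed": False, "failed_at": action.__name__, "error": str(exc)}
    return {"passed": True, "state": state}

result = execute_test_run(TEST_RUN)
print(result["passed"])
```

Running `place_order` without the preceding `login` would fail the whole run, which is exactly how a missing side-effect surfaces at the run level.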
A requirements document explains why a product is needed, puts the product in context, and describes what the finished product will be like. A large part of the requirements document is the formal list of requirements.
Requirements documents usually include user, system, and interface requirements; other classes
of requirements are included as needed.
User requirements are written from the point of view of end users, and are generally expressed in narrative form.
System requirements are detailed specifications describing the functions the system needs to perform. These are usually more technical in nature.
Interface requirements specify how the interface (the part of the system that users see
and interact with) will look and behave.
The introduction. The document should start with an introduction which explains the
purpose of the requirements document and how to use it. State any technical background
that is needed to understand the requirements and point less-technical readers to overview
sections which will be easier to understand. Explain the scope of the product to define
boundaries and set expectations. Include the business case for the product (why is it a
good idea?). Finally, include a summary or a list of the most important requirements, to
give a general understanding of the requirements and focus attention on the most critical
ones.
General description. Devote the next section of the document to a description of the
product in nontechnical terms. This is valuable for readers who want to know what the
project is about without wading through the technical requirements. Include a narrative
which gives readers an idea of what the product will do. Identify the product perspective:
the need(s) the product will serve, the primary stakeholders, and the developers. Describe
the main functions of the product (what activities will users be able to do with it?) and
describe typical user characteristics. Explain any constraints you will face during
development, and list your assumptions and any technical dependencies.
Specific requirements. Include your requirements list here, divided into whatever
groupings are convenient for you.
Appendices. Include, if you wish, any source documents such as personas and scenarios,
transcripts of user interviews, and so on.
Glossary. Explain any unusual terms, acronyms, or abbreviations (either technical or
domain-specific).
References. List references and source documents, if any.
Index. If your document is long, consider including an index.
Validating Requirements
Go back to end-users and customers with the draft requirements document and have them read it
(mostly the narrative sections). Ask if this is what they had in mind, or if they think any changes
are in order. It's much easier to make changes now, while your project is still words on paper, than
after development has begun. Take their comments and make changes; you may need to revisit
the previous stages (recording, organizing, and prioritizing), but it will be easier this time. The
vital thing is to come up with a requirements specification that meets the needs of the major
stakeholders. Repeat the process until there is agreement, and get representatives of each
stakeholder group to actually sign off on the document.
During development, refer to the document for guidance. Interface designers, instructional
designers, and developers should use it as a map to direct their efforts. If someone's work is
called into question, check with the requirements document to decide which direction to proceed.
This isn't to say that you have to treat it as inviolate. It's possible that small requirements changes
will continue to occur. Just be sure to get stakeholder agreement before altering the specification.
Continue to check the ongoing development process against the requirements; does the emerging
product do what it ought? When the product is in testing, check it against the requirements again.
Was anything omitted? If you're having problems with a feature, check with the requirements
document to see if there is a way around the problem. Sometimes developers add features which
don't need to be there, and in a time or budget crunch, you can use the requirements specification
as a reason to put off an extra feature.
Information Sheet- EISDBA4-LG08
M08 LO2-IS02
Conduct test
2.1- Run test scripts and document results in line with test and acceptance processes.
After running the test scripts in the test environment and documenting the results, the tester can submit the results to the user and the organization. The results are then accepted according to organizational rules and standards.
User Acceptance Testing is also called Beta testing, application testing, and end-user testing.
Whatever you choose to call it, it’s where testing moves from the hands of the IT department
into those of the business users.
Software vendors often make extensive use of Beta testing, some more formally than others,
because they can get users to do it for free. By the time UAT is ready to start, the IT staff has resolved, in one way or another, all the defects they identified. Despite their best efforts, though, they probably have not found all the flaws in the application.
Representatives from the group work with Testing Coordinators to design and conduct
tests that reflect activities and conditions seen in normal business usage.
Business users also participate in evaluating the results. This ensures that the application is tested in real-world situations and that the tests cover the full range of business usage.
Integration testing signoff was obtained
Business requirements have been met or renegotiated with the Business Sponsor or
representative
UAT test scripts are ready for execution
The testing environment is established
Security requirements have been documented and necessary user access obtained
UAT has been completed and approved by the user community in a transition meeting
Change control is managing requested modifications and enhancements
Business sponsor agrees that known defects do not impact a production release—no
remaining defects are rated 3, 2, or 1
2.2-Perform required quality benchmarks or comparisons in readiness for
acceptance testing
UAT testing has been completed and approved by all necessary parties
Known defects have been documented
Migration package documentation has been completed, reviewed, and approved by the production systems manager
Package migration is complete
Installation testing has been performed and documented and the results have been signed
off
Once testing has verified the product, the organization/industry can adopt it into production in accordance with its standards, rules and regulations.
Prior to launching into User Acceptance Testing (UAT), supply your client with a document.
This document describes Installation Procedures, Defect Entry and Resolution, Roles and
Responsibilities, Drop Schedule, and describes any open Defects.
2.4-Compare actual results to expected results on completion of each system unit and
complete result sheets.
Regression testing
Regression testing is also known as validation testing and provides a consistent, repeatable
validation of each change to an application under development or being modified.
Each time a defect is fixed, the potential exists to inadvertently introduce new errors, problems,
and defects.
Regression testing is the selective retesting of an application or system that has been modified, to ensure that no previously working components, functions, or features fail as a result of the repairs.
Regression testing is conducted in parallel with other tests and can be viewed as a quality control
tool to ensure that the newly modified code still complies with its specified requirements and that
unmodified code has not been affected by the change.
It is important to understand that regression testing doesn’t test that a specific defect has
been fixed.
Regression testing verifies that the rest of the application, up to the point of repair, was not adversely affected by the fix
The defect is repeatable and has been properly documented
A change control or defect tracking record was opened to identify and track the
regression testing effort
A regression test specific to the defect has been created, reviewed, and accepted
Exit Criteria
Results of the test show no negative impact to the application
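The core idea, that previously working behavior must still pass after a defect fix, can be sketched as follows. The function `order_total` and its test values are illustrative assumptions; the regression suite checks existing behavior, while a separate defect-specific test (not shown) would verify the fix itself.

```python
def order_total(subtotal, discount_rate):
    """Function after a defect fix; the regression suite guards its old behavior."""
    return round(subtotal * (1 - discount_rate), 2)

# Regression suite: (function, args, expected result) for behavior that
# worked before the fix. It does NOT test the specific defect that was fixed.
REGRESSION_SUITE = [
    (order_total, (50, 0.0), 50.0),   # no discount still works
    (order_total, (50, 0.1), 45.0),   # small orders are still discounted
    (order_total, (0, 0.1), 0.0),     # empty order is unaffected
]

def run_regression(suite):
    """Re-run the suite after each fix; any entry here is a regression."""
    failures = []
    for func, args, expected in suite:
        actual = func(*args)
        if actual != expected:
            failures.append((func.__name__, args, expected, actual))
    return failures

failures = run_regression(REGRESSION_SUITE)
print("no negative impact" if not failures else failures)
```

An empty failure list corresponds to the exit criterion above: the results of the test show no negative impact to the application.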