Testing & Evaluating

Testing the software solution
O the use of good programming practice
O Testing and evaluation is integral to all
stages of the software development
cycle.
O Personnel within the software development process perform alpha testing with real data. Beta testing occurs when the product is distributed to a limited number of outside users. These users are engaged to report any faults or recommendations back to the software development company.
O The testing and evaluation process is central to a
software development company's quality
assurance. The major aim is to ensure the product
meets the original design specifications created
during the 'Defining and Understanding the
Problem' stage. In terms of quality assurance, we
also evaluate the success of the design and
development processes. Another purpose of
testing is to evaluate the product's performance
against recognised industry standards or
benchmarks.
comparison of the solution
with the design specifications
O The finished product must be tested for compliance with the original design specifications and requirements, which may have been modified during the development of the product.
O A checklist of all requirements is used and each requirement is tested in turn. If faults are encountered, that aspect of the product is altered until the requirement is met.
O The testing process will also uncover inconsistencies in approach that have occurred during development. These inconsistencies must be identified and rectified, both in the existing product and for future products.
How do we test the design
specifications?
O Two techniques are used: black box testing, which examines inputs and outputs without reference to the internal code, and white box testing, which examines the internal logic of the code. Both techniques are required to identify and correct the majority of errors. Black box testing does not attempt to identify the source of a problem, but rather to establish that a problem exists. Once a problem has been identified, it may be necessary to move to white box testing to locate and correct it.
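As an illustration, consider this minimal Python sketch; the discount function and its figures are hypothetical, not from the slides:

# Hypothetical function under test: 10% off orders of $100 or more.
def discount(total):
    if total >= 100:
        return total * 0.9
    return total

# Black box: check inputs against expected outputs only; the internal
# logic is treated as unknown. A failure shows THAT a problem exists.
assert discount(50) == 50
assert discount(200) == 180

# White box: cases chosen by inspecting the code so that every branch
# of the if statement is executed, which helps locate WHERE a fault is.
assert discount(99) == 99     # False branch, just below the boundary
assert discount(100) == 90    # True branch, exactly on the boundary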
generating relevant test data for
complex solutions
O Test data is input used to check the correctness of an algorithm's or program's output. It should be created during the second stage of the cycle, when desk checking the algorithm. The values selected as test data should include:
O 1. a value that produces a correct result
O 2. a value just above the boundary of the correct range
O 3. a value just below the boundary of the correct range
O 4. a value that should come out false (be rejected)
O 5. zero
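As a sketch, and assuming a hypothetical pass/fail check with a pass mark of 50, the five categories might translate into the following Python test data:

# Hypothetical test data; the pass mark of 50 is an assumption.
test_data = [
    (75, True),    # 1. a value that produces a correct (pass) result
    (51, True),    # 2. just above the boundary of the correct range
    (49, False),   # 3. just below the boundary of the correct range
    (25, False),   # 4. a value that should come out false
    (0,  False),   # 5. zero
]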
comparison of actual with expected
output
O The actual output produced when test data is run through the program must be compared with the expected output, which is calculated independently. If the two match, the program is functioning as required.
O Desk checks of the algorithm can be used to calculate the expected output.
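Continuing the same hypothetical pass/fail example, a minimal Python sketch of the comparison (the is_pass routine and its pass mark are assumptions, not from the slides):

def is_pass(mark):
    # Hypothetical routine under test; pass mark of 50 assumed.
    return mark >= 50

# Expected outputs calculated beforehand (e.g. by a desk check of the
# algorithm), compared with the actual outputs the program produces.
test_cases = [(75, True), (51, True), (49, False), (25, False), (0, False)]
for mark, expected in test_cases:
    actual = is_pass(mark)
    result = "OK" if actual == expected else "FAULT"
    print(f"mark={mark:3}  expected={expected}  actual={actual}  {result}")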
levels of testing
O Software products are tested at various levels in
the SDC.
O During the implementation stage, the focus is on
the operation and processing of the actual
programming code for the particular product.
module testing
O test that each module and subroutine functions
correctly
O Each individual subroutine and module within a
program is tested as it is created.
O Typical faults include arithmetic errors, comparison errors, control logic errors, data structure errors and input/output errors.
O These errors are usually found using white box testing.

O use of drivers
O drivers allow testing of routines & subroutines without
the need for all other modules to be coded fully
O useful for testing individual modules during
development
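A minimal Python sketch of a driver; the module and stub names are hypothetical:

# Stub standing in for a module that has not yet been fully coded:
# it returns a fixed, known value so calculate_tax can be tested now.
def get_rate(category):
    return 0.10

# Module under test, which depends on the stubbed module above.
def calculate_tax(amount, category):
    return amount * get_rate(category)

# Driver: feeds known inputs to the module and reports the results.
def driver():
    for amount in (0, 100, 9999):
        print(amount, "->", calculate_tax(amount, "standard"))

driver()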
program testing
O test that the overall program (including
incorporated modules and subroutines)
functions correctly
O Each program will be tested to ensure modules work
together correctly.
O Interaction between modules is important and often causes problems.
O Even if all modules are individually functional, it is important to check that:
O all modules work correctly together
O they pass parameters effectively
O they save data effectively
O Modules are INTEGRATED before testing (see the sketch below).
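A minimal Python sketch of this kind of check; the two modules are hypothetical:

# Module 1: validates a record by stripping surrounding whitespace.
def validate(record):
    return record.strip()

# Module 2: stores an already-validated record and returns the count.
def store(record, database):
    database.append(record)
    return len(database)

# Integration check: the output of one module is passed as a parameter
# to the next, and the saved data is inspected afterwards.
db = []
count = store(validate("  alice  "), db)
assert db == ["alice"] and count == 1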
system testing
O System-level testing aims to ensure that the hardware, software, data, personnel and procedures that form the components of the final system are able to work together efficiently, correctly and in the manner intended with the new software product.
O test that the overall system (including all programs in the suite) functions correctly, including the interfaces between programs
O Hardware: The software product should be installed and
tested on different combinations of hardware ranging from
those specified as the minimum requirements to those
that include any additional hardware devices.
O Software: Other software installed on the system will have
an effect on the operation of new software products.
O Data: The data input into the system is the most likely
source of errors. The product needs to be tested with a
large combination and quantity of live test data to ensure
its reliability under live conditions.
O Personnel and Procedures: The users of the system will have various skill and knowledge levels. Does the product cater effectively for the needs of all of them?
acceptance testing
O Acceptance tests are designed to confirm the
requirements have been met so the system can be
signed off as complete. Real data is used.
O Determines how well the system fits into the
environment it was designed for. To gauge how well
the computer/human interface works with users to
determine any errors missed in previous testing
stages.
O Can also be used to train users in the operation of
the system.
O If the tests are successful, then the client makes
their final payment and the development team's job
is complete.
the use of LIVE TEST DATA to ensure that
the testing environment accurately reflects
the expected environment in which the new
system will operate
O Large files
O Mix of transaction types
O Response times
O Volume of data
O Benchmarking
O effect of the new system on the existing systems in the
environment into which it will be installed
• large file sizes
O Many commercial applications obtain input from
large databases. During the development of
software products, relatively small data sets are
used. At the alpha and beta testing stages large
files should be used to test the system's
performance.
O The use of large files will highlight problems
associated with data access. Often systems that
perform at acceptable speeds with small data files
become unacceptably slow when large files are
accessed. This is particularly the case when data is
accessed via networks.
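A rough Python sketch of such a test; the file name and sizes are arbitrary:

import csv
import time

# Generate a deliberately large test file (one million rows).
with open("large_test.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i in range(1_000_000):
        writer.writerow([i, f"customer{i}"])

# Time a full search through the file to expose slow data access
# that a small alpha-test file would hide.
start = time.perf_counter()
with open("large_test.csv") as f:
    hits = sum(1 for row in csv.reader(f) if row[1] == "customer999999")
print(f"found {hits} match(es) in {time.perf_counter() - start:.2f} s")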
• mix of transaction types
O Testing needs to include random mixes of
transaction types.
O Live data is used
O Too hard to fabricate data sets
O e.g. cheque, EFTPOS and credit card transactions
• response times
O The response time is the time taken for a process to complete. It depends on all the system components, together with their interactions with each other. Any process that is likely to take more than one second should provide feedback to the user.
O Progress bars are a common way of providing this feedback. Data entry forms that need to validate data before continuing should be able to do so in less than one second; 0.1 second is preferable.
O Response times of around 0.1 second give the user the impression of an instantaneous response, which is the ideal situation. Response times should be tested on minimum hardware using typical data of different types.
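A minimal Python sketch of checking a process against these thresholds; the timed helper and the sample process are hypothetical:

import time

def timed(process, *args):
    # Run a process and judge its response time against the 0.1 s
    # (feels instantaneous) and 1 s (needs feedback) thresholds above.
    start = time.perf_counter()
    result = process(*args)
    elapsed = time.perf_counter() - start
    if elapsed <= 0.1:
        verdict = "instantaneous"
    elif elapsed <= 1.0:
        verdict = "acceptable"
    else:
        verdict = "too slow - provide a progress bar"
    print(f"{process.__name__}: {elapsed:.3f} s ({verdict})")
    return result

timed(sorted, list(range(1_000_000, 0, -1)))  # stand-in for a real process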
• volume of data (load testing)
O Large amounts of data should be entered into the
new system to test the application under extreme
load conditions. Multi-user products should be
tested with large numbers of users entering and
processing data simultaneously. Alpha testing by
the software developer must try to simulate the
number of users who will simultaneously use the
product.
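A simplified Python sketch of simulating simultaneous users with threads; the user count and the shared list are stand-ins for a real multi-user product:

import threading

lock = threading.Lock()
records = []   # stands in for the product's shared data store

# Each thread plays the part of one user entering 100 records.
def user_session(user_id, entries=100):
    for n in range(entries):
        with lock:
            records.append((user_id, n))

threads = [threading.Thread(target=user_session, args=(u,)) for u in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(records) == 50 * 100   # no entries lost under load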
• benchmarking
O Benchmarking involves creating a set of tests to measure the speed at which a computer can complete a task. The tests can be performed on both the old and new systems to compare their performance.
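A minimal Python sketch using the standard timeit module; the two tasks are placeholders for the old and new systems:

import timeit

# The same task implemented two ways, standing in for old and new systems.
old = lambda: sum([i * i for i in range(10_000)])   # builds a full list first
new = lambda: sum(i * i for i in range(10_000))     # generator, less memory

for name, task in (("old system", old), ("new system", new)):
    seconds = timeit.timeit(task, number=1_000)
    print(f"{name}: {seconds:.3f} s for 1000 runs")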
effect of the new system on the
existing systems in the environment
into which it will be installed
O How does the new system work with the operating system, hardware drivers and devices, and any hardware that is specific to the system?
O e.g. self-service machines
Reporting on the testing
process
Results of testing need to be evaluated and
acted upon if the problems highlighted are to
be corrected.
documentation of the test data
and output produced
O Documentation should be developed and retained as the testing process is undertaken. Documents produced will include the test data used and the output produced.
use of CASE tools
O A large variety of Computer Aided Software
Engineering tools are available for use during the
testing and evaluation stage. These tools automate
the more tedious tasks associated with the testing
process. Many tools specialise in particular testing
functions such as test data generation, user
interface testing, or volume testing.

O Research each of these CASE Tools and their


testing features:
WinRunner, LoadRunner, DataTech, UsableNet
communication with those for whom the
solution has been developed, including:
O Test Results
O The testers must communicate their findings back to the client (when the product is developed externally), to the developers (where large companies use separate testing teams) or to the software development company (where development is outsourced).
O The test results will include problems from all aspects of the system tests: bugs in the code, inconsistencies in user interface design, unacceptable response times, conflicts with other applications and hardware, and ambiguous error messages.
O comparison with the original design specifications
O If the program meets all of the requirements of the original design specifications, the client will be satisfied with the system.
O The client may require access to the test reports or a summary of them.
Evaluating the software
solution
O Determining the extent to which all the requirements and design specifications have been met (testing) has the added bonus of providing critical information with which to evaluate both the new system and the design and development process, including the quality assurance procedures.
O Specifications describe what the software should
do together with how it should be done. Evaluation
of the design specifications will therefore ensure
the product fulfils its requirements and that those
involved in the design process have maintained
standards in regard to how the requirements have
been achieved.
O verifying the requirements have been met
appropriately
O Using a checklist or giving the system to the users for
testing to ensure the system is what is required by
the client
O quality assurance
O The quality of a software product is measured
against how well the product meets or exceeds
users' expectations. Quality assurance is about
ensuring that this occurs.
O Quality cannot be added to a finished product; it must be built into it.
Post implementation
review
• facilitation of open discussion and
evaluation with the client
• client sign off process
O A review of the system once operational is often
undertaken prior to the client formally accepting
the software as complete. For smaller projects this
evaluation is often in the form of discussion
between the client and the installers.
O Commonly the discussion will include
demonstrations to confirm and describe how the
requirements have been met. For large systems
professional independent assessors perform
thorough and rigorous acceptance tests followed by
a formal review.
O The acceptance tests and review confirm the
requirements have been met prior to the client
signing off on the project and paying all
outstanding moneys.
Design test data necessary and sufficient to test the
algorithm below.
Justify your choice for each item of test data used.
Mark1   Mark2   Reason / test path
25      25      'poor effort in both exams'
50      25      Boundary test: mark1 = 50
75      25      'results have deteriorated'
25      50      Boundary test: mark2 = 50
50      50      Boundary test: mark1 = 50, mark2 = 50
75      50      Boundary test: mark2 = 50
25      75      'good improvement in second exam'
50      75      Boundary test: mark1 = 50
75      75      'pleasing result in both exams'
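The algorithm itself is not reproduced in these slides (it appeared as an image). A plausible Python reconstruction, consistent with the test table above and assuming a pass mark of 50 with >= comparisons:

# Reconstruction only: branch order and the >= comparisons are assumptions.
def report(mark1, mark2):
    if mark1 >= 50 and mark2 >= 50:
        return "pleasing result in both exams"
    elif mark1 >= 50:                 # mark2 below the pass mark
        return "results have deteriorated"
    elif mark2 >= 50:                 # mark1 below the pass mark
        return "good improvement in second exam"
    else:
        return "poor effort in both exams"

# Run the test data from the table above through every path.
for m1 in (25, 50, 75):
    for m2 in (25, 50, 75):
        print(m1, m2, "->", report(m1, m2))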
