SE--Software Coding & Testing
Uploaded by dbajaj2005
© All Rights Reserved

Unit-6
Software Coding & Testing

Outline
 Code Review
 Software Documentation
 Testing Strategies
 Testing Techniques and Test Case
 Test Suites Design
 Testing Conventional Applications
 Testing Object Oriented Applications
 Testing Web and Mobile Applications
Coding Standards
• Most software development organizations formulate their own coding standards that suit them most, and require their engineers to follow these standards strictly.
• Good software development organizations normally require their programmers to adhere to some well-defined and standard style of coding, called coding standards.
• The purpose of requiring all engineers of an organization to adhere to a standard style of coding is the following:
  • A coding standard gives a uniform appearance to the code written by different engineers.
  • It enhances code understanding.
  • It encourages good programming practices.
Coding Standards Cont.
A coding standard lists several rules to be followed, such as the way variables are to be named, the way the code is to be laid out, error return conventions, etc.
The following are some representative coding standards:
1. Rules for limiting the use of global data
  • These rules list what types of data can be declared global and what cannot.
2. Naming conventions for global and local variables and constant identifiers
  • A possible naming convention can be that global variable names always start with a capital letter, local variable names are made of small letters, and constant names are always capital letters.
3. Contents of the headers preceding codes for different modules
  • The information contained in the headers of different modules should be standard for an organization.
  • The exact format in which the header information is organized in the header can also be specified.
  • Some standard header data: module name, creation date, author's name, modification history, synopsis of the module, global variables accessed/modified by the module, and the different functions supported along with their input/output parameters.
Coding Standards Cont.
Sample Header

/*
 * MyClass <br>
 * This class is merely for illustrative purposes. <br>
 * Revision History: <br>
 * 1.1 - Added javadoc headers <br>
 * 1.0 - Original release <br>
 *
 * @author Smith Jones
 * @version 1.1, 12/02/2018
 */
public class MyClass {
    . . .
}

4. Error return conventions and exception handling mechanisms
  • The way error conditions are reported and handled by different functions in a program should be standard within an organization.
  • For example, different functions, on encountering an error condition, should either return a 0 or a 1 consistently.
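As an illustrative sketch of one such convention (the routine and its names are hypothetical, not from the original slides), every parsing routine below reports success by returning 0 and failure by returning 1, never mixing styles:

```java
public class ErrorConvention {
    // Parses s into out[0]. Convention: returns 0 on success, 1 on error.
    public static int parsePositive(String s, int[] out) {
        try {
            int v = Integer.parseInt(s);
            if (v <= 0) return 1;   // error: value not positive
            out[0] = v;
            return 0;               // success
        } catch (NumberFormatException e) {
            return 1;               // error: not a number at all
        }
    }
}
```

Because every routine in the organization follows the same 0/1 convention, callers can check results uniformly without consulting each function's documentation.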
Coding Guidelines
The following are some representative coding guidelines:
• Do not use a coding style that is too clever or too difficult to understand.
• Do not use an identifier for multiple purposes.
• The code should be well-documented.
• The length of any function should not exceed 25-30 source lines.
• Do not use goto statements.
• Avoid obscure side effects:
  • The side effects of a function call include modification of parameters passed by reference, modification of global variables, and I/O operations.
  • An obscure side effect is one that is not obvious from a casual examination of the code.
  • Obscure side effects make it difficult to understand a piece of code.
  • For example, a global variable may be changed obscurely in a called module, or some file I/O may be performed that is difficult to infer from the function's name and header.
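To make the "obscure side effects" guideline concrete, here is a small hypothetical Java example (all names invented for illustration): a method whose name suggests a pure query but which silently mutates global state, next to a pure version of the same check:

```java
public class SideEffectDemo {
    static int callCount = 0;     // global state

    // Bad: a "query" with an obscure side effect. Nothing in the
    // name or header suggests that global state is modified.
    static boolean isEven(int n) {
        callCount++;              // hidden mutation of a global
        return n % 2 == 0;
    }

    // Better: the query is pure; any bookkeeping is left to the caller.
    static boolean isEvenPure(int n) {
        return n % 2 == 0;
    }
}
```

A reader casually scanning a call to `isEven` cannot infer that `callCount` changed, which is exactly what makes such side effects hard to review.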
Types of Faults
Software faults are quite inevitable (unavoidable). There are many reasons:
• Software systems with a large number of states
• Complex formulas, activities and algorithms
• The customer is often unclear about needs
• The size of the software
• The number of people involved

Fault types:
• Algorithmic: logic is wrong (caught by code reviews)
• Syntax: wrong syntax; typos (caught by the compiler)
• Computation/Precision: not enough accuracy
• Documentation: misleading documentation
• Stress/Overload: maximum load violated
• Capacity/Boundary: boundary cases are usually special cases
• Timing/Coordination: synchronization issues (very hard to replicate)
• Throughput/Performance: system performs below expectations
• Recovery: system restarted from an abnormal state
• Hardware & related software: compatibility issues
• Standards: makes for difficult maintenance
Software Quality
• Software quality remains an issue.
• Who is to blame?
  • Customers blame developers, arguing that careless practices lead to low-quality software.
  • Developers blame customers and other stakeholders, arguing that irrational delivery dates and a continuous stream of changes force them to deliver software before it has been fully validated.
• Who is right? Both, and that's the problem.
Code Review
• Code review is carried out after the module is successfully compiled and all the syntax errors have been eliminated.
• Code reviews are extremely cost-effective strategies for reducing coding errors and producing high-quality code.
• Types of reviews: Code Walk Through and Code Inspection.
A few classical programming errors:
• Use of uninitialized variables
• Jumps into loops
• Nonterminating loops
• Incompatible assignments
• Array indices out of bounds
• Improper storage allocation and deallocation
• Mismatches between actual and formal parameters in function calls
• Use of incorrect logical operators or incorrect precedence among operators
• Improper modification of loop variables
Code Review Cont.: Code Walk Through & Code Inspection
Code Walk Through
• Code walk through is an informal code analysis technique.
• The main objectives of the walk through are to discover the algorithmic and logical errors in the code.
• A few members of the development team are given the code a few days before the walk-through meeting to read and understand the code.
• Each member selects some test cases and simulates execution of the code by hand.
• The members note down the mistakes they find in the code.
Code Inspection
• The aim of code inspection is to discover some common types of errors caused due to improper programming.
• In other words, during code inspection the code is examined for the presence of certain kinds of errors.
• For instance, consider the classical error of writing a procedure that modifies a parameter while the calling routine calls that procedure with a constant actual parameter.
• It is more likely that such an error will be discovered by looking for these kinds of errors.
Software Documentation
• When different kinds of software products are developed, various kinds of documents are also prepared as part of any software engineering process, e.g. users' manual, design documents, test documents, installation manual, software requirements specification (SRS) documents, etc.
• Different types of software documents can broadly be classified into the following: Internal Documentation and External Documentation.
Software Documentation Cont.: Internal & External Documentation
Internal Documentation
• It is the code perception features provided as part of the source code.
• It is provided through appropriate module headers and comments embedded in the source code.
• It is also provided through useful variable names, module and function headers, code indentation, code structuring, use of enumerated types and constant identifiers, use of user-defined data types, etc.
• Even when code is carefully commented, meaningful variable names are still more helpful in understanding a piece of code.
• Good organizations ensure good internal documentation.
External Documentation
• It is provided through various types of supporting documents, such as the users' manual, the software requirements specification document, the design document, the test documents, etc.
• A systematic software development style ensures that all these documents are produced in an orderly fashion.
Software Testing
• Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
• Don't view testing as a "safety net" that will catch all errors that occurred because of weak software engineering practice.
Who Tests the Software?
Developer
• Understands the system, but will test "gently" and is driven by "delivery".
Tester
• Must learn about the system, but will attempt to break it, and is driven by quality.
• Testing needs a strategy; testing without a plan is of no point and wastes time and effort.
• The development team needs to work with the test team ("egoless programming").
When to Test the Software?
• Component code -> Unit Test (for each component)
• Design specifications -> Integration Test (integrated modules)
• System functional requirements -> Function Test (functioning system)
• Other software requirements -> Performance Test (verified, validated software)
• Customer SRS -> Acceptance Test (accepted system)
• User environment -> Installation Test (system in use!)
Verification & Validation
Verification: Are we building the product right?
• The objective of verification is to make sure that the product being developed is as per the requirements and design specifications.
Validation: Are we building the right product?
• The objective of validation is to make sure that the product meets the user's requirements, and to check whether the specifications were correct in the first place.

Verification vs. Validation:
• Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements; validation is the process of evaluating software at the end of the development to determine whether it meets the customer's expectations and requirements.
• Activities involved in verification: reviews, meetings and inspections. Activities involved in validation: testing, such as black box, white box and gray box testing.
• Verification is carried out by the QA team; validation is carried out by the testing team.
• Execution of code does not come under verification; execution of code comes under validation.
• Verification explains whether the outputs are according to the inputs or not; validation describes whether the software is accepted by the user or not.
• The cost of errors caught during verification is low; the cost of errors caught during validation is high.
Software Testing Strategy
• Unit Testing
• Integration Testing
• Validation Testing
• System Testing
Unit Testing
• It concentrates on each unit of the software as implemented in source code.
• It focuses on each individual component, ensuring that it functions properly as a unit.
Software Testing Strategy Cont.
Integration Testing
• Its focus is on the design and construction of the software architecture.
• Integration testing is the process of testing the interface between two software units or modules.
Validation Testing
• Software is validated against the requirements established as part of requirements modeling.
• It gives assurance that the software meets all informational, functional, behavioral and performance requirements.
System Testing
• The software and other system elements are tested as a whole.
• Software, once validated, must be combined with other system elements, e.g. hardware, people, databases, etc.
• It verifies that all elements mesh properly and that overall system function/performance is achieved.
Unit Testing
• A unit is the smallest part of a software system which is testable.
• It may include code files, classes and methods which can be tested individually for correctness.
• Unit testing validates the small building blocks of a complex system before testing an integrated large module or the whole system.
• The unit test focuses on the internal processing logic and data structures within the boundaries of a component.
• The module is tested to ensure that information properly flows into and out of the program unit.
• Local data structures are examined to ensure that data stored temporarily maintains its integrity during execution.
• All independent paths through the control structures are exercised to ensure that all statements in the module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
Driver & Stub (Unit Testing)
• Component testing (unit testing) may be done in isolation from the rest of the system; in such a case the missing software is replaced by stubs and drivers, which simulate the interface between the software components in a simple manner.
• Let's take an example to understand it in a better way.
• Suppose there is an application consisting of three modules, say module A, module B and module C.
• The developer has designed it in such a way that module B depends on module A and module C depends on module B.
• The developer has developed module B and now wants to test it.
• But module A and module C have not been developed yet.
• In that case, to test module B, the missing modules are simulated using a stub and a driver.
Driver & Stub (Unit Testing) Cont.
Driver
• Driver and/or stub software must be developed for each unit test.
• A driver is nothing more than a "main program" that:
  • Accepts test case data
  • Passes such data to the component, and
  • Prints relevant results.
• Drivers are used in the bottom-up approach, where the lowest modules are tested first.
• A driver simulates the higher-level components; it is a dummy program for the higher-level modules.
Stub
• Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
• A stub, or "dummy subprogram":
  • Uses the subordinate module's interface
  • May do minimal data manipulation
  • Prints verification of entry, and
  • Returns control to the module undergoing testing.
• Stubs are used in the top-down approach, where the topmost module is tested first.
• A stub simulates the lower-level components.
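The driver/stub arrangement can be sketched as follows. All module names and values are hypothetical: the stub stands in for a not-yet-written subordinate module by printing verification of entry and returning a canned value, while the driver is a tiny "main program" that feeds test data to the module under test and prints the result:

```java
public class DriverStubDemo {
    // Stub: replaces the missing subordinate module A.
    static int moduleA_getRate() {
        System.out.println("stub: moduleA_getRate entered"); // verify entry
        return 10;                       // canned value, minimal logic
    }

    // Module under test: B, which calls the (stubbed) module A.
    static int moduleB_computeTotal(int units) {
        return units * moduleA_getRate();
    }

    // Driver: accepts test case data, passes it to B, prints the result.
    public static void main(String[] args) {
        int result = moduleB_computeTotal(3);
        System.out.println("driver: result = " + result);
    }
}
```

Once the real module A exists, the stub is discarded and B is re-tested against the genuine interface during integration.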
Integration Testing
Integration testing is the process of testing the interface between two software units or modules. It can be done in 3 ways:
1. Big Bang Approach
2. Top Down Approach
3. Bottom Up Approach
Big Bang Approach
• Combining all the modules at once and verifying the functionality after completion of individual module testing.
Top Down Approach
• Testing takes place from top to bottom.
• High-level modules are tested first, then low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system is working as intended.
• Stubs are used as temporary modules if a module is not ready for integration testing.
Bottom Up Approach
• Testing takes place from bottom to top.
• The lowest-level modules are tested first, then the high-level modules, and finally the high-level modules are integrated with the low-level ones to ensure the system is working as intended.
• Drivers are used as temporary modules if a module is not ready for integration testing.
Regression Testing
• Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested.
• Regression testing is done by re-executing the tests against the modified application to evaluate whether the modified code breaks anything which was working earlier.
• Anytime we modify an application, we should do regression testing.
• It gives confidence that the application still works correctly after the changes.
When to do regression testing?
• When new functionalities are added to the application
  • E.g. a website has login functionality with only Email; a new feature is added to "also allow login using Facebook".
• When there is a change requirement
  • E.g. "Forgot password" should be removed from the login page.
• When there is a defect fix
  • E.g. assume that the "Login" button is not working and a tester reports a bug. Once the bug is fixed by the developer, the tester re-tests using this approach.
• When there is a performance issue
  • E.g. loading a page takes 15 seconds; the load time is reduced to 2 seconds.
• When there is an environment change
Smoke Testing
• Smoke testing is an integrated testing approach that is commonly used when product software is developed.
• This test is performed after each build release.
• Smoke testing verifies build stability: it tests the build just to check whether any major or critical functionalities are broken.
• If there is smoke (a failure) in the build after the test, the build is rejected, and the developer team is reported with the issue.
• This testing is performed by a "Tester" or a "Developer".
• This testing is executed for Integration Testing, System Testing and Acceptance Testing.
• What to test? All major and critical functionalities of the application.
(Figure: a build containing features F1-F6, where F1 and F2 are critical and F3 and F4 are major.)
Validation Testing
 The process of evaluating software to determine whether it satisfies
specified business requirements (client’s need).
 It provides final assurance that software meets all informational,
functional, behavioral, and performance requirements
 When custom software is built for one customer, a series of
acceptance tests are conducted to validate all requirements
 It is conducted by the end users rather than software engineers
 If software is developed as a product to be used by many customers,
it is impractical to perform formal acceptance tests with each one
 Most software product builders use a process called alpha and beta
testing to uncover errors that only the end user seems able to find

Validation Testing – Alpha & Beta Test
Alpha Test
• The alpha test is conducted at the developer's site by a representative group of end users.
• The software is used in a natural setting with the developer "looking over the shoulders" of the users and recording errors and usage problems.
• The alpha tests are conducted in a controlled environment.
Beta Test
• The beta test is conducted at one or more end-user sites.
• Developers are generally not present.
• The beta test is a "live" application of the software in an environment that cannot be controlled by the developer.
• The customer records all problems and reports them to the developers at regular intervals.
• After modifications, the software is released to the entire customer base.
System Testing
 In system testing the software and other system elements are tested.
 To test computer software, you spiral out in a clockwise direction along
streamlines that increase the scope of testing with each turn.
 System testing verifies that all elements mesh properly and overall
system function/performance is achieved.
 System testing is a series of different tests whose primary purpose is
to fully exercise the computer-based system.

Types of System Testing:
1. Recovery Testing
2. Security Testing
3. Stress Testing
4. Performance Testing
5. Deployment Testing
Types of System Testing
Recovery Testing
• It is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
• If recovery is automatic (performed by the system itself), re-initialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness.
• If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
Security Testing
• It attempts to verify the software's protection mechanisms, which protect it from improper penetration (access).
• During this test, the tester plays the role of the individual who desires to penetrate the system.
Types of System Testing Cont.
Stress Testing
• It executes a system in a manner that demands resources in abnormal quantity, frequency or volume.
• A variation of stress testing is a technique called sensitivity testing.
Performance Testing
• It is designed to test the run-time performance of software.
• It occurs throughout all steps in the testing process.
• Even at the unit testing level, the performance of an individual module may be tested.
Types of System Testing Cont.
Deployment
Testing  It exercises the software in each environment in
which it is to operate.
 In addition, it examines
 All installation procedures
 Specialized installation software that will be used by
customers
 All documentation that will be used to introduce the
software to end users

30
Acceptance Testing
 It is a level of the software testing where a system is tested for
acceptability.
 The purpose of this test is to evaluate the system’s compliance with the
business requirements.
 It is a formal testing conducted to determine whether a system
satisfies the acceptance criteria with respect to user needs,
requirements, and business processes
 It enables the customer to determine whether to accept the system.
 It is performed after System Testing and before making the system available for actual use.
Views of Test Objects
Black Box Testing
• Closed box testing: testing based only on the specification.
Grey Box Testing
• Testing with partial knowledge of the source code.
White Box Testing
• Open box testing: testing based on the actual source code.
Black Box Testing
 Also known as specification-based testing
 Tester has access only to running code and the specification it is supposed
to satisfy
 Test cases are written with no knowledge of internal workings of the code
 No access to source code
 So, test cases don’t worry about structure
 Emphasis is only on ensuring that the contract is met
Advantages
 Scalable; not dependent on size of code
 Testing needs no knowledge of implementation
 Tester and developer can be truly independent of each other
 Tests are done with requirements in mind
 Does not excuse inconsistencies in the specifications
 Test cases can be developed in parallel with code
Black Box Testing Cont.
Test Case Design
• Examine the pre-condition, and identify equivalence classes.
• Choose inputs such that all possible classes are covered.
• Apply the specification to each input to write down the expected output.
Disadvantages
• Test size will have to be small.
• Specifications must be clear, concise, and correct.
• May leave many program paths untested.
• Weighting of program paths is not possible.
(Diagram: a specification for operation op, with pre-condition X and post-condition Y, feeds specification-based test case design, producing test cases such as Test Case 1 with input x1 satisfying X and expected output y1, and Test Case 2 with input x2 satisfying X and expected output y2.)
Black Box Testing Cont.
• Exhaustive testing is not always possible when there is a large set of input combinations, because of budget and time constraints.
• Special techniques are needed which select test cases smartly from all the combinations of test cases, in such a way that all scenarios are covered.
Two techniques are used:
1. Equivalence Partitioning
2. Boundary Value Analysis (BVA)
Equivalence Partitioning
• Input data for a program unit usually falls into several partitions, e.g. all negative integers, zero, all positive numbers.
• Each partition of input data makes the program behave in a similar way.
• Two test cases based on members from the same partition are likely to reveal the same bugs.
Equivalence Partitioning (Black Box Testing)
• By identifying and testing one member of each partition we gain "good" coverage with a "small" number of test cases.
• Testing one member of a partition should be as good as testing any member of the partition.
Example – Equivalence Partitioning
For binary search the following partitions exist:
• Inputs that conform to the pre-conditions
• Inputs where the pre-condition is false
• Inputs where the key element is a member of the array
• Inputs where the key element is not a member of the array
Pick specific conditions of the array:
• The array has a single value
• Array length is even
• Array length is odd
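One representative test case per partition can be written directly as assertions. This is only a sketch: `java.util.Arrays.binarySearch` stands in for the binary search routine under test, and the concrete arrays are invented to hit each partition above:

```java
import java.util.Arrays;

public class BinarySearchPartitions {
    // Wrapper over the routine under test: true iff key is in the sorted array.
    static boolean contains(int[] sorted, int key) {
        return Arrays.binarySearch(sorted, key) >= 0;
    }

    public static void main(String[] args) {
        int[] odd  = {1, 3, 5};      // partition: odd array length
        int[] even = {1, 3, 5, 7};   // partition: even array length
        int[] one  = {4};            // partition: single-value array

        System.out.println(contains(odd, 3));   // key is a member
        System.out.println(contains(odd, 2));   // key is not a member
        System.out.println(contains(even, 7));
        System.out.println(contains(one, 4));
    }
}
```

Five small arrays and keys cover all the listed partitions; adding more members of the same partitions would be unlikely to reveal new bugs.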
Equivalence Partitioning (Black Box Testing) Cont.
Example – Equivalence Partitioning
• Example: assume that we must test a field which accepts SPI (Semester Performance Index) as input (the SPI range is 0 to 10).

Equivalence partitions:
• Invalid: <= -1
• Valid: 0 to 10
• Invalid: >= 11

• Valid class: 0 to 10; pick any one input test value from 0 to 10.
• Invalid class 1: <= -1; pick any one input test value less than or equal to -1.
• Invalid class 2: >= 11; pick any one input test value greater than or equal to 11.
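A minimal sketch of the SPI example with one test value per equivalence class. The validator itself is hypothetical; only the 0-to-10 range comes from the slide:

```java
public class SpiValidator {
    // Accepts an SPI value iff it lies in the valid range 0..10.
    static boolean isValidSpi(int spi) {
        return spi >= 0 && spi <= 10;
    }
}
```

Three inputs, one per partition, give the same coverage as testing every value in each class.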
Boundary Value Analysis (BVA) (Black Box Testing)
• It arises from the fact that most programs fail at input boundaries.
• Boundary testing is the process of testing between extreme ends or boundaries between partitions of the input values.
• In boundary testing, equivalence class partitioning plays a good role.
• Boundary testing comes after equivalence class partitioning.
• The basic idea in boundary value testing is to select input variable values at:
  • Just below the minimum
  • The minimum
  • Just above the minimum
  • Just below the maximum
  • The maximum
  • Just above the maximum
Boundary Value Analysis (BVA) (Black Box Testing) Cont.
• Suppose the system asks for "a number between 100 and 999 inclusive".
• The boundaries are 100 and 999.
• We therefore test the values 99, 100 and 101 at the lower boundary, and 998, 999 and 1000 at the upper boundary.
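The boundary test points for this range translate directly into checks; `inRange` below is a hypothetical stand-in for the system's own validation:

```java
public class RangeCheck {
    // Accepts "a number between 100 and 999 inclusive".
    static boolean inRange(int n) {
        return n >= 100 && n <= 999;
    }
}
```

Off-by-one mistakes (writing `>` instead of `>=`, for example) are caught exactly by the just-below/at/just-above triples.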
BVA – Advantages
• BVA is easy to use and remember because of the uniformity of the identified tests and the automatable nature of the technique.
• One can easily control the expenses made on testing by controlling the number of identified test cases.
• BVA is the best approach in cases where the functionality of the software is based on numerous variables representing physical quantities.
• The technique is best at uncovering user-input troubles in the software.
• The procedure and guidelines are crystal clear and easy when it comes to determining the test cases through BVA.
• The number of test cases generated through BVA is very small.
Boundary Value Analysis (BVA) (Black Box Testing) Cont.
BVA – Disadvantages
• This technique sometimes fails to test all the potential input values, and so the results can be unsure.
• Dependencies between two inputs are not tested by BVA.
• This technique doesn't fit well when it comes to Boolean variables.
• It only works well with independent variables that depict quantity.
White Box Testing
• Also known as structural testing.
• White box testing is a software testing method in which the internal structure/design/implementation of the module being tested is known to the tester.
• The focus is on ensuring that even abnormal invocations are handled gracefully.
• Using white-box testing methods, you can derive test cases that:
  • Guarantee that all independent paths within a module have been exercised at least once
  • Exercise all logical decisions on their true and false sides
  • Execute all loops at their boundaries
  • Exercise internal data structures to ensure their validity
"...our goal is to ensure that all statements and conditions have been executed at least once..."
White Box Testing Cont.
• It is applicable to the following levels of software testing:
  • Unit Testing: for testing paths within a unit
  • Integration Testing: for testing paths between units
  • System Testing: for testing paths between subsystems
Advantages
• Testing can be commenced at an earlier stage, as one need not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.
Disadvantages
• Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation/platform may not be readily available.
White-box testing strategies
• One white-box testing strategy is said to be stronger than another if all the types of errors detected by the second strategy are also detected by the first, and the first strategy additionally detects some more types of errors.
White-box testing strategies:
1. Statement coverage
2. Branch coverage
3. Path coverage
Statement coverage
• It aims to design test cases so that every statement in a program is executed at least once.
• The principal idea is that unless a statement is executed, it is very hard to determine if an error exists in that statement.
• Unless a statement is executed, it is very difficult to observe whether it causes a failure due to some illegal memory access, wrong result computation, etc.
White-box testing strategies Cont.
Consider Euclid's GCD computation algorithm:

int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=2, y=4)}, we can exercise the program such that all statements are executed at least once.
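A direct Java port of the routine above lets the statement-coverage test set be executed mechanically (the port is a sketch; the algorithm itself is from the slide):

```java
public class Gcd {
    // Euclid's GCD by repeated subtraction, as in the C version above.
    static int computeGcd(int x, int y) {
        while (x != y) {
            if (x > y)
                x = x - y;   // executed for (4,3)
            else
                y = y - x;   // executed for (2,4) and (4,3)
        }
        return x;            // executed for all three test cases
    }
}
```

The case (3,3) skips the loop entirely, (4,3) takes the `x > y` branch, and (2,4) takes the `else` branch, so together the three cases execute every statement at least once.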
White-box testing strategies Cont.
Branch coverage
• In the branch coverage-based testing strategy, test cases are designed to make each branch condition assume true and false values in turn.
• It is also known as edge testing, as in this testing scheme each edge of a program's control flow graph is traversed at least once.
• Branch coverage guarantees statement coverage, so it is a stronger strategy than statement coverage.
Path coverage
• In this strategy, test cases are executed in such a way that every path is executed at least once.
• All possible control paths are taken, including all loop paths taken zero times, once, and multiple times.
• The test cases are prepared based on the logical complexity measure of the procedure design.
• Flow graphs, cyclomatic complexity and graph matrices are used to arrive at the basis paths.
What is Cyclomatic Complexity V(G)?
• A software metric used to measure the complexity of software, developed by Thomas McCabe.
• Applies to decision logic embedded within written code.
• Is derived from predicates in decision logic.
• Grows from 1 to a high, finite number based on the amount of decision logic.
• Described (informally) as the number of decision points + 1.
• Is correlated to software quality and testing quantity; units with higher V(G), V(G) > 10, are less reliable and require high levels of testing.
Cyclomatic Complexity V(G) Cont.
Computing the cyclomatic complexity:
• number of simple decisions + 1, OR
• number of enclosed areas + 1
For the flow graph shown on the original slide, V(G) = 4.
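Since the original flow-graph figure is not reproduced here, a hypothetical method with three simple decisions (two `if`s and one `while`) illustrates the same count: V(G) = 3 + 1 = 4.

```java
public class Complexity {
    // Three decision points below, so V(G) = 3 + 1 = 4.
    static int classify(int n) {
        int steps = 0;
        if (n < 0) n = -n;        // decision 1
        if (n == 0) return 0;     // decision 2
        while (n > 1) {           // decision 3
            n /= 2;
            steps++;
        }
        return steps;
    }
}
```

A basis-path test set for this method would therefore need four test cases, one per independent path.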
Graph Complexity (Cyclomatic Complexity) V(G) Cont.
(Figure not reproduced.)
Cyclomatic Complexity V(G) – Basis Path Testing
(Figure not reproduced.)
What is the Cyclomatic Complexity V(G)?

public void howComplex() {
    int i = 20;
    while (i < 10) {
        System.out.printf("i is %d", i);
        if (i % 2 == 0) {
            System.out.println("even");
        } else {
            System.out.println("odd");
        }
    }
}

(Two decision points, the while and the if, so V(G) = 2 + 1 = 3.)
Grey Box Testing
 Combination of white box and black box testing
 Tester has access to source code, but uses it in a restricted manner
 Test cases are still written using specifications based on expected
outputs for given input
 These test cases are informed by program code structure

Testing Web Applications
• WebApp testing is a collection of related activities with a single goal: to uncover errors in WebApp content, function, usability, navigability, performance, capacity, and security.
• To accomplish this, a testing strategy that encompasses both reviews and executable testing is applied.
Dimensions of Quality
• Content is evaluated at both a syntactic and a semantic level.
• At the syntactic level, spelling, punctuation, and grammar are assessed for text-based documents.
• At the semantic level, correctness of the information presented, consistency across the entire content object and related objects, and lack of ambiguity are all assessed.
Dimensions of Quality Cont.
 Function is tested to uncover errors that indicate lack of conformance to
customer requirements
 Structure is assessed to ensure that it properly delivers WebApp
content
 Usability is tested to ensure that each category of user is supported by
the interface and can learn and apply all required navigation.
 Navigability is tested to ensure that all navigation syntax and
semantics are exercised to uncover any navigation errors
 Ex., dead links, improper links, and erroneous links
 Performance is tested under a variety of operating conditions,
configurations and loading
 to ensure that the system is responsive to user interaction and handles
extreme loading
 Compatibility is tested by executing the WebApp in a variety of different
host configurations on both the client and server sides
 Interoperability is tested to ensure that the WebApp properly interfaces
with other applications and/or databases
54
Content Testing
 Errors in WebApp content can be
 as trivial as minor typographical errors or
 as significant as incorrect information, improper organization, or
violation of intellectual property laws
 Content testing attempts to uncover these and many other problems
before the user encounters them
 Content testing combines both reviews and the generation of
executable test cases
 Reviews are applied to uncover semantic errors in content
 Executable testing is used to uncover content errors that can be
traced to dynamically derived content

User Interface Testing
Verification and validation of a WebApp user interface occurs at three
distinct points
1. During requirements analysis
 the interface model is reviewed to ensure that it conforms to
stakeholder requirements
2. During design
 the interface design model is reviewed to ensure that generic quality
criteria established for all user interfaces have been achieved
3. During testing
 the focus shifts to the execution of application-specific aspects of user
interaction as they are manifested by interface syntax and semantics
 In addition, testing provides a final assessment of usability
55
Component-Level Testing
 Component-level testing (function testing) focuses on a set of tests
that attempt to uncover errors in WebApp functions
 Each WebApp function is a software component (implemented in one
of a variety of programming languages)
 A WebApp function can be tested using black-box (and in some cases,
white-box) techniques
 Component-level test cases are often driven by forms-level input
 Once forms data are defined, the user selects a button or other
control mechanism to initiate execution

Navigation Testing
 The job of navigation testing is to ensure that
 the mechanisms that allow the WebApp user to travel through the
WebApp are all functional, and
 each Navigation Semantic Unit (NSU) can be achieved by the
appropriate user category
 Navigation mechanisms that should be tested include
 Navigation links,
 Redirects,
 Bookmarks,
 Frames and framesets,
 Site maps,
 Internal search engines
56
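One navigation check, detecting dead links, can be sketched as follows. This is an illustrative sketch, not tooling from the slides: it issues a HEAD request to each URL and treats any status of 400 or above, or an unreachable host, as a dead link. The 5-second timeout is an arbitrary assumption.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class LinkChecker {
    // Returns true if the link answers with an HTTP status below 400.
    static boolean isAlive(String link) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(link).openConnection();
            conn.setRequestMethod("HEAD"); // headers only, no body download
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() < 400;
        } catch (Exception e) {
            return false;                  // malformed or unreachable = dead
        }
    }

    public static void main(String[] args) {
        for (String link : args) {
            System.out.println(link + " -> " + (isAlive(link) ? "OK" : "DEAD"));
        }
    }
}
```

A real navigation suite would also crawl the site to discover links, and cover redirects, bookmarks, framesets, site maps, and internal search, as listed above.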
Configuration Testing
 Configuration variability and instability are important factors that
make WebApp testing a challenge.
 Hardware, operating system(s), browsers, storage capacity, network
communication speeds, and a variety of other client-side factors are
difficult to predict for each user.
 One user’s impression of the WebApp and the manner in which he/she
interacts with it can differ significantly.
 Configuration testing tests a set of probable client-side and server-
side configurations
 to ensure that the user experience is the same on all of them and to
isolate errors that may be specific to a particular configuration

Security Testing
 Security tests are designed to probe
 vulnerabilities of the client-side environment,
 the network communications that occur as data are passed from
client to server and back again, and
 the server-side environment.
 Each of these domains can be attacked, and it is the job of the
security tester to uncover weaknesses
 that can be exploited by those with the intent to do so.
57
Performance Testing
 Performance testing is used to uncover
 performance problems that can result from lack of server-side resources,
 inappropriate network bandwidth,
 inadequate database capabilities,
 faulty or weak operating system capabilities,
 poorly designed WebApp functionality, and
 other hardware or software issues that can lead to degraded client-server
performance

58
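The loading aspect can be sketched with a tiny harness, hypothetical and not from the slides, that runs an operation from many concurrent simulated users and reports the worst latency observed. The thread count, timeout, and simulated work are illustrative assumptions.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MiniLoadTest {
    // Runs op once per simulated user, concurrently; returns the worst
    // observed latency in milliseconds.
    static long worstLatencyMillis(Runnable op, int users) {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        ConcurrentLinkedQueue<Long> times = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                long t0 = System.nanoTime();
                op.run();                                   // work under load
                times.add((System.nanoTime() - t0) / 1_000_000);
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);    // arbitrary cap
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        long worst = 0;
        for (long t : times) worst = Math.max(worst, t);
        return worst;
    }

    public static void main(String[] args) {
        long worst = worstLatencyMillis(() -> {
            try { Thread.sleep(10); } catch (InterruptedException e) { }
        }, 20);
        System.out.println("worst latency: " + worst + " ms");
    }
}
```

Real performance testing would use a dedicated tool, vary load levels, and measure throughput and resource consumption as well as latency.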
Testing Object Oriented Applications
Unit Testing in the OO Context
 The concept of unit testing changes in object-
oriented software
 Encapsulation drives the definition of classes
and objects
 That is, each class and each instance of a class (object)
packages attributes (data) and the operations (methods
or services) that manipulate these data
 Rather than testing an individual module, the
smallest testable unit is the encapsulated class
 Unlike unit testing of conventional software,
 which focuses on the algorithmic detail of a module and
the data that flows across the module interface,
 class testing for OO software is driven by the
operations encapsulated by the class and the state
behavior of the class

59
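To make the contrast concrete, here is a hypothetical class-level test (the stack class is illustrative, not from the slides): the whole encapsulated class is the unit, and the test sequence exercises its operations while checking the state behaviour between them, rather than probing one procedure in isolation.

```java
public class StackOfInts {
    private int[] data = new int[10];
    private int top = 0;                 // state: number of stored elements

    public void push(int v) { data[top++] = v; }
    public int pop() { return data[--top]; }
    public boolean isEmpty() { return top == 0; }

    public static void main(String[] args) {
        // Class-level test: operations are exercised as a sequence and
        // the object's state is checked after each one.
        StackOfInts s = new StackOfInts();
        if (!s.isEmpty()) throw new AssertionError("new stack not empty");
        s.push(7);                       // operation should change state
        if (s.isEmpty()) throw new AssertionError("push did not change state");
        if (s.pop() != 7) throw new AssertionError("pop returned wrong value");
        if (!s.isEmpty()) throw new AssertionError("stack not empty after pop");
        System.out.println("class state-behaviour tests passed");
    }
}
```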
Integration Testing in the OO Context
 Object-oriented software does not have a hierarchical control
structure, so conventional top-down and bottom-up integration
strategies have little meaning
 There are two different strategies for integration testing of OO systems.
1. Thread-based testing
 integrates the set of classes required to respond to one input or event for the system
 Each thread is integrated and tested individually
 Regression testing is applied to ensure that no side effects occur
2. Use-based testing
 begins the construction of the system by testing those classes (called independent
classes) that use very few (if any) server classes
 After the independent classes are tested, the next layer of classes, called dependent
classes, that use the independent classes are tested
 Cluster testing is one step in the integration testing of OO software
 Here, a cluster of collaborating classes is exercised by designing test
cases that attempt to uncover errors in the collaborations
60
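A hypothetical sketch of use-based integration (the class names are illustrative assumptions): the independent class, InMemoryLog, uses no other application classes and is tested first; the dependent class, Service, which uses it, is then integrated and tested in combination.

```java
import java.util.ArrayList;
import java.util.List;

public class UseBasedDemo {
    // Independent class: depends on no other application class,
    // so it is tested first, in isolation.
    static class InMemoryLog {
        private final List<String> lines = new ArrayList<>();
        void record(String msg) { lines.add(msg); }
        int size() { return lines.size(); }
    }

    // Dependent class: uses InMemoryLog, so it is integrated and
    // tested in the next layer.
    static class Service {
        private final InMemoryLog log;
        Service(InMemoryLog log) { this.log = log; }
        int doubleIt(int n) {
            log.record("doubleIt(" + n + ")");
            return 2 * n;
        }
    }

    public static void main(String[] args) {
        // Layer 1: the independent class in isolation.
        InMemoryLog log = new InMemoryLog();
        log.record("hello");
        if (log.size() != 1) throw new AssertionError("log failed");

        // Layer 2: the dependent class integrated with the tested class.
        Service svc = new Service(new InMemoryLog());
        if (svc.doubleIt(21) != 42) throw new AssertionError("service failed");
        System.out.println("use-based integration tests passed");
    }
}
```

Regression testing after each layer, as with thread-based testing, guards against side effects introduced by the newly integrated classes.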
Validation Testing in an OO Context
 At the validation or system level, the details of class connections
disappear
 Like conventional validation, the validation of OO software focuses on
user-visible actions and user-recognizable outputs from the system
 To assist in the derivation of validation tests, the tester should draw upon
use cases that are part of the requirements model
 Conventional black-box testing methods can be used to drive
validation tests

61
Thank You
