
SOFTWARE QUALITY ASSURANCE

Software quality assurance is a planned and systematic pattern of all actions necessary to
provide adequate confidence that an item or product conforms to established technical
requirements.

It is a set of activities designed to evaluate the process by which the products are developed or
manufactured.

SQA Encompasses

● A quality management approach
● Effective software engineering technology (methods and tools)
● Formal technical reviews that are applied throughout the software process
● A multitier testing strategy
● Control of software documentation and the changes made to it
● A procedure to ensure compliance with software development standards
● Measurement and reporting mechanisms

SQA Activities

Software quality assurance is composed of a variety of functions associated with two
different constituencies: the software engineers who do technical work, and an SQA group
that has responsibility for quality assurance planning, record keeping, analysis, and
reporting.

Following activities are performed by an independent SQA group:

Prepares an SQA plan for a project:

The plan is developed during project planning and is reviewed by all stakeholders. The
plan governs quality assurance activities performed by the software engineering team and
the SQA group. The plan identifies evaluations to be performed, audits and reviews to be
performed, standards that apply to the project, techniques for error reporting and tracking,
documents to be produced by the SQA group, and the amount of feedback provided to the
software project team.

Participates in the development of the project's software process description:

The software team selects a process for the work to be performed. The SQA group reviews
the process description for compliance with organizational policy, internal software
standards, externally imposed standards (e.g. ISO-9001), and other parts of the software
project plan.

Reviews software engineering activities to verify compliance with the defined


software process:
The SQA group identifies, reports, and tracks deviations from the process and verifies that
corrections have been made.

Audits designated software work products to verify compliance with those defined as
a part of the software process:

The SQA group reviews selected work products; identifies, documents, and tracks deviations;
verifies that corrections have been made; and periodically reports the results of its work to the
project manager.

Ensures that deviations in software work and work products are documented and
handled according to a documented procedure:

Deviations may be encountered in the project method, process description, applicable


standards, or technical work products.

Records any noncompliance and reports to senior management:

Noncompliance items are tracked until they are resolved.

SOFTWARE REVIEW

Software review is a systematic inspection of software by one or more individuals who work
together to find and resolve errors and defects in the software during the early stages of the
Software Development Life Cycle (SDLC). Software review is an essential part of the SDLC
that helps software engineers validate the quality, functionality, and other vital features and
components of the software. It is a complete process that includes examining the software
product and making sure that it meets the requirements stated by the client.

Usually performed manually, software review is used to verify various documents such as
requirements, system designs, code, test plans, and test cases.

Objectives of Software Review:

● To improve the productivity of the development team.

● To make the testing process time and cost effective.

● To make the final software with fewer defects.

● To eliminate process inadequacies.

Process of Software Review:


Types of Software Reviews:

1)Software Peer Review:

Peer review is the process of assessing the technical content and quality of a work product,
and it is usually conducted by the author of the work product along with other developers.
Peer review is performed in order to find and resolve defects in the software, and its
quality is also checked by other members of the team.
Peer Review has following types:

(i) Code Review:

Computer source code is examined in a systematic way.

(ii) Pair Programming:

It is a form of code review in which two developers write code together at the same workstation.

(iii) Walkthrough:

Members of the development team and other interested parties are guided through the work
product by the author, and the participants ask questions and make comments about possible defects.

(iv) Technical Review:

A team of highly qualified individuals examines the software product for its client’s use and
identifies technical defects from specifications and standards.

(v) Inspection:

In inspection the reviewers follow a well-defined process to find defects.

2)Software Management Review:


Software management review evaluates the status of the work. In this review, decisions regarding
downstream activities are taken.

3)Software Audit Review:

Software audit review is a type of external review in which one or more auditors, who are not
part of the development team, conduct an independent inspection of the software product
and its processes to assess their compliance with stated specifications and standards. This
is done by managerial-level people.

Advantages of Software Review:

● Defects can be identified at an earlier stage of development (especially in formal reviews).

● Earlier inspection also reduces the maintenance cost of software.

● It can be used to train technical authors.

● It can be used to remove process inadequacies that encourage defects.

SOFTWARE INSPECTIONS

The term software inspection originated at IBM in the early 1970s, when it was noticed
that testing alone was not sufficient to attain high-quality software for large
applications.

Inspection is used to find defects in the code and remove them efficiently. This
prevents defects from propagating and complements testing in removing defects. The software
inspection method has proved highly efficient at removing defects and improving
software quality.

There are some factors that contribute to high-quality software:

1) Formal design and code inspections:

This factor refers to formal inspections that follow defined protocols, such as training of
participants and material distributed for inspection in advance. Both moderators and
recorders are present, and defect statistics are recorded and analyzed.

2) Formal quality assurance:

This factor refers to an active software quality assurance group, which joins software
development teams to support them in the development of high-quality software.

3) Formal testing:
This factor implies that the test process is planned and conducted under defined conditions:
● A test plan is created for the application.
● Specifications are complete enough that test cases can be designed without significant gaps.
● Library control tools are used.
● Test coverage analysis tools are used.

Software Inspection Process :

The inspection process was developed in the mid-1970s and was later extended and revised. The
process must have an entry criterion that determines whether the inspection process is
ready to begin; this prevents incomplete products from entering the inspection process.
Entry criteria can be checklist items such as "the document has been spell-checked".

There are some of the stages in the software inspection process such as-

1) Planning: The moderator plans the inspection.

2) Overview meeting: The background of the work product is described by the author.

3) Preparation: The work product is examined by each inspector to identify possible
defects.

4) Inspection meeting: The reader reads the work product part by part during this
meeting, and the inspectors point out the defects of each part.

5) Rework: After the inspection meeting, the author changes the work product according
to the rework plans.

6) Follow-up: The changes made by the author are checked to make sure that
everything is correct.

Advantages of Software Inspection:

● Helps in the early removal of major defects.


● This inspection enables a numeric quality assessment of any technical document.
● Software inspection helps in process improvement.
● It helps in staff training on the job.
● Software inspection helps in gradual productivity improvement.

Disadvantages of Software Inspection:

● It is a time-consuming process.
● Software inspection requires discipline.

VERIFICATION AND VALIDATION

Verification and validation is the process of checking whether a software system satisfies
specifications and standards and fulfills its intended purpose. Barry Boehm described
verification and validation as follows:

● Verification: Are we building the product right?

● Validation: Are we building the right product?

Verification:

Verification is the process of checking that the software achieves its goal without any bugs. It
is the process to ensure that the product being developed is being built right. It verifies
whether the developed product fulfills the requirements that we have. Verification is
static testing.

Activities involved in verification:

● Inspections
● Reviews
● Walkthroughs
● Desk-checking

Validation:

Validation is the process of checking whether the software product is up to the mark, in
other words, whether it meets the high-level requirements. It is the process of checking the
validity of the product, i.e., it checks that what we are developing is the right product. It is
validation of the actual product against the expected product. Validation is dynamic testing.

Activities involved in validation:

● Black box testing


● White box testing
● Unit testing
● Integration testing
Verification is followed by Validation.

The difference between Verification and Validation is as follow

Verification

1. It includes checking documents, design, code, and programs.
2. Verification is static testing.
3. It does not include the execution of the code.
4. Methods used in verification are reviews, walkthroughs, inspections, and desk-checking.
5. It checks whether the software conforms to specifications or not.
6. It can find bugs in the early stages of development.
7. The goal of verification is the application and software architecture and specification.
8. The quality assurance team does verification.
9. It comes before validation.
10. It consists of checking of documents/files and is performed by humans.

Validation

1. It includes testing and validating the actual product.
2. Validation is dynamic testing.
3. It includes the execution of the code.
4. Methods used in validation are black box testing, white box testing, and non-functional testing.
5. It checks whether the software meets the requirements and expectations of the customer or not.
6. It can find bugs that could not be found by the verification process.
7. The goal of validation is the actual product.
8. Validation is executed on software code with the help of the testing team.
9. It comes after verification.
10. It consists of execution of the program and is performed by computer.

CLEAN ROOM APPROACH TESTING

Cleanroom testing was pioneered by IBM. This kind of testing depends heavily on
walkthroughs, inspections, and formal verification. The programmers are not allowed to test
any of their code by executing it, apart from doing a little syntax checking using a compiler.
The software development philosophy is based on avoiding software defects by using a
rigorous inspection process. The objective of this approach is zero-defect software.

The name 'cleanroom' was derived from the analogy with semiconductor fabrication
units. In these units (clean rooms), defects are avoided by manufacturing in an
ultra-clean atmosphere. In this kind of development, inspections that check the
consistency of components with their specifications have replaced unit testing.

This technique reportedly produces documentation and code that are more reliable and maintainable
than other development methods that rely heavily on code execution-based testing.

The cleanroom approach to software development relies on five characteristics:

1)Formal specification:

The software to be developed is formally specified. A state-transition model that shows
system responses to stimuli is used to express the specification.

2)Incremental development:

The software is partitioned into increments that are developed and validated separately
using the cleanroom process. These increments are specified, with
customer input, at an early stage in the process.

3)Structured programming:

Only a limited number of control and data abstraction constructs are
used. The program development process is a process of stepwise refinement of the
specification.

4)Static verification:

The developed software is statically verified using rigorous software
inspections. There is no unit or module testing of code components.

5)Statistical testing of the system:

The integrated software increment is tested statistically to determine its reliability.
These statistical tests are based on the operational profile, which is
developed in parallel with the system specification.

TESTING STRATEGIES

A strategy for software testing may be viewed in the context of the spiral.

Unit testing

Unit testing begins at the centre of the spiral and concentrates on each unit as it is implemented in source code.

Integration testing

Integration testing focuses on the design and construction of the software architecture.

Validation testing

Validation testing checks that all requirements (functional, behavioral, and performance) are
validated against the constructed software.

System testing

System testing confirms that all system elements mesh properly and that overall system function and performance are achieved.

Testing strategy from a procedural point of view

From a procedural point of view, testing includes the following steps.

1) Unit testing
2) Integration testing
3) High-order tests
4) Validation testing
DIFFERENT TYPES AND LEVELS OF TESTING

Functional Testing

Functional testing relates to testing how a product functions. Performing a functional test
entails checking and testing each functionality of the software to ensure you are getting the
expected results.

1)Unit testing

Level of software testing where individual units or components of the software are tested.
The purpose is to validate that each unit of the software performs as designed. A unit is the
smallest testable part of any software. It usually has one or a few inputs and usually a single
output.
Benefits- Reliable, Cost-effective, Easy to maintain code, Faster, Debugging is easy.
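
As a minimal sketch of what a unit test looks like in practice (assuming Python's built-in unittest module and a hypothetical discount_price function that is not part of these notes), each test exercises one small unit in isolation and checks a single expected result:

import unittest

def discount_price(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Because the unit has no external dependencies, these tests are fast, repeatable, and easy to debug when they fail.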

2)Integration testing

It is a level of software testing where individual units are combined and tested as a group.
The purpose of this level of testing is to expose faults in the integration between integrated
units. Test drivers and test stubs are used to assist in integration testing.

Stubs are used during top-down integration testing in order to simulate the behavior of the
lower-level modules that are not yet integrated.

Drivers are used in the bottom-up integration testing approach; they simulate the behavior
of the upper-level module that is not integrated yet. (A short sketch follows the list of approaches below.)

Approaches:
● Big Bang- no incremental testing takes place prior to all system’s components being
combined to form the system.
● Top-down- simulate the behavior of the lower-level modules that are not yet
integrated.
● Bottom-up- simulate the behavior of the upper-level modules that are not yet
integrated.
● Hybrid- is a combination of both top-down and bottom-up integration testing.
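
The following is a rough sketch, not a prescribed implementation, using Python's unittest.mock and hypothetical OrderService and PaymentGateway names: a stub stands in for a lower-level module that is not integrated yet, while the test code itself plays the role of a driver exercising the upper-level module.

from unittest.mock import Mock

class OrderService:
    """Upper-level module under test; depends on a lower-level payment module."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        # Delegates to the lower-level module, which may not be integrated yet.
        if self.payment_gateway.charge(amount):
            return "CONFIRMED"
        return "REJECTED"

# Stub: simulates the not-yet-integrated lower-level PaymentGateway module.
payment_stub = Mock()
payment_stub.charge.return_value = True

# The test code below acts as the driver that calls the upper-level module.
service = OrderService(payment_stub)
assert service.place_order(250) == "CONFIRMED"
payment_stub.charge.assert_called_once_with(250)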

3)System testing — is testing conducted on a complete, integrated system to evaluate the
system’s compliance with its specified requirements.

4)Interface testing — is defined as a software testing type that verifies whether the
communication between two different software systems is done correctly.

5)Regression testing — is re-running functional and non-functional tests to ensure that
previously developed and tested software still performs after a change.

6)Acceptance testing — is a test conducted to determine if the requirements of a
specification or contract are met.

Non-functional Testing

Non-functional testing deals with testing the non-functional aspects of the software, such as
usability, performance, security, reliability, and more. This test is performed after functional
testing has been done. The non-functional test does not focus on whether the software
works or not; it focuses on how well it works.

1)Documentation testing — It is a type of black-box testing that ensures that
documentation about how to use the system matches what the system does,
providing proof that system changes and improvements have been documented.
The key target areas for testing of documentation are:
Instructions
Examples
Messages
Samples

2)Installation testing — Most software systems have installation procedures that are needed
before they can be used for their main purpose. Testing these procedures to achieve an
installed software system that may be used is known as installation testing. These
procedures may involve full or partial upgrades and install/uninstall processes.

3)Performance testing — In software quality assurance, performance testing is a general
testing practice performed to determine how a system performs in terms of responsiveness
and stability under a particular workload.

4)Reliability testing —is a field of software testing that relates to testing a software’s ability to
function, given environmental conditions, for a particular amount of time. This testing helps
to discover many problems in software design and functionality.
5)Security testing — is a process intended to reveal flaws in the security mechanisms of an
information system that protect data and maintain functionality as intended.
Focusing areas:
Network security
System security
Client-Server security

LEVEL OF TESTING

There are different levels of testing:

1)Unit Testing :

In this type of testing, errors are detected by individually testing each component or unit of
the software to ensure that it is fit for use. A unit is the smallest testable part of the software.

2)Integration Testing :

In this testing, two or more modules that have been unit tested are integrated and tested together,
i.e., the interacting components are combined and then verified to check whether the integrated
modules work as expected or not; interface errors are also detected.

3)System Testing :

In system testing, the complete, integrated software is tested, i.e., all the system elements
forming the system are tested as a whole to check that they meet the requirements of the system.

4)Acceptance Testing :

It is a kind of testing conducted to ensure that the requirements of the users are fulfilled
prior to delivery and that the software works correctly in the user’s working environment.

These tests can be conducted at various stages of software development. Each level of
testing corresponds to a phase of software development.

While performing software testing, the following testing principles must be applied by every
software engineer:
● All tests should be traceable to customer requirements.
● Tests should be planned long before testing begins.
● The Pareto principle applies to software testing: 80% of all errors identified
during testing will likely be traceable to 20% of all program modules.
● Testing should begin “in the small” and progress toward testing “in the large”.
● Exhaustive testing, which means testing all possible combinations of data, is
not possible.
● To be most effective, testing should be conducted by an independent third party.

BLACK BOX AND WHITE BOX TESTING

There are different methods that can be used for software testing.

Black-Box Testing

Black box testing is a type of software testing in which the internal structure of the software is
not known to the tester; only the functionality is examined. The testing is done without internal
knowledge of the product.

Black box testing can be done in following ways:

1. Syntax Driven Testing –

This type of testing is applied to systems that can be syntactically represented by some
language, for example compilers, or languages that can be represented by a context-free
grammar. In this approach, the test cases are generated so that each grammar rule is used at least
once.

2. Equivalence partitioning –

It is often seen that many types of inputs work similarly, so instead of testing all of them
separately we can group them together and test only one input from each group. The idea is to
partition the input domain of the system into a number of equivalence classes such that each
member of a class works in a similar way, i.e., if a test case in one class results in some error,
other members of the class would result in the same error.

The technique involves two steps:

● Identification of equivalence classes – Partition the input domain into at least two
sets: valid values and invalid values. For example, if the valid range is 0 to 100, then
select one valid input like 49 and one invalid input like 104.
● Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, considering that no two
invalid inputs mask each other.
To calculate the square root of a number, for example, the equivalence classes will be as follows (a short test sketch appears after the list):

(a) Valid inputs:

● Whole number which is a perfect square- output will be an integer.


● Whole number which is not a perfect square- output will be decimal number.
● Positive decimals

(b) Invalid inputs:

● Negative numbers (integer or decimal).
● Characters other than numbers, like “a”, “!”, “;”, etc.
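
As a minimal sketch, assuming a hypothetical safe_sqrt function (the function name and its input checks are illustrative, not part of the notes), one representative test per equivalence class is enough:

import math

def safe_sqrt(value):
    """Hypothetical unit under test: square root restricted to non-negative numbers."""
    if not isinstance(value, (int, float)) or isinstance(value, bool):
        raise TypeError("input must be a number")
    if value < 0:
        raise ValueError("input must be non-negative")
    return math.sqrt(value)

# One representative input per equivalence class.
assert safe_sqrt(49) == 7.0               # valid: perfect square -> integer result
assert abs(safe_sqrt(2) - 1.4142) < 1e-3  # valid: non-perfect square -> decimal result
assert abs(safe_sqrt(2.25) - 1.5) < 1e-9  # valid: positive decimal

try:
    safe_sqrt(-4)                         # invalid: negative number
except ValueError:
    pass

try:
    safe_sqrt("a")                        # invalid: non-numeric input
except TypeError:
    pass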

3. Boundary value analysis –

Boundaries are very good places for errors to occur. Hence, if test cases are designed for
boundary values of the input domain, then the efficiency of testing improves and the probability of
finding errors also increases. For example, if the valid range is 10 to 100, then test for 10 and 100
in addition to other valid and invalid inputs.
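
As a rough illustration for the 10-to-100 range mentioned above (the accepts function and its range check are hypothetical), boundary value analysis picks values on, just inside, and just outside each boundary:

def boundary_values(low, high):
    """Return boundary-value-analysis test inputs for an inclusive integer range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accepts(value, low=10, high=100):
    """Hypothetical unit under test: range check for the valid range [low, high]."""
    return low <= value <= high

for value in boundary_values(10, 100):
    expected = 10 <= value <= 100
    assert accepts(value) == expected, f"unexpected result at boundary value {value}"
# This exercises 9, 10, 11, 99, 100, 101 -- the places where off-by-one errors typically hide.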

4. Cause effect Graphing –

This technique establishes a relationship between logical inputs (causes) and
corresponding actions (effects). The causes and effects are represented using Boolean
graphs. The following steps are followed (a small sketch of step 4 appears after the list):

1. Identify inputs (causes) and outputs (effects).
2. Develop the cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
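
A tiny sketch of step 4, using a hypothetical login example (the table contents and the login function are illustrative assumptions): each decision-table rule, mapping a combination of causes to an expected effect, becomes one test case.

# Causes: valid_user, valid_password.  Effect: expected login outcome.
decision_table = [
    {"valid_user": True,  "valid_password": True,  "effect": "GRANT_ACCESS"},
    {"valid_user": True,  "valid_password": False, "effect": "SHOW_PASSWORD_ERROR"},
    {"valid_user": False, "valid_password": True,  "effect": "SHOW_USER_ERROR"},
    {"valid_user": False, "valid_password": False, "effect": "SHOW_USER_ERROR"},
]

def login(valid_user, valid_password):
    """Hypothetical unit under test implementing the cause-effect logic."""
    if not valid_user:
        return "SHOW_USER_ERROR"
    if not valid_password:
        return "SHOW_PASSWORD_ERROR"
    return "GRANT_ACCESS"

# Each rule in the decision table is converted into one test case.
for rule in decision_table:
    assert login(rule["valid_user"], rule["valid_password"]) == rule["effect"]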

5. Requirement based testing –

It includes validating the requirements given in the SRS of the software system.

6. Compatibility testing –

The test case result depends not only on the product but also on the infrastructure used to deliver
the functionality. When the infrastructure parameters are changed, the software is still expected to
work properly. Some parameters that generally affect the compatibility of software are:

1. Processor (Pentium 3,Pentium 4) and number of processors.


2. Architecture and characteristic of machine (32 bit or 64 bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).

Advantages
● Efficient when used on large systems.
● Since the tester and developer are independent of each other, testing is balanced
and unprejudiced.
● Tester can be non-technical.
● There is no need for the tester to have detailed functional knowledge of system.
● Tests will be done from an end user's point of view, because the end user should
accept the system. (This testing technique is sometimes also called Acceptance
testing.)
● Testing helps to identify vagueness and contradictions in functional specifications.
● Test cases can be designed as soon as the functional specifications are complete.

Disadvantages

● Test cases are challenging to design without having clear functional specifications.
● It is difficult to identify tricky inputs if the test cases are not developed based on
specifications.
● It is difficult to identify all possible inputs in limited testing time. As a result, writing
test cases may be slow and difficult.
● There are chances of having unidentified paths during the testing process.
● There is a high probability of repeating tests already performed by the programmer.

White box Testing

White box testing techniques analyze the internal structure of the software: the data structures
used, the internal design, the code structure, and the working of the software, rather than just the
functionality as in black box testing. It is also called glass box testing, clear box testing, or
structural testing.

Working process of white box testing:

● Input: Requirements, Functional specifications, design documents, source code.

● Processing: Performing risk analysis for guiding through the entire process.

● Proper test planning: Designing test cases so as to cover the entire code. Execute
and repeat until error-free software is reached; the results are also communicated.

● Output: Preparing final report of the entire testing process

Testing techniques:

1)Statement coverage:

In this technique, the aim is to traverse all statement at least once. Hence, each line of code
is tested. In case of a flowchart, every node must be traversed at least once. Since all lines
of code are covered, helps in pointing out faulty code.
2)Branch Coverage:

In this technique, test cases are designed so that each branch from every decision point is
traversed at least once. In a flowchart, all edges must be traversed at least once.
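
A small sketch (the classify function is a hypothetical example) showing the difference: a single test case can execute every statement, but branch coverage still needs an input that takes the false outcome of the decision.

def classify(n):
    """Hypothetical unit under test with one decision point."""
    label = "non-negative"
    if n < 0:
        label = "negative"
    return label

# Statement coverage: classify(-1) alone executes every statement,
# because the body of the if is entered.
assert classify(-1) == "negative"

# Branch coverage additionally requires the false outcome of the decision,
# so a second test case is needed.
assert classify(5) == "non-negative"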

3)Condition Coverage:

In this technique, all individual conditions must be covered.

4)Multiple Condition Coverage:

In this technique, all the possible combinations of the possible outcomes of conditions are
tested at least once.

5)Basis Path Testing:

In this technique, a control flow graph is made from the code or flowchart, and then the cyclomatic
complexity is calculated, which defines the number of independent paths, so that a minimal
number of test cases can be designed, one for each independent path. (A worked sketch follows
the steps below.)

Steps:

● Make the corresponding control flow graph


● Calculate the cyclomatic complexity
● Find the independent paths
● Design test cases corresponding to each independent path
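
A rough worked sketch, using a hypothetical grade function with two decision points: counting the predicate nodes gives the cyclomatic complexity, and each independent path gets one test case.

def grade(score):
    """Hypothetical unit under test with two decision points (predicate nodes)."""
    if score >= 80:          # decision 1
        return "A"
    if score >= 50:          # decision 2
        return "B"
    return "C"

# Cyclomatic complexity: V(G) = P + 1, where P is the number of predicate nodes.
# Here P = 2, so V(G) = 3, and three independent paths each need a test case:
assert grade(90) == "A"   # path 1: decision 1 true
assert grade(60) == "B"   # path 2: decision 1 false, decision 2 true
assert grade(30) == "C"   # path 3: both decisions false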

6)Loop Testing:

Loops are widely used and are fundamental to many algorithms; hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
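
A minimal sketch (the total function looping over a list is a hypothetical example) exercising classic simple-loop test cases: skip the loop entirely, one pass through it, and a typical number of passes.

def total(values):
    """Hypothetical unit under test containing a simple loop."""
    result = 0
    for v in values:
        result += v
    return result

# Simple-loop testing: zero iterations, exactly one, and a typical count.
assert total([]) == 0             # loop skipped entirely
assert total([7]) == 7            # exactly one pass through the loop
assert total([1, 2, 3, 4]) == 10  # typical number of iterations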

Advantages
● White box testing is very thorough as the entire code and structures are tested.
● It results in the optimization of code, removing errors and extra lines of code.
● It can start at an earlier stage as it doesn’t require any interface as in case of black
box testing.
● Easy to automate.

Disadvantages

● Main disadvantage is that it is very expensive.


● Redesign of code and rewriting code needs test cases to be written again.
● Testers are required to have in-depth knowledge of the code and programming
language as opposed to black box testing.
● Missing functionalities cannot be detected, as only the code that exists is tested.
● Very complex and at times not realistic
Difference between black box and white box testing

Black Box testing

● It is a software testing technique that examines the functionality of software without
knowing its internal structure or coding.
● Black Box Testing is also known as functional testing, data-driven testing, and
closed-box testing.
● In black-box testing, less programming knowledge is required.
● It is not well suited for algorithm testing.
● It is done at higher levels of testing that are system testing and acceptance testing.
● It is hard to automate black-box testing due to the dependency of testers and
programmers on each other.
● It is mainly performed by the software testers.
● It is less time-consuming. In Black box testing, time consumption depends upon the
availability of the functional specifications.
● The base of this testing is external expectations.
● It is less exhaustive than White Box testing.
● In black-box testing, no implementation knowledge is required.
● The main objective of implementing black box testing is to specify the business
needs or the customer's requirements.
● In black-box testing, defects are identified once the code is ready.
● It can be performed by trial and error technique.
● Mainly, there are three types of black-box testing: functional testing, Non-Functional
testing, and Regression testing.
● It does not find the errors related to the code.

White Box testing

● In white-box testing, the internal structure of the software is known to the tester.
● It is also known as structural testing, clear box testing, code-based testing, and
transparent testing.
● In white-box testing, there is a requirement of programming knowledge.
● It is well suited and recommended for algorithm testing.
● It is done at lower levels of testing that are unit testing and integration testing.
● It is easy to automate the white box testing.
● It is mainly performed by developers.
● It is more time-consuming. It takes a long time to design test cases due to lengthy
code.
● The base of this testing is coding which is responsible for internal working.
● It is more exhaustive than Black Box testing.
● In white-box testing, there is a requirement of implementation knowledge.
● Its main objective is to check the code quality.
● In white-box testing, there is a possibility of early detection of defects.
● It can test data domain and data boundaries in a better way.
● The types of white box testing are – Path testing, Loop testing, and Condition testing.
● In white-box testing, there is the detection of hidden errors. It also helps to optimize
the code.

TESTING OBJECT ORIENTED APPLICATION

The objective of testing, stated simply, is to find the greatest possible number of
errors with a manageable amount of effort applied over a realistic time span. Although this
fundamental objective remains unchanged for object-oriented software, the nature of
object-oriented software changes both testing strategy and testing tactics.

Unit Testing in the OO Context

When object-oriented software is considered, the concept of the unit changes. Encapsulation
drives the definition of classes and objects. This means that each class and each instance
of a class packages attributes (data) and the operations that manipulate these data. An
encapsulated class is usually the focus of unit testing. However, operations (methods) within
the class are the smallest testable units. Because a class can contain a number of different
operations, and a particular operation may exist as part of a number of different classes, the
tactics applied to unit testing must change.

You can no longer test a single operation in isolation (the conventional view of unit testing)
but rather as part of a class. To illustrate, consider a class hierarchy in which an operation X
is defined for the superclass and is inherited by a number of subclasses. Each subclass
uses operation X, but it is applied within the context of the private attributes and operations
that have been defined for the subclass. Because the context in which operation X is used
varies in subtle ways, it is necessary to test operation X in the context of each of the
subclasses. This means that testing operation X in a stand-alone fashion (the conventional
unit-testing approach) is usually ineffective in the object-oriented context.
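
A short sketch of this point, using a hypothetical Account hierarchy (the class names and withdraw rules are illustrative assumptions): the inherited operation is defined once in the superclass, but its behavior depends on state that each subclass sets up differently, so it must be retested in each subclass context.

class Account:
    """Superclass defining the inherited operation (the 'operation X' of the text)."""
    overdraft_limit = 0

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if self.balance - amount < -self.overdraft_limit:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class SavingsAccount(Account):
    overdraft_limit = 0        # subclass context: no overdraft allowed

class CheckingAccount(Account):
    overdraft_limit = 100      # subclass context: overdraft up to 100

# The same inherited operation behaves differently in each subclass context,
# so class testing exercises it separately for every subclass.
assert CheckingAccount(50).withdraw(120) == -70
try:
    SavingsAccount(50).withdraw(120)
except ValueError:
    pass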

Class testing for OO software is the equivalent of unit testing for conventional software.
Unlike unit testing of conventional software, which tends to focus on the algorithmic detail of
a module and the data that flow across the module interface, class testing for OO software is
driven by the operations encapsulated by the class and the state behavior of the class.

Integration Testing in the OO Context

Because object-oriented software does not have an obvious hierarchical control structure,
traditional top-down and bottom-up integration strategies have little meaning.
In addition, integrating operations one at a time into a class (the conventional incremental
integration approach) is often impossible because of the direct and indirect interactions of
the components that make up the class.

There are two different strategies for integration testing of OO systems
[Bin94b]. The first, thread-based testing, integrates the set of classes required
to respond to one input or event for the system. Each thread is integrated and
tested individually. Regression testing is applied to ensure that no side effects
occur. The second integration approach, use-based testing, begins the construction
of the system by testing those classes (called independent classes) that use
very few (if any) server classes. After the independent classes are tested, the next
layer of classes, called dependent classes, that use the independent classes are
tested. This sequence of testing layers of dependent classes continues until the
entire system is constructed.

The use of drivers and stubs also changes when integration testing of OO systems
is conducted. Drivers can be used to test operations at the lowest level and
for the testing of whole groups of classes. A driver can also be used to replace
the user interface so that tests of system functionality can be conducted prior to
implementation of the interface. Stubs can be used in situations in which collaboration
between classes is required but one or more of the collaborating classes
has not yet been fully implemented.

Cluster testing is one step in the integration testing of OO software. Here, a
cluster of collaborating classes (determined by examining the CRC and object-relationship
model) is exercised by designing test cases that attempt to uncover errors in the
collaborations.

TESTING WEB APPLICATIONS

The strategy for WebApp testing adopts the basic principles for all software testing
and applies a strategy and tactics that are used for object-oriented systems.
The following steps summarize the approach:

1. The content model for the WebApp is reviewed to uncover errors.

2. The interface model is reviewed to ensure that all use cases can be
accommodated.

3. The design model for the WebApp is reviewed to uncover navigation errors.

4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.

5. Each functional component is unit tested.

6. Navigation throughout the architecture is tested.

7. The WebApp is implemented in a variety of different environmental configurations and is
tested for compatibility with each configuration.

8. Security tests are conducted in an attempt to exploit vulnerabilities in the
WebApp or within its environment.
9. Performance tests are conducted.

10. The WebApp is tested by a controlled and monitored population of end users. The
results of their interaction with the system are evaluated for errors.

Because many WebApps evolve continuously, the testing process is an ongoing activity,
conducted by support staff who use regression tests derived from the tests developed when
the WebApp was first engineered.

SOFTWARE MAINTENANCE AND EVOLUTIONS

Software Maintenance

Software Maintenance is the process of modifying a software product after it has been
delivered to the customer. The main purpose of software maintenance is to modify and
update software applications after delivery to correct faults and to improve performance.

Need for Maintenance –

Software Maintenance must be performed in order to:

● Correct faults.
● Improve the design.
● Implement enhancements.
● Interface with other systems.
● Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
● Migrate legacy software.
● Retire software.

Challenges in Software Maintenance:

The various challenges in software maintenance are given below:

● The typical lifetime of any software product is considered to be ten to fifteen years.
Since software maintenance is open-ended and may continue for decades, it can become
very expensive.
● Older software, which was intended to work on slow machines with less memory and
storage capacity, cannot keep up against newly arriving, more advanced software on
modern hardware.
● Changes are frequently left undocumented, which may cause more conflicts in the future.
● As technology advances, it becomes costly to maintain old software.
● Changes made can easily harm the original structure of the software, making subsequent
changes difficult.
Categories of Software Maintenance –

Maintenance can be divided into the following:

1)Corrective maintenance:

Corrective maintenance of a software product may be essential either to rectify some bugs
observed while the system is in use, or to enhance the performance of the system.

2)Adaptive maintenance:

This includes modifications and updates applied when the customers need the product to run on
new platforms or new operating systems, or when they need the product to interface with
new hardware or software.

3)Perfective maintenance:

A software product needs maintenance to support the new features that the users want or to
change different types of functionalities of the system according to the customer demands.

4)Preventive maintenance:

This type of maintenance includes modifications and updates to prevent future problems in
the software. It aims to address problems that are not significant at this moment but may
cause serious issues in the future.

Software Evolution

Software evolution is a term which refers to the process of developing software initially and
then updating it over time for various reasons, e.g., to add new features or to remove obsolete
functionalities. The evolution process includes the fundamental activities of change analysis,
release planning, system implementation, and releasing a system to customers.

The cost and impact of these changes are assessed to see how much of the system is affected by
the change and how much it might cost to implement the change. If the proposed changes
are accepted, a new release of the software system is planned. During release planning, all
the proposed changes (fault repair, adaptation, and new functionality) are considered.

A decision is then made on which changes to implement in the next version of the system.
The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented, and tested.

The necessity of Software evolution

Software evolution is necessary because of the following reasons:

a) Change in requirements with time:

With the passage of time, the organization’s needs and modus operandi of working can change
substantially, so in this frequently changing environment the tools (software) that they are
using need to change to maximize performance.

b) Environment change:

As the working environment changes, the things (tools) that enable us to work in that
environment also change proportionally. The same happens in the software world: as the working
environment changes, organizations need to reintroduce old software with
updated features and functionality to adapt to the new environment.

c) Errors and bugs:

As the age of the deployed software within an organization increases, its precision and
reliability decrease, and its ability to bear an increasingly complex workload also
continually degrades. In that case, it becomes necessary to avoid the use of obsolete and
aged software. All such obsolete software needs to undergo the evolution process in order
to become robust enough for the workload complexity of the current environment.

d) Security risks:

Using outdated software within an organization may put it at risk of various
software-based cyberattacks and could expose the confidential data associated
with the software that is in use. It therefore becomes necessary to avoid such security breaches
through regular assessment of the security patches/modules used within the software. If
the software is not robust enough to withstand current cyberattacks, it must be
changed (updated).

e) For having new functionality and features:

In order to improve performance, speed up data processing, and gain other functionality, an
organization needs to continuously evolve the software throughout its life cycle so that the
stakeholders and clients of the product can work efficiently.

Laws used for Software Evolution:

1)Law of continuing change:

This law states that any software system that represents some real-world reality must undergo
continuous change or it becomes progressively less useful in that environment.

2)Law of increasing complexity:

As an evolving program changes, its structure becomes more complex unless effective
efforts are made to avoid this phenomenon.

3)Law of conservation of organization stability:

Over the lifetime of a program, the rate of development of that program is approximately
constant and independent of the resources devoted to system development.

4)Law of conservation of familiarity:

This law states that during the active lifetime of the program, the changes made in
successive releases are almost constant.
