
Software Quality Testing

1.​ Explain V-Shaped Model

○​ The V-shaped model is a software development process (also
applicable to hardware development) which may be
considered an extension of the waterfall model.
○​ Instead of moving down in a linear way, the process steps
are bent upwards after the coding phase, to form the typical
V shape.
○​ The V-model demonstrates the relationships between each
phase of the development life cycle and its associated phase
of testing.
○​ The horizontal and vertical axes represent time or project
completeness (left-to-right) and level of abstraction
(coarsest-grain abstraction uppermost), respectively
2.​ Explain Shewhart Cycle
○​ The Shewhart cycle is the most popular tool used for quality
assurance. It originated with Walter A. Shewhart and was
popularised by Dr. W. Edwards Deming (it is also widely
known as the Deming cycle).
○​ This cycle for quality assurance consists of four steps:
■​ Plan - Establish objectives & processes required to
deliver the desired results
■​ Do - Implement the process developed
■​ Check - Monitor & evaluate the implemented process
by testing the results against the predetermined
objectives.
■​ Act - Apply actions necessary for improvement if the
results require change
○​ These steps are commonly abbreviated as PDCA.
○​ It is an effective method for monitoring quality assurance
because it analyses the existing conditions and methods used
to provide the product or service to customers.
○​ The goal is to ensure that excellence is inherent in every
component of the process.
○​ In addition, if the PDCA cycle is repeated throughout the
lifetime of the product or service, it helps improve internal
company efficiency.
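As a toy sketch (not from the original material), the four PDCA steps can be modelled as a loop that keeps acting on a process until a quality target is met; the numeric "process" and the 0.9 target are hypothetical stand-ins:

```python
def pdca_cycle(process, evaluate, act, target, max_cycles=10):
    """Plan is the initial `process`; Do/Check run and evaluate it;
    Act improves it; the cycle repeats until the target is met."""
    for _ in range(max_cycles):
        result = evaluate(process)   # Do + Check: run and evaluate
        if result >= target:         # objectives met: stop iterating
            return process, result
        process = act(process)       # Act: improve the process, repeat
    return process, result

# Toy usage: measured quality starts at 0.5; each Act step adds 0.1.
process, quality = pdca_cycle(
    process=0.5,
    evaluate=lambda p: p,
    act=lambda p: round(p + 0.1, 2),
    target=0.9,
)
```

Repeating the cycle over the lifetime of a product, as the text notes, corresponds to calling `pdca_cycle` again with a higher target.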
3.​ Software Quality Assurance Objectives
Software Quality Assurance was created with the following
objectives:
○​ Small to Zero Defects After Installation - One of the biggest
goals of SQA is to prevent defects before the product ships.
Developers and engineers have to use universally approved
steps to ensure that the program is built up to expectations
and to prevent errors in the system. A program's ability to
handle stress is distinct from the errors it contains, but
crashes usually come from defects, so preventing defects will
most likely yield a continuously working application.
○​ Customer Satisfaction - Everything else counts for nothing if
the customers don't like what they see. Part of SQA is to
ensure that the software was developed according to the
customers' needs and wants, and ideally exceeds their
expectations. Even when bugs and errors are minimised,
customer satisfaction is more important and should be
emphasised
○​ Well Structured - SQA takes care of the stages of application
construction. Anyone could easily build an application and
launch it in their own environment without glitches, but SQA
ensures that each application is built in an understandable
manner, so that it can easily be transferred from one
developer to another
4.​ List Quality Factors
They can be broadly divided into two categories. The classification
is done on the basis of measurability. They are:
○​ Factors that cannot be measured directly -
■​ Maintainability- effort required to locate and fix an error
in a program.
■​ Flexibility- effort needed to modify an operational
program.
■​ Testability- effort required to test the programs for their
functionality.
■​ Portability- effort required to run the program from one
platform to another or to different hardware.
■​ Interoperability- effort required to couple one system to
another.
○​ Factors that can be measured:
■​ Conciseness- program’s compactness in terms of lines
of code
■​ Error tolerance – damage done when a program
encounters an error.
■​ Execution efficiency- run-time performance of a
program
■​ Operability- ease of a program's operation
■​ Training- degree to which the software is user-friendly
to new users
5.​ Software Quality Assurance Metrics
There are many forms of metrics in SQA but they can easily be
divided into three categories: product evaluation, product quality, and
process auditing
○​ Product Evaluation Metrics - This type of metric is essentially
the number of hours an SQA member spends evaluating the
application. A well-built application will require less
evaluation time, while one that is riddled with errors could
take much more. The numbers extracted from this metric give
the SQA team a good estimate of the timeframe for the
product evaluation
○​ Product Quality Metrics - These metrics tabulate all the
possible errors in the application, showing how many errors
there are and where they come from. The main purpose of
this metric is to reveal the trend in errors: once the trend is
identified and the common source of errors is located,
developers can address the root problem instead of
answering smaller divisions of it. There are also metrics that
show the actual time taken to correct the errors in the
application.
○​ Product Audit Metrics - These metrics will show how the
application works. These metrics are not looking for errors
but performance. One classic example of this type of metric
is the actual response time compared to the stress placed on
the application. Businesses will always look for this metric
since they want to make sure the application will work well
even when there are thousands of users of the application at
the same time
6.​ Software Quality Metrics
Software Quality Metrics focus on the process, project and
product. Although there are many measures of software quality, these
provide useful insights:
○​ Correctness - A program must operate correctly. Correctness
is the degree to which the software performs the required
functions accurately. One of the most common measures is
Defects per KLOC, where KLOC means thousands (kilo) of
lines of code. KLOC measures the size of a computer
program by counting the number of lines of source code the
program has.
○​ Maintainability : Maintainability is the ease with which a
program can be corrected if an error occurs. Since there is no
direct way of measuring this, an indirect measure is used.
MTTC (Mean Time To Change) is one such measure: it
measures, once an error is found, how much time it takes to
analyse the change, design the modification, implement it
and test it.
○​ Integrity : This measures the system's ability to withstand
attacks on its security. In order to measure integrity, two
additional parameters, threat and security, need to be
defined. Threat -> probability that an attack of a certain type
will occur over a given period of time. Security -> probability
that an attack of a certain type will be repelled.
Integrity = Summation [1 - threat X (1 - security)]
○​ Usability : How usable is your software application? This
important characteristic of your application is measured in
terms of the following characteristics:
■​ Physical / Intellectual skill required to learn the system
■​ time required to become moderately efficient in the
system.
■​ the net increase in productivity by use of the new
system.
■​ subjective assessment (usually in the form of a
questionnaire on the new system).
By analysing these metrics, the organisation can take corrective
action to fix those areas of the process, project or product which
cause the software defects.
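Two of the measures above, Defects per KLOC (correctness) and MTTC (maintainability), can be sketched directly; the sample figures in the usage lines are made up:

```python
def defects_per_kloc(defect_count, lines_of_code):
    """Correctness measure: defects per thousand lines of code."""
    return defect_count / (lines_of_code / 1000)

def mean_time_to_change(change_times_hours):
    """Maintainability measure (MTTC): average time to analyse,
    design, implement and test a change for a reported error."""
    return sum(change_times_hours) / len(change_times_hours)

# Hypothetical data: 18 defects in a 12,000-line program, and four
# changes that took 4, 6, 5 and 9 hours end to end.
density = defects_per_kloc(18, 12_000)    # 1.5 defects per KLOC
mttc = mean_time_to_change([4, 6, 5, 9])  # 6.0 hours
```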
7.​ Explain Defect Removal Efficiency
○​ Defect Removal Efficiency (DRE) is a measure of the
efficacy of your SQA activities.
DRE = E / ( E + D )
Where E = No. of Errors found before delivery of the
software and D = No. of Errors found after delivery of the
software
○​ The ideal value of DRE is 1, which means no defects were
found after delivery. A low DRE score means you need to
re-examine your existing process
○​ DRE is an indicator of the filtering ability of quality control
and quality assurance activities. It encourages the team to
find as many defects as possible before they are passed to
the next activity or stage
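The DRE formula above translates directly into code; the error counts in the usage example are hypothetical:

```python
def defect_removal_efficiency(errors_before_delivery, defects_after_delivery):
    """DRE = E / (E + D), where E = errors found before delivery
    and D = defects found after delivery."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Hypothetical project: 95 errors caught in-house, 5 reported by users.
dre = defect_removal_efficiency(95, 5)  # 0.95 -> close to the ideal of 1
```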
8.​ Explain The SEI Process Capability Maturity Model
○​ The Capability Maturity Model, now called the CMMI
('Capability Maturity Model Integration'), was developed by
the Software Engineering Institute (SEI)
○​ It's a model of 5 levels of process 'maturity' that determine
effectiveness in delivering quality software.
○​ It is geared to large organisations such as the U.S. Defense
Department contractors.
○​ However, many of the QA processes involved are
appropriate to any organisation, and if reasonably applied
can be helpful. Organisations can receive CMMI ratings by
undergoing assessments by qualified auditors.
○​ The Five Levels are
■​ Level 1 - characterised by chaos, periodic panics, and
heroic efforts required by individuals to successfully
complete projects. Few if any processes in place;
successes may not be repeatable
■​ Level 2 - software project tracking, requirements
management, realistic planning, and configuration
management processes are in place; successful
practices can be repeated.
■​ Level 3 - standard software development and
maintenance processes are integrated throughout an
organisation; a Software Engineering Process Group is
in place to oversee software processes, and training
programs are used to ensure understanding and
compliance.
■​ Level 4 - metrics are used to track productivity,
processes, and products. Project performance is
predictable, and quality is consistently high.
■​ Level 5 - the focus is on continuous process
improvement. The impact of new processes and
technologies can be predicted and effectively
implemented when required.
9.​ Process Areas in Capability Maturity Model
○​ Process improvement based on the Capability Maturity
Model Integration (CMMI) can result in better project
performance and higher-quality products.
○​ A Process Area is a cluster of related practices in an area
that, when implemented collectively, satisfy a set of goals
considered important for making significant improvement in
that area.
○​ In CMMI, Process Areas (PAs) can be grouped into the
following four categories to understand their interactions and
links with one another regardless of their defined level -
○​ Process Management : It contains the cross-project activities
related to defining, planning, resourcing, deploying,
implementing, monitoring, controlling, appraising, measuring,
and improving processes. Process areas are :
■​ Organisational Process Focus.
■​ Organisational Process Definition.
■​ Organisational Training.
■​ Organisational Process Performance.
■​ Organisational Innovation and Deployment
○​ Project Management : The process areas cover the project
management activities related to planning, monitoring, and
controlling the project. Process areas are:
■​ Project Planning.
■​ Project Monitoring and Control.
■​ Supplier Agreement Management.
■​ Integrated Project Management for IPPD (or Integrated
Project Management).
■​ Risk Management.
■​ Integrated Teaming.
■​ Integrated Supplier Management.
■​ Quantitative Project Management.
○​ Engineering : Engineering process areas cover the
development and maintenance activities that are shared
across engineering disciplines. Process areas are :
■​ Requirements Development.
■​ Requirements Management.
■​ Technical Solution.
■​ Product Integration.
■​ Verification.
■​ Validation.
○​ Support : Support process areas cover the activities that
support product development and maintenance. Process
areas are :
■​ Process and Product Quality Assurance.
■​ Configuration Management.
■​ Measurement and Analysis.
■​ Organisational Environment for Integration.
■​ Decision Analysis and Resolution.
■​ Causal Analysis and Resolution.
10.​ Process and Product Quality Assurance(PPQA)
○​ The purpose of Process and Product Quality Assurance
(PPQA) is to provide staff and management with objective
insight into processes and associated work products
○​ The Process and Product Quality Assurance process area
involves the following activities:
■​ Objectively evaluating performed processes, work
products, and services against applicable process
descriptions, standards, and procedures.
■​ Identifying and documenting noncompliance issues.
■​ Providing feedback to project staff and managers on
the results of quality assurance activities.
■​ Ensuring that noncompliance issues are addressed
○​ The Process and Product Quality Assurance process area
supports the delivery of high-quality products and services by
providing project staff and managers at all levels with
appropriate visibility into, and feedback on, processes and
associated work products throughout the life of the project
○​ The Specific Goals and Practices of PPQA are:
○​ SG 1 - Objectively Evaluate Processes and Work Products
■​ SP 1.1 - Objectively Evaluate Processes.
■​ SP 1.2 - Objectively Evaluate Work Products and
Services
○​ SG 2 - Provide Objective Insight
■​ SP 2.1 Communicate and Ensure Resolution of
Noncompliance Issues.
■​ SP 2.2 Establish Records.
11.​ Six Sigma Project
○​ Six Sigma is a quality management methodology that gives
a company tools for business process improvement.
○​ This approach allows a company to manage quality
assurance and business processes more effectively, reduce
costs and increase profits.
○​ The fundamental principle of the Six Sigma approach is
customer satisfaction, achieved through defect-free business
processes and products (3.4 or fewer defective parts per
million).
○​ The Six Sigma approach determines factors that are
important for product and service quality.
○​ There are five stages in Six Sigma Project -
■​ Defining - The first stage of a Six Sigma project is to
define the problem and the deadlines for solving it. The
team of specialists considers a business process (e.g. a
production process) in detail and identifies defects that
should be eliminated. Then the team generates a list of
tasks to improve the business process and identifies the
project boundaries, the customers, and their product
and service requirements and expectations
■​ Measuring - In the second stage the business process
is measured and its current performance is determined.
The team collects all the data and compares it to
customer requirements and expectations. Then the
team prepares measures for future large-scale
analysis.
■​ Analysing - As soon as the data is put together and the
whole process is documented, the team starts
analysing the business process. The data collected in
stage two ("Measuring") is used to determine the root
causes of defects/problems and to identify gaps
between current performance and the goal performance
■​ Improving - At this stage the team analyses the
business process and works out recommendations,
solutions and improvements to eliminate the
defects/problems or to achieve the desired
performance level.
■​ Controlling - In the final stage of a Six Sigma project the
team creates means of controlling the business
process, which allows the company to hold and extend
the scale of the transformations.
○​ This approach contributes to reducing business process
deviation, improving opportunities and increasing production
stability.
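As a back-of-the-envelope illustration (not from the original text), the "defective parts per million" yardstick is usually computed as defects per million opportunities (DPMO); the production figures below are made up:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities, the measure behind the
    Six Sigma target of 3.4 or fewer defective parts per million."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical line: 17 defects over 10,000 units, each with 5
# distinct defect opportunities.
rate = dpmo(17, 10_000, 5)  # 340.0 DPMO
```

A process at the Six Sigma target would show a `rate` of 3.4 or below; 340 DPMO, while far better than 1% defective, is still two orders of magnitude away from it.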
12.​ Need of SQA
The following are the reasons why SQA should be used by
any company before releasing their application to their intended
users:
○​ Traceability of Errors - SQA does not just look for answers to
problems; it also looks for the reason why the error occurred.
The SQA team should be able to tell which practice started
the problem.
○​ Cost Efficient - Finding and fixing errors early through SQA
costs far less than fixing the same problems after the
application has been released
○​ Flexible Solutions - SQA could easily provide multiple
solutions to the problem since they look for the root of the
problem instead of just looking for errors.
○​ Better Customer Service - One of the ultimate goals of any
business is to provide the best customer service possible.
SQA could help these companies realise that goal​
○​ Innovation and Creativity - Although the SQA team is there to
standardise how work is done, it still fosters innovation and
creativity in developing the product. Everyone is given a free
hand in developing the application, and the SQA team is
there to help them standardise their ideas.
13.​ Software Principles
The SQA team also has to follow certain principles. As a
provider of quality assurance for procedures and applications, they
need to have a strong foundation on what to believe in and what to
stand for.
○​ Multiple Objectives – This is partly a challenge as well as a
risk for the SQA team. At the start of SQA planning, the team
should have more than one objective. Although pursuing
several objectives at once can be risky, it is already common
practice; what is emphasised here is that each objective
should receive focused attention
○​ Evolution – Every time something new happens on the way
to an objective, it should be noted. Evolution means setting a
benchmark in each development: since the SQA team marks
every time something new is done, evolution is monitored.
The value of this principle lies in future use
○​ Quality Control – By the name itself, Quality Control is the
pillar for Software Quality Assurance. Everything needs to
have quality control – from the start to the finish.
○​ Motivation - There is no substitute for having the right people
who have the will to do their job at all times. When they have
the right mindset and the willingness to do it, everything will
go through. Quality assurance is a very tedious task and will
take its toll on a person who is not dedicated to this line of
work
○​ Process Improvement - Every project of the SQA team
should be a learning experience. Each project gives the team
the chance to increase its experience of SQA, but there is
more to it than that: process improvement fosters the
development of how projects are actually handled
○​ Persistence - There is no perfect application. The bigger they
get, the more errors there could be. The SQA team should
be very tenacious in looking for concerns in every aspect of
the software development process
○​ Different Effects of SQA - SQA should go beyond software
development. A regular SQA will just report for work, look for
errors and leave. The SQA team should be role models in
business protocols at all times. This way, the SQA does not
only foster perfection in the application but also in their way
of life.
○​ Result-focused – SQA should not only look at the process
but ultimately at its effect on the clients and users. The SQA
process should always look for results whenever a phase is
set.
These are the principles that every SQA plan and team
should foster. These principles encourage dedication towards work
and patience, not necessarily for perfection but for maximum
efficiency
14.​ SQA Activities
○​ SQA is composed of a variety of tasks linked with two
different constituencies: the SQA group, which has
responsibility for quality assurance planning, record keeping,
oversight, analysis and reporting; and the software
engineers, who do the technical work. The role of the SQA
group is to help the software team attain a high-quality
product.
○​ The SQA group prepares a SQA plan that identifies:
■​ Assessments to be carried out,
■​ Reviews and audits to be executed,
■​ Standards those are relevant to the project,
■​ Measures for error reporting and tracking,
■​ Documents to be produced by the SQA group, and
■​ Amount of feedback provided to the software project
team.
15.​ Explain some Notable SQA Tools
The following are some of the renowned SQA tools and
applications. There are still hundreds out there but the following
tools have been around for years and have been used by
thousands or probably millions of testers.
○​ WinRunner - Developed by HP, it is a user-friendly
application that can test an application's reaction to the user.
Besides measuring response time, it can also replay and
verify every transaction and interaction the application had
with the user. The tool acts like a real user, capturing and
recording every response the application gives
○​ LoadRunner - Developed by HP, LoadRunner is one of the
simplest applications for testing the actual performance of an
application. It has the ability to act like thousands of users at
the same time, testing the application under stress
○​ QuickTest Professional - Built by HP, QuickTest emulates the
actions of users and exploits the application depending on
the procedure set by testers. It can be used in GUI and
non-GUI websites and applications. The testing tool could be
customised through different plug-ins
○​ Mercury Test Director - An all-in-one package, this
web-based interface could be used from start to end in
testing an application or a website. Every defect will be
managed according to their effect on the application. Users
will also have the option to use this exclusively for their
application or use it together with a wide array of testers.
○​ Silktest – Although available on a limited set of operating
systems, Silktest is a very smart testing tool. Silktest lists all
the possible functions and tries to identify each function one
by one. It can be implemented in smaller iterations as it
translates the available code into actual objects
○​ Bugzilla – Developed by Mozilla, this open source tool works
as the name suggests: Bugzilla specialises in tracking bugs
found in an application or website. Since the application is
open source it can be used freely, and its availability on
different operating systems makes it a viable alternative for
error tracking. The only downside is its long list of
requirements before it can run.
○​ Application Center Test – Also known as ACT, this testing
tool was developed by Microsoft for testing ASP.NET
applications. It is primarily used for determining the capacity
of the servers that handle the application. Testers can test a
server by sending constant requests; a customised script
written in VBScript or JScript can be used to test the server's
capacity
○​ OpenSTA – Another open source tool, testers can easily
launch the application and use it for testing the application’s
stress capacity. The testing process could be recorded and
testing times could be scheduled. Great for websites that
need daily maintenance.
○​ QARun – Rather than a single application, QARun is actually
a platform on which you can build your own testing
application. QARun can be easily integrated with the
application under test so that it stays in sync with releases
and checks the application or website every time something
new is introduced
16.​ SQA Project Metrics
To gauge the actual application, the metrics are divided into
four categories. Each category has attributes with specific metrics
ensuring that the attribute is achieved.
○​ Quality of Requirements - This set of metrics shows how
well the application is planned. First and foremost among the
requirements for quality is completeness: requirements are
meant to answer all the problems and concerns of the users,
so their main aim is to address every concern
○​ Product Quality - This set of metrics gauges the ability of the
developers to formulate code and functions. The SQA team
will first gauge how simply the application has been written.
When the application is logically coded, it has two desirable
consequences, maintainability and reusability, which are also
gauged by SQA
○​ Implementation Capability - The general coding behaviour of
the developers is also gauged by the SQA team. The metrics
used in this classification are based on the SDLC used by
the developers. The SQA team will rate the developers'
ability to finish each action in each stage of the SDLC on
time
○​ Software Efficiency - Most important is to ensure that the
application does not have any errors at all; this is a major
pillar of any application. Without errors, the software gives
hackers far less chance to infiltrate the system. Usually, the
SQA team will develop test cases in which the errors are
highlighted.
These are the metrics that every software developer will
have to go through. The better the rating, the better the application
would work.
17.​ Explain the SQA Management Plans
In the planning and requirements phase, there will be four
plans that will be created. These plans will be used in the next
stage of software development which is the architectural phase
○​ Software Quality Assurance Plan (SQAP) for architectural
design (AD) - It is basically the list of activities that ensure
the preparation of the architectural plan is a success. If any
tool or application will be used, the SQA team should point
this out in this phase
○​ Software Project Management Plans (SPMP) for Architecture
Design. - The SQA team should ensure that this
management plan will have a specific budget for developing
the software. Aside from being specific, the SQA team
should also ensure that the estimate should be obtained
using scientific methods.
○​ Software Configuration Management Plans (SCMP) for
Architectural Design - SCMP is the plan on how the software
will be eventually configured. In this phase, the SQA team
should ensure that the configuration should have been
established at the start.
○​ Software Verification Management Plan (SVMP) for
Architectural Design - Like most of the management plans,
this one has already been established. But extra information
should be sought after by the SQA team. Since this
document will be used in the architectural design phase
more detailed information is needed. A test plan should be
made to ensure that the Architectural Design phase is a
success. Also, the SQA team should ensure that a general
testing system is established.
18.​ SQA Plan
○​ An SQA Plan is a detailed description of the project and its
approach to testing. Following the standards, an SQA Plan is
divided into four sections:
○​ Software Quality Assurance Plan for Software Requirements
- In the first phase, the SQA team should write in detail the
activities related to software requirements. In this stage, the
team will be creating steps and stages on how they will
analyze the software requirements. They could refer to
additional documents to ensure the plan works out.
○​ Software Quality Assurance Plan for Architectural Design -
In the second stage of the SQA Plan, the SQAP for AD
(Architectural Design), the team should analyse in detail the
development team's preparation for the detailed build-up.
This stage produces a rough representation of the program,
but it still has to go through rigorous scrutiny before it
reaches the next stage.
○​ Software Quality Assurance Plan for Detailed Design and
Production - The third phase which tackles the quality
assurance plan for detailed design and actual product is
probably the longest among phases. The SQA team should
write in detail the tools and approach they will be using to
ensure that the produced application is written according to
plan. The team should also start planning on the transfer
phase as well.
○​ Software Quality Assurance Plan for Transfer - The last
stage is the QA plan for the transfer to operations. The SQA
team should write their plan for how they will monitor the
transfer, including activities such as training and support.
19.​ Define Software Reliability
○​ According to ANSI, Software Reliability is defined as: the
probability of failure-free software operation for a specified
period of time in a specified environment.
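One common way to make this definition concrete, under the extra assumption of a constant failure rate (the exponential model, which the definition itself does not impose), is R(t) = e^(-λt):

```python
import math

def reliability(failure_rate_per_hour, hours):
    """Probability of failure-free operation for `hours`, assuming a
    constant failure rate (exponential model) -- an assumption the
    ANSI definition itself does not make."""
    return math.exp(-failure_rate_per_hour * hours)

# Hypothetical system: 0.001 failures/hour over a 100-hour mission.
r = reliability(0.001, 100)  # e^-0.1, roughly 0.905
```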
20.​ Bathtub Curve for Hardware Reliability
○​ Hardware failure rate over time follows a bathtub curve with
three phases: burn-in, where the failure rate decreases as
early defects are weeded out; useful life, where the failure
rate is low and roughly constant; and wear-out, where the
failure rate increases as components age.
21.​ Differences between Hardware Reliability and Software
Reliability
○​ There are two major differences between hardware and
software curves.
○​ One difference is that in the last phase, software does not
have an increasing failure rate as hardware does. In this
phase, software is approaching obsolescence; there is no
motivation for any upgrades or changes to the software.
Therefore, the failure rate will not change.
○​ The second difference is that in the useful-life phase,
software will experience a drastic increase in failure rate
each time an upgrade is made. The failure rate levels off
gradually, partly because of the defects found and fixed after
the upgrades
22.​ Software Reliability Metrics
Measuring software reliability remains a difficult problem
because we don't have a good understanding of the nature of
software. The current practices of software reliability measurement
can be divided into four categories:
○​ Product Metrics - Software size is thought to be reflective of
complexity, development effort and reliability. Lines Of Code
(LOC), or LOC in thousands (KLOC), is an intuitive initial
approach to measuring software size. The function point
metric is a method of measuring the functionality of a
proposed software development based upon a count of
inputs, outputs, master files, inquiries, and interfaces.
Complexity-oriented metrics determine the complexity of a
program's control structure by simplifying the code into a
graphical representation; a representative metric is
McCabe's Complexity Metric.
○​ Project Management Metrics - Researchers have realised
that good management can result in better products. Costs
increase when developers use inadequate processes. Higher
reliability can be achieved by using better development
processes, risk management processes, configuration
management processes, etc.
○​ Process Metrics - Based on the assumption that the quality
of the product is a direct function of the process, process
metrics can be used to estimate, monitor and improve the
reliability and quality of software. ISO 9000 certification, or
"quality management standards", is the generic reference for
a family of standards developed by the International
Organization for Standardization (ISO)
○​ Fault and Failure Metrics - The goal of collecting fault and
failure metrics is to be able to determine when the software
is approaching failure-free execution. Minimally, both the
number of faults found during testing (i.e., before delivery)
and the failures (or other problems) reported by users after
delivery are collected, summarised and analysed to achieve
this goal. The failure data collected is therefore used to
calculate failure density, Mean Time Between Failures
(MTBF) or other parameters to measure or predict software
reliability.
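The MTBF and failure-density calculations mentioned above can be sketched as follows; the operating hours, failure count and code size are hypothetical sample data:

```python
def mtbf(total_operating_hours, failure_count):
    """Mean Time Between Failures from observed field data."""
    return total_operating_hours / failure_count

def failure_density(failures, kloc):
    """Failures per thousand lines of code."""
    return failures / kloc

# Hypothetical data: 4 failures over 2,000 hours of operation
# in a 40-KLOC system.
hours_between = mtbf(2_000, 4)     # 500.0 hours between failures
density = failure_density(4, 40)   # 0.1 failures per KLOC
```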
23.​ Explain Verification and terms involved in Verification and list
the activities involved
○​ "Are we building the product RIGHT?" i.e. Verification is a
process that ensures the software product is developed the
right way
○​ The Verification part of ‘Verification and Validation Model’
comes before Validation, which incorporates Software
inspections, reviews, audits, walkthroughs, buddy checks
etc. in each phase of verification (every phase of Verification
is a phase of the Testing Life Cycle).
○​ During the Verification, the work product (the ready part of
the Software being developed and various documentations)
is reviewed/examined personally by one or more persons in
order to find and point out the defects in it. This process
helps in prevention of potential bugs, which may cause
failure of the project.
○​ A few terms involved in Verification:
■​ Inspection - Inspection involves a team of about 3-6
people, led by a leader, which formally reviews the
documents and work product during various phases of
the product development life cycle. The work product
and related documents are presented in front of the
inspection team, the members of which carry different
interpretations of the presentation. The bugs that are
detected during the inspection are communicated to
the next level in order to take care of them
■​ Walkthroughs - Walkthrough can be considered the
same as inspection without formal preparation (of any
presentation or documentations). During the
walkthrough meeting, the presenter/author introduces
the material to all the participants in order to make
them familiar with it. Even though walkthroughs can
help in finding potential bugs, they are mainly used for
knowledge sharing or communication purposes
■​ Buddy Checks - This is the simplest type of review
activity used to find out bugs in a work product during
the verification. In buddy check, one person goes
through the documents prepared by another person in
order to find out if that person has made mistake(s) i.e.
to find out bugs which the author couldn’t find
previously
○​ The Activities involved in Verification Process are:
■​ Requirement Specification Verification
■​ Functional Design Verification
■​ Internal/system design verification
■​ Code Verification
○​ These phases can also be subdivided further. Each activity
makes sure that the product is developed the right way and
that every requirement, specification, design, piece of code, etc. is
verified
24.​ What is Validation and terms used in Validation Process
○​ Validation is the process of finding out whether the product
being built is the right one, i.e. whatever software product is being
developed, it should do what the user expects it to do
○​ Validation and Verification processes go hand in hand, but
visibly Validation process starts after Verification process
ends (after coding of the product ends).
○​ All types of testing methods are basically carried out during
the Validation process. Test plan, test suits and test cases
are developed, which are used during the various phases of
the Validation process.
○​ The phases involved in Validation process are: Code
Validation/Testing, Integration Validation/Integration Testing,
Functional Validation / Functional Testing, and System/User
Acceptance Testing/Validation
○​ Terms used in Validation Process:
■​ Code Validation/Testing: Developers as well as testers
do the code validation. Unit Code Validation or Unit
Testing is a type of testing, which the developers
conduct in order to find out any bug in the code
unit/module developed by them. Code testing other
than Unit Testing can be done by testers or developers.
■​ Integration Validation/Testing: Integration testing is
carried out in order to find out if different (two or more)
units/modules coordinate properly. This test helps in
finding out if there is any defect in the interface
between different modules.
■​ Functional Validation/Testing: This type of testing is
carried out in order to find out if the system meets the
functional requirements. In this type of testing, the
system is validated for its functional
behavior. Functional testing does not deal with internal
coding of the project; instead, it checks if the system
behaves as per the expectations.
■​ User Acceptance Testing or System Validation: In this
type of testing, the developed product is handed over
to the user/paid testers in order to test it in a real time
scenario. The product is validated to find out if it works
according to the system specifications and satisfies all
the user requirements. This helps in improvement of the
final product.
25.​ Explain Inspection Process
○​ The inspection process was developed by Michael Fagan in
the mid-1970s and it has later been extended and modified.
○​ The process should have entry criteria that determine if the
inspection process is ready to begin.
○​ This prevents unfinished work products from entering the
inspection process.
○​ The entry criteria might be a checklist including items such
as "The document has been spell-checked"
○​ The Stages in the Inspection Process are:
■​ Planning - The inspection is planned by the moderator.
■​ Overview meeting - The author describes the
background of the work product.
■​ Preparation - Each inspector examines the work
product individually and identifies possible defects.
■​ Inspection meeting - During this meeting the reader
reads through the work product, part by part and the
inspectors point out the defects for every part.
■​ Rework - The author makes changes to the work
product according to the action plans from the
inspection meeting.
■​ Follow-up - The changes by the author are checked to
make sure everything is correct
○​ The process is ended by the moderator when it satisfies
some predefined exit criteria
○​ During an inspection the following roles are used.
■​ Author: The person who created the work product
being inspected.
■​ Moderator: This is the leader of the inspection. The
moderator plans the inspection and coordinates it.
■​ Reader: The person reading through the documents,
one item at a time. The other inspectors then point out
defects.
■​ Recorder/Scribe: The person that documents the
defects that are found during the inspection.
■​ Inspector: The person that examines the work product
to identify possible defects.
26.​ Explain Automated Static Analysis
○​ Manual Audits are time consuming and require extended
expertise to be efficient, whereas a static code analysis tool
could do the job at a fraction of the time.
○​ However, the most important limitation is the incorrect
warnings that are reported by static analysis tools
○​ The Output from Static Analysis Tools can be categorised in
the following four groups:
■​ False Positive: Warnings that do not correspond to a fault in
the software or that state an untrue fact. These are often
caused by either weak verification or incomplete/weak
checkers.
■​ True Positive: Correct reports of faults within the
software. However, the fault does not allow a software
user to create a failure that would result in any of the
four security consequences
■​ Security Positive: Warnings that are correct and can be
exploited into any of the four effects
■​ False Negative: Known vulnerabilities that the static
analysis tool did not report. Either because the analysis
lacked the precision required to detect them or
because there are no rules or checks that look for the
particular vulnerability.
○​ In addition, a development process overhead was deemed
necessary to successfully use static analysis in an industry
setting
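The four output groups can be illustrated by triaging a tool's warnings against a known ground truth. The warning IDs, fault sets and `triage` helper below are invented for the example.

```python
# Hypothetical triage of static-analysis output: compare the warnings a
# tool reported against a known ground truth of real faults, of which
# some are exploitable (security-relevant).

def triage(reported, real_faults, exploitable):
    false_positives = reported - real_faults     # untrue warnings
    security_positives = reported & exploitable  # correct and exploitable
    true_positives = (reported & real_faults) - exploitable
    false_negatives = real_faults - reported     # missed by the tool
    return false_positives, true_positives, security_positives, false_negatives

reported = {"W1", "W2", "W3"}     # what the tool printed
real_faults = {"W2", "W3", "W4"}  # what is actually wrong
exploitable = {"W3"}              # faults with security consequences

fp, tp, sp, fn = triage(reported, real_faults, exploitable)
print(fp, tp, sp, fn)  # {'W1'} {'W2'} {'W3'} {'W4'}
```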
27.​ Cleanroom Software Development
○​ The Cleanroom Software Engineering process is a software
development process intended to produce software with a
certifiable level of reliability.
○​ The Cleanroom process was originally developed by Harlan
Mills and several of his colleagues including Alan Hevner at
IBM.
○​ The focus of the Cleanroom process is on defect prevention,
rather than defect removal.
○​ A basic principle of the Cleanroom process is software
development based on formal methods.
○​ Cleanroom development makes use of the Box Structure
Method to specify and design a software product.
○​ Verification that the design correctly implements the
specification is performed through team review.
○​ Cleanroom development uses an iterative approach, in which
the product is developed in increments that gradually
increase the implemented functionality.
○​ A failure to meet quality standards results in the cessation of
testing for the current increment, and a return to the design
phase
○​ Software testing in the Cleanroom process is carried out as a
statistical experiment.
○​ Based on the formal specification, a representative subset of
software input/output trajectories is selected and tested.
28.​ What is Testing and List its Objectives
○​ Testing is a process of executing a program with the intent of
finding an error.
○​ A good test is one that has a high probability of finding an as
yet undiscovered error.
○​ A successful test is one that uncovers an as yet
undiscovered error.
○​ The objective is to design tests that systematically uncover
different classes of errors and do so with a minimum amount
of time and effort.
○​ It demonstrates that software functions appear to be working
according to specification, and that performance requirements
appear to have been met.
○​ Data collected during testing provides a good indication of
software reliability and some indication of software quality.
○​ Testing cannot show the absence of defects; it can only show
that software defects are present
29.​ Explain Software Testing Life Cycle
Software testing life cycle identifies what test activities to
carry out and when (what is the best time) to accomplish those test
activities. Even though testing differs between organisations, there
is a testing life cycle.
Software Testing Life Cycle consists of six (generic) phases:
○​ Test Planning - This is the phase where the Project Manager
has to decide what things need to be tested, do I have the
appropriate budget etc. Naturally proper planning at this
stage would greatly reduce the risk of low quality
software. Activities at this stage would include preparation of
a high-level test plan (according to the IEEE test plan template).
The Software Test Plan (STP) is designed to prescribe the
scope, approach, resources, and schedule of all testing
activities.
○​ Test Analysis - Once the test plan is made and decided upon,
the next step is to delve a little more into the project and decide
what types of testing should be carried out at different stages
of the SDLC, whether we need or plan to automate, and if so, when
the appropriate time to automate is, and what type of specific
documentation is needed for testing. In this stage we need to
develop a functional validation matrix based on Business
Requirements to ensure that all system requirements are
covered by one or more test cases, identify which test cases
to automate, and begin review of documentation, i.e. Functional
Design, Business Requirements, Product Specifications,
Product Externals etc.
○​ Test Design - Test plans and cases which were developed in
the analysis phase are revised. The functional validation matrix
is also revised and finalised. In this stage risk assessment
criteria are developed. Test data is prepared. Standards for unit
testing and pass / fail criteria are defined here. Schedule for
testing is revised (if necessary) & finalised and the test
environment is prepared.
○​ Construction and Verification - In this phase we have to
complete all the test plans and test cases, complete the scripting
of the automated test cases, and complete the Stress and
Performance testing plans. Integration tests are performed
and errors (if any) are reported.
○​ Testing Cycles - In this phase we have to complete testing
cycles until test cases are executed without errors or a
predefined condition is reached. Run test cases –> Report
Bugs –> revise test cases (if needed) –> add new test cases
(if needed) –> bug fixing –> retesting (test cycle 2, test cycle
3….).
○​ Final Testing and Implementation - In this we have to
execute remaining stress and performance test cases,
documentation for testing is completed / updated, provide
and complete different matrices for testing. Acceptance, load
and recovery testing will also be conducted and the
application needs to be verified under production conditions.
○​ Post Implementation - In this phase, the testing process is
evaluated and lessons learnt from that testing processes are
documented. Line of attack to prevent similar problems in
future projects is identified. Create plans to improve the
processes. The recording of new errors and enhancements
is an ongoing process. Cleaning up of test environment is
done and test machines are restored to base lines in this
stage
Software testing has its own life cycle that intersects with
every stage of the SDLC. The basic requirement in the software
testing life cycle is to control/deal with software testing – Manual,
Automated and Performance.
30.​ Test Cases and the Importance of Test Design
○​ Test cases are the specific inputs that you'll try and the
procedures that you'll follow when you test the software.
○​ Selecting test cases is the single most important task that
software testers do.
○​ Improper selection can result in testing too much, testing too
little, or testing the wrong things.
○​ Intelligently weighing the risks and reducing the infinite
possibilities to a manageable effective set is where the magic
is.
○​ Importance of Test Design
■​ Test cases form the foundation on which to design and
develop test scripts
■​ The depth of testing is proportional to the number of test
cases
■​ A principal measure of completeness of the test is
requirements-based coverage, based on the number of
test cases identified, implemented and/or executed
■​ The scale of the test effort is proportional to the number of test
cases
■​ The kinds of test design and development and the
resources needed are largely governed by the test
cases.
○​ Test Design Essentials
■​ Test case should cover all features
■​ There should be balance between all types of test
cases
■​ The test cases should be documented properly
31.​ Unit Testing
○​ Unit testing is a software development process in which the
smallest testable parts of an application, called units, are
individually and independently scrutinised for proper
operation.
○​ Unit testing is often automated but it can also be done
manually.
○​ Unit testing involves only those characteristics that are vital
to the performance of the unit under test.
○​ This encourages developers to modify the source code
without immediate concerns about how such changes might
affect the functioning of other units or the program as a
whole.
○​ Unit testing can be time-consuming and tedious. It demands
patience and thoroughness on the part of the development
team
○​ Rigorous documentation must be maintained.
○​ Once all of the units in a program have been found to be
working in the most efficient and error-free manner possible,
larger components of the program can be evaluated by
means of integration testing
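A unit test in miniature, using Python's built-in unittest module; the discount function under test is a hypothetical example standing in for the smallest testable part of an application.

```python
import unittest

# Hypothetical unit under test: a small, independently testable function.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The unit test exercises only this unit, in isolation from the rest
# of the application.
class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())  # all passed: True
```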
32.​ Integration Testing
○​ Integration testing is a systematic technique for constructing
the program structure while conducting tests to uncover
errors associated with interfacing.
○​ The objective is to take unit-tested modules and build a
program structure that has been dictated by design.
○​ There are two approaches to integration testing; they are as
follows -
○​ Top-down integration
■​ It is an incremental approach to construction of
program structure. Modules are integrated by moving
downward through the control hierarchy, beginning with
the main control module
■​ The integration process is performed in a series of five
steps:
1.​ The main control module is used as a test driver,
and stubs are substituted for all modules directly
subordinate to the main control module.
2.​ Depending on the integration approach selected
(i.e., depth-or breadth first), subordinate stubs
are replaced one at a time with actual modules.
3.​ Tests are conducted as each module is
integrated
4.​ On completion of each set of tests, another stub
is replaced with the real module
5.​ Regression testing may be conducted to ensure
that new errors have not been introduced
■​ The process continues from step 2 until the entire
program structure is built.
■​ Top-down strategy sounds relatively uncomplicated,
but in practice, logistical problems arise.
■​ The most common of these problems occurs when
processing at low levels in the hierarchy is required to
adequately test upper levels.
■​ Stubs replace low level modules at the beginning of
top-down testing; therefore, no significant data can flow
upward in the program structure.
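The role of stubs in top-down integration can be sketched as follows; the module names and canned return values are hypothetical.

```python
# Top-down integration sketch: the main control module is real, while
# its subordinate modules are replaced by stubs returning canned values.

def fetch_rate_stub(currency):
    return 1.1  # canned answer standing in for the real rate module

def format_report_stub(amount):
    return f"TOTAL: {amount}"  # stands in for the real reporting module

def convert_total(amount, currency, fetch_rate, format_report):
    # Main control module under test; its collaborators are passed in,
    # so stubs can replace them until the real modules are integrated.
    return format_report(round(amount * fetch_rate(currency), 2))

print(convert_total(100.0, "EUR", fetch_rate_stub, format_report_stub))
# TOTAL: 110.0
```

As real modules become available, each stub is swapped out one at a time and the tests are rerun.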


○​ Bottom-up Integration
■​ Modules are integrated from the bottom to the top. In this
approach, processing required for modules subordinate
to a given level is always available and the need for
stubs is eliminated
■​ A bottom-up integration strategy may be implemented
with the following steps:
1.​ Low-level modules are combined into clusters
that perform a specific software sub function.
2.​ A driver is written to coordinate test case input
and output.
3.​ The cluster is tested.
4.​ Drivers are removed and clusters are combined
moving upward in the program structure.
■​ As integration moves upward, the need for separate
test drivers lessens.
■​ In fact, if the top two levels of program structure are
integrated top-down, the number of drivers can be
reduced substantially and integration of clusters is
greatly simplified.
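The driver's role in bottom-up integration can be sketched like this; the low-level modules and test cases are invented for the example.

```python
# Bottom-up integration sketch: the low-level modules are real, and a
# driver coordinates test-case input and output for the cluster.

def parse_line(line):
    # Low-level module, assumed already unit tested.
    name, qty = line.split(",")
    return name.strip(), int(qty)

def total_quantity(lines):
    # Cluster combining low-level modules into a sub-function.
    return sum(parse_line(line)[1] for line in lines)

def driver():
    # Test driver: feeds inputs to the cluster and checks outputs.
    cases = [(["a, 2", "b, 3"], 5), (["x, 10"], 10)]
    for lines, expected in cases:
        assert total_quantity(lines) == expected
    return "cluster OK"

print(driver())  # cluster OK
```

Once the cluster passes, the driver is removed and the cluster is combined with the next level up.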
33.​ System Testing
○​ Once the entire system has been built then it has to be
tested against the “System Specification” to check if it
delivers the features required. It is still developer focussed,
although specialist developers known as systems testers are
normally employed to do it.
○​ In essence System Testing is not about checking the
individual parts of the design, but about checking the system
as a whole. In effect it is one giant component.
○​ System testing can involve a number of specialist types of
tests to see if all the functional and non- functional
requirements have been met.
○​ In addition to functional requirements these may include the
following types of testing for the non-functional requirements:
■​ Performance - Are the performance criteria met?
■​ Volume - Can large volumes of information be
handled?
■​ Stress - Can peak volumes of information be handled?
■​ Documentation - Is the documentation usable for the
system?
■​ Robustness - Does the system remain stable under
adverse circumstances?
○​ There are many others, the needs for which are dictated by
how the system is supposed to perform
34.​ Acceptance Testing
○​ Acceptance Testing checks the system against the
“Requirements”.
○​ It is similar to systems testing in that the whole system is
checked but the important difference is the change in focus.
○​ Systems Testing checks that the system that was specified
has been delivered.
○​ Acceptance Testing checks that the system delivers what
was requested.
○​ The customer and not the developer should always do
acceptance testing.
○​ The customer knows what is required from the system to
achieve value in the business and is the only person
qualified to make that judgement.
35.​ Define Alpha Testing
○​ The alpha test is conducted at the developer’s site by a
customer. The software is used in a natural setting with the
developer “looking over the shoulder” of the user and
recording errors and usage problems. Alpha tests are
conducted in a controlled environment.
36.​ Define Beta Testing
○​ The beta test is conducted at one or more customer sites by
the end user(s) of the software. Unlike alpha testing the
developer is generally not present; therefore the beta test is
"live". Application of the software is in an environment that
cannot be controlled by the developer
37.​ Define Static Testing
○​ Static testing is non-execution-based testing and is carried
out mostly through human effort.
○​ In static testing, we test design, code or any document
through inspection, walkthroughs and reviews.
○​ Many studies show that the single most effective defect
reduction process is the classic structural test: the code
inspection or walkthrough.
○​ Code inspection is like proofreading and developers will be
benefited in identifying the typographical errors, logic errors
and deviations in styles and standards normally followed.
38.​ Define Dynamic Testing
○​ Dynamic testing is an execution based testing technique.
Program must be executed to find the possible errors.
○​ Here, the program, module or the entire system is executed
(run) and the output is verified against the expected result.
○​ Dynamic execution of tests is based on specifications of the
program, code and methodology.
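Dynamic testing in miniature: the code is actually executed and its output checked against the expected result. The function under test is a made-up example.

```python
# Dynamic testing sketch: run the program and compare actual output
# against the expected result for each test case.

def reverse_words(sentence: str) -> str:
    return " ".join(reversed(sentence.split()))

test_cases = [
    ("hello world", "world hello"),
    ("a b c", "c b a"),
]

for given, expected in test_cases:
    actual = reverse_words(given)  # execute the code under test...
    assert actual == expected      # ...and verify against expectation
print("all dynamic tests passed")
```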
39.​ Advantages & Disadvantages of Automatic Testing
○​ Advantages:
■​ If you have to run a set of tests repeatedly, automation
is a huge gain
■​ Helps performing "compatibility testing" - testing the
software on different configurations
■​ It gives you the ability to run automation scenarios to
perform regressions in a shorter time
■​ It gives you the ability to run regressions on a code that
is continuously changing
■​ Can be run simultaneously on different machines thus
decreasing testing time
■​ Long term costs are reduced
○​ Disadvantages:
■​ It's more expensive to automate. Initial investments are
bigger than manual testing
■​ You cannot automate everything, some tests still have
to be done manually
40.​ Advantages & Disadvantages of Manual Testing
○​ Advantages:
■​ If test cases have to be run only a small number of
times, it is more practical to perform manual testing
■​ It allows the tester to perform more ad-hoc (random)
testing
■​ Short term costs are reduced
■​ The more time tester spends testing a module the
greater the odds to find real user bugs
○​ Disadvantages:
■​ Manual tests can be very time consuming
■​ For every release you must rerun the same set of tests
which can be tiresome
41.​ Explain Testers Workbench
○​ A tester's workbench is a virtual environment used to verify
the correctness or soundness of a design or model (e.g., a
software product).
○​ In the context of software or firmware or hardware
engineering, a test bench refers to an environment in which
the product under development is tested with the aid of a
collection of testing tools.
○​ Often, though not always, the suite of testing tools is
designed specifically for the product under test.

○​ A test bench or testing workbench has four components:


■​ INPUT: The entrance criteria or deliverables needed to
perform work
■​ PROCEDURES TO DO: The tasks or processes that
will transform the input into the output
■​ PROCEDURES TO CHECK: The processes that
determine that the output meets the standards.
■​ OUTPUT: The exit criteria or deliverables produced
from the workbench
42.​ Explain the 11 Steps of Testing Process
○​ Assess Development Plan and Status - This first step is a
prerequisite to building the Verification, Validation, and
Testing (VV&T) Plan used to evaluate the implemented
software solution. During this step, testers challenge the
completeness and correctness of the development plan.
○​ Develop the Test Plan - Forming the plan for testing will
follow the same pattern as any software planning process.
The structure of all plans should be the same, but the
content will vary based on the degree of risk the testers
perceive as associated with the software being developed.
○​ Test Software Requirements - Incomplete, inaccurate, or
inconsistent requirements lead to most software
failures. Testers, through verification, must determine that the
requirements are accurate and complete, and that they do not
conflict with one another.
○​ Test Software Design - This step tests both external and
internal design primarily through verification techniques. The
testers are concerned that the design will achieve the
objectives of the requirements, as well as the design being
effective and efficient on the designated hardware.
○​ Program (build) phase Testing - The method chosen to build
the software from the internal design document will
determine the type and extensiveness of the tests
needed. Experience has shown that it is significantly cheaper
to identify defects during the construction phase than
through dynamic testing during the test execution step.
○​ Execute and Record Result - This involves the testing of
code in a dynamic state. The approach, methods, and tools
specified in the test plan will be used to validate that the
executable code in fact meets the stated software
requirements, and the structural specifications of the design.
○​ Acceptance Test - Acceptance testing enables users to
evaluate the applicability and usability of the software in
performing their day-to-day job functions. This tests what the
user believes the software should perform, as opposed to
what the documented requirements state the software should
perform
○​ Report Test Results - Test reporting is a continuous process.
It may be both oral and written. It is important that defects
and concerns be reported to the appropriate parties as early
as possible, so that corrections can be made at the lowest
possible cost
○​ The Software Installation - Once the test team has confirmed
that the software is ready for production use, the ability to
execute that software in a production environment should be
tested. This tests the interface to operating software, related
software, and operating procedures.
○​ Test Software Changes - Whenever requirements change,
the test plan must change, and the impact of that change on
software systems must be tested and evaluated
○​ Evaluate Test Effectiveness - Testing improvement can best
be achieved by evaluating the effectiveness of testing at the
end of each software test assignment. While this assessment
is primarily performed by the testers, it should involve the
developers, users of the software, and quality assurance
professionals if the function exists in the IT organisation.
43.​ Installation Testing
○​ Installation testing (Implementation testing) is a kind of
quality assurance work in the software industry that focuses
on what customers will need to do to install and set up the
new software successfully.
○​ The testing process may involve full, partial, or upgrade
install/uninstall processes.
○​ This testing is typically done by the software testing engineer
in conjunction with the configuration manager.
○​ Implementation testing is usually defined as testing which
places a compiled version of code into the testing or
pre-production environment, from which it may or may not
progress into production.
○​ The simplest installation approach is to run an install
program, sometimes called package software.
○​ This package software typically uses a setup program which
acts as a multi-configuration wrapper and which may allow
the software to be installed on a variety of machine and/or
operating environments
44.​ Usability Testing
○​ Usability testing is the process of observing users’ reactions
to a product and adjusting the design to suit their needs.
○​ In usability testing a basic model or prototype of the product
is put in front of evaluators who are representative of typical
end-users.
○​ They are then set a number of standard tasks which they
must complete using the product.
○​ Any difficulty or obstructions they encounter are then noted
by a host or observers and design changes are made to the
product to correct these.
○​ The process is then repeated with the new design to
evaluate those changes.
○​ There are some fairly important tenets of usability testing that
must be understood:
■​ Users are not testers, engineers or designers
■​ You are testing the product and not the users
■​ Usability testing is a design tool
○​ The proper way to select evaluators is to profile a typical
end-user and then solicit the services of individuals who
closely fit that profile.
○​ A profile should consist of factors such as age, experience,
gender, education, prior training and technical expertise
45.​ Regression Testing
○​ Regression testing is any type of software testing that seeks
to uncover software errors after changes to the program (e.g.
bug fixes or new functionality) have been made, by retesting
the program.
○​ The intent of regression testing is to assure that a change,
such as a bugfix, did not introduce new bugs.
○​ Regression testing can be used to test the system efficiently
by systematically selecting the appropriate minimum suite of
tests needed to adequately cover the affected change.
○​ One of the main reasons for regression testing is that it's
often extremely difficult for a programmer to figure out how a
change in one part of the software will echo in other parts of
the software
○​ Common methods of regression testing include rerunning
previously run tests and checking whether program
behaviour has changed and whether previously fixed faults
have re-emerged.
○​ Regression Testing attempts to verify:
■​ That the application works as specified even after the
changes/additions/modifications were made to it. The
original functionality continues to work as specified
even after changes/additions/modification to the
software application.
■​ The changes/additions/modification to the software
application have not introduced any new bugs.
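Mechanically, regression testing amounts to keeping the old test cases and rerunning them after every change; the slugify function and saved suite here are illustrative.

```python
# Regression-testing sketch: a suite of previously passing cases is
# kept and rerun after each change, so a fix that breaks earlier
# behaviour is caught immediately.

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Saved from earlier releases; rerun after every modification.
REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("Software  Quality", "software-quality"),
]

def run_regression():
    return [(inp, exp, slugify(inp))
            for inp, exp in REGRESSION_SUITE if slugify(inp) != exp]

print(run_regression())  # [] means no regressions
```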
46.​ When is Regression Testing necessary?
○​ Regression Testing plays an important role in any Scenario
where a change has been made to a previously tested
software code.
○​ Regression Testing is hence an important aspect of various
Software Methodologies where software changes and
enhancements occur frequently
○​ Any Software Development Project is invariably faced with
requests for changing Design, code, features or all of them.
Some Development Methodologies embrace change.
○​ For example ‘Extreme Programming’ Methodology
advocates applying small incremental changes to the system
based on the end user feedback.
○​ Each change implies more Regression Testing needs to be
done to ensure that the System meets the Project Goals.
47.​ Why is Regression Testing Important?
○​ Any Software change can cause existing functionality to
break.
○​ Changes to a Software component could impact dependent
Components.
○​ It is commonly observed that a Software fix could cause
other bugs.
○​ All this affects the quality and reliability of the system. Hence
Regression Testing, since it aims to verify all this, is very
important.
48.​ Performance Testing
○​ Performance testing is testing that is performed, to determine
how fast some aspect of a system performs under a
particular workload.
○​ It can also serve to validate and verify other quality attributes
of the system, such as scalability, reliability and resource
usage.
○​ Performance testing can serve different purposes.
■​ It can demonstrate that the system meets performance
criteria.
■​ It can compare two systems to find which performs
better.
■​ Or it can measure what parts of the system or workload
cause the system to perform badly.
○​ Performance testing is a subset of Performance engineering,
an emerging computer science practice which strives to build
performance into the design and architecture of a system,
prior to the onset of actual coding effort.
○​ Many performance tests are undertaken without due
consideration to the setting of realistic performance goals.
○​ Performance testing can be performed across the
web, in-house, and even in different parts of the country,
since it is known that the response times of the internet itself
vary regionally.
○​ Performance Testing is further divided into:
■​ Load Testing
■​ Stress testing
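A performance test reduces to measuring how fast some aspect of the system runs under a fixed workload and comparing the result with a criterion; the workload and the one-second budget below are assumptions made up for the sketch.

```python
import time

# Minimal performance-test sketch: time a fixed workload and compare
# the result against a performance criterion (here, an assumed budget
# of one second for ten consecutive requests).

def workload():
    return sum(i * i for i in range(100_000))

start = time.perf_counter()
for _ in range(10):  # fixed workload: ten consecutive requests
    workload()
elapsed = time.perf_counter() - start

print(f"elapsed: {elapsed:.3f}s")
print("PASS" if elapsed < 1.0 else "FAIL")  # the performance criterion
```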
49.​ Load Testing
○​ Load testing generally refers to the practice of modelling the
expected usage of a software program by simulating multiple
users accessing the program concurrently.
○​ As such, this testing is most relevant for multi-user systems;
often ones built using a client/server model, such as web
servers. However, other types of software systems can also
be load tested.
○​ However, all load test plans attempt to simulate system
performance across a range of anticipated peak workflows
and volumes
○​ The specifics of a load test plan or script will generally vary
across organisations
○​ The criteria for passing or failing a load test (pass/fail criteria)
are generally different across organisations as well. There
are no standards specifying acceptable load testing
performance metrics.
○​ A common misconception is that load testing software
provides record and playback capabilities like regression
testing tools. Load testing tools analyse the entire OSI
protocol stack whereas most regression testing tools focus
on GUI performance.
○​ Load testing is especially important if the application, system
or service will be subject to a service level agreement or
SLA.
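As a minimal illustration of the idea above, the following Python sketch simulates multiple concurrent users and collects per-request latencies. `handle_request` is a hypothetical stand-in for the system under test; a real load test would call an HTTP endpoint or service instead:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> float:
    """Hypothetical stand-in for the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated service time
    return time.perf_counter() - start    # per-request latency in seconds

CONCURRENT_USERS = 20

# Simulate CONCURRENT_USERS users accessing the system at the same time.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(handle_request, range(CONCURRENT_USERS)))

avg = sum(latencies) / len(latencies)
print(f"{len(latencies)} requests, average latency {avg:.3f}s")
```

A real load test plan would ramp `CONCURRENT_USERS` across the anticipated peak volumes and compare the measured latencies against the pass/fail criteria agreed for the SLA.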
50.​ Stress Testing
○​ Stress testing is a form of testing that is used to determine
the stability of a given system or entity.
○​ It involves testing beyond normal operational capacity, often
to a breaking point, in order to observe the results.
○​ In software testing, a system stress test refers to tests that
put a greater emphasis on robustness, availability, and error
handling under a heavy load, rather than on what would be
considered correct behaviour under normal circumstances
○​ Stress testing defines a scenario and uses a specific
algorithm to determine the expected impact on a portfolio's
return should such a scenario occur. There are three types of
scenarios:
■​ Extreme event: hypothesise the portfolio's return given
the recurrence of a historical event. Current positions
and risk exposures are combined with the historical
factor returns.
■​ Risk factor shock: shock any factor in the chosen risk
model by a user-specified amount. The factor
exposures remain unchanged, while the covariance
matrix is used to adjust the factor returns based on
their correlation with the shocked factor.
■​ External factor shock: instead of a risk factor, shock
any index, macroeconomic series (e.g., oil prices), or
custom series (e.g., exchange rates). Using regression
analysis, new factor returns are estimated as a result of
the shock.
○​ System stress testing, also known as stress testing, is loading the concurrent users over and beyond the level that the system can handle, so that it breaks at the weakest link within the entire system.
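The "load until it breaks" idea can be sketched with a hypothetical capacity limit standing in for the real weakest link:

```python
# Stress-test sketch: keep doubling the simulated user load until the
# (hypothetical) system breaks at its weakest link.
def system_under_test(concurrent_users: int) -> bool:
    CAPACITY = 100                     # assumed capacity of the weakest link
    return concurrent_users <= CAPACITY

load = 1
while system_under_test(load):
    load *= 2                          # ramp the load beyond normal capacity
print(f"system failed at a load of {load} concurrent users")
```

In practice the ramp would be driven by a load-generation tool, and the interesting output is how the system fails (error handling, availability), not just when.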

51.​ Security Testing
○​ Security testing is a process to determine that an information
system protects data and maintains functionality as intended.
○​ The six basic security measures that need to be covered by
security testing are:
■​ Confidentiality - A security measure which protects against the disclosure of information to parties other than the intended recipient. Encryption is a common, though by no means the only, way of ensuring confidentiality.
■​ Integrity - A measure intended to allow the receiver to determine that the information it receives is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
■​ Authentication - It is a type of security testing in which
one will enter different combinations of usernames and
passwords and will check whether only the authorised
people are able to access it or not. The process of
establishing the identity of the user. Authentication can
take many forms including but not limited to:
passwords, biometrics, radio frequency identification,
etc.
■​ Availability - Assuring information and communications
services will be ready for use when expected.
Information must be kept available to authorised
persons when they need it.
■​ Authorization - The process of determining that a
requester is allowed to receive a service or perform an
operation. Access control is an example of
authorization.
■​ Non-repudiation - A measure intended to prevent the later denial that an action happened, or that a communication took place. In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.
52.​ Security Testing Taxonomy
○​ Common terms used for the delivery of Security Testing are:
■​ Discovery
■​ Vulnerability Scan
■​ Vulnerability Assessment
■​ Security Assessment
■​ Penetration Test
■​ Security Audit
■​ Security Review
53.​ Define Penetration Test
○​ Penetration test simulates an attack by a malicious party.
○​ Using this approach will result in an understanding of the
ability of an attacker to gain access to confidential
information, affect data integrity or availability of a service
and the respective impact.
○​ Each test is approached using a consistent and complete methodology, in a way that allows the tester to use their problem-solving abilities, the output from a range of tools and their own knowledge of networking and systems to find vulnerabilities that could not be identified by automated tools.
○​ This approach looks at the depth of attack, as compared to the Security Assessment approach, which looks at broader coverage.
54.​ Static Testing Techniques
○​ Static testing techniques do not execute the software that is
being tested. The main manual activity is to examine a work
product and make comments about it.
○​ Static testing techniques are categorised into manual
(reviews) or automated (static analysis).
55.​ Reviews
○​ Defects detected during reviews early in the life cycle are
often much cheaper to remove than those detected while
running tests (e.g. defects found in requirements).
○​ Any software work product can be reviewed, including
requirement specifications, design specifications, code, test
plans, test cases, test scripts, user guides or web pages.
○​ Reviews are a way of testing software work products
(including code) and can be performed well before dynamic
test execution
○​ Benefits of Reviews
■​ Early defect detection and correction,
■​ Development productivity improvements,
■​ Reduced development timescales
■​ Reduced testing cost and time,
■​ Lifetime cost reductions,
■​ Fewer defects at later stage and improved
communication
○​ Typical defects that are easier to find in reviews are:
■​ Deviations from standards
■​ Requirement defects
■​ Design defects
■​ Incorrect interface specifications etc
○​ A typical formal review has the following main phases:
■​ Planning: selecting the personnel, allocating roles;
defining the entry and exit criteria for more formal
review types (e.g. inspection); and selecting which
parts of documents to look at.
■​ Kick-off: distributing documents; explaining the
objectives, process and documents to the participants;
and checking entry criteria (for more formal review
types).
■​ Individual preparation: work done by each of the
participants on their own before the review meeting,
noting potential defects, questions and comments.
■​ Review meeting: discussion or logging, with
documented results or minutes (for more formal review
types). The meeting participants may simply note
defects, make recommendations for handling the
defects, or make decisions about the defects.
■​ Rework: fixing defects found, typically done by the
author.
■​ Follow-up: checking that defects have been addressed,
gathering metrics and checking on exit criteria (for
more formal review types).
○​ Type of Review are
■​ Informal Reviews
■​ Walkthroughs
■​ Formal Technical Reviews
■​ Inspections
56.​ Characteristics of informal review
○​ No formal process
○​ There may be pair programming or a technical lead reviewing designs and code (led by an individual)
○​ Optionally may be documented or undocumented
○​ May vary in usefulness depending on the reviewer
○​ Main purpose: inexpensive way to get some benefit
57.​ Characteristics of Walkthroughs
○​ Meeting led by author
○​ Scenarios, dry runs, peer group
○​ Open-ended sessions
○​ Optionally a pre-meeting preparation of reviewers, review report, list of findings and a scribe (who is not the author)
○​ May vary in practice from quite informal to very formal;
○​ Main purposes: learning, gaining understanding, defect
finding.
58.​ Characteristics of Formal technical Reviews
○​ Documented, defined defect-detection process that includes
peers and technical experts
○​ May be performed as a peer review without management
participation
○​ Ideally led by trained moderator (not the author)
○​ Pre-meeting preparation
○​ Optionally the use of checklists, review report, list of findings
and management participation
○​ May vary in practice from quite informal to very formal
○​ Main purposes: discuss, make decisions, evaluate
alternatives, find defects, solve technical problems and
check conformance to specifications and standards
59.​ Characteristics of Inspection
○​ led by trained moderator (not the author)
○​ Usually peer examination
○​ Defined roles
○​ Includes metrics
○​ Formal process based on rules and checklists with entry and
exit criteria
○​ Pre-meeting preparation
○​ Inspection report, list of findings
○​ Formal follow-up process
○​ Optionally, process improvement and reader
○​ Main purpose: find defects
60.​ Roles in a typical formal Review
A typical formal review will include the roles below:
○​ Manager: decides on the execution of reviews, allocates time
in project schedules and determines if the review objectives
have been met.
○​ Moderator: the person who leads the review of the document
or set of documents, including planning the review, running
the meeting, and follow-up after the meeting. If necessary,
the moderator may mediate between the various points of
view and is often the person upon whom the success of the
review rests.
○​ Author: the writer or person with chief responsibility for the
document(s) to be reviewed.
○​ Reviewers: individuals with a specific technical or business
background (also called checkers or inspectors) who, after
the necessary preparation, identify and describe findings
(e.g. defects) in the product under review. Reviewers should
be chosen to represent different perspectives and roles in
the review process and they take part in any review
meetings.
○​ Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.
61.​ Review Checklist
○​ Compliance with standards — Does the requirements
specification comply with ISD or tailored Branch/project-level
standards and naming conventions?
○​ Completeness of Specifications — Does the requirements
specification document address all known requirements?
Have 'TBD' requirements been kept to a minimum, or
eliminated entirely?
○​ Clarity — Are the requirements clear enough to be turned
over to an independent group for implementation?
○​ Consistency — Are the specifications consistent in notation,
terminology, and level of functionality? Are any required
algorithms mutually compatible?
○​ External Interfaces — Have external interfaces been
adequately defined?
○​ Testability — Are the requirements testable? Will the testers
be able to determine whether each requirement has been
satisfied?
○​ Design-Neutrality — Does the requirements specification
state what actions are to be performed, rather than how
these actions will be performed?
○​ Readability — Does the requirements specification use the
language of the intended testers and users of the system,
not software jargon?
○​ Level of Detail — Are the requirements at a fairly consistent
level of detail? Should any particular requirement be
specified in more detail? In less detail?
○​ Requirements Singularity — Does each requirement address
a single concept, topic, element, or value?
○​ Definition of Inputs and Outputs — Have the internal
interfaces, i.e., the required inputs to and outputs from the
software system, been fully defined? Have the required data
transformations been adequately specified?
○​ Scope — Does the requirements specification adequately
define boundaries for the scope of the target software
system? Are any essential requirements missing?
○​ Design Constraints — Are all stated design and performance
constraints realistic and justifiable?
○​ Traceability — Has a bidirectional traceability matrix been
provided?
62.​ Static Analysis
○​ Static analysis, also called static code analysis, is a method
of computer program debugging that is done by examining
the code without executing the program.
○​ The process provides an understanding of the code
structure, and can help to ensure that the code adheres to
industry standards.
○​ Automated tools can assist programmers and developers in
carrying out static analysis.
○​ The process of scrutinising code by visual inspection alone
(by looking at a printout, for example), without the assistance
of automated tools, is sometimes called program
understanding or program comprehension.
○​ The principal advantage of static analysis is the fact that it
can reveal errors that do not manifest themselves until a
disaster occurs weeks, months or years after release.
○​ Nevertheless, static analysis is only a first step in a comprehensive software quality-control regime. After static analysis has been done, dynamic analysis is often performed in an effort to uncover subtle defects or vulnerabilities.
○​ The value of static analysis is:
■​ Early detection of defects prior to test execution
■​ Early warning about suspicious aspects of the code or
design, by the calculation of metrics, such as a high
complexity measure.
■​ Identification of defects not easily found by dynamic
testing
■​ Detecting dependencies and inconsistencies in
software models
■​ Improved maintainability of code and design
■​ Prevention of defects
63.​ Static Analysis by Tools
○​ The objective is to find defects in software source code and
software models.
○​ Static analysis is performed without actually executing the software being examined by the tool; static analysis tools can locate defects that are hard to find in dynamic testing.
○​ As with reviews, static analysis finds defects rather than failures.
○​ Static analysis tools analyse program code (e.g. control flow
and data flow), as well as generated output such as HTML
and XML
○​ Typical defects discovered by static analysis tools include:
■​ Referencing a variable with an undefined value
■​ Inconsistent interface between modules and
components
■​ Variables that are never used
■​ Unreachable (dead) code
■​ Programming standards violations
■​ Security vulnerabilities
■​ Syntax violations of code and software models.
64.​ Data flow Analysis
○​ Data-flow analysis is a technique for gathering information
about the possible set of values calculated at various points
in a computer program.
○​ A program's control flow graph (CFG) is used to determine
those parts of a program to which a particular value assigned
to a variable might propagate.
○​ The information gathered is often used by compilers when
optimising a program.
○​ A canonical example of a data-flow analysis is reaching
definitions.
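To make the idea concrete, here is a small hypothetical sketch of the classic reaching-definitions analysis: a fixed-point iteration over a three-block CFG with a loop. All block and definition names are invented for illustration:

```python
# CFG: B1 -> B2 -> B3, plus a back edge B3 -> B2 (a simple loop).
pred = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}

# Definitions: d1 defines x in B1, d2 defines y in B1, d3 defines x in B3.
defs = {"d1": ("B1", "x"), "d2": ("B1", "y"), "d3": ("B3", "x")}

# gen[b]: definitions created in block b.
gen = {b: {d for d, (blk, _) in defs.items() if blk == b} for b in pred}

def killed(block, incoming):
    """Definitions in `incoming` whose variable is redefined in `block`."""
    redefined = {v for _, (blk, v) in defs.items() if blk == block}
    return {d for d in incoming if defs[d][1] in redefined}

IN = {b: set() for b in pred}
OUT = {b: set() for b in pred}

changed = True
while changed:                      # iterate to a fixed point
    changed = False
    for b in pred:
        IN[b] = set().union(*(OUT[p] for p in pred[b]))
        new_out = gen[b] | (IN[b] - killed(b, IN[b]))
        if new_out != OUT[b]:
            OUT[b], changed = new_out, True

print(sorted(IN["B2"]))             # both definitions of x (d1, d3) reach B2
```

Because of the loop, the definition d3 from B3 flows back around and reaches the entry of B2 alongside the definitions from B1; this is exactly the information a compiler uses for optimisation, and a static analysis tool uses to flag suspect variable usage.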
65.​ Control Flow Analysis
○​ Control-flow analysis is a static analysis technique that determines the order in which the statements of a program can possibly be executed.
○​ The possible execution orders are usually represented as a control flow graph (CFG), whose nodes are basic blocks and whose edges are possible transfers of control between them.
○​ The CFG underpins other static techniques, such as data-flow analysis and cyclomatic complexity measurement.
66.​ Cyclomatic Complexity Analysis
○​ Cyclomatic Code Complexity was first introduced by Thomas McCabe in 1976, in a paper arguing that the complexity of code is defined by its control flow.
○​ This measure provides a single ordinal number that can be
compared to the complexity of other programs. It is one of
the most widely accepted static software metrics and is
intended to be independent of language and language
format.
○​ Code Complexity is a measure of the number of
linearly-independent paths through a program module and is
calculated by counting the number of decision points found in
the code (if, else, do, while, throw, catch, return, break etc.).
○​ Cyclomatic Complexity for a software module, calculated from graph theory, is given by the following equation:
CC = E - N + 2p
Where
CC = Cyclomatic Complexity
E = the number of edges of the control flow graph
N = the number of nodes of the graph
p = the number of connected components
○​ There is also a simpler equation which is easier to
understand and implement by following the guidelines
shown:
■​ Start with 1 for a straight path through the routine.
■​ Add 1 for each of the following keywords or their equivalent: if, while, repeat, for, and, or.
■​ Add 1 for each case in a switch statement.
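Both calculation routes described above can be sketched in a few lines of Python; the keyword list and the sample snippet are illustrative only, not a full parser:

```python
import re

def cc_from_graph(edges: int, nodes: int, components: int = 1) -> int:
    """Graph-theoretic form: CC = E - N + 2p."""
    return edges - nodes + 2 * components

# Simplified keyword-count form: start at 1, add 1 per decision keyword.
DECISION_KEYWORDS = r"\b(if|while|repeat|for|and|or|case)\b"

def cc_from_keywords(source: str) -> int:
    return 1 + len(re.findall(DECISION_KEYWORDS, source))

snippet = """
if x > 0 and y > 0:
    while x > y:
        x = x - 1
"""
print(cc_from_keywords(snippet))    # 1 + if + and + while = 4
print(cc_from_graph(9, 7))          # e.g. a CFG with 9 edges, 7 nodes: 4
```

A real tool would count decision points from the parsed syntax tree or the control flow graph rather than with a regular expression, which can be fooled by keywords inside strings or comments.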
67.​ Dynamic Testing
○​ Dynamic testing (or dynamic analysis) is a term used in
software engineering to describe the testing of the dynamic
behaviour of code.
○​ That is, dynamic analysis refers to the examination of the
physical response from the system to variables that are not
constant and change with time.
○​ Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies. These are validation activities.
○​ Dynamic analysis involves the testing and evaluation of a
program based on execution.
○​ Dynamic testing means testing based on specific test cases
by execution of the test object or running programs.
○​ Dynamic testing is used to test software through executing it.
68.​ BlackBox Testing(Functional Testing)
○​ Functional testing is a type of black box testing that bases its
test cases on the specifications of the software component
under test.
○​ Functions are tested by feeding them input and examining
the output, and internal program structure is rarely
considered.
○​ Functional testing differs from system testing in that
functional testing "verifies a program by checking it against
design documents or specifications", while system testing
"validates a program by checking it against the published
user or system requirements".
○​ Functional testing typically involves five steps -
■​ The identification of functions that the software is
expected to perform
■​ The creation of input data based on the function's
specifications
■​ The determination of output based on the function's
specifications
■​ The execution of the test case
■​ The comparison of actual and expected outputs
69.​ Equivalence Partitioning
○​ In this method the input domain data is divided into different
equivalence data classes.
○​ This method is typically used to reduce the total number of
test cases to a finite set of testable test cases, still covering
maximum requirements.
○​ In short it is the process of taking all possible test cases and
placing them into classes.
○​ One test value is picked from each class while testing.
○​ Equivalence partitioning uses fewest test cases to cover
maximum requirements.
○​ Eg.,Test cases for input box accepting numbers between 1
and 1000 using Equivalence Partitioning:
■​ One input data class with all valid inputs. Pick a single
value from range 1 to 1000 as a valid test case. If you
select other values between 1 and 1000 then result is
going to be the same. So one test case for valid input
data should be sufficient.
■​ Input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.
■​ Input data with any value greater than 1000 to
represent the third invalid input class.
○​ Test case values are selected in such a way that the largest
number of attributes of the equivalence class can be
exercised.
○​ Using the equivalence partitioning method above, test cases can be divided into three sets of input data, called classes. Each test case is a representative of its respective class.
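The three classes in the 1-1000 example can be exercised with one representative value each; `accepts` below is a hypothetical validator standing in for the input box:

```python
def accepts(value: int) -> bool:
    """Hypothetical validator for an input box accepting 1..1000."""
    return 1 <= value <= 1000

# One representative per equivalence class: below range, valid, above range.
representatives = {0: False, 500: True, 1500: False}   # value -> expected
for value, expected in representatives.items():
    assert accepts(value) == expected, value
print("one test per class covers all three partitions")
```

Picking 500 rather than, say, 37 makes no difference to the outcome, which is exactly the assumption equivalence partitioning rests on.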
70.​ Boundary Value Analysis
○​ It’s widely recognized that input values at the extreme ends
of the input domain cause more errors in the system.
○​ More application errors occur at the boundaries of the input
domain.
○​ ‘Boundary value analysis’ testing technique is used to
identify errors at boundaries rather than finding those that
exist in the centre of the input domain.
○​ Boundary value analysis is a next part of Equivalence
partitioning for designing test cases where test cases are
selected at the edges of the equivalence classes.
○​ Boundary value analysis is often considered a part of stress and negative testing.
○​ Test cases for input box accepting numbers between 1 and
1000 using Boundary value analysis:
■​ Test cases with test data exactly as the input
boundaries of input domain i.e. values 1 and 1000 in
our case.
■​ Test data with values just below the extreme edges of
input domains i.e. values 0 and 999.
■​ Test data with values just above the extreme edges of
the input domain i.e. values 2 and 1001.
○​ There is no hard-and-fast rule to test only one value from
each equivalence class you created for input domains.
○​ You can select multiple valid and invalid values from each
equivalence class according to your needs and previous
judgments.
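The six boundary values listed above can be checked directly; `accepts` is again a hypothetical validator for the 1-1000 input box:

```python
def accepts(value: int) -> bool:
    """Hypothetical validator for an input box accepting 1..1000."""
    return 1 <= value <= 1000

# Values just below, on, and just above each boundary of the input domain.
boundary_cases = {0: False, 1: True, 2: True,
                  999: True, 1000: True, 1001: False}  # value -> expected
for value, expected in boundary_cases.items():
    assert accepts(value) == expected, value
print("all boundary cases behave as expected")
```

A classic off-by-one bug, such as writing `1 < value` instead of `1 <= value`, would be caught by the case for 1 here while slipping straight past the mid-range representative used in equivalence partitioning.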
71.​ Cause-Effect Graphing
○​ A cause-effect graph is a directed graph that maps a set of
causes to a set of effects.
○​ The causes may be thought of as the input to the program,
and the effects may be thought of as the output.
○​ Usually the graph shows the nodes representing the causes
on the left side and the nodes representing the effects on the
right side.
○​ There may be intermediate nodes in between that combine
inputs using logical operators such as AND & OR.
○​ Constraints may be added to the causes and effects. These
are represented as edges labelled with the constraint symbol
using a dashed line.
○​ For causes, valid constraint symbols are E (exclusive), O
(one and only one), and I (at least one).
○​ The Exclusive constraint states that cause 1 and cause 2 cannot both be true simultaneously.
○​ The Inclusive (at least one) constraint states that at least one
of the causes 1, 2 or 3 must be true.
○​ The OaOO (One and Only One) constraint states that only
one of the causes 1, 2 or 3 can be true.
○​ For effects, valid constraint symbols are R (Requires) and M
(Mask).
○​ The Requires constraint states that if cause 1 is true, then
cause 2 must be true, and it is impossible for 1 to be true and
2 to be false.
○​ The mask constraint states that if effect 1 is true then effect 2
is false.
○​ The graph's direction is as follows: Causes --> intermediate
nodes --> Effects
○​ The graph can always be rearranged so there is only one intermediate node between any input and any output; the two resulting forms correspond to conjunctive normal form and disjunctive normal form.
○​ A cause-effect graph is useful for generating a reduced
decision table.
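A toy example of reducing a cause-effect graph to a decision table, with two invented causes feeding a single AND node:

```python
from itertools import product

# Causes (invented for illustration): c1 = "amount > 0", c2 = "account active".
# An intermediate AND node feeds the single effect: "payment accepted".
def effect(c1: bool, c2: bool) -> bool:
    return c1 and c2

# Enumerate the cause combinations to produce the decision table.
table = [(c1, c2, effect(c1, c2))
         for c1, c2 in product([True, False], repeat=2)]
for c1, c2, accepted in table:
    print(f"c1={c1!s:5} c2={c2!s:5} -> accepted={accepted}")
```

Constraints such as E (exclusive) or R (requires) would prune rows from this table, which is how the graph yields a reduced decision table rather than the full 2^n enumeration.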
72.​ Syntax Testing
○​ Syntax Testing uses a model of the formally-defined syntax
of the inputs to a component.
○​ The syntax is represented as a number of rules each of
which defines the possible means of production of a symbol
in terms of sequences of, iterations of, or selections between
other symbols
○​ Test cases with valid and invalid syntax are designed from
the formally defined syntax of the inputs to the component.
○​ For Test Cases with Valid Syntax,
■​ They shall be designed to execute options which are
derived from rules which shall include those that follow,
although additional rules may also be applied where
appropriate:
1.​ Whenever a selection is used, an option is
derived for each alternative by replacing the
selection with that alternative;
2.​ Whenever iteration is used, at least two options
are derived, one with the minimum number of
iterated symbols and the other with more than the
minimum number of repetitions.
■​ A test case may exercise any number of options. For
each test case the following shall be identified:
1.​ the input(s) to the component;
2.​ option(s) exercised;
3.​ the expected outcome of the test case.
○​ For Test Cases with Invalid Syntax,
■​ They shall be designed as follows:
1.​ a checklist of generic mutations shall be
documented which can be applied to rules or
parts of rules in order to generate a part of the
input which is invalid;
2.​ this checklist shall be applied to the syntax to
identify specific mutations of the valid input, each
of which employs at least one generic mutation;
3.​ test cases shall be designed to execute specific
mutations.
■​ For each test case the following shall be identified:
1.​ the input(s) to the component;
2.​ the generic mutation(s) used;
3.​ the syntax element(s) to which the mutation or
mutations are applied;
4.​ the expected outcome of the test case.
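A minimal sketch of the valid/invalid split for a trivially defined syntax, an unsigned integer (`digit+`, i.e. one or more digits); the mutation list is illustrative:

```python
import re

# Formally defined syntax of the input: one or more decimal digits.
VALID = re.compile(r"^[0-9]+$")

# Valid options: the minimum number of iterated symbols, and more than
# the minimum number of repetitions.
valid_cases = ["7", "42"]

# Invalid cases from generic mutations: delete all symbols, substitute a
# symbol, prepend a symbol outside the alphabet.
invalid_cases = ["", "4a2", "-7"]

for case in valid_cases:
    assert VALID.match(case), case
for case in invalid_cases:
    assert not VALID.match(case), case
print("syntax test cases behave as expected")
```

For each case the expected outcome (accept or reject) is recorded alongside the input and the mutation used, exactly as the checklist above prescribes.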
73.​ Structural Testing(White Box Testing)
○​ Structural testing is a method of testing software that tests
internal structures or workings of an application, as opposed
to its functionality.
○​ In structural testing an internal perspective of the system, as
well as programming skills, are required and used to design
test cases.
○​ The tester chooses inputs to exercise paths through the code
and determine the appropriate outputs.
○​ Structural testing compares test program behaviour against
the apparent intention of the source code. This contrasts with
functional testing (AKA black-box testing), which compares
test program behaviour against a requirements specification.
○​ Structural testing examines how the program works, taking
into account possible pitfalls in the structure and logic.
Functional testing examines what the program accomplishes,
without regard to how it works internally.
○​ Structural testing is also called path testing since you choose
test cases that cause paths to be taken through the structure
of the program.
74.​ Coverage Testing
○​ Code coverage analysis is a structural testing technique
(AKA glass box testing and white box testing).
○​ The main aim of Coverage testing is to ‘cover’ the program
with test cases that satisfy some fixed coverage criteria.
○​ The basic assumptions behind coverage analysis tell us
about the strengths and limitations of this testing technique.
○​ Some fundamental assumptions are listed below:
■​ Bugs relate to control flow, and you can expose bugs by varying the control flow.
■​ You can look for failures without knowing what failures
might occur and all tests are reliable, in that successful
test runs imply program correctness.
■​ Other assumptions include achievable specifications,
no errors of omission, and no unreachable code.
○​ Clearly, these assumptions do not always hold. Coverage
analysis exposes some plausible bugs but does not come
close to exposing all classes of bugs.
○​ Coverage analysis provides more benefit when applied to an
application that makes a lot of decisions rather than
data-centric applications, such as a database application.
75.​ Metrics of Coverage testing
○​ A large variety of coverage metrics exist.
○​ The U.S. Department of Transportation Federal Aviation
Administration (FAA) has formal requirements for structural
coverage in the certification of safety-critical airborne
systems.
○​ Few other organisations have such requirements, so the FAA
is influential in the definitions of these metrics.
○​ They are:
■​ Statement Coverage
■​ Decision Coverage
■​ Path Coverage
■​ Condition Coverage
76.​ Statement Coverage
○​ This metric reports whether each executable statement is
encountered.
○​ Declarative statements that generate executable code are
considered executable statements.
○​ Control-flow statements, such as if, for, and switch are
covered if the expression controlling the flow is covered as
well as all the contained statements.
○​ Implicit statements, such as an omitted return, are not
subject to statement coverage.
○​ Also known as: line coverage, segment coverage, C1 and basic block coverage. Basic block coverage is the same as statement coverage, except that the unit of code measured is each sequence of non-branching statements.
○​ The chief advantage of this metric is that it can be applied
directly to object code and does not require processing
source code. Performance profilers commonly implement
this metric.
○​ The chief disadvantage of statement coverage is that it is
insensitive to some control structures.
○​ In summary, this metric is affected more by computational
statements than by decisions.
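Statement coverage can be measured in pure Python with `sys.settrace`, which is essentially the mechanism coverage tools build on; `classify` is an invented function under test:

```python
import sys

def classify(n):                      # invented function under test
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()                      # line numbers seen during execution

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                           # exercises only the non-negative path
sys.settrace(None)

# Only 2 of the 3 executable lines ran; `return "negative"` was never hit.
print(f"{len(executed)} lines executed")
```

This also shows the metric's weakness: a test suite consisting only of `classify(5)` reaches 2 of 3 statements yet never evaluates the decision both ways, which is why decision coverage is considered more sensitive.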
77.​ Decision Coverage
○​ This metric reports whether Boolean expressions tested in
control structures (such as the if-statement and
while-statement) are evaluated to be both true and false.
○​ The entire Boolean expression is considered one
true-or-false predicate regardless of whether it contains
logical-and or logical-or operators.
○​ Constant expressions controlling the flow are ignored.
○​ Also known as: branch coverage, all-edges coverage, basis
path coverage, C2, decision-decision-path testing. "Basis
path" testing selects paths that achieve decision coverage.
○​ This metric has the advantage of simplicity without the
problems of statement coverage.
○​ A disadvantage is that this metric ignores branches within
Boolean expressions which occur due to short-circuit
operators.
○​ The FAA suggests that for the purposes of measuring
decision coverage, the operands of short-circuit operators
(including the C conditional operator) be interpreted as
decisions
78.​ Condition Coverage
○​ Condition coverage reports the true or false outcome of each
condition.
○​ A condition is an operand of a logical operator that does not
contain logical operators.
○​ Condition coverage measures the conditions independently
of each other.
○​ This metric is similar to decision coverage but has better
sensitivity to the control flow.
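The difference between decision coverage and condition coverage shows up already for the predicate `a and b`; this sketch checks whether an invented two-case test set achieves each criterion:

```python
def decision_covered(cases):
    """Has the whole predicate `a and b` evaluated both True and False?"""
    return {a and b for a, b in cases} == {True, False}

def condition_covered(cases):
    """Has each individual condition taken both True and False?"""
    return ({a for a, _ in cases} == {True, False}
            and {b for _, b in cases} == {True, False})

cases = [(True, True), (False, True)]     # an invented two-case test set
print(decision_covered(cases))            # True: outcomes are {True, False}
print(condition_covered(cases))           # False: b is never False
```

The set achieves decision coverage but not condition coverage, illustrating why condition coverage has better sensitivity to the control flow.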
79.​ Path Coverage
○​ This metric reports whether each of the possible paths in each function has been followed.
○​ A path is a unique sequence of branches from the function
entry to the exit.
○​ Also known as predicate coverage. Predicate coverage
views paths as possible combinations of logical conditions
○​ Path coverage has the advantage of requiring very thorough
testing.
○​ Path coverage has two severe disadvantages. The first is
that the number of paths is exponential to the number of
branches.The second disadvantage is that many paths are
impossible to exercise due to relationships of data.
80.​ Domain Testing
○​ Domain is a specific area to which the project belongs.
○​ Domain testing is a field of study that defines a set of
common requirements, terminology, and functionality for any
software program constructed to solve a problem in that field.
○​ The important white box testing method is domain testing.
○​ The goal is to check values taken by a variable, a condition
or an index and to prove that they are outside the specified
or valid range.
○​ It also includes checking that the program accepts only valid input. For this kind of testing, the tester should be an expert in that particular domain.
○​ A tester with clear-cut knowledge of a domain can work effectively on projects in that domain.
○​ Domain testing is simply conducting testing on the different domains to which various projects belong; it varies based on the project area/domain.
○​ Different software domains include Banking, Finance, Insurance, E-Learning, Job Portal, Health Care, Shopping Portal, etc.
81.​ Non Functional Testing Techniques
○​ Non-functional testing is the testing of a software application for its non-functional requirements.
○​ The names of many non-functional tests are often used
interchangeably because of the overlap in scope between
various non-functional requirements.
○​ For example, software performance is a broad term that
includes many specific requirements like reliability and
scalability.
○​ Non-functional testing includes: baseline testing, compatibility testing, compliance testing, documentation testing, endurance testing, load testing, localization testing, performance testing, recovery testing, resilience testing, security testing, scalability testing, stress testing, usability testing and volume testing.
82.​ Validation Testing Activities
○​ Verification and validation testing are two important tests,
which are carried out on software, before it has been handed
over to the customer.
○​ The aim of both verification and validation is to ensure that the software product is made according to the requirements of the client and does indeed fulfil the intended purpose. Therefore, validation testing is an important part of software quality assurance procedures and standards.
○​ If the testers are involved in the software product right from
the very beginning, then validation testing in software testing
starts right after a component of the system has been
developed.
○​ The different types of software validation testing are:
■​ Component Testing - Component testing is also known
as unit testing. The aim of the tests carried out in this
testing type is to search for defects in the software
component. At the same time, it also verifies the
functioning of the different software components, like
modules, objects, classes, etc., which can be tested
separately.
■​ Integration Testing - This is an important part of the
software validation model, where the interaction
between the different interfaces of the components is
tested. Along with the interaction between the different
parts of the system, the interaction of the system with
the computer operating system, file system, hardware
and any other software system it might interact with is
also tested.
■​ System Testing - System testing, also known as
functional and system testing is carried out when the
entire software system is ready. The concern of this
testing is to check the behavior of the whole system as
defined by the scope of the project. The main concern
of system testing is to verify the system against the
specified requirements. While carrying out system testing, the tester is not concerned with the internals of the system, but checks whether the system behaves as per expectations.
■​ Acceptance Testing - Here the tester has to think like the client and test the software with respect to user needs, requirements and business
processes and determine whether the software can be
handed over to the client. At this stage, often a client
representative is also a part of the testing team, so that
the client has confidence in the system. There are
different types of acceptance testing:
1.​ Operational Acceptance Testing
2.​ Compliance Acceptance Testing
3.​ Alpha Testing
4.​ Beta Testing
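Component (unit) testing, the first validation activity above, can be sketched with Python's built-in unittest module. The component here, `apply_discount`, is a hypothetical example invented for illustration; the point is that the unit is exercised in isolation from the rest of the system.

```python
import unittest

# Hypothetical component under test: a simple discount calculator.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Component test: verifies the unit independently of other modules."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically rather than via the command line.
suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The later validation stages (integration, system, acceptance) reuse the same pass/fail discipline but widen the scope from one unit to interacting components and, finally, the whole system.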
83.​ Black Box vs White Box
Black Box Testing:
●​ Focuses on the functionality of the system
●​ Techniques used: Equivalence partitioning, Boundary-value analysis, Error guessing, Race conditions, Cause-effect graphing, Syntax testing, State transition testing, Graph matrix
●​ The tester can be non-technical
●​ Helps to identify vagueness and contradictions in functional specifications

White Box Testing:
●​ Focuses on the structure (program) of the system
●​ Techniques used: Basis Path Testing, Flow Graph Notation, Control Structure Testing (Condition Testing, Data Flow Testing), Loop Testing (Simple, Nested, Concatenated and Unstructured Loops)
●​ The tester should be technical
●​ Helps to identify logical and coding issues
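Two of the black-box techniques above can be shown concretely. The sketch below applies equivalence partitioning and boundary-value analysis to a hypothetical `is_eligible` function whose valid input range is assumed to be ages 18 to 60 inclusive; both the function and the range are invented for illustration.

```python
def is_eligible(age):
    """Hypothetical function under test: accepts ages 18 to 60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
assert is_eligible(5) is False    # partition: below the valid range
assert is_eligible(35) is True    # partition: within the valid range
assert is_eligible(70) is False   # partition: above the valid range

# Boundary-value analysis: values at and adjacent to each boundary.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_eligible(age) is expected, f"boundary case failed for age={age}"

print("all black-box cases passed")
```

Note that neither technique looks inside `is_eligible`; the cases are derived purely from the specification, which is what makes them black-box.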
84.​ Factors affecting Testing of Web Applications
As mentioned above, Web Applications can be affected by many variables, such as:
○​ Numerous Application Usage (Entry – Exit) Paths are possible. Due to the design and nature of web applications, it is possible that different users follow different application usage paths.
○​ People with varying backgrounds and technical skills may use the application; not all applications are self-explanatory to all people, and some users may find the application hard to use.
○​ Intranet versus Internet based Applications - Intranet based applications generally cater to a controlled audience, while it may be difficult to make similar assumptions for Internet based applications. Also, intranet users can generally access the app from ‘trusted’ sources, whereas for internet applications the users may need to be authenticated and the security measures may have to be much more stringent.
○​ The end users may use different types of browsers to access
the app
○​ Even on similar browsers application may be rendered
differently based on the Screen
resolution/Hardware/Software Configuration
○​ Network speeds - Slow Network speeds may cause the
various components of a Webpage to be downloaded with a
time lag. This may cause errors to be thrown up.
○​ ADA (Americans with Disabilities Act) - It may be required that the applications be compliant with ADA. Due to certain disabilities, some users may have difficulty in accessing the Web Applications unless the applications are ADA compliant.
○​ Other Regulatory Compliance/Standards - Depending on the
nature of the application and sensitivity of the data captured
the applications may have to be tested for relevant
Compliance Standards.
○​ Firewalls - Applications may behave differently across
firewalls. Applications may have certain web services or may
operate on different ports that may have been blocked.
○​ Security Aspects - If the Application captures certain
personal or sensitive information, it may be crucial to test the
security strength of the application. Sufficient care needs to
be taken that the security of the data is not compromised.
85.​ How do you test Object Oriented Software?
○​ Object oriented software testing requires a different strategy from that of traditional software testing, as the software is divided into classes and modules.
○​ The unique testing issues of the object oriented software
necessitate improvisation of the conventional software
testing techniques and development of new testing
techniques.
○​ A class is the basic unit of testing in object oriented software; it is a complete unit that can be isolated and tested independently.
○​ Late binding creates indefiniteness in the testing process
since the method to be executed is unknown until runtime.
○​ As opposed to the waterfall model used for traditional
software development, object oriented software is developed
using the iterative and incremental approach.
○​ Thus, object oriented testing too becomes iterative and
incremental, requiring use of regression testing techniques.
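The late-binding issue mentioned above can be demonstrated with a small test: the same call site dispatches to different implementations depending on the runtime type, so the test must cover each concrete subclass. The `Shape` hierarchy below is a hypothetical example invented for illustration.

```python
import unittest

# Hypothetical class hierarchy illustrating testing under late binding.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

class TestPolymorphicArea(unittest.TestCase):
    def test_area_dispatches_to_runtime_type(self):
        # One call site, two implementations: which area() runs is only
        # known at runtime, so both subclasses need explicit coverage.
        shapes = [Square(2), Circle(1)]
        areas = [s.area() for s in shapes]
        self.assertEqual(areas[0], 4)
        self.assertAlmostEqual(areas[1], 3.14159)

suite = unittest.TestLoader().loadTestsFromTestCase(TestPolymorphicArea)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each new subclass adds another binding the old call sites can reach, such tests are natural candidates for the regression suites mentioned above.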
86.​ Explain in detail Web Application Testing
Web application testing is also an important aspect of testing, because a wide variety of users access such applications. It is done using the following checklist:
○​ Functionality Testing - Test for – all the links in web pages,
database connection, forms used in the web pages for
submitting or getting information from users, Cookie testing,
Validating HTML/CSS, Database testing
○​ Usability Testing - Test for navigation, means how the user
surfs the web pages, different controls like buttons, boxes or
how the user uses the links on the pages to surf different
pages. Here Content checking should also not be ignored
○​ Interface Testing - The main interfaces are: Web server and
application server interface Application server and Database
server interface. Check if all the interactions between these
servers are executed properly. Errors are handled properly.
○​ Compatibility Testing - Compatibility of your web site is a very important testing aspect. Decide which compatibility tests are to be executed:
■​ Browser compatibility - Some applications are very
dependent on browsers. Different browsers have
different configurations and settings that your web
page should be compatible with.
■​ Operating system compatibility - Some functionality in
your web application may not be compatible with all
operating systems.
■​ Mobile browsing - Test your web pages on mobile
browsers. Compatibility issues may be there on mobile.
■​ Printing options - If you are giving page-printing options, then make sure fonts, page alignment and page graphics are printed properly.
○​ Performance Testing - Web applications should sustain a
heavy load. Web performance testing should include: Web
Load Testing & Web Stress Testing
○​ Security Testing - Following are some test cases for web
security testing:
■​ Test by pasting the internal url directly into the browser
address bar without login. Internal pages should not
open.
■​ Test the CAPTCHA for automating script logins.
■​ Test if SSL is used for security measures.
■​ Web directories or files should not be accessible
directly unless given a download option.
■​ All transactions, error messages, security breach
attempts should get logged in log files somewhere on
the web server.
Web based applications are powerful and have the ability to provide feature-rich content to a wide audience spread across the globe at an economical cost. Hence it is a daunting task to test these applications, and as more and more features are added, testing them becomes even more complex.
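The first checklist item, functionality testing of "all the links in web pages", starts with collecting the links to verify. The sketch below uses only the standard-library `html.parser`; the page fragment is a hypothetical example, and a real test would fetch the HTML over HTTP and then request each collected URL to confirm it resolves.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags for later verification."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page fragment; in practice this HTML would be fetched
# from the web server under test.
page = """
<html><body>
  <a href="/home">Home</a>
  <a href="/contact">Contact</a>
  <a name="section-anchor">Section</a>
</body></html>
"""

collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', '/contact']
```

Each collected link would then be requested and its status code checked, turning the checklist item into an automated, repeatable test.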
87.​ Define Cookies Testing
○​ Cookies are small files stored on a user machine. These are
basically used to maintain the session, mainly login sessions.
○​ Test the application by enabling or disabling the cookies in
your browser options.
○​ Test if the cookies are encrypted before writing to the user
machine.
○​ If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends.
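One of the checks above, verifying how session cookies are issued, can be partly automated. The sketch below uses the standard-library `http.cookies` module to parse a hypothetical Set-Cookie header (the name `sessionid` and value are invented) and confirm the security-relevant Secure and HttpOnly flags are present.

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header captured from a server response.
header = "sessionid=abc123; HttpOnly; Secure; Path=/"

cookie = SimpleCookie()
cookie.load(header)

morsel = cookie["sessionid"]
assert morsel.value == "abc123"
# Secure: the browser sends the cookie only over HTTPS.
assert morsel["secure"]
# HttpOnly: the cookie is not readable from client-side scripts.
assert morsel["httponly"]
print("session cookie carries Secure and HttpOnly flags")
```

A test like this belongs in the security checks too: a session cookie missing either flag is exposed to interception or script-based theft.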
88.​ Computer Aided Software Testing Tools(CAST)
○​ CAST (Computer Aided Software Testing) Tools are
designed to assist the testing process.
○​ They are used in various stages of SDLC.
○​ CAST Tools support Test Management to -
■​ Manage the testing process
■​ Define what to test, how to test, when to test, and what happened
■​ Provide testing statistics / Graphs for management
reporting
■​ Enable impact analysis
■​ Can also provide incident management
○​ Ex: Test Director from Mercury, QA Director from Compuware